According to a new paper by researchers at the nonprofit organization OpenAI, A.I. has a “dual nature.” When it comes to security, A.I. can improve filters, surveillance tools, and risk assessments; but in the wrong hands, it could also strengthen malware and other software used to penetrate and destroy networks and data.
OpenAI’s mission is to create “safe” A.I. To achieve that lofty goal, it publishes papers and open-source software tools that it hopes will guide A.I. research in a positive direction. Those tools include OpenAI Gym, a toolkit for building reinforcement learning (RL) algorithms, which help bots refine their decision-making through trial and error.
“We should take steps as a community to better evaluate research projects for perversion by malicious actors, and engage with policymakers to understand areas of particular sensitivity,” the researchers wrote in the introduction to the security paper. “A.I. is going to alter the global threat landscape, so we should involve a broader cross-section of society in discussions.” Those “broader” groups include not only consumers and businesses, but also national security experts and ethicists.
Among OpenAI’s proposed solutions for heading off A.I.-powered cyberattacks: “pre-publication risk assessments” for certain kinds of research, which could include sharing data only among a small group of “trusted organizations.” A focus on “norms” that are “responsive to dual-use concerns” could also help the research community minimize threats before they are fully weaponized. That’s in addition to borrowing lessons from traditional cybersecurity, such as instituting “red teams” to probe A.I. systems for vulnerabilities.
The big question in all this is a straightforward one: would the tech firms investing in A.I. freely open up their code and research for inspection? OpenAI itself has pledged to keep things as transparent as possible, stating on its website: “We will not keep information private for private benefit.” However, it acknowledged in that same posting: “In the long term, we expect to create formal processes for keeping technologies private when there are safety concerns.”
In theory, companies could prove rather tight-lipped about their A.I., potentially opening the door for hackers to find and exploit unannounced vulnerabilities. That could result in a future threat landscape that closely mirrors the one we face now with “conventional” software. If you’re involved in cybersecurity, be prepared.