Don’t tell Donald Trump, but artificial intelligence (A.I.) may become quite good at generating fake news. Moreover, this generation of fabricated reportage is courtesy of OpenAI, the nonprofit designed explicitly to prevent A.I. from being used in terrible and unethical ways, so, um, great job, everyone.
Specifically, OpenAI’s “large-scale unsupervised language model” (dubbed GPT-2) is capable of generating “coherent paragraphs of text” and “achieves state-of-the-art performance on many language modeling benchmarks,” according to a post on the OpenAI blog. It was trained on a dataset of 8 million web pages, features 1.5 billion parameters, and is apparently very good at predicting the next word in a text string—meaning it could (reportedly) put together a very convincing article on the Martians that invaded Manhattan this morning.
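GPT-2 itself is a massive neural network, but the core idea—look at the words so far and predict the most likely next one—can be sketched with a toy bigram model. This is a hypothetical illustration of next-word prediction in general, not OpenAI’s code, and the corpus and function names are invented for the example:

```python
from collections import Counter, defaultdict

def train_bigram_model(text):
    """Count, for each word, which words tend to follow it."""
    words = text.lower().split()
    followers = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        followers[current][nxt] += 1
    return followers

def predict_next(followers, word):
    """Return the word most often seen after `word`, or None if unseen."""
    counts = followers.get(word.lower())
    if not counts:
        return None
    return counts.most_common(1)[0][0]

# Tiny invented corpus, in the spirit of the article's fake-news example.
corpus = "the martians invaded manhattan and the martians won the battle"
model = train_bigram_model(corpus)
print(predict_next(model, "the"))  # most frequent follower of "the" in the corpus
```

A model like GPT-2 replaces these raw counts with a neural network conditioned on the entire preceding context, which is what lets it produce whole coherent paragraphs rather than one plausible next word.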
If that wasn’t interesting/scary enough, the platform is also capable of “rudimentary reading comprehension, machine translation, question answering, and summarization—all without task-specific training.”
Although OpenAI has declined to release this creation, citing its potential danger, it has issued a smaller model with far fewer parameters (117 million). You can check out a version of the software that might potentially doom the entire planet on GitHub.
OpenAI’s stated mission is to create A.I. that is “safe.” In addition to building new artificial-intelligence tools, it also releases papers and open-source software tools for A.I. research. “We will not keep information private for private benefit,” the organization stated at one point, “but in the long term, we expect to create formal processes for keeping technologies private when there are safety concerns.”
It seems that GPT-2 is a prime example of a technology that will be kept (somewhat) private, at least for the moment. OpenAI has proven reluctant to release certain platforms in the past, such as its evolving bot that can beat humans at “Dota 2,” a multiplayer video game in which virtual heroes battle for control of a fantasy-world landscape.
In the meantime, those interested in OpenAI’s tools can pick through the ones available on the organization’s website. Just make sure you don’t use that code to invent something that generates fake news—or creates Skynet. If you accidentally coded something that murdered most of the human race, you’d feel bad, right? Right?