For decades, “Pac-Man” has presented video-game fans with a simple challenge: Navigate an onscreen maze, eat glowing dots along your path, avoid evil ghosts. Swallowing a particular kind of dot (a “power pellet”) temporarily transforms Pac-Man from the hunted into the hunter, letting him swallow those ghosts for bonus points.
Then IBM’s artificial-intelligence experts had to go and make a simple thing a lot more complicated. As revealed during the Fast Company Innovation Festival (and discussed in a Fast Company article), those researchers decided to use a machine-learning framework to teach Pac-Man to not eat ghosts.
As input for the machine-learning model, IBM used human-controlled gaming sessions in which players did their best to avoid the ghosts. In theory, this dataset would temper the A.I.’s inclination to play the game as aggressively as possible at all times. “When Pac-Man ate a power pellet—causing the ghosts to flee—it would play in ghost-avoidance mode rather than playing conventionally and trying to scarf them up,” wrote Fast Company’s Harry McCracken. “When the ghosts weren’t fleeing, it would segue into aggressive point-scoring mode.”
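The behavior McCracken describes boils down to a simple state-dependent switch: the agent's objective flips depending on whether a power pellet is active. A minimal sketch of that idea, in hypothetical Python (the function name, parameters, and mode labels are illustrative assumptions, not IBM's actual code):

```python
# Hypothetical sketch of state-dependent mode switching: when ghosts are
# fleeing (power pellet active), the avoidance-trained agent steers clear
# of them instead of chasing them; otherwise it plays for points.
# All names here are illustrative, not taken from IBM's system.

def choose_mode(ghosts_fleeing: bool) -> str:
    """Pick a behavior mode for the current game state."""
    if ghosts_fleeing:
        # Trained on human ghost-avoidance sessions, the agent resists
        # the usual incentive to hunt the fleeing ghosts for bonus points.
        return "ghost-avoidance"
    # No power pellet active: play conventionally and maximize score.
    return "aggressive-scoring"
```

The key point is that the switch is driven by the game state itself, not by an outside operator telling the agent which mode to use.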
The broader implications are pretty obvious: If A.I.-powered software can change its behavior in response to radically evolving situations, without any outside guidance or input, it could help these systems think in more “human-like” ways.
Other organizations have embarked on similar research. For example, DARPA—the agency of the Department of Defense (DoD) responsible for cutting-edge research—has launched a program to teach A.I. platforms a little “common sense,” presumably so next-generation military robots don’t walk merrily off cliffs.
“The absence of common sense prevents an intelligent system from understanding its world, communicating naturally with people, behaving reasonably in unforeseen situations, and learning from new experiences,” Dave Gunning, a program manager in DARPA’s Information Innovation Office (I2O), wrote in the agency’s blog posting on the program. “This absence is perhaps the most significant barrier between the narrowly focused A.I. applications we have today and the more general A.I. applications we would like to create in the future.”
The consequences of A.I. platforms that cannot adapt to changing circumstances (or bad data) are potentially immense. A recent Reuters article, for example, described how Amazon created a machine-learning tool to screen job candidates; the algorithm inferred that, because the majority of applicants were male, males were preferable hires. After the project began to show gender bias, Amazon shut it down.
The results could prove even more catastrophic in military or national-security contexts. Making A.I. “smarter” goes beyond a good game of “Pac-Man”—it could end up a matter of life and death.