“StarCraft II,” for those who don’t know, is a real-time strategy game that tasks its players with building intergalactic bases and armies. Much of the gameplay map is hidden from the player’s view; only by sending units into the unknown can you find your opponents, who are building their own bases and plotting their own strategies. That high degree of uncertainty forces players to make potentially game-ending decisions with little information, and to respond quickly to unexpected situations.
In other words, it’s the perfect arena for testing an A.I. platform’s ability to improvise. And now, after months of testing, DeepMind and Blizzard have opened up their tools to developers and researchers.
This “SC2LE” release offers several goodies for A.I. pros, including Blizzard’s machine-learning API (vital for developing scripted and machine-learning-based bots, replay analysis, and tool-assisted human play), a dataset of anonymized game replays, and the open-source version of DeepMind’s PySC2 toolset. There’s also a series of simple “RL mini-games” for testing the performance of agents on specific in-game tasks.
“Part of StarCraft’s longevity is down to the rich, multi-layered gameplay, which also makes it an ideal environment for A.I. research,” reads DeepMind’s blog post on the matter. “For example, while the objective of the game is to beat the opponent, the player must also carry out and balance a number of sub-goals, such as gathering resources or building structures. In addition, a game can take from a few minutes to one hour to complete, meaning actions taken early in the game may not pay-off for a long time.”
“StarCraft II” is far more complex than the old-school Atari games that DeepMind has used to train machine-learning platforms, making it the next logical step up the A.I. skills-building ladder. An A.I. platform already beat a human master at the (notoriously difficult and complex) game of Go; how long until self-thinking software manages to take human players’ “StarCraft” bases?