Skynet meets the Swarm: how the Berkeley Overmind won the 2010 StarCraft AI competition

StarCraft, one of the most popular games ever made, also serves as the perfect environment for artificial intelligence research.

We’re gathered in a conference room on the Berkeley campus, the detritus of a LAN party scattered around us. The table is covered with computers and pizza, and there’s a game of StarCraft projected on the screen. Oriol Vinyals, a PhD student in computer science, is commanding the Terran army in a life-or-death battle against the forces of the Zerg Swarm.

Oriol is very good—one-time World Cyber Games competitor, number 1 in Spain, top 16 in Europe good. But his situation now is precarious: his goliath walkers are holding off the Zerg’s flying mutalisks, but they can’t be everywhere at once. The Zerg player is crafty, retreating in the face of superior firepower but never going far, picking off targets of opportunity and applying constant pressure.

Then Oriol makes a mistake. He moves his goliaths slightly out of position, just for a few seconds. It’s enough. The mutalisks react instantly, streaming through the gap in his defenses and making straight for his vulnerable workers. By the time Oriol brings the goliaths back to drive off the mutalisks, his workers are wiped out and his resource production is crippled. 

Oriol makes a desperate, last-ditch attack on the Zerg base, trying to break through before the mutalisks are reinforced, but it’s too late. One after another, his goliaths get ripped apart by the Zerg defenses. As a new wave of mutalisks emerges from the Zerg hatcheries, he has no choice but to concede—to the computerized AI that just defeated him.

The Berkeley Overmind’s mutalisks swarming for the kill.

There’s a palpable air of celebration in the room; even Oriol is grinning. He was just beaten by the Berkeley Overmind, an AI agent that our team has spent the past few months building. The Overmind is our entry in the 2010 StarCraft AI Competition, and after dozens of test matches, it has finally defeated our human StarCraft expert for the first time.

We have a glorious, ego-affirming, reverse-John Henry kind of moment, but no time to savor it. With three days left before final submission of the code, our team has a lot of polishing and debugging to do. Professor Dan Klein, our faculty advisor, general, head coach, and driving force, smiles briefly and turns back to the whiteboard. He crosses out one of 20 test scenarios we still have to run.

“Okay,” he says. “We can beat goliaths. What’s next?”

This is the story of how our team created the Berkeley Overmind, and the technologies we used.

Building a better future through Zerg rushes

StarCraft is one of the most popular games ever, a huge hit from a company known for hits. It demands great skill of its players, and it is a mainstay of professional gaming leagues. In Korea, the game is so popular that professional StarCraft players are celebrities with six-figure contracts whose matches are broadcast live on national TV.

It also happens to be a deeply challenging arena for artificial intelligence, and a successful StarCraft AI agent must tackle a number of hard problems. Dan, who also teaches the introduction to AI class at Berkeley, says, “I can literally walk down the list of concepts we cover and show you where every one of them shows up in StarCraft and in our agent.”

StarCraft was released in 1998, an eternity ago by video game standards. In the years since, Blizzard Entertainment, the game’s creator, has continually updated it, making it one of the most finely tuned and balanced real-time strategy (RTS) games ever made. It has three playable races: the human-like Terrans, with familiar tanks and starships; the alien Zerg, who field vast swarms of organic creatures; and the Protoss, technologically advanced aliens who rely on powerful but expensive units. Each race has different units and gameplay philosophies, yet no one race or combination of units has an unbeatable advantage. Player skill, ingenuity, and the ability to react intelligently to enemy actions determine victory.

This refinement and complexity make StarCraft an ideal environment for conducting AI research. In an RTS game, events unfold in real time and players’ orders are carried out immediately. Resources have to be gathered so fighting units can be produced and commanded into battle. The map is shrouded in fog of war, so enemy units and buildings are only visible when they’re near friendly units or buildings. A StarCraft player has to acquire and allocate resources to create units, coordinate those units in combat, discover, reason about, and react to enemy actions, and do all of this in real time. These are all hard problems for a computer to solve.

Dan sometimes compares StarCraft with other games that have driven AI research in the past. “Chess is hard because you have to look far into the future, and Go is harder because there are lots of pieces. With poker there’s uncertainty,” he says. “In StarCraft, you have all of these things going on simultaneously, and you have very little time to compute a solution.”

Good human players solve these problems through practice and training, building up an extensive store of skills and expert knowledge. It’s impossible to simply embed this human knowledge in an AI agent, as the agent must actively reason about the game world and possible future actions. Creating a StarCraft AI that can match humans requires pushing the boundaries of what is currently possible with computers, with potential payoffs in applications far beyond the game.

The StarCraft AI competition was created to harness and promote StarCraft as a research environment. AI researchers have used RTS games in the past, but their efforts were hampered by the technology available. Open-source games were buggy and untested, and commercial games like StarCraft were inaccessible. 

This changed in early 2009 with the release of the Brood War API (BWAPI), an open-source toolkit developed by a group of enthusiasts that gives programs direct access to StarCraft’s game state and unit controls. Ben Weber, a student in the Expressive Intelligence Studio at UC Santa Cruz, had been working on RTS game-based research. He realized that StarCraft and BWAPI could make an immediate impact on his work and be a valuable tool for the AI community. He set about organizing a tournament for StarCraft AI agents to compete against each other, hoping to kick-start progress and raise interest.
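
To give a sense of what that access looks like, here is a minimal sketch of a bot module in BWAPI’s style, loosely modeled on the library’s example module. It assumes a recent BWAPI release (class and method names have shifted slightly since the 2010-era versions used in the competition), and it is not the Overmind’s code; it simply keeps idle workers mining on every frame.

    #include <BWAPI.h>

    // A bare-bones BWAPI module: the framework calls onStart() once and
    // onFrame() every game frame, and the bot senses and acts through the
    // Broodwar interface. (Sketch only; exact signatures vary by BWAPI version.)
    class SketchModule : public BWAPI::AIModule
    {
    public:
      void onStart() override
      {
        BWAPI::Broodwar->sendText("Sketch bot online");
      }

      void onFrame() override
      {
        // Keep idle workers mining so the economy never stalls.
        for (auto &unit : BWAPI::Broodwar->self()->getUnits())
        {
          if (unit->getType().isWorker() && unit->isIdle())
          {
            BWAPI::Unit patch = unit->getClosestUnit(BWAPI::Filter::IsMineralField);
            if (patch)
              unit->gather(patch);
          }
        }
      }
    };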

The announcement for the tournament was made in November of 2009, and the word soon went out on gaming websites and blogs: the 2010 Artificial Intelligence and Interactive Digital Entertainment (AIIDE) Conference, to be held in October 2010 at Stanford University, would host the first ever StarCraft AI competition.

StarCraft 101

When Dan and the students in his lab heard about the competition, they were immediately excited. “As soon as I heard the words, ‘There is API access to StarCraft,’” he said, “I knew that interesting research was going to happen and there was a whole space of research and class projects that was unlocked.”

The first challenge was to define the problem space. What were the important tasks that a StarCraft player would need to do, and how would this translate to an AI agent? 

The team that had coalesced at this point included Dan, some of his PhD students, and several other graduate students from AI and robotics research labs at Berkeley (including your author). This group had significant AI expertise but lacked StarCraft skills, while the undergraduate computer science population at Berkeley included many avid gamers. 

To help bridge this gap, Dan set up a class for teaching AI concepts through designing and building our tournament entry. The class would be an opportunity to share knowledge and explore how challenges from the game could be structured as concrete problems to be tackled in a principled, algorithmic manner. It would also be a great way to present and teach AI concepts.

The response to the class announcement was immediate and enthusiastic, and the class was a great success. We spent the semester learning about StarCraft and artificial intelligence, exploring algorithms and frameworks, and simply trying as many different things as we could. It was a valuable experience and instrumental to the evolution of our agent. “It’s safe to say that without the class we wouldn’t have done nearly so well,” Dan said afterwards.

At the conclusion of the class, we had formulated the requirements of the game into three main problem areas. First, an agent needs to acquire and manage resources and decide what to build and when. This is known in game terms as macro-management, or “macro,” and is ultimately a planning and optimization problem.
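
As a toy illustration of that planning-and-optimization flavor, the sketch below reduces macro to a queue of build goals that are issued only once the current stockpile covers their cost. It is a generic example, not the Overmind’s planner; the mineral costs are Brood War’s published prices, and everything else (income rates, starting stockpile) is made up.

    #include <cstdio>
    #include <deque>
    #include <string>

    // A build goal with its Brood War resource cost.
    struct BuildGoal { std::string name; int mineralCost; int gasCost; };

    struct MacroManager {
      int minerals = 50;                 // starting stockpile (illustrative)
      int gas = 0;
      std::deque<BuildGoal> queue;       // goals in the order we want them

      void addIncome(int m, int g) { minerals += m; gas += g; }

      // Issue the next goal only once we can pay for it, so later goals
      // never starve earlier ones; this is the simplest version of the problem.
      void onFrame() {
        if (queue.empty()) return;
        const BuildGoal &next = queue.front();
        if (minerals >= next.mineralCost && gas >= next.gasCost) {
          minerals -= next.mineralCost;
          gas -= next.gasCost;
          std::printf("Starting %s\n", next.name.c_str());
          queue.pop_front();
        }
      }
    };

    int main() {
      MacroManager macro;
      macro.queue = { {"Spawning Pool", 200, 0}, {"Zergling pair", 50, 0} };
      for (int frame = 0; frame < 10; ++frame) {
        macro.addIncome(30, 0);          // pretend workers mined 30 minerals
        macro.onFrame();
      }
    }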

The next task is micro-management. Once it has an army, the agent must select targets and manage movement for its units, a challenge that poses a complex, multi-agent control problem. Finally, the agent needs to manage information, scouting out the enemy and adjusting its strategy in response to enemy actions, a task that combines aspects of unit control and high-level planning.
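
The target-selection piece of that multi-agent control problem can be sketched with a simple focus-fire rule: every unit attacks the weakest enemy it can currently reach. This is a generic illustration with made-up units, positions, and ranges, not our agent’s actual controller.

    #include <cmath>
    #include <cstdio>
    #include <vector>

    // Minimal stand-in for a game unit: a position and remaining hit points.
    struct Unit { double x, y; int hp; };

    // Focus-fire heuristic: pick the weakest enemy within weapon range, or
    // -1 if nothing is in range (in which case the unit should move or retreat).
    int pickTarget(const Unit &attacker, const std::vector<Unit> &enemies, double range) {
      int best = -1;
      for (int i = 0; i < static_cast<int>(enemies.size()); ++i) {
        double dx = enemies[i].x - attacker.x;
        double dy = enemies[i].y - attacker.y;
        if (std::hypot(dx, dy) > range) continue;            // out of range
        if (best == -1 || enemies[i].hp < enemies[best].hp)  // prefer the weakest
          best = i;
      }
      return best;
    }

    int main() {
      std::vector<Unit> enemies = { {10, 0, 80}, {5, 5, 25}, {40, 40, 10} };
      Unit mutalisk{0, 0, 120};
      int target = pickTarget(mutalisk, enemies, 12.0);
      std::printf("Target index: %d\n", target);  // the 25-hp enemy in range
    }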
