Good Game, Go Next

Last week was The International 2017, the biggest date in the DOTA 2 calendar, where the world’s top teams compete in the complex and challenging MOBA for a prize pool totalling over $24m. In between the big matches Valve found time to make exciting announcements about additions to the game, and to run some exhibition matches where professional players play for fun. They also gave a private research lab some free publicity, for some reason. Here are a few words on OpenAI’s big announcement this week, and how we are losing control of the narrative on AI.

On Friday’s live stream, with millions of people watching, The International put on a special 1v1 showmatch featuring Dendi, one of the best-known and best-loved DOTA 2 players and a former world champion. His opponent was wheeled out with fog machines and a big hype intro – it was a PC, brought by OpenAI to represent the system they’d been training to play a special DOTA 2 mode called 1v1 mid, with a hero called Shadow Fiend. Dendi, while not quite at the peak of his powers any more, is still revered as a great 1v1 player. He promptly lost the first game against the bot, tapped out of the second, and gave up.

1v1 mid is a very special kind of game, sort of the DOTA 2 equivalent of penalties in football, or maybe one-on-one basketball practice. As the name suggests there are no other players to help, be helped by, or be ambushed by, and a lot of game features (like randomly-spawning powerup runes) are disabled. You win if you kill your opponent twice, or if you destroy their tower (a powerful structure each team has in their lane). 1v1 mid is primarily a way for mid players to focus on their ability to last hit (timing your attacks to land the killing blow on enemy NPC creatures called creeps, earning a gold bounty), deny (doing the same to your own creeps to slow down your opponent), and harass and dominate their opponent. All of these skills help them in the first few minutes of a real DOTA 2 game.
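The last-hit/deny mechanic above boils down to a timing decision: attack only when your hit will be the one that kills the creep. A minimal sketch of that decision, with entirely made-up numbers (these are not real DOTA 2 values or API calls):

```python
def should_attack(creep_hp: int, attack_damage: int, incoming_damage: int) -> bool:
    """Decide whether our next attack would secure the last hit (or deny).

    creep_hp        -- the creep's current health
    attack_damage   -- damage our next attack will deal
    incoming_damage -- damage other units will deal before our attack lands
    """
    hp_when_hit_lands = creep_hp - incoming_damage
    # Attack only if the creep is still alive when our hit lands,
    # and our hit is enough to finish it.
    return 0 < hp_when_hit_lands <= attack_damage

# Too early: the creep survives our hit, and the gold goes to whoever
# times it better.
print(should_attack(creep_hp=200, attack_damage=60, incoming_damage=0))   # False
# Just right: our hit lands the kill.
print(should_attack(creep_hp=50, attack_damage=60, incoming_damage=0))    # True
```

The same timing logic applies to denies; the only difference is whose creep you are hitting.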

I think it was Julian Togelius who described Go as the perfect game for an AI to play, and 1v1 mid isn’t too far off being the same in videogame form. 1v1 mid is a game driven by metrics – damage done, last hits and denies achieved, gold earned, experience gained. Your success in both the long and short term is governed by hard, measurable, understandable numbers, and even a small increase in any of those numbers has a direct impact on the outcome of the game. This creates a very good environment for the AI to learn in – an environment in which short-term rewards tend to point in roughly the direction of overall victory. By contrast, what makes DOTA 2 so rich and complex is precisely that most of the game does not follow this pattern. A lot of what’s done in the game has no immediate short-term impact, but instead plays into a slower, overall plan, which must adapt and shift as the situation changes. The hard work for OpenAI begins where 1v1 mid ends.

In DOTA 2 terms, beating professional players at 1v1 mid is a little bit like beating an international goalkeeper at penalties in football. Both parties have to exhibit skill, but it says almost nothing about their team’s ability to win a game of football (one of the reasons why resolving a game with penalties is so unsatisfying).

There’s another issue with this project beyond just working out whether it was hard or not, though. Like most big AI companies, OpenAI are big on building their brand and controlling how they talk about their work. On stage the bot was presented as starting from ‘total randomness’ and learning by playing itself, but in fact OpenAI later confirmed to The Verge that they’d hardcoded in the bot’s item choices, crucial skills like creep blocking, and anything that didn’t require “interaction with an opponent”. They were also coy about how exactly their bot was interfacing with the game. DeepMind’s Starcraft 2 work is trying to read pixels from the screen directly, a complex vision challenge. OpenAI’s bot uses the DOTA 2 API, meaning it receives precise data describing the game state directly, no interpretation required. This is more than just skipping a solved problem, which was how OpenAI put it to The Verge – the bot receives the kind of information that humans could never have, like the exact distance in game units between it and an enemy hero, and it can react to an attack with frame-perfect timing. Even DeepMind levelled the playing field in Starcraft by limiting actions-per-minute.
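The gap between the two interfaces is worth spelling out. A pixel-reading agent has to estimate quantities like distance from a rendered frame, with all the error that implies; an API-fed agent is simply handed the number. The sketch below illustrates the difference – the field names are invented for this post and are not the real DOTA 2 bot API:

```python
from dataclasses import dataclass

@dataclass
class HeroObservation:
    """Hypothetical structured observation of the kind an API hands over."""
    health: int
    mana: int
    position: tuple           # exact (x, y) in game units
    enemy_distance: float     # exact range to the enemy hero

def in_attack_range(obs: HeroObservation, attack_range: float) -> bool:
    # A pixel-based agent would have to infer this distance from the screen;
    # the API-fed agent reads it directly, with zero measurement error.
    return obs.enemy_distance <= attack_range
```

Precise range checks like this matter enormously in 1v1 mid, where harassing from just inside your attack range is a core skill – which is why the choice of interface is more than an implementation detail.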

The OpenAI project is cool. Seeing new applications of AI is always cool, and the fact that some professional players said during interviews that they would like to use the bot to practice 1v1 mid is definitely an accomplishment. But this whole incident once again raises questions about whose work in AI we value, what models of scientific advancement we choose to support, and who we let control the narrative about modern artificial intelligence. I’ve been told by many AI researchers, including some holding very senior positions, that AI simply cannot be done in academia any more, and that industry is the only place AI can be advanced. The reason seems to be a combination of two factors: the massive amount of funding available, and a slow drain of talent from universities. Both of these things come from the same place: an unimpeachable public image as the vanguard of artificial intelligence.

This week the tech press contributed another voice to the chorus of people chanting in favour of privately-funded research into artificial intelligence, and Valve gave them stage time in front of millions of viewers to present a somewhat misleading picture of their work. DeepMind got stage time at BlizzCon to talk about their work on Starcraft 2 without even having anything to show for it. It’s troubling to me that the only people able to leverage this kind of exposure are companies with deep pockets and a brand manager. As with everything funded by public money, it’s easy to forget over time why exactly funding sources matter, and why keeping some of this work independent is significant. Perhaps most importantly, AI is a perception game, and the incentives an organisation like OpenAI has for promoting itself, in my opinion, clash dangerously with how AI should be presented to people.

Good game and well played to OpenAI, but I think it’s just the beginning of a very tough series ahead.


5 thoughts on “Good Game, Go Next”

  1. ok but if an AI may (still) fall far behind a human “team”… what about an AI “made of” a “team of AIs” each one in a role play? 😉 (a hive “mind”, might say?)

    1. I actually think a human team will probably lose to the AI! Also this AI is technically made up of several smaller ones already 🙂
