I’m in Salamanca, Spain this week to attend the International Conference on Computational Creativity, and even though I haven’t slept in 30 hours, OpenAI dropped a big piece of news today about their DOTA 2 research and I wanted to offer a few thoughts in case you’re interested in the project and want a different angle on it. These aren’t particularly polished thoughts, apologies in advance, but you’ll have no end of thinkpieces and articles about it before the month is out, don’t worry.
OpenAI, an AI foundation funded by Elon Musk, has built a multi-agent AI system to play a very simple version of DOTA 2, a popular competitive online game. Last year you might remember they did something similar, on an even more simplified subset of DOTA 2 called 1v1 Mid. This new version takes several steps towards playing a full game of DOTA 2, and even though it’s still a long way off, it’s made some important steps forward.
Next month OpenAI will stream a live game of the bots playing a team of “top” human players, and in August they’ll appear live on stage at The International and play an all-star lineup of human players, with most of these restrictions still in place.
Hey everyone! It’s been quiet here for a couple of months, but I’ve been working hard on some really exciting things, and I can now announce what the first of those is: ANGELINA will be designing games live at EGX Rezzed this month! Rezzed is one of the biggest games events in the world, and ANGELINA will be there for all three days of the event, designing games all day long. This is a quick post about what ANGELINA actually is, what you can expect to see if you come along, and how you can take part even if you’re unable to attend!
Last week a few games sites covered the fact that the Cambridge Centre for the Study of Existential Risk (CSER), a lab which investigates safety issues associated with things like artificial intelligence, had released a Civilization V mod about the risk of superintelligent AI. Here’s what Rock, Paper, Shotgun quoted designer and CSER researcher Shahar Avin as saying about the project:
“We want to let players experience the complex tensions and difficult decisions that the path to superintelligent AI would generate,” said the Centre for the Study of Existential Risk’s Dr. Shahar Avin, who managed the project. “Games are an excellent way to deliver a complex message to a wide audience.”
This is a blog post about why games are not always an excellent way to deliver a complex message to a wide audience.
Two years ago I visited Dagstuhl, a research center in Germany, for a week of game AI research. I was writing Electric Dreams at the time for Rock, Paper, Shotgun; a series about games, AI and research. In the piece about Dagstuhl, I wrote about the fear I observed that academic pressures and economic shifts would stifle great, exciting games research:
Like every other part of the games industry, games researchers have a contribution to make to the future of games. If we don’t make spaces where we can do this work, Michael Mateas’ “country of possibilities” may remain undiscovered forever.
Last week I returned to Dagstuhl, and once again found myself discussing the health of game AI research. But this time, the problem wasn’t funding agencies or university administrators: the problem was us. This is a fairly introspective, Inside Baseball-esque post, but I’ve come away from Dagstuhl with a powerful urge to write it, so I hope you’ll forgive me. If you work in games research, particularly AI, and particularly if you were at Dagstuhl, I implore you to read it.
I was lucky enough to be a guest on the Checkpoints podcast this month! I talked about my origin story growing up watching Bad Influence! on the TV and playing Zool on the Amiga. I also got to have a terrific conversation about AI with Declan, and while chatting I let slip a new thing I have in the works – ANGELINA is being designed to stream game development live on Twitch, and I’m hoping to do some of its first streams really soon. This is a short blog post about how that’s happening, and why I’m doing it. You can also follow ANGELINA on Twitch here!
Yesterday the House of Lords – one of the two houses of the British parliament – convened its Select Committee on Artificial Intelligence to meet with a number of academics and journalists and ask them for “the big picture” about the field. It was broadcast live on the web, and there’s even an archive here. I tweeted at length (mostly with tongue in cheek) about the event, but I also wanted to quickly summarise some thoughts – nothing formal, just some notes. I also want to point out that this is largely from memory, so apologies if I miss a detail or misattribute something – let me know if you spot anything, and I’ll correct it ASAP.
Last week was The International 2017, the biggest date in the DOTA 2 calendar where the world’s top teams compete in the complex and challenging MOBA for a prize pool totalling over $24m. In between the big matches Valve found time to make exciting new announcements about additions to the game, and some exhibition matches where professional players play for fun. They also gave a private research lab some free publicity, for some reason. Here’s a few words on OpenAI’s big announcement this week, and how we are losing control of the narrative on AI.
Seven years ago I started this site to write about ANGELINA, software I was making that could design its own videogames. The first games it made were simple arcade games with coloured circles that moved around a white screen, but the real objective of the project wasn’t just to make fun games, but to make a piece of software that people cared about, respected, were inspired by, and recognised as a creative individual. Over the years each new version of ANGELINA has tried to raise those stakes, to give ANGELINA more responsibility, and to take away more of my personal influence. Today I’m excited to tell you about a new version of ANGELINA that I’ve been working on, which takes more steps along that path. There’s still a lot of work to do, but I’d love to hear what you think.
In the summer of 2005 a herd of twenty-three driverless cars barrelled across the Nevada desert, watched by scientists, engineers and nervous representatives of military funding agencies. Several hours later the first car crossed the finish line, claiming a $2 million prize for the DARPA Grand Challenge and, naturally, the keen attention of DARPA itself. But it wasn’t just their interest that was piqued – journalists were also waiting to see if the whole field of artificial intelligence might emerge from the wilderness along with the beaten-up cars. John Markoff, writing for the New York Times, began his coverage of the event by describing AI as:
“…a technology field that for decades has overpromised and underdelivered… At its low point some computer scientists and software engineers avoided the term artificial intelligence for fear of being viewed as wild-eyed dreamers.”
It’s safe to say that artificial intelligence as a field has largely beaten off that image today, and is currently enjoying a golden age of investment, growth and discovery. In 2006 Ray Kurzweil wrote in his book ‘The Singularity Is Near’ that “the AI winter is long since over” – ‘AI winter’ being a term people use to describe the catastrophic slumps the field experiences following a period of prosperity. New techniques emerge that seem to solve problems better than ever before, forecasts and predictions are made about the future, hopes are raised, and then eventually the bubble of excitement bursts under the weight of its own expectations. The winter that follows is long – research funding is cut, tech startups shutter, businesses and governments withdraw interest, and the public loses its faith in the field. When Kurzweil wrote that the winter was over in 2006 he may have been talking specifically about the winter that took place in the 1990s, but it’s possible he was also talking more generally – many AI researchers I’ve spoken to believe this is it, that there will be no more winters. In 2012 Demis Hassabis, then the founder of a little-known company called DeepMind Technologies, declared that ‘the time is right for a push towards general AI’.
2017 is the summer solstice for artificial intelligence, the warmest and longest day, the kind of day that makes it feel like summer might last forever. But nothing lasts forever, and this season will pass like all the others have before it. The only thing that we can affect is how bitter and harsh the coming winter will be, and that is largely dictated by how badly let down people feel when the bubble finally bursts. What dream did we sell them, what did we let them believe, how did we advise them to act and spend their money? We need to start thinking about the image of artificial intelligence this year, and change it for the better.
This week is AIIDE, a big academic conference all about AI and games. For the last few years I’ve co-organised a workshop called EXAG along with Alex Zook and Antonios Liapis, and this weekend it’ll be happening again. EXAG is always a very special time of year for me, and the papers I put into EXAG are normally my favourites of the whole year, because they can be about all kinds of new and unusual things. This year I wrote one with Adam Summerville about DOTA 2, and I’d like to tell you a little bit about the paper and the game.
Click Here To Read The Paper!