Here’s the big news: I’ve been awarded a Research Fellowship from the Royal Academy of Engineering, and starting in November 2018 I’ll be joining Queen Mary University of London for five years of research into automated game design, as part of their Game AI Research Group! If you’d like to hear about some specific things I hope to get up to, read on – otherwise, I’m looking forward to joining Queen Mary and starting a new phase of my research!
I’m in Salamanca, Spain this week to attend the International Conference on Computational Creativity, and even though I haven’t slept in 30 hours, OpenAI dropped a big piece of news today about their DOTA 2 research, and I wanted to offer a few thoughts in case you’re interested in the project and want a different angle on it. These aren’t particularly polished thoughts, apologies in advance, but don’t worry – you’ll have no end of thinkpieces and articles about it before the month is out.
OpenAI, an AI research lab funded by Elon Musk, has built a multi-agent AI system to play a very simplified version of DOTA 2, a popular competitive online game. You might remember that last year they did something similar on an even more restricted subset of DOTA 2 called 1v1 Mid. This new version takes several steps towards playing a full game of DOTA 2, and even though it’s still a long way off, it has made some important progress.
Next month OpenAI will stream a live game of the bots playing against a team of “top” human players, and in August they’ll appear live on stage at The International to play an all-star lineup of human players, with most of these restrictions still in place.
Last week a few games sites covered the fact that the Cambridge Centre for the Study of Existential Risk (CSER), a lab which investigates safety issues associated with technologies like artificial intelligence, had released a Civilization V mod about the risk of superintelligent AI. Here’s what Rock, Paper, Shotgun quoted designer and CSER researcher Shahar Avin as saying about the project:
“We want to let players experience the complex tensions and difficult decisions that the path to superintelligent AI would generate,” said the Centre for the Study of Existential Risk’s Dr. Shahar Avin, who managed the project. “Games are an excellent way to deliver a complex message to a wide audience.”
This is a blog post about why games are not always an excellent way to deliver a complex message to a wide audience.
Two years ago I visited Dagstuhl, a research centre in Germany, for a week of game AI research. At the time I was writing Electric Dreams, a series about games, AI and research, for Rock, Paper, Shotgun. In the piece about Dagstuhl, I wrote about the fear I observed that academic pressures and economic shifts would stifle great, exciting games research:
Like every other part of the games industry, games researchers have a contribution to make to the future of games. If we don’t make spaces where we can do this work, Michael Mateas’ “country of possibilities” may remain undiscovered forever.
Last week I returned to Dagstuhl, and once again found myself discussing the health of game AI research. But this time, the problem wasn’t funding agencies or university administrators: the problem was us. This is a fairly introspective, Inside Baseball-esque post, but I’ve come away from Dagstuhl with a powerful urge to write it, so I hope you’ll forgive me. If you work in games research, particularly AI, and particularly if you were at Dagstuhl, I implore you to read it.
I was lucky enough to be a guest on the Checkpoints podcast this month! I talked about my origin story growing up watching Bad Influence! on the TV and playing Zool on the Amiga. I also got to have a terrific conversation about AI with Declan, and while chatting I let slip a new thing I have in the works – ANGELINA is being designed to stream game development live on Twitch, and I’m hoping to do some of its first streams really soon. This is a short blog post about how that’s happening, and why I’m doing it. You can also follow ANGELINA on Twitch here!
Last week was The International 2017, the biggest date in the DOTA 2 calendar, where the world’s top teams compete in the complex and challenging MOBA for a prize pool totalling over $24m. In between the big matches Valve found time to make exciting new announcements about additions to the game, and to run some exhibition matches where professional players play for fun. They also gave a private research lab some free publicity, for some reason. Here are a few words on OpenAI’s big announcement this week, and how we are losing control of the narrative on AI.
Seven years ago I started this site to write about ANGELINA, software I was making that could design its own videogames. The first games it made were simple arcade games with coloured circles that moved around a white screen, but the real objective of the project wasn’t just to make fun games, but to make a piece of software that people cared about, respected, were inspired by, and recognised as a creative individual. Over the years each new version of ANGELINA has tried to raise those stakes, to give ANGELINA more responsibility, and to take away more of my personal influence. Today I’m excited to tell you about a new version of ANGELINA that I’ve been working on, which takes more steps along that path. There’s still a lot of work to do, but I’d love to hear what you think.
This week is AIIDE, a big academic conference all about AI and games. For the last few years I’ve co-organised a workshop called EXAG along with Alex Zook and Antonios Liapis, and this weekend it’ll be happening again. EXAG is always a very special time of year for me, and the papers I put into EXAG are normally my favourites of the whole year, because they can be about all kinds of new and unusual things. This year I wrote one with Adam Summerville about DOTA 2, and I’d like to tell you a little bit about the paper and the game.
I saw an article today about the future of AI in games, and I was tempted to start tweeting about it, but that inevitably leads to boring arguments and isn’t very constructive. Instead, I’m going to give you a list (in no particular order) of some researchers who I think are really interesting, who are important to the future of game AI, who have interesting things to say, and, most importantly, who I don’t see interviewed or talked about enough. They’d all make great people to talk to for articles, features and interviews, and each one has a research portfolio that paints a cool future for games. Go check them out!
I’ve been really excited about level design recently, and I’ve been reading a lot of work by folks like Robert Yang about lighting, space, and building worlds in 3D. It’s amazing stuff and it links in really well with the research I want to do right now (mostly because it’s influencing the research I want to do right now!). I wanted to write a little update about some work I did recently along these lines – building a level generator that uses in-game cameras to evaluate levels.