Last week a few games sites covered the fact that the Cambridge Centre for the Study of Existential Risk (CSER), a lab that investigates safety issues associated with things like artificial intelligence, had released a Civilization V mod about the risk of superintelligent AI. Here’s what Rock, Paper, Shotgun quoted designer and CSER researcher Shahar Avin as saying about the project:
“We want to let players experience the complex tensions and difficult decisions that the path to superintelligent AI would generate,” said the Centre for the Study of Existential Risk’s Dr. Shahar Avin, who managed the project. “Games are an excellent way to deliver a complex message to a wide audience.”
This is a blog post about why games are not always an excellent way to deliver a complex message to a wide audience.
Two years ago I visited Dagstuhl, a research center in Germany, for a week of game AI research. At the time I was writing Electric Dreams, a series about games, AI and research, for Rock, Paper, Shotgun. In the piece about Dagstuhl, I wrote about the fear I observed that academic pressures and economic shifts would stifle great, exciting games research:
Like every other part of the games industry, games researchers have a contribution to make to the future of games. If we don’t make spaces where we can do this work, Michael Mateas’ “country of possibilities” may remain undiscovered forever.
Last week I returned to Dagstuhl, and once again found myself discussing the health of game AI research. But this time, the problem wasn’t funding agencies or university administrators: the problem was us. This is a fairly introspective, inside-baseball post, but I’ve come away from Dagstuhl with a powerful urge to write it, so I hope you’ll forgive me. If you work in games research, particularly AI, and particularly if you were at Dagstuhl, I implore you to read it.