Scientific Victory

Last week a few games sites covered the fact that the Cambridge Centre for the Study of Existential Risk (CSER), a lab which investigates safety issues associated with things like artificial intelligence, had released a Civilisation V mod about the risk of superintelligent AI. Here’s what Rock, Paper, Shotgun quoted designer and CSER researcher Shahar Avin as saying about the project:

“We want to let players experience the complex tensions and difficult decisions that the path to superintelligent AI would generate,” said the Centre for the Study of Existential Risk’s Dr. Shahar Avin, who managed the project. “Games are an excellent way to deliver a complex message to a wide audience.”

This is a blog post about why games are not always an excellent way to deliver a complex message to a wide audience.

The mod changes the way the scientific victory condition works by removing the Space Race and adding new AI-flavoured discoveries to the technology tree. These allow civilisations to work towards building benevolent AI that can transform the world into a “utopia”, but they also start a countdown to a Rogue AI being unleashed, which, when it completes, ends the game in a loss for all civilisations (which, mechanically, is a really cool idea!). Players can slow this down by building AI Safety Labs, which reduce the rate at which the Rogue AI develops. Avin even seemed to think his mod might help people understand the challenges faced by governments and scientists. At a recent talk he gave in London, he said: “Maybe people will come up with good plans to deal with this, and if Civilisation is a good enough simulation then we will try them out.”
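To make that mechanic concrete, here is a rough sketch of how a countdown like this could work. It is written in Python rather than the Lua that Civ V mods actually use, and every name and number in it is mine; I haven’t read the mod’s source, so treat this as an illustration of the shape of the system, not of its actual implementation:

```python
# Hypothetical sketch of a shared Rogue AI countdown, in the spirit of the
# Superintelligence mod. Every constant and name here is invented for
# illustration; the real mod is written in Lua and may work differently.

BASE_PROGRESS_PER_TURN = 2.0  # assumed: progress added per AI tech per turn
LAB_SLOWDOWN = 0.25           # assumed: progress removed per AI Safety Lab per turn
ROGUE_AI_THRESHOLD = 100.0    # progress at which everyone loses

def advance_rogue_ai(progress, ai_techs_researched, safety_labs):
    """Advance the global Rogue AI countdown by one turn.

    The countdown only runs once someone has researched an AI technology,
    and safety labs slow it without stopping it, matching the behaviour
    described above.
    """
    if ai_techs_researched == 0:
        return progress, False  # nobody has started down the AI path yet

    rate = BASE_PROGRESS_PER_TURN * ai_techs_researched
    rate -= LAB_SLOWDOWN * safety_labs
    rate = max(rate, 0.5)  # labs slow the countdown but never halt it

    progress += rate
    return progress, progress >= ROGUE_AI_THRESHOLD  # True means a loss for all

# Example: with 3 AI techs researched and 4 labs built, progress rises by
# 2.0 * 3 - 0.25 * 4 = 5.0 per turn, so the threshold is crossed in 20 turns.
```

The property worth noticing, and the one the rest of this post turns on, is that a function like this is deterministic: given enough turns the threshold is always crossed, and there is no dice roll anywhere.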

There’s been a lot of writing in the past about how games can convey messages through the systems they are composed of. ‘Persuasive games’ are games in which this is a driving force behind development – the idea being that the audience can have their opinions swayed by playing a game and interacting with its systems. One of my favourite examples of this is Fate Of The World, where you are tasked with averting climate change by introducing economic, social and political interventions across the world. The game sold itself on the strength of its simulation, claiming to be based on actual research findings and real-world predictive models of the future, so that players could explore and understand the changing world first-hand.

Interacting with a system feels powerful in a way unique to games, different from simply being told about the same data or predictive models by a documentary film. In a game we can test theories out, look at extreme cases, and see the computed outcome of our own personal simulation. But this is a double-edged sword, because that unique power depends on us trusting the people who made the game to represent things fairly and communicate honestly. We probably don’t have time to read and understand all of the research Fate Of The World is based on, which is one of the advantages of playing the game. But if the data is flawed, or the simulation is coded inaccurately, unintentionally or otherwise, the player may never know, and may come away with the wrong idea as a result.

Avin likened making the mod to a “thought experiment” backed by numbers and systems: “You make a system that requires you to be quite specific about how your ideas will play out.” But this is the crucial difference between the Superintelligence mod and a game like Fate Of The World, which relied at least in part on predictive models about the future. The Superintelligence mod is pure thought experiment, and represents a view that has a lot less consensus in the AI community than something like climate change has in the climatology community.

If the mod were just something someone had cooked up in their spare time it might not be a problem, but with the CSER name attached – as well as Cambridge, one of the world’s most famous universities – the mod is now a publicity tool, carrying with it the weight of academic endorsement. And this is awkward, because with that extra reputation attached the game’s messages might now be interpreted a lot more strongly by those playing it. For example, the mod’s failure condition of a Rogue AI taking over the world will always happen unless players avert it – it is a certainty, not something that merely has a chance of happening. The mod’s message is that AI is fundamentally unsafe, and that doing any kind of experimentation with it will lead to the destruction of civilisation. To fight this, the mod advocates for technology becoming the “slave” of mankind, through the construction of safety labs (modelled on, I assume, CSER itself).

Many of the thought experiments surrounding human extinction by AI actually assume it is hugely unlikely, and that AI safety should be a concern not because the probability is high, but because the scale of the potential catastrophe outweighs the small chance of it happening. If the Rogue AI failure state only happened occasionally, and the player never knew what might trigger it, that might make for a more nuanced and interesting representation of humanity’s relationship with AI (although there are still plenty of other things to take umbrage at). But in its current state the mod is essentially presenting a Rogue AI as something that will definitely happen unless we act now, by building more labs like CSER. This feels disingenuous even if you are concerned about the risks of AI, as CSER is, and these nuances are lost behind the Cambridge branding and press releases. It feels less like a public awareness effort and more like a branding activity for the lab. As Avin put it in his recent talk, “There was one comment [on the mod] that said ‘This mod scares the **** out of me’ which is… what we want.” Avin is talking about public awareness here, but it’s worth bearing in mind that CSER benefits from people being worried about human extinction, and so, intended or not, exaggerating the threat and getting people talking about it is a positive side-effect.
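That expected-value argument is easy to state numerically. The figures below are invented purely to show the shape of the reasoning; they are not estimates anyone at CSER has endorsed:

```python
# Toy expected-value comparison: a tiny probability attached to an enormous
# harm can still dominate a large probability attached to a modest harm.
# Every number here is made up for illustration.

p_rogue_ai = 0.001        # assumed: a 0.1% chance of an AI catastrophe
harm_rogue_ai = 10**10    # assumed: harm, in (say) lives affected

p_mundane = 0.5           # assumed: a coin-flip chance of a mundane harm
harm_mundane = 10**6

print(p_rogue_ai * harm_rogue_ai)  # prints 10000000.0
print(p_mundane * harm_mundane)    # prints 500000.0
```

The point is that this argument works because the probability is small and the stakes are vast; a mod in which the catastrophe is certain is illustrating a different claim entirely.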

I’m all in favour of scientific outreach, and I think what Avin and his team did was inventive in that regard. I also think that as a mod, it’s pretty cool! It takes Civ’s science endgame in some new directions, and there’s some good design work in there. But as something issued by a lab as a piece of science communication, I like it less. I think we need to be more responsible when we use academic brands to endorse creative and semi-fictionalised work that deals directly with the issues we are trusted to be rational and methodical about. Part of society’s relationship with science and scientists is that they trust us to speak honestly about our work. I don’t think the mod was made dishonestly, or intended to deceive anyone, but I think it was an imprecise expression of CSER’s aims, and I don’t think anyone particularly cares, because AI makes for a good news story, which makes journalists happy, and news stories make for good exposure for a research lab, which makes researchers happy. And this is not unique to CSER or this story about a mod for a game – it is a trend I have watched slowly grow across every corner of AI over the past five years.

Avin closed his talk by saying “I don’t know how this will be misinterpreted yet but I know it will be.” As someone who had to learn the hard way to be careful about how I talked about my own work, I know it can be difficult to control a message and make sure people get the view of your work that you intended. But that’s why we need to be critical of ourselves and keep trying to do better, especially when using academic branding or the status of science to convince people of things. I think the AI community has been bad at this for a long time, because in general most people don’t lose out from being imprecise (and in many cases in AI, can actually gain from it). We have to be twice as careful when talking about difficult topics, because people are trusting us to help them navigate these complex ideas, and if we lose that trust then the much more immediate threats posed by AI – in jobs, in privacy, in militarisation, and more – may become impossible to talk to people about.
