Prayer for the Solstice

In the summer of 2005 a herd of twenty-three driverless cars barrelled across the Nevada desert, watched by scientists, engineers and nervous representatives of military funding agencies. Several hours later the first car crossed the finish line, claiming the DARPA Grand Challenge’s $2 million prize and, naturally, the keen attention of DARPA itself. But it wasn’t just their interest that was piqued – journalists were also waiting to see if the whole field of artificial intelligence might emerge from the wilderness along with the beaten-up cars. John Markoff, writing for the New York Times, began his coverage of the event by describing AI as:

“…a technology field that for decades has overpromised and underdelivered… At its low point some computer scientists and software engineers avoided the term artificial intelligence for fear of being viewed as wild-eyed dreamers.” 

It’s safe to say that artificial intelligence as a field has largely beaten off that image today, and is currently enjoying a golden age of investment, growth and discovery. In his 2005 book ‘The Singularity Is Near’, Ray Kurzweil wrote that “the AI winter is long since over” – ‘AI winter’ being the term people use to describe the catastrophic slumps the field experiences following a period of prosperity. New techniques emerge that seem to solve problems better than ever before, forecasts and predictions are made about the future, hopes are raised, and then eventually the bubble of excitement bursts under the weight of its own expectations. The winter that follows is long – research funding is cut, tech startups shutter, businesses and governments withdraw interest, and the public loses faith in the field. When Kurzweil declared the winter over he may have been talking specifically about the winter that took place in the 1990s, but it’s possible he was also talking more generally – many AI researchers I’ve spoken to believe this is it, that there will be no more winters. In 2012 Demis Hassabis, co-founder of a then little-known company called DeepMind Technologies, declared that ‘the time is right for a push towards general AI’.

2017 is the summer solstice for artificial intelligence, the warmest and longest day, the kind of day that makes it feel like summer might last forever. But nothing lasts forever, and this season will pass like all the others before it. The only thing we can affect is how bitter and harsh the coming winter will be, and that is largely dictated by how badly let down people feel when the bubble finally bursts. What dream did we sell them, what did we let them believe, how did we advise them to act and spend their money? We need to start thinking about the image of artificial intelligence this year, and change it for the better.

Science and Scientists

Much of the perceived value of artificial intelligence comes from a reverence for its basis in science and for the scientists who work on it, and the public perception of both is warped far beyond the field of AI. The popular image of science and scientists over the last two decades has become more and more problematic, even though at first glance it appears more positive and uplifting than ever. We generally view scientists as confusing outsiders who don’t understand the real world, and who are equally hard for the real world to understand. Despite this, we mostly accept that they can do good things, and in recent years many people have felt the need to defend or promote scientific research as cool, exciting or impactful. This image of scientists is captured perfectly by The Big Bang Theory: scientists are clever idiots to be laughed at, except when they are helping us laugh at other, uneducated idiots. The popularity of the British quiz show QI used to exemplify this too: knowledge was a weapon to catch other people out and beat them with. (Of course, science can be cool, exciting and impactful, along with most other things in life like art, or trainspotting, or reading books. It can also be dull, boring and pointless.)

Our understanding of how scientific progress works is also in pretty bad shape, partly thanks to the way journalists write about some kinds of science, but more importantly because of how governments and companies treat research. We treat weird or fringe research as a joke, and we complain that niche or wasteful work should be defunded. This suits large businesses and governments, because it encourages the kind of short-term, industry-focused research that benefits them. The public have been trained to demand immediate economic impact from publicly-funded research, which pushes funding towards those areas that can claim to be helping the economy, and pushes researchers further and further towards being an extension of corporate research and development.

We have a bad habit of viewing science as a linear path towards a final objective, something like the technology tree in Civilization. We advance in little steps along a fixed path, and each problem is incrementally harder and follows on directly from the previous one. We tend to assume that progress implies a through path: that dead-ends will not appear, that techniques always scale, and that problems always neatly stack into each other like matryoshkas. This is particularly evident in artificial intelligence, where we assume that small, highly-constrained systems can be scaled up indefinitely to solve bigger and bigger problems, given only more resources and a little more time. This, more than anything else, is why AI winters happen: because things don’t scale forever, because constraints are there for a reason, because progress is messy and rough and inconsistent. The technology tree, if it can even be said to exist, looks more like a Jackson Pollock than a neat set of points, with paths and colours overlapping and weaving and doubling back, and occasional blobs sat on their own as islands. But when it comes to thinking about science itself, as with thinking about scientific results, we prefer simple elegance.

Machines and Humans

The confused perception of science and scientists is just one part of the problem. The others are more specific to AI, and reflect a cold and reductivist approach to artificial intelligence that we have come to accept as the norm after many decades of science fiction and bad examples set by the technology industry. Let me show you what I mean: Randall Munroe’s xkcd 1002 shows a list of games ranked by ‘difficulty for computers’. Games are placed on a linear scale from easy to hard, with some games marked as ‘solved’. There are two entries in particular on Munroe’s list that exemplify a problem we have in our perception of AI and ‘solved problems’. Beer Pong, a drinking game about firing ping pong balls and downing beer, is listed under ‘computers can beat top humans’. Seven Minutes In Heaven, a teenagers’ party game about making out in cupboards, is listed under ‘computers may never outplay humans’. While the latter is clearly part of the comic’s punchline, it shows how our notion of AI-human competitiveness is completely dominated by an emotionless, sociopathic attitude to playing games. A robot playing Beer Pong doesn’t engage socially with people or drink beer – it just fires balls around. It plays beer pong about as well as a stiff breeze would. But because it solves the physical problems of the game, our sterile, mathematical assessment is that the game is solved. If you remove the social and emotional components of Seven Minutes In Heaven, as the Beer Pong robot does, you’re left with nothing. The modern understanding of AI doesn’t know what to do with this. It has subtracted the human from the process, found nothing else there, and declared it an unsolvable mystery.

Whether we think about it explicitly or not, I think we all have an implicit understanding that there are two sides to artificial intelligence: finding ways to get technology to do new things, and understanding how technology should behave in society, what its role is, and how humans and technology relate to one another. The mistake we make is assuming that everyone in artificial intelligence is equally interested in both of those problems. We assume that the people designing self-driving cars and facial recognition systems and automated translators all think deeply about the consequences of their work, its impact on society and how best to describe its strengths and weaknesses. In 2005, when AI researchers were regarded as ‘wild-eyed dreamers’ by journalists like John Markoff, this wasn’t a problem. But now AI ‘experts’ are integrating themselves into businesses, governments, the media and beyond, and the gulf between the people who are building things and the people who are thinking about how those things will affect the world is becoming very pronounced – and in general, we prefer listening to the former group.[1]

Science and Science Fiction

Most of the work we see being done today is in the subfield of machine learning, and neural networks in particular. A neural network is like a big switchboard of wires that’s able to rewire itself as it works, to get better at solving the problem it’s been given. This means that if you look inside the box you just see a big jumble of wires: you don’t know what they do, because you didn’t wire them up. All you can do is give the network some input – like a photo of a person – and see what comes out the other side – like a piece of paper saying whether the person is guilty of a crime or not. We sometimes call things like this black boxes – you can see what goes in and what comes out, but not the in-between.[2]
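To make the black-box point concrete, here’s a minimal sketch in Python – my own toy example, not anything from a real system, and assuming scikit-learn’s MLPClassifier is available. We can watch inputs go in and answers come out, and we can even print the learned ‘wiring’, but the numbers tell us nothing about why the network answers the way it does.

```python
# A toy illustration of the 'black box' idea, using scikit-learn
# (my choice of library, purely for demonstration).
import numpy as np
from sklearn.neural_network import MLPClassifier

# Four 2-d inputs and their labels - a tiny XOR-style problem.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])

# The network 'wires itself up' during training; we never specify
# what any individual connection should do.
net = MLPClassifier(hidden_layer_sizes=(8,), solver='lbfgs',
                    max_iter=5000, random_state=0)
net.fit(X, y)

# We can see what goes in and what comes out...
print(net.predict([[0, 1]]))

# ...and we can even dump the learned wiring, but it's just a jumble
# of numbers that says nothing about *why* the answer came out as it did.
print(net.coefs_[0])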

Machine learning has captured our imagination not just because it’s good at what it does (although that is clearly a bonus) but because of the nature of machine learning as an AI technique. If a different AI technique were popular right now, we wouldn’t be quite as excited as we are – machine learning appeals to us so strongly because of how it works. Firstly, it appeals to our notions of what the future looks and sounds like. We often use the terms ‘neural network’ or ‘neuron’ to describe these systems. We’ve been describing computer systems using brain analogies for over seventy years (‘Electronic Brain’ was used to describe a computer that could beat humans at Tic-Tac-Toe) and neurons in particular evoke a feeling of scienceyness. They silently encourage us to extend the analogy further – they make us think about ourselves, about sentience, about learning, about nature. We don’t ask ourselves whether humans should be the template for intelligence, or the only one; we don’t even ask if the analogy makes much sense. We are the most intelligent creatures we know, so of course it makes perfect sense to build things in our own image.

The second reason neural networks appeal to us is that the way they operate fits our concept of how technology works. I’ve already talked about how we view science in general, but at the level of smaller details most people don’t know much about programming or code. They often have opinions about code, though – many people have told me they don’t consider software to be intelligent if you can examine the lines of code that instruct it to do things, for example. This is a sociological by-product of how we think about intelligence and machines, but no-one in AI cares about this very much: all that matters is that neural networks can’t be interrogated in this way, and that makes them mysterious. Specifically, it makes them more like the AI in films or on TV, since these media typically don’t spend time explaining how their AIs work (and when they do, surprise, they often reference brains or neural networks). Neural networks also tend to have very specific weaknesses or problems. In fact they mimic the Big Bang Theory model of the scientist: frighteningly intelligent at specific tasks, but with fatal flaws and unexpected hiccups that reveal their imperfect nature. We like this because it hints at their unknowable nature, just as in popular media we’re shown AIs that hide secrets and desires, go beyond their programming, misinterpret their instructions or act in ways that seem less machinelike. Neural networks look and sound exactly like we expect AI to look and sound, and in any other field their weaknesses, like being uninspectable or harbouring hidden bias, would be considered huge drawbacks. But here their weaknesses sync up exactly with how we expect AI to be, and instead become strengths. This has contributed to the perfect storm of media attention these systems are receiving.

A Prayer For The Solstice

For many years now we’ve trained people to view our field in all of these problematic ways: to view scientists as ultrarational geniuses who discover new ideas in moments of brilliance and leap along a perfectly straight road towards a utopian Future; to see academic research solely as a way to further human productivity and output; to see artificial intelligence as a clinical dissection of the real world; to see machine learning as all of our hopes and fears about AI made real. Most of this is done accidentally, through miscommunication and misinterpretation; occasionally it’s a wilful manipulation designed to benefit certain parties in the short term. We have to do better.

Right now we’re basking in the midsummer sun. Investment money is everywhere, journalists are eager to report, governments and businesses are betting big. When winter comes, we’ll be judged by the difference between the public perception of our work and the reality of it. The bigger the gap between the two, the harsher and longer the ensuing winter will be. That’s worrying, and it’s reason enough to work hard on improving communication and the discourse around AI.

But a bigger concern for me is that, in the time between now and then, we will do a great deal of damage to the world in our bid to make the most of this fruitful season – grabbing for every bit of cash and every spotlight, taking every opportunity to advise governments and businesses, benefiting as much as possible from the bubble we find ourselves in. I’m worried that there are too many of us working on the cold technical problems and not enough of us considering how society overlaps with what we do. We can survive even the harshest winter, just as we have before – we are privileged technologists with little to lose. The rest of the world may not be as lucky. We should be compassionate and aware enough to look out for others too.

Image credits: Ann Nuyts (top), Shirley Binn (bottom)

[1] A major motivation for finally posting this was reading this incredible post by Liz Ryerson on this very topic.
[2] This is currently a frontier problem in machine learning, but the main reason it has come to the fore seems to be legislation compelling commercial AI systems to be explainable.

One thought on “Prayer for the Solstice”

  1. Interesting article, Mike. What do you think the solution is? Broader computer science and AI education? More accurate media representations of science, scientists, and artificial intelligence? More careful and limited promises during AI booms?

    It would be interesting to read an in-depth history of AI booms and busts that focuses on common threads and themes from cycle to cycle.
