Supervised Learning – House Of Lords & AI

Yesterday the House Of Lords – one of the two houses in the British parliament – gathered a Select Committee on Artificial Intelligence to meet with a number of academics and journalists and ask them for “the big picture” about the field. It was broadcast live on the web, and there’s even an archive here. I tweeted at length (mostly with tongue in cheek) about the event, but I also wanted to quickly summarise some thoughts – nothing formal, just some notes. I also want to point out that this is largely from memory, so apologies if I miss a detail or misattribute – let me know if you spot anything, I’ll correct it ASAP.

The committee met with six people in total: Nick Bostrom (Oxford, famous for being one of the earlier people in the current AI cycle to talk about extinction), Wendy Hall (Southampton), Mike Wooldridge (Oxford), Sarah O’Connor (Financial Times), Rory Cellan-Jones (BBC) and Andrew Orlowski (The Register). First the academics answered questions for an hour, and then the journalists. The purpose of the meeting, in the words of the committee, was to get the “big picture” view of AI.

Skepticism

The overarching theme in the responses, from both journalists and researchers, was that everyone should lower their expectations. The amount of lowering varies depending on who you ask, but Wooldridge at one point explained that “no progress” has been made towards artificial general intelligence in the last five years, and that AI is in focus right now largely in a “narrow” sense: being good at specific tasks. One of the journalists put it more snappily: “I have never seen a self-driving car that can park.”

Cellan-Jones was pretty reflective about his own role in bringing AI to the fore. Comparing Machine Learning to Big Data or ‘The Cloud’, he said that coverage of things like Hawking’s fears of extinction had probably coloured people’s perception of AI badly. “I don’t think we need to worry about [Hawking’s fear of extinction]” he explained, “We need to worry about things like bias in algorithms.”

Worryingly, a lot of the members of the committee seemed to have misconceptions about AI – and I say worryingly because some also seemed to feel they were somewhat knowledgeable about the topic from past experience with it in their work. There were repeated questions about whether “learning” meant that an AI could learn anything at all – and it was actually the journalists, not the academics, who pointed out that the language surrounding AI was leading people astray.

Economy

The economy came up multiple times across both panels, and although I appreciate the pressure everyone was under, I feel like the journalists handled it better than the academics. Possibly due to years of exposure to the grant-writing process, the academics were very keen to emphasise the potential for the UK to individually profit from artificial intelligence and the need to ‘support startups’ and get more PhD students in Computer Science – two things everyone is always calling for, despite little evidence that they would actually be the most useful interventions. I’m not even sure it makes good business sense, quite apart from being incredibly depressing on paper.

O’Connor probably provided the most memorable quote of the session, which I put at the start of this post: “People in blue collar jobs have been ‘disrupted’ for decades – the thought that it’s going to hit the white-collar middle-classes is what’s got people scared.” Amazingly, one of the committee members tried to dispute this, replying with words to the effect of “That’s because we’re the ones who are left, isn’t it?” which O’Connor quickly batted away. “Partly that. And partly that it’s us now, right?” People didn’t seem to like that – which is why it was probably the most useful thing said all day.

By contrast, Hall said earlier during the academic panel that “as with all technological advances, there’ll be more jobs created than lost” – a dangerous line to take with a government that is keen to promote industry above people, albeit indirectly through the Lords. Not only does the statement fail to hold up to much scrutiny, it says nothing about the quality of those jobs or their distribution, both social and geographical. This was all wrapped up in a lot of uncomfortable “British industry” talk – that London has more AI startups than anywhere else in the world, that we should capitalise on this, and perhaps most uncomfortably, “there’s not many British accents in DeepMind” and as a result we should worry about them leaving due to Brexit. Ick.

Society

I don’t know a lot about how select committees work, but a number of questions were moved past quite quickly, and often they were the ones asked by women and related to the broader impact of AI on society. Maybe I’m reading too much into it, but it felt a bit like a boys’ club wanting to talk about British industry and robot extermination. When it did come up, the committee asked about how to better educate people about AI. There were some really thoughtful responses – Orlowski spoke about the need not to get tunnel vision on AI in education. “There’s only so much time we can [educate] people… [Algorithms] is part of a balanced curriculum, but if they don’t know culture or history, how can they account for the world?” The more important corollary here for me was that knowing culture and history is crucial for people who want to work with, on or around AI. One of the most damaging recommendations made in the entire session was the suggestion that we simply needed more programmers and computer scientists.

One of the most bizarre moments in the session was when a Lord asked whether there were “parts of the world we should not be working with for security reasons”. Hall nodded, assuming he was referring to China, to which the Lord emphasised he meant “Communist countries”. To her credit, Hall handled this pretty well and spoke positively about China’s work and the good experiences she’d had with the country, but the grim reality – that AI is being viewed as yet another way to divide the world up some more and screw around with people – is, while probably obvious, nevertheless extremely depressing. I did laugh at the question though.

A common theme across journalist and academic responses was the emphasis that AI is unlikely to replace humans who work in extremely social jobs. Other than a few clunking jokes about replacing accountants and lawyers (presumably forgetting that the Lords are often still employed, and many are lawyers), both panels pointed out that society is unlikely to readily swap out humans for software, and perhaps should be discouraged from doing so even if it is tempted to (Cellan-Jones and O’Connor spoke about their fears that social care might be offloaded to machines).

Tomorrow

The committee panel didn’t surprise me too much, but it did worry me. It’s encouraging that the committee seems extremely aware of the potential impact on jobs, but there were two things made clear by the session: some of the Lords have preconceptions about AI that even Elon Musk would think twice about; and the biggest concern above all else is how Britain can profit off this. That’s not to say social impact and suchlike were not considered – they were – but the overall balance of interest in the committee did not lead me to believe it was top of their list of concerns. Again, perhaps to be expected, but still sad.

Ultimately this is just one meeting of many the committee will have, and I only got to encounter these people for a couple of hours, so I clearly don’t have the full picture of what their big intentions are. The panelists were also under a lot of pressure, and so may not have been fairly represented either – hopefully you’ll read this with a pinch of salt on hand. While I don’t know what other meetings they have scheduled, I’d like to see a wider range of people being called up. Where are the Jamie Woodcocks to talk about social and economic impact; where are the Mitu Khandakers to talk about using AI for good; where are the @samims to talk about the creative impact of AI? I hope there’ll be more meetings like this, and that they won’t be left with quite as narrow a remit.

Thanks for reading and thanks if you put up with fifty stupid jokes about the livestream yesterday! More exciting news on ANGELINA is coming up real soon.
