It seems you can’t open a newspaper without Elon Musk predicting that artificial intelligence (AI) needs regulating – before it starts World War III. And if it’s not Elon, it’s Vladimir telling us AI will rule the world.
I’m starting to feel like a very dangerous guy. That’s because I’m a professor of artificial intelligence.
There was a time, 20 years back, when people just smiled at me when I told them I was working on building intelligent machines. And I knew that smile was one of sympathy. Back then, AI was simply so hopeless.
But now, as AI begins to make some progress, people seem to live in fear of the next thing that will emerge from AI labs across the world.
Elon is, in fact, right. AI does need regulating. But he’s also almost surely wrong – AI isn’t going to start World War III anytime soon. Or rule the world. Or end humanity.
AI needs regulating because the big tech companies have got too big for their own good. And as with every other industry that has grown too big – the banks, the oil companies, the telecoms firms – regulation is needed to ensure the public good. To ensure that we all benefit, and not just the tech elite.
We’re beginning to see the corrosive effects of Facebook’s algorithms on political debate, of Amazon’s dominance of the retail sector, and of Google’s monopoly on search. And it’s hard to know where to begin with a company like Uber. There’s just so much to criticise.
However, the problems today are caused not by super-smart AI but by stupid AI. We’re letting algorithms make decisions that impact on society, and these algorithms are not very smart. Joshua Brown discovered this to his cost last year when he became the first person killed by his autonomous car. In fact, a smarter car might have seen the truck turning across the road and saved his life.
There’s a very seductive and dangerous idea that is getting people like Elon all worked up: the idea of the "technological singularity". At some point, we’ll build machines sufficiently smart that they’ll be able to re-design themselves to be even smarter. And these smarter machines will then re-design themselves again.
This will signify a tipping point, when machine intelligence snowballs away. Before we know it, we’ll no longer be the smartest creatures on the planet. And if we’re not careful, these machines will use their superior intelligence to take over the planet. Just as we used our intelligence to take over the planet from the apes, the dolphins and the dinosaurs before us.
Now, the first thing you need to know about the singularity is that it is an idea mostly believed by people not working in artificial intelligence. People like the philosopher Nick Bostrom, and the futurist and inventor Ray Kurzweil.
Most people working in AI, like me, have a healthy scepticism about the idea of the singularity. We know how hard it is to get even a little intelligence into a machine, let alone enough to achieve recursive self-improvement.
There are many technical reasons why the singularity might never happen. We might simply run into some fundamental limits. Every other field of science has fundamental limits. You can’t, for example, accelerate past the speed of light. Perhaps there are some fundamental limits to how smart you can be?
Or perhaps we run into some engineering limits. Did you know that Moore’s Law is officially dead? Intel is no longer looking to double transistor count every 18 months.
But singularity or not, machines have no consciousness, no sentience. They have no desires or goals other than the ones that we give them.
AlphaGo isn’t going to wake up tomorrow and decide humans are useless at Go, and instead opt to win some money at online poker. And it is certainly not going to wake up and decide to take over the planet. It’s not in its code.
All AlphaGo will ever do is maximise one number: its estimate for the probability it will win the current game of Go. Indeed, it doesn’t even know that it is playing Go.
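To make that concrete, here is a minimal sketch of what “maximising one number” looks like: an agent that simply picks whichever legal move its model scores as most likely to lead to a win. This is purely illustrative, not AlphaGo’s actual code (which uses deep networks and Monte Carlo tree search to estimate the same quantity), and the names board, legal_moves and win_probability are hypothetical.

```python
# Purely illustrative sketch of an agent that "maximises one number".
# The names (Board, legal_moves, win_probability) are hypothetical;
# this is not AlphaGo's actual implementation.

def choose_move(board, legal_moves, win_probability):
    """Return the legal move whose resulting position the model
    scores as most likely to lead to a win."""
    return max(
        legal_moves,
        key=lambda move: win_probability(board.play(move)),
    )

# Toy usage: a "game" where positions are just numbers and the
# model simply prefers larger ones.
class Board:
    def __init__(self, value=0):
        self.value = value

    def play(self, move):
        return Board(self.value + move)

best = choose_move(Board(), legal_moves=[1, 2, 3],
                   win_probability=lambda b: b.value / 10)
print(best)  # picks 3: the move with the highest estimated score
```

Nothing in that loop knows it is playing a game, wants to keep existing, or could decide to do something else. It just returns a maximum.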
So, we don’t have to fear that the machines are going to take over anytime soon. But we do have to worry about the impact even stupid AI is starting to have on our lives. It will widen inequality. It will put some people out of work. It will corrode political debate. Even stupid AI can be used by the military to transform warfare for the worse.
So, Elon, stop worrying about World War III and start worrying about what Tesla’s autonomous cars will do to the livelihood of taxi drivers.
And don’t just take my word for it. A recent survey of 50 Nobel Laureates ranked climate change, population rise, nuclear war, disease, selfishness, ignorance, terrorism, fundamentalism, and Trump as bigger threats to humanity than AI.
Toby Walsh is professor of artificial intelligence at the University of New South Wales and the author of "Android Dreams: The Past, Present and Future of AI" (Hurst, £16.99)
This article was originally published by WIRED UK