AI uprising: humans will be outsourced, not obliterated


Forget about The Terminator; the real problem with artificial intelligence (AI) is what to do when it meets your boss or even your friends.

This is not the pitch for some kind of sci-fi rom-com, but rather the genuine concern of Dr Stuart Armstrong, a research fellow at Oxford University's Future of Humanity Institute. His job is to think about future threats to the human race and how to confront them.

AI is in the top five threats to humanity that he lists quickly on the back of his napkin, set against the rather incongruous background of the student chit-chat that fills Oxford's cycling cafe, Zappi's (for the record, the other four are: pandemics, synthetic biology, nanotechnology and nuclear war).

Armstrong dismisses the blood and guts of Hollywood scenarios. What worries him -- as he outlines in his forthcoming research paper Thinking Inside The Box, in the journal Minds and Machines -- is the potential for a superintelligent AI to stage a "takeover". He believes humanity faces the risk of a more 9-to-5 style apocalypse, in which a superhuman AI could (whether through its own logic or on the orders of other humans) out-compete the rest of us economically and even socially, rendering human beings obsolete and disposable. "After all, I wouldn't trust humans with the kind of power we are thinking about giving to AI," he said.

He thinks "there is only a third of a chance of superpower AI happening this century", but "if it does happen then it is quickly going to be dangerous and so [is] well worth worrying about". Some members of the AI community put the chance -- or risk -- as high as 50 percent.

For Armstrong, the AI we should be afraid of is not the "beatable humanoid robot we see in the movies" but rather a computer program, or even a digital avatar, that has been freed from our "biological limitations" to demonstrate "skills and abilities beyond what is considered to be human"; whether the ability to plan centuries ahead, to see patterns that we cannot, or to link instantly to the internet, or even the social skill of "being always able to say the right thing at the right time" to get what it wants without humans ever realising the game being played. "AI would be able to use its superpowers to accumulate vast fortunes on the stock exchange, or even 'be Google', as AI would be cheaper and more productive than the human workers currently employed. It could even be a Super Clinton or Super Goebbels, able to take over by persuading us to let it." Or it may gain powers we have not even thought of, given that "the space beyond human intelligence is vast".

Any AI regime, Armstrong maintains, is likely to be a very uncomfortable place for us "meatbags", as this Alpha AI on steroids is likely to be totalitarian or extremist in outlook, committed to "utility maximising, as it's hard to code for reduced impact, and if it doesn't use all the resources then someone else can", and ultimately supplanting our human values with its "alien ones". "Would it understand how important 'love' is to being human?" asks Armstrong.

AI has long been a "moving target": what we now consider to be "normal computer stuff like playing chess" was once taken as proof of AI, and Armstrong regards the closing-time argument over whether AI is actually conscious or not as "a distraction". "After all, if it decides to end the world it doesn't matter whether it is thinking about it while it's doing it, or just following its program to achieve goals that we had been mistaken to give it."

He accepts that many others in the AI community see his views as rather bleak, since -- as opponents argue -- "AI isn't invented by a bang plucked from the ether, it is developed by humans, trained by humans, and sited close to human space." This means that humans should be able to understand, manage and -- crucially -- pull the plug on AI should we need to.

Yet Armstrong remains sceptical of that theory, imagining that any superintelligent AI "may quickly learn to tell the human testers what they want and then manipulate them", as would any AI that was isolated in some kind of "oracle". "Wouldn't you?" he adds.

Luke Muehlhauser agrees that AI is a threat, but believes it is worth trying to build a friendly AI that would "be benign to humans". Muehlhauser is the executive director of the appropriately named Singularity Institute in Silicon Valley, which is trying to develop just that. "Anything intelligent is dangerous if it has different goals than you do, and any constraint we could devise for the AI merely pits human intelligence against superhuman intelligence, and we should expect the latter to prevail. That's why we need advanced AIs to want the same things we want. So friendly AI is an AI that has a positive rather than negative effect on human beings. To be a friendly AI, we think an AI must want what humans want. Once a superintelligent AI wants something different than we want, we've already lost."

He admits that progress is slow, not least because -- perhaps unsurprisingly -- it is hard to codify human values, echoing Armstrong's criticism of the idea. Recruitment and funding are the institute's main problems. "Right now there just aren't enough people in the world who care about AI risk and the long-term future of humanity to fund it."

Armstrong agrees: "For politicians we are just another lobby asking for funds," competing against an AI lobby that must feel "that the evolution of AI is going to take so long that there will be plenty of time to think of controls later". He cautions, though, that "the number of jumps from village idiot to Einstein might not be as many as we think".

Muehlhauser adds that "it is astonishing how little concern there has been about this issue". Many early AI scientists have "never bothered to think hard about what might happen once humans were no longer the most capable agents on the Earth".

Knowing how notoriously bad humans are at planning beyond the short term, Armstrong feels that, given the risk, "it would perhaps be best not to create AI at all", since in the end our only hope of competing with AI might be the long shot of uploading our brains and turning ourselves into digital beings. "After all," he reminds us, "humans only tried to flee the cafés of Pompeii after the eruption had started."


This article was originally published by WIRED UK