Yes, There is a Sub-Reddit Dedicated to Preventing SkyNet

Artificial Fiction Brain by Wikipedia user Gengiskanhg (Creative Commons Licensed)

"The AI does not hate you, nor does it love you, but you are made of atoms it can use for something else" -- Eliezer Yudkowsky

The Futurology sub-Reddit recently hosted a must-read interview with Luke Muehlhauser, the CEO of the Singularity Institute. The interview is a wide-ranging discussion of artificial intelligence, technology, psychology, ethics, philosophy and the future of the human race.

Most of you GeekDads are familiar with the term "The Singularity," but for the sake of completeness, I’ll quote Wikipedia’s definition of it as the "hypothetical future emergence of greater-than-human superintelligence through technological means."

In more concrete terms, The Singularity refers to the point in our future when we create an artificial intelligence that is capable of improving its own software/hardware. Version 1.0 of this AI would quickly create Version 2.0, which would in turn create Version 3.0 and so on. If each version is, say, 20% smarter than the previous version, then we have the conditions for exponential growth in the AI's intelligence (100% human intelligence, then 120%, 144%, 173%, 207%...). Such growth would quickly lead to something beyond the comprehension of humans.
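To make that compounding concrete, here's a minimal sketch of the arithmetic (the flat 20% gain per version is purely an illustrative assumption, not a prediction):

```python
# Minimal sketch of compounding self-improvement.
# Assumes a hypothetical, flat 20% gain per version -- purely illustrative.
intelligence = 1.00  # Version 1.0 starts at baseline human level (100%)
for version in range(1, 6):
    print(f"Version {version}.0: {intelligence:.0%} of human intelligence")
    intelligence *= 1.20  # each version builds a successor 20% smarter
```

Run it and you get the same sequence as above: 100%, 120%, 144%, 173%, 207% and climbing.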

To give some perspective on that, consider the comment from Neil deGrasse Tyson in this video. Tyson points out that humans differ from chimpanzees by only about 2%, yet our intellectual capabilities completely dwarf those of our primate cousins. An artificial intelligence that is 2%, 20%, 200% or more powerful than a human brain will likely view our species much as we view chimpanzees.

Muehlhauser and the Singularity Institute are dedicated to making sure that any such emerging AI is "safe" and has a positive impact on civilization. The discussion on Reddit goes into great detail on just what that means, and how they hope to avoid any Skynet scenarios.

I’ve cherry-picked some of the choice comments below, but you really should head over and read the entire discussion yourself.

SupaFurry
"The AI does not love you, nor does it hate you, but you are made of atoms it can use for something else"

Holy mother of god. Shouldn't we be steering away from this kind of entity, perhaps?

lukeprog
Yes, indeed. That's why we need to make sure that AI safety research is outpacing AI capabilities research. See my post "The AI Problem, with Solutions."

Of course, we're hitting the pedal-to-the-metal on AI capabilities research and there are fewer than 5 full-time researchers doing serious, technical, "Friendly AI" research. Right now, humanity spends 10,000x more on lipstick research each year than it does on Friendly AI research.

kilroydacat
What is Intelligence and how do you "emulate" it?

lukeprog
See the "intelligence" section of our Singularity FAQ. The short answer is: Cognitive scientists agree that whatever allows humans to achieve goals in a wide range of environments, it functions as information-processing in the brain. But information processing can happen in many substrates, including silicon. AI programs have already surpassed human ability at hundreds of narrow skills (arithmetic, theorem proving, checkers, chess, Scrabble, Jeopardy, detecting underwater mines, running worldwide logistics for the military, etc.), and there is no reason to think that AI programs are intrinsically unable to do so for other cognitive skills such as general reasoning, scientific discovery, and technological development.


lukeprog
I probably don't know about much of the most important private "AI capabilities" research. Google, Facebook, and NSA don't brief me on what they're up to. I know about some private projects that few people know about, but I can't talk about them.
The most important work going on, I think, is AI safety research — not the philosophical work done by most people in "machine ethics" but the technical work being done at the Singularity Institute and the Future of Humanity Institute at Oxford University.


lukeprog
We'd like to avoid a war with superhuman machines, because humans would lose — and we'd lose more quickly than is depicted in, say, The Terminator. A movie like that is boring if there's no human resistance with an actual chance of winning, so they don't make movies where all humans die suddenly with no chance to resist because a worldwide AI did its own science and engineered an airborne, human-targeted super-virus with a near-perfect fatality rate.

The solution is to make sure that the first superhuman AIs are programmed with our goals, and for that we need to solve a particular set of math problems, including both the math of safety-capable AI and the math of aggregating and extrapolating human preferences.

lukeprog
I have a pretty wide probability distribution over the year for the first creation of superhuman AI, with a mode around 2060 (conditioning on no other existential catastrophes hitting us first). Many AI people predict superhuman AI sooner than this, though — including Rich Sutton, who quite literally wrote the book on reinforcement learning.

Amusing note: Some military big-shots say things like "We'll never build fully-autonomous combat AIs; we'll never take humans out of the loop." Meanwhile, the U.S. military spends millions to get roboticist Ronald Arkin and his team to research and write the book Governing Lethal Behavior in Autonomous Robots. (One of the few serious works in the field of "machine ethics", BTW).


lukeprog
*"At the point when computers outstrip human intelligence in all or most areas, won't computers then take over doing most of the interesting and meaningful work?"
*

Yes.

*"Will humans be reduced to manual labor, as that's the only role that makes economic sense?"
*

No, robots will be better than humans at manual labor, too.

*"While we will have unprecedented levels of material wealth, won't we have a severe crisis of meaning... Will the singularity foment an existential crisis for humanity?"
*

It’s a good question. The major worry is that the singularity causes an "existential crisis" in the sense that it causes a human extinction event. If we manage to do the math research required to get superhuman AIs to be working in our favor, and we "merely" have to deal with an emotional/philosophical crisis, I'll be quite relieved.

One exploration of what we could do and care about when most projects are handled by machines is (rather cheekily) called "fun theory." I'll let you read up on it.


JulianMorrison
Humans are what happens when you build an intelligence by iteratively improving an ape. We are not designed minds. We are accidental minds. We are the dumbest creature that could possibly create a civilization, because cultural improvement is so much faster than genetic improvement that as soon as we were good enough, it was already too late to get any better.

On the upside though, we have the pro-social instincts (such as fairness, compassion, and empathy) that evolution built for tribal apes. Because we have them in common, we just attach them to intelligence like they were inevitable. They are not. As far as AIs go, they will have no more and no less than the motivations programmed in.

ZankerH
Yeah well, that's just, like, your opinion, man.

fuseboy
The idea that we can state values simply (or for that matter, at all), and have them produce behavior we like, is a complete myth, a cultural hangover from stuff like the Ten Commandments. They're either so vague as to be useless, or, when followed literally, produce disaster scenarios like "euthanize everyone!"

Clear statements about ethics or morals will generally be the OUTPUT of a superhuman AI, not restrictions on its behavior.

A super-intelligent, self-improving machine that evolves goals (inevitably making them different than ours), however, is a scary prospect.

Despite the fact that many of the disaster scenarios involve precisely this, perhaps the chief benefit of such an AI project will be that it will change its own values.

By the way, if you like this sort of thing, the Futurology sub-Reddit is the place for you. The Singularity FAQ is also worth a look, especially the AI-in-a-box experiment.

And finally, Randall Munroe over at XKCD has this to say about Artificial Intelligence:

XKCD by Randall Munroe (CC Licensed)