SupaFurry
"The AI does not love you, nor does it hate you, but you are made of atoms it can use for something else"
Holy mother of god. Shouldn't we be steering away from this kind of entity, perhaps?
lukeprog
Yes, indeed. That's why we need to make sure that AI safety research is outpacing AI capabilities research. See my post "The AI Problem, with Solutions."
Of course, we've got the pedal to the metal on AI capabilities research, and there are fewer than five full-time researchers doing serious, technical "Friendly AI" research. Right now, humanity spends 10,000x more each year on lipstick research than it does on Friendly AI research.
kilroydacat
What is intelligence, and how do you "emulate" it?
lukeprog
See the "intelligence" section of our Singularity FAQ. The short answer is: Cognitive scientists agree that whatever allows humans to achieve goals in a wide range of environments, it functions as information-processing in the brain. But information processing can happen in many substrates, including silicon. AI programs have already surpassed human ability at hundreds of narrow skills (arithmetic, theorem proving, checkers, chess, Scrabble, Jeopardy, detecting underwater mines, running worldwide logistics for the military, etc.), and there is no reason to think that AI programs are intrinsically unable to do so for other cognitive skills such as general reasoning, scientific discovery, and technological development.
lukeprog
I probably don't know about much of the most important private "AI capabilities" research. Google, Facebook, and NSA don't brief me on what they're up to. I know about some private projects that few people know about, but I can't talk about them.
The most important work going on, I think, is AI safety research — not the philosophical work done by most people in "machine ethics" but the technical work being done at the Singularity Institute and the Future of Humanity Institute at Oxford University.
lukeprog
We'd like to avoid a war with superhuman machines, because humans would lose — and we'd lose more quickly than is depicted in, say, The Terminator. A movie like that is boring if there's no human resistance with an actual chance of winning, so they don't make movies where all humans die suddenly with no chance to resist because a worldwide AI did its own science and engineered an airborne, human-targeted super-virus with a near-perfect fatality rate.
The solution is to make sure that the first superhuman AIs are programmed with our goals, and for that we need to solve a particular set of math problems, including both the math of safety-capable AI and the math of aggregating and extrapolating human preferences.
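To give a feel for the "aggregation" half of that problem, here is a deliberately crude toy sketch (not a real proposal, and it assumes each person's preferences can already be scored as numeric utilities over a handful of outcomes, which is most of what makes the real problem hard):

```python
# Toy illustration only: "aggregate preferences" by summing each person's
# normalized utilities over a small, fixed set of outcomes. The real research
# problem includes inconsistent, incomplete preferences and extrapolating what
# people would want on reflection, none of which this sketch touches.

def normalize(utilities):
    """Rescale one person's utilities to [0, 1] so no single voter dominates."""
    lo, hi = min(utilities.values()), max(utilities.values())
    span = (hi - lo) or 1.0
    return {outcome: (u - lo) / span for outcome, u in utilities.items()}

def aggregate(people):
    """Sum normalized utilities across people and rank outcomes."""
    totals = {}
    for prefs in people:
        for outcome, u in normalize(prefs).items():
            totals[outcome] = totals.get(outcome, 0.0) + u
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

people = [
    {"A": 10, "B": 2, "C": 5},
    {"A": 1,  "B": 9, "C": 4},
    {"A": 6,  "B": 6, "C": 8},
]
print(aggregate(people))  # [('C', 1.75), ('A', 1.0), ('B', 1.0)]
```

Even this trivial scheme smuggles in contestable choices (how to normalize, whether to weight people equally), which is part of why the math matters.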
lukeprog
I have a pretty wide probability distribution over the year for the first creation of superhuman AI, with a mode around 2060 (conditioning on no other existential catastrophes hitting us first). Many AI people predict superhuman AI sooner than this, though — including Rich Sutton, who quite literally wrote the book on reinforcement learning.
Amusing note: Some military big-shots say things like "We'll never build fully-autonomous combat AIs; we'll never take humans out of the loop." Meanwhile, the U.S. military spends millions to get roboticist Ronald Arkin and his team to research and write the book Governing Lethal Behavior in Autonomous Robots. (One of the few serious works in the field of "machine ethics", BTW).
lukeprog
*"At the point when computers outstrip human intelligence in all or most areas, won't computers then take over doing most of the interesting and meaningful work?"
*
Yes.
*"Will humans be reduced to manual labor, as that's the only role that makes economic sense?"
*
No, robots will be better than humans at manual labor, too.
*"While we will have unprecedented levels of material wealth, won't we have a severe crisis of meaning... Will the singularity foment an existential crisis for humanity?"
*
It's a good question. The major worry is that the singularity causes an "existential crisis" in the sense that it causes a human extinction event. If we manage to do the math research required to get superhuman AIs working in our favor, and we "merely" have to deal with an emotional/philosophical crisis, I'll be quite relieved.
One exploration of what we could do and care about when most projects are handled by machines is (rather cheekily) called "fun theory." I'll let you read up on it.
JulianMorrison
Humans are what happens when you build an intelligence by iteratively improving an ape. We are not designed minds. We are accidental minds. We are the dumbest creature that could possibly create a civilization, because cultural improvement is so much faster than genetic improvement that as soon as we were good enough, it was already too late to get any better.
On the upside, though, we have the pro-social instincts (such as fairness, compassion, and empathy) that evolution built for tribal apes. Because we all have them in common, we assume they come attached to intelligence, as if they were inevitable. They are not. An AI will have no more and no fewer motivations than the ones programmed into it.
ZankerH
Yeah well, that's just, like, your opinion, man.
fuseboy
The idea that we can state values simply (or, for that matter, at all) and have them produce behavior we like is a complete myth, a cultural hangover from stuff like the Ten Commandments. They are either so vague as to be useless, or, when followed literally, they produce disaster scenarios like "euthanize everyone!"
Clear statements about ethics or morals will generally be the OUTPUT of a superhuman AI, not restrictions on its behavior.
A super-intelligent, self-improving machine that evolves goals (inevitably making them different than ours), however, is a scary prospect.
Despite the fact that many of the disaster scenarios involve precisely this, perhaps the chief benefit of such an AI project will be that it will change its own values.