Artificial intelligence: it’s nothing like we expected

This article was taken from the February 2011 issue of Wired magazine.

Artificial intelligence is here. In fact, it's all around us.

But it's nothing like we expected.

Diapers.com warehouses are a bit of a jumble. Boxes of dummies sit above crates of romper suits, which rest next to cartons of baby food. In a seeming abdication of logic, similar items are placed across the room from one another. A person trying to figure out how the products were shelved could well conclude that no form of intelligence had a hand in determining what went where.

But the warehouses, located across the US, aren't meant to be understood by humans; they were built for bots. Every day, hundreds of robots course nimbly through the aisles, delivering items to flesh-and-blood packers on the periphery. Instead of organising the warehouse as a human might -- by placing like products next to one another, for instance -- Diapers.com's robots stick the items in various aisles throughout the facility. To fill an order, the first available robot simply finds the closest requested item. The storeroom is an ever-shifting mass that adjusts to constantly changing data, like the size and popularity of merchandise, the geography of the warehouse, and the location of each robot. Set up by Kiva Systems, which has outfitted similar facilities for Gap, Staples and Office Depot, the system can deliver items to packers at the rate of one every six seconds.
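The dispatch rule described above -- the first available robot fetches the nearest shelf stocking the requested item -- can be sketched in a few lines. This is a toy illustration, not Kiva's actual software; the shelf layout and SKU names are invented.

```python
import math

def closest_item(order_sku, robot_pos, shelves):
    """Among all shelf positions stocking the SKU, pick the one
    nearest the robot (straight-line distance)."""
    candidates = [pos for pos, skus in shelves.items() if order_sku in skus]
    return min(candidates, key=lambda pos: math.dist(pos, robot_pos))

# Items scattered across the floor; the same SKU may live in several spots.
shelves = {
    (0, 0): {"romper", "dummy"},
    (9, 4): {"baby_food", "dummy"},
    (2, 7): {"romper"},
}
print(closest_item("dummy", (8, 5), shelves))  # (9, 4) is nearer than (0, 0)
```

Because each robot re-runs this query against live data, the "jumbled" layout never needs a human-readable scheme: proximity, not category, is the organising principle.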

The Kiva bots may not seem very smart. But they represent a new forefront in the field of artificial intelligence. Today's AI doesn't try to recreate the brain. Instead, it uses machine learning, massive datasets, sophisticated sensors and clever algorithms to master discrete tasks. Google's search engine uses AI to interpret cryptic human queries. Credit-card companies use it to track fraud. And the financial system uses it to handle billions of trades every day (with only the occasional meltdown).

This explosion is the ironic pay-off of the seemingly fruitless decades-long quest to emulate human intelligence. That goal proved so elusive that some scientists lost heart and many others lost funding. There was talk of an AI winter. But even as the traditional dream of AI was freezing over, a new one was being born: machines built to accomplish specific tasks in ways that people simply never could. At first, there were just a few green shoots pushing up through the frosty ground. But now we're in full bloom. Welcome to the AI summer.

Today's AI bears little resemblance to its initial conception.

The field's trailblazers in the 50s and 60s believed success lay in mimicking the logic-based reasoning that human brains were thought to use. In 1957, the AI crowd confidently predicted that machines would soon be able to replicate all kinds of human mental achievements. But that turned out to be wildly unachievable, in part because we still don't really understand how the brain works, much less how to recreate it.

So during the 80s, graduate students began to focus on the kinds of skills for which computers were well suited, and found they could build something like intelligence from groups of systems that operated according to their own kind of reasoning. "The big surprise is that intelligence isn't a unitary thing," says Danny Hillis, who cofounded Thinking Machines, a company that, from 1982 to 1994, made massively parallel supercomputers. "What we've learned is that it's all kinds of different behaviours."

By using probability-based algorithms to derive meaning from huge amounts of data, researchers discovered that they didn't need to teach a computer how to accomplish a task; they could just show it what people did and let the machine figure out how to copy that behaviour under similar circumstances. They used genetic algorithms, which comb randomly generated chunks of code, skim the highest-performing ones, and splice them together to spawn new code. As the process is repeated, the evolved programs become amazingly effective, often comparable to the output of the most experienced coders.
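The generate-score-splice loop of a genetic algorithm can be sketched compactly. For simplicity this toy evolves bit strings rather than chunks of program code, but the moves are the ones described above: rank a random population by fitness, keep the best performers, and splice pairs of survivors (with occasional mutation) to spawn the next generation.

```python
import random
random.seed(0)  # fixed seed so the run is repeatable

def evolve(fitness, length=20, pop_size=30, generations=40):
    """Toy genetic algorithm over bit strings."""
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)        # skim the highest-performing
        survivors = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, length)      # splice two parents together
            child = a[:cut] + b[cut:]
            if random.random() < 0.1:              # occasional mutation
                i = random.randrange(length)
                child[i] ^= 1
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve(sum)  # fitness here is simply the count of 1-bits
print(sum(best))
```

Swap in a fitness function that runs and scores candidate programs, and the same loop becomes the code-evolving process the researchers used.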

MIT's Rodney Brooks also took a biologically inspired approach to robotics. His lab programmed six-legged buglike creatures by breaking down insect behaviour into a series of simple commands -- for instance, "If you run into an obstacle, lift your legs higher."

When the programmers got the rules right, the gizmos could figure out for themselves how to navigate even complicated terrain.
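This behaviour-based approach amounts to a prioritised stack of condition-action rules, checked once per tick -- a toy sketch, not Brooks's actual code, with the sensor names invented for illustration.

```python
def step(sensors):
    """Pick one action per tick from a prioritised list of simple rules."""
    rules = [
        (lambda s: s["obstacle"], "lift legs higher"),
        (lambda s: s["tilted"],   "shift weight to low side"),
        (lambda s: True,          "walk forward"),   # default behaviour
    ]
    for condition, action in rules:
        if condition(sensors):
            return action

print(step({"obstacle": True, "tilted": False}))   # lift legs higher
print(step({"obstacle": False, "tilted": False}))  # walk forward
```

No rule knows about terrain as a whole; competent navigation emerges from many such reflexes firing in sequence.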

The fruits of the AI revolution are now all around us. Once researchers were freed from the burden of building a whole mind, they could construct a rich bestiary of digital fauna, which few would dispute possess something approaching intelligence. Google cofounder Larry Page says: "If you told somebody in 1978, 'You're going to have this machine, and you'll be able to type a few words and instantly get all of the world's knowledge on that topic,' they would probably consider that to be AI. That seems routine now, but it's a really big deal."

Even formerly mechanical processes such as driving a car have become collaborations with AI systems. "At first it was the automatic braking system," Brooks says. "Now you're starting to get automatic parking and lane-changing." Indeed, Google has been developing and testing cars that drive themselves with only minimal human involvement; by October 2010, they had already covered 225,000 kilometres.

In short, we are engaged in a permanent dance with machines, locked in an increasingly dependent embrace. And yet, because the bots' behaviour isn't based on human thought processes, we are often powerless to explain their actions. Wolfram Alpha, the website created by scientist Stephen Wolfram, can solve many mathematical problems. It also seems to display how those answers are derived. But the logical steps that humans see are completely different from the website's actual calculations. "It doesn't do any of that reasoning," Wolfram says. "Those steps are pure fake. We thought, how can we explain this to one of those humans out there?"

The lesson is that our computers sometimes have to humour us, or they will freak us out. Eric Horvitz -- now a top Microsoft researcher and a former president of the US Association for the Advancement of Artificial Intelligence -- helped build an AI system in the 80s to aid pathologists in their studies, analysing each result and suggesting the next test to perform. There was just one problem -- it provided the answers too quickly. "We found that people trusted it more if we added a delay loop with a flashing light, as though it were huffing and puffing to come up with an answer," Horvitz says.

But we must learn to adapt. AI is so crucial to some systems (such as the financial infrastructure) that getting rid of it would be a lot harder than simply disconnecting HAL 9000's modules. "In some sense, you can argue that the science-fiction scenario is already starting to happen," Danny Hillis says. "The computers are in control, and we just live in their world." Wolfram says this conundrum will intensify as AI takes on new tasks, spinning further out of human comprehension. "Do you regulate an underlying algorithm?" he asks. "That's crazy, because you can't foresee in most cases what consequences that algorithm will have."

In earlier days, humanists feared the ramifications of thinking machines. Now the machines are embedded in our lives, and those fears seem irrelevant. "I used to have fights about it," Brooks says. "I've stopped having fights. I'm just trying to win."

MUSIC

An app that jams with you

A good session player is hard to find, but UJAM is always ready to rock. The web app doubles as a studio band and a recording studio. It analyses a melody and then produces sophisticated harmonies, bass lines, drum tracks and more.

Before UJAM's AI can lay down accompaniment, it must figure out which notes the user is singing or playing. Once it recognises them, the algorithm searches for chords to match the tune, using a mix of statistical techniques and musical rules. The rules-based module then uses its knowledge of Western music to narrow the chord options to a single selection.
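The two-stage pipeline -- score candidate chords statistically against the detected notes, then let a rules module settle ties -- can be sketched as follows. This is a simplified illustration, not UJAM's algorithm; the chord table (pitch classes, where C = 0) and the tie-breaking rule are assumptions.

```python
# Candidate chords as pitch-class sets in a C major context.
CHORDS = {
    "C":  {0, 4, 7},
    "F":  {5, 9, 0},
    "G":  {7, 11, 2},
    "Am": {9, 0, 4},
}

def pick_chord(melody_notes):
    """Statistical step: score each chord by overlap with the sung notes.
    Rule step: break ties in favour of the primary chords C, F, G."""
    overlap = {name: len(tones & set(melody_notes)) for name, tones in CHORDS.items()}
    best = max(overlap.values())
    tied = [name for name, score in overlap.items() if score == best]
    for preferred in ("C", "F", "G"):   # Western-harmony preference rule
        if preferred in tied:
            return preferred
    return tied[0]

print(pick_chord([0, 4, 7]))  # a sung C major arpeggio → "C"
```

A real system would weight the overlap by note duration and voice-leading context, but the narrowing-to-one-chord structure is the same.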

The service is still in alpha, but it has attracted 2,500 testers who want to use AI to explore their musical creativity. As UJAM gathers more data on users' preferences and musical tastes, this info is fed back into the system, improving its performance.

In this respect at least, UJAM is like a human -- it gets better with practice. Jon Stokes

FRAUD DETECTION

The neural nets are watching

Credit-card fraud costs UK merchants and credit-card companies more than £500 million a year. That figure would be much higher without the use of computer surveillance systems to monitor every transaction. One of the most proven antifraud systems is FICO's Falcon Fraud Manager, which keeps tabs on more than four billion transactions a month worldwide and uses lightning-fast neural networks to scan for suspicious purchase patterns. Neural networks were originally designed to mimic human grey matter. Over time, however, the technology has become a basic building block of many computer systems. The networks typically consist of layers of interconnected "neurons", each of which produces a signal only when its input exceeds a certain threshold. Though the individual neurons are simple, the net as a whole can learn to recognise complex patterns.
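The "neuron fires only above a threshold" behaviour, and why layering such units matters, can be shown in miniature. The weights below are hand-picked for illustration (a trained fraud net would learn millions of them from transaction data): two threshold units feed a third, and together they compute an XOR-like pattern that no single neuron can represent.

```python
def neuron(inputs, weights, threshold):
    """A unit outputs 1 only when its weighted input exceeds a threshold."""
    return 1 if sum(i * w for i, w in zip(inputs, weights)) > threshold else 0

def tiny_net(x):
    # Layer 1: two threshold units reading the same two inputs.
    h = [neuron(x, [1, 1], 0.5), neuron(x, [1, 1], 1.5)]
    # Layer 2: combining them yields XOR -- fire on exactly one active input.
    return neuron(h, [1, -2], 0.5)

for x in ([0, 0], [0, 1], [1, 0], [1, 1]):
    print(x, tiny_net(x))
```

The fraud-detection case is the same idea at scale: individually trivial units, wired in layers, jointly recognising purchase patterns too subtle to write down as explicit rules.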

The Falcon system specialises in detecting things a human might never notice. For example, if you use your card to buy a tank of petrol and then go directly to a jewellery store, your account will almost surely be flagged, especially if you're not a person who buys a lot of bling. The reason? Over years of correlating variables, testing and learning, the system has noticed that a criminal's first stop after stealing a credit card is often a petrol station. If that transaction goes through, the thief knows the card hasn't yet been reported as stolen and heads off on a spending spree -- often at some high-priced retailer.

JS

INTERNET SEARCH

Google's eyes are everywhere

A human brain gets visual information from two eyes. Google's artificial intelligence gets it from billions -- through the camera lenses of smartphones. The company collects images from users of Google Goggles, a mobile app that lets you run web searches by taking pictures. Snap a barcode and Goggles will shop for the item's best price. Take a picture of a book and it will link you to, say, a Wikipedia page about the author. Photograph the Eiffel Tower and it will give you historical background.

At the core of the service is Google's Superroot Server, software that co-ordinates the efforts of multiple object-specific recognition engines. There's one for text, one for landmarks, one for corporate logos, and so on. When an image arrives, Superroot sends it to each of these backend engines, which in turn use a variety of visual-recognition techniques to identify potential matches and compute confidence scores. Superroot then applies its own algorithm to decide which results, if any, to report back to the user.
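The fan-out-and-arbitrate pattern the article describes can be sketched as a small dispatcher. Everything here is illustrative -- the stub engines, labels and confidence numbers are made up, and this is not Google's Superroot code.

```python
def recognise(image, engines, min_confidence=0.6):
    """Send the image to every backend engine, collect (label, score)
    guesses, and report only the most confident match above a cut-off."""
    guesses = [result for engine in engines
               if (result := engine(image)) is not None]
    if not guesses:
        return None
    label, score = max(guesses, key=lambda g: g[1])
    return label if score >= min_confidence else None

# Stub engines standing in for the landmark and barcode recognisers.
engines = [
    lambda img: ("Eiffel Tower", 0.92) if "tower" in img else None,
    lambda img: ("barcode", 0.40) if "stripes" in img else None,
]
print(recognise({"tower"}, engines))  # Eiffel Tower
```

The modularity the article highlights falls out of this design: adding a plant recogniser means appending one more engine to the list, with no change to the arbitration logic.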

Because of its modular design, Goggles can be expanded to recognise practically anything -- and Google is quickly adding new categories. Next up: identifying plants. JS

MEDICINE

Smart bot with X-ray specs

To the human eye, an X-ray is a murky, lo-res puzzle. But to a machine, it's a dense data field. No wonder AI is so useful in medicine. Bartron Medical Imaging's software aggregates hi-res image data from X-rays, MRIs, ultrasounds and CT scans, then groups together biological structures that share hard-to-detect similarities.

At the heart of the technology is an algorithm originally developed by Nasa for analysing satellite images, grouping pixels that share certain properties, even if they're far apart or in a different image altogether. This way, hidden features or diffuse structures within a region of tissue can be identified. Puzzle solved. JS
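Grouping pixels by shared properties rather than by position can be illustrated with a toy version of the idea -- clustering on brightness alone, ignoring location entirely. This is a stand-in for the Nasa-derived algorithm, not a reconstruction of it.

```python
def group_by_intensity(pixels, tolerance=10):
    """Cluster pixels whose brightness values are within `tolerance`
    of their neighbour in sorted order, regardless of where they sit."""
    clusters = []
    for pos, value in sorted(pixels.items(), key=lambda kv: kv[1]):
        if clusters and value - clusters[-1][-1][1] <= tolerance:
            clusters[-1].append((pos, value))
        else:
            clusters.append([(pos, value)])
    return clusters

# Two bright pixels far apart land in the same cluster.
pixels = {(0, 0): 200, (90, 80): 205, (5, 5): 40, (6, 5): 45}
for cluster in group_by_intensity(pixels):
    print([pos for pos, _ in cluster])
```

In the medical setting the "property" is a richer feature vector than raw brightness, but the principle is the same: tissue that looks alike gets grouped, even when it is scattered across the scan.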

TRANSPORT

All aboard the algorithm

Model trains are easy to keep track of. But building a model to run real trains is a complex undertaking. So when the US Norfolk Southern Railway decided to install a smarter system to handle its sprawling operation, it brought in a team of algorithm experts from Princeton University.

What they got was the Princeton Locomotive and Shop Management System (Plasma), which used an algorithmic strategy to analyse Norfolk Southern's operations. Plasma tracks thousands of variables, predicting the impact of changes in fleet size, maintenance, transit time and other factors on real-world operations. The key breakthrough was making the model mimic the complex behaviour of the company's despatch centre in Atlanta. "Think of the despatch centre as one big, collective brain. How do you get a computer to behave like that?" asks Warren Powell, a professor at Princeton's Operations Research and Financial Engineering department.

The model that Powell and his team came up with was a kind of AI hive mind. Plasma uses a technology known as approximate dynamic programming to examine mountains of historical data. The system then models the despatch centre's collective human decision-making and even suggests improvements.
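Approximate dynamic programming's core move -- maintaining a smoothed value estimate per decision and nudging it towards each simulated outcome -- can be shown in miniature. The simulator below is entirely hypothetical (invented demand, revenue and cost numbers), not Plasma's model.

```python
import random
random.seed(1)  # fixed seed for a repeatable run

def simulate_day(fleet_size):
    """Hypothetical stand-in for the railway simulator: revenue from
    freight actually moved, minus a per-locomotive holding cost."""
    demand = random.randint(20, 40)
    moved = min(fleet_size, demand)
    return moved * 10 - fleet_size * 3

def estimate_values(sizes, days=3000, alpha=0.05):
    """Keep a smoothed value estimate per fleet size and nudge it
    towards each simulated outcome (the basic ADP update)."""
    value = {n: 0.0 for n in sizes}
    for _ in range(days):
        n = random.choice(sizes)
        value[n] += alpha * (simulate_day(n) - value[n])
    return value

values = estimate_values([20, 30, 40])
print(max(values, key=values.get))  # fleet size with the best estimated value
```

Plasma's real state space is vastly larger, which is exactly why the value function must be approximated rather than tabulated -- but the learn-from-simulated-experience loop is the same.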

For now, Plasma is helping Norfolk Southern decide what its fleet size should be -- humans are still in control of running the trains. At least we're still good for something.

This article was originally published by WIRED UK