This article was taken from the May 2011 issue of Wired magazine.
Throughout the 60s, pioneers in artificial intelligence worked late nights trying to build simple robotic programs capable of finding, fetching and stacking small wooden blocks in patterns. It was one of those apparently simple problems that turn out to be exceptionally difficult, and it led AI scientists to think: perhaps the robot could solve the problem by distributing the work among specialised subagents -- small computer programs that each bite off a piece of the problem. One computer program could be in charge of finding, another could fetch, another could solve stacking. This idea of subagents did not solve the problem entirely -- but it brought into focus a new idea about the working of biological brains: the possibility that our minds may be a vast collection of interconnected subagents that are themselves mindless. AI pioneer Marvin Minsky wrote, "Each mental agent by itself can only do some simple thing that needs no mind or thought at all. Yet when we join these agents in societies -- in certain very special ways -- this leads to intelligence." Within this framework, thousands of little minds are better than a single large one.
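To make the society-of-mind picture concrete, here is a minimal sketch, in Python, of a toy blocks world. Everything in it is illustrative: the subagents find_block, fetch and stack are hypothetical stand-ins for Minsky's mindless agents, each doing one simple thing, with any appearance of intelligence coming only from the way they are joined together.

```python
# A minimal, hypothetical sketch of Minsky-style subagents in a toy blocks
# world. Each subagent is deliberately "mindless": it does one simple thing.

def find_block(world, colour):
    """Finder: scan the world for the first free block of a given colour."""
    for block in world:
        if block["colour"] == colour and not block["held"]:
            return block
    return None

def fetch(block):
    """Fetcher: pick up a block that the finder located."""
    block["held"] = True
    return block

def stack(tower, block):
    """Stacker: place a held block on top of the tower."""
    block["held"] = False
    tower.append(block)

# Joining the agents "in certain very special ways": here, a fixed pipeline.
world = [{"colour": c, "held": False} for c in ("red", "green", "blue")]
tower = []
for colour in ("blue", "red"):
    block = find_block(world, colour)
    if block is not None:
        stack(tower, fetch(block))

print([b["colour"] for b in tower])  # ['blue', 'red']
```

Note how the division of labour works here: no subagent knows what a tower is, and none ever disagrees with another about what to do next.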
The society-of-mind framework was a breakthrough, but, despite initial excitement, a collection of experts with divided labour has never yielded the properties of the human brain. It is still the case that our smartest robots are less intelligent than a three-year-old child. Why? I suggest the missing factor is competition among experts who all believe they know the right way to solve the problem. In a factory, each worker is an expert at one small task. In contrast, parties in a parliament hold differing opinions about the same issues. Brains are like parliaments. They are built of multiple, overlapping experts who compete over how best to proceed.
This is why you sometimes find yourself arguing with yourself -- a seemingly illogical feat that our current computers do not attempt.
The human brain runs on conflict.
When someone offers you chocolate cake, you are presented with a dilemma: some parts of your brain have evolved to crave sugar, while others care about potential consequences, such as a bulging belly. Part of you wants the cake and part of you tries to muster the will to refuse it. The final vote of your inner parliament determines which party controls your reaction. Because of these internal multitudes, biological creatures are conflicted, a term that could not be sensibly applied to an entity controlled by a single program. Your car cannot be conflicted about which way to turn: it has one steering wheel commanded by one driver, and it follows directions without complaint. Brains, on the other hand, can be of two minds, and often many more. There are several little sets of hands on the steering wheel of our behaviour.
Consider this lab experiment: if you put both food and an electric shock at the end of a pathway, a rat will pause a certain distance from the end. It begins to approach but withdraws when it receives a shock; it begins to withdraw but finds the courage to approach again; and so on. It oscillates, conflicted. If the rat is connected to a newton meter (a simple force gauge), you can measure the force with which it advances towards the food and retreats from the electric shock. The rat pauses at the point where the two forces are equal, where the push matches the pull. Competing factions typically share the same goal but often have different ways of going about it. Just as Labour and Tory MPs both love their country but have different strategies for steering it, so the brain has competing factions that all believe they know the right way to solve problems.
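That balance point can be captured in a toy model. The sketch below assumes, following Neal Miller's classic gradient account of approach-avoidance conflict, that both the pull of the food and the push of the shock weaken linearly with distance, with the avoidance gradient falling off more steeply; the specific numbers are invented purely for illustration.

```python
# A toy approach-avoidance conflict, with made-up linear gradients.
# The avoidance gradient is steeper: the shock dominates up close,
# the food dominates from afar, and the rat pauses where they cross.

def approach_pull(d):
    """Pull towards the food, in arbitrary force units, at distance d."""
    return max(0.0, 10.0 - 1.0 * d)

def avoidance_push(d):
    """Push away from the shock; stronger near the goal, falls off faster."""
    return max(0.0, 16.0 - 2.0 * d)

# The rat settles where push equals pull: 10 - d = 16 - 2d, so d = 6.
for d in [x * 0.5 for x in range(0, 21)]:
    net = approach_pull(d) - avoidance_push(d)
    if abs(net) < 1e-9:
        print(f"equilibrium at distance {d}: pull = push = {approach_pull(d)}")
```

The rat's pause point is simply where these two invented lines cross; move either gradient and the equilibrium shifts, just as a bigger reward draws the real animal closer to the shock.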
When I was a child, we thought we would all have robots that would bring us food and clean our clothes and converse with us. But something went wrong with AI, and the only robot in my home is a moderately dim-witted, self-directing vacuum cleaner.
Artificial intelligence has become stuck because it has so far not adopted the idea of a democratic architecture. Although your computer is built of thousands of specialised systems, they are too polite: they never argue or compete for control. I suggest that conflict-based, democratic organisation -- which I call a team-of-rivals architecture -- is the best route to a fruitful new age of biologically inspired machinery.
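As a minimal sketch of what a team-of-rivals program might look like, consider the Python below. The experts (craving, prudence) and their weightings are hypothetical, chosen to echo the chocolate-cake dilemma; the point is only that every rival answers the same question, and that a near-tie in the vote is itself detectable as conflict.

```python
# A minimal, hypothetical team-of-rivals arbiter. Every expert answers the
# *same* question ("eat the cake?"), and the winner is decided by vote.

from collections import defaultdict

class Expert:
    def __init__(self, name, vote_fn):
        self.name = name
        self.vote_fn = vote_fn  # maps a situation to (action, strength)

    def vote(self, situation):
        return self.vote_fn(situation)

def craving(situation):
    """Rival that has evolved to want sugar."""
    return ("eat", 0.9 * situation["hunger"])

def prudence(situation):
    """Rival that cares about the bulging belly."""
    return ("refuse", 0.7 * situation["diet_concern"])

def arbitrate(experts, situation, margin=0.1):
    """Tally the rivals' votes; report conflict when the vote is close."""
    tally = defaultdict(float)
    for e in experts:
        action, strength = e.vote(situation)
        tally[action] += strength
    ranked = sorted(tally.items(), key=lambda kv: -kv[1])
    winner = ranked[0]
    runner_up = ranked[1] if len(ranked) > 1 else (None, 0.0)
    conflicted = winner[1] - runner_up[1] < margin
    return winner[0], conflicted

experts = [Expert("craving", craving), Expert("prudence", prudence)]
print(arbitrate(experts, {"hunger": 0.8, "diet_concern": 1.0}))
# ('eat', True) -- the vote is 0.72 to 0.70: a conflicted machine.
```

Unlike a society-of-mind pipeline, in which each subagent owns a different subtask, these rivals overlap completely: both claim authority over the same decision, and the arbiter's job is not to route work but to resolve a dispute.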
David Eagleman is a neuroscientist and writer. His book Incognito: The Secret Lives of the Brain (Canongate) is out in April
This article was originally published by WIRED UK