Hugo de Garis believes humans may one day meet their match in machines, but only after man perfects the hardware.
Those machines will be based on an artificial brain de Garis and his colleagues at Japan's ATR Human Information Processing Laboratories are now developing. So far, de Garis has only developed an early incarnation that looks very much like a desktop computer. But this small box will contain circuitry that evolves over time: competing software algorithms, each written to drive the chip's functions, vie to become the code that determines how the circuitry changes. Call it evolvable hardware.
Several researchers are currently at work on evolvable hardware projects, each designed to solve a specific problem. For example, Adrian Thompson at Sussex University has developed an evolvable system that will control the movement of a robot.
But what de Garis wants to do is more ambitious. He wants a machine that will think on its own and control several concurrent operations. Currently, he is working on a CAM-Brain Machine that will become the basis of a robotic cat. Part of the circuitry will control the cat's vision, while other parts control functions such as hearing.
The basis of this machine is the Xilinx XC6264, a microchip that essentially functions as a blank slate upon which software writes its functions. The chip is divided into clusters of cells connected by wires that allow it to operate like a series of computers connected to a high-speed network. Not only can these cells and clusters swap information with their immediate neighbors, they also can send and receive information from any other cluster or cell along the network. This inter- and intra-cluster communication allows developers to allocate portions of the chip for independent functions. It also makes for a faster processor, an element de Garis needs to test theories about an artificial brain.
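The article describes competing algorithms fighting to configure the chip. That evolutionary loop can be illustrated with a toy genetic algorithm. The sketch below is not the CAM-Brain algorithm; it is a minimal Python illustration in which a population of "configuration" bit strings competes, and the fittest survive and mutate. The XOR target and every parameter are invented for the example.

```python
import random

TARGET = [0, 1, 1, 0]  # toy goal: a truth table matching XOR of two inputs

def fitness(genome):
    # Count how many truth-table rows the candidate "circuit" gets right.
    return sum(g == t for g, t in zip(genome, TARGET))

def evolve(pop_size=20, generations=200, mutation_rate=0.1, seed=0):
    rng = random.Random(seed)
    # Random initial population of configuration bit strings.
    population = [[rng.randint(0, 1) for _ in range(len(TARGET))]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == len(TARGET):
            break  # a perfect configuration has evolved
        # Keep the top half; refill the population with mutated copies.
        survivors = population[: pop_size // 2]
        children = [[1 - b if rng.random() < mutation_rate else b
                     for b in parent]
                    for parent in survivors]
        population = survivors + children
    return max(population, key=fitness)

best = evolve()
print(best)  # the best genome found; with enough generations it matches TARGET
```

On real reconfigurable hardware the genome would be a configuration bitstream and the fitness test would run in the circuit itself, which is exactly the speedup the CAM-Brain Machine is meant to provide.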
Without the brain machine, de Garis has had only software to simulate his theories, not a practical process, remarked Raj Patel, marketing director for reconfigurable logic for Xilinx. "You can't leave a computer running for 3,000 years to see whether it will work," he said.
By contrast, the XC6264-based CAM-Brain Machine, slated for completion early next year, will reduce a test run to mere seconds. All of this, de Garis hopes, will allow his brain to take on a life of its own and eventually become a machine that can build itself.
Wired News recently interviewed de Garis via email. He spoke broadly of the greater vision he has for his creation.
Wired News: Those of us who don't work in the world of evolutionary engineering tend to look upon computers as tools that have applications. Is it even proper to talk about applications with respect to this artificial brain? How will the brain work?
de Garis: I'm planning to design the brain in the following way: First, I (and my collaborators) will decide on a set of behaviors and capabilities that we want our kitten to have. Then we will dream up neural net modules to perform basic functions, e.g. walk straight, say meow, and so on. Then we design a brain architecture using these modules and their interconnections. In practice, the kitten will do more or less what we humans design it to do, but the complexity of its behaviors and control systems will be high, so I expect the kitten will surprise us constantly, hopefully like real kittens.
How will the brain work? Each module will have its own separately evolved function: detect a line of light moving across the eyes from left to right, or detect whether strong pressure has been applied to a given limb, for example. There will be thousands of detector and pattern-recognition modules. Their outputs will be fed to decision circuits: if A1 is happening and A4 is also happening, then do B6. The instructions are passed on to motion-controller modules, which drive the motors in a coordinated way so that the kitten moves appropriately.
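The detector/decision/motor pipeline de Garis describes can be sketched as a tiny rule engine. This is my own illustration of the A1/A4-to-B6 pattern, not ATR's architecture; the module names follow the interview, and the sensor keys are invented.

```python
# Detector modules: each maps raw sensor state to a yes/no signal.
detectors = {
    "A1": lambda s: s["light_moving_right"],
    "A4": lambda s: s["pressure_on_limb"],
}

# Decision circuits: if every named detector fires, trigger the named motor module.
rules = [
    ({"A1", "A4"}, "B6"),
]

# Motion-controller modules: each produces a coordinated motor action.
motors = {
    "B6": lambda: "withdraw_limb",
}

def step(sensors):
    """One brain cycle: detect, decide, act."""
    active = {name for name, detect in detectors.items() if detect(sensors)}
    return [motors[out]() for needed, out in rules if needed <= active]

print(step({"light_moving_right": True, "pressure_on_limb": True}))
# → ['withdraw_limb']
```

In the real system each lambda would be a separately evolved neural net module, and the wiring between them is what the human designers lay out by hand.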
You focused on a 10,000-module system - what's the significance of that number?
It seemed like a reasonable number of modules to aim for in our first artificial brain. Ten thousand is a pretty big artificial brain (a million artificial neurons, 100 million cellular automata cells that the brain is embedded in). But 10,000 is not so large that a small human team using a special piece of hardware (a CAM-Brain Machine which can evolve a neural net module in about a second) cannot evolve 10,000 modules in a year or so. My boss is talking of a 10-year Japanese research project to build a 10-BILLION neuron artificial brain. But that would require an army of developers.
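The figures quoted imply a per-module budget worth checking: a quick back-of-the-envelope in Python (my own inference from the interview's numbers, not ATR's published breakdown) shows the raw evolution time is only hours, so the "year or so" must be dominated by designing, testing, and wiring the modules together.

```python
modules = 10_000
total_neurons = 1_000_000
total_ca_cells = 100_000_000

neurons_per_module = total_neurons // modules      # 100 neurons per module
ca_cells_per_module = total_ca_cells // modules    # 10,000 CA cells per module
evolution_seconds = modules * 1                    # one module per second on the CBM

print(neurons_per_module, ca_cells_per_module)     # → 100 10000
print(evolution_seconds / 3600)                    # under 3 hours of raw evolution
```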
You talk about computers getting bigger in this world of artilects (artificial intellects). This is counter to our miniaturization of computer components now - a process that lends itself to the heat problem you talk about. How is it possible to have a computer that generates zero heat?
By using a technique called "reversible computation", where you store all bits generated in a calculation, never wiping out the contents of computer registers (i.e., you never destroy information, or bits). You get the final answer you want, you make a copy of it, then you reverse the whole process, taking you back to where you started. One of the breakthroughs in theoretical computer science in the '60s and '70s was the realization that wiping clean the contents of a register generates heat (this comes from thermodynamic ideas in the "physics of computation" - "physcomp").
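The compute/copy/uncompute cycle de Garis describes can be demonstrated with self-inverse reversible gates. Below is a minimal Python simulation of a reversible AND built from the standard Toffoli and CNOT gates; the four-bit register layout is my own illustration. Because every gate is its own inverse, replaying the forward gates restores the ancilla, so no information is ever erased.

```python
def toffoli(bits, a, b, t):
    # Toffoli (controlled-controlled-NOT): flip bit t iff bits a and b are 1.
    # Self-inverse: applying it twice restores the register.
    if bits[a] and bits[b]:
        bits[t] ^= 1

def cnot(bits, c, t):
    # Controlled-NOT: flip bit t iff bit c is 1. Also self-inverse.
    if bits[c]:
        bits[t] ^= 1

def reversible_and(x, y):
    bits = [x, y, 0, 0]      # inputs, scratch ancilla, output copy
    toffoli(bits, 0, 1, 2)   # forward: ancilla = x AND y
    cnot(bits, 2, 3)         # copy the answer out
    toffoli(bits, 0, 1, 2)   # reverse: ancilla restored to 0, nothing erased
    return bits[3], bits

answer, register = reversible_and(1, 1)
print(answer, register)  # → 1 [1, 1, 0, 1]
```

The point of the uncompute step is exactly the one made above: since no register is ever wiped, Landauer's erasure cost is never paid.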
The discovery of reversible computing will prove to be (with 21st-century hindsight) the greatest of the scientific discoveries of the 20th century.
This is a huge claim, so let me justify it. Reversible computing means zero heat dissipation. As computer components keep scaling down to get greater densities and speeds (less distance to travel), if we continue to compute in the traditional non-reversible way, the heat generated at molecular scales will make such circuits explode. So we HAVE to compute reversibly. This is inevitable.
Heatless chips allow 2-D chips to become 3-D blocks, and of unlimited size. Next century’s scientists will make such systems self-assemble, using artificial embryological techniques. We could have moon- or asteroid-size computers, with 1 bit per atom; that's 10 to power 40 bits in an asteroid.
Our brains have a puny 10 to power 10 neurons. For me the writing is on the wall. The potential of the computer to develop ultra-intelligence in the next century is vastly superior to the human level. I see 21st-century global politics dominated by the issue of species dominance.
So you see the field of artificial intelligence - and mostly the genetic programming part of it where software and hardware constructs itself - as the next big technological race among nations?
I certainly see it becoming a very big player next century, to overcome the complexity problem. Systems are getting so huge and complex that they approach undesignability. Evolutionary engineering (EE) allows systems to be built that are too complex to understand but they function nevertheless.
Twenty-first-century technology will be based on nanotechnology (molecular-scale engineering). These systems will have to self-assemble in a massively parallel way to build human-scale objects. That will need a kind of artificial embryology. The complexities will be so great that probably the only approach that will work will be evolutionary engineering.
Eventually, almost everything will probably be built this way, so sooner or later there's going to be a big international race to perfect these new technologies. I see the traditional top-down, human-designer approach getting increasingly squeezed in the coming decades.
You have some big dreams for this technology - what will the existence of beings based on this technology do to evolution?
It may stop biological evolution in its tracks. That's what makes me so frightened for the long-term future (a century or so).
I'm predicting that humanity next century will split ideologically into those who want to build the artilects, whom I call "Cosmists," and those who will oppose this, "Terrans." The Cosmists will have a cosmic perspective, saying that humanity has a duty to create the next higher form of intelligence; it will be a religion to them. The Terrans will argue, why build your possible destroyers?
I see the Cosmists' strategy being to escape from the earth into deep space to build their own colonies to do their own artilectual experiments. The Terrans will be so frightened of the Cosmists succeeding that the Terrans will be prepared to nuke the Cosmist colonies. To protect themselves, the Cosmists will need to arm quickly and secretly to create a counter threat to the Terrans. At the moment what I'm saying is science fiction to most people, but there is a terrible logic to it if you think it through.
I identify strongly with the historical figure Leo Szilard, the Hungarian/American nuclear physicist who discovered the idea of a nuclear chain reaction and hence the nuclear bomb. He was familiar with the situation in Nazi Germany. As soon as he conceived the idea to split the nucleus in a chain-reaction way, he became mortally afraid. He could see what humanity would have to face. Humanity lives (or dies) with the reality of the hydrogen bomb. I have similar feelings regarding the future of humanity next century.
So why are you doing this?
Because ultimately I'm a Cosmist. I think humanity should build artilects - gods. To choose never to do such a thing would be a tragic mistake, I feel. As humans we are too limited to amount to much, but artilects have no limits. They could be as magnificent as the laws of physics will allow them to be (maybe even beyond?).
I suspect that most of the advanced civilizations "out there" have gone Cosmist, i.e. they have become artilects. Maybe there is a whole intergalactic community of artilects out there who don't bother communicating with biological forms because we are too primitive. Who knows?