The future of computing may be gestating - not in computer labs, but in an obscure discipline called process control, where scientists have discovered that a little smear of rat brain can solve one of the big problems in chemical engineering.
Despite the enormous progress made in electronic digital computers, the box on your desk - or on your lap, or in your pocket - is the same sort of machine as the room-sized monsters of 50 years ago. It manipulates binary code the way it has been programmed to do. Computers of this type will become faster and more powerful for a time, but there are already signs that they are approaching limits in both hardware and software. Even without these limits, digital computers of any conceivable power will have difficulty accomplishing seemingly simple tasks. A big machine capable of running a space mission fails completely when asked to pick a face out of a crowd or to drive a robot across a room full of obstacles. These sorts of problems have already been solved in nature, in an infinite variety of ways, by the associations of neurons directing living creatures. The next big jump in computing, potentially as important as the jump that created the programmable electronic computer, must be inspired by biology.
If the past is any guide, this will probably not occur in the likely places - the major hardware or software companies. The original electronic computer was hatched not by the big electronics firms of the time, nor by the banking or insurance businesses that first profited from its development, but in an obscure corner of the military-industrial complex, by a group with lots of money and an urgent need to compute artillery trajectories. It may be that the successor to that type of machine is gestating far from the hotbeds of computerdom, in an obscure corner of the chemical business: a field called process control.
A modern chemical plant is among the most complex of structures, involving acres of pipes, tanks, reactor vessels, distillation columns, valves, and compressors. In this it compares to other complex artifacts, such as telecom nets, computers, and power grids. The difference is that these others are susceptible to what systems people call linear control: turn the volume knob on your stereo, and you go smoothly from barely audible to ear-splitting. Electronics is like that, and though there are nonlinearities in electronics, engineers are clever enough to compensate for them and render them effectively harmless. But at the heart of every chemical plant are reactor vessels and distillation columns, and the reactions that take place there are nowhere near linear. Not only are chemical reactions nonlinear, but they are dynamically nonlinear: changes in heat and pressure have hard-to-control effects on the output of usable stuff, and each batch has a kind of memory.
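For the code-minded, the difference can be sketched in a few lines. The following is an illustration only - the constants are invented, not drawn from any real process - contrasting a linear response with the Arrhenius law that governs most reaction rates:

```python
import math

# Linear response: output scales in proportion to input,
# like the volume knob on a stereo.
def linear_gain(u, k=2.0):
    return k * u

# Nonlinear response: a reaction rate following the Arrhenius law,
# rate = A * exp(-Ea / (R * T)).  A small nudge in temperature can
# produce a disproportionately large change in rate.
def arrhenius_rate(T_kelvin, A=1e9, Ea=75e3, R=8.314):
    return A * math.exp(-Ea / (R * T_kelvin))

for T in (350, 360, 370):
    print(f"T = {T} K  rate = {arrhenius_rate(T):.4f}")
# Each 10 K step roughly doubles the rate, where a linear system
# would add the same fixed increment every time.
```

The stereo knob lives on the first curve; the omelet, as we are about to see, lives on the second.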
Most of us have had domestic experience with such reactions. Consider the omelet. You start with an inedible mix of protein and fat. You add heat and air to this mix in precise amounts. At a certain moment the last increment of heat and air, indistinguishable from those that preceded it, produces a perfect omelet. That's nonlinearity. The dynamism is seen in the omelet's memory. If you let it go beyond the perfect point, there's no way to bring it back to perfection by, say, cooling it. You have to throw it away and start over. Although chemical engineering has been a science for a century or so now, it is still a lot more like cooking omelets than the people who run chemical companies would like. As in a kitchen, yum can turn to yuck in an instant, and the difference between the two makes up a large part of the bottom line. This is where process control comes in.
There is another difference that chemical people don't talk about much. When something goes sour on a telecom net or a power grid, people get pissed off and can lose significant amounts of money. When a chemical plant goes sour, you get The Fireball: large pieces of white-hot metal go flying everywhere at high speeds, and you can lose a hundred-million-dollar investment in four seconds. That's if you're lucky. If you're unlucky, you get Bhopal.
Chemical companies compensate for these characteristics in the same way chefs do - by carefully watching the pot, using a technology that could be described as A Lot of Old People Who Know How to Make Teflon Without Wiping Out Wilmington. This is expensive, and wasteful, and often leaky. Meanwhile, big chemical concerns are under increasing economic pressure, stemming from shrinking profit margins on standard chemicals, from the need to make more of their income from ever more complex processes, from the expiration of their patents, and from environmental demands to reduce leaks and waste products. This has set the stage for a new look at process control.
Although the genesis of technical breakthroughs is hard to determine, it is possible this one started a decade or so ago, when a young Nigerian graduate student named Babatunde Ogunnaike was completing his PhD at the University of Wisconsin. A revolution was underway in neurophysiology, and it occurred to Ogunnaike that neuronal systems were adept at controlling the same sorts of nonlinear processes that troubled chemical engineers. Each person is a vat of hundreds of thousands of nonlinear reactions - so is each moth and each hamster.
Of course, the general idea of using biological models in engineering is not particularly new. Biomimetics, as it is known, has been around for years without producing much technology of value besides Velcro. The idea of trying to reverse-engineer neural systems has also been around for a while, particularly at the California Institute of Technology, where Carver Mead and John Hopfield started the line of research that led to the current interest in neural nets. Mead was trying to re-create important sensory neurosystems, like the retina, in silicon.
This sort of thing is a long way from the intense practicality of chemical engineering, and when Ogunnaike asked his advisors about using models derived from neurons for controlling processes in chemical engineering, they all told him to forget it. But he added a speculative section on the potential of biologically based process controls to his statement of research interests anyway and kept it in mind as he moved on in his career. In the mid-'80s, Ogunnaike landed a process-control job at DuPont, in Wilmington, Delaware, and was faintly surprised to learn that DuPont had recently instituted a major life sciences program, including a neurosciences section. DuPont had made a massive investment in life sciences in the hope that somehow the biologists would come up with another nylon - a patentable product that was cheap and easy to make with potentially astronomical profits, something that had eluded conventional chemistry for 40 years.
Running the neuro section was an unconventional former psychologist named Jim Schwaber. Schwaber had come to DuPont from academia as part of DuPont's life-sciences recruiting drive. Having started out as a pigeon-training Skinnerian behaviorist, he then became interested in neurophysiology at the University of Miami, where researchers were studying how animals control their heart rates. Here Schwaber stepped into one of the classic conundrums of behavioral science: if you want to study behavior, you have to understand the brain; if you want to understand the brain, you have to understand its components; if you want to understand its components, you have to understand neurons; if you want to understand neurons, you have to understand membrane physiology and the biochemistry of neurotransmitters; and so on, down to the last increment of reductionism - quarks, maybe. By then, however, you will not be in much of a position to understand why the rat does one thing and not another.
In practice, of course, scientists choose a workable section of study along this continuum and specialize. As it turned out, the one Schwaber chose lies right in the middle of what might be called the complex-behavior-to-molecule continuum: a discrete group of about a hundred neurons and their associated sensors and connections that, in concert, mediate the blood pressure of mammals. Simply put, this baroreflex holds blood pressure within a fairly narrow range across a wide range of heart rates.
Whether the rat (or human) is looking for food, fighting for its life, or sacking out; whether the heart is running at max or just ticking over - the blood pressure delivered to the body's cells is pretty much the same. This is not a simple achievement, considering that the system has only two variables: cardiac output and arterial-wall tension. In essence, the system picks up pressure information from receptors in the aortic arch and the carotid sinuses and sends this information to second-order neurons in the nucleus tractus solitarii (the NTS) up in the brain stem. This information is integrated with other sensory signals that reflect demands on cardiac and respiratory performance, and then the NTS sends control signals to the heart that regulate its rate, rhythm, and force of contraction, while it sends other signals to vascular beds that regulate flow and resistance. The baroreflex is complicated enough to be interesting - it is, after all, a part of the brain - but simple enough to be, at least potentially, comprehensible in detail within a reasonable time, unlike vastly more problematic functions like vision or consciousness.
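In control-engineering cartoon form, the arrangement looks something like the sketch below. Everything in it is invented for illustration - the gains, the time constants, the disturbance - none of it comes from Schwaber's recordings. The point is the architecture: a fast handle (cardiac output) and a slow handle (vessel-wall tension) both working against the same pressure error:

```python
# Toy baroreflex: mean arterial pressure ~ cardiac output x resistance
# (the hemodynamic analog of Ohm's law).  All numbers are invented.
SETPOINT = 100.0        # target pressure, mmHg
co, res = 5.0, 20.0     # cardiac output (L/min), vascular resistance
dt = 0.1

for step in range(600):
    if step == 200:
        res -= 4.0                # disturbance: sudden vasodilation
    pressure = co * res
    error = SETPOINT - pressure
    co += 0.020 * error * dt      # fast handle: responds within beats
    res += 0.001 * error * dt     # slow handle: vessel tone creeps along

print(f"pressure = {co * res:.1f} mmHg, "
      f"cardiac output = {co:.2f}, resistance = {res:.2f}")
```

The fast handle catches the disturbance almost at once; the slow handle quietly shares the steady-state burden. The real baroreflex does this with a hundred neurons instead of two lines of arithmetic, and far more gracefully.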
At DuPont, Schwaber initially set up shop in Central Research and then spent some time in the Imaging Systems Department, because the company thought that the work would have some applicability in biomedical technology. It did, but the more important result was that Schwaber and his group developed the ability to computer-analyze the input, output, and computational functions of the neurons that make up the baroreflex in real time. They would take a rat, wire it for blood-pressure and heart-rate tracking, and insert recording and stimulating electrodes at various sites of interest in the baroreflex system. Ultimately, they could play the system like a videogame and watch what was happening on a workstation monitor.
When Ogunnaike found Schwaber's lab, he immediately understood that the revolution in process control he had dimly perceived as a grad student was within reach. He saw in the operation of the baroreflex an uncanny similarity to a key problem in process control. It seems that when you want to make a complex polymer, like Kevlar or Teflon, you boil a soup of the monomer building blocks in a big tank with the appropriate catalysts, and if the temperature and pressure are right, you get the desired product. To control temperature, you use a water jacket around the tank; to control pressure, you jet nitrogen directly into the tank itself. And, of course, there is the direct effect temperature has on pressure, as in a steam engine. To keep pressure within bounds, it's best to use the water jacket, but this response is sluggish, so when pressure starts to drop below the desired set-point, control engineers have to use the nitrogen. Unfortunately, if you jerk the pressure around too much in this way, you can ruin the product or reduce the yields. Also, such relatively crude manipulations, when applied to nonlinear systems, produce unpredictable feedbacks that can send the tank completely out of control.
What you really want is a control system that flawlessly integrates slow- and rapid-acting controls with great sensitivity to the nonlinear reactions going on in your vessel. You want a control system that follows a reaction, keeping the important variables at their desired set-points, without the jerkiness that causes feedback. This is what Ogunnaike saw in the baroreflex: a flawless integration of slow-acting (arterial contraction) and rapid-acting (cardiac output) control mechanisms and a near-perfect following controller. A little smear of jelly from a rat's brain was, in effect, solving one of the big problems of chemical engineering.
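The reactor-side analog of the baroreflex cartoon above might look like this - again a hedged sketch, with every constant invented rather than taken from any DuPont process: a vessel whose pressure rides on temperature, steered by a sluggish water jacket and a fast but deliberately restrained nitrogen jet:

```python
import random

# Toy polymerization vessel: pressure tracks temperature (as in a
# steam engine) plus whatever nitrogen is jetted in.  Every constant
# here is invented for illustration.
random.seed(1)
T, P = 340.0, 8.0      # reactor temperature (K), pressure (bar)
T_jacket = 340.0       # water-jacket temperature: the slow handle
P_SET = 10.0           # desired pressure, bar
dt = 1.0

for step in range(600):
    # Plant: temperature relaxes sluggishly toward the jacket; pressure
    # relaxes toward its temperature-set equilibrium, plus process noise.
    T += (T_jacket - T) / 60.0 * dt
    P += ((10.0 + 0.2 * (T - 350.0)) - P) / 5.0 * dt + random.gauss(0, 0.02)

    error = P_SET - P
    # Fast handle: nitrogen acts at once but can only push pressure up,
    # and is kept small -- jerking the pressure around ruins the batch.
    P += max(0.0, min(0.3, 0.5 * error))
    # Slow handle: trim the jacket so the nitrogen isn't doing steady work.
    T_jacket += 0.1 * error * dt

print(f"P = {P:.2f} bar, T = {T:.1f} K, jacket = {T_jacket:.1f} K")
```

The nitrogen holds the pressure near its set-point within a few steps; the jacket takes minutes of simulated time to catch up - the same division of labor the baroreflex manages between heartbeats and vessel walls.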
It did not take long for the enthusiastic Ogunnaike to convince Schwaber to change the orientation of his neuro work from the speculative periphery to the critical center of the chemical engineering world.
In the ensuing years, sharp young process-control engineers were brought in as post-docs, and, probably for the first time anywhere, people who understood meat computing were working closely with those who understood the wire version, in an environment focused on the control of important industrial processes. "In general, computer people don't know how to talk to brain people," Schwaber says. "There's an absent middle ground. The computer people try to make neurons into simple switches - which, of course, they're not - and the brain people sort of throw up their hands and say you can't really get anything useful in analytic terms out of the brain because it's far too intricate and complex. But we're doing it."
The team analyzed the baroreflex in process-control terms, and one of the post-docs, Frank Doyle, now at Purdue University, modeled the baroreflex in a computer and then modeled a common sort of chemical engineering process in the same computer. The biologically based virtual controller was vastly superior to standard controllers in following nonlinear reactions and keeping them within specs.
The next step, obviously, is to go physical. Schwaber and Ogunnaike are somewhat hesitant to discuss exactly when this will happen. (Big chemical firms are so secretive, they make computer companies look like Hollywood gossip columnists.) But it should be soon. One problem appears to be political: convincing the hard-hat types that this really will work, that you can cook chemicals without the Old People. (Oh, you want to run my 30-meter-high potential bomb with an electronic rat-brain fragment? Yeah, sure.)
Meanwhile, Ogunnaike and Schwaber have seen the future, and (so far) it works. Up until now, the neuronal computations have been modeled digitally, but it turns out that if you run a specially designed VLSI chip at voltages lower than those that produce the standard on-off response, you get a suite of responses that can be tweaked to mimic more closely what an assemblage of neurons actually does. Schwaber has already worked with one of Carver Mead's former students, John Lazzaro, to build an analog VLSI chip that mimics the baroreflex. Ogunnaike thinks that analog will ultimately be the hot area for advances in neurobotics.
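The physics behind the trick is worth a line or two. Below its switching threshold, a MOS transistor doesn't turn off; its channel current varies smoothly and exponentially with gate voltage - the same mathematical shape as many neuronal conductances. A back-of-the-envelope sketch, with textbook-style constants that are illustrative rather than measured from any real chip:

```python
import math

# Subthreshold MOS current: I = I0 * exp(Vg / (n * Vt)).
# I0 (leakage scale) and n (slope factor) are device-dependent;
# the values here are illustrative only.
I0 = 1e-15     # amps
n = 1.5        # slope factor
Vt = 0.0258    # thermal voltage at room temperature, volts

def subthreshold_current(vg):
    return I0 * math.exp(vg / (n * Vt))

for vg in (0.20, 0.25, 0.30, 0.35):
    print(f"Vg = {vg:.2f} V  I = {subthreshold_current(vg):.3e} A")
# Roughly every 90 mV of gate voltage multiplies the current tenfold:
# a smooth, graded, neuron-like response instead of a binary switch.
```

Run a whole chip in this regime and you get, essentially for free, the continuous analog behavior that a digital simulation has to grind out arithmetic to approximate.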
Obviously, in an area this new, in a departure so radical from the main line of digital computer development, it would be easy to go nuts with speculation. It is, however, reasonable to project that, if the work with the baroreflex (a very primitive brain function) succeeds and big money starts to flow toward neurobotics (and remember that the chemical industry, with big oil included, is about five times the size of the whole computer industry, hardware and software combined), scientists could start to walk up the brain stem, analyzing and rendering in silicon ever more complex brain functions. It may be that artificial intelligence will be developed the same way that nature developed the real thing - through evolution, layer upon layer, from the simple to the complex.
Such computers will not be much like human-programmed digital computers. They will be able to do things that digital machines cannot do easily, or cannot do at all, just as digital computers can do things that organic brains can't pull off. Such computers will not be free-standing boxes, at least at first, but will be tied into technology, giving to industrial processes the sort of homeostatic control exhibited by living beings. They will be less like the idiots that digital boxes are now, utterly dependent on flawless programming, and more like dogs: trainable, but with an inherent set of instincts and abilities, herding our processes and reactions and systems like a border collie runs a flock of sheep.