William Gibson Didn't Invent Cyberspace, Air Force Captain Jack Thorpe Did. Fred Hapgood On The Real Origins - And The Future - Of Cyberspace.
William Gibson's classic Neuromancer was the first SF novel to show VR done big time. His characters moved around inside a computer-generated landscape that was stable, populated, easily navigated, and the size of a country, maybe larger. He called this realm cyberspace.
Soon after his book appeared, the term started popping up as a synonym for the Internet. This usage spread despite a long list of differences between our Net and Gibson's. All of the objects in Gibson's medium are embedded in the same 3-D space, visible from a single point, and timed by the same clock. Sites on the real Net, however, have no geometrical or chronological relation to each other. Each exists as an island, in its own universe.
While Gibson was imagining exciting new electronic worlds, researchers working under a military contract were actually building something very close to his dream of cyberspace. In 1976, eight years before the publication of Neuromancer, Air Force Captain Jack Thorpe was serving as a research scientist in flight training R&D at Williams Air Force Base east of Phoenix, Arizona. His job was to advance flight simulators, then three-story mechanical devices that rode on great platforms and shook their pilots like a rag in a dog's mouth. These stand-alone machines were used to train pilots in solitary tactics like treetop runs, carrier landings, and the evasion of antiaircraft fire. But Thorpe wanted to carry these single-pilot devices into a new application: teaching group skills.
"Group interactions are the most complicated combat operations," says the tall, soft-spoken Thorpe. "They also tend to be the ones in which the costs of screwing up are the highest. Yet because it is so difficult and expensive to organize groups, pilots get very little training in collective skills. They have to learn these skills on the job, during combat, which makes casualties disproportionately high during the first few missions."
Thorpe's plan, which he later described in an academic paper, was to build a network of interacting simulators. But it was going to take some complicated space/perception computer modeling. Consider, for example, a group of people looking at a blackbird. The retina of each bird-watcher receives a unique subset of the light rays bouncing off the bird, telling that observer what the bird looks like from their particular distance and angle. Creating the same effect in a simulator network would mean that every viewable object in the exercise must send a different flow of customized instructions to each participant, specifying how that object ought to be represented on the display. If 20 simulators were used, each object would have to compute and transmit 20 streams of data simultaneously. (And that doesn't count the signals needed for sound and mechanics.) Thorpe's idea would require networking dozens, probably hundreds, of simulators. Until then, the best the US Air Force could manage was two.
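Put into back-of-the-envelope code, the arithmetic looks roughly like this; the update rate is an assumed figure, chosen only to show how fast the numbers grow:

    # Back-of-the-envelope sketch of the "every object streams to every viewer"
    # approach. All numbers are illustrative assumptions, not Simnet figures.
    def naive_update_load(num_objects, num_viewers, updates_per_second):
        # Each observable object computes one customized stream per viewer.
        return num_objects * num_viewers * updates_per_second

    # 20 simulators, each also an observable object, refreshed 15 times a second:
    print(naive_update_load(20, 20, 15))  # 6,000 customized updates every second,
    # before sound and mechanics, and growing with the square of the head count.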
Hocus Pocus
By 1983, Thorpe was a major and a program manager at the Advanced Research Projects Agency. He found himself with a ringside seat at the huge experiment in distributed networking known as the Arpanet, the precursor to the Internet. The project was generating so much interest in networking science that Thorpe organized a team to develop and test his simulator network idea (called Simnet, for simulator networking). Perceptronics, based in Woodland Hills, California, was selected to build a new generation of simulators, while Bolt Beranek & Newman in Cambridge, Massachusetts, won the contract to create the networking and system software.
The project manager for BBN was Duncan Miller, an MIT graduate and - perhaps more important - a member of the Society of American Magicians. As a magician, Miller understood that much of what we think of as "out there" is really internally constructed, coming from models running in our minds. Observed objects guide us into picking which mental models to run and what expectations to have, but we don't rely on them for every pixel. Sometimes our models break down and we get deluded (especially if a magician is deliberately stressing them), but basically the system works.
Adopting this decentralized, observer-centered, bottom-up approach to building a simulator network would dramatically decrease bandwidth. Even better, it would scale: observed objects would not have to worry about finding new resources every time new viewers appeared. Any player who showed up with the basic hardware would be able to plug in.
The downside would be the added burden on the observers. Each simulator would have to calculate where the objects were that it needed to observe, pull out the images appropriate for those objects from its own local memory or storage, calculate their correct size and orientation, figure out what might be lying in their path, calculate those implications, paint the whole picture on the display screen, and repeat. In short, every node would have to have its own local running model of the entire simulation.
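A loose sketch of that loop, with invented names and flat 2-D geometry standing in for the real thing, might read:

    import math

    # Hypothetical sketch of one node's display loop. Every simulator keeps its
    # own copy of the exercise and paints the picture locally, frame after frame.
    def render_frame(world_model, viewpoint, sprites):
        frame = []
        for entity in world_model:                      # every object this node can see
            dx = entity["x"] - viewpoint[0]
            dy = entity["y"] - viewpoint[1]
            distance = math.hypot(dx, dy) or 1.0
            scale = min(1.0, 50.0 / distance)           # crude perspective: farther = smaller
            sprite = sprites[entity["type"]]            # image pulled from local memory or disk
            frame.append((sprite, entity["x"], entity["y"], scale))
        return frame                                    # hand the finished picture to the display

    world = [{"type": "T-72", "x": 120.0, "y": 40.0}, {"type": "M1", "x": 10.0, "y": 5.0}]
    print(render_frame(world, viewpoint=(0.0, 0.0), sprites={"T-72": "t72.img", "M1": "m1.img"}))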
Humans do the same thing, and Miller knew how we get away with it: the models we use are highly simplified (otherwise magicians would not be able to screw them up so easily). Taking their cue from the human machine, Miller's team developed the idea of building a "toy" model for each entity in the simulation out of a few basic properties, such as vehicle type, speed, and direction. All of these toy models would be aggregated into a single unified "world model," which would be copied back to every participating simulator.
During the exercise, each simulator would run both the world model and a second, fine-tuning program. It would then compare the behavior of its own vehicle - its "avatar" - in the world model with the values calculated by the more precise local model.
When the two diverged by more than a set amount, the simulator would broadcast the values of the more precise program to the network. Every simulator would then correct its world model accordingly. A second set of packets would go out whenever the simulator driver issued a command not expected by the models. Every other aspect of "reality" (how the tanks looked at various distances and orientations, for instance) was calculated locally or pulled off a disk by the simulator.
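A minimal sketch of that correction rule, with an assumed threshold and invented field names rather than Simnet's actual packet format:

    import math

    THRESHOLD = 1.0   # assumed: metres of drift tolerated before a correction goes out

    def maybe_broadcast(toy_state, precise_state, broadcast):
        # Compare the shared "toy" model of our own vehicle with the precise local model.
        drift = math.hypot(precise_state["x"] - toy_state["x"],
                           precise_state["y"] - toy_state["y"])
        if drift > THRESHOLD:
            # The simple model has wandered too far: tell the network the true values,
            # and every node (including this one) patches its copy of the world model.
            correction = {"id": toy_state["id"],
                          **{k: precise_state[k] for k in ("x", "y", "vx", "vy")}}
            broadcast(correction)
            toy_state.update(correction)

    toy = {"id": 7, "x": 0.0, "y": 0.0, "vx": 5.0, "vy": 0.0}   # what everyone's world model says
    precise = {"x": 1.6, "y": 0.3, "vx": 5.2, "vy": 0.1}        # what the fine-grained local model says
    maybe_broadcast(toy, precise, broadcast=print)               # drift ~1.6 m, so a correction goes out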
Simnet required other layers of magic as well. A key goal was "finding inexpensive ways of making a crew think they'd just fired a big gun, like putting large speakers inside their seats that literally kicked them in the pants," Miller recalls. "Or getting the out-the-window views to jiggle around at the same time the speakers in the seats were rumbling, which generated a very compelling illusion of moving over rough terrain."
One of the most important tricks was keeping latency - i.e., the time it takes a change to be generated, transmitted, received, decoded, and written to all the displays or speakers - low enough to support human reaction. If Alice were chasing Bob and Bob swung to the left, Bob's maneuver had to be painted on Alice's screen fast enough that Alice could react as if the chase were happening in real vehicles.
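As a rough illustration only - none of these timings come from the Simnet program, and the reaction-time budget is just a commonly quoted ballpark - the latency arithmetic looks like this:

    # Back-of-the-envelope latency budget. Every figure here is an assumption made
    # for illustration; the article reports no actual Simnet timings.
    stage_ms = {
        "generate the change": 15,   # sense the driver's input, update the local model
        "transmit":            20,   # network transit
        "receive and decode":  10,   # unpack the change order
        "repaint the display": 33,   # one frame at roughly 30 frames per second
    }
    total = sum(stage_ms.values())
    reaction_budget_ms = 250         # rough ballpark for a simple human reaction
    print(f"loop latency {total} ms vs. budget {reaction_budget_ms} ms -> ok: {total < reaction_budget_ms}")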
Running the loops at the speed of thought helped lift the exercise out of an experience with cheesy 2-D graphics - in which every tree looked like every other - into something that made crew members sweat. Says Miller, "When a T-72 tank pulled out from behind a barn and started swinging its turret around to put its gun tube on your tank, the situation felt plenty real."
War games
BBN created a working model by the end of 1985, and by 1990 the team had turned over 238 networked simulators to the US Army. Meantime, the scalability of the architecture allowed combat-training exercises to grow from dozens to hundreds of players and to incorporate a growing number of vehicle types. Today, a large distributed interactive simulation, or DIS, exercise might have 1,000 humans (the system could support 10,000 human players just as easily) and 9,000 software robots, representing the interactions of jets, tanks, ships, satellites, armored personnel carriers, and helicopters.
By the end of the decade, Darpa hopes to be running exercises with 100,000 participants that will include smoke, weather, and a variety of microterrains (forest, swamp, desert, et cetera).
Over the last 10 years, the DOD has realized that the technology is useful for far more than combat training. Giving lots of people real-time interactivity with complicated, dynamically changing data structures turned out to work for a long list of group tasks. By 1995 the DOD was using the technology for mission rehearsal, strategy definition, force planning (how many people are needed to do what), battle reenactment, tactical assessments of new weapons, concurrent manufacturing exercises, logistics, procurement (new weapons have to pass simulation tests to get combat certification), and long-term R&D assessments (such as evaluating battlefield lasers or smart weapons).
Simulations allowed discussions to advance by showing how a situation unfolds rather than by arguing a case verbally - a key distinction when a lieutenant is trying to contradict a general. Stable, shared landscapes made it possible to represent information in terms of 3-D or even 4-D patterns (with the fourth dimension being time). The all-digital nature of these exercises meant that everything could be recorded for later review and replay.
Between the larger exercises and the spreading pool of applications, the DOD's commitment to DIS has grown steadily: a Defense Modeling and Simulation Office has been set up directly under the secretary of defense, a special Internet was built and reserved for defense simulations, and funding levels have risen to US$500 million a year.
Civilian clothes
In theory, the market for civvy DIS should be even larger, ranging from education, collective design, group-work environments, issue analysis, and decision support to entertainment, recreation, and art. But calculating and recalculating large environments fast enough to support human reaction speeds is an industrial-strength application, requiring very fast workstations with hundreds of megs of RAM and connections much quicker than anything possible with a 28.8 hookup to the Internet.
Such demands would surely tax most anything but high-end computers and high-speed networks, and overhandling can lead to delay and packet loss. Delay degrades the rapid interactivity that is so important to the illusion. Packet loss is no less problematical, since a simulator has no way of knowing it has missed one change order until a second one arrives, telling it to alter the instructions it never received.
Rather than broadcasting change packets to everyone, one solution might be to bundle correction packets into specific categories and then let entities subscribe only to the ones they need. But such "multicasting" really just shifts the burden from communications to local processing and memory resources.
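A toy sketch of the subscription idea, with made-up category and node names; the saving in traffic is real, but every subscriber still pays in local filtering and bookkeeping:

    from collections import defaultdict

    # Toy sketch of multicast-style interest management: nodes subscribe to
    # categories of change packets and receive only those. Names are invented.
    subscriptions = defaultdict(set)            # category -> subscribing nodes

    def subscribe(node, category):
        subscriptions[category].add(node)

    def publish(category, update, deliver):
        for node in subscriptions[category]:    # only interested nodes do the local work
            deliver(node, update)

    subscribe("alice_sim", "ground_vehicles")
    subscribe("bob_sim", "ground_vehicles")
    subscribe("bob_sim", "aircraft")

    publish("aircraft", {"id": 42, "x": 1200.0, "y": 300.0},
            deliver=lambda node, update: print(node, "applies", update))
    # Only bob_sim handles the aircraft update; alice_sim never sees it.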
Despite the obstacles, several communities hope to bring DIS to the Internet. The MOO and MUD tribes are carving away from one end; the VRML developers - with their habitats and avatars - from another; and, most aggressively, the online computer gaming industry, where there are many companies that license or borrow from DIS developers.
Zombie Virtual Reality Entertainment plans to bring out a DIS-based tank game this year. Mark Long, a principal of the company, says his game will keep bandwidth requirements down by using smarter internal models (so that fewer error-correction packets need to be sent out), by supporting fewer players (Zombie's product will support only about 18 players using 14.4 modems and 30 using 28.8), and by leaving out bandwidth-intensive enhancements like telephony.
Meanwhile, DIS engineers are constantly skating ahead, developing enhancements that will require even more resources. Researchers at Mitsubishi Electric Research Laboratory wrote a tool they describe as "an operating system for shared environments" that allows chunks of any arbitrary geometry (not just simulators) to receive and transmit change orders. Any piece of a simulation can be altered, or alter itself, in any way and at any time. The tool, called Spline (for scalable platform for large interactive environments), also supports force-feedback interfaces, acoustic localization (noises get louder or softer with distance), and independent coordinate systems, which means that you can put a city inside an apartment, and then another city inside an apartment in that city, and so on.
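The nesting trick can be pictured as a chain of coordinate frames, each defined relative to its parent - a loose sketch, not Spline's actual locale machinery:

    # Loose sketch of nested, independent coordinate frames ("a city inside an
    # apartment"). The Frame class is invented for illustration; Spline's locale
    # mechanism is more general than this simple parent/offset/scale chain.
    class Frame:
        def __init__(self, parent=None, offset=(0.0, 0.0), scale=1.0):
            self.parent, self.offset, self.scale = parent, offset, scale

        def to_outermost(self, x, y):
            # Convert local coordinates to the outermost frame by walking up the chain.
            x = x * self.scale + self.offset[0]
            y = y * self.scale + self.offset[1]
            return (x, y) if self.parent is None else self.parent.to_outermost(x, y)

    city = Frame()                                                        # the outer city
    apartment = Frame(parent=city, offset=(500.0, 120.0))                 # an apartment in that city
    inner_city = Frame(parent=apartment, offset=(2.0, 3.0), scale=0.001)  # a whole city on a tabletop
    print(inner_city.to_outermost(40_000.0, 10_000.0))                    # a point 40 km across the inner city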
MÄK Technologies, the Cambridge, Massachusetts-based leader in Defense Department DIS development, has built a number of these enhancements, including air turbulence and meteorology modules. The infantry simulator routine, for instance, blends a large vocabulary of prerecorded gaits with gait changes so the simulated soldier can shift realistically from running to crouching while keeping contact with a variety of ground textures.
Some DIS-related applications have also reached the corporate world. One of the most ambitious is the virtual landscape built by Bechtel to support the construction of an underground highway in Boston.
This domain, a 3-D map of the city, contains 50 years of aggregated geotechnical data from the area, including existing structures, proposed structures, and construction schedules. A contractor can pick a region, then dial ahead to a given date to look for an area that might be available to park a crane or store a shipment of I-beams. Business owners concerned that scheduled construction will deny customers access to their stores can dial in a date, "enter" the virtual model, and check the accessibility for themselves.
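In miniature, the "dial ahead to a date" query reduces to checking a region against a schedule - a hypothetical sketch, with invented lot names and bookings rather than Bechtel's actual database:

    from datetime import date

    # Hypothetical reservations for one staging area. The records are invented;
    # the real model is a 3-D geotechnical database, not a flat list like this.
    reservations = [
        {"region": "lot_14", "use": "crane staging",  "start": date(1997, 3, 1), "end": date(1997, 6, 30)},
        {"region": "lot_14", "use": "I-beam storage", "start": date(1997, 8, 1), "end": date(1997, 9, 15)},
    ]

    def available(region, day):
        # The region is free if no scheduled use overlaps the chosen date.
        return not any(r["region"] == region and r["start"] <= day <= r["end"]
                       for r in reservations)

    print(available("lot_14", date(1997, 7, 10)))   # True: falls between the two bookings
    print(available("lot_14", date(1997, 4, 2)))    # False: crane staging occupies it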
It is no stretch to imagine something like this model running permanently on a local civil-engineering and public-works network, jointly maintained by all the contractors, architects, and clients interested in building in a given area.
As the model continued to grow, constituencies outside civil engineering would look in: planners, traffic engineers, parks and recreation managers, real estate brokers, historical societies, insurance companies, location consultants, journalists, legislators.
As these groups made their contributions, one local region of cyberspace would gradually coalesce; in time, these regions would spread out and link up, creating a world.
This is probably how real cyberspace will form: built from the bottom up, as application communities establish, widen, expand, and ultimately join. Building true Gibsonian cyberspace - changing the Web into a vibrant electronic world - is an enormous task, possibly 100 years away. It might be to the 21st century what the great cathedrals or pyramids were to other ages: a task so huge it defines as much as expresses the culture.