They Render Unto Bill

The man at the next table in Coco's Bakery Restaurant has a whole bunch of friendly advice. "You should understand," he says, "the Softies ain't gonna tell you shit. They're loyal to Father Bill. They got an investment that isn't just financial, it's spiritual. It's a shared vision thing."

This ragged character smelling of coffee and Camels calls himself Jake, has a sandy mustache and ponytail, wears a red-checked shirt and a baseball cap, and looks like a construction worker. He claims, though, that he used to be a C++ programmer at Microsoft and now has his own start-up company developing ultrathin keyboards for laptops. For all I know this may be true. You can't judge anyone by appearances here in the sprawling mini-mall mosaic just south of Redmond's industrial parks. A woman sitting behind me is complaining to her friend about software upgrade policies. Two kids waiting to pay at the register are arguing about lossy data compression. Everyone seems touched by nerd fever.

Microsoft is the catalyst, of course. Jake calls its employees "Softies," he says, because they're like Moonies or Trekkies, sharing a blissed-out, cult mentality. "And Gates, I call him Father Bill because he's the high priest leading the faithful."

This isn't the image we're accustomed to. Writers usually depict Microsoft as a cruel sweatshop riddled with angst. In his recent book I Sing the Body Electronic: A Year with Microsoft on the Multimedia Frontier, Fred Moody refers to the "excruciating psychological pressure," which can "reduce people to rubble."

On the other hand, after a whole year studying the Softies, Moody also admits, "I left the company's campus more confused than I was when I entered."

And this is the problem. Most journalists aren't equipped to share the Softie headspace. They haven't experienced the compulsive pleasure of hacking code, so they find it impossible to believe that someone could actually like working on C++ algorithms twelve hours a day, seven days a week, under severe competitive pressure. As a result, they end up sounding like adults tut-tutting over kids playing videogames. "Why don't you go play baseball, dear? I really don't think it can be good for you, staring into a screen like that."

Well, maybe not; but the Micronerds I've met wouldn't have it any other way. As Jake says, they have a cultish quality. They sound ecstatic when they describe the intellectual stimulation at Microsoft, the lack of a dress code, the basketball court, the football field, the cheap cafeterias, the free beverages, and (of course) the stock options.

This presents me with a problem. There are things I have come here to find out - but the loyal disciples of Father Bill are a race apart, and they're certainly not going to jeopardize their state of grace by blabbing secrets their mentor wouldn't wish them to reveal.

Here's the background: Three years ago, Microsoft's research division went into a quiet little hiring frenzy, quickly growing from around 10 employees to more than 100. Among the newcomers were 11 experts in 3-D graphics, including world-famous pioneers such as Alvy Ray Smith, who co-founded Pixar, and Jim Blinn, who produced the first 3-D simulations of space missions for the Jet Propulsion Laboratory (JPL).

Smith's case is particularly interesting. In 1991, after moving on from Pixar, he started a company called Altamira. With two partners, he devoted a huge amount of effort over a two-year period to create Altamira Composer, a new kind of image-editing and composition program more flexible than Photoshop. It was marketed for nine months - then withdrawn when Microsoft acquired Altamira and invited Smith to come on board.

I'd like to know why a diehard nonconformist entrepreneur would suddenly sell his brainchild and move to Redmond. I'd also like to know what Microsoft had in mind when it acquired SoftImage (which animated the dinosaurs in Jurassic Park) and Vermeer (which developed FrontPage software) and RenderMorphics (a supplier of 3-D application programming interfaces).

But let me back up for a moment. The insidious danger of nerd fever is that even if you're not directly hooked on the tech biz, its obsessional priorities tend to be contagious. I'm reminded of the great mad genius Nikola Tesla, who once planned to build huge towers transmitting voltage directly to consumer devices. Something like that seems to be beaming out of Microsoft, except it's a psycho-voltage that works on the brains of consumers, causing them to resonate sympathetically with the value system of William H. Gates III.

This "Redmond voodoo" is the only way I can explain events such as last year's hysteria over Windows 95. In rational terms it was a fairly boring operating system, basically a Mac clone 10 years too late and far less elegant or powerful than Nextstep. But rational terms were never even considered as millions of consumers felt a sudden craving for a Start button and menus that popped up from the bottom of their screens instead of the top. It was like a modern version of Saint Vitus's dance, which caused 15th-century Italians to go into mad fugues of imitative calisthenics till they collapsed in a state of terminal exhaustion.

Maybe that's the real genius of Gates: Even though he is widely condemned and reviled, he manages to infect millions of people with his values. He makes dull software seem incredibly important.

Consequently, someone like Jake ends up babbling like an industry gossip columnist. "Bill's in big trouble, now," Jake tells me, as he stands up and stuffs a copy of Auto Trader magazine into his back pocket. "What happened to Apple, it's gonna happen to Microsoft. This Java shit has 'em running scared. You think anyone's gonna buy a new version of Word when you can rent wordprocessing from a Web site for 50 cents an hour?"

Indeed, Microsoft did undergo a massive reorganization this year. Out of four divisions, two were axed, and the head of another took early retirement. An Internet Platform and Tools Division was created just three months after Gates had sworn that it would never be necessary. Five thousand employees were bused to an auditorium where they received a new, heretical scripture regarding the Net: "Embrace existing standards - and extend them."

Could that be true? Well, Blackbird, a highly hyped tool using proprietary standards to put text and graphics on The Microsoft Network, was promptly killed and resuscitated as the Microsoft Internet Studio, which now uses existing standards to create text and graphics on Web pages. A month later, a press release announced that Microsoft's Internet Explorer plug-ins would be compatible with Netscape, and vice versa.

Meanwhile, Microsoft faced new opposition. Apple teamed up with Silicon Graphics to create Moving Worlds, a 3-D environment specifically designed for the Internet. "This initiative represents the first opportunity for users to experience ubiquitous, animated 3-D on the Web," said David Nagel, then Apple's senior vice president of worldwide research and development, now president of AT&T Laboratories. The Moving Worlds standard has been endorsed by more than 50 companies including Adobe, Asymetrix, Autodesk, Borland, Electronic Arts, IBM, Macromedia, Sega, and Sony - but not Microsoft.

It looks as if a future standards battle is shaping up over 3-D on the Net. But - wait a minute, I sound as if I'm touched with Redmond voodoo myself, babbling on like this. Let's keep things in perspective here. Why should we care about 3-D graphics? Isn't this just another gimmick, another Windows 95, another way of getting consumers to do the techno biz version of Saint Vitus's dance?

Maybe so, but in this case there's substance behind the hype. The term 3-D doesn't just mean Web-page buttons that look as if they've been carved out of veined marble. 3-D means objects that can move around freely in a video environment that the viewer can enter and explore. In other words, this is the first essential step toward true virtual reality.

The Web is too slow for intensive graphics traffic right now, but that bottleneck isn't going to last forever. Cyberspace is going to happen, and it will need true 3-D. For those of us who have been reading science fiction and waiting impatiently for total sensory immersion in a shared universe, Microsoft's interest in hiring graphics pioneers suddenly seems more interesting.

One thing remains unexplained, though. These pioneers were individualists, yet they surrendered to an organization commonly described (only half jokingly) as the Evil Empire. Why would they go along with that?

Everyone calls it the "campus," and it does have a modern collegiate look, with scores of shoebox-shaped, dormlike buildings scattered amid grass and trees. But there's something artificial about the landscape. The lawns are so well manicured they could be AstroTurf. Little trees beside the gently curving service road are perfectly symmetrical, like clip art from an architect's visualization. The topography is as carefully calculated as Disneyland, without the rides. Behind the tinted glass of the boxy buildings are thousands of tiny offices, each measuring about 7 feet by 10. Really, the campus look is only skin deep. This is not a laid-back academic environment - it's a system contrived to extract maximum brain labor with minimum worker backlash.

The decor inside is a muted blend of pastels, like the hallways of a Hyatt. There's fine art on the walls - Robert Rauschenberg lithographs and 17th-century woodblock prints. Most software companies dump their employees in uncivilized cubicles separated by chin-high partitions, where you have to wear headphones to block the background noise. But not here. It's so quiet you can hear the hiss of air conditioning. No phones are ringing, perhaps because email is the preferred mode of communication. There's no idle chatter among employees, because each person is sequestered behind a solid-core heavyweight door. Half the offices face the outside world, but the rest are internal, like monastic cells, each containing a Softie staring at a screen in solitary contemplation.

Almost 9,000 isolated individuals meditatively creating - what? Something that doesn't really exist. Patterns, sequences of 1s and 0s. It's about as far from a smokestack industry as you can get, but I suspect that Henry Ford would feel right at home here, because, in the nicest possible way, it's a production line.

I'm being treated like a loose cannon. Everyone's on guard. A public relations representative accompanies me, monitoring each conversation and taking copious notes. In one office, pinned to a bulletin board, I notice a multipage memo warning people what to say - and what not to say - during Charles Platt's visit. Still, after some negotiation, certain assurances have been given. I will be allowed to see some 3-D goodies, and I will talk to the Graphics Gods Microsoft has acquired with the obsessive completism of a kid collecting baseball cards.

First, though, I meet the man who created the research department.

Nathan Myhrvold is a cheerful, chubby fellow with a pink-and-white complexion, a sandy beard, and brown curls flopping over the collar of his white sweatshirt. In blue pants, loafers, and wire-framed glasses, he's a prototypical Micronerd - also, of course, a multimillionaire, since he's been with the company for 10 years. (Generally speaking, any coder who arrived here more than five years ago has stock options worth a couple million.)

Myhrvold has been referred to in the trade press as "Microsoft's Mastermind." (See "The Physicist," Wired 3.09, page 152.) He's a member of the Office of the President and reports directly to Gates. His office is triple-size, with a view of distant snowy mountains. He even has a secretary, for heaven's sake.

Myhrvold established the research department in 1991 after he wrote a memo suggesting that there was no reason why research should always be done at academic institutions. Initially, he says, "a lot of people were afraid we would fall into the same trap as Xerox PARC or Bell Labs." He shakes his head knowingly. "Those companies did a lot of research they had no business doing."

The implication is clear: Microsoft managers are expected to be savvy enough to keep computer scientists under control and stop them from straying into "inappropriate" areas. You might think researchers would find this unpleasantly limiting, but according to Myhrvold they love it. "Most people in computer research want to see their work have some impact," he explains. "We convinced them that here, their work could touch the lives of millions of people. This was a unique value proposition."

The usual goal of a scientist is to uncover truths about the universe. Myhrvold knows this well; he has a doctorate in theoretical/mathematical physics, he used to do research on cosmology and quantum gravity and quantum-field theory in curved space-time with Stephen Hawking, and he still serves part-time on the advisory board of Princeton University's department of physics. Regarding his academic past, he says, "I do miss the purity of truth-finding."

Yet he happily traded it for a chance to "touch the lives of millions of people," and other computer scientists were willing to make much the same deal. First he lured researchers from IBM to work on natural language processing. Graphics research commenced two years after that.

What, specifically, are they working toward? "We're assuming great increases in processing power," says Myhrvold. "Five years hence, you'll record video straight onto the computer, and it will be a standard data type. By Christmas 1997, we're going to have MPEG video in every machine."

And if this sounds a bit prosaic, Myhrvold is quick to evoke the shared vision thing: "These tools have no precedent in the analog world. How many Spielbergs or Picassos never get to exercise their talent because they never have access to tools or audience? I believe that if we are able to empower millions of people with these tools, there will be a large and positive benefit. If the population of the Earth reaches its full creative potential and develops works of intellectual property that can be communicated with reduced friction of distribution, we could see a digital renaissance."

One of the IBM people who came to Microsoft was Dan Ling. It was he who assembled the 3-D graphics team.

Ling may be the only person on the Microsoft campus with a short, neat haircut and a starched, pale-blue shirt buttoned at the neck. His desk is clean and tidy. His black leather briefcase sits neatly on a gray file cabinet. He's very polite, very low-key, and very corporate.

At IBM's Thomas J. Watson Laboratory, Ling worked on virtual reality simulations. Consequently, he says, "when I came to Microsoft, 3-D interactive graphics interested me. We now have a combination of software and hardware ideas, which independent hardware vendors might want to build into their products."

Let me try to rephrase this highly diplomatic statement. It seems that Microsoft meets regularly with video-board manufacturers. None of them is big enough to establish standards on its own, so the hardware specs are devised "jointly," with Microsoft taking the lead. In other words, the software company is now setting standards for hardware.

Like Myhrvold, Ling used to be more of a scientist. He holds seven patents. He co-invented video RAM. "It is a loss," he says, when I ask him if he misses theoretical work. "But there are compensations." And though his style is very different, he goes into a soliloquy remarkably like Myhrvold's. "Seeing work being put into practice is important. At IBM, we had a very hard time making an impact on the company as a whole, but people here are strongly motivated to think about how an idea could wind up in a Microsoft product and end up on a million people's desks."

Jim Kajiya does the hands-on work of leading the 3-D graphics group. He's a total contrast to Ling: laid-back and informal, smiling a lot as if the idea of working at Microsoft still amuses him. With long, wild hair and frizzy sideburns, he looks like a Zen master or a hermit; but his office is a friendly place with a stereo and well-stocked bookshelves. He even has a full-size leather couch, though it's a tight squeeze. He sits on his office chair, and I sit on the couch; our knees are about 6 inches apart.

For 15 years, Kajiya worked at Caltech tackling 3-D problems such as rendering "touchable" graphics instead of smooth, unreal surfaces. "A lot of the research I've been doing over the last two decades is just now becoming relevant," he says, "because we didn't have the hardware to handle the computationally intensive calculations till now."

But what, exactly, are his techniques going to be used for?

He fidgets. "I still have the instincts of an academic," he says. "In academia, all information is free. But here -" He spreads his hands apologetically.

Well, we're thinking about cyberspace, right? Does that mean virtual shopping malls?

"Oh, no, that's not very interesting. Cyberspace is about interacting with each other."

Kajiya points out that if we go for a walk while talking to a friend in the real world, we interact using a mix of words, expressions, gestures, and movement - and we're simultaneously aware of the environment. Point-to-point video isn't capable of conveying this experience, but 3-D graphics can do it if the synthetic world is properly modeled and each person is represented by a suitable agent.

This sounds like cyberspace as predicted by science fiction writers such as Vinge or Stephenson. "Yes," Kajiya agrees. "That vision is commonly accepted - at least among a relatively lunatic fringe of computer scientists. People assume it will just get done, somehow. But someone needs to figure it out for real. I'm very serious about making this happen."

Does he expect to realize it within, say, 10 years?

He hesitates for an instant, as if he might have had an earlier date in mind. Then he reasserts his diplomatic self-control. "Ten years," he says. "Yes."

Like Myhrvold, Kajiya has a grandiose long-term vision. "It's clear to me that what we're doing collectively is inventing a new medium. I think that's a pretty big deal. New media don't come along all that often, and when they do they cause tremendous social and cultural changes in our civilization. Writing, radio, television - 500 years from now those media are going to be largely forgotten. This digital, interactive, shared-spaces medium is the thing that's going to survive from this culture in a significant way."

And for Kajiya, Microsoft is the best means to the end. "When you're in academia, you write papers and eventually those ideas get used by other people, but it's a very long pipeline." He looks at me frankly. "I wanted to be somewhere I can actually do things and have an effect."

Kajiya looks surprised and worried when I mention that I was told I could view some of his work. I'm escorted to a lounge area - where I wait. I sense hasty conferences offstage. Finally my PR escort returns with two young members of the graphics team, Hugues Hoppe and Steven Gortler, both of them looking startled by the prospect of talking to a journalist.

I'm told that Microsoft's most impressive 3-D samples are currently being evaluated for inclusion in this year's Siggraph, the annual computer graphics conference. It would be, er, procedurally unacceptable to show me work that's currently under consideration.

Still, there are a couple of little things I can see. Hoppe slides a tape into a VCR. "This is an algorithm I developed," he says, sounding half shy, half proud. "It's a method for simplifying polygons in geometric modeling."

The video displays a kind of cubist sculpture built from angular, flat-faced objects. Then the faces start dividing themselves into smaller faces, and smaller still - till they become a continuous, smooth surface, a fully modeled replica of a human head. "This technique would be particularly useful online," Hoppe explains. "It's like a progressive GIF."

In other words, you won't have to wait for a full rendering to come down the wire; you'll get an approximate impression, and you can move on to something else as soon as you lose interest.

"Also," says Hoppe, "it doesn't take more storage space than the original complex image."

Now Gortler takes over. He demonstrates a system that he developed with another researcher, Michael Cohen, for automatically converting real-world objects into 3-D data. A bowl of fruit is placed on a flat white panel marked with distinctive black circular targets. Two more panels are placed behind the bowl. An observer captures the setup with a camcorder, shooting from various angles. The video frames are then automatically analyzed to capture and organize the way that light leaves the scene, in a process similar to making a hologram. This way, a graphics landscape can be quickly populated with solid objects captured from the real world as easily as we capture 2-D objects using a scanner.
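The idea of "capturing the way light leaves the scene" can be caricatured as a ray-color lookup table. The sketch below is purely illustrative - the sample data and the nearest-neighbor shortcut are mine, not theirs; a real system interpolates smoothly in a four-dimensional ray space:

```python
# Caricature of image-based capture: record the color of light rays
# leaving the scene from known camera positions, then answer a query
# for a new viewpoint by finding the nearest recorded ray.

def nearest_ray_color(samples, query):
    """samples: list of ((x, y, angle), color); query: (x, y, angle)."""
    def dist(ray):
        return sum((a - b) ** 2 for a, b in zip(ray, query))
    return min(samples, key=lambda s: dist(s[0]))[1]

# Rays captured from two hypothetical camcorder positions.
captured = [
    ((0.0, 0.0, 0.0), "red"),    # the bowl seen head-on
    ((1.0, 0.0, 0.3), "green"),  # the bowl seen from the side
]

print(nearest_ray_color(captured, (0.1, 0.0, 0.05)))  # prints "red"
```

With enough recorded rays, a renderer never needs a geometric model of the fruit at all; it simply resamples the stored light, which is why the process resembles making a hologram.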

"When people see the quality of our work here, they'll have to reevaluate their guesses about what's happening in our group," says Gortler, after his demo. He grins. "A lot of people thought we'd just be working on the next version of Excel."

Automated techniques stand in stark contrast to the work of Jim Blinn, who epitomizes Thomas Edison's famous maxim, "Genius is 1 percent inspiration, 99 percent perspiration."

Blinn created his first computer graphics back in 1967. He had to print them onto paper because he lacked access to video monitors with graphics capabilities.

Blinn is a fascinating character. He's a tall, modest man with a graying beard, looking slightly disheveled in a buttoned cardigan, like a 1950s math professor. In his spare time he plays the trombone, and he writes regular semihumorous math columns for the IEEE Computer Graphics and Applications journal. His list of favorite movies includes Deathstalker II, The Brady Bunch Movie, Amazon Women on the Moon, and Beach Babes from Beyond. Under the heading of "really good books that have changed my life" he mentions a couple of titles by Dr. Seuss, whose Oh, the Places You'll Go! was read aloud at his wedding.

If this seems a confusing mix of science and whimsy, it rests on a serious bedrock of obsession. Blinn has always been fascinated by every aspect of computer graphics. He met Jim Kajiya back in 1974. "He was the hardware engineer on the first frame buffer display," Blinn recalls. "And I was the first grad student waiting to play with it."

After inventing a couple of 3-D rendering techniques that are still used today, Blinn moved in 1977 to JPL, a division of Caltech. "I was the first human being ever to see a complete map of the surface of Ganymede. I copied data from the image processing group, and I processed it myself."

From that time on, Blinn did just about everything himself - because, he says, there was no one else to do it. "The first animation I produced for JPL, in 1979, was just barely technically possible. In fact, management didn't believe it was possible."

That simulation showed - with scrupulous accuracy - the solar system as seen by Voyager space probes. Even by modern standards the sequences look ambitious, including shifting perspectives and planets performing the intricate ballet of orbital mechanics.

Creating the effects entailed relentless drudgery. "I used to copy-type numbers from blueprints into a text editor," Blinn recalls. "I invented a little modeling language to convert that data into solid images of the space probe itself." To render each planet, Blinn obtained assorted pictures and mapped them onto a flat surface. He painstakingly joined the flattened photos, then mathematically rewrapped them around a sphere and applied a suitable texture, "all using my own tools," he says with a shrug.

He ran a PDP-11 day and night for nearly four weeks, yielding number sequences that were saved on magnetic tape. Then it was time to copy the images onto 16-mm film - which presented a problem, since a system to do this didn't yet exist. Blinn had to capture one frame at a time using a movie camera pointed at a video screen where images were generated with painful slowness. "I sat there changing the tapes every 45 minutes for about eight hours. Then I got the film back from the lab two days later and found the exposure was wrong, so I had to do it all over again."

All this for a five-minute sequence. "You had to be motivated," Blinn remarks dryly. "You really had to want to see that movie."

His work had an important influence, though. It was the first computer animation many Hollywood special-effects people ever saw. For a brief three months, he worked with Alvy Ray Smith at Lucasfilm, but Blinn was wary of the movie business. "There's a lot of financial risk, and your stuff may get edited out of the movie. It seemed more heartache than was justified. I was more interested in making films myself."

Blinn created some sequences for Carl Sagan's Cosmos series, and he did two educational series on PBS, Project Mathematics and The Mechanical Universe. The latter, in 52 parts, contained about eight hours of animation. Blinn describes it as "a gargantuan effort over a period of three years." As always, he worked solo, even doing some of the scriptwriting. "Ever since my childhood in a small town," he says, "I have learned to be self-sufficient."

He stayed at Caltech for almost 20 years - and then was laid off in 1995 when his last project ran out of money. His old friend Jim Kajiya didn't have to try too hard to persuade him to join Microsoft.

Eventually, Blinn says, he'd like to get some of his rendering techniques running under Windows. In the meantime, he's been test-running graphics tools that his friends here have been building, including a 3-D application program interface named Direct3D and an interactive language program called Active VRML.

This is the first time he has functioned as a member of a team, but he's happy to be here, just as everyone else claims to be, and for the same reasons. "Research groups care whether their ideas will make it into products. I now have a channel into products that are going to be used by millions of people." He pauses, formulating one of his modest, matter-of-fact statements. "It's nice knowing that the results of your work are going to get a wide distribution."

Alvy Ray Smith leans back and puts his sneakers up on his desk. Handsome, silver-haired, and silver-bearded, with a western style of speech and dress, Smith has an energetic, powerful presence. "What I'm trying to do," he says, speaking rapidly but emphatically, "is make sound and pictures ordinary."

Fifteen years ago, wordprocessing or database programs were new enough to be exciting. Today, they're basic tools of the trade. Smith wants multimedia to follow a similar curve.

"We'll have real-time 3-D video in 2003," he says in his authoritative style, as if a release date has already been confirmed. "By that I mean we'll have the power to compute 30 frames a second in 640-by-480 resolution."

He has personal reasons for wanting this to happen. "I keep wanting to make art," he says. "But I keep realizing the tools aren't there. I think I'm going to be spending most of my life constructing the tools. It's a lifetime commitment to make this happen." He pauses and frowns. "I just hope everyone remembers who did it, and how it got there."

Smith has certainly done his best to be memorable. His résumé reads like a list of great moments in computer graphics. He joined Xerox PARC in 1974 and produced work that found its way into the New York Museum of Modern Art. He arrived in the nascent stage of the Computer Graphics Laboratory at the New York Institute of Technology, where he collaborated with moviemaker Ed Emshwiller and co-invented the Alpha Channel, a concept that became so ubiquitous it appears even in mass-marketed programs such as Photoshop. From 1980 through 1986, he was director of computer graphics research at Lucasfilm; while there he directed the beautiful "genesis" sequence for Star Trek II: The Wrath of Khan, depicting the spread of life across a new world. He co-founded Pixar (where the classic Tin Toy animated short was made), then Altamira, which was then acquired by Microsoft.

Here in Smith's office, his image-editing program, Altamira Composer, is on the screen. "No other software lets you do this," he says, quickly retrieving a half-dozen color photographs of objects and dragging them around. The images are like fragments torn from a bitmap, yet the program preserves their separate identities, so they can still be selected as objects. "Pixel graphics and geometric objects don't normally coexist in the same space. Very few people really understand how to make that happen, but I do."
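The alpha channel Smith co-invented is part of what makes this kind of layering possible: each pixel carries a coverage value alongside its color, so a fragment knows where it ends. Below is a minimal sketch of the standard "over" compositing step, as later formalized by Porter and Duff; the pixel values are illustrative, and this is the textbook operation, not Composer's internals:

```python
# Porter-Duff "over" compositing on straight (non-premultiplied) RGBA
# values in the range 0.0-1.0. The alpha channel stores per-pixel
# coverage, letting layered image fragments blend correctly.

def over(fg, bg):
    """Composite a foreground RGBA pixel over a background RGBA pixel."""
    fr, fgn, fb, fa = fg
    br, bgn, bb, ba = bg
    out_a = fa + ba * (1.0 - fa)
    if out_a == 0.0:
        return (0.0, 0.0, 0.0, 0.0)
    def blend(f, b):
        return (f * fa + b * ba * (1.0 - fa)) / out_a
    return (blend(fr, br), blend(fgn, bgn), blend(fb, bb), out_a)

# A half-transparent red object over an opaque blue background mixes
# to an opaque purple:
print(over((1.0, 0.0, 0.0, 0.5), (0.0, 0.0, 1.0, 1.0)))
# (0.5, 0.0, 0.5, 1.0)
```

Because every fragment keeps its own alpha, dragging an object across a composition is just a matter of redoing this blend where the layers overlap - which is the trick that lets pixel images behave like geometric objects.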

Will Composer be marketed in some form by Microsoft?

He shrugs. "Presumably."

Why did he choose to come and work here? He answers in a characteristically blunt style. "They were out shopping for imaging software, and we won the competition. I had to go through a bit of adjustment from an eight-person company to 20,000, but I'm treated very well here. I have some of my favorite colleagues, the company is hungry for the knowledge that we have, my wife has always been hammering on me to move back to Seattle, and I got paid well for Altamira. We were afraid, negotiating with Microsoft, that they'd dance with us, steal our ideas, then kiss us off. But they were gentlemanly in all negotiations."

Smith seems to enjoy his elder statesman role on the campus, where he circulates a series of technical memos lecturing his co-workers and "laying down pieces of philosophy," as he puts it. He's also concerned with strictly practical, unglamorous organizational matters. "You can't just have a 3-D system," he says, "you have to have content management married to it, along with a sound system. Does any of this exist? No. People have talked about it for 20 years, and now we're doing something about it. What's astonishing is that it's Microsoft. Microsoft was the DOS company, and for us Unix guys DOS was a joke. All my life I used Unix-class machines and laughed at the PCs - they were little toys. I saw the Homebrew Computer Club when it was out in a garage in Menlo Park, and people thought I would be interested, but I wasn't because they couldn't do pictures. When I finally bought a 386, I was amazed to find it was an 8-mips machine. At Lucasfilm, all the work we did on Tin Toy and Star Trek II was on 5-mips systems, or less."

Like his colleagues, Smith has done his share of hands-on programming over the years, first in assembly language, then in C++. At the same time, though, he's an exhibited painter, and his work has appeared on book and magazine covers. His uncle taught oil painting; his father was an amateur artist and a schoolteacher. "I didn't get slapped when I tried to paint on the refrigerator," Smith says. "That may be the difference between people who think they can paint and people who think they can't."

He also grew up in close proximity to technology. "I come from New Mexico, near White Sands, where they test-fired rockets. I remember one day coming around a curve and seeing an Atlas missile beside the road. I heard the first atomic bomb go off. My great-uncle's ranch was right next to Trinity site."

More than any of the other graphics researchers, Smith uses an aesthetic vocabulary. I ask him if he can define the aesthetic qualities of computer graphics that seem to exert such a powerful fascination.

He thinks for a moment. "You know, people always ask me if we're going to be able to do accurate simulations of famous actors. Always! But whenever I hear these questions, I sense that there's something else that's really being asked. Is it just fear of human beings being replaced? No, I think it's more like awe. Maybe people have a desire for something ideal. Personally I cannot make the leap of faith to believe we can simulate actors - or if we do, it'll be so difficult, it would be cheaper to use a real person. But to a limited extent we can create perfection: the ideal apple, the ideal landscape. Perhaps that's where the fascination lies."

In which case, 3-D may be most effective if it retains an ethereal quality instead of trying to simulate the rough edges of reality. Toy Story was memorable partly because it looked subtly different from the real world. Adult characters in particular were highly stylized - which was deliberate, according to Smith, because a creature that's almost human looks intensely disturbing. "If you get close to rendering a human being, you'd better be damned close," he says. "In Tin Toy we included a baby; it worked in that movie because it was supposed to look monstrous, but the lesson was loud and clear."

According to Smith, though, the power of graphics extends far beyond mass entertainment. He points out that computers have a unique ability to control complexity. Science and engineering used to be reductive, simplifying tasks; but now, as he puts it, "complexity is OK."

Here, then, is his version of the shared vision. Since computers can cope with complexity, he has an intuitive feeling that they'll be essential to a world that contains billions of people and is still experiencing population growth. And graphics, he believes, can make communication among all those people as effective as possible.

Of course, this assumes that computing power will spread into lower income groups. Smith sees this as inevitable. "There'll be millions of old computers you can pick up for a dime. That's why I want to make this software seem ordinary. I want anyone to be able to use it."

And this, in turn, is why he's happy to be at Microsoft. In the past, he says, he felt like Captain Cook, mapping the territory of computer graphics. That was exciting, but now it's time to share his vision on a mass scale. Where better to do that than in a company known for its marketing skills?

"This is the best place to do what I've been wanting to do," he says. "We did this stuff out of sheer love for many years - but we had to suffer a lot of pain. Today, there's enough computing power around that we shouldn't have to suffer anymore."

In the 1970s, computer graphics was a tiny field. A 1-mips computer with RGB frame buffers capable of full-color rendering cost maybe US$500,000 in 1975 dollars, and you had to be a programmer to make it work. Since few people could both write code and create art, the first Siggraph conference attracted only 600 people, and they stayed the night in a college dorm where they showed each other videos in the evening. They were a tight-knit fraternity.

Then came microcomputers, and then came the concept of virtual reality. Now 3-D modeling seems integral to the online future.

Put yourself in the position of those graphics pioneers, who worked for decades, always underfunded, in relative obscurity. Now factor in that programming is still largely a male occupation. Writing a program means you create something that feels like a piece of yourself and is disseminated to other systems where it takes on a life of its own. The reproductive metaphor is painfully obvious.

Additionally, the programmer/machine relationship appeals to people who enjoy being in control. First the computer does what you tell it to do, then the software functions as your surrogate and makes other computers do the same thing.

It's no surprise, then, that software designers talk about wanting to have an impact. This is why some of them give away their programs as freeware, and it's also the reason a program's success is almost always measured by the number of units shipped.

Maybe this all helps to explain the Redmond voodoo, the fascination with software, the lure of computer graphics, the buying frenzy triggered by products such as Windows 95. Even for outsiders, there's vicarious excitement in seeing the strange power relationship between programmer and software, software and machine, machine and user. Seeds of binary code taking command of millions of computer systems - it's a science fiction vision, like a benign alien invasion. And since 3-D graphics have that element of otherworldly strangeness, they seem totally appropriate to the medium.

Meanwhile, to a programmer with that lust to disseminate his vision and a not-so-subtle desire for people to "Be reasonable - do it my way," a company like Microsoft isn't a malevolent giant - it's a generous patron and a heaven-sent collaborator. Press reports suggest that more than 2,000 of its employees are now stock-option millionaires who can afford to quit anytime, yet they still go on working there, enjoying their role in reaching out and molding our future.

So, why would a bunch of feisty graphics pioneers choose to join them?

Better you should ask why they would want to go anyplace else.