The Super Power Issue: Being Invisible

Next-gen optical camouflage is busting out of defense labs and into the street. This is technology you have to see to believe.

Invisibility has been on humanity's wish list at least since Amon-Ra, a deity who could disappear and reappear at will, joined the Egyptian pantheon in 2008 BC. With recent advances in optics and computing, however, this elusive goal is no longer purely imaginary. Last spring, Susumu Tachi, an engineering professor at the University of Tokyo, demonstrated a crude invisibility cloak. Through the clever application of some dirt-cheap technology, the Japanese inventor has brought personal invisibility a step closer to reality.

Tachi's cloak – a shiny raincoat that serves as a movie screen, showing imagery from a video camera positioned behind the wearer – is more gimmick than practical prototype. Nonetheless, from the right angle and under controlled circumstances, it does make a sort of ghost of the wearer. And, unlike traditional camouflage, it's most effective when either the wearer or the background is moving (but not both). You don't need a university lab to check it out: Stick a webcam on your back and hold your laptop in front of you, screen facing out. Your friends will see right through you. It's a great party trick.

Of course, such demonstrations aren't going to fool anyone for more than a fraction of a second. Where is Harry Potter's cloak, wrapped around the student wizard as he wanders the halls of Hogwarts undetected? What about James Bond's disappearing Aston Martin in Die Another Day? The extraterrestrial camouflage suit in the 1987 movie Predator? Wonder Woman's see-through jet? It's not difficult to imagine a better system than Tachi's. In fact, invisibility that would satisfy any wizard – not to mention any spy, thief, or soldier – is closer than you might think.

US Defense Department press releases citing "adaptive," "advanced," and "active" camouflage suggest that the government is working on devices like this. If so, it's keeping them under wraps. However, NASA's Jet Propulsion Laboratory has published a preliminary design for an invisible vehicle, and battalions of armchair engineers have weighed in with gusto on newsgroups and blogs. As it happens, most of the schemes that have been advanced overlook the complexities of the problem. Invisibility isn't a simple matter of sensors that read the light beams on one side of an object and LEDs or LCDs that reproduce those beams on the other. In fact, such a system would work about as well as the laptop party trick with the webcam's lens removed: Objects right up against the sensors would produce blurry images on the display, but a few centimeters away they'd disintegrate into a featureless gray haze.

A real invisibility cloak, if it's going to dupe anyone who might see it, needs to represent the scene behind its wearer accurately from any angle. Moreover, since any number of people might be looking through it at any given moment, it has to reproduce the background from all angles at once. That is, it has to project a separate image of its surroundings for every possible perspective.

Impossible? No, just difficult. Rather than one video camera, we'll need at least six stereoscopic pairs (facing forward, backward, right, left, upward, and downward) – enough to capture the surroundings in all directions. The cameras will transmit images to a dense array of display elements, each capable of aiming thousands of light beams on their own individual trajectories. And what imagery will these elements project? A virtual scene derived from the cameras' views, making it possible to synthesize various perspectives. Of course, keeping this scene updated and projected realistically onto the cloak's display fabric will require fancy software and a serious wearable computer.
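
In outline, each frame would run a capture-model-render loop along these lines. This is a structural sketch only; every class and method name in it is hypothetical, not from any real system:

```python
# Hypothetical per-frame pipeline for the cloak described above.
STEREO_PAIRS = ["forward", "backward", "left", "right", "up", "down"]

def render_frame(cameras, scene_model, fabric):
    # 1. Capture: each of the six stereo pairs grabs a synchronized frame.
    frames = {d: cameras[d].capture_pair() for d in STEREO_PAIRS}

    # 2. Model: triangulate pixel depths and fold them into a persistent
    #    3-D model of the surroundings (blank where nothing has been seen).
    scene_model.update(frames)

    # 3. Render: for every display element, synthesize the view of the
    #    background from each angle an observer might look through it.
    for element in fabric.elements:
        element.show(scene_model.render_views(element.position,
                                              element.orientation))
```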

Many of the tech hurdles have been overcome already. Off-the-shelf miniature color cameras can serve as suitable light sensors. As for the display, to remain unseen at a Potteresque distance of, say, 2 meters, the resolution need not be much finer than the granularity of human vision at that distance (about 289 pixels per square centimeter). LEDs this size are readily available. Likewise, color isn't a problem – 16-bit displays are common and ought to suffice.
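
That 289 figure squares with the usual one-arcminute estimate of human visual acuity – an assumption of mine, since the article doesn't show its work. A quick check:

```python
import math

# Human visual acuity: roughly 1 arcminute (1/60 degree) per resolvable spot.
acuity_rad = math.radians(1 / 60)

# At a viewing distance of 2 meters, the smallest resolvable feature is:
distance_m = 2.0
spot_m = distance_m * acuity_rad     # ~0.58 mm

# Pixel density needed so individual pixels can't be resolved:
px_per_cm = 0.01 / spot_m            # ~17 pixels per linear centimeter
px_per_cm2 = px_per_cm ** 2          # ~296/cm^2, in line with the 289 (17^2) above

print(f"spot size: {spot_m * 1000:.2f} mm")
print(f"{px_per_cm:.1f} px/cm, {px_per_cm2:.0f} px/cm^2")
```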

But it will take more than off-the-shelf parts to make the cloak's image bright enough to blend in with the daytime sky. If the effect is to work in all lighting conditions, the display must be able to reproduce anything from the faintest flicker of color perceptible to the human eye (1 milliwatt per square meter) to the glow of the open sky (150 watts per square meter). Actually, the problem is worse than that: According to Rich Gossweiler at HP Labs, the sun is 230,000 times more intense than the sky surrounding it. If we want the cloak to be able to pass in front of the sun without looking hazy or casting shadows, we'll need to make it equally bright. Of course, this would put severe demands on the display technology – LEDs just ain't that brilliant – and it would increase battery size or shrink battery life accordingly. So let's ignore the sun and take our chances. An average TV screen looks blank in full daylight, so we'll need something brighter, more along the lines of a traffic light.
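
To put numbers on why we're ignoring the sun, here's a back-of-envelope dynamic-range calculation using the figures above:

```python
import math

dimmest = 0.001      # W/m^2: faintest perceptible flicker
sky = 150.0          # W/m^2: glow of the open daytime sky
sun = sky * 230_000  # sun vs. surrounding sky, per Gossweiler's figure

# Bits of brightness resolution needed to span each range:
bits_sky = math.log2(sky / dimmest)  # ~17 bits: demanding but conceivable
bits_sun = math.log2(sun / dimmest)  # ~35 bits: forget it

print(f"night-to-sky range: {bits_sky:.1f} bits")
print(f"night-to-sun range: {bits_sun:.1f} bits")
```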

Response time is also tricky. Like a TV screen, the cloak's display must be able to update faster than the eye's ability to perceive flickering. It has to register motion in real time without the blurring, ghosting, smearing, and judder that plague today's low-end monitors. A laptop LCD screen isn't going to cut it. A lattice of superbright LED microarrays probably will.

The real challenge, though, is turning the video images into a realistic picture. The view from a pair of cameras strapped to your body is different from the perspective of an observer even a short distance away. The observer can see things the cameras can't, thanks to parallax – the way the angles change with the distance.

Imagine a life-size photograph of a wagon as seen from 20 feet away. The view of this photo from an additional 20 feet away is about the same as a naked-eye view of the real wagon from 40 feet away. It doesn't satisfy depth perception but will trick a casual glance. But step back 10 more feet, and suddenly the edges don't match anymore; objects behind the wagon have a perfectly rectangular discontinuity around them. It's painfully clear that you're looking at a picture.
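
A few illustrative numbers (mine, not the article's) show how steeply parallax varies with depth – and why a flat reproduction can't keep its edges lined up:

```python
import math

# When an observer steps sideways, nearby objects shift by a larger angle
# than distant ones. A flat image shifts them all equally, so edges stop
# matching the real background behind it.
step_m = 1.0  # observer moves 1 meter to the side

for depth_m in (5, 10, 20, 40):
    shift_deg = math.degrees(math.atan(step_m / depth_m))
    print(f"object at {depth_m:>2} m shifts {shift_deg:5.2f} degrees")
# An object 5 m away swings ~11 degrees; one 40 m away, barely 1.4.
```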

The solution? Create a synthetic image based on a 3-D model of the world. It's probably impractical to map real-world locations ahead of time, so this virtual scene will have to be constructed on the fly based on data from the cameras. The stereoscopic pairs allow the system to triangulate the location of every pixel in its sight, as well as detect color and brightness. Anything out of the cameras' view will appear as a blank area, but as the cameras move, they'll eventually see enough to build a model of the entire surrounding environment. To turn the model into a picture, the system will need to calculate the paths that a light beam can take through the scene on its way to the observer's eye. This is known as ray-traced rendering, and it's not trivial.
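
The triangulation step relies on the standard stereo-disparity relation: depth equals focal length times baseline divided by disparity. A minimal sketch, with an invented focal length and camera spacing:

```python
# Stereo triangulation: a point seen by both cameras of a pair appears
# shifted (disparity) between the two images; depth follows from similar
# triangles. The constants below are illustrative, not from the article.

FOCAL_PX = 800.0   # focal length in pixel units (hypothetical camera)
BASELINE_M = 0.10  # 10 cm between the two cameras of a stereo pair

def depth_from_disparity(disparity_px: float) -> float:
    """Distance to the point, in meters, from its pixel disparity."""
    if disparity_px <= 0:
        raise ValueError("point at infinity or behind the cameras")
    return FOCAL_PX * BASELINE_M / disparity_px

# A feature 40 px apart in the left/right images is 2 m away:
print(depth_from_disparity(40.0))  # -> 2.0
```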

Especially thorny is how to cover the cloak with photorealistic synthetic imagery in a way that will fool observers from any angle. Standard displays (even flexible ones) are only intended for straight-on viewing. An invisibility cloak's pixels must spread their light in all directions, so the edges look as good and as realistic as the center. Even then, you'd have an image that looked pretty good from the one angle at which everything lined up with the background, but lousy and strange from anywhere else. The cloaked alien in Predator, for instance, is pretty darned invisible standing still in a gloomy jungle, but running through a well-lit area, he betrays a clear case of both parallax error and edge-color error. Harry Potter, on the other hand, walks effortlessly among peers and professors, undetected as long as he doesn't breathe too loudly.

If that's what we're after, our display will have to be an array of hemispherical lenses, each with a tiny 180 x 180-pixel videoscreen behind it. These fish-eyes – hyperpixels, if you like – will send custom-colored light beams to every degree of arc, allowing for up to 32,400 different viewing angles. Paired with image-warping software that coordinates and distributes all the different views, this is probably sufficient to trick the eye in most circumstances.
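
Here's one way a hyperpixel might route its light, mapping an observer's direction to one of those 32,400 sub-pixels – a sketch under my own coordinate convention, with one sub-pixel per degree of arc:

```python
import math

GRID = 180  # sub-pixels per axis: one per degree, 180 x 180 = 32,400 beams

def subpixel_for_observer(dx: float, dy: float, dz: float):
    """Map a direction from the hyperpixel toward an observer's eye
    (dz > 0 points out of the fabric) to the sub-pixel whose lens
    trajectory serves that viewing angle."""
    if dz <= 0:
        return None  # observer is behind the fabric; no beam to send
    # Angle off the fabric normal in each axis, mapped onto [0, 180):
    az = math.degrees(math.atan2(dx, dz)) + 90   # 0..180 across
    el = math.degrees(math.atan2(dy, dz)) + 90   # 0..180 up/down
    return int(az) % GRID, int(el) % GRID

# An observer looking straight at the fabric gets the center sub-pixel:
print(subpixel_for_observer(0.0, 0.0, 1.0))  # -> (90, 90)
```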

Invisibility Today… In Susumu Tachi's cloaking system, a camera behind the wearer feeds background images through a computer to a projector, which paints them on a jacket as though it were a movie screen. The wearer appears mysteriously translucent – as long as observers are facing the projection head-on and the background isn't too bright. (Nik Schulz/L-Dopa; AP)

…And Tomorrow To achieve true invisibility, optical camouflage must capture the background from all angles and display it from all perspectives simultaneously. This requires a minimum of six stereoscopic camera pairs, allowing the computer to model the surroundings and synthesize the scene from every point of view. To display this imagery, the fabric is covered with hyperpixels, each consisting of a 180 x 180 LED array behind a hemispherical lens. (Nik Schulz/L-Dopa)

Now we just need to fit 289 hyperpixels into a square centimeter, along with sensors that track the position and orientation of each one. Multiply by 4 square meters of fabric, and add, oh, a wee bit of computing power.

How much computing power? Overall, our display has something like 375 billion pixels (32,400 per fish-eye times 11.6 million fish-eyes), or the equivalent of 286,000 SXGA monitors. Rendering a decent image generally requires at least 17 traced rays per pixel. However, even at the lowly rate of 1 ray per pixel, with 60 refreshes a second, the cloak will require a CPU running at 10 billion GHz. Add image capture, stereo vision, 3-D scene manipulation, image warping, and correction for deformations of the cloak, and we'll easily double that burden. Even if clever software tricks can reduce the computing load by a factor of 100 million, we'll still need a stack of a hundred 2-GHz Pentium motherboards.
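
The pixel budget is easy to verify from the article's own figures (note that the monitor count works out for 1280 x 1024 SXGA screens):

```python
# Reconstructing the pixel budget from the figures given earlier.
hyperpixels_per_cm2 = 289
cloak_area_m2 = 4
hyperpixels = hyperpixels_per_cm2 * 10_000 * cloak_area_m2  # ~11.6 million

beams_per_hyperpixel = 180 * 180                            # 32,400
total_pixels = hyperpixels * beams_per_hyperpixel           # ~3.75e11

sxga = 1280 * 1024                                          # ~1.3 Mpixel monitor
print(f"{hyperpixels:,} hyperpixels")          # 11,560,000
print(f"{total_pixels:,} pixels")              # 374,544,000,000
print(f"{total_pixels / sxga:,.0f} monitors")  # ~286,000
```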

All these computers will require electrical power – around 8 to 10 kilowatts total, enough to run six heavy-duty hair dryers. Thus, a superpowerful, hyperefficient substitute would be really helpful. For the sake of argument, let's say that sometime in the next couple of decades, we have a computer mighty enough to tackle this task while drawing the same 100 watts that a high-end laptop does today. (If we're willing to accept Predator invisibility, Moore's law coupled with advanced graphics processing might make that possible within a decade.) The display itself will need power as well; even at 100 percent efficiency (no waste heat), it will draw at least 600 watts in full daylight (that's 150 watts per square meter to match the sky times 4 square meters of hyperpixelated fabric). At 12 volts DC, the norm for digital video systems, this level of power consumption will deplete a 2.5-kilogram, 20 amp-hour lithium-ion battery in just 24 minutes. For long daylight strolls through enemy territory, we'll need a lighter, stronger battery.
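
The battery math, spelled out with the figures from the paragraph above:

```python
display_w = 600.0    # W: 150 W/m^2 of sky brightness x 4 m^2 of fabric
volts = 12.0         # DC bus typical of digital video gear
capacity_ah = 20.0   # the 2.5 kg lithium-ion pack

current_a = display_w / volts             # 50 A draw
runtime_min = capacity_ah / current_a * 60
print(f"{current_a:.0f} A -> {runtime_min:.0f} minutes")  # 50 A -> 24 minutes
```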

Even with all this firepower, we'll never entirely avoid blank spots and misplaced pixels. Visual artifacts and anomalies will occur when a distant observer sees an object through the cloak that has never been in direct view of any of its cameras (imagine a highly dynamic environment like a battlefield, where an object can enter and exit the scene before the cameras have had a chance to process it fully). Also, one camera may see a pixel that others can't, resulting in points of known color but unknown distance. Highly fractal objects like trees may be difficult to reproduce by any method, whereas indoor and urban environments will be relatively error-free.

Notably, nothing we've discussed so far can mask the wearer's heat signature, and, in fact, the cloak is bound to generate substantial heat of its own. Harry Potter would stand out like a bonfire to even a cheapie thermal imaging system, and heat pumps and thermoelectric materials will simply add to the problem. If Harry can stand the weight penalty, a cylinder of compressed or liquefied air that slowly bleeds pressure can cool the garment and its wearer the way a spray can chills your hand.

Beyond that, all I can say is that a holographic display could substantially reduce the computing load and eliminate the need for fish-eye optics. There's no need to simulate 3-D if your display can show it naturally. Today's videoscreens don't have the resolution to display holograms, but it's likely that arrays of quantum dots – up to 1,000 times smaller than the grain of film used to capture holographic images – one day will display very bright, full-color, full-motion holograms on a flexible surface.

Until engineers find a way around these obstacles, true invisibility will remain just out of reach. So relax: The men in black aren't leaning over your shoulder as you read this. Still, the tech is physically possible and likely on its way. As is the obvious countermeasure: a balloon full of screaming yellow paint.