ROBO SPACE: How Space Perception Separates Man From Machine
For a robot coming fresh into the world, there is at first total confusion. What is "above"? What is "behind"? To the newborn android, all sensory input is a blur. Blobs float into view, the occasional sound drifts by, 3-D space is a mass of contradictory coordinates.
The problem isn't the hardware. Autonomous bots like Honda's Asimo and Sony's SDR-4X II have cameras for depth perception and microphones to help pinpoint a sound source. And in the lab, researchers in artificial intelligence have made strides in symbolic reasoning, allowing machines to make inferences based on definitions of spatial concepts.
But combining sensory perception and spatial reasoning remains elusive, which explains why robots lack a true sense of space. They don't know how to navigate a building, interact with physical objects, or dance from room to room. The trouble is, we have no clue how all this works. We know only that spatial cognition is incredibly complex and hard to achieve – so complex, in fact, that it can't be programmed by hand but must somehow evolve.
Differences in culture and language create their own problems. In English, we find it natural to think of the "front" of a tree as oriented toward the speaker, so we say "The car is in front of the tree" to mean that the car is between ourselves and the tree. In many African languages, the front, or face, of the tree is oriented in the same direction as the face of the person looking at it, so for exactly the same position of the car a speaker would say "The car is behind the tree." This makes communicating with humans tricky for a robot.
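The two conventions can be made concrete with a small sketch. The positions, function name, and convention labels below are hypothetical, chosen just to illustrate how the same scene yields opposite descriptions depending on which way the tree's "front" is taken to face:

```python
def in_front_of(observer, tree, car, convention):
    """Decide whether the car counts as 'in front of' the tree.

    Positions are 1-D coordinates along the observer's line of sight.
    convention = "mirror":  the tree's front faces the observer
                            (the English convention described above).
    convention = "aligned": the tree's front faces the same way as the
                            observer's face (as in some African languages).
    """
    # Unit direction from the tree toward the observer.
    toward_observer = 1 if observer > tree else -1
    # The tree's "front" side depends on the convention in use.
    front = toward_observer if convention == "mirror" else -toward_observer
    # The car is "in front" if it lies on the tree's front side.
    return (car - tree) * front > 0

# Observer at 0, tree at 5, car at 3: the car is between observer and tree.
print(in_front_of(0, 5, 3, "mirror"))   # → True  ("in front of the tree")
print(in_front_of(0, 5, 3, "aligned"))  # → False ("behind the tree")
```

Identical geometry, opposite answers: a robot that hard-codes one convention will systematically misdescribe scenes to speakers who use the other.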
Figuring out how to teach spatial cognition is precisely what's going on in current robotics research, including in my own laboratory. We are trying to create robots and robot cultures that develop an autonomous approach to space, time, and action. To do this, the machines remember a vast number of sensory experiences and try to recognize recurring bits and pieces. Over time, this helps them impose structure on the world and learn the consequences of their actions. We also program the robots to play language games in which they not only invent and learn words and grammatical constructions but also establish the meanings being expressed. In one of our experiments, thousands of androids played almost half a million language games, communicating about objects and their locations and evolving a shared vocabulary and a common set of concepts in the process.
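The core dynamic of such a language game can be sketched in a few lines. The version below is a deliberately minimal toy (agent counts, round counts, and the success rule are illustrative assumptions, not the actual experimental setup): agents pair off, a speaker names an object, inventing a word if it has none; on a successful exchange both parties discard competing words, otherwise the hearer adopts the speaker's word.

```python
import random

def naming_game(n_agents=20, n_objects=5, rounds=5000, seed=0):
    """Toy naming game: agents invent words for objects and gradually
    align on a shared vocabulary through repeated pairwise games."""
    rng = random.Random(seed)
    # Each agent maps object -> set of candidate words for that object.
    agents = [{obj: set() for obj in range(n_objects)}
              for _ in range(n_agents)]
    for _ in range(rounds):
        speaker, hearer = rng.sample(agents, 2)
        obj = rng.randrange(n_objects)
        if not speaker[obj]:
            # Speaker has no word for this object yet: invent one.
            speaker[obj].add(f"w{rng.randrange(10**6)}")
        word = rng.choice(sorted(speaker[obj]))
        if word in hearer[obj]:
            # Success: both agents prune competing words.
            speaker[obj] = {word}
            hearer[obj] = {word}
        else:
            # Failure: hearer adds the word to its candidates.
            hearer[obj].add(word)
    return agents

agents = naming_game()
```

After enough rounds the population typically converges on one word per object, even though no word was ever imposed from outside; the vocabulary is an emergent, shared convention, which is the point of the experiments described above.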
Experiments like these predict a future in which robots can cultivate their own spatial cognition and language, adapted to whatever environment they find themselves in. Without this ability, they will forever remain brittle and unable to cope with a fast-changing, open-ended world.