Modern exploration robots like NASA’s Curiosity rover are incredible. They're strong machines, capable of driving, drilling, scooping, and even shooting lasers on other worlds. But the next generation of interplanetary probes will be smarter, with advanced computer and camera systems that let them recognize novel or interesting locations likely to yield important scientific discoveries.
Giving robots autonomy has long been a major area of research. Curiosity, for instance, recently gained the ability to navigate on its own on Mars. An engineer at mission control tells it where to go, and it figures out the best way to avoid obstacles and get there. But every detail of the rover’s scientific agenda is still meticulously set each day by humans on Earth. Several research groups are working to change that.
“We want a robot to know what’s desirable to observe,” said computer scientist and geologist Kiri Wagstaff of NASA's Jet Propulsion Laboratory. “So it can graduate from being a remote instrument to actually being a field assistant.”
Wagstaff is part of a team at JPL building algorithms that would allow robots on a distant world to recognize when something is scientifically attractive, take extra photos of it, and send them back to controllers on Earth. They have built a system called TextureCam, which snaps 3-D images of a landscape and makes an educated guess about which rocks would be of interest to geologists. This would help save a great deal of time because round-trip communication with a Mars probe can take upwards of 40 minutes, and bandwidth restrictions typically constrain data transmission to once a day.
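TextureCam's actual pipeline isn't spelled out here, but the general recipe behind this kind of system, computing texture statistics for each image patch and feeding them to a trained classifier, can be sketched. The feature choices and classifier settings below are illustrative assumptions, not JPL's code:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def texture_features(patch):
    """Simple per-patch texture descriptors: brightness, contrast, edge energy.
    (Illustrative choices; a real system would use richer texture filters.)"""
    gy, gx = np.gradient(patch.astype(float))
    return [patch.mean(), patch.std(), np.mean(gx**2 + gy**2)]

def train_rock_classifier(patches, labels):
    """Fit a random forest mapping texture features to 'interesting or not',
    given example patches labeled by a geologist."""
    X = np.array([texture_features(p) for p in patches])
    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    clf.fit(X, labels)
    return clf
```

Once trained on patches a geologist has labeled, the classifier can score new imagery onboard, so only the promising regions need to be photographed in detail and sent home.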
Instead of sending back images and waiting for scientists to analyze them and send further instructions, a future robot outfitted with a device like TextureCam could cut out the middleman and figure out what's interesting on its own. It's a bit like the new iPhone camera's burst mode, which chooses what it thinks is the best photo from a series of rapid shots, except that Mars rovers come in a much more authentic version of space gray.
Of course, the word “interesting” is subjective. How can you teach a robot to think like a curious scientist?
“We try to tell if an image is similar to previous images on the basis of color and texture, or if it contains a novel color and texture,” said planetary scientist Patrick McGuire of the Freie Universität Berlin, who works with a different team developing smart camera systems for interplanetary robots.
This works surprisingly well. McGuire and his collaborators use a mobile phone camera and a laptop running three different computer vision algorithms to spot unusual features in pictures. During tests at a former coal mine in Virginia, their camera system could pick out locations where yellow lichen grew atop the rocks and coal beds against the otherwise uniform landscape. The system determined with 91 percent accuracy whether a new picture was similar to a series of older ones, and it recognized roughly two-thirds of the time when a picture contained something unusual.
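McGuire's published method isn't reproduced here, but the core idea of novelty detection by color is easy to sketch: summarize each image as a color histogram, then flag any image whose histogram sits far from everything seen before. The histogram size and distance measure below are illustrative assumptions:

```python
import numpy as np

def color_histogram(image, bins=8):
    """Summarize an RGB image (H, W, 3) as a normalized joint color histogram."""
    hist, _ = np.histogramdd(
        image.reshape(-1, 3), bins=(bins, bins, bins), range=[(0, 256)] * 3
    )
    return hist.ravel() / hist.sum()

def novelty_score(new_image, reference_images, bins=8):
    """Smallest chi-square distance between the new image's histogram and any
    previously seen histogram; a large score suggests a novel color signature."""
    h_new = color_histogram(new_image, bins)

    def chi2(p, q):
        denom = p + q
        mask = denom > 0
        return 0.5 * np.sum((p[mask] - q[mask]) ** 2 / denom[mask])

    return min(chi2(h_new, color_histogram(ref, bins)) for ref in reference_images)
```

A rover could keep histograms of everything it has photographed so far and apply a threshold to this score: images of yet more gray rock score low and get discarded, while a patch of yellow lichen scores high and earns a closer look.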
McGuire calls his system the Cyborg Astrobiologist because he hopes it will help future probes identify places where water once ran or the signatures of life could be present. Such a machine could take photos of a rock, zoom in on areas of interest, and send back the most important findings to scientists.
“Humans could use that information for guidance on how to approach the geological outcrop to try to get more information,” said McGuire.
Though these new systems aim to give robots the ability to choose scientifically interesting things, it’s likely that there will always need to be a person in the loop at some point. A robotic system can’t completely replicate human curiosity.
“You always want to be leaving the door open to things you can’t anticipate,” said Wagstaff. “There will always be serendipitous discoveries.”
Still, the fact that multiple teams are working on this issue “speaks to the timeliness of the problem,” said computer scientist David Thompson, who works at JPL with Wagstaff. The algorithms under development now could also be put to a variety of other uses, including facial recognition and weather pattern identification.
In the short term, the JPL engineers are mainly thinking about Martian exploration. The advanced camera systems could be a feature on future orbiters or even the Mars 2020 rover, which will follow up on Curiosity’s success and cache interesting samples for a future mission to bring back to Earth. But the planetary science community one day wants to send probes to even more distant locations, like Jupiter’s icy moon Europa, which harbors a vast ocean beneath its frozen crust. With communication delays of many hours, a robot in the outer solar system will need to do as much work as possible without the aid of humans in order to maximize its scientific return.
“As our future space missions get more and more ambitious in their goals, that desire to achieve new objectives is driving this new technology,” said Wagstaff. “We expect more today, tomorrow, and the next day out of what space missions can accomplish.”