Crime in the age of intelligent machines
The police officer slowly, cautiously approaches the entrance to the besieged apartment. After getting his bearings, he charges through an already-opened front door. Scanning the living room, he makes his way to the back bedroom, into which he knows murder suspect Craig Smith has retreated after five long hours of a standoff. Opening the bedroom's closet door, he finds the suspect burrowed under a pile of laundry. Without fear or hesitation, he plunges his arm into the mound of clothes, exposing Smith.
As the bewildered suspect tries to grab the clothing back, the officer fires his weapon, blasting a shotgun out of Smith's hand and disorienting him long enough for more officers to rush in and wrestle him into handcuffs. The tense ordeal is over. As an ambulance is called to take the wounded man to the hospital, and as police, the press, and frightened tenants scurry about, the arresting officer moves unceremoniously out of the building and up a ramp into his custom-made berth in the back of a van marked "Hazardous Materials Unit."
The "police officer" in this case is not one person but two human operators and a rather crude 480-pound robot dubbed the "Remote Mobile Investigator" (RMI). The arrest occurred on September 2, 1993, in Prince George's County, Maryland. Earlier in the day, Smith, 22, allegedly killed his girlfriend and raped another woman. After police had exhausted all other options for routing Smith from the building, they decided to borrow the local fire department's RMI-9 unit, a remote-controlled vehicle designed for handling hazardous waste and assisting firefighters. At a safe distance outside the apartment, a police captain and a robot tech used the RMI as their eyes and, finally, as their fists, clobbering Smith with a high-powered water cannon.
While this is not the first time a robot has been used in such a standoff, it is among only a handful of such occurrences in recent years. But the success of this case and the worldwide press it received raise a number of questions. How prevalent will robotic arrests become as more police departments gain access to this technology? Will increasingly lethal weapons be mounted on "robot cops"? Will autonomous robots with AI brains ever be given decision-making powers? What is the likely future of the robots that The Washington Times labeled, with eerie enthusiasm, "inhuman colleagues" in US law enforcement?
The word robot was first used in 1920 by Czech author Karel Capek, who derived it from robota, a Czech word meaning serf or slave. When Capek's play about the dehumanization of man, R.U.R. (Rossum's Universal Robots), was translated into English, the word robot quickly gained currency. Isaac Asimov's "Runaround," a short story published in Astounding Science Fiction in 1942, introduced the word robotics as well as Asimov's Three Laws of Robotics, which have since become part of scientific folklore. The first law - "a robot may not injure a human being, or through inaction, allow a human being to come to harm" - is clearly open for review in light of current events.
Kevin Dowling, a project scientist at Carnegie Mellon University's Robotics Institute, thinks the press attention in the Maryland case stems primarily from its appeal as a human interest story. He does, however, anticipate a marked increase in similar use of robots in law enforcement. "In some ways it makes perfect sense...using robots in situations like this. It's definitely cost effective from a human-life point of view. You're getting police officers out of harm's way, allowing them to make more intelligent decisions without endangering themselves.
"It's also cost effective from a training point of view. You need fewer officers trained in handling highly lethal situations." But aren't there downsides to this? "Sure, just as distance from the danger can be a plus, it's also a drawback, limiting one's information about a situation. Being on the other end of a tethered or remote-controlled robot narrows one's focus. Police officers have had training and experience using all their senses and reflexes. Interacting through a robot carves away a lot of that sophistication and is akin to having blinders on." Dowling also cautions that as robots become more common in law enforcement and security, "clear lines of command will be critical, especially if the robots have weapons capabilities."
This need for clarity in the chain of command is underscored by Joe Berry, vice president of marketing at Cybermotion, one of the few manufacturers in the world that sells commercial autonomous robots for safety and security applications. Berry, a career military man and former chief of military police, sees a big future for both remote-controlled and autonomous mobile robots, but he is concerned about these robots carrying weapons. "We need to make clear decisions about the use of systems that could maim or kill a person. At Cybermotion, we're not interested in getting into that type of work. Now, in terms of equipping a unit with tear gas or some other incapacitating gas, that's something we would consider, but only under strict legal review. We're a peace-loving company," he adds. Berry cites other applications for autonomous robots, such as sentry duty and "loss prevention" at military installations, perimeter guards in prisons, and information-gathering agents in hostage situations. "A robot could kill the lights in a building, and then, using infrared vision, assess the situation, locate all the humans, and relay that information to the police outside."
"So far, the development of law enforcement robotics has been entirely in the area of bomb disposal," says Hans Moravec, author of the popular and controversial Mind Children: The Future of Robot and Human Intelligence. "Many international hot spots such as Northern Ireland and Israel have them, as well as a number of US police departments, like the NYPD. These are tethered or remote-controlled systems. Some of them have mounted shotguns that are used to blow out the windows of cars. A small crane on the robot then lowers an explosive charge that detonates the bomb. They are not yet sophisticated enough to defuse bombs. They'll need multifingered hands for that." He stops to rattle off the name and number of the company working on these hands, and then continues. "Autonomous mobile robots are not used in law enforcement, but they are gaining popularity as security robots...which are really more like roving burglar alarms."
Moravec is not terribly optimistic about the immediate future of sophisticated robots in law enforcement, not because he has objections to their use, but because he doesn't think there's enough of a market for them to drive down the price. He anticipates that it will be more than a decade before the use of robots with any decision-making capabilities becomes an issue for debate. Moravec predicts that the first big market for "smart" autonomous robots will be for commercial and domestic cleaning. "This will drive component development in the next decade or so," he predicts. "There will be no serious use of any type of sophisticated robots in law enforcement until then. There are high costs in significantly increasing the human bandwidth - the fidelity of remote telepresence, for instance." Moravec estimates that within ten years, a system offering VR telepresence control of a robotic cop could be available. Does he have problems with the idea of cops making arrests from within such a virtual environment? "I think there are more problems with cops in the field than there will be with robots...under these life and death situations. For one thing, the entire session will be recorded, so there will be greater accountability. Violence occurs through loss of control. Not having officers' lives in jeopardy will allow them to maintain their cool." Without reservation, Moravec concludes, "For them, it will be like playing a video game."
This video game mentality strikes a raw nerve in Manuel De Landa. In 1991, De Landa published War in the Age of Intelligent Machines, a book that, among other things, explores the tendency of military command-and-control decisions to migrate from humans to their so-called "smart" machines. De Landa is concerned about the increasing use of AI and other advanced technologies designed to remove humans from the decision-making loop. Seeing many similarities between military command structures and their domestic police counterparts, De Landa is quick to shoot down the seemingly cut-and-dried "it's safer for our boys in the field" defense of a robocop future.
"While I can see a good point in getting police out of harm's way, there is usually a political component to these arguments. The development and deployment of a new weapons system is rarely based solely on issues of safety and human concern," says De Landa. Asked how this specifically applies to a case against robocops, he replies, "It becomes a way of further distancing the cop from the suspect. It is difficult to hit or shoot another human being. It is easier if you have a teleoperated mechanical prosthesis doing it for you. There would be a desensitization here that I'd be concerned with." He goes on to cite examples of municipal police departments already having too much distance from the communities they serve, such as living in a suburban belt, commuting to the inner city, and not understanding the language and culture of the people in those communities.
De Landa also maintains that too often we don't notice an encroaching new technology like this until it's too late for an open critical assessment. "We need to be careful about crossing the threshold of a new technology, especially a destructive one, without being clear on what we're doing and why. We need to ask ourselves if we want to cross that threshold, or if the momentum is already too great, how we might divert it. There is a tendency for these predatory technologies to gain a momentum of their own. It's very distressing to me that there hasn't been more public discussion on this."
And what do cops think of an increasing robotics presence on the force? Captain Jim Terracciano of the Prince George's County police department has no doubt that this technology can have a positive impact on law enforcement. He is, however, concerned when any technology is thought of as a replacement for human officers. It is his hope that technologies like this be used only in the context of more "community-oriented" law enforcement. "We can't afford to lose the human bond, especially in an increasingly high-tech environment. If we lose that, we lose everything...and then we have a sci-fi future like in RoboCop."
Just as Capek's play R.U.R. reflected people's fears about the advent of automation and post-WWI feelings of alienation, the 1987 sci-fi film RoboCop played to the '80s hysteria over runaway drug abuse and an overall corrosion of law and order. It was part of a recurring fantasy of heroic vigilantism - of lone individuals who see justice in clear black-and-white terms and can restore it by ignoring the current legal system, crippled as it is by corruption, indifference, and shades of gray.
RoboCop was hyped as "the future of law enforcement." Its tragic main character is a police officer who, after being blown to bits by cookie-cutter bad guys, is reanimated by police scientists as a cyborg, "part man, part machine, all cop." After he is programmed to "serve the public trust, uphold the law, and protect the innocent," he is set loose on the streets of a future Detroit to "stop every sleazeball he encounters." In a near future, where criminals rape and pillage at will, this high-tech knight in shining armor, his humanity conveniently deleted, is offered as a reasonable solution to the problem. While RoboCop is obviously fiction and an extrapolation from current anxieties, it is tempting to compare it with the prospect of real police having autonomous robots walk their beats, teleoperating arrests, and hunting down suspects as if they were "playing a video game." Is there really that much functional difference between the science fiction of a cyborg cop, his flesh physically fused to a machine, and the distinct scientific possibility of a robot law enforcer distantly controlled by a human operator?
If Moravec is correct in his projection that these types of robots could be available for police use in a decade, shouldn't we as a society start discussing the desirability of such a development? Or is it time to start working on a rewrite of Asimov's first law?