Big tech is suffering from an identity crisis. Amazon, Microsoft and IBM have all announced they will stop selling facial recognition technology to police forces. The move comes several weeks into worldwide protests against racism and police brutality, sparked by the death of George Floyd.
But facial recognition is just the start. Technology and policing are interwoven at much deeper levels and there’s plenty of innovation happening to introduce new tech to law enforcement agencies. Police forces and justice systems are experimenting with AI decision-making tools, predictive policing and even connected doorbells. Campaigners have called out police forces for quickly developing and deploying new systems before they have been rigorously trialled and evaluated.
When it comes to facial recognition, the three companies have taken slightly different approaches to their moratoriums. IBM’s chief executive, Arvind Krishna, told the US Congress the company would stop providing “general purpose IBM facial recognition or analysis software” to law enforcement agencies – Krishna did not define what counts as a general purpose system – and said a national dialogue around the technologies should happen. Amazon said it will stop selling its Rekognition software to police for a year, until laws around its use are in place. And Microsoft echoed the call for new laws to be introduced.
Concerns about the dangers of facial recognition systems aren’t new. Live facial recognition tech works in public spaces by scanning the faces of people who pass through a camera’s field of view and comparing their likenesses against a pre-compiled database of suspect images. The matching process is conducted by algorithms that are trained on previously collected data and identify points on people’s faces.
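To make those mechanics concrete, here is a minimal, illustrative sketch of the matching step. It is not the code of any system named in this article: it assumes a hypothetical embed() function that turns a detected face into a fixed-length numerical “faceprint”, and only shows the general shape of the comparison – a similarity score against each entry in the watchlist, and a threshold that decides when an alert is raised.

```python
# Illustrative sketch of the matching step in a live facial recognition system.
# Assumes a hypothetical embed(face_image) function that converts a detected
# face into a fixed-length numerical "faceprint"; real systems derive this from
# models trained on large face datasets, which is where biased training data
# creeps in.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two faceprints; 1.0 means identical direction."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_against_watchlist(faceprint: np.ndarray,
                            watchlist: dict,
                            threshold: float = 0.6):
    """Compare one faceprint against a pre-compiled watchlist.

    `watchlist` maps a suspect ID to a pre-computed faceprint. Returns the
    best match above `threshold`, or None if nothing clears it.
    """
    best_id, best_score = None, 0.0
    for suspect_id, suspect_print in watchlist.items():
        score = cosine_similarity(faceprint, suspect_print)
        if score > best_score:
            best_id, best_score = suspect_id, score
    if best_score >= threshold:
        return best_id, best_score
    return None, best_score  # no alert: best candidate fell below the threshold
```

The threshold is where many real-world failures hide: set it too low and passers-by are wrongly flagged, set it too high and genuine matches are missed – and those error rates can differ across demographic groups if the underlying model was trained on unrepresentative data.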
But if there are underlying problems with this data, they can be transferred to the systems deployed in the real world. Research from MIT Media Lab researcher Joy Buolamwini and colleagues has shown that facial recognition systems are most accurate when the subject is a white man, and such systems – including Amazon’s – are known to misidentify people of colour more often than white people. Amazon has rejected the findings – with its chief technology officer saying the company can’t be held responsible for how its AI is used.
In the UK, facial recognition systems have been found to be inaccurate and have resulted in the wrong people being stopped by police officers. London’s Met Police moved from trialling facial recognition to officially deploying it in January 2020, after a court ruled that the use of facial recognition technology in Wales did not break any privacy or civil liberties laws.
Around the world, facial recognition systems are largely unregulated by laws specific to them. In China, the technology has been used to conduct surveillance on minority groups, including millions of Uighurs. In Uganda, police have been forced to admit they were using a huge network of facial recognition cameras provided by Huawei, but denied the technology had been used to spy on political opponents.
A common element to all these systems is the lack of transparency around their scope and use, says Ioannis Kouvakas, a legal officer at the charity Privacy International. Kouvakas says there’s a large amount of secrecy around the deals police forces and law enforcement agencies strike with big technology firms and what is provided to them as a result. Contracts and terms of deals are infrequently disclosed, and Kouvakas adds that technology created by private firms will never have the “public benefit” as the main criterion for how it works. Amazon has struck deals for police forces to promote its Ring doorbell, which can automatically record and save footage of movement. The deals include prohibiting police from speaking about Ring without prior approval from Amazon.
“There's a very specific reason that such intrusive surveillance capabilities are traditionally entrusted to the state or traditionally entrusted to the police or state actors,” Kouvakas adds. Privacy International opposes the use of facial recognition technology. “This is mainly because it's so intrusive and it can compromise so many of our civil liberties that it's not meant to be carried out by private companies,” he says.
Nevertheless, law enforcement bodies around the world are looking to emerging technologies to assist with policing. In some instances, facial recognition systems have helped identify suspects wanted for specific crimes. A report from the policing think tank The Police Foundation said the UK’s Avon and Somerset force has used software to combine fragmented databases and better understand officer performance.
Most new systems have one thing in common: they rely on data to help make, or automate, decisions. And these data-driven systems can contain flaws. In March 2018, Durham Constabulary tweaked one part of an algorithmic system it was using to determine how likely people would be to reoffend. A review of the system found that including postcode data could reinforce existing biases about crime and reoffending in poorer areas. “We are also already at the point where some policing practices are leaving legal and regulatory frameworks behind,” the Police Foundation-sponsored report, released in March 2019, concluded.
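The postcode problem is easy to illustrate. The toy example below is not Durham Constabulary’s actual system; it assumes nothing more than a small table of hypothetical historical records, and shows how a model that scores people by the recorded reoffence rate in their postcode simply hands past policing patterns back as “risk”.

```python
# Toy illustration (not any police force's actual model) of how a postcode
# feature can bake historical policing patterns into a "risk" score.
from collections import defaultdict

# Hypothetical training records: (postcode, reoffended) pairs. Heavier past
# policing in area "A1" means more recorded reoffences there, whether or not
# underlying behaviour differs.
history = [
    ("A1", True), ("A1", True), ("A1", True), ("A1", False),
    ("B2", True), ("B2", False), ("B2", False), ("B2", False),
]

# "Training": learn the historical reoffence rate per postcode.
counts = defaultdict(lambda: [0, 0])  # postcode -> [reoffences, total]
for postcode, reoffended in history:
    counts[postcode][0] += int(reoffended)
    counts[postcode][1] += 1

def risk_score(postcode: str) -> float:
    """Predicted risk is just the area's historical rate, so every individual
    from a heavily policed postcode is scored as higher risk."""
    reoffences, total = counts[postcode]
    return reoffences / total if total else 0.0

print(risk_score("A1"))  # 0.75 - inherited from where offences were recorded
print(risk_score("B2"))  # 0.25
```

Any feature that correlates with where enforcement has historically been concentrated can act as a proxy in the same way, which is why the review singled out postcode data.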
Analysis from the UK shows at least 14 of the country’s police forces are using crime prediction software. Academics have claimed that one AI-based predictive policing system, PredPol, contains flaws in its underlying code. In April, the Los Angeles Police Department stopped using PredPol, a move that followed Kent Police dropping the system at the end of 2018. “PredPol had a good record of predicting where crimes are likely to take place,” the Kent force’s superintendent said at the time, adding: “What is more challenging is to show that we have been able to reduce crime with that information.”
Alexander Babuta, a national security research fellow at the Royal United Services Institute, says policing decisions around the adoption of new technology should be based on effectiveness and proportionality. He says that police chiefs he has spoken with during his research are asking for new regulation, and in some cases new laws, to set out how new technologies can and can’t be used. In the UK, despite the lack of specific laws governing emerging technology, existing human rights and equality laws can already be applied to new systems.
“You can't just look at it in terms of should private companies be selling facial recognition to police forces, or should the police be using facial recognition at all,” Babuta says. “The question should always be ‘What do they want to use it for? And is that a justifiable means for achieving that goal?’”
Matt Burgess is WIRED's deputy digital editor. He tweets from @mattburgess1
This article was originally published by WIRED UK