New York City’s Surveillance Battle Offers National Lessons

A lack of police transparency shows why citizens must stay vigilant to take back control of their privacy.
Pedestrians walk beneath New York City Police Department wireless video recorders attached to a lamppost. The challenges and shortcomings faced in New York City show that transparency requirements on paper only matter when the public forces police to abide. Photograph: Mario Tama/Getty Images

In January, when New York’s Public Oversight of Surveillance Technology (POST) Act went into effect, the New York City Police Department was suddenly forced to detail the tools it had long kept from public view. But instead of giving New Yorkers transparency, the NYPD offered error-filled, boilerplate statements that hide almost everything of value. Almost none of the policies list specific vendors, surveillance tool models, or information-sharing practices. The department’s facial recognition policy says it can share data “pursuant to on-going criminal investigations, civil litigation, and disciplinary proceedings,” a standard so broad it’s largely meaningless.

This marks the greatest test yet of Community Control of Police Surveillance (CCOPS), a growing effort to give the public control over how their communities are surveilled, letting residents decide whether tools like facial recognition, drones, and predictive policing are acceptable in their neighborhoods. The battle playing out in New York City, over not just what tech police are permitted to use but how they use it, how that use is overseen, and how it’s disclosed, holds broad lessons for the future of surveillance. As more municipalities around the country implement policies on surveillance technologies like facial recognition, and as more citizens push for CCOPS in their own communities, the challenges and shortcomings faced in New York City show that transparency requirements on paper only matter when the public forces police to abide.

Surveillance technologies already in wide use by police departments across the country often make monitoring cheaper, faster, and passive. Take facial recognition: When run on video cameras in public squares, it can scan faces constantly through an algorithm (making it cheaper and faster), from afar and in passing (requiring no physical search), and even outside the bounds of traditional Fourth Amendment warrant processes. Other examples abound: drones flown over protest crowds; police cars equipped with automatic license plate readers that scan and centrally store plates as the vehicles drive down streets or through parking lots. Algorithms, meanwhile, are used across the criminal justice system, from police precincts “predicting” crime to bail hearings to the sentencing bench.
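To see how little infrastructure this kind of passive monitoring requires, consider a minimal sketch in Python using the open source face_recognition and OpenCV libraries. Everything here is a hypothetical stand-in: the watchlist image, the camera URL, and the matching logic are illustrative, not a depiction of any department’s actual system.

```python
# Minimal sketch of an automated face-matching loop over a video feed.
# Hypothetical throughout: "suspect.jpg" and the camera URL are placeholders.
import cv2                  # pip install opencv-python
import face_recognition     # pip install face_recognition

# Pre-compute encodings for one watchlist face (assumes the image
# contains at least one detectable face).
watchlist = face_recognition.face_encodings(
    face_recognition.load_image_file("suspect.jpg"))

video = cv2.VideoCapture("rtsp://camera.example/stream")  # placeholder feed
while True:
    ok, frame = video.read()
    if not ok:
        break                                  # feed ended or dropped
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    # Detect and encode every face in the frame; no warrant, no operator.
    for encoding in face_recognition.face_encodings(rgb):
        if any(face_recognition.compare_faces(watchlist, encoding)):
            print("possible watchlist match")  # real systems log or alert here

video.release()
```

The point of the sketch is the loop itself: one cheap comparison per face, running continuously against every passerby, with no physical search and no human in the middle.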

Despite examples like that of the NYPD, there have been numerous CCOPS success stories. The earliest adopter of the CCOPS model was Oakland, California, where generations of advocacy against police violence, primarily by Black and Latinx advocates, culminated in 2015 with the creation of the Oakland Privacy Commission. Oakland’s ordinance wasn’t just the first but also the strongest, granting the Privacy Commission independence and full power to approve or ban police surveillance tools. Since its creation, the commission has repeatedly questioned department officials, restricted the use of drones, fully banned predictive policing and biometric surveillance software, and most recently voted to recommend that Oakland police stop using automatic license plate readers.

Across the bay, San Francisco followed suit with its own CCOPS law in 2019. While it didn’t go so far as to create an independent commission, it empowered the city’s legislature to approve or ban police surveillance tools. Notably, the bill also included a ban on government use of facial recognition, the first in the country. Numerous cities have done the same since, banning targeted technologies like facial recognition or improving overall accountability. Four jurisdictions have also banned police from signing nondisclosure agreements with surveillance vendors, removing a common excuse for police opacity. Other success stories include San Diego, whose city council passed a surveillance-governing ordinance at the end of 2020 after backlash over a police “smart streetlight” program.

None of these decisions appeared out of thin air; a confluence of community activism, media reporting, attention from local politicians, and other factors made these ideas for surveillance reform a reality. New York City is currently running into a number of challenges with its own surveillance oversight that highlight this need for constant work: making surveillance oversight not just about transparency on paper but also about compelling and enforcing changes in police practice.

Per the POST Act, the NYPD published an initial list of deployed surveillance technologies that includes audio recording devices, cell-site simulators, license plate readers, and facial and iris recognition. The public has until February 25 to submit comments in response. But issues plague these newly required disclosures: adequate democratic oversight of these technologies is not achieved merely by knowing they exist. The department’s published documentation on facial recognition contains the same copied-and-pasted assurances as every other policy, claiming the tools will be used only for legitimate law enforcement purposes.

It also states, “The NYPD does not use facial recognition technology to monitor and identify people in crowds or political rallies.” However, this directly contradicts the NYPD’s reported use of facial recognition to identify and arrest a Black Lives Matter activist last August, once again underscoring that disclosures are insufficient without public accountability and oversight of actual practice. There are even more blatant errors, such as the NYPD’s claim that facial recognition and the gunshot-detection tool ShotSpotter don’t use “artificial intelligence” or “machine learning.” Not only are these claims contradicted by media reporting on and marketing materials for ShotSpotter, they also conflict with New York’s own report on artificial intelligence systems, which was published just days later and includes both systems.

When asked about these contradictions, the NYPD provided the following statement: “The NYPD uses facial recognition as a limited investigative tool, comparing a still image from a surveillance video to a pool of lawfully possessed arrest photos. This technology helps bring justice to victims of crimes. Any facial recognition match is solely an investigative lead and not probable cause for arrest—no enforcement action is ever taken solely on the basis of a facial recognition match.”

While New York’s POST Act saga may be off to an ominous start, the true test will come later this year. First, we’ll see how New Yorkers respond to these draft policies through public comments. Then we’ll see what weight their views hold with the NYPD. At the end of that process, this first round of policies may prove to have been just a speed bump on the way to reform, or it may show that advocates need to turn to more drastic alternatives. In either case, the lessons it teaches about fighting for surveillance oversight will resonate for years to come.

