Defund the Police Algorithms

Law enforcement is increasingly turning to software to surveil and anticipate crime. But a grassroots movement is emerging to resist algorithmic policing.

Michelle Chen

August 25, 2022

Tyler Cullen, of Vulcan Security Technologies, looks at video screens in the Hartford police Real-Time Crime and Data Intelligence Center in Hartford, Conn. (Dave Collins / AP Photo)

Last year, Zachary Norris was driving home from a hike in the Bay Area with his wife, his two children, and a friend when seven police cars converged behind him and ordered him to pull over. The officers approached him with guns drawn. He got out of his car and dropped to his knees. The police, he said, separated him from his family and pulled him into the back of one of their vehicles. Someone had swapped his car’s license plate with another—a common tactic used to evade law enforcement. An automatic license plate reader had identified Norris’s car as belonging to a suspect in an armed robbery.

The incident lasted about 35 minutes, Norris said, but the gravity of the encounter sank in only after officers reunited him with his family. “It hit me when my kid ran to me and hugged me,” he said. “At that moment, it’s just like, ‘What could have happened?’”

Norris was fortunate. As a civil rights attorney at the Ella Baker Center for Human Rights, he cleared up the situation relatively quickly. (The Walnut Creek Police Department did not respond to a request for comment.) But he saw how algorithms that capture and process plate numbers can lead to confrontations that can easily become deadly. He joined the campaign to ban Oakland police from using license-plate readers—not just because the machines are inaccurate but because they make policing more invasive. Algorithmic technology, he said, was one facet of an entrenched system of police surveillance. Plate readers were “just a different tool layered on to inequality and discrimination and driving while Black,” he told me.
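The mechanism behind a stop like Norris’s is simple, and that simplicity is part of the problem: a roadside camera reads the characters on a plate and checks the string against a “hot list” of wanted plates, matching plate numbers rather than vehicles or the people inside them. A minimal sketch of that lookup, with invented plate numbers, might look like this:

```python
# Minimal sketch of a license-plate-reader "hot list" check (plate numbers
# invented): the camera's OCR output, a plate string, is looked up against a
# watchlist of wanted plates. The lookup knows nothing about who is driving,
# so a swapped plate produces a hit on the wrong car.

HOT_LIST = {
    "7ABC123": "vehicle sought in armed robbery",
    "4XYZ890": "reported stolen",
}

def check_plate(ocr_read: str) -> str | None:
    """Return the alert reason if the plate string appears on the hot list."""
    return HOT_LIST.get(ocr_read.strip().upper())

# A plate swapped onto an uninvolved family's car still triggers the alert.
alert = check_plate("7abc123")
if alert:
    print(f"ALERT: {alert}")
```

Everything that follows the alert, including the decision to approach with guns drawn, rests on the assumption that the plate still belongs to the car it was issued to.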

Algorithms and artificial intelligence have dramatically expanded the ability of law-enforcement institutions to identify, track, and target individuals or groups. And civil rights activists say the new technologies erode privacy and due process. Community groups are beginning to understand the ramifications of AI for privacy, discrimination, and social movements and are pushing back. Across the country, a grassroots movement is emerging to resist the secrecy of police algorithms and to demand that lawmakers ban the most intrusive surveillance schemes.


Through facial-recognition programs, for instance, an officer can grab an image of a face from a surveillance video of a protest and then instantly cross-check it against a photo database. A “faceprint” can also be used in a “face analysis” to try to extrapolate demographic characteristics, such as gender, race, or even sexual orientation, according to vendor claims analyzed by the Electronic Frontier Foundation. Beyond just identifying an individual, face-tracking software can be used in tandem with other algorithmic technologies to trace the movements of a demonstrator as they travel home from a rally.
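Under the hood, that cross-check typically works by reducing each face to a numeric vector and ranking database entries by how close their vectors sit to the probe image. The sketch below is a generic illustration of that pattern, not any vendor’s actual system; the names, the random stand-in vectors, and the similarity threshold are all hypothetical:

```python
# A generic illustration of faceprint matching, not any vendor's system:
# each enrolled face image has been reduced to a numeric vector (an
# "embedding"), and a probe image is compared against the database by
# cosine similarity. The vectors below are random placeholders standing
# in for the output of a trained face-embedding model.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical enrolled database: name -> 128-dimensional faceprint.
database = {name: rng.normal(size=128) for name in ("person_a", "person_b", "person_c")}

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def search(probe: np.ndarray, threshold: float = 0.6) -> list[tuple[str, float]]:
    """Rank enrolled identities whose similarity to the probe exceeds the cutoff."""
    scores = {name: cosine_similarity(probe, vec) for name, vec in database.items()}
    return sorted(
        ((name, score) for name, score in scores.items() if score >= threshold),
        key=lambda pair: pair[1],
        reverse=True,
    )

# Simulate a probe photo of person_b captured under slightly different conditions.
probe = database["person_b"] + rng.normal(scale=0.1, size=128)
print(search(probe))
```

What comes back is a ranked list of lookalikes above an arbitrary cutoff, which is one reason a “match” is better understood as a lead than as an identification.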

The technology is error-prone and often discriminatory. A recent study by the National Institute of Standards and Technology found that facial-recognition software misidentified Black and Asian faces 10 to 100 times more frequently than it did white faces.

In 2020, Detroit police apprehended Robert Williams as he pulled into his driveway. Facial-recognition technology had mistakenly determined that he was the suspect in a 2018 shoplifting case. According to a lawsuit filed by the American Civil Liberties Union of Michigan, Williams “was arrested without explanation on his front lawn in plain daylight in front of his wife and children, humiliated, and jailed in a dirty, overcrowded cell for approximately 30 hours where he had to sleep on bare concrete—all for no reason other than being someone a computer thought looked like a shoplifter.”

Although police used the match to obtain an arrest warrant, even the department acknowledges the limits of the technology. Detroit Police Chief James Craig said at a public meeting in June 2020, “If we were just to use the technology by itself, to identify someone, I would say 96 percent of the time it would misidentify.” Still, there is no definitive legal standard on whether facial-recognition technology can be relied upon to demonstrate probable cause for an arrest.

Other types of algorithmic monitoring have given police extraordinary abilities to track individuals. The New York City Police Department’s surveillance tools include software that can identify individuals, including protesters, in video clips that contain certain objects or reveal specific physical traits.
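The department discloses little about how those tools work, but the general pattern of attribute-based video search is straightforward: clips are machine-tagged with detected objects and appearance traits, and an investigator queries those tags. A hypothetical sketch of that pattern, with invented clips, tags, and names:

```python
# Hypothetical sketch of attribute-based video search: each clip has been
# machine-tagged with detected objects and appearance traits, and an
# investigator filters the archive by those tags. All clips, tags, and
# names here are invented for illustration.

from dataclasses import dataclass, field

@dataclass
class Clip:
    clip_id: str
    location: str
    tags: set[str] = field(default_factory=set)

archive = [
    Clip("cam12_0431", "5th Ave", {"backpack", "red jacket", "bicycle"}),
    Clip("cam07_1102", "Union Sq", {"protest sign", "red jacket"}),
    Clip("cam07_1103", "Union Sq", {"umbrella"}),
]

def search_clips(clips: list[Clip], required_tags: set[str]) -> list[Clip]:
    """Return clips whose machine-generated tags include every requested tag."""
    return [clip for clip in clips if required_tags <= clip.tags]

for clip in search_clips(archive, {"red jacket"}):
    print(clip.clip_id, clip.location)
```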

One of the major obstacles to challenging potential civil rights abuses via algorithm is the opacity of this “black box” technology, which is typically developed by private corporations. The underlying architecture of the surveillance technology is generally obscured from the public, hiding how the software processes data or makes decisions.

To address the lack of transparency, the New York City Council passed the Public Oversight of Surveillance Technology (POST) Act in 2020, which directed the NYPD to create a surveillance impact and use policy and to report regularly on how the department was complying. Civil rights groups, however, say that the NYPD’s disclosures under the law have been inadequate. Last October, the watchdog group Surveillance Technology Oversight Project (STOP) noted that the police had repeatedly “refused to disclose surveillance vendors and tools,” “hid the name of agencies that can access NYPD surveillance data,” and failed to comprehensively assess how bias embedded in surveillance technology affects different communities.

(The NYPD did not respond directly to these criticisms, but directed The Nation to its POST Act website containing policies for the use of different surveillance technologies.)

Clarence Okoh, a lawyer with the Center for Law and Social Policy, estimates that “virtually every single law enforcement agency in the country has some type of algorithmic-enabled technology.” Yet, from a civil rights standpoint, “there’s a major rights remedy gap, because the laws that were designed to protect us from discriminatory harms that flow from business and government practices were designed at the time where you had human decision makers.”

State and local governments are grappling with how to regulate law enforcement’s use of algorithmic surveillance. In recent years, at least 17 cities have banned the government from using facial recognition, including Minneapolis, Pittsburgh, and New Orleans. But the restrictions have faced formal and informal resistance from law enforcement and legislators. Virginia, for example, barred local police from using facial recognition, but the legislature loosened the prohibition this year to permit facial recognition in select circumstances, such as identifying a suspect using a police photograph database.

In Alameda County, Calif., activists have accused local police of systematically violating an existing ban. Immigrant rights groups Mijente and NorCal Resist, along with several individuals, sued Clearview AI, a secretive facial-recognition company, for providing police and the Alameda district attorney access to its image database of more than 3 billion photos scraped from the web and social media. Investigators allegedly circumvented the law by using Clearview’s software on a free trial basis. The lawsuit charges Clearview with privacy violations and the law enforcement agencies with free-speech violations, because using the database made the plaintiffs vulnerable to “being targeted, harassed, and surveilled as a result of their advocacy efforts.” The suit is one of several legal challenges that civil liberties groups have brought against Clearview, one of which resulted in significant limits on the company’s collaboration with police in Illinois earlier this year.

Steven Renderos, the executive director of the advocacy organization MediaJustice and a named plaintiff in the Alameda lawsuit, said it felt “creepy,” but not surprising, when he discovered his own image in the Clearview AI database—photos of him taken at political events and posted to social media. While protest is intended to make people politically visible, he said, “scraping the web and building out this massive database that Clearview then turns around and sells to law enforcement [is] putting people’s political participation at risk and creating a chilling effect for people that will feel less likely to want to engage in that kind of behavior in the future.”

Policing the Future

An even more controversial area of algorithmic surveillance involves anticipating crimes through “predictive policing.” In recent years, police departments and criminal court systems throughout the country have used programs to calculate the likelihood that an individual will commit a crime, or to assess whether a person held in pretrial detention should be released on bail. The technology uses information like a person’s age, education level, marital status, criminal record, or history of substance use disorder to make a determination. Rights groups say it “tech washes” profiling and discrimination, effectively outsourcing discriminatory judgments from humans to artificial intelligence. Some—but not all—of these programs have ended amid public outcry.
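What these tools actually compute is usually a score: a handful of personal attributes are weighted, combined, and presented as the likelihood of reoffending or failing to appear in court. The sketch below illustrates that kind of calculation in miniature; the features, weights, and function names are hypothetical stand-ins for whatever a vendor’s model has learned from historical data:

```python
# An illustrative sketch of the kind of score these tools produce, not any
# vendor's actual model: a handful of personal attributes are weighted,
# summed, and passed through a logistic function to yield a "likelihood"
# between 0 and 1. The features and weights below are hypothetical.

import math

# Hypothetical learned weights; in a deployed system these are fit to
# historical arrest and court data, which is how past enforcement patterns
# get baked into the score.
WEIGHTS = {
    "prior_arrests": 0.45,
    "age_under_25": 0.60,
    "unemployed": 0.30,
    "prior_failure_to_appear": 0.80,
}
INTERCEPT = -2.0

def risk_score(person: dict[str, float]) -> float:
    """Combine weighted attributes into a number between 0 and 1."""
    z = INTERCEPT + sum(weight * person.get(feature, 0.0) for feature, weight in WEIGHTS.items())
    return 1 / (1 + math.exp(-z))

# Example: two prior arrests, under 25, no other flags.
print(round(risk_score({"prior_arrests": 2, "age_under_25": 1}), 2))
```

Because the weights are fit to past arrest and enforcement records, the score inherits whatever bias shaped those records, which is the heart of the “tech washing” objection.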

“I don’t see algorithms as fundamentally changing the way police departments police.… What’s different is the unprecedented power and scope of these technologies or Big Data analytical capabilities, and the purported badge of objectivity that this brings,” said Meena Roldán Oberdick, an attorney at the civil rights organization Latino Justice. “Using large-scale crime analysis or predictive policing technologies allows police departments to…abdicate responsibility for decision-making to data or machines.”

Civil rights groups are also increasingly concerned that algorithmic assessments are bringing predictive policing into schools. A Tampa Bay Times investigation revealed that in Pasco County, Fla., the sheriff’s office has been tracking public school students based on data shared by the school district and child welfare authorities—from their grades to disciplinary issues to reports of abuse or drug use—and placing certain students on a list of “at risk” individuals. The sheriff’s office claims the list is used to identify children’s needs for social services and behavioral interventions. In a statement to The Nation, the office said that it did not use predictive policing but had instead adopted a crime-fighting strategy known as “intelligence-led policing,” which “works with those who have shown a consistent pattern of offending to attempt to break the cycle of recidivism.”

But Beverly Ledbetter, a local activist who taught in Pasco County schools for over three decades, fears that police are drawing schoolchildren into a cycle of intervention: “Any time something happens in a neighborhood, if that child lives in that neighborhood, they are one of the first suspects…because they have been identified as a potential future criminal.”

Algorithmic Accountability

While rights advocates continue to challenge algorithmic surveillance in the courts, they also emphasize the limitations of the law. Litigation often comes too late to help the communities most affected by algorithm-driven assessments, communities that tend to be poor, overpoliced, and racialized.

Juyoun Han, a partner in the digital technology and human rights group at the law firm Eisenberg & Baum, said, “Reactive litigation is so disheartening, because once harm has been found and felt by the use of algorithmic decision-making systems, that means there were millions of the same cases that have already touched millions of people’s lives.”

The legislative landscape around algorithms is evolving. In addition to state and local laws curbing, or requiring disclosure of, facial recognition and other biometric tools used by police and other government agencies, Democratic Senator Ron Wyden introduced the Algorithmic Accountability Act, a bill aimed at tackling algorithmic discrimination. The legislation would authorize the Federal Trade Commission to investigate and document private-sector uses of algorithms. The bill has not moved since it was introduced in February, though the commission has announced plans to strengthen its oversight of algorithms in business transactions.

Other proposed bills would regulate algorithms on social media platforms to clamp down on socially harmful content. But these federal initiatives focus on algorithms used in private commerce rather than in police investigations or court decisions. The legislation often calls for auditing technology for potential bias, but activists are skeptical about the effectiveness of audits, given the lack of independent, industry-wide ethical standards for evaluating algorithmic systems.

Albert Fox Cahn, the executive director of STOP, said, “We don’t actually have consensus standards for what a good algorithmic audit looks like.” Instead, STOP advocates for a “moratorium on a lot of uses of algorithmic systems,” enacted through state regulation or a presidential executive order.
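Part of the difficulty is that even a basic audit involves contestable choices about what to measure and how much disparity is acceptable. A common starting point is to compare how often a system wrongly flags people in each demographic group. The sketch below runs that check on made-up data and is offered as one illustration, not a standard:

```python
# A minimal sketch of one common audit check, on made-up data: compare how
# often the system flags people in each demographic group who were not
# actually involved in anything (the false-positive rate), then look at the
# ratio between groups. Which checks to run, and what disparity counts as
# acceptable, are precisely the questions no consensus standard answers.

from collections import defaultdict

# Each record: (group, flagged_by_system, actually_involved)
records = [
    ("group_a", True, False), ("group_a", False, False), ("group_a", True, True),
    ("group_b", True, False), ("group_b", True, False), ("group_b", False, False),
]

def false_positive_rates(rows):
    """False-positive rate per group: share of uninvolved people the system flagged."""
    flagged = defaultdict(int)
    uninvolved = defaultdict(int)
    for group, was_flagged, involved in rows:
        if not involved:
            uninvolved[group] += 1
            if was_flagged:
                flagged[group] += 1
    return {group: flagged[group] / uninvolved[group] for group in uninvolved}

rates = false_positive_rates(records)
print(rates)
print(rates["group_b"] / rates["group_a"])  # how many times higher one group's error rate is
```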

Renderos of MediaJustice is now helping to lead campaigns against tech firms like Amazon and Palantir, which have both supplied surveillance tools to Immigration and Customs Enforcement. The aim is to shift the political conversation around algorithmic surveillance from a technical policy debate to an abolitionist discourse about the role that technology should, or shouldn’t, play in the governance and policing of vulnerable communities.

For activists, the first step is ensuring democratic control of the government’s technology infrastructure. This could take the form of community-based oversight bodies that can preemptively review any new technology that a local law-enforcement agency wants to use. A model known as Community Control over Police Surveillance is in place in more than 20 municipalities. It requires public buy-in before police adopt new surveillance technologies. The government could also establish an algorithm oversight agency, analogous to the Food and Drug Administration or an environmental regulator, to investigate new software before it is authorized for public use.

For some rights advocates, however, the end goal is not curbing algorithmic surveillance but forcing government agencies to drop the technology altogether. “We need space and time for us to consider what sorts of policies we can put in place,” Renderos said. “What sort of protections do we need to put in place to ensure that we can prevent harm from happening…[and] have a real pathway towards repair, if harm does happen?”

STOP argues that more surveillance does not mean more safety, and that the burden should fall on the state to prove that the public benefits of new algorithmic technologies outweigh the risks. In the absence of an effective regulatory regime, “the hope is to invert the American legal presumption that police are allowed to use these surveillance technologies unless they’ve been outlawed,” Cahn told me. “Instead, we need to go to a model where they’re only permitted once the public has actively consented, which, with many of the systems, the public simply would not do.”

Michelle Chen is a contributing writer for The Nation.

