In May of 2010, prompted by a series of high-profile scandals, the mayor of New Orleans asked the US Department of Justice to investigate the city police department (NOPD). Ten months later, the DOJ offered its blistering analysis: during the period of its review from 2005 onwards, the NOPD had repeatedly violated constitutional and federal law.
It used excessive force, and disproportionately against black residents; targeted racial minorities, non-native English speakers, and LGBTQ individuals; and failed to address violence against women. The problems, said assistant attorney general Thomas Perez at the time, were “serious, wide-ranging, systemic and deeply rooted within the culture of the department.”
Despite the disturbing findings, only a year later the city entered a secret partnership with data-mining firm Palantir to deploy a predictive policing system. The system used historical data, including arrest records and electronic police reports, to forecast crime and help shape public safety strategies, according to company and city government materials. At no point did those materials suggest any effort to clean or amend the data to address the violations revealed by the DOJ. In all likelihood, the corrupted data was fed directly into the system, reinforcing the department’s discriminatory practices.
Predictive policing algorithms are becoming common practice in cities across the US. Though a lack of transparency makes exact statistics hard to pin down, PredPol, a leading vendor, boasts that it helps “protect” 1 in 33 Americans. The software is often touted as a way to help thinly stretched police departments make more efficient, data-driven decisions.
But new research suggests it’s not just New Orleans that has trained these systems with “dirty data.” In a paper released today, to be published in the NYU Law Review, researchers at the AI Now Institute, a think tank that studies the social impact of artificial intelligence, found the problem to be pervasive among the jurisdictions they studied. This has significant implications for the efficacy of predictive policing and other algorithms used in the criminal justice system.
“Your system is only as good as the data that you use to train it on,” says Kate Crawford, cofounder and co-director of AI Now and an author on the study. “If the data itself is incorrect, it will cause more police resources to be focused on the same over-surveilled and often racially targeted communities. So what you’ve done is actually a type of tech-washing where people who use these systems assume that they are somehow more neutral or objective, but in actual fact they have ingrained a form of unconstitutionality or illegality.”
The researchers examined 13 jurisdictions, focusing on those that have used predictive policing systems and been subject to a government-commissioned investigation. The latter requirement ensured that the policing practices had legally verifiable documentation. In nine of the jurisdictions, they found strong evidence that the systems had been trained on “dirty data.”
The problem wasn’t just data skewed by disproportionate targeting of minorities, as in New Orleans. In some cases, police departments had a culture of purposely manipulating or falsifying data under intense political pressure to bring down official crime rates. In New York, for example, in order to artificially deflate crime statistics, precinct commanders regularly asked victims at crime scenes not to file complaints. Some police officers even planted drugs on innocent people to meet their quotas for arrests. In modern-day predictive policing systems, which rely on machine learning to forecast crime, those corrupted data points become legitimate predictors.
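To make that mechanism concrete, here is a minimal, hypothetical sketch in Python. It is not any vendor’s actual system, and all neighborhood names and numbers are invented; it simply shows how a naive count-based forecaster trained on skewed records keeps sending resources back to whichever area was most heavily recorded, and how the resulting new records deepen the skew.

```python
# Illustrative sketch only: a toy count-based "hotspot" forecaster, not any
# vendor's actual model. All neighborhood names and numbers are hypothetical.
from collections import Counter

# Suppose the true underlying incident rate is the same in every neighborhood,
# but the historical records reflect where officers patrolled, not where crime
# occurred: neighborhood "A" was over-policed, so more incidents were recorded.
recorded_incidents = Counter({"A": 30, "B": 10, "C": 5})

def forecast_hotspot(history: Counter) -> str:
    """Pick the 'hotspot' with the most recorded incidents,
    as a naive count-based predictor would."""
    return history.most_common(1)[0][0]

# The forecast sends patrols back to the over-policed neighborhood, which then
# generates still more records there: a feedback loop, not a neutral prediction.
patrol_target = forecast_hotspot(recorded_incidents)
print(patrol_target)  # -> "A"
recorded_incidents[patrol_target] += 5  # extra patrols log extra incidents
print(forecast_hotspot(recorded_incidents))  # still "A"; the skew compounds
```

Real systems use far more elaborate machine-learning models than this toy example, but they face the same input problem: the forecasts can only be as representative as the records they are trained on.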
The paper’s findings call the validity of predictive policing systems into question. Vendors of such software often argue that the biased outcomes of their tools are easily fixable, says Rashida Richardson, the director of policy research at AI Now and lead author on the study. “But in all of these instances, there is some type of systemic problem that is reflected in the data,” she says. The remedy, therefore, would require far more than simply removing one or two instances of bad behavior. It’s not so easy to “segregate out good data from bad data or good cops from bad cops,” adds Jason Schultz, the institute’s research lead for law and policy, another author on the study.
Vendors also argue that they avoid data more likely to reflect biases, such as drug-related arrests, and opt instead for training inputs like 911 calls. But the researchers found just as much bias in the supposedly more neutral data. Furthermore, they found that vendors never independently audit the data fed into their systems.
The paper sheds light on another debate raging in the US over the use of criminal risk assessment tools, which also use machine learning to help determine anything from defendants’ fate during pretrial proceedings to the severity of their sentences. “The data we discuss in this paper is not just isolated to policing,” says Richardson. “It’s used throughout the criminal justice system.”
Currently, much of the debate has focused on the mechanics of the system itself—whether it can be designed to produce mathematically fair results. But the researchers emphasize that this is the wrong question. “To separate out the algorithm question from the social system it’s connected to and embedded within doesn’t get you very far,” says Schultz. “We really have to acknowledge the limits of those kinds of mathematical, calculation-based attempts to address bias.”
Moving forward, the researchers hope their work will help reframe the debate to focus on the broader system rather than the tool itself. They also hope it will prompt governments to create mechanisms, like the algorithmic impact assessment framework the institute released last year, to bring more transparency, accountability, and oversight to the use of automated decision-making tools.
If the social and political mechanisms that generate dirty data aren’t reformed, such tools will only do more harm than good, they say. Once people recognize that, then maybe the debate will finally shift to “ways we can use machine learning and other technological advances to actually stop the root cause of [crime],” says Richardson. “Maybe we can solve poverty and unemployment and housing issues using government data in a more beneficial way.”