Ratcheting up pressure on the Justice Department to crack down on unregulated predictive policing algorithms, Democratic lawmakers on Thursday called for strict new requirements on any DOJ-funded research into police tools promising to forecast crime, saying that research into some programs already in use has revealed “major problems with their effectiveness and reliability.”
“These algorithms, which automate policing decisions, not only suffer from a lack of meaningful oversight regarding whether they actually improve public safety, but it is also likely they amplify biases against historically-marginalized groups,” the lawmakers wrote Thursday in a letter to Attorney General Merrick Garland, obtained first by Gizmodo.
The letter, led by Sens. Ron Wyden, Ed Markey, Alex Padilla, Raphael Warnock, and Jeff Merkley and U.S. Representatives Yvette Clarke and Sheila Jackson Lee, calls for “ongoing, independent audits by experts” and a system of due process for any victims of error.
The probe of DOJ’s funding practices comes amid heightened tensions between Black communities and police departments across the United States over repeated, documented incidents of excessive force, including recorded shootings of Black suspects who are unarmed, defenseless, and in some cases, even complying with police commands.
The lawmakers pointed to a 2019 study from New York University School of Law and NYU’s AI Now Institute which discovered predictive policing systems being trained on what the researchers call “dirty data,” or data derived from “corrupt, biased, and unlawful practices.”
At least five of the departments identified as likely relying on erroneous data in attempts to predict crime acquired that capability, in part, through DOJ grants.
“We are deeply concerned such programs may amount to violations of citizens’ constitutional rights to equal protection and due process under the law,” the lawmakers told Garland. “We worry that the use of untested black box algorithms with biased training data can directly harm innocent individuals and communities.”
Even as the high-profile trial of Derek Chauvin, the former Minneapolis police officer accused of unlawfully killing George Floyd, came to a close Thursday morning, protesters and riot police in Minnesota were clashing nightly over the killing of 20-year-old Daunte Wright, a Black man fatally shot Sunday by the head of the state’s largest police union, in what the officer has claimed was an accident.
So-called “hotspot policing,” or attempts to proactively direct patrol officers to areas where crimes are most likely to occur, has long drawn the ire of civil rights experts who say the data underpinning predictive tools is weighted down by statistics reflecting expired policies and unlawful or biased activity with historically outsized impacts on Black communities.
“When datasets filled with inaccuracies influenced by historical and systemic biases are used without corrections, these algorithms end up perpetuating such biases and facilitate discriminatory policing against marginalized groups, especially Black Americans,” the lawmakers wrote.
The departments NYU researchers focused on, for example, supplied data used to generate predictions of where future crimes were likely to occur, but had themselves been investigated or sanctioned for corruption and illegal practices in the past. The cases, researchers said, show that “illegal police practices can significantly distort the data that is collected, and the risks that dirty data will still be used for law enforcement and other purposes.”
The technology has raised particular concerns about the over-policing of poor and minority communities, whose residents, whether engaged in criminal activity or not, are more heavily exposed to police contact.
“Police service can lead to the reduction of liberties of citizens through such law enforcement actions as police stops or arrests,” criminologist David Weisburd wrote in 2016. “While the purpose of these activities may be to aid the community in its efforts to reduce crime, for individuals who live in communities where there are more police, this can mean an increased likelihood of being the focus of such actions.”
Other researchers have argued that such tools are likely to produce a “feedback loop,” in which the predictions, themselves based on dirty data, will only increase the likelihood of even more predictions targeting the same neighborhoods and communities, ad infinitum.
“The use of predictive policing must be treated with high levels of caution and mechanisms for the public to know, assess, and reject such systems are imperative,” the NYU researchers said.
In their letter to Attorney General Garland, the lawmakers have asked DOJ to release documentation of any efforts to ensure predictive technologies comply with federal civil rights statutes, as well as a “detailed annual accounting” of grant money used to support the development or implementation of prediction tools.
They’ve also requested details about any efforts at DOJ to determine the efficacy and reliability of such software, including whether police vendors have been subjected to validation studies or expert audits during procurement processes. Moreover, the lawmakers asked whether the department has ever halted funding for any program about which concerns have been raised.
The lawmakers have requested Garland’s response by May 28, saying they hope to learn how DOJ is bringing accountability and transparency to the issue.
Correction: An early version of this article stated that only four U.S. Senators had joined the letter to Attorney General Merrick Garland. It mistakenly excluded Sen. Jeff Merkley, who is also a cosigner.