Report: Cops Have Been Pressuring ShotSpotter to Alter Evidence

Court records appear to show police departments asking ShotSpotter to change entries to make them more advantageous to court cases.

Photo: Scott Olson (Getty Images)

A new report from Motherboard suggests that, in multiple instances, police departments have exerted pressure on AI-driven gunfire detector firm ShotSpotter—pushing the company to alter evidence against specific suspects, often to bolster the legal cases against them.

ShotSpotter sells a popular automated system that uses acoustic sensors and algorithms to help police detect gunfire in targeted geographical areas. The firm, which launched some 25 years ago, describes itself as a “leader in precision policing technology solutions that enable law enforcement to more effectively respond to, investigate, and deter crime.” In recent years, its technology has proliferated throughout the country, with the company contracting with more than 120 different cities. Increasingly, its data is used in court cases to place suspects at or near the scene of crimes, and the company claims that its evidence has been used in close to 200 court cases.

However, a review of multiple recent court cases suggests that ShotSpotter has frequently altered the data it initially collected after being approached by police, often to the benefit of the government’s case against particular defendants.

To grasp this, you have to understand how ShotSpotter collects data: The company uses acoustic sensors, hidden throughout a city, which transmit noise back to its servers. ShotSpotter’s algorithm then analyzes the noise and classifies it either as a gunshot or as something else. After this, a human employee is charged with reviewing the data and making their own determination of its origin. However, Motherboard writes that the “company’s analysts frequently modify alerts at the request of police departments—some of which appear to be grasping for evidence that supports their narrative of events.”

One example provided is that of Michael Williams, a 64-year-old Chicago resident who stands accused of shooting another man to death on the city’s streets last year. Police initially said that ShotSpotter data showed Williams was the culprit. However, it was discovered during court proceedings that the sound had been reclassified: A ShotSpotter analyst had overridden the algorithm’s initial designation, changing the noise’s classification from “firework” to gunfire. At a later point, a ShotSpotter expert also changed the street address of the noise from its original address to one much closer to the spot where cops say the murder occurred. Court documents say both reports had the same GPS coordinates and that the street address was corrected to match the actual location the sensors had identified. Williams has maintained that the victim was actually struck in a drive-by shooting.

In another case, a New York man, Silvon Simmons, was shot by Rochester police in 2016 after they stopped the vehicle he was riding in. The cops claim Simmons fired on them first, but no physical evidence could be found at the scene to prove that claim. Pertinently, ShotSpotter didn’t initially pick up any sign of gunshots at the scene, classifying the noises from the area as “helicopter rotors.” When approached by police, a company analyst ultimately decided that those rotor noises were, in fact, four gunshots—which was the number of times police had fired at Simmons. Unsatisfied, cops asked the analyst “to essentially search and see if there were more shots fired than ShotSpotter picked up,” at which point he discovered a fifth shot.

Some of these cases have inspired legal challenges. In the case of Williams, the suspect’s attorney subsequently filed a motion against the ShotSpotter data—essentially asking the judge to reconsider whether it was strong enough to be included in the case. Surprisingly, the prosecution subsequently withdrew the evidence instead of defending it, all but admitting that it wasn’t substantial, Motherboard reports.

“Through this human-involved method, the ShotSpotter output in this case was dramatically transformed from data that did not support criminal charges of any kind to data that now forms the centerpiece of the prosecution’s murder case against Mr. Williams,” the motion claims.

After a request for comment from Gizmodo, ShotSpotter provided the following statement regarding the implication that its analysts had been pressured by police to alter data:

The [Motherboard] article referenced is misleading and inaccurate about the use of ShotSpotter in criminal cases. We categorically deny any allegations that ShotSpotter manipulates any details of an incident at the request of the police. We respond to requests to further investigate an incident but only to provide the facts that we can determine and not to fit a pre-determined narrative.

Regarding the motion, the company said:

We understand the Cook County State’s Attorney’s decision that ShotSpotter could not play an evidentiary role in this specific case. In the last several years, ShotSpotter has partnered with the Chicago Police Department and the City of Chicago to save lives, reduce gun violence, and make neighborhoods safer...Whether ShotSpotter offers probative evidence depends on the specific facts of each shooting incident, and we fully support carefully evaluating those facts and making case-by-case determinations.

The motion in the Williams case further argues that there hasn’t been sufficient testing of the company’s technology for accuracy, and claims that ShotSpotter’s analysis of the audio it receives is handled largely by its marketing department, which is staffed by “customer service representatives with little more than high school diplomas.” Prosecutors did not challenge the motion, but ShotSpotter disputes this characterization from the Williams pleading and says the team of “Real Time Alert” analysts who review audio files to determine whether a given sound is gunfire receives over 100 hours of training and must meet accuracy standards of at least 99%.

Those analysts, “on average, spend less than a minute conducting their so-called ‘forensic evaluation’ of each noise,” the motion claims. ShotSpotter says that while its “Real Time Alerts” take around 60 seconds to produce, those alerts are not used as evidence at trial. For that purpose, the company says, it provides “Detailed Forensic Reports” and expert testimony, which are based on hours of review of audio recordings and provide more precise information about where and when guns were fired than what is gleaned from the “Real-Time Alerts” process.

“Unreliable ShotSpotter evidence should not be used in the criminal justice system until the proponent can establish the reliability and performance parameters of the system through adequate validation testing,” the motion says.

Editor’s Note: This story has been updated to include a clarification of the reclassification that occurred in the Williams case and ShotSpotter’s response to statements made in the motion to exclude ShotSpotter evidence in that case. Following publication, ShotSpotter filed a libel suit against Vice over its report, which Vice has since moved to dismiss.