
Photo: An NYPD vehicle in Manhattan on a snowy day in 2018. (Benjamin Kanter/Mayoral Photo Office)
When it comes to the trade-off of privacy for security, we might all barter differently. But nobody would pay a hefty fee to have their rights and safety simultaneously compromised. And no one would enter such an agreement blindly.
On Tuesday, ShotSpotter’s parent company, SoundThinking, announced that the NYPD extended its contract with ShotSpotter in a $21.8 million deal. ShotSpotter is a “gunshot detection” technology designed to identify and locate gunfire in real time using a network of acoustic sensors.
Ideally, the sound of a gunshot is accurately identified, and law enforcement is quickly notified of where the gunshot occurred. The mere premise of audibly monitoring entire communities raises red flags. Unfortunately, the actual operation and application of this technology have created an even grimmer reality than imagined.
Hidden sensors listening in on conversations that people reasonably expect to be private are a wiretapping nightmare. SoundThinking claims that the risk of voice surveillance is “extremely limited” and that sensors only pick up “loud explosive and impulsive sounds that are likely gunfire.”
However, people’s voices have been recorded by ShotSpotter, and prosecutors have sought to introduce such voice recordings as evidence in criminal trials. In one Massachusetts case, the evidence was not admitted because the court ruled it would violate the Massachusetts Wiretap Act. In one California case, however, the court did admit the voice recording into evidence.
Now let’s add ShotSpotter’s notorious unreliability to the equation. Brooklyn Defender Services just released a groundbreaking report analyzing nine years of NYPD performance data on ShotSpotter technology. The data was obtained via a FOIL request, resulting in the “largest disclosure of ShotSpotter performance data to date from any city.”
The report demonstrates that the NYPD’s confirmation rate, the share of ShotSpotter alerts confirmed to have actually resulted from gunfire, has been only 16.57 percent over the past nine years. An audit by the New York City Comptroller, published in June of this year, is consistent with these more recent findings. The audit found that “during the sampled months of review in 2022 and 2023, ShotSpotter alerts only resulted in confirmed shootings between 8 percent and 20 percent of the time.”
Despite this unreliability, ShotSpotter still serves as an important catalyst for stops without being held to the regulatory requirements imposed on other investigative tools (like drug-sniffing dogs), such as recordkeeping on maintenance and false positives. In practice, ShotSpotter mistaking a slamming door or firework for a gunshot needlessly “puts boots on the ground” in communities, often communities of color.
These officers enter these communities expecting gunfire, concerned for their own safety, and primed to scrutinize the behavior of any nearby pedestrians or vehicles, even something as slight as a passerby tilting or “cant[ing] away” the right side of their body. This recipe for disaster is what led to the death of Adam Toledo, a 13-year-old Latino boy. Police, drawn to his location by a faulty ShotSpotter alert, shot and killed the child while his hands were in the air.
As stated in the Brooklyn Defender Services report, ShotSpotter’s sensors are disproportionately located in communities of color, and “Black residents are 3.5 times more likely to have an officer deployed based on an unconfirmed alert than a neighborhood with predominantly white residents.” According to the report, alerts occur 21.91 times per day on average. This means NYPD officers respond to a ShotSpotter alert and enter a community expecting gunfire about 22 times per day, and about 83 percent of the time they find absolutely nothing: no detected gunfire at all.
To add to this string of dangerous dead-end, false-positive deployments, ShotSpotter’s system of noise analysis allows the company to bend its data. How? The final decision maker is not software but humanware: a ShotSpotter employee who verifies noise characteristics. Silvon Simmons fell victim to this system, tried for attempted murder despite a total lack of physical or eyewitness evidence. Simmons, a Black man repeatedly shot at by an officer in Rochester, N.Y., was convicted of a lesser gun charge based significantly on ShotSpotter evidence.
ShotSpotter had initially identified the sounds in question as coming from a helicopter, but after police informed the company of an officer-involved shooting, it reclassified the sounds as gunshots. The number of identified shots changed several times, from three to four and eventually to five, matching the officer’s claim. ShotSpotter engineer Paul Greene testified that employees reclassified the sounds from a helicopter to a gun because the Rochester Police Department told them to do so. When the prosecutor asked how often this occurs, Greene responded, “all the time…we trust our law enforcement customer to be really upfront and honest with us.”
A similar case involved a ShotSpotter employee moving the location of the initial gunshot alert a block away, to match the scene of the crime. This employee, Walter Collier, previously worked for the Chicago Police Department; when asked about his training, ShotSpotter responded that “no official or formal training materials exist for our forensic experts.”
One might argue that the mere presence of these sensors is still valuable in deterring crime. However, a 17-year study examining ShotSpotter effects across 68 large metropolitan counties concluded that the technology “had no significant impact on firearm-related homicides or arrest outcomes.”
An example of this impact, or rather the lack thereof, is reflected in statements by San Antonio Police Chief William McManus: “In the 15 months it’s been in operation, officers have made only four arrests and confiscated seven weapons that can be attributed to ShotSpotter technology.” And these arrests weren’t cheap, costing an estimated $136,500 each once the technology itself and officer overtime for the program are accounted for. Returning once again to the Brooklyn Defender Services report: “only 0.7 percent of responses resulted in NYPD making an arrest for alleged illegal activity of any kind.”
If this lack of privacy, reliability, consistency, and deterrence weren’t alarming enough, bear in mind that ShotSpotter’s toll on New York City has long been heavily suppressed. Returning to the audit conducted by the New York City Comptroller earlier this year, a major criticism was the lack of transparency. In terms of costs to the city alone, for example, the NYPD simply does not track the time or staffing costs of responding to ShotSpotter alerts that are false alarms. As the audit stated, “data currently collected and published by NYPD does not support a comprehensive assessment of the tool’s effectiveness or economy, does not fully inform the public or government officials interested in ShotSpotter’s performance, and therefore does not currently support renewal of the contract.”
It wasn’t until the Brooklyn Defender Services report that we were finally able to see some hard numbers and percentages. And all of this was revealed only as a result of a FOIL request obligating production of information the NYPD attempted to keep secret. This lack of transparency alone should have brought a rejection of the NYPD’s contract renewal with ShotSpotter. By renewing, the NYPD agreed to open our pocketbooks once again with little in return, surrendering even the right to know the dangers we are truly propelling.
Skorpen is a legal intern at the Surveillance Technology Oversight Project (S.T.O.P.) and 2L at Stanford Law School with an interest in the preservation of civil liberties in a digital world.