Over a dozen cybersecurity experts are slamming Apple's and the European Union's plans to scan photos on people's phones for known child sexual abuse materials (CSAM), the New York Times reports. In a 46-page study, the experts say the photo scanning tech is not only ineffective, but it's also "dangerous technology."
The experts told the NYT that they had begun their study before Apple announced its CSAM plans in August. That's because the EU released documents last year indicating the bloc wanted to implement a similar program that would scan encrypted devices not only for CSAM but also for evidence of organized crime and terrorism. The researchers also said they believe a proposal to allow this tech in the EU could come as soon as this year.
The tech works by scanning photos on your phone before they're uploaded and encrypted in the cloud. Those photos are then matched against a database of known CSAM images. While Apple tried several times to clarify how the feature worked and released extensive FAQs, security and privacy experts were adamant that Apple had built a "back door" that could be abused by governments and law enforcement to surveil law-abiding citizens. Apple tried to allay those fears by promising it wouldn't let governments use its tools that way. Those promises did not appease experts at the time, and some researchers claimed they were able to reverse-engineer the algorithm and trick it into registering false positives.
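To make the matching step concrete, here is a minimal, purely illustrative sketch of hash-based matching against a known-bad database. Note the assumptions: Apple's actual system uses a perceptual hash ("NeuralHash") combined with private set intersection, not the plain cryptographic hashing shown here; a cryptographic hash would miss even slightly modified images, which is exactly why perceptual hashes are used and why researchers were able to engineer collisions (false positives) against them. All function names below are hypothetical.

```python
import hashlib

def image_hash(image_bytes: bytes) -> str:
    # Illustrative stand-in only: Apple's NeuralHash is a perceptual hash
    # designed to match visually similar images, not an exact SHA-256 digest.
    return hashlib.sha256(image_bytes).hexdigest()

def scan_before_upload(photos: list[bytes], known_hashes: set[str]) -> list[bytes]:
    """Flag any photo whose hash appears in the known-bad database
    before it is uploaded and encrypted."""
    return [p for p in photos if image_hash(p) in known_hashes]

# Example: one photo's hash is in the (hypothetical) database, one is not.
known = {image_hash(b"example-known-image")}
flagged = scan_before_upload([b"innocent-image", b"example-known-image"], known)
```

The exact-match behavior of a cryptographic hash is what the sketch gets wrong on purpose: resizing or re-compressing a photo changes every bit of its SHA-256 digest, so real systems trade that brittleness for perceptual similarity, at the cost of the collision attacks the researchers demonstrated.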
Amid the backlash, Apple hit pause on its program in early September. However, hitting pause isn’t the same as pulling the plug. Instead, Apple said it was going to take some extra time to refine the feature, but didn’t provide details as to what that revision process would look like or what its new release timeline would be.
The concerning thing here is that even if Apple does eventually nix its CSAM plans, the EU was already building a case for its own version, and one with a wider scope. The experts told the NYT that the reason they published their findings now was to warn the EU about the dangers of opening this particular Pandora's box.
“It’s allowing scanning of a personal private device without any probable cause for anything illegitimate being done,” Susan Landau, professor of cybersecurity and policy at Tufts University, told the New York Times. “It’s extraordinarily dangerous. It’s dangerous for business, national security, for public safety and for privacy.”
It’s easy to get lost in the weeds when it comes to the CSAM debate. Apple, for instance, launched an uncharacteristically slipshod PR campaign to explain every nut and bolt of its privacy failsafes. (Spoiler: Everyone was still massively confused.) However, the question isn’t whether you can make this sort of tool safe and private—it’s whether it should exist in this capacity at all. And if you were to ask the security experts, it seems the resounding answer is “no.”