Tricking AI into seeing a gun is particularly troubling, as object recognition is quickly becoming a key element in smart policing. In September, security start-up Knightscope unveiled a new line of “crime-fighting robots,” self-driving dune buggies equipped with surveillance gear and object recognition, marketing them as supplemental security for airports and hospitals. What happens when a robot reports a high-level threat to authorities because of a 3D-printed turtle? Similarly, Motorola and Axon (formerly Taser) have invested in real-time object recognition in their body cameras. If this exploit can trick AI into mistaking something harmless for something dangerous, could it also do the opposite, disguising weapons as turtles?


Anish Athalye, a co-author of the MIT paper, says the problem isn’t as simple as patching a single vulnerability; AI needs to learn to see beyond simply recognizing complex patterns:

“It shouldn’t be able to take an image, slightly tweak the pixels, and completely confuse the network,” he told Quartz. “Neural networks blow all previous techniques out of the water in terms of performance, but given the existence of these adversarial examples, it shows we really don’t understand what’s going on.”
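
The MIT team’s own technique isn’t detailed in this article, but the general idea Athalye describes can be sketched in a few lines. The snippet below is an illustrative example of the fast gradient sign method (FGSM), a well-known pixel-tweaking attack that is not the MIT researchers’ method; it assumes a recent PyTorch and torchvision install, and uses a random tensor as a stand-in for a real photo.

```python
# Illustrative sketch of the fast gradient sign method (FGSM), one well-known
# way of crafting adversarial examples. It only demonstrates how a tiny,
# targeted pixel perturbation can change a classifier's prediction.
import torch
import torch.nn.functional as F
from torchvision import models

# Load a standard pretrained ImageNet classifier (assumes torchvision >= 0.13).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

def fgsm_perturb(image, label, epsilon=0.01):
    """Nudge each pixel of `image` by at most `epsilon` in the direction
    that most increases the classification loss for `label` (FGSM)."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

# A random tensor stands in for a real, preprocessed 224x224 RGB photo,
# and the label is an arbitrary ImageNet class index used for illustration.
x = torch.rand(1, 3, 224, 224)
y = torch.tensor([35])
x_adv = fgsm_perturb(x, y)

print("prediction before:", model(x).argmax(dim=1).item())
print("prediction after: ", model(x_adv).argmax(dim=1).item())
```

With a perturbation of about one percent per pixel, which is typically imperceptible to people, attacks in this family can flip a network’s top prediction, which is exactly the fragility Athalye is pointing to.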


But privacy experts may question the push to accelerate AI-fueled recognition. We already live in a largely unregulated, perpetual surveillance state. Half of all American adults are in a federal face recognition database, and simply unlocking your phone could match you against a database. Better “sight” for AI inevitably means stronger surveillance. It’s an uneasy trade-off, but with AI poised to fundamentally reshape every aspect of modern life, including health, security, and transportation, we need to predict and prevent these exploits.

Correction: A previous version of this article misattributed the research. Japanese researchers designed the one-pixel attack, while MIT researchers used a 3D-printed turtle to fool the software. We regret the error.


[Quartz via MIT Technology Review]