Photo: Kyushu University

Two new research papers on object recognition, one from Japanese researchers at Kyushu University and one from experts at MIT, have startling implications for how artificial intelligence “sees” potential threats.

Typically, object recognition works by complex pattern matching: the software measures the pixels in an image and matches them against an internal model of a given object’s dimensions, or what it thinks the object should look like. The Japanese researchers developed what they call a “one pixel attack,” an algorithm that identifies and alters a single pixel in an image, forcing the AI to “see” something else.

Horses became cars and cars became dogs. By altering a single pixel in a 1,024-pixel image, the attack succeeded 74 percent of the time. When five pixels were changed, the success rate rose to 87 percent.
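The actual attack uses differential evolution to search for a pixel change that fools a deep network. A heavily simplified, hypothetical sketch of the core idea, using a tiny linear “classifier” and a brute-force search in place of the real method:

```python
def classify(image, weights):
    """Score each class as a dot product of pixel values and class
    weights; return the index of the highest-scoring class."""
    scores = [sum(p * w for p, w in zip(image, ws)) for ws in weights]
    return scores.index(max(scores))

def one_pixel_attack(image, weights, values=(0, 255)):
    """Brute-force search: try setting each pixel to each candidate
    value, and return the first single-pixel edit that changes the
    predicted class as (pixel index, new value, new class), or None."""
    original = classify(image, weights)
    for i in range(len(image)):
        for v in values:
            candidate = list(image)
            candidate[i] = v
            flipped = classify(candidate, weights)
            if flipped != original:
                return i, v, flipped
    return None

# A 4-pixel grayscale "image" and a 2-class linear model (numbers are
# purely illustrative; the real target would be a deep network).
image = [10, 10, 10, 10]
weights = [
    [1.0, 1.0, 1.0, 1.0],     # class 0 responds to overall brightness
    [-1.0, -1.0, -1.0, 5.0],  # class 1 is dominated by the last pixel
]
```

Here `classify(image, weights)` initially picks class 0, and the search finds that pushing the last pixel to 255 is enough to flip the prediction to class 1: one pixel, different answer.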

MIT researchers took this further, confusing recognition software in real time with 3D objects. Using their algorithm, they 3D-printed a turtle with its texture and coloring deliberately altered so that the AI sees a rifle, even from different angles and distances. Crucially, the researchers were able to choose what they wanted the AI to see, a deeply troubling circumvention of object recognition.
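The trick to surviving different angles and distances is to optimize the perturbation over many simulated views at once, rather than a single fixed image. As a hypothetical toy sketch, with a linear two-class model standing in for a neural network and random noise standing in for changes of viewpoint:

```python
import random

def classify(img, w0, w1):
    """Two-class linear model: pick whichever weight vector
    gives the higher dot product with the image."""
    s0 = sum(p * w for p, w in zip(img, w0))
    s1 = sum(p * w for p, w in zip(img, w1))
    return 0 if s0 >= s1 else 1

def noisy_view(img, rng, scale=1.0):
    """Crude stand-in for a change of viewpoint or lighting:
    perturb every pixel with random noise."""
    return [p + rng.uniform(-scale, scale) for p in img]

def robust_adversarial(img, w0, w1, rng, steps=200, lr=0.5):
    """Sketch of optimizing over transformations: sample random views
    of the perturbed image and keep nudging the perturbation toward
    class 1 as long as sampled views are still classified as class 0,
    so the final result fools the model across views, not just one."""
    grad = [b - a for a, b in zip(w0, w1)]  # linear model: constant gradient
    delta = [0.0] * len(img)
    for _ in range(steps):
        view = noisy_view([p + d for p, d in zip(img, delta)], rng)
        if classify(view, w0, w1) == 0:  # this view still looks "clean"
            delta = [d + lr * g for d, g in zip(delta, grad)]
    return [p + d for p, d in zip(img, delta)]

rng = random.Random(0)
w0 = [1.0, 1.0, 0.0, 0.0]   # class 0 weights (illustrative)
w1 = [0.0, 0.0, 1.0, 1.0]   # class 1 weights (illustrative)
img = [5.0, 5.0, 1.0, 1.0]  # starts out classified as class 0
adv = robust_adversarial(img, w0, w1, rng)
```

Because each gradient step is triggered by a different randomly transformed view, the perturbation keeps growing until the misclassification holds across views, which is the property that lets a physical object fool a camera from many angles.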

[Video: MIT’s real-time misidentification in action]

Tricking AI into seeing a gun is particularly troubling, as object recognition is quickly becoming a key element in smart policing. In September, security start-up Knightscope unveiled a new line of “crime fighting robots,” self-driving dune buggies equipped with surveillance gear and object recognition, marketed as supplemental security for airports and hospitals. What happens when robots report a high-level threat to authorities because of a paper turtle? Similarly, Motorola and Axon (formerly Taser) have invested in real-time object recognition for their body cameras. If this exploit can trick AI into mistaking something harmless for something dangerous, could it do the opposite, disguising weapons as turtles?

Anish Athalye, a co-author of the MIT paper, says the problem isn’t as simple as patching a single vulnerability; AI needs to learn to see beyond simply recognizing complex patterns:

“It shouldn’t be able to take an image, slightly tweak the pixels, and completely confuse the network,” he told Quartz. “Neural networks blow all previous techniques out of the water in terms of performance, but given the existence of these adversarial examples, it shows we really don’t understand what’s going on.”

But privacy experts may question the push to accelerate AI-fueled recognition. We already live in a largely unregulated, perpetual surveillance state: half of all American adults are in a federal face recognition database, and simply unlocking your phone can match you against one. Better “sight” for AI inevitably means stronger surveillance. It’s an uneasy trade-off, but with AI poised to reshape every aspect of modern life, from health and security to transportation, we need to predict and prevent these exploits.

Correction: The previous version of this article misattributed the research. Japanese researchers designed the one-pixel attack, while MIT researchers used the 3D-printed turtle to fool software. We regret the error.

[Quartz via MIT Technology Review]