“We want to demonstrate that it’s possible,” Nicholas Carlini, a U.C. Berkeley PhD student and one of the authors of a research paper the team published this month, told the Times, “and then hope that other people will say, ‘Okay, this is possible, now let’s try and fix it.’”

Subliminal attacks can be dangerous; imagine the chaos of a smart home being fed confusing commands to lock or unlock doors or turn on lights. But it’s important to remember that the AI fueling Siri and Alexa isn’t being “hacked,” it’s being fooled. Cybersecurity experts use the term “adversarial example” to describe an input crafted to trick AI into erroneously recognizing something. Carlini and his team’s research is troubling, particularly as people turn to voice assistants to run their homes, but it isn’t the only work of its kind: other researchers have demonstrated adversarial examples that use makeup or glasses to fool facial recognition software.

[New York Times]