
A New Method of Spotting Deepfake Videos Looks for the Subtle Movements We Don't Realize We Make


The quality and speed with which videos can now be faked using neural networks and deep learning promise to make the upcoming presidential election even more of a nightmare. But by exploiting something current deepfake techniques overlook, researchers have found an automated way to spot fake videos.

Deepfake videos are far from perfect right now. Created from giant libraries of images scraped from the internet, they’re often generated at low resolutions (which helps hide imperfections) and appear overly compressed. But the technology is improving at a startling rate, and flaws in the process, like early deepfakes that were easy to spot because the subjects never blinked, are quickly being fixed, making the results more and more believable.

It’s an arms race that neither side is going to stand down from any time soon, but researchers from UC Berkeley and the University of Southern California believe they’ve developed the next weapon for battling, or at least accurately identifying, faked videos. Using a process similar to how deepfakes are created, studying existing footage of the current crop of presidential hopefuls, they trained an AI to look for the presence of each person’s “soft biometric” signature. It sounds complicated, but when speaking, we all have subtle but unique ways of moving our bodies, heads, hands, eyes, and even lips. It’s all done subconsciously; you don’t realize your body is doing it, nor do you consciously notice it in someone else. As a result, it’s a detail that current deepfake techniques don’t take into account when creating a fake.
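
To make the idea concrete, here is a minimal, hypothetical sketch of how that kind of “soft biometric” check could work. It is not the researchers’ actual pipeline: it assumes per-frame behavioral features (head-pose angles, facial-movement intensities, and the like) have already been extracted by some face tracker, summarizes how those movements correlate over short clips, and trains a generic one-class model on genuine footage so that anything deviating from the learned signature gets flagged. The window length, feature count, and stand-in data below are arbitrary choices for illustration.

```python
# A minimal, hypothetical sketch of a "soft biometric" deepfake check.
# Not the researchers' actual pipeline. Assumes per-frame behavioral
# features (head-pose angles, facial-movement intensities, etc.) have
# already been extracted from video by some face tracker.

import numpy as np
from sklearn.svm import OneClassSVM

WINDOW = 300          # frames per clip (~10 seconds at 30 fps), arbitrary choice
N_FEATURES = 16       # number of per-frame behavioral features (assumed)

def clip_signature(frames: np.ndarray) -> np.ndarray:
    """Summarize one clip (frames x features) by how its movements correlate."""
    corr = np.corrcoef(frames, rowvar=False)       # feature-by-feature correlation
    return corr[np.triu_indices_from(corr, k=1)]   # flatten the upper triangle

def signatures(videos: list[np.ndarray]) -> np.ndarray:
    """One movement signature per non-overlapping window of each video."""
    sigs = []
    for feats in videos:
        for start in range(0, len(feats) - WINDOW + 1, WINDOW):
            sigs.append(clip_signature(feats[start:start + WINDOW]))
    return np.array(sigs)

# Stand-in data: random arrays playing the role of real tracker output.
rng = np.random.default_rng(0)
real_videos = [rng.normal(size=(3000, N_FEATURES)) for _ in range(5)]
suspect_video = [rng.normal(size=(900, N_FEATURES)) + rng.normal(size=N_FEATURES)]

# Train only on authentic footage of the person being protected.
detector = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05)
detector.fit(signatures(real_videos))

# -1 means a clip's mannerisms don't match the learned signature (possible fake).
print(detector.predict(signatures(suspect_video)))
```

The appeal of this one-class framing is that the detector only ever needs genuine footage of the person it protects; it doesn’t have to see examples of every new forgery technique, because anything that strays from the learned mannerisms stands out on its own.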

In testing, the new AI was able to accurately spot deepfakes at least 92 percent of the time, including videos created using several different techniques and videos whose image quality had been degraded by heavy compression. The researchers plan to further improve the AI’s success rate by also taking into account the unique cadence and characteristics of a person’s voice. But the reality is that deepfake techniques are evolving and improving at such a rate that they’ll probably adapt and be able to fool this AI before 2020 even arrives. This research represents a battle won, but the war for truth online will rage on.