“It’s not a story of mis- or disinfo, but rather the intersection of a fairly mundane business use case w/AI technology, and resulting questions of ethics & expectations,” DiResta wrote in a tweet. “What are our assumptions when we encounter others on social networks? What actions cross the line to manipulation?”

In a statement sent to Gizmodo, LinkedIn said it had investigated and removed accounts that violated its policies around using fake images.

“Our policies make it clear that every LinkedIn profile must represent a real person. We are constantly updating our technical defenses to better identify fake profiles and remove them from our community, as we have in this case,” a LinkedIn spokesperson said. “At the end of the day it’s all about making sure our members can connect with real people, and we’re focused on ensuring they have a safe environment to do just that.”

Deepfake Creators: Where’s The Misinformation Hellscape We Were Promised?

Misinformation experts and political commentators warned of a coming deepfake dystopia for years, but the real-world results have, for now at least, been less impressive. The internet was briefly enraptured last year with this fake TikTok video featuring someone pretending to be Tom Cruise, though many users were able to spot right away that it wasn’t the real thing. This and other popular deepfakes (like this one supposedly starring Jim Carrey in The Shining, or this one depicting an office full of Michael Scott clones) feature clearly satirical and relatively innocuous content that doesn’t quite sound the “Danger to Democracy” alarm.

Other recent cases, however, have delved into the political morass. Previous videos, for example, have demonstrated how creators were able to manipulate footage of former President Barack Obama to make him say sentences he never actually uttered. Then, earlier this month, a fake video pretending to show Ukrainian President Volodymyr Zelenskyy surrendering made the rounds on social media. Again, though, it’s worth pointing out that this one looked like shit. See for yourself.

Deepfakes, even of the political bent, are definitely here, but fears of society-shaking fakes have not yet come to pass, an apparent bummer that left some post-U.S. election commentators asking, “Where Are the Deepfakes in This Presidential Election?”

Humans Are Getting Worse At Spotting Deepfake Images

Still, there’s good reason to believe all that could change… eventually. A recent study published in the Proceedings of the National Academy of Sciences found that computer-generated (or “synthesized”) faces were actually deemed more trustworthy than headshots of real people. For the study, researchers gathered 400 real faces and generated another 400 extremely lifelike headshots using neural networks. The researchers used 128 of these images to test a group of participants on whether they could tell a real image from a fake one. A separate group of respondents was asked to judge how trustworthy the faces looked, without any hint that some of the images weren’t of humans at all.

[Image: Move Over Global Disinformation Campaigns, Deepfakes Have a New Role: Corporate Spamming]

The results don’t bode well for Team Human. In the first test, participants correctly identified whether an image was real or computer-generated only 48.2% of the time. The group rating trustworthiness, meanwhile, gave the AI faces a higher average score (4.82) than the human faces (4.48).

“Easy access to such high-quality fake imagery has led and will continue to lead to various problems, including more convincing online fake profiles and—as synthetic audio and video generation continues to improve—problems of nonconsensual intimate imagery, fraud, and disinformation campaigns,” the researchers wrote. “We, therefore, encourage those developing these technologies to consider whether the associated risks are greater than their benefits.”

Those results are worth taking seriously, and they raise the possibility of meaningful public uncertainty around deepfakes, one that risks opening a Pandora’s box of complicated new questions around authenticity, copyright, political misinformation, and big-T Truth in the years and decades to come.

In the near term, though, the most significant sources of politically problematic content may not come from highly advanced, AI-driven deepfakes at all, but rather from simpler so-called “cheap fakes” that manipulate media with far less sophisticated software, or none at all. Examples include a 2019 viral video showing a supposedly hammered Nancy Pelosi slurring her words (that video was actually just slowed down by 25%) and this one of a would-be bumbling Joe Biden trying to sell Americans car insurance. That one was actually just a man poorly impersonating the president’s voice, dubbed over real video. While both are wildly less sexy than some deepfake of the Trump pee tape, they both gained massive amounts of attention online.
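The Pelosi example shows just how little technology a cheap fake needs: slowing a clip to 75% of its original speed is a one-line timestamp transform (video editors and tools like ffmpeg’s `setpts` filter do exactly this stretch). A minimal sketch of the math, using a hypothetical helper not drawn from any particular tool:

```python
def slow_down(timestamps, speed=0.75):
    """Stretch frame presentation timestamps so a clip plays at
    `speed` times its original rate (0.75 = the 25% slowdown
    applied to the Pelosi video). Slower playback means each
    frame's timestamp is pushed later by a factor of 1/speed."""
    if speed <= 0:
        raise ValueError("speed must be positive")
    return [t / speed for t in timestamps]

# Four frames originally shown at t = 0, 1, 2, 3 seconds now
# span 4 seconds instead of 3; if the audio is stretched the
# same way without pitch correction, its pitch also drops by
# the same factor, which is what makes speech sound slurred.
stretched = slow_down([0.0, 1.0, 2.0, 3.0])
```
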

Update 3/29, 9:00 AM: Added statement from LinkedIn.