Advancements in deepfake technology are quickly making your face just another piece of personal data you need to worry about getting stolen. One iPhone app that recently went viral makes creating a fake video as easy as taking a selfie.
The Chinese face-swapping app Zao exploded in popularity over the weekend, shooting to the top of China’s iOS store, where it hasn’t budged from its number one spot since. Not bad for an app that had only been out for three days.
It’s the work of a subsidiary of Momo Inc., the company behind a popular Chinese dating and livestreaming app of the same name, Bloomberg reported. By dropping a single photo into Zao, users can see themselves star in hundreds of movies, TV shows, and musical performances in seconds. Not just any scene, mind you; users choose from a selection of pre-determined clips, and the result is downright impressive for a deepfake, a.k.a. a realistic-looking fake video generated by machine learning. So, for instance, think you make a better Leonardo DiCaprio than Leonardo DiCaprio? Bet.
Some of these fake videos built on a single image could pass as real, a huge step forward given that the technology previously required hundreds of images for the AI to reference before it could convincingly reproduce a human. But other videos still look off in that uncanny valley kind of way. To craft an even more realistic deepfake in Zao, users can provide a series of photos of themselves following on-screen prompts asking them to blink, open their mouths, etc., and the results look pretty damn seamless. Zao has a built-in meme creator as well, no doubt a contributor to why its videos got plastered all over social media.
Another face-modification app, FaceApp, had a similar success story earlier this year, going viral globally with its gender-swapping filter and again with its instant-aging filter. And, just like FaceApp, as Zao’s popularity soared it sowed privacy concerns and outrage in its wake. Once users got over the novelty of seeing themselves as Wolverine or Jon Snow, the implications of the technology powering the app set in, which proved particularly terrifying given its terms of service upon release.
Before a recent revision, Zao’s user agreement included the eerily dystopian provision that it had “free, irrevocable, permanent, transferable, and relicense-able” rights to whatever its users uploaded, according to a Bloomberg report. This news was, understandably, not well received. Users pissed off about how their data was being used, criticizing it as an invasion of privacy, flooded Zao with one-star reviews and thousands of negative comments over the weekend.
In response, the company quickly implemented a TOS update that threw out the controversial language. Now Zao’s user agreement says anything someone uploads will only be used to improve the company’s tech unless that user gives permission for other purposes. The app’s rating in the iOS store has yet to recover from the damage, though; it currently sits at 2.9 stars out of five.
“We understand the concern about privacy,” a company statement reads per Bloomberg. “We’ve received the feedback, and will fix the issues that we didn’t take into consideration, which will need a bit of time.”
Zao’s rapid rise and not-quite-fall (the privacy controversy wasn’t enough to push it down the app store charts) reflects the concerns some consumers and lawmakers have about how far deepfake technology has progressed in an unnervingly short time.
Back when that video of a lip-synced Barack Obama was making the rounds, fake videos seemed like a concern confined to public figures or celebrities, folks with hundreds of photos available to reference. As the technology has become more sophisticated, though, that required number of images has fallen dramatically. Meanwhile, regulatory legislation has not kept pace nationwide. While making silly clips and seeing what you look like as a granny may not pose a threat, the possibility that someone could exploit this technology for more sinister ends does. Especially when America has a presidential election coming up next year.