If you were on Twitter yesterday, you may have seen the photo of Senator Cory Booker, a hair’s breadth from Senator Sheldon Whitehouse’s mouth, making the rounds thanks in part to Republican Senator John Cornyn of Texas, who tweeted: “Masks?” The implication was that the image was taken at Amy Coney Barrett’s confirmation hearing, but as many were quick to point out, it was taken by an Associated Press photojournalist in 2018.
Miscaptioned photos of Democrats without masks have been popular lately, but out-of-context images spread in virtually every news cycle. There were no dolphins in the Venice canals; these troops were not participating in D-Day; these bikers were not in Tulsa; these techno fans were not anti-maskers; these firefighters were not in California; these highway sharks were not from hurricanes. Just peruse Snopes’s “miscaptioned” archives to get the idea. Now, a tech company is proposing a way to retrofit the internet to show us photos’ provenance.
The photo verification company Truepic has partnered with chip manufacturer Qualcomm—which provides chips for many Android devices—to implement a “secure” mode inside a smartphone’s native camera app that adds its own date, time, and location tags. That process solves the problem of easily spoofable metadata, which the camera app typically pulls from the device’s settings rather than from some external form of verification. Truepic’s tool, though accessed through the camera app, bypasses the app’s usual processing and gets pixel data directly from the camera’s sensor (so you know if a photo’s unedited); the location and time tags come via GPS and a government-maintained atomic clock, respectively.
In other words, this isn’t a platform for debunking preexisting photos and deepfakes; it’s a proposal for how future smartphone photos could carry some kind of built-in verification indicator. That might not be as pie-in-the-sky as it sounds, since platforms are desperate for quick verification tech.
As studies have shown, there’s not much you can do about the salivating public’s desire for fake or mislabeled photos. A 2018 study by MIT researchers found that misinformation spreads up to 100 times farther and six times faster than truth, and that political falsehoods spread three times faster than other misinformation. And a 2018 National Bureau of Economic Research study found that information on the 2016 U.S. presidential election was absorbed within 50 to 70 minutes, often forcing fact-checkers and journalists into an impossible race against time. As of 2014, Twitter reported that tweets with photos got a 35% bump in retweets.
The conventional proposed solution to misleadingly captioned or fake photos has been detection, which, Truepic notes, hasn’t been going so well. “Every time you build a new algorithm for detection, you are automatically making the A.I. that generates a fake image or video more sophisticated,” Sherif Hanna, vice president of research and development at Truepic, told Gizmodo via video conference. “It’s an unwinnable arms race.” Hanna pointed out that Facebook recently held a competition with a $1 million award for deepfake recognition tools, and the winner’s model only detected deepfakes with 65.18% accuracy.
Truepic is using the same hardware-level security features in Qualcomm’s chips that already protect fingerprints and digital payments. (For this reason, the company isn’t yet able to make this work within Apple’s walled garden, which doesn’t allow devs to tinker with base-level settings.) “We’re getting it directly from the camera sensors, securely,” Hanna said. “And then creating that fingerprint, a digital signature that protects the image.”
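The signing step Hanna describes can be sketched in miniature. This is a hypothetical illustration, not Truepic’s actual scheme: it hashes the raw pixel bytes together with the capture tags and signs the result, so any later edit to the pixels or the metadata invalidates the signature. (A real implementation would use an asymmetric key pair held in the chip’s secure hardware; here an HMAC with a stand-in secret key plays that role.)

```python
import hashlib
import hmac
import json

# Stand-in for a key provisioned inside the chip's secure hardware (hypothetical).
DEVICE_SECRET_KEY = b"secure-enclave-key"

def sign_capture(pixel_bytes: bytes, metadata: dict) -> str:
    """Bind pixel data and capture metadata (time, GPS) into one signature."""
    payload = pixel_bytes + json.dumps(metadata, sort_keys=True).encode()
    return hmac.new(DEVICE_SECRET_KEY, payload, hashlib.sha256).hexdigest()

def verify_capture(pixel_bytes: bytes, metadata: dict, signature: str) -> bool:
    """Recompute the signature; any edit to pixels or tags breaks the match."""
    return hmac.compare_digest(sign_capture(pixel_bytes, metadata), signature)

# Sign at capture time, then detect tampering later.
pixels = b"\x10\x20\x30"  # raw sensor data in a real device
tags = {"time": "2020-10-14T09:31:07Z", "lat": 32.7157, "lon": -117.1611}
sig = sign_capture(pixels, tags)

print(verify_capture(pixels, tags, sig))                   # True
print(verify_capture(pixels, {**tags, "lat": 40.7}, sig))  # False: spoofed location
```

Because the pixels and tags are hashed together, a spoofed location or a retouched image both fail verification against the original signature, which is the property the “secure” mode is after.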
Everybody has to use the tool in order for this to work, but there are plenty of incentives for device manufacturers to add it, and for consumers to use it. Mounir Ibrahim, Truepic’s VP of strategic initiatives, told Gizmodo that the company has received interest not only from journalists but also from the fintech and insurance sectors, banks, construction companies, and the auto industry. He can imagine a future in which a checkmark might appear next to photos on your Airbnb listing, dating app profile, or Amazon store page. Ibrahim believes that image verification tools might be nearly as integral to conducting business as email is.
It’s unclear what a Truepic-verified photo might look like to an Airbnb user trying to make sure a listing is real. And the company will have to design the verification display carefully, since a cryptographic signature can’t rule out old-fashioned tricks: a photo of a photo, a cardboard cutout, maybe. “One of the things that we don’t want to do, for example, is to put a green checkmark right on the photo versus a red X,” Hanna told Gizmodo. “We don’t want to give people the automatic carte blanche to say, oh, OK. There’s a green checkmark, so everything that’s in the photos is absolutely real. The scene in front of it may have been staged. So we have to create the sign carefully.”
Hanna and Ibrahim predict that “secure” mode might be commercially available in some devices in 12 to 18 months. But they acknowledge that widespread implementation, updating platforms so that verification can be displayed, is a much longer project. “Web browsers would have to be updated, gallery apps have to be updated, et cetera,” Hanna said. “It will take work. We expect this to be a five, ten year journey before it is completely widespread.”