Researchers Want to Build Fake Photo Detection Tools Right Into Our Cameras


There are ways in which digital forensics experts can identify whether an image has been manipulated, but there's certainly room for improvement, especially at a time when faking images is only getting easier, more realistic, and, most insidiously, more coordinated. And researchers see machine learning, the very technology being weaponized to create fake images, as a way to detect them.

Researchers at New York University's Tandon School of Engineering published a study, "Neural Imaging Pipelines - the Scourge or Hope of Forensics?," that proposes using machine learning to spot fake photos. The researchers suggest that a detection method be baked right into the source of the fake: the camera. They detail a method in which a neural network replaces the photo development process so that the original image is marked with something like a digital watermark, indicating the photo's provenance in a digital forensics analysis. In other words, the process identifies a photo's origin and whether it has been manipulated since its original state.


Provenance—the chain of custody of content—is a crucial piece of information to determine whether content is authentic or has been tampered with in some way. Digital archivists, for example, rely in part on tools and tech, as well as metadata, in order to understand the context of a piece of work. To determine whether something has been altered, you have to go back to the source, and the NYU researchers believe this can be done with a neural network.

“People are still not thinking about security—you have to go close to the source where the image is captured,” Nasir Memon, one of the researchers on the study, told Wired. “So what we’re doing in this work is we are creating an image which is forensics-friendly, which will allow better forensic analysis than a typical image. It’s a proactive approach rather than just creating images for their visual quality and then hoping that forensics techniques work after the fact.”

According to the study, a neural imaging pipeline “learns to introduce carefully crafted artifacts” onto a high-fidelity image when it is being processed within a digital camera, and the researchers claim that the technique increased image manipulation detection accuracy from about 45 percent to over 90 percent. In an example in the study, the machine learning model learns how to identify whether an image is directly from the camera (authentic) or has been “affected by a certain post-processing operation” (fake).
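The learned artifacts described in the study behave like a fragile watermark: intact if the image comes straight off the camera, destroyed by post-processing. As a loose illustration of that idea only (the NYU pipeline learns its artifacts end-to-end with a neural network; the fixed least-significant-bit pattern, the `develop` and `authentic_score` helpers, and the keyed generator below are all hypothetical stand-ins), a toy "forensics-friendly" development step might look like this:

```python
import numpy as np

def develop(raw, key=42):
    """Toy development step: overwrite each pixel's least-significant
    bit with a keyed pseudorandom pattern. (Illustration only; the
    paper's artifacts are learned, not a fixed watermark.)"""
    rng = np.random.default_rng(key)
    mark = rng.integers(0, 2, size=raw.shape, dtype=np.uint8)
    return (raw & 0xFE) | mark

def authentic_score(img, key=42):
    """Fraction of pixels whose LSB still matches the keyed pattern."""
    rng = np.random.default_rng(key)
    mark = rng.integers(0, 2, size=img.shape, dtype=np.uint8)
    return float(np.mean((img & 1) == mark))

# A stand-in for a raw sensor capture, then "development" in-camera.
raw = np.random.default_rng(0).integers(0, 256, size=(64, 64), dtype=np.uint8)
photo = develop(raw)

# A simple post-processing edit (rescaling pixel intensities)
# scrambles the embedded bit pattern.
tampered = (photo.astype(float) * 0.9).astype(np.uint8)

print(authentic_score(photo))     # ~1.0: pattern intact, image authentic
print(authentic_score(tampered))  # ~0.5: pattern destroyed by the edit
```

An unedited image scores near 1.0 while almost any pixel-level manipulation drives the score toward chance, which is the intuition behind "carefully crafted artifacts" that make later forensic analysis easier.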

“With solid understanding of neural imaging pipelines, and a rare opportunity of replacing the well-established and security oblivious technology, we have a chance to significantly improve photo authentication capabilities in next-generation devices,” the researchers wrote in the study.


There are some limitations to the widespread adoption of this type of technique. For starters, it’s contingent on camera designers—be it digital camera manufacturers or smartphone makers—employing photo development processes using machine learning, equipped with these types of neural networks. And in its current state, this system doesn’t apply to fake videos—though that kind of application is theoretically possible.

“A lot of the research interest is in developing techniques to use machine learning to detect if something is real or fake,” Memon told Wired. “That’s definitely something that needs to be done, we need to develop techniques to detect fake and real images, but it’s also a cat and mouse game. Many of the techniques that you develop will eventually be circumvented by reasonably well-equipped, reasonably smart adversaries.”


It’s also important to note that while this method is a technically impressive example of how fraud-detecting tech can advance, it isn’t a tool developed for the general public. For meticulous forensic analysis, it’s great. For grappling with our increasingly loose grip on what’s real and what’s not, it’s a technique that will likely have little to no effect.


About the author

Melanie Ehrenkranz

Reporter at Gizmodo
