Privacy is an obvious concern now that everything from smartphones to smartwatches to even smart glasses has built-in cameras. Banning covert cameras is never going to happen, and digitally altering images for privacy reasons is a real pain. So, researchers at UCLA are instead working on a radical new kind of camera that can selectively capture or ignore specific objects in frame before they’re even recorded.
If you’ve ever seen an investigative news show protect the identity of a source by blurring or pixelating their facial features, then you’re already familiar with one of the many methods used to preserve privacy. Other approaches include encrypting sensitive media, or more advanced processing techniques that digitally erase part of a photo using tools like Photoshop. There are also automated algorithms, which services like Google Maps use to blur faces and license plates in billions of photos.
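Pixelation of the kind described above is conceptually simple: average the image over coarse tiles, then scale those averages back up. Here is a minimal sketch in Python with NumPy (the function name and block size are illustrative, not from any particular tool):

```python
import numpy as np

def pixelate(image: np.ndarray, block: int = 8) -> np.ndarray:
    """Pixelate a grayscale image by averaging over block x block tiles."""
    h, w = image.shape
    # Trim so the image divides evenly into blocks.
    h2, w2 = h - h % block, w - w % block
    img = image[:h2, :w2].astype(float)
    # Average each tile, then blow each average back up to block size.
    tiles = img.reshape(h2 // block, block, w2 // block, block).mean(axis=(1, 3))
    return np.kron(tiles, np.ones((block, block)))

# Stand-in for a face crop; any 2-D grayscale array works.
face = np.arange(64 * 64, dtype=float).reshape(64, 64)
blurred = pixelate(face, block=8)
```

Note that because the averages are computed from the original pixels, approaches like this only obscure data that still exists somewhere upstream, which is exactly the weakness the UCLA work targets.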
Those are all post-processing methods, however, which happen after a digital image has been captured and stored. The original unprocessed images potentially containing private data still exist and could still be exposed—something we’ve seen happen time and time again—which is why the UCLA researchers wanted to address privacy concerns at the source: when light enters a camera, but before it hits the image sensor.
Camera makers could potentially release firmware updates with AI-powered tools that, for instance, could be used to selectively erase specific people from a photo. But that requires a level of processing power even a high-end digital camera may not have, so the UCLA researchers addressed the problem optically, through a technique they call “diffractive computing,” as detailed in a recently published paper.
Even if you’re well versed in photography, this camera takes a radically different approach to capturing images. The researchers started with a desired object they wanted to be recorded—in this case, a couple of very simple black-and-white, handwritten number twos—and used it to train a deep learning-based design tool. That tool generates a series of diffractive layers that can be 3D-printed and assembled in series to create a “computational imager,” which sits in front of an “output plane” where the final image is captured.
Each layer features tens of thousands of microscopic diffractive features that are specifically designed to allow light from the desired objects to pass through unaffected, while light from other objects is diffracted and optically erased into nonsensical, low-intensity patterns that look like random noise. This means the image that’s actually captured can’t be reverse-engineered to reconstruct what was removed.
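The physics behind a single diffractive layer can be sketched numerically: a layer acts as a phase mask on the incoming light field, and the field then propagates through free space to the next plane, which is what the standard angular-spectrum method computes. The sketch below uses a random phase mask as a stand-in for one of the deep-learning-designed layers in the paper; the pixel pitch, wavelength, and propagation distance are illustrative assumptions, not values from the UCLA work:

```python
import numpy as np

def angular_spectrum(field, dx, wavelength, z):
    """Propagate a complex optical field a distance z through free space."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)          # spatial frequencies (cycles/m)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))  # clamp evanescent terms
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * z))

n = 128
dx = 1e-6            # 1 micron sampling (assumption)
wavelength = 750e-9  # red light (assumption)
rng = np.random.default_rng(0)

# Input field: a bright square "object" on a dark background.
field = np.zeros((n, n), dtype=complex)
field[48:80, 48:80] = 1.0

# One diffractive layer. A trained design would place specific phase
# delays here; random phases stand in for an untrained layer.
phase_mask = np.exp(1j * 2 * np.pi * rng.random((n, n)))

out = angular_spectrum(field * phase_mask, dx, wavelength, 200e-6)
intensity = np.abs(out) ** 2
```

In the actual system, several such layers are cascaded, and the training procedure chooses the phase features so that only light matching the target objects arrives at the output plane as a recognizable image, which is why the computation happens passively, at the speed of light.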
As you can probably imagine, the practical applications for this radically different approach to photography are incredibly limited at the moment. You’re not going to see a ‘don’t capture Uncle Bill’ feature added to the iPhone’s camera app any time soon. But the research offers some impressive benefits over current techniques. Not only does the ‘image processing’ literally happen at the speed of light, since it’s entirely optical and analog, but the design of the diffractive layers could also introduce optical encryption, hiding details in a photo that can only be revealed using a decryption key that shows how the original image can be recovered.