Many scenes you point a camera at are doomed to result in a crappy picture. Either the background is blown out, or the foreground is too dark. It’s a limitation of every camera sensor. MIT is working on technology that captures light in a new way, eliminating this problem completely.
Currently, a camera sensor can only record so wide a range of lights and darks, which is why high-contrast scenes are such a pain to take a good picture of. MIT’s camera, dubbed Modulo, is composed of pixels that can read the light hitting them, then reset themselves and keep taking readings to cope with the excess light in the bright areas of a scene. Those multiple readings are then processed and turned into an image where detail is preserved in both the brightest and darkest areas.
The result is similar to HDR photography, but it doesn’t require shooting multiple exposures at different moments and then combining them afterward. The Modulo seems to rely on pixels that measure multiple levels of light radiance throughout a scene, re-interpolating the data into a recognizable image. It’s hard to say how practical or effective this technology could be in real-world photography. It’s still experimental, and we have no idea if it could result in any kind of consumer application.
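MIT hasn’t published its exact reconstruction code here, but the basic idea behind a modulo sensor — pixels that wrap around instead of saturating, with software later "unwrapping" the readings — can be sketched in a few lines of Python. Everything below (the `SENSOR_MAX` capacity, the sample scene values, the neighbor-based unwrapping rule) is a hypothetical illustration, not MIT’s actual method:

```python
# Hypothetical sketch of modulo capture and unwrapping -- not MIT's actual code.
SENSOR_MAX = 256  # assumed pixel well capacity, in arbitrary units

def modulo_capture(scene):
    # A modulo pixel resets every time it fills up, keeping only the remainder,
    # so very bright spots never clip -- they just wrap around.
    return [value % SENSOR_MAX for value in scene]

def unwrap(readings):
    # Recover the scene by assuming neighboring pixels differ by less than
    # half the sensor range (the same trick used in phase unwrapping).
    recovered = [readings[0]]
    wraps = 0
    for prev, cur in zip(readings, readings[1:]):
        diff = cur - prev
        if diff < -SENSOR_MAX / 2:
            wraps += 1   # a big drop means the pixel wrapped upward
        elif diff > SENSOR_MAX / 2:
            wraps -= 1   # a big jump means a wrap was undone
        recovered.append(cur + wraps * SENSOR_MAX)
    return recovered

# A made-up 1D "scene" whose brightness exceeds the sensor's range:
scene = [100, 200, 300, 400, 500, 560, 480, 380, 260, 140]
captured = modulo_capture(scene)   # wrapped values, all under SENSOR_MAX
print(unwrap(captured))            # recovers the original scene values
```

The point of the sketch is just the shape of the trick: the raw readings all fit in the sensor’s limited range, yet the full dynamic range of the scene is recoverable afterward, as long as brightness changes smoothly between neighboring pixels.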
Yet it’s always intriguing to see steps being made toward tackling such a fundamental limitation of cameras. Knowing the rules of exposure and harnessing the camera’s controls to make a pleasing image is one of the foundations of photography. To imagine a world of Modulo cameras, where exposure is no longer an issue, is pretty crazy.
Contact the author at mhession@gizmodo.com.