New Camera Sensor Captures Images and Depth Data At the Same Time


Samsung has developed what they're touting as the world's first sensor that can capture both an RGB and range (or depth) image at the same time, granting Kinect-like gesture recognition capabilities to a host of devices.


The new CMOS sensor uses an array of red, green, and blue pixels that sit alongside depth-recording z-pixels, which are four times as large. Because the z-pixels take up so much of the array, the sensor can only capture images at a resolution of 1,920 x 720, while the depth map is limited to just 480 x 360.
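At those resolutions, each depth reading covers roughly a 4 x 2 block of RGB pixels, so the two streams have to be aligned before they can be used together. Here's a minimal sketch of that alignment in Python, assuming nearest-neighbor upsampling; the resolutions come from the article, but everything else (the placeholder arrays, the upsampling method) is illustrative, not Samsung's actual pipeline.

    import numpy as np

    RGB_W, RGB_H = 1920, 720       # RGB resolution reported in the article
    DEPTH_W, DEPTH_H = 480, 360    # depth-map resolution reported in the article

    rgb = np.zeros((RGB_H, RGB_W, 3), dtype=np.uint8)            # placeholder RGB frame
    depth = np.random.rand(DEPTH_H, DEPTH_W).astype(np.float32)  # placeholder depth, meters

    # Nearest-neighbor upsample: stretch each depth pixel over the
    # 4 x 2 block of RGB pixels it covers at these resolutions.
    depth_up = np.repeat(np.repeat(depth, RGB_H // DEPTH_H, axis=0),
                         RGB_W // DEPTH_W, axis=1)

    assert depth_up.shape == rgb.shape[:2]   # one depth value per RGB pixel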

But the prospect of having the capabilities of the Kinect's sensor bar inside a camera, possibly even inside your phone, is pretty cool. Instead of only operating a smartphone through its touchscreen display, its front-facing camera could also track your hand movements in 3D space, recognizing complex gestures. And speaking of 3D, the extra depth data could be used to fake 3D images or even automatically replace the background behind you like a green screen, if you didn't want people to know you were FaceTiming them from the bathroom. [Tech-On!]
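As a rough illustration of that green-screen idea: once you have a depth value for every pixel, background replacement comes down to a threshold and a mask. The cutoff distance, the placeholder frames, and the compositing below are all assumptions for the sketch, not anything described in the article.

    import numpy as np

    RGB_H, RGB_W = 720, 1920
    rgb = np.zeros((RGB_H, RGB_W, 3), dtype=np.uint8)        # placeholder camera frame
    depth = np.full((RGB_H, RGB_W), 2.0, dtype=np.float32)   # placeholder per-pixel depth, meters
    backdrop = np.full_like(rgb, 255)                        # replacement background (plain white)

    THRESHOLD_M = 1.0                  # assumed cutoff: anything beyond 1 m is "background"
    mask = depth < THRESHOLD_M         # True where the subject is
    composite = np.where(mask[..., None], rgb, backdrop)     # keep the subject, swap the rest

A real implementation would smooth the mask edges, but the thresholding itself really is this simple.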


DISCUSSION

knyghtryda

I would like to know how they're managing to compute the distance. From my reading of the article and some basic googling, they're using a TOF (time-of-flight) camera setup, which basically means it's a camera that acts like a LIDAR. A pulse is sent out, and the distance is computed from how long it takes for the light to come back. Does that mean that in order for this to work there's going to need to be an IR emitter? That severely limits the range on these things if put into a small setup like a phone. Then again, the article was short on details, so maybe they're using a novel method that isn't quite TOF and isn't quite structured light (what the Kinect uses). Verrrrry interesting stuff.
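For what it's worth, the time-of-flight math the commenter describes is straightforward: the light covers the distance twice (out and back), so distance = c x t / 2. A quick sketch in Python; the 5 ns round trip is just an example figure, not from the article.

    # Time-of-flight: light travels to the target and back, so the
    # distance is half the round trip at the speed of light.
    C = 299_792_458.0   # speed of light, m/s

    def tof_distance(round_trip_s: float) -> float:
        """Distance to the target from a measured round-trip time."""
        return C * round_trip_s / 2

    print(tof_distance(5e-9))   # ~0.75 m for a 5 ns round trip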