MIT Is Making Better Glasses-Free 3D

The Nintendo 3DS is pretty great. A solid enough 3D screen with no goofy glasses. We need more stuff like that if 3D is going to take over our living rooms. And MIT agrees, but they think they've got an even better way to push out glasses-free 3D.


They're calling it HR3D, and it could double the battery life of devices without compromising screen brightness or resolution. Also, wider viewing angles and multiple perspectives! You can share the 3D joy! Sounds great, but how are they doing it?

The MIT researchers' HR3D system uses two layers of liquid-crystal displays. But instead of displaying vertical bands, as the 3DS does, or pinholes, as a multiperspective parallax-barrier system would, the top LCD displays a pattern customized to the image beneath it.
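For comparison, the conventional two-view scheme the 3DS uses can be sketched in a few lines: slice the left-eye and right-eye images into alternating vertical columns, so the fixed barrier in front routes each set of columns to the correct eye. This is an illustrative toy (single-pixel columns and the even/odd eye assignment are assumptions), not the 3DS's actual pipeline:

```python
import numpy as np

def interleave_parallax(left, right):
    """Slice two views into alternating vertical columns, the way a
    classic fixed two-view parallax-barrier display does.
    Illustrative sketch only; real hardware may use wider bands."""
    assert left.shape == right.shape
    out = np.empty_like(left)
    out[:, 0::2] = left[:, 0::2]    # even columns -> left eye
    out[:, 1::2] = right[:, 1::2]   # odd columns  -> right eye
    return out
```

The fixed barrier is what costs brightness: at any moment each eye sees only half the columns, so the backlight has to work roughly twice as hard.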

Going into the project, the researchers had no idea what the customized pattern would look like. But once they'd done the math, they found that the ideal pattern ends up looking a lot like the source image. Instead of consisting of a few big, vertical slits, the parallax barrier consists of thousands of tiny slits, whose orientations follow the contours of the objects in the image.
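The post doesn't spell out the math, but the research behind HR3D treats the two LCD layers as a non-negative factorization problem: find a front mask and a back image whose product best reproduces the target views. As a loose, heavily simplified sketch of that idea only (a rank-1 toy with made-up names, nowhere near the real HR3D solver), standard multiplicative NMF updates look like this:

```python
import numpy as np

def rank1_masks(target, iters=200, eps=1e-9):
    """Toy rank-1 non-negative factorization: approximate a target
    light-field matrix as the outer product of a front-layer mask f
    and a back-layer image g (target ~ f g^T), using Lee-Seung-style
    multiplicative updates. A sketch of the general idea only."""
    m, n = target.shape
    f = np.random.default_rng(0).random(m) + eps  # front layer (non-negative)
    g = np.random.default_rng(1).random(n) + eps  # back layer  (non-negative)
    for _ in range(iters):
        f *= (target @ g) / (f * (g @ g) + eps)    # update front mask
        g *= (target.T @ f) / (g * (f @ f) + eps)  # update back image
    return f, g
```

The multiplicative form keeps both layers non-negative, which matters physically: an LCD can only attenuate light, never emit negative light. It also hints at why the computation is heavy, since the masks have to be re-optimized for every frame.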

So they're not using the 3DS's parallax barrier (two versions of the same image, one for the right eye and one for the left, sliced into vertical segments) but a barrier that changes with the image, a more versatile method. Right now, though, it requires a lot of computational power to spit out the perfect barrier for each image. If they can simplify it, maybe we'll have legitimate glasses-free 3DTVs that work for everyone. [MIT via Pop Sci]


So does anyone have any clue whatsoever as to how this "magic number crunching pattern" works? There's this lengthy discussion in the article about how the 3DS works, but other than their technique using an LCD screen, no mention of what it's doing that's so different.

Do we still need two images rendered, or does it have to know the depth information for every pixel? How would it handle occlusion in that case? Does the barrier LCD need to be at a higher resolution than the source image to display this magic pattern? And what are the real-world limits of the device: how many people can view it comfortably, and at what angles and rotations?

How does the device handle situations where the image is split horizontally (like most movies) but the device is being held at, say, a vertical rotation? Does it correct for these situations somehow? Would a device rendering 3D in real time be able to rely on the separation to correct its display accordingly, or does the separation provided by the barrier only work for a "wider range" of rotations and not all rotations?

These are questions that, as, say, an investor, I would expect these fine gentlemen to be asked now that they've gone public with their idea. We need more info about how the product works to gauge its applications and limitations.