Viewing 3D content without glasses or goggles has proved to be one of the toughest things for interface designers to achieve—it never really looks right. At this year’s SIGGRAPH, a group of researchers presented a display that creates a 3D human in stunning detail using a cluster of 216 projectors.
A team from USC’s Institute for Creative Technologies has built an automultiscopic 3D display that essentially creates a video-based 3D model of a person. After capturing video of the subject with 30 cameras under intensely bright light, the footage is divided among the 216 projectors. The projectors are arranged in a semicircle around a large screen, so as viewers walk around it their eyes smoothly transition from one projection to the next. The result feels like crystal-clear depth and detail.
Because it’s so realistic, the tech is being used to create full-scale “digital humans” that could be deployed in a museum or educational context. Speech recognition cues up answers to questions, so the experience feels interactive even though it’s not. And because the humans are so lifelike, you feel as if the person is actually making eye contact and listening closely as you talk to them.
When I saw this in action at SIGGRAPH, it was playing the engrossing memories of a Holocaust survivor. Unlike so many other attempts at holograms or innovative VR experiences that let you “talk” to people, this one felt the most real. And true to the description, as you walked from one side of the screen to the other, you could see new details in his face and clothing.
Of course, the stories this digital human was telling were particularly captivating, but I also think the technology itself was especially engaging. Dozens of people crowded around this man to hear his tales, and it felt as if he were right there in front of us. The fact that we weren’t wearing goggles and stumbling around to see it made the experience that much more poignant.