Simply having someone wave a smartphone back and forth in front of themselves while recording a video is a much easier way to capture stills from multiple angles, but the process takes several seconds to complete, which means the subject is inevitably moving the whole time, despite their best efforts to hold still. To solve this, the research team developed a new method they call Deformable Neural Radiance Fields (D-NeRF, for short). It compares frames to determine how much the subject has moved between them, then automatically calculates the deformations needed so that the imperfect two-dimensional image data can be corrected and still used to build an accurate, interactive 3D model.
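
For the curious, here is a minimal toy sketch in JAX of the general idea behind a deformable radiance field, not the team's actual implementation: a small network warps each 3D point observed in a given frame back into a shared "canonical" pose, where a second network (the template) predicts color and density. All of the names, layer sizes, and the per-frame latent code below are illustrative assumptions, not details from the paper.

```python
import jax
import jax.numpy as jnp

def init_mlp(key, sizes):
    """Random weights for a plain fully connected network (toy initialization)."""
    params = []
    for k, (n_in, n_out) in zip(jax.random.split(key, len(sizes) - 1),
                                zip(sizes[:-1], sizes[1:])):
        w = jax.random.normal(k, (n_in, n_out)) * 0.1
        b = jnp.zeros(n_out)
        params.append((w, b))
    return params

def mlp(params, x):
    for w, b in params[:-1]:
        x = jax.nn.relu(x @ w + b)
    w, b = params[-1]
    return x @ w + b

key = jax.random.PRNGKey(0)
k1, k2 = jax.random.split(key)

# Deformation field: (3D point, per-frame latent code) -> 3D offset.
deform_params = init_mlp(k1, [3 + 8, 64, 64, 3])
# Canonical template: 3D point -> (RGB color, density).
template_params = init_mlp(k2, [3, 64, 64, 4])

def render_point(point, frame_code):
    """Warp an observed point into the canonical frame, then query the template."""
    offset = mlp(deform_params, jnp.concatenate([point, frame_code]))
    canonical_point = point + offset   # undo the subject's motion for this frame
    out = mlp(template_params, canonical_point)
    rgb = jax.nn.sigmoid(out[:3])      # color constrained to [0, 1]
    density = jax.nn.softplus(out[3])  # non-negative opacity
    return rgb, density

# Query one sample point as seen in a hypothetical frame of the capture:
frame_code = jax.random.normal(jax.random.PRNGKey(5), (8,))
rgb, density = render_point(jnp.array([0.1, -0.2, 0.5]), frame_code)
print(rgb, density)
```

In a real system, both networks would be trained jointly against the captured video frames; the key design choice the sketch illustrates is that the subject's motion is absorbed by the deformation network, so the template only ever has to model one static version of the scene.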

One day, assuming nerfies actually catch on, someone looking at a photo of a fancy meal shared on Instagram could pan around and examine the restaurant itself. Or, if an amateur fashionista shared a nerfie of themselves trying on a new top, others could adjust the camera's position to see the matching pants that went with it. It's a technology that could provide an entirely new perspective on social media, but given how many of us take video calls while secretly wearing pajama bottoms out of frame, nerfies might also offer a look at our lives that's a little too invasive to be comfortable.