These Nightmare Videos Are Generated From Still Baby Photos by a Neural Network

Is that a baby or the blob? It’s actually just the sick and twisted result of a neural network predicting what a still photo of a baby would look like if it were moving. Researchers at MIT have published demonstrations of their work on generative video, and the “hallucinated” outcomes are both impressive and repulsive.

The researchers’ model is based on teaching a neural network to differentiate between foreground and background. The neural network then fills in the necessary blanks to create just one second of video, seen here as looping gifs.
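To get a feel for the foreground/background idea, here is a minimal sketch of the final compositing step. Everything in it is an assumption for illustration: the clip shapes, the random arrays standing in for the network's outputs, and the simple mask-blend formula are hypothetical stand-ins, not the researchers' actual implementation.

```python
import numpy as np

# Hypothetical scale: a 1-second clip of 32 frames at 64x64 RGB.
T, H, W, C = 32, 64, 64, 3

rng = np.random.default_rng(0)

# Stand-ins for what the model's two streams might produce:
# a moving foreground clip, one static background frame, and a
# per-pixel mask saying where the foreground should show through.
foreground = rng.random((T, H, W, C))  # (time, height, width, channels)
background = rng.random((H, W, C))     # a single static frame
mask = rng.random((T, H, W, 1))        # blend weights in [0, 1]

# Composite: each frame blends the foreground over the static
# background (broadcast across time) according to the mask.
video = mask * foreground + (1 - mask) * background

print(video.shape)  # (32, 64, 64, 3)
```

The static background is a single frame reused for every time step, which is why the generated motion is confined to the masked foreground region.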

“Beach” photos processed by MIT’s neural network. Gif: MIT/Prosthetic Knowledge

Convincingly rendering humans is one of the emerging technology’s toughest feats, so it makes sense that babies in “motion” come out so disastrously wrong. You can learn about the technical process and see hundreds of examples of generative “Beach,” “Golf,” “Train Station” and “Baby” here.

[Prosthetic Knowledge]
