
Google May Have Used Your Mannequin Challenge Videos to Train AI

Photo: Getty Images

Remember back in 2016 when everybody was posting those Mannequin Challenge videos? Well, it turns out that instead of collecting dust in Ye Olde Meme Archive, Google researchers are using the videos to help train robots to better navigate their surroundings.


While humans are naturally able to look at a 2D video and understand it was filmed in a 3D space, robots aren’t so good at that yet. That’s part of the reason why robots struggle to autonomously navigate new areas, and it’s also a challenge when it comes to building self-driving cars.

Turns out, the Mannequin Challenge presented the perfect data set for teaching robots how to perceive depth in a 2D image. If you happened to live under a rock in 2016, the challenge involved a group of people freezing in place—often in dynamic poses—while the person recording moved around capturing the scene from multiple angles.


Of the countless videos uploaded to YouTube, the researchers selected 2,000 of them. They then filtered out clips unsuitable for training—say, ones where someone unfroze, that were shot with fisheye lenses, or that had synthetic backgrounds that could lead to borked results. The final data set was then used to train a neural network that could predict the depth of a moving object in a video. According to the paper’s conclusion, accuracy was much higher using this method than previous state-of-the-art methods.
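For the curious, that filtering step boils down to simple rule-based culling. Here’s a minimal sketch in Python of what that kind of pipeline might look like—the metadata fields (`subject_moved`, `fisheye_lens`, `synthetic_background`) are hypothetical stand-ins, not the researchers’ actual criteria or code:

```python
# Hypothetical sketch: cull clips that would poison depth-prediction training.
# Field names are illustrative assumptions, not from the actual data set.

def filter_clips(clips):
    """Keep only clips suitable for training a depth-prediction network."""
    return [
        clip for clip in clips
        if not clip.get("subject_moved")          # everyone stayed frozen
        and not clip.get("fisheye_lens")          # no distorted optics
        and not clip.get("synthetic_background")  # real scenes only
    ]

clips = [
    {"id": "a", "subject_moved": False, "fisheye_lens": False, "synthetic_background": False},
    {"id": "b", "subject_moved": True,  "fisheye_lens": False, "synthetic_background": False},
    {"id": "c", "subject_moved": False, "fisheye_lens": True,  "synthetic_background": False},
]
print([clip["id"] for clip in filter_clips(clips)])  # ['a']
```

The idea is that a clip failing any one check gets dropped entirely, since a single unfrozen person or warped lens undermines the "static scene, moving camera" assumption the depth training relies on.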

There are some limitations, however. The researchers noted that their method may not be quite so accurate when it comes to cars and shadows. However, they did make their data set public. So, how do you know if your particular Mannequin Challenge video was used in the set? Short answer is: You don’t.

According to MIT Technology Review, which initially reported on the study, AI researchers commonly scrape publicly available images to train bots. And the more advanced the models researchers use, the more data they need to train the neural networks. So if you upload a video to YouTube, and an AI researcher happens to think it helps teach a neural network how to better navigate, well, you uploaded your video and made it publicly available.


Microsoft recently deleted its MS Celeb database of 100,000 faces from the internet. Though it was supposedly limited to public figures, faces of private individuals also made their way into the set. Plus, while the set was intended to be used for academic purposes only, it has been used by private companies, including those in China working on facial recognition surveillance.

Sure, that can be a bit unsettling, but don’t let that stop you from sharing your pics and #livingyourbestlife. Just keep in mind there’s a chance that maybe Instagramming your pizza could also be teaching a machine how to cook.

