Lion fish. Petri dish. Dough. According to one very befuddled artificial neural network, all of these things can be found in the short intro to Star Trek: The Next Generation.

The animation isn’t exactly a photorealistic depiction of space—but it’s close enough that most of us can recognize its planets, spacecraft, and other details. Yet when Finnish YouTuber Ville-Matias Heikkilä asked an artificial neural network to describe what it “saw” in the intro, it responded with all the lucidity of a deeply drunk robot that had been locked in a closet since 1996. The Starship Enterprise? That’s a CD player. The Star Trek logo? A sea slug. The rings around a Saturn-style planet? Well, that’s a “hair slide,” of course.


So why, when we’ve seen so many examples of how intelligent neural networks can be, does this one fail so badly here? The answer is simple, but requires a basic knowledge of how artificial neural networks “learn” to see.

Each neural network is made up of layers of “neurons.” Each of these layers is responsible for deciphering different elements of an image, beginning with the basics and leading up to specifics about geometry and color. As we explained earlier this summer, a network learns to “see” when Google (or whoever built it) feeds it countless images to decipher. Here’s how Google’s team describes it:

Well, we train networks by simply showing them many examples of what we want them to learn, hoping they extract the essence of the matter at hand (e.g., a fork needs a handle and 2-4 tines), and learn to ignore what doesn’t matter (a fork can be any shape, size, color or orientation).
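To make that layered idea concrete, here’s a heavily simplified toy sketch of the kind of pipeline Google is describing: pixels go in, each layer transforms the previous layer’s output, and the last layer scores a fixed set of labels. Everything here is invented for illustration — the random weights, the tiny 16-pixel “image,” and the label list — a real network would have millions of trained weights, which is exactly why it can only name things it was trained on.

```python
import numpy as np

def relu(x):
    return np.maximum(0, x)  # a common neuron "firing" rule

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()       # turn raw scores into probabilities

# Toy 3-layer "vision" network: a flattened 4x4 grayscale image (16 pixels)
# flows through two hidden layers (low-level features, then shapes) before
# an output layer scores three made-up labels. Weights are random here;
# in a real network they come from training on countless example images.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 16)), np.zeros(8)   # layer 1: basic features
W2, b2 = rng.normal(size=(4, 8)), np.zeros(4)    # layer 2: higher-level features
W3, b3 = rng.normal(size=(3, 4)), np.zeros(3)    # output: one score per label

labels = ["fork", "sea slug", "CD player"]       # hypothetical label set

def classify(image):
    h1 = relu(W1 @ image + b1)
    h2 = relu(W2 @ h1 + b2)
    return softmax(W3 @ h2 + b3)

probs = classify(rng.random(16))
print(labels[int(np.argmax(probs))], probs)
```

The key point: whatever you feed it, the network can only answer with one of the labels it knows. Show it a starship and it will still pick the closest match from its training — say, “CD player.”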

Remember Deep Dream, the neural network algorithm that Google released publicly this summer? What it “saw” in existing images was based on the content it had been trained with—resulting in it “seeing” things that weren’t really there, like strange animals in the sky or architectural details where there certainly weren’t any. Those things were simply all it knew.
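The core trick behind that “seeing things” behavior can be sketched in a few lines — this is my own toy illustration, not Google’s actual code. Instead of adjusting the network, Deep Dream-style algorithms nudge the *image* so that some neuron’s activation grows, dragging the picture toward whatever pattern that neuron learned to detect (the single made-up neuron and step size below are assumptions for the sketch):

```python
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=16)      # weights of one "neuron" in a trained layer
image = rng.random(16)       # the input image we will gradually modify

def activation(img):
    return float(w @ img)    # the neuron's response to the image

before = activation(image)
for _ in range(50):
    # Gradient ascent on the image: d(activation)/d(image) = w,
    # so each step pushes the pixels toward the neuron's favorite pattern.
    image += 0.1 * w
after = activation(image)
print(before, "->", after)   # the activation grows with every step
```

If that neuron was trained on dogs, the sky starts sprouting dog faces — the network paints what it knows onto what it sees.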


That’s the case here too—ask a neural network to describe space without teaching it about space, and you’ll get some very interesting answers. Heikkilä’s video is worth watching a few times over. Happy hair sliding!

[Ville-Matias Heikkilä on YouTube; h/t Prosthetic Knowledge]
