When you look for shapes in the clouds, you’ll often find things you see every day: dogs, people, cars. It turns out that artificial “brains” do the same thing. Google calls this phenomenon “Inceptionism,” and it’s a stunning look at just how advanced artificial neural networks have become.
In a post yesterday on its blog, Google’s artificial neural networks research team explains how it builds the kind of advanced computer vision systems that can identify whether you’re looking at a picture of, say, an orange versus a banana. The entire post is genuinely gripping, but here’s a slightly shorter synopsis.
First, it helps to know a little bit about the structure of neural networks. Google succinctly explains how they’re made up of stacked layers of artificial neurons, sometimes as many as 30. When you run a photo through the network, the first layer detects low-level information, like the edges in the picture. The next layer might fill in some information about the shapes themselves, getting closer to figuring out what’s depicted. “The final few layers assemble those into complete interpretations—these neurons activate in response to very complex things such as entire buildings or trees,” Google’s engineers explain.
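To make that “first layer detects edges” idea concrete, here’s a minimal sketch, not Google’s code, of what one such layer does: it slides a small filter over the image and fires where the filter’s pattern appears. The Sobel kernel below is a classic hand-picked edge detector; in a real network these filters are learned from data rather than written by hand.

```python
import numpy as np

def convolve2d(image, kernel):
    """One 'layer' of feature detection: slide the kernel over the image
    and record how strongly each patch matches it. (Technically this is
    cross-correlation, as in most deep-learning libraries.)"""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A hand-picked filter that responds to vertical edges (Sobel).
sobel_x = np.array([[-1.0, 0.0, 1.0],
                    [-2.0, 0.0, 2.0],
                    [-1.0, 0.0, 1.0]])

# Toy image: dark left half, bright right half -- a single vertical edge.
image = np.zeros((8, 8))
image[:, 4:] = 1.0

edges = convolve2d(image, sobel_x)
# The response is zero over the flat regions and peaks right where
# the brightness jumps -- that's the "low-level information" a first
# layer passes up to the layers above it.
```

A deep network stacks dozens of these layers, so the patterns being detected grow from edges to textures to whole objects as you go up.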
Google “trains” each network by feeding it tons of images—sometimes focusing on a specific type of image, like trees or animals. The Google team found that these networks can even generate images of certain objects if they’re asked to:
So what would happen if you asked a single layer of the network to “enhance” the things it detects about a certain image? For example, if you asked the layer in charge of detecting edges in images to take that information and build on it? Some weird stuff starts to happen:
Then things get really interesting. Google asked the higher-level neuron layers, the ones that identify specific elements of the image rather than just shapes and corners, to build on what they detect in an image. In one case, the team fed an image of clouds through a layer that had already been trained to detect animals in photos:
Zoom in on that second image, and you’ll see the results of the network’s daydreaming about animals:
Yes. Those are fantastical creatures created entirely by an artificial neural network looking for animals in an image of clouds. Google actually has a term for this: Inceptionism.
Then, for the pièce de résistance of the post, the team goes one step further: It shows us what happens when a network is placed in an endless feedback loop with an image it generated. In other words, if you ask a network that’s an “expert” on architectural arches to create an image of those arches, and then ask it to generate more based on that image, you get pictures that look like a fever dream by M.C. Escher:
Or Game of Thrones on acid:
Or just a straight up Magic Eye picture:
Google calls these “dreams.” Dreams created entirely by artificial neural networks.
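The “dream” loop Google describes can be caricatured in a few lines. This is a toy sketch under big assumptions, not Google’s method: the “detector” here is a single hand-picked linear pattern rather than a trained deep layer, and the image is just a short vector. But the core move is the same one the post describes: repeatedly nudge the input toward whatever the layer responds to, then feed the result straight back in.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "animal detector": a fixed pattern this layer fires on.
# (Real networks learn such filters; this one is hand-picked for illustration.)
template = np.array([1.0, -1.0, 1.0, -1.0])

def layer_activation(image):
    """How strongly the detector 'sees' its pattern in the image."""
    return float(image @ template)

# Start from "clouds": faint random noise with no animal in it.
image = rng.normal(scale=0.1, size=4)
before = layer_activation(image)

# The feedback loop: amplify whatever the layer detects, then feed the
# result back in. For this linear detector, the gradient of the activation
# with respect to the image is the template itself, so each pass of
# gradient ascent paints a little more of the pattern into the image.
for _ in range(50):
    image = image + 0.1 * template  # gradient ascent on the activation

after = layer_activation(image)
# `after` is far larger than `before`: the noise has been "dreamed"
# into the detector's favorite shape.
```

In the real system the detector is a deep convolutional layer and the gradient has to be computed by backpropagation, which is what produces the swirling animals and arches instead of a flat repeated pattern.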
Go check out the entire gallery of “Inceptionism” dreams created by these machine brains and be awed.
[Google; h/t The Guardian]
Contact the author at kelsey@Gizmodo.com.