All the Questions About Artificial Neural Networks You Were Too Afraid to Ask

This year has seen artificial neural networks hit the big time, with Google’s Deep Dream program introducing millions of people to the concept for the very first time. But if you’re still not completely clear on, uh, how the hell they even work–fear not.


Natalie Hammel and Lorraine Yurshansky are two Google employees who have absolutely nothing to do with deep learning–the duo work at Google’s Creative Lab, which must be about as far away from the machine learning research department as possible.

That makes them the perfect people to explain the fine-grained computer science going on in the labs of their colleagues in a new video, in which they interview people like Christopher Olah and Greg Corrado, Google researchers who work on the forefront of artificial intelligence and machine learning.

In their video, the creatives ask the scientists to explain their complex work teaching artificial brains to “see” the world. It may seem simplistic at times, but by the second half of the video, I was glad they started with the basics. If you’re interested in neural networks but don’t have the computer science background to know how they work, you’ll enjoy it. And if you’ve got lingering questions, they’ll be doing an AMA on Reddit with the researchers this Friday.

[Google Research Blog h/t Prosthetic Knowledge]




Artificial neural networks have been around a long time as evidenced by this movie quote:

“My CPU is a neural-net processor—a learning computer. But Skynet presets the switch to read-only when we’re sent out alone.”

“Doesn’t want you doing too much thinking, huh?”


Since the rudiments of the idea were first proposed in 1943, slow but steady progress has been made in making these analogs better and better. Back in the late 80s, when I got out of university, we’d hit a roadblock: we couldn’t make these networks deeper than two or three layers. The software and math were too complicated.

If you look carefully at this Google promotional video, when it mentions deep learning, you’ll see that we’ve figured out how to get past that complexity hurdle. Now we’ve got artificial neural networks that are many layers deep.
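To give a rough sense of what “many layers deep” means (this is my own minimal sketch in NumPy, not anything from Google’s actual systems — all the sizes and names here are made up for illustration): a deep network is just several layers, each a matrix multiply plus a nonlinearity, composed one after another.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Rectified linear unit: a simple nonlinearity commonly used
    # between layers in deep networks.
    return np.maximum(0.0, x)

# Five stacked layers, each mapping 8 inputs to 8 outputs.
# (Weights are random here; a real network would learn them.)
layers = [(rng.standard_normal((8, 8)) * 0.1, np.zeros(8)) for _ in range(5)]

def forward(x, layers):
    # Pass the input through every layer in turn — "depth" is just
    # how many of these steps the signal flows through.
    for weights, bias in layers:
        x = relu(x @ weights + bias)
    return x

output = forward(rng.standard_normal(8), layers)
print(output.shape)  # (8,)
```

Stacking two or three of these was about the practical limit back then; modern networks stack dozens or more.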

But, as a complete nonexpert, I know there is lots more work to be done. These systems still aren’t much smarter than fish or many invertebrates. HAL, Skynet, GLaDOS and the Iron Giant are still science fiction.

But on the plus side: natural selection took some 3.5 billion years to arrive at something as complicated as an insect brain, which first appears in the fossil record about 400 million years ago.

Since 1943, we’ve been at this, what? Maybe 70 years? Seems to me engineering is getting very good at this stuff very quickly.