
Is your brain actually running a computer algorithm while you sleep?


It's no surprise that computers function a little like human minds — after all, we built them with our own minds as the model. But recent research into neural networks suggests an uncanny connection between machine learning algorithms and what your brain does while you're asleep.


Over at Quanta Magazine, Natalie Wolchover has a great article on Boltzmann machines, software that imitates the way synapses form networks in your brain. The weird part? New research on the sleeping human brain suggests we may have stumbled onto an algorithm that describes what happens in our heads while we sleep.

Writes Wolchover:

Geoffrey Hinton, a pioneer in the field of artificial intelligence, thinks the best approach to understanding how brains learn is to try to build computers that learn in the same way. “You inevitably discover a lot about the computational issues, and you discover them at a level of understanding that psychologists don’t have,” he said.

[In a Boltzmann machine], synapses in the network start out with a random distribution of weights, and the weights are gradually tweaked according to a remarkably simple procedure: The neural firing pattern generated while the machine is being fed data (such as images or sounds) is compared with random firing activity that occurs while the input is turned off.

Each virtual synapse tracks both sets of statistics. If the neurons it connects fire in close sequence more frequently when driven by data than when they are firing randomly, the weight of the synapse is increased by an amount proportional to the difference. But if two neurons more often fire together during random firing than data-driven firing, the synapse connecting them is too thick and consequently is weakened.
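That comparison is simple enough to sketch in code. Here's a minimal, illustrative Python toy (not Hinton's implementation; the unit counts, learning rate, and sampling scheme are all our own assumptions): the network's binary units are sampled once while clamped to data and once running freely, and each synaptic weight gets nudged by the difference in co-firing.

```python
import numpy as np

# Minimal toy Boltzmann-style update (illustrative assumptions throughout:
# 8 binary units, 4 of them clamped to data, a handful of Gibbs steps).
rng = np.random.default_rng(0)
n_units = 8
learning_rate = 0.01

# Symmetric random starting weights, no self-connections.
weights = rng.normal(scale=0.1, size=(n_units, n_units))
weights = (weights + weights.T) / 2
np.fill_diagonal(weights, 0.0)

def sample_states(weights, clamped=None, steps=100):
    """Gibbs-sample binary unit states, optionally clamping some units to data."""
    states = rng.integers(0, 2, size=n_units).astype(float)
    for _ in range(steps):
        for i in range(n_units):
            if clamped is not None and i < len(clamped):
                states[i] = clamped[i]  # data phase: these units are held fixed
                continue
            prob_on = 1.0 / (1.0 + np.exp(-weights[i] @ states))
            states[i] = float(rng.random() < prob_on)
    return states

# One learning step on a single toy binary input.
data = np.array([1.0, 0.0, 1.0, 0.0])
wake = sample_states(weights, clamped=data)   # firing driven by data
free = sample_states(weights, clamped=None)   # random baseline firing

# Strengthen synapses whose units co-fire more under data than at baseline;
# weaken the rest. (A real run averages these statistics over many samples.)
delta = np.outer(wake, wake) - np.outer(free, free)
weights += learning_rate * delta
np.fill_diagonal(weights, 0.0)
```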

The most commonly used version of the Boltzmann machine works best when it is “trained,” or fed thousands of examples of data, one layer at a time. First, the bottom layer of the network receives raw data representing pixelated images or multitonal sounds, and like retinal cells, neurons fire if they detect contrasts in their patch of the data, such as a switch from light to dark. Firing may trigger connected neurons to fire, too, depending on the weight of the synapse between them. As the firing of pairs of virtual neurons is repeatedly compared with background firing statistics, meaningful relationships between neurons are gradually established and reinforced. The weights of the synapses are honed, and image or sound categories become ingrained in the connections. Each subsequent layer is trained the same way, using input data from the layer below.
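The layer-by-layer recipe in that paragraph can also be sketched compactly. The toy below stacks restricted Boltzmann machines, training each layer on the activity of the layer beneath it with one-step contrastive divergence; every name, layer size, and the random "pixel" data are illustrative assumptions, not details from the article.

```python
import numpy as np

# Self-contained toy of greedy layer-by-layer training. Sizes, rates,
# and the random "pixel" data are made up for illustration.
rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_rbm(data, n_hidden, epochs=50, lr=0.05):
    """Train one layer with one-step contrastive divergence; returns its weights."""
    n_visible = data.shape[1]
    w = rng.normal(scale=0.1, size=(n_visible, n_hidden))
    for _ in range(epochs):
        h_data = sigmoid(data @ w)          # hidden response driven by the data
        v_model = sigmoid(h_data @ w.T)     # free reconstruction of the input
        h_model = sigmoid(v_model @ w)      # hidden response to that reconstruction
        # Data-driven co-activity goes up; model-driven (baseline) goes down.
        w += lr * (data.T @ h_data - v_model.T @ h_model) / len(data)
    return w

def train_stack(raw_data, layer_sizes):
    """Greedy layer-by-layer training: each layer feeds the next."""
    layers, layer_input = [], raw_data
    for n_hidden in layer_sizes:
        w = train_rbm(layer_input, n_hidden)
        layers.append(w)
        layer_input = sigmoid(layer_input @ w)  # input for the layer above
    return layers

# 100 random 16-"pixel" binary images as stand-in training data.
images = (rng.random((100, 16)) > 0.5).astype(float)
stack = train_stack(images, layer_sizes=[8, 4])
```

Real systems add bias terms and sample the hidden units stochastically, but the greedy stacking structure is the point here.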

. . . Over the past five to 10 years, studies of brain activity during sleep have provided some of the first direct evidence that the brain employs a Boltzmann-like learning algorithm in order to integrate new information and memories into its structure. Neuroscientists have long known that sleep plays an important role in memory consolidation, helping to integrate newly learned information. In 1995, Hinton and colleagues proposed that sleep serves the same function as the baseline component of the algorithm, the rate of neural activity in the absence of input.

“What you’re doing during sleep is you’re just figuring out the base rate,” Hinton said. “You’re figuring out how correlated would these neurons be if the system were running by itself. And then if the neurons are more correlated than that, increase the weight between them. And if they’re less correlated than that, decrease the weight between them.”
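Hinton's description translates almost word for word into the averaged form of the update: estimate the baseline correlation of each pair of neurons while the system runs by itself (the "sleep" phase), compare it to the correlation under data, and move the weight by the difference. A tiny sketch, with made-up stand-in samples:

```python
import numpy as np

# Toy illustration of the rule in Hinton's quote. The sample arrays are
# random stand-ins; in a real system they'd be recorded firing patterns.
rng = np.random.default_rng(1)
data_driven = rng.integers(0, 2, size=(500, 6)).astype(float)   # "awake" firing
free_running = rng.integers(0, 2, size=(500, 6)).astype(float)  # "sleep" firing

# Average co-firing in each condition.
driven_corr = data_driven.T @ data_driven / len(data_driven)
baseline_corr = free_running.T @ free_running / len(free_running)

# More correlated than the baseline: weight goes up. Less: weight goes down.
learning_rate = 0.01
delta_w = learning_rate * (driven_corr - baseline_corr)
```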


Read the rest of this fascinating article at Quanta Magazine.

Image by Luis Louro via Shutterstock