The biggest challenge with robots is enabling them to do many different things in many different situations, instead of just sitting on an assembly line screwing on bottle caps. In fact, programming them to learn like human babies might be the best way to make our robotic friends much more capable.

Quartz and MIT Technology Review are reporting on Darwin, a humanoid mini-bot developed at the University of California, Berkeley. Its motions are controlled by simulated neural networks that work something like an infant’s brain: The robot is new to the world, unafraid to make mistakes, and able to learn from those errors in a very organic way. That could be a good way of improving robots’ adaptive learning—the ability to quickly and naturally change behavior in unfamiliar scenarios.

The Berkeley team, led by postdoctoral researcher Igor Mordatch, says the robot has learned to stand and stay upright, among other simple motions. Darwin’s body is covered in sensors that feed data from its surroundings into the robot’s neural networks—computer algorithms built to mimic our own brains, which the team spent two years developing.
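To give a rough sense of the idea, here is a toy sketch of sensor readings flowing through layers of simulated neurons to produce motor commands. This is purely illustrative (all the names, sizes, and numbers are made up, and it has nothing to do with the Berkeley team’s actual code):

```python
import math
import random

# Toy illustration, not Darwin's real control system: a tiny
# feedforward neural network mapping sensor readings to motor commands.
random.seed(0)

def make_layer(n_in, n_out):
    """One fully connected layer: a random weight matrix (n_out x n_in)."""
    return [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]

def forward(layer, inputs):
    """Weighted sum of inputs per neuron, squashed into (-1, 1) with tanh."""
    return [math.tanh(sum(w * x for w, x in zip(row, inputs))) for row in layer]

# Hypothetical sizes: 8 sensor channels -> 4 hidden units -> 2 motor outputs.
hidden = make_layer(8, 4)
output = make_layer(4, 2)

sensors = [0.1, -0.3, 0.5, 0.0, 0.2, -0.1, 0.4, -0.2]  # e.g. joint angles, tilt
motors = forward(output, forward(hidden, sensors))
print(motors)  # two values in (-1, 1), read as commands for two joints
```

Learning, in systems like this, amounts to adjusting those weights whenever the resulting motion goes wrong—which is the part the Berkeley researchers are teaching Darwin to do on its own.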

Eventually, the team’s goal is to develop the system to the point where Darwin could wander around a room on its own, with the same mobility and curiosity as a toddler. The plan is to have it stumbling around the Berkeley campus by January, and then picking up objects by next summer.

This kind of machine learning is tricky to pull off, period, but especially so with a human-like android, as opposed to a Siri-like handheld assistant. This is why we don’t have a Rosie the Robotic Maid yet: Tidying up a room is an extraordinarily tough task for a robot, since it involves working in an unpredictable, disorganized environment, picking up and using a wide array of objects, and adapting to a setting it was once familiar with but that has since changed.

So for now, it’s one step at a time. And if robots start taking those actual steps like our own human offspring, they could get a lot smarter a lot faster.

[Quartz and MIT Technology Review]