In an important advance that takes us one step closer to the inevitable robopocalypse, MIT researchers have developed a system that teaches robots how to acquire new skills—and then teach those skills to different types of robots.
The system is called C-LEARN, and it was developed by researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL). Using C-LEARN, people who have no experience with computer programming can teach a robot how to perform a task—like dropping a flask into a bucket, or pulling a rod from a container—by providing it with some basic rules about the task, and allowing the robot to view a single demonstration of the task being completed.
Incredibly, a robot can then transfer this newly acquired knowledge to another robot, even if the learner is physically different from the teacher. Eventually, the C-LEARN system could allow factories to use a host of different robot types without having to program each one individually. It could also help robots quickly learn and teach new tasks in high-pressure situations, such as when they're busy exterminating the entire human species, or more practically, when they're defusing bombs.
C-LEARN combines two basic robot-teaching principles: learning from demonstration, and motion planning, where each physical constraint has to be hand-coded by an expert. On their own, these strategies come with drawbacks. With demos, robots can't readily apply lessons to other situations or environments, and with motion-planning methods, the teaching is time consuming and labor intensive. CSAIL researchers Claudia Pérez-D'Arpino and Julie Shah combined the two approaches to make up for the deficiencies of each.
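To get a feel for how the two approaches fit together, here's a minimal sketch (not the CSAIL code; all names, poses, and constraints are invented for illustration) in which a demonstration supplies candidate keyframes while hand-coded, motion-planning-style constraints filter out the ones a planner would reject:

```python
# Hypothetical sketch: a demonstrated keyframe is accepted only if it
# satisfies hand-coded constraints, blending "learning from demonstration"
# with motion-planning-style feasibility checks.

def satisfies_constraints(pose, constraints):
    """Return True if a candidate pose meets every hand-coded constraint."""
    return all(check(pose) for check in constraints)

def plan_from_demo(demo_keyframes, constraints):
    """Keep demonstrated keyframes that pass the constraint checks; a real
    planner would re-plan failing keyframes rather than drop them."""
    return [pose for pose in demo_keyframes
            if satisfies_constraints(pose, constraints)]

# Example: poses are (x, y, z) tuples; constrain the gripper to stay
# above the table (z >= 0) and within reach (x <= 1.0).
constraints = [lambda p: p[2] >= 0.0, lambda p: p[0] <= 1.0]
demo = [(0.5, 0.2, 0.3), (0.9, 0.1, -0.1), (0.7, 0.0, 0.2)]
print(plan_from_demo(demo, constraints))  # the pose with z < 0 is filtered out
```

The point of the combination is that the demo supplies intent cheaply while the constraints supply the precision a raw imitation would lack.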
“By combining the intuitiveness of learning from demonstration with the precision of motion-planning algorithms, this approach can help robots do new types of tasks that they haven’t been able to learn before, like multistep assembly using both of their arms,” noted Pérez-D’Arpino in MIT News.
The first step of the teaching process is to provide a robot with information on how to reach or grasp various objects under different constraints (the "C" in C-LEARN stands for constraints). For example, even though certain objects may be similar in shape, like a steering wheel and a tire, attaching each part to a car requires a different set of movements. In the second stage, a human operator uses a 3D user interface to show the robot how to complete the task. In tests, after observing a single demo, robots were able to draw on their knowledge base and suggest a movement for the operator to approve or modify as needed. Without an operator, the robot simply makes its best guess (unassisted, MIT's test robots succeeded 87.5 percent of the time, compared with 100 percent when humans helped out).
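The retrieve-then-suggest loop can be sketched roughly as follows (a toy illustration, not the actual C-LEARN pipeline; the skill names, feature vectors, and nearest-match scoring are all assumptions): the robot matches the single demo against its stored knowledge base, proposes the closest skill, and defers to the operator if one is present.

```python
# Hypothetical sketch: after one demo, the robot retrieves the closest
# stored skill from its knowledge base and proposes it; a human operator
# may approve the suggestion or override it.

def closest_skill(demo_features, knowledge_base):
    """Pick the stored skill whose feature vector is nearest the demo's
    (squared Euclidean distance)."""
    def distance(skill):
        return sum((a - b) ** 2 for a, b in zip(skill["features"], demo_features))
    return min(knowledge_base, key=distance)

def suggest(demo_features, knowledge_base, operator_override=None):
    """Return the operator's choice if given, else the robot's best guess."""
    guess = closest_skill(demo_features, knowledge_base)["name"]
    return operator_override if operator_override else guess

kb = [
    {"name": "grasp_flask", "features": (0.9, 0.1)},
    {"name": "pull_rod",    "features": (0.2, 0.8)},
]
print(suggest((0.85, 0.15), kb))              # robot's unassisted best guess
print(suggest((0.85, 0.15), kb, "pull_rod"))  # operator overrides the suggestion
```

The unassisted path is what the 87.5 percent success figure refers to; the operator-in-the-loop path corresponds to the 100 percent case.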
“This approach is actually very similar to how humans learn in terms of seeing how something’s done and connecting it to what we already know about the world,” says Pérez-D’Arpino. “We can’t magically learn from a single demonstration, so we take new information and match it to previous knowledge about our environment.”
Importantly, this knowledge can then be taught to another robot. In the lab, the CSAIL researchers taught a set of tasks to Optimus, a two-armed robot designed for bomb disposal tasks. Later, it seamlessly transferred this knowledge to Atlas, an imposing bipedal robot that weighs over 400 pounds. By the end of the experiment, both robots were able to open doors, transport objects, and pull objects from containers—even though the robots had dramatic physical differences, and Atlas was never directly taught the skills by a human.
C-LEARN is an important advance because, rather than directly imitating motion, the robot has to infer the principles behind the motion, a more human-like approach. We don’t repeat each physical action we’re taught in a literal way. Instead, we integrate what we’ve learned through demonstrations, and then apply our knowledge to similar contexts.
A paper describing C-LEARN has been accepted to the IEEE International Conference on Robotics and Automation (ICRA), which will take place from May 29 to June 3 in Singapore.