To make their Mini Cheetah better equipped to skillfully scramble across varying terrains, robotics researchers at MIT’s CSAIL used AI-powered simulations to quickly teach the bot to adapt its walking style as needed. That included learning how to run, which resulted in a new gait that allows the robot to move faster than it ever has before.
As much as robot designers strive to engineer and program a robot to handle any situation it might encounter in the real world, it's an impossible task. The world is endlessly chaotic. Simply walking down a sidewalk, a robot could face anything from smooth pavement to slippery patches of ice to stretches of loose gravel, or all of the above, one after the other. That's why bipedal robots, and even quadrupeds, usually move with a slow, careful gait. They're designed and programmed to expect the worst-case terrain and to proceed cautiously, even when crossing smooth surfaces free of any debris or obstacles.
Adaptability is the key to making robots move faster and more confidently across varying terrain: changing gait and speed when they detect a transition from a safer surface like pavement to a material like loose gravel that demands a slower, more careful approach. A robot's programming can be manually modified and upgraded every time it encounters a terrain it can't successfully navigate, but that's a time-consuming process that inevitably sets the robot up for failure whenever it encounters something new.
A better approach is to create a robot that can learn by trial and error, automatically adjusting its behavior and movements on its own when it encounters new terrain. The problem with that approach is that, as with a toddler, it's not safe to let a robot simply run wild and accumulate all those learning experiences by itself. One of the most promising use cases for robots is sending a machine with the same capabilities as a human into areas unsafe for humans to go, and a robot that needs a constant babysitter can't fulfill that role.
To skip past the childhood full of random learning experiences that most humans go through and accelerate the Mini Cheetah's development, the researchers at MIT CSAIL turned to artificial intelligence and simulations. In just three hours, the robot experienced 100 days' worth of virtual adventures across a diverse variety of terrains and learned countless new techniques for modifying its gait so that it can still effectively locomote from point A to point B no matter what might be underfoot.
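To get a feel for how trial-and-error learning over randomized simulated terrains works, here is a deliberately tiny toy sketch, not the researchers' actual pipeline. A one-parameter "policy" picks a step length from a noisy proprioceptive reading, each simulated episode draws a random terrain friction value (so the policy can't overfit to one surface), and the policy is nudged in whichever direction earned more reward. All names and numbers are illustrative assumptions.

```python
import random

def episode_reward(w, friction):
    """One simulated episode: reward is highest when the chosen
    step length matches the (unknown) terrain friction."""
    obs = friction + random.gauss(0, 0.05)  # noisy proprioceptive reading
    action = w * obs                        # policy: scale the observation
    return -(action - friction) ** 2        # penalize mismatched steps

def train(iterations=2000, seed=0):
    random.seed(seed)
    w = 0.0  # start with a policy that knows nothing
    for _ in range(iterations):
        friction = random.uniform(0.2, 1.0)  # randomize the terrain each episode
        # trial and error: try nudging the policy both ways on the same terrain
        eps = 0.1
        r_plus = episode_reward(w + eps, friction)
        r_minus = episode_reward(w - eps, friction)
        # move toward whichever nudge scored better (finite-difference update)
        w += 0.05 * (r_plus - r_minus) / (2 * eps)
    return w

w = train()  # converges near w = 1.0: trust the proprioceptive reading
```

Because thousands of such episodes can run far faster than real time, a policy accumulates the equivalent of months of falls and slips in hours of wall-clock time, which is the core idea behind the 100-days-in-three-hours figure.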
The Mini Cheetah might not necessarily be able to recognize that it's loose gravel constantly causing it to lose its footing, or ice making its feet slip. But by constantly monitoring its own movements, it can tell when it isn't walking as effectively as it could, and, based on what its legs are doing, adapt their movements to ensure it keeps moving forward. Those adaptations can even compensate for components that are underperforming as a result of damage or being over-stressed.
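That monitor-and-adapt loop can be sketched in a few lines. This is a simplified illustration under assumed thresholds, not the robot's actual controller: if the body is moving noticeably slower than commanded (a sign of slipping), take shorter, more careful steps; if traction looks good, lengthen the stride again.

```python
def adapt_gait(commanded_speed, measured_speed, step_length,
               min_step=0.05, max_step=0.30):
    """Adjust step length (meters) from commanded vs. measured body speed.
    Thresholds and scale factors are illustrative assumptions."""
    ratio = measured_speed / commanded_speed if commanded_speed else 1.0
    if ratio < 0.8:
        # significant slip detected: shorten the stride, but not below a floor
        step_length = max(min_step, step_length * 0.9)
    elif ratio > 0.95:
        # terrain is grippy: cautiously lengthen the stride toward a cap
        step_length = min(max_step, step_length * 1.05)
    return step_length
```

Note that the rule never needs to know *why* progress stalled; gravel, ice, or a weakened motor all show up the same way, as a gap between commanded and measured motion, which is what lets the same mechanism compensate for damaged components.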
There's another reason robots don't run, and it has nothing to do with researchers worrying about damaging a custom machine that potentially costs hundreds of thousands of dollars to build. Running requires a robot to push its various components, like electric motors and servos, to the limits of their operating ranges, at which point they can start to behave in ways that are as hard to predict as what might happen to a robot traversing slippery ice. But the same way the Mini Cheetah can now adapt to different terrains, it can also adapt to how its own components are functioning, which allows it to run more effectively.
It might not be the most graceful thing to watch at high speed, but the Mini Cheetah hit a new top speed of 3.9 m/s, a little over 8.7 mph, which is faster than the average human can run. The new approach isn't just about teaching robots to run, however. Robot hands could be quickly taught to safely handle thousands of different objects they've never physically touched before, and autonomous drones could be taught how to fly in inclement weather through safe simulations instead of being sent out into the real thing to learn by trial and error.
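For readers who want to check the unit conversion behind that headline number, a one-liner suffices (1 mile is defined as exactly 1609.344 meters):

```python
METERS_PER_MILE = 1609.344

def mps_to_mph(v):
    """Convert meters per second to miles per hour."""
    return v * 3600 / METERS_PER_MILE

top_speed = mps_to_mph(3.9)  # about 8.72 mph, matching the reported figure
```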