Engineers in Switzerland say they’ve found a way to make a four-legged robot even harder to fight off during the eventual robopocalypse. In a new paper, published Wednesday in Science Robotics, they describe a system that trains the bot to move faster than ever, while still being able to resist attempts to knock it down. It could even get back up under its own power if it did fall.
The ANYmal was originally developed by researchers at the Robotic Systems Lab, out of the Swiss Federal Institute of Technology in Zurich (ETH Zurich). It’s since been commercialized as part of the company ANYbotics, founded in 2016, and continues to be updated. Unlike many four-legged robots in existence today, the waterproof ANYmal was specifically designed to traverse less-than-ideal conditions, like the woods, industrial sites, and snowy landscapes. According to ANYbotics, the bot can already be used in the real world to reach dangerous places humans can’t go, including for search-and-rescue missions. It even cameoed on an episode of The X-Files last year, playing—what else—an attack robot.
While four-legged robots are further ahead than their two-legged counterparts when it comes to mimicking the agility and coordination of actual living things without human input, there’s still a lot of room for improvement. One of the potential solutions roboticists have turned to is a form of machine learning called reinforcement learning. This method would let robots train themselves through trial-and-error to find the best way to carry out a task, like walking. Put very simply, reinforcement learning would let the robot almost “think” and learn like an animal, with its own internal logic.
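To make the trial-and-error idea concrete, here is a minimal, purely illustrative sketch of reinforcement learning in a toy setting. This is not the paper's method (the researchers train a neural-network controller in a fast physics simulation); it's a classic tabular Q-learning loop, with an invented one-dimensional "world" in which an agent learns, by trial and error, that stepping toward a goal position earns reward.

```python
import random

# Toy trial-and-error learning (tabular Q-learning), for illustration only.
# The "robot" stands on a line of positions 0..4 and must discover that
# stepping right, toward position 4 (the goal), eventually earns reward.

N_STATES = 5          # positions 0..4; position 4 is the goal
ACTIONS = [-1, +1]    # step left or step right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    for _ in range(episodes):
        s = 0
        while s != N_STATES - 1:
            # Explore occasionally; otherwise act on what has been learned.
            if rng.random() < epsilon:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: q[(s, act)])
            s_next = min(max(s + a, 0), N_STATES - 1)
            reward = 1.0 if s_next == N_STATES - 1 else 0.0
            # Q-learning update: nudge the estimate toward the reward plus
            # the discounted value of the best action from the next state.
            best_next = max(q[(s_next, act)] for act in ACTIONS)
            q[(s, a)] += alpha * (reward + gamma * best_next - q[(s, a)])
            s = s_next

train()
# The learned greedy policy: which way to step from each non-goal position.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)}
print(policy)
```

No one hand-coded "step right" here; the behavior emerges from repeated attempts and reward feedback, which is the same principle, scaled up enormously, behind teaching a legged robot to walk.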
Using reinforcement learning with a physical, legged robot isn’t easy, though, since the robots and their motions are so complex. So for now, scientists have largely stuck to computer simulations of a robot learning. But using the data from these simulations to train real-life robots like ANYmal is also hard, according to lead author Jemin Hwangbo, a scientist at the Robotic Systems Lab at ETH Zurich.
“It has been extremely challenging to develop control policies for sophisticated legged systems,” he told Gizmodo via email. “There are a countless number of situations that robots face and it is nearly impossible to design a control logic that covers all of them.”
In the new paper, Hwangbo and his team wrote that they were able to develop a neural network that transfers what was learned in simulation to the real robot more effectively than previous approaches. These simulations ran close to a thousand times faster than real time. And the team said they were able to cut down on the computing power you’d expect a similar system to churn through, needing only a typical PC to run the simulations.
The ultimate results of their work—showcased in a series of videos—definitely seem impressive and a little terrifying. The newly trained ANYmal was faster and more energy efficient, beating its previous speed-walking record by 25 percent while also better following commands to move at a given velocity. The bot was already hardy, but the new training left it able to keep itself upright as the researchers tried their best to kick it over. And it could now even flip itself back upright after a fall, something the team said had never been demonstrated in a four-legged robot of similar complexity.
Their new training technique, Hwangbo said, shouldn’t just work for ANYmal, either. They think it could help any four-legged robot be better on its feet. But there’s still more work to do in training a robot to be this agile across many situations.
“The policies presented in this paper is only for even terrain,” he said. “For traversing rough and unstructured terrains, we need vision sensors and an appropriate policy to process the information thereof. We are working in this direction and hope to present a more versatile solution soon.”
In the meantime, we should probably all hope that the ANYmal doesn’t remember any of the violence committed against it in the name of science.