The path toward humanoid robots capable of transforming themselves into trucks, dinosaurs, and airplanes is now a bit shorter, thanks to the invention of a modular robot that can alter its physical form to match the needs of a situation.
Introducing MSRR, a “modular self-reconfigurable robot” designed and built by researchers at Cornell University and the University of Pennsylvania. The system can reshape itself to meet the demands of a particular task, even in previously unknown environments, by combining powerful perception tools, high-level planning capabilities, and modular hardware. The details of MSRR were published today in Science Robotics.
When MSRR is placed in a new environment and assigned a task, like picking up garbage or mailing a letter, the first thing it does is create a map of its surroundings. Once oriented, the system decides whether a physical transformation is required to fulfill its goal, such as changing into a snake-like robot to climb a set of stairs, or forming an elongated arm to reach into a narrow corridor.
The system is composed of several mobile modules. Each module can separate itself from the larger structure, reorient itself, and snap back onto the superstructure at the desired location. By shifting its body parts in this way, MSRR can alter its function, locomotive capabilities, and shape. The modular hardware is controlled and coordinated by the system’s central “brain.”
The current system is still quite basic, but refined and scaled-up versions could be used to navigate unpredictable and dangerous situations.
“Two important future applications are search-and-rescue and bomb disposal,” Jonathan Daudelin, the lead author of the new study and a roboticist at Cornell, told Gizmodo. “Both of these areas involve widely varying and unknown environmental conditions that are well suited for the adaptive capabilities of modular robots. In addition, damage sustained by modular robots can be repaired more easily by simply replacing the damaged modules instead of the entire robot.”
To help it navigate its environment, the MSRR system is equipped with several perception tools. Each detachable module has a 3D camera that can measure the distance to each pixel in the acquired image. A small computer processes the data and controls the robot’s overall, collective movements. Using camera data, the system builds a 3D map of the robot’s environment as it moves and tracks the robot’s location within the map.
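The article doesn’t detail the mapping algorithm, but the core idea of turning depth-camera readings into a map can be sketched in a few lines. The following toy Python example (entirely illustrative, not the authors’ code) projects a horizontal slice of range readings into a sparse 2D occupancy grid:

```python
import math

def update_occupancy_grid(grid, robot_pose, depth_scan, cell_size=0.1):
    """Project a horizontal slice of depth readings into a sparse 2D
    occupancy grid. `depth_scan` is a list of (bearing_rad, range_m)
    pairs; `robot_pose` is (x, y, heading_rad); cells are integer
    (i, j) keys in a dict."""
    x, y, heading = robot_pose
    for bearing, rng in depth_scan:
        # Convert each range reading to world coordinates.
        wx = x + rng * math.cos(heading + bearing)
        wy = y + rng * math.sin(heading + bearing)
        cell = (round(wx / cell_size), round(wy / cell_size))
        grid[cell] = "occupied"
    return grid

grid = {}
pose = (0.0, 0.0, 0.0)                    # robot at origin, facing +x
scan = [(0.0, 1.0), (math.pi / 2, 0.5)]   # obstacles ahead and to the left
update_occupancy_grid(grid, pose, scan)
```

A real system would also mark the free space along each ray and fuse many noisy scans, but the geometry above is the heart of it.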
“Since our system is designed to work in unknown environments, we use an exploration algorithm to tell the robot where to move in order to explore unseen parts of the environment,” said Daudelin. “As the robot explores, it detects colored objects related to its task and records their locations. Another perception tool analyzes the robot’s 3D view of the environment in order to classify the type of conditions in the environment. For example, if the robot sees an object it needs to retrieve, it determines if the object is in a free area, in a narrow tunnel, or up on a ledge.”
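The article doesn’t name the exploration algorithm Daudelin describes, but a standard approach to “move toward unseen parts of the environment” is frontier-based exploration: drive toward mapped free cells that border unmapped space. A minimal Python sketch, under that assumption:

```python
def find_frontiers(grid):
    """Return known-free cells that border unknown space; these are
    natural targets when exploring an unmapped environment. `grid`
    maps (i, j) -> "free" or "occupied"; absent cells are unknown."""
    frontiers = []
    for (i, j), state in grid.items():
        if state != "free":
            continue
        neighbors = [(i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)]
        # A free cell with at least one unmapped neighbor is a frontier.
        if any(n not in grid for n in neighbors):
            frontiers.append((i, j))
    return frontiers

# Three mapped cells in a row; everything else is still unexplored.
grid = {(0, 0): "free", (1, 0): "free", (2, 0): "occupied"}
frontiers = find_frontiers(grid)
```

The robot would then navigate to the nearest frontier, rescan, and repeat until no frontiers remain.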
The high-level planner, said Daudelin, can then use this information to decide if the robot needs to reconfigure its shape in order to retrieve the object.
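At its simplest, that planning step is a mapping from the classified environment type to a required configuration. The sketch below is a guess at the logic, in Python; the environment labels come from the article, but pairing “ledge” with “Scorpion” is an assumption on my part:

```python
# Illustrative only: which configuration suits which environment type
# is inferred from behaviors described in the article, not from the
# authors' actual planner.
CONFIG_FOR_ENVIRONMENT = {
    "free": "Car",          # open ground: drive up to the object
    "tunnel": "Proboscis",  # narrow gap: reach in with the elongated arm
    "ledge": "Scorpion",    # raised surface: assumed pairing
}

def plan_reconfiguration(current_config, environment_type):
    """Return the configuration the situation requires, or None if the
    robot can keep its current shape."""
    needed = CONFIG_FOR_ENVIRONMENT.get(environment_type, "Car")
    return None if needed == current_config else needed

step = plan_reconfiguration("Car", "tunnel")   # -> "Proboscis"
```

The real planner reasons over tasks and maps rather than a lookup table, but the decision it outputs is of this form: transform, or carry on as-is.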
In addition to its perception tools, the system contains a library of possible configurations and actions, so no prior training is required. It’s currently equipped with four presets: Car, Scorpion, Snake, and Proboscis. With these four transformer-types, it can gain access to objects, and then collect, transport, and deposit them at the desired location.
The system was put through three distinct tests, in which it maneuvered around the environment, transformed itself, gathered objects, and even delivered a letter. In one demonstration, for example, the robot had to find, retrieve, and deliver pink and green metal objects to a designated drop-off zone, which was marked with a blue square. Here’s how the system fared, as the authors recount in the new study:
The demonstration environment contained two objects to be retrieved: a green soda can in an unobstructed area and a pink spool of wire in a narrow gap between two trash cans. Various obstacles were placed in the environment to restrict navigation. When performing the task, the robot first explored by using the “Car” configuration. Once it located the pink object, it recognized the surrounding environment as a “tunnel” type, and the high-level planner reactively directed the robot to reconfigure to the “Proboscis” configuration, which was then used to reach between the trash cans and pull the object out in the open. The robot then reconfigured to Car, retrieved the object, and delivered it to the drop-off zone that the system had previously seen and marked during exploration.
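The run the authors describe can be replayed as a simple event-driven loop, where each perception event triggers a configuration change or an action. This Python sketch is a simplification for illustration; the event names and structure are mine, not the authors’ interface:

```python
def run_retrieval_mission(events):
    """Replay a simplified version of the demonstration: perception
    events drive configuration changes and actions. Event and log
    strings are illustrative only."""
    config, log = "Car", []                  # exploration starts as Car
    for event in events:
        if event == "object_in_tunnel":
            config = "Proboscis"             # reach between the trash cans
            log.append("pull object into the open")
        elif event == "object_in_free_area":
            config = "Car"                   # back to Car to carry it
            log.append("retrieve object")
        elif event == "at_dropoff":
            log.append("deposit object")
    return config, log

config, log = run_retrieval_mission(
    ["object_in_tunnel", "object_in_free_area", "at_dropoff"])
```

The point of the demonstration is that this sequencing was not scripted: the events came from the robot’s own perception, and the reconfigurations from its planner.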
Daudelin said the use of multiple robotic elements introduced many possible failure points. That made it all the more important, he said, for the team to build extra checks and robust behaviors into the system so it could detect and recover from failures during a mission.
“Since individual modules are small, they are not very powerful, and therefore have a very restricted set of capabilities and are highly susceptible to small failures,” he added. “Computation was also limited due to the small size of the sensor module containing the robot computer.”
Looking ahead, Daudelin said his team would like to endow the modules with the ability to modify their environments to overcome obstacles.
“I also researched the use of machine learning combined with path planning algorithms to enable modular robots to navigate difficult terrain autonomously by reconfiguring their shapes as necessary to pass obstacles,” he said.
Roboticists obviously have a long way to go before the sci-fi vision of Transformers is realized, but this research is a positive step in that direction.