In a time of social distancing, a giant teddy bear giving enormous hugs sounds like a great way for people to safely cope with isolation. But Arizona State University’s robo-bear, which can learn to hug by observing humans, looks like it’s ready to give you the last embrace you’ll ever feel.
As more and more robots are integrated into places of work like factories and distribution centers, they need to be able to safely co-exist and work alongside their human counterparts. To date, the solution has been either to segregate robots behind protective barriers so humans can’t get near them, or to pack them full of sensors so they can autonomously avoid people or stop moving until it’s safe to resume. Humans actually working with robots, and not just beside them, is still a concept mostly relegated to science fiction—but researchers are working on it.
The biggest hurdle to overcome is that humans can be unpredictable, and attempting to pre-program a robot to account for every possible human movement and interaction is an act of futility. How a human picks up a part and hands it to a robot for the next step in an assembly line could be slightly different every single time, which is why, more often than not, humans are instead trained to deal with robotic co-workers, which can be programmed to be far more predictable. The problem with that approach is that it limits a robot’s abilities and adaptability, and for smaller companies that may have different needs for a robot from day to day, it makes investing in the hardware less appealing. The ideal solution is a robot that can learn tasks, and how to deal with people, all on its own.
That’s what’s being demonstrated in this video shared by Arizona State University’s Interactive Robotics Lab on YouTube. It’s a continuation of research first published three years ago in a paper titled, “Bayesian Interaction Primitives: A SLAM Approach to Human-Robot Interaction.” That’s a mouthful, but the research involves teaching robots how to use their various sensors, including live feeds from video cameras, to not only figure out their environment and their location in it (SLAM is short for “simultaneous localization and mapping”) but also to track the movements of a human and accurately predict their intended actions—be it simply handing an object over for a robot to reach out and grab, or in this case, embracing a human in a hug.
By simply watching a human go through the motions a handful of times, the robot can effectively learn, all by itself, how to perform a hug. And it’s not limited to just the person who performed the initial demonstration. The robot learns how to mimic and perform these actions with anyone, no matter their shape or size, or even the motions they use to initiate the hug. By learning on its own, and coming up with generalizations that are refined through real-time observation, the robot can quickly assess that an incoming human is simply looking for affection... hopefully.
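To get a rough feel for the idea, here’s a deliberately simplified toy sketch of that learn-from-demonstration loop. It is not the ASU system or the actual Bayesian Interaction Primitives math; all the trajectories, the 0.8 mirroring factor, and the least-squares scale estimate are invented for illustration. The robot averages a handful of demonstrated human motions into a template, watches only the opening fraction of a new person’s motion, estimates how that person’s motion scales relative to the template, and predicts the matching robot response:

```python
import math

# Toy stand-in for a demonstration: a 1-D "arm height" trajectory for
# the human, paired with a robot response that mirrors it, scaled.
T = 50
ts = [i / (T - 1) for i in range(T)]

def demo(amplitude):
    human = [amplitude * math.sin(math.pi * x) for x in ts]
    robot = [0.8 * h for h in human]  # hypothetical mirrored response
    return human, robot

# A handful of demonstrations from people of slightly different sizes.
demos = [demo(a) for a in (0.9, 1.0, 1.1, 1.05, 0.95)]

# Average the demonstrations into human and robot templates.
human_template = [sum(h[i] for h, _ in demos) / len(demos) for i in range(T)]
robot_template = [sum(r[i] for _, r in demos) / len(demos) for i in range(T)]

# A new, larger partner starts a hug; we only see the first 20% of it.
new_human, _ = demo(1.3)
k = T // 5
obs = new_human[:k]

# Least-squares scale fitting the observation to the template prefix —
# a crude stand-in for the paper's Bayesian filtering step.
num = sum(o * m for o, m in zip(obs, human_template[:k]))
den = sum(m * m for m in human_template[:k])
scale = num / den

# Predict the robot's matching motion for the rest of the interaction.
predicted_robot = [scale * r for r in robot_template]
```

The point of the sketch is the generalization step: nothing about the new partner was demonstrated, yet a partial observation is enough to adapt the learned template to them before the interaction finishes.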
By dressing the robot up as a giant plush teddy bear, the researchers are clearly trying to evoke an emotional response in test subjects so that the resulting hug is performed more authentically, as if they were genuinely embracing another person. The only problem is that thanks to video games like Five Nights at Freddy’s, and the terrifying animatronic performers at restaurants like Chuck E. Cheese, it’s hard to imagine anyone approaching and embracing this dressed-up robot with anything but trepidation and terror. Death by robot teddy bear is now one more thing to genuinely worry about.