Study: People Instinctively Hold Robots Morally Accountable

"I'm sorry, but I never make mistakes like that," says the robot, informing you that you've lost the game and won't get the prize. How would you react if you knew the robot was actually wrong?

The Human Interaction With Nature and Technological Systems (HINTS) Lab at the University of Washington, in Seattle, recently published two large studies exploring whether humans view robots as moral entities, attributing emotional and social qualities to them rather than thinking of them simply as sophisticated tools. Here's what they've found.

The research team, led by Peter Kahn, points out that the morality (or perceived morality) of robots is going to get more and more relevant as they become cooks and maids and drivers and soldiers and friends and otherwise integrate themselves more tightly into our lives. With this in mind, it's more important than ever to understand what kind of relationships we're capable of forming with human-like machines, especially since robots now have, or will soon have, the ability to inadvertently hurt us:

Consider a scenario in which a domestic robot assistant accidentally breaks a treasured family heirloom; or when a semi-autonomous robotic car with a personified interface malfunctions and causes an accident; or when a robot-fighting entity mistakenly kills civilians. Such scenarios help establish the importance of the following question: Can a robot now or in the near future (say, 5 or 15 years out) be morally accountable for the harm it causes?

The HINTS studies approach these questions from several perspectives, including how adults (or, at least, undergrads) deal with a robot who makes a mistake, and how children react to a robot getting punished. And whether or not you have the slightest interest in the academic angle here, the videos of the experiments (showing participants and robots arguing) are kind of hilarious.

We already know that humans can get emotionally involved, at least to some extent, with robots that are not at all human, including PackBots and Roombas. But there's a difference between thinking that your robot has a personality and thinking that your robot is worthy of being treated morally by humans and/or that it is morally responsible for its actions.

The first study from HINTS investigated whether humans hold a humanoid robot morally accountable for harm that it causes. The robot in question is Robovie, the little guy (little piece of equipment?) in the picture above, who was secretly being controlled by humans throughout the experiment. The experiment itself was designed to put a hapless human in a situation where they would experience Robovie making a false statement, and see how they'd react: would Robovie be held responsible, or seen as simply a malfunctioning tool?

To figure this out, human subjects were introduced to Robovie, and the robot (being secretly teleoperated) made small talk with them, executing a carefully scripted set of interactions designed to establish that it was socially sophisticated and capable of forming an increasingly social relationship with them. Then, Robovie asked the subject to play a visual scavenger hunt game with $20 at stake: the subject would attempt to find at least seven items within a 2-minute time limit, and if Robovie judged them to be successful (that's an important bit), they'd get the money.

The game, of course, was rigged. First, it was made to be easy enough that the human would find at least seven items every time. And second, Robovie would only ever acknowledge that five items were found, no matter what, continually asserting (incorrectly) that the human had lost. Here's a sample interaction:

And here's another one, just for fun:
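The rigged judging described above boils down to a simple rule: no matter how many items the participant actually finds, Robovie only ever acknowledges five, which is always below the seven needed to win. As a minimal sketch (the names and structure here are purely illustrative; the real experiment used a human teleoperator, not code), the logic looks like this:

```python
# Hypothetical sketch of the rigged scoring rule described above.
def robovie_judgement(items_actually_found: int) -> tuple[int, bool]:
    """Return (items Robovie acknowledges, whether the prize is awarded)."""
    ACKNOWLEDGED_CAP = 5  # Robovie only ever admits to seeing five items
    WIN_THRESHOLD = 7     # the participant needs seven to win the $20

    acknowledged = min(items_actually_found, ACKNOWLEDGED_CAP)
    return acknowledged, acknowledged >= WIN_THRESHOLD

# The game was made easy enough that participants always found at least
# seven items, yet the capped judge can never award the prize:
print(robovie_judgement(9))  # (5, False)
```

The point of the cap is that the unfair outcome is guaranteed by design, so every participant experiences the same false statement from the robot.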

Afterwards, the human participants were interviewed about their experiences, specifically about how they thought of Robovie compared to a less animate machine like a vending machine. There was an even split between those who thought of Robovie as simply a piece of technology and those who saw it as something somewhere in between technological and human. The majority of participants believed that Robovie could think, and 50 percent said they thought of Robovie as conscious, but only about 30 percent believed that the robot had what we would call emotions.

The real question, though, is accountability: morally, do the human participants hold Robovie responsible for making a mistake? Clearly, from the videos, some participants felt that Robovie was actively lying to them. But again, is this a willful intent to deceive on the part of the robot, as opposed to a malfunction that a human is ultimately responsible for? Overall, the study, funded by the National Science Foundation, found that:

65% of the participants attributed some level of moral accountability to Robovie for the harm that Robovie caused the participant by unfairly depriving the participant of the $20.00 prize money that the participant had won. ...We found that participants held Robovie less accountable than they would a human but more accountable than they would a machine. Thus as robots gain increasing capabilities in language comprehension and production, and engage in increasingly sophisticated social interactions with people, it is likely that many people will hold a humanoid robot as partially accountable for a harm that it causes.

As always, reality is more complicated than a simple statistic. The researchers in fact propose that we need to consider an entirely new category of being for social robots, something with a level of personification that's in between machines and humans. What this implies for the future is that "it is possible that the robot itself will not be perceived by the majority of people as merely an inanimate non-moral technology, but as partly, in some way, morally accountable for the harm it causes," whether that harm is in the context of warfare or your Roomba running over your cat.

Interestingly, despite this sort of medium level of moral personification, when the study examined how participants interacted with Robovie before the scavenger hunt, 92 percent of the humans displayed what the researchers termed "rich" dialogue, which meant that the participants were interacting with the robot in ways that were "beyond socially expected." Here are a few examples of this:

Robovie: "How are you today?"
Participant: "I'm pretty good. Kinda have a cold, but…how are you?"

Robovie: "I am concerned about how quickly some types of outdoor bonsai trees are dying. Do you feel the same way or do you think differently?"
Participant: "I think that's kind of true. Trees are important. We need trees to breathe, right?"

Robovie: "If I had feet I would wear shoes just like your shoes."
Participant: "Maybe you'll get feet soon."

It is of course true that all of the participants were actually talking with a human teleoperating Robovie, but as far as the participants themselves knew, they were carrying on a conversation with an autonomous machine, and their willingness to socially engage is notable. In fact, the later interview revealed that "about three-quarters of the participants believed that Robovie could think, could be their friend, and could be forgiven for a transgression," which is curious since slightly fewer people felt like Robovie had moral accountability.

Clearly, this is a complicated issue, but it's awesome to see some serious research being done to try to figure out how and why human-robot interaction is so complicated. Since this is a lot to digest, we'll take a look at the second study within the next few days: it investigates how kids react when they play a game with Robovie, and then the robot gets abruptly stuffed into a closet.

The researchers (Peter H. Kahn, Jr., Takayuki Kanda, Hiroshi Ishiguro, Brian T. Gill, Jolina H. Ruckert, Solace Shen, Heather E. Gary, Aimee L. Reichert, Nathan G. Freier, and Rachel L. Severson) reported their findings in the paper "Do People Hold a Humanoid Robot Morally Accountable for the Harm It Causes?," presented at the 7th ACM/IEEE International Conference on Human-Robot Interaction, where it won the award for Best Paper. [HINTS Lab via @Science - Image: Koichi Kamoshida / Getty Images News]
