Self-Driving Cars Can't Choose Who to Kill Yet, But People Already Have Lots of Opinions

Image: Seth Wenig (AP)

Past research has found that people generally prefer to minimize casualties in a hypothetical autonomous car crash, but what happens when people are presented with more complex scenarios? And what happens when autonomous vehicles must choose between two outcomes in which at least one person could die? Who might those vehicles save, and on what basis would they make those ethical judgments?


It may sound like a nightmarish spin on “would you rather,” but researchers say such thought experiments are necessary for programming autonomous vehicles and for the policies that regulate them. What’s more, responses to these difficult dilemmas may vary across cultures, revealing there’s no universal agreement on which option is morally superior.

In one of the largest studies of its kind, researchers with MIT’s Media Lab and other institutions presented variations of this ethical conundrum to millions of people in ten languages across 233 countries and territories in an experiment called the Moral Machine, the findings of which were published in the journal Nature this week.


In a reimagined version of the trolley problem, an ethical thought experiment that asks whether you would accept the death of one person to save several others, the researchers asked participants on the viral, game-like Moral Machine platform to decide between two scenarios involving an autonomous vehicle with a sudden brake failure. In one instance, the car opts to hit the pedestrians in front of it to avoid killing those in the vehicle; in the other, the car swerves into a concrete barrier, killing those in the vehicle but sparing those crossing the street.

Moral Machine
Image: MIT Media Lab

Scenarios included choosing one group or another based on avatars of different genders, socioeconomic statuses (e.g., an executive versus a homeless person), fitness levels, ages, and other factors. In one example, participants were asked whether they’d opt to spare a group of five criminals or four men. In another, both groups were made up of one man, one woman, and a boy. The pedestrian group, however, came with an additional detail: they were “abiding by the law by crossing on the green signal,” implying that the group in the car may have been breaking the law.
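To make the structure of those dilemmas concrete, here is a minimal, hypothetical sketch (not MIT’s actual code; the class and field names are invented for illustration) of how one such scenario might be represented: two outcomes, each listing the characters who would die and the attributes the study varied.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Character:
    """One avatar in a Moral Machine-style dilemma (hypothetical schema)."""
    species: str                        # "human" or "animal"
    age_group: str                      # e.g. "child", "adult", "elderly"
    gender: str = "unspecified"
    social_status: str = "unspecified"  # e.g. "executive", "homeless"

@dataclass
class Outcome:
    """One of the two possible results of the brake-failure scenario."""
    label: str
    killed: List[Character] = field(default_factory=list)
    crossing_legally: Optional[bool] = None  # only meaningful for pedestrians

# The example from the article: passengers and pedestrians with the same
# makeup (one man, one woman, a boy), but the pedestrians cross on a green.
group = [Character("human", "adult", "male"),
         Character("human", "adult", "female"),
         Character("human", "child", "male")]

stay_in_lane = Outcome("continue straight, hitting the pedestrians",
                       killed=group, crossing_legally=True)
swerve = Outcome("swerve into the barrier, killing the passengers",
                 killed=group)

# A participant's response is simply which outcome they prefer the car to choose.
preferred = swerve
```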

Taken as a whole, the data showed that people tended to prefer sparing more lives, young people, and humans over animals. But those preferences shifted when participants’ countries and cultures were taken into account. For example, respondents in China and Japan were less likely to spare the young over the old, which, as the MIT Technology Review noted, may be “because of a greater emphasis on respecting the elderly” in their cultures. Another example highlighted by the magazine: respondents in countries or territories “with a high level of economic inequality show greater gaps between the treatment of individuals with high and low social status.”

Moral Machine
Image: MIT Media Lab

“People who think about machine ethics make it sound like you can come up with a perfect set of rules for robots, and what we show here with data is that there are no universal rules,” Iyad Rahwan, a computer scientist at the Massachusetts Institute of Technology in Cambridge and a co-author of the study, said in a statement.


Critics have pointed out that many decisions precede any ethical dilemma as extreme as the ones presented in this research. But if anything, the data shows there’s much to consider about the decision-making processes built into artificial intelligence. The researchers said they hope the findings will serve as a springboard for a more nuanced discussion of universal machine ethics.

“We used the trolley problem because it’s a very good way to collect this data, but we hope the discussion of ethics don’t stay within that theme,” Edmond Awad, a postdoctoral associate at MIT Media Lab’s Scalable Cooperation group and a co-author of the study, told the MIT Technology Review. “The discussion should move to risk analysis—about who is at more risk or less risk—instead of saying who’s going to die or not, and also about how bias is happening.”


[MIT Technology Review, Nature]


DISCUSSION

I’m sick of this argument, as it’s a complete red herring.

The idea of a self-driving car *choosing* is not exactly how the programming works. It really comes down to the term “imminent impact.” What would a self-driving car do if an out-of-control bus were coming toward it? The idea of avoiding an imminent situation (one that is beyond the self-driving car’s control) is currently science fiction. At most, the car might consider an available lane if it detects an ‘obstruction,’ but otherwise its only failsafe is to come to a complete stop, since something is obstructing its path.

The idea of ‘evading’ a situation that is outside of the car’s control is not a thing. The car cannot pull up onto the sidewalk or grass; to it, those places don’t exist, any more than you can walk through walls in a video game. It cannot *choose* to run into something else or to drive somewhere it does not recognize as road surface.

Self-driving cars really only have two safety features: 1) don’t bump into things (under their own power) and 2) have a good restraint system. If something is in the car’s path and there is no other open path, it will simply come to a stop and wait. If that obstruction is an out-of-control bus closing in at 70 mph, it is beyond the car’s ability to recognize the situation, let alone evade it, especially if all paths are obstructed. Whether a small child or a speeding bus, all obstructions are the same to it.
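To put that logic in rough code (a simplified sketch of what I’m describing, not any manufacturer’s actual control software; the function and flag names are made up):

```python
from enum import Enum, auto

class Action(Enum):
    CONTINUE = auto()
    CHANGE_LANE = auto()
    EMERGENCY_STOP = auto()

def plan(obstruction_in_path: bool, open_adjacent_lane: bool) -> Action:
    """Hypothetical failsafe logic: the planner never weighs *what* the
    obstruction is (child, bus, debris), only whether the path ahead is
    clear and whether a mapped, drivable lane is available."""
    if not obstruction_in_path:
        return Action.CONTINUE
    if open_adjacent_lane:
        return Action.CHANGE_LANE
    # Sidewalks and grass aren't drivable surfaces on the map, so the only
    # remaining option is to brake to a stop and wait.
    return Action.EMERGENCY_STOP
```

Notice that nothing in that sketch distinguishes a child from a bus: both are just obstruction_in_path = True.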