Should Your Driverless Car Kill You to Save Two Other People?


There's a train speeding down the tracks toward five innocent people who can't get out of the way in time. You can save them by pulling a switch, but doing so will kill another person on a different track. It's a thought experiment people have debated for ages, and it's about to become a real dilemma when we program robots to pull the switch, or not.


Popular Science explains how the classic hypothetical question becomes real:

A front tire blows, and your autonomous SUV swerves. But rather than veering left, into the opposing lane of traffic, the robotic vehicle steers right. Brakes engage, the system tries to correct itself, but there's too much momentum. Like a cornball stunt in a bad action movie, you are over the cliff, in free fall.

Your robot, the one you paid good money for, has chosen to kill you.

Maybe the robot itself didn't decide to kill you. Maybe its programmer did. Or the executive who decided on that company policy. Or the legislators who wrote the answer to that question into law. But someone, somewhere authorized a robot to act.


That's not the only possible situation, either. As Patrick Lin asks in Wired, when faced with a choice between hitting one of two cars or one of two people, what criteria should a driverless car use to pick its target? The future holds a whole bunch of complicated robo-ethics questions we're going to have to hammer out eventually, but in the meantime let's start with one:

Should a driverless car be authorized to kill you?

Image by Olivier Le Queinec/Shutterstock


DISCUSSION

Matt Novak

The more interesting and immediate question for me is what drivers will do when they're sitting behind the wheel and it looks like something bad is about to happen under the robot's control.

Humans will almost certainly be able to override the controls of semi-autonomous cars for decades to come. If your driverless car looks like it's about to hit something, would you take control or just trust the robot to avoid it?

Dutch researchers looked at this question back in 1999 and found that about half of the people studied took control of the car when it looked like they were in immediate danger. I'd love to see follow-up studies now that driverless cars feel even closer to mainstream reality.