There's a train speeding down the tracks toward five innocent people who can't get out of the way in time. You can save them by pulling a switch, but doing so will kill another person on a different track. It's a thought experiment philosophers have debated for ages, but it's about to become a real dilemma when we program robots to pull the switch, or not.
Popular Science explains how the classic hypothetical question becomes real:
A front tire blows, and your autonomous SUV swerves. But rather than veering left, into the opposing lane of traffic, the robotic vehicle steers right. Brakes engage, the system tries to correct itself, but there's too much momentum. Like a cornball stunt in a bad action movie, you are over the cliff, in free fall.
Your robot, the one you paid good money for, has chosen to kill you.
Maybe the robot itself didn't decide to kill you. Maybe its programmer did. Or the executive who decided on that company policy. Or the legislators who wrote the answer to that question into law. But someone, somewhere authorized a robot to act.
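To make that concrete, here is a deliberately crude, hypothetical sketch (in Python, with made-up maneuver names and harm estimates) of what it looks like when a human writes that answer into code. No real vehicle is programmed this way; the point is only that the ethical choice lives in a rule somebody had to author.

```python
# Hypothetical illustration only: a toy decision rule, not how any real
# autonomous vehicle actually works.

def choose_maneuver(predicted_harm):
    """Return the maneuver with the lowest predicted harm.

    `predicted_harm` maps each option (made-up names like "veer_left")
    to an estimated number of people hurt, which may include the
    car's own occupant.
    """
    # Minimizing total harm is itself an ethical policy a person chose.
    # A different rule (say, "always protect the occupant") would be a
    # different choice, authored by a different someone, somewhere.
    return min(predicted_harm, key=predicted_harm.get)


# Toy example: veering right harms only the occupant, so it "wins".
print(choose_maneuver(
    {"veer_left": 5, "veer_right": 1, "brake_straight": 3}
))
```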
That's not the only possible situation, either. As Patrick Lin asks in Wired, when a driverless car has no choice but to hit one of two cars, or one of two people, what criteria should it use to pick its target? The future holds a whole bunch of complicated robo-ethics questions we're going to have to hammer out eventually, but in the meantime, let's start with one:
Should a driverless car be authorized to kill you?
Image by Olivier Le Queinec/Shutterstock