For instance, we’re pretty sure the risk from asteroids is a fair bit smaller than the risk from, say, war or AI. But at the same time, if you’re NASA, figuring out how to prevent an asteroid from hitting the Earth is a good use of your time. I think what ultimately would be best is to have a broad network of programs taking care of everything that needs to be done in a coordinated fashion, so that we’re not duplicating efforts.


Does thinking about the destruction of humanity every day get you down?

Denkenberger: Optimists tend to ignore the risks. Pessimists tend to take the risks seriously but don’t think we can do anything about them. Either way, not enough work gets done. I’d say I’m more in the optimist camp, but now that I take the risks seriously, I do think we can do something about them.


Baum: Yeah, okay, I have my moments when it kind of settles in exactly what we’re talking about here. But on a day-to-day basis, this is a job, and you get used to it. A good comparison here is medical doctors. They’re in life-and-death situations every day, and most of those situations are much more immediately emotional than the analysis we do on computers. And they’re able to distance themselves emotionally from it, so that on a day-to-day basis, they can just keep going. It needs to be that way or you’d just wear out. The other side of it is that this is deeply fascinating work.

Follow the author @themadstone