Oxford’s Global Priorities Project has compiled a list of catastrophes—both natural and self-inflicted—that could kill off 10 percent or more of the human population. It’s a real buzzkill of a report, and it warns that any of these catastrophes could happen within the next five years.
We live in an era of accelerating change, when scientific and technological advancements are arriving rapidly. As a result, we are developing a new language to describe our civilization as it evolves. Here are 20 terms and concepts that you’ll need to navigate our future.
It was hailed as the most significant test of machine intelligence since Deep Blue defeated Garry Kasparov in chess nearly 20 years ago. Google’s AlphaGo has won two of the first three games against grandmaster Lee Sedol in a Go tournament, showing the dramatic extent to which AI has improved over the years. That…
The Information Technology & Innovation Foundation has released its nominees for its annual Luddite Awards. Recognizing “the year’s worst innovation killers,” this year’s crop includes everything from restrictions on car-sharing to bans on automatic license plate readers. But by referring to “AI…
The onset of World War I and the current climate change crisis have a lot more in common than you might think. Here’s why the two historical events are eerily similar—and why it’s so damn hard for us to prevent a self-inflicted disaster that everyone knows is coming.
During a recent United Nations meeting about emerging global risks, political representatives from around the world were warned about the threats posed by artificial intelligence and other future technologies.
The prospect of self-replicating nanobots devouring the Earth is a frightening one, indeed. But as Idea Couture foresight strategist Jayar LaFontaine explains, there are some practical things we can do to prevent such nightmares from happening.
Earlier this summer, more than a thousand prominent thinkers and specialists signed an open letter calling for a ban on autonomous killing machines. Since then, a number of critics have condemned the proposal, calling it both dangerous and useless. Optimization researcher Toby Walsh explains why we shouldn’t be so…
Set in the near future, Ghost Fleet dares to imagine what the next global war might actually look like. We talked to P.W. Singer to learn how he and his co-author August Cole managed to produce a futuristic techno-thriller that’s as plausible as it is entertaining. We were also given an exclusive excerpt from the…
Drought and extreme heat may significantly increase the risk of power shortages in the Western U.S. unless its utilities adopt “climate-proofing” measures, according to new research.
With chants of "I say robot, you say no-bot!", a group of protesters took to the streets in Austin, Texas to warn against the rise of artificial intelligence. The movement, though small in number, may be the start of a larger trend.
Normally, the things around us become damaged after experiencing an unexpected disruption or shock. But there are aspects of our world that actually get better after a setback. Here's why things that don't kill us can sometimes make us stronger.
Bill Gates has joined the growing chorus of concern over the potential risks of artificial superintelligence. He shared his thoughts in a recent Reddit AMA, writing: "I agree with Elon Musk and some others on this and don't understand why some people are not concerned." He has now added his name to an open letter…
Many of us, owing to an intuitive sense of where technological and social progress are taking us, have a preconceived notion of what the future will look like. But as history has continually shown, the future doesn't always go according to plan. Here are 11 ways the world of tomorrow may not unfold the way we expect.
Stephen Hawking, Elon Musk, and many other prominent figures have signed an open letter pushing for responsible AI oversight in order to mitigate risks and ensure the "societal benefit" of the technology.
Everything, actually. Artificial intelligence is poised to accompany humanity for the rest of its existence. We have a responsibility to make it safe. While we still can.
Stephen Hawking is once again warning about the perils of AI. "The development of full artificial intelligence could spell the end of the human race," he recently told the BBC, adding that "It would take off on its own, and re-design itself at an ever increasing rate...Humans, who are limited by slow biological…
It's hard to assess the sustainability of our civilization when climate scientists and ecologists have nothing to compare us to. Which is why we need to learn from the successes — and potential failures — of distant alien civilizations.
Artist and computer scientist Jaron Lanier has penned a longread for The Edge where he argues that the biggest threat of artificial intelligence comes from the fact that it's an elaborate fraud, and that it introduces religious thinking to what should otherwise be a technical field.
Game theory is a powerful tool for understanding strategic behavior in economics, business, and politics. But some experts say its true power may lie in its ability to help us navigate a perilous future.