It was hailed as the most significant test of machine intelligence since Deep Blue defeated Garry Kasparov in chess nearly 20 years ago. Google’s AlphaGo has won two of the first three games against grandmaster Lee Sedol in a five-game Go match, showing the dramatic extent to which AI has improved in recent years. That…
Stephen Hawking is at it again, saying it’s a “near certainty” that a self-inflicted disaster will befall humanity within the next thousand years or so. It’s not the first time the world’s most famous physicist has raised the alarm on the apocalypse, and he’s starting to become a real downer. Here are some of the…
By studying a nearby sun-like star, astronomers have concluded that the Sun is capable of releasing superflares up to a thousand times more powerful than anything previously recorded. Scientists say the chances of this happening are slim, but warn that such an event would threaten life on Earth.
During a recent United Nations meeting about emerging global risks, political representatives from around the world were warned about the threats posed by artificial intelligence and other future technologies.
The prospect of self-replicating nanobots devouring the Earth is a frightening one, indeed. But as Idea Couture foresight strategist Jayar LaFontaine explains, there are some practical things we can do to prevent such nightmares from happening.
More than a thousand prominent thinkers and leading AI and robotics researchers have signed an open letter calling for a ban on “offensive autonomous weapons beyond meaningful human control.”
In Part I of Kurzgesagt’s animated explainer of the Fermi Paradox, we learned about the vexing problem that is the Great Silence. This follow-up video presents some intriguing solutions that may explain the disturbing absence of intelligent alien life.
With chants of "I say robot, you say no-bot!", a group of protesters took to the streets in Austin, Texas to warn against the rise of artificial intelligence. The movement, though small in number, may be the start of a larger trend.
This coming weekend, at the annual meeting of the American Association for the Advancement of Science, experts will discuss the potential benefits and risks of an "active SETI" scheme in which messages about Earth — including the entire contents of Wikipedia — would be transmitted to hundreds of star systems.
Bill Gates has joined the growing chorus of concern over the potential risks of artificial superintelligence. He shared his thoughts in a recent Reddit AMA, writing: "I agree with Elon Musk and some others on this and don't understand why some people are not concerned." He has now added his name to an open letter…
This year's Edge.org question asks, "What do you think about machines that think?" Editor John Brockman collected 182 individual responses from such prominent thinkers as Nick Bostrom, Daniel Dennett, Rodney Brooks, Susan Blackmore, Alison Gopnik, Andy Clark, and Martin Rees.
Stephen Hawking, Elon Musk, and many other prominent figures have signed an open letter pushing for responsible AI oversight in order to mitigate risks and ensure the "societal benefit" of the technology.
Everything, actually. Artificial intelligence is poised to accompany humanity for the rest of its existence. We have a responsibility to make it safe. While we still can.
Stephen Hawking is once again warning about the perils of AI. "The development of full artificial intelligence could spell the end of the human race," he recently told the BBC, adding that "It would take off on its own, and re-design itself at an ever increasing rate...Humans, who are limited by slow biological…
Artist and computer scientist Jaron Lanier has penned a longread for Edge.org in which he argues that the biggest threat of artificial intelligence comes from the fact that it's an elaborate fraud, and that it introduces religious thinking into what should otherwise be a technical field.
Game theory is a powerful tool for understanding strategic behavior in economics, business, and politics. But some experts say its true power may lie in its ability to help us navigate a perilous future.
Futurists and science fiction authors often give us overly grim visions of the future, especially when it comes to the Singularity and the risks of artificial superintelligence. Sci-fi novelist David Brin talked to us about why these dire predictions are often simplistic and unreasonable.
As we head deeper into the 21st century, we're starting to catch a glimpse of the fantastic technological possibilities that await. But we're also starting to get a grim sense of the potential horrors. Here are 10 frightening technologies that should never, ever come into existence.
There's a saying among futurists that a human-equivalent artificial intelligence will be our last invention. After that, AIs will be capable of designing virtually anything on their own — including themselves. Here's how a recursively self-improving AI could transform itself into a superintelligent machine.
Some futurists and science fiction writers predict that we're on the cusp of a world-changing "Technological Singularity." Skeptics say there will be no such thing. Today, I'll be debating author Ramez Naam about which side is right.