Unchecked AI Could Make a Misinformation Nightmare Even Worse

If you ask anyone right now who “controls” AI, your guess is about as good as anyone else’s. Does the U.S. government control AI? Does Google? Does Facebook? What about Elon Musk? The point the authors make throughout the book is that, as of now, there’s no established hierarchy or cooperative framework to ensure that more advanced AI capabilities operate under some sort of unified vision. That’s a problem that needs to be fixed, they argue.
“We cannot leave its [AI’s] development or application to any one constituency, be it researchers, companies, governments, or civil society organizations,” the book reads.
This lack of cooperation, they argue, could lead to some dicey situations. Though the authors steer clear of going full Skynet Terminator takeover, they do outline a range of potential society-level “oh shit” moments that they argue could happen if all parties involved in AI aren’t on the same page. The most convincing of these arguments, to this writer at least, was the claim that more powerful algorithms could lead to a disinformation nightmare, where juiced-up fake news and other salacious content spreads so rapidly that even next-door neighbors can’t agree on what’s true. That’s the most believable because, depending on who you ask, it’s already happening.