Sam Altman, the CEO of ChatGPT maker OpenAI, is wrapping up a jet-setting tour of meetings with world leaders in an effort to steer global AI regulation. Altman and his company want lawmakers thinking more about a potential future where AI destroys society, and less about the problems AI will cause in the short term. At a talk with students in Tokyo Monday, Altman said he thinks the strategy is working.
“One of the reasons for this trip was to get to talk to leaders around the world about what we think needs to happen globally,” Altman said at the talk. “I came to the trip ... skeptical that it was going to be possible in the short term to get global cooperation to reduce existential risk but I am now wrapping up the trip feeling quite optimistic we can get it done.”
Altman was short on details, but he and his AI brethren have made clear what they “think needs to happen” when it comes to governing their industry. At a recent US congressional hearing, Altman had a friendly chat about how the work he’s doing may someday destroy us all, and suggested the primary focus should be a new regulatory agency that oversees anyone working on “superintelligence.” Superintelligence is a term for a hypothetical future AI technology that’s as smart as a human being, or smarter! Unlike ChatGPT, that technology does not exist now, and it may never.
Altman acknowledges AI might cause problems right now as well, but these concerns seem like an afterthought. In a recent blog post about regulation on OpenAI’s website, the company’s leaders gave non-hypothetical problems exactly one sentence worth of attention: “We must mitigate the risks of today’s AI technology too.”
Those risks are not imaginary. AI tools make creating oceans of content a trivial exercise, which will likely usher in a new era of misinformation where we’ll have to question whether every quote, picture, and video is a fake. Because AI tools are trained to mimic existing data, they may also exacerbate society’s biases and structural problems. We’ve already seen how ChatGPT and Microsoft’s Bing chatbot can be coerced into spitting out hate speech in controlled settings. In the wild, these tools may subtly and inadvertently promote harmful ideas in ways that are hard to spot. And, of course, AI tools will eliminate a significant number of jobs for people who aren’t robots.
Still, there’s some hope for anyone who thinks world governments should consider the impact that the AI technology we have right now is having on the real world.
UK Prime Minister Rishi Sunak announced Monday that leading AI makers Google DeepMind, OpenAI, and Anthropic agreed to give the British government access to their models for research and safety purposes, according to Politico. Sunak said the access will allow his government “to help build better evaluations and help us better understand the opportunities and risks of these systems.” However, Sunak seemed fixated on turning the UK into an “island of innovation,” rather than on mitigating the technology’s problems. Politico reported that the Prime Minister declined to set out specific proposals for legislation.
For now, the European Union seems to be the closest to meaningful AI regulation among major AI powers. A proposed law in the EU would set out broad guidelines for AI, banning tools with “unacceptable risks” like social scoring systems, and setting up strict regulation for use cases like evaluating job applicants. In response, Altman initially said OpenAI might have to stop operating in the EU, a threat he walked back shortly thereafter. China has laid down stringent laws forcing AI to comply with Communist Party standards, a legal offensive that has human rights experts worried.
Anyone who follows American lawmakers’ bumbling, decades-long failure to regulate the most important technology issues won’t be surprised to learn that the US is trailing far behind in the AI legislation race. Our aging cadre of congresspeople is very worked up about all of this AI business, but so far, they don’t seem worked up enough to do much actual work. The federal government has all but given up on passing new laws in this area, and the most promising proposals for AI regulation would modify laws we already have on the books.