Those pushing for AI regulation have a strange new ally. The U.S. Chamber of Commerce, the largest pro-business lobbying group in the country, released a report on generative artificial intelligence Thursday, calling on lawmakers to create some sort of regulation around the ballooning technology.
At the same time, the chamber’s report offers few specifics on what that regulation should look like, beyond calling for a “risk-based approach” to regulating AI. While this could be the kind of push lawmakers need to form meaningful regulation, at this point it seems poised to produce limp rules that won’t actually help the people most impacted by AI development.
In the report, David Hirschmann, the chamber’s president and CEO of its Technology Engagement Center, called for a more responsible and ethical deployment of AI, writing “for Americans to reap the benefits of AI, people must trust it.” The report noted AI is projected to add $13 trillion to global economic growth by 2030, though economic crystal ball projections are less than reliable. The report also estimated that over the next 10 to 20 years, “virtually every business and government agency will use AI.”
The chamber argued that without regulation, we could see harm to both the economy and people, though the report doesn’t really get into specifics of what this regulation should look like. The lobbying group said Congress should “focus on applications and outcomes of AI” rather than putting any roadblocks on the technology itself. This is an interesting tack, as one of the main ongoing controversies in the world of artificial intelligence is how modern AI models are trained on hundreds of terabytes of information, images, and more scraped from the internet without users’ express permission. There’s also the lingering impact of AI-generated deepfakes spreading around the internet: Facebook was recently inundated with sexual deepfake ads using a fake version of Emma Watson and other celebrities.
For example, the artist community has been particularly outraged that its work has been used to train generative AI models, such as OpenAI’s DALL-E 2, StabilityAI’s Stable Diffusion, and Midjourney. This week, Spawning.ai, which runs the site haveibeentrained.com, said that requests to remove artwork from AI datasets have resulted in 78 million artworks being opted out of AI training. StabilityAI, ArtStation, and Shutterstock have promised to abide by these opt-out requests, but that doesn’t mean other companies or large datasets will.
And there’s also the question of how major tech companies plan to use websites and users’ own data for the purpose of training AI. OpenAI and its founder Sam Altman have promised not to use companies’ data when they purchase the new ChatGPT API, but regular users should still expect that any information they put into an AI prompt will be used for training purposes. Without any kind of federal digital privacy law, we can only assume users’ data will be used to train AI, for good or ill.
The report also does not delve into what this technology actually is or how it operates, which could prove rather unhelpful for a rather tech-illiterate Congress whose average age is near 60. The number of tech companies big and small adopting AI has risen dramatically since the start of the year. Microsoft’s Bing chatbot has exceeded 100 million daily active users in just over a month since it launched. And with more companies building ChatGPT into their platforms, it’s easy to see how the chamber can say AI will be employed in every facet of daily life in a mere decade.
So all in all, AI regulation may be needed sooner rather than later. Normally, the U.S. Chamber of Commerce has a relatively strong anti-regulatory stance. It’s picked fights with the Federal Trade Commission and other agencies over what it calls a “war against American businesses.” The lobbying group has previously suggested there could be some sort of regulation for the whole crypto thing, and last year the chamber launched a bipartisan commission on artificial intelligence co-chaired by Reps. John Delaney and Mike Ferguson.
But so far, Congress has been as slow to act on AI as it has been on cryptocurrencies, and crypto’s been big for much longer. California Rep. Ted Lieu recently wrote in a New York Times piece that it would be impossible to create a new federal agency to monitor AI in the current Congress. He said he would be introducing legislation for a new AI commission to “provide recommendations” for how existing agencies can regulate the technology. Still, no bill has yet been proposed to curb AI’s dangers. A few federal agencies, such as the chamber’s hated enemy the FTC, have threatened to go after companies for overblown claims about AI’s capabilities.
AI tech has spread so rapidly in just the past three months that it seems there’s no tech company out there not trying to shove AI into its systems, like a child squeezing Play-Doh into an electrical socket. While OpenAI’s sophisticated chatbot ChatGPT has captured the world’s attention, there’s also plenty of work being done on image, video, and voice generative models. All these AI developments could have massive repercussions for both artistic and non-artistic work in a multitude of industries.
The AI industry will need more than just “risk-based” regulation to truly make it a fair technology for anybody putting their work online.
Update 3/9/23 at 12:15 p.m. ET: This post was updated to correct the title of David Hirschmann, the chamber’s president and CEO of its Technology Engagement Center.