Colorado Senator and failed Democratic presidential candidate Michael Bennet introduced a new bill Friday to create an “AI Task Force” of government experts who could identify risks and reduce potential civil liberties and privacy drawbacks from the disruptive tech. The proposed legislation marks one of the first AI safety bills to make its way through Congress as public concerns around AI and calls for regulation gain traction amid the widespread popularity of ChatGPT and other chatbots.
Bennet’s AI Task Force, according to Reuters, would look for gaps in current AI regulation and move quickly to recommend new policy proposals. Task force members would come from multiple executive branch agencies, including the Department of Defense and the National Institute of Standards and Technology. This supposed dream team of AI-interested government bureaucrats would work together for 18 months and issue a report providing policy recommendations. A year and a half, while a relatively quick timeline for notoriously slow-moving government projects, still looks like a snail’s pace compared to the rate of innovation in large language models over the past two years. ChatGPT debuted just this past November.
“There’s going to be a lot of improvisation and iterative approaches to try to wrestle with this because AI is so new to everyone in the government,” Bennet told Reuters. Bennet did not immediately respond to Gizmodo’s request for comment.
The bill comes about a month after Bennet wrote a letter to the CEOs of OpenAI, Google, Snap, and Meta prodding them on the dangers ChatGPT-style chatbots could pose to young children. In the letter, Bennet cited an example of a chatbot instructing a child how to cover bruises ahead of a visit with child protective services. In another case, a chatbot reportedly gave suggestions to a 13-year-old girl on how to lie to her parents about sneaking off to see a 31-year-old man.
“Younger users are at an earlier stage of cognitive, emotional, and intellectual development, making them more impressionable, impulsive, and less equipped to distinguish fact from fiction,” Bennet wrote. “Against this backdrop, it is not difficult to see the risk of exposing young people to chatbots that have at times engaged in verbal abuse, encouraged deception, and suggested self-harm.”
The Colorado senator isn’t the only one interested in forming an AI task force. Late last week, Homeland Security Secretary Alejandro Mayorkas announced the agency would form its own team tasked with exploring how the tech could be used to screen for drugs and human trafficking victims, secure water and electric grids, and provide insights into a sprawling range of other possible use cases. If all that sounds somewhat broad and unspecific, Mayorkas says that’s sort of the point.
“We must never allow ourselves to be susceptible to ‘failures of imagination,’ which, as the 9/11 Commission concluded nearly 20 years ago, held us back from connecting the dots and preparing for the destruction that was being planned on that tragic day,” Mayorkas said.
“We must instead look to the future and imagine the otherwise unimaginable, to ensure that whatever threats we face, our Department—our country—will be positioned to meet the moment.”