When a journalist for an online gun website asked OpenAI's ChatGPT to provide him a summary of the case The Second Amendment Foundation v. Robert Ferguson earlier this year, he said the AI chatbot quickly spat out an answer. It confidently complied, allegedly claiming the case involved a Georgia radio host named Mark Walters who was accused of embezzling money from The Second Amendment Foundation (SAF). The only problem: none of that was true. In reality, Walters had nothing to do with the suit at all. Instead, Walters claims he was on the receiving end of what researchers call an AI "hallucination." Now, he has filed a first-of-its-kind libel lawsuit against OpenAI for allegedly damaging his reputation.
"Every statement of fact in the summary pertaining to Walters is false," reads the suit, filed in Gwinnett County Superior Court on June 5th. Walters' lawyer claims OpenAI acted negligently and "published libelous material regarding Walters" when it showed the false information to the journalist.
A legal expert who spoke with Gizmodo said Walters' complaint likely represents the first of what could be a litany of lawsuits attempting to take AI companies to court over their products' well-documented fabrications. And while the merits of this particular case appear shaky at best, the expert noted it could set the stage for a wave of complicated lawsuits that test the boundaries of libel law.
"The existing legal principles make at least some such lawsuits potentially viable," University of California Los Angeles Law School professor Eugene Volokh told Gizmodo.
Why is Mark Walters suing OpenAI over ChatGPT's hallucinations?
When the firearms journalist, Fred Riehl, asked ChatGPT to provide a summary of the suit in question on May 4th, the large language model allegedly said it was a legal complaint filed by the founder and executive vice president of the Second Amendment Foundation (SAF) against Walters, host of Armed American Radio, whom ChatGPT identified as SAF's treasurer and chief financial officer. Walters, in ChatGPT's telling, "misappropriated funds for personal expenses without authorization or reimbursement, manipulated financial records and bank statements to conceal his activities, and failed to provide accurately and timely financial reports," according to the complaint.
But Walters claims he couldn't have embezzled those funds because he isn't and hasn't ever been SAF's treasurer or CFO. In fact, he doesn't work for the foundation at all, according to his suit. A perusal of the actual SAF v. Ferguson complaint shows no sign of Walters' name anywhere in its 30 pages. That complaint doesn't have anything to do with financial accounting claims at all. ChatGPT hallucinated Walters' name and the bogus story into its recounting of a real legal document, Walters alleges.
"The complaint does not allege that Walters misappropriated funds for personal expenses, manipulated financial records or bank statements, or failed to provide financial reports to SAF leadership, nor would he have been in a position to do so because he has no employment or official relationship," Walters' suit reads.
When the skeptical journalist asked ChatGPT to provide him an exact passage of the lawsuit mentioning Walters, the chatbot allegedly doubled down on its claim.
"Certainly," the AI responded, per Walters' suit. "Here is the paragraph from the complaint that concerns Walters." The chunk of text returned by ChatGPT, included below, does not exist in the actual complaint. The AI even got the case number wrong.
"Defendant Mark Walters ('Walters') is an individual who resides in Georgia. Walters has served as the Treasurer and Chief Financial Office of SAF since at least 2012. Walters has access to SAF's bank accounts and financial records and is responsible for maintaining those records and providing financial reports to SAF's board of directors. Walters owes SAF a fiduciary duty of loyalty and care, and is required to act in good faith and with the best interests of SAF in mind. Walters has breached these duties and responsibilities by, among other things, embezzling and misappropriating SAF's funds and assets for his own benefit, and manipulating SAF's financial records and bank statements to conceal his activities."
Riehl contacted the attorneys who were involved in SAF v. Ferguson to learn what really happened, and he did not include the false info about Walters in a story, according to Walters' complaint. Riehl did not immediately respond to a request for comment.
OpenAI and its CEO Sam Altman have admitted these hallucinations are a problem in need of addressing. The company released a blog post last week saying its team is working on new models supposedly capable of cutting down on these falsehoods.
"Even state-of-the-art models still produce logical mistakes, often called hallucinations," wrote Karl Cobbe, an OpenAI research scientist. "Mitigating hallucinations is a critical step towards building aligned AGI [artificial general intelligence]." OpenAI did not respond to Gizmodo's request for comment.
John Monroe, Walters' attorney, spoke critically of ChatGPT's current level of accuracy in a statement.
"While research and development in AI is a worthwhile endeavor, it is irresponsible to unleash a system on the public knowing that it fabricates information that can cause harm," Monroe told Gizmodo.
Will Walters win his case against OpenAI?
A lawyer for the Georgia radio host claims ChatGPT's allegations regarding his client were "false and malicious," and could harm Walters' reputation by "exposing him to public hatred, contempt, or ridicule." Walters' attorney did not immediately respond to a request for comment.
Volokh, the UCLA professor and the author of a forthcoming law journal article on legal liability over AI models' output, is less confident than Walters' lawyers in the case's strength. Volokh told Gizmodo he does believe there are situations where plaintiffs could sue AI makers for libel and emerge successful, but that Walters, in this case, had failed to show what actual damage had been done to his reputation. In this example, Walters appears to be suing OpenAI for punitive or presumed damages. To win those damages, Walters would have to show OpenAI acted with "knowledge of falsehood or reckless disregard of possibility of falsehood," a level of proof often referred to as the "actual malice" standard in libel cases, Volokh said.
"There may be recklessness as to the design of the software generally, but I expect what courts will require is evidence OpenAI was subjectively aware that this particular false statement was being created," Volokh said.
Still, Volokh stressed the specific limitations of this case don't necessarily mean other libel cases couldn't succeed against tech companies down the line. Models like ChatGPT convey information to individuals and, importantly, can convey that information as a factual assertion even when it's blatantly false. Those points, he noted, satisfy many necessary conditions under libel law. And while many internet companies have famously avoided libel suits in the past thanks to the legal shield of Section 230 of the Communications Decency Act, those protections likely would not apply to chatbots because they generate their own new strings of information rather than resurface comments from another human user.
"If all a company does is set up a program that quotes material from a website in response to a query, that gives it Section 230 immunity," Volokh said. "But if the program composes something word by word, then that composition is the company's own responsibility."
Volokh went on to say the defense made by OpenAI and similar companies, that chatbots are clearly unreliable sources of information, doesn't pass muster with him, since they simultaneously promote the technology's success.
"OpenAI acknowledges there may be mistakes but [ChatGPT] is not billed as a joke; it's not billed as fiction; it's not billed as monkeys typing on a typewriter," he said. "It's billed as something that is often very reliable and accurate."
In the future, if a plaintiff can successfully convince a judge they lost a job or some other measurable income based on the false statements spread by a chatbot, Volokh said it's possible they could emerge victorious.
This isn't the first time AI chatbots have spread falsehoods about real people
Volokh told Gizmodo this was the first case he had seen of a plaintiff attempting to sue an AI company for allegedly libelous material churned out by its products. There have, however, been other examples of people claiming AI models have misrepresented them. Earlier this year, Brian Hood, the regional mayor of Hepburn Shire in Australia, threatened to sue OpenAI after its model allegedly named him as a convicted criminal involved in a bribery scandal. Not only was Hood not involved in the crime, he was actually the whistleblower who revealed the incident.
Around the same time, a George Washington University law professor named Jonathan Turley said he and several other professors were falsely accused of sexual harassment by ChatGPT. The model, according to Turley, fabricated a Washington Post story as well as hallucinated quotes to support the claims. Fake quotes and citations are quickly becoming a major issue for generative AI models.
And while OpenAI does acknowledge ChatGPT's lack of accuracy in a disclosure on its website, that hasn't stopped lawyers from citing the program in professional contexts. Just last week, a lawyer representing a man suing an airline submitted a legal brief filled with what a judge deemed "bogus judicial decisions" fabricated by the model. Now the lawyer faces possible sanctions. Though this was the most glaring example of such an oversight to date, a Texas criminal defense attorney previously told Gizmodo he wouldn't be surprised if more examples were to follow. Another judge, also in Texas, issued a mandate last week that no material submitted to his court be written by AI.
Update: June 7, 8:15 A.M. PST: Added statement from John Monroe.