Producing airtight, accurate facts isn’t ChatGPT’s strong suit. The OpenAI-created large language model is known to fabricate information, outright lying in service of generating fluid text quickly or completing tasks. But is ChatGPT legally responsible for its falsehoods? Can the inventions of an artificial intelligence be damaging and defamatory, and can the company behind it be held liable?
One Australian politician is angling to find out, in what could become the first ever defamation lawsuit against an AI. Brian Hood, the regional mayor of Hepburn Shire, is prepared to sue OpenAI if the company doesn’t adequately address and correct false information about him that’s shown up in ChatGPT, per a report from Reuters.
The AI reportedly falsely named Hood as a convicted criminal, linking him to a very real past bribery scandal at Australia’s Reserve Bank (RBA). In 2011, officials from an RBA subsidiary, Note Printing Australia, were found guilty of conspiracy to bribe foreign government officials. The offending actions took place between 1999 and 2004. Multiple officials were sentenced for their involvement in the crimes.
And, for a time at least, if you listened to ChatGPT, Hood was among those who partook in the nefarious scheme, per the politician’s lawyers. Yet, Hood was never found guilty of any crime in the RBA scandal. He was never charged. In fact, Hood was the whistleblower who brought the misdeeds to light—the proverbial hero, not villain, of the story.
The local politician reportedly became concerned about his reputation after numerous members of the public mentioned to him that ChatGPT was listing him as a criminal, according to Reuters. So, he contacted his lawyers.
Hood’s legal team sent a letter of concern to OpenAI on March 21, granting the company 28 days to fix the inaccuracy, per the outlet. Yet, the San Francisco-based company, led by Sam Altman, hasn’t yet responded, the lawyers told Reuters.
If the letter of concern progresses into a lawsuit, “it would potentially be a landmark moment,” James Naughton, one of the attorneys at the law firm representing Hood, Gordon Legal, said to Reuters. “It’s applying this defamation law to a new area of artificial intelligence.”
Gizmodo reached out to both Gordon Legal and OpenAI with questions outside of business hours, but neither the company nor the law firm replied as of publication time.
Specifically, any lawsuit would focus on ChatGPT’s lack of footnotes, which gives readers a false and unverifiable sense that the text it has provided is faithful to reality, Naughton further said. “It’s very difficult for somebody to look behind that to say ‘how does the algorithm come up with that answer?’” Naughton noted. “It’s very opaque.”
The financial consequences of a defamation suit for OpenAI likely wouldn’t be very significant. Damages in Australia are capped at less than $300,000, per Reuters. However, if the case were to move forward and ultimately go in favor of the mayor, it could set a significant global legal precedent for the amount of liability tech companies hold when it comes to AI-produced errors and misinformation.
Massive, mainstream tech companies like Microsoft and Google have already worked to incorporate generative AI into many of their products. And, even at the very first public launch events, the apparent problem of AI lies popped up for both Google’s Bard and Microsoft’s Bing. But so far, these tech giants haven’t had to legally answer for the bad intel they’re providing to users.
Any lawsuit could have big implications for Microsoft, specifically, which uses OpenAI’s tech in its own AI-powered search and chat following a multi-billion dollar partnership deal. Note: Microsoft’s version of the tech does include footnotes, but they’re not always the clearest and most reliable. Gizmodo reached out to the company for comment, but did not immediately receive a response.
Though Hood’s lawyers told Reuters that OpenAI hadn’t responded to their letter of concern, Gizmodo briefly tested whether ChatGPT was still producing lies about the mayor. In multiple rounds of back and forth where I asked questions about Hood and the 2011 bribery convictions, I couldn’t get the bot to tell me that he was a criminal, or that he had been charged with any crime. Instead, the AI seems to now understand that Hood was the good guy in the scandal. That said, ChatGPT still mischaracterized some of the details in its account of recent Australian government history.
“The whistleblower in the 2011 bribery scandal at the Reserve Bank of Australia (RBA) was Brian Hood, who was a former agent for Securency International, a joint venture between the RBA-owned Note Printing Australia and a British company,” it wrote. However, Hood was not a former agent at Securency International. He was a secretary at Note Printing Australia, per a 2013 Sydney Morning Herald report. The bot also claimed only two officials were found guilty of the charges, but the actual number is higher. These errors may not be defamatory, but they’re still untrue.