Producing airtight, accurate facts isn't ChatGPT's strong suit. The OpenAI-created large language model is known to fabricate information for its own ends, outright lying in service of generating fluid text quickly or completing tasks. But is ChatGPT legally responsible for its falsehoods? Can the inventions of an artificial intelligence be damaging and defamatory, and can the company behind it be held liable?
One Australian politician is angling to find out, in what could become the first-ever defamation lawsuit against an AI. Brian Hood, the regional mayor of Hepburn Shire, is prepared to sue OpenAI if the company doesn't adequately address and correct false information about him that's shown up in ChatGPT, per a report from Reuters.
The AI reportedly named Hood, falsely, as a convicted criminal involved in a very real past bribery scandal at Australia's Reserve Bank (RBA). In 2011, officials from an RBA subsidiary, Note Printing Australia, were found guilty of conspiracy to bribe foreign government officials. The offending actions took place between 1999 and 2004, and multiple officials were sentenced for their involvement in the crimes.
And, for a time at least, if you listened to ChatGPT, Hood was among those who partook in the nefarious scheme, per the politician's lawyers. Yet Hood was never found guilty of any crime in the RBA scandal. He was never charged. In fact, Hood was the whistleblower who brought the misdeeds to light: the proverbial hero, not villain, of the story.
The local politician became concerned about his reputation after numerous members of the public mentioned to him that ChatGPT was listing him as a criminal, according to Reuters. So, he contacted his lawyers.
Hood's legal team sent a letter of concern to OpenAI on March 21, granting the company 28 days to fix the inaccuracy, per the outlet. Yet the San Francisco-based company, led by Sam Altman, hasn't yet responded, the lawyers told Reuters.
If the letter of concern progresses into a lawsuit, "it would potentially be a landmark moment," James Naughton, an attorney at Gordon Legal, the law firm representing Hood, told Reuters. "It's applying this defamation law to a new area of artificial intelligence."
Gizmodo reached out to both Gordon Legal and OpenAI with questions outside of business hours, but neither the company nor the law firm replied as of publication time.
OpenAI includes a disclaimer about ChatGPT's accuracy (or lack thereof) in its terms of use. "Our Services may in some situations result in incorrect Output that does not accurately reflect real people, places, or facts," the company notes. However, that doesn't necessarily mean the company can't be taken to task for those lies. And Hood's lawyers say it needs to be. "He's an elected official, his reputation is central to his role," Naughton told Reuters. "It makes a difference to him if people in his community are accessing this material."
Specifically, any lawsuit would focus on ChatGPT's lack of footnotes, which gives users a false and unverifiable sense that the text the bot provides is faithful to reality, Naughton further said. "It's very difficult for somebody to look behind that to say 'how does the algorithm come up with that answer?'" Naughton noted. "It's very opaque."
The financial consequences of a defamation suit for OpenAI likely wouldn't be very significant. Damages in Australia are capped at less than $300,000, per Reuters. However, if the case were to move forward and ultimately go in favor of the mayor, it could set a significant global legal precedent for the amount of liability tech companies hold when it comes to AI-produced errors and misinformation.
Massive, mainstream tech companies like Microsoft and Google have already worked to incorporate generative AI into many of their products. And, even at the very first public launch events, the apparent problem of AI lies popped up for both Google's Bard and Microsoft's Bing. But so far, these tech giants haven't had to legally answer for the bad intel they're providing to users.
Any lawsuit could have big implications for Microsoft, specifically, which uses OpenAI's tech in its own AI-powered search and chat following a multi-billion-dollar partnership deal. Note: Microsoft's version of the tech does include footnotes, but they're not always the clearest and most reliable. Gizmodo reached out to the company for comment, but did not immediately receive a response.
Though Hood's lawyers told Reuters that OpenAI hadn't responded to their letter of concern, Gizmodo briefly tested whether ChatGPT was still producing lies about the mayor. In multiple rounds of back and forth where I asked questions about Hood and the 2011 bribery convictions, I couldn't get the bot to tell me that he was a criminal, or that he had been charged with any crime. Instead, the AI now seems to understand that Hood was the good guy in the scandal. That said, ChatGPT still mischaracterized some of the details in its account of recent Australian government history.
"The whistleblower in the 2011 bribery scandal at the Reserve Bank of Australia (RBA) was Brian Hood, who was a former agent for Securency International, a joint venture between the RBA-owned Note Printing Australia and a British company," it wrote. However, Hood was not a former agent at Securency International. He was a secretary at Note Printing Australia, per a 2013 Sydney Morning Herald report. The bot also claimed that only two officials were found guilty, but the actual number is higher. These errors may not be defamatory, but they're still untrue.