The Associated Press is putting its foot down on journalists using any kind of AI program to write articles, though that isn’t stopping the company itself from making a quick buck in exchange for training generative AI on older AP content.
In a Wednesday blog post, the AP’s VP of standards, Amanda Barrett, said that while the wire service sees artificial intelligence as a tool to “improve how we work,” generative AI won’t be used “as a replacement for journalists.”
In a detailed bullet-point list, Barrett laid out how AP staff are allowed to “experiment” with ChatGPT and other chatbots “with caution,” though they are not permitted to use them to generate any publishable content. More than that, anything the AI outputs should be treated as “unvetted source material,” essentially meaning journalists would need to find actual sources for any information the AI generates and put it through the standard process of corroboration.
AP journalists are also required to vet whether their sources are free of AI’s influence. That means reporters will be asked to conduct reverse image searches or seek out corroborating sources. Essentially, AI will force AP journalists to be extra vigilant about the veracity of sources. These new guidelines follow an open letter signed by major media organizations, including the AP, urging lawmakers to require AI makers to obtain consent before training AI on their content.
“Journalists should exercise the same caution and skepticism they would normally, including trying to identify the source of the original content, doing a reverse image search to help verify an image’s origin, and checking for reports with similar content from trusted media,” Barrett said.
AP already uses AI to generate some articles, though this system has been in use for nearly a decade and is employed for simplified coverage of niche data-driven news like corporate earnings reports and local sporting events. The Associated Press has since added other AI tools that summarize stories for subhead blurbs and analyze social media feeds.
As far as generative AI images are concerned, AP said it won’t use AI to modify any photos, video, or audio. The company will refrain from using AI-generated images that are “false depictions of reality,” though it will use them when the story centers on that specific AI-created content.
None of that means the Associated Press is eschewing AI. Far from it. The company brokered a two-year deal with ChatGPT-maker OpenAI that allows OpenAI to train its language models on some of the AP’s back catalog of content. The makers of ChatGPT and GPT-4 have also offered modest philanthropic donations to U.S. journalism nonprofits as they try to push their AI tools into more local newsrooms.
It’s the opposite tack from what other major newsrooms have taken to tackle the wave of generative AI. The New York Times recently changed its Terms of Service to forbid anybody from using its articles to train AI. Google has reportedly tried to court major newsrooms like the NYT and The Washington Post to get them more excited about using AI tools. Other companies like CNET have experimented with fully AI-generated articles, though the content was chock-full of inaccuracies. Gizmodo and other sites owned by G/O Media also published several AI-generated articles in July, which were noted for their gross inaccuracies.
The Associated Press has a lot of pull as one of the largest news wire services in the U.S. Its content is republished by more than 1,000 smaller news outlets across the country. The company also hosts hundreds of bureaus in different countries and publishes in English, Spanish, and Arabic. Most of the news industry relies at least partly on the AP Stylebook as its writing style guide. Now we’ll see if more outlets follow the AP’s lead and ignore the siren song of quick, cheap, awful, soulless, and inaccurate AI-written articles.