CNET reporter Jackson Ryan published an article last month describing how ChatGPT, an AI that can generate human-sounding text, would affect journalists and the news industry: “ChatGPT Is a Stunning AI, but Human Jobs Are Safe (for Now).”
“It definitely can’t do the job of a journalist,” Ryan wrote of ChatGPT. “To say so diminishes the act of journalism itself.”
The article said ChatGPT isn’t coming for journalists’ jobs just yet, but the very publication that ran Ryan’s article has been quietly publishing articles written by AI since November, according to Futurism and online marketer Gael Breton. The AI-written CNET articles bear the byline CNET Money Staff, which the outlet’s website describes with the note: “AI Content published under this author byline is generated using automation technology.”
CNET responded in an emailed statement, saying the Money editorial team was trying out the technology “to see if there’s a pragmatic use case for an AI assist on basic explainers around financial services topics.”
CNET’s editor in chief, Connie Guglielmo, said in the statement that the company’s goal had been to see whether the AI engine could assist its “busy staff of reporters and editors with their job to cover topics from a 360-degree perspective.”
The company also asked whether CNET would benefit from using AI content to lay out available facts, allowing readers to “make better decisions.”
The first article written by CNET Money Staff was published on November 11 with the headline, “What is a credit card charge-off?” Since then, the news site has published 73 AI-generated articles, but the outlet says on its website that a team of editors is involved in the content “from ideation to publication. Ensuring that the information we publish and the recommendations we make are accurate, credible, and helpful to you is a defining responsibility for what we do.”
The outlet says it will continue to publish each article with “editorial integrity,” adding, “Accuracy, independence, and authority remain key principles of our editorial guidelines.”
The most recent versions of consumer-facing artificial intelligence have taken the tech community by storm with their ability to write passable essays, articles, and computer code in seconds, though the quality varies, and ChatGPT has been banned from several high-profile forums. CNET is not the first news outlet to utilize AI technology: the Associated Press has boasted of being “one of the first news organizations to leverage artificial intelligence” since 2015, according to its website. “Today, we use machine learning along key points in our value chain, including gathering, producing, and distributing the news,” the site reads. It’s not clear whether the AP uses AI to write the stories themselves.
Other major news outlets have incorporated AI technology into their work, with the Washington Post announcing it was using AI to provide live updates for the 2020 presidential election on its podcasts. The goal, the outlet said, was to keep listeners up to date during the steady stream of election news that would be coming out.
The question of whether AI will supplant jobs remains unanswered. Ryan wrote that ChatGPT’s inability to understand or read emotion makes it useless in the context of journalism. ChatGPT, he wrote, doesn’t have the ability to describe the feelings on a player’s face when they win the World Cup, or talk to Ukrainians about how the Russian invasion has changed their lives, and would definitely have “no hope of covering Musk’s takeover of Twitter.”
Guglielmo said in her statement that CNET will “continue to assess these new tools to determine if they’re right for our business.” She added, “For now CNET is doing what we do best – testing a new technology so we can separate the hype from reality.”