CNET has claimed that all of its AI-generated articles are “reviewed, fact-checked and edited” by real, human staff, and each post has an editor’s name attached to it in the byline. But clearly, that alleged oversight isn’t enough to stop the AI’s many mistakes from slipping through the cracks.


Usually, when an editor approaches an article (particularly an explainer as basic as “What is Compound Interest”), it’s safe to assume that the writer has done their best to provide accurate information. But with AI, there is no intent, only the product. An editor evaluating an AI-generated text cannot assume anything, and instead has to take an exacting, critical eye to every phrase, word, and punctuation mark. It’s a different type of task from editing a person, and one people might not be well-equipped for, considering the complete, unfailing attention it requires and the high volume CNET seems to be aiming for with its AI-produced stories.

It’s easy to understand (though not excusable) that when sifting through piles of AI-generated posts, an editor could miss an error about the nature of interest rates among the authoritative-sounding string of statements. When writing gets outsourced to AI, editors end up bearing the burden, and their failure seems inevitable.


And the failures are almost certainly not limited to that one article. Nearly all of CNET’s AI-written articles now come with an “Editors’ note” at the top that says, “We are currently reviewing this story for accuracy. If we find errors, we will update and issue corrections,” indicating the outlet has realized the inadequacy of its initial editing process.

Gizmodo reached out to CNET for more clarification about what this secondary review process means via email. (Will each story be re-read for accuracy by the same editor? A different editor? An AI fact-checker?) However, CNET didn’t directly respond to my questions. Instead, Ivey Oneal, the outlet’s PR manager, referred Gizmodo to Guglielmo’s earlier statement and wrote, “We are actively reviewing all our AI-assisted pieces to make sure no further inaccuracies made it through the editing process. We will continue to issue any necessary corrections according to CNET’s correction policy.”


Given the apparent high likelihood of AI-generated errors, one might ask why CNET is pivoting away from people to robots. Other journalistic outlets, like the Associated Press, also use artificial intelligence—but only in very limited contexts, like filling information into pre-set templates. And in these narrower settings, the use of AI seems intended to free up journalists to do other work, more worthy of their time. But CNET’s application of the technology is clearly different in both scope and intent.

All of the articles published under the “CNET Money” byline are very general explainers with plain-language questions as headlines. They are clearly optimized to take advantage of Google’s search algorithms and to land at the top of people’s results pages, drowning out existing content and capturing clicks. CNET, like Gizmodo and many other digital media sites, earns revenue from ads on its pages. The more clicks, the more money an advertiser pays for their miniature digital billboard(s).


From a financial perspective, you can’t beat AI: there’s no overhead cost and no human limit to how much can be produced in a day. But from a journalistic viewpoint, AI generation is a looming crisis, one in which accuracy becomes entirely secondary to SEO and volume. Click-based revenue doesn’t incentivize thorough reporting or clear explanation. And in a world where AI-generated posts become an accepted norm, the computer will only know how to reward itself.

Update 1/17/2023, 5:05 p.m. ET: This post has been updated with comment from CNET.
