After getting caught using an algorithm to write dozens of articles, the tech publication CNET has apologized (sorta) but wants everybody to know that it definitely has no intention of calling it quits on AI journalism.
Yes, roughly two weeks ago Futurism reported that CNET had been using an in-house artificial intelligence program to pen droves of financial explainers. The articles—some 78 in total—were published over the course of two months under the bylines “CNET Money Staff” or “CNET Money,” and weren’t directly attributed to a non-human writer. Last week, after an online uproar over Futurism’s findings, CNET and its parent company, media firm Red Ventures, announced that they would be temporarily pressing “pause” on the AI editorials.
It would appear that this “pause” isn’t going to last long, however. On Wednesday, CNET’s editor and senior vice president, Connie Guglielmo, published a new statement about the scandal, in which she noted that the outlet would eventually resume using what she called its “AI engine” to write (or help write) more articles. In her own words, Guglielmo said that readers should...
...expect CNET to continue exploring and testing how AI can be used to help our teams as they go about their work testing, researching and crafting the unbiased advice and fact-based reporting we’re known for. The process may not always be easy or pretty, but we’re going to continue embracing it – and any new tech that we believe makes life better.
Guglielmo also used Wednesday’s piece as an opportunity to address some of the other criticisms aimed at CNET’s dystopian algo—namely, that it had frequently created content that was both factually inaccurate and potentially plagiaristic. Under a section titled “AI engines, like humans, make mistakes,” Guglielmo copped to the fact that the outlet’s so-called engine had made quite a few mistakes:
After one of the AI-assisted stories was cited, rightly, for factual errors, the CNET Money editorial team did a full audit...We identified additional stories that required correction, with a small number requiring substantial correction and several stories with minor issues such as incomplete company names, transposed numbers or language that our senior editors viewed as vague.
The editor also admitted that some of the automated articles may not have passed the sniff test when it comes to original content:
In a handful of stories, our plagiarism checker tool either wasn’t properly used by the editor or it failed to catch sentences or partial sentences that closely resembled the original language. We’re developing additional ways to flag exact or similar matches to other published content identified by the AI tool, including automatic citations and external links for proprietary information such as data points or direct quotes.
It would be one thing if CNET had very publicly announced that it was engaging in a bold new experiment to automate some of its editorial tasks, thus letting everybody know it was doing something new and weird. Instead, CNET did just the opposite—quietly rolling out article after article under vague bylines and clearly hoping nobody would notice. Guglielmo now admits that “when you read a story on CNET, you should know how it was created”—which seems like Journalism Ethics 101.