CNET mass-corrects AI-written finance explainers


Starting in November 2022, CNET quietly published 77 financial explainer articles written by an AI tool under the byline "CNET Money Staff." Readers had to hover over the byline to learn the articles were produced "using automation technology." In January 2023, Futurism broke the story, and a follow-up identified factual errors in a compound interest article, prompting a full audit. CNET editor-in-chief Connie Guglielmo confirmed corrections were issued on 41 of the 77 articles - more than half - including some she described as "substantial." CNET paused AI-generated publishing and updated its disclosure practices, though Guglielmo said the outlet intended to continue using AI tools.

Incident Details

Severity: Facepalm
Company: CNET
Perpetrator: Executive
Incident Date:
Blast Radius: Large corrections; credibility hit; policy changes on AI usage.

The Quiet Rollout

Starting sometime in November 2022, CNET began publishing financial explainer articles generated by an AI tool. The articles covered basic personal finance topics: "What Is Zelle and How Does It Work?", "What Is Compound Interest?", that sort of thing. They were bylined "CNET Money Staff," and the only indication that AI was involved was a small note accessible by hovering over the byline, which explained the article was produced "using automation technology."

CNET did not make a public announcement about the experiment. There was no blog post, no press release, no disclosure on the site explaining that the outlet was testing AI-generated content. The articles simply appeared, formatted like any other CNET article, and unless a reader specifically clicked or hovered on the byline attribution, they would have no reason to suspect a machine wrote them.

By January 2023, 77 such articles had been published.

Futurism Breaks the Story

On January 12, 2023, Futurism reported that CNET had been quietly publishing AI-written articles for months. The coverage drew immediate attention - CNET was a major technology news outlet, and the discovery that it was using AI to write articles without prominently disclosing the fact struck the media world as both ironic and concerning.

Futurism followed up with a detailed analysis of one of the AI articles, a piece about compound interest, and identified factual errors. The errors were not minor: the article contained mathematical mistakes and conceptual misunderstandings about how compound interest works. For a financial explainer article - the kind of content CNET was presumably publishing to help readers make informed decisions about their money - getting the math wrong was a serious problem.
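To illustrate the kind of mistake at issue: Futurism reportedly flagged a passage claiming that $10,000 deposited at 3% interest would "earn $10,300" in the first year, conflating the final balance with the interest earned. A minimal sketch of the correct calculation (the function name and figures here are illustrative, not from the CNET article):

```python
# Compound interest on principal P at annual rate r, compounded
# n times per year for t years:
#   balance = P * (1 + r/n) ** (n * t)
# The interest *earned* is the balance minus the principal -- the
# distinction the AI-written article reportedly blurred.

def compound_interest(principal: float, rate: float,
                      periods_per_year: int, years: float) -> float:
    """Return the interest earned (not the final balance)."""
    balance = principal * (1 + rate / periods_per_year) ** (periods_per_year * years)
    return balance - principal

# $10,000 at 3%, compounded annually, for one year:
# the final balance is $10,300, but the interest earned is $300.
print(round(compound_interest(10_000, 0.03, 1, 1), 2))  # 300.0
```

A human editor with a calculator catches this in seconds, which is why the error raised questions about how much review the articles actually received.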

Once the compound interest errors became public, the rest of the 77 articles came under scrutiny. CNET's editorial team conducted an internal audit.

41 of 77

CNET editor-in-chief Connie Guglielmo published an editorial on January 25, 2023, acknowledging the situation. She said CNET had used an "internally designed AI engine" - not ChatGPT - to generate the articles, and that the tool was designed to "help" produce content that human editors would review before publication.

The audit results were not flattering. CNET issued corrections on 41 of the 77 AI-written articles. That's a correction rate of 53% - more than half of the AI-generated output required fixes after the fact. Guglielmo characterized the corrections as a mix of "substantial" and "minor," but declined to break down how many fell into each category.

Guglielmo noted that "AI engines, like humans, make mistakes." This was a technically accurate statement that didn't land the way it was presumably intended. Humans do make mistakes, but human-written articles at major publications don't typically require corrections on more than half of the pieces published. A 53% correction rate is not a "mistakes happen" situation; it's a quality control failure.

Red Ventures and the SEO Machine

CNET was owned by Red Ventures, a digital media company whose business model centered on affiliate marketing and search engine optimization. Red Ventures had acquired CNET in 2020 and also owned Bankrate, an online financial services comparison site. The AI article program fit squarely within Red Ventures' business model: produce high volumes of search-optimized explainer content that would rank on Google and drive traffic to affiliate links.

The financial explainer articles CNET's AI was producing weren't investigative journalism or breaking news; they were SEO content. Articles about "What Is APR?" or "How Does a Savings Account Work?" are designed to capture search queries from people looking for basic financial information. The more of these articles a site publishes, the more search traffic it can capture, and the more clicks it can drive to financial product partners.

Viewed through this lens, the AI tool made commercial sense: it could produce high volumes of simple explainer content at a fraction of the cost of paying human writers. The problem was that the content needed to be accurate, and the AI tool couldn't reliably deliver accuracy even on relatively simple topics.

The Disclosure Problem

Beyond the factual errors, the incident raised questions about transparency. CNET had published 77 AI-generated articles over roughly two months without making a clear, prominent disclosure to readers. The hover-to-reveal attribution - requiring readers to interact with the byline to discover the AI involvement - was not meaningfully different from hiding the information.

CNET's defense was that the "CNET Money Staff" byline was consistent with how the outlet handled other team-written content. But team-written content produced by humans and content produced by an AI tool are materially different things, and readers had a reasonable expectation of knowing which one they were reading.

Wired reported that CNET staffers were among those pushing back on the AI experiment. The outlet's journalists had not been centrally involved in the decision to deploy the AI tool, and some were uncomfortable having AI-generated content published alongside their own bylined work without clear differentiation. A unionization effort at CNET, which had begun before the AI rollout, gained momentum as staff came to see AI content generation as a potential threat to their roles.

The Pause and After

CNET paused AI-generated publishing in late January 2023. Guglielmo said the outlet would update its disclosure and citation practices and continue to explore using AI tools - she did not say the experiment was over, only that it was on hold while they figured out how to do it better.

The incident caught particular attention because of CNET's identity as a technology publication. A site that covered AI, reviewed tech products, and reported on the promises and pitfalls of automation had itself been caught using AI badly - producing inaccurate content, hiding how it was made, and getting caught by an outside publication rather than discovering the problems internally.

CNET's sister site, Bankrate, also used AI to produce content, and Futurism reported that those articles contained errors as well. Bankrate resumed publishing AI-written articles even after the CNET pause, and one of the first new pieces promptly contained an error - a detail that did not help the narrative that AI-assisted publishing was ready for production use.

The episode established a template that would recur throughout 2023 and into 2024: a media company turns to AI to cut costs, publishes the output without adequate review, gets caught when errors surface, issues a defensive statement suggesting AI and humans are equally fallible, pauses the program, and then quietly resumes some version of it later. CNET was not the last to run this playbook. It was just the first to get caught.
