Reporter fired after AI tool provided by her employer fabricated sources in front-page article
Wisconsin State Journal reporter Audrey Korte was fired in July 2025 after publishing a front-page article about a downtown Madison development plan that contained factual errors and fabricated sources generated by an AI tool. The tool had been provided by the newspaper's parent company, Lee Enterprises, and installed on employee computers. Korte said she used it for grammar and style editing, but it introduced false information she didn't catch before publication. The article was pulled, replaced with a re-reported version, and stamped with a disclaimer citing "unauthorized AI use" and "fabricated sources." Korte publicly accepted responsibility for not catching the errors but noted she had received no training on the tool her employer had installed on her work computer.
A reporter used an AI tool that was already installed on her work computer. The tool introduced fabricated sources into the article she was editing. The fabrications got published on the front page. The reporter got fired. The company that installed the tool on her computer called it "unauthorized AI use."
That sequence of events, which played out at the Wisconsin State Journal in July 2025, captures something specific about how newsrooms have handled AI adoption: deploy the tools broadly, provide minimal training, and then assign individual blame when the predictable happens.
The article
Audrey Korte, a reporter at the Wisconsin State Journal, published a front-page article in the Sunday print edition about a development plan for downtown Madison. The article appeared to readers as a standard piece of local government reporting - the kind of story that requires talking to city planners, reviewing documents, and synthesizing the details into something readable.
After publication, problems surfaced. The article contained factual inaccuracies and, more critically, fabricated sources - quotations or attributions that didn't correspond to real people or real statements. The Wisconsin State Journal pulled the article, replaced it with a version that had been re-reported from scratch, and attached a public disclaimer noting the original had been removed due to "unauthorized AI use" and "fabricated sources."
The AI tool
Korte addressed the incident publicly in September 2025, two months after her termination. She explained that she had used an AI program that was provided by Lee Enterprises, the newspaper's parent company, and was installed on employee work computers. She said she used it with the intention of improving grammar and style - treating it as an advanced editing tool rather than a content generation system.
The AI tool did more than fix grammar. It introduced information - fabricated sources, specifically - that was not in Korte's original draft. Whether the tool was prompted to generate content or whether it spontaneously added fabricated information during what Korte understood to be an editing pass is not entirely clear from her public account. Either way, the AI inserted false information into the article, and Korte did not catch it during her review before filing.
Korte accepted responsibility for failing to verify the AI's output. She also noted that she had received no training on the AI tool that was installed on her computer. The software was available. Nobody told her how to use it safely, what its limitations were, or what risks it posed to journalistic integrity.
The "unauthorized" framing
The Wisconsin State Journal's disclaimer characterized the incident as "unauthorized AI use." This framing is worth examining. The AI tool was provided by Lee Enterprises. It was installed on company computers. It was available to reporters. At what point does a tool installed by your employer on your work computer become "unauthorized" when you use it?
Lee Enterprises, like many media companies, adopted AI tools during the rapid industry-wide push to integrate generative AI into newsroom workflows. The company apparently made the tools available without establishing clear policies about when and how they could be used, what constituted acceptable use versus unauthorized use, or what safeguards reporters should apply when working with AI-generated output.
When the tool produced fabricated content that made it to print, the company categorized its use as unauthorized and terminated the employee. Whether any of the unnamed editors who reviewed Korte's article before it reached the front page bore any responsibility was not publicly addressed.
The editorial process gap
A front-page article at a major regional newspaper goes through editorial review before publication. Copy editors, section editors, and managing editors all have opportunities to catch problems before an article reaches print. The fabricated sources in Korte's article - which included attributions to people or statements that didn't exist - survived this editorial process.
This suggests one of two things: either the editorial review was cursory enough that fabricated sources weren't detected, or the fabrications were plausible enough that editors didn't question them. Both are concerning. AI-generated text is specifically good at producing plausible-sounding attributions, quotes, and citations that don't correspond to reality. It creates sources that sound like they could exist. Catching AI fabrications requires active verification - checking whether the cited person exists, whether they actually said what's attributed to them, whether the facts check out. A standard editorial review that focuses on clarity, style, and coherence won't catch fabrications that are well-constructed.
This is the same problem that has surfaced in legal filings (fabricated case citations that sound real), academic papers (fabricated references to papers that don't exist), and other contexts where AI-generated content is integrated into workflows that don't include verification of factual claims. The editorial process at the Wisconsin State Journal was designed for human-written content that contained real reporting. It was not designed for content that might include machine-generated fabrications that look like real reporting.
The pattern across newsrooms
The Wisconsin State Journal incident joins a growing collection of newsroom AI failures. CNET published AI-written articles with factual errors in 2023. Gannett's AI-generated high school sports recaps, published that same year, were incoherent. The Washington Post launched AI-generated podcasts riddled with errors in late 2025. Ars Technica retracted an article after AI-fabricated quotes were accidentally included in March 2026.
Each of these incidents has a similar shape: a media organization adopted AI tools, something went wrong with the output, the error reached the public, and the organization was left explaining how its editorial processes - the thing that's supposed to distinguish professional journalism from everything else - failed to catch the problem.
The Wisconsin State Journal case adds a dimension the others lack: the company deployed the tool, provided no training, and then fired the employee for using it. Whether this framing would hold up in an employment dispute is an open question, but it illustrates a gap between how organizations adopt AI tools (enthusiastically, broadly) and how they handle the consequences when those tools fail (by pointing at the individual who was closest to the failure).
The training gap
Korte's public account emphasized a point that applies far beyond journalism: the AI tool was available but the training wasn't. She was given access to a tool capable of hallucinating entire sources and inserting them into professional work product, without any guidance on the tool's propensity to do exactly that.
This training gap exists across industries. AI coding tools are installed on developer machines without security training. AI writing tools are deployed in newsrooms without journalism-specific guidance. AI analysis tools are given to analysts without instruction on how to verify AI-generated outputs. The tools ship fast. The organizational knowledge about how to use them safely ships slowly, if at all.
In journalism, the stakes of that gap are particularly visible because the output is public. A fabricated source in a front-page newspaper article is caught by readers, competitors, and the internet at large. In other fields, fabricated AI outputs may circulate internally for months before anyone checks whether they're real.
Korte lost her job over a failure that involved a tool her employer provided, a training program her employer didn't provide, and an editorial review process that didn't catch the problem. She accepted responsibility for her part. Whether the company accepted responsibility for its part is a question the public disclaimer - which called the use "unauthorized" rather than "untrained" or "unsupported" - answers plainly enough.