Journalism Stories
14 disasters tagged #journalism
Nota shut down its AI local news network after it was caught copying local reporters
Nota launched an 11-site local news network in 2025 with the usual "underserved communities" rhetoric and the less-usual decision to let AI-assisted workflows repurpose other people's reporting. By early April 2026, Axios Richmond and Poynter had documented widespread plagiarism, including lifted quotes, paraphrased reporting, and reused photos from local outlets. Nota fired one editor, took down the network, and signaled the sites were likely gone for good. The promised fix for news deserts lasted about as long as it took actual local reporters to notice their work had been stolen.
The New York Times dropped Alex Preston after an AI-assisted review copied a Guardian review
A January 6, 2026 New York Times review of Jean-Baptiste Andrea's Watching Over Her was updated on March 30 with an editor's note acknowledging that it contained language and details similar to an earlier Guardian review. On March 31, reporting from The Guardian said the Times had cut ties with freelance reviewer Alex Preston after he admitted using an AI tool that pulled material from the earlier review into his draft. It was not a hallucination story. It was the equally useful reminder that AI-assisted writing can turn plagiarism into something a newsroom does by accident and publishes anyway.
Ars Technica fires senior AI reporter after AI tool fabricated quotes in published story
Ars Technica retracted an article by senior AI reporter Benj Edwards after it contained fabricated quotations generated by an AI tool and attributed to a source who never said them. The publication acknowledged the incident as a "serious failure of our standards" and Edwards was subsequently fired. Edwards noted the irony on Bluesky: "The irony of an AI reporter being tripped up by AI hallucination is not lost on me."
Washington Post launched AI podcast that failed its own quality tests at an 84% rate
The Washington Post launched "Your Personal Podcast," an AI-generated audio news product, in December 2025 despite internal testing showing that between 68% and 84% of AI-generated scripts failed to meet the publication's editorial standards across three rounds of evaluation. The AI fabricated quotes from public figures, misattributed statements, mispronounced names, and inserted its own editorial commentary as if it were the Post's position. The internal review concluded that "further small prompt changes are unlikely to meaningfully improve outcomes without introducing more risk." The product team recommended launching anyway. Post editors revolted, with one writing in Slack that it was "truly astonishing that this was allowed to go forward at all."
BBC/EBU study says AI news summaries fail ~half the time
An audit led by the BBC and coordinated with the European Broadcasting Union put 2,700 news questions to Gemini, Copilot, ChatGPT, and Perplexity in 14 languages and found the assistants mangled 45% of the answers, usually by hallucinating facts or stripping out attribution. The consortium logged serious sourcing lapses in a third of responses, including 72% of Gemini replies, plus outdated or fabricated claims about public-policy news, reinforcing fears that AI assistants are siphoning audiences while distorting the journalism they quote.
Reporter fired after AI tool provided by her employer fabricated sources in front-page article
Wisconsin State Journal reporter Audrey Korte was fired in July 2025 after publishing a front-page article about a downtown Madison development plan that contained factual errors and fabricated sources generated by an AI tool. The tool had been provided by the newspaper's parent company, Lee Enterprises, and was installed on employee computers. Korte said she used it for grammar and style editing, but it introduced false information she didn't catch before publication. The article was pulled, replaced with a re-reported version, and stamped with a disclaimer citing "unauthorized AI use" and "fabricated sources." Korte was terminated. She publicly accepted responsibility for not catching the errors but noted she had received no training on the tool that was already installed on her work computer.
Syndicated AI book list ran in major papers with made-up titles
A freelance writer working for King Features Syndicate used AI to research a summer reading list for the Chicago Sun-Times and Philadelphia Inquirer. Of the fifteen books recommended, only five were real. The rest were hallucinated titles attributed to real authors like Isabel Allende and Delia Owens. The list ran in print in a 64-page special section before 404 Media, NPR, and others exposed the fabrications. Both newspapers issued corrections and statements distancing their newsrooms from the syndicated content.
LA Times had to pull AI "Insights" after it softened the Klan
The Los Angeles Times launched an AI feature called "Insights" in March 2025 to label opinion pieces, summarize them, and generate an opposing viewpoint. It immediately attached itself to a Gustavo Arellano column about Anaheim's history with the Ku Klux Klan and produced language suggesting the 1920s Klan could be framed as a response to social change rather than as an explicitly hate-driven movement. The feature was removed from that article within a day. The newspaper had managed to bolt an automated both-sides machine onto a hate group history piece and act surprised when that went badly.
Apple pulled AI news summaries after fake BBC headlines
Apple Intelligence's notification-summary feature spent late 2024 turning news alerts into fiction with excellent lock-screen placement. In the most widely cited example, it generated a false BBC alert claiming Luigi Mangione had shot himself. The BBC complained that Apple was attaching fabricated claims to its reporting, other publishers raised similar concerns, and Apple responded in January 2025 by disabling notification summaries for News & Entertainment apps in iOS 18.3 while it reworked the feature.
Cody Enterprise reporter resigned after AI fabricated quotes from real people
The Cody Enterprise was forced into public apologies and corrections in August 2024 after reporter Aaron Pelczar resigned amid evidence that an AI tool he used to help write stories had inserted fabricated quotations. A competing reporter at the Powell Tribune spotted robotic phrasing, suspiciously polished source quotes, and one article that bizarrely ended by explaining the inverted pyramid style of news writing. The resulting review found seven stories with invented or altered quotes attributed to seven people, including Wyoming Gov. Mark Gordon. The paper removed many of the quotes, issued corrections, and adopted AI-detection measures and a formal AI policy after learning, a little late, that generative text tools are not interchangeable with reporting.
Sports Illustrated published product reviews under fake AI-generated author personas
Futurism reported in November 2023 that Sports Illustrated had published product reviews under fake author names such as "Drew Ortiz" and "Sora Tanaka," whose headshots were traced to AI-generated portrait marketplaces. When questioned, SI deleted the profiles without explanation. The articles came from third-party content partner AdVon Commerce. SI said AdVon used pen names without authorization and terminated the partnership. The SI union demanded answers. Within weeks, Arena Group - SI's parent company - fired CEO Ross Levinsohn and three other executives.
Microsoft’s AI poll on woman’s death sparks outrage
In late October 2023, Microsoft Start republished a Guardian article about the death of Sydney water polo instructor Lilie James and auto-attached an AI-generated "Insights" poll asking readers, "What do you think is the reason behind the woman's death?" - with options of murder, accident, or suicide. Readers, unaware the poll was Microsoft's AI, blamed the Guardian's journalist directly, with some demanding the writer be fired. Guardian CEO Anna Bateson wrote to Microsoft President Brad Smith calling the poll an inappropriate use of generative AI. Microsoft deactivated all AI-generated polls on news articles and launched an investigation.
Gannett pauses AI sports recaps after mockery
In August 2023, Gannett - the largest newspaper chain in the United States - deployed an AI service called LedeAI to auto-generate high school sports recaps for the Columbus Dispatch and other papers. The articles went viral on social media for their robotic phrasing, missing player names, and bizarre constructions like "close encounter of the athletic kind." Several articles required corrections appended with notes about "errors in coding, programming or style." Gannett paused the experiment and said it would add "hundreds of reporting jobs" alongside AI tools, though the connection between the two claims was unclear.
CNET mass-corrects AI-written finance explainers
Starting in November 2022, CNET quietly published 77 financial explainer articles written by an AI tool under the byline "CNET Money Staff." Readers had to hover over the byline to learn the articles were produced "using automation technology." In January 2023, Futurism broke the story, and a follow-up identified factual errors in a compound interest article, prompting a full audit. CNET editor-in-chief Connie Guglielmo confirmed corrections were issued on 41 of the 77 articles - more than half - including some she described as "substantial." CNET paused AI-generated publishing and updated its disclosure practices, though Guglielmo said the outlet intended to continue using AI tools.