Journalism Stories

14 disasters tagged #journalism

Nota shut down its AI local news network after it was caught copying local reporters

Apr 2026

Nota launched an 11-site local news network in 2025 with the usual "underserved communities" rhetoric and the less-usual decision to let AI-assisted workflows repurpose other people's reporting. By early April 2026, Axios Richmond and Poynter had documented widespread plagiarism, including lifted quotes, paraphrased reporting, and reused photos from local outlets. Nota fired one editor, took down the network, and signaled the sites were likely gone for good. The promised fix for news deserts lasted about as long as it took actual local reporters to notice their work had been stolen.

Facepalm by Publisher
Eleven local news sites shut down; copied work traced to at least 29 outlets and 53 journalists; public credibility collapse for Nota's local-news experiment
ai-content-generation, journalism, slop-the-presses (+1 more)
The New York Times dropped Alex Preston after an AI-assisted review copied a Guardian review

Mar 2026

A January 6, 2026 New York Times review of Jean-Baptiste Andrea's Watching Over Her was updated on March 30 with an editor's note acknowledging that it contained language and details similar to an earlier Guardian review. On March 31, reporting from The Guardian said the Times had cut ties with freelance reviewer Alex Preston after he admitted using an AI tool that pulled material from the earlier review into his draft. It was not a hallucination story. It was the equally useful reminder that AI-assisted writing can turn plagiarism into something a newsroom does by accident and publishes anyway.

Facepalm by Freelance reviewer
Published New York Times review carried unattributed language from a Guardian review; editor's note added; freelance relationship terminated; reputational damage for a flagship culture desk
ai-content-generation, journalism, slop-the-presses (+1 more)
Ars Technica fires senior AI reporter after AI tool fabricated quotes in published story

Feb 2026

Ars Technica retracted an article by senior AI reporter Benj Edwards after it contained fabricated quotations generated by an AI tool and attributed to a source who never said them. The publication acknowledged the incident as a "serious failure of our standards" and Edwards was subsequently fired. Edwards noted the irony on Bluesky: "The irony of an AI reporter being tripped up by AI hallucination is not lost on me."

Facepalm by Reporter
Published article contained fabricated quotes attributed to a real person; retraction issued; reporter terminated; reputational damage to a trusted tech publication
ai-hallucination, ai-content-generation, journalism (+2 more)
Washington Post launched AI podcast that failed its own quality tests at an 84% rate

Dec 2025

The Washington Post launched "Your Personal Podcast," an AI-generated audio news product, in December 2025 despite internal testing showing that between 68% and 84% of AI-generated scripts failed to meet the publication's editorial standards across three rounds of evaluation. The AI fabricated quotes from public figures, misattributed statements, mispronounced names, and inserted its own editorial commentary as if it were the Post's position. The internal review concluded that "further small prompt changes are unlikely to meaningfully improve outcomes without introducing more risk." The product team recommended launching anyway. Post editors revolted, with one writing in Slack that it was "truly astonishing that this was allowed to go forward at all."

Facepalm by Executive
Fabricated quotes published at scale under Washington Post branding; internal revolt from editorial staff; national media coverage of quality failures.
ai-content-generation, ai-hallucination, journalism (+2 more)
BBC/EBU study says AI news summaries fail ~half the time

Oct 2025

A BBC-led audit for the European Broadcasting Union put 2,700 news questions to AI assistants in 14 languages and found that Gemini, Copilot, ChatGPT, and Perplexity mangled 45% of the answers, usually by hallucinating facts or stripping out attribution. The consortium logged serious sourcing lapses in a third of responses, including 72% of Gemini replies, plus outdated or fabricated claims about public-policy news, reinforcing fears that AI assistants are siphoning audiences while distorting the journalism they quote.

Facepalm by AI Product
Public-service broadcasters warn that unreliable AI summaries erode trust in news and drive audiences away from verified outlets.
ai-assistant, ai-hallucination, journalism (+2 more)
Reporter fired after AI tool provided by her employer fabricated sources in front-page article

Jul 2025

Wisconsin State Journal reporter Audrey Korte was fired in July 2025 after publishing a front-page article about a downtown Madison development plan that contained factual errors and fabricated sources generated by an AI tool. The tool had been provided by the newspaper's parent company, Lee Enterprises, and was installed on employee computers. Korte said she used it for grammar and style editing, but it introduced false information she didn't catch before publication. The article was pulled, replaced with a re-reported version, and stamped with a disclaimer citing "unauthorized AI use" and "fabricated sources." Korte was terminated. She publicly accepted responsibility for not catching the errors but noted she had received no training on the tool that was already installed on her work computer.

Facepalm by Reporter
Front-page print article published with fabricated sources; reporter terminated; Lee Enterprises under scrutiny for deploying AI tools without training or clear policies.
ai-content-generation, ai-hallucination, journalism (+1 more)
Syndicated AI book list ran in major papers with made-up titles

May 2025

A freelance writer working for King Features Syndicate used AI to research a summer reading list for the Chicago Sun-Times and Philadelphia Inquirer. Of the fifteen books recommended, only five were real. The rest were hallucinated titles attributed to real authors like Isabel Allende and Delia Owens. The list ran in print in a 64-page special section before 404 Media, NPR, and others exposed the fabrications. Both newspapers issued corrections and statements distancing their newsrooms from the syndicated content.

Facepalm by Syndication/Editorial
Syndicated misinformation across multiple papers; reader trust impact; corrections issued.
journalism, ai-content-generation, ai-hallucination (+3 more)
LA Times had to pull AI "Insights" after it softened the Klan

Mar 2025

The Los Angeles Times launched an AI feature called "Insights" in March 2025 to label opinion pieces, summarize them, and generate an opposing viewpoint. It immediately attached itself to a Gustavo Arellano column about Anaheim's history with the Ku Klux Klan and produced language suggesting the 1920s Klan could be framed as a response to social change rather than as an explicitly hate-driven movement. The feature was removed from that article within a day. The newspaper had managed to bolt an automated both-sides machine onto a hate group history piece and act surprised when that went badly.

Facepalm by Executive
Public backlash; reputational damage to the paper; newsroom distrust of the feature; the Klan article's framing overshadowed by the AI add-on
ai-content-generation, journalism, brand-damage (+1 more)
Apple pulled AI news summaries after fake BBC headlines

Jan 2025

Apple Intelligence's notification-summary feature spent late 2024 turning news alerts into fiction with excellent lock-screen placement. In the most widely cited example, it generated a false BBC alert claiming Luigi Mangione had shot himself. The BBC complained that Apple was attaching fabricated claims to its reporting, other publishers raised similar concerns, and Apple responded in January 2025 by disabling notification summaries for News & Entertainment apps in iOS 18.3 while it reworked the feature.

Facepalm by Consumer AI feature
False breaking-news alerts on iPhones, publisher trust damage, and a public rollback by Apple.
ai-hallucination, journalism, product-failure (+2 more)
Cody Enterprise reporter resigned after AI fabricated quotes from real people

Aug 2024

The Cody Enterprise was forced into public apologies and corrections in August 2024 after reporter Aaron Pelczar resigned amid evidence that an AI tool he used to help write stories had inserted fabricated quotations. A competing reporter at the Powell Tribune spotted robotic phrasing, suspiciously polished source quotes, and one article that bizarrely ended by explaining the inverted pyramid style of news writing. The resulting review found seven stories that included invented or altered quotes from seven people, including Wyoming Gov. Mark Gordon. The paper removed many of the quotes, issued corrections, and then brought in AI detection and a newsroom AI policy after learning, a little late, that generative text tools are not interchangeable with reporting.

Facepalm by Reporter
Seven stories tainted by fabricated or altered quotes; public apologies and corrections; reporter resigned; local newsroom credibility damaged.
ai-content-generation, ai-hallucination, journalism (+2 more)
Sports Illustrated published product reviews under fake AI-generated authors

Nov 2023

Futurism reported in November 2023 that Sports Illustrated had published product reviews under fake author names such as "Drew Ortiz" and "Sora Tanaka," whose headshots were traced to AI-generated portrait marketplaces. When questioned, SI deleted the profiles without explanation. The articles came from third-party content partner AdVon Commerce. SI said AdVon used pen names without authorization and terminated the partnership. The SI union demanded answers. Within weeks, Arena Group - SI's parent company - fired CEO Ross Levinsohn and three other executives.

Facepalm by Commerce Editorial
Content takedowns; partner terminated; trust erosion
ai-content-generation, brand-damage, journalism (+2 more)
Microsoft’s AI poll on woman’s death sparks outrage

Oct 2023

In late October 2023, Microsoft Start republished a Guardian article about the death of Sydney water polo instructor Lilie James and auto-attached an AI-generated "Insights" poll asking readers, "What do you think is the reason behind the woman's death?" - with options of murder, accident, or suicide. Readers blamed the Guardian's journalist directly, with some demanding the writer be fired, unaware the poll was Microsoft's AI. Guardian CEO Anna Bateson wrote to Microsoft President Brad Smith calling the poll an inappropriate use of generative AI. Microsoft deactivated all AI-generated polls on news articles and launched an investigation.

Facepalm by Product Manager
Feature disabled platform-wide; reputational damage with publishers.
ai-content-generation, brand-damage, journalism (+1 more)
Gannett pauses AI sports recaps after mockery

Aug 2023

In August 2023, Gannett - the largest newspaper chain in the United States - deployed an AI service called LedeAI to auto-generate high school sports recaps for the Columbus Dispatch and other papers. The articles went viral on social media for their robotic phrasing, missing player names, and bizarre constructions like "close encounter of the athletic kind." Several articles required corrections appended with notes about "errors in coding, programming or style." Gannett paused the experiment and said it would add "hundreds of reporting jobs" alongside AI tools, though the connection between the two claims was unclear.

Facepalm by Executive
Chain-wide pause of AI copy; reputational hit in local markets.
ai-content-generation, ai-hallucination, brand-damage (+2 more)
CNET mass-corrects AI-written finance explainers

Jan 2023

Starting in November 2022, CNET quietly published 77 financial explainer articles written by an AI tool under the byline "CNET Money Staff." Readers had to hover over the byline to learn the articles were produced "using automation technology." In January 2023, Futurism broke the story, and a follow-up identified factual errors in a compound interest article, prompting a full audit. CNET editor-in-chief Connie Guglielmo confirmed corrections were issued on 41 of the 77 articles - more than half - including some she described as "substantial." CNET paused AI-generated publishing and updated its disclosure practices, though Guglielmo said the outlet intended to continue using AI tools.

Facepalm by Executive
Large corrections; credibility hit; policy changes on AI usage.
ai-content-generation, ai-hallucination, brand-damage (+3 more)