Slop the Presses Stories

8 disasters tagged #slop-the-presses


Ars Technica fires senior AI reporter after AI tool fabricated quotes in published story

Feb 2026

Ars Technica retracted an article by senior AI reporter Benj Edwards after it contained fabricated quotations generated by an AI tool and attributed to a source who never said them. The publication acknowledged the incident as a "serious failure of our standards" and Edwards was subsequently fired. Edwards noted the irony on Bluesky: "The irony of an AI reporter being tripped up by AI hallucination is not lost on me."

Facepalm by Reporter
Published article contained fabricated quotes attributed to a real person; retraction issued; reporter terminated; reputational damage to a trusted tech publication.
ai-hallucination, ai-content-generation, journalism +2 more

Washington Post launched AI podcast that failed its own quality tests at an 84% rate

Dec 2025

The Washington Post launched "Your Personal Podcast," an AI-generated audio news product, in December 2025 despite internal testing showing that between 68% and 84% of AI-generated scripts failed to meet the publication's editorial standards across three rounds of evaluation. The AI fabricated quotes from public figures, misattributed statements, mispronounced names, and inserted its own editorial commentary as if it were the Post's position. The internal review concluded that "further small prompt changes are unlikely to meaningfully improve outcomes without introducing more risk." The product team recommended launching anyway. Post editors revolted, with one writing in Slack that it was "truly astonishing that this was allowed to go forward at all."

Facepalm by Executive
Fabricated quotes published at scale under Washington Post branding; internal revolt from editorial staff; national media coverage of quality failures.
ai-content-generation, ai-hallucination, journalism +2 more

BBC/EBU study says AI news summaries fail ~half the time

Oct 2025

A BBC-led audit for the European Broadcasting Union posed 2,700 news questions in 14 languages and found that Gemini, Copilot, ChatGPT, and Perplexity mangled 45% of the answers, usually by hallucinating facts or stripping out attribution. The consortium logged serious sourcing lapses in roughly a third of responses, including 72% of Gemini replies, plus outdated or fabricated claims about public-policy news, reinforcing fears that AI assistants are siphoning audiences while distorting the journalism they quote.

Facepalm by AI Product
Public-service broadcasters warn that unreliable AI summaries erode trust in news and drive audiences away from verified outlets.
ai-assistant, ai-hallucination, journalism +2 more

Syndicated AI book list ran in major papers with made-up titles

May 2025

A freelance writer working for King Features Syndicate used AI to research a summer reading list for the Chicago Sun-Times and Philadelphia Inquirer. Of the fifteen books recommended, only five were real. The rest were hallucinated titles attributed to real authors like Isabel Allende and Delia Owens. The list ran in print in a 64-page special section before 404 Media, NPR, and others exposed the fabrications. Both newspapers issued corrections and statements distancing their newsrooms from the syndicated content.

Facepalm by Syndication/Editorial
Syndicated misinformation across multiple papers; reader trust impact; corrections issued.
journalism, ai-content-generation, ai-hallucination +3 more

Sports Illustrated: Fake-Looking Authors and AI Content Backlash

Nov 2023

Futurism reported in November 2023 that Sports Illustrated had published product reviews under fake author names such as "Drew Ortiz" and "Sora Tanaka," whose headshots were traced to AI-generated portrait marketplaces. When questioned, SI deleted the profiles without explanation. The articles came from third-party content partner AdVon Commerce. SI said AdVon used pen names without authorization and terminated the partnership. The SI union demanded answers. Within weeks, Arena Group - SI's parent company - fired CEO Ross Levinsohn and three other executives.

Facepalm by Commerce Editorial
Content takedowns; partner terminated; trust erosion.
ai-content-generation, brand-damage, journalism +2 more

Microsoft’s AI poll on woman’s death sparks outrage

Oct 2023

In late October 2023, Microsoft Start republished a Guardian article about the death of Sydney water polo instructor Lilie James and auto-attached an AI-generated "Insights" poll asking readers, "What do you think is the reason behind the woman's death?" - with options of murder, accident, or suicide. Readers blamed the Guardian's journalist directly, with some demanding the writer be fired, unaware the poll was Microsoft's AI. Guardian CEO Anna Bateson wrote to Microsoft President Brad Smith calling the poll an inappropriate use of generative AI. Microsoft deactivated all AI-generated polls on news articles and launched an investigation.

Facepalm by Product Manager
Feature disabled platform-wide; reputational damage with publishers.
ai-content-generation, brand-damage, journalism +1 more

Gannett pauses AI sports recaps after mockery

Aug 2023

In August 2023, Gannett - the largest newspaper chain in the United States - deployed an AI service called LedeAI to auto-generate high school sports recaps for the Columbus Dispatch and other papers. The articles went viral on social media for their robotic phrasing, missing player names, and bizarre constructions like "close encounter of the athletic kind." Several required corrections appended with notes about "errors in coding, programming or style." Gannett paused the experiment and said it would add "hundreds of reporting jobs" alongside AI tools, though it never explained how the two claims fit together.

Facepalm by Executive
Chain-wide pause of AI copy; reputational hit in local markets.
ai-content-generation, ai-hallucination, brand-damage +2 more

CNET mass-corrects AI-written finance explainers

Jan 2023

Starting in November 2022, CNET quietly published 77 financial explainer articles written by an AI tool under the byline "CNET Money Staff." Readers had to hover over the byline to learn the articles were produced "using automation technology." In January 2023, Futurism broke the story, and a follow-up identified factual errors in a compound interest article, prompting a full audit. CNET editor-in-chief Connie Guglielmo confirmed corrections were issued on 41 of the 77 articles - more than half - including some she described as "substantial." CNET paused AI-generated publishing and updated its disclosure practices, though Guglielmo said the outlet intended to continue using AI tools.

Facepalm by Executive
Large corrections; credibility hit; policy changes on AI usage.
ai-content-generation, ai-hallucination, brand-damage +3 more