AI Content Generation Stories
19 disasters tagged #ai-content-generation
Nota shut down its AI local news network after it was caught copying local reporters
Nota launched an 11-site local news network in 2025 with the usual "underserved communities" rhetoric and the less-usual decision to let AI-assisted workflows repurpose other people's reporting. By early April 2026, Axios Richmond and Poynter had documented widespread plagiarism, including lifted quotes, paraphrased reporting, and reused photos from local outlets. Nota fired one editor, took down the network, and signaled the sites were likely gone for good. The promised fix for news deserts lasted about as long as it took actual local reporters to notice their work had been stolen.
The New York Times dropped Alex Preston after an AI-assisted review copied a Guardian review
A January 6, 2026 New York Times review of Jean-Baptiste Andrea's Watching Over Her was updated on March 30 with an editor's note acknowledging that it contained language and details similar to an earlier Guardian review. On March 31, The Guardian reported that the Times had cut ties with freelance reviewer Alex Preston after he admitted using an AI tool that pulled material from the earlier review into his draft. It was not a hallucination story; it was the equally useful reminder that AI-assisted writing can turn plagiarism into something a newsroom does by accident and publishes anyway.
Ars Technica fires senior AI reporter after AI tool fabricated quotes in published story
Ars Technica retracted an article by senior AI reporter Benj Edwards after it contained fabricated quotations generated by an AI tool and attributed to a source who never said them. The publication acknowledged the incident as a "serious failure of our standards" and Edwards was subsequently fired. Edwards noted the irony on Bluesky: "The irony of an AI reporter being tripped up by AI hallucination is not lost on me."
Guardian investigation finds Google AI Overviews gave dangerous health misinformation
A Guardian investigation found Google's AI Overviews displayed false and misleading health information across multiple medical topics. AI summaries gave incorrect liver function test ranges sourced from an Indian hospital chain without accounting for nationality, sex, or age. The feature advised pancreatic cancer patients to avoid high-fat foods, which experts said could increase mortality risk. Stanford and MIT researchers called the absence of prominent disclaimers a critical danger. Google removed some AI Overviews for health queries after the investigation, but many remained active.
AI police report claims officer shape-shifted into a frog
Heber City Police Department's Axon Draft One AI report tool transcribed background dialogue from The Princess and the Frog playing on a television into an official police report, claiming an officer had shape-shifted into a frog while conducting police activity. The incident exposed design flaws in AI report-writing tools that process all body camera audio without distinguishing between relevant police interactions and ambient background noise.
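Draft One's internals are not public, so the following is a purely hypothetical sketch of the safeguard this failure implies is missing: dropping transcript segments that cannot be attributed to an on-scene speaker before any report text is drafted. The field names and confidence threshold are invented for illustration.

```typescript
// Hypothetical sketch; Axon Draft One's actual pipeline is not public.
interface TranscriptSegment {
  speaker: string;     // diarization label, e.g. "officer", "civilian", "unattributed"
  text: string;
  confidence: number;  // attribution confidence, 0..1 (threshold below is illustrative)
}

// Only segments confidently tied to an on-scene speaker should reach the
// report drafter; TV dialogue and other ambient speech would land in
// "unattributed" and be dropped rather than narrated as fact.
function reportableSegments(
  segments: TranscriptSegment[],
  minConfidence = 0.8,
): TranscriptSegment[] {
  return segments.filter(
    (s) => s.speaker !== "unattributed" && s.confidence >= minConfidence,
  );
}

// Example: the movie line never reaches the drafting stage.
const segments: TranscriptSegment[] = [
  { speaker: "officer", text: "Dispatch, I'm on scene.", confidence: 0.97 },
  { speaker: "unattributed", text: "I'm a frog!", confidence: 0.41 },
];
console.log(reportableSegments(segments)); // keeps only the officer's line
```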
Amazon pulled Prime Video's AI recaps after Fallout errors
Amazon launched Prime Video "Video Recaps" as a beta generative-AI feature meant to help viewers catch up between seasons. A recap for Fallout instead got basic plot points wrong, including placing one of The Ghoul's flashbacks in "1950s America" rather than in the show's retro-futuristic 2077, and misdescribing a key scene with Lucy. Prime Video then pulled the recap feature from the shows in the test program, which is not ideal for a tool whose entire job is remembering the plot.
Washington Post launched AI podcast that failed its own quality tests at rates up to 84%
The Washington Post launched "Your Personal Podcast," an AI-generated audio news product, in December 2025 despite internal testing showing that between 68% and 84% of AI-generated scripts failed to meet the publication's editorial standards across three rounds of evaluation. The AI fabricated quotes from public figures, misattributed statements, mispronounced names, and inserted its own editorial commentary as if it were the Post's position. The internal review concluded that "further small prompt changes are unlikely to meaningfully improve outcomes without introducing more risk." The product team recommended launching anyway. Post editors revolted, with one writing in Slack that it was "truly astonishing that this was allowed to go forward at all."
Deloitte to refund Australian government after AI-generated report
Deloitte Australia agreed to partially refund a $440,000 contract after admitting its welfare compliance review for the Department of Employment and Workplace Relations contained fabricated academic citations and a fictitious judicial quote generated by Azure OpenAI GPT-4o. University of Sydney researcher Christopher Rudge found the revised report introduced even more hallucinated references than the original.
Anthropic agrees to $1.5B payout over pirated books
Anthropic accepted a $1.5 billion settlement with authors who said the Claude team scraped pirate e-book sites to train its chatbot. The deal pays roughly $3,000 per book across 500,000 works, heads off a December trial, and forces one of the richest AI startups to bankroll the writing community it previously treated as free training data.
AI-generated npm package stole Solana wallets
A malicious npm package called @kodane/patch-manager, apparently generated using Anthropic's Claude, posed as a legitimate Node.js utility while hiding a Solana wallet drainer in its post-install script. The package accumulated over 1,500 downloads before npm removed it on July 28, 2025, draining cryptocurrency funds from developers for whom simply installing it was enough: the payload ran automatically, no further action required.
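The delivery mechanism deserves spelling out: npm runs a package's preinstall, install, and postinstall scripts automatically during npm install, so a hostile postinstall entry executes with the developer's privileges before they have run a single line of the library. Below is a minimal sketch of a pre-install audit, assuming Node 18+ (for the global fetch) and the public npm registry API; the package name in the usage line is a placeholder, and this is illustrative, not how the malicious package was actually found.

```typescript
// Pre-install audit sketch: look up a package's manifest on the npm
// registry and flag any lifecycle scripts that `npm install` would run
// automatically. Illustrative only.
const LIFECYCLE_HOOKS = ["preinstall", "install", "postinstall"] as const;

async function auditInstallScripts(name: string, version = "latest"): Promise<void> {
  const res = await fetch(`https://registry.npmjs.org/${name}/${version}`);
  if (!res.ok) throw new Error(`registry lookup failed: HTTP ${res.status}`);
  const manifest = (await res.json()) as { scripts?: Record<string, string> };

  const hooks = LIFECYCLE_HOOKS.filter((h) => manifest.scripts?.[h]);
  if (hooks.length === 0) {
    console.log(`${name}@${version}: no install-time scripts.`);
    return;
  }
  console.warn(`${name}@${version} would run at install time:`);
  for (const h of hooks) console.warn(`  ${h}: ${manifest.scripts![h]}`);
  console.warn("Review first, or install with `npm install --ignore-scripts`.");
}

// Usage (placeholder package name):
auditInstallScripts("some-package").catch(console.error);
```

The --ignore-scripts flag disables every lifecycle script for that install, which also breaks packages that legitimately build native code at install time, so it is a review aid rather than a blanket fix.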
Reporter fired after AI tool provided by her employer fabricated sources in front-page article
Wisconsin State Journal reporter Audrey Korte was fired in July 2025 after publishing a front-page article about a downtown Madison development plan that contained factual errors and fabricated sources generated by an AI tool. The tool had been provided by the newspaper's parent company, Lee Enterprises, and was installed on employee computers. Korte said she used it for grammar and style editing, but it introduced false information she didn't catch before publication. The article was pulled, replaced with a re-reported version, and stamped with a disclaimer citing "unauthorized AI use" and "fabricated sources." Korte publicly accepted responsibility for not catching the errors but noted she had never been trained on the tool her employer had put in front of her.
Syndicated AI book list ran in major papers with made-up titles
A freelance writer working for King Features Syndicate used AI to research a summer reading list for the Chicago Sun-Times and Philadelphia Inquirer. Of the fifteen books recommended, only five were real. The rest were hallucinated titles attributed to real authors like Isabel Allende and Delia Owens. The list ran in print in a 64-page special section before 404 Media, NPR, and others exposed the fabrications. Both newspapers issued corrections and statements distancing their newsrooms from the syndicated content.
California's failed bar exam included AI-drafted questions
The State Bar of California disclosed in April 2025 that 23 scored multiple-choice questions on its already troubled February bar exam were developed with AI assistance by its psychometric vendor, ACS Ventures. Test-takers had already reported crashes, lag, copy-paste failures, and lost answers. Then the bar admitted that some questions in this licensing exam for future lawyers had been drafted with AI, reviewed by the same outside vendor, and used anyway. The bar asked the California Supreme Court for score relief, while legal academics described the admission as staggering.
LA Times had to pull AI "Insights" after it softened the Klan
The Los Angeles Times launched an AI feature called "Insights" in March 2025 to label opinion pieces, summarize them, and generate an opposing viewpoint. It immediately attached itself to a Gustavo Arellano column about Anaheim's history with the Ku Klux Klan and produced language suggesting the 1920s Klan could be framed as a response to social change rather than as an explicitly hate-driven movement. The feature was removed from that article within a day. The newspaper had managed to bolt an automated both-sides machine onto a hate group history piece and act surprised when that went badly.
Cody Enterprise reporter resigned after AI fabricated quotes from real people
The Cody Enterprise was forced into public apologies and corrections in August 2024 after reporter Aaron Pelczar resigned amid evidence that an AI tool he used to help write stories had inserted fabricated quotations. A competing reporter at the Powell Tribune spotted robotic phrasing, suspiciously polished source quotes, and one article that bizarrely ended by explaining the inverted pyramid style of news writing. The resulting review found seven stories that included invented or altered quotes from seven people, including Wyoming Gov. Mark Gordon. The paper removed many of the quotes, issued corrections, and then adopted an AI policy and detection measures after learning, a little late, that generative text tools are not interchangeable with reporting.
Sports Illustrated published product reviews under fake AI-generated authors
Futurism reported in November 2023 that Sports Illustrated had published product reviews under fake author names such as "Drew Ortiz" and "Sora Tanaka," whose headshots were traced to AI-generated portrait marketplaces. When questioned, SI deleted the profiles without explanation. The articles came from third-party content partner AdVon Commerce. SI said AdVon used pen names without authorization and terminated the partnership. The SI union demanded answers. Within weeks, Arena Group - SI's parent company - fired CEO Ross Levinsohn and three other executives.
Microsoft’s AI poll on woman’s death sparks outrage
In late October 2023, Microsoft Start republished a Guardian article about the death of Sydney water polo instructor Lilie James and auto-attached an AI-generated "Insights" poll asking readers, "What do you think is the reason behind the woman's death?" - with options of murder, accident, or suicide. Readers blamed the Guardian's journalist directly, with some demanding the writer be fired, unaware the poll was Microsoft's AI. Guardian CEO Anna Bateson wrote to Microsoft President Brad Smith calling the poll an inappropriate use of generative AI. Microsoft deactivated all AI-generated polls on news articles and launched an investigation.
Gannett pauses AI sports recaps after mockery
In August 2023, Gannett - the largest newspaper chain in the United States - deployed an AI service called LedeAI to auto-generate high school sports recaps for the Columbus Dispatch and other papers. The articles went viral on social media for their robotic phrasing, missing player names, and bizarre constructions like "close encounter of the athletic kind." Several articles required corrections appended with notes about "errors in coding, programming or style." Gannett paused the experiment and said it would add "hundreds of reporting jobs" alongside AI tools, though the connection between the two claims was unclear.
CNET mass-corrects AI-written finance explainers
Starting in November 2022, CNET quietly published 77 financial explainer articles written by an AI tool under the byline "CNET Money Staff." Readers had to hover over the byline to learn the articles were produced "using automation technology." In January 2023, Futurism broke the story, and a follow-up identified factual errors in a compound interest article, prompting a full audit. CNET editor-in-chief Connie Guglielmo confirmed corrections were issued on 41 of the 77 articles - more than half - including some she described as "substantial." CNET paused AI-generated publishing and updated its disclosure practices, though Guglielmo said the outlet intended to continue using AI tools.
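Compound interest is simple enough to sanity-check by hand, which makes shipping errors in it especially avoidable: after t years at annual rate r, a principal P grows to P(1+r)^t, and the interest earned is that balance minus the principal, not the whole balance. A minimal check with illustrative figures (not asserted to be CNET's exact example):

```typescript
// Compound interest sanity check with illustrative figures: interest
// earned is the final balance minus the principal, not the full balance.
function compoundBalance(principal: number, annualRate: number, years: number): number {
  return principal * Math.pow(1 + annualRate, years);
}

const principal = 10_000;
const rate = 0.03; // 3% APY, compounded annually
const balance = compoundBalance(principal, rate, 1);

console.log(`Balance after 1 year: $${balance.toFixed(2)}`);          // $10300.00
console.log(`Interest earned: $${(balance - principal).toFixed(2)}`); // $300.00
```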