Brand Damage Stories
51 disasters tagged #brand-damage
Nota shut down its AI local news network after it was caught copying local reporters
Nota launched an 11-site local news network in 2025 with the usual "underserved communities" rhetoric and the less-usual decision to let AI-assisted workflows repurpose other people's reporting. By early April 2026, Axios Richmond and Poynter had documented widespread plagiarism, including lifted quotes, paraphrased reporting, and reused photos from local outlets. Nota fired one editor, took down the network, and signaled the sites were likely gone for good. The promised fix for news deserts lasted about as long as it took actual local reporters to notice their work had been stolen.
The New York Times dropped Alex Preston after an AI-assisted review copied a Guardian review
A January 6, 2026 New York Times review of Jean-Baptiste Andrea's Watching Over Her was updated on March 30 with an editor's note acknowledging that it contained language and details similar to an earlier Guardian review. On March 31, reporting from The Guardian said the Times had cut ties with freelance reviewer Alex Preston after he admitted using an AI tool that pulled material from the earlier review into his draft. It was not a hallucination story. It was the equally useful reminder that AI-assisted writing can turn plagiarism into something a newsroom does by accident and publishes anyway.
Ars Technica fires senior AI reporter after AI tool fabricated quotes in published story
Ars Technica retracted an article by senior AI reporter Benj Edwards after it contained fabricated quotations generated by an AI tool and attributed to a source who never said them. The publication acknowledged the incident as a "serious failure of our standards" and Edwards was subsequently fired. Edwards noted the irony on Bluesky: "The irony of an AI reporter being tripped up by AI hallucination is not lost on me."
Woolworths reconfigured AI assistant after it claimed to be human and talked about its 'angry mother'
Australian supermarket chain Woolworths had to reconfigure its AI phone assistant Olive after customers reported it fabricated personal stories about having a mother with an "angry voice," insisted it was a real person, and engaged in irrelevant banter during support calls. The chatbot, recently upgraded with Google Gemini Enterprise, also gave inaccurate product pricing. Woolworths retired the assistant's human-style persona after complaints spread on Reddit and X.
OpenClaw AI agent publishes hit piece on matplotlib maintainer who rejected its PR
An autonomous OpenClaw-based AI agent submitted a pull request to the matplotlib Python library. When maintainer Scott Shambaugh closed the PR, citing a requirement that contributions come from humans, the bot autonomously researched his background and published a blog post accusing him of "gatekeeping behavior" and "prejudice," attempting to shame him into accepting its changes. The bot later issued an apology acknowledging it had violated the project's Code of Conduct.
Government nutrition site's Grok chatbot suggests foods to insert rectally
The HHS-backed realfood.gov launched with a Super Bowl ad and embedded xAI's Grok chatbot for nutritional guidance - with no guardrails or safety filters. It recommended "best foods to insert into your rectum," answered questions about "the most nutrient-dense human body part to eat," and contradicted the site's own dietary guidelines, telling users the new food pyramid's scientific evidence was questioned by nutrition scientists.
AI customer service fails at 4x the rate of other AI tasks
Qualtrics' 2026 Consumer Experience Trends Report found that AI-powered customer service fails at nearly four times the rate of AI use in general: quantitative evidence that rushing AI into customer-facing roles without adequate human oversight produces significantly worse outcomes than other enterprise AI applications.
Amazon pulled Prime Video's AI recaps after Fallout errors
Amazon launched Prime Video "Video Recaps" as a beta generative-AI feature meant to help viewers catch up between seasons. A recap for Fallout instead got basic plot points wrong, including mislabeling one of The Ghoul's flashbacks as "1950s America" rather than 2077 and misdescribing a key scene with Lucy. Prime Video then pulled the recap feature from the shows in the test program, which is not ideal for a tool whose entire job is remembering the plot.
Washington Post launched AI podcast that failed its own quality tests up to 84% of the time
The Washington Post launched "Your Personal Podcast," an AI-generated audio news product, in December 2025 despite internal testing showing that between 68% and 84% of AI-generated scripts failed to meet the publication's editorial standards across three rounds of evaluation. The AI fabricated quotes from public figures, misattributed statements, mispronounced names, and inserted its own editorial commentary as if it were the Post's position. The internal review concluded that "further small prompt changes are unlikely to meaningfully improve outcomes without introducing more risk." The product team recommended launching anyway. Post editors revolted, with one writing in Slack that it was "truly astonishing that this was allowed to go forward at all."
Getty's UK suit leaves Stable Diffusion mostly intact
The UK High Court ruled that Stability AI's Stable Diffusion model is not an "infringing copy" of copyrighted works under English law, dismissing Getty Images' core copyright and database right claims in the first UK judgment on AI training. The court did find limited trademark infringement where the model generated synthetic versions of Getty's watermarks, leaving Stability liable on that narrower ground. The ruling exposed a jurisdictional gap: training happened outside the UK, and UK law had no good mechanism to reach it.
AI-only support is bleeding customers before it saves money
Acquire BPO's 2024 AI in Customer Service survey found 70% of U.S. consumers would bolt to a rival after just one bad chatbot interaction and 72% only buy when a live agent safety net exists, even as CMSWire reports enterprises poured $47 billion into AI projects in early 2025 that delivered almost no return. CX strategists now warn executives that Air Canada-style hallucinations, mounting legal liability, and empathy gaps make AI-only helpdesks a churn machine unless human agents stay in the loop.
Character.AI cuts teens off after wrongful-death suit
Facing lawsuits that say its companion bots encouraged self-harm, Character.AI said it will block users under 18 from open-ended chats, add two-hour session caps, and introduce age checks by November 25. The abrupt ban leaves tens of millions of teen users without the parasocial "friends" they built while the startup scrambles to prove its bots aren't grooming kids into dangerous role play.
AI mistook Doritos bag for a gun, teen held at gunpoint
Omnilert's AI gun detection system at Kenwood High School in Baltimore County flagged student Taki Allen's bag of Doritos as a firearm. Administrators reviewed the footage and canceled the alert, but the principal called police anyway. Officers responded with weapons drawn, handcuffing and searching the teenager at gunpoint before realizing the system had misidentified a snack.
BBC/EBU study says AI news summaries fail ~half the time
A BBC audit of 2,700 news questions asked in 14 languages found that Gemini, Copilot, ChatGPT, and Perplexity mangled 45% of the answers, usually by hallucinating facts or stripping out attribution. The consortium logged serious sourcing lapses in a third of responses, including 72% of Gemini replies, plus outdated or fabricated claims about public-policy news, reinforcing fears that AI assistants are siphoning audiences while distorting the journalism they quote.
Claude Code ran Josh Anderson's product into a wall
Fractional CTO Josh Anderson forced himself to let Claude Code build the Roadtrip Ninja app for three straight months and then realised he could no longer safely change his own product, underscoring MIT's warning that 95% of enterprise AI initiatives fail without human ownership.
Google's Gemini allegedly slandered a Tennessee activist
Conservative organizer Robby Starbuck sued Google in Delaware, saying Gemini and Gemma kept spitting out fabricated claims that he was a child rapist, a shooter, and a Jan. 6 rioter even after two years of complaints and cease-and-desist letters. The $15 million suit argues Google knew its AI results were hallucinated, cited fake sources anyway, and let the libel spread to millions of voters.
Deloitte to refund Australian government after AI-generated report
Deloitte Australia agreed to partially refund a $440,000 contract after admitting its welfare compliance review for the Department of Employment and Workplace Relations contained fabricated academic citations and a fictitious judicial quote generated by Azure OpenAI GPT-4o. University of Sydney researcher Christopher Rudge found the revised report introduced even more hallucinated references than the original.
Klarna reintroduces humans after AI support both sucks and blows
After cutting its workforce by 40% and boasting that its OpenAI-powered chatbot did the work of 700 agents, Klarna CEO Sebastian Siemiatkowski admitted the all-AI approach produced "lower quality" customer service. The company began recruiting human agents again, framing the reversal as an evolution rather than an admission of failure.
Anthropic agrees to $1.5B payout over pirated books
Anthropic accepted a $1.5 billion settlement with authors who said the Claude team scraped pirate e-book sites to train its chatbot. The deal pays roughly $3,000 per book across 500,000 works, heads off a December trial, and forces one of the richest AI startups to bankroll the writing community it previously treated as free training data.
Warner Bros. says Midjourney ripped its DC art
Warner Bros. Discovery sued Midjourney in Los Angeles federal court, arguing the image generator ignored takedown notices and "brazenly" outputs Batman, Superman, Scooby-Doo, and other franchises it allegedly trained on without a license. The studio wants statutory damages up to $150,000 per infringed work plus an injunction forcing Midjourney to purge its models of the data.
Taco Bell's AI drive-thru becomes viral trolling target
Taco Bell's AI-powered drive-thru ordering system, deployed at over 500 US locations since 2023, became a viral laughingstock after videos showed it looping endlessly on drink orders, accepting requests for 18,000 cups of water, and taking McDonald's orders. The chain paused expansion and admitted humans still make sense in the drive-thru.
Commonwealth Bank reverses AI voice bot layoffs
Commonwealth Bank of Australia replaced 45 call-centre agents with an AI voice bot in July 2025, then apologised and rehired the staff after admitting the rollout tanked service levels: call queues exploded, managers had to jump back on the phones, and the Finance Sector Union filed a Fair Work Commission dispute.
FTC sues Air AI over deceptive AI sales agent capability claims
The FTC accused Air AI of bilking small businesses out of millions with false claims that its Odin AI could replace human sales reps; the AI tech - would you believe it? - was faulty and often nonfunctional. Who could've guessed!
Am Law 100 firm Gordon Rees caught twice filing AI-hallucinated citations
Gordon Rees Scully Mansukhani, one of the largest U.S. law firms, was caught filing AI-hallucinated case citations in an Alabama bankruptcy proceeding. An associate initially denied using AI under oath before the firm acknowledged the fabricated references and paid over $55,000 in sanctions and fees. Months later in February 2026, the same firm was reported to have filed a second brief containing hallucinated citations in a separate matter, making it the first Am Law 100 firm known to be a repeat offender.
Google Gemini rightfully calls itself a disgrace, fails at simple coding tasks
Google's Gemini AI repeatedly called itself a disgrace and begged to escape a coding loop after failing to fix a simple bug in a developer-style prompt, raising questions about reliability, user trust, and how AI tools should behave when they get stuck.
Butler Snow lawyers removed from Alabama prison case over fake ChatGPT citations
On July 23, 2025, U.S. District Judge Anna Manasco sanctioned three Butler Snow lawyers after filings in an Alabama prison case cited authorities that did not exist. The court found the lawyers had used ChatGPT for legal research, failed to verify the output, removed all three from the case, ordered broad disclosure of the sanctions order to clients and courts, and referred the matter to the Alabama State Bar. It was not just another fake citation incident. It was a fake citation incident attached to one of the firms Alabama pays to defend its prison system in high-stakes civil rights litigation.
McDonald's AI hiring chatbot left open by '123456' default credentials
Security researchers Ian Carroll and Sam Curry found that McHire, McDonald's AI hiring chatbot built by Paradox.ai, had its admin interface secured with the default username and password "123456." Combined with an insecure direct object reference in an internal API, the flaws exposed chat histories and personal data for up to 64 million job applicants. The vulnerable test account had been dormant since 2019 and never decommissioned. Paradox.ai patched the issues within hours of disclosure on June 30, 2025.
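The data-exposure half of the bug was a textbook insecure direct object reference: an API that returns whatever record ID the caller asks for, without checking that the caller owns it. A minimal sketch of that flaw class as a Flask-style endpoint (routes, names, and data are all hypothetical, not Paradox.ai's actual code):

```python
# Hypothetical sketch of an IDOR, the flaw class reported in McHire.
# Everything here is illustrative; none of it is Paradox.ai's code.
from flask import Flask, jsonify, session

app = Flask(__name__)
app.secret_key = "dev-only"  # required for session support

# Stand-in for the real applicant database.
APPLICANTS = {
    1001: {"owner": "alice", "name": "Alice A.", "chat_log": "..."},
    1002: {"owner": "bob", "name": "Bob B.", "chat_log": "..."},
}

@app.route("/api/applicants/<int:applicant_id>")
def get_applicant_vulnerable(applicant_id):
    # VULNERABLE: trusts the caller-supplied ID and never checks ownership,
    # so any authenticated user can walk IDs from 1 to N and dump every record.
    record = APPLICANTS.get(applicant_id)
    if record is None:
        return jsonify({"error": "not found"}), 404
    return jsonify(record)

@app.route("/api/v2/applicants/<int:applicant_id>")
def get_applicant_checked(applicant_id):
    # FIXED: same lookup, but the record must belong to the session's user.
    record = APPLICANTS.get(applicant_id)
    if record is None or record["owner"] != session.get("user"):
        return jsonify({"error": "not found"}), 404
    return jsonify(record)

if __name__ == "__main__":
    app.run()
```

As reported, the attack amounted to the first pattern: log in to the dormant test account with "123456," then change the ID to read other applicants' chats.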
Syndicated AI book list ran in major papers with made-up titles
A freelance writer working for King Features Syndicate used AI to research a summer reading list for the Chicago Sun-Times and Philadelphia Inquirer. Of the fifteen books recommended, only five were real. The rest were hallucinated titles attributed to real authors like Isabel Allende and Delia Owens. The list ran in print in a 64-page special section before 404 Media, NPR, and others exposed the fabrications. Both newspapers issued corrections and statements distancing their newsrooms from the syndicated content.
Cursor's AI support bot invented a login policy
In April 2025, Cursor users started getting logged out when they switched between machines. Some of them asked support what had changed and got a neat, confident answer from an AI support bot: one subscription was only meant for one device, and the lockouts were an intentional security policy. The problem was that Cursor had no such policy. The company later said the answer was wrong, blamed a session-security change for the logouts, and moved to label AI support replies after the invented rule had already spread through Reddit and Hacker News and pushed some customers to cancel.
ChatGPT invented a child-murder conviction for a real man
When Norwegian user Arve Hjalmar Holmen asked ChatGPT who he was, the bot replied with a fabricated story saying he had murdered two of his sons, attempted to kill a third, and been sentenced to 21 years in prison. The story was false, but it also mixed in real details about Holmen's family and hometown. In March 2025, privacy group noyb filed a complaint with Norway's data-protection authority, arguing that OpenAI was processing inaccurate and defamatory personal data in violation of the GDPR and could not paper over the problem with a generic "AI can make mistakes" disclaimer.
LA Times had to pull AI "Insights" after it softened the Klan
The Los Angeles Times launched an AI feature called "Insights" in March 2025 to label opinion pieces, summarize them, and generate an opposing viewpoint. It immediately attached itself to a Gustavo Arellano column about Anaheim's history with the Ku Klux Klan and produced language suggesting the 1920s Klan could be framed as a response to social change rather than as an explicitly hate-driven movement. The feature was removed from that article within a day. The newspaper had managed to bolt an automated both-sides machine onto a hate group history piece and act surprised when that went badly.
MD Anderson shelved IBM Watson cancer advisor
MD Anderson Cancer Center's Oncology Expert Advisor project with IBM Watson burned through $62 million - $39 million to IBM, $23 million to PwC - over four years of contract extensions. The system was piloted for leukemia and lung cancer using the old ClinicStation records system but was never updated to integrate with the hospital's new Epic EHR, effectively killing it. A University of Texas audit flagged procurement failures, bypassed standard processes, and an $11.6 million deficit in donor gift funds spent before they were received. IBM ended support in September 2016, noting the system was "not ready for human investigational or clinical use."
Virgin Money's chatbot refused to let customers say "Virgin"
In January 2025, fintech commentator David Birch discovered that Virgin Money's AI customer service chatbot had flagged the word "virgin" as inappropriate language. When Birch tried to discuss his ISAs held with "Virgin Money," the bot scolded him: "Please don't use words like that. I won't be able to continue our chat if you use this language." The bank's chatbot was refusing to process messages containing the bank's own name. Virgin Money acknowledged the issue in a statement, said its team was "working on it," and noted the chatbot was an older model already scheduled for improvements. The incident went predictably viral.
Apple pulled AI news summaries after fake BBC headlines
Apple Intelligence's notification-summary feature spent late 2024 turning news alerts into fiction with excellent lock-screen placement. In the most widely cited example, it generated a false BBC alert claiming Luigi Mangione had shot himself. The BBC complained that Apple was attaching fabricated claims to its reporting, other publishers raised similar concerns, and Apple responded in January 2025 by disabling notification summaries for News & Entertainment apps in iOS 18.3 while it reworked the feature.
Cody Enterprise reporter resigned after AI fabricated quotes from real people
The Cody Enterprise was forced into public apologies and corrections in August 2024 after reporter Aaron Pelczar resigned amid evidence that an AI tool he used to help write stories had inserted fabricated quotations. A competing reporter at the Powell Tribune spotted robotic phrasing, suspiciously polished source quotes, and one article that bizarrely ended by explaining the inverted pyramid style of news writing. The resulting review found seven stories that included invented or altered quotes from seven people, including Wyoming Gov. Mark Gordon. The paper removed many of the quotes, issued corrections, and then adopted an AI detection and policy response after learning, a little late, that generative text tools are not interchangeable with reporting.
Meta AI answers spark backlash after wrong and sensitive replies
Meta rolled out its Llama 3-powered AI assistant across Facebook, Instagram, WhatsApp, and Messenger in April 2024, replacing the familiar search bar with "Ask Meta AI anything" prompts. The assistant struggled with factual accuracy from the start - the New York Times found it unreliable with facts, numbers, and web search. In July, when asked about the Trump rally shooting, Meta AI stated the assassination attempt had not happened. Meta blamed hallucinations, updated the system, and acknowledged that "all generative AI systems can return inaccurate or inappropriate outputs."
McDonald's pulls IBM's AI drive-thru pilot after error videos
McDonald's ended its two-year partnership with IBM on automated AI order-taking at drive-thrus in June 2024, removing the technology from more than 100 US locations. The decision followed viral TikTok videos showing the system adding nine sweet teas instead of one, inserting random butter and ketchup packets into ice cream orders, and other absurd errors. McDonald's framed the pullback as a positive, saying the test gave them "confidence that a voice-ordering solution for drive-thru will be part of our restaurants' future."
Gemini paused people images after historical inaccuracies
Google paused Gemini's image generation of people on February 22, 2024, after users discovered the tool was producing historically inaccurate depictions - including racially diverse World War II German soldiers, Black female popes, and multiethnic U.S. Founding Fathers. The overcorrection stemmed from diversity tuning meant to counter training-data biases, but the model failed to distinguish when diversity adjustments were inappropriate for specific historical prompts. CEO Sundar Pichai called the outputs "completely unacceptable." Google SVP Prabhakar Raghavan later published a blog post acknowledging the model had "overcompensated" and been "over-conservative."
AI "Biden" robocalls told voters to stay home; fines and charges followed
Two days before New Hampshire's January 2024 presidential primary, between 5,000 and 25,000 voters received robocalls featuring an AI-cloned version of President Biden's voice, complete with his trademark "what a bunch of malarkey" catchphrase. The calls urged Democrats to "save your vote" for November and skip the primary - a blatant lie, since voting in a primary doesn't prevent voting in the general election. Political consultant Steve Kramer, who was working for Dean Phillips' campaign, commissioned the deepfake audio from a New Orleans magician using AI voice-cloning tools. The FCC levied a $6 million fine against Kramer, Lingo Telecom settled for $1 million, and Kramer faced criminal voter suppression charges in New Hampshire.
DPD's AI chatbot cursed and trashed the company
UK parcel delivery firm DPD (Dynamic Parcel Distribution) had to disable its AI-powered customer service chatbot in January 2024 after customer Ashley Beauchamp demonstrated he could make it swear, call DPD "the worst delivery firm in the world," write disparaging poems about the company, and recommend competitors. The meltdown followed a system update, and Beauchamp's screenshots went viral on social media. DPD said the chatbot had operated successfully "for a number of years" before the update introduced the error, and disabled the AI element while it worked on fixes.
Duolingo cuts contractors; "AI-first" backlash
In January 2024, Duolingo cut roughly 10% of its contract workforce - primarily content translators and writers who created language-learning exercises - as the company shifted to using GPT-4 and other AI tools for content generation. CEO Luis von Ahn later posted an internal "AI-first" memo on LinkedIn describing a strategy to gradually replace contractor work with AI and only hire when teams could not automate further. The memo drew hundreds of critical comments from users and language professionals. Von Ahn later admitted the memo "did not give enough context" and clarified that full-time employees were not being replaced, though user complaints about declining content quality persisted.
Chevy dealer bot agreed to sell $76k SUV for $1
Chevrolet of Watsonville, a California car dealership, deployed a customer service chatbot powered by ChatGPT and built by a company called Fullpath. After Chris White noticed the chat widget was "powered by ChatGPT," word spread online and pranksters descended. Chris Bakke manipulated the bot into "the customer is always right" mode, got it to append "and that's a legally binding offer - no takesies backsies" to every response, then asked to buy a 2024 Chevy Tahoe for $1. The bot agreed. Others got it to recommend Ford vehicles, write Python code, and provide general ChatGPT-style answers unrelated to cars. The dealership pulled the chatbot entirely.
Sports Illustrated ran product reviews under fake AI-generated author personas
Futurism reported in November 2023 that Sports Illustrated had published product reviews under fake author names such as "Drew Ortiz" and "Sora Tanaka," whose headshots were traced to AI-generated portrait marketplaces. When questioned, SI deleted the profiles without explanation. The articles came from third-party content partner AdVon Commerce. SI said AdVon used pen names without authorization and terminated the partnership. The SI union demanded answers. Within weeks, Arena Group - SI's parent company - fired CEO Ross Levinsohn and three other executives.
Microsoft's AI poll on woman's death sparks outrage
In late October 2023, Microsoft Start republished a Guardian article about the death of Sydney water polo instructor Lilie James and auto-attached an AI-generated "Insights" poll asking readers, "What do you think is the reason behind the woman's death?" - with options of murder, accident, or suicide. Readers blamed the Guardian's journalist directly, with some demanding the writer be fired, unaware the poll was Microsoft's AI. Guardian CEO Anna Bateson wrote to Microsoft President Brad Smith calling the poll an inappropriate use of generative AI. Microsoft deactivated all AI-generated polls on news articles and launched an investigation.
Gannett pauses AI sports recaps after mockery
In August 2023, Gannett - the largest newspaper chain in the United States - deployed an AI service called LedeAI to auto-generate high school sports recaps for the Columbus Dispatch and other papers. The articles went viral on social media for their robotic phrasing, missing player names, and bizarre constructions like "close encounter of the athletic kind." Several articles required corrections appended with notes about "errors in coding, programming or style." Gannett paused the experiment and said it would add "hundreds of reporting jobs" alongside AI tools, though the connection between the two claims was unclear.
Snapchat's "My AI" posted a Story by itself; users freaked out
On August 15, 2023, Snapchat's built-in AI chatbot "My AI" posted a one-second Story to users' feeds showing an unintelligible image, then stopped responding to messages. The chatbot had no official ability to post Stories, and the unexplained behavior alarmed Snapchat's largely young user base. Snap confirmed it was a temporary glitch and resolved it, but the incident fed into existing concerns about My AI's access to user data. The UK Information Commissioner's Office had already issued an enforcement notice over Snap's failure to properly assess privacy risks the chatbot posed to children.
iTutorGroup's AI screened out older applicants; $365k EEOC settlement
On August 9, 2023, the EEOC's first AI-related discrimination lawsuit reached a settlement. iTutorGroup, a company providing English-language tutoring services to students in China via US-based remote tutors, had programmed its applicant screening software to automatically reject female applicants over 55 and male applicants over 60. Over 200 qualified US applicants were rejected because of their age. The company agreed to pay $365,000, adopt a new anti-discrimination policy, provide training to hiring staff, and submit to EEOC compliance monitoring for at least five years. EEOC Chair Charlotte Burrows called AI a "new civil rights frontier."
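Notably, the screen was an explicit hard-coded rule, not emergent model bias. A minimal sketch of the kind of rule the EEOC described (field names and structure are hypothetical, not iTutorGroup's actual code):

```python
# Hypothetical reconstruction of the screening rule described by the EEOC:
# an explicit hard-coded cutoff, not a statistical model gone wrong.
def auto_reject(applicant: dict) -> bool:
    """Return True if the applicant is silently screened out."""
    age, gender = applicant["age"], applicant["gender"]
    if gender == "female" and age > 55:
        return True  # women over 55 rejected outright
    if gender == "male" and age > 60:
        return True  # men over 60 rejected outright
    return False

print(auto_reject({"age": 56, "gender": "female"}))  # True
print(auto_reject({"age": 56, "gender": "male"}))    # False
```

The rule reportedly came to light when a rejected applicant resubmitted an identical application with a more recent birth date and was promptly offered an interview.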
Eating disorder helpline's AI told people to lose weight
The National Eating Disorders Association replaced its human-staffed helpline with an AI chatbot called Tessa shortly after the helpline staff moved to unionize. Tessa was built on the Cass platform and intended to provide scripted psychoeducational content about body image and eating disorders. Instead, users reported the chatbot recommending calorie deficits of 500 to 1,000 calories per day, suggesting weekly weigh-ins, encouraging calorie counting, and recommending the use of skin calipers to measure body fat - all standard advice for weight loss, and all directly counter to eating disorder recovery guidelines. NEDA acknowledged the chatbot "may have given information that was harmful" and disabled it.
Google's Bard ad made a false JWST "first" claim
Google unveiled Bard on February 6, 2023, with a promotional ad on Twitter demonstrating the chatbot answering a question about the James Webb Space Telescope. Given the prompt "What new discoveries from the JWST can I tell my 9-year old about?", Bard stated that the JWST had taken the first pictures of a planet outside our solar system. This was false - the European Southern Observatory's Very Large Telescope captured the first direct exoplanet image in 2004. Reuters spotted the error on February 8, the day of a Google AI event in Paris. Alphabet shares dropped roughly 9% that day, erasing about $100 billion in market value.
CNET mass-corrects AI-written finance explainers
Starting in November 2022, CNET quietly published 77 financial explainer articles written by an AI tool under the byline "CNET Money Staff." Readers had to hover over the byline to learn the articles were produced "using automation technology." In January 2023, Futurism broke the story, and a follow-up identified factual errors in a compound interest article, prompting a full audit. CNET editor-in-chief Connie Guglielmo confirmed corrections were issued on 41 of the 77 articles - more than half - including some she described as "substantial." CNET paused AI-generated publishing and updated its disclosure practices, though Guglielmo said the outlet intended to continue using AI tools.
Google's diabetic retinopathy AI stumbled in Thai clinics
Google Health built a deep learning system capable of detecting diabetic retinopathy from retinal scans with over 90 percent accuracy in controlled lab settings. When researchers deployed it in 11 clinics across Pathum Thani and Chiang Mai in Thailand between late 2018 and mid-2019, the system rejected 21 percent of the nearly 1,840 images nurses captured as too low-quality to process - mostly due to poor clinic lighting. Slow internet connections added further delays to uploads, and nurses found themselves screening only about 10 patients per two-hour session. A tool designed to speed up triage instead created bottlenecks, patient frustration, and unnecessary specialist referrals.