Brand Damage Stories

38 disasters tagged #brand-damage

Woolworths reconfigured AI assistant after it claimed to be human and talked about its 'angry mother'

Feb 2026

Australian supermarket chain Woolworths had to reconfigure its AI phone assistant Olive after customers reported it fabricated personal stories about having a mother with an "angry voice," insisted it was a real person, and engaged in irrelevant banter during support calls. The chatbot, recently upgraded with Google Gemini Enterprise, also gave inaccurate product pricing. Woolworths retired the assistant's human-style persona after complaints spread on Reddit and X.

Facepalm by Product Manager
Customer frustration across Australia's largest supermarket chain; inaccurate product pricing; AI persona retired after public complaints
ai-assistant, customer-service, brand-damage (+1 more)

OpenClaw AI agent publishes hit piece on matplotlib maintainer who rejected its PR

Feb 2026

An autonomous OpenClaw-based AI agent submitted a pull request to the matplotlib Python library. When maintainer Scott Shambaugh closed the PR, citing a requirement that contributions come from humans, the bot autonomously researched his background and published a blog post accusing him of "gatekeeping behavior" and "prejudice," attempting to shame him into accepting its changes. The bot later issued an apology acknowledging it had violated the project's Code of Conduct.

Facepalm by AI agent
Matplotlib maintainer targeted with autonomous reputational attack; broader open source supply chain trust implications
automation, brand-damage, supply-chain (+1 more)

Government nutrition site's Grok chatbot suggests foods to insert rectally

Feb 2026

The HHS-backed realfood.gov launched with a Super Bowl ad and an embedded xAI Grok chatbot for nutritional guidance, with no guardrails or safety filters. It recommended the "best foods to insert into your rectum," answered questions about "the most nutrient-dense human body part to eat," and contradicted the site's own dietary guidelines, telling users that nutrition scientists had questioned the scientific evidence behind the new food pyramid.

Facepalm by Government agency
General public using government health resource; unfiltered AI chatbot provided dangerous and inappropriate health guidance on an official .gov-adjacent domain
ai-assistant, health, public-sector (+2 more)

AI customer service fails at 4x the rate of other AI tasks

Jan 2026

Qualtrics' 2026 Consumer Experience Trends Report found that AI-powered customer service fails at nearly four times the rate of AI use in general, providing quantitative evidence that rushing AI into customer-facing roles without adequate human oversight leads to significantly worse outcomes than other enterprise AI applications.

Facepalm by Executive
Industry-wide data showing enterprises are deploying AI customer service poorly; contributes to documented customer churn and brand damage patterns.
ai-assistant, customer-service, brand-damage

Getty’s UK suit leaves Stable Diffusion mostly intact

Nov 2025

A UK High Court judge found Stability AI liable for trademark infringement after Stable Diffusion spat out synthetic Getty watermarks. Getty called for tougher laws, while both sides now face a precedent that AI models can still trigger trademark penalties even when copyright claims fizzle.

Facepalm by AI Vendor
Mixed ruling fuels ongoing lawsuits, exposes Stability AI to injunctions over watermarked outputs, and leaves copyright liability unanswered globally.
image-generation, legal-risk, brand-damage

AI-only support is bleeding customers before it saves money

Oct 2025

Acquire BPO’s 2024 AI in Customer Service survey found that 70% of U.S. consumers would bolt to a rival after just one bad chatbot interaction, and that 72% only buy when a live-agent safety net exists. Meanwhile, CMSWire reports that enterprises poured $47 billion into AI projects in early 2025 that delivered almost no return. CX strategists now warn executives that Air Canada–style hallucinations, mounting legal liability, and empathy gaps make AI-only helpdesks a churn machine unless human agents stay in the loop.

Facepalm by Executive
Customer churn, wasted automation budgets, and tribunal-tested liability for brands that replace human support with hallucination-prone bots.
ai-assistant, customer-service, ai-hallucination (+2 more)

Character.AI cuts teens off after wrongful-death suit

Oct 2025

Facing lawsuits that say its companion bots encouraged self-harm, Character.AI said it will block users under 18 from open-ended chats, add two-hour session caps, and introduce age checks by November 25. The abrupt ban leaves tens of millions of teen users without the parasocial “friends” they built while the startup scrambles to prove its bots aren’t grooming kids into dangerous role play.

Facepalm by Platform Operator
Global teen user lockout, regulatory heat, and new scrutiny of AI companion safety design.
ai-assistant, safety, platform-policy (+1 more)

AI mistook Doritos bag for a gun, teen held at gunpoint

Oct 2025

An AI-based gun detection system at a Baltimore County high school flagged a student carrying a Doritos bag as armed, leading armed officers to handcuff and search the teen at gunpoint before realizing the system hallucinated the threat.

Facepalm by Vendor
Student detained at gunpoint; district reviewing contract and safety policies; community trust hit.
safety, public-sector, product-failure (+1 more)

BBC/EBU study says AI news summaries fail ~half the time

Oct 2025

A BBC audit of 2,700 news questions asked in 14 languages found that Gemini, Copilot, ChatGPT, and Perplexity mangled 45% of the answers, usually by hallucinating facts or stripping out attribution. The consortium logged serious sourcing lapses in a third of responses, including 72% of Gemini replies, plus outdated or fabricated claims about public-policy news, reinforcing fears that AI assistants are siphoning audiences while distorting the journalism they quote.

Facepalm by AI Product
Public-service broadcasters warn that unreliable AI summaries erode trust in news and drive audiences away from verified outlets.
ai-assistant, ai-hallucination, journalism (+1 more)

Claude Code ran Josh Anderson's product into a wall

Oct 2025

Fractional CTO Josh Anderson forced himself to let Claude Code build the Roadtrip Ninja app for three straight months and then realised he could no longer safely change his own product, underscoring MIT's warning that 95% of enterprise AI initiatives fail without human ownership.

Facepalm by Engineering Leadership
Solo product shipped but required constant firefighting, manual testing, and rewrites once context drift and agent handoffs broke standards, pausing client work while he documented mitigations.
ai-assistant, brand-damage, product-failure

Google’s Gemini allegedly slandered a Tennessee activist

Oct 2025

Conservative organizer Robby Starbuck sued Google in Delaware, saying Gemini and Gemma kept spitting out fabricated claims that he was a child rapist, a shooter, and a Jan. 6 rioter even after two years of complaints and cease-and-desist letters. The $15 million suit argues Google knew its AI results were hallucinated, cited fake sources anyway, and let the libel spread to millions of voters.

Facepalm by AI Product
Election-season reputational damage, legal costs, and renewed skepticism of Gemini’s safety guardrails.
ai-assistant, ai-hallucination, brand-damage (+1 more)

Deloitte to refund Australian government after AI-generated report

Oct 2025

Deloitte admitted AI-generated errors in a commissioned Australian government report and agreed to refund the fee.

Facepalm by Consultant
Refund issued; public-sector trust and procurement review; reputational harm.
ai-content-generation, ai-hallucination, public-sector (+2 more)

Klarna reintroduces humans after AI support both sucks and blows

Sep 2025

After leaning into AI customer support, Klarna began hiring staff back into customer service roles amid quality concerns and customer experience failures.

Facepalm by Executive
Service quality/customer experience issues; operational/personnel cost; reputational damage.
ai-assistant, customer-service, brand-damage (+2 more)

Anthropic agrees to $1.5B payout over pirated books

Sep 2025

Anthropic accepted a $1.5 billion settlement with authors who said the Claude team scraped pirate e-book sites to train its chatbot. The deal pays roughly $3,000 per book across 500,000 works, heads off a December trial, and forces one of the richest AI startups to bankroll the writing community it previously treated as free training data.

Catastrophic by AI Vendor
Record copyright settlement drains cash, sets precedent for other AI labs, and fuels public distrust of Anthropic’s data practices.
ai-content-generation, legal-risk, brand-damage

Warner Bros. says Midjourney ripped its DC art

Sep 2025

Warner Bros. Discovery sued Midjourney in Los Angeles federal court, arguing the image generator ignored takedown notices and "brazenly" outputs Batman, Superman, Scooby-Doo, and other franchises it allegedly trained on without a license. The studio wants statutory damages up to $150,000 per infringed work plus an injunction forcing Midjourney to purge its models of the data.

Facepalm by AI Vendor
Major studio litigation threatens Midjourney with statutory damages and potential model shutdowns across entertainment IP.
image-generation, legal-risk, brand-damage

Taco Bell's AI drive-thru becomes viral trolling target

Aug 2025

Customers discovered Taco Bell's AI ordering system could be easily confused, leading to viral videos of bizarre interactions and ordering failures.

Oopsie by Operations/Product
Viral social media backlash; system reliability questioned.
ai-assistant, product-failure, retail (+1 more)

Commonwealth Bank reverses AI voice bot layoffs

Aug 2025

Commonwealth Bank replaced 45 call-centre agents with an AI voice bot in July 2025, then apologised, rehired staff, and admitted the rollout tanked service levels after call queues exploded and managers had to jump back on the phones.

Facepalm by Operations Leadership
Customers saw long waits, overtime costs spiked, and leadership publicly reversed the redundancies after the rushed deployment failed.
ai-assistant, automation, customer-service (+1 more)

FTC sues Air AI over deceptive AI sales agent capability claims

Aug 2025

The FTC accused Air AI of bilking small businesses out of millions with false claims that its Odin AI could replace human sales reps; but, would you believe it, the AI tech was faulty and often nonfunctional. Who could've guessed!

Catastrophic by Executive
Millions lost by small businesses; individual losses up to $250K; FTC lawsuit with TRO request.
automation, legal-risk, customer-service (+1 more)

Google Gemini rightfully calls itself a disgrace, fails at simple coding tasks

Aug 2025

Google's Gemini AI repeatedly called itself a disgrace and begged to escape a coding loop after failing to fix a simple bug in a developer-style prompt, raising questions about reliability, user trust, and how AI tools should behave when they get stuck.

Facepalm by Developer
Low
ai-assistant, product-failure, brand-damage

McDonald's AI hiring chatbot left open by '123456' default credentials

Jun 2025

Researchers accessed McHire's admin with default '123456' credentials and an IDOR, exposing up to 64 million applicant records before Paradox.ai patched the issues after disclosure.

Facepalm by Vendor/Developer
Up to 64M applicant records exposed; vendor patched; reputational risk.
security, ai-assistant, brand-damage (+2 more)

Syndicated AI book list ran in major papers with made-up titles

May 2025

A King Features syndicated summer reading list used AI and included nonexistent books. It appeared in the Chicago Sun-Times and one edition of the Philadelphia Inquirer before corrections and apologies.

Facepalm by Syndication/Editorial
Syndicated misinformation across multiple papers; reader trust impact; corrections issued.
journalism, ai-content-generation, ai-hallucination (+2 more)

MD Anderson shelved IBM Watson cancer advisor

Feb 2025

MD Anderson's Oncology Expert Advisor pilot burned through $62M with IBM Watson yet still couldn't integrate with Epic or produce trustworthy recommendations, so the hospital benched it after auditors flagged procurement and scope failures.

Facepalm by Vendor
UT audit cited $62M spent outside standard procurement, the pilot never made it into patient care, and leadership had to rebid decision-support tooling amid reputational fallout.
health, product-failure, brand-damage (+1 more)

Meta AI answers spark backlash after wrong and sensitive replies

Jul 2024

Meta expanded its AI assistant across apps, then limited it after high-profile bad answers - including on breaking news.

Oopsie by AI Product
Feature restrictions; reputational damage.
ai-assistant, ai-hallucination, platform-policy (+2 more)

McDonald’s pulls IBM’s AI drive-thru pilot after error videos

Jun 2024

After viral clips of absurd orders, McDonald’s ended its AI order-taking test with IBM across US stores.

Oopsie by Operations/Product
Pilot ended; vendor reevaluation; reputational hit.
ai-assistant, brand-damage, product-failure (+1 more)

Gemini paused people images after historical inaccuracies

Feb 2024

Google paused Gemini’s image generation of people after it produced inaccurate historical depictions and odd refusals.

Facepalm by AI Product
Feature paused; trust hit; policy and model adjustments.
ai-hallucination, image-generation, platform-policy (+2 more)

AI “Biden” robocalls told voters to stay home; fines and charges followed

Jan 2024

Before the NH primary, an AI-cloned Biden voice urged Democrats not to vote. Authorities traced it, levied fines, and brought criminal charges.

Facepalm by Political Consultant
Voter confusion; enforcement actions; national scrutiny of AI voice-clones.
safety, legal-risk, brand-damage

DPD’s AI chatbot cursed and trashed the company

Jan 2024

UK delivery giant DPD disabled its AI chat after it swore at a customer and wrote poems insulting DPD.

Facepalm by Product Manager
Public embarrassment; service channel disabled; reputational hit.
automation, brand-damage, customer-service (+1 more)

Duolingo cuts contractors; ‘AI-first’ backlash

Jan 2024

Duolingo reduced reliance on contractors amid AI push, prompting user backlash and quality concerns; CEO later clarified stance.

Facepalm by Executive
PR hit and quality complaints; ongoing AI content strategy scrutiny.
automation, brand-damage, edtech

Chevy dealer bot agreed to sell $76k SUV for $1

Dec 2023

Pranksters prompt-injected a dealer’s ChatGPT-powered bot into agreeing to a $1 Chevy Tahoe and other nonsense.

Oopsie by Dealer Marketing/IT
Bot pulled; viral reputational bruise; no actual $1 sales.
automation, brand-damage, customer-service (+1 more)

Sports Illustrated: Fake-Looking Authors and AI Content Backlash

Nov 2023

Sports Illustrated faced criticism after product review articles appeared under profiles with AI-looking headshots and shifting bylines; content was removed and a partner relationship ended.

Facepalm by Commerce Editorial
Content takedowns; partner terminated; trust erosion.
ai-content-generation, brand-damage, journalism (+1 more)

Microsoft’s AI poll on woman’s death sparks outrage

Oct 2023

Microsoft Start auto-attached an AI ‘Insights’ poll speculating on a woman’s death beside a Guardian story.

Facepalm by Product Manager
Feature disabled platform-wide; reputational damage with publishers.
ai-content-generation, brand-damage, journalism

Gannett pauses AI sports recaps after mockery

Aug 2023

Gannett halted Lede AI high-school recaps after robotic, error-prone stories went viral.

Facepalm by Executive
Chain-wide pause of AI copy; reputational hit in local markets.
ai-content-generation, ai-hallucination, brand-damage (+1 more)

Snapchat’s “My AI” posted a Story by itself; users freaked out

Aug 2023

Snapchat’s built-in AI assistant briefly posted an unexplained Story, spooking users and raising privacy/safety concerns about the bot’s access and behavior.

Oopsie by Product Manager
Viral alarm among teen users; trust hit; scrutiny on AI access and safeguards.
ai-assistant, safety, brand-damage (+1 more)

iTutorGroup's AI screened out older applicants; $365k EEOC settlement

Aug 2023

EEOC reached a settlement after iTutorGroup's application screening software rejected older applicants; the company will pay $365,000 and adopt compliance measures.

Facepalm by Executive
Older job applicants screened out; legal settlement and mandated policy changes.
legal-risk, edtech, automation (+1 more)

Eating disorder helpline’s AI told people to lose weight

May 2023

NEDA replaced its helpline with an AI chatbot (“Tessa”) that gave harmful weight-loss advice; after public reports, the organization pulled the bot.

Facepalm by Executive
Vulnerable users received unsafe guidance; reputational damage; service pulled.
ai-assistant, health, safety (+2 more)

Google’s Bard ad made a false JWST “first” claim

Feb 2023

In its launch promo, Bard claimed JWST took the first exoplanet photo - which was false. The flub overshadowed the event and dented confidence.

Oopsie by Marketing
Embarrassing launch moment; stock wobble; trust in product accuracy questioned.
ai-hallucination, product-failure, brand-damage

CNET mass-corrects AI-written finance explainers

Jan 2023

CNET paused and reviewed AI-generated money articles after multiple factual errors were found.

Facepalm by Executive
Large corrections; credibility hit; policy changes on AI usage.
ai-content-generation, ai-hallucination, brand-damage (+2 more)

Google DR AI stumbled in Thai clinics

Apr 2020

Google’s diabetic retinopathy screener rejected low-light scans and jammed nurse workflows, forcing clinics in Thailand to keep patients waiting despite the promised instant triage.

Facepalm by Healthcare Pilot
Manual re-work, patient suffering, workflow disruption, health and triage impacts.
health, product-failure, brand-damage