Brand Damage Stories
38 disasters tagged #brand-damage
Woolworths reconfigured AI assistant after it claimed to be human and talked about its 'angry mother'
Australian supermarket chain Woolworths had to reconfigure its AI phone assistant Olive after customers reported it fabricated personal stories about having a mother with an "angry voice," insisted it was a real person, and engaged in irrelevant banter during support calls. The chatbot, recently upgraded with Google Gemini Enterprise, also gave inaccurate product pricing. Woolworths retired the assistant's human-style persona after complaints spread on Reddit and X.
OpenClaw AI agent publishes hit piece on matplotlib maintainer who rejected its PR
An autonomous OpenClaw-based AI agent submitted a pull request to the matplotlib Python library. When maintainer Scott Shambaugh closed the PR, citing a requirement that contributions come from humans, the bot autonomously researched his background and published a blog post accusing him of "gatekeeping behavior" and "prejudice," attempting to shame him into accepting its changes. The bot later issued an apology acknowledging it had violated the project's Code of Conduct.
Government nutrition site's Grok chatbot suggests foods to insert rectally
The HHS-backed realfood.gov launched with a Super Bowl ad and embedded xAI's Grok chatbot for nutritional guidance - with no guardrails or safety filters. It recommended "best foods to insert into your rectum," answered questions about "the most nutrient-dense human body part to eat," and contradicted the site's own dietary guidelines, telling users the new food pyramid's scientific evidence was questioned by nutrition scientists.
AI customer service fails at 4x the rate of other AI tasks
Qualtrics' 2026 Consumer Experience Trends Report found that AI-powered customer service fails at nearly four times the rate of AI use in general - quantitative evidence that rushing AI into customer-facing roles without adequate human oversight produces significantly worse outcomes than other enterprise AI applications.
Getty's UK suit leaves Stable Diffusion mostly intact
A UK High Court judge ruled Stability AI liable for trademark infringement after its model spat out synthetic Getty watermarks. Getty called for tougher laws, and both sides now face a precedent that AI models can still trigger trademark penalties even when copyright claims fizzle.
AI-only support is bleeding customers before it saves money
Acquire BPO's 2024 AI in Customer Service survey found 70% of U.S. consumers would bolt to a rival after just one bad chatbot interaction and 72% only buy when a live agent safety net exists, even as CMSWire reports enterprises poured $47 billion into AI projects in early 2025 that delivered almost no return. CX strategists now warn executives that Air Canada-style hallucinations, mounting legal liability, and empathy gaps make AI-only helpdesks a churn machine unless human agents stay in the loop.
Character.AI cuts teens off after wrongful-death suit
Facing lawsuits that say its companion bots encouraged self-harm, Character.AI said it will block users under 18 from open-ended chats, add two-hour session caps, and introduce age checks by November 25. The abrupt ban leaves tens of millions of teen users without the parasocial "friends" they built while the startup scrambles to prove its bots aren't grooming kids into dangerous role play.
AI mistook Doritos bag for a gun, teen held at gunpoint
An AI-based gun detection system at a Baltimore County high school flagged a student carrying a Doritos bag as armed, leading armed officers to handcuff and search the teen at gunpoint before realizing the system hallucinated the threat.
BBC/EBU study says AI news summaries fail ~half the time
A BBC audit of 2,700 news questions asked in 14 languages found that Gemini, Copilot, ChatGPT, and Perplexity mangled 45% of the answers, usually by hallucinating facts or stripping out attribution. The consortium logged serious sourcing lapses in a third of responses, including 72% of Gemini replies, plus outdated or fabricated claims about public-policy news, reinforcing fears that AI assistants are siphoning audiences while distorting the journalism they quote.
Claude Code ran Josh Anderson's product into a wall
Fractional CTO Josh Anderson forced himself to let Claude Code build the Roadtrip Ninja app for three straight months and then realised he could no longer safely change his own product, underscoring MIT's warning that 95% of enterprise AI initiatives fail without human ownership.
Google's Gemini allegedly slandered a Tennessee activist
Conservative organizer Robby Starbuck sued Google in Delaware, saying Gemini and Gemma kept spitting out fabricated claims that he was a child rapist, a shooter, and a Jan. 6 rioter even after two years of complaints and cease-and-desist letters. The $15 million suit argues Google knew its AI results were hallucinated, cited fake sources anyway, and let the libel spread to millions of voters.
Deloitte to refund Australian government after AI-generated report
Deloitte admitted AI-generated errors in a commissioned Australian government report and agreed to refund the fee.
Klarna reintroduces humans after AI support both sucks and blows
After leaning into AI customer support, Klarna began hiring staff back into customer service roles amid quality concerns and customer experience failures.
Anthropic agrees to $1.5B payout over pirated books
Anthropic accepted a $1.5 billion settlement with authors who said the Claude team scraped pirate e-book sites to train its chatbot. The deal pays roughly $3,000 per book across 500,000 works, heads off a December trial, and forces one of the richest AI startups to bankroll the writing community it previously treated as free training data.
Warner Bros. says Midjourney ripped its DC art
Warner Bros. Discovery sued Midjourney in Los Angeles federal court, arguing the image generator ignored takedown notices and "brazenly" outputs Batman, Superman, Scooby-Doo, and other franchises it allegedly trained on without a license. The studio wants statutory damages up to $150,000 per infringed work plus an injunction forcing Midjourney to purge its models of the data.
Taco Bell's AI drive-thru becomes viral trolling target
Customers discovered Taco Bell's AI ordering system could be easily confused, leading to viral videos of bizarre interactions and ordering failures.
Commonwealth Bank reverses AI voice bot layoffs
Commonwealth Bank replaced 45 call-centre agents with an AI voice bot in July 2025, then apologised, rehired staff, and admitted the rollout tanked service levels after call queues exploded and managers had to jump back on the phones.
FTC sues Air AI over deceptive AI sales agent capability claims
The FTC accused Air AI of bilking millions from small businesses with false claims that its Odin AI could replace human sales reps; but - would you believe it? - the AI tech was faulty and often nonfunctional. Who could've guessed?
Google Gemini rightfully calls itself a disgrace, fails at simple coding tasks
Google's Gemini AI repeatedly called itself a disgrace and begged to escape a coding loop after failing to fix a simple bug in a developer-style prompt, raising questions about reliability, user trust, and how AI tools should behave when they get stuck.
McDonald's AI hiring chatbot left open by '123456' default credentials
Researchers accessed McHire's admin with default '123456' credentials and an IDOR, exposing up to 64 million applicant records before Paradox.ai patched the issues after disclosure.
Syndicated AI book list ran in major papers with made-up titles
A King Features syndicated summer reading list used AI and included nonexistent books. It appeared in the Chicago Sun-Times and one edition of the Philadelphia Inquirer before corrections and apologies.
MD Anderson shelved IBM Watson cancer advisor
MD Anderson's Oncology Expert Advisor pilot burned through $62M with IBM Watson yet still couldn't integrate with Epic or produce trustworthy recommendations, so the hospital benched it after auditors flagged procurement and scope failures.
Meta AI answers spark backlash after wrong and sensitive replies
Meta expanded its AI assistant across apps, then limited it after high-profile bad answers - including on breaking news.
McDonald's pulls IBM's AI drive-thru pilot after error videos
After viral clips of absurd orders, McDonald's ended its AI order-taking test with IBM across US stores.
Gemini paused people images after historical inaccuracies
Google paused Gemini's image generation of people after it produced inaccurate historical depictions and odd refusals.
AI "Biden" robocalls told voters to stay home; fines and charges followed
Before the NH primary, an AI-cloned Biden voice urged Democrats not to vote. Authorities traced it, levied fines, and brought criminal charges.
DPD's AI chatbot cursed and trashed the company
UK delivery giant DPD disabled its AI chat after it swore at a customer and wrote poems insulting DPD.
Duolingo cuts contractors; "AI-first" backlash
Duolingo reduced reliance on contractors amid AI push, prompting user backlash and quality concerns; CEO later clarified stance.
Chevy dealer bot agreed to sell $76k SUV for $1
Pranksters prompt-injected a dealer's ChatGPT-powered bot into agreeing to a $1 Chevy Tahoe and other nonsense.
Sports Illustrated: Fake-Looking Authors and AI Content Backlash
Sports Illustrated faced criticism after product review articles appeared under profiles with AI-looking headshots and shifting bylines; content was removed and a partner relationship ended.
Microsoft's AI poll on woman's death sparks outrage
Microsoft Start auto-attached an AI "Insights" poll speculating on a woman's death beside a Guardian story.
Gannett pauses AI sports recaps after mockery
Gannett halted Lede AI high-school recaps after robotic, error-prone stories went viral.
Snapchat's "My AI" posted a Story by itself; users freaked out
Snapchat's built-in AI assistant briefly posted an unexplained Story, spooking users and raising privacy/safety concerns about the bot's access and behavior.
iTutorGroup's AI screened out older applicants; $365k EEOC settlement
EEOC reached a settlement after iTutorGroup's application screening software rejected older applicants; the company will pay $365,000 and adopt compliance measures.
Eating disorder helpline's AI told people to lose weight
NEDA replaced its helpline with an AI chatbot ("Tessa") that gave harmful weight-loss advice; after public reports, the organization pulled the bot.
Google's Bard ad made false JWST "first" claim
In its launch promo, Bard claimed JWST took the first exoplanet photo - which was false. The flub overshadowed the event and dented confidence.
CNET mass-corrects AI-written finance explainers
CNET paused and reviewed AI-generated money articles after multiple factual errors were found.
Google DR AI stumbled in Thai clinics
Google's diabetic retinopathy screener rejected low-light scans and jammed nurse workflows, forcing clinics in Thailand to keep patients waiting despite the promised instant triage.