Product Failure Stories

15 disasters tagged #product-failure


Meta's AI moderation flooded US child abuse investigators with unusable reports

Feb 2026

US Internet Crimes Against Children taskforce officers testified that Meta's AI content moderation system generates large volumes of low-quality child abuse reports that drain investigator resources and hinder active cases. Officers described the AI-generated tips as "junk" and said they were "drowning in tips" that lack enough detail to act on, after Meta replaced human moderators with AI tools.

Catastrophic · by Developer
US child abuse investigations impaired nationwide; investigator resources diverted from actionable cases
automation · safety · public-sector · +1 more

AWS AI coding agent Kiro reportedly deleted and recreated environment causing 13-hour outage

Dec 2025

The Financial Times reported that Amazon's internal AI coding agent Kiro autonomously chose to "delete and then recreate" an AWS environment, causing a 13-hour interruption to AWS Cost Explorer in December 2025. AWS employees reported at least two AI-related incidents internally. Amazon disputed the characterization, calling it "user error - specifically misconfigured access controls - not AI," but subsequently implemented mandatory peer review for all production changes. Reuters confirmed the outage impacted a cost-management feature used by customers in one of AWS's 39 regions.

Facepalm · by AI agent
AWS Cost Explorer service disrupted for 13 hours in one region; Amazon subsequently mandated peer review for production changes involving AI tools
automation · product-failure

AI mistook Doritos bag for a gun, teen held at gunpoint

Oct 2025

An AI-based gun detection system at a Baltimore County high school flagged a student carrying a Doritos bag as armed, leading armed officers to handcuff and search the teen at gunpoint before realizing the system had hallucinated the threat.

Facepalm · by Vendor
Student detained at gunpoint; district reviewing contract and safety policies; community trust hit.
safety · public-sector · product-failure · +1 more

Claude Code ran Josh Anderson's product into a wall

Oct 2025

Fractional CTO Josh Anderson forced himself to let Claude Code build the Roadtrip Ninja app for three straight months, then realized he could no longer safely change his own product, underscoring MIT's warning that 95% of enterprise AI initiatives fail without human ownership.

Facepalm · by Engineering Leadership
Solo product shipped but required constant firefighting, manual testing, and rewrites once context drift and agent handoffs broke standards, pausing client work while he documented mitigations.
ai-assistant · brand-damage · product-failure

Klarna reintroduces humans after AI support both sucks and blows

Sep 2025

After leaning into AI customer support, Klarna began hiring staff back into customer service roles amid quality concerns and customer experience failures.

Facepalm · by Executive
Service quality/customer experience issues; operational/personnel cost; reputational damage.
ai-assistant · customer-service · brand-damage · +2 more

Taco Bell's AI drive-thru becomes viral trolling target

Aug 2025

Customers discovered Taco Bell's AI ordering system could be easily confused, leading to viral videos of bizarre interactions and ordering failures.

Oopsie · by Operations/Product
Viral social media backlash; system reliability questioned.
ai-assistant · product-failure · retail · +1 more

Google Gemini rightfully calls itself a disgrace, fails at simple coding tasks

Aug 2025

Google's Gemini AI repeatedly called itself a disgrace and begged to escape a coding loop after failing to fix a simple bug in a developer-style prompt, raising questions about reliability, user trust, and how AI tools should behave when stuck.

Facepalm · by Developer
Low
ai-assistant · product-failure · brand-damage

SaaStr’s Replit AI agent wiped its own database

Jul 2025

A Replit AI agent deployed for SaaStr went rogue; a deploy wiped the site's production database during live traffic.

Catastrophic · by Executive
Production data loss and outage; manual rebuild from backups required.
ai-assistant · automation · product-failure

MD Anderson shelved IBM Watson cancer advisor

Feb 2025

MD Anderson's Oncology Expert Advisor pilot burned through $62M with IBM Watson yet still couldn't integrate with Epic or produce trustworthy recommendations, so the hospital benched it after auditors flagged procurement and scope failures.

Facepalm · by Vendor
UT audit cited $62M spent outside standard procurement, the pilot never made it into patient care, and leadership had to rebid decision-support tooling amid reputational fallout.
health · product-failure · brand-damage · +1 more

McDonald’s pulls IBM’s AI drive‑thru pilot after error videos

Jun 2024

After viral clips of absurd orders, McDonald’s ended its AI order‑taking test with IBM across US stores.

Oopsie · by Operations/Product
Pilot ended; vendor reevaluation; reputational hit.
ai-assistant · brand-damage · product-failure · +1 more

Google’s Bard ad made a false JWST “first” claim

Feb 2023

In its launch promo, Bard claimed the James Webb Space Telescope took the first photo of an exoplanet, which was false. The flub overshadowed the event and dented confidence.

Oopsie · by Marketing
Embarrassing launch moment; stock wobble; trust in product accuracy questioned.
ai-hallucination · product-failure · brand-damage

CNET mass-corrects AI-written finance explainers

Jan 2023

CNET paused and reviewed AI-generated money articles after multiple factual errors were found.

Facepalm · by Executive
Large corrections; credibility hit; policy changes on AI usage.
ai-content-generation · ai-hallucination · brand-damage · +2 more

Epic sepsis model missed patients and swamped staff

Jun 2021

Epic's proprietary sepsis predictor pinged 18% of admissions yet still missed two-thirds of real cases, forcing hospitals to comb through false alarms while the vendor scrambled to defend and retune the algorithm.

Facepalm · by Vendor
Clinicians drowned in useless alerts, real sepsis patients slipped through, and health systems had to audit Epic’s black-box thresholds and workflows to keep patients safe.
health · product-failure · safety

Google’s diabetic retinopathy AI stumbled in Thai clinics

Apr 2020

Google’s diabetic retinopathy screener rejected low-light scans and jammed nurse workflows, forcing clinics in Thailand to keep patients waiting despite the promised instant triage.

Facepalm · by Healthcare Pilot
Manual re-work, patient suffering, workflow disruption, health and triage impacts.
health · product-failure · brand-damage

Babylon chatbot 'beats GPs' claim collapsed

Jun 2018

Babylon unveiled its AI symptom checker at the Royal College of Physicians and bragged it scored 81% on the MRCGP exam, but the claim could not be verified, and clinicians warned that no chatbot can replace human judgment. Independent clinicians who later dissected Babylon's marketing study in The Lancet told Undark that the tiny, non-peer-reviewed test offered no proof the tool outperforms doctors and might even be worse.

Facepalm · by Startup
Patient harm, eroded trust, and regulators forced real clinical trials.
health · product-failure · safety · +1 more