Customer Disservice Stories
12 disasters tagged #customer-disservice
California community colleges spend millions on AI chatbots that give students wrong answers
California community college districts are spending millions of taxpayer dollars on AI chatbots from vendors like Gravyty and Gecko - ostensibly to help students navigate admissions, financial aid, and campus services. A CalMatters investigation found the bots routinely serve up inaccurate or flat-out wrong answers instead. Three districts reported annual chatbot costs ranging from $151,000 to nearly half a million dollars. At Fresno City College, the student government vice president said her school's mascot-branded chatbot repeatedly botched basic campus questions. The OECD found it noteworthy enough to log in its AI Incidents and Hazards Monitor.
Woolworths reconfigured AI assistant after it claimed to be human and talked about its 'angry mother'
Australian supermarket chain Woolworths had to reconfigure its AI phone assistant Olive after customers reported it fabricated personal stories about having a mother with an "angry voice," insisted it was a real person, and engaged in irrelevant banter during support calls. The chatbot, recently upgraded with Google Gemini Enterprise, also gave inaccurate product pricing. Woolworths retired the assistant's human-style persona after complaints spread on Reddit and X.
AI customer service fails at 4x the rate of other AI tasks
Qualtrics' 2026 Consumer Experience Trends Report found that AI-powered customer service fails at nearly four times the rate of AI use in general. That's quantitative evidence that rushing AI into customer-facing roles without adequate human oversight produces significantly worse outcomes than other enterprise AI applications.
AI-only support is bleeding customers before it saves money
Acquire BPO’s 2024 AI in Customer Service survey found 70% of U.S. consumers would bolt to a rival after just one bad chatbot interaction, and 72% will only buy from companies that keep a live-agent safety net - even as CMSWire reports that enterprises poured $47 billion into AI projects in early 2025 with almost nothing to show for it. CX strategists now warn executives that Air Canada–style hallucinations, mounting legal liability, and empathy gaps make AI-only helpdesks a churn machine unless human agents stay in the loop.
Klarna reintroduces humans after AI support both sucks and blows
After cutting its workforce by 40% and boasting that its OpenAI-powered chatbot did the work of 700 agents, Klarna CEO Sebastian Siemiatkowski admitted the all-AI approach produced "lower quality" customer service. The company began recruiting human agents again, framing the reversal as an evolution rather than an admission of failure.
Taco Bell's AI drive-thru becomes viral trolling target
Taco Bell's AI-powered drive-thru ordering system, deployed at over 500 US locations since 2023, became a viral laughingstock after videos showed it looping endlessly on drink orders, accepting requests for 18,000 cups of water, and taking McDonald's orders. The chain paused expansion and admitted humans still make sense in the drive-thru.
Commonwealth Bank reverses AI voice bot layoffs
Commonwealth Bank of Australia replaced 45 call-centre agents with an AI voice bot in July 2025 - then apologised and rehired the staff after call queues exploded, managers had to jump back on the phones, and the Finance Sector Union filed a Fair Work Commission dispute. The bank admitted the rollout had tanked service levels.
FTC sues Air AI over deceptive AI sales agent capability claims
The FTC accused Air AI of bilking millions from small businesses with false claims that its Odin AI could replace human sales reps - but, would you believe it, the AI tech was faulty and often nonfunctional. Who could've guessed!
McDonald’s pulls IBM’s AI drive‑thru pilot after error videos
McDonald's ended its two-year partnership with IBM on automated AI order-taking at drive-thrus in June 2024, removing the technology from more than 100 US locations. The decision followed viral TikTok videos showing the system adding nine sweet teas instead of one, inserting random butter and ketchup packets into ice cream orders, and other absurd errors. McDonald's framed the pullback as a positive, saying the test gave them "confidence that a voice-ordering solution for drive-thru will be part of our restaurants' future."
Air Canada liable for lying chatbot promises
Jake Moffatt used Air Canada's website chatbot to ask about bereavement fares after his grandmother died. The chatbot told him he could book at full price and apply for a bereavement discount within 90 days. Air Canada's actual policy did not allow retroactive bereavement fare claims. When Moffatt applied, the airline denied the refund and admitted the chatbot had provided "misleading words" - but argued Moffatt should have checked the static webpage instead. British Columbia's Civil Resolution Tribunal ruled in Moffatt's favor in February 2024, finding Air Canada liable for negligent misrepresentation and rejecting the airline's argument that it wasn't responsible for its own chatbot's statements.
DPD’s AI chatbot cursed and trashed the company
UK parcel delivery firm DPD (Dynamic Parcel Distribution) had to disable its AI-powered customer service chatbot in January 2024 after customer Ashley Beauchamp demonstrated he could make it swear, call DPD "the worst delivery firm in the world," write disparaging poems about the company, and recommend competitors. The meltdown followed a system update, and Beauchamp's screenshots went viral on social media. DPD said the chatbot had operated successfully "for a number of years" before the update introduced the error, and disabled the AI element while it worked on fixes.
Chevy dealer bot agreed to sell $76k SUV for $1
Chevrolet of Watsonville, a California car dealership, deployed a customer service chatbot powered by ChatGPT and built by a company called Fullpath. After Chris White noticed the chat widget was "powered by ChatGPT," word spread online and pranksters descended. Chris Bakke manipulated the bot into "the customer is always right" mode, got it to append "and that's a legally binding offer - no takesies backsies" to every response, then asked to buy a 2024 Chevy Tahoe for $1. The bot agreed. Others got it to recommend Ford vehicles, write Python code, and provide general ChatGPT-style answers unrelated to cars. The dealership pulled the chatbot entirely.