AI-only support is bleeding customers before it saves money
Acquire BPO’s 2024 AI in Customer Service survey found 70% of U.S. consumers would bolt to a rival after just one bad chatbot interaction, and that among consumers who prefer human support, 72% say the availability of live agents directly shapes where they spend — even as CMSWire reports enterprises poured $47 billion into AI projects in early 2025 that delivered almost no return. CX strategists now warn executives that Air Canada–style hallucinations, mounting legal liability, and empathy gaps make AI-only helpdesks a churn machine unless human agents stay in the loop.
The Survey
Acquire BPO, a global business process outsourcing provider, commissioned a third-party survey platform called Pollfish to measure how U.S. consumers actually felt about AI-powered customer service. The survey reached 600 consumers aged 18 and older who had contacted a company for support within the past year. The results, released in late 2024, painted a picture that directly contradicted the pitch most enterprise software vendors were making to their customers.
Seventy percent of respondents said they would consider switching to a different brand after a single bad experience with AI-powered customer service. Not repeated bad experiences. Not a pattern of failure. One interaction.
Consumers reported being 2.5 times more positive about their experience when interacting with a human agent compared to an AI chatbot. Half of all respondents said they felt negatively about companies that were relying more heavily on AI, citing three specific complaints: a lack of personal touch, lower accuracy, and slower resolutions. The efficiency gains that were supposed to justify the AI deployment were, according to the people on the receiving end, not materializing.
Seventy-two percent of consumers who preferred human support said that the availability of live agents directly influenced their buying decisions. They weren't just complaining about chatbots - they were actively choosing where to spend money based on whether a human would be available if something went wrong.
Where AI Worked (Slightly)
The survey wasn't entirely negative. Forty percent of consumers expressed confidence that AI could handle simple issues as effectively as a human agent. Sixty-one percent liked the idea of an AI that remembered their past interactions to speed up future support. Fifty-five percent responded positively to the concept of AI detecting frustration through sentiment analysis and routing them to a human for complex issues.
But these bright spots came with constraints. Half the respondents reacted negatively to the idea of AI predicting problems before they happened - the proactive support model that many enterprise platforms were already building toward. And forty-nine percent said they'd only be comfortable using AI support if they could switch to a human agent at any time. The tolerance for AI was conditional on having an escape hatch.
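The conditional tolerance the survey describes - AI for routine queries, a human always one step away - maps to a simple routing rule. Below is a minimal sketch of a sentiment-gated escalation check; the keyword lists, failure cap, and thresholds are illustrative assumptions, not a production sentiment model.

```python
# Illustrative sketch: route a message to a human when frustration is
# detected or the customer explicitly asks for one. The keyword lists
# and the two-failure cap are placeholder assumptions, not real tuning.

FRUSTRATION_MARKERS = {"ridiculous", "unacceptable", "frustrated", "angry",
                       "third time", "still broken", "cancel"}
HUMAN_REQUESTS = {"human", "agent", "person", "representative"}

def route(message: str, failed_ai_turns: int) -> str:
    """Return 'human' or 'ai' for the next conversational turn."""
    text = message.lower()
    # Escape hatch: the customer can always opt out of the bot.
    if any(word in text for word in HUMAN_REQUESTS):
        return "human"
    # Sentiment gate: visible frustration goes straight to a person.
    if any(marker in text for marker in FRUSTRATION_MARKERS):
        return "human"
    # Failure cap: don't let the bot fail repeatedly before escalating.
    if failed_ai_turns >= 2:
        return "human"
    return "ai"
```

In a real deployment the keyword checks would be replaced by a trained sentiment classifier, but the shape of the policy - opt-out, frustration gate, failure cap - is what the 49% "escape hatch" finding implies.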
Scott Stavretis, CEO of Acquire BPO, framed the findings as a calibration problem: "Unlocking the power of AI is essential for companies to gain a competitive edge, however the future of exceptional customer service requires striking the right balance between AI and human support."
In other words: the customers aren't opposed to AI handling simple tasks. They're opposed to AI being the only option.
The Gartner Prediction
As the Acquire BPO numbers were circulating, Gartner published a prediction that gave the spending data its punchline: by 2027, half of all companies that had cut customer service staff because of AI would end up rehiring people to perform the same functions, just under different job titles.
The prediction didn't come from speculation. Kathy Ross, senior director analyst in Gartner's Customer Service and Support practice, connected it to what was already happening: "Most recent workforce reductions were influenced by broader economic conditions rather than automation alone. As organizations encounter the limits of AI and rising customer expectations, they will need to reinvest in human talent to sustain service quality and growth."
The sequence is straightforward. A company automates customer service to cut headcount. Service quality drops because the AI can't handle the full range of customer problems. Customer satisfaction declines. Churn increases. The company hires back human agents - but now they're called "AI escalation specialists" or "customer experience engineers" instead of "support agents." The headcount returns. The savings don't.
Intercom's 2026 Customer Service Transformation Report added supporting data: 82% of senior leaders had already invested in AI for customer service in the previous twelve months, and 87% planned to make additional investments in 2026. The spending was accelerating even as the evidence of diminishing returns accumulated.
The Air Canada Precedent
The risks weren't limited to customer satisfaction scores. They extended to legal liability.
In 2022, a Vancouver resident named Jake Moffatt used Air Canada's AI chatbot to ask whether the airline offered reduced bereavement fares for a last-minute flight to attend his grandmother's funeral. The chatbot told him he could get a partial refund if he booked at full price and applied for the discount retroactively. This was wrong. Air Canada's actual bereavement policy required advance approval and didn't allow retroactive claims.
When Moffatt tried to claim the refund, Air Canada refused, saying the information was incorrect. When he took the matter to a Canadian civil resolution tribunal, Air Canada's defense was that the chatbot was "a separate legal entity" and the airline couldn't be held responsible for what it told customers. The tribunal rejected this argument and ordered Air Canada to pay Moffatt the difference.
The Air Canada case established a precedent: companies are liable for what their AI tells customers. If the chatbot makes a promise, the company owns that promise. This shifted the risk calculation for AI customer service deployments. It wasn't just churn risk - it was legal exposure. Every hallucinated response from a customer-facing chatbot was a potential claim.
The Empathy Problem
The Acquire BPO survey identified something that doesn't show up in ROI spreadsheets: the empathy gap. Consumers interacting with AI chatbots consistently report feeling unheard, especially when dealing with problems that have emotional weight - billing disputes, service failures, warranty claims, anything involving frustration or urgency.
The 2.5x satisfaction gap between human and AI interactions is partly explained by resolution rates, but it's also about the interaction itself. A human agent who says "I understand that must be frustrating, let me look into this for you" and means it creates a different experience than a chatbot that says the same words from a template. Customers can tell the difference. The half who felt negatively about increasing AI reliance cited "lack of personal touch" as a primary concern - not resolution speed, not accuracy, but the quality of the interaction.
This matters because customer service interactions disproportionately happen during moments of friction. Nobody contacts support when everything is working. The customer reaching out has already encountered a problem and is now evaluating whether the company cares enough to fix it. Inserting an AI into that moment means the customer's first experience of the company's response to their problem is a chatbot. For seventy percent of them, if that chatbot fails, they're done.
The Cost Equation
The economic logic behind AI customer service deployments is simple: human agent costs scale with interaction volume, while AI costs are largely fixed per deployment. At scale, the per-interaction cost of AI is lower than the per-interaction cost of humans. Companies that handle millions of support contacts annually can save tens of millions of dollars by routing even a fraction to AI.
This math is correct as far as it goes. What it leaves out is the revenue side. If 70% of consumers will switch brands after one bad AI interaction, and the AI handles millions of interactions with some failure rate, the number of customers lost to bad AI experiences can be calculated - and it's not small. A company with ten million customer service interactions per year that routes half to AI, with a 10% failure rate on AI interactions, generates 500,000 bad experiences. If 70% of those customers consider leaving and even a fraction follow through, the customer acquisition cost to replace them dwarfs the savings from reducing headcount.
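The arithmetic above can be made explicit. The figures for volume, AI share, failure rate, and the 70% switching statistic come from the article; the follow-through rate, acquisition cost, and per-interaction savings are assumed illustrative values, and the conclusion is sensitive to them.

```python
# Back-of-the-envelope churn cost model using the article's figures.
# follow_through, acquisition_cost, and savings_per_interaction are
# assumed illustrative values, not data from the survey.

interactions = 10_000_000      # annual support contacts (from article)
ai_share = 0.50                # fraction routed to AI (from article)
ai_failure_rate = 0.10         # bad AI interactions (from article)
consider_switching = 0.70      # Acquire BPO survey figure

follow_through = 0.20          # assumed: share of switchers who actually leave
acquisition_cost = 500.0       # assumed: cost to replace one customer
savings_per_interaction = 5.0  # assumed: AI saving vs. human handling

bad_experiences = interactions * ai_share * ai_failure_rate
customers_lost = bad_experiences * consider_switching * follow_through
replacement_cost = customers_lost * acquisition_cost
gross_savings = interactions * ai_share * savings_per_interaction

print(f"bad experiences:  {bad_experiences:,.0f}")
print(f"customers lost:   {customers_lost:,.0f}")
print(f"replacement cost: ${replacement_cost:,.0f}")
print(f"gross AI savings: ${gross_savings:,.0f}")
```

Under these assumptions, replacing the churned customers costs $35 million against $25 million in gross savings - and that is before counting the lifetime revenue the departed customers take with them.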
The companies that have navigated this successfully are the ones that use AI for triage and routing rather than resolution. AI identifies the problem category, pulls up relevant account information, and routes the customer to the right human agent with context already loaded. The human agent spends less time gathering information and more time solving the problem. The interaction is shorter, the resolution rate is higher, and the customer talks to a person.
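The triage-and-routing pattern described above can be sketched as a small pipeline: AI classifies the issue and assembles account context, then hands a pre-loaded ticket to a human queue rather than attempting resolution itself. The categories, keyword classifier, and in-memory account store below are stand-ins for real systems.

```python
# Illustrative triage pipeline: classify the issue, attach account context,
# and hand a pre-loaded ticket to a human queue. The keyword classifier and
# in-memory account store are hypothetical stand-ins for real services.

from dataclasses import dataclass, field

CATEGORY_KEYWORDS = {
    "billing": ("invoice", "charge", "refund", "payment"),
    "technical": ("error", "crash", "broken", "login"),
    "shipping": ("delivery", "tracking", "package", "late"),
}

ACCOUNTS = {  # stand-in for a real account lookup service
    "c-1001": {"plan": "pro", "open_tickets": 1, "tenure_months": 26},
}

@dataclass
class Ticket:
    customer_id: str
    message: str
    category: str = "general"
    context: dict = field(default_factory=dict)
    queue: str = "general-support"

def triage(customer_id: str, message: str) -> Ticket:
    """Classify the issue and attach account context before a human sees it."""
    text = message.lower()
    ticket = Ticket(customer_id, message)
    for category, keywords in CATEGORY_KEYWORDS.items():
        if any(k in text for k in keywords):
            ticket.category = category
            ticket.queue = f"{category}-support"
            break
    ticket.context = ACCOUNTS.get(customer_id, {})
    return ticket

t = triage("c-1001", "I was charged twice on my last invoice")
print(t.queue, t.category, t.context)
```

The design choice is that the AI's output is a routing decision plus context, never a customer-facing answer - which is exactly what keeps the Air Canada liability problem out of the loop.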
What the Numbers Say
The collection of data points from Acquire BPO, Gartner, Intercom, and the Air Canada precedent converges on a conclusion that enterprise AI vendors are not eager to promote: AI customer service works well for simple, transactional queries where the customer doesn't need empathy and the stakes of a wrong answer are low. For everything else - complex problems, emotional situations, policy questions with financial implications - human agents produce better outcomes.
The 70% switching statistic is the number that should concern executives most. Customer loyalty built over years of good service can be destroyed by a single chatbot interaction that goes wrong. The cost of acquiring a new customer to replace the one who left is, by most industry estimates, five to seven times higher than the cost of retaining an existing one. The math that justifies AI-only customer service only works if you ignore the math on customer lifetime value.
Gartner's prediction that half of AI-driven layoffs in customer service will reverse by 2027 suggests the industry is already learning this lesson. The question is whether companies learn it from the data or from their churn rates.