Air Canada liable for its lying chatbot's promises


Jake Moffatt used Air Canada's website chatbot to ask about bereavement fares after his grandmother died. The chatbot told him he could book at full price and apply for a bereavement discount within 90 days. Air Canada's actual policy did not allow retroactive bereavement fare claims. When Moffatt applied, the airline denied the refund and admitted the chatbot had provided "misleading words" - but argued Moffatt should have checked the static webpage instead. British Columbia's Civil Resolution Tribunal ruled in Moffatt's favor in February 2024, finding Air Canada liable for negligent misrepresentation and rejecting the airline's argument that it wasn't responsible for its own chatbot's statements.

Incident Details

Severity: Facepalm
Company: Air Canada
Perpetrator: Product Manager
Incident Date: November 2022
Blast Radius: Legal liability; refund + fees; policy/process review.

Tech Stack

AI customer-service chatbot
Website CMS
Support workflow

The Booking

In late 2022, Jake Moffatt's grandmother passed away. Moffatt, a British Columbia resident, needed to fly from Vancouver to Toronto for the funeral. He went to Air Canada's website to book tickets and, while there, asked the airline's automated chatbot about bereavement fares - discounted rates that airlines offer to passengers traveling due to a death in the family.

The chatbot gave him an answer. It told Moffatt that Air Canada offered bereavement rates, and that he could either apply for the reduced fare before his flight or submit a claim for a partial refund within 90 days after traveling. The chatbot directed him to the bereavement travel webpage for more details.

Moffatt took the chatbot at its word. He booked the flight at full price, planning to apply for the bereavement discount after traveling. On November 11, 2022, he also spoke with an Air Canada representative by phone who told him the discounted fare for each leg would be approximately $380 - significantly less than the full price he'd paid.

After the trip, Moffatt submitted his application for the bereavement fare. He filled out the refund form, provided a death certificate, and waited.

The Denial

Air Canada denied the claim. The airline's bereavement policy, as stated on its static bereavement travel webpage, did not allow bereavement fares to be applied retroactively. You had to request the discounted fare before you booked your flight. If you booked at full price and traveled, you couldn't go back and get the difference afterward.

The chatbot had told Moffatt the opposite of the actual policy.

From December 2022 to February 2023, Moffatt corresponded with Air Canada by phone and email, trying to get the partial refund the chatbot had promised. In February 2023, he emailed the airline with a screenshot of the chatbot conversation showing the 90-day window language.

An Air Canada representative responded and admitted that the chatbot had provided "misleading words." But the representative pointed to the bereavement travel webpage as the authoritative source, where the real policy was spelled out, and told Moffatt that the chatbot's answer didn't match the airline's actual terms. Air Canada said it had noted the problem and would update the chatbot. It did not, however, give Moffatt the refund.

The Tribunal Claim

Moffatt filed a complaint with British Columbia's Civil Resolution Tribunal (CRT), a small-claims-style body that handles consumer disputes. The case, Moffatt v. Air Canada (2024 BCCRT 149), was decided on February 14, 2024, by Tribunal Member Christopher Rivers.

Air Canada's defense strategy was striking. The airline argued that it could not be held responsible for the chatbot's statements because the chatbot was, in effect, a separate legal entity responsible for its own actions. The airline's position was that customers should understand that a chatbot's responses might not be accurate, and that the burden fell on them to verify that information against the airline's static webpages.

In other words: our chatbot said something wrong, but you should have known better than to trust our chatbot.

The Ruling

Tribunal Member Rivers was not persuaded. His reasoning was direct and contained a passage that would be widely quoted in tech policy discussions afterward: Air Canada did not explain "why the webpage was inherently more trustworthy than the chatbot" or "why customers should be expected to double-check information found in one part of Air Canada's website against another part."

The tribunal found that Air Canada owed Moffatt a duty of care and was responsible for all the information available on its website, including information provided by the chatbot. The chatbot was part of Air Canada's website. It was deployed by Air Canada, on Air Canada's domain, to serve Air Canada's customers. The idea that it somehow operated as an independent agent whose statements the airline wasn't accountable for had no legal basis.

The ruling classified the chatbot's false statement as negligent misrepresentation. The tribunal found that Moffatt had relied on the chatbot's information, that his reliance was reasonable, and that Air Canada had breached its duty to provide accurate information through its customer-facing tools.

Air Canada was ordered to pay Moffatt damages representing the difference between what he paid and what the bereavement fare would have been, plus interest and tribunal fees - $812.02 CAD in total, a modest sum for a major airline. The legal precedent was worth considerably more.

The "Separate Entity" Defense

Air Canada's argument that the chatbot was a separate entity deserves closer examination, because it reflects a defense that other companies have attempted and will likely attempt again. The logic goes: we deployed an AI system, but we can't control everything it says, so we shouldn't be liable for its errors.

The tribunal rejected this cleanly, but the attempt is instructive. If the defense had succeeded, it would have created a framework where companies could deploy AI systems on their websites, have those systems provide incorrect information that causes financial damage to customers, and then disclaim responsibility by pointing to the AI's autonomous nature. The company gets the benefit of the chatbot (reduced customer service costs, 24/7 availability) without the liability for what it tells people.

The CRT's ruling established clearly that, at least in British Columbia, this doesn't fly. A company is responsible for all information on its website, regardless of whether that information was written by a human, generated by a chatbot, or produced by any other automated system the company chose to deploy.

Why the Chatbot Got It Wrong

The ruling itself didn't detail the technical reasons behind the chatbot's error. Air Canada did not present evidence about how the chatbot was trained, tested, or configured. But the failure mode is recognizable to anyone familiar with how customer service chatbots ingest and present information.

Air Canada's bereavement policy presumably existed as text on the bereavement fare webpage. The chatbot was designed to answer customer questions using information from Air Canada's website. When Moffatt asked about bereavement fares, the bot needed to identify the relevant information and present it accurately.
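The likely shape of that process is a retrieve-then-generate pipeline: pull the relevant policy text, then have a language model phrase an answer from it. Nothing in the ruling confirms Air Canada's architecture, so the sketch below is purely illustrative - hypothetical policy text, a toy keyword retriever, and a stand-in generation step - but it shows where fidelity can silently break: the retrieval can be perfect and the paraphrase can still invert the policy.

```python
# Hypothetical sketch of a retrieve-then-generate policy chatbot.
# Nothing here reflects Air Canada's actual system; the policy text,
# retriever, and generation step are all stand-ins.

POLICY_PAGES = {
    "bereavement": (
        "Bereavement fares must be requested before travel. The policy "
        "does not allow bereavement refunds to be claimed retroactively "
        "after the flight."
    ),
    "baggage": "Each passenger may check one bag up to 23 kg.",
}

def retrieve(question: str) -> str:
    """Toy keyword retriever: return the policy page that shares the
    most words with the question."""
    q_words = set(question.lower().split())
    return max(
        POLICY_PAGES.values(),
        key=lambda page: len(q_words & set(page.lower().split())),
    )

def generate_answer(question: str, source: str) -> str:
    """Stand-in for the language-model step. A real model paraphrases
    the source freely, and nothing in this step verifies that the
    paraphrase preserves the policy's meaning -- this is where
    'request before travel' can come out as 'claim within 90 days
    after travel'."""
    return f"Per our policy: {source}"

question = "Can I get a bereavement fare refund?"
print(generate_answer(question, retrieve(question)))
```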

Somewhere in that process, the chatbot produced a statement that reversed the actual policy. Instead of saying "you must apply for bereavement fares before booking," it said you could apply for a retroactive refund within 90 days. The chatbot didn't add a caveat or express uncertainty. It presented the false information with the same confidence as any other response.

This is the standard failure mode of AI chatbots deployed to represent company policy. The systems are designed to sound helpful and authoritative, because that's what makes them useful as customer service tools. But the same design that makes them sound authoritative also means they sound authoritative when they're wrong. A human customer service agent who wasn't sure about the bereavement policy might say "let me check on that" or transfer the call. A chatbot that doesn't know the right answer still produces an answer, and that answer comes formatted as though it's reliable.
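One mitigation is to make the bot earn its confidence: check whether a generated answer is actually supported by the retrieved policy text, and escalate to a human when it isn't. The sketch below uses a crude lexical-overlap heuristic as a stand-in for real grounding verification (production systems use entailment models or citation checks); the function names and threshold are illustrative, not any vendor's API.

```python
# Hypothetical guardrail: gate answers on a grounding check and escalate
# when the answer isn't supported by the retrieved policy text. The
# lexical-overlap heuristic is a crude stand-in for real verification.

def supported_by_source(answer: str, source: str, threshold: float = 0.5) -> bool:
    """Fraction of the answer's content words that also appear in the
    source text; below the threshold, treat the answer as ungrounded."""
    answer_words = {w for w in answer.lower().split() if len(w) > 3}
    source_words = set(source.lower().split())
    if not answer_words:
        return False
    return len(answer_words & source_words) / len(answer_words) >= threshold

def respond(answer: str, source: str) -> str:
    if supported_by_source(answer, source):
        return answer
    # The chatbot equivalent of "let me check on that": hand off rather
    # than present an unverified claim with full confidence.
    return "I'm not certain about that policy - let me connect you with an agent."

policy = ("Bereavement fares must be requested before travel and are "
          "not available retroactively after the flight.")
wrong = "You can apply for a bereavement refund within 90 days after your flight."
print(respond(wrong, policy))  # escalates: the claim isn't in the policy text
```

The design point is the second branch: a system that can decline to answer fails the way a cautious human agent does, instead of the way Air Canada's chatbot did.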

The Broader Impact

The Moffatt ruling became one of the most widely cited examples of AI chatbot liability in the first half of 2024. It was covered by The Guardian, the Washington Post, Forbes, CBS News, the BBC, and dozens of other outlets. Legal commentators and AI policy researchers referenced it as a clear precedent for holding companies accountable for their AI systems' statements.

The ruling's practical impact was straightforward: companies that deploy AI chatbots to interact with customers are legally responsible for what those chatbots say. The chatbot is the company's agent, not an independent party. If the chatbot gives wrong information and a customer relies on it to their detriment, the company bears the financial and legal consequences.

For Air Canada, the financial cost of the ruling was trivial. For the broader industry, the precedent was significant. Many companies had been deploying customer service chatbots without robust accuracy testing, treating them as low-cost alternatives to human agents without considering what happens when the bot gets something wrong. The Moffatt case provided a concrete answer: the same thing that happens when a human customer service agent gives wrong information. The company is liable.

The Customer Experience Question

One detail that received less attention was the customer experience Moffatt endured. He had just lost his grandmother. He was booking emergency travel for a funeral. He turned to the airline's own website for help and was given bad information that cost him money. When he tried to get the promised refund - with screenshots of the chatbot's statements and a death certificate - the airline's response was to tell him the chatbot was wrong and he should have known better.

The correspondence stretched from December 2022 to February 2023 before Moffatt escalated to the tribunal. The ruling didn't come until February 2024 - more than a year after the initial claim. Throughout that time, Air Canada's position was that its own website had given incorrect information but the customer should bear the consequences.

That posture, more than the technical failure of the chatbot itself, is what made the case resonate. It represented a company using an AI system to cut costs on customer service, refusing to honor the system's commitments when they proved costly, and then arguing in a legal proceeding that the AI wasn't really the company's responsibility. The tribunal's ruling said otherwise.
