AI Chatbot Liability: What the Air Canada Case Teaches About Using AI in Contact Centers
A few years ago, something unusual happened in Canada.
An airline’s AI chatbot told a customer something incorrect. Something that wasn’t even in the company’s official policy.
And the airline lost a court case over it.
Not because of a technical bug.
Not because the AI was malicious.
But because customers trusted what the AI said, and the company was held legally responsible for it.
This shouldn’t be an isolated news item.
It should be a wake-up call.
Because we are rushing to adopt AI inside contact centers with the assumption that:
If it’s automated, it must be efficient.
If it’s smart, it must be trustworthy.
That’s not true. Not even close.
The Air Canada Case: A Reality Check
Here’s what happened in simple terms:
A customer used Air Canada’s AI chatbot on the airline website to understand its bereavement fare policy, a special discount when you have to travel due to a death in the family.
The chatbot advised him to purchase a ticket at full price and then apply for a bereavement refund within 90 days.
In reality, the airline’s actual policy did not allow retroactive bereavement refunds.
He did what the bot advised.
Booked the tickets and submitted the refund application.
The claim was denied.
Then he sued.
And the tribunal ruled in his favour.
Air Canada had to pay damages because the chatbot’s answer was misleading, and the company couldn’t argue that the AI was somehow separate from them.
This Isn’t About One Airline Having a Bad Bot
This is about a fundamental truth of AI today:
AI doesn’t know what’s true. It only predicts what looks right.
That’s how LLM-based systems work.
That prediction process means the model will confidently give plausible but incorrect answers.
In customer experience, that’s dangerous.
Because customers trust the answer.
If you put AI on the frontline without human oversight, you transfer that trust, and the risk, to a machine that isn’t designed to verify truth.
Regulations Won’t Save Us Either
Some people think:
So let’s just regulate AI.
Maybe.
But here’s the thing:
Regulations can set standards, but they don’t improve the real-time accuracy of an AI system.
They don’t prevent an AI agent from hallucinating.
They just punish companies after the damage is done.
Like what happened with Air Canada.
Regulation is reactive.
It doesn’t fix predictive errors.
And in a fast-moving contact center, post-facto punishment doesn’t help customers in the moment.
This legal ruling doesn’t just raise a liability question.
It raises a deeper question:
Who is responsible for real-time answers provided by algorithms?
And the answer is clear:
The company deploying them. Always.
AI doesn’t have agency. It doesn’t bear responsibility.
We do.
So If Not AI on the Frontline, Then What?
AI in contact centers shouldn’t be about replacing humans.
It should be about augmenting them safely.
There’s a distinction few leaders are making clearly:
Critical decision-making information should not be directly served by AI.
That includes:
- Policy interpretations.
- Refund rules.
- Contract terms.
- Compliance obligations.
- Financial impact scenarios.
Because these carry legal and emotional consequences.
AI can hallucinate, mix contexts, and produce misinformation that sounds authoritative, sometimes more credible than official text.
That’s why deploying AI for critical customer decisions without verification is a disaster waiting to happen.
Instead, AI should be used internally to support humans, not replace them in critical roles.
Where AI Actually Works in Contact Centers
AI can support agents in ways that improve service without exposing customers to hallucinated risks:
Suggested Replies
AI can propose starter responses based on context, but a human agent reviews and approves them before sending.
Real-Time Information Retrieval
AI can quickly pull relevant policy snippets, but it should cite sources rather than paraphrase freely.
Sentiment Detection
Understanding when a customer is frustrated or upset helps the agent respond empathetically.
Summarization
AI can generate concise summaries of long chat or call logs for agent review, not final answers.
Prioritization
AI can identify high-impact tickets so agents focus where they’re needed most.
These are examples of AI assisting humans, not AI telling customers what to do.
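The suggested-replies pattern can be sketched as a draft-then-approve flow. This is a minimal illustration, not a real system: the function names, the tiny knowledge base, and the status values are all hypothetical.

```python
# Sketch of a draft-then-approve flow for AI-suggested replies.
# KNOWLEDGE_BASE, draft_reply, and send_to_customer are illustrative
# assumptions, not a real contact-center API.

KNOWLEDGE_BASE = {
    "bereavement": "Bereavement fares must be requested before travel.",
}

def draft_reply(customer_message: str) -> dict:
    """Produce a draft reply plus the source snippet it is based on.
    The draft is never sent directly; a human agent must approve it."""
    topic = "bereavement" if "bereavement" in customer_message.lower() else None
    source = KNOWLEDGE_BASE.get(topic, "No matching policy found.")
    return {
        "draft": f"Here is the relevant policy excerpt: {source}",
        "source": source,
        "status": "PENDING_AGENT_REVIEW",  # blocks any auto-send
    }

def send_to_customer(reply: dict, agent_approved: bool) -> str:
    # Hard gate: nothing reaches the customer without explicit approval.
    if not agent_approved or reply["status"] != "PENDING_AGENT_REVIEW":
        return "BLOCKED"
    return reply["draft"]
```

The key design choice is that the approval gate lives in the send path itself, so an AI draft cannot reach a customer by accident.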
How Good Implementation Looks
Let’s break down what getting it right actually means in practice:
AI as the Assistant, Not the Authority
If an AI assistant says:
“According to policy, you should do X.”
That’s risky.
But if it says:
“Here are the policy excerpts relevant to your request. Please review.”
That’s assistance.
One is directive.
One is supportive.
Only one should go to customers without verification.
Humans Stay In the Critical Loop
For non-critical FAQ responses, AI can answer directly, provided the answers are verified against an approved knowledge base and there’s a fallback to a human.
For any refund-related matter, including rules, consequences, exceptions, or legal terms, the agent must validate it before the customer sees it.
AI should be preparing, not confirming.
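The critical/non-critical split described above can be sketched as simple topic-based routing. The topic list, keywords, and routing labels here are illustrative assumptions; a real system would use intent classification rather than keyword matching.

```python
# Sketch of topic-based gating: messages touching legally or financially
# sensitive topics are always routed to a human before any AI-generated
# answer reaches the customer. Topic list and labels are hypothetical.
import re

CRITICAL_TOPICS = {"refund", "policy", "contract", "compliance", "billing"}

def requires_human_validation(message: str) -> bool:
    """Flag messages that mention a critical topic keyword."""
    words = set(re.findall(r"[a-z]+", message.lower()))
    return bool(words & CRITICAL_TOPICS)

def route(message: str) -> str:
    if requires_human_validation(message):
        return "QUEUE_FOR_AGENT"   # AI may prepare a draft, not answer
    return "AI_FAQ_RESPONSE"       # low-risk FAQ, with a human fallback
```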
AI Systems Need Guardrails
This is non-negotiable.
You don’t set and forget.
Hallucinations aren’t occasional typos. They are a systemic characteristic of how LLMs are built.
Guardrails include:
- Truth-verification layers.
- Source citation.
- Frequent retraining.
- Policy alignment checks.
- Regular testing with real use cases.
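One of the guardrails above, source citation, can be sketched as a release check: an AI answer is only releasable if it quotes verbatim from an approved policy document. The policy list and function names are illustrative assumptions, not a real verification library.

```python
# Sketch of a source-citation guardrail: an answer passes only if it
# contains a verbatim excerpt from an approved policy document.
# APPROVED_POLICIES and the check itself are illustrative assumptions.

APPROVED_POLICIES = [
    "Bereavement fares must be requested before travel.",
    "Refund requests are processed within 30 days.",
]

def passes_citation_check(answer: str) -> bool:
    """Pass only if the answer quotes at least one approved excerpt."""
    return any(excerpt in answer for excerpt in APPROVED_POLICIES)

def guard(answer: str) -> str:
    # Anything that fails the check falls back to a human agent,
    # rather than being blocked silently or sent anyway.
    if passes_citation_check(answer):
        return answer
    return "ESCALATE_TO_HUMAN"
```

Exact-match quoting is deliberately strict: a paraphrase that merely sounds like the policy, the failure mode in the Air Canada case, would not pass.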
Transparency With Customers
This is crucial.
If a customer interacts with any automated system, let them know:
- This is AI-assisted.
- Information may need human verification.
- Click here for official policy.
Transparency builds trust.
Assumed authority destroys it.
What Air Canada Tells Us About the Future
If a court can hold a company liable for misinformation from a chatbot, it signals a fundamental shift.
Errors from machines are treated as company actions, not machine quirks.
This means:
- Companies must own accuracy.
- Deployment strategies must focus on human + AI collaboration, not AI autonomy.
- AI tools must have clear constraints, not full decision-making authority.
- Customers must be protected from unverified machine outputs.
A Simple Rule I Use Now
AI can be trusted to assist humans, but not to make decisions that affect customers’ rights, money, or emotions.
If it’s critical, a human must validate it first.
That’s not caution.
That’s responsibility.
We are standing at a crossroads:
One path leads to an automated service that sounds smart but misleads and misinforms.
The other leads to human-centered service where AI makes humans better, not obsolete.
The Air Canada ruling isn’t a warning against innovation.
It’s a warning against unaccountable automation.
If we get this wrong, customers won’t just be disappointed.
They’ll lose trust.
And trust in customer experience is the one thing no technology can ever replace.