The First Argument That an AI Chatbot Is Its Own Legal Entity Fails

sdecoret / shutterstock.com

Air Canada has made history as the first company to argue in a lawsuit that its artificial intelligence (AI) chatbot is a separate legal entity. As first reported by the Canadian Broadcasting Corporation (CBC), the airline was sued by Jake Moffatt after his attempt to receive a bereavement airfare rate.

Using the airline's AI chatbot, Moffatt asked about getting the reduced fare following the death of his grandmother. The chatbot told him that if he booked immediately, he could recover the difference by submitting a refund application, but only if he filed it within 90 days. Taking the advice at face value, he shelled out $1,630 for the tickets.

After he returned home, Moffatt's application was denied. When he challenged the denial with screenshots of the chat, the Air Canada representative he spoke with apologized that the chatbot had given him “misleading words,” but insisted that the correct policy was stated elsewhere on the website.

When Moffatt took Air Canada to small claims court, the airline argued that its chatbot is “a separate legal entity responsible for its own actions.” Never mind that the website and the chatbot are both owned and programmed by Air Canada.

Adjudicator Christopher Rivers called the argument a “remarkable submission” and ruled that Air Canada was responsible for the mistakes of its chatbot. He also pointed out that while Air Canada may have had the correct information elsewhere on its website, Moffatt had no reason to know that one part of the website would be right and the chatbot wrong.

Mistakes like this by AI chatbots have been slowly eroding the systems that people depend on. As people are replaced by programs built to mimic human answers, the humanity is being drained out of customer service and business transactions. Thankfully, adjudicators like Rivers are holding companies responsible for the actions of their bots.