Companies are adding AI chatbots to their websites, and the move is producing some unintended consequences. (Courtesy of the Jacksonville Business Journal) — Air Canada recently found itself on the hook for erroneous refund information produced by its AI chatbot, with a Canadian tribunal ordering the airline to compensate a customer who was denied a bereavement discount.

The denial followed the passenger's claim that the chatbot had told him a discount was available, leading him to purchase tickets. The company argued that it was not liable for the chatbot's claims and that the correct bereavement fare policy appeared elsewhere on its website, arguments the tribunal rejected.

In a separate incident, a viral post on X (formerly known as Twitter) late last year chronicled how a user tricked a Chevrolet dealership chatbot into promising to sell a brand-new Tahoe for $1.

The two cases highlight how artificial intelligence can produce unintended consequences as companies embrace the technology. That is why legal experts are urging businesses to proceed with caution before diving into AI.

In the United States, in particular, legal experts say companies need to be aware that they likely are on the hook for claims made by AI chatbots they deploy on their websites — whether those claims are true or false.

“American companies are indeed responsible for commitments made via chatbots on their websites, similar to any other official communication channel,” said Beth Simone Noveck, director of the Burnes Center for Social Change and professor of experiential AI at Northeastern University in Boston.

Noveck said that liability is why it’s so important for businesses to train, test and oversee these new AI systems. 

“Merging AI with human oversight is crucial to maintain both accuracy and accountability, safeguarding against misrepresentations that could mislead consumers and legally bind the company,” Noveck said in an email.

That’s because companies must legally honor commitments they make upon which a customer could be reasonably expected to rely — an old concept AI systems may run up against.

“What is new is that companies are rushing to adopt untrained and untested AI to decrease their costs, potentially leading to a degradation of customer service,” Noveck said.

AI challenges continue

The latest wrinkle in the use of AI tools comes after so-called "generative AI" exploded in popularity with the release of OpenAI's ChatGPT in November 2022. The service quickly attracted 100 million users and now counts 100 million weekly active users.

Other AI systems, such as Alphabet Inc.'s Gemini, have rapidly proliferated as companies rush into the nascent market.

AI systems, however, have shown a troubling habit of "hallucinating," in which they confidently present false information. They have also proved susceptible to being coerced or manipulated into giving certain responses.

Despite how new AI technology is, experts agree: Companies are liable for what’s on their websites — and that includes chatbots accessed through those sites.

“A chatbot is basically an extension of your website,” said Ben Michael, an attorney at Michael & Associates in Austin, Texas, in an email. “It’s no different from having written information available on your website. You’re held accountable for claims made in writing on a website. A chatbot’s information is no different.”

Jerry Levine, general counsel at ContractPodAI and a longtime attorney in New York, said that while the law around AI is still evolving, companies need to keep their websites updated and recognize that customers will treat their chatbots as authoritative sources.

“In addition, organizations should be aware that they will likely not be able to rely on a defense of ‘We didn’t know what the chatbot was doing’ (especially if it’s publicly available on the company’s own website),” Levine said in an email. 

Human engagement remains important

Levine said companies should limit the kinds of information chatbots can deliver and always ensure the source of that information is provided. For more complex questions or issues, he said, companies should retain a human element of engagement.

“As beneficial as this technology is, it doesn’t outweigh the value of human oversight, and a key to this technology being successful in general is for humans to work in collaboration with it vs. relying on the sole outputs of the chatbot,” Levine said.

A survey of 774 online business owners and 767 customers by chatbot and automation platform Tidio found that 88% of customers had at least one conversation with a chatbot in the past year. It also revealed that 62% of respondents said they would use an online chatbot for their issues whereas 38% would wait for a human to help them.

Additionally, 19% of the owners said their company already uses a chatbot; 62% said they plan to add a chatbot in the future. The remaining 19% said they don’t want a chatbot.

Experts say companies need to proceed with caution when it comes to using AI to cut labor costs, as there are several important factors to consider that transcend the bottom line.

Many companies are holding off on incorporating the technology in hopes that the regulatory landscape becomes more clear.

Attorney Seth Price previously spoke to The Playbook about how to roll out AI tools in your own workplace.