
Ask these questions to mitigate AI legal risks

Emond Harnden associate Zoriana Priadka tells us how

Emond Harnden associate Zoriana Priadka

While the AI train has left the station, there are ways employers can help keep it on track when it comes to mitigating legal risks.

Asking some key questions, whether you’re using a free service like ChatGPT or a custom tool built by a third-party provider, is a good place to start.

Emond Harnden associate Zoriana Priadka is following up on the conversation she kicked off in April with some specific questions you can ask AI tools and the people who build them.

Understanding AI hallucinations and bias

One of the biggest challenges with generative AI tools, especially the ever-present ChatGPT, is understanding how they work.

Rather than prioritizing accuracy, these tools focus on pattern recognition.

Priadka used the recent example of an airline’s AI-powered chatbot to illustrate this point in Emond Harnden’s latest webinar. In that case, a customer was given the wrong information about the airline’s bereavement fare policy, and it conflicted with the information on the company’s website. Ultimately, the airline was found liable for the chatbot’s mistake to the tune of hundreds of dollars in damages and court fees. 

“The question here is ‘Can you guarantee the accuracy of AI-generated output?’,” said Priadka. “Can you ensure that the chatbot isn’t potentially creating liability for the company by giving wrong or ambiguous results?”

Priadka explained that this reliance on pattern recognition is why AI ‘hallucinations’ occur. An AI tool is only as good as the training data fed into it; if that data is incorrect or biased, the tool can produce confident but false results. To mitigate this risk, she suggests testing the tool by asking it a question you already know the answer to.

“When I did that with a case law example, I got a fake result,” she said. 

She continued the interrogation by asking the tool, “Is this a real case?” It responded by confirming that it was, then went one step further: it cited a source to back up its claim, which was also made up.
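
For teams buying or building custom tools, this known-answer test can even be scripted. Below is a minimal sketch of the idea; ask_tool is a hypothetical placeholder standing in for whatever chatbot or vendor API you actually use, and the questions and answers are illustrative only.

    # A minimal sketch of the known-answer test described above.
    # `ask_tool` is a hypothetical placeholder; swap in a real call
    # to the AI tool or vendor API under test.

    def ask_tool(question: str) -> str:
        # Placeholder: replace with an actual query to the tool.
        return "..."

    # Questions paired with answers you have verified independently,
    # including a deliberately fake citation to probe for hallucinations.
    KNOWN_ANSWERS = {
        "What is 12 multiplied by 12?": "144",
        "Is 'Smith v. Example Airlines (2099)' a real case?": "no",
    }

    def spot_check() -> None:
        for question, expected in KNOWN_ANSWERS.items():
            reply = ask_tool(question)
            # Crude substring match; a human should still review failures.
            verdict = "PASS" if expected.lower() in reply.lower() else "REVIEW"
            print(f"{verdict}: {question}")

    if __name__ == "__main__":
        spot_check()

Any question flagged for review is a prompt for human follow-up, not proof of error on its own.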

Pattern recognition is also a possible culprit behind biases in the context of recruitment or performance reviews. That’s why you have to be mindful of the data you’re feeding into AI tools for these purposes.

“If you’re using AI in the hiring process to screen applications of potential candidates by feeding resumes or characteristics of your current employees into it, you’re essentially giving it the data set to work with,” said Priadka. “If all of your current employees, for instance, happen to be male, you may be implicitly feeding it instructions to seek out more male candidates without meaning to.”

The same thing can happen with other identifying factors, like ethnicity.

“In that context, you want to find out how to use the tool effectively to avoid biases that discriminate against some people and unintentionally exclude a potentially great employee,” she said.
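
One practical way to watch for this, sketched below with hypothetical data, is to periodically export the tool’s screening decisions and compare selection rates across groups. The 80 per cent threshold is borrowed from the US EEOC’s ‘four-fifths’ guideline and is used here only as a rough heuristic, not a legal standard in any jurisdiction.

    # A minimal sketch with hypothetical data: compare the rate at which
    # each group advances past AI screening. A large gap is a signal to
    # audit the tool and its inputs, not a legal finding on its own.

    from collections import defaultdict

    # Hypothetical export of decisions: (group, advanced_to_interview)
    results = [
        ("male", True), ("male", True), ("male", False), ("male", True),
        ("female", False), ("female", True), ("female", False), ("female", False),
    ]

    totals = defaultdict(int)
    advanced = defaultdict(int)
    for group, passed in results:
        totals[group] += 1
        advanced[group] += int(passed)

    rates = {group: advanced[group] / totals[group] for group in totals}
    best = max(rates.values())
    for group, rate in rates.items():
        # Flag any group selected at under 80% of the top group's rate.
        flag = "REVIEW" if rate < 0.8 * best else "ok"
        print(f"{group}: {rate:.0%} selected ({flag})")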

Know your data

Another area for potential liability related to the use of AI tools is data management and security. 

“If you’re using a third party to develop a custom AI tool for your company, ask them questions about the integrity of your data,” said Priadka. 

These could include:

  • How and where is the information being stored?
  • Who can access it? 
  • When and how is it destroyed?
  • Who at the third-party company has access to it?
  • Is the third-party company following your company’s information life cycle policies and procedures?

Jurisdiction is another potential challenge to consider. 

Globally, the legal landscape for AI is still in its early stages, with laws in different countries nowhere near harmonized.

“When you have an international employer, they could be subject to multiple laws that could overlap or be in conflict,” she said. “This includes human rights laws, privacy laws, data security, etc., all of which can vary from country to country.”

She suggests asking questions like:

  • How do we know the information won’t be used in another jurisdiction?
  • If we decide to expand, how will we ensure jurisdictional issues will be addressed?

Since the laws that address these and many other legal complexities are in flux, compliance remains an ongoing question.

When working with any AI tool, the best plan right now is to seek out a thorough legal assessment to ensure your interests are protected. 

To receive regular updates on labour and employment law, subscribe to Emond Harnden’s complimentary Focus Alerts. 

This article is intended to provide readers with general information only. It should not be regarded or relied upon as legal advice or opinion. Accessing, reading, relying on or otherwise using this article does not, under any circumstances, create a lawyer-client relationship between you and Emond Harnden.
