Workplace policies make AI work for you with less risk

The use of Artificial Intelligence (‘AI’) in Canada and around the world is exploding, and it has become a key focus across political, legal and business arenas. At the same time, more employees are using AI for work, often without guidance or controls in place. Like all emerging technologies, AI is a powerful tool that brings both significant opportunities and risks, and those risks can be mitigated through carefully drafted workplace policies.

Recent AI Cases

Canada’s courts have turned their minds to the use of AI, as reflected in several recent decisions addressing its misuse in court submissions. The most recent of these is the Federal Court’s decision in Lloyd’s Register Canada Ltd. v. Choi, 2025 FC 1233 (‘Lloyd’s’). It represents a cautionary tale for self-represented litigants, highlighting the serious nature of misleading the court with AI-generated materials.

The case involved an application by Lloyd’s to remove a motion filed by the respondent, Munchang Choi, which contained references to non-existent cases, or “AI hallucinations.” Mr. Choi, who was self-represented, claimed that his use of AI tools was limited to drafting and research, and that he had made a mistake when transcribing the reference to a case. This was not, however, the first time Mr. Choi had relied on AI-generated cases, and the applicant raised concerns about the credibility of his explanation.

The Court found that the use of “hallucinated” cases was a serious matter, and that Mr. Choi had failed to take full responsibility for his actions in misleading the court or to express appropriate contrition. It ordered that the motion record be removed from the court file and awarded the applicant costs.

In Ko v Lee, 2025 ONSC 2965, the applicant’s lawyer, Jisuh Lee, cited in her arguments several non-existent or “fake” court cases that had been generated through ChatGPT.

The Court found that Ms. Lee had violated her duties to the court and emphasized that misrepresentation of the law by a lawyer poses real risks of a miscarriage of justice, undermining the dignity of the court and the fairness of the civil justice system. Ms. Lee’s forthcoming and contrite response to the court mitigated the consequences that might otherwise have been imposed.

The significant negative publicity surrounding this case denounced the misuse of AI in Canadian courts and served as a deterrent to the legal profession, reminding lawyers of the serious consequences that can flow from relying on AI-generated submissions without first verifying them.

Finally, Zhang v Chen, 2024 BCSC 285 also involved a lawyer’s use of ChatGPT in the preparation of court materials. The respondent’s lawyer, Ms. Ke, had inadvertently included two fictitious AI-generated cases in her client’s application without verifying them. Ms. Zhang sought costs against Ms. Ke for the time spent addressing the non-existent cases.

The court found that Ms. Ke had not included the cases with an intent to deceive, and that the circumstances of the case did not justify awarding special costs. She had expressed regret for using a generative AI tool that was not fit for legal purposes and had been subjected to significant negative publicity. The court did, however, find Ms. Ke personally liable for costs related to the additional effort and expense resulting from the confusion created by the AI hallucinations.

Aside from these leading decisions, the issue of AI-hallucinated cases has arisen in a variety of legal contexts, including tribunal settings and criminal proceedings. These cases have often involved self-represented litigants with limited resources or legal expertise who may be unaware of AI’s potential to generate non-existent cases.

Mitigating AI Risks Through Workplace Policies

The cases above highlight how important it is for businesses to proactively manage legal and reputational risks by establishing policies and procedures around the responsible use of AI by their employees. These policies should address issues such as the types of AI platforms that may be used, privacy and information security, responsibility for supervising AI use and verifying sources, intellectual property (IP) infringement, and any other relevant issues that may arise in the specific context of the business.

Businesses should also ensure they comply with new laws aimed at AI. In Ontario, for example, Bill 149, the Working for Workers Four Act, 2024, introduced a new requirement, coming into force on January 1, 2026, to disclose the use of AI in the recruitment process in public and online job postings. For more information on this requirement, you can read our recent article by Mira Nemr, Employer Considerations for Job Postings: The Effects of Bills 149 and 190.

Our Employment Law Team Can Help

If you have any questions about AI workplace policies, or need help drafting one, please contact our Employment Law Team.

About the Author

Kate Terroux is the Research Director at Perley-Robertson, Hill & McDougall LLP/s.r.l. Called to the Ontario Bar in 2006, she brings extensive experience providing legal, policy, and governance advice across industries including government, health care, finance, and labour. Kate is recognized for her strategic research insight, mentorship, and ability to help clients manage legal and reputational risk efficiently.
