
Meet the professor who’s helping write the rule book on the ethics of AI

Fake legal cases and bad weight loss advice demonstrate some ethical shortcomings of relying on AI

Jason Millar of uOttawa studies AI

Perhaps you’ve considered integrating more AI tools into your work to improve efficiency or save staff time for more valuable tasks.

Perhaps you’ve also heard the stories of businesses getting burned by AI: the lawyer who used ChatGPT for case prep and was given fictional cases, or the eating disorder group that replaced its staff with an AI chatbot that started giving out bad advice.

“When you’re considering these tools, ask yourself, ‘Do I fully understand the limitations of the system I’m applying? Do I understand what risks I’m passing on?’” said Jason Millar, a professor at uOttawa’s School of Engineering Design and Teaching Innovation. “We don’t have a lot of regulation around AI right now, and we’re really just starting to wrap our heads around what kind of documentation and testing needs to be done to roll these systems out responsibly to the public.”

Millar, who is also the Canada Research Chair in the Ethical Engineering of Robotics and Artificial Intelligence and director of the Canadian Robotics and Artificial Intelligence Ethical Design Lab (CRAiEDL.ca), has been studying the ethics of technology and technology design for two decades, including topics such as privacy, safety, and the responsible innovation of AI.

Millar is fostering a growing movement toward uniting engineers and ethicists to chart the course for these world-changing technologies. 

But whether it’s generative AI like DALL·E, automated driving systems, or simple chatbots, ultimately, Millar notes, AI is a tool – and all tools have their own strengths, weaknesses, and risks.

That is why the research Millar and his colleagues are doing is so critical: they are developing tools and methodologies that engineers and policymakers can use to integrate ethical thinking into their daily workflows.

You can use AI for idea generation just as readily as you might pull a book off the shelf and flip to a random page – the difference with AI is that we haven’t yet developed the norms and limits that make for responsible use.

“We don’t always know exactly what data set was used or how the model was trained and so, if we don’t have a lot of transparency with these models, it’s hard to really understand what all the risks are with a specific tool,” he said. “There’s a lot of pressure for companies to use these tools and a lot of opportunity when we get it right, but the risks are significant.”

Millar suggests that companies looking to do more with AI do some research before diving in headfirst.

Even small companies, or those just starting out, should look at forming partnerships with labs such as his to help ensure the right analysis and documentation are completed and that the social and ethical dimensions of potential products are carefully considered.

“Ethical concerns are best dealt with early on in the product development life cycle – you don’t want them getting baked into that whole process such that they become very hard to undo,” he said.

“There are tools and techniques and processes that you can use to do the analysis early on and it doesn’t have to be painful or confusing. It’s a matter of working with people who have the right expertise, and so we’re training designers and engineers to have that expertise so they can go out and support companies in applying AI safely and responsibly.”

To learn more about the ethics of AI, or to get in touch with Jason Millar and his lab, visit CRAiEDL.ca.
