Techopia Live: Determining the Trustworthiness of AI Technology

As more and more companies incorporate artificial intelligence into their daily business, few consider how they’ll monitor or mitigate its potential biases and risks until after an issue has been identified. On Techopia Live, host Sherry Aske spoke with Niraj Bhargava, CEO of Ottawa’s NuEnergy AI, a software enterprise that aims to help organizations measure and manage trust in their AI technology. The following is an edited transcript of the interview.

OBJ: Niraj, what is your elevator pitch for NuEnergy AI?


NB: NuEnergy AI is Canada’s leader in AI governance. We are unique in dedicating our AI and governance experience and expertise to building and delivering guardrails, so our clients can ethically take advantage of the power of AI.

Last month, NuEnergy AI announced the launch of our hosted Machine Trust Platform, which is designed to support the ethical and transparent governance and measurement of artificial intelligence deployments. The platform was launched through a pilot with the RCMP. They’re the first testing department approved through ISED (Innovation, Science and Economic Development Canada) and the Innovative Solutions Canada program to test this R&D innovation. After delivering an executive education program, NuEnergy AI is now working with the RCMP to develop a framework and configure our Machine Trust Platform as part of the launch.

OBJ: How do you determine what responsible AI looks like?

NB: Great question. It’s not one size fits all. We co-create an AI governance framework for our clients that is based on their values, their compliance requirements and most importantly the perspectives of what is trustworthy in the eyes of their clients or key stakeholders. So we build a framework and configure the platform to the definition of responsible AI of our clients.

OBJ: How long has this been in the works for you guys? And were there any challenges in getting to this launch phase?

NB: It’s been over three years of R&D, through the COVID period, and we’ve had a dedicated team focused on building this platform, testing it, configuring it and running pilots. Our platform is open, transparent and configurable: one place for self-evaluation of the governance and ethics of AI, and for the adoption of standards. We’ve been scouring the world for standards, frameworks and benchmarks, integrating the best tools and techniques, and providing a dashboard to monitor the AI. The toolset we use is fully integrated, and these are the best tools we’ve found in the world for actually monitoring and measuring AI on topics like bias, privacy and transparency.

OBJ: Now that it’s out there, what are the next steps? I know you mentioned you’re working with some guardrails with different government organizations. What will you be looking for in this next phase?

NB: Our clients’ AI use cases come in a lot of different forms. We have different kinds of clients looking at machine learning and AI in different ways, and they’re at different stages of development. Some are still working through and cleansing their data and looking at building models, others are in the development stage, and others have deployed. In some cases, those models are already out there and drifting. So it varies depending on the stage and the impact level: some are high-impact models and algorithms, some are low-impact. And then there are the trust questions: what are the trust questions that the clients have about these particular models and algorithms? There are many different use cases we need to test and configure our platform for, because it is a very diverse world of AI.
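To make “drifting” concrete, here is a minimal sketch, not a description of NuEnergy AI’s platform: one standard way to flag data drift is to compare a feature’s distribution at training time against recent production data using the population stability index (PSI). The feature values, sample sizes and the 0.2 threshold below are illustrative assumptions.

```python
# Illustrative data-drift check on one numeric feature using the population
# stability index (PSI). A generic technique, not NuEnergy AI's method;
# the PSI > 0.2 threshold is a conventional rule of thumb.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI between a baseline sample (e.g. training data) and a recent
    production sample of the same feature. Values in `actual` outside the
    baseline's range are ignored by the histogram, which is acceptable
    for a rough monitoring signal."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)  # avoid log(0) in empty bins
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    baseline = rng.normal(0.0, 1.0, 10_000)    # feature at training time
    production = rng.normal(0.5, 1.2, 10_000)  # same feature, months later
    score = psi(baseline, production)
    print(f"PSI = {score:.3f} ({'drift' if score > 0.2 else 'stable'})")
```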

OBJ: Let’s shift gears a bit from talking about the product to talking about the industry. How great is the demand for this right now? It seems like AI is everywhere. So are there others out there that are trying to do a similar thing to you guys?

NB: Absolutely. AI is out there; it’s today, not tomorrow. And the ethics of AI is a topic of great concern. The Gartner Group has identified AI governance as a major growth sector in the coming years, so we see the market moving very quickly now. What’s interesting, coming from the tech sector, is that it’s usually the private sector that leads the public sector, but in our case it really is the public sector that’s leading, as it should, because concerns over public trust in machine learning and AI are even higher there. It’s very relevant in the private sector as well, where it’s often about reputational risk, media attention and advancing legislation, so it becomes a board-level, fiduciary-responsibility conversation. In the public sector, there’s a sense of responsibility on ethics that is pushing the governance of AI ahead. So it’s a fast-moving market, and it’s expected to grow substantially in the next few years.

OBJ: What is your plan to stay on top of that rapidly evolving market?

NB: We co-create and configure frameworks, but we’ve also built an education arm of our organization and recruited a top global faculty to help educate and build awareness around AI governance. We run education programs for boards, Canadian technology firms and government departments, building awareness and understanding of the ethical questions we need to consider as we build and deploy machine learning and AI models.

OBJ: Could you give an example of something that the tool may identify or how it would work with a specific client?

NB: Absolutely. Our tool starts from the recognition that we need to understand what goes into building a model. Often, models are built by technical experts in machine learning and AI, but we need some transparency into how a model was built and what kinds of issues there may be. The main topics that come up are not just broad ethical questions, but questions around bias, privacy, transparency and explaining the model. We recognize that humans are biased, and machines can be as well. The training data that goes into a model may be drawn from people of different ages, genders and ethnicities, so we need transparency into what went into the model’s development. Our platform lets you look at the data, understand and measure those biases, and decide from a governance point of view whether that’s acceptable, as an example.
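As a concrete illustration of the kind of bias measurement described above, here is a minimal sketch; the Machine Trust Platform’s internals are not public, so the column names, toy data and the demographic-parity metric are assumptions made for the example.

```python
# Minimal sketch of one common bias check: comparing a model's
# positive-outcome rate across demographic groups. Hypothetical column
# names and toy data; a large gap flags the model for governance review,
# it does not by itself prove unfairness.
import pandas as pd

def group_outcome_rates(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Positive-outcome rate for each demographic group."""
    return df.groupby(group_col)[outcome_col].mean()

def demographic_parity_gap(rates: pd.Series) -> float:
    """Largest gap in outcome rates between any two groups (0 = parity)."""
    return float(rates.max() - rates.min())

if __name__ == "__main__":
    decisions = pd.DataFrame({
        "age_band": ["18-34", "18-34", "35-54", "35-54", "55+", "55+"],
        "approved": [1, 1, 1, 0, 0, 0],  # model decisions on toy data
    })
    rates = group_outcome_rates(decisions, "age_band", "approved")
    print(rates)
    print(f"Demographic parity gap: {demographic_parity_gap(rates):.2f}")
```

A governance team would then judge whether a gap of that size is acceptable given the model’s impact level, which is the kind of decision the framework is meant to support.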

OBJ: Is there anything else that you want to let people know about the AI industry or the platform itself?

NB: I guess the message I’d give is that it’s never too early to get the guardrails up. One of our first customers is Transport Canada, and they’ve taught us that you don’t build a superhighway and then wait for the first accident to realize you could have put up guardrails and other safety measures. We need the guardrails on the superhighway of AI, and we need them upfront. They’ll evolve, but we need to think about them now so we can sleep well and still get the advantages of AI. We’ve come a long way on human trust. We now need to think about machine trust as well.
