Last month, computer scientist Joy Buolamwini penned an article in Time magazine describing a troubling interaction with artificial intelligence: facial recognition software that couldn’t identify her dark-skinned face – until she put on a plain white mask, at which point the algorithms underlying the technology were able to detect her humanity.
Buolamwini’s experience is not the only example of a machine learning application either failing to identify non-white users or, worse, biasing outcomes against them. Nor is it merely anecdotal: her own research revealed signs of gender and racial bias in AI systems developed by some of the world’s biggest tech companies.
While advocates are excited by the promise of machine learning tools, evidence is growing that artificial intelligence can entrench problematic biases if left unchecked. In Ottawa, developers and business leaders are increasingly stressing the importance of responsible development – and even finding business opportunities in keeping AI ethical.
“In some ways, this is not specific to AI; any time there’s new technology investments, there’s always an ethical conversation to be had,” says Stephan Jou, chief technology officer at Ottawa’s Interset AI.
“What I think is interesting about AI is that it’s moving so quickly and the promise of AI is … so outstanding and so pervasive.”
Interset, which was recently acquired by IT giant Micro Focus, is among the signatories of the Montreal Declaration for Responsible AI, a creed for developers committed to developing AI systems within a transparent and trustworthy framework. Jou says that while AI holds the promise of automation and exciting new technologies, that kind of raw potential requires proactive oversight.
“AI by itself as math – as pure math – doesn’t have a moral compass, right? The bad guys can use it just as easily as the good guys,” he says.
That’s not to say that “bad guys” are the only ones using AI irresponsibly. As was likely the case in Buolamwini’s experience with inaccurate facial recognition technology, the datasets developers use to train AI have a great bearing on the final application. Buolamwini, founder of the Algorithmic Justice League, says facial recognition algorithms are often trained on images of predominantly white men, making women and visible minorities all but invisible.
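To make that concrete, here is a minimal sketch of the kind of per-group accuracy audit such research performs. The predictions, labels and group tags below are hypothetical placeholders, not Buolamwini’s actual data or code:

```python
# A minimal sketch of a demographic accuracy audit in the spirit of
# Buolamwini's findings. All data here is a hypothetical toy example.
from collections import defaultdict

def accuracy_by_group(predictions, labels, groups):
    """Compute classification accuracy separately for each demographic group."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for pred, label, group in zip(predictions, labels, groups):
        total[group] += 1
        if pred == label:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# Toy example: a detector that works for one group and fails for another.
preds  = ["face", "face", "no_face", "face", "no_face", "no_face"]
labels = ["face", "face", "face",    "face", "face",    "face"]
groups = ["lighter", "lighter", "darker", "lighter", "darker", "darker"]

print(accuracy_by_group(preds, labels, groups))
# {'lighter': 1.0, 'darker': 0.0} -- a gap like this signals a skewed training set
```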
“Machine learning … has this self-learning capability that creates a big upside but also means that we need to think differently about how we would manage it,” says Niraj Bhargava, the CEO of NuEnergy.ai.
Bhargava’s Ottawa-based startup isn’t just focused on developing artificial intelligence – it works with companies to implement a framework to ensure ethical behaviour. Through a mix of professional services and software, the firm seeks to measure AI systems based on levels of bias, transparency and privacy.
“If you can’t measure it, you can’t manage it,” Bhargava says.
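As a rough illustration of what such a measurement might look like – not NuEnergy.ai’s actual framework – one widely used fairness quantity is the demographic parity difference, the gap in positive-outcome rates between groups:

```python
# One measurable fairness quantity a governance framework might track:
# demographic parity difference, the gap in positive-outcome rates
# between two groups (0 means parity). Illustrative only.

def positive_rate(outcomes):
    return sum(outcomes) / len(outcomes)

def demographic_parity_difference(outcomes_a, outcomes_b):
    """Absolute gap in positive-outcome rates between two groups."""
    return abs(positive_rate(outcomes_a) - positive_rate(outcomes_b))

# Toy example: loan approvals (1 = approved) for two demographic groups.
group_a = [1, 1, 0, 1, 1]  # 80% approved
group_b = [1, 0, 0, 0, 1]  # 40% approved
print(demographic_parity_difference(group_a, group_b))  # ~0.4
```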
Roughly a year into business, NuEnergy.ai has a few lead clients under its belt and has also landed a coveted spot on the federal government’s AI source list, qualifying it to supply the feds with AI solutions. Bhargava says that while the company believes its framework can be a boon to any AI developer, the level of trust required for public sector applications represents a strong market for the company.
“Considerations of unintended consequences are certainly at a high level within the whole public sector … so developing these checks and balances with the government makes a lot of sense for us.”
Jou says there are numerous ways to ensure AI applications are developed responsibly, many of which boil down to the underlying algorithms. Some algorithms can explain why they made certain choices in processing a result, which aids transparency. Others have deliberately limited scopes, which Jou says helps ensure an application isn’t used in ways it wasn’t intended for.
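One generic example of an algorithm that can account for its choices – not Interset’s specific technology – is an inherently interpretable model such as a shallow decision tree, whose rules can be printed in plain terms:

```python
# A generic illustration of an explainable model: a shallow decision tree
# whose decision rules are human-readable. Features are hypothetical.
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical features: login attempts per hour, data downloaded (MB).
X = [[2, 10], [3, 12], [40, 500], [55, 800], [1, 5], [60, 900]]
y = [0, 0, 1, 1, 0, 1]  # 0 = normal behaviour, 1 = anomalous

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# Print the rules explaining how each prediction is reached.
print(export_text(tree, feature_names=["logins_per_hour", "mb_downloaded"]))
```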
Developing well-behaved AI systems is quickly becoming the industry standard, Jou says, largely due to privacy concerns. Applications that handle sensitive information such as patient records need to be airtight and auditable to ensure data isn’t leaking into the wrong hands.
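What “auditable” can mean in practice is sketched below; the function and names are hypothetical, and a production system would use tamper-evident, access-controlled logging rather than a plain logger:

```python
# A minimal sketch of an auditable access path: every read of a sensitive
# record leaves a trail of who asked and when. Names are hypothetical.
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("audit")

def fetch_patient_record(record_id, requesting_user):
    """Return a patient record, logging the access for later audit."""
    audit_log.info(
        "record=%s accessed_by=%s at=%s",
        record_id, requesting_user, datetime.now(timezone.utc).isoformat(),
    )
    # Placeholder lookup; a real system would query an encrypted store.
    return {"id": record_id, "status": "retrieved"}

fetch_patient_record("patient-123", "dr_smith")
```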
Jou notes that another important use of ethical AI is as a recruiting tool. With thousands of startups around the world claiming to be on the cutting edge of artificial intelligence, a company that stakes its reputation on responsible development could well stand out from the crowd.
“It helps everyone to know that they’re building AI for good, not for evil,” Jou says.