Members of Canada’s tech community are concerned about how the country will rein in the risks of artificial intelligence without stifling innovation.
As they gathered in Toronto for the annual Elevate tech conference, much of their chatter focused on the technology’s great promise, but many said they also feared over-regulating AI would put the nation behind its counterparts hurtling toward adoption without guardrails.
“I’m a little bit afraid of just putting the brakes on because while we might want to put the brakes on, other places aren’t putting the brakes on and I feel that that’s going to create an adoption gap that we can’t afford to lose,” said Joel Semeniuk, chief strategy officer of Waterloo tech hub Communitech, at a breakfast adjacent to the conference.
“I actually feel like we need to go all in but with all of the regulatory perceptions in place at the same time.”
Semeniuk’s remarks came as the globe nears one year since the debut of ChatGPT, a generative AI chatbot capable of humanlike conversations and tasks that was developed by San Francisco-based OpenAI. A new iteration of the technology with voice and image capabilities was released this month.
ChatGPT’s advent kickstarted an AI race, where companies as big as Google and Microsoft revved up their AI efforts and started pouring billions of dollars into the sector, hoping to orchestrate bigger advances in the technology’s capabilities and adoption.
But as the flurry around AI got underway, concerns loomed large. Many of tech’s biggest champions, including Tesla CEO Elon Musk, Apple co-founder Steve Wozniak and so-called “AI godfather” Geoffrey Hinton, warned of the technology’s risks and proposed slowing its development.
On a visit to Toronto in the summer, Hinton said he was concerned the technology posed risks including bias and discrimination, joblessness, echo chambers, fake news, battle robots and existential threats.
Semeniuk sees both sides. The startups he works with regard AI with “tremendous excitement and tremendous trepidation.”
Many are not averse to regulation, but the shape of future policies is hard to decipher because the industry is evolving so quickly and much of its potential is still unrealized.
“So I do believe in regulation, but I also don’t believe that it should be an all or nothing,” Semeniuk said.
“We have no idea what we’re regulating and how to regulate it without understanding the use cases that come out of it.”
The federal government has long had its eye on introducing AI guardrails, but has not moved as rapidly as the technology.
Innovation Minister Francois-Philippe Champagne unveiled a voluntary code of conduct for generative AI only on Wednesday, at the All In tech conference in Montreal.
Adopters of the code agree to a slew of promises including screening datasets for potential biases and assessing any AI they create for “potential adverse impacts.” Toronto-based AI darling Cohere and Waterloo software company OpenText have already agreed to the terms.
Others, including Google, have adopted their own principles, which vow the company won’t dabble in AI that can be used to inflict harm.
Carole Piovesan, a managing partner at INQ Law, worries about the pace of Canada’s approach to AI.
She recently spoke with a machine learning researcher who argued that regulation has to be realistic about where the technology currently stands, because we aren’t yet in the futuristic world of AI that many foresee.
Piovesan, however, countered that “we take forever to get there.”
“If we can’t start to forecast what the crystal ball looks like, then we’re never going to keep pace,” she said on the Elevate stage.