After 29 years in telecommunications at Bell Canada, Heather Black was pulled in a different direction.
Now, she’s a leader in the cybersecurity space. Since 2023, Black, who is based in Ottawa, has represented public-sector clients across the country as regional vice-president of Palo Alto Networks.
“Everybody’s talking about cybersecurity,” Black told OBJ on Wednesday. “There’s not a day that goes by where it’s not on the news or having an impact on an organization that’s been breached. It’s just fascinating to me, but it’s also very rewarding.”
In this instalment of Top of Mind in Tech, Black breaks down the cybersecurity concerns plaguing the public sector, how AI is changing the way hackers attack, and how AI developers are addressing users’ biggest worries.
This transcript has been edited for length and clarity.
How would you describe the current state of cybersecurity in the public sector?
You'd have to be living under a rock not to know there's geopolitical tension right now, with "elbows up" and trying to ensure a secure and sovereign Canada. How do we ensure that the life we as Canadians cherish continues? What's happening from a public-sector and government perspective? How do governments ensure that access to our citizen data is protected: our intellectual capital, all the great work we do building technology and solving challenges like COVID vaccines? How do we ensure that the intellectual capital we are producing is protected for the benefit of Canada, our military and the people graciously putting their lives on the line to protect Canada and our allies? How do we ensure that they have access to accurate data in a timely fashion, and that it's secure?
It’s no easy task for governments, whether it’s federal, provincial or municipal. The fight is not a fair fight. The adversaries, the bad guys, just need to get one per cent on the test to be massively successful. The public-sector entities have to get 100 per cent because, if they don’t, there’s a risk. So when we think about that fight, there’s an inequity, an imbalance, in what’s happening on the battlefield of cyber.
How have attacks changed over the years?
We have a ton of critical data in Canada. It goes back to that intellectual capital, but also data from the business of government. In terms of what's changing, the adversaries — such as state-sponsored adversaries like CRINK, which is what we call China, Russia, Iran and North Korea — are trying to attack Canada. They're leveraging artificial intelligence to expedite the speed and velocity of the attacks. Humans can't keep up with the attacks that are coming.
If you think about getting text messages — like if you bank with CIBC but you get a text about your TD card being compromised — two years ago there were spelling mistakes and it was obvious it was an attack. With AI, it’s really difficult to spot. They’re using AI to improve the validity of the attack. And they’re doing the same with the Government of Canada. They’re doing the same to our municipalities and our health-care organizations on a national basis. The pace of change that organizations need to keep up with, because of AI specifically, is changing the game. And in the public sector, you’ve got all different levels of employees and users with different levels of understanding of the problem. Therein lies some of the challenge.
How have cybersecurity organizations like yours changed their approach in response to AI?
For organizations to be successful, you need to leverage AI to fight AI. Security is a data problem. There’s such a large volume of data that leveraging AI is the only way to protect your organization.
We talk a lot about the federal government and the cost-savings measures that the new Liberal government has requested of departments. Each of them has to come up with cost rationalizations. Well, many of these organizations are talking about leveraging artificial intelligence and it’s not limited to the federal government.
There are a lot of implications that organizations need to think about. It’s not just about having AI. If your bot delivers an automated response to a resident that is completely wrong, what’s the impact of that? Is it on the brand? Is it reputational? Could it be a payment issue? Or, God forbid, did they disclose personal and private information inadvertently? There’s a lot to think about as you safely adopt AI within an organization. In health care, how do you protect private data? What’s preventing a practitioner from uploading my medical records into a (large language model)? Now my health records are somewhere where the hospital doesn’t control the access to that data.
In a tech environment, we talk about people, process and tools. Having processes, guardrails and regulations in place matters, and governments are doing a great job trying to fine-tune them for the ethical and safe use of AI. But there's a people component, too. How do you educate? Organizations need to educate and provide some awareness.
How are AI developers responding to user concerns?
I'll use Palo Alto specifically. We leverage AI in our tools, and have for many years, and we are very mindful about securing that AI. We held a summit in Ottawa at the Rogers Centre in October. We had over 750 people registered for that summit and in one of the sessions, we spoke about how you eat the elephant that is AI. (We spoke about) how we regulate AI within Palo Alto Networks from a product perspective and how we leverage it, which is incredibly important to help us deliver the cyber-outcomes for our clients. But how do we drive an awareness campaign with our employees and our contractors and third parties that we engage with, and then how do we apply that from a policy and governance perspective?
It's great to have legislation or a regulation that you have to adhere to, but if you don't have the technology to give you visibility, are you actually following it? I had a client who was convinced that there was a lot of AI adoption in their organization. (With our tools), we were able to give them visibility into how many of their users were using ChatGPT, Gemini, Copilot and even DeepSeek, which is banned at the federal level.
AI is a continually learning, mechanical brain. If you allow it to access your systems, or any internal tools, it could get access to all the crown jewels of the company. If your AI tool is breached and it has access to your crown jewels, it could take the whole organization down. So with understanding and awareness of the impact, you get to fight fire with fire and leverage AI to fight AI. It takes a village. We share data and we work with competing technology providers to make sure we’re sharing information on what we’re seeing. It’s a team sport to protect Canada and it takes a lot.

