Evidence of meeting #152 for Public Safety and National Security in the 42nd Parliament, 1st Session. (The original version is on Parliament’s site, as are the minutes.)

A video is available from Parliament.

Also speaking

Charles Docherty  Assistant General Counsel, Canadian Bankers Association
Trevin Stratton  Chief Economist, Canadian Chamber of Commerce
Scott Smith  Senior Director, Intellectual Property and Innovation Policy, Canadian Chamber of Commerce
Andrew Ross  Director, Payments and Cybersecurity, Canadian Bankers Association
Ruby Sahota  Brampton North, Lib.
Andrew Clement  Professor Emeritus, Faculty of Information, University of Toronto, As an Individual
David Masson  Director, Enterprise Security, Darktrace

5:20 p.m.

Prof. Andrew Clement

That's an interesting question. I can't really speak to that specifically. I was invited to an Internet governance session in Beijing, and I've been writing about network sovereignty for some time, but I was also aware in going to China that Chairman Xi Jinping used the term “network sovereignty” in a very, very different sense about Chinese Internet infrastructure.

I took pains to make it clear that the sovereignty needed to be understood within an international framework of human rights, and that's what I developed in my presentation. It was very well received by some of the people in the audience. I got compliments for it, and the editors were keen to have it published in the journal that came out of it, but it was published in Chinese, and, unfortunately, I have not heard anything further from them.

I don't know if it was met with stony silence, or whether people are quietly appreciating it, which is what I hope. Thank you for finding that paper.

5:25 p.m.

Liberal

The Chair Liberal John McKay

Thank you, Mr. Paul-Hus.

Mr. Dubé, you have seven minutes, please.

5:25 p.m.

NDP

Matthew Dubé NDP Beloeil—Chambly, QC

Thank you, both, for being here.

Mr. Masson, sticking with machine learning and AI.... In this study, we've looked a lot at the implications of non-state actors—people trying to steal money, and things of that nature. It's a very abstract idea, but I'm just wondering where your thoughts are on the uses of AI by state actors. In other words, we've clearly delineated what the boundaries are for use of force and, for example, when there's a conflict between countries, what a war crime is, and things like that. Unless I'm mistaken, I don't think that delineation is quite as clear when it comes to attacking critical infrastructure, particularly if we're using this kind of machine learning.

I'm just wondering—and this question is kind of open-ended—what your thoughts are on how state actors are deploying this and what kinds of concerns there could be in the financial sector, or others that could potentially be affected, where those rules of engagement don't necessarily exist yet.

5:25 p.m.

Director, Enterprise Security, Darktrace

David Masson

I used to be a British diplomat. I remember 12 or 14 years ago having it explained to me that a cyber-attack by a nation state or another state was an act of war. However, ever since then, it seems to have become a very, very grey issue. I was at a conference the other week where it cropped up again, and nobody could actually define at what point you reach that stage. Maybe it's because a lot of the time it's been easier, particularly for western democracies, to just ignore that issue, for obvious reasons.

State actors are investing heavily in AI because everybody is investing heavily in AI. The witnesses who were here before invest a lot in AI for their banking systems. This isn't about cybersecurity or cyber-attacks. They're just using AI because AI can do so much more so much more quickly and so much more accurately.

We use AI because we are saying that human beings can't keep up with the scale of this threat, so we use AI to do all the heavy lifting for human beings. It's a bit of a myth to say that AI is going to replace people. That is not the case. There is no broad AI [Editor—Inaudible]. That doesn't exist.
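
[Editor—A minimal sketch, for illustration, of the kind of narrow anomaly-detecting AI Mr. Masson describes doing the "heavy lifting": it learns what normal traffic looks like and hands a human analyst only the departures from it. The feature names and numbers are hypothetical, and the code uses the open-source scikit-learn library rather than Darktrace's actual method.]

    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Hypothetical per-connection features: bytes sent, bytes received,
    # duration in seconds, and distinct ports contacted in the same minute.
    rng = np.random.default_rng(0)
    normal_traffic = rng.normal(loc=[2000, 8000, 1.5, 3],
                                scale=[500, 2000, 0.5, 1],
                                size=(1000, 4))
    suspicious = np.array([[500000, 200, 0.1, 45]])  # e.g. a sudden bulk transfer
    connections = np.vstack([normal_traffic, suspicious])

    # Learn "normal" from past traffic, then flag departures from it (-1 = anomaly)
    # so a human only has to review the handful of unusual connections.
    model = IsolationForest(contamination=0.01, random_state=0).fit(normal_traffic)
    for features, flag in zip(connections, model.predict(connections)):
        if flag == -1:
            print("Unusual connection, review:", features)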

What you see at the moment is AI being used for specific purposes for specific tools in specific areas. We use it for cybersecurity, but the bad guys—and I'm happy to say “bad guys” because we get stuck with the Internet of things—are going to use it because it's going to make things easier for them. In my statement, I pointed out how some of the nation state attacks that we used to see, such as the attack against Sony—a lot of resources went into that—or some of the attacks we've seen in Ukraine, need people, time, money and effort. However, if you use AI to do that, you need less money, time and effort, and, as I say, it will lower the bar for entry to these kinds of attacks.

When we see the first AI attack—we, as a company, think it might be this year; we've been seeing hints of it for quite a few years, but it could be later on—many of the current techniques and systems that are used for protecting networks from cyber-threats will become redundant overnight. That will happen very, very quickly.

Some state-threat actors and others are using AI in the foreign influence field, in the misinformation campaigns that go on. There's a lot of stuff about that. You may have noticed that some of the media platforms have been heavily criticized following the horrendous attacks in New Zealand because they didn't do anything about it quickly enough. But now, if you use AI—we can do it now—you can construct a lie at scale and at speed. It doesn't matter how palpably untrue it is. When you do that, that sort of quantity develops a quality all of its own, and people will believe it. That's why bad guys are going to start investing in AI.

5:25 p.m.

NDP

Matthew Dubé NDP Beloeil—Chambly, QC

So, my question becomes this: If we look at Bill C-59, for example, where you're giving CSE defensive and offensive capabilities—and part of that is proactively shutting down malware that might be...or an IP, or things like that—is there concern about escalation and where the line is drawn?

Part of this study.... The problem is that we're all lay people, or most of us anyway—I won't speak for all—when it comes to these things. My understanding of AI—because I've heard that, too—is that it's not what we think of it as being from popular culture. Does that mean that if, due to employing AI to use some of these capabilities that the law has conferred on different agencies, AI is continuing...? How much human involvement is there in the adjustments? If that line is so blurry as to what the rules of engagement are, is there concern that AI is learning how to shut something down, that the consequences can be graver than they were initially, but the system is sort of evolving on its own? I don't want to get lost. I don't know what the proper jargon is there, but....

5:30 p.m.

Director, Enterprise Security, Darktrace

David Masson

It's already the case that some attacks that large actors carry out might be targeted against a particular target, but they don't consider collateral damage. There was an attack a few years ago called NotPetya. It targeted Ukraine, but it spread worldwide and caused havoc absolutely everywhere.

With regard to the way that people are using AI now—when I talk about narrow AI, that is specific tools for specific occasions—if your concern is that they'll launch an AI attack and it will develop a mind of its own and do its own thing, that's not the case. This is the kind of AI where there's still a pilot in the cockpit. There are still human beings running it and deciding to let it loose. You're still going to get collateral damage, particularly if it's unregulated state actors that are doing it—

5:30 p.m.

NDP

Matthew Dubé NDP Beloeil—Chambly, QC

If I may, because my time is running out....

My intention was less about humans losing control and that caricature of it, and more just wondering whether they're learning the best pathways to be on the offensive, for example.

5:30 p.m.

Director, Enterprise Security, Darktrace

David Masson

Any offensive that a country like Canada is likely to have will have been thought through very carefully. It's not just a case of being able to judge the impact you're going to have; that's absolutely what they'll be doing before they launch this.

5:30 p.m.

NDP

Matthew Dubé NDP Beloeil—Chambly, QC

The pathways you're perhaps unintentionally shutting off aren't at random.

5:30 p.m.

Director, Enterprise Security, Darktrace

David Masson

You will have to be absolutely accurate on what they're going to do.

5:30 p.m.

Liberal

The Chair Liberal John McKay

You unfortunately have about 20 seconds. You can save it for the final round. Thank you.

Mr. Graham, welcome to the committee. Bear in mind the translators are trying to translate whatever language you're speaking.

5:30 p.m.

Liberal

David Graham Liberal Laurentides—Labelle, QC

If they can encrypt me in real time, we'll be all set.

I have a lot of questions, so I'll ask you to keep your answers as short as my speaking, if it's possible. They're to both of you, not specifically to one or the other.

To start with, what's the life expectancy of an unpatched or unmaintained server on the Internet? If somebody puts a server on the Internet and doesn't touch it again, how long is that going to be online?

5:30 p.m.

Director, Enterprise Security, Darktrace

5:30 p.m.

Liberal

David Graham Liberal Laurentides—Labelle, QC

That's an important point.

5:30 p.m.

Director, Enterprise Security, Darktrace

David Masson

When you talk of a patch, you should patch the minute they tell you.
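
[Editor—A minimal sketch of how "patch the minute they tell you" can be turned into a routine check, assuming a Debian or Ubuntu server; the wording of the alert is illustrative only.]

    import subprocess

    # "apt list --upgradable" lists installed packages with pending updates,
    # including security fixes -- the cue to patch right away.
    result = subprocess.run(["apt", "list", "--upgradable"],
                            capture_output=True, text=True, check=True)
    pending = [line for line in result.stdout.splitlines() if "/" in line]
    if pending:
        print(f"{len(pending)} packages are waiting for patches -- update now:")
        for line in pending:
            print("  " + line)
    else:
        print("No pending updates found.")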

5:30 p.m.

Liberal

David Graham Liberal Laurentides—Labelle, QC

For the record, what's a zero-day?

5:30 p.m.

Director, Enterprise Security, Darktrace

David Masson

A zero-day is an attack that nobody has seen before. It's completely new and novel.

5:30 p.m.

Liberal

David Graham Liberal Laurentides—Labelle, QC

You mentioned earlier there's a shortage of about half a million cybersecurity employees or professionals. I've been involved in the free software community for about 20 years, and the people around me today are very much the same people who were around me 20 years ago. How do we modernize the people in the software industry and the cybersecurity industry? How do we get the next generation to be interested in it and to learn it?

5:30 p.m.

Director, Enterprise Security, Darktrace

David Masson

I would highly recommend the efforts of the Province of New Brunswick, which has been teaching cybersecurity in school for some years now, to the point where major companies are now snapping up kids when they graduate at 18.

5:30 p.m.

Liberal

David Graham Liberal Laurentides—Labelle, QC

Okay. Do we generally do security by design or are we more reactive as a society?

5:30 p.m.

Director, Enterprise Security, Darktrace

David Masson

Right now, it's reactive. I'm a big fan of Dr. Ann Cavoukian when she talks about privacy by design—and it should be “security by design”.

A new term has come out called DevSecOps, a security-focused variation on DevOps—that is, as you're writing your code, you should be considering security, absolutely.
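
[Editor—A minimal sketch of what "considering security as you write your code" can look like in practice, using Python's built-in sqlite3 module; the table and queries are hypothetical.]

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, owner TEXT)")
    conn.executemany("INSERT INTO accounts (owner) VALUES (?)",
                     [("alice",), ("bob",)])

    def find_accounts(owner: str):
        # Insecure habit: gluing user input into SQL invites injection, e.g.
        #   "SELECT * FROM accounts WHERE owner = '" + owner + "'"
        # returns every row if owner is "x' OR '1'='1".
        # Security-by-design habit: let the driver bind the value as a parameter.
        return conn.execute(
            "SELECT id, owner FROM accounts WHERE owner = ?", (owner,)
        ).fetchall()

    print(find_accounts("alice"))         # [(1, 'alice')]
    print(find_accounts("x' OR '1'='1"))  # [] -- the injection attempt finds nothing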

5:30 p.m.

Liberal

David Graham Liberal Laurentides—Labelle, QC

Dr. Clement, make sure that if you have something to say, you speak up, because I'm going through this fairly quickly. Don't be shy.

5:30 p.m.

Prof. Andrew Clement

Sure.

5:30 p.m.

Liberal

David Graham Liberal Laurentides—Labelle, QC

Are there security advantages of open source versus closed source that you know of? Is there any security benefit in having a closed-source system, where there's no public access to that code?

5:30 p.m.

Director, Enterprise Security, Darktrace

David Masson

Professor?