Evidence of meeting #107 for Industry, Science and Technology in the 44th Parliament, 1st Session. (The original version is on Parliament’s site, as are the minutes.)

A recording is available from Parliament.

Also speaking

Vass Bednar  Executive Director, Master of Public Policy in Digital Society Program, McMaster University, As an Individual
Andrew Clement  Professor Emeritus, Faculty of Information, University of Toronto, As an Individual
Nicolas Papernot  Assistant Professor and Canada CIFAR AI Chair, University of Toronto and Vector Institute, As an Individual
Leah Lawrence  Former President and Chief Executive Officer, Sustainable Development Technology Canada, As an Individual

6:45 p.m.

Prof. Nicolas Papernot

Of course.

6:45 p.m.

Conservative

Bernard Généreux Conservative Montmagny—L'Islet—Kamouraska—Rivière-du-Loup, QC

I understand.

6:45 p.m.

NDP

The Acting Chair (Mr. Brian Masse) NDP

Thank you, Mr. Généreux.

Do you want a quick answer?

6:45 p.m.

Conservative

Bernard Généreux Conservative Montmagny—L'Islet—Kamouraska—Rivière-du-Loup, QC

Yes, I'd like to hear from Mr. Clement.

6:45 p.m.

Prof. Andrew Clement

Yes, I've said that consultation has sadly been in short supply around AIDA, and a lot of the problems that I think we see in AIDA would have been addressed if people from a variety of perspectives had been able to come in and present them.

What do you do now? Separating it out, I think, would help. Whether it can be amended, I don't know; that's not for me to say. At this point, I think that, if it does get passed, we should build into it a review process so that we can provide broad-based updating of it in an ongoing way, as others have said.

6:45 p.m.

NDP

The Acting Chair (Mr. Brian Masse) NDP

Thank you very much.

We're now moving to Ms. Lapointe for five minutes.

6:45 p.m.

Liberal

Viviane Lapointe Liberal Sudbury, ON

Thank you, Mr. Chair.

I want to congratulate Mr. Garon on passing his bill earlier this afternoon in the House.

My question is for Mr. Papernot.

On Monday, we heard from Gillian Hadfield, who's the chair of the Schwartz Reisman Institute. In her testimony here at committee, she said that the legislation concerned does a lot to address individual harms, but she also suggested that the legislation doesn't adequately address harms or risk on a systemic level, for example, trading on our financial markets.

The question I would have for you is: How does the introduction of AI into these systems impact our ability to control and to also ensure reliability of our financial markets or protect against antitrust behaviour and maintain trust in our judicial systems?

6:45 p.m.

Prof. Nicolas Papernot

I don't have expertise in the financial markets. What I can speak to is the transparency that AI systems lack. It's very difficult to understand how AI systems arrive at specific predictions from the data they are trained on. It would be very difficult to understand how they have extracted patterns from historical financial data, for instance, to make the predictions they make on the current market. That's probably the main problem.

The additional issue that I see is that you're going to have to provide some understanding to humans as to how these AI systems make their predictions. Again, that's a very difficult open problem to solve before we arrive at technology that is capable of doing it, and we risk seeing something similar to greenwashing. We can have similar issues in AI systems, where the way their predictions are justified is disconnected from the way the predictions are actually made, and that can lead to misleading claims about them not being biased and so on.

I would think that those two aspects are relevant to the financial markets in particular.

6:45 p.m.

Liberal

Viviane Lapointe Liberal Sudbury, ON

Can you offer the committee your expert opinion on legislating these systemic harms in a manner that can keep up with the technology? Should the onus be on technology design and developers?

6:45 p.m.

Prof. Nicolas Papernot

I think what's important is to have a conversation. I think one example that comes to mind is the aerospace industry, where there's a really thorough process for surfacing errors that are being encountered in deployment. This is what we're missing in the AI industry, having a very clear protocol as to how we should surface bugs in the algorithms as they occur so that we can then figure out the solutions as engineers but also implement the right legislation to support these solutions.

That's something I would leave to you to see how we could draw inspiration from these regulated sectors like the aerospace industry.

6:50 p.m.

NDP

The Acting Chair (Mr. Brian Masse) NDP

I see Mr. Clement has his hand up. Could we get him in on it?

I'll give you a minute of my time coming up.

6:50 p.m.

Prof. Andrew Clement

This is a really crucial issue. We've heard that AI systems are unpredictable in their behaviour, and also that we can't understand, or can't explain, how they've arrived at their outputs, so those are two big problems.

In addition to having conversations and being open about this, we need to apply a sort of precautionary and accountability principle, so that organizations that deploy these systems are accountable and have to take prior steps before they start experimenting on us.

The aerospace industry has these well-worked-out systems because, when a plane comes down, everybody knows about it, and that's dreadful. When a complicated digital network system goes wrong, and it doesn't have to be an AI system, the problems are distributed, and it's very difficult to analyze them. It's in a worse situation, I think, than aerospace.

6:50 p.m.

Liberal

Viviane Lapointe Liberal Sudbury, ON

Thank you.

Mr. Papernot, we've already seen many privacy breaches with online systems. What should we know about the escalation of these types of breaches, based on your extensive work in privacy, security, and machine learning?

6:50 p.m.

Prof. Nicolas Papernot

What do you mean by the escalation?

6:50 p.m.

Liberal

Viviane Lapointe Liberal Sudbury, ON

The escalation of the types of breaches we're seeing.

6:50 p.m.

Prof. Nicolas Papernot

I see.

Again, it goes back to my point about putting the emphasis on the specific pieces of data that are being leaked versus how that data is analyzed. As more and more data leaks happen, malicious individuals have access to more and more information about individuals. They can then find links across these data leaks.

It makes it easier for them to re-identify people from data that has been protected through de-identification. If instead we focus on the algorithms that are used to analyze this data, we can ensure that the output of the analysis does not leak additional information about the individuals. That is how we will control how much information about individuals is available to malicious entities. It's really about shifting the focus from trying to modify the data itself to how we analyze the data.

6:50 p.m.

NDP

The Acting Chair (Mr. Brian Masse) NDP

Thank you.

I'm sorry, but you're out of time. I gave you some of my time, as well.

Mr. Garon has been very patient. It's also an especially exciting day for him.

Congratulations on your bill passing in the chamber. It was no small feat.

6:50 p.m.

Bloc

Jean-Denis Garon Bloc Mirabel, QC

Thank you, Mr. Chair.

I'm going to turn to you, Professor Bednar. In the 19th and 20th centuries, industries extolled the virtues of a free market with no regulation. This has led to huge fortunes, huge monopolies, as well as abuses against consumers.

All of this has led to historic regulations. One is the antitrust laws that we know today and the big consumer protection laws. However, with the artificial intelligence industry advancing at an exponential rate, I get the impression that we need a framework for the market to work.

I will quote you in English, a language I rarely use. You said earlier, “Smart regulation clarifies markets”.

In French, we would say that smart regulations make markets work better. As we know, that is the basis of economics, in a way.

Do you think that, in this context, the best solution is for this industry and the market to regulate themselves? In your opinion, are we at a stage in the development of artificial intelligence where regulation, viewed from a historical perspective, is as important as antitrust legislation may have been at one time?

6:50 p.m.

Vass Bednar Executive Director, Master of Public Policy in Digital Society Program, McMaster University, As an Individual

No, I'm not in favour of prolonged self-regulation. Much of the regulatory gap that we've seen is a product of the late-1990s U.S. approach famously referred to as permissionless innovation. Regulators took a step back, and I think Canada echoed that a little.

I understand that firms often have their own kinds of policy practices in place, but now it's time for Canada to formalize just where that bar is set, and to continue to learn from the private sector, large and small businesses, and, of course, a range of actors from civil society and—

6:55 p.m.

Bloc

Jean-Denis Garon Bloc Mirabel, QC

Professor Bednar, I have to interrupt you, because time is very limited. That said, the chair is very generous.

Do you think that, in its current form, Bill C‑27 is too permissive when it comes to self‑regulation? Should we rely instead on government regulations, for example?

6:55 p.m.

Vass Bednar Executive Director, Master of Public Policy in Digital Society Program, McMaster University, As an Individual

I think that, by virtue of being a massive piece of legislation, it prevents pure self-regulation by firms. It asks firms to comply in particular ways that carry costs associated with these new norms.

I don't think it puts forward or enshrines pure self-regulation when it comes to privacy and the use of data.

6:55 p.m.

NDP

The Acting Chair (Mr. Brian Masse) NDP

Thank you very much.

We'll move now to Mr. Généreux.

You have the remaining committee time here, which is about three and a half minutes.

6:55 p.m.

Conservative

Bernard Généreux Conservative Montmagny—L'Islet—Kamouraska—Rivière-du-Loup, QC

Thank you, Mr. Chair.

Mr. Papernot, a number of witnesses have told us that they weren't consulted before the bill was introduced, even though the minister boasted about consulting 300 organizations. That's what we heard from one woman—I've forgotten her name, unfortunately—who represented a women's rights advocacy organization.

Earlier, my colleague Michelle Rempel Garner was talking about deepfakes and image manipulation. I'm choosing my words carefully here. As we all know, Taylor Swift was victimized a few days ago. Her image was used. Does this bill actually protect women?

6:55 p.m.

Prof. Nicolas Papernot

I would suggest you talk to the populations affected by biases that will be exacerbated by AI systems, because I don't think I'm in a position to comment on the risks.

6:55 p.m.

Conservative

Bernard Généreux Conservative Montmagny—L'Islet—Kamouraska—Rivière-du-Loup, QC

Mr. Clement, what are your thoughts on the issue?