Evidence of meeting #109 for Industry, Science and Technology in the 44th Parliament, 1st Session.

Also speaking

Nicole Foster  Director, Global Artificial Intelligence and Canada Public Policy, Amazon Web Services, Inc.
Jeanette Patell  Director, Government Affairs and Public Policy, Google Canada
Rachel Curran  Head of Public Policy, Canada, Meta Platforms Inc.
Amanda Craig  Senior Director of Public Policy, Office of Responsible AI, Microsoft
Will DeVries  Director, Privacy Legal, Google LLC
John Weigelt  National Technology Officer, Microsoft Canada Inc.

6:20 p.m.

Liberal

Iqwinder Gaheer Liberal Mississauga—Malton, ON

I'd like to put the same question to the panel generally. In terms of compliance with the AIDA, how does that look for your respective companies?

6:20 p.m.

Head of Public Policy, Canada, Meta Platforms Inc.

Rachel Curran

Maybe I can chime in here. I think Amanda is being very diplomatic.

The AIDA, in a number of respects, goes well beyond the most stringent proposal out there internationally, which is the EU AI Act. That act is already the subject of a lot of debate among member states. It doesn't have the support of countries like France, for instance, which want to ensure their own domestic industries are given a chance to flourish.

The AIDA has created a standard that doesn't exist anywhere else in the world, so if you're asking us whether we would meet that standard if it were imposed here, sure, we have the resources to meet it. The compliance costs, though, are incredibly high. Would that mean certain products might not be launched in Canada? Maybe. However, all of us work for companies that are able to meet very high thresholds, because we have the resources and money to do that.

A standard like that is going to have a significant negative impact on the Canadian AI industry and on innovation in Canada. That's the word of caution. Canada should make sure it's aligning itself with other jurisdictions. We're a relatively small market. The EU is setting a benchmark that is world-leading; we should at the very least not exceed it.

6:25 p.m.

Director, Global Artificial Intelligence and Canada Public Policy, Amazon Web Services, Inc.

Nicole Foster

I echo the comments of the panel, but I hope the committee will have the opportunity to hear from Canadian companies that deploy AI without AI being their core business, to understand the impact that these kinds of regimes are going to have on the agriculture, financial services, manufacturing and energy industries.

This technology is being deployed not just by AI-developing companies; our customers are in every sector of the Canadian economy. The committee should really take some time to hear from Canadian companies that are going to be impacted and that leverage our services or may develop their own. Hearing from them should be a high priority for this committee.

6:25 p.m.

Liberal

Iqwinder Gaheer Liberal Mississauga—Malton, ON

Virtually too....

6:25 p.m.

Director, Government Affairs and Public Policy, Google Canada

Jeanette Patell

From a Google perspective, obviously we've been public in our commitment to having responsible AI principles. My colleague Tulsee is at the centre of our governance process around that, which is scalable and incredibly rigorous and robust.

Similar to some of the comments we've heard from others, one area you might want to consider focusing on is the definition of “high impact”: inserting some considerations or factors for the threshold of what would define a high-impact system. That might provide clarity and flexibility as the technology evolves. If you think about the severity and probability of harm, the scale of use, the nature of the harm and these types of things, giving some guidance to regulators will help provide the certainty and protections the government is looking to establish here, while also giving clarity and predictability to companies large and small that will need to build the systems to comply with this.

I think we have a real opportunity right now to get this right from the outset and build a coherent and consistent policy environment for Canadian companies, which are going to need to be prepared to succeed on the global stage.

6:25 p.m.

Liberal

Iqwinder Gaheer Liberal Mississauga—Malton, ON

Great. Thank you.

6:25 p.m.

Liberal

The Chair Liberal Joël Lightbound

Thank you very much.

Mr. Garon, you have the floor.

6:25 p.m.

Bloc

Jean-Denis Garon Bloc Mirabel, QC

Thank you, Mr. Chair.

I will continue with you, Ms. Curran. Please don’t see it as badgering; I could have put the question to anyone.

The industry—and I repeat, we like it—is trying to show us its broad sense of responsibility, that it has principles and wants to evolve. However, there seem to be things that might look simple to the broader public and that should have been done but weren't. We are talking about identifying fakes, deepfakes and the rest.

Everyone is wondering why this hasn’t been done yet. I understand you are in a competitive environment.

I am wondering if, in the current environment, there isn’t a cost in terms of market and profits when it comes to adopting ethical standards that surpass those of one’s competitors.

6:25 p.m.

Head of Public Policy, Canada, Meta Platforms Inc.

Rachel Curran

To be clear, we don't allow deepfake video and audio on our platform. We have developed a number of tools to identify them and take them down.

6:25 p.m.

Bloc

Jean-Denis Garon Bloc Mirabel, QC

Since we only have two minutes, I’ll go quickly. My question is this: In the current environment, could raising ethical standards above those of one’s competitors lead to a cost in terms of market, profits or clientele? That is the crux of my question.

6:25 p.m.

Head of Public Policy, Canada, Meta Platforms Inc.

Rachel Curran

No, I don't think so. To the extent that we want to be a trusted, credible brand, we want to be world-leading and industry-leading in ethical standards. There is no cost to that; there is a positive benefit. However, what we're talking about here is the standard Canada is setting in the regulation of AI systems. There will be a cost if that regulation, that threshold, is set at a level that far outstrips those of our international peers.

6:25 p.m.

Bloc

Jean-Denis Garon Bloc Mirabel, QC

Thank you.

My last question is for Ms. Craig.

High-impact systems are defined as those that pose a risk to health, human rights or safety. If I understood correctly, those are basically the criteria.

What about electoral interference and disinformation on a daily basis?

6:30 p.m.

Senior Director of Public Policy, Office of Responsible AI, Microsoft

Amanda Craig

I'm so sorry. The earpiece failed just as I was listening to you.

6:30 p.m.

Bloc

Jean-Denis Garon Bloc Mirabel, QC

Can I get my 20 seconds back, Mr. Chair?

6:30 p.m.

Liberal

The Chair Liberal Joël Lightbound

Let’s see if Ms. Craig is getting interpretation.

6:30 p.m.

Senior Director of Public Policy, Office of Responsible AI, Microsoft

Amanda Craig

It just turned off. I'll try again.

6:30 p.m.

Bloc

Jean-Denis Garon Bloc Mirabel, QC

Can you hear me?

6:30 p.m.

Bloc

Jean-Denis Garon Bloc Mirabel, QC

High-impact systems are defined here as systems that pose risks to health, human rights or safety. That is a nice definition. I'm not an expert and I don't know what I think of it, but it certainly is important.

However, I’d like to know where disinformation or electoral interference, for example, fit within these criteria.

Are they risks to health, human rights or safety? Might this definition need to evolve at some point?

6:30 p.m.

Senior Director of Public Policy, Office of Responsible AI, Microsoft

Amanda Craig

You're referring to sensitive uses. I think there are ways of thinking about disinformation risk that would certainly fit into the categories I described regarding potential psychological harm or impact on human rights.

I think what's really important is that the approach to defining what's high risk be clear and able to evolve for changing challenges like misinformation and disinformation. That's the opportunity in the AIDA: to define clearly what is high risk and to establish a process for evolving that definition over time.

6:30 p.m.

Bloc

Jean-Denis Garon Bloc Mirabel, QC

Thank you.

6:30 p.m.

Director, Global Artificial Intelligence and Canada Public Policy, Amazon Web Services, Inc.

Nicole Foster

I think your questioning also points out the challenge of trying to have such broad, sweeping, horizontal legislation: it's very difficult to capture all of these very specific use cases in one piece of legislation. In a lot of cases, it might actually make more sense, for example, to direct Health Canada to look specifically at how to navigate the complex use cases occurring in the health care industry, or to direct the financial services regulator to deal with the use cases in its sector.

Those are extremely complex use cases for people like us. While we might be really good at understanding AI policy and risk mitigation, those regulators already understand these use cases extremely well, and they understand how to manage risk in their sectors very well.

Given the complexity of trying to create this broad legislation, it might be more appropriate to direct Health Canada to see what levers it already has to regulate AI in specific use cases, or OSFI to look at the financial services sector—

6:30 p.m.

Head of Public Policy, Canada, Meta Platforms Inc.

Rachel Curran

I'm sorry to jump in here, but you can also amend the legislation to identify specific use cases that you are concerned about: if it's election disinformation or the delivery of health care or accommodation services, you can insert those into this bill as specific cases that you want addressed. You set a threshold for harm that is a materiality test, but then you can list specific examples of the use cases that particularly concern you.

6:30 p.m.

Liberal

The Chair Liberal Joël Lightbound

Thank you.

Ms. Patell, who is online, you have the floor.

6:30 p.m.

Director, Government Affairs and Public Policy, Google Canada

Jeanette Patell

Thank you.

I just wanted to pick up on what Nicole Foster was saying, because it's consistent with the approach the United Kingdom is taking. It really leans into the existing expertise that sectoral regulators have in this space. They best understand the risks that exist in their sectors and the actors in those sectors, and they know the questions to ask. Maybe Canada can take a cue from how other countries are approaching this specific question.