Evidence of meeting #109 for Industry, Science and Technology in the 44th Parliament, 1st Session. (The original version is on Parliament’s site, as are the minutes.) The winning word was risk.

A recording is available from Parliament.

On the agenda

MPs speaking

Also speaking

Nicole Foster  Director, Global Artificial Intelligence and Canada Public Policy, Amazon Web Services, Inc.
Jeanette Patell  Director, Government Affairs and Public Policy, Google Canada
Rachel Curran  Head of Public Policy, Canada, Meta Platforms Inc.
Amanda Craig  Senior Director of Public Policy, Office of Responsible AI, Microsoft
Will DeVries  Director, Privacy Legal, Google LLC
John Weigelt  National Technology Officer, Microsoft Canada Inc.

5:50 p.m.

Senior Director of Public Policy, Office of Responsible AI, Microsoft

Amanda Craig

I think it's a really hard but important question. At Microsoft, we've been contemplating this as well. We've been establishing our own internal governance program and understanding how we calibrate an application of our requirements in that governance process to higher-risk scenarios.

We have developed three categories of what we call “sensitive uses” internally at Microsoft.

The first is any system that has an impact on life opportunity or life consequences. In that category, we think about systems that impact opportunities for employment, education or legal status, for example.

The second category is any system that has an impact on physical or psychological safety. Think about safety-critical systems in the context of critical infrastructure, for example, or systems that might be used by vulnerable populations.

The third category is any system that has an impact on human rights.

I do think it's useful to have a framework to think about triggers for higher risk and then, where there is readiness to go further, to think about some of the more specific sorts of use cases like education and employment. That is represented in some of the high-impact examples in the AIDA as well. Then it's also to recognize that there is going to be a need to evolve and to put guardrails in place for how you think about the high-impact systems and the examples evolving over time. It's about not just having an open-ended process but also thinking about what the triggers are going to be for meeting that bar going forward.

5:50 p.m.

Liberal

Francesco Sorbara Liberal Vaughan—Woodbridge, ON

Thank you.

I'd like to go to Google.

There were some documents you were going to send to the committee. A few people mentioned that. If you can send them, that would be great. Any background information is worth it.

Can we get Google to comment quickly?

5:50 p.m.

Director, Government Affairs and Public Policy, Google Canada

Jeanette Patell

I think I would echo a lot of the comments from my counterparts and point in particular to the definition of “harm”, because I think that could solve a lot of issues here. If you have a test of material harm, that can resolve exactly what the threshold is for both the identification and mitigation of risks associated with specific use cases for AI systems.

Right now, the definition simply says harm includes psychological harm or economic harm, but there's no calibration for what harm is really defined as, nor a test for that.

5:50 p.m.

Liberal

The Chair Liberal Joël Lightbound

Thank you.

Mr. Garon, you have the floor.

5:50 p.m.

Bloc

Jean-Denis Garon Bloc Mirabel, QC

Thank you, Mr. Chair.

I thank all the witnesses for being here today.

As a citizen and consumer, I like your products and I use them. As a person always lost while driving, I am especially happy that your services exist. You spoke ad nauseam of all the advantages these services offer to us. We like them. What can I say? That’s how it is.

I’ll come back to the question from my colleague, Mr. Sorbara, because I found it very interesting. We are talking about high-impact, low-impact, high-risk and low-risk systems. We are talking about rapidly evolving technologies. I understand that some technologies can have different uses, which makes the situation a little complex. I understand your message about very set definitions in the bill on what does or doesn’t have a high impact, and that in certain respects, you differ on those definitions. That’s entirely legitimate.

Isn’t it normal for legislators, who are elected by the public, and the government, which wants to protect the public, to have definitions that differ from the industry’s when it comes to the terms “high impact”, “low impact”, “high risk” and “low risk”?

Given that our roles are not the same, isn’t it legitimate for us not to have the same definition?

5:55 p.m.

Director, Global Artificial Intelligence and Canada Public Policy, Amazon Web Services, Inc.

Nicole Foster

It's perfectly legitimate for legislators to have a different view of high- versus low-impact AI. I think the point of the discussion is to try to make sure that the finite resources Canadian companies have are not used unnecessarily to do really complex assessments where they may not be necessary.

To give a bit more context to that, some risk assessments are actually in the millions of dollars to complete. They're very complex, and they require a lot of due diligence and information. Once you've completed that risk assessment, you likely want it audited by a third party in very high-risk use systems. That is not a small undertaking, especially for small companies and start-ups looking to start up their new business.

5:55 p.m.

Bloc

Jean-Denis Garon Bloc Mirabel, QC

We sometimes interrupt the witnesses because our speaking time is so limited.

Some witnesses told us it might be useful to have a federal registry of broad generative models. For example, companies would be required to add certain codes or certain models to the registry, evaluate the risks and present a risk mitigation plan to the government based on the model and its uses.

If I understand correctly, you think it is too complex.

5:55 p.m.

Director, Global Artificial Intelligence and Canada Public Policy, Amazon Web Services, Inc.

Nicole Foster

It's actually not effective. It's not that it's too complex; it isn't necessarily going to be an effective way to evaluate whether a system operates appropriately for that use case.

What we do for customers is provide good, clear information about recommended use cases for the models we offer to customers. The only way to evaluate how a model performs appropriately for your use case is to test it.

In that testing process with your data, you're going to be able to test and evaluate if it's performing appropriately for you use case. Just throwing up a bunch of models is not going to be that effective in giving us that information.

5:55 p.m.

Bloc

Jean-Denis Garon Bloc Mirabel, QC

I understand.

Ms. Curran, I’d like to ask you a question about something you said.

5:55 p.m.

Head of Public Policy, Canada, Meta Platforms Inc.

Rachel Curran

Yes. I'm sorry. I want to—

5:55 p.m.

Bloc

Jean-Denis Garon Bloc Mirabel, QC

Please let me ask it.

We talked about content moderation. One of my colleagues asked you if content moderation boils down to deciding what Quebeckers and Canadians see on the internet.

The content we see on your platforms is not random. It is very deterministic and based on what people looked at before, for instance. It causes a lot of worry among a lot of people.

I’d like you to give us a clear answer on this matter. Do you think your company also chooses what people see on the internet?

5:55 p.m.

Head of Public Policy, Canada, Meta Platforms Inc.

Rachel Curran

Yes, it does. We pick up signals from what people are interested in, largely. We've applied a lot more transparency and control around that for our users. You can click on a piece of content and go to “Why am I seeing this?”, and you can find out what signals our systems are reading in order to show you that particular piece of content.

We're trying to give users a lot more control over that. You're right that our systems decide what's most relevant and most interesting to our users. Our fear is that with an onerous regulatory system applied to that kind of content prioritization, ultimately, it's going to be up to government regulators what we should be showing to Canadians.

5:55 p.m.

Bloc

Jean-Denis Garon Bloc Mirabel, QC

I understand, Ms. Curran, but you said you were in favour of a somewhat American approach, such as the White House voluntary commitments for artificial intelligence, and that these commitments might be worthwhile.

Do you think that a voluntary approach, self-regulation, could be successful? Do you think that a citizen can trust in what they’ve seen in the past and believe in the industry’s ability to self-regulate in the absence of a crisis?

5:55 p.m.

Head of Public Policy, Canada, Meta Platforms Inc.

Rachel Curran

No, I don't think self-regulation is the right approach. As someone who has worked in government for a lot of years, I'm a fan of smart regulation. The issue is what kind of regulation is applied and how those issues are debated. The government has indicated that it's in the process of preparing an online harms bill, and we think that's the context in which to debate where that line should be drawn.

We have always said to governments, “You tell us what you define as disinformation or misinformation, and we'll make sure we enforce against those rules.” That would be a lot easier for us than engaging in an internal debate—which we do daily—around where the line should be drawn and what content we should allow on our platforms.

We are very happy to engage in those discussions, and we would love policy-makers and decision-makers to set those rules for us, but there needs to be an open and robust public debate around where the line should be drawn. In this case—and I think another member referenced this—it looks like content regulation through the back door, and it doesn't really allow for an open, informed and public debate around where to draw the line for what's acceptable content online.

6 p.m.

Bloc

Jean-Denis Garon Bloc Mirabel, QC

I know my time will be cut short during my next turn.

Based on what you are telling us, the approach you support is to pass online hate legislation that would lead to the same outcome, but it would not be a way to regulate what people see on the internet through the back door.

Is that not a somewhat contradictory way of looking at the situation?

6 p.m.

Head of Public Policy, Canada, Meta Platforms Inc.

Rachel Curran

No, I don't think so. We would like government, if it's willing, to regulate content issues. Those are sensitive issues. They're constitutionally sensitive issues. There are federal and provincial aspects to that kind of discussion, so if government wants to regulate in those areas, let's have that discussion and let's have it openly.

I think if we're asked to regulate content through a provision in an AI regulation bill, it's not going to allow us to explain why we're showing particular content to Canadians or why we're restricting particular content from Canadians on the grounds that it's misinformation or disinformation. That's not fair to our users. It's not fair to Canadians that they won't understand why we're taking those decisions.

6 p.m.

Bloc

Jean-Denis Garon Bloc Mirabel, QC

Thank you, Ms. Curran.

6 p.m.

Liberal

The Chair Liberal Joël Lightbound

Thank you very much, Mr. Garon.

Mr. Masse, you have the floor.

6 p.m.

NDP

Brian Masse NDP Windsor West, ON

Thank you, Mr. Chair.

Thank you to the witnesses.

Just for public awareness and to my colleagues, I will be tabling a motion, not for debate for today but for a subsequent meeting, and looking for feedback and whether there are amendments. It says:

That pursuant to Standing Order 108(1), the committee send for, from the auto manufacturers Ford Motor Company of Canada, Limited, General Motors of Canada Company, and Stellantis (FCA Canada Inc.), BMW Group Canada Inc., Honda Canada Inc., Hyundai Auto Canada Corp., Jaguar Land Rover Canada ULC, Kia Canada Inc., Maserati Canada Inc., Mazda Canada Inc., Mercedes-Benz Canada Inc., Mitsubishi Motor Sales of Canada, Inc., Nissan Canada Inc., Porsche Cars Canada Ltd., Subaru Canada, Inc., Toyota Canada Inc., Volkswagen Group Canada Inc. and Volvo Car Canada Ltd., a comprehensive report on their strategies and initiatives taken to date and on further actions aimed at improving security features to address auto theft in Canada; and that the documents be submitted to the committee within five working days.

We have done this before at this committee. The reason I'm suggesting we do it is that I don't want to turn this entirely over to Public Safety or Transport because of the amount of money that's going through this file to the auto industry. It won't take committee time, but we'll be able to figure out whether or not we might want to have a more comprehensive study about that issue in the future.

I'm looking forward to seeing if any of our colleagues have amendments to that. It will be in your mailboxes tomorrow morning.

The first thing I want to ask relates to an issue we have here: Either we trust the bill through regulation and a bit of vagueness, or we trust the industry by not having any legislation. That could mean upwards of five years, quite potentially...depending upon Parliament and how long it lasts. Even if it doesn't last, to get something through would take a lot of time, so we have a decision to make.

Ms. Curran, you mentioned that you were in public policy before. I think you worked for Prime Minister Harper, if my memory is correct, as director of policy. In July 2019, the U.S. Federal Trade Commission imposed a record $5-billion fine against Facebook for deceiving users in their ability to control the privacy of their personal data. First, in that case—and I don't know—were Canadians having the same problems that Americans were? Second, why would we just trust that no public policy would be the best policy at the moment versus the bill?

6 p.m.

Head of Public Policy, Canada, Meta Platforms Inc.

Rachel Curran

Thank you for that question, Mr. Masse.

I don't think we are arguing that there should be no legislative framework. Our position may differ a bit from those of my colleagues here on that. I think the bill, in its original form as its written now, minus some of the amendments that have been proposed by the minister, is actually quite good and quite workable.

I understand the political imperatives here. I understand the concern the public has for generative AI products in particular. I think it is incumbent upon the government and decision-makers to put some kind of guardrails, as Mr. Sorbara talked about, around the development and deployment of AI. Our only caution is that Canada not do that in a way that's so far out of alignment with other jurisdictions that it's going to have a negative impact on the development and deployment of AI in this country.

6:05 p.m.

NDP

Brian Masse NDP Windsor West, ON

On that issue, I mentioned the Federal Trade Commission. How much was allocated to Canada for reparations? Did a reciprocal amount come to Canada?

You're asking for some harmonization here with other countries. In that particular case, there was $5 billion. Were there any reparations to Canadians who were affected by the breach the U.S. Federal Trade Commission noted? We actually get money sometimes, even from consumer abuse in the United States, through a number of different processes. Did any money come to Canada for that breach of trust?

6:05 p.m.

Head of Public Policy, Canada, Meta Platforms Inc.

Rachel Curran

I don't know the answer to that, Mr. Masse, but we will follow up to get you that information.

6:05 p.m.

NDP

Brian Masse NDP Windsor West, ON

I appreciate that.

I want to move now to Ms. Foster.

You mentioned criminal provisions. Again, this is part of the challenge we're faced with. In July 2020, Amazon was hit with a record fine of almost $900 million U.S. by the European Union for processing personal data in violation of the GDPR rules for privacy violations.

In that particular case, were Canadians under the same privacy violation that citizens under the GDPR got reparations for? Did Canada get any reciprocal treatment for the privacy violations that may have taken place?

6:05 p.m.

Director, Global Artificial Intelligence and Canada Public Policy, Amazon Web Services, Inc.

Nicole Foster

I'm not aware of the particulars of that case. I'm also here on behalf of Amazon Web Services as opposed to Amazon, so it's more difficult for me to answer.

6:05 p.m.

NDP

Brian Masse NDP Windsor West, ON

Would the Amazon case with the U.S. Federal Trade Commission be something you're familiar with? That's with regard to delivery drivers and the period of two and a half years...where there was a settlement. Is that one you'd be familiar with?