Evidence of meeting #109 for Industry, Science and Technology in the 44th Parliament, 1st Session. (The original version is on Parliament's site, as are the minutes.)

A recording is available from Parliament.

Also speaking

Nicole Foster  Director, Global Artificial Intelligence and Canada Public Policy, Amazon Web Services, Inc.
Jeanette Patell  Director, Government Affairs and Public Policy, Google Canada
Rachel Curran  Head of Public Policy, Canada, Meta Platforms Inc.
Amanda Craig  Senior Director of Public Policy, Office of Responsible AI, Microsoft
Will DeVries  Director, Privacy Legal, Google LLC
John Weigelt  National Technology Officer, Microsoft Canada Inc.

February 7th, 2024 / 6:45 p.m.

National Technology Officer, Microsoft Canada Inc.

John Weigelt

We are in active conversations with governments across North America regarding how we make sure our elections are safe and how we ensure that we stamp out disinformation. We also have strong connections with defence here. We see they are putting in place AI programs that have responsible safeguards and tools to ensure that there's proper human oversight in this new world of conflict.

6:45 p.m.

Conservative

Brad Vis Conservative Mission—Matsqui—Fraser Canyon, BC

Is there a possibility of overlap among the role of National Defence, personal security, the application of AI, and the threats we face? Does this need to be studied further?

6:45 p.m.

National Technology Officer, Microsoft Canada Inc.

John Weigelt

I believe absolutely it needs to be studied further.

6:45 p.m.

Conservative

Brad Vis Conservative Mission—Matsqui—Fraser Canyon, BC

Thank you.

6:45 p.m.

Liberal

The Chair Liberal Joël Lightbound

Thank you, Mr. Vis.

Mr. Turnbull, you have the floor.

6:45 p.m.

Liberal

Ryan Turnbull Liberal Whitby, ON

Thanks, Chair.

Thanks to all the witnesses for being here today. I'm really finding the exchanges valuable. The insights and expertise you're sharing are making a big contribution to this conversation today, so thanks for being here, all of you.

I think your companies are all very large, very profitable and very successful, in part because of what my colleague Mr. Sorbara said, which is that you're generating a lot of value for your customers. At the same time, I think all of you are trying to demonstrate leadership in responsible AI and the use of responsible AI. That's great.

As legislators, of course, we have a big role to play, and we have to make decisions based on what's in the public interest. I think we're partners in that conversation, so I appreciate our ability to work together and your candour in the comments you've made.

Earlier this week, we heard from Yoshua Bengio, who is sometimes referred to as the godfather of AI. He spoke forcefully about the exponential benefits and risks that are growing as AI evolves. This highlighted, at least from my perspective, the need for speed in getting this legislation through Parliament.

Notwithstanding that there may need to be some amendments and some changes, do all of you agree that as the Canadian government, we need to act with speed to make sure this legislation gets done?

Can I ask each of you for a quick answer?

Go ahead, Ms. Foster.

6:45 p.m.

Director, Global Artificial Intelligence and Canada Public Policy, Amazon Web Services, Inc.

Nicole Foster

To be honest, I don't know that the legislation changes much about how we'll be approaching responsible AI. It will change the level of compliance and the complexity we have to comply with. It may, in one way, divert resources towards compliance and away from responsible AI development, but that would not be a reason not to legislate. I don't think any of our companies are slowing down our efforts to ensure the responsible deployment of this technology, and we continue to rapidly innovate and invest to ensure that we're doing the right things.

I think some of the existential risk is very theoretical, and I think we're very focused on some of the real risks that need to be mitigated in how AI is deployed today. We continue to invest in determining what the next iteration of AI requires from AI developers. How do we manage some of those emerging risks around hallucination and toxicity, and how do we develop appropriate red teaming and safety testing for these generative AI models?

I don't think it will change how we approach responsible AI, but it might change—

6:45 p.m.

Liberal

Ryan Turnbull Liberal Whitby, ON

I'm trying not to interrupt you. That wasn't a short answer, but I appreciate that I'm asking you to answer a tough question with a short answer. We politicians do that a lot, I find, but I didn't do it intentionally.

I'm just trying to get a sense of whether you agree that speed is necessary. I take it from what you're saying that you almost don't need government legislation because you're already responsible in AI development and usage. We could probably beg to differ, and you have said already that that doesn't mean governments shouldn't legislate.

I'll move along the line to Ms. Curran.

6:45 p.m.

Head of Public Policy, Canada, Meta Platforms Inc.

Rachel Curran

Yes, I do think speed is necessary, if for no other reason than to maintain public confidence in AI and in our products and services. I think it's important to get it right and make sure that you don't step on the work that's being done by the Canadian AI ecosystem, but I think speed is a good idea. Passing something is a good idea.

6:50 p.m.

Liberal

Ryan Turnbull Liberal Whitby, ON

Thank you.

Go ahead, Ms. Craig.

6:50 p.m.

Senior Director of Public Policy, Office of Responsible AI, Microsoft

Amanda Craig

We agree that there's a need to move quickly with amendments but also deliberately. My colleague mentioned earlier the importance of consulting with other industry sectors. That is one thing we think would be incredibly valuable because of the breadth of impact, especially to lower-risk systems and how they will impact Canadian businesses across sectors.

6:50 p.m.

Liberal

Ryan Turnbull Liberal Whitby, ON

Thank you.

Ms. Patell, I see your hand up.

6:50 p.m.

Director, Government Affairs and Public Policy, Google Canada

Jeanette Patell

We would go to the proverb “If you want to go fast, go alone; if you want to go far, go together.” We see this as a real global opportunity, and we want to go far together, so I think this is one of those areas where we need to collaborate with international partners to get it right.

6:50 p.m.

Liberal

Ryan Turnbull Liberal Whitby, ON

Great. Thank you for your comments there.

I'm going to ask another general question. How many of you have adopted the voluntary code of conduct on the responsible development and management of advanced generative AI systems?

Ms. Foster, please give a short answer.

6:50 p.m.

Director, Global Artificial Intelligence and Canada Public Policy, Amazon Web Services, Inc.

Nicole Foster

Which code of conduct is this? Is it the Canadian one or the G7 one?

6:50 p.m.

Liberal

Ryan Turnbull Liberal Whitby, ON

It's the code of conduct that the Government of Canada announced and produced after consultation.

6:50 p.m.

Director, Global Artificial Intelligence and Canada Public Policy, Amazon Web Services, Inc.

Nicole Foster

We have aligned to the White House commitments and look to see those internationalized through an international consensus.

6:50 p.m.

Liberal

Ryan Turnbull Liberal Whitby, ON

That's not the Canadian code of conduct but the American one. Got it.

Go ahead, Ms. Curran.

6:50 p.m.

Head of Public Policy, Canada, Meta Platforms Inc.

Rachel Curran

We asked to be included in the Canadian code. We were told it was for Canadian companies only. We were one of the first to sign on to the White House commitments. We are signed on to other voluntary codes that are similar.

6:50 p.m.

Liberal

Ryan Turnbull Liberal Whitby, ON

Thank you.

Go ahead, Ms. Craig.

6:50 p.m.

Senior Director of Public Policy, Office of Responsible AI, Microsoft

Amanda Craig

We're also focused on the effort to define a global standard or global code of conduct through the G7.

6:50 p.m.

Liberal

Ryan Turnbull Liberal Whitby, ON

Okay.

Ms. Patell, is it the same for you?

6:50 p.m.

Director, Government Affairs and Public Policy, Google Canada

Jeanette Patell

Similarly, we're focused on international entities like the G7. That's an area where we can work with Canada, which is also active in the development of that code of conduct.

6:50 p.m.

Liberal

Ryan Turnbull Liberal Whitby, ON

Thank you.

I'm going to switch tracks here a bit. All of you have made comments on the definition of harm and defining material harm, but all of you have described slightly different ways that you determine that yourselves internally. I think that's what I heard. I can repeat back some of the things I jotted down, but it sounded like there was a slight variation in how you assess that internally. Most of you have said that a lot of what you consider to be a high-impact system depends upon the use case.

There are two questions here. Maybe I'll start with the use case question, because I think it's probably the most difficult one. My feeling is that if we were to try to predict all of the various use cases....

Ms. Curran, you said that we should identify the use cases we're concerned about and then, I think you said, identify the threshold of harm, if I'm not mistaken. I find that as regulators and legislators, it would be very difficult to determine all of the various use cases. I'm sure you can't predict use cases either. What I'm struggling with is how that is a real approach for legislators to take. Could you respond to that?

6:50 p.m.

Head of Public Policy, Canada, Meta Platforms Inc.

Rachel Curran

I think you can do both things at the same time. You can set a materiality threshold that's broad and potentially applicable to an infinite variety of use cases and also outline specific use cases that you are concerned about, including election disinformation and the provision of health care services or employment services. You can have an overriding threshold of materiality that applies to a broad range, an infinite range, of potential use cases.