Evidence of meeting #109 for Industry, Science and Technology in the 44th Parliament, 1st Session. (The original version is on Parliament’s site, as are the minutes.) The winning word was risk.

A recording is available from Parliament.

On the agenda

MPs speaking

Also speaking

Nicole Foster  Director, Global Artificial Intelligence and Canada Public Policy, Amazon Web Services, Inc.
Jeanette Patell  Director, Government Affairs and Public Policy, Google Canada
Rachel Curran  Head of Public Policy, Canada, Meta Platforms Inc.
Amanda Craig  Senior Director of Public Policy, Office of Responsible AI, Microsoft
Will DeVries  Director, Privacy Legal, Google LLC
John Weigelt  National Technology Officer, Microsoft Canada Inc.

5:35 p.m.

Conservative

Rick Perkins Conservative South Shore—St. Margarets, NS

Were there actual amendments?

5:35 p.m.

Director, Global Artificial Intelligence and Canada Public Policy, Amazon Web Services, Inc.

Nicole Foster

That specific amendment was, I feel, a reflection of our comments. Regarding other specific comments, what we don't think was heard was the need to really differentiate between what's high and low impact more clearly and the need for better clarity around the criminal provisions.

5:35 p.m.

Head of Public Policy, Canada, Meta Platforms Inc.

Rachel Curran

We also raised the concerns that I raised in my opening statement about content moderation and prioritization systems. Essentially, the content Canadians are seeing online—

5:40 p.m.

Conservative

Rick Perkins Conservative South Shore—St. Margarets, NS

I'll come back to that one.

5:40 p.m.

Head of Public Policy, Canada, Meta Platforms Inc.

Rachel Curran

—should not be scoped in as high risk, so we raised that with department officials, absolutely. We raised our concerns around remote access. We also raised concerns around the specific obligations for general purpose AI systems.

We have raised the issues I raised in my opening statement.

5:40 p.m.

Senior Director of Public Policy, Office of Responsible AI, Microsoft

Amanda Craig

We have also had the opportunity to provide input since it was tabled, and we shared proposed amendments and fixes similar to what I raised in my opening remarks with regard to the need for a focus on high risk, for rethinking enforcement and for thinking about the requirements being differentiated based on risk.

5:40 p.m.

Director, Government Affairs and Public Policy, Google Canada

Jeanette Patell

We've shared our concerns with the proposed text, and we will be pleased to share specific amendments with all committee members.

5:40 p.m.

Conservative

Rick Perkins Conservative South Shore—St. Margarets, NS

That was going to be my next question. If you proposed wording for amendments, could you please share that with the committee?

5:40 p.m.

Director, Government Affairs and Public Policy, Google Canada

5:40 p.m.

Conservative

Rick Perkins Conservative South Shore—St. Margarets, NS

A number of you mentioned interoperability. How is it possible to be interoperable with other countries when other countries don't have legislation on this?

Go ahead, Ms. Curran.

5:40 p.m.

Head of Public Policy, Canada, Meta Platforms Inc.

Rachel Curran

This is a really good question.

Look, I understand that Canada wants to act quickly to regulate AI. The original form of the bill, which was a very high-level framework and would have allowed a lot of these issues to be discussed and finalized during the regulation-making process, would have allowed us, allowed Canada, to align with other international jurisdictions. The problem is that the amendments the minister has proposed to the bill in his letters to the committee take a position on all of the issues that are currently under discussion in international forums as part of the G7 process, as part of the Bletchley Declaration and as part of the OECD process. Other jurisdictions, our peer jurisdictions, are discussing these issues now.

The minister's proposed amendments, if accepted by this committee, are going to box Canada into a regulatory framework that may look very different from the one that emerges from international discussions. That is really our concern—not the original text of the bill, but the amendments proposed by the minister.

February 7th, 2024 / 5:40 p.m.

Conservative

Rick Perkins Conservative South Shore—St. Margarets, NS

Right. Originally, the bill only dealt with high-impact systems without a definition. My problem with this bill was that everything was originally in regulation, including the definitions and the policing, except for the penalties. Of course, they knew how to penalize something they couldn't define in the bill. That's been replaced with two more definitions: “general impact” and “machine learning”.

Regarding what's high impact, you reference that they have included a definition in a schedule that they can amend by regulation after the bill passes. Number four is, I think, the one that speaks to the moderation of content. Is this a backdoor way to do Bill C-11?

5:40 p.m.

Head of Public Policy, Canada, Meta Platforms Inc.

Rachel Curran

Yes, and I'm not sure if that's deliberate or not.

Really, we think this is better dealt with in the context of the pending online harms legislation. This will determine what Canadians see online, whether it's something that appears in your social media feed or your YouTube recommendations. It's even something as innocuous as a Canadian company deciding how to rank camping gear for purchase by Canadians using an automated system.

The regulatory obligations proposed in this bill are really going to impact the way those systems work. We think this is better dealt with in a context where we can have a free, robust discussion about where the line should be drawn in terms of the content that Canadians see. I know department officials have said that this provision is designed to deal with misinformation, for instance. Let's have a discussion about where that line should be drawn in the context of a bill that's designed to deal with that.

5:40 p.m.

Conservative

Rick Perkins Conservative South Shore—St. Margarets, NS

Ms. Craig, you mentioned low risk. Low risk, in other words, is now being classified—I presume a lot of it—as general purpose.

I use an example from my riding. I have a seafood company in my riding that's using AI to determine whether something is a surf clam or scallop and what direction it should be in before it goes into the machine to be shucked. If it was used by other companies and got sold, that would fall under general purpose.

It seems like it's excessive. Is there any AI that wouldn't fit into a “general purpose” definition?

5:40 p.m.

Director, Global Artificial Intelligence and Canada Public Policy, Amazon Web Services, Inc.

Nicole Foster

I think we should apply the same approach to high risk versus low risk, even when we're talking about general purpose systems. The same technology you're talking about can be in a high-risk context or a low-risk context. Even visual search, for example, is using computer vision technology or facial recognition technology. There can be a very low-risk context for general purpose systems or high-risk. I think we need to apply the same kind of lens around high or low risk.

To go back to your question about making our legislation more interoperable internationally, the best shortcut we have for that is to really lean on the work being done by international standards organizations and bodies to determine what the right standards are for how these systems are deployed and designed. By referencing those instead of creating bespoke Canadian regulations, it provides a huge enablement to Canadian companies that want to be able to scale globally, whether they're using AI or not.

5:45 p.m.

Conservative

Rick Perkins Conservative South Shore—St. Margarets, NS

Are we too early?

5:45 p.m.

Director, Global Artificial Intelligence and Canada Public Policy, Amazon Web Services, Inc.

Nicole Foster

You can create the framework for those things now and then reference those pieces as they evolve. We're going to see a number of international standards that will probably be published in the next few months. That work is happening at a very rapid pace. Again, the technology is also evolving at a very rapid pace. I think you want to give yourself flexible frameworks and allow the international technical work to inform what Canada will have to rely on as well.

5:45 p.m.

Senior Director of Public Policy, Office of Responsible AI, Microsoft

Amanda Craig

Going back to your original question, that was the exact point I was trying to make in referencing low-risk systems. In the legislation, they're referred to as general purpose systems, but they are quite broadly defined to contemplate technology that's used for multiple different purposes or activities. That could encompass quite a broad swath of technology. As Nicole said, technology used in low-risk or high-risk circumstances may end up being treated similarly by being impacted by requirements, even if it's just used in a low-risk context.

5:45 p.m.

Liberal

The Chair Liberal Joël Lightbound

Thank you very much.

We'll now turn to MP Sorbara for six minutes.

5:45 p.m.

Liberal

Francesco Sorbara Liberal Vaughan—Woodbridge, ON

Thank you, Chair.

Welcome, everyone.

It's not lost on me that as you folks are giving your presentations, literally millions if not hundreds of millions of people—your clients or customers—are using your services as we do this panel. They are benefiting from those services for productivity purposes and being connected. Obviously, AI is driving a lot of that.

There are a lot of positives going on with artificial intelligence that people benefit from every day without even thinking twice about it. It's something that we obviously are looking at and have looked at for two years now. It requires, in my view, guardrails, if I can use that term, or safeguards.

A few terms have been brought forth: content moderation or ecosystem, and high impact versus low impact. I'm going to try to keep this at a high level.

In terms of the differentiation between high-impact and low-impact systems, where is the right balance? I'll take us back to an accounting approach, where you have principles that are very prescriptive. How do we strike a balance between high impact and low impact when we're reviewing AI so that we're not spending a ton of bureaucracy or time capturing the low-impact systems, which in and of themselves are quite beneficial for consumers and companies?

I'll start with Nicole; then we can go across and go online.

5:45 p.m.

Director, Global Artificial Intelligence and Canada Public Policy, Amazon Web Services, Inc.

Nicole Foster

I think we would start with a meaningful decision that may impact an individual's human rights or health and safety. We would probably start with that as the basic definition. It is an area where I think considerable debate went on in the EU to clearly define what those exact use cases are.

There are even low-risk systems that you would want to.... You wouldn't even regulate all high-risk systems the same way. The questions you would ask would be different, and the risk assessments you would do would be different.

5:45 p.m.

Liberal

Francesco Sorbara Liberal Vaughan—Woodbridge, ON

Just before we go to Rachel, I did write down that in your comments, Nicole, you said that high impact is currently “too ambiguous”.

5:45 p.m.

Director, Global Artificial Intelligence and Canada Public Policy, Amazon Web Services, Inc.

Nicole Foster

Yes. There are a lot of use cases. A lot of times, for example, health care will be viewed as a high-impact use case, or human resources and recruiting will. If you use a video conferencing platform to conduct a job interview, you are using AI, and I would argue that that's not a high-risk use case. The system is identifying a face. It may be blurring a background. It may have—

5:50 p.m.

Liberal

Francesco Sorbara Liberal Vaughan—Woodbridge, ON

I don't want to cut you off, but I do want to hear from the others.

I'll hear from Rachel, Amanda and then John, please.

5:50 p.m.

Head of Public Policy, Canada, Meta Platforms Inc.

Rachel Curran

I would agree with that. I think it's the definition of “harm”. If you set a threshold of material harm in various areas, you're going to almost certainly capture high-impact use cases. For instance, we've dealt with the delivery of health care services, but Nicole referenced human rights issues, so it adds that target. Accommodation, employment or credit opportunities raise human rights issues and should probably be defined as high impact.

I don't think you have to capture every possible use case, but if you at least set a legal threshold of material harm, you're going to capture most high-impact cases.