Evidence of meeting #109 for Industry, Science and Technology in the 44th Parliament, 1st Session. (The original version is on Parliament's site, as are the minutes.)

A recording is available from Parliament.

Also speaking

Nicole Foster  Director, Global Artificial Intelligence and Canada Public Policy, Amazon Web Services, Inc.
Jeanette Patell  Director, Government Affairs and Public Policy, Google Canada
Rachel Curran  Head of Public Policy, Canada, Meta Platforms Inc.
Amanda Craig  Senior Director of Public Policy, Office of Responsible AI, Microsoft
Will DeVries  Director, Privacy Legal, Google LLC
John Weigelt  National Technology Officer, Microsoft Canada Inc.

6:05 p.m.

Director, Global Artificial Intelligence and Canada Public Policy, Amazon Web Services, Inc.

Nicole Foster

My expertise is in AI for Amazon Web Services as opposed to Amazon. I don't have that expertise.

6:05 p.m.

NDP

Brian Masse NDP Windsor West, ON

I appreciate that. I'm not trying to put you in an awkward spot, but you mentioned the criminal provisions.

I have a list of fines and penalties levied against Amazon in the United States by the Federal Trade Commission. Part of what we have to decide here is how to handle some of those elements. Canada, quite frankly, isn't getting the same treatment for its citizens.

I'll move to Microsoft.

Microsoft has agreed to pay $3 million in fines for selling software to sanctioned entities and individuals in Cuba, Iran, Syria and Russia from 2012 to 2019. The U.S. Department of the Treasury says that the majority of these apparent violations involved blocked Russian entities.

To Microsoft, is some of that activity still taking place or is that now completed? How can we be entirely trusting if there are no regulations related to AI, there's a waiting period to do something and we hear of cases like that?

6:05 p.m.

Senior Director of Public Policy, Office of Responsible AI, Microsoft

Amanda Craig

Unfortunately, I also don't have the expertise to respond to your question about this specific case.

I will say that, from our perspective, there is still an opportunity to move forward with this legislation, and to do so in a way that is swift and that addresses the real concerns that exist about AI deployment and high-risk scenarios. That can be done by adjusting the amendments' approach to high-impact systems and the requirements defined for general-purpose or lower-risk systems.

6:05 p.m.

NDP

Brian Masse NDP Windsor West, ON

I'm sorry, but I have limited time. If you don't know that one, then I'll move on to this one, because it is about what we're talking about.

The U.S. Federal Trade Commission charged that Microsoft violated children's online privacy protections. This matters for privacy protection here, given what the age of consent is. There was a $20-million settlement. I'm just wondering whether, in that case, Canadian children were subject to the same violation that was covered by the settlement.

Some of these cases that I have here.... This isn't hard research; it's from The New York Times. I'm not asking about anything unknown, anything that depends on special preparation, or gotcha stuff. It's just The New York Times information that you can use your own products to find.

I want to know if Canadian children had the same exposure that's been settled in the United States.

6:05 p.m.

Senior Director of Public Policy, Office of Responsible AI, Microsoft

Amanda Craig

We'd be really happy to follow up with you. Unfortunately, I don't have that information today.

6:05 p.m.

NDP

Brian Masse NDP Windsor West, ON

Okay.

I just have one quick yes-or-no question for all the companies with regard to the 3% digital services tax. Are you opposed to the digital services tax that's been proposed by Canada? I'd be interested to know the positions. I can go by company or individual.

I'll leave it in your hands, Mr. Chair, but I'd be interested to know whether they support or oppose the tax, yes or no.

6:10 p.m.

Director, Global Artificial Intelligence and Canada Public Policy, Amazon Web Services, Inc.

Nicole Foster

I think the concerns with the digital services tax are less about the tax itself and more about the timing, with Canada moving forward out of step with other jurisdictions and out of alignment with other countries.

6:10 p.m.

NDP

Brian Masse NDP Windsor West, ON

That's fair enough.

6:10 p.m.

Head of Public Policy, Canada, Meta Platforms Inc.

Rachel Curran

That's exactly our position as well. As long as there is a globally harmonized approach, we're happy to pay more tax and pay a digital services tax. Just don't make it a one-off.

6:10 p.m.

Senior Director of Public Policy, Office of Responsible AI, Microsoft

Amanda Craig

We also support an internationally harmonized approach such as that being advanced at the OECD.

6:10 p.m.

Director, Government Affairs and Public Policy, Google Canada

Jeanette Patell

We also support the OECD process and international corporate tax reform.

6:10 p.m.

NDP

Brian Masse NDP Windsor West, ON

Thank you for that.

There's a reason I asked that. I'm vice-chair of the Canada-U.S. Inter-Parliamentary Group, and when we go to Washington, we hear that members of Congress have been actively lobbied by your companies to get them to tell us to oppose the tax. If you do a quick search, you will find that many of those individuals are receiving financial donations.

With that, I think I'm out of time, Mr. Chair.

6:10 p.m.

Liberal

The Chair Liberal Joël Lightbound

You're way out of time, Mr. Masse, but I feel generous tonight, as I've been with all members. That's how I do it.

Mr. Williams, the floor is yours for more or less five minutes.

6:10 p.m.

Conservative

Ryan Williams Conservative Bay of Quinte, ON

Yes, more or less. Hopefully it will be seven.

6:10 p.m.

Voices

Oh, oh!

6:10 p.m.

Conservative

Ryan Williams Conservative Bay of Quinte, ON

Don't take away my time, Mr. Chair.

Thank you, witnesses.

I want to focus tonight on international standards and on catching up to some of our peers when it comes to minimizing harms to artists, creators and the general public. I know you all agree: you have consumer products, and you want to protect creators and consumers.

A big concern with AI is people's control over how their likeness could be used for profit, where the likeness's value reflects an investment by the individual...and protection against online harms. Obviously, we also want to protect consumer use and rights.

I'll note some of the biggest examples we have today.

I met with a group yesterday from Music Canada. There's an AI-generated Johnny Cash who can sing Barbie Girl by Aqua perfectly. This is a computer system learning an artist and replicating them. We can look to what could happen if that were used with Michael Jackson or others to create full albums. Who's protecting them? Are there laws to protect consumers and, of course, consumer rights?

The second one is deepfakes, which are very concerning. The biggest example right now is Taylor Swift, but this isn't just about celebrities. Something a colleague of ours, Michelle Rempel Garner, has been especially vocal about is the use of AI-generated fake photos and videos in intimate partner violence.

Looking at those concerns and this material harm, how does each of you see Canada catching up, whether through the AIDA or existing legislation, and ensuring we protect consumer rights?

I'll start with Ms. Foster and we'll go around the room.

6:10 p.m.

Director, Global Artificial Intelligence and Canada Public Policy, Amazon Web Services, Inc.

Nicole Foster

I have a household full of Swifties. We talk about this at the dinner table.

I think you've raised a great example, actually, of where existing legislation could be more purpose-built to solve a problem like that. It is already illegal to share intimate images without consent. I think adding clarification in the Criminal Code to ensure that this covers AI-generated images is probably a more effective vehicle for addressing that particular issue. I think Canada could act on that very quickly and very efficiently.

The industry is working pretty hard to ensure that it is much easier to detect AI-generated content. Among the commitments we made at the White House in July was a commitment to develop watermarking and other tools to detect generated content. By November, we'd already released a watermarking tool within our Titan AI model. The industry is trying to move very quickly to address some of these harms, and it is obviously collaborating to make sure that there are laws in place to address them, but there's also what we can do on the technical side to help ensure that those images are detected quickly.
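
As a purely illustrative aside, watermark detection of the kind Ms. Foster describes works by embedding a signal in generated content that a detector can later recover. The toy Python sketch below uses least-significant-bit embedding; this is a simplification for illustration only, not Amazon's Titan technique, which uses robust invisible watermarks designed to survive editing.

```python
import numpy as np

def embed_watermark(pixels: np.ndarray, bits: list) -> np.ndarray:
    """Toy watermark: write each bit into the least significant bit of one pixel."""
    marked = pixels.copy()
    flat = marked.reshape(-1)  # flat view into the copy
    for i, bit in enumerate(bits):
        flat[i] = (int(flat[i]) & 0xFE) | bit  # clear the LSB, then set it to the watermark bit
    return marked

def detect_watermark(pixels: np.ndarray, n_bits: int) -> list:
    """Read the watermark back out of the least significant bits."""
    return [int(v) & 1 for v in pixels.reshape(-1)[:n_bits]]

# Usage: mark a random 8x8 grayscale "image" and confirm the detector recovers the bits.
image = np.random.randint(0, 256, size=(8, 8), dtype=np.uint8)
signature = [1, 0, 1, 1, 0, 0, 1, 0]
marked = embed_watermark(image, signature)
assert detect_watermark(marked, len(signature)) == signature
```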

The copyright question is a very hot topic, and the government has held a consultation on it. From a policy perspective, I think there are two sides to consider. There's AI training and the data input: how do we allow for greater assurance that we have good data to train models? Then there are techniques on the output side to ensure that we suppress copyrighted content. It is possible to build suppression techniques into models to ensure that copyrighted content does not appear on the output side of the model.

It's good to separate the discussion into those two aspects of AI, and to ensure that we have good models that reflect Canadian content as well. We should think about the fact that most available models are dominated by either Chinese or English. There's a lack of, for example, French-language content, and we have other minority languages in this country as well. We want to make sure that we permit appropriate content use for training models, so that we have access to Canadian-specific large language models when we interact with them.
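
To illustrate the output-side suppression Ms. Foster describes, here is a minimal, hypothetical sketch: it screens generated text against an index of protected passages and suppresses any output that reproduces a long verbatim run. Every name, threshold and sample string here is illustrative; real systems are far more sophisticated (fuzzy matching, embeddings, licensed-content databases).

```python
N = 8  # length of verbatim word runs to flag (an illustrative threshold)

def build_index(protected_texts):
    """Index every N-word window of the protected corpus."""
    index = set()
    for text in protected_texts:
        words = text.lower().split()
        for i in range(len(words) - N + 1):
            index.add(tuple(words[i:i + N]))
    return index

def suppress_if_infringing(generated, index):
    """Return the text only if no N-word window matches protected content."""
    words = generated.lower().split()
    for i in range(len(words) - N + 1):
        if tuple(words[i:i + N]) in index:
            return None  # suppress: verbatim overlap with protected content
    return generated

# Usage with a made-up "protected" passage.
protected = ["all along the misty shore the silver boats were sailing home at last"]
idx = build_index(protected)
assert suppress_if_infringing("the silver boats were sailing home at last indeed", idx) is None
assert suppress_if_infringing("an original sentence with no overlap at all", idx) is not None
```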

6:15 p.m.

Head of Public Policy, Canada, Meta Platforms Inc.

Rachel Curran

I'd agree with that.

The issue you've raised around deepfakes, both video and audio, would not be addressed by Bill C-27, at least not anytime soon. I know that the government made an announcement—I think today—around the issue of deepfakes and an intent to deal with them. They could be dealt with very easily through an amendment to the Criminal Code or existing legislation.

It's the same thing with election disinformation: if that's a harm committee members are concerned about, it can be addressed through a quick amendment to the Canada Elections Act. On creator rights, the use of material in AI development that affects those rights can be dealt with through the Copyright Act.

There are existing statutes, and we previously advocated for a sectoral approach to AI regulation for that reason. Those harms could all be dealt with very quickly through existing statutes; they won't be dealt with quickly in the context of Bill C-27.

6:15 p.m.

Senior Director of Public Policy, Office of Responsible AI, Microsoft

Amanda Craig

With regard to deepfakes, similar to my colleague from AWS, we see an opportunity to either make adjustments to the Criminal Code or think about opportunities to address that concern in the upcoming online harms legislation.

With regard to copyright, from our perspective there's a need to think about how we're enabling the use of AI to advance the spread of knowledge, to enable new creative works consistent with copyright law and to protect the rights and needs of creators. We want to continue to engage with partners and governments on how to achieve those objectives. We think it's been really productive to have the ongoing consultation that we've submitted a response to, and we look forward to any further opportunities to follow up on that.

6:15 p.m.

Director, Government Affairs and Public Policy, Google Canada

Jeanette Patell

Like the others, we see other vehicles as appropriate mechanisms for addressing the very legitimate concerns around images of this sort.

I want to be clear that we have a zero-tolerance policy for non-consensual images on Google, regardless of whether those images are synthetic. We provide tools to individuals who find themselves in that terrible situation so we can support them in addressing it. We also have very clear policies for our own generative AI tools that make generating sexually explicit content of this sort a prohibited use.

I think maybe my colleague Tulsee can also speak a bit to this issue, because detection and watermarking are really important technological advances where we're going to need a very collaborative approach.

6:15 p.m.

Liberal

The Chair Liberal Joël Lightbound

Unfortunately, we have a technical issue. We can't hear you. Perhaps we'll have time to get back to you at a later point.

In any event, Mr. Williams, I'm sorry, but you are out of time. We'll cut it off here. Hopefully you will have a chance to get back to that.

Mr. Gaheer, the floor is yours.

6:20 p.m.

Liberal

Iqwinder Gaheer Liberal Mississauga—Malton, ON

Thank you, Chair.

Thank you to the witnesses for making time for this committee.

My questions are largely for Microsoft.

Ms. Craig, how do you believe the government should approach regulating a field that's evolving so rapidly, and how do you think the AIDA gets it right in meeting that challenge? One example I think of is the initial schedule of high-impact systems, which can be amended as technology evolves and time goes on.

6:20 p.m.

Senior Director of Public Policy, Office of Responsible AI, Microsoft

Amanda Craig

In the overall approach to legislation and the regulation of technology, and of AI in particular, it is incredibly important to establish a framework and a process through which you can iterate over time, with appropriate guardrails in place. One example is allowing the schedule of high-impact systems to continue to reflect the growing deployment of this technology and new high-risk scenarios by adding to it over time, with guardrails that ensure the process requires any addition to reflect the same risk analysis and to meet a threshold before it joins the schedule.

Ensuring that the processes for implementing requirements evolve over time with changing approaches to AI safety systems is also important. So is ensuring that the risk-based approach is truly foundational to the regulation, and that the regulation does not apply in an overly broad way. Applying onerous requirements to low-impact systems restricts how Canadian businesses can continue to use AI for many innovative purposes.

6:20 p.m.

Liberal

Iqwinder Gaheer Liberal Mississauga—Malton, ON

Regarding compliance, are the proposed amendments currently in the AIDA sufficiently specific for Microsoft to plan its compliance efforts, and would you already be in line with some of those requirements, given your existing internal structures?

6:20 p.m.

Senior Director of Public Policy, Office of Responsible AI, Microsoft

Amanda Craig

The legislation and the recently proposed amendments provide a high-level structure for the requirements, and we expect that the requirements will be defined in more detail through the implementation process.

There is also the enforcement approach, in which some elements, for example the power to audit, are less specific. There seems to be an opportunity in the implementation process to provide more detail about how organizations can demonstrate compliance with the requirements once those are defined in more detail.

From a Microsoft perspective, we have been working on internal governance for responsible AI for seven years, and we have developed a lot of these constructs internally, which we can think of as a starting point. We can imagine a lot of other organizations may not have been spending as much time on that issue or may not have as many resources to apply to that issue. Providing more certainty on how to comply will be of incredible value.

We have developed our responsible AI principles, and we've developed a responsible AI standard to put those principles into practice. It is truly a set of very specific goals and requirements that apply to internal teams working on AI: practices like reducing bias and mitigating the risk of bias, and building sufficient transparency and accountability into our processes.