Evidence of meeting #106 for Industry, Science and Technology in the 44th Parliament, 1st Session. (The original version is on Parliament’s site, as are the minutes.)

A recording is available from Parliament.

Also speaking

Todd Bailey  Chief Intellectual Property Officer and General Counsel, Scale AI, As an Individual
Gillian Hadfield  Chair, Schwartz Reisman Institute for Technology and Society, University of Toronto, As an Individual
Wyatt Tessari L'Allié  Founder and Executive Director, AI Governance and Safety Canada
Nicole Janssen  Co-Founder and Co-Chief Executive Officer, AltaML Inc.
Catherine Gribbin  Senior Legal Adviser, International Humanitarian Law, Canadian Red Cross
Jonathan Horowitz  Legal Adviser, International Committee of the Red Cross, Regional Delegation for the United States and Canada, Canadian Red Cross

11:45 a.m.

Liberal

Tony Van Bynen Liberal Newmarket—Aurora, ON

Ms. Gribbin, go ahead.

11:45 a.m.

Senior Legal Adviser, International Humanitarian Law, Canadian Red Cross

Catherine Gribbin

Just to pull in an analogy from my area of work, under international humanitarian law there's article 36, which calls for a weapons review. It covers the review of weapons and of the means and methods of warfare, and that review has to take place beforehand to ensure that the weapon, or the means and methods, can in fact be used in accordance with international humanitarian law.

Having heard what others have spoken to this morning, I do think there is a means by which to provide the clarity that is needed and the instruction to those who are concerned. We have that possibility currently in Canada's legislative system, so that clarity and instruction can be given, just to use that comparison.

11:45 a.m.

Liberal

Tony Van Bynen Liberal Newmarket—Aurora, ON

Thank you.

I think that's my time, Mr. Chair.

11:45 a.m.

Liberal

The Chair Liberal Joël Lightbound

Thank you.

Mr. Garon, the floor is yours.

11:45 a.m.

Bloc

Jean-Denis Garon Bloc Mirabel, QC

Thank you, Mr. Chair.

First, I'd like to tell you how happy I am to be here. Thank you for having me.

I also want to thank the witnesses for coming.

Mr. Tessari L'Allié, there are obviously various interpretations of the imminent dangers of AI. In my opinion, it's very simple: there are pros and cons.

You talked about the labour force and the fact that humans could be replaced. However, an important report on AI was published recently by the Massachusetts Institute of Technology task force headed by David Autor and his colleagues. They seemed to say that, every time there's a major technological revolution, people fear that new technologies will replace humans. This was the case with the automobile, as well as with the Internet. Typically, adaptation takes time. These cycles take 30 to 40 years.

Still, there's a sense of urgency because it seems that, in the very short term, the negatives outweigh the positives. We need only think of conflicts or misinformation.

Is that why the adoption of a regulatory and legislative framework is urgent?

11:45 a.m.

Founder and Executive Director, AI Governance and Safety Canada

Wyatt Tessari L'Allié

Yes, absolutely.

When it comes to jobs, there's a difference with previous technological revolutions. To date, humans have always been able to do something that technology couldn't. When that's no longer true, the very nature of the economy will change. As long as we're able to do something that AI cannot, there will be jobs. However, that will change.

With regard to the immediate risks, it'll be absolutely essential to manage this transition. If it's managed well, we'll be able to create a very beneficial world, thanks to AI. However, in order to benefit from AI in general and avoid doing more harm than good, we need to minimize the security risks, manage the economic transition and ensure that no one is left behind.

11:50 a.m.

Bloc

Jean-Denis Garon Bloc Mirabel, QC

Ultimately, you're saying that we need rules. At the same time, as Mr. Bailey said so aptly, we don't know what rules we need. This means adopting a flexible framework to allow the rules to evolve quickly, and probably in less time than the legislative cycle.

The bill provides ample room for the industry to self-regulate. Self-regulation is, in fact, the mechanism that the industry prefers in order to ensure flexibility. However, I think that, if the industry truly wanted to regulate itself, it would already have done so.

What do you think of that approach?

11:50 a.m.

Founder and Executive Director, AI Governance and Safety Canada

Wyatt Tessari L'Allié

There are many examples throughout history that demonstrate that it's never a good idea to let an industry regulate itself. Although the industry is demonstrating a lot of good faith right now, it's absolutely essential that this be enshrined in legislation, that the industry must abide by that law and that it be applied equitably to all companies.

What's good about the minister's proposed amendments is that the schedule can be adapted, meaning that classes of use can be added or removed, and that each line in the schedule can be amended by regulation. That ensures significant flexibility, which is a good approach.

After that, as I said, questions about capacity, open-source code and research and development need to be added. That said, overall, what the minister is proposing through these amendments is a good idea.

11:50 a.m.

Bloc

Jean-Denis Garon Bloc Mirabel, QC

Mr. Bailey, I'll come back to you on that point in a moment. I know that you've got something to say.

Mr. Tessari L'Allié, would you be in favour of including a mechanism to ensure an automatic review of the legislation, so that it isn't static and we can continue to move forward?

11:50 a.m.

Founder and Executive Director, AI Governance and Safety Canada

Wyatt Tessari L'Allié

Yes, absolutely. I'm aware of such concerns. I think Michael Geist was the one who said there was a promise to update the Personal Information Protection and Electronic Documents Act a few years after it came into force, but it never happened. When it comes to AI, I think that public concerns and political interests are significant enough to merit a legislative review.

11:50 a.m.

Bloc

Jean-Denis Garon Bloc Mirabel, QC

Mr. Bailey, I'll re‑ask you the question. You said that no one knows what the rules should be. Who will know, and when?

11:50 a.m.

Chief Intellectual Property Officer and General Counsel, Scale AI, As an Individual

Todd Bailey

I'm an engineer by training, so I take a very practical approach.

We here in Canada are not going to change what happens outside of our borders with our regulations. We need to balance these important harms with a mind to making sure.... We need to create more Shopifys and more Coveos. As Tobi Lütke, the founder of Shopify, put it in a Star Wars reference, the way you defeat the empire is by arming the rebellion.

Our rebellion is Canadian AI businesses. There are a lot of harms here. This is not an easy balancing act. You can't bite off more than you can chew. You have to start simple. You have to make sure you're protecting against the concerns of my friends here, and make sure it's something that Canadian businesses.... AI businesses in Canada are overwhelmingly—

11:50 a.m.

Bloc

Jean-Denis Garon Bloc Mirabel, QC

From what I understand, when it comes to regulations, the European Union is leading the way right now. The Americans have moved forward very quickly by issuing an executive order, but the outcome is unclear.

What you're saying is that Canada's a minor player and has yet to make a move. Canada doesn't know what rules to adopt yet, because it has to follow other countries.

11:50 a.m.

Chief Intellectual Property Officer and General Counsel, Scale AI, As an Individual

Todd Bailey

Essentially, yes.

The whole world is going to have to work together. We can be first, but if we're going in the wrong direction, it's not going to help Canadian businesses to do that. If you look at what's happening in the U.S. and EU, the rush is to get the infrastructure in place, but nobody really wants to be the first one to jump into the boiling pot on regulating the technology because nobody knows what's going to happen.

Even in the U.S., they've drawn a line and said that you have to be above this line before these regulations are going to apply. Guess what's going to happen. Everybody's going to stay right here on the border.

11:50 a.m.

Bloc

Jean-Denis Garon Bloc Mirabel, QC

Thank you.

11:50 a.m.

Liberal

The Chair Liberal Joël Lightbound

Thank you very much, Mr. Garon.

Mr. Masse.

11:50 a.m.

NDP

Brian Masse NDP Windsor West, ON

Thank you, Mr. Chair.

Welcome, Monsieur Garon, to our committee. We also acknowledge Mr. Lemire's work on the committee. He was a very good member, and we wish him well in his new committee.

One thing you mentioned about the Red Cross was interesting, but it leads to other discussions that I want to get your thoughts on, such as domestic policing and security—private security. I'll maybe go around the table at some point, but let's start with the Red Cross. I think we should have some concerns there. In the United States, AI has already made mistakes on facial recognition.

I had a chance to attend a number of conferences as part of the Canada-U.S. parliamentary association. The national and state legislatures, Congress and the Senate, from all across the U.S. hold a lot of workshops. We heard from some of the large AI players that we haven't even heard from here, and there was quite a recognition of the racial biases that are currently being programmed right in, because they don't even have the right people.

Can we maybe talk a bit about it domestically? I take your point with regard to the international issues, and I want to thank your organization for a lot of good work. I have a vulnerable community that has a lot of people from across the globe, so I want to thank you for that.

Perhaps we'll start with that and go across the board, if anybody else, online as well, would like to contribute to this part.

11:55 a.m.

Senior Legal Adviser, International Humanitarian Law, Canadian Red Cross

Catherine Gribbin

I'm happy to and happy to hand it to Jonathan.

Not to get too legal, but under the current framework a policing operation is going to take place under international human rights law and domestic human rights law. That's where Canada has taken its international obligations and has brought them into how we legislate.

When you're looking at the legal regime on the use of force under international and domestic human rights law, it provides that instruction to police. That's why we referenced it in our recommendations about compliance. It's not just compliance with international humanitarian law, which, as Jonathan mentioned, applies during times of armed conflict; it's recognizing the interplay between human rights law and humanitarian law, both in the domestic context as it applies in Canada and for Canadian operations overseas in partnered military operations.

That's why we absolutely referenced those two legal regimes and the fact that AI and those capabilities all have to be used in compliance with that pre-existing body of law. Again, we see it as a missed opportunity to make that explicit, and that's how it could be included in definitions, etc.

I'll pass the floor to Jonathan for anything additional.

11:55 a.m.

Legal Adviser, International Committee of the Red Cross, Regional Delegation for the United States and Canada, Canadian Red Cross

Jonathan Horowitz

Thank you for that.

The only thing that I would add is that some of the ICRC's concerns that are primarily focused on armed conflict are transferable and translatable to other situations where AI is being used to assist people in making decisions. Some of those concerns are, as you mentioned, with the bias that's in the data being used. There's also concern around user bias. Do the users know what the system is supposed to be used for? Will they become overreliant on that system to the point of removing their own human judgment?

We have concerns, as you may know, around lack of transparency in AI systems that can have serious consequences and around lack of predictability, not knowing exactly why the AI system provides the output to the user that it provides. Particularly important during situations of armed conflict, though not exclusively, is that artificial intelligence systems can produce results at a speed that, if they have certain autonomous features, outpaces human decision-making.

These are all things that are of particular relevance in situations of armed conflict, but I think you can imagine that they would also be relevant outside of situations of armed conflict, whether it's in the Canadian context or in any other domestic context.

Thank you.

11:55 a.m.

NDP

Brian Masse NDP Windsor West, ON

Dr. Hadfield.

11:55 a.m.

Prof. Gillian Hadfield

Yes, I think this is a good context. Think about facial recognition and different error rates across different groups. I think it's a great example if we're thinking about how safe harbours and regulatory markets might work, and why we're limiting ourselves when we say it's only in these domains. Look, we can have facial recognition across all of these domains. We should be asking this: Are there steps anybody who is deploying facial recognition technology in any domain—who's developing it or purchasing and deploying it—can take to verify that it's meeting minimum legal standards?

A safe harbour would do that by establishing that, as long as you've done these kinds of tests, or employed this kind of technology, or maybe engaged an independent third-party provider of a technology, whom we've certified and approved, to verify that the accuracy of your facial recognition system is equitable across different groups.... That's the kind of thinking we need to be developing, and we need to recognize that it's something that will evolve. The technology is going to evolve. The systems will evolve. You need that agility to do that.

That's an example where you give companies greater certainty to build. I think we should all be thinking about how we encourage AI development and deployment throughout Canada. You reduce that uncertainty by providing some safe harbours and some lower-cost mechanisms that companies can use to verify that.

Noon

Liberal

The Chair Liberal Joël Lightbound

Thank you very much, Mr. Masse.

Mr. Vis.

Noon

Conservative

Brad Vis Conservative Mission—Matsqui—Fraser Canyon, BC

Thank you, Mr. Chair.

Thank you to our witnesses today.

I just want to touch on some of the themes that Mr. Garon and Mr. Perkins touched on earlier.

In question nine of the document that we were all sent from the Library of Parliament, it states that, according to the AIDA companion document, “Canada...will work together with [our] international partners”.

It also notes that the United Kingdom recently released a regulatory proposal for artificial intelligence that is said to be flexible and pro-innovation. Unlike the AIDA, it proposes to create principles for the development and responsible use of artificial intelligence. These principles will be released in a non-statutory form and implemented by existing regulators, who will be encouraged and, if necessary, specifically empowered to regulate AI in accordance with these principles in areas within their regulatory authority.

Mr. Bailey, what do you think of the United Kingdom’s approach?

Some of those principles, I think I should outline, are transparency and explainability, privacy and confidentiality, and the avoidance of harm.

What do you think of that approach as it relates to business development and innovation versus the approach taken by the Government of Canada?

Noon

Chief Intellectual Property Officer and General Counsel, Scale AI, As an Individual

Todd Bailey

The first thing I'll say is that I'm not familiar with that, but just based on the description that you've provided, I don't think self-regulation is a viable path to be following on this. There needs to be government regulation. I think in that sense it's good that we are here talking about part 3 of this act.

I certainly haven't suggested that part 3 should not go forward. What I'm just saying is that it requires its own.... I'm glad it's getting a good light shone on it today, but in terms of that, I think the approach of regulation, of government defining rules and then enforcing those rules, is the right approach versus what I understand is being proposed in Europe.

Noon

Conservative

Brad Vis Conservative Mission—Matsqui—Fraser Canyon, BC

What if the United States goes along with the U.K. approach? What if the EU goes along with that approach as well? Is Canada going to be the outlier?