Evidence of meeting #106 for Industry, Science and Technology in the 44th Parliament, 1st Session. (The original version is on Parliament's site, as are the minutes.)

A recording is available from Parliament.

Also speaking

Todd Bailey  Chief Intellectual Property Officer and General Counsel, Scale AI, As an Individual
Gillian Hadfield  Chair, Schwartz Reisman Institute for Technology and Society, University of Toronto, As an Individual
Wyatt Tessari L'Allié  Founder and Executive Director, AI Governance and Safety Canada
Nicole Janssen  Co-Founder and Co-Chief Executive Officer, AltaML Inc.
Catherine Gribbin  Senior Legal Adviser, International Humanitarian Law, Canadian Red Cross
Jonathan Horowitz  Legal Adviser, International Committee of the Red Cross, Regional Delegation for the United States and Canada, Canadian Red Cross

11:25 a.m.

Liberal

The Chair Liberal Joël Lightbound

Thank you.

I'll now give the floor to Nicole Janssen for five minutes.

11:25 a.m.

Nicole Janssen Co-Founder and Co-Chief Executive Officer, AltaML Inc.

Thank you for the invitation to share my thoughts with the committee today.

My name is Nicole Janssen. I'm the co-founder and co-CEO at AltaML. AltaML is the largest pure-play applied AI company in Canada. We create custom AI software solutions for private industry enterprises, as well as the public sector. AltaML is not quite six years old, but we've worked with over a hundred companies on over 400 AI use cases.

I want to start by saying that Bill C-27 is both necessary and a solid step in the right direction. Canada has the potential to be the global leader in responsible AI. That is the title that is up for grabs—

11:25 a.m.

Liberal

The Chair Liberal Joël Lightbound

I'm sorry, Ms. Janssen. Could you pause for just one second? I think we have a technical difficulty. We'll try to get this resolved—apologies for that.

In the meantime, we'll move to the Canadian Red Cross, with Catherine Gribbin and Jonathan Horowitz.

We look forward to hearing from whichever of you would like to start.

Ms. Gribbin.

11:25 a.m.

Catherine Gribbin Senior Legal Adviser, International Humanitarian Law, Canadian Red Cross

Thank you.

January 29th, 2024 / 11:25 a.m.

Jonathan Horowitz Legal Adviser, International Committee of the Red Cross, Regional Delegation for the United States and Canada, Canadian Red Cross

Good afternoon, everyone. Thank you for the invitation to appear before you.

Catherine and I will be focusing solely on part 3 of Bill C-27.

We are representatives of the International Committee of the Red Cross and the Canadian Red Cross. Our organizations work to minimize the suffering of victims of armed conflict, and we work with governments to ensure respect for the laws that regulate armed conflict.

We appear before you today to emphasize that, when governments regulate AI, they need to consider how AI is, can and will be used in armed conflict and to ensure that it does not contribute to unlawful harms.

Today, we are observing in real time that privately made AI systems developed and designed for civilian use are finding their way onto battlefields, whether adapted by militaries, armed groups or civilians. We are particularly concerned with the use of AI that can result in death, injury and other serious harms. This includes the use of AI in misinformation and disinformation campaigns and how they can disrupt and interfere with humanitarian operations. Artificial intelligence allows harmful information to be generated and spread at a scope and scale never before imagined, with real-world dangers for civilians in armed conflict as well as those who work in these contexts.

To address these concerns, we recommend that the bill require that all Canadian-made AI systems used in armed conflict must be designed to comply with international humanitarian law in accordance with Canada's pre-existing legal obligations. International humanitarian law, or IHL, is the body of international law that places limits on how warring parties may fight each other in armed conflicts and, importantly, it provides protections to civilians and others no longer participating in those hostilities.

To ensure IHL compliance, it will also be critical that the bill include language that preserves effective human control and judgment in the use of AI that could have serious consequences for human life in situations of armed conflict; that the bill ensure AI systems are traded in compliance with Canada's export control obligations; and that the bill clearly regulate AI systems used in misinformation and disinformation campaigns and contain language ensuring that the definition of “harm” in proposed subsection 5(1) includes the types of harm that AI systems may cause through the creation and spread of misinformation and disinformation.

11:25 a.m.

Senior Legal Adviser, International Humanitarian Law, Canadian Red Cross

Catherine Gribbin

Our second major concern is that the bill includes exemptions, as you already know, for the Minister of National Defence and the director of CSIS, as well as the chief of the CSE and other government positions. While the bill's focus is on preventing harm by private industry, it offers you a critical opportunity to reduce the risks of AI even further by providing clarity and certainty that AI uses by those who are exempted remain regulated by pre-existing laws.

Novel AI capabilities can produce unpredictable effects and can operate with a lack of transparency that is extremely dangerous for civilians and other victims of war, so the legal uncertainty created by the current bill places many people at much higher risk, in our opinion. The opportunity to make these changes should not be missed, and we believe that your silence must not be misinterpreted as suggesting that government use of AI in armed conflict is unregulated.

Alongside that, we recommend that the private sector's design of AI be in line with pre-existing legal obligations, including international human rights law and international humanitarian law. We also strongly recommend that the bill be amended to provide legislative clarity to government actors and that, as Jonathan mentioned, the bill be explicit about compliance with export control obligations and pre-existing legal obligations.

You will find those proposals in our written submission.

In conclusion, we trust that your goal is to ensure the use of AI enables rather than impedes the protection of civilians during times of armed conflict and ensures the provision of humanitarian assistance.

As you contemplate how best to regulate AI, we ask that the law that is put in place help to prevent AI from resulting in unlawful harm in armed conflict, knowing that AI systems, whether designed by the private or the public sector, might appear on the battlefield in unexpected and unintended ways, whether by militaries, by armed groups or by civilians.

To achieve the bill's purpose of preventing the harms and risks that AI can cause, we believe that the bill must better incorporate Canada's pre-existing obligations under international law, including humanitarian law, and a human-centred ethical approach to AI.

Thank you.

11:30 a.m.

Liberal

The Chair Liberal Joël Lightbound

Thank you very much.

We still have a bit of a technical issue with Ms. Janssen. We'll start the discussion and perhaps interrupt it at some point to give her the opportunity to share her thoughts on Bill C-27.

I will turn it over to Mr. Perkins for six minutes.

11:30 a.m.

Conservative

Rick Perkins Conservative South Shore—St. Margarets, NS

Thank you, Mr. Chair.

Thank you, witnesses, for another fascinating presentation on this bill.

Perhaps I could start with Mr. Bailey and Monsieur L'Allié.

On the one hand, we have this issue where, you know, it's just advanced math and we shouldn't worry about the fear. On the other hand, you also said not to worry about the fear, but that it could end humanity within two to five years if it becomes smarter than us, which you're saying it will. It's pretty hard for us to reconcile those two positions.

Perhaps, Mr. Bailey, you could start, and then Mr. L'Allié. How do we balance that? Perhaps contrary to one of the witnesses, I also have a problem with a bill that removes Parliament from setting the legislative framework on the limits of any part of our public policy, which this bill does.

11:30 a.m.

Chief Intellectual Property Officer and General Counsel, Scale AI, As an Individual

Todd Bailey

Working in Canadian AI as I do, I speak to experts who are assessing these various claims. I think there's a consensus that this sort of world-ending risk is maybe 20 or 30 years out, or something like that, and that we have time to regulate these things now. I would say that the focus of my remarks is that we have a choice between whether we want foreign companies to be deciding this or whether we want Canadian companies to be playing along.

One of the concerns is that some of the regimes that have been proposed right now sort of lock you into the current state, in which obviously Canada is not a big player. We can go and write laws if we like. Are they going to be followed? Are we going to be able to enforce them? This is the thing. The power that we can give ourselves is the opportunity for Canadian....

For example, one important aspect is that we talk a lot about ChatGPT, but there are now hundreds of large language models that are open source. These are built by people and companies that don't necessarily have a regulatory department to deal with the regulations that are being proposed in some corners.

11:30 a.m.

Founder and Executive Director, AI Governance and Safety Canada

Wyatt Tessari L'Allié

I think there's not necessarily a contradiction between our positions. The purpose of this bill is to make sure that the good types of AI, the beneficial ones, the ones that are harmless, are developed and that Canada's leading in that.

On the timeline piece, look, nobody can predict the future, but the reason so many people think it's short term is that, if you look at the trends, whether it be the amount of compute going into the algorithms, the amount of data going into the algorithms, the efficiency of the algorithms or the amount of money going into this space, all these trends are exponential. Now, the net result is that everything is doing this. I mean, if you remember COVID, for the longest time it was nothing and then all of a sudden it was something. That scenario is entirely possible with AI, where we go from not much AI to machine learning to generative AI to, oops, suddenly human-level AI, relatively quickly. It's very unintuitive but quite possible, and that's what you have to be ready for.

11:35 a.m.

Conservative

Rick Perkins Conservative South Shore—St. Margarets, NS

Thank you.

The minister proposed amendments at the beginning of this process more than 500 days ago—as someone said, almost two years ago—on both the privacy side and the AI side for a flawed bill. We've had a lot of witnesses on the AI side say it's a very flawed bill. Many want us to just defeat it and start all over again.

This bill started with an attempt to basically control what was called a “high-impact system”. The minister's amendments introduce two new levels of control. One is machine learning in the legislation. The other is general-purpose systems, which, to me, seem to cover just about everything that could come in AI, and the amendments give the minister total regulatory power to oversee them, fine them, police them and all of that.

On the schedule attached to the high-impact systems: first, do you agree that almost everything is now covered by the minister's proposed amendment, because they put in general-purpose AI and machine learning as well? Second, do you agree with the definition of “high-impact” that is attached in the schedule to the minister's amendment?

Mr. Bailey, please go ahead first.

11:35 a.m.

Chief Intellectual Property Officer and General Counsel, Scale AI, As an Individual

Todd Bailey

From the perspective of business, there are really two aspects to this regulation I want you to understand, whether it's in the U.S. or the EU or here in Canada. There's the infrastructure piece. We need to put in place an infrastructure with a commissioner and understand who will do what. Then there are the actual rules themselves. As one of the witnesses said here this morning, as things progress quickly, nobody really knows what the rules should be. Nobody has agreed, whether in the U.S. or even the EU for that matter, what the rules should be, but we should definitely be in a hurry to get an infrastructure in place.

On whether or not I specifically agree with the definitions, I'll defer on that and say that I'm not an expert in drafting legislation. What I am an expert in is knowing that Canadian businesses need to be able to read it and understand it, and that, as legislators, if we don't understand what it means.... We shouldn't abandon our tradition of understanding the laws that we're writing.

11:35 a.m.

Founder and Executive Director, AI Governance and Safety Canada

Wyatt Tessari L'Allié

Specifically with regard to the definition of “high-impact”, the minister's amendments are a very significant step in the right direction. Including the general-purpose systems is very good. For the particular schedule, our main recommendation is to include not just use cases but also capabilities. This is because a lot of these capabilities, especially things like autonomous self-improvement or [Technical difficulty—Editor], and I can go into the details of what they are, are dangerous by default. You don't want your system to be making a thousand copies of itself onto somebody else's computer without your being able to control it. Our recommendation would be to expand the schedule to cover both use cases and capabilities.

The second piece is that this bill is specifically focused on making systems available for use in the context of international trade, which will catch a lot of it, but it's not going to catch all of it, specifically open source and also R and D. It's understandable to want to give companies the ability to do research and development without legislation, but the problem is that, for the most advanced systems, once that system is built, it can be hacked, stolen and misused. Accidents can happen at the R and D stage, so R and D has to be included in the bill, as well as government, open source and military.

11:35 a.m.

Conservative

Rick Perkins Conservative South Shore—St. Margarets, NS

Thank you.

I'm going to go on to a more philosophical question. I could start with the Red Cross witnesses, and then if anyone else wants to comment, that would be great.

We've talked about how the western democracies are essentially trying to get together on some sort of coordinated approach to how we legislate and protect against harms, but we're not the only players in the game. We know that China and Russia in particular—maybe Iran—are already spending enormous amounts of money on this.

How do you deal with the issue that they operate from a very different moral compass, I'll call it, than we do in approaching these issues, whether it's about warfare, corporate things, individual privacy and freedom, deepfakes or all of those things that are starting to happen now?

11:35 a.m.

Senior Legal Adviser, International Humanitarian Law, Canadian Red Cross

Catherine Gribbin

I'm going to give the floor to Jonathan first, and then I'm happy to weigh in.

11:35 a.m.

Legal Adviser, International Committee of the Red Cross, Regional Delegation for the United States and Canada, Canadian Red Cross

Jonathan Horowitz

Hi. Thank you very much for that question. I think it's a very important one.

One of the things that Catherine and I have both emphasized—and it goes back to a remark that was just made about a lack of legal frameworks—is that there actually are some legal frameworks that exist at the international level, particularly international humanitarian law, which puts limits on different means and methods of warfare, including ones that have already been created, ones that are emerging and ones that will be created in the future.

The reason I mention this is that there may be questions around interpretation. There may be questions around compliance with international humanitarian law, depending on the context you're dealing with or different actors that are being referred to. What doesn't change is that the rules remain set in stone; they're firm. There are going to be complications, of course, around different interpretations, but there is a baseline. There is a de minimis set of rules that the international community has agreed to, particularly with regard to the use of artificial intelligence in situations of armed conflict, and that legal framework is international humanitarian law. That's one response for your consideration.

Thank you.

11:40 a.m.

Founder and Executive Director, AI Governance and Safety Canada

Wyatt Tessari L'Allié

I'd like to add that, fortunately, China is actually ahead of us in terms of AI regulation. I think part of it is that.... I mean, the western democracies are afraid of losing control of AI systems. The Chinese and the Russians are terrified because they depend on control, so if their system is not doing what they want it to do, if it's spitting out non-party line [Technical difficulty—Editor], they're more concerned about that than we are. Therefore, fortunately, there are areas of common ground in AI regulation.

11:40 a.m.

Conservative

Rick Perkins Conservative South Shore—St. Margarets, NS

Mr. Bailey and any other witness, we have the U.K.—

11:40 a.m.

Liberal

The Chair Liberal Joël Lightbound

Mr. Perkins, we're already three minutes over. It was a fascinating question, so I let it go. However, we can't do more.

Go ahead, Mr. Van Bynen.

11:40 a.m.

Liberal

Tony Van Bynen Liberal Newmarket—Aurora, ON

Thank you very much, Mr. Chair.

I continue to be amazed at the additional information that we're getting through these witnesses.

I certainly appreciate your being here to contribute to a very important question that we need to address.

My first question is for Professor Hadfield. I'm intrigued by your comments about safe harbours and regulatory markets. We've heard from witnesses who've emphasized the importance of having a law now, even if it's imperfect, in order to protect Canadians and to provide certainty for businesses. Others have said that we need to split the bill and start all over with AIDA.

I'd like you to comment on this and particularly on whether you think “high-impact system” could be defined in law in a way that would not become obsolete as technology advances. I'd like you to answer that in the context of your safe harbours and regulatory markets suggestions.

11:40 a.m.

Prof. Gillian Hadfield

I'm glad we're focusing on this part of the approach.

I do think that the effort, which we've also seen in the European Union, to specify the domains in which we are concerned, framed in terms of applications, is unlikely to be robust and stable over time, because there are domains we haven't thought about. The point of a general-purpose system, the GPT-4 type of system, is that it's going to find its way into absolutely everything we're doing. That's point number one, so I think that coming at it from the point of view of saying, “We're only going to carve out these ones,” is not going to be stable.

Let me go to the safe harbours and regulatory markets approach. I'll start with the safe harbours one because the term was used here...and it's one I use a lot. We need to get the infrastructure in place to give us the capacity to act as we learn, and we will learn only over time how things are playing out. Industry needs some certainty, and the idea of a safe harbour is to say, “Let's work through where we think, with these kinds of controls in place, this kind of thing is currently safe,” so that entities that are applying AI, building AI, can reach the certainty they need by saying, “We've done what's in the safe harbour. We're protected for now.” Now, that may need to evolve. There's just no way to get around the fact that this is going to be a domain of uncertainty and it's going to evolve. That's true across a complex economy, but safe harbours are a technique I think we should be exploring.

The regulatory markets approach would then also say, “Okay, let's identify and let's start with those areas where we know there are concerns.” We know a lot about the use of models to discriminate, for example. Can we foster the development of new technologies that will help us track things like that and have government give its stamp of approval to those types of technologies, again, in an iterative, evolving type of way? There's no way to get around the fact that we cannot write a piece of legislation that is going to say, “Here are the things we're concerned about. Here are the precise things we're concerned about, and here's what you can do to completely avoid any liability and concern.” I don't think there's any pathway like that.

11:45 a.m.

Liberal

Tony Van Bynen Liberal Newmarket—Aurora, ON

My next question, then, is for Mr. Bailey. Do you think that high-impact systems can be defined in law in a way that they would not become obsolete as technology progresses?

11:45 a.m.

Chief Intellectual Property Officer and General Counsel, Scale AI, As an Individual

Todd Bailey

I believe so, yes.

One of the concerns I have right now with the definitions that have been proposed is that some of them are a bit broad and some of them are a bit more focused. For example, there's one that relates to health care, and it basically says “anything in health care”. Scale AI has funded 15 projects in health care that have to do with everything from scheduling operating rooms to keeping the lights on at the hospital, and so on. I don't think that is what the drafters had in mind when they were talking about high-impact systems. Take number two, for example, relating to providing a service: 100% of what's available to us technologically is a service of some sort, so are we now making the entire ecosystem “high-impact”?

There are knobs and dials that need to be adjusted on this, but I do believe there needs to be a balance between what's in the regulation and what's in the law, just from a point of view of having Canadian businesses able to understand what they're supposed to be doing.

11:45 a.m.

Liberal

Tony Van Bynen Liberal Newmarket—Aurora, ON

I have just one quick question, and I'll come back to you. You mentioned that we need infrastructure. Do you see that infrastructure as an independent commission, or do you see it as part of the government?

11:45 a.m.

Chief Intellectual Property Officer and General Counsel, Scale AI, As an Individual

Todd Bailey

From my perspective—again, I'm not a professor or an expert in these things—the role that I see the AI commissioner playing is actually.... AI is not a completely new thing. It affects workers, and we have departments of the government that deal with that. It affects privacy, and we have commissioners who deal with that.

I think one opportunity for an AI commissioner is as an expert within the government on what AI technologies there are and what issues they present to businesses and citizens, and as a bit of a coordinator. If you look to the U.S., in President Biden's executive order he's ordering many different departments to go off and do work, but there's no coordination between them. I do see a role. It's not necessarily a mirror of the Privacy Commissioner's role, though.