Evidence of meeting #108 for Industry, Science and Technology in the 44th Parliament, 1st Session.

Witnesses

Ignacio Cofone  Canada Research Chair in AI Law and Data Governance, McGill University, As an Individual
Catherine Régis  Full Professor, Université de Montréal, As an Individual
Elissa Strome  Executive Director, Pan-Canadian AI Strategy, Canadian Institute for Advanced Research
Yoshua Bengio  Scientific Director, Mila - Quebec Artificial Intelligence Institute

11:50 a.m.

NDP

Brian Masse NDP Windsor West, ON

Thank you.

Go ahead, Dr. Bengio.

11:50 a.m.

Scientific Director, Mila - Quebec Artificial Intelligence Institute

Yoshua Bengio

I completely agree with everything my colleagues said. I'll just add this notion of creating a level playing field.

If you have corporations that really want to do good and there's no mandatory regulation, then they're forced to do as badly as the worst guy in the class. What you want is the opposite: you want best in class. Without regulation, we get into the worst-in-class scenario, where the organizations that are less responsible end up winning.

11:50 a.m.

NDP

Brian Masse NDP Windsor West, ON

Thank you for that.

I'll go to Mr. Cofone, but first, I guess, the trouble we're in right now is that we have this voluntary code out there already. Companies are acting and deliberating right now, making decisions, some in one direction and some in another, until they're brought under regulatory powers. I think the ship has sailed, to some degree, in terms of where this can go. We're left with this bill and all the warts it has on a number of different issues.

One thing that's a challenge—and maybe Mr. Cofone can highlight a little bit of this with his governance background—is that I met with ACTRA, the actors' guild, and a lot of their concerns on this issue have to be dealt with through the Copyright Act. If we don't somehow deal with it in this bill, though, then we leave a gaping hole not just for abuse of the actors—and that includes children—and their welfare, but also a blind spot for how the public can be manipulated in everything from consumer society to politics to a whole slew of things.

What do we do? Do you have any suggestions? How do we fix those components that we're not even...? It's a separate act.

11:55 a.m.

Canada Research Chair in AI Law and Data Governance, McGill University, As an Individual

Ignacio Cofone

Yes. Perhaps I can quickly add something to your earlier question, beyond agreeing with the three prior responses.

The environment, which you brought up, is a great example. In environmental law, years ago, we thought that regulating was challenging because we mistakenly thought that the costs of regulating were local while the harms were global. Not regulating meant developing the industry at home while the harms were spread globally.

With AI, it's the same. We think sometimes the harms are global and the costs of regulating are local, but that is not the case. Many of the harms of AI are local. That makes it urgent for Canada to pass a regulation such as this one, a regulation that protects its citizens while it fosters industry.

On the Copyright Act, it's a challenging question. As Professor Bengio pointed out a bit earlier, AI is not just one technology. Technologies do one thing—self-driving cars drive and cameras film—but AI is a family of methods that can do anything. Regulating AI is not about changing one variable or the other; AI will actually change or affect all of the law. We will have to reform several statutes.

What is being discussed today is an accountability framework plus a privacy law, because that's the one that's most intimately affected by AI. I do not think we should have the illusion that changing this will account for all AI and for all the effects of AI, or think that we should stop it because it doesn't capture everything. It cannot. I think it is worth discussing an accountability framework to account for harm and bias and it is worth discussing the privacy change to account for AI. It is also possibly warranted to make a change in the Copyright Act to account for generative AI and the new challenges it brings for copyright.

11:55 a.m.

NDP

Brian Masse NDP Windsor West, ON

Do I have any time left, Mr. Chair?

11:55 a.m.

Liberal

The Chair Liberal Joël Lightbound

You can go ahead, Brian.

11:55 a.m.

NDP

Brian Masse NDP Windsor West, ON

Okay.

Really quickly, maybe I could get a yes-or-no answer, or whether it's a good idea or a bad idea. In the long term, what if we eventually got to a joint House and Senate committee that oversaw AI on a regular basis, similar to what's done for defence? Would that be a good thing or a bad thing? It would cover all those bases across other jurisdictions, rather than just the industry committee, if we had both houses meet and oversee artificial intelligence in the future.

I know it's a hard one—yes or no—but I don't have much time.

Could we go in reverse order? Thank you.

11:55 a.m.

Prof. Catherine Régis

Yes.

11:55 a.m.

Scientific Director, Mila - Quebec Artificial Intelligence Institute

Yoshua Bengio

I can't answer. I'm not enough of a legal scholar.

11:55 a.m.

NDP

Brian Masse NDP Windsor West, ON

That's fine. Fair enough. It's just an idea.

Would you comment, Ms. Strome?

11:55 a.m.

Executive Director, Pan-Canadian AI Strategy, Canadian Institute for Advanced Research

Dr. Elissa Strome

I think that would be helpful.

11:55 a.m.

NDP

Brian Masse NDP Windsor West, ON

Okay. Thank you.

Thank you, Mr. Chair.

11:55 a.m.

Liberal

The Chair Liberal Joël Lightbound

Thank you very much.

Mr. Généreux, the floor is now yours.

11:55 a.m.

Conservative

Bernard Généreux Conservative Montmagny—L'Islet—Kamouraska—Rivière-du-Loup, QC

Thank you very much, Mr. Chair.

I'd like to thank all the witnesses. Today's discussions are very interesting.

I'm not necessarily speaking to anyone in particular, but rather to all the witnesses.

Bad actors, whether they be terrorists, scammers or thieves, could misuse AI. I think that's one of Mr. Bengio's concerns. If we were to pass Bill C‑27 tomorrow morning, would that prevent such individuals from doing so?

To follow up on the question from my Bloc Québécois colleague earlier, it seems clear to me that, even in the case of a recorded message intended to scam someone, the scammer will not specify that the message was created using AI.

Do you really believe that Bill C‑27 will change things or truly make Quebeckers and Canadians safer when it comes to AI?

Noon

Scientific Director, Mila - Quebec Artificial Intelligence Institute

Yoshua Bengio

I think so, yes. What it will do, for example, is force legitimate Canadian companies to protect the AI systems they've developed from falling into the hands of criminals. Obviously, this won't prevent these criminals from using systems designed elsewhere, which is why we have to work on international treaties.

We already have to work with our neighbour to the south to minimize those risks. What the Americans are asking companies to do today includes this protection. I think that if we want to align ourselves with the United States on this issue to prevent very powerful systems from falling into the wrong hands, we should at least provide the same protection as they do and work internationally to expand it.

In addition, sending the signal that users must be able to distinguish between what is generated by artificial intelligence and what is not will encourage companies to find technical solutions. For example, one of the things I believe in is that the companies that make cameras and recording devices should embed an encrypted signature to distinguish what is generated by artificial intelligence from what is not.

For companies to move in that direction, they need legislation telling them to do so as much as possible.
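
What Dr. Bengio describes is, in essence, a content-provenance scheme: capture devices sign what they record, so that anything without a valid signature can be treated as possibly AI-generated or altered. Neither Bill C‑27 nor the testimony prescribes a mechanism; the following is only a minimal Python sketch of the idea, using Ed25519 signatures from the `cryptography` package, with hypothetical function names.

```python
# Hypothetical illustration of device-side content signing.
# Assumes the `cryptography` package (pip install cryptography).
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# A camera maker would embed a private key in the device (in practice,
# inside secure hardware) and publish or certify the matching public key.
device_key = Ed25519PrivateKey.generate()
device_public_key = device_key.public_key()

def sign_capture(image_bytes: bytes) -> bytes:
    """Sign the raw capture so its origin can be checked later."""
    return device_key.sign(image_bytes)

def is_authentic_capture(image_bytes: bytes, signature: bytes) -> bool:
    """Verify a signature; failure suggests edited or machine-generated content."""
    try:
        device_public_key.verify(signature, image_bytes)
        return True
    except InvalidSignature:
        return False

if __name__ == "__main__":
    photo = b"raw sensor data from the camera"      # placeholder content
    sig = sign_capture(photo)
    print(is_authentic_capture(photo, sig))          # True
    print(is_authentic_capture(b"tampered", sig))    # False
```

Real-world provenance efforts, such as the C2PA standard, attach signed metadata rather than bare signatures, but the underlying idea is the same.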

Noon

Conservative

Bernard Généreux Conservative Montmagny—L'Islet—Kamouraska—Rivière-du-Loup, QC

Will Bill C‑27 be as effective as, or equivalent to, the U.S. presidential executive order currently in force?

Do you think the Americans will then pass legislation that will go further than this current presidential executive order?

The EU has already been much quicker to adopt measures than we've been. What is the intersection between Bill C‑27 and the bill that's about to be passed in Europe?

February 5th, 2024 / noon

Scientific Director, Mila - Quebec Artificial Intelligence Institute

Yoshua Bengio

I'll let my colleagues answer some of those questions. However, I would like to clarify something I proposed in what I said and wrote. It has to do with setting a criterion related to the size of the systems in terms of computing power, with the current threshold above which a system would have to be registered being 10^26 operations. That would be the same as in the United States, and it would bring us up to the same level of oversight as the Americans.

This criterion isn't currently set out in Bill C‑27. I would suggest that we adopt that as a starting point, but then allow the regulator to look at the science and misuse to adjust the criteria for what is a potentially dangerous and high‑impact system. We can start right away with the same thing as in the United States.

In Europe, they've adopted more or less the same system, which is also based on computing power. Right now, it's a simple, agreed‑upon criterion that we can use to distinguish between potentially risky systems in the high‑impact category and the 99.9% of AI systems that pose no national security risk.
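
For a sense of scale on the 10^26 figure, here is an illustration only; the rule of thumb and the example numbers below are assumptions, not anything in the bill or the testimony. Training compute is often approximated as roughly six operations per model parameter per training token, which makes the threshold easy to sketch in code.

```python
# Rough, illustrative check of a training run against a compute threshold
# like the 10^26-operation figure discussed above. The 6 * params * tokens
# approximation is a common rule of thumb, not something set by Bill C-27.

THRESHOLD_OPERATIONS = 1e26

def estimated_training_operations(parameters: float, training_tokens: float) -> float:
    """Approximate total training compute in floating-point operations."""
    return 6.0 * parameters * training_tokens

def exceeds_threshold(parameters: float, training_tokens: float) -> bool:
    """Would this (hypothetical) training run cross the registration threshold?"""
    return estimated_training_operations(parameters, training_tokens) >= THRESHOLD_OPERATIONS

# Hypothetical example: a 100-billion-parameter model trained on 2 trillion tokens.
ops = estimated_training_operations(100e9, 2e12)
print(f"{ops:.2e} operations -> exceeds threshold: {exceeds_threshold(100e9, 2e12)}")
# ~1.2e24 operations, well below 1e26 in this illustration.
```

Under that approximation, even a 100-billion-parameter model trained on two trillion tokens comes in around 10^24 operations, consistent with the point that the vast majority of systems would fall well below a high-impact threshold.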

Noon

Conservative

Bernard Généreux Conservative Montmagny—L'Islet—Kamouraska—Rivière-du-Loup, QC

Professor Régis, I'd like to hear your opinion on how our bill compares with the European legislation.

Noon

Prof. Catherine Régis

I would like to raise a few small points. There was a question about whether the Canadian legislation will be sufficient. First, it will certainly help, but it won't be enough, given the other orders of government that must be taken into account. The provinces have a role to play in this regard. In fact, as we speak, Quebec is launching its recommendations report on regulating artificial intelligence, entitled “Prêt pour l'IA”. The Government of Quebec has mandated the Conseil de l'innovation du Québec to propose regulatory options, so we have to consider that the Canadian legislation will be part of a broader set of initiatives that will help strengthen the safeguards and protect us well.

As for the United States, it's difficult to predict which way it will go next. However, President Biden's executive order was a signal of a magnitude few expected. It's a good move, and one to watch.

Your question touches a bit on the really important issue of interoperability. How will Canada align with the European Union, the United States and others?

As for the European case, the final text of the legislation was published last week. Since it's 300 pages long, I don't have all the details; however, I will tell you that we certainly have to think about it, so as not to penalize our companies. In other words, we really need to know how our legislation and Canadians are going to align with it, to a certain extent.

Furthermore, one of the questions I have right off the bat is this. European legislation is more focused on high‑risk AI systems, and their legal framework deals more with risk, while ours deals more with impact. How can the two really work together? This is something that needs more thought.

12:05 p.m.

Conservative

Bernard Généreux Conservative Montmagny—L'Islet—Kamouraska—Rivière-du-Loup, QC

I'm joking, but you could ask ChatGPT to summarize these 300 pages for you.

12:05 p.m.

Prof. Catherine Régis

That would be hilarious.

12:05 p.m.

Scientific Director, Mila - Quebec Artificial Intelligence Institute

Yoshua Bengio

I would like to add one thing. Having principles‑based legislation protects us from upcoming changes and provides the necessary consistency. It gives regulators the chance to adapt the key details of our regulations to our partners' regulations.

12:05 p.m.

Liberal

The Chair Liberal Joël Lightbound

Thank you.

Thank you, Mr. Généreux.

Mr. Van Bynen, you have the floor.

12:05 p.m.

Liberal

Tony Van Bynen Liberal Newmarket—Aurora, ON

Thank you very much, Mr. Chair.

We've had a number of witnesses before us with a very wide range of perspectives, some of whom have told us to rip up the bill and start all over again. At the same time, we've also heard that the genie is out of the bottle. It's operating almost like the Wild West out there.

My question is for Ms. Strome. In November 2023, CIFAR published “Regulatory Transformation in the Age of AI”. The report summary cautions that current efforts to regulate AI will be doomed if they ignore a crucial aspect: the transformative impact of AI on the regulatory processes themselves.

Can you go over the findings of this report in a little more detail?