Evidence of meeting #108 for Industry, Science and Technology in the 44th Parliament, 1st Session. (The original version is on Parliament's site, as are the minutes.)

A recording is available from Parliament.

Witnesses

Ignacio Cofone, Canada Research Chair in AI Law and Data Governance, McGill University, As an Individual
Catherine Régis, Full Professor, Université de Montréal, As an Individual
Elissa Strome, Executive Director, Pan-Canadian AI Strategy, Canadian Institute for Advanced Research
Yoshua Bengio, Scientific Director, Mila - Quebec Artificial Intelligence Institute

11:35 a.m.

Scientific Director, Mila - Quebec Artificial Intelligence Institute

Yoshua Bengio

Maybe the shortest-term concern, and a priority, for example, for the experts consulted by the World Economic Forum just a few weeks ago, is disinformation. An example is the current use of AI deepfakes to imitate people: imitating their voices, rendering their movements in video, and interacting with people through text and dialogue in ways that can fool social media users and change their minds on political questions.

There's real concern about the use of AI in politically oriented ways that go against the principles of our democracy. That's a short-term thing.

The one that I would say is next, maybe a year or two later, is the threat of these advanced AI systems being used for cyber-attacks. These systems have been making rapid progress at programming in recent years, and that progress is expected to continue faster than any other ability, because we can generate an infinite amount of data for it, just as in playing the game of Go. When these systems become strong enough to defeat the current cyber-defences of our industrial digital infrastructure, we are in trouble, especially if they fall into the wrong hands. We need to secure those systems. One of the things the Biden executive order insisted on is that these large systems be secured to minimize those risks.

Then there are other risks people talk about, such as helping bad actors develop new weapons or acquire expertise they wouldn't otherwise have. All of these risks call for a law as quickly as possible to make sure we minimize them.

11:40 a.m.

Liberal

Ryan Turnbull Liberal Whitby, ON

Thank you for that.

I also just wanted to mention something. I know you're aware as a signatory that our government developed a voluntary code of conduct for advanced generative artificial intelligence systems. I wanted to ask how AIDA builds on that voluntary code. Do you see the two as complementary, with the voluntary code preceding the bill and the bill actually adding on to that and furthering this mission of ensuring that we have a regulatory environment that provides some certainty?

Can you speak to that, Mr. Bengio?

11:40 a.m.

Scientific Director, Mila - Quebec Artificial Intelligence Institute

Yoshua Bengio

Absolutely. You are exactly right.

Voluntary codes are useful to get off the ground quickly, but there's no guarantee that companies will follow that code. Also, the voluntary code is very vague. We need to have more precision about criteria for what is acceptable and what is not. Companies, I think, need to have that.

We've seen that some companies have even declared publicly in the U.S. that they wouldn't follow the Biden voluntary code, so I think we have no choice. We have to make sure that there's a level playing field. Otherwise, we're favouring the corporations that don't go by the voluntary code. For them it means less expense [Technical difficulty—Editor] with the public. We really need to have regulations and not just [Technical difficulty—Editor].

11:40 a.m.

Liberal

Ryan Turnbull Liberal Whitby, ON

Thank you. I think I got that last part. You got a little choppy.

Is my time up, Chair?

11:40 a.m.

Liberal

The Chair Liberal Joël Lightbound

Yes.

11:40 a.m.

Liberal

Ryan Turnbull Liberal Whitby, ON

Thank you very much.

11:40 a.m.

Liberal

The Chair Liberal Joël Lightbound

Thank you, Mr. Turnbull.

Over to you, Mr. Garon.

11:40 a.m.

Bloc

Jean-Denis Garon Bloc Mirabel, QC

Thank you, Mr. Chair.

Thank you to the witnesses for being with us.

Professor Bengio, you talked about the imminent threat that disinformation poses to democracy. Deepfakes are now more and more common. You are appearing by video conference, so under the current regulatory framework, what assurances do I have that it is actually you taking part in today's meeting?

11:40 a.m.

Scientific Director, Mila - Quebec Artificial Intelligence Institute

Yoshua Bengio

That's a good question.

We need rules to prevent exactly that. For example, computer systems such as Zoom and social media platforms should have to state clearly whether any video content, audio content or text is computer-generated, in other words by AI, or whether it is really coming from a human. We need laws to protect the public from that sort of thing.

Companies should also be incentivized to develop technology, so we are better able to distinguish between what is real and what is fake.

11:40 a.m.

Bloc

Jean-Denis Garon Bloc Mirabel, QC

Recently, we've heard about scams that use AI to imitate people's voices and dupe a grandmother or grandfather. You'll have to forgive me if I don't use the right terminology. As I understand it, you are saying that the current regulatory framework neither requires companies nor incentivizes them—because there is a cost attached—to identify when something is fake.

Does Bill C-27, in its current form, remedy that? Does it cover everything it should, or does it need to be strengthened?

February 5th, 2024 / 11:40 a.m.

Scientific Director, Mila - Quebec Artificial Intelligence Institute

Yoshua Bengio

I think some aspects of the bill could do with being strengthened, but my colleague Ms. Régis could probably answer that better than I could.

11:40 a.m.

Prof. Catherine Régis

If I understand correctly, the amendments recently proposed by the minister reflect a desire to have AI-generated information identified for the public's sake. Yes, I think that is an important element to prevent confusion and an overall climate of distrust in society. I think it's definitely a good idea to pursue that legislatively.

11:40 a.m.

Bloc

Jean-Denis Garon Bloc Mirabel, QC

Thank you, Ms. Régis.

Professor Bengio, in your opening statement, you talked about provisions that could be implemented right away, given the urgent need for action. You described something along the lines of a registry, whereby large generative AI systems and models would be registered with the government, along with a risk evaluation.

Basically, you're saying that we should do the same thing we do for drugs: before a drug is allowed on the market, the manufacturer has to show that it is safe and that the benefits outweigh the risks.

Are you likening the challenge with AI systems to a public health issue, thereby warranting that companies submit substantial evidence about their products to a government agency?

11:45 a.m.

Scientific Director, Mila - Quebec Artificial Intelligence Institute

Yoshua Bengio

Yes, that's right. It actually works that way in many sectors of our society, not just for drugs. Think of when a bridge or train is built, or when a new technique is developed to process meat. The public has to be protected so that things don't go awry. Companies have to be transparent and demonstrate that their products will not cause harm.

To date, computer technology has escaped all that—the thinking was that it wouldn't have any significant impacts on society. Now we are at a point where computer technology, AI in particular, is about to completely transform society. Transformation can be good or bad, so we need a framework.

11:45 a.m.

Bloc

Jean-Denis Garon Bloc Mirabel, QC

Professor Bengio, my next question is for both you and Professor Régis.

Now and again, we've been told that the industry is able to regulate itself. We've also been told that the voluntary approach can work. Personally, I'm not inclined to put a lot of faith in that approach. What do you make of the industry's ability to regulate itself?

Here's some food for thought to help get you started. Isn't self-regulation an incentive for illicit actors to free-ride on everyone else, all those who are self-regulating, and thus reap the benefits without bearing the costs themselves?

What do you make of the voluntary approach?

11:45 a.m.

Scientific Director, Mila - Quebec Artificial Intelligence Institute

Yoshua Bengio

As I've said in response to previous questions, I think self-regulation can be a good intermediate step because of how quickly it can be put in place. Companies can work in coordination to establish certain standards. That's the upside of self-regulation.

However, there are going to be bad actors, and there will be something of an incentive to cut corners if we don't have mandatory rules that are the same across the board.

11:45 a.m.

Bloc

Jean-Denis Garon Bloc Mirabel, QC

Professor Régis, is Canada a big enough player to regulate the industry adequately? A lot of Canadians think Canada is a major G7 country, but the reality is that Canada is a relatively small economy. Are we powerful enough to wield any influence?

11:45 a.m.

Prof. Catherine Régis

Influence is an issue, but I'd like to briefly comment on the self-regulation aspect, if I may. I think it's important. In my view, self-regulation clearly isn't adequate. There's a pretty strong consensus in the international community that opting strictly for self-regulation isn't enough. That means legislation has its place: it imposes obligations and formal accountability measures on companies.

That said, it's important to recognize that this legislation, Bill C-27, is one tool in the important tool box we need to ensure the responsible deployment of AI. It's not the only answer. The law is important, but highly responsive ethical standards are also necessary. The tool box should include technical defensive AI, where you have AI versus AI. International standards as well as business standards need to be established. Coming up with a comprehensive strategy is really key. This bill won't fix everything, but it is essential. That's my answer to your first question.

Sorry, could you please remind me what your second question was?

11:45 a.m.

Bloc

Jean-Denis Garon Bloc Mirabel, QC

Can Canada have any real clout, since it doesn't have a huge economy or a strong presence in the AI world?

11:45 a.m.

Prof. Catherine Régis

While Canada obviously doesn't have as much clout as China or the United States in AI development, it is still an important player for a number of reasons. First, Canada is known for its strong research capacity. Canada has been involved in various initiatives, including the creation of the Global Partnership on AI. That makes Canada an actor that wants to take a stand and whose voice in this space is still important.

Nevertheless, if Canada doesn't want to fall behind, it needs to be true to its vision and values by taking very clear action at the national level. That will give Canada real credibility in this space.

11:45 a.m.

Liberal

The Chair Liberal Joël Lightbound

Thank you very much, Professor Régis.

Mr. Masse, you have the floor.

11:45 a.m.

NDP

Brian Masse NDP Windsor West, ON

Thank you, Mr. Chair.

Thanks to our witnesses.

There are a couple of things from this committee's past work that shed light on where we are right now with the voluntary code. One is that it used to be legal in Canada for businesses to write off fines and penalties from environmental or anti-consumer court cases; they would actually get a tax deduction of up to 50% of the fines and penalties. Drug companies were fined for being misleading, and companies were fined for environmental damage they had done.

It led to this imbalance that made it actually an incentive, a business-related expense, to go ahead with bad practices that affected people and the environment, because it actually paid off for them. It created an imbalance for innovation and so forth.

The other one is my work on enacting the right to repair, which passed through this committee and went to the Senate. We ended up with a voluntary agreement in the auto sector; we basically said that we got a field goal instead of a touchdown. The issue has now re-emerged, because some of the industry will follow the voluntary agreement and some won't. Some wouldn't even sign on to the voluntary agreement, including Tesla, until recently. There are still major issues, and now they're back to lobbying here on the Hill. We knew about this vulnerability 10 years ago, when we started: as the issue moved toward electronics and the sharing of information and data, things changed again, and there was nothing in the agreement to address it.

My question is for Ms. Strome, Ms. Régis and Mr. Bengio.

With this voluntary agreement, have we created a potential system right now whereby good actors will come to the table and follow a voluntary agreement while bad actors might actually use it as an opportunity to extend their business plans and knock out competition? I've seen that happen in those two examples that we've had there.

I'll start with you, Ms. Strome, because you haven't been on yet. Then we can hear from Ms. Régis and Mr. Bengio, if we can go in that order, please.

11:50 a.m.

Executive Director, Pan-Canadian AI Strategy, Canadian Institute for Advanced Research

Dr. Elissa Strome

I actually think that the voluntary code was an important and critical first step towards regulating this sector. It was a way to move things forward, and it was also a way to open the conversation and the discourse about the need for responsible AI practices, methodologies and approaches as we innovate in this sector. It was a very important first step, but it can't be the last step.

As you identify and as others have recognized, voluntary codes of conduct and voluntary regulations are just that: voluntary. We need much firmer and clearer rules, regulations and guidelines setting out our expectations for how the technology is developed, deployed and monitored, and how its impact is assessed so that we understand what those impacts may be.

11:50 a.m.

NDP

Brian Masse NDP Windsor West, ON

Thank you.

Go ahead, Ms. Régis.

11:50 a.m.

Prof. Catherine Régis

I think it goes back to my previous point. Elissa was very clear, and I agree: self-regulation clearly is not enough. It is all too easy to avoid complying with these voluntary norms. I'm a law professor, so for me it makes sense, for sure, to have binding regulations in that space, especially since there are a lot of power dynamics and economic interests at stake.

One thing I find very important about the proposed bill, and that I like about it, is its focus on ex ante measures. We've been talking about what happens if something goes wrong and how Canadians will suffer the consequences. Let's not wait for too many of those consequences; let's focus on requiring ex ante measures, so that before anything important is launched on the market, companies do their due diligence and we have access to it. We make sure it's transparent and that there are accountability mechanisms in place so those consequences are avoided. We force that.