Evidence of meeting #101 for Industry, Science and Technology in the 44th Parliament, 1st Session. (The original version is on Parliament’s site, as are the minutes.)

A recording is available from Parliament.

Also speaking

Erica Ifill  Journalist and Founder of Podcast, Not In My Colour, As an Individual
Adrian Schauer  Founder and Chief Executive Officer, AlayaCare
Jérémie Harris  Co-Founder, Gladstone AI
Jennifer Quaid  Associate Professor and Vice-Dean Research, Civil Law Section, Faculty of Law, University of Ottawa, As an Individual
Céline Castets-Renard  Full Law Professor, Civil Law Faculty, University of Ottawa, As an Individual
Jean-François Gagné  AI Strategic Advisor, As an Individual
George E. Lafond  Strategic Development Advisor, As an Individual
Stephen Kukucha  Chief Executive Officer, CERO Technologies
Guy Ouimet  Engineer, Sustainable Development Technology Canada

4:35 p.m.

Associate Professor and Vice-Dean Research, Civil Law Section, Faculty of Law, University of Ottawa, As an Individual

Dr. Jennifer Quaid

Yes, but it's a voluntary agreement.

I don't want to use up your time, but if you want to talk about corporate compliance and how the voluntary rules work in comparison to the binding rules, I could go on forever.

4:35 p.m.

Liberal

The Chair Liberal Joël Lightbound

Yes, that's not surprising.

Thank you very much, Ms. Quaid.

I'd like to inform the members of the committee that the amendments on the portion of the bill pertaining to artificial intelligence were released last week. They are now accessible and have been distributed.

Mr. Turnbull, you have the floor for six minutes.

4:35 p.m.

Liberal

Ryan Turnbull Liberal Whitby, ON

Thank you, Chair.

Thanks to all the witnesses for being here today. This is a very challenging topic for even the smartest of legislators. I really value the expertise that all of you bring to this conversation. It's really helping inform our discussions.

Professor Quaid, you said in your opening remarks that delay is “not an option”. You used the words “vital” and “urgent”. It sort of sounds like right now in Canada, AI development and the regulations around it are a bit of a Wild West, where anything goes. Can you speak to that urgency and stress it a little bit more?

4:35 p.m.

Associate Professor and Vice-Dean Research, Civil Law Section, Faculty of Law, University of Ottawa, As an Individual

Dr. Jennifer Quaid

I think the urgency comes from the fact that for a long time this has basically been an unregulated sphere. Perhaps everyone was a little bit asleep to how quickly things were evolving. I think now we are late. I mean, everyone is late.

I can't speak to the specifics of how the technology is developing. I am not a scientist of artificial intelligence. But I do know a thing or two about law and about business law, and I can tell you that if you want businesses to modulate their behaviour as a function of the public interest, you need legislation. The profit motive and the structure of our corporate law are extremely permissive. Businesses will not make the choices you want them to make. We have to make those choices, or rather, you, as the representatives of Canadians, have to make those choices. You decide what is most important, and you put that down in law.

That doesn't mean we don't fine-tune. That doesn't mean we don't adapt. But we have to start laying some rules down, because right now what's driving the choices is self-interest, and mostly that's economic.

4:40 p.m.

Liberal

Ryan Turnbull Liberal Whitby, ON

Thanks. Sometimes we talk about perfection being the enemy of the good. It seems like this is one of those situations where we need to get legislation passed in order to have something, which, of course, as Mr. Harris has pointed out, with the rapid pace of the evolution of AI development, we're probably going to need to continue to update.

Would you agree with that, Ms. Quaid?

4:40 p.m.

Associate Professor and Vice-Dean Research, Civil Law Section, Faculty of Law, University of Ottawa, As an Individual

Dr. Jennifer Quaid

Yes. I would say there are some examples of other sectors that evolved very rapidly and that we have lots of experience regulating. We don't need to reinvent the wheel. We do need to be creative. We need to be more agile. We need to be prepared to bring new elements into the regulatory process.

I think there are lots of smart people who have great ideas to help you with that. I don't think we can start by saying, oh, it's new and we don't know what to do. I think the time for that has long gone. We need to move forward. It's not perfect. I will never say that it's perfect—no law is perfect—but it is perfectible or improvable. We need to start somewhere.

4:40 p.m.

Liberal

Ryan Turnbull Liberal Whitby, ON

Thank you.

Ms. Quaid, I'm going to you again.

You mentioned something called “structural immunity” as being a risk in your opening remarks, I think.

I understand the concept itself, but I'd like to have an example of where that might be a real risk for us, in terms of our work moving forward, and how we might be able to avoid that.

4:40 p.m.

Associate Professor and Vice-Dean Research, Civil Law Section, Faculty of Law, University of Ottawa, As an Individual

Dr. Jennifer Quaid

I'm coming at this with my corporate criminal liability hat, and this statute is primarily criminal law. That was one of the astonishing things when I first read this bill. Relying on criminal enforcement comes with some costs in terms of how you prepare evidence and put things together.

What I'm concerned about is when we don't have transparency about who's involved with what decisions in relation to this technology. I can't speak to how it's actually done. I think the experts here can say something about that. What we need to insist on is transparency about who does what, because you cannot convict a corporation or an organization in this country without knowing who did what, what their status is and what their decision-making power is in the organization. I will direct you to section 2 of the Criminal Code, if you want to read it.

Even in the case of regulatory liability, where an employee can engage the liability of the organization, you don't need a status-based association, such as the person being a senior officer, but you still need to know who did what; otherwise you have no evidence. I think it's really important to make sure we create a regime that forces the information out so that we can then assess.

That doesn't mean we're going to convict all the time or that we're going to prosecute all the time, but if everything is hidden, then this is just window dressing. You will never, ever get a prosecution, or even administrative liability, in my view.

4:40 p.m.

Liberal

Ryan Turnbull Liberal Whitby, ON

Thank you for that. It's very helpful testimony.

Mr. Harris, I want to ask you a question similar to Mr. Généreux's.

I similarly had the experience of listening to you and feeling like I was in a horror movie, a sci-fi novel or some intersection of the two. I know that you're bringing up these risks and potential harms as a very real thing, so I don't want to take that lightly, but it is quite scary to hear.

I want to ask you a bit of an ethical or philosophical question. You had talked about mitigating the risks. You had talked about a blanket ban on, or explicitly forbidding, certain types of AI or advanced AI systems. One question that occurs to me when we're dealing with, essentially, advanced AI, is whether it is surpassing human intelligence. I think that's what I'm hearing. You talked about the superhuman and the power-seeking behaviours as being a real risk.

I'm interested in how we develop an ethical and/or legal framework. I think that is a core challenge in this work, which I'm grappling with. A lot of our ethical and our legal concepts rely on things like reasonably foreseeable futures. They rely on concepts of duty, etc., most of which rely on humans' ability to look at what the outcomes might be, given our past experience.

You talked about how some of our national security assumptions had been invalidated. Are some of our ethical assumptions and our legal assumptions being invalidated by the advancement of AI? How do human beings create a system or a set of guidelines for something that is actually beyond our intelligence?

It's a tough question.

4:45 p.m.

Co-Founder, Gladstone AI

Jérémie Harris

I think those are excellent questions.

I think, fortunately, we're not without tools for dealing with them. To piggyback off the testimony that Jennifer just gave, I think it's actually quite right to ask, “How can we massage this into a form that fits within our legal frameworks?” We're not going to overhaul the Constitution tomorrow. It's not going to happen.

One thing we can do is recognize that we can't predict the capabilities of systems at the next level of scale, so safety by design would seem to imply holding off until we can. We're not talking about a blanket ban. We're saying, “until we can”, let's incentivize the private sector to make fundamental advances in the science of AI and to give us a scientific theory for predicting the emergence of those dangerous capabilities.

I'd also say we can draw inspiration from the White House executive order that came out recently. One of the key things they do—again, to piggyback off this idea, like sunlight is the best disinfectant, to bring this all out to the fore so that we can evaluate what's going on—is have a reporting requirement in the executive order. If you train an AI system that uses above a certain amount of computational power in the training process, you need to report the results of various audits you've performed, various evaluations. Those evaluations have to do with bioweapon design capability, chemical synthesis ability and self-replication ability. That's all baked into the executive order.

I could see something like that here: a tiered process that essentially mirrors what we see in the EO, based on computational processing power thresholds. Above this line, you have to do this; above that line, you have to do that. It's that sort of thing.

4:45 p.m.

Liberal

Ryan Turnbull Liberal Whitby, ON

That's very helpful.

How much time do I have?

4:45 p.m.

Liberal

The Chair Liberal Joël Lightbound

It's an interesting line of questioning, Mr. Turnbull. You can continue.

4:45 p.m.

Liberal

Ryan Turnbull Liberal Whitby, ON

Thank you. You're very generous.

Is it really the case that computational power is the key predictor of how an advanced AI system will evolve, and that it therefore correlates with the level of risk?

I'm reluctant to think it's that simple. Perhaps that's what you said. Am I accurate?

4:45 p.m.

Co-Founder, Gladstone AI

Jérémie Harris

No, you're quite right to be reluctant to think it's that simple. It's just the single best indicator we have right now. A couple of things can factor into this, too. You can make breakthroughs at the theoretical level, the algorithmic level, that effectively mean you can squeeze more juice out of the lemon: for the same amount of computational power, you can do more. That's precisely why you want to offload to regulators the job of determining what that computational power threshold is. Don't enshrine it in law, because it will change quickly. That's one piece.

To the question of what other capabilities might emerge from these systems, it also depends on the training data. If you train these systems on bio-sequence data, they will learn, with less computational power, how to make a bioweapon. That's enshrined in the executive order as well: there's a lower threshold for those sorts of technologies.

4:45 p.m.

Liberal

Ryan Turnbull Liberal Whitby, ON

Thanks.

Mr. Chair, I can end there, but I see that Mr. Gagné wants to make a comment. Maybe we could allow him that.

4:45 p.m.

Liberal

The Chair Liberal Joël Lightbound

Of course.

Go ahead, Mr. Gagné.

4:45 p.m.

AI Strategic Advisor, As an Individual

Jean-François Gagné

I have a quick reaction here.

The latest progress in science has demonstrated techniques where you could invest a significant amount of money in compute at inference time—that's not training—to have models of a certain size perform as if they were 10 times bigger. It's never that simple.

Yes, model size is a proxy, but with sufficient money or sufficient compute there are ways to go further than model size would suggest. There are ways to get around that threshold and still get performance out of these models. There are also ways to specialize smaller models.

Again, I think it's a use case-based approach that can potentially offer an opportunity to mitigate the risks. I think the use cases mentioned are absolutely relevant, but the triggers are never that simple.

4:45 p.m.

Liberal

The Chair Liberal Joël Lightbound

Thank you very much.

Mr. Lemire, you have the floor.

4:45 p.m.

Bloc

Sébastien Lemire Bloc Abitibi—Témiscamingue, QC

Thank you, Mr. Chair.

Thank you to all the witnesses.

Mr. Harris, I remember when you came to tell us, as legislators, about the risks of things going wrong with artificial intelligence. If I'm not mistaken, in your address you gave a potential example. You said that if someone wanted to get to Toronto more quickly, they could use artificial intelligence to simulate a major police intervention following an accident or some kind of attack. That would clear the road for them to get there more quickly.

In a situation like the truckers' convoy near the Hill last year, it would be all too easy to use artificial intelligence to show an image of the Parliament Buildings on fire, as part of a serious disinformation ploy.

Was it actually you who gave that talk?

4:50 p.m.

Co-Founder, Gladstone AI

Jérémie Harris

It was indeed me.

4:50 p.m.

Bloc

Sébastien Lemire Bloc Abitibi—Témiscamingue, QC

Okay. We'll continue later.

Mr. Gagné, I've been listening to you from the beginning and find that we agree on the need to adopt parts one and two of the bill fairly quickly.

However, for part 3, given the rapid development of the situation around the world, is the current form of the bill still relevant today? Are we on the wrong track? Should we stop and rewrite everything, or continue with what we have been doing?

4:50 p.m.

AI Strategic Advisor, As an Individual

Jean-François Gagné

I agree that it's urgent to establish a base.

You know how things work with legislation and other such matters better than I do. I don't know how long it would take to start over from scratch, but I think it would be a lengthy process. I feel that an effort should be made to come up with a version that provides a solid foundation that applies to most instances and, most importantly, is specific. That, in my view, is the way to go.

The danger arises when you start adding things. I read the amendments. I also felt bad when Mr. Généreux said that they had not been published, because I had read them on the train on my way here. I asked myself why I had been given access to the text of the amendments.

The list of high-impact artificial intelligence system categories was presented. On that, I'd like to say that there are so many applications that I wondered why there is a separate category. It's important to be specific and more transparent, so that companies can comply with the regulations and factor in all the costs of implementing the infrastructure. If any thought is being given to the health, media or social media sectors, more precision is needed. If the field is too broad, it leaves room for interpretation.

If startup companies conducting research are attempting to develop products for the health field, they will need capital to put something very elaborate in place, and the costs will be high. Those are the kinds of factors that have to be kept in mind. It's important to be specific in what you're looking for.

4:50 p.m.

Bloc

Sébastien Lemire Bloc Abitibi—Témiscamingue, QC

Absolutely.

I was gratified when I heard your testimony, because I've been reading about artificial intelligence issues for several months. My first observation is that while Canada was once a leader in AI, that is no longer the case, unfortunately.

We need to adopt the best existing approach rather than attempt to invent something ourselves. Personally, as a Quebecker, I am always concerned about preserving our cultural distinctiveness and finding a way to protect the future of our young companies. That has an economic impact.

One of the criticisms of the bill is its lack of clarity in terms of criminal liability. The bill covers industry, and if there is to be legislation, it's not going to be for those who are behaving, but rather for those who are not. Are the bad guys afraid of what's in the bill? Are these regulations really binding? How can we regulate the offenders in the industry?

4:50 p.m.

AI Strategic Advisor, As an Individual

Jean-François Gagné

The bad guys will just take their model to the country next door and make it available on the Internet.

I understand wanting to have ways of stopping them and punishing them, but it's important not to try to achieve a perfect system or a perfect law that will avoid all risk and criticism. That would slow down innovation, and Canadian businesses adopting these technologies shouldn't fall further behind. Basically, we don't want to end up either impeding innovation or demanding too much in some of these areas.

It is possible to place certain obligations on some players with huge economic interests in the country. They can be held accountable. On the other hand, if the goal is to have a framework that actually works, then it's important to ensure that it's not overly general and that it is not applied either too loosely or too broadly, because that would make it difficult for dynamic Canadian organizations to innovate, make rapid decisions and have confidence in the regulatory framework rather than be afraid of it.

4:55 p.m.

Bloc

Sébastien Lemire Bloc Abitibi—Témiscamingue, QC

Internationally, the Americans made an important move with their recent executive order. Is that the way to go? Is the consensus reached at the recently held summit on artificial intelligence safety adequate? Is that a minimum or a benchmark? As legislators, what should we be aiming at to get the job done?