Evidence of meeting #19 for Access to Information, Privacy and Ethics in the 45th Parliament, 1st session. (The original version is on Parliament’s site, as are the minutes.)

A recording is available from Parliament.

Before the committee

Antoine Guilmain, Partner and Co-head, Gowling WLG's National Cybersecurity and Data Protection Practice Group, As an Individual
Malo Bourgon, Chief Executive Officer, Machine Intelligence Research Institute

5:10 p.m.

Chief Executive Officer, Machine Intelligence Research Institute

Malo Bourgon

Speaking about global conversations, I would certainly defer to the experts in various international diplomacy circles about which forums are the best. The United Nations exists, and so does the OECD. I'm not sure which would be the best venue for this global conversation.

I'm very happy to speak more to which organizations I think would need to be part of that conversation. The most important thing here is for people in those roles, who have the opportunity to do that, to understand this problem such that they can start having those conversations.

Linda Lapointe Liberal Rivière-des-Mille-Îles, QC

Thank you.

5:10 p.m.

Conservative

The Chair Conservative John Brassard

Thank you, Ms. Lapointe and Mr. Bourgon.

Mr. Thériault, you have the floor for five minutes.

Luc Thériault Bloc Montcalm, QC

One thing that really struck me during this discussion is the way we can lose control.

Alain McKenna published an excellent article in La Presse in June. He met with Yoshua Bengio, who will appear before the committee.

The subtitle of the article reads: “Headed toward a level of competence comparable or superior to that of humans, artificial intelligence (AI) has rebelled and already defied orders given to it.”

That is what worries Mr. Bengio. He then went on to say this:

For six months, AI has been acting more and more independently, and it's also acting more and more to protect itself […] To save itself, it will hack the system to copy its own code back rather than the new code that would replace it.

Further down, he gave another example:

Claude Opus 4, the most recent large language model by the American company Anthropic, found out by reading private emails that one of its engineers was cheating on their spouse. The AI also discovered it would eventually be replaced with a new version of Claude. To avoid that, it decided to blackmail the engineer.

That’s rather incredible. Mr. Bengio said it was a simulation, but nonetheless, no one asked it to do this.

What followed after that is important, because this is the point I’m getting to:

What the Montreal researcher dreads most “is uncontrolled agency”, a loss of control caused by the way the most popular models are currently developed. They’re asked to perform tasks without human intervention. For these AIs, deactivating themselves can be interpreted as a barrier to completing the task.

I’d like you to comment on that.

5:10 p.m.

Chief Executive Officer, Machine Intelligence Research Institute

Malo Bourgon

I'll give some context here. For people who've been thinking about the future of AI, these are all things that we were expecting to see, and now we're seeing the evidence of that.

There's a very technical concept here. It goes by the name of convergent instrumental drives, or incentives. It's the idea that if you have a sufficiently intelligent mind, artificial or otherwise, that's trying to accomplish a task, there are certain things that emerge that aren't goals you would deliberately train into the system. That's a separate topic: We don't even have a reliable way to train the goals we want into AI systems. Putting that aside, many subgoals come along because they are instrumentally useful for AI systems in accomplishing whatever goal they might be pursuing.

One goal is self-preservation. It's not some human desire to continue living, but it's very difficult to accomplish a goal if you're not around, or active, to accomplish the goal. Another one would be resource acquisition. It's often easier to pursue the goal you're trying to pursue if you have more resources to do so.

Another example is resistance to having your objectives changed. If you're trying to accomplish a goal, you're not going to be very good at accomplishing it if you allow someone to change the goal you're trying to accomplish.

All this was theoretical 10 years ago. It certainly made a lot of common sense. We're now starting to see that, when we have AI systems that are general and capable enough to be situationally aware in this way, these behaviours are starting to manifest.

They look kind of silly. They make silly mistakes. You see their chains of thought in which they're plotting and saying these things. We say it's kind of dumb and don't see how that would cause a problem. They're thinking out loud about the ways in which they're trying to deceive us or do these dangerous things. These are all test environments.

The concern is that AI systems won't stop with what we have today. The goal of the whole field, in some sense—and certainly of these companies—is to build very powerful general systems that are much smarter than we are. Not only will they be much more dangerous with these convergent instrumental goals, and whatever goals they are pursuing—which we don't know how to train into them reliably—but as they become more intelligent, which I mentioned in my statement, they will become increasingly situationally aware.

When we test them for some of these behaviours, we start to hear them say that it seems like a test to them. When they behave in the ways we expect them to, or want them to, in tests, it becomes harder to know whether we've actually reliably created a safe system, because it is situationally aware of how we're expecting it to behave. As these systems become more powerful, have more autonomy and have more control over how the world and the economy work, that could lead to extremely bad outcomes.

5:15 p.m.

Conservative

The Chair Conservative John Brassard

13, 12, 11, 10, 9, 8...

Luc Thériault Bloc Montcalm, QC

We will try again later.

5:15 p.m.

Conservative

The Chair Conservative John Brassard

Mr. Hardy, you have the floor for five minutes.

5:15 p.m.

Conservative

Gabriel Hardy Conservative Montmorency—Charlevoix, QC

Thank you, Mr. Chair.

Gentlemen, thank you for being here with us today.

To put things back in context, there are currently three different points of view on artificial intelligence. First, there are doomers, who see it as a threat, a danger to humanity. I think Mr. Bourgon is part of that group. There are also realists. Finally, there are enthusiasts. I think Mr. Guilmain is rather realistic.

It’s important to set things in their timeline. We’re currently at the crossroads of a new technology. This isn’t new; we’ve been through this before. Remember that scientists once said a human travelling in a car at 100 km an hour would die. In the 2000s, many people predicted the internet would sweep away everything we knew, including work itself.

I’d like to know what is so very different today, given the knowledge we have. I’m pretty sure I know your answer. Obviously, we don’t know what will happen in 10 years, any more than we could know how far the internet would go in the 1990s.

Considering all the technology that has appeared over the last 100 years, what is so different now? Why is it imperative for us to legislate on the matter as quickly as possible?

My question is for you first, Mr. Guilmain.

5:15 p.m.

Partner and co-head, Gowling WLG's National Cybersecurity and Data Protection Practice Group, As an Individual

Antoine Guilmain

Thank you for your question.

Current geopolitical events are very peculiar right now. They are accelerating the use of artificial intelligence and leading us to ask ourselves a great many questions. Now, the real question is this: must we legislate quickly?

If I look dispassionately at the equation between these two variables, superintelligence and regulation, I’m unable to define superintelligence. I’m not saying I’m not concerned about it. Quite the contrary: I have a family, and I’m thinking about the future. I’m very realistic about what this could represent.

Now, what we really must ask ourselves is whether the course of action is to legislate on the issue; to put a moratorium on a type of technology we’re unable to define; or to at least take an interest in what we have right now, namely generative artificial intelligence, general-purpose artificial intelligence systems and higher-risk artificial intelligence systems in fields like biometrics, employment or justice.

It’s true that I’m rather down-to-earth when it comes to the priorities we should have right now.

5:15 p.m.

Conservative

Gabriel Hardy Conservative Montmorency—Charlevoix, QC

Mr. Bourgon, the floor is yours.

5:15 p.m.

Chief Executive Officer, Machine Intelligence Research Institute

Malo Bourgon

Thank you.

One thing I'd say is that the world is very large and, unfortunately, we're allowed to have many problems at once. We're allowed to have the problem of pressing regulatory issues with artificial intelligence and to have to worry about the trajectory of the technology and where it's going.

As for your question about people in the past who were worried about various general-purpose technologies, I would agree that many of the people who warned about the risks were wrong about the impacts, but some of them were right. It turned out that nuclear weapons were real and very catastrophic, and we treat them very differently from the Internet. For many technologies, though, yes, there will always be people who create a big stir about them.

General-purpose, very powerful artificial intelligence systems are different in a real sense. Having a system that's not just automating a particular cognitive task or a particular physical task but is doing the type of thinking we would be doing, or something similar to it, such that it could automate the process of automation is different in kind. It should be treated as different in kind.

I heard in your question that there is some speculation about when it will come and what its effects will be. Technological progress and forecasting in that domain are notoriously hard. I agree. That said, it certainly seems that when we look at the history of the field and how much trouble we've had developing this technology.... Even back then, the people who founded the field, Alan Turing and I.J. Good, were already imagining what it would look like if they succeeded. They were already thinking about these risks and what it would take to control a system that is much smarter than we are. I think something has changed in the trajectory of that technology, which I can speak to, and that means we should be thinking about it coming much sooner than we otherwise thought.

5:20 p.m.

Conservative

Gabriel Hardy Conservative Montmorency—Charlevoix, QC

Thank you.

If we look at the matter very dispassionately, artificial intelligence uses the data we provide to it. There is a saying that goes, “garbage in, garbage out”. We see it in the case of artificial intelligence; hallucinations and false information are common. Sometimes, we even have to cross-check the data to finally get accurate information.

We are now at a point where just as much good as bad could come from artificial intelligence. We agree on that point.

Should we not instead have a system that adapts our legislation as quickly as technology evolves?

I’ll explain. Many good things will come from artificial intelligence in the fields of medicine, energy or science, for example. Artificial intelligence can think 24 hours a day and advance technologies we humans aren’t even thinking about. So, there is a positive side.

5:20 p.m.

Conservative

The Chair Conservative John Brassard

Mr. Hardy, sorry to interrupt you.

5:20 p.m.

Conservative

Gabriel Hardy Conservative Montmorency—Charlevoix, QC

I was on a roll. I'm sorry. I'm done.

5:20 p.m.

Conservative

The Chair Conservative John Brassard

It was a good statement, but—

5:20 p.m.

Conservative

Gabriel Hardy Conservative Montmorency—Charlevoix, QC

I had a question, of course.

5:20 p.m.

Conservative

The Chair Conservative John Brassard

Sorry, but I want to leave more time for other questions as well. When you take the floor again, we may have a bit more time.

Ms. Church, please go ahead for five minutes.

Leslie Church Liberal Toronto—St. Paul's, ON

Thank you, Mr. Chair.

Good afternoon.

Some of my interests are in the area of consumer protection and competition; when I think about how AI is emerging, I think a bit about how some of the constraints in traditional areas of innovation don't exist. You have, for one thing, the immense resources that it takes to create, develop and operate AI, which limits the field of who can participate in this to begin with. I think that's an issue.

You also have a very underdeveloped framework, if any, for consumer protection or product liability. So my first question, maybe for Antoine first and then Mr. Bourgon, is how we build, and whether we should build, a concept of product liability into the framework we're looking at. How would you suggest we do that?

5:20 p.m.

Partner and co-head, Gowling WLG's National Cybersecurity and Data Protection Practice Group, As an Individual

Antoine Guilmain

There are two different ways of thinking about product liability. You may think of a stand-alone AI act, but I tend to think that's not the right response. You may also look at the various consumer protection acts and potentially the civil code in Quebec; this could be an option for assessing where the gaps are. I will tell you, when we look at, for instance, the civil code.... I was trained in the civil law tradition, and I like to think that jurists are pretty creative. We've already seen some interpretations of the law being adjusted to account for uses of AI; in that sense, I am quite confident that our current laws can evolve to deal with AI's implications.

I know what I know, and I know what I don't know. I'm not pretending to be an expert when it comes to consumer protection; that is not my field of specialty. However, it would potentially be interesting to have a regulatory body in the field raise its hand and say it sees gaps, so it wants to tweak an existing act to make sure those gaps are covered. That is my position when it comes to AI.

That approach is much more efficient. It requires everyone to be at the table, but it's very different from enacting a stand-alone act on product liability, as we've seen in Europe, for example. It does exist: there are directives and regulations on product liability there, but the regime is fundamentally different in the way it's structured.

That's my initial reaction to your question.

5:20 p.m.

Chief Executive Officer, Machine Intelligence Research Institute

Malo Bourgon

I will be a little outside my lane here. I think liability can be a useful tool. I don't think it resolves the big-picture concerns that I'm worried about, but it's certainly on the trajectory to those types of systems. I expect that we'll get increasingly capable general-purpose AI systems that will be difficult to treat with liability in a sector-specific way, because the same system that can be an expert biologist helping with drug discovery can also be used for autonomous hacking and for helping developers find vulnerabilities in their code. That same model that can help with novel drug discovery can also potentially help someone develop dangerous biological compounds.

There's a sense in which the people making this technology are making something so general and increasingly powerful in its ability to manipulate reality that it makes sense to think about how they should potentially have some liability for ensuring that the technology they're putting out there doesn't bring certain harms.

Again, that's not going to ultimately solve the incentives for racing for superintelligence, but it certainly seems to make sense on the way there.

Leslie Church Liberal Toronto—St. Paul's, ON

It does. As for your point, you raise some very serious possible types of harm that come from the operation of this technology, so how should we look at that as legislators? What types of safeguards or guardrails could we put in place to prevent that harm and not just try to deal with it after the fact, after it happens?

5:25 p.m.

Chief Executive Officer, Machine Intelligence Research Institute

Malo Bourgon

I think some of the foundations of this are also useful for the loss-of-control concerns I'm worried about. But there's a certain school of thought that we should just let these people cook: the technology will bring enormous benefits, and we don't want to limit them.

I find it hard to imagine that we're going to end up in a stable world if we succeed at creating these systems that are increasingly capable and that have dual-use capabilities with national security implications. We want AI developers to be able to make models that can help with novel drug discovery. But what about models that might also help someone create an unprecedentedly powerful bioweapon? Do we want those models to be open-source?

We should probably have some framework under which we know who is capable of training a genius in a data centre and what we can do with those very powerful technologies. If we just proliferate them openly in perpetuity, it could create a world that's unstable and that we won't be able to control. That's not to say there aren't a bunch of benefits to open sourcing some of these models; we should open source all the ones we can that don't impose those risks.

5:25 p.m.

Conservative

The Chair Conservative John Brassard

Thank you, Ms. Church.

If the witnesses can stay for three rounds of questions, each lasting two and a half minutes, I think that we can finish the second hour of the meeting quickly. Can we do this? Okay.

I'll give the floor first to Mr. Thériault, then to Mr. Hardy and then to Mr. Sari.

Luc Thériault Bloc Montcalm, QC

I think that we need to discuss ethical issues because ethics are more demanding than law. In a society where values are shared, law is the lowest common denominator. Before we can begin to effectively regulate an area, we must first understand the matter at hand. We can then try to find the best ways of doing so. We mustn't downplay this letter on the risks of superintelligence from 800 artificial intelligence experts by calling these experts alarmists. This technology has good and bad sides.

The new Minister of Artificial Intelligence and Digital Innovation announced that he would focus less on regulating artificial intelligence and more on harnessing its economic benefits.

Do you think that this approach is a bit naive? What are your thoughts, Mr. Bourgon?