Evidence of meeting #20 for Access to Information, Privacy and Ethics in the 45th Parliament, 1st session. (The original version is on Parliament’s site, as are the minutes.) The winning word was approach.

A recording is available from Parliament.

On the agenda

Members speaking

Before the committee

Leahy  Chief Executive Officer, Conjecture Ltd.
Alfour  Chief Technology Officer, Conjecture Ltd.
Piovesan  Managing Partner, INQ Law

Luc Thériault Bloc Montcalm, QC

Thank you, Mr. Chair.

Gentlemen, your presentations are quite compelling.

I will start by asking you a slightly more technical question.

Some people would say that this is alarmist rhetoric, that artificial intelligence is a long way off, that we have not yet attained artificial superintelligence and that we will have time to look into it when the time comes.

However, two trends indicate the complete opposite of what these people are saying. Let me know if you agree. First, computing power is increasing exponentially. Second, the artificial intelligence models currently available appear to be advancing in intelligence at an exponential rate, thanks to available data and the size of the model.

Would you agree that we don’t have as much time as we think?

11:25 a.m.

Chief Technology Officer, Conjecture Ltd.

Gabriel Alfour

I clearly think so.

Another way to think about it is that 15 years ago no one predicted where the capabilities of artificial intelligence would be now. This is the big thing that has happened. This is a big reason experts are warning about the extinction risks. Very few people, almost no one in AI, expected that we would have models as powerful as the GPT suite of models from several companies.

Put simply, when I was a teenager, no one expected that we'd have models that could talk to people. This was inconceivable. Things are accelerating faster and faster.

Luc Thériault Bloc Montcalm, QC

Governments are somewhat fascinated by means that improve efficiency, and they seem much more inclined to rush to implement applications to achieve greater efficiency. Time will tell whether there are indeed efficiency gains.

From what I understand from your remarks, artificial intelligence is a monster in the making, and it is evident that we cannot allow its development to continue without better regulation. Now, the question is how to regulate it.

In Canada, Bill C‑27, which died on the Order Paper, proposed the creation of the Commissioner of Artificial Intelligence and Data position within Innovation, Science and Economic Development Canada.

The recent Carney government has a Minister of Artificial Intelligence and Digital Innovation. We would like to invite him here soon, but he does not want to meet with us. Last June, he stated that he would place greater emphasis on finding ways to exploit the economic benefits of this technology rather than on regulation.

What do you think of this approach?

11:30 a.m.

Chief Executive Officer, Conjecture Ltd.

Connor Leahy

I think there are, obviously, many reasons to focus on the positives rather than the negatives. It is, frankly, more profitable and more fun, to say it rather simply.

It is important to note that we are facing a polycrisis in many areas across the globe. I'm less familiar with Canada than I am, for example, with the U.K., where I currently live, or my native Germany or America, but we are facing many challenges. Often, there is a seduction towards thinking of technological solutions. Often, technology is a very powerful solution. It is a very powerful way to help address these problems. I believe AI can be helpful for many of these problems, but it is also important to be skeptical.

There was a world where nuclear power, when it was first being developed, was thought as a solution to everything. Some wanted to make nuclear-powered aircraft, for example, that would spew radiation as they flew. They wanted to use nuclear weapons for mining applications. I think the Russians actually tried that one, and many such things.

This is not to say that nuclear power is not an incredibly useful and powerful technology. I think nuclear power plants are some of the most effective ways to make energy, but the reason they are so safe, good and useful is good regulation. This took decades of hard work. It took the invention, by many experts, of whole new forms of safety engineering to reap the benefits.

I think we're seeing a similar thing here. If we try to reap the benefits without the correct amount of safety engineering that is necessary, we will see the same thing we've seen with social media repeat. We will not see the AI equivalent of safe, economically productive nuclear reactors.

11:30 a.m.

Conservative

The Chair Conservative John Brassard

You only have 15 seconds left, Mr. Thériault.

Luc Thériault Bloc Montcalm, QC

Then I can’t ask more questions.

11:30 a.m.

Conservative

The Chair Conservative John Brassard

Thank you.

We have Mr. Barrett for five minutes.

Go ahead.

Michael Barrett Conservative Leeds—Grenville—Thousand Islands—Rideau Lakes, ON

Based on your experience with malicious AI uses such as deepfakes, cyber-attacks and autonomous exploitation tools, what safeguards or regulatory measures should we be examining to protect Canadians? What examples can you point to of where those are successfully being implemented either at a corporate level or at a national or subnational level?

I'll give both gentlemen a crack at the question if they'd like.

11:30 a.m.

Chief Technology Officer, Conjecture Ltd.

Gabriel Alfour

I will start.

There are basically two separate categories of risks. There are the risks that come from superintelligence and more generally from systems that cannot be controlled. For these risks, I recommend basically regulating the development of such systems.

The problem with such systems is that once they're developed, we cannot control them; we cannot put the genie back in the bottle. This is one type of regulation that is quite important. That is why we put a strong emphasis on international agreements and this type of regulation.

The other one is for the, let's say, more prosaic risks of current systems. Future systems are not yet superintelligent, which means that if there is a problem, we can put the genie back in the bottle. Here, it's more at the application level. When I say the application level, I mean more the AI companies—you want to regulate the bottlenecks. Here it will be beneficial to put stringent regulations on the dangerous aspects of AI and put strong liability regimes in place so that if the systems they build are used for nefarious purposes, they will also be liable for it.

I would separate those two, one being the development part of regulation for superintelligent AI and the other being the applications for current systems and near-future systems.

11:35 a.m.

Chief Executive Officer, Conjecture Ltd.

Connor Leahy

To pick up on what my colleague just spoke about, it's important to note that a play is very often attempted by these actors where they attempt to blame the users for misuse of their tools at every possible opportunity.

In general liability law, the best practice is to put liability on the part of the supply chain that is best suited to addressing the harm. Obviously, the part best suited to addressing the harm is these massive corporations with the best technical talent in the world, massive platform leverage and so on, rather than the user. I would push against user liability as the way to address these risks and push much more for developer or employer liability.

11:35 a.m.

Conservative

Michael Barrett Conservative Leeds—Grenville—Thousand Islands—Rideau Lakes, ON

I'd like to follow up on the point with respect to the regulation of development.

If we gatekeep the advancement or development here, and we have treaties with many other countries but, for example, Russia and/or China don't participate in the treaty, wouldn't we find ourselves in a situation where our adversaries are proceeding with development in an AI arms race while we just watch and hope that things don't get out of hand? Based on the context you provided in your opening statements, we know that it almost certainly will, but we may not have developed tools that will allow us to defend against it.

Would I be correct in saying we can also use models to defend us against rogue states and the models they would develop and deploy to our detriment?

11:35 a.m.

Chief Technology Officer, Conjecture Ltd.

Gabriel Alfour

There are basically two regimes. There's the superintelligent regime and there's what comes before that.

In the superintelligent regime, everyone loses. If Russia builds superintelligent AI systems, everyone loses; they cannot control it. It's the same thing for China, it's the same thing for the U.S. and it's the same thing for any country. This is in the superintelligent regime. It's only the superintelligent AI systems that basically have any agency left.

Then there's the pre-superintelligent regime, where there is an actual race, because until there's superintelligence, when you can still steer your systems, you stand to gain a lot of benefits from developing stronger systems. This is why international agreements are important, and indeed, if we fail to build and reach international agreements, things will go badly.

The same is true for biotechnology. The same is true for any type of dangerous weapons, like nuclear weapons. This is why we believe that international agreements are critical, or else you get into the superintelligent regime and things go badly.

11:35 a.m.

Conservative

The Chair Conservative John Brassard

Thank you for your response.

Mr. Barrett, thank you for your questions.

Mr. Sari, you have the floor for five minutes.

Abdelhaq Sari Liberal Bourassa, QC

Thank you very much, Mr. Chair.

Thanks to the two witnesses with us today.

You have shared some very important information about artificial intelligence. I would not necessarily say that your presentations are alarming, but they are very factual. I can only agree with you about the risk. I can only agree with you on the extent of the impact that superintelligence could have on everyone’s daily life.

However, my opinion differs from yours on one point, and I would like to discuss it. You used the verb “halt”. Are you saying that we need to halt the development of these technologies or halt their use?

As you clearly explained, persuasion technologies or systems already exist. I think Canadians are already using such technologies. Do you want us to halt their use, given that most of these systems are developed outside Canada, or do you want us to halt their development on Canadian soil?

You drew a very interesting parallel with the nuclear arms race. However, in the nuclear arms race, geographical boundaries are important, which is not the case for technologies developed using training systems and artificial neural networks, which can be used anywhere in the world.

I would like to hear your opinion on this.

11:40 a.m.

Chief Executive Officer, Conjecture Ltd.

Connor Leahy

These are some very excellent questions, and this is, in fact, a very difficult thing.

The non-locality of digital technologies presents novel, unprecedented risks of proliferation and difficulty of control. With nuclear weapons, for example, we have the luck, in a sense, that uranium ore is quite bulky and centrifuges are quite hard to build and quite visible from space, if you do them right. We have some of these benefits when it comes to AI systems, such as data centres. Others, such as the open-sourcing of many such systems, present novel difficulties.

This is why we've put such emphasis on the regulation of the development. If a superintelligent system was made, it would probably be a computer file that can probably never be deleted. It would be something that would spread. We might not even know what we're dealing with when it is first developed. It is quite likely that we won't even know that the first superintelligent system is superintelligent until it's too late. It is already the case right now that many of our AI systems have capabilities that we didn't know at the time of development. We only discover much later that our systems are capable of things we didn't know.

This is a novel regime. This is not something we have dealt with very well historically. Even historically, things such as export controls on software have not been very successful or have been very tricky to enforce.

It's very important to say that we do not think all AI should be halted or that all AI applications should be halted or not used by users. I'm sure my colleague would agree with me that we very much enjoy many of the AI applications on the market today. What we want is to gain the benefits of the kind of AI we have right now and continue forward into more powerful applications.

Abdelhaq Sari Liberal Bourassa, QC

Time is running out and because I’d like to ask you a question about Canada’s strategy, I will wrap up this question by saying that I also hope there will be an international agreement. You alluded to this, but I’m not very optimistic about that because I see the race in the field of quantum servers and facilities. I think I am a little less optimistic than I should be.

We received more than 11,000 comments during the countrywide consultation on Canada’s strategy. I believe that we also need to educate Canadians. How can we institutionalize citizen participation so that it becomes a permanent pillar in the development of artificial intelligence‑related safety policies?

11:40 a.m.

Conservative

The Chair Conservative John Brassard

Answer in 30 seconds or less, please.

11:40 a.m.

Chief Technology Officer, Conjecture Ltd.

Gabriel Alfour

We believe that awareness and education are extremely important for controlling AI. It's about half of what we do. First is education.

The second thing is putting people into the decisions of deploying systems. A lot of people are against the development of superintelligent AI systems, and whatever we can do to put them in the loop to have a say, we believe is good.

11:40 a.m.

Conservative

The Chair Conservative John Brassard

Thank you so much.

Thank you, Mr. Sari.

Mr. Thériault, you have the floor for five minutes.

Luc Thériault Bloc Montcalm, QC

Thank you very much, Mr. Chair.

The chief executive officer of the machine intelligence research institute told us when he testified last week that a global shutdown is not currently politically feasible and that’s why his organization is focusing on safeguarding the ability to shut down artificial intelligence by creating a kind of off-switch. He proposed putting in place the technical, legal and institutional infrastructure necessary to restrict the dangerous development and deployment of artificial intelligence on an international scale. This is what he calls an off-switch. That would lead to a coordinated, international shutdown of cutting-edge artificial intelligence activities at some point in the future.

What do you think of this proposal?

11:45 a.m.

Chief Executive Officer, Conjecture Ltd.

Connor Leahy

We generally believe that there is a lot of value in what are often called “stop button” proposals, mostly not from a technical perspective, but as a social-political factor. As an example, I have in the past asked someone who worked at a large tech company whether they could shut down all of their servers if they wanted to. He said no. He didn't know where all of them were. There was no single person in the entire company who knew where all the servers were and what software was running on them.

As for creating legislation, I am not familiar, unfortunately, with the specific proposal you mention. There is a lot of value in this, but it sounds to me that the proposal pushes against development.

We cannot rely on waiting until we see a superintelligent system, because by the time we see one, it is already too late. It's quite likely that when the first superintelligent system gets built, we will not even recognize it as being that until quite a bit later, and that will be far too late.

It's very important—which is why there's a repeated emphasis on precursors—to make sure that such systems never get developed in the first place. To do this, we need to already be in the loop before such systems are built.

Luc Thériault Bloc Montcalm, QC

You mentioned precursors several times. How can precursors be restricted?

11:45 a.m.

Chief Technology Officer, Conjecture Ltd.

Gabriel Alfour

I'll take this one.

There are many different types of precursors. Some are hardware, like data centres or the way to build GPUs into the supply chain and things like this. Some are software, like the types of AI systems that have been built and the type of scaffolding you have on top of them.

A deep thing that is quite important to understand is that this is a moving target. As time passes, it gets easier to build superintelligent systems, and more things get into the category of precursors. This is also why we believe there is an urgency and that we should tackle this as soon as possible.

Right now, we can get away with, for instance, preventing research programs that are aimed at building superintelligent systems. We should also focus on limiting the open-sourcing of models, because once they're there, you cannot take them back. It is the same for data centres. For every data centre, there should be stop buttons and kill switches. There should be clear regimes for what can be done with them and so on. These are the types of regulations that we should have on precursors.

Fundamentally, it is a moving target. As the technology changes and as the way the technology is built changes, the target itself changes. The precursors of 15 years ago were very different than they are now, and it would have been much simpler to tackle the problem 15 years ago.

Luc Thériault Bloc Montcalm, QC

Thank you.

How much time do I have left, Mr. Chair?

11:45 a.m.

Conservative

The Chair Conservative John Brassard

You have 35 seconds left, or perhaps a little more.