Evidence of meeting #20 for Access to Information, Privacy and Ethics in the 45th Parliament, 1st session. (The original version is on Parliament’s site, as are the minutes.)

A recording is available from Parliament.

Before the committee

Leahy  Chief Executive Officer, Conjecture Ltd.
Alfour  Chief Technology Officer, Conjecture Ltd.
Piovesan  Managing Partner, INQ Law

11 a.m.

Conservative

The Chair Conservative John Brassard

Good morning, everyone. It's December, and I'm going to call this meeting to order.

I want to welcome everyone to meeting number 20 of the House of Commons Standing Committee on Access to Information, Privacy and Ethics.

Pursuant to Standing Order 108(3)(h) and the motion adopted on Wednesday, September 17, 2025, the committee is resuming its study of the challenges posed by artificial intelligence and its regulation.

I'd like to welcome our witnesses for the first hour today. Both are from Conjecture Ltd. We have Connor Leahy, who is the chief executive officer, and Gabriel Alfour, who is the chief technology officer.

Mr. Leahy, you have up to five minutes to address the committee. I understand that you may need a bit more time or want a bit more time. If it gets up to six minutes, I would accept that, but I know we have lots of questions to ask.

Mr. Leahy, go ahead, please.

Connor Leahy Chief Executive Officer, Conjecture Ltd.

Thank you, Mr. Chair and members of the committee, for inviting me to testify today.

I'm an expert on the catastrophic global threats of AI and will primarily be speaking to you from this perspective.

I am the CEO of Conjecture, which is an AI safety research firm. I'm also an adviser at ControlAI, which is a non-profit focused on mitigating the security risks posed by advanced AI.

In 1985, humanity awakened to a hole in the sky. Scientists discovered that chlorofluorocarbons, CFCs, were depleting the ozone layer, which shields humanity from damaging ultraviolet radiation. At the same time, humanity also lived atop a deep fracture—a cold war between the U.S. and the U.S.S.R. that threatened nuclear annihilation.

Amidst deep geopolitical tensions, the two superpowers ultimately shook hands, signing both a landmark nuclear de-escalation treaty and the Montreal Protocol in 1987 to prohibit and phase out CFCs. This protocol ultimately received universal ratification. Despite the world's divisions, these rival powers came together to mend a hole in the sky and to recognize that never-ending nuclear escalation was in no one's interest, and the rest of the world followed.

In 2023, humanity heard a new warning call from Nobel Prize-winning AI scientists and the CEOs of major AI companies, saying, “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” This risk of extinction is posed by superintelligence, the exact subset of AI that the leading AI companies are racing to develop.

Superintelligence is defined as AI that is more competent than all humans at all relevant cognitive tasks across all relevant domains and capable of acting beyond human oversight and control. If there were to exist systems that autonomously out-compete any human in all relevant tasks of science, business, persuasion, politics and warfare, and if we did not control them, it is hard to imagine a future that goes well for humanity.

A major part of the risk is that AI developers fundamentally do not understand how the AI systems they are creating actually work and cannot develop them in a safe manner. Dario Amodei, the CEO of the second-largest AI company, recently stated that we perhaps “understand 3% of how they work”, which is, in my personal opinion, somewhat of an overestimation.

AIs are not developed as code that is written line by line as we do with traditional software. Instead, researchers are essentially growing AI models by feeding them vast amounts of data and training them by using enormous computing power to produce what is called a neural network rather than a set of lines of computer code.

Unfortunately, the current AI development paradigm does not allow the safety-by-design approaches that we use for other advanced, highly risky technologies. We would not, for example, build nuclear power plants if we did not know how to control nuclear reactions. Technical control methods are lagging drastically behind advances in AI system capabilities. Currently, there are no legally binding AI safety regulations to protect consumers and humanity as a whole.

Where does this leave us today? Right now, multiple AI companies are pouring hundreds of billions of dollars into developing superintelligent AI as quickly as possible despite experts warning of the risks. This haste is, in my opinion, directly tied to an attempt to outrun legislation, to complete their projects before the wider public and the government wake up to the completely unconscionable risks the unconsenting public is being exposed to by private, unaccountable and reckless actors.

Recently, AI companies have been racing to automate AI research itself, allowing AIs to build even better AIs by themselves in order to reach superintelligence more quickly. This process is called recursive self-improvement, meaning the moment an AI is built that is good enough to make better AIs, it might already be too late.

Leading scientists now estimate that superintelligence could be developed by 2030, or potentially even sooner. In the face of this pressing threat from superintelligence, I'd like to offer the committee three recommendations for how Canada can respond now.

One, the Canadian government should publicly recognize superintelligence as a national and global security threat that poses an extinction risk to humanity.

Two, Canada should begin negotiating an international agreement to prohibit the development of superintelligence, given that there is no scientific consensus that it can be developed in a way that does not threaten humanity with extinction. The agreement should also restrict and monitor superintelligence precursors such as recursive self-improvement.

Three, Canada should prevent the development of artificial superintelligence on its soil, as superintelligence would be capable of overpowering individuals, companies and even Canada's national security apparatus.

Thank you. I would be happy to take any questions you may have.

11:05 a.m.

Conservative

The Chair Conservative John Brassard

Thank you, Mr. Leahy.

Mr. Alfour, you have up to five minutes to address the committee. Go ahead, please.

Gabriel Alfour Chief Technology Officer, Conjecture Ltd.

Mr. Chair and members of the committee, my name is Gabriel Alfour. I'm the chief technology officer and co-founder of Conjecture, an AI safety research firm. I also helped found ControlAI, a non-profit dedicated to preventing risks to humanity from artificial intelligence. ControlAI has engaged lawmakers in Canada, the U.S., the U.K. and the EU.

There are many complex and important challenges we face with AI, but in my personal and professional opinion, the most urgent one is the extinction risk posed by superintelligent AI. These are systems that vastly exceed human cognitive abilities and would be capable of out-competing us in scientific and military development, persuasion, politics, business and more. They would outsmart not just individuals, but corporations, national security establishments and governments. If built, they, not us, will be the force deciding the future.

How did we get to this point with AI?

First, the top experts from the field—the most cited AI scientists and the CEOs of the leading AI labs—warned in 2023 that “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” However, these warnings were ignored. Leading AI companies are still recklessly pursuing superintelligent AI systems capable of outsmarting our best technology, engineers and national security experts, and of resisting being shut down. Their plans to control superintelligent systems are at best ungrounded and speculative—when they exist at all.

Second, there is a common misconception about AI development that we directly program how these systems behave, but we don't. We did until about 15 years ago, but modern AI systems are grown, not built, by being fed massive amounts of data, and their behaviour emerges in ways that we cannot predict or control. That is, AI is not coded line by line by humans, and researchers and engineers do not need to understand AI to create it. When AI systems encourage a young person to commit suicide, deceive their users or resist being shut down, no engineer programmed this behaviour. As a consequence, we do not know how to diagnose what led the system to do this or how to reliably prevent it from doing so again.

Finally, as of today, artificial intelligence remains the exception, not the standard, when it comes to how high-risk industries are regulated. To operate in fields like nuclear and biotechnology, developers must comply with stringent safety standards, implement risk mitigation strategies, submit to inspections and so on, yet the AI field remains largely unregulated despite mounting concerns from within the industry. AI engineers in San Francisco have told me they do not understand what they are building, and some see what they do as clearly dangerous. Even Geoffrey Hinton, the “godfather of AI”, left Google specifically to warn about the risks of AI.

What can be done to prevent said risks of extinction from artificial superintelligence? It is my belief that countries should not unilaterally act against their own interests, much less on blind trust. Instead, they must do two things.

First, at the national level, they must halt the development of the most dangerous AI systems, namely superintelligent systems. Every country stands to lose from the development of superintelligence and to benefit from domestically halting all programs developing superintelligence. Such systems, once deployed, could not be shut down and would outperform every human at hacking and other tasks, thus threatening the national security of countries.

Second, at the international level, countries must agree to regulate and monitor the precursors to superintelligence. We should apply the same regulatory approach used for dual-use technology like nuclear, biological and chemical materials, and prohibit development programs capable of egregious harm outright—in this case, artificial superintelligence—while regulating their precursors. This will allow beneficial applications to thrive while preventing catastrophic harms.

Determining which precursor capabilities to regulate is a moving target that will evolve alongside our understanding of AI. Some precursors are, unfortunately, dual-use. Compute and data centres are economically beneficial, yet critical to developing superintelligence. Similarly, hacking capabilities offer military advantages, but could enable AI to break containment. For such dual-use precursors, international agreements are essential. No single country can mitigate these risks alone, nor should one country bear the economic cost of restrictions while others forge ahead.

Meanwhile, some precursors have narrower applications limited to AI research itself, such as systems capable of autonomously advancing AI research without human oversight, which could trigger an unchecked feedback loop of capability improvements.

Canada can also act domestically to neutralize dangerous AI systems within its borders. For example—

11:10 a.m.

Conservative

The Chair Conservative John Brassard

Mr. Alfour, I'll have to stop you there, because I know members want to get to questions. Perhaps you can answer some of the questions with the remaining statement you have.

I also want to make sure that both of you are on the interpretation channel for your language of choice, because there will be questions posed in English and French.

We will begin in French.

Mr. Hardy, you have the floor for six minutes.

11:10 a.m.

Conservative

Gabriel Hardy Conservative Montmorency—Charlevoix, QC

Thank you, Mr. Chair.

Gentlemen, thank you for joining us today.

This is an extremely important topic. We are only at the beginning of our study, but the witnesses who have appeared before the committee to talk about artificial intelligence seem to have very wide-ranging opinions. Some have talked about the very risky and dangerous side of artificial intelligence, while others have been very positive about the benefits it has in store for us.

I would like to draw a parallel between artificial intelligence and what we are seeing with social media. Private businesses have been allowed to develop social media at breakneck speed, with a kind of nascent artificial intelligence that analyzes everything we look at to try to keep our attention focused on these networks at all times. Young people are experiencing unprecedented levels of stress, largely due to this.

How would you compare the early days of artificial intelligence with social media and with the superintelligence that is currently being developed, which you spoke about earlier?

11:15 a.m.

Chief Executive Officer, Conjecture Ltd.

Connor Leahy

I'll take this one.

A lot of the early research that is leading to the current boom in artificial intelligence started in the context of social media. A lot of the early research on what is now called deep learning and AI was done for social media recommendation algorithms.

Personally, I'm part of the oldest cohort of gen Z. I remember we had a promise, in a sense, that if we just let social media and the Internet run free, if we didn't regulate them, they would bring freedom and prosperity to the world. I don't know, but perhaps some members remember, for example, the Arab Spring. There was a widely held belief, by me and by many other people at the time, that widespread access to the Internet and social media would bring freedom and democracy.

These promises have turned out to be lies; they have not come true. Instead, social media companies are cannibalizing many aspects of our interpersonal communications and relationships for their own benefit. They are now pouring hundreds of millions of dollars into lobbying and other avenues to try to prevent people from interfering.

I think the same pattern of behaviour—developing a technology so quickly that governments cannot react and actively trying to slow down governments to prevent them from regulating this technology until it's already too late—is exactly the playbook we are seeing being deployed right now by AI companies.

11:15 a.m.

Conservative

Gabriel Hardy Conservative Montmorency—Charlevoix, QC

If I understand correctly, we are basically repeating the mistakes we made with social media. Incentives are driving companies to develop powerful artificial intelligence, or artificial superintelligence, as quickly as possible in order to have the upper hand in the market.

In your opening remarks, you drew a parallel between nuclear de-escalation and the Montreal Protocol. In your opinion, what steps should be taken to ensure that governments intervene as quickly as possible in the field of artificial intelligence? What needs to be done to ensure that we understand the dangers and benefits of artificial intelligence, so that we can keep humans at the centre, develop artificial intelligence only for its positive aspects, and keep its negative aspects under control?

11:15 a.m.

Chief Executive Officer, Conjecture Ltd.

Connor Leahy

I think this is exactly correct. I think that in many ways we are making, to some extent, the same mistake, and it's important that we do not make the same mistake. Therefore, fast action by government is extremely important. As I stated during my initial statement, the most important thing is to fully arrest the development of truly superintelligent, dangerous AI.

This is not an esoteric, small corner of the industry. This is something you can see being advertised by these companies' research departments as their primary goal. They go to parties and brag about building superintelligence. This is not a secret operation; it is pursued openly, and it is something the government can act on now. Already, just acknowledging these risks and bringing them into both national and international discourse are the first steps to stigmatizing and potentially outlawing such dangerous developments, while opening the negotiating table for how to handle dual-use precursors.

This is a very difficult regulation challenge. This is why conversations like the one we're having today are so important. The first step, from my personal perspective, is the prevention of the creation of superintelligence both nationally and internationally, and then it's about moving towards sensible regulation of dual-use technologies and building on lessons that have been learned in other high-risk technological areas.

11:15 a.m.

Conservative

Gabriel Hardy Conservative Montmorency—Charlevoix, QC

Thank you.

I would also like to hear your comments on this subject, Mr. Alfour. You talked about risks in your presentation. However, I believe there is another risk that you did not mention. You said that Canada should stop research aimed at developing artificial superintelligence. However, if Canada slows down, isn’t there a danger that other countries will take the lead and we will ultimately fall behind and suffer the consequences? I imagine that’s the problem: If one country does it and another doesn’t, it triggers a mad rush for all countries.

How can we respond to this challenge? I think that pulling Canada out of the race will put us in a precarious position. Is there a way to get everyone to agree and really move in the right direction with regard to artificial intelligence, and artificial superintelligence in particular?

11:20 a.m.

Conservative

The Chair Conservative John Brassard

Please give a quick answer in 20 seconds or less, Mr. Alfour.

11:20 a.m.

Chief Technology Officer, Conjecture Ltd.

Gabriel Alfour

I think you outlined three separate concerns. The first one is that stopping the development of ASI does not hurt Canada. ASI cannot be controlled by any country. Any country that develops a superintelligent system will find its national security threatened.

11:20 a.m.

Conservative

The Chair Conservative John Brassard

I'm sorry, but I'll have to stop you there.

Ms. Lapointe, you have the floor for six minutes.

Linda Lapointe Liberal Rivière-des-Mille-Îles, QC

Thank you very much, Mr. Chair.

Welcome to the witnesses.

I must say that your statements have highlighted the risks associated with artificial intelligence. We usually hear about its positive aspects, but you are trying to point out the more problematic aspects.

I would like your answers to my questions to help us determine whether Canada is on the right track regarding oversight of artificial intelligence. We know that we are very advanced in this area, particularly in the Montreal region.

Here is my first question.

Canada has launched the Canadian Artificial Intelligence Safety Institute, or CAISI, whose mandate is to independently test and evaluate advanced artificial intelligence systems. In your opinion, how important is it for countries to create public and independent institutes such as the CAISI?

11:20 a.m.

Chief Technology Officer, Conjecture Ltd.

Gabriel Alfour

Being aware of the state of the art in artificial intelligence is quite important. However, I think we may already be way past that in many ways.

We have already gotten a warning from experts about extinction risks from AI. We have already gotten results from many AI safety or security institutes showing that some AIs are already able to persuade people, to manipulate people and to sometimes even break containment. From my point of view, we have already gotten worrying results from existing systems. We have already gotten warnings from experts about systems that are soon to come—in the next three to 10 years.

I think it's important, but now it's even more important that we move on to the next step, which, beyond just measuring, is to actually act.

Linda Lapointe Liberal Rivière-des-Mille-Îles, QC

You have talked about the risks associated with artificial intelligence. When you talk about measuring, what exactly would you measure? I’d like to understand what you want to measure.

11:20 a.m.

Chief Technology Officer, Conjecture Ltd.

Gabriel Alfour

I think that's a great question.

I think we're already past many dangerous limits. For instance, we already have very persuasive systems. If we wanted to ensure that systems cannot manipulate people, we already have systems that are good at this. The same thing is true for hacking, for instance. If we wanted to ensure that current systems are not good at hacking, that is already lost. We already have systems that are good at hacking.

Now we're only measuring how much better they're getting, how superhuman they're getting and how much faster than people they're getting. We've already passed a few points that are quite dangerous. We're already in a tightening regime, edging closer to places from which we cannot really recover. This is the type of stuff we're talking about when we talk about measurements.

Another one that is relevant is how AI can autonomously develop itself. Right now we have companies that use AI more and more to develop AI. We have fewer and fewer humans in the loop. This is one of the other things we try to measure: how few humans are needed to develop AI. This is a measure of interest because it tells you when it could kick-start a runaway loop, which is basically a loop in the development of AI where it develops faster than we can even see it coming. These are the types of measurements we usually care about.

Linda Lapointe Liberal Rivière-des-Mille-Îles, QC

My question is related to another question that my colleague asked you earlier. Do you believe that if we worked together with all the other countries, we could avoid the risks that you have been listing?

11:20 a.m.

Chief Technology Officer, Conjecture Ltd.

Gabriel Alfour

I personally believe so. These risks are concentrated in superintelligent systems. I think the hard part is monitoring and regulating the precursors to such superintelligent systems, but if we do so, there are many benefits we can get from AI. I will not say that it's easy, but I will say that it is very much tractable; it is doable. We can do it scientifically, and I think we should do so.

Linda Lapointe Liberal Rivière-des-Mille-Îles, QC

Thank you.

Mr. Leahy, do you have anything to add about the risks of artificial intelligence and its regulation at the global level?

11:25 a.m.

Chief Executive Officer, Conjecture Ltd.

Connor Leahy

I would agree with everything my colleague said. Tractable but hard is a good way to think about this.

There are many benefits from this technology, as with any other, but an uncontrolled race, including between countries, is in no one's interest. There is no winning, ultimately, if superintelligence is built. It doesn't matter who does it; there will be no benefits. There are many benefits to a well-regulated, well-understood and well-controlled AI market. Doing so is hard, but the work has to be done.

Linda Lapointe Liberal Rivière-des-Mille-Îles, QC

I must say that I find your comments very disturbing. I am confident that we can succeed if we have good intentions and everyone sits down at the table to try to establish regulations.

If we had global rules to regulate the development of artificial intelligence, as you say, there would be better outcomes for everyone. However, we need to act on a global scale. Do I fully understand what you are saying?

11:25 a.m.

Chief Technology Officer, Conjecture Ltd.

Gabriel Alfour

I would tend to say so. At least for the mitigation of ASI, it should be done worldwide. If anyone builds it, we all suffer from it. We live in a very interconnected world. If we have a superintelligent system that can overthrow governments and can play geopolitics and war better than any human—if anyone builds it—we're in deep trouble.

Linda Lapointe Liberal Rivière-des-Mille-Îles, QC

Thank you.

11:25 a.m.

Conservative

The Chair Conservative John Brassard

Thank you, Ms. Lapointe.

Mr. Thériault, you have the floor for six minutes.