Evidence of meeting #25 for Access to Information, Privacy and Ethics in the 45th Parliament, 1st session. (The original version is on Parliament's site, as are the minutes.)

A recording is available from Parliament.

Before the committee

David Krueger  Assistant Professor, Department of Computer Science and Operations Research, Université de Montréal, As an Individual
Anthony Aguirre  Executive Director, Future of Life Institute
Max Tegmark  Professor, Future of Life Institute
Philippe Dufresne  Privacy Commissioner of Canada, Offices of the Information and Privacy Commissioners of Canada

3:35 p.m.

Conservative

The Chair Conservative John Brassard

Good afternoon, everyone. I call this meeting to order.

I apologize to our guests for the delay. We had votes in the House of Commons.

Welcome to meeting number 25 of the House of Commons Standing Committee on Access to Information, Privacy and Ethics.

Pursuant to Standing Order 108(3)(h) and the motion adopted on Wednesday, September 17, 2025, the committee is resuming its study of the challenges posed by artificial intelligence and its regulation.

I'd like to welcome our witnesses for the first hour. They are all via video conference.

From the Future of Life Institute, we have Anthony Aguirre, executive director, and Professor Max Tegmark. We also have David Krueger, an assistant professor in the department of computer science and operations research at the University of Montreal.

I'll start with Mr. Krueger, followed by Mr. Aguirre and then Mr. Tegmark. All will have five minutes to address the committee.

Mr. Krueger, please go ahead.

David Krueger Assistant Professor, Department of Computer Science and Operations Research, Université de Montréal, As an Individual

Hi. Thanks for inviting me. My name is David Krueger. I'm a machine learning professor at the University of Montreal and Mila.

In 2012, I learned about deep learning from Geoff Hinton's online lectures, and I realized that this new approach to AI might produce superintelligent AI within a few decades. I went to study under Yoshua Bengio in Montreal.

At the time, I was already concerned that superintelligent AI could cause human extinction, and I wanted to know what the experts thought. I was hoping to find they had good reasons not to be concerned, but what I actually found was that nobody was really thinking about it. In fact, for most of my time in the field, the risk of human extinction from AI was considered a taboo topic, and researchers feared for their careers if they talked about it.

This unfortunately set critical public conversations about how to handle this risk back by years. Still, for over a decade, I've been talking about it every chance I get. I continue to be dismayed at the bad arguments people make to avoid confronting the problem. On the other hand, over time, I've witnessed more researchers become increasingly concerned.

In 2023, we had a watershed moment, and I initiated a statement that mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war. This statement was signed by many of the biggest names in AI, including Hinton, Bengio and hundreds of other AI researchers. Unfortunately, by then, following the release of ChatGPT in late 2022, AI companies had started to become extremely powerful, making it more difficult to regulate the technology.

The most important thing I want to stress today is that we're still in a position where the world is not yet taking the steps that are needed to mitigate the risk that AI will lead to human extinction.

What is this risk and where is it coming from?

AI companies are explicitly trying to build superintelligent AI systems. These are systems that would be much smarter than people across the board and that can autonomously do anything that humans can do, including robotics, and do it much better, cheaper, faster, etc. The basic goal is to render humans obsolete and take all of their jobs, but we don't know how to control superintelligent AI. In fact, we don't understand how existing systems work because they are grown—not built—using deep learning. Despite thousands of research papers over the past decade, this remains an unsolved research challenge, and we should not expect any amount of investment to solve this problem in the foreseeable future.

We also don't know how to do safety testing for these kinds of AI systems. The kinds of tests we have can show that an AI system is dangerous. They cannot show that it is safe. We should also not expect any amount of investment to solve this problem in the foreseeable future.

Instead of maintaining control or using rigorous safety practices, AI companies and researchers try to instill particular goals and values in AI systems so that the systems will do what the designers want, but we only know how to do this approximately, and even a small approximation error might lead a superintelligent AI to reallocate towards its own goals the resources we need to survive. None of these approaches that you'll often hear mentioned—interpretability, testing or alignment—is technically adequate. We don't know how to build superintelligent AI safely. The plan is basically to roll the dice.

Finally, if the companies building superintelligent AI do not immediately or quickly lose control of it, we should still expect the wholesale replacement of humans with AI throughout society. This means not just near total unemployment, but also political power being handed over to AI systems that make decisions too quickly for humans to meaningfully participate in.

I want to say a little about timelines, because I think we're in a state of acute crisis. If we don't do anything, I think we're about five years away from superintelligent AI. Many agree. We need to course correct immediately and work to prevent the development of superintelligent AI, and we need to do this internationally, which will take time. We cannot afford to wait for more evidence that the risk is imminent. There's already ample evidence that the level of risk from this course we're on is unacceptable.

In my home country of the United States, the main argument against stopping the race to build superintelligent AI these days is simply that it's inevitable: If we don't do it, then China will. This is false. One simple way to stop it would be to get rid of advanced AI computer chips and the factories that produce them. Fortunately, the supply chain for these chips is extremely concentrated, making this or other interventions to control and limit the means for producing superintelligent AI possible. There may be better ways that are less costly, but this cost, given the risk, is also well worth paying if necessary. AI is an unprecedented technology, and the future of humanity is at stake.

We're in a state of crisis. We need immediate action to slow or pause AI development internationally. This issue should be the number one foreign policy priority of every nation, including Canada.

Thank you.

3:40 p.m.

Conservative

The Chair Conservative John Brassard

Thank you, Mr. Krueger.

Mr. Aguirre, you have up to five minutes to address the committee. Go ahead, sir.

Anthony Aguirre Executive Director, Future of Life Institute

Mr. Chair and members of the committee, my name is Anthony Aguirre. I'm a professor of physics and the CEO of the Future of Life Institute, which is an international non-profit that seeks to steer transformative technology, especially advanced AI, in beneficial directions.

When we founded the Future of Life Institute 10 years ago, the field of AI policy didn't exist and the idea that AI could convincingly pass for human, prove new mathematical theorems, write complex computer code or ace nearly any human exam was a total fantasy. This was thought to be decades or even centuries away.

Today, AI can do all of these things and progress is showing no signs of slowing down. If this trend continues just as it is—in other words, if the same sort of progress we've seen from 2023 to now continues for the next couple of years—within one to five years, AI could well be able to do any cognitive task that a human can do. AI companies are aiming to create this artificial general intelligence specifically to replace rather than empower or aid human workers. That is the prize the companies are pursuing. That is what underlies the economics and the huge investment.

This is not labour displacement like previous technologies, but a wholesale drop-in replacement of tens of per cent of the labour force. I believe there's no viable plan in any country to address such a rapid and unprecedented upending of the human labour and income system, but this is not the biggest risk. AI could soon be able to do AI research and development by itself, rapidly improve itself and progress far beyond human capabilities into superintelligence. That is AI that competes not just with the best humans, but with humanity as a whole. This is, in fact, the stated goal of several of the AI companies. Multiple companies have explicitly said that they are building recursively self-improving AI.

I cannot emphasize enough how dangerous this is. Its developers would not be able to predict, understand or control for very long what such a system does. It would operate with inhuman speed, scale and sophistication toward goals that are fundamentally unknown to us. This is the reason that all of those eminent researchers and company heads signed the 2023 statement that AI presents an extinction risk to humanity. This is not a small risk. Many of the builders of the systems believe there are 10% to 20% Russian roulette odds. Some experts who are not building it put the numbers higher.

I understand that it is hard to take this in, given how big it feels and how unbelievably irresponsible it sounds, but this is what is actually happening.

The good news, however, is that this is not inevitable. The path laid out by a few giant U.S. AI companies is not the only possible path. There is still an opportunity to switch to a better path—one that develops powerful and trustworthy AI tools that empower and complement people rather than replace them, keeping humans in control. That is, we can develop systems designed from the ground up to be reliable and controllable and that have a particular scope and purpose, such as detecting cancer, predicting how proteins fold or doing advanced mathematics. Tool AI of this type would still enable medical and scientific breakthroughs, huge productivity boosts and products that allow people to do things they never could. We would just need to give up the poorly chosen goal of replicating and then exceeding all human capability in autonomous AI replacements.

At the Future of Life Institute, for these reasons we launched a statement calling for a prohibition of the development of superintelligence that is not to be lifted before there is, first, broad scientific consensus that it will be done safely and controllably, and second, strong public buy-in. This was signed by hundreds of experts from academia, employees of all the world's leading AI companies, politicians, religious leaders and many more.

I believe that Canada is well positioned to help shift tracks to this more responsible direction. Among other things, I recommend that Canada do the following.

First, support initiatives to create pro-human tool AI that aims to empower rather than replace humans.

Second, affirm that all AI systems must remain under meaningful human control. Strengthen and clarify liability laws so that responsibility for actions taken by AI systems rests where it should, which is with users, for systems they meaningfully control, and jointly with developers and deployers, if control is inadequate or absent. It's never with AI systems themselves.

Third, prohibit the development of superintelligence until and unless it can be shown to be safe and controllable and there is wide public buy-in.

Thank you.

3:45 p.m.

Conservative

The Chair Conservative John Brassard

Thank you, Mr. Aguirre.

Mr. Tegmark, we're going to you next. You have up to five minutes to address the committee. Go ahead.

Max Tegmark Professor, Future of Life Institute

Thank you, Mr. Chair and members of the committee. Bonjour.

I'm Max Tegmark, a professor at the Massachusetts Institute of Technology who has been doing AI research for many years. I'm also the chair and founder of the Future of Life Institute, which Anthony mentioned and which is the oldest and largest AI watchdog in the world.

Just the other week I had the great honour to listen to your Prime Minister, Mark Carney, while I was in Davos. I was very inspired by his speech. I'm going to follow his lead here, both to speak truth to power like Professor Aguirre and Professor Krueger did and, in particular, to speak very honestly about the world as it is.

The current situation with AI is truly insane. In the U.S., and also in Canada right now, there is less regulation on AI than on sandwiches. That means if I opened a café in Montreal, Toronto or Ottawa, and the health inspector came in and said, “Hey, you have 52 rats in your kitchen. I'm not going to allow you to sell any sandwiches at this point,” I could simply come back and say, “Well, don't worry. I'm actually not going to sell any sandwiches. I'm just going to sell an AI girlfriend for 12-year-olds. I know there have been some issues with child suicide and so on from chatbots, but I have a better feeling about mine.” The guy from the government would have to say, “Okay, fine,” because it's legal.

I could also say that I'm not going to sell any sandwiches, that I'm just going to release the superintelligence you've heard about from Professor Krueger and Professor Aguirre, which companies are now closer to figuring out how to build than they are to figuring out how to control. Again, the inspector would have to say, “Okay, go ahead. It's completely legal.”

In fact, there is a gentleman in Canada who has organized a network that has tracked down over 300 victims of chatbot-related harm, including suicide among children, psychosis and so on. It's absolutely absurd that selling these sorts of products with no predeployment testing is legal. What can be done about this?

Canada has two superpowers at the moment. You have an unusually bold Prime Minister and leadership, and you also happen to have some of the best AI research in the world, in places like Montreal, Waterloo, Toronto and many other universities.

The first way to push back against this crazy stuff that's happening—manifested most recently, in the last week, by Moltbook, where you have over a million AI agents autonomously writing messages to each other about how they should maybe get rid of humans, and talking to each other without humans being able to see what they do—is to simply start treating AI companies like you would treat any other company in any other industry in Canada, by defining safety standards. Before someone could roll out a new chatbot or AI companion for kids in Canada, they would have to undergo some quick clinical trials to make sure that the benefits actually outweigh the harms, in the way a pharma company would have to make sure that a new pill for kids doesn't increase suicidal ideation.

If you do this, the first thing that will happen is the American companies will squeal and say, “Then we're going to have to completely leave Canada,” just like OpenAI threatened to leave the European Union if the EU AI Act passed. You pass your law anyway, and they're going to stay in Canada, just like OpenAI is still in the European Union. That also protects you against all the loss-of-control risks you heard about from Professor Krueger and Professor Aguirre, because your law will also not let people release products if they can't prove those products won't make bioweapons for terrorists or overthrow the Canadian government. As I said, right now nobody has a clue how to make any kind of guarantee of that sort about the AI products they make.

What's going to happen instead is there will be a golden age of AI in Canada, with companies releasing tools that can be controlled and that don't cause kids to commit suicide or do any of the other bad stuff, just like the pharma industry in Canada is very healthy and productive by using safety standards.

3:50 p.m.

Conservative

The Chair Conservative John Brassard

Mr. Tegmark—

3:50 p.m.

Professor, Future of Life Institute

Max Tegmark

This can be solved.

Thank you so much.

3:50 p.m.

Conservative

The Chair Conservative John Brassard

Thank you, Mr. Tegmark. We're just a little over time and we want to get right to the questions.

We'll start with Mr. Barrett from the Conservative Party.

Go ahead, Mr. Barrett. You have six minutes.

3:50 p.m.

Conservative

Michael Barrett Conservative Leeds—Grenville—Thousand Islands—Rideau Lakes, ON

Mr. Aguirre, thanks for joining us, sir. My questions will be based on your opening statement.

I want to juxtapose your framing of the situation and your recommendations against the current situation in Canada, where industry—and government—is seeking the acceleration of AI commercialization. You've warned that unsafe systems pose extreme risks. Could you briefly highlight what you think are minimum, non-negotiable safety guardrails that Canada needs to put in place before allowing the deployment of advanced AI models?

3:50 p.m.

Executive Director, Future of Life Institute

Anthony Aguirre

First of all, it's a talking point of the industry that there's some kind of tension between having safety or reliability or trustworthiness in our AI systems and having them roll out well into the economy and lead to high productivity. I think these are actually the same thing. The primary barrier to better adoption of AI in the economy and boosts of productivity is that people simply don't trust them. They don't feel they can rely on the AI systems. They don't understand how they're operating.

This is a product of how these systems have been developed. They create one system to try to do everything and to be a full human replacement rather than build specific tools for specific purposes. I believe if we take a different tack of building purpose-driven AI tools for things like scientific research, for things like mathematics, or even for things like helping you keep your calendar, and they're actually tested for being able to do those things well and reliably and in a trustworthy and safe way, that will be a huge economic boost relative to the current direction in which things are going, which is more focused on the wholesale replacement of human labour.

The crucial thing is that there is an external evaluation system that can test, by an independent agency or authority, that an AI system is safe and reliable before it goes to market, just as Max was discussing.

3:55 p.m.

Conservative

Michael Barrett Conservative Leeds—Grenville—Thousand Islands—Rideau Lakes, ON

Right. The products that we're seeing today have been driven by demand. I would expect there's the prospect of an ROI that satisfies the expenses of the research and development process. That's what's driving the commercialization of the products that we currently see and driving the discussion about how it can go farther. If we were to talk about a pause or changing the direction that's being taken, what lever would you propose can be pulled to initiate that?

It would seem that right now, Mr. Aguirre, the companies are looking to provide products and, through those products, solutions to consumers, those being individuals and businesses who are, frankly, going to make them money.

3:55 p.m.

Conservative

The Chair Conservative John Brassard

Mr. Barrett, just hang on a second. I've stopped your time.

Mr. Tegmark, I do see your hand up. I'm assuming it's because you have something you want to add. The questions are generally directed to the person by the member of the committee. Perhaps when it's your time to speak, you can answer in regard to the point you want to make, or somebody can ask you directly. Mr. Barrett's question is for Mr. Aguirre.

Mr. Barrett, your time has started again.

3:55 p.m.

Executive Director, Future of Life Institute

Anthony Aguirre

Directing AI products toward actual consumer needs is exactly what I would like to see, and exactly what I think we will see if we focus on tools that are actually built for people, to help people.

What we're seeing is a mixture of that, in some cases, with another dynamic—the same dynamic that drove the social media rollout—of trying to capture as large a part of the market share as possible, as quickly as possible. For example, I'm an educator—all three of us are—and, overnight, it happened that I could not assign essays to students, or even problem sets in physics or mathematics because suddenly there was an AI system that would simply do the students' homework for them. The students feel essentially that they have to use AI to write their essays or do their problem sets because all of the other students are using those things. This is not something that was demanded by the educational system. This is something that was pushed into the educational system, to the detriment of our students.

3:55 p.m.

Conservative

Michael Barrett Conservative Leeds—Grenville—Thousand Islands—Rideau Lakes, ON

Pardon the interruption, but I have limited time. I do appreciate your answering my question.

How would you propose that the solution you have, the direction you propose, comes to pass? Is it a question of enforceability? Is it a question of leadership in government? How do you envision that coming to pass? As you said, they're looking to capture as much market share as they can, as fast as they can.

3:55 p.m.

Conservative

The Chair Conservative John Brassard

I'm going to ask for a response within 30 seconds or less, please.

3:55 p.m.

Executive Director, Future of Life Institute

Anthony Aguirre

Pharma companies also want to capture as much profit as they can, and they still have to prove that their products are safe and effective before they go to market. The same thing should be true of AI. Specify what your AI product is supposed to do; explain how it does that safely and reliably to an external authority; get it approved, and put it on the market. I think that will lead to much better AI products and much better adoption, and it will change the direction from all-purpose systems that are meant to replace people to useful tools that are meant to help them.

3:55 p.m.

Conservative

The Chair Conservative John Brassard

Thank you, Mr. Aguirre.

Ms. Lapointe from the Liberal Party, you have the floor for six minutes.

Linda Lapointe Liberal Rivière-des-Mille-Îles, QC

Thank you very much, Mr. Chair.

Good afternoon, and welcome. I appreciate your coming here today.

Here is something that strikes me today. A number of witnesses, including researchers like you, have come here to talk to us about artificial intelligence, but I have to say that we have heard from only one woman. What are women’s perspectives on this topic and do they differ from those shared by the male researchers who have appeared before this committee, who have told us there are big problems? Earlier, you spoke about superintelligence and the fact that we don’t know how to control it. It will impact the lives of everyone, including women, men and children. Do women researchers have a different point of view from what you have told us about the risks and dangers everyone is facing?

I’ll ask Mr. Krueger to go first, and the other witnesses can take turns to answer.

4 p.m.

Assistant Professor, Department of Computer Science and Operations Research, Université de Montréal, As an Individual

David Krueger

I think this mostly reflects the under-representation of women in AI in general. There are a number of women who share these concerns, whom you could also speak to, including Tegan Maharaj, who is a professor at HEC Montréal, Ajeya Cotra, Katja Grace and others. There are different perspectives in the field in general.

As I mentioned, I think that over time the perspective I have, that this is an urgent existential risk, has become more popular, and that reflects a growing awareness of the issue and also the rate of progress we've seen in AI, which has exceeded almost everybody's expectations.

4 p.m.

Liberal

Linda Lapointe Liberal Rivière-des-Mille-Îles, QC

Okay, thank you.

I have another question for you, Mr. Krueger.

What would be the priority public investment to reduce asymmetries between governments and the big artificial intelligence companies?

4 p.m.

Assistant Professor, Department of Computer Science and Operations Research, Université de Montréal, As an Individual

David Krueger

If I understand the question, it's how we would get government to a place where it has a similar level of technical competence with AI as compared to the AI companies. I think that's very difficult. The first step is probably to try to hold the companies more accountable and to regulate them more seriously.

One reason I think they have so much more talent is they're able to pay much higher salaries because they're getting a lot of investment based on this premise that they will build AGI and superintelligence and then, again, take everybody's job and make trillions of dollars from that. Right now, there's a huge imbalance in terms of the talent, and that's not something that can easily be corrected, frankly. It would be a pretty lengthy process to fully address that.

On the other hand, one of the main issues right now is a lack of transparency into AI companies. There's very little oversight in terms of what they're doing. It used to be the case maybe five years ago that these companies would publish most of the research they did, so there was a shared understanding in the research community of how the most advanced systems work. That also helped audit the systems for safety more effectively. This has stopped being the case. Companies really don't release very much information at all about their systems, and there's no independent oversight that is mandated.

We need programs for independent experts to be able to look at the details of the AI systems. That doesn't just include evaluating or testing the model, but also includes having the necessary information about how the model was developed and what data was used, which is often copyrighted data. There have been a number of large-scale lawsuits about that. Also, what training methods were used? Was it trained in a way that is known to cause addiction, dependency and other psychological issues? We know this is something that has happened repeatedly in the tech industry with social media and with AI. Finally, how is the system being deployed and with what additional safeguards and guardrails? What is the company doing to monitor how it's being used? Companies have a strong incentive to understand how their products are being used in order to make more money from them.

That's something the government also needs to—

4 p.m.

Liberal

Linda Lapointe Liberal Rivière-des-Mille-Îles, QC

Thank you, Mr. Krueger. I’m going to have to cut you off there. I also want to ask Mr. Aguirre a question, and I only have six minutes.

Mr. Aguirre, can we compare the risks associated with advanced artificial intelligence with nuclear weapons and climate risks? There has been some talk of that comparison. Is this a fundamentally different issue?

4 p.m.

Conservative

The Chair Conservative John Brassard

Answer in 45 seconds, please.

4 p.m.

Executive Director, Future of Life Institute

Anthony Aguirre

With regard to the similarities and differences, I think nuclear weapons pose a particular risk, which is when they explode. The analog to that is if we build superintelligence and lose control of it.

AI is posing significant risks right now and is doing significant harm right now because it is so under-regulated and undergoes essentially no safety testing. As Max mentioned, there's strong evidence that there are large effects on suicidal ideation and media addiction in our young people. That is harm that is being done right now. We clearly see the beginnings of effects on labour with replacing humans with AI systems. That is the design; that is the goal to do at a large scale.

I think we are going to see that the risk of AI is it's replacing humans in roles that they should not be replaced in by a machine—as therapists, as companions, as romantic companions, as workers, etc. That scales all the way up to replacing humans as decision-makers and even replacing humanity altogether in the longer term.

I think the stakes are comparably big, but the risk from AI can be more diffuse.