Evidence of meeting #25 for Access to Information, Privacy and Ethics in the 45th Parliament, 1st session. (The original version is on Parliament’s site, as are the minutes.)
A recording is available from Parliament.
Conservative
The Chair Conservative John Brassard
I'm sorry, Mr. Aguirre. We're over time. Thank you, sir.
I now give the floor to Mr. Thériault from the Bloc Québécois for six minutes.
Bloc
Luc Thériault Bloc Montcalm, QC
Thank you, Mr. Chair.
Thank you very much, Mr. Aguirre, Mr. Krueger and Mr. Tegmark, for your enlightening and sober presentations, which give a sense of meaning and purpose to the ethical dimension of this committee's work.
We’re starting a fundamental reflection here. You have spoken about building scientific consensus. I think that consensus is becoming clearer with each committee sitting.
I’ll start with Mr. Aguirre.
In your article, which calls for keeping the future human, you talk about how computing power can easily be quantified, accounted for and monitored with little ambiguity once good rules are in place.
Mr. Krueger, you have described advanced artificial intelligence as an immense project that is only made possible through deliberate effort.
I’d like to hear your two points of view on the technical feasibility of a verification scheme.
Mr. Tegmark, you can chime in afterwards.
Go ahead, Mr. Aguirre.
Executive Director, Future of Life Institute
I'm happy to start.
Yes, I think it's absolutely feasible. As David suggested, AI is only made possible through huge amounts of computation done by very specialized chips. These chips are built essentially by one company, using machines built by one company and designs built by a handful of companies.
We've seen a lot of discussion of this compute capability in terms of export restrictions and controls, but it isn't just about where the chips go; it's also about the controls that can be built into them. These chips have hardware-level security capabilities that enable verification of their level and type of use. Just as your phone can be remotely bricked if someone steals it, AI hardware can and should be configured so that it can be shut down at the hardware level. I think the more powerful the AI, the more it needs a reliable off-switch.
We can use the capabilities of the hardware, the base layer at which these AI systems are operating, both to add a layer of control and to add a layer of verification if we institute red lines that should not be crossed in their development and deployment.
Assistant Professor, Department of Computer Science and Operations Research, Université de Montréal, As an Individual
I'll jump in. I'll agree with what Anthony said.
It's really important to emphasize just how much investment is going into this. In the past, people used to think there was no real way to regulate AI and make sure nobody was doing something dangerous with it because it was software and anybody could do it on their laptop in their garage. That's not at all the case right now.
Right now, these systems cost hundreds of millions to billions of dollars. This isn't the sort of thing that is publicly disclosed these days, but there are huge investments that continue to increase, and the hardware is extremely specialized, as Anthony mentioned. This is the main point of intervention for international regulation of AI, which, as I mentioned, is absolutely critical.
The only thing I would add is that it's very important to think about how to make such a scheme as robust as possible. Verification might look something like a white list of types of AI systems that are allowed to be run on the computer chips. It also might look like location tracking so that we know where the chips are in case we need to recall them.
In fact, we should stop developing more powerful AI systems immediately. The most robust way of doing that would be to actually stop building the chips, as opposed to trying to set up a more complicated and less robust system of technical verification. That's my personal view: To give us some breathing room, we should stop building the chips and stop building and maintaining the factories that produce them.
As I and the other experts mentioned, we have potentially a few years here. This is not a situation in which we have time to try to find the perfect solution. We need to immediately implement a solution that will slow or pause the incredible rate of progress towards superintelligence.
Professor, Future of Life Institute
We heard some detailed technical answers to this important question about the practical steps that can be taken. I just want to add that we're not, as was mentioned earlier, talking about a pause here on AI. I'm not talking about a pause on AI at all. I'm just talking about a pause on AI girlfriends for 12-year-olds, AI that can teach terrorists to make bioweapons and other products that are clearly more harmful than beneficial for Canadians.
This is no different from what the health products and food branch of Health Canada does all the time for medicines. We don't say that Canada has a pause on medicines just because Health Canada does not allow pharma products that haven't gone through clinical trials to be released. We can simply do for AI exactly the same thing that we've done for medicines as a very first step in the right direction.
Conservative
Bloc
Luc Thériault Bloc Montcalm, QC
All the same, a distinction should be made between specialized artificial intelligence and artificial superintelligence. Some things, such as the parental control PIN, are easier to control than the relentless race to develop this superintelligence.
Conservative
The Chair Conservative John Brassard
Thank you.
Mr. Hardy from the Conservative Party has the floor for five minutes.
February 2nd, 2026 / 4:10 p.m.
Conservative
Gabriel Hardy Conservative Montmorency—Charlevoix, QC
Thank you, witnesses, for joining us.
I’ll start with Mr. Tegmark, who has spoken only briefly so far.
Mr. Tegmark, you often speak about the potential for artificial intelligence to be transformative and goals that have potential benefits for humanity.
In your opinion, is the use of artificial intelligence in the medical field among the goals that we should focus on in the near future? To what extent can artificial intelligence change and improve the medical field?
I would even venture to ask you if governments should join this venture to help society find a cure for chronic and degenerative diseases.
I’d like you to speak to the medical field and artificial intelligence.
Professor, Future of Life Institute
Thank you. This is something very close to my heart.
AI has enormous potential for improving medical treatments, curing cancer and so on. This is not in the future; it's already happening. Even though all the AI companies talk a lot about curing cancer, there's actually only one company that has made real progress, and it's Google DeepMind. It released AlphaFold, which is really helping drug discovery, and got the Nobel Prize for it.
Cancer has already gone from killing maybe 80% of the people for some types to 20%, so we're sort of 80% of the way towards curing cancer. The key risk I worry about is simply that we squander all these incredible benefits by letting AI-based pharma remain completely unregulated, which can cause a backlash. Many of you remember there was a product called thalidomide that was sold in Canada and America to pregnant women with morning nausea. Because pharma was completely unregulated back then, this caused over 100,000 babies in North America to be born without arms or legs, which in turn is why the Food and Drug Administration was created.
Yes, let companies innovate, amazingly, and cure diseases with AI, but let's treat them the same way we treat pharma companies and make sure they don't get their products released until they have been properly tested.
Conservative
Gabriel Hardy Conservative Montmorency—Charlevoix, QC
Basically, what you’re saying is that artificial intelligence can actually contribute to global efforts to address many chronic and degenerative diseases, but the problem is not so much the speed of progress as letting strictly profit-driven companies commercialize products that are not yet fit for purpose.
However, I think artificial intelligence has potential to drive major breakthroughs in ultra-specialized treatments. In your research or across the overall artificial intelligence market, have you come across opportunities for more specialized treatments that are tailored to each individual rather than relying on more general approaches?
Professor, Future of Life Institute
Absolutely. There's huge potential in making customized treatments that sequence the patient's DNA, for example, for cancer, figure out which particular mutations they have in their cancer cells and develop a custom treatment just for them. There are absolutely incredible opportunities there.
I'm a firm believer in innovation, including private sector innovation, and the key to getting this is to create the right incentives. In pharma, in aviation, and even in restaurants, industry innovates to produce safe products where the good outweighs the harm, because those are the ones they're allowed to sell. If we can quickly create the correct incentives for the AI industry, then these companies that are currently doing, in my opinion, very reckless things will quickly shift to a race to the top, innovating with safe products. I don't blame the companies. I blame the failure to provide the right incentives to them.
Conservative
Gabriel Hardy Conservative Montmorency—Charlevoix, QC
Thank you very much.
Mr. Krueger, for some time now, we have heard that at some point, private businesses may consider removing humans from the equation and operating with fewer human employees, and that they will replace them with artificial intelligence. I think this is not the first time we’re hearing this.
Is that happening already? Are companies conducting studies to determine performance? Generally, are you seeing big tech saying they did well to replace humans with artificial intelligence, or when they do their math, do they realize that overseeing, verifying and monitoring the much-touted emerging artificial intelligence robots is just as costly as having humans who are doing a good job?
Conservative
The Chair Conservative John Brassard
That's the end of the time, Monsieur Hardy.
I’m sorry, Mr. Hardy. We’ll continue with Ms. Church for five minutes, because I suspect the answer to your question will be fairly lengthy.
Ms. Church, go ahead.
Liberal
Leslie Church Liberal Toronto—St. Paul's, ON
Thank you, Mr. Chair.
That’s a great question, Mr. Hardy.
My question, I think, is probably for Mr. Tegmark.
You've spoken quite a bit about binding safety standards, and I appreciate the comparison to how we approach drug and pharmaceutical regulation.
As parliamentarians and lawmakers, where do you see us starting on this? It's one thing to look at some of the outcomes of the uses of chatbots and AI, particularly where they cross into child safety. I think those are certainly some very key and obvious areas where we need to be looking at how to ensure safety.
How else do we, in some ways, capture the breadth of how AI works across so many different fields and touches many industries and many potentially problematic areas? How do we capture that in a regulatory model that would be effective for us to address some of the harms you're raising?
Professor, Future of Life Institute
That's a great question.
The simple way to view this is that, across all of the diverse applications of AI, we simply take the same approach we have in all other powerful industries: it's the company's job to innovate and to demonstrate to independent, government-appointed experts that the harms are outweighed by the benefits.
I would start rather politically with child safety, because that is so incredibly politically salient and winnable right now. In America, we have about 95% of Republicans and Democrats agreeing that this has to happen. I call it the Bernie to Bannon coalition, and I think we're likely to see some legislation this year here in the U.S.
Once this precedent is set that we're going to treat AI like any other industry, we can add to the list of safety standards not only that they must not greatly enhance suicide risk in kids but also national security things. For example, you can't sell things if they can teach terrorists to make bioweapons. You can't release things if they could overthrow the government, as we heard from Professor Aguirre and Professor Krueger. It flows naturally from this simple approach of just treating AI companies like other companies.
I want to add one more thing. If this business about loss of control sounds strange, it's a very obvious idea that goes back to Alan Turing in 1951, that, if you build a bunch of robots that are vastly smarter than all humans, then of course they can build robot factories and make new robots. This is very much what companies are trying to do now.
Also, because they can make more robots, they check off the definition of being a species. Go down to your nearby zoo and ask yourself who is in the cages right now. Which species is it? Is it the humans? No, it's not. Why not? It's because we are the smartest species on earth. What we're basically saying is, let's keep it that way. Let's not let companies sell something—
Liberal
Leslie Church Liberal Toronto—St. Paul's, ON
Let me jump in here. I think you're probably hearing a lot of interest from us in terms of how we get our arms around this issue.
Let me turn to Mr. Aguirre for a moment, because both of you were involved in the Future of Life Institute.
I'm very interested in the concept of tool AI. I hadn't heard that expression before. I think that's interesting in terms of thinking about how we bound some of these models into the specific areas they're working in and how we limit their breadth.
There's one thing I'd like to ask you in terms of your knowledge of other organizations or your own that are working in the space. Is anyone embarking on something like a model code, something that countries around the world could look at as we go down this path of trying to very quickly regulate or establish safety parameters in a very fast-moving sector? How are organizations like yours helping us parliamentarians and legislators around the globe to move in a direction that captures how we should be approaching this issue?
Executive Director, Future of Life Institute
That's a great question. There's a sort of frustrating chicken-and-egg problem in that it's hard to build up the governance capacity in terms of the evaluations, the certifications and the whole infrastructure that is needed to evaluate and test these models when there's no requirement to do so. There's no customer without some sort of regulation that requires those things.
On the other hand, when you're thinking about regulations, it feels very daunting that there are all of these different use cases for these systems, and you have to think about how you are going to regulate all these things.
As Max has pointed out and you suggested, I think it's critical to identify some first steps to take. Maybe it's around child safety testing or just requirements that, when you produce an AI system along this sort of tool orientation, you say what it's for and then you can start to assess whether that AI system is fit for that purpose.
Conservative
The Chair Conservative John Brassard
Thank you, sir. We were over time on that one.
Mr. Thériault, you have the floor for five minutes.
Bloc
Luc Thériault Bloc Montcalm, QC
Thank you, Mr. Chair.
I’m going to quote you, Mr. Aguirre: “The leaders of DeepMind, OpenAI, and Anthropic…have all literally signed a statement that advanced AI poses an extinction risk to humanity.”
You say that this is unprecedented, given that they are building these systems “under commercial incentives and near-zero government oversight”.
What should we make of companies that issue warnings about their own products, but continue to develop them anyway?
I’d also like to hear from you, Mr. Krueger.
Executive Director, Future of Life Institute
It's pretty astonishing. We've never seen an industry both developing something and publicly admitting how very dangerous that thing is. This is a product of how the industry has peculiarly developed and, in particular, the race condition that these companies find themselves in.
A couple of weeks ago in Davos, we heard from two of the heads of AI companies that they would like to slow down. They feel worried about what they're doing, but they feel they can't because they're in a race. If they hit the brakes, everyone else is going to keep their foot on the accelerator, and they'll lose out. All of these companies feel they have to build this thing because somebody is going to do it, and they feel that if somebody is going to do it, it might as well be them.
This is a crazy situation for us to be in, just like the classic arms race that ended up with 70,000 nuclear warheads, an overkill that nobody wanted. That's where we ended up because there was an arms race. We're in a similar situation here, where it takes an outside actor, and it really has to be the government, to call a halt to the race. The companies are not going to be able to do it by themselves, even if they want to.
Assistant Professor, Department of Computer Science and Operations Research, Université de Montréal, As an Individual
I agree with this. What's happening here is that the idea of human extinction risk has been dismissed as some sort of corporate PR stunt. That is demonstrably false. As somebody who has been in this field for much longer than some of these companies have even existed, in the case of Anthropic or OpenAI, I can say these concerns go back decades.
It's commendable that the CEOs have acknowledged these risks. At the same time, maybe they sometimes exaggerate things in order to hype up their product, but I think what's basically going on is that they are desperate: they see they are a few years away from building something they are scared of, and they want something to step in and defuse the race, if possible.
However, they don't believe that's possible, generally, and I think that's the mistake. We need to understand it is possible. That's why I mentioned the computer chips as a key point of intervention. If none of these companies and nobody globally can get access to these giant piles of chips and the energy to build these data centres, then we won't see this race continuing at anything like the current pace.
I will also add that I think the reason they are doing this is there's this element of “If we don't do it, somebody else will.” There's the desire to make tons of money. For some people, they also want to usher in this new species Max was talking about. They want to see humanity replaced by AI, which they view as the natural next step in evolution. There are many public comments to this effect from various people within the industry, and this is an incredibly antisocial attitude.
Bloc
Luc Thériault Bloc Montcalm, QC
I’d like to wrap up this discussion.
Mr. Aguirre, you have called for an international agreement among the U.S., China and other capable countries, with a solid verification mechanism to ensure that parties and rivals don’t defect.
Mr. Krueger, you have said that ending the race to build superintelligence is reasonable and possible and that this is a moral imperative. I would agree with that. What can this committee do to begin negotiations or to reach such an agreement?
This is an issue that continues to come up. You’re not the only ones who have said that. What is being done?
My questions are for Mr. Aguirre and Mr. Krueger.