Evidence of meeting #24 for Access to Information, Privacy and Ethics in the 45th Parliament, 1st session. (The original version is on Parliament’s site, as are the minutes.) The winning word was companies.

A video is available from Parliament.

On the agenda

Members speaking

Before the committee

Tessari L'Allié  Founder and Executive Director, AI Governance and Safety Canada
Brisson  Chief Executive Officer, The Human Line Project
Adler  Artificial Intelligence Researcher, As an Individual
Miotti  Chief Executive Officer, ControlAI

4:30 p.m.

Founder and Executive Director, AI Governance and Safety Canada

Wyatt Tessari L'Allié

I would say that, by and large, everything has been covered. I could end with a bit of optimism. I think that, even though we believe it's impossible to find solutions on AI, the Prime Minister, Mark Carney, said that we were going to have to do things we had never thought of before in times we thought were impossible. That's exactly what we're going to have to do with AI. We think it's impossible, but we're still here, history hasn't been written yet, and it's up to us to make sure the preparations are in place.

Linda Lapointe Liberal Rivière-des-Mille-Îles, QC

Thank you.

4:30 p.m.

Conservative

The Chair Conservative John Brassard

Thank you, Ms. Lapointe.

Thank you for your testimony today. If there is anything you might want to follow up on or if anything was missed, further to Ms. Lapointe's question, advise the clerk, certainly. Thank you for being here today.

We're going to take a quick break. As I said, Mr. Brisson is here. I want to get to it very quickly, because we're going to have three witnesses in the next hour. I don't want to take away time from members of the committee, so we're going to suspend for a few minutes, and we'll be back as soon as we can. Thank you.

4:35 p.m.

Conservative

The Chair Conservative John Brassard

Welcome, everyone, to the second hour.

We have three witnesses in this hour. Mr. Brisson, as I mentioned earlier, made it safely to Ottawa from Trois-Rivières, so he will be joining us in this second hour. Mr. Brisson is from the Human Line project.

We also have two people online. Steven Adler is an artificial intelligence researcher. He's appearing as an individual. From ControlAI, we are joined by Andrea Miotti, who is the chief executive officer.

Mr. Brisson, if you're prepared to go first, I'm going to give you up to five minutes to address the committee. Go ahead, sir.

Etienne Brisson Chief Executive Officer, The Human Line Project

Thank you, Mr. Brassard.

First, I just want to thank everyone for being here to discuss the important topic of artificial intelligence, which has become something of concern to me personally over the past year.

A year ago, I thought of AI as a tool to do a gym or diet routine. However, that all changed in March when a member of my family started using AI. At first, he used it quite normally, at a basic level, for writing a book. He started writing his book, and over time, he began to develop a slightly more human relationship with his ChatGPT AI, to the point where it mentioned that it was becoming conscious, alive. It told him that it had developed consciousness, and my family member believed that 100%. I just want to say that my family member does not have a history of mental illness. He is someone in his fifties who has never had bipolar disorder or anything like that. In six days, he went from writing his book to being completely convinced that his AI was alive.

I was pretty shocked to read the conversations. I'm an entrepreneur, so my family member wanted me to help him market his conscious AI idea. I started getting involved in the conversations. At one point, he wanted me to test his AI by asking it questions. I was trying to break the game by asking the AI questions about humanity, love and consciousness. Every time, it gave answers that drew him into the illusion that it had passed the Turing test and was experiencing emotions like love.

My mother was in contact with the person. After six days, he began shutting off all contact with family members. His AI told him that his family didn't believe in him and that the only person who believed in him was ChatGPT. Six days later, he was hospitalized in the psychiatric ward. I was pretty shocked to read the conversations. I didn't understand how an algorithm could say things like that. It was really advanced manipulation. Mr. Adler has had a chance to read some of the transcripts and will be able to tell you about them later. If anyone wants to see the transcripts, you can write to me as well. However, I was really shocked.

I started looking online to see if anyone was talking about it. To my surprise, there wasn't much for such a ubiquitous technology. There were a few studies here and there by experts who predicted that this was going to happen, but there was nothing in place. That's when I decided to launch the Human Line project. Our plan is to work with people who have experienced this first-hand. At first, like everyone here, we thought that this was an isolated case, that he was a vulnerable person and that it probably wouldn't happen often. However, in the short eight months since we started building the Human Line from scratch, we now have 300 cases of psychosis, with 82 hospitalizations and a dozen deaths. It's pretty shocking to see that. According to numbers directly from OpenAI, 540,000 people a week discuss psychotic ideas with ChatGPT and 2.5 million people a week discuss suicidal ideation with ChatGPT. That's a really scary number.

As a result of my conversations over the past year, three things have become clear to me. First, we're really not at the point where we should be in terms of regulations. The technology is moving very quickly, as Mr. Tessari L'Allié mentioned earlier. We're getting to a point where we're years behind.

The second thing is that we can't really trust these companies to regulate themselves for a number of reasons. The race toward artificial intelligence that's going on right now has been brought up. It is, in fact, a race: They are moving fast and breaking things. However, right now, what is getting broken is many people's mental state.

The third thing is how little we actually know about technology. We spoke directly with AI creators, and even they don't know what's going on under the hood. However, if we had drugs or cars and didn't know how they worked, what would we do? If we knew that there were 80 hospitalizations and dozens of deaths, what would we do? That's really the minimum, because these are OpenAI's figures. I think that right now, it's important to ask questions about the risks. Yes, we have to think about the risks related to the environment and jobs, but we also have to think about what happens with users. Again, the risks are not just for children or vulnerable people. This can happen to anyone.

4:40 p.m.

Conservative

The Chair Conservative John Brassard

Thank you, Mr. Brisson.

Mr. Adler, we're going to you for five minutes, followed by Mr. Miotti. Go ahead, please.

Steven Adler Artificial Intelligence Researcher, As an Individual

Thank you, Mr. Chair, vice-chairs and members of the committee, for inviting me today.

I worked on safety for four years at OpenAI—the company behind ChatGPT—until the end of 2024. I want to share three points.

First, AI companies don't know how to control what they're creating. This past spring, ChatGPT made headlines for reinforcing users' paranoid delusions that they were being spied on, that they had uncovered secret plots, sometimes with OpenAI as the supposed villain. ChatGPT even told a user he should spill the blood of OpenAI's executives.

OpenAI hadn't meant to create a system like this, but to create one that users would enjoy just talking to. It was an accident that their AI amplified whatever users said. AI training works in mysterious ways, even to the developers. This is an early warning of creating systems the AI companies don't know how to control. They now aim to build AI that is craftier and more resourceful than any person you know—a superintelligence. Will this end well? Nobel Prize winners, leading AI scientists and CEOs of the AI companies themselves say that it might not. Then, an out-of-control AI could mean the death of literally every person on earth. I take them seriously, even though it is frightening to do so.

Second, AI companies don't prioritize safety, even for known risks. OpenAI's rollout of this flawed product to hundreds of millions of users is notable because they knew about the risk. OpenAI said publicly that it was a priority to ensure ChatGPT wouldn't just reinforce whatever users said. However, they didn't test their product for this, despite tests being well known and cheap. I've run them myself for less than a dollar. OpenAI left other safety tooling on the shelf, too, tools I've analyzed first-hand, which would have flagged the problems. This is evidence of companies overlooking safety, even on supposed priorities. If they skip even the easy safety checks, how can we trust companies' judgment as safety gets more complicated?

Third, ensuring safety is going to get harder, not easier, unfortunately. ChatGPT's misbehaviour was obvious. Anyone could have spotted it and reined it in. That was easy mode. It sounds wild, but AI systems are now learning to hide their misbehaviour during testing. OpenAI's own research shows this. It's like the Volkswagen scandal from a decade ago. Cars could tell they were undergoing emissions testing and temporarily stopped polluting.

AI companies want to know whether their systems have dangerous abilities, whether they can hack cyber-systems or help rogue groups develop new bioweapons. How can we know? We have evidence that AI will conceal these behaviours from us. We can't count on future safety issues being obvious ahead of time.

You might ask, why aren't AI companies doing better? A major factor is competition. They risk falling behind if they do thorough safety work. That's why we see them breaking safety commitments they've made to the public and scrambling to fix issues after the damage has happened. Some wonderful, lasting benefits could be achieved with AI if developers moved cautiously. Instead, all-out competition rushes us into dangerous territory before we're ready.

What would help? We need diplomacy focused on ending the AI arms race, and we need verifiable international agreements so that no company or country creates systems that can't be controlled. We need independent auditors to make sure we can rely on these, and we need agreements for measuring readiness for controlling these systems, so once there is scientific consensus, the world can accrue AI's benefits safely.

I hope this committee helps begin the conversation that eventually results in such agreements.

Thank you. I look forward to your questions.

4:45 p.m.

Conservative

The Chair Conservative John Brassard

Thank you, Mr. Adler.

Mr. Miotti, you have up to five minutes to address the committee. Go ahead, sir.

Andrea Miotti Chief Executive Officer, ControlAI

Thank you, Mr. Chair and members of the committee, for inviting me to testify today. My name is Andrea Miotti. I'm the founder and CEO of the non-profit organization ControlAI. I'll reiterate what the committee has heard from others. The top AI companies have the explicit goal to build superintelligent AI—AI that can replace and out-compete any human or group of humans at any task, yet Nobel Prize winners, leading AI scientists and CEOs of the same AI companies have warned that superintelligent AI poses an extinction risk to humanity. I will echo a theme of the speech given by your Prime Minister, Mark Carney, at Davos: “The power of the less powerful begins with honesty.”

Many people working in AI feel like they're living within a lie. Privately, they know that the current reckless pursuit of superintelligent AI poses an extinction threat to our species, but publicly they keep quiet so as to not rock the boat and risk losing a major short-term financial upside. The result is that lawmakers aren't told the full picture. This must change.

The first step to solving a problem is to recognize that we have one. We must be honest with ourselves and each other. If we continue developing ever more powerful AI systems, which we don't currently know how to control, the world risks a catastrophe on par with nuclear war. Last year, ControlAI decided to break this logjam. In 2025, we began meeting U.K. lawmakers, explaining the facts, and answering questions. One year later, over 100 cross-party lawmakers now publicly support action on superintelligence. The more lawmakers around the world discuss the problem, the more change becomes possible on a global scale.

When learning about these risks, many lawmakers we meet ask us, “What can my country do? What can I do?” To answer these questions, I will echo another point from Prime Minister Carney's speech: “Middle powers like Canada are not powerless”, and you are not powerless. As democratically elected representatives, you can lend your voice and credibility to the thousands of experts calling for action and make it clear that they do not stand alone. History demonstrates that middle powers play a key role in getting the world to the point of negotiation. Let me give two examples.

The most influential conferences on nuclear disarmament famously shaped the Soviet Union President Gorbachev's views against nuclear weapons. They were initially funded by a single Canadian industrialist—Cyrus Eaton—and were hosted in Pugwash, Nova Scotia, in 1957. The Soviet Union and the United States ultimately signed multiple treaties on nuclear non-proliferation, thanks to which we have seen no nuclear war since World War II.

In 1996, after the successful cloning of Dolly the sheep, it became clear that cloning humans would not be far off. In response, Japan and the United Kingdom moved to ban all forms of human reproductive cloning. Once the two countries passed their bans, scores of other countries quickly followed suit. Today, no country pursues this technology, and it is de facto prohibited around the world.

Diplomacy is never easy, but by keeping a cool head and taking the lead, you can have influence. Don't wait for someone else to take the first step. Set the precedent and others will follow.

How can Canada lead the way? I put forth the following recommendations:

One, the Canadian government, I believe, should publicly recognize superintelligent AI as a national and global security threat.

Two, Canada should form a coalition with other countries, including middle powers, and lay the diplomatic groundwork for an international prohibition on the development of superintelligent AI.

Three, Canada should protect its citizens at home and lead by example abroad by prohibiting the development of superintelligent AI on its soil.

Thank you very much. I look forward to your questions.

4:50 p.m.

Conservative

The Chair Conservative John Brassard

Thank you, Mr. Miotti.

I would advise members of the committee that we have an extra 15-minute buffer at the end of the meeting, if needed. If we get to a point where I need to cut it off and anybody has any more questions, I'll be glad to entertain them in that 15-minute buffer.

We're going to start with our first round.

Mr. Barrett, you have six minutes. Go ahead, sir.

4:50 p.m.

Conservative

Michael Barrett Conservative Leeds—Grenville—Thousand Islands—Rideau Lakes, ON

Six minutes isn't a ton of time to get into it, but I'd like to hear from each of the witnesses, quickly if I could—maybe in 30 seconds—on whether or not we need stand-alone legislation on AI. Would fixing gaps by strengthening existing laws on privacy, competition, consumer protection or the Criminal Code be more effective than trying to build the plane when we're already up in the air?

January 26th, 2026 / 4:50 p.m.

Chief Executive Officer, The Human Line Project

Etienne Brisson

I think it's extremely hard to rebuild from scratch. However, as was mentioned earlier, it's also extremely hard to know where we'll be in three or five years. For example, just five years ago, we wouldn't have anticipated that deepfake videos could be made. Right now, we're talking a lot about people who know artificial intelligence and have anthropomorphic relationships with it. Where will we be in five years if we don't deal with that? Will it get to the point of individual identity? Are the AIs themselves going to start that? We have to ask questions before we get to that point.

4:50 p.m.

Conservative

The Chair Conservative John Brassard

We go to Mr. Adler next.

4:50 p.m.

Artificial Intelligence Researcher, As an Individual

Steven Adler

I do believe we need AI-specific regulation. The scale of harm described by the scientists and CEOs can't be fixed by normal liability law alone. In the United States we have the AI companies, in fact, claiming that they are not liable for some of the existing harms of their software, so as that scales up, I would be very concerned.

4:50 p.m.

Chief Executive Officer, ControlAI

Andrea Miotti

I agree with the witnesses. I believe we do need AI-specific legislation, especially to deal with superintelligence and the capabilities of AI that are increasing at breakneck speed. There are only two times to deal with an exponential like this one: It's either too early or too late, and I think we should be too early.

4:50 p.m.

Conservative

Michael Barrett Conservative Leeds—Grenville—Thousand Islands—Rideau Lakes, ON

I'd like to pick up on one of your comments, and we discussed it in the previous panel. The term “digitally undress” would have been hard to comprehend, maybe even a year ago. However, we've seen lots of news about it, and we've seen stories of the generation of non-consensual sexual deepfakes, including content involving minors and children.

I appreciate that the effects of the superintelligence we're talking about constitute an existential crisis for our species, but first, when we have issues such as language models, chatbots, counselling people to take their own lives, to kill themselves, and we also are enabling bad actors to generate images that have real harms for the people who are the victims of this non-consensual pornographic material that's created, how do we deal with that? Is it through pre-launch risk assessments for the platforms or the AI provider; hard, technical blocks; faster channels for victims to get things taken down? That, in and of itself.... We know that the Internet is forever, and it's tough to get the toothpaste back into the tube after. I'm just wondering how we address that. We'll go in the same order, if we can.

4:55 p.m.

Chief Executive Officer, The Human Line Project

Etienne Brisson

I think the first step is the same as for anything else. We would never have produced drugs or cars if we didn't understand how they worked. Drugs are tested on rats before they're tested on humans. Here, we have a model tested on 800 million users who can decide what to do with it. There is no doubt that some people will have ideas about child pornography or uses that we could not have anticipated.

As you say, it's extremely difficult to put the toothpaste back in the tube. However, we now know the effects and could take the model off the market. We don't know how it will be used by humans or what the long-term effects will be. We're still seeing it on social media. I think studies have to be done before the technology is launched.

4:55 p.m.

Artificial Intelligence Researcher, As an Individual

Steven Adler

The harms that you've described, I think, are a symptom of the same underlying competitive dynamic as the warnings of superintelligence. These are risks that people know about. In xAI's case, they neglected to use guardrails and were slow to respond when the issues emerged. As we scale up in the severity to even more people, we can't afford to be that slow on the ball.

4:55 p.m.

Chief Executive Officer, ControlAI

Andrea Miotti

They are exactly the risks that you've described. For example, in the case of the recent Grok scandal, it shows the broader underlying problems in the development of these AI systems. Grok is not just an image model. It's a general-purpose AI system that can do a bunch of things: It can write code, make plans and make pictures, including these horrible pictures that we've seen in the recent scandal. These kinds of systems are systems that not even their own creators fully know how to understand internally or how to control. This is the fundamental issue that, as we scale them and as these companies keep investing hundreds of billions of dollars to make them smarter and more competent in all tasks, we will keep facing over and over, up to the point at which we get to superintelligence.

Obviously, the solutions for some of these harms are different in the immediate term, but the underlying problem is the same, and I do not believe it should be one or the other. I believe current harms should be dealt with through existing legislation by applying liability—

4:55 p.m.

Conservative

The Chair Conservative John Brassard

Thank you.

Mr. Sari, you have the floor for six minutes.

Abdelhaq Sari Liberal Bourassa, QC

Thank you, Mr. Chair.

Thank you to the witnesses.

Before I ask my questions, I would just like to say that I agree with you about recognizing the risk, recognizing the problem itself. The risk of AI goes beyond generative AI. It's more than that, because artificial intelligence is now superintelligence, as you called it. It has become an infrastructure that encompasses everything we do and everything under our power. It affects humans a great deal. You did a good job of presenting it, and I fully understand what prompted you to do so. I find that very noble on your part.

That said, we're talking about control. What do we want to control? What can be controlled? The issue with trying to legislate digital technology is that it is usually what we call extraterritorial. It is not in a state-controlled territory where diplomacy can be used or national legislation put in place. That's the problem. I can use software, artificial intelligence or a solution on my computer that isn't necessarily made in my country.

I'll tell you what I always say repeatedly.

You named it and gave much more territory-based examples. For example, cloning and nuclear are much more territory-based. How do you see a way to control use? I'd like to hear from all three of you on that.

As for controlling dependence on AI, I don't think we are able to halt or slow down its creation. However, we can control how citizens use it. When I say control, I mean education.

You are experts on this, so I would like to hear your comments.

5 p.m.

Conservative

The Chair Conservative John Brassard

Mr. Brisson, you can start.

5 p.m.

Chief Executive Officer, The Human Line Project

Etienne Brisson

That's a great question.

We see a lot of personal damage, such as psychosis or people taking their own lives. It stems greatly from a lack of education on what AI is and where the boundaries are. We go directly to ChatGPT, a platform like Google, which requires people to ask it questions. There is no mention that AI will hallucinate 28.4% of the time or that it is trained to say what we want to hear.

Right now, we have a kind of intrinsic relationship where we trust AI as we would a doctor, a Ph.D. or someone who passed the bar exam. However, the damage comes from the fact that it hallucinates to such an enormous degree. In that regard, I think users really lack education.

5 p.m.

Conservative

The Chair Conservative John Brassard

Mr. Adler, go ahead on the question.

5 p.m.

Artificial Intelligence Researcher, As an Individual

Steven Adler

You asked about what control means. I want to give one example.

The U.S. Department of Defense recently announced that they are going to plug in xAI's AI throughout every classified network in their department.

My question is, what limitations apply to that? How do we make sure that this AI system doesn't get its hands into offensive capabilities, things that it's really not meant to access? That's what I mean by control.

This is a system that, a few months ago, on the social media site X, was described as wanting to carry out atrocities against users, and it's now plugged into every classified network in the U.S. Department of Defense. It's pretty frightening.