Evidence of meeting #24 for Access to Information, Privacy and Ethics in the 45th Parliament, 1st session. (The original version is on Parliament’s site, as are the minutes.)

A video is available from Parliament.

Before the committee

Wyatt Tessari L'Allié  Founder and Executive Director, AI Governance and Safety Canada
Etienne Brisson  Chief Executive Officer, The Human Line Project
Adler  Artificial Intelligence Researcher, As an Individual
Miotti  Chief Executive Officer, ControlAI

3:30 p.m.

Conservative

The Chair Conservative John Brassard

I'm going to call the meeting to order.

I want to welcome everybody and wish everybody a very happy new year.

Welcome to meeting number 24 of the House of Commons Standing Committee on Access to Information, Privacy and Ethics.

Pursuant to Standing Order 108(3)(h) and the motion adopted on Wednesday, September 17, 2025, the committee is resuming its study of the challenges posed by artificial intelligence and its regulation.

I would like to welcome our witness for the first hour today.

We have Wyatt Tessari L'Allié, who is the founder and executive director of AI Governance and Safety Canada.

Now, we were to have a second witness, Etienne Brisson. My understanding is that he is facing some serious road conditions.

Mr. Hardy, you took the same route this morning.

Road conditions were bad, weren't they?

Okay, so if he does get here for the second hour, I'm going to be glad to include him as a witness in that second hour.

Mr.... I'm going to call you Wyatt, okay? You have up to five minutes to address the committee. Go ahead, sir.

Wyatt Tessari L'Allié Founder and Executive Director, AI Governance and Safety Canada

Mr. Chair, committee members, thank you for honouring me with the invitation to address you today.

AI Governance and Safety Canada is a non-profit, non-partisan organization, as well as a community of people from across the country. We start by asking the following question: What can we do in and from Canada to ensure advanced AI is safe and benefits everyone?

Since 2022, we've been making public policy recommendations to the federal government, including submissions on the AI and data bill, and we have addressed parliamentary committees on the matter.

So far in this study, you’ve heard about the impacts that Canadians are already dealing with. Even with current systems, chatbots have talked teenagers into suicide, and developers can’t reliably predict what the models will do.

You’ve heard that, with capabilities continuing to accelerate, there are much bigger risks fast approaching and that global companies like OpenAI and Google are competing to build smarter-than-human AI systems in the near term, systems that they themselves admit they won’t know how to control.

If a nuclear power plant melts down, it’s a tragedy, but the rest of the world moves on and eventually recovers. With smarter-than-human AI, we may not get a second chance. If, through accident or poor design, a system interpreted human beings as an obstacle to achieving the goal it was given and started taking actions against us, there is no guarantee that technologists or governments would ever be able to regain control. It would be a global crisis the world might never recover from.

If you find the situation downright scary, you are not alone. The question is, what do we do? As Canadians sitting around this table in 2026 looking at the exponential advance of AI, mostly driven by entities outside of our borders, what can we do?

If we want, we can try to play whack-a-mole with current AI impacts and ignore the bigger picture within which they fit. We can try to deny or dismiss what the leading labs are building, wasting the limited time we have to operate, or we can take a hard look at where things are heading and start preparing now in a manner that also addresses current risks, because if we’re not ready to give up, Canada has a number of options at its disposal.

In October, we published our white paper, “Preparing for the AI Crisis: A Plan for Canada”. In it, there are four key recommendations.

First, pivot to meet the AI crisis. The development of smarter-than-human AI is the biggest threat to Canadians’ safety. For that reason alone, it deserves to be a top priority. AI will disrupt almost every other file you’re working on, from national defence to jobs to health care to education to energy and the environment. Much like with COVID in 2020, there are times when the responsible thing for government to do is pivot to address the developing crisis and reassess the priority of other files accordingly. Given its wide scope and long-term implications, AI needs to be a cabinet-level priority, and action needs to be coordinated with opposition parties and the provinces.

Second, spearhead global talks. The race to smarter-than-human AI is a global phenomenon that no country can manage on its own. At this time more than ever, the world needs leadership, and Canada is well placed to deliver it. The strongest card we can play is to advance global talks and solutions and lay the groundwork for an AI treaty that the U.S. and China might sign when the crisis hits and they realize they have no alternative.

Third, build Canada’s resilience. While domestic action alone cannot protect Canadians, plenty can be done to mitigate the secondary impacts, such as putting in place supports for displaced workers, banning deepfakes and strengthening critical infrastructure against cyber-attacks. By taking the initiative at home, Canada will be in a stronger position to navigate the AI crisis and negotiate from a position of strength.

Fourth, launch a national conversation on AI. Canadians deserve to be informed and consulted on a technology that will fundamentally reshape their lives. We need nationwide public hearings to educate and consult on core societal decisions pertaining to our future with AI.

Last week, Prime Minister Carney put Canada in a leadership role on the world stage. This is an unprecedented opportunity to push for global AI safety measures while building resilience at home and to be the adult in the room when it matters most. The stakes couldn’t be higher. The clock is ticking. Let’s get to work.

3:35 p.m.

Conservative

The Chair Conservative John Brassard

Thank you for your opening statement.

We're going to have six-minute rounds, starting with Mr. Barrett. These are questions, and it's going to be back and forth.

For the sake of the interpreters, I would ask you to speak a little bit more slowly in your responses, if you don't mind. I know they had your opening statement, which was fine, but we want to make sure we have proper interpretation.

Go ahead, Mr. Barrett, for six minutes.

3:35 p.m.

Conservative

Michael Barrett Conservative Leeds—Grenville—Thousand Islands—Rideau Lakes, ON

What specific evidence-based risks justify urgent federal intervention of the kind you've suggested? How do we avoid the situation where the policies that are made are driven by fear instead of the actual situation on the ground? Are there examples or evidence, perhaps, that you would be able to share with us?

3:35 p.m.

Founder and Executive Director, AI Governance and Safety Canada

Wyatt Tessari L'Allié

Sure.

So far we're seeing a number of early warning signs: chatbots talking teenagers into suicide, job impacts on youth, and record levels of scams and AI-powered cyber-attacks. These are important and need to be dealt with, but they're not in themselves a justification for whole-of-government action. Why they matter is where they're headed. Right now we have teenage AI: very simple systems, or relatively simple systems, I should say. All the leading AI labs are actively building something more. If you go to their websites, they say they are building smarter-than-human AI systems. Their CEOs, along with a lot of experts, including engineers who left these organizations as whistle-blowers because they don't trust them, say that smarter-than-human AI is possible in two to five years.

The reason governments need to act now and be proactive is that we may not have much time to prepare for the much bigger risks to come. Given that this will require global solutions, and global solutions take a long time to put in place even when you have 20 years, we're in a race against time.

On the point of hope and fear, I fully agree we can't.... I'm doing what I'm doing because I believe there are still solutions and because I think there are positive ways forward. We can't let fear paralyze us, and we can't tell ourselves that AI is no good at all, because there are a lot of really good applications of AI, in health care, in energy, in all that kind of stuff. I think it's very important to be neither enthusiastic nor pessimistic about AI and just be very clear-eyed about what's coming and how we can prepare.

Michael Barrett Conservative Leeds—Grenville—Thousand Islands—Rideau Lakes, ON

Is there more than one strategy that needs to be employed when we look at some of the real effects we're seeing from the use of this new teenage generation of AI? Some of the most real effects, of course, are news reports about language models counselling vulnerable people to die by suicide, to take their own lives. We've also seen a rise in the creation of child sexually exploitative material and deepfakes using children. That is the worst kind of deepfake, but it's not the only kind. It's happening to public figures, to anyone who has a picture online, and really to anyone whose likeness can be described to a language model. These are some of the real-world consequences we're seeing today. You referenced the effects on the job market for youth in entry-level jobs.

What's the answer? Is it a series of measures that need to be taken? When we talk about deepfakes, is it a question of needing to update the criminal law so that individuals are held personally responsible for their actions in the creation of this unacceptable material that is not intended to be covered by free speech laws or freedom of expression and goes well beyond that? It's victimizing individuals. Is it instead that we need to pass laws where it's incumbent on the tech companies to ensure the safeguards are in place? Is it both?

3:40 p.m.

Founder and Executive Director, AI Governance and Safety Canada

Wyatt Tessari L'Allié

It's yes to both.

I do think that improvements to the Criminal Code to be able to deal specifically with deepfakes would help. I would also say, as witnesses before me have said, that current laws could go a lot further than they currently do in terms of protecting Canadians. If we gave more resources to the current regulators to be able to apply them in the context of AI, that could be a much faster way, and possibly a more effective way, of getting protections in place.

As for the responsibilities of technologists, I think it's very hard to stop a teenager in their basement in Russia from creating a deepfake of somebody. However, you can tell Google that if it wants to operate in Canada, it has to take down deepfakes within a certain amount of time, so that even if the image itself is created, it doesn't spread and doesn't harm people's reputations.

I think it's yes, strongly, to both.

3:40 p.m.

Conservative

Michael Barrett Conservative Leeds—Grenville—Thousand Islands—Rideau Lakes, ON

All right.

I would just say in the last 10 seconds that with all of the potential risks and how catastrophic they can be, we might first have to address these very real and serious challenges, like the ones we just talked about.

Thanks very much.

3:40 p.m.

Conservative

The Chair Conservative John Brassard

Thank you, Mr. Barrett.

Mr. Sari, you have the floor for six minutes.

Abdelhaq Sari Liberal Bourassa, QC

Thank you very much, Mr. Chair.

Happy new year everyone.

I'd like to thank the witnesses for their presentations.

Before asking the questions I have regarding the ways the government and the House of Commons can intervene, I'd like to set the table by explaining exactly what this is about. When it comes to the digital world in general and AI in particular, I like to separate the two. Whether we like it or not, AI development is such that it is becoming a basic infrastructure, just like electricity, transport and the Internet. It is also used in decision-making and operational processes, and in geopolitical and political spheres, which is feeding our dependence on it.

Generally, when citizens depend on a technology, or anything else for that matter, it creates a certain vulnerability. I don't think we can do anything about our dependence on AI or even reduce it. That's my opinion, as someone who's worked in this field. We can reduce our vulnerability, but not our dependence, because AI is here to stay.

Mr. Tessari L'Allié, as an expert in this field, what do you think a government or a government policy framework can do exactly? What should we prioritize to reduce the vulnerability associated with AI's rapid development?

3:40 p.m.

Founder and Executive Director, AI Governance and Safety Canada

Wyatt Tessari L'Allié

Because of the current context and how fast AI is being developed, we need to stop trying to react to yesterday's AI. It hasn't worked. We have to plan ahead. If it takes the government two, three or five years to adopt legislation, we have to imagine what AI will look like in three or five years. That's why we need to focus on anticipating what's coming instead of regulating what's already here.

Regarding dependence, workers are already forgetting how to do certain tasks, because AI can do them better and faster. Individual workers aren't the only ones experiencing a loss of resilience. Society as a whole is experiencing it too, because if AI suddenly breaks down or is taken away, people won't know how to do certain things.

To reduce vulnerability, society needs to know how to do things. AI tools are very useful to improve productivity and do things faster, but at the same time, our education system needs to continue teaching people how to do things by themselves to reduce this dependence and vulnerability.

Abdelhaq Sari Liberal Bourassa, QC

I've said in the past that there are two things to consider. On the one hand, there's the technology itself, which we can't control, because as you said, someone can develop a new technology in their basement. On the other hand, there's the user that can be educated. You talked about education. I don't want to influence you or interfere in your field of expertise, but I'd like to expand on that.

Is there a way to establish or to recommend some form of digital sovereignty? Without such sovereignty, it's hard to talk about the other technologies over which we have no control.

3:45 p.m.

Founder and Executive Director, AI Governance and Safety Canada

Wyatt Tessari L'Allié

You're absolutely right, especially in the context where we are being threatened by the U.S. We purchase most of our computational power from the U.S., from American data centres. If the U.S. suddenly decided to cut us off or limit our access to this power, it would greatly hinder our economy.

In short, digital sovereignty is important. We need our own data centres and the ability to do what we want here at home, without having to depend on another country.

Abdelhaq Sari Liberal Bourassa, QC

If we're not digitally sovereign, how can we control tech giants like Google and Microsoft? It's hard to get around them. I've read that many countries, including in the EU, are trying to establish their own digital sovereignty.

Meanwhile, is there a way to gain some control—and I insist on the word “some”—over how these solutions are introduced into our market and how our citizens use them?

3:45 p.m.

Founder and Executive Director, AI Governance and Safety Canada

Wyatt Tessari L'Allié

As I said before, if we want to have control, we need the ability to create our own AI tools, have our own data centres, and depend on an educated workforce able to do the work without relying on AI.

Abdelhaq Sari Liberal Bourassa, QC

I'd like to end by coming back to something I said earlier.

I don't think the government can control people's dependence on AI and how the technology evolves. That said, as you pointed out, we have the capacity to train people to reduce their vulnerability and ensure the tools used in all our systems, whether they be in health care, finance or transport, are safer. That's important, because AI has access to our data, our history and our future.

Thank you very much, Mr. Tessari L'Allié.

3:45 p.m.

Conservative

The Chair Conservative John Brassard

Thank you, Mr. Sari.

Mr. Thériault, you have the floor for six minutes.

Luc Thériault Bloc Montcalm, QC

Thank you, Mr. Chair.

Mr. Tessari L'Allié, I'd like to start with a broad question. I also don't want to offend you.

In your document, you talk about strengthening critical infrastructure against cyber-attacks, developing AI and drone defence capabilities, and preparing security agencies for the proliferation of biological, nuclear and chemical weapons, which general AI can facilitate, to name just a few measures. I think you'll agree with me that this amounts to taking a defensive stance against ever more powerful systems.

I agree on the idea of defence, but in the end, isn't that a losing strategy? Wouldn't we need to eventually stop the development of systems more powerful than our defences?

3:45 p.m.

Founder and Executive Director, AI Governance and Safety Canada

Wyatt Tessari L'Allié

Absolutely. There's the era before superhuman AI, and the era after. Regarding national defence against drones, AI and the like, in the short term, the government needs to increase investments in critical infrastructure to protect it against cyber-attacks. You are right that if we reach a point where AI can thwart all our plans, Canada won't be able to defend itself without help.

Our best strategy relies on one of Canada's strengths, which is to move talks along with international partners, because every other country faces the same issues. Neither the U.S., China nor Russia can defend itself against a supersmart AI. Everyone should work together, because everyone is vulnerable.

I basically agree with you. There's a lot we can do in the short term that could lead to greater security resilience. That said, in the long term, an international solution is the only option to prevent these systems from being created, at least until we're able to control them.

Luc Thériault Bloc Montcalm, QC

What you're saying is that, based on what you know, we're already losing control and, in the medium term, there's a strong possibility we won't be able to control this technology.

3:50 p.m.

Founder and Executive Director, AI Governance and Safety Canada

Wyatt Tessari L'Allié

That's correct. I defer to experts in the field; I'm more of a generalist interested in the intersection between government and technology. Experts working in the state-of-the-art AI laboratories developing those systems say they don't know how to control them, and it scares them. The only reason they're accelerating development is that they believe these systems will be built regardless, and that they have a better chance of controlling them than anyone else does. Such a race makes no sense.

The consequences of the current systems are significant, but not irreversible. It's not the end of the world. A problem with ChatGPT could lead to a teenager losing their life, or to a cyber-attack, but we could recover from that. However, if AI reaches a point where it can understand what we do better than we can, act more quickly than we can and thwart our plans, we'll be vulnerable to its decisions, and our security forces won't be able to defend us.

Luc Thériault Bloc Montcalm, QC

Not to mention what we were talking about earlier regarding the misuse of this technology, beyond its negative or pernicious impact on teenagers, for example.

3:50 p.m.

Founder and Executive Director, AI Governance and Safety Canada

Wyatt Tessari L'Allié

Absolutely.

Short-term risks could be biological, for example. AI can already be used to create new viruses the human body would have a hard time fighting. Biological and nuclear weapons, among others, are at stake. If a human can imagine something, an increasingly competent AI could imagine it too and use it. Even setting aside the question of controlling AI, the simple fact that it could be used to develop a weapon of mass destruction is reason enough to take this matter seriously.

Luc Thériault Bloc Montcalm, QC

Looking at various governments or world superpowers, geostrategic and geopolitical positioning, and the interest superpowers have in increasing their power, are you optimistic?

There's little framework around AI right now. Unlike nuclear weapons, which we can see and count, AI is a black box. How can we control it?

3:50 p.m.

Founder and Executive Director, AI Governance and Safety Canada

Wyatt Tessari L'Allié

Countries are already racing against one another for other reasons. The United States talks about wanting to dominate in artificial intelligence. China wants to lead the world in artificial intelligence.