Evidence of meeting #29 of the Standing Committee on Industry and Technology in the 45th Parliament, 1st session. (The original version is on Parliament's site, as are the minutes.)

A recording is available from Parliament.

Before the committee

Michael Geist, Canada Research Chair in Internet and E-Commerce Law, Faculty of Law, University of Ottawa, As an Individual
Colin Bennett, Professor Emeritus, University of Victoria, As an Individual
Yoshua Bengio, Full Professor, Université de Montréal, As an Individual
Ali Dehghantanha, Professor and Canada Research Chair in Cybersecurity and Threat Intelligence, University of Guelph
Carys Craig, Associate Professor of Law, Osgoode Hall Law School, York University, As an Individual
Wendy Cukier, Professor, Entrepreneurship and Strategy, Ted Rogers School of Management, and Academic Director, Diversity Institute, As an Individual

Dominique O'Rourke Liberal Guelph, ON

If I can, I'll ask a follow-up question of Dr. Bengio.

In our conversation around the defence industrial strategy, I had a lot of questions on ethics and self-guided weapons. How do we implement basic ethics and eliminate bias in AI? How do we ensure that? I understand that's part of your work. Then, how do we compete with industries, companies or countries that don't have that as their starting point?

4:20 p.m.

Full Professor, Université de Montréal, As an Individual

Yoshua Bengio

It's actually an advantage to have AI that is reliable. In fact, it can be a niche advantage that Canada can offer the world if we do push forward sufficiently in that direction.

On autonomous weapons, it's clearly a tricky situation, because if you're in a war—I'm thinking about Ukraine—and your adversaries are using AI without any restraint and you don't, you might think you're in trouble, but we have no choice. We should not be sacrificing our values and our democracies because of the challenge of war. We should do our best with the constraints we have.

We can deal with the ethical questions about, say, the use of weapons that can be automated, both on the technical front.... For example, I mentioned the work we're doing at LawZero. We want the AI used in those contexts to obey the international laws of war. We also need societal and legal guardrails, but we can work on both fronts.

The Chair Liberal Ben Carr

Thank you very much, Madame O'Rourke.

Mr. Ste‑Marie, you have two and a half minutes.

Gabriel Ste-Marie Bloc Joliette—Manawan, QC

Thank you, Mr. Chair.

Mr. Bengio, I will ask my two questions together.

First, we have just been talking about autonomous weapons and the use of artificial intelligence in defence. What should the ethical rules and international treaties on this issue be?

Second, is the government looking at replacing public servants with artificial intelligence applications? What cautions, what advice should accompany that? What limits should there be?

You have two minutes to answer all that, so answer as you wish. Thank you.

4:20 p.m.

Full Professor, Université de Montréal, As an Individual

Yoshua Bengio

I think that, to an extent, I have already answered the question about autonomous weapons. I would say that we must work on countermeasures. The problem with powerful artificial intelligence possibly being in the hands of people abusing that power is that, currently, democratic, social and international institutions are not robust enough to counter those challenges, either militarily or in any other area. We must innovate. With autonomous weapons, the military response to the use of artificial intelligence is going to have to be strengthened to provide greater power. We want that power to be used for defence, but not for attacks on innocent people. How can we be sure that rules of that kind are followed? By developing appropriate technology and, at the same time, by making institutions strong enough, especially in terms of transparency, to stand up to abuse and prevent it.

The same challenge applies to the public servants who are going to lose their jobs. We have to start with a comprehensive long-term strategy—let's say over five years and eventually over ten years—to decide what to do with those who are going to lose their jobs. In terms of economics, the solution cannot simply be to offer them assistance. We can certainly do that, but we have to make sure that we have enough money to provide that assistance. If the profits generated by automation are sent elsewhere, we will not have the means to assist.

We must also work on developing an artificial intelligence economy, both in Canada and with our partners. That will protect us by making sure that the benefits of automation will be distributed to those who need them.

Gabriel Ste-Marie Bloc Joliette—Manawan, QC

Thank you very much.

The Chair Liberal Ben Carr

Thank you, Mr. Ste‑Marie.

Ms. Konanz, the floor is yours for five minutes.

4:25 p.m.

Helena Konanz Conservative Similkameen—South Okanagan—West Kootenay, BC

Thank you.

I have a question for Dr. Bengio.

4:25 p.m.

Full Professor, Université de Montréal, As an Individual

Yoshua Bengio

I'll feel bad if you only ask the questions of me.

4:25 p.m.

Helena Konanz Conservative Similkameen—South Okanagan—West Kootenay, BC

I know. I have Dr. Geist next, so no worries.

Along with the theme of some of the other questions, artificial intelligence is obviously going to be one of the biggest drivers of economies around the world in the foreseeable future.

4:25 p.m.

Helena Konanz Conservative Similkameen—South Okanagan—West Kootenay, BC

I see that you agree with that, but it also doesn't carry a lot of public confidence. There's a lot of concern about job disruption, copyright infringement and transparency.

What do you think is the balance we should look to strike so that our regulatory frameworks don't become immediately technologically outdated but have more teeth than simply rubber stamps?

4:25 p.m.

Full Professor, Université de Montréal, As an Individual

Yoshua Bengio

What we should not do is try to establish confidence through marketing: “It's all going to be fine; don't worry. You won't lose your job, your children won't have any problems, your data will be safe and no rogue AI will emerge.” I think that would be a terrible mistake, but that is often what leaders in companies and governments tend to do.

We should be honest with people, and they should also understand that there's a lot of uncertainty around all those risks, but that requires a public discussion. That's the first point.

The second point is that it is possible to build regulations that do not prescribe a particular way to solve the problem. The general principle is very simple: regulate for the harms that AI could create in society. It's the same principle we ask builders of bridges, trains, planes or the factories that deal with our meat to follow. The companies building those systems have to demonstrate to the public that their products will be safe. They choose how to demonstrate it. They choose what technology they use to build their systems. They should come up with a scientifically valid estimation of the risks and how they can mitigate them.

4:25 p.m.

Helena Konanz Conservative Similkameen—South Okanagan—West Kootenay, BC

Following up on that and the earlier discussion about the public sector, in my experience in municipal government, concern about job elimination was one of the primary reasons for not implementing AI. I spoke to many leaders in the community who wanted to be innovative but couldn't be. At the same time, they've been pressured to avoid AI, not only by their employees but also by the people who vote for them.

In this situation, the private sector will be racing forward, perhaps unsustainably, while public services might choose to lag, even though they are trying to be innovative at the same time, and this doesn't really create confidence. What approach should be taken so that we don't see a rapid loss of office and white-collar professions, but at the same time we create some confidence in the people who are leading communities to start being innovative and use AI? I know I put a lot in there, but it's a problem when....

4:25 p.m.

Full Professor, Université de Montréal, As an Individual

Yoshua Bengio

You think I can square this circle.

4:25 p.m.

Helena Konanz Conservative Similkameen—South Okanagan—West Kootenay, BC

Yes. I know you can.

4:25 p.m.

Full Professor, Université de Montréal, As an Individual

Yoshua Bengio

My view on these kinds of questions is that they are social choices, and economic choices in some cases. For, say, Canadian companies that are exporting and competing against American companies, for example, if they don't use technology that allows them to be competitive, they're going to lose, so for them it's going to be very difficult to do anything but. There's a sense that those decisions are not just decisions we can take alone in Canada. They're decisions we should discuss with our partners around the world.

In cases of government services, it should be a choice. Yes, we could be more efficient, but then what's going to happen with the people who lose their jobs? We shouldn't hide behind the idea that it's all going to be fine. We should have a plan to deal with that and a plan that is discussed with our society and our citizens, because we have to face that challenge collectively. It's not easy, and I shouldn't be the one telling you the answer. It's something we should discuss collectively.

The Chair Liberal Ben Carr

Thank you very much.

Mr. Bains, the next five minutes are yours. You will be the last questioner for this round.

Parm Bains Liberal Richmond East—Steveston, BC

Thank you, Mr. Chair.

My first question is for Dr. Bennett.

Western alienation is real. We're witnessing it today here. As the only member from British Columbia on this committee, I think it's important that we engage you in this discussion as well. Thank you for joining us.

You're an expert on surveillance technologies, privacy and protection policies. I spent time on the ethics committee, which has everything to do with access to information, privacy and ethics. Protection, of course, is important if Canadians are to trust their government and the private sector.

These rights are essential to Canadians, but we need structure, and we've heard that. We've heard from Dr. Bengio about the potential bias in how AI systems are built. Without the structures, we need some teeth in the work we're doing.

Could you please shed some light on what institutions and policies are needed to support Canadians' privacy rights with respect to AI?

4:30 p.m.

Professor Emeritus, University of Victoria, As an Individual

Colin Bennett

Perhaps I could answer that by reflecting on and accentuating a couple of things that Professor Bengio and Professor Geist have said.

What we're talking about here in terms of the governance of AI is the principle of accountability. There's been a lot of analysis of accountability in privacy issues. It means that organizations do the risk assessment, do the analysis of the problems they're likely to face and stand ready to demonstrate compliance. They don't necessarily have to do it, but they stand ready to demonstrate it.

Last year, there was a proposal to develop codes of practice around AIDA that really didn't go anywhere. The process was flawed, in my judgment, but codes of practice play a really important role here too. They may operate at the company level or at the level of industry associations. My central point here is that just because we're dealing with amazingly new and potentially powerful technologies, it does not mean that the governance issues are any different from what they were when we were talking about these problems 30 years ago, as I was. I gave testimony before the committee that looked at PIPEDA back in 2000.

We should be learning about what works and what doesn't from those experiences. We're not talking about imposing prescriptive regulation and we're not talking about self-regulation; we're talking about co-regulation, meaning that the companies are incented to do the right thing and are punished if they do not.

You've given me a very broad question, so I'm going to take the liberty of making a couple of other comments.

On the question of consultation, I don't regard this issue, AI, as just another policy question. It's not just another law and policy that requires the standard stakeholder consultation. It is so general. It is so pervasive. It is going to affect all aspects of our lives, so the consultation process also needs to be fundamentally different. For example, I would like the government to think critically about citizens' assemblies in this area.

Professor Bengio is putting his thumb up. Good.

Citizens' assemblies can play more than the role of getting feedback from ordinary citizens about what they think about these issues. Citizens are going to be exposed to this and are being exposed to it constantly, but they're also going to be hurt by it. Your constituents are going to be denied, and are being denied, rights and services because of decisions that are made in automated machines without proper human oversight. Citizens' assemblies, I think, can play a very critical role here.

On this question of digital sovereignty, I have no insights into whether a new CPPA is coming along soon from ISED, but we hope that it will. We also hope that it will be strengthened with respect to the issue concerning the international flows of personal data. I think this is accentuating what Professor Geist said.

The previous version just said that a company had to exercise due diligence when it sent information elsewhere. It had to ensure that the rules in Canada were applied. It didn't matter whether that company was based in Ontario, in Europe, in a developed country or somewhere with an authoritarian regime, so there's something deeply wrong, in my view, with the way the government has been thinking about the protection of the international flows of personal data.

We have some views about that. I hope that when this committee—assuming it's this committee—comes to look at a new view of CPPA, it will consider these questions about digital sovereignty and think very critically not only about the stronger rules that need to be in place to ensure that Canadians' personal data remains in Canada and the role for data localization, but also about the rules being really strong when that data is transferred overseas.

I hope that addressed your question, Mr. Bains.

Parm Bains Liberal Richmond East—Steveston, BC

Thank you.

The Chair Liberal Ben Carr

That's wonderful. Thank you very much.

That brings us to the end of the first hour of testimony.

Thank you very much to the witnesses for appearing. This continues to be a fascinating conversation. I know that it's not just people in this room who are paying attention to it; it's also those we represent across the country who are looking for guidance. I appreciate very much your taking the time out of your incredibly busy schedules to offer your perspectives to us.

Colleagues, we're going to suspend for no more than five minutes, and then we will resume in the second hour.

The meeting is suspended.

The Chair Liberal Ben Carr

Colleagues, we're going to continue.

That was a fascinating first hour. I've said a few times that I'm starting to wonder if this is real life or if I'm in a sci-fi movie. Maybe it's a little bit of both.

We have three new witnesses with us this hour. One is joining us online, and two are here in the room.

I'd like to welcome a professor of law from Osgoode Hall Law School at York University, Dr. Carys Craig. Welcome.

We also welcome Professor Wendy Cukier, academic director of the Diversity Institute in the entrepreneurship and strategy department at the Ted Rogers School of Management.

As well, joining us online is professor and Canada research chair Ali Dehghantanha. Welcome.

Professor Dehghantanha, we're going to start with you. You have up to five minutes for your opening remarks.

Ali Dehghantanha Professor and Canada Research Chair in Cybersecurity and Threat Intelligence, University of Guelph

Thank you for the invitation to appear today.

My name is Ali Dehghantanha. I am a professor and Canada research chair in cybersecurity and threat intelligence at the University of Guelph, and I also work closely with industry on securing real-world AI systems. I would like to focus my remarks on a critical gap that is currently limiting Canada's ability to fully realize the benefits of artificial intelligence in strategic sectors.

Today, the primary barrier to AI adoption is not capability; it is trust. Across sectors, organizations are increasingly capable of building and deploying AI systems. However, they are often unable to safely operationalize these systems at scale due to concerns around security, misuse, reliability and regulatory exposure. In sectors like advanced manufacturing and construction, where AI-driven automation meets physical safety, the stakes of this trust gap are particularly high.

In practice, we are seeing that AI systems are being deployed without sufficient mechanisms to continuously monitor, verify and remediate risks once they are in operation. This creates what I would describe as an AI security deadlock, where innovation is technically possible but deployment is slowed or blocked by unresolved risk.

Current approaches to AI governance tend to focus on pre-deployment checks, model evaluation or static compliance frameworks. While these are important, they are not sufficient for modern AI systems, which are dynamic, adaptive and increasingly integrated into critical workflows.

What is missing is a run-time layer of control—an infrastructure that continually observes AI behaviour, detects failures or misuse, and actively intervenes to correct or contain those issues in real time. This is similar to how cybersecurity evolved. We do not secure systems today solely through a design-time review; we rely on continuous monitoring, detection and response. AI systems require a similar paradigm. Furthermore, this run-time approach allows for robust security oversight without requiring access to a company's proprietary source code or sensitive training data, protecting Canadian intellectual property while ensuring safety.
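
[To make the idea of a run-time control layer concrete, the following is a minimal sketch, not drawn from the testimony: a hypothetical RuntimeMonitor wrapper that observes each model response, applies simple policy checks, records incidents and intervenes in real time, rather than relying only on pre-deployment evaluation. The class names, policy checks and thresholds are illustrative placeholders.]

```python
# Minimal, hypothetical sketch of a run-time control layer for a deployed AI model.
# It observes each response, applies policy checks, logs incidents, and intervenes
# (contains the output) instead of relying only on pre-deployment evaluation.

from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class Incident:
    prompt: str
    output: str
    reason: str


@dataclass
class RuntimeMonitor:
    model: Callable[[str], str]                # any text-in/text-out model callable
    policies: List[Callable[[str, str], str]]  # each returns a reason string, or "" if OK
    incidents: List[Incident] = field(default_factory=list)

    def generate(self, prompt: str) -> str:
        output = self.model(prompt)
        for policy in self.policies:
            reason = policy(prompt, output)
            if reason:
                # Intervene: contain the response and record the incident for review.
                self.incidents.append(Incident(prompt, output, reason))
                return f"[response withheld pending review: {reason}]"
        return output


def too_long(prompt: str, output: str) -> str:
    # Toy reliability check: flag runaway generations.
    return "output exceeds length limit" if len(output) > 2000 else ""


def leaks_marker(prompt: str, output: str) -> str:
    # Toy data-protection check: flag outputs containing a sensitive marker string.
    return "possible sensitive-data leak" if "CONFIDENTIAL" in output else ""


if __name__ == "__main__":
    # Stand-in model for demonstration purposes only.
    def dummy_model(prompt: str) -> str:
        return "CONFIDENTIAL: internal figures" if "finance" in prompt else "All clear."

    monitor = RuntimeMonitor(model=dummy_model, policies=[too_long, leaks_marker])
    print(monitor.generate("Summarize the finance report"))  # intervened and logged
    print(monitor.generate("Say hello"))                     # passes through unchanged
```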

From a policy perspective, I would suggest three priority areas.

First, Canada should support the development of standards and frameworks for continuous AI risk monitoring and post-deployment assurance. This includes defining what “safe operation” means in practice—not just at deployment, but throughout the life cycle of AI systems.

Second, we should incentivize secure AI deployment, not just AI deployment. Many current programs focus on building AI capabilities, but fewer address the operational challenge of deploying these systems safely in high-stakes environments.

Third, Canada has the opportunity to lead in the emerging domain of AI security and risk orchestration. Supporting domestic companies and research efforts in this space can strengthen both our economic position and our digital sovereignty. As we look toward the horizon of quantum computing, the need for these real-time adaptive security layers to protect our AI infrastructure against next-generation threats becomes even more urgent.

Finally, I would like to emphasize that the goal is not to slow down AI innovation but to enable it. By addressing the security and trust gap, we can unlock faster, safer and more responsible adoption of AI across Canada's strategic industries.

Thank you. I look forward to your questions.

The Chair Liberal Ben Carr

Thank you very much.

Professor Craig, we'll turn the floor over to you for up to five minutes.