Evidence of meeting #20 for Access to Information, Privacy and Ethics in the 45th Parliament, 1st session. (The original version is on Parliament’s site, as are the minutes.)

A recording is available from Parliament.

Before the committee

Leahy  Chief Executive Officer, Conjecture Ltd.
Alfour  Chief Technology Officer, Conjecture Ltd.
Carole Piovesan  Managing Partner, INQ Law

12:10 p.m.

Conservative

The Chair Conservative John Brassard

Thank you, Ms. Piovesan.

I know there's a fourth, fifth and sixth component to your recommendations—I've seen your opening remarks. Perhaps committee members can guide you with that in their lines of questioning.

Mr. Barrett, you have six minutes. Go ahead, please.

12:10 p.m.

Conservative

Michael Barrett Conservative Leeds—Grenville—Thousand Islands—Rideau Lakes, ON

I'd like to pick up on one of the points you mentioned in your opening. It was about the regulation of frontier AI systems. In doing that, how would Canada defend against rogue nations developing and deploying AI weapons? What international coordination would be required to make safeguards in the development of frontier AI systems effective?

12:10 p.m.

Managing Partner, INQ Law

Carole Piovesan

Every nation is drawn to protect its own systems as much as possible. Canada has been working internationally since 2019 through the Global Partnership on Artificial Intelligence, through the OECD, through the G8 and G7, and through other international mechanisms, including, now, the international AI safety institutes. We've been working quite diligently to establish more of a commonality in how we are approaching AI and the regulation or protection of frontier AI around the world, where rogue nations may be developing certain AI systems that are offensive to our own principles.

It's really important to understand that Canada is already embedded in these international committees, and we are playing a key role in establishing what the norms ought to look like and the mechanisms to enforce those norms. We won't be able to do it alone at all. We have to find out who our friends are and where we have commonality in approach and values, and, through that, establish the mechanisms we need in order to defend those values and that approach.

12:15 p.m.

Conservative

Michael Barrett Conservative Leeds—Grenville—Thousand Islands—Rideau Lakes, ON

I wanted to ask you about the long-term trajectory of superintelligent AI and the worst-case scenarios that could follow if adequate safeguards aren't put in place to stop it from getting away from us as a species.

First, could you tell me in what terms we're talking? Could you qualify, in your opinion, whether we are talking about long term, 10 years or five years? What do you think?

12:15 p.m.

Managing Partner, INQ Law

Carole Piovesan

I'm obviously not a technologist, but I've sat in enough circles to hear what some of the technologists are saying. What I am hearing, and I have no reason to disbelieve them, is that we are decades away or less. We're no longer talking about....

I remember sitting in a class with Dr. Hinton years ago when he was suggesting that artificial general intelligence was hundreds of years away. Then, in 2023, he revised that view to say that we're much closer than we ever thought.

The pace of technological development is exponential. By all accounts, and I have no reason to disbelieve them, we are much closer to superintelligent computers than we thought we would be just a few years ago.

12:15 p.m.

Conservative

Michael Barrett Conservative Leeds—Grenville—Thousand Islands—Rideau Lakes, ON

It's easy for one to fall down the dystopian mind hole about what that looks like.

I have just over two minutes left. What do you hear? What is discussed? What are the outcomes we're looking to avoid? How can we as a Parliament play a role in preventing that from happening?

12:15 p.m.

Managing Partner, INQ Law

Carole Piovesan

We're looking to avoid superintelligent computers that are superior to us in a way that is harmful to us. Dr. Hinton spoke about embedding maternal instincts into superintelligent computers so they would be more empathetic and protective.

Parliament can play a role in ensuring there is regular tracking of the development of frontier models, the inputs into that development, and how these models are implemented within Canadian society, with a view to shaping the values that ought to be encoded in these systems. It can help identify the values we ultimately want these systems to embody, and then provide a process through which those values are embedded in the systems so we can see the outputs.

12:15 p.m.

Conservative

Michael Barrett Conservative Leeds—Grenville—Thousand Islands—Rideau Lakes, ON

Thanks very much.

12:15 p.m.

Conservative

The Chair Conservative John Brassard

Thank you, Mr. Barrett.

Mr. Sari, you have the floor for six minutes.

Just make sure, Carole, that your interpretation is on.

Abdelhaq Sari Liberal Bourassa, QC

Thank you very much, Mr. Chair.

Ms. Piovesan, thank you for the information you have shared with us and for the insight you have provided.

Nevertheless, I will begin with the same introduction as I did for the two witnesses who preceded you, by seeking to understand what we can control.

In your opinion, will it be possible to control the development of this superintelligence? Shouldn’t we instead be working harder to control the use of artificial intelligence itself, given that most of the systems designed to persuade Canadians are not developed in Canada?

12:20 p.m.

Managing Partner, INQ Law

Carole Piovesan

That's a fair point; the systems aren't necessarily developed here.

First of all, through international coordination, we have a mechanism to inform what those values and standards ought to look like. We have precedent for this in a number of different respects, some of which you heard about from the earlier witnesses. We have some role to play in being clear about what embedding those good values looks like and what it means to develop intelligent computers that are aligned with the role we believe they should play in society.

I will give you the example of the G7 back in 2018, when our government and all the governments of the G7 were talking very much about values in the context of AI and how we were going to advance standards, policies and practices that would protect those values. From there, you saw a movement into the Global Partnership on AI, and you saw coordination through the United Nations on AI with a view to creating more harmonization in approach. Cut to the most recent G7, where the emphasis was primarily on adoption, because we're now at a stage where we can see the real-world uses of AI and are, in a lot of ways, much more excited about adopting these technologies, which is a good thing.

In this current context, as we are starting to become much more familiar with the technology and understand its opportunities and use, we have to be mindful of the risks, but we cannot lose sight of the opportunities. We have a role to play in ensuring safe use where there is actual risk. What I don't want to do is establish a system that applies the same kinds of controls for all uses. We have to be targeted.

Abdelhaq Sari Liberal Bourassa, QC

Exactly.

You said that we need to be very aware of the risks, and I agree with you on that. I think our government is already aware of these risks, as are several G7 member governments, as you said.

You also mentioned the American approach, which is much more competitive, and the U.K. approach, which is much more based on ethics and accountability.

What approach can you really suggest to the Canadian government?

As I have another question for you, I would ask you to be a little more concise, please.

12:20 p.m.

Managing Partner, INQ Law

Carole Piovesan

I'm more inspired by the U.K.'s and Singapore's approaches than I am by those of the EU or the U.S.

Abdelhaq Sari Liberal Bourassa, QC

Don’t you see that this approach may not have the desired results if other governments are not aligned in some way? This artificial superintelligence will continue to develop.

12:20 p.m.

Managing Partner, INQ Law

Carole Piovesan

I think that's exactly right. It doesn't necessarily stop the development of AGI, but it does embed a more mature approach in how we regulate it today.

Abdelhaq Sari Liberal Bourassa, QC

My last question concerns Bill C‑27, which you were in favour of.

What lessons have you learned from this bill, and what would be the best way forward? We have already done some work on it, and we can’t just throw it all away. In your opinion, what lessons have been learned with regard to this bill?

12:20 p.m.

Managing Partner, INQ Law

Carole Piovesan

Part 3 of Bill C-27 was the artificial intelligence and data act. The best lesson to learn from it is that the overarching accountability framework it would have put in place required turning your mind to the context of the use of AI, determining its potential impact level, and then establishing a diligence process, soup to nuts, in response to that level of risk.

Abdelhaq Sari Liberal Bourassa, QC

I haven’t read your book yet, but I think it would be a really good read for the holidays.

Could you briefly tell me what your position is on digital sovereignty in Canada?

12:20 p.m.

Managing Partner, INQ Law

Carole Piovesan

It's a complex question to answer in a short period.

I understand and appreciate the approach of digital sovereignty. I think there's a lot that's extremely important about digital sovereignty. I also recognize it's a long-term investment.

Abdelhaq Sari Liberal Bourassa, QC

Thank you very much.

12:25 p.m.

Conservative

The Chair Conservative John Brassard

Thank you, Mr. Sari.

Mr. Thériault, you have the floor for six minutes.

Luc Thériault Bloc Montcalm, QC

Thank you very much, Mr. Chair.

Welcome, Ms. Piovesan.

In June 2025, Professor Bengio, who is also the founder of Mila, the Quebec Artificial Intelligence Institute, and scientific adviser to the institute, launched a new non-profit research organization on artificial intelligence security called LawZero, to prioritize security over commercial imperatives.

In your opinion, what are the greatest risks that artificial intelligence poses to security?

You said earlier that, from the outset, we must first ask ourselves what the objectives are for using this technology. People are very much driven by the lure of profit.

12:25 p.m.

Managing Partner, INQ Law

Carole Piovesan

We have to acknowledge the cybersecurity risks of artificial intelligence and whether we're ready as a country to defend against those risks. We also absolutely have to recognize the human rights- and social development-related concerns about artificial intelligence and walk in with our eyes wide open as businesses in our country start to adopt AI with much more interest.

Luc Thériault Bloc Montcalm, QC

With regard to human rights, the Montreal Declaration for Responsible Development of Artificial Intelligence states the following about AI systems:

1. AIS must be designed and trained so as not to create, reinforce, or reproduce discrimination based on—among other things—social, sexual, ethnic, cultural, or religious differences.

2. The development of AIS must help eliminate relationships of domination between groups and people based on differences of power, wealth, or knowledge.

Does the government take sufficient account of artificial intelligence biases in policy implementation? For example, did it do so in its Bill C‑27? How should it further integrate this concern?