Evidence of meeting #20 for Access to Information, Privacy and Ethics in the 45th Parliament, 1st session. (The original version is on Parliament’s site, as are the minutes.)

A recording is available from Parliament.

On the agenda

Members speaking

Before the committee

Connor Leahy  Chief Executive Officer, Conjecture Ltd.
Gabriel Alfour  Chief Technology Officer, Conjecture Ltd.
Carole Piovesan  Managing Partner, INQ Law

Luc Thériault Bloc Montcalm, QC

I will ask my question, and if we run out of time, you can answer it when I have another turn.

One of the ethical issues that worries me is the energy-intensive nature of data centres. For example, there are two coal-fired power plants in Mumbai that are extremely polluting. They were scheduled to be shut down, but they will continue to operate to meet the enormous electricity needs of Amazon’s data centres, which are being built all over the world to compete with other large companies such as Google. I am concerned that artificial intelligence is being developed for the benefit of wealthy countries, but at the expense of the environment and the health of people living in developing countries.

11:45 a.m.

Conservative

The Chair Conservative John Brassard

Please give a brief answer.

Luc Thériault Bloc Montcalm, QC

This could be the beginning of an answer.

11:45 a.m.

Chief Technology Officer, Conjecture Ltd.

Gabriel Alfour

This is one aspect of a more general fact: AI is developed without much concern for people, and people are not put in the loop. That's mostly how we see it.

11:45 a.m.

Conservative

The Chair Conservative John Brassard

Thank you.

Mr. Mantle, you have five minutes. Go ahead.

11:45 a.m.

Conservative

Jacob Mantle Conservative York—Durham, ON

Thank you, Mr. Chair.

Thank you to our witnesses for appearing and providing valuable testimony.

Picking up on the comment from my Bloc colleague about the protection of people, I want to focus on priorities for a moment and the government's current priorities with respect to artificial intelligence.

You may be aware that the government is currently reviewing its AI strategy, but the existing AI strategy, the pan-Canadian artificial intelligence strategy, lists three priorities. The first priority is commercialization, so making money off of AI; the second is standards or protections in that area; and the third is talent and research.

Do you think those priorities are in the proper order?

11:50 a.m.

Chief Executive Officer, Conjecture Ltd.

Connor Leahy

I think it wouldn't be a surprise that we would disagree that these are the most important priorities. Both of us are technologists by background. We love technology. We got into this to do technology to make the world a better place. Technology is dual-use. It is power. It is very important to use it correctly when we're dealing with unprecedented technology that has this kind of power.

To get a good outcome, the most important thing is to get this right and to not repeat the mistakes of social media. Don't let technology exist for technology's sake. Let technology exist to benefit people. This is not what technology will do by default. We have to make it do that. We think this should likely be the top priority.

In terms of the most acute risk, we personally believe superintelligence to be the most pressing.

11:50 a.m.

Chief Technology Officer, Conjecture Ltd.

Gabriel Alfour

I would tend to agree.

Perhaps I can add something. I think it was in July of this year that YouTube Shorts reached 200 billion views per day. YouTube published an entire article about this, and it was extremely happy about reaching 200 billion views per day.

We think a lot of people building AI systems are in this paradigm. We know there are many technical employees at AI companies. It's extremely fun to watch loss go down and results on benchmarks go higher. It's good in itself to get more technology to just get more. It's fun. It's great to see. You don't need to think much.

I will echo what Connor said. I believe the biggest priority when developing technology should be to ensure that it benefits people and benefits humanity.

11:50 a.m.

Conservative

Jacob Mantle Conservative York—Durham, ON

If I'm understanding your testimony, you're suggesting that the priorities should be reversed and that standards and protection should come before commercialization in the government's strategy. Is that correct?

11:50 a.m.

Chief Technology Officer, Conjecture Ltd.

Gabriel Alfour

I think benefiting humanity can be done through many ways. In the context of superintelligence, we believe it's through regulation and protection, but for other things, it's through using the technology in good ways, in ways that benefit people.

Personally, I'm quite hopeful about AI in the context of education. I think different people have different priorities, but if one tries to use AI for good, I believe a lot can be done with it. It's a very powerful technology.

11:50 a.m.

Conservative

Jacob Mantle Conservative York—Durham, ON

That's great.

Canada has recently created, in this government, a new ministerial position, the Minister of AI. In one of his first speeches after becoming Minister of AI, Mr. Solomon said that Canada would move away from “over-indexing on warnings and regulation” to make sure the economy benefits from AI. Could I get your reaction to that approach?

11:50 a.m.

Chief Executive Officer, Conjecture Ltd.

Connor Leahy

My general opinion is that, yes, you can get a lot of economic benefit by neglecting human flourishing. This is something we've seen historically many times. There are many ways to pump a stock market at the expense of people. For example, deregulating Ponzi schemes is very profitable in the short term. Eventually, the bill comes.

I think we're seeing a similar thing here. Is building long-term, responsible stewardship and building a good society that uses technology effectively...? Again, I think benefiting mankind also means using AI, but using it correctly and for the benefit of mankind. This is harder and in the short term less profitable.

11:50 a.m.

Conservative

The Chair Conservative John Brassard

Thank you, Mr. Mantle.

Mr. Leahy, I'm going to get you to move your microphone down a bit. It's coming in a little hollow right now. We want to make sure the interpreters understand what you're saying.

Ms. Church, you have five minutes. Go ahead.

Leslie Church Liberal Toronto—St. Paul's, ON

Thanks, Mr. Chair.

Welcome to both of our witnesses.

I'll go first to Mr. Alfour. What experience do you have in dealing with the Canadian artificial intelligence safety institute, or CAISI, which was launched in November 2024? Do you have any experience in dealing with it directly?

11:55 a.m.

Chief Technology Officer, Conjecture Ltd.

Gabriel Alfour

I do not have much experience dealing with it directly, aside from interacting with a few people who I believe were with CAISI when I talked to them.

Leslie Church Liberal Toronto—St. Paul's, ON

Would either of you have comments on how you think its mandate and work are shaping up?

I would note for the committee that CAISI was developed by the Government of Canada in part to examine the risks posed by advanced AI systems to help develop tools and guidelines to manage those risks. It also works collaboratively internationally to try to develop protocols for AI safety.

It's only a year old at this stage, but I was wondering if either of you had any guidance for us as a committee on the reach of its mandate or how you think its work could be enhanced going forward.

11:55 a.m.

Chief Technology Officer, Conjecture Ltd.

Gabriel Alfour

As far as I understand it, right now, CAISI is an observatory body. It does not regulate, and it does not constrain. I can't say whether it's the role of CAISI specifically to regulate and constrain. That is a political call outside my purview, but I think someone should have this authority.

We had warnings two years ago from experts about extinction risks. We already know that several leading AI corporations are racing specifically for superintelligence. Now is the time for action. Whether this should be done through CAISI, through the Minister of AI or through another entity is beyond my pay grade, but I believe it should be done.

Leslie Church Liberal Toronto—St. Paul's, ON

It's one of the tools in the tool box to help develop the framework that Parliament would need to legislate if we were going down that route. It's interesting, because I think we're attuned to the risk, but as you've acknowledged, finding the means to regulate and understand AI in its various forms is going to pose a challenge.

Mr. Leahy, do you have any wisdom for us on how we could grow the mandate or work of CAISI to support us in developing framework and security protocols to deal with this?

11:55 a.m.

Chief Executive Officer, Conjecture Ltd.

Connor Leahy

I have not had personal experience with CAISI. I have not talked to them. I have talked to many people around Yoshua Bengio at Mila and his group. He is one of the godfathers of AI. These are the people I have the most experience with.

Generally, it is very important to have some amount of technical capability that can give non-partisan advice to governments. This is a very important function. It's also very important to understand that a lot of these institutions, in a sense, through no fault of their own, are often corrupted by business interests.

Many of the people who work at these companies or at these institutions have very lucrative offers from these kinds of companies and often have their identity tied to technology being good. On the other hand, there are many people whose identity is tied to technology not being good. We don't want either of these things. What we want is a balanced understanding of how we can mitigate the risks that are truly unacceptable, while still benefiting from the technology where we can. This is a hard tightrope to walk, but it's important to make this the clear mandate.

Leslie Church Liberal Toronto—St. Paul's, ON

Thank you. That's very helpful. I think it's helpful that the Canadian AI institutes, including Mila, are involved with CAISI as well to provide some of that balanced input.

Let me take you in a different direction. What type of counterprotocol would you suggest? You've mentioned the geopolitical risks we face from other foreign actors. I'm curious to know if you have any guidance for us to think about, either on the cybersecurity reaction or on the protocols we should put in place to protect our critical infrastructure and systems. Are there any other counterprotocols that you might suggest we look at as a government?

11:55 a.m.

Conservative

The Chair Conservative John Brassard

I need a rather quick response to the question, please.

11:55 a.m.

Chief Technology Officer, Conjecture Ltd.

Gabriel Alfour

I tend to think that countries should now seriously think about national firewalls to prevent outside interference. There has been a lot of taboo around them in the past. If a country is to ensure its cybersecurity and security in its cybersphere, it should strongly consider this in general.

That's my personal opinion, not that of ControlAI. To your question, I think that's the most direct one.

Noon

Conservative

The Chair Conservative John Brassard

Thank you, Ms. Church.

Mr. Leahy and Mr. Alfour, I want to thank you for appearing before the committee this morning.

We are going to suspend for a few minutes while we get into our second hour.

12:05 p.m.

Conservative

The Chair Conservative John Brassard

I'm going to call the meeting to order.

I would like to welcome our witness in the second hour. From INQ Law, we have Carole Piovesan, who is the managing partner.

Unfortunately, we were to have another witness, but they could not be here. It's the Carole show for the second hour.

Carole, you have up to five minutes to address the committee. Go ahead, please.

Carole Piovesan Managing Partner, INQ Law

Good afternoon. Thank you, Mr. Chair and honourable members of the committee.

My name is Carole Piovesan. I am a managing partner at INQ Law, where I advise clients on privacy, cybersecurity, data governance and AI risk management.

I've had the privilege of contributing to AI policy discussions nationally and internationally, including through the OECD.AI Policy Observatory. I have previously appeared before this committee, as well as the INDU committee. I am an adjunct professor at the University of Toronto's faculty of law, where I teach AI regulation. As well, I co-authored the book Leading Legal Disruption: Artificial Intelligence and a Toolkit for Lawyers and the Law.

The opinions I present today are my own personal opinions and do not reflect those of my law firm.

To understand how we should govern AI, we should go back to first principles and ask what AI is trying to achieve. In 1950, Alan Turing posed what he called the imitation game: a test to determine whether machines could think. He believed that one day machines would be able to play games, remember, observe results of their own behaviours, learn from rewards and punishments, and even deliberately introduce mistakes into their working.

Today, some of the leading AI researchers around the world are divided on where the trajectory of AI is taking us. Award-winning Canadian researchers such as Yoshua Bengio and Geoffrey Hinton, both pioneers of deep learning, warn that we may soon have computers that exceed human intelligence, with profound implications for safety and control—indeed, the existential threat we all hear about. Others, such as Yann LeCun, another pioneer, advance an argument that is more aligned with artificial machine intelligence, which is best understood as augmenting human intelligence rather than replacing it.

The purpose for pursuing AI, and the achievements of those pursuits, matter in how we think about governance. If a tool is to be used to extend human capabilities, we govern its use. If AI is an autonomous system capable of independent reasoning, we regulate its development and deployment with a different level of vigilance. Canada's approach must account for both.

Around the world, we are seeing at least three distinct models of governance being presented for AI.

Under the Trump administration, we are seeing a deregulatory approach in the United States with an emphasis on competitiveness over comprehensive safeguards. The federal approach relies on existing sectoral laws applied through agencies such as the FTC, while actively resisting state-level experimentation with stricter AI rules.

The United Kingdom and Singapore take a different approach. There, we are finding a much more tailored sectoral approach to AI regulation. The U.K., in particular, has a principles-based approach asking existing sector-specific regulators to interpret and apply cross-cutting principles such as safety, transparency, fairness and contestability within their domains. The U.K. considers that this approach offers critical adaptability that keeps pace with rapid technological change, although there are certain developments that suggest binding measures for the most powerful AI models may be forthcoming.

Singapore has certainly adopted a much more soft-law, voluntary framework. There is no specific AI regulation. However, Singapore's approach through consensus building among government, industry and citizens, and through instruments such as the model AI governance framework and the AI Verify Foundation testing tool kit, has proven somewhat successful in building a sense of trust and a common approach to AI development. With Singapore's investment in national AI literacy and its consultative and iterative approach to governance, it's a model from which Canada can draw inspiration.

Then we see the third model, which is far more prescriptive. That model is found in the EU AI Act, which I know this committee has already heard about. That act is much more horizontal and is focused on the prescriptive life cycle of AI development and deployment across the supply chain.

Canada's approach should be tailored to our context. Regulating frontier AI systems is not the same as regulating Copilot as used in a law firm or a chatbot on a service line. The U.K.'s context-specific approach recognizes this. Canada is more like the U.K. and Singapore than the United States or Europe. We value proportionate regulation that protects rights while enabling innovation.

I'll close with my three-point call to action.

The first is to continue building a regulatory guidance approach for safe AI. Our AI safety institute must be operating at full force, demonstrating that Canada takes the safety of these systems seriously. We must continue to target iterative standards guidance and a directives-based approach to artificial intelligence, with an emphasis on real-world testing for high-risk AI contexts. Lab benchmarks and off-line evaluations only show how models perform on static tests, not how they actually interact in real-world use.

Second, and very importantly, we need to improve the diversity of representation and perspectives in policy and throughout the development, evaluation and deployment process. Individual perspectives matter, and they are highly underrepresented throughout the AI ecosystem.

Third, we must conduct an environmental scan to better understand, on a sectoral basis, where our laws may have gaps to account for AI or where AI is already accounted for, so we have the coverage we need for the everyday use of AI in business. Targeting soft and hard law at home in a tailored manner and enabling Canada to play to its trusted global position to ensure robust and harmonized standards, certifications and guidance for responsible AI should be our path forward.

Thank you. I welcome the committee's questions.