Evidence of meeting #106 for Industry, Science and Technology in the 44th Parliament, 1st Session. (The original version is on Parliament's site, as are the minutes.)

A recording is available from Parliament.


Also speaking

Todd Bailey  Chief Intellectual Property Officer and General Counsel, Scale AI, As an Individual
Gillian Hadfield  Chair, Schwartz Reisman Institute for Technology and Society, University of Toronto, As an Individual
Wyatt Tessari L'Allié  Founder and Executive Director, AI Governance and Safety Canada
Nicole Janssen  Co-Founder and Co-Chief Executive Officer, AltaML Inc.
Catherine Gribbin  Senior Legal Adviser, International Humanitarian Law, Canadian Red Cross
Jonathan Horowitz  Legal Adviser, International Committee of the Red Cross, Regional Delegation for the United States and Canada, Canadian Red Cross

Noon

Chief Intellectual Property Officer and General Counsel, Scale AI, As an Individual

Todd Bailey

One of the reasons that I'm suggesting that Canada not be first and not forge ahead is that we need to exist within this world. We're not a leader. We're a leader in research; we're not a leader in adoption. We're a small market. If rules get written elsewhere and they don't apply in Canada, this will be worse than just not getting the Super Bowl commercials. We'll get cut off.

Noon

Conservative

Brad Vis Conservative Mission—Matsqui—Fraser Canyon, BC

Okay. Thank you.

Ms. Hadfield, your letter to our committee states that we should focus not on domains but on degrees of impact in the definition of “high-impact system”. Given that point, what are your thoughts on the definitions the minister has outlined in some of the companion documents he has sent to committee members?

Noon

Prof. Gillian Hadfield

Thank you very much for the question.

This is an important point. It goes to an observation that I think we heard previously. If you say your high-impact area is health care, that can be everything from a scheduling application all the way through to a treatment and diagnosis application. Those have very different actual impacts.

Most of our legal system.... Think about the background law that's here, which is tort law or malpractice law, for example, in the health care domain. It keys on how big the impact is that you could have in a given context. It doesn't say that everything in health care, everything in education or everything in adjudication is high impact.

I see that the definitions of “high-impact” are still going by domain. That does track with what the EU is doing. I think this is the mistake that the EU is making as well.

This is why we need to be thinking of this as an iterative process, where we need to find out where somebody can be suffering a real harm or where society or the economy can be suffering a real harm and not just say that anything in this domain is.... I think that's going to be really excessive and burdensome for industry because we are going to require a ton of process around things where, honest to gosh, it's really not going to make a big difference to people's welfare.

I think we should be finding methods that don't say, “If it's health or education, it must be high impact.” I think you want to look at specific applications.

12:05 p.m.

Conservative

Brad Vis Conservative Mission—Matsqui—Fraser Canyon, BC

Thank you.

In some of the proposals we received at the end of November and early December, the government talked about creating a centre of expertise on AI within the Department of Industry.

Is the Department of Industry the correct place for government to be studying, examining and regulating AI? Should a body looking at some of the most existential harms our generation could face be an independent office of Parliament, for example? That's one suggestion.

That's for you, Ms. Hadfield.

12:05 p.m.

Prof. Gillian Hadfield

Thank you.

I think this is a really important question. I want to go back to the observation that this is a general-purpose technology. It's going to change the way absolutely everything works. I think we do need to be asking all of our regulators throughout the system to look at this.

A body of expertise that can pull that together, coordinate and be a centre of expertise.... I do think this is the direction the U.K. and the U.S. are headed. I'm not familiar enough with the kinds of structures available in the Canadian system, but if there were an independent office under Parliament, I think that would be good.

I want to reference the earlier question about whether this should be an independent commission, like the one that's being proposed, for example.

I think there are dangers in having an independent commission that's charged with protecting against harms from AI, because I think that will not put enough weight on the enormous economic and welfare benefits that will derive from AI. I think the appeal of having it under the ministry right now is that there's an obligation to balance the risks and the benefits, and the costs and the advantages.

However, I do think—

12:05 p.m.

Conservative

Brad Vis Conservative Mission—Matsqui—Fraser Canyon, BC

Some people have criticized the department, asking how a single department can, in one respect, be responsible for economic development, yet also enforce rules on the very industry whose economic development derives from the department's own involvement.

In the first part of the bill, we've spoken a lot about the protection of privacy for children, for example. We haven't even touched upon the impact that AI is going to have on youth development in our country.

Can a department really be committed to doing both of those things, when you factor in things like the sensitive information of children?

12:05 p.m.

Liberal

The Chair Liberal Joël Lightbound

We'll need a brief answer, Ms. Hadfield.

12:05 p.m.

Prof. Gillian Hadfield

Thank you.

I think we're going to need a lot of places where we have this protection. AIDA is now almost two years old. I thought it was perfectly fine to have it inside the ministry at that point, but things are moving along, and I think we're going to need other places focusing on this as well.

12:05 p.m.

Liberal

The Chair Liberal Joël Lightbound

Mr. Tessari L'Allié, I'll let you briefly add something.

12:05 p.m.

Founder and Executive Director, AI Governance and Safety Canada

Wyatt Tessari L'Allié

The concern is absolutely valid. As a practical matter, this is a huge issue. You're probably going to need dozens, if not hundreds, of staff, and it will probably have to be in ISED because the work has to be coordinated across government.

I would highly recommend having an independent parliamentary office whose goal is to oversee ISED and make sure it is not misusing that power.

12:05 p.m.

Liberal

The Chair Liberal Joël Lightbound

Thank you very much.

Ms. Lapointe.

12:05 p.m.

Liberal

Viviane Lapointe Liberal Sudbury, ON

Thank you, Mr. Chair.

Mr. Tessari L'Allié, in your opening statement you referenced the implications that AI has for labour. I believe the wording you used was that human resources would become “increasingly less useful”.

In your opinion, how can we wisely manage these implications?

12:10 p.m.

Founder and Executive Director, AI Governance and Safety Canada

Wyatt Tessari L'Allié

That is a huge discussion. Alongside our efforts on Bill C-27, we're also calling for a national dialogue on AI, because the question of what the human being does in a world where everything can be done better by an AI system is a huge one.

Precisely because such a system would be smarter than humans, we could create a better world. We could live more meaningful and more fulfilling lives, but right now nobody knows exactly what that means. This is why it's worth taking the time to talk about it.

It's also why you need a law to regulate it in the meantime, so if you have to slow down certain capabilities to give people time to figure out what's next, you can do that.

12:10 p.m.

Liberal

Viviane Lapointe Liberal Sudbury, ON

Thank you.

You also said that Canada needs an AI and data act to limit the current and future harms by banning high-risk uses and capabilities.

How do you foresee the enforcement of these bans on high-risk uses?

12:10 p.m.

Founder and Executive Director, AI Governance and Safety Canada

Wyatt Tessari L'Allié

Actually, what the minister has suggested, with the powers to audit and oversee operations, is very important. The audit power is a huge one.

Basically, you need a very competent group of skilled people in government who are on the ball with what's happening and can work with industry to let them know to hold off on this and work on that instead, for example...with safe harbours or regulatory environments as well. Most important is that you have a big enough team with the authority to do things well, and oversight to make sure the team is competent and not being swayed by lobbying.

12:10 p.m.

Senior Legal Adviser, International Humanitarian Law, Canadian Red Cross

Catherine Gribbin

If I may, I'll jump in on Mr. Bailey's earlier point that AI, in whatever way it is going to be used in the future, is governed by the law. On your question about how to ensure its lawful use, we are all cognizant that pre-existing laws already govern any use of AI. Among the examples used earlier was anti-discrimination: we have that human rights framework. We should also realize that we have international humanitarian law that applies to AI's use.

IHL also speaks to the research and development aspect. It is really important for us to be aware that AI is coming into existence in a world where there are already laws that will govern its use and provide instruction to those who are creating it as well as those using it, making clear that it must be done in a lawful manner. That is an important realization and framework to remind ourselves of.

Thank you.

12:10 p.m.

Liberal

Viviane Lapointe Liberal Sudbury, ON

Thank you.

Ms. Hadfield, I found it really interesting when you talked about an area of concern where there's a need to focus on both individual harm and systemic risks.

Can you expand on this point, specifically from the lens of what the government can do around that?

12:10 p.m.

Prof. Gillian Hadfield

It's very clear that AIDA is focused on individual harms. We've adopted a product safety-type approach—as has the EU—that says that companies should be looking at whether or not their products can cause this harm.

That does not address the question of what it means that we already have autonomous systems; trading on our financial markets is one example. As for the rapid advances that have been mentioned, the kind that could come in the next two to five years, the talk is about personalized AI agents out there buying, selling, creating products and operating websites. We're about to see that kind of autonomy, with autonomous agents starting to participate in our economies. Our thinking on this is still five years out of date, and we need to get up to speed rapidly.

The systemic harms that I think about are what happens to the equilibrium of our financial, economic, regulatory and political domains when we have huge amounts of autonomous action taking place. We've already seen that in social media. We need to think about how we'd act there.

The types of things I'd say we need to be thinking about are.... All of our regulators should be doing what I've called a regulatory impact analysis to figure out how the introduction of these systems impacts our capacity to control the liquidity and reliability of our financial markets, to protect against anti-competitive behaviour in our other markets, or to ensure that our court systems and decision-making systems, for example, are still safe and trusted.

We have to be thinking about it at that level. I do not think that the individual harm, product safety and risk management approach that AIDA and the EU are taking will get us there. That's the systemic point.

12:10 p.m.

Liberal

Viviane Lapointe Liberal Sudbury, ON

It speaks to the other comment that you made about the rate of change demanding responsiveness and adaptability. Can you advise this committee on how the government, specifically, can effectively accomplish this?

12:15 p.m.

Prof. Gillian Hadfield

It's really critical to recognize that we are at a point in history we have never been at before. Our approaches to regulation and legislation are not going to keep up with this, and we will suffer for it, but there are approaches that can help.

One of the things the government can do—and this is the regulatory markets idea—is to encourage private sector entities to build the technologies that will do that tracking. You've probably all heard about red-teaming exercises. These are exercises that, say, OpenAI is doing to try to make sure that ChatGPT can't be hacked into telling people how to build a bomb. That's happening inside the companies right now.

The government can basically certify the providers of those services, independent companies and organizations and say.... I don't know. I'm looking at Wyatt here. It's the back of his head, unfortunately, because I'm on the cameras.

You can have organizations that say they have hired those terrific engineers—and I know there are a lot of them out there who want to be working on this side of the problem—and that they have tested these systems, and the government can then say that this is a system it trusts to protect against this piece of it.

There's just no getting around the fact that it's going to be iterative and piecemeal in that way, but we need to get started. We cannot spend another two years talking about this. We need to get started.

12:15 p.m.

Chief Intellectual Property Officer and General Counsel, Scale AI, As an Individual

Todd Bailey

If I could just add quickly to that, one of the things that has come to light recently is that the big tech companies are actually the ones stoking this fear. There's a gentleman by the name of Andrew Ng who has started to ask why.

There's a concept of regulatory capture, whereby entrenched businesses want the regulation to favour them. The idea of certifying OpenAI's tool is great for OpenAI, because it becomes a barrier to entry for smaller companies trying to come in. To return to the quote about how you defeat the empire by arming the rebellion: you certainly don't do it by entrenching the empire in regulation.

That's one concern that Canadian businesses face.

12:15 p.m.

Liberal

The Chair Liberal Joël Lightbound

Thank you.

Mr. Garon.

12:15 p.m.

Bloc

Jean-Denis Garon Bloc Mirabel, QC

Thank you very much, Mr. Chair.

I'll continue with you, Professor Hadfield.

You spoke about the regulatory framework. It's in the public interest to have a regulatory framework. However, you said that this framework shouldn't be overly general.

You also suggested creating a mandatory registry of large AI models. I'd like you to take a minute to tell us about this registry and what companies would have to provide or report to the registry.

Also, given what we've heard today, aren't you afraid that some companies will view this proposal as a threat to innovation or a business risk in relation to the code they've developed?

In short, I want to know what this mandatory registry would look like and if it would represent a business risk for innovators.

12:20 p.m.

Prof. Gillian Hadfield

Thank you.

I want to say, first of all, that I think the framework needs to be general in the sense that it can reach all of the possible uses and all of the possible impacts that we need to learn about. We're going to have to introduce particular requirements along the way.

Let me speak to the registry proposal, which has now been partially adopted in the U.S. executive order from the Biden White House. The idea is that you would make it quite clear to companies who needs to register: it's about the largest models, those that have the capacity for general intelligence and, as I was mentioning, autonomous behaviour in the economy.

The commercial risk that you're recognizing is.... What would the registry require? The registry would require that there be a government office or a government agency, and this goes back to the question of whether it should be an office under Parliament. Those are questions to explore.

It requires, as a starting point in terms of framework and infrastructure, that entities proposing to deploy these models into our economy and into our society disclose to government what they've built, how big it is, what capabilities they know about and what kinds of data it was trained on. This is a starting point for us to know what's out there, because right now our governments don't have that visibility.

12:20 p.m.

Bloc

Jean-Denis Garon Bloc Mirabel, QC

Once companies send that information to government, for example, who would have access to it?