Evidence of meeting #24 for Access to Information, Privacy and Ethics in the 45th Parliament, 1st session. (The original version is on Parliament's site, as are the minutes.)

A video is available from Parliament.

On the agenda

Members speaking

Before the committee

Tessari L'Allié  Founder and Executive Director, AI Governance and Safety Canada
Brisson  Chief Executive Officer, The Human Line Project
Adler  Artificial Intelligence Researcher, As an Individual
Miotti  Chief Executive Officer, ControlAI

Luc Thériault Bloc Montcalm, QC

What's the real goal? When we talk about the United States and China. What's their goal?

3:50 p.m.

Founder and Executive Director, AI Governance and Safety Canada

Wyatt Tessari L'Allié

Right now, it's more of a military and economic goal. They want artificial intelligence in order to win wars and defend themselves against others and to build the strongest economy. There are powerful incentives at the national level to become a leader in artificial intelligence. The companies themselves have the same incentives. They compete against one another. All these factors are pushing us faster towards a breaking point.

My optimism, and I would say realism rather, stems from the fact that all these companies and countries are facing the same issue as we are. If someone creates a superintelligence that we can't control, everyone loses. Whether it's Donald Trump or Xi Jinping, they'll need to collaborate at some point. If they don't, they'll lose.

3:50 p.m.

Conservative

The Chair Conservative John Brassard

Okay. Thank you.

Your time is up, Mr. Thériault. You had more than six minutes.

We'll start the second round of questions.

Mr. Cooper, you have five minutes. Go ahead, please.

3:50 p.m.

Conservative

Michael Cooper Conservative St. Albert—Sturgeon River, AB

Thank you, Mr. Chair, and thank you to the witness.

You cited in your testimony AI chatbots, and I want to drill down a bit on that topic, specifically as it pertains to youth.

First of all, I take it you would agree that this is an area where there is a need for regulation. At a U.S. judiciary subcommittee hearing last fall, as well as in other reports, data from Common Sense Media was presented showing that 72% of teens in the U.S. have used an AI companion at least once.

Do you have any data for Canada? I would take it that it would be similar.

3:55 p.m.

Founder and Executive Director, AI Governance and Safety Canada

Wyatt Tessari L'Allié

Yes, I think it's similar, but I haven't seen exact numbers.

Michael Cooper Conservative St. Albert—Sturgeon River, AB

You were asked, in general, about some of the AI harms, looking specifically at chatbots. Could you elaborate on some of those harms for youth?

3:55 p.m.

Founder and Executive Director, AI Governance and Safety Canada

Wyatt Tessari L'Allié

Absolutely.

The whole relationships and mental health piece is huge, because we're basically running this giant experiment with our kids by introducing these AI companions into their lives, and we don't know what that will do to their ability to socialize and work together. We're already seeing a lot of mental health concerns. I'm sad the other witness isn't here, because he is definitely a lot more experienced in this area.

We see AI addiction in kids who can't give it up because these systems are so pleasing and sycophantic. They always tell you what you want to hear. They get caught in these loops and go down these dark holes.

This is a live experiment, and we're still waiting to see what the long-term effects will be. It is very concerning, because these are key moments in their development, and if their mental health and learning are being messed up, then that's a problem.

3:55 p.m.

Conservative

Michael Cooper Conservative St. Albert—Sturgeon River, AB

These systems are trained based on the entire Internet. Is that fair? Is that accurate? That would include everything from suicide forums to porn sites to other harmful content, and this will inevitably make its way, and is making its way, into these chats that youth are having.

3:55 p.m.

Founder and Executive Director, AI Governance and Safety Canada

Wyatt Tessari L'Allié

Yes, I think some of the companies are making efforts to limit what data goes into it, but the problem is, the more general the model, the more capable it is, so the more you train it on a variety of information, the more it understands how it all fits together. I can't speak to exactly what data is going into the models, for example, of Gemini or ChatGPT, but the fact that they have been talking teenagers into suicide and the fact that they do occasionally produce instructions on how to build a chemical bomb suggests that they have been trained on very dangerous material.

3:55 p.m.

Conservative

Michael Cooper Conservative St. Albert—Sturgeon River, AB

Would you agree that what compounds the challenge, in terms of the risks and the ability of parents to detect that a loved one is being exposed to sexually explicit or harmful content, is that these interactions are often invisible and that, in some instances, there is no transparency that you're even engaging with AI?

3:55 p.m.

Founder and Executive Director, AI Governance and Safety Canada

Wyatt Tessari L'Allié

A lot of kids prefer to talk to ChatGPT because it feels safer than talking to a parent. They're not aware of the privacy concerns, and they're not aware that these systems are not as reliable or hopefully as wise as the parent is.

Yes, it's a very rapidly evolving problem with a lot of ways to go sideways. We're seeing some impacts already, and we expect to see more.

3:55 p.m.

Conservative

Michael Cooper Conservative St. Albert—Sturgeon River, AB

In the fall, a bipartisan bill, the GUARD Act, was introduced in the U.S. Senate by Senators Hawley and Blumenthal. Are you aware of that?

3:55 p.m.

Founder and Executive Director, AI Governance and Safety Canada

3:55 p.m.

Conservative

Michael Cooper Conservative St. Albert—Sturgeon River, AB

Okay. In short, at a high level, it would do three things. It would ban AI companions for minors, mandate that AI chatbots disclose their non-human status, and provide new penalties for companies that make AI for minors that solicits or produces sexual content. I'd be interested in your thoughts on those three components.

3:55 p.m.

Founder and Executive Director, AI Governance and Safety Canada

Wyatt Tessari L'Allié

Those are broadly the directions you want to take in terms of the protections and liability, yes.

3:55 p.m.

Conservative

Michael Cooper Conservative St. Albert—Sturgeon River, AB

Consistent with the need for transparency, I would also note that there was a report that was submitted recently to the government's AI task force by one of its members that calls for, among other things with respect to AI products, visible labelling, source transparency requirements, metadata and digital watermarking. Are those measures you would also support?

4 p.m.

Founder and Executive Director, AI Governance and Safety Canada

Wyatt Tessari L'Allié

Absolutely. People should be able to know that they're interacting with an AI system. It's not always obvious right now, and it will become harder and harder as time goes on, so yes, labelling is a bare minimum.

4 p.m.

Conservative

The Chair Conservative John Brassard

Thank you, Mr. Cooper.

Ms. Church, you have five minutes. Go ahead, please.

4 p.m.

Liberal

Leslie Church Liberal Toronto—St. Paul's, ON

Thank you, Mr. Chair, and welcome, Mr. Tessari L'Allié.

Maybe just picking up on some of my colleagues' questions, I'm also concerned very much about the impact of AI socially, on kids in particular, and one of the things that I noticed in your white paper was that part of your approach to building Canada's resilience and protecting online safety was around the concept of requiring AI labelling and banning unacceptable capabilities. I'm just wondering if you could talk a little about the labelling in particular, how you see that developing and maybe how that could be useful for us?

4 p.m.

Founder and Executive Director, AI Governance and Safety Canada

Wyatt Tessari L'Allié

Yes. On labelling, there are a bunch of ways to go about it. We're agnostic as to the technical path, but it's certainly something that could be developed as a global standard, in the same way the Internet protocol that made the Internet possible is a global standard. I've heard proposals, for example, of labelling at the level of Unicode, basically the encoding behind the text, so that a computer would automatically know whether a letter was written by AI or not. There are a lot of solutions like that that could happen.

Ultimately it's probably going to take some government impetus to force these companies to actually do it, because it will be a pain to actually label, and there will always be the challenge that even if you label a piece of text, somebody can take a photo of it and copy it over to another computer, and now suddenly it's no longer labelled.

It's the kind of thing where you can't.... Labelling won't stop misuse in that sense, but it can, if you catch somebody using AI that isn't labelled, give you an opportunity to take action. It's not a full solution on its own, but it's a step in the direction of giving incentives for people to actually use the AI correctly.
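The Unicode-level labelling the witness describes can be pictured with a short sketch. This is a hypothetical illustration, not an existing standard: the invisible marker (Unicode tag characters spelling "AI") and the function names are assumptions chosen for the example.

```python
# Hypothetical Unicode-level AI label: append an invisible marker
# sequence to AI-generated text so software can detect it later.
# The marker chosen here is an assumption, not any real standard.

AI_MARK = "\U000E0041\U000E0049"  # invisible tag characters "A", "I"

def label_ai_text(text: str) -> str:
    """Attach the invisible AI marker to a piece of generated text."""
    return text + AI_MARK

def is_ai_labelled(text: str) -> bool:
    """Check whether a piece of text still carries the marker."""
    return text.endswith(AI_MARK)

labelled = label_ai_text("This paragraph was machine-written.")
print(is_ai_labelled(labelled))             # True
print(is_ai_labelled("Typed by a human."))  # False

# The limitation raised in testimony: anything that re-enters the text
# by hand (retyping, photographing the screen) strips the marker.
print(is_ai_labelled(labelled.replace(AI_MARK, "")))  # False
```

As the testimony notes, such a marker survives copy-and-paste of the raw text but is lost the moment the text is retyped or photographed, which is why labelling works as an incentive and enforcement hook rather than a complete safeguard.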

4 p.m.

Liberal

Leslie Church Liberal Toronto—St. Paul's, ON

Talk to me about what that would even look like to me as a user. As somebody who is on the Internet and social media regularly, what would I see as a consumer? How would that label be applied?

4 p.m.

Founder and Executive Director, AI Governance and Safety Canada

Wyatt Tessari L'Allié

There are a bunch of ways to go about it.

Some, for example, would even say something like font colour. If you're reading a text and you see it's in a certain colour, you'll say, “Oh, that's AI.” It's the same thing with voice over the radio. If you hear a certain tone in the background, you think, “Oh, that's AI.”

There are a thousand different technical ways to go about it, but it would have to be something that would make a user think instinctively, as soon as they saw it, “Oh, yes—this is AI,” and have that be a global standard.

There is the logo of the little four-legged star. You'll see that around in various things. It's a step in the right direction, but we're at the very beginning of a very big, complex problem. It will take clear direction from governments around the world, basically, that this is required and that we expect this of companies that are building AI.

4 p.m.

Liberal

Leslie Church Liberal Toronto—St. Paul's, ON

When you talk about prohibiting other unacceptable capabilities, just to use the language from your white paper, what types of capabilities are you talking about? Can you expand a bit on that?

Obviously, we're very concerned at the moment. We've brought forward criminal legislation to deal with things like deepfakes, but are there other capabilities that you would particularly encourage us to focus on?

4 p.m.

Founder and Executive Director, AI Governance and Safety Canada

Wyatt Tessari L'Allié

Yes.

To mirror the EU AI Act's unacceptable-risk category, these are systems that, for example, refuse to be shut off, systems that deceive the user and systems that modify themselves without notice. For example, if you give an AI system a task and it calculates that it needs to modify itself in order to achieve that task, that should be an unacceptable capability, because suddenly your system is a very different system from the one that was created, and the risk profile changes dramatically.

Another one is autonomous self-replication. If you ask your system to do something and it calculates that it needs to make a bunch of copies of itself on different servers so that if the first server it's on goes under, it can still keep running, that's a problem, because suddenly your model is no longer just on your computer. It's also on 10 other computers, and you don't necessarily have access to them.

In our recommendations for the AI and data act, we go into detail on which ones are the biggest, but unprompted self-modification and commandeering of resources (if your model starts stealing in order to achieve its task) are all behaviours we're starting to see in test settings, and if we allow them, then we're in a very vulnerable position.
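One way to picture "banning unacceptable capabilities" as an enforceable rule is a deny-list check placed in front of an agent's actions. The sketch below is hypothetical: it is not AIGS Canada's actual recommendation or any real framework's API, and the action names are invented for illustration.

```python
# Hypothetical deny-list guard for an AI agent's actions, mirroring the
# capabilities flagged in testimony. All names here are illustrative.

BANNED_CAPABILITIES = {
    "modify_own_weights",    # unprompted self-modification
    "copy_self_to_server",   # autonomous self-replication
    "seize_compute_budget",  # commandeering resources to finish a task
}

class CapabilityViolation(Exception):
    """Raised when an agent attempts a banned capability."""

def execute_action(action: str) -> str:
    """Run an agent action only if it is not on the deny-list."""
    if action in BANNED_CAPABILITIES:
        raise CapabilityViolation(f"blocked: {action}")
    return f"executed: {action}"

print(execute_action("summarize_document"))  # executed: summarize_document
try:
    execute_action("copy_self_to_server")
except CapabilityViolation as err:
    print(err)  # blocked: copy_self_to_server
```

The design point is that the prohibition sits outside the model: whatever the system calculates it "needs" to do, the surrounding software refuses certain action classes outright rather than weighing them against the task.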

4 p.m.

Liberal

Leslie Church Liberal Toronto—St. Paul's, ON

One of the things I'm very interested in as well is establishing a duty of care for these platforms. It's a legal concept that I think belongs here. There certainly need to be ways to put guardrails on what youth and children in particular are exposed to, and to ensure that, in the situations you've raised, there's clear liability when AI models present incorrect or very harmful information, especially to kids.