Evidence of meeting #111 for Industry, Science and Technology in the 44th Parliament, 1st Session. (The original version is on Parliament’s site, as are the minutes.)

A recording is available from Parliament.

Also speaking

Momin M. Malik, Ph.D., Data Science Researcher, As an Individual
Christelle Tessono, Technology Policy Researcher, University of Toronto, As an Individual
Jim Balsillie, Founder, Centre for Digital Rights
Pierre Karl Péladeau, President and Chief Executive Officer, Quebecor Media Inc.
Jean-François Lescadres, Vice-President, Finance, Vidéotron ltée
Peggy Tabet, Vice-President, Regulatory Affairs, Quebecor Media Inc.

5:35 p.m.

Conservative

Brad Vis Conservative Mission—Matsqui—Fraser Canyon, BC

In the November 28 letter from the minister and his proposed amendments, in one of his bullets he talked about creating clearer obligations across the AI value chain, establishing data governance measures and establishing measures to assess and mitigate the risk of biased output. You already mentioned the definition. My assessment is that, as we are having this broader discussion on governance with respect to AI, the government and the officials at Industry Canada don't really know what they're doing right now, so they're providing themselves, in this bill, massive and broad regulatory powers.

I'm personally having a debate about whether in fact we need this law: whether we should be voting in favour of this aspect of Bill C-27 on artificial intelligence or whether the government could simply do this through their regulatory capacity right now. I don't know.

Do you have any comments on that? Is it even necessary to grant industry so many regulatory powers and so much oversight in legislation? Would it make any difference if we just did that through GIC regulation?

5:35 p.m.

Technology Policy Researcher, University of Toronto, As an Individual

Christelle Tessono

That's a really good question.

I would say that it is important to have in place legislation on artificial intelligence in the country, and I think that legislation should work towards facilitating collaboration across different sectors and departments.

What is happening right now in the country is that we have departments working on their own guidelines and their own standards without being able to speak to other experts in other departments—

5:35 p.m.

Conservative

Brad Vis Conservative Mission—Matsqui—Fraser Canyon, BC

I'm sorry to interrupt you. I do really value your testimony right now.

Do you think we need to take an approach similar to that of the United States, where I believe the White House has instructed various departments to be looking at AI regulation with respect to their spheres of influence?

5:35 p.m.

Technology Policy Researcher, University of Toronto, As an Individual

Christelle Tessono

It is my understanding that departments in Canada are doing similar work; it's just that they don't have the same powers that agencies and commissions in the United States have. The FTC, for example, can issue orders and fines and penalties and such, but I don't think that is the case for Canada.

That's why it would be important to have a regulator that would be independent and that would be able to impose fines while also working with departments.

5:35 p.m.

Conservative

Brad Vis Conservative Mission—Matsqui—Fraser Canyon, BC

I definitely agree with you on the regulator.

I believe I'm out of time now.

Thank you so much.

5:35 p.m.

Liberal

The Chair Liberal Joël Lightbound

You are. Thank you very much.

Mr. Van Bynen, the floor is yours.

February 14th, 2024 / 5:35 p.m.

Liberal

Tony Van Bynen Liberal Newmarket—Aurora, ON

Thank you, Mr. Chair.

This has been a real learning experience. I think a lot of differing concepts have been brought forward. It will be a challenge for us to land on some common ground in terms of bringing this legislation forward.

There's an additional document that I'd like to have some thoughts on. On September 27, the government unveiled the voluntary code of conduct on the responsible development and management of advanced generative AI systems. What are the strengths and weaknesses of the code of conduct?

I'll start with Mr. Malik and then go to Ms. Tessono.

5:35 p.m.

Ph.D., Data Science Researcher, As an Individual

Dr. Momin M. Malik

I have not read this, so I will defer to my colleague.

5:35 p.m.

Technology Policy Researcher, University of Toronto, As an Individual

Christelle Tessono

With the code of conduct, the main flaw is that it is voluntary. Companies can choose to adopt it, but it doesn't mean they're obliged to. In order to protect Canadians against harms caused by generative AI, things need to be enforceable.

5:35 p.m.

Liberal

Tony Van Bynen Liberal Newmarket—Aurora, ON

Do you believe the publication of the code of conduct provided sufficient information on how the code, the legislation and the regulations would interact?

5:35 p.m.

Technology Policy Researcher, University of Toronto, As an Individual

Christelle Tessono

Personally, as a researcher, I don't think so. I think the code of conduct is something that industry would have a lot more to say about.

What I'll say is that the code of conduct is part of a bigger puzzle on the regulation of artificial intelligence. It's not the only piece needed in order to safeguard Canadians against harms.

5:35 p.m.

Liberal

Tony Van Bynen Liberal Newmarket—Aurora, ON

We talked earlier, in a previous discussion, about how Bill C-27 in part appears to be at least based on the European Union's model. How would you compare those two pieces of legislation? More importantly, can you highlight some of the elements of the European proposal that are not included in the AIDA and should be?

Then I'll pass it over to my colleague.

5:40 p.m.

Technology Policy Researcher, University of Toronto, As an Individual

Christelle Tessono

The EU act creates different thresholds of reporting and transparency requirements for companies deploying different types of AI systems. In Canada, we have reporting and transparency requirements for only a specific class of systems, which means our scope is narrower; the EU includes more systems within its scope. The EU act also has a list of systems that should be prohibited because they pose unacceptable risks. This makes for stronger regulation and better protects people against harm.

5:40 p.m.

Liberal

Tony Van Bynen Liberal Newmarket—Aurora, ON

Thank you.

Go ahead, Mr. Turnbull.

5:40 p.m.

Liberal

Ryan Turnbull Liberal Whitby, ON

Thanks.

I just wanted to go back to my line of questioning earlier, which was about the right to object to automated processing of personal data. I really feel like Bill C-27 has dealt with this through express consent for using biometric data. I can just withhold my consent if I don't want someone to use that data. If they contravene that requirement, they would be breaking the law, because they wouldn't have sought my express consent.

I don't understand why in your paper you're recommending that we do something that is actually, I feel, included in the bill. Can you maybe speak to that, Ms. Tessono, from your perspective?

5:40 p.m.

Technology Policy Researcher, University of Toronto, As an Individual

Christelle Tessono

Yes. I think rights are certainly very important to have, but acting on those rights creates an unfair burden on the everyday person.

For example, I contested the use of my data. It was a financially, emotionally and physically exhausting process. I did that when I was living in the U.S. as a researcher at Princeton. It was not easy to do. Even with my expertise and access to resources and privileges, it wasn't an easy process. I can only imagine how very hard it would be for one of your constituents—a single mother or a teenager or a minor—to contest the use of an AI system and to ensure that their consent is respected.

5:40 p.m.

Liberal

Ryan Turnbull Liberal Whitby, ON

Just to clarify that, though, if the company has not sought their consent, they have broken the law. They would be subject to enforcement and penalties that are included in the law, would they not? So I don't understand what you're saying. I agree with what you're saying, but I feel like the bill is dealing with this. I don't see the deficiency that maybe you're seeing.

I'm just trying to understand your perspective on this. Could you clarify a little bit further?

Do you understand what I'm saying? Because—

5:40 p.m.

Technology Policy Researcher, University of Toronto, As an Individual

Christelle Tessono

I understand what you're saying. The deficiency arises when there's not an independent commissioner who is empowered to proactively investigate situations and commission audits. Yes, it would be illegal, but it would be dealt with at the courts, and that would take a lot of time and resources. Again, this is for something seen at scale, but if it's an individual case, it will be even harder for someone to go through the legal process at the courts.

5:40 p.m.

Liberal

The Chair Liberal Joël Lightbound

Thank you.

Mr. Garon, you have the floor for two and a half minutes.

5:40 p.m.

Bloc

Jean-Denis Garon Bloc Mirabel, QC

Thank you, Mr. Chair.

Ms. Tessono, I'll let you respond in French.

Your resumé and research show that you have studied the interactions between technology and racial inequalities. You also talked about bias. We know that algorithms reproduce what they feed on. If the data that they feed on includes racial inequalities, the algorithms can reproduce these inequalities.

For the sake of clarity, I would like a specific real‑world example of an artificial intelligence application currently in use that has generated these types of biases in people's daily lives.

5:40 p.m.

Technology Policy Researcher, University of Toronto, As an Individual

Christelle Tessono

Excellent question. I'm happy to respond in French.

I know that, so far in Canada, we have six cases involving Black people who were misidentified by facial recognition systems and who lost their refugee status as a result. These cases are currently before the Supreme Court of Canada. These are specific cases where the use of facial recognition systems can cause people to lose their status...

5:45 p.m.

Bloc

Jean-Denis Garon Bloc Mirabel, QC

Thank you. I'm interrupting you because time is running out, not because this isn't relevant—quite the contrary.

We understand this aspect. We've heard of cases involving Clearview AI, for example. These cases have also been addressed in other committees. That said, artificial intelligence technology is often harmless. It helps us find our way around—I'm thinking of Google Maps, for instance—and do all sorts of things on a daily basis.

Are there any other specific examples involving applications that I could have on my telephone, for instance? This isn't a trick question. I'm really struggling to find specific examples. We hear a great deal about bias. I'm trying to get a clear picture of what it involves.

Think of the applications that we use on a daily basis. What could it be, for example?

5:45 p.m.

Technology Policy Researcher, University of Toronto, As an Individual

Christelle Tessono

The applications that we use on a daily basis include social media, for example. Companies use recommendation and moderation systems that categorize users to sell them products or show them content knowing that it will interest them. For children, this creates mental health issues. Children are exposed to explicit or mentally harmful content, for example.

5:45 p.m.

Bloc

Jean-Denis Garon Bloc Mirabel, QC

Thank you.

5:45 p.m.

Liberal

The Chair Liberal Joël Lightbound

Thank you.

Mr. Masse had to leave briefly, so I'll give the floor to Mr. Williams.