Evidence of meeting #111 for Industry, Science and Technology in the 44th Parliament, 1st Session. (The original version is on Parliament’s site, as are the minutes.) The winning word was prices.

A recording is available from Parliament.

On the agenda

MPs speaking

Also speaking

Momin M. Malik  Ph.D., Data Science Researcher, As an Individual
Christelle Tessono  Technology Policy Researcher, University of Toronto, As an Individual
Jim Balsillie  Founder, Centre for Digital Rights
Pierre Karl Péladeau  President and Chief Executive Officer, Quebecor Media Inc.
Jean-François Lescadres  Vice-President, Finance, Vidéotron ltée
Peggy Tabet  Vice-President, Regulatory Affairs, Quebecor Media Inc.

5:05 p.m.

Liberal

The Chair Liberal Joël Lightbound

Go ahead, Mr. Turnbull, on a point of order.

5:05 p.m.

Liberal

Ryan Turnbull Liberal Whitby, ON

I know we're studying Bill C-27. I'm just not sure of the relevance. I know that SDTC is another topic this committee is studying, but I don't understand how Mr. Perkins' line of questioning and request for documentation are related to the current work we're doing on today's agenda. It's not to say that Mr. Balsillie wouldn't be able to do that in future meetings on SDTC, but this is not the time or the place, in my opinion.

5:05 p.m.

Liberal

The Chair Liberal Joël Lightbound

I tend to agree, Mr. Turnbull, and I will ask Mr. Perkins to focus on the matter at hand before this committee, which is Bill C-27. However, I'll note that Mr. Balsillie is free to communicate with the committee, as he wishes, the information he feels is relevant to our studies, by and large.

Go ahead, Mr. Perkins.

5:05 p.m.

Conservative

Rick Perkins Conservative South Shore—St. Margarets, NS

I appreciate that, but I was establishing the fact that Mr. Balsillie has had a long period of advocacy in this area, which is relevant to this bill.

You said in your opening statement, in one of your recommendations, that we need to “Give individuals the right to contest and object to AI”. That's an important element. I am also aware that when the scientist for Microsoft, Mr. Rashid, developed this early learning model, he made the technology widely available. Mark Zuckerberg has also said that with the next generation of AI he will make that available.

What do you think is the result of making this technology widely available for anyone to use?

5:05 p.m.

Founder, Centre for Digital Rights

Jim Balsillie

Well, be careful of the contrast between algorithms very broadly and learning models narrowly. There is the open source that they are doing with the Facebook account case and how that locks you into needing their tools. So those are open or sort of open, but the algorithms that manipulate our children or do the other forms of biasing are long-standing and have been around since the beginning of the surveillance capitalism model some 20 years ago.

I think AIDA's job is a broad one, and LLMs are a subset of that. Again, you received notice, and it was mentioned in previous testimony, that first nations haven't been consulted on this, and they're going to contest this in the courts, and there are many other aspects of civil society. This is complex, multi-faceted stuff. The consequences are high. There are incompatibilities with what certain provinces are doing and who trumps whom, and it looks as though the federal legislation trumps the provincial. This is a place where you have to get it right in a complex zone.

So, yes, LLMs are tricky, and Canada's approach on this, which I commented on, goes beyond AIDA. You cannot think of this stuff independently of computing power and sovereign infrastructure and how we're going to approach those properly to be a sovereign, safe and prosperous country. If you're in for a penny, you're in for a pound.

5:05 p.m.

Conservative

Rick Perkins Conservative South Shore—St. Margarets, NS

So there's no advantage to being first on this.

5:05 p.m.

Founder, Centre for Digital Rights

Jim Balsillie

There isn't, not if the legislation is wrong. Also, we should not squander the scarce resources we have to try to build on the kind of country we inherited.

5:05 p.m.

Liberal

The Chair Liberal Joël Lightbound

Thank you very much.

Mr. Turnbull, the floor is yours.

5:05 p.m.

Liberal

Ryan Turnbull Liberal Whitby, ON

Okay.

Thank you to all the witnesses for being here today. I really appreciate your contributions.

Mr. Balsillie, welcome back to committee. I know it's your second time here for this study. I appreciate your contributions.

I just want to say something off the top here, which is that we've had 86 witnesses, 20 meetings at INDU and 59 written briefs; the department and ministry have conducted over 300 meetings and consultations on Bill C-27, and the regulations that will be forthcoming will involve two years' worth of extensive consultations before they are released. I think there has been consultation. I understand that some witnesses today feel as though there needs to be more, and I value their perspective, but I just want to correct the record. When people say no consultation has been done, I think the evidence or the facts substantiate a different claim.

I just wanted to start with that.

5:10 p.m.

Conservative

Brad Vis Conservative Mission—Matsqui—Fraser Canyon, BC

I have a point of order, Mr. Chair.

5:10 p.m.

Liberal

The Chair Liberal Joël Lightbound

Mr. Turnbull, wait just one second. We have a point of order.

5:10 p.m.

Conservative

Brad Vis Conservative Mission—Matsqui—Fraser Canyon, BC

I just have to interrupt. The witnesses have not said that there has not been consultation, but that consultation has not been sufficient, just to clarify for the record.

Thank you.

5:10 p.m.

Liberal

The Chair Liberal Joël Lightbound

Okay. That's not a point of order, Mr. Vis. I would appreciate no further interruptions.

That is not taking away from your time, Mr. Turnbull. Go ahead.

5:10 p.m.

Liberal

Ryan Turnbull Liberal Whitby, ON

Thank you.

I've taken the time, Mr. Malik and Ms. Tessono, to read the report you two worked on, called “AI Oversight, Accountability and Protecting Human Rights”, which I thought was quite provocative, interesting and, I think, really well done. In that report, in the summary of recommendations, the fourth recommendation says, “Bill C-27 Needs Consistent, Technologically Neutral and Future-Proof Definitions”.

I want to ask both of the panellists who are joining us remotely today, how do you make definitions future-proof when AI is evolving so quickly? Mr. Malik, maybe you could start, and then I can go to Ms. Tessono.

5:10 p.m.

Ph.D., Data Science Researcher, As an Individual

Dr. Momin M. Malik

Absolutely. The specific models and trends are evolving quickly, but I think there have been about 20 or 30 years of statistical machine learning as the core of everything we see AI having success with. That, in turn, is—at least as I talk about it—an instrumental use of correlations. Historian Matt Jones and data scientist Chris Wiggins have a fantastic book about this, How Data Happened, which details this shift.

I think we can focus on thinking of this like insurance: How do we regulate what insurance does? In the sense that it is addressing the goals, the outcomes and the processes, it is going to persist whatever new model comes out, if it is indeed based on correlations, as everything has been for the past 30 years and as everything currently is as well.

Now I'll pass it over.

5:10 p.m.

Liberal

Ryan Turnbull Liberal Whitby, ON

Thank you.

Ms. Tessono, would you weigh in on that? It's a recommendation from one of the reports that you co-authored. Can you share with us your perspective?

5:10 p.m.

Technology Policy Researcher, University of Toronto, As an Individual

Christelle Tessono

Yes. By technology being “neutral and future-proof”, we also mean definitions that are narrow and specific to current trends in artificial intelligence. For example, in clause 5 of the bill, there's a definition of “biased output”, but it focuses too much on the outputs that systems generate, when harms emerge throughout the AI life cycle. We should be having definitions that are more inclusive of the development, design and deployment of technologies, rather than focusing too much on the output.

As a reminder, I would also like to say that the contexts don't really change when we use technology—that is, education, health care and government—so we should be focusing on regulating the contexts in which they're used as well. Prohibitions on systems that process biometric data are a way to be technologically neutral, in my opinion, and future-proof as well.

5:10 p.m.

Liberal

Ryan Turnbull Liberal Whitby, ON

Yes, I think we've heard testimony from several witnesses I can recall—and this was from the industry partners or big players who were here a short time ago—who said that the use case for or the application of AI and the context really mattered for assessing the risk and defining whether it would be a high-impact system or not. I found that at least interesting to think about, but I thought it was impractical in terms of building a legislative framework. If the government had to predict every single use case and every single context, I think that would be quite challenging for the government to do.

Would you agree with that, Ms. Tessono?

5:15 p.m.

Technology Policy Researcher, University of Toronto, As an Individual

Christelle Tessono

No, I don't think so.

5:15 p.m.

Liberal

Ryan Turnbull Liberal Whitby, ON

I'm sorry. Can you say that again?

5:15 p.m.

Technology Policy Researcher, University of Toronto, As an Individual

Christelle Tessono

No, I wouldn't agree with that, because we already have systems being deployed actively, and we can build on the existence of their application to build frameworks that are flexible as well. I think it's really a question of building an infrastructure of regulation that is flexible and also inclusive of the different stakeholders who are present in the deployment, development and design of the AI systems.

5:15 p.m.

Liberal

Ryan Turnbull Liberal Whitby, ON

Thank you for that.

I'm going to jump to a slightly different topic.

Recommendation number 5 is about addressing the human rights implications of algorithmic systems. Mr. Balsillie mentioned as well the right to object to the automated processing of personal data.

Doesn't Bill C-27 currently already address this through both the requirement for record keeping and the easy identification of an AI-generated output, which has to be watermarked or identifiable? Also, biometric information is technically protected, so you would have to have express informed consent in order to use that.

Isn't that already addressed in this bill in some very real respects? Maybe you think we should go further.

I will ask Mr. Malik first, and then Ms. Tessono.

5:15 p.m.

Ph.D., Data Science Researcher, As an Individual

Dr. Momin M. Malik

I would defer to my colleague, but I think it's also about what happens with some of those things that are recorded. Again, if AI is not defined flexibly enough, somebody could just call the product “not AI”, and then it might not be covered.

I'll defer to my colleague for everything else.

5:15 p.m.

Liberal

Ryan Turnbull Liberal Whitby, ON

Ms. Tessono, could you weigh in on this?

Thank you.

5:15 p.m.

Technology Policy Researcher, University of Toronto, As an Individual

Christelle Tessono

Thank you.

Transparency reporting requirements are very useful to policy-makers, researchers and journalists who understand systems and how to better address them, but for the everyday person who is facing these systems, I am reminded of this expression in French:

an ounce of prevention is worth a pound of cure.

It is better to avoid situations in which someone would be facing an unacceptable risk from AI. That's why prohibitions on systems that create unacceptable risks are the best way to ensure that human rights are operationalized in the bill. That is what the EU AI Act does by establishing different sets of requirements and prohibitions. It's not only about unacceptable risks; it's also for high-risk, low-risk and general-purpose AI systems.

I think that being clear will safeguard Canadians from harm.