Evidence of meeting #94 for Industry, Science and Technology in the 44th Parliament, 1st Session. (The original version is on Parliament’s site, as are the minutes.)

A recording is available from Parliament.

Also speaking

Daniel Konikoff  Interim Director of the Privacy, Technology & Surveillance program, Canadian Civil Liberties Association
Tim McSorley  National Coordinator, International Civil Liberties Monitoring Group
Matthew Hatfield  Executive Director, OpenMedia
Sharon Polsky  President, Privacy and Access Council of Canada
John Lawford  Executive Director and General Counsel, Public Interest Advocacy Centre
Yuka Sai  Staff Lawyer, Public Interest Advocacy Centre
Sam Andrey  Managing Director, The Dais, Toronto Metropolitan University

4:35 p.m.

Liberal

Viviane LaPointe Liberal Sudbury, ON

Thank you, Mr. Chair.

Mr. Andrey, I would be interested in hearing your thoughts on Bill C-27 and its objectives to address online misinformation and online harm.

4:35 p.m.

Managing Director, The Dais, Toronto Metropolitan University

Sam Andrey

Sure. I would be delighted. We think a lot about misinformation and online harm. The government has been considering legislation on online safety for a while and has been consulting on it. We're urging that it move forward.

We were surprised, though pleasantly so, that the AI act could now be a potential vehicle to address some of the harms of content recommendation systems, or “social media”, as most people refer to them. It was in Minister Champagne's list. If the online safety legislation doesn't move forward, or if it focuses heavily on content like child sexual exploitation and terrorist content more specifically, then I think this could be a vehicle through which we attempt to regulate recommendation systems and their algorithmic amplification of potential harm. It's a good example of the type of thing that will take time to do correctly through the regulatory process, but I think it is a potential way forward.

Specifically on the generative AI component of it, the voluntary code that was referenced includes a proposed requirement for what's called watermarking, which basically lets people detect that an image or video is manipulated or a deepfake. Especially as generative AI improves and our ability to trust what we see with our own eyes breaks down, that type of technical and regulatory response will be very important.

That's just one example of how we can use this bill, and I think it's very important.

4:40 p.m.

Liberal

Viviane LaPointe Liberal Sudbury, ON

One of the challenges we have is in trying to strike that balance between freedom of expression and the need to combat online harm in this legislation. What are your thoughts on how that balance has been struck?

4:40 p.m.

Managing Director, The Dais, Toronto Metropolitan University

Sam Andrey

It's a really core challenge, especially when it comes to misinformation, as opposed to other content that's more clearly illegal, such as hate speech.

With respect to misinformation, yes, we have to be very careful, but I tend to favour a “more speech” approach rather than a censorship approach: building systems where fact checkers add context to what we see online and where things like deepfakes are labelled so people know what they are. That's not to say there won't be manipulated imagery online; of course there always has been. However, people should know that what they're seeing is manipulated. I think that's a way to balance freedom of expression against the real harms that are happening with respect to disinformation.

There are other pieces about algorithmic propagation and the financial motives that we can get into, but I think, at its core, any legislation or regulation through the AI act that tries to regulate speech needs to put freedom of expression at the forefront. Companies need to consider freedom of expression alongside the other aims.

4:40 p.m.

Liberal

Viviane LaPointe Liberal Sudbury, ON

You talked about the bill including provisions related to AI and automated content moderation. In your opinion, what's the role of AI in enforcing these regulations?

4:40 p.m.

Managing Director, The Dais, Toronto Metropolitan University

Sam Andrey

That's a good question.

Most large online platforms use automated systems to do content moderation, and those can produce imperfect results. Right now, you're seeing legitimate pro-Palestinian expression being caught up in filters about Hamas, just as an example. These systems are imperfect, but at the scale these platforms operate, they're often necessary.

We think, though, that a potential online safety bill, or potentially the AI act, could create additional recourse for users to challenge these systems. The EU's Digital Services Act, which is their equivalent, gives users the ability to receive an explanation as to why their content was taken down and to appeal the decision. That's something we don't have here in Canada, just as an example.

Those kinds of content moderation systems are getting better over time, and AI and large language models will undoubtedly help make them more effective, but at the end of the day, recourse to a human in the loop for the cases that are grey is absolutely necessary.

4:40 p.m.

Liberal

Viviane LaPointe Liberal Sudbury, ON

In your opinion, what are some of the biggest challenges that Canada may face in implementing and enforcing this legislation effectively?

4:40 p.m.

Managing Director, The Dais, Toronto Metropolitan University

Sam Andrey

The AI act, or just in general...?

4:45 p.m.

Liberal

Viviane LaPointe Liberal Sudbury, ON

It's just in general.

4:45 p.m.

Managing Director, The Dais, Toronto Metropolitan University

Sam Andrey

It will be a challenging task. I think part of the challenge is that AI, which is the reason the bill exists in the first place, is going to start to affect every part of the economy, and it's going to be used in a bunch of different sectors in a bunch of different ways. The regulator, whoever it is, is going to be tasked with developing deep expertise in a lot of functions of the economy to be able to regulate its potential risks and harms. I think that is number one.

I think it's also why the existing regulatory model is so worrisome: it's so deeply embedded within the department. We would urge creating more independence, and there are a number of ways that could happen. You could make it a standalone parliamentary appointment. There have been some suggestions of giving the role to the Privacy Commissioner, whose office obviously has some resources, infrastructure and expertise. I can see both sides of that; the risks of AI are broader than privacy. At the very least, make it a Governor in Council (GIC) appointment, which is imperfect but at least creates some accountability and rules around the appointment. At the moment, it's not even that.

I could have more to say about that, but I'll leave it there.

4:45 p.m.

Liberal

Viviane LaPointe Liberal Sudbury, ON

Thank you.

4:45 p.m.

Liberal

The Chair Liberal Joël Lightbound

Thank you.

I'll give Mr. Hatfield the chance to add his comments.

4:45 p.m.

Executive Director, OpenMedia

Matthew Hatfield

I have a quick observation on Canada's positioning here. A lot of AI models, and the outputs of those models, are going to be created in other countries but will still affect Canada. We can't prevent some of the worst harms that could occur from AI on our own; they're going to affect us even if we have incredible laws here.

However, Canada could distinguish itself by having uniquely poor AI rules. We could go it alone, in the sense of having some major misses on preventing harms and allowing people to do things in Canada that are not permitted elsewhere. That's why I'm very concerned about the balance of costs and benefits in our going it alone and trying to be out first. I'm not sure that we can win big, but I do think that we can lose big.

4:45 p.m.

Liberal

The Chair Liberal Joël Lightbound

Thank you.

Mr. Savard‑Tremblay, you have the floor.

4:45 p.m.

Bloc

Simon-Pierre Savard-Tremblay Bloc Saint-Hyacinthe—Bagot, QC

Thank you, Mr. Chair. I assume I have the floor for two and a half minutes.

Mr. Andrey, in your report last month, you state that Quebec has the highest rate of AI use in Canada. You also say that only 2% of companies cite security or privacy concerns, and an even smaller percentage cite legal obstacles. On the other hand, you also point out that companies do not have all the information they need to fully understand the value and profitability of these technologies.

First of all, why is the rate of use of AI higher in Quebec than in other provinces?

4:45 p.m.

Managing Director, The Dais, Toronto Metropolitan University

Sam Andrey

That is a great question. I think Quebec has done a nice job of creating a robust AI ecosystem, and that shows up in the numbers, with more Quebec businesses having adopted AI systems. The number is still in the single digits, but it's higher than in the rest of the country. We have lessons to learn there.

On AI adoption, I know that we're talking about privacy risks and harms, but for Canada's prosperity we have to become a more innovative and productive economy, and technology is a key enabler of that. I don't want to come across as anti-AI. It is very important, but we need to do it responsibly. To increase adoption, companies want assurance that what they're going to deploy is not going to get them in trouble, that it's going to be safe and that it is subject to legal guardrails. These things work together, and there's also work to be done on workforce development, talent and a whole bunch of other enabling conditions, obviously. However, I do think that the AI act can help assure companies, especially small and medium-sized enterprises that are not going to have access to lawyers to think about these things, that the AI they purchase is safe to use.

4:45 p.m.

Bloc

Simon-Pierre Savard-Tremblay Bloc Saint-Hyacinthe—Bagot, QC

You talked about it. You said that artificial intelligence should be used responsibly and that it is a good tool for prosperity.

What needs to be included in Bill C‑27 so that we can promote the responsible adoption of AI?

4:45 p.m.

Managing Director, The Dais, Toronto Metropolitan University

Sam Andrey

I think I got that.

Do you mind repeating the question? I'm sorry.

4:45 p.m.

Bloc

Simon-Pierre Savard-Tremblay Bloc Saint-Hyacinthe—Bagot, QC

You said that AI must continue to be adopted responsibly, because it contributes to the prosperity of our people and our economies.

What needs to be incorporated into Bill C‑27 to make that happen?

4:45 p.m.

Managing Director, The Dais, Toronto Metropolitan University

Sam Andrey

Thank you.

I think the law's ability to meaningfully prevent and outright ban bias in these systems, psychological harm, and misuse and malicious use depends on the context of the system we're talking about. In financial services, in health care, in content moderation, which we were talking about, and in generative AI, there's a whole variety of ways in which harms and risks could manifest.

What is good about this bill is that it is comprehensive and wide in its application, so the regulator, when it gets stood up, will have a big job in prioritizing which systems to focus on first. Minister Champagne's list provides some hints at that, but I think that to secure responsible adoption, we need to focus on the systems that are also going to be used by a lot of businesses.

Generative AI is a good example of that, in that, increasingly, businesses are starting to think about how they could embed it in their processes to make their operations more efficient.

4:50 p.m.

Liberal

The Chair Liberal Joël Lightbound

Thank you.

Mr. Masse, go ahead.

4:50 p.m.

NDP

Brian Masse NDP Windsor West, ON

Thank you, Mr. Chair.

Maybe I'll go to Mr. Hatfield first and then I'll go to the other panellists quickly as well.

What's your opinion, if you have one, on an AI and data commissioner as an independent officer of Parliament? That could be done even before or without this bill, similar to the Privacy Commissioner, the Competition Bureau and so forth. It seems as though there is almost a consensus on the Hill that this is really going to involve almost all the different parliamentary functions and committees.

That's for you, Mr. Hatfield, and then if anyone else on the panel in the room would like to comment, I have a couple of minutes, so please do so as quickly as you can.

4:50 p.m.

Executive Director, OpenMedia

Matthew Hatfield

Yes. I think that would be immensely valuable, especially getting them started on reporting to Parliament on what they think is going on. Evaluating the legislation, either before or after it's passed, would also be useful.

4:50 p.m.

NDP

Brian Masse NDP Windsor West, ON

Excellent. Thank you.

Is there anybody else on the panel...?

4:50 p.m.

President, Privacy and Access Council of Canada

Sharon Polsky

Yes, if I may.

I think it's a terrific idea, provided the law requires that the regulator and others be fully funded so that they can actually do the job they are tasked with doing, and provided it is written into AIDA when it's split out from Bill C-27 and becomes its own act, please, so that before AI products are allowed to be put on the market, no matter where in the world they come from, they must go through what is basically a testing sandbox. It's not the self-interested vendor saying, “Don't worry your pretty little head; it's not biased.” It's an independent officer of Parliament whose office will identify and test the products confidentially, with no secrets being divulged and no IP worries on behalf of the companies, so that, the same way any other product needs to be fit for purpose before it's released on the market, AI products must be too.