Evidence of meeting #94 for Access to Information, Privacy and Ethics in the 44th Parliament, 1st Session. (The original version is on Parliament’s site, as are the minutes.)

A recording is available from Parliament.

Also speaking

Anatoliy Gruzd  Professor and Canada Research Chair in Privacy-Preserving Digital Technologies, Toronto Metropolitan University, As an Individual
Catherine Luelo  Deputy Minister and Chief Information Officer of Canada, Treasury Board Secretariat
Bryan Larkin  Deputy Commissioner, Specialized Policing Services, Royal Canadian Mounted Police
Brigitte Gauvin  Acting Assistant Commissioner, Federal Policing, National Security, Royal Canadian Mounted Police
Clerk of the Committee  Ms. Nancy Vohl
Alexandra Savoie  Committee Researcher

November 27th, 2023 / 4:05 p.m.

Conservative

Jacques Gourde Conservative Lévis—Lotbinière, QC

Thank you, Mr. Chair.

I'll go back to AI. It's a very effective tool that is used in applications to speed up the transfer of information, and to study and profile us, unfortunately.

Could this tool, in the short term, become a weapon that turns against us, Canadians, or against anyone in the world who is being overly profiled? Could that in turn constitute some form of interference?

4:10 p.m.

Professor and Canada Research Chair in Privacy-Preserving Digital Technologies, Toronto Metropolitan University, As an Individual

Dr. Anatoliy Gruzd

The question is a bit broad, but I'll try to contextualize it.

When we're talking specifically about generative AI tools, the concern for me, from the data privacy perspective, would be Canadians going to websites like ChatGPT. They will type their private and personal information into the window without realizing that they are actually consenting to that data being used for future training. They don't know whether that content will be spit out in somebody else's stream. I think that would be one form of concern.

The other form of concern, of course, is social media platforms relying on AI tools to detect harmful content, just because of the scale of the problem. Earlier this year I was looking at some of the transparency report charts from Meta, showing how they automatically removed around 65% of content classified as harassment and bullying. There's still a significant percentage, around 35%, that users had to report before the platform would act. From that perspective, AI is important for flagging some of that problematic content, because platforms won't have enough human content moderators or fact-checkers to look at it all.

When we look at AI, I think we have to differentiate the kind of use case we're actually talking about.

4:10 p.m.

Conservative

Jacques Gourde Conservative Lévis—Lotbinière, QC

You gave us a good explanation of what artificial intelligence can do at the moment. Based on your explanation, it's a benevolent tool.

What do you think AI could look like on digital platforms in three, five or 10 years?

4:10 p.m.

Professor and Canada Research Chair in Privacy-Preserving Digital Technologies, Toronto Metropolitan University, As an Individual

Dr. Anatoliy Gruzd

There will be more automation. I wonder sometimes to what extent, though. It's already writing emails for us. It's creating websites for us. There will be potential push-back. People will want to have some authentic interactions.

That's probably more of a futuristic outlook. I don't know whether you want me to continue on that line of thinking.

4:10 p.m.

Conservative

Jacques Gourde Conservative Lévis—Lotbinière, QC

We are trying to legislate on digital platforms or artificial intelligence, but in the future, I think AI will be the Achilles heel of all platforms.

Should we legislate on that, rather than on platforms?

4:10 p.m.

Professor and Canada Research Chair in Privacy-Preserving Digital Technologies, Toronto Metropolitan University, As an Individual

Dr. Anatoliy Gruzd

The first thing, of course, is to know whether Canadian data is being used to train generative AI applications, period. That would be number one. The second is that when Canadians see content coming through social media platforms or other online news, they need to be able to tell whether or not it was created by AI. Those are the two things I would focus on first.

4:10 p.m.

Conservative

Jacques Gourde Conservative Lévis—Lotbinière, QC

How do you think we could find the most effective tools to legislate against, or limit, excesses at the international level?

4:10 p.m.

Professor and Canada Research Chair in Privacy-Preserving Digital Technologies, Toronto Metropolitan University, As an Individual

Dr. Anatoliy Gruzd

Some of the privacy legislation tools you're considering may be effective in terms of making sure Canadians can request that their data be removed from some of those services. That could be quite effective.

The other aspects I referred to earlier, in my opening remarks.... It's about creating a repository and a code of conduct for this information, in particular. Right now, that is already happening and functioning. Major online platforms in the EU—defined as platforms with 45 million or more users—report, usually every six months or so, on their activities and what they've done to stop foreign interference, country by country. We don't see any stats like that for Canada.

Related to your question about AI, when platforms take action on AI-driven content, I would like to see how much of that content.... What was the purpose?

I think that will inform our next steps.

4:10 p.m.

Conservative

The Chair Conservative John Brassard

Thank you, Dr. Gruzd.

Thank you, Mr. Gourde.

Mr. Kelloway, you have five minutes. Go ahead.

4:10 p.m.

Liberal

Mike Kelloway Liberal Cape Breton—Canso, NS

Thank you, Mr. Chair.

Doctor, it's great to see you.

There have been some great questions from all parliamentarians here.

I'm going to approach the next series of questions in a couple of ways.

First, what can average Canadians out there do to protect themselves from disinformation and misinformation? That's one.

However, you also brought up, on several occasions, a community initiative regarding terms of service. I'd like you to unpack that, and the EU code of ethics.

Are there three things the Government of Canada can do to bring TikTok and other social media platforms to the table in order to ensure there's less misinformation and disinformation from an economic standpoint, domestically and internationally? I think MP Green highlighted that. He has done so very effectively on many occasions.

That would be the series of questions I have, and I can unpack those as you go.

4:15 p.m.

Professor and Canada Research Chair in Privacy-Preserving Digital Technologies, Toronto Metropolitan University, As an Individual

Dr. Anatoliy Gruzd

We have individual education and what individual Canadians can do. We have to talk about what age group we're discussing. Earlier, I heard in this committee that the focus is on the underage population, which is quite an important and vulnerable group. However, sometimes we overlook older adults and other age groups.

Frankly, education shouldn't stop, but we cannot prepare individuals for all cases. That's why I mentioned earlier that platforms should be compelled to incorporate tools that can signal whether something is potentially problematic. We had a great example during the COVID pandemic, when platforms stepped up and provided useful interventions—even simple things, such as adding a link to Health Canada when somebody talked about COVID, or flagging that some of the content in a post may not accurately reflect scientific knowledge. Those interventions are in fact helpful in reducing the spread of misinformation and disinformation. Unfortunately, lately we are seeing those initiatives being dropped completely. The lessons we learned from those initiatives are not being applied to other domain areas.

If we are talking specifically about the education of younger adults or teenagers, we can't just think about traditional.... We can teach those skills. Also, look at interesting interventions, such as games that essentially show.... Put them in a position of running an information operation. There are a number of interesting studies that show the effectiveness of these campaigns. They have to make themselves run such a campaign, and in that situation you actually then become more aware of things that may be coming at you in your real-life interactions.

Can you please repeat the other aspects of the question?

4:15 p.m.

Liberal

Mike Kelloway Liberal Cape Breton—Canso, NS

Sure. I threw a few questions at you, so I would be glad to recap the next couple.

In one of your answers to a question by one of the parliamentarians here, you talked about terms of service—as I took it—as a community initiative. You can tell me if I'm wrong on that, in terms of fighting against disinformation and misinformation.

Also, can you unpack exactly why the EU code of ethics is the gold standard, or why it is helpful in combatting disinformation and misinformation?

4:15 p.m.

Professor and Canada Research Chair in Privacy-Preserving Digital Technologies, Toronto Metropolitan University, As an Individual

Dr. Anatoliy Gruzd

The terms-of-service initiative I referred to is called Terms of Service; Didn't Read, or ToS;DR. It's been around for about 10 years. It's volunteer-run and supported by non-profits. Essentially, legal and technology experts try to deconstruct each platform's terms of service, and they have created a rubric that simplifies those terms. You can install a browser extension. Every time you go to a platform, whether it's a social media platform or another website, if they have information about it, they will show their rating and also explain the key concerns in different categories. It might be something like the fact that the platform has access to your private messages or isn't actually deleting your data, or other concerns. Then you can dive deeper and click on those concerns to read more and get to the place in the terms of service where it actually says so.

The reason I like this initiative is that it provides independent oversight. That leads to the second question you asked me. The initiative in the EU is called the Code of Practice on Disinformation. It started when they created a transparency centre, where large online platforms have to complete a form on which, essentially, they report back to the EU what they are actually doing to fight disinformation. They have to be very specific.

4:20 p.m.

Conservative

The Chair Conservative John Brassard

Thank you.

We have two-and-a-half-minute rounds.

The Conservatives will have two and a half minutes, as will the Liberals. We will start with Mr. Villemure, who will be followed by Mr. Green.

Mr. Villemure, you have the floor.

4:20 p.m.

Bloc

René Villemure Bloc Trois-Rivières, QC

Thank you very much, Mr. Chair.

You know, I'm not a Liberal.

Dr. Gruzd, what should we think of the fact that OpenAI is revising its terms of use to distinguish between data used for business purposes and data used for research purposes?

Indeed, as of December 14, if we want to use ChatGPT, all our data may be used by companies.

4:20 p.m.

Professor and Canada Research Chair in Privacy-Preserving Digital Technologies, Toronto Metropolitan University, As an Individual

Dr. Anatoliy Gruzd

This is a tricky question, because for any start-up, research use will potentially lead to business use cases. We don't know whether a dataset collected under the research umbrella would then be carried forward to other projects that make money for them. I think we have to apply similar principles. If PIPEDA currently applies, it should apply equally to research use and business use of data. The only research exception I would make, essentially, is for independent, vetted researchers and journalists. That actually goes to earlier questions about what we can do to mandate access to the type of data that companies are already collecting, so that we have more independent audits of that data.

Those things can be done. The platforms will tell you that if there's a privacy or IP concern, they cannot share data with researchers. I've heard that said so many times, but in fact there are many ways to share this type of data using privacy-preserving technology, so that researchers can report on it.

4:20 p.m.

Bloc

René Villemure Bloc Trois-Rivières, QC

If possible, I would like you to look at the new terms of use of OpenAI and tell us what you think about it by email, because it is very worrisome, given where we are.

You mentioned a few applications earlier, including Telegram and WeChat. Of all the messaging applications that we, as members of Parliament, use, which is the safest? We're all on WhatsApp, Telegram, and so on.

What should we be doing?

4:20 p.m.

Professor and Canada Research Chair in Privacy-Preserving Digital Technologies, Toronto Metropolitan University, As an Individual

Dr. Anatoliy Gruzd

The safest thing is to disconnect from social media and the Internet, but that's not an option, as we discussed. Seriously, we really have to consider whether a messaging app is using encryption, and specifically the type of encryption that the platform itself doesn't have access to. That's something that should be spelled out in any messaging app. If a messaging app had access to your private messages, I would not use it, because that's very problematic.

4:20 p.m.

Conservative

The Chair Conservative John Brassard

Thank you, Dr. Gruzd, that's sage advice.

Mr. Green, go ahead for two and a half minutes please.

4:20 p.m.

NDP

Matthew Green NDP Hamilton Centre, ON

Thank you very much, Mr. Chair.

Thank you for your testimony. I'm finding it very helpful.

We're in a unique opportunity. We have the former president of the Treasury Board here at committee now. We know that the decision to ban TikTok was made by the chief information officer, whom we'll have before committee. We have heard in previous testimony from CSIS and from our Communications Security Establishment that they provided advice to the chief information officer. They wouldn't get into the details of their advice, but they provided advice, and ultimately the decision, back on February 27, I believe, was to ban it from government devices.

I would give you this opportunity, sir, and ask you this, with your subject matter expertise: If you were advising the chief information officer under this proposed ban of TikTok, what advice would you give them, and what other areas or topics might you have covered?

4:20 p.m.

Professor and Canada Research Chair in Privacy-Preserving Digital Technologies, Toronto Metropolitan University, As an Individual

Dr. Anatoliy Gruzd

I heard that testimony. I think the reference was to something with an "unacceptable level of risk" in that recommendation, and that's all we know at this point. I hope the next witness will be able to give you a bit more insight.

In the public domain, we don't have anything more than what I just said, so—

4:20 p.m.

NDP

Matthew Green NDP Hamilton Centre, ON

I'm talking about information that you, as a subject matter expert, would provide to the CIO on the topic of social media platforms and the issues around privacy and access to information on government devices.

What information would you give them, knowing what platforms can do and where the focus should be?

4:20 p.m.

Professor and Canada Research Chair in Privacy-Preserving Digital Technologies, Toronto Metropolitan University, As an Individual

Dr. Anatoliy Gruzd

If the recommendation is with regard to user data privacy, we should treat all social media platforms equally, large or small, and then we'd have to audit them in the same way. That would be my advice.

On banning one platform, as I mentioned in my opening remarks, unless there's clear evidence of some malicious acts by state actors through back doors.... Without that, by banning it, we undermine our democratic processes, and it creates a perception of the politicization of this topic.

What happens if another platform...or new evidence arises that, in fact, the state actor had backdoor access? Will our citizens trust that new decision?

4:25 p.m.

NDP

Matthew Green NDP Hamilton Centre, ON

That's very important.

I want to thank you for taking the time to be here. I would like to invite you, in my last 10 seconds.... If there's anything else you see from other testimony or things you might want to add some light to, you're always welcome to provide any additional comments in writing to this committee for our consideration at the report stage.

Thank you very much.

4:25 p.m.

Conservative

The Chair Conservative John Brassard

Thank you, Mr. Green.

Thank you, Dr. Gruzd.

We'll go to Mr. Barrett for two and a half minutes.