Evidence of meeting #134 for Access to Information, Privacy and Ethics in the 44th Parliament, 1st Session. (The original version is on Parliament’s site, as are the minutes.) The winning word was political.

A recording is available from Parliament.

On the agenda

MPs speaking

Also speaking

Jacob Suelzle  Correctional Officer, Federal, As an Individual
Michael Wagner  Professor and William T. Evjue Distinguished Chair for the Wisconsin Idea, University of Wisconsin-Madison, As an Individual
Samantha Bradshaw  Assistant Professor, New Technology and Security , As an Individual
Karim Bardeesy  Executive Director, The Dais at Toronto Metropolitan University

5 p.m.

Assistant Professor, New Technology and Security , As an Individual

Dr. Samantha Bradshaw

I have. Even though I live and work in the U.S., I'm actually Canadian.

5 p.m.

Conservative

Frank Caputo Conservative Kamloops—Thompson—Cariboo, BC

Okay, perfect. That's good. You'll be up to date, I'm sure, on a lot of Canadian political happenings.

You each have your area of expertise and sometimes we get very nuanced areas of expertise, but I always like to ask this question.

If you could change one thing that would inhibit misinformation and disinformation.... You've talked a lot about social media. If you were responsible in this area for the Canadian government, what would you do first thing tomorrow?

Professor Bradshaw.

5 p.m.

Assistant Professor, New Technology and Security , As an Individual

Dr. Samantha Bradshaw

I think I would be working around issues that have more to do with platform transparency. I say this because I think, especially in the Canadian context, we actually don't have a very good empirical understanding of how the activities we see state actors engage in translate to changes in behaviour, and particularly voting behaviour.

I think there are some real, measurable consequences of these kinds of campaigns when they, for example, attack activists or female journalists because there's clear political suppression happening, with very measurable consequences that appear in the literature. However, getting somebody to change their mind or alter their voting behaviour.... These kinds of things are very ingrained and embedded in our identities. Being able to actually get access to better data to study social media's immediate effect on those kinds of attitudinal changes over time is really important to enhance a lot of the concerns raised by the field.

We're starting to develop more empirics and evidence around this, but we need better access to data. That's where I would really start if I wanted to see changes.

Unfortunately, I don't think there is a silver bullet solution to misinformation and disinformation. There isn't something that we can immediately do to make this problem go away because it's something that's really at the conflux of human behaviour. The real technical design of platforms that might incentivize certain kinds of information to go viral over others is also socially shaped by the people who are interacting with it. It's something that will need a lot more long-term attention.

I think that starting with the empirics and getting a better grounding and understanding of the causal mechanisms will be really important.

5:05 p.m.

Conservative

Frank Caputo Conservative Kamloops—Thompson—Cariboo, BC

It sounds like we're looking at a 10-year to 20-year project here, based on the way you said it.

Mr. Bardeesy, please go ahead.

October 22nd, 2024 / 5:05 p.m.

Executive Director, The Dais at Toronto Metropolitan University

Karim Bardeesy

I definitely endorse every single thing that Professor Bradshaw said. To create that fact base for policy-makers is really important.

I will maybe answer with two quick points.

First is a longer term project, which is to have an all-of-system education system response that brings in the media companies and those who are collectively responsible for creating a shared space for debate and factual presentation. That, I believe, is actually a shared responsibility between educators and the media sector.

Second, I think I'll come back to the passage of the online harms act. Bill C-63 would have a positive effect on some of these phenomena.

5:05 p.m.

Conservative

Frank Caputo Conservative Kamloops—Thompson—Cariboo, BC

One thing that I'm struck by as a parliamentarian.... I came in partway through this study. You talk about deepfakes and I have my idea of what that is.

One thing that really concerns me is just how easy it is to create such content and, secondarily, how easy it is to disseminate such content.

Given those two factors, are we really in a situation where as opposed to prevention, we're really looking at management?

5:05 p.m.

Assistant Professor, New Technology and Security , As an Individual

Dr. Samantha Bradshaw

Yes, but I also think that, in the context of misinformation and disinformation, and coordinated efforts to manipulate election processes, there's a much longer “kill chain”—in technical platform speak—around an information operation.

In order to get the deepfake on the platform and its going viral, you have to be able to create a fake account. In order to create the fake account, you have to deceive a lot of the internal systems within these platforms. Even though there is an easier ability to create and disseminate deepfake-related content, if we're talking about it in the context of an information operation, we still need to consider the broader life cycle that these operations have to go through.

A lot of the mitigation measures have nothing really to do with the AI side of things. They still rely much more on the old school IP detection and all of the tricks that platforms play to figure out if this is a real person or a fake account.

5:10 p.m.

Conservative

The Chair Conservative John Brassard

Thank you, Professor Bradshaw and Mr. Caputo.

We're going to go to Mr. Bains for six minutes.

Go ahead, sir.

Parm Bains Liberal Steveston—Richmond East, BC

Thank you, Mr. Chair, and to both of our witnesses for joining us today.

I'm going to start with Ms. Bradshaw. You mentioned tropes and other methods—tactics employed by Russia specifically—and we've recently seen a rise in other hostile actors trying to manipulate western voices.

Then there's the use of domestic commentators who amplify these tropes.

Can you describe a little bit how it is being translated here, and then having domestic commentators amplifying these things? How can we combat some of those things?

5:10 p.m.

Assistant Professor, New Technology and Security , As an Individual

Dr. Samantha Bradshaw

Definitely.

In the more effective disinformation campaigns that I've studied, a lot of them aren't relying on false information. Instead they're drawing on harmful identity tropes. They're using these ideas around racism, sexism and xenophobia to polarize society and to suppress certain kinds of people from participating, or even to incite violence against particular groups or individuals within societies.

When we're thinking about combatting this kind of identity-based disinformation, it's really a tricky challenge because you just slap a label on something that is sexist on the Internet, and you can't simply fact-check racism away. It's very much a long-term human bias problem, so it's going to take a long-term strategy to manage that.

Drawing attention to the fact that these are the tactics and strategies of influence operations today is really important. Platforms can do more, particularly on the political violence side of things. When we're going to more extreme and egregious cases—I'm thinking about Myanmar and the coordinated campaigns against the Rohingya population by the government there—where we see violence and even a genocide against a particular group of people, having platforms do appropriate human rights assessments and making sure they have enough content moderators who have a local language understanding and local contextual understanding of any given society is really important.

Parm Bains Liberal Steveston—Richmond East, BC

You're putting it back onto the platform providers to do a little more work as well.

I met with the Ukrainian Canadian Congress yesterday. They advocated for bans on Russian state-led media on the Internet. Canada has banned certain media. It has also taken measures to look at other social media platforms—WeChat and TikTok and others like that. I'm wondering if bans are effective and if you can talk a little bit about that.

Maybe I'll switch to Mr. Bardeesy to engage in the conversation as well.

Can you shed some light on that? Why are these methods working?

5:10 p.m.

Executive Director, The Dais at Toronto Metropolitan University

Karim Bardeesy

Banning on broadcast channels is a bit easier than banning Internet sites or online content. That's why we think at The Dais that the online harms act, which doesn't look so much at specific platforms as it does specific behaviour and content, is the way to go, coupled with some of that algorithmic transparency and coupled with some of that availability to have data that Professor Bradshaw mentioned.

I'll also bring in a piece that we haven't really talked about yet. This entire conversation exists in a context of trust or mistrust. Where there is a trusted messenger, that is where the misinformation or disinformation is more likely to land. Where there's a context of mistrust, then a messenger can fill that vacuum and generate trust.

I think that's the main concern with some of these propaganda outfits. It's not that people around this table don't see them as propaganda; it's more that there are people who perhaps have lower trust in some of the mainstream media institutions or some of the institutions of society more generally. In Canada the Reuters digital news survey shows that trust in mainstream media and news overall has fallen 20 percentage points over the last few years. It's in that context that some of these actors can weaponize some of those platforms or associate themselves with ideas that are harmful to Canada.

It's very difficult, both legislatively and in other ways, to actually ban platforms or sites in Canada. Countries have tried to do this, including some western countries and some in the global south with very large populations. We've seen those bans. TikTok is banned in India. There was an attempt to ban X in Brazil. It's difficult enough to ban at the individual outlet level. It's very operationally difficult and may not be meeting the interests of what we're trying to pursue here.

5:15 p.m.

Conservative

The Chair Conservative John Brassard

Thank you, Mr. Bains.

Before giving the floor to Mr. Villemure, I want to make sure the interpretation is working properly.

Can I have a thumbs-up from both our witnesses if you heard that in English?

Good.

You have the floor, Mr. Villemure.

René Villemure Bloc Trois-Rivières, QC

Thank you, Mr. Chair.

Thank you to both witnesses for being with us today.

You both paint a rather bleak picture of the current situation. I'm going to ask you both the same question, starting with you, Ms. Bradshaw.

Often, the goal of rogue states, as I will call them, is to sow chaos or division, and disinformation can be one of the tools to do that.

Is this the beginning of a cognitive war, Ms. Bradshaw?

5:15 p.m.

Assistant Professor, New Technology and Security , As an Individual

Dr. Samantha Bradshaw

I don't know if it's necessarily a start. I think a lot of these strategies go back and have a very, very long history. A lot of the current Russian playbook for information operations reflects a lot of the Cold War strategies of the past.

I wouldn't say we're necessarily at the start, but we do need to think about responses that Professor Bardeesy highlighted and rebuild trust to create that cognitive defence against these kinds of attacks.

René Villemure Bloc Trois-Rivières, QC

Yes, the discussion revolves around trust. Philosophically speaking, trust means you don't need to prove something. Nowadays, fact checkers are in charge of maintaining trust.

I'm going to correct myself: We aren't just at the beginning of a cognitive war.

Mr. Bardeesy, are the tools that are being used to sow chaos and division part of a cognitive warfare strategy?

5:15 p.m.

Executive Director, The Dais at Toronto Metropolitan University

Karim Bardeesy

They may be, but it's really up to us to decide how to respond. As you and Ms. Bradshaw said, it's a matter of trust within a society. We need to increase people's trust in institutions.

As you know, the political tactic of sowing chaos and undermining trust is being used not only in foreign countries, but also here at home. It's up to us to decide how stringent the measures we put in place should be. We need to have vigorous political debate without it being a means for foreign actors to sow chaos to harm us and make them think they can get away with it.

René Villemure Bloc Trois-Rivières, QC

A vigorous debate requires us to weigh freedom of expression on the one hand and the public interest on the other. Where would you draw the line between the two?

5:15 p.m.

Executive Director, The Dais at Toronto Metropolitan University

Karim Bardeesy

At The Dais, we work to train the leaders of the future. One of the ways we do that is by encouraging youth leadership.

It's up to you, as members of Parliament, to get information from all sources.

I'm very sorry, but I'm going to switch to English.

It's for you to model the kind of political space that you want to be in. I believe, from a public policy matter, that these bills, like the online harms act and the foreign interference projet de loi, form effective guardrails and it's incumbent on leaders, not just political leaders but leaders across society, to show the norms for us to have a good debate, but that don't let the foreign actors take confidence in creating more misinformation around the debate we do have.

René Villemure Bloc Trois-Rivières, QC

Thank you, Mr. Bardeesy.

Mr. Chair, how much time do I have left?

5:20 p.m.

Conservative

The Chair Conservative John Brassard

You have one minute and forty-five seconds left, Mr. Villemure.

René Villemure Bloc Trois-Rivières, QC

I'm going to use that time to turn to you, Ms. Bradshaw.

It's in the financial interest of digital platforms to get people to click. We know that controversy generates more clicks than matters of public interest.

As a government, we have a duty to protect free speech on the one hand and keep free enterprise going on the other. How can we reconcile these two things?

5:20 p.m.

Assistant Professor, New Technology and Security , As an Individual

Dr. Samantha Bradshaw

I think this is really the million-dollar question of the century, how do we create business models that are going to support democracy rather than ones that are going to incentivize hate and anger and fear and frustration? I don't have a good answer to that question, but I do want to acknowledge that these things are in tension.

Coming back to the platform transparency angle, I do think that when we have insight into how platforms make the trade-offs between freedom of expression and other interests, we can better evaluate how they are doing content moderation, whether that's good for democracy or not.

The questions they are tackling sometimes are very difficult questions that don't have a right or wrong answer. Things and initiatives like the Facebook Oversight Board that are creating a public record of very difficult cases to set precedent for how these decisions are made, I think are really positive steps. I'd want to see more transparency initiatives and more efforts going into those kinds of applications.

René Villemure Bloc Trois-Rivières, QC

It's also important to note that one person's view of the public interest isn't necessarily the same as another's.

Thank you, Mr. Chair.

5:20 p.m.

Conservative

The Chair Conservative John Brassard

Thank you, Mr. Villemure.

Mr. Green, you have six minutes. Go ahead, sir.