Evidence of meeting #130 for Access to Information, Privacy and Ethics in the 44th Parliament, 1st Session. (The original version is on Parliament’s site, as are the minutes.)

A recording is available from Parliament.

Also speaking

Jon Bateman, Senior Fellow and Co-Director, Technology and International Affairs Program, Carnegie Endowment for International Peace
Benjamin Fung, Professor and Canada Research Chair, McGill University, As an Individual
Clerk of the Committee: Ms. Nancy Vohl

3:45 p.m.

Conservative

The Chair Conservative John Brassard

Good afternoon, everyone.

My apologies to our witnesses for the delay, but we had votes today in the House of Commons.

I will call the meeting to order.

Welcome to meeting number 130 of the House of Commons Standing Committee on Access to Information, Privacy and Ethics.

Pursuant to Standing Order 108(3)(h) and the motion adopted by the committee on Tuesday, February 13, 2024, the committee is resuming its study of the effects of disinformation and misinformation on the work of parliamentarians.

I'd like to welcome our witnesses for the first hour today.

As an individual, we're pleased to welcome Mr. Benjamin Fung, a professor and Canada research chair at McGill University.

From the Carnegie Endowment for International Peace, Mr. Jon Bateman has joined us by video conference. He's a senior fellow and co-director of technology and international affairs.

I'll start with you, Mr. Bateman. If you want to address the committee for up to five minutes, you're welcome to do so.

Please go ahead, sir.

Jon Bateman Senior Fellow and Co-Director, Technology and International Affairs Program, Carnegie Endowment for International Peace

Thank you, Chair and committee members.

It's an honour to appear at this important hearing.

My name is Jon Bateman. I'm a senior fellow and co-director of the technology and international affairs program at the Carnegie Endowment for International Peace. Carnegie is an independent, non-profit think tank with headquarters in Washington, D.C. and global centres in Europe, Asia and the Middle East.

In recent years, democracies worldwide have grown increasingly concerned about threats to the integrity of their information environments, including misinformation, disinformation and foreign influence. My Carnegie colleagues and I have drawn on empirical evidence to clarify the nature and extent of these threats, and to assess the promise and pitfalls of potential countermeasures. Today I will share some overarching lessons from this research.

To be clear, I'm not an expert on the Canadian situation specifically, so I may not be able to give detailed answers about particular incidents or unique dynamics in your country. Instead, I will highlight key themes that are applicable across democracies.

Let me start with some important foundations.

As you have already heard, misinformation can refer to any false claim, whereas disinformation implies an intentional effort to deceive. Foreign influence can be harder to define, because it requires legal or normative judgments about the boundaries of acceptable foreign participation in domestic discourse, which are sometimes unclear.

Foreign actors often use mis- and disinformation, but they also use other tools, such as co-optation, coercion, overt propaganda and even violence. These activities can pose serious threats to a country's information integrity.

Still, it is domestic actors—ordinary citizens, politicians, activists and corporations—that are the major sources of mis- and disinformation in most democracies. This should not be surprising. Domestic actors are generally more numerous, better resourced, more politically sophisticated, more deeply embedded within society and more invested in domestic political outcomes.

Defining and differentiating these threats is hard enough. Applying and acting on the definitions is much harder.

Calling something mis- or disinformation requires invoking some authoritative source of truth, yet people in democracies can and should disagree about what is true. Such disagreements are inevitable and essential for driving scientific progress and social change. Overzealous efforts to police the information environment can transgress democratic norms or deepen societal distrust.

However, not all factual disputes are legitimate or productive. We must acknowledge that certain falsehoods are undermining democratic stability and governance around the world. A paradigmatic example is the claim that the 2020 U.S. presidential election was stolen. This is provably false. It was put forward with demonstrated bad faith and it has deeply destabilized the country.

Mis- and disinformation are highly imperfect concepts, but they do capture something very real and dangerous that demands concerted action.

What should be done? In our recent report, Dean Jackson and I surveyed a wide range of countermeasures, from fact-checking to foreign sanctions to adjustments of social media algorithms. Drawing on hundreds of scientific studies and other real-world data, we asked three fundamental questions: How much is known about each measure? How effective does it seem, given what we know? How scalable is it?

Unfortunately, we found no silver bullet. None of the interventions were well studied, very effective and easy to scale all at the same time.

Some may find this unsurprising. After all, disinformation is an ancient, chronic phenomenon driven by stubborn forces of supply and demand. On the supply side, social structures combine with modern technology to create powerful political and commercial incentives to deceive. On the demand side, false narratives can satisfy real psychological needs. These forces are far from unstoppable, yet policy-makers have limited resources, knowledge, political will, legal authority and civic trust.

Thankfully, our research does suggest that many popular countermeasures are both credible and useful. The key is what we call a “portfolio approach”. This means pursuing a diversified mixture of multiple policies with varying levels of risk and reward. A healthy portfolio would include tactical actions, such as fact-checking and labelling social media content, that seem fairly well researched and effective. It would also involve costlier, longer-term bets on promising structural reforms, such as financial support for local journalism and media literacy.

Let me close by observing that most democracies do not yet have a balanced portfolio. They are underinvesting in the most ambitious reforms with higher costs and longer lead times.

If societies can somehow manage to meet the big challenges, like reviving local journalism and bolstering media literacy for the digital age, the payoff could be enormous.

Thank you, and I look forward to your questions.

3:50 p.m.

Conservative

The Chair Conservative John Brassard

Thank you, Mr. Bateman.

I really appreciate the insight in your opening statement.

Mr. Fung, you have up to five minutes to address the committee.

Please start.

Benjamin Fung Professor and Canada Research Chair, McGill University, As an Individual

Thank you.

Good afternoon, Chair and committee members.

I'm a professor and Canada research chair at McGill University, and I am a computer scientist. My research interests include AI, cybersecurity and disinformation analysis. I am particularly interested in analyzing disinformation spreading in Chinese Canadian communities. I am not going to repeat the disinformation examples, as I believe you have already heard many of them through different channels in the last few years. Instead, I would like to focus on recommendations that may help in fighting disinformation from the Chinese government.

Let's take a closer look at what other countries have been doing to fight disinformation.

The U.S. government has set up an agency called the Global Engagement Center, which is responsible for countering foreign state and non-state propaganda and disinformation efforts aimed at influencing the policies and security of the United States. The Global Engagement Center has the authority to pre-empt disinformation on social media. Furthermore, it has a technology engagement division, which plays an important role in transforming technologies from concepts to applications at scale and in pushing innovations to both the public and private sectors.

Another country that is on the front line of fighting disinformation from the Chinese government is Taiwan. My collaborator, Sze-Fung Lee, has done an excellent study. Here, I will highlight a few key points from her research.

Unlike the U.S. model, Taiwan takes a decentralized approach. It has multiple fact-checking centres that are run by civil society. This set-up successfully gains the trust of the general public, because citizens understand that these fact-checking centres are not controlled by the government, and they know they can participate in the process too. Most importantly, the centres have an effective social network for spreading correct information back into society.

Taiwan has a few think tanks that analyze the origins, tactics and implications of disinformation. They regularly organize conferences to bring disinformation experts together to facilitate collaboration. There's no conflict between the U.S. model and the Taiwan model. In Canada, we can do both.

My third recommendation is to look into the social media platforms themselves. Platforms like WeChat and TikTok, which operate under heavy Chinese government censorship, play a crucial role in spreading disinformation. WeChat, the most popular app, circulates Chinese government-approved propaganda, while accurate Canadian information struggles to reach users. Without the co-operation of social media platforms, any solutions are meaningless. Interventions should include banning bot accounts, restricting posts or adding warning messages. Platforms that do not comply with such regulations should be subject to evaluation and penalties.

Finally, I would like to share my latest observation. There are two types of social media bots—human bots and AI bots. Human bots are easier to detect as they use specific vocabularies, or sometimes they just follow China's time zone. Their posts typically spread within two to three layers of sharing, mostly staying within the Chinese Canadian community. However, the emerging trend is the AI bots. AI bots can spread disinformation beyond five layers of sharing, even reaching local communities. Therefore, I would like to emphasize that this disinformation issue is not limited to the Chinese Canadian community. With the advancement of AI technologies, all Canadians are affected.
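To make these detection signals concrete, here is a minimal sketch of how such a "human bot" heuristic might look, assuming a hypothetical watchlist of telltale terms and treating a heavy concentration of posts during UTC+8 working hours as a flag. The thresholds, terms and data model are illustrative assumptions, not Professor Fung's actual method.

```python
# A minimal illustrative sketch of the two "human bot" signals described
# in the testimony: posting hours that cluster in China's time zone (UTC+8)
# and a watchlist of telltale vocabulary. All thresholds and terms here are
# hypothetical assumptions, not an actual detection system.
from dataclasses import dataclass
from datetime import datetime, timezone, timedelta

CHINA_TZ = timezone(timedelta(hours=8))   # UTC+8; China observes no DST
WORKING_HOURS = range(9, 19)              # assume a 09:00-18:59 workday
FLAGGED_TERMS = {"term-a", "term-b"}      # hypothetical vocabulary watchlist


@dataclass
class Post:
    text: str
    posted_at: datetime  # timezone-aware timestamp


def looks_like_human_bot(posts: list[Post],
                         hour_threshold: float = 0.9,
                         term_threshold: float = 0.3) -> bool:
    """Flag an account whose posts overwhelmingly fall inside UTC+8
    working hours, or whose posts frequently use watchlisted terms."""
    if not posts:
        return False
    in_hours = sum(
        p.posted_at.astimezone(CHINA_TZ).hour in WORKING_HOURS for p in posts
    )
    with_terms = sum(
        any(t in p.text.lower() for t in FLAGGED_TERMS) for p in posts
    )
    return (in_hours / len(posts) >= hour_threshold
            or with_terms / len(posts) >= term_threshold)


# Example: an account posting only at 03:00 UTC (11:00 in UTC+8) is flagged.
posts = [Post("hello", datetime(2024, 5, 1, 3, 0, tzinfo=timezone.utc))] * 10
print(looks_like_human_bot(posts))  # True
```

As the testimony notes, AI bots would evade such simple cues, which is part of why they spread further than the two to three layers of sharing typical of human bots.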

Thank you very much.

3:55 p.m.

Conservative

The Chair Conservative John Brassard

Thank you, Mr. Fung.

We're going to start with our questioning now. Each member is going to have six minutes for questioning. For our witnesses, it's a very limited amount of time for members to ask these questions. Oftentimes, members will reclaim their time to try to ask as many questions as they can within those six minutes. Please don't take it personally if you get cut off. We're going to try to get through this as best we can.

I'm going to start with Mr. Caputo for six minutes.

3:55 p.m.

Conservative

Frank Caputo Conservative Kamloops—Thompson—Cariboo, BC

Thank you, Mr. Chair.

Thank you, Mr. Bateman.

Thank you, Professor Fung, for being here.

Mr. Bateman, you intimated at the outset of your remarks that you may not be overly familiar with the situation here. I want to elaborate on a few of the principles you spoke about.

You spoke about supply and demand when it comes to foreign interference or disinformation, and also about the fact that it is politicians who are most often the sources of misinformation and disinformation.

Would you agree, as well, that politicians are critical not only in what they put out but also in their function, when they are in a security capacity, of ensuring that disinformation doesn't get out? In other words, politicians and government play a protective role.

Would you agree with that?

3:55 p.m.

Senior Fellow and Co-Director, Technology and International Affairs Program, Carnegie Endowment for International Peace

Jon Bateman

Yes, I would.

3:55 p.m.

Conservative

Frank Caputo Conservative Kamloops—Thompson—Cariboo, BC

Part of that protective role is this: When government sees things going awry or sideways, and sees misinformation and disinformation occurring, it has an obligation to act.

I take it you'd agree with that, as well.

3:55 p.m.

Senior Fellow and Co-Director, Technology and International Affairs Program, Carnegie Endowment for International Peace

Jon Bateman

It depends on the nature of the action being contemplated, because some actions can be helpful and others can be counterproductive.

3:55 p.m.

Conservative

Frank Caputo Conservative Kamloops—Thompson—Cariboo, BC

Okay.

What I'm saying is that, when misinformation and disinformation are occurring, the government could conceivably do nothing. However, to do nothing is to allow this to occur even more.

Isn't that right?

3:55 p.m.

Senior Fellow and Co-Director, Technology and International Affairs Program, Carnegie Endowment for International Peace

Jon Bateman

Yes, to some extent.

Typically, a lot of information is transmitted through society without the government taking any particular action. In the case of foreign influence activities, there's a lot more the government can do. Whether or not to publicly disclose such activity, or take technical or diplomatic measures against the country at issue, is often a complicated calculation.

4 p.m.

Conservative

Frank Caputo Conservative Kamloops—Thompson—Cariboo, BC

It certainly is a complicated calculation, but I think you'd agree that shining a light on foreign interference is, in some way, always the best antidote.

Is it not?

4 p.m.

Senior Fellow and Co-Director, Technology and International Affairs Program, Carnegie Endowment for International Peace

Jon Bateman

It often is.

The exception to that principle is that, sometimes, the foreign actor may anticipate—even desire and benefit from—the public disclosure of their operation. For example, if Russia is conducting an influence operation that is publicly exposed, and that public exposure actually creates a lot more societal anxiety, fear and distrust than the initial influence operation itself, it could be considered a win for Russia.

That's one of the complications government needs to consider.

4 p.m.

Conservative

Frank Caputo Conservative Kamloops—Thompson—Cariboo, BC

I see what you're saying. They're sowing chaos and getting their desired results.

When it comes to elected officials, generally, I think it's in everybody's best interest to know whether the people they're putting an X beside in elections have been willingly or semi-willingly participating in foreign interference.

Do you agree with that?

4 p.m.

Conservative

Frank Caputo Conservative Kamloops—Thompson—Cariboo, BC

In such cases, transparency is paramount. If the government is aware that elected officials are willingly participating in foreign interference, the best thing that can be done is to address it publicly. Is it not?

4 p.m.

Senior Fellow and Co-Director, Technology and International Affairs Program, Carnegie Endowment for International Peace

Jon Bateman

That would require a framework of law and, again, careful consideration. For example, I'm a former U.S. intelligence analyst, so I'm familiar with the possibility that there could be unverified intelligence information about someone being co-opted or roped into foreign disinformation—

Frank Caputo Conservative Kamloops—Thompson—Cariboo, BC

Right.

4 p.m.

Senior Fellow and Co-Director, Technology and International Affairs Program, Carnegie Endowment for International Peace

Jon Bateman

—but it might not be a legal certainty.

4 p.m.

Conservative

Frank Caputo Conservative Kamloops—Thompson—Cariboo, BC

I'm sorry. I don't mean to cut you off.

I'm going to ask you to operate on the assumption that we have intelligence services in Canada that have verified things and come to conclusions. The conclusions are that 11 parliamentarians have, either wittingly or semi-wittingly, acted with foreign and hostile states. This intelligence has been verified. It went into a report. In this case, people in Canada are expected to vote for these people in the next 12 to 13 months, in all likelihood.

Does it not make sense for democracy—for the integrity of the system—for foreign interference to be stymied at its root? Expose it and shine a light on it. Does that not make a ton of sense?

4 p.m.

Senior Fellow and Co-Director, Technology and International Affairs Program, Carnegie Endowment for International Peace

Jon Bateman

Without commenting on the Canadian situation—because I don't know the details—I would say there are situations where an intelligence assessment might fall short of a prosecutable offence. That would then create a judgment call and a difficult decision.

However, I'm not familiar with the Canadian specifics.

4 p.m.

Conservative

Frank Caputo Conservative Kamloops—Thompson—Cariboo, BC

You talked about the Russia example. Could there be any worse discord than people questioning whether the person they're voting for has been compromised by a hostile state?

4 p.m.

Senior Fellow and Co-Director, Technology and International Affairs Program, Carnegie Endowment for International Peace

Jon Bateman

I do think one of the situations where public disclosure can be helpful is when the lack of disclosure creates an environment in which selective leaks and rumours are running rampant.

We saw this in previous U.S. elections, and that did seem to lead to a policy of greater disclosure, but not universal disclosure. Each disclosure needs to be taken on its own terms.

4 p.m.

Conservative

Frank Caputo Conservative Kamloops—Thompson—Cariboo, BC

Thank you.

4 p.m.

Conservative

The Chair Conservative John Brassard

Mr. Bains, we'll go over to you for six minutes. Please, go ahead.