Evidence of meeting #130 for Access to Information, Privacy and Ethics in the 44th Parliament, 1st Session. (The original version is on Parliament’s site, as are the minutes.)

A recording is available from Parliament.

Also speaking

Jon Bateman  Senior Fellow and Co-Director, Technology and International Affairs Program, Carnegie Endowment for International Peace
Benjamin Fung  Professor and Canada Research Chair, McGill University, As an Individual
Clerk of the Committee  Ms. Nancy Vohl

4 p.m.

Liberal

Parm Bains Liberal Steveston—Richmond East, BC

Thank you, Mr. Chair.

Thank you, Mr. Bateman and Mr. Fung, for joining us today.

I'm going to continue on the same line of questioning. You mentioned something about how selective pieces of information can lead to rampant rumours. You're talking about politicians, and the whole issue around international peace is a key part of your work.

Do you think there ought to be greater scrutiny of candidates running for office, particularly if they have, in the past, worked closely with what we know are hostile states?

4 p.m.

Senior Fellow and Co-Director, Technology and International Affairs Program, Carnegie Endowment for International Peace

Jon Bateman

The general trend in a number of democracies has been toward increasing transparency and tighter regulation of foreign influence, such as bolstering enforcement of foreign registration requirements and the like. Traditionally, it's the voters themselves who are asked to be the ultimate gatekeepers.

Parm Bains Liberal Steveston—Richmond East, BC

Yes, but again, that leads to the general complexities around the recommendations that I know you've mentioned, and Mr. Fung has mentioned some other issues around fact-checking. It's very difficult for the general public to really know, because misinformation and disinformation have become so organized and sophisticated.

There are other examples if you look at people who are working with foreign entities that are research groups or producing reports. We have an example here, from 2020, of a current member of Parliament who helped produce a controversial report, in association with the Macdonald-Laurier Institute and a CBC reporter, alleging that Pakistan secretly created a Sikh separatist movement. This was later amplified by officials overseas—Indian officials—and that led to more misinformation and disinformation spreading.

Can you comment on that?

4:05 p.m.

Senior Fellow and Co-Director, Technology and International Affairs Program, Carnegie Endowment for International Peace

Jon Bateman

I'm not aware of the specifics. I would say that, as I mentioned during my opening remarks, the boundaries of acceptable foreign involvement and domestic discourse are often unclear.

For example, in democracies it is traditionally acceptable for a foreigner—whether a foreign corporation, a foreign resident or a foreign business—to speak in a domestic context; it could be called public diplomacy and the like. Equally, it has traditionally been acceptable for a citizen or a politician to engage with foreigners.

Things often become more challenging where there is some kind of covertness to the relationship or a violation of domestic law. I'll say that the norms and boundaries around this are really being rethought and reinvestigated for this new era.

Parm Bains Liberal Steveston—Richmond East, BC

Yes. In this instance, there were funds raised by other special interest groups, oil giants and other observer research foundations from different countries.

What I wanted to look at is the need for recommendations when it comes to the work we're doing here. What do we need to look at with respect to relationships with previous governments? In this specific situation, we recently received media reports coming from India, for example, saying that hundreds of millions of dollars should be raised to make sure that the current Justin Trudeau government is defeated. That is widely available in international news.

4:05 p.m.

Senior Fellow and Co-Director, Technology and International Affairs Program, Carnegie Endowment for International Peace

Jon Bateman

I would offer two recommendations, sir. On the specific question of foreign involvement in elections, most democracies have a lot more they could do to build capacity, monitoring, surveillance and enforcement around these types of laws, and to close loopholes and modernize. But it's crucial not to create a situation where the cure is worse than the disease, in other words, where the fear of being tarred as a foreign agent, or as someone influenced by foreigners, chills more legitimate domestic discourse than it protects.

The other recommendation I would give is what I might call “upstream” of all this—namely, building the social infrastructure to help citizens make good decisions. That comes back to journalism, media literacy and the like.

Parm Bains Liberal Steveston—Richmond East, BC

Thank you.

I'll continue this in another round.

4:05 p.m.

Conservative

The Chair Conservative John Brassard

Then we'll give a little more time—15 seconds—to Mr. Villemure and Mr. Green.

Mr. Villemure will be next, and his line of questioning will be in French. I want to encourage Mr. Bateman and Mr. Fung to make sure they have switched on the French-to-English interpretation.

Mr. Villemure, you have the floor for six minutes.

René Villemure Bloc Trois-Rivières, QC

Thank you very much, Mr. Chair.

Good afternoon, Mr. Fung and Mr. Bateman.

Mr. Bateman, could you tell us about the distinctions to be made between Russian, Chinese and Indian disinformation, for example?

4:10 p.m.

Senior Fellow and Co-Director, Technology and International Affairs Program, Carnegie Endowment for International Peace

Jon Bateman

Each of those countries has different tools, organizations and strategies. For example, if we're talking about covert digital disinformation, Russia was the original innovator. It developed very sophisticated online personas that could masquerade as real people. Originally, China was much more blunt in its operations. It had less convincing personas, but simply a lot more activity.

That's shifting over time. China is becoming more like Russia in its sophistication, and Russia itself is debuting new techniques. Both countries also have a very significant overt propaganda capability, such as RT in the case of Russia.

I'm not that familiar with Indian activities.

René Villemure Bloc Trois-Rivières, QC

Are there countries other than India, China and Russia that could be cited as examples of disinformation sources?

4:10 p.m.

Senior Fellow and Co-Director, Technology and International Affairs Program, Carnegie Endowment for International Peace

Jon Bateman

Many countries are involved in these activities; we know a lot about only a few of them. Iran is an emerging example. There was just an incident in the United States in which a group of Iranian hackers breached the Trump campaign and obtained and released sensitive information about the vice-presidential nominee. Information and influence activities, against the United States and others, have become a greater and greater portion of Iran's digital operations.

René Villemure Bloc Trois-Rivières, QC

Should we make a distinction between the concept of propaganda and disinformation?

4:10 p.m.

Senior Fellow and Co-Director, Technology and International Affairs Program, Carnegie Endowment for International Peace

Jon Bateman

Yes. Among experts, the definitions of all these terms are constantly debated. To be candid, there is no single agreed-upon term that captures all of the problems we're describing here.

Disinformation, in its modern sense, has been popularized, or repopularized, only recently, but the term has a Russian origin and first emerged during the Cold War. Propaganda is an older term, and nowadays it is used more broadly to encompass a variety of overt media activities.

René Villemure Bloc Trois-Rivières, QC

Thank you very much.

Given that political parties seem to be permanently campaigning and that we seem to be living in an era of confrontation amongst politicians, has disinformation become more relevant or more effective?

4:10 p.m.

Senior Fellow and Co-Director, Technology and International Affairs Program, Carnegie Endowment for International Peace

Jon Bateman

It's difficult to say whether the problem of disinformation overall is getting better or worse. Certainly in previous eras in the United States, Canada and other western democracies, we had our own challenges—a highly partisan news media, a lack of the journalistic standards of objectivity that exist nowadays, and so on. But I think you're right that the willingness of political leaders to spread and become a source of disinformation and misinformation is probably the number one challenge facing any democracy today. If we can't police this problem, very little else that we do will be effective.

René Villemure Bloc Trois-Rivières, QC

Thank you very much, Mr. Bateman.

Mr. Fung, could you tell us about the effects of artificial intelligence on disinformation?

4:10 p.m.

Professor and Canada Research Chair, McGill University, As an Individual

Benjamin Fung

AI has an impact on both sides. Someone who wants to spread disinformation can create AI bots that spread it collaboratively, and can use large language models, for example, to make the spread more effective. As I mentioned before, by using the local language of the target country, say English or French, they can reach the broader local community; it's no longer limited to a minority group.

On the defence side, we can also use AI to fight disinformation. In Taiwan, for example, they have chatbots: you can submit a question to the chatbot, and it responds with whether that piece of information is likely to be true or false. Of course, that depends on whether its database is reliable.
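The kind of fact-check chatbot Mr. Fung describes can be sketched, in a deliberately simplified way, as a lookup against a database of previously reviewed claims. Everything below—the claims, verdicts and matching threshold—is invented for illustration; real systems such as Taiwan's rely on large, curated fact-check databases and far more sophisticated matching.

```python
# Minimal sketch of a fact-check lookup chatbot, assuming a small
# hypothetical database of claims already reviewed by fact-checkers.
from difflib import SequenceMatcher

# Hypothetical database: claim text -> verdict (all entries invented).
REVIEWED_CLAIMS = {
    "drinking hot water cures the flu": "false",
    "the election date was moved to december": "false",
    "voter registration closes 6 days before the election": "true",
}

def check_claim(query: str, threshold: float = 0.6) -> str:
    """Return the verdict for the closest known claim, or 'unknown'."""
    best_claim, best_score = None, 0.0
    for claim in REVIEWED_CLAIMS:
        score = SequenceMatcher(None, query.lower(), claim).ratio()
        if score > best_score:
            best_claim, best_score = claim, score
    if best_claim is not None and best_score >= threshold:
        return REVIEWED_CLAIMS[best_claim]
    # No close match: as the witness notes, the database is the
    # limiting factor, so the honest answer is "unknown".
    return "unknown"
```

As the testimony emphasizes, the answer is only as good as the database behind it: a query with no close match should return "unknown" rather than a guess.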

René Villemure Bloc Trois-Rivières, QC

I'm going to pick up on the issue you just raised. In terms of defence, if we were to say that here in Canada, on the eve of an election campaign, we want to protect and defend ourselves against disinformation, how could artificial intelligence help us in practical terms?

4:15 p.m.

Professor and Canada Research Chair, McGill University, As an Individual

Benjamin Fung

For example, we can use AI to identify collaborative processes. When we talk about spreading disinformation, it's not done by one or two accounts; it's spread by hundreds or thousands of accounts. We can use AI to detect these types of co-operative activities, but that requires the co-operation of the social media companies.

René Villemure Bloc Trois-Rivières, QC

I have very little time left, Mr. Fung. Could you send us some additional information, possibly in writing, on the concept of using artificial intelligence as a defence against disinformation?

4:15 p.m.

Professor and Canada Research Chair, McGill University, As an Individual

Benjamin Fung

Of course, yes.

There are many articles in the computer science community on how to identify accounts that co-operate to boost posts or products. The same techniques can be used to identify the coordinated boosting of disinformation.

René Villemure Bloc Trois-Rivières, QC

Thank you very much.

4:15 p.m.

Conservative

The Chair Conservative John Brassard

Thank you, Mr. Villemure.

Mr. Green, you have six minutes and 15 seconds. Go ahead.

Matthew Green NDP Hamilton Centre, ON

Mr. Fung, we've spent some time focusing on state actors. We've highlighted adversarial countries that we believe are acting maliciously against our democracy and to undermine our institutions. However, there's a recent example of what ended up being a plot by a fairly amateur actor to boost the appearance of the Conservative leader at a northern event, which led to much discussion and cynicism about the future of our own domestic democratic processes.

Can you perhaps, for a minute, talk about the ways in which non-state actors, both sophisticated and corporate as well as unsophisticated people with access to technology, could potentially disrupt?