Evidence of meeting #115 for Access to Information, Privacy and Ethics in the 44th Parliament, 1st Session. (The original version is on Parliament's site, as are the minutes.)

A recording is available from Parliament.

Witnesses

Ben Nimmo, Threat Investigator, OpenAI, As an Individual
Joel Finkelstein, Founder and Chief Science Officer, Network Contagion Research Institute
Sanjay Khanna, Strategic Advisor and Foresight Expert, As an Individual

12:30 p.m.

Conservative

The Chair Conservative John Brassard

Good afternoon, everyone.

Welcome to meeting number 115 of the House of Commons Standing Committee on Access to Information, Privacy and Ethics.

Pursuant to Standing Order 108(3)(h), the committee is resuming its study of the impact of disinformation and misinformation on the work of parliamentarians.

I want to remind everybody to be mindful of their microphones. I'm not going to go through the list of things that I have to say, but when you're not using the earpieces, make sure they're in the proper place.

For those online, for the benefit of the interpreters, try not to talk over each other. We want to avoid any injury during these hybrid sittings.

I'd like to welcome our witnesses today. We are only going to have one hour; we are in the process of rescheduling the second panel to a later date. Unfortunately, with all the votes today, we're in this position. I apologize to those witnesses.

I'd like to welcome, for the first hour, Mr. Ben Nimmo, who's a threat investigator for OpenAI.

I've also got Mr. Joel Finkelstein, who is the founder and chief science officer from the Network Contagion Research Institute.

Mr. Sanjay Khanna was supposed to be on our second hour. He was here in person in the audience, so I have taken the liberty of asking him to join us for this panel. He is here as an individual and is a strategic adviser and foresight expert.

Mr. Khanna, thank you for accommodating us.

I'm going to start with you, Mr. Nimmo. I understand that you've only got until one o'clock. Again, I apologize for the votes.

You have up to five minutes to address the committee. My job is to keep things on time, so I will stop you right at five minutes.

May 2nd, 2024 / 12:30 p.m.

Ben Nimmo Threat Investigator, OpenAI, As an Individual

Thank you, Mr. Chair.

Thank you all for being here.

I would like to point out that I am speaking today in my personal capacity as somebody who has been studying covert influence operations for a long time. I've been doing this job for a decade, and it's particularly welcome to be in a conversation like this here, because 10 years ago conversations like this were not happening. There was not a general awareness of covert influence operations in the larger world of disinformation. The fact that we now have such a thriving defender community and such a thriving conversation is an enormous step forward, and that is something to welcome.

Whenever there is a large conversation like this, it is very important to have clarity over what we are focusing on, what we are talking about and how we measure what we're looking at. There are a couple of points I will make. I will try to keep it very brief.

First of all, when we talk about covert influence operations, which has been my specialization for a long time, a lot of the conversation tends to be around the content they post, because that's the thing that is most visible, and often it's the most easily identifiable. But there's a very useful framework, created by a French scholar called Camille François, which is the ABC framework. It divides influence operations into actor, behaviour and content. When you think about the ways in which the defender community can intervene, the way we can expose and disrupt this kind of operation, it's the middle portion—the behaviour—that is actually the most essential to focus on. In the space of influence operations, if you look historically, most of the content they have posted over time has not actually been the kind of content that would violate any terms of service. It would be the expression of opinion—I support this politician or I do not support this politician.

What was troublesome about this kind of operation was the use of fake accounts, the use of coordination and the use of perhaps fake websites they were building on and fake distribution networks. My work has been very much focused on the behaviours that threat actors go through. When we think about the responses the defender community can come out with, it helps to look at these operations as a series of steps they go through, a series of behavioural procedures, which might begin, for example, with registering an email address, registering a web domain or setting up social media accounts. Then for each of those steps, we have to start thinking about appropriate responses to that step and the appropriate person to do those things.

Last year, with a former colleague, I published a paper called “The Online Operations Kill Chain”, which describes how you can actually sequence and set out the behavioural steps that operations like this can go through. I've shared that with the committee, so I hope you all have access to that already.

That's about the behaviour these operations show. It's also worth thinking about the actors that are behind these kinds of covert influence operations, because sometimes there's a state actor, and sometimes there may be a commercial actor. You do find companies out there that offer influence operations for hire. Then the question becomes what the appropriate response is to a different type of actor in the space. But whenever we're talking about covert influence operations, it's also really important to ask whether they are having any impact and whether we can actually observe that a specific operation is having a specific impact. Historically, a small number of operations have visibly had an impact—most notably the Russian hack-and-leak operations in 2016 targeting the U.S.—but in my experience as an investigator, far more of the operations that have been exposed have not managed to reach real people. They've posted stuff on the Internet, and it has stayed there. There was a Russian operation called "Secondary Infektion", for example, which between 2014 and 2019 posted hundreds of pieces of content across hundreds of different platforms, none of which appears to have been seen by any real people. So influence operations are not all equal. We shouldn't treat them as such, and it's important to ask whether there is a way we can measure how far they are actually reaching.

In 2020 I wrote a paper called “The Breakout Scale” on how to assess the impact of various different influence operations and see whether they're actually going somewhere or not. This is a really important thing to be thinking through, because one of the things that operations try to do is to make themselves look powerful even when they're not. They will try to generate fear, even when there's no reason to have that fear. For example, before the U.S. mid-terms in 2018, the Russian Internet Research Agency claimed to have already interfered in the election, whereas in fact, what had been happening was that they'd run maybe 100 Instagram accounts, which had already been taken down. Having a tool that allows us to measure the impact or even to estimate the impact of these operations is critical to the conversation.

Again, that has been shared with the committee.

When we think about—

12:35 p.m.

Conservative

The Chair Conservative John Brassard

I'm sorry, Mr. Nimmo. It's been five minutes. It goes quickly.

As I said at the outset, you're on limited time here. I would encourage you to submit to the committee any other thoughts that you may have—either comments or responses—after you hear some of the questions today.

Mr. Finkelstein, you have up to five minutes to address the committee. Go ahead, sir.

12:35 p.m.

Joel Finkelstein Founder and Chief Science Officer, Network Contagion Research Institute

Thank you so much.

I'm Joel Finkelstein, the chief science officer and the founder of the Network Contagion Research Institute.

Our organization profiles a lot of different threats that are facing governments, democracy and vulnerable communities. There are two that I want to bring to the attention of lawmakers today because I think they're highly emblematic of the kinds of threats that lawmakers often can't see, that platforms themselves have challenges policing and that have the capacity—I think intrinsically—for a profound breakout in the near future in ways that I think could create terrible harms for society and for vulnerable communities.

The first one that we talk about a lot is child harms. There's been a surge of online child harms through deceptive practices using AI.

The second is platform-scale manipulation by state actors. In this case, we're talking about TikTok.

In the first case, we found that there were cybercriminal syndicates in West Africa using AI to impersonate beautiful women—complete with videos and images. They would speak to teenagers. There was a 1,000% increase in these cases, in which they would impersonate women to get these teenagers into compromising positions and then "sextort" them. This has led to 21 suicides—several of them in Canada—of troubled children who have been sextorted this way.

You can well imagine the application this is going to have to the elderly. Platforms are terrible at policing this. This criminal syndicate from Nigeria was passing out manuals on TikTok, YouTube and Scribd on how to do this. That is facilitating a breakout of this kind of crime, and this is only one example of something that has the capacity to be severely disarming to lawmakers as it begins interfering with other processes among the elderly and youth.

These kinds of catfishing schemes and harms are very challenging to police. We need investigative mechanisms to understand them and unearth them more rapidly in order to address them. I sent you reports on that and I encourage everyone to take a look.

The other issue is not just that you have individual actors who are empowered by technology, but manipulations of entire platforms. NCRI performed research on TikTok, with its 1.5 billion users, looking at inexplicable discrepancies in material that is sensitive to the Chinese Communist Party. We examined hashtags on Israel, Ukraine and Kashmir, as well as hashtags pertaining to Tibet and the South China Sea.

In some cases, these hashtags were 50 times more prevalent on comparable platforms than they were on TikTok. That is an incredible discrepancy, and it suggested to us a mass suppression of some information and promotion of other information through a charm offensive.

Genocide denial.... These problems are rampant on TikTok in a way that creates an “Alice in Wonderland” reality for 1.5 billion users. Our social psychology analysis suggests that this is impactful and alters the psychology of users towards a more friendly, pro-China stance on a massive scale.

Understanding these kinds of problems requires that parliamentarians and democratic bodies have greater insight and investigative capacity rapidly at their fingertips to be able to explore and understand emerging threats before those threats can get the better of them.

I will cede the rest of my time.

12:40 p.m.

Conservative

The Chair Conservative John Brassard

I kind of wish you wouldn't, Mr. Finkelstein. You had me glued there. I'm sure members of the committee will have lots of questions for you.

Mr. Khanna, you have up to five minutes to address the committee. Go ahead, sir.

12:40 p.m.

Sanjay Khanna Strategic Advisor and Foresight Expert, As an Individual

My respect for your work as legislators and parliamentarians is implicit in my remarks.

As a strategic foresight consultant, I advise business, government, higher education, NGOs and registered charities on thinking comprehensively and strategically about the future. Once clients understand plausible scenarios that they may face, they can prepare for disruptions. I propose to this committee that parliamentarians must thoroughly prepare for uncertain futures.

Today Canada is less resilient than it was prepandemic. Many of us feel highly distressed, experience a more challenging economy, view politicians and institutions with greater distrust, and face the toxic consequences of polarization online and in real life.

We are living through multiple converging and overlapping crises: geopolitical instability, climate impacts, emerging diseases and technologies that fuel misinformation and disinformation. The RCMP's heavily redacted report indicated similar foreboding threats in Canada's near future. At this crossroads, I believe that Canada faces two stark choices: to build resilience to reveal falsehoods and ascertain truth through coordinated and holistic efforts, or to see resilience neutralized through individualized and fragmented responses.

I harbour grave concerns about what governing might be like in a future where parliamentarians and Canadians are unable to differentiate facts from mis- and disinformation and ultimately act contrary to their individual, community and collective interests.

Parliamentarians' work influences all persons living in Canada. While all of us, including your constituents, are targets of mis- and disinformation, you as parliamentarians are at increased risk of being targeted because of your time-honoured political and legislative roles. Multiple anti-democratic actors, nation-states, criminal entities and advocacy interests seek to subvert or co-opt parliamentarians by amplifying mis- and disinformation from individual to population scales.

Canada's adversaries seek to obstruct parliamentarians' deliberative decision-making and stakeholder engagement. This threatens Canada's domestic and foreign policy, thereby challenging Canadians' economic prosperity and social cohesion. It is a common misconception that these efforts are easily detected, but subtle manipulation of a single piece of information can be easy to miss. Targeting of your trusted staff, departments and the agencies you rely on for research and analysis creates new information vulnerabilities.

Mis- and disinformation exploit technologies of social media, machine learning and artificial intelligence that parliamentarians increasingly depend on for democratic engagement and constructive action and that our economy depends on for competitive advantage. By design, mis- and disinformation are threat multipliers. They promote distrust of bedrock institutions such as the Parliament of Canada, the justice system, fact-checked media, non-partisan research, universities, health care providers and the international institutions that arose after World War II to foster co-operation and stability.

Politics of rage and grievance driven by mis- and disinformation instigate polarization at individual, group and population levels. In this environment, parliamentarians must determine if and how their positions on policy, funding and legislation may unwittingly serve Canada's adversaries or be influenced by any entity that could compromise Canada's resilience.

Parliament needs to be seen to balance mis- and disinformation with the broader contextual perspective expected of trusted institutions behaving in the national interest. Establishing cohesive whole-of-Parliament and whole-of-society approaches to addressing this mis- and disinformation is a critical mission to rebuild trust and social licence.

Parliamentarians need no reminder that Canada's enemies are pleased for us to be divided, rendering Parliament incapable of acting in the national interest, protecting agri-food supply chains, building climate security, strengthening energy and transportation networks, and securing our elections. For parliamentarians, ensuring that mis- and disinformation do not interfere with cross-party collaboration in the House is necessary for Canada's material well-being and physical and mental health.

Parliamentarians and their staff need to continuously learn about how sophisticated approaches to deception and/or impersonation of legislators via convincing AI-driven manipulations of video, voice, text and images may irreparably harm political reputations and our democracy.

In the short term, as Canada navigates an era of multiple converging crises, the structured approach of scenario planning can assist parliamentarians as they devise resilient public policy, legislation, regulation and stakeholder engagement. In the longer term, consider the potential for a Canadian charter of digital rights and freedoms to articulate responsibilities and protections for Canadians related to mis- and disinformation.

Thank you.

12:45 p.m.

Conservative

The Chair Conservative John Brassard

Thank you, Mr. Khanna.

We're going to start with our first six-minute round. I want to advise the committee that, if we need a little extra time before question period because some of your questions are not being answered.... It's not whether they're being answered but whether you have more questions you would like to add. We can extend for another 15 minutes if we need to. As we get closer to the bottom of the hour, I'll see whether there's a desire to move on.

Okay, Mr. Barrett, we're going to start with you for six minutes. Go ahead, sir.

12:45 p.m.

Conservative

Michael Barrett Conservative Leeds—Grenville—Thousand Islands and Rideau Lakes, ON

Thanks, Chair.

At the top of my time, I'm going to give verbal notice of a motion. I'm not moving the motion, but I'm going to give notice of it. That will give the opportunity for it to be received by the clerk, translated and distributed to members of the committee. Of course, the provision of 48 hours would then be in effect before it could be moved.

12:45 p.m.

Conservative

The Chair Conservative John Brassard

Okay, go ahead on the verbal notice. I'm not stopping your time, Mr. Barrett.

12:45 p.m.

Conservative

Michael Barrett Conservative Leeds—Grenville—Thousand Islands and Rideau Lakes, ON

This week, Chair, Global News revealed two investigative pieces they've been working on.

The first is one I mentioned yesterday. It's on a series of meetings between lobbyist Kirsten Poon and high-level political staff across multiple federal departments. Those meetings were aimed at securing $110 million in federal grants for the Edmonton International Airport. These efforts occurred between 2021 and 2022 and involved Poon's connection to Justin Trudeau's cabinet minister Randy Boissonnault, who represents Edmonton Centre. Mr. Boissonnault, in transitioning from his consulting business to his ministerial role, delegated control of his business to Poon, who resumed the firm's lobbying activities.

Now, Mr. Boissonnault gave his former business partner his only client, which was the Edmonton Regional Airports Authority. The minister's influence, of course, is attached to that transfer. This firm lobbied the Edmonton airport authority, an organization regulated by the government on federal government land with board members appointed by the government—the same government Mr. Boissonnault is now a member of. Mr. Boissonnault was collecting payments from the company that was lobbying his government. This, of course, raises incredible concerns with respect to the Lobbying Act, the Conflict of Interest Act and the Conflict of Interest Code for members.

Now there's a second report from Global News revealing that Justin Trudeau's cabinet minister Randy Boissonnault remained listed as a director of the Global Health Imports Corporation, or GHI, for over a year after his 2021 election. Mr. Boissonnault claims that he's had no involvement with GHI since his election, but the company co-founded by Mr. Boissonnault in 2019 after his electoral defeat secured contracts totalling $8.2 million from other levels of government—provincial and municipal—for pandemic supplies.

Now imagine the disadvantage when competing for government contracts against a company that has a member of Justin Trudeau's federal cabinet listed as one of the directors. That's why such a scenario is in fact prohibited by law. I would be remiss not to mention that GHI faced multiple lawsuits for unpaid bills and unfulfilled deliveries, resulting in default judgments totalling over $7.8 million. Allegations of wire fraud were made against Mr. Boissonnault's GHI co-founder Stephen Anderson in one of the lawsuits. Despite winning lawsuits, suppliers struggled to recoup owed funds from Minister Boissonnault's company, which he was still listed as a director for, raising questions about the legitimacy of its operation and the fairness of its bidding process.

Chair, the motion is that:

Pursuant to Standing Order 108(3)(h) and in light of new media reports, that the committee undertake an immediate study into Minister Randy Boissonnault and allegations of fraud and contravention of ethics and lobbying laws; that the committee invite Minister Randy Boissonnault, Kirsten Poon, Stephen Anderson of Global Health Imports and the Ethics Commissioner to testify individually, in addition to any other relevant witnesses; and that the committee report its findings to the House.

Chair, I'd like to share the rest of my time with Mr. Viersen.

12:50 p.m.

Conservative

The Chair Conservative John Brassard

The motion is on notice.

Go ahead, Mr. Viersen.

12:50 p.m.

Conservative

Arnold Viersen Conservative Peace River—Westlock, AB

Thank you, Mr. Chair, and thank you to the witnesses for being here today, particularly Mr. Nimmo.

I'd like to start with you around misinformation and disinformation.

I like your "influence operations" framework. One of the challenges I have seen is around what spreads and what doesn't. It's not necessarily that the information is wrong; it's that some things are promoted aggressively, and other things you would expect to go viral don't. I'm just wondering if you have any take on how actors can manipulate things to push some content forward and suppress other content that probably should go forward.

12:50 p.m.

Conservative

The Chair Conservative John Brassard

You have 45 seconds, Mr. Nimmo.

12:50 p.m.

Threat Investigator, OpenAI, As an Individual

Ben Nimmo

Thank you.

I'd actually pick up on a point that Mr. Khanna made. Something we have regularly seen, and something I have seen in many different roles, is that influence operations will try and land their content in front of a particular influencer, celebrity or politician in the hope that they will then amplify it themselves. To Mr. Khanna's point, there is a real need for great caution by all of us. Every one of you in the room is a celebrity in your own way. There's a need for great caution because quite often threat actors will not try and land something directly in front of a viral audience; they will try and land it in front of some kind of springboard, and anyone who has a social media following is that potential springboard. That can be the way in which things break out, hence the need for a kind of legislated resilience and care.

12:50 p.m.

Conservative

The Chair Conservative John Brassard

Thank you, Mr. Nimmo and Mr. Viersen.

Mr. Housefather, you have six minutes. Go ahead, please, sir.

12:50 p.m.

Liberal

Anthony Housefather Liberal Mount Royal, QC

Thank you so much.

Mr. Nimmo, I have just a short question. Do you know whether Meta uses its algorithms to amplify hateful posts in order to monetize the platform?

12:50 p.m.

Threat Investigator, OpenAI, As an Individual

Ben Nimmo

Mr. Housefather, I'm not sure if the committee members are aware, but I no longer work at Meta. I have not worked there for a couple of months. When I was working there, I was a threat investigator specializing in influence operations. I do not know about the ins and outs of the algorithmic methods there.

12:50 p.m.

Liberal

Anthony Housefather Liberal Mount Royal, QC

Fair enough. I wasn't sure if your non-disclosure would prevent you from answering that either, but I was just wondering because we've had Meta witnesses before and they haven't exactly been forthcoming.

Mr. Finkelstein, I'm going to turn to you. I'm familiar with a lot of the work that you've done, and some of your most impressive work relates to anti-Semitism—how anti-Semitic tropes are spread on social media and how misinformation is fed from social media and then amplified to the extent that it comes out in the real world.

Can you give us some examples of that?

12:50 p.m.

Founder and Chief Science Officer, Network Contagion Research Institute

Joel Finkelstein

I wouldn't even know where to start. I think that the problem has become so prolific.

I think that there are historical reasons; that's true. The hatred of Jews is a fairly high-octane hate. It's very powerful. It has a 1,000-year history, or a 3,000-year history or whatever, to draw upon in order to inform critics and people who are looking for something to blame.

One of the things that we discussed earlier is that in the current information environment, it's so complex that the people who win are the ones who are most successful at playing the blame game. When that happens, the groups that have historically been best at receiving that blame end up being highlighted.

It's really not different from other forms of hate; it's just far more elaborate, robust and systemic. I think a lot of what we're seeing now is unusual, and there's an anomalous rise in anti-Semitism. This has been crafted deliberately. There have been efforts going back to the Soviet Union and to others to agitate for a blood libel in the United Nations and other places to accuse Israel of genocide and to say that what's happening in Israel amounts to grotesque violations of human rights that don't occur elsewhere and are unique to the Jewish people and to Israel—to one and only one nation.

The way that has been amplified across college campuses in the face of recent aggressions in Israel has led to spill-out where we definitely know that these signals start online, and they forward-predict anti-Semitism.

It's not just that the systems exist in and of themselves. We know that where we see the strengthening of these blood libels and this high-velocity political language is where the geographic signature of that language will predict where anti-Semitic attacks take place. The temporal signature of that information will predict when they take place and not the other way around.

We know that the social media signal is carrying something that is potentially instructive. That's what's important to understand about this dynamic. It's highly manufactured by enemy nations. It's perfect for creating blame and uncertainty in already-fraught societies. The result is that we see this being pushed very deliberately by the CCP and very deliberately by enemies of democracy, because they know it's going to be successful.

12:55 p.m.

Liberal

Anthony Housefather Liberal Mount Royal, QC

Would you say that Russia, for example, and Iran are behind disinformation on social media—on platforms like TikTok, YouTube and Meta—in order to convince people to accept anti-Semitic tropes and disproportionate blame on Israel?

12:55 p.m.

Founder and Chief Science Officer, Network Contagion Research Institute

Joel Finkelstein

Very famously, the Soviet information.... It wasn't called the GRU at the time, but it had a swastika campaign in the seventies in Europe. It decided it would paint swastikas everywhere. This goes to Ben's point about successful versus unsuccessful disinformation operations and what characterizes signal and what characterizes noise. Oftentimes, what becomes signal is concern about noise, right? What happened in Europe was that 70 swastikas were put up by GRU members. That's all it took. That's all it took for people to start becoming convinced that everybody else was hateful. Once they were convinced that everybody else was hateful, the swastikas started blooming up organically.

So, you have a catalytic process whereby people are oftentimes initially.... You know, what's happened in social media is the growth of what we call false polarization. We're told that the other side has become so radical. Don't believe it. It's not true. Consistently, when we poll people to understand what their positions are, what we learn is that we have far more consensus on virtually any issue than what is depicted on social media, so we're being fed a conflict. In that environment, the suspicion of being undermined by a fifth column or the suspicion of having your control taken out from under you, being sold on that conflict, is where it becomes so important to say, “Hey, guys, this is a small signal. It's not that important.” However, when there's no one trusted to be able to say that, all the suspicions have fuel to become fire very quickly.

12:55 p.m.

Liberal

Anthony Housefather Liberal Mount Royal, QC

Based on all the data that you are talking about, that these investigations that you and others are doing are providing, how do parliamentarians reliably obtain this type of data? How do we close the gap between what you know and what we know?

12:55 p.m.

Founder and Chief Science Officer, Network Contagion Research Institute

Joel Finkelstein

You know, I said in the podcast we were talking about earlier that I really feel that what's needed for democracies is a rapid investigatory function to be able to help really visualize the difference. Let's imagine. Let's go back to Europe. Let's do the experiment. Seventy GRU people are putting up swastikas. Instead of saying that everyone is doing it, the headline is that this is 70 people and that we're not as bad as we suspect each other of being. That becomes the headline. We aren't bad people, right? We're being presented with complexity that we've never had to deal with before, and that explains a lot of our bad behaviours and bad choices.

1 p.m.

Conservative

The Chair Conservative John Brassard

Thank you.

1 p.m.

Founder and Chief Science Officer, Network Contagion Research Institute

Joel Finkelstein

Ultimately, when people are aligned, it's possible—definitely possible—for us to have sensible, common-sense and consensus-driven conversations that are productive, as long as we have the capacity to separate out signal from noise.