Thank you, Mr. Chair.
Thank you all for being here.
I would like to point out that I am speaking today in my personal capacity as somebody who has been studying covert influence operations for a long time. I've been doing this job for a decade, and it is particularly welcome to be part of a conversation like this, because 10 years ago conversations like this were not happening. There was no general awareness of covert influence operations within the larger world of disinformation. The fact that we now have such a thriving defender community and such a thriving conversation is an enormous step forward, and that is something to welcome.
Whenever there is a large conversation like this, it is very important to have clarity about what we are focusing on, what we are talking about and how we measure what we're looking at. There are a couple of points I will make, and I will try to keep them very brief.
First of all, when we talk about covert influence operations, which has been my specialization for a long time, a lot of the conversation tends to be around the content they post, because that's the thing that is most visible, and often it's the most easily identifiable. But there's a very useful framework, created by a French scholar called Camille François, which is the ABC framework. It divides influence operations into actor, behaviour and content. When you think about the ways in which the defender community can intervene, the way we can expose and disrupt this kind of operation, it's the middle portion—the behaviour—that is actually the most essential to focus on. In the space of influence operations, if you look historically, most of the content they have posted over time has not actually been the kind of content that would violate any terms of service. It would be the expression of opinion—I support this politician or I do not support this politician.
What was troublesome about this kind of operation was the use of fake accounts, the use of coordination and perhaps the use of fake websites they were building and fake distribution networks. My work has been very much focused on the behaviours that threat actors go through. When we think about the responses the defender community can mount, it helps to look at these operations as a series of steps they go through, a series of behavioural procedures, which might begin, for example, with registering an email address, registering a web domain or setting up social media accounts. Then for each of those steps, we have to start thinking about the appropriate response to that step and the appropriate party to carry it out.
Last year, with a former colleague, I published a paper called “The Online Operations Kill Chain”, which describes how you can actually sequence and set out the behavioural steps that operations like this can go through. I've shared that with the committee, so I hope you all have access to that already.
That's about the behaviour these operations show. It's also worth thinking about the actors behind these kinds of covert influence operations, because sometimes there's a state actor, and sometimes there may be a commercial actor. You do find companies out there that offer influence operations for hire. Then the question becomes what the appropriate response is to each different type of actor in the space. But whenever we're talking about covert influence operations, it's also really important to ask whether they are having any impact and whether we can actually observe that a specific operation is having a specific impact. Historically, a small number of operations have visibly had an impact, most notably the Russian hack-and-leak operations targeting the U.S. in 2016, but in my experience as an investigator, far more of the operations that have been exposed have not managed to reach real people. They've posted stuff on the Internet, and it has stayed there. There was a Russian operation called "Secondary Infektion", for example, which between 2014 and 2019 posted hundreds of pieces of content across hundreds of different platforms, none of which appears to have been seen by any real people. So influence operations are not all equal. We shouldn't treat them as such, and it's important to ask whether there is a way we can measure how far they are actually reaching.
In 2020 I wrote a paper called "The Breakout Scale" on how to assess the impact of different influence operations and see whether they're actually going somewhere or not. This is a really important thing to think through, because one of the things these operations try to do is make themselves look powerful even when they're not. They will try to generate fear even when there's no reason to have that fear. For example, before the 2018 U.S. midterms, the Russian Internet Research Agency claimed to have already interfered in the election, whereas in fact what had happened was that they'd run maybe 100 Instagram accounts, which had already been taken down. Having a tool that allows us to measure, or even estimate, the impact of these operations is critical to the conversation.
Again, that has been shared with the committee.
When we think about—