Thank you for the opportunity to appear before you today.
My name is Dr. Lindsay Hundley, and I am the global threat intelligence lead at Meta. My work is focused on producing intelligence to identify, disrupt and deter adversarial threats on our platforms. I've worked to counter these threats at Meta for the past three years, and my work at the company draws on over 10 years of experience as a researcher focused on issues related to foreign interference, including in my doctoral work at Stanford University and during research fellowships at both Stanford University and Harvard Kennedy School.
I'm joined today by Rachel Curran, the head of public policy for Canada.
At Meta, we work hard to identify and counter foreign adversarial threats, including hacking and cyber-espionage campaigns as well as influence operations—what we call coordinated inauthentic behaviour, or CIB. Meta defines CIB as any coordinated effort to manipulate public debate for a strategic goal in which fake accounts are central to the operation. CIB occurs when users coordinate with one another and use fake accounts to mislead others about who they are and what they are doing.
At Meta, we believe that authenticity is a cornerstone of our community. Our community standards prohibit inauthentic behaviour, including by users who seek to misrepresent themselves, use fake accounts or artificially boost the popularity of content. This policy is intended to protect the security of user accounts and our services and create a space where people can trust the people and communities that they interact with on our platforms.
We also know that threat actors are working to interfere with and manipulate public debate. They try to exploit societal divisions, promote fraud, influence elections and target authentic social engagement. Stopping these bad actors is one of our highest priorities, and that is why we've invested significantly in people and technology to combat inauthentic behaviour at scale.
The security teams at Meta have developed policies, automated detection tools and enforcement frameworks to tackle deceptive campaigns, both foreign and domestic. These investments in technology have enabled us to stop millions of attempts to create fake accounts every day and to detect and remove millions more, often within minutes after creation. Just this year, Meta has disabled nearly two billion fake accounts, and the vast majority, over 99%, were identified proactively.
Our strategy to counter these adversarial threats has three main components. First, there are expert-led investigations to uncover the most sophisticated operations. Second is public disclosure and information-sharing to enable cross-societal defences. Third are product and engineering efforts to take the insights derived from our investigations and turn them into more effective, scaled and automated detection and enforcement.
A key component of this strategy is our public quarterly threat reports. Since we began this work, we've taken down and disclosed more than 200 covert influence operations from 68 countries that operated in 40 languages, from Amharic to Urdu to Russian to Chinese. Sharing this information has enabled our teams, investigative journalists, government officials and industry peers to better understand and expose Internet-wide security risks, including ahead of critical elections.
We've also shared detailed technical indicators linked to these networks in a public-facing repository hosted on GitHub, which contains more than 7,000 indicators of influence operations activity across the Internet.
Before I close, I'd like to touch on a few trends that we're monitoring in the global threat landscape.
To start, Russia, Iran and China remain the top three sources of foreign interference networks globally. We have removed nearly 40 operations from Russia that target audiences around the world, including four new operations in just this past quarter. Russian-origin operations have become overwhelmingly one-sided over the past two years, pushing narratives that favour whoever is less supportive of Ukraine.
Likewise, China-origin operations have evolved significantly in recent years to target broader, more global audiences, including in languages other than Chinese. These operations have continued to diversify their tactics, including targeting critics of the Chinese government, attempting to co-opt authentic individuals and using AI-generated news readers in an attempt to make fictitious news outlets look more legitimate.
Finally, we've seen threat actors increasingly decentralize their operations to withstand disruptions from any single platform. We've seen them increasingly outsource their deceptive campaigns to private firms. We are also seeing them leverage generative AI technologies to produce higher volumes of original content at scale, though their abuse of these technologies has not impeded our ability to detect and remove these operations.
I would be happy to discuss any of these trends in more detail.
I want to close by saying that countering foreign influence operations is a whole-of-society effort, which is why we engage with our industry peers, independent researchers, journalists, government and law enforcement.
Thank you so much for your focus on this work. We look forward to answering your questions.