Thank you so much, and thank you for the opportunity to appear before you today.
My name is David Agranovich. I am the director of threat disruption at Meta.
My work is focused on coordinating our cross-company efforts to identify, disrupt and deter adversarial threats on our platforms. I've worked to counter these threats at Meta for the past six years. Previously, I worked in the U.S. government on Russian interference issues, culminating as the director for intelligence and director for Russia at the National Security Council.
I'm joined today by Rachel Curran, who is our head of public policy for Canada.
At Meta, we work hard to identify and counter adversarial threats. These include hacking, spyware and cyber espionage operations, as well as influence operations, or what we call “coordinated inauthentic behaviour”, or CIB, which we define as any coordinated effort to manipulate public debate for a strategic goal, in which fake accounts are central to the operation.
At Meta, our community standards prohibit inauthentic behaviour, including by users who seek to misrepresent themselves, use fake accounts or artificially boost the popularity of content. This policy is intended to protect the security of users and our services and create a space where people can trust the people and the communities that they interact with on our platforms.
We also know that threat actors are working to interfere with and manipulate public debate, exploit societal divisions, promote fraud, influence elections and target authentic social engagement. Stopping these bad actors is one of our highest priorities. This is why we have invested significantly in people and technologies to combat inauthentic behaviour. The security teams at Meta have developed policies, automated detection tools and enforcement frameworks to tackle deceptive actors, both foreign and domestic. These investments in technology have enabled us to stop millions of attempts to create fake accounts every day and to detect and remove millions more, often within minutes of their creation.
Just this year, Meta disabled more than two billion fake accounts, the vast majority of which, over 99%, were identified proactively before receiving any report from a user.
Our strategy to counter these adversarial threats has three main components. The first is expert-led investigations to uncover the most sophisticated operations. The second is public disclosure and information sharing to enable cross-societal defence. The third is product and engineering efforts to build the insights derived from our investigations into more effective, scaled and automated detection and enforcement.
A key component of this strategy is our public quarterly threat reports. Since we began this work, we've taken down and disclosed more than 200 covert influence operations. These originated in 68 different countries and operated in at least 42 different languages, from Amharic and Urdu to Russian and Chinese.
Sharing this information has enabled our teams, investigative journalists, government officials and industry peers to better understand and expose Internet-wide security risks, including those ahead of critical elections. We also share detailed technical indicators linked to these networks in a public-facing repository hosted on GitHub, which contains more than 7,000 indicators of influence operations activity across the Internet.
I want to very briefly share the key trends we've observed in the course of our investigations into influence operations around the world.
First, Russia continues to be the most prolific source of CIB. We've disrupted more than 40 operations from Russia that targeted audiences all over the world. Second, Iran remains the second most active source of CIB globally. Third, while historically China-origin clandestine activity was limited on our platforms, we've seen a shift by Chinese operations in the past two years to target broader, more global audiences in languages other than Chinese.
Across operations from these different regions, we've seen three common trends: an increasing reliance on private firms selling influence as a service; the use of generative AI tools, though, I would note, with little impact on our investigative capabilities; and, finally, amplification through uncritical media coverage of these networks.
I'd be happy to discuss these operations in more detail throughout our discussion today.
Countering foreign influence operations is a whole-of-society effort, which is why we work with our industry peers—including some of the folks represented here today—as well as independent researchers, investigative journalists, government and law enforcement.
Thank you for your focus on this work. I look forward to answering your questions.