Thank you for the opportunity to appear before you today.
My name is Nathaniel Gleicher, and I'm the head of security policy at Meta.
My work focuses on addressing the adversarial threats we face every day to the security and integrity of our products and services, and on taking steps to protect our users in every way we can.
I have worked in cybersecurity and trust and safety for two decades, first as a technical expert and then as a cybercrime prosecutor at the U.S. Department of Justice and as director for cybersecurity policy at the National Security Council.
I'm joined by video conference today by two colleagues at Meta: Rachel Curran, the head of public policy for Canada; and Dr. Lindsay Hundley, our lead for influence operations policy.
At Meta, we work hard to identify and counter foreign adversarial threats, including hacking campaigns and cyber-espionage operations, as well as influence operations, which we call coordinated inauthentic behaviour, or CIB. We define CIB as any “coordinated efforts to manipulate public debate for a strategic goal, in which fake accounts are central to the operation.”
CIB occurs when users coordinate with one another and use fake accounts to mislead others about who they are and what they are doing. At Meta, our community standards prohibit inauthentic behaviour, including by users who seek to misrepresent themselves, use fake accounts or artificially boost the popularity of content. This policy is intended to protect the security of user accounts and our services, and to create a space where people can trust the people and communities they interact with on our platforms.
We also know that threat actors are working to interfere with and manipulate public debate, exploit societal divisions, promote fraud, influence elections and target authentic social engagement across the Internet. Stopping these bad actors, both on our platforms and more broadly, is one of our highest priorities. That's why we have invested significantly in people and technology to combat inauthentic behaviour.
The security teams at Meta have developed policies, automated detection tools and enforcement frameworks to tackle deceptive actors, both foreign and domestic. These investments in technology have enabled us to stop millions of attempts to create fake accounts every day and to detect and remove millions more, often within minutes of their creation. Just this year, Meta has disabled almost two billion fake accounts. The vast majority of those, more than 99% of them, were identified proactively before receiving any report.
As part of this work, we regularly publish reports on our work to counter the threats we're discussing here today. To talk more about that, I'd like to hand it over to Dr. Hundley, who coordinates our work to identify and expose foreign interference.