You're right that the threats are always evolving. As I mentioned a moment ago, I think we were slow as a company to spot the new types of threats that emerged out of the U.S. presidential election. Since then, we have devoted significant resources and significant time, and we are doubling our security team, to try to address these things.
AI is going to play a huge role in that. At scale, with 2.2 billion people around the world using our service, you're right that if everybody posts just once a day, that is, by definition, 2.2 billion pieces of content. AI will allow us to use automation to identify bad actors.
You're absolutely right that we cannot guarantee 100% accuracy. It goes the other way, too, sir. I think what you're alluding to is that we want to be very careful about the false positive scenario, in which we accidentally take down legitimate content that doesn't violate our community standards. We do have to be very careful about that.
I do want to assure you, as we have said in other places as well, that while we are certainly dedicating a lot of resources, staff, and time to addressing the concerns we know about, we are also looking ahead to identify emerging threats, so that we stay ahead of them as electoral events happen around the world.