Thank you. That is exactly the right question to ask, and one that we work on every day.
I'll just note that our ability to identify, disrupt and stop malicious automation improves every day. To correct a figure I gave earlier: in 2018 we challenged 425 million accounts.
Number one is stopping the coordinated bad activity that we see on the platform. Number two is working to elevate credible voices—journalists, politicians, experts and civil society. Across Latin America we work with civil society, especially in the context of elections, to understand when major events are happening, to focus our enforcement efforts on those events, and to give people more context about accounts they may not recognize.
I'll give you one example, because I know time is short. If you go onto Twitter now, you can see the source of a tweet—whether it is coming from an iPhone, an Android device, TweetDeck, Hootsuite, or the other tools people use to coordinate their Twitter activity.
The last way to think about this is transparency. Our approach is to do the day-to-day work quietly to keep the health of the platform strong, but when we find information operations—particularly state-sponsored ones—we capture that information and put it into the public domain. We have a fully public API that anybody can access, and we learn and get better because of the work that researchers and governments have undertaken to delve into that dataset.
It is an incredibly challenging issue. As you mentioned, it is easy for us to identify instantaneous retweets and similarly automated behaviour. It is harder to detect when people are paid to tweet, or what we saw in the Venezuelan context with troll prompts and those kinds of things.
We will continue to invest in research and in our tooling to get better.