Trends measure the conversation in real time and try to set aside conversations that always have a high level of engagement, like the English Premier League or the Mexican election writ large. What trends are trying to identify is acceleration above the normal. When that happens organically, as you mentioned, it has a different pattern from when it is augmented by bots.
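A minimal sketch of that idea, purely illustrative and not Twitter's actual algorithm: compare a topic's recent volume to its own rolling baseline, so an always-busy topic never fires on raw volume, only on acceleration above its normal. The class name, window size, and spike threshold are all assumptions.

```python
# Hypothetical sketch (not Twitter's real trend system): flag a topic when its
# recent tweet volume accelerates well above its own rolling baseline, rather
# than when its absolute volume is merely high.
from collections import deque

class TrendDetector:
    def __init__(self, baseline_window=24, spike_factor=3.0):
        self.history = deque(maxlen=baseline_window)  # recent tweets-per-interval counts
        self.spike_factor = spike_factor              # how far above normal counts as a spike

    def observe(self, tweets_this_interval: int) -> bool:
        """Return True if this interval's volume spikes above the topic's own baseline."""
        if len(self.history) < self.history.maxlen:
            self.history.append(tweets_this_interval)
            return False  # not enough history yet to know what "normal" looks like
        baseline = sum(self.history) / len(self.history)
        is_spike = tweets_this_interval > self.spike_factor * max(baseline, 1.0)
        self.history.append(tweets_this_interval)
        return is_spike

# An always-busy topic (e.g. a football league) has a high baseline, so steady
# high volume never fires; only a sharp acceleration above its own normal does.
detector = TrendDetector()
for count in [500] * 25:
    detector.observe(count)          # False throughout: busy, but not accelerating
print(detector.observe(2500))        # True: sharp acceleration above baseline
```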
Since 2014, which for us is a lifetime, we've had the ability to protect trends from that kind of inorganic automated activity. Kevin mentioned an arms race. I think that's a good term for the battle against malicious automation. Right now we're challenging 450 million accounts a year for being inauthentic, and our tools are very subtle, very creative. They look at signals like instant retweets or activity so fast that it could not possibly be human.
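To make the "instant retweets" signal concrete, here is a hedged sketch of what one such timing heuristic could look like. The reaction-time floor, the ratio threshold, and the function name are all illustrative assumptions, not Twitter's actual values or interface.

```python
# Hypothetical sketch of a timing-based inauthenticity signal: an account whose
# retweets consistently arrive within a second or two of the original tweet is
# acting faster than a human plausibly could. Thresholds are assumptions.
from datetime import datetime, timedelta
from typing import List, Tuple

HUMAN_MIN_REACTION = timedelta(seconds=2)   # assumed lower bound for reading + retweeting
SUSPICIOUS_RATIO = 0.8                      # assumed share of "instant" actions that triggers a challenge

def should_challenge(retweet_pairs: List[Tuple[datetime, datetime]]) -> bool:
    """retweet_pairs: (original_tweet_time, retweet_time) observed for one account."""
    if not retweet_pairs:
        return False
    instant = sum(1 for original, retweet in retweet_pairs
                  if retweet - original < HUMAN_MIN_REACTION)
    return instant / len(retweet_pairs) >= SUSPICIOUS_RATIO
```

In practice a challenged account would then have to prove it is human (for example via a verification step) before continuing, which is the step the pass rate below refers to.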
Despite that, only 75% of those challenged accounts ultimately get kicked off the service, so 25% of the people we thought were acting in an inauthentic way were able to pass the challenge. We are over-indexing to try to stop this inauthentic activity. This is a place where there is no delta between the societal value of trusting online activity and our imperatives as a company: we want people, when they come to Twitter, to believe what they see, to know that they are not being messed about by Russian bots or whatever. So we work very hard to get this right, and we're continuing to make improvements on a weekly basis.