The way you phrased that question shows that you understand its complexity.
I echo a lot of what Kevin just said: we have similar approaches but very different platforms. I think what Twitter brings to our fight against disinformation, against efforts to manipulate the platform, and against efforts to distract people is that we look at the signals and the behaviour first, and the content second. We operate in more than 100 countries and in well over 100 languages, so we have to get smarter about how we use our machine learning and our artificial intelligence to spot trouble before it kicks up and really causes challenges.
I think there are certain areas that are more black and white than the issues you have been focused on today. Terrorism is a great example. When we started putting our anti-spam technology towards the fight against terrorism, we were taking down 25% of accounts before anybody else told us about them. Today that number is 94%. We've taken down 1.2 million accounts since the middle of 2015, when we started using those tools. We've gotten to the point now where 75% of terrorist accounts haven't been able to tweet even once before we take them down. We're not catching them through content, through somebody saying, "Go do jihad". We're catching them because they're coming in from places we've already seen, they're using email addresses or IP addresses that we know of, and they're following people who we know are bad actors.
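As a purely illustrative sketch of that signal-first approach, consider scoring a brand-new account on registration signals alone, so it can be actioned before it ever tweets. The signal names, weights, threshold, and blocklists below are hypothetical assumptions for the example, not Twitter's actual system:

```python
# Hypothetical sketch of signal-based account scoring (not Twitter's actual
# system): flag a new account from registration signals alone, before it
# has tweeted any content.

from dataclasses import dataclass, field

# Illustrative blocklists; a real system would use large, evolving datasets.
KNOWN_BAD_IPS = {"203.0.113.7", "198.51.100.23"}
KNOWN_BAD_EMAIL_DOMAINS = {"throwaway.example"}
KNOWN_BAD_ACCOUNT_IDS = {"propaganda_hub_1", "recruiter_42"}

@dataclass
class NewAccount:
    ip_address: str
    email: str
    follows: list[str] = field(default_factory=list)

def risk_score(account: NewAccount) -> float:
    """Combine behavioural signals into a score; the weights are made up."""
    score = 0.0
    if account.ip_address in KNOWN_BAD_IPS:
        score += 0.5                      # coming in from a place already seen
    if account.email.split("@")[-1] in KNOWN_BAD_EMAIL_DOMAINS:
        score += 0.3                      # known email address pattern
    bad_follows = sum(f in KNOWN_BAD_ACCOUNT_IDS for f in account.follows)
    score += min(0.4, 0.1 * bad_follows)  # following known bad actors
    return score

def should_suspend(account: NewAccount, threshold: float = 0.6) -> bool:
    # An account over the threshold can be actioned before it tweets once.
    return risk_score(account) >= threshold

if __name__ == "__main__":
    suspect = NewAccount(ip_address="203.0.113.7",
                         email="x@throwaway.example",
                         follows=["recruiter_42", "normal_user"])
    print(should_suspend(suspect))  # True: caught on signals, not content
```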
I'm using that as an example of how, when it's black and white, it's easy, or at least easier. Another example of a black and white issue is child sexual exploitation. There's no good use case on our platform for child sexual exploitation. Abuse is harder. Misinformation is a lot harder, but that doesn't mean we're stopping. We are taking a harder look at the signals that indicate an abusive interaction, such as when something isn't being liked. Those signals hold whether you're talking in English, French, or Swahili, and they capture contextual cues that we wouldn't otherwise be able to understand.
On the issue of disinformation in particular, we're doing a lot of the things that Kevin described. An important approach that we're taking in general, and one that we're very excited about, is trying to figure out how we measure these issues in such a way that our engineers can aim at them. Jack Dorsey, our CEO, announced an effort he's calling the health of the conversation on the platform. It centres on four questions. Do we agree on what the facts are, or are fake facts driving the conversation? Do we agree on what's important, or is distraction taking us away from the important issues? Are we open to alternative ideas, meaning is there receptivity or its opposite, toxicity? And are we exposed to different ideas and different perspectives? I think we're already pretty healthy on that last one on Twitter: if you say that cats are better than dogs, you're going to hear about it from your friends and from others.
We've gone out to researchers around the world and said, "Tell us how we can measure these things; tell us what data we have and what data we need." Then we can measure our policy changes and our enforcement changes against those metrics.
Right now, we measure the health of the company on very understandable things. How many people do we have? How many monthly users do we have? How much time are they spending on the platform? How many advertisers do we have, and how much are they spending? Those are important things for the bottom line and for Wall Street, but they don't measure the health of the conversation, and the conversation is why people come to Twitter: to have a conversation with the world and figure out what's happening.
If we can get those numbers right, we can measure changes and do A/B testing against them, and we think we have the best engineers anywhere. If we give them a target to aim at, we can get at these really difficult, gnarly issues that have a lot of grey between the black and white.
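As a hedged sketch of what measuring a change with A/B testing can look like in practice, one standard approach is a two-proportion z-test comparing a health metric between a control group and a group that gets the policy change. The metric, group sizes, and rates below are invented for illustration and are not Twitter's data:

```python
# Hypothetical A/B test sketch (invented numbers): compare a "healthy
# interaction" rate between a control group and a group that sees a policy
# or enforcement change, using a two-proportion z-test.

from math import sqrt
from statistics import NormalDist

def two_proportion_z(success_a: int, n_a: int, success_b: int, n_b: int):
    """Return (z, two-sided p-value) for H0: the two rates are equal."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Invented numbers: 10,000 users per arm; the treatment lifts the
# healthy-interaction rate from 41.0% to 43.5%.
z, p = two_proportion_z(success_a=4100, n_a=10_000,
                        success_b=4350, n_b=10_000)
print(f"z = {z:.2f}, p = {p:.4f}")  # small p => the change likely helped
```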