Context matters in the actions we take. We measure a number of signals before we act. Let me break that into two pieces: the first is reporting, and the second is review.
We publicly acknowledge that there is too much burden on victims to report abuse to Twitter, and that is why we are trying to do a better job. We now have a dedicated hateful conduct workflow so that those issues can be raised for review faster. As I mentioned, we're working with proprietary technology. We realize we have to do a better job on abuse reporting.
In reviewing accounts or tweets flagged for action, it's extremely important for us to get it right. We receive a number of behavioural signals: if I tweet something and you mute me, block me and report me, clearly something is wrong with the quality of that content. We also look at our rules and at the laws of the jurisdiction the tweet came from.
The other part of context is that very different conversations happen on Twitter. The example we often use is gaming. It is perfectly acceptable in the gaming community to say something like, “I'm coming to kill you tonight, be prepared”, so we want to make sure we have that context as well.
These are the things we consider when we make that review.