It's the latter option. You modify the algorithm.
What happens in those cases is that the algorithm lacks the information it needs to be accurate, so you look at the action rate that the algorithm produces. By the way, a lot of these are effectively bots: you are implementing bots on the platform through algorithms. If I create a bot that gives me a 10% action rate, that means that, of all the content it flags to me, I take action on only 10%. That tells me the algorithm is certainly not accurate enough, but I can feed it more information.
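To make that arithmetic concrete, here is a minimal sketch of how an action rate like the 10% figure could be computed. The names (`flagged_items`, `action_taken`) are hypothetical illustrations, not Twitter's actual tooling.

```python
# Minimal sketch: the "action rate" of a flagging algorithm is the fraction
# of flagged items on which a human reviewer actually took action.
# All field names here are hypothetical, not Twitter's real tooling.

def action_rate(flagged_items: list[dict]) -> float:
    """Return the fraction of flagged items that were actioned."""
    if not flagged_items:
        return 0.0
    actioned = sum(1 for item in flagged_items if item["action_taken"])
    return actioned / len(flagged_items)

# 100 flagged items, action taken on only 10 of them.
flagged = [{"action_taken": i < 10} for i in range(100)]
print(f"action rate: {action_rate(flagged):.0%}")  # -> action rate: 10%
```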
I referred before to patterns of behaviour. I could say, “Only flag accounts to me that were created within this time span, from this IP address, trying to use this hashtag, trying to tweet at these people.” The more information you give the algorithm, the more accurate it is. We have found that for certain types of abuse, spam being a great example, we have been able to eliminate most of it on the basis of very accurate algorithms. However, by no means does this happen overnight. It has taken months and years to gather the right amount of information for those algorithms to be deployed properly on the site.
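A compound rule of the kind quoted above, where each added behavioural signal narrows the flagged set and so raises the action rate, could look something like the following sketch. Every field name, threshold, and signal here is an assumption for illustration only; it is not a description of Twitter's actual systems.

```python
# Hypothetical sketch of a compound flagging rule built from the behavioural
# signals described above: account age, originating IP, hashtag use, and
# the users being tweeted at. Names and thresholds are assumptions.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class Account:
    created_at: datetime          # timezone-aware creation timestamp
    ip_address: str
    hashtags_used: set[str]
    mentioned_users: set[str]

def should_flag(acct: Account,
                suspect_ip: str,
                suspect_hashtag: str,
                targeted_users: set[str],
                max_age: timedelta = timedelta(hours=24)) -> bool:
    """Flag only accounts matching every signal; each extra signal
    shrinks the flagged set, which pushes the action rate up."""
    is_new = datetime.now(timezone.utc) - acct.created_at < max_age
    return (is_new
            and acct.ip_address == suspect_ip
            and suspect_hashtag in acct.hashtags_used
            and bool(acct.mentioned_users & targeted_users))

# Example: a two-hour-old account from the suspect IP, using the suspect
# hashtag and tweeting at one of the targeted users, gets flagged.
acct = Account(created_at=datetime.now(timezone.utc) - timedelta(hours=2),
               ip_address="203.0.113.7",
               hashtags_used={"#promo"},
               mentioned_users={"target1"})
print(should_flag(acct, "203.0.113.7", "#promo", {"target1", "target2"}))  # True
```

Requiring all signals at once is what keeps the flagged set small and precise; relaxing any one condition widens the net and, as described above, drives the action rate back down.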