I don't know that that's exactly how it is; I think we have different ways of doing this. To get to your point, we use automated systems built on some of the machine-learning technology that was actually pioneered in Canada. We have an AI lab in Montreal.
A lot of the groundbreaking work they're doing there is to automate these things, so there is some content we can remove very quickly: for example, terrorist content, child nudity and child exploitation. I can tell you that these systems proactively find and remove over 99% of that kind of content that people try to put on Facebook.
The second door, which is the one we're talking about here, is where context and nuance are important. Where context matters, we have humans look at it. We don't want an automated system to remove something and deny someone's speech without understanding the context. There, we do rely on humans, and there, I agree with you that it takes some time, but I think we are generally pretty fast at it. We can always improve, and we are certainly working on it, but again, the statistics are upwards of 99% proactive removal before any human sees it.
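The two-tier approach described in this testimony, automated removal for clear-cut categories and human review where context matters, could be sketched roughly as follows. This is a minimal illustration only; every name, category, and threshold here is an assumption for the sketch, not a description of Facebook's actual system:

```python
# Illustrative sketch of a two-tier moderation pipeline.
# All names, categories, and thresholds are hypothetical.

CLEAR_CUT = {"terrorist_content", "child_exploitation"}  # removable without context
AUTO_REMOVE_THRESHOLD = 0.99  # assumed confidence cutoff for automated action

def classify(post):
    """Stand-in for a real ML classifier: returns (label, confidence)."""
    for category in CLEAR_CUT:
        if category in post.get("tags", []):
            return category, 1.0
    if post.get("flagged"):
        return "possible_violation", 0.5
    return None, 0.0

def triage(post):
    """Route a post: automated removal, human review, or leave it up."""
    label, confidence = classify(post)
    if label in CLEAR_CUT and confidence >= AUTO_REMOVE_THRESHOLD:
        return "remove"        # tier 1: clear-cut, removed proactively
    if label is not None:
        return "human_review"  # tier 2: context and nuance needed
    return "keep"
```

The point of the structure is the routing, not the toy classifier: high-confidence matches in narrowly defined categories are actioned automatically, while anything ambiguous is queued for a human rather than removed outright.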