As you know, we are going to provide some input during the consultation process. Generally, though, if we're talking about hateful or prohibited content on the platform, I think a better way to think about it is prevalence: a general understanding of how much of this content is on the platform. We feel comfortable with that, and we think it appropriate to be measured against it.
As I said, about a year ago we were proactively detecting about 24% of that content. We have been working to improve that, and we're now over 60%. We know we still have more work to do, but I think prevalence is an important measurement.
In other parts of the world, I think, there have been timelines requiring content to be taken down within a certain period. A couple of things make that particularly challenging. For example, a piece of content may be up for a long time but be seen by almost nobody, whereas something else may be up for a very short time and be seen by a great many people.
We want to get at the question of reach, rather than simply whether specific pieces of content were taken down by a certain time. In other parts of the world, I think that approach has led to the unintended consequence of over-censorship of content.
We understand that certain things shouldn't be on the platform. They violate our policies and, in some cases, local law. We want to act expeditiously on that content, but we want to be measured on prevalence and on the ability of our systems to do what we say they will do, rather than against specific timelines that lead to these kinds of unintended consequences of censorship.