There are two ways we enforce our standards, to be honest. One is the automated system, as I think one of your colleagues mentioned, which uses artificial intelligence. Some of that technology was developed in Canada: machine learning that goes out and finds these things.
In fact, I have some statistics here. In terms of hate speech, in the last quarter of 2020, our automated systems found over 97% of the hate speech directed at groups automatically, before any human had seen or reported it. That's where we are. Now, 97% is not 100%, so we still have a way to go, but we're getting better every day. That's our posture. That's the way we do it right now.
The other piece, though, is that because context matters so much for speech, we have to be careful in some of the grey zones to be sure a post is in fact an attack on a community and not something else, for example, raising awareness about anti-Asian racism. We need humans as well, so part of that 35,000-person team I referred to consists of people who look at the context and ask: this image, video or text was shared, but was it shared to attack Asian people, or to raise awareness about discrimination and racism? That context matters in terms of whether we would enforce and take the content down.
It is really a parallel process: the two streams meet when we need more context. We have automated systems that find things automatically. We're constantly improving, but we're at about 97% proactive identification, and we need humans to verify some of the more challenging cases, where the speech is grey and we have to be sure of the context. Then the most complicated cases get escalated to people like Rachel and me, where we will look at specific pieces of content emanating from Canada, consult with experts and think through whether we're drawing the line in the right place.