Additionally, just to address another question that I think is running through this discussion, about how we go about identifying and removing content: we remove content for a number of abuse types, not just violence. In the specific case of misinformation, where some of these tropes appeared in Sri Lanka and other countries, we removed that content on the grounds that it could lead to violence. But we also have policies covering things like hate speech, where violence may not be imminent, and things like personally identifiable information and bullying, which we take very seriously. Those may not lead directly to violence, but we do enforce those policies, and we try to enforce them as swiftly as possible.
We now have 30,000 people globally working on these issues. There was a comment earlier about having people with enough context to really weigh in. For all the countries represented here, I want to note that those 30,000 people include 15,000 content moderators who speak more than 50 languages. They work 24 hours a day, seven days a week, and some of them are located in the countries represented before us today. We take that very seriously.
Additionally, we are committed to working with our partners in government, civil society and academia so that we arrive at the right answers on these issues. I think we all recognize that these are very complex issues to get right. Everyone here, I believe, shares the goal of ensuring the safety of our community, all of whom are your constituents. It's a matter of making sure that we are transparent in our discussion and that we come to a place where we can agree on the best steps forward. Thank you.