To begin with the process itself, as I mentioned, especially in the context of hate content, we are dealing with such a quantity that we rely on our machine learning and image classifiers to recognize content. If the content has been recognized before and we have a digital hash of it, we automatically take it down. If it needs to be reviewed, it is sent to this team of reviewers.
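To make that routing concrete, here is a minimal sketch of the triage logic described above. It is illustrative only: the names (`digital_hash`, `triage`, `REVIEW_QUEUE`), the classifier threshold, and the use of an exact SHA-256 hash are assumptions for the example, not details drawn from the testimony; a production system would typically rely on perceptual hashing and far more elaborate classifiers.

```python
# Hypothetical sketch of hash-based triage: previously recognized content is
# removed automatically, while classifier-flagged content goes to human review.
import hashlib

KNOWN_HASHES = set()   # hashes of content already identified as violating
REVIEW_QUEUE = []      # items awaiting the reviewer team


def digital_hash(content: bytes) -> str:
    """Compute a digital fingerprint of the content (SHA-256 here for simplicity)."""
    return hashlib.sha256(content).hexdigest()


def triage(content: bytes, classifier_score: float, threshold: float = 0.8) -> str:
    """Route a piece of content based on hash match and classifier confidence."""
    h = digital_hash(content)
    if h in KNOWN_HASHES:
        return "removed"                   # known content is taken down automatically
    if classifier_score >= threshold:
        REVIEW_QUEUE.append((h, content))  # new or uncertain content is sent to reviewers
        return "queued_for_review"
    return "no_action"
```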
They are intensively trained. They are provided with local support, as well as support from our global teams, to make sure they are able to deal with the content they're looking at and that they have the supports they need. That is so that, as they look at what can be horrific content day after day, they are in a work environment and a social environment where they don't face the same sorts of pressures that you're describing. We are very conscious that they have a very difficult job, not just because they're trying to balance rights versus freedom of expression versus what society expects to find online, but also because they are reviewing material that others do not want to review.
For us, whether they're based in one office or another around the world, we are focused on giving them training and support so they can do their job effectively and have work-life balance.