The assertion that we algorithmically prioritize hateful and false content because it increases our profits is just plain wrong. As a company, we have every commercial and moral incentive to give the maximum number of people as positive an experience as possible on the platform, and that includes advertisers. Advertisers do not want their brands associated with, or displayed next to, hateful content.
Our view is that growth in the number of people or advertisers using our platforms means nothing if our services aren't being used in ways that bring people closer together. That's why we take steps to keep people safe, even if it impacts our bottom line and reduces the time people spend on the platform. In 2018, for instance, we made a change to News Feed that significantly reduced the amount of time people spent on our platforms.
Since 2016, we've invested $13 billion in safety and security on Facebook, and we now have 40,000 people working on safety and security alone at the company.