Good afternoon.
The Center for Countering Digital Hate is a non-profit that seeks to disrupt the monetized architecture of online hate and misinformation, which has been overwhelming the Enlightenment values of tolerance, of science and of democracy that underpin our nation's prosperity.
Our organization has been around for six years. We have around 20 staff in London and Washington, D.C. We're independent. We're not affiliated with any political party, and we don't take money from governments or from technology companies.
Our research throughout those six years has tracked the rise of online hate, including anti-Semitism. The reason we started this organization was that we were seeing the rise of virulent anti-Semitism and disinformation on the left in the United Kingdom, as well as seeing that fringe actors, from anti-vaxxers to misogynist incels to racists such as white supremacists and jihadists, were able to easily exploit digital platforms to promote their own content.
The platforms and search engines benefit commercially from this system. That is one of the central insights of CCDH: there is now an economy and an industry around hate and misinformation that is so profitable it inherently sustains and proliferates itself, and it leaves platforms with no incentive to do more than send a press release when problems are raised.
Put simply, our problems are threefold.
One is the proliferation of bad actors. These are extremists who are sharing dangerous misinformation and hate content online. They're organized and skilled in exploiting platform structures and undermining public safety and democracy.
Another problem is that platforms profit from the spread of extreme content through a system that promotes engagement over every other metric, including public good and safety. These companies do not factor public safety into the design of their products, and they do not effectively self-regulate by adequately resourcing the enforcement of their own rules.
Another is bad laws: the absence of legislation and of global coordination at a scale that would protect citizens by assessing and enforcing common standards and by sharing intelligence and metrics about the platforms.
We've published a series of reports on topics like anti-Semitism. Our most recent was on anti-Muslim hate. It showed that even when you report anti-Muslim hate to platforms using their own tools, nine times out of 10 they fail to take it down. That includes posts promoting the Great Replacement conspiracy theory, in violation of the pledges they made when they signed up to the Christchurch Call in the wake of the 2019 Christchurch mosque attacks. That conspiracy theory inspired the Christchurch attacks as well as the Tree of Life synagogue attack in Pittsburgh, Pennsylvania, in the United States.
So there are commercial hate and disinformation actors who are making a lot of money from spreading discord and peddling lies. I've used anti-Muslim hate as an example, but in the past we have found the same figures for anti-Semitism, for misogyny and for anti-Black hateful content.
Why are they failing to act? The truth is that there is a web of commercial actors, from platforms to payment processors to providers of advertising technology that is embedded in hateful content, giving the authors of that content money for every eyeball they can attract to it. The revenues run to the millions, tens of millions and hundreds of millions of dollars, and they have made some entrepreneurs in this space extremely wealthy.
For example, the leading anti-vaxxer in the United States, Joseph Mercola, claims in court testimony that he's worth $100 million. That's what this industry is worth.
The creation of this industry has involved a series of moral choices by companies to profit from hate. To back this up, these greedy, selfish and frankly lazy companies have proselytized the notion that they have a right to profit from hate without criticism, without boycotts, without regulatory action and without even justifiable moral opprobrium. It's somehow a God-given right, and any violation of it, they say, would be cancelling them, which is nonsense.
Our experience as an organization suggests that four things are missing from existing powers globally. The first is enforced safety by design. The second is the power to compel transparency around algorithms, around the enforcement of community standards and around the economics. The third is bodies that can hold companies accountable and set standards, so that we don't have a moral race to the bottom. Finally, we need the power to hold social media platforms and their executives responsible for the decisions they take.