On one, algorithms and recommender systems are the functions that rank and organize content on social media platforms, presenting it in users' feeds based on how likely each individual is to engage and interact with it. That sounds innocent, but CCDH research has shown a strong relationship between these algorithms and the promotion of hateful content: because their design prioritizes attention and engagement, incendiary content like identity-based hate is privileged [Technical difficulty—Editor] being broadcast to more people [Technical difficulty—Editor] than content about [Technical difficulty—Editor].
On two, bad actors have learned to operate alongside these systems. In “Hate Pays”, CCDH shows that social media accounts used the Israel-Gaza conflict to grow and profit [Technical difficulty—Editor] engaging hate content, turbocharging their follower growth, visibility and revenues.
Specifically, we found that accounts that began posting hateful anti-Semitic or Islamophobic content in the aftermath of the attacks on October 7 grew four times faster, on average, than before the attacks. This quantified how bad actors are able to exploit conflict to grow their following, disseminate hateful messages and potentially profit from that hate.
On three, the irony is, of course, that all platforms have rules against hateful content, but again and again CCDH has shown how they fail to act on Islamophobia when it is reported to them. In our 2022 report, “Failure to Protect”, CCDH showed that Facebook, Instagram, TikTok, Twitter and YouTube failed to act on 89% of posts containing anti-Muslim hatred and Islamophobic content reported to them.
Our researchers used the platforms' own reporting tools to flag 530 posts containing disturbing, bigoted and dehumanizing content that targeted Muslim people through racist caricatures, conspiracies and false claims. Those posts were viewed 25 million times. They included hashtags such as #deathtoislam, #islamiscancer and #raghead, and content spread using these hashtags received at least 1.3 million impressions. Yet 89% of the time, even when told about the content, the platforms did nothing.
Finally, on four, online hate has off-line consequences. Social media companies have failed to act on the matters identified by CCDH, and these systemic failures have now been recognized as a factor in hate-motivated attacks around the world, from Christchurch to Pittsburgh. These overt acts of hate in the off-line world make social media's failings concrete and highlight the significant stakes.
Toxic communication is not simply an unavoidable occurrence in the digital town square; rather, it is a product of the social media business model and the financial incentives that model creates, with profound off-line consequences.
To conclude, CCDH supports the standing committee in undertaking this inquiry and believes that any solution to the blight of anti-Muslim and anti-Jewish hate in Canada must address social media platforms' role in amplifying and distributing identity-based hate.