These are all very good questions. They're not easy questions by any stretch.
One of the most disturbing things we found in this round of work—the Institute for Strategic Dialogue is doing much of our online analysis—is that in two successive years, Canadian posters were among the most active within the far-right ecosystem, if you will.
Just quantitatively, that's problematic. We tend to think we are immune to those kinds of narratives, but there you are. In the first round in particular (that would have been the 2019 report we did with ISD), we actually found that Canadians were, in fact, the second- and third-most active posters on two of the most extreme platforms, Fascist Forge and Iron March. These are the ones that are most likely to promote violence, and mass violence in particular.
Again, that is the problem quantitatively, but it's also a problem qualitatively, given the breadth and the viciousness of the speech directed towards particular individuals or communities, whether it's emails or posts aimed at an individual or content that vilifies particular groups. It's rampant online, obviously.
I think we have to consider the impacts of this on a sense of community, a sense of belonging and a sense of security, as well. It is something that absolutely silences communities. It makes them less willing to engage online, which has become the way we communicate—especially now, with COVID.
How do we confront it, and how do we regulate it? It's such a challenge. We've been exploring it globally over the last five or six years, trying to constrain the most heinous sorts of speech.
When I'm talking about hate speech here, I'm talking about dangerous speech: speech that promotes violence, that explicitly promotes vilification and that directs hatred towards particular groups. Warman v. Kouba identified these elements of speech as the hallmarks of hate.
I think we need to put much more pressure on the social media giants to enforce their community standards. Most of those standards are at least as strong as our own federal definitions; we need to encourage their actual use. I hear so many complaints, from the research but also from the people I work with: they are identifying speech that seems to cross those boundaries, yet there is no response to their complaints. I think we need to hold the platforms' feet to the fire.
In terms of the alternative platforms, that's where the real challenge lies, because access to the darkest spaces is more difficult for researchers, for police, for journalists and for anyone who wants to know what's happening there. These platforms are specifically set up to avoid any sort of community standards, and most of us are at a loss as to how to respond to them. Again, perhaps we put pressure on the hosting providers not to host them, as happened with Parler, I believe after the January 6 events.
I think that is a new challenge presenting itself.