Good morning. Thank you very much for having me here today. I appreciate the invitation.
My name is Brad Galloway. I work as a research and intervention specialist with the Organization for the Prevention of Violence, which is located in Edmonton, Alberta. My main goals there are to take part in emerging research, specifically on the far-right extremist movement in Canada and, more recently, on the online dynamics of far-right extremism.
I often weave in my own lived experience with the far right, as I spent 13 years within that movement in Canada, mostly, at the beginning, in the offline context. However, I also spent about 10 years operating in the online context, so I know a lot about this online activity from an insider's perspective. I have drawn on those experiences in taking part in recent academic research.
I'm also working with Life after Hate, which is another group that is similar to the Organization for the Prevention of Violence. We're looking at doing interventions and helping other people leave extremist movements. Some of those initiatives will definitely include looking at ways to build on online intervention strategies to intervene with people, and also providing resources for people who want to leave these types of movements.
It is my belief that communities are formed on shared ideas, experiences and cultures. In order to distinguish and define themselves, groups compare themselves to others in positive and negative ways. It is in the latter that problems might arise.
A healthy, culturally diverse society is one that respects, accords dignity to and even celebrates the differences between cultures and communities. However, when groups start to distinguish and compare themselves in a negative manner to other groups on grounds such as race, religion, culture, ethnicity and so on, there is a potential for destructive and abiding conflicts. This leads to an us-versus-them mentality.
It is in this sense that hate and extremism are interrelated phenomena that exist along a continuum of behaviours and beliefs that are grounded in this us-versus-them mindset. The perpetuation of associated rhetoric can create an environment where discrimination, harassment and violence are viewed by individuals as not only a reasonable response or reaction but also as a necessary one. When this is left unchecked, deepening, sometimes violent divides within society can undermine who we all are as Canadians and fray the social fabric of the country.
For the last 30 years, technology—first telephones and later the Internet—has played a crucial role in the growth of the white supremacist movement throughout Canada. Early versions of hate speech online in the 1990s and 2000s were being distributed through automated calls and websites. For example, the Heritage Front, a white supremacist group, had automated computerized calls spouting racist information. Other examples included the Freedom-Site network and the white civil rights hotline.
Beginning in 1996, we then saw the emergence of online discussion forums such as Stormfront, which notably was one of the first white supremacy websites and is still very active today. Stormfront was the first of this series of online far-right platforms and was used to communicate and organize.
Today we see more activity on social media sites, such as Facebook, Twitter and Gab, though most of these conventional forums still exist and are often used in conjunction with the new platforms, including apps. Content removal and regulation are often suggested as ways to mitigate such sites and platforms. I would say that both have their upsides, but both face many challenges, legal and ethical alike.
More with regard to the present, extremist groups and individual influencers promote social polarization and hate through available technology and are highly adaptive to pressing demands by law enforcement, governments and private social media companies.
Further, online hate speech is highly mobile. I would argue that these hate groups, and organized hate groups specifically, are using this mobility to further their transnational hate movements. Even when this content is removed, it finds expression elsewhere. Individual influencers are adaptive at finding new spaces.
If content is removed, it often re-emerges on another platform or under the name of a different user. Often the rhetoric and the networks move from established networks, where counter-speech can occur and where journalists and law enforcement are able to easily track their activity, onto platforms where detection is more challenging and where what are often termed “counter-narratives” are harder to deploy.
There are a multitude of examples, both domestically and internationally, of individuals who are promoting hate being kicked off one major platform—for instance, Facebook—only to move to either another major platform such as Twitter, or any host of smaller platforms, such as Gab or Telegram. Today’s online space is a more dynamic, immersive and interactive multiplatform online space than has ever previously existed, when there were only a few forums or a few telephone lines.
Influencers and propagators of hate distribute through multiple interlinked platforms. This new dynamic has demonstrably been able to mobilize hate-based activism and extremism, especially for lone-actor violent extremists such as those who perpetrated the Tree of Life synagogue and Quebec City mosque attacks. The individuals who carried out these attacks did not necessarily engage directly with ideological influencers or a networked group, but they were mobilized by the hate they felt and the sense of crisis they saw stemming from an opposing group.
What is the solution? I don't think there's any golden ticket solution. However, we believe that ultimately the first step in preventing and countering the propagation of hate speech and extremism is awareness, beginning with a better understanding of the nature of hate crimes and hate incidents online and offline. We need better data on who is most targeted by hate, on the intersectional dimensions of that targeting—for example, a Black Muslim woman who wears the hijab—and on where these incidents take place. We need data on whether certain public spaces, like public transit, or certain online platforms, such as Facebook or Twitter, are more conducive to hate speech and harassment.
In order to do this, there needs to be more incentive for victims of hate crimes to come forward. Often there is stigmatization, fear and skepticism around reporting a hate incident to the police. These issues need to be constructively challenged and mitigated through a multisectoral approach.
A recent example I found is proposed bill SB 577 in the state of Oregon, a state that is also dealing with a rapid increase in hate crimes. The bill would require law enforcement agencies to refer alleged hate crime victims to a new state hotline staffed by the Oregon Department of Justice, which connects callers to local mental health services, advocacy groups and other useful resources for crime victims. This allows victims to be in a safe, understanding environment while moving forward with a multitude of resources to address their hate experiences. It provides victims with more support and could increase reporting.
Online parallels are easy to imagine. Already some American non-profits are creating online resource hubs for people who have been doxed and had their personal information exposed. These resources could be repurposed and redeployed to address the issue we’re talking about today.
Many witnesses have likely discussed the legal challenges associated with changes to legislation. With the time I have left, I would instead like to touch on some efforts further upstream of hate speech that do not require legislative change.