Thank you, Mr. Chair, and thank you to the committee for the invitation to appear before you today.
It is frankly disturbing that we live in a world where online hate is rising, where what Whitney Phillips has called “the oxygen of amplification” has elevated extremist views, and where in several cases online hate speech has directly led to offline violence. I therefore very much welcome the committee's careful consideration of how Canada can address these troubling developments.
I've personally examined European and North American approaches to hate speech, extremism and disinformation. Today I will briefly outline, first, some of the approaches other democracies are taking, which I discuss in more detail in the brief I've submitted; second, how the German example in particular raises questions for the possible reintroduction of section 13; and finally, some measures that could be taken to address a broader category of harmful speech, which is a non-legal category but one through which I can try to address some of the broader questions that have been raised.
Let me first state the very sobering fact that hate speech is not a problem that can be solved. It will be a continual, evolving and ongoing threat. Still, levels of hate speech can ebb and flow, depending on the architecture of online ecosystems and the type of speech they promote, as well as on broader political, economic and cultural factors. These factors can facilitate more hate speech and hate-related crime, but they can also do the reverse.
First, this is an international problem, as I've mentioned. Democracies around the world are trying to find ways to address it. Let me name a few examples that we can discuss further during questions.
The U.K. has proposed regulating through a “duty of care” framework that would require social media companies to design their services in ways that prevent online harms. France has proposed regulation that would mandate transparency and “accountability by design” from the social media companies. Finally, Germany has taken a legal approach, creating a law that requires social media companies with more than two million users in Germany to enforce 22 existing provisions of German speech law.
There's a range of approaches, from the legal to the co-regulatory to the self-regulatory, including codes of conduct.
In the case of what we're discussing today, the German Netzwerkdurchsetzungsgesetz, or NetzDG, is particularly instructive. Passed in 2017 and in force since 2018, its name is a German mouthful that literally translates as “network enforcement law”. As that name suggests, it doesn't introduce new speech offences. Rather, it requires social media companies to enforce law that already exists and to actually act on complaints within 24 hours, or face fines of up to 50 million euros.
Let me then talk about some of the considerations this has raised. First, this was not about introducing new law but about enforcing existing law, and it has been a major problem in the German case to get Facebook and company to comply. Second, it raises the question of how we get social media companies to actually comply with and enforce existing law. It also raises the question of scale: to give you a sense, YouTube and Twitter were receiving more than 200,000 complaints in a six-month period, so there are questions about enforceability at that scale and about potential backlogs. Finally, there's the question of whether content would be removed nationally or globally. We've seen that most of what falls under the law is actually being taken down under the companies' global terms of service.
The law also deals only with individual pieces of content, so it doesn't address the other ways in which hate can be propagated or funded online through broader ecosystems. Let me give a Canadian example here.
Very recently, a member of the Canadian far right tried to use the GoFundMe platform to raise money for an appeal against a libel suit he had lost for defaming a Muslim Canadian. Ontario Superior Court Justice Jane Ferguson had called the man's words “hate speech at its worst”, yet only after complaints from a journalist and members of the public did GoFundMe actually take down his appeal for funds, even though it violated the platform's terms of service. This is just one illustration of how the problem is broader than individual pieces of content.
Finally, let me talk about ways we might address a broader category of harmful speech. This is a non-legal category, but it covers speech that may undermine free, full and fair democratic discourse online. I've written a report with Chris Tenove and Fenwick McKelvey, two fellow academics, about how we can address this problem of harmful speech without infringing on our democratic right to free expression. Let me give three suggestions.
First, we have suggested the creation of a social media council. This would mandate regular meetings between social media companies and civil society, particularly the marginalized groups that are disproportionately affected by hate and harmful speech online. The council could be explicitly grounded in a human rights framework, an idea supported by, among others, the UN special rapporteur on freedom of opinion and expression. By linking to international human rights, this would also ensure that Canada doesn't inadvertently provide justifications for illiberal regimes to censor speech in ways that could deny basic human rights elsewhere in the world.
Second, we should seriously consider what kinds of transparency we might mandate from social media and other online companies. There is so much we don't know about how their algorithms work and whether they promote bias in various ways. We should consider requiring audits and transparency from the companies, along the lines of algorithmic impact assessments, to understand whether their algorithms are themselves facilitating discrimination or promoting hate speech.
Third, we need to remember that civil society is an important part of this question. This is not something to be addressed solely by governments and platforms. We often see that platforms take down certain types of content only after it has been flagged by civil society organizations or journalists. We need to support the civil society organizations and journalists who are working on this issue and who are supporting those deeply affected by hate and harmful speech.
Finally, we also need to support research on the positive side of this question, that is to say, how we encourage more constructive engagement online.
As you can see from this short testimony, there is much to be done on all sides.
Thank you for inviting me to be part of this conversation.