Mr. Chair, members of the committee, I would like to thank you for inviting me today to discuss artificial intelligence and social media regulation in Canada.
I begin with an oft-quoted observation: “For every complex problem, there is a solution that is clear, simple and wrong.”
Canada is not the first country to consider how best to keep the Internet safe. In 2019, for instance, the French Parliament adopted the Avia law, legislation very similar to the online harms bill that the Canadian government considered last year. The law required social media platforms to remove “clearly illegal content”, including hate speech. Under threat of significant monetary penalties, service providers had to take down hate speech within 24 hours of notification. Remarkably, France's constitutional court struck the law down, holding that it unduly burdened free expression.
Keep in mind that France's hate speech laws are far stricter than Canada's. Why, then, did this seemingly minor extension of existing hate speech law to the online sphere cross the constitutional line? The answer lies in what human rights scholars call “collateral censorship”: when a social media company is punished for its users' speech, the platform over-censors. Where there is even a small possibility that speech is unlawful, the intermediary errs on the side of caution and removes it, because the cost of failing to take down unlawful content is too high. France's constitutional court was unwilling to accept the law's restrictive impact on legal expression.
The risk of collateral censorship depends on how difficult it is for a platform to distinguish legal from illegal content, and some categories of illegal content are easier to identify than others. Because of the sheer volume of posts, most content moderation is done by artificial intelligence systems. Identifying child pornography is relatively easy for such a system, since known images can be matched against databases of previously identified material; identifying hate speech, which turns on context and intent, is not.
Consider that over 500 million tweets are posted on Twitter every day. Many seemingly hateful tweets are actually counter-speech, news reporting or art. Artificial intelligence systems cannot tell these categories apart, and human reviewers cannot accurately make these assessments in mere seconds either. Because Facebook instructs its moderators to err on the side of removal, the counterintuitive result is that the online speech of marginalized groups may be censored by good-faith efforts to protect them. That is why so many marginalized communities objected to the proposed online harms legislation unveiled last year.
Let me share an example from my time working at the Oversight Board, which serves as Facebook's content moderation supreme court. In August 2021, following the tragic discovery of unmarked graves in Kamloops, British Columbia, a Facebook user posted a picture of an artwork titled “Kill the Indian, Save the Man”, along with an associated description. Without any user complaints, two of Facebook's automated systems flagged the content as potentially violating Facebook's hate speech policies. A human reviewer in the Asia-Pacific region then determined that the content was prohibited and removed it. The user appealed, and a second human reviewer reached the same conclusion as the first.
On paper, this process looks like a success, but it is not. The post was made by a member of Canada's Indigenous community, and its text stated that the user's sole purpose was to raise awareness of one of the darkest periods in Canadian history. This was not hate speech; it was counter-speech. Facebook got it wrong four times.
You should not, however, set policy by anecdote. Indeed, the risk of collateral censorship does not necessarily preclude regulation under the Charter. To determine whether limits on free expression are reasonable, the appropriate question to ask is this: for each category of harmful content, such as child pornography, hate speech or terrorist material, how often do these platforms make moderation errors?
Most human rights scholars believe that collateral censorship is a very significant problem, but we cannot answer that question with confidence, because social media platforms refuse to share their data. The path forward is therefore a focus on transparency and due process, not outcomes: independent audits, accuracy statistics, and a right to meaningful review and appeal for both users and complainants.
This is the path that the European Union is now taking and the path that the Canadian government should take as well.
Thank you.