Thank you very much, Chair.
Thank you to the committee for this invitation to speak to Bill C-63.
I'm grateful to be here on behalf of the International Civil Liberties Monitoring Group, a coalition of 44 Canadian civil society organizations that work to defend civil liberties in the context of national security and anti-terrorism measures.
The provisions of this bill, particularly in regard to part 1 of the online harms act, are vastly improved over the government's original 2021 proposal, and we believe that it will respond to urgent and important issues. However, there are still areas of serious concern that must be addressed, especially regarding undue restrictions on free expression and infringement on privacy.
These include, in part 1 of the act: first, the overly broad definition of the harm of “content that incites violent extremism or terrorism”, which will lead to overmoderation and censorship. Further, given the inclusion of the online harm of “content that incites violence”, it is redundant and unnecessary.
Second, the definition of “content that incites violence” is itself overly broad and will lead to content advocating protest being made inaccessible on social media platforms.
Third, the act fails to prevent platforms from proactively monitoring, essentially surveilling, all content uploaded to their sites.
Fourth, a lack of clarity in the definition of what is considered “a regulated service” could lead to platforms being required to break encryption tools that provide privacy and security online.
Fifth, proposed requirements for platforms to retain certain kinds of data could lead to the unwarranted collection and retention of the private information of social media users.
Finally, sixth, there has been little consideration of how this law will inhibit the access of Canadians and people in Canada to content shared by people in other countries.
Briefly, on part 2 of the act, this section amends Canada's existing hate-crime offences and creates a new stand-alone hate crime offence, and it is only tangentially related to part 1. It has raised serious concerns among human rights and civil liberties advocates in regard to the breadth of the offences and the associated penalties. We've called for parts 2 and 3 to be split from part 1 in order to be considered separately, and we're very pleased to see the government's announcement yesterday that it intends to do just that.
I'd be happy to speak to any of these issues during questions, and I've submitted a more detailed brief to the committee with specific amendments on these issues. However, I'd like to try to focus, in the time I have, on the first two points that I've made, regarding “content that incites violent extremism or terrorism” as well as the definition of “content that incites violence”.
The harm of “content that incites violent extremism or terrorism” is problematic for three reasons and should be removed from the act. First, it is redundant and unnecessary. The definitions of “content that incites violent extremism or terrorism” and “content that incites violence” are nearly identical, the major difference being that the first includes a motivating factor for the violence it is attempting to prevent. These two forms of harm are also treated the same throughout the online harms act, including requirements for platforms to retain information related to these harms for a year to aid in possible investigations.
Moreover, and maybe most importantly, incitement to violence alone would clearly capture any incitement to violence that arises from terrorist or extremist content. Further definition of what motivates the incitement to violence is unnecessary.
Second, if included, incitement to terrorism will result in the unjustified censorship of user content. “Terrorism”, and with it “extremism”, are subjective terms based on interpretation of the motivations for a given act. The same opinion expressed in one context may be viewed as support for terrorism and therefore violent, while, in another, it may be viewed as legitimate and legally protected political speech.
Acts of dissent become stigmatized and criminalized not because of the acts themselves but because of the alleged motivation behind the acts. As we have seen, this leads to unacceptable incidents of racial, religious and political profiling in pursuit of fighting terrorism.
Studies have also extensively documented how social media platforms already overmoderate content that expresses dissenting views under the auspices of removing “terrorist content”. The result is that, by including terrorism as a motivating factor for posts that incite violence, the act will be biased against language that is not, in fact, urging violence but is seen as doing so because of personal or societal views of what is considered terrorism or extremism.
I note also that “extremism” is not defined in Canadian law. This ties into the third key part that we're concerned about, and that's that parts of the language used in this definition are undefined in Canadian law or the Criminal Code. This contradicts the government's main justification for all seven harms—that they align with the Criminal Code and do not expand existing offences.