First of all, we have to assume that the material is clearly hateful, extreme, and harmful. Once we have that set of facts before us, how do we then deal with it?
It's our opinion that we need a multipronged approach; a provision in the Canadian Human Rights Act cannot stand alone. Clearly, we need regulatory agencies, police, social media platforms, Internet service providers, and so on, to play a role.
The question is whether you take a reactive approach, where a complaint is filed or a charge is laid only after something happens. Section 13 of the Canadian Human Rights Act was very effective at shutting down websites. There could be some amendments around jurisdiction, perhaps providing the commission with a way to deal with things more quickly, but the issue with a complaint-based system is that it takes time.
If we are limiting freedom of expression, we have to ensure that it's very narrowly limited. The issue becomes what happens with social media. You can shut down websites, and you can fine Internet service providers, but if we were to open the Canadian Human Rights Act to complaints based on Twitter, YouTube and Facebook, I can't imagine that we would be resourced to do any other work. That is something the committee should consider.
However, there is also the option of a proactive compliance model whereby you set standards up front. I'm sure the committee has heard of examples in Europe where that has happened: Internet service providers, Facebook, YouTube and Twitter are held accountable for letting hate fester online, where it can potentially cause harm and lead to violence.