More broadly on the content moderation issue, there is clearly a wide spectrum of potentially harmful speech and a wide range of ways to address the different problems along that spectrum: hate speech, child pornography, and criminal activity at one extreme, and maybe just political views we don't agree with at the other. We'll engage different problems in different spaces, and that's fine.
The other important point here is that there is national context to the way we regulate speech, and that is okay. We know what the alternative default is. If we're not imposing those national guidelines, regulations, and incentives on speech, the default is a global company's interpretation of its own terms of use. Twitter has terms of use different from Facebook's, and Google/YouTube has terms of use different from the other two. We know, for example, that Twitter has a very free-speech-leaning application of its terms of use. Up until recently, almost anything was allowed. Twitter was incentivizing engagement and activity over the limiting of speech. That was a corporate decision, and it has had different consequences in different national environments.
In Canada, we have criminalized hate speech. When we did that, there was a lot of pushback from free-speech advocates in the United States, who said Canada was limiting speech too much, but we made that decision ourselves as a democracy and then built the infrastructure to apply it.
The questions for us now in Canada (which are different from the questions for Germans, for instance, who apply hate speech law differently for various historical reasons) are, first, how we are going to apply our current hate speech standards to platforms, and second, whether we are going to extend those hate speech provisions to other kinds of content that we now think impose costs on society beyond what the original provisions covered. Those are two separate questions, I think.