The thing is that there's a distinction between hate speech, which is captured under the Criminal Code, and what I think is a growing concern, which is harmful speech. We don't want to conflate the two. As a man who has grown up online, and having talked to my female counterparts, I think there's real concern about the amount of aggression. I think this is particularly true now for female politicians. Just think about the amount of vitriol being spewed. I think there are ways of dealing with that which are different from dealing with hate speech, both in terms of the concern and in terms of the tactics.
That's part of content moderation, and that already happens on social media platforms. Platforms are already making decisions about what content is accessible. Content producers on Instagram are already struggling with what parts of their bodies they can or cannot expose, based on that platform's content moderation.
The specific point here is about recommendation: how platforms decide what content you see. This is often described as a filter bubble, whereby they're filtering your content. I think there is less concern about the filter bubble than about the fact that YouTube optimizes for engagement and Facebook optimizes for meaningful social interactions.
It's those particular kinds of logic, recommending content, that might have some, to use Taylor Owen's words, negative externalities. We need more transparency about the consequences of those recommendations, and in particular about where there might be red lines on what content can be recommended. I think a standards council could be one way of doing that. I also think that when you get into the enforcement issue and you're trying to shut down hate speech quickly, that's another point at which there might be intervention.