That's a good question.
Most large online platforms use automated systems to do content moderation, and those systems are imperfect. Right now, for example, you're seeing legitimate pro-Palestinian expression being caught up in filters targeting Hamas content. At the scale these platforms operate, though, automation is often necessary.
We think, though, that a future online safety bill, or possibly the AI act, could create additional recourse for users to challenge these systems. The EU's Digital Services Act, for example, gives users the right to an explanation of why their content was taken down and the ability to appeal the decision. That's something we don't have here in Canada.
Those kinds of content moderation systems are getting better over time, and AI and large language models will undoubtedly make them more effective. But at the end of the day, I think having a human in the loop as a recourse for the grey-area cases is absolutely necessary.