Yes. Part of that is having strong trust and safety teams with humans on those teams who complement whatever technology they're using to help screen for inappropriate content, or in the case of Canada, content that would be in violation of the proposed Online Harms Act.
There are many measures platforms can take to be more transparent about how their algorithms work and to help individuals shape those algorithms themselves, rather than simply being told, “This is the way your feed is going to work.”
Some platforms are now experimenting with approaches that put more of the ability to curate what you see in the hands of the user, rather than in the hands of an algorithm or the company itself, and we think that's definitely progress.