I think it is very important, because as we address different forms of harm, we need to look at modelling different approaches. That's why, in our comments, we're not proposing changes in terms of addressing child sexual abuse material or other things, but focusing specifically on national security and anti-terrorism concerns.
That said, in terms of algorithmic transparency, we think it would be important, overall, to mandate that these platforms be open about how their algorithms are developed and what kind of information is being fed into them.
As we've argued elsewhere regarding the current Artificial Intelligence and Data Act, there need to be third-party assessments to ensure that these algorithms are doing their job, not only in ensuring that they're efficient at what they're being asked to do but also in ensuring that there aren't negative repercussions. We know that already, with the use of artificial intelligence and algorithms, there have been documented cases of bias around age, gender and race, so it's important that there be openness, and that's something that's missing from Bill C-63.