Yes, thanks so much for that opportunity.
What I think is critical, and this builds on what Matt was just talking about, is that there is always a risk of overcorrection if the focus is purely on harms. That's why it's important to recognize that one of the key harms can be to freedom of expression, and to privacy in particular. For that reason, companies should be filing digital safety plans that explain how they make decisions bespoke to their services, decisions that address the full scope of harms while also being as protective as possible of privacy and freedom of expression. The digital safety commission would have a duty to consider that in what it does, but the obligation needs to also be on the company.
Concerning the child protection measures, I think the best interests of the child are protected under international law, and that is the blueprint here. It is incredibly important to detail specifically what it is about child protection that we're looking for when we talk about safety by design.
Of course, there's algorithmic accountability. I can discuss this further with you, but I'm conscious of time.