It's an extremely good question. At this point there are quite a lot of proposals out there on what it could be, but to come straight to the point, the first thing is that transparency or explainability by itself is insufficient. Just saying we can explain what the system does is not enough. You have to have someone who is accountable in a meaningful way for the actions of these systems, and you need a governance framework around them.
Especially in the context of social media, a framework for how content is moderated also means appeal mechanisms, transparency mechanisms, and some form of external adjudication when there is a disagreement. Regulatory responses then add a further layer of complexity on top of that.
There is also a challenge of dependence: once AI-type or automated systems have been embedded within organizations, over time those organizations come to rely on them, and it becomes very difficult to move beyond them or get out of them. So you need to be quite strong on governance quite early to make sure that you're really having a strong and meaningful effect on how—