But I guess my point is that it wouldn't be just about disclosing the factors. When we had Richard Allan, the VP for global policy, at our international committee in London last fall, he said, you know, if speech crosses the threshold for hate, obviously we should take it down, but if it's right up against that line, maybe we shouldn't encourage and promote it. And I'm sitting there thinking, yes, obviously you shouldn't promote that kind of content, but that's what the news feed algorithm does: it promotes reactions, regardless of what those reactions are. Even if they're negative reactions, they're looking for eyeballs. They're not looking for much beyond that when they want to generate profit.
If there is an algorithmic impact assessment and we are setting the rules of what that assessment should entail, I agree with you that there's an element of transparency and disclosure. It shouldn't just be about the inputs, necessarily. A company should also have to come to terms with what the potential adverse effects are, I think, and have to put that in such an assessment. They have to turn their minds to that.
Do you think that is a useful additional layer of accountability and transparency?