As far as I can tell, none of the legislation that has tried to address online harms has made a difference to the people who are victimized by those harms. Platforms may point and say they did this and they did that, but I dare say that if you ask the people who use these platforms, they will not perceive much of a difference in their safety or in their experience of these platforms.
Of course, we run into opposition to doing anything about online harms, so I think we should be moving forward with a different model. I don't think we should have a complicated model that looks at censoring or taking down individual pieces of content. I think that we should have an ombudsperson model.
The basic idea is that you have an ombudsperson who is a well-resourced regulator with investigatory powers, so they can kick down the door of Facebook and take its hard drives. I'm being a little hyperbolic here, but we know that these platforms hide data from us and lie to journalists, so we do need broad powers to investigate them.
I believe that this ombudsperson should be able to issue recommendations to the platforms about their algorithms and things like that. That would be very similar to what their own employees often want to do behind the scenes: if they learn that something drives polarization and negative engagement and is leading to hate speech, they might suggest doing something else instead, or putting a stopgap measure in place.
If we had an ombudsperson who could look at what is happening under the hood and make recommendations to the platforms, that's the direction we want to go. Where the platforms do not take those recommendations, we feel the ombudsperson should be able to apply to a court. The court can weigh what the ombudsperson is recommending against the charter implications. If the court decides that the measure is sound and consistent with the charter, it can make it an order. Then, if the platforms don't follow the order, they could face a significant fine.
This is a much more flexible way to move forward, because it means that the arguments we might have about free speech versus hate speech, and so on, are taken out of the hands of government and instead play out before a judge, with intervenors participating. It's flexible: we can put this model in place now and defer some of those arguments to a court, where they belong.