We have a couple of different models to look at. I will profile the German model and tell you where I think it went right and where it went wrong.
The Germans set a bar, I think, of a million domestic subscribers to the service, which basically meant three companies—Google, Facebook, and Twitter—and they said, "You have 24 hours to remove illegal content from the moment you get notified that it's there".
The problem with that was that it put all the burden on the companies. It gave the companies all the decision-making authority over what was and wasn't illegal, and it provided no appeals process.
The benefit they got from that was the companies' resources and technical ability to rapidly find not only the content that drew a complaint, but all content like it and all copies of it across the network, and to quickly bring it down, much as they already do for copyright violations and for other forms of fraud and illegal content. Counterterrorism works the same way.
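To make that fan-out concrete, here is a minimal illustrative sketch of the kind of matching being described: the content that drew a complaint is turned into a fingerprint, and every identical copy on the platform is located from that fingerprint. The function names and data are hypothetical, and real copyright and counterterrorism systems typically use perceptual hashes rather than exact hashes so that slightly altered copies still match.

```python
import hashlib

def content_fingerprint(data: bytes) -> str:
    # Exact-match fingerprint for a piece of content; production systems
    # generally use perceptual hashing so altered copies still match.
    return hashlib.sha256(data).hexdigest()

def find_copies(flagged_items, corpus):
    # flagged_items: raw bytes of content that drew complaints.
    # corpus: iterable of (item_id, data) pairs already on the platform.
    # Returns the ids of every exact copy of any flagged item.
    flagged_hashes = {content_fingerprint(data) for data in flagged_items}
    return [item_id for item_id, data in corpus
            if content_fingerprint(data) in flagged_hashes]

# One complaint fans out to every identical copy on the network.
flagged = [b"<reported post bytes>"]
corpus = [
    ("post-1", b"<reported post bytes>"),
    ("post-2", b"unrelated content"),
    ("post-3", b"<reported post bytes>"),
]
print(find_copies(flagged, corpus))  # ['post-1', 'post-3']
```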
In my view, the problem is the lack of regular-order judicial review. The prosecutors who would normally have brought a case like that through the usual court procedure ought to be involved in the oversight. That way, when the algorithm comes back and says these are the thousand instances of this piece of hate speech we see on the network, there is either a joint review of that content to ensure it meets a public-interest standard of free expression, legal or illegal, or it goes into an appeals process and through regular-order judicial review.