Different jurisdictions have different requirements, but probably the best known is the European Union's Code of Conduct on countering illegal hate speech online. It's modest in its scope because it is transnational, but it requires social media organizations to report how many reports of alleged hate speech they have received, how many of those were analyzed and addressed within the first 24 hours, how many were addressed within 48 hours, and what percentage of the posts were ultimately taken down.
I think that is an excellent initiative. One measure of its success is that the proportion of reports investigated by, for example, Facebook within the first 24 hours has doubled over just four or five years. It shows that public exposure of their record creates a tremendous incentive for social media organizations to increase their compliance with their own rules, as well as with national anti-hate legislation.
I would say, though, that if this is a path Parliament wishes to go down in Canada, more would be better: more transparency, and more detailed information about the reports that social media firms receive and how they deal with them. The greatest weakness of the EU's approach is that it only requires social media companies to turn their minds to these reports quickly; it doesn't require them to deal with those reports effectively. In other words, if they address 50% of the reports in the first 24 hours but get all of their analyses wrong, they have still exceeded the European benchmark. An additional step, random sampling of those reports by an independent third party to assess not just how quickly they are dealt with but how effectively, would, I think, be an excellent idea.
There are privacy risks—