I think we have a similar area of concern, which is that when complaints are made to Facebook about content on their site that is misleading or wrong or that might be harmful, the company doesn't always take it down, or doesn't do so quickly enough. That would suggest to me that they don't have the resources in place to do that effectively. That is clearly a ground for concern for the future.
I think, as well, that they should not only act more quickly on user referral but also use the tools they have to try to identify for themselves whether content is likely to be problematic or open to challenge, and then investigate it themselves and take it down. We know they can track what users do on and off the site, whom they are engaging with, and what they are doing. They do that, they say, for security reasons. I think they could use those same techniques to identify the sources of disinformation.
When they are talking about news and fake news, I think they are right about having more transparency over who has posted this information, where they are based, who they are. With political advertising, you have to do that from a page where you verify your location and your identity as part of setting up that page to place ads. I think they are looking to change some of their policies in a way that would be helpful.
But for me this is where you then come back to the question about liability. If we say, “We want you to act in this way”, and they don't do it, is there a liability in law that can be enforced against them for not doing it?