This is obviously something that people are thinking about at this moment. Should we say there is a liability on the big tech platforms to act against known sources of fake news and disinformation, particularly during election periods?
We know that Facebook track their users' activity, not just on the site but on other sites too. I think they have the capability to identify people who are potential sources of fake news and disinformation, and to bar them from the site or to disrupt what they are doing. I think that would be an important step for us to take.
In France, they are discussing having a judge available 24/7 during an election whom you can go to for a ruling on whether something is fake news and whether it should be taken down. We could, of course, go the German route, which they use in particular for hate speech, where there could be heavy fines for organizations involved in the dissemination of disinformation.
I think this is going to be an increasingly important debate. In western countries, I think we have been late to the party on this. If you talk to people in eastern European countries, the Baltic states and Ukraine, this is an issue they have been dealing with for many years, certainly in terms of Russian interference in their politics through disinformation.
We know that with new technologies and the power of augmented reality to create videos of you giving a speech that you have never given, in a place you have never been to, people are going to need trusted news sources. Also, we're going to need to do more, I think, to make clear to people which sources are trusted, the ones that don't have a reputation for spreading disinformation, and to identify and call out those that do.