A big part of our focus ends up being on technology, but we also need to understand what this technology sits on top of, because societies are terrified by these huge changes we're seeing, which we can map back to the financial crisis. We're seeing huge global migration shifts, so people are worried about what that does to their communities. We're seeing the collapse of the welfare state. We're also seeing the rise of automation, so people are worried about their jobs.
You have all of that happening underneath, with technology on top of it, so what succeeds in disinformation campaigns is content that reaffirms people's worldviews or taps into those fears. The examples that you gave there are around fears.
Certainly, when we do work in places such as Nigeria, India, Sri Lanka and Myanmar, you have communities that are much newer to information literacy. If we look at WhatsApp messages in Nigeria, we see that they look like the sorts of spam emails that were circulating here in 2002, but to Tristan's point, in the last 20 years many people in western democracies have learned how to use heuristics and cues to make sense of this.
To your point, this isn't going anywhere, because it feeds into these human issues. What we do need is to put pressure on these companies and say that they should have moderators in these countries who actually speak the languages. They also need to understand what harm looks like. Facebook now says that if there's a post in Sri Lanka that is going to lead to immediate harm, to somebody walking out of their house and committing an act of violence, they will take that down. What we don't have as a society is a way to say what harm looks like over a 10-year period, or what long-term impact memes full of dog whistles actually have.
I'm currently monitoring the mid-term elections in the U.S. All of the content we see every single day and put into a database would be really difficult for Facebook to legislate around right now, because they would say, “Well, it's just misleading” and “It's what we do as humans”. What we don't know is what this will look like in 10 years' time, when the polarization we currently have is even worse and has been created by this drip-feed of content.
I'll go back to my point at the beginning and say that we have so little research on this. We need to be thinking about harm in those ways, but if we're going to start thinking about content, we need access to these platforms so we can make sense of it.
Also, as a society, we need groups that involve preachers, ethicists, lawyers, activists, researchers and policy-makers, because what we're facing is the most difficult question we've ever faced. Instead, we're asking, as Tristan says, young men in Silicon Valley to solve it, or, no offence, politicians in separate countries to solve it. The challenge is that it's too complex for any one group to solve.
What we're looking at is essentially a brains trust; it's like cracking a code. Whatever it is, we're not going to solve this quickly. We shouldn't be regulating quickly, but damage is being done.... My worry is that in 20 years' time we'll look back at these kinds of evidence proceedings and say that we were sleepwalking into a car crash. I don't think we have any sense of the long-term harm.