Great.
Thanks for having me today.
My name is Samantha Bradshaw. I'm a researcher on the computational propaganda project at the University of Oxford. I'll shorten that to Comprop.
On the Comprop project, we study how algorithms, big data and automation affect various aspects of public life. Questions around fake news, misinformation, targeted political advertisements, foreign influence operations, filter bubbles and echo chambers, all these big questions that we're struggling with right now around social media and democracy, are things we are researching in order to advance public understanding and debate.
Today I'm going to spend my 10 minutes talking through some of the relevant research that I think will help inform some of the decisions the committee would like to make in the future.
One of our big research streams has to do with monitoring elections and the kinds of information that people are sharing in the lead-up to a vote, and we tend to evaluate the spread of what we call “junk news”. This is not just fake news and not just information that is false or misleading; it also includes a lot of that highly polarizing content, the hate speech, the racism, the sexism, the highly partisan commentary that's masked as news. These are the kinds of junk information that we track during elections. The United States gave us one of our most dramatic examples of the spread of junk news around an election: we found about a 1:1 ratio of junk information to professionally produced news and information being shared.
What's really interesting here is that if you look at the breakdown of where this information was spreading most, it tended to be targeted to swing states and to the constituencies where 10 or 15 votes could tilt the scale of the election. This is really important because content doesn't just spread organically; it can also be very targeted, and there can be organized campaigns around influencing the voters whose votes can turn an election.
The second piece of research that I'd like to highlight for everyone here today has to do with our work on what we call “cyber troops”. These are the organized public opinion manipulation campaigns. These are the people who work for government agencies, political parties or private entities. They have a salary, benefits. They sit in an air-conditioned room, and it's part of their job to work on these influence operations. Every year for the last two years we've done a big global inventory to start estimating some of the capacities of various governments and political party actors in carrying out these manipulation campaigns on social media.
There are a few interesting findings here. I'm not going to talk about all of them, for the sake of time, but I'd like to highlight what we're seeing in democracies and what some of the key threats are. For democracies, it tends to be the political parties who are using these technologies, such as political bots, to amplify certain messages over others and, in some of the cases we've seen, even to spread misinformation themselves. They tend to be the ones who use these organized manipulation tactics within their own population.
We also tend to see democracies using these techniques as part of more military psychological or influence operations. For the most part, though, it's the political parties who tend to focus domestically. We also see a lot of private actors being involved in these sorts of campaigns around elections. Where a lot of the techniques around social media manipulation were developed in more military settings for information warfare back in 2009 or 2010, now it tends to be private companies or firms that are offering these as services. Cambridge Analytica is the biggest example, but there are so many different companies out there working with politicians or with governments to shape public discussions online in ways that we might not consider healthy for democracy and for democratic debate.
I guess the big challenge for me when I'm looking at these problems is that a lot of the data that goes into this targeting is no longer held by the government, by Statistics Canada, which has the best information about Canadian public life. Instead it's being held by private companies such as Facebook or Google that collect personal information and then use it to target voters around elections.
In the past, it was all about targeting us commercially to sell us shampoo or other kinds of products. We knew it was happening and we were somewhat okay with it, but now, when it comes to politics, to selling us political ideologies and selling us world leaders, I think we need to take a step back and critically ask to what extent we should be targeted as voters.
I know that a lot of the laws right now are around transparency and explaining why we're seeing certain messages, but I would take that a step further and ask whether I should even be allowed to be targeted because I'm a liberal, or on an even more micro scale than that.
I know one of my colleagues earlier talked about targeting people because they have been identified as being racist. When targeting reaches those much deeper levels of who we are as individuals, the levels that really get to the core of our identity, I think we need to have a serious debate about it within society.
In terms of some of the future threats we're seeing around social media manipulation, disinformation and targeted advertisements, there are big questions around deep fakes, and around artificial intelligence making political bots a lot more conversational, so that the bot behind the account seems human and more genuine. That might make it harder for citizens and also for the platforms to detect these fake accounts that are spreading disinformation around election periods. That's one of the future threats on the horizon.
Professor Dubois talked about messaging platforms, things like WhatsApp and Telegram. A lot of these channels are incredibly hard to study because they are encrypted. Of course, encryption is incredibly important, and there's a lot of value in having these kinds of communication platforms, but the way they are affecting democracy by spreading junk information raises serious questions that we need to tackle, especially when you look at places like India or Sri Lanka, where this misinformation is actually leading to deaths.
The third point on the horizon is regulation. I think there is a real risk of over-regulation in this area. In Europe, for example, with Germany's NetzDG law, I applaud them for trying to take some of the first steps toward making this situation better by placing fines on platforms, but there have been a lot of, I guess, unintended consequences to that law.
To use a good example, as soon as that law was put into place, someone from the alt-right party made some horribly racist comments online, and those comments got taken down, which is good. But what also got taken down was all the political satire and all the people calling those comments out as racist. You lose a lot of that really important democratic deliberation if you force social media companies to take on the burden of making all of those really hard decisions about content.
I do think one of the threats and challenges in the future is over-regulation. Governments need to find a way to create smart regulations that get to the root of the problem instead of just addressing some of the symptoms, such as the bad content itself.
I will end my comments there. I look forward to your questions.