Thank you very much for the chance to be involved in this. We've been doing a lot of research in this area, and I find it to be a very serious threat.
Over the years, information has been used as a weapon to target whoever the opponent is, but we used to call it propaganda. I see disinformation as digital propaganda, content that reaches us through SMS, text messages, blogs, Twitter, Facebook and so on. Unlike propaganda, however, which has long been met with counter-propaganda, right now we essentially have no defences, no equivalent counter-disinformation to protect us.
Up until very recently, disinformation was seen as “just posts on social media” and as quite harmless. A general user reading it would not see the coordinated effort behind it or the specific intent driving it, and that makes it very difficult to identify. At least with propaganda, we saw the leaflets being dropped from the sky or heard the messages broadcast through megaphones. We could recognize it as propaganda, or at least have an idea of the source and the intent, whereas with online disinformation the source and the intent are quite often hidden and obfuscated.
It does have real-world consequences. The 2016 Trump election was shown to have been subject to Russian influence, and the Brexit vote also allegedly saw foreign interference. These are hugely consequential, drastic changes.
I'll pick on Russia for a bit. Russia did this through troll farms, creating thousands of social media accounts that appeared to be ordinary users. These accounts supported radical political groups for specific political ends. They fabricated articles, invented stories and posted nonsense. Quite often they even posted the truth, but with a twist, aimed at vulnerable groups who then got riled up. These fake accounts can have a great many followers; they look established and real.
This was done in an organized fashion, as a state-run campaign. The Internet Research Agency, as an example, had hundreds of employees working 12-hour shifts, from 9 a.m. to 9 p.m. and from 9 p.m. to 9 a.m. The shifts were aligned with U.S. working hours and holidays, so the accounts looked real. With a budget of about $600,000 Canadian a month, which might seem like a lot, they were able to achieve real impact abroad. Compared with a military intervention, $600,000 a month is negligible.
Disinformation, or any such content, is designed to spread. A study in 2016 showed that this content spreads six times faster than real news. There's an old proverb that a lie goes halfway around the world before the truth gets its boots on. That's very much true here as well.
Another study, in 2019, showed that over 70 countries had such disinformation campaigns, and Facebook was the number one platform for them. Canada is not immune to any of this. We've had election interference. One MP in B.C. lost an election specifically because of disinformation. We are under attack. Disinformation is promoting the superiority of foreign countries and undermining confidence in our democracies.
It's now been six years since we started working on disinformation detection, specifically training computer models to detect this type of content. We've done four or five projects specifically on this, funded by the Government of Canada. The end goal is to detect these disinformation campaigns with artificial intelligence. Our models can detect this content with about 90% accuracy, so we know this is doable.
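To give a rough sense of what "training computer models to detect this type of content" involves, here is a minimal sketch of a text-classification pipeline. It is illustrative only: the example posts, labels and model choice (TF-IDF features with logistic regression, via scikit-learn) are assumptions for the sketch, not the actual data or models from our projects.

```python
# Minimal sketch of a text classifier for flagging disinformation-style posts.
# Illustrative only: the data, labels and model choice are assumptions,
# not the models described in the testimony above.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Hypothetical labelled examples: 1 = disinformation-style, 0 = ordinary post.
texts = [
    "NATO is secretly planning to invade; share before this gets deleted!",
    "City council meets Tuesday to discuss the new transit plan.",
    "Leaked documents PROVE the election was rigged by foreign banks.",
    "The weather office forecasts rain across the region this weekend.",
]
labels = [1, 0, 1, 0]

# Hold out half the examples for evaluation.
X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.5, stratify=labels, random_state=0
)

# Bag-of-words (unigram and bigram) features feeding a simple linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(X_train, y_train)

print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

In practice this kind of pipeline would be trained on large, carefully labelled collections of posts rather than a handful of examples, and accuracy figures such as the roughly 90% mentioned above would come from evaluation on held-out data.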
Back in January of 2022, we were asked by our project funders—the Canadian Armed Forces, at that point—to study Russian online activity to see what their stance was with respect to Ukraine.
We submitted our findings on February 13, 11 days before the war started, essentially saying that Russia was painting itself as the victim and taking steps to defend itself, and that NATO, the European Union, the U.S. and other western nations were the aggressors against Russia. Eleven days later, Russia attacked Ukraine.
This plan to attack—not this specific plan, but the intent to attack—was seen online beforehand.