Thank you very much, Mr. Chair and the committee, for inviting me to discuss this important topic.
I'm a professor of history and public policy at the University of British Columbia, where I direct the Centre for the Study of Democratic Institutions, or CSDI. At CSDI, we aim to understand the past, analyze the present and train for the future, so I'll make three points today—one about the past, one about the present and one about the future.
First is the past. Misinformation and disinformation are a feature, not a bug, of the international system. So, too, is foreign interference in elections. The U.S. feared French interference as far back as 1796. In the second half of the 20th century, the two Cold War superpowers, the U.S. and the Soviet Union, intervened in around 11% of all national executive elections worldwide.
The question is not if foreign interference will happen, but rather why some states engage in this practice at particular moments.
Some of my research examined why Germans tried to use the then-new technology of radio to influence global politics from 1900 to 1945. Germans wanted to interfere in foreign information environments because they felt boxed in politically and economically. Losing World War I accelerated those feelings. This obviously did not end well. The Nazis built on decades of experimentation to spread racist and anti-Semitic content, ending in a world war of words as well as weapons.
Without getting into more historical weeds, this shows that analyzing international relations actually helps to predict potential foreign disinformation campaigns. This phenomenon will not disappear, but will wax and wane, so we need systemic interventions to embed resilience through educational initiatives, platform interventions, transparency, research and other measures to strengthen democracy.
Second is the present. The current social media and AI environment has created new economic incentives for misinformation and disinformation. For understandable reasons, these committee meetings are focused on politics, but making money fuels the problem, too.
We need stronger enforcement of electoral regulations on platforms to guard against this during elections. Canada might also coordinate with other democracies facing the same problem. For example, an intergovernmental task force could coordinate on issues like demonetizing disinformation. This could draw lessons from other multilateral institutions like the Financial Action Task Force, or FATF.
More broadly, Canada has much to learn from other jurisdictions, like Finland on media literacy or Taiwan on transparency and combatting disinformation while preserving freedom of expression.
Third is the future. Generative AI, or gen AI, is obviously at the top of most people's minds. I recently co-authored a report released by CSDI on the role of gen AI in elections around the world in 2024. We found that gen AI is currently pervasive, but not necessarily persuasive, yet it still creates problems. It threatens democratic processes like elections in three main ways.
First, it enables deception by lowering the barrier to entry to create problematic content. This accelerates problems that already existed on social media platforms.
Second, gen AI pollutes the information environment by worsening the quality of available information online.
Third, gen AI intensifies harassment. It's far easier to create deepfakes that may be used to harass female political candidates in particular. We should worry about this amplification of online abuse and harassment of political candidates, which is something that I've studied in Canada since 2019. This could target specific individuals or under-represented groups to force them out of politics.
To date, there is little evidence that beneficial uses of gen AI in elections will outweigh these harmful ones. Multiple measures are needed to address the challenges of gen AI. For example, although not election-specific, British Columbia's Intimate Images Protection Act offers one avenue to protect female political candidates from deepfakes. We should look for similar legislation to address other challenges posed by gen AI.
To sum up, the past tells us that disinformation is not going anywhere, but we do have the power to mitigate it. The present tells us to grapple with the economic incentives, too. The future warns us to address issues with gen AI, like deepfakes, before they get out of hand.
Thank you very much.