Thank you, Madam Chair, for the invitation to address the committee today.
My name is Sam Andrey. I am the managing director of The Dais, a policy and leadership institute at Toronto Metropolitan University, where we work to advance public policy solutions for the responsible governance of technology and a strong democracy.
We have been conducting regular surveys of Canadians over the past four years to better understand online misinformation and to track public attitudes toward regulating online platforms.
I want to begin tonight by sharing a high-level summary of what our research tells us about the spread of online misinformation in Canada. About half of Canadians say they see false information online at least a few times a month. The use of online platforms for news, particularly Facebook, YouTube and private messaging apps, is associated with higher exposure to and belief in misinformation.
About 10% to 15% of Canadians have a relatively high degree of belief in misinformation and are more likely to hold false or conspiratorial beliefs about many topics, such as COVID-19, the Russian invasion of Ukraine, and immigration. This group tends to have lower trust in mainstream media and public institutions in general. Conversely, this group shows higher levels of trust in and use of social media and messaging platforms for news, and people in this group are less likely to say that they fact-check things they see online using another source. Collectively, these are conditions that foreign actors can exploit to both seed and amplify false information online.
What are potential policy solutions to this challenge? This is not an easy question for a liberal democracy that must also protect free expression and avoid unintended consequences, including chilling effects, surveillance creep and the censoring of voices that represent the most vulnerable.
There are of course proactive efforts the federal government is already supporting in some way—things such as digital literacy programming in schools and communities, and maintaining strong, independent journalism. We also have measures in place now, through the Canada Elections Act, to monitor digital election ads and prohibit foreign parties from directly purchasing those ads.
However, a number of allied jurisdictions are also now advancing regulatory models that place additional legal responsibilities on online platforms to more transparently address their systemic risks to society, including their role in spreading foreign disinformation that is designed to undermine democratic processes.
Such regulatory models could, for example, require labels on synthetic or deepfake media, or clamp down on what's commonly referred to as “coordinated inauthentic behaviour”, a tactic that foreign actors can use to artificially spread false information through fake or automated accounts.
There are also efforts under way to improve tools that enable users to more easily fact-check or understand the context of what they come across online. For example, WhatsApp has rolled out a feature for highly forwarded messages, whereby you can tap on a magnifying glass and send that message to a Google search. Twitter has also begun piloting its “Community Notes” feature, which allows users to add context to misleading tweets; other users can then rate how helpful those notes are. Nudging features like these, as well as other efforts that encourage users to think twice before sharing, can help mitigate the spread of misinformation without censorship.
I want to close by saying that we have found, through our surveys, that these platform governance proposals are supported widely, by more than 80% of Canadians, and that the majority of Canadians believe the intentional spread of false information is a threat to Canadian democracy that needs to be addressed by our governments.
Thank you for the opportunity. I'm looking forward to your questions.