Thank you and good afternoon.
I'm happy to offer some general thoughts about misinformation and misinformation correction before answering your questions to the best of my knowledge and experience.
My name is Michael W. Wagner. I have a Ph.D. in political science from Indiana University. I'm the William T. Evjue distinguished chair for the Wisconsin Idea and a professor in the School of Journalism and Mass Communication, where I direct the Center for Communication and Civic Renewal at the University of Wisconsin-Madison.
It's well established that Russia's Internet Research Agency, or IRA, operated thousands of Twitter accounts, posing as individuals, to weigh in on political discussions on social media in the United States and other countries, including Canada. Beyond driving some social media conversations witnessed and engaged with by users of platforms like Twitter—now called X—these IRA accounts also found their way into legitimate news coverage, quoted as examples of person-on-the-street opinion, further amplifying IRA messages on issues like support for Russia's war with Ukraine. This greatly extends the reach of those messages, since more people consume legitimate news sources than use social media to learn about and discuss politics. It also increases the likelihood that lawmakers could be affected by IRA posts: research demonstrates that parliamentarians use legitimate news sources as a way to read public opinion—something lawmakers can then choose to use in their own decision-making calculus about how to represent their constituents.
Turning to another aspect of misinformation online, it's useful to think about which factors are most associated with inaccurate content going viral and spreading widely and quickly. Posts with more emotional resonance are more likely to be shared online. Posts published at times when people are habitually more likely to be on social media are also more likely to go viral. Perhaps most importantly, key influencers in politics and the news media who share or spread that information are often the critical amplifiers of virality.
In terms of misinformation correction, fact checks can help people come to believe things that are verifiably true. Labelling a story as a fact check tends to motivate audiences to think about the accuracy of information while they're consuming it. People willing to admit what they don't know are also more likely to benefit from fact checks. However, fact checks come at a cost: some people come to believe that the fact-checkers themselves are biased, which can erode the long-term, trusting relationships the audience has with more legitimate news sources.
Another promising strategy to correct misinformation on social media is called "observational correction". Rather than engaging with the person making a false claim, one simply corrects the claim—without focusing on the person—and links to the accurate information. Observational correction occurs when audiences see misinformation shared by others being debunked on social media. It reduces misperceptions, or beliefs in misinformation, among the audiences witnessing the exchange, even if it doesn't change the opinion of the person who created the false post in the first place. This strategy has been shown to be more effective in some circumstances than pre-bunking misinformation, and there's some evidence that logic-based interventions perform better than fact-based interventions as well.
I'm happy to answer questions about these factors or other factors related to misinformation and the health of democracies.
Thank you.