Thank you, Mr. Chair and committee members, for this opportunity to discuss the potential threat of foreign interference and the risks associated with the misuse of social media data.
I'm Anatoliy Gruzd, a Canada Research Chair and professor at Toronto Metropolitan University. I'm also a co-director of the Social Media Lab, where I study social media's impact on society, information privacy and the spread of misinformation around conflicts such as the Russia-Ukraine war.
While my comments today are my own, they are grounded in research conducted at the Social Media Lab and are informed by 15 years of working with various types of social media data.
As previous witnesses have testified, there are concerns that TikTok could be vulnerable to foreign interference, leading to major implications for our national security and individual privacy. However, I would like to point out that a loaded gun is different from a smoking gun. Despite its being framed as a national security threat, to date, there's still no public evidence that the Chinese government has spied on Canadians using a back door, or privileged access, to the TikTok app.
That is not to say there is nothing to worry about. There are valid concerns regarding the potential for TikTok and other platforms to be exploited by malicious actors for propaganda and radicalization. For example, Osama bin Laden's 2002 “Letter to America” recently resurfaced on TikTok and was seen by millions. However, these concerns are not limited to any one platform. Rather, they represent broader challenges to the integrity and security of our information environment.
As such, we must take a comprehensive approach to addressing these issues by compelling platforms to commit to the following: adopting the principles of privacy by design and by default, investing in expanding their trust and safety teams, and sharing data with researchers and journalists.
I'll expand on each of these points.
Teaching digital literacy is important, but it's unfair to place all the responsibility on individuals. Social media platforms are complex, and the algorithms that decide what users do and don't see remain black boxes. The only true choice we have is to disconnect from social media, but that's neither realistic nor practical, as our own research has shown, because most Canadians have at least one social media account.
It's important to shift the focus from individual responsibility to developing strategies that compel companies to implement privacy by design and by default. Currently, it's all too common for platforms to collect more data by default than necessary.
However, even with privacy protection settings enabled, Canadians may still be vulnerable to malicious and state actors. According to a national survey that our lab released last year, half of Canadians reported encountering pro-Kremlin narratives on social media. This highlights concerns about the reach of foreign propaganda and disinformation in Canada, extending beyond a single platform.
In another example, earlier this year, Meta reported a sophisticated influence operation from China that spanned multiple platforms, including Facebook, Twitter, Telegram and YouTube. The operation tried to impersonate EU and U.S. companies, public figures and institutions, posting content consistent with those identities before shifting to negative comments about Uyghur activists and critics of China.
To fight disinformation, platforms should expand their trust and safety teams, partner with fact-checking organizations and provide access to credible news content. Unfortunately, some platforms, like Meta and X, are doing the exact opposite.
To evaluate how well platforms are combatting disinformation, Canada should create an EU-style code of practice on disinformation and a transparency repository that would require large platforms to report regularly on their trust and safety activities in Canada.
To further increase transparency and oversight, Canada should mandate data access for researchers and journalists, which is essential to independently detect harmful trends. In the EU, this is achieved through the new Digital Services Act.
Currently, TikTok doesn’t provide data access to Canadian researchers, but it does so for those who reside in the U.S. and EU. Sadly, TikTok is not alone in this regard. Recently, X shut down its free data access for researchers.
In summary, while it's important to acknowledge the impact of foreign interference on social media, banning a single app may not be effective. It could also undermine trust in government, legitimize censorship and create an environment for misinformation to thrive.
A more nuanced approach should consider the various forms of information manipulation and develop strategies to address them directly, whether on TikTok or other platforms. This may involve a wider adoption of privacy by design and by default, expanding trust and safety teams in Canada and compelling platforms to share data with researchers and journalists for greater transparency and independent audit.
Thank you.