Thank you.
Good evening. My name is Christopher Parsons. As mentioned, I'm a senior research associate at the Citizen Lab. I appear before this committee in a professional capacity, representing my own views and those of the Citizen Lab. My comments are based on our research into Chinese technology companies. The Citizen Lab is an academic institution, and our work operates at the intersection of technology and human rights.
In my time today, I want to point to some of the ways by which we can develop trust in the products and services that are manufactured in, transited through or operated from China. I do so by first turning to the issue of supply chain dependencies.
A rising concern is the extent to which Canadian companies, such as our telecoms, might become dependent on products made by Chinese companies, including Huawei. Dependency runs the risk of generating monocultures, or cases in which a single company dominates a Canadian organization's infrastructure. In such cases, three risks can arise.
First, monocultures can enable foreign governments to leverage dependencies on a vendor to apply pressure in diplomatic, trade or defence negotiations. Second, monocultures can create a path dependency, especially in 5G telecommunications environments, where there is often a degree of lock-in to a vendor's telecom equipment. Third, monocultures risk hindering competition among telecommunications vendors, to the effect of increasing capital costs for Canadian telecommunications providers.
All of these challenges can in part be mitigated by requiring diversity in Canadian telecommunications companies' networks, as has been recommended in the past by CSE's deputy chief of information technology security, Scott Jones. In this case, trust would come from not placing absolute trust in any given infrastructure vendor.
I now turn to building trust in software and hardware systems more generally. Errors are often inadvertently introduced into digital systems. Some errors are egregious, such as including old and known vulnerable code in a piece of software. Others are more akin to spelling or grammar errors, such as failing to properly delimit a block of code. There are also limited situations where state agencies compel private companies to inject vulnerabilities into their products or services to enable espionage or attack operations.
No single policy can alleviate all of the risks posed by vulnerabilities. However, some can enhance trust by reducing the prevalence of incidental vulnerabilities and raising the cost of deliberately injecting vulnerabilities into digital systems. Some of these trust-enhancing policies include, first, requiring companies to provide a bill of materials that declares their products' software libraries and dependencies, as well as their versions. This would help ensure that known deficient code isn't placed in critical infrastructure, and would also help responders identify vulnerable systems upon any later discovery of vulnerabilities in those libraries or dependencies.
Second, Canada and its allies can improve on existing critical infrastructure assessments by building assessment centres that complement the U.K.'s, which presently assesses Huawei equipment. Working collectively with our allies, we'd be better able to find incidental vulnerabilities while raising the likelihood of discovering state adversaries' attempts to deliberately slip vulnerabilities into systems' codebases.
Third, Canada could adopt robust policies and processes to ensure that government agencies disclose vulnerabilities in critical infrastructure to appropriate vendors and communities, as opposed to potentially secretly hoarding them for signals intelligence or cyber-operations.
I will now briefly turn to increasing trust in Chinese social media platforms. Citizen Lab research has shown that WeChat has previously placed Canadians' communications under political surveillance to subsequently develop censor lists that are applied to China-registered WeChat accounts. Our research into TikTok, released today, revealed there's no apparent or overt political censorship or untoward surveillance of Canadians' communications on that platform.
Based on our findings, we suggest that social media companies be required to publish more information on their activities to enhance trust. This would include publishing detailed content moderation guides, publishing how and why companies engage in monitoring and censoring behaviours, publishing how organizations interact with government agencies and address their corresponding demands, and publishing annual transparency reports that detail the regularity and effects of state and non-state actors who make requests for users' data.
Platforms could also be compelled to make available algorithms for government audit where there is reason to suspect they're being used to block or suppress lawful communications in Canada or where they're being used to facilitate influence operations. Platforms could also be compelled to disclose when user data flows through or is accessible by parts of their organizations that have problematic human rights, data protection or rule of law histories.
To conclude, we at the Citizen Lab believe that the aforementioned recommendations would ameliorate some of the cyber-related risks linked with Chinese supply chains and social media platforms. However, we also believe these policies should be applied in a vendor- and country-agnostic way to broadly improve trust in digital systems.
I would just note to the committee that the brief we have submitted provides additional details and recommendations, especially as applied to Internet standards, which I have not addressed in this statement.
Thank you for your time, and I look forward to your questions.