Thank you, Mr. Chair, for the invitation to address this committee.
I am honoured to speak to you from Calgary and the traditional territory of the people of the Treaty 7 region and the Métis Nation of Alberta.
I've had the opportunity to listen to some of the witnesses and the discussion leading up to my appearance. With my time, I would like to pull us back to look at the broader legal issues at play.
My key message is that this is not just about privacy. Privacy is one piece of the pie. For example, Discord does not use tools to detect child sexual abuse content, and it does not monitor or offer a tool for reporting livestreamed content. That's a recipe for disaster. This is a safety design problem, not only a privacy one.
This is about platform regulation. The health of our information ecosystem depends on privately owned platforms and the choices they make in the design of their products, corporate governance, culture and content moderation systems. In short, platforms have tremendous power.
Canada is currently a laggard in regulating platforms. Much of what this committee has discussed would be addressed by online harms legislation, which we do not yet have in Canada. Europe, the U.K. and Australia all have laws to address these issues. In some cases, they are on their second-generation or third-generation law. Canada has zero federal laws that apply generally to platform regulation. We can learn from the good and the bad of these other laws, but it is time to act now.
What do we need, and what are the areas we must be careful about?
First, platform regulation is a field like environmental protection: multiple areas of law must work in concert to protect our safety and rights. In particular, privacy law and online harms legislation are mutually reinforcing, so we need both. For example, algorithms that push harmful content do so by harvesting personally identifiable information, which is covered by privacy law. However, the algorithm can also draw from anonymized aggregate data, which falls outside of privacy law.
Online harms legislation can better target the choices that platforms make about their product designs and content moderation systems. Social media platforms mine data to determine our likes and interests, but it is what they do with that data that online harms laws can address. One example is Meta amplifying emotive and toxic content on Facebook by treating angry and love reactions as five times more valuable than likes, which fuelled the spread of misinformation and disinformation.
Second, platforms are part of the solution. They can be important collaborators and innovators in solving problems. There is, however, a friction when they are almost state-like in their role. Some have their own national security teams, essentially setting national security policy.
We also depend on platforms to go above and beyond the law in addressing hateful content, disinformation and violent extremism, none of which is necessarily illegal. However, that is not a substitute for law to set industry standards. Standards are needed. The examples I gave were platforms with relatively sophisticated governance structures. There are many popular platforms that do little to govern the risks of their products.
Third, when we talk about the risks of harm, we should be clear that not all risks are the same. Child protection, hate and terrorist propaganda, disinformation, and violence all have different dynamics and should not be distilled to one legal rule, except for the basic idea of corporate due diligence.
Further, when we talk about the risks of harm, these include risks to fundamental rights: the rights to freedom of expression, to privacy and to equality. Any analysis of solutions in law or governance must be through the lens of protection and promotion of rights. This is particularly challenging when it comes to addressing misinformation and disinformation because, except in narrow circumstances, it is lawful to believe and share false information.
I will leave you with this: What are the basic components needed in online harms legislation?
Platforms should have a duty to manage the risks of harm of their products and a duty to protect fundamental rights. There should be transparency obligations matched with a way to vet transparency through audits and access to data by vetted researchers. There should be the creation of a regulator to investigate companies and educate the public, and there should be access to recourse for victims, because this is a collective harm but also an individual one.
Thank you, and I welcome questions.