Thank you.
When I teach freedom of expression to my law students, I start with the question of what freedom of expression means to them. Before looking at the law or philosophy, we should all start with the question of what expression means to us personally. It touches every aspect of our lives and democracy, and this meaningfulness is what informs our legal structure.
A commitment to freedom of expression asks a lot of us. It asks us to protect offensive, disturbing and shocking expression in the belief that society as a whole benefits, even if individuals are caught in the crosshairs. However, it is not an absolute right and it never has been.
Canadian courts have generally adopted a negative approach to freedom of expression, assuming that if government just stays out of the way, we'll be free. This, I suggest, is a false assumption. We do not enjoy equal freedom to express ourselves, and law can be an important vehicle to protect and promote freedom of expression.
This is especially important in the area of technology law, which is where I work, where laws targeting private companies are an important vehicle to ensure users' rights are protected.
When I got into this area almost 20 years ago, my focus was on how technology companies had become private arbiters of expression. No matter what we want to do online, we rely on a private company to make it happen. They decide who has access, what content stays up or comes down, the systems of dispute resolution, and how their sites are designed, using persuasive techniques to nudge behaviour, such as endless scrolling, rewards, notifications and “likes”, essentially hijacking our minds.
This means these companies have extraordinary power—more than most states. They are the deciders of global free expression norms, and there's minimal transparency about their practices and minimal legal mechanisms with which to hold these companies accountable. These companies are also soft targets for government pressure to remove certain content, called jawboning.
At its worst, it operates as a form of shadow regulation—government A pressures platform Y to remove certain content. More commonly, law enforcement, for example, investigates whether a post is criminal hate speech. They think it might be, but in the meantime, they think it probably violates the platform's own terms and conditions. Law enforcement notifies the platform of the post, and the platform independently assesses it against its own moderation processes. In this situation, is the state suppressing lawful expression? Generally, no, but it matters how this is done, and informal measures always risk being illegitimate in substance or appearance.
Now, I don't want to give the impression that the companies are bad actors—many are the source of innovative solutions to the problems we face—but in the end, these are just companies. They're not good or bad, but they do have fiduciary responsibilities to act in their company's best interests, so there's only so much they can ever do to act in society's best interests, and some companies elect to do very little.
My message is this: When companies are this powerful and have this much impact on society, it is the government's job to create a legal framework around that.
There are two key steps that are crucial to promote and protect freedom of expression and address online harms. The first is to pass part 1 of Bill C-63 after, of course, careful study and amendments. It proposes a systemic approach to social media regulation.
What do I mean by a systemic approach? This approach is not concerned about specific content—whether this post or that is hate propaganda and whether a company leaves it up or takes it down. Rather, it targets the system that makes social media tick. What content moderation systems does the company have in place? Does it provide due process? Does the platform address the risks of the recommender system? Does the company have a plan to address inauthentic accounts and manipulation of its systems by bots and deepfakes?
The companies are required to be transparent about their practices, and a regulator can investigate companies for failing to have proper systems in place. In terms of freedom of expression, a systemic approach is the best in class to provide the most protection to freedom of expression while targeting the core problems social media have made so much worse.
The second step is to reform data privacy law and introduce AI legislation, such as some form of Bill C-27. These are data-driven businesses. The design of their interfaces, their practices concerning the collection, use and disclosure of user data, and their use of AI systems hold the keys to our minds, our health, and our agency to participate and express ourselves freely. Privacy has always been key to the enjoyment of freedom of expression, and therefore Bill C-27, or some version of it, is a key complement to Bill C-63.
Thank you.