Good afternoon, and thank you for the invitation to participate in this study.
My name is Jacques Marcoux. I'm the director of research with the Canadian Centre for Child Protection. We are a registered charity that has been operating for nearly 40 years. We operate Cybertip.ca, which is Canada's national tip line for the public reporting of online child sexual abuse and exploitation. When you read in the news about the thousands of online sextortion and luring cases across the country, in many of those cases, we were the first point of contact for these kids.
I also want to note that our organization is viewed internationally as a world leader in the discovery, identification and the issuance of take-down notices for child sexual abuse material all across the Internet. We do this through the deployment of a number of technological tools built by our organization over the years. It's a platform we call Project Arachnid. Just for a sense of scale for the committee, on any given day, we issue anywhere from 2,000 to 20,000 take-down notices to hundreds of online service providers across dozens of countries.
We have, quite frankly, seen it all, so I want to emphasize that the perspective I'm here to share today is grounded in a reality that thousands of Canadians experience online. This isn't hypothetical, and it isn't philosophical; it's real, and it happens on mainstream services that all of us use.
With that said, it may not be clear exactly how our work connects to freedom of expression, so I want to provide a couple of examples.
First, it's important to know that in our space, we especially focus on expression in the form of images and videos. This includes expression that is criminal but also expression that is often referred to as “lawful but awful”. This, for example, can include images of kids in highly sexualized poses or even the spread of images or information that's used to doxx them.
We proactively seek the removal of this content online, and we routinely encounter resistance and even outright denials from online services. We also know, from our work with survivors who are on the receiving end of this so-called expression, that it has an incredibly chilling effect on their ability to participate in online life. In fact, part of the services and supports we provide to these individuals is assistance to help them dramatically limit their online footprint for the sake of their personal safety on the Internet. We also work with victims who spend their lives trying to stop the spread of images of their abuse or their personal information across the Internet.
Consider that the person who has disseminated this content does so essentially with zero friction in the exercise of their expression, and often anonymously. With a few clicks, content depicting these victims goes online, where it can be downloaded thousands of times with potentially infinite reach. For the victims, it's a minefield of barriers: they are often asked by service providers to “prove it” or to provide ID in order to get anything taken down. When they go to police, they often discover that little can be done, sometimes because the content is technically lawful.
These challenges, as you can imagine, are exactly why we as an organization support online safety-type regulations: measures that ensure the systems that act as vehicles for our expression have obligations to, for example, anticipate and plan for predictable harms and, especially, design their services in ways that limit those harms and foster healthy environments. It's simply not enough to act once the harm is done.
Some examples of ways that operators can protect and enhance free expression include really basic concepts like providing users with reporting tools; blocking bots that artificially amplify what I'll call inauthentic expression; eliminating problematic algorithmic incentives; and, of course, having stringent rules for the swift removal of illegal content, such as child sexual abuse imagery. This list could go on and on, but I think it's important to recognize that a lot of the core principles behind this list are simply borrowed from all the other industries in Canada that are subject to regulation.
If I can leave you with one core thought, it's this: The digital spaces we all use to express ourselves are very often undeservedly characterized as altruistic public squares of free expression, when in reality these environments are commercial entities, designed to drive engagement and traffic at all costs with little regard for the public interest or the rights of users.
For the government, a decision to not intervene is, in and of itself, an action that has a dramatic impact on free expression. The alternative to intervention is simply to roll the dice and hope that foreign companies voluntarily prioritize the rights of Canadians over their objectives, which may be commercial, political or otherwise.
Thank you.