Thank you, Madam Chair, for the opportunity to appear virtually before you to discuss this important topic.
I'm a professor and Canada Research Chair at the University of British Columbia in Vancouver. I direct the Centre for the Study of Democratic Institutions, where we research platforms and media. Two years ago, I served as a member of the Department of Canadian Heritage's expert advisory group on online safety.
Today, I will focus on three aspects of harms related to illegal sexually explicit material online, before discussing briefly how Bill C-63 may address some of these harms.
First, the issue of illegal sexually explicit material online overlaps significantly with the broader question of online harm and harassment, which disproportionately affects women. A 2021 survey found that female journalists in Canada were nearly twice as likely as their male colleagues to receive sexualized messages or images, and six times as likely to receive online threats of rape or sexual assault. Queer, racialized, Jewish, Muslim and Indigenous female journalists received the most harassment.
Beyond provoking mental health issues and fears for physical safety, this harassment has led many women to consider leaving their roles or to decline more public-facing positions. Others have been discouraged from pursuing journalism at all. My work over the last five years on other professional groups, including political candidates and health communicators, suggests very similar dynamics. This online harassment creates a chilling effect for society as a whole when those in professional roles do not represent the diversity of Canadian society.
Second, generative AI is accelerating the problem of illegal sexually explicit material. Take the example of deepfakes: artificially generated images or videos that swap a person's face onto somebody else's naked body to depict acts that neither person committed. Recent high-profile targets include Taylor Swift and U.S. Congresswoman Alexandria Ocasio-Cortez. These are not isolated examples. As journalist Sam Cole has put it, “sexually explicit deepfakes meant to harass, blackmail, threaten, or simply disregard women's consent have always been the primary use of the technology”.
Although deepfakes have existed for a few years, generative AI has significantly lowered the barrier to entry. The number of deepfake videos increased by 550% from 2019 to 2023. Such videos are easy to create: about one-third of deepfake tools enable a user to create pornography, and pornography comprises over 95% of all deepfake videos. One last statistic: 99% of those featured in deepfake pornography are female.
Third, while most illegal sexually explicit material is prima facie easy to define, we should be wary of online platforms offering solely automated solutions. For example, what if a lactation consultant provides online guidance about breastfeeding? Wholly automated content moderation systems might delete such material, particularly if they are trained simply to search for certain body parts, like nipples. Given that provincial human rights legislation protects breastfeeding in much of Canada, deletion of this type of content would raise questions about freedom of expression. If parents have the right to breastfeed in public in real life, why not to discuss it online? This example suggests that human content moderators remain necessary, that they must be trained to understand Canadian law and cultural context, and that they must receive support for the very difficult work they do.
Finally, let me explain how Bill C-63 might address some of these issues.
There are very legitimate questions about Bill C-63's proposed amendments to the Criminal Code and the Canadian Human Rights Act, but given today's topic, I'll focus briefly on the online harms portion of the bill.
Bill C-63 draws inspiration from excellent legislation in the European Union, the United Kingdom and Australia. This makes Canada a fourth or fifth mover, if not increasingly an outlier, in having yet to regulate online safety.
Bill C-63 proposes three types of duties for platforms. The first two are a duty to protect children and a duty to act responsibly in mitigating the risks of seven types of harmful content. The third, the most stringent and the most relevant for today, is a duty to make two types of content inaccessible: child sexual exploitation material and non-consensually shared intimate content, including deepfakes. This should theoretically protect the owners of both the face and the body used in a deepfake. A newly created Digital Safety Commission would have the power to require removal of such content within 24 hours, as well as to impose fines and other measures for non-compliance.
Bill C-63 also foresees the creation of a Digital Safety Ombudsperson to provide a forum for stakeholders and to hear user complaints when platforms do not uphold their legal duties. This ombudsperson might also enable users to complain about takedowns of legitimate content.
Now, Bill C-63 will certainly not resolve all issues around illegal sexually explicit material, for example, how to deal with copies of material stored on servers outside Canada—