Thank you, Madam Chair and committee members, for the opportunity to contribute to your important prestudy of Bill C-63, the online harms act.
I'm Andrew Clement, a professor emeritus in the faculty of information at the University of Toronto, speaking on my own behalf. I'm a computer scientist by training and have long studied the social and policy implications of computerization. I'm also a grandfather of two young girls, so I bring both a professional and a personal interest to the complex issues you're having to grapple with.
I will confine my remarks to redressing a glaring absence in part 1 of the bill—a bill I generally support—which is the need for algorithmic transparency. Several witnesses have already made this point. The work of Frances Haugen is particularly important in this respect.
Social media operators, broadly defined, provide their users with access to large quantities of various kinds of content, but they're not simply passive purveyors of information. They actively curate this content, making some content inaccessible while amplifying other content, based primarily on calculations of what users are most likely to respond to by clicking, liking, sharing, commenting on, etc.
An overriding priority for operators is to keep people on their site and exposed to revenue-producing advertising. In the blink of an eye, they select the specific content to display to an individual, following precise instructions based on a combination of the individual's characteristics—for example, demographics, behaviour and social network—and features of the content, such as keywords, income potential and assigned labels. This is referred to as an “algorithmic content curation practice”, or “algorithmic practice” for short.
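To make the mechanism concrete, a deliberately simplified sketch of such a curation step might look like the following. The signal names, weights and thresholds here are purely hypothetical illustrations of the kinds of inputs I have just described; they are not drawn from any operator's actual system, which would involve far more complex machine-learned models over thousands of signals.

```python
# Illustrative toy sketch of an engagement-driven curation step.
# All feature names and weights are hypothetical, not taken from any real platform.

from dataclasses import dataclass

@dataclass
class User:
    age_group: str          # demographic signal
    past_click_rate: float  # behavioural signal
    follows: set[str]       # social-network signal

@dataclass
class Item:
    item_id: str
    author: str
    keywords: set[str]
    ad_revenue_potential: float  # income potential
    labels: set[str]             # assigned labels, e.g. "sensitive"

def engagement_score(user: User, item: Item) -> float:
    """Predict how likely the user is to click, like, share or comment."""
    score = 0.0
    score += 2.0 if item.author in user.follows else 0.0    # boost content from the user's network
    score += user.past_click_rate                            # favour users' established habits
    score += 0.5 * item.ad_revenue_potential                 # weight toward revenue-producing content
    score -= 1.0 if "sensitive" in item.labels else 0.0      # adjust for assigned labels
    return score

def curate(user: User, candidates: list[Item], k: int = 10) -> list[Item]:
    """Return the k items predicted to keep this user engaged the longest."""
    return sorted(candidates, key=lambda it: engagement_score(user, it), reverse=True)[:k]
```

The point of algorithmic transparency requirements is that, at present, regulators, researchers and users cannot see even this level of detail about how the real scoring and ranking functions are constructed and weighted.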
These algorithmic practices determine what appears most prominently in the tiny display space of personal devices and thereby guide users through the vast array of content possibilities. In conjunction with carefully designed interactive features, such curation practices have become so compelling, or even addictive, that they hold the attention of U.S. teens, among others, for nearly five hours a day. Disturbingly, their time spent on social media is strongly correlated with adverse mental health outcomes and with a rapid rise in suicide rates starting around 2012. We've heard vivid testimony about this from your other witnesses. Leading operators are aware of the adverse effects of their practices but resist reform, because it undermines their business models.
While we need multiple approaches to promote safety online, a much better understanding of algorithmic curation practices is surely one of the most important.
Canadians have begun calling for operators to be more transparent about their curation practices. The Citizens' Assembly on Democratic Expression recommended that digital service providers “be required to disclose...the...inner workings of their algorithms”. Respondents to the online consultation regarding this proposed online harms legislation noted “the importance of...algorithmic transparency when setting out a regulatory regime.” Your sister standing committee, the Standing Committee on Public Safety and National Security, has made a similar recommendation: “That the Government of Canada work with platforms to encourage algorithmic transparency...for better content moderation decisions.”
Internationally, the U.S., the EU and others have developed or are developing regulatory regimes that address online platforms' algorithmic practices. Most large social media services or online operators in Canada also operate in the EU, where they are already subject to algorithmic transparency requirements found in several laws, including the Digital Services Act. It requires that “online platforms...consistently ensure that recipients of their service are appropriately informed about how recommender systems impact the way information is displayed, and can influence how information is presented to them.”
While Bill C-63 requires operators to provide detailed information about the harmful content accessible on the service, it is surprisingly silent on the algorithmic practices that are vital for determining the accessibility, the reach and the effects of such content. This lapse is easily remedied through amendments—first, by adding a definition of “algorithmic content curation practice”, and second, by adding requirements for the inclusion of algorithmic content curation practices in the digital safety plans in clause 62 and in the electronic data accessible to accredited persons in clauses 73 and 74. I will offer specific amendment wording in a written submission.
Thank you for your attention, and I welcome your questions.