Thank you.
Together with Professor Valerie Steeves, I co-lead a seven-year project funded by SSHRC called the eQuality project. It's focused on understanding how big data practices, especially targeted advertising, affect young people's online interactions and can set them up for conflict and discrimination.
Today, I'm going to be drawing on research from several Canadian studies of young people. Two are 2017 studies. One was conducted by the eQuality project for the Law Commission of Ontario and relates to online defamation. The other was co-conducted by the eQuality project and MediaSmarts, under a grant from the OPC, and focused on young people's decision-making about privacy when posting photos online. I'll also draw on the eGirls project, a three-year project co-led by Professor Steeves and me that looked specifically at the online experiences of young women and girls. Finally, I'll draw on the results of the MediaSmarts study entitled “Young Canadians in a Wired World”, most recently reported on in 2015-16.
Basically, I think there are three takeaways from these studies that are relevant to this committee. First, young people are very concerned about reputational harm, and for girls and young women in particular, permanent reputational harm is considered by many to be the danger associated with networked media. Second, privacy, particularly the mechanisms for controlling access to and use of young people's data, is foundational to addressing these harms. This is especially so as young people consider whether and how the information they post, or that is posted about them, may later be used unfairly and out of context in ways that interfere with their prospects for employment and healthy relationships, among other things. Third, young people do have strategies and norms to mitigate these dangers, but corporate practices and online architectures make it very difficult for them to implement those strategies, or invisibly undermine them through machine-based processes such as algorithmic profiling for targeted advertising.
In a nutshell, the studies on youth perspectives from our Canadian research show that young people do actively seek out online publicity, but they are also keenly aware of the complications that publicity introduces. Because of this, they rely on a number of strategies to protect their online reputations, including thinking very carefully about what they post, monitoring what other people are posting about them, and getting peers to assist them when negative material is posted about them. However, the commercial nature of networked media makes it very difficult for them to keep control over their reputations.
In what we've come to call a perfect storm, digital architectures incent young people to shed data that is in turn used to profile and categorize them for targeted advertising. This involves predictions about who they are and who they ought to be that are often premised on narrow, mediatized stereotypes and presumptions about the groups into which they are aggregated. When young people try to reproduce these stereotypes themselves in order to attract the “likes” and “friends” that platforms set up as numeric markers of success, they are exposed to conflict with others who monitor, judge, and sometimes stalk them.
In this environment, we asked young people what policy-makers should do. I have four things I want to share with you.
The first thing is that you need to directly engage young people in the policy-making process. Policy development models need to be reformed to require the direct engagement of young people from diverse social locations as experts in policy formulation itself, because research to date indicates a serious gap between the policies adults set and the experiences of young people.
Second, we need to look for responses that go beyond telling youth what to do and what not to do. The young people in the research I'm drawing from today understood that being involved in networked spaces is essential to their lives, and every indication from our social, economic, and cultural worlds affirms that reality. It's not just an impression they have. In fact, we've spent billions of dollars and years of policy and program development trying to get them online and keep them online as part of our economic development plans. As such, advice like “just go offline” if you want to protect your privacy or avoid harassment is both unrealistic and insulting.
Third, we need to move beyond informed consent models. In the current environment of surveillance and prediction, which is largely invisible to the user, traditional data protection models based on consent just aren't enough to protect young people's privacy and equality, because in many cases no one can even explain what machines are doing with our data. Even if we could explain it, simple disclosure of those processes wouldn't be enough, because networked technologies are so embedded in young people's lives that they have no real choice but to consent to terms of use that purport to allow these practices, even when they don't agree with or understand them.
Fourth, we need to regulate platform providers to improve privacy and equality. Many of our participants suggested that platform providers should not be permitted to keep young people's data as long as they do. This was in part because they were so conscious of how the permanent cache of information about them opened them up to judgment and reputational harm that could affect them now and in the future.
There are a number of potentially responsive regulatory options. First of all, as many people have testified before this committee, we can ensure that the OPC has enforcement powers in order to deal with these issues effectively.
Second, we can mandate greater accountability and transparency from service providers as a first step toward understanding exactly what they are doing with our data to profile us and shape our online experiences, and toward finding out how often that profiling is premised on discriminatory stereotypes or yields discriminatory outcomes that affect individuals' life chances.
This kind of profiling, which is machine-based, invisible to users, and involves processes that humans often cannot understand or explain, can lead to discrimination on grounds that are currently legally prohibited, some of which could have serious implications for young people in particular. Right now it's very difficult to open up the black box and understand exactly what is happening, although we get glimmers from research projects such as ProPublica's, which revealed discriminatory pricing in online SAT prep courses: Asian students were nearly twice as likely to be quoted a higher price, because they lived in zip codes associated with Asian populations, in both high- and low-income areas. It may well be that insights gained from this kind of disclosure by service providers will make it even clearer that the best option is simply to prohibit the use of young people's data for targeted advertising, full stop.
Third, we can consider legislative provisions better aimed at supporting young people in protecting their reputations now and in the future than the current PIPEDA provisions relating to accuracy and completeness. Examples include the right of erasure, as seen in California, and the right to be forgotten, as seen in the European Union, both of which I can say more about later if members are interested.
Finally, if we're simply too wedded to the consent model to depart from it, despite knowing its obvious limitations in the climate we're in, we could consider requiring service providers, regardless of their terms of service, to get separate, explicit consent from young people or their parents before using their personal information for targeted advertising, and to provide ongoing, easy opportunities to opt out of that decision. This is likely to be less effective than the other options I've talked about, but at least it offers the possibility of interrupting the commercial cycle of presumed access to young people's data.
In conclusion, the current commercial “data for services” model of network communications renders young people vulnerable to discriminatory profiling and reputational harm that can have long-lasting impacts. It's time for us as adults to take responsibility for the economic and social policies that have resulted in their seamlessly integrated online-offline world. Carrying out that responsibility requires the direct engagement of young people from a variety of social locations in processes like these, rather than just asking for the opinions of adults like me who have had the privilege of working with some of them.
Thank you.