Thank you very much for inviting me back.
I understand that one of the things we've been asked to focus on today is this notion of algorithmic curation. I'm making these remarks as the co-leader of the eQuality Project, which is focused on the big data environment and its impacts on online conflict among young people. I'm also a member of the steering committee of the National Association of Women and the Law.
Big data, or the big data environment, in which each of us trades our data online for the services we get, is a mechanism for sorting all of us, including young people, into categories in an attempt to predict what we will do based on what we've done in the past, and also to influence our behaviour in the future, especially around marketing, by encouraging us to purchase certain goods or to consume in certain ways.
In terms of our concerns at the eQuality Project with the big data model, and with algorithmic sorting in particular, there are three that I want to touch on.
The first is this assumption that the past predicts the future. This can become a self-fulfilling prophecy, which in the context of youth is particularly concerning. The assumption is not only that what we do predicts what we will do individually in the future, but that what people who are assumed to be like us will do or have done in the past somehow predicts what we as individuals will do in the future.
We can begin with an example that will appear soon in the eQuality Project annual report, courtesy of my co-leader Valerie Steeves. Think about online advertising and targeting. If you are a racialized male online, and the algorithmic sort categorizes racialized males as people who are more likely to commit crimes, then the advertising targeted to people in that category, the young racialized male, might lean more toward the names of criminal lawyers and ads for searching out people's criminal records, as opposed to advertising for law schools, which is the kind of advertising a middle-class white young person might get instead. There's a study by Latanya Sweeney about this.
The shaping of our online experience, that is, of the information to which we have access, according to our algorithmic sorting into groups can then become a bit of a self-fulfilling prophecy, because it's assumed that certain information is relevant to us, and that's the only information we have access to. I don't know if you have ever sat side by side with someone, done the same Google search, and seen that you each get different results. That's one part of it. The assumption that the past predicts the future is problematic because it's deeply conservative, and it's problematic as well when the groups we're sorted into are based on discriminatory categories.
The second problem, obviously, is the constraint that this imposes on change, the constraint it imposes on people's equal capacity to participate and to grow. In the context of young people, our concern is whether young people will be influenced in ways that lead them to internalize the stereotypes wallpapering their online spaces; how internalizing those stereotypes may affect their self-presentation, their self-understanding, and their sense of their possibilities for future growth and participation; and in what ways this may set youth up for conflict with one another and set them up to judge each other according to the stereotyped, marketed standards that are part of the algorithmic sort in the online environment.
The third problem that we're particularly concerned with is, of course, the lack of transparency around this algorithmic sort. We cannot question it. Most people, even computer programmers, don't necessarily understand the outcomes of the algorithmic sort. When important decisions are being made about people's lives, such as what information they have access to and what categories they're sorted into, and we have a system that we're not allowed to question, that isn't required to be transparent, and that isn't required to explain why we've been sorted in a particular way, there are obviously serious democratic issues.
Again, our concern in the eQuality Project is to focus on the impact that this has on young people, particularly young people from vulnerable communities, which includes girls.
What to do about this?
One of the important points, which came from earlier work that I did with Professor Steeves in the eGirls Project, is that more surveillance is not the solution. The big data algorithmic environment is a surveillance environment. It's a corporate surveillance environment and, of course, the corporate collection of this data spills over into the public environment, because it creates opportunities for public law enforcement access to this data.
What the girls in the eGirls Project told us about their experiences in the online environment was that surveillance was a problem, not a solution. Algorithmic solutions that purport to categorize young people by surveilling their data instill greater distrust between young people and adults, and greater distrust among young people of the systems they're using.
I think it's really important to refocus our concerns on corporate practices here, rather than on training children to accept an algorithmic model, to accept that they're going to be sorted in this particular way. We should take a step back and ask corporations to better explain their practices, the how, the why, and the when, and consider regulation if necessary, including requirements that explanations be provided where decisions about a young person's life chances are being made through algorithmic curation and sorting.
Those are my remarks for now.