Thank you very much.
First, I'd really like to thank the committee for undertaking this study. I think it's incredibly important and very timely, given the changes we've seen since PIPEDA was first passed.
When I think back over that period of time, I always find myself thinking about three things. First, PIPEDA, as you know, was enacted to create trust in the information marketplace. Second, when PIPEDA was being passed, it was quite clear that the intention was to create consent as a floor and not a ceiling. Last, data protection and the provisions that are included in PIPEDA were part of a larger strategy that was designed to protect privacy as a human right. At the time, PIPEDA was seen as a necessary part of this protection, but it was not sufficient in and of itself.
In the last 20 years or so, I've spent a lot of time doing research on children's attitudes toward and experiences with privacy and equality in networked spaces. I think that research raises real concerns about the first of those points, PIPEDA's success in creating trust in the information marketplace.
You could argue that part of it is a lack of education. I was in the field talking to 13- to 16-year-olds in October and November. We asked them about fair information practices, and not one of them was able to identify a single one. In fact, almost none of them could remember the point at which they consented to the collection of their information when they signed up or posted material on Snapchat or Instagram.
Certainly when you talk to young people about the regulatory regime, they talk about privacy policies, and they don't talk about them in a very flattering way. From their point of view, these have been purposely written to obfuscate and confuse them, so they won't know what's happening, and so they will feel powerless.
They repeatedly—and increasingly, actually, over the years—have told us that the commercial surveillance they experience on these platforms is creepy, and “creepy” is a really important word, because typically it means that someone's privacy has been invaded. It's a marker. But at the same time, since their school lives, their home lives, their work lives, and their play lives are so intertwined with technology, they really feel they don't have any choice about it whatsoever.
I think a good starting point for your study is the recognition that even though so many Canadian young people and Canadian adults have flocked to these platforms, that doesn't mean they're comfortable with the current regulatory framework.
In 2015 we surveyed 5,500 kids between the ages of 10 and 17 across the country. We asked them, “Who should be able to see what you post online?” and 83% of them said that the corporations that own the platforms where they're posting the information should not have access to it. So if I put something up on Facebook, Facebook shouldn't be looking. And 95% said that marketers should not be able to see what they post. Whether they've posted in a public place or a private place, they felt it was private to them.
Typically, when kids are talking about privacy, they're not talking about non-disclosure; they're talking about audience control, and marketers were not an audience they wanted or expected. Some 96% said that companies that sell them smartphones and other devices or apps that use GPS should not be able to use that GPS data to locate them in the real world, and 99% said that marketers should never be able to use GPS to figure out where they were in the real world.
I think this brief snapshot really strongly suggests that there is a disconnect between the regulatory model and the lived experiences of the people who play, shop, go to school, and hang out on these platforms.
I think that disconnect is really related to a fiction that's embedded in PIPEDA. PIPEDA assumes that when someone posts a photo on Instagram or is keeping a streak going at midnight on Snapchat, they are knowingly and consciously undertaking a commercial transaction, that they are trading their personal information for access to the platform.
But from the point of view of the people who live on these platforms, it's not a commercial transaction. If I'm on Snapchat, I'm chatting with my friends, I'm doing my homework, I'm signing a petition, I'm exercising my free speech, or I'm exercising my freedom of association. I don't think that's an outrageous perspective. Certainly that's the same relationship we have with our land lines. Although I spend $70 a month so Bell can put a phone line in my house and I can talk to people, I certainly don't expect Bell to listen to my phone calls.
I had a painter in the other day. I don't expect Bell to interrupt my conversation with my painter and tell me, “Home Depot has a sale on paint right now”, and sell to me in that environment. And I certainly don't expect Bell to take all that information and run it through an algorithm to figure out if I'm a criminal or not.
If we go back and look at that time period, I think reconnecting with that earlier hope for PIPEDA calls on us to place privacy and data protection in a much broader context.
Go back to the Finestone report of 1997, in which privacy was seen as a social value, a democratic value, and a human right. I think that broader perspective provides this committee with two advantages.
The first one is that it's exactly the kind of thinking you're going to need if you intend to harmonize our privacy protection regime with the European General Data Protection Regulation, which comes into force in 2018. I think it's arguable that Europe has done a much better job than North America in navigating the challenges we've seen in networked spaces over the last 15 years or so, precisely because of a strong commitment to human rights and a strong jurisprudence built on that commitment.
I also think that this broader perspective, treating data protection as a necessary but, on its own, insufficient piece of protecting privacy as a human right, will help us navigate the consent debate more effectively. As I said, when PIPEDA was passed, it was very clearly articulated that consent was intended to be a floor and not a ceiling, and it sure felt like a leaky ceiling after about six months had gone by.
Particularly given the commissioner's comments on big data, there's certainly pressure to weaken consent provisions, and there's pressure to make more information publicly available precisely so corporations can sidestep the provisions we now have. There's more pressure to de-identify information and to accept de-identified information as non-personal information for the purposes of the legislation.
It's always the promise of big data: if we can just keep all the information, we'll be able to learn new things, because artificial intelligence will identify patterns that are hidden to us, so that we can predict behaviour, we can be more efficient, and we can be more effective. I think privacy is the best way to crack that open and to begin to examine the ethical concerns that flow from this type of information use. This comes back to my human rights concern: big data is not predictive; it can look only to the past. It assumes that I will do in the future what I did in the past, but even worse than that, it assumes that I will do what people like me have done in the past.
There's a deep concern around these kinds of information infrastructures, which is that we will unintentionally and unconsciously recreate biases in our information systems. We'll either program them in through false proxies, or they'll be learned by the algorithms themselves. We can look at the example in England, where they used data to identify potential young criminals. The youngest they identified was three years of age, and he was identified because he was racialized, he was impoverished, and he lived in a particular area. There are discriminatory outcomes hidden within this kind of information management system.
Even if we take the position that the algorithm will be able to learn, I think all you have to do is look at what happened with Microsoft's Tay to realize that an open season on information will lead to unintended consequences that will harm the most marginalized in our society.
At a practical level, I have five suggestions.
I think we need to strengthen the reasonable purposes clause. I was lucky enough to participate in the commissioner's meeting on consent, and it was quite interesting. We had quite a debate, because the representatives of the businesses I was sitting with kept saying that businesses have a right to collect information, while I kept saying, “No, businesses don't have a right.” People have rights. Businesses have needs and desires. I found it quite interesting that they kept pointing to the purpose clause. I think there's an opportunity to enrich our commitment to human rights within PIPEDA by opening up and reaffirming the need to protect individual rights against business uses, rather than business “rights”.
Second, I imagine that you're seriously considering adding a right to delink information if there's no public value. It's the right to be forgotten clause. From young people's point of view, certainly, this is absolutely crucial. When you sit down and talk to young people about the risks they're worried about online, that's it. They say, “Oh, something I did when I was 16 is going to sink me, and I will never be able to get over it.” I think that's a particularly important area to examine.
Third, young people certainly ask for regulators to mandate more technical controls so they can more easily control their audiences and take down content. I'm personally quite concerned that community standards are being created by corporations and that our elected representatives are not active in that space of setting standards for the kinds of discourse that are appropriate in a Canadian context.
Fourth, I'd strongly urge you to consider mandating some form of algorithmic transparency. So many of these practices are hidden, and that is only getting worse, so I think corporations should be required to be fully transparent about their information practices, particularly because of this concern about discriminatory outcomes.
Last, I'd ask you to consider holding corporations to account for those discriminatory outcomes if they're going to get the benefit of access to this information. It's like pollution; somebody is going to pay for the dirty water. Since we're building this system right from the get-go, we should be considering who that burden should fall on, and I would argue that it should fall on the people who profit from it.
Thank you very much.