The BCCLA is a non-partisan society with a mandate to uphold civil liberties and human rights in Canada. Privacy is one of our most important portfolios, and we thank you for the opportunity to appear at this review of PIPEDA.
I'll just note at the outset that we have not been able to review the GDPR sufficiently to comment upon the upcoming review of adequacy. We are pleased to leave that commentary to others on this panel.
Our association supports and echoes many of the recommendations and concerns that have already been voiced by academics, regulators, and witnesses from civil society. For example, we strongly support meaningful enforcement powers in PIPEDA, specifically, order-making powers, the ability to levy financial penalties, and the ability to award compensation to complainants in appropriate circumstances.
We have been calling for these powers for over a decade. In our view, there is no longer any credible argument for retaining the so-called ombudsperson model, as provincial counterparts have long demonstrated that order-making powers can be effectively combined with co-operative investigations, mediation, and education.
Likewise, we join others, including the BC Freedom of Information and Privacy Association, in calling for federal political parties to be covered under PIPEDA, just as provincial political parties are covered under our corresponding legislation here in British Columbia.
Our association has heard from many Canadians, most particularly from those in areas that consider themselves ground zero in the various robocall scandals, that the complete failure to regulate the collection, use, and disclosure of their personal information by federal political parties is entirely unacceptable. For all the obvious reasons, including historical abuses that have facilitated electoral fraud, this is a matter of immense importance and urgency.
I would like to speak briefly to a topic that has been discussed under the title “the right to be forgotten”, or more broadly, online reputation. This is an area of competing rights in which the BCCLA has not yet taken an official position. We are nevertheless very alive to the competing claims and the interests involved, and we would like to clarify a few points.
First, we need to understand the context of this discussion and, in our view, reject the notion that we are talking about a situation that is in any way analogous to ripping index cards out of the library card catalogue, the current go-to metaphor for de-indexing.
In no library that has ever existed has anyone been able to command the service of gathering information about a neighbour who is not a public figure, whose activities are not in the public interest, and who is not otherwise notable. Tenants, co-workers, ex-partners of current partners, classmates, and acquaintances: until recently, the vast majority of these ordinary members of the public have enjoyed the privacy protection of practical obscurity. The Internet and powerful search engines have eroded this protection very significantly, and people are definitely being harmed.
To give you an example of online reputation matters that spring from British Columbia, a small business in Nanaimo had a protracted battle with Google about Google's obligations under its own policy to remove anonymous online reviews. Those included libellous personal attacks on specific company employees. One of those employees, whom I'll call Ms. Jones, was said to be racist and to have the attention span of a wood bug. The company's inability to get these anonymous personal attacks on its employees removed was the subject of a CBC story. In fact, it appears that it was only the negative publicity and the media attention that finally prompted Google to remove the review.
I needed to recollect the facts of this story for this submission to the committee. It had been reported in the media. I found that article using the following online search: “Google, B.C., online review, personal attack”. Those were my search terms. I found the article, and this is precisely as it should be. The information about Ms. Jones was contained in the article, as it was when it was first published. This too is precisely as it should be.
Then, as an experiment, I searched for “Ms. Jones” just by her name alone, as anyone might do—a nosy neighbour, prospective employer, landlord, or client. The first substantive hit in that search was the article containing the personal attack on her as a cognitively deficient racist.
Is this as it should be?
If this is a problem (and we know it is, because people contact our organization looking for solutions to exactly this kind of problem), how do we fix it without causing harm to other critically important rights, those of access to information and freedom of expression? We say that in order to have that discussion, we have to be very specific about the problem. The problem is not that searching for online reputation stories leads me to Ms. Jones. The problem is that searching for Ms. Jones leads me to the online reputation stories that report the content of libellous statements about her.
Without exploring what options are available to remedy this specific problem, it does seem, at a minimum, premature to announce that a remedy would necessarily be unconstitutional. Certainly the hope would be to find a way to meaningfully secure all the rights at issue.
Finally, I want to address the use of what are called “ethical assessments” or “ethical frameworks” for big data and the Internet of things. As the OPC indicated in their overview of submissions received in their consultation on consent, there is a great deal of enthusiasm within business and industry for ethical frameworks for the use of personal information, either as an added level of accountability or, more likely, as compensation for a system in which consent is being eroded.
The question of whether and how consent can be made meaningful is, of course, a very large discussion. My sole point at this juncture is simply to stress that the assessment model being proposed is not ethical. Calling it an ethical framework is deeply problematic.
In this framework, the people who want to use the data, in order to make money from it, will decide whether it is justified to use that data given the risks to privacy, reputation, etc. Those risks are assumed by other people. The people who stand to benefit are the people who are deciding what the risk level is and whether their purposes outweigh those purported risks. The people who are themselves being subjected to the risk have no say in the process.
It is simply impossible to describe this distribution of benefits and risks as one that is ethical. Assuredly, there are many individuals who would undertake this task with a conscience and with a desire to operate ethically and fairly. That said, individuals aside, the process itself is nakedly one of foxes guarding the henhouse, with merely a promise to be really ethical foxes; and, as you will note from the OPC's review of submissions, not so ethical that they would want a disinterested third party, say an independent ethics board, to have any part in that guarding function.
In sum, we would like to tell the committee that we have no confidence that the solution of ethical frameworks is either ethical or a solution.
Thank you very much.