Mr. Chair, honourable members, thank you and good afternoon. I appreciate the opportunity to appear before you today as part of your PIPEDA review, a statute in desperate need of legal reform.
My name is Ian Kerr. I'm a professor at the University of Ottawa, where I hold a unique four-way position in the Faculty of Law, the Faculty of Medicine, the School of Information Studies, and the Department of Philosophy. For the past 17 years, I have held the Canada Research Chair in Ethics, Law, and Technology. Canada Research Chairs are awarded to “outstanding researchers acknowledged by their peers as world leaders in their fields.”
I come before you today in my personal capacity.
I'd like to begin by reinforcing some points that have already been made in previous testimony.
First, to put it colloquially, and to disagree with my colleague David Young, the call for stronger enforcement through order-making power and the ability of the OPC to impose meaningful penalties, including fines, is by now a total no-brainer.
As Micheal Vonn of the BCCLA, who recently testified before you, said, “There is no longer any credible argument for retaining the so-called ombudsperson model”. This has already been acknowledged by Commissioner Therrien, former Commissioner Stoddart, and Assistant Commissioner Bernier, and has been fortified by testimony from other Canadian jurisdictions that already have order-making power, which Commissioners Clayton and McArthur have testified before you is advantageous. Strong investigatory and order-making powers are a necessary component of effective privacy enforcement, especially in a global environment. Let's get it done.
Second, I agree with former commissioner Stoddart and with overlapping testimony of Professor Valerie Steeves, both of whom have stated that PIPEDA's language needs to be strengthened in ways that reassert its orientation towards human rights. As Professor Steeves attests, privacy rights are no longer reducible to data protection, which itself is not reducible to a balancing of interests. Enshrining privacy as a human right, as PIPEDA does, reflects a profound and crucial set of underlying democratic values and commitments. Privacy rights are not merely trade-offs for business or governmental convenience. PIPEDA needs stronger human rights language.
Having reinforced these views, the majority of my remarks will focus on two central themes raised by this study: transparency and meaningful consent. I will use this framing language to orient your thinking, but in truth, both of these concepts themselves require expansion in light of dizzying technological progress.
When PIPEDA was enacted, the dominant metaphor was George Orwell's 1984, “Big Brother is Watching You.” Strong privacy rights were seen as an antidote to the new possibility of dataveillance, the application of information technology by government and industry to watch, track, and monitor individuals by investigating the data trails they leave behind through their activities. Though perhaps no panacea, PIPEDA's technology-neutral attempt to limit collection, use, and disclosure was thought to be a sufficient corrective.
However, technological developments in the 17 years since PIPEDA's enactment go well beyond watching. Today, I will focus on a single example: the use of artificial intelligence, AI, to perform risk assessment and delegated decision-making. The substitution of machines for humans shifts the metaphor away from the watchful eye of Big Brother towards what Professor Daniel Solove has characterized as:
...a more thoughtless process of bureaucratic indifference, arbitrary errors, and dehumanization, a world where people feel powerless and vulnerable, without any meaningful form of participation in the collection and use of their information.
This isn't George Orwell's 1984; this is Franz Kafka's trial of Joseph K.
In the years since PIPEDA's enactment, we have come to occupy a world that permits complex, inscrutable artificial intelligence to make significant decisions affecting our life chances and opportunities. These decisions are often processed with little or no input from the people they affect, and with little or no explanation of how they were made. Such decisions may be unnerving, unfair, unsafe, unpredictable, unaccountable, and unconstitutional. They interfere with fundamental rights, including the right to due process and even the presumption of innocence.
It's worth taking a moment to drill down on some real-life examples. IBM Watson is used by H&R Block to make expert decisions about people's taxes. At the same time, governments are using AI to determine who is cheating on their taxes.
Big Law uses ROSS to help its clients avoid legal risk. Meanwhile, law enforcement agencies use similar applications to decide which individuals will commit crimes and which prisoners will reoffend. Banks use AI to decide who will default on a loan. Universities use AI to decide which students should be admitted. Employers use AI to decide which people get jobs, and so on.
But here's the rub. These AIs are designed in ways that raise unique privacy challenges. Many use machine learning to excel at decision-making. This means that AI can go beyond its original programming to make discoveries in the data that human decision-makers would neither see nor understand.
This emergent behaviour is what makes AI so useful. It's also what makes it inscrutable. Machine learning, knowledge discovery in databases, and other AI techniques produce decision-making models differing so radically from the way that human decisions are made that they resist our ability to make sense of them. Ironically, AIs display great accuracy, but those who use them and even their programmers often don't know exactly how or why.
Permitting such decisions without an ability to understand them can have the effect of eliminating challenges that are essential to the rule of law. When an institution uses your personal information and data about you to decide that you don't get a loan, that your neighbourhood is going to be the one under more police surveillance, that you don't get to go to university, that you don't get the job, or that you don't get out of jail, and those decisions can't be explained by anyone in a meaningful way, such uses of your data interfere with your privacy rights.
I think this is the sort of reason that a number of experts have come before you to talk about what they call algorithmic transparency, but in my respectful submission, transparency doesn't go far enough. It's not enough for governments or companies to disclose what information has been used or collected when AIs affect our life chances and opportunities. Those who use AIs have a duty to explain those decisions in ways that allow us to challenge the decision-making process itself. That's a basic privacy principle enshrined in data protection worldwide.
I would therefore submit that PIPEDA requires a duty to explain decision-making by machines. A duty to explain addresses transparency and consent but goes further, in order to ensure fundamental rights to due process and the presumption of innocence. This is the approach taken in the EU's General Data Protection Regulation. I would go even further, following GDPR article 22, and suggest that PIPEDA should also enshrine “a right not to be subject to decisions based solely on automated processing”.
PIPEDA was enacted to protect human beings from technological encroachment. Decision-making about people must therefore maintain meaningful human control. PIPEDA should prohibit fully automated decision-making that does not permit human understanding or human intervention, and to be clear, I make this submission not to ensure EU adequacy but because it's necessary to protect human rights.
Mama raised me right. Among other things, she taught me that you don't accept a dinner invitation and then complain to your hosts about what is being served. Mama's gentle wisdom notwithstanding, I would like to conclude my remarks with two uncomfortable observations.
First, as I appear before you today, I think it's fair to say that my sense of déjà vu is not unwarranted. With the exception of a few new points like my submission in favour of a duty to explain, much of what I have said, indeed much of what everyone who has appeared before you has said, has all been said before.
Although many honourable members of this committee are new to these issues, those who have done their homework will surely know that we've already done this dance in hearings around Bill S-4, Bill C-13, the Privacy Act, the privacy and social media hearings, and of course the PIPEDA review of 2006. Yet we see very little in the way of substantive legislative change.
Although ongoing study is important, I say with respect that you are not Zamboni drivers. The time has come to stop circling around the same ice. The time has come to make some important legislative changes.
Second, as I prepare for the question period, I look around the table and pretty much all I see are men. Inexplicably, your committee itself is composed entirely of men. Yes, I realize that you have called upon a number of women to testify during the course of these proceedings. This, of course, makes sense. After all, a significant majority of privacy professionals are women. Indeed, I think it's fair to say that the global thought leadership in the field of privacy is, by majority, the result of contributions by women.
I find it astonishing and unjustifiable that you have no women on this committee, a decision to me as incomprehensible as many of those made by algorithms.
I feel compelled to close my remarks by making this observation a part of the public record.
Thank you for your careful attention. I look forward to questions.