Thank you to the chair and the committee for inviting the Canadian Civil Liberties Association to appear before you today.
Facial recognition—or, as we often think of it at CCLA, facial fingerprinting, to draw a parallel to another sensitive biometric—is a controversial technology. You will hear submissions during this study that tout its potential benefits and others that warn of dire consequences for society that may come with particular use cases, especially in the context of policing and public safety. Both sides of the debate are valid, which makes your job during this study especially difficult and so profoundly important. I'm grateful that you've undertaken it.
The CCLA looks at this technology through a rights lens. This focus reveals that it is not just individual and collective privacy rights that are at risk in the various public and private sector uses of face surveillance and analysis, but also a wide range of other rights. I know that you’ve heard in previous submissions about the serious risk to equality rights raised by faulty versions of this technology that work less well on faces that are Black, brown, Indigenous, Asian, female or young—that is, non-white and non-male.
What I’d add to that discussion is the caution that if the technology is fixed and becomes more accurate on all faces across the spectrums of gender and race, it may become even more dangerous. Why? It's because we know that in law enforcement contexts, the surveillance gaze disproportionately falls on those same people. We know who often suffers discrimination in private sector applications. Again, it's those same people. In both cases, perfect identification of these groups, or of members of these groups who already experience systemic discrimination because of who they are and what they look like, carries the potential simply to facilitate more perfectly targeted discriminatory actions.
In addition to equality rights, tools that could allow ubiquitous identification would have negative impacts on a full range of rights protected by our Canadian Charter of Rights and Freedoms and other laws, including freedom of association and assembly, freedom of expression, the right to be free from unreasonable search and seizure by the state, the presumption of innocence—if everyone’s face, as in the Clearview AI technology, becomes a subject in a perpetual police lineup—and ultimately rights to liberty and security of the person. There’s a lot at stake.
It’s also important to understand that this technology is creeping into daily life in ways that are becoming commonplace. We must not allow that growing familiarity to breed a sense of inevitability. For example, many of us probably unlock our phones with our face. It’s convenient and, with appropriate built-in protections, it may carry relatively little privacy risk. A similar one-to-one matching facial recognition tool was recently used by the Liberal Party of Canada in its nomination voting process prior to the last federal election. In that case, it was a much riskier use of a potentially faulty and discriminatory technology, because it took place in a process that is at the heart of grassroots democracy.
The same functionality in very different contexts raises different risks. This highlights the need for keen attention, not just to technical privacy protections, which exist in both the phone and voting app examples, but to contextually relevant protections for the full set of rights engaged by this technology.
What is the path forward? I hope this study examines whether—not just when and how—facial recognition can be used in Canada, taking those contextual questions into consideration. CCLA believes, similar to our previous witness, that regulation is required for those uses that Canadians ultimately deem appropriate in a fair and free democratic state.
Facial recognition for mass surveillance purposes should be banned. For more targeted uses, at the moment CCLA continues to call for a moratorium, particularly in a policing context, in the absence of comprehensive and effective legislation that provides a clear legal framework for its use, includes rigorous accountability and transparency provisions, requires independent oversight and creates effective means of enforcement for failure to comply.
A cross-sector data protection law grounded broadly in a human rights framework is necessary, especially in an environment where the public and private sectors are using the same technologies but are currently subject to different legal requirements. Better yet, targeted laws governing biometrics or data-intensive, algorithmically driven technologies would be more precisely fit for purpose. There are a number of examples globally where such legislation has recently been enacted or is under consideration. We should draw inspiration from those to create Canadian laws that put appropriate guardrails around potentially beneficial uses of FRT and protect people across Canada from its misuse or abuse.
Thank you. I welcome your questions.