Good afternoon, Mr. Chair and members of the committee.
My name is Tamir Israel and I'm a lawyer with the Samuelson-Glushko Canadian Internet Policy and Public Interest Clinic at the University of Ottawa, which sits on the traditional unceded territory of the Algonquin Anishinabe people.
I want to thank you for inviting me to participate in this important study into facial recognition systems.
As the committee has heard, facial recognition technology is versatile and poses an insidious threat to privacy and anonymity, while undermining substantive equality. It demands a societal response that is different from, and more proactive than, our response to other forms of surveillance technology.
Face recognition is currently distinguished by its ability to operate surreptitiously and at a distance. Preauthenticated image databases can also be compiled without individuals' participation, and this has made facial recognition the biometric of choice for achieving a range of tasks. In its current state of development, the technology is accurate enough to inspire confidence in its users, but sufficiently error-prone that mistakes will continue to occur, with potentially devastating consequences.
We have long recognized, for example, that photo lineups can lead police to fixate erroneously on particular suspects. Automation bias compounds this problem exponentially. When officers using an application such as Clearview AI or searching a mug shot database are presented with an algorithmically generated gallery of 25 potential suspects matching a grainy image taken from a CCTV camera, the tendency is to defer to the technology and to presume the right person has been found. Simply including human supervision will, therefore, never be sufficient to fully mitigate the harms of this technology.
Of course, racial bias remains a significant problem for facial recognition systems as well. Even for top-rated algorithms, false match rates can be 20 times higher for Black women, 50 times higher for Native American men, and 120 times higher for Native American women than they are for white men.
This persistent racial bias can render even mundane uses of facial recognition deeply problematic. For example, a United Kingdom government website relies on face detection to vet passport image quality, providing an efficient mechanism for online passport renewals. However, the face detection algorithm often fails for people of colour, locking them out of conveniences available to others and further alienating individuals who are already marginalized.
As my friend Ms. Bhandari mentioned, even when facial recognition is cured of its biases and errors, the technology remains deeply problematic. Facial recognition systems use deeply sensitive biometric information and provide a powerful identification capability that, as we know from other investigative tools such as street checks, will be used disproportionately against Indigenous, Black and other marginalized communities.
So far, facial recognition systems can be, and have been, used by Canadian police on an arrested suspect's mobile device, on a device's photo album, on CCTV footage from the general vicinity of crimes, and on surveillance photos taken by police in public spaces.
At our borders, facial recognition is at the heart of an effort to build sophisticated digital identities. “Your face will be your passport” is becoming an all-too-common refrain. The technology also provides a means of linking these sophisticated identities and other digital profiles to individuals, driving an unprecedented level of automation.
At all stages, transparency is an issue, as government agencies in particular are able to adopt and repurpose facial recognition systems surreptitiously, relying on dubious lawful authorities and without any advance public licence.
We join many of our colleagues in calling for a moratorium on public safety- and national security-related uses of facial recognition, and on new uses at our borders. Absent a moratorium, we would recommend amending the Criminal Code to limit law enforcement use to investigations of serious crimes, on the basis of reasonable grounds to believe. A permanent ban on the use of automated, live biometric recognition by police in public spaces would also be beneficial, and we would further recommend exploring a broader prohibition on the adoption of new facial recognition capabilities by federal bodies absent explicit legislative or regulatory approval.
Substantial reform of our two core federal privacy laws is also required. Bill C-27, tabled this morning, would enact the artificial intelligence and data act and reform PIPEDA, our federal private sector privacy law. Those reforms are pending and will be discussed, but beyond the amendments in Bill C-27, both PIPEDA and the Privacy Act need to be amended so that biometric information is explicitly designated as sensitive, requires greater protection in all contexts and, under PIPEDA, can be collected and used only with express consent.
Both PIPEDA and the Privacy Act should also be amended to legally require companies and government agencies to file impact assessments with the Privacy Commissioner prior to adopting intrusive technologies. Finally, the commissioner should be empowered to interrogate intrusive technologies through a public regulatory process and to put in place usage limitations or even moratoria where necessary.
Those are my opening remarks. I thank the committee for its time. I look forward to your questions.