Thank you for the introduction, Mr. Chair.
Thank you to the committee for inviting me to participate as a witness on the topic of the use and impact of facial recognition technology, or FRT.
As noted, my name is Dr. Alex LaPlante. I am the senior director of product and business development at Borealis AI, which is RBC's R&D lab for artificial intelligence. The views I express today are my own; they do not reflect the views of Borealis AI, RBC or any other institution with which I'm affiliated.
I've spent the last 15 years building and deploying advanced analytics and AI solutions for academic and commercial purposes, and I've seen the positive outcomes that AI can drive. However, I'm also acutely aware that, if we don't take care to adequately assess the application, development and governance of AI, it can have adverse effects on end-users, perpetuate and even amplify discrimination and bias against racialized communities and women, and lead to the unethical use of data and breaches of privacy rights.
I will focus my comments on two areas: data privacy, and data quality and algorithmic performance. I will then conclude with my recommendations around the governance of this technology.
Biometric data is among the most sensitive data that exists, so privacy is paramount when it comes to safely collecting, using and storing it. There have been several instances in which biometric data was collected and used without individuals' consent or knowledge, including the case of Clearview AI, which breached individuals' privacy rights and put them at the mercy of unregulated and unvalidated AI systems. This is particularly concerning in high-risk use cases such as criminal identification. There have also been cases of function creep, where companies gain consent to collect biometric data for one stated purpose but go on to use it in ways beyond that original intent.
The best FRT systems can achieve accuracy rates of 99.9% and perform consistently across demographic groups. However, not all algorithms are created equal, and in some cases false positive rates vary by factors of 10 to even 100 for racialized populations and women. This performance gap is directly related to the lack of representative, high-quality training data.
One field of AI research that should be highlighted in the context of FRT is adversarial robustness. Research in this area underpins practices like cloaking, which seek to deceive FRT systems. This can be achieved through physical manipulation, such as obscuring facial features, or, more covertly, through modifications to facial images that are imperceptible to the human eye but render the images unidentifiable to the algorithm.
Law enforcement agencies in Canada and abroad have employed technology built on unverified data scraped from the web, data that can be manipulated in ways that are undetectable without direct access to the source. Without proper oversight and regulation, the companies behind these systems can easily manipulate their data to control who can or cannot be identified.
Beyond data quality issues, FRT, like any high-risk AI system, should undergo extensive validation so that its limitations are properly understood and taken into consideration when it is applied in the real world. Unfortunately, many FRT systems on the market today are true black boxes that are not available for validation or audit.
While my comments focus on the risks of FRT, I believe there is a lot of value in this technology. We need to carefully craft regulations that allow FRT to be used safely in a variety of contexts and that address Canada's key legislative gaps, as well as concerns around human rights and privacy. Working in the highly regulated financial sector, I have participated in the effective governance of high-risk AI systems, where issues of privacy, usage, impact and algorithmic validation are comprehensively evaluated and documented. I believe similar approaches can address many of the primary concerns around this technology.
Regulations need to provide FRT developers, deployers and users with clear requirements and obligations regarding specific uses of this technology. These should include the requirement to obtain affirmative consent for the collection and use of biometric data, as well as purpose limitation to avoid function creep. FRT legislation should also leverage the privacy principles of necessity and proportionality, especially in the context of privacy-invasive practices.
Further, governance requirements should be proportional to risk materiality. Impact assessments should be common practice, and there should be context-dependent oversight on issues of technical robustness and safety, privacy and data governance, non-discrimination and fairness, and accountability. This oversight should not end once a system is in production but should continue for the lifetime of the system, requiring regular performance monitoring, testing and validation.
Last, clearer accountability frameworks for both developers and end-users of FRT are needed. This will require a transparent legislative articulation of how human rights are weighed against commercial interests.
That said, these regulations should take a balanced approach that reduces administrative and financial burdens for public and private entities where possible.
Thank you very much. I look forward to your questions.