Thank you, Mr. Chair and members of the committee. Good morning.
My name's Carole Piovesan. I'm a managing partner at INQ Law, where my practice concentrates in part on privacy and AI risk management. I'm an adjunct professor at the University of Toronto's Faculty of Law, where I teach on AI regulation. I also recently co-edited a book on AI law, published by Thomson Reuters in 2021. Thank you for the opportunity to make a submission this morning.
Facial recognition technologies, FRTs, are becoming much more extensively used by public and private sectors alike, as you heard Ms. Khoo testify. According to a 2020 study published by Grand View Research, the global market size of FRTs is expected to reach $12 billion U.S. by 2028, up from approximately $3.6 billion U.S. in 2020. This demonstrates considerable investment in, and advancement of, FRTs around the world, indicating a rich competitive environment.
While discussions about FRTs tend to focus on security and surveillance, various other sectors are using this technology, including retail and e-commerce, telecom and IT, and health care. FRTs present a growing economic opportunity for developers and users of such systems. Put simply, FRTs are becoming more popular. This is why it is essential to understand the profound implications of FRTs in our free and democratic society, as this committee is doing.
For context, FRTs use highly sensitive biometric facial data to identify and verify an individual. This is an automated process that can happen at scale. It triggers the need for thoughtful and informed legal and policy safeguards to maximize the benefits of FRTs, while minimizing and managing any potential harms.
FRTs raise concerns about accuracy and bias in system outputs; unlawful and indiscriminate surveillance; black box technology that is inaccessible to lawmakers; and, ultimately, a chilling effect on freedom. Viewed in this context, FRTs put at risk Canada's fundamental values as enshrined in our Canadian charter and expressed in our national identity.
While the use of highly sensitive, identifiable data can deeply harm an individual's reputation or even threaten their liberty—as you heard Ms. Khoo testify—it can also facilitate quick and secure payment at checkout, or help save a patient's life.
FRTs need to be regulated with a scalpel, not an axe.
The remainder of my submission this morning proposes specific questions, organized under four main principles, that are intended to guide targeted regulation of FRTs. These principles align with the OECD artificial intelligence principles and leading international guidance on responsible AI, and they address technical, legal, policy and ethical issues to shape a relatively comprehensive framework for FRTs. The questions are not intended to be exhaustive, but to highlight operational issues that warrant deeper exploration.
The first principle is technical robustness. Questions that should inform regulation include the following. What specific technical criteria ought to be associated with FRT use cases, if any? Should independent third parties be engaged to assess FRTs from a technical perspective? If so, who should provide that oversight?
Next is accountability. Questions that should inform regulation include the following. What administrative controls should be required to promote appropriate accountability for FRTs? How are those controls determined, and by whom? Should an impact assessment be required? If so, what should it look like? When is stakeholder engagement required, and what should that process look like?
Next is lawfulness. Questions that should guide regulation include the following. What oversight is needed to promote alignment of FRT uses with societal values, considering criminal, civil and constitutional human rights law? Are there no-go zones?
Last, but certainly not least, is fairness. Questions associated with fairness regulation include the following. What are the possible adverse effects of FRTs on individual rights and freedoms? Can those effects be minimized? What steps can or should be taken to ensure that certain groups are not disproportionately harmed, even in low-risk use cases?
Taken together, these questions would allow Canada to align with emerging regulation of artificial intelligence around the world, with a specific focus on FRTs, given the serious threats they pose to our values balanced against their very real potential benefits.
I look forward to your questions. Thank you.