Evidence of meeting #11 for Access to Information, Privacy and Ethics in the 44th Parliament, 1st Session. (The original version is on Parliament's site, as are the minutes.)

A recording is available from Parliament.

Also speaking

Cynthia Khoo  Research Fellow, Citizen Lab, Munk School of Global Affairs and Public Policy, University of Toronto, As an Individual
Carole Piovesan  Managing Partner, INQ Law
Ana Brandusescu  Artificial Intelligence Governance Expert, As an Individual
Kristen Thomasen  Professor, Peter A. Allard School of Law, University of British Columbia, As an Individual
Petra Molnar  Lawyer, Refugee Law Lab, York University

11 a.m.

Conservative

The Chair Conservative Pat Kelly

I call this meeting to order.

Welcome to meeting number 11 of the House of Commons Standing Committee on Access to Information, Privacy and Ethics.

Pursuant to Standing Order 108(3)(h) and the motion adopted by the committee on Monday, December 13, 2021, the committee is commencing its study of the use and impact of facial recognition technology.

Today's meeting is taking place in a hybrid format pursuant to the House order of November 25, 2021. Members are attending in person in the room and remotely using the Zoom application. As you are aware, the webcast will always show the person speaking rather than the entirety of the committee.

I will remind members in the room that we all know the public health guidelines. I understand you've heard them many times now and I won't repeat them all, but I encourage everyone to follow them.

I would also remind all participants that no screenshots or photos of your screen are permitted. When speaking, please speak slowly and clearly for the benefit of translation. When you are not speaking, your microphone should be on mute.

Finally, I will remind all of you that comments by members and witnesses should be addressed through the chair.

I now welcome our witnesses for the first panel. We have, as an individual, Ms. Cynthia Khoo, who is a research fellow at the Citizen Lab, Munk School of Global Affairs and Public Policy, University of Toronto.

From INQ Law, we have Ms. Carole Piovesan, who is a managing partner.

We'll begin with Ms. Khoo. You have up to five minutes for your opening statement.

11 a.m.

Cynthia Khoo Research Fellow, Citizen Lab, Munk School of Global Affairs and Public Policy, University of Toronto, As an Individual

Thank you, and good morning.

My name is Cynthia Khoo and I am an associate at the Center on Privacy and Technology at Georgetown Law in Washington, D.C., as well as a research fellow with the Citizen Lab at the University of Toronto.

I am here today in a professional capacity, though I am providing my own views as an individual, based on my work at the Citizen Lab and further informed by the work of my colleagues at both the Citizen Lab and the Privacy Center.

Today I'll discuss four key concerns with police use of facial recognition technology, each with a proposed recommendation.

To begin, I'll introduce you to three people: Robert Williams was singing in his car when a crime he had nothing to do with occurred; Nijeer Parks was transferring funds at a Western Union; and Michael Oliver was simply at work.

All three are Black men who were wrongfully arrested by police relying on facial recognition technology. They have endured lost jobs, traumatized children and broken relationships, not to mention the blow to personal dignity. These are the human costs of false confidence in, and unconstitutional uses of, facial recognition technology.

This is the same technology that researchers have found is up to 100 times more likely to misidentify Black and Asian individuals, and that misidentifies more than one in three darker-skinned women, but does work 99% of the time for white men.

Although I used examples from the United States, the same could easily happen here, if it hasn't already. Racial discrimination against Black and Indigenous people imbues every stage of the Canadian criminal justice system, from carding, arrests and bail to pleas, sentencing and parole. Embedding facial recognition algorithms into this foundation of systemic bias may digitally alchemize past injustices into an even more, and perhaps permanently, inequitable future.

Therefore, recommendation number one is to launch a judicial inquiry into law enforcement use of pre-existing mass police datasets, such as mug shots. This is to assess the appropriateness of repurposing previously collected personal data for use with facial recognition and other algorithmic policing technologies.

I turn now to my second point. Even if all bias were removed from facial recognition, the technology would still pose an equal or even greater threat to our constitutional and human rights. Facial recognition used to identify people in public violates privacy preserved through anonymity in daily life and relies on collecting particularly sensitive biometric data. This would likely induce chilling effects on freedom of expression such as public protests about injustice. Such capability also promises to exacerbate gender-based violence and abuse by facilitating the stalking of women who are just going about their lives and who must be able to do so free of fear.

Facial recognition has not been shown to be sufficiently necessary, proportionate or reliable to outweigh these far-reaching repercussions. Thus, recommendation number two is to place a national moratorium on the use of facial recognition technology by law enforcement until and unless it's shown to be not only reliable but also necessary and proportionate to legitimate aims. This may well mean a complete ban in some cases, as several U.S. cities have already done. Canada should not shy away from following suit. This software cannot bear the legal and moral responsibility that humans might otherwise abdicate to it over vulnerable people's lives and freedom.

The third problem is lack of transparency and accountability. That this is a problem is evident from the fact that the public knows about police use of facial recognition primarily through media reports, leaked documents and FOI requests. Policies governing police use of facial recognition can be even more of a black box than the algorithms themselves are said to be. This circumstance gives rise to severe due process deficits in criminal cases.

Recommendation number three is to establish robust transparency and accountability measures in the event such technology is adopted. These include immediate and advance public notice and public comment, algorithmic impact assessments, consultation with historically marginalized groups and independent oversight mechanisms such as judicial authorization.

Fourth and last, we need strict legal safeguards to ensure that police reliance on private sector companies does not create an end run around our constitutional rights to liberty and to protection from unreasonable search and seizure. Software from companies such as Clearview AI, Amazon Rekognition and NEC Corporation is typically proprietary, concealed by trade secret laws and procured on the basis of behind-the-scenes lobbying. This circumstance results in secretive public-private surveillance partnerships that strip criminal defendants of their due process rights and subject all of us to inscrutable layers of mass surveillance.

I thus conclude with recommendation number four. If a commercial technology vendor is collecting personal data for and sharing it with law enforcement, they must be contractually bound or otherwise held to public interest standards of privacy protection and disclosure. Otherwise the law will be permitting police agencies to do indirectly what the Constitution bars them from doing directly.

Thank you. I welcome your questions.

11:05 a.m.

Conservative

The Chair Conservative Pat Kelly

Thank you.

Now, for five minutes, we have Ms. Piovesan.

11:05 a.m.

Carole Piovesan Managing Partner, INQ Law

Thank you, Mr. Chair and members of the committee. Good morning.

My name's Carole Piovesan. I'm a managing partner at INQ Law, where my practice concentrates in part on privacy and AI risk management. I'm an adjunct professor at the University of Toronto's Faculty of Law, where I teach on AI regulation. I also recently co-edited a book on AI law, published by Thomson Reuters in 2021. Thank you for the opportunity to make a submission this morning.

Facial recognition technologies, FRTs, are becoming much more extensively used by public and private sectors alike, as you heard Ms. Khoo testify. According to a 2020 study published by Grand View Research, the global market size of FRTs is expected to reach $12 billion U.S. by 2028, up from a global market size of approximately $3.6 billion U.S. in 2020. This demonstrates considerable investments and advancements in the use of FRTs around the world, indicating a rich competitive environment.

While discussions about FRTs tend to focus on security and surveillance, various other sectors are using this technology, including retail and e-commerce, telecom and IT, and health care. FRTs present a growing economic opportunity for developers and users of such systems. Put simply, FRTs are becoming more popular. This is why it is essential to understand the profound implications of FRTs in our free and democratic society, as this committee is doing.

For context, FRTs use highly sensitive biometric facial data to identify and verify an individual. This is an automated process that can happen at scale. It triggers the need for thoughtful and informed legal and policy safeguards to maximize the benefits of FRTs, while minimizing and managing any potential harms.

FRTs raise concerns about accuracy and bias in system outputs, unlawful and indiscriminate surveillance, black box technology that's inaccessible to lawmakers and, ultimately, a chilling effect on freedom. When described in this context, FRTs put at risk Canada's fundamental values as enshrined in our Canadian Charter and expressed in our national folklore.

While the use of highly sensitive, identifiable data can deeply harm an individual's reputation or even threaten their liberty—as you heard Ms. Khoo testify—it can also facilitate quick and secure payment at checkout, or help save a patient's life.

FRTs need to be regulated with a scalpel, not an axe.

The remainder of my submission this morning proposes specific questions organized around four main principles that are intended to guide targeted regulation of FRTs. The principles I propose align with the OECD artificial intelligence principles and leading international guidance on responsible AI, and they address technical, legal, policy and ethical issues to shape a relatively comprehensive framework for FRTs. They are not intended to be exhaustive, but to highlight operational issues that invite deeper exploration.

The first is technical robustness. Questions that should inform regulation include the following. What specific technical criteria ought to be associated with FRT use cases, if any? Should independent third parties be engaged to assess FRTs from a technical perspective? If so, who should provide that oversight?

Next is accountability. Questions that should inform regulation include the following. What administrative controls should be required to promote appropriate accountability of FRTs? How are those controls determined and by whom? Should there be an impact assessment required? If so, what should it look like? When is stakeholder engagement required and what should that process look like?

Next is lawfulness. Questions that should guide regulation include the following. What oversight is needed to promote alignment of FRT uses with societal values, thinking through criminal, civil and constitutional human rights? Are there no-go zones?

Last, but certainly not least, is fairness. Questions associated with fairness regulation include the following. What are the possible adverse effects of FRTs on individual rights and freedoms? Can those effects be minimized? What steps can or should be taken to ensure that certain groups are not disproportionately harmed, even in low-risk cases?

Taken together, these questions allow Canada to align with emerging regulation on artificial intelligence around the world, with a specific focus on FRTs given the serious threat to our values as balanced against some of the real beneficial possibilities.

I look forward to your questions. Thank you.

11:10 a.m.

Conservative

The Chair Conservative Pat Kelly

Thank you very much.

We'll go to the first round of questions.

First up is Mr. Williams for six minutes, please.

11:10 a.m.

Conservative

Ryan Williams Conservative Bay of Quinte, ON

Thank you very much, Mr. Chair.

Thank you very much to our panellists this morning for coming on board.

I'll start with you, Ms. Khoo. I'd like to clarify a question I had from your recommendations. You have recommended a moratorium on facial recognition technology at this point. Is that correct?

11:10 a.m.

Research Fellow, Citizen Lab, Munk School of Global Affairs and Public Policy, University of Toronto, As an Individual

Cynthia Khoo

It's a moratorium on the use of facial recognition technology by law enforcement. The reason it's a moratorium and not a ban is that essentially the moratorium would give time to look further into the issue—to launch a judicial inquiry, for example—until we can determine whether it is appropriate to use facial recognition, under what circumstances and with what safeguards, and then include the time to put those safeguards in place.

11:10 a.m.

Conservative

Ryan Williams Conservative Bay of Quinte, ON

Thank you.

In September of 2020, you wrote the following:

The Canadian legal system currently lacks sufficiently clear and robust safeguards to ensure that use of algorithmic surveillance methods—if any—occurs within constitutional boundaries and is subject to necessary regulatory, judicial, and legislative oversight mechanisms.

I think it falls within that theme. We know that right now algorithmic surveillance methods are still here. Could you tell the committee what kind of safeguards we need in order to properly protect Canadian rights today?

11:10 a.m.

Research Fellow, Citizen Lab, Munk School of Global Affairs and Public Policy, University of Toronto, As an Individual

Cynthia Khoo

Absolutely. I would say that the first one off the bat is transparency. A lot of the knowledge we have of what is even in place these days, as I mentioned, comes from investigative journalists or document leaks, for example, when ideally it should be law enforcement and the government telling us up front, prior to adoption, and giving the public a chance to comment on the potential impacts of these technologies. That's the first thing.

The second thing is that we need oversight mechanisms, such as impact assessments, that will assess the potential harms of these technologies ahead of time and not after the fact, particularly with respect to historically marginalized communities.

Those are higher-level, principle-based safeguards, but to go more into the weeds, our report focused on the criminal law context. Another example would be disclosure requirements for specific criminal defendants, so that they know whether these types of technologies have been used in their case and have an opportunity to respond.

11:15 a.m.

Conservative

Ryan Williams Conservative Bay of Quinte, ON

Thank you.

In this committee's other ongoing study about the collection and use of mobility data, we heard about the absence of prior and informed consent on the use of personal data and information. How important is it for any collection of Canadians' personal information to have clear and informed consent prior to the collection being undertaken?

11:15 a.m.

Research Fellow, Citizen Lab, Munk School of Global Affairs and Public Policy, University of Toronto, As an Individual

Cynthia Khoo

I would say as a starting principle that it is extremely important that all residents of Canada are able to give prior and informed consent to their data being collected. I understand that this is complicated in the criminal justice context, but I think this is where the connection to commercial vendors becomes really salient. Commercial vendors are collecting a great deal of data that should be collected only with prior and informed consent, but it is not. In some cases that is legally permitted; in others, consent is simply not obtained in practice. That data gets funnelled through to law enforcement agencies. I think that issue warrants a lot of attention from this committee.

11:15 a.m.

Conservative

Ryan Williams Conservative Bay of Quinte, ON

Thank you.

Ms. Piovesan, are there any current protections in Canadian law for the uses of collected facial recognition data? To clarify, I mean protection regarding how, where and for how long it's stored, and how it can be used or sold.

11:15 a.m.

Managing Partner, INQ Law

Carole Piovesan

There are some protections under privacy law that we look at. Again, it depends on who is conducting the collection. If we're talking about a state actor, there is a patchwork of regulation and common law that will govern how certain information can be collected, stored and retained. Under federal private sector privacy law, such as PIPEDA, there are certainly requirements that would, as Ms. Khoo said, demand that companies that want to collect such sensitive data do so on the basis of consent. If you look at Quebec, for instance, given that facial recognition technology involves biometric data, you'd be looking at a consent requirement as well. We do have a patchwork, depending on which actor is leading that collection.

The issue is that we don't have comprehensive regulation or, frankly, a comprehensive approach to facial recognition technology from soup to nuts. By that I mean a clear understanding of the appropriate safeguards from beginning to end: from the collection of the data, through the actual design of the system, to the use of that system, including the storage of that data, the assessment of that data and the disclosure requirements around it. Whether it's a public or non-public actor, there are potentially different disclosure requirements, but disclosure requirements nonetheless. We have a right to know when aspects of our faces, or anything that's an immutable, sensitive data point, are being collected, stored and potentially used in a way that could be harmful to us.

Really, again, we have this patchwork of laws and regulations, depending on who is collecting the information, but we don't have a comprehensive or really focused law around facial recognition technology.

11:15 a.m.

Conservative

Ryan Williams Conservative Bay of Quinte, ON

Okay. Thank you very much.

11:15 a.m.

Conservative

The Chair Conservative Pat Kelly

We'll go next to Mr. Fergus for six minutes.

March 21st, 2022 / 11:15 a.m.

Liberal

Greg Fergus Liberal Hull—Aylmer, QC

Thank you very much, Mr. Chair.

I'd like to thank our two witnesses for their presentations.

I rarely do this, but today I'm going to speak in English.

This is an issue on which I've done most of my reading in English, so I'll continue asking my questions in English.

First of all, let me thank Ms. Piovesan as well as Ms. Khoo for their contributions, not only to our study here but in terms of what they've written and published beforehand.

Ms. Khoo, I'd like to start with you. I've read a number of articles you've been involved with. One that certainly caught my attention is the Citizen Lab report you co-authored. For the purposes of this committee, I think it would be really important if you were to briefly explain what algorithmic technologies are. Then I'm going to have a few questions that move on from there.

11:20 a.m.

Research Fellow, Citizen Lab, Munk School of Global Affairs and Public Policy, University of Toronto, As an Individual

Cynthia Khoo

That is a great question. Algorithmic technologies can be very broad, depending on what level you're defining them at. In our report we defined algorithmic policing technologies specifically. If you think about it, an Excel spreadsheet could potentially be an algorithmic technology, in the sense that it relies on algorithms.

Algorithmic policing technologies, for the purposes of scoping our report—and I suspect it would probably be helpful in scoping for your committee—are emerging technologies that rely on automated computational formulas that are often used to assist or supplement police decision-making.

11:20 a.m.

Liberal

Greg Fergus Liberal Hull—Aylmer, QC

When we take a look at these algorithmic technologies, they're based on data that is collected by police—and I think you make a very good argument here, but for the purposes of the committee again—which in itself has been shown to carry a pretty strong bias. Is that not correct?

11:20 a.m.

Research Fellow, Citizen Lab, Munk School of Global Affairs and Public Policy, University of Toronto, As an Individual

Cynthia Khoo

That's correct.

11:20 a.m.

Liberal

Greg Fergus Liberal Hull—Aylmer, QC

With any algorithmic approach we would adopt for collecting that data and using it for artificial intelligence purposes, frankly, we would just be exacerbating these biases.

11:20 a.m.

Research Fellow, Citizen Lab, Munk School of Global Affairs and Public Policy, University of Toronto, As an Individual

Cynthia Khoo

I think that would be true in a lot of cases, yes.

11:20 a.m.

Liberal

Greg Fergus Liberal Hull—Aylmer, QC

In that case, then, I very much understand your secondary recommendation for there to be a national moratorium on the use, by law enforcement, of these kinds of technologies. The question is—and I'm fascinated to find out—why do you limit it to that? Why wouldn't you want to put a moratorium on the scraping of this kind of information by the private sector or non-public sector organizations?

11:20 a.m.

Research Fellow, Citizen Lab, Munk School of Global Affairs and Public Policy, University of Toronto, As an Individual

Cynthia Khoo

That is a really excellent question.

The reason I focused my remarks, and will focus most of my comments today, on the criminal justice context is purely because that was the scope of my research. I don't want to speak too far afield from issues that I've actually studied immersively, first-hand.

However, I do think there are a lot of really good reasons to engage in the same depth of research on the use of facial recognition not only in the commercial sector, but even by non-law enforcement government agencies. There may well be really good arguments to invoke a moratorium on facial recognition in those sectors as well. I can only speak in depth to the policing context, but that's not to say a moratorium wouldn't also be appropriate in these other contexts.

11:20 a.m.

Liberal

Greg Fergus Liberal Hull—Aylmer, QC

The reason I say this is that, as you pointed out, there is a possibility for people to do indirectly what they can't do directly while we work out the legal framework that could govern the use of such technologies.

This technology really came to my attention three years ago now. One of the public broadcasters here in Canada, in Quebec, in French Canada, actually decided to use AI facial recognition technology to try to identify members of Quebec's National Assembly. As you say, and as all the studies have pointed out, if you are a person of colour, if you are non-white, the error rate increases dramatically.

It would seem to me that it would behoove all of us to be careful in terms of trying to establish some limits as to how this information is collected and used in any context, not just in the criminal justice system.

Would you agree with that?

11:20 a.m.

Research Fellow, Citizen Lab, Munk School of Global Affairs and Public Policy, University of Toronto, As an Individual

Cynthia Khoo

Yes, I think I would agree with that. In particular, we've seen so many examples of emerging technologies, both facial recognition and other types of algorithmic policing technologies, where we as a society, and human rights, would really have benefited from a precautionary approach and not the infamous “move fast and break things” approach.

I do agree, though, with Ms. Piovesan, who talked about taking a more granular approach, a scalpel rather than an axe, but you're right. We do need time to figure out specifically what the contents of that approach are.

If being cautious and preventing harm means putting a stop to the use of this technology while we figure it out, it would be fair to say that's a sound approach.