Evidence of meeting #27 for Access to Information, Privacy and Ethics in the 44th Parliament, 1st Session. (The original version is on Parliament’s site, as are the minutes.) The winning word was use.

A recording is available from Parliament.

On the agenda

MPs speaking

Also speaking

Esha Bhandari  Deputy Director, American Civil Liberties Union
Tamir Israel  Staff Lawyer, Samuelson-Glushko Canadian Internet Policy and Public Interest Clinic

June 16th, 2022 / 3:45 p.m.

Conservative

The Chair Conservative Pat Kelly

I call the meeting to order.

Welcome to meeting number 27 of the House of Commons Standing Committee on Access to Information, Privacy and Ethics. Pursuant to Standing Order 108(3)(h) and the motion adopted by the committee on Monday, December 13, 2021, the committee is resuming its study of the use and impact of facial recognition technology.

I would like to now welcome our witnesses.

From the American Civil Liberties Union, we have Esha Bhandari. From the Samuelson-Glushko Canadian Internet Policy and Public Interest Clinic, we have Tamir Israel, staff lawyer.

I apologize for the late start. It was just a function once again—and not uncommon at this time of the year—of the timing of votes in the House of Commons. The meeting was scheduled for one hour, from 3:30 to 4:30. We will still go ahead for the full hour, starting now.

With that, I will ask Ms. Bhandari to begin.

You have the floor for up to five minutes.

3:45 p.m.

Esha Bhandari Deputy Director, American Civil Liberties Union

Thank you very much, Mr. Chair.

Thank you to the committee for the invitation.

My name is Esha Bhandari, and I am a deputy director of the American Civil Liberties Union's speech privacy and technology project based in New York. I am originally from Saint John, New Brunswick.

I'd like to speak to the committee about the dangers of biometric identifiers with a specific focus on facial recognition.

Because biometric identifiers are personally identifying and generally immutable, biometric technologies—including face recognition—pose severe threats to civil rights and civil liberties by enabling privacy violations, including the loss of anonymity in contexts where people have traditionally expected it, enabling persistent tracking of movement and activity, and identity theft.

Additionally, flaws in the use or operation of biometric technologies can lead to significant civil rights violations, including false arrests and denial of access to benefits, goods and services, as well as employment discrimination. All of these problems have been shown to disproportionately affect racialized communities.

What exactly are we talking about with biometrics?

Prior to the digital age, collection of limited biometrics like fingerprints was laborious and slow. Now we have the potential for near instantaneous collection of biometrics, including face prints. We have machine learning capabilities and digital age network technologies. All of these technological advances combined make the threat of biometric collection even greater than it was in the past.

Face recognition is, of course, an example of this, but I want to highlight that voice recognition, iris or retina scans, DNA collection, gait and keystroke recognition are also examples of biometric technology that have effects on civil liberties.

Facial recognition allows for instant identification at a distance without the knowledge or consent of the person being identified and tracked. Even in the past, identifiers that needed to be captured with the knowledge of the person, such as fingerprints, can now be collected without the knowledge of the person, which includes DNA that we shed as we go about our daily lives. Iris scans can be done remotely, and facial recognition and face prints can be collected remotely without the knowledge or consent of the person whose biometrics are being collected.

Facial recognition is particularly prone to the flaws of biometrics, which include design flaws, hardware limitations and other problems. Multiple studies have shown that face recognition algorithms have markedly higher misidentification rates for people of colour, including Black people, children and older adults. There are many reasons for this. I won't get into the specifics of that, but in part it's because of the datasets that are used but also flaws in real world conditions.

I also want to highlight that often the error rates that are shown in test conditions are exacerbated in real world conditions, which are often worse than test conditions—for example, when a facial recognition tool is being used on poor quality surveillance footage.

There are also other risks with face recognition technology when it is combined with other technology to infer emotion, cognitive state or intent. We see private companies increasingly promoting products that purport to detect emotion or affect, such as aggression detectors, based on facial tics or other movements that this technology picks up on.

Psychologists who study emotion agree that this project is built on faulty science because there is no universal relationship between emotional states and observable facial traits. Nonetheless, these video analytics are proliferating, claiming to detect suspicious behaviour or detect lies. When deployed in certain contexts, this can cause real harm, including employment discrimination if a private company is using these tools to analyze someone's face during an interview to infer emotion or truthfulness and deny jobs based on this technology.

I have been speaking, of course, about the flaws with the technology and the error rates that it has, which, again, disproportionately fall on certain marginalized communities, but there are, of course, problems even when the facial recognition technology functions and functions accurately.

The ability for law enforcement, for example, to systematically track people and their movements over time poses a threat to freedom and civil liberties. Sensitive movements can be identified, whether people are travelling to protests, to medical facilities or other sensitive locations. In recognition of these dangers from law enforcement use, at least 23 jurisdictions in the United States, from Boston to Minneapolis, and San Francisco and to Jackson, Mississippi, have enacted legislation halting law enforcement or government use of face recognition technology.

There's also, of course, the private sector use of this technology, which I just want to highlight. Again, you see trends now where, for example, landlords may be using facial recognition technology in buildings, which enables them to track their tenants' movements in and out of the building and also their guests—romantic partners and others—who come in and out of the building. We also see this use in private shopping malls and in other contexts as well—

3:50 p.m.

Conservative

The Chair Conservative Pat Kelly

Ms. Bhandari, I'm sorry to have to interrupt you, but I will ask you to wrap up in the next couple of seconds so that we can carry on. You are a bit over time.

3:50 p.m.

Deputy Director, American Civil Liberties Union

Esha Bhandari

Yes, absolutely.

I just want to conclude with a couple of policy recommendations.

One, government use of facial recognition technology should be prohibited—law enforcement use—but at the very least, there has to be regulation to constrain its use and protect individuals from the harm that can result.

Also, that same regulation should extend to private entities that use facial recognition technology.

Thank you very much.

3:50 p.m.

Conservative

The Chair Conservative Pat Kelly

Thank you.

With that, we go to Mr. Israel for up to five minutes.

3:50 p.m.

Tamir Israel Staff Lawyer, Samuelson-Glushko Canadian Internet Policy and Public Interest Clinic

Good afternoon, Mr. Chair and members of the committee.

My name is Tamir Israel and I'm a lawyer with the Samuelson-Glushko Canadian Internet Policy and Public Interest Clinic at the University of Ottawa, which sits on the traditional unceded territory of the Algonquin Anishinabe people.

I want to thank you for inviting me to participate in this important study into facial recognition systems.

As the committee has heard, facial recognition technology is versatile and poses an insidious threat to privacy and anonymity, while undermining substantive equality. It demands a societal response that's different and more proactive than that to other forms of surveillance technology.

Face recognition is currently distinguished by its ability to operate surreptitiously and at a distance. Preauthenticated image databases can also be compiled without participation by individuals, and this has made facial recognition the biometric of choice for achieving a range of tasks. In its current state of development, the technology is accurate enough to inspire confidence in its users but sufficiently error prone that mistakes will continue to occur with potentially devastating consequences.

We have long recognized, for example, that photo lineups can lead police to fixate erroneously on particular suspects. Automation bias compounds this problem exponentially. When officers using an application such as Clearview AI or searching a mug shot database are presented with an algorithmically generated gallery of 25 potential suspects matching a grainy image taken from a CCTV camera, the tendency is to defer to the technology and to presume the right person has been found. Simply including human supervision will, therefore, never be sufficient to fully mitigate the harms of this technology.

Of course, racial bias remains a significant problem for facial recognition systems as well. Even for top-rated algorithms, false matches can be 20 times higher for Black women, 50 times higher for native American men, and 120 times higher for native American women than they are for white men.

This persistent racial bias can render even mundane uses of facial recognition deeply problematic. For example, a United Kingdom government website relies on face detection to vet passport image quality, providing an efficient mechanism for online passport renewals. However, the face detection algorithm often fails for people of colour and this circumstance alienates individuals who are already marginalized by locking them out of conveniences available to others.

As my friend Ms. Bhandari mentioned, even when facial recognition is cured of its biases and errors, the technology remains deeply problematic. Facial recognition systems use deeply sensitive biometric information and provide a powerful identification capability that we know from other investigative tools such as street checks will be used disproportionately against indigenous, Black and other marginalized communities.

So far, facial recognition systems can be and have been used by Canadian police on an arrested suspect's mobile device, on a device's photo album, on CCTV footage in the general vicinity of crimes and on surveillance photos taken by police in public spaces.

At our borders, facial recognition is at the heart of an effort to build sophisticated digital identities. “Your face will be your passport” is becoming an all-too-common refrain. Technology also provides a means of linking these sophisticated identities and other digital profiles to individuals, driving an unprecedented level of automation.

At all stages, transparency is an issue, as government agencies in particular are able to adopt and repurpose facial recognition systems surreptitiously, relying on dubious lawful authorities and without any advance public licence.

We join many of our colleagues in calling for a moratorium on public safety and national security related uses of facial recognition and on new uses at our borders. Absent a moratorium, we would recommend amending the Criminal Code to limit law enforcement use to investigations of serious crimes and in the absence of reasonable grounds to believe. A permanent ban on the use of automated, live biometric recognition by police in public spaces would also be beneficial, and we would also recommend exploring a broader prohibition on the adoption of new facial recognition capabilities by federal bodies absent some sort of explicit legislative or regulatory approval.

Substantial reform of our two core federal privacy laws is also required. Bill C-27 was tabled this morning and it would enact the artificial intelligence and data act, as well as reform our private sector law, our federal law PIPEDA. Those reforms are pending and will be discussed, but beyond the amendments in Bill C-27, both PIPEDA and the Privacy Act need to be amended so that biometric information is explicitly encoded as sensitive, requires greater protection in all contexts and, under PIPEDA, requires express consent in all contexts.

Both PIPEDA and the Privacy Act should also be amended to legally require companies and government agencies to file impact assessments with the Privacy Commissioner prior to adopting intrusive technologies. Finally, the commissioner should be empowered to interrogate intrusive technologies through a public regulatory process and to put in place usage limitations or even moratoria where necessary.

Those are my opening remarks. I thank the committee for its time. I look forward to your questions.

3:55 p.m.

Conservative

The Chair Conservative Pat Kelly

Thank you.

For the first round of up to six minutes, we have Mr. Kurek.

3:55 p.m.

Conservative

Damien Kurek Conservative Battle River—Crowfoot, AB

Thank you very much. I'd like to thank our witnesses for joining us here today. As I often start, please feel free to follow up with specific recommendations. Certainly, that is valuable when it comes time for this committee to put together its report.

I appreciate the content of your opening statements. One big challenge that I think lawmakers and public policy-makers have in terms of addressing this issue is trying to find that right balance. There's law enforcement that says it needs every tool available to help fight crime, to help protect victims, etc. On the other side we have the valid argument that we need to ensure that vulnerable people, groups, are protected and that the rights of Canadians, in the case of Canada, are respected.

I'd ask this question to both of you. Do you have any recommendations for this committee as to how to find that right balance? I'll start with Mr. Israel.

3:55 p.m.

Staff Lawyer, Samuelson-Glushko Canadian Internet Policy and Public Interest Clinic

Tamir Israel

When you're talking about an intrusive technology like this, the onus is on the government to make its case for the use of the technology. One big problem—and I know the committee has heard this—is that currently adoption happens at the ground level, and any legislative response comes in response to that.

Some of our recommendations would flip that around by either creating a legislative obligation to go to the legislature and make the case for the use of some of these techniques in advance, or alternatively, by empowering the regulator, the independent Privacy Commissioner of Canada, to play a proactive role in assessing and approving or disapproving elements of these technologies. I think that might be one meta consideration for how to address some of these challenges more broadly.

3:55 p.m.

Conservative

Damien Kurek Conservative Battle River—Crowfoot, AB

Thank you.

Ms. Bhandari.

3:55 p.m.

Deputy Director, American Civil Liberties Union

Esha Bhandari

Just to follow up on Mr. Israel, I would say I agree that, when we're talking about a new technology, particularly one with as many flaws as facial recognition, the onus has to be on law enforcement. In this case we know the flaws. Multiple studies have shown the disproportionate error rates and the consequential impacts on people's lives. There have been a few high-profile instances in the United States of Black men being falsely arrested by facial recognition match and the devastating consequences for people because of that.

At the very least, do extensive study to show that those flaws and error rates have been eliminated and that there aren't disproportionate impacts on people based on demographics. That's just not there now. In the absence of that, essentially the widespread use of facial recognition technology in these specific contexts is running an experiment on the population at large.

We're not there yet, but certainly, looking forward I'd also agree that the concerns about persistent tracking and identity theft, all of those exist. Any balance that this committee strikes in its recommendations has to take into account the harms that will result even if the technology functions as it's claimed to function.

4 p.m.

Conservative

Damien Kurek Conservative Battle River—Crowfoot, AB

Thank you for that.

As a little bit of a follow-up on that, Ms. Bhandari, this committee had studied the usage of mobility data in terms of Canada's public health response to COVID-19. It was interesting. During your opening statement I heard some consistency in the remarks you shared to some of the concerns that this committee heard over the course of that previous study. I'm wondering if you have any observations, if you had a chance at all to see the work that this committee did. Was there anything you'd like to add to that?

4 p.m.

Deputy Director, American Civil Liberties Union

Esha Bhandari

There's definitely, I think, a tie-in to those concerns, because location tracking is one of the harms from even properly functioning facial recognition. It's a society in which every move we make is in a database to be used. I think the concerns about mobile tracking and contact tracing, when they come without those safeguards, are the same for facial recognition.

Again, we don't currently live in a society where we expect, regardless of whether we're out in public or not, our every move would be accessible to government, to law enforcement and to private companies, potentially, that want to market to us or exploit us in some way. The technology is there to track us everywhere we go all day long.

Location tracking concerns that this committee considered previously are also applicable here.

4 p.m.

Conservative

Damien Kurek Conservative Battle River—Crowfoot, AB

I appreciate that.

I have one final question in my last minute.

We often hear the example of Clearview AI and how that is no longer used in Canada—no longer contracted by law enforcement. However, there certainly are a whole host of other providers, and some further applications that may not have as direct a purpose as Clearview AI.

Could you perhaps share with the committee your experience with other providers or other example that this committee should maybe be aware of?

There are about 30 seconds left.

4 p.m.

Deputy Director, American Civil Liberties Union

Esha Bhandari

There are certainly other providers.

The American Civil Liberties Union has settled with Clearview in the United States to limit them from selling their database to private entities in the United States. However, that is one company among many companies.

Many of the companies are not necessarily consumer facing. They won't be the big tech names that people are aware of. Again, transparency is so key. The public may not know of these companies in the way that they know of big social media, for example.

4 p.m.

Conservative

Damien Kurek Conservative Battle River—Crowfoot, AB

Thank you.

4 p.m.

Conservative

The Chair Conservative Pat Kelly

Next we'll go to Ms. Saks, for up to six minutes.

4 p.m.

Liberal

Ya'ara Saks Liberal York Centre, ON

Thank you, Mr. Chair.

Thank you to the witnesses here today. I'll start with you, Mr. Israel, if I may.

In September 2020, the Samuelson-Glushko Canadian Internet Policy and Public Interest Clinic released a report on facial recognition, which you wrote. It focuses primarily on its use at borders.

In a very specific fashion, because of time, what were the main findings of the report? Could you give the top three? Then I have a follow-up question.

4 p.m.

Staff Lawyer, Samuelson-Glushko Canadian Internet Policy and Public Interest Clinic

Tamir Israel

I would say the main findings were that facial recognition is being adopted at the border without due consideration for the harms it would cause, without much external oversight and often without regard to existing policies, such as the Treasury Board's policy on artificial intelligence, where you are supposed to bring in external guidance when adopting intrusive technologies like that.

Then, once it is adopted, it often gets repurposed very quickly for reasons beyond the narrow reasons of the context in which it was developed.

The last one is that it often provides a link between digital and physical presences in ways that allow for automation in the application of many other automated assessment tools, which is problematic in and of itself.

4 p.m.

Liberal

Ya'ara Saks Liberal York Centre, ON

Thank you.

You said in your opening remarks that human interaction with these platforms is insufficient. We've heard from other witnesses that human interaction is actually an imperative part of being able to utilize this technology.

Could you clarify that a little bit, please?

4 p.m.

Staff Lawyer, Samuelson-Glushko Canadian Internet Policy and Public Interest Clinic

Tamir Israel

I agree that it's an integral part, but it is important to also recognize that it is not enough.

I think this committee heard, as well, from a witness on how humans interact with facial images that they are presented with, and how their own biases creep in. The example provided was of a photo lineup that is often used by police, and how it replicates the type of image output that you often get from a facial recognition system, where it gives you maybe the top 10 or 15 matches. We know that, as an investigative tool, that has led to a lot of problems in the past for police.

That is the type of human supervision that we're talking about. It's worse in the context of facial recognition systems, because the tendency is to trust in the automated results of the system and that they produced an accurate match. You're questioning it even less than in the context where you get just a general photo lineup and try to figure out who a person is. What it ends up doing is embedding cognitive and other biases.

4:05 p.m.

Liberal

Ya'ara Saks Liberal York Centre, ON

I understand. There is an element of human error in everything in life.

However, my next question on that is this. If we move to a legislative guardrail on this, can you actually legislate for human error? We can legislate for human intervention, but how do we legislate for human error, then? The train has left the station on these technologies, and we're trying to look at what the guardrails would be. Can we really legislate human error?

4:05 p.m.

Staff Lawyer, Samuelson-Glushko Canadian Internet Policy and Public Interest Clinic

Tamir Israel

I think legislating a human in the decision-making loop is an important thing to include. I think it's also important to recognize that it doesn't solve all of the problems. It's often presented as the solution, but you're often going to have a lot of bias problems, even with a human involved in the investigation. Beyond that, you still need more.

We still recommend a moratoria, given the wide potential for harm with this particular technology, until we get to a better and more concrete regulatory and technologically developed state.

4:05 p.m.

Liberal

Ya'ara Saks Liberal York Centre, ON

Thank you so much.

I'm going to move to Ms. Bhandari, if I may.

I'd like to touch on something that, unfortunately, we didn't get to enough in this study, which is location tracking technologies used in commercial and retail spaces. For example, Cadillac Fairview is a big mall owner here in Canada. They tend to have cameras and other technologies in their spaces, from what I understand.

We talk a lot about the legislation in terms of the relationship of private companies with law enforcement. I'll start with you, Ms. Bhandari, and perhaps also Mr. Israel.

What are your thoughts on how we legislate that private or commercial relationship with these types of technologies going forward, in an ideal world, if there was a moratorium and we had time to think about this?

4:05 p.m.

Conservative

The Chair Conservative Pat Kelly

Ms. Bhandari, before you respond to Ms. Saks' question, I would ask you to move the boom of your microphone up a little bit. We had a little bit of trouble with your audio.

Let's go ahead and I'll stop you again if we need another adjustment.