Evidence of meeting #27 for Access to Information, Privacy and Ethics in the 44th Parliament, 1st Session.

Witnesses

Esha Bhandari, Deputy Director, American Civil Liberties Union
Tamir Israel, Staff Lawyer, Samuelson-Glushko Canadian Internet Policy and Public Interest Clinic

4:05 p.m.

Deputy Director, American Civil Liberties Union

Esha Bhandari

I hope this is better.

To address the question on private sector use, the harms are real. I'll highlight a few examples.

In Michigan, for example, a skating rink was using a facial recognition tool on customers who were coming in and out. A 14-year-old Black girl was ejected from the skating rink after the face recognition system incorrectly matched her to a photo of someone who was suspected of previously disrupting the rink's business.

We've seen private businesses use this type of technology, whether in concert venues, stadiums or other sports venues, to identify people on a blacklist—customers they don't want to allow back in for whatever reason. Again, the risk of error, the dignitary harms involved and the denial of service are very real. There's also the fact that this tracking information is now potentially in the hands of private companies that may have widely varying security practices.

We've also seen examples of security breaches, where large facial recognition databases held by governments or private companies have been exposed publicly. These face prints are immutable—it's not like a credit card number, which you can change—so once your biometrics are out there and potentially used for identity purposes, that risk is permanent.

Similarly, we've seen some companies—for example, Walgreens in the United States—deploying face recognition technology that can estimate a customer's age and gender and show them tailored ads or other products. This is also an invasive practice, one that could lead to shoppers and consumers being steered to discounts or products based on gender stereotypes, which can further segregate our society.

Even more consequentially, it's used by employers—

4:10 p.m.

Conservative

The Chair Conservative Pat Kelly

I'm sorry to have to do it again, but we are a fair bit over time on this round.

I'm going to go next to Monsieur Villemure for six minutes.

4:10 p.m.

Bloc

René Villemure Bloc Trois-Rivières, QC

Thank you very much, Mr. Chair.

Good afternoon, Ms. Bhandari.

This committee has heard all sorts of horror—and error—stories. We heard about prejudice directed primarily at racialized populations, for example. Your organization is an advocacy organization, so it takes an activist stance.

I will, for a moment, play devil's advocate.

Is there any advantage to using facial recognition?

4:10 p.m.

Deputy Director, American Civil Liberties Union

Esha Bhandari

Thank you for your question.

I agree with what Mr. Israel said earlier, which was that the onus and the burden should be on the entities seeking to use facial recognition. Certainly I think private companies would say there's an advantage. They are making money off of it. Collecting data, as we know, is very profitable. I don't think that's an advantage this committee should consider when weighed against the invasion of rights.

When we're talking about law enforcement and government uses, it's true that new technology will always be claimed to solve age-old problems, but I don't see any evidence that the use of facial recognition technology, and any perceived benefits it might bring to law enforcement, outweighs the kind of transformation it would work on society.

4:10 p.m.

Bloc

René Villemure Bloc Trois-Rivières, QC

In your view, the infringement of rights is so great that it is not worth discussing the advantages, which are often commercial and which, by the way, are not the purpose of our study.

Is that right?

4:10 p.m.

Bloc

René Villemure Bloc Trois-Rivières, QC

You mentioned emotion detection earlier. In the past, I've read some studies about this, some of them a bit frivolous, that claimed the ability to recognize people's sexual or political preferences through facial recognition.

Is any of this possible, or are these claims, on the contrary, completely fanciful?

4:10 p.m.

Deputy Director, American Civil Liberties Union

Esha Bhandari

There are a lot of companies marketing the ability to do this right now. The science is not there to support it.

Currently, the experts say that there is no reliable link between our physical or biological manifestations and those mental states or propensities. The real fear, of course, is that society will come to accept these links: that we'll go back to an age when physiognomy or physical characteristics were thought to reveal character, and that this new technology will be seen as providing new and objective truth.

The fear is not so much that the technology will in fact reveal mental states; the concern is that it is being marketed as though it does.

4:10 p.m.

Bloc

René Villemure Bloc Trois-Rivières, QC

I'm coming back to your last comment.

Should the public get used to electronic surveillance? Isn't it a waste of time to try to legislate the use of billions of images that are already circulating?

4:10 p.m.

Deputy Director, American Civil Liberties Union

Esha Bhandari

It is absolutely not a lost cause, and there is plenty of time for the government to take action. Regulating the flow of information going forward is critical to putting a halt to some of the harms we've seen.

It's not inevitable that we continue to be awash in biometric information or that we continue to be tracked. What has come before has come before, but moving forward we can put guardrails in place. We can have strong laws and regulations. Just because an industry has been unregulated in the past doesn't mean it's too late to regulate it now.

4:10 p.m.

Bloc

René Villemure Bloc Trois-Rivières, QC

You talked about user-provided content. Shouldn't we be raising awareness about privacy and the risks related to how data is used?

4:10 p.m.

Deputy Director, American Civil Liberties Union

Esha Bhandari

That is certainly one element of it. I also want to highlight, though, that oftentimes data is collected from people without their consent. So many people who use the Internet to shop or to search for information don't know how they're being tracked.

As I mentioned, face prints are often captured without our consent. Nobody can meaningfully say “no” when surveillance cameras are gathering them. This is not a problem of people willingly giving up their biometrics. Most of the time, people don't know, which is why a knowledge and consent requirement before biometrics are captured is critical.

4:10 p.m.

Bloc

René Villemure Bloc Trois-Rivières, QC

Could you briefly tell us about the lawsuit you launched against Clearview AI in the State of Illinois?

4:15 p.m.

Deputy Director, American Civil Liberties Union

Esha Bhandari

Yes. We filed a lawsuit against Clearview under an Illinois state law known as BIPA, the Biometric Information Privacy Act, and we settled that lawsuit.

Under the terms of the settlement, there are two key provisions: a permanent ban on Clearview providing any private entity in the country with access to its database of millions and millions of face prints, with a few exceptions, and a five-year ban on law enforcement access within Illinois.

We were only able to bring this lawsuit because Illinois has the Biometric Information Privacy Act; it is one of the few U.S. states with such a law, and that law shows the potential of regulation. It is what enabled us to sue Clearview and reach the settlement under which Clearview can no longer sell its face print technology to private entities around the country.

4:15 p.m.

Bloc

René Villemure Bloc Trois-Rivières, QC

I would ask you to send us documentation on the law in question or the lawsuit you have filed, if possible. It would be helpful to us.

4:15 p.m.

Deputy Director, American Civil Liberties Union

Esha Bhandari

I would be happy to do that.

4:15 p.m.

Bloc

René Villemure Bloc Trois-Rivières, QC

Can you tell me if Clearview AI can still sell its technology outside of the United States at this time?

4:15 p.m.

Deputy Director, American Civil Liberties Union

Esha Bhandari

That's correct. Our lawsuit doesn't address anything they do outside the United States.

4:15 p.m.

Bloc

René Villemure Bloc Trois-Rivières, QC

All right.

Thank you very much.

4:15 p.m.

Conservative

The Chair Conservative Pat Kelly

Next will be Mr. Green for up to six minutes.

4:15 p.m.

NDP

Matthew Green NDP Hamilton Centre, ON

Thank you.

I'd like to pick up on that. I recall coming back from an international flight and being herded through an American Homeland Security checkpoint. I believe it was at Pearson. It was the first time I had to contemplate iris scans. I'm wondering if the witnesses can speak—and perhaps we can start with Ms. Bhandari—about the way in which Nexus in our airports has.... Has her work in the States led to any investigations or research on Nexus's public-private biodata collection service?

4:15 p.m.

Deputy Director, American Civil Liberties Union

Esha Bhandari

One of the areas that we have been concerned about is the expansion of facial recognition and other biometric technology in airports. We haven't looked specifically at Nexus, but the same principle holds for the Global Entry system in the United States, for example.

The concern, of course, is that as people become required to provide face prints or iris scans to access essential services—going to the airport, crossing the border, entering a government building—it facilitates a checkpoint society the likes of which we haven't seen before. These are not contexts in which people can meaningfully opt out, so one clear area of regulation could be providing a meaningful opt-out: if you don't want to prove your identity via an iris scan, you're given the option to do so another way, with your passport or your Nexus card, for example.

Airport and border use has been a big concern because the environment is so coercive and people don't have the choice to walk away.

4:15 p.m.

NDP

Matthew Green NDP Hamilton Centre, ON

Go ahead, Mr. Israel.

4:15 p.m.

Staff Lawyer, Samuelson-Glushko Canadian Internet Policy and Public Interest Clinic

Tamir Israel

It's exactly as Ms. Bhandari said. It's a big problem, because programs like Nexus are opt-in, in a sense, but the pressure to get through the border—the explicit use of the border as a pain point to encourage travellers to sign up for these types of systems—is a real concern.

Canada, for example, piloted a program with the Netherlands, one developed by the World Economic Forum. It's basically a digital identity, housed on your phone, that holds much of your passport information along with additional identity verification information. The idea was to see whether it could be used in place of a passport in order to facilitate border crossings. Facial recognition was the technology built into that system. The end vision of the system—it's very explicit—is to get travellers to sign up voluntarily to avoid delays at the border, because it gives them access to faster security processing. However, it later becomes available to banks, telecommunications companies and other entities as well, for similar identity verification purposes.

4:15 p.m.

NDP

Matthew Green NDP Hamilton Centre, ON

Oh boy, Mr. Israel, I think you might have touched the third rail when you talked about the World Economic Forum, in this context.

I have an interest in your report. You talked about how some evidence suggests that Canadian border control agencies appear to be unaware of the racial biases inherent in these systems, and that what little public information is available suggests these capabilities may have been assessed for general levels of inaccuracy but not for racial bias. I'll reference the ongoing saga of the no-fly list for kids. In this country, we literally have children being flagged, identified and put on no-fly lists because they might have Muslim-sounding names. They are caught up in some kind of bureaucratic nightmare, quite frankly, when trying to travel.

Can you talk about the risks posed by that lack of awareness about racial bias, particularly in terms of having human guardrails in place to help offset some of these inconsistencies and inaccuracies?