Evidence of meeting #11 of the Standing Committee on Access to Information, Privacy and Ethics, 44th Parliament, 1st Session.

Witnesses

Cynthia Khoo, Research Fellow, Citizen Lab, Munk School of Global Affairs and Public Policy, University of Toronto, As an Individual
Carole Piovesan, Managing Partner, INQ Law
Ana Brandusescu, Artificial Intelligence Governance Expert, As an Individual
Kristen Thomasen, Professor, Peter A. Allard School of Law, University of British Columbia, As an Individual
Petra Molnar, Lawyer, Refugee Law Lab, York University

12:25 p.m.

Ryan Williams (Conservative, Bay of Quinte, ON)

Thank you.

Ms. Thomasen, in a March 2020 CBC article, you talked about the Windsor Police's use of Clearview AI's facial recognition tool. You said, “How do we know that, if the victim is even identified, that their information is going to be protected?” I think that is a key message as facial recognition becomes more and more widespread.

My question to you is essentially to help answer the question you posed in the CBC story. How do we make sure Canadians know that their information is protected?

12:25 p.m.

Prof. Kristen Thomasen

To give some context, that comment engaged a narrative that often arises with respect to police use of facial surveillance: that we use it to protect people, in this instance children, from harm. We also need to worry about the broader impact on privacy as a social good.

What I was getting at there was what—

12:25 p.m.

The Chair, Pat Kelly (Conservative)

I'm afraid you really weren't left with enough time to answer that question. I'm going to have to cut it off and go to Ms. Saks.

12:25 p.m.

Prof. Kristen Thomasen

I'm happy to submit a further explanation.

12:25 p.m.

The Chair, Pat Kelly (Conservative)

Please do submit a written explanation if you have one available or would like to provide one.

Go ahead, Ms. Saks, for four minutes, please.

12:25 p.m.

Ya'ara Saks (Liberal, York Centre, ON)

Thank you, Mr. Chair.

Mr. Williams might be pleased that I'm going to step away a bit from his question.

I think we're all in agreement that more needs to be done to understand the use of this technology and to make sure there is robust consultation with all of those whose privacy is affected by how it's used. A previous witness described it as needing to go through this with a scalpel and not with an axe. I appreciate the calls for a moratorium, which would give us the chance to use that scalpel. It's an important metaphor.

Ms. Thomasen, in talking about the victims, I've heard a lot about the negative impacts, and I don't disagree with them. I am someone who has been engaged in fighting human trafficking for many, many years. I understand the impacts of migration and borders, and human trafficking's impact on both women and children, many of them from racialized minorities.

Is there not some wisdom in using the scalpel with this technology, so that we can effectively protect those who are victims of human trafficking, or children who are subject to assault or child pornography? Or are there other tools that we need to find to protect them?

Is that not a consideration in this discussion?

12:30 p.m.

Prof. Kristen Thomasen

Yes. Privacy is a social good that benefits everyone, including the women and children who are often invoked in the narrative that one of the beneficial uses of facial recognition is to protect marginalized or victimized groups. It's very important to acknowledge those potential beneficial uses of facial recognition while nuancing that narrative considerably. In particular, we need to recognize the way in which the potential erosion of privacy as a social good will also harm women and children.

One beneficial use case of facial surveillance, as far as I understand it, is a Canadian example called Project Arachnid. It might be helpful for the committee to speak to someone involved in that project. It's a very narrowly designed use case of facial surveillance, or facial recognition technology more specifically. I'd be happy to speak more about definitions in answer to another question.

The specific goals and purposes for the creation of that in-house facial recognition system were set very narrowly. That is quite distinct from the broader argument that facial recognition should not be banned or limited because there can be, generally speaking, potentially positive use cases. It's far more important to weigh the social good of privacy in those kinds of discussions.

I feel like I'm limited on time. I'd be more than happy to talk about it more.

12:30 p.m.

Ya'ara Saks (Liberal, York Centre, ON)

I would like to try to get in one more question, if I may.

12:30 p.m.

The Chair, Pat Kelly (Conservative)

You can have just one.

12:30 p.m.

Ya'ara Saks (Liberal, York Centre, ON)

In December 2021, you made a submission to the Toronto Police Service as part of its consultations on the use of artificial intelligence technology. You made quite a number of recommendations in that submission. If you would like to, you may highlight one key recommendation here, but I would encourage you to then provide us with your submissions in writing so that we may review them.

12:30 p.m.

Prof. Kristen Thomasen

I'll happily do that.

That was a co-authored submission.

One key recommendation I would like to highlight right now is that this technology is not inevitable. The fact that it exists does not mean that it should exist or that we should be using it. It does not mean that we shouldn't limit it.

Pointing to some beneficial use cases should not be sufficient to limit our thinking around the potential harms that can arise from more widespread use of the technology. In particular, we should be thinking more about the interrelationships among police services, corporations and individuals that might be working together to collect information for carceral end goals.

12:30 p.m.

The Chair, Pat Kelly (Conservative)

Thank you.

Mr. Villemure, you have four minutes.

12:30 p.m.

René Villemure (Bloc, Trois-Rivières, QC)

I would like to ask a brief question of each of the three witnesses, in their order of appearance.

Ms. Brandusescu, does facial recognition technology mean the end of freedom?

12:30 p.m.

Ana Brandusescu (Artificial Intelligence Governance Expert, As an Individual)

I would say no, because we can ban facial recognition. The end of freedom is a very complex and dire question and statement. I would argue that, again, this isn't just about mass surveillance; it's about how our governments interact with industry and how they procure different software with no idea of what they're purchasing—

12:30 p.m.

René Villemure (Bloc, Trois-Rivières, QC)

I'm sorry for interrupting you, but my time is limited. We'll come back to it.

Ms. Thomasen, I ask you the same question.

12:30 p.m.

Prof. Kristen Thomasen

I agree with previous witnesses: This is a complex question that is difficult to answer straightforwardly.

I would also encourage the committee to consider more than just facial recognition. Many different forms of biometric recognition feed into the conversation we're having today.

12:30 p.m.

René Villemure (Bloc, Trois-Rivières, QC)

Thank you very much.

What do you think, Dr. Molnar?

12:30 p.m.

Dr. Petra Molnar (Lawyer, Refugee Law Lab, York University)

I would just encourage a contextual specificity with regard to this question, particularly when we're talking about freedoms: for whom?

In immigration, of course, we're talking about an opaque and discretionary space that's already very high risk. In this instance, yes, it can definitely be very limiting.

12:30 p.m.

René Villemure (Bloc, Trois-Rivières, QC)

Thank you.

Ms. Brandusescu, is facial recognition technology transforming the public sphere, as Habermas conceives it, into a space of surveillance?

12:30 p.m.

Ana Brandusescu (Artificial Intelligence Governance Expert, As an Individual)

Yes, I would argue that there is mass surveillance, but specifically also a discriminatory, racist and sexist surveillance, because, as we know, this tech is discriminatory at a very computational level. The more we accept it into society, the more it will just be something we get used to. I don't want that convenience.

A point about convenience was made earlier. Sometimes I say that convenience will be the end of us, as when we use this technology to unlock our phones. The more it becomes part of our daily lives, the more we think it's okay to have it, but actually it isn't, because it can really harm certain individuals and groups. Sometimes it's okay to just not have that technology at all.

It's a bigger question, and it's a question about digital literacy. We need to have these discussions, and we need the critical digital literacy to ask the right questions.

12:35 p.m.

René Villemure (Bloc, Trois-Rivières, QC)

Thank you very much.

Beyond the identified biases, such as those related to race or age, the citizen who is not targeted by those biases nevertheless enters a world of surveillance, correct?

12:35 p.m.

Ana Brandusescu (Artificial Intelligence Governance Expert, As an Individual)

We can go ahead and talk about data analytics firms such as Palantir and others that aren't even facial recognition technologies (FRTs). The world of surveillance goes way beyond FRT, and that raises a bigger question about our country's military-industrial complex and where these technologies even come from.

We need to zoom out again and look at the way technology has taken over. We need to reflect on what tech solutionism means, on why we put so much money and funding into tech innovation specifically, and on why we see innovation as purely technological instead of funding the groups who work very hard on social issues to understand this technology and to create public awareness and education about it.

I have an optimistic view of the future, even though I am very critical of this technology. We have to imagine how we can live without some of this technology, and we'll be fine.

12:35 p.m.

The Chair, Pat Kelly (Conservative)

Thank you. I'm going to have to move now to Mr. Green for four minutes, please.

12:35 p.m.

Matthew Green (NDP, Hamilton Centre, ON)

Thank you.

I will begin with my question to Ms. Brandusescu. In your report “Artificial Intelligence Policy and Funding in Canada: Public Investments, Private Interests”, one of your main findings is that the current government policy allows companies with links to human rights abuses to pre-qualify as government AI suppliers.

In your previous answer, you talked about the military-industrial complex. We've heard stories of companies that actually tout their technologies as being battle-tested.

Are you aware of any companies that have been pre-qualified as suppliers, not just for facial recognition but across the whole AI spectrum, and that have previously been implicated in human rights abuses?

12:35 p.m.

Ana Brandusescu (Artificial Intelligence Governance Expert, As an Individual)

Thank you for the excellent question.

Yes, I am aware. One such supplier is Palantir Technologies Inc., a data analytics company that worked with the U.S. government to plan the mass arrest of nearly 700 people and the separation of children from their parents, causing irreparable harm. You can see Amnesty International USA's 2020 report on that. Yet, as I mentioned in my opening statement, Palantir has committed to Canada's algorithmic impact assessment, which is seen as an ethical measure that supports responsible AI, and it is on that pre-qualified supplier list. To commit to an AIA that is supposed to be ethical and then to commit these human rights abuses with another government is very paradoxical and contradictory.

I ask our government, especially the Treasury Board, which manages that list, to reconsider, as I mentioned, and to get them off the list—and not just them, but also other suppliers whose potential human rights abuses I haven't yet looked into deeply.