Evidence of meeting #11 for Access to Information, Privacy and Ethics in the 44th Parliament, 1st Session. (The original version is on Parliament’s site, as are the minutes.)

A recording is available from Parliament.

Also speaking

Cynthia Khoo, Research Fellow, Citizen Lab, Munk School of Global Affairs and Public Policy, University of Toronto, As an Individual
Carole Piovesan, Managing Partner, INQ Law
Ana Brandusescu, Artificial Intelligence Governance Expert, As an Individual
Kristen Thomasen, Professor, Peter A. Allard School of Law, University of British Columbia, As an Individual
Petra Molnar, Lawyer, Refugee Law Lab, York University

11:20 a.m.

Liberal

Greg Fergus Liberal Hull—Aylmer, QC

I have only about 20 seconds left. Hopefully, I can borrow five seconds from my colleague.

Very quickly, is there anywhere in the world that has established a framework for the use of AI facial recognition?

11:25 a.m.

Research Fellow, Citizen Lab, Munk School of Global Affairs and Public Policy, University of Toronto, As an Individual

Cynthia Khoo

I know that the European Union has been doing a lot of work in this area. They would be one jurisdiction, to start.

Also, the U.S. cities that I mentioned, particularly in California and Massachusetts, have been engaging in bans, moratoria and various frameworks of regulations to different degrees of strictness. That would be a potential model to look to as well.

11:25 a.m.

Liberal

Greg Fergus Liberal Hull—Aylmer, QC

Thank you very much.

11:25 a.m.

Conservative

The Chair Conservative Pat Kelly

Thank you. That was very well timed.

With that, we will go to Mr. Villemure.

You have six minutes.

11:25 a.m.

Bloc

René Villemure Bloc Trois-Rivières, QC

Thank you, Mr. Chair.

I want to thank the witnesses for their fantastic presentations.

I'm going to ask both witnesses the same question. I'd like very brief answers because I'll move on to something else after.

Ms. Khoo, does facial recognition mean the end of freedom?

11:25 a.m.

Research Fellow, Citizen Lab, Munk School of Global Affairs and Public Policy, University of Toronto, As an Individual

Cynthia Khoo

Without more context around that statement, it might be somewhat broad to say facial recognition technology inherently means the end of freedom.

11:25 a.m.

Bloc

René Villemure Bloc Trois-Rivières, QC

Okay. Thank you very much.

Ms. Piovesan, what do you think?

11:25 a.m.

Managing Partner, INQ Law

Carole Piovesan

I would agree with that. There are opportunities to use facial recognition technology that could be very beneficial. I gave the example of health care. A wholesale acceptance or denial of facial recognition is not, I think, the way to go. There are positive benefits, but there are also some serious implications of the use of facial recognition technology, both of which have to be considered as we look to regulation.

11:25 a.m.

Bloc

René Villemure Bloc Trois-Rivières, QC

Thank you very much.

I'll come back to you, Ms. Khoo. The number of images captured so far is almost impossible to assess. Does this mean that it's already too late to do something?

11:25 a.m.

Research Fellow, Citizen Lab, Munk School of Global Affairs and Public Policy, University of Toronto, As an Individual

Cynthia Khoo

In terms of filling in some details, I imagine you might be talking about the three billion images captured by Clearview AI. In some respects, you could say it's too late, in the sense that Clearview AI is already out there: they've already set up shop, they've sold contracts to all these police agencies, and even though they're no longer in Canada, they're still working in other countries. From that perspective, maybe it's too late.

However, it's never too late to act. For example, Clearview AI was operational in Canada at one point, and now they're not, because we found out and the OPC stepped in. There was public outcry. When it comes to technological issues, it's really easy to fall into a trap of technological inevitability or assuming that technology is here to stay. That is really not always the case.

Even when we talk about other types of algorithmic technologies, the Federal Trade Commission in the United States, for example, has started ordering, as part of its remedies in certain cases, the disgorgement of algorithms: not only deleting data that was illicitly collected, but also deleting the algorithmic models that were built on top of that illicitly collected data.

11:25 a.m.

Bloc

René Villemure Bloc Trois-Rivières, QC

Thank you very much.

Ms. Piovesan, is it too late?

11:25 a.m.

Managing Partner, INQ Law

Carole Piovesan

No. I agree with Ms. Khoo entirely.

We have seen some movement, particularly out of the FTC, to demand disgorgement of the algorithm and deletion of the data. We've also increasingly seen movement to better regulate entities that use facial recognition or broader artificial intelligence technologies: to have them demonstrate conformity with technical, administrative and other requirements, to show from a vendor perspective that a system is appropriate for the market in which it will be used, and to provide an impact assessment from the user perspective. This underscores the importance of accountability in the use of artificial intelligence, including the use of facial recognition technologies.

I don't think it's too late.

11:25 a.m.

Bloc

René Villemure Bloc Trois-Rivières, QC

Thank you, Ms. Piovesan. I'll continue with you, if I may.

About two months ago, the Superior Court of Quebec handed down a decision on Clearview AI, asking that the company return the data it holds or destroy it. Clearview AI simply refused, adding that it is not in Canada and we have no authority over it.

What is done in cases like this?

11:25 a.m.

Managing Partner, INQ Law

Carole Piovesan

The extra-jurisdictional enforcement of these types of decisions is very difficult. We've seen this raised by courts before. We can draw inspiration from the EU's General Data Protection Regulation, which is starting to impose very significant fines based not on activity within the European jurisdiction, but on the use of the data of European data subjects—that is, of European residents.

Opportunities to extend jurisdiction and enforcement are very much being explored. We've seen this in Quebec, absolutely, with the passage of its new private sector privacy law reform. It was certainly a consideration in the old Bill C-11, which was to reform aspects of PIPEDA. We'll see what comes out of the new reform, when and if it comes.

11:30 a.m.

Bloc

René Villemure Bloc Trois-Rivières, QC

Thank you very much.

You talked about a holistic approach in a recent interview. Could you elaborate on that?

11:30 a.m.

Managing Partner, INQ Law

Carole Piovesan

Absolutely.

When we're looking at the regulation of artificial intelligence, we need to look at aspects of the data, as well as the use and the design of the technology, to ensure that it is properly regulated. In different jurisdictions, including the United States and the EU, we see attempts to regulate artificial intelligence—including facial recognition very specifically—that take a risk-based approach.

If we draw inspiration from the EU's draft artificial intelligence act, we see that risk is first ranked by criticality: some use cases are prohibited outright or considered very high risk, others fall into high-risk categories for regulation, and the risk level decreases from there.

The high-risk categories are specifically regulated with a more prescriptive pen, telling both vendors and users of those systems what the requirements are and what needs to be done to verify and validate the system and the data, and imposing ongoing controls to ensure that the system operates as intended.

That is a really important point. Artificial intelligence is quite sophisticated and unique in its self-learning, self-acting character, so when you are using a high-risk AI system, having those controls in place after deployment is really critical to ensuring that its ongoing use remains as intended.

11:30 a.m.

Conservative

The Chair Conservative Pat Kelly

Thank you.

11:30 a.m.

Bloc

René Villemure Bloc Trois-Rivières, QC

Thank you very much.

11:30 a.m.

Conservative

The Chair Conservative Pat Kelly

We're just a little over the time limit.

I'd now like to go to Mr. Green for six minutes.

March 21st, 2022 / 11:30 a.m.

NDP

Matthew Green NDP Hamilton Centre, ON

Thank you.

I want to begin by acknowledging that today is March 21, which marks the International Day for the Elimination of Racial Discrimination. It was some 60 years ago, in 1960 in fact, that the Sharpeville massacre took place in South Africa, when police opened fire on protesting workers.

I want to take a step back from the specificity of the tools and talk about the systems for a moment, and draw a direct line between what I believe occurred under Bill C-51 and the provincial implementation of anti-terrorism protocols that led to the analog version of facial recognition: the practice of street checks and racial profiling, otherwise known as “carding”, by local police services. I'll pick up from there, because I believe that practice of racial profiling, the analog version, has in a very sophisticated way been ruled out and then reimplemented, as has been identified here, through private sector contracts that allow companies like Clearview to do indirectly what police services were doing directly.

I also want to situate the conversation in the system, in this notion of predictive policing, as the basis of my questions, because I believe the topic of facial recognition on its own may be too broad to get any kind of real coverage here.

My questions will be to Ms. Khoo, who laid out in an extensive report some of the bases for recommendations moving forward. I would like Ms. Khoo to comment on the evolution of predictive policing, its inherent racial bias and this notion of creating de facto panoptic prisons within our communities that are often over-surveilled, over-policed and underserviced.

Ms. Khoo, would you care to comment on that, and perhaps draw any lines you may have come across between the practices of street checks and carding used to populate databases like CPIC, which would obviously be replaced by more sophisticated tools such as AI and facial recognition?

11:35 a.m.

Research Fellow, Citizen Lab, Munk School of Global Affairs and Public Policy, University of Toronto, As an Individual

Cynthia Khoo

Thank you very much for that question. I'm just trying to compile all my thoughts, because there are many issues that could fall under the umbrella you set out.

The first point I would make is that you're absolutely right in tracing that line. That's something we heard from a lot of the racial justice activists we talked to in the research for our report. For them, this is just 21st century state violence. It used to be done with pen and paper, and now it's done with computers and algorithms.

We're trying to move away from the term “predictive policing”, because by this point it's more of a marketing term that suggests a lot more certainty than the technology can really promise, though we still use it because it's been popularized and it's what people know. One question that highlights the racial justice history behind it is whether this would still be a problem if the technology worked perfectly. Our answer would be to look at what it's being used for: break and enters, so-called street crime and property crime. You will only ever catch a particular type of person if you're only ever looking at a particular type of crime.

There's a great satirical project in New York that makes a very compelling point. They published something they called the “white collar” crime heat map, which is essentially a crime heat map that focuses only on the financial district of downtown Manhattan. Why are there no venture capitalists rushing to fund a start-up to create that type of predictive policing? It's because, even if it worked perfectly, it would still only enure to the benefit and detriment of particular social groups that fall along historical lines of systemic oppression.

The second point is that I'm really happy you brought up the “zooming out” contextualization of these technologies, because I believe that in the next panel you will be talking to Professor Kristen Thomasen, who is a colleague of mine. I would highly encourage you to pay attention to her comments, because she focuses primarily on situating these technologies in the broader context of the socio-technical systems they are part of, and on how you can't look at them divorced from the history they sit within. In Brazil, for example, a rising strand within the algorithmic accountability field has looked at the idea of critical algorithmic accountability, or critical AI: what it would look like to decolonize artificial intelligence studies, for example, or to centre historically marginalized groups even among the data scientists and the people who are working on these issues themselves.

I think I had one or two other thoughts, but maybe I'll stop there for now.

11:35 a.m.

NDP

Matthew Green NDP Hamilton Centre, ON

With my remaining minute, I recall, as a city councillor, taking on the process of street checks and racial profiling. Through FOIs, I came across an internal memo from the Ontario Ministry of the Attorney General which, under the anti-terrorism protocol, stated that street checks provided a unique opportunity for the mass collection of data.

I reference our own local Hamilton Police Service's use of Clearview. I reference the many wrongful identification scenarios that occurred and the lawsuits that followed. I reference their constant refrain on this topic of predictive policing.

The first time I heard that was at a business planning session with the Hamilton Police. All I could think about was Minority Report and how terrifying that was as a sci-fi social commentary some 20 years ago. Here we are today.

Thank you.

11:35 a.m.

Conservative

The Chair Conservative Pat Kelly

We'll go to the next round of five minutes with Mr. Kurek.

Go ahead.

11:35 a.m.

Conservative

Damien Kurek Conservative Battle River—Crowfoot, AB

Thank you very much. I appreciate the testimony that was provided and the questions that have been asked.

This whole discussion, be it on facial recognition or artificial intelligence, is really touching on what is a Pandora's box of massive implications for our society, law enforcement and technology. In reading on this subject, you see everything from how we log into our phones to evidence that's being compiled without consent for criminal prosecutions.

My hope is to get a couple of questions in to both of you. My first question concerns the interplay between the state and private corporations and, sometimes, the contracts by which state actors—whether they be police forces or otherwise—will engage private corporations.

Ms. Khoo can answer first. Do you have specific recommendations about what regulations should look like to ensure that Canadians' privacy is protected in this case?

11:40 a.m.

Research Fellow, Citizen Lab, Munk School of Global Affairs and Public Policy, University of Toronto, As an Individual

Cynthia Khoo

I will start with three recommendations.

The first one is that if law enforcement is considering adopting facial recognition or algorithmic policing technology, it's a very real option for them not to engage with a commercial vendor at all. For example, the Saskatchewan Police Predictive Analytics Lab built all of its technology in-house, specifically to avoid these kinds of problems and being beholden to proprietary interests. It's publicly funded technology, run by the province, the University of Saskatchewan and the municipal police force. That doesn't mean there are no problems, but at least it cuts out the problems that would be associated with being tied to a commercial vendor.

The second thing is that, if you are going to procure from a commercial vendor, we would suggest putting in place several strict upfront procurement conditions. Examples would be refusing contracts with any company that is not willing to waive its trade secrets for the purposes of independent auditing, and making sure the vendor is contractually bound to comply with public interest privacy standards.

The third way to protect privacy and ensure public accountability is by ensuring less secrecy around these contracts. We shouldn't be finding out about them after the fact, through leaks, persistent FOIs or investigative journalists. We should know about them before they happen, when they're still at the tender stage, and have an opportunity to comment on them.