Evidence of meeting #25 for Access to Information, Privacy and Ethics in the 44th Parliament, 1st Session. (The original version is on Parliament’s site, as are the minutes.)

A recording is available from Parliament.

Also speaking

Nestor Maslej  Research Associate, Institute for Human-Centered Artificial Intelligence, Stanford University, As an Individual
Sharon Polsky  President, Privacy and Access Council of Canada

5:05 p.m.

NDP

Matthew Green NDP Hamilton Centre, ON

In your view, then, could you perhaps comment on best practices that Canada should learn from, in comparison to other AI legislation and jurisdictions?

5:05 p.m.

Research Associate, Institute for Human-Centered Artificial Intelligence, Stanford University, As an Individual

Nestor Maslej

Perhaps not necessarily a best practice but a point of reality: one of the big takeaways from this “AI Index” report is that AI is becoming increasingly ubiquitous in all of our lives.

Ten years ago, there were a lot of AI problems that were very difficult to solve. This meant that AI was something that was just being researched, whereas, if you move forward 10 years, AI is now coming out of the lab and moving into the real world. A lot of companies are very excited about using AI technologies, and you're going to start seeing them used more and more. Investment in AI is going through the roof, and so is the number of AI patents.

Very often, I would say, a lot of companies are quite keen to use AI before perhaps coming to terms with some of the negative ways in which it can be deployed. As a regulator, very often it might be worth asking, when should we care about this? When is the time to regulate? I would say that—

5:05 p.m.

Conservative

The Chair Conservative Pat Kelly

I have to cut you off. I'm very sorry to do so, but we're getting further behind.

I'm going to have to cut the times for the subsequent rounds, and I still think we might end up having to squeeze a little past 5:30 to get in a few minutes of committee business.

We're going to go with four minutes each for Mr. Bezan and Mr. Fergus, two each for Mr. Villemure and Mr. Green, and then four each for Mr. Kurek and Ms. Hepfner.

Go ahead, Mr. Bezan, for four minutes.

5:10 p.m.

Conservative

James Bezan Conservative Selkirk—Interlake—Eastman, MB

Thank you, Mr. Chair, and I want to thank our witnesses for appearing today.

Mr. Maslej, you were reeling off quite a bit of data and hard percentages, yet in your “AI Index” report of 2021 you said there wasn't enough data out there. Have you collected enough data to help us, as regulators, develop the legislative framework to control artificial intelligence, or to provide the right policy framework in which to move ahead on things like facial recognition?

5:10 p.m.

Research Associate, Institute for Human-Centered Artificial Intelligence, Stanford University, As an Individual

Nestor Maslej

I would say yes.

AI is obviously something that changes day by day. I mean, 2022 has been a tremendous year of AI progress; it seems like every week there's a new model that's breaking ground. I don't think we're ever going to get to a point where we'll have data to sufficiently know the answer to every question, but we're getting more data, and an absence of absolute data does not mean that we shouldn't take action.

We know, for instance, as I stated earlier, that a lot of these facial recognition systems perform a lot worse on these kinds of “in the wild” photos, photos where individuals are not looking straight into the camera or where the lighting is not very good, and that might have important implications for how these technologies are regulated.

We're still far away from getting to a point where we're going to have data to answer every single question, but we are getting more data, and I think the data we have at the moment is sufficient to take action on certain different issues.

5:10 p.m.

Conservative

James Bezan Conservative Selkirk—Interlake—Eastman, MB

Through the committee here, we've heard quite a bit about the shortfalls in how the data has been accumulated and how the technology has been adapted, with bias and prejudice. Do we feel we are in a position—in your case, coming from Stanford University—where things are more balanced on that side of the equation, or...? I'll ask this of Ms. Polsky as well in my final minutes here: What are the chances for abuse, the false positives and, ultimately, those who definitely want to use this to further human rights abuses?

5:10 p.m.

Research Associate, Institute for Human-Centered Artificial Intelligence, Stanford University, As an Individual

Nestor Maslej

I can perhaps go first and then I'll defer to Ms. Polsky.

I would say again that there are definitely questions that remain unanswered, but we do have a lot of data that says things that are difficult to dispute. As mentioned, the paper that I cited earlier shows that facial recognition systems can be biased, and I think that's a generally well-accepted fact. That can be something that a committee of regulators could act on, but I'll defer to Ms. Polsky for an additional answer.

5:10 p.m.

President, Privacy and Access Council of Canada

Sharon Polsky

Thank you.

I'm not an academic. I leave that to you, sir, but I go back to news reports out of the Welsh police, where the senior-ranking officer said that facial recognition—and I'm paraphrasing—came up with something like 92% false positives, and he said that was okay, because no technology is perfect. That was in 2017. In 2020 or 2021, the chief of police of Chicago, I believe it was, said that facial recognition was something like 95% erroneous.

That can have profound implications on people's lives, because once you are identified as a person of interest, you're in the system, and then any time a cop looks at you, they run your name and you're already there. There's already a presumption that they should look a little more closely at you, because the facial recognition got it wrong.

5:10 p.m.

Conservative

The Chair Conservative Pat Kelly

Thank you, Ms. Polsky.

With that, we'll go to Mr. Fergus for up to four minutes.

5:10 p.m.

Liberal

Greg Fergus Liberal Hull—Aylmer, QC

Thank you, Mr. Chair, and I'd like to thank the witnesses for being here today.

Through you, Mr. Chair, I would have asked this question of both our witnesses, just following up on what Mr. Green asked, but given that our other witness didn't want to pronounce on this issue, I will ask this question of Ms. Polsky.

Ms. Polsky, in answering a question from my colleague, you indicated the problem of false positives and the extremely high percentage of false positives, on the order of 19 times out of 20. Given that you have pointed out those numbers, and given that a number of our witnesses before this committee have pointed out that we should place a moratorium on the use of facial recognition technology by the public and perhaps even by the private sector until a framework for this technology is put in place, do you feel that there should be a moratorium? Would you agree with those witnesses that there should be a moratorium on the use of FRT in public spaces as well as private spaces?

5:15 p.m.

President, Privacy and Access Council of Canada

Sharon Polsky

The short answer is yes, considering that several years ago, when I did some research, Toronto already had 15,000 CCTV cameras in public use. That doesn't include what's in stores, cars, cellphones and all the rest of it. Calgary replaced its lamp standards with a new type of light, but the lamp standards themselves, 80,000 of them, are capable of being fitted with microphones and high-resolution cameras, watching and listening to everything and everybody.

All too often, we have public bodies not doing the facial recognition or any of these AI-embedded technologies themselves but engaging private sector organizations to do it and getting around the accountability. I would say there needs to be a moratorium on public and private usage of it.

5:15 p.m.

Liberal

Greg Fergus Liberal Hull—Aylmer, QC

Thank you very much, Ms. Polsky.

Mr. Maslej, going back to chapter 3 of your report.... I had an opportunity to read your report, and I also had an opportunity to speak to you about it in advance of this meeting. I'm wondering if you could talk about how you feel, or what your report says, about what types of measures need to be adopted by the community to eliminate the bias found in these algorithms, so that we can promote better fairness and the ability of FRT to reduce bias against women or people of colour as much as possible. What does the report say about progress on that? What have you seen over the last couple of years?

5:15 p.m.

Research Associate, Institute for Human-Centered Artificial Intelligence, Stanford University, As an Individual

Nestor Maslej

I will say that our report doesn't make any concrete recommendations for steps that should be taken. It's more trying to take stock of where the AI landscape is. I will make a couple of points, though.

First, I think the report would clearly imply that there should be a greater consciousness that AI tools are going to become increasingly ubiquitous and that a lot of these tools are flawed. They're not perfect. Sometimes people are going to use these tools without being aware of what their flaws might be. Perhaps we should be asking ourselves how they might be flawed a lot sooner, before we actually use them.

On the second point, I will say that chapter 5 looks at legislator—

5:15 p.m.

Liberal

Greg Fergus Liberal Hull—Aylmer, QC

I'm sorry, Mr. Maslej. Let me interrupt you there. I have very little time—

5:15 p.m.

Conservative

The Chair Conservative Pat Kelly

You have none.

5:15 p.m.

Liberal

Greg Fergus Liberal Hull—Aylmer, QC

Oh, drat. Well, this will be very quick.

You pointed out that there's a reason to give some greater consideration to the use of this technology, but wouldn't that lead you or the report to come to the conclusion that there should be a moratorium until greater certainty or greater accuracy can be brought to bear for the use of this technology? Can you answer yes or no, if possible?

5:15 p.m.

Conservative

The Chair Conservative Pat Kelly

It will have to be yes or no. We're out of time.

5:15 p.m.

Research Associate, Institute for Human-Centered Artificial Intelligence, Stanford University, As an Individual

Nestor Maslej

Again, I will politely decline to answer that question. I don't think the report—

5:15 p.m.

Conservative

The Chair Conservative Pat Kelly

Okay. Thank you.

With that, we'll move to Monsieur Villemure for two minutes.

5:15 p.m.

Bloc

René Villemure Bloc Trois-Rivières, QC

Thank you very much, Mr. Chair.

Ms. Polsky, we are often told that facial recognition data must be used to create a feeling of safety. However, it seems to me that mass surveillance as you describe it and as we understand it is likely to create a feeling of unsafety rather than safety. What do you think about that?

June 9th, 2022 / 5:20 p.m.

President, Privacy and Access Council of Canada

Sharon Polsky

I have to agree with you. Keep in mind that the people who are telling us that there is great demand for these new technologies are the vendors. They're the ones who will profit from it.

It's as simple as that.

5:20 p.m.

Bloc

René Villemure Bloc Trois-Rivières, QC

Thank you very much.

Mr. Maslej, on page 62 of the Artificial Intelligence Index Report 2022, you talk about algorithm error rates. How can algorithms be refined by accumulating large amounts of data without it becoming surveillance?

5:20 p.m.

Research Associate, Institute for Human-Centered Artificial Intelligence, Stanford University, As an Individual

Nestor Maslej

That is one of the challenges in this kind of endeavour. I would say, broadly speaking, that it's a matter of asking how we're going about collecting that data. In the absence of a regulatory framework, it is easy for different companies to operate in different kinds of capacities. If the rules are more clearly ironed out and identified, it is easier to have all the players operating on the same playing field.

It is a challenge. Data is essential in the operation of these systems, but just because data is essential—I would say this as an individual, not representing my institution—that does not imply that we shouldn't have any kind of regulations or—

5:20 p.m.

Bloc

René Villemure Bloc Trois-Rivières, QC

I have to interrupt you, as I have only a few seconds left.

Thank you.

Ms. Polsky, could you tell us in writing what elements you find worthwhile in the European legislation on facial recognition and data protection?

5:20 p.m.

President, Privacy and Access Council of Canada

Sharon Polsky

I'm sorry. I missed the beginning. If I would....?