Evidence of meeting #15 for Access to Information, Privacy and Ethics in the 44th Parliament, 1st Session. (The original version is on Parliament’s site, as are the minutes.)

A recording is available from Parliament.

Also speaking

Rob Jenkins, Professor, University of York, As an Individual
Sanjay Khanna, Strategic Advisor and Foresight Expert, As an Individual
Angelina Wang, Computer Science Graduate Researcher, Princeton University, As an Individual
Elizabeth Anne Watkins, Postdoctoral Research Associate, Princeton University, As an Individual

12:10 p.m.

Prof. Rob Jenkins

I'm sorry, that's probably not really in my area of expertise. I can speak to the cognitive science of face recognition, but I'm not an expert on the law or the policy.

12:10 p.m.

Liberal

Lisa Hepfner Liberal Hamilton Mountain, ON

So you don't know whether other countries are looking into some sort of guardrails, moratoriums or legislation around AI or facial recognition technology.

12:10 p.m.

Prof. Rob Jenkins

I know that they are, but I don't have a deep knowledge of those processes.

12:10 p.m.

Liberal

Lisa Hepfner Liberal Hamilton Mountain, ON

Maybe Ms. Wang or Ms. Watkins can weigh in from a U.S. perspective. Is there any legislation being looked at on the U.S. side in the same way as in Canada?

12:10 p.m.

Computer Science Graduate Researcher, Princeton University, As an Individual

Angelina Wang

I'm also not familiar with this.

12:10 p.m.

Postdoctoral Research Associate, Princeton University, As an Individual

Dr. Elizabeth Anne Watkins

This is not my area of expertise, but I will say that one area of legislation that's been particularly useful for workers in automated decision-making is in the GDPR, and its functional right to an explanation. While the GDPR does not actually have the words “right to an explanation”, a lot of the guardrails around ensuring that companies have to provide workers with insights into how decisions are being made about them by automated systems could be a really useful model.

12:10 p.m.

Liberal

Lisa Hepfner Liberal Hamilton Mountain, ON

Other than what we've heard, does anybody have any further advice on how, as legislators, we can handle this practice if it comes, other than with a moratorium? More specifically, what guardrails could we put in place to make sure that the risks are mitigated somewhat?

Nobody wants to tackle that.

12:10 p.m.

Strategic Advisor and Foresight Expert, As an Individual

Sanjay Khanna

I'll just bring up something I said earlier about drawing on as much research and insight as you possibly can on racialized minorities, First Nations, children, or anyone who is more vulnerable to this sort of exploitation, or could be made vulnerable by changing economic circumstances that the government of the day and members of the various parties are concerned about. Looking prospectively at this to figure out how to safeguard those individuals is probably very important in the mix.

12:10 p.m.

Liberal

Lisa Hepfner Liberal Hamilton Mountain, ON

We've also heard today a lot about how the biases in AI come from the human biases that we have in our society, because the machines are programmed by humans. I'm wondering if this is universal, because I did see briefly one study that...algorithms that were developed in Asia may not have the same discrimination problems that algorithms developed in North America have.

Perhaps, Ms. Wang, you can talk about that. Are there better ways to develop this technology so we can still get the benefit while mitigating some of the discrimination risks?

12:10 p.m.

Computer Science Graduate Researcher, Princeton University, As an Individual

Angelina Wang

Thank you.

I think that each model is developed in the context of the society that it's made by, and so models developed in Asia also have lots of biases. They are just a different set of biases from models that have been developed by Canadians or Americans.

For example, a lot of object recognition tools have been shown to be worse at recognizing the same objects—for example, soap—when they come from a different country than the one the dataset came from.

There are ways to get around this, but this requires a lot of different people involved with different perspectives, because there really is just no universal viewpoint. I think there's never a way of getting rid of all the biases in the model, because biases themselves are very relative to a particular societal context.
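To illustrate the kind of disaggregated evaluation Ms. Wang describes, here is a minimal, hypothetical Python sketch (not from the testimony; the data and country names are invented for illustration) that reports a model's accuracy per country of origin as well as overall. The point is that a single aggregate accuracy figure can hide large gaps between groups.

```python
# Hypothetical sketch: disaggregate a model's accuracy by the country an
# image came from. All numbers below are invented for illustration.
from collections import defaultdict

def accuracy_by_group(y_true, y_pred, groups):
    """Return overall accuracy plus accuracy computed separately per group."""
    totals, correct = defaultdict(int), defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        totals[group] += 1
        correct[group] += int(truth == pred)
    overall = sum(correct.values()) / sum(totals.values())
    per_group = {g: correct[g] / totals[g] for g in totals}
    return overall, per_group

# Toy labels: 1 = the object ("soap") was recognized correctly, 0 = it was not.
y_true = [1, 1, 1, 1, 1, 1, 1, 1]
y_pred = [1, 1, 1, 1, 1, 0, 0, 1]                  # invented model output
groups = ["country_A"] * 4 + ["country_B"] * 4     # country of origin

overall, per_group = accuracy_by_group(y_true, y_pred, groups)
print(overall)     # 0.75 overall
print(per_group)   # {'country_A': 1.0, 'country_B': 0.5} -- the hidden gap
```

Auditing per-group metrics like this does not remove the bias, but it makes the gap visible before a system is deployed.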

12:15 p.m.

Conservative

The Chair Conservative Pat Kelly

Thank you.

That takes us to the end of the second block. We're going to go into subsequent rounds now.

Just for the information of members, it does not appear likely that there will be a vote at this point, so we will probably be able to complete this meeting. There will be plenty of opportunity for members to get questions in.

With that, we go now to Mr. Williams.

12:15 p.m.

Conservative

Ryan Williams Conservative Bay of Quinte, ON

Thank you, Mr. Chair.

I'm going to follow my colleague, Ms. Hepfner, on some of the questioning.

Mr. Jenkins, again, you've written about the other-race effect, which is a theory that own-race faces are better remembered than other-race faces. We know that facial recognition technology is very accurate with white faces, but its accuracy drops with other skin colours.

Could this be due to the other-race effect of the programmers, essentially a predominantly white programming team creating an AI that is better at recognizing white faces? Would the same bias apply to an FRT AI developed by a predominantly, let's say, Black programming team? What does your research show, and what are you seeing in your studies?

12:15 p.m.

Prof. Rob Jenkins

Bias among programmers could be a factor, but I don't think we need to invoke that to understand the demographic group differences that we see in these automatic face recognition systems.

I think that can be explained by the distribution of images that are used to train the algorithms. If you feed an algorithm mostly, let's say, white faces, then it will be better at recognizing white faces than faces from other races. If you feed it mainly Black faces, it will be better at recognizing Black faces than white faces.

Maybe the analogy with language is helpful here. It matters what's in your environment as you are developing as a human, and it also matters as you're being programmed as an artificial system.
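As a purely illustrative sketch of the mechanism Prof. Jenkins describes (synthetic data only, not any real face recognition system), the following Python snippet trains a simple classifier on a set that is 95% "group A" and 5% "group B" and then evaluates it on balanced test sets; the accuracy gap it shows comes from the training mix alone.

```python
# Synthetic illustration only: the per-group accuracy gap below is produced
# purely by the imbalanced training mix, not by any property of real faces.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Generate toy 'match / no match' examples whose distribution depends on the group."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 5))
    y = (X.sum(axis=1) + rng.normal(scale=1.0, size=n) > 5 * shift).astype(int)
    return X, y

# Training set: 95% group A, 5% group B.
Xa, ya = make_group(950, shift=0.0)
Xb, yb = make_group(50, shift=1.5)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Balanced test sets reveal the gap created by the skewed training data.
for name, shift in [("group A", 0.0), ("group B", 1.5)]:
    X_test, y_test = make_group(2000, shift)
    print(name, round(model.score(X_test, y_test), 3))
```

In this toy setting, rebalancing or broadening the training mix narrows the gap, which mirrors the point about what the algorithm is "fed" during training.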

12:15 p.m.

Conservative

Ryan Williams Conservative Bay of Quinte, ON

Ms. Wang, we know that facial recognition technology is terribly inaccurate at correctly identifying non-white people. We've heard of error rates of up to 34% for darker-skinned females. This FRT-induced digital racism is unacceptable and further reinforces why this technology should not be used for law enforcement.

You've written about mitigating bias in machine learning. How do we end this digital racism?

12:15 p.m.

Computer Science Graduate Researcher, Princeton University, As an Individual

Angelina Wang

It's very hard to think about, because none of these technologies are ever going to be used in a vacuum, and they're always situated in a particular social context. Even if you had some sort of facial recognition system that worked perfectly, or at least the same across different people with different skin tones, the way this is used, for example, for surveillance or policing, is itself still very racist. You can never really disentangle the technology from [Technical difficulty--Editor]

12:15 p.m.

Conservative

Ryan Williams Conservative Bay of Quinte, ON

I want to follow up on one of my colleague's questions. Can this technology be used for good?

Something I've read about is having this technology used to help curb human trafficking, using AI to find images and identify, let's say, an individual who might have been 13 when they disappeared and is now older. Used for good, that technology could help with human trafficking cases or with solving some of them.

To all of the panellists, are there ways for law enforcement to use that as a positive and not a negative? Are there ways you can see right now that it can be something that's protected when we're looking at legislation?

12:15 p.m.

Prof. Rob Jenkins

I think you characterized facial recognition technology as a tool, and, in my view, that's exactly the correct characterization. You can use a tool to try to help other people, or you can use it to try to harm other people, so we need to understand the intent of people as well as understand the capabilities of the technology itself.

12:20 p.m.

Conservative

Ryan Williams Conservative Bay of Quinte, ON

I have the same question for anyone else who can answer that in 40 seconds.

12:20 p.m.

Strategic Advisor and Foresight Expert, As an Individual

Sanjay Khanna

I might add that consumer companies, consumer brands and retailers are looking quite closely at the technology and are advancing how they think about sentiment analysis and perceiving how customers are feeling in a branded or transactional environment. Some people might not find that particularly threatening. They might find it a benefit in some way, but guardrails are still needed around that.

There are always going to be some economic arguments for traffic, for sales and for different kinds of marketing and sales engagement and transactional opportunities, and those probably need to be looked at from an oversight standpoint, should these technologies be employed.

12:20 p.m.

Conservative

The Chair Conservative Pat Kelly

Thank you.

Now we have Mr. Bains for up to five minutes.

April 4th, 2022 / 12:20 p.m.

Liberal

Parm Bains Liberal Steveston—Richmond East, BC

Thank you, Mr. Chair.

Thank you to all of our witnesses for taking the time today.

I want to leave an open question here for any of our witnesses.

Based on your responses to Mr. Green's earlier question, there seems to be a considerable amount of legislation needed before FRT is widely used.

My questions come from Richmond, British Columbia. It's home to a strong South Asian and Asian demographic. We learned from an earlier panel expert who joined us that the VPD is using FRT without a lot of oversight.

Are any of you aware of any British Columbia law enforcement agencies using FRT?

Mr. Khanna, are you aware of any of this?

12:20 p.m.

Strategic Advisor and Foresight Expert, As an Individual

Sanjay Khanna

No, I'm not aware of how the Vancouver Police Department is using FRT.

12:20 p.m.

Liberal

Parm Bains Liberal Steveston—Richmond East, BC

Okay, and I'll stay with you, then.

In a paper, you and your colleagues acknowledge that machine learning systems perpetuate and amplify certain biases present in the data. As a result, you developed the REVISE tool to enable pre-emptive analysis of large-scale datasets. How does the REVISE tool mitigate these biases?

12:20 p.m.

Strategic Advisor and Foresight Expert, As an Individual

Sanjay Khanna

I think this could be another Sanjay Khanna who happens to be working in AI and machine learning. It's not me.

12:20 p.m.

Liberal

Parm Bains Liberal Steveston—Richmond East, BC

Oh, okay.