Evidence of meeting #15 for Access to Information, Privacy and Ethics in the 44th Parliament, 1st Session. (The original version is on Parliament's site, as are the minutes.)

A recording is available from Parliament.

Also speaking

Rob Jenkins, Professor, University of York, As an Individual
Sanjay Khanna, Strategic Advisor and Foresight Expert, As an Individual
Angelina Wang, Computer Science Graduate Researcher, Princeton University, As an Individual
Elizabeth Anne Watkins, Postdoctoral Research Associate, Princeton University, As an Individual

Noon

Conservative

The Chair Conservative Pat Kelly

Thank you.

You have two and a half minutes, Mr. Green.

Noon

NDP

Matthew Green NDP Hamilton Centre, ON

Thank you.

Ms. Wang, in your work, you examine the amplification of bias in machine learning systems. My fear is that this committee has spent a lot of time on facial recognition but perhaps hasn't been able to fully grasp the impacts of AI and of machine learning. Could you briefly describe the concept of bias amplification in machine learning, and perhaps what some of its material consequences are and who tends to be most impacted?

Noon

Computer Science Graduate Researcher, Princeton University, As an Individual

Angelina Wang

Bias amplification builds on a notion of bias that is often thought of as a correlation in the data. This correlation could be between a particular demographic group and some concept they are stereotypically associated with. Because machine learning models try to pick up on any patterns available in the data, they frequently amplify these correlations and will overpredict them when they are deployed.
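
To make this concrete, here is a minimal sketch of one common way to quantify bias amplification, in the spirit of co-occurrence-based metrics from the fairness literature: compare how often a group co-occurs with a concept in the training labels versus in a model's predictions. This is not Ms. Wang's own metric, and all names and rates below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000

# Toy binary data: membership in a demographic group, and a concept
# label that co-occurs with the group more often than chance.
group = rng.integers(0, 2, size=n)                  # 1 = member of the group
concept = np.where(group == 1,
                   rng.random(n) < 0.65,            # concept present 65% of the time
                   rng.random(n) < 0.35).astype(int)

# Hypothetical model predictions that exaggerate the pattern.
predicted = np.where(group == 1,
                     rng.random(n) < 0.80,
                     rng.random(n) < 0.20).astype(int)

def cooccurrence(labels, group):
    """Estimate P(concept = 1 | group = 1)."""
    return labels[group == 1].mean()

data_rate = cooccurrence(concept, group)
pred_rate = cooccurrence(predicted, group)

# A positive gap means the model overpredicts the stereotypical
# association beyond the correlation already present in the data.
print(f"correlation in data:        {data_rate:.2f}")
print(f"correlation in predictions: {pred_rate:.2f}")
print(f"bias amplification:         {pred_rate - data_rate:+.2f}")
```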

Noon

NDP

Matthew Green NDP Hamilton Centre, ON

Do you have any examples in law enforcement, for instance? We're hearing terms around predictive policing and a throwback to the Minority Report example. Would you care to comment on any research you may have found related to law enforcement's use of machine learning?

Noon

Computer Science Graduate Researcher, Princeton University, As an Individual

Angelina Wang

Sure. In predictive policing, if communities of colour and neighbourhoods with higher proportions of Black citizens have higher recorded levels of crime, then predictive policing models may predict those communities to be more likely to have crime in the future, even if that is not true, and will amplify this beyond the base rate of the actual correlation in the data.
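
As a toy illustration of the dynamic described here, the sketch below shows how a model trained on unevenly reported crime data can rank a more heavily policed neighbourhood as far riskier than its true base rate warrants. All neighbourhoods and rates are hypothetical.

```python
# Two hypothetical neighbourhoods with the SAME true crime rate but
# different levels of police presence, hence different reporting rates.
true_rate = {"A": 0.10, "B": 0.10}      # true incidents per resident
reporting = {"A": 0.90, "B": 0.30}      # fraction of incidents recorded

# Recorded incidents per 10,000 residents: the observation itself is biased.
recorded = {n: true_rate[n] * reporting[n] * 10_000 for n in true_rate}

# A naive model that allocates risk proportionally to recorded incidents
# concludes A is three times riskier, although the base rates are equal.
total = sum(recorded.values())
for n in recorded:
    print(f"neighbourhood {n}: true rate {true_rate[n]:.2f}, "
          f"recorded {recorded[n]:.0f}, predicted risk share {recorded[n] / total:.2f}")
```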

Noon

NDP

Matthew Green NDP Hamilton Centre, ON

To address this problem, you have a tool. What do you see as the main benefits of this tool, and who do you envision using it as a way to enable pre-emptive data analysis?

Noon

Computer Science Graduate Researcher, Princeton University, As an Individual

Angelina Wang

I'm not sure what tool you're referring to, but I think it's important to measure these correlations and to be aware that even a model with very high accuracy may itself be amplifying biases, or at least reproducing the biases that are in the dataset. Even if a model isn't adding additional biases, the existing dataset will already contain them.
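
A brief sketch of the point that headline accuracy can hide group-level bias: the hypothetical classifier below is roughly 90% accurate overall, yet its false positive rate is three times higher for one group than the other. The error rates are invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10_000
group = rng.integers(0, 2, size=n)      # two demographic groups
label = rng.integers(0, 2, size=n)      # ground truth, independent of group

# Hypothetical predictions: accurate overall, but the model errs
# on group 1 three times as often as on group 0.
flip = np.where(group == 1,
                rng.random(n) < 0.15,   # 15% error rate for group 1
                rng.random(n) < 0.05)   # 5% error rate for group 0
pred = np.where(flip, 1 - label, label)

print(f"overall accuracy: {(pred == label).mean():.2f}")
for g in (0, 1):
    mask = (group == g) & (label == 0)
    fpr = pred[mask].mean()             # fraction of negatives predicted positive
    print(f"group {g} false positive rate: {fpr:.2f}")
```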

Noon

NDP

Matthew Green NDP Hamilton Centre, ON

Thank you.

For the record, I thought I saw your work attached to the REVISE tool, but maybe I was mistaken.

Noon

Computer Science Graduate Researcher, Princeton University, As an Individual

Angelina Wang

That's a tool for measuring biases in visual datasets.

12:05 p.m.

NDP

Matthew Green NDP Hamilton Centre, ON

Got it.

Thank you so much, I appreciate the insight into that.

12:05 p.m.

Conservative

The Chair Conservative Pat Kelly

Thank you.

Mr. Bezan for five minutes.

12:05 p.m.

Conservative

James Bezan Conservative Selkirk—Interlake—Eastman, MB

Thank you, Mr. Chair.

I want to thank our witnesses for their time and expertise on this important study we're undertaking.

I want to go around to all four witnesses to ask them a question following on where Mr. Green was going.

When you take artificial intelligence and machine learning, tie that in with facial recognition, and then look at the possible application of that in the criminal justice system, will this significantly impede constitutional rights, the charter freedoms that we have here in Canada, if it is used under the Criminal Code?

I will start with Ms. Wang.

12:05 p.m.

Computer Science Graduate Researcher, Princeton University, As an Individual

Angelina Wang

I'm sorry. I don't think I'm familiar enough with that.

12:05 p.m.

Conservative

James Bezan Conservative Selkirk—Interlake—Eastman, MB

Essentially, if FRT and AI are used as part of the evidence in the conviction of individuals, would that present problems under our Criminal Code and under the Charter of Rights and Freedoms? Can we rely on FRT as sufficient evidence within our criminal justice system while still protecting the rights of individuals?

12:05 p.m.

Computer Science Graduate Researcher, Princeton University, As an Individual

Angelina Wang

I think that because facial images can be acquired without any sort of consent, because there are so many errors, and because you don't really know why a model makes a particular decision, using it in that way would go against human rights.

12:05 p.m.

Conservative

James Bezan Conservative Selkirk—Interlake—Eastman, MB

Okay.

Professor Watkins.

12:05 p.m.

Postdoctoral Research Associate, Princeton University, As an Individual

Dr. Elizabeth Anne Watkins

Thank you. Forgive my ignorance of the Canadian charter and Criminal Code.

In the U.S., we have a right to freedom of movement. If facial recognition technology is collecting faces from people as they move through public space, then the decisions they make about which public spaces they move through could potentially be chilled. The implementation of FRT in public surveillance would have a chilling effect on that particular right. That's just one of many examples.

12:05 p.m.

Conservative

James Bezan Conservative Selkirk—Interlake—Eastman, MB

Okay.

Mr. Khanna.

12:05 p.m.

Strategic Advisor and Foresight Expert, As an Individual

Sanjay Khanna

I believe this is going to be a test that works its way through the courts. Assuming FRT and machine learning algorithms are used to identify criminals, and not just their social media postings and so on, as happened in Ottawa, then challenges are going to need to be tested against the charter, in my view, to develop some legal precedent around this. Certainly, harms are plausible.

12:05 p.m.

Conservative

James Bezan Conservative Selkirk—Interlake—Eastman, MB

Okay.

Mr. Jenkins.

12:05 p.m.

Professor, University of York, As an Individual

Prof. Rob Jenkins

If facial recognition accuracy is low, then there are concerns about miscarriages of justice. If it's low for some people but high for others, there are concerns about equality. If it's high for everybody, there are concerns about privacy. Those are all of the options.

April 4th, 2022 / 12:05 p.m.

Conservative

James Bezan Conservative Selkirk—Interlake—Eastman, MB

Okay.

As we're going through this, we're hearing the recommendations loud and clear: accountability, transparency, and putting in place a moratorium until we have actual legislation. How do we, as parliamentarians, bring forward the proper safeguards to ensure that facial recognition is used correctly and that bias and discrimination are eliminated, or at the very least minimized? How do we write into the Criminal Code, the Privacy Act and PIPEDA the guardrails we need to make sure we're not relying too heavily on facial recognition technology, keeping in mind that there are always going to be issues around public safety and national security?

I'll go to Mr. Khanna first.

12:05 p.m.

Strategic Advisor and Foresight Expert, As an Individual

Sanjay Khanna

I'll answer by saying that I think this parliamentary committee is taking steps in that direction by drawing on such a wide group of interprofessional and interdisciplinary experts.

Another important step is to create opportunities for employees of the companies holding the largest datasets, the datasets most likely to be used, to be compelled to provide evidence on how these technologies are being used, in order to inform legislative approaches. These could be company employees who come forward as whistle-blowers and are then able to report to committees such as this one.

Drawing on what people know within industry, to create a proper symmetry between what you know as legislators and what companies know internally, is probably very important.

12:10 p.m.

Conservative

The Chair Conservative Pat Kelly

Thank you.

Now we will go to Ms. Hepfner for five minutes.

12:10 p.m.

Liberal

Lisa Hepfner Liberal Hamilton Mountain, ON

Thank you very much.

Thank you to the witnesses for their time today. Through the chair, I want to take advantage of the fact that we have three different countries represented here.

Starting with Mr. Jenkins, maybe you can talk to us a little bit about whether the U.K. is looking at any sort of rules or guardrails around AI. We've talked about how legislators should approach this before it's too late. I'm wondering what other countries are doing.