Evidence of meeting #11 for Access to Information, Privacy and Ethics in the 44th Parliament, 1st Session. (The original version is on Parliament’s site, as are the minutes.)

A recording is available from Parliament.

Also speaking

Cynthia Khoo  Research Fellow, Citizen Lab, Munk School of Global Affairs and Public Policy, University of Toronto, As an Individual
Carole Piovesan  Managing Partner, INQ Law
Ana Brandusescu  Artificial Intelligence Governance Expert, As an Individual
Kristen Thomasen  Professor, Peter A. Allard School of Law, University of British Columbia, As an Individual
Petra Molnar  Lawyer, Refugee Law Lab, York University

12:55 p.m.

Lawyer, Refugee Law Lab, York University

Dr. Petra Molnar

Yes, absolutely. When we're talking about refugee determination in particular, we're talking about an extremely high-risk application of technology. Like you rightly say, and like our report did in 2018, if mistakes are made and if someone is, for example, wrongfully deported to a country that they're fleeing from, the ramifications can be quite dire.

It's very concerning that we are testing and experimenting in this opaque and discretionary space without the appropriate oversight and safeguards. That is something that has to change, because it has real impacts on real people's lives.

12:55 p.m.

Conservative

The Chair Conservative Pat Kelly

Thank you very much.

With that, we're going to get to the final two rounds.

We have Mr. Bezan for four minutes, and then we'll go to Ms. Hepfner and Ms. Khalid.

Go ahead, Mr. Bezan.

12:55 p.m.

Conservative

James Bezan Conservative Selkirk—Interlake—Eastman, MB

Thank you, Mr. Chair.

I thank our witnesses very much.

I'm going to direct my questions towards Ms. Brandusescu. You're well published. I've read at least three reports that you've published, everything from “AI for the Rest of Us” and “Weak privacy, weak procurement: The state of facial recognition in Canada” to “Artificial intelligence policy and funding in Canada: Public investments, private interests”. I believe what you're suggesting is to follow the money, and you can see where the private interests lie.

Should federal and provincial governments be funding this type of AI technology and facial recognition technology?

12:55 p.m.

Artificial Intelligence Governance Expert, As an Individual

Ana Brandusescu

I have a brief question back to you. When you say “be funding this type of technology”, should governments fund FRT?

12:55 p.m.

Conservative

James Bezan Conservative Selkirk—Interlake—Eastman, MB

That's what I'm asking you.

12:55 p.m.

Artificial Intelligence Governance Expert, As an Individual

Ana Brandusescu

Okay. No, they shouldn't.

We're at a point where we're funding a lot of R and D, and some of that R and D can end up as FRT. Again, the end goal should be a ban.

We're already seeing the European Parliament calling for a ban on this. It is the most recent call for a ban. It is possible to move from a moratorium to a ban, and we should. We're not even at a moratorium. We can start with law enforcement, but as other witnesses have mentioned, FRT is a problem across the government. It's not only a law enforcement problem, although law enforcement is the worst problem that FRT [Inaudible—Editor].

Governments should not fund it. They should fund civil society, digital rights groups and community groups that are doing this work to showcase all the harm that comes out of FRT. They know the social issues and the communities they work in. They are the experts and they should also be involved in the conversation of what the government decides to fund.

12:55 p.m.

Conservative

James Bezan Conservative Selkirk—Interlake—Eastman, MB

How do we look at both the policy directives and the funding of artificial intelligence and FRT? What do we then need to do on the side of privacy legislation, whether it's the Privacy Act or PIPEDA? What safeguards do we have to build in there to ensure that biometric data is protected?

12:55 p.m.

Artificial Intelligence Governance Expert, As an Individual

Ana Brandusescu

That is a lot of questions. I'll take up one, which is that we should turn the directive on automated decision-making—

12:55 p.m.

Conservative

James Bezan Conservative Selkirk—Interlake—Eastman, MB

You can send that in as well. You could reply in writing after the committee wraps up, but if you can give us a quick synopsis, that would be great.

12:55 p.m.

Artificial Intelligence Governance Expert, As an Individual

Ana Brandusescu

Yes. A quick one would be to improve the directive on automated decision-making. Make sure that the internal reviews that are written every six months actually make it to the public. We're still waiting for the one that was supposed to be released last year. Apparently, it will be released in April. We're still waiting on it.

Others have mentioned how we shouldn't rely on investigative journalists to keep doing this work. The public should have this information. We should have updates. We should have home pages on the OPC and Treasury Board websites and in other spaces to show the latest developments, such as the procurement and use of these technologies, like FRT, until they are banned.

The directive itself needs improvement. I will have those improvements and recommendations in writing later. We should follow the EU and the U.S. in putting together an act that covers the transparency of law enforcement, which is currently not covered by other public AI registries around the world. I will also put that in writing.

1 p.m.

Conservative

James Bezan Conservative Selkirk—Interlake—Eastman, MB

Would bringing in that accountability, or controlling the powers leveraged by policing agencies across this country, require amendments to our Criminal Code? How do we then tie that in with the private sector that's—

1 p.m.

Conservative

The Chair Conservative Pat Kelly

I'm sorry, Mr. Bezan. You've only left enough time for a yes or no to that question. Then we'll have to move on.

1 p.m.

Artificial Intelligence Governance Expert, As an Individual

Ana Brandusescu

I'll just give a maybe. That's not my expertise.

1 p.m.

Conservative

The Chair Conservative Pat Kelly

Okay. Thank you.

With that, we'll finish it off with Ms. Khalid for four minutes.

1 p.m.

Liberal

Iqra Khalid Liberal Mississauga—Erin Mills, ON

Thank you very much, Mr. Chair, and thank you to the witnesses.

I'll start with Ms. Molnar.

The United States put out a commitment that by 2023, 97% of all the people who travel through their airports will go through a facial recognition system. In Canada, our IRCC immigration application assessment processes—not just for refugees, but also for all visitors and immigrants who are seeking immigration to Canada—are now being transitioned to an AI model for assessing applications.

Can you perhaps talk a little bit about profiling and how this could directly or indirectly impact how institutional discrimination could occur?

1 p.m.

Lawyer, Refugee Law Lab, York University

Dr. Petra Molnar

What might be instructive is a comparator between what Canada could be doing and what the European Union is looking at under its proposed regulation on artificial intelligence. There, it clearly recognizes that individual risk assessments for the purposes of immigration and refugee processing are high risk. There are conversations around an outright ban of individualized risk assessment that can be used for profiling and for strengthening systemic discrimination, which is already something our immigration system is replete with.

I think there is an opportunity for the Canadian government to really think through how best to regulate the use of facial recognition technology for the purposes of immigration. You're absolutely right. It is already in use, both within Canada and with its regional partners, like the United States, with which it also shares a lot of the data.

Data sharing is an element we didn't really discuss today, but it's something that we all need to pay more attention to.

1 p.m.

Liberal

Iqra Khalid Liberal Mississauga—Erin Mills, ON

Thank you.

Ms. Brandusescu, do you want to comment on that as well?

1 p.m.

Artificial Intelligence Governance Expert, As an Individual

Ana Brandusescu

Yes, I agree with Ms. Molnar completely.

1 p.m.

Liberal

Iqra Khalid Liberal Mississauga—Erin Mills, ON

Great. Thank you.

Lastly, we've heard some of the pros of facial recognition in locating missing children and in breaking up child pornography rings, for example. We do give up a little bit of our privacy to ensure the security and well-being of our communities.

Where does the commercial aspect of it fall in? Do any of you want to comment on that?

1 p.m.

Lawyer, Refugee Law Lab, York University

Dr. Petra Molnar

Perhaps I'll reiterate that when we're talking about commercial interests and the kind of bottom-line thinking that the private sector often brings into the mix, it's a very different framework for responsibility when it comes to the high-risk use of technology, particularly at the border, or as you're referencing, with human trafficking.

Again, we need to pay careful attention to the particular actors involved in the ecosystem in which these technologies develop and are deployed. None of this is neutral. It is all a political exercise.

1 p.m.

Professor, Peter A. Allard School of Law, University of British Columbia, As an Individual

Kristen Thomasen

I can also jump in.

I think that in approaching regulation and limits on facial surveillance through the lens of regulating the use, the users or the design and availability of the technology, we can start to think about restraints or restrictions on the use of commercial facial surveillance systems. Instead, we could fund or develop in-house systems using data that is not just legally sourced, but sourced through fully informed consent and through processes that ensure the dignity of the individuals whose data is being processed. Such a system would be designed and used only for very specific use cases, as opposed to commercial systems like Clearview AI, which are being used in a wide range of different scenarios, none of which take into account the specific social context and implications for the people whose data is being processed or who are affected by the use of the system.

I think there are ways we can really distinguish very narrow use cases and not build into a narrative that says we need facial recognition because it can be used to protect people from potential harm.

1:05 p.m.

Conservative

The Chair Conservative Pat Kelly

Thank you so much.

That concludes the round.

With that, I thank our witnesses so much. We had some very important and interesting testimony today, so thank you to all of you.

The meeting is adjourned.