Evidence of meeting #19 for Access to Information, Privacy and Ethics in the 44th Parliament, 1st Session. (The original version is on Parliament’s site, as are the minutes.)

A recording is available from Parliament.

Also speaking

Owen Larter  Director, Responsible Artificial Intelligence Public Policy, Microsoft
Mustafa Farooq  Chief Executive Officer, National Council of Canadian Muslims
Rizwan Mohammad  Advocacy Officer, National Council of Canadian Muslims

3:35 p.m.

Conservative

The Chair Conservative Pat Kelly

I call the meeting to order.

Welcome to meeting number 19 of the House of Commons Standing Committee on Access to Information, Privacy and Ethics.

Pursuant to Standing Order 108(3)(h) and the motion adopted by the committee on Monday, December 13, 2021, the committee is resuming its study of the use and impact of facial recognition technology.

Today’s meeting is taking place in a hybrid format, pursuant to the House order of November 25, 2021. Members are attending in person in the room and remotely by using the Zoom application.

I have a couple of comments for the benefit of witnesses. We have witnesses in the room and witnesses participating by Zoom. Please wait until I recognize your name before speaking. If you are participating by Zoom, click on the microphone icon to activate your mike, and please mute yourself when not speaking. In the room, your mike should be controlled—you shouldn't have to hit the button—but just be aware and make sure that your microphone is lit up before you speak. I'll remind you that comments should be addressed through the chair.

Now I would like to welcome our witnesses.

We have, from Microsoft, Owen Larter, director responsible for artificial intelligence public policy; and from the National Council of Canadian Muslims, we have Mustafa Farooq, chief executive officer; and Rizwan Mohammad, advocacy officer.

We will start with Mr. Larter.

You have up to five minutes for your opening statement.

May 5th, 2022 / 3:35 p.m.

Owen Larter Director, Responsible Artificial Intelligence Public Policy, Microsoft

Thank you very much.

Good afternoon, everyone.

Thank you very much, Mr. Chair and vice-chairs, for the opportunity to contribute today.

My name is Owen Larter. I'm in the public policy team in the Office of Responsible AI at Microsoft.

There are really three points that I want to get across in my comments today.

First, facial recognition is a new and powerful technology that is already being used and for which we now need regulation.

Second, there is a particular urgency around regulating police use of facial recognition, given the consequential nature of police decisions.

Third, there is a real opportunity for Canada to lead the way globally in shaping facial recognition regulation that protects human rights and advances transparency and accountability.

I want to start by applauding the work of the committee on this really important topic. We at Microsoft are suppliers of facial recognition. We do believe that it can bring real benefits to society. This includes helping secure devices and assisting people who are blind or have low vision to access more immersive social experiences. In the public safety context, it can be used to help find victims of trafficking and as part of the criminal investigation process.

However, we are also clear-eyed about the potential risks of this technology. That includes the risk of bias and unfair performance, including across different demographic groups; the potential for new intrusions into people's privacy; and possible threats to democratic freedoms and human rights.

In response to this, in recent years we've developed a number of internal safeguards at Microsoft. They include our facial recognition principles and the creation of our Face API transparency note. This transparency note communicates, in language aimed at non-technical audiences, how our facial recognition works, what its capabilities and limitations are and the factors that will affect performance, all with a view to helping customers understand how to use it responsibly.

Our facial recognition work builds on Microsoft's broader responsible AI program, which ensures colleagues are developing and deploying AI in a way that adheres to our principles. The program includes our cross-company AI governance team and our responsible AI standard, which is a series of requirements that colleagues developing and deploying AI must adhere to. It also includes our process for reviewing sensitive AI uses.

In addition to these internal safeguards, we also believe that there is a need for regulation. This need is particularly acute in the law enforcement context, as I mentioned. We really do feel that the importance of this committee's work cannot be overstated. We commend the way in which it is bringing together stakeholders from across society, including government, civil society, industry and academia to discuss what a regulatory framework should look like.

We note that while there has been positive progress in places like Washington state in the U.S., and there are important ongoing conversations in the EU and elsewhere, we do believe that Canada has an opportunity to play a leading role in shaping regulation in this space.

We think that type of regulation needs to do three things. It needs to protect human rights, advance transparency and accountability, and ensure testing of facial recognition systems in a way that demonstrates they are performing appropriately.

When it comes to law enforcement, there are important human rights protections that regulations need to cover, including prohibiting the use of facial recognition for indiscriminate mass surveillance and prohibiting use on the basis of an individual's race, gender, sexual orientation or other protected characteristics. Regulations should also ensure it's not being used in a way that chills important freedoms, such as freedom of assembly.

On transparency and accountability, we think law enforcement agencies should adopt a public use policy setting out how they will use facial recognition, setting out the databases they will be searching and how they will task and train individuals to use the system appropriately and to perform human review. We also think vendors should provide information about how their systems work and the factors that will affect performance.

Importantly, systems must also be subject to testing to ensure they are performing accurately. We recommend that vendors of facial recognition like Microsoft make their systems available for reasonable third party testing and implement mitigation plans for any performance gaps, including across demographic groups.
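
To make the idea of demographic testing concrete, the following is a minimal sketch of how an evaluator might compare false match rates across groups. The trial data, group labels and gap threshold are all hypothetical; this is not any vendor's actual test suite.

```python
from collections import defaultdict

# Each record is a hypothetical impostor comparison: two images of
# different people, the demographic group label of the probe image,
# and whether the system (incorrectly) declared a match.
impostor_trials = [
    {"group": "group_a", "matched": False},
    {"group": "group_a", "matched": True},   # a false match
    {"group": "group_b", "matched": False},
    # ... thousands more trials in a real evaluation
]

def false_match_rates(trials):
    """False match rate per group: false matches / impostor comparisons."""
    counts = defaultdict(lambda: {"false_matches": 0, "total": 0})
    for t in trials:
        c = counts[t["group"]]
        c["total"] += 1
        c["false_matches"] += t["matched"]  # bool counts as 0 or 1
    return {g: c["false_matches"] / c["total"] for g, c in counts.items()}

rates = false_match_rates(impostor_trials)
print("False match rate by group:", rates)

# Flag a material gap if one group's error rate is far above another's.
worst, best = max(rates.values()), min(rates.values())
if best > 0 and worst / best > 2.0:  # hypothetical gap threshold
    print("Material demographic performance gap detected")
```

The point of the structure is that error rates are computed per group rather than only in aggregate, which is what allows a material gap to be detected at all.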

We also think that organizations deploying facial recognition must test systems in operational conditions, given the impact that environmental factors like lighting and backdrop have on performance. In the commercial setting, we think regulation should require conspicuous notice and express opt-in consent for any tracking.

I'll close my remarks by saying that we commend many of the elements of the provincial and federal privacy commissioners' recommendations from earlier this week, which set out important elements of the legal framework for facial recognition.

Thank you very much.

3:40 p.m.

Conservative

The Chair Conservative Pat Kelly

Thank you, Mr. Larter.

Now we have Mr. Farooq for five minutes.

3:40 p.m.

Mustafa Farooq Chief Executive Officer, National Council of Canadian Muslims

I'll actually pass it over to my colleague, if that's okay, Chair.

3:40 p.m.

Conservative

The Chair Conservative Pat Kelly

Okay, Mr. Mohammad, go ahead.

3:40 p.m.

Rizwan Mohammad Advocacy Officer, National Council of Canadian Muslims

Thank you, Mr. Chair and members of the committee, for the opportunity to offer our thoughts on this study.

My name is Rizwan Mohammad, and I'm an advocacy officer with the National Council of Canadian Muslims, the NCCM. I'm joined today by NCCM CEO Mustafa Farooq. I'd also like to thank NCCM intern Hisham Fazail for his work on our submission.

Today we want to look at the heart of the problem with facial recognition technology, or FRT. A number of national security and policing agencies, as well as other government agencies, have come before you to tell you how FRT is an important tool that has great potential use across government. You've been told that FRT can help overcome problems of human cognition and bias.

Here are some other names that you all know, names associated with times when these same agencies told you that surveillance would be done in ways that were constitutionally sound and proportionate: Maher Arar, Abdullah Almalki and Mohamedou Ould Slahi.

The same agencies that lied to the Canadian people about surveilling Muslim communities are coming before you now to argue that while mass surveillance will not be happening, FRT can and should be used responsibly. Agencies like the RCMP have already been found by the Privacy Commissioner to have broken the law when it comes to FRT.

We are thus making the following two recommendations, and we want to be clear that our submissions are limited to exploring FRT in the non-consumer context.

First, we recommend that the government put forth clear and unequivocal privacy legislation that severely curtails how FRT can be utilized in the non-consumer context, allowing only for judicially approved exceptions in the context of surveillance.

Second, we recommend that the government set out clear penalties for agencies caught violating rules around privacy and FRT.

Let us begin with the first recommendation, calling for a blanket ban on FRT across the government without judicial authorization in the context of any and all national security agencies, including but not exclusive to the RCMP, CSIS and the CBSA. You know the reasons for this already. A 2018 report in the U.K. cited figures showing that facial recognition software used by the Metropolitan Police returned incorrect matches in 98% of cases. Another study, from 2019, which drew on a different methodology, reported a false positive rate of 38%.
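
The gap between figures like 98% and 38% often comes down to methodology rather than contradiction: the same trial can yield very different "error rates" depending on the denominator used. Here is a minimal sketch with hypothetical counts, not the actual Metropolitan Police trial data:

```python
# Hypothetical deployment numbers, for illustration only.
faces_scanned = 100_000   # people who walked past the cameras
alerts_raised = 50        # faces the system flagged as watchlist matches
correct_alerts = 1        # alerts confirmed as genuine matches

false_alerts = alerts_raised - correct_alerts

# Methodology 1: errors as a share of alerts raised (how a
# "98% of matches were wrong" style figure is computed).
error_rate_per_alert = false_alerts / alerts_raised    # 0.98

# Methodology 2: errors as a share of everyone scanned (a classical
# false positive rate) -- the same system looks far more accurate.
false_positive_rate = false_alerts / faces_scanned     # 0.00049

print(f"{error_rate_per_alert:.0%} of alerts wrong; "
      f"{false_positive_rate:.3%} false positive rate overall")
```

Both numbers describe the same system; which one a study reports depends on what it divides by.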

We are well aware that FRT works differently, and with different accuracy results, depending on the technology, but we all acknowledge as a matter of fact that there are algorithmic biases when it comes to FRT. Given what we know, given the privacy risks that FRT poses, and given that Canadians, including members on other committees in this House, have raised concerns around systemic racism in policing, we agree with other witnesses who have appeared before this committee in calling for an immediate moratorium on all uses of FRT in the national security context and for the RCMP until legislative guidelines are developed.

Simultaneously, we recommend that in developing legislative guidelines, a very high threshold be utilized, including judicial authorization, oversight and timeline limitations.

Secondly, we are shocked by the blasé attitude that the RCMP has taken in approaching the issue of its use of Clearview AI. First the RCMP denied using Clearview AI, but then confirmed it had been using the software after news broke that the company's client list had been hacked. The excuse given was that the use of FRT wasn't widely known within the RCMP. The false answer the RCMP gave to the Privacy Commissioner, which was as credible as the “dog ate my homework” excuse, was completely unacceptable.

The RCMP then had the audacity, after the Privacy Commissioner's findings in the report, to state that it did not necessarily agree with the findings. While the RCMP has taken certain steps to ameliorate the concerns raised, a failure of accountability, when it comes to clear errors and misleading statements, must require clear penalties. Otherwise, how can we trust any such process or commitment to avoid mass surveillance?

We encourage this committee to recommend that strong penalties be assessed against agencies and officers who may breach the rules created around FRT, potentially through an amendment to the RCMP Act. We will provide the committee with a broader written brief in due course.

Subject to any questions, these are our submissions.

Thank you.

3:45 p.m.

Conservative

The Chair Conservative Pat Kelly

Thank you for those remarks.

We will begin our questions with Mr. Williams. Mr. Williams, you have six minutes.

3:45 p.m.

Conservative

Ryan Williams Conservative Bay of Quinte, ON

Thank you to our witnesses for attending today.

Through you, Mr. Chair, I have some questions for Mr. Larter.

We understand that you banned U.S. police services from using facial recognition technology. What was the situation, or what actions were taken, that led Microsoft to ban those police services from FRT?

3:45 p.m.

Director, Responsible Artificial Intelligence Public Policy, Microsoft

Owen Larter

Thank you very much for the question.

It is the case that we don't sell facial recognition to local police in the U.S. Our position is that it's really important to get law in place that can protect human rights in the context of facial recognition, and one of the challenges in the U.S. is that there is no law on that front. There isn't the type of privacy law that you have in a lot of other countries, including Canada, although I'm aware of ongoing conversations about how the privacy framework in Canada can be improved, and those are important conversations to have as well.

That's our position. That's why we're using our voice proactively, attending conversations like this and contributing to important work like this, to make sure that we can get some robust regulation in place for the use of facial recognition, with particular urgency around police use, and more broadly to make sure that the technology is being used in a way that is transparent, accountable and rights-protecting.

3:45 p.m.

Conservative

Ryan Williams Conservative Bay of Quinte, ON

Are Canadian police services also banned?

3:50 p.m.

Director, Responsible Artificial Intelligence Public Policy, Microsoft

Owen Larter

That's not the policy at present.

3:50 p.m.

Conservative

Ryan Williams Conservative Bay of Quinte, ON

Is that because we have different policies here? Are there policies that Canada has right now that you like?

3:50 p.m.

Director, Responsible Artificial Intelligence Public Policy, Microsoft

Owen Larter

Yes. To come back to what I referenced before, I think there's a framework of laws that we're looking for to ensure that facial recognition is used in a way that is rights-respecting. I think privacy law is a part of that. I think there's an opportunity to improve privacy frameworks around the world. We're aware of the ongoing conversation in Canada as well. The lack of any sort of broad privacy laws in the U.S. is the main reason for that position.

3:50 p.m.

Conservative

Ryan Williams Conservative Bay of Quinte, ON

Thank you.

You just talked about how your responsible AI had a set of guidelines that had to be followed for its use. What are those guidelines?

3:50 p.m.

Director, Responsible Artificial Intelligence Public Policy, Microsoft

Owen Larter

We have our broader responsible AI program, which we have been developing for the last few years. It has a few components. We have a company-wide AI governance team. This is a multi-stakeholder team that brings some of our Microsoft researchers, world-leading AI researchers who share knowledge about where the technology is today and where the state of the art is going, together with people working on legal and policy issues and people with an engineering background to oversee the general program.

In terms of the other components, we also have a responsible AI standard. This is a set of requirements across our six AI principles, which I can go into detail on, that ensure that any teams that are developing AI systems or deploying AI systems are doing so in a way that meets our principles.

The final piece we have is also a “sensitive use” review process. This comes into play when any potential development or deployment of a system hits one of three potential triggers. Any time a system is going to be used in a way that affects an individual's legal opportunities or legal standing, any time there is a potential for psychological or physical harm, or any time there is an implication for human rights, then the governance team that I mentioned will come together and review whether we can move forward with a particular deployment or development of AI to ensure that it's being done in a responsible way.

You can imagine that those conversations apply across all of our systems, including the discussions we're having on facial recognition.
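
As a schematic illustration of the three triggers just described, here is a hypothetical sketch of what the escalation logic amounts to; this is not Microsoft's actual review tooling, and all names are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class ProposedUse:
    """A hypothetical description of a proposed AI deployment."""
    description: str
    affects_legal_standing: bool   # e.g., legal opportunities or standing
    risk_of_harm: bool             # psychological or physical harm
    human_rights_implications: bool

def needs_sensitive_use_review(use: ProposedUse) -> bool:
    """Any one of the three triggers sends the case to the governance team."""
    return (use.affects_legal_standing
            or use.risk_of_harm
            or use.human_rights_implications)

case = ProposedUse("FRT in criminal investigations", True, True, True)
if needs_sensitive_use_review(case):
    print(f"Escalate for governance review: {case.description}")
```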

3:50 p.m.

Conservative

Ryan Williams Conservative Bay of Quinte, ON

Thank you.

You talked about proper testing protocols. What recommendations would you make to our committee on proper testing protocols? Do they also use human review when looking at that technology?

3:50 p.m.

Director, Responsible Artificial Intelligence Public Policy, Microsoft

Owen Larter

We think that this is a really important part of the conversation, and it's for a number of reasons.

The accuracy of facial recognition has improved markedly in recent years. There's some very good research being done by the National Institute of Standards and Technology in the U.S., or NIST, that shows that accuracy has improved markedly for the best-performing systems in recent years. There is, however, a very wide gap between the best-performing systems and the least well-performing systems, and the less accurate systems tend to be more discriminatory as well, so we think testing is really important.

There are a couple of components to it. We think that vendors like Microsoft should allow for their systems to be tested by independent third parties in a reasonable fashion, so we allow for that at the moment via an API. A third party can go and test our system to see how accurate it is. We think that vendors should be required to respond to any testing and address any material performance gaps, including across demographics, so that's one thing: vendors doing something on the testing side.
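
As a sketch of what reasonable third party testing through an API might look like in practice: the endpoint, authentication scheme and response field below are hypothetical, not the actual Face API contract.

```python
import requests

# Hypothetical vendor endpoint and key -- not a real API contract.
VERIFY_URL = "https://vendor.example.com/face/verify"
API_KEY = "test-key"

def same_person(image_a: bytes, image_b: bytes) -> bool:
    """Ask the vendor's service whether two images show the same person."""
    resp = requests.post(
        VERIFY_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        files={"image_a": image_a, "image_b": image_b},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["is_match"]  # hypothetical response field

def run_benchmark(pairs):
    """pairs: list of (image_a, image_b, ground_truth) tuples."""
    correct = sum(same_person(a, b) == truth for a, b, truth in pairs)
    return correct / len(pairs)

# accuracy = run_benchmark(labelled_pairs)  # pairs supplied by the tester
```

The essential property is that the tester, not the vendor, supplies the labelled image pairs, so the vendor cannot tune the system to the benchmark.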

We also think it's really very important that organizations deploying a facial recognition service test it in operational conditions. If you are a police customer and you're using a facial recognition system, you shouldn't just take the word of the vendor that it's going to be accurate in the abstract; you also need to test it in operational conditions. That's because environmental factors like image quality or camera positions have a really big impact on accuracy.

You can imagine that if you have a camera that is placed looking down on someone's head and there are smudges on the lens or poor quality imagery going into the system in general, it's going to have a really big impact on performance; therefore, there should also be a testing requirement for organizations deploying facial recognition to make sure that they know that it is working accurately in the environment in which it's going to be used.
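
To illustrate the deployment-side testing being described, here is a minimal sketch that breaks accuracy down by environmental condition, assuming the deploying organization has logged trial outcomes with metadata. The column names and the accuracy floor are hypothetical.

```python
import pandas as pd

# Hypothetical operational trial log: one row per verification attempt,
# with the environment recorded alongside the outcome.
trials = pd.DataFrame({
    "lighting": ["good", "good", "dim", "dim", "dim"],
    "angle":    ["frontal", "overhead", "frontal", "overhead", "overhead"],
    "correct":  [True, True, True, False, False],
})

# Accuracy per environmental condition, not just one overall number.
by_condition = trials.groupby(["lighting", "angle"])["correct"].mean()
print(by_condition)

# Flag conditions where the system falls below an acceptable floor.
floor = 0.95  # hypothetical accuracy requirement
print(by_condition[by_condition < floor])
```

An aggregate accuracy figure can hide the fact that, say, dim overhead cameras fail most of the time, which is exactly the gap operational testing is meant to surface.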

3:50 p.m.

Conservative

Ryan Williams Conservative Bay of Quinte, ON

Thank you very much, Mr. Larter.

3:50 p.m.

Conservative

The Chair Conservative Pat Kelly

Now, for six minutes, we have Ms. Hepfner.

3:50 p.m.

Liberal

Lisa Hepfner Liberal Hamilton Mountain, ON

Thank you very much.

Thank you to all the witnesses for joining us today. I'd also like to start with you, Mr. Larter.

I was reading an article written by Microsoft's Brad Smith in 2018 that covers a lot of issues similar to those you are talking about today. Facial recognition technology was being developed, and Microsoft was calling on government to impose regulations on the industry.

I'm wondering if you could reflect on how it works when tech giants can come up with this technology and then ask governments to regulate it. Is that how it should work? Are there better ways that we can maybe bring governments in as technology is being developed?

I'm just hoping you can reflect on that a bit.

3:55 p.m.

Director, Responsible Artificial Intelligence Public Policy, Microsoft

Owen Larter

It's a really important question, and we definitely think that it's for government to play a leading role in creating a regulatory framework for technology in general, including technologies like facial recognition.

We've tried to do a couple of things over the last few years. First was to implement internal safeguards so that we're doing our bit as a vendor of facial recognition to make sure that the technology is being used responsibly. I talked about our responsible AI program. We also have our Face API transparency note, which I think is a really important part of the conversation and hits at this need for transparency around how facial recognition is developed and deployed.

This transparency note is a document that we make publicly available, and it is clear about how a system works in terms of some of the capabilities of the technology, limitations about the technology and what it shouldn't be used for and the factors that will affect performance, so that a customer using the technology is well informed and able to make informed and responsible deployment decisions.

That's some of what we've been doing internally. We do also think—because it's really important to build trust in technology in general and particularly in facial recognition, given some of the potential risks it can raise, which I mentioned in my remarks—that there is also a need for a regulatory framework.

We are keen to support those conversations. That's why we're very happy to be invited to discussions like this today. We really want to contribute our knowledge around how the technology works and where it is going so that we can create, led by governments and in conjunction with others across society like civil society, a good, robust regulatory framework for technology so that the benefits of this powerful technology can be realized in a way that also addresses some of the challenges.

3:55 p.m.

Liberal

Lisa Hepfner Liberal Hamilton Mountain, ON

Thank you.

In your opening remarks, you went over a bunch of different ways that FRT is being used for good reasons and for possibly bad reasons as well. Can you let this committee know, through you, Mr. Chair, how widespread FRT is in our society right now? How is it affecting the lives of everyday Canadians?

3:55 p.m.

Director, Responsible Artificial Intelligence Public Policy, Microsoft

Owen Larter

It's a very good question. I would say it is increasingly used. It is a technology that can have a lot of benefits, and I think individuals and organizations are realizing that.

There are a few different applications. A lot of them have to do with security, such as verification using facial recognition. For example, when you're logging in to your phone or your computer, often that is done through a facial recognition tool now. Frictionless and contactless check-in at airports would be another example of how facial recognition is being used, which has been particularly important over the last couple of years during the depths of the COVID crisis, obviously.

Beyond that, I think there are some really beneficial applications in the accessibility context. There are a number of organizations doing really interesting research around how facial recognition can help those who are blind or have low vision better understand and interact with the world around them. We had a project called Project Tokyo, which involved facial recognition, and it used a headset so that a blind individual could scan a room—let's say a canteen or an open space at work—and, if someone there had enrolled in the system and consented to be part of it, identify that person and go over proactively to start a conversation in a way that would be very difficult otherwise.

Another application that I think a lot of people in the accessibility community are excited about is facial recognition for people with Alzheimer's or similar diseases that make it increasingly difficult to remember or recognize friends and loved ones. You can imagine the way in which facial recognition is now being explored to help prompt individuals to be able to recognize those friends and loved ones.

It's becoming a long answer, but I'll round off by saying there are also positive applications in the law enforcement context as well. We do think that as part of the criminal investigation process, facial recognition, with robust safeguards around it, can be a useful investigative tool. It's also being used for online identification of missing and trafficked individuals, including children, in a way that has been very beneficial as well.

There are some real benefits there, but, again, there are the challenges that I also mentioned, which is why you need a regulatory framework that can realize those benefits in a way that addresses the challenges.

4 p.m.

Liberal

Lisa Hepfner Liberal Hamilton Mountain, ON

Thank you very much.

Mr. Chair, I have about 30 seconds left. I would just like to give this committee oral notice of a motion that I distributed yesterday. It is as follows:

That, pursuant to Standing Order 108(3)(h)(vii), the committee undertake a study in order to examine the issue of digital surveillance by employers of Canadians who work from home, including: (a) the prevalence of digital surveillance by employers; (b) the types of surveillance being collected; (c) how personal surveillance data is being stored and secured; (d) what rules are in place to protect employees' privacy rights while working from home; (e) data collection disclosure and permission rights of employees; that the committee report its findings and recommendations to the House; and that, pursuant to Standing Order 109, the committee request that the government table a comprehensive response to the report.

Thank you.

4 p.m.

Conservative

The Chair Conservative Pat Kelly

Thank you. You are giving notice of this motion?