Thank you very much.
Good afternoon, everyone.
Thank you very much, Mr. Chair and vice-chairs, for the opportunity to contribute today.
My name is Owen Larter. I'm in the public policy team in the Office of Responsible AI at Microsoft.
There are really three points that I want to get across in my comments today.
First, facial recognition is a new and powerful technology that is already being used and for which we now need regulation.
Second, there is a particular urgency around regulating police use of facial recognition, given the consequential nature of police decisions.
Third, there is a real opportunity for Canada to lead the way globally in shaping facial recognition regulation that protects human rights and advances transparency and accountability.
I want to start by applauding the work of the committee on this really important topic. We at Microsoft are suppliers of facial recognition. We do believe that it can bring real benefits to society. These include helping secure devices and assisting people who are blind or have low vision to access more immersive social experiences. In the public safety context, it can be used to help find victims of trafficking and as part of the criminal investigation process.
However, we are also clear-eyed about the potential risks of this technology. That includes the risk of bias and unfair performance, including across different demographic groups; the potential for new intrusions into people's privacy; and possible threats to democratic freedoms and human rights.
In response to this, in recent years we've developed a number of internal safeguards at Microsoft. These include our facial recognition principles and the creation of our Face API transparency note. This transparency note communicates, in language aimed at non-technical audiences, how our facial recognition works, what its capabilities and limitations are, and the factors that will affect performance, all with a view to helping customers understand how to use it responsibly.
Facial recognition work builds on Microsoft's broader responsible AI program. This is a program that ensures colleagues are developing and deploying AI in a way that adheres to our principles. The program includes our cross-company AI governance team and our responsible AI standard, which is a series of requirements that colleagues developing and deploying AI must adhere to. It also includes our process for reviewing sensitive AI uses.
In addition to these internal safeguards, we also believe that there is a need for regulation. This need is particularly acute in the law enforcement context, as I mentioned. We really do feel that the importance of this committee's work cannot be overstated. We commend the way in which it is bringing together stakeholders from across society, including government, civil society, industry and academia to discuss what a regulatory framework should look like.
We note that while there has been positive progress in places like Washington state in the U.S., with important ongoing conversations in the EU and elsewhere, we do believe that Canada has an opportunity to play a leading role in shaping regulation in this space.
We think that type of regulation needs to do three things. It needs to protect human rights, advance transparency and accountability, and ensure testing of facial recognition systems in a way that demonstrates they are performing appropriately.
When it comes to law enforcement, there are important human rights protections that regulations need to cover, including prohibiting the use of facial recognition for indiscriminate mass surveillance and prohibiting use on the basis of an individual's race, gender, sexual orientation or other protected characteristics. Regulations should also ensure it's not being used in a way that chills important freedoms, such as freedom of assembly.
On transparency and accountability, we think law enforcement agencies should adopt a public use policy setting out how they will use facial recognition, which databases they will be searching, and how they will task and train individuals to use the system appropriately and to perform human review. We also think vendors should provide information about how their systems work and the factors that will affect performance.
Importantly, systems must also be subject to testing to ensure they are performing accurately. We recommend that vendors of facial recognition like Microsoft make their systems available for reasonable third party testing and implement mitigation plans for any performance gaps, including across demographic groups.
We also think that organizations deploying facial recognition must test systems in operational conditions, given the impact that environmental factors like lighting and backdrop have on performance. In the commercial setting, we think regulation should require conspicuous notice and express opt-in consent for any tracking.
I'll close my remarks by saying that we commend many elements of the provincial and federal privacy commissioners' recommendations from earlier this week, which set out important components of a legal framework for facial recognition.
Thank you very much.