Good afternoon, Mr. Chair and members of the committee. Thank you for having me here today.
My name is Ms. Ana Brandusescu. I research governance and procurement of artificial intelligence technologies, particularly by government. That includes facial recognition technology, or FRT.
I will present two issues and three solutions today. The first issue is discrimination. FRT is better at distinguishing white male faces than Black, brown, Indigenous and trans faces. We know this from groundbreaking work by scholars like Joy Buolamwini and Timnit Gebru. Their study found that:
...darker-skinned females are the most misclassified group (with error rates of up to 34.7%). The maximum error rate for lighter-skinned males is 0.8%.
FRT generates many false positives, identifying you as someone you are not. These errors can lead agents of the state to arrest the wrong person. Journalist Khari Johnson recently wrote for Wired about how, in the U.S., three Black men were wrongfully arrested because FRT misidentified them.
An HR department could also deny someone a job because of an FRT misidentification, or an insurance company could deny a person coverage on the same basis. FRT is more than problematic.
The House of Commons Standing Committee on Public Safety and National Security found in its 2021 report that there is systemic racism in policing in Canada. FRT exacerbates that systemic racism.
The second issue is the lack of regulatory mechanisms. In a report I co-authored with privacy and cybersecurity expert Yuan Stevens for the Centre for Media, Technology and Democracy, we wrote that “as taxpayers, we are essentially paying to be surveilled, where companies like Clearview AI can exploit public sector tech procurement processes.”
Regulation is difficult. Why? Like much of big tech, AI crosses political boundaries. It can also evade procurement policies, as Clearview AI did by offering free software trials. Because FRT is embedded in opaque, complex systems, it is sometimes hard for a government to know that FRT is even part of a software package.
In June 2021, the Office of the Privacy Commissioner, or OPC, was clear about the need for system checks to ensure that the RCMP complies with the law when using new technologies. However, the RCMP's response to the OPC was in favour of industry self-regulation. Self-regulation, for example in the form of algorithmic impact assessments, can be insufficient. A lot of regulation vis-à-vis AI is essentially a voluntary activity.
What is the way forward? Government entities large and small have called for a ban on the use of FRT, and some have already banned it. That should be the end goal.
The Montréal Society and Artificial Intelligence Collective, which I contribute to, participated in the 2021 public consultation on the Toronto Police Services Board's draft AI policy. Here, I extend some of those recommendations along with my own. I propose three solutions.
The first solution is to improve public procurement. Clearview AI got away with what it did across multiple jurisdictions in Canada because there was never a contract or procurement process involved. To prevent this, the OPC should create a policy for the proactive disclosure of free software trials used by law enforcement and all of government, as well as a public registry for them. We need to make the black box a glass box. We need to know what we are being sold. We need to increase in-house AI expertise; otherwise, we cannot be certain that agencies even know what they are buying. Finally, companies linked to human rights abuses, such as Palantir, should be removed from Canada's pre-qualified AI supplier list.
The second solution is to increase transparency. The OPC should work with the Treasury Board to create another public registry, this time for AI: especially AI used for law enforcement and national security purposes, and AI used by agencies contemplating face ID for social assistance programs such as employment insurance. An AI registry would help researchers, academics and investigative journalists inform the public. We also need to improve our algorithmic impact assessments, also known as AIAs.
AIAs should engage more meaningfully with civil society, yet the only external non-governmental actors consulted in Canada's three published AIAs were companies. The OPC should work with the Treasury Board to develop more specific, ongoing monitoring and reporting requirements, so that the public knows whether the use or impact of a system has changed since the initial AIA.
The third solution is to prioritize accountability. From the inside, the OPC should follow up on the RCMP's privacy commitments and demand a public-facing report that explains in detail the use of FRT in its units. This requirement can be applied to all departments and agencies in the future. From the outside, the OPC and the Treasury Board should fund and listen to civil society and community groups working on social issues, not only technology-related issues.
Thank you.