Evidence of meeting #11 for Access to Information, Privacy and Ethics in the 44th Parliament, 1st Session. (The original version is on Parliament's site, as are the minutes.)

Also speaking

Cynthia Khoo  Research Fellow, Citizen Lab, Munk School of Global Affairs and Public Policy, University of Toronto, As an Individual
Carole Piovesan  Managing Partner, INQ Law
Ana Brandusescu  Artificial Intelligence Governance Expert, As an Individual
Kristen Thomasen  Professor, Peter A. Allard School of Law, University of British Columbia, As an Individual
Petra Molnar  Lawyer, Refugee Law Lab, York University

11:55 a.m.

Conservative

James Bezan Conservative Selkirk—Interlake—Eastman, MB

The Privacy Commissioner would have the ability to bring about that pause if we believe people's privacy rights are going to be violated, so that buys us that time to do the evaluation.

How many police agencies in Canada are using facial recognition technology?

11:55 a.m.

Research Fellow, Citizen Lab, Munk School of Global Affairs and Public Policy, University of Toronto, As an Individual

Cynthia Khoo

That's a great question.

If you're looking at Clearview AI, I believe it was several dozen that were testing Clearview AI. However, for the purposes of our report, in terms of who were really using it, we found that it was the Toronto Police Service and the Calgary Police Service.

I saw last month that the Edmonton Police Service signed a contract. This was not with Clearview AI; this was with NEC Corporation, so it's separate facial recognition technology.

In our report, we saw York and Peel had announced that they were planning to engage in contracts too.

11:55 a.m.

Conservative

The Chair Conservative Pat Kelly

Thank you. I'm going to have to go on, but that's a great question, and if you have additional specific information on that, it would probably be very helpful to our analysts in the preparation of the report.

With that, we'll finish this with Ms. Khalid for three minutes.

11:55 a.m.

Liberal

Iqra Khalid Liberal Mississauga—Erin Mills, ON

Thank you very much, Chair; and thank you, witnesses, for your very compelling testimony today.

In the interest of time, I'll just ask Ms. Piovesan. We see on Facebook, when you log in, you put up a photo of yourself and your friends, and all of a sudden, when you go to tag it, there's a list of potential people it could be, and nine times out of 10 it is accurate. When these social media platforms use facial recognition and their algorithms, they create these circles or bubbles in society, and we've seen how that commercial aspect has an impact on discrimination and on creating extreme views, and so on.

Could you maybe comment on that commercial aspect? How do we narrow that scope to make sure that businesses are able to efficiently provide services to consumers without consumers then becoming sheep to be led down a certain path, not just in terms of products but also ideologies?

Noon

Managing Partner, INQ Law

Carole Piovesan

I have four quick proposals.

The first is that we need to have a risk assessment of the systems conducted to understand where the risks, the potential unintended consequences and foreseeable harms are.

That also leads to an impact assessment, where you have to look specifically at what the potential impacts of this system are on individuals, on property and on rights. Have that be a thorough assessment, as we already see in the privacy space, along the lines of what you heard Ms. Khoo refer to as an algorithmic impact assessment, which has already been adopted by the federal government.

Next, there needs to be clear and plain disclosure so people can make decisions in the commercial context in particular. Often it's not a need to have; it's a nice to have. People need to have that opportunity to understand how their information will be used, not through 20-page privacy policies—which I myself write all the time—but through clear and plain just-in-time information so that they can make and change their decisions and their consent if they choose not to continue. If they had agreed to provide their face originally, they have the right to change that over time.

Noon

Liberal

Iqra Khalid Liberal Mississauga—Erin Mills, ON

Thank you. I appreciate that.

I realize that I have 20 seconds, but Ms. Khoo, do you want to comment on that?

Noon

Research Fellow, Citizen Lab, Munk School of Global Affairs and Public Policy, University of Toronto, As an Individual

Cynthia Khoo

I have nothing further to add, but I will use that time to recommend the work of Simone Browne in reference to Mr. Green's earlier comments on racial justice. She wrote a book called Dark Matters, which traces biometric surveillance back to trans-Atlantic slavery and the branding of slaves. It argues that this was the origin of biometric surveillance.

Noon

Conservative

The Chair Conservative Pat Kelly

Thank you very much to our witnesses.

With that we will briefly suspend.

Noon

NDP

Matthew Green NDP Hamilton Centre, ON

Mr. Chair, before the suspension, through you to the witnesses, could I ask, if there is anything they feel they didn't get a chance to fully answer, that they please provide it to this committee in writing for the consideration of the report?

I am remiss and I share the concerns [Inaudible—Editor].

Noon

Conservative

The Chair Conservative Pat Kelly

Yes, you may do so and you have done so now.

Thank you, Mr. Green.

As a general comment to members, when you ask a complicated question and leave 10 seconds for the response, you put me in the awkward position of having to cut off our witness. Manage your time so you can get the answers in rather than just your questions.

With that we'll suspend—

Noon

NDP

Matthew Green NDP Hamilton Centre, ON

Mr. Chair, complicated topics have complicated questions.

If the witnesses can provide their expanded answers, that would be great for the committee's consideration.

12:05 p.m.

Conservative

The Chair Conservative Pat Kelly

I agree one hundred per cent.

With that, we will briefly suspend while we transition the panels.

12:05 p.m.

Conservative

The Chair Conservative Pat Kelly

The meeting has resumed.

I encourage everyone in the room to take their seats and keep the side discussion down so we can get started. Thank you.

We're getting pressed for time already. I'm going to start off with our opening statements. I'm going to ask our witnesses to keep to an absolute maximum of five minutes. I'm going to have to cut everyone off right when we get to that point.

Today, we have as individuals Ms. Ana Brandusescu, artificial intelligence governance expert; Kristen Thomasen, professor at the University of British Columbia, Peter A. Allard School of Law; and Petra Molnar, associate director at the Refugee Law Lab.

We'll begin with Ms. Brandusescu.

You have an absolute max of five minutes.

12:05 p.m.

Ana Brandusescu Artificial Intelligence Governance Expert, As an Individual

Good afternoon, Mr. Chair and members of the committee. Thank you for having me here today.

My name is Ms. Ana Brandusescu. I research governance and procurement of artificial intelligence technologies, particularly by government. That includes facial recognition technology, or FRT.

I will present two issues and three solutions today. The first issue is discrimination. FRT is better at distinguishing white male faces than Black, brown, Indigenous and trans faces. We know this from groundbreaking work by scholars like Joy Buolamwini and Timnit Gebru. Their study found that:

...darker-skinned females are the most misclassified group (with error rates of up to 34.7%). The maximum error rate for lighter-skinned males is 0.8%.

FRT generates lots of false positives. That means identifying you as someone you're not. This causes agents of the state to arrest the wrong person. Journalist Khari Johnson recently wrote for Wired about how in the U.S., three Black men were wrongfully arrested because they were misidentified by FRT.

Also, HR could deny someone a job because of FRT misidentification, or an insurance company could deny a person coverage for the same reason. FRT is more than problematic.

The House of Commons Standing Committee on Public Safety and National Security's report from 2021 states that there is systemic racism in policing in Canada. FRT exacerbates systemic racism.

The second issue is the lack of regulatory mechanisms. In a report I co-authored with privacy and cybersecurity expert Yuan Stevens for the Centre for Media, Technology and Democracy, we wrote that “as taxpayers, we are essentially paying to be surveilled, where companies like Clearview AI can exploit public sector tech procurement processes.”

Regulation is difficult. Why? Like much of big tech, AI crosses political boundaries. It can also evade procurement policies, such as Clearview offering free software trials. Because FRT is embedded in opaque, complex systems, it is sometimes hard for a government to know that FRT is part of a software package.

In June 2021, the Office of the Privacy Commissioner, OPC, was clear about needing system checks to ensure that the RCMP legally complies when using new technologies. However, the RCMP's response to the OPC was in favour of industry self-regulation. Self-regulation—for example, in the form of algorithmic impact assessments—can be insufficient. A lot of regulation vis-à-vis AI is essentially a volunteer activity.

What is the way forward? Government entities large and small have called for a ban on the use of FRT, and some have already banned it. That should be the end goal.

The Montréal Society and Artificial Intelligence Collective, which I contribute to, participated in the 2021 public consultation for the Toronto Police Services Board's draft AI policy. Here, I extend some of these recommendations along with my own. I propose three solutions.

The first solution is to improve public procurement. Clearview AI got away with what it did across multiple jurisdictions in Canada because there was never a contract or procurement process involved. To prevent this, the OPC should create a policy for the proactive disclosure of free software trials used by law enforcement and all of government, as well as create a public registry for them. We need to make the black box a glass box. We need to know what we are being sold. We need to increase in-house AI expertise; otherwise, we cannot be certain agencies even know what they are buying. Also, companies linked to human rights abuses, like Palantir, should be removed from Canada's pre-qualified AI supplier list.

The second solution is to increase transparency. The OPC should work with the Treasury Board to create a public registry, this time for AI, and especially AI used for law enforcement and national security purposes, and for agencies contemplating face ID for social assistance, like employment insurance. An AI registry will be useful for researchers, academics and investigative journalists to inform the public. We also need to improve our algorithmic impact assessments, also known as AIAs.

AIAs should more meaningfully engage with civil society, yet the only external non-governmental actors consulted in Canada's three published AIAs were companies. The OPC should work with the Treasury Board to develop more specific, ongoing monitoring and reporting requirements, so the public knows if the use or impact of a system has changed since the initial AIA.

The third solution is to prioritize accountability. From the inside, the OPC should follow up on RCMP privacy commitments and demand a public-facing report that explains in detail the use of FRT in its unit. This can be applied to all departments and agencies in the future. From the outside, the OPC and the Treasury Board should fund and listen to civil society and community groups working on social issues, not only technology-related issues.

Thank you.

12:10 p.m.

Conservative

The Chair Conservative Pat Kelly

Thank you very much.

With that, we'll go to Ms. Kristen Thomasen. You have five minutes.

12:10 p.m.

Professor Kristen Thomasen Professor, Peter A. Allard School of Law, University of British Columbia, As an Individual

Thank you, Mr. Chair, and thank you to the committee.

I am joining you from the unceded territory of the Squamish, Tsleil-Waututh and Musqueam nations.

As you heard, I'm a law professor, and my research focuses on the domestic regulation of artificial intelligence and robotics, especially as this relates to public spaces and privacy. I'm representing my own views here today.

I'm very grateful to the committee for the invitation to contribute to this important study. I urge this committee to apply a substantive equality lens to your report and all recommendations made to the government.

Much research has already shown how inequitable various forms of facial surveillance can be, particularly with respect to the misidentification of individuals on the basis of race, gender and age and the quality and source of data used to train such systems. However, even perfectly accurate facial surveillance systems built on data reported to be legally sourced can reflect and deepen social inequality for a range of reasons. I'll focus on some key points and welcome further questions later, including questions related to apparently narrow beneficial use cases.

First, facial surveillance systems are socio-technical systems, meaning that these technologies cannot be understood just by looking at how a system is built. One must also look at how it will interact with the people who use it, the people affected by it and the social environments in which it is deployed.

Facial surveillance consolidates and perfects surveillance, and it is introduced into a society where, for example, the Supreme Court of Canada, among others, has already recognized that communities are over-policed on the basis of protected identity grounds. Equity-seeking groups face greater quantities of interpersonal, state and commercial surveillance, and can experience qualitatively greater harm from that surveillance. More perfect surveillance means greater privacy harm and inequity.

I urge the committee to explicitly consider social context in your report and recommendations. This includes that biometric surveillance is not new. I encourage you to place facial surveillance within its historical trajectory, which emerged from eugenic and white supremacist sciences.

Part of the socio-technical context in which facial surveillance is introduced includes gaps in the application and underlying theories of laws of general application. In other words, our laws do not adequately protect against misuses of this technology. In particular, from my own research, I would flag that interpersonal uses of facial surveillance will be under-regulated.

I'm very encouraged to see that the committee is considering interpersonal use within the scope of this study and urge the committee to examine the interrelations between interpersonal surveillance and commercial and state entities. For example, while not specific to facial surveillance, the emergence of Amazon Ring-police partnerships in the United States highlights the potential interweaving of personal surveillance, commercial surveillance infrastructure and state policing, which will at least present challenges to current tort and constitutional laws as interrelations like this emerge in Canada.

Personal use facial surveillance has already been shown to be highly damaging in various cases, particularly with respect to technology-facilitated harassment, doxing and other forms of violence. These uses remain under-regulated because interpersonal surveillance in public spaces and public information is under-regulated. While governance of interpersonal privacy may not fall exhaustively within federal jurisdiction, I do think this is a crucial part of understanding facial surveillance as a socio-technical system and must be considered within the governance of such a technology. I also do not think the solution is to criminalize the personal use of facial surveillance systems, but rather to bolster normative and legal recognition of interpersonal rights and to regulate the design and availability of facial surveillance technologies.

Laws and policies governing technology can have at least three foci: regulating the uses of the technology, regulating the user, and/or regulating the design and availability of the technology. Regulation of design and availability may fall more directly within federal government jurisdiction and better focuses on those responsible for the creation of the possibility of such harm rather than only reactively focusing on punishing wrongdoing and/or compensating for harm that has already occurred.

Also, in terms of regulating the use of facial surveillance, I urge the committee to look to examples around the world where governments have adopted a moratorium on the use of facial surveillance, as has been mentioned by other witnesses, and I do also recommend the same in Canada. More is of course needed in the long term, including expanding the governance focus to include all forms of automated biometric surveillance, not exclusively facial surveillance. The committee may also consider recommending the creation of a national independent expert group to consult on further refinement of laws of general application and of design, use and user restrictions going forward, perhaps for both federal and provincial guidelines.

Expertise must include those—

12:15 p.m.

Conservative

The Chair Conservative Pat Kelly

Thank you.

12:15 p.m.

Prof. Kristen Thomasen

—from the impacted communities.

Thank you.

12:15 p.m.

Conservative

The Chair Conservative Pat Kelly

I'm really sorry, Ms. Thomasen, that I have to cut you off there. We need to go to our third panellist.

Ms. Molnar, you have five minutes.

March 21st, 2022 / 12:15 p.m.

Dr. Petra Molnar Lawyer, Refugee Law Lab, York University

Thank you so much.

My name is Petra Molnar. I'm a lawyer and an anthropologist. Today I would like to share with you a few reflections from my work on the human rights impacts of such technologies as the facial recognition used in immigration and for border management.

Facial recognition technology underpins many of the types of technological experiments that we are seeing in the migration and border space, technologies that introduce biometric mass surveillance into refugee camps, immigration detention proceedings and airports. However, when trying to understand the impacts of various migration management and border technologies—i.e., AI lie detectors, biometric mass surveillance and various automated decision-making tools—it is important to consider the broader ecosystem in which these technologies develop. It is an ecosystem that is increasingly replete with the criminalization of migration, anti-migrant sentiments, and border practices leading to thousands of deaths, which we see not only in Europe but also at the U.S.-Mexico border, and most recently at the U.S.-Canada border, when a family froze to death in Manitoba.

Since 2018 I have monitored and visited borders all around the world, most recently the U.S.-Mexico frontier and the Ukrainian border during the ongoing occupation. Borders easily become testing grounds for new technologies, because migration and border enforcement already make up an opaque and discretionary decision-making space, one where life-changing decisions are rendered by decision-makers with little oversight and accountability in a system of vast power differentials between those affected by technology and those wielding it.

Perhaps a real-world example would be instructive here to illustrate just how far-reaching the impacts of technologies used for migration management can be. A few weeks ago, I was in the Sonoran Desert at the U.S.-Mexico border to see first-hand the impacts of technologies that are being tested out. These technological experiments include various automated and AI-powered surveillance towers sweeping the desert. Facial recognition and biometric mass surveillance, and even recently announced “robodogs”—like my barking dog in the background—are now joining the global arsenal of border enforcement technologies.

The future is not just more technology, however; it is more death. Thousands of people have already perished making dangerous crossings. These are people like Mr. Alvarado, a young husband and father from Central America whose memorial site we visited. Indeed, surveillance and smart border technologies have been proven to not deter people from making dangerous crossings. Instead, people have been forced to change their routes towards less inhabited terrain, leading to loss of life.

Again, in the opaque and discretionary world of border enforcement and immigration decision-making, structures that are underpinned by intersecting systemic racism and historical discrimination against people migrating, technology's impacts on people's human rights are very real. As other witnesses have already said, we already know that facial recognition is highly discriminatory against Black and brown faces and that algorithmic decision-making often relies on biased datasets that render biased results.

For me, one of the most visceral examples of the far-reaching impacts of facial recognition is the increasing appetite for AI polygraphs, or lie detectors, used at the border. The EU has been experimenting with a now derided system called iBorderCtrl. Canada has tested a similar system called AVATAR. These polygraphs use facial and emotional recognition technologies to reportedly discern whether a person is lying when presented with a series of questions at a border crossing. However, how can an AI lie detector deal with differences in cross-cultural communication when a person, due to religious or ethnic differences, may be reticent to make eye contact, or may just be nervous? What about the impact of trauma on memory, or the fact that we know that we do not recollect information in a linear way? Human decision-makers already have issues with these complex factors.

At the end of the day, this conversation isn't really about just technology. It's about broader questions. It's about questions around which communities get to participate in conversations around proposed innovation, and which groups of people become testing grounds for border technologies. Why does the private sector get to determine, time and again, what we innovate on and why, in often problematic public-private partnerships, which states are increasingly keen to make in today's global AI arms race? Whose priorities really matter when we choose to create AI-powered lie detectors at the border instead of using AI to identify racist border guards?

In my work, based on years of on-the-ground research and hundreds of conversations with people who are themselves at the sharpest edges of technological experimentation at the border, it is clear that the current lack of global governance around high-risk technologies creates a perfect laboratory for high-risk experiments, making people on the move, migrants and refugees a testing ground.

Currently, very little regulation of FRT exists in Canada and internationally. However, the European Union's recently proposed regulation on AI demonstrates a regional recognition that technologies used for migration management need to be strictly regulated, with ongoing discussions around an outright ban on biometric mass surveillance, high-risk facial recognition and AI-type lie detectors. Canada should also take a leading role globally. We should introduce similar governance mechanisms that recognize the far-reaching human rights impacts of high-risk technologies and ban the high-risk use of FRT in migration and at the border.

We desperately need more regulation, oversight and accountability mechanisms for border tech used by states like Canada.

12:20 p.m.

Conservative

The Chair Conservative Pat Kelly

Thank you, Ms. Molnar.

I'm going to have to begin the questions. We're at 25 after 12. I am going to cut the six- and five-minute rounds to four minutes. With that, we should maybe end a few minutes after one o'clock.

I'm going to go to Mr. Williams for four minutes.

12:20 p.m.

Conservative

Ryan Williams Conservative Bay of Quinte, ON

Thank you very much to our panellists.

I'll start with Ms. Brandusescu.

Last month you were part of a response to the Toronto Police Service's proposed policy on AI technology use, which included facial recognition. Section two talked about “explainability”, which was called an important step for “ensuring that AI technologies remain accountable to users and affected populations”. I also loved your definition of glass boxing the black box. It's very important.

Do we need to define “explainability” in federal legislation to ensure a universal application and understanding of the term? If so, how would you define it?

12:20 p.m.

Artificial Intelligence Governance Expert, As an Individual

Ana Brandusescu

Thank you so much.

We will be told that explainable AI is a computational solution that we're going to have to use to make sure FRT can go forward.

I want to argue that even though explainable AI is a growing field, it's actually adding more complexity, not less. This is because explanation is entirely audience dependent, and that audience is usually composed of computer scientists, not politicians.

Who gets to participate in that conversation, and who is left out, is really important. Even explainable AI is not enough, because of the neural network type of AI that FRT is; it can never be fully explained.

That is also part of our recommendation. In short, it is really about trying to get to the core of what the technology is and understanding the black box. Having a technical solution to a very problematic technology doesn't mean we should use it to go forward and not consider the ban.

12:25 p.m.

Conservative

Ryan Williams Conservative Bay of Quinte, ON

Thank you.

You had a 2021 paper called “Weak privacy, weak procurement: The state of facial recognition in Canada”. You talked about biometric data protection and how Canada's privacy laws are failing compared to the rest of the world.

We've heard of the benefits of the General Data Protection Regulation, GDPR, from a witness in a previous meeting. Would adopting a GDPR-style protection be better for Canada's privacy rights?

12:25 p.m.

Artificial Intelligence Governance Expert, As an Individual

Ana Brandusescu

That part was led by my co-author, Yuan Stevens, who has privacy expertise. I would say that the GDPR is a good gold standard to have for best practices so far.

I would just argue that this is more than data protection or privacy. This is a conversation about the private sector as well and their involvement in public governance. Right now, what we have in our regulation is just private regulation.

I could touch upon the algorithmic impact assessment and our own directive on automated decision-making more deeply in a future question.