Evidence of meeting #12 for Access to Information, Privacy and Ethics in the 44th Parliament, 1st Session. (The original version is on Parliament’s site, as are the minutes.)

A recording is available from Parliament.

Also speaking

Alex LaPlante  Senior Director, Product and Business Engagement, Borealis AI
Brenda McPhail  Director, Privacy, Technology and Surveillance Program, Canadian Civil Liberties Association
Françoys Labonté  Chief Executive Officer, Computer Research Institute of Montréal
Tim McSorley  National Coordinator, International Civil Liberties Monitoring Group
Clerk of the Committee  Ms. Nancy Vohl

3:30 p.m.

Conservative

The Chair Conservative Pat Kelly

Welcome to meeting number 12 of the House of Commons Standing Committee on Access to Information, Privacy and Ethics.

Pursuant to Standing Order 108(3)(h) and the motion adopted by the committee on Monday, December 13, 2021, the committee is resuming its study of the use and impact of facial recognition technology.

Today’s meeting is taking place in a hybrid format, pursuant to the House order of November 25, 2021. Members are attending in person in the room and remotely using the Zoom application. So you are aware, the webcast will always show the person speaking rather than the entirety of the committee.

I will remind members in the room that we all know the public health guidelines. I understand that you've heard them many times by now, so I won’t repeat them again, but I will ask you to follow them.

I would also like to remind all participants that no screenshots or photos of your screen are permitted. When speaking, please speak slowly and clearly for the benefit of the interpreters. When you are not speaking, your microphone should be on mute. Finally, I would remind you that all comments by members and witnesses should be addressed through the chair.

I would now like to welcome our witnesses today. From Borealis AI, we have Dr. Alex LaPlante, senior director, product and business development. From the Canadian Civil Liberties Association, we have Dr. Brenda McPhail, director of the privacy, technology and surveillance program. From the Computer Research Institute of Montréal, we have Mr. Françoys Labonté, chief executive officer; and from the International Civil Liberties Monitoring Group, we have Mr. Tim McSorley, national coordinator.

Just before I turn it over to the witnesses, for the benefit of committee members, what I've tried to do to minimize the time we lose to changeover between panels is to run our witnesses in one panel. We will go through the regular rounds of questions and subsequent rounds as time permits, in the prescribed formula for speaker allocation.

With that, I turn it over to our first witnesses, from Borealis AI.

Dr. LaPlante, go ahead.

3:30 p.m.

Dr. Alex LaPlante Senior Director, Product and Business Engagement, Borealis AI

Thank you for the introduction, Mr. Chair.

Thank you to the committee for inviting me to participate as a witness on the topic of the use and impact of facial recognition technology.

As noted, my name is Dr. Alex LaPlante. I am the senior director of product and business development at Borealis AI, which is RBC's R and D lab for artificial intelligence. The views I express today are my own; they do not reflect the views of Borealis AI, RBC or any other institution with which I'm affiliated.

I've spent the last 15 years building and deploying advanced analytics and AI solutions for academic and commercial purposes, and I've seen the positive outcomes that AI can drive. However, I'm also acutely aware that, if we don't take care to adequately assess the application, development and governance of AI, it can have adverse effects on end-users, perpetuate and even amplify discrimination and bias towards racialized communities and women, and lead to unethical usage of data and breaches of privacy rights.

I will focus my comments on two areas: data privacy, and data quality and algorithmic performance. I will then conclude with my recommendations around the governance of this technology.

Biometric data is some of the most sensitive data that exists, so privacy is paramount when it comes to safely collecting, using and storing it. Biometric data has been collected and used without individuals' consent or knowledge in several instances, including in the case of Clearview AI, breaching those individuals' privacy rights and putting them at the mercy of unregulated and unvalidated AI systems. This is particularly concerning in high-risk use cases such as criminal identification. There have also been cases of function creep, where companies gain consent to collect biometric data for use in one particular way but go on to use it in other ways beyond the original stated intent.

The best FRT systems can achieve accuracy rates of 99.9% and perform consistently across demographic groups. However, not all algorithms are created equal, and in some cases false positive rates can vary by factors of 10 or even 100 for racialized populations and women. This gap in performance is directly related to a lack of representative, high-quality data.

One field of AI research that should be highlighted in the context of FRT is adversarial robustness. It is the backbone of practices like cloaking, which look to deceive FRTs. This can be achieved through physical manipulation like obscuring facial features or, more covertly, by making modifications to facial pictures that are indiscernible to the human eye but that ensure the pictures are no longer identifiable.

Law enforcement agencies in Canada and abroad have employed technology built on unverified data scraped from the web that can be easily manipulated in ways that are undetectable without direct access to source data. Without proper oversight and regulation, these companies can easily manipulate their data to control who can or cannot be identified with their systems.

Beyond data quality issues, FRT, like any high-risk AI system, should undergo extensive validation so that its limitations are properly understood and taken into consideration when applied in the real world. Unfortunately, many FRTs on the market today are true black boxes and are not available for validation or audit.

While my comments focus on the risks of FRT, I believe there's a lot of value in this technology. We need to carefully craft regulations that will allow FRT to be used safely in a variety of contexts and that address Canada's key legislative gaps as well as concerns around human rights and privacy. Working in the highly regulated financial sector, I have participated in the effective governance of high-risk AI systems, where issues of privacy, usage, impact and algorithmic validation are evaluated and documented comprehensively. I believe similar approaches can address many of the primary concerns around this technology.

Regulations need to provide FRT developers, deployers and users with clear requirements and obligations regarding specific uses of this technology. This should include the requirement to gain affirmed consent for the collection and use of biometric data, as well as purpose limitation to avoid function creep. FRT legislation should leverage the privacy principles of necessity and proportionality, especially in the context of privacy-invasive practices.

Further, governance requirements should be proportional to risk materiality. Impact assessments should be common practice, and there should be context-dependent oversight on issues of technical robustness and safety, privacy and data governance, non-discrimination, and fairness and accountability. This oversight should not end once a system is in production but should instead continue for the lifetime of the system, requiring regular performance monitoring, testing and validation.

Last, clearer accountability frameworks for both developers and end-users of FRT are needed, which will require a transparent legislative articulation of the weight of human rights versus commercial interests.

All that being said, these regulations should seek to take a balanced approach that reduces the administrative and financial burdens for public and private entities where possible.

Thank you very much. I look forward to your questions.

3:35 p.m.

Conservative

The Chair Conservative Pat Kelly

Thank you very much.

Now we have Dr. McPhail for up to five minutes.

3:35 p.m.

Brenda McPhail Director, Privacy, Technology and Surveillance Program, Canadian Civil Liberties Association

Thank you to the chair and the committee for inviting the Canadian Civil Liberties Association to appear before you today.

Facial recognition—or, as we often think of it at CCLA, facial fingerprinting, to draw a parallel to another sensitive biometric—is a controversial technology. You will hear submissions during this study that tout its potential benefits and others that warn of dire consequences for society that may come with particular use cases, especially in the context of policing and public safety. Both sides of the debate are valid, which makes your job during this study especially difficult and so profoundly important. I'm grateful that you've undertaken it.

The CCLA looks at this technology through a rights lens. This focus reveals that it is not just individual and collective privacy rights that are at risk in the various public and private sector uses of face surveillance and analysis, but also a wide range of other rights. I know that you’ve heard in previous submissions about the serious risk to equality rights raised by faulty versions of this technology that work less well on faces that are Black, brown, indigenous, Asian, female or young—that is, non-white and non-male.

What I’d add to that discussion is the caution that if the technology is fixed, and if it becomes more accurate on all faces across the spectrums of gender and race, it may become even more dangerous. Why? It's because we know that in law enforcement contexts, the surveillance gaze disproportionately falls on those same people. We know who often suffers discrimination in private sector applications. Again, it's those same people. In both cases, perfect identification of these groups, or of members of these groups, who already experience systemic discrimination because of who they are and what they look like, carries the potential to facilitate more perfectly targeted discriminatory actions.

In addition to equality rights, tools that could allow ubiquitous identification would have negative impacts on a full range of rights protected by our Canadian Charter of Rights and Freedoms and other laws, including freedom of association and assembly, freedom of expression, the right to be free from unreasonable search and seizure by the state, the presumption of innocence—if everyone’s face, as in the Clearview AI technology, becomes a subject in a perpetual police lineup—and ultimately rights to liberty and security of the person. There’s a lot at stake.

It’s also important to understand that this technology is creeping into daily life in ways that are becoming commonplace. We must not allow that growing familiarity to breed a sense of inevitability. For example, many of us probably unlock our phones with our face. It’s convenient and, with appropriate built-in protections, it may carry relatively little privacy risk. A similar one-to-one matching facial recognition tool was recently used by the Liberal Party of Canada in its nomination voting process prior to the last federal election. In that case, it was a much more risky use of a potentially faulty and discriminatory technology because it took place in a process that is at the heart of grassroots democracy.

The same functionality in very different contexts raises different risks. This highlights the need for keen attention, not just to technical privacy protections, which exist in both the phone and voting app examples, but to contextually relevant protections for the full set of rights engaged by this technology.

What is the path forward? I hope this study examines whether—not just when and how—facial recognition can be used in Canada, taking those contextual questions into consideration. CCLA believes, similar to our previous witness, that regulation is required for those uses that Canadians ultimately deem appropriate in a fair and free democratic state.

Facial recognition for mass surveillance purposes should be banned. For more targeted uses, at the moment CCLA continues to call for a moratorium, particularly in a policing context, in the absence of comprehensive and effective legislation that provides a clear legal framework for its use, includes rigorous accountability and transparency provisions, requires independent oversight and creates effective means of enforcement for failure to comply.

A cross-sector data protection law grounded broadly in a human rights framework is necessary, especially in an environment where the public and private sectors are using the same technologies but are currently subject to different legal requirements. Targeted laws governing biometrics or data-intensive, algorithmically driven technologies could be an even better fit for purpose. There are a number of examples globally where such legislation has recently been enacted or is under consideration. We should draw inspiration from those to create Canadian laws that put appropriate guardrails around potentially beneficial uses of FRT and protect people across Canada from its misuse or abuse.

Thank you. I welcome your questions.

3:40 p.m.

Conservative

The Chair Conservative Pat Kelly

Thank you.

Mr. Françoys Labonté, you have up to five minutes. Please go ahead.

3:40 p.m.

Françoys Labonté Chief Executive Officer, Computer Research Institute of Montréal

Members of the committee, I'm delighted to be participating in this important study.

I'll begin by briefly introducing myself. My name is Françoys Labonté, and I am the chief executive officer of the CRIM, the Computer Research Institute of Montréal. I have a technical background, with a PhD specializing in computer vision from the École polytechnique de Montréal. I joined the CRIM in 2010 and became its CEO in 2015. The CRIM has worked on artificial intelligence for many years, almost from the moment it was established, and had very practical opportunities to work on the development of speech recognition technologies in the 2000s and on facial recognition in the 2010s.

In keeping with the CRIM's approach, my presentation will be very pragmatic. Right from the outset, it's essential to understand that, basically, facial recognition technologies neither require nor involve any personal information. These technologies are limited to showing whether a new image of a face, one that has never before been entered into a given system, matches an image that is already in the system.

In the context of your study, I understand the interest in establishing contexts in which it might be acceptable to link personal information to a face and to be able to identify an individual on the basis of one or more images of that person's face. One of the great challenges for your committee is to strike a proper balance between concerns pertaining to privacy, social acceptability and societal benefits.

We are facing a somewhat paradoxical phenomenon: for many Canadians, one or more images of their face to which their name is directly linked, not to mention other personal information that may sometimes be associated with them, are already publicly available, whether on social networks, in digital media or in other digital applications. These images were often supplied by people when they had a particular use in mind, but they agreed to very broad consent clauses and very extensive use rights. Even if someone supplied an image of their face unintentionally, for example to add it to their user profile in a digital application, in practice it's relatively easy for third parties to access the image and other associated data and to use them with impunity for various other purposes, because the consents obtained are so broad. Practically speaking, it's virtually impossible to reverse the situation and make these images disappear from the Internet, or even to dissociate the personal information linked to them.

Here is a question your committee should look into: given that images of most Canadians' faces, to which their personal information is linked, are publicly accessible, what uses of these images that involve facial recognition ought to be proscribed or strictly circumscribed?

There is probably a strong consensus among Canadians for banning the use of facial recognition technologies in a Big Brother manner, with databases containing images of everyone's face and public surveillance cameras arbitrarily tracking people's movements and behaviour. Likewise, using facial recognition in conjunction with drones in a military context for targeted assassinations would certainly run counter to any initiatives to promote the ethical use of artificial intelligence.

I deliberately want to get you to see things somewhat differently in a context where the answers are probably not so clear-cut and where facial recognition technology is simply replacing or substituting for other existing technologies.

Let's take the example of using facial recognition technology for people in a retail store or a shopping centre. It's easy to draw a parallel with e-commerce, which has gained widespread, though not unanimous, social acceptance. When we shop online in a manner that is considered anonymous, by which I mean that it is not connected to any user account, cookies nevertheless leave behind traces of our time on the web. These cookies are then used to send us advertising on the basis of our preferences. Is that very different from a facial recognition system in a shopping centre, which, without explicitly knowing your identity, could send you targeted advertising on the basis of factors readily inferred from your face or your behaviour?

Similarly, when we shop online, but now by means of a user account to which we have supplied some information…

3:45 p.m.

Conservative

The Chair Conservative Pat Kelly

I will have to ask you to wrap up very quickly. You're a little bit over time already.

3:45 p.m.

Chief Executive Officer, Computer Research Institute of Montréal

Françoys Labonté

Right.

Generally speaking, I think people are in favour of using facial recognition technology for specific, clearly stated applications when it's easy to understand the benefits and how the data will be used.

However, there are still enormous challenges to be met in building public confidence and convincing people that facial recognition technology and images will be used properly and only for the purposes that were initially agreed upon.

Thank you.

3:45 p.m.

Conservative

The Chair Conservative Pat Kelly

With that, we will go to Mr. Tim McSorley for the final opening statement, followed by questions from members.

Go ahead, Mr. McSorley.

3:50 p.m.

Tim McSorley National Coordinator, International Civil Liberties Monitoring Group

Thank you so much for the invitation and for having me here today, Mr. Chair and committee.

I'm very happy to speak to you today on behalf of the International Civil Liberties Monitoring Group. We're a coalition of 45 Canadian civil society organizations dedicated to protecting civil liberties in Canada and internationally in the context of Canada's anti-terrorism and national security activities.

Given our mandate, our particular interest in facial recognition technology is its use by law enforcement and intelligence agencies, particularly at the federal level. We have documented the rapid and ongoing increase of state surveillance in Canada and internationally over the past two decades. These surveillance activities pose significant risks to and have violated the rights of people in Canada and around the world.

Facial recognition technology is of particular concern given the incredible privacy risks that it poses and its combination of both biometric and algorithmic surveillance. Our coalition has identified three reasons in particular that give rise to concern.

First, as other witnesses today and earlier this week have pointed out, multiple studies have shown that some of the most widely used facial recognition technology is based on algorithms that are biased and inaccurate. This is especially true for facial images of women and people of colour, who already face heightened levels of surveillance and profiling by law enforcement and intelligence agencies in Canada.

This is particularly concerning in regard to national security and anti-terrorism, where there is already a documented history of systemic racism and racial profiling. Inaccurate or biased technology only serves to reinforce and worsen this problem, running the risk of individuals being falsely associated with terrorism and national security risks. As many of you are aware, the stigma of even an allegation in this area can have deep and lifelong impacts on the person accused.

Second, facial recognition allows for mass, indiscriminate and warrantless surveillance. Even if the significant problems of bias and accuracy were somehow resolved, facial recognition surveillance systems would continue to subject members of the public to intrusive and indiscriminate surveillance. This is true whether it is used to monitor travellers at an airport, individuals walking through a public square or activists at a protest.

While it is mandatory for law enforcement to seek out judicial authorization to surveil individuals either online or in public places, there are gaps in current legislation as to whether this applies to surveillance or de-anonymization via facial recognition technology. These gaps can subject all passers-by to unjustified mass surveillance in the hopes of being able to identify a single person of interest, either in real time or after the fact.

Third, there is a lack of regulation of the technology and a lack of transparency and accountability from law enforcement and intelligence agencies in Canada. The current legal framework for governing facial recognition technology is wholly inadequate. The patchwork of privacy rules at the provincial, territorial and federal levels does not ensure law enforcement uses facial recognition technology in a way that respects fundamental rights. Further, a lack of transparency and accountability means that such technology is being adopted without public knowledge, let alone public debate or independent oversight.

Clear examples of this have been revealed over the past two years.

The first and most well known is that the lack of regulation allowed the RCMP to use Clearview AI facial recognition for months without the public’s knowledge, and then to lie about it before being forced to admit the truth. Moreover, we now know that the RCMP has used one form of facial recognition or another for the past 20 years without any public acknowledgement, debate or clear oversight. The Privacy Commissioner of Canada found that the RCMP’s use of Clearview AI was unlawful, but the RCMP has rejected that finding, arguing that they cannot be held responsible for the lawfulness of services provided by third parties. This essentially allows them to continue contracting with other services that violate Canadian law.

Lesser known is that the RCMP also contracted the use of a private, U.S.-based “terrorist facial recognition” system from a company known as IntelCenter. This company claims to offer access to facial recognition tools and a database of more than 700,000 images of people associated with terrorism. According to the company, these images are acquired, just like Clearview AI's, by scraping them from online sources. The stigma that comes with being associated with a so-called terrorist facial recognition database only compounds the rights implications involved.

As a final example, I'd just say that CSIS has refused to confirm whether or not they even use facial recognition technology in their work, stating that they have no obligation to do so.

Given all these concerns, we would make three main recommendations: first, that the federal government ban the use of facial recognition surveillance immediately and undertake consultation on the use and regulation of facial recognition technology in general; second, based on these consultations, that the government undertake reforms to both private and public sector privacy laws to address gaps in facial recognition and other biometric surveillance; and, finally, that the Privacy Commissioner be granted greater enforcement powers with regard to both public sector and private sector violations of Canada's privacy laws.

Thank you, and I look forward to the discussion and questions.

3:55 p.m.

Conservative

The Chair Conservative Pat Kelly

Thank you to our witnesses for their opening statements.

The first round, which will be six minutes, goes to Mr. Kurek.

3:55 p.m.

Conservative

Damien Kurek Conservative Battle River—Crowfoot, AB

Thank you very much.

Let me start by sharing a request with all of our witnesses. First, thank you for your expertise and the information that you have shared with us here today. It's very valuable. Certainly as I was preparing for this meeting.... I'm very appreciative of all of you coming to share this with us here today. I know a number of you did make recommendations, and certainly, for the practical aspects of what the committee will accomplish in this report, that's very much appreciated.

My ask, beyond a few of the questions that I plan to get to here in a moment, is this: Because there's limited time, if there are further recommendations or information, please feel free to share that with members of this committee so that we can include that information in the report as we compile it in the coming months. Consider that an open invitation, as your expertise here is very much appreciated.

To both Ms. LaPlante and Mr. McSorley, you provided a couple of examples. Clearview AI is one of the clearest examples.

We'll start with Ms. LaPlante.

Are there any other examples that you could briefly share that highlight some of the challenges with these systems?

3:55 p.m.

Senior Director, Product and Business Engagement, Borealis AI

Dr. Alex LaPlante

Clearview AI is one of the concerning cases. What is so concerning about it is that they have scraped massive amounts of data. That data is linked to individuals' identities, and it is being used in contexts where the ultimate outcome can be very severe for individuals. I think we have to take this into deep consideration when we're applying AI systems of any kind in those types of contexts.

In terms of other examples of this, Facebook is a really good one. They've now put this program on hold, but I think all of you are very much aware, if you interact with Facebook, that it used to have a feature that essentially pre-identified a friend who was in a photo. That feature was directly based on the use of your profile information and all of the pictures that you and your friends had posted and tagged. Maybe this is a little bit more of a benign case, and in some instances it could be seen as something helpful or convenient, but I also want to recognize that there's a slippery slope in having those types of databases owned by private companies when there is no regulation or oversight of their use.

3:55 p.m.

Conservative

Damien Kurek Conservative Battle River—Crowfoot, AB

Thank you for that.

I know I have limited time.

Mr. McSorley, were there any other examples that you could quickly point to that would be worth the committee's time to further look into?

3:55 p.m.

National Coordinator, International Civil Liberties Monitoring Group

Tim McSorley

I'd re-emphasize the question of IntelCenter, a U.S.-based company that we know the RCMP contracted with. We have very little information about what they did with that company and with that database.

That's the only other company I can specifically point to, but it compounds the concerns we see with Clearview AI, because IntelCenter uses similar tactics, including scraping images online and putting them into a database, but then adds the extra stigma of claiming that these people are associated with terrorism, with absolutely no oversight in terms of how it comes to that determination, and then it shares that information with law enforcement. There's already this stigma attached to individuals with absolutely no reasoning behind it, and then it's used by law enforcement to essentially identify those people as terrorists.

3:55 p.m.

Conservative

Damien Kurek Conservative Battle River—Crowfoot, AB

Thank you very much for that.

Ms. McPhail, I really appreciate the comment you made, and I'm paraphrasing here, that improving the tech doesn't actually solve the problem. It's a very important message that needed to be heard here.

We've seen through our work on this committee the importance of operationalizing and defining consent and enshrining things like opt-in and opt-out features that are clear for the public.

Today, in the age of social media and with cameras pretty much being everywhere, how do we as legislators protect Canadians from some of the challenges associated with facial recognition and AI in the space that we're discussing here today?

3:55 p.m.

Director, Privacy, Technology and Surveillance Program, Canadian Civil Liberties Association

Brenda McPhail

Thank you for that question. It's a really important one.

You have to start from the right place. I respectfully disagree with Monsieur Labonté. Facial recognition systems use our face. That is some of the most sensitive personal information we have. Faces are recognized in Canadian privacy law as a piece of personally identifiable information; therefore, they are within the scope of the law.

The best way to protect people across Canada from inappropriate uses of this technology truly is to think through how it needs to be regulated. As a first step, a positive example that this committee might wish to consider is contained in the proposed U.S. Senate bill, Bill S.3284, the Ethical Use of Facial Recognition Act, which would establish a congressional commission to consider and create guidelines for the use of facial recognition technology in the United States.

4 p.m.

Conservative

Damien Kurek Conservative Battle River—Crowfoot, AB

I'm almost out of time here, so thank you very much for that. You've written before, and I won't get into the details because of time, but you said “Clearview AI left the Canadian market, but their business model remains.” Are there other examples in our country similar to Clearview AI that this committee should be aware of?

4 p.m.

Conservative

The Chair Conservative Pat Kelly

Can you do that in about 10 or 15 seconds, please?

4 p.m.

Director, Privacy, Technology and Surveillance Program, Canadian Civil Liberties Association

Brenda McPhail

I think that virtually every private sector purveyor of facial recognition technology has a similar model. I would draw your attention to the Cadillac Fairview mall investigation by the Privacy Commissioner of Canada, which involved a non-consensual private sector use of facial analytics. That use was deemed appropriate in backroom conversations between a private sector company and their lawyers, and it was discovered only because of a mistake, a glitch in the technology, that revealed what was happening behind the scenes. Under these kinds of models, almost every facial recognition vendor advertises that it can help private sector bodies leverage personal data to improve their market, and that's a problem.

4 p.m.

Conservative

The Chair Conservative Pat Kelly

Thank you. We're almost a full minute over time. I'm going to be a little bit less ruthless than I was in the last meeting because of the way we've set this one up. Still, I do ask all members of the committee to be conscious of the time when they know they're down to a few seconds, and of the questions they pose in that time.

With that said, go ahead, Mr. Fergus.

4 p.m.

Liberal

Greg Fergus Liberal Hull—Aylmer, QC

Thank you very much, Mr. Chair.

In a way, I understand the circumstances of my colleague Mr. Kurek. It's a very thorny issue, and we have lots of questions to ask the witnesses. I must admit that I've been doing more and more research into the matter, and every day, the things I've been reading raise further questions.

I'd like to begin by talking about something that Ms. LaPlante mentioned at the outset, and I think that Ms. McPhail raised it as well. It would appear that facial recognition technology is just one facet of our more general concern about the use of artificial intelligence. Some algorithms analyze not only our face, but also our behaviour, the things we say, our voice and how we move.

As a Black Canadian commenting on facial recognition, I am well aware of the fact that cameras cannot render the same image quality for people with darker skin, women or younger people as for white men. It would appear to be a systemic problem.

Would you agree that the cameras themselves can be prejudicial to some people because they weren't developed specifically for them?

Let's begin with Ms. LaPlante.

4 p.m.

Senior Director, Product and Business Engagement, Borealis AI

Dr. Alex LaPlante

Thank you for your question. It's very interesting, and it actually highlights, I would say, some challenges with other technologies that we have. NIST has done very comprehensive studies, and I encourage you to review their reports, in which they have looked at various aspects of algorithmic performance. Some of those studies have focused specifically on demographics. One issue they have raised is that data quality is a big driver of algorithmic performance. They've also noted that these technologies tend to do quite well on images like mug shots. One reason for that is that mug shot setups are often designed to account for the range of different skin tones, so the image is more representative of the face. If you have pictures that don't capture an individual correctly, that will be reflected in the performance of the technology.

4:05 p.m.

Liberal

Greg Fergus Liberal Hull—Aylmer, QC

Thank you for your testimony, Mr. Labonté.

You mentioned the possibility of striking a balance between the concerns raised by these technologies and the benefits of using them.

Is it likely that such a balance can be achieved?

4:05 p.m.

Chief Executive Officer, Computer Research Institute of Montréal

Françoys Labonté

Of course, the matter of balance is subjective. I don't know whether I expressed myself clearly. When I said that people made a lot of personal information publicly available, I was alluding to societal behaviour. That does not justify the use of such information for other purposes. As I mentioned, when certain applications use personal information without consent, that is clearly a problem, and it has nothing to do with striking a balance.

The example of using Face ID on a telephone was mentioned. This is a highly controlled application that people can use because of its usefulness, in airports for example. I remember that it was available before the pandemic: people could take pictures of their face to speed up passport checks. It's a very limited context in which images are acquired by the government using a photographic identification process governed by standards. This can [Technical difficulty]