Thank you very much, Mr. Chair.
Thank you to the committee for the invitation.
My name is Esha Bhandari, and I am a deputy director of the American Civil Liberties Union's Speech, Privacy, and Technology Project, based in New York. I am originally from Saint John, New Brunswick.
I'd like to speak to the committee about the dangers of biometric identifiers with a specific focus on facial recognition.
Because biometric identifiers are personally identifying and generally immutable, biometric technologies, including face recognition, pose severe threats to civil rights and civil liberties. They enable privacy violations, including the loss of anonymity in contexts where people have traditionally expected it; persistent tracking of movement and activity; and identity theft.
Additionally, flaws in the use or operation of biometric technologies can lead to significant civil rights violations, including false arrests and denial of access to benefits, goods and services, as well as employment discrimination. All of these problems have been shown to disproportionately affect racialized communities.
What exactly are we talking about with biometrics?
Prior to the digital age, the collection of even limited biometrics like fingerprints was laborious and slow. Now we have the potential for near-instantaneous collection of biometrics, including face prints, along with machine learning capabilities and digital-age network technologies. Combined, these advances make the threat of biometric collection even greater than it was in the past.
Face recognition is, of course, an example of this, but I want to highlight that voice recognition, iris or retina scans, DNA collection, and gait and keystroke recognition are also examples of biometric technologies that affect civil liberties.
Facial recognition allows for instant identification at a distance, without the knowledge or consent of the person being identified and tracked. Even identifiers that in the past had to be captured with the person's knowledge, such as fingerprints, can now be collected covertly, as can the DNA we shed as we go about our daily lives. Iris scans can be done remotely, and face prints can be collected remotely, all without the knowledge or consent of the person whose biometrics are being collected.
Facial recognition is particularly prone to the flaws of biometrics, which include design flaws, hardware limitations and other problems. Multiple studies have shown that face recognition algorithms have markedly higher misidentification rates for people of colour, including Black people, as well as for children and older adults. There are many reasons for this. I won't get into the specifics, but it is in part because of the datasets used to train the algorithms and in part because of flaws that emerge in real-world conditions.
I also want to highlight that the error rates observed in test conditions are often exacerbated in real-world conditions, for example, when a facial recognition tool is used on poor-quality surveillance footage.
There are also other risks with face recognition technology when it is combined with other technology to infer emotion, cognitive state or intent. We see private companies increasingly promoting products that purport to detect emotion or affect, such as aggression detectors, based on facial tics or other movements that this technology picks up on.
Psychologists who study emotion agree that this project is built on faulty science, because there is no universal relationship between emotional states and observable facial traits. Nonetheless, these video analytics are proliferating, with claims that they can detect suspicious behaviour or lies. When deployed in certain contexts, this can cause real harm, including employment discrimination, if, for example, a private company uses these tools to analyze a candidate's face during an interview to infer emotion or truthfulness and denies jobs on that basis.
I have been speaking, of course, about the flaws in the technology and its error rates, which, again, fall disproportionately on certain marginalized communities, but there are problems even when facial recognition technology functions accurately.
The ability of law enforcement, for example, to systematically track people and their movements over time poses a threat to freedom and civil liberties. Sensitive movements can be identified, whether people are travelling to protests, medical facilities or other sensitive locations. In recognition of these dangers, at least 23 jurisdictions in the United States, from Boston to Minneapolis, San Francisco and Jackson, Mississippi, have enacted legislation halting law enforcement or government use of face recognition technology.
There's also, of course, private sector use of this technology, which I want to highlight. You see trends now where, for example, landlords may be using facial recognition technology in buildings, which enables them to track their tenants' movements in and out of the building, as well as those of their guests, including romantic partners and others who come and go. We also see this use in private shopping malls and in other contexts as well—