Thank you for the chance to speak today.
My name is Elizabeth Anne Watkins, and I am a postdoctoral research fellow at the Center for Information Technology Policy and the Human-Computer Interaction Group at Princeton University, and an affiliate of the Data & Society Research Institute in New York.
I'm here today in a personal capacity to express my concerns about private industry's use of facial verification on workers. These concerns are informed by my research as a social scientist studying the consequences of AI in labour contexts.
My key concerns today are twofold: one, to raise awareness of a technology related to facial recognition yet distinct in function, namely facial verification; and two, to urge this committee to consider how these technologies are integrated into sociotechnical contexts, that is, the real-world people and scenarios forced to comply with these tools, and how those integrations carry significant consequences for people's privacy, security and safety.
First, I'll give a definition and description of facial verification. Facial recognition is a 1:n system: it both finds and identifies individuals from camera feeds that typically view large numbers of faces, usually without those individuals' knowledge. Facial verification, while built on similar recognition technology, is distinct in how it's used. It is a 1:1 matching system, much more intimate and up close, in which a person's face, directly in front of the camera, is matched against the single face already associated with the device or digital account they're logging in to. If the system can see your face and predict that it matches the face already associated with the device or account, you're permitted to log in. If the match cannot be verified, you remain locked out. If you use Face ID on an iPhone, for example, you've already used facial verification.
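To make the 1:1 versus 1:n distinction concrete, here is a minimal sketch of the matching step, assuming face images have already been converted to embedding vectors by some model. The threshold value and all function names are illustrative assumptions, not any vendor's actual implementation.

```python
# Illustrative sketch only: the threshold and function names are invented;
# real systems such as Face ID use proprietary models and tuned cutoffs.
import numpy as np

THRESHOLD = 0.6  # assumed similarity cutoff; vendors tune this value


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def verify(live: np.ndarray, enrolled: np.ndarray) -> bool:
    """1:1 verification: compare the live face against the single
    face enrolled on this device or account."""
    return cosine_similarity(live, enrolled) >= THRESHOLD


def identify(live: np.ndarray, gallery: dict[str, np.ndarray]) -> str | None:
    """1:n recognition: search a gallery of many enrolled faces and
    return the best-matching identity above the threshold, if any."""
    best_id, best_score = None, THRESHOLD
    for identity, enrolled in gallery.items():
        score = cosine_similarity(live, enrolled)
        if score >= best_score:
            best_id, best_score = identity, score
    return best_id
```

The key design difference is visible in the signatures: verification answers a yes/no question against one enrolled face, while recognition searches across many people, which is why the two technologies raise different consent and deployment questions.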
Next, I'll focus on the sociotechnical context: where this technology is being integrated, how and by whom. My focus is on work. Facial verification is increasingly used in work contexts, in particular gig work and precarious labour. Amazon delivery drivers, Uber drivers and at-home health care workers in many U.S. states, as well as in countries around the world, are already required to comply with facial verification in order to prove their identities and be allowed to work. This means the person has to make sure their face can be seen and matched to the photo associated with the account. Workers are typically required to do this not just once, but over and over again.
The biases, failures and intrinsic injustices of facial recognition have already been expressed to this committee. I'm here to urge this committee to also consider the harms resulting from facial verification's use in work.
In my research, I've gathered data from workers describing a variety of harms. They're worried about how long their faces are being stored, where they're being stored and with whom they're being shared. In some cases, workers are forced to take photos of themselves over and over again for the system to recognize them as a match. In other cases, they're erroneously forbidden from logging into their account because the system can't match them. They have to spend time visiting customer service centres and then wait, sometimes hours, sometimes days, for human oversight to fix these errors. In other cases still, workers have described being forced to step out of their cars in dark parking lots and crouch in front of their headlights to get enough light for the system to see them. When facial verification breaks, workers are the ones who have to create and maintain the conditions for it to produce judgment.
While the use of facial recognition by state agencies such as police departments has been the subject of growing oversight, the use of facial verification in private industry and on workers has remained under-regulated. I implore this committee to give these concerns its attention and to pursue methods to protect workers from the biases, failures and critical safety threats of these tools, whether through biometric regulation, AI regulation, labour law or some combination thereof.
I second a recent witness, Cynthia Khoo, in her statement that recognition technology cannot bear the legal and moral responsibility that humans are already abdicating to it over vulnerable people's lives. A moratorium is the only morally appropriate regulatory response.
Until that end can be reached, accountability and transparency measures must be brought to bear not only on these tools but also on company claims that they help protect against fraud and malicious actors. Regulatory intervention could require companies to release the data supporting these claims for public scrutiny and to perform algorithmic impact assessments, including consultation with marginalized groups, to gain insight into how workers are being affected. Additional measures could require companies to provide workers with multiple forms of identity verification, so that people whose bodies or environments cannot be recognized by facial verification systems can still access their means of livelihood.
At heart, these technologies provoke large questions around who gets to be safe, what safety ought to look like, and who carries the burden and liability of achieving that end.
Thank you.