Good morning.
Thank you, Mr. Chair and members of the committee.
My name is Rob Jenkins. I'm a professor of psychology at the University of York in the U.K., and I speak to the issue of face recognition from the perspective of cognitive science.
I'd like to begin by talking about expectations of face recognition accuracy and how actual performance measures up to these expectations.
Our expectations are mainly informed by our experience of face recognition in everyday life, and that experience can be highly misleading when it comes to security and forensic settings.
Most of the time we spend looking at faces, we're looking at familiar faces, and by that I mean the faces of people we know and have seen many times before, including friends, family and colleagues. Humans are extremely good at identifying familiar faces. We recognize them effortlessly and accurately, even under poor viewing conditions and in poor quality images. The everyday success of face recognition in our social lives can lead us to overgeneralize and to assume that humans are good at recognizing faces generally. We are not.
Applied face recognition, including witness testimony, security and surveillance, and forensic face matching, almost always involves unfamiliar faces, and by that I mean the faces of people we do not know and have never seen before.
Humans are surprisingly bad at identifying unfamiliar faces. This is a difficult task that generates many errors, even under excellent viewing conditions and with high quality images. That is the finding not only for randomly sampled members of the public but also for trained professionals with many years of experience in the role, including passport officials and police staff.
It is essential that we evaluate face recognition technology, or FRT, in the context of unfamiliar face recognition by humans. This is partly because the current face recognition infrastructure relies on unfamiliar face recognition by humans, making human performance the relevant benchmark for comparison, and partly because, in practice, FRT is embedded in face recognition workflows that include human operators.
Unfamiliar face recognition by humans, a process that is known to be error prone, remains integral to automatic face recognition systems. To give one example, in many security and forensic applications of FRT, an automated database search delivers a candidate list of potential matches, but the final face identity decisions are made by human operators who select faces from the candidate list and compare them to the search target.
The U.K. “Surveillance Camera Code of Practice” states that the use of FRT “...should always involve human intervention before decisions are taken that affect an individual adversely”. A similar principle of human oversight has been publicly adopted by the Australian federal government: “decisions that serve to identify a person will never be made by technology alone”.
Human oversight provides important safeguards and a mechanism for accountability; however, it also imposes an upper limit on the accuracy that face recognition systems could achieve in principle. Face recognition technologies are not 100% accurate, but even if they were, human oversight bakes human error into the system. Human error is prevalent in these tasks, but there are ways to mitigate it. Deliberate efforts, either by targeted recruitment or by evidence-based training, must be made to ensure that the humans involved in face recognition decisions are highly skilled.
Use of FRT in legal systems should be accompanied by transparent disclosure of the strengths, limitations and operation of this technology.
If FRT is to be adopted in forensic practice, new types of expert practitioners and researchers are needed to design, evaluate, oversee and explain the resultant systems. Because these systems will incorporate human and AI decision-making, a range of expertise is required.
Thank you.