I agree that it's an integral part, but it's important to recognize that it is not enough on its own.
I think this committee also heard from a witness about how humans interact with the facial images they are presented with, and how their own biases creep in. The example given was the photo lineup often used by police, which replicates the type of output you typically get from a facial recognition system: a list of maybe the top 10 or 15 matches. We know that, as an investigative tool, the photo lineup has led to a lot of problems for police in the past.
That is the type of human supervision we're talking about, and it's worse in the context of facial recognition systems, because the tendency is to trust the automated results of the system, to assume it has produced an accurate match. You question the results even less than in the context of an ordinary photo lineup, where you are simply trying to figure out who a person is. What that ends up doing is embedding cognitive and other biases.