I can also jump in.
I think if we approach regulation and limits on facial surveillance through the lens of regulating the use, the users, or the availability of the technology, we can start to consider restraints or restrictions on the use of commercial facial surveillance systems. Instead, agencies could fund or develop in-house systems using data that is not just legally sourced, but sourced through fully informed consent and through processes that ensure the dignity of the individuals whose data is being processed. Such a system would be designed and used only for very specific use cases, as opposed to commercial systems like Clearview AI, for instance, which is being used in a wide range of scenarios, none of which take into account the specific social context and the implications for the people whose data is being processed or who are affected by the use of that system.
I think there are ways we can really distinguish very narrow use cases and avoid buying into a narrative that says we need facial recognition because it can be used to protect people from potential harm.