Absolutely.
When we're looking at the regulation of artificial intelligence, we need to look at the data as well as the use and the design of the technology to ensure that it is properly regulated. In different jurisdictions, including the United States and the EU, we see attempts to regulate artificial intelligence, including facial recognition specifically, that take a risk-based approach.
If we draw inspiration from the EU's draft Artificial Intelligence Act, we see that risk is first classified by criticality, which means there are some use cases that are considered prohibited because the risk is unacceptable. Others fall into a high-risk category subject to regulation, and then the obligations decrease as the risk level decreases.
The high-risk categories are specifically regulated with a more prescriptive pen, telling both vendors and users of those systems what the requirements are and what needs to be done to verify and validate the system and the data, and then imposing ongoing controls to ensure that the system operates as intended.
That is a really important point, because when you are using a high-risk AI system, recognizing that artificial intelligence is quite sophisticated and unique in its capacity for self-learning and autonomous action, having those controls after deployment is really critical to ensuring that ongoing use remains safe and as intended.