We think this is a really important part of the conversation, for a number of reasons.
The accuracy of facial recognition has improved markedly in recent years. Research by the National Institute of Standards and Technology in the U.S., or NIST, shows that the best-performing systems have made striking gains. There is, however, a very wide gap between the best-performing systems and the worst-performing ones, and the less accurate systems tend to be more discriminatory as well, so we think testing is really important.
There are a couple of components to it. We think that vendors like Microsoft should allow their systems to be tested by independent third parties in a reasonable fashion. We allow for that at the moment via an API: a third party can go and test our system to see how accurate it is. We also think vendors should be required to respond to any testing and address any material performance gaps, including gaps across demographics. So that's the first component: vendors acting on the testing side.
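To make the idea of third-party testing concrete, here is a minimal sketch of what an independent audit harness might look like. Everything in it is illustrative: the `verify` function is a stand-in for whatever verification call a vendor's API actually exposes, and the trial data is simulated, not real benchmark data.

```python
# Hypothetical third-party accuracy audit: run labeled face pairs
# through a vendor's verification API and report accuracy per
# demographic group, plus the gap between groups.
from collections import defaultdict

def verify(pair_id: int) -> bool:
    # Stand-in for an API call that returns whether the system judged
    # two face images to show the same person. Simulated here: the
    # system is wrong on every 10th pair.
    return pair_id % 10 != 0

# Labeled trial pairs: (pair_id, demographic_group, ground_truth_same).
# Purely synthetic data for illustration.
trials = [(i, "group_a" if i % 2 == 0 else "group_b", True)
          for i in range(100)]

correct = defaultdict(int)
total = defaultdict(int)
for pair_id, group, truth in trials:
    total[group] += 1
    if verify(pair_id) == truth:
        correct[group] += 1

accuracy = {g: correct[g] / total[g] for g in total}
gap = max(accuracy.values()) - min(accuracy.values())
print(accuracy, round(gap, 3))
```

A real audit would use independently collected imagery and report false-match and false-non-match rates separately, but the shape is the same: per-group metrics first, then the gap a vendor would be required to address.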
We also think it's really important that organizations deploying a facial recognition service test it in operational conditions. If you are a police customer using a facial recognition system, you shouldn't just take the vendor's word that it will be accurate in the abstract; you also need to test it in the conditions in which it will operate. Environmental factors like image quality and camera position have a really big impact on accuracy.
Imagine a camera placed looking down on someone's head, with smudges on the lens or generally poor-quality imagery going into the system: that will have a really big impact on performance. There should therefore also be a testing requirement for organizations deploying facial recognition, so that they know it is working accurately in the environment in which it will be used.
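One simple operational safeguard the above suggests is gating frames on image quality before they ever reach the recognition system. The sketch below is an assumption-laden illustration, not any deployed system's logic: the thresholds and the sharpness proxy (variance of neighbouring-pixel differences, which a smudged or defocused lens flattens) are made up, and a real deployment would calibrate them against trials in the actual environment.

```python
# Illustrative operational pre-check: reject camera frames that are
# too dim or too blurry to give the recognition system a fair input.

def quality_ok(gray, min_brightness=40.0, min_sharpness=100.0):
    """gray: 2-D list of 0-255 grayscale pixel values.

    Thresholds are hypothetical; calibrate against on-site trial data.
    """
    pixels = [p for row in gray for p in row]
    brightness = sum(pixels) / len(pixels)
    # Sharpness proxy: variance of horizontal neighbour differences.
    # Smudged lenses and poor focus flatten these differences.
    diffs = [row[i + 1] - row[i]
             for row in gray for i in range(len(row) - 1)]
    mean_d = sum(diffs) / len(diffs)
    sharpness = sum((d - mean_d) ** 2 for d in diffs) / len(diffs)
    return brightness >= min_brightness and sharpness >= min_sharpness

# A flat, dim frame (as from a smudged lens) fails; a crisp,
# high-contrast frame passes.
dim = [[30] * 8 for _ in range(8)]
crisp = [[0, 255] * 4 for _ in range(8)]
print(quality_ok(dim), quality_ok(crisp))  # → False True
```

The point is not this particular heuristic but the deployment-side discipline: measure the conditions the system will actually see, and refuse inputs the vendor's accuracy claims were never tested against.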