We have our broader responsible AI program, which we have been developing for the last few years. It has a few components. First, we have a company-wide AI governance team. This is a multi-stakeholder team that includes some of our Microsoft researchers, world-leading AI researchers who share knowledge about where the technology is today and where the state of the art is heading. They come together with people working on legal and policy issues and people with an engineering background to oversee the overall program.
In terms of the other components, we also have a responsible AI standard. This is a set of requirements across our six AI principles, which I can go into in more detail, that ensures any team developing or deploying an AI system is doing so in a way that meets those principles.
The final piece is a “sensitive use” review process. This comes into play when a potential development or deployment of a system hits one of three triggers: any time a system will be used in a way that affects an individual's legal opportunities or legal standing, any time there is a potential for psychological or physical harm, or any time there are implications for human rights. In those cases, the governance team I mentioned comes together and reviews whether we can move forward with that particular development or deployment of AI to ensure that it's being done in a responsible way.
You can imagine that those reviews apply across all of our systems, and they include the discussions we're having about facial recognition.