I think it's a really hard but important question. At Microsoft, we've been contemplating this as well. We've been establishing our own internal governance program and understanding how we calibrate the application of our requirements in that governance process to higher-risk scenarios.
We have developed three categories of what we call “sensitive uses” internally at Microsoft.
The first is any system that has an impact on life opportunities or life consequences. In that category, we think about systems that impact opportunities for employment, education or legal status, for example.
The second category is any system that has an impact on physical or psychological safety. Think about safety-critical systems in the context of critical infrastructure, for example, or systems that might be used by vulnerable populations.
The third category is any system that has an impact on human rights.
I do think it's useful to have a framework to think about triggers for higher risk and then, where there is readiness to go further, to think about some of the more specific sorts of use cases like education and employment. That is represented in some of the high-impact examples in the AIDA as well. Then it's also important to recognize that there is going to be a need to evolve, and to put guardrails in place for how the high-impact systems and the examples evolve over time. It's about not just having an open-ended process but also thinking about what the triggers are going to be for meeting that bar going forward.