The legislation and the recently proposed amendments provide a high-level structure for requirements, and we expect the implementation process to define those requirements in more detail.
There is also the enforcement approach, in which certain abilities, such as the power to audit, are less specific. There seems to be an opportunity in the implementation process to provide more detail about how organizations can demonstrate compliance with the requirements once they are defined in more detail.
From a Microsoft perspective, we have been working on internal governance for responsible AI for seven years, and we have developed many of these constructs internally, which can serve as a starting point. We can imagine that many other organizations have not spent as much time on this issue or may not have as many resources to apply to it. Providing more certainty on how to comply will be of incredible value.
We have developed our responsible AI principles, as well as a responsible AI standard to put those principles into practice. The standard takes the form of a set of specific goals and requirements that apply to internal teams working on AI, covering practices such as mitigating the risk of bias and ensuring that sufficient transparency and accountability are built into our processes.