Additional work needs to be done on the definitions, perhaps taking a principles-based approach. One thing that really surprised me when I was talking to an insurance underwriter about their use of AI was that they don't use AI for all of their claims adjustments. They said that 10% of their claims are silly and simple, so they have AI handle those. That routine work gets taken off the table, and people keep doing what people are uniquely qualified to do: creativity, ingenuity and critical thinking. If I look at harms and categories of use, does that still count as a high-impact use because the tool is being used for claims adjudication, whether the claims are the silly, simple ones or not?
There is a need to go back to those definitions and examine how these tools are actually implemented in the real world. I don't think that has been done sufficiently across the broader community, as was said.