I'm glad we're focusing on this part of the approach.
I do think that the effort we've also seen in the European Union to specify, “These are the domains in which we are concerned,” framed in terms of applications, is unlikely to be robust and stable over time, because there are domains we haven't thought about. The point of a general-purpose system, the GPT-4 type of system, is that it's going to find its way into absolutely everything we're doing. That's point number one, so I think that coming at it from the point of view of saying, “We're only going to carve out these ones,” is not going to be stable.
Let me turn to the safe harbours and regulatory markets approaches. I'll start with safe harbours, because the term was used here...and it's one I use a lot. We need to get the infrastructure in place to give us the capacity to act as we learn, and we will learn only over time how things are playing out. Industry needs some certainty, and the idea of a safe harbour is to say, “Let's work through where we think, with these kinds of controls in place, this kind of thing is currently safe,” so that entities that are building or applying AI can get the certainty they need by saying, “We've done what's in the safe harbour. We're protected for now.” That may need to evolve. There's just no way to get around the fact that this is going to be a domain of uncertainty, and it's going to evolve. That's true across a complex economy, but safe harbours are a technique I think we should be exploring.
The regulatory markets approach would then also say, “Okay, let's start with those areas where we know there are concerns.” We know a lot about the use of models to discriminate, for example. Can we foster the development of new technologies that will help us track things like that, and have government give its stamp of approval to those technologies, again, in an iterative, evolving way? There's no way to get around the fact that we cannot write a piece of legislation that says, “Here are the precise things we're concerned about, and here's what you can do to completely avoid any liability and concern.” I don't think there's any pathway like that.