I just want to circle back to this question of whether you regulate the model or the end applications, because that is pretty central here. We're going to have to walk and chew gum at the same time. There are risks that come, irreducibly, from the model itself. Look at OpenAI's ChatGPT, for example. They built this one system, one model, and I don't think I can... in fact, I know that I can't, and that no one technically can, count the full range of end-use applications a tool like that will have. You'll use it in health care today, in space exploration tomorrow, and in software engineering the day after.
The idea that we're going to be able to take a general-purpose model like this and regulate it only at the application level, playing a losing game of whac-a-mole with each new use, is just not going to track reality, unfortunately. That is true for a certain subset of risks, the more extreme ones. Consider, for example, the risks from general-purpose models that can orient themselves in the world and have a high degree of context awareness. You have to regulate the model at that point, because the model itself is the source of the risk, irreducibly.
For other things, yes, we need application-level regulation and legislation. Again, you see that in the executive order: we're doing both things. But I do want to surface that although there might seem to be a tension between these two approaches, they are not at all incompatible. In fact, in some ways, they are deeply complementary.
I just wanted to put that thought out there.