I think those are excellent questions.
I think, fortunately, we're not without tools for dealing with them. To piggyback off the testimony that Jennifer just gave, I think it's actually quite right to ask, “How can we massage this into a form that fits within our legal frameworks?” We're not going to overhaul the Constitution tomorrow. It's not going to happen.
One thing we can do is recognize that we can't predict the capabilities of systems at the next level of scale, so safety by design would seem to imply pausing at that frontier until we can. We're not talking about a blanket ban. We're saying, until we can predict those capabilities, let's incentivize the private sector to make fundamental advances in the science of AI and to give us a scientific theory for predicting the emergence of those dangerous capabilities.
I'd also say we can draw inspiration from the White House executive order that came out recently. One of the key things it does, again piggybacking on the idea that sunlight is the best disinfectant, bringing all of this into the open so we can evaluate what's going on, is impose a reporting requirement. If you train an AI system using more than a certain amount of computational power in the training process, you need to report the results of various audits and evaluations you've performed. Those evaluations cover bioweapon design capability, chemical synthesis ability and self-replication ability. That's all baked into the executive order.
I'd like to see something like that here: a tiered process that essentially mirrors the EO, based on computational processing power thresholds. Above this line, you have to do this; above that line, you have to do that. It's that sort of thing.
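To make the tiered-threshold idea concrete, here is a minimal sketch in Python of how such a rule might be expressed. The specific cutoffs and obligation wordings are hypothetical placeholders for illustration, not the executive order's actual legal text; only the general shape (higher training compute triggers stricter reporting duties) comes from the testimony above.

```python
# Hypothetical tiers, ordered from highest cutoff to lowest. Each pairs a
# training-compute cutoff (total operations used in training) with the
# reporting obligations that attach above it. Numbers are illustrative only.
TIERS = [
    (1e26, ["report evaluations: bioweapon design capability, "
            "chemical synthesis ability, self-replication ability"]),
    (1e23, ["notify the regulator that a large training run is underway"]),
]

def obligations(training_ops: float) -> list[str]:
    """Return the obligations triggered by a training run's total compute."""
    for cutoff, duties in TIERS:
        if training_ops >= cutoff:
            return duties
    return []  # below every threshold: no reporting duty

print(obligations(3e26))  # top tier: full evaluation reporting
print(obligations(5e23))  # middle tier: notification only
print(obligations(1e20))  # below all thresholds: no duties
```

The design choice worth noting is that compute is used as an observable proxy for capability: because we can't yet predict what a model at a given scale will be able to do, the rule keys obligations to something measurable before training completes.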