I'll jump in and agree with what Anthony said.
It's really important to emphasize just how much investment is going into this. People used to think there was no real way to regulate AI, or to make sure nobody was doing something dangerous with it, because it was just software that anybody could run on a laptop in their garage. That's not at all the case anymore.
Right now, these systems cost hundreds of millions, even billions, of dollars to build. The exact figures aren't publicly disclosed these days, but the investments are huge and keep growing, and the hardware is extremely specialized, as Anthony mentioned. That makes the hardware the main point of intervention for international regulation of AI, which, as I mentioned, is absolutely critical.
The only thing I would add is that it's very important to think about how to make such a scheme as robust as possible. Verification might look like a whitelist of the types of AI systems that are allowed to run on the chips. It might also include location tracking, so that we know where the chips are in case we need to recall them.
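To make the whitelist-plus-location idea more concrete, here is a minimal sketch, assuming a hypothetical on-chip firmware layer that checks each workload's hash against an allow-list and emits authenticated location reports. Every name in it (REGULATOR_KEY, APPROVED_WORKLOADS, chip-0001) is invented for illustration; a real scheme would presumably use secure hardware attestation and public-key signatures rather than a shared secret.

```python
import hashlib
import hmac

# Hypothetical shared secret between chip firmware and a regulator.
# Real hardware attestation would use per-chip keys in a secure enclave.
REGULATOR_KEY = b"shared-secret-for-illustration-only"

# Allow-list: SHA-256 digests of approved AI workloads.
APPROVED_WORKLOADS = {
    hashlib.sha256(b"approved-model-weights-v1").hexdigest(),
}

def allowlist_permits(workload: bytes) -> bool:
    """Permit execution only if the workload's digest is on the allow-list."""
    return hashlib.sha256(workload).hexdigest() in APPROVED_WORKLOADS

def location_report(chip_id: str, coordinates: str) -> bytes:
    """Build an authenticated location report, e.g. sent on a fixed schedule,
    so missing or forged reports flag a chip for recall."""
    message = f"{chip_id}:{coordinates}".encode()
    tag = hmac.new(REGULATOR_KEY, message, hashlib.sha256).digest()
    return message + b"|" + tag

if __name__ == "__main__":
    print(allowlist_permits(b"approved-model-weights-v1"))  # True
    print(allowlist_permits(b"unapproved-model-weights"))   # False
    print(location_report("chip-0001", "47.6062,-122.3321"))
```

The point of the sketch is only that both checks are mechanically simple once the hardware hooks exist; the hard problems are key management and tamper resistance, not the verification logic itself.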
In fact, we should stop developing more powerful AI systems immediately. The most robust way of doing that would be to stop building the chips altogether, rather than trying to set up a more complicated and less robust system of technical verification. That's my personal view: to give us some breathing room, we should stop building the chips and stop building and maintaining the factories that produce them.
As the other experts and I mentioned, we potentially have only a few years here. This is not a situation in which we have time to search for the perfect solution. We need to implement a solution immediately that will slow or pause the incredible rate of progress toward superintelligence.
