I have a quick reaction here.
Recent research has demonstrated techniques where, by investing a significant amount of money in inference compute (not training compute), you can get models of a certain size to perform as if they were ten times bigger. It's never that simple.
Yes, model size is a proxy, but with sufficient money or sufficient compute there are ways to go beyond what model size alone would suggest. There are ways around that threshold to get more performance out of these models. There are also ways to specialize smaller models.
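To make that point concrete, here is a minimal sketch of one such inference-time technique, best-of-N sampling with a reranker: instead of using a bigger model, you sample many candidate answers from a smaller one and keep the highest-scoring candidate. The function names (small_model_generate, score_answer) are hypothetical placeholders for a real model call and a real verifier, not anything referenced in the discussion.

```python
import random

def small_model_generate(prompt: str, temperature: float = 1.0) -> str:
    """Placeholder for one sampled completion from a smaller model."""
    return f"candidate answer {random.randint(0, 9999)} to: {prompt}"

def score_answer(prompt: str, answer: str) -> float:
    """Placeholder for a verifier or reward model that rates an answer."""
    return random.random()

def best_of_n(prompt: str, n: int = 32) -> str:
    """Sample n candidates and keep the highest-scoring one.

    Larger n means more inference compute; on some tasks this can close
    part of the gap to a much larger model, which is the technique the
    speaker is alluding to.
    """
    candidates = [small_model_generate(prompt) for _ in range(n)]
    return max(candidates, key=lambda ans: score_answer(prompt, ans))

if __name__ == "__main__":
    print(best_of_n("How should compute thresholds account for inference?", n=8))
```

The design point is simply that the compute spent at inference (n samples plus scoring) is a dial you can turn up with money, independent of the model's parameter count.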
Again, I think a use-case-based approach is what can potentially offer an opportunity to mitigate the risks. The use cases mentioned are absolutely relevant, but the triggers are never that simple.