No. I think it's very important to cover general purpose AI systems in particular, because they could be the most dangerous if misused. This is also where there is the most uncertainty about the harms that could follow from these systems.
I think that having a law that says more oversight is necessary for these general purpose systems will also encourage developers to create more specialized systems. In fact, in most applications in business, science, or medicine, we want a system that's very specialized on the one particular kind of question we care about. Until recently, these were the only kinds of AI systems we knew how to build. General purpose systems like large language models can be specialized and turned into something specific that doesn't know everything about the world and only handles certain specific questions, in which case they become much more innocuous.