It's already the case that some of the attacks large actors carry out are aimed at a particular target, but they don't consider collateral damage. There was an attack a few years ago called NotPetya. It targeted Ukraine, but it spread worldwide and caused havoc absolutely everywhere.
With regard to the way people are using AI now, when I talk about narrow AI I mean specific tools for specific occasions. If your concern is that they'll launch an AI attack and it will develop a mind of its own and do its own thing, that's not the case. This is the kind of AI where there's still a pilot in the cockpit. There are still human beings running it and deciding to let it loose. You're still going to get collateral damage, particularly if it's unregulated state actors that are doing it—