Some things that you said are pure fiction, but others are cause for concern.
I think we should be concerned about a system that uses artificial intelligence to target, for example, all the parliamentarians of a given country. Such a scenario is entirely plausible from a scientific standpoint, since building it raises only engineering questions about implementation, not open scientific ones. That is why several countries are currently discussing a treaty that would ban systems of this kind.
However, we must remember that these systems are not truly autonomous in any high-level sense. They simply follow the instructions we give them. A system will not decide on its own to kill someone; it would have to be programmed for that purpose.
In general, humans will always decide what counts as good or bad behaviour on the system's part, much as we do with children. The system learns to imitate human behaviour: it finds its own solutions, but according to criteria, or an objective, chosen by humans.
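That division of labour can be made concrete with a minimal sketch (mine, not from the discussion): the objective function below stands in for the "good behaviour" criterion that humans define, while the search loop stands in for the system finding its own solution. The names `human_objective` and `optimize` are illustrative, not part of any real system.

```python
import random

def human_objective(x):
    # The goal is chosen entirely by humans: here, "good behaviour"
    # means getting x close to 3. The system never picks this goal itself.
    return (x - 3.0) ** 2

def optimize(objective, steps=10000, seed=0):
    # Simple random hill climbing: the system searches for its own
    # solution, but success is judged only by the human-supplied objective.
    rng = random.Random(seed)
    x = 0.0
    best = objective(x)
    for _ in range(steps):
        candidate = x + rng.uniform(-0.5, 0.5)
        score = objective(candidate)
        if score < best:
            x, best = candidate, score
    return x

solution = optimize(human_objective)
print(round(solution, 2))  # converges near 3.0
```

The point of the toy example is that the search procedure is generic; everything that makes one outcome "better" than another lives in the objective that a human wrote.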