Sure.
Yes, AI is currently used on both sides. On the defensive side, it's a tool for things like verifying that you are who you say you are; even very simple applications, such as face recognition on your phone, may use AI to some capacity. On the offensive side, it's used to attack other systems. It's a tool, so it plays both sides.
What we at SHIELD are very concerned with is that AI is a very good tool, but only as long as it's working. What if a malicious actor attacks that tool and breaks it? That would create far more problems, which is why we have been advocating strongly for responsible and secure AI. As AI goes into all of these devices and spreads everywhere, it's effectively left to its own devices. No one is thinking about protecting the AI system itself to make sure it's working appropriately, or as intended.
The problem is that if you are not securing and taking care of the AI system that is meant to protect us, and that system itself gets attacked, then the consequences could be far more damaging than what we are seeing in some cases—