Thank you so much for the question. I think it's an excellent one. It's certainly an issue we're aware of at the national level, and the Department of National Defence has been considering this issue.
I think there are an incredible number of guardrails that need to be put in place when deciding which systems are used and for what purposes. AI systems will hallucinate. They will make mistakes. They have built-in biases. The Department of National Defence needs to have a comprehensive strategy.
I was consulted on the AI strategy the Department of National Defence put out. However, it needs more substance. We need clarity on which systems, for which purposes and in which applications. Are we using them for back-office functions like recruiting individuals? Are we using them for targeting? The concerns grow, of course, as we move down the spectrum of use. There need to be clear policies and guidance for the Department of National Defence on which systems are permissible and which are not; these currently do not exist.
You pointed to the issue of bias. That is incredibly important for this committee to consider as you think about the application of new and emerging technologies. There will be biases built into these systems, and technological efforts to address them won't be sufficient. There needs to be clarity about who is making decisions and who is held accountable for those decisions when these systems are applied.