Thank you for the invitation to appear today.
My name is Ali Dehghantanha. I am a professor and Canada Research Chair in Cybersecurity and Threat Intelligence at the University of Guelph, and I also work closely with industry on securing real-world AI systems. I would like to focus my remarks on a critical gap that is currently limiting Canada's ability to fully realize the benefits of artificial intelligence in strategic sectors.
Today, the primary barrier to AI adoption is not capability; it is trust. Across sectors, organizations are increasingly capable of building and deploying AI systems. However, they are often unable to safely operationalize these systems at scale due to concerns around security, misuse, reliability and regulatory exposure. In sectors like advanced manufacturing and construction, where AI-driven automation meets physical safety, the stakes of this trust gap are particularly high.
In practice, we are seeing that AI systems are being deployed without sufficient mechanisms to continuously monitor, verify and remediate risks once they are in operation. This creates what I would describe as an AI security deadlock, where innovation is technically possible but deployment is slowed or blocked by unresolved risk.
Current approaches to AI governance tend to focus on pre-deployment checks, model evaluation or static compliance frameworks. While these are important, they are not sufficient for modern AI systems, which are dynamic, adaptive and increasingly integrated into critical workflows.
What is missing is a run-time layer of control—an infrastructure that continually observes AI behaviour, detects failures or misuse, and actively intervenes to correct or contain those issues in real time. This is similar to how cybersecurity evolved. We do not secure systems today solely through a design-time review; we rely on continuous monitoring, detection and response. AI systems require a similar paradigm. Furthermore, this run-time approach allows for robust security oversight without requiring access to a company's proprietary source code or sensitive training data, protecting Canadian intellectual property while ensuring safety.
From a policy perspective, I would suggest three priority areas.
First, Canada should support the development of standards and frameworks for continuous AI risk monitoring and post-deployment assurance. This includes defining what “safe operation” means in practice—not just at deployment, but throughout the life cycle of AI systems.
Second, we should incentivize secure AI deployment, not just AI deployment. Many current programs focus on building AI capabilities, but fewer address the operational challenge of deploying these systems safely in high-stakes environments.
Third, Canada has the opportunity to lead in the emerging domain of AI security and risk orchestration. Supporting domestic companies and research efforts in this space can strengthen both our economic position and our digital sovereignty. As we look toward the horizon of quantum computing, the need for these real-time adaptive security layers to protect our AI infrastructure against next-generation threats becomes even more urgent.
Finally, I would like to emphasize that the goal is not to slow down AI innovation but to enable it. By addressing the security and trust gap, we can unlock faster, safer and more responsible adoption of AI across Canada's strategic industries.
Thank you. I look forward to your questions.