Good afternoon. Thank you, Mr. Chair and honourable members of the committee.
My name is Carole Piovesan. I am a managing partner at INQ Law, where I advise clients on privacy, cybersecurity, data governance and AI risk management.
I've had the privilege of contributing to AI policy discussions nationally and internationally, including through the OECD.AI Policy Observatory. I have previously appeared before this committee, as well as the INDU committee. I am an adjunct professor at the University of Toronto's Faculty of Law, where I teach AI regulation. As well, I co-authored the book Leading Legal Disruption: Artificial Intelligence and a Toolkit for Lawyers and the Law.
The opinions I present today are my own and do not reflect those of my law firm.
To understand how we should govern AI, we should go back to first principles and ask what AI is trying to achieve. In 1950, Alan Turing posed what he called the imitation game: a test to determine whether machines could think. He believed that one day machines would be able to play games, remember, observe results of their own behaviours, learn from rewards and punishments, and even deliberately introduce mistakes into their working.
Today, some of the leading AI researchers around the world are divided on where the trajectory of AI is taking us. Award-winning Canadian researchers such as Yoshua Bengio and Geoffrey Hinton, both pioneers of deep learning, warn that we may soon have computers that exceed human intelligence, with profound implications for safety and control: indeed, the existential threat we all hear about. Others, such as Yann LeCun, another pioneer, argue that AI is best understood as machine intelligence that augments human intelligence rather than replacing it.
The purpose of pursuing AI, and what those pursuits achieve, matter for how we think about governance. If AI is a tool used to extend human capabilities, we govern its use. If AI is an autonomous system capable of independent reasoning, we regulate its development and deployment with a different level of vigilance. Canada's approach must account for both.
Around the world, we are seeing at least three distinct models of AI governance emerge.
Under the Trump administration, we are seeing a deregulatory approach in the United States with an emphasis on competitiveness over comprehensive safeguards. The federal approach relies on existing sectoral laws applied through agencies such as the FTC, while actively resisting state-level experimentation with stricter AI rules.
The United Kingdom and Singapore take a different approach. There, we are finding a much more tailored sectoral approach to AI regulation. The U.K., in particular, has a principles-based approach asking existing sector-specific regulators to interpret and apply cross-cutting principles such as safety, transparency, fairness and contestability within their domains. The U.K. considers that this approach offers critical adaptability that keeps pace with rapid technological change, although there are certain developments that suggest binding measures for the most powerful AI models may be forthcoming.
Singapore has adopted a soft-law, voluntary framework. There is no AI-specific regulation. However, Singapore's approach, through consensus building among government, industry and citizens, and through instruments such as the Model AI Governance Framework and the AI Verify Foundation's testing toolkit, has proven somewhat successful in building trust and a common approach to AI development. With Singapore's investment in national AI literacy and its consultative, iterative approach to governance, it's a model from which Canada can draw inspiration.
Then we see the third model, which is far more prescriptive. That model is found in the EU AI Act, which I know this committee has already heard about. That act is horizontal: it prescribes obligations across the life cycle of AI development and deployment, along the entire supply chain.
Canada's approach should be tailored to our context. Regulating frontier AI systems is not the same as regulating the use of Copilot in a law firm or a chatbot on a customer service line. The U.K.'s context-specific approach recognizes this. Canada is more like the U.K. and Singapore than the United States or Europe. We value proportionate regulation that protects rights while enabling innovation.
I'll close with my three-point call to action.
The first is to continue building a regulatory guidance approach for safe AI. Our AI safety institute must operate at full force, demonstrating that Canada takes the safety of these systems seriously. We must continue to pursue iterative standards, guidance and a directives-based approach to artificial intelligence, with an emphasis on real-world testing for high-risk AI contexts. Lab benchmarks and offline evaluations show only how models perform on static tests, not how they actually behave in real-world use.
Second, and very importantly, we need to improve the diversity of representation and perspectives in policy and throughout the development, evaluation and deployment process. Individual perspectives matter, and they are highly underrepresented throughout the AI ecosystem.
Third, we must conduct an environmental scan to better understand, on a sectoral basis, where our laws have gaps with respect to AI and where AI is already accounted for, so that we have the coverage we need for the everyday use of AI in business. Our path forward should be to target soft and hard law at home in a tailored manner and to leverage Canada's trusted global position to promote robust, harmonized standards, certifications and guidance for responsible AI.
Thank you. I welcome the committee's questions.