Thank you, Mr. Chair.
I'm a professor of law at the University of Ottawa, where I hold the Canada Research Chair in information law and policy. I work in the areas of privacy law and AI governance.
As I'm sure you're all aware, Canada's attempt to regulate AI technologies through a cross-sectoral law, the proposed artificial intelligence and data act, failed with Bill C-27 in January 2025.
This bill would have created a set of ex ante measures for different actors within the AI value chain. These were only for high-impact systems and would have required risk identification and mitigation, documentation, some public-facing transparency and some data governance. The bill provided for limited and predominantly light-touch oversight.
The bill was regarded as a broad, cross-sectoral AI statute, but it had important limitations. Although high-impact systems were initially undefined, proposed amendments by the minister sketched out a series of high-impact categories mainly linked to human-oriented use, for example, the use of AI in employment, automated decision-making, the use of biometric data and so on. This is so, even though systems used in industrial or manufacturing contexts can bring with them serious potential risks as well. Of course, new categories of high-impact AI could have been added to the list by regulation over time.
The application of the AIDA was also limited to systems designed for use in interprovincial or international trade and commerce. It would not have applied to the federal public service. Nor did it apply to the Department of National Defence or the Communications Security Establishment, or to those who supplied AI systems to them.
The signals now seem clear that AIDA will not be resurrected. There's a tendency to assume that because the bill failed, there's no AI regulation in Canada. A recent KPMG survey indicated that 92% of Canadians believe Canada has no AI regulation. It also revealed a significant trust gap when it came to AI.
In reality, there's a considerable amount of AI regulation in Canada. However, it's more sectoral and context specific. It's also more fragmented, less obvious and less transparent. It sometimes looks very different from what ordinary Canadians might consider to be regulation, and it often involves soft law, ranging from statutes to guidance.
Many existing laws, such as privacy law, already apply in different ways to AI. In addition, policies, guidance and best practices are developed by government departments and agencies, and by regulators, including privacy commissioners, the Competition Bureau, human rights commissions, financial conduct authorities, law societies and many others.
AI governance is also taking place through standards development and, in the private sector, through corporate self-governance, according to guidance from diverse sources. These have the potential to be reinforced by privately managed compliance certification. The government is exploring how standards and certification could be leveraged to assist Canadian businesses in meeting EU AI Act requirements.
Budget bill amendments to the Red Tape Reduction Act will enable the use of regulatory sandboxes across the federal sector. The federal government has launched a beta register of AI in the public sector and is currently consulting on it. Since 2019, we've had the directive on automated decision-making for the federal public service, and this has been joined by a “Guide on the use of Generative AI” in the public sector. The federal government has also created a list of suppliers committed to principles relating to responsible and effective AI use. I offer these as diverse examples of AI regulation, broadly understood, at the federal level.
Other laws are contemplated or will be amended to address specific AI issues. We may see new online harms legislation. A new privacy bill, when it's eventually introduced, will likely contain provisions related to automated decision-making in the private sector.
All of this activity is encouraging, but where are the gaps?
First, many existing measures are voluntary, and oversight and compliance mechanisms are lacking. While guidance is important in the early days, as things advance, public confidence will require oversight. There may also be a need in some contexts to make compliance compulsory. If oversight and compliance are left to existing regulators, commissions or agencies, it will be necessary to consider what legislative changes might also be required and whether regulators have adequate resources to fulfill complex and expanding mandates.
Second, much of this regulatory activity is difficult to detect unless you follow it closely. This undermines public trust. It's also particularly burdensome for small and medium-sized enterprises. A national coordinating body that ensures coherence, enables greater transparency and promotes federal-provincial harmonization would be valuable. Such a role could also support public trust by serving an ombuds function. There must be ways for Canadians to surface their concerns about AI systems in both public and private sectors.
Third, if approaches are piecemeal and sectoral, then so too will be law reform. It would be useful to map what reforms are needed or contemplated—a clear AI governance strategy. Such a road map was not part of the AI strategy consultation.
Thank you, Mr. Chair, for this opportunity to address this committee. I look forward to any questions.