My name is Gillian Hadfield. I'm a professor of law and economics at the University of Toronto, where I hold the Schwartz Reisman chair in technology and society. I'm also a CIFAR AI chair at the Vector Institute and a Schmidt Sciences AI2050 senior fellow. I basically don't think about anything except AI these days.
I'm appearing here in a personal capacity. I really appreciate the opportunity to speak to you about this crucial piece of legislation.
In my view, Parliament should move to enact AIDA as soon as possible. However, there are some outstanding areas of concern that I would like to highlight, along with some recommendations.
First, I think AIDA should recognize and address the fundamental, systemic and potentially catastrophic risk posed by large models. I don't think this is just fear talking. AIDA is currently focused on individual harms. I think that means we are neglecting potential systemic issues like financial instability, election interference and national security threats posed by advanced AI systems. Recent regulatory actions in the U.S. and the U.K. highlight the need to address systemic risks in AI alongside individual harms.
Proposed amendments to the definition of “high-impact system” remain focused on individual harms and should be expanded to include coverage of AI likely to cause systemic harms regardless of domain.
To further address systemic harms, Canada should swiftly establish, either as a part of AIDA or in separate legislation, a mandatory registry for large AI models. Such a registry would provide basic insight into developers, associated risks and legal compliance, enabling effective regulation amid the rapid pace of AI development.
Second, AIDA needs to retain the flexibility and adaptability that I saw in its initial draft. This is because of a basic tension at the core of AI regulation: Legislation does not move quickly; advanced technologies do. Consider the very process of passing Bill C-27. It's been well over 500 days since Minister Champagne introduced this legislation in June 2022, yet the bill remains at some distance from becoming law. Meanwhile, AI has been racing forward. Since that time, we have all witnessed the emergence of ChatGPT, GPT-4 and additional large models. Companies have scrambled to integrate AI into their operations. AI continues to demonstrate its practical applications across diverse fields like law, health care and finance. As I mentioned, other countries are taking action.
The rate of change of advanced technologies demands responsiveness and adaptability in the regulation we impose on them. The original draft of AIDA was extremely flexible in this regard. It set out broad parameters for AI regulation, leaving specific details to be worked out in regulations and administrative decisions. Minister Champagne's letter of November 28 last year reduced this flexibility by moving key regulatory requirements into the legislation itself. As you consider this bill and these amendments at committee, I urge you to be mindful that, while this may provide greater clarity to businesses in the short term, it will impair AIDA's flexibility and, therefore, its long-term effectiveness as the foundation of Canada's AI regulation.
The most important point I want to emphasize is that additional supports must be implemented to operationalize the desired flexibility, longevity and balance of AIDA. Relying on regulations that will take at least two years to develop will leave stakeholders in a dynamic and rapidly advancing area with significant uncertainty, as you've heard. Canada can make itself a leader in AI regulation, however, by implementing two low-barrier regulatory schemes that provide AIDA with the flexibility it needs while increasing certainty for stakeholders.
The first is safe harbours: time-limited guidelines for acceptable AI use that would shield organizations from legal repercussions. The second is a proposal I've made regarding regulatory markets, which would involve licensing private regulators to ensure flexible and efficient regulation.
These solutions aim to balance innovation and safety, to promote effective technology regulation without stifling innovation and to ensure that citizens are protected from AI-related risks. I'll note that Eric Schmidt, the former CEO of Google, wrote a piece in The Wall Street Journal just last Saturday advocating this regulatory market approach.
I'd like to thank the committee for your hard work on this important bill, and I look forward to your questions.
Thank you.