Thank you very much, Mr. Chair and members of the committee, for inviting me to comment on the challenges related to regulating AI.
Although I will be testifying in English today, I will respond to your questions in English or French.
I am co-leader of the national cybersecurity and data protection group at Gowling WLG, and I'm an associate professor at the faculty of law at the Université de Sherbrooke. I am a practising lawyer called to the bars of Quebec and Paris. My evidence today represents my own views. I am here as an individual, not representing my law firm, clients or any third parties.
Much of my legal career has focused on comparative analysis of legal regimes across the globe, advising clients on their compliance obligations in the jurisdictions where I am qualified to practise. My practice focuses on data protection and cybersecurity, and it naturally extends to artificial intelligence, given its role as a major data-driven technology.
To me, Canada has always been a model of education, growth and innovation. That's why I chose to pursue my doctorate, start my family and build my life here—recently earning citizenship, which remains one of my proudest moments. I believe that Canada's institutions, diverse economy and culture of innovation create an environment well suited for the effective development, adoption and regulation of AI technologies.
Today I would like to discuss the challenges of AI, not simply as an ever-evolving technology but as a new field of regulation. In my view, grounded in my experience of the current international landscape, there are three key pitfalls that we must not overlook.
The first one is that newer doesn't mean better. There is a natural tendency to respond to new technology by creating new laws. However, consistent with the civil law tradition, leading jurists have long recommended applying ancient law to technological revolutions. This approach is not about doing nothing. Rather, it calls for revisiting existing areas of law and adapting them, case by case, to each new technology.
Today, AI does not exist in a legal vacuum in Canada. A wide range of legislation already applies, including copyright, trademark, liability and data protection law. In this last area, we are already seeing new obligations related to automated decision-making, including in Quebec, to ensure transparency when AI is used. In that sense, prior to tabling bills like the former AIDA, we should assess current laws and identify any gaps before imposing new requirements.
My second message would be that faster doesn't mean better. There is a natural tendency, again, to adopt laws as quickly as technologies evolve. However, in law more than in any other field, slow and steady often proves the wiser approach. A look at both domestic and international developments illustrates why.
In data protection, for example, the GDPR, the General Data Protection Regulation, was adopted in 2016, but it took Quebec five years to amend its own legislation in response, with Law 25, particularly in light of the GDPR's international impact. In the realm of AI, the EU AI Act, which came into force in August 2024, is already facing a form of retrenchment, especially regarding implementation timelines and the regulatory burden on tech companies. Whether it will achieve the same success as the GDPR remains uncertain.
Closer to home, AIDA faced significant changes after its introduction. The most recent version contained no fewer than 70 references to forthcoming regulations in just 20 pages—an ambitious effort, but far from a self-contained legislative text.
My last message would be that heavier doesn't mean better. Again, there is a tendency to assume that the greater the burden on organizations, the better the protection for the public. This is not always the case, and, more importantly, it can undermine the competitiveness of small and medium-sized enterprises. AIDA reflected this trend, mandating multiple assessments at various stages of an AI system's life cycle. While theoretically sound, this approach is rarely feasible in practice, at least based on my experience.
In sum, I believe that AI legislation can succeed only through sustained and substantive collaboration with stakeholders in industry, academia and civil society to ensure that any framework, first, reflects a risk-based approach; second, appropriately takes into account the state of AI technology, including its current limitations; third, assigns responsibility along the AI value chain; and finally, harmonizes core concepts with existing international frameworks.
With the chair's permission, I would be pleased to submit a short written brief in French and English on the issues I have addressed in my opening remarks.
Thank you, and I look forward to answering this committee's questions.