Good morning, Mr. Chair and members of the committee. Thank you for the opportunity to comment on the AI portion of Bill C-27.
I am a full professor in the Faculty of Law at Université de Montréal. I am also the Canada Research Chair in Collaborative Culture in Health Law and Policy, as well as the Canada-CIFAR Chair in AI, affiliated with Mila. From January 2, 2022, to December 2023, I co-chaired the Working Group on Responsible AI for the Global Partnership on AI.
The first point I want to make is to reaffirm not only the importance, but also the urgency of creating a better legal framework for AI, as proposed in Bill C-27. That has been my view for the past five years, and I am now more convinced than ever, given the dizzying pace of recent developments in AI, which you are all familiar with.
We need legal tools that are binding. They must clearly set out our expectations, values and requirements in relation to AI at the national level. During the citizen consultations that culminated in the development of the Montréal Declaration for a Responsible Development of Artificial Intelligence, the first need identified was for an appropriate legal framework that would enable the development of trusted AI technologies.
As you probably know, that trend has spread across the world, the most obvious example being the European Union's efforts. As of last week, the EU is one step closer to adopting a regulatory framework for AI.
In addition to these national frameworks, the global discussions around AI, and the decisions that come out of them, will have repercussions for every country. In fact, the idea of creating a dedicated international AI authority is being discussed.
In order to ensure that Canadian values and interests are taken into account in the international space, Canada has to be able to influence the discussions and decisions. Setting out a national vision with strong and clear standards is vital to playing a credible, meaningful and influential role in the global governance of AI.
That said, I think Bill C-27 could still use some improvements. I will focus on two of them today.
The first improvement is to make the artificial intelligence and data commissioner more independent. Although recent amendments have brought improvements, the commissioner remains closely tied to Innovation, Science and Economic Development Canada. To avoid any conflict of interest, real or apparent, the government should create a stronger wall between the two entities. This would address any tension that might arise between the government's role as a funder, on the one hand, and its role as a watchdog, on the other.
Possible solutions include creating an office of the artificial intelligence commissioner that is fully independent of the department, and empowering the commissioner to impose administrative monetary penalties or to require corrective actions under the accountability framework. In addition, the commissioner could be asked to recommend new or improved regulations informed by their experience as a watchdog, mainly through the annual public report.
Other measures could also be taken. Once the legislation is passed, for instance, the government could give the commissioner the financial and institutional resources, as well as the qualified staff, necessary to carry out their duties successfully. Making sure the commissioner has the means to achieve their objectives is essential. Another possibility is to create a mechanism whereby the public could report issues directly to the commissioner, which would establish a direct relationship between the two.
The second major improvement that's needed, as I see it, is to further strengthen the crucial role that human rights can play in analyzing the risks and impacts of AI systems. The bill specifically mentions the importance of taking human rights into account when defining the classes of high-impact AI systems. However, it is much less clear about then incorporating consideration of those rights into companies' assessments, which could include an analysis of the risks of harm and adverse effects.
I would also recommend adding specific language on the need to assess the impact on the human rights of individuals or groups of individuals who may be affected by high-impact AI systems. A portion of those assessments could also be made public. These are sometimes called human rights impact assessments.
The Council of Europe, the European Union with its AI legislation, and even the United Nations Educational, Scientific and Cultural Organization are working on similar tools, so exploring the possibility of sharing expertise would be worthwhile.
The second recommendation is fundamental. While the AI race is very real, there can be no winner in a race to violate human rights. The legislation must make that clear.
Thank you.