Thank you.
I have been researching and writing in the areas of tort law, privacy law and the regulation of automated technologies for over a decade, with a particular focus on rights and substantive equality, including recent publications on safety in AI and robotics governance in Canada and work with the B.C. Law Institute's civil liability and AI project.
I'm here today representing the Women's Legal Education and Action Fund. LEAF is a national charitable, non-profit organization that works toward ensuring that the law guarantees substantive equality for all women, girls, trans and non-binary people. I'm a member of LEAF's technology-facilitated violence advisory committee and will speak to LEAF's written submissions, which I co-authored with LEAF senior staff lawyer Rosel Kim. Our submission and my comments today focus on the proposed AI and data act, or AIDA.
You've heard this before, but if we're going to regulate AI in Canada, we need to get it right. LEAF agrees with previous submissions emphasizing that AI legislation must be given the special attention it deserves and should not be rushed through with privacy reform. To the extent that this committee can do so, we urge that AIDA be separated from this bill and wholly revisited. We also urge that any new law be built from a foundation of human rights and must centre substantive equality.
If the AI and data act is to proceed, it will require amendments. We examined this law with an acute awareness that many of the harms already arising from the introduction of AI into social contexts are inequitably experienced by people who are already marginalized within society, including on the grounds of gender, race and class. If the law is not cognizant of the inequitable distribution of harm and profit from AI, then despite its written neutrality, it will offer inequitable protection. The companion document to AIDA suggests that the drafters are aware of this.
In our written submission, we made five recommendations, accompanied by textual amendments, to allow this law to better recognize at least some of the inequalities that will be exacerbated by the growing use of AI.
The act is structured to encourage the identification and mitigation of foreseeable harm. It does not require perfection and, in fact, its protection is likely to be limited by the extent to which harms are not foreseeable to the developers and operators of AI systems.
In this vein, and most urgently, the definitions of “biased output” and “harm” need to be expanded to capture more of the many ways in which AI systems can negatively impact people, for instance, through proxies for protected grounds and through harm experienced at the group or collective level.
As we note in our submission, the introduction of one AI system can cause harm and discriminatory bias in a complex and multi-faceted manner. Take the example we cite of frontline care workers at an eating disorder clinic who had voted to unionize and were then replaced by an AI chatbot system. Through an equity lens, we can see how this would cause not just personal economic harm to those who lost their jobs but also collective harm to those workers and others considering collective action.
Additionally, the system threatened harm to care-seeking clients, who were left to access important medical services through an impersonal and ill-equipped AI system. When we consider equity, we should emphasize not only the vulnerable position of care workers and patients but also the gendered, racialized and class dimensions of frontline work and of the experience of eating disorders. The act as currently framed does not seem to prompt a full understanding or mitigation of the different complex harms engaged here.
Furthermore, as you've already heard, the keystone concept in this legislation, “high-impact system”, is not defined. Creating only one threshold for the application of the act, and setting it at a high bar, undermines any regulatory flexibility that may have been intended. At this stage in the drafting, absent a rethinking of the law, we would recommend removing this threshold concept and allowing the regulations to develop in various ways to apply to different systems.
A key challenge with a risk mitigation approach, such as the one represented in this act, is that many of the harms of AI that have already materialized were unforeseeable to the developers and operators of the systems, including at the initial decision to build a given tool. For this reason, our submission also recommends a requirement for privacy and equity audits that are transparent to the public and that direct the attention of the persons responsible toward the most extensive prevention and mitigation possible.
Finally, I would emphasize that concerns about the resources required to mitigate harm should not dissuade this committee from ensuring that the act will mitigate as much harm and discrimination as possible. We should not look to expand an AI industry that causes inequitable harm. Among many other reasons, we need a human rights approach to regulating AI if the industry is to have any chance of actually flourishing in this country.
Industries will also suffer if workers in small enterprises are not protected against harm and discrimination by AI.
Public resistance to a new technology is often based on an understanding that a select few stand to benefit, while many stand to lose out. To the extent possible, this bill should try to mitigate some of that inequity.
Thank you for your time, and I look forward to your questions and the conversation.