Mr. Chair and members of the committee, thank you for inviting me to address you all this afternoon.
My name is Christelle Tessono, and I'm a technology policy researcher currently pursuing graduate studies at the University of Toronto. Over the course of my academic and professional career in the House of Commons, at Princeton University, and now with the Right2YourFace coalition and The Dais, I have developed expertise in a wide range of digital technology governance issues, most notably AI.
My remarks will focus on the artificial intelligence and data act, AIDA, and they build on the analysis submitted to INDU last year. This submission was co-authored with Yuan Stevens, Sonja Solomun, Supriya Dwivedi, Sam Andrey and Dr. Momin Malik, who is on the panel with me today. In our submission, we identify five key problems with AIDA; however, for the purposes of my remarks, I will focus on three.
First, AIDA does not address the human rights risks that AI systems pose, which puts it out of step with the EU AI Act. The preamble should, at a minimum, acknowledge the well-established disproportionate impact that these systems have on historically marginalized groups, such as Black, Indigenous and racialized people, members of the LGBTQ community, economically disadvantaged people, people with disabilities and other equity-seeking communities in the country.
While the minister's proposed amendments provide a schedule of classes of systems that may be considered within the scope of the act, that is far from enough. Instead, AIDA should be amended to include a clear set of prohibitions on systems and practices that exploit vulnerable groups and cause harm to people's safety and livelihoods, akin to the EU AI Act's prohibition of systems that pose unacceptable risks.
A second issue we highlighted is that AIDA does not create an accountable oversight and enforcement regime for the AI market. In its current iteration, AIDA lacks provisions for robust, independent oversight. Instead, it proposes self-administered audits, conducted at the discretion of the Minister of Innovation when a contravention of the act is suspected.
While the act creates the position of AI commissioner, the commissioner is not an independent actor, as they are appointed by the minister and serve at the minister's discretion. This lack of independence creates a weak regulatory environment and thus fails to protect the Canadian population from algorithmic harms.
While the minister's proposed amendments provide investigative powers to the commissioner, that is far from enough. Instead, I believe that the commissioner should be a Governor in Council appointment and should be empowered to conduct proactive audits, receive complaints, administer penalties and propose regulations and industry standards. Enforcing the legislation should translate into the ability to prohibit, restrict, withdraw or recall AI systems that do not comply with comprehensive legal requirements.
Third, AIDA did not undergo any public consultations. This is a glaring issue at the root of many of the serious problems with the act. In its submission to INDU, the Assembly of First Nations reminds the committee that the federal government adopted the United Nations Declaration on the Rights of Indigenous Peoples Act action plan, which requires the government to ensure that “Respect for Indigenous rights is systematically embedded in federal laws and policies developed in consultation and cooperation with Indigenous peoples”. AIDA did not receive such consultation, which represents a failure of the government to uphold its commitment to Indigenous peoples.
To ensure that public consultations are at the core of AI governance in this country, the act should empower a parliamentary committee to review, revise and update AIDA whenever necessary, with public hearings conducted at regular intervals, ideally yearly, starting one year after AIDA comes into force. The Minister of Industry should be obliged to respond within 90 days to these committee reviews, including legislative and regulatory changes designed to remedy the deficiencies identified by the committee.
Furthermore, I support the inclusion of provisions that expand the reporting and review duties of the AI commissioner. These could include, but would not be limited to, the submission of annual reports to Parliament and the ability to draft special reports on urgent matters.
In conclusion, I believe that AI regulation needs to safeguard us against the rising number of algorithmic harms that these systems perpetuate; however, I don't think AIDA in its current state is up to that task. Instead, in line with the submissions and open letters submitted to the committee by civil society, I strongly recommend removing AIDA from Bill C-27 so that it can be improved through careful review and public consultations.
There are other problems I want to talk about, notably the exclusion of government institutions from the scope of the act.
I'm happy to answer questions regarding the proposed amendments made by the minister and expand on points I raised in my remarks.
Since I'm from Montreal, I'll be happy to answer your questions in French.
Thank you for your time.