Good afternoon, committee members. It is my honour to be here with you today.
The 55 national and international unions affiliated with the Canadian Labour Congress bring together three million workers in virtually all sectors, industries, occupations and regions of the country. We are grateful for the opportunity to speak to the artificial intelligence and data act, AIDA, enacted by Bill C-27.
Across sectors, industries and occupations, workers in Canada increasingly encounter AI applications in their work and employment. Many report that AI has the potential to improve and enrich their work. In certain instances, AI applications could reduce time and energy spent on routine tasks. This could free workers up to focus on more skill-intensive aspects of their jobs, or on directly serving the public.
However, workers are also concerned about the potential negative consequences for jobs, privacy rights, discrimination and workplace surveillance. Workers are troubled by the potential for displacement and job loss from AI. Workers in creative industries and the performing arts are concerned about control over, and compensation for, their images and work. Workers are concerned about the collection, use and sharing of their personal data. Workers and unions are concerned about the use of AI in hiring, discipline and human resource management functions. Almost every week, we hear from workers with real-life experience of the impact AI is already having on their jobs. AI systems carry serious risks of racial discrimination, gender discrimination, and labour and human rights violations.
The number one demand from Canada's unions is greater transparency, consultation and information sharing around the introduction of AI systems in workplaces and Canadian society. Unfortunately, AIDA falls short in this respect.
Our concerns about AIDA are as follows.
First, unions are troubled by the lack of public debate and broad consultation on regulating AI in Canada. We feel there should have been proper public debate prior to the drafting and introduction of AIDA.
Second, the major deficiency of AIDA is that it exempts government and Crown corporations. The Government of Canada is a leading adopter and promoter of AI. Despite this, AIDA provides no protection for public service workers, whose work and employment are affected by AI systems. Government is responsible for many high-impact AI systems for decision-making—from immigration and benefits claims to policing and military operations. AIDA should be expressly expanded to apply to all federal departments, agencies and Crown corporations, including national security institutions.
Third, the bill only requires measures to prevent harms caused by high-impact systems. It leaves the definition of “high-impact systems” to regulation. As well, it is silent on AI systems that can cause real harms and discrimination despite falling outside the classification of “high-impact”.
Fourth, AIDA contemplates a senior Innovation, Science and Economic Development Canada official acting as the AI and data commissioner. The commissioner should instead be an independent position: an office tasked with supervision and regulatory oversight should not be housed within the very department responsible for promoting the AI industry.
Fifth, while AIDA authorizes the minister to establish an advisory committee, we strongly believe the government must go much further than the current advisory council on artificial intelligence, established in 2019. The advisory council is dominated by industry and academic voices, with no participation from civil society, human rights advocacy organizations, unions and the public. The CLC urges the government to create a permanent representative advisory council that makes recommendations on research needs, regulatory matters, and the administration and enforcement of AIDA.
Finally, the purpose clause of the act should be strengthened. Currently, AIDA is intended in part “to prohibit certain conduct in relation to artificial intelligence systems that may result in serious harm to individuals or harm to their interests.” This should be revised to prohibit conduct that may result in harm, not just “serious harm”, and to cover harm to groups as well as individuals. As drafted, AIDA is focused on individual harms, not on societal risks, such as those to the environment or Canadian democracy.
In summary, the CLC believes there should be much more institutionalized transparency, information sharing and engagement around AI in the workplace and Canadian society.
Thank you. I welcome any questions the committee may have.