Thank you, members of the committee, for the opportunity to speak with you today.
My name is Alexandre Shee. I'm the incoming co-chair of the Future of Work working group of the Global Partnership on AI, of which Canada is a member state. I'm an executive at a multinational AI company, a lawyer in good standing, and an investor and adviser to AI companies, as well as the proud father of two boys.
Today, I'll speak exclusively on part 3 of the bill, which is the artificial intelligence and data act, as well as the recently proposed amendments.
I believe we should pass the act. However, it needs significant amendments beyond those currently proposed. In fact, the act fails to address a key portion of the AI supply chain—data collection, annotation and engineering—which represents 80% of the work done in AI. This 80% of the work is manually done by humans.
Failing to require disclosures on the AI supply chain will lead to bias, low-quality AI models and privacy issues. More importantly, it will lead to the violation of the human rights of millions of people on a daily basis.
Recent amendments have addressed some of the deficiencies in the act by including certain steps in the AI supply chain and by requiring the preservation of records of the data used. However, the law still does not treat the AI development process as a supply chain, one involving millions of people who power AI systems. No disclosure mechanism is in place to ensure that Canadians can make informed decisions about the AI systems they choose: whether those systems are fair and high-quality, and whether they respect human rights.
If I unpack that statement, there are three takeaways that I hope to leave you with. The first is that the act as drafted does not regulate the largest portion of AI systems: data collection, annotation and engineering. The second is that failing to address this fails to protect human rights for millions of people, including vulnerable Canadians. In turn, this leads to low-quality artificial intelligence systems. The third is that the act can help protect those involved in the AI supply chain and empower people to choose high-quality and fair artificial intelligence solutions if it is enacted with disclosure requirements.
Let me dive deeper into each of these three points, with additional detail on why these considerations are relevant for the future iteration of the act.
Self-regulation in the AI supply chain is not working. The lack of a regulatory framework and disclosures of the data collection, annotation and engineering aspects of the AI supply chain is having a negative impact on millions of lives today. These people are mostly in the global south, but they also include vulnerable Canadians.
There is currently a race to the bottom, meaning that basic human rights are being disregarded to diminish costs. In a recent, well-documented investigative journalism piece in Wired magazine, entitled “Underage Workers Are Training AI” and published on November 15, 2023, a 15-year-old Pakistani child describes working on AI training tasks that pay as little as one cent. Even in higher-paying jobs, the unpaid research time required means he must work between five and six hours to complete an hour of real-time work, all to earn two dollars. He is quoted as saying, “It’s digital slavery”. His statement echoes similar reporting by journalists, as well as in-depth studies of the AI supply chain by academics around the world and by international organizations such as the Global Partnership on Artificial Intelligence.
However, while these abuses are well documented, they are currently part of the back end of the AI development process, and Canadian firms, consumers and governments interacting with AI systems do not have a mechanism to make informed choices about abuse-free systems. Requiring disclosures—and eventually banning certain practices—will help to avoid a race to the bottom in the data enrichment and validation industry, and enable Canadians to have better, safer AI that does not violate human rights.
If we borrow from Bill S-211, Canada’s recently passed “modern slavery act”, we see that creating disclosure obligations helps foster more resilient supply chains and offers Canadians products free from forced or child labour.
Transparent and accountable supply chains have helped protect human rights in countless industries, including the garment industry, the diamond industry and agriculture, to name only a few. The information requirements in the act could include: how data is collected and/or labelled; a general description of labelling instructions and whether the labelling was done by identifiable employees or contractors; procurement practices that incorporate human rights standards; and validation that steps have been taken to ensure no child or forced labour was used in the process.
Companies already prepare instructions for all aspects of the AI supply chain; the disclosure would formalize what is already common practice. Furthermore, there are options in the AI supply chain that create high-quality jobs that respect human rights. The Canadian government should immediately require these disclosures as part of its own procurement of AI systems.
Having a disclosure mechanism would also be a complement to the audit authority bestowed on the minister under the act. Creating equivalent reporting obligations on the AI supply chain would augment the current law and ensure that quality, transparency and respect of human rights are part of AI development. It would allow Canadians to benefit from innovative solutions that are better, safer and aligned with our values.
I hope you will consider the proposal today. You can have a positive impact on millions of lives.
Thank you.