Chairman Lightbound and honourable members, happy Valentine's Day.
Thank you for the opportunity to come back and expand on my previous testimony to include concerns about the Artificial Intelligence and Data Act. AIDA's flaws in both process and substance are well documented by the expert witnesses. Subsequent proposals by the minister only reinforce my core recommendation that AIDA requires a complete restart. It needs to be sent back to the drawing board, but not for ISED to draft alone. Rushing to pass legislation so seriously flawed will only deepen citizens' fears about AI, because AIDA merely proves that policy-makers cannot effectively prevent the current and emerging harms of new technologies.
Focusing on existential harms that are unquantifiable, indeterminate and unidentifiable is buying into industry's gaslighting. Existential risk narratives divert attention from current harms such as mass surveillance, misinformation, and undermining of personal autonomy and fair markets, among others. From a high-level perspective, some of the foundational flaws with AIDA are the following.
One, it's anti-democratic. The government introduced its AI regulation proposal without any consultation with the public. As Professor Andrew Clement noted at your January 31 meeting, subsequent consultations have revealed exaggerated claims of meetings that still disproportionately rely on industry feedback over civil society.
Two, claims of AI benefits are not substantiated. A recent report on Quebec's AI ecosystem shows that Canada's current AI promotion is not yielding its stated economic outcomes. AIDA repeats many of industry's exaggerated claims that AI advancement can bring widespread societal benefits, but offers no substantiation.
References to support the minister's statement that “AI offers a multitude of benefits for Canadians” come from a single source: Scale AI, a program funded by ISED and the Quebec government. Rather than showing credible reports on how the projects identified have benefited many Canadians, the reference articles claiming benefits are simply announcements of recently funded projects.
Three, AI innovation is not an excuse for rushing regulation. Not all AI innovation is beneficial, as evidenced by the creation and spread of deepfake pornographic images of not just celebrities but also children. This is an important consideration, because we are being sold AIDA as a need to balance innovation with regulation.
Four, by contrast, the risk of harms is well documented yet unaddressed in the current proposal. AI systems have been shown to, among other things, facilitate housing discrimination, make racist associations, hide job listings from women that are shown to men, recommend longer prison sentences for visible minorities, and fail to accurately recognize the faces of dark-skinned women. There are countless additional incidents of harm, thousands of which are catalogued in the AI Incident Database.
Five, AIDA focuses excessively on the risk of harms to individuals rather than harms to groups or communities. AI-enabled misinformation and disinformation pose serious risks to election integrity and democracy.
Six, ISED is in a conflict of interest, and AIDA is its regulatory blank cheque. The ministry is advancing legislation and regulations intended to address the potentially serious harms of technical developments in AI while it is simultaneously investing in and vigorously promoting AI, including funding the AI projects of AIDA champions such as Professor Bengio. As Professor Teresa Scassa has shown in her research, the current proposal reflects not agility but a lack of substance and credibility.
Here are my recommendations.
Sever AIDA from Bill C-27 and start consultation in a transparent, democratically accountable process. Serious AI regulation requires policy proposals and an inclusive, genuine public consultation informed by independent, expert background reporting.
Give individuals the right to contest and object to AI affecting them, not just a right to algorithmic transparency.
The AI and data commissioner needs to be independent from the minister: an independent officer of Parliament with appropriate powers and adequate funding. Such an office would require a more serious commitment than the way our current Competition Bureau and privacy regulators are set up.
There are many more flawed parts of AIDA, all detailed in our Centre for Digital Rights submission to the committee, entitled “Not Fit for Purpose”. The inexplicable rush by the minister to ram through this proposal should be of utmost concern. Canada is at risk of being the first in the world to create the worst AI regulation.
With regard to large language models, current leading-edge LLMs incorporate hundreds of billions of parameters and are trained on data comprising trillions of tokens. Their behaviour is often unreliable and unpredictable, as AI expert Gary Marcus has documented well.
Training and operating LLMs is costly and compute-intensive, and the field is dominated by big tech: Microsoft, Google, Meta, etc. There is no transparency in how these companies build their models, nor in the risks those models pose. Explainability of LLMs is an unsolved problem, and it worsens as models grow larger. The claimed benefits of LLMs are speculative, but the harms and risks are well documented.
My advice for this committee is to take the time to study LLMs and to support that study with appropriate expertise. I am happy to help organize study forums, as I have strong industry and civil society networks. As with AIDA, understanding the full spectrum of technology's impacts is critical to a sovereign approach to crafting regulation that supports Canada's economy and protects our rights and freedoms.
Speaking of sovereign capacity, I would be remiss if I didn't say I was disappointed to see Minister Champagne court and offer support to Nvidia. Imagine if we had a ministry that threw its weight behind Canadian cloud and semiconductor companies so that we could advance Canada's economy and sovereignty.
Canadians deserve an approach to AI that builds trust in the digital economy, supports Canadian prosperity and innovation and protects Canadians, not only as consumers but also as citizens.
Thank you.