Thank you, Mr. Chair and committee members.
I am Andrew Clement, professor emeritus in the faculty of information at the University of Toronto. As a computer scientist who started in the field of artificial intelligence, I have been researching the computerization of society and its social implications since the 1970s.
I'm one of three pro bono contributors to the Centre for Digital Rights' report on Bill C-27 that Jim Balsillie spoke to you about here.
I will address the artificial intelligence and data act, AIDA, exclusively in my remarks.
AI, better interpreted as algorithmic intensification, has a long history. For all its benefits, AI misapplication has already hurt many people, from well before the current acceleration around deep neural networks.
Unfortunately, the loudest voices driving public fear are coming from the tech giant leaders, who are well known for their anti-government and anti-regulation attitudes. These “move fast and break things” figures are now demanding urgent government intervention while jockeying for industry dominance. This is distracting and demands our skepticism.
Judicious AI regulation focused on actual risks is long overdue, and self-regulation won't work.
Minister Champagne wants to make Canada a world leader in AI governance. That's a fine goal, but it's as if we are in an international Grand Prix. Apparently, to allay the fears of Canadians, he abruptly entered a made-in-Canada contender. Beyond the proud maple leaf and his smile at the wheel, his AIDA vehicle barely had a chassis and an engine. He insisted he was simply being “agile”, promising that if you just helped propel him over the finish line, all would be fixed through the regulations.
As Professor Scassa has pointed out, there's no prize for first place. Good governance isn't even a race but an ongoing, mutual learning project. With so much uncertainty about the promise and perils of AI, public consultation informed by expertise is a vital precondition for establishing a sound legal foundation. Canada also needs to carefully study developments in the EU, U.S. and elsewhere before settling on its own approach.
As many witnesses have pointed out, AIDA has been deeply flawed in substance and process from the get-go. Jamming it onto the overdue modernization of PIPEDA made it much harder to give that and the AI legislation the thorough review they each merit.
The minister initially gave himself sweeping regulatory powers, putting him in a conflict of interest with his mandate to advance Canada's AI industry. His recent amendments don't go anywhere near far enough to achieve the necessary regulatory independence.
Minister Champagne claimed to you that AIDA offers a long-lasting framework based on principles. It does not.
The most serious flaw is the absence of any public consultation, either with experts or with Canadians more generally, before or since AIDA's introduction. This means the bill has not benefited from a suitably broad range of perspectives. Most fundamentally, it lacks democratic legitimacy, which can't be repaired by the current parliamentary process.
The minister appears to be sensitive to this issue. As a witness here, he bragged that ISED held “more than 300 meetings with academics, businesses and members of civil society regarding this bill.” In his subsequent letter providing you with a list of those meetings, he claimed that, “We made a particular effort to reach out to stakeholders with a diversity of perspectives....”
My analysis of this list of meetings, sent to you on December 6, shows that this claim is misleading. Overwhelmingly, ISED held meetings with business organizations. There were 223 meetings in all, of which 36 were with U.S. tech giants. Only nine were with Canadian civil society organizations.
Most striking, by their complete absence, are organizations representing those whom AIDA is claimed to protect most, i.e., organizations whose members are likely to be directly affected by AI applications. These are citizens, Indigenous peoples, consumers, immigrants, parents, children, marginalized communities, and workers or professionals in health care, finance, education, manufacturing, agriculture, the arts, media, communication and transportation: all the areas where AI is claimed to have benefits.
AIDA breaks democratic norms in ways that can't be fixed through amendments alone. It should therefore be sent back for proper redrafting. My written brief offers suggestions for how this could be accomplished in an agile manner, within the timetable originally projected for AIDA.
However, I realize that the shared political will for pursuing this option may not currently exist. If you decide that this AIDA is to proceed, then I urge you to repair its many serious flaws as well as you can, at the very least in the following eight areas:
First, sever AIDA from parts 1 and 2 of Bill C-27 so that each of the sub-bills can be given proper attention.
Second, position the AI and data commissioner at arm's length from ISED, appropriately staffed and adequately funded.
Third, provide AIDA with a mandatory review cycle, requiring any renewal or revision to be evidence-based, expert-informed and independently moderated, with genuine public consultation. This should involve proactive outreach to stakeholders not included in ISED's Bill C-27 meetings to date, starting with the consultations on the regulations. I'm reminded here of the familiar saying that if you're not welcome at the table, you should check that you're not on the menu.
Fourth, expand the scope of harms beyond individuals to include collective and systemic harms, as you've heard from others.
Fifth, base key requirements on robust, widely accepted principles in the legislation and not solely in regulations or schedules.
Sixth, ground such a principles-based framework explicitly in the protection of fundamental human rights and compliance with international humanitarian law, in keeping with the Council of Europe's pending treaty, in which Canada has been involved.
Seventh, replace the inappropriate concept of high-impact systems with a fully tiered, risk-based scheme, as the EU AI Act does.
Eighth, tightly specify a set of unacceptably high-risk systems for prohibition.
I could go on.
Thank you for your attention. I welcome your questions.