Evidence of meeting #111 for Industry, Science and Technology in the 44th Parliament, 1st Session. (The original version is on Parliament's site, as are the minutes.)

A recording is available from Parliament.

Also speaking

Momin M. Malik  Ph.D., Data Science Researcher, As an Individual
Christelle Tessono  Technology Policy Researcher, University of Toronto, As an Individual
Jim Balsillie  Founder, Centre for Digital Rights
Pierre Karl Péladeau  President and Chief Executive Officer, Quebecor Media Inc.
Jean-François Lescadres  Vice-President, Finance, Vidéotron ltée
Peggy Tabet  Vice-President, Regulatory Affairs, Quebecor Media Inc.

4:40 p.m.

Liberal

The Chair Liberal Joël Lightbound

Colleagues, good afternoon.

I call this meeting to order.

Welcome to meeting number 111 of the House of Commons Standing Committee on Industry and Technology.

Today's meeting is taking place in a hybrid format, pursuant to the Standing Orders.

Pursuant to the order of reference of Monday, April 24, 2023, the committee is resuming consideration of Bill C‑27, An Act to enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act and to make consequential and related amendments to other Acts.

I would like to welcome our witnesses.

We're meeting with Momin Malik, Ph.D. and data science researcher. He is speaking as an individual and is joining us by video conference.

We're also meeting with Christelle Tessono, a technology policy researcher at the University of Toronto. She too is joining us by video conference.

Lastly, we're meeting with Jim Balsillie, who is here in person and whom I would like to thank for coming to speak to the committee again.

I'll now give the floor to Mr. Malik for five minutes.

4:40 p.m.

Dr. Momin M. Malik Ph.D., Data Science Researcher, As an Individual

Mr. Chair and members of the committee, thank you for the opportunity to address you this afternoon.

My name is Momin Malik. I am a researcher working in health care AI, a lecturer at the University of Pennsylvania and a senior investigator in the Institute in Critical Quantitative, Computational, & Mixed Methodologies.

I did my Ph.D. at Carnegie Mellon University's School of Computer Science, where I focused on connecting machine learning and social science. Following that, I did a post-doctoral fellowship at the Berkman Klein Center for Internet & Society at Harvard University on the ethics and governance of AI.

My current research involves statistically valid AI fairness auditing, reproducibility in machine learning and translation from health care research to clinical practice.

For comments specifically on the current form, content and issues of the AI and data act, I will defer to my colleague Christelle Tessono, who was the lead author of the report submitted to the committee last year, to which I contributed. I will be able to answer questions related to technical and definitional issues around AI, on which I will focus my comments here.

In my work, I argue for understanding AI not in terms of what it appears to do, nor what it aspires to do, but rather how it does what it does. Thus, I propose talking about AI as the instrumental use of statistical correlations. For example, language models are built on how words occur together in sequences. Such correlations between words are at the core of these technologies, including large language models.

We all know the adage “correlation is not causation”. The innovation of AI that goes beyond what statistics has historically done is not to try to use correlations towards understanding and intervention, but instead to use them to try to automate processes. We now have models that can use these observed correlations between words to generate synthetic text.

Incidentally, curating the huge volumes of text needed to do this convincingly requires huge amounts of human curation, which companies have largely outsourced to poorly paid and exploitatively managed workers in the global south.

In this sense, AI systems can be like a stage illusion. They can impress us like a stage magician might by seemingly levitating, teleporting or conjuring a rabbit. However, if we look from a different angle, we see the support pole, the body double and the hidden compartment. If we look at AI models in extreme cases—things far from average—we similarly see them breaking down, not working and not being appropriate for the task.

The harms from the instrumental use of correlations, as in AI, have an important historical precedent in insurance and credit. For more than a century, the actuarial science industry has gathered huge amounts of data, dividing populations by age, gender, race, wealth, geography, marital status and so on, taking average lifespans and, on that basis, making decisions about whether to offer, for example, life insurance policies and at what rates.

There is a long history. I know the U.S. context best. For example, in the 1890s, insurance companies in Massachusetts were not offering life insurance policies to Black citizens, citing shorter lifespans. This was directly after emancipation. The practice was rejected at the time, and, later on, the use of race became illegal. However, correlates of race, like a postal code, are still legal to use in the U.S., and from what I understand in Canada as well, and thus end up disadvantaging people who can often least afford to pay.

In general, those who are marginalized are most likely to have bad outcomes. We risk optimizing for a status quo that is unjust and further solidifying inequality when using correlations in this way.

Canada's health care system stands in distinct contrast to that of the U.S., something for which the country is justifiably proud. It is an example of collectivizing risk rather than, as private industry does, optimizing in ways that benefit industry best but may not benefit the public at large.

I encourage the committee to take this historical perspective, to reason out the ways in which AI can fail and can cause harm and, on that basis, to plan for regulation.

In areas critical to life, dignity and happiness, like health care, criminal justice and other areas, government regulation has a crucial role to play. Determining what problems exist and how regulation might address them will come best from listening to marginalized groups, from strong consultation with civil society and from adequate consultation with technical experts who can make connections in ways that are meaningful for the work of the committee.

Thank you for your time. I welcome your questions.

4:45 p.m.

Liberal

The Chair Liberal Joël Lightbound

Thank you.

I'll now give the floor to Ms. Tessono.

4:45 p.m.

Christelle Tessono Technology Policy Researcher, University of Toronto, As an Individual

Mr. Chair and members of the committee, thank you for inviting me to address you all this afternoon.

My name is Christelle Tessono, and I'm a technology policy researcher currently pursuing graduate studies at the University of Toronto. Over the course of my academic and professional career in the House of Commons, at Princeton University, and now with the Right2YourFace coalition and The Dais, I have developed expertise in a wide range of digital technology governance issues, most notably AI.

My remarks will focus on the AI and data act, and they build on the analysis submitted to INDU last year. This submission was co-authored with Yuan Stevens, Sonja Solomun, Supriya Dwivedi, Sam Andrey and Dr. Momin Malik, who is on the panel with me today. In our submission, we identify five key problems with AIDA; however, for the purposes of my remarks, I will be focusing on three.

First, AIDA does not address the human rights risks that AI systems cause, which puts it out of step with the EU AI Act. The preamble should, at a minimum, acknowledge the well-established disproportionate impact that these systems have on historically marginalized groups such as Black people, indigenous people, people of colour, members of the LGBTQ community, economically disadvantaged people, people with disabilities and other equity-seeking communities in the country.

While the minister's proposed amendments provide a schedule for classes of systems that may be considered in the scope of the act, that is far from enough. Instead, AIDA should be amended to have clear sets of prohibitions on systems and practices that exploit vulnerable groups and cause harms to people's safety and livelihoods, akin to the EU AI Act's prohibition on systems that cause unacceptable risks.

A second issue we highlighted is that AIDA does not create an accountable oversight and enforcement regime for the AI market. In its current iteration, AIDA lacks provisions for robust, independent oversight. Instead, it proposes self-administered audits at the discretion of the Minister of Innovation when a contravention of the act is suspected.

While the act creates the position of the AI commissioner, they are not an independent actor, as they are appointed by the minister and serve at their discretion. The lack of independence of the AI commissioner creates a weak regulatory environment and thus fails to protect the Canadian population from algorithmic harms.

While the minister's proposed amendments provide investigative powers to the commissioner, that is far from enough. Instead, I believe that the commissioner should be a Governor in Council appointment and be empowered to conduct proactive audits, receive complaints, administer penalties and propose regulations and industry standards. Enforcing legislation should translate into having the ability to prohibit, restrict, withdraw or recall AI systems that do not comply with comprehensive legal requirements.

Third, AIDA did not undergo any public consultations. This is a glaring issue at the root of the many serious problems with the act. In their submission to INDU, the Assembly of First Nations reminds the committee that the federal government adopted the United Nations Declaration on the Rights of Indigenous Peoples Act action plan, which requires the government to make sure that “Respect for Indigenous rights is systematically embedded in federal laws and policies developed in consultation and cooperation with Indigenous peoples”. AIDA did not receive such consultation, which is a failure of the government in its commitment to indigenous peoples.

To ensure that public consultations are at the core of AI governance in this country, the act should ensure that a parliamentary committee is empowered to have AIDA reviewed, revised and updated whenever necessary, and it should include public hearings conducted yearly or every few years, starting one year after AIDA comes into force. The Minister of Industry should be obliged to respond within 90 days to these committee reviews and include legislative and regulatory changes designed to remedy deficiencies identified by the committee.

Furthermore, I support the inclusion of provisions that expand the reporting and review duties of the AI commissioner. These could include, for example, the submission of annual reports to Parliament and the ability to draft special reports on urgent matters.

In conclusion, I believe that AI regulation needs to safeguard us against a rising number of algorithmic harms that these systems perpetuate; however, I don't think AIDA in its current state is up to that task. Instead, in line with submissions and open letters submitted to the committee by civil society, I highly recommend taking AIDA out of Bill C-27 to improve it through careful review and public consultations.

There are other problems I want to talk about, notably the exclusion of government institutions in the act.

I'm happy to answer questions regarding the proposed amendments made by the minister and expand on points I raised in my remarks.

Since I'm from Montreal, I'll be happy to answer your questions in French.

Thank you for your time.

4:50 p.m.

Liberal

The Chair Liberal Joël Lightbound

Thank you.

I'll now give the floor to Mr. Balsillie for five minutes.

4:55 p.m.

Jim Balsillie Founder, Centre for Digital Rights

Chairman Lightbound and honourable members, happy Valentine's Day.

Thank you for the opportunity to come back and expand on my previous testimony to include concerns about the artificial intelligence and data act. AIDA's flaws in both process and substance are well documented by the expert witnesses. Subsequent proposals by the minister only reinforce my core recommendation that AIDA requires a complete restart. It needs to be sent back to the drawing board, but not for ISED to draft alone. Rushing to pass legislation so seriously flawed will only deepen citizens' fears about AI, because AIDA merely proves that policy-makers can't effectively prevent current and emerging harms from emerging technologies.

Focusing on existential harms that are unquantifiable, indeterminate and unidentifiable is buying into industry's gaslighting. Existential risk narratives divert attention from current harms such as mass surveillance, misinformation, and undermining of personal autonomy and fair markets, among others. From a high-level perspective, some of the foundational flaws with AIDA are the following.

One, it's anti-democratic. The government introduced its AI regulation proposal without any consultation with the public. As Professor Andrew Clement noted at your January 31 meeting, the subsequent consultations have involved exaggerated claims about meetings and still disproportionately rely on industry feedback over civil society.

Two, claims of AI benefits are not substantiated. A recent report on Quebec's AI ecosystem shows that Canada's current AI promotion is not yielding stated economic outcomes. AIDA reiterates many of the exaggerated claims by industry that AI advancement can bring widespread societal benefits but offers no substantiation.

References to support the minister's statement that “AI offers a multitude of benefits for Canadians” come from a single source: Scale AI, a program funded by ISED and the Quebec government. Rather than showing credible reports on how the projects identified have benefited many Canadians, the reference articles claiming benefits are simply announcements of recently funded projects.

Three, AI innovation is not an excuse for rushing regulation. Not all AI innovation is beneficial, as evidenced by the creation and spread of deepfake pornographic images of not just celebrities but also children. This is an important consideration, because we are being sold AIDA as a need to balance innovation with regulation.

Four, by contrast, the risk of harms is well documented yet unaddressed in the current proposal. AI systems, among other features, have been shown to facilitate housing discrimination, make racist associations, exclude women from seeing job listings visible to men, recommend longer prison sentences for visible minorities, and fail to accurately recognize the faces of dark-skinned women. There are countless additional incidents of harm, thousands of which are catalogued in the AI incident database.

Five, AIDA focuses excessively on the risk of harms to individuals rather than harms to groups or communities. AI-enabled misinformation and disinformation pose serious risks to election integrity and democracy.

Six, ISED is in a conflict of interest, and AIDA is its regulatory blank cheque. The ministry is advancing legislation and regulations intended to address the potentially serious harms from technical developments in AI while it is investing in and vigorously promoting AI, including funding AI projects for champions of AIDA such as Professor Bengio. As Professor Teresa Scassa has shown in her research, the current proposal is not about agility but about a lack of substance and credibility.

Here are my recommendations.

Sever AIDA from Bill C-27 and start consultation in a transparent, democratically accountable process. Serious AI regulation requires policy proposals and an inclusive, genuine public consultation informed by independent, expert background reporting.

Give individuals the right to contest and object to AI affecting them, not just a right to algorithmic transparency.

The AI and data commissioner needs to be independent from the minister: an independent officer of Parliament with appropriate powers and adequate funding. Such an office would require a more serious commitment than the one reflected in how our current Competition Bureau and privacy regulators are set up.

There are many more flawed parts of AIDA, all detailed in our Centre for Digital Rights submission to the committee, entitled “Not Fit for Purpose”. The inexplicable rush by the minister to ram through this proposal should be of utmost concern. Canada is at risk of being the first in the world to create the worst AI regulation.

With regard to large language models, current leading-edge LLMs incorporate hundreds of billions of parameters in their models, based on training data with trillions of tokens. Their behaviour is often unreliable and unpredictable, as AI expert Gary Marcus is documenting well.

The cost and compute requirements of LLMs are very high, and the field is dominated by big tech: Microsoft, Google, Meta and so on. There is no transparency in how these companies build their models, nor in the risks they pose. Explainability of LLMs is an unsolved problem, and it gets worse as models grow. The claimed benefits of LLMs are speculative, but the harms and risks are well documented.

My advice for this committee is to take the time to study LLMs and to support that study with appropriate expertise. I am happy to help organize study forums, as I have strong industry and civil society networks. As with AIDA, understanding the full spectrum of technology's impacts is critical to a sovereign approach to crafting regulation that supports Canada's economy and protects our rights and freedoms.

Speaking of sovereign capacity, I would be remiss if I didn't say I was disappointed to see Minister Champagne court and offer support to Nvidia. Imagine if we had a ministry that threw its weight behind Canadian cloud and semiconductor companies so that we could advance Canada's economy and sovereignty.

Canadians deserve an approach to AI that builds trust in the digital economy, supports Canadian prosperity and innovation and protects Canadians, not only as consumers but also as citizens.

Thank you.

5 p.m.

Liberal

The Chair Liberal Joël Lightbound

Thank you very much, Mr. Balsillie.

To start the discussion, I'll turn it over to Mr. Perkins for six minutes.

5 p.m.

Conservative

Rick Perkins Conservative South Shore—St. Margarets, NS

Thank you, Mr. Chair.

Thank you, witnesses.

I'd like to start my questions with Mr. Balsillie.

You're a unique—in my mind—successful entrepreneur who's in this space, the technology space. Everyone, I think, knows what you created, invented and built with BlackBerry, but you're not unusual because of that, although that was amazing; you're unusual because you actually put your capital into trying to improve public policy, with a lot of time and effort to do that. I want to thank you for that.

You've been talking about the surveillance economy and personal privacy data breaches by big tech—Facebook, for example, on numerous occasions—for quite a while. When did you start talking about this?

5 p.m.

Founder, Centre for Digital Rights

Jim Balsillie

I've been working on digital frameworks since I started commercializing ideas globally, because I learned that the game is won and lost at the intersection of public policy frameworks and private firms' activities. It's the marriage of those two things.

More specifically, on the surveillance economy, I wrote a large piece for The Globe and Mail in 2015 that really turned the narrative away from what I would call our outdated approaches, and then spoke more publicly on the Sidewalk Labs project to privatize government in Toronto in 2017. So, specifically on surveillance, it was 2015 and 2017, but on intangibles, it's been 25 years.

5 p.m.

Conservative

Rick Perkins Conservative South Shore—St. Margarets, NS

When you appeared before, it was at the ethics committee a Parliament or two ago on this issue about either the Toronto initiative or a major data breach by Facebook, wasn't it? I can't remember which it was.

5 p.m.

Founder, Centre for Digital Rights

Jim Balsillie

Yes, and I give real credit to Bob Zimmer, Nate Erskine-Smith and Charlie Angus, who led a cross-partisan approach in saying that if we don't address these issues, we're going to pay a security price, a social price and an economic price. I found that a very constructive interplay with the committee in being able to participate as a witness.

5 p.m.

Conservative

Rick Perkins Conservative South Shore—St. Margarets, NS

You were doing this at a time when you were chairing a Crown foundation known as SDTC. Is that correct?

5 p.m.

Founder, Centre for Digital Rights

5 p.m.

Conservative

Rick Perkins Conservative South Shore—St. Margarets, NS

Did the government ever push back on you personally for doing that, either the minister or his staff, while you were in that Governor in Council appointment role?

5 p.m.

Founder, Centre for Digital Rights

Jim Balsillie

I only got it indirectly. I didn't have anyone address me directly on these issues. I was trying to explain that the initiative not only would undermine civil liberties but was foundationally undermining the opportunity for our domestic smart city companies at a time when the priority was to transition to the green economy. You need these companies to grow, and your policy apparatus would undermine their prospects, as well as civil liberties.

5:05 p.m.

Conservative

Rick Perkins Conservative South Shore—St. Margarets, NS

Were you unaware, when Leah Lawrence testified before this committee, that the government had asked her to see if you could stop speaking publicly on this, and that you then ended up removed as the chair of SDTC?

5:05 p.m.

Founder, Centre for Digital Rights

Jim Balsillie

Leah Lawrence never said that to me. Nobody ever told me directly to stop.

5:05 p.m.

Conservative

Rick Perkins Conservative South Shore—St. Margarets, NS

So you saw her testimony.

5:05 p.m.

Founder, Centre for Digital Rights

Jim Balsillie

I did, yes. That's the first I heard that they had been telling her, “Get him to quit it.”

5:05 p.m.

Conservative

Rick Perkins Conservative South Shore—St. Margarets, NS

That surprised you, obviously, I would think.

5:05 p.m.

Founder, Centre for Digital Rights

5:05 p.m.

Conservative

Rick Perkins Conservative South Shore—St. Margarets, NS

If you saw some of that testimony and how it related to the digital economy and to what you were trying to achieve in cleaning up SDTC and our technology sector, could you table with this committee a written summary of your experience and of what happened at SDTC?

5:05 p.m.

Founder, Centre for Digital Rights

Jim Balsillie

I'd be happy to.

5:05 p.m.

Conservative

Rick Perkins Conservative South Shore—St. Margarets, NS

Okay.

In your statement, you said—

5:05 p.m.

Liberal

Ryan Turnbull Liberal Whitby, ON

I have a point of order, Chair.

I'm sorry, but I have to interrupt.