Evidence of meeting #107 for Industry, Science and Technology in the 44th Parliament, 1st Session. (The original version is on Parliament's site, as are the minutes.)

A recording is available from Parliament.

Witnesses

Vass Bednar  Executive Director, Master of Public Policy in Digital Society Program, McMaster University, As an Individual
Andrew Clement  Professor Emeritus, Faculty of Information, University of Toronto, As an Individual
Nicolas Papernot  Assistant Professor and Canada CIFAR AI Chair, University of Toronto and Vector Institute, As an Individual
Leah Lawrence  Former President and Chief Executive Officer, Sustainable Development Technology Canada, As an Individual

5:50 p.m.

Liberal

The Chair Liberal Joël Lightbound

Colleagues and friends, I call this meeting to order.

Welcome to meeting number 107 of the House of Commons Standing Committee on Industry and Technology.

Today's meeting is taking place in a hybrid format, pursuant to the Standing Orders.

Before I introduce our witnesses, we have one quick piece of business, the election of the second vice‑chair. Pursuant to Standing Order 106(2), the second vice‑chair must be a member of an opposition party other than the official opposition.

I am now prepared to receive motions for the second vice‑chair. Can someone submit Mr. Garon's name?

Colleagues, I need someone to....

It's Mr. Bittle.

It has been moved by Mr. Bittle that Jean‑Denis Garon be elected as second vice‑chair of the committee.

Since there are no other motions, do I have the unanimous consent of the committee to elect Mr. Garon as second vice‑chair?

5:50 p.m.

Some hon. members

Agreed.

5:50 p.m.

Liberal

The Chair Liberal Joël Lightbound

(Motion agreed to)

5:50 p.m.

Liberal

The Chair Liberal Joël Lightbound

Mr. Garon, congratulations on your election as second vice‑chair. Welcome. You have great responsibilities to take on, because Mr. Lemire has been very helpful in his years on the committee. He was a very good parliamentarian, but I'm sure you'll be up to the task. It's a pleasure to have you with us.

Before moving on to Bill C‑27, I must also submit to the committee a supplementary budget proposal for our study of Bill C‑27. It requests an amount of $6,000, and that amount is broken down in the proposal.

Do I have the unanimous consent of the committee to adopt this budget proposal?

5:50 p.m.

Some hon. members

Agreed.

5:50 p.m.

Liberal

The Chair Liberal Joël Lightbound

(Motion agreed to)

5:50 p.m.

Liberal

The Chair Liberal Joël Lightbound

That's wonderful.

Pursuant to the order of reference of Monday, April 24, 2023, the committee is resuming consideration of Bill C-27, an act to enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act and to make consequential and related amendments to other acts.

I would now like to welcome the witnesses. We have Vass Bednar, executive director of the master of public policy in digital society program at McMaster University, who is joining us by videoconference. Also, from the University of Toronto, we have Andrew Clement, professor emeritus, Faculty of Information, who is also joining us by videoconference, as well as Nicolas Papernot, assistant professor and CIFAR AI chair.

Thank you to all three of you for being here.

I want to apologize for our late start. We had about 10 votes in the House of Commons. Because of the delay, we have until about 7 p.m. for testimony and questions.

Without further ado, we will start with you, Madam Bednar, for five minutes.

5:50 p.m.

Vass Bednar Executive Director, Master of Public Policy in Digital Society Program, McMaster University, As an Individual

Thank you, and good evening.

My name is Vass Bednar. You heard that I run the master of public policy program in digital society at McMaster University, where I'm an adjunct professor of political science. I engage with Canada's policy community broadly as a senior fellow at CIGI, a fellow with the Public Policy Forum, and through my newsletter “Regs to Riches”. I'm also a member of the provincial privacy commissioner's strategic ad hoc advisory committee.

Thank you for the opportunity to appear. I appreciate the work of this committee. I agree there is an urgent need to modernize Canada's legislative framework so that it is suited to the digital age. I also want to note that I have been on a sabbatical of sorts for the past year and have not followed every element of the debate on this bill in detail. That made me a little anxious about appearing, but then I remembered that I am not on the committee; I am appearing before the committee, so I decided to be as constructive as I could be today.

As we consider this framework for privacy, consumer protection and artificial intelligence, I think we are fundamentally negotiating trust in our digital economy, what that trust looks like for citizens, and what responsible innovation is supposed to look like. That's what gets me excited about the direction we're going.

Very briefly, on the privacy side, it has been well said that this is not the most consumer-centric privacy legislation compared with what we see in other jurisdictions. It does provide clarity for businesses large and small, which is good, especially for small businesses. I don't think the requirements for smaller businesses are overly onerous.

The elements on consent have been well debated. Zooming in on the phrase "beyond what is necessary", I think, is a major hinge of the debate. Who gets to decide what is necessary, and when? The precedent of consent, of course, is critical. I think about a future where people experiencing our online world, or exchanging information with businesses, have far more autonomy as consumers.

For example, there's being able to search without self-preferencing algorithms that dictate the order of what you see; seeing prices that aren't tailored to you, or even knowing there is a personalized dynamic pricing situation; accessing discounts through loyalty programs, without trading your privacy to use them; or simple things like returning to an online store that you've shopped at before without seeing these so-called special offers based on your browsing or purchase history.

That tension, I think, is probably going to be core to our continued conversation around that need for organizations to collect.

On algorithmic collusion, recent reporting in The New Statesman elaborated on how the prices of most goods are now set not by humans but by automated processes designed to maximize their owners' gains. There is a big academic conversation about the line between what is exploitative and what is efficient. Our evolving competition law may soon begin to consider algorithmic collusion, which may also garner more attention through advancements on Bill C-27, as it prompts consideration of the effects of algorithmic conduct in the public interest.
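[Editor's note: the pricing dynamic the witness describes can be sketched with a deliberately simple, entirely hypothetical toy. Real algorithmic collusion involves learning systems far more complex than this; the price levels and the "trigger" rule below are invented purely for illustration.]

```python
# Hypothetical illustration: two pricing bots that never communicate can
# sustain a supra-competitive price with a simple "trigger" rule: keep the
# price high while the rival did, and punish any undercut with a price war.
HIGH, LOW = 10, 4  # invented price levels: "collusive" vs. "competitive"

def trigger_bot(rival_last_price):
    """Price high if the rival held the high price last round, else punish."""
    return HIGH if rival_last_price >= HIGH else LOW

def simulate(rounds=20):
    a, b = HIGH, HIGH  # both bots start at the high price
    history = []
    for _ in range(rounds):
        a, b = trigger_bot(b), trigger_bot(a)
        history.append((a, b))
    return history

# Neither bot ever undercuts, so the high price persists in every round,
# with no human and no explicit agreement anywhere in the loop.
history = simulate()
```

The design point the toy makes: the high price is an equilibrium of two independent rules, which is why "exploitative versus efficient" is hard to adjudicate from conduct alone.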

Again, very briefly on the AI side, I agree with others that the AI commissioner should be more empowered, perhaps as an officer of Parliament. That office needs to be properly funded in order to do this work. Note that the provinces may want to create their own AI frameworks as a way to solve for some of the ambiguities or intersections. We should embrace and celebrate that in a Canadian federalist context.

In the spirit of being constructive and forward-looking, I wonder if we should be taking more inspiration from very familiar policy areas of labelling and manufacturing, just to achieve more disclosure. For the layer of transparency that's proposed for those who manage a general purpose AI system, we should ensure that individuals can identify AI-generated content. This is also critical for the outputs of any algorithmic system.

We probably need either a nutrition-facts-label approach to privacy or a registration requirement. I would hope we can avoid onerous audits, or spurring strange secondary economies that sprout up and may not be as necessary as they seem. Having to register novel AI systems with ISED, so the government can keep tabs on potential harms and on the justifications for their entering the Canadian market, would be helpful.

I will wrap up in just a moment.

Of course, we should all be thinking about how this legislation will work with other policy levers, especially in light of the recently struck Digital Regulators Forum.

Much of my work is rooted in competition issues, such as market fairness and freedom. I note that in the U.S., the FTC held a technology summit on artificial intelligence just last week. There it was noted, “we see a tech ecosystem that has concentrated...power in the hands of a small number of firms, while entrenching a business model built on constant surveillance of consumers.” Canadian policy people need to be more honest about connecting these dots. We should be doing more to question that core business model and ensure we're not enshrining it, going forward.

I have a final, very quick worry about productivity, which I know everyone is thinking about.

I have a concern that our productivity crisis in Canada will fundamentally act, whether implicitly or explicitly, to discourage regulation of any kind over the phantom or zombie risk of impeding this elusive thing we call innovation. I want to remind all of you that smart regulation clarifies markets and levels the playing field.

Thanks for having me.

5:55 p.m.

Liberal

The Chair Liberal Joël Lightbound

Thank you very much, Ms. Bednar.

I'll now give the floor to Professor Clement.

5:55 p.m.

Professor Andrew Clement Professor Emeritus, Faculty of Information, University of Toronto, As an Individual

Thank you, Mr. Chair and committee members.

I am Andrew Clement, professor emeritus in the faculty of information at the University of Toronto. As a computer scientist who started in the field of artificial intelligence, I have been researching the computerization of society and its social implications since the 1970s.

I'm one of three pro bono contributors to the Centre for Digital Rights' report on C-27 that Jim Balsillie spoke to you about here.

I will address the artificial intelligence and data act, AIDA, exclusively in my remarks.

AI, better interpreted as algorithmic intensification, has a long history. For all of its benefits, AI misapplication has hurt many people, since well before the current acceleration around deep neural networks.

Unfortunately, the loudest voices driving public fear are coming from the tech giant leaders, who are well known for their anti-government and anti-regulation attitudes. These “move fast and break things” figures are now demanding urgent government intervention while jockeying for industry dominance. This is distracting and demands our skepticism.

Judicious AI regulation focused on actual risks is long overdue and self-regulation won't work.

Minister Champagne wants to make Canada a world leader in AI governance. That's a fine goal, but it's as if we are in an international Grand Prix. Apparently, to allay the fears of Canadians, he abruptly entered a made-in-Canada contender. Beyond the proud maple leaf and his smiling at the wheel, his AIDA vehicle barely had a chassis and an engine. He insisted he was simply being “agile”, promising that if you just help to propel him over the finish line, all would be fixed through the regulations.

As Professor Scassa has pointed out, there's no prize for first place. Good governance isn't even a race but an ongoing, mutual learning project. With so much uncertainty about the promise and perils of AI, public consultation informed by expertise is a vital precondition for establishing a sound legal foundation. Canada also needs to carefully study developments in the EU, U.S. and elsewhere before settling on its own approach.

As many witnesses have pointed out, AIDA has been deeply flawed in substance and process from the get-go. Jamming it on to the overdue modernization of PIPEDA made it much harder to give that and the AI legislation the thorough review they each merit.

The minister initially gave himself sweeping regulatory powers, putting him in a conflict of interest with his mandate to advance Canada's AI industry. His recent amendments don't go anywhere near far enough to achieve the necessary regulatory independence.

Minister Champagne claimed to you that AIDA offers a long-lasting framework based on principles. It does not.

The most serious flaw is the absence of any public consultation, with experts or with Canadians more generally, before or since AIDA's introduction. This means the act has not benefited from a suitably broad range of perspectives. Most fundamentally, it lacks democratic legitimacy, which cannot be repaired by the current parliamentary process.

The minister appears to be sensitive to this issue. As a witness here, he bragged that ISED held “more than 300 meetings with academics, businesses and members of civil society regarding this bill.” In his subsequent letter providing you with a list of those meetings, he claimed that, “We made a particular effort to reach out to stakeholders with a diversity of perspectives....”

My analysis of this list of meetings, sent to you on December 6, shows that this is misleading. Overwhelmingly, ISED held meetings with business organizations. There were 223 meetings in all, of which 36 were with U.S. tech giants. Only nine meetings were with Canadian civil society organizations.

Most striking by their complete absence are any organizations representing those that AIDA is claimed to protect most, i.e., organizations whose members are likely to be directly affected by AI applications. These are citizens, indigenous peoples, consumers, immigrants, parents, children, marginalized communities, and workers or professionals in health care, finance, education, manufacturing, agriculture, the arts, media, communication, transportation—all of the areas where AI is claimed to have benefits.

AIDA breaks democratic norms in ways that can't be fixed through amendments alone. It should therefore be sent back for proper redrafting. My written brief offers suggestions for how this could be accomplished in an agile manner, within the timetable originally projected for AIDA.

However, I realize that the shared political will for pursuing this option may not currently be achievable. If you decide that this AIDA is to proceed, then I urge you to repair its many serious flaws as well as you can in the following eight areas at the very least:

First, sever AIDA from parts 1 and 2 of Bill C‑27 so that each of the sub-bills can be given proper attention.

Second, position the AI and data commissioner at arm's length from ISED, appropriately staffed and adequately funded.

Third, provide AIDA with a mandatory review cycle, requiring any renewal or revision to be evidence-based, expert-informed and independently moderated, with genuine public consultation. This should involve proactive outreach to stakeholders not included in ISED's Bill C‑27 meetings to date, starting with the consultations on the regulations. I'm reminded here of the familiar saying that if you're not welcome at the table, you should check that you're not on the menu.

Fourth, expand the scope of harms beyond individuals to include collective and systemic harms, as you've heard from others.

Fifth, base key requirements on robust, widely accepted principles in the legislation itself, not solely in regulations or schedules.

Sixth, ground such a principles-based framework explicitly in the protection of fundamental human rights and compliance with international humanitarian law, in keeping with the Council of Europe's pending treaty, which Canada has been involved with.

Seventh, replace the inappropriate concept of high-impact systems with a fully tiered, risk-based scheme, as the EU AI Act does.

Eighth, tightly specify a set of unacceptably high-risk systems for prohibition.

I could go on.

Thank you for your attention. I welcome your questions.

6 p.m.

Liberal

The Chair Liberal Joël Lightbound

Thank you very much, Professor Clement.

I'll now give the floor to Professor Papernot.

6 p.m.

Professor Nicolas Papernot Assistant Professor and Canada CIFAR AI Chair, University of Toronto and Vector Institute, As an Individual

Thank you for inviting me to appear here today. I am an assistant professor of computer engineering and computer science at the University of Toronto, a faculty member at the Vector Institute, where I hold a Canada CIFAR AI chair, and a faculty affiliate at the Schwartz Reisman Institute.

My area of expertise is at the intersection of computer security, privacy and artificial intelligence.

I will first comment on the consumer privacy protection act proposed in Bill C‑27. The arguments I'm going to present are the result of discussions with my colleagues, Professors Lisa Austin, David Lie and Aleksandar Nikolov.

I do not believe that the act in its current form creates the right incentives for adoption of privacy-preserving data analysis standards. Specifically, the act's reliance on de-identification as a privacy protection tool is misplaced. For example, as you know, the act allows organizations to disclose personal information to certain other organizations for socially beneficial purposes, provided the information is de-identified.

As a researcher in this field, I would say that de-identification creates a false sense of security. Indeed, we can use algorithms to find patterns in data, even when steps have been taken to hide those patterns.

For instance, the state of Victoria in Australia released public transit data that was de-identified by replacing each traveller's smart card ID with a unique random ID. The logic was that no IDs means no identities. However, researchers showed that mapping their own trips, where they tapped on and off public transit, allowed them to reidentify themselves. Equipped with that knowledge, they then learned the random IDs assigned to their colleagues. Once they had knowledge of their colleagues' random IDs, they could find out about any other trip—weekend trips, doctor visits—all things that most would expect to be kept private.
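[Editor's note: the attack pattern in this example can be sketched in a few lines. The records, names and stops below are invented for illustration; this is not the actual Victoria dataset.]

```python
# Toy reconstruction of the re-identification attack: each traveller's card
# ID is replaced with a random ID, but trip patterns survive as fingerprints.
import secrets

# Hypothetical raw records: (card_id, stop, day)
raw_trips = [
    ("alice", "Flinders St", "Mon"), ("alice", "Carlton", "Mon"),
    ("alice", "Clinic", "Sat"),
    ("bob", "Flinders St", "Mon"), ("bob", "Docklands", "Tue"),
]

# "De-identify" by mapping each card to a random ID, as Victoria did.
pseudo = {card: secrets.token_hex(4) for card in {c for c, _, _ in raw_trips}}
released = [(pseudo[c], stop, day) for c, stop, day in raw_trips]

def reidentify(known_trips, released):
    """Return the random IDs whose trip sets contain everything we know."""
    by_id = {}
    for rid, stop, day in released:
        by_id.setdefault(rid, set()).add((stop, day))
    return [rid for rid, trips in by_id.items() if known_trips <= trips]

# Knowing just two of her own trips is enough to single out Alice's ID,
# which then exposes her entire released history, including the clinic visit.
alice_ids = reidentify({("Flinders St", "Mon"), ("Carlton", "Mon")}, released)
```

The point of the sketch: removing identifiers does not remove the pattern, and anyone who knows a fragment of the pattern can recover the rest.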

As a researcher in this area, that doesn't surprise me.

Moreover, AI can automate finding these patterns.

With AI, such reidentification can happen for a large portion of individuals in the dataset. This makes the act problematic when trying to regulate privacy in an AI world.

Instead of de-identification, the technical community has embraced different approaches to privacy-preserving data analysis, such as differential privacy. Differential privacy has been shown to work well with AI and can provide demonstrable privacy guarantees, even if some things are already known about the data. It would have protected the colleagues' privacy in the example I gave earlier. Because differential privacy does not depend upon modifying personal information, there is a mismatch between what the act requires and emerging best technical practices.
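[Editor's note: a minimal sketch of what differential privacy looks like in practice, using the standard Laplace mechanism. The query and data below are invented for illustration.]

```python
# Instead of modifying records, differential privacy adds calibrated noise
# to the *answer* of a query, so any one person's presence or absence
# changes the output distribution only slightly.
import random

def dp_count(records, predicate, epsilon=1.0):
    """Differentially private count: true count plus Laplace(1/epsilon) noise.
    A counting query changes by at most 1 when one record is added or
    removed (sensitivity 1), so noise with scale 1/epsilon suffices."""
    true_count = sum(1 for r in records if predicate(r))
    # Difference of two Exp(epsilon) draws is a Laplace(0, 1/epsilon) sample.
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise

# Hypothetical query: how many travellers visited the clinic on Saturday?
trips = [("u1", "Clinic", "Sat"), ("u2", "Carlton", "Mon"), ("u3", "Clinic", "Sat")]
noisy = dp_count(trips, lambda t: t[1] == "Clinic" and t[2] == "Sat")
```

The key design point is that the noise is calibrated to the query's sensitivity rather than to the data, so the guarantee holds no matter what an attacker already knows, which is exactly where de-identification fails.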

I will now comment on the part of Bill C‑27 that proposes an artificial intelligence and data act. The original text was ambiguous as to the definition of an AI system and a high‑impact system. The amendments that were proposed in November seem to be moving in the right direction. However, the proposed legislation needs to be clearer with respect to data governance.

Currently, the act does not capture important aspects of data governance that can result in harmful AI systems. For example, improper care when curating data leads to a non-representative dataset. My colleagues and I have illustrated this risk with synthetic data used to train AI systems that generate images or text. If the output of these AI systems is fed back to train new AI systems, those new systems perform poorly. The analogy one might use is that a photocopy of a photocopy becomes unreliable.
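[Editor's note: the "photocopy of a photocopy" effect can be illustrated with a toy simulation. A sampled frequency table stands in for a generative model; all numbers are invented for illustration.]

```python
# Each generation, the next "model" is fit only on samples generated by the
# previous one. Rare token types that happen to draw zero samples can never
# come back, so the distribution's diversity only shrinks over generations.
import random

random.seed(42)
tokens = list(range(20))
weights = [1.0] * 20  # generation 0: all 20 token types occur

for generation in range(100):
    # "Train" the next model on 50 samples produced by the current one.
    sample = random.choices(tokens, weights=weights, k=50)
    weights = [sample.count(t) for t in tokens]  # refit on synthetic data

# Count how many of the original 20 token types survive 100 generations.
survivors = sum(1 for w in weights if w > 0)
print(survivors)
```

Because extinction is absorbing, the rare cases, which is often where already at-risk populations live in real datasets, are the first to disappear, which is the disparate-impact concern raised in the next paragraph.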

What's more, this phenomenon can disparately impact populations already at risk of being the subject of harmful AI biases, which can propagate discrimination. I would like to see broader considerations at the data curation stage captured in the act.

Coming back to the bill itself, I encourage you to think about producing support documents to help with its dissemination. AI is a very fast-paced field and it's not an exaggeration to say that there are new developments every day. As a researcher, it is important that I educate the future generation of AI talent on what it means to design responsible AI. In finalizing the bill, please consider plain language documents that academics and others can use in the classroom or laboratory. It will go a long way.

Lastly, since the committee is working on regulating artificial intelligence, I'd like to point out that the bill will have no impact if there are no more AI ecosystems to regulate.

When I chose Canada in 2018 over the other countries that tried to recruit me, I did so because Canada offered me the best possible research environment in which to do my work on responsible AI, thanks to the pan-Canadian AI strategy. Seven years into the strategy, AI funding in Canada has not kept pace. Other countries have larger funding for students and better computing infrastructure, both of which are needed to stay at the forefront of responsible AI research.

Thank you for your work, which lays the foundation for responsible AI. I thought it was important to highlight these few areas for improvement in the interest of artificial intelligence in Canada.

I look forward to your questions.

6:10 p.m.

Liberal

The Chair Liberal Joël Lightbound

Thank you very much.

To start the conversation, I'll yield the floor to MP Rempel Garner.

You have six minutes.

6:10 p.m.

Conservative

Michelle Rempel Garner Conservative Calgary Nose Hill, AB

Thank you, Mr. Chair.

Welcome to all of you.

I'll direct my questions to Dr. Papernot and Dr. Clement.

I'll scope my questions specifically on the AIDA component of the bill.

Since this bill was last debated at this committee, there have been several real-life examples, as you said, Dr. Papernot, where the lack of a regulatory structure or of application of current legislation has created ambiguity and potential social harm.

I'd like to begin with the issue of Canada's intimate image distribution laws and the fact that the Canadian Bar Association and many other legal professionals have said that Canada's existing laws may not adequately protect women, particularly in the distribution of deepfakes and deepnudes that have been put online.

Do you believe this bill provides a timeline or provisions that would protect Canadians in this regard?

I'll start with Dr. Clement.

6:10 p.m.

Prof. Andrew Clement

Thank you for that question.

I don't believe that it offers a timeline for the concerns you raise, but I'm reminded that there has been in the works for years now an online harms bill that has undergone extensive consultation, citizens' forums—

6:10 p.m.

Conservative

Michelle Rempel Garner Conservative Calgary Nose Hill, AB

I have limited time. I'm just looking specifically at this bill. Do you believe this bill adequately covers that provision?

6:10 p.m.

Prof. Andrew Clement

I would say not.

6:10 p.m.

Conservative

Michelle Rempel Garner Conservative Calgary Nose Hill, AB

Thank you.

We have Dr. Papernot.

6:10 p.m.

Prof. Nicolas Papernot

My comment here would be that the bill is not clear enough when it comes to monitoring AI system outputs. This is very difficult, because we don't have good visibility into how different users of an AI system could combine the outputs they each receive in ways that lead to harmful behaviour.

6:10 p.m.

Conservative

Michelle Rempel Garner Conservative Calgary Nose Hill, AB

That would speak to the fact that this bill doesn't create the environment in which enforcement provisions could be adequately utilized by law enforcement professionals, should existing laws surrounding, let's say, intimate image distribution be expanded to cover artificial intelligence. Is that correct?

6:10 p.m.

Prof. Nicolas Papernot

That's right.

6:10 p.m.

Conservative

Michelle Rempel Garner Conservative Calgary Nose Hill, AB

I would then go to intellectual property ownership.

Since this bill was last debated at this committee, The New York Times undertook a very significant lawsuit against OpenAI and Microsoft for the use of their intellectual property in the creation and training of their large language models. Do you believe that the decision regarding intellectual property ownership, or the determination of intellectual property ownership, should be left to the courts, or should it be addressed in a more formal legal format?

6:10 p.m.

Prof. Nicolas Papernot

I don't have the right expertise to comment on that. What I will say is that it is currently impossible, technically speaking, to trace a prediction that a model makes back to the data from which it learned that behaviour. It would be very difficult to trace back the offending pieces of training data to which the copyright claims relate.

6:15 p.m.

Conservative

Michelle Rempel Garner Conservative Calgary Nose Hill, AB

Do you believe, though, that this speaks to the need for perhaps parliamentary oversight or legislative oversight on defining what constitutes intellectual property in terms of input in training large language models?

6:15 p.m.

Prof. Nicolas Papernot

I'm not sure.