Digital Charter Implementation Act, 2022

An Act to enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act and to make consequential and related amendments to other Acts

Status

In committee (House), as of April 24, 2023

Summary

This is from the published bill. The Library of Parliament has also written a full legislative summary of the bill.

Part 1 enacts the Consumer Privacy Protection Act to govern the protection of personal information of individuals while taking into account the need of organizations to collect, use or disclose personal information in the course of commercial activities. In consequence, it repeals Part 1 of the Personal Information Protection and Electronic Documents Act and changes the short title of that Act to the Electronic Documents Act. It also makes consequential and related amendments to other Acts.
Part 2 enacts the Personal Information and Data Protection Tribunal Act, which establishes an administrative tribunal to hear appeals of certain decisions made by the Privacy Commissioner under the Consumer Privacy Protection Act and to impose penalties for the contravention of certain provisions of that Act. It also makes a related amendment to the Administrative Tribunals Support Service of Canada Act.
Part 3 enacts the Artificial Intelligence and Data Act to regulate international and interprovincial trade and commerce in artificial intelligence systems by requiring that certain persons adopt measures to mitigate risks of harm and biased output related to high-impact artificial intelligence systems. That Act provides for public reporting and authorizes the Minister to order the production of records related to artificial intelligence systems. That Act also establishes prohibitions related to the possession or use of illegally obtained personal information for the purpose of designing, developing, using or making available for use an artificial intelligence system and to the making available for use of an artificial intelligence system if its use causes serious harm to individuals.

Elsewhere

All sorts of information on this bill is available at LEGISinfo, an excellent resource from the Library of Parliament. You can also read the full text of the bill.

Votes

April 24, 2023 Passed 2nd reading of Bill C-27, An Act to enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act and to make consequential and related amendments to other Acts

Ryan Turnbull Liberal Whitby, ON

I know we're studying Bill C-27. I'm just not sure of the relevance. I know that SDTC is another topic this committee is studying, but I don't understand how Mr. Perkins' line of questioning and request for documentation are related to the current work we're doing on today's agenda. It's not to say that Mr. Balsillie wouldn't be able to do that in future meetings on SDTC, but this is not the time or the place, in my opinion.

Jim Balsillie Founder, Centre for Digital Rights

Chairman Lightbound and honourable members, happy Valentine's Day.

Thank you for the opportunity to come back and expand on my previous testimony to include concerns about the artificial intelligence and data act. AIDA's flaws in both process and substance are well documented by the expert witnesses. Subsequent proposals by the minister only reinforce my core recommendation that AIDA requires a complete restart. It needs to be sent back to the drawing board, but not for ISED to draft alone. Rushing to pass legislation so seriously flawed will only deepen citizens' fears about AI, because AIDA merely proves that policy-makers can't effectively prevent current and emerging harms from new technologies.

Focusing on existential harms that are unquantifiable, indeterminate and unidentifiable is buying into industry's gaslighting. Existential risk narratives divert attention from current harms such as mass surveillance, misinformation, and undermining of personal autonomy and fair markets, among others. From a high-level perspective, some of the foundational flaws with AIDA are the following.

One, it's anti-democratic. The government introduced its AI regulation proposal without any consultation with the public. As Professor Andrew Clement noted at your January 31 meeting, subsequent consultations have revealed exaggerated claims of meetings that still disproportionately rely on industry feedback over civil society.

Two, claims of AI benefits are not substantiated. A recent report on Quebec's AI ecosystem shows that Canada's current AI promotion is not yielding its stated economic outcomes. AIDA reiterates many of industry's exaggerated claims that AI advancement can bring widespread societal benefits, but it offers no substantiation.

References to support the minister's statement that “AI offers a multitude of benefits for Canadians” come from a single source: Scale AI, a program funded by ISED and the Quebec government. Rather than showing credible reports on how the projects identified have benefited many Canadians, the reference articles claiming benefits are simply announcements of recently funded projects.

Three, AI innovation is not an excuse for rushing regulation. Not all AI innovation is beneficial, as evidenced by the creation and spread of deepfake pornographic images of not just celebrities but also children. This is an important consideration, because we are being sold AIDA as a need to balance innovation with regulation.

Four, by contrast, the risk of harms is well documented yet unaddressed in the current proposal. AI systems, among other things, have been shown to facilitate housing discrimination, make racist associations, exclude women from seeing job listings shown to men, recommend longer prison sentences for visible minorities, and fail to accurately recognize the faces of dark-skinned women. There are countless additional incidents of harm, thousands of which are catalogued in the AI Incident Database.

Five, AIDA focuses excessively on risks of harm to individuals rather than harms to groups or communities. AI-enabled misinformation and disinformation pose serious risks to election integrity and democracy.

Six, ISED is in a conflict of interest, and AIDA is its regulatory blank cheque. The ministry is advancing legislation and regulations intended to address the potentially serious harms from technical developments in AI while it is investing in and vigorously promoting AI, including funding AI projects for champions of AIDA such as Professor Bengio. As Professor Teresa Scassa has shown in her research, the current proposal is not about agility but about a lack of substance and credibility.

Here are my recommendations.

Sever AIDA from Bill C-27 and start consultation in a transparent, democratically accountable process. Serious AI regulation requires policy proposals and an inclusive, genuine public consultation informed by independent, expert background reporting.

Give individuals the right to contest and object to AI affecting them, not just a right to algorithmic transparency.

The AI and data commissioner needs to be independent from the minister: an independent officer of Parliament with appropriate powers and adequate funding. Such an office would require a more serious commitment than the current arrangements for our Competition Bureau and privacy regulators.

There are many more flawed parts of AIDA, all detailed in our Centre for Digital Rights submission to the committee, entitled “Not Fit for Purpose”. The inexplicable rush by the minister to ram through this proposal should be of utmost concern. Canada is at risk of being the first in the world to create the worst AI regulation.

With regard to large language models, current leading-edge LLMs incorporate hundreds of billions of parameters in their models, based on training data with trillions of tokens. Their behaviour is often unreliable and unpredictable, as AI expert Gary Marcus is documenting well.

LLMs are costly and compute-intensive, and the field is dominated by big tech: Microsoft, Google, Meta and so on. There is no transparency in how these companies build their models, nor in the risks those models pose. Explainability of LLMs is an unsolved problem, and it gets worse as models grow. The claimed benefits of LLMs are speculative, but the harms and risks are well documented.

My advice for this committee is to take the time to study LLMs and to support that study with appropriate expertise. I am happy to help organize study forums, as I have strong industry and civil society networks. As with AIDA, understanding the full spectrum of technology's impacts is critical to a sovereign approach to crafting regulation that supports Canada's economy and protects our rights and freedoms.

Speaking of sovereign capacity, I would be remiss if I didn't say I was disappointed to see Minister Champagne court and offer support to Nvidia. Imagine if we had a ministry that threw its weight behind Canadian cloud and semiconductor companies so that we could advance Canada's economy and sovereignty.

Canadians deserve an approach to AI that builds trust in the digital economy, supports Canadian prosperity and innovation and protects Canadians, not only as consumers but also as citizens.

Thank you.

Christelle Tessono Technology Policy Researcher, University of Toronto, As an Individual

Mr. Chair and members of the committee, thank you for inviting me to address you all this afternoon.

My name is Christelle Tessono, and I'm a technology policy researcher currently pursuing graduate studies at the University of Toronto. Over the course of my academic and professional career in the House of Commons, at Princeton University, and now with the Right2YourFace coalition and The Dais, I have developed expertise in a wide range of digital technology governance issues, most notably AI.

My remarks will focus on the AI and data act, and they build on the analysis submitted to INDU last year. This submission was co-authored with Yuan Stevens, Sonja Solomun, Supriya Dwivedi, Sam Andrey and Dr. Momin Malik, who is on the panel with me today. In our submission, we identify five key problems with AIDA; however, for the purposes of my remarks, I will be focusing on three.

First, AIDA does not address the human rights risks that AI systems cause, which puts it out of step with the EU AI Act. The preamble should, at a minimum, acknowledge the well-established disproportionate impact that these systems have on historically marginalized groups, such as Black, Indigenous and racialized people, members of the LGBTQ community, economically disadvantaged people, people with disabilities and other equity-seeking communities in the country.

While the minister's proposed amendments provide a schedule for classes of systems that may be considered in the scope of the act, that is far from enough. Instead, AIDA should be amended to have clear sets of prohibitions on systems and practices that exploit vulnerable groups and cause harms to people's safety and livelihoods, akin to the EU AI Act's prohibition on systems that cause unacceptable risks.

A second issue we highlighted is that AIDA does not create an accountable oversight and enforcement regime for the AI market. In its current iteration, AIDA lacks provisions for robust, independent oversight. Instead, it proposes self-administered audits, ordered at the discretion of the Minister of Innovation when contravention of the act is suspected.

While the act creates the position of the AI commissioner, they are not an independent actor, as they are appointed by the minister and serve at their discretion. The lack of independence of the AI commissioner creates a weak regulatory environment and thus fails to protect the Canadian population from algorithmic harms.

While the minister's proposed amendments provide investigative powers to the commissioner, that is far from enough. Instead, I believe that the commissioner should be a Governor in Council appointment and be empowered to conduct proactive audits, receive complaints, administer penalties and propose regulations and industry standards. Enforcing legislation should translate into having the ability to prohibit, restrict, withdraw or recall AI systems that do not comply with comprehensive legal requirements.

Third, AIDA did not undergo any public consultations. This is a glaring issue at the root of the many serious problems with the act. In their submission to INDU, the Assembly of First Nations reminds the committee that the federal government adopted the United Nations Declaration on the Rights of Indigenous Peoples Act action plan, which requires the government to make sure that “Respect for Indigenous rights is systematically embedded in federal laws and policies developed in consultation and cooperation with Indigenous peoples”. AIDA did not receive such consultation, which is a failure of the government in its commitment to indigenous peoples.

To ensure that public consultations are at the core of AI governance in this country, the act should ensure that a parliamentary committee is empowered to review, revise and update AIDA whenever necessary, with public hearings held annually or every few years, starting one year after AIDA comes into force. The Minister of Industry should be obliged to respond to these committee reviews within 90 days, including legislative and regulatory changes designed to remedy the deficiencies the committee identifies.

Furthermore, I support the inclusion of provisions that expand the reporting and review duties of the AI commissioner, which could include, for example, annual reports to Parliament and special reports on urgent matters.

In conclusion, I believe that AI regulation needs to safeguard us against a rising number of algorithmic harms that these systems perpetuate; however, I don't think AIDA in its current state is up to that task. Instead, in line with submissions and open letters submitted to the committee by civil society, I highly recommend taking AIDA out of Bill C-27 to improve it through careful review and public consultations.

There are other problems I want to talk about, notably the exclusion of government institutions in the act.

I'm happy to answer questions regarding the proposed amendments made by the minister and expand on points I raised in my remarks.

Since I'm from Montreal, I'll be happy to answer your questions in French.

Thank you for your time.

The Chair Liberal Joël Lightbound

Colleagues, good afternoon.

I call this meeting to order.

Welcome to meeting number 111 of the House of Commons Standing Committee on Industry and Technology.

Today's meeting is taking place in a hybrid format, pursuant to the Standing Orders.

Pursuant to the order of reference of Monday, April 24, 2023, the committee is resuming consideration of Bill C‑27, An Act to enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act and to make consequential and related amendments to other Acts.

I would like to welcome our witnesses.

We're meeting with Momin Malik, Ph.D. and data science researcher. He is speaking as an individual and is joining us by video conference.

We're also meeting with Christelle Tessono, a technology policy researcher at the University of Toronto. She too is joining us by video conference.

Lastly, we're meeting with Jim Balsillie, who is here in person and whom I would like to thank for coming to speak to the committee again.

I'll now give the floor to Mr. Malik for five minutes.

Brian Masse NDP Windsor West, ON

Thank you, Mr. Chair.

My first intervention is about the challenge of what we do next, because what I think you have demonstrated today is the argument that we're going to consult you on Bill C-27, fix copyright sometime later, and somehow fix the rest after we pass Bill C-27. That is not sufficient for the NDP. It's clear to us that you can do both of those things. The alternative is that we either send this to regulatory oblivion—that's really what happens—or dismantle what we have here.

I'm looking at an alternative where we view it through a lens almost like national security. Perhaps we even have a standing committee of Parliament and the Senate that looks at this across all the different jurisdictions, because copyright, in its technicality, falls just outside this particular bill, but the reality is that it encompasses everything you have been saying and doing here in a much more holistic way than in many other industries.

I have one quick question to go across the table here about an AI commissioner. Should the commissioner be independent and able to levy fines for the abuse of artificial intelligence, if that is part of the law?

Maybe we can start with ACTRA and go across.

Jean-Denis Garon Bloc Mirabel, QC

Thank you, Mr. Chair.

I'd like to comment on the transparency issue. One of my colleagues, Mr. Turnbull, discussed this. He said it might be complicated to determine the identity of works that have been used among billions of data points. However, my impression is that an AI system capable of reading 100 million books a day is capable of searching from a list. You'd have to check that.

That being said, some intervenors have told us that Bill C-27 won't get the job done. Many representatives of the web giants told us so, almost implying that we should reject it, start over from scratch, modify all kinds of other acts and work on it for I don't know how many years. We have that option, but there's also the option of moving ahead, continuing to amend Bill C-27 and doing the best we can. Then there's the option of waiting and imitating Europe, since Canada is a minor player after all.

However, there's another solution: we could add a provision requiring periodic updates to the act, say every three to five years. That would force Parliament to review the act completely and would give it the opportunity to align the act periodically with the legislation of other countries so that Canada remains competitive, while enabling it to participate in the international review process.

Ms. Hénault, what do you think of that kind of provision?

Francesco Sorbara Liberal Vaughan—Woodbridge, ON

I'll move to Marie-Julie Desrochers.

Welcome. My question concerns Bill C-27.

Is it not important that we finish off this bill and put it in place? Twenty years have passed. Wouldn't you agree that the reviews of this bill pertaining to AI—even the copyright side, which is ongoing—should happen at much shorter intervals?

February 12th, 2024 / 12:45 p.m.

Patrick Rogers Chief Executive Officer, Music Canada

I don't believe that the bill currently describes music as high-impact. I would find it hard to believe, though, that anybody who's spent the last two hours listening to us would think that there wasn't a high impact of AI on all cultural industries. If there is an attempt to allow AI not to respect copyright laws, then it will have the highest impact on us. That's something you could fix today in Bill C-27 by simply saying that AI has to pay for the use of copyrighted material.

February 12th, 2024 / 12:40 p.m.

Stéphanie Hénault Director of Legal Affairs, Association nationale des éditeurs de livres

Yes, I wanted to discuss the distinction between the Copyright Act and Bill C-27. The Copyright Act governs rights holders, whereas Bill C-27 concerns the construction and management of generative AI models.

It's important to regulate that industry by means of obligations of collective interest, including compliance with copyright. I imagine that other statutes, such as those on aircraft construction and transport, provide that one must comply with standards in the collective interest. We view Bill C-27 in the same way. It has to be said very clearly that developers must introduce policies to train their models fairly and respectfully and make them available. There must also be policies respecting users to ensure they clearly understand that this isn't a free pass to violate third-party copyright.

February 12th, 2024 / 12:35 p.m.

Patrick Rogers Chief Executive Officer, Music Canada

Can I just say, without casting aspersions on anyone, that this is an impossible game of three-card monte for stakeholders?

The bill before Parliament is Bill C-27. There is a copyright review going on. If we don't comment on AI and its interaction with copyright during Bill C-27, we will have missed the boat. If we miss the opportunity to talk about it during copyright consultations, there's a high chance of it being suggested that we talk about it in Bill C-27.

February 12th, 2024 / 12:25 p.m.

Marie Kelly National Executive Director, Alliance of Canadian Cinema, Television and Radio Artists

Yes, we would, and we started our submission by saying we're thankful that the government is looking at this and we're thankful that Bill C-27 has been brought forward. It has allowed us to have this conversation.

There are significant changes we'd like to see in it, but we are happy to have the conversation. We're happy to be here, and we're glad that Bill C-27 is being discussed.

Bernard Généreux Conservative Montmagny—L'Islet—Kamouraska—Rivière-du-Loup, QC

Now I'm going to speak to everyone.

On several occasions, many of you have discussed interoperability with what's being done elsewhere in the world, particularly in Europe and the United States. Do you think Bill C-27 goes far enough, even though it was improved by the amendments the government proposed? Considering the answers you've been giving from the start, that doesn't seem to be the case.

To ensure your respective organizations remain viable, do you think it's important that Bill C-27 include the elements you're proposing?

Bernard Généreux Conservative Montmagny—L'Islet—Kamouraska—Rivière-du-Loup, QC

Ms. Hénault, before you respond, I'd like to remind you that earlier you said that Canada mustn't become a banana republic. Do you view Bill C-27 as the bill of a banana republic?

February 12th, 2024 / 12:05 p.m.

Marie Kelly National Executive Director, Alliance of Canadian Cinema, Television and Radio Artists

I think it's going to be a lot of threading together of different things.

Copyright is key. You are hearing us say that as actors. I think you need to have protection on the data you're looking at in Bill C-27. I think it's very important for us to look at how it's scraped and what they're doing with it. We need to have knowledge about where this data is coming from in order for us to even be able to trace bad actors—and good actors who just happen to take it and may not know.

We're looking at things like this: What are you going to do with a worker who has their data taken from them by their employer so they can generate a program—say, a training session, etc.? Why not put something in the Employment Standards Act that protects all workers against having their name, image and likeness taken without consent, control and compensation?

Privacy laws have to be increased so we have those protections.

I'm sure there's more than that. This is going to be a patchwork.

February 12th, 2024 / noon

Patrick Rogers Chief Executive Officer, Music Canada

Yes. Thank you for the opportunity to comment on that.

You can go about it either way. Either you can say that there are no exceptions for AI, that AI is like everything else, and you can do it in a bill like Bill C-27 and go back and reference the Copyright Act, or you can make the change in the Copyright Act and say that this is the case.

We didn't create copyright for the printing press. We created copyright for Dickens and the recognition that the work was worth more than what you paid for it right away, and we extended the term of copyright for sound recordings because people were starting to live to the point at which they could hear their song on the radio and not get paid, so we made that change.

If we say that we know they're scraping our stuff, and we know that's a use—it's of value—we can just agree now that that's the case and get out of those sorts of fun academic conversations about “I don't know. Is it a copy?” I know it's a copy. I know they're taking it because our stuff is a thing of value.