Digital Charter Implementation Act, 2022

An Act to enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act and to make consequential and related amendments to other Acts

Status

In committee (House), as of April 24, 2023

Summary

This is from the published bill. The Library of Parliament has also written a full legislative summary of the bill.

Part 1 enacts the Consumer Privacy Protection Act to govern the protection of personal information of individuals while taking into account the need of organizations to collect, use or disclose personal information in the course of commercial activities. In consequence, it repeals Part 1 of the Personal Information Protection and Electronic Documents Act and changes the short title of that Act to the Electronic Documents Act. It also makes consequential and related amendments to other Acts.
Part 2 enacts the Personal Information and Data Protection Tribunal Act, which establishes an administrative tribunal to hear appeals of certain decisions made by the Privacy Commissioner under the Consumer Privacy Protection Act and to impose penalties for the contravention of certain provisions of that Act. It also makes a related amendment to the Administrative Tribunals Support Service of Canada Act.
Part 3 enacts the Artificial Intelligence and Data Act to regulate international and interprovincial trade and commerce in artificial intelligence systems by requiring that certain persons adopt measures to mitigate risks of harm and biased output related to high-impact artificial intelligence systems. That Act provides for public reporting and authorizes the Minister to order the production of records related to artificial intelligence systems. That Act also establishes prohibitions related to the possession or use of illegally obtained personal information for the purpose of designing, developing, using or making available for use an artificial intelligence system and to the making available for use of an artificial intelligence system if its use causes serious harm to individuals.

Elsewhere

All sorts of information on this bill is available at LEGISinfo, an excellent resource from the Library of Parliament. You can also read the full text of the bill.

Votes

April 24, 2023 Passed 2nd reading of Bill C-27, An Act to enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act and to make consequential and related amendments to other Acts

Bernard Généreux Conservative Montmagny—L'Islet—Kamouraska—Rivière-du-Loup, QC

Will Bill C‑27 be as effective as, or equivalent to, the U.S. presidential executive order currently in force?

Do you think the Americans will then pass legislation that will go further than this current presidential executive order?

The EU has already been much quicker to adopt measures than we've been. What is the intersection between Bill C‑27 and the bill that's about to be passed in Europe?

Bernard Généreux Conservative Montmagny—L'Islet—Kamouraska—Rivière-du-Loup, QC

Thank you very much, Mr. Chair.

I'd like to thank all the witnesses. Today's discussions are very interesting.

I'm not necessarily speaking to anyone in particular, but rather to all the witnesses.

Bad actors, whether they be terrorists, scammers or thieves, could misuse AI. I think that's one of Mr. Bengio's concerns. If we were to pass Bill C‑27 tomorrow morning, would that prevent such individuals from doing so?

To follow up on the question from my Bloc Québécois colleague earlier, it seems clear to me that, even in the case of a recorded message intended to scam someone, the scammer will not specify that the message was created using AI.

Do you really believe that Bill C‑27 will change things or truly make Quebeckers and Canadians safer when it comes to AI?

Prof. Catherine Régis

Influence is an issue, but I'd like to briefly comment on the self-regulation aspect, if I may. I think it's important. In my view, self-regulation clearly isn't adequate. There's a pretty strong consensus in the international community that opting strictly for self-regulation isn't enough. That means legislation has its place: it imposes obligations and formal accountability measures on companies.

That said, it's important to recognize that this legislation, Bill C-27, is one tool in the important tool box we need to ensure the responsible deployment of AI. It's not the only answer. The law is important, but highly responsive ethical standards are also necessary. The tool box should include technical defensive AI, where you have AI versus AI. International standards as well as business standards need to be established. Coming up with a comprehensive strategy is really key. This bill won't fix everything, but it is essential. That's my answer to your first question.

Sorry, could you please remind me what your second question was?

Jean-Denis Garon Bloc Mirabel, QC

Recently, we've heard about scams that use AI to imitate people's voices and dupe a grandmother or grandfather. You'll have to forgive me if I don't use the right terminology. As I understand it, you are saying that the current regulatory framework neither requires companies nor incentivizes them—because there is a cost attached—to identify when something is fake.

Does Bill C-27, in its current form, remedy that? Does it cover everything it should, or does it need to be strengthened?

Professor Catherine Régis Full Professor, Université de Montréal, As an Individual

Good morning, Mr. Chair and members of the committee. Thank you for the opportunity to comment on the AI portion of Bill C-27.

I am a full professor in the faculty of law at Université de Montréal. I am also the Canada research chair in collaborative culture in health law and policy, as well as the Canada CIFAR AI chair affiliated with Mila. From January 2, 2022 to December 2023, I co-chaired the Working Group on Responsible AI for the Global Partnership on AI.

The first point I want to make is to reaffirm not only the importance, but also the urgency of creating a better legal framework for AI, as proposed in Bill C-27. That has been my view for the past five years, and I am now more convinced than ever, given the dizzying pace of recent developments in AI, which you are all familiar with.

We need legal tools that are binding. They must clearly set out our expectations, values and requirements in relation to AI, at the national level. During the citizen consultations that culminated in the development of the Montréal Declaration for a Responsible Development of Artificial Intelligence, the first need identified was for an appropriate legal framework that would enable the development of trusted AI technologies.

As you probably know, that trend has spread across the world, the most obvious example definitely being the European Union's efforts. As of last week, the EU is now one step closer to adopting a regulatory framework for AI.

In addition to these national requirements, the global discussions around AI and the resulting decisions will have repercussions for every country. In fact, the idea of creating a specific AI authority is being discussed.

In order to ensure that Canadian values and interests are taken into account in the international space, Canada has to be able to influence the discussions and decisions. Setting out a national vision with strong and clear standards is vital to playing a credible, meaningful and influential role in the global governance of AI.

That said, I think Bill C-27 could still use some improvements. I will focus on two of them today.

The first improvement is to make the artificial intelligence and data commissioner more independent. Although recent amendments have resulted in improvements, the commissioner is still very much tied to Innovation, Science and Economic Development Canada. To avoid any conflict of interest, real or apparent, the government should create more of a wall between the two entities. This would address any tensions that might arise between the government's role as a funder on one hand, and its role as a watchdog on the other.

Possible solutions include creating an office of the artificial intelligence commissioner that is totally independent of the department, and empowering the commissioner to impose administrative monetary penalties or require that corrective actions be taken to address the accountability framework. In addition, the commissioner could be asked to recommend new or improved regulations informed by their experience as a watchdog, mainly through the annual public report.

Other measures could also be taken. Once the legislation is passed, for instance, the government could give the commissioner the financial and institutional resources, as well as the qualified staff necessary to successfully carry out the duties of the commissioner. Making sure that the commissioner has the means to achieve their objectives is really important. Another possibility is to create a mechanism whereby the public could report issues directly to the commissioner. That would establish a relationship between the two.

The second major improvement that's needed, as I see it, is to further strengthen the crucial role that human rights can play in analyzing the risks and impacts of AI systems. The bill specifically mentions the importance of taking human rights into account in defining the classes of high-impact AI systems. However, the importance of then incorporating consideration of those rights into companies' assessments, which could include an analysis of the risks of harm and adverse effects, is not as clear.

I would also recommend adding specific language to address the need to conduct impact assessments for human rights in relation to individuals or groups of individuals who may be affected by high-impact AI systems. A portion of those assessments could also be made public. These are sometimes called human rights impact assessments.

The Council of Europe, the European Union with its AI legislation, and even the United Nations Educational, Scientific and Cultural Organization are working on similar tools, so exploring the possibility of sharing expertise would be worthwhile.

The second recommendation is fundamental. While the AI race is very real, there can be no winner of the race to violate human rights. The legislation must make that clear.

Thank you.

Ignacio Cofone Canada Research Chair in AI Law and Data Governance, McGill University, As an Individual

Thank you very much, Mr. Chair.

Good morning, everyone, and thank you for the invitation to share with the committee my thoughts on Bill C-27.

I'm appearing today in my personal capacity. Mr. Chair has already introduced me, so I'm going to skip that part and say that it is crucial that Canada have a legal framework that fosters the enormous benefits of AI and data while preventing its population from becoming collateral damage.

I'm happy to share my broad thoughts on the act, but today I want to focus on three important opportunities for improvement while maintaining the general characteristics and approach of the act as proposed. I have one recommendation for AIDA, one for the CPPA and one for both.

My first recommendation is that AIDA needs an improved definition of “harms”. AIDA is an accountability framework, and the effectiveness of any accountability framework depends on what it is that we hold entities accountable for. AIDA currently recognizes property, economic, physical and psychological harms, but for it to be helpful and comprehensive, we need to go one step further.

Consider the harms done to democracy during the Cambridge Analytica scandal, and consider the meaningful but diffuse and invisible harms inflicted every day through intentional misinformation that polarizes voters. Consider the misrepresentation of minorities that disempowers them. These go unrecognized by the current definition of “harms”.

AIDA needs two changes to recognize intangible harms beyond individual psychological ones: It needs to recognize harms to groups, such as harms to democracy, as AI harms often affect communities rather than discrete individuals, and it also needs to recognize dignitary harms, like those stemming from misrepresentation and the growth of systemic inequalities through automated means.

I therefore urge the committee to amend subsection 5(1) of AIDA to incorporate these intangible harms to individuals and to communities. I would be happy to propose suggested language.

This fuller account of harms would bring Canada up to international standards, such as the EU AI Act, which considers harms to “public interest”, to “rights protected” by EU law, to a “plurality of persons” and to people in a “vulnerable position”. It would also better align with AI ethics frameworks, such as the Montreal declaration for responsible AI, the Toronto declaration and the Asilomar AI principles. You would also increase consistency within Canadian law, as the directive on automated decision-making repeatedly refers to “individuals or communities”.

My second recommendation is that the CPPA must recognize inferences as personal information. We live in a world where things as sensitive and dangerous as our sexuality or ethnicity and our political affiliation can be inferred from things as inoffensive as our Spotify listens or our coffee orders or text messages, and those are just some of the inferences that we know about.

Inferences can even be harmful when they are incorrect. TransUnion, the credit rating agency, for example, was sued in the United States a couple of years ago for mistakenly inferring that hundreds of people were terrorists. By supercharging inferences, AI has transformed the privacy landscape.
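
As a purely synthetic toy, not a claim about any real service or dataset, the mechanism is easy to sketch in Python: a model trained on innocuous behavioural signals ends up predicting a sensitive attribute it was never given directly. Every feature, label and value below is a hypothetical stand-in.

```python
# Entirely synthetic toy: infer a hypothetical sensitive attribute from
# innocuous signals (listening hours by genre, coffee orders per week).
from sklearn.linear_model import LogisticRegression

X_train = [[9, 1, 3], [8, 2, 4], [1, 9, 1],
           [2, 8, 0], [7, 2, 5], [1, 7, 2]]
y_train = [1, 1, 0, 0, 1, 0]  # sensitive attribute (synthetic labels)

model = LogisticRegression().fit(X_train, y_train)

# A new user never disclosed the attribute, yet a probability is inferred,
# and it attaches to them whether or not it is correct.
print(model.predict_proba([[8, 1, 4]])[0])
```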

We cannot afford to have a privacy statute that focuses on disclosed information and builds a back door into our privacy law that strips from it its power to create meaningful protection in today's inferential economy. The CPPA doesn't rule out inferences being personal information, but it doesn't incorporate them explicitly. It should. I urge the committee to amend the definition of personal information in one of the acts to say that “'personal information' means disclosed or inferred information about an identifiable individual or group”.

This change would also increase consistency within Canadian law, as the Office of the Privacy Commissioner has repeatedly stated that inferences should be personal information, and also with international standards, as foreign data protection authorities emphasize the importance of inferences for privacy law. The California attorney general has also stated that inferences should be personal information for the purposes of privacy law.

My third recommendation, briefly, concerns reforming enforcement. As AI and data continue to seep into more aspects of our social and economic lives, one regulator with limited resources and personnel will not be able to have their eye on everything. They will need to prioritize. If we don't want all other harms to fall through the cracks, both parts of the act need a combined public and private enforcement system, taking inspiration from the GDPR, so that we have an agency that issues fines without preventing the court system from compensating for tangible and intangible harm done to individuals and groups.

We also have a brief elaborating on the suggested outlines here.

I'd be happy to address any questions or elaborate on anything.

Thank you very much for your time.

The Chair Liberal Joël Lightbound

I call the meeting to order.

Good morning one and all. Welcome to meeting number 108 of the House of Commons Standing Committee on Industry and Technology. Today's meeting is taking place in a hybrid format, pursuant to the Standing Orders.

Pursuant to the order of reference of Monday, April 24, 2023, the committee is resuming its study of Bill C-27, an act to enact the consumer privacy protection act, the personal information and data protection tribunal act and the artificial intelligence and data act and to make consequential and related amendments to other acts.

Today's witnesses are all joining us by video conference. We have with us Ignacio Cofone, Canada research chair in artificial intelligence law and data governance at McGill University; Catherine Régis, full professor at Université de Montréal; Elissa Strome, executive director of pan-Canadian AI strategy at the Canadian Institute for Advanced Research; and Yoshua Bengio, scientific director at Mila - Quebec Artificial Intelligence Institute.

Welcome and thank you all for being with us.

Since we are already a bit behind schedule, I'm going to turn the floor right over to you, Mr. Cofone. You have five minutes for your opening statement.

Consumer-Led Banking Act (Private Members' Business)

February 1st, 2024 / 6:40 p.m.



Ryan Williams Conservative Bay of Quinte, ON

Madam Speaker, the member is a new addition to our industry committee; I look forward to working with him.

We see this across a lot of different spectra right now. This bill is asking for legislation. The legislation has to come forward. It is much the same as we are seeing with Bill C-27, and we have a much better privacy bill in Quebec, so I will agree with that. It is much the same as we saw today when we were talking about the problems with Manulife and Loblaw, and the fact that some of the legislation is provincial that is allowing Manulife to sole-source pharmaceuticals.

Yes, I agree with the member. We always need to look at the provinces, and we are looking at that with some of that legislation. However, let us get the legislation forward and passed, so we can all talk about it in the House of Commons and then get it passed for Quebec and all Canadians.

Jean-Denis Garon Bloc Mirabel, QC

Professor Bednar, I have to interrupt you, because time is very limited. That said, the chair is very generous.

Do you think that, in its current form, Bill C‑27 is too permissive when it comes to self‑regulation? Should we rely instead on government regulations, for example?

Prof. Nicolas Papernot

I agree that AI is one way to analyze data, but there are many other ways to do it. So we need regulations on privacy, just as we do for AI. As for AI, the part of Bill C‑27 that deals with it talks a lot about privacy, but there are a lot of other ways to—

Jean-Denis Garon Bloc Mirabel, QC

Thank you very much. That sends a clear message about the confidence an expert like you may have in the current regulations. There are even people who say that this bill is inadequate and that we should tear it up and rewrite it.

Canadian regulations already exist. Indeed, other legislation directly or indirectly regulates artificial intelligence and data protection. Do you think that, if Bill C‑27 were amended to reflect the advances, there would be a way to improve what we already have, or is it a waste of time?

Professor Nicolas Papernot Assistant Professor and Canada CIFAR AI Chair, University of Toronto and Vector Institute, As an Individual

Thank you for inviting me to appear here today. I am an assistant professor of computer engineering and computer science at the University of Toronto, a faculty member at the Vector Institute, where I hold a Canada CIFAR AI chair, and a faculty affiliate at the Schwartz Reisman Institute.

My area of expertise is at the intersection of computer security, privacy and artificial intelligence.

I will first comment on the consumer privacy protection act proposed in Bill C‑27. The arguments I'm going to present are the result of discussions with my colleagues, professors Lisa Austin, David Lie and Aleksandar Nikolov.

I do not believe that the act in its current form creates the right incentives for the adoption of privacy-preserving data analysis standards. Specifically, the act's reliance on de-identification as a privacy protection tool is misplaced. For example, as you know, the act allows organizations to disclose personal information to certain other organizations for socially beneficial purposes if the personal information is de-identified.

As a researcher in this field, I would say that de-identification creates a false sense of security. Indeed, we can use algorithms to find patterns in data, even when steps have been taken to hide those patterns.

For instance, the state of Victoria in Australia released public transit data that was de-identified by replacing each traveller's smart card ID with a unique random ID. The logic was that no IDs means no identities. However, researchers showed that mapping their own trips, where they tapped on and off public transit, allowed them to reidentify themselves. Equipped with that knowledge, they then learned the random IDs assigned to their colleagues. Once they had knowledge of their colleagues' random IDs, they could find out about any other trip—weekend trips, doctor visits—all things that most would expect to be kept private.

As a researcher in this area, that doesn't surprise me.
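
A minimal sketch of this kind of linkage attack, using entirely hypothetical records rather than the actual Victoria data, shows how little outside knowledge is needed:

```python
# Hypothetical "de-identified" transit release: card numbers replaced
# with random IDs, but each traveller's full trip history retained.
published = {
    "rnd-7f3a": [("Mon 08:02", "Station A", "Station B"),
                 ("Sat 10:15", "Station A", "Clinic St")],
    "rnd-91bc": [("Mon 08:02", "Station C", "Station D")],
}

# The attacker knows a few of a colleague's trips, for example from
# commuting together on Monday morning.
known_trips = {("Mon 08:02", "Station A", "Station B")}

# Find the random ID whose history contains every known trip.
matches = [rid for rid, trips in published.items()
           if known_trips <= set(trips)]

if len(matches) == 1:
    rid = matches[0]
    # Re-identified: every other trip of that traveller is now exposed.
    print(f"Colleague is {rid}; full history: {published[rid]}")
```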

Moreover, AI can automate finding these patterns.

With AI, such reidentification can happen for a large portion of individuals in the dataset. This makes the act problematic when trying to regulate privacy in an AI world.

Instead of de-identification, the technical community has embraced different approaches to privacy-preserving data analysis, such as differential privacy. Differential privacy has been shown to work well with AI and can provide demonstrable privacy guarantees even if some things are already known about the data. It would have protected the colleagues' privacy in the example I gave earlier. Because differential privacy does not depend upon modifying personal information, there is a mismatch between what the act requires and emerging best technical practices.
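
By way of illustration, here is a minimal sketch of one standard differentially private technique, the Laplace mechanism applied to a count query; the epsilon value and the records are illustrative assumptions, not anything prescribed by the bill or the testimony:

```python
import random

def dp_count(records, predicate, epsilon=0.5):
    """Differentially private count via the Laplace mechanism.

    Adding or removing one person changes a count by at most 1
    (sensitivity 1), so Laplace noise with scale 1/epsilon gives
    epsilon-differential privacy for this query.
    """
    true_count = sum(1 for r in records if predicate(r))
    # The difference of two Exp(epsilon) draws is Laplace(0, 1/epsilon).
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise

# Hypothetical records: the noisy total stays useful in aggregate, but no
# individual's presence can be confidently inferred from the answer.
records = [{"visited_clinic": True}, {"visited_clinic": False},
           {"visited_clinic": True}]
print(dp_count(records, lambda r: r["visited_clinic"]))
```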

I will now comment on the part of Bill C‑27 that proposes an artificial intelligence and data act. The original text was ambiguous as to the definition of an AI system and a high‑impact system. The amendments that were proposed in November seem to be moving in the right direction. However, the proposed legislation needs to be clearer with respect to data governance.

Currently, the act does not capture important aspects of data governance that can result in harmful AI systems. For example, improper care when curating data leads to a non-representative dataset. My colleagues and I have illustrated this risk with synthetic data used to train AI systems that generate images or text. If the output of these AI systems is fed back in to train new AI systems, those new systems perform poorly. The analogy one might use is how the photocopy of a photocopy becomes unreliable.
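
A toy simulation of that feedback loop, a generic illustration rather than the professors' actual experiments, makes the degradation visible: each generation is "trained" only on samples drawn from the previous generation's fit.

```python
import random
import statistics

random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(200)]  # original data

for generation in range(1, 11):
    # "Train" a model: here, simply fit a Gaussian to the current data.
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    # The next generation sees only synthetic samples from that fit.
    data = [random.gauss(mu, sigma) for _ in range(200)]
    print(f"gen {generation:2d}: mean={mu:+.3f}, std={sigma:.3f}")

# The fitted parameters random-walk away from the original values; over
# many generations the estimate degrades, like a photocopy of a photocopy.
```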

What's more, this phenomenon can disparately impact populations already at risk of being the subject of harmful AI biases, which can propagate discrimination. I would like to see broader considerations at the data curation stage captured in the act.

Coming back to the bill itself, I encourage you to think about producing support documents to help with its dissemination. AI is a very fast-paced field and it's not an exaggeration to say that there are new developments every day. As a researcher, it is important that I educate the future generation of AI talent on what it means to design responsible AI. In finalizing the bill, please consider plain language documents that academics and others can use in the classroom or laboratory. It will go a long way.

Lastly, since the committee is working on regulating artificial intelligence, I'd like to point out that the bill will have no impact if there are no more AI ecosystems to regulate.

When I chose Canada in 2018 over the other countries that tried to recruit me, I did so because Canada offered me the best possible research environment in which to do my work on responsible AI, thanks to the pan-Canadian AI strategy. Seven years into the strategy, AI funding in Canada has not kept pace. Other countries have larger funding for students and better computing infrastructure, both of which are needed to stay at the forefront of responsible AI research.

Thank you for your work, which lays the foundation for responsible AI. I thought it was important to highlight these few areas for improvement in the interest of artificial intelligence in Canada.

I look forward to your questions.

Professor Andrew Clement Professor Emeritus, Faculty of Information, University of Toronto, As an Individual

Thank you, Mr. Chair and committee members.

I am Andrew Clement, professor emeritus in the faculty of information at the University of Toronto. As a computer scientist who started in the field of artificial intelligence, I have been researching the computerization of society and its social implications since the 1970s.

I'm one of three pro bono contributors to the Centre for Digital Rights' report on C-27 that Jim Balsillie spoke to you about here.

I will address the artificial intelligence and data act, AIDA, exclusively in my remarks.

AI, better interpreted as algorithmic intensification, has a long history. For all of its benefits, from well before the current acceleration around deep neural networks, AI misapplication has already hurt many people.

Unfortunately, the loudest voices driving public fear are coming from the tech giant leaders, who are well known for their anti-government and anti-regulation attitudes. These “move fast and break things” figures are now demanding urgent government intervention while jockeying for industry dominance. This is distracting and demands our skepticism.

Judicious AI regulation focused on actual risks is long overdue and self-regulation won't work.

Minister Champagne wants to make Canada a world leader in AI governance. That's a fine goal, but it's as if we are in an international Grand Prix. Apparently, to allay the fears of Canadians, he abruptly entered a made-in-Canada contender. Beyond the proud maple leaf and his smiling at the wheel, his AIDA vehicle barely had a chassis and an engine. He insisted he was simply being “agile”, promising that if you just help to propel him over the finish line, all would be fixed through the regulations.

As Professor Scassa has pointed out, there's no prize for first place. Good governance isn't even a race but an ongoing, mutual learning project. With so much uncertainty about the promise and perils of AI, public consultation informed by expertise is a vital precondition for establishing a sound legal foundation. Canada also needs to carefully study developments in the EU, U.S. and elsewhere before settling on its own approach.

As many witnesses have pointed out, AIDA has been deeply flawed in substance and process from the get-go. Jamming it on to the overdue modernization of PIPEDA made it much harder to give that and the AI legislation the thorough review they each merit.

The minister initially gave himself sweeping regulatory powers, putting him in a conflict of interest with his mandate to advance Canada's AI industry. His recent amendments don't go anywhere near far enough to achieve the necessary regulatory independence.

Minister Champagne claimed to you that AIDA offers a long-lasting framework based on principles. It does not.

The most serious flaw is the absence of any public consultation, either with experts or Canadians more generally, before or since introducing AIDA. It means that it has not benefited from a suitably broad range of perspectives. Most fundamentally, it lacks democratic legitimacy, which can't be repaired by the current parliamentary process.

The minister appears to be sensitive to this issue. As a witness here, he bragged that ISED held “more than 300 meetings with academics, businesses and members of civil society regarding this bill.” In his subsequent letter providing you with a list of those meetings, he claimed that, “We made a particular effort to reach out to stakeholders with a diversity of perspectives....”

My analysis of this list of meetings, sent to you on December 6, shows that this is misleading. Overwhelmingly, ISED held meetings with business organizations. There were 223 meetings in all, of which 36 were with U.S. tech giants. Only nine meetings were with Canadian civil society organizations.

Most striking by their complete absence are any organizations representing those that AIDA is claimed to protect most, i.e., organizations whose members are likely to be directly affected by AI applications. These are citizens, indigenous peoples, consumers, immigrants, parents, children, marginalized communities, and workers or professionals in health care, finance, education, manufacturing, agriculture, the arts, media, communication, transportation—all of the areas where AI is claimed to have benefits.

AIDA breaks democratic norms in ways that can't be fixed through amendments alone. It should therefore be sent back for proper redrafting. My written brief offers suggestions for how this could be accomplished in an agile manner, within the timetable originally projected for AIDA.

However, I realize that the shared political will for pursuing this option may not currently be achievable. If you decide that this AIDA is to proceed, then I urge you to repair its many serious flaws as well as you can in the following eight areas at the very least:

First, sever AIDA from parts 1 and 2 of Bill C-27 so that each of the sub-bills can be given proper attention.

Second, position the AI and data commissioner at arm's length from ISED, appropriately staffed and adequately funded.

Third, provide AIDA with a mandatory review cycle, requiring any renewal or revision to be evidence-based, expert-informed and independently moderated, with genuine public consultation. This should involve proactive outreach to stakeholders not included in ISED's Bill C-27 meetings to date, starting with the consultations on the regulations. I'm reminded here of the familiar saying that if you're not welcome at the table, you should check that you're not on the menu.

Fourth, expand the scope of harms beyond individual harms to include collective and systemic harms, as you've heard from others.

Fifth, base key requirements on robust, widely accepted principles in the legislation and not solely in regulations or schedules.

Sixth, ground such a principles-based framework explicitly in the protection of fundamental human rights and compliance with international humanitarian law, in keeping with the Council of Europe's pending treaty, which Canada has been involved with.

Seventh, replace the inappropriate concept of high-impact systems with a fully tiered, risk-based scheme, as in the EU AI Act.

Eighth, tightly specify a set of unacceptably high-risk systems for prohibition.

I could go on.

Thank you for your attention. I welcome your questions.

Vass Bednar Executive Director, Master of Public Policy in Digital Society Program, McMaster University, As an Individual

Thank you, and good evening.

My name is Vass Bednar. You heard that I run the master of public policy program in digital society at McMaster University, where I'm an adjunct professor of political science. I engage with Canada's policy community broadly as a senior fellow at CIGI, a fellow with the Public Policy Forum, and through my newsletter “Regs to Riches”. I'm also a member of the provincial privacy commissioner's strategic ad hoc advisory committee.

Thank you for the opportunity to appear. I appreciate the work of this committee. I do agree there is an urgent need to modernize Canada's legislative framework so that it's suited to the digital age. I also want to note that I've been on a sabbatical of sorts for the past year, and I have not followed every element of the debate on this bill in detail. That made me a little bit anxious about appearing, but then I remembered that I am not on the committee; I am appearing before the committee, so I decided to be as constructive as I could be today.

As we consider this framework for privacy, consumer protection and artificial intelligence, I really think we're fundamentally negotiating trust in our digital economy, deciding what that looks like for citizens, and articulating what responsible innovation is supposed to look like. That's what gets me excited about the direction we're going.

Very briefly, on the privacy side, it has been well said that this is not the most consumer-centric privacy legislation compared with what we see in other jurisdictions. It does provide clarity for businesses, both large and small, which is good, especially for small businesses. I don't think the requirements for smaller businesses are overly onerous.

The elements on consent have been well debated. Zooming in on that language, “beyond what is necessary”, is, I think, a major hinge of the debate. Who gets to decide what is necessary, and when? The precedent of consent, of course, is critical. I think about a future where people experiencing our online world, or exchanging information with businesses, have far more autonomy as consumers.

For example, there's being able to search without self-preferencing algorithms that dictate the order of what you see; seeing prices that aren't tailored to you, or even knowing there is a personalized dynamic pricing situation; accessing discounts through loyalty programs, without trading your privacy to use them; or simple things like returning to an online store that you've shopped at before without seeing these so-called special offers based on your browsing or purchase history.

That tension, I think, is probably going to be core to our continued conversation around that need for organizations to collect information.

On algorithmic collusion, recent reporting from The New Statesman elaborated on how the prices of most goods now are set not by humans, but by automatic processes that are set to maximize their owners' gains. There's this big academic conversation about the line between what's exploitative and what's efficient. Our evolving competition law may soon begin to consider algorithmic collusion, which may also garner more attention through advancements on Bill C-27 as it prompts the consideration of the effects of algorithmic conduct in the public interest.

Again, very briefly on the AI side, I agree with others that the AI commissioner should be more empowered, perhaps as an officer of Parliament. That office needs to be properly funded in order to do this work. Note that the provinces may want to create their own AI frameworks as a way to solve for some of the ambiguities or intersections. We should embrace and celebrate that in a Canadian federalist context.

In the spirit of being constructive and forward-looking, I wonder if we should be taking some more inspiration from very familiar policy areas of labelling and manufacturing just to achieve more disclosure. For the layer of transparency that's proposed for those who manage a general-purpose AI system, we should ensure that individuals can identify AI-generated content. This is also critical for the output of any algorithmically generated system.

We probably need either a nutrition-facts-label approach to privacy or a registration requirement. I would hope we can avoid onerous audits, or spurring strange secondary economies that sprout up and maybe aren't as necessary as they seem. Having to register novel AI systems with ISED, so the government can keep tabs on potential harms and the justifications for them entering the Canadian market, would be helpful.

I will wrap up in just a moment.

Of course, we should all be thinking about how this legislation will work with other policy levers, especially in light of the recently struck Digital Regulators Forum.

Much of my work is rooted in competition issues, such as market fairness and freedom. I note that in the U.S., the FTC held a technology summit on artificial intelligence just last week. There it was noted, “we see a tech ecosystem that has concentrated...power in the hands of a small number of firms, while entrenching a business model built on constant surveillance of consumers.” Canadian policy people need to be more honest about connecting these dots. We should be doing more to question that core business model and ensure we're not enshrining it, going forward.

I have a final, very quick worry about productivity, which I know everyone is thinking about.

I have a concern that our productivity crisis in Canada will fundamentally act, whether implicitly or explicitly, to discourage regulation of any kind over the phantom or zombie risk of impeding this elusive thing we call innovation. I want to remind all of you that smart regulation clarifies markets and levels the playing field.

Thanks for having me.

The Chair Liberal Joël Lightbound

That's wonderful.

Pursuant to the order of reference of Monday, April 24, 2023, the committee is resuming consideration of Bill C-27, an act to enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act and to make consequential and related amendments to other acts.

I would now like to welcome the witnesses. We have Vass Bednar, executive director of the master of public policy in digital society program at McMaster University, who is joining us by videoconference. Also, from the University of Toronto, we have Andrew Clement, professor emeritus, Faculty of Information, who is also joining us by videoconference, as well as Nicolas Papernot, assistant professor and CIFAR AI chair.

Thank you to all three of you for being here.

I want to apologize for our being late to the committee. We had about 10 votes in the House of Commons. Because of the delay, we have until about 7 p.m. for the testimonies and the questions.

Without further ado, we will start with you, Madam Bednar, for five minutes.