Evidence of meeting #96 for Industry, Science and Technology in the 44th Parliament, 1st Session.

Also speaking

Alexander Jarvie  Partner, Davies Ward Phillips & Vineberg LLP, As an Individual
François Joli-Cœur  Partner, Borden Ladner Gervais, As an Individual
Scott Lamb  Partner, Clark Wilson LLP, As an Individual
Carole Piovesan  Co-founder and Partner, INQ Law, As an Individual
David Young  Principal, Privacy and Regulatory Law Counsel, David Young Law, As an Individual

3:35 p.m.

Liberal

The Chair Liberal Joël Lightbound

Good afternoon, everyone.

I call this meeting to order.

Welcome to meeting number 96 of the House of Commons Standing Committee on Industry and Technology.

Today’s meeting is taking place in a hybrid format, pursuant to the Standing Orders.

Pursuant to the order of reference of Monday, April 24, 2023, the committee is resuming consideration of Bill C‑27, an act to enact the consumer privacy protection act, the personal information and data protection tribunal act and the artificial intelligence and data act and to make consequential and related amendments to other acts.

I’d like to welcome our witnesses today: Alexander Max Jarvie, partner, Davies Ward Phillips and Vineberg LLP; François Joli-Coeur, partner, Borden Ladner Gervais; Scott Lamb, partner, Clark Wilson LLP; Carole Piovesan, co‑founder and partner, INQ Law; and David Young, principal, privacy and regulatory counsel, David Young Law.

Welcome, everyone, and thank you again for joining us this afternoon.

Without further ado, I yield the floor to Mr. Jarvie for five minutes.

3:35 p.m.

Alexander Jarvie Partner, Davies Ward Phillips & Vineberg LLP, As an Individual

Thank you very much.

Good afternoon, and thank you for the invitation to share my thoughts on Bill C-27 with the committee.

I am a partner at Davies Ward Phillips & Vineberg LLP, practising as a lawyer in the firm’s technology group. I am appearing today in a personal capacity, presenting my own views.

Recent years have seen significant technological developments related to machine learning. In part, these have come to pass because of another relatively recent development, namely, the vast amount of information, including personal information, that is now generated by our activities and circulates in our economy and our society. Together, these developments hold great promise for future innovation, but they also carry significant risks, such as risks to privacy, risks of bias or discrimination and risks relating to other harms.

I am, therefore, encouraged that a bill has been introduced that seeks to address these risks while supporting innovation. I will begin by making some remarks on the proposed consumer privacy protection act, CPPA, and by suggesting changes to certain provisions of the bill that could better support innovation involving machine learning while introducing important guardrails. I will then share some observations in relation to the proposed artificial intelligence and data act, AIDA.

In my view, improvements could be made to the CPPA consent exception framework that would facilitate personal information exchange among, and collection by, private sector actors that wish to undertake socially beneficial projects, study or research. In particular, proposed sections 35, 39 and, in part, 51 could be combined and generalized so as to permit private sector actors to disclose and exchange personal information, or to collect information from the public Internet, for those purposes, provided that certain conditions are fulfilled.

Those conditions could include conducting a privacy impact assessment, entering into an agreement containing relevant contractual assurances where applicable, and providing notice to the commissioner prior to the disclosure or collection. De-identified data is sufficient for training machine learning models in many cases, and de-identification is a requirement in proposed section 39 as currently drafted but not in proposed section 35. I would suggest only that whether the information should be de-identified in a given case be a factor in the proposed privacy impact assessment.

Suitably crafted, these changes could provide material but appropriately circumscribed support for section 21 of the proposed CPPA, which permits the use of personal information that has been de-identified for internal research and analysis purposes, and for proposed subsection 18(3), which permits use of personal information in its native form for legitimate interests, provided that an assessment has been undertaken.

With respect to the AIDA, I begin with the definition of the term “artificial intelligence system”. This definition is of fundamental importance, given that the entire scope of the act depends upon it. The current definition risks being overbroad. The minister’s letter proposes to provide better interoperability by introducing a definition that seeks to align with a definition used by the OECD, but the text provided differs from the OECD formulation and introduces the word “inference” in a suboptimal way. We also do not have the final wording.

There are also different definitions to consider in other instruments, including the European Union's proposed AI act, the U.S. President's recent executive order and the NIST AI risk management framework, among others. Some of these do converge on the OECD's definition, but in each case the wording differs.

I would recommend to the committee—or, at least, I would urge the committee—when it begins clause-by-clause review, to make a survey of existing definitions to determine the state of the art and to ensure that the definition ultimately chosen indeed maximizes interoperability yet also remains extensible to account for new techniques or technologies.

I would also recommend that the purpose clause of the AIDA, as well as other relevant provisions, be amended to include harms to groups and communities, as these may also be adversely affected by the decisions, recommendations or predictions of AI systems.

Finally, there should be an independent artificial intelligence and data commissioner. The companion document to the AIDA notes that the model whereby the regulator would be a departmental official was chosen in consideration of a number of factors, including the objectives of the regulatory scheme. However, since the scope of what is being left to regulation is so extensive, the creation of an independent regulator to administer and enforce the AIDA will counterbalance skepticism concerning the relative lack of parliamentary oversight and thereby help to instill trust in the overall regulatory scheme.

I will submit a brief for consideration by the committee, elaborating on the matters raised here. Machine learning technologies are poised to play a significant role in future innovation. Through legislation, we can achieve meaningful support for this potential while providing effective protections for individuals, groups and society.

Thank you for your attention. I welcome your questions.

3:40 p.m.

Liberal

The Chair Liberal Joël Lightbound

Thank you very much.

We will now hear from Mr. Joli‑Coeur.

3:40 p.m.

François Joli-Cœur Partner, Borden Ladner Gervais, As an Individual

Good afternoon.

Thank you for inviting me. I'm pleased to have the opportunity to share my thoughts on Bill C‑27 with the committee.

I am a partner at Borden Ladner Gervais, BLG, and a member of the privacy practice group. I am also the national lead of BLG's artificial intelligence, AI, group. I am appearing today as an individual.

My remarks will focus on the AI provisions in the bill, in both the artificial intelligence and data act, or AIDA, and the consumer privacy protection act, or CPPA.

To start, I want to say how important it is to modernize the federal privacy regime, something Quebec, the European Union and some of the world's largest economies have done recently.

I commend the government's commitment to AI legislation. In spite of the criticisms against AIDA, the bill has the advantage of putting forward a flexible approach. Nevertheless, some key concepts should be provided for in the act, instead of in the regulations. Furthermore, it is imperative that the government consult extensively on the regulations that flow from AIDA.

The first point I want to make has to do with anonymized data in the CPPA. The use of anonymized personal information is an important building block for AI models, and excluding anonymized information from coverage by the act will allow Canadian businesses to keep innovating.

The definition of anonymization should, however, be more flexible and include a reasonableness standard, as other individuals and groups have recommended. That would bring the definition in line with those in other national and international laws, including recent amendments to Quebec's regime.

The CPPA should explicitly state that organizations can use an individual's personal information without their consent to anonymize the information, as is the case for de‑identified information.

Lastly, AIDA includes references to anonymized data, but the term isn't defined in that act. The two acts should be consistent. AIDA, for instance, could refer to the definition of “anonymize” set out in the CPPA.

The second point I want to make concerns another concept in the CPPA, automated decisions. Like most modern privacy laws, the proposed act includes provisions on automated decisions. On request by an individual, organizations would be required to provide an explanation of the organization’s use of any automated decision system to make predictions, recommendations or decisions about individuals that could have a significant impact on them.

An automated decision system is defined as any technology that assists or replaces the judgment of human decision-makers. The definition should be amended to capture only systems with no human intervention at all. That would save organizations the heavy burden of having to identify all of their decision support systems and introduce processes to explain how those systems work, even when the final decision is made by a human. Such a change would increase the act's interoperability with Quebec's regime and the European Union's, which is based on the general data protection regulation.

Turning to AIDA, I want to draw your attention to high-impact systems. The act should include a definition of those systems. Since most of the obligations set out in the act flow from that designation, it's not appropriate for the term to be wholly defined in the regulations. The definition should include a contextual factor, specifically, the risk of harm caused by the system. For example, it could take into account whether the system posed a risk of harm to health and safety or a risk of an adverse impact on fundamental rights. That factor could be combined with the classes of systems that would be considered high-impact systems, as set out in the act.

Including a list of classes of systems that would de facto be considered high-impact systems, as the minister proposed in his letter, could capture too many systems, including those that pose moderate risk.

My last point concerns general purpose AI systems. In his letter, the minister proposed specific obligations for generative AI and other such systems. While generative AI has become wildly popular in the past year, regulating a specific type of AI system could render the act obsolete sooner.

Not all general purpose AI systems pose the same degree of risk, so it would be more appropriate to regulate them as high-impact systems when they meet the criteria to be designated as such.

Thank you very much. I would be happy to answer any questions you have.

3:45 p.m.

Liberal

The Chair Liberal Joël Lightbound

Thank you, Mr. Joli‑Coeur.

We will now hear from Mr. Lamb.

November 9th, 2023 / 3:45 p.m.

Scott Lamb Partner, Clark Wilson LLP, As an Individual

Thank you, Mr. Chair and members of the committee, for having me here today on the important matter of reform of our privacy legislation and Bill C-27.

I'm a partner at the law firm of Clark Wilson in Vancouver, and I'm called to the bar in Ontario and British Columbia. I've been practising in the area of privacy law since approximately 2000. I've advised private sector organizations in a variety of businesses, as well as public sector bodies such as universities. I've also acted as legal counsel before the Information and Privacy Commissioner for British Columbia in investigations, inquiries and judicial review.

With the limited amount of time we have, I'll be confining my remarks to the proposed consumer privacy protection act, specifically the legitimate interest exception, anonymization and de-identification, and the separate review tribunal. Hopefully, I'll have a bit of time to get into the artificial intelligence and data act, AIDA, with respect to high-impact systems.

I will of course be happy to discuss other areas of Bill C-27 and questions you may have. Also, subsequent to my presentation, I'll provide a detailed brief on the areas discussed today.

Starting with the proposed consumer privacy protection act and the legitimate interest exception, it's important to point out that arguably the leading privacy law jurisdiction, the EU with its GDPR, provides for a stand-alone right of an organization to collect, use and disclose personal information if it has a legitimate interest. Accordingly, if Canada is to have an exception to consent based on an organization's legitimate interest, it's important to look, in detail, at how that will operate and the implications of that exception.

First, to reiterate, the draft provisions in proposed subsection 18(3) are an exception to the consent requirements and not a stand-alone right for an organization as set out in the GDPR.

What's the significance of this? From a purely statutory interpretation point of view, a stand-alone right is generally not interpreted by the courts as restrictively as an exception to an obligation is. In short, the legitimate interest exception is very likely to be narrower in scope than the GDPR's legitimate interest provisions.

A stand-alone right may be a means to circumvent or generally undercut the consent structure of our privacy legislation, which again is at the heart of our legislation and is a part of the inculcated privacy protection culture in Canada. Maintaining the legitimate interest provisions as an exception to the consent structure, on balance, is preferable to a stand-alone right.

Second, the exception applies only to the collection or use of personal information and is not permitted for the disclosure of personal information to third parties. In my view, prohibiting the exception from applying to disclosures of personal information that are in the legitimate interest of an organization doesn't make sense. While I'm in favour, in the first instance, of an exception over a stand-alone right, I think you have to expand it to cover disclosure as well.

The provisions in proposed subsection 18(3) expressly state that the legitimate interest of an organization “outweighs any potential adverse effect”. This is effectively a high standard of protection. The usefulness of this exception, if limited to only collection and use, is significant for organizations. For example, a business may have a legitimate interest in collection and use of personal information to measure and improve the use of its services or to develop a product. However, proposed subsection 18(3) prevents that organization from actually disclosing that personal information to a business partner or third party vendor to give effect to its legitimate purpose.

Finally, the point is that other jurisdictions allow the legitimate interest of an organization to apply to disclosure of personal information as well as to collection and use. Specifically, again, that is not only the EU GDPR but also the Singapore law. I note that when you look at those pieces of legislation side by side, Singapore also has it as an exception, and Singapore has some case law that has since developed.

I think it would give a lot of comfort to this committee if it were to examine some of the case law from Singapore, as well as some of the more current case law from the GDPR regime. It does give some sense of what this means as a legitimate interest, which I can appreciate at first instance may seem rather vague and could be seen as a giant loophole. However, my submission is that's not the case.

The next item I'd like to talk about is anonymization and de-identification. Clarity on this issue has been sought for some time, and it's reassuring that the change from Bill C-11 to Bill C-27 introduced the concept of anonymization as separate from de-identification. However, technologically and practically speaking, you're never going to reach the standard set out in the definition of anonymization, so why put it in the act in the first place? There's been some commentary on this, and I am generally in support of the recommendation that you should insert into that definition a standard of whether it is reasonable to expect in the circumstances that an individual can be identified after the de-identification process. If it is, the data is not anonymized and is still caught by the legislation and the specific requirements for the use and disclosure of such data.

In terms of use and disclosure, I also note that proposed section 21 confines the use of de-identified information to internal use by the organization. The utility of this provision could be markedly limited by that restriction, again compared to what our trading partners have, because modern research and development relies on data pooling and extensive partnerships in the use of data. If it's strictly for internal purposes, we could lose this important tool in a modern technological economy. Therefore, I recommend that the internal-use restriction be deleted as well.

Also, proposed section 39 would limit the disclosure of de-identified personal information to, effectively, public sector organizations. This is very restrictive, and consideration should be given to permitting disclosure to private sector organizations, which are fundamentally important to our modern economy and to research and development.

In terms of the separate review tribunal, I know that the Privacy Commissioner has been hostile to this and I recognize that the Privacy Commissioner performs an invaluable role in investigating and pursuing compliance with our privacy legislation. However, given the enormous administrative monetary penalties that may be awarded against organizations—the higher of 3% of gross annual revenue or $10 million—for breaches, clear appeal rights to an expert tribunal and review of penalties are required to ensure due process and natural justice standards and, frankly, to develop the law in this area.

It is also noteworthy that judicial oversight of the tribunal's decisions would be according to the Supreme Court of Canada's test in Vavilov, which limits review to the reasonableness standard, a very deferential and limited form of review. It's been suggested that you try to keep these proceedings from going on forever and ever; with judicial review on that standard, they would be limited. I know there was one suggestion that the ability to seek judicial review should jump right from the tribunal to the Federal Court of Appeal. I think that's fine if you want to expedite this and meet that concern. I think that's probably right, but I do like the structure of a separate review tribunal.

Finally, on artificial intelligence and the high-impact systems, I think the focus of that, in terms of identifying the concept of high-impact systems, is sound in structure and potentially generally aligned with our trade partners in the EU. However, the concept cannot be left to further development and definition in regulations. This concept needs extensive consultation and parliamentary review.

It is recommended that the government produce a functional analysis of a high-impact system from qualitative and quantitative impact, risk assessment, transparency and safeguards perspectives.

It's further recommended that distinctions be made between artificial intelligence research and development for research purposes only and artificial intelligence that is implemented into the public domain for commercial or other purposes. What I would not want to see come out of our AIDA legislation is that we have some sort of brake on research in artificial intelligence.

We are vulnerable, and our allies are vulnerable, to other international actors that are at the forefront of research in artificial intelligence. We should not have anything in our legislation that puts a brake on that research. However, we should protect the public when artificial intelligence products are rolled out to the public domain, and ensure that we are protected. I think that's a distinction that is missing in the discussion, and it's very important that we advance it.

Those are my submissions.

Thank you.

3:55 p.m.

Liberal

The Chair Liberal Joël Lightbound

Thank you, Mr. Lamb.

We now go to Ms. Piovesan.

3:55 p.m.

Carole Piovesan Co-founder and Partner, INQ Law, As an Individual

Thank you, Mr. Chair and members of the committee, for the opportunity to speak to Bill C-27.

I am the managing partner of INQ Law, where my practice focuses on data- and AI-related laws. I’m here in my personal capacity and the views presented are my own.

Every day, we are hearing new stories about the promise and perils of artificial intelligence. AI systems are complex computer programs that process large amounts of data, including large amounts of personal information, for training and output purposes. Those outputs can be very valuable.

There is a possibility that AI can help cure diseases, improve agricultural yields or even help us become more productive, so we can each play to our best talents. That promise is very real, but as you've already heard on this panel, that promise does not come without risk. Complex as these systems are, they are not perfect and they are not neutral. They are being developed at such a speed that those on the front lines of development are some of the loudest voices calling for some regulation.

I appreciate that this committee has heard quite a bit of testimony over the last several weeks. While the testimonies you've heard have certainly run the gamut of opinions, there seem to be at least two points of consistency.

The first is that Canada’s federal private sector privacy law should be updated to reflect the increasing demand for personal information and changes to how that information is collected and processed for commercial purposes. In short, it’s time to modernize PIPEDA.

Second, our laws governing data and AI should strive for interoperability or harmonization across key jurisdictions. Harmonization helps Canadians understand and know how to assert their rights, and it helps Canadian organizations compete more effectively within the global economy.

The committee has also heard opposing views about Bill C-27. The remainder of my submissions will focus on five main points to do with parts 1 and 3 of the bill.

Part 1, which proposes the consumer privacy protection act, or CPPA, proposes some important changes to the governance of personal information in Canada. My submissions focus on the legitimate interest consent exception and the definition of anonymized data, much of which you've already heard on this panel.

First, the new exceptions to consent in the bill are welcome. Not only do they provide flexibility for organizations to use personal data to advance legitimate and beneficial activities, but they also align Canada's law more closely with those of some of our key allies and, within Canada, with Quebec's Law 25 more specifically. Critically, they do so in a manner that is reasonably measured. I agree with earlier testimony you've heard in this committee that the application of the legitimate interest exception in the CPPA should align more closely with other notable privacy laws, namely Europe's GDPR.

Second, anonymized data can be essential for research, development and innovation purposes. I support the recommendations put to this committee by the Canadian Anonymization Network with respect to the drafting of the definition of “anonymize”. I also agree with Mr. Lamb's submissions as to the insertion of existing notions of reasonable foreseeability or a serious risk of reidentification.

As for part 3 of the bill, the proposed artificial intelligence and data act, I support the flexible approach it adopts. I caution, however, that the current draft contains some major holes, and there is a need to plug those holes as soon as possible. As well, any future regulation would need to be subject to considered consultation, as contemplated in the companion document to AIDA.

Our understanding of how to effectively promote the promise of AI and prevent harm associated with its use is evolving with the technology itself. Meaningful regulation will need to benefit from consultation with a broad range of stakeholders, including, importantly, the AI community.

Second, Minister Champagne, in the letter he submitted to this committee, proposes to amend AIDA to define “high impact” by reference to classes of systems. The definition of high impact is the most striking omission in the current draft bill.

The use of a classification approach aligns with the EU's draft artificial intelligence act and supports a risk-based approach to AI governance, which I support. When the definition is ultimately incorporated into the draft, it should parallel the language in the companion document and provide criteria on what “high impact” means, with reference to the classifications as illustrated.

Finally, I support the proposed amendments to align AIDA more closely with OECD guidance on responsible AI. Namely, this is the definition in proposed section 2 of AIDA, which has also been adopted by the National Institute of Standards and Technology in the United States in its AI risk management framework.

To the extent that Canada can harmonize with other key jurisdictions where it makes sense for us to do so, we should.

I look forward to the committee's questions, as well as to the comments from my fellow witnesses.

4:05 p.m.

Liberal

The Chair Liberal Joël Lightbound

Thank you.

Finally, Mr. Young, the floor is yours.

4:05 p.m.

David Young Principal, Privacy and Regulatory Law Counsel, David Young Law, As an Individual

Thank you for the invitation to appear before this committee for its important review of Bill C-27.

This bill includes significant proposed amendments to Canada's privacy laws at the same time as it introduces a proposed oversight regime for artificial intelligence. The AIDA component warrants focused study by the committee. Certainly, as you've heard from my co-witnesses, there's a lot to consider there. However, I will restrict my comments to the privacy components.

I am a privacy and regulatory lawyer. My practice over the past 25 years has included advising private sector organizations—both for-profit and non-profit—as well as government and Crown agencies. I address all relevant areas, including individual privacy, employee privacy and health privacy.

In these introductory comments, I will focus on one impactful area of the bill, which you have heard some comments about already: de-identified and anonymized information. I'm hoping to provide some clarification as well as my thoughts on how the proposed provisions can be improved.

The proposed treatment of such information in Bill C-27 is critically important. Firstly, it clarifies a category of information that, while not being fully identifiable and therefore available for specific uses without consent, is still deemed appropriate for protection under the law. Secondly, it provides for a category of anonymized information that can be used more broadly for research purposes, innovation and policy development.

The first category, de-identified information, is governed by all of the law's privacy protections, subject to certain specific exceptions. Conversely, the second category, anonymized information, is stated to not be subject to the law. However, as I will mention, this stipulation—that it's not subject to the law—is not the end of the story. The law will and should continue to provide oversight over anonymized information. This is a point that is sometimes missed. I certainly heard it raised as a concern in previous comments. I think it's very important to understand that, however we define the term—and we've heard a number of comments here—it will continue to be subject to the law.

I have a number of recommendations for improvement.

First, with respect to de-identified information, the definition should be amended to stipulate appropriate processes to ensure no person can be directly identified from the information. Additionally, proposed section 74 of the CPPA, which addresses technical and administrative protections, should be amended to include, as an additional criterion, the risk of re-identification.

Secondly, the definition of anonymized information should be amended to make more explicit the processes required for anonymization. With its Law 25, Quebec got it right in this area. I recommend aligning with Quebec's approach, which stipulates that the generally accepted best practices for anonymization should be those set out in appropriate regulations. Such regulations should include transparency, risks of re-identification, accountability and guardrails for downstream uses. The Quebec law also recognizes that it is not possible, from a practical perspective, to say that anonymized information cannot be re-identified. The CPPA provision should reflect the same approach. Additionally, there should be a requirement for the organization performing any anonymization process to conduct a re-identification risk analysis. This is a proposed requirement in Quebec's regulations governing anonymized information.

Thirdly, the applicability of the law's protections for de-identified information is a bit of a complicated area. I can certainly go into it in more detail during questions, if you like. Currently, the CPPA provides that de-identified information is personal information, except for certain provisions, where it will not be considered personal information.

This is the wrong approach. Instead, as recommended by the OPC, a simple statement should be made that all de-identified personal information remains personal information. Also, the list of exceptions in the bill is confusing. To make it simpler and clearer, many of the exceptions should be omitted entirely—they are not needed. I can explain that in more detail if you wish.

My final comment is to address, as I mentioned a couple of minutes ago, a concern voiced by some stakeholders that the statute's anonymization regime should be made expressly subject to oversight by the Privacy Commissioner. I know you've heard that from at least one witness and maybe others here. In my view, such a provision is not required. The commissioner will have oversight over an organization's compliance with the anonymization rules, whatever they are. Also, and very importantly, if anonymized information does become identifiable—and that's this whole risk of re-identification—all of the statute's protective provisions will again apply with full vigour, and the commissioner will have oversight. So there are, in fact, two routes whereby the commissioner will or may continue to have oversight.

In sum, my recommendations are as follows.

First, the definition of “de-identified” information should be made more rigorous, including addressing the risk of reidentification. Secondly, the definition of anonymized information should be amended to make more explicit the processes required to achieve anonymization, and these should be set out in regulations, including a requirement for risk assessment. Finally, the regime for applicability of the CPPA's protections for de-identified information should be made clearer, in particular, stating that all such information remains personal information.

I will be happy to elaborate and answer any questions you have regarding these comments or any other provisions of the bill.

4:10 p.m.

Liberal

The Chair Liberal Joël Lightbound

Thank you very much, Mr. Young.

To start the discussion, I will turn it over to Mr. Perkins for about six minutes.

4:10 p.m.

Conservative

Rick Perkins Conservative South Shore—St. Margarets, NS

Thank you very much, Mr. Chair.

Thank you, witnesses—those were great presentations. This has been a fascinating bill and discussion so far, so thank you very much. There were lots of good, new approaches, too, and reinforcement of others we've heard....

My first question will be for Mr. Lamb. You won't be surprised to learn, if you've been following, that my belief is that this bill actually puts the interests of large corporations ahead of individuals' right to privacy.

Starting in proposed section 5, even if it's changed to “fundamental right”, it still has the word “and”, which puts it on par with an organization's right to use the data.

In my view, “fundamental right” is further watered down by proposed subsection 15(7), which allows implied consent, which I think is a thing that should have gone out with the dodo bird. I don't think there should ever be implied consent.

Then there's proposed subsection 18(3), which you referenced, which says it has restrictions. When I read it, though, it says I can use somebody's data “without their knowledge” even if it harms them. You have to understand that I'm a marketer—I've been elected for only two years. I liked to push the envelope on data for the large corporations I did marketing for. I know a bit about how data is used in the retail space.

I'd like to ask you if you really believe that putting a fundamental right and purpose on par with everything else doesn't still skew the bill totally towards large corporate exceptions, allowing businesses basically to do what marketers want to do, which is to treat everything as an exception and use individual data to sell more product.

4:15 p.m.

Partner, Clark Wilson LLP, As an Individual

Scott Lamb

I understand your concern, and my sympathies are with the interpretation of this legislation as consumer protection legislation. I think that's the status of the current law to date, so courts and potentially a tribunal will look at the facts of any case from that perspective. I think that should give you some reassurance. If there's a need to be more expressive about that and to bolster it, I would be in sympathy with that and with your concerns.

With proposed subsection 18(3), my suggestion was to start looking at some of the case law coming out of the GDPR jurisdiction and out of Singapore. You get an idea of how that's meant to work.

One of the potential things you can investigate is that, if you're going to get rid of implied consent, you're going to have to have a very robust “legitimate interest” exception and—

4:15 p.m.

Conservative

Rick Perkins Conservative South Shore—St. Margarets, NS

Okay, I have only a little time. Could you table or share with the committee later some of that case law that you think would be helpful to us? That would be great.

4:15 p.m.

Partner, Clark Wilson LLP, As an Individual

Scott Lamb

Sure, I'd be happy to do that.

4:15 p.m.

Conservative

Rick Perkins Conservative South Shore—St. Margarets, NS

You're the first one I'm aware of, actually, who's raised section 21 of the proposed act, so thank you for doing that.

When I read it, it's yet another clause that says that an organization can use somebody's information without their knowledge or consent for internal research.

From a positive perspective, I used data for internal research all the time, and it says the data has to be de-identified first. I was often looking at individual customer data through loyalty programs, or, if I had a coalition program like Air Miles, I had a lot of collected data on individuals to do that analysis.

Would this inhibit a company from doing what they've done in the past—a retailer, for example—in analyzing coalition loyalty rewards programs or their own in-house loyalty programs?

4:15 p.m.

Partner, Clark Wilson LLP, As an Individual

Scott Lamb

Potentially it could be interpreted that way, and that's why I think there's an added reason to delete it.

4:15 p.m.

Conservative

Rick Perkins Conservative South Shore—St. Margarets, NS

Thank you.

I think Ms. Piovesan and Mr. Joli-Coeur both raised the issue of high-impact systems. It's something I've been struggling with too, and I know we'll probably get to more of it when we get to a deeper part of the AIDA study.

I'm struggling with what a high-impact system is. Why is only the high-impact system being covered by legislation and not other levels, and what are those other levels, Ms. Piovesan?

4:15 p.m.

Co-founder and Partner, INQ Law, As an Individual

Carole Piovesan

On high-impact systems, if you look at AI law in different parts of the world, you see that governance applies to systems that are likely to pose a significant risk of harm.

There are lots of AI systems that we use every day, like our GPS, that do not pose a significant risk of harm to an individual, so they should not be subject to the kind of governance oversight that we're talking about in AIDA.

The importance of “high-impact” is that it is a trigger to determine when governance is required, as stipulated in the law.

4:15 p.m.

Conservative

Rick Perkins Conservative South Shore—St. Margarets, NS

What would be high risk?

4:15 p.m.

Co-founder and Partner, INQ Law, As an Individual

Carole Piovesan

High risk is defined in the EU in terms of the criticality of the risk or, in the letter from Minister Champagne, by reference to elements of harm or bias—unjustified, unlawful bias—that can cause harm at scale to individuals or property. In the case of the amendment, you have a number of different classes of systems or classes of use cases.

I want to be clear. We're not talking about a high-impact system; we are talking about a system used in a high-impact context.

4:15 p.m.

Conservative

Rick Perkins Conservative South Shore—St. Margarets, NS

That's a nice distinction.

Mr. Joli-Coeur.

4:15 p.m.

François Joli-Coeur

I agree with Carole's comments, essentially.

4:15 p.m.

Conservative

Rick Perkins Conservative South Shore—St. Margarets, NS

Okay.

Mr. Young, I have a quick question. I believe you mentioned that there were parts of proposed section 15 on consent that actually could or should be removed. Is that correct?

4:20 p.m.

Principal, Privacy and Regulatory Law Counsel, David Young Law, As an Individual

David Young

No. It is very confusing. There's a provision, I remember, in a proposed section of the bill that says basically that de-identified information is personal information except for these sections, and it's a laundry list of about 20. That's what I'm talking about.