Thank you, Mr. Chair and members of the committee, for having me here today on the important matter of reform of our privacy legislation and Bill C-27.
I'm a partner at the law firm of Clark Wilson in Vancouver, and I'm called to the bar in Ontario and British Columbia. I've been practising in the area of privacy law since approximately 2000. I've advised both private sector organizations in a variety of businesses and public bodies such as universities in the public sector. I've also acted as legal counsel before the Information and Privacy Commissioner for British Columbia in investigations, inquiries and judicial review.
With the limited amount of time we have, I'll be confining my remarks to the proposed consumer privacy protection act, specifically the legitimate interest exception, anonymization and de-identification, and the separate review tribunal. Hopefully, I'll have a bit of time to get into the artificial intelligence and data act, AIDA, with respect to high-impact systems.
I will of course be happy to discuss other areas of Bill C-27 and questions you may have. Also, subsequent to my presentation, I'll provide a detailed brief on the areas discussed today.
Starting with the proposed consumer privacy protection act and the legitimate interest exception, it's important to point out that arguably the leading privacy law jurisdiction, the EU with its GDPR, provides for a stand-alone right of an organization to collect, use and disclose personal information if it has a legitimate interest. Accordingly, if Canada is to have an exception to consent based on an organization's legitimate interest, it's important to look, in detail, at how that will operate and the implications of that exception.
First, to reiterate, the draft provisions in proposed subsection 18(3) are an exception to the consent requirements and not a stand-alone right for an organization as set out in the GDPR.
What's the significance of this? From a purely statutory-interpretation point of view, courts generally do not interpret a stand-alone right as restrictively as they interpret an exception to an obligation. In short, the legitimate interest exception is very likely to be narrower in scope than the GDPR's legitimate interest provisions.
A stand-alone right may be a means to circumvent or generally undercut the consent structure of our privacy legislation, which again is at the heart of our legislation and is a part of the inculcated privacy protection culture in Canada. Maintaining the legitimate interest provisions as an exception to the consent structure, on balance, is preferable to a stand-alone right.
Second, the exception applies only to the collection or use of personal information and is not permitted for the disclosure of personal information to third parties. In my view, barring the exception from applying to disclosures that are in the legitimate interest of an organization doesn't make sense. While I favour, in the first instance, an exception over a stand-alone right, I think the exception must be expanded to cover disclosure as well.
The provisions in proposed subsection 18(3) expressly state that the legitimate interest of an organization must outweigh "any potential adverse effect". This is effectively a high standard of protection. However, the usefulness of this exception for organizations is significantly reduced if it is limited to collection and use alone. For example, a business may have a legitimate interest in collecting and using personal information to measure and improve the use of its services or to develop a product. Proposed subsection 18(3), however, prevents that organization from actually disclosing that personal information to a business partner or third-party vendor in order to give effect to its legitimate purpose.
Finally, other jurisdictions allow an organization's legitimate interest to apply to the disclosure of personal information as well as to its collection and use. Specifically, that is true not only of the EU GDPR but also of the Singapore law. I note that when you set those pieces of legislation side by side, Singapore also frames legitimate interest as an exception, and Singapore has developed case law on how it operates.
I think it would give a lot of comfort to this committee to examine some of the case law from Singapore, as well as some of the more current case law under the GDPR regime. It gives a sense of what "legitimate interest" actually means, which I can appreciate may at first glance seem rather vague and could be seen as a giant loophole. My submission, however, is that this is not the case.
The next item I'd like to talk about is anonymization and de-identification. Clarity on this issue has been sought for some time, and it's reassuring that the change from Bill C-11 to Bill C-27 introduced the concept of anonymization as separate from de-identification. However, technologically and practically speaking, the absolute standard set out in the definition of anonymization can never be reached, so why put it in the act in the first place? There has been some commentary on this, and I generally support the recommendation to build a reasonableness standard into that definition: whether it is reasonable to expect in the circumstances that an individual can be identified from the data. If it is, the data is not anonymized and remains caught by the legislation and its specific requirements for the use and disclosure of such data.
In terms of use and disclosure, I also note that proposed section 21 confines the use of de-identified information to internal use by the organization. This could significantly limit the utility of the provision, again compared to what our trading partners have, because modern research and development relies on data pooling and extensive partnerships in the use of data. If use is strictly for internal purposes, we could lose this important tool in a modern technological economy that depends on it. Therefore, I recommend that the internal-use restriction be deleted as well.
Also, proposed section 39 would limit the disclosure of de-identified personal information to, effectively, public sector organizations—this is very restrictive—and consideration should be given to disclosing to private sector organizations that are really fundamentally important to our modern economy and research and development.
In terms of the separate review tribunal, I know that the Privacy Commissioner has been hostile to this and I recognize that the Privacy Commissioner performs an invaluable role in investigating and pursuing compliance with our privacy legislation. However, given the enormous administrative monetary penalties that may be awarded against organizations—the higher of 3% of gross annual revenue or $10 million—for breaches, clear appeal rights to an expert tribunal and review of penalties are required to ensure due process and natural justice standards and, frankly, to develop the law in this area.
It is also noteworthy that judicial oversight of the tribunal's decisions would proceed according to the Supreme Court of Canada's test in Vavilov, which confines the courts to review on the reasonableness standard, a very deferential and limited review. It has been suggested that proceedings should be prevented from dragging on indefinitely, and judicial review on that standard would limit them. I know there was one suggestion that judicial review should proceed directly from the tribunal to the Federal Court of Appeal. I think that's fine if you want to expedite matters and address that concern; it's probably the right approach, but I do like the structure of a separate review tribunal.
Finally, on artificial intelligence and the high-impact systems, I think the focus of that, in terms of identifying the concept of high-impact systems, is sound in structure and potentially generally aligned with our trade partners in the EU. However, the concept cannot be left to further development and definition in regulations. This concept needs extensive consultation and parliamentary review.
It is recommended that the government produce a functional analysis of a high-impact system from qualitative and quantitative impact, risk assessment, transparency and safeguards perspectives.
It's further recommended that distinctions be made between artificial intelligence research and development for research purposes only and artificial intelligence that is implemented into the public domain for commercial or other purposes. What I would not want to see come out of our AIDA legislation is that we have some sort of brake on research in artificial intelligence.
We and our allies are vulnerable to other international actors at the forefront of research in artificial intelligence. We should not have anything in our legislation that acts as a brake on that research. However, we should protect the public when artificial intelligence products are rolled out into the public domain, and ensure that we are protected. I think that distinction is missing in the discussion, and it's very important that we advance it.
Those are my submissions.
Thank you.