Evidence of meeting #92 for Industry, Science and Technology in the 44th Parliament, 1st Session. (The original version is on Parliament’s site, as are the minutes.)

A recording is available from Parliament.

Also speaking

Colin Bennett  Professor, Political Science, University of Victoria, As an Individual
Michael Geist  Professor of Law, Canada Research Chair in Internet and e-Commerce Law, Faculty of Law, University of Ottawa, As an Individual
Vivek Krishnamurthy  Associate Professor of Law, University of Colorado Law School, As an Individual
Brenda McPhail  Acting Executive Director, Master of Public Policy in Digital Society Program, McMaster University, As an Individual
Teresa Scassa  Canada Research Chair in Information Law and Policy, Faculty of Law, Common Law Section, University of Ottawa, As an Individual

5:30 p.m.

Associate Professor of Law, University of Colorado Law School, As an Individual

Vivek Krishnamurthy

Undoubtedly, the Bill C-27 package of amendments is an improvement over the status quo. I think all of us would acknowledge that. However, I'm not sure we should settle for a C+ bill. I think Canadians deserve A+ privacy protection, and amendments to this bill can get us there.

I think that is the spirit in which all of us who are scholars and activists, and who think about privacy and take a big-picture approach to this, think of it. We understand that private information does need to be collected and processed, but that needs to be done in a way that respects what is a very fundamental human right, one that is becoming more important in our digital age over time, as technology becomes more invasive, and it is important to get that right.

Political oxygen is scarce. Again, you have many priorities, many things to legislate, so if this is our shot, we have to do our very best. I think everyone here today has provided lots of really good ideas, and if this committee would embrace them and enact some amendments, this could be a much better bill.

5:30 p.m.

Liberal

Ryan Turnbull Liberal Whitby, ON

You're speaking to the importance of this process, which I think we're all very committed to. I was just trying to get a sense of what the repercussions are going to be if this bill stalls any longer, but I think you've answered that well enough.

Maybe I'll just say quickly a couple of things based on some testimony we've heard.

The AIDA portion of this bill went through over 300 consultations, so I think there has been a lot of consultation that has happened. I'll just put that out there.

In relation to some of the comments made about political parties, the government has been carefully studying the Chief Electoral Officer's recommendation on strengthening privacy measures. We will have more to say about that in due course. Just to let you know, for information purposes, I think that's helpful to reassure folks.

Maybe I'll leave it there.

5:30 p.m.

Liberal

The Chair Liberal Joël Lightbound

Thank you, MP Turnbull.

Mr. Lemire, you have the floor.

5:30 p.m.

Bloc

Sébastien Lemire Bloc Abitibi—Témiscamingue, QC

Thank you, Mr. Chair.

We've had some meaningful discussions. However, I'm wondering whether this committee will really have the will or capacity to move quickly and help get this bill passed. To be honest, I even wonder if the government really wants to get Bill C‑27 passed at this point, in the context of this legislature.

Having said that, I feel like asking you some questions, Dr. McPhail.

In your publications, you put a great deal of emphasis on developing responsible artificial intelligence and transparent governance of artificial intelligence.

Because the rapid development of technology poses significant data security and privacy challenges, what are your thoughts on establishing a technological sandbox that would isolate emerging technologies in a separate environment, with a view to assessing their compliance with privacy standards before they are made available to the public?

5:35 p.m.

Acting Executive Director, Master of Public Policy in Digital Society Program, McMaster University, As an Individual

Dr. Brenda McPhail

Thank you very much for that question.

There are a range of ways in which the AIDA could be improved to facilitate truly responsible AI governance.

The idea of a sandbox is an interesting model. One of the big problems with the ways artificial intelligence tools are currently developed is that they are created and tossed out into the wild, and then we see what happens. A sandbox, to the extent that it would be able to mitigate that kind of risk, is a really interesting concept. I would note that there's absolutely nothing in the current bill that actually fosters the creation of such a sandbox at this time.

Of course, that's only one of the many gaping holes in the truly skeletal structure of AIDA, which, even with some of the potential amendments that have now been floated, still has a long way to go in order to be the effective bill that people across Canada deserve. That is why many of us have actually called for a reset of that bill, rather than a revision. It is so fundamentally flawed that it's hard to imagine how you're going to make it something that truly respects Canadians' rights and truly reassures Canadians that artificial intelligence is a tool that can be used across all sectors of our economy as it is envisioned to be used, safely and with respect for their privacy rights.

We've heard a little bit about reticence risks. I would counter reticence risks, which is a business concept, with social licence. Members of the public are deeply concerned that their information is being collected and used in ways that they don't understand, often without their consent—something the CPPA would facilitate—and for purposes that they disagree with fundamentally.

If we allow our AI act to take that data, collect it in that way, and leverage it in tools that, again, members of the public find difficult to trust, we are not fostering a vibrant innovation economy in Canada; we are fostering a distrustful society that will not believe their government has their back, and we will be genuinely reticent as citizens to use these technologies in a way that we would like to, if we take seriously the idea that this technology has immense potential to improve our world, used responsibly.

5:35 p.m.

Bloc

Sébastien Lemire Bloc Abitibi—Témiscamingue, QC

Thank you very much.

My time is up.

5:35 p.m.

Liberal

The Chair Liberal Joël Lightbound

Thank you very much, Mr. Lemire.

I have no more speakers, so I'll yield the floor to myself.

My first question is regarding proposed section 35, which MP Perkins brought up. I'll ask Professor Geist, but if anyone wants to volunteer comments.... Proposed section 35 provides that “An organization may disclose an individual's personal information without their knowledge or consent”. Proposed paragraph 35(c) is, to me, the oversight of that provision, which requires the organization to inform the commissioner of the disclosure before the information is disclosed.

Is this a sufficient form of oversight for that sort of transmission of personal information?

5:35 p.m.

Professor of Law, Canada Research Chair in Internet and e-Commerce Law, Faculty of Law, University of Ottawa, As an Individual

Dr. Michael Geist

I think there's reason for concern based on the load that the commissioner is facing, realistically. We've had a situation for the last number of years with the Privacy Commissioner where findings sometimes run between 12 and 18 months because there simply haven't been the resources to deal with issues.

If this gets interpreted aggressively by organizations to say, “Well, it's impractical to obtain consent, so let's just run off and ask the commissioner”, I think there's a concern that there will be delays, which businesses aren't going to be happy with, but there's also the question of whether this is going to get the sort of study that's necessary.

I'd be curious to hear what some of my colleagues on the panel think.

5:40 p.m.

Liberal

The Chair Liberal Joël Lightbound

Thank you.

Professor Krishnamurthy, go ahead.

5:40 p.m.

Associate Professor of Law, University of Colorado Law School, As an Individual

Vivek Krishnamurthy

I think it's positive that the disclosure does need to be made to the commissioner when this happens. However, I think we need to question how much oversight the commissioner can exercise.

Again, this is a point where an interaction between the CPPA and AIDA.... What kinds of research for statistical purposes might organizations undertake? It might well be to train AI models. We now know from research that when data is used to train an AI, AI systems can retain that data. I believe the technical name is “imprinting”. If you use ChatGPT and you use it hard, you can probably get an AI system to spit that data back, and that's a big problem.

The mere disclosure to the commissioner that this is happening, without some kind of analysis of what the risks are.... This is why I keep coming back to this data protection impact assessment point. It's so important that this weighing occurs. What are the relevant risks?

We want to incentivize research, of course, but let's remember that Cambridge Analytica was a research organization. It was a research disclosure of data that was the beginning of that terrible privacy scandal. That safeguard alone is not enough. I think we need more.

I'm very interested in research. I'm at an academic institution. I want to promote that. It's a very pro-social thing, and there is a real anti-commons problem with trying to get individual consent every time, but the safeguards need to be stronger.

5:40 p.m.

Liberal

The Chair Liberal Joël Lightbound

The second question is regarding proposed section 39, “Socially beneficial purposes”. My understanding is that the information needs to be de-identified and it can be shared with the government or government institutions, with health care institutions or post-secondary educational institutions. I gather it would be, for instance, at the request of a government department or organization that the information, which is de-identified for sure, is shared.

I'm just wondering if there should be the same kind of oversight here that we see in proposed section 35, where at least an organization that is transmitting this information to the government is required to disclose that. Otherwise, we can envision ways in which that would happen where the public wouldn't even be aware that this information.... I get that it's for socially beneficial purposes, but that can be interpreted quite broadly.

Professor Geist, go ahead.

5:40 p.m.

Professor of Law, Canada Research Chair in Internet and e-Commerce Law, Faculty of Law, University of Ottawa, As an Individual

Dr. Michael Geist

Yes, I think that's right.

Professor Scassa has done a lot of study on that and she is probably ideally suited to comment.

I think you answered the question yourself. It does indeed open the door to very broad interpretation, which I think is a source of concern.

5:40 p.m.

Liberal

The Chair Liberal Joël Lightbound

Professor Scassa, go ahead.

October 26th, 2023 / 5:40 p.m.

Canada Research Chair in Information Law and Policy, Faculty of Law, Common Law Section, University of Ottawa, As an Individual

Dr. Teresa Scassa

I completely agree that there are problems with this provision.

The one I flagged in my opening comments is that it refers to de-identified information. This was taken verbatim from Bill C-11 and put into Bill C-27, but in Bill C-11, “de-identified” was given the definition that is commonly given to anonymized information.

Under Bill C-27, we have two different categories: de-identified and anonymized. Anonymized is the more protected. Now you have a provision that allows de-identified information—which is not anonymized, just de-identified—to be shared, so there has actually been a weakening of proposed section 39 in Bill C-27 from Bill C-11, which shouldn't be the case.

In addition to that, there are no guardrails, as you mentioned, for transparency or for other protections where information is shared for socially beneficial purposes. The ETHI committee held hearings about the PHAC use of mobility data, which is an example of this kind of sharing for socially beneficial purposes.

The purposes may be socially beneficial. They may be justifiable and it may be something we want to do, but unless there is a level of transparency and the potential for some oversight, there isn't going to be trust. I think we risk recreating the same sort of situation where people suddenly discover that their information has been shared with a public sector organization for particular purposes that have been deemed by somebody to be socially beneficial and those people don't know. They haven't been given an option to learn more about it, they haven't been able to opt out and the Privacy Commissioner hasn't been notified or given any opportunity to review.

I think we have to be really careful with proposed section 39, partly because I think it's been transplanted without appropriate changes and partly because it doesn't have the guardrails that are required for that provision.

5:45 p.m.

Liberal

The Chair Liberal Joël Lightbound

Again, feel free to send any suggestions on how we could strengthen proposed section 39. You have mentioned some already with regard to anonymized instead of de-identified, and also if the organization needs to inform the commissioner. If you have any specific wording—and this goes for all witnesses—you can send through the clerk potential amendments you deem worthwhile.

I have one last question.

Are there jurisdictions where a constellation of publicly available data on an individual becomes sensitive information—personal information that ought to be protected? With the systems that are capable of gathering publicly available information, when you gather enough data points on an individual, it can become sensitive.

Are there jurisdictions that do that? Are there examples you can point to?

5:45 p.m.

Canada Research Chair in Information Law and Policy, Faculty of Law, Common Law Section, University of Ottawa, As an Individual

Dr. Teresa Scassa

One example would be Clearview AI scraping publicly available information. They scrape photographs of individuals from publicly accessible websites. Then they create biometric face prints of those individuals in order to create their facial recognition database. We all accept.... It's broadly understood in the privacy community that biometric data is sensitive information.

A picture of you receiving an award at a public event or giving a speech as a member of Parliament, for example, is turned into biometric data that populates a facial recognition database. I think it goes from being an innocent photograph to sensitive biometric data very quickly. That's just one example.

5:45 p.m.

Liberal

The Chair Liberal Joël Lightbound

Professor Krishnamurthy, go ahead.

5:45 p.m.

Associate Professor of Law, University of Colorado Law School, As an Individual

Vivek Krishnamurthy

Mr. Chair, directly responding to your question, article 9 of the European Union's GDPR is very instructive in this regard, because it says that the processing of personal data that reveals sensitive characteristics is subject to the heightened protections for sensitive data. So, the data may be anodyne at the beginning, but as I tell my privacy law students, what I buy at Loblaws can be very revealing of my health, if I'm buying lots of potato chips and not a lot of fresh fruit. That can become health information through processing and through the correlation of my data on my shopping habits with large-scale statistical studies.

I think that's a very important point that you've raised about protecting what the processing reveals.

5:45 p.m.

Liberal

The Chair Liberal Joël Lightbound

Thank you very much.

I know, Mr. Perkins, you wanted to ask one last question. Be very brief, please.

5:45 p.m.

Conservative

Rick Perkins Conservative South Shore—St. Margarets, NS

Perhaps I could ask Dr. McPhail.

In your opening testimony, you mentioned that we need to dig deeper in AIDA than high-impact systems, which the minister has defined, and now redefined. At what level, then...? Could you expand on that a little more beyond high-impact systems? One, is “high-impact” now properly defined? Two, what other level or examples of things do you think need to be incorporated or captured by the AIDA portion of this bill?

5:45 p.m.

Acting Executive Director, Master of Public Policy in Digital Society Program, McMaster University, As an Individual

Dr. Brenda McPhail

Thank you for that question.

I think it's been mentioned already today, but I will repeat it. Merely looking at high-impact systems, however they are defined—and right now that's unclear in the current amendments—is not enough to fully mitigate the risks of AI, particularly the collective risks to communities and groups. That kind of risk, furthermore, is not covered under the current definition of “harm” in the bill, which is focused strictly on individuals and quantitative forms of harm. In looking at how you can restructure that better, you could look at the European act, but I would refer you to something closer to home.

The Toronto Police Service recently did an extensive public consultation and developed rules on artificial intelligence for use by their service. They adopted a tiered approach, where there are some systems that are deemed low-risk, but require an assessment in order to determine that they are so. There are some systems that are deemed medium-risk, and there are different sets of precautions and safeguards in order to ensure that those risks are appropriately analyzed and mitigated prior to the technology being used. There are also systems that are considered high-risk, which have the highest level of protections and safeguards. Then there are systems that are considered beyond the pale. Some systems are considered so risky that it is not appropriate to use them in a country governed by the Charter of Rights and Freedoms and where democratic freedoms are valued.

That's a much more tiered and nuanced approach requiring assessments at different stages, and then proportionate safeguards and restrictions, depending on the level of risk, can be much more finely tuned and much more responsive to the genuine concerns that members of the public have about ways that AI systems can be used for them or against them in violation of their beliefs and values.

5:50 p.m.

Conservative

Rick Perkins Conservative South Shore—St. Margarets, NS

Could we get the library to provide us with a little outline of the Toronto Police Service policy that was just referred to?

5:50 p.m.

Liberal

The Chair Liberal Joël Lightbound

I'm sure that can be arranged. It's available online.

Thank you very much.

This concludes our meeting.

Thanks to all our witnesses. It's been a very informative discussion.

Thanks in particular to Professor Bennett. We've seen the day rise in Australia through the blinds behind you. Thanks for waking up so early to meet with us. It's much appreciated.

I'd like to thank the analysts, interpreters, clerk and support staff.

The meeting is adjourned.