Evidence of meeting #99 for Industry, Science and Technology in the 44th Parliament, 1st Session. (The original version is on Parliament's site, as are the minutes.)

A recording is available from Parliament.

Also speaking

Barry Sookman  Senior Counsel, McCarthy Tétrault, As an Individual
Elizabeth Denham  Chief Strategy Officer, Information Accountability Foundation
Kristen Thomasen  Assistant Professor, Peter A. Allard School of Law, University of British Columbia, Women's Legal Education and Action Fund
Geoffrey Cape  Chief Executive Officer, R-Hauz, As an Individual
Andrée-Lise Méthot  Founder and managing partner, Cycle Capital, As an Individual

4:05 p.m.

Bloc

Sébastien Lemire Bloc Abitibi—Témiscamingue, QC

I'm being told no.

4:05 p.m.

Liberal

The Chair Liberal Joël Lightbound

Just give me one second.

I'm consulting the technical staff.

Can you try to say a few words again, Ms. Denham?

4:05 p.m.

Chief Strategy Officer, Information Accountability Foundation

Elizabeth Denham

Yes. I was discussing the various legal bases for processing personal information.

4:05 p.m.

Liberal

The Chair Liberal Joël Lightbound

Just give me one second, colleagues.

4:10 p.m.

Senior Counsel, McCarthy Tétrault, As an Individual

Barry Sookman

Mr. Chair, while we're waiting, can I respond to the question from Mr. Williams on legitimate interest?

4:10 p.m.

Liberal

The Chair Liberal Joël Lightbound

I think that would be a good use of our committee's time, Mr. Sookman.

You can go ahead.

4:10 p.m.

Senior Counsel, McCarthy Tétrault, As an Individual

Barry Sookman

Thank you very much for that question. I have concerns about legitimate interest as well.

As to whether it is weaker or stronger than the GDPR, it is substantially weaker than the GDPR in its wording. The GDPR, like the CPPA proposal, has a balancing.... It has no “get out of jail free” card. There must be a balance and the use must be appropriate, and it can't adversely affect individuals.

However, unlike the GDPR, it doesn't apply to disclosures, which is very significant. For example, search engines or AI companies are either not going to be able to operate in this country without an amendment, or they'll operate and they won't be subject to the law. This section needs to be fixed.

The other thing is that it has additional tests, which aren't in the GDPR, about a reasonable person having to expect the collection for such an activity. That is tethered to an old technology—a known technology—rather than a technologically neutral approach. We want something that's going to work in the future, and this language doesn't work.

I think the problem is actually the opposite of what you were asking for. We need to fix it so that it works properly but still protects the public.

4:10 p.m.

Liberal

The Chair Liberal Joël Lightbound

Thank you very much, Mr. Sookman and Mr. Williams.

I will now turn it over to MP Gaheer.

November 28th, 2023 / 4:10 p.m.

Liberal

Iqwinder Gaheer Liberal Mississauga—Malton, ON

Thank you, Chair.

Thank you to all the witnesses for making time for the committee and appearing in person or virtually. My questions are for Ms. Denham.

Your expertise is very comparative, so I want to ask those sorts of questions. You mentioned the global south in your opening testimony, and that you're familiar with its privacy regimes. Is there anything we can learn from how privacy regimes are designed and enforced in the global south?

4:10 p.m.

Chief Strategy Officer, Information Accountability Foundation

Elizabeth Denham

My experience with countries in the global south is.... If you take Africa, for example, and South America, many of those nations are not copying and pasting the GDPR, but they are certainly taking inspiration from the GDPR.

I think the GDPR is a global standard. There are different cultures and different legal systems, which means that it's not going to look the same as the GDPR, but certainly the principles are the same: respect for privacy and data protection as fundamental rights.

The Middle East is different again. I was recently in Dubai. I met with the minister for AI and talked to many businesses in the Middle East about their approach to data protection and transborder data flows. It certainly is innovation-forward. It's taking a different track than what I see happening in the global south. India's law is also being amended, reformed and passed. It's a very exciting time to see what's happening in that country.

4:10 p.m.

Liberal

Iqwinder Gaheer Liberal Mississauga—Malton, ON

Thank you.

You also mentioned enforcement powers and sanctions. What sorts of enforcement powers and sanctions would you like to see?

4:10 p.m.

Chief Strategy Officer, Information Accountability Foundation

Elizabeth Denham

I think fines are important. There's a lot of focus in this legislation on providing the commissioner with fining power, but fines are so 20th century. Although you need to have fines available for the worst cases and for the bad actors that continue to contravene the legislation, what's more impactful, and where I see more modern tools, are things like orders to stop processing or the disgorgement of data. This would mean that a company can no longer use its algorithmic models and has to destroy data that was fed into a model, because it was collected illegally.

We see the Federal Trade Commission in the U.S. using these kinds of powers. You see that happening in Europe and the U.K., because at the end of the day what is most impactful for significant contraventions of the law is what actually impacts a business model, and not a fine that is just the cost of doing business.

I think there's a rethink that's needed to give the commissioner modern tools, and I look at fines as a last resort.

4:15 p.m.

Liberal

Iqwinder Gaheer Liberal Mississauga—Malton, ON

In meetings past, we have heard folks speak on both sides of the issue of the personal information and data protection tribunal and the Privacy Commissioner directly imposing fines. You spoke about that briefly in your opening testimony.

If you don't like that structure, what kind of a structure would you rather have?

4:15 p.m.

Chief Strategy Officer, Information Accountability Foundation

Elizabeth Denham

As I said, in the U.K. there is a tribunal system, and administrative tribunals are used across many areas of law. In the U.K., when it comes to freedom of information, data protection, cybersecurity or electronic marketing—all of those areas the commissioner is responsible for—the commissioner's decisions, fines and sanctions are subject to review by the first-tier tribunal. Then the case can go on appeal to the second-tier tribunal, and then on to the court.

That sounds like it could be a very lengthy process. However, I think that over time the tribunals have become expert tribunals, so you're not taking a very specialist policy area like data protection and having a general court look at the issues.

I think there are pluses and minuses. Obviously, the government wants to make sure there is administrative fairness and an appeal system, because otherwise you have too much power concentrated in a government body.

You could understand why there should be appeals, but my argument is that, if there is going to be a tribunal, then the standard of review needs to be reasonableness, as it is in British Columbia. Also, the members of the tribunal need to be independent and appointed that way. Finally, I think it's really important that the tribunals not conduct an inquiry from scratch, because I think that undermines the commissioner's expertise.

If there is no tribunal, then I agree with the Privacy Commissioner's recommendation that an appeal go directly to the Federal Court of Appeal, rather than starting at the tribunal and then going to Federal Court.

4:15 p.m.

Senior Counsel, McCarthy Tétrault, As an Individual

Barry Sookman

The structure of the adjudication of issues, where they start and how they get appealed, is a very important question. We have to recognize that, with the importance of privacy and the amount at stake for the public and organizations, getting it right is really important.

The powers of the Privacy Commissioner are very broad. There are really only anemic procedural protections, yet the Privacy Commissioner makes the determination as to whether there's a breach and can make a recommendation as to whether there are penalties.

The appeal goes to the tribunal, but the tribunal only has the power to order a reversal if it's an error of law. It has no power to do anything if it's a mixed question of fact and law or a question of fact. When you look at the way the CPPA operates, there are almost invariably going to be a huge number of questions of mixed fact and law. This means that, effectively, there are almost no procedural protections before the Privacy Commissioner, and any decisions are virtually unappealable. Also, the constitution of the tribunal doesn't require a judge, which is required in other contexts. I do think there needs to be procedural protection in front of the Privacy Commissioner.

As for the appeal, you heard Ms. Denham talk about at least a reasonableness standard. That doesn't even exist before the tribunal. That would only exist on a further judicial review, but it's almost impossible to get there.

I do think the structure really needs to be changed to provide at least a modicum of procedural protection.

4:15 p.m.

Liberal

Iqwinder Gaheer Liberal Mississauga—Malton, ON

Thank you.

4:15 p.m.

Liberal

The Chair Liberal Joël Lightbound

Thank you very much, Mr. Sookman.

Mr. Lemire, the floor is yours.

4:15 p.m.

Bloc

Sébastien Lemire Bloc Abitibi—Témiscamingue, QC

Thank you, Mr. Chair.

Mr. Sookman, you expressed your thoughts in an article on artificial intelligence that was published three days ago, if I'm not mistaken, in which you analyzed the Artificial Intelligence (Regulation) Bill introduced in the U.K. House of Lords.

In that analysis, you noted that the British bill could provide a roadmap as to how to improve the Artificial Intelligence and Data Act.

You outlined issues such as parliamentary sovereignty, the creation of an artificial intelligence authority and regulatory principles.

What lessons can we, as legislators, learn from the British bill, and what specific aspects would you recommend should be incorporated in the framework of the Artificial Intelligence and Data Act to strengthen AI regulations in Canada?

4:20 p.m.

Senior Counsel, McCarthy Tétrault, As an Individual

Barry Sookman

Thank you for your question.

I think that is a fantastic question.

The private member's bill that you referred to is also an attempt to have an agile mechanism for the regulation of AI. There are two things about that bill that I think are fundamentally important. If this committee is going to make recommendations for improvements, two things can be taken from that bill, which I would strongly recommend.

First, the Secretary of State has the power to enact regulations in the first instance. The regulations only become effective when they're passed by a resolution in both Houses of Parliament. Regulations that don't go through that procedure can be annulled by either House of Parliament.

In my mind, that is a way to give effect to the important principle of parliamentary sovereignty. That way, the government can go ahead with its regulatory analysis, but at the end of the day, it's still regulated by a mechanism of Parliament. I think that's a brilliant approach to solving the problem of parliamentary sovereignty.

The second thing about the draft AI bill that I think is really important is that it contains the principles for guiding the legislation. If you look at AIDA, it doesn't define “high impact” and it tells you nothing about what the principles should be that would guide regulation. What this bill does is provide a good first look at what could be an approach.

It starts off with saying the principles should be fundamental ethical principles for responsible AI. They should deliver safety, security, robustness and those sorts of things. Secondly, any business that's going to engage in AI—in this case, I would say a high-impact system—should test it thoroughly and should be transparent about its testing. Thirdly, it has to comply with equalities legislation—that is, anti-discrimination law, which is extremely important.

Lastly—and this is completely missing in our bill—a consideration has to be that the benefits of regulation outweigh the burdens, and that the burdens of any restriction don't prejudice, but rather enhance, the international competitiveness of the U.K.

I think having a set of principles like that to guide the regulatory framework would be very useful. When I saw that bill, I thought, this is genius. This is from a private member.

4:20 p.m.

Bloc

Sébastien Lemire Bloc Abitibi—Témiscamingue, QC

Ms. Denham, I want to let you react, since we're talking about a British bill. Do you also think the bill is genius?

4:20 p.m.

Chief Strategy Officer, Information Accountability Foundation

Elizabeth Denham

That's a private member's bill, as my colleague has just said, so I haven't studied that.

The government's direction right now is not to regulate AI, but rather to give existing regulators the powers that they need, which includes running sandboxes to beta-test generative AI and machine learning applications.

The government has said that yes, there's a set of principles. The U.K. government of the day has said that it's not going to regulate a specific technology at this point. Instead, it's creating an AI institute to look over all digital regulation and to encourage digital regulators to work together. That is in contrast with the EU, which is in trilogue right now, as you know. The EU has an AI act that is comprehensive and reads like a product safety statute.

There may be a brilliant private member's bill, but the government's preference is really to support existing regulators when it comes to this new technology. It's taking a “wait and see” approach.

4:25 p.m.

Bloc

Sébastien Lemire Bloc Abitibi—Témiscamingue, QC

Ms. Denham, I was in England a few weeks ago to take part in the AI Safety Summit.

What lessons has England learned from holding that summit at Bletchley Park a few weeks ago?

4:25 p.m.

Chief Strategy Officer, Information Accountability Foundation

Elizabeth Denham

Yes, the global summit that was sponsored by the Prime Minister in the U.K. was really important, because the outcome was an agreement among leading nations on the approach to AI. There was a declaration that came out of that meeting, and it included China. I think that is quite astonishing to a lot of people.

You see, ultimately, I think what we're going to need is an international solution. In the same way that we regulate civil aviation around the world, I can see that we need a world where we are regulating AI, at least at the level of high-level principles and standards. I think that's what was achieved at the British meeting a few weeks ago, and I think dialogue is a good thing.

4:25 p.m.

Bloc

Sébastien Lemire Bloc Abitibi—Témiscamingue, QC

Yes.

And by the way, China surprised me, especially by raising the issue of human rights in artificial intelligence.

Thank you, Mr. Chair.

4:25 p.m.

Liberal

The Chair Liberal Joël Lightbound

Thank you, Mr. Lemire.

Go ahead, Mr. Masse.