Evidence of meeting #102 for Industry, Science and Technology in the 44th Parliament, 1st Session, held on December 7, 2023. (The original version is on Parliament's site, as are the minutes.)

A recording is available from Parliament.

Witnesses

Ana Brandusescu, AI Governance Researcher, McGill University, As an Individual
Alexandre Shee, Industry Expert and Incoming Co-Chair, Future of Work, Global Partnership on Artificial Intelligence, As an Individual
Bianca Wylie, Partner, Digital Public
Ashley Casovan, Managing Director, AI Governance Center, International Association of Privacy Professionals

5:15 p.m.

Liberal

The Chair Liberal Joël Lightbound

Thank you, Ms. Brandusescu. I'll have to cut you off here. I was just interested in more information on that. To my knowledge, most of the biggest players in AI remain in the private sector, but thank you for the examples you provided.

We have bells ringing, colleagues, which means we do need unanimous consent to continue. I'm looking around the room to see if we have it, given that we're heading into about 35 hours of voting, thanks to our friends to my left, but definitely to my right politically.

Do I have unanimous consent to continue for 10 more minutes?

5:15 p.m.

Some hon. members

Agreed.

5:15 p.m.

Liberal

The Chair Liberal Joël Lightbound

I'll now yield the floor to MP Gaheer.

5:15 p.m.

Liberal

Iqwinder Gaheer Liberal Mississauga—Malton, ON

Thank you, Chair, and thank you to all the witnesses for their testimony before the committee.

My first question is for Ms. Casovan.

We know that the minister has provided recent amendments to the committee to clarify the definition and scope of “high-impact systems” by outlining seven distinct classes of such systems. Do you think that's a good way of proceeding? Does it provide sufficient clarity, or do you think there would be a better model?

5:15 p.m.

Managing Director, AI Governance Center, International Association of Privacy Professionals

Ashley Casovan

As we've discussed a lot today, I do think it's a good start to understand that AI is not one thing. Breaking it down into different contexts and uses is important.

I think, though, that it's a limited list. I get that the concept is to continue to add to it and to have a process. I do think that maintaining an inventory of such classes could be difficult, as I mentioned earlier, recognizing that there are different degrees of risk that could exist within those classes. We should try to identify a way, similar to what we did with the directive, to break that down into what we are actually trying to achieve from each of the mitigation measures for those classes of systems.

5:15 p.m.

Liberal

Iqwinder Gaheer Liberal Mississauga—Malton, ON

Do you have a proposed system that would be better?

5:15 p.m.

Managing Director, AI Governance Center, International Association of Privacy Professionals

Ashley Casovan

As I mentioned, I think it could be a matrix combining the contexts in which systems are used with a standard high-risk assessment.

Again, I would draw your attention to appendix C of the Directive on Automated Decision-Making, where impact is broken down into four different levels, as we called them, with different compliance requirements tied to each.

I also think that the key word there is a “standard” for an impact assessment, to understand what that risk would actually be.

5:20 p.m.

Liberal

Iqwinder Gaheer Liberal Mississauga—Malton, ON

Sorry, I didn't mean to put you on the spot.

5:20 p.m.

Managing Director, AI Governance Center, International Association of Privacy Professionals

Ashley Casovan

No, no. I have lots of opinions about this.

5:20 p.m.

Liberal

Iqwinder Gaheer Liberal Mississauga—Malton, ON

This is generally for everyone, and maybe Mr. Shee can answer this one. We also know that the government-proposed amendments to AIDA include a series of tasks to be completed before a general-purpose or high-impact AI system can be made commercially available, including an assessment of adverse effects and a test of the effectiveness of measures to mitigate the risk of harm or biased results.

What do you think about these new obligations that the government wants to impose on people who want to make AI systems available?

5:20 p.m.

Managing Director, AI Governance Center, International Association of Privacy Professionals

Ashley Casovan

Maybe I'll answer really quickly.

5:20 p.m.

Managing Director, AI Governance Center, International Association of Privacy Professionals

Ashley Casovan

I think that what's really important is that there is a governance process put in place before those systems are developed. As I mentioned, that's part of this assurance or audit function that would exist.

I also think, as I mentioned in my opening statements, that having an accountable person, something like a chief AI officer, would help work through that process in a consistent and therefore meaningful way.

5:20 p.m.

Liberal

Iqwinder Gaheer Liberal Mississauga—Malton, ON

Mr. Shee, do you want to add anything?

5:20 p.m.

Industry Expert and Incoming Co-Chair, Future of Work, Global Partnership on Artificial Intelligence, As an Individual

Alexandre Shee

I would just add that I think it's a good starting place, but especially in proposed paragraph 11(1)(a), respecting the use of data, I think there would be advantages to including a disclosure mechanism to understand how the data was labelled and how it was used. That would have an incredibly positive impact, both on the creation of the models and on their implementation.

Again, it's a good starting place, but I would supplement the proposed amendments, specifically in that paragraph, with an explicit disclosure requirement around data labelling and annotation.

5:20 p.m.

Partner, Digital Public

Bianca Wylie

We can't know how these things will be used. We can write systems all day where we say, “This is where we think it will be used. This is what we think the risks and harms could be.” It's a tool. You can't tell anybody how to use a tool. If they use it a certain way that's not in your categorization, you have a problem.

This model, to me.... I'm going to keep bringing us back to deployment. We can write beautiful laws with intricacy all day long, but you can't control the use of these products in operations and deployment. I don't want us to talk as though how we think we should organize it is the most important thing. The most important thing is what's going to happen in reality.

5:20 p.m.

Liberal

Iqwinder Gaheer Liberal Mississauga—Malton, ON

Ms. Casovan, do you think there should be a compliance audit before the AI systems are placed on the market?

5:20 p.m.

Managing Director, AI Governance Center, International Association of Privacy Professionals

Ashley Casovan

Yes, I do, and I think, too, that there should be a certain specification. That's why a standard would be good. The analogue could be a fair trade symbol or LEED, as I mentioned previously. Meeting certain standards before a system can go on the market should be a precondition for high-risk systems.

5:20 p.m.

Liberal

Iqwinder Gaheer Liberal Mississauga—Malton, ON

Thank you, Chair.

Thank you to the witnesses.

5:20 p.m.

Liberal

The Chair Liberal Joël Lightbound

Thank you, Mr. Gaheer.

Mr. Lemire, you have the floor.

5:20 p.m.

Bloc

Sébastien Lemire Bloc Abitibi—Témiscamingue, QC

Thank you, Mr. Chair.

Ms. Brandusescu, last year, when you appeared before the Standing Committee on Access to Information, Privacy and Ethics, you talked about the procurement of artificial intelligence systems by the public sector. You were saying that facial recognition technologies and other artificial intelligence technologies highlight the need for a discussion on private sector participation in public governance.

Can you elaborate on what you mean by private sector involvement in public governance when it comes to facial recognition technologies and other artificial intelligence systems?

5:20 p.m.

AI Governance Researcher, McGill University, As an Individual

Ana Brandusescu

Facial recognition technology, as we know, is hopefully the low-hanging fruit of dangerous AI. The word “harm” seems out of place here; I will call it dangerous, because that's what it is. Yet we need to have the imagination to ban certain technologies, and facial recognition technologies should be banned.

The public sector can make that choice because it is responsible to the public in the end. The private sector, as it stands, is responsible to the shareholder and to the business model of making more money. This is how capitalism works. This is what we're seeing.

That's not the job of the government. Again, when I say that AIDA should be taken out and reflected upon as covering both the public and private sectors, that is exactly what I'm thinking about. I'm thinking about facial recognition technology used in law enforcement, in national security, at IRCC and in immigration. Now it could be used, maybe, in Service Canada, or maybe at the CRA, the way the IRS wanted to use facial recognition for doing taxes. Again, these technologies aren't domain-bound. Just as Palantir went from the military to health, FRT, facial recognition technology, works the same way. The public sector needs to be involved and to be publicly accountable to its people.

I really am coming back to Bianca's points about democracy. Participation is messy, but we need to participate in a way that allows for dissent, discussion, non-compliance and consensus across the board, because it is important to make sure that these technologies are no longer used; they are too dangerous. We saw what happened with Clearview AI. That is a privacy case, but it is also a mass surveillance case, besides the obvious dangers and harms it has done to so many marginalized groups.

5:25 p.m.

Liberal

The Chair Liberal Joël Lightbound

Thank you.

5:25 p.m.

Bloc

Sébastien Lemire Bloc Abitibi—Témiscamingue, QC

We see all the abuses that are happening in Ireland and China, among others.

Thank you.

5:25 p.m.

Liberal

The Chair Liberal Joël Lightbound

Thank you, Mr. Lemire.

Normally, Mr. Masse would now have the floor, but I think he had to leave to vote. That will conclude the last round of questions, and since we have little time left to get to the House, that will end today's meeting.

Mr. Masse is back. I thought we lost him.

The floor is yours, Brian.