Evidence of meeting #102 for Industry, Science and Technology in the 44th Parliament, 1st Session. (The original version is on Parliament’s site, as are the minutes.)

A recording is available from Parliament.

Witnesses

Ana Brandusescu, AI Governance Researcher, McGill University, As an Individual
Alexandre Shee, Industry Expert and Incoming Co-Chair, Future of Work, Global Partnership on Artificial Intelligence, As an Individual
Bianca Wylie, Partner, Digital Public
Ashley Casovan, Managing Director, AI Governance Center, International Association of Privacy Professionals

4:35 p.m.

Managing Director, AI Governance Center, International Association of Privacy Professionals

Ashley Casovan

You're not the only one. It's something that I think is quite complicated.

One note that came up in the amendments related to the role of auditing within the commissioner's office. Something I'd like to see is more proactive use of auditing to ensure compliance, as opposed to the commissioner having the power to require an audit only when something sufficiently problematic surfaces. It would be good to see that done proactively, much like a financial audit, which companies are required to undergo every year.

In this case, one thing we need to understand better is the scope of an AI system and, based on that, what the harms are and how you comply. What does “good” look like, again established through a public process? From there, you would require third party audits, much as we have professional auditors do the same thing in financial services.

4:40 p.m.

Liberal

Francesco Sorbara Liberal Vaughan—Woodbridge, ON

As someone who has spent many years in financial services, domestically and globally, I know we depend on audited financial statements to do our job. Hopefully 99 times out of 100 they're accurate.

Are we looking at the same type of world as we go forward?

4:40 p.m.

Managing Director, AI Governance Center, International Association of Privacy Professionals

Ashley Casovan

If I have my way, yes, I would love that.

However, there's one addition that I'd like to note here. One of the things that people talk about—as you would know—is that financial audits are lengthy and very expensive. However, there are a lot of tools we can use to expedite the evaluation of these systems now. Recognizing that they're changing so rapidly, it's really important for us to use and leverage those tools so that those audits are not only expedited, but also accurate at the time of that use and also for the purposes of ongoing monitoring.

4:40 p.m.

Liberal

The Chair Liberal Joël Lightbound

You're out of time, Mr. Sorbara.

I'll just yield myself a little bit of time for a follow-up question to Mr. Shee.

I'm just trying to understand what's the scale of the issue you're hoping for Parliament to address when it comes to the exploitative labour used in the AI supply chain.

I'm thinking out loud. Just today, I watched the Google DeepMind Gemini prototype that came out. It seems to me like maybe that ship has sailed and AI has already gotten to the point where you would think it's not that labour-intensive.

I'm just trying to understand what the scale is.

4:40 p.m.

Industry Expert and Incoming Co-Chair, Future of Work, Global Partnership on Artificial Intelligence, As an Individual

Alexandre Shee

It's a great question.

What I would say is that, first, while AI systems look very impressive to consumers, millions of people on a daily basis are working behind the scenes to make them work. That spans from our interactions with social media to automated decision-making systems.

The scope of what I'm asking for is very simple. By having a disclosure mechanism in the law that requires companies to give information about the data they've collected and how they collected it, we essentially ensure that millions of people around the world who are annotating daily and interacting with AI systems in the back end are protected from exploitative processes and procedures.

Right now, nothing is in place in any jurisdiction in the world. Right now, this is a wild west and nobody is protecting these people. These are youth in Pakistan and women in Kenya. These are vulnerable Canadians who are trying to have a side job to make a bit more money. In all of these circumstances, they have nothing protecting them.

4:40 p.m.

Liberal

The Chair Liberal Joël Lightbound

Thank you.

Mr. Lemire, you have the floor.

4:40 p.m.

Bloc

Sébastien Lemire Bloc Abitibi—Témiscamingue, QC

Thank you, Mr. Chair.

Mr. Shee, I'd like to continue with you.

Yesterday, CBC aired a report on artificial intelligence in the service of war. It referred to the Israeli army's use of artificial intelligence and the Gospel software to better target facilities attributed to Hamas. According to experts, however, this technology increases the number of civilian casualties, because there is less human involvement behind each decision made before going on the offensive.

In that case, is artificial intelligence going off the rails? How can we regulate these practices to save human lives?

4:40 p.m.

Industry Expert and Incoming Co-Chair, Future of Work, Global Partnership on Artificial Intelligence, As an Individual

Alexandre Shee

That's a great question.

I have no experience with artificial intelligence in war or defence situations. I can only comment on that as an informed citizen.

I think we need a very clear framework that takes into account the rules of war that have already been established. Unfortunately, AI systems are used in war situations and they kill a lot of people. We have to be aware of the risk and take measures to manage it.

Very humbly, this is a bit outside my area of expertise. However, I think you raise an important point. Indeed, artificial intelligence will be used in war situations and systems [Technical difficulty—Editor].

4:40 p.m.

Liberal

The Chair Liberal Joël Lightbound

We still have problems with the system. It looks like the sound has stopped working.

Mr. Shee and Mr. Masse, can you hear us?

4:45 p.m.

Industry Expert and Incoming Co-Chair, Future of Work, Global Partnership on Artificial Intelligence, As an Individual

Alexandre Shee

Yes, I can.

4:45 p.m.

Liberal

The Chair Liberal Joël Lightbound

Okay.

The sound is back.

4:45 p.m.

NDP

Brian Masse NDP Windsor West, ON

Yes. It just started working again.

4:45 p.m.

Liberal

The Chair Liberal Joël Lightbound

Okay.

Mr. Lemire, you may continue.

4:45 p.m.

Bloc

Sébastien Lemire Bloc Abitibi—Témiscamingue, QC

Based on your expertise and your involvement with the Global Partnership on Artificial Intelligence working group, I think you will be able to help us demystify the pitfalls that artificial intelligence, in particular, can cause.

I would like you to give us another type of example relating to the protection of our democratic institutions. For example, this week, 19,600 amendments were proposed in a very short time at the Standing Committee on Natural Resources by the Conservative Party, to name no names. Since the amendments were produced in such a short period of time, I think they must have been generated by artificial intelligence. So there is a will to bog down institutions using artificial intelligence.

In that case, is there also a risk of things going off the rails? What can we do to protect our democratic institutions from these attempts, which could be called "Trumpist"?

December 7th, 2023 / 4:45 p.m.

Industry Expert and Incoming Co-Chair, Future of Work, Global Partnership on Artificial Intelligence, As an Individual

Alexandre Shee

Without commenting specifically on what happened there, I can say that generative artificial intelligence, which is taking up more and more space in the current conversation, can generate texts as plausible as anything a human being would write. It certainly puts our democracy at risk, and it also puts people's interactions with different systems at risk. Will people be able to be sure they are dealing with a human being? The answer is no.

You raise an extremely important question. We need a marker to determine whether something was produced by an AI system, as well as a way for the consumer or person interacting with a system to know that they are speaking with a system based on artificial intelligence and not with a human being.

These are essential elements to protect our democracy from the misinformation that can emerge and will grow exponentially with new systems. We're in the early days of artificial intelligence. We absolutely have to have ways of identifying artificial intelligence systems and determining whether we are interacting with a system or a person.

4:45 p.m.

Bloc

Sébastien Lemire Bloc Abitibi—Témiscamingue, QC

Thank you very much.

4:45 p.m.

Liberal

The Chair Liberal Joël Lightbound

Thank you, Mr. Lemire.

Mr. Masse, you have the floor.

4:45 p.m.

NDP

Brian Masse NDP Windsor West, ON

Thank you, Mr. Chair.

Ms. Wylie, you didn't get a chance to get into the last conversation, so let me ask you this. If we had an AI commissioner or data commissioner, whatever it might be called, would the model of the Privacy Commissioner, an independent model like that, be something we should be looking toward?

Second to that, maybe you have another suggestion. How do we bring some independence and accountability to the table here that would also be empowered?

4:45 p.m.

Partner, Digital Public

Bianca Wylie

I just want to go back to my remark about making the same mistake for the third time. It's the same mistake that we saw with privacy and data protection, which is to treat these topics as objects that are independent from the rest of the world as it exists. We've seen the failure that thinking like this has gotten us to. While we talk about privacy a lot, what we're dealing with is a deeply privatized space where the control and power of the infrastructures—particularly with AI, never mind with data and software—are privately held.

If we think about our failures in access to justice for things like privacy and data protection, and we think about the failures of this sort of model, with privacy or data protection it's never about whether we should do it; it's always about “how”. If we want to turn the corner into a different world so that we have control over technologies, we have to talk about them in context.

For me, I go back to this. Who is the minister in charge of X, Y or Z sector? Who is in charge of making sure forestry is operating in a certain way, environmental protections are operating in a certain way and cars are operating in a certain way? Go from there every time. If we keep scaffolding more and more complexity, more and more compliance, and more and more of these sorts of constructs out into the sky, it doesn't serve justice. We have a fundamental access-to-justice problem as it stands right now. How many people have the time and energy to file a complaint with the Privacy Commissioner? What is the profile or demographic of someone who can bring that kind of complaint forward?

In the same way that we're talking today about how you would even know if you were harmed by artificial intelligence, I recently heard the concept that in some cases it's like asbestos: It's in things and you don't know it's there. Whom will you go to and ask to hold them accountable? If you get hit by a car, there is a clearly accessible track of where you go to deal with that problem. I do not understand why we think it's a good idea to build an entirely new construct when we have a perfectly good physical and material world and a perfectly good set of governance standards. That's a place where we have public power. To me, the only people who benefit from scaffolding all this additional complexity are those with private interests. In a democracy—at this point in time we're 30 years in—public power has to be increased.

Do I want to see a commissioner for AI? No. I don't want to see a new regime for AI.

4:50 p.m.

NDP

Brian Masse NDP Windsor West, ON

You want it built within the actual departments. Is that correct?

4:50 p.m.

Partner, Digital Public

Bianca Wylie

That's correct. Guess what's going to happen. It will surface the harms that right now we're talking about in abstractions.

I'm sorry, everyone. I cannot believe we keep doing this. This is not how the world works. You have to talk about specificity. That is how the law works. The law is about where, when, who and what happened. That's how justice works. You don't work in the abstract.

I'm sorry to have to keep bringing us back to this point, but why don't we build out from what we have functioning? The majority of our government is pre-existing. Work from there.

4:50 p.m.

NDP

Brian Masse NDP Windsor West, ON

Thank you.

4:50 p.m.

Liberal

The Chair Liberal Joël Lightbound

Thank you very much.

Mr. Généreux, you have the floor.

4:50 p.m.

Conservative

Bernard Généreux Conservative Montmagny—L'Islet—Kamouraska—Rivière-du-Loup, QC

Thank you, Mr. Chair.

Thank you to all the witnesses.

As they say in Quebec, I am “sur le cul”.

I don't know if you know what that means. It means “I'm on my ass.”

I don't know whether that translates well.

I apologize to the interpreters.

Ms. Wylie, you're giving us a particularly interesting lesson.

Bill C‑27 has been on the table for almost two years. It has been studied. It was drafted, obviously, by public servants in Ottawa. Politicians have done some work to try to put in place legislation that would regulate a problem that, as you see it, isn't really there. In fact, you are saying that all the legislation we need already exists; we simply have to proceed sector by sector and correct the elements related to artificial intelligence.

At the committee, we have heard from people. Over the past few years, we have conducted studies on blockchain, the automotive industry, the right to repair, and so on.

Today, you are telling us that what we are doing is not working at all. You are telling us to go back to the studies we have conducted and the existing legislation and to correct what will affect artificial intelligence, because it is already present in all these sectors, let's face it.

My question is still for you, Ms. Wylie, but I would also like to know what Ms. Brandusescu and Ms. Casovan think of your position.

4:50 p.m.

Partner, Digital Public

Bianca Wylie

There's nothing wrong with supporting the industry of AI. I want to be very clear about that. However, to me, it is stunningly disingenuous to use fear, safety, harm reduction, human rights protection and more to say that's the reason for this bill, which is why I was asking what this bill is actually doing.

If we were to stop and go back to the start, we could ask, “What are the sector-specific harms we're seeing? How did we deal with them in software and banking?” Take any sector. They're not starting from scratch. They've had to deal with data. They've had to deal with privacy. They've had to deal with software. There are harms all over the place with software. We're not looking at those. This isn't even consistent with the last 30 years of tech harms.

What I'm saying is that you should go to the people. Again, the only people we should be talking to haven't been included in this process. They're the ones who could tell you about the problems, because right now, everybody's talking in generic terms.