Evidence of meeting #102 for Industry, Science and Technology in the 44th Parliament, 1st Session. (The original version is on Parliament's site, as are the minutes.)

A recording is available from Parliament.

Also speaking

Ana Brandusescu  AI Governance Researcher, McGill University, As an Individual
Alexandre Shee  Industry Expert and Incoming Co-Chair, Future of Work, Global Partnership on Artificial Intelligence, As an Individual
Bianca Wylie  Partner, Digital Public
Ashley Casovan  Managing Director, AI Governance Center, International Association of Privacy Professionals

4:20 p.m.

NDP

Brian Masse NDP Windsor West, ON

Okay.

I'll go to our final witness, please.

4:20 p.m.

AI Governance Researcher, McGill University, As an Individual

Ana Brandusescu

In terms of AIDA, it should be separate.

In terms of privacy, that's not my expertise either. I just stand by my comment to remove AIDA and proceed with the other two. Whether other amendments are needed for that is for somebody else.

4:20 p.m.

NDP

Brian Masse NDP Windsor West, ON

This is interesting.

I do want to ask about the protection of labour law. If you could continue with regard to that, how would that best be done? Would that be through a commissioner or a special component in the labour ministry? I'm just throwing this out there. What are some mechanics we have around it that you're seeking to change?

4:20 p.m.

AI Governance Researcher, McGill University, As an Individual

Ana Brandusescu

Thank you for that.

As Ms. Wylie said, I could give you so many examples, right now, of specific types of harms, real-world implications and everything that's changing all the time, but I want to zoom out a little and talk about why labour is important to look at.

Before getting into who can do this, it seems paradoxical to me to want agility in technologies that are so complex. We don't understand them. Most people don't. The black box is still there. Engineers still don't understand them, to this day. Workers are being continuously impacted. When I say “impacted”, I mean negative impacts and harms. I submitted a brief to your committee with Dr. Renee Sieber, and we discuss those at length. You have multiple studies to look at, from multiple years. I've been following Sama, a self-proclaimed “ethical AI” company, for five years now. When we look at who says they're ethical, and what ethical is, we should really question that as well.

In my first five minutes, I said that AI being a societal benefit is being shoved down our throats. That is the case. “We need digital literacy. We need AI literacy. We know it's good and it's here to stay.” I'm here to sometimes reject that. We should be able to ban AI when we need to. We should be able to listen to the workers and see what they want and what they think. What does their day-to-day job look like? Do they have enough breaks? Look at what Amazon is doing, micromanaging every millisecond of their lives. The factory workers are living in a limbo space. I wouldn't even say “a limbo space”. They're in hell.

How do we prevent that? Why not go to labour departments that know those strengths? This is why ISED is not fit to do this alone. Earlier, I was asked what other agency could do this. It cannot just be one. It has to be multiple. This is a team effort. This goes back to democracy. Slow it down a bit and listen to the public. We don't know what the public wants, because the public wasn't involved. We need to listen to labour organizations, departments that deal with labour everywhere in this country, and the workers themselves. This is why we cannot just have people in these rooms. We cannot just have this televised. We need to have people come to you. We need you to come to the people. We need to look at town halls. We need to look at off-line methods. We need to look at different times and places to do public participation, because we live in a digitized world.

You're saying we need to change everything for AI. No. As Ms. Wylie said before, AI needs to change for us.

4:25 p.m.

NDP

Brian Masse NDP Windsor West, ON

Thank you.

4:25 p.m.

Liberal

The Chair Liberal Joël Lightbound

Thank you very much.

Go ahead, Mr. Vis.

December 7th, 2023 / 4:25 p.m.

Conservative

Brad Vis Conservative Mission—Matsqui—Fraser Canyon, BC

Thank you to all the witnesses here today.

I'm very concerned about this broken bill. As legislators, we around this table understand what's at stake here, but it's very disconcerting. For the second time since we started doing this bill, we received massive packages of information from the minister that completely changed the bill in front of us. I'm saying, “Minister, why did you screw up so badly, and where the heck was your department for years? Where were you?”

In the last meeting, I asked a number of experts whether Industry Canada or the Government of Canada even has the capacity. This was one of the first things I raised in Parliament when I got elected. I was on the HUMA committee reviewing data systems for the Department of Human Resources, because they were still using a binary code method from the 1970s. I think that's still in effect today. The Government of Canada has proven that, generally, they get a lot of things wrong and they're not up to date in the 21st century. I am so apprehensive about giving this department any more power over something most experts are still contemplating how to get right.

That said, I think that, despite the minister's incompetence in this, his heart may be partly in the right place. He's trying to bring forward amendments and do something to fix his own mess. However, it is very scary that he's so incompetent that we're just getting thrown this information.

I'm sorry for that rant, but part of me is thinking now—

4:25 p.m.

Liberal

Tony Van Bynen Liberal Newmarket—Aurora, ON

Tell us what you really think.

4:25 p.m.

Conservative

Brad Vis Conservative Mission—Matsqui—Fraser Canyon, BC

Tony, you and I both come from the Dutch community, and in our culture, it's about being direct. I know you appreciate that as well. Thank you, my friend.

That's true. Dutch people are direct. Tony was even born in Holland.

We talked a lot about enshrining a fundamental right to privacy for children in the first part of the bill. We got from the minister seven areas where he doesn't believe that AI should be used now. I don't see anything in there related to children. That's kind of concerning.

Have any of you followed the debates that we've had so far about a fundamental right to privacy for kids?

Ms. Casovan, you're nodding “yes” in response.

4:30 p.m.

Managing Director, AI Governance Center, International Association of Privacy Professionals

Ashley Casovan

I heard the debates, yes.

4:30 p.m.

Conservative

Brad Vis Conservative Mission—Matsqui—Fraser Canyon, BC

I'm in a position where the Liberal members of this committee may make a decision with the Bloc Québécois to support this going through. I'm not sure where we're going to land on that. We're openly having this deliberation about whether this part of the bill deserves to go forward. That's where we are right now, in good faith.

That said, if it does go through, is it worth it for committee members to look at some of the other amendments that we'll be putting forward in the first part of the bill, like really enshrining some protections for kids?

I am so concerned about the innocent. I have a 10-month-old daughter, a four-year-old son and an eight-year-old son. I'm so concerned about their innocence and the manipulation. The bill, I will admit, does address psychological harms, but I don't think one or two clauses are good enough when it relates to a data-driven economy that impacts kids from birth to death in today's day and age.

Could you comment on that a bit?

4:30 p.m.

Managing Director, AI Governance Center, International Association of Privacy Professionals

Ashley Casovan

Sure. Actually, the reason I included my personal note was that I heard your line of questioning. It is concerning. It is not something that I typically speak to, but it was quite surprising, having the experience of working in this space for almost a decade—which is scary—to really think about the evolution of different types of technologies and therefore the societal impacts they have.

I was also nodding my head when you were mentioning some of the challenges that exist internally. Working inside government, I saw them up close and personal. Definitely, as with all organizations, there are concerns when we're using old technologies to try to fix modern problems. That said, the reality is that it does take a significant amount of time.

On the children's perspective, the fact that I had kids recently completely opened my aperture in terms of the harms. It made it more real and visceral than I could have ever imagined. Everything was abstract before.

I not only think that this should be included, but I think that when we see potential new classes of high-impact systems get added into these amendments, it would be nice to see something related to the protection of youth, similar to what we're seeing south of the border in the U.S.

4:30 p.m.

Conservative

Brad Vis Conservative Mission—Matsqui—Fraser Canyon, BC

Okay.

Mr. Shee mentioned in his comments earlier the relationship between generative AI models and child labour.

If we had, say, a clause in the AI portion of the bill that excluded any data that was created by children in third world countries, what impact would that have?

4:30 p.m.

Industry Expert and Incoming Co-Chair, Future of Work, Global Partnership on Artificial Intelligence, As an Individual

Alexandre Shee

It would have a—

4:30 p.m.

Conservative

Brad Vis Conservative Mission—Matsqui—Fraser Canyon, BC

I was actually asking Ms. Casovan.

4:30 p.m.

Managing Director, AI Governance Center, International Association of Privacy Professionals

Ashley Casovan

I think it would be not only nice to see; it would be important.

One challenge, though, with all of these systems is that they're trained on data. I know you've talked about this lots in this committee, so I won't regurgitate it too much, but what's important to note is that often the supply chain is not transparent. Knowing where that data comes from is quite difficult. To know that it comes from or was collected by children, I think you need to solve the more fundamental problem of transparency in the supply chain of data collection practices, which I think should be addressed with deeper concern in this bill as well.

4:30 p.m.

Conservative

Brad Vis Conservative Mission—Matsqui—Fraser Canyon, BC

Mr. Chair, do I have any more time?

4:30 p.m.

Liberal

The Chair Liberal Joël Lightbound

Thank you, Mr. Vis. That's all the time.

Mr. Shee, I will just allow you to add to this, if you had something.

4:30 p.m.

Industry Expert and Incoming Co-Chair, Future of Work, Global Partnership on Artificial Intelligence, As an Individual

Alexandre Shee

Yes. I would just add that it is common practice within the AI development world to actually detail instructions for both data collection and data annotation. Including any reference to child labour or forced labour would have a tremendous impact on making sure that that would be eradicated, given that it would be included specifically in the instructions given to companies that are operating around the world.

4:35 p.m.

Liberal

The Chair Liberal Joël Lightbound

Thank you.

Mr. Sorbara, you have the floor.

4:35 p.m.

Liberal

Francesco Sorbara Liberal Vaughan—Woodbridge, ON

Thank you, Chair.

Welcome, everyone.

Thank you for your respective testimonies on AI. It's fascinating. It's very complex, and it's given a lot of us MPs, who are not subject matter experts, a lot to chew on.

I do wish to go to the gentleman who is here virtually, Alexandre.

You mentioned several times the AI continuum and the idea of data collection, engineering and annotation in the AI supply chain. Can you elaborate on that point? Your first point was that we should go forward with the bill. If you can comment on both aspects, that would be great.

4:35 p.m.

Industry Expert and Incoming Co-Chair, Future of Work, Global Partnership on Artificial Intelligence, As an Individual

Alexandre Shee

Essentially, when we look at artificial intelligence, there are many steps in that.

The first step is collecting data for an AI system. The second step is annotating that data. For example, if you have an image where you see a nose and eyes, there is somebody annotating that. Then there is the feedback loop where that data is enriched, so it goes through a software model, and ultimately the outputs of that are revalidated by a human. That's packaged into a proof of concept that's oftentimes launched, and then it becomes a product that's used by consumers or in the business context. That's the whole supply chain.

Right now, this legislation is geared only around the outputs, so we're missing all of the work done by humans to create the AI systems. I think it's important to have a law in place, because we need to start regulating the outputs as much as we need to regulate the supply chain.

My recommendation [Technical difficulty—Editor].

4:35 p.m.

Liberal

The Chair Liberal Joël Lightbound

I'm afraid it's the whole system, because it's not just Mr. Shee.

Mr. Shee, I will ask you to go back one minute in time. The system froze.

4:35 p.m.

Industry Expert and Incoming Co-Chair, Future of Work, Global Partnership on Artificial Intelligence, As an Individual

Alexandre Shee

Essentially, I think it's important to have legislation in place, because we need to start protecting the citizens who are interacting with AI systems.

We also need to hold accountable companies that are building AI systems and ensure that they're not using practices that are against Canadian values in their supply chain.

4:35 p.m.

Liberal

Francesco Sorbara Liberal Vaughan—Woodbridge, ON

You did say one thing that I found fascinating. You made the linkage between the AI supply chain and human rights, and you also mentioned the race to the bottom on the lack of worker rights when it comes to the AI supply chain. I would love to follow up in a more in-depth conversation on that, but I am going to move on to another witness.

Ashley, you commented on what compliance would look like in this AI world. Can you elaborate on that? We know governance within any type of organization is very important, and any type of service or product that's provided is important. When I think of compliance, I'm trying to wrap my head around compliance in an AI world. What is that, and what should it look like?