Evidence of meeting #20 for Access to Information, Privacy and Ethics in the 45th Parliament, 1st session. (The original version is on Parliament’s site, as are the minutes.)

A recording is available from Parliament.

Before the committee

Leahy  Chief Executive Officer, Conjecture Ltd.
Alfour  Chief Technology Officer, Conjecture Ltd.
Piovesan  Managing Partner, INQ Law

12:25 p.m.

Managing Partner, INQ Law

Carole Piovesan

We have a job to do in connecting different legal regimes, such as the human rights regime, to artificial intelligence. This means that, in applying human rights law and our charter, for instance, we should understand the connection and the role AI plays in each of those areas. There are absolutely concerns about whether the use of AI produces fair outputs. What we need to start to understand is: What does fair mean? What is the standard? How do we judge it? How can we demonstrate that we are living up to these standards? That is critical.

In addition, I want to highlight one of the points I made earlier: The diversity of perspectives around the table matters. You need to hear from different people with different perspectives. It really matters and will shape the way we approach AI policy in law.

Luc Thériault Bloc Montcalm, QC

Let us go back to Mr. Bengio’s approach. Can any guardrails be put in place to prevent some of the most common risks posed by artificial intelligence? You made a series of recommendations earlier, but could you elaborate on these recommendations or provide additional ones?

12:25 p.m.

Managing Partner, INQ Law

Carole Piovesan

There are a number of recommendations to ensure that AI is used in a safer manner. For expediency, I tried to consolidate many of the recommendations I have in my written submissions.

We need to augment the standards we have in place. There are standards through the International Organization for Standardization (ISO). The Standards Council of Canada is working on a body of research to support Canadian standards in AI. The application of those standards matters.

When we look at ISO standards, we are looking at a governance standard for how you operationalize responsible AI within the use of a system, that is, the use of AI in a particular context. This means it is not technical; it is governance-oriented.

By increasing our understanding through a national literacy program, through more sectoral guidance, through more industry collaboration and through a greater range of perspectives being brought to the table, we can start to build very actionable plans for how we will operationalize these standards and what “good” looks like for each of them.

Luc Thériault Bloc Montcalm, QC

Can—

12:30 p.m.

Conservative

The Chair Conservative John Brassard

Sorry to cut you off, Mr. Thériault, but your six minutes are up.

Luc Thériault Bloc Montcalm, QC

Oh, all right.

12:30 p.m.

Conservative

The Chair Conservative John Brassard

Time flies.

Mr. Hardy, you have the floor for five minutes.

12:30 p.m.

Conservative

Gabriel Hardy Conservative Montmorency—Charlevoix, QC

Thank you, Ms. Piovesan, for appearing today.

Many companies have made it clear that their aim is to use AI to take over a significant portion of human-driven tasks. Studies indicate a 13% decline in hiring among youth ages 22 to 25 for roles where artificial intelligence can easily replace human labour.

Is the use of artificial intelligence for labour essentially equivalent to hiring postdoctoral-level talent to work 24 hours a day for less than minimum wage? It seems this is already part of the conversation.

Ethics and responsible innovation are central to everything. Do you think we should legislate this particular issue to ensure companies don’t start using artificial intelligence instead of hiring people?

12:30 p.m.

Managing Partner, INQ Law

Carole Piovesan

I don't think you can legislate so that businesses don't hire AI instead of real people. Let me offer a different perspective.

We work with clients all the time to operationalize their AI governance programs. When we're going through use-case identification, I always ask them three questions. Number one, what is the work they don't want to do because it's boring and mundane? Number two, how much time do they spend on each of those tasks? Number three, what would they be doing if they weren't doing those tasks? A hundred per cent of the time I am told they would be more proactive, they would be able to serve their mandate better and they would be able to contribute more value to their organization.

Here are my points.

Number one, AI as a tool is going to be used within our businesses, and we can't and shouldn't stop that use.

Number two, we will have to reorient the job market. I have kids. I'm distinctly aware of where they're headed and where there may be vulnerabilities in some of their job choices. I understand that there will be shifts in the way we structure labour within Canada, but we have to make that adjustment. Instead of resisting it, we have to support re-skilling and upskilling. This is something that we as a country have been talking about for many years.

The last point is about literacy. We should allow people to understand how this tool is to be used and enable them to better identify and shape their own career paths, recognizing the profound transformational impact of artificial intelligence.

12:30 p.m.

Conservative

Gabriel Hardy Conservative Montmorency—Charlevoix, QC

I do understand what you are trying to say: If someone isn’t stuck doing tedious tasks, they could be engaged in more creative work. However, it’s important to think about the fact that if everyone were doing the same thing and had the same abilities, then artificial intelligence could do most of the work in the country. Don’t you think businesses will try to take advantage of that? We would then have to ensure those who lose their jobs don’t generally become a burden on society, since they will need financial support, for example.

Don’t you think we can regulate companies that replace human employees with artificial intelligence?

12:30 p.m.

Managing Partner, INQ Law

Carole Piovesan

I'll take your question in two parts.

First, is there something we should be doing to support this transitional period, as people might be looking for new opportunities because of the displacement caused by AI? Second, is there something we should be doing to prevent companies from using AI instead of people?

On the second point, I don't think we should be inhibiting companies from adapting to the use of AI. I don't think that's the approach. It's certainly outside the scope of my practice, so that's very much a personal opinion, but I don't think that's the right approach.

On the first part, I agree with you. We should be continuously monitoring and investigating how we can better support our workers as we go through a transformation on a step-by-step basis.

12:35 p.m.

Conservative

Gabriel Hardy Conservative Montmorency—Charlevoix, QC

I’d like to shift gears with a different question.

In the private sector, setting an example often shapes the direction of the market. What can we do here in Canada to lead by example when it comes to artificial intelligence and encourage countries to emulate us, to ensure we don’t lag behind others?

Can we adopt the best approach ethically? Can we make sure that the development of artificial intelligence is confined to use in very specific areas?

12:35 p.m.

Conservative

The Chair Conservative John Brassard

Give a very quick response, please.

12:35 p.m.

Managing Partner, INQ Law

Carole Piovesan

I appreciate that question.

We can lead in two ways. We can adopt the technology, and we can do so with the responsible made-in-Canada AI brand we have been developing for a very long time. We should sell that to the world.

12:35 p.m.

Conservative

The Chair Conservative John Brassard

Thank you, Monsieur Hardy.

Mr. Saini, you have five minutes. Go ahead, sir.

Gurbux Saini Liberal Fleetwood—Port Kells, BC

Thank you, Carole, for coming.

I was very alarmed by the two witnesses who appeared before you, given the extent to which humanity may be at risk. What can we do? The United States, China, India and the G7 countries can develop all these things that may be a danger to humanity, but a lot of countries in the world don't have the knowledge or facilities. What can we do to help them? How can we protect humanity from the uncontrollable use of this weapon?

12:35 p.m.

Managing Partner, INQ Law

Carole Piovesan

I think we have been here before. I didn't hear all the testimony of the two prior witnesses, but I believe they talked about this in the context of nuclear co-operation. We are going to rely on those processes again. We will have to find our friends, and we will establish the mechanisms with our friends to establish—

12:35 p.m.

Conservative

The Chair Conservative John Brassard

Excuse me, Carole. I'm sorry.

The sound became hollow. I don't know whether the connection came out of the microphone. Maybe you can plug it back in, if you don't mind, because the sound in the room got really hollow quickly.

I've stopped your time, Mr. Saini, just so you know. I'm going to give Carole an opportunity to respond to the question in its entirety.

Can you just give me a test, please?

12:35 p.m.

Managing Partner, INQ Law

Carole Piovesan

Sure. Is this better?

12:35 p.m.

Conservative

The Chair Conservative John Brassard

It's much better. Thank you.

I'm going to give you an opportunity to restart your response. Once you're done, I'll start the clock again.

12:35 p.m.

Managing Partner, INQ Law

Carole Piovesan

That's wonderful.

As I was saying, we have been here before. I think we will have to find out who our friends are and establish the mechanisms to embed the right types of values, certification mechanisms, evaluation mechanisms and deployment safeguards to do what we can to prevent rogue countries from investing in and ultimately succeeding with superintelligent computers that are harmful to humanity. I think that's going to be our best path forward.

Gurbux Saini Liberal Fleetwood—Port Kells, BC

My concern is with atomic energy. It was used, and it did a lot of harm before we realized it was dangerous.

Do we have any organizations, like the United Nations, that can regulate these things and tell rogue countries that it is enough, that we don't want to carry on with this?

12:35 p.m.

Managing Partner, INQ Law

Carole Piovesan

I don't think we have the right international governance organization set up to oversee AI yet. We need to invest in such an organization, or augment an existing one, with a specific view to supporting the safe deployment of AI. We have seen some of that through the international network of AI safety institutes—the body that consolidates and organizes the various national safety institutes—but I don't believe that is their explicit mandate.

Gurbux Saini Liberal Fleetwood—Port Kells, BC

In general, how are Canadian AI companies doing compared with those of the rest of the world?

12:40 p.m.

Managing Partner, INQ Law

Carole Piovesan

We have one large language model that competes on a global scale, and that's through Cohere. Otherwise, our AI companies are, by and large, from what I understand, relatively small compared with some U.S. companies. I think it's California that has 32 of the top 50 AI companies in the world.

Canada has a long way to go in augmenting and really globalizing our AI companies.