Evidence of meeting #145 for Access to Information, Privacy and Ethics in the 42nd Parliament, 1st Session. (The original version is on Parliament’s site, as are the minutes.)

A video is available from Parliament.

Also speaking

Clerk of the Committee  Mr. Michael MacPherson
Ben Wagner  Assistant Professor, Vienna University of Economics, As an Individual
Yoshua Bengio  Scientific Director, Mila - Quebec Artificial Intelligence Institute

4:55 p.m.

Liberal

Frank Baylis Liberal Pierrefonds—Dollard, QC

Thank you.

Is there any country that regulates AI directly?

We'll start with you, Mr. Wagner.

4:55 p.m.

Prof. Ben Wagner

Are there countries that regulate AI directly, as a general matter, that I am aware of right now? No. There are AI-specific regulations you will find in different fields; for example, the General Data Protection Regulation in Europe is one of those cases.

4:55 p.m.

Liberal

Frank Baylis Liberal Pierrefonds—Dollard, QC

Mr. Bengio, do you know of any specifically, or are they all subsets of a general regulation?

4:55 p.m.

Prof. Yoshua Bengio

No, I don't think there is. I would agree with Mr. Wagner that we want sector-specific regulations. That's also a protection for innovation—to make sure we find the right compromise that makes sense both ethically and technically.

4:55 p.m.

Liberal

Frank Baylis Liberal Pierrefonds—Dollard, QC

No one is out there saying we're going to regulate AI in a general sense. They're doing more of what you're suggesting to us, Mr. Wagner, which is to say we already regulate, for example, hate speech. Take that one. How is AI going to be regulated within the context of hate speech? Is that the approach you would both suggest?

4:55 p.m.

Prof. Yoshua Bengio

Yes.

4:55 p.m.

Liberal

Frank Baylis Liberal Pierrefonds—Dollard, QC

Are there specific areas in immediate need? As you're at the forefront of the development of AI, do you see specific areas? It's quite a broad thing to start looking at every one of our regulations and say, “Okay, we've got to make every one of them AI-proof.” Where would you say we should focus our energies? Where have other jurisdictions focused their energies?

5 p.m.

Prof. Yoshua Bengio

I think I already mentioned the security and military applications that deserve more attention. We have to act quickly on this, to avoid the kind of arms race between countries that would lead to the availability of these killer drones. That would make it very easy for even terrorists to get hold of these things. That's an example where there's no reason to wait. The red line has been defined—

5 p.m.

Liberal

Frank Baylis Liberal Pierrefonds—Dollard, QC

Is that like the anti-personnel mines?

5 p.m.

Prof. Yoshua Bengio

Yes. That's right.

5 p.m.

Liberal

Frank Baylis Liberal Pierrefonds—Dollard, QC

Okay. That's the military one.

Mr. Wagner.

5 p.m.

Prof. Ben Wagner

I think the automated drones, or what are termed LAWS—lethal autonomous weapons systems—are definitely areas where further focus is required. I would also say that what's been mentioned here about the spread or proliferation of surveillance and AI technologies that can be misused by authoritarian governments is another area where there is an urgent need to look more closely.

Then, of course, you have whole sectors that have been mentioned by this committee already—media, hate-speech-related issues and issues related to elections. I think we have a considerable number of automated technical systems changing the way the battleground works, and how existing debates are taking place.

There's a real need to take a step back, as was mentioned and discussed before, from the idea that AI will be able to solve or fix hate speech. I don't think we should expect that any automated system will be able to correctly identify content in a way that would prevent hate speech or deal with these issues on its own. Instead, I think we need a broad set of tools. It's precisely about not relying only on humans or on fully automated technical solutions, but instead developing a wide tool kit of approaches that design and create spaces of debate that we can be proud of, rather than getting stuck in a situation where we say, “Ah, we have this fancy AI system that will fix it for you.”

5 p.m.

Liberal

Frank Baylis Liberal Pierrefonds—Dollard, QC

Okay.

One of the areas we hear about, and you mentioned a few times, is transparency. I'm not talking about the transparency of how an algorithm works, but the transparency of what you're dealing with: is this a bot, by design, as opposed to a human being? These are AI-driven bots. What are your views on that?

We'll start again with you, Mr. Bengio.

5 p.m.

Prof. Yoshua Bengio

It's usually pretty obvious if you're dealing with a machine or a human, because the machines aren't that good at imitating humans. In the future, we should definitely have regulations to clarify that, so that a user knows whether they are talking to a human or a machine.

5 p.m.

Liberal

Frank Baylis Liberal Pierrefonds—Dollard, QC

Mr. Wagner.

5 p.m.

Prof. Ben Wagner

I couldn't agree more. There are also cases, as far as I'm aware, in California, where this is already being debated: finding mechanisms whereby automated systems like bots would be required to declare themselves as bots. Especially in the context of elections, and also in other cases, that can be quite helpful. Of course, that doesn't mean that all issues are fixed with that, but it's certainly better than what we have right now.

5 p.m.

Conservative

The Chair Conservative Bob Zimmer

Thank you, Mr. Baylis.

Last up for three minutes is Mr. Angus.

April 30th, 2019 / 5 p.m.

NDP

Charlie Angus NDP Timmins—James Bay, ON

Thank you.

Mr. Bengio, my riding is bigger than Great Britain, and I live in my car. My car is very helpful. It tells me when I'm tired, and it tells me when I need to take a break, but it's based on roads that don't look like roads in northern Ontario. I'm always moving into the centre lane to get around potholes, to get around animals and to get away from 18-wheelers. I start watching this monitor, and sometimes I'm five minutes from the house and it's saying I've already exceeded my safety capacity.

I thought, well, it's just bothering me and bugging me. I'll break the glass. Then I read Shoshana Zuboff's book on surveillance capitalism and how all this will be added to my file at some point. This will be what I'm judged on.

To me, it raises the question of the rights of the citizen. The citizen has personal autonomy and the right to make decisions. If I, as a citizen, get stopped by the police because I made a mistake, he or she judges me on that, and I can still take it to some level of challenge in court if I'm that insistent. That is fair. That's the right of the citizen. Under the systems that are being set up, I have no rights with respect to what an algorithm designed by someone in California thinks a good roadway is.

The question is, how do we reframe this discussion to talk about the rights of citizens to actually have accountability, so their personal autonomy can be protected and so decisions that are made are not arbitrary? When we are dealing with algorithms, we have yet to find a way to actually have the adjudication of our rights heard.

Is that the role you see legislators taking on? Is it a regulatory body? How would we insist that, in the age of smart cities and surveillance capitalism, the citizen still has the ability to challenge and to be protected?

5:05 p.m.

Prof. Yoshua Bengio

It's interesting. This question is related to the issue of the imbalance of power between the user and large companies in the case of how data is used. You have to sign these consents. Otherwise you can't be part of, say, Facebook.

It's similar in the way the products are defined remotely. As users, we don't have access to the details of how this is done. We may disagree on the decisions that are made, and we don't have any recourse.

You are absolutely right. The balance of power between users and companies that are delivering those products is something that maybe needs rethinking.

As long as the market does its job of providing enough competition between comparable products, then at least there is a chance for things to be okay. Unfortunately, we're moving towards a world where these markets are dominated more and more by just one or a few players, which means that users don't have a choice.

I think we have to rethink notions like monopoly rules and maybe bring them back. We need to make sure, one way or another, that we rebalance the power differential between ordinary people and the companies that are building those products.

5:05 p.m.

NDP

Charlie Angus NDP Timmins—James Bay, ON

Thank you.

5:05 p.m.

Conservative

The Chair Conservative Bob Zimmer

I want to thank our witnesses.

I think it's alarming just for you to say that essentially AI is largely unregulated. We're seeing that with data-opolies as well, and we're really trying to grasp what we do as regulators to protect our citizens.

The challenge is before us, and it's certainly not easy, but I think we will take your advice. Mr. Wagner, you said to start early. It already feels like we're too late, but we're going to do our best.

I want to thank you for appearing today from Vienna, and from Montreal as well.

We're going to suspend for a few minutes to get our guests out so we can get into committee business.

Thank you.

[Proceedings continue in camera]