Evidence of meeting #145 for Access to Information, Privacy and Ethics in the 42nd Parliament, 1st Session. (The original version is on Parliament’s site, as are the minutes.)

A video is available from Parliament.

Also speaking

Clerk of the Committee  Mr. Michael MacPherson
Ben Wagner  Assistant Professor, Vienna University of Economics, As an Individual
Yoshua Bengio  Scientific Director, Mila - Quebec Artificial Intelligence Institute

4:25 p.m.

Prof. Ben Wagner

I think there is a distinction to be made between online platforms and media platforms. I think there is a substantive difference. I don't think it's always helpful to just focus on the content. In a lot of these cases, the solutions to this tend to be more procedural and tend to be more, let's say, organizational. If you have ways in which consumers have more control over the algorithms that YouTube is using to present them with music or to present them with information, that can already deal with a large part of the problem.

That's not to say that there isn't a responsibility on these large organizations; for sure there is. There is also the grave danger that when too much government regulation decides what you can and cannot see on the Internet, that's not always the—

4:25 p.m.

NDP

Charlie Angus NDP Timmins—James Bay, ON

What about illegal content?

4:25 p.m.

Prof. Ben Wagner

If it's illegal in that specific jurisdiction, then steps definitely need to be taken to ensure that.... But a lot of the time, at least in my experience of looking at content moderation, it's not so much about legal or illegal; it's more about content that creates a certain atmosphere, and the challenge is that this atmosphere chills speech and makes minorities, people of different genders or people of different sexual orientations much less comfortable speaking, and that impoverishes the public sphere.

We live in a world, right now, where there is a real challenge that people who are important parts of our communities no longer feel comfortable debating things on the Internet. I don't think just saying that it's identical to media will fix that problem. There is a huge challenge on how to restore a space where people genuinely feel comfortable having a public conversation. I think that's a huge challenge but an extremely important one.

4:25 p.m.

Conservative

The Chair Conservative Bob Zimmer

Thank you.

Next up for seven minutes is Mr. Saini.

4:25 p.m.

Liberal

Raj Saini Liberal Kitchener Centre, ON

Good afternoon to both of you gentlemen.

Mr. Bengio, I'd like to start with you, because I would like to ask a technical question just so I have a better understanding of how algorithms work. I'm sure you're aware of the term “black box problem”.

4:25 p.m.

Prof. Yoshua Bengio

Yes.

4:25 p.m.

Liberal

Raj Saini Liberal Kitchener Centre, ON

Can you explain that? To me, that sounds like you have an algorithm, the data is not very good, the algorithm produces a result and you just take it for granted that this is the result, without having any human eyes on it or any human interaction. Can you explain that a little more for me?

4:25 p.m.

Prof. Yoshua Bengio

Sure. Actually, we know a lot of things about how that result is obtained. We know that it's obtained as a consequence of optimizing some objectives—for example, minimizing the prediction error on the large dataset—and that tells us a lot about what the system is trying to achieve. When the system is designed, we can also measure how well it achieves that and how many errors it makes on new cases on average. There are many other things you can do to analyze those systems before they are even put in the hands of users.

It's not really a black box. The reason people call it a black box.... In fact, it's very easy to look into it. The problem is that those systems are very complex and they're not completely designed by humans. Humans designed how they learn, but what they learn in detail is something they come up with by themselves. Those systems learn how to find solutions to problems. We can look at how they learn, but what they learn is something that takes much more effort to figure out. You can look at all of the numbers that are being computed. There is nothing hidden. It's not black; it's just very complex. It's not a black box; it's a complex box.

There are things that we can do very easily. For example, once the system is trained and we look at a particular case where it's making a decision, it's very easy to find out which of the variables it takes as input were most relevant and how they influenced the answer. There are things that can be done to highlight that, to give a little bit of explanation about its decisions.
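As a rough illustration of what Prof. Bengio describes here (this sketch is not from the testimony; the model, data, and numbers are all hypothetical), a small learned system makes each of his points concrete: training minimizes a prediction error, the error can then be measured on new cases, and for any single decision we can inspect which input variables influenced the answer most.

```python
# Hypothetical sketch: a tiny linear model trained by gradient descent.
# Training data follows y = 3*x1 - 2*x2; the test cases are held out.
train = [((1.0, 0.0), 3.0), ((0.0, 1.0), -2.0),
         ((1.0, 1.0), 1.0), ((2.0, 1.0), 4.0)]
test = [((1.0, 2.0), -1.0), ((3.0, 0.0), 9.0)]

w = [0.0, 0.0]  # learned parameters, initially zero

def predict(x):
    return w[0] * x[0] + w[1] * x[1]

# "Optimizing some objective": stochastic gradient descent on the
# squared prediction error over the training set.
for _ in range(2000):
    for x, y in train:
        err = predict(x) - y
        w[0] -= 0.05 * err * x[0]
        w[1] -= 0.05 * err * x[1]

# "How many errors it makes on new cases, on average":
test_error = sum(abs(predict(x) - y) for x, y in test) / len(test)
print(round(test_error, 3))

# "Which of the input variables were most relevant" for one decision:
# for a linear model, each input's contribution is its value times
# the weight the system learned for it. Nothing is hidden.
x = (1.0, 2.0)
contributions = [w[i] * x[i] for i in range(2)]
print([round(c, 2) for c in contributions])
```

For a deep network the same questions take more effort to answer, which is the "complex box" point: the numbers are all visible, but interpreting what was learned requires analysis rather than simply reading the code.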

4:30 p.m.

Liberal

Raj Saini Liberal Kitchener Centre, ON

Thank you.

Mr. Wagner, I want to ask you a question about a term you used in a recent paper you wrote. You talked about “quasi-automation” and about keeping humans in the loop. Can you explain that to us a little more clearly?

You talked about three places where you felt that human agency, or the involvement of human agency in decision-making, was debatable. You talked about self-driving cars. You talked about border searches on passenger name records. You also talked about content on social media.

Perhaps you could expand on that term for us so that we have a better understanding of what you meant.

4:30 p.m.

Conservative

The Chair Conservative Bob Zimmer

Could you hear that question, Mr. Wagner?

I guess not. Are you able to hear me now, either one of you?

4:30 p.m.

Prof. Yoshua Bengio

I'm hearing you fine.

4:30 p.m.

Conservative

The Chair Conservative Bob Zimmer

Mr. Wagner?

4:30 p.m.

Liberal

Nathaniel Erskine-Smith Liberal Beaches—East York, ON

I'd take that as a no.

4:30 p.m.

Conservative

The Chair Conservative Bob Zimmer

Yes. I'll take that as a no.

Your time is still ticking, too, Mr. Saini. Hopefully, you'll get it back.

4:35 p.m.

Liberal

Raj Saini Liberal Kitchener Centre, ON

Mr. Bengio, I have just one quick question. I came across the term “singularity”. Is it a real thing?

4:35 p.m.

Prof. Yoshua Bengio

No.

4:35 p.m.

Liberal

Raj Saini Liberal Kitchener Centre, ON

Could you explain it a little? When I read it, I was alarmed, as you can appreciate.

4:35 p.m.

Prof. Yoshua Bengio

Yes. That is the intention of people who—

4:35 p.m.

Liberal

Raj Saini Liberal Kitchener Centre, ON

So, is it a real thing, and if it is—

4:35 p.m.

Prof. Yoshua Bengio

No.

4:35 p.m.

Liberal

Raj Saini Liberal Kitchener Centre, ON

It's not a real thing. Then why does it keep being written about?

4:35 p.m.

Prof. Yoshua Bengio

Unfortunately, there is a lot of confusion in many people's understanding of AI. A lot of it comes from the association we make with science fiction.

The real AI on the ground is very different from what you see in movies. The singularity is just a theory that once AI becomes as smart as humans, the intelligence of those machines will take off and become infinitely smarter than we are.

There is no more reason to believe this theory than there is, say, to believe some opposite theory that once they reach human-level intelligence it would be difficult to go beyond that because of natural barriers that one can think of.

There is not much scientific support to really say whether something like this is an issue, but there are some people who worry about that and worry about what would happen if machines became so intelligent that they could take over humanity of their own will. Because of the way machines are designed today—they learn from us, and they are programmed to do the things we ask them to do and that we value—as far as I'm concerned, this is very unlikely.

It's good that there are some researchers who are seriously thinking about how to protect against things like that, but it's a very marginal area of research. What I'm much more concerned with, as are many of my colleagues, is how machines could be used by humans and misused by humans in ways that could be dangerous for society and for the planet. That, to me, is a much bigger concern.

Our current level of social wisdom may not grow as quickly as the power of these technologies will. That's the thing I'm more concerned about.

4:35 p.m.

Liberal

Raj Saini Liberal Kitchener Centre, ON

Thank you very much.

4:35 p.m.

Conservative

The Chair Conservative Bob Zimmer

We have Mr. Wagner back.

4:35 p.m.

Liberal

Raj Saini Liberal Kitchener Centre, ON

Mr. Wagner, before we got cut off, I was quoting a term, "quasi-automation", from a paper you wrote recently. You talked about cases where the lack of human agency in certain decision-making processes was debatable: for example, self-driving cars, border searches and content on social media. Dr. Bengio had also mentioned fake news and the misuse of AI.

It seems to me that in some cases human beings are part of the loop only for the minimum amount of contact. How do we make sure there is still a human dimension in decision-making, especially when it comes to certain things like fake news and the use of political advertising?