Evidence of meeting #145 for Access to Information, Privacy and Ethics in the 42nd Parliament, 1st Session. (The original version is on Parliament’s site, as are the minutes.)

A video is available from Parliament.

Also speaking

Clerk of the Committee  Mr. Michael MacPherson
Ben Wagner  Assistant Professor, Vienna University of Economics, As an Individual
Yoshua Bengio  Scientific Director, Mila - Quebec Artificial Intelligence Institute

4:35 p.m.

Prof. Ben Wagner

There's a challenge in that if we assume human intervention alone will fix things, we will also be in a difficult situation, because human beings, for all sorts of reasons, often do not make the best decisions. We have many hundreds of years of experience in dealing with bad human decision-making and not so much experience in dealing with mainly automated decision-making. The best types of decisions tend to come from a good configuration of interactions between humans and machines.

If you look at how decisions are made right now, human beings often rubber-stamp the automated decision made by AIs or algorithms and say, “Great, a human decided this”, when actually the reason for that is to evade different legal regulations and different human rights principles. That is why we use the term quasi-automation: the process is effectively automated, and then you have only three to five seconds in which somebody is looking over it.

In the paper I wrote, and also in the guidelines of the Article 29 Working Party, criteria were developed for what is called “meaningful human intervention”. Only when human beings have enough time to understand the decision they're making, enough training, and enough support to carry out that intervention is it considered meaningful decision-making.
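The criteria described here can be sketched as a simple gate in a decision pipeline. This is a hypothetical illustration, not any regulator's specification; the threshold values and field names are assumptions.

```python
from dataclasses import dataclass

# Hypothetical thresholds for when a human sign-off counts as
# "meaningful" review rather than a rubber stamp; the values are
# illustrative assumptions, not figures from any regulation.
MIN_REVIEW_SECONDS = 60
REQUIRED_TRAINING = {"decision-support", "domain"}

@dataclass
class HumanReview:
    review_seconds: float        # time the reviewer spent on the case
    reviewer_training: set       # training modules the reviewer completed
    saw_model_explanation: bool  # reviewer could see the system's reasoning

def is_meaningful_intervention(review: HumanReview) -> bool:
    """Return True only if the sign-off meets all three criteria:
    enough time, enough training, and enough support to understand
    the decision. Anything else is treated as quasi-automation."""
    return (
        review.review_seconds >= MIN_REVIEW_SECONDS
        and REQUIRED_TRAINING <= review.reviewer_training
        and review.saw_model_explanation
    )

# A three-to-five-second glance fails the time criterion, so the
# decision would be logged as automated, not human-made.
rubber_stamp = HumanReview(4.0, {"decision-support", "domain"}, True)
print(is_meaningful_intervention(rubber_stamp))  # False
```

The design point is that the check is conjunctive: failing any one criterion (time, training, or support) downgrades the sign-off to quasi-automation.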

It also means that if you're driving in a self-driving car, you need enough time as an operator to be able to stop, to change course, to make decisions, and a lot of the time we're building technical systems where this isn't possible. If you look at the two recent crashes of Boeing 737 Max aircraft, it's exactly this example: you had an interface between technological systems and human systems where it became unclear how much control human beings had and, even if they did have control and could press the big red button to override the automated system, whether that control was actually sufficient to allow them to control the aircraft.

As I understand the current debate about this, that's an open question. This is a question that is being faced now. With autopilots and other automated systems of aircraft, this will increasingly lead to questions that we have in everyday life, so not just about an aircraft but also about an insurance system, about how you post online comments, and also how government services are provided. It's extremely important that we get it right.

4:40 p.m.

Liberal

Raj Saini Liberal Kitchener Centre, ON

Thank you very much.

4:40 p.m.

Conservative

The Chair Conservative Bob Zimmer

Mr. Gourde.

4:40 p.m.

Conservative

Jacques Gourde Conservative Lévis—Lotbinière, QC

Thank you, Mr. Chair.

I want to thank the witnesses for joining us.

I'll turn to you, Mr. Bengio. Perhaps you'll answer me in French.

Artificial intelligence finds solutions to our problems and improves the services, knowledge and information that we receive. So far, so good. However, you really worried me when you said that artificial intelligence can find solutions to problems on its own. What would happen if artificial intelligence determined that we were the problem?

You mentioned killer drones, which may be capable of genocide. If artificial intelligence programming includes a list of all Canadian parliamentarians to eliminate within a week, should we be concerned? Is this pure fiction? Could this happen?

4:40 p.m.

Prof. Yoshua Bengio

Some things that you said are pure fiction, but others are cause for concern.

I think that we should be concerned about a system that uses artificial intelligence and is programmed to target, for example, all parliamentarians in a certain country. This situation is quite plausible from a scientific point of view, since it involves only technological issues related to the implementation of this type of system. That's why several countries are currently discussing a treaty that would ban these types of systems.

However, we must remember that these systems aren't really autonomous at a high level. The systems will simply follow the instructions that we give them. As a result, a system won't decide on its own to kill someone. The system will need to be programmed for this purpose.

In general, humans will always decide what constitutes good or bad behaviour on the part of the system, much like we do with children. The system will learn to imitate human behaviour. The system will find its own solutions, but according to criteria or an objective chosen by humans.
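The point that a system finds its own solutions, but only against criteria chosen by humans, can be illustrated with a minimal sketch. The objective and the search procedure here are invented for illustration; they stand in for any learning or optimization method.

```python
import random

def human_chosen_objective(x: float) -> float:
    """The criterion of 'good behaviour' is fixed by people, not by
    the system: here, how close x is to a human-picked target of 7."""
    return -abs(x - 7.0)

def find_solution(objective, steps: int = 5000) -> float:
    """Simple hill climbing: the machine explores candidate solutions
    on its own, but ranks them only by the human-supplied objective."""
    random.seed(0)  # fixed seed so the sketch is reproducible
    best = random.uniform(-100, 100)
    for _ in range(steps):
        candidate = best + random.uniform(-1, 1)
        if objective(candidate) > objective(best):
            best = candidate
    return best

solution = find_solution(human_chosen_objective)
print(round(solution, 2))  # converges near the human-chosen target
```

The machine never "decides" what counts as good: swap in a different objective function and the same search produces entirely different behaviour, which is the sense in which responsibility stays with the humans who chose the criterion.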

4:40 p.m.

Conservative

Jacques Gourde Conservative Lévis—Lotbinière, QC

We've talked a great deal about job losses, efficiency and the fact that artificial intelligence could eventually replace foremen in factories. I could be informed by means of my smartphone of the work that awaits me today on my production line. Artificial intelligence could arguably do much of the work itself.

Will workers end up going to the factory simply to carry out tasks that are too difficult for robots to perform, such as moving certain items? We're even talking about artificial intelligence controlling transportation. Could a large part of the population be unemployed within 10 to 20 years?

4:45 p.m.

Prof. Yoshua Bengio

Yes, it's quite possible.

Your example of a machine that assigns the work already exists. For example, today, couriers who carry letters from one end of the city to the other are often guided by systems that use artificial intelligence and that decide who will carry a given package. There's no longer any human contact between the dispatcher and the person performing the tasks.

As technology advances, obviously more and more of these jobs, especially the more routine jobs, will be automated. In the courier example that I just provided, the dispatcher's job was the most routine and easiest to automate. The work of a human who walks the streets of the city is more difficult to automate at the moment. However, it will probably happen eventually.
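A dispatch system of the kind described can be sketched as follows. The greedy nearest-courier rule and the coordinates are hypothetical simplifications; real routing systems weigh traffic, load, and schedules, not just straight-line distance.

```python
import math

# Hypothetical courier positions and a pickup location, as (x, y)
# city-grid coordinates.
couriers = {"ana": (0.0, 0.0), "ben": (4.0, 3.0), "chloe": (10.0, 1.0)}
pickup = (5.0, 4.0)

def assign_courier(couriers: dict, pickup: tuple) -> str:
    """Greedy rule: hand the package to the nearest available courier.
    No human dispatcher is involved; the choice follows directly from
    the objective coded here."""
    return min(
        couriers,
        key=lambda name: math.dist(couriers[name], pickup),
    )

print(assign_courier(couriers, pickup))  # ben
```

This is the sense in which the dispatcher's job is the easy one to automate: it reduces to evaluating a distance function, while the courier's physical work does not.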

It's very important for governments to plan, anticipate the future and think about measures that will minimize the human misery that may result from this development if it were left to run its course.

4:45 p.m.

Conservative

Jacques Gourde Conservative Lévis—Lotbinière, QC

My last question concerns ethics.

Our government is increasingly using artificial intelligence to provide services to Canadians. All governments in the world are doing the same. If I need services from my government and I see that the responses provided have been generated by artificial intelligence—a reality that's fast approaching—how can I be sure that a human has listened to me? To what extent can I invoke ethical considerations to require that the service be provided by another person?

4:45 p.m.

Prof. Yoshua Bengio

It depends on the type of service. In some cases, all that matters is that the job is done properly.

Personally, I would prefer to receive quick and efficient responses from tax officials, even if the responses must be generated by a machine. I'm using this example because we're in the middle of tax season.

However, if I have questions about my health, if the discussion takes a more personal turn or if I'm ill and in hospital, I want to have a human in front of me. It doesn't bother me that the human uses technology to do a better job. That said, some situations involve human and relational concerns that are better addressed through human interaction.

4:45 p.m.

Conservative

Jacques Gourde Conservative Lévis—Lotbinière, QC

Thank you.

4:45 p.m.

Conservative

The Chair Conservative Bob Zimmer

Mr. Picard, for five minutes.

April 30th, 2019 / 4:45 p.m.

Liberal

Michel Picard Liberal Montarville, QC

Thank you, Mr. Chair.

Let's try to talk about the positive side of AI, as we have been kind of scared of all the issues.

Mr. Wagner, you talked about moral leadership. In my view, morality is a bit wider than ethics. Ethics is the set of values you decide to promote, and governance then applies them; but morality has a societal aspect, where AI is used by individuals but the system in general is created first by humans. The system must help us govern our values, to the point that you suggest moral leadership. Who will do that? Who is credible enough for me to say, well, that's a wise person, and I should count on this type of person to establish what will from now on be my moral guide in my AI systems?

4:45 p.m.

Prof. Ben Wagner

I think right now we live in a situation where these decisions are overwhelmingly made by private companies, and almost none of them are made by democratically elected governments. That is the problem for citizens, for rights, for governance. It poses a considerable challenge, but it doesn't mean it's impossible. Whether it's trade in the technologies and where you choose to export them, the development of the technology and which ones you focus on developing, or the research and research funding for different technologies and what you ensure is developed, I do think there is an opportunity for moral leadership, which I think is the right word there.

But also to be perfectly blunt, there aren't that many countries in the world that are seriously trying to develop artificial intelligence in a positive way for their citizens and for its development in the context of human rights. There are many that are discussing it and trying, but a lot of the time they're saying, “Ah, but we're not quite sure. Would it have issues for economic development? Ah, we're not quite sure if some of our companies will have some mild issues here or there.”

I think there is a need to be willing, and to have the strength, to take that stand. It's also important because if there are no countries left in the world that are willing to do that, then we're in a very difficult spot. I think the European General Data Protection Regulation offers a perspective on what can be done on data, but for artificial intelligence, for algorithms, we have a whole new set of issues and challenges, where further leadership will be required to really get to a human rights basis, a basis that benefits all citizens.

4:50 p.m.

Liberal

Michel Picard Liberal Montarville, QC

Mr. Bengio, I want to hear your comments on this issue.

At this time, the legislation that a country adopts to protect privacy to some extent may become counterproductive. The legislation may prevent that country from fully developing its use of artificial intelligence. The country will then have a leadership issue. However, both of you have stated that Canada wants to be a leader in different areas.

First, I wonder where this leadership should begin, since the concept is so broad. In addition, Canadian leadership, which is probably based on Canadian values, may not be equal to the leadership of another country that uses a different value system.

4:50 p.m.

Prof. Yoshua Bengio

You're asking a good question, but I don't think that there's a general answer.

This requires the use of experts, who will review ethical and moral issues, along with technological and economic concerns in each relevant area. The goal is to establish guidelines to both foster innovation and protect the public. I think that this is generally possible. Of course, several companies have protested that there shouldn't be too many barriers. However, in most cases, I don't believe that the expected results pose an issue.

As we said earlier, there are issues in some situations, but there's no easy solution. We specifically talked about [Technical difficulty—Editor] illegal videos on Facebook. The issue is that we don't yet have the technology to identify these videos quickly enough, even though Facebook is researching ways to improve this type of automatic identification. However, there aren't enough humans to monitor everything put on the Internet in order to remove content quickly or to prevent it from being posted in the first place.

The task is practically impossible, and there are only three possible solutions. We can shut everything down, wait until we've developed better technology, or accept that things aren't perfect and have humans carry out the monitoring. In fact, this is already the case right now, when people have the opportunity to click on a button to report unacceptable content.
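The report-button mechanism described here can be sketched as a flag counter that escalates a post to a human review queue once enough users report it. The threshold and the data structures are illustrative assumptions, not any platform's actual system.

```python
from collections import Counter

FLAG_THRESHOLD = 3  # hypothetical: reports needed before human review

class ReportQueue:
    """Users click 'report'; once a post crosses the threshold it is
    queued for a human moderator. Automation only routes the work;
    the removal decision stays with people."""

    def __init__(self):
        self.flags = Counter()
        self.human_review_queue = []

    def report(self, post_id: str) -> None:
        self.flags[post_id] += 1
        if self.flags[post_id] == FLAG_THRESHOLD:
            self.human_review_queue.append(post_id)

q = ReportQueue()
for _ in range(3):
    q.report("post-42")
print(q.human_review_queue)  # ['post-42']
```

This is the "accept imperfection plus human monitoring" option: the software scales the triage, while humans handle the judgment calls it surfaces.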

4:50 p.m.

Liberal

Michel Picard Liberal Montarville, QC

Thank you.

4:50 p.m.

Conservative

The Chair Conservative Bob Zimmer

I just want to bring this before the committee. We actually have three more on the list to ask questions—Mr. Kent, Mr. Baylis and Mr. Angus—but we're at about the five-minute mark. Because we had so many delays, I would suggest we go into committee business slightly later, but it's up to you.

What would you like to see? I think the questions still need to be asked, but I'm looking for direction from the committee just to finish the slate of questions.

4:50 p.m.

Some hon. members

Agreed.

4:50 p.m.

Conservative

The Chair Conservative Bob Zimmer

It will just push into committee business by about eight minutes.

Okay, we'll continue.

Next up for five minutes is Mr. Kent.

4:50 p.m.

Conservative

Peter Kent Conservative Thornhill, ON

Thank you, Chair.

In the interests of time, I have just one last question that I'd like to ask to both of our witnesses, and it comes back to this matter of regulation in the borderless digital world.

Do you see a need for international treaties that would govern the development and the use of artificial intelligence, in ways similar to the EU's approach? It has this new GDPR, which is certainly far beyond any regulations we have in Canada. Would either of you see the need for meaningful, enforceable—and I think the word “enforceable” is key to this—international treaties to enforce the way artificial intelligence might be used or abused?

4:55 p.m.

Prof. Yoshua Bengio

I think we have to go in that direction. It's not going to be perfect, because there will be countries that don't sign on, or some countries might be able to water down the strength of these agreements.

Even some regulation there—and in particular, here we're talking about international regulations—is better than none, by far.

4:55 p.m.

Conservative

Peter Kent Conservative Thornhill, ON

Mr. Wagner.

4:55 p.m.

Prof. Ben Wagner

In my experience, when people try to develop general regulations for all of AI, all algorithms or all technology, the result never ends up being quite appropriate to the task.

I agree with Mr. Bengio in the sense that certain types of international regulation, for example those focused on automated killer systems, are extremely important, and there is already an extensive process going on in this work in Geneva and in other parts of the world.

There is also the question of whether Canada wants to become, itself, a state with protections equivalent to the GDPR. That, I think, is a relevant consideration, and one that would considerably improve both flows of data and the protection of privacy.

I think all other areas need to be looked at in a sector-specific way. If we're talking about elections, for example, AI and other automated systems will often exploit existing weaknesses in regulatory environments. How can we ensure, for example, that campaign finance laws are improved in specific contexts, and also that those contexts are improved in a way that considers automation? When we're talking about the media sector and related issues, how can we ensure that our existing laws adapt to and reflect AI?

I think if we build on what we have already, rather than developing a new cross-sectional rule for all of AI and for all algorithms, we may do a better job.

I think that also applies at the international level, where it's very much a case of building on and developing what we already have, whether it's related to dual-use controls, to media or to challenges related to elections. There are already existing instruments there, and I think that's more effective than a one-size-fits-all AI treaty.

4:55 p.m.

Conservative

Peter Kent Conservative Thornhill, ON

Thank you.

Thank you, Professor Bengio, for your explanation and your discussion of the singularity. I had the occasion to watch an old version of 2001: A Space Odyssey and the battle between the human and HAL over control of the spaceship. Your discounting of the reality of the singularity was reassuring. Thank you.

4:55 p.m.

Conservative

The Chair Conservative Bob Zimmer

Thank you, Mr. Kent.

Next up for five minutes is Mr. Baylis.