Evidence of meeting #145 for Access to Information, Privacy and Ethics in the 42nd Parliament, 1st Session. (The original version is on Parliament’s site, as are the minutes.) The winning word was companies.

A video is available from Parliament.

On the agenda

MPs speaking

Also speaking

Clerk of the Committee  Mr. Michael MacPherson
Ben Wagner  Assistant Professor, Vienna University of Economics, As an Individual
Yoshua Bengio  Scientific Director, Mila - Quebec Artificial Intelligence Institute

3:40 p.m.

Conservative

The Chair Conservative Bob Zimmer

We will call to order meeting number 145 of the Standing Committee on Access to Information, Privacy and Ethics. Pursuant to Standing Order 108(2), this is the study of the ethical aspects of artificial intelligence and algorithms.

First of all, we'll go to Mr. Angus who has a motion.

Go ahead, Mr. Angus.

3:40 p.m.

NDP

Charlie Angus NDP Timmins—James Bay, ON

Thank you. I won't take too much time.

I have a motion, but can I make a commercial announcement first?

I have no pecuniary interest in this, but I implore my colleagues on the committee to watch the BBC television show Brexit. Our friend Zack Massingham makes an appearance as one of the central characters, and he certainly does not appear to be as tired and confused and memory-fogged as he did to us.

3:40 p.m.

Liberal

Raj Saini Liberal Kitchener Centre, ON

Is it a show?

3:40 p.m.

Conservative

The Chair Conservative Bob Zimmer

Yes. As chair, I'll send out the link.

I uploaded it on iTunes, so get it however you wish.

3:40 p.m.

NDP

Charlie Angus NDP Timmins—James Bay, ON

I think it would be great to have Mr. Zack Massingham back to ask how he considers his portrayal....

3:40 p.m.

Conservative

The Chair Conservative Bob Zimmer

He was much more involved than he let on.

3:40 p.m.

NDP

Charlie Angus NDP Timmins—James Bay, ON

Much more involved.

Okay, I brought a notice of motion to the committee:

That the Committee begin a study on the ethical aspects of artificial intelligence and algorithms.

This was in response to our clerk, who said that in order to undertake this next round of witnesses, we needed an official motion.

3:40 p.m.

Conservative

The Chair Conservative Bob Zimmer

I'll speak quickly to that, Mr. Angus, to talk about the aspects of the study, and then I'll pass it on to the clerk. Our calendar is very packed for what we're able to pull off. The analysts are wondering how far we go with this. Do we report back with a report?

Mr. Clerk, go ahead.

3:40 p.m.

The Clerk of the Committee Mr. Michael MacPherson

I can send around a calendar after tonight's meeting. It will make it clear what we can fit in.

It's basically defining the priorities for the committee. Do you want a report on the privacy of digital government services as well as this new study that you're embarking on?

3:40 p.m.

Conservative

The Chair Conservative Bob Zimmer

Is there any debate?

3:40 p.m.

NDP

Charlie Angus NDP Timmins—James Bay, ON

I'm very interested in following up on some of the discussions we've had in terms of the effect of how the algorithms in certain platforms are being used to distort public conversation and political discourse by moving people toward more and more extremist and false content, as opposed to being able to find accurate, credible sources.

I think we have not really looked at some of those algorithms, particularly with YouTube—we have put a lot of attention into Facebook—but these algorithms are having a very distinct impact on civil discourse. It's worth knowing how they work. Of course, with the larger issues we talked about of AI and algorithms, I leave it to my colleagues around the table if they feel other witnesses should be drawn, but I think we have a pretty good list of witnesses and not much time.

I'd say let's get down to it.

3:40 p.m.

Conservative

The Chair Conservative Bob Zimmer

Mr. Kent.

3:45 p.m.

Conservative

Peter Kent Conservative Thornhill, ON

The topic is worthy of a full study, and I think we're going to be talking about elements of that today in testimony from our witnesses. I think we should get to that.

However, given that we have barely six weeks of meaningful committee time left, I'm not sure we could get a formal study itself going. Certainly as we go through discussion of the outline of the draft report on digital government, if there are opportunities, then the more testimony we can hear, the better off the next parliament will be to really take it on seriously and chew on it.

3:45 p.m.

Conservative

The Chair Conservative Bob Zimmer

Mr. Erskine-Smith.

3:45 p.m.

Liberal

Nathaniel Erskine-Smith Liberal Beaches—East York, ON

Very briefly, obviously on May 28, we have a much larger committee hearing, with international colleagues coming from other countries. Part of that conversation is going to be about algorithmic accountability and transparency.

Whether or not it leads into a more fulsome report in any way—we might run out of time, and that's fine—I agree with Mr. Kent that regardless, it's worthwhile for us to hear the evidence. I think more than anything for us now, it's worthwhile leading into May 28 to listen to some of the experts. We might have some more pointed questions.

3:45 p.m.

Conservative

The Chair Conservative Bob Zimmer

That sounds good. Is there any further discussion on that?

Mr. Angus.

3:45 p.m.

NDP

Charlie Angus NDP Timmins—James Bay, ON

Well I'm a firm believer in always having reports so we can show what our committees have done. I recognize that the clock is ticking, so I'm willing to bend on that.

I agree with Mr. Erskine-Smith. This is about setting us up for the international grand committee so that we are fully prepared; we've had a chance to look at some other issues that we may bring to the table. To me, this is a good training session leading up to that.

Then, out of that committee, there may be an international statement, or we may feel the need to follow up with a further report. I will take that after we have the international committee and find out what our colleagues around the world think.

3:45 p.m.

Conservative

The Chair Conservative Bob Zimmer

Okay. Is there any more debate or any discussion? Seeing none, all in favour of the motion?

(Motion agreed to)

That's unanimous.

Thank you, Mr. Angus.

We'll get on to business. We have two witnesses here with us today: Mr. Ben Wagner, assistant professor, Vienna University of Economics, by teleconference; and Yoshua Bengio from Mila-Quebec Artificial Intelligence Institute. Dr. Bengio is the scientific director there and is here by teleconference from Montreal.

We'll start with you, Mr. Wagner. Go ahead for 10 minutes.

April 30th, 2019 / 3:45 p.m.

Professor Ben Wagner Assistant Professor, Vienna University of Economics, As an Individual

Thank you very much for the opportunity to speak here. I really appreciate the standing committee dealing with these issues. My name is Ben Wagner. I'm with the Privacy and Sustainable Computing Lab in Vienna.

We've been working closely on these issues for some time, specifically trying to understand how to safeguard human rights in a world where artificial intelligence and algorithms are becoming extremely common. This has included helping prepare Global Affairs Canada for the G7 last year. It was a great pleasure to work with colleagues there like Tara Denham, Jennifer Jeppsson and Marketa Geislerova.

The results that were produced in that, I think, are quite relevant also for this committee. You have the Charlevoix common vision for the future of artificial intelligence. Related to that, last year we were also working on—this is now in a Council of Europe context—a study on the human rights dimensions of algorithms, which I also think would be extremely helpful, especially if you're discussing studies and common challenges faced. Many of the common challenges you're discussing are already mentioned in these G7 documents and also in the statements developed by the Council of Europe.

To come back to a more general understanding of why this is important, artificial intelligence or AI is frequently thought of as some unusual or new thing. I think it's important to acknowledge that this is not a new and unusual technology. Artificial intelligence is here right now and is present in many existing applications that are being used.

It's increasingly permeating life-worlds, and it will soon be difficult to live in the modern world without having AI touch your life on a very daily basis. Its deep embedding in societies of course poses considerable challenges, but also opportunities. I think when we specifically look at the ethical and regulatory dimensions as, I believe, this committee is doing, it's extremely important to remember to try to ensure that all citizens have access to the opportunities of these technologies and that the opportunities provided by these technologies are not limited to just a select few.

With regard to how that can be done, there is a variety of sets of challenges and different issues. One of the most common ones is whether we talk about ethical framework or a more regulatory governance framework. I think it's important that they not be played off against each other. Ethical frameworks have their place. They're extremely important and they're extremely valuable, but of course they can't override or prevent governance frameworks from functioning. Indeed it would be difficult if they could. But if they function in parallel in a useful and sustainable manner, that can be quite effective.

The same is true even if you take a more governance-oriented human rights-based framework. It's very frequent that in these contexts different human rights are played off against each other. The right to freedom of expression is seen as being more important than the right to privacy. The right to privacy is seen as being more important than the right to free assembly, and so on. It's very important that in developing standards and frameworks in this context, we always consider all human rights and that human rights be the basic foundation for how we think about algorithms and artificial intelligence.

If you look at the Charlevoix documents that were developed last summer, you'll also note a considerable focus on human-centric artificial intelligence. While that's an extremely important design component, I think it's also important to acknowledge that human-centric focuses alone are not enough. At the same time, while we're seeing an increasing number of automated systems, lots of actors who are developing automated systems are not willing to admit how they're actually developing them or what exact elements are part of these systems.

It's often joked that some of the most frequently used examples in the start-up business plans of artificial intelligence are closer to Mechanical Turk—that is to say human labour—than to actual advanced artificial intelligence systems. This human labour often gets lost on the way or fails to be acknowledged.

This is also relevant in the context of extra-legal frameworks that are frequently applied when we talk about ethical frameworks, when we talk about frameworks that don't govern in the way that rule of law can. I think we need to be extremely careful there with regard to the extent to which frameworks like this actually come to replace or override the rule of law. That's specifically also the case where we see lots of conversations right now. I'm sure you will have heard about Google's AI board, which was recently created and then shut down within the space of just a week or two.

You'll notice that there's an attempt on the one hand, a great push by some actors, to try to be more ethical, but this ethical framework is not enough and the actors realize this, given the heavy criticism of this that you see, which again isn't to say that ethics isn't important or ethics is necessary but that ethics needs to be done right if it's going to have a meaningful impact on this. That means there's a strong role for the public sector as well. We can't allow ethics squashing. We can't allow ethics shopping. We can't allow for lowering the bar for the standards that we already have.

As I'm sure you are aware, the existing standards in many areas of public governance—when we're talking about existing norms related to how we govern technology and how we govern the activities of corporations, if you look at the business and human rights framework of the United Nations, for example—are already relatively weak. In some areas, there's a danger that these ethical principles will even go below existing business and human rights standards.

At the same time, to take a more positive note as well, there is an extremely important role for the public sector here, and I think it's again possible to commend the work specifically of Michael Karlin, who has done some fantastic work on algorithmic impact assessments for the Government of Canada. There's really an important measure to be seen there in how Canada is also taking a lead and really showing what is possible in the context of these algorithmic impact assessments. I can definitely commend his work there.

At the same time, when you look at the recent accusations now that Facebook has been breaking Canadian privacy laws, we have a serious issue related to implementation. Specifically, these breaches that have been of concern to numerous Canadian privacy regulators do raise a question. Can we just focus on the public sector alone and can the public sector alone lead the way, or do we need to take similar considerations for, at the very least, large, powerful private sector companies? Because in the world we live in right now, whether you're talking about opening a bank account, posting something on Facebook, talking to a friend online or even getting a pizza delivery, algorithms and AI are part of every step that takes place in that context.

Unless we're willing to limit the agency of these algorithms, there are two things when we consider those things democratically relevant. They increasingly begin to dominate us, and this is not a Terminator-like scenario where we need to be scared that the robots will come and take over the world.

It's rather that, through these technologies, a lot of power becomes concentrated in the hands of very few human beings, and these are precisely the types of situations that democratic institutions, such as the parliamentary committee that's hearing about this topic right now, were built to deal with. That is to ensure that the power of the few is spread to the power of the many, and to ensure that having access to AI and to the benefits of AI, but also to the foundational promise of AI that technology can make people's lives better, both inside Canada and beyond, is accessible to every human being, and that basic human rights provide the core foundation of how we develop and how we think about technology in the future.

Thank you very much for listening. I look forward to answering any questions you might have.

3:55 p.m.

Conservative

The Chair Conservative Bob Zimmer

Thank you.

We'll go next to Mr. Bengio in Montreal.

Go ahead, please, for 10 minutes.

3:55 p.m.

Professor Yoshua Bengio Scientific Director, Mila - Quebec Artificial Intelligence Institute

Hello. My expertise is in computer science. I've been a pioneer of deep learning, which is the area that has changed AI from something that was happening in universities into something that is now taking a big economic role and where there are billions of investments in industry.

In spite of the progress that's remarkable, it's also important to realize that the current AI systems are very far from human-level AI. In many ways they are weak. They don't understand the human context, of course. They don't understand moral values. They don't understand much but they can be very good at a particular task and that can be very economically useful, but we have to be aware of these limitations.

For example, if we consider the application of these tools in the military, a system is going to take a decision to kill a person and doesn't have the moral context a human can have to maybe not obey the order. There's a red line, which the UN Secretary-General has talked about, that we shouldn't be crossing.

Going back to AI and Canada's role, the thing that is interesting is we've played a very important role in development of the recent science of AI and clearly we are recognized as a scientific leader. We also are playing a growing role on the economic side. Of course, Canada is still dwarfed in comparison to Silicon Valley, but there is a very rapid growth of our tech industry regarding AI and we have a chance, because of our strength scientifically, to become not just a consumer of AI but also a producer, which means Canadian companies are getting involved and that's important to keep in mind as well.

The thing that's important, in addition to the scientific leadership and our growing economic leadership regarding AI, is moral leadership, and Canada has a chance to play a crucial role in the world here. We have already been noticed for this. In particular I want to mention the Montreal declaration for responsible development of AI to which I contributed and which is really about ethical principles.

Ten principles have been articulated with a number of subprinciples for each. This is interesting and different from other efforts in trying to formalize the ethical and social aspects of AI because in addition to bringing in experts in AI, of course there were scholars in the social sciences and humanities, but ordinary people also had a chance to provide feedback. The declaration was modified thanks to that feedback with citizens in libraries, for example, attending workshops where they could discuss the issues that were presented in the declaration.

In general for the future, I think it's a good thing to keep in mind that we have to keep ordinary people in the loop. We have to educate them so they understand issues because we will take decisions collectively, and it's important that ordinary people understand.

When I give talks about AI, often the biggest concerns I hear are about the effect of AI on motivation and jobs. Clearly, governments need to think about that and that thinking must be done quite a bit ahead of the changes that are coming. If you think about, say, changing the education system to adapt to a new wave of people who might lose their jobs in the next decade, those changes can take years, can take a decade to have a real impact. So it's important to start these things early. It's the same thing if we decide to change our social safety net to adapt to these potential rapid changes in the job market. These things should be tackled fairly soon.

I have another example of short-term concerns. I talked about military applications. It could be really good if Canada played more of a leadership role in the discussions that are currently taking place around the UN in the military use of AI and the so-called “killer drones” that can be used, thanks to computer videos, to recognize people and target them.

There's already a large coalition of countries expressing concern and working on drafting an international ban. Even if not all the countries—or even major countries such as the U.S., China or Russia—don't go with such an international treaty, I think Canada can play an important role. A good example is what we did in the nineties with anti-personnel mines and the treaty that was signed in Canada. That really had an impact. Even though countries such as the U.S. didn't sign it, the social stigma of these anti-personnel mines, thanks to the ban, has meant that companies gradually have stopped building them.

Another area of concern from an ethical point of view has to do with bias and discrimination, which is something that is very important to Canadian values. I think it's also an area where governments can step in to make sure there's a level playing field between companies.

Right now, companies can choose to use one approach—or no approach at all—to try to tackle the potential issues of bias and discrimination in the use of AI, which comes mostly from the data that those systems are trained on, but there will be a trade-off between their use of these techniques and, say, the profitability or the predictability of the systems. If there is no regulation, what's going to happen is that the more ethical companies are going to lose market share against the companies that don't have such high standards, and it's important, of course, to make sure that all those companies play on the same level.

Another example that's interesting is the use of AI not necessarily in Canada but in other countries, because these systems can be used to track where people are by, again, using these cameras all over the place. The surveillance systems, for example, are currently being sold by China to some authoritarian countries. We are probably going to see more of that in the future. It's something that is ethnically questionable. We need to decide if we want to just not think about it or have some sort of regulation to make sure that these potentially unethical uses are not something that our companies are going to be doing.

Another area that's interesting for government to think about is advertising. As AI becomes gradually more powerful, it can influence people's minds more efficiently. In using information that a company has on a particular user, a particular person, the advertising can be targeted in a way that can have much more influence on our decisions than older forms of advertising can. If you think about things like political advertising, this could be a real issue, but even in other areas where that type of advertising can influence our behaviour in ways that are not good for us—with respect to our health, for example—we have to be careful.

Finally, related again to targeted advertising is the use of AI in social networks. We've seen the issues with Cambridge Analytica and Facebook, but I think there's a more general issue about how governments should set the rules of the game to minimize this kind of influencing by, again, using targeted messages. It's not necessarily advertising, but equivalently somebody is paying for influencing people's minds in a way that might not agree with what they really think or what's in their best interests.

Related to social networks is the question of data. A lot of the data that is being used by companies like Google and Facebook, of course, comes from users. Right now, users sign a consent to allow those companies to do whatever they want, basically, with that data.

There's no real potential strength for bargaining between a single user and those companies, so various organizations, particularly in the U.K., have been thinking about ways to bring back some sort of balance between the power of these large companies and the users who are providing data. There's a notion of data trust, which I encourage the Canadian government to consider as a legal approach to try to make sure the users can aggregate—you can think of it like a union—where they can negotiate contracts that are aligned with their values and interests.

4:05 p.m.

Conservative

The Chair Conservative Bob Zimmer

Thank you.

We'll get to questions.

I just want to recognize that we have a special guest and his class with us today. Professor Michael Geist, I thank you for attending. You could probably appear at the same panel with us today, but you're going to take the easy road today and listen in.

Welcome, students.

4:05 p.m.

NDP

Charlie Angus NDP Timmins—James Bay, ON

We also have students from the University of Haifa.

4:05 p.m.

Conservative

The Chair Conservative Bob Zimmer

They're from Haifa, so we have students from across the water.

4:05 p.m.

NDP

Charlie Angus NDP Timmins—James Bay, ON

That's east of St. John's, I believe.