Evidence of meeting #145 for Access to Information, Privacy and Ethics in the 42nd Parliament, 1st Session. (The original version is on Parliament’s site, as are the minutes.)

A video is available from Parliament.

Also speaking

Clerk of the Committee: Mr. Michael MacPherson
Ben Wagner, Assistant Professor, Vienna University of Economics, As an Individual
Yoshua Bengio, Scientific Director, Mila - Quebec Artificial Intelligence Institute

4:05 p.m.

Conservative

The Chair Conservative Bob Zimmer

Just a little.

Thank you for coming today.

We'll start off with Mr. Erskine-Smith for seven minutes.

4:05 p.m.

Liberal

Nathaniel Erskine-Smith Liberal Beaches—East York, ON

Thanks very much.

I want to talk more about regulation than ethics, particularly because of the most recent example where Facebook has said to our Privacy Commissioner, “Thanks for your recommendations; we're not going to follow them”, so I think we need stronger rules as far as they go.

Mr. Wagner, in a recent article, one of the three examples you use about AI is social media content moderation. At this committee we've talked about algorithmic transparency. In the EU it's algorithmic explainability. In that article you noted that it's unclear what that looks like. It's a new idea, obviously, in the sense that, when we've spoken to the U.K. information commissioner and had recent conversations with the EU data protection supervisor, they are just scaling up their capacity to address this issue and to understand what this looks like.

Having looked at this issue yourself and written about this, when we talk about algorithmic transparency, is there a practical understanding that we ought to have? It's one thing to make a recommendation on algorithmic transparency. What should it specifically look like?

4:05 p.m.

Prof. Ben Wagner

It's an extremely good question. At this point there are quite a lot of proposals out there on what it could be, but I think the first thing, to come straight to the point, is that transparency or explainability by itself is insufficient. Just saying we can explain what the system does is not enough. You have to have someone who is accountable in a meaningful way for the actions of these systems, and you need a governance framework around it.

When we're talking, especially in the context of social media, about having a framework for how content is moderated, that also means appeal mechanisms, transparency mechanisms and ensuring that there is some kind of external adjudication if there is a disagreement in these contexts, which adds an extra layer of complexity when we're talking about regulatory responses to this.

There is a challenge that once AI-type systems or automated systems have been embedded within organizations, over time those organizations become dependent on those systems, and it's very difficult to move beyond them or get out of them, so you need to be quite strong on the governance quite early to make sure that you're really having a strong and meaningful effect on how—

4:05 p.m.

Liberal

Nathaniel Erskine-Smith Liberal Beaches—East York, ON

I want to get to what more it could be. With respect to explainability and transparency, you mentioned Karlin here in Canada, and you referenced, too, what the Treasury Board has done with respect to algorithmic impact assessments on the public sector side. It occurs to me that, if we are serious about that level of transparency and explainability, it could mean a requirement for algorithmic impact assessments in the private sector akin to an SEC filing where non-compliance would come with some sanctions if information is not included. Do you think that is the level we should aim for?

4:10 p.m.

Prof. Ben Wagner

Yes. In principle, I think that's exactly where things should be going. That's exactly the type of proposal that I was trying to suggest as to where things should be moving. What I would add is that, of course, by doing so you don't want to stifle innovation, so you would need some kind of threshold above which the requirement applies, let's say for publicly traded companies or for companies of a certain size. Now, of course, depending on the amount of data those companies hold, those can also be very small companies, so you would have to have different types of thresholds for different types of organizations. Yes, I think that would be extremely helpful.
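
[Editor's note: The following is an illustrative sketch, not part of the testimony. It shows one way the tiered thresholds Professor Wagner describes could combine firm size with the volume of personal data held, so that a small firm holding a very large amount of data would still be covered. The cut-off numbers are invented for the example.]

```python
# Hypothetical applicability check for a private-sector algorithmic impact
# assessment requirement. The thresholds are invented for illustration only.
def impact_assessment_required(employees: int,
                               is_publicly_traded: bool,
                               personal_records_held: int) -> bool:
    if is_publicly_traded:
        return True                            # all publicly traded companies
    if employees >= 250:
        return True                            # "companies of a certain size"
    if personal_records_held >= 1_000_000:
        return True                            # small firm, but data-rich
    return False

# A twelve-person firm holding five million personal records would be covered.
print(impact_assessment_required(employees=12,
                                 is_publicly_traded=False,
                                 personal_records_held=5_000_000))
```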

4:10 p.m.

Liberal

Nathaniel Erskine-Smith Liberal Beaches—East York, ON

Mr. Bengio, you talked about bias and discrimination. You talked about advertising, the ability to influence more efficiently, and the use of AI in social networks. Each time, I think you were hinting at something. I mean, with respect to bias and discrimination, you explicitly hinted at the need for regulation, or you suggested the need for regulation.

4:10 p.m.

Prof. Yoshua Bengio

Yes.

4:10 p.m.

Liberal

Nathaniel Erskine-Smith Liberal Beaches—East York, ON

If it's not just an ethical framework...and I appreciate the work you've done with the Montreal declaration on ethics, but if we're talking regulation, is there something you would point this committee to in terms of how we ought to regulate algorithmic decision-making to solve some of these problems that you've identified?

4:10 p.m.

Prof. Yoshua Bengio

Yes. I'm not a legal expert, so there might be different ways that one could regulate. In some cases, maybe even current laws are sufficient and they need to pass the test of the courts. Let me give you an example in the case of bias and discrimination. Let's say you consider the insurance industry. You probably would need different regulations for different industries where the way in which issues come up might be different. In the case of insurance, there could be information that is used by the companies that could lead to, say, gender discrimination. Even though the variables used by the insurance company do not explicitly mention gender, or do not explicitly mention race, it might be something that the AI system infers implicitly. For example, if you live in some neighbourhood, maybe it's a good indication of your race in some places.

The good news is that the algorithms that can mitigate this exist, but there will be a trade-off between eliminating the implicit information about gender and the accuracy of the predictions made by those systems. Those predictions turn into dollars. For an insurance company, if I can make a very precise assessment of your risk, of how many dollars you will cost me, that is how I will determine your premium, so that precision is really worth money. There will be pressure from companies to use as much information as they can from their customers, but it might go against our legal principles. We need to make sure we find the right trade-off.
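
[Editor's note: The following is an illustrative sketch, not part of the testimony. Using entirely synthetic data, it shows the proxy effect Professor Bengio describes: even when gender is removed from a model's inputs, a correlated feature such as neighbourhood lets the predictions differ systematically by gender.]

```python
# Synthetic illustration of proxy discrimination: gender is never an input,
# yet a correlated "neighbourhood" feature lets the model's premium
# predictions differ by gender. All data below are invented.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

gender = rng.integers(0, 2, n)                       # protected attribute
# Neighbourhood correlates strongly with gender in this synthetic town.
neighbourhood = (gender + (rng.random(n) < 0.2)) % 2
claim_cost = 0.3 + 0.1 * gender + 0.05 * rng.standard_normal(n)

# The "fair" model drops gender but keeps the proxy.
design = np.column_stack([np.ones(n), neighbourhood])
coef, *_ = np.linalg.lstsq(design, claim_cost, rcond=None)
predicted_premium = design @ coef

# Predictions still differ systematically by the attribute that was excluded.
print("mean premium, group 0:", predicted_premium[gender == 0].mean())
print("mean premium, group 1:", predicted_premium[gender == 1].mean())
```

The mitigation algorithms mentioned above reduce this gap, typically at some cost in predictive accuracy, which is the trade-off Professor Bengio describes.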

4:10 p.m.

Liberal

Nathaniel Erskine-Smith Liberal Beaches—East York, ON

I want to pick it up there, because I think what it gets at is that we have existing rules from a human rights perspective.

4:10 p.m.

Prof. Yoshua Bengio

Yes.

4:10 p.m.

Liberal

Nathaniel Erskine-Smith Liberal Beaches—East York, ON

In a way, the reason transparency becomes the first step is that it's so hard to enforce any of these rules until a human rights commissioner can adequately assess what is going on. When we were asking questions of the information commissioner in the U.K. in November, her view was that her job was to make it explainable. Other regulators have other rules and perspectives and rights and values that they want to enforce, and it's then their job to take on their roles.

Is that the sense you get? Is that the right approach?

4:10 p.m.

Prof. Yoshua Bengio

That's a first step. We need to have clarity on how these processes are being put in place by companies like insurance companies, which use data to make decisions about people. We need to have some sort of access to that. It's understandable that they might want some secrecy, but government officials should be able to look into how they do it and make sure that it agrees with some of these principles that we put into law or in regulations or whatever. It doesn't mean that the system needs to explain every decision in detail, because that's probably not reasonable, but it's really important that they document, for example, what kind of data was used, where it came from, the way in which the data was used to train the system, and under what objective it was trained so that an expert can look at it and say that, for example, it's fine, or that there is a potential issue of bias and discrimination and maybe you should run such-and-such test to verify that there isn't; if there is an issue, then you should use one of the state-of-the-art techniques that can be used to mitigate the problem.
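
[Editor's note: The following is an illustrative sketch, not part of the testimony. It shows the kind of training documentation Professor Bengio describes, which an external expert could audit: what data was used, where it came from, what the system was trained to optimize, and what bias tests and mitigations were applied. The field names and values are hypothetical, not an established standard.]

```python
# Hypothetical documentation record of the kind an external expert could
# audit: not a per-decision explanation, but a description of the data,
# training objective, and bias checks. Field names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class TrainingAuditRecord:
    system_name: str
    data_sources: list[str]           # where the data came from
    input_features: list[str]         # what the system uses to decide
    excluded_attributes: list[str]    # e.g. protected attributes removed
    training_objective: str           # what the model was optimized for
    bias_tests_run: list[str] = field(default_factory=list)
    mitigations_applied: list[str] = field(default_factory=list)

record = TrainingAuditRecord(
    system_name="auto-insurance premium model (hypothetical)",
    data_sources=["claims history 2012-2018", "postal-code statistics"],
    input_features=["vehicle age", "driving record", "neighbourhood"],
    excluded_attributes=["gender", "race"],
    training_objective="minimize squared error on expected claim cost",
    bias_tests_run=["premium gap by inferred gender"],
    mitigations_applied=["none yet; flagged for review"],
)
print(record)
```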

4:15 p.m.

Conservative

The Chair Conservative Bob Zimmer

Thank you, Mr. Erskine-Smith.

We'll go next to Mr. Kent for seven minutes.

4:15 p.m.

Conservative

Peter Kent Conservative Thornhill, ON

Thank you, Chair.

Thanks to both of you for appearing before the committee today.

Professor Bengio, you're to be congratulated for the work that you did on the Montreal declaration for responsible development of artificial intelligence, but as this committee has learned and as the public is—I hope—increasingly aware, much of the development of artificial intelligence has been funded by the “data-opolies”, by the Facebooks and by the Googles, whose disregard for written and unwritten ethical guidelines and laws has, as we learn, brought them increasing notoriety.

Just in passing, and you may not be aware of it, when this committee visited Facebook's headquarters in Washington last year and asked whether the company would accept increased regulation in Canada, we were told, almost in a passing comment, that the sort of investment made in the AI hub in Montreal might not continue to be forthcoming, which hit me like a clunker. It was basically a threat from a “data-opoly” that Canada would be ostracized from AI investment should we increase regulation, even along the lines of the EU's GDPR or elements of it.

The question is to both of you. Large companies are already using and exploiting artificial intelligence in a variety of very commendable, wonderful ways, but also, in any number of ways that disregard ethical and legal guidelines. Should they be responsible for the misuse or the abuse of AI that occurs on their platforms?

4:15 p.m.

Prof. Yoshua Bengio

Let me go to your first question about the Facebook investment in Montreal.

Our AI research centres are mostly funded by the provincial and federal governments right now: Mila in Montreal, Vector in Toronto, Amii in Edmonton. The investment that was made by, say, Facebook to create a lab in Montreal, or to be a sponsor of organizations like Mila, is pretty small in comparison to the other investments that are happening.

I'm not really concerned. Facebook and other organizations have opened shop here because they see their interest in it. It makes it easier for them to recruit people they need for their research groups.

4:15 p.m.

Conservative

Peter Kent Conservative Thornhill, ON

You don't see it as an improper threat, or any possibility that partnerships could expose the AI that's being developed across the hub's various partners to these very large companies?

4:15 p.m.

Prof. Yoshua Bengio

No, they're doing a bit of research here and it's a small part of the bigger pot of research they're doing worldwide, which is somewhat disconnected from their actual business. Unless they want to use threats in an inappropriate way, to pull out of Canada right now would be to their disadvantage.

The other thing is, the investment they made in these other companies is still pretty small compared to the magnitude of the impact that we're talking about for all Canadians. Of course, we would be sad to see them go, and I don't think they would go, but I don't think we should even pay attention to this kind of statement.

4:15 p.m.

Conservative

Peter Kent Conservative Thornhill, ON

Okay.

Professor Wagner?

4:15 p.m.

Prof. Ben Wagner

I think threats of this kind are quite indicative of a general regulatory challenge, which is that every country wants to be the leading country on AI right now, and that doesn't always lead to the best regulatory climate for the citizens of those countries.

There seems to have been some kind of agreement between the Government of the Netherlands and the automobile industry that is developing AI for self-driving cars not to look so closely when they build a factory there, in order to ensure that, as a result of building that factory, they will bring jobs and investment to the country.

I think that the impact of AI and these technologies will be sufficiently transformative that while these large U.S. giants seem quite important right now, that may not be the case in a few years' time. A lot of the time I think the danger is actually the other way around. The public sector has historically invested a lot more than many people are aware of, and a lot of the fortunes of these large well-known companies are based on that. Of course, in political terms, it always looks more attractive to have Google, Facebook or Tesla as part of your local industries, because this also sends a political message.

I sense that this is part of the challenge that has led regulators down the path where we have real regulatory gaps. I would also caution against expecting just information commissioners or privacy regulators to be able to respond to this. It's also media regulators, people responsible for elections, and people responsible for ensuring that, on a day-to-day basis, competition functions.

All of these regulators are heavily challenged by new digital technologies, and we would be wise as a society to take a step back and make sure they're really able to do their job as regulators, that they have access to all of the relevant data. We may find that there are still regulatory gaps where we possibly need even additional regulatory authorities.

There, I think the danger is to say we just want progress; we just want innovation. If you do that a few times and keep allowing that to be a possibility.... It doesn't mean that you have to say no to people like Facebook or Google if they want to invest in your country, but if you start getting threats like this, I would see them as exactly what they are: a futile attempt to resist the change that is already coming.

4:20 p.m.

Conservative

Peter Kent Conservative Thornhill, ON

Thank you.

4:20 p.m.

Conservative

The Chair Conservative Bob Zimmer

Next up, we have Mr. Angus for seven minutes.

4:20 p.m.

NDP

Charlie Angus NDP Timmins—James Bay, ON

Thank you, gentlemen.

You're raising I think some very disturbing, broad questions that are so much beyond the scope of our committee and what we do as politicians. My day job is to get Mrs. O'Grady's hydro turned back on—her electricity. That's what keeps me elected.

However, when we're talking AI with you, we're talking about the potential of mass dislocation of employment. What would that mean for society? We have not even had conversations around this. There's the human rights impact, particularly of exporting AI to authoritarian regimes and what that would mean.

For me, trying to understand it, there are the rights of citizens and personal autonomy. The argument we were sold—and I was a digital idealist at one point—was that we'd have self-regulation on the Internet and that would give consumers choice; people would make their decisions and they'd click the apps that they like.

When we're dealing with AI, you have no ability as a citizen to challenge a decision that's been made, because it's been made by the algorithm. Whether or not we need to look at having regulation in place to protect the rights of citizens....

Mr. Wagner, you wrote an article, “Ethics as an Escape from Regulation: From ethics-washing to ethics-shopping?”

How do you see this issue?

4:20 p.m.

Prof. Ben Wagner

I'm sure you've guessed, from the title you mention, that I do see the rise of ethics as an escape from regulation—and I explicitly wouldn't include the Montreal declaration in this, because I don't think it's a good example of it. There are certainly many cases of ethical frameworks that provide no clear institutional framework beyond them. A lot of my work has been focused, essentially, on getting people to either do human rights and governance, or, if they will do ethics, then to take ethics seriously and really ensure that the ethical frameworks developed as a result of that are rigorous and robust.

At the end of the article you mentioned, there is literally a framework of criteria on how to go through this: external participation, external oversight, transparent decision-making and non-arbitrary lists of standards. Ethics is not a substitute for fundamental rights.

To come back to the example you mentioned on self-regulation on the Internet and how we all assumed that that would be the path that would safeguard citizens' autonomy, I think that's been one of the key challenges. This argument has been misused so much by private companies that then say, for example, “Well, we have a million likes, and you only have 500,000 votes. Surely our likes are worth as much as your votes.” I don't even need to explain that in great detail. It's just this logic that lots of clicks and lots of likes can surely be seen as the same thing as votes. This, in a democratic context, is extremely difficult.

Lastly, you specifically mentioned exporting AI to authoritarian regimes. I think there is a strong link between the debates we have about exporting AI to authoritarian regimes and how we trade in and export surveillance technologies. There are a lot of technologies that are extremely powerful that are getting into the wrong hands right now. Limiting that, or ensuring through agreements like the Wassenaar Arrangement and others that there is dual-use control for certain types of technology, will become increasingly important.

We have existing mechanisms. We have existing frameworks to do this, but we have to be willing to implement them and sometimes also to say that we will do it collectively as a group, even if this means having slightly less—and I emphasize “slightly less”—economic growth as a result; in exchange, we can also say we're taking more leadership on this issue. Otherwise, it's going to be very difficult to see how these short-term economic gains are going to meaningfully provide for a human rights environment we would want to stand behind in the years and decades to come.

4:25 p.m.

NDP

Charlie Angus NDP Timmins—James Bay, ON

Thank you.

I'm a music buff. Every morning I wake up, and YouTube has selected music for me. Their algorithms are pretty good, and I watch them. I'm also a World War II buff, and YouTube offers me all kinds of documentaries. I see some of these documentaries on the “great historian” David Irving, who is a notorious Holocaust denier, and they come up in my feed.

Now, I have white hair; I know what David Irving is, but if I'm a high school student, I don't. It has a lot of likes because a lot of extremists are promoting it. The algorithm is pushing us towards seeing content that would otherwise be illegal.

In terms of self-regulation, I look at what we have in Canada. In Canada, we have broadcast standards for media. That doesn't mean we don't have all manner of debate and crazy commentary, and people are free to do it, but if someone was on radio or television promoting a Holocaust denier, there would be consequences. When it's YouTube, we don't even have a proper vehicle to hold them to account.

Again, in terms of the algorithms pushing us towards extremist content, do you believe that we should have some of the same kinds of legal obligations that apply to regular broadcast media? You're broadcasting this. You have an obligation. You have to deal with this.