Evidence of meeting #146 for Access to Information, Privacy and Ethics in the 42nd Parliament, 1st Session.

Also speaking

Marc-Antoine Dilhac, Professor, Philosophy, Université de Montréal, As an Individual
Christian Sandvig, Director, Center for Ethics, Society, and Computing, University of Michigan, As an Individual

4:10 p.m.

Professor, Philosophy, Université de Montréal, As an Individual

Prof. Marc-Antoine Dilhac

There are many regulatory initiatives throughout the world. The European Union has just produced a normative and ethical framework with recommendations and assessment lists. It is, however, rare that countries resort to legislation.

For certain activities and sectors, a legislative framework can be relevant. Canada can govern certain algorithm-related activities. I am referring to data, of course, since that is what feeds the algorithms. We need to adopt a law creating a regime to govern the use of that data.

We also need to adopt laws with regard to education in order to determine to what extent we want education to be robotized. This is coming gradually, and once it is here, it will be a bit too late to legislate.

I'll give you an example. When a robot tracks a student's journey, it makes decisions that will follow the student. I refer to “robots” here because the interface is robotic, but those decisions are in fact based on algorithms. The question is: to what extent do we want to lock a student into a path where their progress is established or assessed constantly by an algorithm?

4:10 p.m.

NDP

Peter Julian NDP New Westminster—Burnaby, BC

Forgive me for interrupting you, but I don't understand. You are talking about a robot that follows a pupil or a student, but what is the robot doing in the example you gave?

4:10 p.m.

Professor, Philosophy, Université de Montréal, As an Individual

Prof. Marc-Antoine Dilhac

You are asking me what the purpose of the robot is?

4:10 p.m.

NDP

Peter Julian NDP New Westminster—Burnaby, BC

Yes.

4:10 p.m.

Professor, Philosophy, Université de Montréal, As an Individual

Prof. Marc-Antoine Dilhac

It makes it possible to personalize education. If a student is having difficulties, the algorithm can identify this very quickly and adapt the teaching content to the student. It's a big technological breakthrough. Canada, and North America in general, is still a bit behind on this technology, which is well established in Asia, notably in China and South Korea. I'm giving you examples of things that will be coming soon. It was on my mind because there was a summit on the use of digital technology in education in Montreal recently, attended by over 1,500 people.
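[Illustration: the testimony above describes an algorithm that flags a struggling student and adapts content accordingly. The minimal sketch below shows what such a decision rule might look like; the thresholds, names and data structure are invented for illustration and do not describe any deployed system.]

```python
# Hypothetical adaptive-learning rule. All names and thresholds are
# illustrative assumptions, not any real product's logic.
from dataclasses import dataclass, field

@dataclass
class StudentRecord:
    scores: list = field(default_factory=list)  # recent quiz scores, 0.0 to 1.0

def rolling_average(scores, window=3):
    """Average the most recent scores to estimate current performance."""
    recent = scores[-window:]
    return sum(recent) / len(recent) if recent else 0.0

def next_content(student):
    """Pick the next lesson tier from recent performance.

    This is the decision Professor Dilhac warns about: the algorithm
    constantly assesses the student and steers their path.
    """
    avg = rolling_average(student.scores)
    if avg < 0.5:
        return "remedial"      # flagged as struggling: easier material
    if avg < 0.8:
        return "standard"
    return "enrichment"        # consistently strong: harder material

student = StudentRecord(scores=[0.9, 0.4, 0.3])
print(next_content(student))  # -> "remedial"
```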

I'll give you one last example. Who should make the decisions? It is up to lawmakers to decide, after broad consultation, just as it is doctors who are responsible for limiting risk when they make a diagnosis and prescribe treatment.

Those are just a few examples, but there are many others.

4:10 p.m.

NDP

Peter Julian NDP New Westminster—Burnaby, BC

Thank you very much.

Mr. Sandvig, what is your opinion?

4:10 p.m.

Prof. Christian Sandvig

Well, I think I'm in sympathy with the remarks made by my colleague.

What I can add is that it's hard to foresee specific legislation, in part because we don't have a good definition of what we mean by artificial intelligence. It's really a loose term that covers all kinds of different things. Even the ideas within it that we're particularly concerned about, like machine learning... that term itself loosely covers a variety of approaches that are quite different.

One of the challenges for us is the success of computing: things that look like artificial intelligence now take all kinds of forms, in all kinds of domains. I think it's more likely that we will see legislation that specifically addresses a context and a use of technology, as opposed to an overarching principle.

A colleague of mine said that we are at “peak white paper”. We might be near peak principles as well. There are many statements of principles, and these are valuable. However, I think our task is to translate these into specific situations rather than to legislate all of AI, because I just don't know how to do it. There are some exceptions, though. There are a few areas where we might see overarching legislation that's of value.

One example would be that this committee has done some important work on the Cambridge Analytica scandal with its previous report. One of the challenges of that scandal for many countries around the world was that they had taken an approach to communication that said social media platforms essentially do nothing. Many governments, as you know, provide immunity from liability for online platforms or social media companies as conduits.... They did that in a very blanket way. We could say it was a terrible mistake on the part of the United States.

This is an area where you have one piece of legislation that affects a huge swath of activity, because it affects all use of computers to act as intermediaries or conduits between humans. The idea that you would give away freedom from liability seems like a bad one.

There are some areas where there could be broad legislative action, but I think they're rare. It's more likely that we'll see domain-specific approaches.

4:15 p.m.

NDP

Peter Julian NDP New Westminster—Burnaby, BC

Thank you.

4:15 p.m.

Conservative

The Chair Conservative Bob Zimmer

Next up, for seven minutes, Mr. Erskine-Smith.

4:15 p.m.

Liberal

Nathaniel Erskine-Smith Liberal Beaches—East York, ON

Thanks very much.

Thank you to you both.

The last answer is a useful segue in terms of regulation that could apply more broadly. We obviously have, in the GDPR, algorithmic explainability, and a right to explanation was referenced in some of the opening comments.

This committee has recommended a level of transparency and the ability of a regulator to look under the hood at times to assess whether that transparency has been sufficient.

Mr. Sandvig, you have expressed some skepticism about transparency, though that does appear, to me at least, to be an initial step that would apply more broadly, in the way that the next step might have to be sector specific.

I want to drill down on some of the skepticism with respect to transparency. You didn't mention algorithmic impact assessments in your opening comments. I wonder if the detailed work that is now being put into formulating AIAs is a better answer to transparency.

4:15 p.m.

Prof. Christian Sandvig

I'm going to remain skeptical about transparency, because I think that an algorithmic impact assessment isn't a transparency proposal. I think those proposals, as their title implies, owe a debt to environmental impact assessment. There may be elements of transparency required in producing such an assessment, but I didn't mention them in that section in part because I don't see them as predominantly a transparency approach.

I'd be happy to give you additional skepticism about algorithmic impact assessments, though. The challenge with them, for me, is that we might divide the harms of algorithms and AI into two groups: one group we could say is foreseeable, and one group we could say is not. I'm afraid that the second group is quite large. The algorithmic impact assessment work that I've seen really takes for granted that it's possible to have some assessment. When we look at many of the scandals involving computer systems, artificial intelligence and algorithmic systems, a number of them—although not all—seem to involve things that no one would have wanted. It could be that an impact assessment process would cause or require the developer to think more carefully about the system and to produce a different one, but it might also be that some of the results we're seeing are hard to imagine as being foreseen at all. I just worry about it.

4:15 p.m.

Liberal

Nathaniel Erskine-Smith Liberal Beaches—East York, ON

Let me jump in.

You have to accept that there is a transparency aspect to this. I'll use an example. In the public sector at the moment—and this is very recent for the Government of Canada—there is a questionnaire that any department employing automated decision-making needs to fill out. It's 80-some-odd questions. Based upon the answers to those questions, they're assigned basically a level 1, 2, 3 or 4 in terms of risk.

Then there are measures that need to be taken, including additional notice requirements. They have to engage experts to peer review the work, but in the initial impact assessment itself, there are questions about the purpose of the automated decision-making they intend to employ and the impact it's likely to have on a particular area, such as individual rights, the environment or the economy.

We could argue about its generality and whether it could be improved, but it seems, on the one hand, to provide a transparency mechanism, in that it requires disclosure of the purpose of the algorithm and potentially its inputs, its benefits and costs, and the potential externalities and risks. Then, depending on the outputs of that assessment, there are additional accountability mechanisms that could apply.
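[Illustration: the process just described maps questionnaire answers to a risk level and then to mitigation measures. The sketch below shows the shape of such a mapping; the cutoffs, question names and measures are invented for illustration and are not the Government of Canada's actual scoring rules.]

```python
# Toy questionnaire-to-risk-level mapping. Cutoffs and mitigation lists
# are invented assumptions, not the actual Treasury Board AIA rules.

def risk_level(answers):
    """Map questionnaire answers (each scored 0-3) to a risk level 1-4."""
    total = sum(answers.values())
    maximum = 3 * len(answers)
    ratio = total / maximum if maximum else 0.0
    if ratio < 0.25:
        return 1
    if ratio < 0.5:
        return 2
    if ratio < 0.75:
        return 3
    return 4

# Higher levels trigger heavier accountability measures.
MITIGATIONS = {
    1: ["plain-language notice"],
    2: ["plain-language notice", "human point of contact"],
    3: ["public notice", "peer review by experts", "human-in-the-loop"],
    4: ["public notice", "external peer review", "human makes final decision"],
}

answers = {"affects_rights": 3, "affects_environment": 1, "irreversible": 2}
level = risk_level(answers)
print(level, MITIGATIONS[level])  # -> 3 ['public notice', ...]
```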

If you haven't looked at it yet, my question would be this: If and when you do take a look at the Canadian model for the public sector in more detail, is that something you could transpose to the private sector and treat more like a securities filing—that is, to say, “This is going to be required for private sector companies above a certain threshold, and if there is any non-compliance where material terms are excluded purposefully or negligently, then there are penalties”? Would that be sufficient to meet at least a baseline of transparency and accountability generally, before we get into sector-specific regulations?

4:20 p.m.

Prof. Christian Sandvig

I absolutely will agree that there is a role for transparency somewhere. I'm just afraid of it as a proposal because it promises, I think, more than we can expect it to.

4:20 p.m.

Liberal

Nathaniel Erskine-Smith Liberal Beaches—East York, ON

That's fair.

4:20 p.m.

Prof. Christian Sandvig

So, I agree with that.

Let's look at a particular example. With regard to some of the algorithms that this committee was concerned about when writing its prior report, we know that there are already patents available that give broad outlines as to how the algorithms work.

For example, look at the Facebook newsfeed. Facebook—in public disclosures that have already been made—used to brag that the computation was based on three factors. As the years went by, it said that it was based on dozens of factors, and then it said that it was based on hundreds of factors. I think that we're now at over 300 factors. There's some value in disclosing these factors, but it's not clear that there's that much because—
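[Illustration: a multi-factor feed-ranking computation of the kind described above can be sketched as a weighted score over signals. The factor names and weights below are invented; real systems reportedly combine hundreds of machine-learned signals, which is partly why disclosing the list of factors alone says little.]

```python
# Toy multi-factor feed ranking. Factor names and weights are invented;
# production systems combine hundreds of learned signals, not a short
# hand-tuned list like this.

WEIGHTS = {
    "affinity": 2.0,    # how often the viewer interacts with the author
    "recency": 1.5,     # newer posts score higher
    "reactions": 1.0,   # engagement of any kind, negative or positive
}

def score(post):
    """Weighted sum over whatever signals the post carries."""
    return sum(WEIGHTS[f] * post.get(f, 0.0) for f in WEIGHTS)

posts = [
    {"id": 1, "affinity": 0.2, "recency": 0.9, "reactions": 0.1},
    {"id": 2, "affinity": 0.8, "recency": 0.3, "reactions": 0.9},
]
ranked = sorted(posts, key=score, reverse=True)
print([p["id"] for p in ranked])  # -> [2, 1]
```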

4:20 p.m.

Liberal

Nathaniel Erskine-Smith Liberal Beaches—East York, ON

But I guess my point is that it wouldn't just be about disclosing the factors. When we had Richard Allan, the VP for global policy, at our international committee in London last fall, he said, you know, if speech crosses the threshold for hate, obviously we should take it down, but if it's right up against that line, maybe we shouldn't encourage and promote it. And I'm sitting there thinking, yes, obviously you shouldn't promote that kind of content, but that's the algorithm. That's the newsfeed algorithm: promote reactions, regardless of what those reactions are. Even if they're negative reactions, they're looking for eyeballs. They're not looking for much beyond that when they want to generate profit.

If there is an algorithmic impact assessment and we are setting the rules of what that assessment should entail, I agree with you that there's an element of transparency and disclosure. It shouldn't just be about the inputs, necessarily. A company should also have to come to terms with what the potential adverse effects are, I think, and put that in such an assessment. They have to turn their minds to that.

Do you think that is a useful and additional layer of accountability and transparency?

4:20 p.m.

Prof. Christian Sandvig

Yes, I absolutely do. I just worry—and I don't know how large that category of unforeseeable harms is—that those harms won't be addressed by it. This is why I'm skeptical. It's not that it isn't a valuable proposal in itself.

4:20 p.m.

Liberal

Nathaniel Erskine-Smith Liberal Beaches—East York, ON

Thanks very much.

4:20 p.m.

Conservative

The Chair Conservative Bob Zimmer

Normally we have lots of time for these discussions, but I have to give everybody notice that we have time for only two more five-minute questions. Then we have to get into the discussion about the legal advice on the summons. I just want to forewarn you about that.

We have Monsieur Gourde for five minutes, and then Monsieur Picard.

4:25 p.m.

Conservative

Jacques Gourde Conservative Lévis—Lotbinière, QC

Thank you, Mr. Chair.

The ethical difficulty all these new platforms raise stems from the fact that individuals are not all necessarily aware of what artificial intelligence means, and the degree of acceptance may vary considerably.

I'll give you an example. Personally, it does not bother me that, thanks to algorithms, my favourite colour or the brand of my car is known. However, some people are very reluctant and absolutely do not want anyone to know anything about them on these platforms. Unfortunately for them, I think it is already too late. Private businesses acquire a lot of personal information about us. Their databases grow annually, and they can practically predict the date of our death by using algorithms.

Mr. Dilhac, you said that there were a lot of regulations but very few laws in this area. I can understand that it's quite difficult to adopt laws to manage tools that don't respect borders. Nevertheless, as legislators, we have to protect the population. Where should we start?

4:25 p.m.

Professor, Philosophy, Université de Montréal, As an Individual

Prof. Marc-Antoine Dilhac

You have to find a balance between the law and the contract. Today, the contractual mode is predominant when it comes to deploying artificial intelligence in applications. When you click on the button at the end of the contract on Facebook, you either accept it or you don't. You don't have time to read it.

If you look at the content of these contracts, you see that they contain totally unacceptable elements that should not be there. I'll take Facebook as an example. We examined Facebook's conditions of use a little. The company gives itself the authority to obtain your information through third-party applications.

Whether or not you are online using Facebook, whether or not you have registered with it, the company has given itself the right to go and get information about you from other applications. That type of thing is entirely possible through the use of the contract form. If, as a user, you accept that, well, it's too bad for you. That kind of contract should be regulated by law. That is precisely where a balance needs to be found. It isn't easy, but it is the government's job to find the balance between what should be in a contract between a service provider and a user, and what should be in the law.

What is the priority? There are a lot of things that need to be done, but I think that, in order to protect the public, your main and most serious priority should be the use that is made of the data. It isn't just the fact that you like the colour blue that is important; if one day you no longer like it, an algorithm may conclude, for instance, that you have a mental health problem or a disease you don't know you have, and that will be much more troublesome.

4:25 p.m.

Conservative

Jacques Gourde Conservative Lévis—Lotbinière, QC

Thank you.

4:25 p.m.

Conservative

The Chair Conservative Bob Zimmer

Thank you, Mr. Gourde.

Next up is Mr. Picard. That will close this off.

Go ahead, Mr. Picard.

May 2nd, 2019 / 4:25 p.m.

Liberal

Michel Picard Liberal Montarville, QC

Thank you.

My question is for Mr. Dilhac, and then Mr. Sandvig can answer.

The AI revolution comes with its share of unknowns, eliciting a negative reaction from the public. People fear the worst and conjure up all the bad things that could happen. Nevertheless, we lived through the Industrial Revolution at the beginning of the last century. In the 1960s and 1970s, we went through the electronic revolution, which gave us computers.

All things being equal, aren't all three events comparable in terms of their cultural, social, economic and political effects on society? Aren't there lessons we can learn, both positive and negative ones? Conversely, is it not possible to draw lessons because the three revolutions are so different?

4:25 p.m.

Professor, Philosophy, Université de Montréal, As an Individual

Prof. Marc-Antoine Dilhac

I'll try to keep my answer brief.

Yes, AI does come with unknowns. A modest stance would be to say that we don't quite know where we are headed. If we look at the past, we can find guideposts. You brought up the Industrial Revolution, which led to major advancements. However, the revolution occurred in the early 19th century—two centuries ago—without any groundwork being laid. It gave rise to more than a century of torment, more than a century of transitions and war, not to mention revolutions and, all told, millions of deaths. Government was completely overhauled.

If the Industrial Revolution taught us anything, it's that we need to address the period of transition that comes with technological advancement and new tools. Economist Joseph Schumpeter, whom you're probably familiar with, coined a relevant expression. He talked about the destructive transition, better known as creative destruction, meaning that something is destroyed in order to create new economic activities. Creative destruction can take a long time, and the destructive aspect is not necessarily appealing.

It's important to focus on the conditions for transition so that there are as few losers as possible. AI and the use of algorithms lead to tremendous progress, not just in medicine, but also with respect to repetitive tasks. That is something we should welcome, but we also need to prepare for the revolution.