Evidence of meeting #146 for Access to Information, Privacy and Ethics in the 42nd Parliament, 1st Session. (The original version is on Parliament’s site, as are the minutes.) The winning word was transparency.

A video is available from Parliament.

Also speaking

Marc-Antoine Dilhac  Professor, Philosophy, Université de Montréal, As an Individual
Christian Sandvig  Director, Center for Ethics, Society, and Computing, University of Michigan, As an Individual

3:30 p.m.

Conservative

The Chair Conservative Bob Zimmer

We'll call to order meeting 146 of the Standing Committee on Access to Information, Privacy and Ethics. Pursuant to Standing Order 108(2), we're continuing our study on the ethical aspects of artificial intelligence and algorithms.

In the first hour today, we have with us, as individuals, witnesses Marc-Antoine Dilhac, professor of philosophy, Université de Montréal, and Christian Sandvig, director of the Center for Ethics, Society, and Computing, University of Michigan.

Also, as we all know, we're going to discuss in camera, pursuant to Standing Order 108, a briefing by the law clerk of the House on the power of committees to summon witnesses. That will be our discussion following this hour.

We'll start off with Marc-Antoine for 10 minutes.

3:30 p.m.

Prof. Marc-Antoine Dilhac Professor, Philosophy, Université de Montréal, As an Individual

Good afternoon.

Thank you for inviting me to share with you some of the reflections on the ethical issues of artificial intelligence that we set out in Montreal.

I was asked to speak about the Montreal Declaration for a Responsible Development of Artificial Intelligence, which was presented in 2018, and that is the document I will address.

First I will outline the context in broad strokes. The technological revolution that is taking place is causing a profound change in the structure of society by automating administrative processes and decisions that affect the lives of citizens. It also changes the architecture of choice by determining our default options, for instance. And it transforms lifestyles and mentalities through the personalization of recommendations, access to automated online health advice, the planning of activities in real time, forecasting, and so on.

This technological revolution is an unprecedented opportunity, it seems to me, to improve public services, correct injustices and meet the needs of every person and every group. We must seize this opportunity before the digital infrastructure is completely established, leaving us little or no leeway to act.

To do so we must first establish the fundamental ethical principles that will guide the responsible and sustainable development of artificial intelligence and digital technologies. We must then develop standards and appropriate regulations and legislation. In the Montreal Declaration for a Responsible Development of Artificial Intelligence, we proposed an ethical framework for the regulation of the artificial intelligence sector. Although it is not binding, the declaration seeks to guide the standardization, legislation and regulation of AI, or artificial intelligence. In addition, that ethical framework constitutes a basis for human rights in the digital age.

I will quickly explain how we developed that declaration. This may be of interest in the context of discussions about artificial intelligence in our democratic societies. Then I will briefly present its content.

The declaration is first and foremost a document produced through consultation with various stakeholders. It was an initiative of the Université de Montréal, which received support from the Fonds de recherche du Québec and from the Canadian Institute for Advanced Research, or CIFAR, in the rest of Canada. Behind this declaration was a multidisciplinary, inter-university working group drawn from philosophy, ethics, the social sciences, law, medicine and, of course, computer science. Mr. Yoshua Bengio, for instance, was a member of this panel.

This university group then launched, in February 2018, a citizens' consultation process, in order to benefit from the field expertise of citizens and AI stakeholders. It organized over 20 public events and discussion seminars or workshops over eight months, mainly in Quebec, but also in Europe, Paris and Brussels. More than 500 people took part in these workshops in person. The group also organized an online consultation. This consultation process was based on a prospective methodology applied to ethics; our group invited workshop participants to reflect on ethical issues based on prospective scenarios, that is to say scenarios about the near future of the digital society.

We organized a broad citizen consultation with various stakeholders, rather than consulting experts alone, for several reasons. I will mention three of them briefly.

The first reason is that AI is being deployed in all societies and concerns everyone. Everyone must be given an opportunity to speak out about its deployment. That is a democratic requirement.

The second reason is that AI raises some complex ethical dilemmas that touch on many values. In a multicultural and diverse society, experts alone cannot make decisions on the ethical dilemmas posed by the spread of artificial intelligence. Although experts may clarify the ethical issues around AI and establish the conditions for a rational debate, they must design solutions in co-operation with citizens and all parties concerned.

The third reason is that only a participative process can sustain the public's trust, which is necessary to the deployment of AI. If we want to earn the population's trust and give it good reasons to trust the actors involved with AI, we have the duty to involve the public in the conversation about AI. That isn't a sufficient condition, but it is a necessary condition to establish trust.

I should add that although industry actors are very important as stakeholders, they must stop trying to write the ethical principles in place of citizens and experts, or the legislation that should be drafted by parliaments. That attitude is very widespread, and it can also undermine the public trust that needs to be fostered.

Let's talk about the content of the declaration. The consultation had a dual objective. First, we wanted to develop the ethical principles and then formulate public policy recommendations.

The result of that participatory process is a very comprehensive declaration that includes 10 fundamental principles, 60 subprinciples or proposals to apply the principles, and 35 public policy recommendations.

The fundamental principles touch on well-being, autonomy, privacy and intimacy, solidarity—a principle not found in other documents—democracy, equity, diversity, responsibility, prudence and sustainable development.

The principles have not been ranked by priority. The last principle is no less important than the first, and depending on the circumstances, one principle may be considered more relevant than another. For instance, although privacy is generally considered a matter of human dignity, the privacy principle may be given less weight for medical purposes if two conditions are met: the use must contribute to improving the health of patients—the well-being principle—and the collection and use of private data must be subject to individual consent—the autonomy principle.

The declaration is thus not a simple checklist; it also makes it possible to establish standards and checklists by sector of activity. The privacy regime, for instance, will not be the same in every sector; it may vary depending on whether we are talking about the medical sector or the banking sector.

The declaration also constitutes a basis for the development of legal norms, such as legislation.

Like ours, other similar declarations, such as the Declaration of Helsinki on bioethics, are non-binding. Our declaration simply lists the principles that actors in AI development should commit to respecting. For us, the task now is to work on transposing those principles into industrial standards that also cover the deployment of artificial intelligence in public administrations.

We are also working on the transposition of those principles into human rights for the digital society. That is what we are going to try to establish through a citizens' consultation which we hope to conduct throughout Canada.

Thank you.

3:40 p.m.

Conservative

The Chair Conservative Bob Zimmer

Thank you very much, Marc-Antoine.

Next up we'll have Christian Sandvig for 10 minutes.

Go ahead.

3:40 p.m.

Professor Christian Sandvig Director, Center for Ethics, Society, and Computing, University of Michigan, As an Individual

Thank you very much.

I appreciate the opportunity to address the committee. To frame my remarks before I begin with the substance of my comments, I just want to say that I'm delighted that the committee is holding these hearings. We're at a moment where there is increasingly widespread concern about the harms that might be possible as a result of these systems, meaning artificial intelligence and algorithms.

I thought what I could offer you in my brief opening remarks would be an assessment of what governments might do in this situation. What I'd like to do with my opening statement is to discuss five areas in which I believe there is the most excitement among researchers, practitioners and policy-makers right now. I offer you my assessment of these five areas. Many of them are areas that you at least preliminarily addressed in your earlier reports, but I think I have something to add.

The five areas I'll address are the following: transparency, structural solutions, technical solutions, auditing and the idea of an independent regulatory agency.

I'll start with transparency. By far the most excitement in practice and policy circles right now, with regard to algorithmic automation, centres on the idea that we can achieve justice through transparency. I have to tell you, I'm quite skeptical of this area of work. Many of the problems that we worry about in the area of artificial intelligence are simply not amenable to transparency as a solution. One example is that we're often not sure that the problems are amenable to individual action, so it is not clear that disclosing anything to individuals would help ameliorate any difficulty.

For example, a problem with a social media platform might require expertise to understand the risk. The idea of disclosing something is in some ways regressive because it demands time and expertise to consider the sometimes quite arcane and complicated intricacies of a system. In addition, it might not be possible to perceive the risk at all from the perspective of the individual.

A tenet of transparency is that what is revealed has to be matched to the harm we hope to detect and prevent, and it's just not clear that we know how to do that matching.

Sometimes we discuss transparency as a tactic that we use so that we can match what is revealed to an audience that will listen. This is often something that is missing from the debates right now on transparency and artificial intelligence. It's not clear who the audience would be that we need to cultivate to understand disclosures of details of these systems. It seems like they must be experts and it seems like deconstructing these systems would be quite time consuming, but we don't know who exactly they would be.

A key problem that's really specific to this domain that is sometimes elided in other discussions is that algorithms are often not valuable without data and data are often not valuable without algorithms. So if we disclose data we might completely miss an ethically or societally problematic situation that exists in the algorithm and vice versa.

The challenge there is that you also have a scale problem if you need both the data and the algorithm. It's often not clear just in practical terms how you would manage a disclosure of such magnitude or what you would do once you receive the information. Of course, the data on many systems also is continually updated.

Ultimately, as I think you have gathered from my remarks, I'm pessimistic about many of the proposals about transparency. In fact, it's important to note that when governments pass transparency requirements, they can often be counterproductive in this area, because they create the impression that something has happened; but without some effective mechanism of accountability and monitoring matched to the transparency, it may be that nothing has happened. So it may actually harm things to make them transparent.

An example of a transparency proposal that's gotten a lot of excitement recently would be dataset labels that are somehow made equivalent to food labels, such as nutrition facts for datasets or something like that. There are some interesting ideas there. There would be a description of biases or of ingredients that have an unusual provenance—where did the data come from?—but the metaphor is that tainted ingredients produce tainted food. Unfortunately, with the systems we have in AI, it's not a good metaphor, because it's often not clear, without some indication of the use or context, what the data are meant to do and how they will affect the world.

Another attractive, exciting idea in this space of transparency is the right to explanation, which is often discussed. I agree that it's an attractive idea, but it's often not clear that processes are amenable to explanation. Even a relatively simple process—it doesn't have to be with a computer; it could be the process by which you decided to join the House of Commons—might be a decision that involves many factors, and simply stating a few of them doesn't capture the full complexity of how you made that decision. We find the same things with computer systems.

The second big area I'll talk about is structural solutions. I think this was covered quite well in the committee's previous report, so I'll just say a couple of things about it.

The idea of a structural solution might be that because there are only a few companies operating in some of these areas, particularly in social media, we might use competition or antitrust policy to break up monopoly power. That, by changing the structure and incentives of the sector, could lead to the amelioration of any harms we foresee with the systems.

I think it is quite promising that if we change the incentives in a sector we could see changes in the harms that we foresee; however, as your report also mentioned, it's often not clear how economies of scale operate in these platforms. Without some quite robust mechanism for interoperability among systems, it's not clear how an alternative that's an upstart in the area of social media or artificial intelligence—or really any area where there is a large repository of data required—would be effective.

I think that one of the most exciting things about this area might be the idea of a public alternative in some sectors. Some people have talked about a public alternative to social media, but it still has this scale problem, this problem of network effects, so I guess we could summarize that area by saying that we are excited about the potential but we don't know exactly how to achieve the structural change.

One example of a structural change that people are excited about, and that is more modest, is the information fiduciary proposal, whereby a government might create a different set of incentives simply by requiring it. It's a little challenging to imagine, because it does seem that we are most successful with these proposals when we have a domain with strong professionalization, such as doctors or lawyers.

The third area I will discuss is the idea of a technical solution to problems of AI and algorithms. There's a lot of work currently under way that imagines we can engineer an unbiased, fair or just system and that this is fundamentally a technical problem. While it's true that we can imagine creating systems that are more effective in some ways than the systems we have, ultimately it's not a technical problem.

Some examples that have been put forward in this area include the idea of a seal of approval for systems that meet some sort of standard, which might be done via testing and certification. This is definitely an exciting area, but only a limited set of the problems we face would fall into a domain that could be tested systematically and technically solved. Really, these are societal problems, as the previous witness stated.

The fourth area I'll introduce is the idea of auditing, which I saw mentioned only briefly in the committee's last report. The auditing idea is my favourite. It actually comes from work to identify racial discrimination in housing and employment. The idea of an audit is that we send two testers to a landlord at roughly the same time and ask for an apartment. The testers then see if they get different answers, and if they get different answers, something is wrong.

The exciting thing about this area is that we don't need to know the landlord's mind or to explain it. We simply figure out whether something is wrong. There's a lot that legislatures can do in the area of testing. They can protect third parties that wish to investigate these systems, or they can create processes akin to software's “bug bounties”, but with bounties for fairness or justice. This is, I think, the most promising area that governments can use to intervene.
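A minimal sketch of the paired-testing idea described in the testimony above, assuming a hypothetical applicant record and a hypothetical decision rule standing in for whatever opaque system an auditor would probe; it is illustrative only and does not represent any real platform's interface.

```python
# Hypothetical sketch of a paired "audit" test: two matched applications that
# differ only in a protected attribute are submitted to an opaque decision
# system, and only the outcomes are compared.

from dataclasses import dataclass, replace

@dataclass
class Application:
    income: int
    credit_score: int
    group: str  # the protected attribute varied between the two testers

def score_application(app: Application) -> bool:
    # Stand-in for the system under test; deliberately biased here so the
    # paired test has something to find. In a real audit this would be a
    # request to the platform being probed, whose internals are never seen.
    return app.credit_score >= 650 and not (app.group == "b" and app.income < 60_000)

def paired_test(base: Application) -> bool:
    tester_a = replace(base, group="a")
    tester_b = replace(base, group="b")
    # If two otherwise identical testers get different answers, something is
    # wrong, and we learned that without any transparency into the system.
    return score_application(tester_a) != score_application(tester_b)

print(paired_test(Application(income=55_000, credit_score=700, group="")))  # True
```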

Finally, I'll conclude by just mentioning that there is also talk of a new agency, a judicial or administrative commission, to handle the area of AI. I think this is an interesting idea, but the challenge is that it just postpones many of the comments I made in the earlier parts of my remarks. We often would imagine such an agency doing some of the same things that I've already discussed, so the question then becomes, what is different about this area that requires processes that are not the processes of the legislature and the standard law-making apparatus—the courts—that we already have? The argument has been made that expertise makes this different, but it's hard to sustain that argument, because we often do see plain old legislatures making rules about quite complicated areas.

I'll conclude there. I'm happy to take your questions.

3:50 p.m.

Conservative

The Chair Conservative Bob Zimmer

Thank you very much, both of you, for your testimony.

We'll start off with seven-minute questioning rounds.

We'll start with Ms. Vandenbeld.

3:50 p.m.

Liberal

Anita Vandenbeld Liberal Ottawa West—Nepean, ON

Thank you very much, Chair, and thank you, both of you, for your very informative remarks.

One of the things I have been wondering about—actually, Professor Dilhac, you mentioned this when you talked about involving civil society—is that for most people this is a very misunderstood area, and that includes people who are not technical experts and even, I would imagine, some technical experts.

First of all, you have popular culture myths around artificial intelligence that go back decades. Many people aren't aware of how prevalent it already is in our day-to-day lives. If you have systems whereby you involve civil society, legislators or people who are not technical experts to oversee this, how do you ensure that you're not then taking the biases that exist in these systems and in the public and just replicating all of those once again and amplifying the same bias?

3:50 p.m.

Professor, Philosophy, Université de Montréal, As an Individual

Prof. Marc-Antoine Dilhac

Thank you for that excellent question.

There is no miracle solution.

The idea is to get all of civil society working together: the computer science, ethics and social science experts, as well as the industry stakeholders. There is more than one kind of useful expertise. We will manage to reduce potential bias and preconceived ideas about the most vulnerable people, among others, by bringing the various experts together and getting them to talk to each other. The reason is that the discussion among those experts will rationalize the debate. It may be a philosopher's preconception to believe in the rationalization of debate. However, in the context of meetings in Parliament or with citizens in libraries—we have done a lot of meetings in public libraries—dialogue leads to a rationalization of arguments and allows people to identify the prejudices they may hold. That collaboration is really important.

3:55 p.m.

Liberal

Anita Vandenbeld Liberal Ottawa West—Nepean, ON

Professor Sandvig, did you want to respond to that?

3:55 p.m.

Prof. Christian Sandvig

I'm happy to defer to my colleague.

3:55 p.m.

Liberal

Anita Vandenbeld Liberal Ottawa West—Nepean, ON

Thank you.

Going back to what you said about transparency, Professor Sandvig, because this is something we've heard a lot about in this committee: the idea is that if people know where the data is coming from and they understand how the algorithms work, this would allow a certain amount of oversight, as it does in many other areas.

You're suggesting that transparency alone would not actually have that effect. In order to have audits and in order to have a regulator, obviously the information needs to be available, even if you were to audit that information. Are you saying that we need transparency but in such a way that we know who the “who” is in terms of who is actually going to be reviewing? Or is it the public in general—civil society—that would have to do that?

3:55 p.m.

Prof. Christian Sandvig

Thank you very much for this question, because I think it exposed a weakness in my own explanation.

In the social science literature, they use the term “audit”, but they don't use it in the financial sense. The audit simply describes the process I outlined where two testers, say one black and one white or one woman and one man, ask a landlord for a room or an employer for a job. They call that an audit, but it's quite confusing, because obviously the tax authorities also have an audit and it means something else.

I think the reason audits are exciting to me is that you can have an audit without transparency. Remember that I said you don't get to see the inside of the landlord's brain. That's why the audit is exciting. We can audit platforms like Facebook and Google without transparency by simply protecting third parties like researchers, investigative journalists and civil society organizations like NGOs, who wish to see if there are harms produced by these systems. To do that, they would act like the testers in my example. They would act as users of the systems and then aggregate these data to see if there were patterns that were worrying.

Now, this has some shortcomings. For example, you might have to lie. Auditors lie. The people who go in to ask a landlord for a room don't actually want a room; they're testers working for an NGO or a government agency. So you might have to lie; you might have to waste the landlord's time, but not very much.

Usually on systems like large Internet platforms, it's hard to imagine that an audit would be detectable. However, it's possible that you would provide false information that makes it into the system somehow, because you aren't actually looking for a job; you're just testing. There are definitely downsides.

As I mentioned, you also need some sort of system to continue...after your audit finds that there is a problem. For example, if you found that there was something worrying, you would then need some other mechanism like a judicial proceeding, say, involving some disclosure. You could say that transparency comes later through another process, if you needed to really understand how the system works. However, you might never need to understand that. You might just need to detect that there is a harm and tell the company they have to fix it, and they're the ones that have to worry about how.

This is why I'm excited about auditing, because it gets around the problems of transparency.
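As a rough illustration of the aggregation step described in this answer, the sketch below submits many hypothetical matched pairs to a stand-in decision function, in the way an outside researcher, journalist or NGO acting as ordinary users might, and flags a gap in favourable outcomes between the two groups. The fake_system, fake_profile and threshold values are all assumptions for the example, not any real platform or legal standard, and the paired probes follow the same logic as the earlier sketch.

```python
# Hypothetical sketch of an outside audit: many matched probes are submitted
# as if by ordinary users, and outcomes are aggregated to look for a worrying
# pattern, with no access to the system's internals.

import random

def run_audit(score_fn, make_profile, n_pairs=1000, threshold=0.05):
    favourable = {"a": 0, "b": 0}
    for _ in range(n_pairs):
        base = make_profile()
        for group in ("a", "b"):
            # Matched pair: identical profiles except for the group attribute.
            if score_fn({**base, "group": group}):
                favourable[group] += 1
    rate_a = favourable["a"] / n_pairs
    rate_b = favourable["b"] / n_pairs
    gap = abs(rate_a - rate_b)
    # A gap beyond the chosen threshold is the signal to escalate to some
    # other mechanism, such as a judicial or regulatory proceeding.
    return {"rate_a": rate_a, "rate_b": rate_b, "gap": gap, "flag": gap > threshold}

def fake_system(profile):
    # Deliberately biased stand-in for the platform being audited.
    return profile["income"] > 50_000 and profile["group"] != "b"

def fake_profile():
    return {"income": random.randint(20_000, 100_000)}

print(run_audit(fake_system, fake_profile))
```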

3:55 p.m.

Liberal

Anita Vandenbeld Liberal Ottawa West—Nepean, ON

Professor Dilhac.

3:55 p.m.

Professor, Philosophy, Université de Montréal, As an Individual

Prof. Marc-Antoine Dilhac

Yes, I'd like to add something.

I am in complete agreement with what has just been said about transparency. I think we overestimate transparency. The mechanism to test the algorithms is probably the best way to proceed to identify problems.

Nevertheless, I would use the term “audit” in both senses of the word: first in the sense that was just used, and also in the sense that competent authorities must be able to examine algorithms to detect the problematic parameters of a decision. We could use both, but I think the approach that was just proposed is an excellent process.

4 p.m.

Conservative

The Chair Conservative Bob Zimmer

Thank you.

Next up, for seven minutes, is Mr. Kent.

4 p.m.

Conservative

Peter Kent Conservative Thornhill, ON

Thank you, Chair.

Thank you both for your participation today.

Professor Sandvig, with regard to regulation, many lay observers, many with concerns about what they read, what they hear about algorithmic use and AI, may feel—and given some of the testimony we've heard, for example, about Cambridge Analytica, about some of the big data-opoly use of AI to affect and direct consumer retail attitudes, social attitudes and so forth, sometimes I feel..... Is there an element of being a little late to put some of the smoke back into the bottle in terms of regulation? Can some of the inappropriate or unethical applications of AI and algorithms to date be reverse regulated?

I would throw that out to both of you, but Professor Sandvig first.

4 p.m.

Prof. Christian Sandvig

Well, my background before going to graduate school was as a software engineer, and my memory of that time as a software engineer fits with what many commentators are saying now: that software engineering does not have a safety culture. It does not have a culture that we would analogize to industries like, for example, airline travel.

I guess the question is, can we imagine changing something that's big and that has already happened? You mentioned recent revelations about Facebook and the Cambridge Analytica scandal. Can we imagine replacing something that looks extremely bad with something that looks extremely safe?

Again, I used the example of air travel, but I think it's possible.... I mean, I can't imagine that the Wright brothers had a safety culture, right? There was some way in which government regulation started slowly and accreted. We have an industry that's regulated, and we now consider—perhaps the Max 8 is an exception—this industry to be a safe industry. We're not concerned about air travel.

I would say that this is the trajectory we need for these industries. We need a sense that it's the role of the government to make sure that the public is safe, and if we did it with other dangerous things, we can do it with social media.

4 p.m.

Conservative

Peter Kent Conservative Thornhill, ON

You gave the example of the 737 Max 8. Is that a failure to audit after the first tragedy?

4 p.m.

Prof. Christian Sandvig

Well, this isn't a hearing on airline safety, so you'll forgive me for saying I'm not sure. I think that in general I would point to the level of comfort that people currently have with air travel as my main point, even despite the Max 8s. I think we could imagine something like that for social media platforms or artificial intelligence.

We're currently very far away from it, so I don't mean to at all minimize your concern. Recent news reports show that many of these companies are at rock bottom in terms of consumer trust in their operations and customer satisfaction, and I mean really below just about any other industry. We're looking at some of the most hated industries.

4 p.m.

Conservative

Peter Kent Conservative Thornhill, ON

Professor Dilhac.

4 p.m.

Professor, Philosophy, Université de Montréal, As an Individual

Prof. Marc-Antoine Dilhac

In the Montreal Declaration for a Responsible Development of Artificial Intelligence, for instance, one of the principles mentioned is prudence. The idea behind that is to state that there are security and reliability criteria for the algorithms, but not only for the algorithms. I would like to expand on this, because the way in which an algorithm is put in place within a system is important.

There is a whole system around an algorithm: other algorithms, databases, and their use in a specific context. In the case of a platform, it is easy, since you have an individual user behind a screen. However, when you are talking about aircraft or a complex enterprise, you have to take the entire system into account.

Here, the reliability involved is that of the system and not only that of the algorithm. The algorithm does its work. The issue is to see how the data is being used, what types of decisions are made and what human control there is over those decisions or predictions. From that perspective, it seems extremely important to me that the algorithmic systems—not simply the algorithm—be audited. I'm talking about audits in the sense where people really look into the architecture of the system to find its possible shortcomings.

In the case of aircraft, since you mentioned those two recent tragic air catastrophes, we must, for instance, ensure in advance that human beings keep control, even if they may make mistakes. That is not the issue; human beings make mistakes. That is precisely why we could also put algorithmic aids in place. Accepting that to err is human while maintaining human control over the machine is part of what we need to discuss, and it is certainly an essential factor if we are to identify the problems with a given algorithmic system.

4:05 p.m.

Conservative

Peter Kent Conservative Thornhill, ON

Thank you.

With regard to the social auditing of AI development and algorithms, we've seen, for example, the negative impact on assembly lines and on labour as some companies rush either to save their business plans or to maximize new profits by relying on AI technology. How do both of you feel about the acceptability or the necessity of a certain amount of social regulation to ensure societal stability in important areas of the labour force?

Professor Dilhac can go first.

4:05 p.m.

Professor, Philosophy, Université de Montréal, As an Individual

Prof. Marc-Antoine Dilhac

It's very difficult to direct social change through laws or regulation. When we talk about a technological revolution, we have to take the nature of that revolution seriously. As we were saying, there is a structural change. It does not seem reasonable to me to want to direct a transformation of this nature. What does seem reasonable is to put in place training mechanisms so that the transformation to a digital society can include everyone who needs to update their skills.

The idea is not to put more pressure on businesses to prevent them from replacing human beings with algorithms. That may be regrettable, and I regret it, but it's not the best approach. The government could, however, put training mechanisms in place to support people in their quest to transform their skills.

4:05 p.m.

Conservative

Peter Kent Conservative Thornhill, ON

On technological—

4:05 p.m.

Conservative

The Chair Conservative Bob Zimmer

Sorry, we're way over.

Next up, for seven minutes, is Mr. Julian.

4:05 p.m.

NDP

Peter Julian NDP New Westminster—Burnaby, BC

Thank you very much, Mr. Chair.

I also thank our guests.

I apologize for arriving a bit late. I didn't get a chance to hear their presentations. I apologize in advance if I ask questions that have already been answered in the presentations.

My first question is about the legislation and the AI regulatory framework. Should Canada develop a regulatory framework to govern the ethical use of artificial intelligence? Can you name some countries where governments, either nationally or regionally, have put in place laws and regulatory frameworks for the use of artificial intelligence?

My question is for both of you, but Mr. Dilhac may answer first.