Evidence of meeting #157 of the Standing Committee on Access to Information, Privacy and Ethics in the 42nd Parliament, 1st Session. (The original version is on Parliament's site, as are the minutes.)

A video is available from Parliament.

Witnesses

Brent Mittelstadt, Research Fellow, Oxford Internet Institute, University of Oxford (as an individual)
An Tang, Chair, Artificial Intelligence Working Group, Canadian Association of Radiologists
André Leduc, Vice-President, Government Relations and Policy, Information Technology Association of Canada

4:45 p.m.

Research Fellow, Oxford Internet Institute, University of Oxford, As an Individual

Dr. Brent Mittelstadt

It could be a regulator that does it. I watched Christian Sandvig's testimony to this committee. He pointed out the difference between financial audits and social, scientific and computational audits. I suppose it's more the latter that I'm thinking of here.

You can have a regulator do it, but again, that introduces the problem of whether the regulator actually has the expertise required. Do they understand the system that's being used? Do they have the access needed to actually understand the system, what data it's considering and what its purpose is? There are problems with relying solely on an independent third-party regulator.

What I would like to see is more willingness, particularly from private companies, to share a bit more about not only the auditing that they're doing of their systems—in-processing and post-processing auditing—but also just more generally the impact that ethical principles have had on their development and deployment of these systems. In other words, I want to know a lot more about specific cases where they've said no or they've changed the design of the system as the result of an impact assessment or as a result of auditing.
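
To make the post-processing side of that distinction concrete, here is a minimal sketch, with invented data and an arbitrarily chosen tolerance, of the kind of check an auditor could run using only a system's recorded decisions, with no access to its internals:

```python
from collections import defaultdict

# Hypothetical audit log from a deployed decision system:
# (group, decision) pairs, where 1 = approved and 0 = denied.
audit_log = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 1), ("group_b", 0), ("group_b", 0), ("group_b", 0),
]

totals = defaultdict(int)
approvals = defaultdict(int)
for group, decision in audit_log:
    totals[group] += 1
    approvals[group] += decision

# Approval rate per group, computed from recorded outputs alone.
rates = {g: approvals[g] / totals[g] for g in totals}
print("approval rates:", rates)

# Flag the system if the gap between group approval rates exceeds a
# tolerance chosen by the auditor (the 0.2 here is arbitrary).
gap = max(rates.values()) - min(rates.values())
print("parity gap: %.2f ->" % gap, "flag for review" if gap > 0.2 else "ok")
```

An in-processing audit, by contrast, would instrument the training or decision procedure itself rather than only its outputs.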

4:45 p.m.

Liberal

Nathaniel Erskine-Smith Liberal Beaches—East York, ON

Isn't the lesson learned to date, though, from big companies that collect large amounts of data and then employ algorithms, that they're not implementing ethical principles in the first instance, and that rules need to be brought to bear?

4:45 p.m.

Research Fellow, Oxford Internet Institute, University of Oxford, As an Individual

Dr. Brent Mittelstadt

I'll say that it's not clear to what extent they are implementing ethical principles. I know that there are some companies that do have feedback mechanisms, but they tend to be more internal. They are very happy to report on positive cases, where ethical considerations have led them to change the system in a positive way or to design a new type of system, but in terms of public-facing, very critical self-assessments, you don't see a great deal of that.

4:45 p.m.

Liberal

Nathaniel Erskine-Smith Liberal Beaches—East York, ON

What is the economic incentive for them to do that?

4:45 p.m.

Research Fellow, Oxford Internet Institute, University of Oxford, As an Individual

Dr. Brent Mittelstadt

In the first instance, yes, you could argue that there is no economic incentive to do that because it can make you look worse than your competitors.

Actually, one of my other slight concerns is that ethics turns purely into something that is marketable in the same way that, say, having an organic label on your product makes it seem more ethical and more valuable. I don't know.... I'm very cynical about that happening.

4:45 p.m.

Liberal

Nathaniel Erskine-Smith Liberal Beaches—East York, ON

Thanks.

Whether you call it algorithmic transparency or algorithmic explainability, as the GDPR does: when some of us were in the U.K. asking questions of the Information Commissioner, Elizabeth Denham, she said her role was to make the algorithm as explainable as possible, and that it was for other regulators—the human rights commissioner, say, or the competition authority—to assess it better, with their own expertise.

Similarly, we had experts in the technical side of AI before us at the outset of this study who said that transparency rules make sense across the board, and that beyond that you need sector-specific regulators and rules that would simply take AI into account. Do you think that makes sense?

4:50 p.m.

Research Fellow, Oxford Internet Institute, University of Oxford, As an Individual

Dr. Brent Mittelstadt

Again, if the sector-specific regulators have the necessary expertise to do so, and if they're sufficiently resourced to do so, it could work. I think it's worth—

4:50 p.m.

Liberal

Nathaniel Erskine-Smith Liberal Beaches—East York, ON

Just to pick up on that, though, isn't that the point? Take regulating the auto sector, for example, where they're employing AI for autonomous vehicles. Do we have a stand-alone regulator, whether it's the ICO model or the privacy commissioner model, and roll algorithmic accountability into its function? Or is it simply that the regulatory authorities and the rules brought to bear on the auto sector have to account for algorithms and build up the capacity to assess them?

4:50 p.m.

Research Fellow, Oxford Internet Institute, University of Oxford, As an Individual

Dr. Brent Mittelstadt

The problem is that I can imagine both models working, if there's openness to reforming the sectoral rules that the regulator is enforcing. I don't see any particular reason why it couldn't work—again, assuming that sufficient expertise and resources are available to the regulator.

At the same time, it does make some sense to have a general regulator, at least for certain types of issues—the ICO, for example, as the data protection authority. Many of the issues with AI have to do with how data is collected, repurposed and used. For those sorts of issues, yes, it makes sense to have that regulator deal with the challenges of AI, but there will be other issues that are specific to particular types of AI, where I think having the sectoral regulator deal with them makes the most sense.

4:50 p.m.

Liberal

Nathaniel Erskine-Smith Liberal Beaches—East York, ON

Thanks very much.

4:50 p.m.

Conservative

The Chair Conservative Bob Zimmer

We'll go to Monsieur Gourde for five minutes.

4:50 p.m.

Conservative

Jacques Gourde Conservative Lévis—Lotbinière, QC

Thank you, Mr. Chair.

There is no doubt that artificial intelligence will play a major role in the global economy. However, I have the impression that funding, which comes mainly from the private sector and to a lesser extent from the public sector, is not necessarily intended to ensure the well-being of humanity.

Can we know what proportion of artificial intelligence budgets is allocated to military activities and what proportion goes to health?

I think that, in the health field, this will help everyone. On the other hand, in the military field, we will create extremely powerful weapons and hope never to use them. These funds might have been more useful to humanity if they had been invested in health. There is no doubt that companies are looking to make a profit. They go where money is available and contracts are easy to obtain. Ethically, this will pose a global problem.

What do you think, Mr. Leduc and Mr. Tang?

4:50 p.m.

Vice-President, Government Relations and Policy, Information Technology Association of Canada

André Leduc

With regard to the products and services provided to Canada's Department of National Defence or to defence departments elsewhere in the world, I don't think the situation is very different from what we're seeing in traditional sectors. In discussing the ethics of artificial intelligence, we seek to determine in which cases our society will approve the use of artificial intelligence, and in which cases artificial intelligence may be used to develop products for the military of a country in conflict with ours. There are always risks.

As far as funding is concerned, I don't know the answer, so I can't tell you. When the Department of National Defence wants to solve the problems it faces, it often uses whatever tools can be provided to it. More and more, we see in our field that artificial intelligence is integrated into all technologies. Implicitly or explicitly, decisions will therefore be made more and more by artificial intelligence, simply to make products and services more efficient.

4:55 p.m.

Chair, Artificial Intelligence Working Group, Canadian Association of Radiologists

Dr. An Tang

Historically, Canada has been a true leader in funding basic research, including through CIFAR, the Canadian Institute for Advanced Research. This has enabled Canadian research laboratories to play a leading international role, particularly in the field of deep learning, which has generated recent interest in artificial intelligence.

In addition, the federal government has funded initiatives through the Canada First Research Excellence Fund, which is a competition, including the Data Serving Canadians initiative. That initiative has transferred fundamental knowledge into four specific sectors: health, logistics, e-commerce and finance. These are concrete examples of funding useful activities. Moreover, supporting such research activities has the effect of attracting a critical mass of industries that will invest in the field.

4:55 p.m.

Conservative

Jacques Gourde Conservative Lévis—Lotbinière, QC

Mr. Mittelstadt, did you want to add something?

4:55 p.m.

Research Fellow, Oxford Internet Institute, University of Oxford, As an Individual

Dr. Brent Mittelstadt

I can add a bit, I suppose, though not specifically on the budget. I would note that in research on explainability or interpretability in AI in particular, a huge amount of money is coming from the U.S. defence department, and what we've seen, at least in the past, is that military developments in technology cross over into the private sector. Besides that, there is plenty of academic and commercial research outside of the military context that addresses this.

Beyond that, I don't know that I have much of a comment, particularly with not being extremely familiar with the Canadian context.

4:55 p.m.

Conservative

Jacques Gourde Conservative Lévis—Lotbinière, QC

Thank you.

4:55 p.m.

Conservative

The Chair Conservative Bob Zimmer

Thank you, Mr. Gourde.

Last up for five minutes is Monsieur Picard.

4:55 p.m.

Liberal

Michel Picard Liberal Montarville, QC

Mr. Tang, you mentioned earlier that the information could be anonymized so that no link can be made between the patient and the information.

Doesn't this practice, which is intended to protect the individual, have a perverse effect? After all, once the information is anonymized, consent is no longer required to conduct whatever studies you want with the data you have.

4:55 p.m.

Chair, Artificial Intelligence Working Group, Canadian Association of Radiologists

Dr. An Tang

This can be approached from various angles. It is important to know where the information is anonymized. If the data remain within a hospital, for example, and only the algorithms leave it, there is no breach of confidentiality. In that case, only the researchers and doctors involved know the identity of the patient. In addition, there is a field of research that allows the learning from several institutions to be shared, which is extremely advantageous. Data can indeed be kept within institutions, and only the learning process is shared among many hospitals.
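
What Dr. Tang describes is often called federated learning. Below is a minimal sketch under simplifying assumptions (a one-parameter model and invented hospital datasets): each site trains locally on records that never leave it, and only the resulting model weights are pooled.

```python
import random

def local_sgd(w, data, lr=0.1, steps=20):
    """One hospital refines the shared weight on its own records;
    the raw (x, y) pairs never leave the institution."""
    for _ in range(steps):
        x, y = random.choice(data)
        grad = 2 * (w * x - y) * x  # gradient of the squared error (w*x - y)^2
        w -= lr * grad
    return w

# Invented per-hospital datasets for a common task where y is roughly 2x.
hospitals = [
    [(x, 2.0 * x + random.gauss(0, 0.1)) for x in [random.random() for _ in range(50)]]
    for _ in range(3)
]

# Federated averaging: in each round, every hospital shares its updated
# weight (never its data) and a coordinator averages the weights.
w_global = 0.0
for _ in range(5):
    local_weights = [local_sgd(w_global, d) for d in hospitals]
    w_global = sum(local_weights) / len(local_weights)

print("shared model weight (true value is about 2.0): %.2f" % w_global)
```

In a real deployment the shared updates themselves can still leak information, which is why techniques such as differential privacy are often layered on top.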

4:55 p.m.

Liberal

Michel Picard Liberal Montarville, QC

Mr. Mittelstadt, can you come back to what you said about protecting data rather than protecting people? Did you say that we do protect data but not people, or that we should protect data and not necessarily people, because that is what AI is all about?

Can you comment on that, please?

4:55 p.m.

Research Fellow, Oxford Internet Institute, University of Oxford, As an Individual

Dr. Brent Mittelstadt

Yes. Thank you for the question.

The point of the comment was to say that data protection and privacy law are designed to protect people—or at least that's what inspired them originally—and to protect privacy in all of its various forms.

However, functionally or procedurally, what it ends up doing is protecting the data that are produced by people. This links to the comments I was making around informed consent and the need for identifiability for the law to apply in the first place. As has been described throughout the entire session today, once the data is de-identified or anonymized, you can still do very interesting things with it to create very useful knowledge about groups of people, which can then be applied back to those groups. In the case of medical research, it's very laudable, but in other cases, maybe not so much.
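
As a toy illustration of that point, with entirely invented records: a handful of fully de-identified rows is enough to derive group-level profiles that can then be applied back to every member of the group, with no individual consent in the loop.

```python
from collections import defaultdict

# Invented, fully de-identified records: (neighbourhood, has_condition).
# No names or identifiers remain, so consent rules may no longer apply.
records = [
    ("area_1", True), ("area_1", True), ("area_1", False), ("area_1", True),
    ("area_2", False), ("area_2", False), ("area_2", True), ("area_2", False),
]

counts = defaultdict(lambda: [0, 0])  # per group: [positives, total]
for area, flag in records:
    counts[area][0] += flag
    counts[area][1] += 1

for area, (pos, total) in sorted(counts.items()):
    # The inferred rate attaches to every member of the group,
    # whether or not any individual ever consented.
    print("%s: estimated prevalence %.0f%%" % (area, 100 * pos / total))
```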

4:55 p.m.

Liberal

Michel Picard Liberal Montarville, QC

That suggests that we then have to find a way, legally speaking.... Since we cannot separate data from people, when harm is done to data, harm is therefore done to someone, because somewhere the data concerns someone. You can protect someone, but you can't sue data, even though data is the centre of the focus.

Do we lack something, legally speaking?

5:00 p.m.

Research Fellow, Oxford Internet Institute, University of Oxford, As an Individual

Dr. Brent Mittelstadt

I'd say that most places lack something, legally speaking. Dealing with the collective or group aspects of privacy and data protection is extremely difficult. There's not really a satisfactory legal framework for it, outside of specific types of harm such as discrimination.

We could say that we all lack something legally.

5:00 p.m.

Liberal

Michel Picard Liberal Montarville, QC

Mr. Leduc, you surprised me by saying that everyone was caught off guard and no one anticipated how important data would be or how much influence the information would have on our daily lives.

However, we have been talking about information, its added value and its commercialization for some time now, if we remember Alvin Toffler's Future Shock and especially his book The Third Wave, published in 1980. We have known for a long time that information has extraordinary and precious value. Could we conclude that we chose to close our eyes, rather than say that we hadn't seen anything coming?