Evidence of meeting #157 for Access to Information, Privacy and Ethics in the 42nd Parliament, 1st Session. (The original version is on Parliament's site, as are the minutes.)

A video is available from Parliament.

Witnesses

Brent Mittelstadt, Research Fellow, Oxford Internet Institute, University of Oxford, As an Individual
An Tang, Chair, Artificial Intelligence Working Group, Canadian Association of Radiologists
André Leduc, Vice-President, Government Relations and Policy, Information Technology Association of Canada

4:30 p.m.

Conservative

Peter Kent Conservative Thornhill, ON

Do you mean in terms of discouraging research and development?

4:30 p.m.

Vice-President, Government Relations and Policy, Information Technology Association of Canada

André Leduc

I mean not just discouraging research and development. It's the cost of compliance. We've seen reports coming out of the EU that the average cost of complying with the GDPR is around $100,000 U.S. That's not a lot of money for a very large organization, but for a small business of 10 people with perhaps a million dollars in revenue and a 10% profit margin, it would eat up essentially their entire profit.
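To make the arithmetic explicit, a quick worked example using Mr. Leduc's own illustrative figures:

```python
# Mr. Leduc's illustrative figures, not audited data.
revenue = 1_000_000        # small-business annual revenue, USD
profit_margin = 0.10       # 10% margin, as stated
compliance_cost = 100_000  # reported average GDPR compliance cost, USD

profit = revenue * profit_margin   # $100,000 of profit
share = compliance_cost / profit   # 1.0, i.e. the entire profit
print(f"Compliance consumes {share:.0%} of annual profit")
```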

It is something I studied in depth over the course of about a year and a half. The lack of simple tools and compliance guidance for SMEs cripples their ability to comply with privacy legislation.

4:30 p.m.

Conservative

Peter Kent Conservative Thornhill, ON

I'll go to Mr. Tang and Mr. Neuheimer now.

I consumed with interest your “Who Are Radiologists...?” page. With regard to job loss and job transition as a result of the development of AI in your field, radiology, it would seem that you have some concern about the fourth stage, in which the radiologist does analysis today and refers that analysis to a physician.

If AI progresses to the extent that we're told it will one day, the radiologist's job may be—and you tell me, but it would seem to be—reduced to the first interaction with the patient, taking those images, and then you'd be out of the loop because the doctor would be able to use AI to make the diagnosis and recommend treatment.

4:30 p.m.

Chair, Artificial Intelligence Working Group, Canadian Association of Radiologists

Dr. An Tang

To paraphrase Mark Twain, I would say that rumours of our demise are vastly exaggerated at this point.

At the CAR, we've approached this question conceptually. It also addresses the previous question: what are the various levels of autonomy of software? We make an analogy with the self-driving car and create a scale ranging from zero to five, in which zero indicates no automation at all and the levels then proceed through physician assistance, partial automation, high automation and full automation. We don't see anything on our radar that will replace all of the work accomplished by radiologists.
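For reference, the scale Dr. Tang describes can be sketched as follows. It mirrors the SAE self-driving levels; the name of the unlisted level 3 is assumed from that analogy, and the per-level descriptions are paraphrases rather than CAR's official wording:

```python
from enum import IntEnum

class AutonomyLevel(IntEnum):
    # CAR's scale by analogy with self-driving cars, as Dr. Tang describes.
    # Level 3's name is assumed from the SAE analogy; he did not name it.
    NO_AUTOMATION = 0           # radiologist does all the work
    PHYSICIAN_ASSISTANCE = 1    # software flags findings; physician decides
    PARTIAL_AUTOMATION = 2      # software handles subtasks under supervision
    CONDITIONAL_AUTOMATION = 3  # assumed intermediate level (SAE analogy)
    HIGH_AUTOMATION = 4         # software decides in most cases, human fallback
    FULL_AUTOMATION = 5         # no human in the loop
```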

4:35 p.m.

Conservative

Peter Kent Conservative Thornhill, ON

No.

4:35 p.m.

Chair, Artificial Intelligence Working Group, Canadian Association of Radiologists

Dr. An Tang

However, we see many helpful applications for specific tasks that are repetitive and mundane and that would free up time for us to perform more meaningful tasks, such as communication, explaining procedures to patients or even performing these procedures and attending tumour boards. This would be much more productive.

4:35 p.m.

Conservative

The Chair Conservative Bob Zimmer

Thank you, Mr. Kent. We're well past time.

4:35 p.m.

Conservative

Peter Kent Conservative Thornhill, ON

Thank you.

4:35 p.m.

Conservative

The Chair Conservative Bob Zimmer

Mr. Boulerice is next, for seven minutes.

June 6th, 2019 / 4:35 p.m.

NDP

Alexandre Boulerice NDP Rosemont—La Petite-Patrie, QC

Thank you, Mr. Chair.

I thank everyone for being here today for this study and the important questions it raises. We live in a world where artificial intelligence will take up more and more space. It will be given more responsibilities. It will make increasingly complex decisions, because its algorithms will be able to process vast amounts of data faster than any human brain can.

I want to put my first questions to Mr. Leduc and Mr. Mittelstadt, and they concern the ethics of artificial intelligence.

An algorithm or supercomputer is in itself incapable of displaying discrimination or bias. On the other hand, the human being who programs the algorithms is capable of doing so at different stages: during data collection, during processing, or during the preparation of questions the algorithm will try to answer.

In your opinion, how, at these different stages, can we avoid these normal human prejudices, which could lead to discriminatory results? Which one of you two wants to dive into this easy question?

4:35 p.m.

Vice-President, Government Relations and Policy, Information Technology Association of Canada

André Leduc

Perhaps Dr. Mittelstadt could begin and I will follow up.

4:35 p.m.

NDP

Alexandre Boulerice NDP Rosemont—La Petite-Patrie, QC

Fine.

Mr. Mittelstadt, did you want to speak?

4:35 p.m.

Research Fellow, Oxford Internet Institute, University of Oxford, As an Individual

Dr. Brent Mittelstadt

I'm happy to answer this.

It's a very important question: how do we identify bias as it's picked up by algorithms, and then mitigate it once we know it's there? I think the simplest way to explain the problem is that we live in a biased world and we're training algorithms and AI with data about the world, so it's inevitable that they pick up these biases and can end up replicating them, or even creating new biases that we're not aware of.

We tend to think of bias in terms of protected attributes—things such as ethnicity, gender or religion, things that are historically protected for very good reasons. What's interesting about AI is that it can create entirely new sets of biases that don't map onto those characteristics or even characteristics that are humanly interpretable or humanly comprehensible. Detecting those sorts of biases in particular is very difficult and requires looking essentially at the set of decisions or outputs of an algorithmic system to try to identify when there is disparate impact upon particular groups, even if they are not legally protected groups.

Besides that, there is quite a bit of research, and methods are being developed, to detect gaps in the representativeness of data and to detect proxies for protected attributes that may or may not be known in the training phase. For example, postal code is a very strong proxy for ethnicity in some cases. Much of that work is about discovering more proxies like that.
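To illustrate the kind of proxy check described here, a minimal sketch; the majority-rule measure and function name are simplifications of my own, not a method the witness endorses:

```python
from collections import Counter, defaultdict

def proxy_strength(feature_values, protected_values):
    # How much better can we guess the protected attribute from the
    # feature than from its overall base rate? A simple majority-rule
    # measure; with many distinct feature values it will overestimate,
    # and real audits use held-out data or mutual information instead.
    base_rate = Counter(protected_values).most_common(1)[0][1] / len(protected_values)
    by_feature = defaultdict(list)
    for f, p in zip(feature_values, protected_values):
        by_feature[f].append(p)
    within_group = sum(Counter(group).most_common(1)[0][1]
                       for group in by_feature.values())
    return within_group / len(protected_values) - base_rate

# e.g. proxy_strength(postal_codes, ethnicities): values near 0 mean a weak
# proxy; large positive values suggest the feature encodes the attribute.
```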

Again, there are many types of testing, both automated methods and auditing methods, whereby essentially you are doing some sort of analysis of the training data, of the algorithm while it's processing, and of the sets of decisions that it produces.

There is, then, no simple answer to how you do it, but there are methods available at all stages.
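As one concrete example of the output-level analysis described above, a minimal sketch of a disparate-impact check; the 80% ("four-fifths") threshold is a common rule of thumb from U.S. employment law, offered here as an assumption rather than anything the witness specified:

```python
def disparate_impact_ratio(decisions, groups, favourable="approved"):
    # Compare the rate of favourable outcomes across groups in a set of
    # algorithmic decisions. Hypothetical function and labels; the 0.8
    # threshold is the US "four-fifths" rule of thumb, not the witness's.
    rates = {}
    for g in set(groups):
        outcomes = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = outcomes.count(favourable) / len(outcomes)
    ratio = min(rates.values()) / max(rates.values())
    return rates, ratio  # a ratio below 0.8 is commonly flagged for review
```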

4:35 p.m.

NDP

Alexandre Boulerice NDP Rosemont—La Petite-Patrie, QC

Thank you.

Mr. Leduc, it's your turn.

4:35 p.m.

Vice-President, Government Relations and Policy, Information Technology Association of Canada

André Leduc

As Mr. Mittelstadt said, in artificial intelligence there are frequent opportunities for biases to be created, both in the data itself and in the code written for artificial intelligence systems. We suggest that industry review each step, whether an algorithm is being developed, databases are being used or data analysis is being done, to ensure that the results of artificial intelligence processes are not biased.

4:40 p.m.

NDP

Alexandre Boulerice NDP Rosemont—La Petite-Patrie, QC

I have a supplementary question, along the same lines. It is addressed to all of you.

Artificial intelligence algorithms will make decisions that will have an impact on people's lives. They will be used for facial recognition, identification, police investigations, probably, and credit investigations. They will be able to guide decisions regarding the granting of a mortgage or a loan to a business, or hiring decisions. These algorithms will be asked to make decisions that can be considered fair and equitable.

Since the very principle of what is fair and equitable changes with history, culture and ideology, how can we ensure that we get fair and equitable decisions?

4:40 p.m.

Conservative

The Chair Conservative Bob Zimmer

Would anybody like to tackle that question?

4:40 p.m.

NDP

Alexandre Boulerice NDP Rosemont—La Petite-Patrie, QC

Perhaps Mr. Mittelstadt would like to respond.

4:40 p.m.

Research Fellow, Oxford Internet Institute, University of Oxford, As an Individual

Dr. Brent Mittelstadt

Yes, I'd be happy to give that a go.

I think, as I was alluding to at the end of my statement, what is going to be determined as fair or ethical is going to be extremely context-dependent. Maybe the highest level we could go to in terms of having guidelines for what constitutes an ethical or fair decision would be at a sectoral level, at which you have existing regulation that gives you some restrictions concerning what is considered permissible or discriminatory, because these things will vary across different sectors.

Really, it's something that can only be answered at that contextual level. I think maybe we have a head start in AI that will be used in professions that are already licensed or legally recognized as professions, where they have fiduciary duties to the people they serve, because they have these very long histories where they've developed best practices, guidelines, principles and lower-level norms, basically, to define what is a good behaviour and what is a good decision.

It's a difficult question, but I think that's how we start.

4:40 p.m.

NDP

Alexandre Boulerice NDP Rosemont—La Petite-Patrie, QC

Thank you.

Mr. Tang, did you want to speak, briefly?

4:40 p.m.

Chair, Artificial Intelligence Working Group, Canadian Association of Radiologists

Dr. An Tang

Yes, I will venture to answer both your questions, because I had some time to think about it.

On the issue of bias, I would say that one of the strategies to minimize it in the medical field would be to use a large amount of data to reflect the target population, particularly in terms of gender, ethnicity or age group.

As far as discrimination is concerned, I think the best way to minimize it is to keep the human element in the equation and involve a doctor or another member of the care team. In the end, health care is highly personalized and deeply private. Beyond the recommendation established by the algorithm on the basis of a large demographic sample, the final decision will remain with the patient and physician.
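A minimal sketch of the representativeness check Dr. Tang suggests; the category axes follow his examples, and the five-point tolerance is an assumed threshold, not CAR policy:

```python
from collections import Counter

def representativeness_gaps(sample, population_shares, tolerance=0.05):
    # Compare the demographic mix of a training set with the target
    # population along axes such as sex, ethnicity and age group.
    # The five-point tolerance is an assumed threshold, not CAR policy.
    n = len(sample)
    observed = {k: v / n for k, v in Counter(sample).items()}
    return {cat: (round(observed.get(cat, 0.0), 3), expected)
            for cat, expected in population_shares.items()
            if abs(observed.get(cat, 0.0) - expected) > tolerance}

# e.g. representativeness_gaps(age_groups, {"0-18": 0.2, "19-64": 0.6, "65+": 0.2})
# returns the groups that are over- or under-represented in the data.
```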

4:40 p.m.

Conservative

The Chair Conservative Bob Zimmer

I'm just going to explain that we were going to do some committee business in about five minutes, but we've talked with the vice-chairs and all parties, and we're going to push that committee business to next Tuesday, if you can stick around until five o'clock.

Is that something you can do? Okay. We'll take it right to five with questions.

We'll go with Nate for seven minutes.

4:40 p.m.

Liberal

Nathaniel Erskine-Smith Liberal Beaches—East York, ON

Thanks very much.

I want to start with you, Mr. Mittelstadt, and talk first about AI, risk assessments and algorithmic transparency. At a government level, there are now rules that the Treasury Board has put in place for government agencies and departments. It's a risk assessment: depending upon how they answer the 85 questions, systems are categorized into impact levels one to four. Depending on where they slot in, there are mitigation measures that are then required.

Perhaps you can explain the usefulness of that, if you think it's useful, and the deficiencies, if you think there are deficiencies, and how we can improve upon that, potentially, and what else might be required.
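The mechanism Mr. Erskine-Smith describes can be sketched as follows; the thresholds, scoring scheme and mitigation lists are invented placeholders for illustration, not the Treasury Board directive's actual values:

```python
# Illustrative placeholders: thresholds, scoring and mitigation lists are
# invented for this sketch, not taken from the Treasury Board directive.
MITIGATIONS = {
    1: ["plain-language notice that a decision was automated"],
    2: ["notice", "human review of a sample of decisions"],
    3: ["notice", "peer review", "human-in-the-loop for adverse decisions"],
    4: ["notice", "independent peer review", "human approval of every decision"],
}

def impact_level(answers, thresholds=(0.25, 0.50, 0.75)):
    # Each answer is scored 0-3; the normalized total maps to a level 1-4,
    # and higher levels trigger stricter required mitigations.
    score = sum(answers) / (3 * len(answers))
    level = 1 + sum(score > t for t in thresholds)
    return level, MITIGATIONS[level]

# e.g. impact_level([3, 3, 3, 2]) -> level 4 with its mitigation list.
```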

4:40 p.m.

Research Fellow, Oxford Internet Institute, University of Oxford, As an Individual

Dr. Brent Mittelstadt

I tend to say that there is no single silver bullet for appropriate governance of these systems, so risk assessments can be a very good starting point.

They're very good in the sense of catching problems in the pre-deployment or procurement stage. Their shortcoming is that they're only as good as the people or the organizations that complete them, so they do require a certain level of expertise and, potentially, training—essentially, people who are aware of potential ethical issues and can flag them up while actually going through the questionnaire.

We've seen that with other sorts of impact assessments such as privacy impact assessments, environmental impact assessments and, now, data protection impact assessments in Europe. There really has to be a renewed focus on the training or the expertise of the people who will be filling those out.

They are useful in a pre-deployment sense, but as I was suggesting before with biases, problems can emerge after a system has been designed. We can test a system in the design phase and during the training phase and say that it seems to be fair, it seems to be non-discriminatory and it seems to be unbiased, but that doesn't mean that problems won't then emerge when the system is essentially used in the wild.

Any sort of impact assessment approach has to be complemented as well by in-process monitoring and post-processing assessment of the decisions that were made, and very clear auditing standards in terms of what information needs to be retained and what sorts of tests need to be carried out after the fact, again, to check for things like bias.
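A minimal sketch of the in-process retention and after-the-fact auditing Dr. Mittelstadt calls for; the record schema and function names are assumptions for illustration:

```python
import json
import time

def log_decision(logfile, inputs, output, model_version):
    # Retain an audit trail: every decision is recorded so that bias checks
    # (such as the disparate-impact sketch above) can be re-run after
    # deployment, not just at design time. The schema is an assumption.
    record = {"ts": time.time(), "model": model_version,
              "inputs": inputs, "output": output}
    with open(logfile, "a") as f:
        f.write(json.dumps(record) + "\n")

def periodic_audit(logfile, audit_fn):
    # Replay the retained decisions through any audit function, catching
    # problems that only emerge once the system is used "in the wild".
    with open(logfile) as f:
        records = [json.loads(line) for line in f]
    return audit_fn(records)
```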

4:45 p.m.

Liberal

Nathaniel Erskine-Smith Liberal Beaches—East York, ON

That's helpful. There's at least a model or a template for algorithmic impact assessments that seems somewhat transferable, at least to the private sector, for bigger companies at a minimum.

We've recommended that there then be a regulatory authority to conduct audits, not only against that original assessment but also potentially on an ongoing basis. Is that the kind of thing...? Ought there to be some regulator with the power to audit the practices of companies that are using algorithms? Is that the idea?