Evidence of meeting #157 for Access to Information, Privacy and Ethics in the 42nd Parliament, 1st Session. (The original version is on Parliament's site, as are the minutes.)

Witnesses

Brent Mittelstadt, Research Fellow, Oxford Internet Institute, University of Oxford (as an individual)
An Tang, Chair, Artificial Intelligence Working Group, Canadian Association of Radiologists
André Leduc, Vice-President, Government Relations and Policy, Information Technology Association of Canada

3:45 p.m.

Conservative

The Chair Conservative Bob Zimmer

We'll call the meeting to order. This is the Standing Committee on Access to Information, Privacy and Ethics, meeting 157. Today's topic is the ethical aspects of artificial intelligence and algorithms.

We have with us today, as an individual, Brent Mittelstadt, research fellow, Oxford Internet Institute, University of Oxford, by teleconference. From the Canadian Association of Radiologists, we have Nicholas Neuheimer, chief executive officer; and An Tang, chair, artificial intelligence working group. From the Information Technology Association of Canada, we have André Leduc, vice-president, government relations and policy.

We do have some business to follow, so we're going to try to get this done as soon as we can. We're giving you a full hour. We just had votes. You have our apologies for that, but it's something out of our control.

We'll start off right away with Mr. Mittelstadt. Go ahead for 10 minutes.

3:45 p.m.

Dr. Brent Mittelstadt Research Fellow, Oxford Internet Institute, University of Oxford, As an Individual

Thank you so much for inviting me.

I have been researching the ethical challenges of algorithms and AI for nearly half a decade. What's become apparent to me in that time is that the promise of AI owes largely to its apparent capacity to replace or augment any type of human expertise. Because the technology is so malleable, it inevitably becomes entangled in the ethical and political dimensions of the jobs, practices and organizations in which it's embedded. The ethical challenges of AI are effectively a microcosm of the political and ethical challenges we face in society, so recognizing and solving them is certainly no easy task.

I know, from witnesses in your previous sessions, that you've heard quite a bit about the challenges of AI, dealing with things such as accountability, bias, discrimination, fairness, transparency, privacy and numerous others. All those are extremely important and complex challenges that deserve your attention, and really the attention of policy-makers worldwide, but in my 10 minutes I want to focus less on the nature and extent of the ethical challenges of AI and more on the strategies and tools we have for solving them.

You've heard also quite a bit about the tools available to address these ethical challenges, using things such as algorithmic and social scientific auditing, multidisciplinary research, public-private partnerships, and participatory design processes and regulations. All of those sorts of solutions are essential, but my concern is that we're perhaps broadly using the wrong strategy or at least an incomplete strategy for the ethical and legal governance of AI. As a result, we may be expecting too much from our current efforts to ensure AI is developed and used in an ethically acceptable manner.

In the rest of my statement, what I want to address are the significant shortcomings that I see in current efforts to govern AI, specifically through data protection and privacy law on the one hand and through principled self-governance on the other. My principal concern here is that these strategies too often conceive of the ethical challenges of AI in an individualistic sense, when in fact they are collective challenges that require collective solutions.

To start with data protection and privacy law, responsibility far too often falls on the shoulders of individuals to protect their vital interests: their privacy, autonomy, reputation and those sorts of things. Data protection law too often ends up protecting data rather than the people the data represents. That shortcoming can be seen in several areas of law globally. The core concepts of data protection and privacy law—personal data, personally identifiable information and so forth—are typically defined in relation to an identifiable individual, which means that the data must be able to be linked to an individual in order to fall within the scope of the law and thus be protected by it.

The emphasis on the individual is mismatched with the capabilities of AI. We're excited by AI precisely because of its ability to find small patterns between people, to group them in meaningful ways, and to create generalizable knowledge from individual records or individual data. In the modern data analytics that drive so many of the technologies we think of as AI, the individual doesn't really matter. AI is interested not in what makes a person uniquely identifiable but rather in what makes that person similar to other people. AI has transformed privacy from an individual concern into a collective challenge, yet existing legal frameworks pay relatively little attention to collective or group aspects of privacy. I see that as something that really needs to change.
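As an illustrative aside, here is a minimal sketch of the kind of grouping described above, using scikit-learn's k-means clustering. The data and parameters are invented for illustration; the point is that the algorithm never needs an identity, only attributes, and its output attaches inferences to groups rather than to named individuals.

```python
# Illustrative sketch: grouping people by similarity, with no identifiers at all.
import numpy as np
from sklearn.cluster import KMeans

# Invented data: each row describes a person only by behavioural attributes
# (say, hours online per day and purchases per month). No names, no IDs.
records = np.array([
    [1.0, 2.0],
    [1.2, 1.8],
    [6.0, 9.0],
    [5.8, 9.5],
    [3.1, 4.0],
    [2.9, 4.2],
])

# Cluster the rows purely by what makes them similar to one another.
groups = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(records)
print(groups)  # e.g. [0 0 1 1 2 2]: inferences now attach to groups, not individuals
```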

That shortcoming extends to the sorts of legal protections that data protection and privacy law typically offers to individuals and their data. These protections are still fundamentally based on the idea that individuals can make informed decisions about how they produce data, how that data is collected and used, and when it should not be used. The burden is placed on individuals to be well informed and to make a meaningful choice about how their data is collected and used.

As is suggested by the name, informed consent only works if meaningful, well-informed choice is actually possible. Again, we're excited about AI precisely because it can process so much data so quickly, because it can identify novel and unintuitive patterns within the data and because it can produce knowledge from them. We're excited because the data analytics that drive AI are so big, so fast and so unpredictable, but the voracious appetite that AI has for personal data, combined with the seemingly limitless and unpredictable reusability of the data, means that even if you're a particularly motivated individual, a well-informed choice about how your data is collected and used is typically impossible. Under those conditions, consent no longer offers meaningful protection or allows individuals to control how their data is collected and used.

Moving forward, in terms of data protection and privacy law in particular, we need to think more about how to shift a fair share of the ethical responsibility to companies, public bodies and other sorts of collectives. Some of the ethical burden that's normally placed on individuals should be placed on these entities, requiring them, for example, to justify their data collection and processing before the fact, rather than leaving it up to individuals to proactively protect their own interests.

The second governance strategy I want to address, principled self-governance, has seen unprecedented uptake globally. To date, no fewer than 63 public-private initiatives have formed to determine how to address the ethical challenges of AI. Seemingly every major AI company has been involved in one or more of these initiatives and has partnered with universities, civil society organizations, non-profits and other bodies. More often than not, these initiatives produce frameworks of high-level ethical principles, values or tenets meant to guide the development and use of AI.

The strategy seems to be that the ethical challenges of AI are best addressed through a top-down approach, in which these high-level principles are translated into practical requirements that will act as a guide for developers, users and regulators. The ethical challenges of AI are more often than not presented as problems to be solved through technical solutions and changes to the design process. The rationale seems to be that insufficient consideration of ethics leads to poor design decisions, which create systems that harm people and society.

These initiatives are essentially producing self-regulatory frameworks that are not yet binding in any meaningful sense. The blame for unethical AI tends to fall, again, on individuals, on the developers and researchers who have somehow behaved badly, as opposed to any collective failure of the institutions, businesses or other organizations driving development in the first place.

With that in mind, I'm not entirely sure why we assume that top-down principles and codes of ethics will actually make AI, and the organizations that create it and use it, more ethical or trustworthy. Using principles and ethics is nothing new. We have lots of well-established professions, such as medicine and law, that have used principles for a very long time to define their ethical values and responsibilities, and to govern the behaviour of the professionals and organizations that employ them.

If we can think of AI development as a profession, it very quickly becomes apparent that it lacks several characteristics necessary to make a principled approach actually work in practice.

In the first place, AI development lacks common aims and fiduciary duties to users and individuals. Take medicine as a counterexample: AI development doesn't serve the public interest in the first instance in the same sense. Developers don't have fiduciary duties toward their users or the people affected by AI, because AI is quite often developed in a commercial environment where fiduciary duty is owed to the company's shareholders. As a result, principles intended to protect the interests of users and the public can come into conflict with commercial interests, and it's not clear how those conflicts will be resolved in practice.

Second, AI development has a relatively short professional history and it lacks well-established and well-tested best practices. There are professional bodies for software engineering and codes of ethics, but because it's not a legally recognized or licensed profession, professional bodies exercise very little power over their members, in practice. The codes of ethics they do have tend to be more high-level and relatively brief in comparison to other professions.

The third characteristic that AI development is seemingly lacking is proven methods to translate these high-level principles into practical requirements. The methods we do have available tend to exist or have been tested only in academic environments and not in commercial environments. Moving from high-level principles to practical requirements is a very difficult process. The outputs we've seen from AI ethics initiatives thus far have almost universally relied on vague, contested concepts like fairness, dignity and accountability. There's very little offered in the way of practical guidance.

Disagreements over what those concepts mean only come out when the time comes to actually apply them. The huge amount of work we've seen to develop these top-down approaches to AI ethics has accomplished very little in practice. Most of the work remains to be done.

I would conclude with this: ethics is not meant to be easy or formulaic. Right now we too often think of ethics purely in terms of technical fixes, checklists or impact assessments, when really we should be looking for and celebrating normative disagreements, because they represent taking ethical challenges seriously amid the plurality of opinion that we should expect in democratic societies.

The difficult work that remains for us in AI ethics is to move from high-level principles down to practical requirements. It's really only in doing that and in supporting that sort of work that we'll really come to understand the ethical challenges of AI in practice.

Thank you, and I look forward to your questions later.

3:55 p.m.

Conservative

The Chair Conservative Bob Zimmer

Thank you, Mr. Mittelstadt.

Next up is Mr. Tang. Go ahead for 10 minutes.

3:55 p.m.

Dr. An Tang Chair, Artificial Intelligence Working Group, Canadian Association of Radiologists

Thank you, Mr. Chair.

Thank you, Mr. Chair and members of the Standing Committee on Access to Information, Privacy and Ethics, for giving me the opportunity to speak with you today about artificial intelligence in radiology, specifically in relation to ethical and legal issues in the implementation of this technology in medical imaging.

My name is Dr. An Tang, and I am here representing the Canadian Association of Radiologists (CAR), as chair of the artificial intelligence working group within the CAR.

The CAR AI working group is composed of more than 50 members who have a keen interest in technological advancement in radiology as it pertains to AI. The composition of the working group is varied: predominantly radiologists, along with physicists, computer scientists and researchers. It also includes a philosopher specializing in the ethics of AI and an academic lawyer.

Under the CAR board of directors' leadership we have been entrusted with taking a global look at AI and the impact it will have on radiology and patient care in Canada.

I believe I speak for most of my colleagues in thinking that this is a good-news story and that AI can dramatically impact the way radiologists practise, in a positive way. Through the collection of data and simulation, using mathematical algorithms, we can help reduce wait times for patients, thus expediting diagnosis and positively affecting patient outcomes.

AI software analyzing medical images is becoming increasingly prevalent. Unlike early generations of AI software, which relied on expert knowledge to identify image features, machine learning techniques can automatically learn to recognize these features with the use of training datasets.
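As an illustrative aside, a toy sketch of the distinction just described. Real radiology AI uses deep networks trained on large labelled imaging datasets; this invented example, using scikit-learn, only shows the principle that the model discovers which features matter from labelled training examples rather than from expert-coded rules.

```python
# Illustrative sketch: "learning from examples" rather than hand-coded expert rules.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Invented stand-in for a training dataset: 200 tiny 8x8 "images" flattened
# into vectors, each labelled by an expert reader (1 = finding, 0 = no finding).
X = rng.normal(size=(200, 64))
y = (X[:, 10] + X[:, 42] > 0).astype(int)  # a hidden pattern the model must discover

# Nobody tells the model which pixels matter; it learns that from the labels.
model = LogisticRegression(max_iter=1000).fit(X, y)
print("training accuracy:", model.score(X, y))
```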

AI can be used for the purpose of detecting disease, establishing diagnosis and optimizing treatment selection. However, for this to be performed accurately, access to large quantities of medical data from patients will be required. This, of course, brings the privacy question into the equation. How do we collect this data while still guaranteeing we are collecting this information in an ethical way that protects the privacy of our patients?

Because of the transition from film to digital imaging that occurred two decades ago in radiology, and because of the availability of digital records for each imaging examination, radiology is well positioned to lead the development and implementation of AI and to manage associated ethical and legal challenges.

CAR believes that the benefits of AI can outweigh risks when institutional protocols and technical considerations are appropriately implemented to safeguard or remove the individually identifiable components of medical imaging data.

Technology advancements are occurring so quickly that they are outpacing current radiology procedures. We need to establish regulations pertaining to data collection and ownership to ensure that we are safeguarding patients and not infringing on ethical or privacy guidelines.

The CAR is advocating for the federal government to take a leadership role in the implementation of an ethical and legal framework for AI in Canada. Although health care is a provincial responsibility, AI is a global issue, and we feel the federal government is well positioned to lead the provinces in regulating the implementation of such a framework. A similar example is the federal government's leadership on the national medical imaging equipment fund in the early 2000s.

The CAR can help. The AI working group, under the CAR board's leadership, has published two white papers on AI: the first, published in 2018, was a general overview of machine learning and its implementation in radiology; the second, published in May 2019, focused on ethical and legal issues related to AI in radiology.

We have provided copies of the white papers, with our recommendations, for each of you. For the purpose of the discussion, I would like to highlight the more prevalent ones as they relate to the federal government's role in this capacity.

The first is the implementation of a public awareness campaign regarding consent, patient sharing of anonymized health data, and harm-reduction strategies. This information is essential for helping to identify disease and treatment for future AI applications.

Second is the general adoption of broad consent by default, with the right to opt out.

Third is developing a system for ensuring data security and anonymization of radiology data for secondary use, as sketched after these recommendations, and implementing system standards to ensure that these criteria are being met.

Fourth, train radiology data custodians and establish clear guidelines for their role in the implementation of data sharing agreements for common AI-related scenarios and third parties.
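As an illustrative aside, a minimal sketch of what header anonymization can look like for a single imaging file, using the pydicom library. The file name and the list of fields are hypothetical; real de-identification follows the DICOM standard's confidentiality profiles and institutional protocols, which go well beyond this.

```python
# Illustrative sketch: blanking directly identifying DICOM header fields
# before an imaging study is shared for secondary (research) use.
import pydicom

ds = pydicom.dcmread("study.dcm")  # hypothetical input file

# Blank a few direct identifiers in the header (far from a complete list).
for keyword in ("PatientName", "PatientID", "PatientBirthDate",
                "ReferringPhysicianName", "InstitutionName"):
    if keyword in ds:
        setattr(ds, keyword, "")

ds.remove_private_tags()  # drop vendor-specific private elements
ds.save_as("study_deidentified.dcm")
```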

The CAR has to work with the federal government and provincial ministries of health, as well as the Canadian Medical Protective Association, or CMPA, to develop guidelines for the appropriate deployment of AI assistive tools in hospitals and clinics, while looking at minimizing harm and liability for malpractice in errors involving AI. We need to educate radiologists and other health care professionals on the limitations of AI and reiterate that the tool supplements the work of radiologists rather than replacing them.

AI is not going away. Sharing medical data is a complex issue that balances individual privacy rights against collective societal benefits. Given the potential of AI to help improve patient care and medical outcomes, I believe we will start to see a paradigm shift away from a patient's right to near-absolute data privacy and toward the sharing of anonymized data for the good of society.

We need to work together to implement a framework that lets us move forward with this technology while respecting patients' anonymity and privacy. AI in health care is going to happen sooner or later; let's make sure it is implemented in an ethical way.

Thank you for your time. I'm happy to answer your questions in either French or English.

4:05 p.m.

Conservative

The Chair Conservative Bob Zimmer

Thank you for your testimony.

We'll move on next to Mr. Leduc. Go ahead for 10 minutes.

4:05 p.m.

André Leduc Vice-President, Government Relations and Policy, Information Technology Association of Canada

Thank you, Mr. Chair and members of the committee.

It is a privilege to be here today to present the industry's perspective on behalf of the Information Technology Association of Canada. ITAC is the national voice of the telecommunications and Internet technology industry. We have more than 300 members, including more than 200 small and medium-sized businesses.

As already noted by the other speakers, there's a lot of promise and opportunity behind artificial intelligence to support economic growth and societal improvement, and the opportunities are seemingly boundless. From human mobility through automated vehicles to precision health care, many of our forthcoming solutions will be powered by artificial intelligence.

To realize the full benefits of artificial intelligence, we'll need to create systems that people trust. I've provided a brief outline of the slides that I'll present here today, including our industry's obligations and where our industry is already going; a call on our government to lead in terms of developing an ongoing dialogue via public-private partnership; the types of impacts that this will have on our workforce and the need for re-skilling, upskilling and training; and the recommendations in order to build trust in artificial intelligence.

Canada has been recognized as a global leader in artificial intelligence research and development. We are attracting global talent to universities across Canada to study in this field. We're already experiencing the benefits of AI in a number of fields, from start-ups and SMEs to larger global tech companies, all of which have developed AI systems to help solve businesses' or some of society's most pressing problems. Many others are using AI to improve supply chain efficiencies, to advance public services and to advance groundbreaking research. By leveraging large datasets, increased computing power and ingenuity, AI-driven solutions can address any number of societal or business problems, from precision or predictive health care to automated and connected vehicles that improve human mobility and decrease traffic, which in turn benefits our environment.

AI systems need to leverage vast amounts of data. The availability of robust and representative data, often de-identified or anonymized, is required for building and improving AI and machine learning systems. We cannot stress this enough: having access to broad and vast amounts of data is the key to advancing our artificial intelligence capabilities in Canada.

That said, the AI ecosystem is global. It's very competitive and it's multi-faceted. Our association welcomes a multi-stakeholder engagement approach to artificial intelligence, one that encourages Canada to bolster global engagement on AI policy to ensure we are all prospering from the potential benefits for our societies.

I'll note six key factors for the committee to consider.

First, traditional industries are already seizing AI opportunities and leading in this space. From oil and gas to mining, forestry and agriculture, they are embracing this technology to drive efficiencies and compete on a global scale. They are developing new services and products based on the information being analyzed, leveraging artificial intelligence.

Second, AI is a journey. This isn't going to be an end state. This is going to be something that continues to evolve over the forthcoming decades.

Third, central to any economy's digital transformation is cultural transformation, and misinformation in this space will kill consumer and citizen trust in new technology and artificial intelligence.

Fourth, there will be workforce disruption, but based on historical patterns, we believe new technologies, including AI, will create more job opportunities than they eliminate.

Fifth, we need partnerships for workforce development, including the re-skilling and upskilling of existing workers who may face disruption based on their current roles.

Sixth, next-generation policies are needed. These are next-generation technologies. It's time for us to start thinking outside the box.

When I first joined government in 1999, one of the first jobs I had was working to support the development of PIPEDA. I was also one of the lead architects of Canada's anti-spam legislation. I did my master's thesis on why SMEs struggle to comply with CASL and PIPEDA, so I've been working on this for the better part of the last 17 or 18 years. Interestingly, we never foresaw the impact that data would have on the legislative frameworks we have today. We couldn't foresee, when developing PIPEDA or CASL, the types of data-driven businesses that have come our way to date.

Next, I want to talk about industry's obligation to promote responsible development and use of artificial intelligence.

First, we recognize our responsibility to integrate principles and values into the design of AI technologies, beyond compliance with existing laws. While the potential benefits to people in society are amazing, AI researchers, subject-matter experts and stakeholders should continue to spend a great deal of time working to ensure the responsible design and deployment of AI systems, including addressing safety and controllability mechanisms, the use of robust and representative data, enabling greater interpretability and recognizing that solutions must be tailored to the unique risks presented by the specific context in which a particular system operates.

Second, in terms of safety, security, controllability and reliability, we believe technologists have a responsibility to ensure the safe design of AI systems. Autonomous AI agents must treat the safety of users and third parties as a paramount concern, and AI technology should strive to reduce risks to humans. Furthermore, the development of autonomous AI systems must have safeguards to ensure controllability of the AI systems by humans, tailored to the specific context in which a particular system operates.

Third is robust and representative data, with a specific focus on mitigating bias. To promote the responsible use of data and to assure its integrity at every stage, industry has a responsibility to understand the parameters and characteristics of the data, to demonstrate the recognition of potentially harmful bias, and to test for potential bias before and throughout the deployment of AI systems; a simple version of such a test is sketched after these points.

AI systems need to leverage large datasets. The availability of robust and representative data for building and improving AI and machine learning systems is of utmost importance.

By the way, this could be a significant competitive advantage for Canada. We have a globally representative population, including indigenous communities. It would be a wonderful setting for testing AI in the medical field.

In terms of interpretability, we should leverage public-private partnerships to find ways to better mitigate bias, inequity and other potential harms in automated decision-making systems. Our approach to finding such solutions should be tailored to the unique risks presented by the specific context in which a particular system operates.

Finally, the use of AI to make autonomous, consequential decisions about people, informed by but often replacing decisions made by humans, has led to concerns about liability. Acknowledging existing legal and regulatory frameworks, our industry is committed to partnering with relevant stakeholders to form a reasonable accountability framework for all entities in the context of automated systems.
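As an illustrative aside, here is one minimal version of the bias testing mentioned under the third obligation: comparing a model's positive-outcome rate across demographic groups. The data is invented, and a real audit would run many such checks; equal rates on this one metric do not by themselves establish fairness.

```python
# Illustrative sketch: a simple pre-deployment bias check comparing a model's
# positive-outcome rate across demographic groups.
import pandas as pd

# Invented audit data: the system's decisions plus a protected attribute.
audit = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "A"],
    "approved": [1,   1,   0,   0,   0,   1,   0,   1],
})

rates = audit.groupby("group")["approved"].mean()
print(rates)  # positive-outcome rate per group

# A large gap flags potentially harmful bias for human review.
print("demographic parity gap:", round(rates.max() - rates.min(), 2))
```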

We believe we should leverage and build a public-private partnership that can expedite AI R and D, democratize access to data, prioritize diversity and inclusion and prepare our workforce for the jobs of the future. ITAC members also believe that we need to prioritize an effective and balanced liability regime via the continued engagement of multi-stakeholder expert groups. The right solution is only going to come from an open exchange with all actors in the AI supply chain.

If the value favours only certain incumbent entities, there's a risk of exacerbating existing wage, income and wealth gaps. In this scenario, it isn't "us versus them" or "private versus public". It's just "us". There should be increased partnership to explore how to develop a safer, more secure and more trusted data-driven digital economy.

There is a concern that AI will result in job change, job loss and worker displacement. While these concerns may be understandable, it should be noted that most emerging AI technologies are designed to perform a specific task or to assist and augment a human's capacity rather than to replace a human employee. This type of augmented intelligence means that a portion—most likely not all—of an employee's job could be replaced or made easier by AI.

Leveraging AI to complete an employee's menial tasks is a way to increase their productivity by freeing up time to engage in customer service and interaction or more value-added job functions. Nevertheless, while the full impact of AI on jobs is not yet fully known in terms of both jobs created and jobs displaced, an ability to adapt to rapid technological change is critical. We should leverage traditional human-centred resources as well as career educational models, and newly developed AI technologies should assist in developing both the existing workforce and the future workforce to help Canadians navigate through career transitions.

4:15 p.m.

Conservative

The Chair Conservative Bob Zimmer

Mr. Leduc, you're two minutes over. We had 10 minutes for your presentation.

4:15 p.m.

Vice-President, Government Relations and Policy, Information Technology Association of Canada

André Leduc

I'll just run through quickly. I'll go to the next slide, where you can see our recommendations onscreen: prioritizing Canada's competitiveness, promoting innovation and ethical AI practices, leveraging global standards, investing in AI R and D, and using a balanced and flexible regulatory approach. I think this creates an opportunity for us to marry privacy and cybersecurity.

In summary, many if not all of the uses of AI are going to rely on data—in certain circumstances personal data—and responsible use of that data is key. Burdensome regulation or reporting will limit the pace of AI innovations. We have to get this balance right. Industry will follow the key principles of responsible use of personal data in AI. We believe these principles are echoed in Canada's first-ever digital charter, which we support as a foundational framework for launching AI that is trustworthy, secure, ethical and safe for Canadians.

Thank you.

4:15 p.m.

Conservative

The Chair Conservative Bob Zimmer

Thank you, Mr. Leduc. I apologize for constraining AI into 10 minutes. It's more than challenging, I can imagine. You did a pretty good job.

Anyway, now we have Mr. Saini for seven minutes.

4:15 p.m.

Liberal

Raj Saini Liberal Kitchener Centre, ON

Good afternoon, gentlemen. Thank you very much for coming here today.

Mr. Tang, I'm going to start with you, because very rarely do we have a medical practitioner, and as a pharmacist I thought I would start with you first.

You talked about the white paper you produced. One thing I found interesting, which I've seen even in my own practice, is the translational research component, which you've termed the "valley of death" because of a lack of resources.

How do we get beyond that problem? We might be able to create a great piece of equipment or a great piece of software, but the transition to actually seeing it used clinically is usually very difficult.

What do you propose, on a medical basis, whereby we can get the benefit of all this technology but actually apply it usefully to help patients?

4:15 p.m.

Chair, Artificial Intelligence Working Group, Canadian Association of Radiologists

Dr. An Tang

Thank you for offering me the opportunity to answer.

Serendipity has it that the federal government recently awarded a strategic innovation fund grant to a consortium led by Imagia Cybernetics, a Canadian start-up specialized in AI in oncology, along with the Terry Fox Foundation and academic radiology departments across the country, in partnership with the four top computer science labs specializing in artificial intelligence. The goal, over a three- to five-year period, is to make sure that we harness the imaging data we have and create new applications that can be used in academic departments prior to commercialization of these products down the road.

4:20 p.m.

Liberal

Raj Saini Liberal Kitchener Centre, ON

My second question is for Mr. Leduc.

If we look at the advent of artificial intelligence or the advent of technology, we are now into a different phase of human progress. Automation was created to do repetitive tasks, tasks for which intellectual capacity was not necessarily required because the tasks were repetitive. We are now entering another phase, whether you want to call it industrialization or another phase in our economic growth, whereby artificial intelligence now has the ability to do intellectual tasks.

Now, because of automation of repetitive tasks, you're creating algorithms and creating artificial intelligence through machine learning such that the decision-making is getting better.

How do we deal with potentially having underemployment of a class of people who are educated or trained to apply their intelligence to any task?

4:20 p.m.

Vice-President, Government Relations and Policy, Information Technology Association of Canada

André Leduc

There are a few things built into this.

One is that I think we're going to need to embrace lifelong learning. I think we are going to see the automation of a lot of menial and repetitive tasks and of some human decision-making. You will see, and we've seen it going back in history over time, that when the automobile first arrived, we didn't need stable boys or stables for the horse and buggy anymore. In our own sector, we replaced the operators who used to work in front of the switchboard and switch everything. We replaced them with routers and switches.

We believe that the opportunity for these types of technologies to create more employment is going to outpace the disruption. That said, people who are in menial-task fields are often the most vulnerable, and I think we need to embrace programming for re-skilling and upskilling of people who are going to face displacement based on these new technologies.

This is going to come. It isn't an option. Businesses will strive to become more efficient. They need to compete globally. If they can leverage AI to complete tasks within the enterprise, they will choose that route, because it will cost less.

4:20 p.m.

Liberal

Raj Saini Liberal Kitchener Centre, ON

Here is one other question I want to ask.

I'm sure you're all aware of the term “singularity”. Singularity can be construed as a science fiction term for cases in which eventually you may have overlords of machinery that control human beings. Let's, however, take one step back from that point.

The basis of artificial intelligence and machine learning is for them to be smarter, more efficient and more capable than human thinking. Ultimately, though, there still has to be a human component. If you look at technology the way it is, you can program it within a certain narrative, but there's also the human dimension that makes calculations as you go along. One thing I've read is that if you program autonomous cars to go at the speed limit and human beings don't always go at the speed limit, how do you compensate for that?

If we look at singularity as the end point, how do we make sure that the human dimension is still involved? We want the advantages. We want the resources that AI and machine learning can provide us, but how do we make sure that there's still a human component to ensure that decisions are still being made in the human interest or with human interest involved?

It's an easy question. Take 20 seconds.

4:20 p.m.

Voices

Oh, oh!

4:20 p.m.

Vice-President, Government Relations and Policy, Information Technology Association of Canada

André Leduc

I highlighted in our presentation that there still needs to be human control over all artificial intelligence; it has to be enabled by human control. At the end of the day, there's a lot of fear of the unknown in this space, but I think that allowing industry to set the standards and, wherever there are market failures, creating the right legislative and regulatory frameworks to address those failures is going to be important.

What we as an industry want to see is an ongoing dialogue and a balanced approach to legislation and regulation. We don't say there is no need for it. We understand that going through our own standard setting is not always the be-all and end-all. Sometimes there will be market failures that will require legislative or regulatory action. What we're suggesting is that it has to be done in a dialogue and be balanced so that we don't impede our access to innovation and our ability to do R and D in this field.

4:25 p.m.

Liberal

Raj Saini Liberal Kitchener Centre, ON

Here is a final question. You guys can comment on the other question, but I want to make sure I get this question in.

Because of the global race for AI and machine learning, some commentaries have suggested philosophically that maybe we should take it one step at a time, because the research far surpasses any legislative ability, or any human comprehension of how to deal with the moral and ethical implications of AI.

Would you suggest that we should have some framework whereby, as we hit certain milestones in the progress of AI, we should take a step back and regroup to think about how we're going to manage the next phase of development?

4:25 p.m.

Vice-President, Government Relations and Policy, Information Technology Association of Canada

André Leduc

I'll go again on that one.

The problem you run into is that not everybody's going to play by the same set of rules. This is like a global space race; now we're in an AI race. Our country has invested a significant amount of time, money and effort into being a leader in the research and development of algorithms and artificial intelligence, and we'll need to be able to commercialize that R and D and promote the use of our capabilities and capacities in AI.

If we took the time to take a step back and review, and took a couple of years to do it, we'd essentially just be putting up a roadblock vis-à-vis our global competition in this space. That's why we say this needs to be an ongoing dialogue. I think it's wonderful that you've brought this issue forward for study, but these types of issues, around data and the leveraging of data, privacy, what frameworks actually work now and what the issues are around consent, are ongoing. Through this committee, you've been working on them for the better part of the last 20 years. To take the time to stop and review would impede our competitiveness.

4:25 p.m.

Conservative

The Chair Conservative Bob Zimmer

You're way past time. Thank you.

I have to go on to Mr. Kent for seven minutes.

4:25 p.m.

Conservative

Peter Kent Conservative Thornhill, ON

Thank you, Chair.

Thank you all for adding a couple of new dimensions to the study we've been doing on digital government, digital threats, privacy issues and so forth.

I'd like to start with Professor Mittelstadt on the area of the vulnerability of massive amounts of highly personal data across society—medicine, health, so forth, business—and liability and regulation.

I'd like your comment on exactly how bringing in the GDPR, the new spectrum of regulation in Europe with significant penalties for breaches or improper use of personal data, has changed the development of artificial intelligence in its various applications, and also on the precautions that have been taken in various industries, such as the health industry on the one hand or social media on the other.

4:25 p.m.

Research Fellow, Oxford Internet Institute, University of Oxford, As an Individual

Dr. Brent Mittelstadt

I can say a few things. Largely, the impact of the GDPR is still uncertain because so much of it is vague or the actual requirements it imposes are not entirely clear at this moment. Many complaints have been filed at the member state level that are still being worked through by national data protection authorities.

We'll get some more clarity from those and also as cases are brought in front of national courts and European courts as well. There are very large fines. Data protection authorities are starting to use them, so I think we'll start to see what the actual impact is over the next two or three years in particular.

In terms of how it has actually impacted the development of AI, one effect I would say it's had—although arguably the 1995 data protection directive had this effect as well—has been to encourage developers to anonymize or de-identify data before doing anything of interest with it, because as soon as that has happened, essentially the GDPR no longer applies. It applies to the de-identification process, and it applies if you re-link the knowledge you create back to individuals, but it doesn't apply to anything you do in the in-between stage.

That's one negative, I would say, that it's had. On a positive note, I would say it has encouraged more developers to consider how humans can actually be put into the loop of automated decision-making, because there are several rights that kick in for solely automated processes—essentially, AI that does not have a human in the loop to help make a decision or with the ability to intervene in a decision.
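As an illustrative aside, a minimal sketch of what a human in the loop can look like in code. The threshold and labels are hypothetical; the point is simply that decisions the model is not confident about are routed to a person instead of being made solely by the machine.

```python
# Illustrative sketch: route low-confidence automated decisions to a human.
def decide(score: float, threshold: float = 0.9) -> str:
    """Return an automated decision only when the model is confident.

    `score` is a hypothetical model confidence in [0, 1].
    """
    if score >= threshold:
        return "auto-approve"
    if score <= 1 - threshold:
        return "auto-decline"
    return "refer to human reviewer"  # a person decides, and can override

for s in (0.97, 0.5, 0.08):
    print(s, "->", decide(s))
```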

4:30 p.m.

Conservative

Peter Kent Conservative Thornhill, ON

The overriding element of consent would touch all of this, presumably.

4:30 p.m.

Research Fellow, Oxford Internet Institute, University of Oxford, As an Individual

Dr. Brent Mittelstadt

Certainly. My comments on consent definitely apply here. There are limitations on how data can be repurposed, but again these apply only to identifiable data, so they are limited in their applicability.

June 6th, 2019 / 4:30 p.m.

Conservative

Peter Kent Conservative Thornhill, ON

Monsieur Leduc, how would the Information Technology Association of Canada feel about regulations similar to the GDPR—not necessarily exactly the same but much more regulation than exists today in Canada?

4:30 p.m.

Vice-President, Government Relations and Policy, Information Technology Association of Canada

André Leduc

We follow this issue very closely. We've been reviewing the impacts the GDPR has had, both positive and negative, in the European context. The positive is in terms of improving privacy rights for European citizens. The negative side, as my colleague pointed out, is that there's a lack of transparency and a lack of clear and simple guidance around how to comply.

This is less of a problem for the largest organizations, which have teams of lawyers to filter through the legislation and figure out how to comply, although it's a cost burden on them. It's particularly impactful upon small and medium-sized enterprises, and we've seen a significant impact in the EU.

It's not that we wouldn't welcome GDPR-like principles brought into our digital charter and Canada's data strategy, or welcome improvements to PIPEDA, but I would caution against just flipping a light switch and adopting the GDPR exactly as is. For multinationals, that might make compliance a little bit easier, because they're already GDPR-compliant, but for SMEs, taking the leap from PIPEDA as it stands today into a GDPR-like framework, without clear and simple guidance on how to comply with the law, would have a significant negative impact on small and medium-sized enterprises.