Good afternoon.
Thank you for inviting me to share with you some of the reflections on the ethical issues of artificial intelligence that we set out in Montreal.
I was asked to speak about the Montreal Declaration for a Responsible Development of Artificial Intelligence, which was presented in 2018, and that is the document I will discuss.
First I will outline the context in broad strokes. The technological revolution that is taking place is causing a profound change in the structure of society, by automating administrative processes and decisions that impact the lives of our citizens. It also changes the architecture of choice by determining our default options, for instance. And it transforms lifestyles and mentalities through the personalization of recommendations, access to online automated health advice, the planning of activities in real time, forecasting, and so on.
This technological revolution is an unprecedented opportunity, it seems to me, to improve public services, correct injustices and meet the needs of every person and every group. We must seize this opportunity before the digital infrastructure is completely established, leaving us little or no leeway to act.
To do so we must first establish the fundamental ethical principles that will guide the responsible and sustainable development of artificial intelligence and digital technologies. We must then develop appropriate standards, regulations and legislation. In the Montreal Declaration for a Responsible Development of Artificial Intelligence, we proposed an ethical framework for the regulation of the artificial intelligence sector. Although it is not binding, the declaration seeks to guide the standardization, legislation and regulation of artificial intelligence, or AI. In addition, that ethical framework constitutes a basis for human rights in the digital age.
I will quickly explain how we developed that declaration. This may be of interest in the context of discussions about artificial intelligence in our democratic societies. Then I will briefly present its content.
The declaration is first and foremost a document produced through the consultation of various stakeholders. It was an initiative of the University of Montreal, which received support from the Fonds de recherche du Québec and, in the rest of Canada, from the Canadian Institute for Advanced Research, or CIFAR. Behind this declaration was a multidisciplinary, inter-university working group drawn from philosophy, ethics, the social sciences, law, medicine and, of course, computer science. Mr. Yoshua Bengio, for instance, was a member of this group.
This university group then launched, in February 2018, a citizens' consultation process in order to benefit from the field expertise of citizens and AI stakeholders. It organized over 20 public events, discussion seminars and workshops over eight months, mainly in Quebec but also in Europe, in Paris and Brussels. More than 500 people took part in these workshops in person. The group also organized an online consultation. This consultation process was based on a prospective methodology applied to ethics: our group invited workshop participants to reflect on ethical issues based on prospective scenarios, that is to say, scenarios about the near future of the digital society.
We organized a broad citizen consultation with various stakeholders, rather than consulting experts alone, for several reasons. I will mention three of them briefly.
The first reason is that AI is being deployed in all societies and concerns everyone. Everyone must be given an opportunity to speak out about its deployment. That is a democratic requirement.
The second reason is that AI raises some complex ethical dilemmas that touch on many values. In a multicultural and diverse society, experts alone cannot make decisions on the ethical dilemmas posed by the spread of artificial intelligence. Although experts may clarify the ethical issues around AI and establish the conditions for a rational debate, they must design solutions in co-operation with citizens and all parties concerned.
The third reason is that only a participative process can sustain the public's trust, which is necessary for the deployment of AI. If we want to earn the population's trust and give it good reasons to trust the actors involved with AI, we have a duty to involve the public in the conversation about AI. That is not a sufficient condition, but it is a necessary condition for establishing trust.
I should add that although industry actors are very important stakeholders, they must stop seeking to write the ethical principles in place of citizens and experts, or the legislation that should be drafted by parliaments. That attitude is very widespread, and it can undermine the public trust that needs to be fostered.
Let's talk about the content of the declaration. The consultation had a dual objective: first, to develop ethical principles, and second, to formulate public policy recommendations.
The result of that participatory process is a comprehensive declaration that includes 10 fundamental principles, 60 subprinciples or proposals for applying the principles, and 35 public policy recommendations.
The fundamental principles touch on well-being, autonomy, privacy and intimacy, solidarity (a principle not found in other documents), democracy, equity, diversity, responsibility, prudence and sustainable development.
The principles have not been ranked by priority. The last principle is no less important than the first, and depending on the circumstances, one principle may be considered more relevant than another. For instance, although privacy is generally considered a matter of human dignity, the privacy principle may carry less weight for medical purposes if two conditions are met: the use of data must contribute to improving the health of patients (the well-being principle), and the collection and use of private data must be subject to individual consent (the autonomy principle).
The declaration is thus not a simple checklist; it also makes it possible to establish standards and guidelines by sector of activity. The privacy regime, for instance, will not be the same in every sector; it may vary depending on whether we are talking about the medical sector or the banking sector.
The declaration also constitutes a basis for the development of legal norms, such as legislation.
Other similar declarations, such as the Helsinki Declaration on bioethics, are non-binding, as ours is. Our declaration simply lists the principles that actors in AI development should commit to respecting. For us, the task now is to transpose those principles into industrial standards that also cover the deployment of artificial intelligence in public administrations.
We are also working on the transposition of those principles into human rights for the digital society. That is what we are going to try to establish through a citizens' consultation which we hope to conduct throughout Canada.
Thank you.