Thank you so much for inviting me.
I have been researching the ethical challenges of algorithms and AI for nearly half a decade. What's become apparent to me in that time is that the promise of AI largely owes to its apparent capacity to replace or augment any type of human expertise. The fact that it's so malleable in that sense means that the technology inevitably becomes entangled in the ethical and political dimensions of the jobs, the practices and the organizations in which it's embedded. The ethical challenges of AI are effectively a microcosm of the political and ethical challenges that we face in society, so recognizing that and solving them is certainly no easy task.
I know, from witnesses in your previous sessions, that you've heard quite a bit about the challenges of AI, dealing with things such as accountability, bias, discrimination, fairness, transparency, privacy and numerous others. All those are extremely important and complex challenges that deserve your attention, and really the attention of policy-makers worldwide, but in my 10 minutes I want to focus less on the nature and extent of the ethical challenges of AI and more on the strategies and tools we have for solving them.
You've heard also quite a bit about the tools available to address these ethical challenges, using things such as algorithmic and social scientific auditing, multidisciplinary research, public-private partnerships, and participatory design processes and regulations. All of those sorts of solutions are essential, but my concern is that we're perhaps broadly using the wrong strategy or at least an incomplete strategy for the ethical and legal governance of AI. As a result, we may be expecting too much from our current efforts to ensure AI is developed and used in an ethically acceptable manner.
In the rest of my statement, what I want to address are the significant shortcomings that I see in current efforts to govern AI, specifically through data protection and privacy law on the one hand and through principled self-governance on the other. My principal concern here is that these strategies too often conceive of the ethical challenges of AI in an individualistic sense, when in fact they are collective challenges that require collective solutions.
To start with data protection and privacy law, responsibility far too often falls on the shoulders of individuals to protect their vital interests, or their privacy, autonomy, reputation and those sorts of things. Data protection law too often ends up protecting data rather than the people the data represents. That shortcoming can be seen in several areas of law globally. The core concepts of data protection and privacy law—personal data, personally identifiable information and so forth—are typically defined in relation to an identifiable individual, which means that the data must be able to be linked to an individual in order to fall within the scope of the law and thus to be protected by it.
The emphasis on the individual is really mismatched with the capabilities of AI. We're excited by AI precisely because of its ability to find small patterns between people and group them in meaningful ways, and to create generalizable knowledge from individual records or individual data. In the modern data analytics that drive so many of the technologies we think of as AI, the individual doesn't really matter. AI is interested not in what makes a person uniquely identifiable but rather in what makes that person similar to other people. AI has transformed privacy from an individual concern into a collective challenge, yet relatively little attention is actually paid in existing legal frameworks to collective or group aspects of privacy. I see that as something that really needs to change.
That shortcoming itself extends to the sorts of legal protections that we quite often see in data protection and privacy law that are offered to individuals and to their data. These protections are still fundamentally based on the idea that individuals can make informed decisions about how they produce data, how that data is collected and used, and when it should not be used. The burden is really placed on individuals to be well informed and to make a meaningful choice about how their data is collected and used.
As is suggested by the name, informed consent only works if meaningful, well-informed choice is actually possible. Again, we're excited about AI precisely because it can process so much data so quickly, because it can identify novel and unintuitive patterns within the data and because it can produce knowledge from them. We're excited because the data analytics that drive AI are so big, so fast and so unpredictable, but the voracious appetite that AI has for personal data, combined with the seemingly limitless and unpredictable reusability of the data, means that even if you're a particularly motivated individual, a well-informed choice about how your data is collected and used is typically impossible. Under those conditions, consent no longer offers meaningful protection or allows individuals to control how their data is collected and used.
Moving forward, in terms of data protection and privacy law in particular, we need to think more about how to shift a fair share of the ethical responsibility to companies, public bodies and other sorts of collectives. Some of the ethical burden that's normally placed on individuals should be placed on these entities, requiring them, for example, to justify their data collection and processing before the fact, rather than leaving it up to individuals to proactively protect their own interests.
The second governance strategy I want to address, principled self-governance, has seen unprecedented uptake globally. To date, no fewer than 63 public-private initiatives have formed to determine how to address the ethical challenges of AI. Seemingly every major AI company has been involved in one or more of these initiatives and has partnered with universities, civil society organizations, non-profits and other sorts of bodies. More often than not, these initiatives produce frameworks of high-level ethical principles, values or tenets meant to drive the development and usage of AI.
The strategy seems to be that the ethical challenges of AI are best addressed through a top-down approach, in which these high-level principles are translated into practical requirements that will act as a guide for developers, users and regulators. The ethical challenges of AI are more often than not presented as problems to be solved through technical solutions and changes to the design process. The rationale seems to be that insufficient consideration of ethics leads to poor design decisions, which create systems that harm people and society.
These initiatives are essentially producing self-regulatory frameworks that are not yet binding in any meaningful sense. It seems as though the blame for unethical AI tends to fall, again, on individuals—individual developers and researchers who have somehow behaved badly—as opposed to any sort of collective failure of the institutions, businesses or other types of organizations driving development in the first place.
With that in mind, I'm not entirely sure why we assume that top-down principles and codes of ethics will actually make AI, and the organizations that create and use it, more ethical or trustworthy. Governing through principles and codes of ethics is nothing new. We have lots of well-established professions, such as medicine and law, that have used principles for a very long time to define their ethical values and responsibilities, and to govern the behaviour of the professionals and the organizations that employ them.
If we can think of AI development as a profession, it very quickly becomes apparent that it lacks several characteristics necessary to make a principled approach actually work in practice.
In the first place, AI development lacks common aims and fiduciary duties to users and individuals. Take medicine as a counterexample: unlike medicine, AI development doesn't serve the public interest in the first instance. Developers don't have fiduciary duties toward their users or the people affected by AI, because AI is quite often developed in a commercial environment where fiduciary duty is owed to the company's shareholders. As a result, you can have principles that are intended to protect the interests of users and the public coming into conflict with commercial interests, and it's not clear how those conflicts are going to be resolved in practice.
Second, AI development has a relatively short professional history and lacks well-established and well-tested best practices. There are professional bodies and codes of ethics for software engineering, but because it's not a legally recognized or licensed profession, those bodies exercise very little power over their members in practice. The codes of ethics they do have tend to be more high-level and relatively brief in comparison to those of other professions.
The third characteristic that AI development is seemingly lacking is proven methods to translate these high-level principles into practical requirements. The methods we do have tend to exist, or to have been tested, only in academic environments and not in commercial ones. Moving from high-level principles to practical requirements is a very difficult process. The outputs we've seen from AI ethics initiatives thus far have almost universally relied on vague, contested concepts like fairness, dignity and accountability. There's very little offered in the way of practical guidance.
Disagreements over what those concepts mean only come out when the time comes to actually apply them. The huge amount of work we've seen to develop these top-down approaches to AI ethics has accomplished very little in practice. Most of the work remains to be done.
What I would conclude with is that ethics is not meant to be easy or formulaic. Right now we too often think of ethics purely in terms of technical fixes, checklists or impact assessments, when really we should be looking for and celebrating these normative disagreements, because they represent what it means to take ethical challenges seriously, given the plurality of opinion that we should expect in democratic societies.
The difficult work that remains for us in AI ethics is to move from high-level principles down to practical requirements. It's really only in doing that and in supporting that sort of work that we'll really come to understand the ethical challenges of AI in practice.
Thank you, and I look forward to your questions later.