Thank you very much.
I appreciate the opportunity to address the committee. To frame my remarks before I turn to the substance of my comments, I want to say that I'm delighted the committee is holding these hearings. We are at a moment of increasingly widespread concern about the harms that may result from these systems, meaning artificial intelligence and algorithms.
I thought what I could offer in my brief opening remarks is an assessment of what governments might do in this situation. In my opening statement, I'd like to discuss the five areas where I believe there is the most excitement among researchers, practitioners and policy-makers right now, and to offer you my assessment of each. Many of these are areas you at least preliminarily addressed in your earlier reports, but I think I have something to add.
The five areas I'll address are the following: transparency, structural solutions, technical solutions, auditing and the idea of an independent regulatory agency.
I'll start with transparency. By far the most excitement in practice and policy circles right now has to do with algorithmic transparency and the idea that we can achieve justice through disclosure. I have to tell you, I'm quite skeptical of this area of work. Many of the problems we worry about in artificial intelligence are simply not amenable to transparency as a solution. For one thing, we're often not sure the problems are amenable to individual action at all, so it is not clear that disclosing anything to individuals would help ameliorate any difficulty.
For example, understanding the risk posed by a social media platform might require real expertise. Disclosure is in some ways regressive because it demands time and expertise to work through the sometimes quite arcane and complicated intricacies of a system. In addition, the risk might not be perceptible at all from the perspective of the individual.
A tenet of transparency is that what is revealed must be matched to the harm we hope to detect and prevent, and it's just not clear that we know how to do that matching for these systems.
We also sometimes discuss transparency as a tactic for matching what is revealed to an audience that will listen, and that audience is often missing from the current debates on transparency and artificial intelligence. It's not clear who we would need to cultivate to understand disclosures about these systems. They would presumably have to be experts, and deconstructing these systems would be quite time consuming, but we don't know exactly who they would be.
A key problem specific to this domain, one that is sometimes elided in other discussions, is that algorithms are often not valuable without data, and data are often not valuable without algorithms. If we disclose only the data, we might completely miss an ethically or societally problematic situation that exists in the algorithm, and vice versa.
If you need both the data and the algorithm, you also have a scale problem. It's often not clear, in practical terms, how you would manage a disclosure of that magnitude or what you would do once you received the information. And of course, the data in many of these systems are continually updated.
You will have gathered from my remarks that, ultimately, I'm pessimistic about many of the transparency proposals. In fact, it's important to note that when governments pass transparency requirements, they can be counterproductive in this area, because they create the impression that something has happened. Without an effective mechanism of accountability and monitoring matched to the transparency, it may be that nothing has happened, so making a system transparent may actually make things worse.
An example of a transparency proposal that has generated a lot of excitement recently is the idea of dataset labels made somehow equivalent to food labels, such as nutrition facts for datasets. There are some interesting ideas there. The label would describe biases or ingredients with an unusual provenance, answering the question of where the data came from, and the metaphor is that tainted ingredients produce tainted food. Unfortunately, for the systems we have in AI, it's not a good metaphor, because without some indication of the use or context, it's often not clear what the data are meant to do and how they will affect the world.
Another attractive, exciting idea in this space is the right to explanation, which is often discussed. I agree that it's an attractive idea, but it's often not clear that processes are amenable to explanation. Even a relatively simple process, one that doesn't involve a computer at all, such as the process by which you decided to join the House of Commons, might involve many factors, and simply stating a few of them doesn't capture the full complexity of how you made that decision. We find the same thing with computer systems.
The second big area I'll talk about is structural solutions. I think this was covered quite well in the committee's previous report, so I'll just say a couple of things about it.
The idea of a structural solution might be that, because only a few companies operate in some of these areas, particularly social media, we might use competition or antitrust policy to break up monopoly power. By changing the structure and incentives of the sector, that could lead to the amelioration of the harms we foresee from these systems.
I do think it is quite promising that if we change the incentives in a sector, we could see changes in the harms we foresee. However, as your report also mentioned, it's often not clear how economies of scale operate on these platforms. Without some quite robust mechanism for interoperability among systems, it's not clear how an upstart alternative in social media or artificial intelligence, or really any area that requires a large repository of data, could be effective.
I think that one of the most exciting things about this area might be the idea of a public alternative in some sectors. Some people have talked about a public alternative to social media, but it still has this scale problem, this problem of network effects, so I guess we could summarize that area by saying that we are excited about the potential but we don't know exactly how to achieve the structural change.
A more modest example of a structural change that people are excited about is the information fiduciary proposal, whereby a government would create a different incentive simply by requiring it, imposing on companies that hold personal data a duty to act in their users' interests. It's a little challenging to imagine, because we seem most successful with these proposals in domains with strong professionalization, such as doctors or lawyers.
The third area I will discuss is the idea of a technical solution to the problems of AI and algorithms. There's a lot of work currently under way that imagines we can engineer an unbiased, fair or just system, and that this is fundamentally a technical problem. While it's true that we can imagine building systems that are more effective in some ways than those we have now, ultimately these are not technical problems.
Some examples that have been put forward in this area include the idea of a seal of approval for systems that meet some sort of standard, perhaps achieved through testing and certification. This is definitely an exciting area, but only a limited set of the problems we face fall into a domain that could be tested systematically and solved technically. Ultimately, these are societal problems, as the previous witness stated.
The fourth area I'll introduce is the idea of auditing, which I saw mentioned only briefly in the committee's last report. The auditing idea is my favourite. It comes from work to identify racial discrimination in housing and employment. The idea of an audit is that we send two testers to a landlord at roughly the same time to ask for an apartment. We then see whether they get different answers, and if they do, something is wrong.
The exciting thing about this area is that we don't need to know the landlord's mind or to explain it; we simply find out whether something is wrong. There's a lot that legislatures can do in the area of auditing. They can protect third parties that wish to investigate these systems, or they can create processes akin to software's “bug bounties”, but with the bounties paid for finding problems of fairness or justice. This is, I think, the most promising way for governments to intervene.
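To make the audit idea concrete, here is a minimal sketch of a paired test run against a hypothetical automated screening system. Everything in it, the decision function, its threshold and the tester profiles, is an illustrative assumption rather than any real system's interface; the point is only that we compare outcomes for matched testers without ever opening the black box.

```python
# A minimal, hypothetical sketch of a paired "audit" test.
# The decision system, its threshold, and the tester profiles are
# illustrative placeholders, not a real product or API.

def decision_system(applicant):
    # Stand-in for the opaque system under test (e.g., a tenant-screening
    # tool). In a real audit this would be the live system, queried as a
    # black box.
    return applicant["credit_score"] >= 650

def paired_audit(system, profile_a, profile_b):
    """Send two near-identical testers through the system and compare outcomes.

    The profiles differ only in an attribute that should not matter; if the
    outcomes differ, the system is flagged for scrutiny. No explanation of
    the system's internals is required.
    """
    return system(profile_a) == system(profile_b)

# Two matched testers who differ only in the attribute under study.
tester_a = {"credit_score": 700, "income": 52000, "group": "A"}
tester_b = {"credit_score": 700, "income": 52000, "group": "B"}

if paired_audit(decision_system, tester_a, tester_b):
    print("No difference detected for this matched pair.")
else:
    print("Outcomes differ for matched testers: possible discrimination.")
```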
Finally, I'll conclude by mentioning that there is also talk of a new agency, whether a judicial body, an administrative-law agency or a commission, to handle the area of AI. I think this is an interesting idea, but the challenge is that it just postpones many of the issues I raised earlier in my remarks. We would often imagine such an agency doing some of the same things I've already discussed, so the question becomes: what is different about this area that requires processes other than those of the legislature, the courts and the standard law-making apparatus we already have? The argument has been made that expertise makes this area different, but that argument is hard to sustain, because we often do see plain old legislatures making rules about quite complicated areas.
I'll conclude there. I'm happy to take your questions.