There's a challenge in that if we assume human intervention alone will fix things, we will also be in a difficult situation, because human beings, for all sorts of reasons, often do not make the best decisions. We have many hundreds of years of experience in dealing with bad human decision-making and not much experience in dealing with mainly automated decision-making, but the best decisions tend to come from a good configuration of interactions between humans and machines.
If you look at how decisions are made right now, human beings often simply rubber-stamp the automated decision made by AIs or algorithms and say, “Great, a human decided this”, when actually the reason for doing so is to evade legal regulations and human rights principles, which is why we use the term quasi-automation. It seems like a human process, but in reality somebody spends three to five seconds looking over an automated decision.
In the paper I wrote, and also in the guidelines of the Article 29 Working Party, criteria were developed for what is called “meaningful human intervention”. Only when human beings have enough time to understand the decision they're making, enough training, and enough support to actually change or override the decision is it considered meaningful decision-making.
It also means that if you're driving in a self-driving car, you need enough time as an operator to be able to stop, to change course, to make decisions, and a lot of the time we're building technical systems where this isn't possible. If you look at the two recent crashes of Boeing 737 Max aircraft, this is exactly such an example: an interface between technological systems and human systems where it became unclear how much control the human beings had, and, even if they did have control and could press the big red button to override the automated system, whether that control was actually sufficient to allow them to fly the aircraft.
As I understand the current debate, that's an open question, and it is one being faced right now. The questions raised by autopilots and other automated aircraft systems will increasingly arise in everyday life: not just in an aircraft, but also in an insurance system, in how you post comments online, and in how government services are provided. It's extremely important that we get it right.