In the Montreal Declaration for a Responsible Development of Artificial Intelligence, for instance, one of the principles mentioned is prudence. The idea behind it is to state that there are security and reliability criteria for algorithms, but not only for the algorithms themselves. I would like to expand on this topic, because the way in which an algorithm is put in place within a system is important.
There is a whole system around an algorithm: other algorithms, databases, and their use in a specific context. In the case of a platform, it is easy, since you have an individual user behind his screen. However, when you are talking about an aircraft or a complex enterprise, you have to take the entire system into account.
Here, the reliability involved is that of the system, not only that of the algorithm. The algorithm does its work. The issue is to see how the data is being used, what types of decisions are made, and what human control there is over those decisions or predictions. From that perspective, it seems extremely important to me that algorithmic systems, and not simply the algorithms, be audited. I am talking about audits in the sense that people really look into the architecture of the system to find its possible shortcomings.
In the case of aircraft, since you mentioned those two recent tragic air catastrophes, we must, for instance, ensure in advance that human beings keep control, even if they may make mistakes. That is not the issue; human beings make mistakes, and that is precisely why we could also put algorithmic aids in place. However, acknowledging that to err is human while maintaining human control over the machine is one of the things we need to discuss. It is certainly an essential factor if we are to identify the problems with a given algorithmic system.