I can't propose a figure that wouldn't be open to change. What I'm saying is that you should weigh the system's ability to behave well against the risk that it may make mistakes. With the artificial intelligence systems available today, I think you have to allow for a margin of error because, even though these systems have a very high degree of reliability, they aren't 100% reliable.
That's why I mentioned losses of confidentiality, data integrity, or availability that may have serious impacts on certain persons. Out of the total number of persons concerned by the system, how many have been affected? If 80% of those people are seriously affected, we really have a high-impact system, and action has to be taken. On the other hand, if barely 1% of 100,000 individuals are affected, that percentage may fall within the margin of error, which allows the system to make mistakes in 1% of cases.
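The proportional reasoning above could be sketched as a simple classification rule. This is purely illustrative: the function name, the 80% high-impact threshold, and the 1% tolerated error margin are assumptions drawn from the examples in the testimony, not a defined regulatory formula.

```python
def classify_impact(affected: int, total: int,
                    error_margin: float = 0.01,
                    high_impact_threshold: float = 0.80) -> str:
    """Classify a system's impact from the share of affected persons.

    Hypothetical thresholds, taken from the figures cited in the testimony:
    - at or above 80% seriously affected -> high-impact system, action required
    - at or below the 1% error margin   -> within the tolerated margin of error
    - anything in between               -> requires further assessment
    """
    rate = affected / total
    if rate >= high_impact_threshold:
        return "high-impact"
    if rate <= error_margin:
        return "within error margin"
    return "requires assessment"

# The two scenarios from the testimony:
print(classify_impact(80_000, 100_000))  # 80% seriously affected
print(classify_impact(1_000, 100_000))   # barely 1% affected
```

The point is that the same absolute number of affected persons can mean very different things depending on the size of the population the system covers, which is why a percentage-based margin is used.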