I agree that humans remain a weak point in many of these systems. I mentioned earlier some of the social engineering we've seen when very sensitive computer systems in the U.K. have been inappropriately accessed. Those systems do have protective monitoring that raises alerts when inappropriate access occurs, but the delay between the access taking place and the person responsible being found, tracked down, and held to account has on occasion been tragically long, and I do mean “tragically” literally in at least one case.
Risk appetite comes back into this discussion, along with the whole software engineering chain: how do we trust everything from the code a human being has written right through to the operator of the system? Given that the operator can be a weak point, how do we ensure that future screens show users only the data they actually need, rather than letting them bring up somebody's entire record on a single screen to look at all at once?
You're right that all those things should be considered when designing these systems, but ultimately some risk will always remain. Where are you on that risk appetite, in terms of the cost you're prepared to accept and the mitigations you're prepared to apply across different systems?