It's possible to ask a human how they reached a particular judgment, but we often don't have much insight into why we make the decisions we make. Frequently we're inventing post hoc justifications that sound plausible to others, and those justifications are as much news to us as they are to them.
I'm not sure I can make recommendations that would transfer readily from that situation to decisions made by AI.