In terms of training for the staff, it was a matter of going back and actually documenting that people were taking the training: first, that the training was being offered as it should have been; second, that employees were taking it; and third, based on their actual day-to-day work, whether they had absorbed the lessons they needed to absorb.
We've decided that those are the three key questions, and we've developed tools to get at them. Moving forward in the next fiscal year, we're going to be documenting the answers to all those questions. Based on the answers, we'll be changing our game plan.
In terms of the NRAM and the risk assessment approach, the first step was simply to calibrate, in our minds, the high, very high, and extremely high risk ratings against the audit results, to see whether they tended to correlate. That's going to happen by March 31 this year. Moving forward, we're going to do a more detailed analysis, factor by factor, so we can say that this exact subfactor in assessing the risk around a corporation is predictive, while that factor isn't. As I mentioned, we have roughly 90 factors or algorithms that we use.
I think this is part of the evolution of the tool. We started with a structured, formal questionnaire that people had to answer, so that we were systematic across all of our offices. We've since automated it. We're now determining whether it works directionally, and then the final stage for us is determining what the specific levers are and whether they work, so that we can optimize it.