Yes. I was expecting that question. It has to do with selection of people. All we're trying to do when we select people is to increase our probability of success. So you try to use as many tools as you can, without delaying the process too much.
What happened was that we devised—The design was that you looked at the total candidate. You didn't limit yourself to one tool. You could use a test and say that you need 62%; I think the parole board does that, if I'm not mistaken. There's a mark, and if you don't meet it, you don't go to the next step. The reason we had a panel was to look at the track record of individuals: their international work, community work, their languages, the test. And the application—the application, as you know, is 20 pages, and the onus is on them to show why they would be a good member—also plays a role. So the panel was looking at the total person.
In the early stages, when we developed a new test—we had validated it amongst ourselves, but it was still a new test—there were cases, and I think they were reported, where people may not have had the C mark but they were referred to the interview to see if they were worth proceeding with. And we did that.
Gradually the panel felt more comfortable with the test. In the last two panels I think we did have a passing mark, where they would not look at people—I can't remember the mark itself, but it could have been at about 60%. I stand to be corrected.