We have a variety of ways we monitor for this. One is that when we calibrate the programs or crunch the data, we look for skews in performance. We look for individuals who might be providing too much of one thing to the customers they're meeting, which would suggest that there might be a bias on the part of that advisor.
What we also look for is whether the client actually uses the product or services they were provided. Our programs reward individuals for the education of the client, not just the providing of the advice and the solution, and we measure whether the client actively uses it afterwards. Those are a couple of examples.
The other thing I want to point out, though, and this is the first time we've shared those numbers, is that we also watch and monitor manager behaviour, and concurrently monitor the overall program to make sure it has integrity. If the numbers got any higher, you would want to ensure that the program in and of itself wasn't creating the wrong culture or the wrong bias.