Again, the fact that one in five might not respond is not terribly relevant. Having something like 25 million Canadians respond to a survey would be overwhelming. Angus Reid would do a survey with 1% and consider that to be a huge survey. The Labour Force Survey is only 40,000 households, and we base a lot on that, so the size is not the issue. The key question is whether the 19% who didn't respond would be representative of the overall population. So the concern is the skewness within the responses rather than the overall response rate.
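A rough numerical sketch of that point, with all figures invented for illustration: a very large but skewed set of responses can land further from the true value than a far smaller representative sample.

```python
import random

# Hypothetical illustration: a huge but skewed sample can be further from the
# truth than a small representative one. All numbers here are invented.
random.seed(1)

# A made-up "population" of one million incomes.
population = [random.gauss(50_000, 20_000) for _ in range(1_000_000)]
true_mean = sum(population) / len(population)

# Small but representative sample (roughly the scale of a 40,000-household survey).
small_sample = random.sample(population, 40_000)

# Large sample in which the two income extremes respond less often (non-response skew).
def responds(income):
    # Respondents near the middle of the distribution answer more often.
    return random.random() < (0.9 if 30_000 < income < 70_000 else 0.4)

large_biased = [x for x in population if responds(x)]

print(f"true mean:            {true_mean:,.0f}")
print(f"small random sample:  {sum(small_sample)/len(small_sample):,.0f}")
print(f"large skewed sample:  {sum(large_biased)/len(large_biased):,.0f} "
      f"(n = {len(large_biased):,})")
```

The larger sample ends up with the bigger error, because its error comes from who answers rather than from how many answer.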
Again, we can only draw on the results of previous surveys, particularly the experiment in the United States in 2003, where they did a pilot project on a voluntary basis and benchmarked it against the actual survey. The people who did not respond were highly unrepresentative of the total population.
But without having done that, you don't really know exactly where the non-representation would be. We would certainly speculate that it would fall at the two extremes of the income distribution, and among First Nations communities and recent immigrants.
If we knew what that bias was, even that would not be a problem, but we have no basis for knowing it a priori. If, for example, in 2011 we did one more run of the mandatory survey and ran a voluntary one as a pilot project at the same time, then we could benchmark them. Then I think we'd probably be okay, because we would know how to adjust for those biases.
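A minimal sketch of what that benchmarking adjustment could look like, assuming hypothetical income groups and shares: the mandatory run supplies the true group shares, the voluntary pilot's shares reveal the bias, and the ratio of the two gives reweighting factors for future voluntary estimates.

```python
# All group names and figures below are hypothetical, for illustration only.
mandatory_shares = {"low income": 0.20, "middle income": 0.60, "high income": 0.20}
voluntary_shares = {"low income": 0.12, "middle income": 0.72, "high income": 0.16}

# Weight each voluntary respondent by (benchmark share / observed share) so the
# weighted voluntary sample matches the mandatory benchmark for these groups.
adjustment_weights = {
    group: mandatory_shares[group] / voluntary_shares[group]
    for group in mandatory_shares
}

for group, w in adjustment_weights.items():
    print(f"{group}: weight {w:.2f}")
# Under-represented groups (the two income extremes here) get weights above 1;
# over-represented groups get weights below 1.
```

Without the mandatory benchmark, the observed voluntary shares are all you have, so there is nothing to compute these weights against.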