There's absolutely no doubt that I would have preferred to have a 94% response rate or any response rate higher than 69%, but the fact that the response rate was 69% is not in itself a condemnation of the data.
Response rates really have two impacts when you start talking about data. The first is an impact on the statistical variability of the estimates: this is what is at play when you hear that a poll is accurate to plus or minus 2%, 95 times out of 100. That's where the 69% comes in. If we hadn't adjusted the size of the sample as we did, the estimates would have been of much poorer quality from that perspective. Because we did adjust the size of the sample, going from a 20% sampling rate to a 30% sampling rate, we actually got, as the Auditor General noted, the same number of responses from households and Canadians (in fact a slightly higher number), and that took care of that issue. In terms of sampling variability, the estimates from the 2011 National Household Survey, as we demonstrated in documentation we released on coefficients of variation, were roughly as good as those from the 2006 census.
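To make that arithmetic concrete, here is a rough back-of-envelope sketch. It uses only the figures quoted in this testimony (the 20% and 30% sampling rates and the 94% and 69% response rates) together with the generic margin-of-error formula for a proportion; it is an illustration, not Statistics Canada's actual weighting or variance-estimation methodology.

```latex
% Back-of-envelope illustration using the rates quoted in the testimony.
% Effective responding fraction of all households under each design:
%   2006-style design: 20% sampling rate, ~94% response rate
%   2011 design:       30% sampling rate, ~69% response rate
\[
  0.30 \times 0.69 \approx 0.207
  \;\;\ge\;\;
  0.20 \times 0.94 \approx 0.188
\]
% So raising the sampling rate kept the number of responding households
% at least as large as before, which is what drives sampling variability.
% Generic 95% margin of error for an estimated proportion p based on n
% responses (simple random sampling, no finite-population correction):
\[
  \mathrm{MOE}_{95\%} \approx 1.96 \sqrt{\frac{p(1-p)}{n}}
\]
% Holding the number of responses n roughly constant therefore keeps the
% "plus or minus" figure roughly constant as well.
```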
The second issue is non-response bias. Because the proportion of people who answered is smaller, and significantly different from 100%, there was a possibility that those who responded might differ significantly in their characteristics from the population as a whole. A lot of claims have been made, and a lot of people raised concerns about that possibility. We spent a very large amount of time, both before publishing the data and as we were publishing it, looking at that issue.
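For readers less familiar with the term, the standard textbook decomposition of non-response bias shows why a response rate below 100% only distorts an estimate to the extent that respondents and non-respondents actually differ. This is a generic survey-sampling identity, not something stated in the testimony itself.

```latex
% Generic deterministic decomposition of non-response bias for a mean
% (standard survey-sampling result; illustration only).
%   \bar{Y}   : true population mean
%   \bar{Y}_r : mean among respondents
%   \bar{Y}_m : mean among non-respondents
%   r         : response rate (here roughly 0.69)
\[
  \operatorname{Bias}\bigl(\bar{y}_r\bigr)
  \;=\; \bar{Y}_r - \bar{Y}
  \;=\; (1 - r)\,\bigl(\bar{Y}_r - \bar{Y}_m\bigr)
\]
% If respondents and non-respondents have similar characteristics, the
% difference \bar{Y}_r - \bar{Y}_m is small and the bias is small even at
% r = 0.69; if they differ substantially, the bias can be large. Assessing
% which case holds is the evaluation work described in the testimony.
```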