I am going to need Ms. Kim's help on this, but let me start by describing the overall response. As for why the field test was done, there were a number of objectives. One was to ensure that the final survey would have a limited number of questions, since keeping the survey short improves usability and obviously yields a higher response rate. Another was specifically to develop the archetypes, groupings, or clusters that were fed back as part of the response to the survey.
As for your question about what was different between the field test and the final survey, I think there were six factors at play that led to changes between the two. First, as I said, there was an effort to use the pretest to determine what the best clusters were; some analytic work was going on in that regard. Second, there was an effort to bring the survey length down to an appropriate size in order to maximize the user experience and response rates.
Third, there was an effort to avoid unnecessary duplication, as a number of questions covered similar ground. Fourth, a few questions were removed because they were perceived as too sensitive, in an effort to ensure that the questionnaire was well received by Canadians writ large.
Fifth, some questions were used to assess user satisfaction, whether users encountered any issues, and how they responded to them. Finally, as was noted in the media, a few questions were accidentally included in the pretest, and that was obviously not repeated in the final survey.
I don't know if Ms. Kim wants to add to that, but that's a quick summary.