Okay. When you use this data, you're collecting all this personal information. You're tracking population trends. You divide everyone into categories, and then you treat people differently because they belong to a category. A concern was raised earlier about how important this type of technology is for democratic debate. Once people identify themselves, you can use those categories to change the environment around them.
I was doing research on MSN, and while I had not identified myself as any particular person, I was surrounded by the news of the day. As soon as I registered as a 16-year-old girl living in Vancouver—which I was not, as you might have guessed—the news of the day disappeared and it was replaced with celebrity news, dieting ads, and plastic surgery ads. It wasn't that they knew I was Val the 16-year-old girl living in Vancouver; they knew I was someone who fit that category.
So there are issues of discrimination that flow from that, as Professor Scassa mentioned, but it's even more insidious than that, because the environment around a person is changed based on assumptions about who they are and what category they fit into. That practice would not fall within PIPEDA's protections on the use of personal information, but it's highly problematic from a privacy point of view, because it fractures the public spaces that are necessary for democratic debate, and it opens up vulnerable populations to discrimination.