I'll add to what my colleague said. I think it might be useful to flesh out an example of why datasets, when combined, can be far more privacy-intrusive than the same sets considered in isolation. I can give you an example from the private sector that helps to make this point.
There was an app published called “Girls Around Me”. Basically, it combined two sets of publicly available information: information from Facebook profiles, which is generally people's pictures and their likes, dislikes, interests, and what have you, and information from Foursquare, which allows people to use their iPhones to check into a particular place, such as “I'm at this restaurant, I'm at this movie theatre, I'm at this bar”, or whatever. Combining those two datasets, which in isolation have their own concerns but are not super-intrusive, basically created a stalking app that allowed people to look at their phone and say that in this restaurant there is a girl, here's what she looks like, here's what her interests are, and here's everything about her.
Again, you take these two datasets in isolation, put them together, and suddenly you have something that is far more intrusive than the two taken separately. That's an example from the private sector, but I think it does illustrate the harm and the concerns that can arise when these datasets are combined, particularly echoing what my colleague said about the fact that, in dealing with the government, people don't necessarily have a choice about sharing this information. When you're dealing with communities that are at risk and already have good reason to be suspicious of their interactions with government, I think these are very good illustrations of the kinds of concerns that come into play.