We're very similar. In our case, we also cannot make our source data directly available as is, so we aggregate our reports and our tables. Our data sets are all aggregate data.
Even when the data are aggregated, we still need to deal with the smaller values to prevent the identification of individuals, so we sometimes group the smaller values, mask them or, in a sense, drop certain values. We also have algorithms to randomly round the data; rounding helps to anonymize individual instances and also works systematically at a different layer. Within the department, we consult with our ATIP colleagues to ensure that, again, we are respecting privacy.
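[To illustrate the two techniques mentioned here, a minimal sketch follows, assuming a simple count table. The suppression threshold of 5 and rounding base of 5 are illustrative assumptions, not the department's actual parameters.]

```python
import random

# Hypothetical disclosure-control sketch: suppress small counts and
# randomly round the rest to a base. Threshold and base are assumptions.
SUPPRESSION_THRESHOLD = 5   # counts below this are masked
ROUNDING_BASE = 5           # counts are rounded to a multiple of this


def random_round(count: int, base: int = ROUNDING_BASE) -> int:
    """Round `count` to a neighbouring multiple of `base`, choosing up or
    down with probability proportional to the remainder (unbiased in
    expectation)."""
    remainder = count % base
    if remainder == 0:
        return count
    floor = count - remainder
    return floor + base if random.random() < remainder / base else floor


def protect_cell(count: int):
    """Return a masked marker for small counts, otherwise a randomly
    rounded value."""
    if count < SUPPRESSION_THRESHOLD:
        return "x"  # suppressed cell
    return random_round(count)


# Example aggregate table: counts by (category, region) -- made-up numbers.
table = {("Category A", "Region 1"): 132,
         ("Category A", "Region 2"): 3,
         ("Category B", "Region 1"): 47}

protected = {cell: protect_cell(n) for cell, n in table.items()}
print(protected)  # e.g. {..., ('Category A', 'Region 2'): 'x', ...}
```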
This, of course, poses a challenge and limits the kind of response we can give to different requests. For example, we recently received a request through the TBS portal, where clients can suggest new data sets they would like to see. We have separate data sets on admissions to Canada by immigration category, and we have separate information by source country. The request is to cross them, and we know that crossing them will produce very small cells. We work through each challenge as it comes, but that obviously delays our making the data available a little. What you're pointing at is definitely an everyday question for us.
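[As a rough illustration of why crossing two tables creates this problem, here is a sketch using made-up records by immigration category and source country; each margin on its own is comfortably large, but the cross-tabulation contains very small cells.]

```python
from collections import Counter

# Made-up illustrative records; not real admissions data.
records = (
    [("Economic", "Country X")] * 40
    + [("Economic", "Country Y")] * 8
    + [("Family", "Country X")] * 2      # small crossed cell
    + [("Family", "Country Y")] * 30
)

by_category = Counter(cat for cat, _ in records)
by_country = Counter(ctry for _, ctry in records)
crossed = Counter(records)

print(by_category)  # margins look safe: Economic 48, Family 32
print(by_country)   # Country X 42, Country Y 38
print(crossed)      # but the crossed table includes a cell of 2
```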