Very briefly, then, and I'll let my colleague pick it up, the concern here is that you have two ends. You have the relevancy test for passing information over, and it's information relevant to activities that undermine the security of Canada, meaning the sovereignty, security, or territorial integrity of Canada. My sense is that it's an incredibly broad definition, and the examples that are given are simply illustrative; they're not closed sets. Even if they were closed sets, the categories within them are pretty broad, so you have broadness on both ends of this equation.
The concern I have with robust information sharing along these lines is that there's no real control for false positives. I appreciate that when the information gets to the service—let's take the service as an example—they will apply their analytics and ask whether this person is really engaged in something that undermines the sovereignty of Canada, whatever that means, and whether they'll take any national security action against the person.
However, it's rather like this: once they're on you, they're on you, and they just don't let go. The information sits in the database, and there are no real retention rules spoken to in this legislation. There are at the service; they have their own retention standards. What I'm concerned about is that the agency that sends the information on, the transmitting agency, doesn't really turn its mind to false positives. A "necessary" test would impose some rigour that at least has the prospect of screening them out more effectively than a relevancy test would.
Part of the training for the people in the transmitting agencies should be to have some real understanding of national security and to appreciate the ease with which false positives can arise. If you're alive to that possibility, then you'll be vetting for it, and the risk on the false positive side is diminished.
That's what my concern is.