Thank you.
I appreciate being able to go back to the point about the human context of surveillance. We certainly delved into that when we talked about AI and its use for surveillance that was on-device in the audit response. We covered—I think, compellingly—the ways in which bias is baked in.
What strikes me in this regard is surveillance more akin to CCTV. You know that in places like the U.K. and the United States, where there's an expansive use of CCTV, it's been found to be really susceptible to abuse, just based on human nature. The American Civil Liberties Union identified four ways in which CCTV is susceptible to abuse, so for the purpose of this round, I want you to just consider that context.
The first is criminal abuse. In instances like this, obviously, if the federal government is doing it warrantless and outside the scope of its work, that could suggest criminality. The second is clandestine abuse, if we're talking about our RCMP, our national defence or, perhaps more specifically, our national security establishment and the way it does online surveillance. I'm not suggesting they're involved in that, but it is a possibility. The third is institutional abuse: the overreach and the top-down approach by which government institutes surveillance on the public are a significant risk. CCTV was found to be not only ineffective but also, it was argued, an institutional abuse.
The fourth, and what I'm most concerned about given the sensitive nature of the information, is abuse for personal purposes, which is why I was trying to drill down on exactly who. I think, for anybody who's not aware of IT, we have an idea of who's on the IT side.
Would you agree that the possibilities and susceptibilities for abuse at the CCTV or analog level ought also to be considered at the deeply digital level, particularly as it relates to AI and the full and complete access these technologies have to people's personal information and data? Is that a fair assumption?