Thank you, Chair, and thank you for the invitation to share the perspectives of the ICLMG today regarding Bill C-27.
We're a Canadian coalition that works to defend civil liberties from the impact of national security and anti-terrorism laws. Our concerns regarding Bill C-27 are grounded in this mandate.
While we support efforts to modernize Canadian privacy laws and establish AI regulations, the bill unfortunately contains multiple exemptions for national security purposes that are unacceptable and undermine Bill C-27's stated goal of protecting the rights and privacy of people in Canada.
We have submitted a written brief to the committee with 10 recommendations and accompanying amendments. I'd be happy to speak in more detail about any of these during the question period, but for now, I'd like to make three specific points.
First, in regard to the CPPA, we are opposed to proposed sections 47 and 48 of the act, which create exceptions to consent by allowing an organization to disclose, collect or use personal information if it simply “suspects that the information relates to national security, the defence of Canada or the conduct of international affairs”. This is an incredibly low threshold for circumventing consent.
Proposed section 48 is particularly egregious. It allows an organization, of “its own initiative”, to collect, use or disclose an individual's personal information if it simply suspects that the information relates to these three areas. The concern does not even need to be connected to a suspected threat. Again, the information only needs to “relate”, a term the bill does not define.
Not only are these sections very broad, but they are also unnecessary. Other sections of the law would allow for more targeted disclosure to government departments, institutions and law enforcement agencies. For example, proposed section 45 allows an organization to proactively divulge information if it “has reasonable grounds to believe”—a much higher threshold—“that the information relates to a contravention” of a law that has been, is being or will be committed. We contrast that “reasonable grounds to believe” threshold with merely suspecting that information “relates”.
We therefore find proposed sections 47 and 48 both unnecessary and overly broad, and we propose that they simply be removed from the CPPA. Barring that, we have proposed specific language in our brief that would establish a more robust threshold for disclosing personal information.
Second, we're deeply concerned with the artificial intelligence and data act overall. In line with other witnesses, we believe it is a deeply flawed piece of legislation that must be withdrawn in favour of a more considered and appropriate framework. We have outlined these concerns in our brief, as well as in a joint letter shared with the committee and the minister, signed by 45 organizations and experts in the fields of AI, civil liberties and human rights.
AIDA was developed without appropriate public consultation or debate. It fails to integrate appropriate human rights protections. It lacks fundamental definitions. Egregiously, it would create an AI and data commissioner operating at the discretion of the Minister of Innovation, resulting in a commissioner with no independence to enforce the provisions of AIDA, as weak as they may be.
Finally, I'd like to address an unacceptable exception for national security that is found in AIDA as well.
Canadian national security agencies have been open regarding their interest in and use of artificial intelligence tools for a wide range of purposes, including facial recognition, surveillance, border security and data analytics. However, no clear framework has been established to regulate the development or use of these tools in order to prevent serious harm.
AIDA should present an opportunity to address this gap. Instead, it does the opposite in proposed subsection 3(2), where it explicitly excludes the application of the act to:
a product, service or activity that is under the direction or control of
(a) the Minister of National Defence;
(b) the Director of the Canadian Security Intelligence Service;
(c) the Chief of the Communications Security Establishment; or
(d) any other person who is responsible for a federal or provincial department or agency and who is prescribed by regulation.
This means that any AI system developed by a private sector actor that falls under the direction or control of this open-ended list of national security agencies would face absolutely no independent regulation or oversight.
It is inconceivable that such a broad exemption could be justified. Under such a rule, companies could create tools for our national security agencies without undergoing any assessment or mitigation of harm or bias, creating a human rights and civil liberties black hole. What if such technology were leaked, stolen or even sold to state or private entities outside of Canada's jurisdiction? All AI systems developed by the private sector must face regulation, regardless of their use by national security agencies.
Our brief includes specific examples of the harms that this lack of regulation can cause. I'd be happy to discuss these more with the committee. Overall, if AIDA does go ahead, we believe that proposed subsection 3(2) should simply be removed.
Thank you.