Thank you so much for the invitation and for having me here today, Mr. Chair and committee.
I'm very happy to speak to you today on behalf of the International Civil Liberties Monitoring Group. We're a coalition of 45 Canadian civil society organizations dedicated to protecting civil liberties in Canada and internationally in the context of Canada's anti-terrorism and national security activities.
Given our mandate, our particular interest in facial recognition technology is its use by law enforcement and intelligence agencies, particularly at the federal level. We have documented the rapid and ongoing increase of state surveillance in Canada and internationally over the past two decades. These surveillance activities pose significant risks to, and have violated, the rights of people in Canada and around the world.
Facial recognition technology is of particular concern given the incredible privacy risks that it poses and its combination of both biometric and algorithmic surveillance. Our coalition has identified three reasons in particular that give rise to concern.
First, as other witnesses today and earlier this week have pointed out, multiple studies have shown that some of the most widely used facial recognition technology is based on algorithms that are biased and inaccurate. This is especially true for facial images of women and people of colour, who already face heightened levels of surveillance and profiling by law enforcement and intelligence agencies in Canada.
This is particularly concerning in regard to national security and anti-terrorism, where there is already a documented history of systemic racism and racial profiling. Inaccurate or biased technology only serves to reinforce and worsen this problem, running the risk of individuals being falsely associated with terrorism and national security risks. As many of you are aware, the stigma of even an allegation in this area can have deep and lifelong impacts on the person accused.
Second, facial recognition allows for mass, indiscriminate and warrantless surveillance. Even if the significant problems of bias and accuracy were somehow resolved, facial recognition surveillance systems would continue to subject members of the public to intrusive and indiscriminate surveillance. This is true whether it is used to monitor travellers at an airport, individuals walking through a public square or activists at a protest.
While law enforcement is required to seek judicial authorization to surveil individuals either online or in public places, there are gaps in current legislation as to whether this requirement applies to surveillance or de-anonymization via facial recognition technology. These gaps can subject all passers-by to unjustified mass surveillance in the hopes of identifying a single person of interest, either in real time or after the fact.
Third, there is a lack of regulation of the technology and a lack of transparency and accountability from law enforcement and intelligence agencies in Canada. The current legal framework for governing facial recognition technology is wholly inadequate. The patchwork of privacy rules at the provincial, territorial and federal levels does not ensure law enforcement uses facial recognition technology in a way that respects fundamental rights. Further, a lack of transparency and accountability means that such technology is being adopted without public knowledge, let alone public debate or independent oversight.
Clear examples of this have been revealed over the past two years.
The first and most well known is that the lack of regulation allowed the RCMP to use Clearview AI facial recognition for months without the public’s knowledge, and then to lie about it before being forced to admit the truth. Moreover, we now know that the RCMP has used one form of facial recognition or another for the past 20 years without any public acknowledgement, debate or clear oversight. The Privacy Commissioner of Canada found that the RCMP’s use of Clearview AI was unlawful, but the RCMP has rejected that finding, arguing that they cannot be held responsible for the lawfulness of services provided by third parties. This essentially allows them to continue contracting with other services that violate Canadian law.
Lesser known is that the RCMP also contracted the use of a U.S.-based private “terrorist facial recognition” system known as IntelCenter. This company claims to offer access to facial recognition tools and a database of more than 700,000 images of people associated with terrorism. According to the company, these images are acquired, just like Clearview AI's, by scraping them from online sources. Being linked to a so-called terrorist facial recognition database only heightens the stigma and the rights implications at stake.
As a final example, I'd just say that CSIS has refused to confirm whether or not they even use facial recognition technology in their work, stating that they have no obligation to do so.
Given all these concerns, we would make three main recommendations: first, that the federal government ban the use of facial recognition surveillance immediately and undertake consultation on the use and regulation of facial recognition technology in general; second, based on these consultations, that the government undertake reforms to both private and public sector privacy laws to address gaps in facial recognition and other biometric surveillance; and, finally, that the Privacy Commissioner be granted greater enforcement powers with regard to both public sector and private sector violations of Canada's privacy laws.
Thank you, and I look forward to the discussion and questions.