I'm calling this meeting back to order.
For the second hour of this meeting, we're launching our study on facial recognition software and concerns related to it. Today we have the commissioner, who has agreed to remain here for an additional hour so that he can answer some questions as we launch into the investigation of this matter.
Thank you, Commissioner, for remaining with us.
We also have Mr. Homan, who is remaining with us as well, and Lara Ives, who is the executive director of the policy, research and parliamentary affairs directorate. Thank you so much for being here. Finally, we have Regan Morris, who is joining us as legal counsel.
Thank you as well for being here.
Commissioner, I'll turn it back to you for an opening statement to allow you to begin the discussion. Then we'll have questions for you.
Thank you again, Mr. Chair.
Facial recognition technology has become an extremely powerful tool that, as we saw in the case involving Clearview AI, can identify a person in a bank of billions of photos or even among thousands of protesters. If used responsibly and in appropriate situations, it can provide significant benefits to society.
In law enforcement, for example, it can enable police to solve crimes or find missing persons. However, it requires the collection of sensitive personal information that is unique to each individual and permanent in nature. Facial recognition technology can be extremely privacy invasive. In addition to promoting widespread surveillance, it can produce biased results and undermine other human rights.
The recent Clearview AI investigation, conducted jointly with my counterparts in three provinces, demonstrated how facial recognition technology can lead to mass surveillance and help treat billions of innocent people as potential suspects. Despite our findings that Clearview AI's activities violated Canadian privacy laws, the company refused to follow our recommendations, such as destroying the photos of Canadians.
In addition, our office is currently investigating the use of Clearview AI's technology by the Royal Canadian Mounted Police, or RCMP. This investigation is nearing completion. We are also working with our colleagues in all provinces and territories to develop a guidance document on the use of facial recognition by police forces. We expect to release a draft of this document for consultation in the coming weeks.
The Clearview case demonstrates how citizens are vulnerable to mass surveillance facilitated by the use of facial recognition technology. This is not the kind of society we want to live in. The freedom to live and develop free from surveillance is a fundamental human right. Individuals do not forgo their rights merely by participating in the world in ways that may reveal their face to others or enable their image to be captured on camera.
The right to privacy is a prior condition to the exercise of other rights in our society. Poorly regulated uses of facial recognition technology, therefore, not only pose serious risks to privacy rights but also impact the ability to exercise such other rights as freedom of expression and association, equality and democracy. We must ensure that our laws are up to par and that they impose limits to ensure respect for fundamental rights when this technology is used.
To effectively regulate facial recognition technologies, we need stronger protections in our privacy laws, including, among other things, a rights-based approach to privacy, meaningful accountability measures and stronger enforcement powers. The federal government recently introduced two proposals to modernize our privacy laws. These are important opportunities to better regulate the use of facial recognition and other new technologies.
Last November, the Department of Justice released a comprehensive and promising consultation paper that outlined numerous proposals to improve privacy legislation in the federal public sector. It proposes enhanced accountability requirements and measures aimed at providing meaningful oversight and quick and effective remedies. It also proposes a stronger collection threshold, which would require institutions to consider a number of factors to determine whether the collection of personal information is “reasonably required” to achieve a specific purpose, such as ensuring that the expected benefits are balanced against the privacy intrusiveness, so that collection is fair, non-arbitrary and proportionate in scope.
In the private sector, Bill C-11 would introduce the consumer privacy protection act. In my view, as I stated in the last hearing, that bill requires significant amendments to reduce the risks of facial recognition technology. The significant risks posed by facial recognition technology make it abundantly clear that the rights and values of citizens must be protected by a strong, rights-based legislative framework. The Department of Justice proposes adding a purpose clause to the Privacy Act that specifies that one of the key objectives of the legislation is “protecting individuals' human dignity, personal autonomy, and self-determination”, recognizing the broad scope of the right to privacy as a human right.
Conversely, Bill C-11 maintains that privacy and commercial interests are competing interests that must be balanced. In fact, compared with the current law in the private sector, PIPEDA, the bill gives more weight to commercial interests by adding new commercial factors to be considered in the balance without adding any reference to the lessons of the past 20 years on technology's disruption of rights.
Clearview was able to rely on the language of the current federal act, PIPEDA, to argue that its purposes were appropriate and that the balance should favour the company's interests rather than privacy. Although we rejected these arguments in our decision, some legal commentators have suggested that our findings circumvent PIPEDA's purpose clause by not giving sufficient weight to commercial interests. In other words, even though we found that Clearview breached PIPEDA, a number of commentators, including but not limited to the company itself, are saying that we actually misapplied the current purpose clause.
If Bill C-11 were passed in its current form, Clearview and these commentators could still make these arguments, buttressed by a purpose clause that gives more weight to commercial factors. I urge you to make clear in Bill C-11 that where there is a conflict between commercial objectives and privacy protection, Canadians' privacy rights should prevail. Our submission analyzing this bill makes specific recommendations on the text that would achieve this goal.
Demonstrable accountability measures are another fundamental mechanism to protect Canadians from the risks posed by facial recognition. Obligations to protect privacy by design, conduct privacy impact assessments, and ensure traceability with respect to automated decision-making are key elements of a meaningful accountability framework. While most of these accountability measures are part of the Department of Justice's proposals for modernizing public sector law, they are all absent from Bill C-11.
Efforts to regulate facial recognition technologies must also include robust compliance mechanisms that provide quick and effective remedies for individuals.
Our investigation into Clearview AI revealed that the organization had contravened two obligations under Canadian privacy law: it collected, used and disclosed biometric information without consent, and it did so for an inappropriate purpose.
Remarkably—and shockingly—the new administrative penalty regime created by Bill C-11 would not apply to these and other important violations of the legislation. Such a penalty regime renders meaningless laws that are supposed to protect citizens.
I therefore urge you to amend the bill to remedy this fundamental flaw.
In conclusion, I would say that the nature of the risks posed by facial recognition technology calls for collective reflection on the limits of acceptable use of this technology. These limits should not be defined only by the risks associated with specific facial recognition initiatives, but by taking into account the aggregate social effects of all such initiatives over time.
In the face of ever-increasing technological capabilities to intrude on our private lives, we need to ask ourselves what are the expectations we should be setting now for the future of privacy protection.
I thank you again for your attention.
I welcome any questions you may have.