Good afternoon to the members of the industry and technology committee, to their staff, and to everyone else in the room.
I am here today to talk about part 3 of Bill C-27, An Act to enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act and to make consequential and related amendments to other Acts. Part 3 is the Artificial Intelligence and Data Act, or AIDA.
Firstly, there are some issues, some challenges, with this bill, especially with respect to its societal and public effects.
Number one, when this bill was crafted, there was very little public oversight. There were no public consultations, and there are no publicly accessible records of the meetings held by the government's AI advisory council or of the points raised in them.
Public consultations are important, as they allow a variety of stakeholders to exchange views and develop innovative policy that reflects the needs and concerns of affected communities. As I raised in The Globe and Mail, the lack of meaningful public consultation, especially with Black, Indigenous, people of colour, trans and non-binary, economically disadvantaged, disabled and other equity-deserving populations, is echoed in AIDA's failure to acknowledge the systemic biases characteristic of AI, including racism, sexism and heteronormativity.
The second problem with AIDA is the lack of proper independent public oversight.
The proposed artificial intelligence and data commissioner is set to be a senior public servant designated by the Minister of Innovation, Science and Industry and, therefore, is not independent of the minister and cannot make independent public-facing decisions. Moreover, at the discretion of the minister, the commissioner may be delegated the “power, duty” and “function” to administer and enforce AIDA. In other words, the commissioner is not afforded the powers to enforce AIDA in an independent manner, as their powers depend on the minister's discretion.
Number three is the human rights aspect of AIDA.
First of all, how it defines “harm” is so specific, siloed and individualized that the legislation is effectively toothless. According to this bill:
harm means
(a) physical or psychological harm to an individual;
(b) damage to an individual's property; or
(c) economic loss to an individual.
That's quite inadequate when talking about systemic harm that goes beyond the individual and affects entire communities. I wrote the following in The Globe and Mail:
“While on the surface, the bill seems to include provisions for mitigating harm,” [as said by] Dr. Sava Saheli Singh, a research fellow in surveillance, society and technology at the University of Ottawa's Centre for Law, Technology and Society, “[that] language focuses [only] on individual harm. We must recognize the potential harms to broader populations, especially marginalized populations who have been shown to be negatively affected disproportionately by these kinds of...systems.”
Racial bias is also a problem for artificial intelligence systems, and one of the greatest risks, especially in systems used in the criminal justice system.
A 2019 federal study in the United States showed that Asian and African American people were up to 100 times more likely to be misidentified than white men, depending on the particular algorithm and type of search. Native Americans had the highest false positive rate of all ethnicities, according to the study, which found that systems varied widely in their accuracy.
A study from the U.K. showed that the facial recognition technology it tested performed worst on Black faces, especially Black women's faces. These surveillance activities raise major human rights concerns when there is evidence that Black people are already disproportionately criminalized and targeted by the police. Facial recognition technology also disproportionately affects Black and Indigenous protesters in many ways.
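To make concrete the kind of disparity these studies measure, here is a minimal sketch in Python, using entirely hypothetical data, of how a per-group false positive rate for a face-matching system can be computed. The group labels, scores and threshold are my own invented assumptions, not figures from either study.

from collections import defaultdict

# Minimal sketch (hypothetical data): per-group false positive rates for a
# face-matching system. Groups, scores and the threshold are invented for
# illustration; they are not figures from any study.
# Each record: (group, match_score, is_true_match).
results = [
    ("group_a", 0.91, False), ("group_a", 0.42, False), ("group_a", 0.63, False),
    ("group_b", 0.87, False), ("group_b", 0.83, False), ("group_b", 0.52, False),
]

THRESHOLD = 0.80  # the system declares a "match" at or above this score

false_positives = defaultdict(int)
true_negatives = defaultdict(int)

for group, score, is_true_match in results:
    if not is_true_match:  # only true non-matches can yield false positives
        if score >= THRESHOLD:
            false_positives[group] += 1
        else:
            true_negatives[group] += 1

for group in sorted(false_positives.keys() | true_negatives.keys()):
    fp, tn = false_positives[group], true_negatives[group]
    print(f"{group}: false positive rate = {fp / (fp + tn):.2f}")

A gap between the printed rates for the two groups is, at a much larger scale, exactly the disparity the studies report.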
From a privacy perspective, algorithmic systems raise concerns at the construction stage, because building them requires the collection and processing of vast amounts of personal information, which can be highly invasive. The reidentification of anonymized information, which can occur through the triangulation of data points collected or processed by algorithmic systems, is another prominent privacy risk.
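As a rough illustration of the triangulation risk I just described, the following Python sketch, with invented records, shows how an "anonymized" dataset can be reidentified by joining it to a public dataset on a few quasi-identifiers. Every name and value here is hypothetical.

# Minimal sketch (invented data): reidentifying "anonymized" records by
# linking them to a public dataset on shared quasi-identifiers.

# Anonymized dataset: direct identifiers removed, quasi-identifiers kept.
anonymized = [
    {"postal": "K1A 0B1", "birth_year": 1985, "gender": "F", "diagnosis": "asthma"},
    {"postal": "M5V 2T6", "birth_year": 1990, "gender": "M", "diagnosis": "diabetes"},
]

# Public dataset (for example, a voter roll) with names and the same fields.
public = [
    {"name": "Jane Doe", "postal": "K1A 0B1", "birth_year": 1985, "gender": "F"},
    {"name": "John Roe", "postal": "M5V 2T6", "birth_year": 1990, "gender": "M"},
]

QUASI_IDENTIFIERS = ("postal", "birth_year", "gender")

def link_key(record):
    # Tuple of quasi-identifier values used to join the two datasets.
    return tuple(record[field] for field in QUASI_IDENTIFIERS)

names_by_key = {link_key(person): person["name"] for person in public}

for row in anonymized:
    name = names_by_key.get(link_key(row))
    if name is not None:  # a unique join reidentifies the "anonymous" record
        print(f"{name} -> {row['diagnosis']}")

No single field identifies anyone; it is the combination of ordinary data points that does, which is why removing names alone is weak protection.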
There are also deleterious impacts, or risks of such impacts, stemming from the use of these technologies on people's financial situations and their physical and psychological well-being. The primary issue is that a significant amount and variety of personal information can be gathered and used to surveil and socially sort, or profile, individuals and communities, as well as to forecast and influence their behaviour. Predictive policing does this.
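To show how this kind of sorting can reinforce itself, here is a small Python simulation of the feedback loop that researchers have documented in predictive policing. The crime rate, starting records and patrol rule are assumptions I invented for illustration.

import random

# Minimal sketch (assumed parameters): the feedback loop that can arise when
# patrols follow recorded incidents and incidents are only recorded where
# patrols go. Two neighbourhoods with identical true crime rates; only the
# historical record differs.
random.seed(0)

TRUE_CRIME_RATE = 0.3            # identical in both neighbourhoods
recorded = {"A": 10, "B": 5}     # historical records are skewed toward A

for day in range(200):
    total = recorded["A"] + recorded["B"]
    # Patrol the neighbourhood with the larger share of recorded incidents.
    patrolled = "A" if random.random() < recorded["A"] / total else "B"
    # Crime is only recorded where police are present to observe it.
    if random.random() < TRUE_CRIME_RATE:
        recorded[patrolled] += 1

print(recorded)  # records in A keep outpacing B despite equal true rates

Even though both neighbourhoods have the same true rate, the skewed historical record keeps directing patrols, and therefore new records, to neighbourhood A.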
In conclusion, algorithmic systems can also be used in the public sector to assess a person's eligibility for social services, such as welfare or humanitarian aid, which can result in discriminatory impacts on the basis of socio-economic status, geographic location and other data points analyzed.