Thank you very much, Mr. Chair.
Good morning, everyone, and thank you for the invitation to share with the committee my thoughts on Bill C-27.
I'm appearing today in my personal capacity. Mr. Chair has already introduced me, so I'm going to skip that part and say that it is crucial that Canada have a legal framework that fosters the enormous benefits of AI and data while preventing its population from becoming collateral damage.
I'm happy to share my broad thoughts on the act, but today I want to focus on three important opportunities for improvement while maintaining the general characteristics and approach of the act as proposed. I have one recommendation for AIDA, one for the CPPA and one for both.
My first recommendation is that AIDA needs an improved definition of “harms”. AIDA is an accountability framework, and the effectiveness of any accountability framework depends on what it is that we hold entities accountable for. AIDA currently recognizes property, economic, physical and psychological harms, but for it to be helpful and comprehensive, we need to go one step further.
Consider the harms to democracy that were imposed during the Cambridge Analytica scandal and consider the meaningful but diffuse and invisible harms that are inflicted every day through intentional misinformation that polarizes voters. Consider the misrepresentation of minorities that disempowers them. These go unrecognized by the current definition of “harms”.
AIDA needs two changes to recognize intangible harms beyond individual psychological ones: It needs to recognize harms to groups, such as harms to democracy, as AI harms often affect communities rather than discrete individuals, and it also needs to recognize dignitary harms, like those stemming from misrepresentation and the growth of systemic inequalities through automated means.
I therefore urge the committee to amend subsection 5(1) of AIDA to incorporate these intangible harms to individuals and to communities. I would be happy to propose suggested language.
This fuller account of harms would bring Canada in line with international standards, such as the EU AI Act, which considers harms to the “public interest”, to “rights protected” by EU law, to a “plurality of persons” and to people in a “vulnerable position”. It would also better align with AI ethics frameworks, such as the Montreal declaration for responsible AI, the Toronto declaration and the Asilomar AI principles. You would also increase consistency within Canadian law, as the directive on automated decision-making repeatedly refers to “individuals or communities”.
My second recommendation is that the CPPA must recognize inferences as personal information. We live in a world where things as sensitive and dangerous as our sexuality, our ethnicity and our political affiliation can be inferred from things as inoffensive as our Spotify listens, our coffee orders or our text messages, and those are just some of the inferences that we know about.
Inferences can be harmful even when they are incorrect. The credit rating agency TransUnion, for example, was sued in the United States a couple of years ago for mistakenly inferring that hundreds of people were terrorists. By supercharging inferences, AI has transformed the privacy landscape.
We cannot afford to have a privacy statute that focuses on disclosed information and builds a back door into our privacy law that strips from it its power to create meaningful protection in today's inferential economy. The CPPA doesn't rule out inferences being personal information, but it doesn't incorporate them explicitly. It should. I urge the committee to amend the definition of personal information in one of the acts to say that “'personal information' means disclosed or inferred information about an identifiable individual or group”.
This change would also increase consistency within Canadian law, as the Office of the Privacy Commissioner has repeatedly stated that inferences should be personal information, and with international standards, as foreign data protection authorities have emphasized the importance of inferences for privacy law. The California attorney general has likewise stated that inferences should be personal information for the purposes of privacy law.
My third, brief recommendation, a consequence of this bill, is reforming enforcement. As AI and data continue to seep into more aspects of our social and economic lives, one regulator with limited resources and personnel will not be able to keep an eye on everything. They will need to prioritize. If we don't want other harms to fall through the cracks, both parts of the act need a combined public and private enforcement system, taking inspiration from the GDPR, so that we have an agency that issues fines without preventing the court system from compensating for tangible and intangible harms done to individuals and groups.
We have also submitted a brief elaborating on the suggestions outlined here.
I'd be happy to address any questions or elaborate on anything.
Thank you very much for your time.