Thank you very much.
Good afternoon, and thank you for the invitation to share my thoughts on Bill C-27 with the committee.
I am a partner at Davies Ward Phillips & Vineberg LLP, practising as a lawyer in the firm’s technology group. I am appearing today in a personal capacity, presenting my own views.
Recent years have seen significant technological developments related to machine learning. In part, these have come about because of another relatively recent development: the vast amount of information, including personal information, that is now generated by our activities and circulates in our economy and our society. Together, these developments hold great promise for future innovation, but they also carry significant risks, such as risks to privacy, risks of bias or discrimination, and other potential harms.
I am, therefore, encouraged that a bill has been introduced that seeks to address these risks while supporting innovation. I will begin by making some remarks on the proposed consumer privacy protection act, CPPA, and by suggesting changes to certain provisions of the bill that could better support innovation involving machine learning while introducing important guardrails. I will then share some observations in relation to the proposed artificial intelligence and data act, AIDA.
In my view, the CPPA's consent exception framework could be improved to facilitate the exchange of personal information among, and its collection by, private sector actors that wish to undertake socially beneficial projects, study or research. In particular, proposed sections 35, 39 and, in part, 51 could be combined and generalized so as to permit private sector actors to disclose and exchange personal information, or to collect information from the public Internet, for those purposes, provided that certain conditions are fulfilled.
Those conditions could include conducting a privacy impact assessment, entering into an agreement containing relevant contractual assurances where applicable, and providing notice to the commissioner prior to the disclosure or collection. De-identified data is sufficient for training machine learning models in many cases, and de-identification is a requirement in proposed section 39 as currently drafted but not in proposed section 35; accordingly, whether the information should be de-identified in a given case should be a factor in the proposed privacy impact assessment.
Suitably crafted, these changes could provide material but appropriately circumscribed support for section 21 of the proposed CPPA, which permits the use of personal information that has been de-identified for internal research and analysis purposes, and for proposed subsection 18(3), which permits use of personal information in its native form for legitimate interests, provided that an assessment has been undertaken.
With respect to the AIDA, I begin with the definition of the term “artificial intelligence system”. This definition is of fundamental importance, given that the entire scope of the act depends upon it. The current definition risks being overbroad. The minister’s letter proposes to improve interoperability by introducing a definition that seeks to align with the one used by the OECD, but the text provided differs from the OECD formulation and introduces the word “inference” in a suboptimal way. We also do not yet have the final wording.
There are also different definitions to consider in other instruments, including the European Union’s proposed AI act, the recent U.S. President’s executive order, and the NIST AI risk management framework, among others. Some of these do converge on the OECD’s definition, but in each case the wording differs.
When it begins clause-by-clause review, I would urge the committee to survey existing definitions to determine the state of the art and to ensure that the definition ultimately chosen indeed maximizes interoperability while remaining extensible to account for new techniques or technologies.
I would also recommend that the purpose clause of the AIDA, as well as other relevant provisions, be amended to include harms to groups and communities, as these may also be adversely affected by the decisions, recommendations or predictions of AI systems.
Finally, there should be an independent artificial intelligence and data commissioner. The companion document to the AIDA notes that the model whereby the regulator would be a departmental official was chosen in consideration of a number of factors, including the objectives of the regulatory scheme. However, since the scope of what is being left to regulation is so extensive, creating an independent regulator to administer and enforce the AIDA would counterbalance skepticism concerning the relative lack of parliamentary oversight and thereby help to instill trust in the overall regulatory scheme.
I will submit a brief for consideration by the committee, elaborating on the matters raised here. Machine learning technologies are poised to play a significant role in future innovation. Through legislation, we can achieve meaningful support for this potential while providing effective protections for individuals, groups and society.
Thank you for your attention. I welcome your questions.