Good afternoon.
Thank you for inviting me. I'm pleased to have the opportunity to share my thoughts on Bill C‑27 with the committee.
I am a partner at Borden Ladner Gervais, BLG, and a member of the privacy practice group. I am also the national lead of BLG's artificial intelligence, AI, group. I am appearing today as an individual.
My remarks will focus on the AI provisions in the bill, in both the artificial intelligence and data act, or AIDA, and the consumer privacy protection act, or CPPA.
To start, I want to say how important it is to modernize the federal privacy regime, something Quebec, the European Union and some of the world's largest economies have done recently.
I commend the government's commitment to AI legislation. Despite the criticisms levelled against AIDA, the bill has the advantage of putting forward a flexible approach. Nevertheless, some key concepts should be set out in the act itself, rather than in the regulations. Furthermore, it is imperative that the government consult extensively on the regulations that will flow from AIDA.
The first point I want to make has to do with anonymized data in the CPPA. The use of anonymized personal information is an important building block for AI models, and excluding anonymized information from the act's coverage will allow Canadian businesses to keep innovating.
The definition of anonymization should, however, be more flexible and include a reasonableness standard, as other individuals and groups have recommended. That would bring the definition in line with those in other national and international laws, including recent amendments to Quebec's regime.
The CPPA should explicitly state that organizations can use an individual's personal information without their consent to anonymize the information, as is the case for de‑identified information.
Lastly, AIDA includes references to anonymized data, but the term isn't defined in the act. The two acts should be consistent. AIDA, for instance, could refer to the definition of “anonymize” set out in the CPPA.
The second point I want to make concerns another concept in the CPPA, automated decisions. Like most modern privacy laws, the proposed act includes provisions on automated decisions. At an individual's request, organizations would be required to explain their use of any automated decision system to make predictions, recommendations or decisions about individuals that could have a significant impact on them.
An automated decision system is defined as any technology that assists or replaces the judgment of human decision-makers. The definition should be amended to capture only systems with no human intervention at all. That would save organizations the heavy burden of having to identify all of their decision support systems and introduce processes to explain how those systems work, even when the final decision is made by a human. Such a change would increase the act's interoperability with Quebec's regime and the European Union's, which is based on the general data protection regulation.
Turning to AIDA, I want to draw your attention to high-impact systems. The act should include a definition of those systems. Since most of the obligations set out in the act flow from that designation, it's not appropriate for the term to be wholly defined in the regulations. The definition should include a contextual factor, specifically, the risk of harm caused by the system. For example, it could take into account whether the system poses a risk of harm to health and safety or a risk of an adverse impact on fundamental rights. That factor could be combined with the classes of systems that would be considered high-impact systems, as set out in the act.
Including a list of classes of systems that would de facto be considered high-impact systems, as the minister proposed in his letter, could capture too many systems, including those that pose moderate risk.
My last point concerns general purpose AI systems. In his letter, the minister proposed specific obligations for generative AI and other such systems. While generative AI has become wildly popular in the past year, regulating a specific type of AI system could render the act obsolete sooner.
Not all general purpose AI systems pose the same degree of risk, so it would be more appropriate to regulate them as high-impact systems when they meet the criteria to be designated as such.
Thank you very much. I would be happy to answer any questions you have.