Thank you, Mr. Chair and members of the committee, for the opportunity to speak to Bill C-27.
I am the managing partner of INQ Law, where my practice focuses on data- and AI-related laws. I’m here in my personal capacity and the views presented are my own.
Every day, we are hearing new stories about the promise and perils of artificial intelligence. AI systems are complex computer programs that process large amounts of data, including large amounts of personal information, for training and output purposes. Those outputs can be very valuable.
There is a possibility that AI can help cure diseases, improve agricultural yields or even help us become more productive, so we can each play to our best talents. That promise is very real, but as you've already heard on this panel, it does not come without risk. Complex as these systems are, they are not perfect and they are not neutral. They are being developed at such a speed that those on the front lines of development are among the loudest voices calling for some regulation.
I appreciate that this committee has heard quite a bit of testimony over the last several weeks. While the testimony you've heard has certainly run the gamut of opinions, there seem to be at least two points of consistency.
The first is that Canada’s federal private sector privacy law should be updated to reflect the increasing demand for personal information and changes to how that information is collected and processed for commercial purposes. In short, it’s time to modernize PIPEDA.
Second, our laws governing data and AI should strive for interoperability or harmonization across key jurisdictions. Harmonization helps Canadians understand and know how to assert their rights, and it helps Canadian organizations compete more effectively within the global economy.
The committee has also heard opposing views about Bill C-27. The remainder of my submissions will focus on five main points to do with parts 1 and 3 of the bill.
Part 1, which proposes the consumer privacy protection act, or CPPA, proposes some important changes to the governance of personal information in Canada. My submissions focus on the legitimate interest consent exception and the definition of anonymized data, much of which you've already heard on this panel.
First, the new exceptions to consent in the bill are welcome. Not only do they provide flexibility for organizations to use personal data to advance legitimate and beneficial activities, but they also align Canada’s law more closely with those of some of our key allies and, within Canada, with Quebec’s Law 25. Critically, they do so in a manner that is reasonably measured. I agree with earlier testimony heard in this committee that the application of the legitimate interest exception in the CPPA should align more closely with other notable privacy laws, namely Europe's GDPR.
Second, anonymized data can be essential for research, development and innovation purposes. I support the recommendations put to this committee by the Canadian Anonymization Network with respect to the drafting of the definition of “anonymize”. I also agree with Mr. Lamb's submissions on incorporating the existing notions of reasonable foreseeability or a serious risk of re-identification.
As for part 3 of the bill, the proposed artificial intelligence and data act, or AIDA: first, I support the flexible approach adopted in part 3. I recognize, however, that the current draft contains some major holes, and that there is a need to plug those holes as soon as possible. As well, any future regulation would need to be subject to considered consultation, as contemplated in the companion document to AIDA.
Our understanding of how to effectively promote the promise of AI and prevent harm associated with its use is evolving with the technology itself. Meaningful regulation will need to benefit from broad stakeholder consultation, including, importantly, with the AI community.
Second, Minister Champagne, in the letter he submitted to this committee, proposes to amend AIDA to define “high impact” by reference to classes of systems. The definition of “high impact” is the most striking omission in the current draft bill.
The use of a classification approach aligns with the EU's draft artificial intelligence act and supports a risk-based approach to AI governance, which I endorse. When the definition is ultimately incorporated into the draft, it should parallel the language in the companion document and provide criteria for what “high impact” means, with reference to the illustrated classifications.
Finally, I support the proposed amendments to align AIDA more closely with OECD guidance on responsible AI, namely the definition in proposed section 2 of AIDA, which the United States' National Institute of Standards and Technology has also adopted in its AI risk management framework.
To the extent that Canada can harmonize with other key jurisdictions where it makes sense for us to do so, we should.
I look forward to the committee's questions, as well as to the comments from my fellow witnesses.