Perhaps I could answer that by reflecting on and accentuating a couple of things that Professor Bengio and Professor Geist have said.
What we're talking about here in terms of the governance of AI is the principle of accountability. There has been a lot of analysis of accountability in the privacy context. It means that organizations do the risk assessment, analyze the problems they're likely to face and stand ready to demonstrate compliance. They don't necessarily have to demonstrate it proactively, but they must stand ready to do so.
Last year, there was a proposal to develop codes of practice around AIDA that really didn't go anywhere. The process was flawed, in my judgment, but codes of practice play a really important role here too. They may operate at the company level or through industry associations. My central point here is that just because we're dealing with amazingly new and potentially powerful technologies does not mean that the governance issues are any different from what they were when we were talking about these problems 30 years ago, as I was. I gave testimony before the committee that looked at PIPEDA back in 2000.
We should be learning about what works and what doesn't from those experiences. We're not talking about imposing prescriptive regulation, and we're not talking about self-regulation; we're talking about co-regulation, meaning that companies are given incentives to do the right thing and are punished if they do not.
You've given me a very broad question, so I'm going to take the liberty of making a couple of other comments.
On the question of consultation, I don't regard this issue, AI, as just another policy question. It's not just another area of law and policy that requires the standard stakeholder consultation. It is so general. It is so pervasive. It is going to affect all aspects of our lives, so the consultation process also needs to be fundamentally different. For example, I would like the government to think critically about citizens' assemblies in this area.
Professor Bengio is giving a thumbs-up. Good.
Citizens' assemblies can do more than gather feedback from ordinary citizens about what they think about these issues. Citizens are being exposed to this constantly and will continue to be, but they're also going to be hurt by it. Your constituents are being denied, and will continue to be denied, rights and services because of decisions made by automated systems without proper human oversight. Citizens' assemblies, I think, can play a very critical role here.
On this question of digital sovereignty, I have no insight into whether a new CPPA is coming along soon from ISED, but we hope that it is. We also hope that it will be strengthened with respect to the international flows of personal data. I think this accentuates what Professor Geist said.
The previous version just said that a company had to do due diligence when it sent information elsewhere. It had to ensure that Canada's rules continued to apply. It didn't matter whether the recipient was based in Ontario, in Europe, in another developed country or in a country with an authoritarian regime, so there's something deeply wrong, in my view, with the way the government has been thinking about the protection of international flows of personal data.
We have some views about that. I hope that when this committee, assuming it's this committee, comes to look at a new version of the CPPA, it will consider these questions about digital sovereignty. It should think very critically not only about the stronger rules needed to ensure that Canadians' personal data remains in Canada and about the role of data localization, but also about making the rules genuinely strong when that data is transferred overseas.
I hope that addressed your question, Mr. Bains.
