Those would be the top priority, starting with the notion of a privacy impact assessment for generative AI. To me, that is a major shortcoming.
If you look at AIDA and if you look at the minister's proposed amendments to AIDA, you see a lot of discussion about risk mitigation, identifying risk and managing risk. This is absolutely essential and critical. However, we need to do this for privacy as well as for non-privacy harms. I'm very much insisting on this.
The other important recommendation, which I would say is the top priority, is making sure that fines are available for violation of the “appropriate purposes” provision. That provision is section 12. It is the key central provision, the heart of the bill in a way, but there are no fines for violating it. That, in my view, should be corrected. It's easily corrected by adding it to the list of breaches.
Other comparable legislation, like Quebec's, for instance, simply says “a violation of the law”. The whole law is covered. This bill instead lists specific offences, and in Bill C-11 there were even more omissions. That has been corrected to some extent, but it needs to be corrected further.
I talked about algorithmic transparency. It is an important element, especially at this time in AI. Again, we can manage that by providing guidance to industry, so it's something that's workable, but I think Canadians need to understand what is going on with their data and how decisions are made about them. If we limit it to matters that have significant impact, we're creating debates and limiting the transparency that Canadians deserve.
That is—