I think the main onus should be risk mitigation. This goes back to the principles of fairness, transparency and accountability that we were talking about at the very beginning of the session. It is important that the creators and developers of AI systems keep track of the risks of a wide variety of harms they create when developing and deploying those systems, and that we have legal frameworks that will hold them accountable for those risks.
I think that also relates to your prior question. It is legitimately challenging, and reasonably concerning, that we may not be able to enforce in other countries the frameworks that are passed today. However, we should not let imperfect enforcement stop us from passing the rules and principles that we believe ought to be enforced, because imperfect enforcement is better than no enforcement at all.
This concern is similar to one we had about privacy more than 20 years ago, in relation to data that crosses borders. We didn't know whether we would be able to enforce Canadian privacy law abroad. Courts and regulators have surprised us with the extent to which they are sometimes able to do so.