It's consistent with the point I was making earlier. If you look at jurisdictions such as the United States and the EU, the EU has a robust artificial intelligence act that, we're told, is to be passed any day now.
Look at the Canadian context. The application of the governance framework is triggered when there is a high-risk scenario, meaning that not every AI system will be subject to the same kind of oversight and rigour as a system that falls within that high-risk category. That allows for a little more flexibility in the way we manage high-risk use cases. It does put more emphasis on a thoughtful approach to the intended purposes these systems are put to.