In the overall approach to legislating and regulating technology, and AI in particular, it is very important to establish a framework and a process through which you can iterate over time with appropriate guardrails in place. One example is allowing the high-impact system schedule to continue to reflect the growing deployment of this technology and new high-risk scenarios by adding to it over time, with guardrails ensuring that any addition reflects the same risk analysis and meets the same threshold as the systems already on the schedule.
It is also important to ensure that the processes for implementing requirements, for example, evolve over time with changing approaches to AI safety systems. Ensuring that the risk-based approach is truly foundational to the regulation will be important, as will ensuring that the regulation does not apply too broadly. Applying onerous requirements to low-impact systems would restrict how Canadian businesses can continue to use AI for many innovative purposes.