I'll focus my comments on AI regulation broadly. Right now in Canada, we lack end-to-end regulation, and there are several changes that need to be made. I'll point you in the direction of a recent framework published by the EU Commission. It's still a draft, but it's very likely to go into practice in 2022, and it tackles artificial intelligence and its associated risks, everything from privacy and human rights to very technical concepts of robustness and stability.
Ultimately, every time we develop one of these systems, we should be doing an impact assessment. As I noted in my remarks, the oversight of these systems should be based on risk materiality, meaning that for very high-risk systems, there should be some level of scrutiny in the requirements around their usage and testing. Testing covers a very broad range of technical concepts, like robustness, stability, bias and fairness, and thresholds have to be established. Granted, these are context-dependent, so they would have to be set by the developers in order for us to ensure that there is accountability.
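To make the idea of a context-dependent threshold concrete, here is a minimal sketch of one such fairness test. The metric (demographic parity difference), the data, and the 0.1 threshold are all illustrative assumptions, not values any regulator prescribes.

```python
# Minimal sketch: checking a bias/fairness metric against a
# context-dependent threshold. The metric choice, the data, and the
# 0.1 threshold are illustrative assumptions only.

def demographic_parity_difference(outcomes, groups):
    """Largest gap in favourable-outcome rates across groups."""
    rates = []
    for g in set(groups):
        group_outcomes = [o for o, gr in zip(outcomes, groups) if gr == g]
        rates.append(sum(group_outcomes) / len(group_outcomes))
    return max(rates) - min(rates)

# Hypothetical model decisions (1 = favourable outcome) for two groups.
outcomes = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups   = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

THRESHOLD = 0.1  # would be set by the developers for this use case

gap = demographic_parity_difference(outcomes, groups)
if gap > THRESHOLD:
    print(f"FAIL: parity gap {gap:.2f} exceeds threshold {THRESHOLD}")
else:
    print(f"PASS: parity gap {gap:.2f} within threshold {THRESHOLD}")
```

A real assessment would cover many metrics and many groups; the point is simply that once a threshold is set and documented, the test becomes something an auditor can hold the developers accountable to.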
I will also note—and this is something I think is often forgotten—that these systems are stochastic. This means that when we put them in production, we may have a really good sense of how they'll behave today, but as our data changes in the future, we need to make sure we're continually monitoring the systems to ensure that they are working in the way we had initially intended. If they're not working in that way anymore, they need to be pulled from production and reassessed before they are put back out. This is particularly true in high-risk use cases like criminal identification.
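As an illustration of the kind of continual monitoring I mean, here is a sketch that compares live input data against the training distribution and flags drift. The single numeric feature, the two-sample Kolmogorov-Smirnov test, and the 0.01 alert level are assumptions made for the example, not a mandated process.

```python
# Minimal sketch: flagging distribution drift in production so a
# system can be pulled and reassessed. The single feature, the KS
# test, and the 0.01 alert level are illustrative assumptions.
import random
from scipy.stats import ks_2samp

random.seed(0)

# Hypothetical data: feature values seen at training time vs. live.
training_feature = [random.gauss(0.0, 1.0) for _ in range(1000)]
live_feature = [random.gauss(0.5, 1.0) for _ in range(1000)]  # drifted

ALERT_LEVEL = 0.01  # significance level for raising a drift alert

stat, p_value = ks_2samp(training_feature, live_feature)
if p_value < ALERT_LEVEL:
    print(f"ALERT: input distribution has shifted (KS={stat:.3f}); "
          "pull the model for reassessment before redeploying.")
else:
    print(f"OK: no significant shift detected (KS={stat:.3f}).")
```

In a high-risk use case like criminal identification, an alert like this would trigger exactly the reassess-before-redeploy loop described above.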