Thank you for that question.
I think it's been mentioned already today, but I will repeat it. Merely looking at high-impact systems, however they are defined (and right now that's unclear in the current amendments), is not enough to fully mitigate the risks of AI, particularly the collective risks to communities and groups. That kind of risk, moreover, is not covered under the current definition of “harm” in the bill, which is focused strictly on individuals and quantifiable forms of harm. In considering how to restructure that, you could look at the European act, but I would refer you to something closer to home.
The Toronto Police Service recently conducted an extensive public consultation and developed rules on the use of artificial intelligence by their service. They adopted a tiered approach. Some systems are deemed low-risk, but still require an assessment to confirm that designation. Other systems are deemed medium-risk, and different sets of precautions and safeguards apply to ensure that those risks are appropriately analyzed and mitigated before the technology is used. Systems considered high-risk carry the highest level of protections and safeguards. Finally, there are systems considered beyond the pale: systems so risky that it is not appropriate to use them in a country governed by the Charter of Rights and Freedoms and where democratic freedoms are valued.
That's a much more tiered and nuanced approach, requiring assessments at different stages. The proportionate safeguards and restrictions that follow, calibrated to the level of risk, can then be much more finely tuned and much more responsive to the genuine concerns members of the public have about the ways AI systems can be used for or against them, in violation of their beliefs and values.