In Canada and the provinces, the use of facial recognition generally, and its use by law enforcement agencies in particular, is not specifically circumscribed by law. Of course, without a legal framework, it becomes a matter of trial and error. As the Clearview AI case demonstrated, we know from a reliable source that facial recognition has been used by several law enforcement agencies in Canada, including the Royal Canadian Mounted Police.
When there is no legal framework, things become problematic: practices develop without any restrictions. That's why people might, on the one hand, fear a legal framework, because its existence means the technology has been accepted and recognized. On the other hand, it would be naïve to imagine that the technology will not be used or can simply be stopped, especially since it may offer real advantages in police investigations.
It's always a matter of striking the right balance: capturing the benefits of AI while avoiding the risks. More specifically, a law on the use of facial recognition should ideally incorporate the principles of necessity and proportionality. For example, limits could be placed on when and where the technology can be used, restricting it to specific purposes or certain types of major investigations. Use of the technology would also have to be authorized by a judicial or administrative authority. Legal frameworks are possible; there are examples elsewhere and in other fields. It is certainly among the things that need to be dealt with.
I would add that Bill C‑27 is not directly related to this subject, because what it deals with is the regulation of international and interprovincial trade. It has nothing to do with the use of AI in the public sector. We can, in due course, regulate the companies that sell these facial recognition AI products and systems to the police, but not their use by the police. It's also important to ask about the scope of the regulation to be adopted for AI, which will no doubt have to extend beyond Bill C‑27.