I'll let my colleagues answer some of those questions. However, I would like to clarify something I proposed in my remarks and in my written brief. It has to do with setting a criterion based on the size of systems in terms of computing power, with the threshold above which a system would have to be registered currently being 10^26 operations per second. That is the same threshold as in the United States, and it would bring us up to the same level of oversight as the Americans.
This criterion isn't currently set out in Bill C‑27. I would suggest that we adopt it as a starting point, but then allow the regulator to adjust the criteria, based on the science and on observed misuse, for what counts as a potentially dangerous, high‑impact system. We can start right away with the same threshold as the United States.
In Europe, they've adopted more or less the same approach, which is also based on computing power. Right now, it's a simple, agreed‑upon criterion that we can use to distinguish between potentially risky systems in the high‑impact category and the 99.9% of AI systems that pose no national security risk.