I think so, yes. What it will do, for example, is force legitimate Canadian companies to protect the AI systems they've developed from falling into the hands of criminals. Obviously, this won't prevent these criminals from using systems designed elsewhere, which is why we have to work on international treaties.
We already have to work with our neighbour to the south to minimize those risks. The measures the Americans are already asking companies to take include this kind of protection. I think that if we want to align ourselves with the United States on this issue to prevent very powerful systems from falling into the wrong hands, we should at least provide the same protection as they do and work internationally to expand it.
In addition, sending the signal that users must be able to distinguish between content generated by artificial intelligence and content that is not will encourage companies to find technical solutions. For example, one of the things I believe in is that it should be the companies that make cameras and recording devices that embed a cryptographic signature in the content they capture, so that what is generated by artificial intelligence can be distinguished from what is not.
For companies to move in that direction, they need legislation telling them to do so to the greatest extent possible.
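[One way such a signature scheme might work, offered purely as an illustration and not as the specific mechanism the witness describes: a capture device could sign each recording with a private key held on the device, and anyone could later verify provenance with the corresponding public key. The sketch below assumes a hypothetical Ed25519 device key pair and uses the Python "cryptography" library.]

# Illustrative sketch: a capture device signs content at the source so that
# unsigned (possibly AI-generated) material can be told apart later.
# The device key pair here is hypothetical; in practice the private key
# would live in the device's secure hardware.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

device_key = Ed25519PrivateKey.generate()
device_public_key = device_key.public_key()

def sign_capture(content: bytes) -> bytes:
    """Produce a signature the device attaches to content it records."""
    return device_key.sign(content)

def is_authentic_capture(content: bytes, signature: bytes) -> bool:
    """Verify that content carries a valid signature from the device."""
    try:
        device_public_key.verify(signature, content)
        return True
    except InvalidSignature:
        return False

photo = b"raw image bytes from the camera sensor"
sig = sign_capture(photo)
print(is_authentic_capture(photo, sig))            # True: genuine capture
print(is_authentic_capture(b"ai-generated", sig))  # False: signature does not match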