Unfortunately, there is no silver bullet, so it's going to be a lot of little things.
Unfortunately, a lot of the power to reduce those risks is in the hands of the Americans, so it could be their federal government—or California, these days.
However, I think there are things that the Canadian government can do.
First of all, one of the most important things is that those companies that are building those very powerful AI systems need to do tests—which the U.K. and the U.S. AI Safety Institute, for example, are helping with—that try to evaluate the capabilities of the system. How good is the AI at doing something that could be dangerous for us? It could be generating very realistic imitations or it could be persuasion, which is one thing we haven't seen used that much yet, but I'd be surprised if the Russians are not working on it using open-source software.
We need to know, basically, how a bad actor could use the open-source systems that are commercially available or downloadable in order to do something dangerous to us. Then we need to evaluate that, so we basically force the companies to mitigate those risks or even prevent them from putting out something that could end up being very disruptive.