Working in Canadian AI as I do, I speak to experts who are assessing these various claims. I think there's a consensus that this sort of world-ending risk is maybe 20 or 30 years out, something like that, and that we have time to regulate these things now. I would say the focus of the remarks I made is that we have a choice: do we want foreign companies to be deciding this, or do we want Canadian companies to be taking part?
One of the concerns is that some of the regimes that have been proposed right now would sort of lock you into the current state, in which obviously Canada is not a big player. We can go and write laws if we like. Are they going to be followed? Are we going to be able to enforce them? This is the thing. The power that we can give ourselves is the opportunity for Canadian....
For example, one important aspect is that we talk a lot about ChatGPT, but there are now hundreds of large language models that are open source. These are built by people and companies that don't necessarily have a regulatory department to deal with the regulations being proposed in some corners.