Particularly when it comes to artificial intelligence, given the speed of change, we see that the only way forward is to have a framework rather than being prescriptive. Other jurisdictions may have decided to go in a different direction, but how can you ensure that you will still be relevant six months or a year from now? I think ChatGPT highlighted that for the world. In a matter of months, we found ourselves with generative AI. Things that most of us probably did not initially believe were possible are now commonly used by students and by people around the world. I saw just yesterday that voice and image capabilities have been added.
My point is that having a framework is what allows us to remain relevant. We probably don't know initially where the technology is going to go, but I would say that we need to have guardrails. The best example.... I was with the chief science adviser of Canada, and colleagues should reflect on this. When people were saying that AI will be whatever it wants to be, she said that the best analogy is what happened when people were talking about cloning. When we decided, as humanity, that we would not clone another human, it wasn't that cloning became whatever cloning could be, because, potentially, you could do that. As the international community and as people, we said that we would not allow that to happen.
We have a precedent in human history of saying, for technologies that could take us to the wrong place, that we can put a stop to them. Cloning is the best example of humanity deciding that we won't go there. AI is much the same. It's not just something that should float along and go wherever it may go. We need a place where we say, as the people, that these are the boundaries within which we can have creative and responsible innovation to help people in so many ways, but that there is a line that should not be crossed, because crossing it would be detrimental to people.