That's a fair point; the systems aren't necessarily developed here.
First of all, through international coordination, we have a mechanism to inform what those values and standards ought to look like. We have precedent for this in a number of different respects, some of which you heard about from the earlier witnesses. We have some role to play in being clear about what embedding those good values looks like and what it means to develop intelligent computers that are aligned with the role we believe they should play in society.
I will give you the example of the G7 back in 2018, when our government and all the governments of the G7 were talking very much about values in the context of AI and how we were going to advance standards, policies and practices that would protect those values. From there, you saw a movement into the Global Partnership on AI, and you saw coordination through the United Nations on AI with a view to creating more harmonization in approach. Cut to the most recent G7, where the emphasis was primarily on adoption, because we're now at a stage where we can see the real-world uses of AI and are, in a lot of ways, much more inclined toward and excited about the adoption of these technologies, which is a good thing.
In this current context, as we are starting to become much more familiar with the technology and understand its opportunities and uses, we have to be mindful of the risks, but we cannot lose sight of the opportunities. We have a role to play in ensuring safe use where there is actual risk. What I don't want to do is establish a system that applies the same kinds of controls to all uses. We have to be targeted.