It's very clear that AIDA is focused on individual harms. We've adopted a product safety-type approach—as has the EU—that says that companies should be looking at whether or not their products can cause those harms.
That is not addressing the question of what it means that we already have autonomous systems. Trading on our financial markets is an example. As for the rapid advances that have been mentioned, what could happen in the next two to five years, the talk is about personalized AI agents out there buying, selling, creating products and operating websites. We're about to see that kind of autonomy, with autonomous agents starting to participate in our economies. Our thinking on this is still five years out of date. We need to get up to speed on that fact rapidly.
The systemic harms that I think about are what happens to the equilibrium of our financial, economic, regulatory and political domains when we have huge amounts of autonomous action taking place. We've already seen that in social media. We need to think about how we'd act there.
The types of things I'd say we need to be thinking about are.... All of our regulators should be doing what I've called a regulatory impact analysis to figure out how the introduction of these systems impacts our capacity to protect the liquidity and reliability of our financial markets, to guard against antitrust behaviour in our other markets, or to ensure that our court systems and our decision-making systems, for example, are still safe and trusted.
We have to be thinking about it at that level. I do not think that the individual harm, product safety and risk management approach that AIDA and the EU are taking will get us there. That's the systemic point.