I don't think there is any cause-and-effect relationship between Europe's legislation on artificial intelligence and the fact that Europe is somewhat behind in AI. That is a myth. I am quite familiar with the legislation and the code of practice in Europe. In fact, with one exception, the American companies all agreed to what the legislation and the code required.
The real obstacles to innovation in Europe and in Canada are the lack of self-confidence and the aversion to risk on the part of Canadian and European investors.
Regulation isn't the issue. The European code of practice, for example, simply asks companies to do what they were already doing: make their reports public rather than optional, and allow the regulator to put a stop to certain activities if something goes wrong.
To summarize, my recommendation is very simple. We need transparency in the risk-management process that companies follow for building and deploying their AI systems—that's number one—and that process needs to demonstrate that the systems they build and deploy will not create harms that scientists can anticipate. That is all. By the way, this is the template for the regulation that passed recently in California, the one in New York and, of course, the EU AI Act. The Chinese also have similar laws.
It's not true that nothing is going on. As I said, from the point of view of managing and maximizing our impact, it will be better for Canada to do this in coordination with our partners, such as the U.K., the EU and other middle powers.