Influence is an issue, but I'd like to briefly comment on the self-regulation aspect, if I may. I think it's important. In my view, self-regulation clearly isn't adequate. There's a pretty strong consensus in the international community that opting strictly for self-regulation isn't enough. That means legislation has its place: it imposes obligations and formal accountability measures on companies.
That said, it's important to recognize that this legislation, Bill C-27, is just one tool in the tool box we need to ensure the responsible deployment of AI. It's not the only answer. The law is important, but highly responsive ethical standards are also necessary. The tool box should also include defensive AI technologies, where AI is used to counter AI. International standards as well as business standards need to be established. Coming up with a comprehensive strategy is really key. This bill won't fix everything, but it is essential. That's my answer to your first question.
Sorry, could you please remind me what your second question was?