Hi. Thanks for inviting me. My name is David Krueger. I'm a machine learning professor at the University of Montreal and Mila.
In 2012, I learned about deep learning from Geoff Hinton's online lectures, and I realized that this new approach to AI might produce superintelligent AI within a few decades. I went to study under Yoshua Bengio in Montreal.
At the time, I was already concerned that superintelligent AI could cause human extinction, and I wanted to know what the experts thought. I was hoping to find they had good reasons not to be concerned, but what I actually found was that nobody was really thinking about it. In fact, for most of my time in the field, the risk of human extinction from AI was considered a taboo topic, and researchers feared for their careers if they talked about it.
Unfortunately, this set critical public conversations about how to handle this risk back by years. Still, for over a decade, I've been talking about it every chance I get. I continue to be dismayed at the bad arguments people make to avoid confronting the problem. On the other hand, over time, I've witnessed more researchers become increasingly concerned.
In 2023, we had a watershed moment: I initiated a statement that mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war. This statement was signed by many of the biggest names in AI, including Hinton, Bengio, and hundreds of other AI researchers. Unfortunately, ChatGPT had been released in late 2022, and AI companies were becoming extremely powerful, making it more difficult to regulate the technology.
The most important thing I want to stress today is that the world is still not taking the steps needed to mitigate the risk that AI will lead to human extinction.
What is this risk and where is it coming from?
AI companies are explicitly trying to build superintelligent AI systems: systems that would be much smarter than people across the board, that could autonomously do anything humans can do, including robotics, and do it much better, cheaper, and faster. The basic goal is to render humans obsolete and take all of their jobs. But we don't know how to control superintelligent AI. In fact, we don't understand how existing systems work, because they are grown, not built, using deep learning. Despite thousands of research papers over the past decade, control remains an unsolved research challenge, and we should not expect any amount of investment to solve it in the foreseeable future.
We also don't know how to do safety testing for these kinds of AI systems. The kinds of tests we have can show that an AI system is dangerous. They cannot show that it is safe. We should also not expect any amount of investment to solve this problem in the foreseeable future.
Instead of maintaining control or using rigorous safety practices, AI companies and researchers try to instill particular goals and values in AI systems so that the systems will do what their designers want. But we only know how to do this approximately, and even a small approximation error might lead a superintelligent AI to redirect the resources we need to survive toward its own goals. None of the approaches you'll often hear mentioned, whether interpretability, testing, or alignment, is technically adequate. We don't know how to build superintelligent AI safely. The plan is basically to roll the dice.
Finally, even if the companies building superintelligent AI do not quickly lose control of it, we should still expect the wholesale replacement of humans with AI throughout society. This means not just near-total unemployment, but also political power being handed over to AI systems that make decisions too quickly for humans to meaningfully participate.
I want to say a little about timelines, because I think we're in a state of acute crisis. If we don't do anything, I think we're about five years away from superintelligent AI, and many in the field agree. We need to course-correct immediately and work to prevent the development of superintelligent AI, and we need to do this internationally, which will take time. We cannot afford to wait for more evidence that the risk is imminent. There is already ample evidence that the level of risk on our current course is unacceptable.
In my home country of the United States, the main argument against stopping the race to build superintelligent AI these days is simply that it's inevitable: if we don't do it, then China will. This is false. One simple way to stop it would be to get rid of advanced AI chips and the factories that produce them. Fortunately, the supply chain for these chips is extremely concentrated, which makes this and other interventions to control and limit the means of producing superintelligent AI possible. There may be less costly ways, but given the risk, even this cost is well worth paying if necessary. AI is an unprecedented technology, and the future of humanity is at stake.
We're in a state of crisis. We need immediate action to slow or pause AI development internationally. This issue should be the number one foreign policy priority of every nation, including Canada.
Thank you.