Thank you, Mr. Chair and members of the committee, for inviting me to testify today.
I'm an expert on the catastrophic global threats posed by AI and will be speaking to you primarily from that perspective.
I am the CEO of Conjecture, an AI safety research firm, and an adviser at ControlAI, a non-profit focused on mitigating the security risks posed by advanced AI.
In 1985, humanity awakened to a hole in the sky. Scientists discovered that chlorofluorocarbons, or CFCs, were depleting the ozone layer, which shields humanity from damaging ultraviolet radiation. At the same time, humanity also lived atop a deep fracture: a cold war between the U.S. and the U.S.S.R. that threatened nuclear annihilation.
Amid deep geopolitical tensions, the two superpowers ultimately shook hands, signing in 1987 both a landmark nuclear de-escalation treaty and the Montreal Protocol, which prohibited and phased out CFCs. The protocol ultimately received universal ratification. Despite the world's divisions, these rival powers came together to mend a hole in the sky and to recognize that never-ending nuclear escalation was in no one's interest, and the rest of the world followed.
In 2023, humanity heard a new warning call from Nobel Prize-winning AI scientists and the CEOs of major AI companies, saying, “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” This risk of extinction is posed by superintelligence, the exact subset of AI that the leading AI companies are racing to develop.
Superintelligence is defined as AI that is more competent than all humans at all relevant cognitive tasks across all relevant domains and capable of acting beyond human oversight and control. If there were to exist systems that autonomously out-compete any human in all relevant tasks of science, business, persuasion, politics and warfare, and if we did not control them, it is hard to imagine a future that goes well for humanity.
A major part of the risk is that AI developers fundamentally do not understand how the AI systems they are creating actually work and therefore cannot develop them in a safe manner. Dario Amodei, the CEO of the second-largest AI company, recently stated that we perhaps “understand 3% of how they work”, which is, in my personal opinion, somewhat of an overestimate.
AI systems are not developed like traditional software, with code written line by line. Instead, researchers essentially grow AI models by feeding them vast amounts of data and training them with enormous computing power, producing what is called a neural network rather than a set of lines of computer code.
Unfortunately, the current AI development paradigm does not allow for the safety-by-design approaches we use for other advanced, highly risky technologies. We would not, for example, build nuclear power plants if we did not know how to control nuclear reactions. Technical control methods are lagging drastically behind advances in AI systems' capabilities. Currently, there are no legally binding AI safety regulations to protect consumers and humanity as a whole.
Where does this leave us today? Right now, multiple AI companies are pouring hundreds of billions of dollars into developing superintelligent AI as quickly as possible despite experts warning of the risks. This haste is, in my opinion, directly tied to an attempt to outrun legislation and complete their projects before the wider public and the government wake up to the completely unconscionable risks the unconsenting public is being exposed to by private, unaccountable and reckless actors.
Recently, AI companies have been racing to automate AI research itself, allowing AIs to build even better AIs by themselves in order to reach superintelligence more quickly. This process is called recursive self-improvement, and it means that the moment an AI good enough to build better AIs exists, it may already be too late to intervene.
Leading scientists now estimate that superintelligence could be developed by 2030, potentially even sooner. In the face of this pressing threat, I'd like to offer the committee three recommendations for how Canada can respond now.
One, the Canadian government should publicly recognize superintelligence as a national and global security threat that poses an extinction risk to humanity.
Two, Canada should begin negotiating an international agreement to prohibit the development of superintelligence, given that there is no scientific consensus on how it could be developed in a way that does not threaten humanity with extinction. The agreement should also restrict and monitor superintelligence precursors such as recursive self-improvement.
Three, Canada should prevent the development of artificial superintelligence on its soil, as superintelligence would be capable of overpowering individuals, companies and even Canada's national security apparatus.
Thank you. I would be happy to take any questions you may have.