Thank you and good afternoon, Mr. Chair and members of the committee.
I'm here on behalf of Gladstone AI, an AI safety company that I co-founded. We collaborate with researchers at the world's top AI labs, including OpenAI, and with partners in the U.S. national security community, to develop solutions to pressing problems in advanced AI safety.
Today's AI systems can write software programs nearly autonomously, so they can write malware. They can generate voice clones of regular people using just a few seconds of recorded audio, so they can automate and scale unprecedented identity theft campaigns. They can guide inexperienced users through the process of synthesizing controlled chemical compounds. They can write human-like text and generate photorealistic images that can power, and have powered, unprecedented and large-scale election interference operations.
These capabilities, by the way, have emerged essentially without warning over the last 24 months. The landscape has transformed in that time. In the process, these capabilities have invalidated key security assumptions baked into the strategies, policies and plans of governments around the world.
This is going to get worse, and fast. If current techniques keep working, the equation behind AI progress is dead simple: Money goes in, in the form of computing power, and IQ points come out. There is no known way to predict what capabilities will emerge as AI systems are scaled up with more computing power. In fact, when OpenAI researchers used an unprecedented amount of computing power to build GPT-4, their latest system, even they had no idea it would develop the ability to deceive human beings or autonomously uncover cyber exploits, yet it did.
We work with researchers at the world's top AI labs on problems in advanced AI safety. It's no exaggeration to say that the water cooler conversations in the frontier AI safety community frame near-future AI as a weapon of mass destruction. It's WMD-like and WMD-enabling technology. Public and private frontier AI labs are telling us to expect AI systems to be capable of carrying out catastrophic malware attacks and supporting bioweapon design, among many other alarming capabilities, in the next few years. Our own research suggests this is a reasonable assessment.
Beyond weaponization, evidence also suggests that, as advanced AI approaches superhuman general capabilities, it may become uncontrollable and display what are known as “power-seeking behaviours”. These include AIs preventing themselves from being shut off, establishing control over their environment and even self-improving. Today's most advanced AI systems may already be displaying early signs of this behaviour. Power-seeking is a well-established risk class. It's backed by empirical and theoretical studies by leading AI researchers published at the world's top AI conferences. Most of the safety researchers I deal with on a day-to-day basis at frontier labs consider power-seeking by advanced AI to be a significant source of global catastrophic risk.
All of which is to say that, if we anchor legislation on the risk profile of current AI systems, we will very likely fail what will turn out to be the single greatest test of technology governance we have ever faced. The challenge AIDA must take on is mitigating risk in a world where, if current trends simply continue, the average Canadian will have access to WMD-like tools, and in which the very development of AI systems may introduce catastrophic risks.
By the time AIDA comes into force, the year will be 2026. Frontier AI systems will have been scaled hundreds to thousands of times beyond what we see today. I don't know what capabilities will exist then. As I mentioned earlier, no one can know. However, when I talk to frontier AI researchers, the predictions I hear suggest that WMD-scale risk is absolutely on the table on that time horizon. AIDA needs to be designed with that level of risk in mind.
To rise to this challenge, we believe AIDA should be amended. Our top three recommendations are as follows.
First, AIDA must explicitly ban systems that introduce extreme risks. Because AI systems above a certain level of capability are likely to introduce WMD-level risks, there should exist a capability level, and therefore a level of computing power, above which model development is simply forbidden, unless and until developers can prove their models will not have certain dangerous capabilities.
Second, AIDA must address open source development of dangerously powerful AI models. In its current form, on my reading, AIDA would allow me to train an AI model that can automatically design and execute crippling malware attacks and publish it for anyone to freely download. If it's illegal to publish instructions on how to make bioweapons or nuclear bombs, it should be illegal to publish AI models that can be downloaded and used by anyone to generate those same instructions for a few hundred bucks.
Finally, AIDA should explicitly address the research and development phase of the AI life cycle. This is very important. From the moment the development process begins, powerful AI models become tempting targets for theft by nation-state and other actors. As models gain more capabilities and context awareness during the development process, loss of control and accidents become greater risks as well. Developers should bear responsibility for ensuring the safe development of their systems, as well as their safe deployment.
AIDA is an improvement over the status quo, but it requires significant amendments to meet the full challenge likely to come from near-future AI capabilities.
Our full recommendations are included in my written submission, and I look forward to taking your questions. Thank you.