Committee members, thank you for giving me the honour of being here.
AI Governance and Safety Canada is a cross‑partisan not‑for‑profit organization and a community of people across Canada. We started with the following question: What can we do in Canada, and from Canada, to ensure positive artificial intelligence outcomes?
In November, we submitted a brief with detailed recommendations concerning the Artificial Intelligence and Data Act. We're currently preparing a second brief in response to the amendments proposed by the minister.
The witnesses at previous meetings already discussed the risks posed by the current systems. I'll focus today on the upcoming economic and safety challenges posed by artificial intelligence; on the time constraints involved in preparing for these challenges; and on what all this means for the Artificial Intelligence and Data Act.
Let me start by stating the obvious. With human intelligence staying roughly the same and AI getting better by the day, it is only a matter of time before AI outperforms us in all domains. This includes domains such as reasoning, caring for people and navigating real-world complexity, where we currently hold a clear advantage. Building this level of AI is the explicit goal of frontier labs like OpenAI, Google DeepMind and, more recently, Meta.
The first implication of smarter-than-human AI is for public safety, due to the weaponization and control problems.
The weaponization problem is straightforward. If a human being can design or use weapons of mass destruction, then a smarter-than-human AI system can too. This means that, in the hands of the wrong people, smarter-than-human AI systems could be used for unprecedented harm.
The control problem comes from the fact that a system that is smarter than us is, by definition, one that can out-compete us. This means that if an advanced AI system, through accident or poor design, starts to interpret human beings as a threat and takes actions against us, we will not be able to stop it.
Moreover, there is a growing body of evidence backed by research at the world's top AI labs suggesting that, without proper safety precautions, AI systems above a certain threshold of intelligence may behave adversarially by default. This is why hundreds of leading AI experts signed a statement last year saying, “Mitigating the risk of extinction from AI should be a global priority”.
The second major implication is for labour. As AI approaches the point where it can do everything we can, only better—including designing robots that can outperform us physically—our labour will become less and less useful. The economic pressures are such that a company that doesn't eventually replace its CEO, board and employees with smarter-than-human AI systems and robotics will likely be a company that loses out to others that do. If we don't manage these developments wisely, increasing numbers of people will get left behind.
I want to be clear, however, that AI is also a very positive force, and we can't let fear take us over. The world we create with advanced AI could be a far more peaceful, prosperous and equitable world than the one we currently have. It's just that, as discussed so far, AI and, in particular, smarter-than-human AI represents a tsunami of change, and there's a lot we need to get right.
How much time do we have? The reality is that we're already late in the game. Even the rudimentary AI we have today is causing problems, from biasing employment decisions to enabling cybercrime and spreading misinformation.
However, the greatest risks come from AI that is reliably smarter than us, and that AI could be coming soon. Many leading experts expect human-level AI in as little as two to five years, and the engineers at the frontier labs whom we've talked to are saying there's even a 5% to 10% chance of it being built in 2024. While accurate predictions about the future are impossible, the trends are clear enough that a responsible government needs to be ready.
What can we do? In our white paper “Governing AI: A Plan for Canada”, we outline five categories of action needed from government, including establishing a central AI agency, investing in AI governance and safety research, championing global talks and launching a national conversation on AI. Legislative action is the fifth, and essential, pillar.
The main reasons Canada needs an AI and data act are, first, to limit current and future harms by banning or regulating high-risk use cases and capabilities; second, to create a culture of ethics, safety and accountability in the public and private sectors that can scale up as AI technology advances; and third, to provide government with the capacity, agility and oversight to adequately protect Canadians and respond to developments in the field as they arise.
The minister's amendments are a good step in the right direction, and I'd be happy to provide feedback on them.
To conclude, while the challenges we face with AI are daunting and the timelines to address them are very tight, constructive action to govern the risks and harness the opportunities is possible, and bills like Bill C-27 are an essential piece of the puzzle.
As the wheels of history turn around us, one thing is clear: Success on this global issue will require every country to step up to the challenge, and Canada's part is ours to play.
Thank you.
I look forward to answering your questions.