Thank you.
I have the same question for Dr. O'Neil.
Evidence of meeting #27 for Industry and Technology in the 45th Parliament, 1st session. (The original version is on Parliament’s site, as are the minutes.)
A recording is available from Parliament.
Liberal
Vice-President, Research and Innovation, Simon Fraser University, As an Individual
We're involved in supporting AI research in a number of different ways. One of them, for example, is supplying sovereign AI infrastructure for researchers all across Canada to use to develop new AI models. One of the things we are advancing in that regard is the sovereign nature of the resource. It is not protecting against all eventualities that have just been described, but it's certainly taking very seriously the stewarding of Canadian data on Canadian soil, under the control of Canadian organizations.
In terms of the development of AI models, of course, we're working directly with industry—a number of local companies and international companies—on new AI models. We are also convening civil society discussions through our centre for dialogue on the future of AI, the dangers of AI and responsible use of AI at the same time that we're providing infrastructure to develop future models and graduating talent from our computing science school, for example, to work on those models.
Liberal
Michael Ma Liberal Markham—Unionville, ON
I'd like to follow up with you, Dr. O'Neil, on that. What's your view of the data sovereignty you talked about? What do you think the government should be doing more of?
Vice-President, Research and Innovation, Simon Fraser University, As an Individual
We have to invest in the creation of Canadian capacity. Right now, it's the easiest thing for any company or any individual in Canada to give over their data and grab all of the advice from companies, which are usually very large American companies with data centres offshore from Canada. If we invest in our own industry that we supply with sovereign compute and data capacity, we can create alternatives to the American giants that currently control the industry.
Liberal
Michael Ma Liberal Markham—Unionville, ON
My next question is for Mr. L'Allié.
You talked about critical infrastructure. Do you feel that the current legislation and programs are sufficient to protect our critical industries, hospitals, government agencies and so forth?
How do we do that while we're protecting data sovereignty?
Founder and Executive Director, AI Governance and Safety Canada
That's a great question.
Are they sufficient? No, but they're not sufficient anywhere.
For example, we support Bill C-8's initiatives in terms of improving cybersecurity. There have been a lot of efforts and coordination between provinces and federal government on this kind of stuff. I would say that needs to be turbocharged and also rethought, or at least updated, in the context of a very different type of threat now, with AI agents.
Liberal
Michael Ma Liberal Markham—Unionville, ON
As far as data sovereignty and particularly data privacy are concerned, do you feel that, in general, industry and government are doing enough to educate the public about the dangers of some of these AI tools out there—people not recognizing that their data is actually going to be shared globally?
Founder and Executive Director, AI Governance and Safety Canada
I'm a little bit limited in my knowledge of data sovereignty per se, but I will say that with the latest jump in capabilities, there are even more concerns around data. If an AI agent, for example, is working on behalf of a person, it'll often unintentionally share data with a third party. That's another vector of weakness.
Liberal
Michael Ma Liberal Markham—Unionville, ON
Do you see that any legislation is required to help protect that?
Founder and Executive Director, AI Governance and Safety Canada
We broadly support what was in the previous bill, Bill C-27. Certainly, at least being as good as Europe on this stuff seems like a baseline, but there's a lot more that can be done.
Liberal
The Chair Liberal Ben Carr
That's all we have for time. Thank you.
Mr. Ste‑Marie, you may go ahead. You have six minutes.
Bloc
Gabriel Ste-Marie Bloc Joliette—Manawan, QC
Thank you, Mr. Chair.
I want to welcome our three witnesses and thank them for their very informative presentations. I also want to say thank you for being here to answer our questions.
Mr. Tessari L'Allié, you just referred to former Bill C‑27. You appeared before the committee in January 2024, and you made four recommendations: establish a central AI agency; invest in AI safety for humans and AI governance; encourage international co-operation; and launch and maintain a national conversation on AI.
Has there been any progress in those areas?
Founder and Executive Director, AI Governance and Safety Canada
I would say we've seen some baby steps in the right direction. For example, the Canadian AI Safety Institute was created, and talks between the Minister of Artificial Intelligence and Digital Innovation and industry have taken place.
This is just the beginning; it's not enough. Most of the work lies ahead.
Bloc
Gabriel Ste-Marie Bloc Joliette—Manawan, QC
I see.
Whenever a committee like ours examines AI, it does a study, it releases a report and things carry on. First, the government appointed an AI minister, which I think is a good thing. It says it's going to introduce a strategy next. It carried out consultations, but I don't think they were adequate.
That brings me back to your fourth recommendation, launch and maintain a national conversation on AI. How do we broaden and improve that conversation?
I'm going to throw out an idea. The House of Commons has numerous committees that focus on numerous topics. Should the House recognize the importance of AI and the need for a special committee on AI? I'm talking about a committee that would follow all of the dramatic developments in AI as they're happening.
Founder and Executive Director, AI Governance and Safety Canada
Absolutely. Increasing Parliament's capacity to monitor and respond to AI is a good thing.
As far as a national conversation on AI is concerned, I think the government would do very well to consult the public broadly on the whole jobs issue, which we have a few more years to do something about. The debate around euthanasia is an example that comes to mind. It would help Canadians understand what's going on, while giving them the opportunity to respond and have a say in what the government should do.
When it comes to safety, the situation is too complex and too fast-moving. It's an area where the government needs to act first to protect the public and explain later, unfortunately. As far as I can tell, there just isn't time to carry out that type of consultation.
Bloc
Gabriel Ste-Marie Bloc Joliette—Manawan, QC
Very good. Thank you. I will have more questions for you later.
Mr. Duvenaud, thank you again for being with us.
One of the things you've seen in your work is that, in the absence of any regulation, programmers themselves are the ones curbing features with the potential to do the most harm. You have recommendations as well: track AI and its influence closely; put regulations and oversight mechanisms in place; promote the importance of thinking critically about AI's uses within civic organizations; and ensure that citizens steer how human civilization evolves.
That seems like a tall order. AI is a technical application, but you're raising fundamental philosophical issues.
What legislation should the government bring in? I'll also ask you the same question I asked Mr. Tessari L'Allié: Is AI a big enough concern to warrant the creation of a parliamentary committee that constantly monitors AI developments? Is that a good idea?
Associate Professor of Computer Science, As an Individual
My apologies. I'm going to answer in English. My French is so-so.
I think such a committee would be table stakes and, to be honest, I'm not totally sure it would change things much one way or another.
For the larger question of how governments should react, there are two schools of thought. One is simply don't build AGI, but that requires global coordination, and it's a very tall order. More generally, upgrading our institutions across the board to be more robustly aligned with humans is a huge task. No one knows how to do it, though it could benefit from AI helping us build better forecasts or better coordination mechanisms. It's something no one has had to do before, because states have always needed us. The only plausible way forward I can see, if AGI does make us irrelevant, is that we could be okay in principle if, along the way, we manage to rethink the incentives our governments face and build much more robust control mechanisms that make sure citizens can't be marginalized permanently.
Bloc
Gabriel Ste-Marie Bloc Joliette—Manawan, QC
Following up on that, I have a question.
On one hand, there's the European model of passing legislation and putting clear controls in place. On the other hand, there's the U.S. model of allowing more latitude. The big players have said they'll go to the States if they're too restricted in Europe.
Without robust international coordination, do laws fully serve their purpose?
Associate Professor of Computer Science, As an Individual
You hit the nail on the head. All the important AGI labs are basically in the States right now, so it's up to them. It's nice, because they're capable of unilateral action. It's bad, because they don't seem very interested in it right now, but I do think that this issue is going to become so salient to most people as they start fearing for their jobs that there will be an appetite for potentially really strong legislation. I think Canada's role is basically to try to build a coalition of middle powers. Such a coalition could, in principle, altogether be a large enough counterpoint that the U.S. and China would have to give it a seat at the table.
Liberal
The Chair Liberal Ben Carr
Thank you, Mr. Ste‑Marie.
Mr. Guglielmin, the floor is yours for five minutes.
Conservative
Michael Guglielmin Conservative Vaughan—Woodbridge, ON
Thank you, Chair.
Thank you to the witnesses for being here today.
Just to follow up on what we've been talking about and what Ms. Dancho led with, we're talking about agentic AI.
Mr. Tessari L'Allié, we're talking about systems that are essentially digital employees, for lack of a better word, that can create other agents of themselves and basically form their own task force. Is this correct?
Founder and Executive Director, AI Governance and Safety Canada
Absolutely. Imagine if you give a person access to a computer. That would be like an AI agent. They can do everything on the computer that a human being could do, essentially. They still make mistakes, they're still brittle, they're still not reliable yet, but they can do a lot, and they can work for many hours at a time and be fully functional.
Conservative
Michael Guglielmin Conservative Vaughan—Woodbridge, ON
I believe I've read somewhere—correct me if I'm wrong—that these AI agents can also spin off other agents that work for them, and then they can manage those agents like regular employees.
Founder and Executive Director, AI Governance and Safety Canada
Well, in fact, they're finding with AI agents, much like with human beings, if you have a team of human beings with different skill sets all working together, it's more effective than having one agent do it all alone. So yes, AI agents can concurrently spin out multiple other AI agents. Even if you don't give them the instruction of doing so, they'll often just do it on their own because they realize it's a better solution. We're talking now about swarms of AI agents rather than single AI agents.