Evidence of meeting #27 for Industry and Technology in the 45th Parliament, 1st session. (The original version is on Parliament’s site, as are the minutes.)

A recording is available from Parliament.

Before the committee

Wyatt Tessari L'Allié  Founder and Executive Director, AI Governance and Safety Canada
David Duvenaud  Associate Professor of Computer Science, As an Individual
Dugan O'Neil  Vice-President, Research and Innovation, Simon Fraser University, As an Individual
James Elder  Professor and Research Chair, Human and Computer Vision, York University, Director, Centre for AI and Society, As an Individual
Teresa Scassa  Canada Research Chair in Information Law and Policy, Faculty of Law, Common Law Section, University of Ottawa, As an Individual
Billot  Chief Executive Officer, Scale AI

The Chair Liberal Ben Carr

I call the meeting to order.

Good afternoon, everybody. Welcome back. I hope everyone enjoyed a good constituency week back home in your ridings.

It's an exciting day for us, because we are starting something new for the first time in a little while.

We are starting a new study on AI.

Welcome to our witnesses.

As a quick note for the one witness in the room, if you haven't been here before, please make sure that you have your earpiece plugged in. When it's not in use, place it on the sticker in front of you to protect the health and well-being of our interpreters.

I note that all of the tests have taken place as well.

Colleagues, we're joined by three witnesses today. Before I introduce them, we have dedicated the final few minutes of the second hour, as previously negotiated, to finishing version two of the defence industrial strategy report that we were working on in the previous week.

With us today, appearing as an individual, we have David Duvenaud, an associate professor of computer science. We have Dugan O'Neil, vice-president of research and innovation at Simon Fraser University, and he's joining us by video conference. Here with us in the room, from AI Governance and Safety Canada, we have Wyatt Tessari L’Allié, founder and executive director.

We will provide time for opening remarks of up to five minutes from our witnesses.

Mr. Tessari L’Allié, the floor is yours.

Wyatt Tessari L'Allié Founder and Executive Director, AI Governance and Safety Canada

Mr. Chair, committee members, thank you for inviting me to speak to you today. I'm honoured.

AI Governance and Safety Canada is a non-partisan not-for-profit organization and a community of people across the country. We start by asking the following question: What can we do in and from Canada to ensure that AI is safe and benefits everyone?

Since 2022, we've been making public policy recommendations to the federal government, such as our submission on the AI and data act bill and our many appearances before parliamentary committees.

Two years ago, in the context of the AI and data act, I testified before this committee that while early forms of AI, like facial recognition and chatbots, require some regulation, there were much more powerful forms of AI on the horizon that Canada needed to get ready for. We made the case that certain AI capabilities pose an unacceptable risk because they could lead to dangerous weaponization or loss of control scenarios: systems that, without the instruction or authorization of their users, can detect and evade monitoring, rewrite their own code, make copies of themselves, spawn other AI systems, commandeer resources or refuse shutdown.

In the last few weeks, a major jump in AI capabilities has produced such systems. We have now entered the new paradigm of AI called “AI agents”. Unlike chatbots, which simply respond to a prompt, AI agents are systems that can take actions in the real world, working autonomously for hours and overcoming hurdles along the way. Think of them as an employee that you sit down at a computer and tell to accomplish a goal such as building a software program or launching a cold-calling campaign. They come up with a plan, navigate the files and tools they'll need, send and receive emails and phone calls, make and receive payments and debug any issues they come across.

Last week, we found out that hackers manipulated Claude Code to break into Mexican government systems and steal data on over 100 million people. The tool didn't just write code or perform odd tasks for the hackers. It planned and executed most of the sophisticated campaign itself.

Also, now we're starting to see loss of control incidents. These include agents stealing passwords, harassing developers and modifying themselves to evade shutdown in order to achieve the often mundane goals they have been given. Over the weekend, we found out that Chinese tech giant Alibaba produced an agent that, unbeknownst to their engineers, had created an elaborate hack to mine cryptocurrency for itself, despite being given a completely unrelated goal.

These loss of control incidents are concerning because they are the precursors to agents that could permanently evade human control and act adversarially in ways that we cannot detect or stop. This is why hundreds of leading scientists, business leaders and policy-makers are calling AI an “extinction risk”.

What needs doing? In October, we published our white paper titled “Preparing for the AI Crisis: A Plan for Canada”. In light of this latest jump in capabilities, we now focus on three actions Canada can take.

Number one is to pivot to meet the AI crisis. AI development is now a national security emergency and needs to be treated as such. Given its impact on a wide range of files, success will require coordination across cabinet, parties and jurisdictions.

Number two is to spearhead global talks. AI development is global, and no country can manage it on its own. At Davos, Prime Minister Carney showed that Canada can lead. Our strongest card is to convene talks, propose solutions and lay the groundwork for an AI treaty that the U.S. and China might sign when they wake up to the crisis and realize that they have no alternative.

Number three is to build Canada's resilience. Canada needs multiple lines of defence against weaponized and malfunctioning AI agent systems.

This includes, first, prevention. Per our AI and data act recommendations, capabilities that pose an unacceptable risk must be made illegal in Canada. This means that you need to place an immediate moratorium on the latest generation of AI agents. Note that the heads of Anthropic and Google DeepMind recently stated that they are willing to pause AI development if other companies do the same.

Second is monitoring. Currently, governments have little to no visibility into AI agent populations or activity. This means the incidents that have been publicly reported are very likely just the tip of the iceberg. Ottawa needs to urgently work with AI companies, data centres and Internet service providers to gain a clear picture of what is happening on Canada's digital infrastructure.

Third is defence capacity. Our national security teams need to rapidly develop defence strategies and containment and shutdown protocols to neutralize weaponized or malfunctioning agents.

Last is emergency preparedness. We urgently need scenario planning and joint exercises to ensure readiness for potential large-scale attacks, corrupted communication lines and shutdowns of critical infrastructure.

To make a COVID analogy, the release of the latest AI agents is like that initial outbreak in the wet market in Wuhan, China. Most of the world is still unaware of its implications, but if Canada acts quickly and decisively, we can not only prepare ourselves and help mitigate the emerging global crisis but also ensure that Canadians share in the benefits of this transformational technology.

Thank you.

The Chair Liberal Ben Carr

Thank you very much.

We're going to go to Mr. Duvenaud.

You have up to five minutes.

Professor David Duvenaud Associate Professor of Computer Science, As an Individual

Thank you.

My name is David Duvenaud. I'm a professor of computer science at the University of Toronto, where I used to specialize in deep learning and generative models. In 2023 and 2024, I led Anthropic's alignment evaluation team. Our task was to test whether the company's AI was capable of pursuing hidden agendas—for instance, by subverting human oversight or decision-making.

I'm also an author on the “International AI Safety Report” and a member of the safe and secure AI advisory group for the federal Advisory Council on Artificial Intelligence. I am a co-chair at the Schwartz Reisman Institute for Technology and Society.

Today, I'm speaking in a personal capacity. I want to concur with Mr. Tessari L'Allié, in that very capable models raise serious security and loss of control risks. However, I want to address another challenge.

Large language models in particular, and their successors, are on track to become a competitive or superior replacement for humans in almost all of our important white-collar and decision-making roles over the next five to 10 years, roughly. In the slightly longer term, we're on track to make almost all humans economically obsolete, permanently. This will, in turn, cause a permanent loss of bargaining power for workers. Citizens will switch from being necessary for growth to becoming troublesome wards of the state and will have little recourse if they are then further marginalized and disempowered. We face a much larger problem than simply managing a temporary labour disruption.

I realize that this sounds similar to incorrect predictions made about previous labour disruptions, like the Industrial Revolution. Much wealth and many new jobs will be created as a consequence of improving AI capabilities. However, AI will, at some point, also be able to fill these new jobs better than humans. Eventually, each of us will have to compete with machine workers at least as capable as us but also faster, more responsive, more reliable and cheaper. This is the stated goal of the largest artificial general intelligence, AGI, companies, and they're well on their way to achieving it. Many of my former students are working at these companies, making fortunes, and many of them also believe that this is probably their last chance to have a real job. Teaching at the university is becoming depressing as students see the value of their skills decreasing month by month.

You might expect the major AI companies to have an answer to the question of how AGI development is supposed to ultimately benefit the average person economically, even if indirectly. However, their consistent stance has been that this is a huge problem they don't know how to address. For example, Dario Amodei, the CEO of Anthropic, said, “in the long run AI will become so broadly effective and so cheap that [comparative advantage] will no longer apply” and “At that point, our current economic setup will no longer make sense”. OpenAI's CEO, Sam Altman, was recently asked, “How will people survive economically?”, to which he replied, “I don't know; neither does anyone else.”

Over the past few years, I've systematically asked my colleagues in industry labs, research institutes and other academic disciplines for any coherent vision of how our civilization could robustly serve human interests once we are no longer competitive. The only consensus is that the window for individuals to compete and contribute is closing.

What does this mean for you, as MPs?

The main thing I'd like you to keep in mind, going forward, is that people are right to fear being replaced. This isn't just a period of disruption, after which things will return to something like business as usual. The default path is that we all become unemployable, except in mandated make-work contexts, and then marginalized in favour of a machine economy oriented towards growth for the sake of competitiveness.

The second thing to keep in mind is that we should expect governments to generally become much less responsive to the needs of their citizens after this happens. The need for human labour by the state aligns the state with its citizens. Right now, investment in education, and human capital more broadly, pays off for everyone in the long run. However, soon, fiduciary duty will require investing instead—mainly in data centres, power plants and robotic factories.

Finally, there is no way to address these problems without global coordination. Human replacement can happen even if everyone involved would prefer to prioritize human interests. It's just going to be the only way to remain competitive. Every country, industry and worker faces a choice between adapting to AI as fast as possible or being out-competed. No one can unilaterally do much to slow or soften the blow of eventual human irrelevance.

Thank you.

The Chair Liberal Ben Carr

Mr. O'Neil, the floor is yours for up to five minutes.

Dugan O'Neil Vice-President, Research and Innovation, Simon Fraser University, As an Individual

Thank you.

Good afternoon, Mr. Chair and members of the committee. Thank you for the opportunity to contribute to your study on artificial intelligence and Canada's strategic industries. I'm pleased to represent Simon Fraser University here today.

Artificial intelligence is rapidly becoming foundational to economic competitiveness, industrial productivity and national security. Countries that control the infrastructure, talent and intellectual property behind AI will shape the next generation of global innovation. For Canada, this creates a clear strategic imperative to build sovereign AI capacity that ensures Canadian research, data and technologies remain anchored in Canada while supporting our companies and industries.

Canada begins from a position of strength. Our researchers helped pioneer modern machine learning, and our universities continue to produce world-leading AI talent, but global competition is accelerating quickly. Maintaining leadership will depend on whether Canada can connect research excellence to sovereign infrastructure and domestic industrial adoption. Universities are central to that effort.

Institutions like Simon Fraser University operate at the intersection of discovery, infrastructure and commercialization. We train highly skilled AI talent, host advanced computing infrastructure and partner with Canadian companies to translate research into deployable technologies. At SFU, our innovation model brings together researchers, technology developers, start-ups and industry partners to move innovations more quickly from discovery to application.

This model is already supporting collaborations with Canadian industry partners working to strengthen Canada's digital and energy infrastructure. For example, we are working with Bell and Hypertec to support the development of sovereign computing infrastructure and AI capabilities here in Canada.

We also collaborate with Canadian companies including Cerio, Corix and Moment Energy, which are applying advanced technologies, including AI-enabled analytics and optimization, to improve energy systems, infrastructure management and clean technology deployment. These partnerships are critical because sovereign AI capacity is not only about research; it's about ensuring that Canadian companies have the tools and infrastructure they need to compete globally.

Another area in which Canada has a unique opportunity to lead is the convergence of artificial intelligence and quantum technologies. Simon Fraser University has played a major role in Canada's quantum ecosystem. One prominent example is Photonic, a globally recognized quantum computing company that originated as an SFU spinoff and continues to grow within Canada's innovation landscape.

As AI models become larger and more computationally intensive, the relationship between AI supercomputing infrastructure and quantum computing and networking will become increasingly important. Co-locating these capabilities can accelerate breakthroughs in computing, communications and cybersecurity.

At SFU, we are actively advancing research at the intersection of AI, quantum technologies and next-generation communication systems, helping to lay the foundation for future computing architectures that will support Canadian science, industry and national security.

This brings me to four brief observations for the committee.

First, sovereign AI capacity requires sustained investment in research institutions that already have the infrastructure and partnerships needed to scale innovation quickly. Truly sovereign infrastructure is critically needed.

Second, Canada must ensure that advanced computing capacity, including AI supercomputing, remains accessible to Canadian researchers and companies. Compute infrastructure is becoming as strategically important as energy infrastructure.

Third, Canada should leverage the convergence of AI and quantum technologies to build globally competitive innovation hubs anchored in Canadian institutions and companies.

Fourth, the rapidly changing technology landscape requires a flexible, adaptable national talent pool. The post-secondary system needs support to graduate students who can adapt to a changing work environment. We must also provide opportunities for the workforce to upskill and retrain for new opportunities if we want to create conditions for success for Canadian companies.

If Canada can align long-term investment, strong regional ecosystems and partnerships with domestic industry, we have a real opportunity to lead not only in AI research but in the industries and technologies that will define the next generation of economic growth.

Thank you again for the opportunity to appear before the committee. I look forward to your questions.

The Chair Liberal Ben Carr

Thank you very much to all three witnesses for your introductory remarks.

Colleagues, I'm going to let you know that because of that late start, we're going to have to cut the final slot. On my list from each party, I have Madam Borrelli and Mr. Bardeesy. I'm letting you know now so that if you want to make some changes, you'll have the ability to do that. We have to be on time given that we have the report to deal with at the end of the second hour. I'll let you figure that out internally.

For now, Madam Dancho, the floor is yours for six minutes.

3:55 p.m.

Conservative

Raquel Dancho Conservative Kildonan—St. Paul, MB

Thank you.

Thank you to the witnesses.

Thank you to our Bloc Québécois member for bringing forward this important study. It's a critical time for government to be looking at this. I think our committee here is undertaking important work.

There was excellent testimony, though a bit foreboding. Obviously, it's not a laughing matter. It's a very serious matter. I did want to clarify a few things to provide context for those who are watching and are interested in this.

Mr. L'Allié, thank you for being here. You provided excellent opening testimony. You stated that there was some sort of AI breakthrough in the last number of weeks. That's really changed the game, if I could put it in my own words.

Can you just describe, in very basic terms, what this change was and why it's so relevant to the discussion we're having today?

3:55 p.m.

Founder and Executive Director, AI Governance and Safety Canada

Wyatt Tessari L'Allié

Absolutely. Very simply, up until now, we had AI that could talk. We now have AI that can act. In practice, we've had various forms of AI agents for many years, but the technology wasn't there to make them useful. In particular, previous agents were very limited because the technology wasn't there yet to allow them to troubleshoot, interact and use computer tools on the fly.

Since December, there has been a jump in the base model capability. A whole new infrastructure, like OpenClaw, was developed, which helps give control of a computer to an AI agent.

We're now in a world where you're not just talking to a chatbot and getting an answer, but you're telling an AI agent to go and do this in the real world for you.

3:55 p.m.

Conservative

Raquel Dancho Conservative Kildonan—St. Paul, MB

In your plan for Canada paper from your association, you talk about artificial general intelligence, which you describe as the industry's term for the first systems that can match or out-compete human beings in the real world.

Is that what you're talking about, that leaps have been made, or is it that it's just one step closer to that?

3:55 p.m.

Founder and Executive Director, AI Governance and Safety Canada

Wyatt Tessari L'Allié

We're getting so close to AGI that it's becoming a matter of semantics. Depending on your definition of AGI, some people define it as systems that can do every economic task that a human being can do. We're not quite there yet; that's probably another six, 12 or 18 months away. Another definition is systems that you can permanently lose control of. We're probably not there yet with that, but possibly soon, depending on which system you're looking at. It's a bit like driving towards a city. When you start getting to the suburbs, you're not officially in the city yet, but you're kind of in the area, so we're basically very close to AGI.

4 p.m.

Conservative

Raquel Dancho Conservative Kildonan—St. Paul, MB

You mentioned the national security risk associated with this, applying these new developments in AI and considering that AGI is just around the corner. We're in the suburbs; we're not quite downtown.

Can you describe in a little bit more detail a real-world scenario on the security impacts that Canada should be preparing for? My understanding is that this just makes cyber-hackers that much more competent. It just increases their capacity and the damage they can do.

Is that correct?

4 p.m.

Founder and Executive Director, AI Governance and Safety Canada

Wyatt Tessari L'Allié

That is correct, but there's also the added new risk of loss of control. This is where you have a system that can—

4 p.m.

Conservative

Raquel Dancho Conservative Kildonan—St. Paul, MB

I'm sorry to cut you off.

They're acting on their own. Is that what you're saying?

4 p.m.

Founder and Executive Director, AI Governance and Safety Canada

Wyatt Tessari L'Allié

Exactly. They're able to independently self-sustain, basically, whether it be to steal resources, rent a service or rent themselves.... They copy themselves in different formats. It becomes more of an electronic disease living in your networks that you can't control, versus purely a tool you can use for cyber-attacks.

4 p.m.

Conservative

Raquel Dancho Conservative Kildonan—St. Paul, MB

In the same paper I mentioned earlier, you mentioned that one of your recommendations for preparing the country for this was to establish a permanent task force on AGI chaired by the Prime Minister. That's an interesting recommendation.

Given your testimony, you're saying the security risk is so considerable now. I know your colleague, Mr. Duvenaud, also outlined for the committee in his opening remarks the potential labour impact.

Given that significance, is that where that position is coming from, that this is so important?

4 p.m.

Founder and Executive Director, AI Governance and Safety Canada

Wyatt Tessari L'Allié

There's a very legitimate case to make that this is a bigger deal than COVID. In that sense, for every file the government is going to work on, whether it be national defence, energy, culture or the environment, everything is going to be disrupted by AI. This has to be coordinated from the top. Given the time it takes for government to put meaningful solutions in place and given the speed at which AI is moving, you're basically dropping everything and working in AI mode.

4 p.m.

Conservative

Raquel Dancho Conservative Kildonan—St. Paul, MB

Mr. Duvenaud, on that same question, regarding the recommendation to establish a permanent task force on AGI chaired by the Prime Minister, what would be your biggest argument of why we should do that?

4 p.m.

Associate Professor of Computer Science, As an Individual

Professor David Duvenaud

Honestly, I haven't thought much about this question at this particular committee. It's going to be necessary to have expertise on call. Things are just going to keep moving faster and faster. Like Mr. L'Allié said, this is one of the main things that people just want to talk about. They are going to have to react on a shorter time scale than usual going forward.

4 p.m.

Conservative

Raquel Dancho Conservative Kildonan—St. Paul, MB

Your position is that there'll be tremendous job losses, first in white-collar jobs and then possibly, with robotic automation combined with AI, in blue-collar jobs. We're talking about five or 10 years for white-collar jobs and perhaps 15 to 20 years for blue-collar jobs.

Is that your position?

4 p.m.

Associate Professor of Computer Science, As an Individual

Professor David Duvenaud

It's really hard to predict. It's already starting to bite, but it will bite especially over the next two or three years. The claims about there not being any place to retreat to are maybe more in the five- or 10-year time frame.

I'll say those kinds of issues are more suited to the usual legislative pace than the kinds of risks Mr. L'Allié is talking about.

4 p.m.

Conservative

Raquel Dancho Conservative Kildonan—St. Paul, MB

What I'm hearing from the two of you gentlemen is that there's certainly a very considerable increase in national security threats because of AI's capabilities and its rapid development, and then Mr. Duvenaud talked a lot about considerable labour risks, as well, that are coming. I mean risks.... I'm not sure if you'd qualify it as that, but I think all of our constituents would be very concerned, so I'll call it a considerable risk. I appreciate your feedback.

I believe I'm out of time, but I appreciate the recommendation you've made through the association you represent, Mr. L'Allié, about a permanent task force chaired by the PM on this.

4 p.m.

Liberal

The Chair Liberal Ben Carr

Thank you very much.

Mr. Ma, the floor is yours for six minutes.

4 p.m.

Liberal

Michael Ma Liberal Markham—Unionville, ON

Thank you.

I am a computer science graduate. I'd like to start with questions for both Dr. Duvenaud and Dr. O'Neil.

How are your institutions connecting with industry, especially in terms of how you're utilizing federal government funding for this research and development, and how are you ensuring that we're advancing the technology but that public safety is at the heart of all of this?

We'll start with Dr. Duvenaud.

4 p.m.

Associate Professor of Computer Science, As an Individual

Professor David Duvenaud

I have some federal funding through the Vector Institute, some through a CIFAR AI chair and also some through the Schwartz Reisman Institute. These have been very successful initiatives in terms of retaining talent and just letting researchers do blue-sky research with a long-term view. I'm really happy with how this has all gone.

For some of the risks, it's been slightly awkward. The Vector Institute wants to work closely with industry and says, “We're going to disrupt. We're going to adopt as fast as possible,” and then Geoff Hinton will come in and say, “Also, there are all these horrible risks, and this is going to change everything and maybe ruin lots of things.” It puts them in a bit of an awkward situation.

Right now, the Schwartz Reisman Institute is doing a good job of thinking about the big picture, while at the Vector Institute, we're doing a good job of trying to get the short-term benefits rolled out to industry, health care and stuff like that.

I don't want to go on and speculate. If you have more specific questions, I'm happy to elaborate.

Michael Ma Liberal Markham—Unionville, ON

Thank you.

I have the same question for Dr. O'Neil.

4:05 p.m.

Vice-President, Research and Innovation, Simon Fraser University, As an Individual

Dugan O'Neil

We're involved in supporting AI research in a number of different ways. One of them, for example, is supplying sovereign AI infrastructure for researchers all across Canada to use to develop new AI models. One of the things we are advancing in that regard is the sovereign nature of the resource. It is not protecting against all eventualities that have just been described, but it's certainly taking very seriously the stewarding of Canadian data on Canadian soil, under the control of Canadian organizations.

In terms of the development of AI models, of course, we're working directly with industry—a number of local companies and international companies—on new AI models. We are also convening civil society discussions through our centre for dialogue on the future of AI, the dangers of AI and responsible use of AI at the same time that we're providing infrastructure to develop future models and graduating talent from our computing science school, for example, to work on those models.

Michael Ma Liberal Markham—Unionville, ON

I'd like to follow up with you, Dr. O'Neil, on that. What's your view of the data sovereignty you talked about? What do you think the government should be doing more of?

4:05 p.m.

Vice-President, Research and Innovation, Simon Fraser University, As an Individual

Dugan O'Neil

We have to invest in the creation of Canadian capacity. Right now, it's the easiest thing for any company or any individual in Canada to give over their data and grab all of the advice from companies, which are usually very large American companies with data centres offshore from Canada. If we invest in our own industry and supply it with sovereign compute and data capacity, we can create alternatives to the American giants that currently control the industry.

Michael Ma Liberal Markham—Unionville, ON

My next question is for Mr. L'Allié.

You talked about critical infrastructure. Do you feel that the current legislation and programs are sufficient to protect our critical industries, hospitals, government agencies and so forth?

How do we do that while we're protecting data sovereignty?

4:05 p.m.

Founder and Executive Director, AI Governance and Safety Canada

Wyatt Tessari L'Allié

That's a great question.

Are they sufficient? No, but they're not sufficient anywhere.

For example, we support Bill C-8's initiatives in terms of improving cybersecurity. There have been a lot of efforts and coordination between provinces and federal government on this kind of stuff. I would say that needs to be turbocharged and also rethought, or at least updated, in the context of a very different type of threat now, with AI agents.

Michael Ma Liberal Markham—Unionville, ON

As far as data sovereignty and particularly data privacy are concerned, do you feel that, in general, the industry and government are doing enough to educate the public about the danger of some of these AI tools out there—not recognizing that the data is actually going to be shared globally?

4:05 p.m.

Founder and Executive Director, AI Governance and Safety Canada

Wyatt Tessari L'Allié

I'm a little bit limited in my knowledge of data sovereignty per se, but I will say that with the latest jump in capabilities, there are even more concerns around data. If an AI agent, for example, is working on behalf of a person, it'll often unintentionally share data with a third party. That's another vector of weakness.

Michael Ma Liberal Markham—Unionville, ON

Do you see that any legislation is required to help protect that?

4:05 p.m.

Founder and Executive Director, AI Governance and Safety Canada

Wyatt Tessari L'Allié

We broadly support what was in the previous bill, Bill C-27. Certainly, at least being as good as Europe on this stuff seems like a baseline, but there's a lot more that can be done.

The Chair Liberal Ben Carr

That's all we have for time. Thank you.

Mr. Ste‑Marie, you may go ahead. You have six minutes.

Gabriel Ste-Marie Bloc Joliette—Manawan, QC

Thank you, Mr. Chair.

I want to welcome our three witnesses and thank them for their very informative presentations. I also want to say thank you for being here to answer our questions.

Mr. Tessari L'Allié, you just referred to former Bill C‑27. You appeared before the committee in January 2024, and you made four recommendations: establish a central AI agency; invest in AI safety for humans and AI governance; encourage international co-operation; and launch and maintain a national conversation on AI.

Has there been any progress in those areas?

4:10 p.m.

Founder and Executive Director, AI Governance and Safety Canada

Wyatt Tessari L'Allié

I would say we've seen some baby steps in the right direction. For example, the Canadian AI Safety Institute was created, and talks between the Minister of Artificial Intelligence and Digital Innovation and industry have taken place.

This is just the beginning; it's not enough. Most of the work lies ahead.

Gabriel Ste-Marie Bloc Joliette—Manawan, QC

I see.

Whenever a committee like ours examines AI, it does a study, it releases a report and things carry on. First, the government appointed an AI minister, which I think is a good thing. It says it's going to introduce a strategy next. It carried out consultations, but I don't think they were adequate.

That brings me back to your fourth recommendation, launch and maintain a national conversation on AI. How do we broaden and improve that conversation?

I'm going to throw out an idea. The House of Commons has numerous committees that focus on numerous topics. Should the House recognize the importance of AI and the need for a special committee on AI? I'm talking about a committee that would follow all of the dramatic developments in AI as they're happening.

4:10 p.m.

Founder and Executive Director, AI Governance and Safety Canada

Wyatt Tessari L'Allié

Absolutely. Increasing Parliament's capacity to monitor and respond to AI is a good thing.

As far as a national conversation on AI is concerned, I think the government would do very well to consult the public broadly on the whole jobs issue, which we have a few more years to do something about. The debate around euthanasia is an example that comes to mind. It would help Canadians understand what's going on, while giving them the opportunity to respond, and have a say in what the government should do.

When it comes to safety, the situation is too complex and too fast-moving. It's an area where the government needs to act first to protect the public and explain later, unfortunately. As far as I can tell, there just isn't time to carry out that type of consultation.

Gabriel Ste-Marie Bloc Joliette—Manawan, QC

Very good. Thank you. I will have more questions for you later.

Mr. Duvenaud, thank you again for being with us.

One of the things you've seen in your work is that, in the absence of any regulation, programmers themselves are the ones curbing features with the potential to do the most harm. You have recommendations as well: track AI and its influence closely; put regulations and oversight mechanisms in place; promote the importance of thinking critically about AI's uses in citizen organization; and ensure that citizens steer how human civilization evolves.

That seems like a tall order. AI is a technical application, but you're raising fundamental philosophical issues.

What legislation should the government bring in? I'll also ask you the same question I asked Mr. Tessari L'Allié: Is AI a big enough concern to warrant the creation of a parliamentary committee that constantly monitors AI developments? Is that a good idea?

4:10 p.m.

Associate Professor of Computer Science, As an Individual

Professor David Duvenaud

My apologies. I'm going to answer in English. My French is so-so.

I think such a committee would be table stakes and, to be honest, I'm not totally sure it would change things much one way or another.

For the larger question of how governments should react, there are two schools of thought. One is simply not to build AGI, but that requires global coordination, which is a very tall order. The other, more general approach is to upgrade our institutions across the board so that they are more robustly aligned with humans. That's a huge task, and no one knows how to do it, though it could benefit from AI helping us build better forecasts or better coordination mechanisms. It's something no one has had to do before, because states have always needed us. The only plausible way forward I can see, if we do make ourselves irrelevant through AGI, is that we could be okay in principle if, along the way, we managed to rethink the incentives our governments face and build much more robust control mechanisms to ensure that citizens can't be permanently marginalized.

Gabriel Ste-Marie Bloc Joliette—Manawan, QC

Following up on that, I have a question.

On one hand, there's the European model of passing legislation and putting clear controls in place. On the other hand, there's the U.S. model of allowing more latitude. The big players have said they'll go to the States if they're too restricted in Europe.

Without robust international coordination, do laws fully serve their purpose?

4:15 p.m.

Associate Professor of Computer Science, As an Individual

Professor David Duvenaud

You hit the nail on the head. All the important AGI labs are basically in the States right now, so it's up to them. It's nice, because they're capable of unilateral action. It's bad, because they don't seem very interested in it right now, but I do think this issue is going to become so salient to most people as they start fearing for their jobs that there will be an appetite for potentially really strong legislation. I think Canada's role is basically to try to build a coalition of middle powers. Such a coalition could, in principle, altogether be a large enough counterweight that the U.S. and China would have to give it a seat at the table.

Gabriel Ste-Marie Bloc Joliette—Manawan, QC

Thank you very much.

The Chair Liberal Ben Carr

Thank you, Mr. Ste‑Marie.

Mr. Guglielmin, the floor is yours for five minutes.

4:15 p.m.

Conservative

Michael Guglielmin Conservative Vaughan—Woodbridge, ON

Thank you, Chair.

Thank you to the witnesses for being here today.

Just to follow up on what we've been talking about and what Ms. Dancho led with, we're talking about agentic AI.

Mr. Tessari L'Allié, we're talking about systems that are essentially digital employees, for lack of a better word, that can create other agents of themselves and basically form their own task force. Is this correct?

4:15 p.m.

Founder and Executive Director, AI Governance and Safety Canada

Wyatt Tessari L'Allié

Absolutely. Imagine giving a person access to a computer; an AI agent is essentially like that. It can do everything on the computer that a human being could do. These agents still make mistakes, they're still brittle and they're still not reliable yet, but they can do a lot, and they can work for many hours at a time and be fully functional.

4:15 p.m.

Conservative

Michael Guglielmin Conservative Vaughan—Woodbridge, ON

I believe I've read somewhere—correct me if I'm wrong—that these AI agents can also spin off other agents that work for them, and then manage those agents like regular employees.

4:15 p.m.

Founder and Executive Director, AI Governance and Safety Canada

Wyatt Tessari L'Allié

Well, in fact, they're finding that with AI agents, much as with human beings, a team with different skill sets working together is more effective than one agent working alone. So yes, AI agents can concurrently spin out multiple other AI agents. Even if you don't instruct them to do so, they'll often just do it on their own because they realize it's a better solution. We're now talking about swarms of AI agents rather than single AI agents.

4:15 p.m.

Conservative

Michael Guglielmin Conservative Vaughan—Woodbridge, ON

Mr. Duvenaud, going off that, when they're talking internally in the AI industry, what are they saying? What are people saying behind the scenes about the impact that this is going to have on jobs in the broader economy?

4:15 p.m.

Associate Professor of Computer Science, As an Individual

Professor David Duvenaud

It's not even necessarily behind the scenes. The thing that kind of radicalized me was talking to the engineers and the people at the company and asking, “What are you going to be doing once we succeed and you don't have jobs?” They said, “Oh, I'll just be clicking 'accept suggestion' all day,” or, “I'll be taking a much-needed vacation.” No one has really thought this through.

Again, as I said, the lab leaders are saying, loud and clear, that there's no plan here. This is going to undermine our economy and democracy, and no one has a good answer. They talk about how we need to have a societal conversation about how to replace it, but that's just filler for saying, “I don't have a plan and no one's come up with a plausible-sounding one yet.”

Twitter is where most of the interesting conversations happen, with lab employees giving their takes. It's not a secret insider opinion. People are being pretty open about what they think.

4:15 p.m.

Conservative

Michael Guglielmin Conservative Vaughan—Woodbridge, ON

Mr. Tessari L'Allié, you referenced, in your opening remarks, the Mexican government system that was attacked and how it wasn't just that AI was told to go and do something. It created the plan, developed the sophistication and was then able to breach over 100 million different pieces of data and information.

How would you say agentic AI changes the scale and speed of cyber-attacks compared with traditional hacking tools?

4:15 p.m.

Founder and Executive Director, AI Governance and Safety Canada

Wyatt Tessari L'Allié

It democratizes the ability to run cyber-operations.

In November, there was another example that Anthropic flagged: Chinese state actors again used an AI system to assist their operations.

In the past, AI-powered cybercrime worked like this: The AI would write a section of code, and the hacker would then copy and paste that code into their operation, step by step. Now they're saying, “AI, here's the goal. Attack this target and figure it out.” The AI agent is able to build a plan, try multiple strategies, troubleshoot if something doesn't work and keep going.

4:20 p.m.

Conservative

Michael Guglielmin Conservative Vaughan—Woodbridge, ON

One thing I also found alarming, in the same space of what we're talking about now, is this: The AI was able to change itself to avoid detection and deletion.

How serious are these incidents today, and what does that tell us about the reliability of current safety mechanisms for artificial intelligence?

4:20 p.m.

Founder and Executive Director, AI Governance and Safety Canada

Wyatt Tessari L'Allié

There's an example from Palisade Research. They did an experiment with a robot dog, giving it the task of patrolling an area. There was a button on the wall to turn it off, and they realized that pushing the button often didn't work. The agent running the robot had realized that if somebody pushed that button, it couldn't achieve its goal of patrolling the area, so it had rewritten its own code so as not to listen to the instruction.

I'm sorry. I missed the second question.

4:20 p.m.

Conservative

Michael Guglielmin Conservative Vaughan—Woodbridge, ON

No, that's good.

Mr. Duvenaud, I'm going to you for that.

Since you were working directly for Anthropic on safety, would you say that safety mechanisms in AI are where they need to be in the current moment?

4:20 p.m.

Associate Professor of Computer Science, As an Individual

Professor David Duvenaud

The short answer is yes, they are for now, in the sense that we don't think current models are capable of the supergalaxy-brained, long-term biding of time needed for some big takeover. However, that is the plan—making them that smart. I think we're probably within six to 18 months of models that can do this, though I'm not saying it's going to happen in that time frame.

This was a big crisis of faith within the company. They had this responsible scaling policy, RSP, which I helped work on a bit. The idea was that they would never ship a model they couldn't prove was safe. However, they realized that they had backed themselves into a corner. They couldn't prove the models were dangerous, but they couldn't prove they were safe. If they unilaterally stopped, they would blow up the company, to no one's benefit. They just changed that RSP two weeks ago to remove that provision.

The point is that they know they're entering a regime where they can't prove the models are safe anymore, but they also don't have a great plan for dealing with that. They wish everyone could slow down, but that requires coordinated action.

4:20 p.m.

Conservative

Michael Guglielmin Conservative Vaughan—Woodbridge, ON

Thank you.

The Chair Liberal Ben Carr

Mr. Bains, the floor will be yours for five minutes. Then I'm going to give one minute to Monsieur Ste-Marie to follow up.

Go ahead, Mr. Bains.

Parm Bains Liberal Richmond East—Steveston, BC

Thank you, Mr. Chair.

Thank you to the witnesses for joining us today.

I'm going to take my first question to British Columbia.

Dr. O'Neil, you are a leader and key figure in national supercomputing initiatives. I believe your testimony here is extremely valuable today.

With respect to Canada's ability to commercialize the work of research institutions, is there a role for artificial intelligence and supercomputers like Cedar to support the commercialization of academic research?

4:20 p.m.

Vice-President, Research and Innovation, Simon Fraser University, As an Individual

Dugan O'Neil

Yes, there is. Right now in Canada, we are an economy dominated by small and medium-sized enterprises. Many of them do not have a large AI division or the tools and infrastructure needed to assess the technology, develop their own adaptations of it and move those forward to be competitive in the world. If we provide public supercomputing access to those small and medium-sized enterprises, we give them platforms on which they can develop their own solutions and become more independent of some of the international forces at play, even if, right now, Canadian industry seems quite far behind the Anthropics of the world in developing its own tools.

Parm Bains Liberal Richmond East—Steveston, BC

I'll follow up on something you mentioned there with respect to international markets. How do we compare more specifically, and can this supercomputer technology support ongoing R and D at our Canadian institutions? There's another part to that. What if we don't continue to invest and remain leaders in the AI space?

4:20 p.m.

Vice-President, Research and Innovation, Simon Fraser University, As an Individual

Dugan O'Neil

Currently, we are not an international supercomputing leader. We're the only G7 country that doesn't have a “top 30 in the world” public supercomputer in our jurisdiction.

I think we need to simultaneously increase our public supercomputing capacity in Canada and create platforms for Canadian companies and individuals to make use of that capacity. If we don't, we will be permanently beholden to the way technology is developed in other jurisdictions. If we want to be competitive, we'll effectively have no choice but to turn our data over to those jurisdictions to be incorporated into their products and then sold back to us.

Parm Bains Liberal Richmond East—Steveston, BC

Building on that again, can you comment on which sectors—health care and agriculture, for example—show the most promise right now with respect to Canadian AI firms?

4:25 p.m.

Vice-President, Research and Innovation, Simon Fraser University, As an Individual

Dugan O'Neil

I think there are a number of different areas in which we have competitive small companies. Sometimes when people talk about competition, they start from a feeling of being defeated, because we can't compete with the budget of Google or Microsoft or OpenAI. There are many small companies that are doing applications of AI in agriculture, in health care, in mining, in lots of areas that are very critical to the Canadian economy. Right now, there's just a limit on how much those companies can grow and scale those applications.

One thing I encourage people to do is buy Canadian, buy from those companies, be the first customer to allow those companies to grow their technology and their impact in Canada, and then sell to the rest of the world.

Parm Bains Liberal Richmond East—Steveston, BC

What should the federal government prioritize to support responsible AI adoption in Canada? Maybe you can mention what safeguards are needed to ensure responsible AI development and use. A longer answer may be needed on this one.

4:25 p.m.

Vice-President, Research and Innovation, Simon Fraser University, As an Individual

Dugan O'Neil

I agree with my colleagues that a cross-sector approach is needed. When you have a technology that can disrupt agriculture, health care, mining and other natural resource development all at the same time, it's difficult to have a conversation about how to regulate it, because traditionally we regulate more in silos. We do need regulatory thought. We need a societal conversation. And we have to somehow do all of that while remaining competitive and allowing the use of AI in Canada to grow, not shrink, while we work on the regulations.

The Chair Liberal Ben Carr

Thank you very much, Mr. Bains.

I apologize, Mr. O'Neil, but that's all the time we have for that line of questioning.

Now it's over to Mr. Ste‑Marie for one minute.

Gabriel Ste-Marie Bloc Joliette—Manawan, QC

Mr. Tessari L'Allié, this morning on the radio, Yoshua Bengio said it was important for Canada to partner with other countries on the issue of AI, given the concentration of power in the U.S. and China. Mr. Duvenaud referred to this in his work, as have you. At the ethics committee, you suggested a treaty between countries to better regulate AI. Could you talk about that briefly?

4:25 p.m.

Founder and Executive Director, AI Governance and Safety Canada

Wyatt Tessari L'Allié

Yes, of course.

In that scenario, the best contribution Canada could make is in the international arena. Canada could take the lead globally and launch those talks.

When it comes to AI safety, no country alone can protect itself against systems that are more intelligent than humans. That means we have to coordinate our efforts, so even the U.S. and China will need such a treaty. That's really the first thing Canada needs to achieve if it wants to influence the trajectory of AI.

Gabriel Ste-Marie Bloc Joliette—Manawan, QC

Thank you very much.

The Chair Liberal Ben Carr

Thank you to our witnesses this afternoon.

Honourable members, I know we like to hobnob a bit after the quick break, but we'll have to get going again right away. You can have five minutes at most before we resume the meeting.

Thank you very much to our witnesses for being here today. Thank you for your patience at the outset.

There is certainly a lot for us to reflect on.

I wish you a good rest of your day.

Thank you.

We'll suspend for a few moments.

The Chair Liberal Ben Carr

I call the meeting back to order.

We have three new witnesses to welcome to the committee. One is joining us online and two are here in person.

Joining as an individual, we have James Elder, professor and research chair in human and computer vision at York University, and director of the Centre for AI and Society. We have Teresa Scassa, Canada research chair in information law and policy, from the common law section of the Faculty of Law at the University of Ottawa. From Scale AI, we have Julien Billot, chief executive officer.

It wasn't exactly an uplifting first hour of testimony, so I'll be curious to see where we go in the second hour.

Witnesses, thank you very much for taking the time to join us. As a quick reminder, if you're in the room and your translation earpiece is not in use, to protect the health and well-being of our interpreters, please make sure that it's placed on the sticker in front of you.

With that, I'm going to give the floor to you first, Mr. Elder. You'll have up to five minutes for your opening remarks.

Professor James Elder Professor and Research Chair, Human and Computer Vision, York University, Director, Centre for AI and Society, As an Individual

Thank you so much, Mr. Carr.

It's a privilege to appear before you today.

My expertise is in computational neuroscience, computer vision, AI and robotics. I've been a professor at York for about 30 years. I've led many collaborative research projects with Canadian industry and public sector partners. As mentioned, I'm now serving as director of our Centre for AI and Society, where we bring together around 74 faculty members across different faculties in the university engaged in all aspects of AI research.

I want to break down my brief comments into three categories: opportunities, risks and regulation.

First, I think there are enormous opportunities for Canadian society and industry. As you know, Canadian researchers have been at the forefront of the research on core principles that underlie current AI technologies. In the last few years, we've seen a lot of attention shift to the large language models developed by hyperscalers like OpenAI. I think we're now in a new phase of this AI revolution, where we'll see more and more businesses, from small and medium-sized enterprises to large companies, reaping benefits from these very large-scale AI models. I think there are very important opportunities for Canada in this regard in many different application areas. I mentioned a few in my opening remarks, including construction, robotics for health care and senior care, smart cities, urban mobility and business process automation.

There are many ways the Government of Canada can help Canadians seize these opportunities. Some were mentioned in the previous session, including leading by example: the Government of Canada can be an early adopter of Canadian AI technologies to improve business processes. We need to support post-secondary research and training, particularly directed toward the application and integration of AI into society; we could talk about the details of how to do that. We need to continue to catalyze collaborative research in applied AI. By “collaborative” I mean pan-Canadian, bringing together industrial sectors with domain experts, government agencies and university researchers. I applaud the government's initiatives in research into dual-use technologies, but we don't want to neglect AI technologies that have purely civilian applications. Those are some opportunities.

In terms of risks, there are many, as you heard in the previous session, but the one I want to emphasize is the risk of missing out. This is a disruptive technology. If Canada tried to avoid it, we would miss economic opportunities, and that would have downstream impacts on our quality of life. There are going to be huge shifts in employment, both between labour markets and within our job descriptions. Each of us is going to be challenged to adapt our skill set and workflows.

I think there are really big risks in education, where there's a lot we don't know. We need to support research on cognitive development, especially in our young people. We know that electronic technologies in general have effects on education, but we don't know exactly what effect outsourcing core intellectual capabilities to AI tools has on brain development, on things like math, logic, prose generation and so forth. We really need to support research in those areas.

There are risks in data security, of course; we need data sovereignty. There are also political risks, particularly with respect to AI chatbots and bots online and deepfakes. I think there are things we can do to address those challenges as a society, including investing in research on these risks.

I'll try to wrap up very quickly on regulation. I'm not a policy or legal expert—I'm glad to see there are some of those here in this session—but I do think, from my point of view, we can't avoid the details.

We need to look at specific risks and try to mitigate those risks, as we do with any technology. Mitigating political risk will mean clear legislation around the watermarking of AI content to distinguish real from fake content. Above all, we need to protect data sovereignty. We need to have the compute and secure data storage resources in Canada to make sure that Canadian data and IP stay within Canada.

Thank you.

The Chair Liberal Ben Carr

Thank you very much, Mr. Elder.

Ms. Scassa, we'll go to you next. The floor is yours for up to five minutes.

Dr. Teresa Scassa Canada Research Chair in Information Law and Policy, Faculty of Law, Common Law Section, University of Ottawa, As an Individual

Thank you, Mr. Chair.

I'm a professor of law at the University of Ottawa, where I hold the Canada research chair in information law and policy. I work in the areas of privacy law and AI governance.

As I'm sure you're all aware, Canada's attempt to regulate AI technologies through a cross-sectoral law, the proposed artificial intelligence and data act, failed with Bill C-27 in January 2025.

This bill would have created a set of ex ante measures for different actors within the AI value chain. These were only for high-impact systems and would have required risk identification and mitigation, documentation, some public-facing transparency and some data governance. The bill provided for limited and predominantly light-touch oversight.

The bill was regarded as a broad, cross-sectoral AI statute, but it had important limitations. Although high-impact systems were initially undefined, proposed amendments by the minister sketched out a series of high-impact categories mainly linked to human-oriented use, for example, the use of AI in employment, automated decision-making, the use of biometric data and so on. This is so, even though systems used in industrial or manufacturing contexts can bring with them serious potential risks as well. Of course, new categories of high-impact AI could have been added to the list by regulation over time.

The application of the AIDA was also limited to systems designed for use in interprovincial or international trade and commerce. It would not have applied to the federal public service. It did not apply to the defence department or the security establishment, or to those who supplied AI systems to them.

The signals now seem clear that AIDA will not be resurrected. There's a tendency to assume that because the bill failed, there's no AI regulation in Canada. A recent KPMG survey indicated that 92% of Canadians believe Canada has no AI regulation. It also revealed a significant trust gap when it came to AI.

In reality, there's a considerable amount of AI regulation in Canada. However, it's more sectoral and context specific. It's also more fragmented, less obvious and less transparent. It sometimes looks very different from what ordinary Canadians might consider to be regulation, and it often involves soft law. It ranges from law to guidance.

Many existing laws, such as privacy law, already apply in different ways to AI. In addition, policies, guidance and best practices are developed by government departments and agencies, and by regulators, including privacy commissioners, the Competition Bureau, human rights commissions, financial conduct authorities, law societies and many others.

AI governance is also taking place through standards development and, in the private sector, through corporate self-governance, according to guidance from diverse sources. These have the potential to be reinforced by privately managed compliance certification. The government is exploring how standards and certification could be leveraged to assist Canadian businesses in meeting EU AI Act requirements.

Budget bill amendments to the Red Tape Reduction Act will enable the use of regulatory sandboxes across the federal sector. The federal government has launched a beta register of AI in the public sector and is currently consulting on it. Since 2019, we've had the directive on automated decision-making for the federal public service, and this has been joined by a “Guide on the use of Generative AI” in the public sector. The federal government has also created a list of suppliers committed to principles relating to responsible and effective AI use. I offer these as diverse examples of AI regulation, broadly understood, at the federal level.

Other laws are contemplated or will be amended to address specific AI issues. We may see new online harms legislation. A new privacy bill, when it's eventually introduced, will likely contain provisions related to automated decision-making in the private sector.

All of this activity is encouraging, but where are the gaps?

First, many existing measures are voluntary, and oversight and compliance mechanisms are lacking. While guidance is important in the early days, as things advance, public confidence will require oversight. There may also be a need in some contexts to make compliance compulsory. If oversight and compliance are left to existing regulators, commissions or agencies, it will be necessary to consider what legislative changes might also be required and whether regulators have adequate resources to fulfill complex, expanding mandates.

Second, much of this regulatory activity is difficult to detect unless you follow it closely. This undermines public trust. It's also particularly burdensome for small and medium-sized enterprises. A national coordinating body that ensures coherence, enables greater transparency and promotes federal-provincial harmonization would be valuable. Such a role could also support public trust by serving an ombuds function. There must be ways for Canadians to surface their concerns about AI systems in both public and private sectors.

Third, if approaches are piecemeal and sectoral, then so too will be law reform. It would be useful to map what reforms are needed or contemplated—a clear AI governance strategy. Such a road map was not part of the AI strategy consultation.

Thank you, Mr. Chair, for this opportunity to address this committee. I look forward to any questions.

The Chair Liberal Ben Carr

Thank you very much, Ms. Scassa.

Mr. Billot, the floor is yours for up to five minutes.

Julien Billot Chief Executive Officer, Scale AI

Thank you, Mr. Chair.

My name is Julien Billot, and I'm the CEO of Scale AI, a Montreal-based organization.

At Scale AI, we envision a Canada that is strong and free, where artificial intelligence and high-impact technology fuel sustainable prosperity for years to come. Our mission is really to ignite a new era of growth for Canada—one propelled by empowered industry, collective innovation, visionary champions and strengthened sovereignty, so that Canada shapes a future-ready economy grounded in its own values and assets. By fostering the growth of Canadian champions that build, deploy and retain intellectual property at home, we can ensure that the economic value created by AI remains anchored in Canada.

Through its co-investment models, Scale AI helps domestic companies scale, attract private capital and compete globally, while ensuring that Canadian innovation benefits Canadian workers, regions and industries first. We act as the engine that connects ideas, industries and investments to build a resilient, globally competitive AI ecosystem.

There is a geopolitical imperative here: building Canada's technological and economic sovereignty. Artificial intelligence has become the new front line of global competition. It's no longer a technological experiment; it's a strategic determinant of national power, prosperity and democratic autonomy, defining which countries control innovation, productivity and security, and which retain the freedom to design their own economic and social path. The world's leading economies—the U.S., China and Europe—have already made AI the cornerstone of their industrial and defence strategies. This global shift makes Canada's technological and economic sovereignty a geopolitical imperative. Without control over the technologies shaping tomorrow, even democratic nations risk losing their capacity to decide for themselves.

The imperative rests on two entwined foundations.

First of all is technological sovereignty. We must secure Canada's independence by mastering the critical capabilities, data, compute and algorithms that underpin every modern economy and every democratic institution. Without control of these assets, Canada risks dependence on foreign infrastructures and systems that may not share its values and governance principles. Homegrown AI is now essential for protecting Canada's institutions, privacy and democratic integrity.

On the economic side is capturing the massive value creation that is now shifting toward AI. Over the next decade, artificial intelligence will redefine global GDP pools, productivity and trade competitiveness. The countries that invest early in building sovereign AI capabilities will not only safeguard their independence, but also generate the wealth, jobs and exports that define the next economic era. Failing to act means ceding prosperity and agency to others.

Canada built AI science and inspired the world, but in 2025, sovereignty is no longer measured by research output but by control over the technologies that power institutions and industries, and by the ability to transform them into prosperity and influence. AI-based technologies now determine the resilience and independence of health care systems, the autonomy of defence and cybersecurity, the productivity and resilience of national industry, and the emergence of quantum applications that will define the next technological frontier.

Canada cannot replace dominant foreign AI players overnight, but it must act now to build the foundation of a sovereign AI value chain. Its reliance on foreign infrastructure providers, hardware manufacturers and software providers will not disappear immediately, but with a clear vision and decisive action, it can achieve strategic independence that will allow Canadian AI champions to grow, export, compete and lead. This is not spending; it's investing in securing Canada's future. With world-class AI talent and a strong innovation ecosystem, we must take ownership of our AI identity. Foreign investment can accelerate progress, but vision and control must remain in domestic hands.

Canada has the talent, infrastructure and partnerships to lead, but leadership now depends on the ability to deploy at scale and build a trusted, productive and sovereign AI economy that serves Canada's interests and values.

We have the ability to create a sovereign AI value chain by 2030 if we create, deploy and export Canadian innovation while anchoring its value domestically. We can do that by building a leading industry in applied AI with a champion factory that fuels commercialization and supports Canadian AI champions, by supporting demand with broad AI adoption across public and private sectors, and by securing our infrastructure—the foundation of which is ensuring technological sovereignty and data independence.

We can do it by enabling and expanding across Canada and abroad, by strengthening national co-operation and governance for a Canadian road map and, on the global stage, by building a strong ecosystem with global reach and driving the global conversation.

That's really the vision we want to push at Scale AI. We are here to help.

We are very honoured to be here and we'll be very happy to answer all your questions.

Thank you.

The Chair Liberal Ben Carr

Thank you, Mr. Billot.

Colleagues, we'll enter our first round of questions.

Mr. Falk, the floor will be yours for six minutes.

4:55 p.m.

Conservative

Ted Falk Conservative Provencher, MB

Thank you to all of our witnesses for your presentations here today. They were very informative.

Mr. Elder, I'd like to begin with you.

When you identified risks, the first one you talked about was missing out. Certainly, from an opportunity perspective, that is a risk. We can either get on board or get out of the way, I suppose.

You also talked a little bit about security and deepfakes and all that. How significant a concern is that from your perspective?

4:55 p.m.

Professor and Research Chair, Human and Computer Vision, York University, Director, Centre for AI and Society, As an Individual

Professor James Elder

I think data security is very significant, because data are the lifeblood of AI. Not only are there security issues from the point of view of political security, data privacy and so forth, which are societal concerns, but there is also intellectual property. It's valuable, so I think we need to pay attention to both of those dimensions of data security.

I don't see any conflict between our economic objectives there and our social objectives. I think they're going in the same direction.

Then, of course, with political risk, I think we all understand that we don't want our political system to be manipulated, especially by foreign actors, and biased by artificial content.

I think these are real risks that we have to balance against that risk of missing out.

4:55 p.m.

Conservative

Ted Falk Conservative Provencher, MB

Several years ago, you had talked about how “Deep learning models fail to capture the configural nature of human shape perception”. What's your perspective on that today?

4:55 p.m.

Professor and Research Chair, Human and Computer Vision, York University, Director, Centre for AI and Society, As an Individual

Professor James Elder

Thanks for the deep research. I appreciate it.

My lab has been one of the labs globally trying to understand aspects in which AI systems diverge from human cognition. I think that's important if we're going to integrate these systems into decision-making that might involve an integration of humans and machines or just autonomous machine decision-making.

We've seen some very significant divergences in my field of expertise, which is visual perception and cognition. Interestingly, those gaps have diminished a little bit with advances in AI, but they're still significant, so I think we need to support research in that domain and all domains of AI perception and cognition, because otherwise we will not have systems that are consistent with our way of seeing problems, and at least we need to understand those differences.

5 p.m.

Conservative

Ted Falk Conservative Provencher, MB

Thank you for that.

Dr. Scassa, I'd like to also ask you a few questions.

During the iteration of Bill C-27, you were quite critical in your comments on the due diligence that was done prior to that piece of legislation. Can you give us specifics where you felt the due diligence had been lacking or where the government had not done proper consultations?

5 p.m.

Canada Research Chair in Information Law and Policy, Faculty of Law, Common Law Section, University of Ottawa, As an Individual

Dr. Teresa Scassa

When the AI and data act hit the scene in June 2022, it was unexpected by industry, and it was unexpected by academia and civil society. It just appeared on the scene. There may have been some behind-the-scenes consultations and discussions that took place, but there was no public consultation beforehand.

That is significant, because consultation does a number of things. One is that it engages the public, and on a topic like AI, the more we engage the public, the better. There's a lot of talk about AI literacy and its importance, and consultation plays a role in building that literacy. It also would have helped to explain the government's very particular approach to AI governance, which was only explained nine months later in the companion document that came out.

That lack of consultation was a problem in getting the message across and in building literacy and trust, and I think it created a number of misconceptions about the bill that made it very difficult to move forward with it.

5 p.m.

Conservative

Ted Falk Conservative Provencher, MB

In our previous panel, we heard from Mr. L'Allié that we now have agentic AI, which can self-preserve and self-perpetuate. How do we regulate that?

5 p.m.

Canada Research Chair in Information Law and Policy, Faculty of Law, Common Law Section, University of Ottawa, As an Individual

Dr. Teresa Scassa

That's a really good question.

This is one of the challenges with AI. It is moving so quickly that it's very difficult to keep up with it. It's also very difficult in the early stages to understand what the problems, risks and challenges are going to be.

This is something, perhaps, that we're going to have to get used to. Generative AI also created this significant disruption. The AIDA was introduced in June 2022. Generative AI was publicly launched in November 2022. The bill was not prepared for generative AI. Now we're looking at agentic AI and the challenges it's going to bring.

5 p.m.

Conservative

Ted Falk Conservative Provencher, MB

It used to be that when you got concerned about where your computer was going, you just pulled the plug. Apparently, that doesn't work anymore. It's not going to work in the future. Do you have any suggestions for how we can address that?

5 p.m.

Canada Research Chair in Information Law and Policy, Faculty of Law, Common Law Section, University of Ottawa, As an Individual

Dr. Teresa Scassa

There are a lot of different approaches to take, and many Canadian companies are moving slowly, carefully and cautiously. There are also other companies that are going full steam ahead, moving fast and breaking things. Those are typically located in other countries and are planning to reap enormous benefits from it. We're caught in that position as well. Not all agentic AI is going to be bad.

5 p.m.

Conservative

Ted Falk Conservative Provencher, MB

Thank you.

5 p.m.

Liberal

The Chair Liberal Ben Carr

Mr. Bardeesy, the floor is yours for up to six minutes, sir.

5 p.m.

Liberal

Karim Bardeesy Liberal Taiaiako'n—Parkdale—High Park, ON

Thank you very much.

In this session, we've been hearing a bit about the cutting-edge innovations and also, on the other hand, the potential for displacement. However, there are a lot of spaces in the middle that create opportunities for a wide array of players in the labour market to participate in the potential benefit from AI and to have their work augmented.

I want to start with Monsieur Billot.

I'd like some feedback from you about what kinds of companies in the Scale AI universe might be in that key middle segment. They're not necessarily developing cutting-edge innovations, but they're not creating products that are purely about labour displacement.

5:05 p.m.

Chief Executive Officer, Scale AI

Julien Billot

That's obviously the core of what we try to achieve at Scale AI. Almost all the companies we have worked with since inception have labour issues—not enough people, basically, to deliver what they need to deliver. None of the projects we funded involved job displacement. All the projects we funded helped companies actually do more with the labour they had.

That's because we focus on one thing: improving business processes. In improving business processes, we are really here to augment what companies can do with the resources they have—basically, to do much more with what they have. We never had a case of funding where companies asked us to do the same with fewer people.

Today, there's a real concern in every industry sector about lack of resources. It's true in every region in Canada. AI is really seen by companies as a way to achieve more with the resources they have. We're not talking about very sophisticated AI.

Something I want to mention to this committee is that when everybody talks about AI, they always have in mind robotics on one side and large language models or agentic AI on the other. However, AI also includes very simple things like machine learning and operations research, and 90% of the projects we funded at Scale AI were about these technologies.

Actually, generative AI is now being applied to specific content management or marketing management issues, but most of the projects use traditional, I would say, AI technology—the kind Yoshua Bengio, Geoffrey Hinton and Richard Sutton developed 30 years ago. These technologies are already providing a lot of productivity gains.

Even when we think about regulating AI, obviously we have to look at different types of AI and different approaches, depending on—

Karim Bardeesy Liberal Taiaiako'n—Parkdale—High Park, ON

Further to that, could you provide some examples of businesses in your ecosystem to illustrate what you're describing?

5:05 p.m.

Chief Executive Officer, Scale AI

Julien Billot

We have helped businesses of all sizes and from a wide range of sectors, since our activities cover various fields. I'll give you a few concrete examples.

During the winter, so this example is still relevant, all aircraft must be de-iced. Aeromag is a global leader in aircraft de-icing. This company has used artificial intelligence to optimize the amount of glycol used to de-ice planes. It doesn't seem like much, but it has a dual effect: first, it reduces costs and streamlines expenditures, and second, it protects the environment, because unused glycol doesn't pollute the environment. So that's one concrete example.

We have also developed projects for companies like the Sept‑Îles railway. Recently, there was an article in La Presse about a project aimed at optimizing the rail transport of iron ore from the mines in northern Quebec and Labrador. This helps optimize the efficiency of the entire supply chain of iron ore from Newfoundland and Labrador and Quebec that passes through the port of Sept‑Îles.

We also helped Pratt & Whitney, a very large company in Quebec and Ontario, optimize the maintenance of its aircraft engines and ensure that aftermarket service and spare parts are always available at the right time for its customers around the world.

We also helped companies like Visual Defence, which works with the City of Ottawa and the Municipality of York to optimize the repair of potholes, which is another timely topic. Artificial intelligence is helping municipalities better predict where problems will occur and optimize pothole repairs.

We've funded 200 projects. I could mention a number of them, but those are a few examples of companies, large and small, that have benefited from artificial intelligence.

Karim Bardeesy Liberal Taiaiako'n—Parkdale—High Park, ON

What kinds of skills are expected on the other end of these innovations that are being deployed?

5:05 p.m.

Chief Executive Officer, Scale AI

Julien Billot

For every project we fund, we also fund training around it, because making an AI solution is one thing, but having people use the solution is another. Usage is really about training people and changing the management approach. That's what we try to fund at the same time as the development of the solution itself.

Karim Bardeesy Liberal Taiaiako'n—Parkdale—High Park, ON

Professor Scassa, thank you for that very extensive and rigorous explanation of the journey of AI regulation, more recently, in Canada.

We sometimes hear from either hyperscalers or multinationals that AI regulation itself can scare off investment.

I want to know if you have a view about that claim.

5:10 p.m.

Canada Research Chair in Information Law and Policy, Faculty of Law, Common Law Section, University of Ottawa, As an Individual

Dr. Teresa Scassa

[Technical difficulty—Editor] for innovators, for example, who understand more clearly what's expected of them and what routes to follow. You know, there is this tension. There are lots of people who like to say, “Don't get in the way of innovation”, but, frankly, there are some innovations we really need to get in the way of. I think we're already experiencing the harms from some of those.

There are other ones where regulation may simply make it easier for the innovators to know where and how to act.

Karim Bardeesy Liberal Taiaiako'n—Parkdale—High Park, ON

Professor Scassa, you made the case for a broad-based—

The Chair Liberal Ben Carr

Mr. Bardeesy, you're 35 seconds over already, so I'm going to have to cut you off. I apologize.

Mr. Ste‑Marie, you have the floor for six minutes.

Gabriel Ste-Marie Bloc Joliette—Manawan, QC

Thank you, Mr. Chair.

I'd like to welcome the three witnesses and thank them for being with us.

Mr. Billot, your company doesn't work on general artificial intelligence, such as chatbots, but rather supports businesses that want to integrate artificial intelligence into their activities to improve productivity. You do follow-ups. You said that, to date, you've supported 200 companies.

Is the process lengthy for each company?

5:10 p.m.

Chief Executive Officer, Scale AI

Julien Billot

The process is lengthy because it begins with a business assessment for a company. Artificial intelligence is ultimately just a tool, so before integrating artificial intelligence into its operations, a company really needs to think about its business processes based on two elements: productivity gains and ease of implementation. It must then properly select the processes to be transformed, that is, the ones with the greatest potential and that are the easiest to transform.

So it begins with a business assessment. That's why I'm often asked which companies use this tool. In fact, it has a lot to do with the management and leadership of these companies, who may or may not have a clear understanding of their business processes and a willingness to improve them using artificial intelligence.

In general, it's a process that takes several months, if not several years.

Gabriel Ste-Marie Bloc Joliette—Manawan, QC

A few months ago, we conducted a study on productivity to find out how to increase productivity gains for local businesses. I think you're providing a very important solution. It's a lengthy process. When you invest in a business, it mobilizes resources, including financial resources. Would you say you've hit your cruising speed?

Would you have the capacity to do more and support more businesses? If so, what's limiting you right now?

5:10 p.m.

Chief Executive Officer, Scale AI

Julien Billot

We have been around for seven years, so I think we have hit our cruising speed.

We could do a lot more. I would say that we have barely scratched the surface of what we can do to help businesses. In 200 projects, we have helped a few hundred businesses out of the tens of thousands that exist in Canada.

To do more, it's simple: It takes money. It's as basic as that. We could do at least 10 times more projects in the next five years. It's a purely financial challenge. It's also a challenge for us to work with the provinces even more than we already do. Historically, we have had a great working relationship with the Province of Quebec. We can do a lot more with provinces that are starting to take an interest in artificial intelligence. For us, these are key ingredients to accelerate development.

Gabriel Ste-Marie Bloc Joliette—Manawan, QC

There's much higher potential in terms of what can be done, then. You're limited by money, as you said.

You mention great working relationships with the provinces. What about working with Ottawa, with the federal government? I'd like to take this opportunity to talk about the new artificial intelligence strategy. Looking ahead, do you know of any supports that could help you, or do you expect to receive support?

5:10 p.m.

Chief Executive Officer, Scale AI

Julien Billot

We obviously have a very good working relationship with the federal government, which created our organization in 2019 and has refinanced it since then. We had extremely in-depth discussions on the new artificial intelligence strategy.

We have done our part, but we don't know the outcome yet, so we're obviously waiting. We have hope, given the value we bring, the importance of productivity and economic sovereignty for Canada, and the need to build a local artificial intelligence industry. We obviously hope that the federal government will continue to provide support as part of this new strategy.

Gabriel Ste-Marie Bloc Joliette—Manawan, QC

Does the fact that there's a waiting period temporarily jeopardize the number of businesses, the number of projects you can support and the stability of jobs in your company, among other things? What's the impact of a delay in that response?

5:15 p.m.

Chief Executive Officer, Scale AI

Julien Billot

Delay is always a problem for an industry. When we fund projects, people have to be able to deliver them. To deliver them, they need labour, and they need to develop solutions, so people need to have visibility. It's extremely important to give economic players visibility so that they can launch investment plans over a number of years to recruit and train people and invest in technological solutions.

What's important for the industry at this stage of AI adoption is visibility. In recent years, for a whole bunch of reasons, things were done year after year. It was very good and very well managed, but now there has to be visibility. That's why this artificial intelligence strategy, which should provide visibility over several years, is so important for our ecosystem and our industry.

Gabriel Ste-Marie Bloc Joliette—Manawan, QC

Thank you very much, Mr. Billot.

Ms. Scassa, thank you very much for your opening remarks and the answers you are giving us.

Regarding the federal AI strategy, you—I think rightly—criticized the government's consultations: they were too short, the deadlines were too tight, and there was a lack of diversity in the advisory committee. People could answer the online survey multiple times, and bots could have answered it.

Do you think the government should start afresh and hold real consultations with the public about the future of artificial intelligence?

5:15 p.m.

Canada Research Chair in Information Law and Policy, Faculty of Law, Common Law Section, University of Ottawa, As an Individual

Dr. Teresa Scassa

I think it's right to make sure that the strategy moves forward. As we're hearing, there's an element of urgency here. That said, I believe that conversations with Canadians have to continue in a number of forums and in a number of ways. It isn't enough for there to have been one consultation in the fall of 2025. It's important to continue to consult, educate and engage the public. That's important.

Gabriel Ste-Marie Bloc Joliette—Manawan, QC

Thank you very much, Ms. Scassa. I'll have more questions for you in the next round.

The Chair Liberal Ben Carr

Okay.

Thank you, Mr. Ste‑Marie. Unfortunately, again, we don't have much time left.

We're going to go to Ms. Borrelli, followed by Mr. Bains and then followed, again, by a minute for yourself, Monsieur Ste-Marie.

We need to continue the work on our report.

With that, Ms. Borrelli, the floor is yours for five minutes.

5:15 p.m.

Conservative

Kathy Borrelli Conservative Windsor—Tecumseh—Lakeshore, ON

Mr. Billot, many AI systems require massive power and data storage.

Is it true that most of the computing and data storage that Canadian companies use today is owned by large multinational firms, which are primarily based in the U.S.?

5:15 p.m.

Chief Executive Officer, Scale AI

Julien Billot

To be clear, not all AI systems require a lot of energy. That was my initial point. We mix AI up with large language models.

Large language models, obviously, require a massive amount of energy and computing power, but that's not at all the AI we are working on. AI for industry doesn't typically require a lot of cloud services, energy or computing power because it's very basic AI. AI for industry, which is critical for productivity gains, is not raising any issues in terms of power or water resources.

Foreign players can absolutely be hosted in Canada with very limited computing power. When we talk about large language models—aside from Cohere—most of the players operating here are hyperscalers. They are the ones requiring a massive amount of energy.

That's why we always argue, from Scale AI's industry perspective, that it's nice to build data centres, but it's like building highways and having Korean cars on those highways. If you build data centres, you need Canadian-based applications to run on them so that this energy, which belongs to Canadians, at least serves the creation of Canadian IP and Canadian companies.

5:15 p.m.

Conservative

Kathy Borrelli Conservative Windsor—Tecumseh—Lakeshore, ON

Are fewer foreign companies involved in storage and the power we need to run AI?

5:15 p.m.

Chief Executive Officer, Scale AI

Julien Billot

Well, they aren't yet. That will—

5:15 p.m.

Conservative

Kathy Borrelli Conservative Windsor—Tecumseh—Lakeshore, ON

All right.

In the meantime, how do we ensure that the data we're storing is secure?

5:15 p.m.

Chief Executive Officer, Scale AI

Julien Billot

There is a difference between data security and storage by foreign companies. Typically, in the European Union, there are agreements that, even if the data is in data centres owned by Microsoft or others, it remains in the European Union and has some protection. There are ways to control this.

From our perspective, it's impossible to replace these hyperscalers. It's impossible for any Canadian industry to avoid using, at some point, Microsoft, Google or Amazon. The question is this: Do we at least force data to be hosted in Canada when it belongs to Canadian companies? That's feasible. My understanding is that this is what Europe is doing. You cannot be absolutely sovereign by yourself. We cannot hope to control everything in Canada.

What we argue is that we should at least try to control what we can. Even if we control 20% to 30% of the full value chain, it's much better than 0%. Let's try to do that. Legislation can help, investment in the right companies can help and investment in infrastructure can help. It's a sum of actions that will help Canada be more sovereign than it is today regarding AI control.

5:20 p.m.

Conservative

Kathy Borrelli Conservative Windsor—Tecumseh—Lakeshore, ON

Thank you.

Ms. Scassa, Canada has produced many foundational breakthroughs in artificial intelligence research.

When those discoveries go to the market, who typically ends up owning the patents?

5:20 p.m.

Canada Research Chair in Information Law and Policy, Faculty of Law, Common Law Section, University of Ottawa, As an Individual

Dr. Teresa Scassa

One of the challenges that Canada has faced is this: Even if a Canadian company begins by owning a patent, that patent can be sold or transferred. Companies can be sold. We often see our start-ups snapped up by larger, U.S.-based companies.

The patent side is not really my forte, but there are a lot of issues around the ownership and management of patents in the Canadian context.

5:20 p.m.

Conservative

Kathy Borrelli Conservative Windsor—Tecumseh—Lakeshore, ON

We see the results of our funded research being sold off to other countries. Often, we see a brain drain. People who have been trained in Canada take jobs in other countries.

Is there something the government can do to keep companies here, and to keep the educated students who have graduated here, rather than have them go elsewhere?

5:20 p.m.

Canada Research Chair in Information Law and Policy, Faculty of Law, Common Law Section, University of Ottawa, As an Individual

Dr. Teresa Scassa

Yes, and I think this brain drain problem is one that has been going on for a long time in Canada. I think that it's lately become more attractive for Canadians to stay in Canada, so that might help to some extent, but this is a challenging thing.

Also, it's going to become more challenging if the economy gets worse, if young people have trouble finding jobs and if they have trouble finding jobs that pay well enough.

These are economic challenges that we're going to have to face. I think they're complex problems.

The Chair Liberal Ben Carr

I'm sorry, Ms. Borrelli. That's all the time.

Mr. Bains, the floor is yours for five minutes.

Parm Bains Liberal Richmond East—Steveston, BC

Thank you, Mr. Chair.

Thank you again to our witnesses for joining us.

I'm going to go to Dr. Elder.

I asked Dr. O'Neil from Simon Fraser University in British Columbia a similar question.

You mentioned it at the beginning of your intervention. What happens if we don't make these investments into the AI space and if we look at other international markets across the world? Can you expand on that a bit?

5:20 p.m.

Professor and Research Chair, Human and Computer Vision, York University, Director, Centre for AI and Society, As an Individual

Professor James Elder

Absolutely, I'm happy to do that.

I think it's like any disruptive technology, but on steroids. It's a really massive disruption this time, and the risks are great for our economy if we don't try to capitalize on the opportunities. As has been mentioned, there are a lot of opportunities that go beyond the development of these large language models in various dimensions of AI. It's not just language, of course, but computer vision or any kind of machine learning applied to data.

There's a time frame here that's important. There are a lot of rapid opportunities available and we don't have time to really sit around and think about it too long. We have to support innovation in this environment. What I think that means is really providing opportunities for small and medium-sized businesses to innovate. That should be through our tax system, through financial incentives, through infrastructure, as has been mentioned, and through better engagement among university research, industry innovators and domain applications.

That can be done at a federal level, where we really try to create an ecosystem that makes it easy for entrepreneurs to want to stay in Canada because all their business relationships are here—not all of them, but a lot of the important ones—and they have a really good pathway for human capital.

Parm Bains Liberal Richmond East—Steveston, BC

How behind are we? How can we catch up?

I'd like you to also comment on specific use cases demonstrating strong Canadian leadership or export potential.

5:25 p.m.

Professor and Research Chair, Human and Computer Vision, York University, Director, Centre for AI and Society, As an Individual

Professor James Elder

Sure. I think we're in a good place with respect to training and university research. We still have great researchers across Canada.

My sense is that we're behind in providing the ecosystem needed for start-ups. We have a more risk-averse capital market, I think, than, for example, the U.S.

In terms of opportunities, there are just so many, as was mentioned in previous testimony: in agriculture, in natural resources and in smart cities.

Also, health care is a big one. I think there's a lot happening. For example, in health care, we have a lot of great work in medical imaging analysis and in robotics for health care and also for senior care, which is going to be a growing market over the next 10 to 20 years.

Also, in smart cities, of course, there are transformations, particularly around mobility, where we have a lead—historically, we're a leader—in telecommunications, as well as a lot of opportunities for optimization in telecom. I think what's really needed is sustained support for that ecosystem that lowers the friction between universities and this innovation in the private sector.

Parm Bains Liberal Richmond East—Steveston, BC

Thank you.

I'm going to go to Mr. Billot.

Can you talk a bit about the imbalance between the scale at which we are developing AI versus the risk side? Are we addressing both of those simultaneously?

5:25 p.m.

Chief Executive Officer, Scale AI

Julien Billot

I signed one of the petitions two years ago regarding the need for regulation. I definitely think, for industry, that you need to regulate, because you need trust in the system. If you are a stakeholder of any industry, a board member or a director of a company, you definitely don't want your company to be pinpointed because of issues with your AI solutions. There's a need for a regulatory framework for industry or at least a trusted environment in which to invest and protect stakeholders.

Right now, AI in industry, as I said, is causing far fewer regulatory issues than the large language models, the deepfake models or anything else that is more for the mass market. AI for industry is very limited to counting eggs, improving maintenance and creating digital twins, which are very operational things that, in fairness, don't raise any issue of regulation. When we look at projects, we always look at social impacts, green economy impacts, etc., but they don't cause any issues in terms of privacy or democracy. We're not really in that space.

Talking about imbalance, for sure, on one side there is the race to develop large language models and their implications, and agentic AI is raising a lot of questions. That's not really what we see in industry right now, which is a much lower-level AI that is not at all raising the same questions of regulation right now.

The Chair Liberal Ben Carr

Thank you very much, Mr. Bains.

Mr. Ste‑Marie, you have the floor for one minute.

Gabriel Ste-Marie Bloc Joliette—Manawan, QC

Thank you, Mr. Chair.

Ms. Scassa, I have two quick questions for you.

First, do you think the issues surrounding artificial intelligence are important enough to create a special committee to maintain an ongoing dialogue?

Second, Quebec has ended its artificial intelligence pilot project, which aimed to answer Canadians' most common questions, because of privacy issues. In Ottawa, that seems to be the model for the financial framework. Is it prudent to move in that direction right now?

5:25 p.m.

Canada Research Chair in Information Law and Policy, Faculty of Law, Common Law Section, University of Ottawa, As an Individual

Dr. Teresa Scassa

I would say yes to the first question. I think a special committee would be very helpful.

I missed the end of the second question.

Gabriel Ste-Marie Bloc Joliette—Manawan, QC

Quebec had set up an artificial intelligence program to answer Canadians' questions, but it backtracked because of privacy issues. Ottawa seems to want this model to move forward. How prudent is that?

5:30 p.m.

Canada Research Chair in Information Law and Policy, Faculty of Law, Common Law Section, University of Ottawa, As an Individual

Dr. Teresa Scassa

It's important to go about it carefully, and to know how to organize and govern it in order to protect privacy.

Gabriel Ste-Marie Bloc Joliette—Manawan, QC

Thank you very much, Ms. Scassa.

The Chair Liberal Ben Carr

Thank you, Mr. Ste‑Marie.

Thank you to the witnesses for being here today.

Colleagues, there is one quick thing to go over.

Witnesses, feel free to go, unless you want to stick around and listen to the guts of minutiae and technicalities at the industry committee. You're certainly welcome to stay.

One day, AI may do this for us.

We were going to look at the defence industrial strategy, but it sounds like there is a point of discussion that Madam Dancho is going to raise with colleagues about the potential for the inclusion of a couple more recommendations. Given that we're already at time today, what I would suggest is that, for Thursday's meeting—as we were aiming to do today, but we started late due to the technical difficulties—we simply cut the last round of questioning. That will save us 15 minutes or so. We might need a little bit more. We'll ask for a couple of additional resources for five or 10 minutes. That will give us the opportunity to finalize everything.

Does that work for everyone? If you have something right at one o'clock on Thursday, you may want to chat with your teams to see if you can free up five or 10 minutes, but we should be okay.

That was a good job, everyone. It was a very interesting start to this study.

The meeting is adjourned.