Evidence of meeting #27 for Industry and Technology in the 45th Parliament, 1st session. (The original version is on Parliament’s site, as are the minutes.)

A recording is available from Parliament.

Before the committee

Tessari L'Allié  Founder and Executive Director, AI Governance and Safety Canada
David Duvenaud  Associate Professor of Computer Science, As an Individual
O'Neil  Vice-President, Research and Innovation, Simon Fraser University, As an Individual
James Elder  Professor and Research Chair, Human and Computer Vision, York University, Director, Centre for AI and Society, As an Individual
Teresa Scassa  Canada Research Chair in Information Law and Policy, Faculty of Law, Common Law Section, University of Ottawa, As an Individual
Julien Billot  Chief Executive Officer, Scale AI

The Chair Liberal Ben Carr

Thank you very much, Mr. Elder.

Ms. Scassa, we'll go to you next. The floor is yours for up to five minutes.

Dr. Teresa Scassa Canada Research Chair in Information Law and Policy, Faculty of Law, Common Law Section, University of Ottawa, As an Individual

Thank you, Mr. Chair.

I'm a professor of law at the University of Ottawa, where I hold the Canada research chair in information law and policy. I work in the areas of privacy law and AI governance.

As I'm sure you're all aware, Canada's attempt to regulate AI technologies through a cross-sectoral law, the proposed artificial intelligence and data act, failed with Bill C-27 in January 2025.

This bill would have created a set of ex ante measures for different actors within the AI value chain. These were only for high-impact systems and would have required risk identification and mitigation, documentation, some public-facing transparency and some data governance. The bill provided for limited and predominantly light-touch oversight.

The bill was regarded as a broad, cross-sectoral AI statute, but it had important limitations. Although high-impact systems were initially undefined, proposed amendments by the minister sketched out a series of high-impact categories mainly linked to human-oriented use, for example, the use of AI in employment, automated decision-making, the use of biometric data and so on. This is so, even though systems used in industrial or manufacturing contexts can bring with them serious potential risks as well. Of course, new categories of high-impact AI could have been added to the list by regulation over time.

The application of the AIDA was also limited to systems designed for use in interprovincial or international trade and commerce. It would not have applied to the federal public service. It did not apply to the defence department or the security establishment, or to those who supplied AI systems to them.

The signals now seem clear that AIDA will not be resurrected. There's a tendency to assume that because the bill failed, there's no AI regulation in Canada. A recent KPMG survey indicated that 92% of Canadians believe Canada has no AI regulation. It also revealed a significant trust gap when it came to AI.

In reality, there's a considerable amount of AI regulation in Canada. However, it's more sectoral and context-specific. It's also more fragmented, less obvious and less transparent. It sometimes looks very different from what ordinary Canadians might consider to be regulation, and it often involves soft law. It ranges from law to guidance.

Many existing laws, such as privacy law, already apply in different ways to AI. In addition, policies, guidance and best practices are developed by government departments and agencies, and by regulators, including privacy commissioners, the Competition Bureau, human rights commissions, financial conduct authorities, law societies and many others.

AI governance is also taking place through standards development and, in the private sector, through corporate self-governance, according to guidance from diverse sources. These have the potential to be reinforced by privately managed compliance certification. The government is exploring how standards and certification could be leveraged to assist Canadian businesses in meeting EU AI Act requirements.

Budget bill amendments to the Red Tape Reduction Act will enable the use of regulatory sandboxes across the federal sector. The federal government has launched a beta register of AI in the public sector and is currently consulting on it. Since 2019, we've had the directive on automated decision-making for the federal public service, and this has been joined by a “Guide on the use of Generative AI” in the public sector. The federal government has also created a list of suppliers committed to principles relating to responsible and effective AI use. I offer these as diverse examples of AI regulation, broadly understood, at the federal level.

Other laws are contemplated or will be amended to address specific AI issues. We may see new online harms legislation. A new privacy bill, when it's eventually introduced, will likely contain provisions related to automated decision-making in the private sector.

All of this activity is encouraging, but where are the gaps?

First, many existing measures are voluntary, and oversight and compliance mechanisms are lacking. While guidance is important in early days, as things advance, public confidence will require oversight. There may also be the need in some contexts to make compliance compulsory. If oversight and compliance are left to existing regulators, commissions or agencies, it will be necessary to consider what legislative changes might also be required and whether regulators have adequate resources to fulfill complex expanding mandates.

Second, much of this regulatory activity is difficult to detect unless you follow it closely. This undermines public trust. It's also particularly burdensome for small and medium-sized enterprises. A national coordinating body that ensures coherence, enables greater transparency and promotes federal-provincial harmonization would be valuable. Such a role could also support public trust by serving an ombuds function. There must be ways for Canadians to surface their concerns about AI systems in both public and private sectors.

Third, if approaches are piecemeal and sectoral, then so too will be law reform. It would be useful to map what reforms are needed or contemplated—a clear AI governance strategy. Such a road map was not part of the AI strategy consultation.

Thank you, Mr. Chair, for this opportunity to address this committee. I look forward to any questions.

The Chair Liberal Ben Carr

Thank you very much, Ms. Scassa.

Mr. Billot, the floor is yours for up to five minutes.

Julien Billot Chief Executive Officer, Scale AI

Thank you, Mr. Chair.

My name is Julien Billot, and I'm the CEO of Scale AI, a Montreal-based organization.

At Scale AI, we envision a Canada that is strong and free, where artificial intelligence and high-impact technology fuel sustainable prosperity for years to come. Our mission is really to ignite a new era of growth for Canada, one propelled by empowered industry, collective innovation, visionary champions and strengthened sovereignty, so that Canada shapes a future-ready economy grounded in its own values and assets. By fostering the growth of Canadian champions that build, deploy and retain intellectual property at home, we can ensure that the economic value created by AI remains anchored in Canada.

Through its coinvestment models, Scale AI helps domestic companies scale, attract private capital and compete globally, while ensuring that Canadian innovation benefits Canadian workers, regions and industries first. We act as the engine that connects ideas, industries and investments to build a resilient, globally competitive AI ecosystem.

There is a geopolitical imperative here, which is building Canada's technological and economic sovereignty, because artificial intelligence has become the new front line of global competition. It's no longer a technological experiment; it's a strategic determinant of national power, prosperity and democratic autonomy, defining which countries control innovation, productivity and security, and which retain the freedom to design their own economic and social path. The world's leading economies, the U.S., China and Europe, have already made AI the cornerstone of their industrial and defence strategies. Without control over the technologies shaping tomorrow, even democratic nations risk losing their capacity to decide for themselves.

The imperative rests on two entwined foundations.

First is technological sovereignty. We must secure Canada's independence by mastering the critical capabilities, the data, compute and algorithms, that underpin every modern economy and every democratic institution. Without control of these assets, Canada risks dependence on foreign infrastructures and systems that may not share its values and governance principles. Homegrown AI is now essential for protecting Canada's institutions, privacy and democratic integrity.

On the economic side is capturing the massive value creation that is now shifting toward AI. Over the next decade, artificial intelligence will redefine global GDP pools, productivity and trade competitiveness. The countries that invest early in building sovereign AI capabilities will not only safeguard their independence, but also generate the wealth, jobs and exports that define the next economic era. Failing to act means ceding prosperity and agency to others.

Canada built AI science and inspired the world, but in 2025, sovereignty is no longer measured by research output but by control over the technologies that power institutions and industries, and by the ability to transform them into prosperity and influence. AI-based technologies now determine the resilience and independence of health care systems, the autonomy of defence and cybersecurity, the productivity and resilience of national industry, and the emergence of quantum applications that will define the next technological frontier.

Canada cannot replace dominant foreign AI players overnight, but it must act now to build the foundation of a sovereign AI value chain. Its reliance on foreign infrastructure providers, hardware manufacturers and software providers will not disappear immediately, but with a clear vision and decisive action, Canada can achieve the strategic independence that will allow Canadian AI champions to grow, export, compete and lead. This is not spending; it's investing in securing Canada's future. With world-class AI talent and a strong innovation ecosystem, we must take ownership of our AI identity. Foreign investment can accelerate progress, but vision and control must remain in domestic hands.

Canada has the talent, infrastructure and partnerships to lead, but leadership now depends on the ability to deploy at scale and build a trusted, productive and sovereign AI economy that serves Canada's interests and values.

We have the ability to create a sovereign AI value chain by 2030 if we create, deploy and export Canadian innovation while anchoring its value domestically. We can do that by building a leading industry in applied AI with a champion factory that fuels commercialization and supports Canadian AI champions, by supporting demand with broad AI adoption across public and private sectors, and by securing our infrastructure—the foundation of which is ensuring technological sovereignty and data independence.

We can do it by enabling and expanding across Canada and abroad, by strengthening national co-operation and governance for a Canadian road map and, on the global stage, by building a strong ecosystem with global reach and driving the global conversation.

That's really the vision we want to push at Scale AI. We are here to help.

We are very honoured to be here and we'll be very happy to answer all your questions.

Thank you.

The Chair Liberal Ben Carr

Thank you, Mr. Billot.

Colleagues, we'll enter our first round of questions.

Mr. Falk, the floor will be yours for six minutes.

4:55 p.m.

Ted Falk Conservative Provencher, MB

Thank you to all of our witnesses for your presentations here today. They were very informative.

Mr. Elder, I'd like to begin with you.

When you identified risks, the first one you talked about was missing out. Certainly, from an opportunity perspective, that is a risk. We can either get on board or get out of the way, I suppose.

You also talked a little bit about security and deepfakes and all that. How significant a concern is that from your perspective?

4:55 p.m.

Professor and Research Chair, Human and Computer Vision, York University, Director, Centre for AI and Society, As an Individual

Professor James Elder

I think data security is very significant, because data are the lifeblood of AI. Not only are there security issues from the point of view of political security, data privacy and so forth, which are societal concerns, but there is also intellectual property. It's valuable, so I think we need to pay attention to both of those dimensions of data security.

I don't see any conflict between our economic objectives there and our social objectives. I think they're going in the same direction.

Then, of course, with political risk, I think we all understand that we don't want our political system to be manipulated, especially by foreign actors, and biased by artificial content.

I think these are real risks that we have to balance against that risk of missing out.

4:55 p.m.

Ted Falk Conservative Provencher, MB

Several years ago, you had talked about how “Deep learning models fail to capture the configural nature of human shape perception”. What's your perspective on that today?

4:55 p.m.

Professor and Research Chair, Human and Computer Vision, York University, Director, Centre for AI and Society, As an Individual

Professor James Elder

Thanks for the deep research. I appreciate it.

My lab has been one of the labs globally trying to understand the ways in which AI systems diverge from human cognition. I think that's important if we're going to integrate these systems into decision-making that might involve an integration of humans and machines or just autonomous machine decision-making.

We've seen some very significant divergences in my field of expertise, which is visual perception and cognition. Interestingly, those gaps have diminished a little bit with advances in AI, but they're still significant, so I think we need to support research in that domain and all domains of AI perception and cognition, because otherwise we will not have systems that are consistent with our way of seeing problems, and at least we need to understand those differences.

5 p.m.

Ted Falk Conservative Provencher, MB

Thank you for that.

Dr. Scassa, I'd like to also ask you a few questions.

During the iteration of Bill C-27, you were quite critical of the due diligence that was done prior to that piece of legislation. Can you give us specific examples of where you felt the due diligence was lacking or where the government had not done proper consultations?

5 p.m.

Canada Research Chair in Information Law and Policy, Faculty of Law, Common Law Section, University of Ottawa, As an Individual

Dr. Teresa Scassa

When the AI and data act hit the scene in June 2022, it was unexpected by industry, and it was unexpected by academia and civil society. It just appeared on the scene. There may have been some behind-the-scenes consultations and discussions that took place, but there was no public consultation beforehand.

That is significant, because consultation does a number of things. One is that it engages the public, and on a topic like AI, the more we engage the public, the better. There's a lot of talk about AI literacy and the importance of it, and I think that plays a role in AI literacy, but it also would have helped to explain the government's very particular approach to AI governance, which was explained nine months later in the companion document that came out.

That lack of consultation was a problem in getting the message across and in building literacy and trust, and I think it created a number of misconceptions about the bill that made it very difficult to move forward with it.

5 p.m.

Ted Falk Conservative Provencher, MB

In our previous panel, we heard from Mr. L'Allié that we now have agentic AI, which can self-preserve and self-perpetuate. How do we regulate that?

5 p.m.

Canada Research Chair in Information Law and Policy, Faculty of Law, Common Law Section, University of Ottawa, As an Individual

Dr. Teresa Scassa

That's a really good question.

This is one of the challenges with AI. It is moving so quickly that it's very difficult to keep up with it. It's also very difficult in the early stages to understand what the problems, risks and challenges are going to be.

This is something, perhaps, that we're going to have to get used to. Generative AI also created this significant disruption. The AIDA was introduced in June 2022. Generative AI was publicly launched in November 2022. The bill was not prepared for generative AI. Now we're looking at agentic AI and the challenges it's going to bring.

5 p.m.

Ted Falk Conservative Provencher, MB

It used to be that when you got concerned about where your computer was going, you just pulled the plug. Apparently, that doesn't work anymore. It's not going to work in the future. Do you have any suggestions for how we can address that?

5 p.m.

Canada Research Chair in Information Law and Policy, Faculty of Law, Common Law Section, University of Ottawa, As an Individual

Dr. Teresa Scassa

There are a lot of different approaches to take, and many Canadian companies are moving slowly, carefully and cautiously. There are also other companies that are going full steam ahead, moving fast and breaking things. Those are typically located in other countries and are planning to reap enormous benefits. We're caught in that position as well. Not all agentic AI is going to be bad.

5 p.m.

Ted Falk Conservative Provencher, MB

Thank you.

5 p.m.

The Chair Liberal Ben Carr

Mr. Bardeesy, the floor is yours for up to six minutes, sir.

Karim Bardeesy Liberal Taiaiako'n—Parkdale—High Park, ON

Thank you very much.

In this session, we've been hearing a bit about cutting-edge innovations and also, on the other hand, the potential for displacement. However, there are a lot of spaces in the middle that create opportunities for a wide array of players in the labour market to share in the potential benefits of AI and to have their work augmented.

I want to start with Monsieur Billot.

I'd like some feedback from you about what kinds of companies in the Scale AI universe might be in that key middle segment. They're not necessarily developing cutting-edge innovations, but they're not creating products that are purely about labour displacement.

5:05 p.m.

Chief Executive Officer, Scale AI

Julien Billot

That's obviously the core of what we try to achieve at Scale AI. Almost all the companies we have worked with since inception have labour issues: not enough people, basically, to deliver what they need to deliver. None of the projects we funded involved job displacement. All the projects we funded helped companies do more with the labour they had.

That's because we focus on one thing: improving business processes. In improving business processes, we are really here to augment what companies can do with the resources they have, basically, to do much more with what they have. We never had a funding case where companies asked us to do the same with fewer people.

Today, there's a real concern in every industry sector about a lack of resources. It's true in every region of Canada. AI is really seen by companies as a way to achieve more with the resources they have. We're not talking about very sophisticated AI.

Something I want to mention to this committee is that everybody talking about AI has in mind robotics on one side and large language models or agentic AI on the other. However, AI also includes very simple things like machine learning and operations research, and 90% of the projects we funded at Scale AI were about these technologies.

Generative AI is now being applied to specific content management or marketing issues, but most of the projects use what I would call traditional AI technology, the kind that Yoshua Bengio, Geoffrey Hinton and Richard Sutton invented 30 years ago. These technologies are already providing a lot of productivity gains.

Even when we think about regulating AI, obviously we have to look at different types of AI and different approaches, depending on—

Karim Bardeesy Liberal Taiaiako'n—Parkdale—High Park, ON

Further to that, could you provide some examples of businesses in your ecosystem to illustrate what you're describing?

5:05 p.m.

Chief Executive Officer, Scale AI

Julien Billot

We have helped businesses of all sizes and from a wide range of sectors, since our activities cover various fields. I'll give you a few concrete examples.

During the winter, all aircraft must be de-iced, so this example is still relevant. Aeromag is a global leader in aircraft de-icing. This company has used artificial intelligence to optimize the amount of glycol used to de-ice planes. It doesn't seem like much, but it has a dual effect: first, it reduces costs and streamlines expenditures, and second, it protects the environment, because unused glycol doesn't pollute. So that's one concrete example.

We have also developed projects for companies like the Sept‑Îles railway. Recently, there was an article in La Presse about a project aimed at optimizing the rail transport of iron ore from the mines in northern Quebec and Labrador. This helps optimize the efficiency of the entire supply chain of iron ore from Newfoundland and Labrador and Quebec that passes through the port of Sept‑Îles.

We also helped Pratt & Whitney, a very large company in Quebec and Ontario, optimize the maintenance of its aircraft engines and ensure that aftermarket service and spare parts are always available at the right time for its customers around the world.

We also helped companies like Visual Defence, which works with the City of Ottawa and the Municipality of York to optimize the repair of potholes, which is another timely topic. Artificial intelligence is helping municipalities better predict where problems will occur and optimize pothole repairs.

We've funded 200 projects. I could mention a number of them, but those are a few examples of companies, large and small, that have benefited from artificial intelligence.

Karim Bardeesy Liberal Taiaiako'n—Parkdale—High Park, ON

What kinds of skills are expected on the other end of these innovations that are being deployed?

5:05 p.m.

Chief Executive Officer, Scale AI

Julien Billot

For every project we fund, we also fund training around it, because building an AI solution is one thing, but having people use that solution is another. Usage is really about training people and changing the management approach. That's what we try to fund at the same time as we're funding the development of the solution itself.

Karim Bardeesy Liberal Taiaiako'n—Parkdale—High Park, ON

Professor Scassa, thank you for that very extensive and rigorous explanation of the recent journey of AI regulation in Canada.

We sometimes hear from hyperscalers or multinationals that AI regulation itself can scare off investment.

I want to know if you have a view about that claim.

5:10 p.m.

Canada Research Chair in Information Law and Policy, Faculty of Law, Common Law Section, University of Ottawa, As an Individual

Dr. Teresa Scassa

[Technical difficulty—Editor] for innovators, for example, who understand more clearly what's expected of them and what routes to follow. You know, there is this tension. There are lots of people who like to say, “Don't get in the way of innovation”, but, frankly, there are some innovations we really need to get in the way of. I think we're already experiencing the harms from some of those.

There are other ones where regulation may simply make it easier for the innovators to know where and how to act.