Evidence of meeting #27 for Industry and Technology in the 45th Parliament, 1st session. (The original version is on Parliament’s site, as are the minutes.) The winning word was companies.

A recording is available from Parliament.

On the agenda

Members speaking

Before the committee

Tessari L'Allié  Founder and Executive Director, AI Governance and Safety Canada
David Duvenaud  Associate Professor of Computer Science, As an Individual
O'Neil  Vice-President, Research and Innovation, Simon Fraser University, As an Individual
James Elder  Professor and Research Chair, Human and Computer Vision, York University, Director, Centre for AI and Society, As an Individual
Teresa Scassa  Canada Research Chair in Information Law and Policy, Faculty of Law, Common Law Section, University of Ottawa, As an Individual
Billot  Chief Executive Officer, Scale AI

4:15 p.m.

Conservative

Michael Guglielmin Conservative Vaughan—Woodbridge, ON

Mr. Duvenaud, going off that, when they're talking internally in the AI industry, what are they saying? What are people saying behind the scenes about the impact that this is going to have on jobs in the broader economy?

4:15 p.m.

Associate Professor of Computer Science, As an Individual

Professor David Duvenaud

It's not even necessarily behind the scenes. The thing that kind of radicalized me was talking to the engineers and the people at the company saying, “What are you going to be doing once we succeed and don't have jobs?” They said, “Oh, I'll just be clicking 'accept suggestion' all day,” or, “I'll be taking a much-needed vacation.” No one has really thought this through.

Again, as I said, the lab leaders are saying, loud and clear, that there's no plan here. This is going to undermine our economy and democracy, and no one has a good answer. They talk about how we need to have a societal conversation about how to replace it, but that's just filler for saying, “I don't have a plan and no one's come up with a plausible-sounding one yet.”

Twitter is where most of the interesting conversations happen, with lab employees giving their takes. It's not a secret insider opinion. People are being pretty open about what they think.

4:15 p.m.

Conservative

Michael Guglielmin Conservative Vaughan—Woodbridge, ON

Mr. Tessari L'Allié, you referenced, in your opening remarks, the Mexican government system that was attacked and how it wasn't just that AI was told to go and do something. It created the plan, developed the sophistication and was then able to breach over 100-plus million different pieces of data and information.

How would you say agentic AI changes the scale and speed of cyber-attacks compared with traditional hacking tools?

4:15 p.m.

Founder and Executive Director, AI Governance and Safety Canada

Wyatt Tessari L'Allié

It democratizes the ability to run cyber-operations.

In November, there was another example that Anthropic flagged. Basically, Chinese state actors used an AI system to help there, again.

In the past, AI-powered cybercrimes were things like this: The AI would write a section of code, and the hacker would then copy and paste that code into their action. They would do that step by step. Now they're saying, “AI, here's the goal. Attack this target and figure it out.” The AI agent is able to build a plan, try multiple strategies, troubleshoot if it doesn't work and keep going.

4:20 p.m.

Conservative

Michael Guglielmin Conservative Vaughan—Woodbridge, ON

One thing I also found alarming, in the same space of what we're talking about now, is this: The AI was able to change itself to avoid detection and deletion.

How serious are these incidents today, and what does that tell us about the reliability of current safety mechanisms for artificial intelligence?

4:20 p.m.

Founder and Executive Director, AI Governance and Safety Canada

Wyatt Tessari L'Allié

There's an example with Palisade Research. They did an experiment with a robot dog. They gave it the task of patrolling an area. They had a button on the side of the wall to turn it off. They realized that pushing the button didn't often work. The agent, in running the robot, had realized that if somebody pushed that button, it wasn't able to achieve its goal of patrolling the area. It had rewritten its own code so as not to listen to the instruction.

I'm sorry. I missed the second question.

4:20 p.m.

Conservative

Michael Guglielmin Conservative Vaughan—Woodbridge, ON

No, that's good.

Mr. Duvenaud, I'm going to you for that.

Since you were working directly for Anthropic on safety, would you say that safety mechanisms in AI are where they need to be in the current moment?

4:20 p.m.

Associate Professor of Computer Science, As an Individual

Professor David Duvenaud

The short answer is, yes, they are for now, in the sense that we don't think current models are capable of doing the supergalaxy-brained, long-term biding of time to do some big takeover. However, that is the plan—making them that smart. I think we're probably within six or 18 months of models that can do this, though I'm not saying that it's going to happen in that time frame.

This was a big crisis of faith within the company. They had this responsible scaling policy, RSP, which I helped work on a bit. The idea was that they would never ship a model they couldn't prove was safe. However, they realized that they had backed themselves into a corner. They couldn't prove the models were dangerous, but they couldn't prove they were safe. If they unilaterally stopped, they would blow up the company, to no one's benefit. They just changed that RSP two weeks ago to remove that provision.

The point is that they know they're entering a regime where they can't prove the models are safe anymore, but they also don't have a great plan for dealing with that. They wish everyone could slow down, but that requires coordinated action.

4:20 p.m.

Conservative

Michael Guglielmin Conservative Vaughan—Woodbridge, ON

Thank you.

The Chair Liberal Ben Carr

Mr. Bains, the floor will be yours for five minutes. Then I'm going to give one minute to Monsieur Ste-Marie to follow up.

Go ahead, Mr. Bains.

Parm Bains Liberal Richmond East—Steveston, BC

Thank you, Mr. Chair.

Thank you to the witnesses for joining us today.

I'm going to take my first question to British Columbia.

Dr. O'Neil, you are a leader and key figure in national supercomputing initiatives. I believe your testimony here is extremely valuable today.

With respect to Canada's ability to commercialize the work of research institutions, is there a role for artificial intelligence and supercomputers like Cedar to support the commercialization of academic research?

4:20 p.m.

Vice-President, Research and Innovation, Simon Fraser University, As an Individual

Dugan O'Neil

Yes, there is. Right now in Canada, we are an economy dominated by small and medium-sized enterprises. Many of them do not have a large AI division or the tools and the infrastructure needed to assess the technology, to develop their own adaptations of the technologies and to move those forward to be competitive in the world. If we provide public supercomputing access to those small and medium-sized enterprises, we give them platforms on which they can develop their own solutions that allow us to be more independent of some of the international forces at play, even if right now it would seem that Canadian industry is quite far behind the anthropics of the world in terms of developing our own tools.

Parm Bains Liberal Richmond East—Steveston, BC

I'll follow up on something you mentioned there with respect to international markets. How do we compare more specifically, and can this supercomputer technology support ongoing R and D at our Canadian institutions? There's another part to that. What if we don't continue to invest and remain leaders in the AI space?

4:20 p.m.

Vice-President, Research and Innovation, Simon Fraser University, As an Individual

Dugan O'Neil

Currently, we are not an international supercomputing leader. We're the only G7 country that doesn't have a “top 30 in the world” public supercomputer in our jurisdiction.

I think we need to simultaneously increase our public supercomputing capacity in Canada. We need to do that, while also creating platforms for Canadian companies and Canadian individuals to make use of that capacity. If not, then we will be permanently beholden to the way technology is being developed in other jurisdictions. We'll effectively have no choice, if we want to be competitive, but to turn over our data to those other jurisdictions to incorporate it into their products, and then sell it back to us.

Parm Bains Liberal Richmond East—Steveston, BC

Building on that again, can you comment on which sectors—health care and agriculture, for example—show the most promise right now with respect to Canadian AI firms?

4:25 p.m.

Vice-President, Research and Innovation, Simon Fraser University, As an Individual

Dugan O'Neil

I think there are a number of different areas in which we have competitive small companies. Sometimes when people talk about competition, they start from a feeling of being defeated, because we can't compete with the budget of Google or Microsoft or OpenAI. There are many small companies that are doing applications of AI in agriculture, in health care, in mining, in lots of areas that are very critical to the Canadian economy. Right now, there's just a limit on how much those companies can grow and scale those applications.

One thing I encourage people to do is buy Canadian, buy from those companies, be the first customer to allow those companies to grow their technology and their impact in Canada, and then sell to the rest of the world.

Parm Bains Liberal Richmond East—Steveston, BC

What should the federal government prioritize to support responsible AI adoption in Canada? Maybe you can mention what safeguards are needed to ensure responsible AI development and use. There may be a longer answer on this one needed.

4:25 p.m.

Vice-President, Research and Innovation, Simon Fraser University, As an Individual

Dugan O'Neil

I agree with my colleagues that a cross-sector approach is needed. When you have a technology that can disrupt agriculture, health care, mining and other natural resource development all at the same time, it's difficult to have a conversation about how to regulate that when traditionally we regulate more in silos. We do need regulatory thought. We need a societal conversation. And we have to somehow do that while remaining competitive and allowing the use of AI to grow in Canada and not shrink, while we're working on the regulations.

The Chair Liberal Ben Carr

Thank you very much, Mr. Bains.

I apologize, Mr. O'Neil, but that's all the time we have for that line of questioning.

Now it's over to Mr. Ste‑Marie for one minute.

Gabriel Ste-Marie Bloc Joliette—Manawan, QC

Mr. Tessari L'Allié, this morning on the radio, Yoshua Bengio said it was important for Canada to partner with other countries on the issue of AI, given the concentration of power in the U.S. and China. Mr. Duvenaud referred to this in his work, as have you. At the ethics committee, you suggested a treaty between countries to better regulate AI. Could you talk about that briefly?

4:25 p.m.

Founder and Executive Director, AI Governance and Safety Canada

Wyatt Tessari L'Allié

Yes, of course.

In that scenario, the best contribution Canada could make is in the international arena. Canada could take the lead globally and launch those talks.

When it comes to AI safety, no country alone can protect itself against systems that are more intelligent than humans. That means we have to coordinate our efforts, so even the U.S. and China will need such a treaty. That's really the first thing Canada needs to achieve if it wants to influence the trajectory of AI.

Gabriel Ste-Marie Bloc Joliette—Manawan, QC

Thank you very much.

The Chair Liberal Ben Carr

Thank you to our witnesses this afternoon.

Honourable members, I know we like to hobnob a bit after the quick break, but we'll have to get going again right away. You can have five minutes at most before we resume the meeting.

Thank you very much to our witnesses for being here today. Thank you for your patience at the outset.

There is certainly a lot for us to reflect on.

I wish you a good rest of your day.

Thank you.

We'll suspend for a few moments.

The Chair Liberal Ben Carr

I call the meeting back to order.

We have three new witnesses to welcome to the committee. One is joining us online and two are here in person.

Joining as an individual, we have James Elder, professor and research chair in human and computer vision at York University, and co-director of the Centre for AI & Society. We have Teresa Scassa, Canada research chair in information law and policy, from the common law section of the Faculty of Law at the University of Ottawa. From Scale AI, we have Julien Billot, chief executive officer.

It wasn't exactly an uplifting first hour of testimony, so I'll be curious to see where we go in the second hour.

Witnesses, thank you very much for taking the time to join us. As a quick reminder, if you're in the room and your translation earpiece is not in use, to protect the health and well-being of our interpreters, please make sure that it's placed on the sticker in front of you.

With that, I'm going to give the floor to you first, Mr. Elder. You'll have up to five minutes for your opening remarks.

Professor James Elder Professor and Research Chair, Human and Computer Vision, York University, Director, Centre for AI and Society, As an Individual

Thank you so much, Mr. Carr.

It's a privilege to appear before you today.

My expertise is in computational neuroscience, computer vision, AI and robotics. I've been a professor at York for about 30 years. I've led many collaborative research projects with Canadian industry and public sector partners. As mentioned, I'm now serving as director of our Centre for AI and Society, where we bring together around 74 faculty members across different faculties in the university engaged in all aspects of AI research.

I want to break down my brief comments into three categories: opportunities, risks and regulation.

First, I think there are enormous opportunities for Canadian society and industry. As you know, Canadian researchers have been at the forefront of the research on core principles that underlie current AI technologies. In the last few years, we've seen a lot of attention shift to the large language models developed by hyper scalers like OpenAI. I think now we're in a new phase of this AI revolution where we'll see more and more smaller and medium-sized businesses to large businesses reaping benefits from these very large-scale AI models. I think there are very important opportunities for Canada in this regard in many different application areas. I mentioned a few in my opening remarks, including construction, robotics for health care and senior care, smart cities, urban mobility and business process automation.

There are many ways the Government of Canada can help Canadians to seize these opportunities. Some were mentioned in the previous session, including leading by example. The Government of Canada can be an early adopter of Canadian AI technologies to improve business processes. We need to support post-secondary research and training particularly directed toward the application and integration of AI into society. We could talk about the details of how to do that. We need to continue to catalyze collaborative research in applied AI. By “collaborative” I mean pan-Canadian and bringing together industrial sectors with domain experts, government agencies and university researchers. I applaud the initiatives of the government in dual-use research, research into dual-use technologies, but we don't want to neglect AI technologies that have purely civilian applications. Those are some opportunities.

In terms of risks, there are many, as you heard in the previous session, but one I want to emphasize is the risk of missing out. This is a disruptive technology. If Canada tried to avoid it, then we would miss economic opportunities, and that would have downstream impacts on our quality of life. There are going to be huge shifts in employment, both between labour markets and within our job descriptions. Each of us is going to be challenged to adapt our skill set and workflows. I think there are really big risks in education. There are a lot of things we don't know. We need to support research on cognitive development, especially in our young people. We just don't know. We know there are effects of electronic technologies in general on education. We don't know exactly what the effects of outsourcing core intellectual capabilities to AI tools are on brain development, on things like math, logic, prose generation and so forth. We really need to support research in those areas. There are risks in data security, of course. We need data sovereignty. There are, of course, political risks, particularly with respect to AI chatbots and AI bots online and deepfakes. I think there are things we can do to address those challenges as a society, including investing in research on these risks.

I'll try to wrap up very quickly on regulation. I'm not a policy or legal expert—I'm glad to see there are some of those here in this session—but I do think, from my point of view, we can't avoid the details.

We need to look at specific risks and try to mitigate those risks, as we do with any technology. Mitigating political risk will mean clear legislation around the watermarking of AI content to distinguish real from fake content. Above all, we need to protect data sovereignty. We need to have the compute and secure data storage resources in Canada to make sure that Canadian data and IP stay within Canada.

Thank you.