Evidence of meeting #29 for Industry and Technology in the 45th Parliament, 1st session.

A recording is available from Parliament.

Before the committee

Michael Geist, Canada Research Chair in Internet and E-Commerce Law, Faculty of Law, University of Ottawa, As an Individual
Bennett, Professor Emeritus, University of Victoria, As an Individual
Yoshua Bengio, Full Professor, Université de Montréal, As an Individual
Dehghantanha, Professor and Canada Research Chair in Cybersecurity and Threat Intelligence, University of Guelph
Craig, Associate Professor of Law, Osgoode Hall Law School, York University, As an Individual
Cukier, Professor, Entrepreneurship and Strategy, Ted Rogers School of Management, and Academic Director, Diversity Institute, As an Individual

4 p.m.

Liberal

Karim Bardeesy Liberal Taiaiako'n—Parkdale—High Park, ON

I have a question that maybe others can jump in on, but I'll start with you, Mr. Bengio.

There's been reference in a couple of cases to the idea of a sovereign AI stack. I don't think any country is completely sovereign when it comes to its AI stack, so what are your recommendations for Canada about where within the AI stack we should attempt to be the most sovereign?

4 p.m.

Full Professor, Université de Montréal, As an Individual

Yoshua Bengio

We happen to have incredible talent here. For a country of our size, it's unique in the world. I'm talking about talent in AI in particular, in the parts of the AI stack like the algorithms, the engineering, the computer science behind these systems and the design of the models—the frontier models and the LLMs. This is something we can bring to our partners in other countries, who are asking exactly the same questions. They may come with other advantages, and we can work with them.

You're right that, for example, it doesn't make sense for Canada to try to replace the chip level. We should encourage our companies and academics who are working on it, but the chances are very small that we can lead there. We can lead on the algorithms, and that layer is crucial, because with better AI, you can use the AI itself to design the other parts of the stack.

4 p.m.

Liberal

The Chair Liberal Ben Carr

Thanks very much, Mr. Bardeesy.

Mr. Ste‑Marie, you have six minutes.

Gabriel Ste-Marie Bloc Joliette—Manawan, QC

Thank you very much, Mr. Chair.

Welcome to the three witnesses. My thanks to them for being here to join our discussions.

Mr. Bengio, my questions are for you.

I must thank you sincerely for taking the time to meet with us. We know how full your days are. You are the world's most cited researcher in artificial intelligence. You are an A.M. Turing Award winner and one of the world's leading authorities in the field. I appreciate your attendance very much.

My first questions are about the international laws and treaties you mentioned that Canada should be party to. It is often said that Europe has rightly chosen a strict regulatory framework whereas, in the United States, the legislation is geared toward supporting large companies. We know that the major multinationals are doing more development of artificial intelligence in the United States than in Europe. It is being done in China too. But less so here and in Europe, despite the skills and talent we have.

In your view, what kind of legislation should we be putting in place in Canada? You mentioned middle powers. Is the European model a good one? What kind of legislation do we need to regulate artificial intelligence?

4:05 p.m.

Full Professor, Université de Montréal, As an Individual

Yoshua Bengio

I don't think there is any cause-and-effect relationship between Europe's legislation on artificial intelligence and the fact that Europe is somewhat behind. That is a myth. I am quite familiar with the legislation and the code of practice in Europe. Actually, with one exception, the American companies all agreed to what the legislation and the code required.

The real obstacles to innovation in Europe and in Canada are the lack of self-confidence and the aversion to risk on the part of Canadian and European investors.

Regulation isn't the issue. For example, the European code of best practice simply asks companies to do what they were already doing. It asks for reports to be made public, for that not to be optional, and for the regulator to be able to decide to put a stop to certain things if ever anything happens.

To summarize, in terms of the recommendations, what I'm suggesting is very simple. We need transparency in the risk-management process the companies are following for building and deploying their AI systems—that's number one—and that process needs to demonstrate that the systems they're building and will deploy will not create harms that scientists can anticipate. That is all. By the way, this is the template for the regulation in California that passed recently, the one in New York and, of course, the EU AI Act. The Chinese also have similar laws.

It's not true that nothing is going on. As I said, it will be better for Canada from the point of view of managing and maximizing our impact that we do this in coordination with our partners, like the U.K., the EU and other middle powers.

Gabriel Ste-Marie Bloc Joliette—Manawan, QC

Thank you very much. That is very clear and much appreciated.

You were saying that, collectively, we seem to be underestimating the risks. We can see that you are working full time to make us aware of those risks.

In your view, what could the government, or the parliamentarians here, do to make people more aware of the risks? Should we have advertising campaigns? Instead of having a number of committees conducting a number of studies, should the House strike a committee on artificial intelligence exclusively to organize more in-depth consultations? What do we have to do to make the public take the risks more seriously?

4:05 p.m.

Full Professor, Université de Montréal, As an Individual

Yoshua Bengio

I feel that general education is something that must be improved, as you mention.

It might well be a good idea not only to have one committee specializing in artificial intelligence, but to have several other committees as well, because there are a lot of factors at play. We have talked about the workforce, but there is also, for example, the issue of artificial intelligence malfunctions. That's completely different. There is also the impact on children and on psychology, and the matter of disinformation. There are other things too. So if we want to really explore those matters with the right experts and really develop legislation or government action to try to lessen those risks, we have to be able to dig deeper and come up with targeted recommendations.

That said, I feel that the choices we have to make are collective ones. By that I mean that the public must be better informed; it's not just something that happens in Parliament. We have to stimulate democratic discussion and debate all over the country. To move forward, we have to confront the false beliefs that a lot of people have.

Gabriel Ste-Marie Bloc Joliette—Manawan, QC

I have one last question.

Just now, we were discussing artificial general intelligence. You opened a door when you said that, in your view, there are risks with agentic or autonomous artificial intelligence at the moment. Is that the case?

4:10 p.m.

Full Professor, Université de Montréal, As an Individual

Yoshua Bengio

Yes. Agentic artificial intelligence is an extension of generative artificial intelligence, basically conversational robots. These are agentic already, actually, but companies are working to make them even more agentic. When I say “agentic”, I mean “autonomous”.

Autonomous means no human oversight or very little. The more autonomous something is, the less human oversight there is. If they're not reliable and they're autonomous, that's going to create a lot of problems. However, that's also needed by the companies in order to automate more jobs.

Gabriel Ste-Marie Bloc Joliette—Manawan, QC

Thank you very much.

The Chair Liberal Ben Carr

Thank you, Mr. Ste‑Marie.

Ms. DeRidder, the floor is yours for five minutes.

Kelly DeRidder Conservative Kitchener Centre, ON

Thank you, Mr. Chair.

Hi, Dr. Bengio. Thank you for joining us today. My questions will be for you.

Much of your work rightly focuses on the risks and safety challenges of superintelligent AI. In places like my community of Kitchener Centre, Canada's innovation capital, we're seeing specialty AI help support people with brain injuries, diagnose disease, assist in civic planning and streamline processes to drive real innovation, productivity and economic opportunity.

Can you please explain the difference between specialty AI in sectors like science and research, health care and industry, for example, and superintelligent AI, which the developers themselves admit they have no control over?

4:10 p.m.

Full Professor, Université de Montréal, As an Individual

Yoshua Bengio

They're very different, as you're suggesting. There are even different methodologies.

AI isn't one thing. There's a large variety of methods and systems. Most of the AI approaches being used, for example, in medical research and in scientific research in general are not the same kind of AI as the chatbots people are using now. They are also different from what companies are planning and working on, superintelligent AIs, which are supposed to be smarter than all of us. These are choices we can make. Right now, they're different.

We could have AI that is safe and beneficial and that helps us to cure diseases and deal with all kinds of challenges we have without constructing machines that are dangerous by themselves. However, because of the competition that exists between the leading AI companies and because of the competition that exists between China and the U.S., this is not happening right now. The race is towards superintelligent AIs, because there is a belief that they're going to give superpowers to whoever controls them—if they can control them, of course.

4:10 p.m.

Conservative

Kelly DeRidder Conservative Kitchener Centre, ON

Thank you for your answer. Essentially, specialty AI offers a lot of economic opportunity for our country, whereas superintelligent AI is the AI we should be cautious about moving forward.

4:10 p.m.

Conservative

Kelly DeRidder Conservative Kitchener Centre, ON

That being the case, do you believe specialty AI could be more of a tool in the tool box of workers instead of a replacement for workers?

4:10 p.m.

Full Professor, Université de Montréal, As an Individual

Yoshua Bengio

Exactly. That goes hand in hand with the idea that humans should remain at the centre as we move in this economic transition. There are two views of AI. One is that it's a tool, and humans can use the tool to be more productive and have a better life. Scientific researchers use it as a tool to improve and accelerate their advances.

The other view is emerging now: if you interact with chatbots, you're going to start feeling that they are like people, that they are entities with their own goals. More and more studies show that somehow, because of the way they're trained, those chatbots behave like people, with their own goals and self-interest, even though we don't know what's really going on inside the box.

That's a choice that can be made about what sort of AI we develop. We don't need to rush into the things that are dangerous. For example, the government could invest in AI that's more like a tool so that our companies and our people can benefit from it without creating crazy risks.

4:10 p.m.

Conservative

Kelly DeRidder Conservative Kitchener Centre, ON

Thank you again for your answer. I'm just going to take a moment to ask Mr. Geist a question as well.

You mentioned having sovereign control over our data and the importance of it, and I agree completely. One thing that happened just recently is that $240 million was given to CoreWeave indirectly for a data centre here in Canada, when we had a Canadian company, eStruxture, that could have done the job. To me, that's a missed opportunity. We should have kept our data sovereign with a Canadian company on Canadian soil.

Can you expand on the importance of making sure that for sovereignty and our data, we are utilizing Canadian firms?

4:15 p.m.

Canada Research Chair in Internet and E-Commerce Law, Faculty of Law, University of Ottawa, As an Individual

Michael Geist

You raise an important point. When you're a hammer, everything looks like a nail. When you're a law professor, everything looks like a legal issue to address.

Respectfully, the ownership of the company does not, at the end of the day, determine the sovereignty of the data. That was the point I was trying to get at. Whether it's CoreWeave or the Canadian company you referenced, the reality is that as long as the company has connections to a foreign country—let's say the United States—Canadian data protection and privacy laws are insufficient to guarantee that Canadian privacy law will apply.

I'm grateful for the Canadian alternatives we see from some of the large telecom companies on sovereign AI, from the Bells and Teluses of the world. Even they can't guarantee sovereignty over data unless Parliament acts by developing strong privacy laws that better guarantee the protection of our privacy.

4:15 p.m.

Conservative

Kelly DeRidder Conservative Kitchener Centre, ON

Canadian companies that don't operate abroad would still have control over Canadian data. It's only an issue if they're operating through foreign entities, though.

4:15 p.m.

Canada Research Chair in Internet and E-Commerce Law, Faculty of Law, University of Ottawa, As an Individual

Michael Geist

The practical reality is that virtually any company of a size that can provide the kind of security we need over that data will have sufficient connections to the United States such that U.S. laws, such as the CLOUD Act, or U.S. courts using jurisdictional rules will apply. That's the trade-off. If a small Canadian company says it does not have any ties to the U.S. so it can avoid foreign laws, the problem is that it doesn't have the sophistication and capital investment to provide security over our data. Once they get big enough to be able to do that, they have those connections, and the missing piece is sufficiently strong Canadian privacy laws.

4:15 p.m.

Conservative

Kelly DeRidder Conservative Kitchener Centre, ON

Thank you for your time.

The Chair Liberal Ben Carr

Madame O'Rourke, the floor is yours for five minutes, please.

Dominique O'Rourke Liberal Guelph, ON

Thank you, Mr. Chair.

My question is for Dr. Bengio.

We're hearing in the preambles and all the opening remarks that there could be labour displacement in two to five years, and significant labour displacement after that. I'm hearing that we need to spend more time being certain about the legislation we're putting forward and that we need to find time for multilateralism.

I need some guidance here on how we square that, because we're hearing about a five-year time horizon but being cautious about racing into regulation. Is there a process by which this can be iterative as we have more information? Also, how do you approach multilateralism when there are countries in the world that we know have explicitly prohibited any sorts of guidelines around AI?

4:15 p.m.

Full Professor, Université de Montréal, As an Individual

Yoshua Bengio

You don't try to strike an agreement with everyone—all 190-something countries. That's not going to work. You start with a few countries that share a lot of our concerns and are democratic countries that share our values, and then it can move a lot faster. We've seen small groups of countries getting together through small multilateral agreements. It's already happening on the economic side, but it can happen on AI regulation and AI investment.

I'm not an expert on the issue of privacy, but to speak to the question of companies having enough expertise and not having strong ties with the U.S., it's going to be easier if we are able to create a network of companies from countries outside of the U.S. that have exactly the same questions we're asking. We should do that. It will make things easier for us.

About the timeline, honestly, I think we need to go faster. I don't know how to do that, but my belief is that when we take an issue seriously, we can go very fast. Think of how fast Canadian society and many other countries reacted when the pandemic started. Consider how quickly many countries reacted to help Ukraine, especially after the U.S. started to pull out of helping them. We can move mountains when we're serious about it. I think that's the way to go.