Evidence of meeting #19 for Access to Information, Privacy and Ethics in the 45th Parliament, 1st session. (The original version is on Parliament’s site, as are the minutes.)

A recording is available from Parliament.

Before the committee

Antoine Guilmain, Partner and co-head, Gowling WLG's National Cybersecurity and Data Protection Practice Group, As an Individual
Malo Bourgon, Chief Executive Officer, Machine Intelligence Research Institute

4:30 p.m.

Conservative

The Chair Conservative John Brassard

I call the meeting to order.

I want to welcome everyone to meeting number 19 of the House of Commons Standing Committee on Access to Information, Privacy and Ethics.

Pursuant to Standing Order 108(3)(h) and the motion adopted by the committee, we are undertaking a study to assess artificial intelligence (AI), the challenges it poses, and how it should be regulated.

I would like to welcome our witnesses for today. As an individual, we have Antoine Guilmain, who is the partner and co-head of Gowling WLG's national cybersecurity and data protection practice group. From the Machine Intelligence Research Institute, we have Malo Bourgon, who is the chief executive officer.

Mr. Guilmain, I'm going to give you up to five minutes to address the committee. If you want to start, go ahead, please.

Antoine Guilmain Partner and co-head, Gowling WLG's National Cybersecurity and Data Protection Practice Group, As an Individual

Thank you very much, Mr. Chair and members of the committee, for inviting me to comment on the challenges related to regulating AI.

Although I will be testifying in English today, I will respond to your questions in English or French.

I am co-leader of the national cybersecurity and data protection group at Gowling WLG, and I'm an associate professor at the faculty of law at the Université de Sherbrooke. I am a practising lawyer called to the bars of Quebec and Paris. My evidence today represents my own views. I am here as an individual, not representing my law firm, clients or any third parties.

Much of my legal career has focused on comparative analysis of legal regimes across the globe, advising clients on their compliance obligations in the jurisdictions where I am qualified to practise. My practice focuses on data protection and cybersecurity, and it naturally extends to artificial intelligence, given its role as a major data-driven technology.

To me, Canada has always been a model of education, growth and innovation. That's why I chose to pursue my doctorate, start my family and build my life here—recently earning citizenship, which remains one of my proudest moments. I believe that Canada's institutions, diverse economy and culture of innovation create an environment well suited for the effective development, adoption and regulation of AI technologies.

Today I would like to discuss the challenges of AI, not simply as an ever-evolving technology but as a new field of regulation. In my view, grounded in my experience and in the current international landscape, there are three key pitfalls that we must not overlook.

The first one is that newer doesn't mean better. There is a natural tendency to respond to new technology by creating new laws. However, consistent with the civil law tradition, leading jurists have long recommended applying ancient law to technological revolutions. This approach is not about doing nothing. Rather, it calls for revisiting existing areas of law and adapting them, case by case, to each new technology.

Today, AI does not exist in a legal vacuum in Canada. A wide range of legislation already applies, including copyright, liability, trademark law and data protection. In this last area, we are already seeing new obligations related to automated decision-making, including in Quebec, to ensure transparency when AI is used. In that sense, prior to tabling bills like the former AIDA, we should assess current laws and identify any gaps before imposing new requirements.

My second message would be that faster doesn't mean better. There is a natural tendency, again, to adopt laws as quickly as technologies evolve. However, in law more than in any other field, slow and steady often proves the wiser approach. A look at both domestic and international developments illustrates why.

In data protection, for example, the GDPR, the General Data Protection Regulation, was adopted in 2016, but it took Quebec five years to amend its own legislation in response, with Law 25, particularly in light of the GDPR's international impact. In the realm of AI, the EU AI Act, which came into force in August 2024, is already facing a form of retrenchment, especially regarding implementation timelines and the regulatory burden on tech companies. Whether it will achieve the same success as the GDPR remains uncertain.

Closer to home, AIDA faced significant changes after its introduction. The most recent version contained no fewer than 70 references to upcoming regulation in just 20 pages—an ambitious effort, but far from a self-contained legislative text.

My last message would be that heavier doesn't mean better. Again, there is a tendency to assume that the greater the burden on organizations, the better the protection for the public. This is not always the case, and, more importantly, it can undermine the competitiveness of small and medium-sized enterprises. AIDA reflected this trend, mandating multiple assessments at various stages of an AI system's life cycle. While theoretically sound, this approach is rarely feasible in practice, at least based on my experience.

In sum, I believe that AI legislation can succeed only through sustained and substantive collaboration with stakeholders in industry, academia and civil society to ensure that any framework, first, reflects a risk-based approach; second, appropriately takes into account the state of AI technology, including its current limitations; third, assigns responsibility along the AI value chain; and finally, harmonizes core concepts with existing international frameworks.

With the chair's permission, I would be pleased to submit a short written brief in French and English on the issues I have addressed in my opening remarks.

Thank you, and I look forward to answering this committee's questions.

4:30 p.m.

Conservative

The Chair Conservative John Brassard

Thank you, Mr. Guilmain.

Mr. Bourgon, you have five minutes, sir. Go ahead.

Malo Bourgon Chief Executive Officer, Machine Intelligence Research Institute

Thank you.

Mr. Chair and members of the committee, my name is Malo Bourgon. I'm the CEO of the Machine Intelligence Research Institute, or MIRI, a non-profit founded in 2000 to make sure that the development of powerful AI systems is beneficial for humanity.

I grew up in Ontario, where I studied engineering and computer science at the University of Guelph, and I've worked at MIRI since 2012. Our research helped create the field of AI alignment, the study of how to build AI systems that reliably want—and do—what we want them to.

Governments face many urgent AI concerns, such as disinformation, surveillance, labour displacement and threats to democratic institutions. These are all real and important. However, my focus today is on something different: dangers from AIs that are smarter than the smartest humans at every mental task—what's often termed artificial superintelligence.

The leading AI companies today say that the creation of artificial superintelligence is their explicit goal. OpenAI's CEO, Sam Altman, recently called for “making superintelligence cheap [and] widely available”. Anthropic's Dario Amodei talks of building “a country of geniuses in a datacenter”. These AI companies weren't founded with the intention of making chatbots. To them, chatbots are a stepping stone.

Researchers at MIRI are concerned that if the world continues racing towards superintelligence using anything like today's techniques and understanding, the default outcome is that we'll lose control, likely resulting in human extinction.

Why is there such a big danger? For one thing, AI is unlike traditional software and doesn't behave exactly as its creators intend. Traditional software is written line by line, and a programmer can understand every part. Modern AI systems are grown as enormous neural networks, trained through trial and error with massive computation. Their creators have little insight into what's actually going on inside them. As a result, AIs often exhibit behaviours that nobody asked for and nobody wanted.

For years, MIRI has warned of this eventuality, and now we're starting to see early evidence. Frontier AI systems get caught cheating on evaluations. AIs sometimes drive users into states that clinicians call AI-induced psychosis, even in cases in which the AI systems themselves can readily tell that their responses are harmful to the user. When we look at their chains of reasoning, we see growing signs of attempts at deception. An especially concerning complication is that models are increasingly recognizing when they're being tested; this is called situational awareness, and it threatens the validity of all safety evaluations moving forward.

At current capability levels, these behaviours are concerning, but not catastrophic. The systems are still limited, but we must ask, what happens when they reach the capabilities the companies are aiming for? Will future AI systems start pursuing their own objectives? If so, what will those objectives be? Do these systems endanger us? Can we just pull the plug? Many who have studied these questions have found the answers quite concerning.

Canadians Geoffrey Hinton and Yoshua Bengio are two of the three godfathers of deep learning—which is the paradigm that underlies most of the modern AI systems today, and certainly the most powerful ones. They have publicly warned of the dangers of extinction. In 2023, they joined other top AI scientists, and even the CEOs of OpenAI, DeepMind and Anthropic, some of the leading frontier labs, in this statement: “Mitigating the risk of extinction from AI should be a...priority alongside...pandemics and nuclear war.”

Some of these signatories lead the very companies racing fastest to build superintelligence. Elon Musk called AI a “fundamental risk to the existence of civilization”. Dario Amodei said, “there's a 25% chance that things go really, really badly”, including extinction. This is an unprecedented situation, in which even the creators of the technology are saying it's incredibly dangerous.

Catastrophe, however, is not inevitable. These dangers can be averted. The race that so many see as unstoppable is taking place in a world where most people don't understand the threat. That can change.

What can Canada do? Policy-makers can say what other leaders seem to lack the courage to: that, according to top experts, the race to superintelligence is far too dangerous. Canada can start a global conversation that changes what's possible when it comes to averting this threat. This very House could ask the leaders of those companies to testify under oath about these grave dangers.

As for the motion that initiated the study, the mover said that Canada should not “unnecessarily slow down technological development”. I agree, of course. We can keep the self-driving cars, the chatbots of today and the AI-powered drug discovery tools among many very promising AI applications. Much of the technology is extremely beneficial and promising. The only thing we need to stop is the race to AIs that exceed humans in every way. The extremely specialized chips and enormous data centres essential to that race can be reined in. Canada cannot do this alone, but it can help start the global conversation.

Canadian scientists led the way on this technology. They continue to lead through their efforts to get the world to avert the dangers. My hope is that Canada will use its voice and moral authority to push the world forward so that our best plan is not that we hope we get lucky in the presence of this threat.

Thank you. I look forward to your questions.

4:40 p.m.

Conservative

The Chair Conservative John Brassard

Thank you, Mr. Bourgon.

With that, we'll start our questioning. It's going to be a hell of a discussion, I think.

Mr. Barrett, go ahead. You have six minutes, please.

4:40 p.m.

Conservative

Michael Barrett Conservative Leeds—Grenville—Thousand Islands—Rideau Lakes, ON

Mr. Guilmain, currently we rely on a voluntary AI code, if I understand correctly, while the European Union enforces the AI Act, which is binding. From a compliance perspective, does this gap expose Canadians and Canadian businesses to legal uncertainty? Does it create greater privacy and algorithmic bias risks?

4:40 p.m.

Partner and co-head, Gowling WLG's National Cybersecurity and Data Protection Practice Group, As an Individual

Antoine Guilmain

Not necessarily, and I will explain why. We have, at the moment, data protection laws that are working pretty well. We have a privacy commissioner at the federal level and in different provinces as well. In these laws, we already have some existing requirements, including when it comes to AI—more specifically, when there's an automated decision-making process.

It's interesting that you raised the EU example. As I mentioned in my opening remarks, it came into force in 2024, but last Wednesday—as a matter of fact, on the same day I got the invitation for this session—the Europeans tabled a digital omnibus on AI. Essentially, this text aims to extend timelines for compliance, as well as to simplify the burden and the obligation for small and medium-sized organizations.

It tells us that they came up with a proposal, but it's still evolving while we're moving forward. It's important to keep this in mind.

4:40 p.m.

Conservative

Michael Barrett Conservative Leeds—Grenville—Thousand Islands—Rideau Lakes, ON

What measures do you think should be in place, for individuals who are shaping AI policy, to ensure there isn't regulatory capture or a conflict of interest?

4:40 p.m.

Partner and co-head, Gowling WLG's National Cybersecurity and Data Protection Practice Group, As an Individual

Antoine Guilmain

In terms of the potential obligations we could think of, it would mostly be compliance assessment, but not too much. What we see at the moment, especially in Quebec, is a tendency to put impact assessments in pretty much any legislation. That's a problem. We see that it's not feasible for most organizations. There is also potentially the idea of more accountability documentation or having some procedures and processes internally. Again, the idea of policies and procedures is amazing. Even though I'm a lawyer and I'm a big fan of these documents, it's not sufficient.

Finally, at the end, I think it's more about training within organizations. That's what we see more and more, at least with my clients. There's a goal to really ensure that the staff know and understand the potential AI risks, as well as the potential benefits for their own organization.

4:40 p.m.

Conservative

Michael Barrett Conservative Leeds—Grenville—Thousand Islands—Rideau Lakes, ON

It's a rapidly evolving space, then.

We've seen companies like Nvidia finance customer purchases of their own products to accelerate their adoption. Do you see similar risks if individuals with major AI investments are advising on Canada's AI policy when their financial interests could shape the rules? That's what I'm driving at.

4:40 p.m.

Partner and co-head, Gowling WLG's National Cybersecurity and Data Protection Practice Group, As an Individual

Antoine Guilmain

I'm not sure I follow the question. I apologize.

4:40 p.m.

Conservative

Michael Barrett Conservative Leeds—Grenville—Thousand Islands—Rideau Lakes, ON

I got part of an answer in the first half, so I'm going to move to my next one, if that's okay. If you reflect on it afterwards, and you want to submit it to the committee in writing, I'd appreciate that.

Mr. Bourgon, your institute warns about catastrophic AI risks, though you did temper some of that in your opening statement. You said that it might not all be bad.

There is an absence of binding legislation, and we have a reliance on voluntary codes. Do you think this approach is sufficient to prevent the worst-case scenario, or should the most extreme measure be undertaken, namely a moratorium on development until robust and conflict-free regulations are put in place?

4:45 p.m.

Chief Executive Officer, Machine Intelligence Research Institute

Malo Bourgon

That's a great question. When I think about AI, I often try to separate the applications and current systems we have today from where the technology is heading. Many of today's applications should be regulated the way most other new technologies are regulated.

The thing I focus most of my time on is thinking about where the technology is going, and what the risk will be from these very advanced systems. In that case, it is, unfortunately, a very challenging coordination problem.

If any one actor decides that the risk is too great, its slowing down alone does not prevent anyone else from building a system that would pose those risks. The main area of focus should therefore be on having those conversations with partners to figure out an agreement to stop pushing the frontier.

Ultimately, I think our best chance of not succumbing to a catastrophe is finding some way to agree with international partners on which frontiers we aren't going to push.

4:45 p.m.

Conservative

Michael Barrett Conservative Leeds—Grenville—Thousand Islands—Rideau Lakes, ON

Thank you.

4:45 p.m.

Conservative

The Chair Conservative John Brassard

Thank you, sir.

Mr. Sari, for six minutes.

If you need your headphones, make sure you have them on. Make sure you're on the proper channel.

Abdelhaq Sari Liberal Bourassa, QC

Thank you very much, Mr. Chair.

I thank the witnesses for their very inspiring and interesting testimony.

Before I ask any questions, I would like to make a clarification about the history of AI.

What would be very dangerous would be a revolution, or demands for regulatory changes around the world, in relation to a technology that has recently seen a resurgence in use but has been around for a long time.

There is a resurgence in the use and exploitation of AI because it has been popularized; however, the first article in which the term “artificial intelligence” was mentioned dates back to before 1954. It is very important to remember this. Today is my birthday, and I have been using artificial intelligence in predictive analytics for over 30 years. This is a very important point, and it leads me to ask you both a question.

We are not just talking about generative AI or machine learning, whether there is oversight or not. I do not see how regulations could be applied in either case. That is why I would like to benefit from your expertise in this area.

Would asking the government to regulate technological development really be a viable solution, or would it be better to regulate the use of technologies to manage the risks? If I own a private company that creates technology, will we regulate how I do it, or rather regulate how that technology will be used afterwards? When I talk about use, I'm talking about the collection, processing, and communication of data, and the cycle is much more important.

I would really like to hear your thoughts on this.

4:45 p.m.

Partner and co-head, Gowling WLG's National Cybersecurity and Data Protection Practice Group, As an Individual

Antoine Guilmain

Thank you for your question and happy birthday. I am in a good mood too.

I would like to start by saying that, in this presentation, we place a great deal of emphasis on the risks, and rightly so. There are risks, but there are also good things that come from AI.

Next, I would like to emphasize something that is sometimes overlooked, namely that there are legal considerations. It is very important to stress this point. I am a lawyer specializing in privacy and cybersecurity, and I can confirm that there are rules. I have work at the moment, so there is no problem on that front.

That said, I think it is interesting to ask what kind of AI we are talking about. The motion refers to artificial superintelligence. There are nuances. My colleague Mr. Bourgon will also be able to comment on this. From an educational perspective, there are three types of AI.

First, there is narrow AI, namely the kind we have today, even though it has increasingly sophisticated and broad capabilities. Second, there is more general AI that would mimic human behaviour. Third, there could be artificial superintelligence.

However, it is important to understand that it will not happen overnight. Artificial superintelligence is a concept that dates back to the 1950s. Since then, there have been developments, advances, and setbacks. Today, we are where we are, and we are witnessing a growing trend that will not be reversed overnight.

This study is fundamental, in my opinion. Furthermore, Canada truly has a rather unique approach to AI, both in terms of adoption and in terms of trying to regulate it. I salute this aspect, but we must not think that it will lead to an immediate change.

This is my opinion.

Abdelhaq Sari Liberal Bourassa, QC

Let's talk about trying to regulate it.

I have been observing the regulatory trend in the European Union for a long time. Earlier, you mentioned there were changes just a week or two ago.

Let's use another example so that we don't rely solely on the European Union, namely Australia. When the Australians wanted to impose regulatory requirements on companies, first, there was a lot of resistance; second, some companies left the regulated areas.

Do you think this could be a risk? What would you advise the Canadian government to do to better regulate the use of AI, while keeping skills and knowledge within our country? I do mean “regulate.” I completely agree.

4:50 p.m.

Partner and co-head, Gowling WLG's National Cybersecurity and Data Protection Practice Group, As an Individual

Antoine Guilmain

I do not think legislation will solve all the problems. Regulation is much broader than legislation. Different types of regulation arise from markets, from social needs and from voluntary codes of conduct. So it is really a very broad concept.

I would like to talk to you about Quebec. You may have heard of Quebec's well-known Law 25, an Act to modernize legislative provisions as regards the protection of personal information. Even today, I still provide legal advice, and a bit of group therapy, on this law. I am telling you this because it is an extremely onerous law in terms of obligations, and it puts small and medium-sized businesses in absolutely untenable situations.

It is an interesting situation, because there has been progress thanks to the new legislation. But is the public better protected? Are organizations comfortable with this law? I can assure you that if we conducted a survey, we would probably get some rather surprising results.

All this to say that the law is a good tool. Again, don't get me wrong, because the law is my job. However, we still need to think about how the regulator applies it. It's really important to keep that in mind.

Abdelhaq Sari Liberal Bourassa, QC

I would like to remind you that Law 25 only affects how data is handled, which is fine because it deals with the processing, use and disclosure of data.

Thank you for your comment. Perhaps I will have the opportunity to go back to this later.

4:50 p.m.

Conservative

The Chair Conservative John Brassard

Thank you, Mr. Sari.

Mr. Thériault for six minutes.

Luc Thériault Bloc Montcalm, QC

Thank you, Chair.

I will give a brief introduction.

I proposed the study we are undertaking on AI. This standing committee has three priorities: access to information, privacy, and ethics. I read the “Statement on Superintelligence,” which was signed by leading figures in the field of AI, including Geoffrey Hinton, Yoshua Bengio, and many others. The list is incredible.

Like you, Mr. Bourgon, I have read the statement. These people are saying that we must mitigate the risk of extinction due to AI, and that this should be a global priority on a par with other societal risks, such as pandemics and nuclear war.

It blew me away. From there, we can ask ourselves whether this is really serious. If we start looking, we notice a frantic, almost blind rush towards the establishment of artificial superintelligence that seems to favour economic interests, concentration, and control of information over human interests. An entire vision of human beings underlies what we are doing and what the impact of artificial intelligence on human life will be. For example, when it comes to computer engineers, AI can enlighten us and do the job on the spot, which will revolutionize everything.

I will turn to you in a moment, Mr. Guilmain, but I think things are evolving quite a bit faster than you claim. Canada has appointed a Minister of Artificial Intelligence, which is not insignificant.

Mr. Bourgon, do you consider the people you mentioned in your presentation to be alarmists?

4:55 p.m.

Chief Executive Officer, Machine Intelligence Research Institute

Luc Thériault Bloc Montcalm, QC

Have you already heard claims that artificial superintelligence and its development cannot be controlled?

4:55 p.m.

Chief Executive Officer, Machine Intelligence Research Institute

Malo Bourgon

Yes, I've heard that.