Evidence of meeting #19 for Access to Information, Privacy and Ethics in the 45th Parliament, 1st session. (The original version is on Parliament’s site, as are the minutes.)

A recording is available from Parliament.

Before the committee

Antoine Guilmain  Partner and Co-head, Gowling WLG's National Cybersecurity and Data Protection Practice Group, As an Individual
Malo Bourgon  Chief Executive Officer, Machine Intelligence Research Institute

Luc Thériault Bloc Montcalm, QC

I read the document in which you propose building an off-switch. Could you explain a little more about that?

4:55 p.m.

Chief Executive Officer, Machine Intelligence Research Institute

Malo Bourgon

Yes, absolutely. In the context I was talking about earlier, this is a coordination problem, and we should separate the applications and current uses of AI from where things are going. I can get into that if people are interested in why those risks might arise and some of the history around that. I think we need to build a world where we have the ability to make agreements to stop pushing the frontier of AI development and to verify and enforce those agreements. We're not there yet. We certainly don't have the political will to do anything like that, but we need to build in the capability so that we have the option to do so.

Branding is hard. Building an off-switch doesn't mean shutting down all AI development. It means having the technical, institutional and legal capability, should there be the political will, to impose fairly strenuous restrictions on the development, deployment and diffusion of AI at the frontier, where these very powerful general systems are heading toward superintelligence. Being able to build that capability is essential.

We've done the work that you read. We recently released some work trying to sketch out what a model international agreement would look like that could prevent the creation of this technology until we can create it safely. We started to enumerate the things that would need to be in place and that make up the components of what we call this off-switch, to be able to enforce and verify it.

Luc Thériault Bloc Montcalm, QC

We can certainly concern ourselves with legislative and regulatory issues, but I felt it was necessary for the committee to consider the ethical implications of such a statement. The public deserves our consideration of the matter, because this is not just any issue. It would be as dangerous as nuclear weapons, but as far as I know, nuclear weapons are fairly codified and regulated. What strikes me is that this is not the case with AI. It seems that people are fascinated by the application of AI to various fields, without even being able to conceive of where it could lead us.

All these applications will be used in fundamental research to create artificial general intelligence, right?

4:55 p.m.

Chief Executive Officer, Machine Intelligence Research Institute

Malo Bourgon

Yes, I certainly agree. I often find myself saying that there seems to be what you might call a general missing mood here when it comes to where the future of AI is going. Setting aside the risk of loss of control, which I'm worried about, the AI companies that are taking these risks very seriously are basically talking about building a technology, call it AGI or something else, to have computers do the “thinky thing” we do that allows us to build rockets to go to the moon and to develop novel science. You can think of this as automating automation. Even setting aside loss-of-control risks, this is still something that would make all cognitive labour economically redundant.

These companies expect to be able to build systems approaching this within a small number of years. Maybe there's some advancement they don't have that's going to get in the way, and it could be 10 years. Five years ago, we thought AI systems that could talk to us as they do today were 20 years away. Then someone came up with a new idea, the transformer, and all of a sudden we made bigger AI systems, and they were talking to us. There's a hard forecasting question here, but more money is going into these systems than ever before, and we're making advancements.

5 p.m.

Conservative

The Chair Conservative John Brassard

Thank you, sir.

Thank you, Mr. Thériault.

Mr. Cooper, go ahead for five minutes. They're five-minute rounds now.

Michael Cooper Conservative St. Albert—Sturgeon River, AB

Thank you, Mr. Chair.

Mr. Guilmain, you referenced the European Union's Artificial Intelligence Act. It's been characterized by some as essentially the gold standard. I take it from your comments that you would not view it as such. Is that fair?

5 p.m.

Partner and co-head, Gowling WLG's National Cybersecurity and Data Protection Practice Group, As an Individual

Antoine Guilmain

I don't know yet.

5 p.m.

Conservative

Michael Cooper Conservative St. Albert—Sturgeon River, AB

Okay. You—

5 p.m.

Partner and co-head, Gowling WLG's National Cybersecurity and Data Protection Practice Group, As an Individual

Antoine Guilmain

If I may, I mentioned the General Data Protection Regulation in Europe, which is essentially the gold standard. Around the world, we started seeing many states change their laws to essentially mimic this trend. It took a couple of years. It wasn't done in a year.

At the moment, we know that the GDPR has been a success in terms of its reach beyond the EU, but we don't yet know whether the EU AI Act will have the same success. The update I gave you from a week ago demonstrates that we are still building the plane while we are flying it, if you will allow me the expression.

5 p.m.

Conservative

Michael Cooper Conservative St. Albert—Sturgeon River, AB

Just to clarify, you would view it as the gold standard.

5 p.m.

Partner and co-head, Gowling WLG's National Cybersecurity and Data Protection Practice Group, As an Individual

Antoine Guilmain

I would do what?

5 p.m.

Conservative

Michael Cooper Conservative St. Albert—Sturgeon River, AB

Would you view it as the gold standard or not as the gold standard?

5 p.m.

Partner and co-head, Gowling WLG's National Cybersecurity and Data Protection Practice Group, As an Individual

Antoine Guilmain

I don't know.

5 p.m.

Conservative

Michael Cooper Conservative St. Albert—Sturgeon River, AB

You don't know.

5 p.m.

Partner and co-head, Gowling WLG's National Cybersecurity and Data Protection Practice Group, As an Individual

Antoine Guilmain

I think it's interesting. If you were to ask me, I think this is really well thought out. They really tried to come up with something similar to what we had in Canada in AIDA. Clearly, it was this idea of proposing something.

Is it sufficient? Is it needed? I don't know yet.

5 p.m.

Conservative

Michael Cooper Conservative St. Albert—Sturgeon River, AB

You spoke about some of the challenges, though, including the regulatory burden imposed on companies. I saw one assessment done by the EU that indicates it could cost businesses hundreds of thousands of dollars to use just one AI system. That seems to be problematic. I hope you would agree with that.

Would you care to elaborate on what I see as problematic: the burdens it puts in place that aren't necessarily in line with some of the risks?

5 p.m.

Partner and co-head, Gowling WLG's National Cybersecurity and Data Protection Practice Group, As an Individual

Antoine Guilmain

Absolutely. I will give you an example. Let's say you own a CrossFit gym company. Let's say you're based out of Saint-Louis-du-Ha! Ha! and you want to use AI for real-time movement feedback and potentially for performance tracking and predictive PRs, or personal records. You think it's a good idea for your members.

Under the EU AI Act, it would require quite a bit of money to launch and offer these features to your users. You would have to do compliance assessments. You would need to have accountability documentation, policies and procedures in place. You would potentially need a record-keeping obligation, a register, in case of an incident. You would need to make sure that there was human oversight. You would potentially have to notify someone if there were a problem. Remember, you are in Saint-Louis-du-Ha! Ha! and you own a CrossFit gym.

I'm not saying I'm against these obligations. I think they make sense in some situations. The fact is that, and we see this with the new laws at the moment and their many obligations, the massive problem is mostly for small and medium-sized enterprises. It seems that we have not found the answer for this layer of organizations, which is really key for Canada.

I think that's my take. Again, I'm not against it. I just think it's a lot.

5 p.m.

Conservative

Michael Cooper Conservative St. Albert—Sturgeon River, AB

Well, here in Canada, we're really lagging behind when it comes to adaptation. You cited the EU law. The U.K. has also done some work around AI regulation. It seems to me to be a more flexible approach with sector-specific guidelines.

Do you have any thoughts on the U.K. approach? Is there anything to learn from what the U.K. is doing?

5 p.m.

Partner and co-head, Gowling WLG's National Cybersecurity and Data Protection Practice Group, As an Individual

Antoine Guilmain

Thank you for citing the U.K. example. I'm not pretending to be an expert on this system, but it seems to be reasonable, in the sense that we don't do nothing; rather, we focus on some sectors, which makes a ton of sense to me. That's my initial reaction.

5:05 p.m.

Conservative

The Chair Conservative John Brassard

Thank you, Mr. Guilmain and Mr. Cooper.

Ms. Lapointe for five minutes.

Linda Lapointe Liberal Rivière-des-Mille-Îles, QC

Thank you, Chair.

I would like to welcome the witnesses and thank them for being here. We are going to learn a lot. You are going to help us go further, do better and better regulate AI.

Earlier, Mr. Guilmain, you said that we did not necessarily need to move faster and that we needed to identify the gaps in our current legislation.

Were you talking about Canada as a whole or were you referring to Quebec? I would like to hear your thoughts on this. Do you believe that there are loopholes in our laws that need to be addressed?

5:05 p.m.

Partner and co-head, Gowling WLG's National Cybersecurity and Data Protection Practice Group, As an Individual

Antoine Guilmain

Thank you for your question.

Indeed, we have many laws. I myself specialize in cybersecurity and protecting personal information. However, many other sectors directly impact artificial intelligence. I'm thinking of copyright, trademark rights and consumer protection. Within the framework of those laws, we always have supervisory authorities associated with enabling legislation and with provisions often drafted to be technologically neutral. That's always the goal of our legislation. We confer a form of neutrality so that it stands the test of time.

That's the case, for example, with protecting personal information. We see that requirements are in place. They don't mention specific technologies, but they are broad enough to potentially extend to new use cases, including any use of artificial intelligence. I'm setting aside the issue of superintelligence, because even I have a hard time defining it.

Once we've said that, it seems to me there's a trend towards passing these laws because it feels good. It's like eating Nutella. It's something that's not bad. It's pleasant. We tell ourselves we've done something. However, what I see most often down the line is that we have regulators who must apply these laws and need the skills to do so. We have regulators in Canada and Quebec who do extraordinary work. However, they lack the means to truly keep abreast of these changes.

Once again, the logic I'm applying focuses on maybe passing fewer laws. We have more and more legislation. However, the fact is we have excellent regulators who can conduct analyses for themselves and are also able to sound the alarm.

Let's talk about Quebec's Commission d'accès à l'information, to cite just one example. The Commission is responsible for applying Law 25, which protects Quebeckers' personal information. However, we see it's already taken a position on artificial intelligence. It tabled briefs. Its representatives explained what they think is the correct application of the law to artificial intelligence.

In the end, we end up with forms of regulation through existing legislation. That said, we cannot solve this problem today. However, we must also admit that Quebec's Commission d'accès à l'information lacks resources. In my opinion, this observation applies more generally. We can generalize about this kind of thing. We have high-calibre regulators. Perhaps the solution is to give them a real opportunity to dive into the file and keep a steady hand on it.

I think that's the expression I find interesting. To “keep a steady hand” on something essentially means having an understanding and applying legislation correctly. That's my feeling, at least.

Linda Lapointe Liberal Rivière-des-Mille-Îles, QC

So they would help update legislation if we find loopholes, to make sure nothing about artificial intelligence falls through the cracks.

You spoke earlier about protecting personal information, but it goes beyond that. It's about data in a broad sense, in the way artificial intelligence is used.

5:05 p.m.

Partner and co-head, Gowling WLG's National Cybersecurity and Data Protection Practice Group, As an Individual

Antoine Guilmain

Absolutely. I’ll say again that this aspect of data is intimately tied to the issue of artificial intelligence. Indeed, this intelligence is ultimately just the result of machine learning. In other words, data is required to reach a form of artificial intelligence.

As a result, we see that the problematic data is often personal information, meaning information that could directly or indirectly identify you, other individuals or other groups. That is where the risks most often truly lie. They aren’t the only risks, but the data used to get to a result are currently subject to legislation.

So that’s my position. I don’t claim to be an artificial intelligence lawyer because I do indeed know my field. I am a personal information protection lawyer. It’s a technology I integrate into my practice. I think that’s really my message today.

Linda Lapointe Liberal Rivière-des-Mille-Îles, QC

Thank you very much.

I have a brief question for you, Mr. Bourgon. You said earlier that a global conversation needs to start. Were you thinking of a worldwide conversation? Where should we start?