Evidence of meeting #129 for Access to Information, Privacy and Ethics in the 44th Parliament, 1st Session. (The original version is on Parliament's site, as are the minutes.)

A video is available from Parliament.

Also speaking

Mireille Lalancette  Professor, Political Communication, Université du Québec à Trois-Rivières, As an Individual
Timothy Caulfield  Professor, Faculty of Law and School of Public Health, University of Alberta, As an Individual
Marcus Kolga  Director, DisinfoWatch
Yoshua Bengio  Founder and Scientific Director, Mila - Quebec Artificial Intelligence Institute

5:10 p.m.

Conservative

The Chair Conservative John Brassard

We're over time, but I'll let you finish quickly.

Iqra Khalid Liberal Mississauga—Erin Mills, ON

Thank you. I appreciate that, Mr. Chair.

Are we talking about just state actors or non-state actors with respect to foreign interference?

As Mr. Kolga said, there are a whole bunch of different things that are going on here. It's not just one entity that is operating.

5:10 p.m.

Conservative

The Chair Conservative John Brassard

Thank you.

We're 28 seconds over time.

Mr. Bengio, please answer briefly.

5:10 p.m.

Founder and Scientific Director, Mila - Quebec Artificial Intelligence Institute

Yoshua Bengio

Those tools are very easy to use; a kid could do it on their laptop. You don't need to be a state actor to use deepfakes. Some of the things I've been talking about regarding using bots for persuasion are more advanced, but as everything gets more advanced, it's going to be easier for people who are not state actors, even terrorists, to use them.

5:10 p.m.

Conservative

The Chair Conservative John Brassard

Thank you.

Mr. Villemure, you have the floor for six minutes.

René Villemure Bloc Trois-Rivières, QC

Thank you very much, Mr. Chair.

Thank you, witnesses.

Mr. Bengio, thank you for being with us for the second time. We had an unfortunate experience the first time. It's always a pleasure to talk with you.

Our study focuses on the effects of disinformation on parliamentarians. Could you tell us about the risks parliamentarians face in relation to disinformation, misinformation and so on?

5:10 p.m.

Founder and Scientific Director, Mila - Quebec Artificial Intelligence Institute

Yoshua Bengio

My expertise lies in artificial intelligence, so I'll confine myself to that aspect.

As I said earlier, people can now make deepfakes using very easy-to-use software. In the case of public figures for whom it's easy to obtain images or voice samples, it becomes easy to imitate them. You can reproduce their voices very convincingly. As for video, it's not always perfect, but it's becoming more and more effective.

What also worries me is that these systems are leading us in a direction where we'll be able to impersonate someone we know interactively. It's like someone committing phone fraud: using artificial intelligence, they pretend to be someone else, and the person on the other end of the line actually believes they're talking to the person in question. So you could receive a phone call from a supposed political leader and think it's really him.

All this is developing. So we absolutely must put in place regulatory safeguards to minimize the risks and be able to prosecute people who, under the cloak of anonymity, deceive others on the Internet.

René Villemure Bloc Trois-Rivières, QC

A little earlier, you mentioned Bill C‑27. We're very familiar with this bill.

When it comes to artificial intelligence, what best practices from other countries could be applied here to protect parliamentarians and Canadians?

5:10 p.m.

Founder and Scientific Director, Mila - Quebec Artificial Intelligence Institute

Yoshua Bengio

One of the important things to do would be to better monitor systems that can be used for dangerous purposes. As I said, these systems cost hundreds of millions of dollars. The companies that make them should be obliged to report to the government and to show what they're doing to prevent their systems from being used for purposes that are dangerous to democracy, such as the situations we're concerned about today. For example, they should show what kind of tests they carry out. Civil society should also be able to take a look at all this. That's a minimum.

As I mentioned, there would also be things to do regarding the way social media are organized. There are technologies that would keep users anonymous, but allow the government to find them if they were doing something illegal. Today, the government doesn't have that option. However, companies won't voluntarily use this kind of technology, because it creates friction when creating user accounts, and they don't want to put themselves at a disadvantage compared to other companies. If governments decided to do something like this, we'd create a level playing field for all companies, and that would be good for society in general.

René Villemure Bloc Trois-Rivières, QC

Still on the artificial intelligence front, are there rogue actors or rogue countries that are more likely to use artificial intelligence for bad purposes?

5:10 p.m.

Founder and Scientific Director, Mila - Quebec Artificial Intelligence Institute

Yoshua Bengio

The country that is most advanced in artificial intelligence, after the United States, is China. It has a very large critical mass of researchers and companies, not to mention military or national security organizations, in particular, which can do all sorts of things and have a lot of resources.

However, it's not just this kind of player that's worrying. There are also smaller players who can use software like Meta's Llama, which is available online. They can use all these cutting-edge systems without anyone knowing. They can even adapt these systems so that they are specialized to carry out a task that is dangerous for democracy, or even humanity.

René Villemure Bloc Trois-Rivières, QC

In other words, malicious use is within the reach of many actors, not just one state, given that systems like Meta's are openly accessible.

René Villemure Bloc Trois-Rivières, QC

In your opinion, in the current state of its language analysis, is artificial intelligence capable of detecting a lie?

5:15 p.m.

Founder and Scientific Director, Mila - Quebec Artificial Intelligence Institute

Yoshua Bengio

A lot of people are working on this. There are systems that try to do fact-checking, but they haven't been perfected yet. I think that today, we have to depend on human beings to do this.

This is the kind of research we should be funding. States, together, should invest in developing artificial intelligence in such a way that it is beneficial to democracy rather than harmful. However, it's not necessarily profitable, so it should probably be up to governments to build a defence system against attacks from malicious actors using artificial intelligence.

René Villemure Bloc Trois-Rivières, QC

Given the advances in artificial intelligence, particularly in lie detection, which is an aspect I'm very interested in, can we consider that privacy should become a public good that we should protect?

5:15 p.m.

Founder and Scientific Director, Mila - Quebec Artificial Intelligence Institute

Yoshua Bengio

It's a choice we can make. I think this choice has been made in Europe. Here, we're moving in that direction. I don't really want to take a position on that. There are pros and cons, and it's not all black and white.

What worries me more are the dangerous uses of these systems. What worries me is that we're legislating on the basis of the systems that exist today. This is a mistake, because researchers at artificial intelligence companies are working on systems that will be released in one, two or three years' time. But developing laws and regulations takes time. So we need to be proactive, think things through and try to predict where artificial intelligence will be in two, three or five years' time.

René Villemure Bloc Trois-Rivières, QC

We saw that when ChatGPT arrived, legislators were taken by surprise.

René Villemure Bloc Trois-Rivières, QC

Thank you very much.

5:15 p.m.

Conservative

The Chair Conservative John Brassard

Thank you, Mr. Villemure and Mr. Bengio.

Mr. Green, you have the floor for six minutes.

Matthew Green NDP Hamilton Centre, ON

Thank you.

Welcome to the guests.

One of the privileges we have as members is to engage with subject matter experts. Mr. Bengio, I know that you are certainly that.

I have some questions following up from my good friend Mr. Villemure, as we seem to be on the same wavelength.

Given the increasing accessibility of technologies capable of producing deepfakes and synthetic media, as you referenced, what specific regulatory measures would you recommend to safeguard democratic institutions in Canada from their potential weaponization to spread misinformation and disinformation?

5:15 p.m.

Founder and Scientific Director, Mila - Quebec Artificial Intelligence Institute

Yoshua Bengio

Unfortunately, there is no silver bullet, so it's going to be a lot of little things.

Unfortunately, a lot of the power to reduce those risks is in the hands of the Americans, so it could be their federal government—or California, these days.

However, I think there are things that the Canadian government can do.

First of all, one of the most important things is that those companies that are building those very powerful AI systems need to run tests that try to evaluate the capabilities of the system, which the U.K. and U.S. AI Safety Institutes, for example, are helping with. How good is the AI at doing something that could be dangerous for us? It could be generating very realistic imitations, or it could be persuasion, which is one thing we haven't seen used that much yet, but I'd be surprised if the Russians are not working on it using open-source software.

We need to know, basically, how a bad actor could use the open-source systems that are commercially available or downloadable in order to do something dangerous to us. Then we need to evaluate that, so we basically force the companies to mitigate those risks or even prevent them from putting out something that could end up being very disruptive.

Matthew Green NDP Hamilton Centre, ON

I think you referenced that, because this is not necessarily a commercially viable research track, nation states are going to have to invest in it to build up immunity. I think about this, in some ways, as a form of national defence spending.

What opinion do you have, if any, around the possibility of creating international regulations? For example, there could be treaties that would deal specifically with AI and would provide some kind of international pressure or culpability should there be evidence of state actor involvement.

I'll let you answer that first, and then I'll ask my follow-up question.

5:20 p.m.

Founder and Scientific Director, Mila - Quebec Artificial Intelligence Institute

Yoshua Bengio

It turns out that I'm very much involved in these kinds of efforts in the international community. I'm chairing an international panel modelled more or less after the IPCC, but for AI safety. I'm also involved with the UN and the OECD on discussions about harmonization and coordination of AI regulation and treaties.

There's still a long way to go. It's also important that each country moves forward.

We have a proposed bill here in Canada, and we should do our share. It's very similar in spirit to some of the things that the Americans have done with the executive order or what the Europeans are doing with their AI Act.

Then we need to play a very important role on the international scene. Canada, being a middle power, is in a way less threatening than the U.S., which also has very strong commercial interests. I think we can agree more easily with the Europeans, for example, and even with developing countries that also have issues with the way things are progressing.

I think we can also really play an important, positive role in the geopolitical battle that's emerging between the U.S. and China, which is not making it easy to find international solutions.