Evidence of meeting #129 for Access to Information, Privacy and Ethics in the 44th Parliament, 1st Session. (The original version is on Parliament’s site, as are the minutes.)

A video is available from Parliament.

Witnesses

Mireille Lalancette  Professor, Political Communication, Université du Québec à Trois-Rivières, As an Individual
Timothy Caulfield  Professor, Faculty of Law and School of Public Health, University of Alberta, As an Individual
Marcus Kolga  Director, DisinfoWatch
Yoshua Bengio  Founder and Scientific Director, Mila - Quebec Artificial Intelligence Institute

Matthew Green NDP Hamilton Centre, ON

Thank you very much.

Mr. Bengio, I've heard folks who are familiar with AI downplay some of the commentary about its potential.

You've touched on some things, and I want to talk about technological singularity, or the idea that there is a point in time in the future when technological growth becomes uncontrollable and irreversible, resulting in unforeseeable consequences.

What are your thoughts on that? Is that an overstatement of the possibility, or is that something we should be aware of?

5:35 p.m.

Founder and Scientific Director, Mila - Quebec Artificial Intelligence Institute

Yoshua Bengio

Let me put it this way: Nobody has a crystal ball.

The AI researchers, I think, as you alluded to, disagree among themselves about the different scenarios, so the rational way of thinking about this is that there are different scenarios. Some are incredibly, fantastically good, and others are terrible. People talk about human extinction and many things in between.

The responsibility of public policy here is to invest, to make sure we see through this fog better as we move forward, and to make sure we avoid the catastrophic risks, such as upending our democracies or even creating monsters that could turn against humans. For all of these, there are computer science arguments explaining how it could happen.

If I had more time, I could—

Matthew Green NDP Hamilton Centre, ON

You can certainly send them to us, but I have about a minute and 30 seconds left, so I want to ask about some specificity, given your subject matter expertise.

We spoke about the international obligation Canada has. I want to now turn to domestic regulation. I believe government has a role to play. There's not a free market answer to this, because obviously free markets will incentivize some of the worst basic behaviours.

With AI, ethics and accountability, how can we develop an ethical framework for AI developers to ensure accountability when AI technologies are used to propagate disinformation, particularly in a context that can impact democratic processes?

5:40 p.m.

Founder and Scientific Director, Mila - Quebec Artificial Intelligence Institute

Yoshua Bengio

I think we can look at what has been proposed recently in California, where the way to make sure companies behave well with these systems is through transparency and liability. These are really useful because the government doesn't need to specify exactly what companies have to do. With transparency, companies show more of what they are doing to make sure those systems aren't going to be dangerous, because they want the public to look at them positively. With liability, they have to be honest about the potential harms that they, or third parties, could create with those systems.

It's not that one of these companies is going to do something directly to harm people, but if they make it easy for a terrorist to do something or to create a monster, as I said, which may have huge costs for society, they should understand that they're going to be financially responsible for that.

5:40 p.m.

Conservative

The Chair Conservative John Brassard

Thank you.

Thank you, Mr. Bengio.

Mr. Barrett, go ahead for five minutes.

5:40 p.m.

Conservative

Michael Barrett Conservative Leeds—Grenville—Thousand Islands and Rideau Lakes, ON

To provide the witnesses with some context about a situation that unfolded over the last week, on September 22, CTV News aired a segment about a confidence vote that was going to come before the House, and in that segment CTV deliberately misrepresented the comments of the leader of His Majesty's loyal opposition. This is incredibly serious. We're talking about CTV News: They're owned by Bell, a media giant in this country.

What happened is shocking. CTV News spliced together different parts of the Leader of the Opposition's comments to create a false impression. First of all, that created a false statement, something that he never said, but the intention was to create a narrative that the opposition day motion was not about having a carbon tax election, which it was about, but was instead about opposing the Liberals' dental care program.

This isn't a situation in which there was an error, a misunderstanding during the editing process or some kind of technical issue. This isn't something that can be communicated away. This was very clearly an effort by a media company, a news organization, to manipulate the statements of the Leader of the Opposition on the eve of a confidence vote in the House of Commons, in a minority parliament. We're talking about misinformation here, and the need to trust, and whom we can trust.

We worry about what we see online, but here we have CTV News. We all know what CTV News is. They created a statement, splicing together multiple sentences to say something that the Leader of the Opposition did not say. Conservatives were calling for a carbon tax election; they made it about something else. How damaging is this to Canadians' confidence in trusted sources if they can't trust that a major news outlet will simply report what is actually being said by the Leader of the Opposition, but will instead deliberately edit a clip to have him say something that he never said?

5:40 p.m.

Conservative

The Chair Conservative John Brassard

Go ahead, Mr. Kolga, and then we'll go to Mr. Bengio. You have two minutes to respond.

5:40 p.m.

Director, DisinfoWatch

Marcus Kolga

I'm an expert on foreign information and influence operations, not on CTV's editorial policies, but I agree that it's very important that Canadians be able to trust their media sources, especially in these times, when trust is declining in media.

Again, I'm an expert on foreign influence and information operations, not on these sorts of domestic situations.

5:45 p.m.

Conservative

Michael Barrett Conservative Leeds—Grenville—Thousand Islands and Rideau Lakes, ON

It looks to me like they took it straight from the playbook of foreign entities that look to sow disinformation in our country.

I'll go to the second witness.

5:45 p.m.

Founder and Scientific Director, Mila - Quebec Artificial Intelligence Institute

Yoshua Bengio

I'm even farther away from this question, being an expert in AI. I'm sorry.

5:45 p.m.

Conservative

Michael Barrett Conservative Leeds—Grenville—Thousand Islands and Rideau Lakes, ON

I think we have a real problem in our country. We're looking outside to examine the effects of interference, misinformation and disinformation, and this is an example of disinformation. This is something that demonstrably did not occur, that they edited together.

We talk about deepfakes. They didn't use AI; they used an editing suite to create a lie, and it's shocking. It's really low tech, actually. It's low-tech domestic disinformation.

I think that the fact that this organization is heavily subsidized by the Trudeau government and then took to undermining the Leader of the Opposition on the eve of a confidence vote tells us something very scary about interference in our democratic institutions by the powerful who favour Justin Trudeau.

5:45 p.m.

Conservative

The Chair Conservative John Brassard

Thank you, Mr. Barrett.

Mr. Fisher, we'll start with you, and then I understand that we're going to go to Ms. Khalid to finish off your five minutes. Go ahead.

Darren Fisher Liberal Dartmouth—Cole Harbour, NS

Thank you very much, Mr. Chair.

I want to thank both of our esteemed witnesses not only for being here today and for their testimony, but also for being patient with our committee, because I know you've been here before and we had votes and some things going on. Thank you so much for being here today.

On artificial intelligence, foreign interference, misinformation, disinformation and deepfakes: in the last hour, the witnesses were talking about lost trust in media and the lack of critical thinking skills. People just don't know what to believe, whom to believe or whom to trust. I had a conversation with a neighbour one time, and she told me she no longer watches news at all, whether she sees it on social media or on TV. This is kind of heartbreaking. She consciously goes out of her way to not watch any news because she's been duped by misinformation, disinformation or malinformation in the news.

In the last hour, we asked our witnesses to give us some recommendations. How do we get back to a place where people in my constituency can feel that they can trust the news again?

I would ask Mr. Bengio first, and, Mr. Kolga, understanding that this may not fall within your exact area of expertise, I would value your opinion as well.

5:45 p.m.

Founder and Scientific Director, Mila - Quebec Artificial Intelligence Institute

Yoshua Bengio

I am going to say the same thing I said earlier, and that I told the U.S. Senate a year ago: we need to stop the practice of anonymous social media accounts. We should do this with our international partners, but we can do our share here. Anonymous accounts allow foreign interference, and they even allow AI to be using thousands of accounts in a way that makes it difficult not just to trace them and take them down, but also, eventually, to send to jail the people who are doing things that go against our laws.

The reason the platforms aren't asking what your bank asks when you open an account is that they don't want to make it difficult for you to create an account, because they make money on having more people. They want to make it easy, and they compete with other companies. If we had laws, it would be technically possible to protect privacy so that other users wouldn't necessarily know who you are, but the government, with the appropriate mandate, could.

There are a number of researchers in the world who are thinking about how to do that. There are technical solutions, and we should go in that direction.
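A minimal sketch of what one such technical solution could look like, assuming Python and the cryptography package: a registrar verifies a person's identity once, the way a bank would, and signs an opaque pseudonym; the platform checks only the signature, so it knows an accountable person is behind the account without learning who. Every name and step below is an illustrative assumption, not a description of any existing scheme.

```python
# Illustrative sketch only: a pseudonymous account credential in which a
# registrar (say, a government-mandated body) vouches for identity without
# revealing it to the platform. Assumes the "cryptography" package.
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Registrar side: verify the person's identity out of band (as a bank would),
# then issue a random pseudonym and sign it. The registrar keeps the
# pseudonym-to-identity mapping under legal safeguards, to be opened only
# with an appropriate mandate (for example, a court order).
registrar_key = Ed25519PrivateKey.generate()
registrar_pub = registrar_key.public_key()

identity_escrow = {}  # pseudonym -> verified identity, held by the registrar only

def issue_credential(verified_identity: str) -> tuple[bytes, bytes]:
    pseudonym = os.urandom(16)  # opaque token that reveals nothing about the person
    identity_escrow[pseudonym] = verified_identity
    signature = registrar_key.sign(pseudonym)
    return pseudonym, signature

# Platform side: accept an account only if the pseudonym carries a valid
# registrar signature. The platform never sees the real identity.
def open_account(pseudonym: bytes, signature: bytes) -> bool:
    try:
        registrar_pub.verify(signature, pseudonym)
        return True
    except InvalidSignature:
        return False

pseudonym, signature = issue_credential("Jane Doe, verified in person")
assert open_account(pseudonym, signature)           # accountable account accepted
assert not open_account(os.urandom(16), signature)  # forged pseudonym rejected
```

Real proposals in this area go further, using blind signatures or zero-knowledge proofs so that even the registrar cannot link a pseudonym to what the account posts, but the division of knowledge is the same: other users and the platform see only a pseudonym, while a mandated authority can resolve it when the law requires.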

5:45 p.m.

Director, DisinfoWatch

Marcus Kolga

I would only add that we should look to Europe. We should look to the Digital Services Act. It is very effective in Europe—I wouldn't say “very effective”; it is effective. It is a step in the right direction. Europe is making progress in terms of holding these social media companies to account, specifically for the content that's posted on their sites. We need to basically replicate that in this country.

We should also look to our European allies like Finland, which has done an extremely good job of ensuring that future generations do have the digital media literacy skills they need in order to enable that sort of critical thinking. They inject digital media literacy into every course in every year within the school curriculum. It's not just one course a year or one class a year; it's throughout the entire education of a child, from kindergarten to high school. They are learning about digital media literacy. This is something that we should also be doing.

We need to start disrupting these sorts of activities, especially when it comes to foreign influence and information operations. We need to figure out ways to disrupt these activities.

Our European allies are doing this. We need to look to them again and learn from them how they're doing it and replicate those efforts here. If we're not disrupting these operations, if we're not holding to account those who are collaborating with these foreign regimes, then we're not going to move towards deterrence of them.

Those are my three points.

5:50 p.m.

Conservative

The Chair Conservative John Brassard

Thank you, Mr. Fisher.

We have 20 seconds left, Ms. Khalid. I looked over at Mr. Fisher.

I do have a question I want to ask, so I hope you'll indulge me here.

First of all, let me say, Mr. Kolga and Mr. Bengio, that despite the technological problems that we had in the past, this was worth the wait. You provided some valuable information to the committee.

Whether we like to think so or not, sooner or later we are going to have an election in this country. The election is set for almost a year from today, and it could come sooner.

I want to hear from both of you, first with respect to foreign interference and then, Mr. Bengio, with respect to artificial intelligence, about the concerns that political parties and Canadians should have, and maybe some warning signs, as we head into the next election.

What are some of the things that we may be seeing down the pipe, if you will?

Mr. Kolga, I'll start with you, and then we'll conclude with you, Mr. Bengio.

5:50 p.m.

Director, DisinfoWatch

Marcus Kolga

I'll try to keep it brief.

The Chair Conservative John Brassard

I control the time, so take your time.

Seriously, I think it's an important question that hasn't been asked at this point, Mr. Kolga.

5:50 p.m.

Director, DisinfoWatch

Marcus Kolga

Again, we have a smoking gun that came out of the U.S. with the DOJ indictment. Ten million dollars was used to create a new platform, with the help of Canadians, to try to influence Canadian and American discourse. We see it happening. It's not a question of something that is going to happen at the time when the writ drops. It's already happening right now.

I think one of the problems we have in this country is this belief that these foreign authoritarian regimes only activate themselves when there's an election. They don't. China and Russia are sophisticated. They engage in these sorts of operations well in advance of any election. It's 24-7 for them.

The Russian government, for example, spends $3 billion per year on these operations. We're not even close. Even if you combine the resources all NATO countries spend to challenge and defend against these operations, we're nowhere close. We can see it happening.

I think we need to step back and acknowledge the fact that it's not just at election time; it's a full-time operation. How are we going to defend against that?

5:50 p.m.

Conservative

The Chair Conservative John Brassard

Thank you, Mr. Kolga.

From an artificial intelligence perspective, now and into the future during an election, what are some of the things we should be concerned about as we head into an election cycle, Mr. Bengio?

5:50 p.m.

Founder and Scientific Director, Mila - Quebec Artificial Intelligence Institute

Yoshua Bengio

We have to worry about the technology that already exists that can be used to create deepfakes of various kinds and imitate people, their voices, their visual appearances and their movements. I think we need to start preparing against tools on the horizon that could be coming out in six months or something like this.

Again, AI is not a static thing. It's getting better as new systems and companies are coming up with new ways of training it that make it more competent.

I'm going to go into a little bit of a technical thing here, which is that once one of these very large systems that cost over $100 million has been trained, it's fairly cheap to take it—especially if it's open source—and do a little bit more work to make it really good at one particular task. This is called fine-tuning.
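A minimal sketch of what fine-tuning looks like in practice, assuming the Hugging Face transformers, datasets and peft libraries; the base model name, the task_examples.jsonl file and the LoRA settings are illustrative placeholders, and parameter-efficient adapters are just one common way the "little bit more work" is kept cheap.

```python
# Illustrative sketch of fine-tuning an open-weight pretrained model on one
# narrow task. Assumes Hugging Face transformers, datasets and peft; the model
# name and "task_examples.jsonl" are placeholders.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)
from peft import LoraConfig, get_peft_model

base = "meta-llama/Llama-2-7b-hf"  # any open-weight base model
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

# LoRA trains a few million adapter weights instead of all of the model's
# parameters, which is why adapting a very expensive pretrained system is
# comparatively cheap.
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM"))

# Each record holds a "text" field with one task-specific example.
data = load_dataset("json", data_files="task_examples.jsonl")["train"]

def tokenize(example):
    tokens = tokenizer(example["text"], truncation=True,
                       padding="max_length", max_length=512)
    tokens["labels"] = tokens["input_ids"].copy()  # standard causal-LM objective
    return tokens

data = data.map(tokenize, remove_columns=data.column_names)

Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned", num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=data,
).train()
```

The same mechanics, pointed at conversation data gathered from social media, are what the scenario described next would rely on.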

You could imagine, for example, that the Russians might be taking Facebook's LLaMA. They might make it run on social media and interact with people to see how well it works, and then they might take that data in order to make the system even better at convincing people to change their political opinion on some subject.

As I said earlier, there are already studies showing that GPT-4, as it stands, is already slightly better than humans at persuading people to change their opinions, especially when it has access to your Facebook page. However, the threat can get a lot worse without any new scientific breakthrough, just with a bit of engineering of the kind that such actors could easily do.

What that would mean is that they can now unleash bots that would be talking to potentially millions of people at the same time and trying to make them change their opinions. It's a kind of technology that we haven't seen, or maybe it is already happening and we're not aware of it. It could be a game-changer for elections in a bad way.

5:55 p.m.

Conservative

The Chair Conservative John Brassard

Thank you, gentlemen, for providing such valuable information to the committee, and thank you for your valuable time today.

Thank you, Madam Clerk, for arranging for our witnesses to be here.

That concludes today's meeting. I have no other business. Have a great weekend, everyone, and we'll see you next week.

The meeting is adjourned.