Evidence of meeting #120 for Access to Information, Privacy and Ethics in the 42nd Parliament, 1st Session. (The original version is on Parliament's site, as are the minutes.)

A recording is available from Parliament.

Also speaking

Claire Wardle, Harvard University, As an Individual
Ryan Black, Partner, Co-Chair of Information Technology Group, McMillan LLP, As an Individual
Pablo Jorge Tseng, Associate, McMillan LLP, As an Individual
Tristan Harris, Co-Founder and Executive Director, Center for Humane Technology
Vivian Krause, Researcher and Writer, As an Individual

11:40 a.m.

Liberal

Frank Baylis Liberal Pierrefonds—Dollard, QC

What is the difference between a deepfake and just a regular fake, or a fake video and a deepfake? Could you explain that to me?

11:40 a.m.

Partner, Co-Chair of Information Technology Group, McMillan LLP, As an Individual

Ryan Black

Actually, I found an article through search engines that Dr. Wardle participated in, in Australia, which explains it very well. I would encourage people to hit their favourite search engine to find it.

Basically, it learns details about a face from a series of images that are publicly sourced or sourced through other means, and then uses deep-learning techniques—they're algorithmic, not logic-based, in nature—to learn how the face interacts as it moves. Then, using a transplant victim.... If I had enough video of Pablo here pumped into the deepfake learning engine, I could map Pablo's face onto mine and very convincingly make it look like Pablo is talking while I'm the one moving.
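
What Mr. Black describes is, at its core, a pair of autoencoders sharing one encoder. The sketch below is a minimal illustration in PyTorch; the framework, the 64x64 crop size, and every layer dimension are assumptions made for this example rather than details from the testimony.

```python
# Illustrative only: the shared-encoder, two-decoder design behind early
# face-swap "deepfakes". One encoder learns facial structure from both
# people; each decoder learns to reconstruct one person's face.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.ReLU(),   # 64x64 -> 32x32
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),  # 32x32 -> 16x16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),  # compact "face code"
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 64, 16, 16))

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()

# Training (omitted) minimizes each decoder's reconstruction error on its
# own person's face crops. The swap then routes one person's encoded frame
# through the other person's decoder:
frame_of_me = torch.rand(1, 3, 64, 64)     # stand-in for a cropped video frame
swapped = decoder_b(encoder(frame_of_me))  # person B's face, my motion
```

Nothing in this design requires more than consumer hardware and enough images of the target, which is what makes the accessibility concern raised here plausible.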

11:40 a.m.

Liberal

Frank Baylis Liberal Pierrefonds—Dollard, QC

Over time probably every kid in high school is going to be doing this, right?

11:40 a.m.

Partner, Co-Chair of Information Technology Group, McMillan LLP, As an Individual

Ryan Black

There are face-swap apps already.

11:40 a.m.

Liberal

Frank Baylis Liberal Pierrefonds—Dollard, QC

The way we're going with this concept of deepfake, every kid's going to be doing this with their friends and making these videos, if I understand what you're saying. It's going to be that easy to do, right?

11:40 a.m.

Partner, Co-Chair of Information Technology Group, McMillan LLP, As an Individual

Ryan Black

It's a technology with virtually limitless application, and it will be used for more than faces. It will be used for full bodies. At some point it will be used for transplanting entire things or other characteristics. It could be used for voice just as easily as for faces.

11:40 a.m.

Liberal

Frank Baylis Liberal Pierrefonds—Dollard, QC

Okay.

I'd like to go back to you, Ms. Wardle, for another question. I missed what you mentioned—this angry-face emoji or CrowdTangle something. What was it that you said, exactly?

11:40 a.m.

Harvard University, As an Individual

Dr. Claire Wardle

If you're on Facebook and you see a piece of content, you can add a reaction. It can be a happy face or—

11:40 a.m.

Liberal

Frank Baylis Liberal Pierrefonds—Dollard, QC

I know what that is.

11:40 a.m.

Liberal

Frank Baylis Liberal Pierrefonds—Dollard, QC

What were you referring to, though, when you said “CrowdTangle” or something?

11:40 a.m.

Harvard University, As an Individual

Dr. Claire Wardle

When we are searching for content, we apply a search filter that returns only content with a disproportionate number of angry emoji reactions, because people have an angry emotional reaction to a lot of this deceptive content.

11:40 a.m.

Liberal

Frank Baylis Liberal Pierrefonds—Dollard, QC

That leads you to a lot of the disinformation. Is that what I understand?

11:40 a.m.

Harvard University, As an Individual

Dr. Claire Wardle

Yes, it leads to a lot of the false, misleading content. People who are perpetuating this understand that this is an emotional response, and so they are using material that makes you angry. If you look for those reactions, you end up finding a disproportionate number of these examples.
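
A toy version of the filtering heuristic Dr. Wardle describes can be written in a few lines. In the sketch below, the data, the column names, and the cutoff of three times the median are all invented for illustration; nothing here comes from the testimony itself.

```python
# Flag posts whose share of "angry" reactions is disproportionately high,
# on the theory that deceptive content tends to provoke anger.
import pandas as pd

posts = pd.DataFrame({
    "post_id": ["a", "b", "c"],
    "angry": [480, 12, 300],               # angry-emoji reactions
    "other_reactions": [120, 950, 2700],   # likes, hearts, etc., combined
})

posts["angry_share"] = posts["angry"] / (posts["angry"] + posts["other_reactions"])

# An arbitrary cutoff for illustration: three times the median angry share.
cutoff = 3 * posts["angry_share"].median()
flagged = posts[posts["angry_share"] > cutoff]
print(flagged[["post_id", "angry_share"]])   # surfaces post "a" (80% angry)
```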

11:45 a.m.

Liberal

Frank Baylis Liberal Pierrefonds—Dollard, QC

—of the disinformation or the malinformation.

11:45 a.m.

Liberal

Frank Baylis Liberal Pierrefonds—Dollard, QC

Okay. You're saying it's just a way that can be used to find them.

One last thing for you, Ms. Wardle. You had mentioned this database in the United Kingdom. What was the name of that, again?

11:45 a.m.

Harvard University, As an Individual

Dr. Claire Wardle

No, it's a suggestion by Full Fact in a document they published last week, saying that we need a public database of ads. My point is that they are specifically saying political ads. There are questions, of course, around how we define a political ad, when we know that the majority of the problematic content might not be directly related to a candidate; it's around other social and political issues. Unless we have a database of all advertising, there's a challenge here: the idea of defining political advertising will require additional thought.

11:45 a.m.

Liberal

Frank Baylis Liberal Pierrefonds—Dollard, QC

If we define political advertising, we should look at what Full Fact is saying and then put in some mechanism to track these things. They may be fake. They may be mal-whatever, whatever captures that, at least within that context of who's advertising politically. There could be things outside of that, though.

11:45 a.m.

Harvard University, As an Individual

Dr. Claire Wardle

Exactly. They're saying that, at a minimum, there should be a transparent database, for example on Facebook, where people are paying to promote posts—essentially a form of advertising—around an election period.
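
To make the Full Fact proposal concrete, here is a rough sketch of a minimal public ad archive, written with Python's built-in SQLite module. Every table and column name is hypothetical; what such a database should actually record is a policy question, not something settled in this testimony.

```python
# A hypothetical schema for a public archive of paid political promotion:
# who paid, what ran, and when it ran around an election period.
import sqlite3

conn = sqlite3.connect("ad_archive.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS promoted_posts (
        ad_id        TEXT PRIMARY KEY,
        platform     TEXT NOT NULL,   -- service that served the promotion
        sponsor      TEXT NOT NULL,   -- who paid to promote the post
        content      TEXT NOT NULL,   -- the creative, or its text
        first_shown  TEXT NOT NULL,   -- ISO 8601 date
        last_shown   TEXT,
        spend_cad    REAL             -- disclosed spend, if any
    )
""")
conn.commit()
conn.close()
```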

11:45 a.m.

Liberal

Frank Baylis Liberal Pierrefonds—Dollard, QC

Okay.

How much time do I have left?

11:45 a.m.

Liberal

The Vice-Chair Liberal Nathaniel Erskine-Smith

You have 50 seconds.

11:45 a.m.

NDP

Charlie Angus NDP Timmins—James Bay, ON

Never ask; just go.

11:45 a.m.

Liberal

Frank Baylis Liberal Pierrefonds—Dollard, QC

There you go. It sounds like I have a lot more.

Mr. Harris, your point about its being a whack-a-mole problem.... You've certainly done a lot of thinking about this issue. You talked about putting limits on technology. Is it possible that we have to go the other way and go even further into AI, so that someone could build a device to tell you where and how you're being manipulated? Ms. Wardle is saying to look it up in a database and at least make it transparent that way, but as you said, they're going to get better and better, and they're going to use all this technology against us. Would the next step not be for someone to design a technology that tells you, if you see this ad, this is how it's fooling you, or if you see that ad, this is where it was posted? Have you thought along those lines of using technology to counter technology, in this sense?

11:45 a.m.

Co-Founder and Executive Director, Center for Humane Technology

Tristan Harris

Yes, this is already the case, in some sense. The human eye in the future will not be able to discern the difference when something has been algorithmically generated, where a computer generates the video or the image of the person you're speaking with. You will literally not be able to do it. You have two options. Either you try to limit the ability of people to create those kinds of deceiving things or you try to create counter-artificial intelligences to fight the AIs that are trying to deceive you.

Increasingly, we're already having to do that. The U.S. Department of Defense, I believe, was publicly.... There was an article about how they're trying to do that. In terms of a framework, I think what we need to do is start by saying that the human being is vulnerable, based on an understanding, an honest understanding and a humble understanding, of how we really work. How do we then protect ourselves from the way all technology works?

By the way, this also applies to addiction, the mental health of young people, loneliness, alienation, and polarization. These are all on a spectrum of effects, once you understand the machinery of how we really work.
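
The counter-AI Mr. Harris raises is, in practice, a classification problem: train a model to separate real images from generated ones. The sketch below fine-tunes a stock ResNet for that purpose; the directory layout, model choice, and hyperparameters are assumptions for illustration, and a real detector would need far more than one pass over a small dataset.

```python
# Illustrative detector: fine-tune a pretrained ResNet-18 to classify
# frames as real or generated. Assumes torchvision >= 0.13 and a
# hypothetical layout: data/train/real/*.jpg and data/train/fake/*.jpg.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

train_set = datasets.ImageFolder("data/train", transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, 2)   # two classes: real vs. fake

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:                   # one illustrative epoch
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```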