Evidence of meeting #114 for Procedure and House Affairs in the 42nd Parliament, 1st Session. (The original version is on Parliament’s site, as are the minutes.) The winning word was elections.

A recording is available from Parliament.

On the agenda

MPs speaking

Also speaking

David Moscrop  As an Individual
Sherri Hadskey  Commissioner of Elections, Louisiana Secretary of State
Victoria Henry  Digital Rights Campaigner, Open Media Engagement Network
Sébastien Corriveau  Leader, Rhinoceros Party
Chris Aylward  National President, Public Service Alliance of Canada
Pippa Norris  Professor of Government Relations and Laureate Fellow, University of Sydney, McGuire Lecturer in Comparative Politics, Harvard, Director of the Electoral Integrity Project, As an Individual
Angela Nagy  Former Chief Executive Officer, Kelowna - Lake Country, Green Party of Canada, As an Individual
Leonid Sirota  Lecturer, Auckland University of Technology, As an Individual
Morna Ballantyne  Special Assistant to the National President, Public Service Alliance of Canada
Kevin Chan  Global Director and Head of Public Policy, Facebook Canada, Facebook Inc.
Carlos Monje  Director of Public Policy, Twitter - United States and Canada, Twitter Inc.
Michele Austin  Head, Government, Public Policy, Twitter Canada, Twitter Inc.

6:15 p.m.

NDP

Nathan Cullen NDP Skeena—Bulkley Valley, BC

Could you say that again? Sorry, I missed it, Michele.

June 7th, 2018 / 6:15 p.m.

Michele Austin Head, Government, Public Policy, Twitter Canada, Twitter Inc.

Yes, we should be able to see all of those things. The kind of behaviour you are describing is not acceptable. We're very aware of that. We are working very hard on the health and behaviour of the platform to improve that. A violation of the terms of service that you're speaking of is something that we want to hear about, that users can file tickets and cases about, and that we are acting on in a much more aggressive way than previously.

6:15 p.m.

Global Director and Head of Public Policy, Facebook Canada, Facebook Inc.

Kevin Chan

I will say a few things, if I may, Mr. Chair.

6:15 p.m.

Conservative

The Vice-Chair Conservative Blake Richards

Please try to keep it very brief.

6:15 p.m.

Liberal

Chris Bittle Liberal St. Catharines, ON

I'll actually let you go into my time. I'm interested in hearing the answer.

6:15 p.m.

Conservative

The Vice-Chair Conservative Blake Richards

There you go, then.

6:15 p.m.

Global Director and Head of Public Policy, Facebook Canada, Facebook Inc.

Kevin Chan

Thank you, sir.

One of the cornerstones of being on Facebook is actually our authentic identity policy, which you may be aware of. If you're a private user of Facebook, you'll know that typically a Kevin Chan or a Michele Austin or a Nathan Cullen would in fact be themselves on Facebook. We think that is actually the best way, the best first cut at trying to address this issue of being accountable for what you say. In most other places on the Internet, it's like the old New Yorker cartoon, where they say, “On the Internet, nobody knows you're a dog”—

6:15 p.m.

NDP

Nathan Cullen NDP Skeena—Bulkley Valley, BC

You have 83 million fake accounts.

6:15 p.m.

Global Director and Head of Public Policy, Facebook Canada, Facebook Inc.

Kevin Chan

I am not familiar with that number, but I would say that in our community standards, our transparency report that was just released, for Q1, we disabled about 583 million fake accounts, most within minutes of registration. The reason we're able to do that before any individual can actually find and report a fake account is that we're using artificial intelligence technology, a lot of which comes from the pioneering research in Canada. That is actually how we're able to apply machine learning and pattern recognition to identify fake accounts as they are registered on the platform.

I have one broader thing for the committee to consider. I think we were slow to identify the challenges emerging from the U.S. presidential election. I've said it before and I would like to reiterate that. When you look at subsequent elections in countries around the world—in France, in Italy, in the special election in Alabama, in the Irish referendum—these are places where we have applied the election integrity artificial intelligence tools against things like fake accounts. I'm pleased to say that while we're not perfect—and I would never say that—the phenomenon of fake accounts has not had a material impact on those elections.

I think we are getting better. I would never say that we're perfect, but we continue to refine our ability to proactively detect fake accounts and take them down. Again, I point you to the German election, for which independent studies confirmed that fake accounts did not play a role in the outcome.

6:15 p.m.

Liberal

Chris Bittle Liberal St. Catharines, ON

I'll jump in here because I'm losing my time. It's been borrowed.

6:15 p.m.

Conservative

The Vice-Chair Conservative Blake Richards

You do have four and a half minutes.

6:15 p.m.

Liberal

Chris Bittle Liberal St. Catharines, ON

Thank you.

I'm still seeing, especially on Twitter, that you get followed by the person without the photograph, tom@tom36472942—

6:15 p.m.

NDP

Nathan Cullen NDP Skeena—Bulkley Valley, BC

Oh, he follows me, too.

6:15 p.m.

Voices

Oh, oh!

6:15 p.m.

Liberal

Chris Bittle Liberal St. Catharines, ON

Yes, exactly.

I guess I'm troubled. In terms of James Comey, I don't know what type of credibility he has, but he does know a thing or two about security and issues involving elections. He was in Canada recently and he said Canada is at risk. Again—and I think Mr. Cullen brought it up—it's not necessarily the political ads, and maybe next time around you guys will be great at fixing things like the Macedonian sitting in the basement. I went back to the page that I talked to Mr. Chan about. He mentioned that particular content wasn't up there, but there was the Prime Minister with his hand open and a Nazi flag on his hand. There was a post about Liberals in Britain wanting to turn Buckingham Palace into a mosque. This is the type of mean production that gets out there, and that is meant to divide us. It's on both sides, and I see it on both sides. It's not just a right-wing thing. It's not just a left-wing thing. You guys are going to be at the forefront of this. As a lawmaker and as a regulator, this frightens me, because you guys are so difficult to regulate due to your uniqueness.

I don't know. Can you comment on that? Are we going to be in a good place for 2019, given that there are experts telling us we should be worried?

6:20 p.m.

Global Director and Head of Public Policy, Facebook Canada, Facebook Inc.

Kevin Chan

It's hard to know. Sometimes I stare at the screen, and I'm not really sure who should go first or who should go second.

I will address the substantive on this challenge of addressing misinformation online in a moment, but I think it is incumbent on all of us to be very wary of—and I'm sure that's not what you intend, sir—what others may interpret as potentially some form of censorship of what people can say. I think that's something that we're very mindful of. We have taken an approach on misinformation that's a little bit different. I'm not sure that we want to be watching over our users—and I don't think users would want that—to be able to say that we authorize them to say this and we don't authorize them to say something else.

What we do is ensure that we are reducing the spread of misinformation on Facebook. We do this in three ways, three ways that I think are important when we try to understand what we've learned from the past few years.

The first thing, as it turns out, is that the majority of pages and fake accounts that are set up are actually motivated by economic incentives. These are people who create a kind of website somewhere. They have very little content—probably very poor, low-quality content, probably spammy content. They plaster the site with ads, and then they share links on platforms like Twitter, Facebook, and any other social media platform. It's clickbait, so it's designed to get you to see a very salacious headline and click through. As soon as you click through to the website, they monetize.

We've done a number of things to ensure that they can no longer do that. First, we are using artificial intelligence to identify sites that are actually of this nature, and we downrank them or prevent certain content from being shared as spam.

We are also ensuring that you can't spoof domains on our website. If you are pretending to sound like a legitimate website very close to The Globe and Mail or The New York Times, that is no longer possible using our technical measures. We are also ensuring that from a technical standpoint you're no longer able to use Facebook ads to monetize on your website.

The second thing we're doing is for the fake accounts that are set up to sow division, as you say, or to be mischievous in nature and that are not financially motivated. We are using artificial intelligence to identify patterns about these fake accounts and then take them down. As I said earlier, in Q1 we disabled about 583 million fake accounts. In the lead-up to the French and German elections, we took down tens of thousands of accounts that we proactively detected as being fake.

Then, of course, the last thing I should really stress which is very important in this is that we are putting in tremendous resources, and we are already implementing all these measures directly on the platform. I would say, of course, that at the end of the day the final and ultimate backstop is to ensure that when people do come across certain content online, whether it's on Facebook or anywhere else online, they have the critical digital literacy skills to understand that this stuff may actually not be authentic or high-quality information. That's where the partnerships that we have, such as with MediaSmarts on digital news literacy, are hoping to make an effort. I think public awareness campaigns are actually quite important. That would be the first element of what we're trying to do.

6:20 p.m.

Conservative

The Vice-Chair Conservative Blake Richards

Thank you, Mr. Chan.

Mr. Monje, I'll give you a chance to respond as well.

6:20 p.m.

Director of Public Policy, Twitter - United States and Canada, Twitter Inc.

Carlos Monje

The way you phrased that question means you understand the complexity of it.

I echo a lot of what Kevin just said, that we have similar approaches but very different platforms. I think what Twitter brings to our fight against disinformation, against efforts to manipulate the platform, and against efforts to distract people is to look at the signals and the behaviour first, and the content second. We operate in more than 100 countries, and in many more than 100 languages. We have to get smarter about how we use our machine learning, our artificial intelligence, to spot trouble before it kicks up and really causes challenges.

I think there are certain areas that are more black and white than the issues you guys have been focused on today. Terrorism is a great example. When we started putting our anti-spam technology towards the fight against terrorism, we were taking down 25% of accounts before anybody else told us about them. Today that number is 94%. We've taken down 1.2 million accounts since the middle of 2015 when we started using those tools. We've gotten to the point now where 75% of terrorist accounts, when we take them down, haven't been able to tweet once. Instead of content, they're saying, “Go do jihad”. They're coming in from places we've already seen. They're using email addresses or IP addresses that we know of. They're following people who we know are bad folks.

I'm using that as an example of how when it's black and white it's easy, or it's easier. Another example of a black and white issue is child sexual exploitation. There's no good-use case on our platform for child sexual exploitation. Abuse is harder. Misinformation is a lot harder, but that doesn't mean that we're stopping. We are really taking a harder look at the signals that indicate an abusive interaction. such as when something isn't being liked, whether you're talking about it in English, French, or Swahili, and whether you're talking about contextual cues that we wouldn't be able to understand.

On the issue of disinformation in particular, we're doing a lot of the things that Kevin described. An important approach that we're taking in general, and one that we're very excited about, is trying to figure out how we measure these issues in such a way that our engineers can aim at them. Jack Dorsey, our CEO, announced an effort he's calling the health of the conversation on the platform. That circles around four issues. Do we agree on what the facts are, or are fake facts driving the conversation? Do we agree on what's important, or is distraction taking us away from the important issues? Are we open to alternative ideas? This means is there receptivity or toxicity? That's the opposite of it. Then, are we exposed to different ideas, different perspectives? I think we're already pretty healthy about that on Twitter. If you say that cats are better than dogs, you're going to hear about it from your friends and from others.

We've gone out to researchers around the world and said tell us how we can measure; tell us what data we have and what data we need, and then we can measure our policy changes, our enforcement changes, against those.

Right now, we measure the health of the company on very understandable things. How many people do we have? How many monthly users do we have? How much time are they spending on the platform? How many advertisers do we have and how much are they spending? Those are important things for the bottom line for Wall Street. For the health of the conversation on Twitter, which is why people come to Twitter, it's to have a conversation with the world and figure out what's happening.

If we can get those numbers right, we can measure changes. We can do A/B testing against it, and we think we have the best engineers anywhere. We think if we give them a target to aim at, we can get to these really, really, really difficult gnarly issues that have a lot of black, white, and grey in between.

6:25 p.m.

Conservative

The Vice-Chair Conservative Blake Richards

Thank you.

We're pretty well getting to the end of our time, but we did start a few minutes late, so I'm going to allow just one more round. It will be five minutes with Mr. Reid.

6:25 p.m.

Conservative

Scott Reid Conservative Lanark—Frontenac—Kingston, ON

Thank you, Mr. Chair.

I just want to say—and this is not a question, but a statement—that I think any reasonable legislator expects the best efforts from groups like Facebook and Twitter, as opposed to perfection. In the interest of collective humility for members of this committee, I think that the Government of Canada is, after all, the organization that brought the world Canada Post, the Phoenix system, and the long-gun registry, so perhaps expecting perfection from others.... Canada Post had its annual Christmas mail strikes back in the 1970s and 1980s, for those of us with long memories. Perhaps expecting perfection from others is not entirely reasonable. What is reasonable is expecting best efforts.

My impression is that the fundamental problem you guys face is that you're in a kind of arms race with regard to artificial intelligence. You're trying to develop AIs to spot issues that are being generated by AIs themselves, with the purpose of fooling real people. Just a few days ago, I had the chance to sit down with my 23-year-old stepson and his girlfriend, who were watching a fascinating documentary about how people are trying to cause advertisers to be fooled into thinking that they are hitting real eyeballs by creating fake videos to maximize the number of hits when the name Spiderman or Elsa is clicked on. There were some other names, too—some very interesting video names like Spiderman, Elsa, Superman, and on it goes.

What I'm getting at is that there is a desire to stay ahead, but I don't think it's reasonable at all to expect one to go beyond and achieve a zero level. Is that unreasonable, or is it the case that there are some places where you can achieve perfection in blocking these things?

6:30 p.m.

Global Director and Head of Public Policy, Facebook Canada, Facebook Inc.

Kevin Chan

You're right that the threats are always evolving. As I mentioned a moment ago, I think we were slow as a company to spot the new types of threats that emerged out of the U.S. presidential election. Since then, we have spent significant resources and significant time, and have hired—we're doubling our security team—to try to address these things.

AI is going to play a huge role in that. At scale, with 23 million people, and 2.2 billion people around the world using our service, you're right that if everybody posts just one time a day, that is, by definition, 2.2 billion pieces of content. AI will allow us to use automation to identify bad actors.

You're absolutely right that we cannot guarantee 100% accuracy. It goes the other way, too, sir. I think what you're alluding to is that we want to be very careful about the false positive scenario, in which you accidentally take down things that are legitimate content and that don't violate community standards. We do have to be very careful about that.

I do want to assure you—and we have said this in other places as well—that while we are certainly dedicating a lot of resources, staff, and time to addressing these concerns that we know about, we are obviously also looking ahead to identify threats that we think are emerging, to get ahead of this, so that we are on top as electoral events happen around the world.

6:30 p.m.

Conservative

Scott Reid Conservative Lanark—Frontenac—Kingston, ON

Thank you.

For our guests from Twitter, rather than giving a second answer to the same question, you made reference to clause 282.4 of the legislation, titled “Undue influence by foreigners”. You had a proposal, but it wasn't exactly clear to me what it is you're proposing. Could you run through that again?

6:30 p.m.

Head, Government, Public Policy, Twitter Canada, Twitter Inc.

Michele Austin

Yes. That's the section that says where you do not knowingly allow foreign advertisers to advertise. The question is around the definition of “knowingly”. Our concern is with regard to how that will be interpreted and how that will be enforced in real time.

If you're talking about documentaries, Mr. Reid, there's an excellent one called Abacus: Small Enough to Jail, which talks about how, during the financial crisis, a small Chinese bank in New York was jailed because it was the most accessible, rather than the big banks. Our concern is that someone is misidentified or falsely identified, or that something has not been flagged for us in an appropriate way. Therefore, we end up having to defend the actions of some Turkish spam army in Canada, which seems unreasonable.

6:30 p.m.

Director of Public Policy, Twitter - United States and Canada, Twitter Inc.

Carlos Monje

If I could only add, going back to your previous.... You're 100% right. We're not going to be 100%. We have to keep on fighting the new fights, not just fighting the old fights. It is in our financial interest to get this right. It is in our bottom line interest to make sure that, when you come to Twitter and you click on an ad, it's who it says it is. We want to be in a position to be actively looking for this stuff and taking it down. In our conversations with governments around the world, it's important to understand having a safe harbour for good faith efforts to police the platform and do it well.

6:30 p.m.

Conservative

The Vice-Chair Conservative Blake Richards

Thank you.

Thank you to all three of our witnesses for being here today and for the thoroughness of your responses. We appreciate that. That does bring this meeting to a close.

We'll reconvene on Monday at—