Evidence of meeting #156 for Justice and Human Rights in the 42nd Parliament, 1st Session. (The original version is on Parliament’s site, as are the minutes.)

A recording is available from Parliament.

Also speaking

Kevin Chan, Global Policy Director, Facebook Inc.

8:50 a.m.

Liberal

The Chair Liberal Anthony Housefather

Good morning, everyone, and welcome to this meeting of the Standing Committee on Justice and Human Rights as we resume our study on online hate.

I would like to welcome Ms. Raitt to the committee as vice-chair for her first meeting.

It's an honour and a pleasure to welcome Mr. Kevin Chan, who is the Global Policy Director at Facebook, to our study. Mr. Chan, I want to thank you and Facebook for being willing to participate in our study on online hate. We know that your company takes this seriously, and we really appreciate your being here to educate us on what you are doing.

We will start with you. The floor is yours, sir.

8:50 a.m.

Kevin Chan Global Policy Director, Facebook Inc.

Thank you very much.

Just for the record, Mr. Chair, I am but one of many different global policy directors at Facebook, so I'm not “the” director, just “a” director of the company.

Thank you, Mr. Chair, and members. My name is Kevin Chan, and I am the head of public policy at Facebook Canada. I am pleased to contribute to your study of online hate.

We want Facebook to be a place where people can express themselves freely and safely around the world. With this goal, we have invested heavily in people, technology and partnerships to examine and address the abuse of our platform by bad actors.

We have worked swiftly to remove harmful content and hate figures from our platform in line with our policies, and we also remain committed to working with world leaders, governments and across the technology industry to help counter hate speech and the threat of terrorism.

Everyone at our company remains shocked and deeply saddened by the recent tragedies in New Zealand and Sri Lanka, and our hearts go out to the victims, their families and the communities affected by the horrific terrorist attacks.

With regard to the event in Christchurch, Facebook worked closely with the New Zealand police as they responded to the attack, and we are continuing to support their investigation.

In the immediate aftermath, we removed the original Facebook Live video within minutes of the police's outreach to us and hashed it so that shares visually similar to that video would be detected and automatically removed from Facebook and Instagram. Some variants, such as screen recordings, were more difficult to detect, so we also expanded to additional detection systems, including the use of audio technology.

This meant that in the first 24 hours we removed about 1.5 million videos of the attack globally. More than 1.2 million of those videos were blocked at upload and were, therefore, prevented from being seen on our services.
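To make the hashing approach described above concrete, here is a minimal sketch of perceptual hashing for near-duplicate detection. Facebook's production systems are not public, so the 8x8 average hash and the match threshold below are illustrative assumptions, not the actual pipeline.

```python
# Illustrative perceptual-hash matching (assumed, simplified scheme).
# A real system would hash many frames per video and use far more
# robust features; this sketch shows the core idea only.

def average_hash(pixels):
    """Compute a 64-bit hash from 64 grayscale values (an 8x8 thumbnail
    of a video frame, assumed to be produced upstream)."""
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p >= mean else 0)
    return bits

def hamming_distance(h1, h2):
    """Count differing bits between two 64-bit hashes."""
    return bin(h1 ^ h2).count("1")

def matches_banned(candidate, banned_hashes, threshold=10):
    """Flag a frame whose hash is within `threshold` bits of any banned
    hash. Mild edits (re-encoding, slight cropping) flip few bits, so
    nearby hashes imply visually similar frames; `threshold` is a
    hypothetical tuning parameter."""
    return any(hamming_distance(candidate, h) <= threshold
               for h in banned_hashes)
```

Screen recordings shift pixel statistics enough that hashes like this drift past any reasonable threshold, which is consistent with the testimony's point that audio-based detection had to be added as a complementary signal.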

As you will be aware, Facebook is a founding member of the Global Internet Forum to Counter Terrorism, or GIFCT, which coordinates regularly on counterterrorism. We have been in close contact since the attack, sharing more than 800 visually distinct videos related to the attack via our collective database, along with URLs and context on our enforcement approaches. This incident highlights the importance of industry co-operation in countering the range of terrorists and violent extremists operating online.

At the same time, we have been working to understand how we can prevent such abuse in the future. Last month, Facebook signed the Christchurch Call to eliminate terrorist and violent extremist content online and took immediate action on live streaming.

Specifically, people who have broken certain rules on Facebook, including our dangerous organizations and individuals policy, will be restricted from using Facebook Live. We are also investing $7.5 million in new research partnerships with leading academics to address the type of adversarial media manipulation we saw after Christchurch, when some people modified the video to avoid detection in order to repost it after it had been taken down.

With regard to the tragedy in Sri Lanka, we know that the misuse and abuse of our platform may amplify underlying ethnic and religious tensions and contribute to offline harm in some parts of the world. This is especially true in countries like Sri Lanka, where many people are using the Internet for the first time and social media can be used to spread hate and fuel tension on the ground.

That's why in 2018 we commissioned a human rights impact assessment on the role of our services, which found that we weren't doing enough to help prevent our platform from being used to foment division and incite violence. We've been taking a number of steps, including building a dedicated team to work across the company to ensure we're building products, policies and programs with these situations in mind, and learning the lessons from our experience in Myanmar. We've also been building up our content review teams to ensure we have people with the right language skills and understanding of the cultural context.

We've been investing in technology and programs in places where we have identified heightened content risks and are taking steps to get ahead of them.

In the wake of the atrocities in Sri Lanka, we saw our community come together to help one another. Following the terror attacks, and up until the enforcement of the social media ban on April 21, more than a quarter of a million people used Facebook's Safety Check tool to mark themselves safe and reassure their friends and loved ones. Following the attacks, there were over 1,000 offers or requests for help on Facebook's Crisis Response tool.

These events are a painful reminder that while we have come a long way there's always more we can and should do. The price of getting this wrong can be the very highest.

I'd like now to provide a general overview of how we approach hate speech online. Facebook's most important responsibility is keeping people safe, both online and off, to help protect what's best about the online world. Ultimately, we want to give people the power to build communities and bring the world closer together through a diversity of expression and experiences on our platform.

Our community standards are clear: Hate can take many forms and none of it is permitted in our global community. In fact, Facebook rejects not just hate speech, but all hateful ideologies, and we believe we've made significant progress. As our policies tighten in one area, people will shift language and approach to try to get around them. For example, people talk about white nationalism to avoid our ban on white supremacy, so now we ban that too.

People who are determined to spread hate will find a way to skirt the rules. One area we have strengthened a great deal is the designation of hate figures and hate organizations based on a broader range of signals, not just their on-platform activity. Working with external Canadian experts led to six hate figures and hate organizations (Faith Goldy, Kevin Goudreau, the Canadian Nationalist Front, the Aryan Strikeforce, the Wolves of Odin and the Soldiers of Odin) being banned from any further presence on Facebook and Instagram. We will also remove any praise, representation or support for them. Worldwide, we have already banned more than 200 white supremacist groups under our dangerous organizations policy.

In addition to this policy change, we have strengthened our approach to hate speech over the last few years around three Ps. The first is people. We have tripled the number of people at Facebook working on safety and security globally, to over 30,000.

The second is products. We continue to invest in cutting-edge technology and our product teams continue to build essential tools like artificial intelligence, smart automation and machine learning that help us remove much of this content, often at the point of upload.

The third is partnerships. In addition to the GIFCT, in Canada we have worked with indigenous organizations to better understand and enforce against hateful slurs on our platform. We have also partnered with Equal Voice to develop resources to keep candidates, in particular women candidates, safe online for the upcoming federal election. We have partnered with the Canada Centre for Community Engagement and Prevention of Violence on a workshop on counter-speech and counter-radicalization.

Underpinning all of this is our commitment to transparency. In April 2018, we published the internal guidelines our teams use to enforce our community standards. We also published our first-ever community standards enforcement report, describing the amount and types of content we have taken action against, as well as the amount of content we proactively flagged for review. We publish the report on a semi-annual basis, and in our most recent report, released last month, we were proud to share that we are continuing to make progress on identifying hate speech.

We now proactively detect 65% of the content we remove, up from 24% just over a year ago when we first shared our efforts. In the first quarter of 2019 we took down four million hate speech posts and we continue to invest in technology to expand our abilities to detect this content across different languages and regions.
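As a small illustration of the proactive-detection figure just cited: the 65% rate and the four million removals come from the testimony, but the breakdown below is hypothetical arithmetic, not reported data.

```python
# Proactive rate: the share of removed content that automated systems
# flagged before any user reported it. Figures below are illustrative.

def proactive_rate(flagged_by_systems, total_removed):
    """Fraction of removals initiated by detection systems rather than
    user reports."""
    return flagged_by_systems / total_removed

# If 4,000,000 hate speech posts were removed in a quarter and systems
# had flagged 2,600,000 of them first, the proactive rate would be 65%.
print(f"{proactive_rate(2_600_000, 4_000_000):.0%}")  # -> 65%
```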

I would like to conclude with some thoughts on future regulation in this space. New rules for the Internet should preserve what is best about the Internet and the digital economy: fostering innovation, supporting growth for small businesses, and enabling freedom of expression while simultaneously protecting society from broader harms. These are incredibly complex issues to get right and we want to work with governments, academics and civil society around the world to ensure new regulations are effective.

As the number of users on Facebook has grown, and as the challenges of balancing freedom of expression and safety have increased, we have come to realize that Facebook should not be making so many of these difficult decisions, which is why we will create an external oversight board to help govern speech on Facebook by the end of this year. This oversight board will be independent of Facebook and will be a final level of appeal for what stays up and what goes down on the platform. Our thinking at this time is that the decisions of this oversight board will be public and binding on Facebook.

Even with the oversight board in place, we know that people use many different online platforms and services to communicate, and we would all be better off if there were clear baseline standards for all platforms. This is why we would like to work with governments to establish rules for what is permissible speech online. We have been working with President Macron of France on exactly this kind of project, and we would welcome the opportunity to engage with more countries going forward.

Thank you for the opportunity to present before you today, and I look forward to answering your questions.

9 a.m.

Liberal

The Chair Liberal Anthony Housefather

Thank you very much for the presentation.

We will now go to questions.

Ms. Raitt.

9 a.m.

Conservative

Lisa Raitt Conservative Milton, ON

Thank you very much.

Mr. Chan, I'm interested in the last part of your presentation, because it's new to hear that you're thinking about an external board. I'm wondering if you could tell me who will be appointing the members of that board.

9 a.m.

Global Policy Director, Facebook Inc.

Kevin Chan

It is actually quite new; you're right, ma'am. It's something we announced back in November.

We are in consultations right now around the world, including in Canada. We held a Canadian round table earlier in May, to which we brought various groups from different perspectives and different sectors to talk about these types of questions. One of the open questions is very much how these people will be appointed and how we will get appropriate representation from around the world, because it would be a global board. Some of the decisions they will have to make will be very local in nature.

We don't have a specific final position on this yet, but I can share with you that we have heard feedback that if it were, let's say, Facebook making that decision initially about who would be appointed, I think some people would say, “Well, we're not sure that that's going to get you the appropriate level of independence you're seeking.” If it were, let's say, open to applications, for example, then I think we've also heard feedback that the challenge will be that we will in some way end up not representing certain groups that feel they should be represented on this board.

9 a.m.

Conservative

Lisa Raitt Conservative Milton, ON

I want to talk about enforcement. Correct me if I'm wrong, but how much control of shareholder stock does Mr. Zuckerberg have?

9 a.m.

Global Policy Director, Facebook Inc.

Kevin Chan

I don't have that information on me, but it's a majority.

9 a.m.

Conservative

Lisa Raitt Conservative Milton, ON

It is. It's 57%, right? Essentially, therefore, as the CEO and the majority shareholder, he is the sole proprietor at the end of the day. He controls it all. He controls exactly what happens in the company. He controls the board as well.

The U.K. is doing something very interesting. They have a white paper out right now and they're thinking of introducing the concept of duty of care. There's a concern about enforcement, which you were just telling us about, in terms of a process to find people to sit on a board. It seems to take a long period of time. If we're trying to figure out how we're going to go about finding a process to appoint people, we're nowhere near appointing a board, and yet the issue is urgent and current.

I'm just curious, Mr. Chan: What's the thought process around not having your CEO and majority shareholder coming to be accountable to parliamentarians, and to members of Congress as well, on an issue as important as this?

9 a.m.

Global Policy Director, Facebook Inc.

Kevin Chan

I'm sorry, Madam, but are you referring to today's committee or are you referring to...?

9 a.m.

Conservative

Lisa Raitt Conservative Milton, ON

I mean anything in general. He wasn't asked to come to our committee, because I think we learned our lesson the first time around with the conflict-of-interest one. He did not appear in front of the U.S. Congress in April when they were talking—

9 a.m.

Liberal

The Chair Liberal Anthony Housefather

He was invited to this committee.

9 a.m.

Conservative

Lisa Raitt Conservative Milton, ON

Oh, sorry, he was. That's my bad; I'm new.

9 a.m.

Global Policy Director, Facebook Inc.

Kevin Chan

That's okay.

Well, I mean—

9 a.m.

Conservative

Lisa Raitt Conservative Milton, ON

It's not that you're bad, by the way.

9 a.m.

Global Policy Director, Facebook Inc.

Kevin Chan

I appreciate that, and I tell my kids that every day, and I tell myself that too.

9 a.m.

Voices

Oh, oh!

9 a.m.

Global Policy Director, Facebook Inc.

Kevin Chan

The CEO has been at legislative bodies previously. He did not come before a legislative body in Canada. In this case, there were specific things that I think the committee wished to discuss in terms of hate speech. We usually try to send the experts who are equipped to engage on these issues. Today, that would be me. On previous occasions, it would have been other people, but that is based on the expertise that we bring to the table.

9:05 a.m.

Conservative

Lisa Raitt Conservative Milton, ON

Let's ask the expert. What is your response going to be to the white paper, specifically on the imposition of a duty of care and accountability? I'm very interested in that. Our only accountability is asking you to send the most appropriate decision-maker to the table. We've asked for it, and it doesn't happen, so I guess we're going to have to resort to other means. The U.K. has taken that position. What is your response to that?

9:05 a.m.

Global Policy Director, Facebook Inc.

Kevin Chan

As you know, there is a consultation process, and we are going to provide input into it. Generally, though, if we're talking about hateful or prohibited content on the platform, I think a better way to think about it is prevalence: a general understanding of how much of this content is on the platform. We feel very comfortable with that measure, and we think it appropriate to be measured against it.

As I said, about a year ago we were proactively detecting about 24% of that. We're trying to better that. We're over 60%. We know we still have more work to do, but I think prevalence is an important measurement.

In other parts of the world, there has been talk of timelines: requiring something to be taken down within a certain period. A couple of things are particularly challenging with that. For example, a piece of content may be up for a while but be seen by almost nobody, whereas other content may be up for a very short time yet be seen by a great many people.

We want to get at the question of reach, rather than just whether specific pieces of content are taken down by a certain deadline. In other parts of the world, I think deadline-based rules have led to the unintended consequence of over-censorship of content.

We understand that certain things shouldn't be on the platform. They are a violation of our policies and, in some cases, a violation of local law. We want to act expeditiously on that, but we want to be measured on prevalence and on the ability of our systems to do what we say we're going to do, rather than on specific timelines that lead to these kinds of unintended consequences of censorship.
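To make the prevalence argument concrete, here is a minimal sketch. The view-weighted definition follows Facebook's publicly described enforcement reporting in general terms; the exact methodology, and all the numbers below, are assumptions for illustration.

```python
# Prevalence as a moderation metric, contrasted with takedown deadlines.
# All numbers are hypothetical.

def prevalence(violating_views, total_views):
    """Share of all content views that were views of violating content.
    Weighting by views captures reach: a violating post seen by nobody
    contributes almost nothing; one seen widely dominates."""
    return violating_views / total_views if total_views else 0.0

# Post A violates and stayed up for days but drew 10 views; post B
# violates and was up briefly but drew 1,000,000 views. A fixed takedown
# deadline treats them alike; prevalence weights B far more heavily.
print(prevalence(10 + 1_000_000, 50_000_000))  # ~0.02, i.e. 2% of views
```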

9:05 a.m.

Conservative

Lisa Raitt Conservative Milton, ON

Thank you.

9:05 a.m.

Liberal

The Chair Liberal Anthony Housefather

Mr. Fraser.

June 6th, 2019 / 9:05 a.m.

Liberal

Colin Fraser Liberal West Nova, NS

Thank you, Mr. Chair.

Thank you, Mr. Chan, for being with us today.

I want to pick up on something Ms. Raitt was talking about, and that's Mr. Zuckerberg not appearing at this committee or another committee, where he was summoned to appear.

In March he said he was looking forward to discussing online issues with lawmakers around the world. Here we are, and we did invite him to this committee. He was summoned to a different committee.

It's an important signal of how seriously Facebook takes this issue when the CEO says he's going to meet with lawmakers around the world, and then gets invited to a parliamentary committee (this one), gets summoned to another, and doesn't appear. I'd like to know why he's not here.

9:05 a.m.

Global Policy Director, Facebook Inc.

Kevin Chan

As I mentioned, sir, we have a global policy team that engages with not only governments and lawmakers but civil society and academics around the world. In all cases, we try to send the most appropriate people.

I think you're referring to an op-ed in The Washington Post. Obviously, I think he's trying to indicate that the company's posture, generally, is that we want to engage with as many people as possible. You can appreciate it's going to be challenging, obviously, for him to go everywhere he should be or where people would want him to be, which is why he has a global policy team to help him in that regard.

9:05 a.m.

Liberal

Colin Fraser Liberal West Nova, NS

When he said he was looking forward to discussing it with lawmakers around the world, then, he didn't mean himself specifically; he meant the company.

9:05 a.m.

Global Policy Director, Facebook Inc.

Kevin Chan

Clearly, we're talking about a company that does want to engage.

I want to be very clear. On this particular question of hate speech, but also in many other realms, including election integrity, which I think we are all seized with, we are the company, among the platforms, that has leaned in on this and has done the most.