Thank you very much.
Just for the record, Mr. Chair, I am but one of many different global policy directors at Facebook, so I'm not “the” director, just “a” director of the company.
Thank you, Mr. Chair, and members. My name is Kevin Chan, and I am the head of public policy at Facebook Canada. I am pleased to contribute to your study of online hate.
We want Facebook to be a place where people can express themselves freely and safely around the world. With this goal, we have invested heavily in people, technology and partnerships to examine and address the abuse of our platform by bad actors.
We have worked swiftly to remove harmful content and hate figures from our platform in line with our policies, and we also remain committed to working with world leaders, governments and across the technology industry to help counter hate speech and the threat of terrorism.
Everyone at our company remains shocked and deeply saddened by the recent tragedies in New Zealand and Sri Lanka, and our hearts go out to the victims, their families and the communities affected by the horrific terrorist attacks.
With regard to the event in Christchurch, Facebook worked closely with the New Zealand police as they responded to the attack, and we are continuing to support their investigation.
In the immediate aftermath, we removed the original Facebook Live video within minutes of the police's outreach to us and hashed it so that shares visually similar to that video could be detected and automatically removed from Facebook and Instagram. Some variants, such as screen recordings, were more difficult to detect, so we also expanded our detection systems, including the use of audio technology.
This meant that in the first 24 hours we removed about 1.5 million videos of the attack globally. More than 1.2 million of those videos were blocked at upload and were, therefore, prevented from being seen on our services.
As you will be aware, Facebook is a founding member of the Global Internet Forum to Counter Terrorism, or GIFCT, which coordinates regularly on counterterrorism. We have been in close contact since the attack, sharing more than 800 visually distinct videos related to it via our collective database, along with URLs and context on our enforcement approaches. This incident highlights the importance of industry co-operation across the range of terrorists and violent extremists operating online.
At the same time, we have been working to understand how we can prevent such abuse in the future. Last month, Facebook signed the Christchurch Call to eliminate terrorist and violent extremist content online and has taken immediate action on live streaming.
Specifically, people who have broken certain rules on Facebook, including our dangerous organizations and individuals policy, will be restricted from using Facebook Live. We are also investing $7.5 million in new research partnerships with leading academics to address the type of adversarial media manipulation we saw after Christchurch, when some people modified the video to avoid detection in order to repost it after it had been taken down.
With regard to the tragedy in Sri Lanka, we know that the misuse and abuse of our platform may amplify underlying ethnic and religious tensions and contribute to offline harm in some parts of the world. This is especially true in countries like Sri Lanka, where many people are using the Internet for the first time and social media can be used to spread hate and fuel tension on the ground.
That's why in 2018 we commissioned a human rights impact assessment on the role of our services, which found that we weren't doing enough to help prevent our platform from being used to foment division and incite violence. We've been taking a number of steps, including building a dedicated team to work across the company to ensure we're building products, policies and programs with these situations in mind, and learning the lessons from our experience in Myanmar. We've also been building up our content review teams to ensure we have people with the right language skills and understanding of the cultural context.
We've been investing in technology and programs in places where we have identified heightened content risks and are taking steps to get ahead of them.
In the wake of the atrocities in Sri Lanka, we saw our community come together to help one another. Following the terror attacks and up until the enforcement of the social media ban on April 21, more than a quarter of a million people used Facebook's Safety Check tool to mark themselves safe and reassure their friends and loved ones. Following the attacks, there were over 1,000 offers or requests for help on Facebook's Crisis Response tool.
These events are a painful reminder that while we have come a long way, there is always more we can and should do. The price of getting this wrong can be the very highest.
I'd now like to provide a general overview of how we approach hate speech online. Facebook's most important responsibility is keeping people safe, both online and off, to help protect what's best about the online world. Ultimately, we want to give people the power to build communities and bring the world closer together through a diversity of expression and experiences on our platform.
Our community standards are clear: Hate can take many forms and none of it is permitted in our global community. In fact, Facebook rejects not just hate speech, but all hateful ideologies, and we believe we've made significant progress. As our policies tighten in one area, people will shift language and approach to try to get around them. For example, people talk about white nationalism to avoid our ban on white supremacy, so now we ban that too.
People who are determined to spread hate will find ways to skirt the rules. One area we have strengthened a great deal is the designation of hate figures and hate organizations based on a broader range of signals, not just their on-platform activity. Working with external Canadian experts, we have banned six hate figures and hate organizations from any further presence on Facebook and Instagram: Faith Goldy, Kevin Goudreau, the Canadian Nationalist Front, the Aryan Strikeforce, the Wolves of Odin and Soldiers of Odin. We will also remove any praise, representation or support for them. Worldwide, we have already banned more than 200 white supremacist groups under our dangerous organizations policy.
In addition to this policy change, we have strengthened our approach to hate speech in the last few years, centred around three Ps. The first is people. We have tripled the number of people working on safety and security at Facebook globally, to more than 30,000.
The second is products. We continue to invest in cutting-edge technology and our product teams continue to build essential tools like artificial intelligence, smart automation and machine learning that help us remove much of this content, often at the point of upload.
The third is partnerships. In addition to the GIFCT, in Canada we have worked with indigenous organizations to better understand and enforce against hateful slurs on our platform. We have also partnered with Equal Voice to develop resources to keep candidates, in particular women candidates, safe online for the upcoming federal election. We have partnered with the Canada Centre for Community Engagement and Prevention of Violence on a workshop on counter-speech and counter-radicalization.
Underpinning all of this is our commitment to transparency. In April 2018, we published the internal guidelines that our teams use to enforce our community standards. We also published our first-ever community standards enforcement report, describing the amount and types of content we have taken action against, as well as the amount of content we proactively flagged for review. We publish the report on a semi-annual basis, and in our most recent report, released last month, we were proud to share that we are continuing to make progress on identifying hate speech.
We now proactively detect 65% of the content we remove, up from 24% just over a year ago when we first shared our efforts. In the first quarter of 2019 we took down four million hate speech posts and we continue to invest in technology to expand our abilities to detect this content across different languages and regions.
I would like to conclude with some thoughts on future regulation in this space. New rules for the Internet should preserve what is best about the Internet and the digital economy: fostering innovation, supporting growth for small businesses, and enabling freedom of expression while simultaneously protecting society from broader harms. These are incredibly complex issues to get right and we want to work with governments, academics and civil society around the world to ensure new regulations are effective.
As the number of users on Facebook has grown, and as the challenges of balancing freedom of expression and safety have increased, we have come to realize that Facebook should not be making so many of these difficult decisions on its own, which is why we will create an external oversight board to help govern speech on Facebook by the end of this year. This oversight board will be independent of Facebook and will serve as a final level of appeal for what stays up and what comes down on the platform. Our thinking at this time is that the decisions of this oversight board will be public and binding on Facebook.
Even with the oversight board in place, we know that people use many different online platforms and services to communicate, and we would all be better off if there were clear baseline standards for all platforms. This is why we would like to work with governments to establish rules for what is permissible speech online. We have been working with President Macron of France on exactly this kind of project, and we would welcome the opportunity to engage with more countries going forward.
Thank you for the opportunity to present before you today, and I look forward to answering your questions.