Thank you, Mr. Chair.
Thank you for the invitation to appear before the committee today to talk about the important issue of ideologically motivated violent extremism in Canada.
My name is David Tessler and I am the public policy manager on Meta's counterterrorism and dangerous organizations and individuals team.
With me today is Rachel Curran, public policy manager for Canada.
Meta invests billions of dollars each year in people and technology to keep our platform safe. We have tripled the number of people working on safety and security to more than 40,000 globally. We continue to refine our policies based on direct feedback from experts and impacted communities to address new risks as they emerge. We are a pioneer in artificial intelligence technology that removes harmful content at scale, which enables us to remove the vast majority of terrorism- and organized hate-related content before any users report it.
Our policies around platform content are contained in our community standards, which outline what is and what is not allowed on our platforms. The most relevant sections for this discussion are entitled “violence and incitement” and “dangerous individuals and organizations”.
With respect to violence and incitement, we aim to prevent potential offline harm that may be related to content on Facebook, so we remove language that incites or facilitates serious violence. We remove content, disable accounts and work with law enforcement when we believe there's a genuine risk of physical harm or direct threats to public safety.
We also do not allow any organizations or individuals who proclaim a violent mission or who are engaged in violence to have a presence on our platforms. We follow an extensive process to determine which organizations and individuals meet our thresholds of “dangerous”, and we have worked with a number of different academics and organizations around the world, including here in Canada, to refine this process.
The “dangerous” organizations and individuals we focus on include those involved in terrorist activities, organized hate, mass or serial murder, human trafficking, organized violence or criminal activity. Our work is ongoing. We are constantly evaluating individuals and groups against this policy as they are brought to our attention. We use a combination of technology, reports from our community and human review to enforce our policies. We proactively look for and review reports of prohibited content and remove it in line with our community standards.
Enforcement of our policies is not perfect, but we're getting better by the month. We report our efforts and results quarterly and publicly in our community standards enforcement reports.
The second important point, beyond noting that these standards exist, is that we are always working to evolve our policies in response to stakeholder input and current real-world contexts. Our content policy team works with subject matter experts from across Canada and around the world who are dedicated to following trends across a spectrum of issues, including hate speech and organized hate.
We also regularly team up with other companies, governments and NGOs because we know those seeking to abuse digital platforms attempt to do so not solely on our apps. For instance, in 2017, we, along with YouTube, Microsoft and Twitter, launched the Global Internet Forum to Counter Terrorism, or GIFCT. The forum, which is now an independent non-profit, brings together the technology industry, government, civil society and academia to foster collaboration and information sharing to counter terrorist and violent extremist activity online.
Now I'll turn it over to my colleague, Rachel.