Thank you very much.
Thank you, Mr. Chair, for inviting me to appear today to discuss this study on online hate.
On behalf of Twitter, I'd like to acknowledge the hard work of all committee members and witnesses on this issue. I apologize; my opening remarks are long. There's a lot to unpack. We're a 280-character company though, so maybe they aren't. We'll see how it goes.
Twitter's purpose is to serve the public conversation. Twitter is public by default. When individuals create Twitter accounts and begin tweeting, their tweets are immediately viewable and searchable by anyone around the world. People understand the public nature of Twitter. They come to Twitter expecting to see and join public conversations. As many of you have experienced, tweets can be directly quoted in news articles, and screen grabs of tweets can often be shared by users on other platforms. It is this open and real-time conversation that differentiates Twitter from other digital companies. Any attempts to undermine the integrity of our service erode the core tenet of freedom of expression online, the value upon which our company is based.
Twitter respects and complies with Canadian laws. Twitter does not operate in a separate digital legal world, as has been suggested by some individuals and organizations. Existing Canadian legal frameworks apply to digital spaces, including Twitter.
There has been testimony from previous witnesses supporting investments in digital and media literacy. Twitter agrees with this approach and urges legislators around the world to continuously invest in digital and media literacy. Twitter supports groups that educate users, especially youth, about healthy digital citizenship, online safety and digital skills. Some of our Canadian partners include MediaSmarts—and I will note that they just yesterday released a really excellent report on online hate with regard to youth—Get Cyber Safe, Kids Help Phone, We Matter and Jack.org.
While we welcome everyone to the platform to express themselves, the Twitter rules outline specific policies that explain what types of content and behaviour are permitted. We strive to enforce these rules consistently and impartially. Safety and free expression go hand in hand, both online and in the real world. If people don't feel safe to speak, they won't.
We put the people who use our service first in every step we take. All individuals accessing or using Twitter services must adhere to the policies set forth in the Twitter rules. Failure to do so may result in Twitter's taking one or more enforcement actions, such as temporarily limiting your ability to create posts or interact with other Twitter users; requiring you to remove prohibited content, such as removing a tweet, before you can create new posts or interact with other Twitter users; asking you to verify account ownership with a phone number or email address; or permanently suspending your account.
The Twitter rules enforcement section includes information about the enforcement of the following Twitter rules categories: abuse, child sexual exploitation, private information, sensitive media, violent threats, hateful conduct and terrorism.
I do want to quickly touch on terrorism.
Twitter prohibits terrorist content on its service. We are part of the Global Internet Forum to Counter Terrorism, commonly known as GIFCT, and we endorse the Christchurch Call to Action. Removing terrorist content and violent extremist content is an area in which Twitter has made important progress, with 91% of what we remove being proactively detected by our own technology. Our CEO, Jack Dorsey, attended the Christchurch Call meeting in Paris earlier this month and met with Prime Minister Justin Trudeau to reiterate Twitter's commitment to reduce the risks of live streaming and to remove viral content faster.
Under our hateful conduct policy, you may not “promote violence against or directly attack or threaten” people on the basis of their inclusion in a protected group, such as race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability or serious disease. These include the protected categories identified in the United Nations' Universal Declaration of Human Rights.
The Twitter rules also prohibit accounts that have the primary purpose of inciting harm towards others on the basis of the categories I mentioned previously. We also prohibit individuals who affiliate with organizations that—whether by their own statements or activities, both on and off the platform—“use or promote violence against civilians to further their causes.”
Content on Twitter is generally flagged for review for possible Twitter rules violations through our help centre found at help.twitter.com/forms or in-app reporting. It can also be flagged by law enforcement agencies and governments. We have a global team that manages enforcement of our rules with 24-7 coverage in every language supported on Twitter. We have also built a dedicated reporting flow exclusively for hateful conduct so it is more easily reported to our review teams.
We are improving. During the last six months of 2018, we took enforcement action on more than 612,000 unique accounts for violations of the Twitter rules categories. We are also taking meaningful and substantial steps to remove the burden on users to report abuse to us.
Earlier this year, we made it a priority to take a proactive approach to abuse in addition to relying on people's reports. Now, by using proprietary technology, 38% of abusive content is surfaced proactively for human review instead of relying on reports from people using Twitter. The same technology we use to track spam, platform manipulations and other violations is helping us flag abusive tweets for our team to review. With our focus on reviewing this type of content, we've also expanded our teams in key areas and locations so that we can work quickly to keep people safe. I would note: We are hiring.
The final subject I want to touch on is law enforcement. Information sharing and collaboration are critical to Twitter's success in preventing abuse that disrupts meaningful conversations on the service. Twitter actively works to maintain strong relationships with Canadian law enforcement agencies. We have positive working relationships with the Canadian Centre for Cyber Security, the RCMP, government organizations and provincial and local police forces.
We have an online portal dedicated to law enforcement agencies that allows them to report illegal content such as hate, to submit emergency requests and to make requests for information. I have worked with law enforcement agencies as well as civil society organizations to ensure they know how to use this dedicated portal.
Twitter is committed to building on this momentum, consistent with our goal of improving healthy conversations. We do so in a transparent, open manner with due regard to the complexity of this particular issue.
Thank you. I look forward to your questions.