First of all, I would like to thank the committee for giving us this opportunity to present the security policies we work with at Twitter.
I will continue in English.
As you pointed out, my name is Patricia Cartes. I have the privilege of representing Twitter's trust and safety teams. We work very hard behind the scenes to prevent abuse and to act on every report of abuse we receive on the platform.
By virtue of being Spanish, I tend not to be brief, so I'll try my best to follow the Twitter style and keep it to maybe a bit more than 140 characters, but within my 10 minutes. I will speak a little fast so we will have time to go through more details in the Q and A.
I wanted to start by explaining how the Twitter platform is different from other platforms. We are public, we are widely distributed, and we're conversational. When you hear about abuse online, it tends to be equated with Twitter because we are public: people have access to content on our platform in a way that they perhaps don't on other platforms, where content sits behind privacy layers.
That, of course, also means we have a greater responsibility to ensure that not just our users but also Internet users who may not be on Twitter, but who may see Twitter content beyond our borders, do not encounter abuse on the platform.
We have 313 million users, which might not seem like a big number compared with some of our sister companies; however, the issue of scale at Twitter comes from the number of tweets flowing through the platform, which is one billion every two days. Just to give you an idea, it took three years, two months, and one day to see the billionth tweet, and now we see 500 million tweets in a single day.
We have 79% of our users based outside of the U.S., so even though we were born in San Francisco, we're by no means just an American company. That's why people like me, not being born in the U.S., can have the roles that we have.
We have offices in Singapore, Dublin, and San Francisco that are for the operational support of our users. The reason we have them there is so we can do 24/7 global coverage: so when Singapore goes to sleep, Dublin takes over, and when Dublin goes to sleep, San Francisco takes over.
We also provide support based not just on expertise in each type of abuse. As you can imagine, abuse comes in many forms, from spam to child sexual exploitation, gender-based harassment, and other types of hate speech and extremism, but we also look at market specificities. That's why we work with a number of organizations on the ground that are experts in this field. They advise us on abuse trends, and also on what users in those markets say are the main difficulties they encounter with the platform.
I did want to call to your attention the work we have been doing with MediaSmarts, High Resolves, and Hollaback! Canada, which has been instrumental in some of the changes we introduced as recently as last week.
We also have 82% of our users accessing the site via mobile. This is extremely important. The reason we have a 140-character limit is that we were born on mobile. Initially, when Jack Dorsey created the platform, you could only text to tweet, and at the time 140 characters was the text-message limitation. That's why it remains a 140-character platform.
This also means that when we encounter persistent abuse, we do not have the ability to use traditional methods such as IP blocking, because the majority of our users enter the site through dynamic mobile IP addresses, and therefore a single IP address might have both a bad user and a good user on it. That's why at Twitter, when we talk about automating support and automating the detection of abuse, we have to think about patterns of behaviour. Are we seeing users we have previously suspended coming back with similar email addresses, similar names, using similar hashtags, and targeting the same accounts? When we see a combination of those patterns, it is easier for us to automate. We cannot simply block a word or block an IP address and hope the abuse will go away, because that's not going to happen, due to the mobile nature of our platform.
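To make that concrete, here is a minimal illustrative sketch of what combining such behavioural signals could look like. Everything in it, the account fields, the individual signals, and the threshold, is a hypothetical assumption for illustration, not Twitter's actual detection system.

```python
# Hypothetical sketch of pattern-based detection of returning suspended
# users. All fields, signals, and thresholds are illustrative assumptions,
# not Twitter's real system.
from dataclasses import dataclass, field


@dataclass
class Account:
    email: str
    display_name: str
    hashtags_used: set = field(default_factory=set)
    targets: set = field(default_factory=set)  # accounts this one tweets at


def similarity_signals(candidate: Account, suspended: Account) -> int:
    """Count independent signals linking a new account to a previously
    suspended one. No single signal (least of all an IP address) is
    trusted on its own, because mobile IPs are dynamic and shared."""

    def local_part(email: str) -> str:
        # "troll7@example.com" and "troll8@example.com" compare equal.
        return email.split("@")[0].rstrip("0123456789").lower()

    signals = 0
    if local_part(candidate.email) == local_part(suspended.email):
        signals += 1  # similar email address
    if candidate.display_name.lower() == suspended.display_name.lower():
        signals += 1  # same or near-identical name
    if len(candidate.hashtags_used & suspended.hashtags_used) >= 3:
        signals += 1  # reusing the same hashtags
    if candidate.targets & suspended.targets:
        signals += 1  # targeting the same accounts as before
    return signals


def flag_for_review(candidate: Account, suspended: Account) -> bool:
    # Only a combination of patterns triggers automation, never one alone.
    return similarity_signals(candidate, suspended) >= 3
```

The design point is exactly the one above: because any single signal is unreliable on a mobile platform, automation only fires when several independent patterns coincide.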
We also have rules. I know people tend to think Twitter is the Wild West. That's not the case. While we believe in freedom of expression and speaking truth to power, that really means little as an underlying philosophy if people are afraid to speak up. That's why over the last few years, and especially over the last year, we have introduced significant changes to the Twitter rules.
Today I want to walk you through some of those rules.
It's important to know that these rules are public. We want our users to be aware of what the rules are, so that when they cross the line we can hold them accountable and show them not just the rules they have violated, but the specific tweets they shared that violate those rules.
Let me be very clear. We do not allow our users to make threats of violence or to encourage terrorism or violence, especially when it comes to targeting the protected categories. When I refer to the protected categories, I refer to the UN's Universal Declaration of Human Rights. We are really talking about race, ethnicity, national origin, religion, sexual orientation, gender identity, age, and disability.
On a platform such as Twitter, I could question an idea or I could question a notion, but I could not target somebody for following that notion or that idea. I could say something such as “I hate Spain”, but I could not say “I hate Spaniards, therefore I'm going to encourage violence against them.” That's where we have to draw the line, and what we're always looking at is the likelihood of content on the platform causing harm in the offline world. If that is the case, it's important that we step in and take action.
When it comes to harassment, we clearly state that you may not incite or engage in the targeted abuse or harassment of others. Among the elements we look at: remember that with 140 characters, we often lack context. That's why we have to look at the intention of the account. Was the account set up solely with the intention of harassing somebody, or is this an account that was tweeting constructively before something triggered it and it started tweeting in a way that violates our rules? This might come as a surprise, but the latter is the majority of the cases we see. We don't see the worst kind of trolls, the Gamergate trolls. On a day-to-day basis, what we see are users who, for whatever reason, start tweeting in a non-constructive way.
The way we enforce our rules depends on the severity of the violation. If we see that a user created the account with the sole intent of harassing somebody or a group of people, we will suspend the account permanently, and we will continue to try to detect new accounts that are set up as a follow-up, which tends to happen. However, if we see that a user who was tweeting constructively gets triggered by something and starts tweeting in a non-constructive way, we will look at whether an educational approach might bring that user back into compliance.
We think these methods work. At times we can take action such as asking the account to delete specific tweets that violate our rules. We can also freeze the account for a specific time frame, so that it cannot interact for whatever period we set. We can also ask the account to verify certain pieces of information. You can use Twitter anonymously, but we do not want the veil of anonymity to be used for abusive purposes. If we see that an account is trying to violate our rules through anonymity, we will ask it to provide us with either a phone number or an email address so that we have that information.
It will probably not come as a surprise that the worst type of trolls, knowing that they might be held accountable, especially with law enforcement authorities requesting data from Twitter in criminal cases, tend not to re-engage on the site once we have taken that step of requesting further information.
It's important to bear in mind that suspension is not the only type of action we take. There's a wider range available to us. Abuse is not black and white; oftentimes there is grey in between.
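As an illustration of that graded range, here is a hypothetical sketch of an enforcement ladder. The action names, the severity scale, and the triage logic are all assumptions made for illustration, not Twitter's internal tooling.

```python
# Illustrative sketch only: a graded enforcement ladder of the kind
# described above. Action names, the assumed 1-10 severity scale, and
# the triage logic are assumptions, not Twitter's internal system.
from enum import Enum, auto


class Action(Enum):
    DELETE_TWEETS = auto()         # ask the user to delete violating tweets
    TIMED_FREEZE = auto()          # account cannot interact for a set period
    VERIFY_CONTACT = auto()        # require a phone number or email address
    PERMANENT_SUSPENSION = auto()  # reserved for the worst violations


def choose_action(created_to_harass: bool, was_constructive: bool,
                  severity: int) -> Action:
    """Map the grey area between compliance and suspension onto a range
    of remediations rather than a single on/off switch."""
    if created_to_harass or severity >= 9:
        return Action.PERMANENT_SUSPENSION
    if was_constructive and severity <= 3:
        return Action.DELETE_TWEETS   # educational approach first
    if severity <= 6:
        return Action.TIMED_FREEZE
    return Action.VERIFY_CONTACT      # lift the veil of anonymity
```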
I also want to mention the tools. We want to empower our users to tailor their experience on Twitter. To that effect, we have launched a number of tools.
As recently as last Tuesday we announced that our mute function has been broadened. You can mute not just an account, which stops you from being notified when that account tweets for as long as you don't want to engage with it, but also words, hashtags, conversations, and emojis. That means, say I don't want to see content related to Trump: if I mute the hashtag “trump”, I will not see related content in my notifications.
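For illustration, a notification filter along those lines could look like the following minimal sketch. The notification record and the function are hypothetical, not Twitter's actual implementation.

```python
# Minimal illustrative sketch of a mute filter over notifications. The
# record shape and matching logic are hypothetical, not Twitter's code.
def is_muted(notification: dict, muted_accounts: set, muted_terms: set) -> bool:
    """Suppress a notification if its author is muted or its text contains
    any muted word, hashtag, or emoji (matched as plain substrings here)."""
    if notification["author"] in muted_accounts:
        return True
    text = notification["text"].lower()
    return any(term.lower() in text for term in muted_terms)


notifications = [
    {"author": "@news", "text": "Big story about #trump today"},
    {"author": "@friend", "text": "Lunch tomorrow?"},
]
# Muting the hashtag "#trump" hides the first notification only.
visible = [n for n in notifications
           if not is_muted(n, muted_accounts=set(), muted_terms={"#trump"})]
```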
We also have a block tool, which we recommend for more severe situations where you're adamant that somebody should not interact with you on Twitter. If you block somebody, they cannot engage with you, they cannot tweet at you, and you will not get notified if they do try to tweet at you.
What's most important to remember is that, as a public platform, we don't want to give a false sense of security. If you really don't want somebody to see your tweets, we also recommend protecting them. You can block somebody, but to prevent them from seeing your content, whether they are logged out or viewing it through a search engine, you can also protect your tweets.