Evidence of meeting #33 for Status of Women in the 42nd Parliament, 1st Session. (The original version is on Parliament’s site, as are the minutes.)

A recording is available from Parliament.

Also speaking

Patricia Cartes  Head, Global Safety, Twitter Inc.
Loly Rico  President, Canadian Council for Refugees
Lynne Groulx  Executive Director, Native Women's Association of Canada
Francyne Joe  President, Native Women's Association of Canada
Awar Obob  Member, Babely Shades
Marilee Nowgesic  Special Advisor, Liaison, Native Women's Association of Canada

3:30 p.m.

Conservative

The Chair Conservative Marilyn Gladu

I call the meeting to order.

We have one item to take care of before we get to our witnesses. At our last meeting we approved a motion to add one session to talk about algorithms. The list of suggested witnesses for that session has been circulated, so I believe Ms. Harder has a motion to bring to us.

3:30 p.m.

Conservative

Rachael Thomas Conservative Lethbridge, AB

Yes. In order to move this study along, I have put together a list of witnesses. To the best of my ability, I have represented all parties at this table and the witness lists that came forward. It is as follows:

That, pursuant to the motion passed by the Status of Women Committee on November 14, 2016 related to the one (1) meeting designated on November 30, 2016, in order to examine the effects of automated algorithm-based content curation as part of the study on violence against young girls and women in Canada, the Committee invite the following witnesses to present evidence: Dr. Diana Inkpen of the University of Ottawa; Colin McKay, Head of Public Policy and Government Relations of Google Inc. (Canada); Thierry Plante, Media Education Specialist at MediaSmarts, Canada's Centre for Digital Media Literacy; Dr. Sandra Robinson of Carleton University; Kelly Acton, Director General of the Communications and Marketing Branch of Innovation, Science and Economic Development Canada; and Corinne Charette, Senior Assistant Deputy Minister, Spectrum, Information Technologies and Telecommunications.

That makes for a total of six witnesses, which would be three on each panel. All of these witnesses come with expertise on algorithms. There is a mix of private enterprise and public sector, and of course research-based groups more on the study side of things, but there is also the practical, hands-on side. I tried to strike a good balance there.

3:30 p.m.

Conservative

The Chair Conservative Marilyn Gladu

Ms. Damoff.

3:30 p.m.

Liberal

Pam Damoff Liberal Oakville North—Burlington, ON

I'm a little confused. I didn't think we were doing committee business right now, and I thought we had submitted witnesses on Friday for this part of the study. Did I miss something by arriving late?

3:30 p.m.

Conservative

The Chair Conservative Marilyn Gladu

Rachael is bringing a motion, based on the lists that were submitted, to recommend which witnesses we should call, because in order to get them here for November 30, we have to invite them pretty soon.

3:30 p.m.

Liberal

Pam Damoff Liberal Oakville North—Burlington, ON

Okay. I did miss something.

3:30 p.m.

Conservative

The Chair Conservative Marilyn Gladu

Sorry.

3:30 p.m.

Liberal

Pam Damoff Liberal Oakville North—Burlington, ON

That's okay.

These are based on the witnesses that everyone submitted?

3:30 p.m.

Conservative

The Chair Conservative Marilyn Gladu

Yes. You, Ms. Nassif, and a number of other people submitted some, and then Rachael had a number of names as well. The aim is to make two panels, one for the first hour and one for the second. She has listed six out of about 12 all told that were suggested.

3:30 p.m.

Liberal

Pam Damoff Liberal Oakville North—Burlington, ON

Okay.

3:30 p.m.

Conservative

The Chair Conservative Marilyn Gladu

Is there discussion?

(Motion agreed to)

All right, and now we'll go to the witnesses today.

We are very excited to have Twitter with us today. We have Patricia Cartes, who is the head of global safety. Welcome to you, Patricia. We're looking forward to hearing from you. You'll have 10 minutes to make your comments, and then we'll begin our rounds of questioning. You may begin.

3:30 p.m.

Patricia Cartes Head, Global Safety, Twitter Inc.

First of all, I would like to thank the committee for giving us this opportunity to present the security policies we work with at Twitter.

I will continue in English.

As you pointed out, my name is Patricia Cartes. I have the privilege of representing Twitter's trust and safety teams, which work very hard behind the scenes to prevent abuse and to act on every report of abuse we receive on the platform.

By virtue of being Spanish, I tend not to be brief, so I'll try my best to follow the Twitter style and keep it to maybe a bit more than 140 characters, but within my 10 minutes. I will speak a little fast so that we have time to go into more detail in the Q and A.

I want to start by explaining how the Twitter platform is different from other platforms. We are public, we are widely distributed, and we are conversational. When you hear about abuse online, it tends to be equated with Twitter because we are public: people have access to content on our platform in a way that they perhaps don't on other platforms, behind their privacy layers.

That, of course, also means we have a greater responsibility to ensure that not just our users, but also Internet users who may not be on Twitter but who may see Twitter content beyond our borders, do not encounter abuse on the platform.

We have 313 million users, which might not seem like a big number compared to some of our sister companies; however, the issue of scale at Twitter comes from the number of tweets flowing through the platform, which is one billion every two days. To give you an idea, it took three years, two months, and one day to see the billionth tweet, and now we're seeing 500 million tweets in a single day.

We have 79% of our users based outside of the U.S., so even though we were born in San Francisco, we're by no means just an American company. That's why people like me, who were not born in the U.S., can have the roles that we have.

We have offices in Singapore, Dublin, and San Francisco that are for the operational support of our users. The reason we have them there is so we can do 24/7 global coverage: so when Singapore goes to sleep, Dublin takes over, and when Dublin goes to sleep, San Francisco takes over.

We also look at providing support based not just on expertise in particular types of abuse. As you can imagine, abuse comes in many forms, from spam to child sexual exploitation, gender-based harassment, and other types of hate speech and extremism. We also look at market specificities. That's why we work with a number of organizations on the ground that are experts in this field. They provide us with advice about abuse trends, and also about what users in those markets say are the main difficulties they encounter with the platform.

I did want to call to your attention the work we have been doing with MediaSmarts, High Resolves, and Hollaback! Canada, which have been instrumental in some of the changes we introduced as recently as last week.

Some 82% of our users access the site via mobile. This is extremely important. The reason we have 140 characters as a limitation is that we were born on mobile. Initially, when Jack Dorsey created the platform, you could only text to tweet, and at the time 140 characters was the text limitation. That's why it remains a 140-character platform.

This also means that when we encounter persistent abuse, we do not have the ability to use traditional methods such as IP blocking, because the majority of our users enter the site through dynamic mobile IP addresses; on a single IP address you might have both a bad user and a good user. That's why at Twitter, when we talk about automating support and automating the detection of abuse, we have to think about patterns of behaviour. Are we seeing users we have previously suspended coming back with similar email addresses, similar names, using similar hashtags, and targeting the same accounts? When we see a combination of those patterns, it's easier for us to automate. We cannot simply block a word or block an IP address and hope the abuse will go away, because that's not going to happen, due to the mobile nature of our platform.
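To make the pattern-combination idea concrete, here is a minimal sketch in Python of the kind of heuristic Ms. Cartes describes, in which several weak signals are combined rather than relying on any single one such as IP address. All field names, signals, and thresholds are illustrative assumptions, not Twitter's actual system.

```python
from dataclasses import dataclass

# Illustrative only: a toy heuristic in the spirit of the pattern-based
# detection described above. Signals and thresholds are assumptions.

@dataclass
class Account:
    email: str           # e.g. "troll_account_42@example.com" (hypothetical)
    display_name: str
    hashtags_used: set   # hashtags the account has tweeted
    targets: set         # accounts it has repeatedly @-mentioned

def resembles_suspended(candidate: Account, suspended: Account) -> bool:
    """Score a new account against a previously suspended one by
    combining several weak signals."""
    signals = 0
    # Similar email local-part, ignoring trailing digits
    if candidate.email.split("@")[0].rstrip("0123456789") == \
       suspended.email.split("@")[0].rstrip("0123456789"):
        signals += 1
    # Similar display name (shared prefix)
    if candidate.display_name.lower().startswith(
            suspended.display_name.lower()[:5]):
        signals += 1
    # Overlapping hashtags and overlapping harassment targets
    if candidate.hashtags_used & suspended.hashtags_used:
        signals += 1
    if candidate.targets & suspended.targets:
        signals += 1
    # Only a combination of patterns triggers automation, as described.
    return signals >= 3
```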

We also have rules. I know people tend to think Twitter is the Wild West. That's not the case. While we believe in freedom of expression and speaking truth to power, that really means little as an underlying philosophy if people are afraid to speak up. That's why over the last few years, and especially over the last year, we have introduced significant changes to the Twitter rules.

Today I want to walk you through some of those rules.

It's important to know these rules are public. We want our users to be aware of what the rules are, so that when they cross the line we can hold them accountable and we can show them not just the rules they have violated, but the specific tweets that were shared and that are in violation of the rules.

Let me be very clear. We do not allow our users to make threats of violence or to encourage terrorism or violence, especially when it comes to targeting the protected categories. When I refer to the protected categories, I refer to the UN charter of human rights. We are really talking about race, ethnicity, national origin, religion, sexual orientation, gender identity, age, and disability.

On a platform such as Twitter, I could question an idea or I could question a notion, but I could not target somebody for following that notion or that idea. I could say something such as “I hate Spain”, but I could not say “I hate Spaniards, therefore I'm going to encourage violence against them.” That's where we have to draw the line, and what we're always looking at is the likelihood of content in the platform causing harm in the offline world. If that is the case, it's important that we step in and take action.

When it comes to harassment, we clearly state that you may not incite or engage in the targeting, abuse, or harassment of others. Remember that with 140 characters, we often lack context. That's why we have to look at the intention of the account. Was the account set up only with the intention of harassing somebody, or is this an account that was tweeting constructively before something triggered it to start tweeting in a way that violates our rules? It might come as a surprise, but the latter is the majority of the cases we see. We don't see the worst kind of trolls, the Gamergate trolls. On a day-to-day basis, what we see are users who, for whatever reason, start tweeting in a non-constructive way.

The way we enforce our rules depends on the severity of the violation. If we see that a user created the account with the sole intent of harassing somebody or a group of people, we will suspend the account permanently, and we will continue to try to detect new accounts set up as follow-ups, which tends to happen. However, if we see that a user who was tweeting constructively gets triggered by something and starts tweeting in a non-constructive way, we will look at whether an educational approach might bring that user back into compliance.

We think these methods work. At times we can take actions such as asking the account to delete specific tweets that violate our rules. We can also freeze the account for a specific time frame, so that it can't interact for whatever time limit we give it. We can also ask the account to verify certain pieces of information. You can use Twitter anonymously, but we do not want the veil of anonymity to be used for abusive purposes. If we see that an account is trying to violate our rules through anonymity, we will ask it to provide us with either a phone number or an email address so that we have that information.
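The graduated ladder of enforcement actions described above can be sketched as a simple decision function. This is a hedged illustration only; the action names and decision logic are assumptions for clarity, not Twitter's internal policy engine.

```python
from enum import Enum, auto

# A minimal sketch of the graduated enforcement ladder described above.
# Action names and decision logic are illustrative assumptions.

class Action(Enum):
    DELETE_TWEETS = auto()        # ask the user to delete violating tweets
    FREEZE_ACCOUNT = auto()       # time-limited freeze on interaction
    VERIFY_CONTACT = auto()       # require a phone number or email address
    PERMANENT_SUSPENSION = auto()

def choose_action(created_to_harass: bool,
                  was_constructive_before: bool,
                  anonymous_and_abusive: bool) -> Action:
    if created_to_harass:
        # Accounts set up solely to harass are suspended permanently.
        return Action.PERMANENT_SUSPENSION
    if anonymous_and_abusive:
        # Anonymity used abusively triggers a verification request.
        return Action.VERIFY_CONTACT
    if was_constructive_before:
        # Educational approach: ask for deletion of the violating tweets.
        return Action.DELETE_TWEETS
    return Action.FREEZE_ACCOUNT
```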

It will probably not come as a surprise that the worst type of trolls, knowing that they might be held accountable, especially with law enforcement authorities requesting data from Twitter in criminal cases, tend not to re-engage on the site once we have taken that step of requesting further information.

It's important to bear in mind that the types of actions we take are not just suspensions. There's a wider range that we can take. Abuse is not black and white; oftentimes you will have the grey in between.

I also want to mention the tools. We want to empower our users to tailor their experience on Twitter. To that effect, we have launched a number of tools.

As recently as last Tuesday, we announced that our mute function has been broadened. You can now mute not just an account, which means you won't be notified when that account tweets for as long as you don't want to engage with it, but also words, hashtags, conversations, and emojis. That means that if, say, I don't want to see content related to Trump and I mute the hashtag “trump”, I will not see that content in my notifications.
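As a toy illustration of how such a broadened mute list might filter notifications, here is a short sketch. The data model is assumed for illustration; it is not Twitter's API.

```python
# Illustrative only: a muted-term filter in the spirit of the broadened
# mute function described above.

muted_terms = {"#trump", "spoilers"}  # hypothetical muted words and hashtags

def should_notify(tweet_text: str) -> bool:
    """Suppress a notification if the tweet contains any muted term."""
    words = tweet_text.lower().split()
    return not any(term in words for term in muted_terms)

print(should_notify("Election news #Trump"))   # False: muted hashtag present
print(should_notify("Lunch was great today"))  # True: nothing muted
```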

We also have a tool to block, which we recommend for more severe situations where you're adamant that somebody should not interact with you on Twitter. If you block somebody, they cannot engage with you, they cannot tweet at you, and you will not get notified if they do try to tweet at you.

What's most important is to remember that, as a public platform, we don't want to give a false sense of security. If you really don't want somebody to see your tweets, we also recommend protecting them. You can block somebody, but to prevent them from seeing the content, whether they are logged out or viewing it from a search engine, you can also protect your tweets.
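The distinction Ms. Cartes draws between blocking and protecting can be captured in a small visibility check. This is a hedged sketch under assumed field names, not Twitter's actual access-control code.

```python
# Illustrative only: blocking stops interaction, but only protecting your
# tweets hides the content from logged-out viewers and search engines.

def can_see_tweets(viewer_is_blocked: bool, viewer_logged_in: bool,
                   author_protected: bool,
                   viewer_approved_follower: bool) -> bool:
    if author_protected:
        # Protected tweets: only approved followers, and only when logged in.
        return viewer_logged_in and viewer_approved_follower
    if viewer_is_blocked and viewer_logged_in:
        return False  # a block hides tweets from the blocked account...
    return True       # ...but not from the same person logged out or in search
```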

3:40 p.m.

Conservative

The Chair Conservative Marilyn Gladu

Thank you very much. That's your time.

We're going to begin our first round of questioning with my colleague, Ms. Ludwig.

3:40 p.m.

Liberal

Marc Serré Liberal Nickel Belt, ON

Can we take a picture first and tweet it?

3:40 p.m.

Liberal

Karen Ludwig Liberal New Brunswick Southwest, NB

Patricia, buenas tardes. Me llamo Karen Ludwig. Yo soy en Cuba por siete años. [Good afternoon, Patricia. My name is Karen Ludwig. I was in Cuba for seven years.]

I think I got that right, didn't I? I worked in Cuba for seven years.

3:40 p.m.

Liberal

Karen Ludwig Liberal New Brunswick Southwest, NB

I am very pleased to hear your presentation, and certainly with the work that's being done with Twitter.

Most recently, we heard from the soon-to-be First Lady of the United States. Regarding cyber-bullying, she said:

It is never okay when a 12-year-old girl or boy is mocked, bullied, or attacked. It is terrible when that happens on the playground and it is absolutely unacceptable when it’s done by someone with no name hiding on the Internet.

When celebrities or well-known people take on issues such as cyber-bullying, what impact does that have on making changes to the operational side of organizations as well as on greater awareness within the general public?

Gracias.

3:40 p.m.

Head, Global Safety, Twitter Inc.

Patricia Cartes

That's a great question.

I think the impact is the same as it would be if one of our safety partners or one of the governments that we work with were to make those statements.

With regard to that particular statement, I would like to remind not just the next First Lady but everybody that we do not allow children under 13 on our platform. We would hope that no one under 13 is bullied on the platform because they shouldn't be there to begin with, under the COPPA law, which is the Children's Online Privacy Protection Act. Beyond that, I always appreciate the concern, whether it's from celebrities, politicians, or, as I said, non-profits that work in this space.

I think it is necessary for society to be aware of this as an issue. When there is abuse online, it rarely is just online. Online tends to be a reflection of what's happening off-line. What is quite interesting when it comes to the incitement of hate on Twitter is that, while of course we will do anything in our power to fight it on our platform, we have to remember that these ideas are floating around society.

We should open our eyes to how the world is, not how we want it to be. We think that a platform like Twitter can enable counter-narratives.

I welcome those remarks. We are looking forward to working with the new administration to continue to implement changes, but that doesn't change the work that we're already doing. More specifically, we refer to the experts. I referred to MediaSmarts before. You also have the Amanda Todd Legacy Fund in Canada, whom we work with on a very regular basis. I think they have the knowledge, and we would hope that every administration in the world would consult with them to gather the necessary insight.

3:45 p.m.

Liberal

Karen Ludwig Liberal New Brunswick Southwest, NB

Okay, thank you.

There probably isn't an easy answer to this question. With only 140 characters to convey a message, and lots of people trying to get likes, retweets, and a bigger following, is there a possibility that this might increase the sensationalism of the message?

3:45 p.m.

Head, Global Safety, Twitter Inc.

Patricia Cartes

You correctly point out that there isn't an easy answer to that one.

It's possible. On Twitter, you can also add images and links. That's interesting, because when we first started operating you couldn't do any of those things, and because you couldn't, some violations of our rules hadn't even happened yet. When we started, we didn't see violations of privacy to the same extent as we do now that you can share images, links, and so on.

At times, what you will indeed see is people trying to combine different platforms. They upload content to another platform and then share the link through Twitter for maximum reach. That's something we continue to work on with our sister companies. When we see abuse on one platform, how can we work together to prevent it on the others?

I think we have been quite successful in the different working groups that we have, but you're correct that the lack of space, so to speak, may lead some people to misuse the platform. We are aware of it, and we continue to fight it, especially by providing report links, not just in our help centre but also within the tweet. If somebody feels that a person is trying to be abusive, precisely because they don't have much space at the tweet level, they can click “report” through the three dots on the tweet and send it to us.

3:45 p.m.

Liberal

Karen Ludwig Liberal New Brunswick Southwest, NB

On that, Ms. Cartes, you mentioned working with your sister organizations.

From the research perspective, I'm wondering how the data is collected, how it's reported, and possibly how it's shared. In terms of other areas of violence that we've talked about in this committee, that has definitely been a central theme on the research side, along with the lack of reporting. Is there any funding, or are there any organizations or university programs that you're funding right now to conduct such research?

3:45 p.m.

Head, Global Safety, Twitter Inc.

Patricia Cartes

It's another great question, because due to privacy laws we are restricted in the amount of information we can share with our sister companies.

Let's say I were to see a case of abuse happening on Twitter that has ramifications on the ASKfm platform. I could give a heads-up to my counterpart at ASKfm, but I could not share with her all the data about the user. That hasn't really impeded our collaboration, because, as Twitter is a public platform, you can share the tweet, which will contain enough information for the other platform to take action.

When it's the other way around, it's a little more challenging. We might get a heads-up from Facebook about a specific profile. We will, of course, look at abuse reports filed within our platform or abusive content within that specific account. Beyond those case-by-case situations, we have found the working groups I referred to extremely helpful. We have one on self-harm and suicide. We see teenagers especially trying to use these platforms to encourage self-harm and suicide, using language that is not straightforward and that we wouldn't otherwise be familiar with. By sitting down with organizations like Lifeline and our sister companies to see what shape that takes on their platforms and on ours, we can learn a lot. That's been extremely helpful.

Another great example would be non-consensual nudity. We have one working group with the attorney general in California. We don't like to refer to it as “revenge porn” because it's not just another type of commercial porn. This content destroys lives and reputations. We're lobbying to have it renamed “non-consensual nudity” in all future legislation. Just hearing from them what shape that abuse takes on their platform, or what shape it takes on ours, has been extremely helpful.

Some groups we have worked with look across different platforms. Susan Benesch of the Dangerous Speech Project, at the Berkman Klein Center, would be a very good researcher, as would Danielle Citron from the University of Maryland School of Law. I'm happy to share others beyond that who would be experts on this data.

3:50 p.m.

Conservative

The Chair Conservative Marilyn Gladu

That's your time. We're going to go now to Ms. Harder for her seven minutes.

3:50 p.m.

Conservative

Rachael Thomas Conservative Lethbridge, AB

Thanks for taking time to be with us today.

I have a number of questions for you, and most of mine have to do with the idea of algorithms being used to direct online traffic. How does Twitter go about using algorithms to attract people to the site and to help facilitate use of Twitter?

3:50 p.m.

Head, Global Safety, Twitter Inc.

Patricia Cartes

Clearly, we're not doing very well because we don't have that much growth.

Jokes aside, we're not really using algorithms to bring people onto the platform. At times we use certain algorithms to detect abusive behaviour, which is what I was referring to. We have used some tools that have really helped us flag certain patterns of abuse, so that we know when an account might be abusive. We're not really using algorithms to attract people.

You may have seen how we present stories to current users. We have moved away from a purely chronological model, in which there was no alteration of the tweet stream and you simply saw tweets in order. Now we're using algorithms to figure out which stories you are most interested in, based on your interactions. If I interact with my colleague Will on a regular basis, when I log in I might see a “while you were away” message that highlights his tweets, based on my interactions with him.

We have used some artificial intelligence, but it's more for current users, to ease their navigation. It's not exactly the same as Facebook's news feed, but it's a similar idea: it's not just chronological; it's based more on what we think your interests are.
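The interaction-based ranking Ms. Cartes describes, the “while you were away” idea, can be sketched in a few lines. This is an assumed simplification for illustration; the features and data shown are not Twitter's actual ranking model.

```python
# Illustrative only: rank missed tweets by how often the viewer interacts
# with each author, instead of purely by recency.

interaction_counts = {"will": 42, "alice": 3}  # hypothetical engagement tallies

def rank_missed_tweets(tweets: list[tuple[str, str]]) -> list[tuple[str, str]]:
    """Order (author, text) pairs by the viewer's interaction history
    with the author, most-engaged first."""
    return sorted(tweets,
                  key=lambda t: interaction_counts.get(t[0], 0),
                  reverse=True)

missed = [("alice", "New paper out"), ("will", "Team update")]
print(rank_missed_tweets(missed))  # Will's tweet surfaces first
```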