Evidence of meeting #33 for Status of Women in the 42nd Parliament, 1st Session. (The original version is on Parliament’s site, as are the minutes.)

A recording is available from Parliament.

Also speaking

Patricia Cartes  Head, Global Safety, Twitter Inc.
Loly Rico  President, Canadian Council for Refugees
Lynne Groulx  Executive Director, Native Women's Association of Canada
Francyne Joe  President, Native Women's Association of Canada
Awar Obob  Member, Babely Shades
Marilee Nowgesic  Special Advisor, Liaison, Native Women's Association of Canada

3:50 p.m.

Conservative

Rachael Thomas Conservative Lethbridge, AB

Could you just briefly sum up what an algorithm is for those around the table?

3:50 p.m.

Head, Global Safety, Twitter Inc.

Patricia Cartes

It would be like a program where you give it a number of factors, and when those factors coincide, the algorithm will alert you. A good example of how this is used for abuse-fighting purposes is sexual exploitation: you could tell the algorithm to flag any account that is contacting somebody who has provided their age to us and is a minor, and that is using certain keywords within a specific time frame. If a lot of these patterns of behaviour are happening at the same time, it will let us know.

The algorithms are a very smart way to let the system alert you to specific situations that might be happening and that you might not otherwise know about unless somebody has reported them to you.
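By way of illustration, here is a minimal sketch of that kind of rule-based flagging. The field names, keywords and thresholds are hypothetical and are not Twitter's actual signals.

```python
from dataclasses import dataclass

@dataclass
class Message:
    sender_id: str
    recipient_is_minor: bool   # the recipient has told the platform they are under 18
    text: str
    timestamp: float           # seconds since epoch

# Hypothetical keyword list and time window; not Twitter's real signals.
SUSPICIOUS_KEYWORDS = {"keep it secret", "don't tell", "send a photo"}
WINDOW_SECONDS = 3600
MIN_HITS = 3

def should_alert(messages: list[Message]) -> bool:
    """Alert when several risk factors coincide: contact with a minor,
    suspicious keywords, and repetition within a single time window."""
    hits = sorted(
        (m for m in messages
         if m.recipient_is_minor
         and any(k in m.text.lower() for k in SUSPICIOUS_KEYWORDS)),
        key=lambda m: m.timestamp,
    )
    return len(hits) >= MIN_HITS and hits[-1].timestamp - hits[0].timestamp <= WINDOW_SECONDS
```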

3:50 p.m.

Conservative

Rachael Thomas Conservative Lethbridge, AB

Right.

Is there a way that algorithms could unintentionally facilitate cyber-abuse or violence rather than being helpful? I recognize that it probably wouldn't be intentional, but is there a way that it could be unintentional?

3:50 p.m.

Head, Global Safety, Twitter Inc.

Patricia Cartes

I have worked in tech for the last 10 years. I was at Google and Facebook before, always in this field, and I have always been very skeptical about just using algorithms. They won't necessarily lead to more abuse or violence on the platforms, but if you rely on just the algorithms to provide support to users, you can have a lot of collateral damage. You may have certain accounts and certain activities that are flagged by the algorithm that are not abusive and that you need to manually review.

I'll give you a perfect example. We started seeing abuse on hashtags on Twitter—a hashtag is a mechanism to have a conversation on the platform around a specific topic—and an example would be #stopIslam. We immediately thought there must be hate speech within this hashtag. When we started looking at the data—by the way, the Dangerous Speech Project helped us, and The Washington Post did a great article on this—we found that the majority of the tweets were actually positive tweets. It was people saying, “This hashtag is atrocious. You should never say this.” Or, on the word “bitch”, when we started automating our processes, we were looking at the word “bitch”—pardon my not-French—and we realized there is a whole demographic that is using “bitch” as a way to say hi. The majority of our systems nearly collapsed because we were looking at this content that was not abusive.

What we have to think about in government and in these companies is whether these measures are proportional. If you were just to rely on algorithms, would it be proportional to be looking at people's accounts without there being any reports or any abusive activity? That's why I would always advocate for algorithm plus manual action in order to automate the support.
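A minimal sketch, with hypothetical names, of that "algorithm plus manual action" pairing: the automated rule only queues candidates, and a human decision is required before anything is actioned.

```python
from collections import deque
from typing import Callable

review_queue: deque[dict] = deque()

def looks_abusive(tweet: dict) -> bool:
    """Stand-in for the automated rule: a naive keyword match on the tweet text."""
    return "stopislam" in tweet["text"].lower()

def automated_pass(tweets: list[dict]) -> None:
    """The algorithm only nominates content for review; it never acts on its own."""
    for tweet in tweets:
        if looks_abusive(tweet):
            review_queue.append(tweet)

def manual_pass(is_genuinely_abusive: Callable[[dict], bool]) -> list[dict]:
    """A human reviewer clears false positives (e.g. tweets condemning a hateful
    hashtag) before any enforcement action is taken."""
    actioned = []
    while review_queue:
        tweet = review_queue.popleft()
        if is_genuinely_abusive(tweet):
            actioned.append(tweet)  # enforcement would happen here
    return actioned
```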

3:55 p.m.

Conservative

Rachael Thomas Conservative Lethbridge, AB

You raised an excellent point in terms of algorithms probably not being enough on their own, so I appreciate you bringing up that point. That's certainly a good one.

In terms of then using manpower, as well, in order to monitor, you used the example of the hashtag “bitch”. That comes back to you through an algorithm as being bad, but then you take a manual look at it and realize it's not always bad. Sometimes it's appropriate. How do you respond, then? Do you keep the original algorithm in place in order to track that and flag it for you, and then manually go over it, or do you just loosen up your algorithm to allow more of it to go through? How do you respond to something like that?

3:55 p.m.

Head, Global Safety, Twitter Inc.

Patricia Cartes

It's the latter option. You modify the algorithm.

What happens in those cases is that the algorithm is lacking the information it needs to be accurate, so you're looking at the action rate that the algorithm leads to. By the way, a lot of these are like bots. You're implementing bots on the platform through algorithms. If I create a bot that is giving me a 10% action rate, that means that, of all of the content that is flagged to me, I'm only taking action on 10%. That means the algorithm is certainly not accurate enough, but I can feed it more information.

I referred before to patterns of behaviour. I could say, “Only flag to me accounts that have been created within this time span, from this IP address, trying to use this hashtag, trying to tweet to these people.” The more information you give it, the more accurate it is. We have found that for certain types of abuse, spam being a great example, we have been able to automate most of the support based on very accurate algorithms. However, by no means does this happen from one day to the next. It has taken months and years to reach the right amount of information for those algorithms to be properly deployed on the site.
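To make the arithmetic concrete, a small sketch: the action rate is simply actions taken divided by items flagged, and the rule becomes more accurate as more conditions have to coincide. All names, fields and thresholds here are hypothetical.

```python
SUSPECT_IPS = {"203.0.113.7"}   # documentation-range address, for illustration only

def action_rate(flagged: int, actioned: int) -> float:
    """Share of flagged items that led to enforcement. In the example given,
    10 actions out of 100 flags is a 10% action rate, a sign the rule
    needs more signals."""
    return actioned / flagged if flagged else 0.0

def should_flag(account: dict) -> bool:
    """A refined rule: account age, sign-up IP, hashtag use and target
    all have to line up before the account is flagged.
    Field names are illustrative, not Twitter's data model."""
    return (
        account["age_days"] <= 2
        and account["signup_ip"] in SUSPECT_IPS
        and "#targeted_hashtag" in account["recent_hashtags"]
        and account["mentions_target"]
    )

print(action_rate(flagged=100, actioned=10))   # 0.1
```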

3:55 p.m.

Conservative

The Chair Conservative Marilyn Gladu

Very good.

Now we're going to go to Ms. Malcolmson for seven minutes.

3:55 p.m.

NDP

Sheila Malcolmson NDP Nanaimo—Ladysmith, BC

Thank you for being here.

We heard from a witness earlier in this committee study that when she wanted police support around ending cyber-bullying, the police needed to be deeply educated by the victim herself around what a hashtag is. There was no cyber-literacy whatsoever on the enforcement side. That felt like a particularly unfair burden for victims, who were looking for support in simply having the violence and bullying end.

Can you talk about your perception of the police role, and what partnerships or education Twitter might be providing to fill that gap?

3:55 p.m.

Head, Global Safety, Twitter Inc.

Patricia Cartes

It is a very valid point, and we have heard this time and time again from victims and from groups that are advocating support for them. There's a really big disconnect between the technology and the education within law enforcement and judicial authorities about that technology. We see this on a day-to-day basis. Actually, my colleague Will, who is here, and the rest of my team and I travel the world, educating law enforcement. I tend to spend my days in Mexico City, sitting down with the federal police to see if they can understand the processes. This is very common. We need to make it easier for them to understand.

We have guidelines for law enforcement, something that I didn't even get to speak about. There's a link that contains all of the information for law enforcement. It's within our help centre, and I recommend that you check it out. It has really helpful information, like how long does Twitter keep the information, what type of information do we keep, what does a valid legal process look like, what happens in emergency situations where you may not even have the time to provide a valid subpoena or court order because there might be a risk to life.

Our job is to sit down with those law enforcement agencies and work with them. There is one model that I find has worked very well. The United Kingdom has what's called a SPOC system, that is, a single point of contact system. Every law enforcement agency in the U.K. will have single points of contact. If you are sitting down in West London at a police station and a victim comes to you with a case, you don't need to navigate how to make a request for data from a tech company. You can go to your SPOC, who will help you do it. It's a really helpful system that we keep advocating for. We will continue to do more, but there's something that we can certainly do: make it easier for victims to get all of the information that they need at the point of report.

To that end, we launched last year a mechanism that allows you to download a report when you report a threat of violence. That report will contain the specific tweet—the text that was shared in the tweet—the URL of the tweet, the time stamp, the URL of the user who shared it, and the name as it's shown on the account, together with a link to the law enforcement guidelines so that you can print it and bring it to the local law enforcement station.
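For illustration, a sketch of that downloadable report as a simple data record. The fields come from the description above; the structure and field names are assumed rather than Twitter's actual format.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class ViolentThreatReport:
    """Fields mirror those listed in the testimony; names and structure are assumed."""
    tweet_text: str
    tweet_url: str
    timestamp: str              # e.g. "2016-11-22T15:55:00Z"
    author_profile_url: str
    author_display_name: str
    law_enforcement_guidelines: str = "<link to the law enforcement guidelines>"

def render_report(report: ViolentThreatReport) -> str:
    """Produce a printable record a victim can take to a local police station."""
    return json.dumps(asdict(report), indent=2)
```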

We will continue to invest more in training law enforcement authorities. I would make this recommendation to the members of the committee: if you know of law enforcement authorities in Canada that are struggling with this, it's our job to engage with them and to train them as thoroughly as we can. We will continue to invest more in improving those processes.

I would also like to mention that, at times, those mechanisms get abused. You will have people who pretend to be law enforcement officers to try to gather information. That's why if you go to the reporting form for law enforcement, you'll see that you cannot submit a report without having an official email address, and we will still ask for valid legal identification to make sure that the valid legal process is being followed.

4 p.m.

NDP

Sheila Malcolmson NDP Nanaimo—Ladysmith, BC

Our committee would benefit from seeing your guidelines for law enforcement and the other models that you recommend to us. If you're able to provide that to the clerk, then it would be in evidence for the committee, and we can reflect it in our report.

We've heard quite a bit from witnesses about how some of the stigma around reporting and some of the profile of cyber-bullying and sexual violence has been removed. However, the front-line organizations that sometimes might be partnering with you to help support victims are increasingly underfunded, and they have an increasing workload. The worst thing would be for us to encourage young women and girls to ask for help more and then not have the help available.

Can you comment on your experience with the capacity of these groups, and whether their having access to more secure operating funds would allow them to meet this new demand?

4 p.m.

Head, Global Safety, Twitter Inc.

Patricia Cartes

Yes, you're absolutely correct.

We're seeing that these organizations are under a lot of pressure when it comes to resources. When we start partnerships of this nature with organizations, we do two things. One, we provide them with operational support, because we acknowledge that, oftentimes, victims won't come to us. They don't trust the social media platforms. They don't know what happens after you click report, and that's something we're taking on board. We are working on increasing transparency around reporting.

In the meantime, we know that victims feel more comfortable with the organizations that, in their countries, are known for providing them with support. We want to continue to prioritize any of these reports that these groups provide and send our way. We have specific reporting mechanisms for them and therefore, if Hollaback! was to report a case of abuse, it would go to a specific queue that our team would look into. It doesn't go into the general queue. As you point out, they are, perhaps, getting more and more people to go to them and request help.

We also help them with the awareness piece. We have a #FoodforGood program, which is our corporate philanthropy program, also run by our team. Oftentimes, we'll work with these organizations through ad grants and through our own platform, the Twitter blogs, and Twitter corporate accounts to provide more awareness.

We will also support them with requests they make for funding from governments and different programs through which they might qualify for more funding. We'll oftentimes document how we have been working with them. Twitter, in particular, is not in a position to provide funding because we are not profitable. You should see the way I flew here yesterday; it was remarkable.

We will continue to support them. A good example of this would be the Insafe network in Europe, which is funded by the Safer Internet Programme of the European Commission. Almost every year we provide letters of support. We have vast documentation about how we have worked with those groups, the number of reports that those groups have sent our way, how many of those cases have found a positive resolution, and our own recommendations for funding.

Whenever it happens that we do have any available funding, we also try to support them as much as we can. They really are essential to creating that safe environment.

4:05 p.m.

Conservative

The Chair Conservative Marilyn Gladu

Ms. Damoff, you have seven minutes.

4:05 p.m.

Liberal

Pam Damoff Liberal Oakville North—Burlington, ON

Thank you so much for coming and sharing what you're doing. All of us, I'm sure, because we're on Twitter, have been subject to some form of harassment online, some of it worse than others. I really appreciate you coming here today.

I had a conversation with Facebook about what they're doing in terms of social media. Twitter allows fake accounts and anonymous accounts. Have you considered tightening up the rules around identification of who can have accounts? That was one of the things they pointed out to me that they do require on their platform.

4:05 p.m.

Head, Global Safety, Twitter Inc.

Patricia Cartes

Yes, we allow anonymous use. We don't allow fake accounts, and that's a big distinction we want to make.

One thing is a parody account, which, by the way, happens a lot. It adds levity to the platform, and it was one of the first types of accounts that we ever saw being set up on the platform. Something else is to impersonate somebody with an abusive intention in mind. We do draw the line there.

If I start tweeting right now, impersonating you, using your photo, in the first person, mocking you, that would be a violation of our rules that we would take action on. We also enable bystanders to report on behalf of the person who is being impersonated. We cannot just equate real-name platforms with safer platforms.

4:05 p.m.

Liberal

Pam Damoff Liberal Oakville North—Burlington, ON

No, no—

4:05 p.m.

Head, Global Safety, Twitter Inc.

Patricia Cartes

I know you are not saying that... In my experience, while a real name has its benefits, it also has a very negative impact on whistleblowers and activists who may not be able to communicate safely using their names, and we do want to cater to them.

When Twitter was first created, it was precisely to enable people to speak truth to power, and to provide people with an unprecedented platform for communicating with the higher levels of power. We want to continue to encourage that use, but as you say, it's extremely important that we clamp down on fake accounts and ensure that impersonation is not being used for abusive purposes.

4:05 p.m.

Liberal

Pam Damoff Liberal Oakville North—Burlington, ON

You sort of led into my next question about jurisdictions.

Do you have any difficulty upholding your terms of use across various jurisdictions? Obviously, you're worldwide, and there's only so much a government can do in terms of requiring things. Within your own terms of use, do you have trouble enforcing them?

4:05 p.m.

Head, Global Safety, Twitter Inc.

Patricia Cartes

We want the terms of use to be as thorough as possible. As you point out, because we are global, we want to make sure that the terms of use and our rules are as fair as possible, that they enable speech, and that they prohibit abuse.

You also have different jurisdictions that will prohibit certain types of speech. I will give you the example of Turkey, where you cannot criticize Atatürk, the founder of the republic. If you do, then you are in violation of Turkish law, but oftentimes that is not a violation of our rules. When content is reported to us, we will look at whether there is a violation of the rules. If there is, we will take action. If there isn't, but it's a law enforcement or a judicial authority that is bringing it to our attention as violating a local law, then we will look at whether we can block that content in that country. This is something we will do.

Another good example would be Holocaust denial in Germany. It's illegal in Germany. It's illegal in France. It's illegal in Spain. You will see some tweets that perhaps didn't violate our rules but that we have blocked in that jurisdiction. The challenge there is how this scales and, as we spoke of before, ensuring that the law enforcement authorities and the judicial authorities know how to bring this to our attention.
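A minimal sketch of that two-step check, with made-up identifiers: the platform's own rules are checked first, and content that only breaks a local law is withheld in that country while staying visible elsewhere.

```python
# Illustrative data only: tweet id -> countries where a valid legal request applies.
WITHHELD_IN: dict[str, set[str]] = {
    "tweet_123": {"DE", "FR", "ES"},   # e.g. content illegal in those jurisdictions
}
GLOBAL_RULE_VIOLATIONS: set[str] = {"tweet_999"}   # breaks the platform's own rules

def visible_to(tweet_id: str, viewer_country: str) -> bool:
    """Rule-violating content is actioned everywhere; locally illegal content
    is withheld only in the relevant jurisdiction."""
    if tweet_id in GLOBAL_RULE_VIOLATIONS:
        return False
    return viewer_country not in WITHHELD_IN.get(tweet_id, set())

print(visible_to("tweet_123", "DE"))   # False: withheld in Germany
print(visible_to("tweet_123", "CA"))   # True: visible in Canada
```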

Some organizations that are non-profit also do have a government mandate to bring hate speech to the attention of platforms like ours. That is the case of Jugendschutz in Germany, or the diversity centres in Belgium, or the Movimiento contra la Intolerancia in Spain. We will work with them on that, too.

4:05 p.m.

Liberal

Pam Damoff Liberal Oakville North—Burlington, ON

Do you have any recommendations that the federal government could make that would assist you and other social media platforms to deal with harassment online? Is there anything we could be recommending as part of our study that would assist in the things you are already doing, or that would be over and above that?

4:10 p.m.

Head, Global Safety, Twitter Inc.

Patricia Cartes

The most helpful thing will always be to empower those organizations that you have in the country that are the experts on this. I know that I keep saying this, but you really have an incredible unprecedented level of knowledge in this country. MediaSmarts alone has led the way in digital citizenship in this country. They started Media Literacy Week 12 years before the U.S. did.

You have organizations that are very knowledgeable that may not be as well equipped to fight abuse, due to a lack of resources. I always recommend working with them, because the public tends to trust those organizations more than they trust the platforms or the government. That is the reality, and working with them, or providing them with the funds that they need at times, or the mechanisms for them to grow, does help us. Similarly, if you are finding that there is abuse in the country about which we are clueless, providing us with reports, whether that is through the Royal Canadian Mounted Police or through a specific hotline that is run by the government—we work a lot with Get Cyber Safe—to ensure that we have that knowledge, so we can act on it, would be very helpful.

It really breaks my heart when I see governments and media thinking that we don't care about abuse, because we do. It's just that the world is a really big place. We are a very small company. Google Ireland has more employees than Twitter worldwide, so oftentimes we just lack the ability to act on everything. The majority of the time, though, we're just not aware unless we're working with somebody who is an expert and who is providing us with ongoing feedback.

4:10 p.m.

Liberal

Pam Damoff Liberal Oakville North—Burlington, ON

I only have about a minute left, and I have a quick question. You've talked about adding new features where you can block and mute specific words, but all that does is stop me from seeing it. That doesn't stop it from being out there.

For example, in the case of the lady who testified and who had filed a suit against a harasser and lost, it doesn't mean that material is not out there. As another example, someone could be harassing me and putting the period in front of my name, and it's public for everyone to see. They could have 10,000 followers. How do you deal with that kind of harassment?

4:10 p.m.

Head, Global Safety, Twitter Inc.

Patricia Cartes

The tools that we strengthened and relaunched last week are just a means for us to empower the user, but you are correct that we also don't want to put the burden on the user. We want regular users to be able to control their experience, but we have also significantly changed the way we enforce our hateful conduct rules and how we look at the targeting not just of groups, but also of individuals. Those two go hand in hand where there is abusive content on which we have to act. The user should feel empowered to use these tools, not to have to engage further, and not to have to see content that may be triggering at times, but we do have a responsibility to act on the content.
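To illustrate the distinction being drawn here, a small sketch of viewer-side filtering with hypothetical data: muting changes only what the muting user sees, which is why platform-side enforcement still has to act on the content itself.

```python
def visible_timeline(muted_words: set[str], tweets: list[str]) -> list[str]:
    """Mute filtering is applied at display time for one viewer only: matching
    tweets are hidden from this timeline but remain public on the platform."""
    return [
        t for t in tweets
        if not any(word.lower() in t.lower() for word in muted_words)
    ]

public_tweets = ["you are terrible", "nice speech today"]
print(visible_timeline({"terrible"}, public_tweets))   # ['nice speech today']
print(public_tweets)                                   # unchanged and still public
```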

4:10 p.m.

Liberal

Pam Damoff Liberal Oakville North—Burlington, ON

Thank you.

4:10 p.m.

Conservative

The Chair Conservative Marilyn Gladu

Excellent.

Over to Ms. Vecchio, for five minutes.

4:10 p.m.

Conservative

Karen Vecchio Conservative Elgin—Middlesex—London, ON

Thank you very much.

I'm just going to continue on that line with Pam. We often hear from women that the wrong thing to say is “just get offline”. Similarly, the answer shouldn't just be to get a private account, because we've heard, as well, that women shouldn't have to accept things like that. Twitter is a little bit different, because a private account on Twitter really isn't comparable to one on Facebook.

What is the answer for women who have been continually harassed, but want to keep using your service? What are some of the techniques that they could use?