Evidence of meeting #153 of the Standing Committee on Justice and Human Rights, 42nd Parliament, 1st Session.

Also speaking

Michele Austin, Head, Government and Public Policy, Twitter Canada, Twitter Inc.
Clerk of the Committee: Mr. Marc-Olivier Girard

3:30 p.m.

Liberal

The Chair Liberal Anthony Housefather

Welcome to this meeting of the Standing Committee on Justice and Human Rights, as we resume our study on online hate.

It is a great pleasure to be joined this afternoon by Ms. Michele Austin, who is the Head of Government and Public Policy at Twitter Canada.

We really want to express our appreciation that you are here, because it's only with the assistance of platforms like Twitter that we're going to have the information we need to finish our study and do it well.

Thank you for coming. The floor is yours, Ms. Austin.

3:30 p.m.

Michele Austin Head, Government and Public Policy, Twitter Canada, Twitter Inc.

Thank you very much.

Thank you, Mr. Chair, for inviting me to appear today to discuss this study on online hate.

On behalf of Twitter, I'd like to acknowledge the hard work of all committee members and witnesses on this issue. I apologize; my opening remarks are long. There's a lot to unpack. We're a 280-character company though, so maybe they aren't. We'll see how it goes.

Twitter's purpose is to serve the public conversation. Twitter is public by default. When individuals create Twitter accounts and begin tweeting, their tweets are immediately viewable and searchable by anyone around the world. People understand the public nature of Twitter. They come to Twitter expecting to see and join public conversations. As many of you have experienced, tweets can be directly quoted in news articles, and screen grabs of tweets can often be shared by users on other platforms. It is this open and real-time conversation that differentiates Twitter from other digital companies. Any attempts to undermine the integrity of our service erode the core tenet of freedom of expression online, the value upon which our company is based.

Twitter respects and complies with Canadian laws. Twitter does not operate in a separate digital legal world, as has been suggested by some individuals and organizations. Existing Canadian legal frameworks apply to digital spaces, including Twitter.

There has been testimony from previous witnesses supporting investments in digital and media literacy. Twitter agrees with this approach and urges legislators around the world to continuously invest in digital and media literacy. Twitter supports groups that educate users, especially youth, about healthy digital citizenship, online safety and digital skills. Some of our Canadian partners include MediaSmarts—and I will note that they just yesterday released a really excellent report on online hate with regard to youth—Get Cyber Safe, Kids Help Phone, We Matter and Jack.org.

While we welcome everyone to the platform to express themselves, the Twitter rules outline specific policies that explain what types of content and behaviour are permitted. We strive to enforce these rules consistently and impartially. Safety and free expression go hand in hand, both online and in the real world. If people don't feel safe to speak, they won't.

We put the people who use our service first in every step we take. All individuals accessing or using Twitter services must adhere to the policies set forth in the Twitter rules. Failure to do so may result in Twitter's taking one or more enforcement actions, such as temporarily limiting your ability to create posts or interact with other Twitter users; requiring you to remove prohibited content, such as removing a tweet, before you can create new posts or interact with other Twitter users; asking you to verify account ownership with a phone number or email address; or permanently suspending your account.

The Twitter rules enforcement section includes information about the enforcement of the following Twitter rules categories: abuse, child sexual exploitation, private information, sensitive media, violent threats, hateful conduct and terrorism.

I do want to quickly touch on terrorism.

Twitter prohibits terrorist content on its service. We are part of the Global Internet Forum to Counter Terrorism, commonly known as GIFCT, and we endorse the Christchurch Call to Action. Removing terrorist and violent extremist content is an area in which Twitter has made important progress, with 91% of what we remove being proactively detected by our own technology. Our CEO, Jack Dorsey, attended the Christchurch Call meeting in Paris earlier this month and met with Prime Minister Justin Trudeau to reiterate Twitter's commitment to reduce the risks of live streaming and to remove viral content faster.

Under our hateful conduct policy, you may not “promote violence against or directly attack or threaten” people on the basis of their inclusion in a protected group, such as race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability or serious disease. These include the nine protected categories that the United Nations charter of human rights has identified.

The Twitter rules also prohibit accounts that have the primary purpose of inciting harm towards others on the basis of the categories I mentioned previously. We also prohibit individuals who affiliate with organizations that—whether by their own statements or activities, both on and off the platform—“use or promote violence against civilians to further their causes.”

Content on Twitter is generally flagged for review for possible Twitter rules violations through our help centre at help.twitter.com/forms or through in-app reporting. It can also be flagged by law enforcement agencies and governments. We have a global team that manages enforcement of our rules with 24-7 coverage in every language supported on Twitter. We have also built a dedicated reporting flow exclusively for hateful conduct so that it is more easily reported to our review teams.

We are improving. During the last six months of 2018, we took enforcement action on more than 612,000 unique accounts for violations of the Twitter rules categories. We are also taking meaningful and substantial steps to remove the burden on users to report abuse to us.

Earlier this year, we made it a priority to take a proactive approach to abuse in addition to relying on people's reports. Now, by using proprietary technology, 38% of abusive content is surfaced proactively for human review instead of relying on reports from people using Twitter. The same technology we use to track spam, platform manipulations and other violations is helping us flag abusive tweets for our team to review. With our focus on reviewing this type of content, we've also expanded our teams in key areas and locations so that we can work quickly to keep people safe. I would note: We are hiring.

The final subject I want to touch on is law enforcement. Information sharing and collaboration are critical to Twitter's success in preventing abuse that disrupts meaningful conversations on the service. Twitter actively works to maintain strong relationships with Canadian law enforcement agencies. We have positive working relationships with the Canadian Centre for Cyber Security, the RCMP, government organizations and provincial and local police forces.

We have an online portal dedicated to law enforcement agencies that allows them to report illegal content such as hate, to make emergency requests and to submit requests for information. I have worked with law enforcement agencies as well as civil society organizations to ensure they know how to use this dedicated portal.

Twitter is committed to building on this momentum, consistent with our goal of improving healthy conversations. We do so in a transparent, open manner with due regard to the complexity of this particular issue.

Thank you. I look forward to your questions.

3:35 p.m.

Liberal

The Chair Liberal Anthony Housefather

Thank you so much.

We'll start with Mr. Cooper.

3:35 p.m.

Conservative

Michael Cooper Conservative St. Albert—Edmonton, AB

Thank you, Mr. Chair.

Thank you, Ms. Austin. It's good to see you.

I would be interested in your comments from Twitter's perspective on some of the initiatives that have been undertaken by European governments as well as the European Commission. We've heard about them from other witnesses. For example, I understand that Twitter was among the platforms that entered into an agreement with the European Commission in May 2016 to take down hateful content within a period of 24 hours. I personally see some positive aspects but also some concerns with that specific agreement and how that would be implemented.

I would be interested in Twitter's perspective on how that's worked out three years later.

3:35 p.m.

Head, Government and Public Policy, Twitter Canada, Twitter Inc.

Michele Austin

I think you're also focusing specifically on terrorist and violent extremist content, which is a part of GIFCT as well. I believe we achieve a standard of two hours to try to take that content down.

As I stated in my remarks, with proprietary technology, 91% of that content doesn't make it to the platform. We now have a better understanding of where it's being posted and who is posting it. Between the time you hit post and the time it comes through our servers, we can tag it.

It's a very interesting and important question that you ask because we're very proud of the work that we've done with regard to terrorism and violent extremist groups, but when we go to conferences like the Oslo Freedom Forum or RightsCon—I don't know if you know that conference; it happened in Toronto two years ago—we get feedback from groups like Amnesty International, which is here in Canada, and Witness, which is not, that are a little worried that we're too good at it. They want to see insignias on videos. They want to see the conversation that is happening in order to be able to follow it and eventually prosecute it.

We're trying to find this balance between the requests of governments to stop this kind of hate and these terrorist actions from happening, which, again, we've been very successful at, and the requirements of civil society to track them and prosecute.

3:35 p.m.

Conservative

Michael Cooper Conservative St. Albert—Edmonton, AB

Thank you for that. In terms of speed, two hours is pretty good, and in a lot of ways that's a good thing. On the other hand, speed can sometimes sacrifice thoughtful deliberation in certain instances.

You spoke about the reporting of hate, including from governments. How does Twitter handle those requests? How do you ensure that the removal of content and addressing those requests, especially from governments, which might have ulterior motives, are dealt with in a transparent and consistent manner?

3:40 p.m.

Head, Government and Public Policy, Twitter Canada, Twitter Inc.

Michele Austin

Thank you very much for your question. It's something, again, that we consider thoughtfully and often.

There are two parts to the answer. The first one is our transparency report, which I would urge you to take a look at. It's published twice a year at transparency.twitter.com. In it we report, by government, what kinds of requests we receive for takedowns, be they requests for information or emergency takedowns. We report on how often we have completed them.

I think—and I could be wrong—in the previous report we complied with 100% of requests from the Canadian government, and it was a small number, like 38.

You can go and check that resource to see how we interact with governments. Of course, we're also governed by international rules and agreements. MLAT, the mutual legal assistance treaty process, would be the law enforcement framework that governs how we work with other governments through Homeland Security in the U.S.

Finally, with regard to law enforcement, we work with them consistently. I was at RCMP headquarters yesterday to discuss whether they are getting the information they need from us, whether our reporting is helpful and sufficient in cases like child sexual exploitation, whether they have enough of what they need to prosecute, and whether they understand what our limitations are and how we go through and assess reports.

3:40 p.m.

Conservative

Michael Cooper Conservative St. Albert—Edmonton, AB

I'll cede my time to Mr. Barrett.

3:40 p.m.

Liberal

The Chair Liberal Anthony Housefather

Mr. Barrett, you have a minute and a half.

May 30th, 2019 / 3:40 p.m.

Conservative

Michael Barrett Conservative Leeds—Grenville—Thousand Islands and Rideau Lakes, ON

Thanks very much.

For Twitter specifically, what consideration is given, if any, to removing the ability of users to operate on the platform in an anonymous fashion? Oftentimes we have all seen, and to varying degrees been the recipients of, hateful content. Certainly almost everyone on the platform has experienced things there that no one would say to another person if their name were actually attached to it. That runs the full spectrum, from illegal activity to just really poisoning the discourse.

Can you give us any idea of what Twitter's position is on anonymity on the platform?

3:40 p.m.

Head, Government and Public Policy, Twitter Canada, Twitter Inc.

Michele Austin

Twitter has allowed anonymity since the inception of the platform, and we allow it for a number of reasons. Much anonymous activity has nothing to do with hate.

There are certain countries in which we operate where, if we did not allow anonymity, people would be in physical danger for the comments they post online.

3:40 p.m.

Conservative

Michael Barrett Conservative Leeds—Grenville—Thousand Islands and Rideau Lakes, ON

Right.

3:40 p.m.

Head, Government and Public Policy, Twitter Canada, Twitter Inc.

Michele Austin

I would note that anonymous accounts must comply with the Twitter rules. There are no exceptions for them. If they make a hateful comment, they are subject to the Twitter rules. Of course it is a delicate balance, but at this time we allow anonymity and will continue to allow it.

From a personal perspective, a number of women's groups in Canada, as well as my Korean colleagues, have come to me to say that the conversation is entirely different for them because they are allowed to be anonymous and are able to have conversations that they would be fearful to have face to face.

While we certainly recognize the problem with anonymous comments that we see in Canada and the United States, we are sensitive to those who come to Twitter to have their voices heard in ways they couldn't be heard elsewhere.

3:40 p.m.

Conservative

Michael Barrett Conservative Leeds—Grenville—Thousand Islands and Rideau Lakes, ON

That's great. Thanks very much.

3:40 p.m.

Liberal

The Chair Liberal Anthony Housefather

Thank you.

Mr. Fraser.

3:40 p.m.

Liberal

Colin Fraser Liberal West Nova, NS

Thank you, Mr. Chair.

Thanks for being here, Ms. Austin. I appreciate your presentation.

I just want to pick up on a statistic you mentioned in your presentation: that 91% of terrorist and violent extremist content is removed proactively, detected by your own technology.

What about other content? What about content such as threats or racist comments or comments that otherwise violate your standards?

3:40 p.m.

Head, Government and Public Policy, Twitter Canada, Twitter Inc.

Michele Austin

I think, generally speaking, we believe that only about 1% of accounts on Twitter make up the majority of accounts reported for abuse.

Focusing specifically on abuse, again, those statistics are published in our transparency report.

We action unique accounts under the six rule categories I mentioned, covering abuse, child sexual exploitation and hateful conduct. Of the six categories actioned from July to December 2018, we suspended 250,000 accounts under hateful conduct policies, 235,000 under abuse, 56,000 under violent threats, 30,000 under sensitive media, 29,000 under child sexual exploitation and about 8,000 under private information.

It's very difficult to compare those numbers year to year, country to country, because context matters. There could have been an uprising in a country or something could have happened politically to spur on some sort of different conversation or different actions. But of the 500 million tweets per day, those are the numbers of accounts we actioned.

3:45 p.m.

Liberal

Colin Fraser Liberal West Nova, NS

You obviously have the technology. If you're detecting 91% of terrorism-related and violent extremist content, and you're able to remove it before it gets on the platform, then you have the technology to do a better job at preventing other types of online hatred, such as racist propaganda and threatening, abusive harassment. Why aren't you doing a better job with that type of conduct?

3:45 p.m.

Head, Government and Public Policy, Twitter Canada, Twitter Inc.

Michele Austin

You make an excellent point. If people don't feel safe coming to Twitter, they won't use it, so it's in our best interests to do a better job with regard to these actions.

Recently we've changed our approach. Twitter has a different approach to content moderation than other platforms; there is a great deal of human review involved. We are depending more on proprietary technology: 38% of abusive content is now flagged by technology for our teams to review, whereas previously none was. We plan on making more investments in proprietary technology.

3:45 p.m.

Liberal

Colin Fraser Liberal West Nova, NS

How much does Twitter spend on those types of technologies every year?

3:45 p.m.

Head, Government and Public Policy, Twitter Canada, Twitter Inc.

Michele Austin

I would have to get back to you on the amount we spend.

3:45 p.m.

Liberal

Colin Fraser Liberal West Nova, NS

You'll forward that to the committee, then?

3:45 p.m.

Head, Government and Public Policy, Twitter Canada, Twitter Inc.

Michele Austin

Yes.

3:45 p.m.

Liberal

Colin Fraser Liberal West Nova, NS

With regard to civil action, has Twitter ever been successfully sued for any type of material that has been on its platform?

3:45 p.m.

Head, Government and Public Policy, Twitter Canada, Twitter Inc.

Michele Austin

I would have to get back to you on that as well.