Evidence of meeting #153 for Justice and Human Rights in the 42nd Parliament, 1st Session. (The original version is on Parliament’s site, as are the minutes.) The winning word was platform.

A recording is available from Parliament.

On the agenda

MPs speaking

Also speaking

Michele Austin  Head, Government and Public Policy, Twitter Canada, Twitter Inc.
Clerk of the Committee  Mr. Marc-Olivier Girard

3:30 p.m.

Liberal

The Chair Liberal Anthony Housefather

Welcome to this meeting of the Standing Committee on Justice and Human Rights, as we resume our study on online hate.

It is a great pleasure to be joined this afternoon by Ms. Michele Austin, who is the Head of Government and Public Policy at Twitter Canada.

We want to really express our appreciation that you are here because it's only with the assistance of platforms like Twitter that we're going to be able to have the information to finish our study and do it well.

Thank you for coming. The floor is yours, Ms. Austin.

3:30 p.m.

Michele Austin Head, Government and Public Policy, Twitter Canada, Twitter Inc.

Thank you very much.

Thank you, Mr. Chair, for inviting me to appear today to discuss this study on online hate.

On behalf of Twitter, I'd like to acknowledge the hard work of all committee members and witnesses on this issue. I apologize; my opening remarks are long. There's a lot to unpack. We're a 280-character company though, so maybe they aren't. We'll see how it goes.

Twitter's purpose is to serve the public conversation. Twitter is public by default. When individuals create Twitter accounts and begin tweeting, their tweets are immediately viewable and searchable by anyone around the world. People understand the public nature of Twitter. They come to Twitter expecting to see and join public conversations. As many of you have experienced, tweets can be directly quoted in news articles, and screen grabs of tweets can often be shared by users on other platforms. It is this open and real-time conversation that differentiates Twitter from other digital companies. Any attempts to undermine the integrity of our service erode the core tenet of freedom of expression online, the value upon which our company is based.

Twitter respects and complies with Canadian laws. Twitter does not operate in a separate digital legal world, as has been suggested by some individuals and organizations. Existing Canadian legal frameworks apply to digital spaces, including Twitter.

There has been testimony from previous witnesses supporting investments in digital and media literacy. Twitter agrees with this approach and urges legislators around the world to continuously invest in digital and media literacy. Twitter supports groups that educate users, especially youth, about healthy digital citizenship, online safety and digital skills. Some of our Canadian partners include MediaSmarts—and I will note that they just yesterday released a really excellent report on online hate with regard to youth—Get Cyber Safe, Kids Help Phone, We Matter and Jack.org.

While we welcome everyone to the platform to express themselves, the Twitter rules outline specific policies that explain what types of content and behaviour are permitted. We strive to enforce these rules consistently and impartially. Safety and free expression go hand in hand, both online and in the real world. If people don't feel safe to speak, they won't.

We put the people who use our service first in every step we take. All individuals accessing or using Twitter services must adhere to the policies set forth in the Twitter rules. Failure to do so may result in Twitter's taking one or more enforcement actions, such as temporarily limiting your ability to create posts or interact with other Twitter users; requiring you to remove prohibited content, such as removing a tweet, before you can create new posts or interact with other Twitter users; asking you to verify account ownership with a phone number or email address; or permanently suspending your account.

The Twitter rules enforcement section includes information about the enforcement of the following Twitter rules categories: abuse, child sexual exploitation, private information, sensitive media, violent threats, hateful conduct and terrorism.

I do want to quickly touch on terrorism.

Twitter prohibits terrorist content on its service. We are part of the Global Internet Forum to Counter Terrorism, commonly known as GIFCT, and we endorse the Christchurch call to action. Removing terrorist content and violent extremist content is an area that Twitter has made important progress in, with 91% of what we remove being proactively detected by our own technology. Our CEO, Jack Dorsey, attended the Christchurch call meeting in Paris earlier this month and met with Prime Minister Justin Trudeau to reiterate Twitter's commitment to reduce the risks of live streaming and to remove viral content faster.

Under our hateful conduct policy, you may not “promote violence against or directly attack or threaten” people on the basis of their inclusion in a protected group, such as race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability or serious disease. These include the nine protected categories that the United Nations charter of human rights has identified.

The Twitter rules also prohibit accounts that have the primary purpose of inciting harm towards others on the basis of the categories I mentioned previously. We also prohibit individuals who affiliate with organizations that—whether by their own statements or activities, both on and off the platform—“use or promote violence against civilians to further their causes.”

Content on Twitter is generally flagged for review for possible Twitter rules violations through our help centre found at help.twitter.com/forms or in-app reporting. It can also be flagged by law enforcement agencies and governments. We have a global team that manages enforcement of our rules with 24-7 coverage in every language supported on Twitter. We have also built a dedicated reporting flow exclusively for hateful conduct so it is more easily reported to our review teams.

We are improving. During the last six months of 2018, we took enforcement action on more than 612,000 unique accounts for violations of the Twitter rules categories. We are also taking meaningful and substantial steps to remove the burden on users to report abuse to us.

Earlier this year, we made it a priority to take a proactive approach to abuse in addition to relying on people's reports. Now, by using proprietary technology, 38% of abusive content is surfaced proactively for human review instead of relying on reports from people using Twitter. The same technology we use to track spam, platform manipulations and other violations is helping us flag abusive tweets for our team to review. With our focus on reviewing this type of content, we've also expanded our teams in key areas and locations so that we can work quickly to keep people safe. I would note: We are hiring.

The final subject I want to touch on is law enforcement. Information sharing and collaboration are critical to Twitter's success in preventing abuse that disrupts meaningful conversations on the service. Twitter actively works to maintain strong relationships with Canadian law enforcement agencies. We have positive working relationships with the Canadian centre for cybersecurity, the RCMP, government organizations and provincial and local police forces.

We have an online portal dedicated to law enforcement agencies that allows them to report illegal content such as hate, emergency requests and requests for information. I have worked with law enforcement agencies as well as civil society organizations to ensure they know how to use this dedicated portal.

Twitter is committed to building on this momentum, consistent with our goal of improving healthy conversations. We do so in a transparent, open manner with due regard to the complexity of this particular issue.

Thank you. I look forward to your questions.

3:35 p.m.

Liberal

The Chair Liberal Anthony Housefather

Thank you so much.

We'll start with Mr. Cooper.

3:35 p.m.

Conservative

Michael Cooper Conservative St. Albert—Edmonton, AB

Thank you, Mr. Chair.

Thank you, Ms. Austin. It's good to see you.

I would be interested in your comments from Twitter's perspective on some of the initiatives that have been undertaken by European governments as well as the European Commission. We've heard about them from other witnesses. For example, I understand that Twitter was among the platforms that entered into an agreement with the European Commission in May 2016 to take down hateful content within a period of 24 hours. I personally see some positive aspects but also some concerns with that specific agreement and how that would be implemented.

I would be interested in Twitter's perspective on how that's worked out three years later.

3:35 p.m.

Head, Government and Public Policy, Twitter Canada, Twitter Inc.

Michele Austin

I think you're also focusing specifically on terrorist and violent extremist content, which is a part of GIFCT as well. I believe we achieve a standard of two hours to try to take that content down.

As I stated in my remarks, with proprietary technology, 91% of that content doesn't make it to platform. We now have a better understanding of where it's being posted and who is posting it. Between the time you hit post and the time it comes through our servers, we can tag it.

It's a very interesting and important question that you ask because we're very proud of the work that we've done with regard to terrorism and violent extremist groups, but when we go to conferences like the Oslo Freedom Forum or RightsCon—I don't know if you know that conference; it happened in Toronto two years ago—we get feedback from groups like Amnesty International, which is here in Canada, and Witness, which is not, that are a little worried that we're too good at it. They want to see insignias on videos. They want to see the conversation that is happening in order to be able to follow it and eventually prosecute it.

We're trying to find this balance between the requests of governments to stop this kind of hate and these terrorist actions from happening, which, again, we've been very successful at, and the requirements of civil society to track them and prosecute.

3:35 p.m.

Conservative

Michael Cooper Conservative St. Albert—Edmonton, AB

Thank you for that. In terms of speed, two hours is pretty good. In a lot of ways, that's a good thing. On the other hand, sometimes it could be said that speed can sacrifice a thoughtful deliberation in certain instances.

You spoke about the reporting of hate, including from governments. How does Twitter handle those requests? How do you ensure that the removal of content and addressing those requests, especially from governments, which might have ulterior motives, are dealt with in a transparent and consistent manner?

3:40 p.m.

Head, Government and Public Policy, Twitter Canada, Twitter Inc.

Michele Austin

Thank you very much for your question. It's something, again, that we consider thoughtfully and often.

There are two parts to the answer. The first one is our transparency report, which I would urge you to take a look at. It's published twice a year at transparency.twitter.com. In it we report, by government, what kinds of requests we have for takedowns, be they for information purpose or emergency takedowns. We report on how often we have completed them.

I think—and I could be wrong—in the previous report we complied with 100% of requests from the Canadian government, and it was a small number, like 38.

You can go and check that resource to see how we interact with governments. Of course, we're also governed by international rules and agreements. MLAT would be the law enforcement one that would govern how we work with other governments through Homeland Security in the U.S.

Finally, with regard to law enforcement, we work with them consistently. I was at RCMP headquarters yesterday to have discussions about whether they are getting the information they need from us, whether our reporting is helpful and sufficient in cases like child sexual exploitation and if they have enough of what they need to prosecute, and also that they understand what our limitations are and how we go through and assess reports.

3:40 p.m.

Conservative

Michael Cooper Conservative St. Albert—Edmonton, AB

I'll renounce my time in favour of Mr. Barrett.

3:40 p.m.

Liberal

The Chair Liberal Anthony Housefather

Mr. Barrett, you have a minute and a half.

3:40 p.m.

Conservative

Michael Barrett Conservative Leeds—Grenville—Thousand Islands and Rideau Lakes, ON

Thanks very much.

For Twitter specifically, what stage of consideration is given, or is any consideration given, to the removal of the ability of users to operate on the platform in an anonymous fashion? Oftentimes we have all seen and, to varying degrees, been the recipients of hateful content. Certainly almost everyone on the platform would experience things there that no one would say to another person if their name were actually attached to it. That runs the full spectrum of everything from illegal activity to just really poisoning the discourse.

Can you give us any idea of what Twitter's position is on anonymity on the platform?

3:40 p.m.

Head, Government and Public Policy, Twitter Canada, Twitter Inc.

Michele Austin

Twitter has allowed anonymity since the inception of the platform. We allow it for a number of reasons. A lot of anonymity is not targeted at hate.

There are certain countries in which we operate where, if we did not allow anonymity, people would be in physical danger for the comments they post online.

3:40 p.m.

Conservative

Michael Barrett Conservative Leeds—Grenville—Thousand Islands and Rideau Lakes, ON

Right.

3:40 p.m.

Head, Government and Public Policy, Twitter Canada, Twitter Inc.

Michele Austin

I would note that anonymous accounts must comply with the Twitter rules. There are no exceptions for them. If they make a hateful comment, they are subject to the Twitter rules. Of course it is a delicate balance, but at this time we allow anonymity and will continue to allow it.

From a personal perspective, I've had a number of women's groups come to me in Canada, but also my Korean colleagues, to say that the conversation for them is entirely different because they are allowed to be anonymous and they are allowed to have conversations that they are fearful to have face to face.

While we recognize certainly the problem we see in probably Canada and the United States with anonymous comments, we are sensitive to those who are coming to Twitter to have their voices heard in ways that they couldn't be heard in other locations.

3:40 p.m.

Conservative

Michael Barrett Conservative Leeds—Grenville—Thousand Islands and Rideau Lakes, ON

That's great. Thanks very much.

3:40 p.m.

Liberal

The Chair Liberal Anthony Housefather

Thank you.

Mr. Fraser.

3:40 p.m.

Liberal

Colin Fraser Liberal West Nova, NS

Thank you, Mr. Chair.

Thanks for being here, Ms. Austin. I appreciate your presentation.

I just want to pick up on something you mentioned in your presentation regarding a stat, that 91% of content is removed proactively, detected by your own technology for terrorist and violent extremist content.

What about other content? What about content such as threats or racist comments or comments that otherwise violate your standards?

3:40 p.m.

Head, Government and Public Policy, Twitter Canada, Twitter Inc.

Michele Austin

I think, generally speaking, we believe that only about 1% of the content on Twitter makes up the majority of those accounts reported for abuse.

Focusing specifically on abuse, again, those statistics are published in our transparency report.

We action unique accounts focused on the six rules that I mentioned with regard to abuse, child sexual exploitation and hateful conduct. Out of the six categories we actioned in July to December 2018, 250,000 accounts under hateful conduct policies were suspended, 235,000 accounts under abuse, 56,000 under violent threats, 30,000 under sensitive media, 29,000 on child sexual exploitation and about 8,000 on private information.

It's very difficult to compare those numbers year to year, country to country, because context matters. There could have been an uprising in a country or something could have happened politically to spur on some sort of different conversation or different actions. But of the 500 million tweets per day, those are the numbers of accounts we actioned.

3:45 p.m.

Liberal

Colin Fraser Liberal West Nova, NS

You obviously have the technology. If you're detecting 91% of the terrorism-related and violent extremist-related content, and you're able to detect them and remove them 91% of the time before they get on the platform, then you obviously have the technology to do a better job at preventing other types of online hatred, such as racist propaganda, threatening and abusive harassment. Why aren't you doing a better job with that type of conduct?

3:45 p.m.

Head, Government and Public Policy, Twitter Canada, Twitter Inc.

Michele Austin

You make an excellent point. If people don't feel safe coming to Twitter, they won't use it, so it's in our best interests to do a better job with regard to these actions.

Recently we've changed our approach. Twitter has a different approach to content moderation than other platforms. There are many human interactions with our people reviewing them. We are depending more on proprietary technology and now 38% of accounts are actioned to us by technology for review—they're flagged for us to review—whereas previously we didn't have any. We plan on making more investments in proprietary technology.

3:45 p.m.

Liberal

Colin Fraser Liberal West Nova, NS

How much does Twitter spend on those types of technologies every year?

3:45 p.m.

Head, Government and Public Policy, Twitter Canada, Twitter Inc.

Michele Austin

I would have to get back to you on the amount we spend.

3:45 p.m.

Liberal

Colin Fraser Liberal West Nova, NS

You'll forward that to the committee, then?

3:45 p.m.

Head, Government and Public Policy, Twitter Canada, Twitter Inc.

3:45 p.m.

Liberal

Colin Fraser Liberal West Nova, NS

With regard to civil action, has Twitter ever been successfully sued for any type of material that has been on their platform?

3:45 p.m.

Head, Government and Public Policy, Twitter Canada, Twitter Inc.

Michele Austin

I would have to get back to you on that as well.

3:45 p.m.

Liberal

Colin Fraser Liberal West Nova, NS

Are you aware of any person who has successfully sued another individual who has put that type of information...?

3:45 p.m.

Head, Government and Public Policy, Twitter Canada, Twitter Inc.

Michele Austin

I would have to get back to you on that.

3:45 p.m.

Liberal

Colin Fraser Liberal West Nova, NS

All right.

I understand obviously there is a difference between your proactive investigation of the type of content we're talking about, online hatred, and having the individual users having to report it and wait for you to get back to them. Do you see an issue right now with how long it's taking for somebody who reports a type of abusive, threatening or hateful type of conduct online to get a response?

As I understand it, it takes up to 24 hours just to get the initial response, and then it can take a long time after that, and the onus is on the individual user to prove their case to you, rather than perhaps nipping it in the bud, putting it in abeyance and then being able to determine whether or not that content should be removed or that person should be allowed to use your platform.

3:45 p.m.

Head, Government and Public Policy, Twitter Canada, Twitter Inc.

Michele Austin

Context matters with regard to actions that we take. We have a number of signals that we measure and take a look at before we take action. Let me break that into two pieces: The first would be the reporting and the second would be review.

We publicly acknowledge that there's too much burden on victims to report to Twitter and that is why we are trying to do a better job. We now have a dedicated hateful conduct workflow so that we know we can raise those issues for review faster. As I mentioned, we're working with proprietary technology. We realize we have to do a better job in reporting abuse.

In reviewing those accounts or those tweets flagged for action, it's extremely important for us to try to get it right. There are a number of behavioural signals we get, so if I tweet something and you mute me, you block me and you report me, clearly something's up in the quality of the content. Further, we take a look at our rules and we take a look at the laws where that tweet came from.

The other part of context is that there are very different conversations happening on Twitter. Often, the example we use is gaming. It is perfectly acceptable in the gaming community to say something like, “I'm coming to kill you tonight, be prepared”, so we would like to make sure we have that context as well.

These are the things we consider when we make that review.

3:50 p.m.

Liberal

Colin Fraser Liberal West Nova, NS

All right, thanks.

3:50 p.m.

Liberal

The Chair Liberal Anthony Housefather

Thank you very much.

Mr. MacGregor.

May 30th, 2019 / 3:50 p.m.

NDP

Alistair MacGregor NDP Cowichan—Malahat—Langford, BC

Thank you, Chair.

Ms. Austin, thank you for coming here before the committee.

Facebook banned a series of white nationalist groups: Faith Goldy, the Soldiers of Odin, Kevin Goudreau and the Canadian Nationalist Front, among others. Twitter only chose to ban the Canadian Nationalist Front from their own platform. Faith Goldy used Twitter to direct her followers to her website after her ban on Facebook.

When Twitter banned the Canadian Nationalist Front, you said you did so because the account violated the rules for barring violent extremist groups, but there was no further elaboration, and a multitude of other white nationalist groups still have a presence on the platform. Facebook has said that the removal of the groups came from their policy that they don't allow anyone who's engaged in offline organized hate to have a presence on the platform.

When Facebook took that step, why did Twitter only ban the Canadian Nationalist Front? Why is your threshold different from Facebook as to what can be allowed on your platform?

3:50 p.m.

Head, Government and Public Policy, Twitter Canada, Twitter Inc.

Michele Austin

I'm not going to comment on the individual accounts.

Firstly, with regard to white supremacism, we have an in-house process now working with civil society, working with researchers, to better understand how white nationalists and white supremacists use the platform, but that approach is not exclusive to hate. That is how we go through policy review and policy development, but that specific policy is currently under review.

We have the three core robust policies of violent extremist groups, which is the one you mentioned. We also have terrorism and the hateful conduct policies. We will take a look at all three policies as those accounts are actioned to determine whether or not we should take further action.

3:50 p.m.

NDP

Alistair MacGregor NDP Cowichan—Malahat—Langford, BC

Thank you.

You have made an allusion to the fact that there's always room for improvement, and it's a continuous work. I can appreciate that.

I'm curious. What leads to an actual ban? Can a ban happen right away, or do you like to progressively put sanctions on a person or an account?

3:50 p.m.

Head, Government and Public Policy, Twitter Canada, Twitter Inc.

Michele Austin

There are a number of stages with regard to bans. Egregious violence can result in an immediate ban. An egregious breaking of our rules like posting child sexual exploitation material can result in an immediate ban.

To be honest, most users who receive an action have made an error, and in our asking them to take that tweet down, they will apologize to their users once they have received that kind of information. We are looking to hopefully provide avenues for corrective behaviour, for counter-speech, for others to come and run to those conversations, and talk about different issues, or ask people to behave differently.

There are a number of measures we use. We can action a single tweet. We can lock down an account, but to ban somebody from Twitter completely is a very big deal for us that we take extremely seriously.

3:50 p.m.

NDP

Alistair MacGregor NDP Cowichan—Malahat—Langford, BC

I know you don't want to really comment on individual accounts. Are you studying what other platforms like Facebook are doing looking at their reasons and is that forming a large part of your review?

3:50 p.m.

Head, Government and Public Policy, Twitter Canada, Twitter Inc.

Michele Austin

Yes. We are constantly comparing our policies with the best practices of a number of organizations, not just online platforms or digital companies. We are also reaching out constantly to civil society. We have a trust and safety council, of which MediaSmarts in Canada is a member. They meet twice a year. They come and tell us what they are seeing on platforms.

We also rely on a number of researchers to come and tell us what they are seeing. It's that open dialogue and exchange of not just ideas but of information in terms of how violent extremist groups are behaving that is extremely important to us. We value it and depend on it as we continue to iterate and try to improve the service.

3:55 p.m.

NDP

Alistair MacGregor NDP Cowichan—Malahat—Langford, BC

Thank you.

I have a final question. In your opening statement you made mention of the help centre. You have a team of people who are reviewing accounts for possible violations of your rules, and really it's a global team.

Do you know how many people are staffed on the global team and how many are based in Canada?

3:55 p.m.

Head, Government and Public Policy, Twitter Canada, Twitter Inc.

Michele Austin

There is probably—

3:55 p.m.

NDP

Alistair MacGregor NDP Cowichan—Malahat—Langford, BC

Just to clarify, I'm asking this because I'm wondering if those people who are reviewing accounts based in Canada have an understanding of the Canadian context, our history, our culture, the fact that we have a lot of racism towards first nations and there is a lot of misunderstanding. A lot of those stereotypes are still being propagated on social media.

3:55 p.m.

Head, Government and Public Policy, Twitter Canada, Twitter Inc.

Michele Austin

There are two things. First of all, there are no content reviewers in Canada. We have a really excellent Moments team. I don't know if you use the search function, but when you look through and we highlight trends, we have a really excellent Moments team in Canada that looks at content from that perspective.

Canadian voices are well represented across the country. There's bias training. There's a really huge number of trainings that occur.

I have personal experience where we've changed a policy and we've been asked to provide Canadian context, which is something I have done. We also have an appeal function internally in order to provide more context.

With regard to first nations, I don't know if you have heard from indigenous groups at this committee, but if you have not, I strongly encourage you to do so. They experience hate differently from us, not just in terms of hate but also language. This is something they have brought to my attention, for which I am very grateful. A word like “savage”, we would use very differently than they would use it, and this is something we are looking at going forward. Dr. Jeffrey Ansloos from the University of Toronto is a really excellent resource with regard to this.

These are open conversations that we're having. We look forward to hearing more from the indigenous community. They are doing a great job, for me at least, highlighting the different kinds of issues they get, as they often do run to conversations to try to correct the speech or tell their stories.

3:55 p.m.

NDP

Alistair MacGregor NDP Cowichan—Malahat—Langford, BC

Thank you.

3:55 p.m.

Liberal

The Chair Liberal Anthony Housefather

Thank you very much.

Mr. Ehsassi.

3:55 p.m.

Liberal

Ali Ehsassi Liberal Willowdale, ON

Thank you, Mr. Chair.

Thank you, Ms. Austin, for appearing before our committee. I have to say your testimony was very clear, so thank you for that.

I may have missed this, but Mr. MacGregor asked how big the global enforcement team was. Could you tell us what the numbers are for the global enforcement team?

3:55 p.m.

Head, Government and Public Policy, Twitter Canada, Twitter Inc.

Michele Austin

I'll get back to you with specific numbers, but it's more than 2,500 people.

3:55 p.m.

Liberal

Ali Ehsassi Liberal Willowdale, ON

This global enforcement team is not primarily concerned about enforcing your own internal rules. Do they actually look at distinctions between legal requirements in different countries?

3:55 p.m.

Head, Government and Public Policy, Twitter Canada, Twitter Inc.

Michele Austin

There are a number of signals that we take into consideration as we are asked to action accounts or tweets, such as behaviour and local context. Certainly Asian and Southeast Asian countries have a different approach to sensitive media, for which I would say—I don't know if you know—you can currently turn a sensitive media function on and off on Twitter here in Canada. They take into account many of these signals as they go through their enforcement actions.

3:55 p.m.

Liberal

Ali Ehsassi Liberal Willowdale, ON

You did say that, as a company, you aspire to have healthy conversations. In your opinion, given that this is one of the big, overarching issues that we've grappled with at this committee, who do you think the onus should be on? Should it be on social media platforms or should it be on governments to ensure that we deal with hate speech in an effective manner?

3:55 p.m.

Head, Government and Public Policy, Twitter Canada, Twitter Inc.

Michele Austin

I think it's a combination of both. From a Twitter perspective, I would add the users into that mix as well. We welcome suggestions from human rights groups. We welcome suggestions from government. Of course we respect the law that we're in. It is extremely important for us to deliver a safe and healthy service. It's in our best interests for Twitter users to be safe. It is really an important combination of both.

In some cases, generally speaking, social media platforms will act faster than governments. That's not always the case. In other cases—as we talked about with regard to terrorism—we are asked to act faster.

3:55 p.m.

Liberal

Ali Ehsassi Liberal Willowdale, ON

Is the transparency report you referred to mandated by the EU?

4 p.m.

Head, Government and Public Policy, Twitter Canada, Twitter Inc.

Michele Austin

It's not mandated. It's a different reporting structure for the European Union. The transparency report is just something we do.

4 p.m.

Liberal

Ali Ehsassi Liberal Willowdale, ON

Is the violent extremist content or the terrorist content mandated by any agreement or convention or is that something you opted to do?

4 p.m.

Head, Government and Public Policy, Twitter Canada, Twitter Inc.

Michele Austin

Each company will have a different set of policies. As I said, we have three. We have a terrorism policy, a violent extremist group policy and a hateful conduct policy. The hateful conduct policy does tend to encompass individuals or, obviously, violent extremist groups. It can also apply to groups. You would have to ask each company how they enforce and what their policies are.

4 p.m.

Liberal

Ali Ehsassi Liberal Willowdale, ON

Given your reference to each one of those companies, how would you say Twitter compares to other companies in terms of combatting hate crimes?

4 p.m.

Head, Government and Public Policy, Twitter Canada, Twitter Inc.

Michele Austin

I don't feel qualified to comment on what the other companies are doing, or might or might not do. We have a different approach, which I've outlined to you. Could we do better? We always think we can do better.

4 p.m.

Liberal

Ali Ehsassi Liberal Willowdale, ON

What things do you do really well and you could hold up as an example for other companies?

4 p.m.

Head, Government and Public Policy, Twitter Canada, Twitter Inc.

Michele Austin

Do you mean with regard to hate?

4 p.m.

Liberal

Ali Ehsassi Liberal Willowdale, ON

Yes.

4 p.m.

Head, Government and Public Policy, Twitter Canada, Twitter Inc.

Michele Austin

I think that we've done a very good job of refining our policy and expanding our policy. In December 2018, we included violent images and also handles that might have a hateful connotation in the name. I think we do a good job of working with civil society and working with governments. I think we do a very good job of explaining what we're doing through our transparency report. I'd leave it there.

4 p.m.

Liberal

Ali Ehsassi Liberal Willowdale, ON

You say that you are attempting to have healthy conversations and that you're attempting to do so in a transparent fashion, but you say in one of the closing lines that your company is also supposed to have due regard for the complexity of this particular issue.

In your opinion, given the differences and the legal requirements in each jurisdiction—the most obvious one being that in the U.S. there is something called absolute free speech, whereas we take a very different perspective with that—are our Canadian requirements complex? Is it difficult to draw the line?

4 p.m.

Head, Government and Public Policy, Twitter Canada, Twitter Inc.

Michele Austin

Do you mean, is Canadian law complex?

4 p.m.

Liberal

Ali Ehsassi Liberal Willowdale, ON

Yes.

4 p.m.

Head, Government and Public Policy, Twitter Canada, Twitter Inc.

Michele Austin

Does it matter if it's complex? We have to respect it, so that would be my point.

Are we taking into account what we're doing in other jurisdictions? Absolutely. I'll give you the GDPR as an example. That's a privacy example. When they implemented the GDPR in Europe, we took those best practices and applied them globally with regard to our company.

We are absolutely open to having discussions on best practices from around the world. Some things work very well, and some laws work very well in other countries.

If you're asking me also how to change Canadian law, or do that kind of thing, I wouldn't comment on that. If you're asking me about regulation, I would say that you should consider clear definitions for us to be measured against. Develop standards for transparency. We really value transparency with our users. The focus should be on systematic, recurring problems, not just individual one-off issues. Co-operation should be fostered.

4 p.m.

Liberal

Ali Ehsassi Liberal Willowdale, ON

Lastly, I think you suggested that insofar as Canada is concerned, you have no content review. If so, why not?

4 p.m.

Head, Government and Public Policy, Twitter Canada, Twitter Inc.

Michele Austin

Why don't we have content reviewers in Canada?

4 p.m.

Liberal

Ali Ehsassi Liberal Willowdale, ON

In Canada, yes.

4 p.m.

Head, Government and Public Policy, Twitter Canada, Twitter Inc.

Michele Austin

It's just a choice. We're hiring. We have Canadians working across the company in various locations, including Brussels and around the world. It is just a choice that we made, but that doesn't mean that Canadian content is not respected.

4 p.m.

Liberal

Ali Ehsassi Liberal Willowdale, ON

Thank you for that.

4 p.m.

Liberal

The Chair Liberal Anthony Housefather

We don't have time for another full round, but we do have time before the hour's up. Could members show me who has questions?

Mr. McKinnon.

Anyone else on this side? Mr. Saini.

Let's start with Mr. McKinnon. I'll give you three minutes to start with, if that's okay.

4 p.m.

Liberal

Ron McKinnon Liberal Coquitlam—Port Coquitlam, BC

Thank you.

I want to talk more about anonymity. I take the point you made to Mr. Barrett about it being sometimes quite necessary. I would also like to suggest to you that perhaps there could be a user option on the account to authenticate or not, and then there would be a flag that shows up on a tweet that says whether this person is authenticated or not. That might help because I do think anonymity is a factor in bad behaviour online.

I would also like to include in such a mechanism the prospect of pseudonymity. I think that in a case where you have people who are operating in situations where they don't want to identify themselves to authorities, such as authentication authorities such as VeriSign, that there's room for a relative authentication mechanism such as webs of trust, such as PGP offers, so that groups amongst themselves can identify themselves among themselves and use whatever names they like.

That would be a very great thing, I think, if that could be arranged.

I'm not sure if the second part of that fits the Twitter paradigm. I'm thinking more in terms of Facebook. It would be nice if interactions could be filtered based on whether or not people are authenticated or not, relative to my web of trust, perhaps, or relative to a white list of authentication authorities, perhaps, or not on a black list of authorities. Is that something that Twitter might be able to contemplate?

4:05 p.m.

Head, Government and Public Policy, Twitter Canada, Twitter Inc.

Michele Austin

Let me take the second half of that first. You can mute words and accounts on Twitter, so if you were interested in the Raptors game tonight and couldn't see it, you could mute #WeTheNorth, #TorontoRaptors. You can filter that out of your conversations currently if you wanted to, in case you missed the game.

We are working on that, but right now, if you look on your account, you'll see “mute words” or “mute accounts”. You can do that to help filter through what you are and are not seeing. It's the same with the sensitive media setting.

With regard to information integrity and with regard to trusting who you're interacting with, this is actually something I testified on at the Senate last year. It is something that we are studying very carefully. Our verification process is on hold because we were unhappy with how it was being applied. We couldn't find a consistent policy that would reflect well to our users what it was about. It's something we are working on.

We're also making product changes daily. We have something now called the profile peak. As you're scrolling through, you can just hover over the image of the person you're interacting with and their profile will pop up. Further, we now also tell you where they're tweeting from and when so that you can understand if they're tweeting from their iPhone, if they're tweeting from Hootsuite or some other sort of third party application.

I take your question and your comment seriously. It is something that we are working on to improve the user experience.

4:05 p.m.

Liberal

Ron McKinnon Liberal Coquitlam—Port Coquitlam, BC

Thank you.

I have one more quick question—

4:05 p.m.

Liberal

The Chair Liberal Anthony Housefather

We'll come back to you, if that's okay. I want to give everybody three minutes, if I can.

Mr. Saini.

4:05 p.m.

Liberal

Raj Saini Liberal Kitchener Centre, ON

Hello, Ms. Austin. It's very good to see you. Welcome.

I have two quick questions.

When we look at broadcasters—radio, television—and newspapers, if anybody makes a comment or says anything hateful, the broadcaster or newspaper has two options: either they can not print it, not show it, or they can tag it with the person's name and identity.

With Twitter, and with online hate, you have a version where you can be anonymous. Why should somebody be allowed to be anonymous, when they can't be anonymous on another platform?

4:05 p.m.

Head, Government and Public Policy, Twitter Canada, Twitter Inc.

Michele Austin

Thank you for your question.

Obviously, the CRTC regulates a different set of media from us, and that's a conversation for you to have, in terms of regulation.

I would revert to my previous answer with regard to anonymity. It is something we have embraced from the beginning with regard to our platform, and it's a freedom of expression issue.

4:05 p.m.

Liberal

Raj Saini Liberal Kitchener Centre, ON

Why is being anonymous...? That's the point I want to understand. You can't be anonymous in a newspaper, or on radio or TV, but somehow you can be anonymous on Twitter. I don't see what protection that would afford someone who wants to express an opinion, whether it's good or bad. There's a freedom of speech element but we don't allow hatred on other platforms. It's not a freedom of speech element on a regular broadcaster, but it's a freedom of speech element on Twitter. I don't understand how that....

4:05 p.m.

Head, Government and Public Policy, Twitter Canada, Twitter Inc.

Michele Austin

Again, I would say to you that an anonymous account is still completely subject to Twitter rules. It can be reported, should not be engaging in hateful conduct and should be actioned in the same way.

We have taken a different approach to allowing anonymous voices. Again, I specifically speak to the issue of women, and violence against women, and how contextually.... Again, I lean on Korea. Korea has a problem with regard to what they call peep cameras and people filming women. They come to the platform anonymously to talk about it, and how they don't like it.

We're hoping to have that kind of conversation. Many people choose to run to that conversation. In terms of bad behaviour, if it is very bad behaviour, we will certainly act.

4:10 p.m.

Liberal

Raj Saini Liberal Kitchener Centre, ON

My final question is this. You mentioned to my colleague, Mr. Ehsassi, that you have 2,500 people globally—

4:10 p.m.

Head, Government and Public Policy, Twitter Canada, Twitter Inc.

Michele Austin

I have to confirm that.

4:10 p.m.

Liberal

Raj Saini Liberal Kitchener Centre, ON

Let's just use 2,500 as an example. These are people who look out for tweets that are offensive in nature.

Let's take a very recent example. India had an election campaign. Nine hundred million people voted. In India, you have 22 official languages and a thousand different dialects. Do you have the capacity, in a country like India, where there are 22 official languages, plus thousands of dialects, to monitor everything that's happening on Twitter, with all those languages?

4:10 p.m.

Head, Government and Public Policy, Twitter Canada, Twitter Inc.

Michele Austin

You're speaking to the issue of scope and scale, which we tackle daily.

India is an excellent example. We have many content reviewers who speak those languages and dialects. We have a heavy presence in India. We have the ability to review content in the languages that are supported by the countries we are in.

We feel confident that we have the ability to review those. We are, as I mentioned, hiring more, and we'd like to hear also.... It's been raised with me. Dialects are really important. They are very important in first nations here in Canada. The dialect changes as you move from community to community. We certainly feel confident in our ability to do that.

4:10 p.m.

Liberal

The Chair Liberal Anthony Housefather

I have Ms. Khalid and Mr. Fraser. Is there anyone from the opposition side who wants to ask a question?

If not, we're going to go to Ms. Khalid.

4:10 p.m.

Liberal

Iqra Khalid Liberal Mississauga—Erin Mills, ON

Thank you, Chair. Thank you, Ms. Austin, for coming in today.

You touched on this a bit when you were talking about indigenous communities, and also just now, when we were talking about women.

In terms of the review and identification of what is hateful content on Twitter, is there a gender lens that you apply? As you said, indigenous communities experience it differently, and I can tell you that women definitely experience it differently as well. They are more targeted than their male counterparts, for example.

4:10 p.m.

Head, Government and Public Policy, Twitter Canada, Twitter Inc.

Michele Austin

We train constantly at Twitter with regard to bias and understanding. I will speak specifically to women in politics, and specifically to women in politics in Canada. Women like you, who run for office, perhaps experience an abnormal amount of abuse and hatred. A number of Canadian female politicians have come to me and explained their context, and how they approach it, which has been extremely helpful to me. We have made a couple of changes internally, with regard to policy review and context, based on those conversations.

Those conversations are extremely important. They happen often. I encourage any group that feels we don't understand the context to come and speak to us.

4:10 p.m.

Liberal

Iqra Khalid Liberal Mississauga—Erin Mills, ON

Following up on that, based on the context, what specific policy changes were made?

4:10 p.m.

Head, Government and Public Policy, Twitter Canada, Twitter Inc.

Michele Austin

I'm not going to.... The problem is that people game the system when I tell them exactly what policy has been changed. They'll do something like change a capital i to a small l.

I'm not going to tell you exactly what has been changed. However, in review, in terms of Canada, for a number of accounts, we are aware of certain words that we weren't aware of before that were being used that we should keep an eye on. It's kind of like unparliamentary language.

4:10 p.m.

Liberal

Iqra Khalid Liberal Mississauga—Erin Mills, ON

Thank you.

I'd be happy to identify a few more afterwards.

4:10 p.m.

Head, Government and Public Policy, Twitter Canada, Twitter Inc.

Michele Austin

I'd be happy to hear them.

4:10 p.m.

Liberal

The Chair Liberal Anthony Housefather

Thank you very much.

Mr. Fraser.

4:10 p.m.

Liberal

Colin Fraser Liberal West Nova, NS

Thanks. I have a question that I didn't get to ask in my time.

Touching a bit on authentication, which has been raised a couple of times, I'm obviously very concerned about the amount of disinformation on social media platforms that is propagated easily. Unfortunately, too many people aren't using their critical thinking skills when they see this disinformation. It allows these sorts of stories to spread like wildfire. People believe them and then they comment, and it almost becomes a mob mentality online.

It's important to respect journalistic standards for news items that are shared on social media platforms. I know that you have the blue check mark that certain organizations can be authenticated in a certain way, to say that this is a legitimate entity that's putting forward this information.

Is there another way to identify to individuals using your platform that this is a trustworthy news source that uses journalistic standards? It's not necessarily trying to get into the content itself, but saying that this is something that could be relied upon for your users?

4:15 p.m.

Head, Government and Public Policy, Twitter Canada, Twitter Inc.

Michele Austin

I used to work for Preston Manning and he said to me not too long ago, “Michele, fake news is not new. You just have to go back to Genesis, and what the serpent told Eve was 100% fake news.” So it's not a new phenomenon. It's something, though, that we are very, very aware of and actioning. In 2018, we identified and challenged more than 425 million accounts that were suspected of engaging in platform manipulation.

With regard to context, we do context differently than our other competitors and other digital companies. As I mentioned, the Moments team, which has a really wonderful foothold in Canada, is now adding more information to stories that they curate. We curate stories using individual editors, if you will. We put tweets together, and now we're adding more context.

I'll take the example of the Alberta Speech from the Throne. We explained what a Speech from the Throne was in terms of that moment, and how it worked and what happened.

Certainly, fake news is something we're keeping a very close eye on. It's particularly concerning during elections. We have an election integrity team dedicated to it. It's a cross-functional team during elections. The Alberta election—knock on wood—went very smoothly. We had that team completely involved in that election.

However, it's something we're concerned about.

4:15 p.m.

Liberal

Colin Fraser Liberal West Nova, NS

I guess my question is whether there is contemplation of some other way you can identify a source as adhering to journalistic standards, versus those that may not meet the criteria to be banned from Twitter but do not achieve the gold check mark, for example. Is there something so that an individual can identify a trustworthy news source versus some sort of fake news outlet?

4:15 p.m.

Head, Government and Public Policy, Twitter Canada, Twitter Inc.

Michele Austin

Are you asking me to decide what is trustworthy and isn't trustworthy from a news perspective?

4:15 p.m.

Liberal

Colin Fraser Liberal West Nova, NS

No. I'm asking whether or not there's any contemplation of some way to identify entities that have journalistic standards, that adhere to the common practice, the best practices, for journalism, versus those that don't.

4:15 p.m.

Head, Government and Public Policy, Twitter Canada, Twitter Inc.

Michele Austin

We work with many news organizations. That industry is changing, and a lot of them do not have blue check marks that should, so we are reaching out to them. We did that especially in advance of the Alberta election, to make sure that the parliamentary reporters were there.

However, if you're asking me whether there is some sort of verified list of news organizations that Twitter allows or lifts up, we don't make that choice.

4:15 p.m.

Liberal

Colin Fraser Liberal West Nova, NS

All right. We'll leave it there.

Thank you.

4:15 p.m.

Liberal

The Chair Liberal Anthony Housefather

Thank you so much.

Is there anyone else—

4:15 p.m.

Head, Government and Public Policy, Twitter Canada, Twitter Inc.

Michele Austin

That's a good question for Google, though. They have a different approach.

4:15 p.m.

Liberal

The Chair Liberal Anthony Housefather

Mr. McKinnon has one small question.

4:15 p.m.

Liberal

Ron McKinnon Liberal Coquitlam—Port Coquitlam, BC

I'd like to segue over to bots.

4:15 p.m.

Head, Government and Public Policy, Twitter Canada, Twitter Inc.

4:15 p.m.

Liberal

Ron McKinnon Liberal Coquitlam—Port Coquitlam, BC

To some degree, automated responses should be expected, but it can get carried away. I've heard complaints of bots out there that rummage through the whole Twitterverse and respond in nasty ways and in a very massive fashion.

Are bots allowed on Twitter and do you have standards of behaviour for them?

4:15 p.m.

Head, Government and Public Policy, Twitter Canada, Twitter Inc.

Michele Austin

Yes. Automation is permitted on the platform.

I'll give you two positive examples. If you lose your luggage and you DM Air Canada, a bot will help triage where you are and what your flight number was. In Australia, they have a bot for self-harm, wherein you can text Lifeline Australia for help. Whether you're in a crisis or whether you are a parent who is looking for advice, that is also triaged through a bot. I desperately wish we could bring it to Canada. We're working on that.

Malicious automation is not permitted on the platform. It is something we work on daily to ensure it doesn't happen. However, there are cases where bots are doing a really good job and helping.

4:20 p.m.

Liberal

Ron McKinnon Liberal Coquitlam—Port Coquitlam, BC

Great. Thank you.

4:20 p.m.

Liberal

The Chair Liberal Anthony Housefather

Thank you very much.

Before we finish, the clerk has one date to report. It's just to let members know about dates on the dissenting report issue.

4:20 p.m.

The Clerk of the Committee Mr. Marc-Olivier Girard

I was looking at the calendar for the upcoming weeks and I discovered that the appropriate deadline for submitting supplementary or dissenting opinions is June 6, rather than June 13.

4:20 p.m.

Liberal

The Chair Liberal Anthony Housefather

That's for the HIV report.

4:20 p.m.

The Clerk

That's for HIV.

For online hate, it would remain June 13.

4:20 p.m.

Liberal

The Chair Liberal Anthony Housefather

For online hate, it's June 13, and for HIV, it's June 6.

4:20 p.m.

The Clerk

Exactly.

If you want the reason for that—

4:20 p.m.

Liberal

The Chair Liberal Anthony Housefather

Yes, please.

4:20 p.m.

The Clerk

In the current calendar, the target tabling date in the House for the HIV report is June 10, so having you submit your dissenting or supplementary opinions on June 13 would not make any sense.

4:20 p.m.

Conservative

Michael Cooper Conservative St. Albert—Edmonton, AB

Does it have to be translated by June 6, or not?

4:20 p.m.

The Clerk

No. Actually, you submit it in English and in French, and we just insert it at the end of the report.

4:20 p.m.

Conservative

Michael Cooper Conservative St. Albert—Edmonton, AB

No, my issue is—

4:20 p.m.

Liberal

The Chair Liberal Anthony Housefather

He is asking, does he have to translate it? Could it be submitted only in English at that point and then get translated afterwards? I think that's what he's asking.

4:20 p.m.

The Clerk

No. Such opinions need to be submitted to us in English and in French—

4:20 p.m.

Conservative

Michael Cooper Conservative St. Albert—Edmonton, AB

Well, it's an impossibility, then.

4:20 p.m.

The Clerk

—on June 6, no later than June 6, which is Thursday of next week.

4:20 p.m.

Conservative

Michael Cooper Conservative St. Albert—Edmonton, AB

It would have been a possibility but being notified on Thursday afternoon—

4:20 p.m.

Liberal

The Chair Liberal Anthony Housefather

I understand. What we'll do, because that obviously is not possible, is that because we're sitting that entire week, instead of tabling it on Monday, June 10, we can push the tabling back to, let's say, Wednesday, June 12. Then that will give a couple of extra days to get it translated, perhaps.

4:20 p.m.

The Clerk

Yes, absolutely.

4:20 p.m.

Liberal

The Chair Liberal Anthony Housefather

We'll work on it. That week, we're sitting—

4:20 p.m.

Liberal

Raj Saini Liberal Kitchener Centre, ON

Can we allow the witness to leave?

4:20 p.m.

Liberal

The Chair Liberal Anthony Housefather

Yes. I was going to thank the witness. There's going to be a very brief discussion.

4:20 p.m.

Liberal

Raj Saini Liberal Kitchener Centre, ON

That's fine.

4:20 p.m.

Liberal

The Chair Liberal Anthony Housefather

In any case, we'll try to work that out in terms of dates. We're sitting that whole week, so the report could be tabled that week in any case.

It's not necessary to do it on June 10. We could perhaps do it a couple of days later.

4:20 p.m.

The Clerk

Yes. More or less, it's about 48 hours before the tabling date.

4:20 p.m.

Liberal

The Chair Liberal Anthony Housefather

We'll figure out when to deal with that.

Ms. Austin, thank you very much. Your comment about the serpent was exceptionally funny. I'll be stealing that to use in other places. I will credit Mr. Manning.

We all really appreciate your being here. It was very helpful. Thank you so much.

The meeting is adjourned.