Evidence of meeting #19 of the Standing Committee on Public Safety and National Security, 44th Parliament, 1st Session, held on April 26, 2022. (The original version is on Parliament's site, as are the minutes.)

A recording is available from Parliament.

Witnesses

Evan Balgord  Executive Director, Canadian Anti-Hate Network
Barbara Perry  Director, Centre on Hate, Bias and Extremism, Ontario Tech University
Wendy Via  Co-Founder, Global Project Against Hate and Extremism
Ilan Kogan  Data Scientist, Klackle, As an Individual
Rachel Curran  Public Policy Manager, Meta Canada, Meta Platforms
David Tessler  Public Policy Manager, Meta Platforms
Michele Austin  Director, Public Policy (US & Canada), Twitter Inc.

Noon

Liberal

The Chair Liberal Jim Carr

I call the meeting back to order.

Colleagues, we're ready to resume with our second panel. With us this second hour, we have Ilan Kogan, data scientist at Klackle. From Meta Platforms, we have Rachel Curran, public policy manager of Meta Canada, and David Tessler, public policy manager. From Twitter Inc., we have Michele Austin, director of public policy for the U.S. and Canada.

I would like to invite our guests to give an opening statement of up to five minutes. I will begin with Mr. Kogan.

Mr. Kogan, the floor is yours.

Noon

Ilan Kogan Data Scientist, Klackle, As an Individual

Mr. Chair, members of the committee, I would like to thank you for inviting me today to discuss artificial intelligence and social media regulation in Canada.

I begin with an oft-quoted observation: “For every complex problem, there is a solution that is clear, simple and wrong.”

Canada is not the first country to consider how to best keep the Internet safe. In 2019, for instance, the French Parliament adopted the Avia law, a bill very similar to the online harms legislation that the Canadian government considered last year. The bill required social media platforms to remove “clearly illegal content”, including hate speech, from their platforms. Under threat of significant monetary penalties, the service providers had to remove hate speech within 24 hours of notification. Remarkably, France's constitutional court struck the law down. The court held that it overly burdened free expression.

However, France's hate speech laws are far stricter than Canada's. Why did this seemingly minor extension of hate speech law to the online sphere cross the constitutional line? The answer is what human rights scholars call “collateral censorship”: when a social media company is punished for its users' speech, the platform will overcensor. Where there is even a small possibility that speech is unlawful, the intermediary will err on the side of caution and censor it, because the cost of failing to remove unlawful content is too high. France's constitutional court was unwilling to accept the law's restrictive impact on legal expression.
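The incentive Mr. Kogan describes can be made concrete with simple expected-cost arithmetic. Below is a minimal sketch; the fine and post-value figures are invented for illustration and are not from the testimony:

```python
# Illustrative sketch of the collateral-censorship incentive.
# The fine and post-value figures are invented for the example.

FINE_IF_ILLEGAL_KEPT = 50_000.0   # assumed penalty for leaving illegal content up
VALUE_OF_LEGAL_POST = 0.01        # assumed value to the platform of one legal post

def platform_keeps_post(p_illegal: float) -> bool:
    """Return True if a cost-minimizing platform would leave the post up."""
    expected_cost_of_keeping = p_illegal * FINE_IF_ILLEGAL_KEPT
    expected_cost_of_removing = (1 - p_illegal) * VALUE_OF_LEGAL_POST
    return expected_cost_of_keeping < expected_cost_of_removing

for p in (0.1, 0.01, 0.001, 0.0000001):
    print(f"P(illegal) = {p}: keep the post? {platform_keeps_post(p)}")

# With these numbers the platform removes any post with more than roughly
# a 1-in-5,000,000 chance of being illegal -- almost everything flagged,
# legal or not.
```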

The risk of collateral censorship depends on how difficult it is for a platform to distinguish legal from illegal content. Some categories of illegal content are easier to identify than others. Due to scale, most content moderation is done using artificial intelligence systems. Identifying child pornography is relatively easy for such a system; identifying hate speech is not.

Consider that over 500 million tweets are posted on Twitter every day. Many seemingly hateful tweets are actually counter-speech, news reporting or art. Artificial intelligence systems cannot tell these categories apart, and human reviewers cannot accurately make these assessments in mere seconds either. Because Facebook instructs moderators to err on the side of removal, the online speech of marginalized groups may, counterintuitively, be censored by these good-faith efforts to protect them. That is why so many marginalized communities objected to the proposed online harms legislation that was unveiled last year.
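The scale claim supports a quick back-of-the-envelope check. Only the 500-million-tweets-per-day figure comes from the testimony; the flag rate and per-item review time below are assumptions for illustration:

```python
# Back-of-the-envelope arithmetic on human review at Twitter's scale.
# Only the 500M tweets/day figure is from the testimony; the rest is assumed.

TWEETS_PER_DAY = 500_000_000
FLAG_RATE = 0.01                  # assume 1% of tweets need a moderation look
SECONDS_PER_REVIEW = 30           # assume 30 seconds of human attention each
WORKDAY_SECONDS = 8 * 3600        # one moderator's eight-hour day

flagged_per_day = TWEETS_PER_DAY * FLAG_RATE                   # 5,000,000 items
reviews_per_moderator = WORKDAY_SECONDS // SECONDS_PER_REVIEW  # 960 items/day
moderators_needed = flagged_per_day / reviews_per_moderator

print(f"{flagged_per_day:,.0f} flagged tweets per day")
print(f"{moderators_needed:,.0f} full-time human moderators required")
# Roughly 5,200 moderators doing nothing but snap judgments, every day --
# which is why moderation at this scale leans on automated classifiers.
```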

Let me share an example from my time working at the Oversight Board, Facebook's content moderation supreme court. In August 2021, following the tragic discovery of unmarked graves in Kamloops, British Columbia, a Facebook user posted a picture of art with the title “Kill the Indian, Save the Man”, and an associated description. Without any user complaints, two of Facebook's automated systems identified the content as potentially violating Facebook's policies on hate speech. A human reviewer in the Asia-Pacific region then determined that the content was prohibited and removed it. The user appealed. A second human reviewer reached the same conclusion as the first.

To an algorithm, this sounds like success, but it is not. The post was made by a member of the Canadian indigenous community. It included text that stated the user's sole purpose was to bring awareness to one of the darkest periods in Canadian history. This was not hate speech; it was counter-speech. Facebook got it wrong, four times.

You need not set policy by anecdote. Indeed, the risk of collateral censorship might not necessarily preclude regulation under the Charter. To determine whether limits on free expression are reasonable, the appropriate question to ask is, for each category of harmful content, such as child pornography, hate speech or terrorist materials, how often do these platforms make moderation errors?

Although most human rights scholars believe that collateral censorship is a very significant problem, social media platforms refuse to share their data. Therefore, the path forward is a focus on transparency and due process, not outcomes: independent audits; accuracy statistics; and a right to meaningful review and appeal, both for users and complainants.

This is the path that the European Union is now taking and the path that the Canadian government should take as well.
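The accuracy statistics Mr. Kogan calls for are straightforward to define. Here is a minimal sketch of the per-category error rates an independent auditor could publish; the categories and audit records are hypothetical:

```python
# Sketch of per-category moderation error rates from an independent audit.
# The audit records below are hypothetical.

from collections import defaultdict

# Each audited decision: (category, platform_removed_it, actually_unlawful)
audit = [
    ("hate_speech", True, False),         # counter-speech wrongly removed
    ("hate_speech", True, True),
    ("hate_speech", False, False),
    ("child_abuse_material", True, True),
    ("child_abuse_material", True, True),
    ("terrorist_material", False, True),  # violating post missed
]

stats = defaultdict(lambda: {"over": 0, "under": 0, "n": 0})
for category, removed, unlawful in audit:
    s = stats[category]
    s["n"] += 1
    if removed and not unlawful:
        s["over"] += 1    # collateral censorship: lawful speech removed
    elif not removed and unlawful:
        s["under"] += 1   # enforcement failure: unlawful content kept up

for category, s in stats.items():
    print(f"{category}: over-removal {s['over'] / s['n']:.0%}, "
          f"under-removal {s['under'] / s['n']:.0%} (n={s['n']})")
```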

Thank you.

12:05 p.m.

Liberal

The Chair Liberal Jim Carr

Thank you very much.

I would now like to invite Ms. Curran to take up to five minutes for an opening statement.

The floor is yours.

12:05 p.m.

Rachel Curran Public Policy Manager, Meta Canada, Meta Platforms

Thank you, Mr. Chair.

We'll start with my colleague, Mr. Tessler.

12:05 p.m.

David Tessler Public Policy Manager, Meta Platforms

Thank you, Mr. Chair.

Thank you for the invitation to appear before the committee today to talk about the important issue of ideologically motivated violent extremism in Canada.

My name is David Tessler and I am the public policy manager on Meta's counterterrorism and dangerous organizations and individuals team.

With me today is Rachel Curran, public policy manager for Canada.

Meta invests billions of dollars each year in people and technology to keep our platform safe. We have tripled the number of people working on safety and security globally, to more than 40,000. We continue to refine our policies based on direct feedback from experts and impacted communities to address new risks as they emerge. We're a pioneer in artificial intelligence technology that removes harmful content at scale, which enables us to remove the vast majority of terrorism- and organized hate-related content before any users report it.
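The before-any-users-report-it figure corresponds to what Meta's enforcement reports call the proactive rate. A minimal sketch of that calculation, with invented counts:

```python
# The "proactive rate": the share of actioned content that detection
# technology found before any user reported it. Counts are invented.

def proactive_rate(found_by_technology: int, reported_by_users: int) -> float:
    """Fraction of removals initiated by automated detection."""
    return found_by_technology / (found_by_technology + reported_by_users)

# e.g. 9,700 removals surfaced by classifiers vs. 300 by user reports:
print(f"{proactive_rate(9_700, 300):.1%} of removals were proactive")  # 97.0%
```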

Our policies around platform content are contained in our community standards, which outline what is and what is not allowed on our platforms. The most relevant sections for this discussion are entitled “violence and incitement” and “dangerous individuals and organizations”.

With respect to violence and incitement, we aim to prevent potential offline harm that may be related to content on Facebook, so we remove language that incites or facilitates serious violence. We remove content, disable accounts and work with law enforcement when we believe there's a genuine risk of physical harm or direct threats to public safety.

We also do not allow any organizations or individuals who proclaim a violent mission or who are engaged in violence to have a presence on our platforms. We follow an extensive process to determine which organizations and individuals meet our thresholds of “dangerous”, and we have worked with a number of different academics and organizations around the world, including here in Canada, to refine this process.

The “dangerous” organizations and individuals we focus on include those involved in terrorist activities, organized hate, mass or serial murder, human trafficking, and organized violence or criminal activity. Our work is ongoing. We are constantly evaluating individuals and groups against this policy as they are brought to our attention. We use a combination of technology, reports from our community and human review to enforce our policies. We proactively look for and review reports of prohibited content and remove it in line with our community standards.
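The layered model Mr. Tessler describes (technology, reports from the community and human review) can be sketched as a simple routing rule. The thresholds below are assumptions for illustration, not Meta's actual values:

```python
# Sketch of a layered enforcement pipeline: automated classification,
# user reports and human review. Thresholds are assumed for illustration.

AUTO_REMOVE_THRESHOLD = 0.95   # classifier confidence for automatic removal
REVIEW_THRESHOLD = 0.60        # lower-confidence posts go to a human queue

def route(classifier_score: float, user_reported: bool) -> str:
    """Decide how a post moves through the layered enforcement model."""
    if classifier_score >= AUTO_REMOVE_THRESHOLD:
        return "removed automatically"
    if user_reported or classifier_score >= REVIEW_THRESHOLD:
        return "queued for human review"
    return "left up"

print(route(classifier_score=0.97, user_reported=False))  # removed automatically
print(route(classifier_score=0.70, user_reported=False))  # queued for human review
print(route(classifier_score=0.10, user_reported=True))   # queued for human review
```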

Enforcement of our policies is not perfect, but we're getting better by the month. We report our efforts and results quarterly and publicly in our community standards enforcement reports.

The second important point, beyond noting that these standards exist, is that we are always working to evolve our policies in response to stakeholder input and current real-world contexts. Our content policy team works with subject matter experts from across Canada and around the world who are dedicated to following trends across a spectrum of issues, including hate speech and organized hate.

We also regularly team up with other companies, governments and NGOs, because we know those seeking to abuse digital platforms attempt to do so not solely on our apps. For instance, in 2017, we, along with YouTube, Microsoft and Twitter, launched the Global Internet Forum to Counter Terrorism (GIFCT). The forum, which is now an independent non-profit, brings together the technology industry, government, civil society and academia to foster collaboration and information sharing to counter terrorist and violent extremist activity online.

Now I'll turn it over to my colleague, Rachel.

12:10 p.m.

Public Policy Manager, Meta Canada, Meta Platforms

Rachel Curran

Thanks, David.

In Canada, in 2020, in partnership with Ontario Tech University's Centre on Hate, Bias and Extremism, led by Dr. Perry, whom you just heard from, we launched the Global Network Against Hate. This five-year program will help advance the centre's work and research on violent extremism based on ethnic, racial, gender and other forms of prejudice, including how it spreads and how to stop it.

The Global Network Against Hate also facilitates global partnerships and knowledge sharing focused on researching, understanding and preventing hate, bias and extremism online and off. Our partnerships with the academics and experts who study organized hate groups and figures help us stay ahead of trends and activities among extremist groups. Our experts are able to share information with us on how these organizations are adapting to social media and to give us feedback on how we might better tackle them.

Based on this feedback, in Canada we've designated several Canadian hate organizations and figures in recent years, including Faith Goldy, Kevin Goudreau, the Canadian Nationalist Front, Aryan Strikeforce, Wolves of Odin and Soldiers of Odin. They've all been banned from having any further presence on Facebook and Instagram.

We also remove affiliate representation for these entities, including linked pages and groups. Recent removals include Alexis Cossette-Trudel, Atalante Québec and Radio-Québec—

12:10 p.m.

Liberal

The Chair Liberal Jim Carr

Finish in 10 seconds, please.

12:10 p.m.

Public Policy Manager, Meta Canada, Meta Platforms

Rachel Curran

—and QAnon-affiliated pages and organizations.

To sum up, we've banned 250 white supremacist organizations from our platforms. We're constantly engaged with this work in conjunction with Canadian law enforcement and intelligence agencies.

12:10 p.m.

Liberal

The Chair Liberal Jim Carr

Thank you very much.

Ms. Austin, you have five minutes to make your opening comments. The floor is yours.

12:10 p.m.

Michele Austin Director, Public Policy (US & Canada), Twitter Inc.

Thank you very much, Chair and members of the committee, for the opportunity to be here, and thank you for your service.

I'd also like to acknowledge the political staff who are in the room and thank them for their service and support.

Twitter's purpose is to serve the public conversation. People from around the world come together on Twitter in an open and free exchange of ideas and issues they care about. Twitter is committed to improving the collective health, openness and civility of public conversation on our platform. We do this work with the recognition that freedom of expression and safety are interconnected.

Twitter approaches issues such as terrorism, violent extremism and violent organizations through a combination of interventions, including the development and enforcement of our rules, product solutions and work with external partners such as government, civil society and academia.

For my opening remarks, I will focus on our work with partners and, in particular, the Government of Canada.

Twitter shares the Government of Canada's view that online safety is a shared responsibility. Digital service providers, governments, law enforcement, digital platforms, network service providers, non-government organizations and citizens all play an important role in protecting communities from harmful content online. Twitter is grateful for the Government of Canada's willingness to convene honest and sometimes difficult conversations through venues such as the Christchurch Call to Action and organizations such as the Five Eyes.

Through our joint work on the Global Internet Forum to Counter Terrorism, commonly known as GIFCT, which my colleague Mr. Tessler referred to in his remarks, we have made real progress across a wide range of issues, including establishing GIFCT as an independent, non-government organization; building out GIFCT's resources and impact; forming the independent advisory committee and working groups; and implementing a step change in how we respond to crisis events around the world.

In Canada, the Anti-terrorism Act and the Criminal Code of Canada provide measures for the Government of Canada to identify and publicly list known terrorist and violent extremist organizations. Twitter carefully monitors the Government of Canada's list, as well as other lists from governments around the world. The last time that list was updated was on June 25, 2021. We also collaborate and co-operate with law enforcement entities when appropriate and in accordance with legal processes. I also want to acknowledge the regular and timely dialogue I have with officials across government working on domestic issues related to these files.

In addition to governments, Twitter partners with non-government organizations around the world to help inform our work and to counter online extremist content. For example, we partner closely with Tech Against Terrorism, the global NGO, to share information, knowledge and best practices. We recently participated alongside the Government of Canada in the Global Counterterrorism Forum's workshop to develop a tool kit to focus on countering racially motivated violent extremism.

Our approach is not stagnant. We aggressively fight online violent extremist activity and have invested heavily in technology and tools to enforce our policies. As the nature of these threats has changed, so has our approach to tackling this behaviour. As an open platform for free expression, Twitter has always sought to balance the enforcement of our own rules covering prohibited behaviour, and the legitimate needs of law enforcement, against the ability of people to express their views freely on Twitter, including views that people may disagree with or find offensive.

I would like to end my testimony with a quote from Canada's Global Affairs Minister, the Honourable Mélanie Joly, on March 2 of this year. She said:

More than ever, social media platforms are powerful tools of information. They play a key role in the health of democracies and global stability. Social media platforms play an important role in the fight against disinformation....

Twitter agrees.

I'm happy to answer any questions you might have on policies, policy enforcement, product solutions and the ways in which we're working to protect the safety of the conversation on Twitter.

Thank you.

12:15 p.m.

Liberal

The Chair Liberal Jim Carr

Thank you very much. You won't have long to wait, because the first round of questions will start right now.

We'll begin by asking Ms. Dancho to take us through the first six minutes of questioning in this round.

12:15 p.m.

Conservative

Raquel Dancho Conservative Kildonan—St. Paul, MB

Thank you, Mr. Chair.

Thank you to the witnesses for being here. My first question is for Twitter.

Today in committee, as you may have heard, we talked a lot about the sharing of right-wing and left-wing opinion online, and the harmful content coming from extreme elements of both. I'm sure you're also aware that Conservatives sometimes comment on how they feel unfairly targeted by social media censorship.

In that same vein, in your joint statement with Elon Musk, he explained his motivation for wanting to buy Twitter and take it private. He said, “Free speech is the bedrock of a functioning democracy, and Twitter is the digital town square where matters vital to the future of humanity are [being] debated”. Elon Musk, as you know, has also said he wants to enhance Twitter with new features, “making the algorithms open source to increase [user] trust, defeating the spam bots, and authenticating all [human users]”.

Do you feel that Mr. Musk can achieve these goals, and do you feel that will ensure all sides of the political spectrum, so to speak, including Conservatives, are better protected in sharing their opinions freely on your platform?

12:15 p.m.

Director, Public Policy (US & Canada), Twitter Inc.

Michele Austin

Twitter is certainly living up to its moniker. Twitter seems to be what's happening right now. It's a very exciting place to work. Partners can continue to expect our best-in-class customer service, client solutions and our commitment to safety.

Yesterday, Twitter was a publicly traded company. Today, Twitter is still a publicly traded company. I cannot speculate on what Elon Musk is proposing or what changes he might make. For now, there will be no changes as a result of the announcement. Any changes will be publicly communicated on Twitter. You can actually follow on Twitter the entire company meeting that we had yesterday with regard to this.

12:15 p.m.

Conservative

Raquel Dancho Conservative Kildonan—St. Paul, MB

Thank you very much.

My next question is for Facebook.

Thank you, Ms. Curran, for being here today.

I want to talk a bit about what happened in Australia. As you know, the Australian government brought forward legislation that would force Facebook to pay publishers of news media if Facebook hosted, or users shared, news content. Facebook retaliated: it banned news links from being shared by Facebook users in Australia and shut down Australian news pages hosted on the Facebook platform, in protest of the law the government was looking to bring forward. Ultimately, Facebook cut off users' ability to share news publications online. An agreement was reached shortly afterwards, but Facebook did take that extraordinary step of banning the sharing of news publications.

We know that the Liberal government has brought forward a bill similar to the Australian one. Bill C-18, called the online news act for short, has some similarities; you may be familiar with it. There's also Bill C-11, which aims to control what Canadians see when they open their social media apps, such as Facebook, Twitter and the like.

Ms. Curran, is it reasonable to believe that Facebook could do the same thing in Canada as it did in Australia and prohibit the sharing of news, should the Liberal government move forward with bills such as Bill C-18 or other iterations of it?

12:20 p.m.

Public Policy Manager, Meta Canada, Meta Platforms

Rachel Curran

The short answer is that we're still evaluating that legislation. We didn't know the scope of it until it was tabled very recently.

We have some pretty serious concerns. Our view is that when publishers place links to their content on our platforms, they receive significant value from doing that. We don't actually control when or how or to what degree they post news material on our platforms.

I will say this. We're committed to fuelling innovative solutions for the news industry and to the sustainability of the news industry in Canada. That's why we've entered into a number of partnerships to support that kind of work.

I can't comment definitively on our future action with respect to that bill specifically, since we're still evaluating it.

12:20 p.m.

Conservative

Raquel Dancho Conservative Kildonan—St. Paul, MB

Thank you, Ms. Curran.

You would say—perhaps I'm putting words in your mouth, and maybe you could clarify—that it's not off the table that you would take action similar to what Facebook did in Australia in response to Bill C-18.

12:20 p.m.

Public Policy Manager, Meta Canada, Meta Platforms

Rachel Curran

I would say that we're still looking at all of the options based on our evaluation of the legislation. We're still going through that in detail. We were not consulted on the content of it, and so we need to review it in pretty close detail before we decide what our future response will be.

12:20 p.m.

Conservative

Raquel Dancho Conservative Kildonan—St. Paul, MB

Thank you very much.

I'll go back to Twitter.

Perhaps you could comment on Bill C-18 as well. Do you feel that news publications benefit from being shared on Twitter's platform? Do you have any concerns with it similar to Facebook's?

12:20 p.m.

Director, Public Policy (US & Canada), Twitter Inc.

Michele Austin

I agree with Rachel that we're still in the early stages of analysis.

There are a couple of things to say with regard to Bill C-18.

Twitter, like the news industry, does not make a lot of money on news. In fact, we have nobody in Canada who is selling news content. If you see news advertised on Twitter, it is largely self-serve: the news organizations have chosen to advertise on their own.

We are also what's called a “closed” platform. When you link to news on Twitter, you have to leave the site. That is not necessarily the case with the other platforms.

What we're most concerned about is scope and transparency. The question is whether or not Twitter is scoped in under that bill. That is very unclear. I understand that there will be quite an extensive Governor in Council (GIC) process coming out after the bill is passed.

I am more than happy to meet with anybody to discuss the content of Bill C-18.

12:20 p.m.

Conservative

Raquel Dancho Conservative Kildonan—St. Paul, MB

Thank you very much, Ms. Austin.

Ms. Curran, if you would like to add anything further on the government's approach to censoring or regulating the Internet, you can have my last 10 seconds.

12:20 p.m.

Public Policy Manager, Meta Canada, Meta Platforms

Rachel Curran

Again, I would just reiterate that we have some fairly significant concerns with Bill C-18.

We think it should take into account the way the Internet actually works when it comes to linking to news on our websites. We hope we're able to engage in a good conversation with the government about that.

12:20 p.m.

Liberal

The Chair Liberal Jim Carr

Thank you very much.

Ms. Damoff, I will turn the floor over to you for a six-minute block of questions. Go ahead.

12:20 p.m.

Liberal

Pam Damoff Liberal Oakville North—Burlington, ON

Thank you so much, Chair.

I'm going to start with Twitter. We have heard a lot in this study about the radicalization of individuals to ideologically motivated violent extremism through social media. You know, you've said that you're grateful to the Government of Canada for having conversations with platforms like yours, and yet you've also compared our draft proposal to regulate online harms to policies in Iran and North Korea. Do you think it's appropriate for a private company that has a financial stake in the legislation to make comments like that?

12:25 p.m.

Director, Public Policy (US & Canada), Twitter Inc.

Michele Austin

Your question is with regard to the proposal put forward by the Government of Canada to create the position of a digital safety commissioner who would have the ability to block Internet platforms. We made a submission that has been made public—which is great, and I'm very grateful for that access to information request—stating that this kind of activity, as it was proposed, was very similar to the activity we experience in those countries: China, Iran and North Korea.

I don't think it's irresponsible to make a comparison when we're asked by the Government of Canada to give our input. We tried our best to make a very thoughtful submission and to make the recommendations that are contained in that submission of how to do things differently. Blocking Internet sites is contrary to Twitter's position on the open Internet.

12:25 p.m.

Liberal

Pam Damoff Liberal Oakville North—Burlington, ON

Your site uses algorithms to drive traffic to information and other tweets, correct? Why are those algorithms, as we've heard from other witnesses, more likely to drive individuals like me to the far right than to the centre or the far left? We know that those kinds of things are more likely to go viral and get more engagement, but your algorithms are not public, and yet you're driving people to the far right, which in turn can lead to radicalization.
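The dynamic behind Ms. Damoff's question is the one any engagement-optimized feed exhibits: if posts are ranked purely by predicted engagement, and inflammatory content reliably earns more engagement, the feed amplifies it without any explicit political weighting. A toy illustration follows; the posts and scores are invented, and this is not Twitter's actual algorithm:

```python
# Toy illustration of engagement-optimized ranking -- not Twitter's actual
# algorithm. Posts and predicted-engagement scores are invented.

posts = [
    {"text": "Local council passes transit budget", "p_engage": 0.02},
    {"text": "Measured policy explainer",           "p_engage": 0.03},
    {"text": "Outrage-bait hot take",               "p_engage": 0.11},
    {"text": "Extreme conspiratorial claim",        "p_engage": 0.09},
]

# Rank purely by predicted engagement, with no notion of politics or accuracy.
feed = sorted(posts, key=lambda p: p["p_engage"], reverse=True)

for rank, post in enumerate(feed, start=1):
    print(rank, post["text"])
# The inflammatory items rise to the top solely because they are predicted
# to draw more clicks, replies and retweets.
```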