Evidence of meeting #22 of the Standing Committee on Public Safety and National Security, 44th Parliament, 1st Session, held on May 5, 2022. (The original version is on Parliament’s site, as are the minutes.)

A recording is available from Parliament.

Also speaking

Jane Bailey, Full Professor, Faculty of Law, University of Ottawa, As an Individual
Garth Davies, Associate Director, Institute on Violence, Terrorism, and Security, Simon Fraser University, As an Individual
Tony McAleer, Author and Co-founder, Life After Hate, As an Individual
Samuel Tanner, Full Professor, School of Criminology, Université de Montréal, As an Individual
Michael Mostyn, Chief Executive Officer, National Office, B'nai Brith Canada
Marvin Rotrand, National Director, League for Human Rights, B'nai Brith Canada
Imran Ahmed, Chief Executive, Center for Countering Digital Hate

1:05 p.m.

Imran Ahmed Chief Executive, Center for Countering Digital Hate

Good afternoon.

The Center for Countering Digital Hate is a non-profit that seeks to disrupt the monetized architecture of online hate and misinformation, which has been overwhelming the enlightenment values of tolerance, science and democracy that underpin our nations' prosperity.

Our organization has been around for six years. We have around 20 staff in London and Washington, D.C. We're independent. We're not affiliated with any political party. We don't take money from governments or from technology companies.

Our research over those six years has tracked the rise of online hate, including anti-Semitism. The reason we started this organization was that we were seeing the rise of virulent anti-Semitism and disinformation on the left in the United Kingdom, as well as seeing that fringe actors, from anti-vaxxers to misogynist incels to racists such as white supremacists and jihadists, were able to easily exploit digital platforms to promote their own content.

The platforms and search engines benefit commercially from this system, and that is one of the central insights of CCDH: There is an economy and an industry around hate and misinformation now that is so profitable that it inherently leads to the sustainability and further proliferation of this industry and to platforms not being incentivized to do more than send a press release when problems are raised.

Put simply, our problems are threefold.

One is the proliferation of bad actors. These are extremists who are sharing dangerous misinformation and hate content online. They're organized and skilled in exploiting platform structures and undermining public safety and democracy.

Another problem is that platforms profit from the spread of extreme content through a system that promotes engagement over any other metric, including the public good and safety. Companies do not factor public safety into the design of their products, and they do not effectively self-regulate by adequately resourcing the enforcement of their own rules.

Another is bad laws: the absence of legislation and of global coordination at a scale that would protect citizens through assessing and enforcing common standards and sharing intelligence and metrics.

We've published a series of reports on things like anti-Semitism. Our most recent was on anti-Muslim hate. It showed that even when you report anti-Muslim hate to platforms using their own tools, nine times out of 10 they fail to take it down. That includes posts promoting the Great Replacement conspiracy theory, violating pledges the platforms made in the wake of the 2019 Christchurch mosque attacks when they signed up to the Christchurch Call. That conspiracy theory inspired the Christchurch attacks as well as the Tree of Life synagogue attack in Pittsburgh, Pennsylvania, in the United States.

So there are commercial hate and disinformation actors who are making a lot of money from spreading discord and peddling lies. I've used anti-Muslim hate as an example, but we found the same figures with anti-Semitism, with misogyny and with anti-Black hateful content in the past.

Why are they failing to act? The truth is that there is a web of commercial actors, from platforms to payment processors to people who provide advertising technology that is embedded in hateful content, giving the authors of that content money for every eyeball they can attract to it. This industry has revenues in the high millions, tens of millions and hundreds of millions of dollars, and it has made some entrepreneurs in this space extremely wealthy.

For example, the leading anti-vaxxer in the United States, Joseph Mercola, claims in court testimony that he's worth $100 million. That's what this industry is worth.

The creation of this industry has involved a series of moral choices by companies to profit from hate. To back this up, these greedy, selfish and frankly lazy companies have proselytized the notion that profiting from hate, free from criticism, boycotts, regulatory action or even justifiable moral opprobrium, is somehow a God-given right, because a violation of it, they say, would be cancelling them—which is nonsense.

Our experience as an organization suggests that four things are missing from existing powers globally. One is enforced safety by design. The second is the power to compel transparency around algorithms, around the enforcement of community standards and around the economics. The third is accountability: we need bodies to hold companies accountable and to set standards so that we don't have a moral race to the bottom. Finally, we need the power to hold social media platforms and executives responsible for the decisions they take.

1:10 p.m.

Liberal

The Chair Liberal Jim Carr

Thank you very much.

Again, you have my apologies for overlooking you, Mr. Ahmed.

We've now moved back to the rota of questioning. I would ask Mr. Zuberi to begin a six-minute slot.

1:10 p.m.

Liberal

Sameer Zuberi Liberal Pierrefonds—Dollard, QC

Thank you, Mr. Chair.

Thanks to all of the witnesses for being here today and for your testimony. It's very insightful and important.

I'd like to start off with Mr. Ahmed. I think what you just shared with us is really insightful and interesting.

Are you aware of legislation that we are now discussing within our Parliament around online hate? If so, what are your thoughts on it?

1:10 p.m.

Chief Executive, Center for Countering Digital Hate

Imran Ahmed

I'm sorry, I'm not. The request to have me appear was last minute. However, I am very familiar with the international efforts on this.

We are actually organizing a conference in Washington in two weeks' time to talk about global alignment on a set of standards by which we would analyze the effectiveness of any legislation. We've invited your colleagues to attend.

Can I just lay out very simply what those standards are? I think it will help to give you the insight you need as to whether or not the legislation you're proposing meets those standards.

One is forcing safety by design. At the moment, companies can act in a profoundly negligent way in designing their systems. In the U.K. and the U.S., for example, there's a big push for ensuring that there is safety by design for children, but there is no reason why that should not be extended to adults as well.

Second is transparency: transparency of how the algorithms work, of what their outputs are and of the economics. Let's not forget that 98% of Meta's revenues come from advertising. There is a reason content is structured the way it is: it's structured to maximize advertising opportunities. Understanding those economics is absolutely vital. Then there are the enforcement decisions. Why do they decide to take down one piece of content and not another, or to leave a piece up when they've taken down similar content? It's understanding those enforcement decisions.

Third is accountability. Are there bodies setting the standards and also doing independent analysis of the effectiveness of that work? That's the space where you're looking for public-private partnerships because of course not all of that can be done by the state.

Finally, some mechanism for responsibility is needed, whether through civil litigation or criminal responsibility. When companies and their executives create negative externalities that have a cost paid in lives, some of that cost should be borne by the companies themselves economically, to disincentivize the production of harm. Do you have proper mechanisms of responsibility to disincentivize the production of harms in the first instance?

That's our safety, transparency, accountability and responsibility framework, which we're encouraging countries around the world to analyze their overall regulatory packages against.

1:15 p.m.

Liberal

Sameer Zuberi Liberal Pierrefonds—Dollard, QC

Thank you for that.

I'd like to shift gears.

I'd like to turn to you, now, Mr. Tanner.

You recently wrote a publication called “The Process of Radicalization: Right-Wing Skinheads in Quebec”. You mentioned there that you were “identifying mechanisms that shape pathways toward extremism and violence”.

Could you expand on that point?

1:15 p.m.

Full Professor, School of Criminology, Université de Montréal, As an Individual

Samuel Tanner

Thank you for the question.

In this report, which dates back to 2014, we sought to draw an initial picture of the extreme right in Quebec. We were interested in different profiles that we had established. Basically, through open sources, we were able to conduct interviews in Quebec with members who had, in one way or another, participated in activities or groups related to the far right.

We realized that these people were deeply in search of meaning and, through a kind of opportunism, found themselves drawn to more ideological content, related to immigration and, essentially, to the defence of white supremacy.

We saw a form of radicalization in which these people first became interested in a group, then found that within that group people were just drinking alcohol and would eventually get violent with one another. These people would then turn to increasingly ideologically radicalized groups, which was more in line with what they perceived as a form of ideological extremism.

I hope that answers your question.

1:15 p.m.

Liberal

Sameer Zuberi Liberal Pierrefonds—Dollard, QC

It does, and it's really insightful.

I would like to highlight what Mr. Ahmed mentioned about the role algorithms play on social media platforms, as I think that's really important.

In the last 30 seconds I'd like to shift to Mr. Rotrand.

Can I get your thoughts and comments on the federal government's $20-million investment in the new Holocaust museum in Montreal, and on how that will help to educate Canadians and Montrealers about anti-Semitism and the Holocaust in particular?

1:15 p.m.

National Director, League for Human Rights, B'nai Brith Canada

Marvin Rotrand

We certainly welcome any investment that promotes Holocaust remembrance, but we would also very much like to see an improvement in school curricula, particularly at the high school level. As well, we would like to see a broadening of the mandate of the special envoy on preserving Holocaust remembrance and combatting anti-Semitism. The Honourable Irwin Cotler was recently named to that role, and the mandate includes preserving Holocaust remembrance.

Thank you.

1:15 p.m.

Liberal

The Chair Liberal Jim Carr

Thank you very much.

I would now invite Ms. Larouche to begin her six minutes of allotted questioning, please.

1:15 p.m.

Bloc

Andréanne Larouche Bloc Shefford, QC

Thank you, Mr. Chair.

I'd like to thank the witnesses again for being with us.

I will try to address each of the witnesses.

Mr. Ahmed, during this pandemic, you managed to identify 12 individuals known as super-spreaders who spread fake news. These are humans, not bots. This is no small thing: they were responsible for 65% of the anti-vaccine messages posted on Facebook and Twitter in February and March 2021.

Can you explain how you went about identifying them?

Have these individuals been reported to these platforms and have they been able to keep their accounts?

1:15 p.m.

Chief Executive, Center for Countering Digital Hate

Imran Ahmed

On March 24, 2021, we issued our report “The Disinformation Dozen”, which showed that 12 super-spreaders of disinformation were responsible for 65% of the content shared on social media that was undermining confidence in the vaccine. It might sound extraordinary that 12 people can be responsible for so much of the disinformation, but it's because they're not just individuals; they're often 501(c)(3) organizations or limited companies with a front person, producing the highest-quality material.

If you think about the impact that just one British man, Andrew Wakefield, had on the take-up of the MMR vaccine, it then becomes understandable that a small number of highly motivated, highly talented spreaders of misinformation are able to cause so much damage.

This is what happened with that. On the same day the report came out, Mark Zuckerberg was in front of the House Energy and Commerce Committee in the U.S. Congress. He said that he would take action on it. The President pressed him in June as well, saying, “Look, these people are killers; you've got to take action. Think about if your relative were one of the people receiving this information.” Even then, only 50% of their accounts and their followers have been taken down.

Take the example of Robert F. Kennedy, Jr. They took down one of his accounts on Instagram but not his accounts on Facebook, which is an extraordinary failure. What we've seen is piecemeal enforcement, even when they are identifiable super-spreaders of harm. They are not just super-spreaders of harm; they're super-violators of the platforms' own community standards. It just goes to show that the platforms are more addicted to the profits that come with attention than they are to doing the right thing.

1:20 p.m.

Bloc

Andréanne Larouche Bloc Shefford, QC

Thank you very much, Mr. Ahmed.

Mr. Tanner, in your research, you were interested in how social media affects policing practices.

Have you studied the influence of social media in the context of countering violent extremism?

Can you tell us briefly about this and about what you have learned from the study of these policing practices?

1:20 p.m.

Full Professor, School of Criminology, Université de Montréal, As an Individual

Samuel Tanner

Thank you for the question.

Our research hasn't focused directly on how police organizations use social media to counter violent extremism, but rather on how social media is involved in raising the profile of these problematic narratives.

For those reasons, I'm not comfortable answering this question.

1:20 p.m.

Bloc

Andréanne Larouche Bloc Shefford, QC

No problem, Mr. Tanner.

I'd now like to turn to the two representatives from B'nai Brith Canada.

Mr. Mostyn and Mr. Rotrand, you said that you are awaiting the implementation of the federal government's online hate content act.

Could you suggest a few things that should be in this legislation to make it as effective as possible?

1:20 p.m.

National Director, League for Human Rights, B'nai Brith Canada

Marvin Rotrand

Thank you for the question.

We think it's important to strike a balance between freedom of expression and the safety of religious and racial minorities in Canada.

I'd like to echo what Mr. Ahmed said. Clearly, aligning international standards would certainly help, because what we're seeing is the same debate just about everywhere: Powerful new technologies have outstripped the capacity of our laws to regulate hate online. The numbers that we're seeing in our audit are mushrooming every single year.

We would like to see the companies brought into a process in which they have a responsibility, within a reasonable amount of time, not only to take down hateful content but to find a way to modify their algorithms. It's not an area in which I'm personally an expert. However, we can see on a daily basis—and we are getting more and more complaints from our community—what's online and how it leads to vandalism and violence in our streets aimed at Jews.

1:20 p.m.

Chief Executive Officer, National Office, B'nai Brith Canada

Michael Mostyn

One of the recommendations we've made in the past with respect to online hate is a trusted flagger program, so that organizations have the ability to flag certain content as racist or hateful. It's a great frustration for anyone interested in making the Internet a safer space that it's impossible to get through to any of these platforms.

1:20 p.m.

Liberal

The Chair Liberal Jim Carr

You have seven seconds.

1:25 p.m.

Bloc

Andréanne Larouche Bloc Shefford, QC

Again, I thank all the witnesses for being with us today.

1:25 p.m.

Liberal

The Chair Liberal Jim Carr

Thank you very much.

Finally, I will turn to Mr. MacGregor.

Sir, you have six minutes to take us to the end of the questioning of this round and this panel.

1:25 p.m.

NDP

Alistair MacGregor NDP Cowichan—Malahat—Langford, BC

Thank you very much, Mr. Chair.

Mr. Ahmed, I would like to start with you.

In our previous panel, we had witnesses who said both that we can't rely on social media platforms to do the heavy lifting on their own and that deplatforming has consequences, in that deplatformed actors can migrate to other platforms that are not as carefully regulated. There has been an explosion in alternative social media platforms for that very reason.

I guess this is the struggle we have as policy-makers, because it can be like playing the game Whac-a-Mole. You try to knock someone off of one platform and they pop up on another one.

In your organization's experience, how do you approach that problem, and do you have any recommendations for our committee?

1:25 p.m.

Chief Executive, Center for Countering Digital Hate

Imran Ahmed

Let me start by saying that it is vital that deplatforming be a tool available to platforms to remove harms. Deplatforming is a vital part of cleaning up the infrastructure overall, but we also have to make sure the outputs of the platforms' algorithms aren't malignant. It's the algorithmic amplification of bad actors, the fact that they're given access to enormous audiences and they're amplified....

One of our research reports, “Malgorithm”, looked at the way the algorithm works on Instagram. It showed that if you followed wellness content, the algorithm fed you anti-vax content. If you followed anti-vax content, it fed you anti-Semitic content and QAnon content. It knows that some people are vulnerable to misinformation and conspiracy theories, and that conspiracy theories, because of the psychology of them—they're driven by epistemic anxiety but never sate that epistemic anxiety—lead to rabbit-holing.

It was driving people deeper and deeper into warrens of conspiracy theories. Why? Because the commercial imperative is simple. You find conspiracy theories on social media platforms because they are the least regulated spaces in terms of quality control that you have for mass publishing of content.

Deplatforming these people and putting them into their own little hole, a little hole of anti-Semites, anti-vaxxers and general lunatics, is a good thing, because you limit their capacity to infect other people. It also helps against trends such as the convergence and hybridization of ideologies. I went to an anti-vax rally in Los Angeles a few weeks ago, and standing there were members of the Kennedy family, QAnon followers, anti-Semites, Proud Boys and kooky hippies who smoke ayahuasca. It's an entire mix of people, and it is driven by social media convergence.

It is vital that they deplatform people so that we don't end up with the kinds of problems that you also faced in Canada a few months ago.

1:25 p.m.

NDP

Alistair MacGregor NDP Cowichan—Malahat—Langford, BC

I'd like to turn to B'nai Brith and Mr. Mostyn.

In our previous panel, we had a former white supremacist who was able to reform himself and who now leads an organization dedicated to helping people in the white supremacist movement leave it. Earlier, I asked him about the struggle we have: on the one hand, we as the public want to denounce hateful ideologies like white supremacy and neo-Nazism, but on the other hand, we want to show compassion and bring those people out of those movements. Mr. McAleer, our witness, described how ideology becomes intertwined with a person's sense of self.

Has B'nai Brith had any valuable experience in speaking with reformed members from white supremacist movements, and is there anything your organization has learned from this that would be helpful for our committee to know?

1:25 p.m.

Chief Executive Officer, National Office, B'nai Brith Canada

Michael Mostyn

Those are some excellent points.

There have been some great deradicalization programs used in Canada and abroad to deal with people radicalized in a variety of different ways. Sometimes it's religiously based radicalization. Just as in the rest of the criminal justice system, we have to be able to identify those who can be deradicalized and those who cannot.

We have different sorts of ideas that come into sentencing within our criminal justice system. There have to be deradicalization programs available for those from the far right. If we don't want it to spread, then people have to be given the ability to be educated, and it's very difficult, as you mentioned, to educate those who believe so strongly that their way is right and, in fact, that they are so right they are willing to perhaps engage in violence against those who see things in a different way.

Our recommendation would be to not separate out hate from different ideologies. Hate needs to be treated as hate, but overall it needs to be treated through the lens of public safety. Are these criminal issues we're talking about? Are these terrorist issues we're talking about? Or are these opinions, perhaps very strongly held opinions, that are not criminal? There are ways to get this out.

People had disgusting things to say long before the Internet. White supremacists used to slap fliers on people's cars. The Internet is allowing folks to speak longer and, as Mr. Ahmed said earlier, there's nothing wrong with limiting their ability to spread hate.

1:30 p.m.

Liberal

The Chair Liberal Jim Carr

Thank you very much.

Colleagues, that brings us to the end of this panel.

I'd like to thank the witnesses for your patience. Our apologies for the late start. It's a moment of this parliamentary session that we have to contend with.

You've been patient and your testimony has been important and enlightening. On behalf of the members of this committee and all parliamentarians, I want to thank you for helping us understand these complex issues.

Colleagues, that's it for today. Have a good weekend, everybody, and we'll see you on Tuesday.

The meeting is adjourned.