Evidence of meeting #116 for Access to Information, Privacy and Ethics in the 42nd Parliament, 1st Session. (The original version is on Parliament’s site, as are the minutes.)

A recording is available from Parliament.

Also speaking

Taylor Owen, Assistant Professor, Digital Media and Global Affairs, University of British Columbia, As an Individual
Fenwick McKelvey, Associate Professor, Communication Studies, Concordia University, As an Individual
Ben Scott, Director, Policy and Advocacy, Omidyar Network

12:25 p.m.

Director, Policy and Advocacy, Omidyar Network

Dr. Ben Scott

I can jump in on that. There are four things you can do right now before the end of the year to prepare for 2019.

One, aggressive political ad transparency should be required by law of all of the platform companies.

Two, increase the funding for, and coordination of, those who are monitoring and exposing foreign intervention through misinformation campaigns.

Three, quickly establish a process for removing illegal content, with all of the proper caveats about free expression, so that we don't have to suffer from things that shouldn't be out there in the first place.

Four, start talking to young voters in the classroom. My kids are in this wonderful program in Canada called Student Vote. In it they do mock elections and learn about political parties and the political system. We should have a digital literacy component in that curriculum.

12:25 p.m.

NDP

Brian Masse NDP Windsor West, ON

Would anyone else like to comment?

12:25 p.m.

Prof. Taylor Owen

I agree with those four.

12:25 p.m.

Prof. Fenwick McKelvey

Yes. I think the enforcement mechanism needs to be quick. I think one of the challenges is how you develop tools during the election to combat some of these things. This is where I think a code of conduct would be important, because if, all of a sudden, one party is benefiting from foreign interference, how do all parties respond? I think that's a tough question that goes to the conduct of our elections.

I think this kind of enforcement mechanism—and I think a lot of this content is already illegal—is about trying to bring greater transparency to this, whether that means content moderation, as has been discussed, or the ad markets.

12:25 p.m.

Prof. Taylor Owen

On enforcement, there is a reason the GDPR sets its penalties as a share of global revenue, not localized revenue: if you don't do that, there is very little incentive for structural change. I think that's a cue as to where we need to go on the penalty side.

12:25 p.m.

Conservative

The Chair Conservative Bob Zimmer

You have 30 seconds.

12:25 p.m.

NDP

Brian Masse NDP Windsor West, ON

Go ahead, please, Mr. McKelvey.

12:25 p.m.

Prof. Fenwick McKelvey

I also want to add that we should widen our scope on online advertising. We've been mostly talking about programmatic advertising. Once you loop in bots, sponsored content, and influencer marketing, there is a whole grey area of promotional content taking place on social media. We have to move forward in recognizing the scope and ubiquity of the advertising we see today.

12:25 p.m.

NDP

Brian Masse NDP Windsor West, ON

Thank you, Mr. Chair.

12:25 p.m.

Conservative

The Chair Conservative Bob Zimmer

Thanks, everyone.

We have more time, so does anybody have any further questions?

We will start with Mr. Erskine-Smith for five minutes.

12:25 p.m.

Liberal

Nathaniel Erskine-Smith Liberal Beaches—East York, ON

Just to pick up where we left off on transparency, I think it makes sense that political advertising would not be treated particularly differently from other advertising. Give no answer now, because I want to get to something else, but do think about collection and use and how political parties or political activities should perhaps be subject to different rules or the same rules. If you have further thoughts, it would be great if you could submit them to the committee.

I want to get to the policing of content on the Internet, because both Mr. Owen and Mr. Scott have touched on this in their writing. You have suggested that these big platforms have the capacity and resources to do the work.

How do we set a rule that requires certain organizations to police content and not others, if smaller organizations don't have the capacity and resources?

September 25th, 2018 / 12:30 p.m.

Director, Policy and Advocacy, Omidyar Network

Dr. Ben Scott

We have a couple of different models to look at. I will profile the German model and tell you where I think it went right and where it went wrong.

The Germans set a bar, I think, of a million domestic subscribers to the service, which basically meant three companies—Google, Facebook, and Twitter—and they said, "You have 24 hours to remove illegal content from the moment you get notified that it's there".

The problem with that was that they put all the burden on the companies. They gave all the decision-making authority to the companies about what was and wasn't illegal, and they had no appeals process.

The benefit they got from that was the resources and the technical ability of the companies to rapidly find not only the content that drew a complaint, but all content like that and all copies of that content all across the network and to quickly bring it down, much as they do for copyright violations, much as they do for other forms of fraud and illegal content. Counterterrorism functions the same way.

In my view, the problem is that we need more regular order judicial review. The prosecutors who would normally have brought a case like that through the usual court procedure ought to be involved in the oversight. That way, when the algorithm comes back and says these are the thousand instances of this piece of hate speech we see on the network, there is either a common review of that content to ensure it meets a public interest standard of free expression, legal or illegal, or it goes into an appeals process and through regular order judicial review.

12:30 p.m.

Liberal

Nathaniel Erskine-Smith Liberal Beaches—East York, ON

Why not flip that on its side? If you have ever had a parking ticket in Toronto, there's an administrative system that makes you subject to a $50 fine unless you explain yourself. If I post something hateful on the Internet, part of the problem with our system right now is that the response is the Criminal Code, right? There's no good ability to penalize me, the person who published the hateful content, and there's certainly no obligation on the platform at the moment to take it down or pay a penalty if it doesn't.

Why not tax the big players and have a public administrative system that has a quick takedown system in the first instance, rather than putting the obligation on these companies to police it themselves?

12:30 p.m.

Director, Policy and Advocacy, Omidyar Network

Dr. Ben Scott

In theory, on paper, there's no reason you couldn't do it that way. In practice, the administration of that technical system is non-trivial and requires access to those companies' infrastructures, which they are not likely to want to provide.

I think it's certainly something that should be on the table for discussion as a long-term solution, but in the short term, if what we need, for example, between now and October 2019 is the ability to remove intentional hate speech and illegal content from the Internet in a hurry, we're going to have to find a more straightforward mechanism.

12:30 p.m.

Liberal

Nathaniel Erskine-Smith Liberal Beaches—East York, ON

In the short term, that probably means the platforms themselves taking it down.

In terms of regulating platforms, the U.K. recently suggested in their recommendations that there should be a category of platforms that are subject to regulation. If you look at the CRTC right now, they regulate publishers and broadcasters, but we don't regulate these platforms that claim not to be either of those things.

Is the body that should regulate these platforms, whatever threshold we set, the CRTC? Is it the Privacy Commissioner? Where should this reside?

12:30 p.m.

Prof. Taylor Owen

I think Professor McKelvey might be best positioned to answer this one.

12:30 p.m.

Prof. Fenwick McKelvey

I'm currently working with Chris Tenove and Heidi Tworek on a report about content moderation. First off, there is no one jurisdiction that's going to regulate these platforms. I think they are multi-jurisdictional, and I don't think that's actually a problem. We have that with broadcasting and telecommunications.

In terms of the Privacy Commissioner and the CRTC, with regard to the ways platforms function, I think platforms do at times function specifically as broadcasters, as well as falling into a specific new category that deals with this content moderation problem. It's important to recognize that they fit into existing jurisdictions and need to be held accountable for the ways in which their activities fit within those, but then there's this content moderation question that we really have not given any serious legislative attention to. What we have is kind of a piecemeal amalgam of hate speech laws and revenge porn laws.

One of the things I, along with my co-authors, am recommending is a social media standards council or a content moderation standards council, similar to a broadcasting standards council. If you look at what the broadcasting standards council does, it's very parallel to what has been called for and what we need in content moderation: an appeals process, transparency, and disclosure. The concern and the push-back I have to acknowledge is that it's more industry self-regulation. I think there is a criticism there, but I think it's an important first step that would actually start convening around this particular activity of content moderation, which we have not recognized well in law.

12:35 p.m.

Liberal

Nathaniel Erskine-Smith Liberal Beaches—East York, ON

But we impose—

12:35 p.m.

Conservative

The Chair Conservative Bob Zimmer

Hold on. You're out of time.

12:35 p.m.

Liberal

Nathaniel Erskine-Smith Liberal Beaches—East York, ON

I'm out of time. No worries.

12:35 p.m.

Conservative

The Chair Conservative Bob Zimmer

Mr. Baylis, you have five minutes.

Hold on. There are more in the queue before Ms. Vandenbeld. She just got added to the end.

12:35 p.m.

Liberal

Frank Baylis Liberal Pierrefonds—Dollard, QC

I'll give her my spot because I've already spoken. I'll switch places with her. If I don't make it, that's fine, Chair. Thank you.

12:35 p.m.

Conservative

The Chair Conservative Bob Zimmer

Okay.

12:35 p.m.

Liberal

Anita Vandenbeld Liberal Ottawa West—Nepean, ON

Thank you very much.

Actually, I wanted to pick up on that particular thought. It's one thing to moderate content when there is actual hate speech or something that is outright misogynistic. What you've been discussing today is more about the algorithms and the fact that the platform actually prioritizes the kind of toxic speech that might not reach the threshold of hate speech but is still racist or has underlying sexist messaging.

The difference with television, for instance, is that when you put on a commercial, everybody sees the same commercial. Obviously it has to be moderated to be what most people would want to see and consider acceptable. Compare that with, for instance, something with misogynistic undertones: a person clicks on it, and it says, “The reason you got this is that you are a white male between the ages of 20 and 25 and you just broke up with your girlfriend.”

If they knew that, it would allow that person to think twice and ask, “Why am I getting this?”

Is this what we're talking about here? I'm asking because there are two different things. There's actual hate speech and then there's the way in which all of these messages are being targeted at individuals, and that's a lot harder to regulate.

12:35 p.m.

Prof. Fenwick McKelvey

The thing is that there's a distinction between hate speech, which is captured under the Criminal Code, and what I think is a growing concern, which is harmful speech. We don't want to conflate the two. As a male who has grown up online, and having talked to my female counterparts, I think there's concern about the amount of aggression. I think this is also particularly true now for female politicians. Just think about the amount of vitriol being spewed. I think there is some way of dealing with that, which is different from dealing with hate speech, both in terms of the concern and in terms of tactics.

That's part of content moderation, and it already happens on social media platforms. Social media platforms are already making decisions about what content is accessible. Content producers on Instagram are already struggling with what parts of their bodies they can and cannot show, based on that platform's content moderation.

The specific point about this is about recommendation. This is how platforms make recommendations about what content you see. This is often described as a filter bubble, whereby they're filtering your content. I think there is less concern about the filter bubble than there is about the fact that if you look at YouTube, it optimizes for engagement. If you look at Facebook, it's for meaningful social interactions.

It's those particular kinds of logic that are recommending content that might have some, to use Taylor Owen's words, negative externalities. We need more transparency about the consequences of those recommendations, and in particular about whether there should be some red lines around what content can be recommended. I think a standards council could be one of the ways to do that. I also think that when you get into the enforcement issue and you're trying to shut down hate speech quickly, that's another point at which there might be intervention.