Evidence of meeting #116 for Access to Information, Privacy and Ethics in the 42nd Parliament, 1st Session. (The original version is on Parliament’s site, as are the minutes.) The winning word was advertising.

A recording is available from Parliament.

Also speaking

Taylor Owen  Assistant Professor, Digital Media and Global Affairs, University of British Columbia, As an Individual
Fenwick McKelvey  Associate Professor, Communication Studies, Concordia University, As an Individual
Ben Scott  Director, Policy and Advocacy, Omidyar Network

12:35 p.m.

Liberal

Anita Vandenbeld Liberal Ottawa West—Nepean, ON

Mr. Owen, would you comment?

12:35 p.m.

Prof. Taylor Owen

More broadly on the content moderation issue, there's clearly a broad spectrum of potential harmful speech and a broad range of ways to address different problems along that spectrum—hate speech, child pornography, and criminal activity on one end of the extreme, and maybe just political views we don't agree with on the other end. We'll engage different things in different spaces, and that's fine.

The other important point here is that there is national context to the way we regulate speech, and that is okay. We know what the alternative default is. If we're not imposing those national guidelines, regulations, and incentives on speech, the default is the interpretation of the terms of use of a global company. Twitter has terms of use different from Facebook's, and Google/YouTube has terms of use different from the other two. We know, for example, that Twitter has a very free-speech-leaning application of its terms of use. Until recently, almost anything was allowed. Twitter was incentivizing engagement and activity over the limiting of speech. That was a corporate decision, and it has had different consequences in different national environments.

In Canada, we have criminalized hate speech. When we did that, there was a lot of push-back from free-speech advocates in the United States, who said Canada was limiting speech too much, but we made that decision as a democracy ourselves and then built an infrastructure to apply it.

The questions for us now in Canada—which are different from the questions for Germans, for instance, who have a different application of hate speech for various historical reasons—are how we are going to apply our current hate speech standards onto platforms, and whether we are going to extend those hate speech provisions to other kinds of content that we now think have negative costs in society beyond those original provisions. Those are two separate questions, I think.

12:40 p.m.

Conservative

The Chair Conservative Bob Zimmer

Thank you, Mr. Owen.

Next up, we have Brian Masse for five minutes.

12:40 p.m.

NDP

Brian Masse NDP Windsor West, ON

Thank you, Mr. Chair.

One of the presentations I found interesting was with regard to the regulations around bots and artificial intelligence, although we didn't get too much into it. Would it be worthwhile for Canada to create a type of regulatory environment for how bots can be used for advertising and content distribution? I'm just throwing that out in terms of what we would do here. Also, should we be looking at including this as part of some of our trade agreements?

I worked on the anti-spam legislation. There are serious problems with that legislation, as you know, but the volume of information and its use, and the consequences from malware and other things, are quite economically significant, let alone irritating.

Maybe we can start with Mr. McKelvey. Do you have any comments about bots and whether there should be domestic rules and perhaps international rules with regard to that activity?

12:40 p.m.

Prof. Fenwick McKelvey

I think Dr. Owen summed it up nicely. There are transparency requirements. It's about trying to make sure that when there is bot activity, we know it's a bot, and that there is disclosure around it.

I've actually thought of it as comparable to the voter contact registry, the VCR, and whether something similar could be done for bots. I don't think it should be done on a per-bot basis, but if companies do large-scale social media amplification, that could be subject to it.

In many ways, amplification performs a kind of paid placement. If you're paying a bot to amplify your message, there are ways to define that: count it as advertising and disclose it as such. That would actually go a long way, because it targets the specific type of bot that is the problem, what I would describe, along with Elizabeth Dubois, as an amplification bot. This is a bot that adds credibility to something, a kind of astroturfing. If we count this as advertising, that would be an important step toward normalizing it within the advertising system.

12:40 p.m.

Prof. Taylor Owen

I would briefly add that the bot issue is in many ways the tip of the iceberg of a much bigger conversation that you alluded to, which is around the governance of AI. This issue we're talking about today, about information in our democracy, is embedded in a much larger debate about how we should be governing automated elements in our society, whether they be individual agents, advertisers, medical providers, or whoever they might be.

We need to have a conversation about consent and about access to our data by the systems that use it, and about knowing how that data is used. That will require a broader conversation beyond the Canadian context. In many ways, it is emerging as a global regulatory conversation.

It's part of this conversation we're having.

12:45 p.m.

Prof. Fenwick McKelvey

I also want to say that the federal government is currently investigating the way it governs its own use of AI. The Treasury Board is looking into impact assessments for AI: as AI is rolled out in the federal service, how is it being deployed? There's a review process being put in place.

I think this is important evidence of how the government could be a leader in AI governance. It also requires ensuring that the rollout is done transparently and that these kinds of concerns about the potential political use of these technologies are factored in. I think there's really important work taking place at present.

12:45 p.m.

NDP

Brian Masse NDP Windsor West, ON

Thank you, Mr. Chair.

12:45 p.m.

Conservative

The Chair Conservative Bob Zimmer

Thank you.

Next up, for five minutes, is Monsieur Picard.

12:45 p.m.

Liberal

Michel Picard Liberal Montarville, QC

Thank you.

We have a journalist who is very serious in his work and who surely provides credibility to the newspaper he works for. Here we are, though, with someone who cannot talk about my NDP colleague because someone else decided to prevent his readers from knowing what's going on in his riding. When I look at news on TV, and when I look at the different U.S. channels especially, they seem to be very serious channels, but depending on which channel I look at, the United States seems like two different countries.

You referred a few minutes ago to trust, and to the sources of information you can have access to. What were you referring to when you suggested that trust is a useful notion?

12:45 p.m.

Prof. Taylor Owen

Trust is a difficult concept in relation to journalism. I might trust Fox News and you might trust MSNBC, and we both have high degrees of trust in the journalism we're consuming, so I'm not sure that's the core metric on an individual level.

On a societal level, I think we can talk about how much trustworthy and accurate information is in our public sphere, is circulating, and whether that's enough. I think that's the point on which we need to engage in this. It's not whether each individual trusts the particular news source they're getting their content from, but whether as a society we have a collective body of reliable information in our democracy.

12:45 p.m.

Liberal

Michel Picard Liberal Montarville, QC

Go ahead, sir.

12:45 p.m.

Prof. Fenwick McKelvey

It's very strange for me, as someone who teaches communication and media studies. I have long-standing criticisms, shared I think by many people, about the gatekeeping effects of the media and the decline of for-profit media. It's not something anybody here holds in high regard.

I think the challenge is that, in one sense, you knew who these gatekeepers were; that's the way the system worked. What we're now facing is that we just don't know how the system works. We don't know how the influencers work. There's strategic power in that information asymmetry.

One thing that needs to be said is that a variety of solutions need to be put forward. In Canada we've said, in a sense, that we have a more proactive cultural policy and that we can provide information subsidies for the public good. When we're talking about trust in the media, this is where public broadcasting has been shown to be really effective: it raises the bar for any kind of misinformation or disinformation campaign, making it more difficult to run, while also putting good information out there. It's really clear to me that the public benefit of public broadcasting is ever more apparent, that it is unique, and that it should continue to be part of the robust response Canada takes to these concerns.

12:45 p.m.

Liberal

Michel Picard Liberal Montarville, QC

Unfortunately, Facebook is not owned by Radio-Canada, so there's no public medium like Radio-Canada broadcasting on Facebook, the Internet, or whatever media you use. Therefore, with any source available, when you rely on your Facebook page, you get tons and tons of awkward information. Government cannot regulate laziness. If I don't cross-check my information, as Mr. Scott said, I'm going to read my Facebook and think the world is the way Facebook describes it to me.

It's all a matter of interests. For me, the important thing is to be able to know what interests are behind the information, and therefore to have the ability to verify that information against other sources and make up my own mind. Would that be the limit of my intervention as a government, and of course the responsibility of any reader?

12:45 p.m.

Prof. Fenwick McKelvey

I would joke that the CBC should buy Reddit, in part because I think we had about a 10-year gap when we really weren't thinking about what public broadcasting means in an era of social media. I still think we have in many ways a really limited sense of what the potential of social media could be, and I think there's room for imagination and thinking more broadly.

I also think that one of the benefits as we're talking about the sharing of information is that if you give away the information for free as a public good, you are creating and fuelling these platforms with good information and seeding it.

We can talk about the concentration of the social networking space or the advertising space, but I think if we're just talking about access to information, public broadcasting plays an important role there.

12:50 p.m.

Liberal

Michel Picard Liberal Montarville, QC

Mr. Scott, Mr. Owen, would you comment?

September 25th, 2018 / 12:50 p.m.

Director, Policy and Advocacy, Omidyar Network

Dr. Ben Scott

Our goal here is not the elimination of bias or sensationalism or nonsense in the media system. They will always be there; the media have always been all of those things. Our goal is to contain them such that most of the people, most of the time, are shaping their political views based on a fact-based, rational view of their society.

How do we get that done? It's changed. When there are major shifts in the dominant form of information distribution, a new set of norms has to emerge about how you get to that result of most of the people most of the time. The way we did that in the broadcast era was through a heavy investment in public media, and we relied on journalistic standards in the newspaper market. Now we have a tremendous disruption, the biggest disruption in public information since the printing press, and we are going about the task of figuring out how we establish the right norms by controlling the supply side: by using privacy policy to limit filter bubbles, by using competition policy to ensure there's space in the market for other kinds of providers, and by investing in digitization of public media.

We're also working on the demand side, helping consumers understand that the passive consumption—

12:50 p.m.

Conservative

The Chair Conservative Bob Zimmer

I hate to cut you off, Mr. Scott. We've got to move on to the next questioners.

We have two questions to close.

We'll go to Mr. Saini and then Mr. Kent, and then we're done. My apologies.

12:50 p.m.

Liberal

Raj Saini Liberal Kitchener Centre, ON

Mr. McKelvey, in something you wrote a while back, you had three topics: discoverability, trending, and advertising.

I want to focus on discoverability, because discoverability for me is doing something indirectly that you can do directly with advertising. You have an issue, and the users or the platform companies highlight that, and then the algorithms push that to the top of the list. Then if someone without any prior knowledge wants to research a topic, a candidate, or a particular position and they go to the Internet and they google that name, that negative piece or the most salacious piece will appear.

You've written about that. What can be done to prevent it? I think that can be done indirectly. If you have advertising, you're directly advertising, but this is an indirect way of also getting out a position. The existing algorithms seem more insidious than the advertising component.

12:50 p.m.

Prof. Fenwick McKelvey

I think the discoverability of things is a really important thread, so thank you for picking that up. To me, discoverability means what shows up when you search for something. I would point to some of the research I've done in the Algorithmic Media Observatory. We looked at discoverability of political content during the Ontario election to see how the recommender system was working. The CBC also did a similar study and reported on it.

I think the way to deal with that is, first, to look at what counts. What are these systems ranking information for? We're still trying to define those intentions, whether that means engagement or meaningful social interactions. Those are things to be attended to. An explicit judgment is being made, and I think it's for the government to put forward good recommendations or good cultural policies for other forms of discoverability as a public norm.

It's also a matter of recognition. I'd point to the report from Data & Society, which has just come out and talks about influencer networks. It's important to say that discoverability is a system that works, but we don't necessarily know how. It's clear that through coordination you can influence these discoverability systems, and that points to a need for research. In particular, if people are being paid to influence or change discoverability, I think that could count as a form of advertising.

12:50 p.m.

Conservative

The Chair Conservative Bob Zimmer

Thank you.

Last up, we have Mr. Kent.

12:50 p.m.

Conservative

Peter Kent Conservative Thornhill, ON

Thank you very much, Chair.

Thank you all for the various remedies you suggested for the surveillance-capitalism side of what we've been talking about today, but human nature being what it is, people are still enthusiastically joining and participating in the relationship-enabling aspect of social media, which is, after all, the origin of social media today.

I'd like to come back to the foreign intervention in the electoral process that we talked about a little earlier. I think it was Dr. Scott who gave the example of the Russian-confected Beyoncé fan site, a Trojan horse time bomb. How do you prevent that sort of confected site being built up before an election and then detonated just at decision-making time?

12:55 p.m.

Director, Policy and Advocacy, Omidyar Network

Dr. Ben Scott

I think it's very difficult to guard against that kind of attack.

Here's where the state of the art is now. Essentially it's a collaboration among security services, outside researchers, and companies to try to detect in advance the coordinated activity of disinformation operators. There are signals in the network if you know how to look for them, and they're developing tools and they're doing what they call red teaming, which is to put yourself in the perspective of a malignant actor who might try that Beyoncé trick. How would you go about doing that? If you can do it, what are the ways that could be countered?

If we can think of it in an imaginative red team exercise, you can be sure that our adversaries are thinking of it as well, and you build prophylactic defences against those things that you can imagine doing. It's a very Cold War war-gaming exercise, and that's what's going on right now in the cybersecurity space.

You're not going to be able to defend against all of these things. You're only going to be able to contain a certain percentage, so the second piece of this is resilience. You need to have a plan in place to react very rapidly when that time bomb is triggered and suddenly something happens that you weren't expecting. You need to be able to react fast to bring it down and to educate the public who were contacted by that account that they have been engaged by either an automated account with malignant intent or a foreign-operated influence campaign. Those rapid response techniques are also things we ought to be developing.

12:55 p.m.

Conservative

Peter Kent Conservative Thornhill, ON

Thank you.

12:55 p.m.

Conservative

The Chair Conservative Bob Zimmer

Thank you, everybody, for attending today and providing us with a lot of food for thought.

Thanks to the committee for coming today. It's a good first meeting of the session. Thanks again, and we'll talk soon. Have a good day.

The meeting is adjourned.