Evidence of meeting #118 of the Standing Committee on Access to Information, Privacy and Ethics, 42nd Parliament, 1st Session. (The original version is on Parliament's site, as are the minutes.)

A recording is available from Parliament.

Witnesses

Elizabeth Dubois, Assistant Professor, Department of Communication, University of Ottawa, As an Individual
Michael Pal, Associate Professor, Faculty of Law, Common Law Section, University of Ottawa, As an Individual
Samantha Bradshaw, Researcher, As an Individual

October 2nd, 2018 / 11:05 a.m.

Conservative

The Chair Conservative Bob Zimmer

Welcome this morning to the Standing Committee on Access to Information, Privacy and Ethics for meeting number 118. Pursuant to Standing Order 108(3)(h)(vii), we are continuing our study of breach of personal information involving Cambridge Analytica and Facebook.

Today we have as witnesses Elizabeth Dubois, Michael Pal and Samantha Bradshaw.

We'll start off with Ms. Dubois for 10 minutes.

11:05 a.m.

Dr. Elizabeth Dubois Assistant Professor, Department of Communication, University of Ottawa, As an Individual

Hello. Thank you for inviting me to speak today.

I am an assistant professor at the University of Ottawa. I completed my doctoral work at the University of Oxford. My research focuses on political communication in a digital media environment. I've examined issues such as the political uses of artificial intelligence and political bots, echo chambers, and citizens' perceptions of social media data use by third parties, such as government, journalists and political parties.

My research has been conducted in Canada and internationally, but today I want to speak about four things: first, analog versus digital voter-targeting strategies; second, changing definitions of political advertisements; third, self-regulation of platforms; and fourth, artificial intelligence.

I have one quick note. I'll use the term “platform” throughout my testimony today. When I do, I'm referring to technology platform companies, including social media, search engines and others.

Let's start with voter targeting. This is by no means a new phenomenon. It's evolving at a spectacular rate, though. It is typical and in fact considered quite useful for a political party to collect information by going door to door in a community and asking people if they plan to vote and who for. In some cases, they may also ask what issues a citizen cares about. This helps political parties learn how to direct their limited resources. It also helps citizens connect with their political system.

However, even with this analog approach, there are concerns, because disengagement of voters and discrimination can be exacerbated. For example, if certain groups are identified as unlikely voters, they are then essentially ignored for potentially the remainder of the campaign.

Digital data collection can amplify these issues and present new challenges. I see four key differences in the evolving digital context as opposed to that analog one I briefly outlined.

First, there are meaningful differences between digital and analog data. The speed and scope of data collection is immense. While data collection used to require a lot of human resources, it now can be done automatically through sophisticated tools. I believe that last week you heard from a number of people who described the ones that political parties are using currently.

Similarly, this data can now more easily be joined with other datasets, such as credit history or other personal information that citizens may not want political parties or political entities to be using. It can also be more easily shared, transported and searched, and predictive analytics can be employed, because there is so much more data, of so many more kinds, that it can all be combined and analyzed very quickly.

Second, citizens may no longer be aware when their data is being collected and used. Unlike when they had to answer the door to give out personal information, this now can be done without their knowledge. They may not even know what is technically possible. In a study of Canadian Internet users, my colleagues at Ryerson University and I found that most Canadians are uncomfortable with political uses of even publicly available social media data. For me, this signals a need to really think about what kinds of data citizens would actually want their political representatives to have and to be using.

Third, the uses of data are evolving. Since online advertisements, for example, can now target niche audiences, personal data has become more useful to political entities. At the same time, these uses are less transparent to regulators and less clear to citizens. This means that emerging uses could be breaking existing laws, but they're so hard to trace that we don't know. We need to have increased transparency and accountability in order to respond adequately.

Fourth, political entities are incentivized to collect data continually, not solely during an election campaign. This means that existing elections laws could be insufficient. I should note that it is not just political parties that are collecting this kind of data, but also non-profits, unions and other third parties, so the questions about how this data is collected and what constitutes responsible use have to be broader than political parties alone.
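As a concrete illustration of the first difference above, here is a minimal sketch in Python of how separately collected datasets can be joined in a single step. All names, fields and values are hypothetical; the point is only how little effort the digital version takes compared with the analog one.

    import pandas as pd

    # Hypothetical door-knocking records collected by a campaign.
    canvass = pd.DataFrame({
        "voter_id": [101, 102, 103],
        "intends_to_vote": [True, False, True],
    })

    # A hypothetical commercial dataset, such as credit history,
    # that citizens may not expect political entities to use.
    commercial = pd.DataFrame({
        "voter_id": [101, 102, 103],
        "credit_band": ["A", "C", "B"],
    })

    # One line joins what once took enormous human effort to compile,
    # and the merged profiles can feed predictive analytics directly.
    profiles = canvass.merge(commercial, on="voter_id")
    print(profiles)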

These changes are particularly concerning, then, because many of these uses aren't covered by existing privacy laws, and the Privacy Commissioner doesn't have the tools needed to make sure those laws are enforced the way they were intended.

This data use is not all bad. There are a lot of positive uses, including increasing voter turnout and trying to combat voter apathy. That said, to balance things we need to make sure we include political parties under the personal data uses laws that we have, PIPEDA being the main one. We need to create provisions that ensure transparency and accountability for political uses of data, and we need to ensure that citizens are literate, which includes things like having better informed-consent statements and other media and digital literacy initiatives.

With the few minutes I have left, I want to talk about a few issues that stem from this targeted voter behaviour. First is political advertising. It's no longer quite as clear-cut as it once was. In addition to paying placement costs for what platforms might call advertisements, political entities have a bunch of other ways to have paid content show up in somebody's newsfeed or as a recommended video, and algorithms can be gamed to make sure that certain pieces of content show up on people's screens.

Those might include something like sponsored stories, using brand ambassadors, renting social media accounts that already have a big following, or employing political bots to help disseminate information more widely. All of these could be done potentially for free but they could also be done on a paid basis, and when they're paid, that comes awfully close to advertising, under the spirit of the law.

In response, we need to redefine what constitutes a political advertisement in order to continue enforcing these existing laws and their intended outcomes. It's particularly important that we consider this when we look at the worldwide increase in instant messaging platform use. The ways that political parties and other political entities are using instant messaging platforms is a lot harder to track than the ways social media platforms are used, and we can expect that is going to increase.

Second, I want to talk about self-regulation and how it is insufficient when we're talking about the big platform companies. While they have been responding, these are reactive responses, not proactive responses to the threat that we see when digital data is being collected and personal information is being stored. These companies need to be responsible for the content that shows up, what they allow to show up, on their platforms. We also need to make sure that any interactions they have with those data are transparent and accountable. Right now there is a black box. We don't know how Facebook or Google decides what shows up and what doesn't, and we can't allow that to continue when things like personal privacy, hate speech and free speech are being called into question.

Finally, the use of artificial intelligence is already complicating matters. The typical narrative at the moment is that when learning algorithms are used, it is impossible to open that black box and unpack what's happened and why. While this may be true if you take a very narrow technical perspective, there are in fact steps we can take to make the use of AI more transparent and accountable.

For example, we could have clearer testing processes, where data is open for government and/or academics to double-check procedures. There could be regular audits of algorithms, the way financial audits are required, and documented histories of the algorithm development, including information about how decisions were made by the team and its members and why. We also need things like clearer labelling of automated accounts on social media or instant messaging applications, and registrations of automated digital approaches to voter contact. You could imagine a voter contact registry being modified to include digital automated approaches. As well, we need widespread digital literacy programs that really dig into how these digital platforms work so that citizens can be empowered to demand the protection they deserve.

Ultimately I see a lot of value in political uses of digital data, but those uses must be transparent and accountable in order to protect the privacy of Canadians and the integrity of Canadian democracy. This requires privacy laws to be updated and applied to political parties, the Privacy Commissioner to have increased power to enforce regulations, and platforms to be held responsible for the content they choose to allow and the reasons for that.

Thank you.

11:10 a.m.

Conservative

The Chair Conservative Bob Zimmer

Thank you, Ms. Dubois.

Next we have Michael Pal.

11:10 a.m.

Professor Michael Pal Associate Professor, Faculty of Law, Common Law Section, University of Ottawa, As an Individual

Thank you very much for having me today.

I'm an associate professor in the faculty of law at the University of Ottawa, where I teach election law and constitutional law. Also, I am the director of the public law group there, although today I speak only for myself. I work on matters including voter privacy, campaign finance laws applied online and social media platform regulation, in addition to election cybersecurity. Today I'd like to speak to you a little bit about political parties, which I know is something you've heard a lot about, about social media platform regulation, and then about cybersecurity, briefly, I think, given what you've heard in the last few rounds of testimony.

Some of this material I had the opportunity to present to your colleagues in the procedure and House affairs committee in their study of Bill C-76, so I also have a few comments about that bill.

The first issue, which I know you've heard about, is voter privacy as it relates to political parties. As my colleague Professor Dubois mentioned, political parties are one of the few major important Canadian institutions and entities not covered by meaningful privacy regulation. They are not government entities under the Privacy Act, and they are not engaging in commercial activity under PIPEDA. They fall into a gap between the two major pieces of federal privacy legislation.

Very recently, all of the privacy commissioners across Canada—the federal commissioner and the provincial ones—issued a statement saying this was an unsatisfactory state of affairs and something needed to be done about it. Only in B.C. are political parties covered by provincial privacy laws. There was a bill in Quebec, as I know you've heard, which was not passed before the recent election.

Bill C-76 would address these issues to some extent. Mainly, though, it would require political parties to have privacy policies and would set rules on which particular issues the policies must address. All the major registered parties already do have privacy policies. The bill might change some of the issues that they address, because they're not consistent across all parties, but it would not actually give clear oversight authority to either the federal Privacy Commissioner or Elections Canada. It would not require specific content in privacy policies. It wouldn't provide an enforcement mechanism. Therefore, I think it's a good first step. It's the biggest step that's been made in terms of political parties and privacy, but it doesn't go far enough.

What would regulation of political parties to protect voter privacy look like? Voters should have the right to know what data political parties hold about them. Voters should have the right to correct inaccurate information, which is pretty common under other privacy regimes. Voters should have comfort that political parties will only use the data they collect for legitimate political purposes. As Professor Dubois mentioned, it's a good thing that political parties collect information about voters; you can find out what voters actually want and you can learn more about them. However, that data should only be used for political and electoral purposes.

One place where I think some of the generally applicable privacy rules would not work is a "do not call" list. Political parties should be able to contact voters, and I think it would be a problem for democratic electoral integrity if 25%, 30% or 40% of voters were simply uncontactable by political parties. We have to adapt the content of the existing rules to the specific context of political parties and elections.

The second big issue I wanted to address is social media platform regulations. I know you've heard a lot about Facebook. A lot of this is contained in a paper I gave recently at MIT, which I'm happy to share with the committee if it's useful. The Canada Elections Act and related legislation governs political parties, leadership candidates, nomination contestants and third parties, as you well know. Social media platforms and technology companies need to be included under the set of groups that are explicitly regulated by electoral legislation and the legislation that is under the purview of this committee. How so? Platforms should be required to disclose and maintain records about the source of any entities seeking to advertise on them.

Bill C-76 does take some positive measures there. It would prevent, say, Facebook from accepting a foreign political advertisement for the purpose of influencing a Canadian election. That's a good step forward. It only applies during the election campaign, as I read it, and I would like to see a more robust rule that requires due diligence on the part of the social media companies. Is there a real person here? Where are they located? Are they trying to pay in rubles or dollars? Do they have an address? These are basic things we would all pretty logically think of checking if we cared about the source of the money.

That relates to foreign interference. It also relates to having a clean domestic campaign finance system, given all the advertising that happens online.

Another issue that I think requires further regulation is search terms. You can microtarget ads to particular users of a social media platform. If there's a political election ad on Hockey Night in Canada, we get to see the content of the ad. As members of the public, we don't necessarily get to see an ad that's microtargeted at an individual or a group of individuals and those individuals might not even know why they were targeted.

There are certain kinds of searches that we may think have no place in electoral policy. For instance, searching for racists is something you can do, potentially, and there's been a lot of media discussion about that and whether that did happen in the last U.S. election. I don't think we have concrete information about particular instances, but we know enough to know that search terms might be used in a way that we find objectionable, in broadly understood terms about how democracy should operate in Canada.

Therefore, there's value in disclosing search terms to the public, but also to the individuals who have been targeted and may not know why.

Another issue is that there should be a public repository of all election-related ads. Facebook has voluntarily done some of this. That decision could be rescinded at any point by people sitting in California. That's not an acceptable state of affairs to me, so that should be legally mandated.

A very interesting precedent has been raised about political communication on WhatsApp. There's even less publicity about what is sent by text messaging, especially on encrypted end-to-end applications like WhatsApp. It came out in the media recently that, in the Ontario provincial election, there were political communications on Xbox. I don't use the Xbox. I don't play a lot of video games, but people who do can be targeted and have election ads directed to them. We in the public have no way of knowing what the content of those ads is, so public disclosure of election ads on an ongoing basis, not just during the election campaign, on all the relevant platforms is something that I would like to see.

Another matter is social media platforms and whether they should be treated as broadcasters. I'm not an expert in telecommunications law. I don't make any claims about whether, say, Facebook should count as a broadcaster, like CTV or CBC, generally. However, there are provisions in the Elections Act related to broadcasters, in particular section 348, which says that the broadcaster must charge the lowest available rate to a political party seeking to place an ad on its platform. This ensures that political parties have access to the broadcasting networks, but it also ensures that they're charged substantially the same rate. Therefore, CTV cannot say, “We like this party, so we're going to charge them less. We don't like that party, so we're going to charge them more”.

Facebook's ad auction algorithm potentially introduces a lot of variation in the price that an advertiser might pay to reach the exact same audience. That is something I think is unwelcome, because it could actually tilt the scale in one direction or another.

We have a bit of a black box problem with the ad auction system. Facebook doesn't tell us exactly how it works because it's their proprietary information, but on the basis of the information we know, I think that there is something there for regulation under section 348, even if we don't treat Facebook like a broadcaster more generally.
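To make the concern concrete, here is a minimal sketch in Python of a generalized second-price auction, a common ad-auction design. Facebook's actual mechanism is proprietary, so this is an assumption for illustration only; it shows how two advertisers can pay very different prices to reach the exact same audience.

    def second_price_auction(bids):
        # The highest bidder wins but pays the runner-up's bid, so the
        # price depends entirely on who else happens to be bidding.
        ranked = sorted(bids.items(), key=lambda item: item[1], reverse=True)
        winner = ranked[0][0]
        price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
        return winner, price

    # The same audience, auctioned at two different moments:
    print(second_price_auction({"Party A": 5.00, "Retailer": 4.50}))  # ('Party A', 4.5)
    print(second_price_auction({"Party B": 5.00, "Retailer": 1.00}))  # ('Party B', 1.0)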

The second last thing is liability. One way to incentivize compliance with existing laws is imposing liability on social media platforms. Generally, they're not liable for the content posted on them, so one of the big questions, before this committee and the House in general, is whether there should be liability for repeated violations of norms around elections. I think that's something that we may need to consider.

The last point I wanted to make is simply on election cybersecurity, because I understand that's something of interest to the committee. Cybersecurity costs a lot of money. For example, I think that Canadian banks spend a lot of money trying to ensure cybersecurity. That kind of spending may be difficult for political parties or other entities involved in the electoral sphere. Political parties receive indirect public subsidies through the rebate system, say, for election expenses. One way to incentivize spending on cybersecurity is to have a rebate for political parties or other entities that spend money on cybersecurity. That's an idea that I've been trying to speak about quite a bit lately.

The last issue is that the U.S. has come out with very detailed protocols on what should happen among government agencies in the event of a cyber-attack, an unfortunate potential event, say, in the middle of the October 2019 election. What would the protocols be? There may be discussions that I'm not privy to between Elections Canada and the new cybersecurity agency. I hope there are, but the public needs to have some confidence about what procedures are followed, because if they don't know what the procedures are, there is a risk that an agency is seen as favouring one side or another, or, potentially, of foreign interference on behalf of one party or one set of entities. I think that's pretty self-evident based on what has happened in the U.S.

Some more publicity around those protocols, I think, would be very welcome.

Thank you very much for your attention. I look forward to your questions in either official language.

11:20 a.m.

Conservative

The Chair Conservative Bob Zimmer

Thank you, Mr. Pal.

Last up, via teleconference, we have Samantha Bradshaw.

Go ahead for 10 minutes.

11:20 a.m.

Samantha Bradshaw Researcher, As an Individual

Great.

Thanks for having me today.

My name is Samantha Bradshaw. I'm a researcher on the computational propaganda project at the University of Oxford. I'll shorten that to Comprop.

On the Comprop project, we study how algorithms, big data and automation affect various aspects of public life. Questions around fake news, misinformation, targeted political advertisements, foreign influence operations, filter bubbles, echo chambers, all these big questions that we're struggling with right now with social media and democracy, are things that we are researching and trying to advance some kind of public understanding and debate around.

Today I'm going to spend my 10 minutes talking through some of the relevant research that I think will help inform some of the decisions the committee would like to make in the future.

One of our big research streams has to do with monitoring elections and the kinds of information that people are sharing in the lead-up to a vote, and we tend to evaluate the spread of what we call "junk news". This is not just fake news and not just information that is false or misleading, but it also includes a lot of that highly polarizing content: the hate speech, the racism, the sexism, the highly partisan commentary that's masked as news. These are the kinds of junk information that we track during elections. The United States was one of our most dramatic examples of the spread of junk news around elections: we found about a 1:1 ratio of junk information being shared to professionally produced news and information.

What's really interesting here is that if you look at the breakdown of where this information was spreading most, you see it tended to be targeted to swing states, and to the constituencies where 10 or 15 votes could tilt the scale of the election. This is really important because content doesn't just organically spread, but it can also be very targeted, and there can be organized campaigns around influencing the voters whose votes can turn an election.

The second piece of research that I'd like to highlight for everyone here today has to do with our work on what we call “cyber troops”. These are the organized public opinion manipulation campaigns. These are the people who work for government agencies, political parties or private entities. They have a salary, benefits. They sit in an air-conditioned room, and it's part of their job to work on these influence operations. Every year for the last two years we've done a big global inventory to start estimating some of the capacities of various governments and political party actors in carrying out these manipulation campaigns on social media.

There are a few interesting findings here. I'm not going to talk about all of them, for sake of time, but I'd like to highlight what we're seeing in democracies and what some of the key threats are. For democracies, it tends to be the political parties who are using these technologies, such as political bots, to amplify certain messages over others and maybe even spreading misinformation themselves in some of the cases we've seen. They tend to be the ones who use these organized manipulation tactics within their own population.

We also tend to see democracies using these techniques as part of more military psychological or influence operations. For the most part, it's the political parties who tend to focus domestically. We also see a lot of private actors involved in these sorts of campaigns around elections. Where a lot of the techniques of social media manipulation were developed in military settings for information warfare back in 2009 or 2010, now it tends to be private companies or firms offering them as services. Cambridge Analytica is the biggest example, but there are many different companies out there working with politicians or with governments to shape public discussion online in ways that we might not consider healthy for democracy and for democratic debate.

I guess the big challenge for me when I'm looking at these problems is that a lot of the data that goes into the targeting is no longer held by the government, by Statistics Canada, which holds the best information about Canadian public life. Instead it's held by private companies such as Facebook or Google, which collect personal information and then use it to target voters around elections.

In the past, it was all about targeting us commercially to sell us shampoo or other kinds of products. We knew it was happening and we were somewhat okay with it, but now when it comes to politics, selling us political ideologies and selling us world leaders, I think we need to take a step back to critically ask to what extent we should be targeted as voters.

I know that a lot of the laws right now are about transparency and explaining why we're seeing certain messages, but I would take that a step further and ask whether I should even be allowed to be targeted because I'm a liberal, or on an even more micro scale than that.

I know one of my colleagues earlier talked about targeting because you are identified as being a racist. At those much deeper levels as to who we are as individuals that really get to the core of our identity, I think we need to have a serious debate about that within society.

In terms of some of the future threats we're seeing around social media manipulation, disinformation and targeted advertisements, there are big questions around deep fakes and around artificial intelligence making political bots a lot more conversational, so that the bot behind the account seems human and more genuine. That might make it harder for citizens and also for the platforms to detect the fake accounts that are spreading disinformation around election periods. That's one of the future threats on the horizon.

Professor Dubois talked about messaging platforms, things like WhatsApp and Telegram. A lot of these encrypted channels are incredibly hard to study because they are encrypted. Of course, encryption is incredibly important, and there's a lot of value in having these kinds of communication platforms, but the way they are affecting democracy by spreading junk information raises serious questions that we need to tackle, especially when you look at places like India or Sri Lanka where this misinformation is actually leading to death.

The third point on the horizon is regulation. I think there is a real risk of over-regulation in this area. Take Europe, for example, and Germany's NetzDG law. I applaud them for trying to take some of the first steps to make the situation better by placing fines on platforms, but there have been a lot of unintended consequences to that law, and we expect to see more.

To use a good example, as soon as that law was put into place, someone from an alt-right party made some horribly racist comments online, and those got taken down, which is good. What also got taken down, though, was all the political satire and all the people calling that comment out as racist. You lose a lot of really important democratic deliberation if you force social media companies to take on the burden of making all of those really hard decisions about content.

I do think one of the threats and one of the challenges in the future is over-regulation. As governments, we need to find a way to create smart regulations that get to the root of the problem instead of just addressing some of the symptoms, such as the bad content itself.

I will end my comments there. I look forward to your questions.

11:30 a.m.

Conservative

The Chair Conservative Bob Zimmer

Thank you very much, Ms. Bradshaw.

We will go to Mr. Saini for 10 minutes.

11:30 a.m.

Liberal

Raj Saini Liberal Kitchener Centre, ON

Thank you to all three of you for being here today.

Professor Dubois, I'm going to start with you, because I read an article you wrote with Mr. McKelvey, who appeared here last week. You talked specifically about the four different types of political bots. Part of the article was on the amplification and dampening uses of political bots.

What concerned me is that right now you're creating psychographic profiles of people. You're targeting certain people. Information is being harvested. My concern is the dampening use of those bots in conjunction with what's being collected. Could that be a possible tactic for suppressing voters?

11:30 a.m.

Assistant Professor, Department of Communication, University of Ottawa, As an Individual

Dr. Elizabeth Dubois

Yes.

In the work that Fenwick McKelvey and I have been doing on political bots, we identified these amplifiers and these dampeners as the two types of bots that are most frequently used to impact the spread of political information in a way that could be negative.

One of those concerns is voter suppression. If a "get out the vote" message is targeting a particularly under-represented group within the Canadian voting sphere (we know that new Canadians have lower rates of voting than people who have been here their entire lives), then an amplification of a message trying to dissuade them from voting, or a dampening of the message trying to encourage them to vote, could unfairly push them away from participating in their electoral system.

We could also imagine more covert approaches that are similar to the robocall scandal, where we had somebody who created an automated telephone message that directed people to the wrong polling place. You can imagine an automated version of that being deployed on Twitter or on WhatsApp, using automated scripts, which is essentially what we mean when we're saying political bots at this point.
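For a sense of what "automated scripts" means at a technical level, here is a minimal sketch in Python: a loop that pushes one message to many accounts on a schedule. The send() function and the handles are hypothetical stand-ins; a real political bot would call a platform's client API at that point.

    import time

    # Hypothetical handles a campaign might target.
    TARGET_HANDLES = ["@voter_one", "@voter_two", "@voter_three"]

    def send(handle, text):
        # Stand-in for a real platform API call.
        print(f"-> {handle}: {text}")

    for handle in TARGET_HANDLES:
        send(handle, "Don't forget to vote on Monday.")
        time.sleep(1)  # pacing the posts makes the automation less obvious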

11:35 a.m.

Liberal

Raj Saini Liberal Kitchener Centre, ON

You've written that you were in favour of registering political bots, rather than banning them.

Do you think there should be some way of identifying whether an account is human or a bot, so that we can register bots with some identifier and people know whether a human or a bot is targeting them? Do you think that would help in any way?

11:35 a.m.

Assistant Professor, Department of Communication, University of Ottawa, As an Individual

Dr. Elizabeth Dubois

Yes.

I think there are two important pieces to this. One is that when I said we should register bots, I meant specifically the ones that are used to contact voters, in the same way the voter contact registry works. It isn't that you'd have to register every kind of communication a political party has with an individual, but the voter contact registry could apply to automated accounts that are targeting people to go to the wrong polling station.

11:35 a.m.

Liberal

Raj Saini Liberal Kitchener Centre, ON

If you have certain bots and they're creating misinformation, whether it be racist or threatening information or anything like that, do you think the social media companies have a mechanism in place right now to remove them as quickly as possible?

11:35 a.m.

Assistant Professor, Department of Communication, University of Ottawa, As an Individual

Dr. Elizabeth Dubois

Yes.

This is where it gets a little tricky. If Twitter, for example, wanted to eliminate all automation on its platform immediately, it could, but that wouldn't be very useful writ large, because people benefit a lot from certain kinds of bots.

Think of all the media organizations you see on Twitter. Almost every one of them uses automation to some extent on their Twitter accounts to get stories out on Twitter, Facebook and Instagram simultaneously. That is a form of a bot on Twitter, so I don't think eliminating all automation would be a good idea.

There's also the problem that a lot of accounts are now cyborg accounts. These accounts are automated sometimes, but sometimes a human intervenes and posts content themselves, literally by typing it out and pressing send.
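A minimal sketch of the benign newsroom automation Dr. Dubois describes: one story pushed to several platforms at once. Real platform SDKs differ, so the client class here is a hypothetical stand-in.

    class StubClient:
        """Hypothetical stand-in for a platform's posting API."""
        def __init__(self, platform):
            self.platform = platform

        def post(self, headline, url):
            print(f"[{self.platform}] {headline} {url}")

    clients = [StubClient("Twitter"), StubClient("Facebook"), StubClient("Instagram")]

    def publish_story(headline, url):
        # The same story goes out everywhere simultaneously: automation,
        # but of a kind users clearly benefit from.
        for client in clients:
            client.post(headline, url)

    publish_story("Committee hears testimony on voter data", "https://example.org/story")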

11:35 a.m.

Liberal

Raj Saini Liberal Kitchener Centre, ON

Ms. Bradshaw, you've written extensively on social media manipulation and you also mentioned algorithms.

Right now, we're seeing the weaponization of those algorithms. The design was to allow people to personalize their own content, but now they're being used to push disinformation. A solution that has been proposed by the social media companies is to have a separate algorithm to police the algorithms they're using. How feasible is that?

11:35 a.m.

Researcher, As an Individual

Samantha Bradshaw

Having the human element in reviewing and auditing algorithms I think is really important. We can't just sprinkle magic artificial intelligence dust on it to solve the problem.

Having this technology support human decisions is great, and that's where I see a lot of the benefit in having a second algorithm, but we still need humans to review this content at the end of the day. There are so many nuanced decisions that these algorithms can or cannot make, and a human making that final judgment is really important.
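A minimal sketch of that division of labour, with a toy heuristic standing in for the "second algorithm": the machine triages, and a human makes the final call. The watch list and threshold are purely illustrative assumptions.

    WATCH_WORDS = {"rigged", "fraud", "fake"}  # toy watch list

    def flag_score(text):
        # Hypothetical stand-in for a moderation model: the fraction
        # of words that appear on a watch list.
        words = text.lower().split()
        return sum(word in WATCH_WORDS for word in words) / max(len(words), 1)

    queue = [
        "The vote was rigged and the count is fake",
        "Polls open at 9 a.m. on election day",
    ]

    for post in queue:
        if flag_score(post) > 0.2:
            print(f"route to human reviewer: {post!r}")  # machine only triages
        else:
            print(f"no action: {post!r}")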

11:35 a.m.

Liberal

Raj Saini Liberal Kitchener Centre, ON

My final question is for you. If, in a political campaign, you have entities, whether third parties or political entities, designing an algorithm to target a certain specific type of voter, do you think those algorithms should be kept in a repository so that tomorrow, if there is any consequence, humans can analyze the algorithm to see whether it spread misinformation or was used in a negative way or, and I don't want to use the word "illegal", in an untoward way to target specific voters?

11:40 a.m.

Researcher, As an Individual

Samantha Bradshaw

Yes, I think that would make sense. There is always a danger of making algorithms too public, though, because as soon as they're really out in the open, there is a whole industry built on trying to break them. Think of search engine optimization. You don't want to make an algorithm so transparent that people can easily game it.

11:40 a.m.

Liberal

Raj Saini Liberal Kitchener Centre, ON

Professor Dubois, do you have any comments on that?

11:40 a.m.

Assistant Professor, Department of Communication, University of Ottawa, As an Individual

Dr. Elizabeth Dubois

Yes, I think the ability to game algorithms is a concern when our thinking is, okay, let's create an algorithm to solve this problem and then just make it available for everyone to see. That kind of transparency is important, but we also need things like published tests of how the algorithms were working. That can be a way to have audits and checks of those systems without necessarily opening the doors to people who want to go and break the algorithm by circumventing it.

A few of the things I said in my opening statement connect here. The idea of keeping a history of the decisions made by the people on the team who actually built the algorithm, and of learning what it was supposed to do, why, and how: those are the kinds of information that could help us solve the problem I think you're pointing to, in a way that doesn't incentivize people to go and just break everything.
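A minimal sketch of what a published test could look like: run the system under audit over matched test cases and release the aggregate gap, not the algorithm itself. The score() function is a hypothetical stand-in for the model being audited.

    from statistics import mean

    def score(profile):
        # Hypothetical ranking model under audit.
        return 0.5 + 0.1 * profile["engagement"] - 0.2 * profile["flagged"]

    # Matched test profiles that differ in only one attribute.
    group_a = [{"engagement": 3, "flagged": 0}, {"engagement": 2, "flagged": 0}]
    group_b = [{"engagement": 3, "flagged": 1}, {"engagement": 2, "flagged": 1}]

    gap = mean(score(p) for p in group_a) - mean(score(p) for p in group_b)
    print(f"published audit result: mean score gap of {gap:.2f} between groups")
    # Releasing results like this documents behaviour without handing
    # would-be gamers the algorithm itself.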

11:40 a.m.

Conservative

The Chair Conservative Bob Zimmer

Thank you.

Next up for seven minutes is Mr. Kent.

11:40 a.m.

Conservative

Peter Kent Conservative Thornhill, ON

Thank you, Chair.

Thank you to all for the insight, the opinions and the advice that you've provided to the committee today.

In the barely 24 to 36 hours since NAFTA became the USMCA, there hasn't been an awful lot of talk about intellectual property protection and the borderless digital universe. But a number of folks have spoken up, and there is some buzz, below the radar, that in fact the protection of Canadian intellectual property will now fall under the U.S. regime, that North America will more or less be under the U.S. system when it comes to the protection of intellectual property.

There is a suggestion that, in fact, the big tech companies will not be responsible for the content on their platforms, which would—it's been suggested, and I'd like comments from the three of you—mean that investigations of the Cambridge Analytica-AggregateIQ-Facebook scandal would not be possible, or that they would be unaccountable with regard to the content on their platforms and the way it's used.

Could we start with you, Mr. Pal?

11:40 a.m.

Prof. Michael Pal

Thank you very much. I think that's an important question.

As you say, it's only been 24 to 36 hours. I did get the chance to look through article 19 this morning, which I think is the relevant one on digital trade or digital policy. There are a couple of things that are relevant there.

One, it does seem to suggest that—and I'm forgetting the exact term that's used in article 19—basically Internet companies, social media platforms, will not be liable under the terms of the agreement for the content posted on them. Now, those things have to be implemented in domestic law and there is what the federal government can do, and what the provinces can do. There are all those kinds of issues there, but that is in article 19.

There is also a provision on source code, which talks about algorithms as well. Maybe I'll be corrected by my colleagues here, but I read that to include algorithms.

11:40 a.m.

Conservative

Peter Kent Conservative Thornhill, ON

I think there is a specific point saying that governments will not be able to examine source codes.

11:40 a.m.

Prof. Michael Pal

We couldn't have mandated algorithmic transparency, but there is an exception for criminal investigation. Would an investigation by Elections Canada or the Privacy Commissioner count as a criminal investigation? That's kind of an open question.

I have no definitive views about article 19. I only read it this morning, so I'm going to be lawyerly and cautious and say that I'm not sure of all the implications. It does seem to address, and potentially restrict in some ways, what I suggested: liability for social media platforms for repeated breaches of norms around elections. There might be USMCA implications under article 19 that would make that less viable as a policy proposal.

11:45 a.m.

Researcher, As an Individual

Samantha Bradshaw

As you mentioned, it's quite new. I haven't actually seen the document yet. I know that social media platforms have always fallen under safe harbour provisions that protect them from liability for the content that people post on their platforms. Back in the day, we considered that a positive thing, because we didn't want to hold Google responsible for someone else's uploaded content. Google Search would not function, or we wouldn't have it today, if Google were held responsible for organizing certain kinds of illegal information.

When it comes to actually holding Facebook accountable with regard to Cambridge Analytica, I'm also not quite sure what the implications of this new agreement would be, but I do think it's a really important question. I'm sorry that I don't have more insight.