Evidence of meeting #116 for Access to Information, Privacy and Ethics in the 42nd Parliament, 1st Session. (The original version is on Parliament's site, as are the minutes.)

A recording is available from Parliament.

Witnesses

Taylor Owen, Assistant Professor, Digital Media and Global Affairs, University of British Columbia, As an Individual
Fenwick McKelvey, Associate Professor, Communication Studies, Concordia University, As an Individual
Ben Scott, Director, Policy and Advocacy, Omidyar Network

11 a.m.

The Chair (Bob Zimmer, Conservative)

I will call the meeting to order. Welcome back, everyone.

This is the Standing Committee on Access to Information, Privacy and Ethics, meeting 116, on the breach of personal information involving Cambridge Analytica and Facebook.

We're going to start off with our teleconference witness. Welcome, Mr. Owen.

11 a.m.

Prof. Taylor Owen (Assistant Professor, Digital Media and Global Affairs, University of British Columbia, As an Individual)

Thanks for having me.

11 a.m.

The Chair (Bob Zimmer, Conservative)

Go ahead. You have 10 minutes.

11 a.m.

Prof. Taylor Owen

Thank you.

I want to leave you with one message in my opening remarks today. The vulnerabilities you're diving into, demonstrated through Cambridge Analytica's use of Facebook and its collection of data about American and Canadian citizens, are not, I believe, a case of individual bad actors who need to be countered, but rather a function of structural problems in our very digital infrastructure, problems that are creating weaknesses in our free and open society. These weaknesses, I think, are being exploited: by corrupting the quality of information in our increasingly digital public sphere, by magnifying divisions in our society, and by undermining our democratic institutions themselves. I want to talk about the structural elements of these problems by making four points over the next few minutes.

The first is that I think it's really important as a baseline to recognize that there has been a real evolution of our digital infrastructure, particularly of the Internet, over the past 30 years. In very broad sweeps—obviously, this is a much more detailed evolution—the first iteration of the Internet, web 1.0, really did give voice to a whole host of actors and individuals and groups who were excluded from our mainstream public discourse.

Web 2.0, the social web of the late 1990s and 2000s, connected people in really powerful and often democratizing ways, as we saw through the Arab Spring and a whole host of social movements that leveraged these technologies around the world in incredibly positive ways.

I now think that the Internet is something qualitatively different. The problems you're investigating are representative of this difference. I think we're in a third phase of its evolution, what I broadly call the platform era. I would argue that this current version of the Internet is largely controlled by a small number of global platform companies, and for many people in the world the Internet they experience is filtered via these platform companies. That's what I want to talk a little bit about today.

The second broad point I would make is that in this platform ecosystem, this platform Internet, there are two structural problems embedded in the infrastructure itself. The first is the way that platforms, and through them we ourselves, have been monetized: what's often called the attention economy, or surveillance capitalism.

I would argue that in this tightly controlled market for our attention, audiences can be microtargeted and behaviour can be nudged by anyone from anywhere for any reason. Our attention and our behavioural change are the products being sold in this digital economy.

At the same time that our behaviour is being affected or changed through microtargeting, platform algorithms prioritize entertainment, shock, and radicalization over reliable information, because engagement, how much we engage, whether positively or negatively, is the primary metric of value in this attention economy. This is embedded in the business model. This is why research shows, for example, that misinformation spreads further and faster than genuine news: it's embedded in the model.

The second structural problem, I think, which we're on the front end of and which is going to become a much bigger issue over the coming years, is that the character of what we experience in this digital platform ecosystem is increasingly determined by unaccountable artificial intelligence systems.

These AI systems are used to filter the most engaging content to us, to know what will rile us up and engage us, to determine what we see as an individual user and whether we are seen and heard inside these platforms. Increasingly, AI is used to create versions of reality itself. They're often called deep fakes or synthetic media. A whole new reality is shaped by AI and targeted specifically to us as individuals.

Those are what I see as the structural problems here.

The third point I want to make is that I think these structural problems are responsible for the negative externalities we're now seeing in our democracy, one of which is represented by the Cambridge Analytica case and the 2016 U.S. election, but I think these negative externalities extend far more broadly. Let me describe a few.

One is that the quality of the information in our digital public sphere is becoming increasingly unreliable. The platform web is increasingly a toxic place. Highly gendered and racialized speech is incentivized; political discourse has become more extreme and divisive, which you experience intimately; and speech has been weaponized, with a resulting censoring effect: voices are simply drowned out by abuse. At the same time that the digital public sphere is becoming more toxic, we're seeing the increasingly rapid collapse of the journalism industry, which provides weaker and weaker backstops against this flood of false and toxic content.

In my view, democracy requires a grounding of common and generally trustworthy information, and I fear that because of this structural problem this is slipping away from us.

The second negative externality I want to mention is fragmentation. On platforms, we're each given a customized diet of information designed to reinforce and harden our views. The result is that polarization and tribalism can very quickly emerge in this ecosystem. This is a problem for a wide range of reasons, but perhaps most worryingly because it's increasingly leading to actual physical manifestations of individual and collective violence.

A recent study found that in any German town where per-person Facebook use rose to one standard deviation above the national average, attacks on refugees increased by about 50%. Canada without a doubt lags on some of these trends and their social implications, which we've seen in other western democracies, but fragmentation based on unreliable and microtargeted information is sure to divide us on the issues that are most poignant in Canada now. Imagine climate change, indigenous rights, pipelines and immigration all being fuelled by this structural vulnerability.

The third negative externality, which I think is of acute interest right now in Canada, is the vulnerability of our elections themselves. I would argue that by using the very tools provided by the attention economy, foreign and domestic actors alike can powerfully shape the behaviour of voters. AI and data-driven microtargeting are incredibly powerful during elections, as we saw with the Cambridge Analytica case. Acute cyber-attacks and hacking are a vulnerability, as we saw with the Clinton email leaks or the Macron leaks, but interference can also be more subtle. I wouldn't want to focus too much on just these very acute public cases.

I can give you an example of a more subtle case. A recent study found that long before the 2016 U.S. election, Russian government-connected accounts created a host of fan pages on Facebook for prominent African-American figures. They did one for Beyoncé and one for Malcolm X. The goal was to build an organic community. They published fan content about Beyoncé to try to build the followers of that page. In the days before the election, they then weaponized that community and pushed content to them designed to suppress the African-American vote.

How do we deal with something like that? How do we even know that this is a foreign-sponsored fan page and that it will be weaponized in the days before the election? This gets at the real structural problems we're facing here.

In the final and fourth point I want to make, I want to offer a few reflections on the public policy solutions to this problem or the governance challenges that this presents.

The first point I would make about public policy here is that it's very clear that self-regulation has proven, and will continue to prove, insufficient for a problem of this nature. I would argue that the apt analogy is the lead-up to the financial crisis, when financial incentives were powerfully aligned against meaningful reform of the ecosystem. These are publicly traded and largely unregulated companies whose shareholders demand year-on-year growth.

This growth may or may not be aligned with the public interest, and that's where democracies come in: when largely unregulated monopolies produce negative externalities, governments engage to protect the collective good. I think that's where we are now.

I have a second point about public policy here. To me this is primarily a supply-side problem that requires a comprehensive policy approach. Many have argued that it's actually the users' fault, that it's a demand-side problem, that we're consuming and producing toxic content and therefore we should change consumer behaviour. I think that misses the structural aspect, and indeed, almost every major global commission or report that has looked at this issue has argued that a comprehensive policy approach is needed. There's no single silver bullet. It's about reforming how we regulate and engage with our digital economy writ large. This is going to involve—

11:10 a.m.

The Chair (Bob Zimmer, Conservative)

Mr. Owen, you're at just about 11 minutes.

11:10 a.m.

Prof. Taylor Owen

Okay. Sorry to be—

11:10 a.m.

The Chair (Bob Zimmer, Conservative)

We'll get back to you with some questions maybe, if you want to continue a little later.

11:10 a.m.

Prof. Taylor Owen

Absolutely. Let me just conclude here; Ben is going to talk about these policy proposals, which I agree with. They're going to involve immediate fixes such as ad transparency, new data rights regimes and regulatory changes to give Canadians rights over the data that's collected about them, and reform of our journalism space and the way we regulate it.

I'd be happy to talk about any of those afterward.

Thank you.

11:10 a.m.

The Chair (Bob Zimmer, Conservative)

Thank you, Mr. Owen.

Next we have Mr. McKelvey for 10 minutes.

11:10 a.m.

Prof. Fenwick McKelvey (Associate Professor, Communication Studies, Concordia University, As an Individual)

I'd like to begin by acknowledging that the land on which we gather is the traditional unceded territory of the Algonquin Anishinaabe people. Further, I'm on parental leave now, and I would like to thank my family for giving me the time to speak here today.

I hope my comments will be relevant to the committee and provide evidence to support its preliminary recommendations, which I largely support as well. I appreciate its willingness and dedication to keep pulling at a loose thread that unravels this tangled web of data, surveillance, campaigning and advertising. These issues have been a great preoccupation for me, bringing together previously separate research into Internet policy, digital political communication, and algorithmic governance.

I would like to focus my comments on three areas of investigation before the committee today. In many ways, they complement some of the findings and conclusions of Taylor Owen, such as the focus on online advertising, third party data brokers and analytics, and finally political parties. My comments highlight my concerns and potential policy remedies to these issues based on my own research. I hope the committee will also look to new ways to support more research in these areas, giving researchers better access to data under clear ethical guidelines.

First, online advertising is more than a political problem. More than anything, the Cambridge Analytica and Facebook scandal has exposed the public's unawareness, resignation or willed ignorance about the sophistication of online advertising. It might not tip the next election, but reforms to the sector will go a long way toward restoring public trust in the Internet writ large, speaking to the structural issues raised by the previous presenter.

Online advertising means a few things today. It includes the programmatic banner advertisements we see around every website. These ads account for a $12-billion industry in Canada, according to the Media Concentration Research Project, and Google and Facebook account for three-quarters of the revenue. However, there are new types of advertising. Native advertising, or sponsored content, blurs the line between advertising and editorial. With influencer marketing, informal brand ambassadors fill our social media feeds with their often unacknowledged endorsements. There's also spam and bot activity.

In general, I question the public benefit of all these forms of targeted advertising. In my mind, we have too little accountability and too much belief in data and targeting. These new kinds of advertising will present problems for political campaigns, which may turn to these grey markets, using influencers or “for rent” social media accounts to fake grassroots support. We must recognize the extent of this promotional content in our culture, take steps to be able to qualify it, and work to ensure proper disclosure and fair play by these third party advertisers. One tangible step might be to work with Elections Canada to clarify the placement cost criteria to ensure that the new types of advertising count in electoral spending.

In regard to programmatic advertising, we need to consider what the appropriate limits to data collection and targeting are. There's evidence in the political literature that microtargeting does not necessarily help campaigns better engage with voters. In my opinion, the current situation overstates the value of targeting data and omits the potential harms of over-collection. We can name a few of those risks. Conceivably, advertising profiles can be used as a proxy for protected categories like race and political belief. Targeted advertising is increasingly used to justify growing online and offline surveillance. Finally, all this data can be leaked or improperly handled, as we've seen time and time again.

Data protection is an important remedy. By limiting what can be collected and used for targeting, we can diminish the race to monetize ever more personal information for advertising. Without change, I fear a time when large social media companies compete against Internet service providers over how much data they can collect and turn into targeted advertising portfolios.

Second, AggregateIQ is part of a global technology industry. Canada, like many other western democracies, has witnessed political parties go digital to better run their campaigns. Many companies now sell services to help parties manage, analyze and use their data to, among other things, buy ads and gauge support.

For its proponents, technology-intensive campaigning gets out the vote. It also helps parties find the right supporters, be more responsible with their limited funds, and ultimately win. I do not dispute these claims, but it has become clear to me that the global scope of the industry today creates new regulatory challenges, particularly in ensuring that offshoring data analytics or digital services does not evade national spending rules or privacy laws.

These industries warrant greater scrutiny, particularly in how they move data across borders. Offshoring data analytics should not evade privacy laws. International companies should be mindful of how they transport models, particularly machine-learning models that may have been developed on data collected under loose privacy laws, and make sure those models do not find their way abroad.

I believe these issues can be addressed by adding enforcement powers to the Office of the Privacy Commissioner and by continuing to support its multi-jurisdictional enforcement work.

Third and finally, with regard to political parties, I was not surprised that AggregateIQ has had little uptake in Canada. This is not because there is an aversion to technology in politics but because parties already have their own solutions in place. The Conservative Party uses NationBuilder together with its proprietary database. The Liberal Party uses NGP VAN, which is affiliated with the U.S. Democratic Party. The NDP works with another Democratic-affiliated firm, Blue State Digital, and its own tool, Populus. I have to admit I'm surprised that no representatives from these political parties or from these companies have appeared before the committees investigating these matters of political data.

In general, political parties have much to do to become more accountable about their data habits. In my research I've been impressed by the professionalism of campaigners on all sides, and I believe these professionals will ultimately embrace new rules. I understand the reluctance to impose more regulation on already taxed organizations, but greater accountability for digital campaigning should benefit all parties.

I support the committee's recommendation for privacy laws to apply to political parties. I'd like to add one other reason.

In my own research I've found that lax rules have created real challenges for political campaigns. Data is a strategic resource for parties, and lax rules translate into real inequities. Incumbent parties have better access to data than new entrants. To compete, all parties have to constantly maintain their lists and collect more data, since they cannot rely on the data collected by Elections Canada. This leads to an overall concentration in the central party, which often becomes, in effect, the database, and to an overall logic of permanent campaigning.

Parties might be reluctant to accept privacy law, given the importance of digital fundraising. If we believe that parties should collect less data, then we may want to consider reinstating the per-vote subsidy, which would diminish the need for fundraising and its associated data collection.

Also in terms of data, most parties use some form of predictive analytics to examine the political data they have collected and make predictions about voter behaviour. Either the party or, more often, a consultant analyzes the data to calculate the probability that each voter will support the party and the probability that a voter can be persuaded to vote for the party. Parties use these scores to make important decisions, like whom to target and whom to encourage to vote. Predictive analytics exacerbates low voter turnout in Canada, allowing parties to continue to distance many voters from the electoral process. Parties should agree to audit their scoring of voters, and their other analytics, for potential race or gender biases. They should also make sure that decisions about which voters to contact and which voters to ignore are auditable and explainable.

Finally, my suggestions about reform to digital campaigning are based on my own experience alone.

Political parties ultimately need to work together on the rules of the game. Codes of conduct have long been recommended to improve Canadian politics, and I believe that now is the time to move toward drafting such a code. In many ways, when we're trying to deal with the consequences of foreign interference, we can only begin by looking to ourselves as a first step in rectifying those potential threats. Parties, however, need to be willing to take that first step.

I commend the committee for continuing this project and I hope my comments help support the recommendations for online advertising, data protections for political technology firms, and reforms to privacy and the activities of political parties.

Thank you very much.

11:20 a.m.

The Chair (Bob Zimmer, Conservative)

Thank you, Mr. McKelvey. Stay tuned. Those parties might show up one day.

Next up is Mr. Scott. Go ahead; you have 10 minutes.

11:20 a.m.

Dr. Ben Scott (Director, Policy and Advocacy, Omidyar Network)

Thank you, Mr. Chair.

What brings me to sit before you today is a tale of regret. I look south at the political and democratic disaster playing out in my own country with great distress and great humility. I was among those young, idealistic, tech-savvy staffers who went to join the Obama administration in the early days after he was elected.

It was a time when we had big ideas about open data, social media, and global digital markets for speech and commerce as liberatory, as a new tool of democratic soft power, and they were—we've benefited tremendously from those forces over the last decade—but it was a double-edged sword. We were not prepared for the way that technology proved instrumental in ushering in one of the darkest chapters in American political history. We didn't do enough.

We're not alone in this. We are now seeing related phenomena across the democratic world—in Britain, Germany, Italy, France, and many other places.

The politics of resentment that we're seeing in contemporary populism, mixed with the distorting power of the digital information market, is a toxic brew. You have rightly pointed this out in the examination you've conducted so far, as have the parallel examinations of this phenomenon in other legislatures.

My message to you today is a simple one: don't wait to see how it plays out in Canada. Act right now. It will happen here too. The only questions are how it will happen and whether the consequences will be effectively mitigated in the Canadian context.

What is to be done? The first thing I want to say is, don't count on the private sector to deal with this problem. Publicly traded monopolies do not self-regulate. If we didn't know that before, we've certainly learned it over the course of the last year and a half. It brings to mind a quote that I like from my favourite chronicler of monopoly capitalism from a century ago, Upton Sinclair. He said, “It is difficult to get a man to understand something when his salary depends on his not understanding it.”

The answer here is not going to be the market; the answer is going to be government using its tools to steer the market back in the direction of the public interest. We need a kind of digital charter for democracy, one that lays out a set of principles and comes in behind it with clear policies that begin to make the changes we need to protect the integrity of our democratic public sphere.

We need to start right away, but we need to expect that this will take time. There are no single solutions to this problem. It's going to be a combination of things, none of which are sufficient by themselves, and all of which are necessary. It's going to be a messy process, because no one thing will appear to be moving the needle and making the difference that we would all like to see. However, together these things can first contain the problem, then treat the symptoms, and ultimately begin to get at the root causes of the structural problems in the market, both on the supply side and the demand side.

We begin first with security. This is the simplest and most important piece of the puzzle. The combination of cyber-attack and disinformation campaigns that we have seen unleashed on elections in several different countries is a dire threat, and we have to treat it that way. We need to increase the cybersecurity applied to our democratic institutions, including not just election administration but also political parties and campaigns. They should be treated as critical infrastructure, in my view. We also need to be much better at coordinating the research, monitoring, and exposure of ongoing disinformation campaigns with security services, with outside research entities, and with the companies.

We're beginning to see a model developing in the U.S. that is worthy of examination and expansion, but let me be clear: Even if we solve the security problem, we're only eliminating a minor part of the problem. Most of the threats come from within, not from without. The most important thing in my mind about the foreign interventions we have seen across the world is that they took advantage of standard market-based tools. They were opportunistic amplifications of existing domestic political movements, and they were using tools that are perfectly well known and understood by commercial marketers across the digital world.

The second piece we can begin to deal with is illegal content. Again, it's not a huge part of the problem, but it's an important part. Citizens have a right to be protected from illegal content. There are categories of content that are illegal in the offline world; they should be illegal in the online world as well. These include hate speech, defamation, harassment, and incitement to violence.

All of these things can be removed on an accelerated timetable through a process that is subject to rigorous and regular judicial oversight and that includes an appeals process, so that we are not endangering freedom of expression when we begin to move into the space of removing illegal content. We can't cede that power to the platform companies, but we need their involvement in order to speed up the process.

Once we've dealt with the security issues and the illegal content issues, we get into the real meat of the problem: How do we mitigate the influence of disinformation campaigns that are homegrown, that begin to separate people from facts that help inform their judgments and that begin to polarize our society over time?

One thing we can do is cultivate the research community so that more time, energy, and money go into studying the problem. We simply don't know enough about how disinformation works and how the digital market works to shape political views and electoral outcomes. We need to develop ways to signal users to be wary and to be critical consumers of digital media.

Consider for a moment the average consumer who is accustomed to the traditional media environment. When you step into a news agent at an airport and look at the periodicals arrayed before you, you see the daily newspapers, and you see the political magazines and the sports, automotive, entertainment, and home and garden magazines. Depending on where you're standing, when you pick a periodical off the rack, you have a pre-set schema in your mind about what to expect.

In the digital environment, all of that is compressed into a single stream, and it looks the same. It's a Facebook newsfeed. It's a Twitter feed. It's a YouTube up-next list of videos. In that environment, all of the signals about source credibility and quality that we once had begin to attenuate. People will tell you that they read an outrageous thing the other day and that it has really shaped their views on an important matter, whether it's climate, immigration or economic policy. You ask them where they read that, and they say they read it on Facebook—but they didn't read it on Facebook. They read it through Facebook on some other source. What was the other source? They don't remember.

We've lost the normative structure that in the old media environment allowed us as citizens to make implicit judgments about source credibility and, when we're reading digital media, to engage in critical thinking. We need to begin to find ways to understand this problem better through the research community and to begin to address it through public education and digital literacy.

As well, there are many things we can do in the market with a regulatory intervention. We can ask the companies and compel them to be much more transparent in the way they operate. This starts with political ads.

There's no reason in the world why every citizen who sees a political ad shouldn't know exactly who bought it, how much they spent, and how many people they paid to reach. Most importantly, why did I as an individual voter get that message? Is it because of my gender, my age, my income? Is it because of where I live? Is it because my characteristics are similar to those of other people they're targeting? I should be able to know that, because when I know that, it allows me to engage in a much more critical view about why that ad came to me.

To me, transparency is the simplest and easiest way to regulate the companies to move in the right direction. It's something they're voluntarily doing, but only in some countries and only when they're getting public pressure to do it. In no case has there been law laid down to mandate it. I think that's an easy first step.

There are a variety of other things that I think we ought to engage in as well. These are longer-term structural issues. They include algorithmic accountability. We need to look at how algorithms work and how they impact social welfare. We need to look at data privacy; we need to reduce the amount of data that companies collect, and we need to restrict how they use it.

Also, we need to be looking at competition policy. We need to be looking at modernizing antitrust policy to put shackles on anti-competitive practice, to restrict mergers and acquisitions, and to ease access to market entry for new kinds of services that offer alternatives to the existing models whose externalities have led to such negative outcomes.

Finally, we need to focus on the long-term task of addressing public education. We need to help people help themselves by helping them to become stronger and more insightful media consumers.

That includes not only digital literacy but also investments in better and more independent media. We can't expect people to steer their way away from nonsense on the Internet if there isn't a large body of quality information and journalism available to them.

I can't predict where this combination of policies will lead, but I do think it's the right starting point, and I don't think we have a lot of time to lose. I'm encouraged and inspired by the work of this committee; it suggests that government is moving in the right direction.

Thank you for your attention. I look forward to the discussion.

11:30 a.m.

The Chair (Bob Zimmer, Conservative)

Thank you, Mr. Scott.

First up is Mr. Saini, for seven minutes.

11:30 a.m.

Raj Saini (Liberal, Kitchener Centre, ON)

Good morning to everybody. Thank you very much for coming here. Your opening statements, coming from three different perspectives, have given us a lot to think about.

I would like to start with you, Mr. Owen. You talked about negative externalities; you mentioned three of them, one being disinformation. In one of your recent articles, you also talked about how the Overton window has been upended. Given that, and speaking of disinformation and the public space, who then determines what is acceptable in public debate?

11:30 a.m.

Prof. Taylor Owen

I think we need to step back and look at who used to determine this. Up until the rise of the social web and the decline of legacy media that has paralleled it and is intimately related to it, we entrusted this window of acceptable discourse to a small number of legacy 20th century media institutions. This was itself a highly flawed system. It excluded a whole host of voices. It perpetuated an economic system, and arguably a political system, that benefited certain groups over others. In many ways it limited our discourse. We didn't hear from all the voices that we now have access to hearing.

When the social web emerged and new voices were given an audience, we found that our debate, our public sphere, was actually much more diverse, much more dynamic, and much more informative than what had been mediated by that legacy media infrastructure. The problem now, I would argue, is that the terms of this public debate are not being defined by the value of individual voices, the societal benefit of those voices, or even the desired audience for those voices. We have a new structure determining what's acceptable: the filtering mechanism of platforms, deciding what we see and whether we are seen.

If we were concerned about that previous filtering model—the editors of major newspapers, the broadcasters, the small group of people who were determining what was acceptable—then we should now be concerned about the parallel filtering point, which is the algorithms and the business models that are determining what we see.

11:35 a.m.

Raj Saini (Liberal, Kitchener Centre, ON)

As you know, information is sometimes conveyed by bots; there's human interaction and bot interaction. Should there be different standards, and should there be transparency so that we know, when we receive a message, whether it is coming from a bot or from a human source? Should there be a standard that allows us to differentiate that information in a transparent and clear way?

11:35 a.m.

Prof. Taylor Owen

I believe so, yes. This has been discussed and proposed in California, where the so-called Blade Runner law would force all automated accounts to self-identify as being automated. I think in this case, transparency is the solution. There are all sorts of potential positive uses of bots and automated tools in the social ecosystem, but as consumers, we should know whether we are being targeted by one, because, importantly, this will become a much bigger issue as we engage more and more with agents and artificial intelligence-driven entities in the digital space.

11:35 a.m.

Raj Saini (Liberal, Kitchener Centre, ON)

Okay.

Mr. Scott, I'd like to ask you a question about an article you wrote in The Atlantic about algorithms, which you mentioned in your opening remarks. As you know, certain algorithms are used to help us collect information much more efficiently, but it seems that algorithms are now being weaponized. One answer proposed by social media companies is that they should create algorithms to police the existing algorithms. Does that seem feasible to you?

11:35 a.m.

Dr. Ben Scott

This reminds me of the argument that the answer to gun violence is more guns on the streets. There is a certain logic to the idea that you could control misbehaving algorithms with policing algorithms, but to me the real answer is not more technology to patch the holes in the existing system; the real answer to this problem is oversight and transparency. We need to better understand how the algorithms are working, and we need to understand what the vulnerabilities are for weaponization.

In markets that have grown large and powerful and that have a strong impact on the public interest, such as the restaurant sector with its health and safety rules or pharmaceuticals with their third-party review, we have a long history of auditing these kinds of businesses, not to verify that they're misbehaving intentionally but to ensure that there aren't unintended consequences from the products they bring to market. I think we're ultimately heading toward a system of oversight and review of algorithms that can be weaponized, to ensure that we don't have strong negative effects.

11:35 a.m.

Raj Saini (Liberal, Kitchener Centre, ON)

You've said a couple of things, and it looks like I only have a minute.

One of the things you mentioned was that maybe we should limit the amount of data that is shared with social media companies. Another thing you've said is about education: that consumers should be better educated so that they can discriminate and differentiate between legitimate and illegitimate sources.

With the amount of information that's coming onto the Internet on a daily basis, how is it possible for somebody to be able to differentiate? What would that education piece look like? How can you educate the consumer to recognize legitimate or illegitimate information?

11:40 a.m.

Dr. Ben Scott

It is a substantial challenge, but we had the same debate with the rise of television when we went from three or four broadcast channels to 200 channels—that the wash of information would make it impossible to differentiate credibility and quality. Over time, people developed new schema for how to sort, categorize, and judge the quality and credibility of sources on television. The same thing can happen with the Internet.

I would also emphasize that you don't need to have a Ph.D. and do a dissertation on every source that comes in to evaluate what you think about it; you need to have some quick and easy ways to evaluate how credible you find something. Those things can be taught in civics classes. They can be taught relatively broadly and in a content-neutral way so that people are simply equipped with the skills to judge when and how they ought to apply more cognitive energy to evaluate the credibility or the quality of the source.

11:40 a.m.

The Chair (Bob Zimmer, Conservative)

Thank you, Mr. Scott and Mr. Saini.

Next up for seven minutes is Mr. Kent.

11:40 a.m.

Peter Kent (Conservative, Thornhill, ON)

Thank you, Chair.

Thank you all for appearing before us today. As my colleague said, you've given us three variations of issues to consider.

Mr. McKelvey, have you shared your insight and advice with the Privacy Commissioner or the Chief Electoral Officer?

11:40 a.m.

Prof. Fenwick McKelvey

I have not spoken with the Office of the Privacy Commissioner. There has been some contact with the Chief Electoral Officer. I understand there is an informal working group, but I wasn't able to attend the first meeting. Whenever I have the opportunity, I try to make myself available.

11:40 a.m.

Peter Kent (Conservative, Thornhill, ON)

You spoke of the urgency of action before the next Canadian federal election in October 2019. The Chief Electoral Officer has told the country, the House of Commons and the government that some of the legislation now before us comes too late to be enacted in time. Do you have any suggestions that could practically be put into effect before the election to minimize or counter some of the threats you've described?