Evidence of meeting #120 for Access to Information, Privacy and Ethics in the 42nd Parliament, 1st Session. (The original version is on Parliament’s site, as are the minutes.)

A recording is available from Parliament.

Also speaking

Claire Wardle, Harvard University, As an Individual
Ryan Black, Partner, Co-Chair of Information Technology Group, McMillan LLP, As an Individual
Pablo Jorge Tseng, Associate, McMillan LLP, As an Individual
Tristan Harris, Co-Founder and Executive Director, Center for Humane Technology
Vivian Krause, Researcher and Writer, As an Individual

12:20 p.m.

Liberal

Anita Vandenbeld Liberal Ottawa West—Nepean, ON

Mr. Harris.

12:20 p.m.

Co-Founder and Executive Director, Center for Humane Technology

Tristan Harris

One thing I would add is that the advertising business model is at the root of many of these problems.

One thing we really believe is that, if you ask people how much they've paid for their Facebook account recently, the answer is nothing, and yet they don't even realize how it is that Facebook is worth more than $500 billion. If you imagine something like a “we are the product act”, in which companies are forced to report transparently on how much each user, each cow, is worth to them when they milk them for both their data and their attention, this would generate two things.

One is a cultural understanding of the fact that people are the product for companies based on this business model. It also selects just for the companies generating these problems, because the companies that are mostly generating these problems are ones with advertising-supported engagement business models. Culturally, it would have an impact.

The second is that, economically, people would actually start to see that they're worth $120, and that their value goes up to $180 when they become new mothers. Having that transparency directly to users and directly to regulators, I think, is actually very important.

12:20 p.m.

Liberal

The Vice-Chair Liberal Nathaniel Erskine-Smith

Thank you very much.

For the next five minutes, we go to Mr. Kent.

12:20 p.m.

Conservative

Peter Kent Conservative Thornhill, ON

Thank you very much, Chair.

Just to respond briefly to Mr. Picard's quibble, I think that whenever a foreign organization moves foreign funds to interfere in the Canadian electoral process, through shell companies or confected Canadian companies that misrepresent the source of that income, the term “money laundering” is quite appropriate.

Mr. Harris, I'd like to come back to you. In a profile in The Atlantic magazine, you were described as “the closest thing Silicon Valley has to a conscience”. There has been an awful lot of discussion of the social responsibility of what one of our witnesses called the “data-opolies” with regard to the imbalance between the search for revenue and profit and growing the companies versus responsible maintenance and protection of individual users' privacy.

I'm just wondering what your thoughts are on whether the big data companies do, in fact, have a conscience and a responsibility and a willingness, a meaningful willingness, to respond to some of the things we've seen coming out of, principally, the Cambridge Analytica, Facebook, AggregateIQ scandal. We know, and we've been told many times, that it's only the tip of the iceberg in terms of the potential for gross invasion of individual users' privacy.

12:20 p.m.

Co-Founder and Executive Director, Center for Humane Technology

Tristan Harris

Yes, we have to look at their business models and at their past behaviour. It wasn't until the three major technology companies were hauled to Congress in November 2017 that we even got the honest numbers about how many people, for example, had been influenced in the U.S. elections. They had claimed it was only a few million people. Claire and I both know many researchers who did lots of late work until three in the morning, analyzing datasets and saying it had to be way more people than that. Again, we didn't get the honest number that more than 126 million Americans, 90% of the U.S. voting population, were affected until after we brought them to testify.

That's actually one of the key things that caused them to be honest. I say this because they're in a very tough spot. Their fiduciary responsibility is to their shareholders, and until there's an obvious notion that they will be threatened by not being honest, we need that public pressure.

There are different issues here, but when I was at Google I tried to raise the issue of addiction. It was not taken as seriously as I would have liked, which is why I left, and it wasn't until there was more public pressure on each of these topics that they actually started to move forward.

One last thing I will say is that we can look to the model of a fiduciary. We're very worried about privacy, but we just need to break it down. I want to hand over more information to my lawyer or doctor because, with more information, they can help me more. However, if I am going to do that, we have to be bound into a contract where I know for sure that they are a fiduciary to my interests. Right now, the entire business model of all the data companies is to take as much of that information as possible and then to enable some other third party to manipulate you.

Imagine a priest in a confession booth, except instead of listening carefully and compassionately and caring about keeping that information private, the only way the priest gets paid for listening to two billion people's confessions is when they allow third parties, even foreign state actors, to manipulate those people based on the information gathered in the confession booth. It's worse, because they have a supercomputer next to them calculating two billion people's confessions so when you walk in, they know the confessions you're going to make before you make them.

It's not that we don't want priests in confession booths; it's just that we don't want priests with the business model of basically having an adversarial interest manipulating your vulnerable information.

12:25 p.m.

Conservative

Peter Kent Conservative Thornhill, ON

We're told that Facebook is constructing a war room intended to prevent improper interference in American elections. One would think the mid-terms would be the first area that needs protection. It's not completed yet, I understand. Would you suggest that it would be advisable for Facebook to establish a war room in Canada to prevent the same sort of potential interference in Canadian elections?

12:25 p.m.

Co-Founder and Executive Director, Center for Humane Technology

Tristan Harris

Absolutely. It also speaks to the global nature of the problem, which is what I was trying to get at from the beginning. For all the issues we're talking about in western developed democracies with free press reporting on these topics, there are just hundreds of vulnerable countries, as Claire mentioned regarding Brazil, that have no such apparatus. Facebook is not going to spend the money to create war rooms for every single country.

Neither do they have the engineers who speak the languages. In India, there are 22 different languages. How many of Facebook's engineers speak those 22 languages? How many of them speak Sri Lankan or Burmese, in places where there are actually genocides emerging from the manipulation of their platform? There's also a dearth of civil society groups in those places doing enough work to cover those topics.

Yes, there should be a Facebook war room in Canada. Also, structurally speaking, they're editor-in-chief of two billion people's thoughts in the morning, so how do we start to scale that out and go from unmanageable levels to manageable levels of complexity? It's a mathematical thing.

12:25 p.m.

Liberal

The Vice-Chair Liberal Nathaniel Erskine-Smith

Thanks very much.

The last five minutes go to Mrs. Fortier, who's not here. Perhaps Mr. Saini would like to take that time.

12:25 p.m.

Liberal

Raj Saini Liberal Kitchener Centre, ON

I get the last five minutes? Okay.

Mr. Harris, I'd like to start with you, because you wrote something that I'd like some clarity on. You wrote in a couple of different places about the concept of hacking a human. Can you explain that in more detail?

12:25 p.m.

Co-Founder and Executive Director, Center for Humane Technology

Tristan Harris

The term “hacking” probably comes from Harari, who wrote the book Sapiens. There's this view that in a post-enlightenment culture the customer is always right, the voter knows best, and you should trust your heart and your feelings because they are truly your own. We're increasingly living in an age where we have people on one side of the screen and supercomputer AIs on the other side of the screen that know more about us than we know about ourselves. If you think about that situation, where you enter a room and you know more about the other person's mind than they know about their own mind, who wins?

Why does magic work? It works because there's an asymmetry where the magician knows something about the limits of your mind. They can hack your mind, because they know something that you don't know about your own mind. Any time that's true, in that asymmetric situation, the party that knows more will—quote, unquote—“win”.

We're enabling new forms of automated psychological influence—again, the fact that YouTube calculates what has caused two billion people to watch that next video—and we're just throwing that at new human beings every day. We say that if it works at getting you to watch the next video, then it must be good, because the customer is always right and the voter knows best. But that's not true. We're really wiring in the lizard brain and calculating what works on lizard brains, and then showing that back to people and creating a loop.

Artificial intelligence turns correlation into causation. It used to be a correlation that people who watch this also watch that, but AI can drive that into a causative loop. The problem is that we're creating a chaos loop, because if you take feedback loops and feed them into themselves, you get chaos as a result. That's what's happening across our social fabric by hacking humans and feeding them back into the loop.

12:30 p.m.

Liberal

Raj Saini Liberal Kitchener Centre, ON

You gave an example in one of your articles about YouTube, and you've mentioned it here also. I'm just going to tell you about something that happened to me.

Last week, I went to a grade 5 civics class and I was speaking with them. There was a Q and A after, and some of the students in grade 5, who are 10 years old, asked me what my favourite YouTube channel or video was. When I go on YouTube, I have an interest in TED Talks, or something politically related where you're watching a speech or something, but I'm also fascinated by how quickly the right side of the screen fills up with suggested topics.

If I'm watching that stuff and I don't have an awareness, because either I'm young or maybe not as knowledgeable, I'm technically being hacked. I'm being injected with information that I didn't seek. I might have tried to find something that I found of interest, through an article or an ad or something, and all of a sudden all these videos are appearing, which are furthering the original premise.

If you don't have the ability to differentiate between what is right and what is wrong, then technically that's a hack. But if you look at the amount of information that's being uploaded on any given day, how would...? You talked about regulating the information. How is it possible that YouTube can regulate that information when you have so much information being uploaded? What kind of advice could you give us as lawmakers? How would you even contemplate regulating that information?

12:30 p.m.

Co-Founder and Executive Director, Center for Humane Technology

Tristan Harris

This is why I said.... The advertising business model has incentivized them to have increasing automation and channels that are doing all this. They want to create an engagement box—it's a black box; they don't know what's inside it—where more users keep signing up, more videos keep getting uploaded, and more people keep watching videos. They want to see all three of those numbers going up and up.

It's a problem of exponential complexity that they can't possibly hire trillions of staff to look at and monitor and moderate the—I forget what the number is—I think billions of hours or something like that are uploaded now every day. They can't do it.

They need to be responsible for the recommendations, because if you print something in a newspaper and you reach 10 million people, there's some threshold at which you're responsible for influencing that many people. YouTube does not have to have the right-hand sidebar with recommendations. The world didn't have a problem before YouTube suddenly offered it. They did it only because the business model of maximizing engagement asked them to do it. If you deal with the business model problem, and then you say they're responsible for those things, you're making that business model more expensive.

I think of this very much like the difference between coal, or dirty-burning energy, and clean-burning energy.

Right now we have dirty-burning technology companies that use this perverse business model that pollutes the social fabric. Just as with coal, we need to make that more expensive, so you're paying for the externalities that show up on society's balance sheet, whether those are polarization, disinformation, epistemic pollution, mental health issues, loneliness or alienation. That has to be on the balance sheets of companies.

12:30 p.m.

Liberal

The Vice-Chair Liberal Nathaniel Erskine-Smith

Thanks very much.

With that, we have three minutes for Mr. Angus.

12:30 p.m.

NDP

Charlie Angus NDP Timmins—James Bay, ON

Thank you.

Ms. Wardle, I want to talk about the expanse and the changing nature of disinformation. My region, my constituency, is bigger than Great Britain, so one of the easiest ways to engage with my voters is through Facebook. In my isolated indigenous communities, Facebook is how everyone talks.

There are enormous strengths to it, but I started to see patterns on Facebook. For example, there was the Fukushima radiation map showing how much radiation was in the Pacific Ocean. It was a really horrific map. I saw it on Facebook. People were asking what I was going to do about it. I saw it again and again, and I saw people getting increasingly agitated. People were asking how come no newspaper was looking at it and why the media was suppressing it, and they were saying that Obama had ordered that this map not be talked about. I googled it. It's a fake. It didn't do a lot of damage, but it showed how fast this could move.

Then there was the burka ad of the woman in the grocery store. It was in America, but then it was in England, and then it was in Canada in the 2015 election. It was deeply anti-Muslim. People I knew who didn't know any Muslim people were writing to me, growing increasingly angry because they saw this horrific woman in a burka abusing the mother of a soldier. That also was a fake, but where did it come from?

Now we have Myanmar, where we're learning how the military set up the accounts to push a genocide. When we had Facebook here, they kind of shrugged and said, “Well, we admit we're not perfect.”

We're seeing an exponential weaponization of disinformation. The question is, as legislators, at what point do we need to step in? Also, at what point does Facebook need to be held more accountable so that this kind of disinformation doesn't go from just getting people angry in the morning when they get up to actually leading to violence, as we've seen in Myanmar?

12:35 p.m.

Harvard University, As an Individual

Dr. Claire Wardle

A big part of our focus ends up being on technology, but we also need to understand what this technology sits on top of, and if we don't understand how societies are terrified by these huge changes we're seeing, which we can map back to the financial crisis.... We're seeing huge global migration shifts, so people are worried about what that does to their communities. We're seeing the collapse of the welfare state. We're also seeing the rise of automation, so people are worried about their jobs.

You have all of that happening underneath, with technology on top of that, so what is successful in terms of disinformation campaigns is content that reaffirms people's world views or taps into those fears. The examples that you gave there are around fears.

Certainly, when we do work in places such as Nigeria, India, Sri Lanka and Myanmar, you have communities that are much newer to information literacy. If we look at WhatsApp messages in Nigeria, we see that they look like the sorts of spam emails that were circulating here in 2002, but to Tristan's point, in the last 20 years many people in western democracies have learned how to use heuristics and cues to make sense of this.

To your point, this isn't going anywhere, because it feeds into these human issues. What we do need is to put pressure on these companies to say that they should have moderators in these countries who actually speak the languages. They also need to understand what harm looks like. Facebook now says that if there's a post in Sri Lanka that is going to lead to immediate harm, to somebody walking out of their house and committing an act of violence, they will take that down. What we don't have as a society is the ability to say what harm looks like over a 10-year period, or what long-term impact memes full of dog whistles actually have.

I'm currently monitoring the mid-term elections in the U.S. All of the stuff we see every single day that we're putting into a database is stuff that it would be really difficult for Facebook to legislate around right now, because they would say, “Well, it's just misleading” and “It's what we do as humans”. What we don't know is what this will look like in 10 years' time when all of a sudden the polarization that we currently have is even worse and has been created by this drip-feed of content.

I'll go back to my point at the beginning and say that we have so little research on this. We need to be thinking about harm in those ways, but when we start thinking about content, we need to have access to these platforms so we can make sense of it.

Also, as a society, we need groups that involve preachers, ethicists, lawyers, activists, researchers and policy-makers, because what we're facing is actually the most difficult question we've ever faced, and instead we're asking, as Tristan says, young men in Silicon Valley to solve it or—no offence—politicians in separate countries to solve it. The challenge is that it's too complex for any one group to solve.

What we're looking at is essentially a brains trust. It's cracking a code. Whatever it is, we're not going to solve this quickly. We shouldn't be regulating quickly, but there's damage.... My worry is that in 20 years' time we'll look back at these kinds of evidence proceedings and say that we were sleepwalking into a car crash. I think we haven't got any sense of the long-term harm.

12:35 p.m.

NDP

Charlie Angus NDP Timmins—James Bay, ON

Thank you.

12:35 p.m.

Liberal

The Vice-Chair Liberal Nathaniel Erskine-Smith

Thanks very much.

We have just over 20 minutes left. I would propose that we do five minutes and see where we get.

12:35 p.m.

Conservative

Peter Kent Conservative Thornhill, ON

Sure.

12:35 p.m.

Liberal

The Vice-Chair Liberal Nathaniel Erskine-Smith

If you don't mind, I'll start, because I'm stuck in this chair and I don't get to ask as many questions as I'm used to.

I'll start with Ms. Wardle.

I take no great offence to your thinking that politicians can't quite figure it out, but we are where we are. We have to make recommendations to the government as to what they need to do. I should note that they have bolstered an act to require online platforms to create a registry of all digital ads placed by political parties or third parties during pre-writ and writ periods. That's to your point about a registry. We have already made a recommendation with respect to transparency of advertising, which I think is a critical piece in conjunction with that registry, so that there's real-time honesty in ads.

What other specific recommendation would you have? Put yourself in our shoes and say, “Government, specifically beyond the registry, beyond honest advertising, this is another piece that you should be pursuing, recognizing the limitations of empirical evidence.”

12:40 p.m.

Harvard University, As an Individual

Dr. Claire Wardle

I would also say that we need to support quality journalism. Newsrooms are part of this ecosystem. There are significant issues around local news deserts. If we don't recognize the connection between the collapse of local journalism and the fact that local communities are turning to Facebook as their only source of information, we have a problem.

I'll give a plug now. In Brazil, we've created a coalition of 24 major newsrooms that are working together in a way that newsrooms never do. They normally compete, but there's no reason to compete around disinformation. I have 24 newsrooms that work collaboratively every day to find, verify and write debunks on one central website. Their logos are next to each other to show the audience that, whatever their political perspective, this is a false piece of content. It's amplified through their own 24 channels: online sites, radio, television and social media.

12:40 p.m.

Liberal

The Vice-Chair Liberal Nathaniel Erskine-Smith

Thanks very much.

12:40 p.m.

Harvard University, As an Individual

Dr. Claire Wardle

I was going to say....

12:40 p.m.

Liberal

The Vice-Chair Liberal Nathaniel Erskine-Smith

I'm sorry, but I only have five minutes. I want to come back to you.

Mr. Harris, you talked about redesigning and realigning tech, given human limitations. You've talked a lot about the problem. Let's take the same question to you, about a specific policy prescription that you would want this committee to recommend to the government.

12:40 p.m.

Co-Founder and Executive Director, Center for Humane Technology

Tristan Harris

Yes, I think we should always be skeptical wherever governments would tell companies how to design their products. That's not the place of government. What I was mostly talking about in that earlier statement was that there are ways to design products that protect against the vulnerabilities of the human animal.

If we know that a slot-machine style of social validation which doses kids every 15 minutes has this addictive effect and generates fear of missing out, we could start by understanding that kids are vulnerable to that, and design to protect against that addiction.

If we know that colour rewards light up your brain, and notifications buzzing against human skin at a certain frequency and rate tend to stimulate anxiety in your nervous system, we can start by understanding that there's a different way to design and protect against that happening.

12:40 p.m.

Liberal

The Vice-Chair Liberal Nathaniel Erskine-Smith

How do we put that into a rule? How do we take those ideas and....