Evidence of meeting #120 for Access to Information, Privacy and Ethics in the 42nd Parliament, 1st Session. (The original version is on Parliament’s site, as are the minutes.)

A recording is available from Parliament.

Witnesses

Claire Wardle  Harvard University, As an Individual
Ryan Black  Partner, Co-Chair of Information Technology Group, McMillan LLP, As an Individual
Pablo Jorge Tseng  Associate, McMillan LLP, As an Individual
Tristan Harris  Co-Founder and Executive Director, Center for Humane Technology
Vivian Krause  Researcher and Writer, As an Individual

11:05 a.m.

Liberal

The Vice-Chair Liberal Nathaniel Erskine-Smith

We're going to start the meeting now. I'm Nathaniel Erskine-Smith. I'm filling in for Mr. Zimmer, who is our usual chair. I'll ask some questions, but I'll leave it to my Liberal colleagues to ask most of them.

We'll start with 10-minute statements from each witness here today and then move to rounds of questions.

We'll begin with Ms. Wardle from Harvard University.

11:05 a.m.

Dr. Claire Wardle Harvard University, As an Individual

Thank you very much for your invitation to appear today. My apologies for not being able to attend in person.

I am Dr. Claire Wardle. I'm a research fellow at the Shorenstein Center on Media, Politics and Public Policy at Harvard's Kennedy School.

I'm also the executive chair of First Draft. We are a non-profit dedicated to tackling the challenges associated with trust and truth in a digital age. We were founded three years ago specifically to help journalists learn how to verify content on the social web, particularly images and videos. That remains my research speciality.

In 2016, First Draft began focusing on mapping and researching the information ecosystem. We designed, developed and managed collaborative journalism projects in the U.S. with ProPublica, and then in 2017 ran projects in France, the U.K. and Germany during their elections. This year we're running significant projects in the U.S. around the mid-terms and the elections in Brazil, so we have a lot of on-the-ground experience of information disorder in multiple contexts.

I'm a stickler for definitions and have spent a good amount of time working on developing typologies, frameworks and glossaries. Last October, I co-authored a report with Hossein Derakhshan, a Canadian, which we entitled “Information Disorder”, a term we coined to describe the many varieties of problematic content, behaviours and practices we see in our information ecosystem.

In the report, we differentiated between misinformation, which is false content shared without any intention to cause harm; disinformation, which is false content shared deliberately to cause harm; and malinformation, which is a term we coined to describe genuine content shared deliberately to cause harm. Examples of malinformation would be leaked emails, revenge porn or an image that recirculates during a hurricane but is from a previous natural disaster. Our point is that the term “fake news” is not helpful and that in fact a lot of this content is not fake at all. It's how it's used that's problematic.

The report also underlined the need for us to recognize the emotional relationships we have with information. Journalists, researchers and policy-makers tend to assume a rational relationship. Too often we argue that if only there were more quality content we'd be okay, but humans seek out, consume, share and connect around emotions. Social media algorithms reflect this. We engage with content that makes us laugh, cry, angry or feel superior. That engagement means more people see the content and it moves along the path of virality.

Agents of disinformation understand that. They use our emotional susceptibilities to make us vulnerable. They write emotion-ridden headlines and link them to emotional images, knowing that it is these human responses that drive our information ecosystem now.

As a side note, in our election projects we use the tool CrowdTangle, which has since been acquired by Facebook, to search for potentially misleading or false posts. One of the best techniques we have is filtering our search results by Facebook's angry face reaction emoji. It is the best predictor for finding the content that we're looking for.

I have three challenges that I want to stress in this opening statement.

First, we need to understand how visuals work as vehicles for disinformation. Our brains are far more trusting of images, and it takes considerably less cognitive effort to analyze an image than a text article. Images also don't require a click-through. They sit already open in our feeds and, in most situations, on our smartphones, with which we have a particularly intimate relationship.

Second, we have an embarrassingly small body of empirical research on information disorder. Much of what we know has been carried out under experimental conditions with undergraduate students, and mostly U.S. undergraduate students. The challenges we face are significant and there's a rush to do something right now, but it's an incredibly dangerous situation when we have so little empirical evidence to base any particular interventions on. In order to study the impact of information disorder in a way such that we can really further our knowledge, we need access to data that only the technology companies have.

Third, the connection between disinformation and ad targeting is the most worrying aspect of the current landscape. While disinformation itself at the aggregate level might not seem persuasive or influential, targeting people based on their demographic profile, previous Internet browsing history and social graph could have the potential to do real damage, particularly in countries that have first-past-the-post electoral systems with a high number of close-fought constituencies. But again, I can't stress enough that we need more research. We simply don't know.

At this stage, however, I would like to focus specifically on disinformation connected to election integrity. This is a type of information disorder that the technology companies are prepared to take action around. Just yesterday, we saw Facebook announce that around the U.S. mid-terms, they will take down, not just de-rank, disinformation connected to election integrity.

If disinformation is designed to suppress the vote, they can take action, whereas with other forms of information disorder, without external context, they are less willing to take action, which right now is actually the right approach.

In 2016 in the U.S., visual posts were micro-targeted to minority communities, suggesting they could stay at home and vote for Hillary Clinton by SMS, using a short code. Of course, this was not possible. At a minimum, we need to prioritize these types of posts. At a time when the whole spectrum is so complex, that's the type of post we should be taking action on.

In terms of other types of promoted posts that can be microtargeted, there is a clear need for more action; however, the challenge of definitions returns. If any type of policy or even regulation applies simply to ads that mention a candidate or party name, we would be missing the engine of any disinformation campaign, which is messages designed to aggravate existing cleavages in society around ethnicity, religion, race, sexuality, gender and class, as well as specific social issues, whether that's abortion, gun control or tax cuts, for example.

When a candidate, party, activist or foreign disinformation agent can test thousands of versions of a particular message against endless slices of the population, based on the available data on them, the landscape of our elections looks very different very quickly. The marketing tools are designed for toothpaste manufacturers wanting to sell more tubes, or even for organizations like the UNHCR. I used to do that type of microtargeting when I was there, to reach people who were more likely to support refugees. When those mechanisms have been weaponized, what do we do? There is no easy solution to this challenge. Disinformation agents are using these companies exactly as they were designed to be used.

If you haven't read it already, I recommend you read a report just published by the U.K.'s leading fact-checking organization, Full Fact. They lay out their recommendations for online political advertising, calling for a central, open database of political ads, including their content, targeting, reach and spend. They stress that this database needs to be in machine-readable formats, and that it needs to be provided in real time.

The question remains how to define a political ad and whether we should try to publicly define it. Doing so allows agents of disinformation to find other ways to effectively disseminate their messages.

I look forward to taking your questions on what is an incredibly complex situation.

Thank you.

11:10 a.m.

Liberal

The Vice-Chair Liberal Nathaniel Erskine-Smith

Thank you very much, Dr. Wardle.

Next up are Mr. Black and Mr. Tseng. Both are lawyers at McMillan LLP.

October 16th, 2018 / 11:10 a.m.

Ryan Black Partner, Co-Chair of Information Technology Group, McMillan LLP, As an Individual

Thanks very much.

Good morning, members of the standing committee and fellow witnesses.

I am Ryan Black, partner and co-chair of information technology at McMillan LLP, a national law firm. With me is Pablo Tseng, my colleague in our business and intellectual property groups. We're practising lawyers in British Columbia, and we're honoured to be here today by video conference at the request of the standing committee.

11:10 a.m.

Pablo Jorge Tseng Associate, McMillan LLP, As an Individual

A few months ago, Ryan and I wrote an article entitled “What Can The Law Do About ‘Deepfake’?” The article provides an overview of the causes of action that may be taken against those who create and propagate deepfake material across the Internet, including on social media platforms.

Some of the causes of action include those related to defamation, violation of privacy, appropriation of personality, and the Criminal Code. However, the article did not focus on how deepfakes may influence elections, or how we as a nation can limit the effects of such videos on the outcome of an election.

We hope to use our time here today to expand on our thoughts on this very important topic. Our opening statement will be structured as follows: one, provide an overview of some other legal mechanisms that are available to combat deepfake videos in an election context; two, provide an overview of potential torts that are not yet recognized in Canada but have the potential to be; and three, discuss whether deepfakes really are the problem or just another example of a greater underlying problem in society.

11:10 a.m.

Partner, Co-Chair of Information Technology Group, McMillan LLP, As an Individual

Ryan Black

From the outset, we want to ensure that the appropriate focus is placed on the roles that users, platforms and bad actors themselves play in propagating social media content. Well-intended platforms can and will be misused, and deepfake videos will certainly be a tool used in that malfeasance.

The true bad actor, though, is the person creating the false media for the purpose of propagating it through psychological manipulation. As Dr. Wardle alluded to, the data is valuable, and platforms generally want technology to be used properly. They assist law enforcement agencies in upholding relevant laws, and they develop policies intended to protect election integrity. They also allow for the correction of misinformation and the sourcing of information.

A recent example in Canada is Facebook's Canadian election integrity policy, which is posted on the Internet.

I'll turn it over to Pablo to discuss the legal remedies relevant to today's discussion.

11:10 a.m.

Associate, McMillan LLP, As an Individual

Pablo Jorge Tseng

Focusing on elections, we wish to highlight that Parliament has been forward-thinking: in 2014, it introduced a provision into the Elections Act directed at the impersonation of certain kinds of people in the election process. While such provisions are not specifically targeted at deepfake videos, such videos may very well fall within the scope of this section.

In addition, there have been examples in our Canadian case law where social media platforms have been compelled through what courts call Norwich orders to assist in the investigation of a crime committed on that social media platform. For example, a social media platform may be compelled by a court to reveal the identities of anonymous users utilizing the services of that social media platform. That is to say that legal mechanisms already exist and, in our experience, law-abiding third parties subject to such orders generally comply with the terms thereof.

There is also room for our courts to expand on common law torts and for governments to codify new ones.

In general, laws exist in common law and statute form. It is important not to lose sight of the fact that governments have the ability to create law; that is, governments are free to come up with laws and pass them into force. Such laws will be upheld, assuming that they comply with certain criteria. Even if they do not necessarily comply with those criteria, there are certain override provisions that are available.

An example of codification of torts is British Columbia's Privacy Act, which essentially writes out in statute what the cause of action of appropriation of personality is.

Today we are flagging two other torts for discussion: unjust enrichment and the tort of false light.

With regard to unjust enrichment, such tort has generally been upheld in cases involving economic loss suffered by the claimant. However, it is reasonable to argue that the concept of losses should be expanded to cover other forms of losses that may not be quantifiable in dollars and cents.

Regarding the tort of false light, such a tort exists in some states of the United States. Canada, however, does not recognize this tort just yet, but the impact of deepfake videos may cause Canadian courts to rethink their position on it. Even if this tort does not come to exist in common law, it is well within the power of a provincial government to enact it into statute, thereby giving it existence in statutory form.

11:15 a.m.

Partner, Co-Chair of Information Technology Group, McMillan LLP, As an Individual

Ryan Black

In our article, we explore copyright, tort and even Criminal Code actions as potential yet sometimes imperfect remedies. We note that deepfake technology, impressive and game-changing though it no doubt is, is likely overkill for manipulating the public. One certainly would not need complex computer algorithms to fake a video of the sort that routinely serves as evidence or becomes newsworthy.

Think back to almost any security footage you have ever seen in a news story. The fidelity is hardly impressive: it's often grainy or poorly angled, and usually only vaguely resembles the individuals in question.

While deepfake might convincingly place a face or characteristics into a video, simply using angles, poor lighting, film grain, or other techniques can get the job done. In fact, we've seen recent examples of speech synthesis seeming more human-like by actually interjecting faults such as ums, ahs, or other pauses.

For an alternative example, a recent viral video purportedly showed a female law student pouring bleach onto men's crotches on the Russian subway to deter the microaggression of manspreading, or men sitting with their legs splayed too widely apart. The video triggered the expected positive and negative reactions across the political spectrum. Reports later emerged that the video was staged with the specific intent to promote a backlash against feminism and further social division in western countries. No AI technology was needed to fake the video, just some paid actors and a hot-button issue that pits people against each other. While political, it certainly didn't target Canadian elections in any conceivable, actual manner.

Deepfake videos do not present a unique problem but rather another aspect of a very old one. It is certainly worthy of consideration, but we do have two main concerns about any judicial or legislative response to deepfake videos.

The first is overspecification or overreaction. In the realm of photography, we have long lived with the threat that deepfakes now pose for video. I'm no visual effects wizard, but when I was an articling student at my law firm more than a decade ago, as part of our tradition of roasting partners at our holiday parties, I very convincingly manipulated a photograph of the rapper Eminem, replacing his face with that of one of our senior lawyers. Most knew it was a joke, but one person did ask me how I got the partner to pose. Thankfully, he did not feel that his reputation was greatly harmed, and I survived unscathed.

Yes, there will come a time when clear video is no longer sacred, and an AI-assisted representation of a person's likeness will be falsified and convincingly newsworthy. We've seen academic examples of this already, so legislators can and should ensure that existing remedies allow the state and victims to pursue malicious deepfake videos.

There are a number of remedies already available, many of which are discussed in our article. In a future of digitally manipulable video, the difference between a computer simulation and the filming of an actual physical person may be a matter of content creator preference, so it may, of course, be appropriate to review legal remedies, criminal offences and legislation to ensure that simulations are just as actionable as physical imaging.

Our second concern is that any court or government action may not focus on the breadth of responsibility, burdening or attacking the wrong target. Pursuing a civil remedy through the courts, particularly over the borderless Internet, will often be a heavy burden to place on the victim of a deepfake, whether a woman victimized by deepfake revenge pornography or a politician victimized by deepfake controversy. It's a laborious, slow and expensive process. Governments should not leave remedies entirely to the realm of victim-pursued litigation.

Canada does have experience in intervening in Internet activity, with varying degrees of success. Our privacy laws and spam laws have protected Canadians, and sometimes burdened platforms, but in the cybersecurity race among malicious actors, platforms and users, we can't lose sight of two key facts.

First, intermediaries, networks, social media providers and media outlets will always be attacked by malicious actors, just as a bank or a house will always be the target of thieves. It should not be forgotten that these platforms are also victims of the malicious falsehoods spread through them, just as much as those whose information is stolen or whose identities are falsified.

Second, as Dr. Wardle alluded to, the continued susceptibility of individuals to fall victim to fraud, fake news, or cyber-attack speaks to the fact that humans are inherently not always rational actors. More than artificial intelligence, it is the all too human intelligence with its confirmation bias, pattern-seeking heuristics, and other cognitive shortfalls and distortions that will perpetuate the spread of misinformation.

For those reasons, perhaps even more than rules or laws that ineffectively target anonymous or extraterritorial bad actors, or that unduly burden legitimate actors at Canadian borders, in our view governments' responses must dedicate sufficient resources to education, digital and news literacy, and skeptical thinking.

Thanks very much for having us.

11:20 a.m.

Liberal

The Vice-Chair Liberal Nathaniel Erskine-Smith

Thanks very much to you both.

Next up, from San Francisco, we have Tristan Harris, co-founder and executive director of the Center for Humane Technology.

11:20 a.m.

Tristan Harris Co-Founder and Executive Director, Center for Humane Technology

Thank you, Mr. Chair.

I am Tristan Harris. It's a pleasure to be with you today. My background was originally as a Google design ethicist, and before that I was a technology entrepreneur. I had a start-up company that was acquired by Google.

I want to mirror many of the comments that your other guests have made, but I also want to bring the perspective of how these products are designed in the first place. My friends in college started Instagram. Many of my friends worked at the early technology companies, and they actually have a similar basis.

What I want to avoid today is getting into the problem of playing whack-a-mole. There are literally trillions of pieces of content, bad actors, different kinds of misinformation, and deepfakes out there. These all present this kind of whack-a-mole game where we're going to constantly search for these things, and we're not going to be able to find them.

What I'd like to do today is offer a diagnosis that is really just my opinion about the centre of the problem, which is that we have to basically recognize the limits of human thinking and action. E.O. Wilson, the great sociobiologist, said that the real problem of humanity is that we have paleolithic emotions, medieval institutions and god-like technology. This basically describes the situation we are in.

Technology is overwriting the limits of the human animal. We have a limited ability to hold a certain amount of information in our heads at the same time. We have a limited ability to discern the truth. We rely on shortcuts, like what other people say is true, or the fact that a person I trust said that thing is true. We have a limited ability to discern what we believe to be truthful using our own eyes, ears and senses. If I can no longer trust my own eyes, ears and senses, then what can I trust in the realm of deepfakes?

Rather than getting distracted by hurricane Cambridge Analytica, hurricane addiction and hurricane deepfakes, what we really need to do is ask what the generator function is for all these hurricanes. The generator function is basically a misalignment: technology is not designed to accommodate the human animal, in almost an ergonomic sense.

It's just like ergonomics: I can hold a pair of scissors in my hands and use it a few times, and it will get the job done. However, if it's not geometrically aligned with the way the muscles work, it starts to stress the system. If it's highly geometrically misaligned, it causes enormous stress and can break the system.

Much like that, the human mind and our ability to make sense of the world and our emotions have a kind of ergonomic capacity. We have a situation where hundreds of millions of teenagers, for example, wake up in the morning, and the first thing they do when they turn off their alarm is turn their phone over. They are shown evidence of photo after photo after photo of their friends having fun without them. This is a totally new experience for 100 million teenage human animals who are waking up in the morning every day.

This is ergonomically breaking our capacity for getting an honest view of how much our friends are having fun. It's sort of a distortion. However, it's a distortion that starts to bend and break our normal notions and our normal social construction of reality. That's what's happening in each different dimension.

If you take a step back, the scale of influence that we're talking about is unique. This is a new form of psychological influence. Oftentimes what is brought up in this conversation is, “Well, we've always had media. We've always had propaganda. We've always had moral panic about how children use technology. We've always had moral panic about media.” What is distinctly new here? I want to offer four distinct new things that are unprecedented and new about this situation.

The first is the embeddedness and the scale. We have 2.2 billion human animals who are jacked into Facebook. That's about the number of followers of Christianity. We have 1.9 billion humans who are jacked into YouTube. That's about the number of followers of Islam. The average person checks his or her phone 80 times a day. Those are Apple's numbers, and they are conservative. Other numbers say that it's 150 times a day. From the moment people wake up in the morning and turn off their alarms to the moment they set their alarms and go to sleep, basically all these people are jacked in. The second you turn your phone over, thoughts start streaming into your mind that include, “I'm late for this meeting”, or “My friends are having fun without me.” All of these thoughts are generated by screens, and it's a form of psychological influence.

The first thing that's new here is the scale and the embeddedness, because unlike other forms of media, by checking these things all the time, they have really embedded themselves in our lives. They're much more like prosthetics than they are like devices that we use. That's the first characteristic.

The second characteristic that's different and new about this form of media and propaganda is the social construction of reality. Other forms of media, such as television and radio, did not give you a view of what each of your friends' lives was like or of what other people around you believed. You had advertising that showed you a theoretical couple walking on a theoretical beach in Mexico, but not your exact friends walking on that specific beach, and not the highlight reels of all these other people's lives. The ability to socially construct reality, especially the way we socially construct truth, because we look at what a lot of other people are retweeting, is another new feature of this form of psychological manipulation.

The third feature that's different is the aspect of artificial intelligence. These systems are increasingly designed to use AI to predict the perfect thing that will work on a person. They calculate the perfect thing to show you next. When you finish that YouTube video, and there's that autoplay countdown five, four, three, two, one, you just activated a supercomputer pointed at your brain. That supercomputer knows a lot more information about how your brain works than you do because it's seen two billion other human animals who have been watching this video before. It knows the perfect thing that got them to watch the next video was X, so it's going to show another video just like X to this other human animal. That's a new level of asymmetry, the self-optimizing AI systems.

The fourth distinct new thing here is personalization. These channels are personalized. Unlike TV, radio or propaganda of the past, we can actually provide two billion Truman Shows, or two billion personalized forms of manipulation.

My background in coming to these questions is that I studied at the Persuasive Technology Lab at Stanford, which taught engineering students essentially how to apply everything we knew about the fields of persuasion, Edward Bernays, clicker training for dogs, the way slot machines and casinos are designed, to basically figure out how you would use persuasion in technology if you wanted to influence people's attitudes, beliefs and behaviours. This was not a nefarious lab. The idea was could we use this for good? Could you help people go out and get the exercise they wanted, etc.?

Ultimately, in the last class at the Persuasive Technology Lab at Stanford, someone imagined a use case: what if, in the future, you had a perfect profile of what would manipulate the unique features, the unique vulnerabilities, of the human being sitting in front of you? For example, the person may respond well to appeals to authority, so that a summons from the Canadian government would be particularly persuasive to his or her specific mind because the person really falls for authority and names like Harvard or the Canadian government, or the person may be really susceptible to the fact that all of his or her friends, or a certain pocket of friends, really believed something. By knowing people's specific vulnerabilities, you could tune persuasive messages in the future to perfectly manipulate the person sitting in front of you.

This was presented by one of the groups in the last session of my persuasive technology class. It was on the future of the ethics of persuasive technology, and it horrified me. That hypothetical experiment is basically what we live inside of every single day. It's also what was more popularly packaged up in the Cambridge Analytica story, where, by having the unique personality characteristics of the person you're influencing, you could perfectly target political messaging.

If you zoom out, it's really all about the same thing, which is that the human mind, the human animal is fundamentally vulnerable, and there are limits to our capacity. We have a choice. We either redesign and realign the way the technology works to accommodate the limits of human sense making and human choice making or we do not.

As a former magician, I can tell you that these limits are definitely real. What I hope to accomplish in the meeting today is to make the case that we have to bring technology back inside those limits. That's what we work on with our non-profit group, the Center for Humane Technology.

11:30 a.m.

Liberal

The Vice-Chair Liberal Nathaniel Erskine-Smith

Thank you very much, Mr. Harris.

As our last witness, we have Ms. Krause, researcher and writer.

11:30 a.m.

Vivian Krause Researcher and Writer, As an Individual

Good morning, Mr. Chairman. It's a privilege to appear before your committee. Thank you for the opportunity.

My name is Vivian Krause. I'm a Canadian writer, and I have done extensive research on the funding of environmental and election activism. My understanding is that I have been asked to speak to you today on the topic of election integrity, specifically about issues related to social media.

Based on my research, Mr. Chairman, it is clear to me that the integrity of our 2015 federal election was compromised by outside interests. Furthermore, our federal election was compromised because the charities directorate at the CRA is failing to enforce the Income Tax Act with regard to the law that all charities must operate for purposes that are exclusively charitable.

I'll get to the CRA in a minute, but first I'd like to speak briefly about the non-Canadian organizations that intervened in the 2015 election and why. As evidence, Mr. Chairman, I would ask your committee to please take a look at the 2015 annual report of an American organization called the Online Progressive Engagement Network, which goes by the acronym OPEN. This is an organization based in Oakland, California. I have provided a copy to the clerk. In the annual report the executive director of OPEN writes that his organization based in California ended the year 2015 with “a Canadian campaign that moved the needle during the national election, contributing greatly to the ousting of the Conservative Harper government.”

Who is OPEN, and how did it involve itself in the 2015 federal election? OPEN is a project of the strategic incubation program of an organization called the Citizen Engagement Laboratory, CEL. The Citizen Engagement Laboratory has referred to itself as the people behind the people. It says on its website that it is dedicated to providing best-in-class technology, finance, operations, fundraising and strategic support.

What does OPEN do exactly? According to OPEN, it provides its member organizations with financial management, protocols, and what it calls surge capacity in the early days of their development. OPEN helps “insights, expertise and collaboration flow seamlessly” across borders, adding that this helps new organizations to “launch and thrive in record time”.

Indeed, that is precisely what Leadnow did in the 2015 federal election. As part of his job description for OPEN, the executive director says he was employed to “advise organizations on every stage of the campaign arc: from big picture strategy to messaging to picking the hot moments”.

OPEN is funded, at least partially, by the Rockefeller Brothers Fund based in New York. Tax returns and other documents, which I have also provided to the clerk, state that since 2013 the Rockefeller Brothers Fund has paid at least $257,000 to OPEN. In its literature, OPEN describes itself as a B2B organization with “a very low public profile”. It says this is intentional, as the political implications of an international association can be sensitive in some of the countries in which it works. In his Facebook profile, the executive director of OPEN says of himself that he can see the Golden Gate from one house (in other words, from San Francisco) and the Washington Monument from the other (in other words, from Washington, D.C.), and he adds that he spent a lot of time interloping in the affairs of foreign nations.

What did OPEN do exactly in the 2015 federal election? OPEN helped to launch Leadnow, a Vancouver-based organization. We know this because OPEN's executive director tweeted about how he came to Canada in 2012, stayed at a farmhouse near Toronto and worked with Leadnow. Other documents also refer to OPEN's role in launching and guiding Leadnow.

We know for sure that Leadnow was involved with OPEN because there's a photo of Leadnow staff in New York attending an OPEN meeting with the Rockefeller Brothers Fund in 2012. Another photo shows Leadnow at an OPEN meeting in Cambridge, England, and there is a photo of Leadnow staff in Australia in January 2016, shortly after the federal election, winning an award from OPEN, an American organization, for helping to defeat the Conservative Party of Canada.

Leadnow claims credit for helping to defeat 26 Conservative incumbents. That's a stretch, I would guess, but in a few ridings I think it stands to reason that Leadnow may have had an impact on the vote.

For example, in Winnipeg's Elmwood—Transcona riding, where Leadnow had full-time staff, the Conservative incumbent lost by only 61 votes. Leadnow has presented itself as a thoroughly Canadian youth-led organization, the brainchild of two university students, but as we now know, that is not the whole story.

I think it is important to note that this Rockefeller-backed effort to topple the Canadian government did not emerge out of thin air. This effort to influence Canada's federal election was part and parcel of another Rockefeller-funded campaign called the tar sands campaign, which began in 2008, 10 years ago. Indeed, the tar sands campaign itself has also taken credit in writing for helping to defeat the federal government in 2015.

For many years, the strategy of the tar sands campaign was not entirely clear, but now it is, because the individual who wrote the original strategy and has been leading the campaign for more than a decade has written, “From the very beginning, the campaign strategy was to land-lock the tar sands so their crude could not reach the international market where it could fetch a high price per barrel.”

Now, turning to the CRA, I'll be brief. As an example of what I regret to say I think is a failure on the part of the charities directorate to enforce the Income Tax Act, I referred the committee to three charities. These are the DI Foundation, the Salal Foundation, and Tides Canada Foundation. As I see it, the DI Foundation and the Salal Foundation are shell charities that are used to Canadianize funds and put distance between Tides Canada Foundation and the Dogwood initiative. The DI Foundation, a registered charity, has done absolutely nothing but channel funds from Tides Canada Foundation to the Dogwood initiative, which is one of the most politically active organizations in our country.

In the 2015 federal election, the Dogwood initiative was a registered third party, and it reported, for example, that it received $19,000 from Google. The Dogwood initiative is also one of the main organizations in the tar sands campaign, as it received more than $1 million from the American Tides Foundation in San Francisco. One of its largest funders, in fact, I believe its single largest funder, is Google.

According to U.S. tax returns for 2016, Google paid Tides $69 million. The Tides Foundation in turn is one of the key intermediary organizations in the tar sands campaign, and has made more than 400 payments by cheques and wire transfers to organizations involved in the campaign to landlock Canadian crude and keep it out of international markets.

Mr. Chairman, in conclusion, I think it's important to note that the interference in the 2015 federal election was done with a purpose. It was done as part of a campaign to landlock one of our most important national exports. I hope that my remarks have given you a glimpse of some of the players that were involved, the magnitude of the resources at their disposal, and perhaps also some actionable insights about what your committee could do to better protect the integrity of our elections in the future.

Thank you very much.

11:35 a.m.

Liberal

The Vice-Chair Liberal Nathaniel Erskine-Smith

Thanks very much for that presentation.

We're going to go to seven-minute rounds. We have about an hour and 20 minutes, so we'll get one full round in, and then we'll have some time for additional questions.

The first seven minutes go to Mr. Baylis.

11:40 a.m.

Liberal

Frank Baylis Liberal Pierrefonds—Dollard, QC

Thank you.

I'll start with you, Ms. Wardle. What I'd like to do first of all is put some nomenclature around all the different things that are going on. You've used “misinformation,” “disinformation,” and “malinformation”. Mr. Black and Mr. Tseng have used “deepfakes”, deepfake videos. Do they fit into one of your three categories?

11:40 a.m.

Harvard University, As an Individual

Dr. Claire Wardle

Yes, I would argue that deepfakes are an example of false information disseminated to cause harm, so that would be disinformation. Misinformation might be that my mom sees that deepfake later and she reshares that. She doesn't understand that it's false. My mom's not trying to cause harm. These things can shift as they move through the ecosystem.

11:40 a.m.

Liberal

Frank Baylis Liberal Pierrefonds—Dollard, QC

What is the difference between disinformation and malinformation then?

11:40 a.m.

Harvard University, As an Individual

Dr. Claire Wardle

Regarding malinformation, we talk a lot about fabricated or false content, but there is also a way to use genuine content to cause harm. For example, leaking emails that were previously private and making them public might be a form of malinformation. There is a form of whistle-blowing leak that is done for the public good; malinformation is leaking information to cause harm.

11:40 a.m.

Liberal

Frank Baylis Liberal Pierrefonds—Dollard, QC

Like “mal” in the sense of “malicious”? Is that what you mean by “mal”?

11:40 a.m.

Liberal

Frank Baylis Liberal Pierrefonds—Dollard, QC

There's disinformation, misinformation, and malicious information, and malinformation is actually true, but it's used to distort or contort.

11:40 a.m.

Liberal

Frank Baylis Liberal Pierrefonds—Dollard, QC

To you, deepfakes would be disinformation.

11:40 a.m.

Liberal

Frank Baylis Liberal Pierrefonds—Dollard, QC

Mr. Tseng and Mr. Black, would that go along with how you see this concern about deepfakes?

11:40 a.m.

Partner, Co-Chair of Information Technology Group, McMillan LLP, As an Individual

Ryan Black

Largely, it does. I do agree that it's definitely a form of false information, but to attribute malice to it.... Some deepfakes are done for parody or for humour. There will almost certainly be a Hollywood version of deepfakes used to transplant actors' faces. There will be legitimate uses of deepfake, but in the news sphere or in the social media sphere, there certainly is a vulnerability that it would be used for malicious purposes. I tend to agree that it's definitely a form of falsification, just like a tricky camera angle or an edit could be disinformation as well.