Evidence of meeting #135 of the House of Commons Standing Committee on Access to Information, Privacy and Ethics, 44th Parliament, 1st Session. The official version, the minutes, and a recording are available on Parliament's site.

Witnesses

Jeanette Patell  Director, Government Affairs and Public Policy, Canada, Google Canada
Rachel Curran  Head of Public Policy, Canada, Meta Platforms Inc.
Lindsay Hundley  Global Threat Intelligence Lead, Meta Platforms Inc.
Steve de Eyre  Director, Public Policy and Government Affairs, TikTok Canada
Wifredo Fernández  Head of Government Affairs, United States of America and Canada, X Corporation
Justin Erlich  Global Head of Policy Development, TikTok

3:45 p.m.

Conservative

The Chair Conservative John Brassard

I call this meeting to order.

Good afternoon, everyone.

Welcome, everyone, to meeting 135 of the House of Commons Standing Committee on Access to Information, Privacy and Ethics.

Pursuant to Standing Order 108(3)(h) and the motion adopted by the committee on Tuesday, February 13, 2024, the committee is resuming its study of the impact of disinformation and misinformation on the work of parliamentarians.

I'd like to welcome today's witnesses.

From Google Canada we have Shane Huntley, senior director, threat analysis group, who's joining us by video conference, and Jeanette Patell, who is the director of government affairs and public policy, Canada.

From Meta Platforms Inc., Rachel Curran is here. She is the head of public policy for Canada. We also have Lindsay Hundley, the global threat intelligence lead, who is appearing by video conference.

From TikTok, we have Steve de Eyre, director of public policy and government affairs for Canada, and Justin Erlich, who is the global head of policy development. They are appearing by video conference.

Also, from X Corporation, we have Wifredo Fernández, who is the head of government affairs, United States of America and Canada.

I welcome you all to the committee for this very important study. As you know, you all have up to five minutes to address the committee.

I will start with Mr. Huntley. Mr. Huntley is online. You can go ahead, sir. You have five minutes to address the committee.

Jeanette Patell Director, Government Affairs and Public Policy, Canada, Google Canada

Hi there. I'll be addressing the committee on behalf of Google Canada.

3:45 p.m.

Conservative

The Chair Conservative John Brassard

I apologize.

Go ahead. Thank you.

3:45 p.m.

Director, Government Affairs and Public Policy, Canada, Google Canada

Jeanette Patell

Thank you very much, Mr. Chair.

Members of the committee, my name is Jeanette Patell. I'm responsible for government affairs and public policy at Google in Canada.

I'm pleased to be joined remotely today by my colleague Shane Huntley, a senior director of Google's threat intelligence group.

Earlier this year, as part of our ongoing commitment to protect elections, Google created the Google threat intelligence group, which brings together the industry-leading work of our threat analysis group and the Mandiant intelligence division of Google Cloud.

Google threat intelligence helps identify, monitor and tackle threats ranging from coordinated influence operations to cyber-espionage campaigns across the Internet. On any given day, TAG, the threat analysis group, tracks and works to disrupt more than 270 government-backed attacker groups from more than 50 countries. It publishes its findings each quarter. Mandiant similarly shares its findings on a regular basis, and has published more than 50 blogs to date this year alone, analyzing threats from Russia, China, Iran, North Korea and the criminal underground. We have shared some of our recent reports with this committee, and Shane will be happy to answer your questions about these ongoing efforts.

Google's mission is to organize the world's information and make it universally accessible and useful. We recognize this is especially important when it comes to our democratic institutions and processes. We take seriously the importance of protecting free expression and access to a range of viewpoints. We recognize the importance of enabling the people who use our services to speak freely about the political issues most important to them.

When it comes to the integrity and security of elections, our work is focused on three key areas. First and foremost is continuing to help people find helpful information from trusted sources through our products, which are strengthened through a variety of proactive initiatives, partnerships and responsible safeguards. Beyond designing our systems to return high-quality information, we also build information literacy features into Google Search that help people evaluate and verify information, whether it's something they saw on social media or heard in conversations with family or friends.

For example, our About This Image feature in Google Search helps people assess the credibility and context of images they see online by identifying an image's history and how it has been used and described on other web pages, as well as identifying similar images. We also continue to invest in state-of-the-art capabilities to identify AI-generated content. We have launched SynthID, an industry-leading tool that watermarks and identifies AI-generated content in text, audio, video and images. On YouTube, when creators upload content, we now require them to indicate whether it contains altered or synthetic materials that appear realistic, which we then label appropriately.

We will soon begin to use C2PA's Content Credentials, a new form of tamper-evident metadata, to identify the provenance of content across Google Ads, Google Search and YouTube and to help our users identify AI-generated material.

When it comes to our own generative AI tools, out of an abundance of caution we're applying restrictions on certain election-related queries on Gemini and connecting users directly to Google Search for links to the latest and most accurate information.

The second area of focus is working to equip high-risk entities, like campaigns and elected officials, with extra layers of protection. Our advanced protection program and Project Shield are free services that leverage our strongest set of cyber protections for high-risk individuals and entities, including elected officials, candidates, campaign workers and journalists.

Finally, we focus on safeguarding our own platforms from abuse by actively monitoring and staying ahead of abuse trends through the enforcement of our long-standing policies regarding content that could undermine democratic processes.

Maintaining and enforcing responsible policies at scale is a critical part of how we protect the integrity of democratic processes around the world. That's why we've long invested in cutting-edge capabilities, strengthened our policies and introduced new tools to address threats to election integrity. At the same time, we continue to take steps to prevent the misuse of our tools and platforms, particularly attempts by foreign state actors to undermine democratic elections.

The Google threat intelligence teams, including the threat analysis group founded by my colleague Shane Huntley, are central to this work. They often receive and share important information about malicious activity with national security agencies and local law enforcement, as well as our industry peers, so that they can investigate and take appropriate action.

Maintaining the integrity of our democratic processes and institutions is a shared challenge. Google, our users, industry, law enforcement and civil society all have important roles to play, and we are deeply committed to doing our part to keep the digital ecosystem safe and reliable.

We look forward to answering your questions and continuing our engagement with this committee as you study these important questions.

The Chair Conservative John Brassard

Thank you, Ms. Patell.

Ms. Curran, we're going to go to you for five minutes, please.

Rachel Curran Head of Public Policy, Canada, Meta Platforms Inc.

Thank you, Mr. Chair.

Lindsay will speak on behalf of Meta Platforms.

Dr. Lindsay Hundley Global Threat Intelligence Lead, Meta Platforms Inc.

Thank you for the opportunity to appear before you today.

My name is Dr. Lindsay Hundley, and I am the global threat intelligence lead at Meta. My work is focused on producing intelligence to identify, disrupt and deter adversarial threats on our platforms. I've worked to counter these threats at Meta for the past three years, and my work at the company draws on over 10 years of experience as a researcher focused on issues related to foreign interference, including in my doctoral work at Stanford University and during research fellowships at both Stanford University and Harvard Kennedy School.

I'm joined today by Rachel Curran, the head of public policy for Canada.

At Meta, we work hard to identify and counter foreign adversarial threats, including hacking and cyber-espionage campaigns as well as influence operations—what we call coordinated inauthentic behaviour, or CIB. Meta defines CIB as any coordinated effort to manipulate public debate for a strategic goal in which fake accounts are central to the operation. CIB occurs when users coordinate with one another and use fake accounts to mislead others about who they are and what they are doing.

At Meta, we believe that authenticity is a cornerstone of our community. Our community standards prohibit inauthentic behaviour, including by users who seek to misrepresent themselves, use fake accounts or artificially boost the popularity of content. This policy is intended to protect the security of user accounts and our services and create a space where people can trust the people and communities that they interact with on our platforms.

We also know that threat actors are working to interfere with and manipulate public debate. They try to exploit societal divisions, promote fraud, influence elections and target authentic social engagement. Stopping these bad actors is one of our highest priorities, and that is why we've invested significantly in people and technology to combat inauthentic behaviour at scale.

The security teams at Meta have developed policies, automated detection tools and enforcement frameworks to tackle deceptive campaigns, both foreign and domestic. These investments in technology have enabled us to stop millions of attempts to create fake accounts every day and to detect and remove millions more, often within minutes after creation. Just this year, Meta has disabled nearly two billion fake accounts, and the vast majority, over 99%, were identified proactively.

Our strategy to counter these adversarial threats has three main components. First, there are expert-led investigations to uncover the most sophisticated operations. Second, there is public disclosure and information sharing to enable cross-societal defences. Third, there are product and engineering efforts to take the insights derived from our investigations and turn them into more effective, scaled and automated detection and enforcement.

A key component of this strategy is our public quarterly threat reports. Since we began this work, we've taken down and disclosed more than 200 covert influence operations from 68 countries that operated in 40 languages, from Amharic to Urdu to Russian to Chinese. Sharing this information has enabled our teams, investigative journalists, government officials and industry peers to better understand and expose Internet-wide security risks, including ahead of critical elections.

We've also shared detailed technical indicators linked to these networks in a public-facing repository hosted on GitHub, which contains more than 7,000 indicators of influence operations activity across the Internet.

Before I close, I'd like to touch on a few trends that we're monitoring in the global threat landscape.

To start, Russia, Iran and China remain the top three sources of foreign interference networks globally. We have removed nearly 40 operations from Russia that target audiences around the world, including four new operations in just this past quarter. Russian-origin operations have become overwhelmingly one-sided over the past two years, pushing narratives that favour those who are less supportive of Ukraine.

Likewise, China-origin operations have evolved significantly in recent years to target broader, more global audiences, including in languages other than Chinese. These operations have continued to diversify their tactics, including targeting critics of the Chinese government, attempting to co-opt authentic individuals and using AI-generated news readers in an attempt to make fictitious news outlets look more legitimate.

Finally, we've seen threat actors increasingly decentralize their operations to withstand disruptions from any singular platform. We've seen them outsource their deceptive campaigns increasingly to private firms. We are also seeing them leverage generative AI technologies to produce higher volumes of original content at scale, though their abuse of these technologies has not impeded our ability to detect and remove these operations.

I would be happy to discuss any of these trends in more detail.

I want to close by saying that countering foreign influence operations is a whole-of-society effort, which is why we engage with our industry peers, independent researchers, journalists, government and law enforcement.

Thank you so much for your focus on this work. We look forward to answering your questions.

3:55 p.m.

Conservative

The Chair Conservative John Brassard

Thank you, Ms. Hundley.

Mr. de Eyre, you have up to five minutes to address the committee. Go ahead, sir.

Steve de Eyre Director, Public Policy and Government Affairs, TikTok Canada

Good afternoon, Mr. Chair and committee members. My name is Steve de Eyre. I'm the director of public policy and government affairs for TikTok Canada. I'm joined today by my colleague Justin Erlich, the global head of policy development for TikTok's trust and safety team. He's joining virtually from California.

Thank you for the invitation to return to your committee today to speak about the important issue of protecting Canadians from disinformation. The topic of today's hearing is important to us, to the foundation of our community and to our platform.

TikTok is a global platform where an incredibly diverse range of Canadian creators and artists have found unprecedented success with global audiences; where indigenous creators are telling their own stories in their own voices; and where small businesses like Hamilton's DSRT Company, Mississauga's Realm Candles, and of course Smiths Falls' McMullan Appliance and Mattress are finding new customers, not just across Canada but also around the world.

Canadians love TikTok because of the authenticity and positivity of the content, so it's important, and in our interest, to maintain the security and integrity of our platform. To do this, we invest billions of dollars into our work on trust and safety. This includes advanced automated moderation and security technologies and thousands of safety and security experts around the world, including content moderators here in Canada. We also employ local policy experts who help ensure that the application of our policies considers the nuances of local laws and culture.

When it comes to misinformation and disinformation, TikTok takes an objective and robust approach. To start, our community guidelines prohibit misinformation that may cause significant harm to individuals or society, regardless of intent. To help counter misinformation and disinformation, we work with 19 independent fact-checking organizations to enforce our policies against this content. In addition, we invest in elevating reliable sources of information during elections, during unfolding events and on topics of health and well-being.

We relentlessly pursue and remove accounts that break our deceptive behaviour rules, including covert influence operations. We run highly technical investigations to identify and disrupt these operations on an ongoing basis. We have removed thousands of accounts belonging to dozens of networks operating from locations around the world. We regularly report on these removals in our publicly available transparency centre.

Addressing disinformation is an industry-wide challenge that requires a collaborative approach and collective action, including both platforms and government. At the heart of this collaboration lies transparency and accountability, which we believe are essential to fostering trust. We're committed to leading the way when it comes to being transparent in how we operate, moderate and recommend content, empower users, and secure our platform. As part of this commitment, TikTok regularly publishes transparency reports to provide visibility into how we uphold our community guidelines; how we respond to law enforcement requests for information, or government requests for content removals; and attempts at covert influence operations that we have disrupted on our platform.

Our commitment to transparency is also guiding our work with Canadian officials, including in the national security review of TikTok under the Investment Canada Act. We have been working with officials to ensure that they understand how our platform operates, including how we protect Canadians' user data and defend against things like disinformation and foreign interference. As part of this process, last year we offered Canadian officials the opportunity to review and analyze TikTok's source code and algorithm. While the government has not yet taken us up on this opportunity, we are hopeful that they will do so. We will continue to work collaboratively with the government in the best interest of Canadians.

Such collaboration will be critical as we approach the next federal election. In 2021 TikTok worked with Elections Canada to build an in-app hub that provided authenticated information on when, where and how to vote. That year we were also the only new platform to sign on to PCO's Canada declaration on electoral integrity online. As we approach the next election, we will be building upon these efforts and leveraging learnings and best practices from other elections taking place around the world, including in the U.S.

Finally, I'd be remiss not to mention that today's meeting is taking place during Media Literacy Week, an annual event promoting digital media literacy across Canada. As well, yesterday was Digital Citizen Day, a day that encourages Canadians to engage and share responsibly online. Education plays a critical role in empowering Canadians to be safe online and build resilience against misinformation and disinformation.

In Canada these events are led by MediaSmarts, a Canadian non-profit and a global leader in this space whose work TikTok is very proud to support.

We look forward to sharing more with you about how we are addressing these important issues.

Thank you again for the invitation to speak with the committee today.

4 p.m.

Conservative

The Chair Conservative John Brassard

Thank you, Mr. de Eyre.

We're going to X now. Mr. Fernández, you have five minutes to address the committee.

Go ahead, please.

Wifredo Fernández Head of Government Affairs, United States of America and Canada, X Corporation

Chairman Brassard, Vice-Chairs Fisher and Villemure and members of the committee, thank you for the opportunity to be with you here today. It's an honour.

My name is Wifredo Fernández, and I have the pleasure of leading government affairs and public policy at X in the U.S. and Canada.

We know that X is a critical platform in the public debate around elections. Through September this year, there were over 850 billion impressions, 79 billion video views and four billion posts related to politics globally. We are proud that our platform powers democratic discourse around the world. For us, authenticity, accuracy and safety are fundamental to our approach to elections.

Our consideration of authenticity has two principal dimensions: accounts and conversations. Our safety team proactively monitors activity on our platform and employs advanced detection methodologies to enforce our rules related to authenticity, such as platform manipulation, spam, and misleading and deceptive identities. Whether they are state-affiliated entities engaged in covert influence operations or generic spam networks, we actively work to thwart and disrupt campaigns that threaten to degrade the integrity of the platform.

Through our verification program, we have profile labels that signal the authenticity of accounts, including brands and governments. The grey check mark helps the public know when they are hearing from or interacting with a verified government actor, whether they're an elections official, law enforcement or their representatives.

We want X to be the most accurate source of information on the Internet. That's why we have deeply invested in the development and expansion of Community Notes, which now empower over 800,000 contributors in 197 countries and territories to add helpful context to posts, including advertisements.

A recent study from the University of Giessen in Germany found that across the political spectrum, Community Notes were perceived as significantly more trustworthy than traditional, simple misinformation flags. It also found that Community Notes had a greater effect on improving people's identification of misleading posts. Separate studies from the University of Giessen and the University of Luxembourg show that posts with notes are shared 50% to 61% less and deleted 80% more. We'd be happy to submit these studies for the record.

Deepfakes, shallowfakes, AI-generated photos, out-of-context media and similar content are a source of public concern. This past year, we put a new superpower into contributors' hands, allowing them to write notes that are automatically shown on posts with matching media. To give you a sense of the multiplying effect this has, the roughly 6,800 media notes that have been written are now showing on over 540,000 posts and have been seen nearly two billion times.

We've also introduced, due to popular demand, the ability for anyone to request a Community Note. With enough requests, top contributors will be alerted and can propose notes. For everyone on X, it's a way to help. For contributors, it's a way to see where help is needed. Posts with a Community Note are also demonetized.

We strongly believe that freedom of speech and safety can and must coexist. The election context brings a diverse set of challenges covering abuse and harassment, violent content, deceptive identities and impersonation, violent entities, hateful conduct, synthetic and manipulated media, and misleading information about how to participate and vote.

At X, every year is an election year, and our policies and procedures are constantly being revised to address evolving threats, adversarial practices and malicious actors. For us, planning begins well in advance of these elections. All relevant working groups internally collaborate to lend their expertise and experience in planning and to participate in enforcing these rules before, during and after elections. We continue to invest in our team and our technology to strengthen our capabilities.

Our efforts extend well beyond content moderation and include proactive initiatives to direct those on our platform to authoritative and reliable sources around election participation. We engage directly with regulators, political parties, campaigns, candidates, civil society, law enforcement, security agencies and others to ensure that clear lines of communication are established to broaden our visibility into the threat landscape and ensure that external partners have a resource here at X.

For example, on multiple occasions over the last year, we engaged productively with Canada's rapid response mechanism and as a result took down networks of accounts, including those linked to the Chinese information operation called “spamouflage”. We appreciate the helpfulness of the mechanism and will continue to maintain open lines of communication in the lead-up to the next federal election in Canada.

Thank you again for the opportunity to be with you today. I look forward to any questions you may have.

4:05 p.m.

Conservative

The Chair Conservative John Brassard

Thank you, Mr. Fernández.

Thank you to all our witnesses for their opening statements.

Members of the committee, we are fortunate that we have all four of the major players on social media here today, which poses its own problems. I'm going to ask every member to direct their questions specifically to an individual. That will save us some time in guessing who's going to answer.

It's been common practice at this committee that we reset after the first set of questions to allow Mr. Villemure and Mr. Green the opportunity to establish those six-minute questions in the second round. Is it the will of the committee to do that?

Some hon. members

Agreed.

4:05 p.m.

Conservative

The Chair Conservative John Brassard

Thank you.

We're going to start with six minutes of questions.

Mr. Cooper, you have the floor. Go ahead, sir.

4:05 p.m.

Conservative

Michael Cooper Conservative St. Albert—Edmonton, AB

Thank you, Mr. Chair. Thank you to the witnesses.

I'll start with Mr. Fernández.

Which foreign state is the most active in spreading or attempting to spread disinformation in Canada on your platform?

4:10 p.m.

Head of Government Affairs, United States of America and Canada, X Corporation

Wifredo Fernández

From our experience over the past year, the “spamouflage” campaign, which is linked to China, has been the most active.

4:10 p.m.

Conservative

Michael Cooper Conservative St. Albert—Edmonton, AB

Speaking of the spamouflage campaign, it was detected, or at least reported on, by the rapid response mechanism at Global Affairs. It involved a campaign that began in late August and intensified into, I believe, October of last year. It targeted dozens of MPs by falsely accusing them of various ethical and criminal violations.

Is that correct? Is that what you're referring to?

4:10 p.m.

Head of Government Affairs, United States of America and Canada, X Corporation

Wifredo Fernández

Yes. Over the last year, we've taken down about 60,000 accounts linked to the spamouflage operations. About 9,500 of those came from escalations from the rapid response mechanism.

4:10 p.m.

Conservative

Michael Cooper Conservative St. Albert—Edmonton, AB

The rapid response mechanism brought those 9,000 to X's attention.

4:10 p.m.

Head of Government Affairs, United States of America and Canada, X Corporation

Wifredo Fernández

That's correct.

4:10 p.m.

Conservative

Michael Cooper Conservative St. Albert—Edmonton, AB

Okay.

I will turn to Meta now to maybe address the spamouflage campaign, because Facebook was also used.

Perhaps you could elaborate on the steps you've taken and Meta's interactions with the rapid response mechanism.

4:10 p.m.

Head of Public Policy, Canada, Meta Platforms Inc.

Rachel Curran

I can, absolutely. I'll turn to my colleague, Dr. Hundley, to speak more about this.

4:10 p.m.

Global Threat Intelligence Lead, Meta Platforms Inc.

Dr. Lindsay Hundley

We've been enforcing against spamouflage since 2019. Last year, we did a really large enforcement under our coordinated inauthentic behaviour policy.

Spamouflage is a long-running, cross-Internet operation with global targeting. We removed thousands of accounts and pages after we were able to connect different clusters of activity together as part of a single operation and were able to attribute that operation to individuals associated with Chinese law enforcement.

We've identified over 50 platforms and forums that spamouflage has used, including Facebook, Instagram, X, YouTube, TikTok, Reddit, Pinterest, Medium, Blogspot, LiveJournal, VKontakte, Vimeo and dozens of other smaller platforms and forums.

As with other China-origin operations, we have not found evidence of spamouflage getting significant engagement among authentic communities on our services. As it is a global operation, we have seen audiences in Canada targeted as part of it. Researchers at the Australian Strategic Policy Institute, for instance, have described the operation's use of generative AI audio and doctored YouTube videos that were shared on other platforms with zero or minimal engagement from real users.

We've engaged a couple of times with the rapid response mechanism, including just yesterday, about spamouflage activity. I'm happy to report that in that instance, they found that we had been able to proactively remove the vast majority of activity that they were tracking.

4:10 p.m.

Conservative

Michael Cooper Conservative St. Albert—Edmonton, AB

Going back to the specific spamouflage campaign that I referenced, which occurred last year and was specifically targeting MPs, what was the scale of that campaign on the Facebook platform, and what was the response from the rapid response mechanism vis-à-vis Facebook?

4:10 p.m.

Global Threat Intelligence Lead, Meta Platforms Inc.

Dr. Lindsay Hundley

Unfortunately, I cannot give you specific numbers on the scale of that one campaign, because spamouflage consists of thousands of accounts. They drop in and out of different campaigns overall.

That said, when we engaged with the rapid response mechanism, we had already been tracking a lot of the activity they had shared with us and had removed a lot of it, although information sharing from government partners like these is of course really helpful for identifying anything that does get past our automated detection systems.