Evidence of meeting #97 for Access to Information, Privacy and Ethics in the 44th Parliament, 1st Session. (The original version is on Parliament's site, as are the minutes.)

A video is available from Parliament.

Also speaking

Jeanette Patell, Head of Canada Government Affairs and Public Policy, Google and YouTube, Google Canada
Shane Huntley, Senior Director, Threat Analysis Group, Google, Google Canada
Nathaniel Gleicher, Head of Security Policy, Meta Platforms Inc.
Lindsay Hundley, Influence Operations Policy Lead, Meta Platforms Inc.
Wifredo Fernández, Head of Government Affairs, United States of America and Canada, X Corporation
Rachel Curran, Head of Public Policy, Canada, Meta Platforms Inc.
Josh Harris, Senior Privacy and Data Protection Counsel, X Corporation

4:40 p.m.

Conservative

The Chair Conservative John Brassard

Good afternoon, everyone.

I apologize for the late start, but I do call the meeting to order.

Welcome to meeting number 97 of the House of Commons Standing Committee on Access to Information, Privacy and Ethics.

Pursuant to Standing Order 108(3)(h) and the motion adopted by the committee on Tuesday, January 31, 2023, the committee is resuming its study of the use of social media platforms for data harvesting and unethical or illicit sharing of personal information with foreign entities.

Today's meeting is taking place in a hybrid format, pursuant to the Standing Orders of the House. Members are participating in person, in the room, and virtually using the Zoom application.

I'd like to remind all members not to put earpieces near the microphones, for obvious reasons. It does cause feedback and potential injury.

I'd now like to welcome our guests and witnesses today. All have the proper equipment and connection. We've done all the technical tests, and yes, things seem to be working properly.

We have a full slate today, and I want to welcome you all.

From Google Canada, we have Jeanette Patell, head of Canada government affairs and public policy, YouTube; and Shane Huntley, senior director, threat analysis group. From Meta Platforms, we have Rachel Curran, head of public policy, Canada; Nathaniel Gleicher, head of security policy; and Dr. Lindsay Hundley, influence operations policy lead. From X Corporation, I want to welcome Wifredo Fernández, head of government affairs, United States of America and Canada; and Josh Harris, senior privacy and data protection counsel.

We are going to start with Google.

You have up to five minutes for your opening statement to the committee. Go ahead, please.

4:40 p.m.

Jeanette Patell Head of Canada Government Affairs and Public Policy, Google and YouTube, Google Canada

Mr. Chair, ladies and gentlemen of the committee, good morning.

My name is Jeanette Patell. I am responsible for government affairs and public policy for Google and YouTube in Canada.

I am joined by my colleague Shane Huntley, who leads a group dedicated to protecting Google and its users from advanced threats, including those posed by state-sponsored attacks.

We recognize the committee's efforts to make Canadians aware of the unethical and illegal harvesting and sharing of personal data and the risks to which Internet users around the world are exposed.

Data plays an important role in making the products and services that Canadians use each day more helpful. When Canadians use our services, they are trusting us with their information. This is a responsibility that we take very seriously at Google. We protect user privacy with industry-leading security infrastructure, responsible data practices and easy-to-use privacy tools that put our users in control.

Tools such as our privacy checkup and our security checkup give people personalized privacy and security reminders and recommendations, including flagging actions that they should take to immediately secure their Google account.

These two checkups allow users to adjust their security and privacy controls, step by step, based on their personal preferences.

We also have an advanced protection program, which is available to anyone but is specifically designed for individuals and organizations—such as elected officials, political campaigns, human rights activists, and journalists—who are at a higher risk of targeted online attacks.

Treating our user data responsibly and protecting user privacy include protecting data from third parties. That's why it's our strict policy to never sell our users' personal information to anyone. When it comes to government requests for user information, our team carefully reviews each request to make sure that it satisfies applicable laws. If a request asks for too much information, we try to narrow it, and in some cases, we object to producing any information at all. We have also taken the lead, through our transparency reports, in being transparent about government requests for user information.

In addition to these industry-leading tools and strict protocols, we invest significantly in global teams and operations to prevent abuse on our platforms. One of those teams is our threat analysis group.

I'll now let my colleague Shane speak about the work that his group does to secure our users' information against bad actors.

4:40 p.m.

Shane Huntley Senior Director, Threat Analysis Group, Google, Google Canada

Thank you, Chair and members of the committee.

As Jeanette mentioned, I'm the director of Google's threat analysis group, or TAG. While I'm personally based in Australia, we are a global team, a significant part of which is based in Google's Montreal office, which I'm sure this committee well knows is a growing hub of cybersecurity talent and expertise.

Our global team of analysts and security experts works closely with product teams to analyze and counter threats to our platform and our users, including threats from government-backed attackers, serious cybercriminals and information operations.

Hostile actors continue to attempt to access and misuse our platforms, and Google has invested heavily over many years to counter attempts to deceive, harm or take advantage of users. We don't just mitigate security risks; we work to eliminate entire classes of threats for consumers and businesses whose work and lives depend on the Internet.

On any given day, TAG tracks more than 270 targeted or government-backed attacker groups from more than 50 countries. We publish a quarterly bulletin about actions we take against accounts that we attribute to coordinated influence campaigns. For instance, in the third quarter of 2023, we reported disabling influence campaigns originating from countries including Russia, Iran, China and Mexico.

We are particularly focused on disrupting coordinated influence operations on YouTube. For example, since January 2023, we have terminated more than 2,400 YouTube channels linked to Russia and more than 60,000 channels linked to China as part of our investigations into this activity. These actions are in addition to YouTube's ongoing enforcement of community guidelines, which resulted in the removal of more than eight million videos globally in the third quarter of 2023.

As we discover and disrupt operations, we take steps to protect users, disclose information publicly and share our findings with industry and government partners to support the entire ecosystem. We also issue warnings to our users when we believe that they have been targeted by a government-backed attack.

While this work is never done, we continue to take action, identify bad actors and share relevant information to protect users and prevent future attacks.

We would like to thank the committee for your attention to this critical issue and for allowing us to share more on our work to keep Canadians safe and our investments in the right expertise to protect users on our platform. We remain committed to partnering with the Canadian government to ensure a stronger and safer digital future for all Canadians.

We look forward to answering your questions.

4:45 p.m.

Conservative

The Chair Conservative John Brassard

Thank you, Mr. Huntley and Ms. Patell.

We're going to go to Meta now.

You have five minutes to address the committee. Please go ahead.

4:45 p.m.

Nathaniel Gleicher Head of Security Policy, Meta Platforms Inc.

Thank you for the opportunity to appear before you today.

My name is Nathaniel Gleicher, and I'm the head of security policy at Meta.

My work is focused on addressing the adversarial threats that we face every day to the security and integrity of our products and services and taking steps to protect our users in every way we can.

I have worked in cybersecurity and trust and safety for two decades, first as a technical expert and then as a cybercrime prosecutor at the U.S. Department of Justice and as director for cybersecurity policy at the National Security Council.

I'm joined by video conference today by two colleagues at Meta: Rachel Curran, the head of public policy for Canada; and Dr. Lindsay Hundley, our lead for influence operations policy.

At Meta, we work hard to identify and counter foreign adversarial threats, including hacking campaigns and cyber-espionage operations, as well as influence operations, what we call coordinated inauthentic behaviour, or CIB, which we define as any “coordinated efforts to manipulate public debate for a strategic goal, in which fake accounts are central to the operation.”

CIB is when users coordinate with one another and use fake accounts to mislead others about who they are and what they are doing. At Meta, our community standards prohibit inauthentic behaviour, including by users who seek to misrepresent themselves, use fake accounts or artificially boost the popularity of content. This policy is intended to protect the security of user accounts and our services and to create a space where people can trust the people and communities they interact with on our platforms.

We also know that threat actors are working to interfere with and manipulate public debate, exploit societal divisions, promote fraud, influence elections and target authentic social engagement across the Internet. Stopping these bad actors, both on our platforms and more broadly, is one of our highest priorities. That's why we have invested significantly in people and technology to combat inauthentic behaviour.

The security teams at Meta have developed policies, automated detection tools and enforcement frameworks to tackle deceptive actors, both foreign and domestic. These investments in technology have enabled us to stop millions of attempts to create fake accounts every day and to detect and remove millions more, often within minutes of their creation. Just this year, Meta has disabled almost two billion fake accounts. The vast majority of those, more than 99% of them, were identified proactively before receiving any report.

As part of this work, we regularly publish reports on our work to counter the threats we're discussing here today. To talk more about that, I'd like to hand it over to Dr. Hundley, who coordinates our work to identify and expose foreign interference.

December 13th, 2023 / 4:45 p.m.

Dr. Lindsay Hundley Influence Operations Policy Lead, Meta Platforms Inc.

My name is Lindsay Hundley and I lead Meta's policy work on countering influence operations, both overt and covert. My work at the company draws on my nearly 10 years of experience as a researcher focused on issues related to foreign interference, including in my doctoral work at Stanford University and during research fellowships at both Stanford and Harvard.

Meta uses a behaviour-based approach to identify covert influence operations, not one that's based on the content they share. We remove networks like these no matter who is behind them, what they post, or whether they are foreign or domestic. If helpful, I would be happy to give specific examples.

We have taken down more than 200 covert influence operations from 68 countries in at least 42 languages from Amharic and Urdu to Russian and Chinese. We regularly report these findings through our adversarial threat reports. Sharing this information has enabled our teams, investigative journalists, government officials, and industry peers to better understand and expose Internet-wide security risks, including ahead of critical elections.

As of our latest report, China is now the third most common geographic source of foreign CIB that we have disrupted, after Russia and Iran. This year, we have taken down five CIB networks from China, more than any other country. Regardless of who was behind these networks or what they targeted, these CIB operations emanating from China typically posted content related to China's interests in different regions worldwide. Many praised China. Some defended its human rights record in Tibet and Xinjiang. Others criticized critics of the Chinese government, including journalists and researchers.

Countering foreign influence operations is a whole-of-society effort. No single platform can solve foreign interference on its own, which is why we work with our industry peers, independent researchers, investigative journalists, government and law enforcement.

Thank you for your focus on this work. We look forward to answering your questions.

4:50 p.m.

Conservative

The Chair Conservative John Brassard

Thank you, Dr. Hundley. That was precisely on time between the two of you. Thank you for that.

Now we're going to go to X Corporation.

Please go ahead, for five minutes, to address the committee.

4:50 p.m.

Wifredo Fernández Head of Government Affairs, United States of America and Canada, X Corporation

Mr. Chair and members of the committee, thank you for the invitation to appear before you today.

My name is Wifredo Fernández. I serve as head of government affairs for the U.S. and Canada at X. I'm joined by my colleague Josh Harris, our lead privacy counsel for North America.

X's mission is to promote and protect the public conversation and to be the town square of the Internet. People's right to privacy and data protection is a fundamental right, not a privilege. X is a uniquely open service. We offer a range of ways for people to be a part of the conversation on X on their terms, from creating pseudonymous accounts in order to protect their identity to letting people control who sees their posts.

Our privacy efforts have enabled people around the world using X to protect their own data. That same philosophy guides how we work to protect the data people share with us. We empower people who use our service to make informed decisions about the data they share with us. We believe individuals should know and have meaningful control over what data is being collected about them, how it's used and when it's shared. We're guided by the principle that we should only use data for the purpose for which it was collected.

We have one global privacy program that encompasses the highest data protection standards in the world, and a single global privacy policy, which we have worked hard to make clear and easy to understand. X is always working to improve transparency into what data is collected and how it is used. Through the account settings on X, we give people the ability to make a variety of choices about their data privacy, including limiting the data we collect, determining whether they see interest-based advertising, and controlling how we personalize their experience. In addition, we provide people with the ability to access information about advertisers that have included them in tailored audiences to serve them ads, demographic and interest data about their accounts from ad partners, and information X has inferred about them.

Behind the scenes, teams across the company are constantly working to protect the privacy and data of those who use our service. This work has several areas of focus. Over the last year, we have been overhauling technical infrastructure and products to make X more efficient and durable. Tackling technical debt isn't just good for the privacy and safety of people who use X. It will also help us get better products and services to people faster.

Privacy by design is a priority with every product we build. We execute comprehensive privacy reviews for all new features and tools we roll out, and perform additional data protection impact assessments for products that may pose additional risks to our users.

In addition, we have taken steps to mitigate unauthorized scraping and harvesting of X data. No single mitigation can protect against all the privacy harms associated with such activity. Some actions we've taken include the use of dedicated teams that work together to monitor, identify and mitigate scraping activity across a range of vectors and platforms; the introduction of rate limits to limit a malicious actor's ability to scrape data; the expansion of user verification offerings to assess whether a given account applicant is a real person, not a bot; and updates to our terms of service, in order to make it clear that scraping is an express misuse of the X service.

X is public. Posts are immediately viewable and searchable by anyone around the world. We give people non-public ways to communicate on X, too, through protected accounts and direct messages. It is also possible to use X under a pseudonym, if you prefer not to use your real name. When people use X, even if they're just looking at posts, we receive some personal information, such as the type of device they're using and their IP address. People can choose what additional information to share with us, including email address, phone number, address book contacts and a public profile. We use the information for things such as keeping accounts secure and showing people more relevant posts, accounts to follow, events and ads.

Like many peer companies, X's business is largely based on advertising, but we have some fundamental differences. In general, rather than focusing on who you are, our data is more about what you're interested in—for example, what you repost, what you like and whom you follow, all of which is public. X has an open public API, making data available for developers, journalists, brands and researchers for analysis, and to build businesses, provide services and create innovative products. We do not provide personally identifiable information through our API that is not already visible on the service. We take our responsibility to protect people's data seriously and have strict policies and processes in place to assess applications for uses of X data and restrict improper use of such data.

Notwithstanding the fact that our API only makes available public data, we have long-standing rules against the use of our data for surveillance. As a company, we will always err on the side of protecting the voices of those who use our service. Privacy and data protection are at the heart of our company-wide priority to build products that earn the trust of people who use them. Freedom of speech and expression is built on this foundation, and we take this responsibility very seriously.

Thank you, and we look forward to answering your questions.

4:55 p.m.

Conservative

The Chair Conservative John Brassard

Thank you, Mr. Fernández.

Thank you, all, for your opening statements.

Members of the committee, we have a bit of a Brady Bunch scenario going on here.

Mr. Green, I'll call you “Mike Brady”, the patriarch of the family.

I'm going to ask that members direct their questions specifically to an individual, because we're just going to waste time trying to figure out who's going to answer the question. If you can do that, it would be appreciated.

We're going to start our first six-minute round with Mr. Barrett from the Conservative Party.

Mr. Barrett, go ahead, please.

4:55 p.m.

Conservative

Michael Barrett Conservative Leeds—Grenville—Thousand Islands and Rideau Lakes, ON

I'll direct my first question to X Corporation.

Would you support an age restriction requiring parental approval for downloads of your app by children under the age of 16?

4:55 p.m.

Head of Government Affairs, United States of America and Canada, X Corporation

Wifredo Fernández

Thank you for the question.

There are, around the world, a variety of different laws when it comes to consent and age restrictions. Sometimes they vary by state here in the United States. We welcome the opportunity to engage on any potential legislation—

4:55 p.m.

Conservative

Michael Barrett Conservative Leeds—Grenville—Thousand Islands and Rideau Lakes, ON

I'm sorry. I'm just going to jump in there quickly, sir.

This is a great opportunity for you to engage on whether X Corporation would support a restriction in the App Store for minors under the age of 16 to require parental consent when they're downloading your app. Would you support that?

4:55 p.m.

Head of Government Affairs, United States of America and Canada, X Corporation

Wifredo Fernández

As you may imagine, X is not the platform of choice for teens. We do allow users in the United States and Canada—with the exception of Quebec, where the minimum age is 14—to use the service. We leave the decision of whether to restrict downloads on the App Store to the App Store.

4:55 p.m.

Conservative

Michael Barrett Conservative Leeds—Grenville—Thousand Islands and Rideau Lakes, ON

Okay.

I have the same question for Meta, please.

4:55 p.m.

Rachel Curran Head of Public Policy, Canada, Meta Platforms Inc.

Thank you, Mr. Barrett.

Yes, we would support that kind of restriction. If I may say, I think that would be an excellent way for policy-makers to protect young users and address youth safety issues, as long as it's applied industry-wide.

4:55 p.m.

Conservative

Michael Barrett Conservative Leeds—Grenville—Thousand Islands and Rideau Lakes, ON

Thank you for your response.

I have the same question for Google, please.

4:55 p.m.

Head of Canada Government Affairs and Public Policy, Google and YouTube, Google Canada

Jeanette Patell

You can put parental controls on an Android device. That's one of the things we've built in order to put families and caregivers in control of [Technical difficulty—Editor] experience.

That can restrict what content can be downloaded or purchased from Google Play on that particular device, based on the maturity [Technical difficulty—Editor] level and concerns in putting this [Technical difficulty—Editor] that is right for them.

4:55 p.m.

Conservative

The Chair Conservative John Brassard

I'm sorry, Ms. Patell. The interpreters are having a problem because you are cutting in and out.

Mr. Barrett, I'm going to stop your time here.

I don't know.... We did the test, and it was fine.

I'm going to go back to Mr. Barrett here, but we may have a problem, Ms. Patell. We'll see what the next answer brings.

Go ahead.

4:55 p.m.

Conservative

Michael Barrett Conservative Leeds—Grenville—Thousand Islands and Rideau Lakes, ON

Okay, Ms. Patell. I heard your response that on Android Google devices parents have the opportunity to set content moderation by age. Can you indicate if that's a correct summary of what you said?

4:55 p.m.

Head of Canada Government Affairs and Public Policy, Google and YouTube, Google Canada

Jeanette Patell

Essentially, yes. We have built tools so that parents can put controls on the devices and downloads for [Technical difficulty—Editor].

4:55 p.m.

Conservative

Michael Barrett Conservative Leeds—Grenville—Thousand Islands and Rideau Lakes, ON

Thanks very much.

My next question is for X Corp.

Do you have a list of instances in which the Government of Canada has requested that content be taken down on your platform? That's by the Government of Canada to X Corp.

5 p.m.

Head of Government Affairs, United States of America and Canada, X Corporation

Wifredo Fernández

I'll allow my colleague Josh to add to this.

We do keep track of lawful requests for user information from governments. We don't have that information in front of us today, but yes, law enforcement do have a particular portal where they can make lawful requests for user data or potential content removal, according to lawful order.

5 p.m.

Conservative

Michael Barrett Conservative Leeds—Grenville—Thousand Islands and Rideau Lakes, ON

Before your colleague jumps in there—and you can give a response in under 30 seconds—I'm looking for requests by government and not by police agencies.

That's for X, please.

5 p.m.

Josh Harris Senior Privacy and Data Protection Counsel, X Corporation

Yes, we do track by government agency. We would be able to provide you with aggregate numbers of government requests from Canada for a set period—for example, one year.

5 p.m.

Conservative

Michael Barrett Conservative Leeds—Grenville—Thousand Islands and Rideau Lakes, ON

If you're able to give us the last five years and table that with the committee, can you also itemize it by the nature of the request, if you're providing a written submission to the committee? Is that something you'd be able to do, sir?