Evidence of meeting #153 for Access to Information, Privacy and Ethics in the 42nd Parliament, 1st Session. (The original version is on Parliament’s site, as are the minutes.)

A video is available from Parliament.

Also speaking

Ian Lucas  Member, Digital, Culture, Media and Sport Committee, United Kingdom House of Commons
Kevin Chan  Global Policy Director, Facebook Inc.
Neil Potts  Global Policy Director, Facebook Inc.
Derek Slater  Global Director, Information Policy, Google LLC
Carlos Monje  Director, Public Policy, Twitter Inc.
Damian Collins  Chair, Digital, Culture, Media and Sport Committee, United Kingdom House of Commons
Colin McKay  Head, Government Affairs and Public Policy, Google Canada
Edwin Tong  Senior Minister of State, Ministry of Law and Ministry of Health, Parliament of Singapore
Hildegarde Naughton  Chair, Joint Committee on Communications, Climate Action and Environment, Houses of the Oireachtas
Jens Zimmermann  Social Democratic Party, Parliament of the Federal Republic of Germany
Keit Pentus-Rosimannus  Vice-Chairwoman, Reform Party, Parliament of the Republic of Estonia (Riigikogu)
Mohammed Ouzzine  Deputy Speaker, Committee of Education and Culture and Communication, House of Representatives of the Kingdom of Morocco
Elizabeth Cabezas  President, National Assembly of the Republic of Ecuador
Andy Daniel  Speaker, House of Assembly of Saint Lucia
Jo Stevens  Member, Digital, Culture, Media and Sport Committee, United Kingdom House of Commons
James Lawless  Member, Joint Committee on Communications, Climate Action and Environment, Houses of the Oireachtas
Sun Xueling  Senior Parliamentary Secretary, Ministry of Home Affairs and Ministry of National Development, Parliament of Singapore
Michele Austin  Head, Government and Public Policy, Twitter Canada, Twitter Inc.

1:05 p.m.

Liberal

David Graham Liberal Laurentides—Labelle, QC

Does Mr. Zuckerberg have a—

1:05 p.m.

Conservative

The Chair Conservative Bob Zimmer

We're actually out of time. Sorry, Mr. Graham.

We'll go to Mr. Saini for five minutes.

1:05 p.m.

Liberal

Raj Saini Liberal Kitchener Centre, ON

Good afternoon.

One of the things we've heard from many experts is that a lot of the issues with data arose when machine learning really came into force. There was an inflection point. Many experts agree that self-regulation is not viable anymore. Rules have to be in place. The business model just can't regulate itself, and it doesn't align with the public interest.

I have two points. My concern is that right now, we're a mature democracy. A lot of the countries represented around this table are mature democracies. My worry is for nascent democracies that are trying to elevate themselves but don't have the proper structure, regulation, education and efficiency in place, or a free or advanced press. There has been some suggestion that maybe this regulation should be internationalized, as with other products. Even though some countries may not have the ability to effectively regulate certain industries, the mature democracies could set a standard worldwide.

Would that be something that would be accepted, either through a WTO mechanism or some other international institution that's maybe set apart? Part of the conversation has been around the GDPR, but the GDPR is only for Europe. There are the American rules, the Canadian rules, the South Asian rules.... If there were one institution that governed everybody, there would be no confusion, wherever the platforms were doing business, because they would accede to one standard worldwide.

1:10 p.m.

Global Policy Director, Facebook Inc.

Kevin Chan

You are correct. Where we can have harmonization of standards across the world, that's helpful. We've called for that in multiple realms, including in the privacy realm.

I would say that's actually one of the key challenges. It's a vigorous debate that we have when we talk about the oversight board. In the consultations we've had around the world, including in Canada, the big question that's come up is, “How can you have a global board make decisions that have local impact?” It's come up, and I can say we've had some interesting conversations with the Assembly of First Nations. Obviously they have some unique questions about what the right governing framework is for content online and how we marry the international with the local.

I absolutely agree with you, sir. That's a very good and astute question, and one that we wrestle with.

1:10 p.m.

Liberal

Raj Saini Liberal Kitchener Centre, ON

Google...?

1:10 p.m.

Head, Government Affairs and Public Policy, Google Canada

Colin McKay

This is a challenge we've recognized since the early days of implementing machine learning in our products, and it's one of the reasons we came out over a year ago with a very clear set of principles around how we will use artificial intelligence in our products.

We certainly underline that there needs to be an active conversation between industry and governments around the world. In the past, this sort of principles-based development of regulation, as led by the OECD around data protection, has served us well as regulation developed by region and at different levels of sophistication.

I agree with you that this sort of principles-based global collaboration, especially in concert with the companies that will have to execute on it, will result in a fairer regulatory system that's as broadly applicable as possible, whether in this room or globally.

1:10 p.m.

Liberal

Raj Saini Liberal Kitchener Centre, ON

Thank you for that. Obviously you're going to take in the local effect of any country you work in, but there are some basic principles that I think can be agreed upon worldwide.

My final question is a bit philosophical but also a bit practical. In certain countries in which you operate, you know there are no standards and no regulation. The rule of law is lacking, a free press is lacking and the government structure is not that advanced. The content could be exploited in a much more heinous or worse way in those countries than in other parts of the world.

Is there no moral incumbency on the platforms to make sure that when they go into those jurisdictions they are helping to elevate the governance structure, so that when they're doing business in those countries, it's done in a more equitable manner?

1:10 p.m.

Head, Government Affairs and Public Policy, Google Canada

Colin McKay

I have a twofold response to that.

First, as my colleague, Mr. Slater, mentioned, we work very hard to make sure the policies and guidelines around all of our products apply to circumstances in those countries as well as in what we might describe as more developed or more stable countries.

However, that's also why people like me work at the company. We work on teams that focus on AI and on privacy and data protection. We have those engagements, whether in the country itself or at international organization meetings, so that we can have an ongoing conversation and share information about how we're seeing these policies and technologies develop internationally. We can provide a comparison with how they're seeing that develop within their own jurisdiction.

It's an honest and forthright exchange, recognizing that we're at the forefront of a technology that is having a significant impact on our products and a significant benefit for our users.

1:10 p.m.

Liberal

Raj Saini Liberal Kitchener Centre, ON

Thank you very much.

1:10 p.m.

Conservative

The Chair Conservative Bob Zimmer

Thank you, Mr. Saini.

Now we'll go to the countries again. We're at about two minutes each. If it can be quick, that would be great. We have Estonia, Germany, Mexico, Singapore and Ireland, and then we have some closing statements from us.

Go ahead, Estonia.

1:10 p.m.

Vice-Chairwoman, Reform Party, Parliament of the Republic of Estonia (Riigikogu)

Keit Pentus-Rosimannus

Thank you.

It has been pointed out that the disinformation phenomenon is becoming more complex and cross-channel or cross-platform, meaning that the different phases or stages of disinformation campaigns often happen on different channels.

How do you co-operate? What do you see as the possible model for co-operation between the different platforms in order to fight the disinformation problem?

1:10 p.m.

Global Director, Information Policy, Google LLC

Derek Slater

We do our own threat assessments to prevent and anticipate new trends, and then we work collaboratively among the industry, and where appropriate with law enforcement and others, to make sure information and indicators are shared. We actively want to be a participant in that sort of process.

1:10 p.m.

Director, Public Policy, Twitter Inc.

Carlos Monje

I'll just say that the companies do work together on these issues and work closely with governments. During Mr. Saini's comments, I was thinking about the role of government and how forward-leaning Estonia, under the thumb of Russian disinformation for a long time, has been in working to improve media literacy and working with the platforms in a way that makes our jobs easier.

Everyone has a role and the companies do work together to try to address common threats.

1:15 p.m.

Global Policy Director, Facebook Inc.

Neil Potts

A good place to look, perhaps, is the work that we do as part of the Global Internet Forum to Counter Terrorism. All of our companies are members of that forum, where we share responsibilities for information sharing, technological co-operation and support for research.

1:15 p.m.

Conservative

The Chair Conservative Bob Zimmer

Next up is Germany.

Go ahead.

1:15 p.m.

Social Democratic Party, Parliament of the Federal Republic of Germany

Jens Zimmermann

Thank you, Mr. Chairman.

I would like to ask a few questions of Twitter.

During the recent European elections you introduced several measures, especially concerning fake news about the electoral process as such. That sounded like a good idea at first, but we ended up having elected officials in Germany blocked for tweets that obviously didn't have any false information in them.

Obviously, Twitter's complaint mechanism was used by right-wing activists to flag completely legal posts by members of Parliament, for example. The problem was really the way it was implemented, and it took quite some time until everything was resolved.

Have you monitored these developments? What are your lessons learned, and how will that evolve during upcoming elections?

1:15 p.m.

Director, Public Policy, Twitter Inc.

Carlos Monje

Thank you for that question.

From the U.S., I followed that as well: the disinformation reporting flow that we launched in the EU and in India. I think this is one of the challenges and blessings of being a global platform—every time you turn around, there's another election. We have Argentina coming up. We have the U.S. 2020 elections to follow, and Canada, of course, in October.

What we learned is that, as always, when you create a rule and create a system to implement that rule, people try to game the system. What we saw in Germany was a question of how and whether you sign a ballot. That was one of the issues that arose. We are going to learn from that and try to get better at it.

What we found—and Neil mentioned the GIFCT—was that our contribution to the effort, or what Twitter does, is to look at behavioural signals first, which means not looking at the content but at how different accounts are reacting to one another. That way we don't have to read the variety of contexts that make those decisions more complicated.

1:15 p.m.

Conservative

The Chair Conservative Bob Zimmer

Thank you.

We'll go to Singapore for two minutes.

1:15 p.m.

Senior Minister of State, Ministry of Law and Ministry of Health, Parliament of Singapore

Edwin Tong

Mr. Potts, earlier you said, in answer to my colleague's question, that you were not aware of being told, either before or after the bombings in Sri Lanka, that it had in fact been flagged to Facebook beforehand. Do you recall that?

1:15 p.m.

Global Policy Director, Facebook Inc.

Neil Potts

Yes, sir.

1:15 p.m.

Senior Minister of State, Ministry of Law and Ministry of Health, Parliament of Singapore

Edwin Tong

This was a serious and horrific massacre where videos were carried on Facebook for some time and, as you said, it's hate speech in clear breach of your policies. The alleged bomber himself, in fact, had a Facebook account, which I'm sure you know. I'm surprised that you didn't check, because that's something that I would have thought Facebook would want to know.

You would have wanted to know how it is that, with videos having been spread on Facebook, this was something Facebook had missed, if indeed you missed it at all. I'm surprised you are not aware today of whether or not this was flagged to Facebook. Can you confirm that?

1:15 p.m.

Global Policy Director, Facebook Inc.

Neil Potts

For the specific video, sir, I'm happy to follow up after this hearing, and to come back to show you exactly that timeline. When we were made aware of the attack of the individual, we quickly removed—

1:15 p.m.

Senior Minister of State, Ministry of Law and Ministry of Health, Parliament of Singapore

Edwin Tong

I know. That's not my focus. Why is it that today, about two months after the event, you still don't know whether, prior to the attack, Facebook had been made aware of the existence of the videos? I would have thought that, if you were to have us believe that the policies you now have in place—AI and other mechanisms for the future.... If I were Facebook, I would have wanted to know how this was missed. I'm surprised that you don't know, even today.

1:15 p.m.

Global Policy Director, Facebook Inc.

Neil Potts

That is correct. We do a formal after-action process where we review these incidents to make sure that our policies—

1:20 p.m.

Senior Minister of State, Ministry of Law and Ministry of Health, Parliament of Singapore

Edwin Tong

It didn't come up.