Evidence of meeting #155 for Justice and Human Rights in the 42nd Parliament, 1st Session. (The original version is on Parliament’s site, as are the minutes.)

A video is available from Parliament.

Also speaking

Colin McKay Head, Government Affairs and Public Policy, Google Canada

4:10 p.m.

Liberal

The Chair Liberal Anthony Housefather

We apologize for being late; we had a vote in the chamber. I'm sorry, especially to our witness.

Good afternoon, everyone, and welcome to the Standing Committee on Justice and Human Rights, as we resume our study of online hate.

Today it is an enormous pleasure to be joined by Colin McKay, head of government affairs and public policy at Google Canada. We really appreciate Google's participation and yours to enable us to have a better study. Thank you so much.

Mr. McKay, the floor is yours.

4:10 p.m.

Colin McKay Head, Government Affairs and Public Policy, Google Canada

Thank you, Chair.

Thank you to all members of the committee for the opportunity to speak with you today.

I don't mind the delay. It's the business of Parliament, and I'm just happy to be a part of it today.

As the chair just mentioned, my name is Colin McKay, and I'm the head of government affairs and public policy for Google in Canada.

We, like you, are deeply troubled by the increase in hate and violence in the world. We are alarmed by acts of terrorism and violent extremism like those in New Zealand and Sri Lanka. We are disturbed by attempts to incite hatred and violence against individuals and groups here in Canada and elsewhere. We take these issues seriously, and we want to be part of the solution.

At Google, we build products for users from all backgrounds who live in nearly 200 countries and territories around the world. It is essential that we earn and maintain their trust, especially in moments of crisis. For many issues, such as privacy, defamation or hate speech, local legislation and legal obligations may vary from country to country. Different jurisdictions have come to different conclusions about how to deal with these complex issues. Striking this balance is never easy.

To stop hate and violent extremist content online, tech companies, governments and broader society need to work together. Terrorism and violent extremism are complex societal problems that require a response, with participation from across society. We need to share knowledge and to learn from each other.

At Google we haven't waited for government intervention or regulation to take action. We've already taken concrete steps to respond to how technology is being used as a tool to spread this content. I want to state clearly that every Google product that hosts user content prohibits incitement to violence and hate speech against individuals or groups, based on particular attributes, including race, ethnicity, gender and religion.

When addressing violent extremist content online, our position is clear: We agree that action must be taken. Let me take some time to speak to how we've been working to identify and take down this content.

Our first step is vigorously enforcing our policies. On YouTube, we use a combination of machine learning and human review to act when terrorist and violent extremist content is uploaded. This combination makes effective use of the knowledge and experience of our expert teams, coupled with the scale and speed offered by technology.
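
To make the flag-then-review combination concrete, here is a minimal sketch of how such a pipeline might be structured. The classifier, thresholds and review queue are illustrative assumptions, not a description of Google's actual systems.

```python
# Illustrative sketch of a flag-then-review moderation pipeline: machine
# learning supplies scale and speed, human reviewers supply judgment.
# The classifier, thresholds and queue are hypothetical assumptions.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Video:
    video_id: str
    risk_score: float = 0.0  # filled in by the (hypothetical) classifier

@dataclass
class ReviewQueue:
    pending: List[Video] = field(default_factory=list)

    def submit(self, video: Video) -> None:
        # Expert human reviewers work through this queue and make the final call.
        self.pending.append(video)

def moderate(video: Video,
             classify: Callable[[Video], float],
             queue: ReviewQueue,
             remove_threshold: float = 0.99,
             review_threshold: float = 0.5) -> str:
    video.risk_score = classify(video)
    if video.risk_score >= remove_threshold:
        return "removed"             # high-confidence violation: act immediately
    if video.risk_score >= review_threshold:
        queue.submit(video)          # uncertain: escalate to a human reviewer
        return "queued_for_review"
    return "published"
```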

In the first quarter of this year, for example, YouTube manually reviewed over one million videos that our systems had flagged for suspected terrorist content. Even though fewer than 90,000 of them turned out to violate our terrorism policy, we reviewed every one out of an abundance of caution.

We complement this by working with governments and NGOs on programs that promote counter-speech on our platforms—in the process elevating credible voices to speak out against hate, violence and terrorism.

Any attempt to address these challenges requires international coordination. We were actively involved in the drafting of the recently announced Christchurch Call to Action. We were also one of the founding companies of the Global Internet Forum to Counter Terrorism. This is an industry coalition to identify digital fingerprints of terrorist content across our services and platforms, as well as sharing information and sponsoring research on how to best curb the spread of terrorism online.
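
The digital fingerprints mentioned here are hashes of known terrorist content shared across member platforms. The sketch below shows that matching flow under the assumption that a simple content hash suffices; real deployments use perceptual hashes that survive re-encoding and cropping, so SHA-256 is only a stand-in.

```python
# Minimal sketch of shared hash-database matching, assuming each member
# platform contributes fingerprints of confirmed terrorist content.
import hashlib
from typing import Set

shared_hash_db: Set[str] = set()  # fingerprints contributed by member platforms

def fingerprint(content: bytes) -> str:
    return hashlib.sha256(content).hexdigest()

def register_known_content(content: bytes) -> None:
    # Called when a platform confirms a piece of terrorist content.
    shared_hash_db.add(fingerprint(content))

def should_block_upload(content: bytes) -> bool:
    # True if the upload matches a known fingerprint across the coalition.
    return fingerprint(content) in shared_hash_db
```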

I've spoken to how we address violent extremist content. We follow similar steps when addressing hateful content on YouTube. We have tough community guidelines that prohibit content that promotes or condones violence against individuals or groups, based on race, ethnic origin, religion, disability, gender, age, nationality, veteran status, sexual orientation or gender identity. This extends to content whose primary purpose is inciting hatred on the basis of these core characteristics. We enforce these guidelines rigorously to keep hateful content off our platforms.

We also ban abusive videos and comments that cross the line into a malicious attack on a user, and we ban violent or graphic content that is primarily intended to be shocking, sensational or disrespectful.

Our actions to address violent and hateful content, as noted in the Christchurch Call I just mentioned, must be consistent with the principles of a free, open and secure Internet, without compromising human rights and fundamental freedoms, including the freedom of expression. We want to encourage the growth of vibrant communities, while identifying and addressing threats to our users and to broader society.

We believe that our guidelines are consistent with these principles, even as they continue to evolve. Recently, we extended our policy dealing with harassment, making content that promotes hoaxes much harder to find.

What does this mean in practice?

From January to March 2019, we removed over 8.2 million videos for violating YouTube's community guidelines. For context, over 500 hours of video are uploaded to YouTube every minute, so while 8.2 million is a very big number, it's a small part of a very large corpus. Some 76% of those videos were first flagged by machines rather than humans, and of the videos detected by machines, 75% had not received a single view.

We have also cracked down on hateful and abusive comments, again by using smart detection technology and human reviewers to flag, review and remove hate speech and other abuse. In the first quarter of 2019, we removed 228 million comments that broke our guidelines, and over 99% of them were first detected by our machine learning systems.

We also recognize that content can sit in a grey area, where it may be offensive but does not directly violate YouTube's policies against incitement to violence and hate speech. When this occurs, we have built a policy to drastically reduce a video's visibility by making it ineligible for ads, removing its comments and excluding it from our recommendation system.
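
A minimal sketch of that reduced-visibility treatment follows, assuming it can be modelled as feature flags on a video rather than removal; the field names are illustrative, not YouTube's actual API.

```python
# Sketch of the reduced-visibility treatment for grey-area content:
# the video stays up, but its reach and monetization are cut back.
from dataclasses import dataclass

@dataclass
class VideoState:
    ads_eligible: bool = True
    comments_enabled: bool = True
    recommendable: bool = True

def apply_borderline_policy(state: VideoState) -> VideoState:
    # The video does not violate removal policies, so it remains
    # available, but its visibility is drastically reduced.
    state.ads_eligible = False      # ineligible for ads
    state.comments_enabled = False  # comments removed
    state.recommendable = False     # excluded from recommendations
    return state
```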

Some have questioned the role of YouTube's recommendation system in propagating questionable content. Several months ago we introduced an update to our recommendation systems to begin reducing the visibility of even more borderline content that can misinform users in harmful ways, and we'll be working to roll out this change around the world.

It's vitally important that users of our platforms and services understand both the breadth and the impact of the steps we have taken in this regard.

We have long led the industry in being transparent with our users. YouTube put out the industry's first community guidelines report, and we update it quarterly. Google has long released a transparency report with details on content removals across our products, including content removed upon request from governments or by order from law enforcement.

While our users value our services, they also trust them to work well and provide the most relevant and useful information. Hate speech and violent extremism have no place on Google or on YouTube. We believe that we have developed a responsible approach to address the evolving and complex issues that have seized our collective attention and that are the subject of your committee's ongoing work.

Thank you for this time, and I welcome any questions.

4:15 p.m.

Liberal

The Chair Liberal Anthony Housefather

Thank you very much for your opening statement.

We will go to Mr. MacKenzie.

4:15 p.m.

Conservative

Dave MacKenzie Conservative Oxford, ON

If I don't use all my time, Mr. Chair, Mr. Barrett will take it.

4:15 p.m.

Liberal

The Chair Liberal Anthony Housefather

Absolutely.

4:15 p.m.

Conservative

Dave MacKenzie Conservative Oxford, ON

Thank you for being here today, Mr. McKay.

You're in an enviable position of trying to harness whatever is going on in the world through your medium. I wonder what you would define as “hateful messages”.

4:15 p.m.

Head, Government Affairs and Public Policy, Google Canada

Colin McKay

If you'll permit me to look at my notes, I have a very specific definition.

For us, hate speech refers to content that promotes violence against, or has the primary purpose of inciting hatred against, individuals or groups based on the attributes I mentioned in my opening remarks.

4:15 p.m.

Conservative

Dave MacKenzie Conservative Oxford, ON

When we have that definition and somebody puts something on YouTube that may come from a movie or a television show, it seems to me as though, at times, those topics would be part of the broadcast. If those show up, what happens?

4:15 p.m.

Head, Government Affairs and Public Policy, Google Canada

Colin McKay

If those show up and they are flagged for review—a user flags them or they're spotted by our systems—we have a team of 10,000 who review videos that have been flagged to see if they violate our policies.

If the context is that something is obviously a clip from a movie or a piece of fiction, or it's a presentation of an issue in a particular way, we have to carefully weigh whether or not this will be recognized by our users as a reflection of cultural or news content, as opposed to something that's explicitly designed to promote and incite hatred.

4:15 p.m.

Conservative

Dave MacKenzie Conservative Oxford, ON

A couple of weeks ago a group of youngsters attacked and beat a woman in a park. I believe one was only 13, and I think the rest of them were young as well. It showed up on the news. Would that end up in a YouTube video?

4:15 p.m.

Head, Government Affairs and Public Policy, Google Canada

Colin McKay

Speaking generally and not to that specific instance, if that video were uploaded to YouTube, it would violate our policies and would be taken down. If someone tried to upload it again, the digital fingerprint we had created would allow us to automatically pull it down.

How a video like that is shown in the news is a very difficult question. It's especially relevant not just to personal attacks, but also to terrorist attacks. In some ways, we end up having to evaluate what a news organization has determined is acceptable content. In reviewing it, we have to be very careful that it's clear to the viewer that this is part of a commentary, either by a news organization or by another organization that places that information in context.

Depending on the length and type of the video, it may still come down.

4:15 p.m.

Conservative

Dave MacKenzie Conservative Oxford, ON

Okay. I appreciate that, because one of the things that I think did occur was that it showed up over and over on the news. If you say that Google doesn't accept that video on YouTube, I think that's very appropriate. I'm not sure how we deal with it in everyday newscasts, so in some respects, I think you're ahead of where we are with the news.

Is that equally true of other mediums? We're talking about videos. Can somebody google—as opposed to YouTube—some hateful speech that takes place? I don't know if “censor” is the right word, but do you have a means to take it down or locate it?

4:15 p.m.

Head, Government Affairs and Public Policy, Google Canada

Colin McKay

I was being very specific about YouTube, because that's a platform to which people consciously upload content. In “search” we apply similar processes, but they have to be broader: we are providing answers to questions posed to us by users about specific instances and events. If a news organization is presenting information in this way, it will surface in the normal way within our systems.

With specific elements, if there are specific speeches or pieces of content that are illegal within a country, then those will be taken down. We follow the law and legislation of the countries within which we operate.

4:20 p.m.

Conservative

Dave MacKenzie Conservative Oxford, ON

Okay. Thank you.

I'll pass to Mr. Barrett.

4:20 p.m.

Liberal

The Chair Liberal Anthony Housefather

You have two minutes, Mr. Barrett.

4:20 p.m.

Conservative

Michael Barrett Conservative Leeds—Grenville—Thousand Islands and Rideau Lakes, ON

Thanks.

Mr. McKay, Mr. MacKenzie satisfied my curiosity as it relates to this, but I do have a question. If Google has said this year that it is not planning to allow political ads, is it the intent of Google to allow political ads in the next election? Or is the current plan just not workable ever?

4:20 p.m.

Head, Government Affairs and Public Policy, Google Canada

Colin McKay

We were faced with a difficult decision. The legislation was passed in December, and we had to have a system in place for the end of June. We went through the evaluation internally as to whether or not we could take political ads in Canada within that time frame, and it just wasn't workable.

The reality around transparency in political ads is that we already have products in the United States—and we're rolling them out elsewhere—that provide transparency around political advertising. Those products are evolving as we go through election after election: Europe just had one, India just had one, and Brazil just had one. Our goal is to continue developing those products to the point where we hope they will reach parity with what's identified in the Elections Act.

4:20 p.m.

Conservative

Michael Barrett Conservative Leeds—Grenville—Thousand Islands and Rideau Lakes, ON

Do you have an expectation for a timetable as to when, based on the current legislation, you think Google would be able to comply?

4:20 p.m.

Head, Government Affairs and Public Policy, Google Canada

4:20 p.m.

Conservative

Michael Barrett Conservative Leeds—Grenville—Thousand Islands and Rideau Lakes, ON

Okay.

My next question has to do with public safety. Rural Canadians have expressed concerns that mapping software that often relies on Google Maps doesn't identify rural streets. That can pose problems for emergency services. Is there a mechanism or a plan for a mechanism to be made available to rural Canadians to be able to identify to Google either missing streets or missing mapping data for instances like the one I mentioned?

4:20 p.m.

Head, Government Affairs and Public Policy, Google Canada

Colin McKay

You can right now. On Google Maps, you can use the feedback mechanism to flag a particular element of the map. You can indicate that a road is closed indefinitely and just isn't marked on the map, or that there has been development since we last mapped the area and there is now a municipal building or some other facility that needs to be recognized. You can send those signals to us through the mapping product; that is actually the quickest way to do it. Those feedback comments go directly to the mapping team and are then evaluated for inclusion.

4:20 p.m.

Conservative

Michael Barrett Conservative Leeds—Grenville—Thousand Islands and Rideau Lakes, ON

Thanks very much for your answers to my questions.

4:20 p.m.

Liberal

The Chair Liberal Anthony Housefather

Thank you very much.

Mr. McKinnon.

4:20 p.m.

Liberal

Ron McKinnon Liberal Coquitlam—Port Coquitlam, BC

Thank you for being here. I'm very interested to hear about the AI work you're doing to track down malicious content and so forth. I'm interested more particularly in tracking the provenance of such content. I submit that anonymity can be a big problem in encouraging bad behaviour online. I understand that Google has a very broad universe in which it operates. It has many different products.

I'm most particularly interested in commentary. I'm wondering whether Google has considered allowing, though not necessarily requiring, users to be authenticated, whether by an authentication authority such as Verisign or by more homegrown approaches such as webs of trust like PGP, and then identifying people with an icon of some kind to indicate whether or not they are authenticated. The next part of that would be to allow users to filter out content that came from unauthenticated sources. Do you have any comments on that?
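
As a rough illustration of this proposal, and assuming a hypothetical comment model with a verified flag, badging and filtering could look something like the sketch below. This is not an existing Google or YouTube feature.

```python
# Hypothetical sketch of the member's proposal: tag each commenter as
# authenticated or not, show an icon, and let viewers filter out
# unauthenticated sources.
from dataclasses import dataclass
from typing import List

@dataclass
class Comment:
    author: str
    text: str
    author_verified: bool  # e.g., vouched for by a CA or a web of trust

def render(comment: Comment) -> str:
    # An icon next to the name signals verification status.
    badge = "[verified]" if comment.author_verified else "[unverified]"
    return f"{badge} {comment.author}: {comment.text}"

def visible_comments(comments: List[Comment], verified_only: bool) -> List[Comment]:
    # Viewers could opt to hide content from unauthenticated sources.
    return [c for c in comments if c.author_verified] if verified_only else comments
```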

4:20 p.m.

Head, Government Affairs and Public Policy, Google Canada

Colin McKay

I have a two-part response if you'll be patient with me. I think the first is that if we're speaking specifically about YouTube and a platform where you're able to upload information, there isn't a process of verification/authentication, but you do need to provide some reference points for yourself as an uploader. This can be limited to an email address and some other data points, but it does create a bit of a marker, especially for law enforcement who may want to track back the behaviour of a particular video uploader.

We are very conscious, though, that many users rely on anonymity or pseudonymity to be able to take positions, especially in politically sensitive or socially heightened environments, and particularly if they're advocates of a particular position using our platforms. A process of verification or authentication in those circumstances is actually detrimental to them.

What I will speak to is that in responding to incidents of hate and violent extremist content online, we have made conscious efforts in Google Search, in our Google News product and on YouTube, especially in the moments after a crisis, when reliable, factual content about it isn't yet available, to treat it as our responsibility to focus on the authenticity and authority of the sources reporting and commenting on the crisis.

Within our systems, particularly on YouTube, you will see that if you're looking at a particular incident, the other material recommended to you comes from reliable sources that you have likely had contact with before. We try to send those signals. In addition to making information that's relevant to your query available, we're trying to provide that level of reassurance, if not certainty.