Thank you, Chair.
Thank you to all members of the committee for the opportunity to speak with you today.
I don't mind the delay. It's the business of Parliament, and I'm just happy to be a part of it today.
As the chair just mentioned, my name is Colin McKay, and I'm the head of government affairs and public policy for Google in Canada.
We, like you, are deeply troubled by the increase in hate and violence in the world. We are alarmed by acts of terrorism and violent extremism like those in New Zealand and Sri Lanka. We are disturbed by attempts to incite hatred and violence against individuals and groups here in Canada and elsewhere. We take these issues seriously, and we want to be part of the solution.
At Google, we build products for users from all backgrounds who live in nearly 200 countries and territories around the world. It is essential that we earn and maintain their trust, especially in moments of crisis. For many issues, such as privacy, defamation or hate speech, local legislation and legal obligations may vary from country to country. Different jurisdictions have come to different conclusions about how to deal with these complex issues. Striking this balance is never easy.
To stop hate and violent extremist content online, tech companies, governments and broader society need to work together. Terrorism and violent extremism are complex societal problems that require a response with participation from across society. We need to share knowledge and to learn from each other.
At Google, we haven't waited for government intervention or regulation to take action. We've already taken concrete steps to respond to how technology is being used as a tool to spread this content. I want to state clearly that every Google product that hosts user content prohibits incitement to violence and hate speech against individuals or groups based on particular attributes, including race, ethnicity, gender and religion.
When addressing violent extremist content online, our position is clear: we agree that action must be taken. Let me take some time to speak to how we've been working to identify and take down this content.
Our first step is vigorously enforcing our policies. On YouTube, we use a combination of machine learning and human review to act when terrorist and violent extremist content is uploaded. This combination makes effective use of the knowledge and experience of our expert teams, coupled with the scale and speed offered by technology.
In the first quarter of this year, for example, YouTube manually reviewed over one million videos that our systems had flagged for suspected terrorist content. Even though fewer than 90,000 of them turned out to violate our terrorism policy, we reviewed every one out of an abundance of caution.
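To give a sense of the general shape of such a pipeline, here is a minimal sketch in Python. The classifier, thresholds and names are hypothetical illustrations only, not a description of our production systems.

```python
# Hypothetical sketch of a two-stage pipeline: a machine-learning model
# flags suspected terrorist or violent extremist content, and human
# experts review the flagged items. All names and thresholds are invented.
from dataclasses import dataclass


@dataclass
class Video:
    video_id: str
    title: str


def classifier_score(video: Video) -> float:
    """Placeholder for an ML model that scores a video between 0.0 and 1.0
    for suspected terrorist or violent extremist content."""
    return 0.0  # a real system would run a trained model here


REVIEW_THRESHOLD = 0.5            # send to human review above this score
CLEAR_VIOLATION_THRESHOLD = 0.98  # only unambiguous cases skip review


def triage(video: Video, review_queue: list[Video]) -> str:
    """Decide what to do with a newly uploaded video."""
    score = classifier_score(video)
    if score >= CLEAR_VIOLATION_THRESHOLD:
        return "remove"             # clear policy violation
    if score >= REVIEW_THRESHOLD:
        review_queue.append(video)  # expert reviewers make the final call
        return "queued_for_review"
    return "no_action"
```

The point of the combination is captured by the two thresholds: machines provide scale and speed, while the human review queue supplies judgment on the borderline cases.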
We complement this by working with governments and NGOs on programs that promote counter-speech on our platforms—in the process elevating credible voices to speak out against hate, violence and terrorism.
Any attempt to address these challenges requires international coordination. We were actively involved in the drafting of the recently announced Christchurch Call to Action. We were also one of the founding companies of the Global Internet Forum to Counter Terrorism, an industry coalition that identifies digital fingerprints of terrorist content across our services and platforms, shares information and sponsors research on how best to curb the spread of terrorism online.
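The hash-sharing idea behind those digital fingerprints can be sketched roughly as follows. This is a simplified illustration with an exact hash standing in for the perceptual hashing a real system would use, and the names are hypothetical.

```python
# Simplified sketch of matching uploads against a shared database of
# "digital fingerprints" (hashes) of known terrorist content.
# SHA-256 is used only to keep the example self-contained; a production
# system would use perceptual hashing so edited copies still match.
import hashlib

shared_hash_database: set[str] = set()  # fingerprints contributed by member companies


def fingerprint(content: bytes) -> str:
    return hashlib.sha256(content).hexdigest()


def register_known_content(content: bytes) -> None:
    """Add a known piece of terrorist content to the shared database."""
    shared_hash_database.add(fingerprint(content))


def matches_known_content(upload: bytes) -> bool:
    """Check a new upload against the shared fingerprint database."""
    return fingerprint(upload) in shared_hash_database
```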
I've spoken to how we address violent extremist content. We follow similar steps when addressing hateful content on YouTube. We have tough community guidelines that prohibit content that promotes or condones violence against individuals or groups based on race, ethnic origin, religion, disability, gender, age, nationality, veteran status, sexual orientation or gender identity. This extends to content whose primary purpose is inciting hatred on the basis of these core characteristics. We enforce these guidelines rigorously to keep hateful content off our platforms.
We also ban abusive videos and comments that cross the line into a malicious attack on a user, and we ban violent or graphic content that is primarily intended to be shocking, sensational or disrespectful.
Our actions to address violent and hateful content, as noted in the Christchurch Call I just mentioned, must be consistent with the principles of a free, open and secure Internet, without compromising human rights and fundamental freedoms, including freedom of expression. We want to encourage the growth of vibrant communities, while identifying and addressing threats to our users and to society more broadly.
We believe that our guidelines are consistent with these principles, even as they continue to evolve. Recently, we extended our policy dealing with harassment, making content that promotes hoaxes much harder to find.
What does this mean in practice?
From January to March 2019, we removed over 8.2 million videos for violating YouTube's community guidelines. For context, over 500 hours of video are uploaded to YouTube every minute. While 8.2 million is a very big number, it represents a small fraction of a very large corpus. Of these removed videos, 76% were first flagged by machines rather than humans, and of those detected by machines, 75% had not received a single view.
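Putting those percentages together, a back-of-the-envelope calculation using only the figures just cited looks like this:

```python
# Rough arithmetic on the Q1 2019 figures cited above.
removed_videos = 8_200_000
machine_flagged = removed_videos * 0.76            # roughly 6.2 million flagged by machines first
removed_before_any_view = machine_flagged * 0.75   # roughly 4.7 million taken down with zero views

print(f"Flagged by machines first: ~{machine_flagged:,.0f}")
print(f"Removed before a single view: ~{removed_before_any_view:,.0f}")
```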
We have also cracked down on hateful and abusive comments, again by using smart detection technology and human reviewers to flag, review and remove hate speech and other abuse in comments. In the first quarter of 2019 alone, we removed 228 million comments that broke our guidelines, and over 99% of them were first detected by our automated systems.
We also recognize that content can sit in a grey area, where it may be offensive but does not directly violate YouTube's policies against incitement to violence and hate speech. When this occurs, our policy is to drastically reduce the video's visibility by making it ineligible for ads, removing its comments and excluding it from our recommendation system.
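As a rough illustration of that reduced-visibility treatment, here is a minimal sketch; the field and function names are hypothetical, not YouTube's actual configuration.

```python
# Hypothetical sketch of limiting a borderline video's visibility without
# removing it: no ads, no comments, and no recommendations.
from dataclasses import dataclass


@dataclass
class VideoSettings:
    ads_eligible: bool = True
    comments_enabled: bool = True
    recommendable: bool = True


def apply_limited_features(settings: VideoSettings) -> VideoSettings:
    """Apply the reduced-visibility treatment to a borderline video."""
    settings.ads_eligible = False       # ineligible for ads
    settings.comments_enabled = False   # comments removed
    settings.recommendable = False      # excluded from the recommendation system
    return settings
```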
Some have questioned the role of YouTube's recommendation system in propagating questionable content. Several months ago, we introduced an update to our recommendation systems to begin reducing the visibility of even more borderline content that can misinform users in harmful ways, and we'll be working to roll out this change around the world.
It's vitally important that users of our platforms and services understand both the breadth and the impact of the steps we have taken in this regard.
We have long led the industry in being transparent with our users. YouTube released the industry's first community guidelines enforcement report, and we update it quarterly. Google has long published a transparency report with details on content removals across our products, including content removed at the request of governments or by order of law enforcement.
While our users value our services, they also trust them to work well and provide the most relevant and useful information. Hate speech and violent extremism have no place on Google or on YouTube. We believe that we have developed a responsible approach to address the evolving and complex issues that have seized our collective attention and that are the subject of your committee's ongoing work.
Thank you for this time, and I welcome any questions.