Thank you for the opportunity to appear before you today.
My name is Derek Slater, and at Google I help shape the company's approach to information policy and content regulation. I'm joined here by my colleague Colin McKay, who's the head of public policy for Google in Canada.
We appreciate your leadership and welcome the opportunity to discuss Google's approach to addressing our many shared issues.
For nearly two decades, we have built tools that help users access, create and share information like never before, giving them more choice, opportunity and exposure to a diversity of resources and opinions. We know, though, that the very platforms that have enabled these societal benefits may also be abused, and this abuse ranges from spam to violent extremism and beyond. The scrutiny of lawmakers and our users informs and improves our products as well as the policies that govern them.
We have not waited for government regulation to address today's challenges. Addressing illegal and problematic content online is a shared responsibility that requires collaboration across government, civil society and industry, and we are doing and will continue to do our part.
I will highlight a few of the things we're doing today. On YouTube, we use a combination of automated and human review to identify and remove violative content. Over time we have improved, removing more of this content faster and before it's even viewed. Between January and March 2019, YouTube removed nearly 8.3 million videos for violating its community guidelines, and 76% of these were first flagged by machines rather than people. Of those detected by machines, over 75% had never received a single view.
When it comes to combatting disinformation, we have invested in our ranking systems to make quality count; in developing policies, threat monitoring and enforcement mechanisms to tackle malicious behaviours; and in features that provide users with more context, such as fact check or information panels on Google Search and YouTube.
Relatedly, in the context of election integrity, we've been building products for over a decade that provide timely and authoritative information about elections around the world. In addition, we have devoted significant resources to help campaigns, candidates and election officials improve their cybersecurity posture in light of existing and emerging threats. Our Protect Your Election website offers free resources like Advanced Protection, which provides Google's strongest account security, and Project Shield, a free service designed to mitigate the risk of distributed denial of service attacks that inundate sites with traffic in an effort to shut them down.
While industry needs to do its part, policy-makers, of course, have a fundamental role to play in ensuring everyone reaps the personal and economic benefits of modern technologies while addressing social costs and respecting fundamental rights. The governments and legislatures of the nearly 200 countries and territories in which we operate have come to different conclusions about how to deal with issues such as data protection, defamation and hate speech. Today's legal and regulatory frameworks are the product of deliberative processes, and as technology and society's expectations evolve, we need to stay attuned to how best to improve those rules.
In some cases, laws do need updates, for instance, in the case of data protection and law enforcement access to data. In other cases, new collaboration among industry, government and civil society may lead to complementary institutions and tools. The recent Christchurch Call to Action on violent extremism is just one example of this sort of pragmatic, effective collaboration.
Similarly, we have worked with the European Union on its hate speech code of conduct, which includes an audit process to monitor how platforms are meeting their commitments, and on the recent EU Code of Practice on Disinformation. We agreed to help researchers study this topic and to provide a regular audit of our next steps in this fight.
New approaches like these need to recognize relevant differences between services of different purpose and function. Oversight of content policies should naturally focus on content sharing platforms. Social media, video sharing sites and other services that have the principal purpose of helping people create content and share it with a broad audience should be distinguished from other types of services like search, enterprise services, file storage and email, which require different sets of rules.
With that in mind, we want to highlight today four key elements to consider as part of evolving oversight and discussion around content sharing platforms.
First, set clear definitions.
Just as platforms have a responsibility to set clear rules of the road for what is or is not permissible, so too do governments have a responsibility to set out the rules around what they consider to be unlawful speech. Restrictions should be necessary and proportionate, based on clear definitions and evidence-based risks, and developed in consultation with relevant stakeholders. These clear definitions, combined with clear notices about specific pieces of content, are essential for platforms to take action.
Second, develop standards for transparency and best practice.
Transparency is the basis for an informed discussion and helps build effective practices across the industry. Governments should take a flexible approach that fosters research and supports responsible innovation. Overly restrictive requirements like one-size-fits-all removal times, mandated use of specific technologies or disproportionate penalties will ultimately reduce the public's access to legitimate information.
Third, focus on systemic recurring failures rather than one-offs.
Identifying and responding to problematic content is similar, in a way, to maintaining information security. There will always be bad actors, bugs and mistakes. Improvement depends on collaboration across many players using data-driven approaches to understand whether particular cases are outliers or representative of a more significant recurring systemic problem.
Fourth and finally, foster international co-operation.
As today's meeting demonstrates, these concerns and issues are global. Countries should share best practices with one another and avoid conflicting approaches that impose undue compliance burdens and create confusion for customers. That said, individual countries will make different choices about permissible speech based on their legal traditions, history and values, consistent with international human rights obligations. Content that is unlawful in one country may be lawful in another.
These principles are meant to contribute to a conversation today about how legislators and governments address the issues we are likely to discuss, including hate speech, disinformation and election integrity.
In closing, I will say that the Internet poses challenges to the traditional institutions that help society organize, curate and share information. For our part, we are committed to minimizing the content that detracts from the meaningful things our platforms have to offer. We look forward to working with the members of this committee and governments around the world to address these challenges as we continue to provide services that promote and deliver trusted and useful information.
Thank you.