Thank you very much, Mr. Chair.
Members of the committee, my name is Jeannette Patell. I'm responsible for government affairs and public policy at Google in Canada.
I'm pleased to be joined remotely today by my colleague Shane Huntley, a senior director of the Google Threat Intelligence Group.
Earlier this year, as part of our ongoing commitment to protect elections, Google created the Google Threat Intelligence Group, which brings together the industry-leading work of our Threat Analysis Group and the Mandiant intelligence division of Google Cloud.
Google Threat Intelligence helps identify, monitor and tackle threats ranging from coordinated influence operations to cyber-espionage campaigns across the Internet. On any given day, the Threat Analysis Group, or TAG, tracks and works to disrupt more than 270 government-backed attacker groups from more than 50 countries. It publishes its findings each quarter. Mandiant similarly shares its findings on a regular basis and has published more than 50 blog posts to date this year alone, analyzing threats from Russia, China, Iran, North Korea and the criminal underground. We have shared some of our recent reports with this committee, and Shane will be happy to answer your questions about these ongoing efforts.
Google's mission is to organize the world's information and make it universally accessible and useful. We recognize this is especially important when it comes to our democratic institutions and processes. We take seriously the importance of protecting free expression and access to a range of viewpoints. We recognize the importance of enabling the people who use our services to speak freely about the political issues most important to them.
When it comes to the integrity and security of elections, our work is focused on three key areas. First and foremost is continuing to help people find helpful information from trusted sources through our products, which are strengthened through a variety of proactive initiatives, partnerships and responsible safeguards. Beyond designing our systems to return high-quality information, we also build information literacy features into Google Search that help people evaluate and verify information, whether it's something they saw on social media or heard in conversations with family or friends.
For example, our About This Image feature in Google Search helps people assess the credibility and context of images they see online by identifying an image's history and how it has been used and described on other web pages, as well as identifying similar images. We also continue to invest in state-of-the-art capabilities to identify AI-generated content. We have launched SynthID, an industry-leading tool that watermarks and identifies AI-generated content in text, audio, video and images. On YouTube, when creators upload content, we now require them to indicate whether it contains altered or synthetic materials that appear realistic, which we then label appropriately.
We will soon begin to use C2PA's Content Credentials, a new form of tamper-evident metadata, to identify the provenance of content across Google Ads, Google Search and YouTube and to help our users identify AI-generated material.
When it comes to our own generative AI tools, out of an abundance of caution we're applying restrictions on certain election-related queries on Gemini and connecting users directly to Google Search for links to the latest and most accurate information.
The second area of focus is working to equip high-risk entities, like campaigns and elected officials, with extra layers of protection. Our Advanced Protection Program and Project Shield are free services that leverage our strongest set of cyber protections for high-risk individuals and entities, including elected officials, candidates, campaign workers and journalists.
Finally, we focus on safeguarding our own platforms from abuse by actively monitoring and staying ahead of abuse trends through the enforcement of our long-standing policies regarding content that could undermine democratic processes.
Maintaining and enforcing responsible policies at scale is a critical part of how we protect the integrity of democratic processes around the world. That's why we've long invested in cutting-edge capabilities, strengthened our policies and introduced new tools to address threats to election integrity. At the same time, we continue to take steps to prevent the misuse of our tools and platforms, particularly attempts by foreign state actors to undermine democratic elections.
The Google Threat Intelligence teams, including the Threat Analysis Group founded by my colleague Shane Huntley, are central to this work. They often receive and share important information about malicious activity with national security agencies and local law enforcement, as well as our industry peers, so that they can investigate and take appropriate action.
Maintaining the integrity of our democratic processes and institutions is a shared challenge. Google, our users, industry, law enforcement and civil society all have important roles to play, and we are deeply committed to doing our part to keep the digital ecosystem safe and reliable.
We look forward to answering your questions and continuing our engagement with this committee as you study these important questions.