I think from our point of view, I'd back away from bots to a wider perspective, which is that, across our systems, we've long had experience with automated attacks on our infrastructure as well as our services. What we have focused on over time is providing signals to our users that they are being subjected to an automated attack that is trying either to compromise their Google or Gmail account or to present misinformation or disinformation to them. That goes all the way back to providing notices to users that they could be subject to a state-sponsored attack on their Gmail account.
Through the sort of deep-level analysis that I described, which looks at videos and, more broadly, at activity across our infrastructure, we are trying to identify both systemic attempts to breach the security of our systems and attempts to artificially raise the profile and popularity of content, whether on Search or on YouTube, and to battle that. From our point of view, it's a very different context from the other services, but it's something in which we've historically invested a lot of money and time, both in combating these attacks and in providing flags to our users so they're aware that they're being subjected to them, or that there's an attempt to influence them in this way.