To further refine my point, it's important to realize that YouTube is a platform for content creators, so essentially the people on YouTube are people trying to build an audience and a community around shared interests. The vast majority of them have given you enough information to verify, with some level of certainty, their authority if not their actual identity, whether you're talking about repairing small engines, model trains, or political commentary, and whether you're talking about traditional news organizations or purely online news organizations.
For the vast majority of material or content, you are operating in an environment where you are able to identify and then qualify what you are looking at. A lot of my opening statement, as well as a lot of our online and automated processes, is focused on the immediate response to a crisis, or on attempts to incentivize violent extremism or hatred online, where we're trying to do exactly what you're describing. That is to say: wait a second, what's the outlier, who's the uploader, which user of our service is trying to pursue a negative outcome, and can we identify them, qualify them, and then apply our policies against them, whether that means limiting the availability of those videos or taking them down from the system?
We are trying to pursue that goal, not just within the context of authentication.