To rephrase a little bit, my concern is that we are even thinking in terms of what we expect of them beyond simply following, as you put it, their terms of service. If we look at this from the perspective of the extreme right, all of these attempts feed their narrative. We are essentially handing them the fuel they need. Every attempt to deplatform or to identify content that needs to be shut down allows them to say, see, look, they're afraid of us. They don't want these ideas out there.
It also raises questions about the nature of discourse in a democratic society.
Setting aside what we can identify legally (you can't call for violence; there should be no advocacy of direct violence), in terms of your specific question, I think we have to have more of a tolerance. I know this is not going to be a popular opinion. I don't like the things that I read. I have lived in this world of vitriol and absolutely toxic garbage, but unless content crosses very definitive red lines in terms of the violence it calls for, I have grave concerns about our trying to legislate that, and even greater concerns about requiring any of these social media platforms to start doing that.
I would also caution the committee that this can be interpreted by social media platforms in a variety of ways. From the far right, for example, we have seen requests that what they perceive as extremism, such as Black Lives Matter, should be deplatformed. I don't think anybody in this room would agree with that, but it is a real possibility if we start going down this road of deciding who should and shouldn't be making decisions about what we do and don't want to have access to.