I agree with that statement completely. To me, the challenge is in how you manage it. If you think about it, censorship and moderation were never designed to handle content at the scale at which these Internet platforms operate. In my view, the better strategy is to do the interdiction upstream: to ask the fundamental questions of what role platforms like this have in society and what business model is associated with them. To me, what you really want to do...
My partner, Renée DiResta, is a researcher in this area. She talks about the issue of freedom of speech versus freedom of reach, the latter being the amplification mechanism. On these platforms, what's really going on is that the algorithms find what people engage with and amplify it further. Sadly, hate speech, disinformation and conspiracy theories are, as I said, the catnip that really gets the algorithms humming and gets people to react. In that context, eliminating that amplification is essential.
But how will you go about doing that, and how will you verify that it's been done? To my mind, the simplest way is to prevent the data from getting in there in the first place.