It depends on the platform. Twitter has traditionally taken a much lighter hand, though it is now engaging more heavily because of the pressure it has come under. Facebook has always engaged more heavily, but it, too, is facing pressure: it was recently criticized for deactivating certain Palestinian pages under pressure from the Israeli government, and for deactivating Kurdish websites under pressure from the Turkish government.
It's a very challenging political situation once you expect these intermediaries to be the arbiters of acceptable content. It can also be problematic because in some cases there is no proper appeal mechanism. If the government orders you to make a particular decision, if it says, "This content is illegal, so don't publish it," you can appeal; there are procedures in place for that. If Facebook is the one telling you what is or is not acceptable, however, you don't get the same kinds of procedural protections.
This is why I'm leery of governments off-loading that responsibility. In some cases, and I'll point to South Korea as one, a government does this as a way to impose censorship indirectly: it leans very heavily on tech companies to do the content moderation for it. The result is highly abusive content controls.