We do know there are cases in which discussion of certain topics, such as sexual violence, is believed to be down-ranked, and that algospeak was developed as a way to keep talking about them. Discussion of sexual orientation and gender identity is likewise believed to be down-ranked on some apps. Again, I say "believed" because we cannot know for certain, given the lack of transparency.
In many cases, even the people who operate these platforms don't necessarily know, because these are not hand-programmed rules; they're machine-learning models trained on datasets. As a result, they can very easily encode existing biases without their operators being aware of it or intending it.
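To make that mechanism concrete, here is a minimal, purely illustrative sketch, not any real platform's system: the posts, labels, and topic are invented. It shows how a ranking model trained on historical moderation decisions can absorb those decisions' biases, suppressing any post that mentions a sensitive topic, even though no one ever wrote an explicit rule to do so.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Invented training data: label 1 = "promote", 0 = "suppress".
# Suppose past moderation disproportionately removed posts that merely
# *mention* a sensitive topic, even supportive or educational ones.
posts = [
    "new recipe for lemon cake",            # promoted
    "weekend hiking photos",                # promoted
    "resources for survivors of assault",   # suppressed historically
    "support group for assault survivors",  # suppressed historically
    "tips for growing tomatoes",            # promoted
    "discussion of assault prevention",     # suppressed historically
]
labels = [1, 1, 0, 0, 1, 0]

# Train a simple classifier on the biased history.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(posts)
model = LogisticRegression().fit(X, labels)

# A new, clearly benign post inherits the bias: the model assigns it a
# low "promote" probability simply because it contains the word
# "assault" -- no human programmed that behaviour explicitly.
test = vectorizer.transform(["educational thread about assault statistics"])
print(model.predict_proba(test)[0][1])  # low probability of promotion
```

The point of the sketch is that the suppression lives in the training data, not in the code: nothing in the program refers to the topic at all, which is exactly why operators can be unaware of what their own systems have learned.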