We do have evidence that recommendation algorithms lead to self-censorship, because one of the things they do is to favour certain content and down-rank, or shadow-ban, other content. This is not transparent to users, to the people who are participating on these platforms, so in many cases people will be particularly careful to avoid using terms that they think might get them down-ranked.
Sometimes they will use so-called “algospeak”, which is coded language in which a substitute word, understood by people in the community, stands in for a particular word that they expect would get them down-ranked. Of course, this means that people who are not yet members of the community don't have access to that conversation.
We also know that when creating content, people, particularly commercial content creators, feel very strong pressure to produce not necessarily the content that they want to express, but the content that will be favoured by the algorithm.