Thank you very much for the question. I will try to keep my comments brief.
Yes, when we think about technological responses, platforms tend to respond in terms of affordances, which is exactly what you mentioned: design fixes such as labelling something as a piece of misinformation. We could entertain the idea of labelling something as hateful or against community standards. I think that's a good starting point. However, we need to ask ourselves: What then?
What happens when we tell a person that something is misinformation or hateful content? The user then needs other skills and tools to be able to, for example, find the original source of the misinformation and feel confident in verifying it. Similarly, with online hate, what then? What happens after we have flagged a piece of hateful content for them?
From our perspective, again, those labels are helpful, but they aren't the end of the story. I think we need other tools and critical thinking skills that will allow people to verify and authenticate information and to respond to online hate.