Definitely.
Many of the more effective disinformation campaigns that I've studied aren't relying on false information. Instead, they're drawing on harmful identity tropes. They're using ideas around racism, sexism and xenophobia to polarize society, to suppress certain kinds of people from participating, or even to incite violence against particular groups or individuals within societies.
When we're thinking about combating this kind of identity-based disinformation, it's a really tricky challenge, because you can't just slap a label on something sexist on the internet, and you can't simply fact-check racism away. It's very much a long-term human bias problem, so it's going to take a long-term strategy to manage it.
Drawing attention to the fact that these are the tactics and strategies of influence operations today is really important. Platforms can also do more, particularly on the political violence side of things. In the more extreme and egregious cases, and here I'm thinking about Myanmar and the coordinated campaigns against the Rohingya population by the government there, where we've seen violence and even genocide against a particular group of people, it's really important that platforms conduct appropriate human rights assessments and make sure they have enough content moderators with local language skills and local contextual understanding of any given society.