I have a lot to say, much more than the time here allows, but there are a couple of main points.
This has to be done with the help of AI. What we're seeing right now is just a preview. This is going to get significantly worse as disinformation becomes AI-generated. Eventually, AI will have to be used to detect this content, to de-escalate it, and to intervene.
During our studies, we've always employed humans: domain experts, people from the specific communities in which we wanted to detect disinformation. The approach we've used, I think successfully, was to get members of the community to point out examples of disinformation topics, then use those examples to start training an AI model, which can then pick up on the patterns and continue detecting new sources.
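A minimal sketch of what that pipeline could look like, assuming a simple text classifier built with scikit-learn; the example posts, labels, and model choice here are all hypothetical illustrations, not the actual system described above:

```python
# Sketch: seed a classifier with community-flagged examples, then score new content.
# All data, labels, and the model choice are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Examples pointed out by community members (hypothetical placeholders).
seed_posts = [
    "Vaccines contain microchips for tracking people",     # flagged as disinformation
    "Local clinic extends weekend vaccination hours",      # ordinary news
    "5G towers are spreading the virus through the air",   # flagged as disinformation
    "City council approves new broadband infrastructure",  # ordinary news
]
seed_labels = [1, 0, 1, 0]  # 1 = disinformation topic, 0 = not

# TF-IDF features + logistic regression: a simple, transparent starting model.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(seed_posts, seed_labels)

# Score newly collected content; high-scoring items can go back to the
# community experts for review, closing the human-in-the-loop cycle.
new_posts = ["Secret 5G chips found inside new vaccine batches"]
scores = model.predict_proba(new_posts)[:, 1]
for post, score in zip(new_posts, scores):
    print(f"{score:.2f}  {post}")
```

In practice the key point is the loop, not the particular model: community experts flag examples, the model generalizes from them, and its highest-confidence detections are routed back to those experts for confirmation.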