In the world I foresee over the next couple of years, the disinformation content we've seen so far will become AI-generated and escalate by a factor of 10 or 100. The solution I foresee is similar but uses humans: domain experts in a given community would identify disinformation attacks, and from that, AI could be trained to detect such content within that one specific community. That's the research and that's the model we've been working with, through different grants and funding opportunities.
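To make that expert-in-the-loop idea concrete, here is a minimal sketch, assuming a workflow in which domain experts hand-label a small set of posts from their own community and a simple text classifier is trained on those labels. The example posts, labels, and model choice (TF-IDF plus logistic regression) are illustrative assumptions, not the project's actual pipeline.

```python
# Minimal sketch: train a per-community classifier on expert-labeled posts.
# All posts and labels below are hypothetical placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical expert-labeled examples (1 = disinformation, 0 = not)
posts = [
    "Miracle cure suppressed by doctors, share before it's deleted!",
    "Local clinic extends weekend vaccination hours.",
    "Secret document proves the vote tally was fabricated.",
    "City council publishes the official vote tally on its website.",
]
labels = [1, 0, 1, 0]

# Fit a simple text classifier on the expert labels
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(posts, labels)

# Score a new post from the same community
score = classifier.predict_proba(["Share this banned video before it disappears!"])[0][1]
print(f"Estimated probability of disinformation: {score:.2f}")
```

In practice the labeled set would be far larger and community-specific, which is the point of the model: the experts supply the judgment, and the classifier scales it.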
I think this solution will help us detect and model what is happening. We were able to create charts showing, for example, "This person is heavily linked to that one. Disinformation is coming from this person, but these other people are also linked to them." That helps us trace where the disinformation is coming from.
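A rough sketch of that kind of link analysis, assuming accounts are modeled as nodes and "reposted or amplified the same disinformation item" as directed edges; the account names, edges, and choice of centrality measures here are made-up illustrations, not the actual charts described above.

```python
# Minimal sketch: build a directed graph of accounts and look for likely origin points.
# Account names and edges are hypothetical.
import networkx as nx

G = nx.DiGraph()
# Edge A -> B means account B reposted content that originated with account A
G.add_edges_from([
    ("account_A", "account_B"),
    ("account_A", "account_C"),
    ("account_B", "account_D"),
    ("account_C", "account_D"),
    ("account_E", "account_D"),
])

# Accounts feeding many repost chains are candidate sources
by_out_degree = sorted(G.out_degree(), key=lambda pair: pair[1], reverse=True)
print("Candidate sources by out-degree:", by_out_degree[:3])

# PageRank on the reversed graph also highlights accounts that content flows out of
print("PageRank on reversed graph:", nx.pagerank(G.reverse()))
```

The same structure also exposes the secondary links the quote mentions: accounts that are not the origin but are tightly connected to it show up as neighbors of the high-scoring nodes.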