Yes, you're right. Propaganda and misleading information have been around as long as politics has been around. The difference is the speed, the reach, the volume, and the ease with which it can be deployed. That's unprecedented.
When we talk about these hacks, the thing that's being hacked is the human brain, for the most part. People are trying to capture the human brain and direct it.
How do we provide people with more reliable information that they can trust? Part of that is structurally protecting media. That's not just legacy media; it's also making sure there's space for new media, that people have these trusted sources they can go to and know they are legitimate. That part involves some degree of transparency, so that when something is posted online, there's some very easy indication that it's trustworthy or verified.
We discussed this a little in a project I'm working on: red, yellow, or green on a story. The problem, and I don't have an immediate answer to this, is who does the verification? This is the broader epistemic problem. If part of the issue is that we need stuff we can trust, who decides what's trustworthy? That used to be the news media; they were the gatekeepers. Now that's all fallen apart. To some extent that's good news, because you want the stuff democratized, but we just haven't figured out what an alternative model would look like. The best structural answer I have is that we need to protect the media.
At the micro level, you might want to at least have a discussion about how social media posts that can be controlled by Facebook, Twitter, or whomever could bear some sort of marking or system to identify them as trustworthy.