There are a number of important points in that. First of all, I would draw a distinction between pure fake news and what you might call “hyperpartisan content”. We know, for example, that the Pope didn't endorse Donald Trump. That is clearly someone spreading a known lie, and therefore I put it in a category of bad, harmful content that we want companies like Facebook and Google to act against.
There's then the question of hyperpartisan content, which could be bias or propaganda, as you say. The question is, do people understand why they're receiving it, and can they do anything about receiving it? These problems seem to have gotten worse since Facebook allowed advertising straight into the newsfeed. I asked Mike Schroepfer this: if you don't want to receive political ads, can you stop them? The answer is no. You can be targeted with political messaging, and there's nothing you can do to not receive it. You could receive many of these political messages that have been targeted at you based on psychological profiling of your interests, fears, and concerns, carried out without your knowledge, and you can't stop receiving them.
You may also not know who is sending them to you. What's been exposed in the Internet Research Agency's work is that what you thought could be a community group concerned about an issue you're concerned about is actually someone in St. Petersburg targeting you with propaganda, and you have no way of knowing.
There are questions there about how people can turn off political advertising in the newsfeed if they don't want to receive it, which is a policy issue for Facebook, and also about having more transparency over who is sending you information. If someone in my constituency received a leaflet from me through the door during the election period and they weren't a Conservative voter, they would weigh up what I'd said against the fact that I'm a Conservative. They would consider whether or not to believe it, because they would know I have a biased political opinion as a politician representing a political party. They could make that judgment based on knowledge. You can't do the same with a message on Facebook, because you don't know who is sending it to you.
Also, I don't think enough people understand why they see what they see. They see more of the content they engage with, so what they're seeing doesn't reflect a broad sweep of opinion; they're seeing only the opinions of the people they most agree with, and that's continually reinforced.