As for the profitability of these platforms, they operate in the fastest-growing industry. Once Facebook and others figured out that you could monetize the residue, the data trails, of users, they turned those trails into enormous profits. They have also developed strategies to undermine different nation-states, and the laws in those nation-states, which have their own obligations to ensure that people are well educated and have access to the truth.
What I'm arguing is that Facebook has a duty to prioritize accurate, truthful information. We cannot achieve that if it is blocking all reputable news organizations. What we also know from research is that when news isn't available, something else fills the void. In that void, we know there is much more low-quality information of many different kinds, particularly information from bad actors.
The last thing I'll say is that technology is the policy. It's not that we have an absence of regulation; rather, the technology arrives in the world, and if we fail to regulate it, it exists and makes its own policy. Facebook, for instance, decided that you were going to be able to target individuals with bespoke advertising. That decision, importantly, meant that civil rights were going to be violated, because advertisers could target certain age groups and earning brackets to get their messages across for things like credit and the purchase of health insurance or other kinds of insurance. We know there are broad civil rights effects that flow from the way technology like Facebook is designed and, further down the pipeline, from the kinds of services people end up receiving.
Importantly, technology becomes the policy. As a result, it becomes very hard for regulators to come in a year, two years or 10 years after a product has been on the market and say, “Wait. Now we understand the harms and we want to do something about them.”