Through all of the whistle-blower data that has come out, and from the whistle-blowers themselves telling the story of what happens behind the scenes at Facebook, we've seen pretty conclusively that internal teams identify problems like polarization and hate speech. When they propose solutions, executives tell them not to implement them because it would hurt engagement, or they discover that some of the things they do to increase engagement are in fact driving polarization. The company moves forward with those decisions anyway, because engagement is money for them. Platforms like Facebook and Twitter have a built-in incentive to drive engagement at all costs.
No, they are not doing enough to combat these problems. I know the government is looking at a piece of online safety legislation right now. It would have been very effective five years ago, and it's still going to be effective and important, because when people get involved in ideologically motivated violent extremism, far-right organizing or COVID conspiracies, they don't start on the weird fringe platforms like Telegram. They start on the Facebooks and the Twitters of the world.
If we can stop people from connecting with that misinformation and disinformation, we can help a lot of families who are dealing with their grandmother, their uncle or their aunt who's been swept up into this alternate reality that's causing a lot of trouble.
There's still a lot that we can accomplish with the platforms, but we need to change the incentives. We need to make it so that they act responsibly.
They've had 10 years to figure out how to do it themselves. Unfortunately, nobody really likes the idea of government having to step in and tell an industry what to do. Everybody bristles at that a little, but we have to, because, quite frankly, the status quo is untenable.