It's difficult to quantify how users are targeted with different content, because as I mentioned in my opening remarks, there's just so much opacity plaguing these companies, and attempts by researchers or others to glean insights are met with the various tactics I mentioned.
That being said, there are some hallmarks that we've seen over the years. In particular, during election periods, we know that certain user demographics have been targeted with content that makes them afraid to engage and go to the polls. In the United States, users whose identities are Black, Latino or Native American have been targeted with laser-like precision on major tech platforms: Meta, which at the time was Facebook; Twitter; and YouTube. The content that some of these users see really plays into their existing vulnerabilities, fears within communities and distrust of government.
The kernel here is that it always feels credible. People see something online and they trust it. They then become fearful, and the content they are given plays into what their perceived vulnerabilities already are. In the 2020 election in the United States, users were shown content that specifically mentioned that certain law enforcement or others might be at polling locations. That also preyed on fears of violence or intimidation.