Given the opacity problems I've mentioned previously, it's difficult to say with certainty, across every platform, what content is being boosted more than other content. We have anecdotal evidence at best, and that's part of why we continue to call for greater disclosure and greater auditing of what processes are occurring within these companies. But one thing we do know is that far-right and extremist content has been engaged with over six times more than politically neutral content.
The NYU Ad Observatory, which was kicked off Facebook for doing this research, found many of those pieces of evidence and pointed to the way conservative and far-right content is being amplified more. There's a current effort, at least in the United States, to recharacterize this as an attack, and part of what we have to be very careful about is the way we point to evidence, to make sure we are not making claims we cannot support. When lawmakers say that X or Y type of content is boosted, we have to make sure we have the evidence to show it, and that's why that precursor is necessary.