I will simply get through as much as I can. You may have to bear with me a tiny bit. You have my apologies.
As I was saying, that news vacuum is being filled, and I'm here to discuss the risk that disinformation and hate speech, fuelled by these platforms' algorithms, could become the filler.
Harmful algorithms feed the spread of disinformation. The design of these algorithms prioritizes attention and engagement to maximize the number of eyeballs on advertiser content and the time spent viewing it.
My organization researches the amplification of disinformation and hate speech by social media algorithms. Rather than giving people the freedom to choose their own content, algorithms apply methods such as predictive analysis to promote whatever outcome the company determines will maximize its profits.
I just want to repeat that again. Platforms' algorithms don't give you what you want. That's a myth. They give you what they want you to want.
These highly personal, highly invasive systems have resulted in the polarization of economic, democratic and social thought, and are correlated with a rise in radical hate groups, networks and extremism.
Algorithms and “recommendation” systems are fundamental to technology platforms' business models. This commercial sensitivity is one reason technical information about these algorithms is so hard to obtain.
Nevertheless, we have shown through our research a strong relationship between algorithms and the promotion of conspiracist content, hateful content and disinformation. For example, in August 2020, when Instagram added recommendations to its user experience, pushing unsolicited content into users' feeds to extend their time on the platform, CCDH set out to understand what effect this design change had on the prevalence of misinformation and hate speech.
In our 2021 report on algorithms, we found evidence that this design choice actively pushed radicalizing extremist misinformation to users. Once a user had exhausted the latest content from all the accounts they followed, Instagram served new content as an extension of their feed, identifying the user's potential interests from their data and habits. If they were looking at COVID-19 disinformation, it gave them QAnon and anti-Semitic disinformation. If they were looking at content from anti-Semitic users, they were fed anti-vax and COVID-19 disinformation as well.
Our findings illustrate how platforms' algorithms and design choices can quickly lead users from one conspiracy to the next, from questions about the efficacy of a novel vaccine to unrelated conspiracies like QAnon, electoral rigging and anti-Semitic hate.
Meta owns, controls and profits from the Instagram algorithm, which here was shown to amplify dangerous misinformation and conspiracies. That's just one example of the malignant dynamics caused by our reliance on a revenue-maximizing algorithm.