1. Introduction
In the contemporary information era, the evolution of the Internet and social media has significantly transformed how individuals access information and communicate. Studies have shown that personalized recommendation algorithms analyze users' behavioral data to push content that matches their interests and preferences, which not only improves the user experience but also accelerates the efficiency of information dissemination. However, as personalized recommendation has become ubiquitous, scholars have gradually identified its potential negative effects, such as the “Echo Chamber” phenomenon and the polarization of social views. Current investigations emphasize the technical refinement of algorithms and the enhancement of user experience; however, a notable research gap persists regarding the precise mechanisms by which personalized recommendation algorithms induce cognitive bias and opinion polarization at the societal level, along with their long-term ramifications.
This study investigates how personalized recommendation algorithms shape and reinforce user opinions through information filtering, echo chambers, and the spiral of silence, contributing to social polarization. It employs both quantitative and qualitative methods to analyze user behavior across various social platforms, integrating experimental and interview data to uncover how these algorithms influence cognitive patterns. Additionally, the research assesses user reactions and attitude shifts in response to diverse information sources, measuring their impact on viewpoint polarization. The findings offer a theoretical framework for understanding the societal effects of personalized algorithms and practical insights for policymakers, platform developers, and users. By examining the interplay between algorithms and user behavior, the results aim to inform the development of balanced algorithmic strategies that foster information diversity and openness. Ultimately, this research can provide a scientific foundation and policy recommendations to mitigate social division, encourage rational dialogue, and enhance media literacy among users.
2. The Rise of Online Polarization
With the rapid development of technology and the rise of Internet giants, the convenience of contemporary society has greatly improved. Internet users can easily access information and products globally while connecting with like-minded individuals in diverse online communities. Internet platforms can often anticipate users' preferences much as close friends do, offering recommendations for entertainment and products that align with their tastes. Yet beyond content tailored to their interests, users also encounter factions with differing viewpoints, and they often debate these dissenters to counter opposing perspectives. The friendly discussions of the early Internet have gradually given way to today's “cyber witch hunts,” marked by hostile and often extreme views and rhetoric. The swift advancement of the Internet has significantly accelerated the pace and reach of information distribution; however, this trend has concurrently polarized the viewpoints of online users. Polarization of ideas is the tendency to reinforce one's own views in response to opposing opinions, resulting in heightened differences and increased social division.
Frequent Internet users will readily notice a constant stream of social conflicts in today's society, such as antagonism between men and women or between political stances. Once individuals engage with a particular subject on a platform, related topics continuously surface, leading netizens to perceive contemporary society as inundated with analogous issues. This phenomenon can be attributed to the algorithms that Internet companies use for content recommendation. [1] Although these algorithms differ, they all follow the same basic logic: push content that users are likely to enjoy, click on, or engage with. The advanced algorithmic systems of Internet companies provide personalized content recommendations for each user through detailed analysis of user behavior data. The procedure begins with the gathering and examination of behavioral data, including browsing patterns, click-through rates, and search logs. Utilizing this data, the algorithm formulates a comprehensive user profile and anticipates the content likely to resonate with the user. The algorithm then scores and ranks candidate recommendations so that the material most pertinent to the user's preferences is suggested first. Such personalized recommendation not only improves user engagement and satisfaction but also optimizes the recommendation system through a continuous feedback loop. When users engage with the suggested content, fresh data is relayed to the system, enhancing the precision of the recommendations and establishing an ongoing cycle of improvement. [2] This inevitably ensnares users on both sides of an issue in Echo Chambers meticulously crafted by big-data algorithms. Once people have stayed in an Echo Chamber for a long time, attempts to glimpse the outside world by escaping the cocoon can paradoxically reinforce it.
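To make this loop concrete, the following is a minimal illustrative sketch in Python of the collect–profile–rank–feedback cycle described above. It is not any platform's actual system: the class, the topic-based scoring rule, and the learning rate are hypothetical simplifications introduced purely for illustration.

```python
# Minimal sketch of the profile -> predict -> rank -> feedback loop described
# above. Illustrative only; the scoring rule and all names are hypothetical.
from collections import defaultdict

class RecommenderSketch:
    def __init__(self, learning_rate=0.2):
        self.profile = defaultdict(float)  # topic -> estimated interest
        self.lr = learning_rate

    def score(self, item):
        # Predict affinity as the user's learned interest in the item's topic.
        return self.profile[item["topic"]]

    def recommend(self, candidates, k=3):
        # Rank candidates so content closest to the profile surfaces first.
        return sorted(candidates, key=self.score, reverse=True)[:k]

    def feedback(self, item, clicked):
        # Feed each interaction back into the profile, closing the loop:
        # clicks raise a topic's weight, ignored items lower it slightly.
        target = 1.0 if clicked else 0.0
        topic = item["topic"]
        self.profile[topic] += self.lr * (target - self.profile[topic])

# Usage: every click on "gender debate" content makes similar items rank higher.
rec = RecommenderSketch()
pool = [{"id": i, "topic": t} for i, t in
        enumerate(["gender debate", "cooking", "politics", "travel"])]
for _ in range(5):
    for item in rec.recommend(pool, k=2):
        rec.feedback(item, clicked=(item["topic"] == "gender debate"))
print([item["topic"] for item in rec.recommend(pool)])
```

Even in this toy version, a handful of clicks on one topic is enough to push that topic to the top of every subsequent ranking, which is the seed of the Echo Chamber described above.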
In Hachten's The World News Prism: Digital, Social and Interactive, the author describes an experiment. [3] The researchers initially enlisted 1,000 individuals and, after assessing their political ideologies via a structured questionnaire, set up two automated social media accounts. One of these accounts consistently retweeted content related to the Republican Party or prominent American internet influencers, executing one retweet every hour, around the clock, without interruption. The researchers then asked the Republican subjects to follow the Democratic bot account. The experiment was designed to observe what would happen if subjects were forced to step outside their information bubble. To ensure the validity and fairness of the results, the team also had the two bot accounts repost videos of cute animals or scenic pictures and delete them after a certain period; subjects who could identify the reposted animals or landscapes received an additional $20. To prevent subjects from deliberately adjusting their behavior, recruitment was conducted under a stated purpose that differed greatly from the experiment's true aim, creating a relatively ideal experimental state. The results, obtained a month later, showed that supporters of both parties had not become more moderate after exposure to information outside the Echo Chamber; instead, they had strengthened their existing views. After the experiment, the team interviewed the subjects to better understand the results. One 63-year-old woman had begun using the Internet to connect with her grandchildren and search for cooking recipes. Initially she identified as a politically apathetic centrist, avoiding political discussions and voting for the Democratic Party without enthusiasm, even though her views on social issues aligned more with the Republican Party. However, after a month of following a Republican Party bot, she became a committed Democrat, stating that her views diverged sharply from Republican positions. This shift was prompted by her exposure to frequent personal attacks against Democrats. Although she had previously lived in an Echo Chamber, it had not been entirely harmful, because such attacks were filtered out as well. Once she stepped out of this cocoon, she felt that a war was being waged and that she had to take a side; the attacks from the opposing party awakened her political identity. This woman is not an exception but a microcosm of many subjects. Another subject, a 40-year-old Republican woman, had previously posted only a thousand or so tweets about current events a year; after following the bot, she began fighting in the comment sections every day. There is no doubt that stepping out of the cocoon made her more extreme.
In everyday life in conventional society, discussions of various topics generally proceed with mutual respect and rational analysis, while exaggerated expressions and shocking comments are typically confined to online forum communities. This raises the question of why extreme remarks are found primarily within digital environments. [4]
3. The Silent Majority
It is possible that the extreme opposing views people see on the Internet are not actually that numerous in total; most Internet users are ordinary people with lives of their own, and they constitute the silent majority. A Pew Research Center report shows that users who frequently post political tweets account for only 6% of all users, yet these 6% produce 20% of all tweets and 73% of all political tweets. This indicates that a significant proportion of the population consists of quiet moderates, while social media platforms are predominantly shaped by the most vocal segment of active users. [5]
There are several reasons why the silent majority does not speak up. First, moderates are 40% more susceptible to online harassment than extremists. The rationale is straightforward: moderates tend to refrain from attacking those who dissent from extreme ideologies, whereas extreme netizens readily attack moderates whose positions appear unsteady. A classic example is that in a discussion polarized between men and women, a moderate advocating mutual respect is often attacked by the extremists in both camps and may even have their private information disclosed. This illustrates the spiral of silence: moderates have many worries. Extremists may already be under significant stress in their daily lives, keeping them in a heightened state of mental tension, whereas moderates typically use the Internet and social media for leisure or information and perceive the risks of posting as outweighing the benefits. An intuitive example: a working person will usually avoid publicly posting happy personal events on social media, such as camping, buying a new car, or traveling; even when they do wish to record an uplifting moment, they restrict its visibility so that as few people as possible see it. Because what people share is often personal opinion, and interpersonal relationships are complicated, a post can easily provoke resentment in others. The moderate faction, burdened with such worries, therefore hides its views. And when members of the silent majority do try to argue, they find the extremist faction united and unwilling to debate the topic logically. On the Chinese Internet, radical individuals often disclose others' personal information, unearthing identities and addresses to intimidate those with differing viewpoints. [6] Challenging extremists may therefore land moderates in dangerous situations such as having their information exposed. Since normal communication with extremists is all but impossible, moderates conclude that debating on the Internet brings only harm, and they gradually withdraw from the discussion. The Internet thus grows progressively more radical as extremist discourse increasingly prevails online, and these often misleading assertions crowd out rational dialogue, pressuring individuals to align with polarized perspectives. [7]
It can be seen that while advanced network and information technology has brought us convenient lives and rich entertainment, it has also added fuel to the fire of modern fast-paced life. It is understandable that Internet platforms want to generate revenue by keeping people on their platforms as long as possible through recommendation algorithms; the potential societal impact of this unchecked growth was simply not adequately considered when personalized recommendation algorithms were first designed. While these algorithms make it easier for users to access information, they also narrow users' horizons. Whether a better technical solution exists to balance personalized recommendation and information diversity has become an issue that must be addressed. Guillaume, a French engineer who worked on YouTube's early recommendation algorithm, has explained that the platform's video recommendations initially depended on view counts. The platform quickly identified the problem of clickbait, in which creators use attention-grabbing headlines to generate clicks regardless of whether the content matches the title. In response, Guillaume and his team implemented a new algorithm that optimized for total watch time instead. This shift proved very beneficial for YouTube, whose success is largely due to its AI recommendation algorithm. Yet optimizing for watch time keeps viewers locked onto ever more of the same content; this is precisely the filter bubble effect, and although the platform has long been aware of it, it remains unmoved in order to generate revenue.
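To illustrate the difference between the two ranking objectives Guillaume describes, here is a minimal hypothetical comparison. The videos, click counts, and watch times are invented for illustration and do not reflect YouTube's actual ranking system.

```python
# Minimal sketch of the two ranking objectives described above: ranking by
# raw clicks rewards clickbait, while ranking by total watch time rewards
# content that actually holds viewers. All fields and figures are hypothetical.
videos = [
    # A clickbait video: many clicks, but viewers leave after ~20 seconds.
    {"title": "You WON'T BELIEVE this!", "clicks": 90_000, "avg_watch_s": 20},
    # A substantive video: fewer clicks, but viewers stay for ~9 minutes.
    {"title": "Calm long-form explainer", "clicks": 30_000, "avg_watch_s": 540},
]

by_clicks = max(videos, key=lambda v: v["clicks"])
by_watch_time = max(videos, key=lambda v: v["clicks"] * v["avg_watch_s"])

print("Click-based winner:", by_clicks["title"])      # the clickbait wins
print("Watch-time winner: ", by_watch_time["title"])  # the explainer wins
```

The watch-time objective does solve the clickbait problem, but it replaces it with a new incentive: whatever keeps a particular viewer watching longest is exactly what gets recommended to that viewer again.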
4. Navigating Filter Bubbles and Promoting Diversity
As Guillaume highlighted, the filter bubble phenomenon can be traced to the AI-driven video recommendation algorithm his team developed. [8] The phenomenon confines users to a closed information space, constantly pushing content that matches their preferences and thereby limiting their exposure to diverse information. Guillaume illustrated this with cat videos on YouTube: the more cat videos a viewer watches, the more similar the recommended cat videos become. Although AI video recommendation algorithms are effective, Guillaume emphasizes that if viewers are consistently directed to one side of a political issue, subsequent recommendations will perpetuate biased views and exacerbate political polarization. A viewer inside a filter bubble is thus pushed toward a narrow, polarized outlook that distorts their perception of reality.
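The narrowing dynamic of the cat-video example can be sketched with a short, purely illustrative simulation. The topics, the initial weights, and the 10% feedback boost are hypothetical and do not represent YouTube's real algorithm; the point is only that a small initial preference, fed back into the ranking, comes to dominate the recommendation mix.

```python
# Deterministic sketch of the narrowing Guillaume describes: each time the
# viewer watches the top-ranked topic, that topic's weight grows, so the
# recommendation mix collapses toward it. All numbers are hypothetical.
weights = {"cats": 1.1, "news": 1.0, "music": 1.0, "sports": 1.0}

def shares(w):
    # Normalize weights into the fraction of recommendations each topic gets.
    total = sum(w.values())
    return {t: round(v / total, 3) for t, v in w.items()}

print("round 0 recommendation mix: ", shares(weights))
for step in range(30):
    watched = max(weights, key=weights.get)  # viewer clicks the top recommendation
    weights[watched] *= 1.1                  # feedback: boost what was watched
print("round 30 recommendation mix:", shares(weights))
```

Starting from a mild taste for cat videos (about 27% of the mix), thirty rounds of feedback push cats to roughly 86% of all recommendations; substitute a political stance for "cats" and the same arithmetic yields the one-sided feed described above.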
While algorithmic recommendations significantly improve the tailored experience for users, the resultant Echo Chamber effect and ideological polarization must not be overlooked. [9] In this process, the media literacy and responsibility of users themselves are particularly important. Users should not passively accept the content recommended by algorithms, but should actively seek diverse information sources to break away from a singular perspective. However, user awareness alone is not enough to address this issue. Social media platforms and governments should also bear corresponding responsibilities. Platforms have the capability to modify their recommendation algorithms to proactively deliver a broader range of content, thereby fostering user engagement with varied viewpoints. [10] At the same time, policymakers should consider guiding platforms to reduce the spread of extreme information through regulations and create a more open and inclusive information environment. Only in this way can personalized recommendations and information diversity be balanced to avoid further polarization of social opinion.
5. Conclusion
In summary, Internet companies use algorithms to provide users with personalized content. This mechanism improves user experience and engagement, but it also locks users into an Echo Chamber without their realizing it. When Internet users can only access information that agrees with their views and reject dissenting voices, their perceptions and judgments gradually become one-sided and monotonous, and they lose their tolerance and understanding of different opinions. This Echo Chamber effect gradually fragments online discourse, exacerbates the polarization of social opinion, and shrinks the space for rational debate. Addressing the challenges posed by Echo Chambers requires a multifaceted approach, including promoting media literacy, encouraging exposure to diverse perspectives, and fostering platforms that prioritize balanced discourse. Only through such efforts can we hope to bridge the divides that have become increasingly pronounced in our digital age.
References
[1]. Rabb, N., Cowen, L., & de Ruiter, J. P. (2023). Investigating the effect of selective exposure, audience fragmentation, and echo-chambers on polarization in dynamic media ecosystems. Applied Network Science, 8(1), 78.
[2]. Wang, X., & Song, Y. (2020). Viral misinformation and echo chambers: The diffusion of rumors about genetically modified organisms on social media. Internet Research, 30(5), 1547-1564.
[3]. Hachten, W. A. (2015). The World News Prism: Digital, Social and Interactive. John Wiley & Sons, Incorporated.
[4]. Kratzke, N. (2023). How to find orchestrated trolls? A case study on identifying polarized Twitter echo chambers. Computers, 12(3), 57.
[5]. Eceiza, E. U., & Ibañez, I. E. (2021). Polarization and echo chambers in the #LeyDeEutanasia debate on Twitter. Revista De Comunicación De La SEECI, (54), 187-204.
[6]. Gao, Y., Liu, F., & Gao, L. (2023). Echo chamber effects on short video platforms. Scientific Reports, 13(1), 6282.
[7]. Avin, C., Daltrophe, H., & Lotker, Z. (2024). On the impossibility of breaking the echo chamber effect in social media using regulation. Scientific Reports, 14(1), 1107.
[8]. Nguyen, C. T. (2020). Echo chambers and epistemic bubbles. Episteme, 17(2), 141-161.
[9]. Wolfowicz, M., Weisburd, D., & Hasisi, B. (2023). Examining the interactive effects of the filter bubble and the echo chamber on radicalization. Journal of Experimental Criminology, 19(1), 119-141.
[10]. Jasny, L., Waggle, J., & Fisher, D. R. (2015). An empirical examination of echo chambers in US climate policy networks. Nature Climate Change, 5(8), 782-786.