Research Article
Open access

Algorithmic Recommendation and Information Cocoons: Analysis of Information Security Issues on Social Media

Kaiye Yang 1*
  • 1 University of Nottingham Ningbo China
  • * Corresponding author: yangky0701@outlook.com

Abstract

In recent years, social media platforms have increasingly adopted algorithm-driven content delivery mechanisms that personalize user experiences by tailoring recommendations to interactions such as liking, sharing, commenting, and saving content. The resulting narrowing of each user's content stream, often referred to as the "information cocoon" effect, significantly reshapes the digital information landscape by creating highly individualized feeds. In daily life, such personalization can enhance user engagement and community cohesion, but it also limits exposure to diverse viewpoints, amplifies cognitive bias toward fixed opinions, and increases vulnerability to misinformation. This paper critically analyzes the mechanisms through which algorithmic recommendation fosters information cocoons and identifies the associated risks, including misinformation propagation, social polarization, and algorithmic discrimination. Through a systematic literature review, the study proposes mitigation strategies encompassing enhanced algorithmic transparency, regular independent audits, content diversification, digital literacy education, and regulatory oversight, with the aim of safeguarding information security in an algorithm-dominated social media landscape.
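
To make the dynamics described in the abstract concrete, the sketch below (not drawn from the paper itself; the item counts, engagement weights, and the `lam` trade-off parameter are hypothetical choices for the demo) simulates how engagement-weighted recommendation gradually concentrates a feed on a single topic, and how an MMR-style diversity re-ranking, one possible form of the content diversification proposed above, keeps other topics visible.

```python
# Minimal illustrative sketch (assumptions, not the paper's method): an
# engagement-weighted recommender narrows a simulated feed onto one topic
# (the "information cocoon" feedback loop), while an MMR-style diversity
# re-ranking keeps other topics visible.
import numpy as np

rng = np.random.default_rng(0)

N_ITEMS, N_TOPICS = 200, 5
item_topics = rng.integers(0, N_TOPICS, size=N_ITEMS)   # each item belongs to one topic
item_vecs = np.eye(N_TOPICS)[item_topics]                # one-hot topic vectors

# Engagement signals named in the abstract, with assumed weights.
ACTION_WEIGHTS = {"like": 1.0, "share": 1.5, "comment": 1.2, "save": 0.8}

def update_profile(profile, item_id, action):
    """Reinforce the user's topic profile with the item they engaged with."""
    return profile + ACTION_WEIGHTS[action] * item_vecs[item_id]

def recommend(profile, k=10):
    """Relevance-only ranking: dot product between user profile and item topics."""
    scores = item_vecs @ profile
    return np.argsort(-scores)[:k]

def recommend_diverse(profile, k=10, lam=0.5):
    """MMR-style re-ranking: trade relevance against similarity to items already picked."""
    scores = item_vecs @ profile
    scores = scores / scores.max()                        # normalize so the trade-off is balanced
    shortlist = list(np.argsort(-scores)[:50])            # candidate pool by relevance
    picked = []
    while shortlist and len(picked) < k:
        def mmr(i):
            sim = max((item_vecs[i] @ item_vecs[j] for j in picked), default=0.0)
            return lam * scores[i] - (1 - lam) * sim
        best = max(shortlist, key=mmr)
        picked.append(best)
        shortlist.remove(best)
    return picked

# A user who only ever "likes" topic-0 items: the relevance-only feed collapses
# onto topic 0, while the diversified feed retains other topics.
profile = np.ones(N_TOPICS)
for _ in range(30):
    ranked = recommend(profile, k=N_ITEMS)
    clicked = next(i for i in ranked if item_topics[i] == 0)
    profile = update_profile(profile, clicked, "like")

print("topics in relevance-only feed:", set(item_topics[recommend(profile)]))
print("topics in diversified feed:   ", set(item_topics[recommend_diverse(profile)]))
```

The feedback loop is visible in the output: after repeated single-topic engagement, the relevance-only feed contains only that topic, whereas the diversity-aware ranking still surfaces others.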

Keywords:

Algorithmic Recommendations, Information Cocoons, Information Security, Social Media Platforms, Digital Literacy



Cite this article

Yang, K. (2025). Algorithmic Recommendation and Information Cocoons: Analysis of Information Security Issues on Social Media. Communications in Humanities Research, 67, 38-43.

Data availability

The datasets used and/or analyzed during the current study will be available from the authors upon reasonable request.

Disclaimer/Publisher's Note

The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of EWA Publishing and/or the editor(s). EWA Publishing and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

About volume

Volume title: Proceedings of ICLLCD 2025 Symposium: Enhancing Organizational Efficiency and Efficacy through Psychology and AI

ISBN: 978-1-80590-115-0 (Print) / 978-1-80590-116-7 (Online)
Editor: Rick Arrowood
Conference date: 12 May 2025
Series: Communications in Humanities Research
Volume number: Vol. 67
ISSN: 2753-7064 (Print) / 2753-7072 (Online)

© 2024 by the author(s). Licensee EWA Publishing, Oxford, UK. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license. Authors who publish in this series agree to the following terms:
1. Authors retain copyright and grant the series right of first publication with the work simultaneously licensed under a Creative Commons Attribution License that allows others to share the work with an acknowledgment of the work's authorship and initial publication in this series.
2. Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the series's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgment of its initial publication in this series.
3. Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work (See Open access policy for details).
