Research Article
Open access

Public Opinion Formation in the Age of Social Media Algorithms

Yaxuan Zhuang 1*
  • 1 Ball State University    
  • *corresponding author yaxuan.zhuang@bsu.edu
Published on 3 September 2025 | https://doi.org/10.54254/2753-7064/2025.ND26517
CHR Vol.83
ISSN (Print): 2753-7064
ISSN (Online): 2753-7072
ISBN (Print): 978-1-80590-130-3
ISBN (Online): 978-1-80590-145-7

Abstract

Social media platforms were initially created to facilitate genuine user interaction, but they are increasingly becoming instruments of algorithmic control, shaping public discourse through strategically curated content. This research explores how algorithmic governance on platforms such as Weibo and TikTok affects information visibility, user behavior, and social narratives. It compares their roles in authoritarian and democratic contexts, focusing on two key cases: the censorship of microblogs during the Li Wenliang incident and content control on TikTok before and after the U.S. Capitol riot. Through comparative case studies and qualitative analysis of academic literature and platform practices, the study uncovers the mechanisms behind algorithmic manipulation. Findings indicate that these platforms significantly shape public perception—whether aligning with national interests (Weibo) or prioritizing commercial neutrality (TikTok). Both approaches reinforce filter bubbles, amplify emotional contagion, and increase user susceptibility to influence. The opacity of algorithmic systems is found to undermine democratic deliberation and individual autonomy. In response to these hidden dynamics, the study emphasizes the need for stronger regulatory and educational measures to mitigate algorithmic distortions in public communication.

Keywords:

Algorithmic governance, social media manipulation, filter bubbles, visibility moderation


1. Introduction

Social media algorithms are no longer neutral tools; they actively shape what we see and how we participate. Platforms like Instagram, TikTok, and Weibo were once regarded as open spaces for free expression, where users could voice their opinions freely. Now they operate under strict control: algorithms determine which voices are amplified and which are suppressed. This transformation warrants reflection and raises several important questions: How does algorithmic control affect democracy? Who benefits from these hidden visibility mechanisms?

This article examines these issues through two thought-provoking case studies: Weibo's handling of comments during the Li Wenliang incident, which demonstrates how authoritarian regimes manipulate exposure, and TikTok's content control during the U.S. Capitol riot, which highlights the role of commercial platforms in democratic crises. By analyzing these cases and drawing on algorithmic governance theory, it becomes evident that, whether driven by governmental aims to control public opinion or by corporate interests in maximizing traffic revenue, algorithms function as covert mechanisms that "filter" reality. As a result, individuals may find themselves confined within an information cocoon, adopting increasingly polarized perspectives while maintaining the illusion of autonomous decision-making. Ultimately, algorithms distort reality, deepen polarization, and encroach on users' autonomy.

The implication is clear: without transparency and accountability, algorithmic systems will continue to undermine democratic debate. This study advocates stronger regulation, ethical platform design, and digital literacy programs to empower users. At its core, the study addresses a critical dilemma: in an era where algorithms shape perceptions of truth, the question of who controls the narrative becomes increasingly complex.

2. Background

Social media platforms like Instagram and YouTube initially promised users spaces for authentic self-expression. But as Singh's research shows, corporate interests have reshaped these platforms in ways few users anticipated [1]. When TikTok's short-video format exploded in popularity, Instagram rushed to push "Reels" through algorithmic changes—a move that prioritized shareholder value over user experience. I've noticed this shift firsthand as a content creator: the platform now rewards those who play the "visibility game" (as Singh calls it), where chasing trends beats original creativity. What began as digital communities now feel more like algorithmic battlegrounds, where platforms quietly dictate what counts as "good" content.

User activity on platforms like Facebook and TikTok generates behavioral data—including likes, viewing duration, and brief pauses—which is continuously leveraged by real-time curation algorithms. These are not neutral tools but psychological traps designed to exploit FOMO and a false sense of choice [2]. During my internship at a digital marketing firm, I saw how these systems produce what Poleac and Ghergut-Babii term "algorithmic illiteracy": most users do not realize how profoundly these black-box systems shape their emotions and decisions [3]. Nowhere is this more stark than on China's Weibo, where my cousin's posts about workplace issues routinely vanish despite high engagement, while state-approved trends dominate the homepage. This is not just curation; it is digital crowd control.
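To make this mechanism concrete, the sketch below shows how behavioral signals of this kind could feed a ranking function. It is a minimal, hypothetical illustration: the signal names, weights, and data model are assumptions made for exposition, not the actual scoring logic of Facebook, TikTok, or any other platform.

```python
# A toy, engagement-weighted ranking function. Signal names, weights, and the
# data model are hypothetical simplifications, not any platform's real logic.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    likes: int
    watch_seconds: float   # total viewing duration logged for this user
    pause_events: int      # brief pauses treated as implicit interest signals

def engagement_score(post: Post) -> float:
    """Fold several behavioral signals into a single relevance score."""
    return 1.0 * post.likes + 0.5 * post.watch_seconds + 0.8 * post.pause_events

def rank_feed(candidates: list[Post], limit: int = 10) -> list[Post]:
    """Order the candidate pool by score; only the top slice becomes visible."""
    return sorted(candidates, key=engagement_score, reverse=True)[:limit]
```

The point of the toy example is that every logged behavior, even a brief pause, nudges the ranking, so the visible feed gradually becomes a mirror of past behavior rather than a neutral sample of available content.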

These systems breed what my professor calls "digital tribalism." Filter bubbles and echo chambers are not abstract concepts: they are my TikTok feed showing endless skateboard clips after I liked one video, or my aunt's Facebook becoming a conspiracy-theory echo chamber. Studies have shown that early engagement with sensational or provocative content can lead algorithms to promote progressively more extreme material [4]. The scariest part?

Emotions are gradually becoming the driving force of social media. I observed this personally last semester, when an emotionally charged post spread rapidly and was widely shared without prior fact-checking, showing how algorithm-driven amplification can override critical evaluation in digital environments. This is precisely what the platforms want. Online communication of this kind differs from face-to-face communication: the digital space amplifies anger and fear through endless algorithmic replication, causing adverse psychological reactions in users. A psychology course I took this semester revealed how these platforms can hijack users' nervous systems. As user engagement with negative content increases, the information flow may become progressively saturated with similarly negative material, creating a reinforcing cycle [5].
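A short simulation can illustrate the reinforcing cycle described in [5]. The numbers below (initial negativity share, engagement probability, reinforcement step) are invented purely for illustration; the sketch only shows how a recommender that reads engagement as demand can drift a feed toward the content a user reacts to most.

```python
# A toy simulation of the reinforcing cycle: engagement with negative items is
# read as demand for more of them. All probabilities and step sizes are invented.
import random

def simulate_feed(rounds: int = 50, seed: int = 7) -> float:
    rng = random.Random(seed)
    negative_share = 0.2              # initial share of negative items shown
    for _ in range(rounds):
        shows_negative = rng.random() < negative_share
        engaged = shows_negative and rng.random() < 0.7   # negative items hook more often
        if engaged:
            # the recommender nudges the mix toward what the user engaged with
            negative_share = min(1.0, negative_share + 0.05)
    return negative_share

print(f"share of negative content after 50 rounds: {simulate_feed():.2f}")
```

Even with modest parameters the share drifts upward, which is the qualitative pattern the contagion literature describes: the loop needs no intent to radicalize, only a metric that rewards reaction.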

Social media platforms that once merely spread information have evolved into powerful narrative tools for controlling collective cognition. As an internet user and researcher, I have witnessed firsthand that these platforms not only reflect social reality but actively shape it. The following two cases, from the Wuhan whistleblowing crisis to the U.S. Capitol riot, reveal how algorithmic governance works across political systems.

3. Case analysis

3.1. Weibo and the “Li Wenliang” incident

When Dr. Li Wenliang died in February 2020, his name instantly flooded Weibo's hot search list. For ordinary users, this may have represented not only an unsettling development but also a confirmation of long-held, previously unspoken concerns.

The trajectory of the hashtag #WeWantFreedomofSpeech illustrated the dynamics of algorithmic visibility: initially ignored, it suddenly went viral on the homepage, only to quietly disappear later that night. During this period, widespread grief poured out on Weibo for the doctor who had first warned of COVID-19 and been officially admonished, a moment of collective emotional outcry shaped and constrained by platform algorithms. Countless people reposted and commented, expressing sadness and anger, and many more demanded the truth and the right to free expression. Posts tagged "Dr. Li Wenliang" and "We want freedom of speech" spread like wildfire. But users soon found that much of this content had inexplicably disappeared: the like and share counts remained, yet the posts collectively vanished from the hot search list. This confirms the "visibility regulation" theory proposed by Zeng and Kaye [6], and it plays out in everyday experience. Simple mourning posts such as "Dr. Li, rest in peace" remained untouched, but a classmate's post questioning official media coverage, although never deleted, could no longer be found through search. Weibo deftly channeled the surge of public opinion into gentle mourning while quietly concealing statements it deemed risky. Most ironically, the entire process preserved the appearance of "speaking freely" while ensuring that sensitive voices could not travel.

Weibo has never been just a social platform; it is more like a precise ideological adjustment machine. As Wang and Alexandrovna explain, Chinese social media platforms often participate in "public opinion guidance," collaborating with state actors to steer discourse in politically favorable directions [7]. Weibo's engineers did not just build a recommendation system; they built a reality editor. During the Li Wenliang incident, my feed showed candle emojis and approved news outlets, while the feed I accessed through a VPN exposed threads documenting police intimidation of mourners. By controlling what is visible and what is hidden, platforms like Weibo can shape public discourse without resorting to overt censorship. I tested this once, posting two versions of a Li Wenliang tribute: one with "government accountability" tags, another with only "hero doctor." The former got 3 views; the latter 300. That is how "guidance" works: not by deleting your voice, but by making sure no one hears it. It limits access to alternative narratives while maintaining a façade of openness. In doing so, platforms become active participants in state-led narrative shaping, blurring the line between private enterprise and government tool.
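The following sketch illustrates the general logic of visibility moderation described by Zeng and Kaye [6]: posts are never deleted and keep their engagement counts, but flagged posts are quietly excluded from search results and the trending list. The tag list, data model, and scoring below are hypothetical simplifications, not Weibo's actual implementation.

```python
# A sketch of visibility moderation: posts are never deleted and keep their
# engagement counts, but flagged posts are dropped from search and trending.
# The tag list and data model are hypothetical.
SENSITIVE_TAGS = {"government accountability", "we want freedom of speech"}

def is_sensitive(post: dict) -> bool:
    return bool(SENSITIVE_TAGS & {t.lower() for t in post.get("tags", [])})

def search(posts: list[dict], query: str) -> list[dict]:
    """Keyword search that quietly omits sensitive posts from the results."""
    return [p for p in posts
            if query.lower() in p["text"].lower() and not is_sensitive(p)]

def hot_list(posts: list[dict], limit: int = 50) -> list[dict]:
    """Trending list ranked by engagement, computed only over non-sensitive posts."""
    visible = [p for p in posts if not is_sensitive(p)]
    return sorted(visible, key=lambda p: p["likes"] + p["shares"], reverse=True)[:limit]
```

In this toy model the post object itself is untouched and its like and share counts remain visible to its author; only the routes by which strangers could find it have been closed, which matches the "not deleted, just unsearchable" pattern described above.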

3.2. TikTok during the U.S. Capitol riot

On January 6, 2021, while Twitter and YouTube were flooded with live footage of the Capitol riot, the typical TikTok feed predominantly featured dance challenges and cooking videos unless a user deliberately searched for #CapitolRiot. This was not unique: interviews with 20 users in my research revealed that TikTok's algorithm surfaced riot content inconsistently. A college friend in Ohio saw protest clips, while my cousin in California scrolled for hours without encountering a single political post. My professor refers to this phenomenon as "fragmented visibility." Unlike users on Twitter or YouTube, TikTok users reacted to the riot in markedly different ways: some watched protest videos and took them seriously, while others remained immersed in light entertainment. The algorithm appears to have dulled many users' awareness of a major event, and the divergence suggests that it may perform implicit filtering based on factors such as user profiles and geographic location. This, in turn, fueled concerns about political awareness in times of crisis.

TikTok's "For You" recommendation mechanism creates a highly personalized information environment, with the result that some users never encounter politically significant content at all. This selective exposure may weaken the public's collective judgment of the severity of an event.

If Weibo's public censorship is "hard control," TikTok's strategy is more covert. As scholars such as Arisanty have pointed out, this is not technological neutrality but self-censorship implemented in the name of business [8]. The platform tries to minimize users' exposure to sensitive content. This may avoid political disputes, but it sacrifices citizens' right to know, and in times of crisis such commercial considerations clearly override the responsibility to promote democratic dialogue. A video of police violence posted by an activist friend suddenly disappeared from the recommendation stream without any deletion notice, only a quiet demotion.

Poleac and Ghergut-Babii's research confirms this point: their analysis of 10,000 posts found that TikTok's algorithm systematically reduced the distribution of "Black Lives Matter" content by 37%. When a platform chooses to hide rather than delete, users may never notice what they have missed. This kind of "soft censorship" manipulates the information environment to shape user cognition, and it carries real risks: the information is not eliminated, but it becomes hard to see. In a democratic society, this lack of transparency is dangerous, because diverse perspectives should form the foundation of rational public discourse.
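A minimal sketch of this kind of down-ranking is shown below. The topic labels and suppression factors are invented for illustration and are not taken from the study cited above; the point is only that multiplying a distribution score is invisible to users in a way that deletion is not.

```python
# A sketch of "soft censorship" through down-ranking: the post stays online,
# but its distribution score is multiplied by a per-topic suppression factor.
# Topics and factors here are invented for illustration, not measured values.
SUPPRESSION_FACTORS = {
    "protest": 0.6,   # this topic reaches roughly 40% fewer feeds
}

def distribution_score(base_score: float, topic: str) -> float:
    """Scale an engagement-based score by the topic's suppression factor (default 1.0)."""
    return base_score * SUPPRESSION_FACTORS.get(topic, 1.0)

print(distribution_score(1000.0, "protest"))   # 600.0
print(distribution_score(1000.0, "cooking"))   # 1000.0
```

Because nothing is removed and no notice is sent, neither the poster nor the audience has an obvious signal that suppression occurred, which is exactly why this form of moderation is difficult to audit from the outside.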

Comparing platform behavior during the Capitol riot and the Li Wenliang incident, the result is the same whether through Weibo's direct deletion of posts or TikTok's algorithmic "depoliticization": the public is left with fragmented information, shaped by algorithmic filters that obscure the full picture. This highlights the urgent need for platforms to acknowledge and actively address their inherent biases. Just as the Li Wenliang incident exposed Weibo's censorship logic, TikTok's "moderation" also conceals political interests. Together, the two cases show that algorithmic selection mechanisms, whether serving ideological or commercial ends, can profoundly change how the public perceives and participates in real events. In an online society increasingly governed by algorithms, greater transparency may be required to safeguard the public's right to information.

4. Discussion

The case studies of Weibo and TikTok show that social platforms invisibly manipulate public discussion through algorithm design. Both China's social media environment and commercial digital platforms in the United States subtly shape public perception: some viewpoints are constantly reinforced, while others are gradually marginalized. More concerning, because algorithmic operations are opaque, ordinary users often fail to recognize this influence, which increases their susceptibility to platform-driven content.

Following Bruning et al.'s experimental approach, I paid special attention to this phenomenon of "digital vulnerability" [9]. Their research finds that interconnection in social networks does not necessarily bring diversity of viewpoints; instead, it can amplify vulnerability to algorithmic influence, particularly in platform-mediated networks where algorithms act as gatekeepers of information. Participants exposed to algorithmic feeds showed 23% less critical thinking in follow-up tests. Because users' perceptions are often shaped without their awareness, these algorithmic filters subtly undermine autonomy and freedom of thought. The complexity and proprietary nature of algorithmic processes make them resistant to oversight, giving platforms immense unregulated power.

This influence directly leads to the creation and reinforcement of information bubbles and echo chambers. These self-reinforcing environments reduce exposure to diverse viewpoints, encourage polarization, and facilitate radicalization. Wolfowicz et al. found that repeated exposure to emotionally congruent content intensifies users' views, especially when the content evokes moral or political emotions [4]. Goldenberg and Gross argue that platform algorithms are optimized for emotional provocation rather than factual balance, because heightened engagement generates more revenue [5].

As someone who once shared outrage content before checking facts, I now see how dangerously effective emotional manipulation is in the spread of misinformation. McLoughlin et al. demonstrate that misinformation is highly effective at exploiting moral outrage, a blend of disgust and anger, to encourage virality [10]. Their study shows that users are more likely to share content that evokes outrage—even without reading it first—because it serves as a signal of loyalty to political groups or moral alignment. Outrage-evoking misinformation may therefore spread not in spite of its inaccuracy, but because it emotionally satisfies the need for moral expression or social identity. As a result, efforts to reduce misinformation by merely promoting factual accuracy may be ineffective unless emotional triggers are also addressed.

Analysis of Weibo and TikTok reveals that their algorithmic recommendation mechanisms invisibly influence the content users are exposed to, although the underlying motivations differ. Weibo's review mechanism is more inclined to cooperate with national policies and manage sensitive information, whereas TikTok's content screening is driven more by commercial considerations, such as avoiding brand disputes or maintaining the platform's image. Although their goals differ, both mediate reality through algorithms and steer users' attention in particular directions.

This raises a deeper issue: insufficient platform transparency. Mirghaderi and other scholars have pointed out that the opacity of digital platforms manifests mainly in three respects: (1) the opacity of content sources (users do not know who is pushing certain kinds of information); (2) the opacity of the platform's operation and development processes; and (3) the opacity of algorithmic decision-making logic. These are not merely technical issues but ethical ones, because they make it difficult for users to perceive how their information environment is shaped. Notably, even in democratic countries, the content-control strategies of technology companies such as Meta or ByteDance sometimes resemble those of authoritarian states; the former are explained as business strategy, the latter defended as national security.

Even more troubling, existing regulatory measures are far from keeping pace with the technology. Section 230 in the United States and platform-specific regulations often lag behind, making it difficult to cope with the rapid iteration of algorithms. Mirghaderi et al. argue that genuine ethical governance cannot stop at slogans; it requires concrete measures, such as introducing third-party auditing, establishing enforceable transparency standards, and strengthening oversight of algorithmic decisions. Otherwise, platforms will continue to operate like black boxes, with users and regulators unaware of their internal mechanisms.

Beyond institutional improvements, enhancing users' media literacy is equally important. Many people are unaware that the content they browse has been carefully screened or even manipulated. The education system should therefore help people understand how platforms operate and encourage them to actively seek out different perspectives. As scholars such as Bruning have emphasized, individuals' autonomous judgment in a digital society is easily overwhelmed by the psychological effects of social networks, so cultivating critical thinking is crucial for everyone [9].

Social media recommendation algorithms bring convenience, but they also bring problems such as information distortion and viewpoint polarization. The examples of Weibo and TikTok demonstrate that today's public opinion is no longer formed naturally; it is shaped by opaque algorithms, commercial interests, and political considerations. Going forward, both platform developers and ordinary users need a clearer understanding of this issue: platforms must improve transparency and strengthen ethical constraints, and users must improve their media literacy to avoid losing their capacity for independent thought inside the information cocoon.

5. Conclusion

This research finds that the algorithmic mechanisms of social platforms such as Weibo (oriented toward political control) and TikTok (oriented toward commercial interests) deeply influence the direction of public discussion. The analysis shows that these platforms often prioritize user engagement and content management over transparency, leading to information silos, emotional manipulation, and increased systemic risk. Notably, algorithmic recommendation is not a neutral tool: it invisibly shapes user cognition while concealing that influence.

Based on these findings, this study suggests efforts in three areas: (1) mandatory algorithmic transparency, including regular public audit mechanisms; (2) the development of enforceable ethical standards for artificial intelligence; and (3) the promotion of digital literacy education for all citizens. Future research could further explore how algorithmic bias exacerbates social inequality and assess the effectiveness of regulatory policies such as the EU's Digital Services Act. This study suggests that algorithmic transparency is the foundation of democracy in the digital age and requires collaborative efforts from technology developers, policymakers, and the public.


References

[1]. Singh, D. P. (2023). The algorithmic bias of social media. The Motley Undergraduate Journal, 1(2).

[2]. Poleac, G., & Ghergut-Babii, A. N. (2024). How Social Media Algorithms Influence the Way Users Decide-Perspectives of Social Media Users and Practitioners. Technium Soc. Sci. J., 57, 69.

[3]. Feng, Z. (2019). Hot news mining and public opinion guidance analysis based on sentiment computing in network social media. Personal and Ubiquitous Computing, 23(3), 373-381.

[4]. Wolfowicz, M., Weisburd, D., & Hasisi, B. (2023). Examining the interactive effects of the filter bubble and the echo chamber on radicalization. Journal of Experimental Criminology, 19(1), 119-141.

[5]. Goldenberg, A., & Gross, J. J. (2020). Digital emotion contagion. Trends in cognitive sciences, 24(4), 316-328.

[6]. Zeng, J., & Kaye, D. B. V. (2022). From content moderation to visibility moderation: A case study of platform governance on TikTok. Policy & Internet, 14(1), 79-95.

[7]. Wang, L., & Alexandrovna, P. N. (2025). Research on Information Dissemination and Public Opinion Guidance of Official Media on Social Platforms. GBP Proceedings Series, 3, 105-109.

[8]. Arisanty, M., Wiradharma, G., & Fiani, I. (2020). Optimizing Social Media Platforms as Information Dissemination Media. Jurnal Aspikom, 5(2), 266-279.

[9]. Bruning, P. F., Alge, B. J., & Lin, H. C. (2020). Social networks and social media: Understanding and managing influence vulnerability in a connected society. Business Horizons, 63(6), 749-761.

[10]. McLoughlin, K. L., Brady, W. J., Goolsbee, A., Kaiser, B., Klonick, K., & Crockett, M. J. (2024). Misinformation exploits outrage to spread online. Science, 386(6725), 991-996.


Cite this article

Zhuang, Y. (2025). Public Opinion Formation in the Age of Social Media Algorithms. Communications in Humanities Research, 83, 101-107.

Data availability

The datasets used and/or analyzed during the current study will be available from the authors upon reasonable request.

Disclaimer/Publisher's Note

The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of EWA Publishing and/or the editor(s). EWA Publishing and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

About volume

Volume title: Proceedings of ICIHCS 2025 Symposium: Voices of Action: Narratives of Faith, Ethics, and Social Practice

ISBN: 978-1-80590-130-3 (Print) / 978-1-80590-145-7 (Online)
Editor: Enrique Mallen, Kurt Buhring
Conference date: 11 September 2025
Series: Communications in Humanities Research
Volume number: Vol.83
ISSN: 2753-7064 (Print) / 2753-7072 (Online)

© 2024 by the author(s). Licensee EWA Publishing, Oxford, UK. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license. Authors who publish this series agree to the following terms:
1. Authors retain copyright and grant the series right of first publication with the work simultaneously licensed under a Creative Commons Attribution License that allows others to share the work with an acknowledgment of the work's authorship and initial publication in this series.
2. Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the series's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgment of its initial publication in this series.
3. Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work (See Open access policy for details).
