How Big Data Recommendation Algorithms Reinforce Gendered Body Shaming

Research Article
Open access


Xueyinuo Li 1*
  • 1 Hangzhou Foreign Language School Cambridge A-Level Center    
  • *corresponding author nora1976@163.com
Published on 27 August 2025 | https://doi.org/10.54254/2753-7064/2025.LC26348
CHR Vol.76
ISSN (Print): 2753-7072
ISSN (Online): 2753-7064
ISBN (Print): 978-1-80590-146-4
ISBN (Online): 978-1-80590-284-3

Abstract

With the rise of social media, body shaming has evolved from personal interactions and traditional media to being amplified by algorithm-driven social media platforms. The surge of self-image- and beauty-centric platforms such as Instagram and TikTok, fueled by big data recommendation algorithms, significantly impacts users' self-esteem and the construction of self-image by enforcing rigid beauty standards. This paper examines the effects of big data recommendation algorithms and beauty-centric social media on body shaming and its psychological and social impacts, and seeks solutions to counter those negative impacts. The review integrates concepts from media studies, psychology, and computer science to analyze how social media algorithm design shapes user behavior and social norms. The findings underscore the need for an intentional shift in social media design, policy frameworks, and public discourse to foster social equity and combat discrimination in the digital environment.

Keywords:

Body shaming, Social media, Algorithmic bias, Big data recommendation algorithms, Algorithmic system design

Li, X. (2025). How Big Data Recommendation Algorithms Reinforce Gendered Body Shaming. Communications in Humanities Research, 76, 19-24.

1. Introduction

Body shaming, the criticism or humiliation of individuals based on their body shape or size, has always existed [1]. Traditionally, it occurred interpersonally or through traditional media. It is a form of social aggression that inflicts severe trauma on victims and damages their mental and physical well-being [1].

In the digital age, however, body shaming has changed. Powered by big data recommendation algorithms, viral videos on TikTok and Instagram are spotlighted and constantly propagate hyper-edited images of ideal bodies [2,3]. As users internalize these one-dimensional standards, especially through upward social comparison, they develop body dissatisfaction, anxiety, and self-objectification [3,4].

Two concerns motivate this study: the ways in which big data recommendation algorithms amplify body shaming, and the psychological and social impacts of this amplification.

By weaving together media studies, recommender system design, and body image psychology, this study seeks to deepen understanding of how these digital technologies, especially big data recommendation algorithms, promote body-shaming content, and to articulate practical avenues toward more ethical algorithm design.

2. Algorithmic body shaming vs. traditional models

Although body shaming has historically occurred in traditional media, algorithmically driven content recommendation systems amplify and extend its effects. This evolution can be explained by three significant paradigm shifts: scale, source, and persistence.

In contrast to traditional body shaming, which generally occurred within limited social circles or was constrained by media formats, algorithmic platforms enable content to spread quickly and extensively to global audiences. Previously, body shaming was often localized, confined to peer groups or a single issue of a magazine. With big data recommendation algorithms, however, personalized and attention-optimized content travels far from its origin. Social media platforms collapse spatial and temporal boundaries, permitting content to be shared across contexts and to unintended audiences [5]. Combined with algorithmic amplification, which selects and boosts content based on engagement, a localized body-shaming post can become a worldwide phenomenon, reaching millions via recommendation systems.

Body shaming once originated mainly in interpersonal remarks or overt media commentary. Algorithmic systems today, by contrast, act as invisible curators, constantly prioritizing and surfacing some kinds of content over others. This machine curation effectively becomes the voice that shapes users' understanding of what is normal and desirable. Unlike traditional body shaming by humans, algorithmic body shaming is more institutionalized: it does not insult directly but over-represents a limited range of body types, implying exclusion or undesirability for those who do not conform [6].

In traditional media, body-shaming instances were more transient, whereas algorithmic platforms create perpetual feedback loops in which users see the same beauty standards reiterated over and over. Algorithmic curation locks users into a "loop of relevance": they are shown more of whatever they interact with, regardless of whether it is mentally healthy for them [7]. A user who is already insecure about their body will thus be exposed to ever more weight-loss content, filtered beauty influencers, or cosmetic surgery advertising.
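The "loop of relevance" described above can be illustrated with a minimal simulation. This sketch is purely hypothetical: the category names, engagement probabilities, and weighting rule are invented for illustration and do not describe any platform's actual system. It shows how a recommender that boosts whatever a user engages with narrows exposure over time.

```python
# Illustrative sketch of an engagement-driven feedback loop.
# All categories, probabilities, and weights here are hypothetical.
import random

CATEGORIES = ["diverse_bodies", "fitness_ideal", "beauty_filters", "weight_loss"]

def recommend(weights, k=10):
    """Sample k items, with probability proportional to engagement weights."""
    return random.choices(CATEGORIES, weights=[weights[c] for c in CATEGORIES], k=k)

def simulate(rounds=20, seed=0):
    random.seed(seed)
    weights = {c: 1.0 for c in CATEGORIES}  # start from uniform exposure
    # Assumed user profile: a body-insecure user who engages far more
    # with idealized content than with diverse body representations.
    engage_prob = {"diverse_bodies": 0.1, "fitness_ideal": 0.6,
                   "beauty_filters": 0.6, "weight_loss": 0.7}
    for _ in range(rounds):
        for item in recommend(weights):
            if random.random() < engage_prob[item]:
                weights[item] += 1.0  # engagement raises future exposure
    return weights

final = simulate()
print(final)  # idealized categories come to dominate the exposure weights
```

Even though the simulation starts from uniform exposure, the categories the user engages with accumulate weight and crowd out the rest, which is the filter-bubble dynamic the text describes.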

Although body shaming in traditional media is more overt, algorithm-driven body aesthetic recommendations often manifest as "implicit bias", which can be even more impactful. The algorithm does not directly express gender bias; instead, it continuously recommends content that conforms to stereotypical aesthetics, leading users to unconsciously accept and internalize these biases. This intangible influence is harder to detect and resist, yet it shapes cognition imperceptibly. Although implicit bias is not consciously perceived by individuals, it has a significant impact on behavior and attitudes [8]. In recommendation algorithms, this implicit bias is further amplified by the interaction between users and the system, continuously reinforcing existing preferences and thereby forming an echo chamber effect that locks users into a single aesthetic framework and exacerbates the exclusion and humiliation of "non-standard" bodies [9].

In conclusion, algorithmic recommendation systems have turned body shaming from isolated, fleeting experiences into a persistent, mass phenomenon. Through the subtle curation and amplification of restrictive beauty standards, these systems generate ongoing feedback loops that reinforce gendered body shaming. Unlike the overt body-shaming tactics of traditional media, algorithmic bias operates invisibly, yet its effects run deeper.

3. The amplifying effects of algorithms on gendered body image issues

This section discusses how algorithms exacerbate gendered body shaming for individuals and society. Psychological effects include heightened insecurity, dissatisfaction, and mental health problems. Big data recommenders also generate social effects, including the reshaping of women's social roles, new social habitus, and altered societal balances of power.

3.1. Psychological consequences

Algorithmic curation of beauty content on social media intensifies exposure to idealized, and frequently digitally manipulated, body images. This repeated exposure molds users' attentional bias and cognitive schemas, leading them to view slender, athletic, or surgically altered bodies as normative and desirable. Continually exposed to these curated ideals, people begin making upward social comparisons, contrasting their own appearance with impossible standards, which fosters the internalization of narrow beauty standards and promotes distorted self-perception [4].

This distortion carries emotional consequences. Appearance-focused activity on social media sustains the internalization of unrealistic beauty ideals. Higher Facebook appearance exposure was significantly correlated with greater thin-ideal internalization (r = .23, p < .01) and body surveillance (r = .24, p < .01). Similarly, Instagram users reported significantly higher levels of body surveillance (M = 4.56) than non-users (M = 4.26; p = .03), indicating greater self-monitoring and concern about body appearance [10]. Faced with unreachable ideals, individuals become more vulnerable to shame, self-dissatisfaction, and general insecurity, contributing to lower self-esteem and increased anxiety, particularly among adolescents grappling with identity formation [10].

Cumulatively, the pairing of negative emotion and negative cognition can culminate in more severe mental health effects, such as body dysmorphia, depression, and eating disorders [11,12]. Alarmingly, the platforms appear to perpetuate self-sustaining cycles of negative emotion: users dissatisfied with their bodies may return to the apps seeking affirmation or fixes (e.g., beauty products, exercise guidance), only to encounter more idealized and enhanced content that inflicts additional psychological damage [13]. This process shows how algorithmic exposure to body-shaming content not only reflects the biases of influencers, brands, and others, but also feeds those biases back into the minds of vulnerable users.

3.2. Social consequences

Social media algorithms usually privilege content that promotes hyper-idealized female bodies, such as fitness challenges, weight-loss journeys, and "body transformation" posts, which commodifies women's bodies into quantifiable objects [3]. Women are disproportionately encouraged and incentivized to take part in such trends, normalizing the objectification of women's bodies and centering appearance as one of the most important forms of social value [14].

This algorithmic reinforcement creates a new social habitus by embedding body surveillance as a social norm and cultivating a culture in which individuals, particularly women, not only self-monitor but are expected to monitor their peers. This heightened comparative lens erodes empathy, normalizes conformity to narrow understandings of beauty, and vilifies those whose bodies do not fit dominant norms [14]. For example, users with larger body types may receive little social validation or few opportunities to connect with other users [6]. Over time, this interaction-based accountability breeds competition and can contribute to significant structural gender inequality. Social media platforms construct an environment that ties women's value to their appearance, continuing the longstanding association of women's worth with physical attractiveness [14]. This digitally mediated discipline constrains women's public and professional roles and reinforces the cultural narratives that sustain gendered systems of power and surveillance.

4. Towards gender equal algorithm design

Given the entrenchment of gendered beauty norms in algorithmic processes, and their potential negative impact on society, it is essential to examine ways to offset these effects. This section will explain three related approaches to offset negative impacts: redesigning algorithmic systems, developing new policy and regulations, and reshaping social awareness.

4.1. Technology redesign

Although algorithms are said to be gender-neutral, their construction and operation often reflect gender ideologies that already exist in society. Because social media reproduces gendered stereotypes and ideals, technological redesign has emerged as a primary pathway for countering normalized beauty standards. The prioritization of "female attractiveness" often results from gendered bias in training data, leading models and algorithms to reinforce a beauty-centric aesthetic logic [15].

Biases must, above all, be addressed at the level of source data. This can be done by representing women of different age groups, body sizes, skin tones, and fashion styles in the training data, so that an algorithm trained on diverse data does not narrow the standards of beauty. In addition, imposing "fairness constraints" such as Equal Opportunity during training can ensure that different gender groups have comparable chances of appearing in recommendation results, allowing a platform to exercise balanced control over the content displayed [16].
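The Equal Opportunity criterion [16] can be made concrete with a small audit sketch: among users who, by some ground-truth label, should receive a given recommendation, the true-positive rate should be similar across gender groups. The data and group labels below are entirely hypothetical; this is an illustration of the metric, not of any platform's audit pipeline.

```python
# Minimal sketch of an Equal Opportunity audit (after Hardt et al., 2016).
# Groups, labels, and predictions are hypothetical example data.

def true_positive_rate(labels, predictions):
    """Fraction of truly relevant items (label 1) that were recommended."""
    hits = [p for l, p in zip(labels, predictions) if l == 1]
    return sum(hits) / len(hits)

def equal_opportunity_gap(groups, labels, predictions):
    """Largest pairwise TPR difference across groups; 0 means parity."""
    rates = {}
    for g in set(groups):
        idx = [i for i, gi in enumerate(groups) if gi == g]
        rates[g] = true_positive_rate([labels[i] for i in idx],
                                      [predictions[i] for i in idx])
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical audit sample: group, ground-truth relevance, recommended?
groups      = ["w", "w", "w", "w", "m", "m", "m", "m"]
labels      = [1,    1,   1,   0,   1,   1,   1,   0]
predictions = [1,    0,   0,   0,   1,   1,   1,   0]

gap, rates = equal_opportunity_gap(groups, labels, predictions)
print(rates, gap)  # TPR 1/3 for group "w" vs 3/3 for group "m": gap ≈ 0.67
```

A platform could track such a gap during training and penalize models whose gap exceeds a chosen threshold, which is one way the "balanced control" mentioned above can be operationalized.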

One inspiring example is Mastodon, a decentralized social media platform that does not rely on the algorithmic recommendation systems used by mainstream platforms. Users' timelines on Mastodon are determined by the people they follow and the instances they join, rather than by invisible algorithms optimizing for engagement. This design resists the construction of filter bubbles and lowers exposure to algorithmically rewarded, beauty-focused material. By putting control in users' hands and favoring interest-driven interactions, Mastodon offers a model closer to a pluralistic and inclusive digital environment.

4.2. Policy intervention

While technical redesign is important, the systemic gender bias embedded in digital platforms also requires policy measures to ensure long-term sustainability and accountability.

To tackle gender bias within algorithmic systems, legal regulation should require platforms to make transparent the internal logic of their technology, particularly content recommendation and ranking algorithms. Such transparency opens companies to public scrutiny by users and to external audits. A case in point is the European Union's Digital Services Act (DSA), which requires large online platforms to provide informative explanations of their algorithmic recommendation systems. Alongside legislation, corporate self-regulation is equally important: explicit gender-equity criteria and bias auditing should run through the entire algorithm life cycle, including regular audits, fairness training for employees, and metrics that track the amplification of gendered content. Together, legal action and platform-level management form a two-pronged approach to preventing recommender systems from reinforcing gender stereotypes.
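One metric an internal audit could track, sketched below under purely hypothetical data, is an "amplification ratio": how often a content category is shown to users relative to how common it is in the candidate pool. A ratio well above 1.0 flags algorithmic over-promotion of that category; this is an illustrative construction, not a metric mandated by the DSA or used by any named platform.

```python
# Hypothetical bias-audit metric: amplification ratio per content category.
# ratio = (share of impressions) / (share of candidate pool); 1.0 = neutral.
from collections import Counter

def amplification_ratios(corpus, impressions):
    corpus_counts = Counter(corpus)
    shown_counts = Counter(impressions)
    return {c: (shown_counts[c] / len(impressions)) /
               (corpus_counts[c] / len(corpus))
            for c in corpus_counts}

# Example: idealized content is 20% of the pool but 60% of what users see.
corpus      = ["idealized"] * 20 + ["diverse"] * 80
impressions = ["idealized"] * 60 + ["diverse"] * 40

ratios = amplification_ratios(corpus, impressions)
print(ratios)  # idealized ≈ 3.0 (over-promoted), diverse ≈ 0.5 (suppressed)
```

Reporting such ratios regularly, broken down by gendered content categories, is one concrete form the auditing and tracking metrics described above could take.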

4.3. Social awareness

Notwithstanding policy and technological measures, a lack of public consciousness will allow biased algorithms to persist.

The public needs to understand how algorithms silently shape information reception and the formation of gender concepts. Recommendation systems suggest content based on users' behavioral history; without discernment, users who like and share posts about "body anxiety" unwittingly reproduce gender stereotypes. To make the public more algorithm-literate, school curricula can introduce elementary algorithmic mechanisms, social media platforms should promote accessible science communication, and non-profit organizations can strengthen the public's ability to recognize and resist gender discrimination through outreach activities. Only when the public becomes aware of the stereotypes in its own behavior can more balanced social feedback be created in the algorithmic world.

5. Conclusion

This paper explores how big data recommendation algorithms amplify gendered body shaming on social media platforms and promote a uniform, idealized aesthetic standard. Through analysis of the psychological and social consequences, the article argues that this algorithm-driven "aesthetic push" has especially long-term and hidden negative impacts on female users.

Compared with traditional body shaming, algorithmic recommendation has brought fundamental changes in scale, source, and persistence. The algorithms continuously steer users toward aesthetically enhanced body images, intensifying users' concerns about their appearance, their self-objectification, and their harsh aesthetic expectations of others. Technological optimization, policy intervention, and public education are proposed as avenues of improvement.

This study still has room for improvement. For instance, while it addresses gendered algorithmic bias, it does not examine how such bias may vary across cultural contexts, which limits the generalizability of its conclusions. The impact on male and non-binary groups was also not adequately explored.

Future research could focus on the differences in recommendation mechanisms across various platforms, the algorithmic dissemination paths of cross-cultural aesthetics, or further analyze how AI filters affect an individual's sense of identity and social interaction.


References

[1]. Arumugam, N., Manap, M. R., Mello, G. D., & Dharinee, S. (2022). Body shaming: Ramifications on an individual. International Journal of Academic Research in Business and Social Sciences, 12(4), 1067-1078.

[2]. Färber, M., Coutinho, M., & Yuan, S. (2023). Biases in scholarly recommender systems: impact, prevalence, and mitigation. Scientometrics, 128(5), 2703-2736.

[3]. Czubaj, N., Szymańska, M., Nowak, B., & Grajek, M. (2025). The Impact of Social Media on Body Image Perception in Young People. Nutrients, 17(9), 1455.

[4]. Fardouly, J., Diedrichs, P. C., Vartanian, L. R., & Halliwell, E. (2015). Social comparisons on social media: The impact of Facebook on young women's body image concerns and mood. Body image, 13, 38-45.

[5]. Brandtzaeg, P. B., & Lüders, M. (2018). Time collapse in social media: Extending the context collapse. Social Media+ Society, 4(1), 2056305118763349.

[6]. Clark, O., Lee, M. M., Jingree, M. L., O'Dwyer, E., Yue, Y., Marrero, A., ... & Mattei, J. (2021). Weight stigma and social media: evidence and public health solutions. Frontiers in nutrition, 8, 739056.

[7]. Eslami, M., Vaccaro, K., Karahalios, K., & Hamilton, K. (2017, May). “Be careful; things can be worse than they appear”: Understanding biased algorithms and users’ behavior around them in rating platforms. In Proceedings of the international AAAI conference on web and social media (Vol. 11, No. 1, pp. 62-71).

[8]. FitzGerald, C., Martin, A., Berner, D., & Hurst, S. (2019). Interventions designed to reduce implicit prejudices and implicit stereotypes in real world contexts: a systematic review. BMC psychology, 7(1), 29.

[9]. Cinelli, M., De Francisci Morales, G., Galeazzi, A., Quattrociocchi, W., & Starnini, M. (2021). The echo chamber effect on social media. Proceedings of the national academy of sciences, 118(9), e2023301118.

[10]. Cohen, R., Newton-John, T., & Slater, A. (2017). The relationship between Facebook and Instagram appearance-focused activities and body image concerns in young women. Body image, 23, 183-187.

[11]. Fardouly, J., & Vartanian, L. R. (2016). Social media and body image concerns: Current research and future directions. Current opinion in psychology, 9, 1-5.

[12]. Choukas-Bradley, S., Roberts, S. R., Maheux, A. J., & Nesi, J. (2022). The perfect storm: A developmental–sociocultural framework for the role of social media in adolescent girls’ body image concerns and mental health. Clinical child and family psychology review, 25(4), 681-701.

[13]. Nesi, J., & Prinstein, M. J. (2015). Using social media for social comparison and feedback-seeking: Gender and popularity moderate associations with depressive symptoms. Journal of abnormal child psychology, 43(8), 1427-1438.

[14]. Duffy, B. E., & Hund, E. (2019). Gendered visibility on social media: Navigating Instagram’s authenticity bind. International Journal of Communication, 13, 20.

[15]. Kay, M., Matuszek, C., & Munson, S. A. (2015, April). Unequal representation and gender stereotypes in image search results for occupations. In Proceedings of the 33rd annual acm conference on human factors in computing systems (pp. 3819-3828).

[16]. Hardt, M., Price, E., & Srebro, N. (2016). Equality of opportunity in supervised learning. Advances in neural information processing systems, 29.



Data availability

The datasets used and/or analyzed during the current study will be available from the authors upon reasonable request.

Disclaimer/Publisher's Note

The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of EWA Publishing and/or the editor(s). EWA Publishing and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

About volume

Volume title: Proceedings of ICADSS 2025 Symposium: Art, Identity, and Society: Interdisciplinary Dialogues
Editor: Ioannis Panagiotou, Yanhua Qin
Conference date: 22 August 2025
Series: Communications in Humanities Research
Volume number: Vol.76

© 2024 by the author(s). Licensee EWA Publishing, Oxford, UK. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
