1. Introduction
With the rapid advancement of AIGC (Artificial Intelligence-Generated Content) technologies, tools such as automated news generation, virtual news anchors, and text-to-video applications are being increasingly adopted in journalism, profoundly changing how news is produced and disseminated. In the evolving new media environment, however, these technologies accelerate content creation while also giving rise to growing controversy over the “authenticity” of AIGC news. Traditional journalism is founded on the principle of authenticity, yet AIGC technologies can now generate several hundred words of news content within seconds. Produced without human oversight and dressed in professional-looking formats that carry little substantive information, these AI-generated reports may appear “real” yet lack the investigative rigor and editorial scrutiny associated with traditional journalism. Such texts rely on vast corpora and algorithmic training to mimic journalistic style, but their factual basis is frequently unverifiable, challenging the ontological foundation of news. The line between truth and falsehood is increasingly blurred, and public trust in news is gradually eroding.
In Simulacra and Simulation, Jean Baudrillard introduced the concepts of simulation and hyperreality, emphasizing that modern society—and even social institutions—are increasingly controlled by technology and mass media. As he states, “We live in a world where there is more and more information, and less and less meaning” [1]. With the replacement of traditional journalistic gatekeepers by algorithms and AI, the essence of information becomes decentered, lost within opaque simulations. “Here, mass media are no longer vehicles for social meaning and dramatic social practice, but merely for information” [1]. This paper draws on Baudrillard’s theory of simulacra to reexamine the concept of “authenticity” within the new media environment. Through case studies and textual analysis, it explores the authenticity crisis of AIGC news in three dimensions—content generation, expressive form, and audience cognition—while revealing its deeper cultural implications within the media landscape.
2. Literature review
Baudrillard posits that "simulation" refers to the imitation or emulation of the real, where simulation is not the "truth" of reality but a form of "virtual truth." "Simulacra" denotes non-real images that are not based on an original reality but instead constructed from copies of copies [1]. Baudrillard divides the evolution of simulacra into four stages, paralleling shifts in value systems, each linked to technological advancements and their societal implications. The first stage, "counterfeit," corresponds to the Renaissance and Industrial Revolution eras, characterized by imitation of nature through material extension [1]. The second stage, "industrial simulacra," revolves around mass production and mechanical replication under equivalent exchange principles [1]. The third stage, "simulation," emerges with information technology, marked by models, cybernetics, and hyperreality, where technology deepens its control over human life [1]. The fourth stage, "hyperreality," coincides with the fragmentation of value in the digital age, where reality is entirely replaced by self-referential signs and codes [1].
"Simulation is no longer that of a territory, a referential being, or a substance. It is the generation by models of a real without origin or reality: a hyperreal" [1]. The digital development of contemporary society validates Baudrillard's prophecy, where media-constructed images and signs no longer reflect reality, but generate a "hyperreality" that is detached from yet more appealing than actual existence. AIGC journalism precisely manifests the quintessential characteristics of the fourth-order simulation - "prototype-free" pure simulation. As the virtual scenarios constructed by media continue to evolve, will their boundaries gradually blur until they ultimately replace reality, becoming a new form of "authenticity"? This warrants profound contemplation.
In traditional journalism theory, authenticity is regarded as the lifeline of news and the fundamental criterion for its value. Scholars generally categorize journalistic authenticity into three dimensions: factual accuracy (verifiable details), holistic accuracy (consistency with broader societal phenomena), and essential authenticity (insight into underlying causes) [2]. However, these standards face challenges in the AIGC context. While studies acknowledge AIGC's potential benefits—such as cost efficiency, human-machine collaboration, and multimedia storytelling [3]—its erosion of authenticity remains a critical concern. Issues like AI-generated disinformation, rooted in large language models and biased datasets [4], have prompted calls for regulatory frameworks, industry self-regulation, and enhanced editorial oversight [5]. Current research on AIGC journalism remains nascent, focusing on technical applications, disinformation governance, and professional ethics, while gaps persist in areas like copyright, privacy, authenticity verification, and cross-cultural impacts.
3. Analysis and results
3.1. Content generation level
Baudrillard asserts that contemporary society has fully entered an era dominated by simulation, in which information is no longer grounded in reality but is constructed through endless processes of replication, collage, and representation, thus forming a “hyperreal” world detached from the real. Within this context, the method of content production employed by AIGC news exhibits typical characteristics of simulation. AIGC relies on existing databases, archived materials, and fragmented online information to recombine content, lacking direct on-site interviews or authentic field investigations. This fundamentally undermines the epistemological foundation of journalistic authenticity.
In April 2025, a notable case of AIGC-generated false news occurred in Qiaokou District, Wuhan, Hubei Province. A local news company, in pursuit of higher website traffic, purchased an AI writing program that automatically generated “hot articles” by inputting keywords and scraping online content. The AI system mechanically stitched together textual fragments from various sources. While the syntax appeared coherent, the articles contained numerous factual errors and were published without human review, ultimately misleading a wide audience. This incident is not isolated. In July 2024, public security authorities in Sichuan Province publicly disclosed ten cases involving AI-generated rumors, including fabricated disasters such as a “landslide in Yunnan killing eight” and a “Jiude County earthquake,” as well as invented social conflicts like “a confrontation between police and civilians in Bazhong.” Perpetrators used AI tools to mass-produce realistic text and imagery to provoke public sentiment and obtain illicit gains.
Behind seemingly complete AIGC content, there is often an absence of real events or first-hand information. Claimed sources and interview references are frequently nonexistent. These cases illustrate that the crisis of authenticity in AIGC news is, at its core, a deeper crisis of cognition, meaning, and power. Under the regime of simulacra, the boundaries between truth and falsehood, beauty and ugliness, good and evil become increasingly blurred. As Baudrillard famously stated, “It is no longer possible to measure beauty and ugliness, truth and falsehood, or good and evil—just as it is impossible to simultaneously determine the speed and position of a molecule.” [1]
3.2. Expressive form level
Baudrillard argues that modern digital and media technologies have induced a profound “implosion” [1] within contemporary society. Unlike McLuhan’s concept of media convergence and expansion, Baudrillard’s notion of “implosion” refers to the collapse of meaning or the disappearance of the referent, one of the core characteristics of mass media in the modern era [1]. In the field of journalism, AIGC-generated news is capable of closely simulating the stylistic norms of traditional news writing, including language style, page layout, and headline structure. As a result, audiences find it increasingly difficult to distinguish the authenticity of such content, falling prey to a cognitive illusion of “hyperreality.”
For instance, the U.S.-based local news platform Hoodline, in its use of AI-generated content, not only fabricated author names such as “Elena Martinez,” but also produced synthetic reporter avatars and fictional professional bios such as “a veteran community observer with 10 years of experience in local journalism.” This approach thoroughly imitates the byline system of traditional journalism and even adopts the visual layout style of established media outlets like the San Francisco Chronicle, leading readers to believe that the articles are the product of real journalists conducting field reporting. Although the site includes a disclaimer noting the content is “AI-generated,” the label is minimized and hidden within a small icon beside the byline, making it visually inconspicuous. This manipulation exploits the audience’s habitual trust in professional news institutions. By creating fictional reporter identities and deploying professional-looking formats, editorial teams enhance the perceived credibility and legitimacy of the content, aligning with the public’s expectations of journalistic professionalism. Consequently, audiences—when exposed to these highly simulated “journalistic texts”—tend to assume their reliability, thereby fostering a trust illusion toward hyperreal information and exacerbating the crisis of authenticity in journalism. When both truth and falsehood are disguised in the same appearance, the very foundation of journalistic authenticity becomes impossible to assess.
Moreover, even in these highly formatted and simulated news texts, errors and inconsistencies inevitably arise, further revealing the deficiencies in AIGC content’s truth value. For example, during the January 2025 earthquake in Shigatse, Tibet, a widely circulated image titled “Little Boy with Hat Buried Under Rubble” gained traction on the Chinese internet. Upon verification by Tencent News’ fact-checking platform “Jiaozhen,” the image was confirmed to be AI-generated. Although the original uploader had marked the image as “AI-generated,” secondary disseminators deliberately removed this label and paired the image with real earthquake hashtags, misleading the public into interpreting it as actual disaster footage. Notably, the image displayed typical AI-generation anomalies such as “six fingers,” but due to its strong emotional impact, most viewers failed to detect the signs. While the narrative structure of AIGC news may appear coherent and readable, it often suffers from logical gaps, flawed causal chains, and implausible contextual details.
3.3. Audience perception level
As AIGC technologies continue to evolve, AI-generated anchors, digital humans, synthetic imagery, and virtual scenes have become increasingly sophisticated. These hyperrealistic media environments deliver powerful visual and auditory sensations, often leading users to mistake mediated simulations for actual reality. This intensified sensory realism elevates the perceived credibility of the content while simultaneously diminishing the audience’s ability to critically assess the authenticity of news information.
For example, in March 2024, a series of short videos featuring a fictional Russian woman named “Natasha” attracted widespread attention on Chinese social media platforms. In the videos, the character spoke fluent Chinese and recounted her “entrepreneurial journey in China,” including renting a booth at the Yiwu International Trade City and selling amber handicrafts. The videos also featured scenes of her “making dumplings” and “practicing calligraphy,” which mirrored common lifestyle depictions of foreigners living in China. The high degree of audiovisual realism in these videos fostered trust among viewers and successfully enabled monetization through livestreaming. This case demonstrates how fabricated content, when embedded within familiar cognitive frameworks, is more easily accepted by audiences, thereby blurring the boundary between truth and falsehood.
The rapid development of AIGC has further accelerated the expansion of mass media and the dominance of popular culture. Yet this expansion also erodes individual subjectivity, as the public is increasingly shaped, and simultaneously disempowered, by the same media forces. As a result, audiences gradually lose the ability to discern the authenticity of AIGC news or even to trace the origin of information, and over time settle into apathy and detachment toward fragmented content. Standards for judging authenticity become increasingly vague, and the notion that “what appears real is real” may ultimately become the default mode of social cognition.
4. Discussion
4.1. Is AIGC news a form of simulation without an original prototype?
The continuous development of AIGC technologies has greatly enhanced the efficiency of news production. However, the challenges brought about by AIGC-generated news have become increasingly apparent, compelling us to ask: Does AIGC news constitute a form of fictional simulation that lacks any original prototype? If the answer is yes, how should we confront the challenges posed by such content?
To address this question, we must return to the content generation mechanism of AIGC. At present, most AIGC news is generated based on the analysis and integration of pre-existing databases and online information. This method is inherently retrospective and virtual, as its content does not originate from real-world fieldwork or firsthand investigation. Instead, it involves the secondary recombination of symbolic information. From the perspective of Baudrillard’s theory, this represents a defining feature of the third stage of simulacra: information becomes detached from reality, referencing only itself, continuously replicating within a simulated system. Over time, this simulation permeates and dominates public cognition. Its “authenticity” lies merely in formal repetition and stylistic mimicry.
Consider, for instance, a 2025 incident involving the sale of health products. A mother, after watching an AI deepfake video posing as a news report, spent tens of thousands of yuan on a supplement falsely claimed to “cure diabetes.” The video fabricated scenes of a “traditional Chinese medicine doctor” explaining clinical efficacy in a laboratory setting, along with AI-synthesized “consumer interviews,” collectively creating a highly persuasive media product. In reality, the product was merely a generic herbal remedy, lacking both regulatory approval and proven effects. In this case, AIGC assembled information from databases to construct a seemingly authoritative persona—one that had no basis in reality, a pure simulacrum with no original referent.
In this light, AIGC news more closely resembles a simulated text detached from reality yet presenting itself as authentic. It does not aim to restore facts but instead generates a text with the semblance of “newsworthiness” through algorithmic logic and semantic recombination. In Baudrillard’s terms, this type of content has severed ties with the real and emerged as a cultural illusion “in the name of the real.”
4.2. Does AIGC disrupt the ontological authenticity of news?
At its current stage of development, AIGC does indeed pose a substantial challenge to the ontological authenticity of news. As previously discussed, the credibility of AIGC-generated news is often not rooted in factual investigation or authoritative editorial review. Rather, it is shaped by algorithmic recommendation systems, keyword matching, and user interaction data. In other words, whether such news is trusted or accepted by the public is largely determined by its mode of dissemination rather than by the factual basis of its content. This form of “passive trust” essentially severs the intrinsic connection between news content and factual reality, reducing AIGC output to a form of simulated representation.
For example, in the widely circulated incident involving the “Israeli Prime Minister’s psychiatrist,” a foreign website used AIGC technology to fabricate a news story about Israeli Prime Minister Benjamin Netanyahu and his “private psychiatrist,” inventing numerous false details about diagnoses and political opinions. The story was packaged as an “exclusive report” and spread rapidly. It was even mistakenly cited by some television programs as a real news item. This fake report relied on fictional experts and supposed insiders—so-called “authoritative sources”—to leverage public trust in professional domains and thus construct a false narrative. The result was a highly convincing simulation of reality.
The threat posed by AIGC is not limited to the fabrication of individual pieces of content. More critically, it lies in the mass production of simulated symbols that saturate the information environment. In what appears to be an information-rich media landscape, the public increasingly relies on these signs to navigate reality. Yet simultaneously, people are unknowingly subjected to an overload of redundant, misleading, or entirely fabricated content. The line between usable information and noise becomes ever more difficult to draw.
For this reason, it is imperative that society and the public treat the impact of AIGC on the authenticity of news with the utmost seriousness. The disruption AIGC causes is not merely technical in nature; it has the potential to trigger a broader breakdown in public discourse and cognitive equilibrium. If simulation is mistaken for reality on a mass scale, the public nature and factual grounding of journalism could face an existential crisis.
4.3. Will AIGC give rise to “non-judgmental reading”?
Building on the preceding analysis, this study introduces the concept of “non-judgmental reading.” The term refers to a habitual mode of information consumption in which modern individuals, immersed in an environment saturated with both real and simulated content, settle into a passive compromise. On one hand, the value system within the media landscape has already shifted: authenticity is no longer the central criterion for evaluating newsworthiness; attention-grabbing potential now serves as the dominant logic, and algorithms and big data systems prioritize content that generates traffic and emotional response. On the other hand, the explosive development of AIGC technologies has led to a proliferation of AI-generated news, much of which lacks factual grounding and exemplifies the characteristics of simulated content.
These two tendencies converge to reinforce a media system that derives authority from the logic of simulacra. Through the construction of “hyperreality”—a version of reality that appears more authentic than reality itself [1]—the media produces highly credible illusions that are detached from factual foundations. In this process, audiences may initially attempt to seek out the factual basis of news. However, as authentic information becomes entangled with hyperreal content in a continuous, overwhelming stream, the public gradually loses both the willingness and the capacity to distinguish truth from illusion. As a result, individuals increasingly resort to passive acceptance of superficial “realness,” abandoning the pursuit of underlying truths. Over time, this dynamic solidifies into a form of “non-judgmental reading”—a habitual state in which critical evaluation is suspended.
One illustrative example is a case disclosed by the People’s Court Daily involving an MCN (multi-channel network) operation dubbed a “news factory.” This organization used AI software to mass-produce between 4,000 and 7,000 pieces of fake news per day. The content covered high-sensitivity areas such as “natural disasters” and “criminal incidents,” and was disseminated through a matrix of 842 social media accounts. This industrialized model of content production and dissemination enabled fake news to surpass traditional human-driven misinformation in both volume and speed, creating an overwhelming “information flood” [6].
This case serves as a stark warning: when hyperreal news content achieves scale and systematization, it not only undermines the audience’s capacity for critical judgment and fosters “non-judgmental reading,” but also poses a deeper threat to the credibility of journalism itself. Left unchecked, it could erode the public’s trust in the news system and compromise its societal function as a whole.
5. Conclusions and prospects
The theory of simulacra provides a profound cultural lens through which to examine the crisis of authenticity in contemporary journalism. AIGC-generated news, through hyperreal simulations, creates an "illusion of credibility" that obscures its detachment from factual foundations. In the digital order of simulacra, such "hyperreal" news—devoid of original referents—dissolves the anchors of reality in both content and form, leaving only self-referential models and floating signs. This cultural spectacle blurs the line between truth and fabrication, threatening journalism’s societal function as a public good.
To address this crisis, a multi-layered defense system integrating technology, regulation, and public cognition is urgently needed. Technical solutions like news source-tracing systems and AI content detection tools must be developed to curb disinformation at its origin. Legislative measures should clarify platform responsibilities and ethical boundaries while fostering cross-platform governance and public reporting mechanisms. Equally critical is enhancing public media literacy to cultivate critical discernment and empowering users with open-source fact-checking tools. Future research should investigate the power dynamics and commercial interests embedded in algorithmic systems, as well as cross-cultural variations in authenticity crises. Ultimately, this crisis transcends mere technological evolution—it is a civilizational challenge to safeguard cognitive sovereignty in the digital age. Only by recentering "truth" as a public value can we avoid becoming passive prisoners of algorithmic simulacra, adrift in a hyperreal landscape devoid of meaning.
References
[1]. Kong Ming'an. (2008). Object·Symbol·Simulation: A Study of Baudrillard's Philosophy. Hefei: Anhui People's Publishing House.
[2]. Tang Xinhua. (2004). Analyzing the principle of journalistic authenticity. Journal of Lingling University, (06), 82–83.
[3]. Zeng Xiao. (2023). New reflections on ChatGPT: Opportunities, challenges, and regulatory strategies for AIGC-driven news production. Publishing Panorama, (07), 57–61.
[4]. Mo Zuying, Pan Daqing, Liu Huan, et al. (2023). Root causes of AIGC-generated disinformation: An information quality perspective. Documentation, Information & Knowledge, 40(04), 32–40.
[5]. Zhang Dengyun. (2025). Innovative applications and challenges of AIGC in news production. Journalist Cradle, (04), 138–140.
[6]. Gleick, J. (2022). The Information: A History, a Theory, a Flood (Gao, B., Trans.). Beijing: People's Posts and Telecommunications Press.
Data availability
The datasets used and/or analyzed during the current study will be available from the authors upon reasonable request.