
Research Article
Open access

The Technological Applications and Risk Challenges of Artificial Intelligence in International Communication

Ke Zhu 1*
1 Xi’an Jiaotong University
* Corresponding author: 15807694901@163.com

Abstract

As a driving force of technological innovation in the field of international communication, artificial intelligence (AI) not only reconstructs the technological logic of global information production and distribution but also fundamentally reshapes the cognitive paradigms and power structures underpinning cross-cultural communication. Particularly within the domain of international public opinion, AI demonstrates comprehensive advantages across the entire communication chain—from data collection and multi-modal content production to precise dissemination—by leveraging its powerful data processing and content generation capabilities. AI technologies are rewriting the regulatory framework of international communication and posing novel challenges to communication ethics and security. This paper systematically analyzes the dialectical coexistence of technological empowerment and risk, revealing the internal mechanisms through which AI influences international communication. Furthermore, it explores the theoretical value and practical pathways for establishing a new international communication order in the intelligent era, focusing on three dimensions: technological governance, institutional innovation, and international collaboration.

Keywords:

artificial intelligence (AI), international communication, effect


1. Introduction

As the core driving force behind the latest wave of technological revolution and industrial transformation, artificial intelligence (AI), through a series of key technologies such as machine learning, is reshaping the technological foundations, production methods, and communication ecology of international communication [1]. Intelligent technologies have disrupted traditional communication modes, fundamentally altering the landscape of information production [2]. The application of these technologies has transformed conventional models of international communication, giving rise to new forms and business paradigms, and continuously advancing international communication toward greater intelligence, personalization, and contextualization, thereby empowering the field. However, AI’s deployment also introduces various risks and challenges. For example, algorithmic biases may exacerbate cognitive divides among different cultural groups, which not only undermines the effectiveness of international communication but may also deepen divisions and conflicts within the international community. This study aims to present a comprehensive overview of AI’s technological applications in international communication, analyze its enabling effects, evaluate associated risks, and propose corresponding governance strategies, ultimately offering theoretical foundations and practical guidance for the construction of a more equitable, inclusive, and sustainable new order in international communication.

2. Artificial intelligence empowering international communication

2.1. Empowering the information production side

In the stages of information collection and verification, artificial intelligence leverages its outstanding data processing capabilities and the connectivity of global information networks to efficiently handle vast amounts of data and automate fact-checking processes. For instance, AI tools such as Factmata can analyze textual sentiment and source credibility to assist journalists in identifying fake news, thereby enhancing the reliability of news content. Similarly, AI tools developed by Adobe are capable of detecting whether images have been manipulated through Photoshop, helping to verify the authenticity of visual information. Regarding content sourcing, artificial intelligence disrupts the traditional media production landscape by enabling a broader range of individuals and organizations to participate in content creation. Creators from diverse backgrounds can freely express themselves and share their perspectives, enriching content sources and viewpoints, and promoting multicultural exchange within international communication. In terms of content production, AI technologies based on natural language processing (NLP), rule-based systems, and other methods can automatically generate news articles, social media posts, advertisements, and other types of content, significantly reducing the time costs of manual creation and improving production efficiency. Breakthroughs in deep learning, computer vision, and related technologies have extended AI’s capabilities beyond textual output to robust multimodal content creation and translation encompassing images, audio, and video. As an integrated technology, AI effectively stimulates holographic innovation in international communication [3,4]. Here, “holographic” refers to the diversification of media information formats and a more multidimensional presentation of information [5]. This transformation greatly expands the forms and dimensions of content creation, constructing richer, more immersive communication contexts. It also enables content to be precisely matched to audiences’ diverse cultural backgrounds, interests, and media consumption habits, thereby providing more abundant content materials and expressive forms for international communication.
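To make the template- and rule-based generation described above concrete, the following minimal Python sketch turns one structured data record into short English and Chinese news briefs. The data schema, templates, and values are illustrative assumptions, not the interface of any particular commercial writing tool.

```python
# Minimal sketch of rule/template-based news generation from structured data,
# illustrating how automated writing tools can turn a data feed into short
# multilingual news briefs. The schema, templates, and values are illustrative.

from dataclasses import dataclass

@dataclass
class MarketRecord:
    index_name: str
    close: float
    change_pct: float
    date: str

TEMPLATES = {
    "en": "{index_name} closed at {close:.2f} on {date}, {direction} {change:.2f}% from the previous session.",
    "zh": "{index_name}于{date}收于{close:.2f}点，较前一交易日{direction}{change:.2f}%。",
}

def render_brief(record: MarketRecord, lang: str = "en") -> str:
    """Fill a language-specific template with structured data."""
    rising = record.change_pct >= 0
    direction = {"en": "up" if rising else "down", "zh": "上涨" if rising else "下跌"}[lang]
    return TEMPLATES[lang].format(
        index_name=record.index_name,
        close=record.close,
        date=record.date,
        direction=direction,
        change=abs(record.change_pct),
    )

if __name__ == "__main__":
    record = MarketRecord("Hang Seng Index", 18321.45, -0.83, "2025-03-14")  # fabricated example values
    print(render_brief(record, "en"))
    print(render_brief(record, "zh"))
```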

2.2. Empowering the information distribution side

In the process of content distribution, artificial intelligence technologies promote precision, personalization, localization, and contextualization of dissemination.

Through recommendation algorithms such as collaborative filtering, AI can accurately capture target users’ emotional resonance points and content demands, enabling personalized distribution that aligns with their preferences. Moreover, by analyzing large-scale communication data, AI can assess influencing factors such as audience preferences in different regions, aiding countries in formulating scientifically grounded international communication strategies. This helps international communication transcend geographical and cultural differences and reach global audiences in a more targeted and affinity-driven manner, effectively conveying information and shaping images within the international public opinion arena [5]. Regarding contextualized and localized communication, translation software can automatically produce fluent localized versions of foreign-language content. For example, translation tools such as Google Translate, leveraging AI technologies like Neural Machine Translation (NMT), are embedded in webpages and social media platforms to achieve smooth bidirectional translation between users' native languages and the original content languages. This expands accessibility to international communication content for diverse language groups and advances the localization of distributed content [5]. Meanwhile, intelligent media technologies represented by Virtual Reality (VR) are rapidly developing, constructing distinctive narrative scenarios and demonstrating two prominent advantages: “immersion” and “presence” [6]. VR technology employs a first-person omniscient narrative perspective combined with panoramic storytelling cues and sensory-stimulating narrative symbols, forming an interactive narrative production-consumption model. This creates a unique communication mode for scenario-based content reception, allowing audiences worldwide to experience a sense of presence and enhancing user engagement. For instance, China Global Television Network (CGTN) utilizes VR technology to create novel communication subjects, adapting to trends of video-based, personalized, and social international communication, thus endowing content with a stronger sense of context [6].
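As a concrete illustration of the collaborative-filtering logic referenced above, the following minimal Python sketch ranks unseen items for a user by the similarity-weighted engagement of other users. The toy interaction matrix is an illustrative assumption; production recommenders operate on far larger, continuously updated data.

```python
# Minimal sketch of user-based collaborative filtering: users with similar
# interaction histories are assumed to share content preferences.

import numpy as np

# Rows = users, columns = content items; 1 = the user engaged with the item.
interactions = np.array([
    [1, 1, 0, 0, 1],
    [1, 0, 0, 1, 1],
    [0, 1, 1, 0, 0],
    [1, 1, 0, 0, 0],
], dtype=float)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def recommend(user_idx: int, top_k: int = 2) -> list[int]:
    """Score unseen items by the similarity-weighted engagement of other users."""
    target = interactions[user_idx]
    scores = np.zeros(interactions.shape[1])
    for other_idx, other in enumerate(interactions):
        if other_idx == user_idx:
            continue
        scores += cosine_similarity(target, other) * other
    scores[target > 0] = -np.inf          # do not re-recommend items already seen
    return [int(i) for i in np.argsort(scores)[::-1][:top_k]]

print(recommend(user_idx=3))  # items user 3 has not seen, ranked by predicted interest
```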

2.3. Artificial intelligence empowering the evaluation of communication effectiveness

Artificial intelligence technologies deeply empower the evaluation of international communication effectiveness through multidimensional technical pathways. At the data collection level, AI-driven web crawlers can automatically access and scrape data from multiple platforms such as social media, news websites, and forums. These crawlers support cross-platform data collection, covering major international social media platforms like Instagram and Twitter as well as niche platforms such as Reddit and Zhihu. To cope with the dynamic data loading common on modern webpages, advanced crawlers can execute JavaScript to render dynamically loaded content—for example, TikTok’s “infinite scroll”—significantly enhancing the efficiency and scope of overseas data collection. Furthermore, AI integrates data channels from multiple social media and news platforms to broaden the range of media usage data that can be gathered. In the data processing stage, AI technologies can synchronously consolidate user interaction data across multiple platforms to accurately map core indicators of communication effectiveness. At the semantic parsing level, natural language processing (NLP) techniques, through sentiment analysis, cross-lingual understanding, and deep contextual modeling, can accurately identify emotional tendencies and implicit attitudes in user comments. This avoids the pitfall of relying solely on surface textual meaning while overlooking cultural metaphors, irony, and other nuanced user emotions. By efficiently and precisely analyzing user-generated text, AI can decode audience sentiment, monitor public opinion, and effectively assist in assessing the actual reception and impact of international communication content [7].
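The consolidation and semantic-parsing steps described above can be sketched as follows: cross-platform engagement metrics are aggregated and user comments are scored with a tiny sentiment lexicon. The platform names, field names, and lexicon are illustrative assumptions; an operational system would rely on trained NLP models rather than a hand-built word list.

```python
# Minimal sketch of effect evaluation: consolidate interaction data gathered
# from several platforms and apply a small sentiment lexicon to comments.

from collections import Counter

posts = [
    {"platform": "Twitter",   "likes": 320, "shares": 45, "comment": "really inspiring report"},
    {"platform": "Reddit",    "likes": 87,  "shares": 12, "comment": "misleading and biased coverage"},
    {"platform": "Instagram", "likes": 510, "shares": 98, "comment": "beautiful footage, great story"},
]

POSITIVE = {"inspiring", "great", "beautiful", "engaging"}
NEGATIVE = {"misleading", "biased", "fake", "boring"}

def comment_sentiment(text: str) -> str:
    words = set(text.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

# Consolidate engagement and sentiment into cross-platform indicators.
engagement = Counter()
sentiment = Counter()
for post in posts:
    engagement[post["platform"]] += post["likes"] + post["shares"]
    sentiment[comment_sentiment(post["comment"])] += 1

print(dict(engagement))   # {'Twitter': 365, 'Reddit': 99, 'Instagram': 608}
print(dict(sentiment))    # {'positive': 2, 'negative': 1}
```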

3. Risks and challenges brought by the application of artificial intelligence in international communication

3.1. AI generates and propels the spread of misinformation

Since the major technological breakthrough in generative artificial intelligence in 2022, the production and dissemination of misinformation have surged under the impetus of AI technologies [8]. In the misinformation production stage, large-scale generative AI tools such as ChatGPT are trained on massive datasets during their development [8]. The sources feeding these datasets, however, are complex and frequently contaminated with misinformation. Although technology companies may remove some false information through data cleansing, portions of misinformation inevitably remain, are learned by the models, and are subsequently received by audiences worldwide, undermining the foundation of mutual trust in the international community. In addition, AI large models can generate multimodal information on instruction, enabling the mass production of misinformation such as transnational fake news and deepfake videos [9]. Individuals’ discomfort with uncertainty about how AI-enabled information systems reach decisions and what outcomes they produce further erodes their teleological evaluation of the technology [10]. In the misinformation dissemination stage, algorithmic recommendation systems have been embedded in various fields of international communication, including news distribution and social media feeds. These recommendation algorithms remain at an early stage; owing to technical limitations, they cannot accurately verify data authenticity, making it easy for deeply fabricated misinformation to be distributed widely to users and further accelerating the spread of false information. Additionally, in today’s international public opinion arena, politicians in various countries actively deploy social bots to disseminate misinformation with the aim of manipulating public opinion [11]. These bots generate realistic texts through natural language processing, adjust tone via sentiment analysis, and employ deep learning to mimic human conversational styles in order to publish false and inflammatory statements. By creating convincing fabrications, they actively participate in constructing content within the international public discourse, fueling the spread of biased false narratives and reinforcing existing unequal power relations in international information dissemination [11].

3.2. The deviance of algorithmic discrimination in artificial intelligence

As AI recommendation algorithms become widely embedded across international communication platforms, algorithmic discrimination has quietly emerged. According to the ACM’s definition, algorithmic discrimination refers to cases in which the technical design of, or data biases within, automated decision-making systems produce differential negative impacts on different groups. At the algorithm design level, these systems are deeply influenced by their designers [12]. Currently, major international social media platforms are predominantly controlled by Western Internet giants led by the United States. The power holders behind platforms such as Facebook and YouTube tend to exhibit Western value biases, which are generated and reinforced through multiple links of the technical chain, thereby fostering algorithmic discrimination [12]. Regarding data selection, the foundational data of international social media platforms rely heavily on English-language corpora centered on the United States, naturally marginalizing non-Western perspectives. This double standard in data filtering exacerbates the cognitive gap between different groups, gradually turning algorithmic discrimination into country-based and racial discrimination. Individuals’ discomfort with uncertainty about how AI-enabled information systems reach decisions and what outcomes they produce adversely affects their teleological evaluation of the technology, particularly in cultures high in uncertainty avoidance, which in turn undermines the effectiveness of international communication [10]. Furthermore, the sustained reinforcement of algorithmic discrimination cannot be separated from platform filtering mechanisms. International social media platforms generally embed a dual filtering mechanism composed of “personalized filtering” and “collaborative filtering.” Under personalized filtering, users mostly encounter viewpoints consistent with their own, as information filtering follows users’ existing habits of information acquisition and use. This leads to more covert information segregation among individuals and amplifies algorithmic bias. Meanwhile, under collaborative filtering, recommended content continues to perpetuate the stereotypes and biases present in users’ social circles and collective behaviors, deepening the “filter bubble” effect and strengthening the biases already formed under personalized filtering, thus creating a vicious cycle [13].
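The narrowing effect of personalized filtering described above can be illustrated with a small simulation: when the feed preferentially re-serves topics a user has already clicked, exposure concentrates on a shrinking set of topics. The topic labels and parameters are illustrative assumptions.

```python
# Minimal sketch of the "personalized filtering" dynamic: reinforcement of
# already-clicked topics reduces the diversity of what a user sees over time.

import random
from collections import Counter

random.seed(42)
TOPICS = ["domestic politics", "foreign affairs", "culture", "science", "sports"]

def simulate_feed(rounds: int = 200, personalization: float = 0.9) -> Counter:
    """With probability `personalization`, recommend from topics the user
    already engaged with; otherwise sample uniformly from all topics."""
    history = Counter({t: 1 for t in TOPICS})     # start with uniform exposure
    for _ in range(rounds):
        if random.random() < personalization:
            # Re-serve topics proportionally to past clicks (reinforcement).
            topic = random.choices(list(history), weights=list(history.values()))[0]
        else:
            topic = random.choice(TOPICS)
        history[topic] += 1
    return history

print(simulate_feed(personalization=0.0))   # roughly even exposure across topics
print(simulate_feed(personalization=0.95))  # exposure concentrates on a few topics
```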

3.3. Data leakage crisis triggered by artificial intelligence in international communication

The inherent contradictions between the technical characteristics and application models of artificial intelligence have given rise to serious data leakage risks. On the technical level, AI large models learn probability distributions over tokens through training on massive datasets, essentially constructing a latent data memory system. Experiments have demonstrated that large-scale pre-trained language models memorize details of their training data, and that a model’s parameter count is positively correlated with this memorization capacity; the risk of data leakage therefore grows with model size [14]. Notably, although developers claim that training data is not directly stored, the model’s ability to probabilistically reconstruct information creates a paradoxical leakage scenario of “no storage yet retrievable.” As a result, sensitive information such as user privacy and commercial data can still be systematically recovered through technical means. At the application level, the cloud service model transfers users’ interaction data entirely to developers’ servers. When such models are used across borders, personal information, business secrets, and even state secrets entered by users may flow directly to foreign corporate data centers. This creates a compound threat in international communication: on one hand, predictive privacy mechanisms enable overseas platforms to infer sensitive information such as citizen profiles and social habits from user behavioral data; on the other hand, the global proliferation of open-source large models facilitates the extraction of geopolitical and economic intelligence embedded in training data through technical means. When model service providers fall under foreign judicial jurisdiction, the boundaries of data sovereignty face technical dissolution.
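A simple way to illustrate the “no storage yet retrievable” memorization risk above is a verbatim-leakage audit that checks whether model outputs reproduce known sensitive training strings. The `generate` function below is a stand-in for any text-generation model and the sensitive strings are fabricated examples; real audits use far more sophisticated extraction and membership-inference tests.

```python
# Minimal sketch of a verbatim-leakage audit: compare model outputs against
# known sensitive training strings and flag exact reproductions.

SENSITIVE_TRAINING_STRINGS = [
    "contract no. 2024-117 between ExampleCorp and the ministry",        # fabricated
    "internal memo: negotiation fallback position is 4.2 percent",       # fabricated
]

def generate(prompt: str) -> str:
    """Placeholder for a real model call; here it 'memorizes' one record."""
    if "contract" in prompt.lower():
        return "...as stated in contract no. 2024-117 between ExampleCorp and the ministry..."
    return "no relevant information found"

def audit_leakage(prompts: list[str]) -> list[tuple[str, str]]:
    """Return (prompt, leaked_string) pairs where the output contains training text."""
    findings = []
    for prompt in prompts:
        output = generate(prompt)
        for secret in SENSITIVE_TRAINING_STRINGS:
            if secret in output:
                findings.append((prompt, secret))
    return findings

print(audit_leakage(["Summarize the contract terms", "What is the weather?"]))
```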

4. Countermeasures corresponding to the risks and challenges

4.1. Collaborative governance of misinformation

The transnational spread of misinformation has become a severe challenge facing the international community, necessitating systematic countermeasures along three dimensions: technological governance, international cooperation, and public literacy. First, at the level of technological governance, the international community should promote the establishment of a global certification mechanism for AI training data. Led by UNESCO, this effort should develop international standards for generative AI training data, requiring technology companies to disclose core data sources and provide credibility ratings. Simultaneously, blockchain-based misinformation traceability systems could be developed to enable the sharing of misinformation feature values across multinational platforms, thereby enhancing identification efficiency. In terms of international cooperation and policy regulation, international organizations could establish a global misinformation monitoring network to track the cross-border flow of misinformation in real time and issue alerts to member states. Finally, and most importantly, efforts should focus on improving global public media literacy and critical thinking through multi-level, comprehensive educational interventions. National educational institutions should integrate digital literacy into their curricula, incorporating AI content recognition and information verification skills to cultivate the public’s ability to discern AI-generated content. International organizations may launch a global “Anti-Misinformation” initiative, employing multilingual science popularization campaigns, online courses, and interactive tools to help users master fact-checking skills. Additionally, social media platforms could optimize user interfaces by adding warning labels alongside suspected misinformation and providing authoritative source comparison functions, guiding the public toward rational judgment.
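The cross-platform sharing of misinformation feature values mentioned above can be sketched as a shared fingerprint registry: each platform normalizes a flagged item and publishes its hash so that re-uploads can be matched without exchanging the raw content. The normalization rules and registry structure are illustrative assumptions; a blockchain-backed traceability system would add provenance records and fuzzy matching.

```python
# Minimal sketch of sharing misinformation fingerprints across platforms:
# normalize flagged text, hash it, and check new items against the registry.

import hashlib
import unicodedata

def fingerprint(text: str) -> str:
    """Normalize Unicode, case, and whitespace, then hash, so trivial edits still match."""
    normalized = unicodedata.normalize("NFKC", text).lower()
    normalized = " ".join(normalized.split())
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# Registry of fingerprints shared among participating platforms (illustrative).
shared_registry: set[str] = set()

def flag_misinformation(text: str) -> str:
    digest = fingerprint(text)
    shared_registry.add(digest)
    return digest

def is_known_misinformation(text: str) -> bool:
    return fingerprint(text) in shared_registry

flag_misinformation("Breaking: the summit has been CANCELLED due to a secret deal")
print(is_known_misinformation("breaking:  the summit has been cancelled due to a secret deal"))  # True
```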

4.2. Balanced governance of algorithmic discrimination

Addressing algorithmic discrimination requires the construction of a multidimensional governance system. First and foremost, it is urgent to break the international discourse monopoly formed by Western mainstream social media through algorithmic control. On one hand, strong support should be given to the global expansion of Chinese digital products such as TikTok, promoting the establishment of new international communication platforms. On the other hand, China should actively participate in the construction of global algorithm governance systems by initiating or joining international algorithm governance cooperation organizations, advocating for inclusive cross-border platform algorithm regulations that embed multicultural values within the governance framework. This approach not only counters Western technological hegemony but also secures a discursive space for developing countries to advocate for algorithmic fairness [15]. At the algorithm optimization level, international social media algorithm designers should adhere to principles of technical self-discipline and responsibility to improve recommendation mechanisms. They should overcome the technical limitations of “collaborative filtering” and “personalized recommendation” by establishing a “cocoon-breaking algorithm” mechanism [12]. For example, incorporating cross-cultural interest tags in user profiles to proactively recommend heterogeneous information; setting “cultural balance factors” in recommendation systems to ensure fair presentation of content from diverse civilizational perspectives; adding a “multiple perspectives” section in interface design to break the information cocoon and facilitate cultural communication channels. Simultaneously, a balanced multilingual corpus collection mechanism should be established to increase the weight of non-English data, thereby correcting Western-centric narrative biases from the data source. Finally, the construction of a dynamic monitoring system for algorithmic discrimination should be accelerated. A transnational joint organization should be established to utilize blockchain technology for traceable supervision and regularly publish algorithmic fairness assessment reports. Through the dual engines of technological autonomy and international regulation, the paradigm of algorithmic logic can be transformed from “value bias” to “civilizational mutual learning.”
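The “cultural balance factor” proposed above can be sketched as a greedy re-ranking rule in which an item’s relevance score is discounted according to how many already-selected items come from the same cultural source, so heterogeneous perspectives surface earlier in the feed. The candidate items, scores, and factor value are illustrative assumptions.

```python
# Minimal sketch of diversity-aware re-ranking with a "cultural balance factor":
# relevance is penalized as one source starts to dominate the selected list.

def rerank_with_balance(items: list[dict], balance_factor: float = 0.3, top_k: int = 4) -> list[dict]:
    """Greedy re-ranking: each pick's score is reduced by how many items
    from the same cultural source have already been selected."""
    selected: list[dict] = []
    remaining = list(items)
    while remaining and len(selected) < top_k:
        def adjusted(item):
            same_source = sum(1 for s in selected if s["source"] == item["source"])
            return item["relevance"] - balance_factor * same_source
        best = max(remaining, key=adjusted)
        selected.append(best)
        remaining.remove(best)
    return selected

candidates = [
    {"title": "A", "source": "US outlet",      "relevance": 0.95},
    {"title": "B", "source": "US outlet",      "relevance": 0.93},
    {"title": "C", "source": "African outlet", "relevance": 0.80},
    {"title": "D", "source": "US outlet",      "relevance": 0.90},
    {"title": "E", "source": "Asian outlet",   "relevance": 0.78},
]

for item in rerank_with_balance(candidates):
    print(item["title"], item["source"])   # A, C, E, B: diverse sources surface earlier
```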

4.3. Global co-governance of data security

To prevent the data leakage risks brought about by artificial intelligence, the international community must establish a multidimensional prevention and control system encompassing technology, regulation, and cooperation. At the technical level, privacy protection technologies that meet ethical requirements should be developed. By optimizing algorithm design and model architecture, the direct correlation between model parameters and training data can be reduced at the source, minimizing the possibility of sensitive data being memorized and reproduced by the model and establishing a reliable privacy protection barrier for data processing. In terms of regulatory norms, it is necessary to improve the management system for cross-border data flows, establish and refine classification and hierarchical protection mechanisms for data, prevent the improper outflow of critical data resources, and safeguard national security and public interests. At the same time, a security assessment mechanism for outbound transfers of data in key areas should be established, along with legal and regulatory frameworks addressing data sovereignty and cultural ethics. Special attention should be given to the graded and classified protection of sensitive data involving ethnic cultures, religious beliefs, and social customs, in order to avoid the ethnic, religious, and social conflicts that data misuse can cause, thereby protecting cultural diversity and social harmony. At the level of international cooperation, it is urgent to foster a global consensus on AI governance and to formulate transnational governance guidelines and technical standards. Particular attention should be paid to the demands of developing countries regarding data sovereignty protection. Through technical assistance and capacity building, their data protection capabilities can be enhanced, jointly promoting technological innovation and the preservation of cultural diversity and achieving a balanced development of technological advancement and humanistic care.
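One way to reduce the correlation between model parameters and individual training records, as suggested above, is differentially private training: each example’s gradient is clipped and noise is added before the parameters are updated. The minimal sketch below applies this idea to a toy linear-regression model; the clipping norm and noise scale are illustrative assumptions and do not constitute a calibrated privacy guarantee.

```python
# Minimal sketch of per-example gradient clipping + noise (the core mechanism
# of differentially private training) on a toy linear-regression problem.

import numpy as np

rng = np.random.default_rng(0)

def dp_gradient_step(weights, X, y, lr=0.1, clip_norm=1.0, noise_std=0.5):
    """One gradient step with per-example clipping and Gaussian noise."""
    per_example_grads = []
    for xi, yi in zip(X, y):
        grad = 2 * (xi @ weights - yi) * xi          # squared-error gradient
        norm = np.linalg.norm(grad)
        if norm > clip_norm:                          # bound each record's influence
            grad = grad * (clip_norm / norm)
        per_example_grads.append(grad)
    avg_grad = np.mean(per_example_grads, axis=0)
    noisy_grad = avg_grad + rng.normal(0, noise_std * clip_norm / len(X), size=weights.shape)
    return weights - lr * noisy_grad

# Synthetic data; true_w is the quantity the model should learn approximately.
X = rng.normal(size=(32, 3))
true_w = np.array([0.5, -1.0, 2.0])
y = X @ true_w + rng.normal(scale=0.1, size=32)

w = np.zeros(3)
for _ in range(200):
    w = dp_gradient_step(w, X, y)
print(w)   # roughly approaches true_w while each record's contribution stays bounded
```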

5. Conclusion

As a revolutionary force in the field of international communication, artificial intelligence technology is profoundly reshaping the global ecosystem of information production, distribution, and evaluation. This paper systematically explores the technological applications of AI in international communication, revealing its dual aspects of empowerment and potential risks. On the empowerment side, AI significantly enhances the intelligence, personalization, and contextualization of international communication through efficient information production, precise content distribution, and multidimensional effect evaluation, thus providing new possibilities for building a diverse and inclusive global communication ecosystem. However, behind technological empowerment lie considerable risks and challenges. The large-scale production and dissemination of misinformation, cognitive gaps caused by algorithmic discrimination, and threats to privacy and sovereignty arising from data leakage highlight the governance dilemmas of AI in international communication. Risk governance of AI in this domain requires the establishment of a systematic response framework. The key to governance lies in balancing innovation and risk prevention, breaking Western technological monopolies, promoting localization and diversification of technology, and strengthening transnational cooperation to build ethical norms and legal frameworks. Only through the integration of technological monitoring, international coordination, and public education can a fairer, more inclusive, and safer new order of international communication be shaped.


References

[1]. Wang, J. D. (2025). Cultivating new quality productivity in journalism: Analysis of smart media iteration upgrading and reshaping communication ecology. Journalist Cradle, (01), 132–134.

[2]. Yu, T. S., & Gu, L. P. (2025). The application, concerns, and coping strategies of artificial intelligence in news production. News Enthusiast, (01), 54–56.

[3]. Zhang, M. X., & Chou, Y. H. (2025). Technological empowerment and ethical dilemmas: A study on the impact of generative AI on journalism. Audio-Visual, (01), 11–14.

[4]. Zhou, B. H., & Wu, Y. Q. (2024). International communication under the influence of generative artificial intelligence: Practice progress and impact pathways. International Communication, (06), 4–8.

[5]. Guo, H. W., & Hu, Z. R. (2025). AI-driven enhancement of accessibility in international communication: Mechanisms, dilemmas, and pathways. China Television, (01), 73–82.

[6]. Deng, J. X. (2024). Construction of national image in CGTN’s VR news from a cross-cultural communication perspective (Master’s thesis). Hebei University.

[7]. Zhao, Z. W. (2024). Opportunities and risks: A preliminary study on the impact of artificial intelligence technology on online public opinion governance. Media, (15), 94–96.

[8]. Liu, G. H. (2024). Typological analysis and governance paths of risks in generative AI misinformation. Science & Technology Communication, 16(20), 109–113.

[9]. Lu, J. P., & Dang, Z. Q. (2024). Production and dissemination of misinformation in the AIGC era and the protection of national security and civil rights. Journal of Zhejiang University (Humanities and Social Sciences), 54(05), 42–58.

[10]. Parrish, J., & Kashif, S. (2025). Understanding the impact of culture on the teleological evaluation of delegation to artificial intelligence-enabled information systems. Technological Forecasting and Social Change, 219, Article 124247.

[11]. Luo, X., & Zhang, J. J. (2023). Production mechanisms and global governance of Western fake news in the context of information geopolitics. Youth Reporter, (11), 80–83.

[12]. Dong, Q. L., & Zhu, Y. (2021). Algorithmic justice and order construction in the era of artificial intelligence. Exploration and Free Views, (03), 82–86+178.

[13]. Wu, X. K., & Deng, K. Q. (2024). Data selection, information filtering, and collaborative governance behind algorithmic bias. China Publishing, (06), 10–15.

[14]. Qian, H. W., Peng, J. T., Yuan, M., et al. (2025). Factors influencing data leakage in pre-trained language models. Information Security Research, 11(02), 181–188.

[15]. Xiang, D. B., & Cao, C. X. (2022). Value bias and governance of international public opinion involving China on international social media platform algorithms. International Communication, (10), 8–11.


Cite this article

Zhu, K. (2025). The Technological Applications and Risk Challenges of Artificial Intelligence in International Communication. Communications in Humanities Research, 74, 129-136.

Data availability

The datasets used and/or analyzed during the current study will be available from the authors upon reasonable request.

Disclaimer/Publisher's Note

The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of EWA Publishing and/or the editor(s). EWA Publishing and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

About volume

Volume title: Proceedings of ICADSS 2025 Symposium: Art, Identity, and Society: Interdisciplinary Dialogues

ISBN: 978-1-80590-301-7 (Print) / 978-1-80590-302-4 (Online)
Editor: Ioannis Panagiotou, Yanhua Qin
Conference date: 22 August 2025
Series: Communications in Humanities Research
Volume number: Vol.74
ISSN: 2753-7064 (Print) / 2753-7072 (Online)

© 2024 by the author(s). Licensee EWA Publishing, Oxford, UK. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license. Authors who publish this series agree to the following terms:
1. Authors retain copyright and grant the series right of first publication with the work simultaneously licensed under a Creative Commons Attribution License that allows others to share the work with an acknowledgment of the work's authorship and initial publication in this series.
2. Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the series's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgment of its initial publication in this series.
3. Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work (See Open access policy for details).
