AI, Digital Trade, and Data Protection: Overcoming Regulatory Fragmentation for a Multilateral Regulatory Framework

Research Article
Open access


Shurui Kang 1*
  • 1 Inner Mongolia University    
  • *corresponding author NaranKang@outlook.com
Published on 15 January 2025 | https://doi.org/10.54254/2753-7048/2024.20364
LNEP Vol.76
ISSN (Print): 2753-7056
ISSN (Online): 2753-7048
ISBN (Print): 978-1-83558-751-5
ISBN (Online): 978-1-83558-752-2

Abstract

Artificial Intelligence (AI), digital trade, and data-driven technologies have significantly transformed global trade and the world economy, shifting it from a physical economy to a digital economy. However, the existing norms, led by the World Trade Organization's (WTO) three conventional trade agreements, have fallen behind this reality. States and governments are still bound by these outdated norms while seeking new regulatory frameworks for digital trade and emerging technologies, particularly AI. This article, through a comprehensive comparison of academic literature, domestic laws, and strategic policies, highlights the varied approaches of the international community: the European Union (EU) emphasizes human-centric regulation, the United States prioritizes AI research and development over unified legislation, and China strikes a balance between conservative regulation and AI innovation. Furthermore, the article examines efforts such as the emergence of AI regulatory sandboxes, the United Nations (UN) draft resolution on AI for sustainable development, and the first binding AI convention, which indicate progress towards multilateral AI regulation.

Keywords:

Artificial Intelligence, International Regulatory Frameworks, Regulatory Fragmentation, Data Protection, Regulatory Sandbox


1. Introduction

The 1956 Dartmouth Summer Research Project on Artificial Intelligence is considered the inception of artificial intelligence research. At this meeting, the concept of artificial intelligence (AI) was introduced as an independent discipline for the first time [1]. However, AI research faced a downturn during the 1980s, and the subsequent decades were marked by instability [2]. In the 21st century, AI technologies have experienced another surge in development, with AI products built on large language models and deep learning achieving notable success in practical applications. This has led to renewed, widespread interest in AI research [3].

Currently, scholars widely agree that AI is a type of intelligent program that mimics human brain processes for handling information [4]. According to the Ethics Guidelines for Trustworthy AI published by the European Commission's High-Level Expert Group on AI in 2019, algorithms, computing, and data are the three most critical elements of artificial intelligence [5]. The application and development of AI cannot occur without the input of massive amounts of data. The training and application of AI inevitably involve capturing vast amounts of data from the internet, which raises public concerns about data privacy.

Data can be viewed as a digital carrier of information and a new resource in the digitalization era [6]. Meanwhile, AI, emblematic of the Fourth Industrial Revolution, is recognized for driving the global economy's transformation from a knowledge-driven to a data-driven model [7]. This shift, along with AI technology's inherently transnational nature, challenges traditional regulatory frameworks for international trade and services and raises critical questions about how to regulate AI from a global perspective.

Therefore, this study aims to compare the regulations on AI and data privacy across various countries, with a particular focus on the EU’s legislation and practices. It explores the feasibility of establishing a multilateral regulation system for AI at the international level and discusses how to address data privacy concerns through domestic legislation and international cooperation while ensuring the sustainable, responsible, and innovative development of AI.

2. The Status Quo: Outdated Multilateral Rules and Fragmented Frameworks

New disruptive technologies, such as AI and blockchain, have opened the doors to the digital trade market, promoting trade in data and increasing the frequency of cross-border data transfers [8]. This trend is reshaping the global economy, which is transitioning from a physical economy to a digital economy. Meanwhile, AI technology, as a new component of the digital economy, has exposed shortcomings in the existing regulatory framework for the digital economy, requiring new regulations. However, it is challenging for the international community to reach a consensus on how to regulate AI and address the various issues arising from the development of the digital economy and the application of AI. Among these, data privacy is one of the most prominent concerns.

This section explores conventional legal norms for digital trade, AI, and data protection by comparing various domestic and international regulatory frameworks. It seeks to identify the current state of existing global trade norms and their deficiencies.

2.1. International Economic Law and AI Regulation

The application of AI technologies impacts nearly every aspect of human life, most notably driving the transformation of the global economy. Given AI’s inherently transnational nature and the slow progress of domestic AI regulation in many countries, international economic law (IEL) emerges as an ideal regulatory tool. IEL not only addresses various aspects of AI development, deployment, and use, along with their corresponding regulations, but also offers a framework that promotes state-led regulation and favors multilateral cooperation [9].

While IEL is an ideal tool for regulating AI at the international level, this assumes that IEL evolves alongside the development of AI technologies. Its primary instruments, such as international trade agreements, lag behind AI advancements, meaning the rapid growth of AI will inevitably drive a reconfiguration of IEL. Currently, there are two approaches to regulating AI and other technology-related trade through IEL: one under the WTO system, represented by agreements such as the GATT, GATS, and TRIPS; the other led by individual states through digital trade-focused free trade agreements (FTAs). Both approaches historically treated goods and services as separate regulatory categories, but the WTO has yet to move beyond this dichotomy, and its agreements do not explicitly address AI or data flows [10]. Because the WTO agreements are based on the outcomes of the Uruguay Round negotiations (1986-1994), they fail to reflect the significant impact digital trade has had on global trade patterns. Many new technologies, including AI, possess characteristics of both goods and services trade. As a result, many states have turned to domestic legislation or to FTAs such as the CPTPP, the DEPA, and the SADEA to address new technologies and promote their digital interests.

Additionally, in the era of the digital economy, data is often regarded as a critical resource for digital technologies, and the functioning and development of AI also rely on the input of large quantities of data [11]. It therefore makes sense that data protection norms play a crucial role in digital trade rules and AI regulation systems. For instance, both the CPTPP and the DEPA include specific chapters on data transfer and protection, and the EU has incorporated data protection principles into its AI Act. Data protection norms encompass two main areas: data ownership and data circulation, that is, who owns the data and whether it can flow freely. Aaronson and Leblond classify the world's current data protection rules into three major categories, represented by the EU, the US, and China, according to their digital trade demands [12-13]. The EU, with its well-established data protection framework and independent data protection laws, addresses data protection in AI regulation through clear and specific articles in the AI Act. These articles confirm data protection principles in AI applications and may later introduce special mechanisms for data protection in AI. The AI Act is also connected with the GDPR to bridge AI regulation with the existing data protection system. In contrast, the US, lacking a unified federal data and privacy protection law or an AI regulatory framework, tends to shape its digital trade rules through FTAs with external trade partners; data protection and AI-related issues are regulated within these FTAs through specific chapters or clauses. China, on the other hand, considers data an extension of national sovereignty and restricts data from leaving the country through a series of mandatory regulations, such as the Personal Information Protection Law, the Cybersecurity Law, and the Data Security Law [14].

2.2. EU: The De Facto Global Standard-Setter

The EU’s digital legislation began in the 1990s. The General Data Protection Regulation (GDPR), which took effect in 2018 with its comprehensive data privacy and personal information protection rules, has become the cornerstone of the EU digital legal framework. In 2022, the Data Governance Act, the Digital Markets Act, and the Digital Services Act were successively passed, strengthening the governance of the data economy and digital markets. At the end of 2023, the EU officially passed the Data Act, adding another piece to the puzzle of data governance. The regulation of new technologies forms the third major pillar of the EU’s expansive digital legislation framework.

Currently, only the EU has introduced a comprehensive Artificial Intelligence Act. The EU AI Act adopts a risk-based approach, creating a horizontal regulatory framework that offers broad applicability, regulatory clarity, and the flexibility to adapt to AI developments. Although the EU lags behind the US in AI technology and investment, the AI Act, much like the GDPR adopted in 2016, holds a leading position in setting global governance standards for AI [15-16] and is likely to have a similar demonstrative effect on other countries' legislation. Additionally, similar to Article 3 of the GDPR, Article 22 of the AI Act extends its applicability to AI providers from third countries, giving the Act extraterritorial effects.

Yet, despite the successful enactment of the EU AI Act in August 2024, debates within academia have not subsided. The overlap between the EU AI Act and the GDPR creates potential legal conflicts and burdens for AI companies, particularly smaller businesses, due to the added data-related obligations. While the AI Act addresses high-risk and general AI, there is debate over whether strict regulation or self-regulation is the best approach to foster innovation, and concerns remain about how the Act will handle data-related issues specific to AI, such as data scraping and leakage [17]. Additionally, the AI sandbox environment may lead to conflicts between data usage regulations in the AI Act and the GDPR [18], raising challenges for developers and regulators in managing personal data processing and legal compliance.

2.3. U.S.: The AI Technology Leader

AI governance in the United States is characterized by a flexible, industry-driven approach that avoids the strict, mandatory regulations seen in regions such as the European Union [19]. The U.S. lacks a comprehensive federal data privacy law or a well-structured AI regulatory framework, and instead promotes AI development through strategic documents [20]. Since 2016, the federal government has issued several initiatives, including the National AI Initiative Act and the Blueprint for an AI Bill of Rights. These frameworks encourage AI research, establish governance guidelines, and emphasize principles such as safe AI systems, algorithmic transparency, and privacy protection, though none imposes binding regulatory obligations on AI developers. In January 2023, the National Institute of Standards and Technology (NIST) released the Artificial Intelligence Risk Management Framework (AI RMF 1.0), which provides non-compulsory guidance for managing AI-related risks.

At the end of 2023, President Biden signed the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, representing the most comprehensive AI governance principles in the U.S. to date. It calls for new safety standards, protection of civil rights, privacy safeguards, and maintaining U.S. leadership in AI innovation. However, the executive order, like previous efforts, avoids a mandatory legislative approach, favoring industry self-regulation combined with government oversight.

The U.S. also refrains from creating federal-level specialized AI legislation, preferring to leave regulation to the states. Several states, including Illinois, New York, and Utah, have enacted their own AI-specific laws, addressing issues such as algorithmic discrimination and consumer protection. Looking forward, U.S. federal AI regulation is likely to remain focused on policy incentives and oversight within existing frameworks, while state-level specialized regulations are expected to continue expanding to address specific AI applications and concerns.

2.4. China: The Conservative but Aggressive One

China's approach to AI regulation is deeply influenced by its broader stance on data sovereignty and emerging technologies. Its data governance system, comprising the Data Security Law, Personal Information Protection Law, and Data Flow Security Assessment Measures, ensures strict government control over data transfers, aligning with China's principle of data sovereignty. In digital trade, agreements like the Regional Comprehensive Economic Partnership (RCEP) give China significant discretion to regulate cross-border data flows for national security and public policy objectives. This cautious approach extends to emerging technologies such as AI, with China taking a conservative stance on data regulation in the electronic information age.

Although comprehensive legislation has not yet been enacted, the Interim Measures for the Administration of Generative Artificial Intelligence Services, which took effect in August 2023, serve as a foundation. These measures cover key principles, technology promotion, AI service provider obligations, and government oversight, outlining basic requirements for data privacy and personal information protection. Notably, Article 20 restricts foreign companies from collecting data within China, emphasizing China's focus on national sovereignty in cyberspace. This cautious approach may limit Chinese AI firms' participation in international regulatory frameworks, as seen when DiDi was subjected to a cybersecurity review over data security concerns following its U.S. listing and was ultimately delisted [21].

Additionally, scholars have proposed a draft for a unified AI law, which expands beyond generative AI to cover general AI and key systems affecting critical information or personal rights [22]. This draft adopts a risk-based classification framework similar to the EU AI Act and includes measures for promoting data sharing, enhancing cybersecurity, and protecting personal information. It also sets out comprehensive monitoring responsibilities, safety assessments, and government oversight during AI deployment, with the aim of providing a strong regulatory framework for AI governance in China.

3. Discussion: The Future is Yet to Come, But Already Here

With the development of emerging technologies and the internet industry, the global economy is gradually shifting from a physical economy to a digital economy. Data exchange and data trade are becoming increasingly frequent, and the rapid advancement of data-driven technologies such as AI is further influencing social life and business transactions. However, this new economic model is inconsistent with existing international legal norms, which were designed for the physical economy and rest on a dichotomous approach that regulates goods and services separately.

International bodies have begun discussions on how to reform existing trade norms and legal frameworks to regulate emerging technologies such as AI, but consensus has yet to be reached. Therefore, while adhering to the current regulatory frameworks, countries are increasingly seeking new regulatory models suited to the AI sector. One such model that is gaining ground globally is the AI regulatory sandbox. The EU, in its AI Act, has established a dedicated chapter on regulatory sandboxes, aiming to create an AI governance model that is innovation-friendly, future-oriented, sustainable, and flexible.

A regulatory sandbox is a regulatory experiment that operates outside the existing regulatory framework to test new economic, institutional, and technological approaches, as well as legal provisions. Under this model, regulators typically allow certain companies to test new products or services outside the current legal framework, granting them exemptions from specific legal provisions or compliance procedures. Drawing on the experience of FinTech sandboxes, this paper analyzes the advantages and disadvantages of regulatory sandboxes. It concludes that regulatory sandboxes can indeed facilitate market development, promote dialogue between regulators and companies, and help businesses adapt to regulatory processes. Moreover, because regulatory goals are similar across regions, the widespread use of sandboxes could help create internationally harmonized sandbox frameworks. However, there are also certain drawbacks, such as the limited scope of application, the complexity of the testing process, the difficulty of managing overly large-scale sandbox implementations, and the regulatory fragmentation caused by international competition, which limits compatibility between different legal regimes.

Although the regulatory sandbox is still a relatively new regulatory model, it has the potential to serve as a bridge, helping to address the current fragmentation and paving the way for a harmonized international framework.

4. Conclusion

From the Third Industrial Revolution to the Fourth, over the past 70 years, new technologies such as information technology, the internet, and AI have continuously emerged. Data exchange, cross-border data flows, and digital trade have become frequent, and the global economy has shifted from a physical economy to a digital economy. However, international economic law, which should adjust to these changes in global trade, has lagged behind. As a result, states and governments, while adhering to existing norms, are seeking new regulatory approaches based on differing interests and policy goals. This has led to competition among the three major global powers, which is likely to manifest through future treaty-making and to produce distinct models of AI governance. Such competition risks leaving developing and least-developed countries, which lack the capacity and access to develop AI, excluded from the rule-making process and forced to passively accept the resulting frameworks. Although national AI regulatory systems are still being refined and existing international frameworks are in urgent need of reform, various global actors are working towards a harmonized, human-centric, and responsible international regulatory framework for AI. The AI Convention, as the first legally binding treaty of its kind, marks a historic step in multilateral AI governance. While many steps remain to bridge the gap between existing norms and the realities of AI, the future is within reach.


References

[1]. Jiang, Wei and Long, Weiqiu. Principles of Digital Law. People's Court Press, 2023. pp. 364-380.

[2]. Wang, Lijun and Li, Daqing. Introduction to Law and Artificial Intelligence. Law Press China, 2022. pp. 201-203.

[3]. White Paper on Artificial Intelligence Standardization (2018 edition). China Electronics Standardization Institute, 2023, http://www.cesi.cn/201801/3545.html?ivksa=1024320u.

[4]. Barfield, Woodrow and Pagallo, Ugo. Advanced Introduction to Law and Artificial Intelligence. Edward Elgar Publishing, 2020. pp. 02-19.

[5]. Ethics Guidelines for Trustworthy AI. Shaping Europe's Digital Future, 2019, digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai.

[6]. OECD Glossary of Statistical Terms. OECD Publishing, 2008, https://doi.org/10.1787/9789264055087-en.

[7]. Ciuriak, Dan. Economic Rents and the Contours of Conflict in the Data-Driven Economy. CIGI Paper No. 245, 2020, pp. 1-7.

[8]. Girasa, Rosario. Artificial Intelligence as a Disruptive Technology. Palgrave Macmillan, 2020. pp. 20-307.

[9]. Peng, Shin-yi, Lin, Ching-Fu and Streinz, Thomas. Artificial Intelligence and International Economic Law: A Research and Policy Agenda. In Artificial Intelligence and International Economic Law, edited by Shin-yi Peng, Ching-Fu Lin, and Thomas Streinz. Cambridge University Press, 2021. pp. 1-292.

[10]. Peng, Shin-yi. A New Trade Regime for the Servitization of Manufacturing: Rethinking the Goods-Services Dichotomy, Journal of World Trade, vol. 54, no. 5, 2020, pp. 699–726.

[11]. Mayer-Schönberger, Viktor and Cukier, Kenneth. Big Data: A Revolution That Will Transform How We Live, Work, and Think. Harper Business, 2014. pp. 12-77.

[12]. Zheng, Fei and Ma, Guoyang. Artificial Intelligence and Law. China University of Political Science and Law Publishing, 2023. pp. 93-103.

[13]. Aaronson, Susan and Leblond, Patrick. Another Digital Divide: The Rise of Data Realms and its Implications for the WTO. Journal of International Economic Law, 2018. vol. 21, no. 2, pp. 245–272.

[14]. Cheng, Hao. Viewing the Protection of Data Sovereignty in China from CLOUD Act. Information Studies: Theory & Application, 2019. vol. 42, no. 4, pp. 31-35.

[15]. Artificial Intelligence Index Report 2023, Stanford University Human-Centered Artificial Intelligence, April 2023, https://aiindex.stanford.edu/ai-index-report-2023.

[16]. VC investments in AI by country, visualisations powered by JSI using data from Preqin, OECD.AI, 2024, www.oecd.ai.

[17]. Hacker, Philipp. AI Regulation in Europe: From the AI Act to Future Regulatory Challenges. arXiv:2310.04072, arXiv, 2023. pp. 1-16. arXiv.org, https://doi.org/10.48550/arXiv.2310.04072.

[18]. Liza, Farhana Ferdousi. Challenges of Enforcing Regulations in Artificial Intelligence Act: Analyzing Quantity Requirement in Data and Data Governance. Proceedings of the 2022 1st International Workshop on Imagining the AI Landscape After the AI Act, 2022. pp. 1-10.

[19]. Pagallo, Ugo, Pompeu Casanovas, and Robert Madelin. The Middle-out Approach: Assessing Models of Legal Governance in Data Protection, Artificial Intelligence, and the Web of Data. The Theory and Practice of Legislation, 2019. vol. 7, no. 1, pp. 1–25. doi:10.1080/20508840.2019.1664543.

[20]. Qi, Kai, Cui, Yingjia and Tian, Yanfei. The US–EU–UK Artificial Intelligence Race and Its Prospects. Contemporary International Relations, vol. 2024, no. 5, 2024, pp. 118-139+142.

[21]. Han, Hongling, Chen, Shuaidi, Liu, Jie, and Chen, Hanwen. Data Ethics, National Security and Overseas Listing: A Case Study Based on DiDi, vol. 2021, no. 15, pp. 13-23. doi:10.19641/j.cnki.42-1290/f.2021.15.003.

[22]. Artificial Intelligence Act (Scholar's Draft Proposal). Data Rule of Law Institute, China University of Political Science and Law, 2024.


Cite this article

Kang,S. (2025). AI, Digital Trade, and Data Protection: Overcoming Regulatory Fragmentation for a Multilateral Regulatory Framework. Lecture Notes in Education Psychology and Public Media,76,164-169.

Data availability

The datasets used and/or analyzed during the current study will be available from the authors upon reasonable request.

Disclaimer/Publisher's Note

The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of EWA Publishing and/or the editor(s). EWA Publishing and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

About volume

Volume title: Proceedings of the 2nd International Conference on Global Politics and Socio-Humanities

ISBN:978-1-83558-751-5(Print) / 978-1-83558-752-2(Online)
Editor:Enrique Mallen
Conference website: https://2024.icgpsh.org/
Conference date: 20 December 2024
Series: Lecture Notes in Education Psychology and Public Media
Volume number: Vol.76
ISSN:2753-7048(Print) / 2753-7056(Online)

© 2024 by the author(s). Licensee EWA Publishing, Oxford, UK. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license. Authors who publish this series agree to the following terms:
1. Authors retain copyright and grant the series right of first publication with the work simultaneously licensed under a Creative Commons Attribution License that allows others to share the work with an acknowledgment of the work's authorship and initial publication in this series.
2. Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the series's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgment of its initial publication in this series.
3. Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work (See Open access policy for details).
