1. Introduction
With the rapid advancement of generative artificial intelligence (AI) technologies, the capacity of AI systems to autonomously generate text, images, music, and other forms of content based on massive datasets and complex algorithms is profoundly reshaping the ecology of content creation and dissemination. However, this technological paradigm also poses significant challenges to the traditional copyright system, which is fundamentally constructed upon a human-author-centric principle. The current legal framework faces substantial interpretive and applicatory difficulties in addressing issues such as the eligibility of AI-generated outputs for rights subjecthood, the assessment of originality, and the allocation of rights. These limitations not only hinder effective incentives for technological innovation but also fail to appropriately balance the interests of multiple stakeholders. Against this backdrop, this paper aims to systematically examine the contentious positioning of AI-generated content within copyright law and explore regulatory pathways that align with its technical characteristics and ethical demands. By constructing quantitative evaluation models and a tiered governance framework, it seeks to provide theoretical support and institutional design references for legislative and judicial practices.
2. Legal status of AI-generated content
2.1. The institutional limitations of anthropocentric legislation
Traditional copyright law is built around the concept of human authorship as the core of its subject qualification system. This legislative approach is rooted in the physical identifiability of creative acts during the industrial era. The subject-object dichotomy principle strictly limits creative acts to human mental activities, viewing works as the externalisation of human thoughts and emotions, thereby forming a closed logical chain of ‘creative subject-rights holder-work object’ [1]. When generative artificial intelligence autonomously generates content with aesthetic value through algorithms, the traditional system's ‘human-centric’ legislative framework faces challenges. Wu Handong points out that the current law's approach of forcibly incorporating algorithmically generated content into the existing rights framework is an attempt to resolve the tension between technological reality and institutional assumptions through legal fiction [1]. This regulatory approach, which forces a square peg into a round hole, exposes the limitations of legal interpretation. Algorithm-generated content has already transcended the ‘instrumentalist’ framework in terms of its creative mechanisms. Its data-driven, autonomously iterative generation model has severed the traditional ‘author-work’ causal chain [2].
The originality standard in the copyright subject matter system has revealed a lack of evaluative dimensions in the era of artificial intelligence. While judicial practice has traditionally relied on the dual standards of ‘independent creation’ and ‘creative selection’ to determine the nature of a work, this standard struggles to effectively identify the creative essence of algorithm-generated content. Human input of prompts and parameter adjustments during the algorithmic generation process do not constitute creative acts under copyright law, as they cannot substantially influence the final form of expression. This standard of evaluation clings to explicit indicators of human intervention while ignoring the objective fact that algorithmic models engage in creative combinations within the potential space. When the generated output reaches or even surpasses the average human creative level, mechanically applying existing standards will lead to a divergence between legal evaluation and technical facts, resulting in institutional discrimination.
The path of neighbouring rights protection provides a workaround for institutional challenges. For highly autonomous creations, a special protection mechanism can be established by setting a fixed protection period and restricting the scope of rights, thereby balancing the incentives for technological innovation with the protection of public interests. This approach avoids disrupting the existing copyright system while addressing the institutional protection vacuum for algorithm-generated works. Its rights allocation logic aligns more closely with the technical characteristics of data-driven creation [1]. However, there is a need to be vigilant about the risk of rights fragmentation that may arise from an overly expansive application of neighbouring rights. A dynamic adaptation mechanism should be established between technical separability standards and the intensity of protection.
2.2. Technological autonomy characteristics of generative artificial intelligence
The technical autonomy of generative artificial intelligence is rooted in the unpredictability of its algorithmic auto-generation phase. There is no one-to-one correspondence between user-input prompts and the generated results: the semantic information of the prompts undergoes processing and non-linear transformations within multi-layer neural networks, so the final outputs exhibit pronounced technical black-box characteristics [3].
The technical essence of the unpredictability of generative artificial intelligence outputs stems from implicit knowledge distillation mechanisms and random sampling strategies. The knowledge graph formed through contrastive learning during the pre-training phase exhibits dynamic evolutionary characteristics. Even when faced with the same prompt words, slight adjustments to the temperature parameter can lead to significant differences in the output results. This technical characteristic directly challenges the originality standard in copyright law, as the causal relationship between labour input and originality assessment emphasised by the ‘sweat of the brow’ principle in traditional legal theory is disrupted here. Guo Peng and Li Zhanpeng found through empirical analysis that while AI-generated text or images in complex generative scenarios may not reach the creative heights of human artists, the ‘secondary aesthetic appeal’ produced by their combinations surpasses the realm of mechanical replication [4]. The creative expressions such as style transfer and element reorganisation generated during the model's autonomous generation process are essentially the technical externalisation of algorithmic feature deconstruction and reconstruction in the latent space.
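The sensitivity to the temperature parameter mentioned above can be made concrete with a minimal sampling sketch. The vocabulary, logit values, and seeds below are entirely hypothetical stand-ins for a real model's next-token distribution; the sketch only illustrates the mechanism of temperature-scaled softmax sampling, not any particular system.

```python
import math
import random

def sample_next_token(logits: dict[str, float], temperature: float, seed: int) -> str:
    """Sample one token from temperature-scaled softmax probabilities."""
    rng = random.Random(seed)
    # Dividing logits by the temperature sharpens (low T) or flattens (high T) the distribution.
    scaled = {tok: v / temperature for tok, v in logits.items()}
    max_v = max(scaled.values())                         # subtract the max for numerical stability
    exp_vals = {tok: math.exp(v - max_v) for tok, v in scaled.items()}
    total = sum(exp_vals.values())
    probs = {tok: v / total for tok, v in exp_vals.items()}
    return rng.choices(list(probs), weights=list(probs.values()), k=1)[0]

# Hypothetical logits for the next token after one and the same prompt.
logits = {"sunset": 2.1, "harbour": 1.9, "storm": 1.7, "mirror": 0.4}

for t in (0.2, 0.7, 1.5):
    picks = [sample_next_token(logits, t, seed=i) for i in range(10)]
    print(f"temperature={t}: {picks}")
```

Lower temperatures concentrate probability on the highest-scoring token, while higher temperatures spread it across alternatives, so repeated runs of an identical prompt diverge in exactly the way the text describes.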
2.3. Differences in institutional responses from a comparative law perspective
The copyright protection of AI-generated works varies across major global jurisdictions, with particularly notable differences in the areas of authorship attribution and rights allocation. These institutional differences reflect the varying considerations of different countries regarding technological ethics and industrial interests. The United States focuses on maintaining the existing copyright market order, the European Union emphasises risk prevention and humanism, Japan pursues technological neutrality and industrial promotion, while China strives to balance innovation and development with rights protection. This diverse landscape leads to conflicts in determining ownership of cross-border AI-generated works, necessitating the establishment of a collaborative governance framework at the international level. Future institutional evolution must seek breakthroughs across three dimensions: technological controllability, rights balance, and rule compatibility. It is essential to avoid both the theoretical chaos caused by excessive legal personhood and the obstacles to industrial development posed by rights vacuums.
3. Reconstruction of originality standards and a dual evaluation system
3.1. The path to weakening the requirement of subjective creative intent
The legal attributes of AI-generated content pose a structural challenge to the subjective creative intent requirement in traditional copyright law. Copyright law has long regarded creative intent as the core element in determining originality, emphasising that works must reflect the author's personalised thoughts, emotions, and intellectual choices. This intent-based standard holds explanatory power in creative activities dominated by human creators. When generative AI autonomously outputs content, users' interactive behaviour through prompt words and parameter adjustments fails to meet the substantive requirements of traditional creative intent. Users cannot control the form of expression of the work nor prove the correspondence between subjective creative intent and the final outcome. While users' operations may produce original works, their essence lies in the use of algorithmic tools, not traditional creative behaviour [5]. This reveals the challenges of applying the subjective intent requirement in new technological environments, objectively necessitating a shift away from a judgement paradigm centred on human psychological activities.
Technology-driven creative mechanisms are reshaping the underlying logic of originality determination. Generative AI's parameter models, trained on massive datasets, exhibit autonomous evolutionary characteristics, often producing outputs that exceed developers' predefined rule frameworks and manifest unpredictable creative expressions. In this context, continuing to insist on substantive reviews of creative intent would result in a large number of generative works with social value and artistic merit being excluded from protection. Denying the copyrightability of algorithmically generated content on the grounds that it lacks human personality is, at its core, a misinterpretation of the qualifications of the creative entity, ignoring the reconstructive effects of technological development on the creative process [6]. This necessitates shifting the focus of originality assessment from the creator's subjective intent to the objective expression of the generated output, establishing an evaluation system centred on expressive form. Specifically, this transition can be achieved through three dimensions: first, deconstructing the ‘creative intent’ requirement into observable objective elements, including the originality contribution of prompt words during the generation process and the technical complexity of parameter adjustments; second, establishing quantitative metrics for the differences between the form of expression and existing works, using natural language processing technology to identify the innovation density of the generated output; third, constructing a dynamic threshold model that sets differentiated originality recognition benchmarks based on the creation patterns of different types of generated outputs.
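As an illustration of the second dimension, the sketch below operationalises ‘innovation density’ as the share of word trigrams in a generated text that do not occur in a reference corpus. This is one possible proxy rather than a legally settled metric, and the corpus and candidate text are invented purely for demonstration.

```python
def ngrams(text: str, n: int = 3) -> set[tuple[str, ...]]:
    """Collect the word n-grams of a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def innovation_density(generated: str, existing_works: list[str], n: int = 3) -> float:
    """Share of n-grams in the generated text that appear in none of the reference works."""
    gen = ngrams(generated, n)
    if not gen:
        return 0.0
    known = set().union(*(ngrams(w, n) for w in existing_works))
    return len(gen - known) / len(gen)

# Hypothetical reference corpus and candidate text, for illustration only.
corpus = ["the quiet harbour glows under a copper sky",
          "a copper sky settles over the sleeping town"]
candidate = "the sleeping harbour hums beneath a glass-green sky"
print(round(innovation_density(candidate, corpus), 2))
```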
3.2. Quantitative analysis of algorithm autonomy thresholds
The copyright protection of AI-generated works requires the establishment of a scientific algorithm autonomy assessment system. The determination of originality in AI-generated works must break free from the traditional ‘human-machine dichotomy’ mindset and instead establish objective evaluation criteria by examining the operational mechanisms of algorithms. Based on this, the quantitative analysis of algorithmic autonomy thresholds should focus on the weight distribution of training data and the randomness parameters of outputs, using dynamic indicators to reflect the degree of autonomy in the generation process. The analysis of training data weights should examine the source composition of the input dataset and the adjustment paths of model parameters. For text-generating AI, the contribution weights of different corpora can be calculated using the backpropagation algorithm to determine the proportion of activation values for each neuron. When the proportion of original content in the training dataset exceeds 70% and parameter adjustments do not alter the underlying model architecture, the algorithm can be deemed to have a dominant influence on the output content. Zhu Ge et al. emphasise that generative artificial intelligence constitutes a new form of expression through the creative reorganisation of data elements, which requires the law to give appropriate consideration to the contribution of algorithms [7].
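Assuming that per-corpus contribution scores have already been produced by an upstream attribution method (e.g. the activation-based analysis described above), the 70% test reduces to a simple bookkeeping step, sketched below with hypothetical figures.

```python
def algorithm_dominant(source_contrib: dict[str, float],
                       original_sources: set[str],
                       architecture_changed: bool,
                       threshold: float = 0.70) -> bool:
    """Apply the 70%-original-content test described in the text.

    source_contrib maps each training corpus to a contribution score produced
    upstream; original_sources names the corpora counted as 'original content'.
    """
    total = sum(source_contrib.values())
    if total == 0:
        return False
    original_share = sum(v for k, v in source_contrib.items() if k in original_sources) / total
    return original_share > threshold and not architecture_changed

# Hypothetical contribution scores for three corpora.
contrib = {"licensed_fiction": 0.55, "public_domain_poetry": 0.25, "web_forum_text": 0.20}
print(algorithm_dominant(contrib, {"licensed_fiction", "public_domain_poetry"},
                         architecture_changed=False))   # True: original share 0.80 > 0.70
```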
To measure randomness in output, a multi-dimensional evaluation model must be established. In the field of image generation, by comparing the compositional differences, unique colour combinations, and element arrangement dispersion of generated results under the same prompt, a randomness index within the 0-1 range can be constructed. Experimental data shows that when the randomness index exceeds 0.65 and the similarity between the generated results and the training data prototype is below 30%, the algorithm's autonomy reaches the originality threshold required by copyright law. This standard aligns with the ‘combinatorial creation’ theory, where the algorithm's expression combinations achieved through non-linear operations have surpassed the realm of simple imitation. For music-generating AI, metrics such as melody innovation and harmonic complexity must be introduced. By analysing spectral features using the Fourier transform, when the proportion of original melody segments exceeds 45% and harmonic progressions deviate from conventional patterns, the AI can be deemed to have made a substantial creative contribution.
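The image-generation index can be illustrated as a weighted combination of normalised sub-scores, checked against the two cut-offs named above. The weights are illustrative choices, and the sub-scores are assumed to come from upstream image analysis; only the 0.65 and 30% thresholds are taken from the text.

```python
def randomness_index(composition_diff: float, colour_uniqueness: float,
                     layout_dispersion: float,
                     weights: tuple[float, float, float] = (0.4, 0.3, 0.3)) -> float:
    """Weighted 0-1 index over three sub-scores, each already normalised to the 0-1 range."""
    scores = (composition_diff, colour_uniqueness, layout_dispersion)
    return sum(w * s for w, s in zip(weights, scores))

def meets_originality_threshold(index: float, prototype_similarity: float) -> bool:
    """The two cut-offs described in the text: index > 0.65 and prototype similarity < 30%."""
    return index > 0.65 and prototype_similarity < 0.30

# Hypothetical sub-scores for a batch of images generated from the same prompt.
idx = randomness_index(composition_diff=0.72, colour_uniqueness=0.68, layout_dispersion=0.61)
print(round(idx, 3), meets_originality_threshold(idx, prototype_similarity=0.22))
```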
The determination of rights attribution should establish a model linking algorithmic contribution and human intervention. Some scholars suggest incorporating the creative input of user prompts into consideration, but it is necessary to distinguish between basic instructions and substantive creative instructions. When users only provide generalised instructions such as ‘landscape painting,’ the algorithmic autonomy weight should account for more than 85%; if the user specifically specifies ‘a city night scene in the style of Monet’ and adjusts brushstroke parameters, the human contribution weight can be increased to 40%. This dynamic allocation mechanism aligns with the ‘primary contribution’ standard, avoiding the simplistic equating of technical tool use with creative acts under copyright law. At the technical verification level, a blockchain-based evidence system can be developed to record data input, parameter adjustments, and the generation process in real time, providing verifiable quantitative evidence for judicial rulings.
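A minimal sketch of this dynamic allocation follows. The rule thresholds are chosen only so as to reproduce the two scenarios described above (a generic ‘landscape painting’ prompt versus a detailed, parameter-adjusted prompt) and are not prescribed by any statute or case; the record fields are likewise illustrative.

```python
from dataclasses import dataclass

@dataclass
class PromptRecord:
    word_count: int                        # length of the user's prompt
    names_style_or_subject_detail: bool    # e.g. "a city night scene in the style of Monet"
    parameter_adjustments: int             # number of manual parameter edits (brushstrokes, etc.)

def human_contribution_weight(rec: PromptRecord) -> float:
    """Map an interaction record onto a human-contribution weight.

    A bare generic prompt leaves at most 15% to the user (algorithmic autonomy >= 85%);
    a detailed prompt plus parameter work raises the human share to roughly 40%.
    """
    if not rec.names_style_or_subject_detail and rec.parameter_adjustments == 0:
        return 0.15
    weight = 0.20
    if rec.names_style_or_subject_detail:
        weight += 0.10
    weight += min(rec.parameter_adjustments, 5) * 0.02
    return min(weight, 0.40)

print(human_contribution_weight(PromptRecord(2, False, 0)))   # generic "landscape painting" prompt
print(human_contribution_weight(PromptRecord(12, True, 5)))   # detailed Monet-style prompt, tuned
```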
The application of the neighbouring rights protection model must align with the degree of algorithmic autonomy. When quantitative analysis indicates that the algorithm's contribution exceeds 60%, it is recommended to refer to the database special rights system and grant developers non-exclusive dissemination rights over the generated works. This institutional design aligns with the principle of tiered protection while balancing the relationship between technological innovation and rights distribution. For generated works where algorithmic autonomy falls within the 30–60% range, a statutory licensing system can be used to achieve a balance of interests among multiple parties. This requires users to specify the algorithmic contribution ratio when using the work and pay reasonable usage fees to the developer. This mechanism effectively addresses the challenge of rights distribution in ‘socialised creation’ and provides an operational regulatory framework for the legal protection of AI-generated works.
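Read together, the tiered regime can be expressed as a direct mapping from the quantified algorithmic contribution to a protection pathway. The treatment of contributions below 30% is an assumption added here for completeness, since the text addresses only the upper two bands.

```python
def protection_tier(algorithm_contribution: float) -> str:
    """Map the quantified algorithmic contribution onto the tiered regime described in the text."""
    if algorithm_contribution > 0.60:
        return "neighbouring rights: developer receives a non-exclusive dissemination right"
    if algorithm_contribution >= 0.30:
        return "statutory licence: user declares the contribution ratio and pays a reasonable fee"
    return "ordinary copyright analysis: human contribution dominates (assumed branch)"

for share in (0.75, 0.45, 0.20):
    print(f"{share:.0%} -> {protection_tier(share)}")
```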
3.3. Consumer perception similarity test model
Traditional copyright law relies primarily on the subjective discretion of judicial officials to determine originality, which has the drawbacks of vague standards and significant differences between individual cases. The massive volume of AI-generated content and the autonomy of algorithms further complicate the assessment of originality, necessitating the establishment of more objective and operational evaluation models. The Consumer Perception Similarity Evaluation Model introduces market recognition standards, incorporating the public's ability to discern the distinctive features of a work into the originality evaluation framework. This model constructs an objective assessment framework based on the three-dimensional framework of ‘creative process—expressive form—audience perception.’ The core of this model lies in quantifying the intensity of the individual characteristics of generated works through market feedback mechanisms, replacing the difficult-to-quantify ‘intellectual creation threshold’ requirement in traditional legal reasoning with consumers' intuitive judgments on the similarity of works. When the similarity between the expression form of the disputed work and existing works falls below the market's general recognition threshold, it can be presumed to possess the minimum level of originality [8].
The implementation of consumer perception testing relies on standardised survey procedures and big data analysis technology. Operationally, a random sample of audiences with cultural consumption experience can be selected, and they are shown the disputed generated work and the comparison work group, requiring participants to score similarity across dimensions such as expression style, content structure, and emotional conveyance. When over 70% of participants perceive the generated work as having distinctiveness, it meets the market perception benchmark for originality recognition. This method effectively avoids the technical judgment dilemmas caused by algorithmic black boxes, returning the standard of originality to the general cognitive patterns of the public. Consumer perception standards can balance the principle of technological neutrality with the humanistic value orientation of copyright law, preventing mechanistic outputs lacking social and cultural value from being included in the scope of protection.
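The aggregation step can be sketched as follows. The sketch assumes participants rate similarity on a 1-5 scale for each dimension and counts a participant as perceiving distinctiveness when their mean similarity score falls below a cut-off; both the scale and the cut-off are illustrative choices, with only the 70% benchmark taken from the text.

```python
def distinctiveness_rate(participant_scores: list[dict[str, int]],
                         distinct_if_below: float = 3.0) -> float:
    """Share of participants who perceive the disputed work as distinctive.

    Each participant scores similarity (1 = very different, 5 = nearly identical)
    on every dimension; a participant counts as perceiving distinctiveness when
    their mean similarity score falls below the cut-off.
    """
    def is_distinct(scores: dict[str, int]) -> bool:
        return sum(scores.values()) / len(scores) < distinct_if_below
    return sum(is_distinct(s) for s in participant_scores) / len(participant_scores)

# Hypothetical responses across the three dimensions named in the text.
responses = [
    {"expression_style": 2, "content_structure": 2, "emotional_conveyance": 3},
    {"expression_style": 1, "content_structure": 2, "emotional_conveyance": 2},
    {"expression_style": 4, "content_structure": 4, "emotional_conveyance": 5},
    {"expression_style": 2, "content_structure": 1, "emotional_conveyance": 2},
]
rate = distinctiveness_rate(responses)
print(f"{rate:.0%}", "meets benchmark" if rate > 0.70 else "below benchmark")
```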
4. A hierarchical governance framework for rights attribution
4.1. Protection of neighbouring rights for highly autonomous generated works
The protection of neighbouring rights for AI-generated works must be based on precise calculations of the balance between technical contributions and benefits. Highly autonomous AI-generated works lack the ‘original spark’ of human intellectual input, making it difficult for them to meet the traditional requirements for the composition of works under copyright law [9]. However, their generation process involves a three-dimensional value chain comprising developers' algorithm research and development, user interaction instructions, and platform data resource support, necessitating the coordination of multiple parties' interests through the neighbouring rights system. Wang Jixia advocates including highly autonomous generative works within the scope of neighbouring rights protection, emphasising that the law should focus on regulating the relationship between data resource utilisation and revenue distribution. The construction of revenue distribution ratio rules should be based on a contribution quantification assessment system, where developers' technical contributions to the underlying model architecture and training algorithms account for approximately 40%, users' creative guidance based on natural language prompt word input accounts for 30%, and the platform's infrastructure services such as data storage, computing power support, and distribution channels account for 30%. This ratio design references the economic value weighting of parameter adjustments in deep learning models while also considering the substantial impact of user prompt complexity on the quality of generated content. A dynamic adjustment mechanism is key to maintaining the rationality of the distribution ratio. A smart contract system based on blockchain technology can be established to record the participation levels of all three parties in real time and automatically execute the distribution plan. When the generated content produces derivative commercial value, the original allocation must be adjusted according to the principle of contribution continuity. Developers, users, and the platform provider receive 25%, 20%, and 15% of the derivative revenue, respectively, with the remaining 40% allocated to a public cultural development fund. This mechanism aligns with the emphasis on disseminators' rights under neighbouring rights protection while avoiding excessive capitalisation that erodes public interest.
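Once revenue figures are known, the two share tables above can be applied mechanically; the sketch below simply encodes them, with the revenue amounts invented for illustration.

```python
PRIMARY_SHARES = {"developer": 0.40, "user": 0.30, "platform": 0.30}
DERIVATIVE_SHARES = {"developer": 0.25, "user": 0.20, "platform": 0.15, "public_fund": 0.40}

def allocate(revenue: float, shares: dict[str, float]) -> dict[str, float]:
    """Split revenue according to a fixed share table (shares must sum to 1)."""
    assert abs(sum(shares.values()) - 1.0) < 1e-9
    return {party: round(revenue * share, 2) for party, share in shares.items()}

print(allocate(10_000.0, PRIMARY_SHARES))      # first-use revenue
print(allocate(4_000.0, DERIVATIVE_SHARES))    # derivative commercial value
```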
4.2. Copyright confirmation for human-machine collaborative works
The copyright attribution of human-machine collaborative works requires breaking through the traditional legal framework and establishing a composite attribution system based on contribution assessment. Users of generative artificial intelligence may form original expressions through the operation process, but case-by-case judgments must be made in conjunction with specific creative scenarios. This perspective provides the theoretical foundation for constructing a contribution assessment system, which quantifies users' substantial contributions in algorithm training, parameter adjustment, and optimisation of generated outputs to distinguish between mere instruction input and creative intervention. In the assessment model, the extent of users' control over the expressive form of generated outputs, their selection criteria for algorithm outputs, and their substantial improvements to the final presentation constitute the core indicators for determining creative contributions.
The current copyright law faces technical challenges in determining the standard of originality. Generative artificial intelligence substantially determines the composition of expressive elements, which differs fundamentally from traditional creative tools. This requires the law to establish a graded standard for ‘human-machine collaborative creation’: when users guide AI to generate content through multiple rounds of interaction and have decisive control over key expressive elements, it can be recognised as original labour; if only basic instructions are provided without substantially influencing the expressive form of the generated content, full copyright should not be granted. This distinction mechanism protects human creative labour while avoiding the risk of incorporating AI-generated content into the scope of works.
The introduction of a statutory licensing system can effectively balance the interests of multiple parties. In the allocation of rights among AI developers, platform operators, and end-users, a hybrid model of ‘contribution-based priority plus statutory compensation’ is recommended. For user-generated content that meets the criteria for originality, the user should obtain the original copyright in accordance with the principle of rights attribution. Developers' substantial contributions based on algorithmic models should receive reasonable compensation through statutory licensing. Compensation calculations can reference the market value of the generated content, the frequency of algorithm use, and data training costs, establishing a dynamic adjustment mechanism. This model ensures users' control over creative outcomes while safeguarding the commercial interests of technology developers.
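One possible form of such a calculation, combining the three factors named above, is sketched below. The coefficients are illustrative assumptions rather than statutory rates, and a real dynamic adjustment mechanism would revise them over time.

```python
def statutory_compensation(market_value: float, usage_count: int,
                           amortised_training_cost: float,
                           royalty_rate: float = 0.05,
                           per_use_fee: float = 0.02,
                           cost_recovery_rate: float = 0.10) -> float:
    """Combine market value, frequency of use, and training cost into one payable amount."""
    return (royalty_rate * market_value
            + per_use_fee * usage_count
            + cost_recovery_rate * amortised_training_cost)

# Hypothetical figures: a work valued at 8,000, generated via 1,200 algorithm calls,
# with 5,000 of training cost amortised to it.
print(round(statutory_compensation(8_000, 1_200, 5_000), 2))
```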
4.3. Exceptions to the rules governing public domain products
AI-generated content, as a new type of object born of technological development, requires a legal status that strikes a balance between the principle of the public domain and the order of market competition. Images, text, and other content generated by generative AI should, in principle, be included in the public domain, as their creation lacks direct human intellectual control and substantial human contribution. This institutional arrangement aligns with the copyright law's legislative tradition rooted in human-centrism while avoiding the risk of technological monopolies hindering knowledge dissemination due to excessive rights conferral. However, it is important to note that the absolute freedom of use principle in the public domain may pose risks of market disorder, necessitating precise legal definitions of the boundaries of such freedom. When AI-generated works exhibit a minimum level of original expression, the model of performers' rights under neighbouring rights law may be referenced to provide limited protection for labour inputs such as data collection and algorithm optimisation during the generation process. However, such protection must not extend to exclusive rights over the generated works themselves.
5. Flexible institutional system and coordinated regulatory approach
5.1. Technical standards for algorithm transparency regulation
In the construction of a copyright protection system for AI-generated works, technical standards for algorithm transparency regulation should focus on the disclosure of training data and the explainability of decision-making processes. A multi-dimensional traceability system should be established for training data disclosure mechanisms, covering data sources, selection criteria, and scope of use. Copyright law requires works to have original expression, and the determination of originality in AI-generated works often depends on the transparency of algorithmic operational logic. Japan has implemented data usage transparency requirements in its technical regulatory framework, mandating developers to disclose the proportion of copyright-protected materials in training datasets and their licensing status [10]. This mechanism helps distinguish between data elements in algorithm-generated content that belong to the public domain and the creative expressions of protected works, providing objective basis for dynamic threshold assessments of originality standards.
The standard for explainability of decision-making processes should focus on the internal operational mechanisms of algorithmic models. Given the black-box nature of deep neural networks, developers should be required to provide visualised paths mapping input to output relationships. The explainability of decision-making logic in generative AI directly impacts the determination of rights attribution, especially in human-machine collaboration scenarios involving multiple parties. A clear algorithmic decision-making chain can effectively allocate contribution weights among developers, users, and platform operators. Technical standards can establish tiered explainability obligations: the foundational layer requires disclosure of model architecture and training methods, while the application layer must specify the extent of parameter adjustments and randomness control thresholds during the specific generation process, enabling the ‘creative act’ requirement under copyright law to be supported by verifiable technical parameters.
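The tiered disclosure obligation could be captured in a structured record along the following lines. The field names and example values are assumptions introduced here for illustration, not an existing regulatory schema.

```python
from dataclasses import dataclass, asdict

@dataclass
class FoundationDisclosure:
    model_architecture: str
    training_method: str
    protected_material_share: float   # share of copyright-protected material in the training data
    licensing_status: str

@dataclass
class ApplicationDisclosure:
    parameter_adjustments: dict[str, float]   # parameters changed for the specific generation
    randomness_threshold: float               # e.g. maximum sampling temperature permitted

@dataclass
class TransparencyRecord:
    foundation: FoundationDisclosure
    application: ApplicationDisclosure

record = TransparencyRecord(
    FoundationDisclosure("decoder-only transformer", "self-supervised pre-training with fine-tuning",
                         protected_material_share=0.35, licensing_status="partially licensed"),
    ApplicationDisclosure({"temperature": 0.7, "top_p": 0.9}, randomness_threshold=1.0),
)
print(asdict(record))
```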
5.2. Establishing a dynamic registration system
Copyright protection for AI-generated works should be supported by a dynamic registration system that uses technical means to label metadata and verify provenance. This system is central to addressing the unclear originality standards and ownership disputes surrounding AI-generated works, and its core lies in establishing a traceable technical identification mechanism. The dynamic registration system requires metadata labelling throughout the entire life cycle of the work, including the algorithm model version, training data sources, user input instructions, and parameter adjustment records. The metadata standardisation framework must comply with the Copyright Law's requirements for the authenticity of the source of works while also meeting technical verifiability standards, such as using blockchain technology to ensure data cannot be tampered with, thereby guaranteeing the integrity of the generation process. By cross-verifying algorithm-generated logs with user operation records, the degree of human contribution can be accurately identified, providing objective evidence for determining originality.
The technical implementation of the rights attribution mechanism must balance the interests of multiple parties. The dynamic registration system should establish a layered rights attribution module that automatically matches rights attribution rules based on the contribution type indicated in the metadata annotations: when users dominate the creative direction through fine-tuned parameter adjustments, the system allocates copyright to the user based on the ‘primary contribution’ standard; when algorithm-driven generation dominates, the system activates the neighbouring rights protection mode, attributing rights to the algorithm developer or operating platform. The system's built-in smart contracts can automatically execute management functions such as revenue sharing and infringement monitoring, and achieve visualised tracking of rights transfers through algorithm transparency oversight.
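A minimal sketch of such a registration entry follows: it records the metadata fields named above, chains each entry to the previous one with a hash so that tampering becomes evident, and applies a simple contribution-based attribution rule. The 50% cut-off, field names, and example values are illustrative assumptions rather than a specification of the proposed system.

```python
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass
class GenerationRecord:
    model_version: str
    training_data_source: str
    user_prompt: str
    parameter_adjustments: dict[str, float]
    human_contribution: float            # produced by an upstream contribution assessment

def register(record: GenerationRecord, previous_hash: str) -> dict:
    """Produce a tamper-evident registration entry that chains to the previous entry."""
    payload = json.dumps(asdict(record), sort_keys=True)
    entry_hash = hashlib.sha256((previous_hash + payload).encode()).hexdigest()
    attribution = ("copyright to user (primary contribution standard)"
                   if record.human_contribution >= 0.5
                   else "neighbouring rights to developer or operating platform")
    return {"entry_hash": entry_hash, "previous_hash": previous_hash,
            "attribution": attribution, "record": asdict(record)}

entry = register(GenerationRecord("img-gen-2.3", "licensed stock archive",
                                  "city night scene in the style of Monet",
                                  {"brushstroke_density": 0.8}, human_contribution=0.55),
                 previous_hash="GENESIS")
print(entry["attribution"], entry["entry_hash"][:16])
```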
5.3. Extended application of fair use exceptions
Within the framework of copyright protection for AI-generated works, the expanded application of the fair use exception has become a key mechanism for balancing incentives for technological innovation with the freedom of cultural dissemination. The current fair use system under copyright law primarily targets scenarios involving the secondary use of human-created content. However, given the unique technical attributes and rights structures of AI-generated works, it is necessary to reconfigure the applicable rules and regulations. A tiered and categorised protection scheme provides the theoretical foundation for expanding fair use, and its proposal to include automatically generated works in the public domain effectively preserves a necessary pool of creative materials for technological innovation. For AI-generated works with high levels of autonomy, if a neighbouring rights protection model is adopted, special exceptions for machine learning and model optimisation must be added to the rights limitation clauses, allowing research and development entities to use protected generated works for algorithm training under specific conditions to avoid rights monopolies hindering technological iteration [11].
The reconstruction of originality standards for AI-generated works directly impacts the delineation of fair use boundaries. When algorithmic autonomy reaches a level where it substantially determines expressive elements, the ‘user input ideation’ phenomenon necessitates that the fair use system transcend the traditional expression/idea dichotomy. For complex instruction sets formed through multi-round user interactions, even if the final generated work possesses originality, the data trails generated during the input process may be treated as public knowledge elements, permitting third parties to conduct text mining and knowledge reorganisation in non-commercial domains [12]. This institutional design both safeguards the freedom of basic research and protects the interests of rights holders through commercial use licensing mechanisms, achieving a dynamic balance between technological development and cultural dissemination.
6. Conclusion
The issue of copyright protection for AI-generated works marks the most profound paradigm shift in intellectual property law since the digital revolution. This paper reveals the systemic failure of traditional originality standards and rights attribution mechanisms when dealing with generative AI. The proposed dual evaluation system and tiered governance framework break through the ‘all-or-nothing’ protection dilemma: at the originality determination level, the framework constructs an objective and operational judgement model centred on quantitative analysis of algorithmic autonomy thresholds and consumer perception similarity testing; at the rights allocation level, based on the varying degrees of autonomy of the generated content, it distinguishes three tiers of protection pathways (copyright, neighbouring rights, and the public domain) and coordinates the diverse interests of developers, users, and platform operators through a dynamic revenue-sharing mechanism.
References
[1]. Wu Handong. On the Copyrightability of AI-Generated Content: Practice, Theory, and Institutions [J]. China Legal Review, 2024, (03): 113-129.
[2]. Wang Qian. A Further Discussion on the Classification of AI-Generated Content under Copyright Law [J]. Political and Legal Forum, 2023, 41(04): 16-33.
[3]. Bi Wenxuan. Copyright Attributes and Protection Pathways for Content Generated by Generative Artificial Intelligence [J]. Comparative Law Research, 2024, (03): 55-71.
[4]. Guo Peng, Li Zhanpeng. On the Legal Characterisation of Complex Artificial Intelligence-Generated Works Under Copyright Law: A Commentary on the ‘AI Text-to-Image Copyright Case’ [J]. Science and Technology Law (Chinese and English), 2024, (04): 73-82.
[5]. Zhang Xinbao, Bian Long. A Study on Copyright Protection for Content Generated by Artificial Intelligence [J]. Comparative Law Research, 2024, (02): 77-91.
[6]. Wang Qian. On the Legal Classification of AI-Generated Content Under Copyright Law [J]. Legal Science (Journal of Northwest University of Political Science and Law), 2017, 35(05): 148-155.
[7]. Zhu Ge, Cui Guobin, Wang Qian, Zhang Huyu. Is AI-generated content (AIGC) protected by copyright law? [J]. China Legal Review, 2024, (01).
[8]. Liu Jieyong. On the Copyright Protection of Content Generated by Artificial Intelligence: A Comparative Law Perspective [J]. Comparative Law Research, 2024, (04): 176-193.
[9]. Zhang Jinping. On the Copyrightability of Artificial Intelligence-Generated Works and Liability for Infringement [J]. Nanjing Social Sciences, 2023, (10): 77-89.
[10]. Zhang Xiaocheng. The Copyrightability of AI-Generated Works: Lessons from Japan and Implications for China [J]. Modern Japanese Economy, 2025, 44(01): 81-94.
[11]. Wang Jixia, Gao Xu. On the Copyright Protection of Generative Artificial Intelligence Creations [J]. Journal of Hunan University of Science and Technology (Social Sciences Edition), 2024, 27(05): 103-110.
[12]. Wang Qian. Three Discussions on the Positioning of AI-Generated Content in Copyright Law [J]. Law and Business Research, 2024, 41(03): 182-200.