1. Introduction
With the rapid advancement of Artificial Intelligence (AI), AI-generated artworks have been increasingly applied across various fields. However, due to the lack of well-established legal frameworks, their copyright status remains contentious. Judicial practice continues to grapple with ambiguities in rights demarcation and difficulties in identifying liable subjects.
Numerous scholars have offered insights into these issues. For instance, "A Multidimensional Response to Intellectual Property Issues of AI in the Context of the New Technological Revolution" argues that AI-generated works involving human intellectual contributions should not be categorically excluded from copyright protection [1]. Meanwhile, "Rethinking the Fair Use Doctrine in AI Data Mining: A Copyright Perspective" examines infringement risks at the data input stage and proposes how the fair use system should be constructed [2].
This paper advocates a dual regulatory strategy combining a "training data traceability mechanism" and a "mandatory labeling mechanism", aiming to strike a balance between technological innovation and copyright protection. Through a comprehensive literature review and case analysis, the study seeks to provide insights for legislative refinement and technology governance.
2. Current copyright regulation of AI-generated artworks in China
2.1. Legislative status
Because legislation inherently lags behind societal development, numerous issues surrounding the copyright of AI-generated artworks remain unresolved.
Article 3 of China’s Copyright Law stipulates:
"Works referred to in this Law are intellectual achievements with originality in the fields of literature, art, and science that can be expressed in a certain form, including: (1) written works; (2) oral works; (3) musical, dramatic, quyi (Chinese folk art forms), choreographic, and acrobatic works; (4) fine art and architectural works; (5) photographic works; (6) audiovisual works; (7) graphic works such as engineering designs, product designs, maps, and schematic diagrams, as well as model works; (8) computer software; and (9) other intellectual achievements meeting the characteristics of works." [3].
The copyrightability of AI-generated artworks remains controversial, particularly concerning "originality". This debate arises because AI’s creative process involves data training to learn semantic-visual associations, during which it inevitably mimics individual artists’ styles. While the overall composition may differ sufficiently to avoid reproduction right infringement, noticeable resemblances in color palettes or linework might still violate adaptation rights.
Moreover, the similarity between AI-generated outputs and human artists’ works also exposes potential infringement arising from the data training phase. Training AI models on unauthorized data, particularly where the resulting models are ultimately commercialized, unquestionably falls outside the scope of fair use. Such practices not only constitute prima facie copyright infringement but also deter artistic creativity, thereby contradicting the legislative intent of the Copyright Law.
To address this, China’s 2023 Interim Measures for the Management of Generative AI Services explicitly requires the use of legally sourced data and base models, prohibiting infringement of intellectual property rights [4]. Further reinforcing these measures, the 2025 Measures for the Labeling of AI-Generated Content mandates both explicit and implicit labeling of synthetic content, facilitating practical enforcement [5].
Nevertheless, specific measures to prevent and address infringement remain either undeveloped or in the early stages of implementation, leaving the issue unresolved. The copyright ownership of AI-generated artworks therefore remains a matter of significant debate.
To begin with, AI itself cannot qualify as a copyright holder. Under Article 11 of China’s Copyright Law, only natural persons, legal entities, and unincorporated organizations are recognized as copyright owners. Moreover, even if AI were granted authorship, it could neither bear liability for infringement nor license or transfer partial copyrights to maximize their economic value.
The central question then becomes: should copyright vest in the AI developer or the user? AI-generated artworks are rarely kept for private viewing; once users publish them online, they risk infringing the reproduction rights of software owners or designers, especially while the ownership question remains unsettled.
In summary, as AI technology remains in a phase of rapid development with constantly evolving challenges, China’s Copyright Law has yet to undergo corresponding amendments, leaving certain regulatory gaps. However, supplementary regulations have already begun addressing third-party infringement risks arising from AI systems—beyond just developers and users—to prevent the escalation of AI-related disputes and maintain relative legal stability.
2.2. Judicial regulatory status
In the case of Copyright Ownership and Infringement Dispute: Beijing Law Firm v. Beijing Technology Co., the judicial authorities provided a clear response regarding AI-generated content [6]:
"A work must be created by a natural person. In the process of generating the disputed content, neither the software developer (owner) nor the user engaged in acts of creative authorship, nor did the content reflect their original expression. Therefore, neither party qualifies as the author of the computer software-generated content, and the output does not constitute a copyrightable work."
However, the court’s ruling that the defendant must compensate the plaintiff, due to the latter’s status as a “software user,” is significant. It suggests that even in instances of minimal human creative input, judicial practice acknowledges the potential for users to retain certain rights or interests in AI-generated outputs.
In another case, Li v. Liu (Dispute over Authorship Rights and Information Network Dissemination Rights), the plaintiff, Mr. Li, successfully demonstrated that his modifications to an AI-generated artwork reflected his personal creative expression, satisfying the requirement of "originality."
From the initial conceptualization of the image to the final selection, Mr. Li contributed substantial intellectual input, including:
Designing the composition and presentation of characters,
Selecting and structuring text prompts,
Adjusting AI parameters, and
Curating the final output to meet his artistic vision.
The court ruled that the image embodied Mr. Li’s intellectual contribution, thereby fulfilling the "intellectual achievement" criterion under China’s Copyright Law. Consequently, the artwork was deemed protected by copyright, and the defendant was ordered to issue a public apology and pay compensation [7].
Accordingly, the copyrightability of AI-generated outputs is largely contingent upon the extent of the software user’s involvement. The degree of the user’s participation in the creative process, including subsequent modifications, determines their eligibility for copyright protection. When users engage substantially in both the ideation stage and post-generation editing, their use of AI to produce artworks may be deemed tool-assisted creation, a scenario that does not inherently preclude copyright protection. Conversely, even in instances where user input is minimal, judicial authorities may recognize certain rights or interests for users. This approach aims to incentivize the dissemination and utilization of such works, thereby fostering cultural and scientific advancement.
In the absence of explicit legislative guidance, the legal status of AI-generated content remains contentious. However, judicial authorities have consistently held that AI cannot be recognized as a legal author, basing their decisions on assessments of:
1. The originality of the work, and
2. The legal standing of the user.
Notably, users who employ AI as a creative tool—particularly those who contribute additional originality—are afforded protection. This approach reflects the judiciary’s adherence to the Berne Convention for the Protection of Literary and Artistic Works, which prioritizes the rights of human creators, while simultaneously accommodating the development of emerging technologies.
3. Copyright risk mitigation mechanisms for AI-generated artworks
3.1. Establishing a pre-screening mechanism for infringement risks
To safeguard creators’ rights and reduce disputes, it is essential to prevent infringement at the data input stage. AI systems typically scrape and learn from raw internet data without conducting copyright clearance checks. This issue was highlighted in Getty Images v. Stability AI [8], where the plaintiff demonstrated that some AI-generated images bore Getty’s invisible watermarks, exposing the lack of due diligence in data sourcing.
However, many infringed artists lack conclusive evidence (such as watermarks) and must rely on vague criteria like "artistic style similarity," a standard often insufficient for legal enforcement, which makes it difficult to assert their rights. Given the evidentiary challenges faced by potential victims, legislative efforts should shift focus toward AI developers, who are the primary source of infringement risks, by implementing a traceable training data mechanism.
The so-called traceable training data mechanism refers to the use of data provenance technologies to track the origin, generation process, and legal status of datasets, ensuring their authenticity, reliability, and compliance. Since AI developers benefit commercially from training models on such data, they should bear the obligation to disclose data sources.
This approach can be implemented in alignment with Article 60 of the EU AI Act, which requires listing the primary data collections or datasets used for training (e.g., large-scale private/public databases) and providing a narrative explanation of other data sources [9].
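To make this disclosure obligation more concrete, the sketch below (in Python) shows one hypothetical form a machine-readable provenance record and its disclosure listing could take. The field names (dataset_name, license_status, rights_holder, and so on) and the status tiers are illustrative assumptions for the purposes of this discussion, not terminology taken from the EU AI Act or from Chinese regulations.

```python
from dataclasses import dataclass, asdict
from enum import Enum
import json

class LicenseStatus(Enum):
    # Illustrative tiers; an actual mechanism would define these in regulation.
    LICENSED = "licensed"            # covered by a formal agreement with the rights holder
    PUBLIC_DOMAIN = "public_domain"  # no subsisting copyright
    UNVERIFIED = "unverified"        # provenance unknown; a candidate for filtering out

@dataclass
class ProvenanceRecord:
    dataset_name: str        # e.g., a large-scale public or private image collection
    source_url: str
    collection_method: str   # e.g., "licensed bulk transfer" or "web crawl"
    license_status: LicenseStatus
    rights_holder: str = "unknown"

def disclosure_report(records: list[ProvenanceRecord]) -> str:
    """Serialize provenance records into the kind of listing a developer
    could publish alongside a trained model."""
    return json.dumps(
        [{**asdict(r), "license_status": r.license_status.value} for r in records],
        indent=2,
    )

# Example usage with a hypothetical dataset entry:
records = [ProvenanceRecord("example-art-corpus", "https://example.org/corpus",
                            "licensed bulk transfer", LicenseStatus.LICENSED,
                            rights_holder="Example Gallery")]
print(disclosure_report(records))
```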
Beyond mere disclosure, developers should:
1. Label copyright status for all training materials;
2. Implement a tiered filtering system to screen out infringing content.
Such measures would:
1. Enable backward traceability of AI-generated artworks to legally compliant sources;
2. Reduce infringement risks;
3. Enhance user trust in AI systems.
The traceability mechanism establishes a dual accountability framework that primarily engages two key stakeholders: copyright holders whose works are utilized as training data, and end-users who employ the AI systems. For copyright holders, this framework necessitates formal agreements that explicitly delineate the authorized scope of usage and corresponding restrictions, accompanied by provisions for equitable remuneration to ensure proper compensation for the utilization of their protected works.
Concurrently, the mechanism imposes substantial disclosure obligations on AI developers towards end-users. Developers are required to maintain transparency regarding the provenance of training data while ensuring its legal compliance, thereby safeguarding users from potential legal liabilities and reputational risks associated with inadvertent copyright infringement. In instances where developers fail to fulfill these disclosure requirements, they become subject to legal accountability, which may encompass both formal public apologies and substantive financial compensation to affected parties.
Despite its theoretical merits, this traceability mechanism faces significant practical challenges. Firstly, the computational resources required to trace and filter vast quantities of AI-generated artworks would impose a heavy financial burden on developers, increasing operational costs. This economic pressure may incentivize corner-cutting during implementation, ultimately undermining the mechanism’s intended effectiveness.
Secondly, public disclosure of training datasets implies that artists must formally license their works for AI training—a concession that could backfire. Once AI systems begin producing stylistically similar artworks at industrial scale, human artists risk being crowded out of the market due to slower production speeds. In essence, by surrendering their rights today, creators may inadvertently jeopardize their future livelihoods. This paradox has already manifested in early-stage implementations of traceability systems, where dataset transparency has sparked collective backlash from artists who perceive the mechanism as facilitating their own obsolescence.
Finally, the mechanism primarily targets future AI models, leaving a critical gap: countless existing models were trained on potentially infringing data. Without retroactive enforcement, victims of past infringements—many of whom lack the resources for legal action—remain unprotected.
In summary, while the traceable training data mechanism presents a theoretically sound approach to addressing AI copyright issues, its practical implementation faces notable obstacles and loopholes. If properly combined with other mechanisms, however, it may yield more sustainable outcomes for all stakeholders involved.
3.2. Establishing a mandatory labeling mechanism for AI-generated artworks
To address the limitations of the traceable training data mechanism, a mandatory labeling mechanism serves as a complementary solution. Compared to the traceability mechanism, this approach imposes relatively lower computational requirements and targets different entities.
In accordance with China's recently enacted Measures for the Labeling of AI-Generated Content, internet information service providers conducting AI-generated content labeling activities must attach either explicit or implicit identifiers to such content. For AI-generated artworks, prominent identification markers should be added at appropriate locations on the images.
The focus on internet information service providers—primarily major online platforms—represents a strategic shift in regulatory targeting. While these providers may appear peripheral to copyright infringement issues compared to the previously discussed stakeholders (creators, developers, and end-users), their critical role in content dissemination actually amplifies the scope and impact of potential infringements. The legal obligations established under this measure explicitly recognize this amplification effect by mandating that platform operators assume responsibility for verifying AI-generated artworks.
The rapid advancement of AI technology has created a situation where individuals without artistic training often cannot distinguish between AI-generated and human-created artworks. This frequently leads to unwitting dissemination of AI-generated content, thereby exacerbating infringement consequences. The implementation of this labeling mechanism is expected to mitigate such phenomena by providing clear identification of AI-generated materials.
Furthermore, in addition to explicit labeling, the regulation also encourages the use of implicit identifiers containing key production metadata such as: (1) content generation attributes, (2) service provider identification codes, and (3) content serial numbers. While the traceable training data mechanism primarily governs obligations between developers and users, these embedded identifiers empower content viewers with the right to access source information about AI-generated artworks.
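To illustrate how such an implicit identifier might travel with an artwork, the sketch below embeds a small metadata record into a PNG file's text chunks using the Pillow library. The key name ai_generated_content_label and the individual fields are assumptions made for illustration; the Measures do not prescribe this particular format or any specific library.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo
import json, uuid, datetime

def embed_implicit_label(src_path: str, dst_path: str, provider_code: str) -> dict:
    """Attach a hypothetical AI-generation identifier to a PNG's metadata."""
    identifier = {
        "ai_generated": True,                        # content generation attribute
        "service_provider_code": provider_code,      # service provider identification code
        "content_serial_number": uuid.uuid4().hex,   # content serial number
        "labeled_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    meta = PngInfo()
    meta.add_text("ai_generated_content_label", json.dumps(identifier))
    with Image.open(src_path) as img:
        img.save(dst_path, pnginfo=meta)  # the label is stored in the saved PNG
    return identifier

# A platform or viewer could later read the label back, e.g.:
# Image.open("artwork_labeled.png").text["ai_generated_content_label"]
```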
This dual identification system creates a comprehensive governance framework encompassing:
1. Preventive measures (through developer data sourcing compliance)
2. Process control (via platform verification and labeling)
3. Post-hoc accountability (enabled by traceable metadata)
The mandatory labeling mechanism effectively interconnects all stakeholders in the AI artwork ecosystem (internet service providers, end-users, and developers), thereby enhancing the practical implementation of data traceability during the content dissemination phase.
4. Conclusion
This study examines the copyright controversies surrounding AI-generated artworks and proposes corresponding regulatory mechanisms, highlighting the complex dilemmas in determining originality, attributing rights, and assessing infringement risks against the backdrop of legal lag. The findings indicate that judicial authorities are gradually developing a "human participation-centric" adjudication logic through case-by-case rulings. When AI outputs reflect users’ personalized expression, they may qualify as copyright-protected intellectual creations; otherwise, compensatory benefit-sharing mechanisms are employed to encourage technological application. This judicial practice aligns with the spirit of the Berne Convention and offers a balanced approach to liability allocation under the principle of technological neutrality.
The proposed "traceable training data mechanism" and "mandatory labeling mechanism" demonstrate complementary value in mitigating risks. The former regulates developer behavior through data source transparency, while the latter reduces infringement risks during dissemination through visible/invisible identifiers. Collectively, these mechanisms form a comprehensive governance system covering prevention, process control, and post-hoc accountability.
Despite their potential, the proposed mechanisms have certain limitations. Future research should further explore applications of emerging technologies in global-scale data traceability to address the cross-border copyright challenges posed by AI-generated content.
References
[1]. Shan, X. G. (2025). A multidimensional response to intellectual property issues of artificial intelligence under the new technological revolution. Intellectual Property, 1, 33.
[2]. Wang, X. L. (2025). Rethinking the application of copyright fair use doctrine to AI data mining. Hebei Law Science, 3, 185.
[3]. Copyright Law of the People’s Republic of China, Art. 3 (2020).
[4]. Cyberspace Administration of China. (2023). Interim Measures for the Management of Generative Artificial Intelligence Services, art. 7.
[5]. Cyberspace Administration of China. (2025). Measures for the Labeling of AI-Generated Content, arts. 4 & 5.
[6]. Beijing Feilin Law Firm v. Beijing Baidu Netcom Science & Technology Co., Ltd., (2019) Jing 73 Min Zhong No. 2030 (Beijing Intellectual Property Court).
[7]. Li M. v. Liu M., (2023) Jing 0491 Min Chu No. 11279 (Beijing Internet Court).
[8]. Getty Images (US), Inc. v. Stability AI, Inc., No. 1:23-cv-00135 (D. Del. Filed Jan. 17, 2023).
[9]. European Union. (2024). Artificial Intelligence Act, Article 60(k). Official Journal of the European Union.
Data availability
The datasets used and/or analyzed during the current study will be available from the authors upon reasonable request.
Disclaimer/Publisher's Note
The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of EWA Publishing and/or the editor(s). EWA Publishing and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.