1 Introduction
1.1 Research Background and Research Object
In recent years, short video platforms such as Douyin (TikTok) and Kuaishou have risen rapidly to become a major form of daily entertainment for users worldwide. As user numbers have grown dramatically, the volume of content on these platforms has surged as well. To enhance user experience and retention, platforms widely adopt algorithmic recommendation technology based on user behavior data, analyzing viewing history, interaction data, and other information to push personalized content. This technology allows platforms to quickly identify videos matching users’ interests from vast amounts of content, thereby increasing user engagement and boosting advertising revenue.
However, with the widespread use of algorithmic recommendations, platforms face a growing number of copyright disputes. Some unauthorized content on these platforms is suspected of infringing intellectual property rights, portrait rights, privacy rights, and other interests, and its wide distribution through algorithmic recommendation exacerbates the spread of infringement. This raises an important legal question: should platforms be held responsible for recommending infringing content?
Before discussing platform responsibility, it is important to clarify the role platforms play in algorithmic recommendation. Traditionally, platforms were seen as neutral service providers that did not intervene in the specific content posted by users, and therefore enjoyed immunity under the “safe harbor” principle. With the application of algorithmic recommendations, however, platforms’ roles have shifted. By actively analyzing user data and pushing content, they are no longer purely “neutral intermediaries” but have become content distributors. By optimizing distribution paths through algorithms, platforms shape how content circulates and directly affect the speed and reach of infringing content. As a result, whether platforms can still rely on the “safe harbor” principle for immunity has become a central point of controversy.
1.2 Research Purpose and Value
The purpose of this research is to reveal how algorithmic recommendations affect the determination of copyright liability for short video platforms. As short video platforms rely on algorithmic recommendation technologies to optimize user experience, the role of platforms has undergone a profound transformation. Initially neutral information intermediaries, platforms have now become active content promoters. They no longer simply serve as content transmitters, but instead actively push personalized content by analyzing user behavior and interests. This shift greatly enhances platforms’ control over content distribution, directly influencing the spread and reach of content.
The traditional “safe harbor principle” is based on the idea that platforms, as neutral intermediaries, are not liable for user-uploaded infringing content as long as they do not actively intervene or are not aware of the infringement. However, in the context of algorithmic recommendations, platforms are not only distributors of content but also play an active role in recommending and promoting content. This change complicates the determination of platform liability, as their active involvement in content distribution makes the traditional “safe harbor” theory difficult to apply. Therefore, it is necessary to reconsider and adjust the legal framework for platform liability, clarifying the division of responsibilities in the context of new technologies, and ensuring a balance between copyright protection and platform innovation.
2 Literature Review
The application of algorithmic recommendations on short video platforms and the associated copyright infringement issues represent a relatively new research area. With the rapid development of the short video industry, the related legal issues are becoming increasingly complex and deserve further exploration. Existing studies mainly focus on platform liability, the scope of the duty of care, and the definition of infringing behaviors. However, because of ongoing technological advancements and evolving judicial practice, existing theories and case law have not fully resolved the controversies. For example, Xian Zhuoming points out that there are differing views on whether platforms should be held responsible for infringing content within the algorithmic recommendation process [1]. The complexity of this issue arises from the “pseudo-neutrality” of algorithmic recommendations and the opacity of the technology, which further complicate the application of the law. Liu Keyi, for another example, discusses the difficulty of defining platform liability in cases of indirect infringement, particularly the application of the “actual knowledge” and “should have known” standards, which remains a challenge in judicial practice [2]. Additionally, Zhou Shuhuan highlights that in the “first algorithmic recommendation case,” the court ruled that short video platforms should bear a higher duty of care [3]. In practice, however, the complexity of algorithms and the ambiguity of the “should have known” standard still make infringement liability difficult to determine. Future research therefore holds considerable potential, especially in refining the determination of platform liability, establishing more precise legal standards, and addressing different types of infringement. As technology continues to evolve and platform operation models innovate, balancing algorithmic recommendations with copyright protection will remain an important topic in legal research and practice.
3 The Current Status of Copyright Infringement under the Platform Algorithmic Recommendation Context
3.1 The Basic Operating Principle of Platform Algorithmic Recommendation
The core of algorithmic recommendation lies in generating personalized suggestions from user behavior data and content data using machine learning or other computational models. Short video platforms collect user behavior data such as watch time, likes, and comments to construct user profiles and accurately identify user interests. At the same time, platforms analyze video content and extract features such as titles and keywords to match them against users’ interests. Common recommendation models include collaborative filtering algorithms and deep learning models. Collaborative filtering is based on the principle that “users in the same category are likely to be interested in similar content, or content in the same category will be favored by the same user.” [4] The algorithm analyzes user behavior and video tags to recommend relevant content accurately. Deep learning algorithms, by contrast, handle complex problems in ways analogous to human reasoning, analyzing large volumes of data across multiple dimensions and automatically learning features that match user needs to provide more precise personalized recommendations [5]. During the recommendation process, platforms adjust their strategies in real time based on user interactions (such as clicks and dwell time) to better match user interests.
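To make the mechanism concrete, the following is a minimal, illustrative sketch of user-based collaborative filtering in Python. It is not any platform’s actual implementation; the interaction matrix, function names, and scoring scheme are simplified assumptions used only to show how similarity between users’ behavior can drive recommendations.

```python
# Illustrative sketch of user-based collaborative filtering (not any
# platform's actual code). Rows are users, columns are videos; values are
# implicit feedback scores, e.g. a weighted mix of watch time, likes, comments.
import numpy as np

def cosine_similarity(matrix: np.ndarray) -> np.ndarray:
    """Pairwise cosine similarity between the rows of an interaction matrix."""
    norms = np.linalg.norm(matrix, axis=1, keepdims=True)
    norms[norms == 0] = 1.0                     # avoid division by zero for inactive users
    normalized = matrix / norms
    return normalized @ normalized.T

def recommend(interactions: np.ndarray, user: int, top_k: int = 2) -> list:
    """Rank unseen videos for `user` by similarity-weighted votes of other users."""
    sim = cosine_similarity(interactions)       # user-user similarity matrix
    sim[user, user] = 0.0                       # ignore the user's own row
    scores = sim[user] @ interactions           # aggregate similar users' behavior
    scores[interactions[user] > 0] = -np.inf    # exclude already-watched videos
    return np.argsort(scores)[::-1][:top_k].tolist()

# Toy interaction matrix: 4 users x 5 videos (hypothetical implicit feedback).
interactions = np.array([
    [5, 3, 0, 0, 2],
    [4, 0, 0, 1, 2],
    [0, 0, 5, 4, 0],
    [0, 1, 4, 5, 0],
], dtype=float)

print(recommend(interactions, user=0))  # unseen videos for user 0, most relevant first
```

In this toy example, videos that behaviorally similar users engaged with but that user 0 has not yet watched are ranked first; production systems layer deep learning models and real-time feedback signals on top of this basic idea.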
3.2 Current Situation of Infringement
With the development of algorithmic recommendation technology, the role of platforms has shifted from neutral information providers to active information controllers. Under the “safe harbor principle,” platforms are typically exempt from legal responsibility if they promptly remove or block infringing content [6]. However, with the widespread use of algorithmic recommendations, platforms have become more proactive in content distribution, making their responsibility more complex. Platforms recommend content based on user interests and behaviors, and if that content involves copyright issues, the platform can no longer be characterized simply as a “technical intermediary.”
For example, in the “Hunley v. Instagram” case in the United States, although Instagram was not the original creator of the content, its algorithm frequently recommended unauthorized images, leading to copyright disputes [7]. Although the court ruled that Instagram was not liable for the embedded content, the case sparked discussion of platform copyright management. Similarly, in the “iQIYI v. ByteDance” case in China, ByteDance’s Toutiao and Douyin platforms used algorithmic recommendations to push unauthorized video content, amplifying the infringement [8]. iQIYI sued ByteDance, arguing that the platform should be held responsible for the infringing content recommended by its system. ByteDance responded that, under the “safe harbor principle,” it could avoid liability by promptly removing infringing content, but iQIYI countered that the platform’s active promotion of the content increased its liability.
These two cases illustrate that with the use of algorithmic recommendation technology, platforms are no longer neutral disseminators but active promoters of content. The traditional “safe harbor principle” is no longer sufficient to determine platform liability, and platform responsibility needs to be redefined and adjusted to address the challenges posed by new technologies.
4 Liability Determination of Short Video Platforms in the Context of Algorithmic Recommendations
In China, algorithmic recommendation technology poses unique challenges for determining the liability of short video platforms. Platforms have transformed from passive content disseminators to active participants in content management, with responsibilities expanding from content storage to content review and management. However, this transformation is accompanied by numerous challenges in determining liability.
4.1 Limitations of the Safe Harbor Principle
The safe harbor principle originally provided exemption protection for platforms, but its limitations have emerged in the context of algorithmic recommendations. Platforms that actively recommend infringing content through algorithms may no longer rely on this principle for exemption. Judicial practice considers platforms as active promoters of content dissemination rather than neutral technology providers. The opacity of algorithms also increases the complexity of platforms' review responsibilities.
4.2 Platforms' Content Review Responsibilities
Regarding copyright infringement issues, although platforms claim that recommendation algorithms are based on user demand, they are actually influenced by commercial orientation, prioritizing the recommendation of high-engagement content, which may include infringing works. This "pseudo-neutrality" amplifies the dissemination of infringing content [9]. Chinese regulatory agencies have set higher standards for short video platforms, especially regarding vulgar, illegal, and false information. Platforms need to bear review responsibilities for algorithmically recommended content to ensure legality and compliance. To achieve this, platforms must balance commercial interests and content compliance in algorithm design and adopt dual mechanisms of technology and manual review for supervision.
5 Suggestions on Copyright Protection in the Context of Algorithmic Recommendation
5.1 Improving the Determination of Platform Responsibility
Algorithmic recommendation technology complicates platform responsibility. It is recommended that a refined responsibility determination mechanism be established to clarify the boundaries of platform responsibility. Platforms should enhance algorithm transparency by publishing transparency reports and offering user tag-setting options, introduce dynamic content review mechanisms, and combine manual review with technical monitoring to screen potentially infringing content in real time. Platforms should also establish expedited copyright protection and emergency response channels to ensure prompt responses to infringement, minimize damage to rights holders, and improve content management and compliance.
5.2 Strengthening Content Review and Compliance
In the context of algorithmic recommendation, platforms need to strengthen their content review responsibilities, especially in addressing the copyright infringement spread caused by “pseudo-neutrality.” [10] To this end, platforms should enhance algorithm transparency by regularly publishing transparency reports that showcase their recommendation logic and ensure the legality of the content. Platforms should also introduce a balancing mechanism that combines commercial interests with copyright compliance, avoiding the prioritization of unauthorized high-engagement content. At the same time, a dual review mechanism combining technological tools and manual intervention should be adopted to monitor and screen high-risk and popular content in real-time, quickly identifying and limiting the spread of infringing content.
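As an illustration of what such a dual review mechanism could look like in practice, the sketch below routes candidate videos before they enter the recommendation pool. It is a hypothetical simplification: the thresholds, the fingerprint matcher, and all names are assumptions for exposition, not a description of any platform’s actual system.

```python
# Hypothetical sketch of a dual (automated + manual) review pipeline.
# Thresholds and the fingerprint score are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class Video:
    video_id: str
    fingerprint_score: float   # similarity to a rights holder's registered work, 0-1
    engagement_rate: float     # e.g., completion or interaction rate, 0-1

def route_for_review(video: Video,
                     block_threshold: float = 0.9,
                     manual_threshold: float = 0.6) -> str:
    """Decide how a candidate video is handled before algorithmic promotion."""
    if video.fingerprint_score >= block_threshold:
        return "block"             # near-certain copy: withhold from recommendation
    if video.fingerprint_score >= manual_threshold or video.engagement_rate >= 0.8:
        return "manual_review"     # gray zone or high-reach content: human check
    return "recommendable"         # low risk: eligible for algorithmic push

print(route_for_review(Video("v1", fingerprint_score=0.95, engagement_rate=0.2)))  # block
print(route_for_review(Video("v2", fingerprint_score=0.70, engagement_rate=0.1)))  # manual_review
print(route_for_review(Video("v3", fingerprint_score=0.10, engagement_rate=0.9)))  # manual_review (popular)
```

The point of such a design is that near-certain matches are withheld automatically, while gray-zone or high-reach content is escalated to human reviewers rather than being pushed first and checked later.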
Platforms should strengthen cooperation with rights holders and relevant institutions, establishing a rapid response mechanism to promptly take measures such as blocking, deleting, or restricting access in response to infringement reports. This not only prevents the spread of infringing content but also demonstrates the platform’s commitment to copyright protection. In the future, the boundaries of platform responsibility should be clearly defined based on the initiative and influence of algorithmic recommendations, balancing user experience with copyright protection, reducing reliance on the “safe harbor” principle, and improving content management and compliance.
6 Conclusion
The application of algorithmic recommendation technology on short video platforms has enhanced user experience, but it has also triggered complex copyright infringement issues. Platforms have shifted from passive information intermediaries to active content pushers and face greater difficulty in determining responsibility in copyright disputes. The traditional “safe harbor principle” is no longer sufficient to address platform responsibility when platforms actively recommend content. This paper points out that the initiative of algorithms has expanded the dissemination of infringing content, necessitating a reevaluation of platforms’ review responsibilities. Drawing on domestic and international legal practice, the paper proposes suggestions for improving the mechanism for determining platform responsibility and strengthening content review and compliance. The aim is to help platforms strengthen copyright compliance management while preserving user experience, and to promote a more comprehensive copyright protection system. It is hoped that future legislation and regulation in China will further refine the relevant responsibilities and build a sounder copyright protection framework for algorithmic recommendation platforms.
References
[1]. Xian, C. M. (2024). Disputes and Optimization of Judicial Determination of Copyright Obligations on Short Video Platforms under Algorithmic Recommendation. Journal of Southeast University (Philosophy and Social Sciences Edition), 26(S1).
[2]. Liu, K. Y. (2024). Identification of Copyright Infringement on Short Video Platforms under Algorithmic Recommendation Technology. Science and Technology Entrepreneurship Monthly, 37(02), 166-170.
[3]. Zhou, S. H. (2023). A Study on the Determination of the Duty of Care of Short Video Platforms in the ‘First Algorithmic Recommendation Case’. Journalists, (04), 64-72.
[4]. Li, M. H., Zhao, X. J., Yu, Y. F., et al. (2022). Survey on Research Progress of Recommendation Algorithms. Journal of Chinese Computer Systems, 3.
[5]. Yu, M., He, W. T., Zhou, X. C., et al. (2022). Review of recommendation system. Journal of Computer Applications, 42(6).
[6]. Article 36 of the Tort Liability Law of the People's Republic of China.
[7]. Alexis Hunley, et al. v. Instagram, LLC, No. 22-15293 (9th Cir. 2023).
[8]. See (2019) Jing 0108 Minchu No. 50456.
[9]. Jiao, Y. F. (2023). Research on the Determination of Copyright Infringement Liability of Online Platforms under the Algorithmic Recommendation Model. Science and Technology and Innovation, (S1).
[10]. Shanghai Intellectual Property Court Research Group. (2024). The Duty of Care of Algorithmic Recommendation Service Providers. Application of Law, (07), 24-36.
Data availability
The datasets used and/or analyzed during the current study will be available from the authors upon reasonable request.