1 Introduction
The development of big data, artificial intelligence, and related technologies has profoundly changed and reshaped economic and social life. Both big data and artificial intelligence rest on algorithms, so the soundness of those algorithms directly determines how these technologies affect people's lives. In the past, algorithms were judged to be neutral because they came packaged in technical logic. In fact, algorithm-driven big data is not absolutely neutral: because it is extracted from real society, it inevitably carries society's inherent prejudice and discrimination. In recent years, with the emergence of practices such as charging different consumers different prices for the same product or service on the basis of big data, the public has come to recognize how widespread algorithmic discrimination is. For example, when the starting point and destination are the same, Didi has used big data analysis to charge regular users and Apple mobile phone users more than new users and Android users. Algorithmic discrimination, however, is concealed and technical; it can affect citizens' basic rights and cause a series of social and legal problems, such as the unfair distribution of social resources and disorder in the economic order. How to prevent algorithmic discrimination and realize algorithmic justice is therefore a theoretical and practical question worth studying. Building on an analysis of algorithmic discrimination, this paper discusses the difficulties in governing it and proposes strategies for its regulation.
2 Manifestations and Characteristics of Algorithmic Discrimination
2.1 Algorithmic Discrimination
On the one hand, algorithmic discrimination is discrimination in essence, that is, unreasonable and unequal treatment. On the other hand, algorithmic discrimination differs from gender discrimination, employment discrimination, and other traditional forms of discrimination: it is unreasonable and unequal differential treatment "transformed" through the design of big data algorithm technology. Such unequal treatment is inherent, professional, and concealed. Algorithmic discrimination distinguishes among, excludes, restricts, or favors certain groups sharing particular features through the construction, design, or operation of algorithms, without being able to show that such distinction, restriction, exclusion, or favoring is reasonable.
2.2 Manifestations of Algorithmic Discrimination
Algorithmic discrimination exists widely in various fields of social and economic life. Compared with traditional types of discrimination, algorithmic discrimination typically appears in big data, artificial intelligence, and other technical scenarios, where it produces discriminatory effects and causes harm. In the judicial field, algorithmic discrimination arises from the use of big data and algorithms in judicial decisions. For example, since the early 2000s the United States has employed the COMPAS algorithm to predict the probability that offenders will reoffend and the danger they pose to society. Tests of this algorithm showed that "the probability of a black defendant getting a higher score than a white defendant was 45%", which differed markedly from actual statistics. Discrimination caused by algorithm design in the judicial system thus reflects both that algorithms absorb people's stereotypes and that their professional packaging and concealed character make them hard to regulate.

In the business sector, algorithmic discrimination appears as price discrimination, that is, charging different consumers different prices for the same product or service on the basis of big data. With the help of data analysis platforms and algorithms, businesses classify customers and build user profiles from consumption records, purchase frequency, and the like, and then offer prices or preferential terms unrelated to the quality of the goods so as to extract more profit. Because of the nature of online shopping and the information asymmetry it entails, consumers can hardly detect algorithmic discrimination, and can hardly produce evidence even when they perceive it.

Algorithmic discrimination is also common in the economic and financial sectors. For example, because of gaps in Internet coverage and differences in education quality across regions, inclusive finance and targeted poverty alleviation measures based merely on big data statistics and algorithms often fail to achieve the expected effects owing to insufficient samples and incomplete data. Worse, algorithms can easily produce misjudgments, triggering a Matthew effect, deepening regional differences, and aggravating imbalances in financial development.
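To make the pricing mechanism concrete, the following minimal Python sketch shows how a quote keyed to user attributes rather than to the product itself yields different prices for an identical service. Every field name, multiplier, and number here is an illustrative assumption for exposition, not any platform's actual pricing logic.

```python
# Hypothetical sketch of how "varied prices for the same product" can arise:
# a pricing rule keyed to user attributes rather than to the goods themselves.
from dataclasses import dataclass

@dataclass
class UserProfile:
    is_new_user: bool   # inferred from account age
    device: str         # e.g. "ios" or "android", read from the client
    past_orders: int    # purchase frequency from consumption records

BASE_FARE = 30.0  # same route, same service quality

def quoted_price(user: UserProfile) -> float:
    """Return a user-specific quote for an identical product (illustrative only)."""
    multiplier = 1.0
    if not user.is_new_user and user.past_orders > 50:
        multiplier += 0.10  # loyal users are quoted more, not less
    if user.device == "ios":
        multiplier += 0.05  # device type used as a proxy for willingness to pay
    return round(BASE_FARE * multiplier, 2)

# Two users buying the exact same trip see different prices,
# and neither can observe the other's quote.
print(quoted_price(UserProfile(is_new_user=True,  device="android", past_orders=0)))   # 30.0
print(quoted_price(UserProfile(is_new_user=False, device="ios",     past_orders=80)))  # 34.5
```

Because each quote is computed per user and shown only to that user, the information asymmetry described above is built into the mechanism itself.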
2.3 Characteristics of Algorithmic Discrimination
Algorithmic discrimination is rooted in traditional discrimination yet has its own particularities: it is inherent, professional, and concealed.
Algorithmic discrimination is inherent because applying an algorithm is a process of data mining, data input, feature extraction, feature selection, logical reasoning, and prediction. First, the complexity of human society makes algorithmic discrimination difficult to quantify, interpret, or even predict in the form of data. During data mining, the scope of the extracted data is constrained by the quantity and quality of the samples; uneven access to the Internet and education and imbalanced regional development can also cause the collected samples to deviate from reality. Second, data in essence records people's observation of the world. Bias undeniably exists in human culture at every stage of development, so big data, which mirrors human society, is inherently biased as well. The scope of data mining is defined by individuals exercising subjective judgment, which inevitably leads to the conscious or unconscious omission of minority data samples or to unreasonable weighting. The algorithm formed by arranging, reorganizing, and classifying these data appears automatic and neutral, but it has in fact absorbed the inherent biases of human society and of its designers.

Algorithmic discrimination is professional and concealed because the construction and operation of algorithms involve information technology and computing equipment and demand a high degree of expertise. The professionalism and complexity of the algorithm turn the path from data input to data output into a "black box" that cannot be understood from outside. Deep learning further raises the technical barriers created by this "algorithm black box": in the deep learning of artificial intelligence, for instance, it is difficult to distinguish program errors from algorithmic discrimination. On the one hand, the complexity of the algorithm itself makes it hard for ordinary people to analyze and understand; on the other hand, an algorithm meeting certain conditions is protected as intellectual property, which further increases the difficulty of algorithm disclosure.
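The sampling problem described above can be shown in a few lines. In this sketch (synthetic data, purely an illustrative assumption), two groups share the same true distribution, yet the under-sampled group's score estimate carries roughly ten times the sampling error, so any decision thresholded on such scores treats the groups unequally even though no one intended to discriminate.

```python
# Minimal sketch of how unbalanced sampling alone biases a "neutral" model.
# The data are synthetic assumptions chosen only to illustrate the mechanism.
import random

random.seed(0)

# Majority group: 1000 samples; minority group: only 10 samples
# (the "conscious or unconscious omission of minority data samples").
majority = [random.gauss(70, 10) for _ in range(1000)]
minority = [random.gauss(70, 10) for _ in range(10)]

# A trivial "model": score each group by its sample mean.
mean = lambda xs: sum(xs) / len(xs)
print(f"majority estimate: {mean(majority):.1f}")  # close to the true value 70
print(f"minority estimate: {mean(minority):.1f}")  # can drift several points from 70

# Standard error scales with 1/sqrt(n), so the minority estimate carries
# about sqrt(1000/10) = 10x the sampling error. Decisions thresholded on
# these scores systematically mistreat the under-sampled group.
```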
3 Current Situation of Algorithmic Discrimination Governance and Its Limitations
Compared with traditional discrimination, which is identifiable and can be regulated, algorithmic discrimination is more likely to cause a series of social problems and harm because of its particularity and complexity. At present, the regulation of algorithmic discrimination comes mainly from market forces and from government, both of which have limitations.
3.1 Enterprise Autonomy and Industry Self-discipline and Their Limitations
3.1.1 Current Situation of Enterprise Autonomy and Industry Self-discipline
The technicality and complexity of algorithms mean that regulating an algorithm presupposes its disclosure. To make algorithms open, enterprises must be encouraged to take the initiative and to form industry self-discipline. For instance, the Association for Computing Machinery has issued seven basic principles on algorithmic transparency and accountability with the aim of strengthening self-regulation against algorithmic discrimination.
The reliability of algorithm data is one of these principles: algorithm designers should explain the source and reliability of the underlying data and should use lawfully obtained data. In recent years, large Chinese Internet and media companies such as Tencent have improved the provisions in their privacy policies on the collection and protection of personal information. Some artificial intelligence companies, such as Megvii, have also stated clearly, in prospectuses and other public documents, the core principles they must follow when developing and using artificial intelligence technology, including technological reliability and safety, traceability of responsibility, and data privacy protection.
3.1.2 Limitations of Enterprise Autonomy and Industry Self-discipline
Governance dilemmas are inevitable given the spontaneity of the market and the state of industrial regulation. On the one hand, enterprise autonomy often fails. First, consider enterprise initiative. With the advent of the intelligent society, the new fields and business models brought by algorithms are driving a new round of business restructuring and competition, and algorithms have become the decisive technology and key source of competitiveness for enterprises seeking maximum profit and a dominant industry position. Protecting algorithms is therefore the natural choice for enterprises pursuing intellectual property strategies and future profits, and requiring enterprises to disclose their algorithmic logic voluntarily runs against the essence of business operations and against the practice of protecting core algorithms as trade secrets. Take the cloud service market as an example. Numerous industry reports show that China has become a major cloud service market in recent years. Traditional offline merchants, increasingly aware of the opportunities offered by digital shopping technology, are shifting toward digital commerce: they use data to better understand consumer shopping behavior, which makes real-time data analysis and AI algorithms one of the barriers by which cloud service providers attract customers and improve their business capabilities. Facing fierce competition, cloud service providers are rarely willing to disclose their algorithms and related technologies. Second, consider the actual effect of autonomy. The statements enterprises make about information and privacy protection fail to deliver the expected effect in practice, because such statements usually cover only the initial collection of data and cannot effectively constrain how enterprises collect and use information afterwards, leaving samples inherently insufficient and collections biased. At the same time, under the "informed consent" authorization mechanism adopted by most enterprises, users can hardly give consent on the basis of valid information or genuinely exercise a right to refuse the "privacy policy" contract. When the authorization mechanism fails, the risk of algorithmic discrimination cannot be controlled even at the initial stage of data collection.

On the other hand, industry self-discipline lacks experience and supporting rules. China was a late starter in artificial intelligence and machine learning. Although algorithms are now widely used in commercial transactions, self-driving, credit rating, financial poverty alleviation, talent recruitment, and other social scenarios, data collection and classification, application design, and algorithm construction remain in the hands of a few commercial entities in emerging markets. Compared with traditional industries, the emerging artificial intelligence, Internet, and cloud service industries therefore lack accumulated experience; they have reached no generally accepted consensus on corporate ethics, algorithm disclosure, or privacy protection, and have formed no mature system of rules. As a result, their self-discipline is relatively weak and can hardly provide strong regulatory support.
3.2 Legal Regulation and Government Supervision and Their Limitations
3.2.1 Current Situation of Legal Regulation and Government Supervision
In recent years, against the background of the rapid development of big data and algorithms, the regulation of artificial intelligence and the protection of private information have become a regulatory focus for government agencies worldwide. The Chinese government has likewise formulated a series of laws and regulations that govern the application of algorithms at different levels. The first level concerns rules for handling data and information. The Cybersecurity Law of the People's Republic of China, which came into effect in June 2017, lays the foundation for the protection of personal information and stipulates the three principles of "legality, propriety, and necessity" for its processing. Article 1035 of the Civil Code further clarifies, on the basis of these principles, the requirements for the lawful processing of information. The Information Security Technology-Personal Information Security Specification, issued by the National Information Security Standardization Technical Committee, provides guidance for protecting personal information and has become an important reference for the management and law enforcement of regulatory authorities. Under the announcement on the special governance of the illegal collection and use of personal information by apps, effective January 2019, and the Self-Evaluation Guide for the Illegal Collection and Use of Personal Information by Apps, effective March 2019, app operators must check whether their privacy policies contain the elements that need to be disclosed to users.

The second level concerns the application of algorithms in e-commerce and similar scenarios. The E-commerce Law of the People's Republic of China, which took effect in January 2019, for the first time wrote the protection of consumers' rights and interests in personalized recommendation into law, requiring operators to fully respect consumers' rights to free choice and fair trading when using big data analysis and algorithms. In February 2021, the Anti-monopoly Commission of the State Council issued the Anti-monopoly Guidelines on the Platform Economy, making targeted provisions on the definition of relevant markets, the identification of abuse of dominant market position, the blocking of APIs, essential facilities, the notification of concentrations between undertakings under the VIE structure, and other issues.

The third level addresses algorithmic ethics comprehensively. The Development Plan for a New Generation of Artificial Intelligence, issued by the State Council in July 2017, stresses that artificial intelligence is a disruptive technology with wide-ranging influence that may change the employment structure, impact law and social ethics, infringe personal privacy, and cause other problems. It is therefore necessary to build a protective legal and ethical framework, to formulate codes of ethics and conduct for those who research and develop artificial intelligence products, and to strengthen the assessment of the potential harms and benefits of artificial intelligence.
3.2.2 Limitations of Legal Regulation and Government Supervision
First, government regulation focuses mainly on privacy and information protection; there are no regulatory requirements or complete accountability systems covering the filing procedure of algorithms and related technologies or the interpretability of algorithms. On the one hand, existing legal rules are often defeated by technical difficulties: the professionalism and complexity of algorithms make them hard to understand across different application scenarios. On the other hand, under existing laws and regulatory systems, and lacking professional review institutions and filing systems, regulators have difficulty effectively preventing, detecting, and identifying the discriminatory risks or results produced when companies with enormous technological, informational, and capital resources deploy algorithms. Second, although existing laws and regulations provide for compliance and desensitization in data collection, they still cannot effectively supervise the use of information. In particular, the improper use and processing of non-sensitive and private data by enterprises driven by economic interests may also lead to discriminatory results. Most laws, regulations, and national standards on the use of information remain confined to basic compliance matters, such as obvious violations of national law, breaches of public order and good customs, or threats to national security, and cannot reach the other social and economic scenarios in which algorithmic discrimination occurs. Third, existing laws and regulations are limited even at the level of information collection. Although national standards such as the Information Security Technology-Personal Information Security Specification set out the principle of minimum necessity for collecting personal information and propose management and training requirements for personal information controllers, these detailed specifications are only recommended standards with no binding force.
4 Methods for Improving the Legal Regulation of Algorithmic Discrimination
4.1 Improving Legislation against Algorithmic Discrimination
First, it is necessary to improve legislation on user-consent rules, which applies both to industrial and corporate self-regulation and to the supervision, management, and assessment carried out by regulators. The informed-consent mechanism should be adjusted and improved on the basis of the Personal Information Protection Law (Draft). Article 14 of the draft stipulates that consent to the processing of personal information must be "specific, explicit, and voluntary". Articles 24 and 26, among others, stipulate the conditions for individual consent when personal information is provided to third parties, when processed personal information is disclosed, and when sensitive personal information is processed. On this basis, separate consent and general consent should be further explained and distinguished. Separate consent is a special type of consent that requires all information and possible risks to be disclosed to the information subject and the subject's specific consent to be obtained. General consent, by contrast, is a package agreement that does not address specific matters.
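The contrast between the two consent types can be pictured as two interfaces. The Python sketch below is only an illustration of the distinction drawn above; the class, field, and method names are my assumptions, not statutory language. Separate consent demands an affirmative choice per matter, while general consent is a single package switch.

```python
# Hypothetical sketch contrasting "separate consent" with "general consent".
from dataclasses import dataclass

@dataclass
class SeparateConsent:
    """One explicit, specific consent per sensitive matter."""
    share_with_third_parties: bool = False
    disclose_processed_data: bool = False
    process_sensitive_data: bool = False

    def allows(self, action: str) -> bool:
        # Each action requires its own affirmative choice by the data subject.
        return getattr(self, action, False)

@dataclass
class GeneralConsent:
    """A single package agreement that does not target specific matters."""
    accepted_privacy_policy: bool = False

    def allows(self, action: str) -> bool:
        # One click covers everything -- the pattern the draft law restricts.
        return self.accepted_privacy_policy

user = SeparateConsent(share_with_third_parties=True)
print(user.allows("share_with_third_parties"))  # True
print(user.allows("process_sensitive_data"))    # False: needs its own consent
```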
Second, the interpretability of algorithmic technology and the accountability system deserve regulation. At the legislative level, the responsibilities of algorithm designers and users under a pre-filing system, and their duty to explain algorithms afterwards, should be clarified. The pre-filing system should specify the filing institution, the time of filing and examination, and the content to be filed. During the use of an algorithm, corrective and punitive measures should follow as soon as a risk of discrimination is found. When an algorithmic infringement occurs, the injured user may seek relief from the court, with the algorithm designer or user bearing the burden of proof.
4.2 Improving Government Supervision of Algorithmic Discrimination
First, a special algorithm review agency should be established. Given the professionalism of algorithms, the agency should be staffed by personnel with both technical and legal backgrounds, and centralized review and supervision should be adopted to improve efficiency. The agency's functions may include accepting algorithm filings, conducting investigations into algorithmic discrimination on its own initiative, and issuing review opinions on allegations of algorithmic discrimination.
Second, the algorithm review mechanism should be improved in three respects: pre-filing, in-process supervision, and post-event review. Algorithm designers and users should submit the algorithm model, data, and operation process to the review agency for pre-filing. The filing should identify the algorithm's designer, controller, and user, the design purpose, the algorithm model, and the data sources and processing methods, and should disclose possible risks and their solutions, as sketched below. To prevent algorithmic discrimination, the review agency may investigate algorithm operations in key industries and application scenarios on its own initiative. In response to user complaints and accusations of algorithmic discrimination, the agency may conduct a second review of the algorithm, request further explanations from designers and users, and issue review opinions and correction suggestions after the review. Such opinions should have legal status and authority and may serve as the main basis for punishment.
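As a rough illustration of the pre-filing content proposed above, the sketch below models a filing as a structured record that a review agency could check for completeness. All field names are hypothetical assumptions, not prescribed by any existing regulation.

```python
# Illustrative sketch of the proposed pre-filing content, modeled as a record
# the review agency could accept. All field names are assumptions.
from dataclasses import dataclass
from typing import List

@dataclass
class AlgorithmFiling:
    designer: str               # algorithm designer
    controller: str             # algorithm controller
    user: str                   # algorithm user (deploying entity)
    purpose: str                # design purpose
    model_description: str      # algorithm model
    data_sources: List[str]     # data source and processing method
    known_risks: List[str]      # possible risks disclosed in advance
    mitigations: List[str]      # corresponding solutions

    def is_complete(self) -> bool:
        """A filing is acceptable only if every element is disclosed."""
        return all([self.designer, self.controller, self.user, self.purpose,
                    self.model_description, self.data_sources,
                    self.known_risks, self.mitigations])

filing = AlgorithmFiling(
    designer="Example Lab", controller="Example Co.", user="Example Platform",
    purpose="ride pricing", model_description="gradient-boosted regression",
    data_sources=["order history (anonymized)"],
    known_risks=["price differentiation by device type"],
    mitigations=["exclude device type from pricing features"],
)
print(filing.is_complete())  # True
```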
4.3 Strengthening Industry Self-discipline and Promoting Enterprise Autonomy
First, once laws and regulations are further improved, industry associations should actively participate in formulating industry rules against algorithmic discrimination, make those rules the textual basis for industry self-discipline, and promote their application among market entities in the industry. Industry rules against algorithmic discrimination should draw on and comply with relevant laws, regulations, and national standards, and specialized, feasible norms should be established for the Internet, cloud services, artificial intelligence, and other sub-sectors, covering general principles, legal values, and algorithmic ethics. Industry associations should also fulfill their role as industry regulators, strengthening self-regulation by monitoring algorithmic discrimination among market entities and punishing discriminatory behavior. Taking a social e-commerce association as an example, the business service specifications for e-commerce platform operators and promotion service organizations could be improved by adding compliance requirements such as "big data and other technologies must not be used for price discrimination or discriminatory marketing" and by refining the rules on user privacy protection and on data collection and processing. The specifications should further designate industry associations as the bodies that receive consumer complaints about algorithmic discrimination and that perform regulatory duties within the industry, for instance by reviewing complaints and sanctioning market entities that practice algorithmic discrimination through negative assessments, public criticism, or fines.
Second, enterprises should strengthen their ethical norms and self-discipline. On the one hand, it is advisable to establish internal systems and policies in line with the Information Security Technology-Personal Information Security Specification and to bring algorithmic discrimination within the scope of regular corporate compliance review, with internal reviews conducted by professional committees or functional departments. Enterprises should also disclose their measures and results concerning compliance against algorithmic discrimination, technical ethics, and self-discipline in annual reports, environmental and social governance reports, and other periodic disclosure documents, so as to fully fulfill their disclosure obligations and social responsibilities. On the other hand, training and guidance on preventing algorithmic discrimination and improving technical ethics should be enhanced. Besides, opposition to algorithmic discrimination can be added to regular employee compliance assessments so that technicians consciously observe laws, regulations, and ethical requirements in algorithm design and application.
5 Conclusion
Algorithmic discrimination is inherent, professional, and concealed, and it already manifests itself in judicial, commercial, and financial life. Neither enterprise autonomy and industry self-discipline nor existing legal regulation and government supervision can contain it alone. Preventing algorithmic discrimination and realizing algorithmic justice therefore requires improved legislation on consent rules and algorithm accountability, a specialized review agency operating a filing and review mechanism, and strengthened industry self-discipline and corporate ethics working in concert.
References
[1]. Han Shuo. How Algorithms are Equal: Establishment of Algorithmic Discrimination Review Regime [J]. The South China Sea Law Journal, 2020, 4(02): 114-124.
[2]. Zhou Xianwei. Algorithmic Discrimination: Performance, Influence and Legal Regulation [J]. Journal of Zaozhuang University, 2020, 37(04): 118-123.
[3]. Wang Zhijie. Supervision of Inclusive Finance under Algorithmic Discrimination—Based on Algorithmic Discrimination and Coupling of Inclusive Financial Risks [J]. Fujian Finance, 2020 (06): 21-27.
[4]. Zhang Yuhong, Qin Zhigang, Xiao Le. Discriminatory of Big Data Algorithm [J]. Studies in Dialectics of Nature, 2017, 33 (05): 81-86.
[5]. Chen Gen. From Algorithmic Discrimination to Data Justice, "Malice" of Artificial Intelligence. Retrieved May 30, 2021 from https://www.sohu.com/a/409868821_124207
[6]. Liang Xianfei. Thoughts on Algorithmic Discrimination at the Age of Artificial Intelligence [J]. China Informationization, 2020 (07): 54-55.
[7]. Cai Lin. The Legal Exploration of Patent Protection Method for the Artificial Intelligence [J]. Journal of Northwestern Polytechnical University(Social Sciences), 2019 (03): 103-111+3.
[8]. Han Xuzhi. The Dilemma and Solution of Informed-consent Rule in Personal Information Protection—On the Relevant Provisions of the Personal Information Protection Law (Draft) [J]. Business and Economic Law Review, 2021 (01): 47-59.
[9]. Algorithm: Fully Incorporated into the Supervision Field. Retrieved May 30, 2021 from http://www.cac.gov.cn/2019-05/21/c_1124523038.htm
[10]. Yang Dong. Interpretation of Anti-monopoly Guide in Platform Economy—Implementing Comprehensive Supervision and Preventing the Expansion of Capital Disorder. Retrieved May 30, 2021 from https://www.sohu.com/a/451960708_345245
Cite this article
Zhu, Z. (2021). Legal Regulation of Algorithmic Discrimination. Advances in Social Behavior Research, 1, 65-72.
Data availability
The datasets used and/or analyzed during the current study will be available from the authors upon reasonable request.
Disclaimer/Publisher's Note
The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of EWA Publishing and/or the editor(s). EWA Publishing and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2024 by the author(s). Licensee EWA Publishing, Oxford, UK. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.