Research Article
Open access

Erosion and Remodeling: New Alienation of User Subjects Led by Intelligent Algorithms

Muting Sun 1, Naye Ji 2*
  • 1 Zhejiang University of Media and Communication    
  • 2 Zhejiang University of Media and Communication    
  • *corresponding author jinaye@cuz.edu.cn
Published on 20 September 2024 | https://doi.org/10.54254/2753-7064/36/2024BJ0035
CHR Vol.36
ISSN (Print): 2753-7072
ISSN (Online): 2753-7064
ISBN (Print): 978-1-83558-451-4
ISBN (Online): 978-1-83558-452-1

Abstract

As the core technological means of today's artificial intelligence products, intelligent algorithms have become an indispensable tool for human development. They have driven the progress of the intelligent era, demonstrating the power and possibilities of technology and sketching a hopeful intelligent future. However, while bringing users convenience and imagination, algorithms also turn users into objects dominated by intelligent technology: users gradually lose their independence and creativity, as well as their capacity for criticism and deep thought, giving rise to a series of alienation problems that cannot be ignored. This article uses qualitative research methods, conducting in-depth interviews to extract, organize, and analyze respondents' attitudes and opinions toward one artificial intelligence technology, the algorithm. Reshaping the resulting "new alienation" around users' real thinking and more precisely identified needs can break the constraints of intelligence and benefit the further development of algorithms.

Keywords:

algorithms, new alienation, autonomy, erosion, remodeling


1. Introduction

Since the advent of the era of intelligent media, artificial intelligence has profoundly changed human society through its revolutionary and subversive nature. Intelligent systems have sprung up like mushrooms after rain, lifting scientific and technological civilization to an unprecedented height. However, the development of artificial intelligence is not yet mature, and the endpoint of intelligence is not yet foreseeable. Compared with the agricultural and industrial ages, the intelligent era seems to have taken "unmanned" operation to the extreme: humans who were once in a dominant position have retreated to bystanders or are even being dominated. The "new alienation" wrought by machines is pushing humans to the brink of systemic degradation [1]. As Thomas Davenport and J. Kirby put it, "As computers begin to take up more and more knowledge tasks, the rate of skill degradation will accelerate." [2] This issue demands our immediate attention.

As one means of artificial intelligence technology, algorithms play a huge role in the structure of information dissemination, but the push model advertised as "personalized" is also eroding users' autonomy and critical thinking. While enjoying the convenience of algorithms, users have become strings of numbers within massive data and objects dominated by intelligent technology. Therefore, in this early stage of artificial intelligence, promptly patching hidden alienation vulnerabilities and reshaping the "new alienation" brought about by algorithms is the optimal path to the collaborative evolution and all-round development of human and machine. This process also lays the foundation for the vigorous development of the intelligent era.

2. Research design

This article uses in-depth interviews to examine the impact of algorithmic push on the 20-35 age group, selecting 20 qualified interviewees through purposive sampling (see Table 1).

Table 1: Basic information of respondents

| Serial No. | Gender | Age | Occupation | Education |
|---|---|---|---|---|
| 1 | Female | 31 | Designer | Junior college |
| 2 | Female | 24 | Student | Postgraduate |
| 3 | Female | 27 | Product manager | Undergraduate |
| 4 | Male | 30 | Civil servant | Undergraduate |
| 5 | Male | 22 | Student | Undergraduate |
| 6 | Female | 26 | Teacher | Postgraduate |
| 7 | Male | 23 | Student | Postgraduate |
| 8 | Male | 20 | Student | Undergraduate |
| 9 | Female | 25 | New media operations | Undergraduate |
| 10 | Female | 20 | Freelance | Junior college |
| 11 | Male | 28 | Civil servant | Postgraduate |
| 12 | Male | 26 | Information flow distributor | Undergraduate |
| 13 | Female | 25 | Product operations | Postgraduate |
| 14 | Female | 24 | Securities | Undergraduate |
| 15 | Female | 26 | Customer service | Undergraduate |
| 16 | Female | 28 | Civil servant | Undergraduate |
| 17 | Female | 22 | Self-media | Undergraduate |
| 18 | Female | 26 | Artist coordination | Undergraduate |
| 19 | Male | 22 | Student | Postgraduate |
| 20 | Male | 28 | Teacher | Postgraduate |

Combining online and offline interviews makes it possible to understand how algorithms erode users' autonomy, what users actually need from algorithms (see Table 2), and how users' freedom can be reshaped in the intelligent era.

Table 2: Coding table.

| Core category | Category | Initial concept |
|---|---|---|
| The positive impact of algorithms | Expand horizons | Expand the dissemination of niche cultures |
| | | Push unfamiliar fields |
| | Individualization | Do not push content I am not interested in |
| | | Push me content that interests me |
| | Improve information acquisition efficiency | Filter heterogeneous information |
| | | Quickly collect similar information |
| | Expand social networking | Join the same group |
| | | Contact individuals with consistent viewpoints |
| The negative impact of algorithms | Information cocoons | Unable to obtain content beyond preferences |
| | | Unable to hear voices beyond one's own perspective |
| | Algorithmic discrimination | Pushed content varies across ages |
| | | Pushed content varies between genders |
| | Inert thinking | Lack of deep thinking and critical thinking ability |
| | | Blindly receiving homogeneous information and viewpoints |
| | Privacy leakage | Personal identity information leakage |
| | | Personal preference exposure |
| Development expectations | Privacy protection | Introduce relevant privacy protection laws |
| | | Establish platform protection mechanisms |
| | Personalization | User-configurable push mechanisms |
| | | Diversify pushes to break through the information cocoon |

The word frequency query (see Figure 1) shows that words such as "recommendation", "technology", "information", "interest", and "demand" appear with high frequency, and words such as "content", "service", "development", and "impact" are also prominent. When discussing algorithmic technology, respondents first and most often mention its positive impact on themselves. Statistically, more than 70% of respondents support and enjoy algorithmic technology, though they also raise the algorithms' subtler effects, reflected in words such as "privacy", "protection", "problems", and "hope".


Figure 1: Interview word frequency cloud

Although the negative impacts respondents mention are not their dominant theme, the word cloud shows they are far from marginal: negative words such as "worried", "dissatisfied", "weakened", "critical", "discriminatory", "biased", and "misleading" stand out. This paper therefore discusses how to mitigate the new alienation brought about by algorithms, which affects ordinary users only mildly but cannot be ignored, and how to reshape user subjectivity in the intelligent era.
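As a methodological aside, term frequencies of the kind visualized in Figure 1 can be reproduced with a few lines of code. The sketch below is illustrative only: the transcript strings, stopword list, and tokenizer are assumptions, not the study's actual data or pipeline.

```python
from collections import Counter
import re

# Hypothetical transcript snippets; the study's interview files are not public.
transcripts = [
    "The recommendation technology pushes content that matches my interests",
    "I worry about privacy and hope the platform offers better protection",
]

# A small assumed stopword list; real analyses use much larger ones.
stopwords = {"the", "a", "an", "and", "that", "my", "i", "about"}

tokens = []
for text in transcripts:
    # Lowercase, keep word characters only, drop stopwords.
    tokens += [w for w in re.findall(r"[a-z']+", text.lower()) if w not in stopwords]

# Counter gives the term frequencies that a word cloud then scales by size.
freq = Counter(tokens)
print(freq.most_common(10))
```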

3. Refactoring: The powerful benefits of intelligent algorithms

As a core driving force of modern scientific and technological development, algorithmic technology exerts a positive and far-reaching impact on all fields of society with unprecedented depth and breadth. It has significantly changed how people live and work and how society operates. For its users, it serves as a model of AI for the benefit of humanity. Algorithmic technology has permeated human life and, to some extent, alleviated the impact of the information influx on users in the information society. This trend is evident in the interviewees' statements, revealing its status as an essential tool for internet users.

3.1. Individuality and efficiency: Improve user experience and information acquisition efficiency

The essence of algorithms is "opinions expressed mathematically or in computer code"; among their applications, the recommendation system is an information filtering system that helps users reduce the time wasted browsing large amounts of irrelevant data. Since recommender systems emerged, their core techniques have fallen roughly into three categories: collaborative filtering, content-based recommendation, and hybrid methods. Personalized recommendation, in turn, provides specific services for each user, realizing the demand for "a thousand faces for a thousand people" [3]; it is also one of the most distinctive applications of algorithmic recommendation in practice. The study found that when asked about the positive impact of algorithms, respondents first pointed to the personalized push model. The vast majority said algorithmic recommendation has changed how they browse and obtain information, and they affirmed this customized push. "I think algorithmic technology meets personalized needs quite well. For example, the algorithm mechanisms of Xiaohongshu and Douyin are relatively mature and can push what I am interested in to my account." (F1) "For example, when I visit Xiaohongshu, I just click on a topic I am interested in, and it keeps pushing me this kind of thing afterwards, which is a benefit." (F5) Such personalized recommendation of content or products matching a user's taste, based on analysis of the user's historical behavior and interest preferences, is more accurate and effective than traditional advertising and promotion. It significantly improves user satisfaction and loyalty, raising engagement and strengthening users' stickiness to the platform.
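To make the first of the three families concrete, below is a minimal user-based collaborative-filtering sketch: it predicts a rating for an unseen item as a similarity-weighted average of other users' ratings. The rating matrix and all values are invented for illustration and have no connection to the platforms discussed above.

```python
import numpy as np

# Toy user-item rating matrix (rows: users, columns: items); 0 = unrated.
R = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 0, 5, 4],
], dtype=float)

def cosine_sim(a, b):
    """Cosine similarity between two users' rating vectors."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return a @ b / denom if denom else 0.0

def predict(user, item, R):
    """Predict a rating as a similarity-weighted average over users who rated the item."""
    sims, ratings = [], []
    for other in range(R.shape[0]):
        if other != user and R[other, item] > 0:
            sims.append(cosine_sim(R[user], R[other]))
            ratings.append(R[other, item])
    if not sims:
        return 0.0
    sims = np.array(sims)
    return float(sims @ np.array(ratings) / sims.sum())

# Score an item user 0 has not yet seen; high scores would be pushed first.
print(predict(user=0, item=2, R=R))
```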

3.2. Filtering and sorting: Improve the efficiency of information acquisition and alleviate information overload

"Information overload" is a concept that has been around since the 80s and 90s of the last century. With the rapid development of information technology and the Internet, humanity has moved from the era of information scarcity to the era of information overload. Early research has proposed solving this problem using information retrieval and filtering. By the mid-'90s, researchers were trying to solve the problem of information overload by predicting how users would rate recommended items, content, or services. As a result, recommender systems have emerged as an independent field of research[3]. It can perceive user interest and behavioral changes in real-time. Dynamically adjust recommendation strategies by continuously learning from user feedback and interaction data to maintain timely sensitivity to user needs. This ability to dynamically adjust allows personalized recommendation systems to always provide users with the latest and most appropriate content or products to improve the efficiency of information acquisition, "I feel that algorithmic technology has changed the way I browse and obtain information." Without this technology, I might have taken the initiative to find something that I am interested in. Still, with its blessing, the things I am interested in will be automatically pushed to me, which will reduce the time I spend searching for information. " (F6) greatly alleviates the digital pressure caused by information overload and uses technical means to break the intelligent dilemma that comes with the times.

3.3. Broaden horizons and expand social networks: Promote multicultural exchange

Social media platforms use algorithmic recommendation to suggest people or groups that may interest users based on their social relationships, interests, and preferences. This referral mechanism helps users expand their social circles and meet more like-minded friends. "When a social media platform keeps recommending things I am interested in, and some people below are discussing them, I may join their discussions and then meet friends who share my views." (F12) At the same time, the algorithm's pushes of similar content and friend recommendations expose users to fields they had not previously noticed. "I may not be very interested in certain content at first, but the algorithm keeps recommending it; because I clicked on such a push once, it pushes it a second and third time. At the beginning I may click only out of curiosity, but as it is pushed to me again and again, I gradually develop a feeling for that content and come to like it even more." (F1) This enables users to encounter artworks, traditional customs, and other content from different cultures, promoting understanding of and respect for multiculturalism. In the context of globalization, it is increasingly common for people to communicate across languages and cultures through social media and online forums; through machine translation, language recognition, and other technical means, algorithms lower the threshold and difficulty of cross-cultural communication so that people from different cultural backgrounds can exchange ideas more smoothly, which in turn promotes the international dissemination of culture. Algorithms have also shown great convenience on career platforms: professional social networks such as LinkedIn, Zhilian, and Ape Circle Technology help users discover potential partners, peers, or employers through algorithmic recommendation, expanding their career networks.

4. Erosion: The hidden lesion behind the algorithm

Artificial intelligence technology is an extension of human intelligence; based on information technology and developing in coordination with biotechnology and aerospace technology, it can be described as the culmination of modern science and technology. On the relationship between technology and people, the historically most influential "value neutrality theory" holds that technology is a tool created by human beings, a means to achieve goals and meet needs; it is neutral in itself, with no good or evil of its own, and it is the people who drive the tool behind the scenes who determine whether it serves good or evil. With the continuous progress of human practice, science and technology have reached an unprecedented height. The development of artificial intelligence has produced an intelligent society; its powerful functions and irreducible heterogeneity have turned it into a new alien force that is gradually becoming a protagonist of human civilization, while humans risk being dominated, and objectification has become a visible crisis. Heidegger observed in "The Question Concerning Technology" that "technology is not only a means but a way of revealing": it is no longer "neutral" but an "enframing" that governs how modern people understand the world, "constrains" their social life, and becomes a fate contemporary people cannot escape [1]. As one technical means of artificial intelligence, algorithms are producing "new alienation" problems.

4.1. Imperceptible harm: Privacy leakage and identity blurring

The right to privacy is a specific personality right enjoyed by natural persons: the right to the tranquility of their private lives and to independently control the security interests of private spaces, private activities, and private information that they do not wish others to know, free from disturbance by others. The emergence of the Internet reflected society's quest for self-invisibility, as in the idea proposed by Jerome S. in Integrated Broadband Networks that network design should focus on ensuring "end-to-end" freedom and privacy [4]. Whether using an app or browsing the web, users must check the so-called "informed consent" box, that is, "consent" to the platform obtaining information such as their geographical location, contacts, and phone storage; the hidden layer of the algorithm thus creates a technical and cognitive gap between the platform company and the user [5]. For example, according to an April 2022 survey, the user agreements and privacy policies of the top 10 free apps in Apple's App Store total more than 220,000 words, an average of 22,000 words per app; reading these texts in full takes a user at least 40 minutes to an hour [6]. The vast majority of users do not read such lengthy texts carefully, so the vague and speculative clauses on collecting user information hidden within them slip past unnoticed. People's simple desire to conceal themselves in cyberspace is ultimately frustrated, and handing over private information becomes the "entry qualification" for the space. In this context, platforms' tracking, collection, and analysis of personal information and online behavior have grown increasingly sophisticated, which both amplifies users' demand for privacy protection and strengthens platforms' motivation to acquire information [7]. Many respondents hoped the platforms would introduce privacy protection measures, such as not pushing "me" to "my" friends, so that users can enjoy the freedom of "invisibility": "When I communicate and interact with others on social media, sometimes I don't want people I know offline to see it, which creates a sense of contradiction, so I hope to have my own free space online without being discovered." (F9)
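The reading-time figure cited from [6] can be sanity-checked with simple arithmetic, assuming a reading speed somewhere around 400-550 words per minute for policy text (the speeds are our assumption, not part of the survey):

```python
# Back-of-envelope check of the figures cited from [6].
total_words = 220_000          # combined agreements of the top 10 free apps
apps = 10
per_app = total_words / apps   # 22,000 words each, as reported
for wpm in (550, 400):         # assumed reading speeds
    print(f"{per_app / wpm:.0f} min at {wpm} words/min")
# Prints roughly 40 and 55 minutes, consistent with "40 minutes to 1 hour".
```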

In addition, the "new alienation" by algorithms also manifests in the blurring of identity. In social media, people only build awareness of others through superficial information (such as profile pictures, nicknames, updates, etc.). Algorithms reinforce this superficial connection through a recommendation mechanism, making social relationships illusory and lacking in depth. This lack of authentic understanding and communication further exacerbates the ambiguity of identity. At the same time, constantly pushing information that matches the user's interests reinforces some of the user's cognitive biases and stereotypes. This continuous reinforcement can lead to a distorted sense of identity and an inability to thoroughly and objectively perceive themselves. In a pluralistic environment, algorithmically recommended information can contain conflicting views and values. Users may feel confused and lost under this kind of information shock, and it isn't easy to form a stable identity.

4.2. Algorithmic power: Profit-driven induced consumption

Generally speaking, any technological revolution is accompanied by a redefinition of interest relations and profound changes in power structures [8]. Under the algorithm's detailed user portraits, users' behavioral information on the platform, including purchase history, shopping preferences, gender, and search history, is accurately captured for further user classification; such precise data mining springs from vigorous profit-seeking and commercial games. Driven by market competition, users' personal information is further used by algorithms for matching, regulation, and control, outlining the strong correlation between the two [9], releasing substantial commercial value and stimulating platforms' enthusiasm for polishing and upgrading their information acquisition technologies and modes. On today's social and shopping platforms, the platform algorithm attracts users by recommending highly customized content, producing emotional resonance through highly adaptive, personalized services. As a result, under a profit-first platform strategy, "finding content for users" gives way to "finding users for content," generating colossal user stickiness and ultimately drawing ever more netizens to social media platforms with algorithmic recommendation functions [10].

Take the Douyin short video platform as an example. It comprehensively collects users' browsing history and preferences, extracts and mines their purchase needs, and then constructs user portraits and prediction models to achieve highly accurate purchase-preference prediction. It then induces users to consume in hidden and subtle ways, such as pushing "guess what you like" product videos or live-commerce rooms on the app's home feed, or pushing at high frequency the sponsored videos of bloggers "you are interested in," exploiting users' affection for and trust in those bloggers. These strategies appear to improve user experience and demand matching; in fact, they treat users as "Party B" whose consumption is endlessly harvested for the benefit of "Party A," namely the brands and merchants. One interviewee remarked: "I think the algorithm recommends things I like according to my preferences. At first I thought it was a good thing, and I liked a lot of the recommended products, but later I realized many of those purchases were not really necessary; the algorithm seems to have made me spend a lot of money." (F16)
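The profiling logic this section describes can be caricatured in a few lines. The sketch below is a hypothetical reconstruction, not Douyin's actual pipeline: the event fields, weights, and dwell-time heuristic are all assumptions.

```python
from collections import defaultdict

# Hypothetical browsing log; real feature pipelines are proprietary.
events = [
    {"user": "u1", "category": "skincare", "dwell_sec": 45, "liked": True},
    {"user": "u1", "category": "skincare", "dwell_sec": 30, "liked": False},
    {"user": "u1", "category": "fitness",  "dwell_sec": 5,  "liked": False},
]

def build_portrait(events, user):
    """Score each category by dwell time plus an assumed bonus for explicit likes."""
    scores = defaultdict(float)
    for e in events:
        if e["user"] == user:
            scores[e["category"]] += e["dwell_sec"] + (20 if e["liked"] else 0)
    return dict(scores)

portrait = build_portrait(events, "u1")
# The top-scoring category would drive which product videos get pushed first.
print(max(portrait, key=portrait.get))  # -> "skincare"
```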

4.3. Reverse domestication: The tool as the user's "prosthesis" in the age of intelligent media

The term "domestication" originated in biology and refers to the process of forming new conditioned reflexes based on the innate instincts of animals through human intervention and training[11]. Silvers et al. introduced this concept into the field of sociology, referring to the process by which human beings discipline the use of media through the use of daily life[12]. Human beings have created media technology, but technology does not only exist as a tool, as a social subject; people are also constrained and influenced by object media technology while changing media technology, constantly making changes involuntarily, and being "branded" by technology, and unconsciously being "reverse domesticated" from media technology[13]. Taking the wide application of algorithm technology as an example, people are completely conquered by the precise push of algorithms and excellent user experience, and the comfortable and intelligent lifestyle created by technology is increasingly "kidnapping" the autonomous socialization behavior of individuals and gradually evolving into an indispensable and unavoidable way of survival.

McLuhan's "Media Extension Theory" once proposed that all media (or technology) are an expansion and extension of a certain function of human beings. For example, a hammer is an extension of a fist, a wheel is an extension of a leg and foot, etc. As far as algorithm technology is concerned, people can say that its high degree of intelligence extends the wisdom of the human brain, which is equivalent to an individual having an extra brain without using a knife. It is a super-brain; such an intelligent extension undoubtedly leads the social development of human beings, helping people perceive the world, link others, and even replace thinking. However, such an extension does not seem to be a boon to the benefits of smart technology. In the long run, users affected by the platform's algorithms have mild or severe algorithm domestication. The human brain has enjoyed the services of the super brain for a long time, and it has gradually degraded. Specifically, users are accurately "instilled" with the content they are interested in by the algorithm and lack the will to jump out of the cocoon, so they gradually lose the ability to think deeply. Users are bound to the value-oriented category of algorithm design, and they strayed into the vicious circle of value discipline. One respondent responded, "I can feel that the content and opinions of the messages I receive are roughly the same. It's the opinions that I agree with more that will be pushed in front of me, and then I may rarely have access to things that contradict my opinions. In such a situation, it may be difficult for me to realize that my ideas are wrong. "(F12)

4.4. Hidden loopholes: Discrimination and the black box dilemma

The "black box" of an algorithm refers to the knowledge involved in the algorithm's operation at a particular stage, which is known to the developer and manufacturer but not necessarily known to the user. Users can only observe and understand the input and output in computer science. Still, they cannot understand converting input into output, constituting a "black box." Based on this, the computer science definition of the algorithm "black box" refers to the "technical black box" that appears in people's field of vision due to the complexity of the algorithm itself, in a state that is known to the developer but not necessarily known to the user[14]. Users are unaware of the goals and intent of the algorithm and have no way of knowing, judging, and monitoring the designer, actual controller, and responsibility for machine-generated content. The algorithmic black box exacerbates the inequality in information access and interpretation, so capitalists and enterprises with control of algorithms can use this advantage to carry out back-end operations and control the process and effect of information release and transmission. For example, the personal information of users obtained by cookie technology is originally an interaction between the online platform and the user. Still, such data will be provided to third-party companies as a commodity. Third-party companies will cooperate with filtering algorithms to analyze and calculate user information to target personalized pushes[15]. Finally, the so-called personalized service that improves the satisfaction of the user experience is formed, that is, the circular business chain with the user as the commodity behind the algorithm.

The black box is the crux of algorithmic discrimination, on which academic research falls into two strands. One concerns discrimination committed in the name of algorithmic convenience: network service providers argue that algorithms improve decision-making efficiency, yet they incorporate factors such as personal preference into actual decisions, and when biased algorithms exceed reasonable limits they provide different services to similar users, implementing and expanding discriminatory behavior [16]. Because algorithmic discrimination is inherently insidious, discrimination embedded in AI is no longer as explicit as in the past; it hides in the corners of the digital world and silently erodes social fairness and justice [17]. Under the control of capital, rule preferences are built into algorithm design from the outset; the algorithm inherits the developer's bias, and the user becomes an object of control under algorithmic discrimination. In the interviews, more than two-thirds of respondents said they had encountered algorithmic discrimination: "I found that the information the algorithm recommends differs from person to person, especially between men and women. For many common gender-related social issues, different genders seem to receive different pushes, and everyone's views differ." (F7) Beyond gender discrimination, age discrimination, discrimination against minors, and discrimination based on religious belief all exist on many common social platforms, and such bias hidden beneath artificial intelligence algorithms is difficult to detect.
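Auditing tools of the kind discussed later in Section 5.2 often start from a simple disparate-impact measure. A minimal sketch, assuming binary push decisions and a single sensitive attribute, is the "four-fifths rule" ratio below; the data are invented for illustration.

```python
# Compare favorable-outcome rates across groups and report the ratio of the
# lowest to the highest rate. A ratio below 0.8 is a conventional red flag.
def disparate_impact(outcomes, groups, favorable=1):
    """Return (min/max group selection-rate ratio, per-group rates)."""
    rates = {}
    for g in set(groups):
        members = [o for o, gg in zip(outcomes, groups) if gg == g]
        rates[g] = sum(o == favorable for o in members) / len(members)
    return min(rates.values()) / max(rates.values()), rates

# Toy push decisions (1 = shown a job ad) split by a sensitive attribute.
outcomes = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups   = ["m", "m", "m", "m", "m", "f", "f", "f", "f", "f"]
ratio, rates = disparate_impact(outcomes, groups)
print(rates, ratio)  # rates {m: 0.8, f: 0.2} give ratio 0.25, a clear flag
```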

5. Remodeling: The multi-faceted return of subjectivity

The original goal of machines, whether in the agricultural or the industrial age, was to liberate human productivity, and humans who would change society through machines must bear the risks machines bring. In the era of human-machine symbiosis, "human beings and artificial intelligence should not compete with each other but stand in a complementary, symbiotic relationship" [18]. Facing the marginalization of user autonomy caused by algorithmic technology, technological improvement and human control must advance together, finding a reasonable balance between regulation and innovation so that the application of algorithms meets social expectations and norms.

5.1. Technology optimization: Construct a humanism-guided direction

Technological evolution is essentially a process of selection, development, and reinforcement by human beings, and humans as subjects play the decisive role in determining its direction [19]. In optimizing algorithmic technology, a people-oriented operational direction must be established: while pursuing efficiency and accuracy, algorithm designers must deeply understand and respect people's needs, values, and emotions. Technological innovation should promote the harmonious coexistence of human and machine, so that algorithms not only solve complex problems but also take individual differences, social ethics, and the long-term goals of sustainable development into account in learning and decision-making, ensuring that every step of technological development serves human well-being. For example, user feedback mechanisms can be incorporated into the recommendation algorithm, such as a "dislike" button or preference-adjustment options, so that users can actively intervene in the recommendation results, enhancing the algorithm's transparency and controllability. Scrutiny should also be intensified to ensure that algorithms remain human-centered in program design, data collection, and computation, comply with digital ethics, and secure users' private data.
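As a concrete illustration of the "dislike" mechanism just proposed, the sketch below downweights a disliked topic in a hypothetical interest profile; the penalty factor and profile format are assumptions, not any platform's documented behavior.

```python
# An explicit negative signal suppresses a topic's future ranking weight.
PENALTY = 0.5  # multiplicative downweight per dislike (assumed)

def apply_dislike(profile, topic):
    """Cut the disliked topic's score so related items fall in the ranking."""
    if topic in profile:
        profile[topic] *= PENALTY
    return profile

profile = {"gossip": 4.0, "science": 2.0}
profile = apply_dislike(profile, "gossip")
print(profile)  # gossip drops from 4.0 to 2.0, giving other topics a chance
```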

In addition, the algorithm model's design must be optimized. Since the primary purpose of algorithm application is to serve people, designers should strengthen humanistic thinking and project humanistic concepts into algorithm programs, so that algorithms attend to users' privacy protection and norms of equality, correct discriminatory outputs, operate justly, and improve in accuracy and efficiency. Optimizing the algorithm model, however, is a continuously iterative process: as data volumes grow, problems become more complex, and new technologies emerge, the model must be optimized again and again, involving many links over a long period, which may pose a challenge for those who profit from the algorithm.

5.2. Accountability: Autonomous monitoring of technological intelligence

"As a technology, algorithms themselves have no power attributes, but once they are applied in the public domain, they will have the color of public power and should be subject to the regulations of public field ethics"[20]. The supervision of algorithm technology should be reflected in technical governance and legal constraints. As far as algorithmic technology itself is concerned, when wrong values guide it, it may lead to the alienation of algorithmic power, which will cause a series of algorithmic risks[21]. Compared with the rigid constraints of the law, algorithm companies need to consciously assume the primary responsibility of digital enterprises and give full play to organizational self-discipline to conduct self-supervision and self-examination. Using algorithmic discrimination as an example, foreign algorithmic technology entities represented by Google and Microsoft have actively intervened in regulating algorithmic discrimination through technological innovation and have developed tools to detect and observe algorithmic discrimination. Microsoft also created the Fairness, Accountability, Transparency, and Ethics in AI Group to study the complex societal implications of artificial intelligence, machine learning, and natural language processing. In addition, Pymetrics, an emerging AI company, has developed an open-source tool called Audit AI to measure the specific data and traits used by algorithms to determine whether they negatively impact a small number of people[22].

Compared with supervision and governance by all sectors of society, autonomous supervision within artificial intelligence is more feasible: it can use technology to counter technology's drawbacks, significantly reducing algorithm-induced problems at the source and cutting off risks such as data threats and privacy leaks. At the same time, well-functioning self-supervision software can help users obtain a better experience by reducing black-box and discriminatory pushes, letting users escape fixed information cocoons and information bias, expand the boundaries of their thinking, and break the cycle of inert thinking and value discipline caused by the new alienation of algorithms.

5.3. Regulatory constraints: Algorithms for good under rigid requirements

The operation of algorithms presupposes mastery of personal information data, and the security of that data is the Achilles' heel of algorithmic operation. However, the definition of personal information in China's judicial practice remains ambiguous: although existing laws protect personal information and forbid its illegal use, they do not protect non-personal information beyond that which can identify a specific individual [23]. At present, algorithmic technology, with its high concealment and strong information-capture capability, has made the illegal collection of personal information ever more widespread. Software platforms are the most common venues for algorithmic applications, and most mobile app users have no idea how their personal information is leaked, let alone how it is infringed [24]. Moreover, with algorithmic support it is no longer necessary to rely on specific duties or services to obtain information; the criminal law's boundary of punishment for such collection is unclear, and there is also regulatory ambiguity around collection via neutral technologies such as crawlers [25]. In constructing laws and regulations, therefore, attention must be paid to the special legislative status of the Personal Information Protection Law: in trying relevant cases, the provisions of the pre-existing laws and the requirements of the criminal law should echo each other to maximize the effect of legal protection.

Second, on the issue of algorithmic black boxes, legal regulation can impose constraints: for example, by stipulating that algorithm producers must provide a corresponding interpretation before an algorithm is applied, clearly present the user agreement, and not strip individuals of privacy through obscure word games, while the principles of contract and autonomy should govern dispute resolution. Users' unfamiliarity with and insensitivity to professional technology must not be exploited for improper gain or infringement. Algorithm developers and operators should be required to explain clearly the causal relationship between an algorithm's input data and its output results, without revealing the parts protected as trade secrets, making the operation of the black box transparent and visible, so as to enhance public trust and security, protect the symmetry of information, and prevent algorithm writers from manually interfering with the algorithm's operation [26].
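One way to provide the required input-output explanation without exposing a model's internals is model-agnostic probing. The sketch below measures permutation importance against a stand-in black-box scorer; the scorer, features, and data are all hypothetical, and real disclosures would of course be more rigorous.

```python
import random

# A scorer standing in for a proprietary ranking model; only its inputs and
# outputs are visible, matching the "trade secret" constraint above.
def black_box_score(features):
    return (0.6 * features["watch_time"]
            + 0.3 * features["likes"]
            + 0.1 * features["shares"])

def permutation_importance(score_fn, rows, feature, trials=100, seed=0):
    """Measure how much shuffling one input feature perturbs the outputs."""
    rng = random.Random(seed)
    base = [score_fn(r) for r in rows]
    total_delta = 0.0
    for _ in range(trials):
        shuffled = [r[feature] for r in rows]
        rng.shuffle(shuffled)
        perturbed = [dict(r, **{feature: v}) for r, v in zip(rows, shuffled)]
        total_delta += sum(abs(b - score_fn(p)) for b, p in zip(base, perturbed))
    return total_delta / (trials * len(rows))

rows = [{"watch_time": w, "likes": l, "shares": s}
        for w, l, s in [(10, 2, 1), (3, 9, 0), (7, 4, 5), (1, 1, 8)]]
# Features with larger values dominate the ranking; this exposes the
# input-output relationship without revealing the model itself.
for f in ("watch_time", "likes", "shares"):
    print(f, round(permutation_importance(black_box_score, rows, f), 3))
```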

6. Conclusion

The emergence of algorithms is not a technological evil; the rights and interests riding on them are the crux of algorithmic wrongs. The development of artificial intelligence grows ever more vigorous, the changes in human society will only accelerate, and the changes technology brings are immeasurable. Foreseeably, human beings will depend ever more on artificial intelligence and computer programs. Facing this irreversible trend, what people can do is keep making repairs in the course of development, patching in time the loopholes that accompany technology's take-off and maintaining the balance between technology and humanity. Everything develops through a necessary process, and problems are not solved overnight; likewise, the new alienation brought about by algorithms must be broken down and reshaped step by step. As long as the concept of human-machine symbiosis does not waver, a future of human-machine harmony and technological take-off is within reach.

Acknowledgements

This work was partially supported by the "Pioneer" and "Leading Goose" R&D Program of Zhejiang (No. 2023C01222) and the Public Welfare Technology Application Research Project of Zhejiang (No. LGF22F020008).


References

[1]. Sun, W. (2020). Social sciences in China. Social Sciences in China, (12), 119-137, 202-203.

[2]. Davenport, T., & Kirby, J. (2018). Hangzhou: Zhejiang People's Publishing House.

[3]. Hillhouse School of Artificial Intelligence, Renmin University of China. (2022, January 7). Research report on the development of algorithms for good and personalized recommendation. http://ai.ruc.edu.cn/newslist/newsdetail/20220107001.html (Accessed 2024, July 25).

[4]. Jerome, S. (1991). End-to-end arguments in system design. In Integrated broadband networks (p. 30). Boston: Artech House.

[5]. Zhang, H., Xu, H., & Ding, L. (2024). Privacy infringement under the "non-sensory harm" in the era of algorithms. All Media Exploration, (04), 120-122.

[6]. Zhao, L. (2022, April 29). APP user agreement and privacy policy, have you really read? People's Posts and Telecommunications, (7).

[7]. Yang, F. (2024). Masking and unmasking: On the impact of algorithms on personal information security and its responses—Based on the perspective of media visibility. Journal of Intelligence, 1-9.

[8]. Chen, W. (2018). International Press, (2), 8-14.

[9]. Peng, L. (2021). Survival, cognition, relationship: How algorithms will change us. Journalism, (3), 45-53.

[10]. Kuang, W., & Wang, T. (2023). Social media algorithm recommendation communication logic and platform social responsibility. Journal of Shanghai Jiao Tong University (Philosophy and Social Science), (5), 1-12.

[11]. Liu, D., & Cheng, X. (2024). Cognitive risk of tool dependence: ChatGPT instrumentality and creative myths. Young Journalists, (03), 17-22.

[12]. Pan, C. D. (2014). Play with my iPhone, mess with my world! — Discussion on "mediation" and "domestication" in the application of new media technology. Journal of Soochow University (Philosophy and Social Science), (04), 153-162.

[13]. Liu, Q., & Zhang, S. (2018). From tool dependence to instinctive inhibition: The phenomenon of "reverse domestication" in the era of intelligent media. News Lovers, (04), 13-16.

[14]. Wang, X. (2024). Research on criminal law based on algorithmic "black box" personal information security. Cyberspace Security, 15(03), 41-45.

[15]. Wu, S., & Guo, W. (2021). Rule of law governance of artificial intelligence algorithm black box. Science and Technology and Law (Chinese), (01), 20.

[16]. Wang, S., & Zhang, Q. (2024). Theoretical reflection on algorithmic discrimination and normative reconstruction of algorithmic decision-making. E-Government, 1-12.

[17]. Li, C. (2021). Legal governance of artificial intelligence discrimination. China Law Science, (2), 127-147.

[18]. Zhou, Z. (2018). The frontier of wisdom: From Turing machine to artificial intelligence. Beijing: China Machine Press.

[19]. Chen, C. (2021). People's Forum, (1), 38-40.

[20]. Zhang, H., & Zhang, Z. (2022). From algorithm black box to algorithm transparency: The transition logic and path of government algorithm governance. Journal of Guizhou University (Social Sciences), (4), 65-74.

[21]. Lai, X. (2015). On the cross-departmental collaborative governance of government. Beijing: Peking University Press.

[22]. Lv, S., & Hu, C. (2024). Journal of Qingdao University of Science and Technology (Social Sciences), 40(02), 80-90.

[23]. Wu, J., & Guo, W. (2021). Rule of law governance of algorithmic black box in the era of artificial intelligence. Science & Technology and Law, (01), 19-28.

[24]. Zhang, Z. (2023). Legal protection of personal information security in mobile APP. Cyberspace Security, (05), 2-5.

[25]. Bu, T. (2020). Criminal regulation of infringement of citizens' personal information by web crawler technology. Cyberspace Security, (03), 116-117.

[26]. Zhang, L. (2018). Research on algorithm interpretation power of commercial automation decision-making. Legal Science (Journal of Northwest University of Political Science and Law), 36(3), 65-74.


Cite this article

Sun, M., & Ji, N. (2024). Erosion and Remodeling: New Alienation of User Subjects Led by Intelligent Algorithms. Communications in Humanities Research, 36, 61-72.

Data availability

The datasets used and/or analyzed during the current study will be available from the authors upon reasonable request.

Disclaimer/Publisher's Note

The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of EWA Publishing and/or the editor(s). EWA Publishing and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

About volume

Volume title: Proceedings of ICADSS 2024 Workshop: International Forum on Intelligent Communication and Media Transformation

ISBN:978-1-83558-451-4(Print) / 978-1-83558-452-1(Online)
Editor:Enrique Mallen
Conference website: https://2024.icadss.org/
Conference date: 18 October 2024
Series: Communications in Humanities Research
Volume number: Vol.36
ISSN: 2753-7072 (Print) / 2753-7064 (Online)

© 2024 by the author(s). Licensee EWA Publishing, Oxford, UK. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license. Authors who publish this series agree to the following terms:
1. Authors retain copyright and grant the series right of first publication with the work simultaneously licensed under a Creative Commons Attribution License that allows others to share the work with an acknowledgment of the work's authorship and initial publication in this series.
2. Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the series's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgment of its initial publication in this series.
3. Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work (See Open access policy for details).
