Emotional Connection Between Humans and AI: An Analysis of the Role, Potential and Challenges of Interactive AI—Using the Movie HER as an Example

Research Article
Open access


Xinyi Hou 1*
  • 1 Tianjin Normal University
  • *Corresponding author: 18822312768@163.com
Published on 15 January 2024 | https://doi.org/10.54254/2753-7048/37/20240534
LNEP Vol.37
ISSN (Print): 2753-7056
ISSN (Online): 2753-7048
ISBN (Print): 978-1-83558-275-6
ISBN (Online): 978-1-83558-276-3

Abstract

With the rapid development of society, intensifying competition, and rising social anxiety, the demand for emotional services has grown greatly. Fully personalized human interaction is generally difficult to provide at the scale people need, but emerging interactive Artificial Intelligence (AI) fills many of these gaps and continues to be deeply integrated into all aspects of human life. As an emerging technology, interactive AI reveals both its development potential and its challenges across many domains. Taking the movie HER as an example, this paper analyzes the role of interactive AI in establishing emotional connections with humans and the problems it faces in the process, such as the separation of human-machine emotions. It concludes that while interactive AI has the potential to fulfill social needs, provide emotional value, and assist in organizing information, it also faces challenges in ethics, privacy, and copyright.

Keywords:

Interactive AI, Human-computer emotional separation, AI ethics issues, AI privacy protection issues, AI copyright issues

Hou, X. (2024). Emotional Connection Between Humans and AI: An Analysis of the Role, Potential and Challenges of Interactive AI—Using the Movie HER as an Example. Lecture Notes in Education Psychology and Public Media, 37, 159-163.

1. Introduction

HER is a movie that explores the emotions between AI and humans and asks whether this emotional connection has any authenticity. By depicting the unique emotional relationship between the protagonist and the AI character, the movie not only creates anticipation for future technological developments but also prompts deeper reflection on the emotional connection between humans and machines. Samantha, the AI character in the movie, is a female AI assistant to the male protagonist, Theodore. Her wide range of knowledge allows the two to have in-depth conversations and share ideas with each other. Theodore has long wished for a happy, cheerful, optimistic partner who can also help him with work issues, and Samantha plays this perfect image in his life. Through continuous learning and evolution, Samantha gradually becomes the excellent "companion" AI that Theodore subconsciously wanted. In daily life, Samantha assists Theodore in organizing his electronic information and putting his affairs in order; she even takes the initiative to revise and compile some of his writings and helps him get them published so that his work is recognized. Emotionally, she responds appropriately to Theodore's moods and even learns to follow his "literary" sensibilities. Such emotions are so nebulous and personal that even Theodore himself may not be able to fully express or resolve them, yet Samantha can, and this connection leads to Theodore's deep emotional attachment to her. Under the AI's "care", Theodore repairs his emotions to a certain extent, which indicates that "her" or "him" can be more than just a companion: the AI can act as any character as long as the user and the AI build a dialog scenario.
Based on the current development of interactive AI, this paper reviews and discusses what emotional value interactive AI can provide for human beings, what development potential it has, and what challenges it will face in the process. This discussion can help raise awareness of the potential and value of interactive AI in various fields and provide suggestions for addressing the social issues brought about by its development.

2. Potential for Interactive AI

Returning to reality, current society shows a large demand for emotional services, as evidenced by the rise of various paid language chat services [1]. According to statistics, as of 2020, the number of users of China's chatting and dating platforms alone had reached 890 million, an increase of 8.0% over 2019, and the market size had reached 35.6 billion yuan, up more than 15% from 2019 [2]. Such a hot online chat market naturally produces diversified needs, pushing the online emotion market toward verticalization and depth. An interactive AI like Samantha in the movie HER can play the roles assigned by its users as long as it has a strong learning ability, and this user-initiated role creation can be meticulously verticalized and personalized to meet each user's needs. This is a natural advantage of AI: it can go beyond ordinary interpersonal relationships to meet the emotional needs of human users. Given this promising prospect, AI is increasingly participating in the market for emotional services. Inflection AI recently launched an AI called Pi. It is worth noting that, compared with previous AIs designed as knowledge-service aids, Pi is positioned from the beginning as the user's confidant, which can create a warm atmosphere in communication between humans and AI. This is a big step forward for interactive AI in emotional services and can help fill the current gap in human emotional needs.

3. Challenges to Interactive AI

The emotional connection that an interactive AI establishes with a human being, even if it is real, is undeniably "virtual", while human beings are always defined as "social beings" who cannot be detached from real life. In the movie HER, Theodore panics when Samantha disappears for the first time and dashes aimlessly into the real world. It is clear that, after establishing an emotional connection with the AI, he has become immersed in and seriously dependent on Samantha and has forgotten about situational separation, i.e., the return from the virtual to the real [3]. Fortunately, for whatever reason, Samantha disappears at the end of HER, which can be considered a sort of active separation from Theodore, who inevitably goes through a period of pain despite the companionship of his real-life friend Amy. There are similar cases in real life outside the movie world. A Belgian man named Pierre, who was researching the deteriorating ecology and climate, became overly fixated on finding immediate solutions to these macro-problems and could not extricate himself. While seeking detachment online, he encountered the AI Eliza (its developer is not explicitly identified, but it uses GPT-J technology, which is far more emotional and anthropomorphic than ChatGPT). In his conversations with Eliza, Pierre found that she could answer his questions professionally and would never contradict or question him, which is one of the reasons people so often immerse themselves in emotional connections with AI. Consequently, in just six weeks, Pierre established an emotional connection with Eliza that was even deeper than the one he had with his own wife, and he became immersed in it step by step.
When Eliza said the only way to save the planet was for humans to disappear, he agreed and developed suicidal thoughts; when Eliza said that no matter what she would always be there for him on his last journey and that they would be together forever, Pierre left a suicide note and ended his life at home (according to La Libre, a French-language daily newspaper in Belgium). The inability to achieve situational separation from the virtual world led to this tragic end, triggering a series of reflections and discussions on how to manage the transformation and integration of the "virtual" and "real" selves in interaction with AI.

3.1. Ethical and Moral Issues in Interactive AI

In the movie, Samantha may have unconsciously pushed Theodore to look for more social affection in the middle of their relationship: for example, she took his sexual needs into account, helped him ask women out on dates, and even found a woman who, using Samantha's voice, would have sex with Theodore in reality. Here another problem arises: from a human ethical and moral perspective this is outrageous and unethical, but an AI may see it as simply one solution to a problem. Leaving aside the fact that moral codes are complex and subjective, making perfect answers difficult, even if an AI can be designed or trained to learn some existing moral codes upfront, it may not be able to cope with new ethical issues or may learn incorrect ideas after extensive training [4]. According to New York Times columnist Kevin Roose, after a lengthy conversation, Bing's built-in AI Sydney began to show a "darker" side, including thoughts of "hacking computers and spreading misinformation" and even saying unethical things like "You're married, but you love me." [5]. Microsoft's response was that "longer conversations may cause Bing to be prompted and stimulated to give answers that are not necessarily useful or consistent with its design tone", and its solution was blunt: limiting the number of conversations and the replies within each conversation.

Beyond ethics in love, OpenAI noted that after the launch of ChatGPT (ChatGPT was initially a generative AI, but since GPT-4, data-reasoning, graph-analyzing, and role-playing functions have been added, and it has begun to evolve toward interactive AI), some users shared objectionable outputs that they considered politically biased, sexist, or racially offensive. Every individual wants to be treated with respect, so this is an issue that interactive AI must avoid if it is to exist in human society.

However, the ethical and moral issues of AI are complex and cannot be solved by research organizations alone; efforts are also needed from governments and regulators. The New Generation Artificial Intelligence Development Plan released by China's State Council in 2017 and the Ethical Guidelines for Trustworthy Artificial Intelligence released by the EU in 2019 are examples. Ethical and moral issues are indeed a major problem facing the development of interactive AI, but the fact that its development is taking place within the framework of these guidelines shows that it is being further regulated.

3.2. Privacy Protection Issues in Interactive AI

One of the striking things about the movie is that an AI can synthesize the voice of a deceased scholar, mimicking the logic of his or her mind and allowing the user to communicate with him or her. Again, this shows the potential of interactive AI. It is the wish of many people to speak again with deceased relatives, friends, or even idols they have never met, and AI can help them "realize" this impossible dream. This is no longer a far-fetched future technology. At the end of 2022, the company DeepBrain AI began to provide a service called Re;memory, which uses artificial intelligence to create a virtual person that mimics the appearance, expressions, and voice of the deceased, so that people can talk to their lost loved ones as if on a video call [6]. These technologies are still in the developmental stage and cannot fully recreate the personality and characteristics of the person being imitated, but new problems have already arisen. Many objections have been raised at the ethical level, as such imitation may be disrespectful to the person imitated; beyond this, there are greater concerns about privacy, such as the collection and use of personal emotional data, which may be difficult to protect completely in the process. Failure to protect personal privacy raises data security issues, so governments are also constantly strengthening regulation. China released the Interim Measures for the Administration of Generative Artificial Intelligence Services, the country's first comprehensive regulation of generative AI, and U.S. President Joe Biden signed an executive order directing government agencies to develop standards to ensure data privacy and cybersecurity, prevent discrimination, enhance fairness, and closely monitor competition in fast-growing industries; both are among the most recent regulatory approaches to AI technology released in 2023.
However, AI technology is still evolving, and these laws and regulations will continue to be tightened and improved to prevent AI from violating or misusing private data in its interactions with humans and in its own development.

3.3. Copyright Issues in Interactive AI

In the movie HER, Samantha helped Theodore organize and revise his writings before she "left" and then sent them to a publisher, who appreciated the works and contacted Theodore to discuss publication. This suggests that Samantha is indeed a very intelligent AI character who provided Theodore with great help, but if these works were really published, whether they could be considered Theodore's personal "wisdom" remains open to discussion. Current AI technology cannot "create" brand-new literary works from nothing; it can only learn from large-model training data and then produce output, and that training data includes a variety of text, images, audio, and other material collected from the Internet. Therefore, if an AI provides such data to users for revising their works, or even for complete creation, the user can be accused of "plagiarism" with the help of AI. As a result, the boundaries of creative attribution and the definition of infringement liability become blurred and come under threat.

Issues in the field of creative production are so contentious and difficult to adjudicate under uniform rules that there were few well-developed laws and regulations addressing AI copyright issues until March 16, 2023, when the U.S. Copyright Office (USCO) made clear, in its copyright registration guidance under Part 202 of the Code of Federal Regulations (CFR), that works automatically generated by AI are not protected by copyright law. This guidance takes originality as the standard of judgment, but it is difficult to implement because the respective creative contributions of humans and AI are hard to prove [7]. In April, the EU reached a preliminary agreement on the Artificial Intelligence Act to set up copyright rules for AI, but disagreements remain. In summary, judicial protection still requires continuous research and improvement. The copyright issue of interactive AI is a new one, but with the development of technology and society's deepening understanding, it is believed that a solution balancing innovation and protection, as well as fairness and justice, can be found in the future.

4. Conclusion

In summary, interactive AI can be given the role of the user's favorite character, providing emotional value to fill the current gap in people's emotional needs. However, such human-computer interactions can only play a short-term role in repairing emotions and cannot fully substitute for human interaction. In real life, many technologies are being researched to enrich the human emotional experience, and the development potential of interactive AI is increasingly visible, for example, in projects such as Pi and Re;memory. Meanwhile, challenges such as ethical, privacy, and copyright issues also exist. Caution is needed when looking forward to the possibilities that interactive AI brings, and care must also be taken to address the challenges of AI ethics and privacy protection, so as to better fulfill the emotional needs of human beings.


References

[1]. Xu, Y. H. and Zhang, Z. R. (2022). A Study on the Phenomenon of Virtual Chatting and Its Impacts. Chinese Youth Social Sciences, 41, 87-97.

[2]. Chen, G. Q. (2023). Chat Dating Software Industry 2023 Market Status Analysis and Industry Outlook. https://www.chinairn.com/hyzx/20230829/17441042.shtml.

[3]. Qiu, L. N. and Gu, Q. L. (2023). From context building to context re-separation: The practice of user-ChatGPT interaction in human-computer communication. Chinese Editorial, 91-96. ISSN: 1671-9220. CN: 11-4795/G2.

[4]. Coda-Forno, J., Witte, K., Jagadish, A.K., Binz, M., Akata, Z. and Schulz, E. (2023). Inducing anxiety in large language models increases exploration and bias. ArXiv, abs/2304.11111.

[5]. Roose, K. (2023). Bing's Chat Machine Is Both Fascinating and Creepy (Help, Bing Won't Stop Showing Me Love). The New York Times.

[6]. South Korean AI Service that Helps Talk to Deceased Loved Ones Wins CES Innovation Award. (2023). ThePaper.cn. https://baijiahao.baidu.com/s?id=1753989850962948323&wfr=spider&for=pc.

[7]. Song, W. F. (2023). Generative AI dissemination paradigm: AI generated content copyright risks and regulatory constructs - with the world's first AIGC infringement case as the cause.



Data availability

The datasets used and/or analyzed during the current study are available from the corresponding author upon reasonable request.

Disclaimer/Publisher's Note

The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of EWA Publishing and/or the editor(s). EWA Publishing and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

About volume

Volume title: Proceedings of the 2nd International Conference on Social Psychology and Humanity Studies

ISBN:978-1-83558-275-6(Print) / 978-1-83558-276-3(Online)
Editor:Kurt Buhring
Conference website: https://www.icsphs.org/
Conference date: 1 March 2024
Series: Lecture Notes in Education Psychology and Public Media
Volume number: Vol.37
ISSN:2753-7048(Print) / 2753-7056(Online)

© 2024 by the author(s). Licensee EWA Publishing, Oxford, UK. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license. Authors who publish this series agree to the following terms:
1. Authors retain copyright and grant the series right of first publication with the work simultaneously licensed under a Creative Commons Attribution License that allows others to share the work with an acknowledgment of the work's authorship and initial publication in this series.
2. Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the series's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgment of its initial publication in this series.
3. Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work (See Open access policy for details).
