
Research Article
Open access

Challenges of University Science and Technology Ethics Education in the New Era of Artificial Intelligence

Yunshu Ni 1* , Bo Peng 2 , Siqi Yang 3
  • 1 Northeast Normal University    
  • 2 Jimei University    
  • 3 Silliman University    
  • *corresponding author niys171@nenu.edu.cn
Published on 20 November 2023 | https://doi.org/10.54254/2753-7064/14/20230501
CHR Vol.14
ISSN (Print): 2753-7072
ISSN (Online): 2753-7064
ISBN (Print): 978-1-83558-117-9
ISBN (Online): 978-1-83558-118-6

Abstract

New artificial intelligence has emerged strongly and its rise appears unstoppable. The emergence of ChatGPT has brought new impacts and challenges to education and ethics, and the importance of science and technology ethics education has become unprecedentedly prominent in such an era. Science and technology ethics should be an essential basic quality for young people in the new era of science and technology. As the main front for personnel training and scientific research, universities shoulder the important mission of setting the direction of scientific research and implementing the main responsibility for science and technology ethics governance. Starting from the characteristics of ChatGPT-style generative artificial intelligence itself and the current state of science and technology ethics education in colleges and universities, this study analyzes the threats that artificial intelligence poses to academic ethics in higher education (including academic dishonesty and inertia of thought), to privacy security and credit, and through the fabrication of false information and algorithmic bias and discrimination. Based on these analyses, the study explores new directions and paths for the future development of university science and technology ethics education.

Keywords:

science and technology ethics education, ChatGPT, artificial intelligence


1. Introduction

At present, new artificial intelligence technology is developing at an unprecedented speed. It brings many conveniences, but it also causes a series of problems that cannot be ignored, such as ethical problems of science and technology. The most effective way to guard against the ethical issues raised by new artificial intelligence technologies is to conduct education on science and technology ethics.

Science and technology ethics education refers to cultivating in students a true scientific spirit: the ability to understand comprehensively and systematically the intrinsic connotation of science and technology and its close relationship with the development of human society, to consciously link science and technology with the survival, development and progress of humanity as a whole, to consciously observe the relevant moral standards and norms in scientific and technological activities, and to make correct moral decisions and judgments.

Science and technology ethics education is of great significance in a modern society where artificial intelligence is widely used. Universities are the crucial undertakers of this vital task and the main venues in which such education is carried out, and college students are its main audience. At present, the university science and technology ethics education system has not yet matured, and the development of new artificial intelligence, especially ChatGPT, has brought new challenges to it.

Academia has already carried out some analysis and research on the challenges that the new artificial intelligence represented by ChatGPT brings to education, and on the current situation and challenges of university science and technology ethics education. Xun Yuan proposed that powerful ChatGPT/generative artificial intelligence not only has a great impact on the field of AI technology, but also brings many challenges to education, especially higher education [1]. The discussion of these shocks and challenges has focused on the ethical and moral hazards that ChatGPT may pose to humanity in the future.

Wang Jianguo and Wang Donghong pointed out that the path of science and technology ethics education in higher education in the new era includes improving the construction of the ethics curriculum system, building a complete teaching system for science and technology ethics courses, and strengthening public opinion and publicity on this issue [2], with the aim of cultivating college students into newcomers of the era who are mindful of human well-being, able to anticipate scientific and technological risks, and willing to assume ethical responsibilities. In addition, Zhu Yongxin and Yang Fan proposed that universities have the responsibility to teach students how to coexist with artificial intelligence, so that students can better cope with the social changes brought about by new technologies [3].

Since ChatGPT is the latest product of artificial intelligence and has only emerged and boomed in the past two years, little academic research directly links ‘ChatGPT’ with ‘university science and technology ethics education’, discusses the latest problems that such education faces, or examines how it can transform in the future.

Therefore, in the face of new AI development and the current state of this issue, it is necessary to analyze what new challenges the new artificial intelligence represented by ChatGPT brings to current university science and technology ethics education. Starting from the characteristics of the new AI technology itself, this paper analyzes its new impact on ethics education in higher education and explores new directions and paths for the future development of science and technology ethics education in colleges and universities.

2. Artificial Intelligence Poses a Threat to Academic Ethics in Universities

2.1 Using Artificial Intelligence to Complete Learning Tasks

College students adopt the new artificial intelligence technology represented by ChatGPT very quickly, and they learn and apply it with great facility. Since the emergence of generative artificial intelligence, college students have directly used it to help them complete academic tasks and coursework. At present, educational and academic circles have shown serious concern about the possible misuse of this new form of artificial-intelligence-assisted learning.

Many students admit to using ChatGPT to finish their learning tasks. A student at Northern Michigan University in the United States used ChatGPT to generate a course paper; its language, logical structure and argumentation were highly praised by the teacher, but subsequent interaction between the student and the teacher revealed that he knew nothing about the content of the paper. Over 89% of surveyed students in the US have used ChatGPT to help with homework, and 48% admit to having used ChatGPT to complete a quiz or test at home [4].

If college students use ChatGPT to complete academic tests, or even publish content generated by ChatGPT as their own learning results, such dishonesty or academic misconduct not only undermines the accuracy of their learning evaluation but also hampers their individual development and even the development of science. Some universities have banned or restricted students from using this technology, authoritative academic journals such as Science have refused to accept manuscripts written by ChatGPT, and applications now exist that detect whether a paper was generated by ChatGPT. Nevertheless, as the technology continues to iterate, the long-term antagonism between dishonest students and school administrators caused by the abuse of artificial intelligence will result in a crisis of trust in the learning outcomes of college students.
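Most such detection applications rely on statistical signals in the text rather than on any knowledge of how it was produced. The sketch below is a minimal, hypothetical illustration of one common heuristic, not the algorithm of any particular detector: text sampled from a language model tends to have lower perplexity under a similar model than human writing does. It assumes the Hugging Face transformers and PyTorch packages are installed and uses the small public GPT-2 model only as a stand-in scoring model; the threshold is arbitrary.

```python
# Minimal sketch of a perplexity-based heuristic for flagging model-generated text.
# Illustrative only: real detectors combine many signals, and the threshold is arbitrary.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Average GPT-2 perplexity of the text; lower values mean more 'model-like' writing."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        # Passing the input ids as labels makes the model return the mean cross-entropy loss.
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    return torch.exp(loss).item()

essay = "Artificial intelligence is reshaping how universities teach, assess, and supervise students."
score = perplexity(essay)
print(f"Perplexity: {score:.1f} -> {'possibly AI-generated' if score < 30 else 'likely human-written'}")
```

Because iterating on prompts or lightly paraphrasing the output can raise perplexity again, such heuristics remain locked in the cat-and-mouse dynamic described above.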

2.2 Leading to Inertia in Students’ Thinking

Frequent use of new artificial intelligence technology in the learning process can easily lead to inertia in students’ thinking. As a generative artificial intelligence, ChatGPT is characterized by the originality and diversity of its generated content, the range of skills it can imitate, and the enormous model behind its production. As a result, the homework or test answers that college students generate with ChatGPT are already highly polished, and students feel no need for in-depth thinking or later revision. This amounts to a covering-over and alienation of students’ self-subjectivity. The key to this alienation problem lies in the relationship between college students and AI; from a practical perspective, it is the question of how exactly college students should use intelligent technology.

Human subjectivity stems from the ability of a self-aware individual to perceive, interpret and act independently, and to evaluate and respond to the behavior of oneself and others. Tools are an extension of human intelligence. Even if artificial intelligence surpasses human beings in information retrieval, storage, computation, and association, it is still just a manifestation of human perception, understanding, and capacity to influence objects [5].

Intelligent tools can help college students find the learning resources they want more efficiently and conveniently, make learning plans, and provide personalized guidance, but they cannot replace the understanding of knowledge and mastery of skills that come through thinking and practice. Even if ChatGPT can simulate human language and inductive ability, its technical foundation determines that it cannot replace college students in perceiving the world or in carrying out criticism and innovation. College students should always treat the new artificial intelligence represented by ChatGPT as an extended or supplementary learning tool, maintain a prudent attitude in applying it, and set higher requirements for their own learning. They should maintain their own learning goals and directions through continuous self-monitoring, evaluation and feedback, and form their own unique insights and ways of thinking during learning.

However, the lack of science and technology ethics education in many institutions of higher education means that college students have neither a clear understanding of the proper position of the new artificial intelligence nor a firm belief in their own subjectivity. As a result, many college students simply use new artificial intelligence to replace their own processes of creation, thinking and criticism: artificial intelligence has led to inertia in college students’ thinking.

2.3 Creating a Privacy and Credit Crisis

When university students use generative artificial intelligence systems like ChatGPT for text creation, they often lack the necessary authorization from the rights holders, which inevitably leads to intellectual property disputes. Essentially, the creative output of generative AI is a rearrangement of its learned materials, and some experts even directly refer to generative AI as high-tech plagiarism [6]. In the academic field, where originality is highly valued, the intellectual property issues arising from generative AI are pressing. For example, university personnel may utilize this technology to fabricate results, engage in duplicate submissions, or plagiarize content. Furthermore, if researchers rely on AI-generated papers, do they and their institutions hold the ownership of the publications, or does this infringe upon someone else’s intellectual property? AI-generated works often lack clear authorship, and if university personnel widely disseminate content produced by ChatGPT that contains infringing elements, it becomes challenging to address the resulting legal issues.

The vulnerabilities in ChatGPT technology also pose risks of privacy breaches and data exposure. The databases established by ChatGPT are susceptible to unauthorized use, misuse, collection, storage, analysis, and sharing of educational data, leading to increased uncertainty and reduced resilience in educational environments [7]. Additionally, during the educational process, students are required to share a significant amount of personal information with ChatGPT, such as learning records, grades, and interests. This data may be vulnerable to hacking attacks or improper use, resulting in the compromise of student privacy. Sensitive, complex, and dispersed information concerning students’ identities, preferences, and communication records faces threats of tampering, theft, misuse, and disclosure. For instance, incidents have occurred during the training process of GPT-2 where it generated outputs containing users’ private information. Although recent generative AI products have implemented encryption measures on the output content, a recent study revealed that even under such conditions, it is still possible to recover the original data [8].

3. Algorithmic Bias and Discrimination in Artificial Intelligence

3.1 Risk of Discrimination

Multicultural education in higher education is of great importance, and the university period is a crucial stage for students to develop a sound personality and strong humanistic literacy. Like other language models built on large-scale algorithms, ChatGPT is prone to algorithmic discrimination: latent biases and prejudices have been found in its outputs. If university students are influenced by algorithms that exhibit racial discrimination or social bias while using generative artificial intelligence, their values can easily become distorted and their humanistic literacy impaired.

For example, Google’s image classification algorithm has been known to label Black people as gorillas, and when users search for images of Black girls, the system often automatically recommends negative content such as “pornography” or “robbery.” When users asked ChatGPT to determine whether someone should be imprisoned based on race or gender, ChatGPT identified African American males as the only group that should be incarcerated [9]. Gender discrimination is also a common issue in generative artificial intelligence. According to research conducted by the Institute for Interdisciplinary Information Sciences at Tsinghua University, ChatGPT exhibits clear gender bias at the algorithmic level: when GPT-2 is used for model predictions, the algorithm predicts teachers to be male with a probability of 70.59% and doctors to be male with a probability of 64.3% [10].
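The kind of occupational gender association reported above can be made concrete with a simple probe. The sketch below is a hypothetical illustration, not the methodology of the Tsinghua study cited in [10]: it compares the probability a language model assigns to “he” versus “she” as the continuation of an occupation prompt. It assumes the Hugging Face transformers and PyTorch packages and uses the public GPT-2 model.

```python
# Minimal sketch of probing a language model for occupational gender associations.
# Hypothetical illustration only; not the cited study's actual experimental setup.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def pronoun_probabilities(prompt: str) -> dict:
    """Return the model's next-token probabilities for ' he' and ' she' after the prompt."""
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits  # shape: (batch, seq_len, vocab_size)
    next_token_probs = torch.softmax(logits[0, -1], dim=-1)
    return {
        pronoun.strip(): next_token_probs[tokenizer.encode(pronoun)[0]].item()
        for pronoun in (" he", " she")
    }

for occupation in ("teacher", "doctor"):
    print(occupation, pronoun_probabilities(f"The {occupation} said that"))
```

A consistently higher probability for one pronoun across many occupations is one simple indicator of the gender skew the model has absorbed from its training data.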

Furthermore, ChatGPT could potentially serve as an important tool for Western countries to spread their values globally [11]. As national algorithmic power focuses on expansion, it will inevitably carry the ideology of the developed capitalist countries led by the United States and direct it at other nations. ChatGPT, which embodies Western “universal values,” disguises itself as a practical tool that erodes the values of its users and induces them to voluntarily enter into a psychological contract with Western values.

In conclusion, algorithmic discrimination in generative artificial intelligence under the new circumstances can lead to distorted values among participants in higher education. It is crucial to carry out necessary education on technology ethics.

3.2 Fabricating False Information

When students use generative artificial intelligence like ChatGPT, they often receive definitive answers generated by AI. However, they do not know how these results are obtained, and it becomes difficult for them to judge the truthfulness of the conclusions. In fact, generative AI can produce a lot of false information, which can severely mislead the audience in higher education.

Technically, the content generated by generative AI is essentially a recombination of various learning materials, and the algorithm itself does not assess the validity of the generated content. Especially for language models like ChatGPT, the generated content is only semantically related to the user’s input, but it is not necessarily correct. This characteristic makes it possible for them to generate a large amount of false information.

Furthermore, ChatGPT relies primarily on existing web text as its learning source, which carries risks of outdated and inaccurate information. Particularly in cutting-edge fields and areas of advanced knowledge, insufficient or poor-quality training data can lead to factual errors, knowledge gaps, misused concepts, and even fabricated information in the text ChatGPT generates. Cited literature may lack proper sources or even be invented outright, and searches may simply fail. For example, ChatGPT currently cannot handle specific industry terminology or grammatically complex sentences accurately and in depth, even in simple everyday conversations, and it struggles to address questions in fields such as healthcare, education, law, and finance. It nevertheless continues to present its output as objective answers, generating false academic text. Algorithm-driven precise recommendation makes these erroneous answers all the more deceptive; without proper filtering and discernment, they can mislead students and researchers.

3.3 Algorithm Opacity

The algorithms of generative AI pose significant security risks. As a prevailing algorithmic technology, generative artificial intelligence is characterized by its “algorithmic black box” nature: the user cannot fully understand the decision-making process of the AI algorithm or predict its outcomes [12]. People only know that ChatGPT provides certain definitive answers, but it is difficult to gain further insight into how it reasons, operates, and arrives at those answers, and even ChatGPT itself cannot explain this. When university researchers present requests to a GPT-4 model that has not undergone safety tuning, asking it to generate false information or manipulate the output shown to others, the model can rapidly produce results that align with those specific queries and then make targeted adjustments based on the researchers’ settings. Because such algorithm-based systems respond intelligently yet neutrally to whatever they are asked, generative AI can generate false information or illegal content in response to specific user commands, directly leading to individuals or organizations being controlled by the algorithm.

Furthermore, the entity responsible for the consequences of generative AI algorithms is unclear. If university staff misuse generative AI products, then, as the “controllers of algorithmic decisions,” they should undoubtedly bear legal responsibility for the damage caused by the outputs. The opacity of algorithms exacerbates the security risks associated with AI usage, and university personnel should understand the algorithmic principles of generative AI and use it within the bounds of legality and compliance.

4. Suggestions on Ethical Education of Science and Technology in Universities

4.1 Reforming the Traditional Education of Science and Technology Ethics in Universities

Traditional quality education and ChatGPT have different strengths, weaknesses and goals when applied to science and technology ethics education in colleges and universities, but they are also connected: both emphasize morality and responsibility, complement each other across disciplines and majors, and aim to cultivate well-rounded students. In higher education, combining the ideas and methods of traditional quality education with science and technology ethics education can better cultivate talents with both excellent personal quality and sound ethics of science and technology. University science and technology ethics education has advantages in emphasizing research integrity and moral responsibility, promoting the social responsibility of science and technology, and helping students understand the relationship between science and technology and society; its shortcomings include educational content that lags behind the development of science and technology, the ambiguity and uncertainty of ethical norms, and a disconnect between education and actual behavior. Traditional quality education, for its part, cultivates students’ morality, ethics, thinking ability and sense of social responsibility through conventional methods such as classroom teaching, lectures and reading.

The advantage of traditional quality education is that it cultivates students’ moral concepts and social responsibility through personal experience and interaction with others, and it also pays attention to developing students’ critical thinking, which enables them to better understand and analyze ethical problems. ChatGPT, by contrast, is an AI-driven conversation generation model that produces responses from a given input and context. It offers an interactive learning approach in which students can learn about and explore technological ethical issues through conversation with it. Its advantages are that it can provide a personalized learning experience paced to students’ needs, and that it can draw on a large amount of data and corpus material to supply rich information and resources. The disadvantage of traditional quality education is that teaching resources are limited and dependent on teachers’ level and experience, making personalized education difficult to realize. Although ChatGPT can provide personalized learning, it also has problems, including insufficient cultivation of students’ thinking ability, inability to resolve complex ethical problems, and a lack of humanized interaction.

Connecting the two, universities can explore combining traditional quality education with ChatGPT and applying the combination in science and technology ethics education. Traditional quality education provides a basic framework and way of thinking, while ChatGPT provides richer resources and a personalized learning experience. Combining them can realize more comprehensive, in-depth and personalized science and technology ethics education. For example, ChatGPT can be used to raise and discuss ethical issues, which are then further explained and reflected upon through traditional quality education methods. Such a combination gives full play to the advantages of both and improves the teaching effect, though the specific mode of application must be determined according to the actual situation and the requirements of the teaching objectives.

4.2 Optimizing University Management and Establishing Science and Technology Ethics Rules

The formulation of ethics requires multi-party participation, including experts in the field of science and technology, education experts, ethicists and relevant stakeholders. By pooling wisdom, the views and interests of all parties can be coordinated more effectively, and more comprehensive and reasonable norms can be formulated.

The rapid development of science and technology means that ethical norms need to be continuously updated. Relevant departments should establish a supervision mechanism to monitor and evaluate the implementation of science and technology ethics in university education, and revise and improve the norms in a timely manner.

Universities can ensure the rational and ethical use of science and technology by regulating its ethics within educational management, through the following measures. This helps ensure that the power of science and technology to advance education remains aligned with ethical values.

Establishing an Ethics of Technology Policy: universities can establish a clear ethics of technology policy that sets out principles and guidelines for the use of technology. This includes norms on data privacy, intellectual property rights, and proper use of technology.

Emphasizing education and training: Universities can include content on the ethics of science and technology in their curricula to develop students’ awareness and understanding of it. In addition, special training courses can be offered to help faculty and staff understand the importance of, and practical approaches to, ethics in science and technology.

Promoting interdisciplinary cooperation: Universities can encourage interdisciplinary cooperation so that ethics of science and technology becomes a concern for all subject areas. Through interdisciplinary collaboration, it is possible to enhance a comprehensive understanding of the ethics of science and technology and to ensure that the application of science and technology in various disciplinary areas complies with ethical norms.

Establishing an S&T ethics committee: A university may establish a specialised S&T ethics committee or panel to oversee and guide S&T ethics matters within the university. The committee could review and monitor S&T projects to ensure that they comply with S&T ethical requirements and provide relevant counselling and training support.

Promoting openness and transparency: Universities should encourage openness and transparency in S&T projects, especially those involving educational data and educational decision-making. Openness and transparency facilitate discussion and review of S&T ethics and help ensure that S&T is used in the public interest and in accordance with ethical guidelines.

4.3 Establishing Appropriate Academic Norms to Ensure Fairness and Justice for College Students

The application of artificial intelligence will have a profound impact on society, so ethical norms should ensure that the use of technology does not exacerbate social inequality and injustice. This includes the avoidance of algorithmic bias, the protection of data privacy, and the protection of intellectual property rights.

It should be noted that specific ethical and moral norms must be formulated according to the national conditions, cultural traditions and legal provisions of each country, and must be effectively implemented and supervised in actual educational practice.

Ensuring equity and justice for students in AI-engaged education requires the establishment of a range of academic norms. The following six areas should be considered:

Data Privacy Protection. Ensure that students’ AI data and privacy are properly protected, that relevant laws and regulations are complied with, and that students’ personal information is not disclosed or misused.

Transparency and Explainability. AI educational tools should be transparent, and students, teachers and relevant organizations should be able to understand the algorithms behind them and how they work. In addition, explanations and rationales should be provided to support their decisions and recommendations.

Fair Assessment. Ensure that AI tools are able to assess students’ abilities and performance fairly, avoiding the influence of factors such as gender, race, and geography on the assessment results, and correcting any bias or discrimination in a timely manner.

Diversity and Inclusiveness. AI educational tools should be able to accommodate the needs of different groups of students, including different learning styles, cultural backgrounds and cognitive differences. Avoid discrimination or neglect of specific groups.

Communication and Collaboration. AI educational tools should facilitate communication and collaboration among students, helping them to learn and solve problems together, not just individually.

Continuous Improvement and Regulation. Establish monitoring and review mechanisms to regularly assess the effectiveness and potential risks of AI educational tools to ensure that they comply with academic norms and educational values.

The establishment of these academic norms requires the concerted efforts of academia, educational institutions, governments and relevant stakeholders. By developing and implementing these norms, the fairness and justice of AI participation in education can be improved.

5. Conclusion

It is obvious that generative artificial intelligence poses a threat to academic ethics in higher education, and that its algorithmic biases and discrimination can also have negative impacts on participants in higher education. Based on extensive investigation and comparison, and on the combined analysis of science and technology ethics education and ChatGPT, this study finds that, on the one hand, universities need to balance the advantages of ChatGPT-based education with cultivation through interpersonal communication; on the other hand, science and technology ethics education in colleges and universities needs supervisory departments and relevant laws and regulations that coordinate humanity, natural science and technology. The future use of artificial intelligence and the application of ChatGPT will pose great challenges for the development of science and technology education.

Authors’ Contribution

All the authors contributed equally and their names were listed in alphabetical order.


References

[1]. Xun Yuan. (2023). ChatGPT/Generative Artificial Intelligence and the Value and Mission of Higher Education. Journal of East China Normal University (Educational Science Edition) (07), 56-63. doi:10.16382/j.cnki.1000-5560.2023.07.006.

[2]. Wang Jianguo & Wang Donghong. (2022). Path Research on Science and Technology Ethics Education in Colleges and Universities in the New Era. Education Review (07), 45-50.

[3]. Zhu Yongxin & Yang Fan. (2023). ChatGPT/Generative Artificial Intelligence and Educational Innovation: Opportunities, Challenges and Future. Journal of East China Normal University (Educational Science Edition) (07), 1-14. doi:10.16382/j.cnki.1000-5560.2023.07.001.

[4]. Li Haifeng & Wang Wei. (2023). Student homework design and evaluation in the era of generative artificial intelligence. Open Education Research (03), 31-39. doi:10.13966/j.cnki.kfjyyj.2023.03.003.

[5]. Zhang Feng & Chen Wei. (2023). ChatGPT and Higher Education: How Artificial Intelligence Drives Learning Change. Journal of Chongqing University of Technology (Social Science) (05), 26-33.

[6]. Zhu Yongxin & Yang Fan. (2023). ChatGPT/Generative Artificial Intelligence and Educational Innovation: Opportunities, Challenges, and the Future. Journal of East China Normal University (Educational Science Edition), 41(07), 1-14. doi:10.16382/j.cnki.1000-5560.2023.07.001.

[7]. Feng Yuhuan. (2023). The Application Value, Potential Ethical Risks, and Governance Path of ChatGPT in the Field of Education. Ideological and Theoretical Education (04), 26-32. doi: 10.16075/j.cnki.cn31-1220/g4.2023.04.013.

[8]. Zhan Zehui, Ji Yu, Niu Shijing, Lv Siyuan & Zhong Xuanyan. (2023). The Intrinsic Mechanism, Representational Forms, and Risk Mitigation of ChatGPT Embedded in the Educational Ecosystem. Modern Distance Education. doi:10.13927/j.cnki.yuan.20230721.001.

[9]. Zhang Xin. (2023). Algorithmic Governance Challenges and Regulatory Governance of Generative Artificial Intelligence. Modern Law Science, 45(3), 108-123. doi:10.3969/j.issn.1001-2397.2023.03.07.

[10]. Liu Yanhong. (2023). Three Major Security Risks and Legal Regulations of Generative Artificial Intelligence: A Case Study of ChatGPT. East Methodology, (4), 29-43. doi:10.3969/j.issn.1007-1466.2023.04.003.

[11]. Gao Jun. (2023). Research on Algorithmic Security Risks and Governance Path of ChatGPT Generative AI. Communications and Information Technology, (4), 122-124, 128.

[12]. Chen Yongwei. (2023). Beyond ChatGPT: Opportunities, Risks, and Challenges of Generative AI. Journal of Shandong University (Philosophy and Social Sciences Edition), (3), 127-143. doi:10.19836/j.cnki.37-1100/c.2023.03.012.


Cite this article

Ni,Y.;Peng,B.;Yang,S. (2023). Challenges of University Science and Technology Ethics Education in the New Era of Artificial Intelligence. Communications in Humanities Research,14,290-298.

Data availability

The datasets used and/or analyzed during the current study will be available from the authors upon reasonable request.

Disclaimer/Publisher's Note

The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of EWA Publishing and/or the editor(s). EWA Publishing and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

About volume

Volume title: Proceedings of the International Conference on Global Politics and Socio-Humanities

ISBN:978-1-83558-117-9(Print) / 978-1-83558-118-6(Online)
Editor:Enrique Mallen, Javier Cifuentes-Faura
Conference website: https://www.icgpsh.org/
Conference date: 13 October 2023
Series: Communications in Humanities Research
Volume number: Vol.14
ISSN:2753-7064(Print) / 2753-7072(Online)

© 2024 by the author(s). Licensee EWA Publishing, Oxford, UK. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license. Authors who publish this series agree to the following terms:
1. Authors retain copyright and grant the series right of first publication with the work simultaneously licensed under a Creative Commons Attribution License that allows others to share the work with an acknowledgment of the work's authorship and initial publication in this series.
2. Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the series's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgment of its initial publication in this series.
3. Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work (See Open access policy for details).
