
Exploring the application of synchronous analysis of physiological signals and facial expressions in emotion recognition
1 Guangdong University of Technology
* Author to whom correspondence should be addressed.
Abstract
As human-computer interaction becomes increasingly frequent, the ability of intelligent systems to accurately recognize and respond to human emotions has become key to improving the user experience. This paper explores the application of synchronous analysis of physiological signals and facial expressions in emotion recognition. Using a literature review, it summarizes acquisition techniques for physiological signals and facial expressions, synchronous analysis methods, and their application cases in emotion recognition, and it discusses the limitations and challenges of existing technology. The review shows that synchronous analysis of physiological signals and facial expressions has important application value in emotion recognition, especially in mental health monitoring, educational feedback, and customer service automation. However, practical application of the technology still faces many challenges, including the real-time requirements of data collection, adaptation to individual differences, and other recognition problems. Future research therefore needs to further optimize data acquisition technology and develop more accurate and personalized analysis algorithms.
Keywords
Physiological Signals, Facial Expression Recognition, Emotion Recognition, Multimodal Analysis, Deep Learning
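To make the multimodal analysis described in the abstract concrete, the following is a minimal, illustrative sketch (in PyTorch) of feature-level fusion of time-aligned physiological-signal features and facial-expression features for emotion classification. It is not the paper's implementation; the feature dimensions, hidden size, number of emotion classes, and the simple concatenation fusion are assumptions chosen for illustration only.

import torch
import torch.nn as nn

class FusionEmotionClassifier(nn.Module):
    """Toy feature-level fusion model: one encoder per modality, then a
    classifier over the concatenated representations. All sizes are
    hypothetical placeholders."""
    def __init__(self, physio_dim=128, face_dim=256, hidden_dim=64, num_classes=4):
        super().__init__()
        # Separate encoders for the physiological and facial modalities
        self.physio_encoder = nn.Sequential(nn.Linear(physio_dim, hidden_dim), nn.ReLU())
        self.face_encoder = nn.Sequential(nn.Linear(face_dim, hidden_dim), nn.ReLU())
        # Classifier over the fused (concatenated) features
        self.classifier = nn.Linear(hidden_dim * 2, num_classes)

    def forward(self, physio_feat, face_feat):
        # Encode each modality, then concatenate for feature-level fusion
        fused = torch.cat([self.physio_encoder(physio_feat),
                           self.face_encoder(face_feat)], dim=-1)
        return self.classifier(fused)

# Example: one batch of synchronously sampled (time-aligned) feature vectors
model = FusionEmotionClassifier()
physio = torch.randn(8, 128)   # e.g., EEG band-power features per analysis window
face = torch.randn(8, 256)     # e.g., facial-expression embeddings per window
logits = model(physio, face)   # shape: (8, 4), one score per emotion class

In practice, the synchronous-analysis methods surveyed in the paper would replace the random tensors with features extracted from temporally aligned physiological and video streams, and the fusion strategy itself (feature-level, decision-level, or attention-based) is a design choice.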
Cite this article
Wang, L. (2024). Exploring the application of synchronous analysis of physiological signals and facial expressions in emotion recognition. Applied and Computational Engineering, 90, 20-25.
Data availability
The datasets used and/or analyzed during the current study will be available from the authors upon reasonable request.
Disclaimer/Publisher's Note
The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of EWA Publishing and/or the editor(s). EWA Publishing and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
About volume
Volume title: Proceedings of the 6th International Conference on Computing and Data Science
© 2024 by the author(s). Licensee EWA Publishing, Oxford, UK. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license. Authors who publish this series agree to the following terms:
1. Authors retain copyright and grant the series right of first publication with the work simultaneously licensed under a Creative Commons Attribution License that allows others to share the work with an acknowledgment of the work's authorship and initial publication in this series.
2. Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the series's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgment of its initial publication in this series.
3. Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work (see Open access policy for details).