Research Article
Open access

Driver's hazardous state detection in human-computer interaction of automotive cockpits

Xin Zhang 1*
  • 1 Shanxi Agricultural University    
  • *corresponding author 20201209425@stu.sxau.edu.cn
Published on 31 January 2024 | https://doi.org/10.54254/2755-2721/31/20230123
ACE Vol.31
ISSN (Print): 2755-2721
ISSN (Online): 2755-273X
ISBN (Print): 978-1-83558-287-9
ISBN (Online): 978-1-83558-288-6

Abstract

Today, the smart-car industry is growing rapidly: intelligent cockpits built on human-computer interaction offer an ever-wider range of functions, and sales of intelligent vehicles continue to rise. Yet the incidence of traffic crashes caused by unsafe driver states remains high, so the different behavioral states a driver may exhibit while driving are a necessary consideration in intelligent-cockpit design. Taking the driver's state as its starting point, this paper systematically reviews driver-state detection across four aspects: eye state, limb state, facial state, and language state. It introduces the current development status of the four corresponding detection systems, focusing on eye-state recognition and limb-state recognition. The key driver characteristic signals are mainly collected by cameras, and the driver's state is then judged using deep learning, machine learning, and databases. Compared with the existing literature, this paper treats the topic more systematically and comprehensively; such comprehensive consideration of the driver's state contributes to the safety of both driver and passengers.
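As a concrete illustration of the eye-state branch, the eye aspect ratio (EAR) used for camera-based drowsiness detection in [5] can be computed from six eye landmarks per frame. The Python sketch below is a minimal, hypothetical implementation, not the paper's method: the landmark ordering follows the common p1..p6 convention, and the 0.25 threshold and 48-frame window are illustrative assumptions rather than values taken from this paper.

```python
# Minimal sketch of camera-based eye-state monitoring via the eye aspect
# ratio (EAR). Assumes an upstream face-landmark detector (e.g. dlib or
# MediaPipe) supplies six (x, y) landmarks per eye in the usual p1..p6
# order; the 0.25 threshold and 48-frame window are illustrative
# assumptions, not values taken from this paper.
import numpy as np


def eye_aspect_ratio(eye: np.ndarray) -> float:
    """EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|) for a (6, 2) landmark array."""
    v1 = np.linalg.norm(eye[1] - eye[5])  # vertical span p2-p6
    v2 = np.linalg.norm(eye[2] - eye[4])  # vertical span p3-p5
    h = np.linalg.norm(eye[0] - eye[3])   # horizontal span p1-p4
    return (v1 + v2) / (2.0 * h)


class DrowsinessMonitor:
    """Flags drowsiness when EAR stays below a threshold for N consecutive frames."""

    def __init__(self, ear_threshold: float = 0.25, consec_frames: int = 48):
        self.ear_threshold = ear_threshold
        self.consec_frames = consec_frames
        self._below = 0  # consecutive frames with eyes judged closed

    def update(self, left_eye: np.ndarray, right_eye: np.ndarray) -> bool:
        """Feed one frame's eye landmarks; returns True when drowsiness is flagged."""
        ear = (eye_aspect_ratio(left_eye) + eye_aspect_ratio(right_eye)) / 2.0
        self._below = self._below + 1 if ear < self.ear_threshold else 0
        return self._below >= self.consec_frames
```

A camera pipeline would call update() once per frame with the detected landmark arrays; averaging the two eyes makes the ratio more robust to head pose, and the consecutive-frame hysteresis distinguishes a sustained eye closure from an ordinary blink.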

Keywords:

Driver's hazardous state detection in human-computer interaction of automotive cockpits


References

[1]. Cai M, Wang W. Summary of research on the interactive design of automobile intelligent cockpit. Packaging Engineering, 2023, 44(06), 430-440.

[2]. World Health Organization. Road Traffic Injuries. Available online: https://www.who.int/news-room/fact-sheets/detail/road-traffic-injuries (accessed on 8 June 2023).

[3]. Liu S, Wang X, Ji H, Wang L, Hou Z. A novel driver abnormal behaviour recognition and analysis strategy and its application in a practical vehicle. Symmetry, 2022, 14(10), 1956.

[4]. Rahman H, Ahmed M U, Barua S, Funk P, Begum S. Vision-based driver’s cognitive load classification considering eye movement using machine learning and deep learning. Sensors, 2021, 21(23), 8019.

[5]. Dewi C, Chen R-C, Chang C-W, Wu S-H, Jiang X, Yu H. Eye aspect ratio for real-time drowsiness detection to improve driver safety. Electronics, 2022, 11(19), 3183.

[6]. Yan X, He J, Wu G, Zhang C, Wang C. A proactive recognition system for detecting commercial vehicle driver’s distracted behaviour. Sensors, 2022, 22(6), 2373.

[7]. Agrawal U, Giripunje S, Bajaj P. Emotion and gesture recognition with soft computing tool for driver’s assistance system in human-centered transportation. IEEE International Conference on Systems, Man, and Cybernetics, Manchester, UK, 2013, pp. 4612-4616.

[8]. Ali M, Mosa A H, Machot F A, Kyamakya K. Emotion recognition involving physiological and speech signals: A comprehensive review. Recent Advances in Nonlinear Dynamics and Synchronization: With Selected Applications in Electrical Engineering, Neurocomputing, and Transportation, 2018, 287-302.

[9]. Wang Y, Ding X, Yuan G, Fu X. Dual-cameras-based driver’s eye gaze tracking system with non-linear gaze point refinement. Sensors, 2022, 22(6), 2326.

[10]. Ancilin J, Milton A. Improved speech emotion recognition with Mel frequency magnitude coefficient. Applied Acoustics, 2021, 179, 108046.


Cite this article

Zhang, X. (2024). Driver's hazardous state detection in human-computer interaction of automotive cockpits. Applied and Computational Engineering, 31, 64-71.

Data availability

The datasets used and/or analyzed during the current study are available from the authors upon reasonable request.

Disclaimer/Publisher's Note

The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of EWA Publishing and/or the editor(s). EWA Publishing and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

About volume

Volume title: Proceedings of the 2023 International Conference on Machine Learning and Automation

ISBN: 978-1-83558-287-9 (Print) / 978-1-83558-288-6 (Online)
Editor: Mustafa İSTANBULLU
Conference website: https://2023.confmla.org/
Conference date: 18 October 2023
Series: Applied and Computational Engineering
Volume number: Vol.31
ISSN: 2755-2721 (Print) / 2755-273X (Online)

© 2024 by the author(s). Licensee EWA Publishing, Oxford, UK. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license. Authors who publish in this series agree to the following terms:
1. Authors retain copyright and grant the series right of first publication with the work simultaneously licensed under a Creative Commons Attribution License that allows others to share the work with an acknowledgment of the work's authorship and initial publication in this series.
2. Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the series's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgment of its initial publication in this series.
3. Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work (See Open access policy for details).
