Research on application method of intelligent driving technology based on monocular vision sensor
1 Chongqing Jiaotong University
* Author to whom correspondence should be addressed.
Abstract
With the development of driverless cars, intelligent driving technology is increasingly used in the automotive industry. The monocular vision sensor plays an indispensable role in intelligent driving because of its simple structure, low cost, and rich information output. This paper discusses and optimizes the application of the monocular vision sensor in intelligent driving. The basic principles and key technologies of the monocular vision sensor are described in detail. Regarding specific applications, the paper focuses on deep learning networks for monocular vision, multi-information fusion technology, and improved target detection and tracking algorithms. Through in-depth research and analysis, a series of optimization strategies based on the monocular vision sensor are proposed, such as a vehicle target detection method based on the Fast Region-based Convolutional Neural Network (Fast R-CNN) and an improved Scale-Invariant Feature Transform (SIFT) algorithm. Finally, the paper summarizes intelligent driving technology based on the monocular vision sensor and argues that the sensor will play an even more important role in future intelligent driving systems. Future research should focus on improving algorithmic accuracy, for example through end-to-end convolutional neural network fusion methods and deep multi-modal sensor fusion networks.
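To make the detection approach named above concrete, the following Python sketch shows inference with a pretrained two-stage region-based detector on a single monocular frame. It is a minimal illustration only: torchvision's Faster R-CNN is used here as a readily available stand-in for the paper's Fast R-CNN vehicle detector, and the image path, score threshold, and class filter are assumptions, not the authors' implementation.

    # Minimal sketch: two-stage region-based CNN detection on one monocular frame.
    # Assumption: torchvision's pretrained Faster R-CNN stands in for the paper's
    # Fast R-CNN vehicle detector; threshold and class ids are illustrative only.
    import torch
    import torchvision
    from torchvision.transforms.functional import to_tensor
    from PIL import Image

    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    model.eval()

    image = Image.open("frame.jpg").convert("RGB")  # hypothetical camera frame
    with torch.no_grad():
        outputs = model([to_tensor(image)])[0]  # dict of boxes, labels, scores

    for box, label, score in zip(outputs["boxes"], outputs["labels"], outputs["scores"]):
        if score >= 0.5 and label.item() in (3, 8):  # COCO ids: 3 = car, 8 = truck
            x1, y1, x2, y2 = box.tolist()
            print(f"vehicle ({x1:.0f},{y1:.0f})-({x2:.0f},{y2:.0f}) score={score:.2f}")

In a full pipeline such detections would feed the tracking stage; here the script simply prints vehicle boxes above the chosen confidence threshold.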
Keywords
Monocular vision sensor, intelligent driving, SLAM, deep learning network, multi-information fusion technology
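As a companion illustration for the SIFT-based matching mentioned in the abstract, the sketch below matches SIFT keypoints between two consecutive monocular frames using Lowe's ratio test, the standard front-end step in feature-based monocular SLAM. It uses stock OpenCV SIFT; the paper's improved SIFT variant is not reproduced here, and the frame filenames are hypothetical.

    # Minimal sketch: SIFT keypoint matching between consecutive monocular frames,
    # as used in the front end of feature-based visual SLAM.
    # Note: this is stock OpenCV SIFT, not the paper's improved variant.
    import cv2

    img1 = cv2.imread("frame_t0.png", cv2.IMREAD_GRAYSCALE)  # hypothetical frames
    img2 = cv2.imread("frame_t1.png", cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # Lowe's ratio test
    print(len(good), "putative correspondences between frames")

The surviving correspondences would typically be passed to a geometric verification step (e.g., essential-matrix estimation with RANSAC) to recover camera motion.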
Cite this article
Zhang, Z. (2024). Research on application method of intelligent driving technology based on monocular vision sensor. Theoretical and Natural Science, 52, 186-191.
Data availability
The datasets used and/or analyzed during the current study will be available from the authors upon reasonable request.
Disclaimer/Publisher's Note
The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of EWA Publishing and/or the editor(s). EWA Publishing and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
About volume
Volume title: Proceedings of the Quantum Machine Learning: Bridging Quantum Physics and Computational Simulations - CONFMPCS 2024
© 2024 by the author(s). Licensee EWA Publishing, Oxford, UK. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license. Authors who publish this series agree to the following terms:
1. Authors retain copyright and grant the series right of first publication with the work simultaneously licensed under a Creative Commons Attribution License that allows others to share the work with an acknowledgment of the work's authorship and initial publication in this series.
2. Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the series's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgment of its initial publication in this series.
3. Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work (see Open access policy for details).