
Research and Application Analysis of Global Path Planning Method Based on Radar and Vision in Robot SLAM
1 Information School, North China University of Technology, Beijing, China
* Author to whom correspondence should be addressed.
Abstract
In recent years, the production of robots has grown rapidly, reaching into nearly every part of society, and many industries have begun to introduce robots to improve development and production. At the same time, autonomous driving is a promising research direction: the number of autonomous vehicles is expected to reach 8 million by 2030, and autonomous passenger cars are projected to account for about 13% of total passenger miles. Autonomous driving is also expected to bring many benefits, such as freeing up an average of 50 extra minutes per day for drivers and potentially reducing accidents by more than 90%. For both robots and self-driving cars, the most critical goal is to ensure that the target task can be completed safely and quickly, and simultaneous localization and mapping (SLAM) technology helps robots achieve this goal. This paper introduces laser SLAM, visual SLAM, and laser-visual fusion SLAM algorithms, and then presents several related applications. Finally, it argues that future SLAM development should pay more attention to multi-sensor fusion algorithms.
Keywords
Simultaneous localization and mapping, Autonomous driving, Laser radar, Cameras.
Cite this article
Lan, T. (2024). Research and Application Analysis of Global Path Planning Method Based on Radar and Vision in Robot SLAM. Applied and Computational Engineering, 112, 8-14.
Data availability
The datasets used and/or analyzed during the current study are available from the authors upon reasonable request.
Disclaimer/Publisher's Note
The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of EWA Publishing and/or the editor(s). EWA Publishing and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
About volume
Volume title: Proceedings of the 5th International Conference on Signal Processing and Machine Learning
© 2024 by the author(s). Licensee EWA Publishing, Oxford, UK. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license. Authors who publish in this series agree to the following terms:
1. Authors retain copyright and grant the series right of first publication, with the work simultaneously licensed under a Creative Commons Attribution License that allows others to share the work with an acknowledgment of the work's authorship and initial publication in this series.
2. Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the series's published version of the work (e.g., posting it to an institutional repository or publishing it in a book), with an acknowledgment of its initial publication in this series.
3. Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their websites) prior to and during the submission process, as this can lead to productive exchanges as well as earlier and greater citation of published work (see the Open access policy for details).