1. Introduction
Simultaneous localization and mapping (SLAM) is one of the essential technologies in today's robotic systems. It enables a robot equipped with specific sensors to complete, without prior knowledge of the environment, a series of tasks such as determining its current position, estimating its current motion state, and building a map of the surrounding environment. At present, SLAM systems mainly use lidar and cameras as sensors, and SLAM is accordingly divided into two categories: laser SLAM and visual SLAM [1, 2].
Current optimization ideas for laser SLAM mainly include:
1. Adjusting the internal parameters of the algorithm framework to increase radar scanning efficiency and data density, so that the constructed map is clearer.
2. Improving the feature-point extraction algorithms and strategies applied after data acquisition, processing the feature points specifically for the robot's application scenario so as to obtain a feature-point set better suited to that scenario, which facilitates subsequent pose estimation and map construction.
3. Fusing several kinds of important information in the analysis and further optimizing loop closure detection.
4. Incorporating recent neural network techniques into SLAM to improve recognition accuracy while maintaining high performance.
Optimization ideas for visual SLAM mainly include:
1. Adding new object detection threads and feature-point extraction criteria to reduce extraction and identification errors.
2. Adding new feature-point extraction and analysis strategies, so that erroneous feature-point sets are reduced and a more accurate environment map can be built from the current data.
Optimization ideas for SLAM combining radar and vision include:
1. Jointly analyzing the data acquired by radar and camera for map construction, so as to obtain self-localization and environment maps with higher accuracy.
2. Acquiring radar and camera data simultaneously while keeping the two streams relatively independent, so that even in extreme environments where a single sensor fails and distorts its data, the final result is not affected.
Among the classic applications of SLAM, the most important is autonomous driving, which is extremely popular; in the foreseeable future more and more people will invest in this research. SLAM algorithms in this area can be roughly divided into single-sensor and multi-sensor approaches. Single-sensor SLAM mainly reduces sensor-induced misrecognition and increases the accuracy of the final result through extensive optimization and tuning of the algorithm. Multi-sensor SLAM obtains data from multiple sensors, integrates and unifies all of it, and outputs high-precision results. At the same time, multi-sensor SLAM can prevent the failure of a single sensor in extreme environments from making the system unreliable, improving overall system stability.
This paper first introduces the classification of SLAM by sensor type, the advantages and disadvantages of each category, and several classic SLAM algorithms. These classic algorithms are then described, and their strengths, weaknesses, and latest improvements are summarized. After that, several classic applications of SLAM, and the adjustments and optimizations made to SLAM algorithms in these applications, are discussed. Finally, some challenges and development directions of SLAM are summarized.
2. Technical overview
Current SLAM technology can be roughly divided into two categories according to the sensors used: laser SLAM, which uses lidar, and visual SLAM, which uses cameras.
Lidar offers high reliability, more mature technology than visual SLAM, intuitive mapping, and high precision. However, its detection range is limited, being determined mainly by the radar itself. Lidar also imposes structural requirements on installation, is more expensive than a camera, and achieving complete detection coverage requires a large number of radars. Visual SLAM has a simple structure, diverse installation options, few installation preconditions, and no sensor-imposed detection distance limit: it can directly sense everything within the camera's view, and the cost is low. However, it is strongly affected by ambient light; its accuracy degrades in strong-light environments, and it cannot work in darkness (no light). Moreover, its computational load is large, and considerable computation is needed to obtain results. Other SLAM approaches use both types of sensors. Such frameworks generally achieve higher accuracy and remain stable in environments where many single-sensor systems fail.
Classical laser SLAM algorithms include Cartographer, Lidar Odometry and Mapping in Real-time (LOAM), LeGO-LOAM, and LOAM Livox; LeGO-LOAM and LOAM Livox are optimizations of LOAM for specific scenarios. Classical visual SLAM algorithms are mainly Oriented FAST and Rotated BRIEF SLAM2 (ORB-SLAM2) and Semi-Direct Monocular Visual Odometry (SVO).
2.1. Classical laser SLAM algorithm
In laser SLAM, Cartographer is a graph-optimization-based laser SLAM system introduced by Google. It is mature overall, highly modular, frequently used in engineering, and performs well at mapping terrain. However, Cartographer consumes substantial resources, especially computing resources, and its error handling is complicated: if sensor data is of poor quality or abnormal, the error-handling and recovery mechanisms may not be intuitive. Also, because Cartographer relies on only one odometry source, mapping in indoor settings may lead to robot drift and incomplete maps. One recent optimization is to adjust the front-end and back-end parameters, increase the number of scan points to improve resolution, and increase the back-end computational constraints and the number of loop closure detections to achieve optimal construction of the global map [3].
LOAM is a relatively early SLAM framework aimed mainly at 3D problems; many later framework algorithms are improvements and optimizations of LOAM. LeGO-LOAM is one such improved, specialized algorithm. It is mainly used on ground vehicles, because the algorithm requires the radar to be mounted as horizontally as possible. Compared with LOAM, LeGO-LOAM changes the way feature points are extracted and adds back-end optimizations, so the maps it builds can be more complete. It also incorporates ground separation, point cloud segmentation, and an improved Levenberg-Marquardt (L-M) optimization [4]; a sketch of the ground-separation step is given after this section. LeGO-LOAM adopts a strategy of segmented scanning and segmented matching, which significantly reduces the computational load and improves efficiency, and it uses a multi-sensor fusion strategy that keeps it robust in complex environments. However, the algorithm requires high-precision lidar and depends heavily on the ground, which it needs as a reference plane. In a recent improvement, intensity information and height information are fused to reduce the feature loss caused by a single descriptor, and loop closure detection is adjusted and optimized accordingly [5].
LOAM Livox is a relatively new LOAM-based framework that applies LOAM to Livox solid-state lidar. The algorithm was created mainly to address the problems of solid-state lidar, such as a small viewing angle, uneven scanning patterns, and motion distortion; it achieves high positioning precision, and its running speed is greatly improved [6]. The newest LOAM-derived framework is KDD-LOAM, an improved algorithm built on LOAM that uses a recent descriptor based on a multi-task fully convolutional neural network. Its advantages are improved distance-measurement accuracy and a large precision advantage over the original LOAM and other SLAM algorithms; at the same time, accumulated error is reduced during operation, and memory is saved while quite high performance is maintained [7].
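To make LeGO-LOAM's ground-separation idea concrete, the following is a minimal sketch of the heuristic it applies before point cloud segmentation: two vertically adjacent points in a ring-organized scan are labelled ground when the segment between them is nearly horizontal. The array layout, function name, and threshold here are illustrative assumptions, not the published implementation.

```python
import numpy as np

def label_ground_points(cloud, angle_threshold_deg=10.0):
    """Label ground points in a ring-organized LiDAR scan.

    cloud: (n_rings, n_cols, 3) array of x, y, z per point, ordered with
    the lowest ring first. A vertically adjacent pair is treated as
    ground when the segment joining the two points is close to horizontal.
    """
    n_rings, n_cols, _ = cloud.shape
    ground = np.zeros((n_rings, n_cols), dtype=bool)
    for col in range(n_cols):
        for ring in range(n_rings - 1):
            dx, dy, dz = cloud[ring + 1, col] - cloud[ring, col]
            # Angle of the connecting segment above the horizontal plane
            angle = np.degrees(np.arctan2(dz, np.hypot(dx, dy)))
            if abs(angle) < angle_threshold_deg:
                ground[ring, col] = True
                ground[ring + 1, col] = True
    return ground
```

Points labelled ground this way can be excluded from edge-feature extraction and reserved for planar features, which is what makes the subsequent segmentation and two-step L-M optimization cheaper.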
2.2. Classic visual SLAM algorithm
In visual SLAM, ORB-SLAM2 is a complete open-source SLAM system for monocular, stereo, and RGB-D cameras that provides map reuse, loop detection, and relocalization. Its advantages are high precision, strong robustness, and open-source extensibility. Its disadvantages are a large computational demand, an inability to adapt to some extreme environments, a need for certain initial conditions, and storage and loading time that affects real-time performance. In a recent improvement, an independent object-detection thread is added, and the YOLOv5s network is used to detect dynamic targets; dynamic feature points located inside high-dynamic target boxes, and absent from low-dynamic target boxes, are rejected to prevent incomplete rejection of dynamic feature points. This greatly reduces trajectory estimation error in dynamic scenes [8]; a sketch of this feature-rejection step is given after this section.
SVO is a semi-direct visual odometry method that combines the advantages of the feature-point method (parallel tracking and mapping, keyframe extraction) and the direct method (speed and accuracy), and is mainly used on micro aerial vehicles, in UAV aerial photography, and in similar settings. Its advantages are fast running speed and a uniform distribution of keypoints. Its disadvantages are that tracking is easily lost in some extreme cases, the depth filter converges slowly, it relies heavily on accurate pose estimation, and it has no relocalization capability, so it cannot recover after tracking is lost. A recent, more robust SVO variant is DynPL-SVO. This method introduces a reprojection error parallel to line features into the cost function to make full use of the structural information of line features, and introduces a dynamic grid method to address the low robustness and accuracy of the SVO system caused by moving objects [9].
LSD is also a very important algorithm in visual SLAM, and LSD-SLAM is representative of the direct method. It avoids the step of feature extraction and matching and directly uses pixel information to solve for camera pose and the map. Its advantages are high precision, strong robustness, good real-time performance, and strong scalability, allowing map construction and localization to be completed in less time. Its disadvantages are high demands on image quality, making it prone to errors under noise, blur, and similar conditions; high demands on scenes, with problems arising in overly complex scenes that require additional processing; and high hardware requirements, needing considerable computing resources and memory. A recent improvement of LSD is the EM-LSD algorithm, which adds strategies for short-line rejection and approximate line-segment merging and can achieve high-quality line-feature extraction, solving the problems of numerous short line features, repeated detection, and long-line disconnection [10].
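As an illustration of the dynamic-feature rejection described above for the improved ORB-SLAM2, the sketch below drops ORB keypoints that fall inside detector-supplied bounding boxes. The interface is an assumption: the boxes would come from a separate YOLOv5s detection thread, which is not reproduced here, and the function name is hypothetical.

```python
import cv2

def extract_static_orb_features(gray_image, dynamic_boxes, n_features=1000):
    """Extract ORB features and discard those inside dynamic-object boxes.

    dynamic_boxes: list of (x1, y1, x2, y2) rectangles for high-dynamic
    targets, e.g. produced by a YOLOv5s detection thread.
    """
    orb = cv2.ORB_create(nfeatures=n_features)
    keypoints, descriptors = orb.detectAndCompute(gray_image, None)
    if descriptors is None:
        return [], None

    def is_static(kp):
        x, y = kp.pt
        return not any(x1 <= x <= x2 and y1 <= y <= y2
                       for (x1, y1, x2, y2) in dynamic_boxes)

    keep = [i for i, kp in enumerate(keypoints) if is_static(kp)]
    return [keypoints[i] for i in keep], descriptors[keep]
```

Only the surviving static features would then be passed to tracking and local mapping, so that moving objects do not corrupt pose estimation.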
2.3. Combine radar and vision sensor SLAM technology
Among SLAM technologies combining radar and vision, HVL-SLAM is an accurate and efficient lidar-monocular SLAM algorithm. It consists of three main components: a simple but effective feature-depth extraction module based on lidar segmentation and Delaunay triangulation, which provides depth information even when the lidar points are sparse (scenes where stable, high-precision depth cannot otherwise be achieved); a hybrid vision-lidar tracking module that uses both photometric errors and reprojection errors to provide robust and sufficiently accurate ego-motion estimates, so that position and attitude can be determined accurately (a sketch of this hybrid cost is given after this section); and a combined vision and lidar optimization module that further improves the accuracy of pose estimation each time a new keyframe is generated, while maintaining extremely high efficiency [11]. Switch-SLAM is a SLAM algorithm designed specifically to handle degeneration of a single sensor's observation model. To overcome the limitations of MAP-based sensor fusion, it introduces a switching-based sensor fusion method: the switching structure effectively prevents failure information from spreading through the system, enhancing the accuracy of detection results under degeneration and improving stability in most extreme environments. In addition, Switch-SLAM introduces a non-heuristic degeneration detection method, eliminating the need for heuristic tuning [12].
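To make the hybrid tracking module's cost more concrete, the following sketch evaluates a weighted sum of reprojection and photometric residuals for a candidate pose, with the 3D points already transformed into the current camera frame. The equal default weights, the array interfaces, and the absence of robust kernels are simplifying assumptions; the actual HVL-SLAM optimizer is more elaborate.

```python
import numpy as np

def hybrid_cost(pts_cam, pts_img, intens_ref, intens_cur, K,
                w_reproj=1.0, w_photo=1.0):
    """Weighted sum of reprojection and photometric tracking errors.

    pts_cam:   (N, 3) LiDAR-derived 3D points in the current camera frame
               (i.e. already transformed by the candidate pose)
    pts_img:   (N, 2) matched pixel observations of those points
    intens_*:  (N,) image intensities sampled at the point projections in
               the reference and current frames
    K:         (3, 3) camera intrinsic matrix
    """
    # Reprojection residual: pinhole projection vs. observed pixels
    proj = (K @ pts_cam.T).T
    proj = proj[:, :2] / proj[:, 2:3]
    reproj_err = np.sum((proj - pts_img) ** 2, axis=1)

    # Photometric residual: brightness constancy between the two frames
    photo_err = (intens_cur - intens_ref) ** 2

    return w_reproj * reproj_err.sum() + w_photo * photo_err.sum()
```

A pose solver would minimize this cost over the candidate transform, re-transforming the points and re-sampling intensities at each iteration.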
3. Classic application
SLAM has been studied extensively for autonomous driving, where the algorithm must satisfy many demanding conditions: high mapping precision, strong reliability, stability in extreme environments, high accuracy in localization and self-pose estimation, and high operating efficiency. One SLAM variant for driverless use is a visual SLAM based on ORB-SLAM2. The algorithm consists of three thread groups: tracking, local mapping, and loop closure detection. First, an adaptive feature extraction method extracts feature points from input images, and the tracking thread performs system initialization and local map tracking. The tracking thread then passes each new keyframe to the local mapping thread, which adjusts the map points and keyframes in the local map and performs local bundle adjustment (BA). When a loop closure is detected, the loop closure thread performs global BA on all keyframes and map points. Finally, the camera pose and a global map consisting of keyframes and map points are output; a skeleton of this three-thread layout is sketched below. The final effect is significant: both the tracking success rate and the positioning accuracy in turns are markedly improved [13].
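The three-thread layout just described can be sketched as producer-consumer queues. The skeleton below is illustrative only: the processing bodies are reduced to comments, and the frame representation is assumed to be a simple dictionary.

```python
import queue
import threading

keyframe_queue = queue.Queue()  # tracking -> local mapping
loop_queue = queue.Queue()      # local mapping -> loop closing

def tracking(frames):
    for frame in frames:
        # ... adaptive feature extraction, local map tracking ...
        if frame.get("is_keyframe"):
            keyframe_queue.put(frame)
    keyframe_queue.put(None)    # sentinel: no more frames

def local_mapping():
    while (kf := keyframe_queue.get()) is not None:
        # ... insert keyframe, adjust local map points, run local BA ...
        loop_queue.put(kf)
    loop_queue.put(None)

def loop_closing():
    while (kf := loop_queue.get()) is not None:
        # ... query place recognition; on a loop hit, run global BA ...
        pass

# Mapping and loop closing run asynchronously; tracking runs on the
# caller's thread so the camera pipeline stays real-time.
threads = [threading.Thread(target=local_mapping),
           threading.Thread(target=loop_closing)]
for t in threads:
    t.start()
```

Decoupling the threads this way is what lets tracking remain real-time while the heavier bundle adjustments run in the background.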
Another is a SLAM framework based on Lidar-IMU-Camera fusion, which aims to further improve mapping and localization accuracy in complex cases. To obtain more accurate attitude information, data from the LeGO-LOAM odometry is tightly coupled with data from the visual odometry using an error-state Kalman filter; a simplified fusion update is sketched below. A lightweight monocular visual odometry model combined with the LeGO-LOAM system is used to initialize the monocular vision. The visual odometry output serves as the initial value for the laser odometry, and a visual bag-of-words model is used for loop detection; the initial value is determined by the detection results, and the pose is further optimized by lidar loop closure detection, which further reduces the overall cumulative position error. A parallel operation scheme is adopted: if either the monocular vision system or the LeGO-LOAM system fails, the other subsystem continues to work, improving stability under sensor degradation. The algorithm achieves extremely high positioning accuracy: compared with LeGO-LOAM, the maximum positioning error is reduced from 1.51 m to 0.213 m, and the minimum error from 0.129 m to 0.002 m. It performs well in real-vehicle experiments [14].
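As a toy illustration of the filter-based coupling described above, the sketch below performs a single Kalman update that corrects a visual-odometry position prediction with a LiDAR-odometry position measurement. Reducing the error-state filter of [14] to a 3D position state with an identity observation model is a deliberate simplification for clarity.

```python
import numpy as np

def fuse_position(x_pred, P_pred, z_lidar, R_lidar):
    """One Kalman update: VO prediction corrected by LiDAR measurement.

    x_pred:  (3,) predicted position from visual odometry
    P_pred:  (3, 3) covariance of the prediction
    z_lidar: (3,) position measurement from LiDAR odometry
    R_lidar: (3, 3) measurement noise covariance
    """
    H = np.eye(3)                                # identity observation model
    S = H @ P_pred @ H.T + R_lidar               # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)          # Kalman gain
    x_new = x_pred + K @ (z_lidar - H @ x_pred)  # corrected position
    P_new = (np.eye(3) - K @ H) @ P_pred         # corrected covariance
    return x_new, P_new
```

In the full system the state would also carry attitude, velocity, and IMU biases, and the update would operate on the error state rather than on the position directly.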
SLAM can also be applied in the field of drones to help them complete various tasks. Drone applications usually involve large numbers of vehicles at low cost, so drones usually carry cheaper sensors. Because current cities contain many no-fly zones, and the ground has many obstacles and fast-moving objects, a ground-vehicle-assisted UAV scheme has been proposed. Since the drone must deliver packages to specified locations, it uses a monocular camera and an IMU to obtain data; in this method, the SVO visual SLAM algorithm helps the UAV find the user and the designated recipient mark in the target area. The ground vehicle is equipped with a single-line lidar for environment perception and uses Cartographer for map construction. Because Cartographer has global and local maps, back-end optimization, and low cumulative error, efficient map building and autonomous navigation can be achieved even with low-cost lidar [15].
4. Challenge
At present, laser SLAM and visual SLAM used alone for autonomous driving face serious problems: they cannot run reliably in some extreme environments, and data accuracy is greatly reduced. The lidar used by laser SLAM is generally expensive. Visual SLAM is relatively cheap, but some assumptions of classical methods such as the direct method (e.g., LSD) are difficult to satisfy in practice, and under monocular operation the feature-based method is prone to tracking failure. Currently, both laser SLAM and visual SLAM struggle to react in time to fast-moving objects, or lose tracking outright, which makes unmanned driving very risky. A single sensor also cannot guarantee stable operation in extreme environments: visual SLAM cannot acquire effective data in dim or strongly lit places, and radar's acquisition range can become too narrow in some cases, depending on its quality and characteristics. At the same time, some SLAM algorithms are too computationally complex, resulting in poor real-time performance: they cannot provide a complete environment map while the robot maintains its normal moving speed, so the final path planning cannot be completed smoothly, or the resulting path has large errors.
5. Conclusion
This paper has introduced some basic concepts of SLAM, summarized and compared several classic SLAM algorithms, and analyzed some SLAM-based applications. In today's society, mobile robots have become extremely common, and more and more people have begun to study autonomous driving technology; SLAM is a core technology for robotics and autonomous driving. At present, SLAM is divided into two main categories by sensor: laser SLAM using lidar, and visual SLAM using cameras. Recently, a number of SLAM algorithms combining lidar and camera have appeared; they inherit the advantages of both categories while ensuring stability in a variety of environments. Laser SLAM is largely based on LOAM, and many subsequent laser SLAM frameworks are improvements and optimizations of it; LeGO-LOAM, for example, is LOAM optimized for ground conditions. Visual SLAM, by contrast, comprises many different algorithms. The most classic and popular application of SLAM at present is autonomous driving, and within that domain multi-sensor fusion SLAM is the mainstream trend for the future.
References
[1]. Dewan, A., Kumar, A., Singh, H., Solanki, V. S., & Kaur, P. (2023) Advancement in SLAM Techniques and Their Diverse Applications. International Conference on System Modeling & Advancement in Research Trends (SMART), Moradabad, India, pp. 387-392, doi: 10.1109/SMART59791.2023.10428583.
[2]. Gaia, J., Orosco, E., Rossomando, F., & Soria, C. (2023) Mapping the Landscape of SLAM Research: A Review. IEEE Latin America Transactions, vol. 21, no. 12, pp. 1313-1336, Dec. 2023, doi: 10.1109/TLA.2023.10305240.
[3]. Liang, H., Li, Y., Guo, Q., & Yang, J. (2023) ROS2-based locator optimized autonomous navigation robot. International Conference on Robotics, Intelligent Control and Artificial Intelligence (RICAI), Hangzhou, China, pp. 127-130, doi: 10.1109/RICAI60863.2023.10489438.
[4]. Shan, T., & Englot, B. (2018) LeGO-LOAM: Lightweight and Ground-Optimized Lidar Odometry and Mapping on Variable Terrain. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, pp. 4758-4765, doi: 10.1109/IROS.2018.8594299.
[5]. Cheng, Y., Liu, Z., Luo, F., Liu, M., Li, X., & Zhu, J. (2023) With Fused Point Cloud Height and Intensity Information Improved the Loop Closure Detection for LeGO-LOAM. 4th International Conference on Computer Engineering and Intelligent Control (ICCEIC), Guangzhou, China, pp. 93-98, doi: 10.1109/ICCEIC60201.2023.10426652.
[6]. Lin, J., & Zhang, F. (2020) Loam livox: A fast, robust, high-precision LiDAR odometry and mapping package for LiDARs of small FoV. IEEE International Conference on Robotics and Automation (ICRA), Paris, France, pp. 3126-3131, doi: 10.1109/ICRA40945.2020.9197440.
[7]. Huang, R., Zhao, M., Chen, J., & Li, L. (2024) KDD-LOAM: Jointly Learned Keypoint Detector and Descriptors Assisted LiDAR Odometry and Mapping. IEEE International Conference on Robotics and Automation (ICRA), Yokohama, Japan, pp. 8559-8565, doi: 10.1109/ICRA57147.2024.10610557.
[8]. Wang, L., & Xu, Z. (2023) Improved Mapping Technique Based on ORB-SLAM2 in Dynamic Scenes. 2nd International Conference on Automation, Robotics and Computer Engineering (ICARCE), Wuhan, China, pp. 1-5, doi: 10.1109/ICARCE59252.2024.10492577.
[9]. Zhang, B., Ma, X., Ma, H. J., & Luo, C. (2024) DynPL-SVO: A Robust Stereo Visual Odometry for Dynamic Scenes. IEEE Transactions on Instrumentation and Measurement, vol. 73, pp. 1-10, Art no. 5006510, doi: 10.1109/TIM.2023.3348882.
[10]. Hu, C., Zhang, X., Li, K., Wu, K., & Dong, R. (2023) EM-LSD-Based Visual-Inertial Odometry With Point-Line Feature. IEEE Sensors Journal, vol. 23, no. 24, pp. 30794-30804, Dec. 2023, doi: 10.1109/JSEN.2023.3329524.
[11]. Wang, W., Wang, C., Liu, J., Su, X., Luo, B., & Zhang, C. (2024) HVL-SLAM: Hybrid Vision and LiDAR Fusion for SLAM. IEEE Transactions on Geoscience and Remote Sensing, doi: 10.1109/TGRS.2024.3432336.
[12]. Junwoon, L., et al. (2024) Switch-SLAM: Switching-Based LiDAR-Inertial-Visual SLAM for Degenerate Environments. IEEE Robotics and Automation Letters, vol. 9, no. 8, pp. 7270-7277, Aug. 2024, doi: 10.1109/LRA.2024.3421792.
[13]. Zhao, Z., Li, Y., Yang, C., Wang, W., & Xu, B. (2023) An Adaptive Feature Extraction Visual SLAM Method for Autonomous Driving. CAA International Conference on Vehicular Control and Intelligence (CVCI), Changsha, China, pp. 1-6, doi: 10.1109/CVCI59596.2023.10397445.
[14]. Zhao, Y., Liang, Y., Ma, Z., Guo, L., & Zhang, H. (2024) Localization and Mapping Algorithm Based on Lidar-IMU-Camera Fusion. Journal of Intelligent and Connected Vehicles, vol. 7, no. 2, pp. 97-107, June 2024, doi: 10.26599/JICV.2023.9210027.
[15]. Qin, H., Yan, X., & Li, J. (2023) Research on the Application of Low-Cost Aerial-Ground Delivery System Using UAV-UGV Joint. International Conference on Mechanical and Electronics Engineering (ICMEE), Xi'an, China, pp. 373-378, doi: 10.1109/ICMEE59781.2023.10525562.