1. Introduction
With the rapid progress of intelligent systems, Simultaneous Localization and Mapping (SLAM) has become a core technology in robotics, autonomous driving, mobile-device navigation, and related fields. SLAM enables intelligent systems to navigate autonomously and build accurate maps in unknown environments, and is therefore essential for achieving intelligent, automated spatial cognition and decision-making.
Pure vision SLAM schemes rely on sequences of environmental images captured by monocular or stereo cameras, and use image processing and computer vision techniques to estimate the camera's motion trajectory and construct a map of the environment. Despite their cost-effectiveness and ease of integration, pure visual SLAM schemes are sensitive to illumination changes, consume considerable computational resources, and adapt poorly to dynamic environments. LiDAR sensors provide an effective complement through their high-precision 3D environment sensing capability. By transmitting and receiving laser pulses, LiDAR generates precise point clouds of the environment that are independent of ambient light, high in resolution, and usable around the clock, making it particularly suitable for applications such as autonomous driving and fine-grained 3D mapping that demand high accuracy and stability.
The main objective of this study is to propose and validate a SLAM scheme that fuses the advantages of pure vision SLAM and LiDAR sensors, with the following specific goals: to significantly improve the robustness of the SLAM system under varying illumination and in texture-poor environments; to enhance the system's localization accuracy and the fidelity of the constructed map; to achieve efficient real-time SLAM processing on resource-constrained devices; and to promote innovative applications of SLAM technology in a wider range of scenarios such as autonomous driving, indoor navigation, and spatial awareness for mobile devices. This work not only provides new perspectives for academic research in the field of SLAM, but also offers practical guidance for technology integration, system optimization, and innovative applications.
2. Single-sensor SLAM
2.1. Visual SLAM
Visual SLAM leverages cameras to capture sequences of images, which are analyzed to estimate the camera’s trajectory and build environmental maps. This technique involves several key stages: image preprocessing to enhance image quality, feature extraction to identify and track distinct points, and motion estimation to calculate the camera’s movement based on these features [1].
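As an illustration of these stages, the following Python sketch uses OpenCV's ORB detector to extract and match features between two consecutive frames; the function name and parameter values are placeholders rather than part of any specific system described here.

```python
# Minimal sketch of a visual SLAM front end: detect and match ORB features
# between two consecutive frames. Parameters are illustrative.
import cv2

def match_features(path_prev, path_curr, n_features=1000):
    img_prev = cv2.imread(path_prev, cv2.IMREAD_GRAYSCALE)
    img_curr = cv2.imread(path_curr, cv2.IMREAD_GRAYSCALE)

    # Preprocessing: histogram equalization to reduce sensitivity to lighting.
    img_prev = cv2.equalizeHist(img_prev)
    img_curr = cv2.equalizeHist(img_curr)

    # Feature extraction: ORB keypoints and binary descriptors.
    orb = cv2.ORB_create(nfeatures=n_features)
    kp_prev, des_prev = orb.detectAndCompute(img_prev, None)
    kp_curr, des_curr = orb.detectAndCompute(img_curr, None)

    # Feature tracking: brute-force Hamming matching with cross-check.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_prev, des_curr), key=lambda m: m.distance)
    return kp_prev, kp_curr, matches
```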
Visual SLAM offers several advantages, such as cost-effectiveness and ease of integration into various applications. Cameras are relatively inexpensive and widely available, making visual SLAM an attractive option for many robotic systems [2]. Additionally, visual SLAM systems can provide detailed visual information about the environment, which is essential for tasks such as navigation and mapping. However, the technology also has limitations. It is highly sensitive to changes in lighting conditions, which can affect image quality and, consequently, the accuracy of feature extraction and motion estimation. This sensitivity can lead to inaccuracies in map construction, particularly in environments with poor lighting or dynamic elements [3].
To address these challenges, recent advancements in visual SLAM focus on integrating additional sensors, such as inertial measurement units (IMUs) and LiDAR, to complement the visual data. IMUs provide motion information that can support visual odometry, while LiDAR offers precise depth measurements to enhance map accuracy. These integrations aim to improve the robustness and accuracy of visual SLAM systems, making them more effective in diverse environments and applications [4].
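As a minimal illustration of how an IMU can support visual odometry, the sketch below integrates gyroscope readings into an orientation prior between camera frames; the sample rate and angular-velocity values are purely illustrative assumptions.

```python
# Sketch of how IMU gyroscope readings can bridge the gap between camera
# frames: integrate angular velocity into an orientation estimate that
# visual odometry can use as a motion prior.
import numpy as np
from scipy.spatial.transform import Rotation as R

def propagate_orientation(R_wb, gyro_samples, dt):
    """R_wb: body-to-world rotation; gyro_samples: Nx3 rad/s in the body frame; dt: sample period."""
    for omega in gyro_samples:
        # Compose with the small rotation accumulated over one sample interval.
        R_wb = R_wb * R.from_rotvec(np.asarray(omega) * dt)
    return R_wb

# Example: 10 gyro samples at 100 Hz with a constant yaw rate of 0.1 rad/s.
R1 = propagate_orientation(R.identity(), [[0.0, 0.0, 0.1]] * 10, dt=0.01)
print(R1.as_euler("xyz"))
```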
2.2. LiDAR SLAM
Light Detection and Ranging (LiDAR) sensors provide high-resolution 3D environmental maps by measuring the time it takes for laser pulses to reflect off surfaces and return to the sensor. This technology is particularly advantageous for creating detailed and accurate maps, which is crucial for applications requiring precise navigation and obstacle detection [2].
LiDAR operates independently of lighting conditions, which makes it a valuable complement to visual SLAM, especially in environments where visual data alone may be insufficient. By generating a dense point cloud of the scanned area, LiDAR provides detailed spatial information that enhances the accuracy of SLAM systems [1].
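The scan-alignment idea underlying many LiDAR SLAM pipelines can be illustrated with a minimal point-to-point ICP, assuming two roughly overlapping scans stored as Nx3 NumPy arrays; this is a simplified sketch, not a production registration routine.

```python
# Minimal point-to-point ICP sketch for aligning two LiDAR scans.
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iterations=20):
    """Estimate rotation R and translation t that align source onto target."""
    R_est, t_est = np.eye(3), np.zeros(3)
    src = source.copy()
    tree = cKDTree(target)
    for _ in range(iterations):
        # Data association: nearest target point for every source point.
        _, idx = tree.query(src)
        matched = target[idx]
        # Closed-form alignment of the matched sets (Kabsch / SVD).
        mu_s, mu_t = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_t)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R_step = Vt.T @ D @ U.T
        t_step = mu_t - R_step @ mu_s
        # Apply the incremental transform and accumulate the total estimate.
        src = src @ R_step.T + t_step
        R_est, t_est = R_step @ R_est, R_step @ t_est + t_step
    return R_est, t_est
```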
However, LiDAR technology also presents challenges, including high costs and extensive data-processing requirements. The large volume of data generated by LiDAR sensors requires substantial computational resources for real-time processing [3]. In addition, environmental factors such as adverse weather or highly reflective surfaces can degrade LiDAR performance. Research is ongoing to address these issues and improve the practicality and affordability of LiDAR technology for broader applications [5]. In agricultural robot navigation and environment mapping in particular, LiDAR SLAM shows great potential, but challenges remain in cost-effectiveness, data processing, and environmental adaptability. Future research should further explore these technologies in real-world settings and develop more cost-effective, efficient, and reliable solutions to advance smart agriculture.
3. Multi-sensor SLAM
Multi-sensor SLAM builds on the pure vision pipeline. Pure vision SLAM relies on cameras to capture image sequences, which are then processed to estimate the camera's motion and create a map of the environment. The process starts with image preprocessing, which reduces noise and improves contrast [1]. Feature extraction then identifies and tracks key points or patterns in the images, which are essential for determining the camera's movement.
Motion estimation algorithms analyze the movement of these features across successive frames to calculate the camera’s trajectory, which is used to build a map of the environment. This map is continuously updated as new images are processed, allowing for real-time navigation and mapping. Despite its advantages, such as lower cost and straightforward integration, pure vision SLAM is limited by its sensitivity to environmental conditions, such as lighting variations and the presence of repetitive patterns [2].
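The motion-estimation step described above can be sketched as follows: the relative camera rotation and up-to-scale translation are recovered from matched keypoints using OpenCV. The intrinsic matrix shown is a hypothetical calibration, not a value from any system discussed here.

```python
# Sketch of monocular motion estimation: recover relative rotation and
# (up-to-scale) translation from matched keypoints of two frames.
import numpy as np
import cv2

def estimate_motion(pts_prev, pts_curr, K):
    """pts_prev/pts_curr: Nx2 float arrays of matched pixel coordinates."""
    E, inliers = cv2.findEssentialMat(
        pts_prev, pts_curr, K, method=cv2.RANSAC, prob=0.999, threshold=1.0)
    # Decompose E and pick the physically valid rotation/translation pair.
    _, R, t, _ = cv2.recoverPose(E, pts_prev, pts_curr, K, mask=inliers)
    return R, t  # t is only known up to scale for a monocular camera

# Hypothetical pinhole intrinsics (fx, fy, cx, cy).
K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])
```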
The computational demands of pure vision SLAM can also be significant, requiring substantial processing power to analyze large volumes of image data in real-time. This limitation can restrict the implementation of visual SLAM on devices with limited resources. Ongoing research aims to address these challenges by optimizing algorithms and integrating additional sensors, such as LiDAR or IMUs, to enhance the performance of SLAM systems in various practical applications [4].
3.1. Radar-inertial navigation system SLAM
Radar-Inertial Navigation System (INS) SLAM integrates radar sensing with inertial navigation and offers strong robustness for simultaneous localization and mapping across diverse environments [4]. By combining radar's range and velocity measurements with the INS's self-contained navigation, it improves positional accuracy and mapping depth. The approach adapts well to varying light and weather conditions and continues to operate in GNSS-denied areas.
Long-range detection capabilities enable radar-INS SLAM to map distant objects with precision, broadening the navigational horizon. Its robustness ensures reliability even under challenging circumstances, making it ideal for critical applications. However, the high cost and complex integration of advanced radar systems, along with substantial computational demands for real-time data processing, pose challenges.
Moreover, certain materials and severe weather conditions can interfere with radar performance, requiring careful consideration. Nonetheless, radar-INS SLAM stands as a powerful tool for navigating and mapping in complex scenarios, where traditional SLAM methods may falter. Its independence from GPS, enhanced mapping capabilities, and all-weather performance underscore its value in a wide range of applications.
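One simplified way to picture radar-INS fusion is an extended Kalman filter in which accelerometer data drive the prediction and a radar range measurement to a landmark at a known position corrects the accumulated drift. The 2D sketch below follows this idea; the state layout and noise values are illustrative assumptions, not a description of any specific radar-INS system.

```python
# Simplified 2D EKF sketch of radar-INS fusion: the INS (accelerometer)
# drives the prediction, and a radar range measurement to a landmark at a
# known position corrects the drift. All noise values are illustrative.
import numpy as np

def predict(x, P, accel, dt, q=0.1):
    """State x = [px, py, vx, vy]; accel = acceleration expressed in the world frame."""
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], dtype=float)
    B = np.array([[0.5 * dt**2, 0],
                  [0, 0.5 * dt**2],
                  [dt, 0],
                  [0, dt]])
    x = F @ x + B @ np.asarray(accel)
    P = F @ P @ F.T + q * np.eye(4)
    return x, P

def update_range(x, P, r_meas, landmark, r_std=0.5):
    """Correct the state with a radar range measurement to a known landmark."""
    dx, dy = x[0] - landmark[0], x[1] - landmark[1]
    r_pred = np.hypot(dx, dy)
    H = np.array([[dx / r_pred, dy / r_pred, 0.0, 0.0]])  # Jacobian of the range
    S = H @ P @ H.T + r_std**2
    K = P @ H.T / S
    x = x + (K * (r_meas - r_pred)).ravel()
    P = (np.eye(4) - K @ H) @ P
    return x, P
```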
3.2. LiDAR-visual-IMU SLAM
LiDAR-visual-IMU SLAM is a multi-sensor fusion approach that enables robust localization and high-precision map construction in complex environments. It integrates the accurate ranging capability of LiDAR, the visual detail provided by cameras, and the motion tracking of IMUs, making it well suited to applications such as agricultural robots. This approach is especially effective when precision is required in unknown or changing surroundings [3].
In this scheme, LiDAR provides detailed distance and shape information about the environment, while cameras exploit texture information for feature extraction and recognition. The IMU continuously measures the robot's motion, and the fused data keep the SLAM system accurate and robust even under varying lighting, fast movement, or sparse texture. The system's strengths lie in real-time six-degree-of-freedom pose estimation, dense point cloud generation, and the ability to refine trajectory and map accuracy through loop-closure detection.
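A common coupling step in such systems is to project LiDAR points into the camera image so that visual features can be assigned metric depth. The sketch below illustrates this projection under an assumed (hypothetical) extrinsic calibration R, t and intrinsic matrix K.

```python
# Sketch of one LiDAR-visual coupling step: project LiDAR points into the
# camera image so visual features can be assigned metric depth.
import numpy as np

def project_lidar_to_image(points_lidar, R, t, K, width=640, height=480):
    """points_lidar: Nx3 in the LiDAR frame; returns pixel coordinates and depths."""
    # Transform points into the camera frame using the extrinsic calibration.
    pts_cam = points_lidar @ R.T + t
    # Keep only points in front of the camera.
    pts_cam = pts_cam[pts_cam[:, 2] > 0.1]
    # Pinhole projection with the intrinsic matrix K.
    uvw = pts_cam @ K.T
    uv = uvw[:, :2] / uvw[:, 2:3]
    # Discard projections that fall outside the image.
    inside = (uv[:, 0] >= 0) & (uv[:, 0] < width) & \
             (uv[:, 1] >= 0) & (uv[:, 1] < height)
    return uv[inside], pts_cam[inside, 2]  # pixel coordinates and depths
```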
However, the system faces challenges, primarily the increased cost due to the integration of multiple sensors, which can be a barrier for widespread adoption. The complexity of fusing data from various sensors requires sophisticated algorithms for efficient data processing and real-time performance. Additionally, achieving accurate sensor calibration and robust time synchronization presents technical difficulties that need to be addressed.
Despite these challenges, the LiDAR-visual-IMU SLAM scheme has promising applications [4]. As the technology matures and costs decrease, it is expected to play an increasingly important role in smart agriculture, promoting automation and intelligence in agricultural production and improving efficiency and precision while reducing operational costs. This advancement is expected to provide substantial technical support for sustainable agricultural development.
4. SLAM technology optimization and suggestions
4.1. Fusion SLAM
Multi-sensor fusion SLAM technology effectively improves positioning accuracy and map construction robustness in complex environments by integrating the advantages of different sensors such as LIDAR, cameras, and IMUs. In addition to these, integrated GPS provides global positioning information in outdoor environments, while fused radar and sonar sensors enhance the detection of specific obstacles. This fusion strategy not only enhances the adaptability of the system to dynamic scenes and texture-free environments but also improves the reliability and stability of the system through sensor redundancy. Future research will likely focus on developing more efficient fusion frameworks, leveraging deep learning to enhance feature extraction and environment understanding, and exploring multi-robot collaboration methods to reduce the perception burden on individual robots. As technology advances, it is expected that costs will decrease, allowing for a wider application of multi-sensor fusion SLAM solutions in areas such as smart agriculture.
Data fusion in SLAM systems involves integrating information from multiple sensors to enhance the accuracy and reliability of the system. The goal is to combine the strengths of different sensors to compensate for their limitations. For example, visual SLAM provides rich texture and color information but may struggle with lighting conditions and lack depth perception. LiDAR, on the other hand, offers precise distance measurements and operates independently of lighting, making it a valuable addition to visual SLAM systems [1].
There are two primary approaches to data fusion: tightly coupled and loosely coupled methods. Tightly coupled fusion integrates raw data from sensors at a low level, allowing for more detailed and accurate combinations. This approach can significantly improve the precision of the SLAM system but requires advanced algorithms and substantial computational power [2]. Loosely coupled fusion, on the other hand, combines processed data at a higher level, providing greater flexibility and ease of implementation but may not achieve the same level of accuracy [3].
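Loosely coupled fusion can be illustrated very simply: two independent pose estimates, for example one from visual odometry and one from LiDAR odometry, are combined by inverse-covariance weighting. The sketch below shows this for a 2D position estimate; the estimates and covariances are illustrative placeholders.

```python
# Sketch of loosely coupled fusion: combine two independent Gaussian position
# estimates by inverse-covariance (information-form) weighting.
import numpy as np

def fuse_estimates(x_vis, P_vis, x_lidar, P_lidar):
    """Fuse two Gaussian estimates (mean, covariance) of the same quantity."""
    info_vis, info_lidar = np.linalg.inv(P_vis), np.linalg.inv(P_lidar)
    P_fused = np.linalg.inv(info_vis + info_lidar)
    x_fused = P_fused @ (info_vis @ x_vis + info_lidar @ x_lidar)
    return x_fused, P_fused

# Example: the LiDAR estimate is more certain, so the result leans toward it.
x_v, P_v = np.array([1.0, 2.0]), np.diag([0.25, 0.25])
x_l, P_l = np.array([1.2, 1.9]), np.diag([0.04, 0.04])
print(fuse_estimates(x_v, P_v, x_l, P_l)[0])
```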
Recent advancements in SLAM technology have incorporated additional sensors such as IMUs, GPS, and radar into the data fusion framework. IMUs provide information on the robot’s orientation and acceleration, while GPS offers global positioning data. Radar can enhance obstacle detection capabilities. The integration of these sensors helps to address the limitations of individual technologies, resulting in more robust and adaptable SLAM systems [6]. Machine learning techniques are also being explored to optimize data fusion by analyzing large datasets to improve feature extraction and environmental understanding [7].
4.2. SLAM combined with machine learning
The combination of deep learning and SLAM provides a new approach to autonomous robot localization and map building. By integrating deep learning's powerful feature extraction and modeling capabilities into the SLAM system, the localization accuracy and environmental adaptability of the system can be effectively improved. Applications of deep learning in SLAM mainly focus on front-end tracking, back-end optimization, semantic mapping, and uncertainty estimation. For example, in front-end tracking, learned features and matching improve the accuracy and robustness of visual odometry; in back-end optimization, deep learning can help refine camera poses and scene structure, further improving localization accuracy [8].
In addition, deep learning can provide semantic-level environment understanding for SLAM systems: through semantic segmentation and object recognition, robots can better understand their surroundings and achieve smarter navigation and decision-making. The application of deep learning in SLAM still faces challenges, such as real-time performance, generalization ability, and dependence on large amounts of annotated data. Nevertheless, as research deepens and the technology matures, deep learning is expected to play an even more important role in SLAM and to push intelligent robotics to a higher level.
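One concrete use of semantic segmentation in a SLAM front end is to discard keypoints that fall on dynamic object classes before motion estimation. The sketch below assumes a per-pixel class-ID mask produced by any off-the-shelf segmentation network; the class IDs are hypothetical.

```python
# Sketch of semantic filtering for a SLAM front end: drop keypoints that land
# on dynamic classes (e.g. people, vehicles) before motion estimation.
import numpy as np

DYNAMIC_CLASS_IDS = np.array([7, 15])  # hypothetical IDs for "car" and "person"

def filter_dynamic_keypoints(keypoints, seg_mask):
    """keypoints: sequence of OpenCV KeyPoint objects; seg_mask: HxW class-ID array."""
    pts = np.array([kp.pt for kp in keypoints])                    # Nx2 pixel coordinates
    labels = seg_mask[pts[:, 1].astype(int), pts[:, 0].astype(int)]
    keep = ~np.isin(labels, DYNAMIC_CLASS_IDS)                     # drop dynamic classes
    return [kp for kp, k in zip(keypoints, keep) if k]
```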
5. Conclusion
The paper summarizes the significant advancements in SLAM Positioning and Navigation. These advancements have led to improvements in system accuracy and robustness, addressing key challenges in navigation and environmental mapping. The successful integration of these technologies with deep learning and other advanced algorithms has further enhanced robotic performance and expanded practical applications.
The review highlights the need for continued research to overcome existing limitations, such as high costs and data processing demands. Future research should focus on developing cost-effective solutions and enhancing sensor technologies to further advance robotics. The development of more affordable and efficient technologies, along with advancements in data fusion and machine learning, will likely play a crucial role in improving modern practices and promoting the adoption of intelligent robotics.
Authors' Contribution
All the authors contributed equally and their names were listed in alphabetical order.
References
[1]. Xie, D., Chen, L., Liu, L., Chen, L., & Wang, H. (2022). Actuators and sensors for application in agricultural robots: A review. Machines, 10(10), 913.
[2]. Fountas, S., Mylonas, N., Malounas, I., Rodias, E., Hellmann Santos, C., & Pekkeriet, E. (2020). Agricultural robotics for field operations. Sensors, 20(9), 2672.
[3]. Botta, A., Cavallone, P., Baglieri, L., Colucci, G., Tagliavini, L., & Quaglia, G. (2022). A review of robots, perception, and tasks in precision agriculture. Applied Mechanics, 3(3), 830-854.
[4]. Mahmud, M. S. A., Abidin, M. S. Z., Emmanuel, A. A., & Hasan, H. S. (2020). Robotics and automation in agriculture: present and future applications. Applications of Modelling and Simulation, 4, 130-140.
[5]. Gonzalez-de-Santos, P., Fernández, R., Sepúlveda, D., Navas, E., Emmi, L., & Armada, M. (2020). Field robots for intelligent farms—Inhering features from industry. Agronomy, 10(11), 1638.
[6]. Ghobadpour, A., Monsalve, G., Cardenas, A., & Mousazadeh, H. (2022). Off-road electric vehicles and autonomous robots in agricultural sector: trends, challenges, and opportunities. Vehicles, 4(3), 843-864.
[7]. Oliveira, L. F., Moreira, A. P., & Silva, M. F. (2021). Advances in agriculture robotics: A state-of-the-art review and challenges ahead. Robotics, 10(2), 52.
[8]. Taketomi, T., Uchiyama, H., & Ikeda, S. (2017). Visual SLAM algorithms: A survey from 2010 to 2016. IPSJ Transactions on Computer Vision and Applications, 9, 1-11.
Data availability
The datasets used and/or analyzed during the current study will be available from the authors upon reasonable request.