A Review of Multi-Sensor Fusion Techniques for Indoor Mobile Robot Navigation


Tianyi Shan 1,*
  • 1 School of Mechanical Engineering, Zhejiang University of Technology, Hangzhou, China
  • * Corresponding author: tianyidan929@gmail.com
Published on 26 November 2024 | https://doi.org/10.54254/2755-2721/80/2024CH0087
ACE Vol.80
ISSN (Print): 2755-273X
ISSN (Online): 2755-2721
ISBN (Print): 978-1-83558-561-0
ISBN (Online): 978-1-83558-562-7

Abstract

The application of multi-sensor fusion technology in indoor mobile robot navigation and localization has been increasingly gaining attention, especially in complex indoor environments where achieving high-precision autonomous navigation poses a significant challenge. This review summarizes various sensor fusion methods, including the combination of Light Detection and Ranging (LiDAR), Inertial Measurement Unit (IMU), Ultra-Wideband (UWB), and others, with a focus on discussing the progress of fusion algorithms such as Extended Kalman Filter (EKF) and Adaptive Monte Carlo Localization (AMCL) in improving navigation accuracy and stability. In addition, this paper explores the application of visual Simultaneous Localization and Mapping (SLAM) methods incorporating deep learning in indoor robot navigation. Finally, the main challenges currently faced by multi-sensor fusion technology in robot autonomous navigation are analyzed, and future research directions are proposed, aiming to provide valuable references for researchers in the field.

Keywords:

Multi-sensor fusion, indoor mobile robot, Extended Kalman Filter, Adaptive Monte Carlo Localization, visual SLAM.


1. Introduction

With the rapid development of modern technology, robotics has become a crucial force driving the progress of social productivity. Especially in the industrial and service sectors, robots have demonstrated immense potential [1]. Indoor mobile robots, as an important branch of robotics, have attracted increasing attention due to their applications in warehousing logistics, domestic services, and medical assistance [2]. However, the complexity and uncertainty of indoor environments pose significant challenges to the navigation and positioning of mobile robots. Achieving precise autonomous navigation in dynamic, complex indoor environments has therefore become a research focus in both academia and industry [3].

Multi-sensor information fusion technology is considered an effective solution to the problem of mobile robot navigation and positioning [4]. By integrating data from multiple sensors, such as LiDAR, Inertial Measurement Unit (IMU), and Ultra-Wideband (UWB), the robot's environmental perception and navigation accuracy can be significantly improved [5]. In complex indoor environments, single sensors are often limited by factors such as perspective and interference, making it difficult to meet the needs of autonomous navigation. Therefore, multi-sensor fusion has gradually become the mainstream direction in indoor mobile robot research [6].

In recent years, significant progress has been made in multi-sensor fusion technology based on algorithms such as Extended Kalman Filter (EKF) and Adaptive Monte Carlo Localization (AMCL). These methods effectively address the problems of low localization accuracy and insufficient path planning capabilities in indoor scenarios. In addition, with the development of deep learning technology, visual Simultaneous Localization and Mapping (SLAM) methods that integrate depth information have also been gradually applied to indoor navigation systems, enhancing the robot's environmental perception and autonomous decision-making capabilities [7].

This paper reviews the research progress of multi-sensor fusion in indoor mobile robot navigation and localization, focusing on different sensor fusion methods and their effectiveness in indoor environments. It also discusses the remaining challenges and future research directions, aiming to provide a reference for subsequent research [8].

2. Overview of multi-sensor fusion technology

2.1. Sensor types and characteristics

In autonomous robot navigation, various sensor types are utilized to gather environmental information, supporting path planning and localization. The need for multi-sensor fusion arises because single sensors often exhibit limitations in data accuracy, coverage, and adaptability to different environments. Common sensor types include monocular cameras, stereo cameras, Light Detection and Ranging (LiDAR), IMUs, ultrasonic, and infrared sensors.

2.1.1. Monocular cameras. Monocular cameras are widely used in robotic navigation due to their lightweight design, low cost, and ease of installation. They capture rich visual information, which is useful for object detection and classification. However, a major drawback of monocular cameras is their inability to obtain accurate depth information directly, limiting their capability in distance measurement and object size estimation. Moreover, their performance is highly dependent on lighting conditions; in low-light or visually cluttered environments, image-based perception becomes unreliable. Although recent advances in deep learning, such as convolutional neural networks (CNNs), have improved the ability of monocular cameras to support object recognition and obstacle avoidance, they still struggle to meet the real-time and accuracy demands of autonomous navigation [1].

2.1.2. Stereo cameras. Stereo cameras utilize two or more lenses to capture images from different angles and compute depth information based on the disparity principle, thereby providing three-dimensional environmental perception. This stereoscopic vision effectively overcomes the limitations of monocular cameras, enabling robots to plan paths more effectively in complex environments. However, the depth calculation requires considerable computational power, and the accuracy of stereo cameras can be significantly compromised in environments with poor lighting or featureless surfaces [5]. Despite these limitations, stereo vision remains an important sensing method in autonomous navigation, especially when combined with other sensors to provide comprehensive environmental data.
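
To make the disparity principle concrete, the following minimal Python sketch (with purely illustrative focal length, baseline, and disparity values) recovers the depth of a single rectified pixel correspondence as Z = f·B/d:

# Minimal sketch of disparity-based depth recovery; all values are illustrative.
# Assumes a rectified stereo pair with focal length f (in pixels) and baseline B (in metres).
def depth_from_disparity(disparity_px: float, focal_px: float, baseline_m: float) -> float:
    """Return depth Z = f * B / d for one pixel correspondence."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a valid depth estimate")
    return focal_px * baseline_m / disparity_px

# Example: f = 700 px, B = 0.12 m, disparity = 24 px  ->  Z = 3.5 m
print(depth_from_disparity(24.0, 700.0, 0.12))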

2.1.3. LiDAR. LiDAR is one of the most commonly used high-precision sensors in robot navigation. It emits laser pulses to measure distances to surrounding objects, constructing a real-time three-dimensional map of the environment. Unlike cameras, LiDAR is not affected by changes in lighting and provides precise distance and position data, making it particularly effective in indoor navigation and complex environments. However, the high cost and processing requirements of LiDAR limit its use in consumer-grade robots [4]. Recent research aims to lower costs and increase accessibility by integrating LiDAR with other low-cost sensors to create more affordable multi-sensor fusion systems [9].
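
As a small illustration of how 2D LiDAR data is typically consumed for mapping, the Python sketch below (with made-up beam angles and ranges) converts a scan of (angle, range) pairs into Cartesian points in the sensor frame; invalid returns are dropped:

import math

# Minimal sketch: convert a 2D LiDAR scan (angles in radians, ranges in metres)
# into Cartesian points in the sensor frame; the beam values are illustrative only.
def scan_to_points(angles, ranges):
    return [(r * math.cos(a), r * math.sin(a)) for a, r in zip(angles, ranges) if r > 0.0]

angles = [0.0, math.pi / 2, math.pi]   # three hypothetical beams
ranges = [2.0, 1.5, 0.0]               # a range of 0.0 marks an invalid return
print(scan_to_points(angles, ranges))  # [(2.0, 0.0), (~0.0, 1.5)]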

2.1.4. IMUs. IMUs measure angular velocity and linear acceleration, from which the robot's attitude (orientation) and motion state can be estimated, making them crucial for tracking the robot's movement during navigation. IMUs offer high-frequency data at low power consumption, providing position and orientation estimates in areas where GPS signals are unavailable, such as indoor environments. However, IMU data is prone to drift over time, leading to accumulated errors. Therefore, IMUs are often combined with other sensors to correct localization errors and enhance navigation accuracy [5].
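
The drift problem can be shown with a short simulation. Assuming a stationary robot whose gyroscope carries a small constant bias (the 0.01 rad/s value below is hypothetical), integrating the biased rate makes the heading error grow without bound:

# Minimal sketch of IMU drift: integrate a gyroscope reading that carries a
# small constant bias while the true angular rate is zero (robot stationary).
dt = 0.01         # 100 Hz IMU sampling
bias = 0.01       # rad/s, assumed gyro bias
heading = 0.0
for _ in range(60 * 100):          # one minute of integration
    measured_rate = 0.0 + bias     # true rate (0) plus bias
    heading += measured_rate * dt
print(f"Accumulated heading error after 60 s: {heading:.2f} rad")   # about 0.60 rad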

2.1.5. Ultrasonic and infrared sensors. These sensors are widely used for short-range obstacle detection in robots. They are characterized by their low cost, simple structure, and fast response, enabling real-time detection of obstacles around the robot. Ultrasonic sensors calculate distance by measuring the reflection time of sound waves, while infrared sensors utilize the reflective properties of infrared light. Despite their lower precision and range compared to LiDAR and stereo cameras, ultrasonic and infrared sensors are a practical choice for household robots, such as robotic vacuum cleaners, providing basic obstacle avoidance functionality [1].
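
The ranging principle of an ultrasonic sensor can be written down in a few lines; the speed-of-sound constant below assumes air at roughly room temperature:

# Minimal sketch of ultrasonic ranging: distance from the round-trip echo time.
SPEED_OF_SOUND = 343.0   # m/s in air at about 20 degrees C (assumed)

def ultrasonic_distance(echo_time_s: float) -> float:
    # The pulse travels to the obstacle and back, so the one-way distance is half.
    return SPEED_OF_SOUND * echo_time_s / 2.0

print(ultrasonic_distance(0.01))   # a 10 ms echo corresponds to about 1.7 m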

2.2. Advantages, disadvantages, and application scenarios

Each sensor type offers unique advantages and disadvantages, determining its suitability for various applications. For example, LiDAR provides high-precision 3D data and is well-suited for complex indoor navigation and path planning. However, its high cost and processing demands limit its adoption in consumer robots [4]. Monocular cameras are cost-effective and easy to implement, but they lack the ability to perceive depth information accurately. Stereo cameras fill this gap by offering depth perception but require significant computational resources and struggle in low-light conditions [5]. Ultrasonic and infrared sensors, although not as accurate or reliable as LiDAR and stereo cameras, are inexpensive and low-power, making them ideal for close-range obstacle detection in simple navigation tasks, such as those performed by household cleaning robots [1].

2.3. The necessity of multi-sensor fusion

The limitations of single-sensor navigation systems have led researchers to explore multi-sensor fusion techniques to compensate for these shortcomings. For instance, while LiDAR is highly accurate, its high cost makes it inaccessible for many applications; meanwhile, monocular cameras are affordable but lack depth information. Multi-sensor fusion leverages the strengths of different sensors by processing and integrating their data to provide a more accurate perception of the environment. This fusion approach significantly improves localization and path planning in complex environments [4]. For example, combining LiDAR with cameras allows for both structural and visual information of the surroundings to be captured, enhancing obstacle detection, avoidance accuracy, and overall autonomous navigation stability [9].
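
A very simple form of this complementarity is inverse-variance weighting of two independent estimates of the same quantity, for example a distance measured by LiDAR and the same distance estimated from a camera; the measurement values and variances below are assumed purely for illustration:

# Minimal sketch of fusing two independent distance estimates by inverse-variance
# weighting; the values are assumed, not taken from any cited work.
def fuse(z1, var1, z2, var2):
    w1, w2 = 1.0 / var1, 1.0 / var2
    fused = (w1 * z1 + w2 * z2) / (w1 + w2)
    fused_var = 1.0 / (w1 + w2)
    return fused, fused_var

lidar_range, lidar_var = 2.02, 0.01     # precise but expensive sensor
camera_range, camera_var = 2.30, 0.25   # cheap but noisy estimate
print(fuse(lidar_range, lidar_var, camera_range, camera_var))
# The fused estimate stays close to the LiDAR value and has lower variance than either input.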

3. Multi-sensor fusion algorithms

3.1. Common multi-sensor data fusion algorithms

To effectively fuse data from various sensors and improve real-time navigation, researchers have developed several fusion algorithms, including EKF, Particle Filter, Fuzzy Logic Systems, and Neural Networks. Each of these algorithms offers distinct benefits suited to specific environments and applications.

3.1.1. EKF. EKF is a widely used algorithm for multi-sensor fusion in nonlinear systems. It can process data from different sensors, such as IMUs, odometers, and electronic compasses, to enhance the robot's localization accuracy. EKF operates through prediction and update phases, estimating the robot's state based on sensor data in near real-time. However, the performance of EKF heavily depends on the accurate modeling of system noise and errors, as it is sensitive to sensor noise [2].
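
As a minimal sketch of those two phases (not the formulation of any cited system), the code below implements the generic EKF predict and update equations with NumPy; the motion model, measurement model, and noise covariances in the usage example are assumed for illustration:

import numpy as np

# Generic EKF steps: f/F are the motion model and its Jacobian, h/H the
# measurement model and its Jacobian, Q/R the process and measurement noise.
def ekf_predict(x, P, f, F, Q):
    return f(x), F @ P @ F.T + Q

def ekf_update(x_pred, P_pred, z, h, H, R):
    y = z - h(x_pred)                      # innovation
    S = H @ P_pred @ H.T + R               # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x_pred)) - K @ H) @ P_pred
    return x_new, P_new

# Hypothetical usage: 2D position state, a fixed odometry step as prediction,
# and a nonlinear range measurement to a beacon at the origin.
x, P = np.array([1.0, 0.0]), np.eye(2) * 0.5
Q, R = np.eye(2) * 0.01, np.array([[0.04]])
f, F = (lambda s: s + np.array([0.1, 0.0])), np.eye(2)
h = lambda s: np.array([np.hypot(s[0], s[1])])
x, P = ekf_predict(x, P, f, F, Q)
H = np.array([[x[0] / np.hypot(x[0], x[1]), x[1] / np.hypot(x[0], x[1])]])
x, P = ekf_update(x, P, np.array([1.15]), h, H, R)
print(x)   # position estimate nudged toward the measured range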

3.1.2. Particle filter. Particle Filter is a non-parametric Bayesian estimation method that is particularly suitable for nonlinear and non-Gaussian environments. It represents the robot's possible states using a large number of particles, each with a unique position hypothesis. As new sensor data is received, the Particle Filter adjusts the particle distribution to reflect the most likely state through a process of "weight updating" and "resampling." This method is highly flexible and robust, making it a valuable tool in SLAM applications in dynamic environments. However, the high computational cost of Particle Filter demands substantial hardware performance [7].
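
A single predict/update/resample cycle can be sketched in a few lines of Python; the 1D state, motion step, and measurement below are assumed purely for illustration:

import numpy as np

# Minimal sketch of one particle-filter step (weight update + resampling)
# for a 1D position estimate with assumed noise levels.
rng = np.random.default_rng(0)
N = 500
particles = rng.uniform(0.0, 10.0, size=N)          # initial position hypotheses

# Predict: propagate each particle through a noisy motion model (+1 m step).
particles += 1.0 + rng.normal(0.0, 0.1, size=N)

# Update: weight each particle by the likelihood of a range measurement z = 6.0 m.
z, sigma = 6.0, 0.3
weights = np.exp(-0.5 * ((particles - z) / sigma) ** 2)
weights /= weights.sum()

# Resample: draw a new particle set in proportion to the weights.
particles = particles[rng.choice(N, size=N, p=weights)]
print(f"Estimated position: {particles.mean():.2f} m")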

3.1.3. Fuzzy logic systems. Fuzzy logic addresses uncertainties in sensor data by converting them into fuzzy sets and applying "if-then" rules. Unlike EKF, fuzzy logic does not require an exact mathematical model but instead relies on rule-based reasoning to handle nonlinear problems in multi-sensor fusion. Studies have shown that combining fuzzy logic with neural networks can further enhance the flexibility and adaptability of data fusion, particularly when dealing with complex and uncertain sensor data [2].
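
The flavour of such rule-based reasoning can be conveyed with a toy example (the membership functions, rule base, and speed values are invented for illustration, not taken from any cited system): an obstacle-distance reading is fuzzified into "near", "medium", and "far", and the rules are combined by a weighted average to produce a speed command.

# Minimal sketch of a fuzzy if-then rule base mapping obstacle distance to speed.
def tri(x, a, b, c):
    # Triangular membership function rising over [a, b] and falling over [b, c].
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_speed(distance_m):
    near = tri(distance_m, 0.0, 0.2, 1.0)       # "distance is near"
    medium = tri(distance_m, 0.5, 1.5, 2.5)     # "distance is medium"
    far = tri(distance_m, 2.0, 3.0, 4.0)        # "distance is far"
    # Rules: near -> slow (0.1 m/s), medium -> moderate (0.4 m/s), far -> fast (0.8 m/s).
    weights, speeds = [near, medium, far], [0.1, 0.4, 0.8]
    total = sum(weights)
    return sum(w * s for w, s in zip(weights, speeds)) / total if total > 0 else 0.0

print(fuzzy_speed(0.8))   # partly "near", partly "medium" -> a cautious speed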

3.1.4. Neural networks and deep learning. In recent years, neural networks, especially deep learning models, have been widely used in multi-sensor data fusion. CNNs can extract features from camera and LiDAR data, and use multiple layers of nonlinear mapping to achieve environment perception and path planning. Neural networks trained on large-scale datasets can learn the complex relationships between multi-sensor data, improving decision-making in intricate environments. However, deep learning models require extensive computational resources and data for training, which can pose challenges in real-time applications [8].
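
As a sketch of what such a fusion network might look like (the branch sizes, the 360-beam scan, and the two-class output are arbitrary assumptions, not a published architecture), the PyTorch snippet below extracts features from a camera image and a LiDAR scan in separate branches and concatenates them before a shared decision head:

import torch
import torch.nn as nn

# Minimal late-fusion sketch: a 2D CNN branch for the camera image, a 1D CNN
# branch for the LiDAR scan, and a linear head over the concatenated features.
class FusionNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.camera_branch = nn.Sequential(
            nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())            # -> 8 features
        self.lidar_branch = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten())            # -> 8 features
        self.head = nn.Linear(16, 2)                          # e.g. "path clear" vs "obstacle"

    def forward(self, image, scan):
        fused = torch.cat([self.camera_branch(image), self.lidar_branch(scan)], dim=1)
        return self.head(fused)

net = FusionNet()
logits = net(torch.randn(1, 3, 64, 64), torch.randn(1, 1, 360))   # dummy image + 360-beam scan
print(logits.shape)   # torch.Size([1, 2])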

3.2. Algorithm comparison and performance evaluation

Each fusion algorithm has its own advantages and limitations. EKF handles mildly nonlinear sensor fusion by linearizing the system around the current estimate and can significantly enhance navigation accuracy, but it requires precise system and noise modeling and is sensitive to sensor noise [2]. Particle Filter excels in nonlinear and non-Gaussian environments, particularly in dynamic SLAM tasks, but its high computational complexity demands substantial hardware performance. Fuzzy logic systems offer flexibility in managing uncertainty and nonlinear data, especially when combined with neural networks; such neuro-fuzzy combinations handle complex fusion problems well but then inherit the need for large training datasets and considerable computational power. Deep learning models provide powerful nonlinear mapping capabilities and can learn high-level features from multi-source sensor data, yet their high computational cost and large training data requirements limit their applicability in real-time navigation systems [8]. Thus, the choice of fusion algorithm should be based on the specific navigation needs, environmental complexity, and hardware constraints.

4. Application of multi-sensor fusion in indoor mobile robot navigation

Multi-sensor fusion has been widely applied in indoor mobile robot navigation to enhance accuracy and stability. In a study by He Youxing, a home service robot navigation system that integrates LiDAR and visual sensors was proposed [9]. By fusing LiDAR's 3D distance information with the visual features captured by cameras, this system achieved efficient environmental perception and path planning, significantly improving the robot's autonomous navigation capability in indoor settings [9]. Similarly, the research by Zhang Shuliang et al. applied multi-sensor fusion technology to achieve high-precision indoor localization for mobile robots [10]. The fusion of data from multiple sensors helped to reduce the problem of accumulated errors in single-sensor localization, thereby enhancing the navigation performance [10]. These studies demonstrate that multi-sensor fusion technology not only enhances the robot's environmental perception capabilities but also effectively handles uncertainties in dynamic environments, providing a more robust autonomous navigation solution.

5. Current challenges and issues

5.1. Real-time processing and computational complexity

Multi-sensor fusion algorithms face significant challenges in terms of real-time processing and computational complexity. Although deep learning models and Particle Filters are capable of processing nonlinear and complex sensor data, they require substantial computational resources and processing time, which restricts their use in real-time navigation [7]. For instance, real-time SLAM tasks require the robot to complete environmental perception and path planning within milliseconds, and the high computational burden of multi-sensor fusion makes it difficult for conventional hardware to meet this requirement. Additionally, sensor data synchronization and error accumulation add to the difficulty of real-time fusion. Future research must focus on improving algorithm accuracy while optimizing computational efficiency to meet real-time application demands.

5.2. Sensor cost and integration

The high cost of precision sensors like LiDAR remains a major obstacle to their widespread adoption in consumer-grade robots [4]. To reduce the overall cost, researchers are exploring the integration of low-cost sensors (such as monocular cameras and ultrasonic sensors) with high-precision ones. However, integrating multiple sensors introduces challenges such as data redundancy, noise processing, and error correction. When the number of sensors increases, handling redundant data and performing real-time fusion become critical issues that must be addressed in the design of multi-sensor fusion systems.

6. Future trends and research directions

6.1. Integration of artificial intelligence and deep learning

Artificial intelligence (AI), especially deep learning, has great potential in multi-sensor fusion applications for robot navigation. AI can facilitate more intelligent sensor data processing and decision-making, allowing robots to adapt more effectively to complex and dynamic environments [8]. For example, CNNs and Recurrent Neural Networks (RNNs) can extract useful features from multi-source sensor data and adjust path planning strategies in real-time. As AI models become more efficient, they are expected to play an increasingly important role in multi-sensor fusion.

6.2. Development of new sensor technologies

The future development of low-cost, high-precision sensors will further expand the application of multi-sensor fusion in robot navigation. For instance, miniaturized LiDAR and emerging quantum sensors promise more accurate environmental perception, while 5G communication can provide the low-latency data exchange needed to coordinate robots with surrounding infrastructure. By integrating these new technologies, robots will be able to navigate autonomously in more complex environments and meet a wider range of application demands.

7. Conclusion

In complex operational environments, the limitations of relying on a single sensor have become increasingly apparent, prompting researchers to explore multi-sensor data fusion techniques that significantly enhance the stability and accuracy of navigation systems. The EKF, a widely applied multi-sensor fusion algorithm, effectively integrates data from various sensors, such as odometers, gyroscopes, and electronic compasses, significantly improving the robot's positioning accuracy and obstacle avoidance capabilities. Additionally, fuzzy neural networks demonstrate unique advantages in processing multi-sensor data in nonlinear and complex environments: by fuzzifying sensor information and leveraging the learning capabilities of neural networks, this approach enables efficient data fusion and precise system control. In the field of home service robots, sensor fusion technology combines data from LiDAR and visual sensors, allowing robots to more accurately perceive their surroundings, plan paths, and avoid obstacles, thereby enhancing operational efficiency and safety. As multi-sensor fusion technology continues to advance, it lays a solid foundation for the practical application of autonomous navigation in various types of robots, enabling them to operate more safely and efficiently in complex environments.


References

[1]. Wang Jingjing. (2019). Application Research of Multi-Sensor Information Fusion Technology in Robot Navigation. Electroacoustic Technology, (11).

[2]. Zhang Ziheng. (2021). Design and Implementation of an Indoor Robot Autonomous Exploration System Based on Multi-Sensor Fusion (Master's thesis, Nanjing University of Posts and Telecommunications).

[3]. Zhang Shuliang, Tan Xiangquan, & Wu Qingwen. (2021). Research on Indoor Mobile Robot Localization Based on Multi-Sensor Fusion Technology. Sensors and Microsystems, (08), 53-56.

[4]. Wang Yuchao. (2021). Research on Global Localization of Indoor Mobile Robots Based on Multi-Sensor Fusion (Master's thesis, Xihua University).

[5]. Ma Mucun. (2022). Research on Mobile Robot System Design and Navigation Technology Based on Multi-Sensor Fusion (Master's thesis, Hefei University of Technology).

[6]. Zheng Yuhang. (2023). Research on Robot Navigation and Positioning Technology Based on Multi-Sensor Fusion (Master's thesis, Anhui University of Engineering).

[7]. Pang Dashuai. (2022). Research on Navigation and Localization of Mobile Robots Based on Multi-Sensor Information Fusion (Master's thesis, Chongqing University of Posts and Telecommunications).

[8]. Shao Mingzhi, He Tao, Zhu Yongping, & Chen Wenchong. (2023). Research on Navigation and Positioning of Mobile Robots Based on Multi-Sensor Information Fusion. Machine Tool & Hydraulics, (05), 8-13.

[9]. He Youxing. (2021). Design and Implementation of a Home Service Robot Navigation System Based on Multi-Sensor Fusion (Master's thesis, Lanzhou University of Technology).

[10]. Haider, M. H., Wang, Z., Khan, A. A., Ali, H., Zheng, H., Usman, S., ... & Zhi, P. (2022). Robust mobile robot navigation in cluttered environments based on hybrid adaptive neuro-fuzzy inference and sensor fusion. Journal of King Saud University-Computer and Information Sciences, 34(10), 9060-9070.


Cite this article

Shan, T. (2024). A Review of Multi-Sensor Fusion Techniques for Indoor Mobile Robot Navigation. Applied and Computational Engineering, 80, 175-180.

Data availability

The datasets used and/or analyzed during the current study will be available from the authors upon reasonable request.


