1. Introduction
Initially, research on multi-sensor information fusion focused on military and aerospace applications. Later, with the development of microprocessors and algorithms, Kalman filtering and other data fusion algorithms emerged, leading to significant improvements in fusion techniques. During this period the technology was widely adopted in robotics, demonstrating its potential to improve system reliability and accuracy. Today, the introduction of machine learning and deep learning, driven by artificial intelligence, big data, and cloud computing, has enabled fusion algorithms to process more complex data and extract more useful information, and the research and application of these techniques in robotics has become one of the hot topics of the moment.
In multi-sensor information fusion, multiple sensors collect data independently, and data fusion algorithms integrate these data to remove redundant information, supplement missing data, and correct sensor errors. The fused data are then used in application scenarios, improving the accuracy of target detection and enhancing environment perception. This article therefore describes the operating logic, algorithms, and current main applications of multi-sensor information fusion technology in the field of robotics, and finally looks ahead to its future development directions.
2. Role analysis
2.1. The role and classification of commonly used sensors in robots
In robots, sensors are essential components: they are responsible for perceiving and understanding the surrounding environment and for acquiring relevant information to assist decision-making. Sensors can be configured according to the robot's needs and are mainly divided into visual, distance, tactile, inertial, position, environmental, sound, and magnetic sensors.
2.2. The principle of information fusion
Information fusion refers to the integration of information, data, or sensor outputs from multiple sources, which is advantageous in obtaining a more comprehensive and accurate understanding or making decisions compared to acquiring information from a single source. Its core purpose is to improve the reliability, accuracy, and comprehensiveness of information by integrating different data sources.
2.3. Role of multi-sensor information fusion technology in robots
Multi-sensor information fusion has long been a widely discussed topic in the field of robotics. By integrating data from different sensors, it enables robots to perceive their environment, identify objects, and plan paths more accurately. Specific applications have shown that multi-sensor information fusion enhances the stability of robots in complex environments while also improving their precision and safety in specific operations.
3. The logical structure
3.1. The overall logical structure
The multi-sensor information fusion technology mainly includes data acquisition, data preprocessing, feature extraction, information fusion, decision and inference, output and feedback.
Data collected by the different sensors are first denoised, calibrated, and standardized, and relevant features are extracted. The fused information is then analyzed to derive a final decision, which is output to the system so that its operation can be adjusted or optimized as needed. Depending on the application, different forms of information fusion are applied during data processing and decision-making.
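As a concrete illustration of this pipeline, the following Python sketch fuses two simulated range sensors through the acquisition, preprocessing, fusion, and decision stages; the sensor names, noise levels, and thresholds are illustrative assumptions, not values from any cited system.

```python
import numpy as np

# A minimal, self-contained sketch of the fusion pipeline described above,
# using two simulated range sensors that both measure the same distance.

CALIBRATION = {"lidar": 0.01, "ultrasonic": -0.05}      # per-sensor bias (assumed)
VARIANCE = {"lidar": 0.02 ** 2, "ultrasonic": 0.10 ** 2}

def acquire(true_distance=2.0):
    """Data acquisition: raw readings from the two simulated sensors."""
    return {"lidar": true_distance + np.random.normal(0, 0.02),
            "ultrasonic": true_distance + np.random.normal(0, 0.10)}

def preprocess(raw):
    """Preprocessing: remove each sensor's known calibration bias."""
    return {k: v - CALIBRATION[k] for k, v in raw.items()}

def fuse(clean):
    """Information fusion: inverse-variance weighting of the readings."""
    weights = {k: 1.0 / VARIANCE[k] for k in clean}
    return sum(weights[k] * clean[k] for k in clean) / sum(weights.values())

def decide(distance, stop_threshold=0.5):
    """Decision and inference: a simple obstacle-avoidance command."""
    return "stop" if distance < stop_threshold else "go"

print(decide(fuse(preprocess(acquire()))))   # output feeds back into the control loop
```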
3.2. Classification of information fusion in the logical structure
Information fusion is mainly divided into pixel-level fusion, feature-level fusion, and decision-level fusion [1]; Table 1 compares the three methods.
Pixel-level fusion integrates the raw data obtained after preprocessing into a comprehensive data layer, which provides rich detailed information, but the volume of data to process is huge and the computation is time-consuming.
Feature-level fusion integrates the useful information extracted from each sensor: features are first extracted from the data collected by each sensor and then fused in a common feature space. The fused features provide more compact information and improve processing efficiency, but the accuracy of the fused information is lower than that of pixel-level fusion.
Decision-level fusion makes a separate preliminary decision from each sensor's preprocessed data and then integrates all of the decision results (a toy example of this voting-style combination follows Table 1). It offers strong reliability and fault tolerance, but places high demands on preprocessing and yields lower precision [2].
Table 1. Comparison of the three categories of information fusion [2, 3].
 | Pixel-level fusion | Feature-level fusion | Decision-level fusion
Calculation quantity | large | middle | small
Fault tolerance | poor | middle | good
Information loss | small | middle | large
Accuracy | high | middle | low
Anti-interference ability | poor | middle | good
Algorithm difficulty | difficult | middle | easy
Performance requirements for sensors | large | middle | small
Real-time performance | poor | middle | good
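As a toy illustration of decision-level fusion, the sketch below combines the independent preliminary decisions of three hypothetical sensors by confidence-weighted voting; the sensor types and confidence values are assumed for the example only.

```python
# Toy decision-level fusion: each sensor makes its own preliminary decision,
# and the decisions are combined by confidence-weighted voting.

def fuse_decisions(decisions):
    """decisions: list of (label, confidence) pairs, one per sensor."""
    scores = {}
    for label, confidence in decisions:
        scores[label] = scores.get(label, 0.0) + confidence
    return max(scores, key=scores.get)

# Example: infrared sensor and motion detector agree, camera disagrees.
print(fuse_decisions([("obstacle", 0.7),        # infrared sensor
                      ("obstacle", 0.6),        # motion detector
                      ("no_obstacle", 0.8)]))   # camera -> fused result: "obstacle"
```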
4. The main classification
The three fusion methods described above have different characteristics and appear in different robotic applications: pixel-level fusion is mainly used with LiDAR and similar sensors, feature-level fusion with cameras and depth sensors, and decision-level fusion with infrared sensors and motion detectors. The appropriate fusion mode therefore has to be chosen according to the actual situation and specific needs.
4.1. Kalman filtering
Kalman filtering can fuse sensor data in linear systems, providing accurate state estimation of the robot through a prediction-update cycle. However, classical Kalman filtering applies only to linear Gaussian systems, whereas many of the systems encountered by robots in real motion are nonlinear and non-Gaussian [4]. To address such problems, the extended Kalman filter and the unscented Kalman filter have emerged.
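For the linear Gaussian case, a minimal one-dimensional Kalman filter can be sketched as follows; the models and noise parameters are illustrative assumptions rather than values from the cited works.

```python
# A minimal one-dimensional Kalman filter for a (nearly) static state observed
# by one noisy sensor; all noise values are illustrative assumptions.

def kalman_step(x, P, z, F=1.0, Q=0.01, H=1.0, R=0.1):
    """One predict/update cycle.
    x, P : previous state estimate and its variance
    z    : new sensor measurement
    F, Q : state-transition model and process-noise variance
    H, R : observation model and measurement-noise variance
    """
    # Predict
    x_pred = F * x
    P_pred = F * P * F + Q
    # Update
    K = P_pred * H / (H * P_pred * H + R)        # Kalman gain
    x_new = x_pred + K * (z - H * x_pred)
    P_new = (1.0 - K * H) * P_pred
    return x_new, P_new

x, P = 0.0, 1.0                                  # vague initial guess
for z in [0.9, 1.1, 1.0, 1.05]:                  # simulated sensor readings
    x, P = kalman_step(x, P, z)
print(x, P)                                      # estimate converges toward ~1.0
```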
The extended Kalman filter (EKF) fuses data from different sensors by linearizing the nonlinear system. First, the state and observation equations of the system are defined and their Jacobian matrices are computed, so that the EKF can linearize around the current state estimate and covariance matrix. Each time new sensor data arrive, the EKF adjusts the state estimate and covariance matrix in the update step: it takes the observations from all sensors as inputs and fuses them by computing the Kalman gain, improving the accuracy and robustness of the state estimate. In multi-sensor systems, the EKF can therefore effectively combine observations from different sensors to achieve more accurate state estimation and navigation.
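A compact sketch of one EKF update step is given below for a robot whose 2D position is observed through a nonlinear range measurement to a known beacon; the scenario, beacon position, and noise values are assumptions made for illustration, not the formulation of any cited work.

```python
import numpy as np

# Illustrative EKF update: a 2-D robot position corrected by a nonlinear
# range measurement to a known beacon.

def ekf_update(x, P, z, beacon, R):
    """x: state [px, py]; P: covariance; z: measured range to 'beacon'."""
    dx, dy = x[0] - beacon[0], x[1] - beacon[1]
    rng = np.hypot(dx, dy)                  # h(x): predicted range
    H = np.array([[dx / rng, dy / rng]])    # Jacobian of h(x), 1x2
    S = H @ P @ H.T + R                     # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain, 2x1
    x = x + (K * (z - rng)).flatten()       # corrected state
    P = (np.eye(2) - K @ H) @ P             # corrected covariance
    return x, P

x = np.array([1.0, 1.0])                    # predicted position
P = np.eye(2) * 0.5
x, P = ekf_update(x, P, z=2.9, beacon=np.array([3.0, 3.0]), R=np.array([[0.05]]))
print(x)
```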
The unscented Kalman filter (UKF) improves the accuracy of state estimation by fusing data from multiple sensors without linearization. A state model and an observation model of the system are first required. The UKF uses the unscented transform to generate a set of sigma points representing possible values of the state and propagates these points through the nonlinear functions. An update step then computes the Kalman gain from the combined data of the different sensors and adjusts the state estimate and covariance matrix. In multi-sensor fusion, the UKF can process observations from different sensors simultaneously, combining them to provide more accurate state estimates and higher system robustness.
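The core of the UKF is the unscented transform. The following sketch propagates a Gaussian position estimate through a nonlinear polar-coordinate sensor model; it is a generic illustration (a full UKF additionally wraps this transform in predict and update steps), and all numbers are assumed.

```python
import numpy as np

# Unscented transform: propagate sigma points through a nonlinear function
# instead of linearizing it, then recover the transformed mean and covariance.

def unscented_transform(mean, cov, f, kappa=1.0):
    n = mean.size
    L = np.linalg.cholesky((n + kappa) * cov)            # matrix square root
    sigma = np.vstack([mean, mean + L.T, mean - L.T])    # 2n+1 sigma points
    w = np.full(2 * n + 1, 0.5 / (n + kappa))            # sigma-point weights
    w[0] = kappa / (n + kappa)
    y = np.array([f(s) for s in sigma])                  # propagate each point
    y_mean = w @ y
    y_cov = sum(wi * np.outer(yi - y_mean, yi - y_mean) for wi, yi in zip(w, y))
    return y_mean, y_cov

# Example: push a Gaussian position estimate through a range/bearing sensor model.
f = lambda p: np.array([np.hypot(p[0], p[1]), np.arctan2(p[1], p[0])])
m, C = unscented_transform(np.array([2.0, 1.0]), np.eye(2) * 0.1, f)
print(m)
```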
4.2. Particle filtering
Particle filtering represents the state distribution with a set of particles and handles multi-sensor fusion by taking the data from all sensors as observations. Each particle's weight is updated and normalized according to the state transition model and the sensor observations [5]. A resampling step then retains the high-weight particles and discards the low-weight ones, thereby fusing the data from the different sensors and ultimately providing a more accurate state estimate. This approach is suitable for complex nonlinear and non-Gaussian systems.
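A minimal bootstrap particle filter along these lines is sketched below for a one-dimensional robot position observed by two range sensors with different noise levels; the motion model, particle count, and measurement values are illustrative assumptions.

```python
import numpy as np

# Illustrative bootstrap particle filter for a 1-D robot position, fusing
# measurements from two sensors with different noise levels.

N = 500
particles = np.random.uniform(0.0, 10.0, N)      # initial particle set
weights = np.full(N, 1.0 / N)

def pf_step(particles, weights, u, measurements):
    """u: odometry motion; measurements: list of (z, sigma) per sensor."""
    # Predict: propagate particles through the (noisy) motion model
    particles = particles + u + np.random.normal(0.0, 0.05, particles.size)
    # Update: weight each particle by the joint likelihood of all sensors
    for z, sigma in measurements:
        weights = weights * np.exp(-0.5 * ((z - particles) / sigma) ** 2)
    weights = weights / weights.sum()
    # Resample: keep high-weight particles, discard low-weight ones
    idx = np.random.choice(particles.size, particles.size, p=weights)
    return particles[idx], np.full(particles.size, 1.0 / particles.size)

particles, weights = pf_step(particles, weights, u=0.5,
                             measurements=[(5.1, 0.2), (4.9, 0.5)])
print(particles.mean())                          # fused state estimate
```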
4.3. Multi-hypothesis tracking
Multi-hypothesis tracking (MHT) tracks targets in multi-sensor information fusion by generating multiple hypotheses. Each hypothesis represents a different target trajectory and is evaluated and updated by combining data from different sensors. MHT selects the hypothesis that best matches the observed data by calculating the probability and cost of each hypothesis. By comprehensively evaluating multiple possible target paths, MHT can handle multi-target tracking in complex environments and improve the accuracy of target identification and state estimation.
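The following highly simplified sketch conveys the hypothesis-scoring idea: it enumerates the possible associations between two existing tracks and two new measurements, scores each hypothesis by its Gaussian likelihood, and keeps the best one. A real MHT maintains and prunes many hypotheses over time; the positions and noise value here are assumed.

```python
import numpy as np
from itertools import permutations

# Toy MHT-style step: each hypothesis is one way of assigning measurements to
# tracks; hypotheses are scored by likelihood and the best is kept.

tracks = np.array([[1.0, 1.0], [4.0, 4.0]])           # predicted track positions
measurements = np.array([[1.2, 0.9], [3.8, 4.1]])     # new sensor detections
sigma = 0.5                                           # assumed measurement noise

hypotheses = []
for perm in permutations(range(len(measurements))):
    dists = np.linalg.norm(tracks - measurements[list(perm)], axis=1)
    log_likelihood = -0.5 * np.sum((dists / sigma) ** 2)
    hypotheses.append((log_likelihood, perm))

hypotheses.sort(reverse=True)               # best-scoring hypotheses first
best_score, best_assignment = hypotheses[0]
print(best_assignment)                      # -> (0, 1): measurement i matches track i
```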
5. The progress of application
Multi-sensor information fusion technology is widely used in the field of robotics; its applications can be grouped into the following main categories:
5.1. Environment perception and map construction
In Simultaneous Localization and Mapping (SLAM), robots commonly use sensors such as LiDAR, cameras, and an Inertial Measurement Unit (IMU). LiDAR provides high-resolution distance information while cameras provide rich visual information, and information fusion allows robots to construct maps and determine their position in unknown environments simultaneously; combining sensor data improves understanding and recognition of the environment. A previous study proposed a new fusion method that combines a 2D LiDAR, a depth camera, a wheeled odometer, and an IMU [6]. The method employs an improved Point-to-Line ICP (PL-ICP) algorithm for keyframe interception to overcome the shortcomings of ordinary ICP algorithms in data interception and fusion, and the EKF algorithm is applied directly to the IMU to keep data acquisition streamlined and effective. Finally, a more comprehensive 2D raster map is constructed by fusing the data from the vision and laser sensors through a Bayesian approach, overcoming the limitations of single-sensor map building. This method aims to improve the accuracy of indoor robot position information, and its effectiveness was verified in real-world environments [6].
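As a generic illustration of the Bayesian map-fusion idea (not the implementation from reference [6]), the sketch below fuses two small occupancy grids, one from the laser and one from the vision sensor, by adding their log-odds cell by cell.

```python
import numpy as np

# Illustrative Bayesian fusion of two 2-D occupancy grids (e.g. one built from
# the LiDAR and one from the depth camera) using log-odds.

def to_log_odds(p):
    return np.log(p / (1.0 - p))

def to_prob(l):
    return 1.0 / (1.0 + np.exp(-l))

def fuse_grids(p_laser, p_vision, prior=0.5):
    """Cell-wise Bayesian update: add log-odds, subtract the shared prior."""
    fused = to_log_odds(p_laser) + to_log_odds(p_vision) - to_log_odds(prior)
    return to_prob(fused)

# Two 2x2 toy occupancy grids (probability that each cell is occupied)
laser = np.array([[0.9, 0.5], [0.2, 0.5]])
vision = np.array([[0.8, 0.6], [0.5, 0.1]])
print(fuse_grids(laser, vision))
```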
In robot obstacle detection and avoidance, data from LiDAR, ultrasonic sensors, and cameras are often combined so that the robot can identify obstacles in the environment more accurately, which supports its obstacle-avoidance decisions.
5.2. Navigation and localization
In precise positioning, the limitations of a single sensor may lead to inaccurate navigation. Multi-sensor fusion can make up for these shortcomings: fusing GPS, IMU, ground radar, and other sensor data provides accurate position and attitude estimates for robots, which is particularly important for self-driving cars and drones. One proposal is that the position estimate obtained from a Quick Response (QR) code can be fused with IMU measurement data to compensate the QR-derived position, yielding optimized position, orientation, and velocity [7].
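A much-simplified sketch of this idea is shown below: a drifting IMU-integrated position is blended with an absolute but intermittent QR-code fix using a complementary weighting. The actual filter in [7] also estimates orientation and velocity; the numbers here are assumptions.

```python
# Simplified 1-D complementary fusion of IMU dead reckoning with absolute
# position fixes from QR-code detections (illustrative only).

def fuse_position(x_imu, x_qr, alpha=0.8):
    """Blend the drifting IMU estimate with the absolute but noisy QR fix."""
    if x_qr is None:                    # no QR code in view: rely on the IMU alone
        return x_imu
    return alpha * x_qr + (1.0 - alpha) * x_imu

x = 0.0
dt = 0.1
velocity_from_imu = [1.0, 1.0, 1.0, 1.0]         # integrated IMU output
qr_fixes = [None, None, 0.31, None]              # QR fix only when a tag is seen
for v, qr in zip(velocity_from_imu, qr_fixes):
    x = fuse_position(x + v * dt, qr)            # predict with IMU, correct with QR
print(x)
```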
In map matching, combining sensor data with known map information enables the robot to determine its position more accurately and perform path planning, which significantly improves the robot's positioning accuracy and robustness.
5.3. Motion control and coordination
In attitude control, robots can achieve stable attitude by combining data from IMUs, cameras, and other sensors. For example, quadcopter Unmanned Aerial Vehicles (UAVs) use IMUs and vision sensors to maintain flight stability.
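A standard way to fuse such data for attitude is a complementary filter, sketched below for the roll angle estimated from a gyroscope and an accelerometer; the sample rate, gain, and sensor values are illustrative assumptions.

```python
import math

# Generic complementary filter for roll angle: the gyroscope is accurate over
# short intervals but drifts, while the accelerometer gives an absolute but
# noisy gravity reference; fusing them yields a stable attitude estimate.

def complementary_filter(roll, gyro_rate, accel_y, accel_z, dt, alpha=0.98):
    roll_gyro = roll + gyro_rate * dt             # integrate angular rate
    roll_accel = math.atan2(accel_y, accel_z)     # gravity-based roll
    return alpha * roll_gyro + (1.0 - alpha) * roll_accel

roll, dt = 0.0, 0.01
for _ in range(100):                              # 1 s of simulated samples
    roll = complementary_filter(roll, gyro_rate=0.05, accel_y=0.0,
                                accel_z=9.81, dt=dt)
print(roll)   # gyro drift is pulled back toward the accelerometer reference
```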
In motion planning, sensor data is used to plan and adjust the robot's motion trajectory for smooth motion control and task execution.
5.4. Object recognition and tracking
By fusing the vision system with other sensors, for example by combining data from cameras, depth sensors, and LiDAR, robots can recognize and track target objects more accurately. This is particularly important in robot grasping and object handling. Ruixue Wang et al. describe a strawberry-picking robot developed by the Spanish company AGROBOT that uses on-board integrated color and infrared depth sensors to capture images around its gripper and AI software to identify the strawberry fruits [8].
In terms of face recognition and behavioral analysis, these functions are achieved through data fusion between the camera and other sensors, which greatly improves recognition and analysis accuracy and reliability; such fusion is commonly used in service robots and security robots.
5.5. Autonomous decision-making and intelligence
In terms of intelligent decision-making, robots make intelligent decisions by fusing multiple sensor data. For example, agricultural robots can collect data such as temperature, humidity, and light from different sensors and fuse them to decide when to water or fertilize.
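A toy rule-based fusion of this kind is sketched below; the sensor names and thresholds are invented for illustration and are not taken from any cited system.

```python
# Toy rule-based fusion for an agricultural robot: readings from temperature,
# soil-humidity and light sensors are combined into one irrigation decision.

def should_irrigate(temperature_c, soil_humidity_pct, light_lux):
    dry = soil_humidity_pct < 30
    hot = temperature_c > 30
    strong_sun = light_lux > 50_000
    # Irrigate when the soil is dry, or when heat and sun raise evaporation.
    return dry or (hot and strong_sun and soil_humidity_pct < 45)

print(should_irrigate(temperature_c=33, soil_humidity_pct=40, light_lux=60_000))  # True
```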
In terms of anomaly detection and handling, the fusion of sensor data can help robots identify anomalies and take appropriate measures, such as fault detection and self-repair.
5.6. Interaction and user experience
In augmented and virtual reality, a more immersive experience can be provided to users by fusing data from cameras, sensors and head trackers. For example, in VR headsets, sensor fusion can improve the accuracy of environment perception and interaction.
In speech recognition and natural language interaction, by processing data from microphone arrays, touchscreens, and other sensors, robots can better understand and respond to users' voice commands. For example, by analyzing facial expressions and voice intonation, a robot can determine a user's emotional state and respond appropriately.
5.7. Health monitoring and medical applications
In patient monitoring, fusing data from physiological and environmental sensors allows real-time monitoring of the patient's health status, which improves monitoring efficiency.
In robotic surgical systems, fusion of data from different sensors, such as cameras and force sensors, supports the operation; meanwhile, the integration of surgical robot systems with ultrasound, MRI, and other imaging modalities will greatly advance the application of robots in the medical field [9]. Both are conducive to improving the accuracy and safety of surgery.
6. Limitations and trends
The application of multi-sensor information fusion technology in the field of robotics is gradually improving, but there are still many problems to be solved. For example, data in some complex dynamic environments are still difficult to process effectively, and data fusion algorithms need to be strengthened; the demand for computational resources in the application of the technology is too high, and efficient algorithms and hardware structures need to be developed.
In order to solve such problems, multi-sensor information fusion technology has shown many trends. The introduction of deep learning promotes the intelligence of fusion algorithms and improves the system's ability to handle complex environments. The application of edge computing facilitates the robot to be able to make decisions faster, reduces the data transmission delay, and improves the real-time response capability. The development of adaptive fusion algorithms enables the system to automatically adjust according to environmental changes, enhancing robustness. Multimodal fusion technology integrates data from multiple sensors, such as vision and laser, to achieve more accurate environmental sensing. Advances in smart sensors improve data quality and system stability, while cloud computing and big data analytics process massive amounts of sensor data to support complex decision-making and prediction. The rise of collaborative robotic systems promotes data sharing and cooperation among multiple robots, improving overall performance and efficiency. Together, these trends are driving the intelligent and efficient application of multi-sensor information fusion technology in robotics, opening up more possibilities for the future.
7. Conclusion
This paper has provided an overview of the application of multi-sensor information fusion technology in robotics, analyzing its role and summarizing the operating logic, algorithms, and main application directions of the technology in this field.
Multi-sensor information fusion is an important part of the field of robotics. In the future, it will further enhance environment-perception accuracy, real-time response capability, and system intelligence in robotics, helping intelligent robots achieve more efficient autonomous decision-making and collaboration in complex environments.
References
[1]. Yajuan Tian, Daping Fu & Shihui Wu. (2023). Research on the application of multi-sensor information fusion technology in robotics. Automation & Instrumentation, (02), 51-53+75. doi:10.19557/j.cnki.1001-9944.2023.02.012
[2]. Fang Zhou & Liyan Han. (2006). A review of multi-sensor information fusion technology. Telemetry & Remote Control, (03), 1-7.
[3]. Cheng Chen, Fanxing Kong, Tengfei He, Yifei Shao, Shengnan Li & Na Chen. (2023). Multi-sensor information fusion technology and its application development in the field of temperature control. Chemical Automation & Instrumentation, (02), 137-141. doi:10.20030/j.cnki.1000-3932.202302004
[4]. Haibo Sun, Ziyuan Tong, Shoufeng Tang, Minming Tong & Yuming Ji. (2018). A review of SLAM based on Kalman filter and particle filter. Software Guide, (12), 1-3+7.
[5]. Lili Zhao. (2018). Research on Object Tracking Algorithm Based on Multi-source Sensor Information Fusion (Master's thesis, Shenyang University of Aeronautics and Astronautics).
[6]. Shuping Xu, Dingzhe Yang & Xiaodun Xiong. (2024). SLAM indoor robot with multi-sensor fusion. Journal of Xi'an University of Technology, (01), 93-103. doi:10.16185/j.jxatu.edu.cn.2024.01.401
[7]. Jingju Wang, Cairu Meng, Yipeng Zhao, Xinjia Meng, Hao Yan & Leilei Wang. (2024). Robot positioning technology based on the fusion of QR code positioning and inertial navigation. Mechanical Design and Research, (04), 44-48+55. doi:10.13952/j.cnki.jofmdr.2024.0136
[8]. Ruixue Wang, Licheng Zhu, Bo Zhao, Changwei Wang, Xiaofeng Jia & Qingzhong Xu. (2022). Current status and typical applications of agricultural robot technology. Agricultural Engineering, (04), 5-11. doi:10.19998/j.cnki.2095-1795.2022.04.001
[9]. Wei Liu, Xiao Zhao & Yang Fu. (2023). Research, application status and development trend of medical robots. China Medical Equipment, (12), 170-175.