1. Introduction
Autonomous vehicles, also known as self-driving vehicles, have become a prominent research and development focus in the automotive and electric-vehicle domain, attracting the attention of a wide range of researchers. Because sensors are the key components through which a self-driving vehicle observes the world, the fusion and interpretation of sensor data is a pivotal aspect of autonomous driving.
The vehicle perceives its surroundings through an array of diverse sensors mounted on its body, which collect data about the vehicle's environment. This data is processed by a perception block, whose components combine the sensor streams into meaningful information. The planning subsystem then uses the perception output for behavior planning and for formulating short- and long-range path plans. Finally, the control module ensures that the vehicle follows the path provided by the planning subsystem by issuing control commands for execution.
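As an illustration of this sense-perceive-plan-control flow, the following is a minimal Python sketch of the loop described above; the class and method names (AutonomousDrivingPipeline, step, plan, track, and so on) are hypothetical placeholders rather than any real autonomous-driving API.

# Minimal sketch of the sense -> perceive -> plan -> control loop described above.
# All class and method names here are illustrative placeholders, not a real API.

class AutonomousDrivingPipeline:
    def __init__(self, sensors, perception, planner, controller):
        self.sensors = sensors          # e.g. camera, radar, LiDAR drivers
        self.perception = perception    # fuses raw data into an environment model
        self.planner = planner          # behavior + short/long-range path planning
        self.controller = controller    # turns the planned path into actuator commands

    def step(self, vehicle_state):
        raw = {name: s.read() for name, s in self.sensors.items()}   # gather sensor data
        world = self.perception.update(raw, vehicle_state)           # perception block
        path = self.planner.plan(world, vehicle_state)               # planning subsystem
        return self.controller.track(path, vehicle_state)            # control commands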
The initial milestones in autonomous vehicle development were achieved in the second half of the twentieth century. Pioneering efforts were made by Carnegie Mellon University [1,2] and by Mercedes-Benz in collaboration with the Bundeswehr University Munich [3], leading to the first fully autonomous prototype vehicles. Since then, a multitude of companies and research organizations have built autonomous vehicle prototypes and dedicated concerted efforts toward enabling full autonomy in this domain.
Autonomous vehicle technology advanced remarkably through the Defense Advanced Research Projects Agency's (DARPA) challenge events, specifically the Grand Challenge events in 2004 and 2005 [4,5] and the Urban Challenge in 2007 [6]. These milestones demonstrated that machines can autonomously execute the intricate task of driving. In the 2007 DARPA Urban Challenge, six of the eleven finalist self-driving vehicles successfully navigated an urban environment and crossed the finish line, a significant milestone for robotics. In 2013, Mercedes-Benz completed a groundbreaking 103 km fully autonomous test drive with an S500 sedan [7]; the route passed through 25 towns and major cities and presented diverse, complex traffic scenarios.
Presently, challenges in autonomous vehicle development primarily revolve around scene perception, localization, mapping, vehicle control, trajectory optimization, and higher-level planning decisions. Emerging trends in autonomous driving encompass end-to-end learning [8-10] and reinforcement learning [11,12].
This paper explores the role and significance of sensor and sensor fusion technology in autonomous vehicles. Through an analysis of sensor working principles and the concept of sensor fusion, it shows how these technologies underpin the safety and efficiency of autonomous driving.
2. Sensor Technology in Autonomous Vehicles
2.1. Camera
Cameras are one of the most widely adopted technologies for perceiving the environment. A camera forms an image by focusing light from the surroundings through a lens (mounted in front of the sensor) onto a photosensitive surface, the image plane, producing a detailed depiction of the scene [13]. Computer vision techniques are then applied to analyze and interpret images of roads, traffic signs, vehicles, and pedestrians.
The camera arrangement in an autonomous vehicle can use monocular cameras, binocular (stereo) cameras, or a combination of both. A monocular setup captures a sequence of images with a single camera. Unlike stereo cameras, a conventional RGB monocular camera does not directly measure depth; in certain scenarios, or with more advanced monocular hardware such as dual-pixel autofocus sensors, depth can nevertheless be inferred through dedicated algorithms [14-16]. For this reason, autonomous vehicles often also employ a binocular camera system, in which two cameras are mounted side by side so that depth can be recovered from the disparity between the two views, as sketched below.
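As a concrete illustration, the following Python sketch computes depth from disparity for a rectified stereo pair using Z = f * B / d; the focal length, baseline, and disparity values are illustrative assumptions, not figures from this paper.

import numpy as np

def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
    """Depth Z = f * B / d for a rectified stereo (binocular) camera pair.

    disparity_px: per-pixel disparity map in pixels (0 where no match was found).
    focal_length_px: focal length expressed in pixels.
    baseline_m: distance between the two camera centres in metres.
    """
    disparity = np.asarray(disparity_px, dtype=float)
    depth = np.full_like(disparity, np.inf)          # no disparity -> unknown/far
    valid = disparity > 0
    depth[valid] = focal_length_px * baseline_m / disparity[valid]
    return depth

# Example: a 700 px focal length, 12 cm baseline and 20 px disparity give ~4.2 m.
print(depth_from_disparity(np.array([[20.0]]), 700.0, 0.12))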
Cameras offer high-resolution visual information at a relatively low cost. However, their perception capability is limited in adverse weather, low light, and nighttime conditions.
2.2. Radar
Radar is another widely used type of sensor; it emits electromagnetic waves to measure and detect the position and velocity of objects. Radar provides information such as the distance and speed of objects and is robust in adverse weather and low-light conditions. It offers a longer detection range but relatively low resolution, which makes it difficult to recover detailed object shape or to identify objects.
Commercial automotive radars currently operate at frequencies of 79 GHz (gigahertz), 77 GHz, 60 GHz, and 24 GHz. Compared with 79 GHz radar sensors, 24 GHz sensors have more limited range, velocity, and angular resolution. These limitations make it difficult to detect and react to multiple hazards, so 24 GHz sensors are likely to be phased out in the future [17]. Because radar relies on transmitted electromagnetic waves, it is largely unaffected by adverse weather and is independent of the surrounding illumination; radars can therefore operate in fog, snow, or cloud cover, by day or night. Radar also has drawbacks: it can generate false detections from metallic objects such as road signs and guardrails, and it struggles to distinguish stationary objects of interest from the static background [18]. For example, a radar may have difficulty distinguishing an animal carcass (a static object) from the road, because the two produce similar Doppler shifts [19]. The sketch below shows how range and radial velocity are obtained from the echo delay and the Doppler shift.
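For illustration, the following Python sketch implements the two basic radar relations referred to above: range from the echo's round-trip time (R = c*t/2) and radial velocity from the Doppler shift (v = f_d*c/(2*f_c)); the numerical values are illustrative assumptions only.

# Illustrative numbers only, not taken from the paper.
C = 299_792_458.0  # speed of light, m/s

def radar_range(round_trip_time_s):
    """Range from the round-trip time of the reflected wave: R = c * t / 2."""
    return C * round_trip_time_s / 2.0

def radial_velocity(doppler_shift_hz, carrier_freq_hz):
    """Radial velocity from the Doppler shift: v = f_d * c / (2 * f_c)."""
    return doppler_shift_hz * C / (2.0 * carrier_freq_hz)

# A 0.5 us echo corresponds to ~75 m; a 5.13 kHz shift at 77 GHz to ~10 m/s.
print(radar_range(0.5e-6))                 # ~74.9 m
print(radial_velocity(5.13e3, 77e9))       # ~10.0 m/s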
2.3. LiDAR
Light Detection and Ranging (LiDAR) emerged in the 1960s as a remote-sensing technique widely used for aerial terrain mapping. In recent decades LiDAR technology has advanced rapidly and has become a pivotal perception technology in advanced driver assistance systems (ADAS) and self-driving vehicles. A LiDAR emits pulses of infrared laser light that reflect off surrounding objects; the device detects these reflections, and the time between emission and reception of each pulse yields the distance to the reflecting surface. By scanning its surroundings, a LiDAR builds a three-dimensional representation of the scene in the form of a point cloud [20]. LiDAR works across a wide range of lighting conditions and achieves high accuracy in target detection and localization. However, LiDAR units are relatively expensive and bulky, and they perceive transparent or highly reflective objects poorly.
In autonomous vehicles today, 3D rotating LiDAR systems are the most frequently used variant, because their wide field of view, long detection range, and accurate distance measurements provide dependable perception by day and by night. The resulting point cloud offers a comprehensive three-dimensional portrayal of the environment. Unlike camera systems, however, LiDAR sensors capture no color information about the surroundings, so the point cloud must be combined with data from other sensors through sensor fusion algorithms. A sketch of how each return is converted into a 3D point follows.
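The following Python sketch shows, under simple assumptions, how a single LiDAR return (round-trip time plus beam azimuth and elevation) is converted into one Cartesian point of the cloud; the numbers are illustrative and not drawn from any particular sensor.

import numpy as np

C = 299_792_458.0  # speed of light, m/s

def lidar_point(round_trip_time_s, azimuth_rad, elevation_rad):
    """Convert one LiDAR return into a Cartesian point.

    Range follows from the pulse's round-trip time (r = c * t / 2); the beam's
    azimuth and elevation angles then place the return in 3D space.
    """
    r = C * round_trip_time_s / 2.0
    x = r * np.cos(elevation_rad) * np.cos(azimuth_rad)
    y = r * np.cos(elevation_rad) * np.sin(azimuth_rad)
    z = r * np.sin(elevation_rad)
    return np.array([x, y, z])

# A 200 ns echo straight ahead at 2 degrees elevation gives a point ~30 m away.
print(lidar_point(200e-9, 0.0, np.radians(2.0)))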
2.4. Ultrasonic Waves
Ultrasonic sensors measure the distance to objects using sound waves, which makes them well suited to low-speed, close-range sensing. They provide accurate distance measurements and are widely used in applications such as parking and low-speed maneuvering. However, their limited detection range, narrow angular coverage, and susceptibility to spurious sound reflections restrict their use in high-speed autonomous driving. The sketch below shows the basic echo-time calculation.
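As a minimal illustration, the sketch below computes distance from the echo time of an ultrasonic pulse (d = v_sound * t / 2); the temperature model and the example numbers are illustrative assumptions.

def ultrasonic_distance(echo_time_s, air_temp_c=20.0):
    """Distance from an ultrasonic echo: d = v_sound * t / 2.

    The speed of sound depends on air temperature (approx. 331.3 + 0.606*T m/s),
    which is one reason these sensors are best used at short range.
    """
    v_sound = 331.3 + 0.606 * air_temp_c
    return v_sound * echo_time_s / 2.0

# An 11.7 ms echo at 20 C corresponds to roughly 2 m.
print(ultrasonic_distance(11.7e-3))   # ~2.01 m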
Different types of sensors have their own advantages and limitations in autonomous vehicles. By fusing data from multiple sensors, a more comprehensive and accurate perception of the environment can be achieved, helping the vehicle make more reliable decisions and navigate more safely.
3. Sensor Fusion Technology in Autonomous Vehicles
3.1. Sensor Fusion Algorithms
Sensor fusion is the process of combining and integrating data from different sensors to provide a more comprehensive and accurate representation of the environment. By fusing data from multiple sensors, the limitations of individual sensors can be compensated for, and target detection, localization, and perception can be improved. The following are some common sensor fusion algorithms.
Kalman filtering is a recursive filtering algorithm widely used in sensor fusion. It estimates the state of a target by alternating between predicting the state with a dynamic model and correcting that prediction with sensor measurements, weighting each according to its uncertainty. In this way the Kalman filter achieves an optimal estimate of the target state by balancing prior knowledge and measurement data. The algorithm is particularly suitable for linear systems and is often used to fuse data from sensors such as gyroscopes, accelerometers, and magnetometers; a minimal sketch follows.
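The following is a minimal Python sketch of a linear Kalman filter tracking a one-dimensional constant-velocity target from noisy position measurements; the matrices, noise levels, and measurement values are illustrative assumptions rather than parameters from the paper.

import numpy as np

# Minimal linear Kalman filter for a 1D constant-velocity target,
# fusing noisy position measurements (e.g. from a ranging sensor).

dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition (position, velocity)
H = np.array([[1.0, 0.0]])              # we only measure position
Q = 0.01 * np.eye(2)                    # process noise covariance
R = np.array([[0.5]])                   # measurement noise covariance

x = np.zeros((2, 1))                    # initial state estimate
P = np.eye(2)                           # initial state covariance

def kalman_step(x, P, z):
    # Predict: propagate the state with the dynamic model.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: blend prediction and measurement according to their uncertainties.
    y = z - H @ x_pred                                  # innovation
    S = H @ P_pred @ H.T + R                            # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)                 # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

for z_meas in [0.9, 2.1, 2.9, 4.2]:                     # simulated position readings
    x, P = kalman_step(x, P, np.array([[z_meas]]))
print(x.ravel())                                        # estimated position, velocity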
The extended Kalman filter (EKF) handles nonlinear systems. Unlike the standard Kalman filter, it linearizes the system and measurement models around the current state estimate (using their Jacobians) and then applies the usual predict-update cycle, which allows it to cope better with the nonlinear problems that arise in sensor fusion. The update step is sketched below.
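As a sketch of the EKF update under these assumptions, the example below linearizes a nonlinear range measurement h(x) = sqrt(px^2 + py^2) with its Jacobian and then applies the standard Kalman update; all numbers are illustrative.

import numpy as np

# Sketch of the EKF idea: a nonlinear range measurement is linearized via its
# Jacobian around the current estimate, after which the update step proceeds
# exactly as in the linear Kalman filter.

def h(x):
    px, py = x[0, 0], x[1, 0]
    return np.array([[np.hypot(px, py)]])          # measured range to the target

def H_jacobian(x):
    px, py = x[0, 0], x[1, 0]
    r = np.hypot(px, py)
    return np.array([[px / r, py / r]])            # d(range)/d(px, py)

def ekf_update(x_pred, P_pred, z, R):
    Hj = H_jacobian(x_pred)                        # linearize at the prediction
    y = z - h(x_pred)                              # innovation uses the true nonlinearity
    S = Hj @ P_pred @ Hj.T + R
    K = P_pred @ Hj.T @ np.linalg.inv(S)
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x_pred)) - K @ Hj) @ P_pred
    return x_new, P_new

x_pred = np.array([[3.0], [4.0]])                  # predicted 2D position
P_pred = np.eye(2)
z = np.array([[5.4]])                              # noisy range reading
print(ekf_update(x_pred, P_pred, z, R=np.array([[0.3]]))[0].ravel())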
Particle filtering is a probabilistic filtering algorithm suited to nonlinear and non-Gaussian systems. It represents the posterior probability distribution of the target state with a set of random samples called particles, which are propagated with the motion model, re-weighted against the measurements, and resampled, gradually approximating the true state of the target. Particle filtering adapts well to nonlinear systems and multimodal distributions; a minimal sketch follows.
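The following minimal Python sketch illustrates the predict-reweight-resample cycle of a particle filter for a one-dimensional target; the motion model, noise levels, and measurements are illustrative assumptions.

import numpy as np

# Minimal particle filter for a 1D target; numbers are illustrative only.
rng = np.random.default_rng(0)
N = 1000
particles = rng.uniform(0.0, 10.0, N)       # initial guesses of the target position
weights = np.full(N, 1.0 / N)

def pf_step(particles, weights, z, motion=0.5, motion_std=0.2, meas_std=0.5):
    # Predict: move each particle with the motion model plus noise.
    particles = particles + motion + rng.normal(0.0, motion_std, particles.size)
    # Update: re-weight particles by how well they explain the measurement z.
    likelihood = np.exp(-0.5 * ((z - particles) / meas_std) ** 2)
    weights = weights * likelihood
    weights /= weights.sum()
    # Resample: duplicate likely particles, discard unlikely ones.
    idx = rng.choice(particles.size, particles.size, p=weights)
    return particles[idx], np.full(particles.size, 1.0 / particles.size)

for z in [5.4, 6.1, 6.4, 7.0]:              # simulated measurements of the target
    particles, weights = pf_step(particles, weights, z)
print(particles.mean())                      # approximate posterior mean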
The main objective of these sensor fusion algorithms is to estimate the state of the target accurately by effectively combining data from different sensors. By synthesizing the various kinds of information that sensors provide, such as images, distances, velocities, and orientations, sensor fusion delivers a more accurate representation of the environment, helping autonomous vehicles make more reliable decisions and plans.
3.2. The Advantages of Sensor Fusion Technology in Autonomous Vehicles
Improving Environment Perception: Autonomous vehicles require accurate perception of their surroundings, including roads, obstacles, and traffic signs. Sensor fusion can effectively combine data from different types of sensors to perceive the environment at multiple levels, such as using cameras, LiDAR, and radar sensors to collectively construct an environmental map. By fusing data from multiple sensors, the accuracy and completeness of environment perception can be improved.
Enhancing Object Recognition and Tracking: Sensor fusion helps autonomous vehicles recognize, classify, and track objects in their surroundings. For example, combining camera and radar data yields more precise information about an object's position, velocity, and direction of motion, aiding driving decision-making and planning; a simple camera-radar association of this kind is sketched below.
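As a simple illustration of such camera-radar fusion, the sketch below combines a camera-derived bearing (from the detection's pixel column and the horizontal field of view) with a radar range and radial speed to obtain a 2D position and velocity estimate; the pinhole model and all numbers are illustrative assumptions.

import numpy as np

# Illustrative fusion of a camera detection and a radar detection of the same
# object: the camera gives an accurate bearing, the radar gives range and
# radial speed, and combining the two yields a 2D position plus velocity.

def bearing_from_pixel(pixel_x, image_width, hfov_rad):
    """Approximate bearing of a detection from its image column (pinhole model)."""
    f_px = (image_width / 2.0) / np.tan(hfov_rad / 2.0)
    return np.arctan2(pixel_x - image_width / 2.0, f_px)

def fuse_camera_radar(pixel_x, image_width, hfov_rad, radar_range, radar_speed):
    bearing = bearing_from_pixel(pixel_x, image_width, hfov_rad)
    position = radar_range * np.array([np.cos(bearing), np.sin(bearing)])
    return {"bearing_deg": np.degrees(bearing),
            "position_xy_m": position,
            "radial_speed_mps": radar_speed}

# A detection at pixel column 800 in a 1280 px image with a 90-degree FOV,
# matched with a radar return at 25 m closing at 3 m/s.
print(fuse_camera_radar(800, 1280, np.radians(90.0), 25.0, -3.0))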
Optimizing Trajectory Planning: Sensor fusion can provide more accurate inputs for the trajectory planning of autonomous vehicles. By combining data from various sources, such as map information, vehicle state, and the surrounding environment, the dynamics and risk factors around the vehicle can be better predicted, leading to optimized trajectory planning.
Enhancing Safety: Sensor fusion can enhance the safety of autonomous vehicles through redundancy and fault tolerance mechanisms. When a sensor fails or produces unreliable data, accurate environment perception and decision support can continue by fusing data from the remaining reliable sensors; sensor failures can also be detected and handled in real time through fault detection and fallback strategies, improving system reliability and robustness. The weighted-fusion sketch below illustrates how a failed sensor can be discounted.
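The following sketch illustrates the redundancy idea with simple inverse-variance weighting: each sensor's estimate is weighted by its confidence, so a failed sensor, reported with a very large variance, effectively drops out of the fused result; the sensor values and variances are illustrative assumptions.

import numpy as np

# Sketch of redundancy via inverse-variance weighting: each sensor's distance
# estimate is weighted by its confidence, so a failed sensor stops influencing
# the fused result.

def fuse_estimates(values, variances):
    values = np.asarray(values, dtype=float)
    weights = 1.0 / np.asarray(variances, dtype=float)
    fused = np.sum(weights * values) / np.sum(weights)
    fused_var = 1.0 / np.sum(weights)
    return fused, fused_var

# Camera, radar and LiDAR each estimate the distance to the same obstacle.
print(fuse_estimates([20.4, 19.8, 20.1], [1.0, 0.25, 0.04]))
# If the camera fails, marking its variance as effectively infinite removes it.
print(fuse_estimates([999.0, 19.8, 20.1], [1e12, 0.25, 0.04]))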
Sensor fusion technology plays a crucial role in autonomous vehicles. It can improve environment perception, enhance object recognition and tracking, optimize trajectory planning, and enhance safety through redundancy and fault tolerance. These advantages make sensor fusion one of the key technologies in achieving reliable and efficient autonomous driving systems.
4. Discussion
As autonomous vehicles become more widespread, they need to monitor their surroundings and perceive road conditions in real time in order to make intelligent decisions. Sensor technology plays a crucial role in collecting this environmental information: autonomous vehicles are typically equipped with a variety of sensors, such as LiDAR, cameras, millimeter-wave radar, and GPS, to obtain information about obstacles, road signs, traffic conditions, and the vehicle's position.
However, a single sensor often provides limited information. In order to perceive the environment more comprehensively and accurately, sensor fusion technology has emerged.
With the advancement of artificial intelligence and deep learning technology, the performance of sensor fusion technology will continue to improve. Through data-driven algorithms and model optimization, the accuracy and robustness of environmental perception can be further enhanced. In addition, the continuous development of new sensor technologies will bring more possibilities for sensor fusion technology, such as wearable sensors, smart material sensors, etc. Sensor fusion technology can also be better integrated with other key technologies in future autonomous vehicles, such as high-precision maps, communication technology, and vehicle control systems. This will bring more comprehensive and intelligent perception and decision-making capabilities to autonomous vehicles, enabling them to adapt to more complex and diverse traffic environments.
In summary, sensors and sensor fusion technology have great development potential and future prospects in autonomous vehicles, which can improve the overall autonomy of vehicles and enhance real-time decision-making and emergency handling capabilities. The autonomous driving technology will be able to better adapt to various complex traffic and environmental conditions, and ultimately achieve a safe, efficient, and intelligent autonomous driving traffic system.
5. Conclusion
The research has shown that sensor technology provides essential inputs to capture and perceive the surrounding environment, including objects, obstacles, and road conditions. Different sensors have their strengths and limitations. Combining the data from multiple sensors through sensor fusion techniques allows for a more accurate and reliable perception of the environment, leading to improved decision-making and control algorithms in autonomous vehicles.
However, the research also highlighted a few challenges and limitations. First, sensor fusion algorithms require extensive computational resources and real-time processing capabilities, which pose technical difficulties and increase system complexity. Moreover, the reliability and robustness of the sensor fusion system need to be further enhanced to handle various environmental conditions and potential sensor failures.
Future research can focus on refining sensor fusion algorithms and developing advanced signal processing techniques to improve the accuracy and reliability of autonomous vehicle perception systems. Additionally, investigating new sensor technologies, such as advanced LiDAR and smart cameras, can help overcome the limitations of current sensors.
References
[1]. T. Kanade, Autonomous land vehicle project at CMU, CSC '86: Proceedings of the 1986 ACM Fourteenth Annual Conference on Computer Science, 1986.
[2]. R. Wallace, First results in robot road-following, IJCAI '85: Proceedings of the 9th International Joint Conference on Artificial Intelligence, 1985.
[3]. E. D. Dickmanns, A. Zapp, Autonomous high speed road vehicle guidance by computer vision, IFAC Proceedings Volumes, 1987, 20(5): 221-226.
[4]. S. Thrun et al., Stanley: The robot that won the DARPA Grand Challenge, Journal of Robotic Systems (Special Issue on the DARPA Grand Challenge), 2006, 23(9): 661-692.
[5]. M. Montemerlo et al., Winning the DARPA Grand Challenge with an AI robot, Proceedings of the 21st National Conference on Artificial Intelligence, July 2006, pp. 982-987.
[6]. M. Buehler, K. Iagnemma, S. Singh, The DARPA Urban Challenge: Autonomous Vehicles in City Traffic, Springer Tracts in Advanced Robotics, 2009.
[7]. J. Ziegler, P. Bender, M. Schreiber, et al., Making Bertha drive: An autonomous journey on a historic route, IEEE Intelligent Transportation Systems Magazine, 2014, 6(2): 8. DOI: 10.1109/MITS.2014.2306552.
[8]. M. Bojarski et al., End to end learning for self-driving cars, 2016. Available: https://arxiv.org/abs/1604.07316.
[9]. J. Kocić, N. Jovičić, V. Drndarević, Driver behavioral cloning using deep learning, 2018 17th International Symposium INFOTEH-JAHORINA (INFOTEH), East Sarajevo, 2018, pp. 1-5.
[10]. J. Kocić, N. Jovičić, V. Drndarević, End-to-end autonomous driving using a depth-performance optimal deep neural network, 2018, submitted for publication.
[11]. M. Riedmiller, M. Montemerlo, H. Dahlkamp, Learning to drive a real car in 20 minutes, 2007 Frontiers in the Convergence of Bioscience and Information Technologies, Jeju City, 2007, pp. 645-650.
[12]. L. Fridman, B. Jenik, J. Terwilliger, DeepTraffic: Driving fast through dense traffic with deep reinforcement learning, arXiv:1801.02805 [cs.NE], Jan. 2018. Available: https://arxiv.org/abs/1801.02805.
[13]. S. Kuutti, R. Bowden, Y. Jin, P. Barber, S. Fallah, A survey of deep learning applications to autonomous vehicle control, IEEE Transactions on Intelligent Transportation Systems, 2021, 22: 712-733.
[14]. A. Joglekar, D. Joshi, R. Khemani, S. Nair, S. Sahare, Depth estimation using monocular camera, IJCSIT, 2011, 2: 1758-1763.
[15]. A. Bhoi, Monocular depth estimation: A survey, 2019, arXiv:1901.09402v1.
[16]. R. Garg, N. Wadhwa, S. Ansari, J. T. Barron, Learning single camera depth estimation using dual-pixels, 2019, arXiv:1904.05822v3.
[17]. B. Shahian Jahromi, T. Tulabandhula, S. Cetin, Real-time hybrid multi-sensor fusion framework for perception in autonomous vehicles, Sensors, 2019, 19: 4357.
[18]. Z. Wang, Y. Wu, Q. Niu, Multi-sensor fusion in automated driving: A survey, IEEE Access, 2019, 8: 2847-2868.
[19]. Detecting static objects in view using radar, Electrical Engineering Stack Exchange. Available online: https://electronics.stackexchange.com/questions/236484/detecting-static-objects-in-view-using-radar (accessed on 29 December 2020).
[20]. S. Campbell, N. O’Mahony, L. Krpalcova, D. Riordan, J. Walsh, A. Murphy, R. Conor, Sensor technology in autonomous vehicles: A review, Proceedings of the 2018 29th Irish Signals and Systems Conference (ISSC), Belfast, UK, 21-22 June 2018.