Abstract
Autonomous driving has become one of the major directions in the development of automotive technology. Environmental perception is indispensable for intelligent vehicles, particularly in everyday use, where they must negotiate many complex road environments, and it cannot be separated from its hardware and software support. This study discusses the different sensors used in autonomous driving environment detection in order to gain a deeper understanding of their characteristics, functions and applications, and to weigh the advantages and limitations of each. Three widely used sensors, the camera, millimetre-wave radar and laser radar (LiDAR), are then compared in terms of detection range and angle, road detection accuracy, and detection stability, and the most suitable and stable of the three is selected for in-depth consideration. On this basis, new ideas for the future development of existing sensors are provided, and directions for improving them are summarized from the results of the analysis.
Keywords
Environmental perception, driverless cars, sensors
1.Introduction
With the development of technology, autonomous driving is flourishing: growing numbers of self-driving cars are being produced and put into transportation service as ride-hailing or taxi vehicles, appearing in the public eye. So far, however, the safety assurance of Level 4 and Level 5 autonomous vehicles (AVs) remains an unresolved issue, partly because the environmental perception of autonomous vehicles is affected by uncertainty, which can lead to traffic accidents and other problems [1]. At present, vehicles of varying technical conditions and automation levels coexist on the road, which makes the road traffic system more complex and the resulting accidents more diverse. Furthermore, laws and regulations on intelligent vehicles remain incomplete in many countries, which gives rise to a variety of social problems [2]. The majority of these traffic accidents can be traced to immature environment detection technology, and more specifically to sensors that are not yet precise enough. Autonomous vehicles perceive the environment through sensors, and a sensor suite is usually composed of at least cameras, radar and laser radar; in principle, a single-sensor system could also be applied to autonomous vehicles. When the technology was immature, there was no way to quickly detect and collect information on the vehicle's driving path, the traffic participants around it, its driving status and the driving environment, which is the main reason why autonomous vehicles could not fully replace the human driver and existed only as driver assistance systems. At the same time, weather or lighting conditions may cause detection errors or excessive error, leading to unavoidable traffic accidents; in such cases the traffic police cannot even hold the driver responsible, because the driver is a programmed machine, and if this situation occurs often enough it will affect social order. To help ensure the normal operation of autonomous vehicles, this study compares three common types of sensors, the camera, millimetre-wave radar and laser radar, investigating their detection range, detection accuracy, detection stability and other aspects. It collects the experimental data of previous researchers, gathers the detection ranges and angles of the three sensors and the scenarios to which each is suited, and from this second-hand data determines the more stable and accurate sensor to serve as the hardware basis of high-quality environment detection for autonomous vehicles. Based on these results, this study combines the ideas of previous researchers to offer opinions on improving measurement accuracy and range, and proposes new development directions for the environment detection technology of future autonomous vehicles.
2.Common sensors for autonomous vehicles
Road conditions can be divided into two types: the structured road, a lane with clear marking lines and road boundaries, and the unstructured road, such as dirt roads and rural roads [3]. Most of the time, cars pass along such roads while detecting pedestrians, traffic signs and so on, which makes environmental sensing technologies essential.
2.1.Camera
For visual perception, a camera is usually used as the sensor. There are different types of vehicle cameras for environment detection; the focus here is on the more common and popular types. Generally, the field of view of a monocular vehicle camera is 50° to 60°, and its visual range is 100 m to 200 m [4]. The binocular (stereo) camera calculates depth from parallax, that is, from the difference between the two images caused by the two camera viewpoints, so it is more accurate than the monocular camera. Whichever is used, Ye et al. clarified that a car obtains surrounding environment information through this system, mimics a human field of vision, and performs real-time segmentation and topological path recording of all the routes it passes [5]. Another sensor is the trinocular camera, which uses three monocular cameras with different focal lengths to broaden the observation viewpoint and to compensate for the fact that a wide field of view and a long observation distance cannot both be achieved at once. The observation distances of the three cameras are 50 meters, 100 meters and 200 meters respectively, with values fluctuating between them as a result of the different focal lengths [3]; technically, however, this design is more difficult to realize.
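To make the parallax principle concrete, the sketch below recovers depth from the pixel disparity between the two views. This is an illustrative calculation rather than any vendor's implementation; the focal length and baseline values are assumptions chosen only for the example.

```python
# Stereo depth from disparity: Z = f * B / d, where f is the focal length
# in pixels, B the baseline between the two cameras in meters, and d the
# horizontal pixel disparity of the same point in the left and right images.

def stereo_depth(disparity_px: float, focal_px: float = 1200.0,
                 baseline_m: float = 0.3) -> float:
    """Return the depth in meters of a point with the given disparity."""
    if disparity_px <= 0:
        raise ValueError("Disparity must be positive for a valid match.")
    return focal_px * baseline_m / disparity_px

# A nearby object produces a large disparity, a distant one a small disparity:
print(stereo_depth(36.0))  # 10.0 m
print(stereo_depth(4.5))   # 80.0 m
```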
2.2.Millimetre-wave radar
Another familiar environment detection technology is radar. Two styles of radar detection are widely used in practice: millimetre-wave radar (MWR) and Light Detection and Ranging (LiDAR).
MWR works by transmitting an electromagnetic wave, receiving the reflected signal, and recording the time of flight of the returned wave to calculate the relative distance to the target. According to the Doppler principle, the relative velocity of the target can be calculated simultaneously from the measured frequency shift. The general equation for a radar's Doppler frequency shift can be written as
\( f_{D}=\frac{2V_{r}f}{c}=\frac{2V_{r}}{\lambda}\ \ \ (1) \)
where \( f_{D} \) is the Doppler frequency shift, \( V_{r} \) the relative radial velocity of the target, \( f \) the transmitted frequency, \( c \) the speed of light, and \( \lambda \) the wavelength.
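As a worked example of Eq. (1) and of the time-of-flight relation described above, the sketch below recovers range from the round-trip delay and relative radial velocity from the Doppler shift. The 77 GHz carrier is an assumed value typical of long-range automotive radar, not a figure taken from the cited sources.

```python
C = 3.0e8          # speed of light in m/s
F_CARRIER = 77e9   # assumed 77 GHz LRR carrier frequency

def range_from_tof(round_trip_s: float) -> float:
    """Relative distance: the wave travels to the target and back,
    so the one-way range is half the round-trip distance."""
    return C * round_trip_s / 2.0

def velocity_from_doppler(f_doppler_hz: float) -> float:
    """Invert Eq. (1): V_r = f_D * c / (2 * f)."""
    return f_doppler_hz * C / (2.0 * F_CARRIER)

print(range_from_tof(1.0e-6))        # 150.0 m for a 1 microsecond echo
print(velocity_from_doppler(10264))  # about 20 m/s closing speed
```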
Commonly used in-vehicle millimetre-wave radars can be separated into three categories on the basis of operating frequency: short-range radar (SRR), long-range radar (LRR) and 79 GHz millimetre-wave radar (79 GHz MWR), each of which has a different detection distance range.
2.3.LiDAR
LiDAR is a remote sensing technology based on the emission of infrared beams or laser pulses, which strike a target object and are reflected back to the instrument; through this process LiDAR can reconstruct a 3D scene. First developed in the 1960s and initially used for mapping in the aerospace field, it is now becoming one of the core perception technologies for autonomous vehicles [6,7].
3.Feasibility analysis of sensors
Cameras tend to be the most common sensors used for visual perception and are often employed in forward collision warning, lane departure warning and traffic sign recognition systems. In a traditional camera, light from outside illuminates an array of photosensitive cells, creating a photoelectric effect that generates an electrical charge [4]. Vision cameras capture rich texture and feature information. Unlike millimetre-wave radar and LiDAR, they work with natural light during the daytime and collect image data by identifying the colors of cars and traffic signals, which enables traffic sign detection, free-space detection and other functions. Compared with other sensors, the camera is not very demanding on its working environment and can recognize distant targets at high resolution under sufficient light; it is also a low-cost piece of observation hardware, which is one of the reasons it is so widely used. But the camera has shortcomings. Under direct light or backlit shadow, imaging quality is poor; changes in light intensity directly affect the accuracy of object recognition; and the camera can hardly carry out road observation in bad weather, so external factors disturb it much more easily. When the car moves at high speed its field of view becomes blurred, and current technology struggles to ensure recognition accuracy under such dynamic motion.
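As one simplified illustration of the color-based recognition mentioned above, the sketch below isolates strongly red pixels, a crude proxy for a red traffic light, using OpenCV. This is a minimal sketch under loose assumptions: the HSV thresholds are rough guesses, the input file name is hypothetical, and real systems use trained detectors rather than fixed color thresholds.

```python
import cv2
import numpy as np

def red_light_mask(bgr_image: np.ndarray) -> np.ndarray:
    """Return a binary mask of strongly red pixels. Red wraps around
    the hue axis in HSV, so two hue ranges are combined."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    lower = cv2.inRange(hsv, (0, 120, 120), (10, 255, 255))
    upper = cv2.inRange(hsv, (170, 120, 120), (180, 255, 255))
    return cv2.bitwise_or(lower, upper)

frame = cv2.imread("intersection.jpg")  # hypothetical input image
if frame is not None:
    mask = red_light_mask(frame)
    print("red pixels:", int(cv2.countNonZero(mask)))
```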
SRR operates at 24 GHz for close-range sensing, while LRR operates at 77 GHz for long-range sensing. Compared with 79 GHz radar sensors, 24 GHz radar sensors have a more limited resolution in distance, speed and angle, leading to problems in recognizing and responding to multiple hazards [7]. Millimetre-wave radar measures long distances, typically beyond 200 meters, and is little affected by weather, as its electromagnetic waves penetrate rain, snow, fog and dust well [8]. Its disadvantage is a poor recognition rate for materials with weak echoes, such as pedestrians, traffic cones or plastic products, while its particular sensitivity to metal results in higher false-alarm rates; millimetre-wave radar is also unable to distinguish static objects from road signs. For example, an animal carcass and the road surface may pose a challenge for the radar because of the similarity of their Doppler shifts [7]. Millimetre-wave radar is now used by Tesla [5].
LiDAR is also now used by major automotive manufacturers in Advanced Driver Assistance Systems (ADAS). In the mid-1990s, laser scanner manufacturers built and deployed the first commercially available LiDARs, with 2,000 to 25,000 pulses per second (PPS), for terrain mapping applications [6]. LiDARs on the market today can be classified into three categories by mode of operation: mechanical LiDAR, hybrid solid-state LiDAR and solid-state LiDAR. Mechanical LiDAR uses a rotary motor at its base to sweep the laser beam through a 360° scan, obtaining one frame of laser point cloud data per revolution; distance is determined from the time difference and phase difference of the laser signal, and a polar coordinate relationship can be built from the angle of each scan line and the rotation angle of the scan. For 3D LiDAR, the point cloud data (PCD) contains the x, y and z coordinates as well as intensity information of the scene or surrounding obstacles. Hybrid solid-state LiDAR removes the external rotating mechanical components and instead integrates very compact miniature scanning mirrors directly on a silicon chip via a Micro-Electro-Mechanical System (MEMS); the mirrors reflect the laser light and scan through micron-scale motion. Finally, solid-state LiDAR such as the optical phased array changes the direction of the laser beam by adjusting phase offsets, enabling scanning of an entire plane without moving parts. Existing LiDAR (divided into two-dimensional and three-dimensional LiDAR) has a farthest detection distance of about 200 meters and a detection angle range of 15-360 degrees; it is insensitive to lighting changes, can perceive at night, has higher ranging accuracy than other sensors and a certain degree of anti-jamming ability, and perceives richer information about the surroundings [3]. It also has shortcomings: rain, snow, fog, dust and other weather degrade its performance, weakening its speed and road-marking recognition ability; its ranging accuracy is poor for materials with low reflectivity; and the hardware is expensive. LiDAR is now widely used in the driver assistance systems of Audi, BMW and other well-known car brands [5].
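To illustrate how the scan angles and measured ranges described above become x, y, z point cloud data, the sketch below converts per-return polar measurements to Cartesian coordinates. It is a minimal sketch under the usual spherical-coordinate convention, not the internal processing of any particular sensor.

```python
import numpy as np

def polar_to_cartesian(ranges_m, azimuth_rad, elevation_rad, intensity):
    """Convert per-return LiDAR measurements (range, horizontal scan
    angle, vertical beam angle) into an N x 4 array of x, y, z, intensity."""
    r = np.asarray(ranges_m)
    az = np.asarray(azimuth_rad)
    el = np.asarray(elevation_rad)
    x = r * np.cos(el) * np.cos(az)   # forward component
    y = r * np.cos(el) * np.sin(az)   # lateral component
    z = r * np.sin(el)                # height component
    return np.stack([x, y, z, np.asarray(intensity)], axis=1)

# One return at 20 m range, 45 degrees azimuth, 2 degrees beam elevation:
pcd = polar_to_cartesian([20.0], [np.pi / 4], [np.deg2rad(2.0)], [0.8])
print(pcd)  # approximately [[14.13, 14.13, 0.70, 0.80]]
```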
4.Comparison between different sensors
This section considers the accuracy of the camera, millimetre-wave radar and LiDAR along different dimensions and, based on the results of the feasibility analysis, suggests a relatively accurate class of sensor.
Analyzing the dimension of detection distance and angle, the camera's farthest range is between 50 and 200 meters, but this changes with weather and light conditions, and the camera cannot give an accurate value for distance measurement. Millimetre-wave radar (long range) has a maximum detection distance of 250 meters and an angle range of about 10-70 degrees. LiDAR (long range) is similar to millimetre-wave radar and can detect objects up to about 200 meters away, but its detection angle range is somewhat wider, approximately 15-350 degrees [3]. Overall, although the camera can see objects farther away, its long-distance measurements are not reliable: monocular estimates begin to lose accuracy beyond roughly 20 meters, and stereo camera measurements of objects beyond 80 meters also decline in accuracy. Millimetre-wave radar and LiDAR are slightly better in this regard.
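The reported fall-off in stereo accuracy with distance follows from the standard disparity error model, a textbook relation rather than a figure from the cited sources: depth error grows quadratically with depth, so a matching error that is negligible at 20 meters becomes significant at 80. The sketch below evaluates this model with assumed camera parameters.

```python
def stereo_depth_error(depth_m: float, focal_px: float = 1200.0,
                       baseline_m: float = 0.3,
                       disparity_err_px: float = 0.5) -> float:
    """Depth uncertainty from a sub-pixel matching error:
    dZ = Z^2 * dd / (f * B), so error scales with the square of depth."""
    return depth_m ** 2 * disparity_err_px / (focal_px * baseline_m)

for z in (20.0, 80.0):
    print(f"at {z:.0f} m: +/- {stereo_depth_error(z):.2f} m")
# at 20 m: +/- 0.56 m; at 80 m: +/- 8.89 m (16x worse for 4x the depth)
```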
Considering the dimension of detection accuracy for road obstacles, pedestrians, traffic signs and other objects, Wang found that cameras have difficulty recognizing distant objects in static images; millimetre-wave radar is weak at recognizing road signs and is more often used to detect large obstacles such as vehicles and pedestrians; and LiDAR is weak at recognizing road signs and speed [4]. In this regard, none of the three sensors can fully handle the variety of complex road surfaces and obstacles.
The last dimension to consider is how strongly interfering factors affect each sensor and whether they degrade final detection accuracy. According to the information provided by Wang, the camera is greatly affected by environmental factors; in rainy or snowy weather and under insufficient light, its ability to detect road obstacles and traffic signs is greatly reduced [4]. In other words, the camera, as a passive sensor, relies on external ambient light and its detection ability drops at night, whereas millimetre-wave radar is an active sensor that is unaffected by day or night and therefore more robust. Especially in weather conditions such as rain, snow, fog and dust, the good penetration of millimetre-wave radar keeps its performance largely unaffected, while the detection performance of the camera and LiDAR is attenuated to varying degrees. In this dimension, millimetre-wave radar is the better choice.
In addition to these dimensions, other factors are considered, such as cost: compared with the camera, LiDAR and millimetre-wave radar are much more expensive. As for speed measurement, only millimetre-wave radar can obtain the velocity of a moving object directly through the Doppler effect; the other two sensors lack this function. More details are shown in Table 1. From the results across the above dimensions, it is suggested to choose the more stable millimetre-wave radar as the main sensor for self-driving cars.
Table 1. Common comparison among sensors [6].
5.Suggestions for future research directions
Although the exploration of different dimensions above has identified a more accurate and stable sensor, millimetre-wave radar, in real life a single sensor obviously cannot deal with all the complex factors a self-driving car faces when operating independently in real scenarios. Introducing multiple sensors that work together can ensure that environment detection is carried out safely. This is referred to as multi-sensor fusion which, compared with single-sensor intelligent recognition, rests on deep fusion through autonomous learning and filter-based fusion of multi-source information. Currently, self-driving cars mainly integrate multiple complementary sensors such as radar, LiDAR and cameras to overcome the limitations of individual sensors operating independently [7]. Fusion techniques include the Bayesian fusion method, the Kalman filter fusion method and the neural network fusion method. The Bayesian information fusion method is an inference method based on probability and statistics; the Kalman filter can predict and correct the position and other state information of an object from a limited and noisy sequence of observations (a minimal sketch is given below); and the neural network method can, through extensive learning and training, eliminate the cross-influence effects that arise when multiple sensors cooperate [3]. Sun et al. proposed a way to further accelerate the development of autonomous driving technology, releasing the largest and most diverse multimodal autonomous driving dataset to date, comprising images recorded by multiple high-resolution cameras and readings from multiple high-quality LiDAR scanners installed on fleets of autonomous vehicles, to ensure more stable mapping of unfamiliar scenes; this is also a good solution, but it requires large-scale data analysis [9]. In the future, it is suggested to develop better and more adaptable fusion technologies. At the same time, a real autonomous on-road driving environment requires millions of interactions between vehicles, people and devices [10], so the collaborative capabilities of autonomous vehicles also need to be considered, which is one of the big development directions people could pursue in the future.
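As a minimal illustration of the Kalman-filter fusion idea mentioned above, the sketch below fuses radar and camera range measurements of the same object, weighting each sensor by an assumed noise variance. This is a one-dimensional, constant-position toy model; production fusion stacks track full state vectors (position, velocity, heading) and use measured noise characteristics.

```python
def kalman_1d(z_radar, z_camera, r_radar=0.25, r_camera=4.0,
              q_process=0.5):
    """Fuse two noisy range-measurement streams of one object.
    r_* are measurement noise variances (radar assumed more precise);
    q_process is the process noise of the constant-position model."""
    x, p = z_radar[0], 1.0           # initial state estimate and variance
    estimates = []
    for zr, zc in zip(z_radar, z_camera):
        p += q_process               # predict: uncertainty grows over time
        for z, r in ((zr, r_radar), (zc, r_camera)):
            k = p / (p + r)          # update: Kalman gain for this sensor
            x += k * (z - x)         # pull estimate toward the measurement
            p *= (1.0 - k)           # uncertainty shrinks after each update
        estimates.append(x)
    return estimates

radar = [50.1, 49.8, 50.2, 50.0]
camera = [52.0, 48.5, 51.0, 49.0]    # noisier measurements
print(kalman_1d(radar, camera))      # estimates track the precise radar
```

Because the camera's assumed variance is sixteen times the radar's, its measurements receive a proportionally smaller gain, which is exactly the weighting behaviour that makes filter-based fusion robust to one degraded sensor.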
6.Conclusion
This paper describes the characteristics, functions and applications of three types of sensors, the camera, millimetre-wave radar and LiDAR, and analyses their feasibility. Their existing advantages and limitations are addressed, and a comprehensive evaluation and comparison of the three is provided. It is concluded that millimetre-wave radar can survey road conditions more consistently than the other two sensors in poor weather and can feed timely information on pedestrians, obstacles and traffic signs back to the autonomous driving decision-making system through its algorithms, which greatly helps the detection efficiency and stability of intelligent vehicles. With a view to practical application, this paper proposes considering more complex factors to ensure safe, stable and accurate road exploration. A single sensor cannot handle all circumstances, so having multiple sensors work together to improve survey coverage, range, accuracy and stability should be one direction for the development of environment detection technology; in a sense, environment detection that relies on a single sensor works in theory but not quite in application. This paper also suggests improving the range and stability of existing sensors to broaden the scenarios to which a single sensor applies, which is an alternative development direction; however, since improving the monitoring capability of existing sensors costs more time than deploying multiple sensors together, combining the two development ideas in the near term is the optimal solution. Finally, it is expected that more accurate and stable sensors will be put into self-driving technology in the future, helping self-driving cars accurately model the real driving environment.
References
[1]. Hoss, M., Scholtes, M., & Eckstein, L. (2022). A Review of Testing Object-Based Environment Perception for Safe Automated Driving. Automotive Innovation, 5, 223–250. https://doi.org/10.1007/s42154-021-00172-y
[2]. Yuan, Q., Peng, Y., Xu, X. D., & Wang, X. H. (2021). Key points of investigation and analysis on traffic accidents involving intelligent vehicles. Transportation Safety and Environment, 3(4), tdab020. https://doi.org/10.1093/tse/tdab020
[3]. Song, Z., & Deng, H. (2023). Research on Sensor Optimization Technology of Driverless Vehicle. Frontiers in Computing and Intelligent Systems, 4(2), 131-137. https://doi.org/10.54097/fcis.v4i2.10370
[4]. Wang, P. (2021). Research on Comparison of LiDAR and Camera in Autonomous Driving. Journal of Physics: Conference Series, 2093, 012032. https://doi.org/10.1088/1742-6596/2093/1/012032
[5]. Ye, L., Duan, T., & Zhu, J. (2020). Neural network-based semantic segmentation model for robot perception of driverless vision. IET Cyber-Systems and Robotics, 2(4), 190–196. https://doi.org/10.1049/iet-csr.2020.0040
[6]. Ignatious, H. A., El-Sayed, H., & Khan, M. (2022). An overview of sensors in Autonomous Vehicles. Procedia Computer Science, 198, 736-741. https://doi.org/10.1016/j.procs.2021.12.315
[7]. Yeong, D. J., Velasco-Hernandez, G., Barry, J., & Walsh, J. (2021). Sensor and Sensor Fusion Technology in Autonomous Vehicles: A Review. Sensors, 21(6), 2140. https://doi.org/10.3390/s21062140
[8]. Zhou, Y., Lu, L., Zhao, H., López-Benítez, M., Yu, L., & Yue, Y. (2022). Towards Deep Radar Perception for Autonomous Driving: Datasets, Methods, and Challenges. Sensors, 22(11), 4208. https://doi.org/10.3390/s22114208
[9]. Sun, P., et al. (2020). Scalability in Perception for Autonomous Driving: Waymo Open Dataset. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 2443-2451. https://doi.org/10.1109/CVPR42600.2020.00252
[10]. Bathla, G., Bhadane, K., Singh, R. K., Kumar, R., Aluvalu, R., Krishnamurthi, R., Kumar, A., Thakur, R. N., & Basheer, S. (2022). Autonomous Vehicles and Intelligent Automation: Applications, Challenges, and Opportunities. Mobile Information Systems, 7632892, 1-36. https://doi.org/10.1155/2022/7632892
Data availability
The datasets used and/or analyzed during the current study will be available from the authors upon reasonable request.