
Research Article
Open access

Wave-Based Sensors Utilized in Autonomous Driving Applications

Haoxiang Qi 1*
  • 1 Electric Engineer and Control College, North China University of Technology, Beijing, China.    
  • *corresponding author 23101010206@mail.ncut.edu.cn
Published on 26 November 2024 | https://doi.org/10.54254/2755-2721/80/2024CH0082
ACE Vol.80
ISSN (Print): 2755-273X
ISSN (Online): 2755-2721
ISBN (Print): 978-1-83558-561-0
ISBN (Online): 978-1-83558-562-7

Abstract

This paper introduces several common sensors used in autonomous driving and the ways in which they are classified. It focuses on comparing sensors that use waves to perceive environmental information, analyzes from several angles how the characteristics of those waves affect sensor performance, and discusses the resulting advantages and disadvantages. Then, from the perspective of sensor application and scheduling, several common strategies for deploying sensors in autonomous driving are introduced, and the principle of each strategy is interpreted and evaluated.

Keywords:

Autonomous driving, autonomous vehicles, sensor, multi-sensor fusion method.


1. Introduction

Sensors are pivotal components of the Autonomous Driving (AD) system. Serving as the critical interface for perceiving the outside world, they directly influence the safety and reliability of AD vehicles by determining the accuracy and timeliness of environmental data acquisition [1]. A variety of solutions currently exist for AD, most of which combine multiple sensors. This paper focuses on the wave-based sensors used in autonomous driving, which rely on receiving electromagnetic or mechanical waves reflected from the environment. Through sophisticated algorithms, these sensors build a perception of the external environment that supports route planning and the other functions essential to autonomous driving. The paper first analyzes the characteristics of each sensor at the level of its operating principle. It then focuses on the characteristics of the waves adopted by different sensors and their capacity to collect information effectively, and examines the application of these sensors in autonomous driving from multiple angles, specifically in general urban scenarios; functions such as automatic parking, which are not part of typical road driving, fall outside the paper's primary scope. Finally, the paper concludes with a summary of the key findings and discussions, along with potential directions for future research that could enhance the effectiveness and safety of AD systems.

Most AD schemes are based on multi-sensor fusion, which circumvents the limitations of any single sensor. However, within a given combination of sensors, each sensor is typically assigned a different task according to its specialities [1]. This makes performance comparison across sensors crucial when deciding on a fusion plan. This paper summarizes the current development of autonomous driving, analyzes existing problems, and identifies alternative solutions for different driving environments by comparing sensors and schemes. The aim is to provide improved ideas for subsequent sensor-fusion schemes and to motivate further advances in sensor development.

2. Wave-based sensor

Wave-based sensors detect waves and convert the signal carried by each wave into an electrical signal. Since the waves are usually reflected from the surrounding environment, they carry information about that environment [2], and the sensors can use this information to recover the geometry of the scene [3]. Unlike GPS and IMU sensors, wave-based sensors handle the actual traffic situation: from the perspective of time and space, they are aimed at short-term, close-range detection and allow the vehicle to react to what is happening around it. GPS and IMU, by contrast, cover a longer time horizon and a wider range and are used to pre-plan and pre-deploy the autonomous-driving process.

Although all of these sensors rely on waves bounced back from objects, the physical characteristics of the particular wave are the decisive factor in what each sensor can do. The sensors are therefore classified here according to the waves they sense.

2.1. Camera

The camera is based on visible light (roughly 400-750 THz) [4]. Complementary metal-oxide-semiconductor (CMOS) and charge-coupled-device (CCD) cameras are the types commonly used in AD [2]. In a CMOS sensor, each photon is converted to a voltage separately at each pixel, and several transistors measure and amplify the signal from each pixel. A CCD only measures the total number of photons arriving at a pixel, with color separated by a color filter [2]. A single camera captures the scene as a flat image, but with a convolutional neural network trained on a suitable data set, such a 2D perception system can also recognize obstacles ahead. Moreover, a system composed of two or more cameras can estimate object depth more accurately from the disparity between the views perceived by the cameras.
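As an illustration of the stereo principle just described, the minimal Python sketch below recovers depth from the pixel disparity between two rectified cameras. The focal length, baseline, and disparity values are assumed for illustration only and are not taken from the cited sources.

```python
# Minimal sketch of stereo depth estimation for a rectified two-camera rig.
# Focal length, baseline, and the disparity value are illustrative assumptions.

def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth of a point from the pixel disparity between the left and right images."""
    if disparity_px <= 0:
        raise ValueError("Disparity must be positive for a point in front of the rig")
    return focal_px * baseline_m / disparity_px

# Example: a 1000-pixel focal length, 0.3 m baseline, and 25 px disparity give 12 m depth.
print(stereo_depth(focal_px=1000.0, baseline_m=0.3, disparity_px=25.0))
```

The farther the object, the smaller the disparity, which is one reason camera-based depth estimates degrade with distance.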

2.2. Radar

Radar is based on millimetre-wave radio signals (24/77/79 GHz) [4]. Its principle is to measure the time gap between the emission of a signal and the reception of its echo [4].

Radar is essentially a 1D sensor because it can only perceive the depth of an object. The receiver must strictly isolate the transmitted signal while listening for the echo, and a high-power signal has to be transmitted in short, repeated cycles, which leads to demanding hardware requirements and a complicated structure [1].

2.3. Lidar

Lidar is based on laser light in the near-infrared band, as defined in ISO 20473. Two types of Lidar are commonly used in AD: 2D Lidar and 3D Lidar. 2D Lidar uses a single laser beam and measures distance through the time of flight (ToF) of the pulse. 3D Lidar emits an array of laser diodes, and each beam provides accurate depth information for the point it hits [5]. Many such detection points together form a surface, yielding a highly accurate 3D map of the environment [4]. A further concern with 3D Lidar is the balance between the number of sampling points and other aspects of performance: increasing the sampling points yields more detailed position and shape information, but it also raises the cost and energy consumption of the Lidar.
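The trade-off between sampling density, cost, and energy can be made concrete with a rough point-budget calculation. In the sketch below the channel count, rotation rate, and horizontal angular resolution are assumed example values for a spinning 3D Lidar, not specifications from the cited sources.

```python
# Rough point-budget sketch for a spinning 3D Lidar.
# Channel count, rotation rate, and horizontal resolution are assumed example values.

def points_per_second(channels: int, rotation_hz: float, horiz_res_deg: float) -> int:
    """Approximate number of range measurements produced per second."""
    firings_per_revolution = channels * round(360.0 / horiz_res_deg)
    return int(firings_per_revolution * rotation_hz)

# Doubling the channels (or halving the angular step) doubles the data rate,
# which is the cost/energy trade-off discussed above.
print(points_per_second(channels=32, rotation_hz=10.0, horiz_res_deg=0.2))  # 576,000 pts/s
print(points_per_second(channels=64, rotation_hz=10.0, horiz_res_deg=0.2))  # 1,152,000 pts/s
```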

2.4. Ultrasonic sensor

The ultrasonic sensor is based on ultrasonic waves (20-40 kHz). Similar to radar and 2D Lidar, it relies on the ToF of the sound wave from the moment it is emitted until the reflected signal is received. Equation 1 below shows how the sensor estimates distance from this time [6].

\( d_{OneWay}=\frac{t_{RoundTrip} \times v_{Sound}}{2} \) (1)
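A direct implementation of Equation 1 is sketched below. The speed of sound is taken as roughly 343 m/s at 20 °C, which is an assumed constant rather than a value from the paper.

```python
# Sketch of Equation 1: one-way distance from the round-trip time of an ultrasonic pulse.
# The speed of sound (~343 m/s at 20 deg C) is an assumed constant.

SPEED_OF_SOUND_M_S = 343.0

def one_way_distance(round_trip_s: float, v_sound: float = SPEED_OF_SOUND_M_S) -> float:
    """d_OneWay = (t_RoundTrip * v_Sound) / 2"""
    return round_trip_s * v_sound / 2.0

# Example: an echo arriving after 5.8 ms corresponds to roughly 1 m.
print(one_way_distance(0.0058))  # ~0.99 m
```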

Although the detection methods are similar, different waves have different physical features, and some information is only carried by specific waves. For instance, road information such as lane markings and traffic signs is designed for human vision [2]: it consists of flat geometric patterns that are hard to detect with anything other than visible light. Because the camera detects visible light, it is an indispensable sensor for AD. The laser, in turn, has outstanding penetrability, which lets it gather reflection information beyond the reach of visible light and makes it ideal for long-range depth detection. The ultrasonic sensor is cheap, widely applicable, simple in structure, and accurate, which makes it widely used in vehicles and AD [6]; for example, it can provide distance cues for the rear blind area when reversing.

3. Constructing a measurement system for autonomous-driving sensors

To evaluate a sensor's advantages and disadvantages, it should be tested on its ability to obtain information from the environment that supports correct decisions. This depends mainly on the features of the sensor itself (above all, the wave it uses) and on its performance when working within an entire AD system. This part considers the sensor's ability to obtain accurate information, focusing on its individual features, which can be described as the scope and accuracy of information acquisition.

3.1. Range and penetration effect

In general, range is closely related to how quickly the wave attenuates. Radar and Lidar have the best penetration, which gives them a longer effective perception range of around 200 meters [7]. The camera gathers information optically and, like the human eye, is easily affected by visibility; moreover, camera-based ranging relies on deep-learning methods, so its accuracy decreases with distance. These two points limit its reach, and the camera's effective perception range is around 80 meters. Ultrasonic waves travel at the speed of sound and are reflected by most materials, which makes ultrasonic sensors accurate at short range and low speed but poorly suited to high speed and long range [6].

3.2. FOV

Field of view (FOV) describes the angular breadth a sensor covers; sensors with smaller FOV angles are more limited in what they can see. Although coverage can be expanded by stacking more sensors, the overall cost and the remaining blind areas are still governed by the FOV angle. In general, cameras have a very wide FOV. 3D Lidar also offers a good view, typically 360° horizontally and 27° vertically [5]. By comparison, radar has only a medium FOV angle [4], and ultrasonic sensors have a small FOV, usually around 5 degrees [6].
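To illustrate how stacking sensors widens coverage at the cost of more units, the sketch below estimates how many sensors of a given horizontal FOV are needed to cover 360° with some overlap between neighbours. The overlap margin is an assumed design parameter, not a value from the cited sources.

```python
import math

# Sketch: number of sensors of a given horizontal FOV needed to cover 360 degrees.
# The overlap margin between adjacent sensors is an assumed design parameter.

def sensors_for_full_coverage(fov_deg: float, overlap_deg: float = 2.0) -> int:
    effective = fov_deg - overlap_deg
    if effective <= 0:
        raise ValueError("FOV must exceed the overlap margin")
    return math.ceil(360.0 / effective)

print(sensors_for_full_coverage(120.0))  # wide-FOV camera: 4 units
print(sensors_for_full_coverage(5.0))    # narrow ultrasonic cone: 120 units
```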

3.3. Stability and anti-interference capability

Weather is probably the greatest source of interference in AD. Conditions such as rain or fog easily put the environment into low visibility.

4. Application features

This section examines the application ability of sensors within AD systems, exploring how a single sensor contributes to the whole system, and then introduces some of the existing fusion solutions for autonomous driving. In a multi-modal fusion plan for AD, practical considerations such as deployment usability, price, and size must be taken into account. In addition, if a sensor perceives information other than distance, that information can be critical to the decision-making process of the AD system and cannot be replaced by other sensors.

4.1. Sensor’s speciality

As mentioned above, the camera is considered irreplaceable, since most traffic signs can only be detected through visible light. This is why the pure-vision system is the only single-sensor autonomous-driving system that exists today. The camera and 3D Lidar can also recognize object shapes.

Speed detection is also important. Traffic is constantly changing, so the ability to anticipate changes in road conditions matters, and speed detection is essential to it. The camera can use artificial intelligence to infer an object's speed from how its apparent shape changes. Radar exploits the Doppler effect to provide more accurate, wide-range speed measurements; however, it can mislead feature extraction when targets are occluded [7].
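The Doppler-based radial-speed measurement mentioned above can be sketched as follows. The 77 GHz carrier frequency and the Doppler-shift value are assumed example numbers for illustration.

```python
# Sketch of radial speed estimation from the Doppler shift of a radar echo.
# The 77 GHz carrier frequency and the measured shift are assumed example values.

SPEED_OF_LIGHT_M_S = 3.0e8

def radial_speed(doppler_shift_hz: float, carrier_hz: float) -> float:
    """v = f_d * c / (2 * f_c); positive means the target is approaching the sensor."""
    return doppler_shift_hz * SPEED_OF_LIGHT_M_S / (2.0 * carrier_hz)

# Example: a 5.1 kHz shift on a 77 GHz carrier is roughly 10 m/s (about 36 km/h).
print(radial_speed(doppler_shift_hz=5.1e3, carrier_hz=77e9))
```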

4.2. Multi-modal sensor fusion for autonomous driving

Fusion methods can be divided mainly according to how they process the sensor data. Following Huang et al., they fall into two groups: weak fusion and strong fusion [8].

4.2.1. Weak fusion. In a weak-fusion method, the raw data generated by one sensor is directly influenced by the data from another sensor. For example, obstacles recognized by the camera through a CNN are passed to the Lidar pipeline as a supervisory signal, so that the raw data delivered by the Lidar already incorporates the obstacle information recognized from the camera [8].
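A minimal sketch of this supervisory pattern is given below: 2D boxes detected by the camera are used to keep only the Lidar points whose image-plane projections fall inside those boxes. The box format and the assumption that the points are already projected into pixel coordinates are simplifications for illustration, not the pipeline of [8].

```python
from typing import List, Tuple
import numpy as np

# Weak-fusion sketch: camera detections act as a supervisory signal that filters
# raw Lidar points. Pre-projected pixel coordinates and the box format are
# simplifying assumptions.

Box = Tuple[float, float, float, float]  # (x_min, y_min, x_max, y_max) in pixels

def points_in_boxes(points_uv: np.ndarray, boxes: List[Box]) -> np.ndarray:
    """Boolean mask of projected Lidar points that fall inside any camera box."""
    keep = np.zeros(len(points_uv), dtype=bool)
    for x0, y0, x1, y1 in boxes:
        keep |= ((points_uv[:, 0] >= x0) & (points_uv[:, 0] <= x1)
                 & (points_uv[:, 1] >= y0) & (points_uv[:, 1] <= y1))
    return keep

pts_uv = np.array([[320.0, 240.0], [10.0, 10.0]])   # projected Lidar points (pixels)
mask = points_in_boxes(pts_uv, [(300.0, 200.0, 400.0, 300.0)])
print(mask)  # [ True False] -> only the first point is kept
```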

This fusion method produces the final fused result already at the raw-data acquisition stage, so the output can be used without a separate fusion-processing step. To some extent this reduces the time spent on fusion processing and improves the reaction speed of the autonomous-driving system, which is important for safety on high-speed road sections.

However, weak fusion is a rule-based method in which one data stream serves as a supervisory signal and the signals are integrated at the object level. The supervisory signal's original data therefore does not itself participate in path-planning decisions after fusion: if the source of the supervisory signal is faulty, the system can only accept and fuse the wrong signal through the algorithm, without realizing that the two signals are inconsistent and that the supervisory signal may be at fault. In addition, the fusion algorithm is essentially fixed, so the relationship between the supervisory and supervised signals is constant; the weight given to information from different sensors cannot be balanced at the algorithm level, which reduces the flexibility of this fusion method across different environments. The accuracy demanded of the algorithm is also very high.

4.2.2. Strong fusion. Strong fusion has several branches, but what they have in common is a heavy reliance on the point-cloud data generated by 3D Lidar: when the back end processes the data from different sensors, 3D Lidar carries more weight in distance perception than the camera and the other sensors [8]. Four main approaches fall under strong fusion. By exploiting the excellent performance of camera neural networks, meaningless point-cloud data from the other sensors (mainly Lidar) can be filtered out, while the accurate and stable depth data of the Lidar provides good precision. This kind of fusion greatly improves the performance and recognition accuracy of autonomous-driving perception and widens its range of application scenarios and its safety margin.

The first is early fusion. Early fusion first analyses the depth information in the camera data and integrates it at the data level; one instance is to project the camera-derived depth as an extra point cloud that is added to the Lidar points before further processing [8].
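The pseudo-point-cloud idea can be sketched as a simple concatenation at the data level. The array shapes and the extra "source" flag appended to each point are illustrative assumptions, not the representation used in [8].

```python
import numpy as np

# Early-fusion sketch: camera-derived depth points are appended to the Lidar
# point cloud before any further processing. Array shapes and the 'source'
# flag appended to each point are illustrative assumptions.

def early_fuse(lidar_xyz: np.ndarray, camera_pseudo_xyz: np.ndarray) -> np.ndarray:
    """Return one fused point cloud; column 3 marks the source (0 = Lidar, 1 = camera)."""
    lidar = np.hstack([lidar_xyz, np.zeros((len(lidar_xyz), 1))])
    pseudo = np.hstack([camera_pseudo_xyz, np.ones((len(camera_pseudo_xyz), 1))])
    return np.vstack([lidar, pseudo])

fused = early_fuse(np.random.rand(5, 3), np.random.rand(3, 3))
print(fused.shape)  # (8, 4): downstream processing sees a single, larger point cloud
```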

Deep fusion obtains depth information from both the camera and the Lidar, mixes the data at the feature level, and generates fused 3D voxel features [8]. In this way, both the raw data and the high-level semantic information of multiple sensors can be exploited, with features fused in a cascading manner.

Late fusion is also known as object-level fusion [8]. Here the fusion step is placed after recognition: the 2D detections from the camera and the 3D detections from the Lidar are first produced separately, and the processed results are then fused at the object level. The method optimally integrates the object information identified by the two modalities to make the final prediction.
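A minimal sketch of such object-level fusion is given below: detections from the two modalities are matched by overlap and their confidence scores are averaged. Using image-plane IoU for matching and simple score averaging are illustrative assumptions, not the fusion rule of [8].

```python
from typing import List, Tuple

# Late-fusion sketch: detections from two modalities are matched by 2D IoU and
# their confidences averaged. Image-plane IoU and plain score averaging are
# illustrative simplifications.

Det = Tuple[float, float, float, float, float]  # (x0, y0, x1, y1, score)

def iou(a: Det, b: Det) -> float:
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix1 - ix0) * max(0.0, iy1 - iy0)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

def late_fuse(camera: List[Det], lidar: List[Det], thr: float = 0.5) -> List[Det]:
    fused = []
    for c in camera:
        best = max(lidar, key=lambda l: iou(c, l), default=None)
        if best is not None and iou(c, best) >= thr:
            fused.append((*c[:4], (c[4] + best[4]) / 2.0))  # keep camera box, merge scores
    return fused

print(late_fuse([(0, 0, 10, 10, 0.9)], [(1, 1, 11, 11, 0.8)]))  # one fused detection
```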

Unlike the other strong-fusion methods, asymmetric fusion combines object-level information from one modality with data-level information from another; the asymmetry lies in the data. In other words, only one data source supplies the object proposals, while the other data merely assists the prediction and does not participate in proposal generation. For example, the recognition results of the Lidar can be fused asymmetrically with the depth-sensing data of a 2D camera: the Lidar data serves as the primary identification data, while the camera's depth data is used only as auxiliary input for the final task [8].

4.2.3. Single-modal scheme. Although multi-modal methods play an essential role in the stability of AD, single-modal, vision-only autonomous-driving models are also under active research and already have fairly mature applications in assisted driving, the best known being Tesla's FSD system. The single-modal approach greatly reduces the cost of the vehicle, but the burden on the system is not significantly reduced; on the contrary, an object-recognition algorithm that relies solely on the visual system must be more accurate, so it needs more complex and diverse recognition models and more powerful computing chips. Limited by the camera's mounting position and visual range, a pure-vision scheme is also more likely to be occluded or to have larger blind areas. The single-modal method therefore faces more restrictions, and further improvement is needed in the training and tuning of such driving models [8].

5. Conclusion

This paper has focused on the principles of wave-based sensors in autonomous driving and on the characteristics that follow from those principles. In the context of urban commuting, it has mainly discussed the features that arise from each sensor's physical characteristics. Autonomous driving still has a long way to go, and sensors will continue to be updated and iterated; however, no matter how sensor performance changes, the physical characteristics of the wave a sensor uses cannot be altered. Focusing on the physical characteristics of the carrier wave therefore reveals the potential of each sensor and provides direction for further research. The second part of this paper introduced existing AD strategies by category, including multi-modal sensor-fusion methods and single-sensor schemes, and analyzed the pros and cons of the sensors commonly used in AD systems from the level of sensor scheduling in applications. In addition, during this study no established set of criteria for measuring sensor performance in AD was found; this paper has tried to consider sensor performance from multiple application angles, to organize these criteria systematically, and to explain the reason for measuring each indicator. This provides guidance for the clear choice of sensors in multi-sensor fusion schemes and for the future development of autonomous driving.


References

[1]. Z. Wang, Y. Wu & Q. Niu. (2020). Multi-Sensor Fusion in Automated Driving: A Survey. In IEEE Access, vol. 8 (pp. 2847-2868). doi: 10.1109/ACCESS.2019.2962554.

[2]. M. Taraba, J. Adamec, M. Danko & P. Drgona. (2018). Utilization of modern sensors in autonomous vehicles (pp. 1-5). doi: 10.1109/ELEKTRO.2018.8398279.

[3]. G. Eskandar, A. Braun, M. Meinke, K. Armanious & B. Yang. (2021). SLPC: A VRNN-based approach for stochastic lidar prediction and completion in autonomous driving. 2021 29th European Signal Processing Conference (EUSIPCO) (pp. 721-725). doi: 10.23919/EUSIPCO54536.2021.9616229.

[4]. Rosique, F., Navarro, P. J., Fernández, C., & Padilla, A. (2019). A systematic review of perception system and simulators for autonomous vehicles research. Sensors, 19(3), 648.

[5]. S. Kuutti, S. Fallah, K. Katsaros, M. Dianati, F. Mccullough & A. Mouzakitis. (2018). A Survey of the State-of-the-Art Localization Techniques and Their Potentials for Autonomous Vehicle Applications. in IEEE Internet of Things Journal, vol. 5, no. 2, (pp. 829-846). doi: 10.1109/JIOT.2018.2812300.

[6]. Toa, M., & Whitehead, A. (2020). Ultrasonic sensing basics. Dallas: Texas Instruments, 53-75.

[7]. J. -P. Giacalone, L. Bourgeois & A. Ancora. (2019). Challenges in aggregation of heterogeneous sensors for Autonomous Driving Systems, 2019 IEEE Sensors Applications Symposium (SAS) (pp. 1-5). doi: 10.1109/SAS.2019.8706005.

[8]. Huang, K., Shi, B., Li, X., Li, X., Huang, S., & Li, Y. (2022). Multi-modal Sensor Fusion for Auto Driving Perception: A Survey. arXiv preprint arXiv:2202.02703.


Cite this article

Qi, H. (2024). Wave-Based Sensors Utilized in Autonomous Driving Applications. Applied and Computational Engineering, 80, 210-215.

Data availability

The datasets used and/or analyzed during the current study will be available from the authors upon reasonable request.

Disclaimer/Publisher's Note

The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of EWA Publishing and/or the editor(s). EWA Publishing and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

About volume

Volume title: Proceedings of CONF-MLA Workshop: Mastering the Art of GANs: Unleashing Creativity with Generative Adversarial Networks

ISBN:978-1-83558-561-0(Print) / 978-1-83558-562-7(Online)
Editor:Mustafa ISTANBULLU, Marwan Omar
Conference website: https://2024.confmla.org/
Conference date: 21 November 2024
Series: Applied and Computational Engineering
Volume number: Vol.80
ISSN:2755-2721(Print) / 2755-273X(Online)

© 2024 by the author(s). Licensee EWA Publishing, Oxford, UK. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
