Applications of Artificial Intelligence on Autonomous Driving

Research Article
Open access

Chule Guan 1*
  • 1 School of Mathematics and Statistics, McMaster University, Hamilton, Canada    
  • *corresponding author guanc12@mcmaster.ca
Published on 26 November 2024 | https://doi.org/10.54254/2755-2721/109/20241349
ACE Vol.109
ISSN (Print): 2755-273X
ISSN (Online): 2755-2721
ISBN (Print): 978-1-83558-737-9
ISBN (Online): 978-1-83558-738-6

Abstract

Artificial intelligence (AI) technology has rapidly developed in recent years and has gradually permeated various aspects of everyday life. The integration of AI with driving technology has given rise to autonomous driving, a technology that is expected to profoundly impact human transportation, efficiency, and quality of life. This paper provides a detailed exploration of the specific applications of AI in the field of autonomous driving and its future development prospects, analyzing the advantages and challenges of these technologies. It also briefly introduces practical application scenarios such as autonomous taxis, intelligent traffic management systems, and long-distance freight transportation, discussing the potential impacts of these technologies on society and the environment. Finally, the paper looks ahead to the profound changes that may result from the combination of autonomous driving technology with other cutting-edge technologies, such as 5G and the Internet of Things (IoT), in shaping future transportation systems.

Keywords:

Autonomous driving, human-vehicle interaction, computer vision systems.


1. Introduction

With the increasing number of vehicles worldwide, traffic congestion and frequent accidents have become urgent problems [1]. According to 2021 statistics from the U.S. National Highway Traffic Safety Administration (NHTSA), the traffic fatality rate rose by about 7% year over year, largely owing to human factors in driving [2]. Because human error is the primary cause of traffic accidents, developing autonomous driving technology to replace traditional human driving is among the most effective ways to address these safety issues.

The foundation of autonomous driving technology includes artificial intelligence (AI), machine learning, and sensor technology, with AI being its core. AI is the study of theories, methods, technologies, and application systems that simulate, extend, and expand human intelligence [3]. The rapid development of AI has had a profound impact on many fields. From initial theoretical research to large-scale applications today, AI technology is beginning to play a vital role in various industries. As computing power increases and algorithms evolve, AI is being applied in the field of automotive driving. Machine learning and deep learning technologies enable AI to perform complex tasks in big data environments, leading to a significant leap in autonomous driving technology [4]. Through extensive data training and learning, AI systems can autonomously perform tasks like environment perception, decision-making, and path planning, achieving highly automated, intelligent driving control that integrates the vehicle with its surroundings. This not only reduces operational costs but also greatly improves driving efficiency, safety, convenience, and comfort, transforming vehicles into smart mobile terminals with broader application prospects [5].

Autonomous driving technology comprises modules such as perception and localization, prediction, planning, and control [6,7]. These modules integrate technologies from automatic control, AI, multi-sensor information fusion, and communication systems. The autonomous driving system relies on perception technologies to gather environmental information, locate the vehicle, and predict the dynamics of other vehicles. The planning module processes the information from the perception and prediction modules to make precise, scientific decisions about the vehicle's driving path until it safely reaches its destination. The control module, after receiving instructions from the planning module, controls the vehicle's operation throughout the journey [8,9]. As both hardware and software components of autonomous driving systems continue to improve, the technology is becoming increasingly mature [10].
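The planning module's search over candidate routes can be illustrated with a minimal grid-based example. The sketch below is not drawn from any production stack: the occupancy grid, unit step costs, and A* search with a Manhattan-distance heuristic are illustrative assumptions.

```python
from heapq import heappush, heappop

def plan_path(grid, start, goal):
    """A* search on a 2D occupancy grid (0 = free, 1 = obstacle).
    Returns a list of (row, col) cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])

    def h(cell):  # Manhattan-distance heuristic to the goal
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    frontier = [(h(start), 0, start, [start])]  # (f, cost, cell, path)
    seen = {start}
    while frontier:
        _, cost, cell, path = heappop(frontier)
        if cell == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cell[0] + dr, cell[1] + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0 and nxt not in seen):
                seen.add(nxt)
                heappush(frontier, (cost + 1 + h(nxt), cost + 1, nxt, path + [nxt]))
    return None

# A toy map: the middle row is blocked except on the right.
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = plan_path(grid, (0, 0), (2, 0))
```

A real planner works over lane graphs and continuous trajectories rather than grid cells, but the structure — expand candidate moves, score them, and commit to the best sequence — is the same.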

This paper explores the specific applications of AI in autonomous driving, analyzing key technologies such as computer vision, sensor fusion, and natural language processing, along with their various models. Although autonomous driving technology currently faces numerous challenges, its potential application prospects remain boundless.

2. Core technologies of autonomous driving

Deep learning is applied in four main areas: computer vision, speech recognition, natural language processing (NLP), and recommendation engines. In the context of autonomous driving, the primary focus is on two core technologies: computer vision and natural language processing.

2.1. Computer vision

Computer vision is a technology through which machines collect external information, then recognize and understand that information in order to control their movements autonomously [11]. Autonomous driving based on computer vision takes the images observed by visual sensors as input and driving actions as output. Current approaches fall into three main categories: mediated perception, direct perception, and end-to-end control. Mediated perception divides the driving task into subtasks such as object detection, object tracking, scene semantic segmentation, camera modeling and calibration, and 3D reconstruction. Direct perception first learns key indicators of the traffic environment, after which the control logic takes over. End-to-end control directly establishes a mapping from input to action, which can be formulated as an image classification or regression task [12]. Visual sensors are the foundation and key component of computer vision technology; optimal image observation is typically achieved with the following types of sensors, or a fusion of them.
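As a toy illustration of the end-to-end idea, the sketch below maps a synthetic three-pixel "camera frame" directly to a steering action by nearest-centroid classification. Real systems learn this mapping with deep networks from large labeled driving datasets; the centroids, frames, and action names here are invented purely for illustration.

```python
# Toy end-to-end controller: classify a tiny grayscale "frame" straight
# into a steering action. A production system would replace the centroid
# table with a trained neural network over full camera images.
ACTIONS = ["left", "straight", "right"]

# Stand-ins for learned prototypes: bright pixels on the left, center,
# or right third of a 1x3 frame indicate where the lane has drifted.
CENTROIDS = [[1.0, 0.0, 0.0],   # lane drifts left  -> steer left
             [0.0, 1.0, 0.0],   # centered          -> go straight
             [0.0, 0.0, 1.0]]   # lane drifts right -> steer right

def steer(frame):
    """Return the action whose prototype frame is closest to the input."""
    dists = [sum((c - f) ** 2 for c, f in zip(cen, frame)) for cen in CENTROIDS]
    return ACTIONS[dists.index(min(dists))]
```

The key property of end-to-end control is visible even in this toy: there is no explicit object detector or lane model in between, only a direct input-to-action mapping.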

2.1.1. LiDAR (Light Detection and Ranging). LiDAR emits laser beams and measures their return time to generate a highly accurate 3D map of the environment. It is used to create a 3D point-cloud map around the vehicle, recognize and classify objects (such as pedestrians, vehicles, and obstacles), and provide high-precision distance information, making it especially suited to complex urban environments and low-light conditions.
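The time-of-flight geometry described above can be sketched as follows; the function name and angle convention are illustrative assumptions, not a real sensor-driver API.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def lidar_point(round_trip_s, azimuth_deg, elevation_deg=0.0):
    """Convert one LiDAR return (round-trip time plus beam angles)
    into a 3D point in the sensor frame."""
    r = C * round_trip_s / 2.0  # one-way range from time of flight
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    # Spherical-to-Cartesian conversion for the point cloud
    x = r * math.cos(el) * math.cos(az)
    y = r * math.cos(el) * math.sin(az)
    z = r * math.sin(el)
    return (x, y, z)
```

A spinning LiDAR repeats this conversion for millions of returns per second, and the resulting point cloud is what downstream detection and classification operate on.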

2.1.2. Cameras. Cameras capture visible light images, simulating the visual function of the human eye. They record detailed image data of the surrounding environment and use image processing algorithms to identify objects and scenes. Cameras are used for object detection and recognition (such as traffic signs, signal lights, lane lines, vehicles, and pedestrians), scene understanding (such as road conditions and obstacle types), and navigation assistance. Cameras are typically used in computer vision tasks such as image classification and semantic segmentation.

2.1.3. Millimeter-Wave Radar. This radar uses high-frequency radio waves to detect the position, speed, and direction of objects. It can penetrate adverse weather conditions such as rain, fog, and snow, providing reliable distance and speed information. It is mainly used to detect vehicles, pedestrians, and lane changes. Radar is a key component of systems like automatic emergency braking (AEB) and adaptive cruise control (ACC), helping vehicles maintain a safe distance and speed under various weather conditions.

2.1.4. Ultrasonic Sensors. Ultrasonic sensors measure the distance to objects by emitting sound waves and detecting their reflections, typically used to detect nearby objects. They are often used in parking assistance systems and low-speed obstacle avoidance, helping vehicles maneuver in tight spaces and detect nearby obstacles such as walls or parking boundaries.

2.1.5. Inertial Measurement Unit (IMU). IMUs combine accelerometers and gyroscopes to measure a vehicle's acceleration, rotational speed, and tilt angle. They provide information about the vehicle's motion state, helping it maintain balance and direction in dynamic environments. IMUs are particularly useful when GPS signals are weak or lost, helping stabilize the vehicle’s navigation system.
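The integration of accelerometer readings into a motion state can be sketched with one-axis Euler integration. Real IMU pipelines additionally fuse gyroscope-derived orientation and correct for bias and drift, all of which this toy omits.

```python
def dead_reckon(accels, dt, v0=0.0, x0=0.0):
    """Integrate accelerometer samples (m/s^2) taken at a fixed timestep
    dt into velocity and position along one axis (simple Euler steps)."""
    v, x = v0, x0
    for a in accels:
        v += a * dt  # acceleration -> velocity
        x += v * dt  # velocity -> position
    return v, x
```

Because each step compounds the previous one, small sensor errors accumulate over time — which is exactly why IMU dead reckoning is used to bridge short GPS outages rather than replace GPS outright.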

2.1.6. Global Positioning System (GPS). GPS calculates the vehicle's global geographic position by receiving signals from satellites, providing precise location information. It is used for global positioning and navigation, helping the vehicle determine its position within a road network. High-precision GPS, combined with other sensors, can achieve centimeter-level accuracy, essential for path planning in fully autonomous driving.

2.1.7. Visual-Inertial Odometry (VIO). VIO combines camera and IMU data to estimate the vehicle's relative motion trajectory by tracking the movement of image feature points and IMU motion data. It enhances the vehicle’s positioning and navigation capabilities, especially when GPS signals are unstable. VIO is often used to improve the accuracy of environmental perception, helping the vehicle navigate accurately in complex environments.

2.1.8. Infrared Cameras. Infrared cameras capture the thermal radiation of objects to generate images, allowing them to work in complete darkness or low-light conditions. They are used in night vision systems to detect pedestrians, animals, and other obstacles in the dark or adverse weather, enhancing driving safety.

2.1.9. Multi-Sensor Fusion Technology. Multi-sensor fusion technology integrates data from different sensors to create a more complete and reliable environmental perception model [13]. Each sensor has its own advantages and limitations, and fusion maximizes their strengths while compensating for the shortcomings of individual sensors. For example, integrating LiDAR and camera data improves obstacle detection accuracy, while fusing radar and IMU data enhances the vehicle's dynamic monitoring capability.
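One standard way to combine two noisy distance estimates — say, one from LiDAR and one from radar — is inverse-variance weighting, the static special case of a Kalman update. The function below is an illustrative sketch under the assumption of independent Gaussian measurement errors.

```python
def fuse(measurements):
    """Inverse-variance weighted fusion of independent estimates.
    `measurements` is a list of (value, variance) pairs; lower-variance
    (more trusted) sensors receive proportionally more weight."""
    weights = [1.0 / var for _, var in measurements]
    fused = sum(w * val for (val, _), w in zip(measurements, weights)) / sum(weights)
    fused_var = 1.0 / sum(weights)  # fused estimate is tighter than any input
    return fused, fused_var
```

Note that the fused variance is always smaller than the smallest input variance, which is the quantitative sense in which fusion "compensates for the shortcomings of individual sensors."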

2.2. Natural Language Processing (NLP)

Natural language processing technology can be used for speech recognition, facilitating better human-vehicle interaction [14]. For example, the Xiaomi SU7 can execute voice commands to open the windows, navigate, play specific music, report the current location, and so on.

3. Specific technologies and models

3.1. Tesla's Autopilot system

Tesla's Autopilot system is an advanced driver assistance system (ADAS) with the following configuration. It is equipped with multiple cameras covering a 360-degree field of view, which capture images of the road, lane markings, traffic signs, traffic lights, pedestrians, and other vehicles around the car. It also has a radar, which reliably detects the distance and speed of objects in various weather and lighting conditions, especially in low-visibility situations such as rain, fog, or nighttime. In addition, ultrasonic sensors around the car detect nearby objects at close range; they are especially useful when parking or performing low-speed maneuvers, detecting obstacles such as other cars, walls, or pedestrians.

The Autopilot system can fuse data from cameras, radar, and ultrasonic sensors to create a comprehensive environmental model. This data fusion technology ensures the system accurately senses and understands its surroundings, recognizing key elements like lanes, traffic signs, other vehicles, and pedestrians.

Autopilot relies on computer vision algorithms and deep learning models, which process the images captured by the cameras to identify and classify various objects on the road. Tesla continuously trains and optimizes these models using massive amounts of fleet data collected from Tesla vehicles worldwide, making the system increasingly accurate and reliable in complex driving scenarios.

Mapping and Route Planning: GPS and Map Data: Autopilot uses high-precision GPS and map data to assist with navigation and positioning. Although GPS provides the vehicle's global position, the system also integrates visual information for local positioning, allowing for lane-level differentiation.

Precise Positioning: Visual Odometry: By analyzing the sequence of images captured by the cameras, Autopilot can estimate the vehicle's motion trajectory and provide highly accurate relative positioning information, especially in situations where GPS signals are unstable or unavailable.

Route Planning: The Autopilot system conducts real-time route planning based on environmental perception data and map information. The system calculates driving choices within the vehicle's current lane and also handles lane changes, turns, acceleration, and deceleration.

Decision and Control: The system makes driving decisions using AI algorithms and state models, such as deciding when to slow down or change lanes if there is an obstacle ahead. Autopilot also adjusts speed and following distance based on the speed and distance of the vehicle in front, maintaining a safe distance.

Assisted Driving: Autosteer: Autopilot can automatically keep the vehicle in the center of the lane, relying on cameras to identify lane markings and adjust steering accordingly.

Traffic-Aware Cruise Control (TACC): The system adjusts the vehicle's speed based on the speed of the vehicle in front to maintain a safe following distance. This function is achieved through a combination of radar and cameras.

Auto Lane Change: When the driver signals a turn, Autopilot can automatically change lanes. The system ensures the safety of the lane change before executing the action.

Autopark: Autopilot assists the driver in automatically parking, including parallel and perpendicular parking. The system uses ultrasonic sensors to detect surrounding obstacles and calculates the optimal route to complete the parking process.

Summon Function: The driver can remotely control the vehicle via a mobile app, and the vehicle will automatically drive in or out of parking spaces, which is particularly useful in tight spaces.

Warning System: This system includes driver monitoring and a warning and intervention mechanism. Although Autopilot has autonomous driving capabilities, Tesla still requires the driver to maintain control of the vehicle and be ready to take over at any time. The vehicle is equipped with a driver monitoring system that checks whether the driver is holding the steering wheel and paying attention to the road. If the system detects that the driver is not maintaining adequate attention, it issues a warning and may gradually slow down or stop the vehicle if necessary. Likewise, if the system detects abnormal situations during operation, such as sensor failure or an environment exceeding the system's capabilities (such as complex construction zones or extreme weather), Autopilot alerts the driver to take over control.
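An escalation policy of this kind can be sketched as a small decision function. The thresholds and action names below are hypothetical illustrations, not Tesla's actual monitoring logic.

```python
def monitor_action(hands_on_wheel, seconds_inattentive):
    """Toy driver-monitoring escalation: return the system's response
    given wheel contact and how long the driver has been inattentive."""
    if hands_on_wheel and seconds_inattentive < 5:
        return "ok"                # attentive driver, no action
    if seconds_inattentive < 10:
        return "visual_warning"    # dashboard prompt to pay attention
    if seconds_inattentive < 20:
        return "audible_warning"   # escalate to an audible alert
    return "slow_and_stop"         # driver unresponsive: intervene
```

The point of the escalation ladder is that intervention (slowing the car) is a last resort reached only after graded warnings fail.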

3.2. New energy vehicles in China

The technical architecture and sensor configuration of Chinese new energy vehicles are as follows.

Sensor Configuration: These vehicles are typically equipped with a richer sensor suite, including LiDAR, cameras, millimeter-wave radar, and ultrasonic sensors. Some models (such as the XPeng P5) emphasize the application of LiDAR.

Processing Units: Most use third-party high-performance computing platforms (such as NVIDIA's Xavier, Orin, or Mobileye's EyeQ series) to support complex AI computing tasks.

Data Processing: By integrating LiDAR, camera, and radar data, these systems provide high-precision environmental perception. The use of LiDAR enhances the ability to detect distant objects and small obstacles, especially improving reliability in complex road conditions.

3.3. Function and driving experience comparison

Tesla Autopilot: Autonomous Driving Features: Mainly offers features like Autosteer, Traffic-Aware Cruise Control (TACC), Auto Lane Change, Autopark, and Summon. Driving Experience: Tesla's Autopilot performs well on highways and in simple urban road environments, but in complex urban traffic and construction zones, it still requires a high level of driver attention.

Chinese New Energy Vehicles: Autonomous Driving Features: Brands like NIO, XPeng, and Li Auto offer similar intelligent driving system features, including Autosteer, Auto Lane Change, Autopark, TACC, and Summon. Some brands (such as XPeng) have introduced advanced autonomous driving features for city roads, like Navigation Guided Pilot (NGP), which enables autonomous driving on urban roads. Driving Experience: Thanks to the use of LiDAR, these vehicles perform more stably in complex environments and low-light conditions. Especially in urban environments, LiDAR improves obstacle detection accuracy, enabling the system to better handle dynamic scenarios with pedestrians and vehicles.

3.4. Software differences

Tesla Autopilot: Over-the-Air (OTA) Updates: Tesla regularly releases new features and performance improvements through OTA software updates, allowing users to receive the latest software without visiting a service center. Data Feedback and Model Optimization: Tesla relies on feedback from its global fleet to continuously optimize and train its AI models, improving the system's performance in various environments.

Chinese New Energy Vehicles: OTA Updates: Brands like NIO and XPeng actively adopt OTA update strategies, providing users with continuous software upgrades, adding new features, and improving the driving experience. Data Ecosystem: Domestic manufacturers also rely on large-scale fleet data feedback to optimize intelligent driving systems. Additionally, some brands further integrate richer intelligent cabin functions, deeply integrating navigation, voice assistants, and entertainment systems.

4. Advantages of autonomous driving technology

In recent years, autonomous driving technology has developed rapidly and demonstrated many significant advantages. First, it can greatly improve traffic safety. Because autonomous driving systems rely on advanced sensors, computer vision, and artificial intelligence, they can react faster than human drivers, detect potential dangers in the surrounding environment more accurately, and reduce the occurrence of traffic accidents.

In fact, autonomous driving can effectively avoid human driving errors. Many traffic accidents are caused by human factors such as distraction, fatigue, and aggressive driving behavior. Intelligent driving systems are not subject to these influences; they maintain a higher level of attention than human drivers and ensure predictable driving behavior, thereby reducing accidents caused by human negligence.

Additionally, autonomous driving brings great convenience to people’s lives. Through autonomous driving, people can engage in other activities such as entertainment, rest, or work while driving. It not only improves comfort but also offers more travel options for those who cannot drive. At the same time, autonomous driving can also optimize traffic flow, reducing the likelihood of traffic jams. Autonomous vehicles can adjust driving behavior in real-time based on road conditions, improving road efficiency.

5. Challenges and Prospect of Autonomous Driving Technology

Despite its many advantages, autonomous driving technology still faces significant challenges in practical applications [15]. First, technical reliability is a key issue. Autonomous driving systems rely on multiple sensors and complex algorithms, and in extreme situations or in areas with poor signal reception, system performance may degrade. The accuracy and stability required to identify and predict road conditions place high demands on the underlying technology. Second, autonomous driving faces legal and regulatory uncertainty. Laws and regulations related to autonomous driving currently differ across countries and regions, with no unified international standards or legal frameworks. Moreover, there are ongoing ethical debates over who should be held accountable for an accident caused by an autonomous driving system. To address these challenges, governments and institutions need to continuously advance research and policy development. Strengthening regulation of autonomous driving systems, creating clear laws and ethical guidelines, and encouraging responsible development of intelligent driving technology will help promote its wider application. These efforts will help ensure that autonomous driving technology benefits society, increases safety, and expands the market.

In terms of technological improvements, the primary focus should be on enhancing the accuracy of sensors. Autonomous driving systems rely heavily on sensors for environmental perception and data collection, so the accuracy of sensors directly affects the performance and safety of the system. By improving the resolution, range, and interference resistance of sensors, vehicles can accurately detect and respond to various complex environments. Secondly, optimizing algorithm capabilities is also key to advancing autonomous driving technology. By improving autonomous driving algorithms and enhancing data processing efficiency and decision-making speed, intelligent driving systems can handle large amounts of data in complex scenarios more efficiently and accurately, thus improving the overall system’s operational stability and reliability. With the development of 5G and IoT technologies, autonomous driving technology will shape future transportation systems. All these technological advances will lay a solid foundation for the further promotion and application of autonomous driving technology.

6. Conclusions

This paper analyzed the applications of AI in autonomous driving. Compared with traditional human driving, autonomous driving not only reduces operational costs but also greatly improves driving efficiency, safety, convenience, and comfort. Computer vision and natural language processing are two core technologies for autonomous driving, and autonomous driving systems therefore rely on multiple sensors and complex algorithms. Improving the resolution, range, and interference resistance of sensors allows vehicles to detect and respond accurately to complex environments, while better algorithms and faster data processing and decision-making improve the overall system's operational stability and reliability. Combined with 5G and IoT technologies, autonomous driving will shape future transportation systems, and these technological advances will lay a solid foundation for its further promotion and application.


References

[1]. Lin, Y. S. K., & Bashir, A. K. (2023). Keylight: intelligent traffic signal control method based on improved graph neural network. IEEE Transactions on Consumer Electronics, 1-1.

[2]. National Center for Statistics and Analysis. (2021). Early estimates of motor vehicle traffic fatalities and fatality rate by sub-categories in 2020. USA: NHTSA, 1-10.

[3]. Liu, Z. T. (2022). Application concept of big data dynamic planning and artificial intelligence driverless technology-taking macro-speed mode of four-dimensional space navigation to solve traffic congestion as an example. Wireless Internet Technology, 19(21):102-105.

[4]. Kiran, B. R., Sobh, I., Talpaert, V., Mannion, P., Sallab, A. A. A., Yogamani, S. K., & Pérez, P. (2022). Deep reinforcement learning for autonomous driving: A survey. IEEE Trans. Intell. Transp. Syst., 23(6):4909-4926.

[5]. Yang, Y. (2024). Application analysis of artificial intelligence in the field of automobile driving technology. Auto Electric Parts, 7:1-4.

[6]. Liu, L., Lu, S., Zhong, R., Wu, B., Yao, Y., Zhang, Q., & Shi, W. (2020). Computing systems for autonomous driving: State of the art and challenges. IEEE Internet of Things Journal, 8(8): 6469-6486.

[7]. Cho, R. L. T., Liu, J. S., & Ho, M. H. C. (2021). The development of autonomous driving technology: perspectives from patent citation analysis. Transport Reviews, 41(5): 685-711.

[8]. Chen, L., Li, Y., Huang, C., Li, B., Xing, Y., Tian, D., et al. (2022). Milestones in autonomous driving and intelligent vehicles: Survey of surveys. IEEE Transactions on Intelligent Vehicles, 8(2): 1046-1056.

[9]. Cheng, J., Zhang, L., Chen, Q., & Hu, X. (2022). A review of visual SLAM methods for autonomous driving vehicles. Engineering Applications of Artificial Intelligence, 114: 104992.

[10]. Liu, Y., & Diao, S. (2024). An automatic driving trajectory planning approach in complex traffic scenarios based on integrated driver style inference and deep reinforcement learning. PLoS One, 19(1):e0297192.

[11]. Li, Y., Feng, X., & Wang, Z. (2019). Application progress of computer vision technology. Artificial Intelligence, 2:18-27.

[12]. Bai, C. (2017). Research on automatic driving methods based on computer vision and deep learning. Harbin Institute of Technology, Master's thesis.

[13]. Yeong, J., Velasco-Hernandez, G., Barry, J., & Walsh, J. (2021). Sensor and sensor fusion technology in autonomous vehicles: A review. Sensors (Basel), 21(6):2140.

[14]. Wu, J., Gao, B., Gao, J., Yu, J., Chu, H., Yu, Q., et al. (2024). Prospective role of foundation models in advancing autonomous vehicles. Research (Wash DC), 7:0399.

[15]. Chen, L., Wu, P., Chitta, K., Jaeger, B., Geiger, A., & Li, H. (2024). End-to-end autonomous driving: Challenges and frontiers. IEEE Trans Pattern Anal Mach Intell., doi: 10.1109/TPAMI.2024.3435937.


Cite this article

Guan, C. (2024). Applications of Artificial Intelligence on Autonomous Driving. Applied and Computational Engineering, 109, 31-37.

Data availability

The datasets used and/or analyzed during the current study will be available from the authors upon reasonable request.

Disclaimer/Publisher's Note

The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of EWA Publishing and/or the editor(s). EWA Publishing and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

About volume

Volume title: Proceedings of the 2nd International Conference on Machine Learning and Automation

ISBN:978-1-83558-737-9(Print) / 978-1-83558-738-6(Online)
Editor:Mustafa ISTANBULLU
Conference website: https://2024.confmla.org/
Conference date: 21 November 2024
Series: Applied and Computational Engineering
Volume number: Vol.109
ISSN:2755-2721(Print) / 2755-273X(Online)

© 2024 by the author(s). Licensee EWA Publishing, Oxford, UK. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license. Authors who publish this series agree to the following terms:
1. Authors retain copyright and grant the series right of first publication with the work simultaneously licensed under a Creative Commons Attribution License that allows others to share the work with an acknowledgment of the work's authorship and initial publication in this series.
2. Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the series's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgment of its initial publication in this series.
3. Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work (See Open access policy for details).
