1. Introduction
In recent years, rapid urbanization and the sharp rise in vehicle ownership in China have subjected medium and large cities to severe traffic congestion. This congestion not only disrupts citizens' daily commutes but also limits the overall efficiency of urban operations. Meanwhile, the rapid development of artificial intelligence (AI) has brought unprecedented momentum to the evolution of intelligent transportation systems. Autonomous driving technology, as the core of intelligent transportation, is leading the transformation of the automotive industry and urban traffic management [1].
The development of autonomous driving technology has gone through three major phases.
Early Exploration and Experimentation Phase (Late 20th Century-2010): In 1995, Carnegie Mellon University achieved vehicle direction control using neural networks. By 2000, adaptive cruise control (ACC) was introduced to intelligent driving systems, and by 2008, parking assist systems were introduced. In 2010, Google launched the first driverless hybrid car using high-precision 3D maps, marking a new era for autonomous driving technology.
Breakthroughs in Sensors and Algorithms (2010-2020): In 2012, the introduction of lane-keeping assist systems further enhanced driving safety. In 2015, Tesla released its Autopilot system, one of the first commercially available driver-assistance systems offering partial automation. By 2018, the concept of “vehicle-to-everything” (V2X) communication had emerged. Baidu, in collaboration with Xiamen King Long, used LTE-V2X and 5G technology to achieve beyond-line-of-sight sensing and low-latency decision-making, and introduced the world’s first mass-produced Level 4 autonomous bus, "Apolong."
High Automation and Testing Phase (2020 to Present): Since 2020, autonomous driving technology has entered a stage of high automation, with some vehicles now equipped with advanced driver-assistance systems (ADAS), enabling autonomous driving under specific conditions.
It is clear that the advancement of AI technology offers new approaches to solving traffic problems. The increasing maturity and deeper application of cutting-edge technologies such as machine learning, deep learning, and reinforcement learning have laid a solid foundation for the expansion of autonomous driving. Autonomous driving technology, relying on high-precision multi-sensor fusion, accurate environmental perception, intelligent path planning algorithms, and efficient decision-making control systems, enables autonomous vehicle operation, thereby improving traffic efficiency and reducing accident rates.
This paper employs a literature review and case analysis approach to thoroughly analyze the development and current state of autonomous driving technology. This study offers multiple perspectives and a more systematic knowledge framework for the development of autonomous driving technology.
2. Autonomous driving
2.1. Autonomous driving system
Autonomous driving systems, also known as driverless systems, refer to technology that enables a vehicle to drive independently without human intervention, through the coordinated work of computer systems and sensors. The aim is to replace the driver and carry out a series of vehicle operations. This system typically consists of three parts: the perception system, the decision-making system, and the control system [2].
The perception system gathers information about the vehicle's surroundings using various sensors (such as cameras, LiDAR, ultrasonic radars, and millimeter-wave radars). This data is transmitted in real-time to the computer system for processing. The decision-making system processes the data provided by the perception system, making decisions based on it. This typically includes sub-modules such as path planning and motion planning. The control system then executes vehicle operations like acceleration, braking, and steering, based on the motion trajectory generated by the decision-making system.
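To make this division of labor concrete, the minimal sketch below shows how perception output might flow into decision-making and then into control. All class and function names are hypothetical and the logic is deliberately simplified; it illustrates the three-part structure rather than any production architecture.

```python
# Illustrative perception -> decision -> control loop. Names and thresholds are hypothetical.
from dataclasses import dataclass
from typing import List

@dataclass
class Obstacle:
    x: float        # longitudinal distance ahead of the ego vehicle (m)
    y: float        # lateral offset from the ego lane center (m)
    speed: float    # obstacle speed (m/s)

@dataclass
class VehicleCommand:
    throttle: float  # 0..1
    brake: float     # 0..1
    steering: float  # rad

def perceive(camera_frame, lidar_points) -> List[Obstacle]:
    """Perception: fuse raw sensor data into tracked obstacles (stubbed here)."""
    return []  # a real system would run detection and tracking on the sensor data

def decide(obstacles: List[Obstacle], ego_speed: float) -> float:
    """Decision: pick a target speed; slow down if an obstacle is close ahead."""
    if any(o.x < 30.0 and abs(o.y) < 1.5 for o in obstacles):
        return max(0.0, ego_speed - 5.0)
    return min(ego_speed + 2.0, 15.0)

def control(target_speed: float, ego_speed: float) -> VehicleCommand:
    """Control: simple proportional longitudinal controller over the planned speed."""
    error = target_speed - ego_speed
    if error >= 0:
        return VehicleCommand(throttle=min(1.0, 0.2 * error), brake=0.0, steering=0.0)
    return VehicleCommand(throttle=0.0, brake=min(1.0, -0.2 * error), steering=0.0)
```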
2.2. Categories of autonomous driving
According to the levels of autonomous driving defined by SAE (Society of Automotive Engineers) and ISO (International Organization for Standardization), there are six levels, from 0 to 5, as follows [3]:
Level 0 (No Automation): All operations are controlled by the driver, and the vehicle has no automation features.
Level 1 (Driver Assistance): The vehicle has a single automation feature, such as automatic braking, but these functions cannot operate simultaneously. The driver is still fully responsible for controlling the car.
Level 2 (Partial Automation): The vehicle can assist with steering or acceleration functions, including lane keeping and adaptive cruise control, but the driver must remain alert and ready to take over control at any time.
Level 3 (Conditional Automation): The vehicle can handle driving tasks independently under specific conditions, but the driver must be prepared to take control in emergencies.
Level 4 (High Automation): The vehicle can fully take over driving, allowing the user to relax completely. However, there are some limitations, such as weather and road conditions, that may affect the vehicle's ability to operate.
Level 5 (Full Automation): No driver is required, and the vehicle can manage all driving tasks under all conditions.
3. Core components
3.1. Sensors
Sensors are the core of the perception system in autonomous driving, providing essential data to the decision-making system by detecting the surrounding environment. Below is a detailed explanation and summary of several key sensors [4]:
Radar technology uses radio waves to detect the speed and distance of objects. It has strong penetration and adaptability, allowing it to operate reliably in various complex environments.
LiDAR (Light Detection and Ranging) works by emitting laser beams and receiving the reflected signals. It measures the distance to and shape of objects, providing precise range measurements and reconstructing the 3D structure of the environment. This significantly enhances the system's ability to perceive its surroundings in difficult lighting conditions, such as at night. However, LiDAR is expensive and its performance can be degraded by harsh conditions such as dust storms.
Ultrasonic radar measures the distance to objects by emitting ultrasonic waves and receiving their reflections. It is mainly used for detecting close-range obstacles, such as in parking assistance or automatic emergency braking. However, its detection range is short and can be affected by materials that absorb sound waves.
Cameras capture images of the surrounding environment and, using advanced computer vision algorithms, simulate "human eyes" to recognize traffic signs, pedestrians, and vehicles. Cameras are essential for visual perception in autonomous driving systems, but they are highly dependent on algorithms and are significantly affected by weather conditions.
3.2. Positioning system
Precise positioning is the foundation of autonomous driving technology and typically requires the integration of several complementary technologies.
Global Positioning System (GPS) is the most widely used positioning technology, providing real-time vehicle location by receiving satellite signals [5]. While it performs with high accuracy in open areas, it is prone to signal blockage and interference in densely built-up urban areas and tunnels, resulting in increased positioning errors. Thus, relying solely on GPS cannot meet the precision requirements of autonomous driving.
Inertial Measurement Units (IMUs) consist of accelerometers and gyroscopes that measure the vehicle's acceleration and angular velocity [6]. They can provide short-term, high-precision position and orientation estimates when GPS signals are unstable or lost, but their errors accumulate over time. Therefore, IMU data needs to be fused with other sensors to improve accuracy.
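One common form of such fusion is a Kalman filter that integrates IMU measurements between GPS fixes. The sketch below shows a minimal one-dimensional version; the motion model and noise values are assumptions chosen purely for illustration.

```python
import numpy as np

# Minimal 1-D Kalman filter: IMU acceleration drives the prediction step,
# intermittent GPS position fixes correct the accumulated drift.
dt = 0.01                          # IMU sample period (s), assumed
x = np.array([0.0, 0.0])           # state: [position, velocity]
P = np.eye(2)                      # state covariance
F = np.array([[1, dt], [0, 1]])    # constant-velocity transition model
B = np.array([0.5 * dt**2, dt])    # acceleration input model
Q = np.diag([1e-4, 1e-3])          # process noise (assumed)
H = np.array([[1.0, 0.0]])         # GPS observes position only
R = np.array([[4.0]])              # GPS noise variance ~ (2 m)^2 (assumed)

def predict(accel: float):
    """Propagate the state with the latest IMU acceleration sample."""
    global x, P
    x = F @ x + B * accel
    P = F @ P @ F.T + Q

def correct(gps_position: float):
    """Correct drift whenever a GPS fix arrives."""
    global x, P
    y = gps_position - H @ x               # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + (K @ y).ravel()
    P = (np.eye(2) - K @ H) @ P
```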
High-definition (HD) maps contain detailed road geometry, lane layouts, traffic signs, and signal locations, providing the vehicle with rich prior environmental data [7]. By matching its sensor observations against the HD map, the vehicle can significantly improve its positioning accuracy and environmental awareness. However, updating and maintaining map data in rapidly changing urban environments remains a challenging issue.
Vehicle-to-Everything (V2X) communication enables information sharing and coordinated control in the traffic system through communication between vehicles (V2V), vehicles and infrastructure (V2I), and vehicles and networks (V2N). This technology significantly enhances the stability of autonomous vehicles and improves urban traffic efficiency.
4. Application of reinforcement learning in autonomous driving
4.1. Basic principles of reinforcement learning
Reinforcement learning is a machine learning method whose core is the continuous interaction between an agent (in autonomous driving, the driving system or vehicle) and a complex, dynamic environment (the physical world of roads, other vehicles, pedestrians, traffic signals, and so on). Through this interaction, the agent learns and optimizes a behavior policy so as to maximize long-term cumulative reward. At each step, the agent observes the current state of the environment, which includes vehicle speed, position, obstacle information, and traffic signs, and selects and executes an action such as accelerating, decelerating, or steering. It then receives a reward from the environment, for example for avoiding collisions or obeying traffic rules, as feedback. This learning paradigm plays a key role in autonomous driving, enabling vehicles to make real-time decisions in dynamic and complex environments [8].
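The interaction loop described above can be summarized in the minimal sketch below. The environment dynamics, policy, and reward definition are placeholders chosen for illustration; only the state-action-reward structure and the discounted cumulative reward are the point.

```python
# Minimal agent-environment interaction loop; everything here is a placeholder.
import random

GAMMA = 0.99   # discount factor for long-term cumulative reward

def policy(state):
    """Placeholder policy: pick one of three longitudinal actions at random."""
    return random.choice(["accelerate", "keep_speed", "brake"])

def step(state, action):
    """Placeholder environment: a real simulator would model vehicle dynamics,
    surrounding traffic, and traffic rules. Returns (next_state, reward, done)."""
    reward = 1.0 if action == "keep_speed" else 0.0   # e.g. reward smooth driving
    return state, reward, random.random() < 0.01

state, done, episode_return, t = {"speed": 10.0}, False, 0.0, 0
while not done:
    action = policy(state)
    state, reward, done = step(state, action)
    episode_return += (GAMMA ** t) * reward           # discounted cumulative reward
    t += 1
```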
4.2. Algorithms of reinforcement learning
Reinforcement learning algorithms are becoming increasingly prevalent in autonomous driving, bringing new momentum to technological development. Several key algorithms and their applications are outlined below:
Q-Learning is a model-free, table-based reinforcement learning algorithm that enables an autonomous vehicle to choose actions based on Q-values, the expected cumulative reward for taking a specific action in a given state. After executing an action, the vehicle receives a reward and updates the corresponding Q-value to improve future choices. Its advantage lies in not requiring a model of the environment, which makes it flexible and suitable for complex environments.
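A minimal tabular sketch of the Q-value update is given below; the discretized action set, learning rate, and other hyperparameters are assumptions made purely for illustration.

```python
# Tabular Q-learning: Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
from collections import defaultdict
import random

ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1            # assumed hyperparameters
ACTIONS = ["accelerate", "keep_speed", "brake"]   # assumed discretized action set
Q = defaultdict(float)                            # Q[(state, action)] -> expected return

def choose_action(state):
    """Epsilon-greedy action selection over the Q-table."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def q_update(state, action, reward, next_state):
    """Apply the Q-learning update after observing (state, action, reward, next_state)."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
```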
Deep Q-Network (DQN) is an extension of Q-Learning that uses a deep neural network to approximate the Q-value function. Its advantages are that it can handle high-dimensional states, while mechanisms such as experience replay and a separate target network improve training stability. Applications in autonomous driving include object detection, path planning, and decision control.
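The sketch below shows one possible DQN-style update step in PyTorch. The network size, state dimension, action count, hyperparameters, and transition format are assumptions for illustration and do not describe any deployed driving system.

```python
# Minimal DQN update step: Q-network, target network, and experience replay.
import random
from collections import deque
import torch
import torch.nn as nn

STATE_DIM, N_ACTIONS, GAMMA, BATCH = 8, 3, 0.99, 32   # assumed sizes

q_net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))
target_net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))
target_net.load_state_dict(q_net.state_dict())   # periodic copy stabilizes training
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)  # stores tuples (state_list, action_int, reward, next_state_list, done_flag)

def train_step():
    """One gradient step on a random minibatch of stored transitions."""
    if len(replay) < BATCH:
        return
    batch = random.sample(replay, BATCH)
    s, a, r, s2, done = map(torch.tensor, zip(*batch))
    s, s2 = s.float(), s2.float()
    q_sa = q_net(s).gather(1, a.long().unsqueeze(1)).squeeze(1)      # Q(s, a taken)
    with torch.no_grad():
        target = r.float() + GAMMA * target_net(s2).max(1).values * (1 - done.float())
    loss = nn.functional.mse_loss(q_sa, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```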
Policy Gradient methods are policy-based reinforcement learning algorithms that learn the policy function directly, outputting the probability of taking each action in a given state. The vehicle makes decisions according to this policy and updates the policy parameters along the estimated policy gradient to maximize cumulative reward. Such algorithms can be used to learn complex driving strategies, such as handling sudden traffic situations and coordinating with other vehicles.
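A minimal REINFORCE-style policy-gradient update is sketched below, again with assumed dimensions and hyperparameters; it illustrates the basic rule of increasing the log-probability of actions in proportion to the return that followed them.

```python
# REINFORCE-style policy-gradient update on one completed episode.
import torch
import torch.nn as nn
import torch.nn.functional as F

STATE_DIM, N_ACTIONS, GAMMA = 8, 3, 0.99   # assumed sizes
policy_net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.Tanh(), nn.Linear(64, N_ACTIONS))
optimizer = torch.optim.Adam(policy_net.parameters(), lr=1e-3)

def update_policy(states, actions, rewards):
    """Increase the log-probability of taken actions in proportion to the
    discounted return that followed them (the basic policy-gradient rule)."""
    returns, g = [], 0.0
    for r in reversed(rewards):            # discounted return from each time step
        g = r + GAMMA * g
        returns.insert(0, g)
    returns = torch.tensor(returns)
    states = torch.tensor(states, dtype=torch.float32)
    actions = torch.tensor(actions)
    log_probs = F.log_softmax(policy_net(states), dim=1)
    chosen = log_probs.gather(1, actions.unsqueeze(1)).squeeze(1)
    loss = -(chosen * returns).mean()      # gradient ascent on expected return
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```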
These algorithms each have their own strengths and can collectively enhance the safety and flexibility of autonomous vehicles in various scenarios.
4.3. Challenges of reinforcement learning in complex traffic environments
Despite the excellent performance of reinforcement learning algorithms in the field of autonomous driving, they still face numerous issues and challenges in practical applications. Given the high-dimensional state space of traffic environments, which includes complex data such as vehicle position, speed, and environmental information, along with the real-time requirements of autonomous driving systems, more efficient state representation and dimensionality reduction methods need to be developed [9]. Such methods would reduce the dimensionality of the captured data and the complexity of the required computation.
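As a simple illustration of such dimensionality reduction, the sketch below compresses a synthetic high-dimensional traffic state with PCA before it would be passed to a learning agent; the dimensions are arbitrary and PCA is only one of many possible techniques.

```python
# Compress a high-dimensional traffic state before feeding it to an RL policy.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
raw_states = rng.normal(size=(1000, 120))   # e.g. stacked positions/speeds of nearby vehicles (synthetic)
pca = PCA(n_components=16)                   # keep a compact 16-dimensional representation
compact_states = pca.fit_transform(raw_states)
print(compact_states.shape)                  # (1000, 16): used in place of the raw state
```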
Furthermore, most reinforcement learning experiments are conducted in simulated environments, which can struggle to fully replicate the rapidly changing scenarios of the real world. This limitation can lead to uncertainties in model performance during actual applications. Future research should focus on enhancing the authenticity and diversity of simulation environments and employing methods such as transfer learning to improve model performance in real-world settings.
5. Practical applications of autonomous driving technology
5.1. Tesla
Tesla is a pioneer in the field of autonomous driving technology. Its Autopilot system utilizes a range of sensors, including high-precision cameras, ultrasonic sensors, and radar systems, combined with advanced driver assistance system (ADAS) features to achieve semi-automated driving. Tesla's Full Self-Driving (FSD) package employs a pure vision-based approach using cameras to simulate the driving processes of the human brain and eyes, successfully enabling features such as automatic lane changes, automated parking, and smart summon.
Through over-the-air (OTA) updates, Tesla continually improves its autonomous driving software, allowing vehicles to learn and adapt to new driving conditions. Although Tesla's driving system still requires driver supervision, its evolving technology showcases the immense potential for achieving fully autonomous driving in the future.
5.2. Waymo of Google
Waymo, the autonomous driving company under Alphabet, is considered a benchmark in global autonomous driving. Its autonomous driving system employs various algorithms, including deep learning, reinforcement learning, and Bayesian networks, to perform functions such as object detection, environmental understanding, and decision-making. Waymo's autonomous vehicles are equipped with a top-tier combination of sensors, including LiDAR, cameras, and radar, and they utilize high-precision maps and algorithms to achieve accurate environmental perception and navigation.
To date, Waymo has completed millions of miles of road testing across multiple cities in the United States and has launched its autonomous taxi service, Waymo One, achieving commercial driverless operation.
5.3. Apollo of Baidu
Baidu's Apollo platform is a leader in autonomous driving in China. The Apollo platform integrates cutting-edge technologies such as multi-sensor fusion, high-definition mapping, and precise positioning, utilizing sensors like LiDAR, cameras, and radar to achieve accurate environmental perception and localization. Through reinforcement learning and deep learning algorithms, its driving system has become adept at navigating complex urban environments. Apollo has already conducted commercial pilot projects and road tests in several Chinese cities. The "Luobo Kuaipao" project serves as a practical application case, providing autonomous taxi services in specific areas and offering safe, convenient travel.
The current practices of companies like Tesla, Google, and Baidu in autonomous driving are leading a profound transformation in the automotive industry, laying the groundwork for achieving the ambitious goal of fully autonomous driving. With the rapid advancement of technology, a new, highly intelligent traffic ecosystem is set to transition from dream to reality.
5.4. Challenges and future development
The artificial intelligence-driven software for autonomous vehicles faces numerous challenges, including safety and reliability, regulations and legal frameworks, public trust and acceptance, handling edge cases, data and algorithm bias, societal impacts, ethical frameworks and guidelines, and user privacy [10].
When sensors fail, the vehicle may misinterpret its environment, and cybersecurity vulnerabilities could be exploited for malicious control. Legal liability in accidents involving autonomous vehicles remains unclear, raising questions about how responsibility should be assigned in vehicle-to-vehicle incidents and how new traffic models should be defined. In extreme situations, can the solutions that artificial intelligence judges optimal align with ethical considerations? Many such questions remain open.
The current technology and systems are still immature, and the field of autonomous vehicles urgently requires interdisciplinary collaboration and innovation, integrating multiple domains to explore solutions that will promote the maturation of the technology and the widespread application of autonomous driving.
6. Conclusion
This paper thoroughly explores various aspects of autonomous driving technology, including its development history, core components, key technologies, and practical applications. Through literature review and case analysis, it demonstrates the significance of autonomous driving technology within intelligent transportation systems and its vast potential. In the future, autonomous driving technology is expected to enhance traffic efficiency and safety through advantages such as high-precision environmental perception, intelligent path planning, and efficient decision-making control. This research provides a systematic knowledge framework for understanding and promoting the application of autonomous driving technology, benefiting research and practice in related fields.
However, the paper also has some shortcomings, such as a lack of field investigations and experimental data, which may affect the comprehensiveness and accuracy of the research conclusions. Future research will aim to validate and supplement the existing conclusions by incorporating more field studies and experimental data. The focus of this research is primarily on the current application and development of technology, with limited discussion on future development trends and potential impacts. Future studies will address emerging technological directions and predict and evaluate the innovations and societal transformations they may bring.
Despite these limitations, this paper provides valuable references for the research on autonomous driving technology. As research continues to deepen and expand, the technical challenges faced by autonomous driving technology will gradually be addressed, leading to its comprehensive development.
References
[1]. Zhang, L., Shen, J., Qin, X., et al. (2022). Information physical mapping and system construction of intelligent network transportation. Journal of Tongji University (Natural Science), 50(1), 79.
[2]. Hu, C., & Jia, Z. (2024). Development and control methods of autonomous driving technology for intelligent vehicles. Mechanical and Electronic Control Engineering, 6(7), 46-48.
[3]. Dyble, J. (2018). Understanding SAE automated driving – levels 0 to 5 explained. Gigabit.
[4]. Yeong, D. J., Velasco-Hernandez, G., Barry, J., & Walsh, J. (2021). Sensor and sensor fusion technology in autonomous vehicles: A review. Sensors, 21(6), 2140.
[5]. Xu, G., & Xu, Y. (2007). GPS. Springer-Verlag Berlin Heidelberg.
[6]. Höflinger, F., Müller, J., Zhang, R., Reindl, L. M., & Burgard, W. (2013). A wireless micro inertial measurement unit (IMU). IEEE Transactions on Instrumentation and Measurement, 62(9), 2583-2595.
[7]. Kim, K., Cho, S., & Chung, W. (2021). HD map update for autonomous driving with crowdsourced data. IEEE Robotics and Automation Letters, 6(2), 1895-1901.
[8]. Gao, Y., Chen, S., & Lu, X. (2004). A review of reinforcement learning research. Acta Automatica Sinica, 30(1), 86-100.
[9]. He, Y., Lin, H., Liu, Y., Yang, L., & Qu, X. (2024). Application and challenges of reinforcement learning in autonomous driving technology. Journal of Tongji University (Natural Science Edition), 52(4), 520-531.
[10]. Garikapati, D., & Shetiya, S. S. (2024). Autonomous vehicles: Evolution of artificial intelligence and the current industry landscape. Big Data and Cognitive Computing, 8(4), 42.