1. Introduction
The automotive industry has been transformed by the integration of artificial intelligence technologies such as computer vision. The global market for AI in vehicles has grown significantly, with automotive AI software expected to exceed 6.6 billion dollars by 2025 at an average annual growth rate of 36.15% [1]. In particular, object detection in computer vision has had significant implications for autonomous driving systems, enabling self-driving vehicles to perceive and classify objects on the road more accurately. The main strategies used in object detection are the single-stage and two-stage approaches: two-stage approaches are widely used in industry and academic research for their high accuracy, while single-stage approaches are known for their speed. This paper examines how computer vision is used for detection in self-driving cars.
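As a brief illustration of the single-stage versus two-stage distinction, the following sketch loads one detector of each kind from torchvision (Faster R-CNN as the two-stage example, SSD as the single-stage example) and runs both on a dummy camera frame. The model names are torchvision's own; the frame, thresholds and usage pattern are illustrative assumptions, not part of the cited works.

```python
# Minimal sketch: two-stage vs. single-stage object detectors (illustrative only).
import torch
from torchvision.models import detection

# Two-stage: a region-proposal network feeds candidate boxes to a second
# classification/regression head -- typically more accurate but slower.
two_stage = detection.fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

# Single-stage: predicts boxes and classes in one pass over the feature maps --
# typically faster at some cost in accuracy.
single_stage = detection.ssd300_vgg16(weights="DEFAULT").eval()

dummy_frame = [torch.rand(3, 480, 640)]  # stand-in for a camera image
with torch.no_grad():
    print(two_stage(dummy_frame)[0]["boxes"].shape)
    print(single_stage(dummy_frame)[0]["boxes"].shape)
```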
2. Background
Computer vision is a multidisciplinary field of artificial intelligence and computer science focused on enabling machines to process, understand and interpret visual information from the world around them. It involves developing models and algorithms that allow machines to analyze digital images and videos, extract useful information from them, and use that information to make decisions. In effect, computer vision imitates the human visual process of interpreting and processing data [2]. Its main objective is to help machines see and interpret the visual world much as human beings do, so that they can recognize objects, detect features, identify patterns and make decisions based on the extracted information. Computer vision is applied in many sectors, including the automotive sector, where the ability to interpret visual data plays an important role in facial recognition, autonomous navigation, object recognition and image segmentation. In autonomous driving systems, the technology identifies and tracks obstacles, vehicles and pedestrians, improving the vehicle's understanding of the environment it moves through. The advantage of such systems is that they enable safe and efficient autonomous navigation and may reduce the chance of accidents [3]. However, they also face challenges, including regulatory and legal hurdles as well as edge cases and complex scenarios that still call for human perception.
3. Literature Review
Various researchers have provided insights concerning the implementation of self-driving cars guided by computer vision for detection. This literature review explores how they have discussed the topic and the insights they have offered on both the positive impacts and the challenges of these systems.
3.1. Application of Computer Vision in Self-Driving Cars
The main applications of computer vision are discussed below:
3.1.1. Vehicle Detection
Vehicle detection in self-driving cars involves using computer vision to locate and identify vehicles on the road. This assists in reducing collisions and supports cooperative driving and safe lane changes. Vehicle detection combines image processing algorithms, cameras, radar and LiDAR to help the system detect vehicles ahead [4]. Researchers have found that cameras capture high-resolution images of the surroundings, LiDAR sensors use laser beams to create 3D point clouds of the environment, and radar detects the presence and relative velocity of nearby vehicles. Computer vision then applies deep learning algorithms and advanced image processing to determine whether a vehicle is ahead, track its movement, and classify it by type and size.
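A minimal camera-only sketch of this idea is given below: it filters the output of a COCO-pretrained torchvision detector down to vehicle classes. The class-id mapping is COCO's; the model choice, score threshold and helper function are assumptions for illustration, and the radar/LiDAR fusion described above is not shown.

```python
# Sketch: detect vehicles in a single camera frame with a pretrained detector.
import torch
from torchvision.models import detection

VEHICLE_CLASS_IDS = {3: "car", 4: "motorcycle", 6: "bus", 8: "truck"}  # COCO ids

model = detection.fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

def detect_vehicles(frame: torch.Tensor, score_threshold: float = 0.5):
    """Return (label, score, box) tuples for vehicles found in one frame."""
    with torch.no_grad():
        out = model([frame])[0]
    results = []
    for label, score, box in zip(out["labels"], out["scores"], out["boxes"]):
        if score >= score_threshold and int(label) in VEHICLE_CLASS_IDS:
            results.append((VEHICLE_CLASS_IDS[int(label)], float(score), box.tolist()))
    return results

print(detect_vehicles(torch.rand(3, 480, 640)))  # random frame -> usually empty
```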
3.1.2. Traffic Sign and Light Recognition
Literature on self-driving car implementation using computer vision also applies to traffic signs and light recognition. This process consists of the interpretation and identification of the various signs of the rods and also the traffic signals in order to ensure that the self-driving vehicles follow the track, similar to other vehicles on the road [5]. This process relies on the image processing techniques and the cameras. The cameras capture images of the surrounding, which is then processed through the algorithms, and these algorithms analyze the images to recognize any stop sign, speed limits, and yield signs that may be on the road. Algorithms are then used to process the images, detect the traffic light signals time, and distinguish between yellow, green, and red to inform on whether to stop, move, or get ready.
3.1.3. Pedestrian Detection
Literature on the use of computer vision in self-driving cars has also shown that these technologies can be applied to pedestrian detection [6]. This involves identifying and tracking pedestrians moving around the vehicle. The process uses image processing techniques, LiDAR and cameras: the cameras capture images of people in the surroundings, while the LiDAR sensors create 3D point clouds that provide depth information. Computer vision algorithms then analyze the information from the LiDAR and the cameras to detect nearby people [7]. Continuously capturing the surroundings provides information on pedestrians' movements and hence enables safe interactions with them.
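The camera side of this pipeline can be sketched with OpenCV's built-in HOG + linear SVM person detector, shown below. This is only an assumed, classical stand-in for illustration: modern systems use CNN detectors, and the LiDAR fusion described above is omitted.

```python
# Sketch: classical camera-only pedestrian detection with HOG + SVM.
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def detect_pedestrians(frame_bgr):
    """Return bounding boxes (x, y, w, h) of people found in one frame."""
    boxes, weights = hog.detectMultiScale(frame_bgr, winStride=(8, 8), scale=1.05)
    return [tuple(map(int, box)) for box in boxes]
```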
3.1.4. Lane Detection and Tracking
Researchers have explained that computer vision is also used for lane detection and tracking in self-driving cars. This involves the car detecting the correct lane to follow on the road and deciding when to change lanes. Cameras and image processing techniques are commonly used: the cameras capture images of the road, while algorithms analyze those images to identify lane markings such as dashed or solid lines [8]. Researchers have also found that these algorithms apply edge detection, the Hough transform and other image processing steps to locate the lane being followed and to determine the vehicle's position within it, keeping the vehicle safe and avoiding collisions with other vehicles on the road.
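A minimal sketch of the edge-detection + Hough-transform pipeline named above is given here, with assumed parameter values. A practical system would add a region-of-interest mask, lane-line fitting and temporal tracking on top of this.

```python
# Sketch: candidate lane-line segments via Canny edges and a probabilistic Hough transform.
import cv2
import numpy as np

def detect_lane_lines(frame_bgr: np.ndarray):
    """Return candidate lane-line segments as (x1, y1, x2, y2) tuples."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blurred, 50, 150)                 # edge detection
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                            threshold=40, minLineLength=40, maxLineGap=20)
    return [] if lines is None else [tuple(l[0]) for l in lines]
```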
3.2. Importance of Computer Vision in Self-Driving Cars
3.2.1. Enhanced Vehicle Safety
The advantages of using computer vision in self-driving car implementation are substantial, benefiting drivers, pedestrians and vehicle companies alike. As a critical component of advanced driver assistance systems, computer vision helps to enhance vehicle safety. Researchers have found that computer vision, together with radar, LiDAR and image processing algorithms, improves vehicle safety and reduces the accidents that can occur with manual driving [9]. Vision systems detect and identify objects such as cyclists, pedestrians and other vehicles. This capability supports life-saving features such as automated emergency braking, lane departure warnings and collision avoidance, thereby reducing the risk of accidents and enhancing road safety.
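As a hypothetical illustration of how a detected object could feed an automated emergency-braking decision, the sketch below applies a simple time-to-collision (TTC) check. The threshold and inputs are assumptions for illustration, not a production AEB policy.

```python
# Sketch: time-to-collision check as a trigger for automated emergency braking.
def should_brake(distance_m: float, closing_speed_mps: float,
                 ttc_threshold_s: float = 2.0) -> bool:
    """Brake if the estimated time to collision drops below the threshold."""
    if closing_speed_mps <= 0:          # object is not getting closer
        return False
    time_to_collision = distance_m / closing_speed_mps
    return time_to_collision < ttc_threshold_s

print(should_brake(distance_m=18.0, closing_speed_mps=12.0))  # True: TTC = 1.5 s
```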
3.2.2. Improved Driving Efficiency
Literature on self-driving car implementation has also found that computer vision helps improve driving efficiency. With these technologies, vehicle operation becomes more efficient through reduced fuel consumption and emissions. Vehicles are among the main contributors to air and water pollution because of their use of fossil fuels [10]. Self-driving vehicles equipped with computer vision typically run on electricity, reducing the consumption of non-renewable fuels. Computer vision also helps analyze traffic patterns and adjust speed and deceleration to reduce emissions.
3.2.3. Users' Experience and Comfort
Researchers have also emphasized the importance of user experience and comfort in the use of computer vision in self-driving car implementation. Computer vision technologies enhance the driving experience by providing features such as driver monitoring systems (DMS) [11]. These systems can track the driver's attentiveness and detect signs of distraction or fatigue, improving driving safety. In addition, computer vision supports intuitive human-machine interfaces such as gesture and voice control, allowing occupants to interact with the in-car infotainment system with less distraction.
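One common DMS signal can be sketched as follows: the eye aspect ratio (EAR), which drops when the eyes close and can therefore flag drowsiness over consecutive frames. The landmark extraction step (e.g. from a face-landmark model) is assumed and not shown; the threshold mentioned in the comment is a typical illustrative value, not a standard.

```python
# Sketch: eye aspect ratio (EAR) from six eye landmarks, a common drowsiness cue.
import math

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmarks around one eye, ordered p1..p6."""
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    # EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|); small values indicate a closed eye.
    return (dist(eye[1], eye[5]) + dist(eye[2], eye[4])) / (2.0 * dist(eye[0], eye[3]))

# A sustained EAR below roughly 0.2 across consecutive frames is often treated as eyes closed.
```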
3.2.4. Reduced Underlying Vehicle Costs
Manually driven vehicles, including electric vehicles operated by human drivers, carry underlying costs such as insurance, repairs and maintenance that may be reduced by using computer vision in self-driving cars. The integration of computer vision-based safety features in vehicles leads to lower insurance costs [12], because these systems lower the likelihood of collisions and accidents, allowing operators of such vehicles to qualify for lower-premium insurance packages.
3.3. Challenges of Computer Vision in Self-Driving Cars
Although the use of computer vision has helped reduce accidents and other drawbacks of manually driven vehicles, the technology also brings challenges:
3.3.1. Regulatory and Legal Hurdles
Self-driving vehicles are rarely used in many countries because of the legal requirements involved. The development of autonomous driving technology has faced regulatory challenges and varying legal requirements across countries and regions. Many countries have argued against the deployment of features such as Full Self-Driving, deeming them insecure [13]. Most researchers attribute this to concerns about ethical decision-making and public trust. Autonomous driving algorithms must make ethical decisions in critical situations, such as prioritizing actions to avoid collisions, which is very challenging for systems that are programmed in advance [14]. Researchers have also argued that the limited transparency of these systems' decision-making falls short of the legal frameworks of many countries, making it difficult to demonstrate that the systems make informed decisions.
3.3.2. Edge Cases and Complex Scenarios
Computer vision might also fall short in edge cases and complex scenarios that require human perception and judgment. In real-world driving environments there are many complex scenarios and edge cases that computer vision can find difficult to handle accurately [15]. For example, fog, heavy rain and ice may obscure road signs and reduce the visibility of other road users and vehicles, making it hard for these systems to capture usable images. Areas with intense sunlight can create glare, which affects the visibility of objects on the road. Many roads also have unconventional users such as e-scooters, skateboards and unusual vehicles, which may not follow typical road rules and could therefore collide with the vehicle. Roads may also carry animals, such as deer, which the systems may fail to detect, causing accidents.
4. Case Study
This paper provides a case study of the practical implications of computer vision for self-driving car implementation. The main objective of the case study is to understand the influence of computer vision on the detection process of self-driving cars.
5. Implementation Process
The case study was conducted on major companies in the automotive industry that have introduced self-driving vehicles and computer vision. The study specifically targets these companies because they have direct knowledge of applying computer vision in self-driving cars. The companies are not disclosed in order to maintain their anonymity and privacy. A qualitative approach was used to collect data from these companies; this research method provides in-depth insight into the participants' perceptions, opinions and attitudes about using computer vision in self-driving cars.
6. Data Collection
Data was collected from five participants selected from companies that have used computer vision in self-driving car implementation. The participants were selected purposively in order to obtain individuals with knowledge and experience in this field who could provide important insights. Each participant had a 15-minute session with the interviewer, answering open-ended questions related to the use of computer vision in self-driving cars. The interviews were then recorded and transcribed to support the report.
7. Results and Discussion
The results of the interviews showed that the use of computer vision brings both benefits and challenges. The participants highlighted that they use various devices and technologies in computer vision, including cameras that capture images of the vehicle's environment; LiDAR (Light Detection and Ranging), which measures distances and creates 3D maps for the vehicle; and radar, which uses radio waves to detect the speed and position of objects. The benefits of computer vision that the participants highlighted include:
Perception for environment understanding: implementing computer vision in self-driving vehicles helps the vehicle perceive and understand its environment and, hence, respond to road conditions, signs and obstacles.
Cost-effectiveness: the use of computer vision reduces costs such as maintenance and operations, since it relies on technology such as cameras and LiDAR, which are comparatively easy to maintain.
Versatility: the advantage of versatility lies in the cameras' ability to capture a wide range of information, making them versatile in detecting objects and road features.
Sustainable practices: the use of computer vision in self-driving cars also contributes to environmental protection through reliance on renewable energy sources. Fuel use in manually driven vehicles has contributed significantly to environmental damage such as pollution, which self-driving cars can avoid.
The participants also highlighted challenges that the use of computer vision in self-driving vehicles poses to users. These include:
Challenging weather conditions: some places have low-light environments and poor weather that make it difficult to capture clear images for processing.
Sensitivity and obstructions: the cameras are easily obstructed by objects and debris, which can impair vision. Such obstructions can leave the algorithms processing the data unable to make correct decisions.
Complex scenarios: complex situations, such as impaired or aggressive drivers on the road, may also make it hard for self-driving cars to respond correctly.
8. Conclusion
In conclusion, the integration of computer vision into self-driving car technology has contributed to significant advances in self-driving car implementation. Applied in the automotive industry, computer vision helps interpret visual data and plays an important role in facial recognition, autonomous navigation, object recognition and image segmentation. These systems enable self-driving vehicles to perceive and interpret their surroundings in a way comparable to human capabilities, powering features such as traffic sign recognition, lane tracking and vehicle detection. The advantages of these vision systems include reduced operating costs and greater sustainability. However, strict regulatory measures and adverse weather conditions pose serious challenges to the implementation process. Therefore, even though computer vision provides transformative benefits to self-driving vehicles, further improvement is needed to address these challenges and ensure road safety.
References
[1]. Khan, S. A., Lee, H. J., & Lim, H. (2023). Enhancing Object Detection in Self-Driving Cars Using a Hybrid Approach. Electronics, 12(13), 2768.
[2]. Szeliski, R. (2022). Computer vision: algorithms and applications. Springer Nature.
[3]. Al-Kaff, A., Martin, D., Garcia, F., de la Escalera, A., & Armingol, J. M. (2018). Survey of computer vision algorithms and applications for unmanned aerial vehicles. Expert Systems with Applications, 92, 447-463.
[4]. Sohail, M., Khan, A. U., Sandhu, M., Shoukat, I. A., Jafri, M., & Shin, H. (2023). Radar sensor based machine learning approach for precise vehicle position estimation. Scientific Reports, 13(1), 13837.
[5]. López, A. M., Imiya, A., Pajdla, T., & Álvarez, J. M. (Eds.). (2017). Computer vision in vehicle technology: Land, sea, and air. John Wiley & Sons.
[6]. Kohli, P., & Chadha, A. (2020). Enabling pedestrian safety using computer vision techniques: A case study of the 2018 Uber Inc. self-driving car crash. In Advances in Information and Communication: Proceedings of the 2019 Future of Information and Communication Conference (FICC), Volume 1 (pp. 261-279). Springer International Publishing.
[7]. Premebida, C., Ludwig, O., & Nunes, U. (2009). LIDAR and vision‐based pedestrian detection system. Journal of Field Robotics, 26(9), 696-711.
[8]. Berriel, R. F., de Aguiar, E., De Souza, A. F., & Oliveira-Santos, T. (2017). Ego-lane analysis system (ELAS): Dataset and algorithms. Image and Vision Computing, 68, 64-75.
[9]. Dixit, A., Kumar Chidambaram, R., & Allam, Z. (2021). Safety and risk analysis of autonomous vehicles using computer vision and neural networks. Vehicles, 3(3), 595-617.
[10]. Perera, F. (2018). Pollution from fossil-fuel combustion is the leading environmental threat to global pediatric health and equity: Solutions exist. International Journal of Environmental Research and Public Health, 15(1), 16.
[11]. Qiao, L., Li, Y., Chen, D., Serikawa, S., Guizani, M., & Lv, Z. (2021). A survey on 5G/6G, AI, and Robotics. Computers and Electrical Engineering, 95, 107372.
[12]. Pavel, M. I., Tan, S. Y., & Abdullah, A. (2022). Vision-based autonomous vehicle systems based on deep learning: A systematic literature review. Applied Sciences, 12(14), 6831.
[13]. Kyriakidis, M., de Winter, J. C., Stanton, N., Bellet, T., van Arem, B., Brookhuis, K., ... & Happee, R. (2019). A human factors perspective on automated driving. Theoretical Issues in Ergonomics Science, 20(3), 223-249.
[14]. von Ungern-Sternberg, A. (2017). Autonomous driving: regulatory challenges raised by artificial decision-making and tragic choices. Research Handbook on the Law of Artificial Intelligence, Edward Elgar (2017/18, Forthcoming).
[15]. Liu, S., Liu, L., Tang, J., Yu, B., Wang, Y., & Shi, W. (2019). Edge computing for autonomous driving: Opportunities and challenges. Proceedings of the IEEE, 107(8), 1697-1716.