
Applications of Machine Learning in Automotive Engineering

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Vehicular Sensing".

Deadline for manuscript submissions: 25 September 2024 | Viewed by 13660

Special Issue Editor


Dr. Hwan-Sik Yoon
Guest Editor
Department of Mechanical Engineering, The University of Alabama, Box 870276, Tuscaloosa, AL 35487-0276, USA
Interests: modeling, simulation, and control of automotive systems, including conventional, hybrid electric, and electric vehicles; machine learning and its application in automotive and transportation systems

Special Issue Information

Dear Colleagues,

With the advent of automated and connected vehicles and the growing market share of increasingly capable electric vehicles, the automotive industry is undergoing a technological revolution. These transformations pose significant challenges, yet they also offer new opportunities in vehicle design and control. In this regard, recent advances in machine learning and AI have shown potential benefits across automotive engineering, from design and control to vehicle monitoring and maintenance.

This Special Issue focuses on applications of machine learning in automotive engineering. Topics of interest include, but are not limited to:

  • Machine learning and its application in automotive systems;
  • Modeling, simulation, and control of automotive systems inspired by machine learning or AI;
  • Advanced sensing and actuation via machine learning;
  • Controls based on reinforcement learning;
  • Connected and automated vehicles;
  • Predictive maintenance;
  • Advanced driver assistance systems;
  • Human–machine interface.

Dr. Hwan-Sik Yoon
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • machine learning and its application in automotive systems
  • modeling, simulation, and control of automotive systems inspired by machine learning or AI
  • advanced sensing and actuation via machine learning
  • controls based on reinforcement learning
  • connected and automated vehicles
  • predictive maintenance
  • advanced driver assistance systems
  • human–machine interface

Published Papers (8 papers)


Research

17 pages, 4042 KiB  
Article
GPS Data and Machine Learning Tools, a Practical and Cost-Effective Combination for Estimating Light Vehicle Emissions
by Néstor Diego Rivera-Campoverde, Blanca Arenas-Ramírez, José Luis Muñoz Sanz and Edisson Jiménez
Sensors 2024, 24(7), 2304; https://doi.org/10.3390/s24072304 - 05 Apr 2024
Viewed by 789
Abstract
This paper focuses on the emissions of the three best-selling categories of light vehicles: sedans, SUVs, and pickups. The research is carried out through an innovative methodology based on GPS and machine learning in real driving conditions. For this purpose, driving data from the three best-selling vehicles in Ecuador are acquired using a data logger with GPS included, and emissions are measured using a PEMS in six RDE tests with two standardized routes for each vehicle. The data obtained on Route 1 are used to estimate the gears used during driving using the K-means algorithm and classification trees. Then, the relative importance of driving variables is estimated using random forest techniques, followed by the training of ANNs to estimate CO2, CO, NOX, and HC. The data generated on Route 2 are used to validate the obtained ANNs. These models are fed with a dataset generated from 324, 300, and 316 km of random driving for each type of vehicle. The results of the model were compared with the IVE model and an OBD-based model, showing similar results without the need to mount the PEMS on the vehicles for long test drives. The generated model is robust to different traffic conditions as a result of its training and validation using a large amount of data obtained under completely random driving conditions.
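
The pipeline described above (K-means and classification trees for gear estimation, random-forest importance ranking, and ANNs for emission estimation) can be illustrated with a brief scikit-learn sketch. The synthetic data, feature names, and model sizes below are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of the described pipeline; all data and variable names are illustrative
# stand-ins for the GPS/PEMS logs used in the study.
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({"speed": rng.uniform(0, 30, n),        # m/s, from the GPS data logger
                   "accel": rng.normal(0, 1, n),           # m/s^2
                   "road_grade": rng.normal(0, 0.03, n)})
df["co2"] = 2.0 * df["speed"] + 5.0 * df["accel"].clip(lower=0) + rng.normal(0, 1, n)  # synthetic PEMS stand-in
X = df[["speed", "accel", "road_grade"]]

# 1) Cluster Route-1 driving points, then fit a tree that reproduces the clusters as gear labels.
gear_labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)
gear_tree = DecisionTreeClassifier(max_depth=5).fit(X, gear_labels)

# 2) Rank the driving variables by importance with a random forest.
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, df["co2"])
print(dict(zip(X.columns, rf.feature_importances_.round(3))))

# 3) Train an ANN for CO2 (the paper trains ANNs for CO2, CO, NOx, and HC); held-out data
#    stands in for the Route-2 validation.
X_tr, X_te, y_tr, y_te = train_test_split(X, df["co2"], test_size=0.2, random_state=0)
ann = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0).fit(X_tr, y_tr)
print("held-out R^2:", round(ann.score(X_te, y_te), 3))
```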

17 pages, 6584 KiB  
Article
Enhancing Camera Calibration for Traffic Surveillance with an Integrated Approach of Genetic Algorithm and Particle Swarm Optimization
by Shenglin Li and Hwan-Sik Yoon
Sensors 2024, 24(5), 1456; https://doi.org/10.3390/s24051456 - 23 Feb 2024
Viewed by 555
Abstract
Recent advancements in sensor technologies, coupled with signal processing and machine learning, have enabled real-time traffic control systems to effectively adapt to changing traffic conditions. Cameras, as sensors, offer a cost-effective means to determine the number, location, type, and speed of vehicles, aiding decision-making at traffic intersections. However, the effective use of cameras for traffic surveillance requires proper calibration. This paper proposes a new optimization-based method for camera calibration. In this approach, initial calibration parameters are established using the Direct Linear Transformation (DLT) method. Then, optimization algorithms are applied to further refine the calibration parameters for the correction of nonlinear lens distortions. A significant enhancement in the optimization process is achieved through the integration of the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO) into a combined Integrated GA and PSO (IGAPSO) technique. The effectiveness of this method is demonstrated through the calibration of eleven roadside cameras at three different intersections. The experimental results show that when compared to the baseline DLT method, the vehicle localization error is reduced by 22.30% with GA, 22.31% with PSO, and 25.51% with IGAPSO.
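
A minimal sketch of the refinement stage follows: a plain particle swarm adjusting two radial distortion coefficients around a DLT-style initial guess. The paper's full IGAPSO hybrid, its real reprojection-error objective, and the DLT step itself are not reproduced; the objective function and parameter ranges below are placeholders.

```python
# Toy particle swarm refining two radial distortion coefficients (k1, k2); the
# reprojection_error() objective is a hypothetical stand-in, and the GA component of
# the paper's IGAPSO integration is omitted.
import numpy as np

rng = np.random.default_rng(0)

def reprojection_error(k):
    # Placeholder objective: distance from an assumed "true" distortion (illustrative only).
    return float(np.sum((k - np.array([-0.30, 0.12])) ** 2))

n_particles, n_iters = 30, 100
pos = rng.uniform(-1.0, 1.0, size=(n_particles, 2))   # particles start near the DLT estimate
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([reprojection_error(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(n_iters):
    r1, r2 = rng.random((n_particles, 1)), rng.random((n_particles, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    val = np.array([reprojection_error(p) for p in pos])
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("refined distortion parameters:", gbest)
```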

17 pages, 11656 KiB  
Article
A Deep Reinforcement Learning Strategy for Surrounding Vehicles-Based Lane-Keeping Control
by Jihun Kim, Sanghoon Park, Jeesu Kim and Jinwoo Yoo
Sensors 2023, 23(24), 9843; https://doi.org/10.3390/s23249843 - 15 Dec 2023
Viewed by 913
Abstract
As autonomous vehicles (AVs) are advancing to higher levels of autonomy and performance, the associated technologies are becoming increasingly diverse. Lane-keeping systems (LKS), a key functionality of AVs, considerably enhance driver convenience. With drivers increasingly relying on autonomous driving technologies, the importance of safety features, such as fail-safe mechanisms in the event of sensor failures, has gained prominence. Therefore, this paper proposes a reinforcement learning (RL) control method for lane-keeping, which uses surrounding object information derived from LiDAR sensors instead of camera sensors for LKS. This approach uses surrounding vehicle and object information as observations for the RL framework to maintain the vehicle’s current lane. The learning environment is established by integrating simulation tools, such as IPG CarMaker, which incorporates vehicle dynamics, and MATLAB Simulink for data analysis and RL model creation. To further validate the applicability of the LiDAR sensor data in real-world settings, Gaussian noise is introduced in the virtual simulation environment to mimic sensor noise in actual operational conditions.
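
As a rough illustration of how LiDAR-derived surroundings could feed an RL lane-keeping agent, the sketch below assembles an observation vector with injected Gaussian noise and a simple lane-centering reward. The observation layout, reward weights, and noise level are assumptions for illustration; they are not taken from the CarMaker/Simulink setup used in the paper.

```python
# Illustrative observation/reward construction for an RL lane-keeping agent using
# LiDAR-derived surroundings; all layouts, weights, and the noise level are assumptions.
import numpy as np

def build_observation(lateral_offset, heading_error, surrounding_objects, noise_std=0.05):
    """surrounding_objects: list of (rel_x, rel_y) positions from LiDAR, in meters."""
    obs = [lateral_offset, heading_error]
    for rel_x, rel_y in surrounding_objects:
        obs.extend([rel_x, rel_y])
    obs = np.asarray(obs, dtype=np.float32)
    # Gaussian noise mimics LiDAR measurement noise, as the abstract describes.
    return obs + np.random.normal(0.0, noise_std, size=obs.shape)

def lane_keeping_reward(lateral_offset, heading_error, steering_rate):
    # Penalize deviation from the lane center, heading error, and jerky steering.
    return -(1.0 * lateral_offset**2 + 0.5 * heading_error**2 + 0.1 * steering_rate**2)

obs = build_observation(0.2, 0.03, [(12.0, 3.5), (-8.0, -3.5)])
print(obs.shape, lane_keeping_reward(0.2, 0.03, 0.01))
```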

27 pages, 13473 KiB  
Article
Design and Experimental Assessment of Real-Time Anomaly Detection Techniques for Automotive Cybersecurity
by Pierpaolo Dini and Sergio Saponara
Sensors 2023, 23(22), 9231; https://doi.org/10.3390/s23229231 - 16 Nov 2023
Cited by 1 | Viewed by 950
Abstract
In recent decades, an exponential surge in technological advancements has significantly transformed various aspects of daily life. The proliferation of indispensable objects such as smartphones and computers underscores the pervasive influence of technology. This trend extends to the healthcare, automotive, and industrial sectors, with the emergence of remote-operating capabilities and self-learning models. Notably, the automotive industry has integrated numerous remote access points like Wi-Fi, USB, Bluetooth, 4G/5G, and OBD-II interfaces into vehicles, amplifying the exposure of the Controller Area Network (CAN) bus to external threats. With a recognition of the susceptibility of the CAN bus to external attacks, there is an urgent need to develop robust security systems that are capable of detecting potential intrusions and malfunctions. This study aims to leverage fingerprinting techniques and neural networks on cost-effective embedded systems to construct an anomaly detection system for identifying abnormal behavior in the CAN bus. The research is structured into three parts, encompassing the application of fingerprinting techniques for data acquisition and neural network training, the design of an anomaly detection algorithm based on neural network results, and the simulation of typical CAN attack scenarios. Additionally, a thermal test was conducted to evaluate the algorithm’s resilience under varying temperatures.
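
As a generic stand-in for the neural-network part of such a system (not the authors' fingerprinting pipeline), the sketch below trains a small reconstruction network on features of normal CAN traffic and flags frames whose reconstruction error exceeds a threshold learned from that traffic. The feature set and synthetic data are hypothetical.

```python
# Generic reconstruction-based anomaly detector on CAN-frame features (inter-arrival
# time and payload entropy here); a stand-in for, not a reproduction of, the authors'
# fingerprinting + neural-network approach. Data and features are synthetic.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Synthetic "normal" traffic: ~10 ms inter-arrival time, mid-range payload entropy.
normal = np.column_stack([rng.normal(0.010, 0.001, 5000), rng.normal(0.5, 0.05, 5000)])

# Train the network to reproduce normal traffic features (autoencoder-style, 1-D bottleneck).
ae = MLPRegressor(hidden_layer_sizes=(8, 1, 8), max_iter=3000, random_state=0).fit(normal, normal)

# Threshold = 99th percentile of the reconstruction error on normal traffic.
err = np.sum((ae.predict(normal) - normal) ** 2, axis=1)
threshold = np.percentile(err, 99)

def is_anomalous(frame_features):
    recon = ae.predict(frame_features.reshape(1, -1))
    return float(np.sum((recon - frame_features) ** 2)) > threshold

print(is_anomalous(np.array([0.0005, 0.9])))  # flooding-like frame: abnormally fast, odd payload
print(is_anomalous(np.array([0.010, 0.5])))   # typical frame
```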

21 pages, 9874 KiB  
Article
Detection of Pedestrians in Reverse Camera Using Multimodal Convolutional Neural Networks
by Luis C. Reveles-Gómez, Huizilopoztli Luna-García, José M. Celaya-Padilla, Cristian Barría-Huidobro, Hamurabi Gamboa-Rosales, Roberto Solís-Robles, José G. Arceo-Olague, Jorge I. Galván-Tejada, Carlos E. Galván-Tejada, David Rondon and Klinge O. Villalba-Condori
Sensors 2023, 23(17), 7559; https://doi.org/10.3390/s23177559 - 31 Aug 2023
Cited by 1 | Viewed by 1631
Abstract
In recent years, the application of artificial intelligence (AI) in the automotive industry has led to the development of intelligent systems focused on road safety, aiming to improve protection for drivers and pedestrians worldwide to reduce the number of accidents yearly. One of the most critical functions of these systems is pedestrian detection, as it is crucial for the safety of everyone involved in road traffic. However, pedestrian detection goes beyond the front of the vehicle; it is also essential to consider the vehicle’s rear since pedestrian collisions occur when the car is in reverse drive. To contribute to the solution of this problem, this research proposes a model based on convolutional neural networks (CNN) using a proposed one-dimensional architecture and the Inception V3 architecture to fuse the information from the backup camera and the distance measured by the ultrasonic sensors, to detect pedestrians when the vehicle is reversing. In addition, specific data collection was performed to build a database for the research. The proposed model showed outstanding results with 99.85% accuracy and 99.86% correct classification performance, demonstrating that it is possible to achieve the goal of pedestrian detection using CNN by fusing two types of data.
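
In the spirit of the fusion architecture described above, the following Keras sketch combines an Inception V3 image branch for the backup camera with a small dense branch for ultrasonic distances into a single pedestrian/no-pedestrian output. Input shapes, layer sizes, and the number of ultrasonic sensors are assumptions, not the paper's configuration.

```python
# Hedged sketch of a two-branch fusion network: Inception V3 features from the rear
# camera concatenated with a dense encoding of ultrasonic distances; all sizes are
# illustrative assumptions.
import tensorflow as tf

image_in = tf.keras.Input(shape=(299, 299, 3), name="rear_camera")
backbone = tf.keras.applications.InceptionV3(include_top=False, weights=None, pooling="avg")
img_feat = backbone(image_in)

dist_in = tf.keras.Input(shape=(4,), name="ultrasonic_distances")   # e.g., four rear sensors
dist_feat = tf.keras.layers.Dense(16, activation="relu")(dist_in)

fused = tf.keras.layers.Concatenate()([img_feat, dist_feat])
fused = tf.keras.layers.Dense(64, activation="relu")(fused)
out = tf.keras.layers.Dense(1, activation="sigmoid", name="pedestrian")(fused)

model = tf.keras.Model(inputs=[image_in, dist_in], outputs=out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```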

20 pages, 5561 KiB  
Article
Road Feature Detection for Advance Driver Assistance System Using Deep Learning
by Hamza Nadeem, Kashif Javed, Zain Nadeem, Muhammad Jawad Khan, Saddaf Rubab, Dong Keon Yon and Rizwan Ali Naqvi
Sensors 2023, 23(9), 4466; https://doi.org/10.3390/s23094466 - 04 May 2023
Cited by 2 | Viewed by 2093
Abstract
Hundreds of people are injured or killed in road accidents. These accidents are caused by several intrinsic and extrinsic factors, including the attentiveness of the driver towards the road and its associated features. These features include approaching vehicles, pedestrians, and static fixtures, such as road lanes and traffic signs. If a driver is made aware of these features in a timely manner, many of these accidents can be avoided. This study proposes a computer vision-based solution for detecting and recognizing traffic types and signs to help drivers and pave the way for self-driving cars. A real-world roadside dataset was collected under varying lighting and road conditions, and individual frames were annotated. Two deep learning models, YOLOv7 and Faster RCNN, were trained on this custom-collected dataset to detect the aforementioned road features. The models produced mean Average Precision (mAP) scores of 87.20% and 75.64%, respectively, along with class accuracies of over 98.80%; all of these were state-of-the-art. The proposed model provides an excellent benchmark to build on to help improve traffic situations and enable future technological advances, such as Advanced Driver Assistance Systems (ADAS) and self-driving cars.
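
For the Faster RCNN half of the comparison, a minimal torchvision sketch is shown below, configured for a hypothetical set of road-feature classes. The class list, image size, and the untrained weights are illustrative assumptions; the paper trains on its own annotated roadside dataset.

```python
# Hedged sketch: torchvision's Faster R-CNN configured for a custom set of road-feature
# classes; the class list and dummy input are assumptions, not the paper's setup.
import torch
import torchvision

NUM_CLASSES = 1 + 4  # background + {lane marking, traffic sign, vehicle, pedestrian}
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights=None, num_classes=NUM_CLASSES)
model.eval()

# Inference on a dummy frame; a real pipeline would read annotated roadside video frames.
frame = [torch.rand(3, 480, 640)]
with torch.no_grad():
    detections = model(frame)[0]
print(detections["boxes"].shape, detections["labels"].shape, detections["scores"].shape)
```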

15 pages, 3868 KiB  
Article
Vehicle Localization in 3D World Coordinates Using Single Camera at Traffic Intersection
by Shenglin Li and Hwan-Sik Yoon
Sensors 2023, 23(7), 3661; https://doi.org/10.3390/s23073661 - 31 Mar 2023
Cited by 3 | Viewed by 3454
Abstract
Optimizing traffic control systems at traffic intersections can reduce the network-wide fuel consumption, as well as emissions of conventional fuel-powered vehicles. While traffic signals have been controlled based on predetermined schedules, various adaptive signal control systems have recently been developed using advanced sensors such as cameras, radars, and LiDARs. Among these sensors, cameras can provide a cost-effective way to determine the number, location, type, and speed of the vehicles for better-informed decision-making at traffic intersections. In this research, a new approach for accurately determining vehicle locations near traffic intersections using a single camera is presented. For that purpose, a well-known object detection algorithm called YOLO is used to determine vehicle locations in video images captured by a traffic camera. YOLO draws a bounding box around each detected vehicle, and the vehicle location in the image coordinates is converted to the world coordinates using camera calibration data. During this process, a significant error between the center of a vehicle’s bounding box and the real center of the vehicle in the world coordinates is generated due to the angled view of the vehicles by a camera installed on a traffic light pole. As a means of mitigating this vehicle localization error, two different types of regression models are trained and applied to the centers of the bounding boxes of the camera-detected vehicles. The accuracy of the proposed approach is validated using both static camera images and live-streamed traffic video. Based on the improved vehicle localization, it is expected that more accurate traffic signal control can be made to improve the overall network-wide energy efficiency and traffic flow at traffic intersections.
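
The image-to-world step can be sketched as follows: the center of a YOLO bounding box is mapped to ground-plane world coordinates with a calibration-derived homography, and a fixed offset stands in for the paper's trained regression correction. The homography values, bounding box, and offset are placeholders, not the paper's calibration data.

```python
# Sketch of the image-to-world conversion: map the center of a detected vehicle's
# bounding box to ground-plane coordinates via a homography, then apply a correction.
# H, the bounding box, and the fixed offset are placeholders, not calibration results.
import numpy as np

H = np.array([[0.05, 0.00, -20.0],   # hypothetical image-to-ground homography
              [0.00, 0.08, -15.0],
              [0.00, 0.00,   1.0]])

def image_to_world(u, v):
    p = H @ np.array([u, v, 1.0])
    return p[:2] / p[2]              # world (x, y) on the ground plane, in meters

def correct_center(xy):
    # The paper trains regression models to compensate for the camera's angled view;
    # a fixed offset stands in for that learned correction here.
    return xy + np.array([0.3, -0.5])

box = (410, 655, 520, 720)                               # (x1, y1, x2, y2) from YOLO, in pixels
u, v = (box[0] + box[2]) / 2, (box[1] + box[3]) / 2      # bounding-box center
print(correct_center(image_to_world(u, v)))
```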

17 pages, 9345 KiB  
Article
Deep Learning-Based Driver’s Hands on/off Prediction System Using In-Vehicle Data
by Hyeongoo Pyeon, Hanwul Kim, Rak Chul Kim, Geesung Oh and Sejoon Lim
Sensors 2023, 23(3), 1442; https://doi.org/10.3390/s23031442 - 28 Jan 2023
Viewed by 2072
Abstract
Driver’s hands on/off detection is very important for safety in current autonomous vehicles. Several studies have been conducted to create a precise algorithm. Although many studies have proposed various approaches, they have limitations in robustness and reliability. Therefore, we propose a deep learning model that utilizes in-vehicle data. We also established a data collection system, which collects in-vehicle data that are auto-labeled for efficient and reliable data acquisition. For a robust system, we devised a confidence logic that prevents outliers from swaying the prediction. To evaluate our model in more detail, we suggested a new metric to explain the events, considering state transitions. In addition, we conducted an extensive experiment on new drivers to demonstrate our model’s generalization ability. We verified that the proposed system achieved better performance than previous studies by resolving their drawbacks. Our model detected hands on/off transitions in 0.37 s on average, with an accuracy of 95.7%.
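
A small sketch of an outlier-resistant confidence logic of the kind described above is shown below: the reported hands on/off state flips only after several consecutive frames disagree with it. The threshold and window length are assumptions, not the authors' tuned values.

```python
# Hedged sketch of a confidence logic over per-frame hands on/off probabilities: the
# reported state flips only after enough consecutive agreeing frames, so a single
# outlier frame cannot sway the output. Parameter values are illustrative assumptions.
class HandsOnOffFilter:
    def __init__(self, on_threshold=0.5, required_consecutive=5):
        self.on_threshold = on_threshold
        self.required = required_consecutive
        self.state = True          # assume hands-on at start
        self.counter = 0

    def update(self, prob_hands_on: float) -> bool:
        candidate = prob_hands_on >= self.on_threshold
        if candidate != self.state:
            self.counter += 1      # evidence against the current state accumulates
            if self.counter >= self.required:
                self.state, self.counter = candidate, 0
        else:
            self.counter = 0       # a lone outlier frame resets the evidence
        return self.state

f = HandsOnOffFilter()
stream = [0.9, 0.1, 0.9, 0.05, 0.1, 0.1, 0.1, 0.1, 0.1]
print([f.update(p) for p in stream])   # stays True until enough consecutive off frames
```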