Article

Self-Optimizing Path Tracking Controller for Intelligent Vehicles Based on Reinforcement Learning

State Key Laboratory of Engines, Tianjin University, Tianjin 300072, China
*
Author to whom correspondence should be addressed.
Academic Editors: Rudolf Kawalla and Beloglazov Ilya
Symmetry 2022, 14(1), 31; https://doi.org/10.3390/sym14010031
Received: 17 November 2021 / Revised: 15 December 2021 / Accepted: 17 December 2021 / Published: 27 December 2021
The path tracking control system is a crucial component of autonomous vehicles; it is challenging to realize accurate tracking control across a wide range of uncertain situations and dynamic environments, particularly when such control must perform as well as, or better than, human drivers. While many methods achieve state-of-the-art tracking performance, they tend to rely on constant PID control parameters, calibrated by human experience, to improve tracking accuracy. A detailed analysis shows that fixed-gain PID controllers reduce the lateral error inefficiently under varied conditions, such as complex trajectories and variable speed. In addition, intelligent driving vehicles are highly non-linear systems, and high-fidelity models are unavailable in most autonomous systems. As for model-based controllers (MPC or LQR), the complex modeling process may increase the computational burden. With that in mind, a self-optimizing path tracking controller structure based on reinforcement learning is proposed. For the lateral control of the vehicle, a steering method based on the fusion of reinforcement learning and a traditional PID controller is designed to adapt to various tracking scenarios. According to the pre-defined path geometry and the real-time status of the vehicle, an interactive learning mechanism based on an RL framework (actor–critic, a symmetric network structure) realizes online optimization of the PID control parameters, in order to better handle the tracking error under complex trajectories and dynamic changes of the vehicle model parameters. Adaptation to velocity changes during tracking was also considered. The proposed control approach was tested in different path tracking scenarios; both driving-simulator platforms and on-site vehicle experiments verified the effectiveness of the proposed self-optimizing controller.
The results show that the approach adaptively adjusts the PID weights to maintain the tracking error (simulation: within ±0.071 m; real vehicle: within ±0.272 m) and the standard deviation of steering-wheel vibration (simulation: within ±0.04°; real vehicle: within ±80.69°); it also adapts to high-speed simulation scenarios (maximum speed above 100 km/h, average speed through curves of 63–76 km/h).
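The fusion described in the abstract, a PID steering law whose gains are retuned online by the actor of an actor–critic agent, can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the `actor` below is a hypothetical fixed policy standing in for a trained network, and the state features (lateral error, heading error, speed, path curvature) are assumptions.

```python
import numpy as np

class AdaptivePID:
    """PID lateral controller whose gains are scaled online by a policy.

    In the paper's scheme, an actor-critic RL agent optimizes the PID
    parameters online; here `actor` is a hypothetical hand-set policy
    that a trained actor network would replace.
    """

    def __init__(self, kp=1.0, ki=0.01, kd=0.1, dt=0.02):
        self.base = np.array([kp, ki, kd])  # baseline gains
        self.dt = dt
        self.integral = 0.0
        self.prev_err = 0.0

    def actor(self, state):
        # Hypothetical policy: maps (lateral error, heading error,
        # speed, curvature) to multiplicative gain adjustments bounded
        # in [0.5, 1.5] via tanh. A trained network replaces this.
        w = np.array([[0.2, 0.1, -0.01, 0.5],
                      [0.0, 0.0,  0.0,  0.0],
                      [0.1, 0.0,  0.01, 0.2]])
        return 1.0 + 0.5 * np.tanh(w @ state)

    def steer(self, lat_err, head_err, speed, curvature):
        # Gains are re-scaled every control step from the current state.
        state = np.array([lat_err, head_err, speed, curvature])
        kp, ki, kd = self.base * self.actor(state)
        self.integral += lat_err * self.dt
        deriv = (lat_err - self.prev_err) / self.dt
        self.prev_err = lat_err
        return kp * lat_err + ki * self.integral + kd * deriv
```

The design choice this sketch highlights is that the RL agent does not output the steering command directly; it only modulates the PID gains, so the controller degrades gracefully to a conventional PID when the policy output is neutral.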
Keywords: autonomous vehicle; path tracking; reinforcement learning; adaptive PID; self-optimizing controller; vehicle control
MDPI and ACS Style

Ma, J.; Xie, H.; Song, K.; Liu, H. Self-Optimizing Path Tracking Controller for Intelligent Vehicles Based on Reinforcement Learning. Symmetry 2022, 14, 31. https://doi.org/10.3390/sym14010031

