Open Access Article

Reinforcement Learning Approach to Design Practical Adaptive Control for a Small-Scale Intelligent Vehicle

by Bo Hu 1,2,*,†, Jiaxi Li 1,*,†, Jie Yang 1, Haitao Bai 1, Shuang Li 1, Youchang Sun 1 and Xiaoyu Yang 1
1 Key Laboratory of Advanced Manufacturing Technology for Automobile Parts, Ministry of Education, Chongqing University of Technology, Chongqing 400054, China
2 State Key Laboratory of Engines, Tianjin University, Tianjin 300072, China
* Authors to whom correspondence should be addressed.
† B.H. and J.L. contributed equally to this research work.
Symmetry 2019, 11(9), 1139; https://doi.org/10.3390/sym11091139
Received: 18 August 2019 / Revised: 3 September 2019 / Accepted: 4 September 2019 / Published: 7 September 2019
Reinforcement learning (RL) based techniques have been employed for the tracking and adaptive cruise control of a small-scale vehicle, with the aim of transferring the obtained knowledge to a full-scale intelligent vehicle in the near future. Unlike most other control techniques, the purpose of this study is to seek a practical method that enables the vehicle, in a real environment and in real time, to learn the control behavior on its own while adapting to changing circumstances. In this context, it is necessary to design an algorithm that symmetrically considers both time efficiency and accuracy. Meanwhile, to realize adaptive cruise control specifically, a set of symmetrical control actions consisting of steering angle and vehicle speed needs to be optimized simultaneously. In this paper, the experimental setup of the small-scale intelligent vehicle is first introduced. Subsequently, three model-free RL algorithms are applied to develop, and finally form, a strategy that keeps the vehicle within its lane at a constant top velocity. Furthermore, a model-based RL strategy is compared, which incorporates learning from real experience and planning from simulated experience. Finally, a Q-learning based adaptive cruise control strategy is integrated into the existing tracking control architecture to allow the vehicle to slow down in curves and accelerate on straightaways. The experimental results show that the Q-learning and Sarsa(λ) algorithms achieve better tracking behavior than the conventional Sarsa, and Q-learning outperforms Sarsa(λ) in terms of computational complexity. The Dyna-Q method performs similarly to the Sarsa(λ) algorithm, but with a significant reduction in computational time. Compared with a fine-tuned proportional-integral-derivative (PID) controller, the well-balanced Q-learning performs better, and it can also be easily applied to control problems with more than one control action.
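To make the Q-learning based control strategy described above concrete, the sketch below shows a minimal tabular Q-learning update over a joint (steering angle, speed) action set. This is an illustrative sketch only: the discretized steering angles, speeds, state encoding, reward, and hyperparameters are assumptions for demonstration, not the authors' actual experimental setup.

```python
import random

# Hypothetical discretizations (not from the paper): the joint action is a
# symmetric pair (steering angle in degrees, speed in m/s), matching the
# abstract's description of optimizing both control actions simultaneously.
STEERING = [-20, 0, 20]
SPEEDS = [0.5, 1.0, 1.5]
ACTIONS = [(a, v) for a in STEERING for v in SPEEDS]

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # assumed learning hyperparameters

Q = {}  # Q[(state, action)] -> estimated return; unseen pairs default to 0


def q(state, action):
    return Q.get((state, action), 0.0)


def choose_action(state):
    # epsilon-greedy exploration over the joint (steering, speed) action set
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q(state, a))


def update(state, action, reward, next_state):
    # Q-learning: off-policy TD update toward the greedy next-state value
    best_next = max(q(next_state, a) for a in ACTIONS)
    Q[(state, action)] = q(state, action) + ALPHA * (
        reward + GAMMA * best_next - q(state, action)
    )
```

In an on-vehicle loop, `state` would be built from lane-tracking sensor readings, and the reward would penalize lane deviation while favoring higher speed on straight segments, which is what drives the slow-in-curves, fast-on-straightaways behavior the abstract reports.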
Keywords: reinforcement learning; adaptive control; small-scale intelligent vehicle; Q-learning
MDPI and ACS Style

Hu, B.; Li, J.; Yang, J.; Bai, H.; Li, S.; Sun, Y.; Yang, X. Reinforcement Learning Approach to Design Practical Adaptive Control for a Small-Scale Intelligent Vehicle. Symmetry 2019, 11, 1139.

Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.
