Open Access Article

Intelligent Control Strategy for Transient Response of a Variable Geometry Turbocharger System Based on Deep Reinforcement Learning

by Bo Hu 1,2,*, Jie Yang 1, Jiaxi Li 1, Shuang Li 1 and Haitao Bai 1
1 Key Laboratory of Advanced Manufacturing Technology for Automobile Parts, Ministry of Education, Chongqing University of Technology, Chongqing 400054, China
2 State Key Laboratory of Engines, Tianjin University, Tianjin 300072, China
* Author to whom correspondence should be addressed.
Processes 2019, 7(9), 601; https://doi.org/10.3390/pr7090601
Received: 15 August 2019 / Revised: 2 September 2019 / Accepted: 2 September 2019 / Published: 6 September 2019
Deep reinforcement learning (DRL) is an area of machine learning that combines deep learning with reinforcement learning (RL). However, few studies have analyzed the latest DRL algorithms on real-world powertrain control problems. Meanwhile, boost control of a variable geometry turbocharger (VGT)-equipped diesel engine is difficult, mainly due to its strong coupling with the exhaust gas recirculation (EGR) system and the large lag resulting from time delay and hysteresis between the input and output dynamics of the engine’s gas exchange system. In this context, one of the latest model-free DRL algorithms, the deep deterministic policy gradient (DDPG) algorithm, is employed in this paper to develop a strategy that tracks the target boost pressure under transient driving cycles. Using a fine-tuned proportional-integral-derivative (PID) controller as a benchmark, the results show that the proposed DDPG-based controller achieves good transient control performance from scratch by autonomously learning through interaction with the environment, without relying on model supervision or a complete environment model. In addition, the proposed strategy is able to adapt to a changing environment and hardware aging over time by adaptively tuning the algorithm online in a self-learning manner, making it attractive for real plant control problems in which system consistency cannot be strictly guaranteed and the environment may change over time.
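The paper does not reproduce its implementation here, so the following is a minimal PyTorch sketch of the DDPG update described in the abstract. It assumes a small observation vector (e.g., measured boost pressure, target boost pressure, engine speed, EGR valve position) and a single continuous action (the VGT vane command); the environment, network sizes, and hyperparameters are illustrative, not the authors' code.

```python
import torch
import torch.nn as nn


class Actor(nn.Module):
    """Deterministic policy: maps engine observations to a VGT vane command in [-1, 1]."""
    def __init__(self, obs_dim, act_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, act_dim), nn.Tanh(),
        )

    def forward(self, obs):
        return self.net(obs)


class Critic(nn.Module):
    """Action-value function Q(s, a) for the deterministic policy."""
    def __init__(self, obs_dim, act_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, obs, act):
        return self.net(torch.cat([obs, act], dim=-1))


def soft_update(target, source, tau=0.005):
    """Polyak averaging so the target networks slowly track the learned networks."""
    for t, s in zip(target.parameters(), source.parameters()):
        t.data.mul_(1.0 - tau).add_(tau * s.data)


def ddpg_update(batch, actor, critic, actor_t, critic_t, a_opt, c_opt, gamma=0.99):
    """One DDPG gradient step on a replay-buffer minibatch.

    `batch` is (obs, act, rew, next_obs) with rewards shaped (N, 1); a natural
    reward for boost tracking is the negative absolute error between measured
    and target boost pressure.
    """
    obs, act, rew, next_obs = (torch.as_tensor(x, dtype=torch.float32) for x in batch)
    # Critic update: regress Q(s, a) onto the one-step TD target from the target networks.
    with torch.no_grad():
        target_q = rew + gamma * critic_t(next_obs, actor_t(next_obs))
    critic_loss = nn.functional.mse_loss(critic(obs, act), target_q)
    c_opt.zero_grad()
    critic_loss.backward()
    c_opt.step()
    # Actor update: ascend the critic's estimate of the policy's own action.
    actor_loss = -critic(obs, actor(obs)).mean()
    a_opt.zero_grad()
    actor_loss.backward()
    a_opt.step()
    soft_update(critic_t, critic)
    soft_update(actor_t, actor)
```

In use, the agent would add exploration noise (e.g., Gaussian or Ornstein-Uhlenbeck) to the actor's output during training, store transitions in a replay buffer, and may keep updating online; that continual learning is what would allow the strategy to adapt to environmental drift and hardware aging as described in the abstract.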
Keywords: self-learning; transient response; variable geometry turbocharger; deep reinforcement learning; deep deterministic policy gradient
