Open Access | Editor's Choice | Article
Energy Management Strategy for a Hybrid Electric Vehicle Based on Deep Reinforcement Learning
by Yue Hu 1,2,3,†, Weimin Li 1,3,4,*, Kun Xu 1,†, Taimoor Zahid 1,2, Feiyan Qin 1,2 and Chenming Li 5
1 Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
2 Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen 518055, China
3 Jining Institutes of Advanced Technology, Chinese Academy of Sciences, Jining 272000, China
4 Department of Mechanical and Automation Engineering, The Chinese University of Hong Kong, Hong Kong 999077, China
5 Department of Electronic Engineering, The Chinese University of Hong Kong, Hong Kong 999077, China
† These authors contributed equally to this work.
Abstract
An energy management strategy (EMS) is important for hybrid electric vehicles (HEVs) since it plays a decisive role in vehicle performance. However, variations in future driving conditions strongly influence the effectiveness of the EMS. Most existing EMS methods simply follow predefined rules that do not adapt to different driving conditions online. It is therefore desirable for the EMS to learn from the environment or the driving cycle. In this paper, a deep reinforcement learning (DRL)-based EMS is designed so that it learns to select actions directly from the states, without any prediction or predefined rules. Furthermore, a DRL-based online learning architecture is presented, which is essential for applying the DRL algorithm to HEV energy management under different driving conditions. Simulation experiments have been conducted using MATLAB and Advanced Vehicle Simulator (ADVISOR) co-simulation. The experimental results validate the effectiveness of the DRL-based EMS compared with a rule-based EMS in terms of fuel economy. The online learning architecture is also shown to be effective. The proposed method ensures optimality, as well as real-time applicability, in HEVs.
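The abstract does not include implementation details. The following is a minimal Python sketch of the general idea it describes: a deep-RL agent that maps vehicle states directly to energy-management actions, without predefined rules. It assumes a DQN-style value network, a state of [battery SOC, vehicle speed, power demand], a discretized engine-power action space, and an illustrative reward trading off fuel use against SOC deviation; these choices, dimensions, and weights are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): a DQN-style agent that selects a
# discrete engine-power level directly from HEV states, as the abstract
# describes. State/action definitions and reward weights are illustrative.
import random
import torch
import torch.nn as nn

STATE_DIM = 3    # assumed state: [SOC, vehicle speed, power demand]
N_ACTIONS = 11   # assumed discretization of engine-power levels

class QNetwork(nn.Module):
    """Approximates Q(state, action) for the energy-management agent."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, N_ACTIONS),
        )

    def forward(self, state):
        return self.net(state)

def select_action(q_net, state, epsilon):
    """Epsilon-greedy choice of an engine-power level from the current state."""
    if random.random() < epsilon:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return int(q_net(state).argmax().item())

def reward(fuel_rate, soc, soc_target=0.6, alpha=1.0, beta=10.0):
    """Illustrative reward: penalize fuel consumption and SOC deviation."""
    return -(alpha * fuel_rate + beta * (soc - soc_target) ** 2)

# Example: one greedy decision for state [SOC=0.62, speed=15 m/s, demand=20 kW]
q_net = QNetwork()
state = torch.tensor([0.62, 15.0, 20.0])
action = select_action(q_net, state, epsilon=0.1)
print("chosen engine-power level index:", action)
```

In an online learning setting such as the one the abstract mentions, the network weights would continue to be updated from transitions collected while driving, so the policy can adapt to new driving conditions rather than following fixed rules.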