Article

Energy Management Strategy for Hybrid Electric Vehicles Based on Experience-Pool-Optimized Deep Reinforcement Learning

1 College of Mechanical and Electrical Engineering, Hainan University, Haikou 570228, China
2 College of Mechanical and Electrical Engineering, Hainan Vocational University of Science and Technology, Haikou 571126, China
3 Institute of Industrial Research, University of Portsmouth, Portsmouth PO1 2EG, UK
* Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(17), 9302; https://doi.org/10.3390/app15179302
Submission received: 23 July 2025 / Revised: 15 August 2025 / Accepted: 22 August 2025 / Published: 24 August 2025

Abstract

The energy management strategy of Hybrid Electric Vehicles (HEVs) plays a key role in improving fuel economy and reducing battery energy consumption. This paper proposes a Deep Reinforcement Learning-based energy management strategy optimized by the experience pool (P-HER-DDPG), aimed at improving the fuel efficiency of HEVs while accelerating training. The method integrates the mechanisms of Prioritized Experience Replay (PER) and Hindsight Experience Replay (HER) to address the reward sparsity and slow convergence issues faced by the traditional Deep Deterministic Policy Gradient (DDPG) algorithm when handling continuous action spaces. Under various standard driving cycles, the P-HER-DDPG strategy outperforms the traditional DDPG strategy, achieving an average fuel economy improvement of 5.85%, with a maximum increase of 8.69%. Compared to the DQN strategy, it achieves an average improvement of 12.84%. In terms of training convergence, the P-HER-DDPG strategy converges in 140 episodes, 17.65% faster than DDPG and 24.32% faster than DQN. Additionally, the strategy demonstrates more stable State of Charge (SOC) control, effectively mitigating the risks of battery overcharging and deep discharging. Simulation results show that P-HER-DDPG enhances both fuel economy and training efficiency, offering a practical solution in the field of energy management strategies.
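To illustrate the PER mechanism the abstract refers to, the sketch below shows a minimal proportional prioritized replay buffer, in which transitions with larger temporal-difference (TD) error are sampled more often. This is a generic, illustrative implementation under assumed hyperparameters (capacity, alpha), not the authors' P-HER-DDPG code; the HER relabeling step and the DDPG networks are omitted.

```python
import random

class PrioritizedReplayBuffer:
    """Minimal proportional PER: transition i is sampled with probability
    proportional to (|TD error_i| + eps) ** alpha."""

    def __init__(self, capacity=10000, alpha=0.6, eps=1e-6):
        self.capacity, self.alpha, self.eps = capacity, alpha, eps
        self.data, self.priorities = [], []

    def add(self, transition, td_error):
        # New transitions enter with priority derived from their TD error.
        priority = (abs(td_error) + self.eps) ** self.alpha
        if len(self.data) >= self.capacity:   # evict the oldest transition
            self.data.pop(0)
            self.priorities.pop(0)
        self.data.append(transition)
        self.priorities.append(priority)

    def sample(self, batch_size):
        # Sample indices in proportion to stored priorities.
        total = sum(self.priorities)
        probs = [p / total for p in self.priorities]
        indices = random.choices(range(len(self.data)), weights=probs, k=batch_size)
        return [self.data[i] for i in indices], indices

    def update(self, indices, td_errors):
        # After a learning step, refresh priorities with the new TD errors.
        for i, e in zip(indices, td_errors):
            self.priorities[i] = (abs(e) + self.eps) ** self.alpha
```

In a DDPG-style loop, the agent would call `add` after each environment step, `sample` a minibatch for the critic update, and then `update` the sampled indices with the recomputed TD errors, so informative (high-error) transitions are revisited more frequently.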
Keywords: hybrid electric vehicle; energy management strategy; prioritized experience replay; hindsight experience replay; P-HER-DDPG

Share and Cite

MDPI and ACS Style

Zhuang, J.; Li, P.; Liu, L.; Ma, H.; Cheng, X. Energy Management Strategy for Hybrid Electric Vehicles Based on Experience-Pool-Optimized Deep Reinforcement Learning. Appl. Sci. 2025, 15, 9302. https://doi.org/10.3390/app15179302


Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.
