Open Access Article
Energies 2015, 8(7), 7243-7260; doi:10.3390/en8077243

Reinforcement Learning–Based Energy Management Strategy for a Hybrid Electric Tracked Vehicle

Collaborative Innovation Center of Electric Vehicles in Beijing, School of Mechanical Engineering, Beijing Institute of Technology, Beijing 100081, China
* Author to whom correspondence should be addressed.
Academic Editors: Joeri Van Mierlo, Ming Cheng, Omar Hegazy and Wei Hua
Received: 14 January 2015 / Revised: 16 June 2015 / Accepted: 29 June 2015 / Published: 16 July 2015
(This article belongs to the Special Issue Advances in Plug-in Hybrid Vehicles and Hybrid Vehicles)

Abstract

This paper presents a reinforcement learning (RL)–based energy management strategy for a hybrid electric tracked vehicle. A control-oriented model of the powertrain and vehicle dynamics is first established. According to the sample information of the experimental driving schedule, statistical characteristics at various velocities are determined by extracting the transition probability matrix of the power request. Two RL-based algorithms, namely Q-learning and Dyna algorithms, are applied to generate optimal control solutions. The two algorithms are simulated on the same driving schedule, and the simulation results are compared to clarify the merits and demerits of these algorithms. Although the Q-learning algorithm is faster (3 h) than the Dyna algorithm (7 h), its fuel consumption is 1.7% higher than that of the Dyna algorithm. Furthermore, the Dyna algorithm registers approximately the same fuel consumption as the dynamic programming–based global optimal solution. The computational cost of the Dyna algorithm is substantially lower than that of the stochastic dynamic programming.
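The two-stage pipeline the abstract describes — first estimate a transition probability matrix of the power request from driving-schedule samples, then run a model-based Dyna-style variant of Q-learning against that model — can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: the synthetic power-request trace, the four-level discretization, the engine-share action set, the fuel-cost proxy in `reward`, and all hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy power-request trace (kW), a stand-in for the experimental
# driving schedule sampled in the paper.
trace = rng.choice([10, 20, 30, 40], size=2000, p=[0.4, 0.3, 0.2, 0.1])

# Stage 1: extract the transition probability matrix of the power request
# by counting observed level-to-level transitions and normalizing rows.
levels = np.unique(trace)
idx = {p: i for i, p in enumerate(levels)}
counts = np.zeros((len(levels), len(levels)))
for a, b in zip(trace[:-1], trace[1:]):
    counts[idx[a], idx[b]] += 1
T = counts / counts.sum(axis=1, keepdims=True)  # each row sums to 1

# Stage 2: Dyna-style Q-learning — one update from a "real" transition
# per step, plus several planning updates sampled from the learned model T.
n_states, n_actions = len(levels), 3  # actions: engine power share 0, 0.5, 1
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps, plan_steps = 0.1, 0.95, 0.1, 10

def reward(s, a):
    # Illustrative fuel-cost proxy: penalize engine share times power demand.
    return -levels[s] * a / (n_actions - 1)

s = 0
for t in range(3000):
    # epsilon-greedy action selection
    a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
    s2 = rng.choice(n_states, p=T[s])  # real transition
    Q[s, a] += alpha * (reward(s, a) + gamma * Q[s2].max() - Q[s, a])
    for _ in range(plan_steps):        # planning sweeps using the model
        ps = rng.integers(n_states)
        pa = rng.integers(n_actions)
        ps2 = rng.choice(n_states, p=T[ps])
        Q[ps, pa] += alpha * (reward(ps, pa) + gamma * Q[ps2].max() - Q[ps, pa])
    s = s2
```

The planning loop is what distinguishes Dyna from plain Q-learning: extra Bellman backups are drawn from the estimated model rather than from new driving data, which is consistent with the abstract's observation that Dyna spends more computation per run but converges to a better (near-DP-optimal) policy.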
Keywords: reinforcement learning (RL); hybrid electric tracked vehicle (HETV); Q-learning algorithm; Dyna algorithm; dynamic programming (DP); stochastic dynamic programming (SDP)
This is an open access article distributed under the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


MDPI and ACS Style

Liu, T.; Zou, Y.; Liu, D.; Sun, F. Reinforcement Learning–Based Energy Management Strategy for a Hybrid Electric Tracked Vehicle. Energies 2015, 8, 7243-7260.

Energies EISSN 1996-1073, published by MDPI AG, Basel, Switzerland.