Energy Storage Appl., Volume 2, Issue 4 (December 2025) – 1 article

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • Papers are published in both HTML and PDF forms; PDF is the official format. To view a paper in PDF, click the "PDF Full-text" link and open it with the free Adobe Reader.
22 pages, 1286 KB  
Article
Comparative Analysis of Optimal Control and Reinforcement Learning Methods for Energy Storage Management Under Uncertainty
by Elinor Ginzburg-Ganz, Itay Segev, Yoash Levron, Juri Belikov, Dmitry Baimel and Sarah Keren
Energy Storage Appl. 2025, 2(4), 14; https://doi.org/10.3390/esa2040014 - 17 Oct 2025
Abstract
The challenge of optimally controlling energy storage systems under uncertainty conditions, whether due to uncertain storage device dynamics or load signal variability, is well established. Recent research works tackle this problem using two primary approaches: optimal control methods, such as stochastic dynamic programming, and data-driven techniques. This work’s objective is to quantify the inherent trade-offs between these methodologies and identify their respective strengths and weaknesses across different scenarios. We evaluate the degradation of performance, measured by increased operational costs, when a reinforcement learning policy is adopted instead of an optimal control policy, such as dynamic programming, Pontryagin’s minimum principle, or the Shortest-Path method. Our study examines three increasingly intricate use cases: ideal storage units, storage units with losses, and lossy storage units integrated with transmission line losses. For each scenario, we compare the performance of a representative optimal control technique against a reinforcement learning approach, seeking to establish broader comparative insights.
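The trade-off the abstract quantifies, extra operational cost incurred by a learned policy relative to an optimal control policy, can be illustrated with a minimal toy sketch. The instance below is hypothetical and not taken from the paper: an ideal storage unit (the paper's first use case) serves a known load under time-varying grid prices, finite-horizon dynamic programming computes the optimal cost by backward induction, and tabular Q-learning is trained on the same problem so the two total costs can be compared.

```python
import random

# Hypothetical toy instance (not from the paper): an ideal storage unit must
# serve a known load over a short horizon, buying energy from the grid at
# time-varying prices. State = (time step, state of charge).
PRICES = [1.0, 5.0, 1.0, 5.0]   # grid price per unit energy at each step
LOAD = [1, 1, 1, 1]             # demand per step
CAPACITY = 3                    # maximum state of charge
ACTIONS = [-1, 0, 1]            # discharge one unit, idle, charge one unit
T = len(PRICES)

def step_cost(t, soc, a):
    """Grid purchase cost at step t: load plus charging, minus discharging."""
    return PRICES[t] * (LOAD[t] + a)

def valid(soc, a):
    return 0 <= soc + a <= CAPACITY

def dp_optimal_cost(soc0=0):
    """Backward-induction dynamic programming over the finite horizon."""
    V = [dict() for _ in range(T + 1)]
    for soc in range(CAPACITY + 1):
        V[T][soc] = 0.0
    for t in reversed(range(T)):
        for soc in range(CAPACITY + 1):
            V[t][soc] = min(step_cost(t, soc, a) + V[t + 1][soc + a]
                            for a in ACTIONS if valid(soc, a))
    return V[0][soc0]

def q_learning_cost(soc0=0, episodes=2000, alpha=0.2, eps=0.2, seed=0):
    """Tabular Q-learning (cost-minimizing, so greedy = argmin) on the
    same deterministic problem; returns the greedy policy's total cost."""
    rng = random.Random(seed)
    Q = {(t, soc, a): 0.0
         for t in range(T) for soc in range(CAPACITY + 1) for a in ACTIONS}
    for _ in range(episodes):
        soc = soc0
        for t in range(T):
            acts = [a for a in ACTIONS if valid(soc, a)]
            # epsilon-greedy action selection
            a = rng.choice(acts) if rng.random() < eps else \
                min(acts, key=lambda x: Q[(t, soc, x)])
            cost = step_cost(t, soc, a)
            nxt = soc + a
            future = 0.0 if t + 1 == T else min(
                Q[(t + 1, nxt, b)] for b in ACTIONS if valid(nxt, b))
            # TD update toward cost-to-go
            Q[(t, soc, a)] += alpha * (cost + future - Q[(t, soc, a)])
            soc = nxt
    soc, total = soc0, 0.0
    for t in range(T):
        acts = [a for a in ACTIONS if valid(soc, a)]
        a = min(acts, key=lambda x: Q[(t, soc, x)])
        total += step_cost(t, soc, a)
        soc += a
    return total

opt = dp_optimal_cost()
rl = q_learning_cost()
print(f"DP optimal cost: {opt:.1f}, Q-learning cost: {rl:.1f}, "
      f"regret: {rl - opt:.1f}")
```

On this instance the DP optimum is 4.0 (charge at the cheap steps, discharge at the expensive ones, versus 12.0 with no storage); the gap between the RL rollout's cost and the DP optimum is the kind of performance-degradation metric the paper studies, here on a deliberately trivial scale.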