Open Access Article
Energies 2017, 10(11), 1846; https://doi.org/10.3390/en10111846

Battery Energy Management in a Microgrid Using Batch Reinforcement Learning

1. ESAT/Electa, KU Leuven, Kasteelpark Arenberg 10 bus 2445, BE-3001 Leuven, Belgium
2. Energy Department, EnergyVille, Thor Park, Poort Genk 8130, 3600 Genk, Belgium
3. Energy Department, Vlaamse Instelling voor Technologisch Onderzoek (VITO), Boeretang 200, B-2400 Mol, Belgium
This paper is an extended version of our paper published in the International Workshop of Energy-Open 2017.
* Author to whom correspondence should be addressed.
Received: 15 October 2017 / Revised: 5 November 2017 / Accepted: 7 November 2017 / Published: 12 November 2017
(This article belongs to the Special Issue Selected Papers from International Workshop of Energy-Open)

Abstract

Motivated by recent developments in batch Reinforcement Learning (RL), this paper contributes to the application of batch RL to energy management in microgrids. We tackle the challenge of finding a closed-loop control policy that optimally schedules the operation of a storage device in order to maximize self-consumption of local photovoltaic production in a microgrid. In this work, the fitted Q-iteration algorithm, a standard batch RL technique, is used by an RL agent to construct a control policy. The proposed method is data-driven and uses a state-action value function to find an optimal scheduling plan for a battery. The battery’s charge and discharge efficiencies and the nonlinearity in the microgrid due to the inverter’s efficiency are taken into account. The proposed approach has been tested in simulation in a residential setting using data from Belgian residential consumers. The developed framework is benchmarked against a model-based technique, and the simulation results show a performance gap of 19%. The simulation results provide insight for developing optimal policies in more realistically scaled and interconnected microgrids, and for including uncertainties in generation and consumption for which white-box models become inaccurate and/or infeasible.
Keywords: control policy; fitted Q-iteration; microgrids; reinforcement learning
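The abstract above describes the fitted Q-iteration approach only at a high level. As an illustration (not the authors' implementation), the following Python sketch shows how a state-action value function can be fitted from a batch of microgrid transitions and then used greedily as a control policy. The function names, the discretized charge/discharge action set, the state features mentioned in the comments, and the choice of ExtraTreesRegressor as function approximator are assumptions made for this example.

```python
# Minimal fitted Q-iteration sketch (illustrative only; not the paper's code).
# Assumes a batch of transitions (state, action, reward, next_state) has been
# collected from the microgrid, states are feature vectors (e.g. time of day,
# PV production, battery state of charge), and actions form a small discrete
# set of charge/discharge power levels.
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor

def fitted_q_iteration(transitions, actions, n_iterations=50, gamma=0.95):
    """Learn a state-action value function Q(s, a) from a fixed batch."""
    states = np.array([t[0] for t in transitions])
    acts = np.array([[t[1]] for t in transitions])
    rewards = np.array([t[2] for t in transitions])
    next_states = np.array([t[3] for t in transitions])

    X = np.hstack([states, acts])           # regress on (state, action) pairs
    q_model = None
    for _ in range(n_iterations):
        if q_model is None:
            targets = rewards                # first iteration: immediate reward
        else:
            # Bootstrapped target: r + gamma * max_a' Q_{N-1}(s', a')
            next_q = np.column_stack([
                q_model.predict(np.hstack([next_states,
                                           np.full((len(next_states), 1), a)]))
                for a in actions
            ])
            targets = rewards + gamma * next_q.max(axis=1)
        q_model = ExtraTreesRegressor(n_estimators=50).fit(X, targets)
    return q_model

def greedy_action(q_model, state, actions):
    """Control policy: choose the action with the highest estimated Q-value."""
    q_values = [q_model.predict(np.hstack([state, [a]]).reshape(1, -1))[0]
                for a in actions]
    return actions[int(np.argmax(q_values))]
```

In a setup of this kind, greedy_action would be queried at each control step with the current state features to select the battery charge or discharge set-point; the details of the state definition, reward shaping, and efficiency modelling in the paper may differ from this sketch.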

This is an open access article distributed under the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Cite This Article

MDPI and ACS Style

Mbuwir, B.V.; Ruelens, F.; Spiessens, F.; Deconinck, G. Battery Energy Management in a Microgrid Using Batch Reinforcement Learning. Energies 2017, 10, 1846.


Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.
