Open Access Article

Learning Agent for a Heat-Pump Thermostat with a Set-Back Strategy Using Model-Free Reinforcement Learning

Division ELECTA, Department of Electrical Engineering, Faculty of Engineering, KU Leuven, Kasteelpark Arenberg 10, Box 2445, Leuven 3001, Belgium
EnergyVille, Thor Park 8300, Genk 3600, Belgium
Energy Department of VITO, Flemish Institute for Technological Research, Boeretang 200, Mol 2400, Belgium
Author to whom correspondence should be addressed.
Academic Editor: Neville R. Watson
Energies 2015, 8(8), 8300-8318.
Received: 2 June 2015 / Revised: 26 June 2015 / Accepted: 29 June 2015 / Published: 6 August 2015
(This article belongs to the Collection Smart Grid)
The conventional control paradigm for a heat pump with a less efficient auxiliary heating element is to keep its temperature set point constant during the day. This constant set point ensures that the heat pump operates in its more efficient heat-pump mode and minimizes the risk of activating the less efficient auxiliary heating element. As an alternative to a constant set-point strategy, this paper proposes a learning agent for a thermostat with a set-back strategy. This set-back strategy relaxes the set-point temperature during convenient moments, e.g., when the occupants are not at home. Finding an optimal set-back strategy requires solving a sequential decision-making problem under uncertainty, which presents two challenges. The first challenge is that for most residential buildings, a description of the thermal characteristics of the building is unavailable and challenging to obtain. The second challenge is that the relevant state information, i.e., the state of the building envelope, cannot be measured by the learning agent. To overcome these two challenges, this paper proposes an auto-encoder coupled with a batch reinforcement learning technique. The proposed approach is validated for two building types with different thermal characteristics, for heating in the winter and cooling in the summer. The simulation results indicate that the proposed learning agent can reduce energy consumption by 4%–9% during 100 winter days and by 9%–11% during 80 summer days compared to the conventional constant set-point strategy.
Keywords: auto-encoder; batch reinforcement learning; heat pump; set-back thermostat
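The batch reinforcement learning step the abstract refers to can be illustrated with a minimal fitted Q-iteration sketch over a batch of logged transitions. Everything below — the first-order thermal model, the comfort/energy reward, the temperature discretization, and the tabular Q-function — is a simplifying assumption for illustration only, not the paper's actual simulator, auto-encoder, or controller:

```python
import numpy as np

# Toy first-order building model (an illustrative assumption, not the
# paper's simulator): heating adds 2 degrees C per step, and the indoor
# temperature relaxes toward the outdoor temperature.
def simulate_step(temp, action, t_out=0.0, dt=1.0):
    heat_gain = 2.0 if action == 1 else 0.0
    return temp + dt * (0.1 * (t_out - temp) + heat_gain)

# Reward: comfort penalty around a 20 degree C set point plus an energy
# cost for running the heat pump (also an assumption for illustration).
def reward(next_temp, action, t_set=20.0):
    return -abs(next_temp - t_set) - 0.5 * action

def fitted_q_iteration(batch, n_states=41, n_actions=2, gamma=0.95, iters=50):
    """Tabular fitted Q-iteration over a batch of (s, a, r, s') tuples."""
    bins = np.linspace(10.0, 30.0, n_states)

    def idx(t):  # discretize temperature into 0.5 degree C bins
        return int(np.clip(np.digitize(t, bins) - 1, 0, n_states - 1))

    Q = np.zeros((n_states, n_actions))
    for _ in range(iters):
        totals = np.zeros_like(Q)
        counts = np.zeros_like(Q)
        for s, a, r, s2 in batch:
            # Regression target: immediate reward + discounted best next value.
            totals[idx(s), a] += r + gamma * Q[idx(s2)].max()
            counts[idx(s), a] += 1
        filled = counts > 0
        Q[filled] = totals[filled] / counts[filled]
    return Q, idx

# Build a batch of random transitions: batch RL assumes no model access
# beyond previously logged experience.
rng = np.random.default_rng(0)
batch = []
for _ in range(5000):
    s = rng.uniform(10.0, 30.0)
    a = int(rng.integers(0, 2))
    s2 = simulate_step(s, a)
    batch.append((s, a, reward(s2, a), s2))

Q, idx = fitted_q_iteration(batch)
# The learned policy should heat when well below the set point
# and switch the heat pump off when above it.
print("action at 14 C:", Q[idx(14.0)].argmax())
print("action at 24 C:", Q[idx(24.0)].argmax())
```

In the paper's setting, the hand-crafted state here would be replaced by features extracted with an auto-encoder, and the tabular regressor by a more expressive function approximator; the fixed-batch structure of the update loop is the point of the sketch.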
MDPI and ACS Style

Ruelens, F.; Iacovella, S.; Claessens, B.J.; Belmans, R. Learning Agent for a Heat-Pump Thermostat with a Set-Back Strategy Using Model-Free Reinforcement Learning. Energies 2015, 8, 8300-8318.

