Open Access Article
Energies 2015, 8(8), 8300-8318; doi:10.3390/en8088300

Learning Agent for a Heat-Pump Thermostat with a Set-Back Strategy Using Model-Free Reinforcement Learning

1 Division ELECTA, Department of Electrical Engineering, Faculty of Engineering, KU Leuven, Kasteelpark Arenberg 10, Box 2445, Leuven 3001, Belgium
2 EnergyVille, Thor Park 8300, Genk 3600, Belgium
3 Energy Department of VITO, Flemish Institute for Technological Research, Boeretang 200, Mol 2400, Belgium
* Author to whom correspondence should be addressed.
Academic Editor: Neville R. Watson
Received: 2 June 2015 / Revised: 26 June 2015 / Accepted: 29 June 2015 / Published: 6 August 2015
(This article belongs to the Collection Smart Grid)
Abstract

The conventional control paradigm for a heat pump with a less efficient auxiliary heating element is to keep its temperature set point constant during the day. This constant set point ensures that the heat pump operates in its more efficient heat-pump mode and minimizes the risk of activating the less efficient auxiliary heating element. As an alternative to a constant set-point strategy, this paper proposes a learning agent for a thermostat with a set-back strategy. This set-back strategy relaxes the set-point temperature during convenient moments, e.g., when the occupants are not at home. Finding an optimal set-back strategy requires solving a sequential decision-making problem under uncertainty, which presents two challenges. The first challenge is that for most residential buildings, a description of the thermal characteristics of the building is unavailable and challenging to obtain. The second challenge is that the relevant state information, i.e., the thermal state of the building envelope, cannot be measured directly by the learning agent. In order to overcome these two challenges, our paper proposes an auto-encoder coupled with a batch reinforcement learning technique. The proposed approach is validated for two building types with different thermal characteristics, for heating in the winter and cooling in the summer. The simulation results indicate that the proposed learning agent can reduce the energy consumption by 4%–9% during 100 winter days and by 9%–11% during 80 summer days compared to the conventional constant set-point strategy.
Keywords: auto-encoder; batch reinforcement learning; heat pump; set-back thermostat
This is an open access article distributed under the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
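The abstract describes coupling an auto-encoder with batch reinforcement learning to learn a set-back policy without a building model. The sketch below is a minimal, hypothetical illustration of how such a pipeline could be wired together in Python with scikit-learn; it is not the authors' implementation. The window length, latent size, candidate set points, reward signal, and choice of regressor are all assumptions made for illustration.

```python
# Illustrative sketch (assumptions, not the paper's code): an auto-encoder
# compresses a window of past thermostat observations into a compact hidden
# state, and fitted Q-iteration (a standard batch RL method) learns a
# set-back policy from a batch of logged transitions.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.ensemble import ExtraTreesRegressor

WINDOW = 24                # past measurements per observation history (assumed)
LATENT = 3                 # size of the learned hidden-state features (assumed)
ACTIONS = [18.0, 21.0]     # candidate set points in deg C: set-back vs. comfort (assumed)
GAMMA = 0.95               # discount factor (assumed)

def fit_autoencoder(histories):
    """histories: (N, WINDOW) array of past indoor-temperature observations."""
    ae = MLPRegressor(hidden_layer_sizes=(LATENT,), max_iter=2000)
    ae.fit(histories, histories)   # reconstruct the input through a bottleneck
    return ae

def encode(ae, histories):
    # Bottleneck activations (ReLU, matching the default activation) serve
    # as the low-dimensional state features.
    return np.maximum(0.0, histories @ ae.coefs_[0] + ae.intercepts_[0])

def fitted_q_iteration(S, A, R, S_next, n_iter=50):
    """Batch RL over logged transitions (s, a, r, s') with encoded states."""
    X = np.column_stack([S, A])
    Q = None
    for _ in range(n_iter):
        if Q is None:
            target = R                     # first pass: Q approximates the reward
        else:
            q_next = np.max(
                [Q.predict(np.column_stack([S_next, np.full(len(S_next), a)]))
                 for a in ACTIONS], axis=0)
            target = R + GAMMA * q_next
        Q = ExtraTreesRegressor(n_estimators=50).fit(X, target)
    return Q

def greedy_setpoint(Q, s):
    """Pick the set point with the highest predicted Q-value for state s."""
    scores = [Q.predict(np.column_stack([[s], [[a]]]))[0] for a in ACTIONS]
    return ACTIONS[int(np.argmax(scores))]
```

Under these assumptions, a deployment would fit the auto-encoder on logged sensor histories, encode every logged transition, run fitted Q-iteration over the accumulated winter or summer batch, and then query greedy_setpoint at each control step to decide between the set-back and comfort set points.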

Share & Cite This Article

MDPI and ACS Style

Ruelens, F.; Iacovella, S.; Claessens, B.J.; Belmans, R. Learning Agent for a Heat-Pump Thermostat with a Set-Back Strategy Using Model-Free Reinforcement Learning. Energies 2015, 8, 8300-8318.
