Article

Deep Neural Network Approach for Prediction of Heating Energy Consumption in Old Houses

1 Korea Institute of Energy Research, 152, Gajeong-ro, Yuseong-gu, Daejeon 34129, Korea
2 Department of Architecture and Architectural Engineering, Yonsei University, 50, Yonsei-ro, Seodaemun-gu, Seoul 03722, Korea
* Author to whom correspondence should be addressed.
Energies 2021, 14(1), 122; https://doi.org/10.3390/en14010122
Submission received: 15 November 2020 / Revised: 19 December 2020 / Accepted: 22 December 2020 / Published: 28 December 2020
(This article belongs to the Section G: Energy and Buildings)

Abstract
Neural network models are data-driven and are effective for predicting and interpreting nonlinear or otherwise unexplainable physical phenomena. This study collected building information and heating energy consumption data from 16,158 old houses, selected the key input variables that affect heating energy consumption from the collected datasets, and developed the deep neural network (DNN) model with the highest accuracy for predicting heating energy consumption in old houses. As a result, 11 key input variables were selected and an optimal DNN model was developed. The optimal DNN model showed the highest prediction accuracy (R2 = 0.961) with five hidden layers and 22 neurons per layer. When the optimal DNN model was applied to the standard model of low-income detached houses, its prediction accuracy relative to the EnergyPlus calculation, expressed as Cv(RMSE), was 8.74%, which comfortably satisfies the ASHRAE criterion.

1. Introduction

According to the 2018 National Housing Information Survey of the Korean Statistical Information Service (KOSIS), there are about 17.63 million houses in Korea, 9% of which were built before 1979, when housing insulation standards were first enacted, and 47% of which are more than 20 years old [1]. Old houses typically suffer from a lack of insulation and poor airtightness, which may cause heat loss and thus excessive energy consumption. For this reason, the government announced the 3rd National Energy Plan and has made efforts to improve the energy welfare system and to reduce the energy consumption of old houses. Old houses, however, are smaller and far more numerous than general buildings. In addition, because of a shortage of diagnostic equipment and manpower and the long diagnostic time required, it is difficult to measure all the parameters affecting energy consumption and to predict energy consumption from them.
There are two modeling approaches for predicting building energy consumption: physics-based models and data-driven models [2]. Physics-based models are based on the laws of thermodynamics and physics. EnergyPlus, eQuest, and TRNSYS are representative energy-simulation software tools developed with this approach [3]. They calculate building energy consumption from building parameters, air-conditioning and heating equipment parameters, and environmental parameters such as construction details, operation schedules, HVAC (Heating, Ventilation, and Air Conditioning) design information, and climate, sky, and solar/shading information. However, the physics-based model requires substantial computation time and resources [4] and, in many cases, does not accurately reflect the thermal performance of the actual building, because the model is simplified to compensate for the lack of detailed building information available at the time of simulation [5]. The data-driven model, on the other hand, is an effective method for modeling physical phenomena whose theory is unknown or unexplainable. This approach has recently attracted attention from researchers [6,7,8,9,10] because it can simulate energy consumption from available building information and energy data without the detailed modeling and numerous input parameters required by the physics-based approach. The data-driven approach, however, requires enough data to obtain accurate results, as well as insight into the appropriate preprocessing of datasets and the interpretation of simulation results.
Neural network modeling is a data-driven approach widely used in the prediction of building energy consumption [11,12,13,14,15,16,17,18,19,20]. The MATLAB Neural Network Toolbox, TensorFlow/Keras, and PyTorch are representative software tools widely used for neural network modeling. An artificial neural network (ANN) is a learning algorithm inspired by biological neural networks. Early ANNs had a single hidden layer, a structure applied mainly to nonlinear regression analysis. As input dimensions and noise components grow, however, a shallow ANN can no longer fit such problems, whereas a deep neural network (DNN) can meet requirements such as higher accuracy, shorter computation time, and robustness to noise.
Neural network models have been used by many researchers for building energy simulation. González and Zamarreño [16] proposed a data-driven approach for predicting building energy consumption; their ANN model produced more accurate results with fewer input data, and more quickly, than conventional physics-based models. Huang et al. [17] and Biswas et al. [18] applied ANN models to residential buildings and heating systems. Tardioli et al. [19] used an ANN model to predict energy demand at the urban level rather than for individual buildings. Mohandes et al. [20] used a DNN model to predict the energy consumption of a commercial building, and Luo et al. [21] applied a DNN model to predict the electricity consumption of an office building; by extracting features from weather data, their DNN provided accurate week-ahead predictions of energy consumption. In summary, DNN models have mostly been used to predict the energy consumption of commercial or general residential buildings, and rarely for old houses.
This study aimed to develop a DNN model for predicting the heating energy consumption of 16,158 old houses in Korea. The key input variables affecting heating energy consumption were selected and, based on these variables, an optimal DNN model was proposed by determining the structural parameters (the number of hidden layers and the number of neurons per layer) with the highest prediction accuracy. In addition, we evaluated the applicability of the optimal DNN model to a standard model of a low-income detached house.

2. Characteristics of Old Houses

The Korea Institute of Energy Research (KIER) has been carrying out an energy efficiency improvement project for old houses since 2014. The project supports households in fuel poverty (public aid recipients) under the national Energy Law, with the aim of preventing energy inequality and social polarization and improving energy efficiency. Households in fuel poverty have low incomes and spend more than 10% of their ordinary income on energy. Each year, the government collects building information through diagnosticians for about 20,000 households and, based on an on-site inspection, supports repair/replacement work on walls, windows, doors, airtightness, and boilers to improve energy efficiency. The building information includes the architecture scheme (householder, address, region, building structure, building orientation, number of residents, year of completion, building use, floor plan) and energy performance (area of building envelope, U-value, ACH (Air Changes per Hour), heating equipment) before and after the repair/replacement work [22]. Table 1 shows a sample portion of the collected building information.
This study targeted 16,158 old houses among the roughly 20,000 households for which building information was collected. These houses were completed more than 20 years ago, had deteriorated structurally and functionally due to aging, and consumed large amounts of energy owing to low insulation performance [23]. The old houses had either a light-weight structure (panel, wood, and prefabricated structures) or a heavy-weight structure (steel, concrete, and masonry). The average heating space area was 42.89 m2, and the average areas of the envelope elements, including the roof, walls, floors, windows, and doors, were 42.66, 47.20, 40.90, 6.57, and 2.27 m2, respectively. The U-value was measured with a heat-flux meter at spot locations on the envelope according to ISO 9869-1, using the average analysis method; each measurement took 72 h. The average U-values for the roof, walls, floors, windows, and doors were 0.97, 0.91, 1.01, 4.28, and 2.78 W/(m2·K), respectively. An ACH of 1.0 h−1 was applied, as suggested by the results of a detailed survey of low-income houses in Korea [22]. Heating was provided by individual boilers (gas, oil, or briquette). Boiler efficiency was taken from the nameplate values recorded by the diagnosticians in the field; the average efficiency was 84.4%. Heating energy consumption was calculated according to ISO 52016 [24], the international standard for calculating the heating and cooling energy consumption of buildings. It contains calculation methods for assessing sensible energy needs for heating and cooling, latent energy needs for dehumidification, design sensible heating/cooling loads, design latent loads, and internal temperatures. The average energy consumption was 279.42 kWh/(m2·a), which is higher than the minimum energy consumption (173.20 kWh/(m2·a)) needed to escape from fuel poverty [25].

3. Input Data and Configuration of DNN Model

3.1. Preparation of Input Data

The input data for the DNN model used in this study came from the building information of the 16,158 old houses surveyed in 2019 (1 January 2019–31 December 2019). The input data were prepared for modeling through a preprocessing sequence of error elimination, missing-data elimination, outlier elimination, and normalization.
  • Error elimination: the process of removing inaccurate information from the collected input data. For example, address or meteorological entries that did not match, or contradicted, the actual data for the old detached houses of this study were removed.
  • Missing data elimination: collected data may contain missing values, and statistical analysis of such datasets does not produce reliable results. Therefore, the data were checked for missing values, and records containing them were removed or replaced with correct values.
  • Outlier elimination: an outlier, in statistics, is a data point that differs significantly from other observations. An outlier may be due to variability in the measurement or may indicate experimental error; the latter are sometimes excluded from the data set. Outliers can seriously distort analytical results. Values above the 97.5th percentile or below the 2.5th percentile of the normal distribution were treated as outliers. This study removed outliers using the Mahalanobis distance method [26]. The Mahalanobis distance (MD) is the distance between two points in multivariate space. The MD calculation, Mahalanobis score (probability) conversion, and p-value test were conducted using IBM SPSS Statistics software. Through this process, 16,158 records were retained from the original 17,008.
  • Normalization: data with different scales are rescaled to a common range of 0 to 1. In this study, normalization was performed using Equation (1); a brief code sketch of the outlier-elimination and normalization steps is given after the equation:
$$ x_{\mathrm{new}} = \frac{x - x_{\min}}{x_{\max} - x_{\min}} \tag{1} $$
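To make the preprocessing steps above concrete, the following Python sketch reimplements the outlier-elimination and normalization stages under stated assumptions: raw_data is a hypothetical pandas DataFrame holding the numeric survey variables after error and missing-data elimination, NumPy/SciPy stand in for the IBM SPSS workflow actually used, and the chi-square p-value cut-off of 0.001 on the squared Mahalanobis distance is a common choice rather than a value reported in the paper.

```python
import numpy as np
import pandas as pd
from scipy.stats import chi2

def remove_outliers_mahalanobis(df: pd.DataFrame, alpha: float = 0.001) -> pd.DataFrame:
    """Drop rows whose squared Mahalanobis distance from the sample mean is improbably large."""
    x = df.to_numpy(dtype=float)
    cov_inv = np.linalg.pinv(np.cov(x, rowvar=False))
    diff = x - x.mean(axis=0)
    d2 = np.einsum("ij,jk,ik->i", diff, cov_inv, diff)   # squared Mahalanobis distances
    p_values = 1.0 - chi2.cdf(d2, df=x.shape[1])         # chi-square test, dof = number of variables
    return df[p_values > alpha]

def min_max_normalize(df: pd.DataFrame) -> pd.DataFrame:
    """Rescale every column to [0, 1] following Equation (1)."""
    return (df - df.min()) / (df.max() - df.min())

# Assumed usage: raw_data is the survey table with errors and missing values already removed.
# data = min_max_normalize(remove_outliers_mahalanobis(raw_data))
```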

3.2. Neural Network Model

The DNN model has an input layer, hidden layers, and an output layer (Figure 1), similar to the structure of biological neurons with dendrites, axons, and cell bodies. Each node receives external inputs, adjusts its influence on the output through a weight and a bias, and produces its output through an activation function. Representative activation functions include the step, sigmoid, and linear functions. The optimal weights are adjusted during learning using back propagation [27]. DNN models are often used to solve non-theoretical problems such as pattern recognition and classification, as well as prediction in place of mathematical models. Back propagation is a representative algorithm for training neural networks by supervised learning, in which signals are propagated forward and errors are propagated backward. As a learning function of the DNN model, the Levenberg-Marquardt algorithm (LMA) is widely used. The LMA solves the nonlinear least-squares problem [28] and updates the link weights and bias values accordingly [29]. This study proposes a DNN model as an alternative for estimating the heating energy consumption of old houses.
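As a minimal illustration of the structure just described (a weight and bias at each node, with an activation function producing the node output), the sketch below performs a forward pass through a small fully connected network in NumPy. The layer sizes are arbitrary examples, and training by back propagation or the Levenberg-Marquardt algorithm is deliberately omitted.

```python
import numpy as np

def sigmoid(z: np.ndarray) -> np.ndarray:
    return 1.0 / (1.0 + np.exp(-z))

def forward(x: np.ndarray, weights: list, biases: list) -> np.ndarray:
    """Propagate inputs through sigmoid hidden layers to a linear output node."""
    a = x
    for w, b in zip(weights[:-1], biases[:-1]):
        a = sigmoid(a @ w + b)               # hidden layers: weight, bias, activation
    return a @ weights[-1] + biases[-1]      # linear output layer

# Example: 11 inputs -> two hidden layers of 10 neurons each -> 1 output (sizes are arbitrary)
rng = np.random.default_rng(0)
sizes = [11, 10, 10, 1]
weights = [rng.normal(size=(m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]
y_hat = forward(rng.random((1, 11)), weights, biases)
```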

3.3. Modeling Approach

In this study, a DNN model for predicting the heating energy consumption of old houses was developed through a process of key input variable selection, initial model setup, and optimization. The selection of key input variables affects the prediction accuracy of the model: if the input variables are chosen poorly, the prediction accuracy may drop significantly. It is therefore very important to find the key input variables most relevant to the target variable. The key input variables in this study were selected through correlation analysis [30] between the input variables and the target variable listed in Table 2, followed by prediction-accuracy analysis [31] using an initial DNN model configured as in Table 3.
  • Correlation analysis: the Pearson correlation coefficients between the input variable and the target variable were calculated using the IBM Statistical Package for the Social Sciences (SPSS) [32]. From these, the input variables with a strong correlation were included as the key input variables.
  • Prediction accuracy analysis using an initial DNN model: a stepwise method was applied. Possible combination cases were created from the key input variables selected in the correlation analysis above, and the coefficient of determination (R2) was calculated for each case. The final key input variables were determined by stepwise exclusion of the combinations containing the input variable with the lowest coefficient of determination; a code sketch of this two-stage screening follows this list.
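The two-stage screening can be sketched as follows. This is an illustrative reconstruction, not the authors' workflow: data and target are assumed names for a numeric DataFrame of input variables and the heating energy consumption series, scikit-learn replaces the SPSS and MATLAB tools actually used, and the correlation cut-offs (0.40 and 0.25) are illustrative values chosen to mirror the primary/secondary split described in Section 4.1, not thresholds stated in the paper.

```python
from itertools import combinations
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# Stage 1: rank inputs by absolute Pearson correlation with the target variable.
corr = data.corrwith(target).abs().sort_values(ascending=False)
primary = corr[corr >= 0.40].index.tolist()                      # strongly correlated variables
secondary = corr[(corr < 0.40) & (corr >= 0.25)].index.tolist()  # borderline variables to re-test

# Stage 2: score every combination of borderline variables with an initial network.
results = {}
for k in range(len(secondary) + 1):
    for combo in combinations(secondary, k):
        cols = primary + list(combo)
        x_tr, x_te, y_tr, y_te = train_test_split(
            data[cols], target, test_size=0.3, random_state=0)
        model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=1000,
                             random_state=0).fit(x_tr, y_tr)
        results[tuple(cols)] = r2_score(y_te, model.predict(x_te))

key_inputs = list(max(results, key=results.get))  # combination with the highest R2
```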
In this study, modeling was performed using the MATLAB Neural Network Toolbox from The MathWorks, Inc. [33]. The initial DNN model used the parameter values in Table 3. Optimization of the model is the process of selecting, from the initial neural network model, the most suitable structural variables (number of hidden layers and number of neurons per layer) and the parameters affecting learning speed, in order to improve predictive performance. The number of hidden layers and the number of neurons per hidden layer giving the highest prediction accuracy (R2-value) were determined by simulation over the ranges of 1–10 hidden layers and 10–30 neurons, using the selected key input variables. The influence of the parameters affecting learning speed was not considered in this study.
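For readers without the MATLAB Neural Network Toolbox, the sketch below maps the Table 3 settings onto a generic estimator. It is only an approximate stand-in: scikit-learn's MLPRegressor does not implement the Levenberg-Marquardt algorithm (the 'lbfgs' solver is substituted), the learning-rate and momentum entries of Table 3 apply only to gradient-descent-type solvers and are therefore omitted, and data, target, and key_inputs are the assumed names carried over from the previous sketch.

```python
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# 70% training; the remaining 30% is split evenly into validation and test sets (Table 3).
x_train, x_rest, y_train, y_rest = train_test_split(
    data[key_inputs], target, test_size=0.30, random_state=0)
x_val, x_test, y_val, y_test = train_test_split(
    x_rest, y_rest, test_size=0.50, random_state=0)

initial_model = MLPRegressor(
    hidden_layer_sizes=(10,),   # 1 hidden layer with 10 neurons (initial structure, Table 3)
    solver="lbfgs",             # stand-in; the authors used the Levenberg-Marquardt algorithm
    max_iter=1000,              # "Epochs" in Table 3
    tol=0.01,                   # loose analogue of the "Goal" error target
    random_state=0,
).fit(x_train, y_train)

print("Validation R2:", initial_model.score(x_val, y_val))
```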

4. Results and Discussion

4.1. Selection of Key Input Variables

The correlation between each input variable and the target variable was analyzed with SPSS, and the results are summarized as Pearson correlation coefficients in Table 4. The Pearson correlation coefficient [34] quantifies the linear correlation between X and Y: it is the covariance of the two variables divided by the product of their standard deviations and takes values between +1 and −1, where +1 indicates a perfect positive linear correlation, 0 no linear correlation, and −1 a perfect negative linear correlation. As shown in Table 4, the Pearson correlation coefficients rank in the order roof U-value > roof area > wall U-value > floor area > floor U-value > year of completion > wall area > heating space area > boiler efficiency > door area > window U-value > window area > region > structure > door U-value > ACH > building orientation. The boiler type was not considered in this analysis because it does not affect heating energy consumption. From the correlation analysis, roof U-value (0.636), roof area (0.617), wall U-value (0.604), floor area (0.557), floor U-value (0.550), year of completion (−0.539), wall area (0.488), and heating space area (0.430) were selected as the primary key input variables because of their high correlation with the target variable. Boiler efficiency (−0.276), door area (0.275), window U-value (0.269), and window area (0.252), with relatively low Pearson correlation coefficients, were retained as variables to be reconsidered in the prediction-accuracy analysis of the initial DNN model.
Table 5 shows the coefficient of determination, R2, which represents the prediction accuracy of the initial DNN model for each combination case. The R2 values from Case 1 to Case 16 ranged from 0.890 to 0.936. The coefficient of determination measures how well the model explains the data set; the higher the value, the higher the prediction accuracy. As seen in Table 5, Case 13 shows the highest prediction accuracy: when the initial DNN was modeled with 11 input variables (excluding door area), R2 reached 0.936. Based on these results, we selected 11 input variables (roof U-value, roof area, wall U-value, floor area, floor U-value, year of completion, wall area, heating space area, boiler efficiency, window area, and window U-value) as the final key input variables for the DNN model for predicting the energy consumption of old houses.

4.2. Optimal DNN Model

The optimization of the DNN model was performed by calculating R2 while varying the structural variables (the number of hidden layers and the number of neurons per layer) for the selected key input variables, and then identifying the structural variables with the highest R2 (i.e., best prediction accuracy) for use in the DNN model. In the optimization process, the number of hidden layers was increased from 1 to 10 and the number of neurons per hidden layer from 10 to 30, and R2 was calculated for the resulting combinations of structural variables. Because the R2 value varied slightly between training runs due to random sampling, each configuration was trained 30 times and the average of the results was used.
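A hedged sketch of this structural search, under the same assumptions as the earlier sketches (data, target, and key_inputs as assumed names, scikit-learn as a stand-in for the MATLAB toolbox): each candidate structure is trained 30 times on different random splits and the R2 values are averaged, following the two-step procedure reported in Tables 6 and 7.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

def mean_r2(n_layers: int, n_neurons: int, n_repeats: int = 30) -> float:
    """Average test R2 over repeated trainings with random resampling."""
    scores = []
    for seed in range(n_repeats):
        x_tr, x_te, y_tr, y_te = train_test_split(
            data[key_inputs], target, test_size=0.3, random_state=seed)
        model = MLPRegressor(hidden_layer_sizes=(n_neurons,) * n_layers,
                             max_iter=1000, random_state=seed).fit(x_tr, y_tr)
        scores.append(r2_score(y_te, model.predict(x_te)))
    return float(np.mean(scores))

# Step 1: fix neurons at 10 and vary the number of hidden layers (cf. Table 6).
best_layers = max(range(1, 11), key=lambda n: mean_r2(n, 10))
# Step 2: fix the chosen layer count and vary the neurons per layer (cf. Table 7).
best_neurons = max(range(10, 31), key=lambda n: mean_r2(best_layers, n))
```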
Table 6 shows the R2 results when the number of neurons was fixed at 10 and the number of hidden layers was varied from 1 to 10. The prediction accuracy was highest, with an R2 of 0.954, when the number of hidden layers was five. Table 7 shows the results obtained by fixing the number of hidden layers at five (as determined in Table 6) and varying the number of neurons from 10 to 30. When the number of neurons was 22, the prediction accuracy was highest, with an R2 of 0.961. The regression results (R2) for the training, validation, test, and total datasets are plotted in Figure 2.

4.3. Summary and Discussion

As shown in Table 4 above, the correlation and prediction-accuracy analyses showed that the U-values and envelope areas of the roof, walls, and floor were highly correlated with heating energy consumption, which they influence through envelope heat loss proportional to the U-value and envelope area [24]. The completion year of the building is not used in physics-based models (e.g., EnergyPlus), but in the DNN model it was identified as a major variable influencing the prediction results. This variable is also related to the U-value requirements of the national building code, which have been strengthened over time. The heating space area is proportional to the occupied space and building volume and affects the heating load through infiltration heat loss. Although boiler efficiency is an important variable for heating energy consumption, it ranked relatively low in this study; its low correlation seems attributable to pipe heat losses and efficiency decline caused by aging. The area and U-value of the windows, like those of the roof, walls, and floor described above, contribute to envelope heat loss, but the window area is small and its influence is therefore limited. The other variables (door area and U-value, region, structure, ACH, and building orientation) did not significantly affect the prediction of the heating energy consumption of old houses by the DNN model in this study.
The structure of a neural network varies with its complexity and hyperparameter settings, and an inappropriate model may cause underfitting or overfitting. Underfitting refers to a state in which learning fails because there are too few data to approach the decision boundary, while overfitting refers to high variance caused by overtraining. To avoid these problems, regularization, careful hyperparameter selection, and a sufficient amount of training/validation data are necessary [28,35]. This study performed learning by dividing the 16,158 data sets into training, validation, and test sets. As a result, the neural network structure with five hidden layers and 22 neurons per layer had the highest prediction accuracy (R2 = 0.961), which shows that this optimized DNN model is suitable for predicting the heating energy consumption of old houses.
The developed optimal DNN model also showed high predictive accuracy when applied to a Korean standard house that was not used in the training above. The Korean standard house is a representative house defined by extracting typical features from the architectural information, envelope performance, and floor plans obtained by surveying about 3000 low-income detached houses [23]. The floor plan of the standard house is shown in Figure 3. Table 8 shows the building information used as model input and the heating energy consumption of the standard house as calculated by EnergyPlus. A description of EnergyPlus is given in Appendix A, and details are contained in the User's Manual from the Department of Energy [36]. Table 9 compares the EnergyPlus result (from Table 8) with the heating energy consumption predicted by the optimal DNN model for the standard house. The annual heating energy consumption of the standard house calculated by EnergyPlus was 12,143 kWh/a (272.75 kWh/(m2·a) per unit area), while the optimal DNN model predicted 13,307 kWh/a (298.90 kWh/(m2·a) per unit area). The Cv(RMSE) of the heating energy consumption predicted by the optimal DNN model was 8.74%. The Cv(RMSE) is the coefficient of variation (Cv) of the root mean square error (RMSE) and measures the difference between actual and predicted values; the closer it is to 0%, the more accurate the prediction. ASHRAE (the American Society of Heating, Refrigerating and Air-Conditioning Engineers) suggests acceptable Cv(RMSE) values of 15% on a monthly basis and 30% on an hourly basis, as shown in Table 10 [37]. The Cv(RMSE) of 8.74% indicates that the optimal DNN model can be used to evaluate the heating energy consumption of low-income houses. In addition, the DNN model developed in this study should be applicable to energy prediction for residential buildings in other countries with similar climates and building types. This result will be useful for setting energy consumption standards and reduction targets in large regional units such as old housing communities and cities.
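For reference, the Cv(RMSE) quoted above is the root mean square error between the reference series (here, the EnergyPlus result) and the predicted series (the DNN result), divided by the mean of the reference and expressed as a percentage. The sketch below is a generic implementation; the array names are illustrative, and the monthly resolution implied by the ASHRAE criterion is an assumption.

```python
import numpy as np

def cv_rmse(reference, prediction) -> float:
    """Coefficient of variation of the RMSE, as a percentage of the reference mean."""
    reference = np.asarray(reference, dtype=float)
    prediction = np.asarray(prediction, dtype=float)
    rmse = np.sqrt(np.mean((reference - prediction) ** 2))
    return 100.0 * rmse / np.mean(reference)

# e.g., cv_rmse(monthly_energyplus_kwh, monthly_dnn_kwh) for a monthly comparison (names illustrative)
```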

5. Conclusions

This study developed an optimized DNN model for predicting heating energy consumption by selecting key input variables for 16,158 old houses and determining the structural variables (number of hidden layers and neurons per layer) with the highest prediction accuracy. As a result of the correlation and prediction-accuracy analyses, 11 key input variables (roof U-value, roof area, wall U-value, floor area, floor U-value, year of completion, wall area, heating space area, boiler efficiency, window area, and window U-value) were selected. The optimal DNN model for predicting the heating energy consumption of old houses showed the highest accuracy (R2 = 0.961) with five hidden layers and 22 neurons per layer. In the applicability evaluation for the heating energy consumption of the standard house, the optimal DNN model showed high accuracy (Cv(RMSE) = 8.74%) and comfortably satisfied the ASHRAE criterion. Moreover, compared with the physics-based model (EnergyPlus), the optimal DNN model can achieve high prediction accuracy with fewer input variables and can reduce modeling and simulation time. These advantages will make the model easier to use for field engineers with less expertise and experience in building energy modeling. We believe that our study makes a significant contribution to energy welfare by improving energy efficiency, diagnosis, and prediction for low-income houses.

Author Contributions

Design, modeling, simulation and writing, S.L.; conceptualization and methodology, S.C. (Soo Cho); data collection and analysis, S.-H.K.; project administration, J.K., S.C. (Suyong Chae) and H.J.; supervision, T.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Acknowledgments

This work was conducted under the framework of Research and Development Program of the Korea Institute of Energy Research (C0-2411), and was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2019M3E7A111308912).

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

The EnergyPlus program has its roots in the BLAST and DOE-2 programs. BLAST (Building Loads Analysis and System Thermodynamics) and DOE-2 were developed and released in the late 1970s and early 1980s as energy and load simulation tools. EnergyPlus is an energy analysis and thermal load simulation program. It can model internal conditions in detail, consider the interactions between factors in each hourly load calculation, and calculate energy consumption through integrated simulation of buildings, systems, and plants. In addition, its modular structure provides flexibility, and the calculation results can accurately reproduce the behavior of the actual building over time. See the User's Manual of the Department of Energy [36] for details.

References

  1. Korean Statistical Information Service (KOSIS). Population and Housing Census. 2018. Available online: http://kosis.kr/index/index.do (accessed on 4 September 2020).
  2. Bourdeau, M.; Zhai, X.Q.; Nefzaoui, E.; Guo, X.; Chatellier, P. Modeling and forecasting building energy consumption: A review of data-driven techniques. Sustain. Cities Soc. 2019, 48, 101533. [Google Scholar] [CrossRef]
  3. O’Neill, Z.; Narayanan, S.; Brahme, R. Model-based thermal load estimation in buildings. Proc. Simbuild 2010, 4, 474–481. [Google Scholar]
  4. Naganathan, H.; Chong, W.O.; Chen, X. Building energy modeling (BEM) using clustering algorithms and semi-supervised machine learning approaches. Autom. Constr. 2019, 72, 187–194. [Google Scholar] [CrossRef]
  5. Ryan, E.M.; Sanquist, T.F. Validation of building energy modeling tools under idealized and realistic conditions. Energy Build. 2012, 47, 375–382. [Google Scholar] [CrossRef]
  6. Esen, H.; Escen, M.; Ozsolak, O. Modelling and experimental performance analysis of solar-assisted ground source heat pump system. J. Exp. Theor. Artif. Intell. 2017, 29, 1–17. [Google Scholar] [CrossRef]
  7. Kang, I. Development of ANN (Artificial Neural Network) Based Predictive Model for Energy Consumption of HVAC System in Office Building. Master’s Thesis, Chung-Ang University, Seoul, Korea, 2017. [Google Scholar]
  8. Robinson, C.; Dilkina, B.; Hubbs, J.; Zhang, W.; Guhathakurta, S.; Brown, M.A.; Pendyala, R.M. Machine learning approaches for estimating commercial building energy consumption. Appl. Energy 2017, 208, 889–904. [Google Scholar] [CrossRef]
  9. Ahmad, T.; Chen, H.; Huang, R.; Yabin, G.; Wang, J.; Shair, J.; Akram, H.M.A.; Mohsan, S.A.H.; Kazime, M. Supervised based machine learning models for short, medium and long-term energy prediction in distinct building environment. Energy 2018, 158, 17–32. [Google Scholar] [CrossRef]
  10. Chou, J.-S.; Tran, D.-S. Forecasting energy consumption time series using machine learning techniques based on usage patterns of residential householders. Energy 2018, 165 Pt B, 709–726. [Google Scholar] [CrossRef]
  11. D’Amico, A.; Ciulla, G.; Traverso, M.; Brano, V.L.; Palumbo, E. Artificial Neural Networks to assess energy and environmental performance of buildings: An Italian case study. J. Clean. Prod. 2019, 239, 117993. [Google Scholar] [CrossRef]
  12. Ciulla, G.; D’Amico, A.; Brano, V.L.; Traverso, M. Application of optimized artificial intelligence algorithm to evaluate the heating energy demand of non-residential buildings at European level. Energy 2019, 176, 380–391. [Google Scholar] [CrossRef]
  13. Katsatos, A.L.; Moustris, K.P. Application of Artificial Neuron Networks as energy consumption forecasting tool in the building of Regulatory Authority of Energy, Athens, Greece. Energy Procedia 2019, 157, 851–861. [Google Scholar] [CrossRef]
  14. Satrio, P.; Mahlia, T.M.I.; Giannetti, N.; Saito, K. Optimization of HVAC system energy consumption in a building using artificial neural network and multi-objective genetic algorithm. Sustain. Energy Technol. Assess. 2019, 35, 48–57. [Google Scholar]
  15. Deb, C.; Lee, S.E.; Santamouris, M. Using artificial neural networks to assess HVAC related energy saving in retrofitted office buildings. Sol. Energy 2018, 163, 32–44. [Google Scholar] [CrossRef]
  16. González, P.A.; Zamarreño, J.M. Prediction of hourly energy consumption in buildings based on a feedback artificial neural network. Energy Build. 2005, 37, 595–601. [Google Scholar] [CrossRef]
  17. Huang, Y.; Yuan, Y.; Chen, H.; Wang, J.; Guo, Y.; Ahmad, T. A novel energy demand prediction strategy for residential buildings based on ensemble learning. Energy Procedia 2019, 158, 3411–3416. [Google Scholar] [CrossRef]
  18. Biswas, M.A.R.; Robinson, M.D.; Fumo, N. Prediction of residential building energy consumption: A neural network approach. Energy 2016, 117 Pt 1, 84–92. [Google Scholar] [CrossRef]
  19. Tardioli, G.; Kerrigan, R.; Oates, M.; O’Donnell, J.; Finn, D. Data Driven Approaches for Prediction of Building Energy Consumption at Urban Level. Energy Procedia 2015, 78, 3378–3383. [Google Scholar] [CrossRef] [Green Version]
  20. Mohandes, S.R.; Zhang, X.; Mahdiyar, A. A comprehensive review on the application of artificial neural networks in building energy analysis. Neurocomputing 2019, 340, 55–75. [Google Scholar] [CrossRef]
  21. Luo, X.J.; Oyedele, L.O.; Ajayi, A.O.; Akinade, O.O.; Owolabi, H.A.; Ahmed, A. Feature extraction and genetic algorithm enhanced adaptive deep neural network for energy consumption prediction in buildings. Renew. Sustain. Energy Rev. 2020, 131, 109980. [Google Scholar] [CrossRef]
  22. Korea Institute of Energy Research (KIER). Low-Income Household Energy Efficiency Improvement Project Report; Korea Institute of Energy Research (KIER): Daejeon, Korea, 2018. [Google Scholar]
  23. Kim, J. Heating Energy Baseline and Saving Model Development of Detached Houses for Low-Income Households. Master’s Thesis, University of Science and Technology, Daejeon, Korea, 2015. [Google Scholar]
  24. International Organization for Standardization (ISO). Energy Performance of Buildings—Energy Needs for Heating and Cooling, Internal Temperatures and Sensible and Latent Heat Loads—Part 1: Calculation Procedures; Standard No. 52016-1; ISO: Geneva, Switzerland, 2017. [Google Scholar]
  25. Lee, S.-J.; Kim, J.; Jeong, H.; Yoo, S.; Lee, S. Heating Energy Efficiency Improvement Analysis of Low-income Houses. J. KIAEBS 2017, 11, 212–218. [Google Scholar]
  26. Hazewinkel, M. Encyclopedia of Mathematics; Springer: Berlin, Germany, 2001. [Google Scholar]
  27. Aggarwal, C.C. Neural Networks and Deep Learning: A Textbook, 1st ed.; Springer: Berlin, Germany, 2018. [Google Scholar]
  28. Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning; The MIT Press: Cambridge, MA, USA, 2016. [Google Scholar]
  29. Ye, Z.; Kim, M.K. Predicting electricity consumption in a building using an optimized back-propagation and Levenberg-Marquardt back-propagation neural network: Case study of a shopping mall in China. Sustain. Cities Soc. 2018, 42, 176–183. [Google Scholar] [CrossRef]
  30. Hinton, P.R.; McMurray, I.; Brownlow, C. SPSS Explained, 2nd ed.; Routledge: London, UK, 2014. [Google Scholar]
  31. Ahmad, M.W.; Mourshed, M.; Rezgui, Y. Trees vs Neurons: Comparison between random forest and ANN for high-resolution prediction of building energy consumption. Energy Build. 2017, 147, 77–89. [Google Scholar] [CrossRef]
  32. IBM. IBM SPSS Software. Available online: https://www.ibm.com/kr-ko/analytics/spss-statistics-software (accessed on 4 September 2020).
  33. Mathworks. Matlab Software. Available online: https://www.mathworks.com/products/matlab.html (accessed on 4 September 2020).
  34. Wang, J.C. A study on the energy performance of hotel buildings in Taiwan. Energy Build. 2012, 49, 268–275. [Google Scholar] [CrossRef]
  35. Reed, R.; MarksII, R.J. Neural Smithing: Supervised Learning in Feedforward Artificial Neural Networks; A Bradford Book: Cambridge, MA, USA, 1999. [Google Scholar]
  36. Department of Energy (DOE). EnergyPlus Software. Available online: https://energyplus.net/ (accessed on 4 September 2020).
  37. Department of Energy (DOE). M&V Guidelines: Measurement and Verification for Federal Energy Projects, Version 3.0; Department of Energy (DOE): Washington, DC, USA, 2008; pp. 21–22. [Google Scholar]
Figure 1. Conceptual structure of the deep neural network (DNN) model [22].
Figure 2. Regression results of an optimized DNN model for: (a) training; (b) validation; (c) test; and (d) all datasets.
Figure 3. Floor plans of Korean standard house.
Table 1. An example of datasets that have collected building information and heating energy consumption of old houses.
Columns: Number | General (City, Building Orientation, Structure, Region, Year of Completion) | Area [m2] (Heating Space, Wall, Window, Door, Roof, Floor) | U-Value [W/(m2·K)] (Wall, Window, Door, Roof, Floor) | HVAC (ACH, Boiler, Efficiency [%]) | Heating Energy Consumption [kWh/a]
1DaeguEastHeavyGyeongsang25-Jan-9016.3034.220.861.4416.3216.320.763.902.700.520.761.00Oil90.002985
2BusanSouthHeavyBusan01-Jan-7040.0056.884.351.890.0040.081.543.902.700.001.541.00Gas83.506823
3WandoSouthHeavyJeollanam-do07-May-6460.0053.880.242.8860.0060.001.545.292.401.541.541.00Oil80.0014,975
4WandoSouthHeavyJeollanam-do03-Jul-8060.0076.813.087.2860.0060.001.055.302.401.051.051.00Oil80.0012,701
5WandoSouthHeavyJeollanam-do08-Aug-7053.0054.985.244.0053.0053.001.545.302.501.541.541.00Oil80.0014,486
6WandoSouthHeavyJeollanam-do11-Sep-6551.0057.543.085.8251.0051.001.545.302.401.541.541.00Oil80.0014,243
7WandoSouthHeavyJeollanam-do29-May-8516.0033.371.541.8916.0016.000.582.802.700.580.581.00Oil80.002524
8WandoEastHeavyJeollanam-do07-Jul-8750.0074.383.793.4650.0050.000.584.472.580.580.581.00Oil80.006172
9WandoNorthHeavyJeollanam-do07-Jul-8325.0030.405.441.3625.0025.000.586.602.400.581.161.00Oil80.003553
10WandoSouthHeavyJeollanam-do08-Jun-6837.0049.988.501.3637.0037.001.545.302.401.541.541.00Oil80.0010,489
11WandoNorthHeavyJeollanam-do06-Apr-8550.0069.735.881.8950.0050.000.582.802.700.580.581.00Oil80.005502
12WandoNorth-EastHeavyJeollanam-do02-Apr-9521.0041.282.420.0021.0021.000.766.600.000.520.761.00Oil80.003148
13WandoSouthHeavyJeollanam-do07-May-0353.0067.668.003.7853.0053.000.586.602.550.350.411.00Briquette70.004213
14WandoSouthHeavyJeollanam-do18-Apr-64100.0096.768.400.00100.00100.001.543.870.001.541.541.00Oil80.0024,453
15WandoSouthHeavyJeollanam-do03-Jul-9030.0062.740.001.8930.0030.000.760.002.700.520.761.00Oil80.005495
16WandoSouthHeavyJeollanam-do06-Jul-8945.0057.525.043.6945.0045.000.763.902.550.520.761.00Oil80.006165
17WandoSouthHeavyJeollanam-do07-Jun-8650.0065.586.451.8950.0050.000.585.302.700.580.581.00Oil80.006533
18WandoSouthHeavyJeollanam-do20-Aug-8850.0065.473.411.8950.0050.000.763.902.700.520.761.00Oil85.005382
19WandoNorthHeavyJeollanam-do04-Aug-8886.0081.908.961.89172.000.000.763.902.700.520.001.00Oil80.0012,023
20WandoSouthHeavyJeollanam-do16-Sep-0060.0068.086.161.8960.0060.000.765.302.400.520.761.00Oil80.008290
21WandoSouthHeavyJeollanam-do22-May-7038.0055.633.491.3638.0038.001.545.302.701.541.541.00Oil80.0010,057
22WandoNorth-EastHeavyJeollanam-do23-Sep-7728.0044.411.761.4428.0028.001.546.602.701.541.541.00Oil80.009014
23BusanSouthHeavyGyeongsang08-Apr-7116.0016.601.080.0017.0017.001.543.900.001.541.541.00-50.004232
24WandoSouthHeavyJeollanam-do13-Jul-7850.0060.065.940.0050.0050.001.546.600.001.541.541.00Oil80.0015076
25WandoSouthHeavyJeollanam-do07-Jul-8545.0049.5813.340.0045.0045.000.585.110.000.580.581.00Oil85.006360
26WandoSouthHeavyJeollanam-do07-Jun-6545.0075.769.885.0045.0045.001.545.302.401.541.541.00Oil80.0014,025
27WandoSouthHeavyJeollanam-do20-Jun-5925.0043.1810.261.1225.0025.001.545.302.701.541.541.00Oil80.008623
28WandoSouthHeavyJeollanam-do08-Jun-8340.0064.735.072.8840.0040.000.585.612.400.581.161.00Oil80.006932
29WandoSouthHeavyJeollanam-do08-Mar-8665.0068.713.921.8965.0065.000.583.902.700.580.581.00Oil90.006329
30WandoSouthHeavyJeollanam-do10-Aug-6340.0068.642.643.5240.0040.001.545.302.401.541.541.00Oil80.0013,233
31WandoNorth-EastHeavyJeollanam-do05-Jun-8528.0053.553.521.8928.0028.000.586.602.700.580.581.00Oil80.004378
32WandoSouthHeavyJeollanam-do18-Jun-8345.0057.368.640.0045.0045.000.586.600.000.581.161.00Electricity100.005939
33WandoWestHeavyJeollanam-do08-May-6548.0059.170.009.9148.0048.001.540.002.441.541.541.00Oil80.0014,901
34WandoSouthHeavyJeollanam-do18-Jun-7745.0057.7013.831.9545.0045.001.545.302.401.541.541.00Oil80.0014,866
35WandoSouth-EastHeavyJeollanam-do14-Sep-0240.0060.747.425.7640.0040.000.586.182.400.350.411.00Oil80.005843
Table 2. Input variables and target variables considered for the DNN modelling of old houses.
Type | Input Variables
General details | Region (32 cities); Building orientation (E, W, S, N, NE, NW, SE, SW); Structure (heavy, light); Year of completion; ACH; Type of boiler; Boiler efficiency
Area [m2] | Heating space area; Wall; Roof; Floor; Window; Door
U-value [W/(m2·K)] | Wall; Roof; Floor; Window; Door
Target variable | Energy consumption [kWh/a]
Table 3. Feature and setting value of the structural and learning parameters of initial DNN model.
Parameters | Feature | Value
Model (learning algorithm) | Back propagation efficiently computes the gradient of the loss function with respect to the network weights for a single input-output example | LMA (Levenberg-Marquardt algorithm) [29]
Data division (%) | Training:Validation:Testing | 70:15:15
Structural parameters: hidden layers | Increasing the number of hidden layers/neurons improves prediction performance but lengthens computation | 1
Structural parameters: neurons per hidden layer | (as above) | 10
Learning parameters: learning rate | A smaller value improves prediction performance but lengthens learning | 0.2
Learning parameters: momentum | Typically starts at 0.1; a larger value speeds up learning | 0.6
Learning parameters: epochs | Maximum number of learning iterations | 1000
Learning parameters: goal | Target error between actual and predicted values | 0.01
Table 4. Results of Pearson correlation analysis using IBM SPSS Statistics Software.
Type | Input Variable | Pearson Correlation Coefficient | Rank
General details | Region | 0.162 | 13
General details | Building orientation | −0.012 | 17
General details | Structure | 0.159 | 14
General details | Year of completion | −0.539 | 6
General details | ACH | 0.012 | 16
General details | Type of boiler | N/A | N/A
General details | Boiler efficiency | −0.276 | 9
Area [m2] | Heating space area | 0.430 | 8
Area [m2] | Wall | 0.488 | 7
Area [m2] | Roof | 0.617 | 2
Area [m2] | Floor | 0.557 | 4
Area [m2] | Window | 0.252 | 12
Area [m2] | Door | 0.275 | 10
U-value (heat transmission coefficient) [W/(m2·K)] | Wall | 0.604 | 3
U-value (heat transmission coefficient) [W/(m2·K)] | Roof | 0.636 | 1
U-value (heat transmission coefficient) [W/(m2·K)] | Floor | 0.550 | 5
U-value (heat transmission coefficient) [W/(m2·K)] | Window | 0.269 | 11
U-value (heat transmission coefficient) [W/(m2·K)] | Door | −0.051 | 15
Table 5. Cases of available combinations of the input variables selected from the SPSS correlation analysis and performance results (R2-values) for each case predicted from the initial DNN model.
ParameterCase 1Case 2Case 3Case 4Case 5Case 6Case 7Case 8Case 9Case 10Case 11Case 12Case 13Case 14Case 15Case 16
Window U-value
[W/(m2·K)]
××××××××
Window area [m2]××××××××
Door area [m2]××××××××
Boiler efficiency [%]××××××××
Heating space area [m2]
Wall area [m2]
Year of completion
Floor U-value
[W/(m2·K)]
Floor area [m2]
Wall U-value
[W/(m2·K)]
Roof area [m2]
Roof U-value
[W/(m2·K)]
R2-value (Cases 1–16, in order): 0.893, 0.890, 0.896, 0.895, 0.931, 0.896, 0.896, 0.933, 0.896, 0.929, 0.933, 0.898, 0.936, 0.935, 0.935, 0.935
Note: O and X indicate input variables that were and were not considered in each case, respectively.
Table 6. Predict performance results (R2-values) for the change of hidden layer number when the number of neurons is fixed to 10.
Number of hidden layers | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10
R2-value | 0.936 | 0.945 | 0.949 | 0.951 | 0.954 | 0.953 | 0.952 | 0.953 | 0.952 | 0.950
Table 7. Predicted performance results (R2-values) for the change of neuron number when the number of hidden layers is fixed to five.
Number of neurons in hidden layer | 10 | 11 | 12 | 13 | 14 | 15 | 16
R2-value | 0.954 | 0.955 | 0.955 | 0.954 | 0.956 | 0.958 | 0.954
Number of neurons in hidden layer | 17 | 18 | 19 | 20 | 21 | 22 | 23
R2-value | 0.958 | 0.956 | 0.956 | 0.959 | 0.958 | 0.961 | 0.958
Number of neurons in hidden layer | 24 | 25 | 26 | 27 | 28 | 29 | 30
R2-value | 0.959 | 0.958 | 0.959 | 0.958 | 0.960 | 0.959 | 0.958
Table 8. Building Information and heating energy consumption (calculated by EnergyPlus) of standard house.
Division | Value
Window U-value [W/(m2·K)] | 5.84
Window area [m2] | 8.05
Heating space area [m2] | 44.52
Wall area [m2] | 56.09
Year of completion | 1980
Floor U-value [W/(m2·K)] | 1.05
Floor area [m2] | 44.52
Wall U-value [W/(m2·K)] | 1.05
Roof area [m2] | 44.52
Roof U-value [W/(m2·K)] | 1.05
Boiler efficiency [%] | 80
Energy consumption (calculated) [kWh/yr] | 12,143
Energy consumption (calculated, per unit area) [kWh/(m2·yr)] | 272.75
Table 9. Cv(RMSE) of heating energy consumptions predicted by the optimal DNN model to those calculated by EnergyPlus for the standard house.
Division | Value
Optimal DNN model-predicted annual heating consumption [kWh/yr] | 13,307
Optimal DNN model-predicted annual heating consumption per unit area [kWh/(m2·yr)] | 298.90
EnergyPlus-calculated annual heating consumption [kWh/yr] | 12,143
EnergyPlus-calculated annual heating consumption per unit area [kWh/(m2·yr)] | 272.75
Cv(RMSE) [%] | 8.74
Table 10. ASHRAE guideline of tolerance limits for the Cv(RMSE) of building energy modelling.
Division | Monthly | Hourly
Tolerance limit (Cv(RMSE)) | 15% | 30%
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
