Article

Comparative Evaluation of Predicting Energy Consumption of Absorption Heat Pump with Multilayer Shallow Neural Network Training Algorithms

1 Eco-System Research Center, Gachon University, Seongnam 13120, Korea
2 Department of Architectural Engineering, Kangwon National University, Samcheok-si 25913, Korea
3 Department of Architectural Engineering, Gachon University, Seongnam 13120, Korea
* Author to whom correspondence should be addressed.
Buildings 2022, 12(1), 13; https://doi.org/10.3390/buildings12010013
Submission received: 5 November 2021 / Revised: 14 December 2021 / Accepted: 24 December 2021 / Published: 26 December 2021

Abstract

The performance of various multilayer neural network training algorithms in predicting the energy consumption of an absorption chiller in an air conditioning system was compared and evaluated under the same conditions in this study. Prediction models were created using 12 representative multilayer shallow neural network algorithms. About one month of actual operation data from the heating period was used for training, and the predictive performance of the 12 algorithms was evaluated according to the training data size. The prediction results indicate error rates relative to the measured values of 0.09% minimum, 5.76% maximum, and 1.94 standard deviation (SD) for the Levenberg–Marquardt backpropagation model and 0.41% minimum, 5.05% maximum, and 1.68 SD for the Bayesian regularization backpropagation model. The conjugate gradient with Polak–Ribière updates backpropagation model yielded similarly low values, with 0.31% minimum, 5.73% maximum, and 1.76 SD. Based on the predictive performance evaluation index CvRMSE, all other models (conjugate gradient with Fletcher–Reeves updates backpropagation, one-step secant backpropagation, gradient descent with momentum and adaptive learning rate backpropagation, gradient descent with momentum backpropagation) except the gradient descent backpropagation model yielded results that satisfy ASHRAE (American Society of Heating, Refrigerating and Air-Conditioning Engineers) Guideline 14. The results of this study confirm that the prediction performance may differ for each multilayer neural network training algorithm. Therefore, selecting an appropriate model that fits the characteristics of a specific project is essential.

1. Introduction

Buildings consume most of their energy during the operating phase of their service life. Specifically, about 60% of building energy is used for air conditioning to provide a comfortable living and working environment [1,2,3]. To reduce energy consumption in buildings, efficient energy use and management should be implemented during the operational phase. To meet this need, numerous studies have focused on the development of high-efficiency facilities, control solutions, and energy management systems. However, to optimize energy performance from the design stage to the operation stage, the accurate prediction of the energy consumption and demand of the building must be addressed first.
Various studies on accurate energy consumption and demand prediction have employed neural network techniques based on machine learning. For example, Cheng-wen and Jian used an artificial neural network (ANN) model with building envelope performance parameters, heating degree days, and cooling degree days as input variables and were able to increase the prediction accuracy by more than 96% compared with the existing method [4].
Peng et al. proposed an ANN model that predicts the refrigeration load by combining the ANN with the Box–Jenkins model and showed a mean absolute percentage error of less than 2.1% [5]. Roldán-Blay et al. proposed an ANN model for prediction based on an hourly temperature curve, using short-term building energy consumption, usage temperature, and building type as input variables. They obtained high prediction accuracy after testing in real buildings for one year [6].
Turhan et al. compared the results of predicting the thermal load according to the building envelope conditions using an ANN with those obtained from a building energy simulation tool. They observed a high similarity between the ANN predictions and the simulation results and reported a mean absolute percentage error of 5.06% and a prediction success rate of 0.977 [7]. Ferlito et al. developed an ANN model that uses monthly building electrical energy consumption data and reported a prediction accuracy of 15.7% to 17.97% root mean square error (RMSE) [8]. Li et al. proposed an energy consumption prediction technique based on an ANN algorithm that simplifies a complex building into several blocks at the initial design stage. The predicted heating and cooling energy consumption showed relative deviations within ±10%, as did the total energy consumption [9].
Le Cam et al. used a closed-loop nonlinear autoregressive neural network training algorithm to predict the energy consumption of the air supply fan in an air-handling unit (AHU) and reported a predictive performance of 5.5% RMSE and 17.6% coefficient of variation of the RMSE (CvRMSE) [10]. Ahmad et al. predicted the power load of a single building using ANN and random forest (RF) models and compared their predictive performance. Their proposed ANN model yielded an average CvRMSE of 4.91%, and the RF model showed an average CvRMSE of 6.10% when the depth of the trees was adjusted [11].
Ding et al. investigated prediction accuracy by combining eight input variables using an ANN model and a support vector machine (SVM) [12]. They improved the prediction accuracy by optimizing the combination of variables using K-means, and among the variables, historical cooling capacity data showed the highest correlation with prediction accuracy [12].
Koschwitz et al. predicted data-driven thermal loads using NARX RNNs (nonlinear autoregressive exogenous recurrent neural networks) of different depths and an ε-SVM regression model. When predicting monthly loads of non-residential buildings in Germany at district scale, the NARX RNNs showed higher accuracy than the ε-SVM regression model [13]. Niu et al. evaluated the energy consumption prediction performance of an AHU using a Bayesian network training model and an ARX (autoregressive with exogenous inputs) model. All the models used in their study satisfied ASHRAE Guideline 14, and among them, the Bayesian network training algorithm exhibited the best prediction performance [14]. Chen et al. improved the accuracy of the prediction model by adopting the concept of clustering to preprocess data when predicting the energy consumption of a chiller system. Important variables for each chiller cluster mode were successfully identified using data mining, K-means clustering, and gap statistics, and the predictive accuracy and reliability of the energy baseline model were effectively improved when the key variables were applied [15]. Panahizadeh et al. predicted the coefficient of performance and thermal energy consumption of absorption chillers using three widely used machine learning methods: artificial neural networks, support vector machines, and genetic programming. When the formulas newly estimated by genetic programming were used for the coefficient of performance and thermal energy consumption of each chiller, the coefficients of determination were 0.97093 and 0.95768 [16]. Chaerun Nisa and Kuan proposed machine learning and deep learning models to predict the power consumption of a water-cooled chiller. The prediction model consisted of a thermodynamic model and a multilayer perceptron (MLP), and the time series prediction models adopted were an MLP, a one-dimensional convolutional neural network (1D-CNN), and long short-term memory (LSTM). The LSTM achieved the best time series prediction performance, with an R2 of 0.994, an MAE of 0.233, and an RMSE of 1.415. The models selected for both MLP and LSTM produced predictions close to the actual data [17].
When predicting building energy consumption and cooling loads using machine learning methods, including ANN models, the prediction accuracy must be above a certain level. In order to derive better prediction results, researchers also evaluate the performance of various prediction models under the same conditions.
This research team has been continuously conducting research on various prediction methods related to the operation of air conditioning equipment through machine learning. For example, ref. [18] investigated heat pump energy consumption predictions using an ANN model and found that CvRMSE values of 19.49% in the training period and 22.83% in the testing period satisfied the ASHRAE criteria. In another study on cooling load prediction using MATLAB's NARX (nonlinear autoregressive with exogenous inputs) feedforward neural network model, ref. [19] confirmed a prediction performance with a CvRMSE of 7% or less. In yet another study, the energy consumption of an air handling unit and an absorption heat pump during the cooling period was predicted using an ANN model, and both prediction models obtained results satisfying the ASHRAE guidelines. These results reaffirmed that artificial neural network-based prediction models can achieve relatively high prediction accuracy provided that a sufficient amount of data is available [20].
Based on this earlier work, ANN-based energy consumption and load predictions were conducted to develop an energy management technique for centralized air conditioning systems. The ANN model, which is used in various ways in the field of prediction, encompasses numerous detailed training algorithms.
Previous studies used a single machine learning algorithm, whereas this study evaluated the predictive performance of various algorithms classified as multilayer shallow neural networks to predict the energy consumption of absorption heat pumps. The predictive performance of 12 multilayer shallow neural network training algorithms was evaluated using energy consumption data of the absorption heat pump during the heating period in an actual building. Previous studies have shown that machine learning techniques generally achieve higher predictive performance as larger amounts of data are used for training. Shallow neural network models have a simple structure, which reduces the likelihood of overfitting, but they, too, commonly give good results only when a sufficient amount of data is used [21]. In this study, we examined whether prediction results that meet the criteria of ASHRAE Guideline 14 can be obtained when training shallow neural network models with a small amount of data (251 data points).
Section 2 describes the construction of the multilayer shallow neural network-based energy consumption prediction models, the data collected for the study, and the evaluation criteria for the prediction results. Section 3 presents the prediction results of the 12 neural network training algorithms and evaluates their prediction performance, Section 4 discusses the results, and Section 5 presents the conclusions.

2. Methodology

Figure 1 shows the process of predicting the energy consumption of the absorption heat pump. The data necessary for prediction are collected from the target building and converted into an appropriate form for use as input data; in this process, all data are preprocessed. The input data are then used to predict energy consumption with the neural network algorithms. Finally, the predictive performance of each algorithm is evaluated based on the prediction results.

2.1. Collection of Absorption Heat Pump Operational Data

The dataset used for training the neural network algorithms is composed of absorption heat pump data measured during the heating period in an actual office building located in Seoul, Korea. The building is an office facility with a total floor area of 41,005.32 m2 and 18 floors. An absorption heat pump with a capacity of 600 USRT is the heat source facility. From 10 December 2020 to 12 January 2021, weather data and absorption heat pump operation data were collected to predict the energy consumption of one absorption heat pump operated during the heating period. All data were collected on an hourly basis; for energy consumption, the hourly cumulative usage was used. The entire dataset was preprocessed before the prediction was performed: it was normalized to values in the range of 0 to 1, and missing values corresponding to hours in which the air conditioning facility was not operated were removed. Energy consumption prediction was performed using the 251 data points obtained in this way.
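As an illustration of this preprocessing step, the MATLAB sketch below removes non-operation hours and rescales each variable to the 0–1 range. The file name, variable names, and column layout are illustrative assumptions, not the authors' actual data format or script.

% Minimal preprocessing sketch (assumed file name, variable names, and column layout).
% rawData: hourly records [outdoor temp, humidity, supply temp, flow rate, energy kWh]
% opFlag : logical vector, true when the absorption heat pump was operating
load('heatpump_heating_2020.mat', 'rawData', 'opFlag');   % hypothetical data file

data = rawData(opFlag, :);              % drop non-operation (missing) hours
[dataN, ps] = mapminmax(data', 0, 1);   % normalize each variable to the range 0-1
dataN = dataN';                         % back to rows = hourly samples

X = dataN(:, 1:end-1);                  % input variables
T = dataN(:, end);                      % target: hourly energy consumption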

2.2. Neural Network Algorithms

The deep learning neural network training algorithms included in the Neural Networks Toolbox of MATLAB (R2021a) were adopted to predict energy consumption, and multilayer shallow neural network training algorithms were used for training. Multilayer neural networks exhibit excellent performance when trained using the gradient of the network performance with respect to the network weights and the Jacobian matrix of the network errors. The gradient and Jacobian matrix are calculated using a backpropagation algorithm, which performs the calculations by going backward through the network. The following twelve neural network training algorithms were compared for predicting energy consumption: Levenberg–Marquardt (LM), Bayesian regularization (BR), Broyden–Fletcher–Goldfarb–Shanno (BFGS) quasi-Newton (BFG), resilient backpropagation (RP), scaled conjugate gradient (SCG), conjugate gradient backpropagation with Powell–Beale restarts (CGB), conjugate gradient backpropagation with Fletcher–Reeves updates (CGF), conjugate gradient backpropagation with Polak–Ribière updates (CGP), one-step secant (OSS), gradient descent with momentum and adaptive learning rate (GDX), gradient descent with momentum (GDM), and gradient descent (GD).
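For reference, these twelve algorithms correspond to the following training-function names in MATLAB's toolbox. Collecting them in a cell array, as in the sketch below, makes it convenient to iterate over all of them later (this is an illustration, not the authors' script).

% Training functions of the twelve multilayer shallow neural network algorithms.
trainFcns = { ...
    'trainlm',  ...  % LM  : Levenberg-Marquardt
    'trainbr',  ...  % BR  : Bayesian regularization
    'trainbfg', ...  % BFG : BFGS quasi-Newton
    'trainrp',  ...  % RP  : resilient backpropagation
    'trainscg', ...  % SCG : scaled conjugate gradient
    'traincgb', ...  % CGB : conjugate gradient with Powell-Beale restarts
    'traincgf', ...  % CGF : conjugate gradient with Fletcher-Reeves updates
    'traincgp', ...  % CGP : conjugate gradient with Polak-Ribiere updates
    'trainoss', ...  % OSS : one-step secant
    'traingdx', ...  % GDX : gradient descent with momentum and adaptive learning rate
    'traingdm', ...  % GDM : gradient descent with momentum
    'traingd'};      % GD  : gradient descent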
Figure 2 is a schematic diagram of the neural network used in this study, which is a two-layer feedforward neural network with a sigmoid transfer function in the hidden layer and a linear transfer function in the output layer. Neural network training algorithms are typically composed of an input layer, hidden layer, and output layer. This neural network also stores previous values of x(t) and y(t) sequences using tapped delay lines. Since y(t) is a function of y(t−1), y(t−2), …, y(t−d), the output y(t) of the neural network is fed back to the neural network input through delay.
For neural network learning in the input layer, outside conditions, seasonality data, historical energy consumption data, and energy consumption prediction results fed back from output layers are used as the input values. The hidden layer receives input signals every hour from the input layer and performs neural network calculations through internal neurons. The hidden layers were set to 3 and the number of neurons to 20. The output layer outputs the energy consumption (kWh) prediction result for an hour after the input signal point based on the hidden layer calculation result.
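A minimal sketch of how such a network can be assembled and trained with the MATLAB toolbox functions is shown below. It assumes the "three hidden layers of 20 neurons each" reading of the settings above, an illustrative delay order d = 2, and the preprocessed matrices X and T from the sketch in Section 2.1; none of these specifics are taken from the authors' code.

% Sketch: NARX-type two-layer (hidden + output) feedforward network with tapped delays.
d = 2;                                   % assumed number of tapped delays
hiddenSizes = [20 20 20];                % 3 hidden layers, 20 neurons each (one reading of Table 1)

net = narxnet(1:d, 1:d, hiddenSizes);    % open-loop NARX network
net.trainFcn = 'trainlm';                % e.g., LM; any entry of trainFcns could be used
net.trainParam.epochs = 100;             % epochs per Table 1

Xc = tonndata(X, false, false);          % inputs as a time sequence (rows = time steps)
Tc = tonndata(T, false, false);          % target energy consumption sequence
[Xs, Xi, Ai, Ts] = preparets(net, Xc, {}, Tc);  % shift sequences for the tapped delays

net = train(net, Xs, Ts, Xi, Ai);        % train the network
Y   = net(Xs, Xi, Ai);                   % one-hour-ahead predictions (normalized scale)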

2.3. Prediction Criteria for Neural Network Training Algorithms

The input values are the dry bulb temperature of the outside air, relative humidity, cold water supply temperature, and water supply flow rate. The year and date were used as seasonality data.
The structural parameters are the numbers of hidden layers and neurons, in which the actual learning takes place; the learning capacity is determined by these numbers. The epoch, a learning parameter used as the unit of learning, is defined as one complete pass through the entire dataset. Table 1 lists the conditions.
In order to obtain more accurate prediction results, values recorded during non-operation hours were removed as missing values, and the dataset was normalized. A total of 251 data points from 10 December 2020 to 12 January 2021 were used for the analysis. All training data were normalized to values between 0 and 1.
The training data size was varied from 50% to 90% of the dataset, and the energy consumption was predicted for each split.
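Continuing the earlier sketches, the training-size sweep could be scripted with the toolbox's data-division settings as below. The contiguous (block-wise) split and the loop structure are assumptions, since the paper does not state how the split was performed.

% Sketch: vary the training portion from 50% to 90% of the samples for each algorithm.
for k = 1:numel(trainFcns)                        % twelve training algorithms
    for trainRatio = 0.5:0.1:0.9                  % training size 50% ... 90%
        net = narxnet(1:d, 1:d, hiddenSizes);
        net.trainFcn = trainFcns{k};
        net.trainParam.epochs = 100;
        net.divideFcn = 'divideblock';            % contiguous train/test blocks (assumed)
        net.divideParam.trainRatio = trainRatio;
        net.divideParam.valRatio   = 0;
        net.divideParam.testRatio  = 1 - trainRatio;

        [Xs, Xi, Ai, Ts] = preparets(net, Xc, {}, Tc);
        net = train(net, Xs, Ts, Xi, Ai);
        Y   = net(Xs, Xi, Ai);                    % predictions for the evaluation below
    end
end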

2.4. Performance Evaluation Indicators

The predictive performance of each model was evaluated according to the ASHRAE Measurement and Verification (M&V) guidelines, the U.S. Department of Energy Federal Energy Management Program (FEMP) guidelines, and the International Performance Measurement and Verification Protocol (IPMVP). As Table 2 shows, ASHRAE, FEMP, and IPMVP present M&V protocols for building energy management and establish the predictive accuracy criteria that building energy models must meet. In this study, the CvRMSE and MBE were employed as performance indicators. The CvRMSE expresses the degree of variance of the estimates, and the MBE is an error analysis indicator of how closely the estimates cluster around the measured values. Equations (1) and (2) give the formulas for the CvRMSE and MBE.
\mathrm{CvRMSE} = 100 \times \left[ \sum_{i=1}^{n} (y_i - \hat{y}_i)^2 / (n - p) \right]^{1/2} / \bar{y}, \quad (1)

\mathrm{MBE} = \sum_{i=1}^{n} (y_i - \hat{y}_i) / \left[ (n - p) \times \bar{y} \right] \times 100, \quad (2)

where n is the number of data points, p is the number of parameters, y_i is the measured (utility) data used for calibration, ŷ_i is the simulation-predicted data, and ȳ is the arithmetic mean of the sample of n observations.
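Under the same notation, both indicators can be computed directly from the measured series y, the predicted series yhat, and the parameter count p (the value of p is not specified in the paper). The anonymous functions below are a small sketch of Equations (1) and (2), not the authors' implementation.

% Sketch: CvRMSE and MBE as defined in Equations (1) and (2), in percent.
cvrmse = @(y, yhat, p) 100 * sqrt(sum((y - yhat).^2) / (numel(y) - p)) / mean(y);
mbe    = @(y, yhat, p) 100 * sum(y - yhat) / ((numel(y) - p) * mean(y));

% Example use: compare against the hourly tolerances in Table 2 (30% CvRMSE, +/-10% MBE).
% cvrmse(Tmeas, Ypred, p), mbe(Tmeas, Ypred, p)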

3. Results

3.1. Energy Consumption Prediction Results

Figure 3 summarizes the prediction results according to the sizes of the training and test data. In the training period, most models showed satisfactory error rates under all conditions. However, the GDM and GD models showed higher error rates: 14.75% (GDM) and 11.01% (GD) at a training size of 50%, 17.33% (GDM) at 70%, and 12.30% (GD) at 80%.
In the testing period, the LM and BR models showed an error rate of less than 5% under all conditions, while other algorithms showed an increase in error rate compared to the training period depending on the conditions. In particular, RP, SCG, CGF, OSS, GDX, GDM, and GD models showed error rates of 10% or more.

3.2. Predictive Accuracy

Figure 4 summarizes the CvRMSE and MBE prediction results according to the size of the training and test data. The CvRMSE of most models satisfies ASHRAE Guideline 14, remaining below 30% in both the training and testing periods. However, GDM and GD exceeded 30% under all conditions and did not meet the criterion. In addition, CGF (testing period, 50%), OSS (training period, 70%; testing period, 50%), and GDX (training period, 50%, 60%, 70%; testing period, 50%) did not meet the criterion under some conditions. The MBE of all models was 5% or less, satisfying ASHRAE Guideline 14.

4. Discussion

To determine how the prediction performance of each algorithm changes with the training and test data size, Figure 5a,b show the distributions of the error rate and CvRMSE, respectively, for the energy consumption prediction results under all conditions. Table 3 summarizes the minimum, maximum, and standard deviation (SD) of the error rate and CvRMSE for each algorithm. Based on the overall energy consumption predictions, LM, BR, and CGP showed the best results in terms of the distribution of the error rates. LM showed an SD of 1.94 with a minimum error rate of 0.09% and a maximum of 5.76%. BR showed a minimum error rate of 0.41% and a maximum of 5.05%, with an SD of 1.68, and CGP showed a minimum error rate of 0.13% and a maximum of 5.73%, with an SD of 1.76. The other algorithms showed high maximum error rates of 9.78–41.77%; among them, GDM had the worst results with an SD of 14.82 (maximum 41.77%), followed by OSS with an SD of 8.16 (maximum 26.07%) and GD with an SD of 6.98 (maximum 27.52%).
For CGF, OSS, GDX, GDM, and GD, the CvRMSE SDs of the prediction results are 3.13–9.08, confirming that the prediction performance of these algorithms was poor. The other algorithms had SDs of 1.27–2.33 and exhibited predictive performances that satisfy ASHRAE Guideline 14. When the error rate and CvRMSE results are combined, LM and BR exhibit the best prediction performance. These two models are known to be suitable for nonlinear regression problems [25,26], which is confirmed in this study as well.
Models such as GDX, GDM, and GD are gradient-based methods, i.e., algorithms that find the minimum of a function by following the gradient of the loss function. One of the disadvantages of gradient descent is the local minima problem: when the loss surface becomes complex, it is difficult to find a unique (global) minimum, and this behavior is known to depend strongly on the learning rate. Among the gradient-based algorithms, the significantly poorer prediction performance of GDM and GD is considered to have been caused by this problem.
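For clarity, the generic textbook forms of these update rules are sketched below (this is the standard formulation, not necessarily the exact parameterization of MATLAB's training functions):

w_{k+1} = w_k - \alpha \, \nabla E(w_k) \quad \text{(gradient descent)},

\Delta w_{k+1} = \mu \, \Delta w_k - \alpha \, \nabla E(w_k), \qquad w_{k+1} = w_k + \Delta w_{k+1} \quad \text{(gradient descent with momentum)},

where E is the loss function, α is the learning rate, and μ is the momentum coefficient. With a fixed learning rate and a complex loss surface, such updates can stall in a local minimum, which is consistent with the poor GD and GDM results observed here.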
As such, in this study, the multilayer neural network algorithms of the same series show different prediction performance even under the same conditions. Therefore, when applying a multilayer neural network algorithm to a project, an appropriate algorithm that can obtain the best results must be selected by considering the type and amount of data to be predicted.

5. Conclusions

In this study, the ability of twelve multilayer neural network training algorithms to predict the energy consumption of an absorption heat pump in an air conditioning system was compared and evaluated under the same conditions. A predictive model was developed for each of the twelve shallow multilayer neural network training algorithms. Approximately one month of heating operation data from the absorption heat pump was used, and the prediction performance of each algorithm was compared and evaluated according to the training data size.
The energy consumption predictions of the various backpropagation-based shallow multilayer neural network training algorithms were compared and evaluated against measured data, confirming that the prediction performance differs for each model. LM and BR, which are generally known to be suitable for nonlinear regression predictions, exhibited the best predictive performance among the models studied: they satisfied ASHRAE Guideline 14 and also had low error rates. On the other hand, the errors in the prediction results of the gradient-based methods GDM and GD were large, and their prediction performance was poor enough that ASHRAE Guideline 14 was not satisfied under any condition.
Based on these results, the prediction performance may differ for each model, even for multilayer neural network training algorithms based on the same backpropagation approach. When a prediction algorithm is applied in the field, the amount of collected data and the prediction period may change, so stable results must be obtained even if the ratio of the training and testing periods changes. In the HVAC field, the main prediction targets, such as energy consumption and heating and cooling loads, take the form of time series, so it would be advantageous to apply nonlinear regression prediction models such as Levenberg–Marquardt backpropagation (LM) and Bayesian regularization backpropagation (BR). In addition, despite the use of a small amount of data (251 data points) for training, it was confirmed that predictive performance satisfying the criteria of ASHRAE Guideline 14 could be obtained by selecting an appropriate algorithm. Therefore, to obtain the best results for a given project, an appropriate model must be selected in consideration of the characteristics of the project.
Machine learning algorithms work well on the data used to train them, but overfitting may occur, and the model may not generalize properly to new data. In this study, the amount of data was limited, so it was not possible to test whether the models were overfit. In future studies, more data will be secured to develop prediction models with better performance.

Author Contributions

J.-H.K. contributed to the project idea development and wrote a draft version; N.-C.S. performed the data analysis; W.-C.C. reviewed the final manuscript and contributed to the results, discussion, and conclusions. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education (2021R1I1A1A01056761).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare that no conflict of interest exists regarding the publication of this article.

References

  1. International Energy Agency. Transition to Sustainable Buildings: Strategies and Opportunities to 2050; International Energy Agency: Paris, France, 2013.
  2. Loukaidou, K.; Michopoulos, A.; Zachariadis, T. Nearly-zero energy buildings: Cost-optimal analysis of building envelope characteristics. Procedia Environ. Sci. 2017, 38, 20–27.
  3. Lizana, J.; Chacartegui, R.; Barrios-Padura, A.; Valverde, J.M. Advances in thermal energy storage materials and their applications towards zero energy buildings: A critical review. Appl. Energy 2017, 203, 219–239.
  4. Cheng-wen, Y.; Jian, Y. Application of ANN for the prediction of building energy consumption at different climate zones with HDD and CDD. In Proceedings of the 2010 2nd International Conference on Future Computer and Communication, Wuhan, China, 21 May 2010.
  5. Peng, T.M.; Hubele, N.F.; Karady, G.G. An adaptive neural network approach to one-week ahead load forecasting. IEEE Trans. Power Syst. 1993, 8, 1195–1203.
  6. Roldán-Blay, C.; Escrivá-Escrivá, G.; Álvarez-Bel, C.; Roldán-Porta, C.; Rodríguez-García, J. Upgrade of an artificial neural network prediction method for electrical consumption forecasting using an hourly temperature curve model. Energy Build. 2013, 60, 38–46.
  7. Turhan, C.; Kazanasmaz, T.; Uygun, I.E.; Ekmen, K.E.; Akkurt, G.G. Comparative study of a building energy performance software (KEP-IYTE-ESS) and ANN-based building heat load estimation. Energy Build. 2014, 85, 115–125.
  8. Ferlito, S.; Atrigna, M.; Graditi, G.; De Vito, S.; Salvato, M.; Buonanno, A.; Di Francia, G. Predictive models for building's energy consumption: An artificial neural network (ANN) approach. In Proceedings of the 2015 XVIII AISEM Annual Conference, Trento, Italy, 3–5 February 2015.
  9. Li, Z.; Dai, J.; Chen, H.; Lin, B. An ANN-based fast building energy consumption prediction method for complex architectural form at the early design stage. Build. Simul. 2019, 12, 665–681.
  10. Le Cam, M.; Daoud, A.; Zmeureanu, R. Forecasting electric demand of supply fan using data mining techniques. Energy 2016, 101, 541–557.
  11. Ahmad, M.W.; Mourshed, M.; Rezgui, Y. Trees vs. neurons: Comparison between random forest and ANN for high-resolution prediction of building energy consumption. Energy Build. 2017, 147, 77–89.
  12. Ding, Y.; Zhang, Q.; Yuan, T.-H.; Yang, F. Effect of input variables on cooling load prediction accuracy of an office building. Appl. Therm. Eng. 2018, 128, 225–234.
  13. Koschwitz, D.; Frisch, J.; van Treeck, C. Data-driven heating and cooling load predictions for non-residential buildings based on support vector machine regression and NARX recurrent neural network: A comparative study on district scale. Energy 2018, 165, 134–142.
  14. Niu, F.; O'Neill, Z.; Zuo, W.; Li, Y. Assessment of different data-driven algorithms for AHU energy consumption predictions. In Proceedings of the 14th Conference of International Building Performance Simulation Association, Hyderabad, India, 7–9 December 2015.
  15. Chen, C.W.; Li, C.C.; Lin, C.Y. Combine Clustering and Machine Learning for Enhancing the Efficiency of Energy Baseline of Chiller System. Energies 2020, 13, 4368.
  16. Panahizadeh, F.; Hamzehei, M.; Farzaneh-Gord, M.; Villa, A.A.O. Evaluation of machine learning-based applications in forecasting the performance of single effect absorption chiller network. Therm. Sci. Eng. Prog. 2021, 26, 101087.
  17. Chaerun Nisa, E.; Kuan, Y.D. Comparative Assessment to Predict and Forecast Water-Cooled Chiller Power Consumption Using Machine Learning and Deep Learning Algorithms. Sustainability 2021, 13, 744.
  18. Kim, J.H.; Seong, N.C.; Choi, W. Modeling and optimizing a chiller system using a machine learning algorithm. Energies 2019, 12, 2860.
  19. Kim, J.H.; Seong, N.C.; Choi, W. Cooling load forecasting via predictive optimization of a nonlinear autoregressive exogenous (NARX) neural network model. Sustainability 2019, 11, 6535.
  20. Kim, J.H.; Seong, N.C.; Choi, W. Forecasting the energy consumption of an actual air handling unit and absorption chiller using ANN models. Energies 2020, 13, 4361.
  21. Glorot, X.; Bordes, A.; Bengio, Y. Deep sparse rectifier neural networks. In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, Fort Lauderdale, FL, USA, 11–13 April 2011.
  22. ASHRAE. Measurement of Energy and Demand Saving; ASHRAE: New York, NY, USA, 2012.
  23. Webster, L.J.; Bradford, J.M.V. Guidelines: Measurement and Verification for Federal Energy Projects, Version 3.0; Technical Report; U.S. Department of Energy Federal Energy Management Program: Washington, DC, USA, 2008.
  24. Efficiency Valuation Organization. International Performance Measurement & Verification Protocol; EVO: North Georgia, AL, USA, 2016.
  25. Hagan, M.T.; Menhaj, M. Training feed-forward networks with the Marquardt algorithm. IEEE Trans. Neural Netw. 1994, 5, 989–993.
  26. Hagan, M.T.; Demuth, H.B.; Beale, M.H. Neural Network Design; PWS Publishing: Boston, MA, USA, 1996.
Figure 1. Schematic diagram of the process of predicting energy consumption of the absorption heat pump.
Figure 2. Schematic of the multilayer shallow neural network training algorithms for predicting energy consumption.
Figure 3. Prediction results with training period 50–90%/test period 50–10% of the data.
Figure 4. CvRMSE and MBE prediction results with training period 50–90%/test period 50–10% of the data.
Figure 5. Prediction results: (a) error rate and (b) CvRMSE distribution.
Table 1. Structural and learning parameters for multilayer shallow neural network training algorithms.
Division         Number of Conditions
Hidden layers    3
Neurons          20
Epochs           100
Table 2. Acceptable calibration tolerances in building energy performance prediction.
Calibration Type   Index     ASHRAE Guideline 14 [22]   FEMP [23]   IPMVP [24]
Monthly            MBE       ±5%                        ±5%         ±20%
                   CvRMSE    15%                        15%         -
Hourly             MBE       ±10%                       ±10%        ±5%
                   CvRMSE    30%                        30%         20%
Table 3. Minimum, maximum, and standard deviation (SD) of error rate and CvRMSE by multilayer shallow neural network training algorithms.
             Error Rate                        CvRMSE
Algorithm    Min (%)   Max (%)   SD            Min (%)   Max (%)   SD
LM           0.09      5.76      1.94          22.04     28.88     2.17
BR           0.41      5.05      1.68          21.98     30.00     2.33
BFG          0.95      9.78      2.82          23.13     27.92     1.75
RP           0.59      13.59     4.91          24.31     28.12     1.27
SCG          0.04      15.16     5.07          24.29     29.30     1.45
CGB          0.07      10.14     2.99          24.33     29.49     1.49
CGF          0.58      15.58     4.81          21.08     30.69     3.13
CGP          0.13      5.73      1.76          24.89     29.97     1.48
OSS          0.03      26.07     8.16          24.76     38.05     4.13
GDX          0.92      17.40     5.75          26.16     37.11     3.66
GDM          0.30      41.77     14.82         32.25     53.20     7.11
GD           3.89      27.52     6.98          33.20     58.85     9.08
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
