Article

Optimization of the ANN Model for Energy Consumption Prediction of Direct-Fired Absorption Chillers for a Short-Term

Department of Architectural Engineering, Kangwon National University, Samcheok-si 25913, Gangwon-do, Republic of Korea
* Author to whom correspondence should be addressed.
Buildings 2023, 13(10), 2526; https://doi.org/10.3390/buildings13102526
Submission received: 2 September 2023 / Revised: 23 September 2023 / Accepted: 30 September 2023 / Published: 6 October 2023

Abstract

With increasing concern about global warming, there have been many attempts to reduce greenhouse gas emissions. Buildings consume about 30% of total energy, so much attention has been paid to reducing building energy consumption. Among the many approaches, machine learning is one of the most relevant. While machine learning methods provide accurate energy consumption predictions, they typically require large datasets. The present study developed an artificial neural network (ANN) model for building energy consumption prediction with small datasets. As mechanical systems are the most energy-consuming components in buildings, the study used short-term energy consumption data from a direct-fired absorption chiller. For the optimization, the prediction results were investigated while varying the number of inputs, the number of neurons, and the training data size. After optimization, the ANN model was validated with actual data collected through a building automation system. The outcome of the present study can be used to develop a more accurate prediction model with a small dataset and to predict the energy consumption of the chiller, which can improve the efficiency of building energy management.

1. Introduction

According to energy consumption statistics, the building sector accounts for about 30% of total energy consumption and 80% of greenhouse gas emissions [1]. Therefore, reducing building energy consumption has become one of the main agendas for promoting carbon neutrality [2,3].
To reduce building energy consumption, many attempts have been made to design energy-efficient buildings by improving the thermal performance of building envelopes, using energy-efficient mechanical systems, and installing renewable energy systems. In addition, optimized control and efficient operation of mechanical systems can make buildings more energy efficient. Specifically, about half of building energy is consumed by heating, ventilation, and air conditioning (HVAC) systems to maintain indoor thermal comfort [4]. Thus, it is necessary to manage the energy consumption of HVAC systems more efficiently during building operation. Recently, building energy management systems (BEMS) have been used in practice to manage building energy consumption by providing specific information on building energy usage. Accurate prediction of building energy consumption is therefore required to optimize building energy performance from design to operation [5].
In 2014, the Korean government mandated the installation of energy management systems (EMS) to strengthen building energy management. In addition, since 2017, the Korean government has required the installation of BEMS in newly constructed public buildings or extensions with a gross floor area greater than 10,000 m2 [6,7]. Under new laws introduced in 2019 in South Korea, the regulation of BEMS installation has become even more significant for strengthening building energy management. Under these regulations, building energy management requires the prediction of building energy consumption through regression analysis or machine learning of hourly and daily energy usage and energy sources [7].
Generally, data-driven models provide a practical way of predicting building energy consumption, and the number of studies on building energy consumption prediction using artificial intelligence and machine learning methods has increased. Several studies have been performed to provide more accurate predictions. For example, artificial neural networks (ANNs) have been used to manage building energy consumption in non-residential buildings [8].
Most studies using ANN models to predict or manage energy consumption in non-residential buildings have relied on data collected over long periods of time. R. Mena et al. used 18 months of data for efficient building demand management [9]; the resulting prediction accuracy, expressed as the mean absolute percentage error (MAPE), was in the range of 0.81–1.73%. Ferlito et al. created a prediction model using three years of data for demand-side management [10]. In addition, Yun et al. used six months of data generated by EnergyPlus simulations for supply- and demand-side management [11]. Similar studies have been conducted using large amounts of data [5,12].
To deal with the issues associated with data collected over long periods, studies have proposed improving neural network models as well as using deep learning methods. However, deep learning models tend to overfit with small datasets, although they can provide reliable predictions of building energy loads or electricity consumption when large datasets are used [13]. On the other hand, long training times and low accuracy can occur when large datasets are used [14]. Thus, an appropriate dataset size plays an important role in ensuring prediction accuracy [15]. While many studies have reviewed the types of data and machine learning models for data-driven energy consumption prediction, few studies have varied the dataset size or the data collection duration [16,17].
In South Korea, recent studies have predicted building energy loads [18,19], energy consumption [20,21], and energy usage patterns [22,23] using machine learning or artificial intelligence (AI). Moreover, these studies pointed out that much attention should be paid to the accuracy of predictions made by such techniques. Most of these techniques require a large number of data points to predict building energy consumption. If buildings are not equipped with a BEMS or a building automation system (BAS), data generated by simulation tools are generally used.
However, simulated data differ from measured data, and it is difficult to collect a sufficient amount of measured data in a short period of time. Thus, efforts are needed to improve prediction accuracy with small datasets collected over a short period.
The purpose of the present study is to develop an energy consumption prediction model for the short term based on the ANN technique. After optimizing the ANN model, it is validated with the energy consumption data collected from the operation of a direct-fired absorption chiller. The outcome of the present study can be used to develop a more accurate prediction model with a few datasets, which can improve the efficiency of building energy management.

2. Optimization and Validation of an ANN-Based Prediction Model

The present study used an ANN technique to predict energy consumption in the short term. Since a large dataset is required to optimize the ANN model, an energy simulation was performed to generate data. Considering the correlations among the data, input variables were chosen and preprocessed. The number of inputs, the number of neurons, the training data size, and the learning parameters were determined to optimize the ANN model. After optimization, the model was validated with measured data. Figure 1 shows the process of the present study.

2.1. ANN Model

Among ANN models, the present study implemented the NARX (Nonlinear AutoRegressive network with eXogenous inputs) feedforward neural network model, which is widely used to predict time-series data due to its high accuracy [5]. According to several studies, the NARX model can be used for modeling nonlinear dynamic systems and for time-series forecasting [24,25]. The Neural Network Toolbox of MATLAB (R2020a) was used to create the neural network. The NARX feedforward neural network is a multilayer perceptron ANN model that consists of an input layer, a hidden layer, and an output layer [26]. In addition, the Levenberg-Marquardt algorithm, a popular trust-region method for finding the minimum of a function over a space of parameters, was used for training. The structure of the ANN model, which is similar to those used in previous studies [26,27], is shown in Figure 2.
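The model itself was built with MATLAB's Neural Network Toolbox; purely as an illustration, the sketch below shows how a NARX-style feedforward regressor can be assembled in Python, using lagged exogenous inputs and lagged outputs as features. scikit-learn's MLPRegressor stands in for the Levenberg-Marquardt-trained network (that algorithm is not available in scikit-learn), and the delay order, data shapes, and variable names are assumptions, not the authors' implementation.

```python
# Minimal NARX-style sketch: the output y(t) is regressed on lagged exogenous
# inputs x(t-d..t-1) and lagged outputs y(t-d..t-1) with a feedforward network.
import numpy as np
from sklearn.neural_network import MLPRegressor

def build_narx_dataset(x, y, delay=2):
    """Stack delayed exogenous inputs and delayed outputs into a regressor matrix.

    x : (T, n_inputs) array of exogenous variables (e.g., temperatures, flow rate)
    y : (T,) array of chiller energy consumption
    """
    rows, targets = [], []
    for t in range(delay, len(y)):
        lagged_x = x[t - delay:t].ravel()   # x(t-d) ... x(t-1)
        lagged_y = y[t - delay:t]           # y(t-d) ... y(t-1)
        rows.append(np.concatenate([lagged_x, lagged_y]))
        targets.append(y[t])
    return np.asarray(rows), np.asarray(targets)

# Hypothetical data shapes: 8760 hourly samples, 5 exogenous input variables.
rng = np.random.default_rng(0)
x = rng.random((8760, 5))
y = rng.random(8760)

X, Y = build_narx_dataset(x, y, delay=2)
# One hidden layer with 10 neurons shown for brevity; the paper sweeps these settings.
model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=500, random_state=0)
model.fit(X, Y)
```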

2.2. Assessment of the Prediction Model

In general, the performance of prediction models can be validated against ASHRAE (American Society of Heating, Refrigerating and Air-Conditioning Engineers) Guideline 14, the FEMP (US DOE Federal Energy Management Program) M&V guidelines, and the IPMVP (International Performance Measurement and Verification Protocol) [28,29,30]. Each provides its own measurement and verification (M&V) protocol with performance evaluation indicators (Table 1). Among these, the present study evaluated the performance of the prediction model based on ASHRAE Guideline 14. CVRMSE (coefficient of variation of the root mean square error) quantifies the dispersion of the predicted values relative to the mean of the measured data, and MBE (mean bias error) quantifies the overall bias between predicted and measured values; they are defined in Equations (1) and (2) below. The predictive performance of the model was evaluated primarily using CVRMSE.
$$\mathrm{MBE} = \frac{\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)}{(n-p)\times\bar{y}} \times 100 \qquad (1)$$

$$\mathrm{Cv(RMSE)} = \frac{\left[\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^{2}/(n-p)\right]^{1/2}}{\bar{y}} \times 100, \qquad (2)$$
where n is the number of data points, p is the number of parameters, $y_i$ is the utility data used for calibration, $\hat{y}_i$ is the simulation-predicted data, and $\bar{y}$ is the arithmetic mean of the sample of n observations. In addition, the suitability of the model was evaluated using R2. After 10 training runs, the average, maximum, minimum, and standard deviation of these indicators were used to evaluate the predictive performance of the ANN-based prediction model.
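For reference, a minimal sketch of the two indicators defined in Equations (1) and (2); the NumPy implementation and function names are my own choices, and p defaults to 1 as a placeholder.

```python
import numpy as np

def mbe(y, y_hat, p=1):
    """Mean bias error [%] per Equation (1): sum of residuals over (n - p) * mean(y)."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    n = y.size
    return np.sum(y - y_hat) / ((n - p) * y.mean()) * 100.0

def cv_rmse(y, y_hat, p=1):
    """Coefficient of variation of the RMSE [%] per Equation (2)."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    n = y.size
    return np.sqrt(np.sum((y - y_hat) ** 2) / (n - p)) / y.mean() * 100.0

# Example criterion: an hourly CVRMSE below 30% satisfies ASHRAE Guideline 14.
```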

3. The Optimization of the ANN-Based Model Using the Simulation Data

Improving the predictive performance of ANN models generally requires a large amount of data [31]. For the present study, data generated by simulations were used. Simulated data offer the advantage of covering selected periods when it is not possible to gather data from buildings over a long period of time.

3.1. Energy Simulation for Generating Data

In the present study, an office building was chosen as the reference building. It has 18 floors with a gross floor area of 41,005 m2. For heating and cooling, the building is equipped with two direct-fired absorption chillers, of which one was in operation; each chiller has a cooling capacity of 600 USRT. For the energy simulation, EnergyPlus 9.3.0 was used. The reference building in Figure 3a was modeled using OpenStudio, as shown in Figure 3b. The inputs for the energy simulation, such as the operation schedule and occupancy, were the same as those of the reference building. In addition, the climate data collected from the BAS installed in the reference building were synthesized into TRY format. Table 2 shows the specific inputs for the energy simulation.

3.2. The Optimization Process for Improving the Predictive Performance of the ANN Model

3.2.1. Input Variables

In this stage, input variables for training were chosen among the data generated by the energy simulation. The correlation between each candidate input and the output was analyzed using the Spearman rank-order correlation coefficient, and the most highly correlated variables were chosen as inputs. The input layer for the direct-fired absorption chiller consisted of the outside dry-bulb temperature, dew-point temperature, outside wet-bulb temperature, supply chilled water temperature, supply chilled water flow rate, condenser water temperature, and seasonal data. In the hidden layer, the data were received as input signals from the input layer through the internal neurons, and the output layer predicted the energy consumption of the direct-fired absorption chiller based on the hidden layer calculation results. Table 3 presents the correlation coefficients and ranks between the input variables (x(t)) and the predicted gas consumption of the direct-fired absorption chiller (y(t)).
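As an illustration of this selection step, the sketch below ranks candidate inputs by their Spearman rank-order correlation with the chiller's gas consumption, in the spirit of Table 3. The DataFrame column names and the helper function are hypothetical; scipy.stats.spearmanr computes the actual coefficient.

```python
# Rank candidate inputs by Spearman rank-order correlation with the target,
# as in Table 3. Column names below are illustrative placeholders.
import pandas as pd
from scipy.stats import spearmanr

def rank_inputs(df: pd.DataFrame, target: str = "gas_consumption") -> pd.Series:
    """Return Spearman rho of each candidate column vs. the target, sorted by |rho|."""
    rhos = {}
    for col in df.columns:
        if col == target:
            continue
        rho, _p_value = spearmanr(df[col], df[target])
        rhos[col] = rho
    series = pd.Series(rhos)
    return series.reindex(series.abs().sort_values(ascending=False).index)

# Usage sketch: the top-k entries of the returned Series become the ANN inputs x(t).
```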

3.2.2. Input Parameters

The number of hidden layers was set at 3, and the number of epochs, one of the learning parameters, was set at 100. Since the number of neurons in the hidden layers mainly influences the prediction results and the computation time, the number of neurons was varied from 2 to 20 in increments of 2. The number of input variables was varied from 3 to 7, and the training data size ranged from 50% to 90%. The detailed parameter conditions are summarized in Table 4.
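A sketch of how this sweep might be organized is shown below: each configuration is trained 10 times and summarized by the average, maximum, minimum, and standard deviation of the test-period CVRMSE, with one factor varied at a time as in Sections 3.3.1-3.3.3. The placeholder data, the use of scikit-learn's MLPRegressor, and the helper names are assumptions rather than the authors' MATLAB implementation.

```python
# One-factor-at-a-time sweep over the number of inputs, neurons, and training
# data size, with 10 repetitions per setting (placeholder data, illustrative only).
import numpy as np
from sklearn.neural_network import MLPRegressor

def cv_rmse(y, y_hat, p=1):
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    return np.sqrt(np.sum((y - y_hat) ** 2) / (y.size - p)) / y.mean() * 100.0

rng = np.random.default_rng(0)
ranked_inputs = rng.random((8760, 7))   # placeholder: 7 candidates, best-correlated first
target = rng.random(8760)               # placeholder: hourly chiller gas consumption

def evaluate(n_inputs, n_neurons, train_frac, repeats=10):
    """Train `repeats` networks and return (avg, max, min, sd) of test CVRMSE."""
    split = int(len(target) * train_frac)
    X = ranked_inputs[:, :n_inputs]
    scores = []
    for run in range(repeats):
        model = MLPRegressor(hidden_layer_sizes=(n_neurons,), max_iter=500,
                             random_state=run)
        model.fit(X[:split], target[:split])
        scores.append(cv_rmse(target[split:], model.predict(X[split:])))
    return np.mean(scores), np.max(scores), np.min(scores), np.std(scores)

# Vary one factor at a time while holding the others at the baseline values.
inputs_sweep  = {k: evaluate(k, 10, 0.60) for k in range(3, 8)}           # 3-7 inputs
neurons_sweep = {k: evaluate(5, k, 0.60) for k in range(2, 21, 2)}        # 2-20 neurons
size_sweep    = {round(f, 2): evaluate(5, 10, f)
                 for f in np.arange(0.50, 0.95, 0.05)}                    # 50-90% data
```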

3.3. Results and Discussion

3.3.1. Predictive Performance by the Number of Input Variable Changes

Figure 4 shows the predictive performance when changing the number of input variables in order to find the optimal number. The number of neurons and the training data size were set at 10 and 60%, respectively. As the number of input variables was varied, the average CVRMSE values ranged from 5.69% to 8.43% for the training periods and from 12.25% to 24.04% for the testing periods. These were within the acceptable range of 30% given by ASHRAE Guideline 14. When the number of input variables was 4, the average CVRMSE was the lowest (5.69%) for the training period. For the testing period, the average CVRMSE was the lowest (13.25%) when the number of input variables was 5. In addition, the average, minimum, and maximum CVRMSE values were 12.25%, 8.56%, and 16.05%, respectively, showing the most accurate predictive performance. Compared with the minimum of three input variables, the average CVRMSE values decreased by 0.58% and 3.74% for the training and testing periods, respectively. The standard deviation was 2.44, indicating consistent predictive performance. When the number of input variables was greater than 5, the average CVRMSE increased and the accuracy of the predictive performance decreased. This suggests that the predictive performance of the ANN model is lower when input variables that are weakly correlated with the output are included. Therefore, the predictive performance was most acceptable when the number of input variables was 5. According to this result, it is more important to consider the correlation between the input variables and the output than to increase the number of input variables. Table 5 shows the average, minimum, maximum, and standard deviation values as the number of input variables increases.

3.3.2. Predictive Performance by the Number of Neurons Changes

In this stage, the predictive performance of the model was analyzed by changing the number of neurons (Figure 5). The number of input variables was 5, and the training data size was set at 60%. As the number of neurons was varied, the average CVRMSE values were in the range of 5.61–22.44% and 12.25–27.14% for the training and testing periods, respectively. These were within the acceptable range of 30% given by ASHRAE Guideline 14. When the number of neurons was 20, the average CVRMSE was the lowest (5.61%) for the training period. For the testing period, the average CVRMSE was the lowest (12.25%) when the number of neurons was 10. Moreover, the most accurate predictive performance was obtained with 10 neurons, for which the average, minimum, and maximum CVRMSE values were 12.25%, 8.56%, and 16.05%, respectively. When comparing 10 neurons with 2 neurons, the average CVRMSE values for the training and testing periods decreased by 16.66% and 12.55%, respectively. Up to 10 neurons, the predictive performance improved as the CVRMSE gradually decreased, whereas it degraded as the CVRMSE increased again when the number of neurons exceeded 12. This indicates that the number of weights grows with the number of neurons, leading to overfitting. When the number of neurons was higher than 10, the standard deviation was in the range of 1.42–2.84, showing more consistent predictive performance than with fewer than 10 neurons. Overall, the most acceptable predictive performance was obtained with 10 neurons. Increasing the number of neurons alone did not improve the predictive performance of the ANN model; thus, it is important to find a suitable number of neurons by observing the model's predictive performance. Table 6 shows the average, maximum, minimum, and standard deviation values as the number of neurons increases.

3.3.3. Predictive Performance by the Size Changes of the Training Data

Figure 6 presents the CVRMSE values when changing the size of the training data. Based on the previous results, the number of input variables and neurons was set at 5 and 10, respectively. The training data size was increased from 50% to 90% in increments of 5%. The average CVRMSE values were in the range of 5.36–7.74% and 8.93–17.69% for the training and testing periods, respectively. These were all within the acceptable range of ASHRAE Guideline 14 and also within 20%, indicating suitable predictive performance. When the training data size was 65%, the average CVRMSE was the lowest (5.36%) for the training period, while the lowest value for the testing period (8.93%) was obtained with a training data size of 85%. Moreover, the average, minimum, and maximum CVRMSE values were 8.93%, 6.69%, and 11.23%, respectively, when the training data size was 85%, which showed the most accurate predictive performance. As the training data size was increased up to 85%, the predictive performance improved and the CVRMSE decreased gradually; however, the predictive performance degraded when the training data size was 90%. The standard deviation ranged from 1.45 to 1.88 when the training data size was set between 70% and 85%. Thus, the ANN model showed the most acceptable predictive performance when the training data size was set between 80% and 85%. Table 7 shows the average, minimum, maximum, and standard deviation values when changing the size of the training data.

4. The Validation of the ANN Model for a Short Term with Measured Data

Based on the obtained results, the predictive performance of the ANN model for a direct-fired absorption chiller was validated with actual data for a short-term period. The number of input variables and neurons was set at 5 and 10, respectively, and the training data size was varied in the range of 70–85%. The actual data were collected from the BAS installed in the reference building during the summer (1 July–8 August, 40 days), corresponding to about 952 h of data. The data were preprocessed using a data transformation that accounts for the units of building energy consumption.
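As a sketch of this validation step, the snippet below retrains the optimized configuration (5 inputs, 10 neurons) on the measured hourly records with training fractions of 70–85% and reports the test-period CVRMSE. The CSV file name, column names, and the min-max scaling used as a preprocessing stand-in are assumptions.

```python
# Validation sketch for the measured dataset (~952 hourly BAS records).
import numpy as np
import pandas as pd
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import MinMaxScaler

def cv_rmse(y, y_hat, p=1):
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    return np.sqrt(np.sum((y - y_hat) ** 2) / (y.size - p)) / y.mean() * 100.0

df = pd.read_csv("bas_summer_measurements.csv")        # hypothetical export from the BAS
inputs = ["condenser_water_temp", "chilled_water_flow", "outdoor_dry_bulb",
          "dew_point_temp", "chilled_water_supply_temp"]  # 5 selected inputs (assumed names)
X = MinMaxScaler().fit_transform(df[inputs])           # simple scaling as a preprocessing stand-in
y = df["chiller_gas_consumption"].to_numpy()

for train_frac in (0.70, 0.75, 0.80, 0.85):
    split = int(len(y) * train_frac)
    model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=500, random_state=0)
    model.fit(X[:split], y[:split])
    print(f"{train_frac:.0%} training data -> "
          f"test CVRMSE {cv_rmse(y[split:], model.predict(X[split:])):.2f}%")
```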

4.1. The Validation of the ANN Model with Measured Data

Figure 7 shows the comparison of the predictive performance when changing the size of the measured training data. As the training data size was varied, the average CVRMSE values were in the range of 18.68–21.11% and 19.99–26.06% for the training and testing periods, respectively. These values were all within the acceptable range of ASHRAE Guideline 14. As shown previously, the predictive performance improved as the training data size increased, and the most acceptable predictive performance was obtained with a training data size of 85%. Specifically, the average, maximum, and minimum CVRMSE values for the testing period were 19.99%, 22.02%, and 17.5%, respectively. However, the CVRMSE values obtained from the ANN model with the actual data were higher than those obtained with the data generated by the simulation. This was caused by the decrease in the number of data points from 8760 to 952 and may also result from the quality difference between the simulated and measured data. Even so, the standard deviation remained within a narrow range of 1.36–1.62. Table 8 shows the average, maximum, minimum, and standard deviation of the predictive performance when changing the size of the actual training data.

4.2. The Prediction Result of the Energy Consumption for a Short Term by Using Measured Data

Figure 8 shows the energy consumption comparison between the prediction obtained by the ANN model and the reference building for 952 h. As shown previously, the standard deviation difference between the prediction and measured data decreased as the size of the training data increased.
Figure 9 shows the energy consumption comparison between the prediction and the actual data when changing the size of the training data, including the error rate. The total energy consumption of the direct-fired absorption chiller was 998.22 GJ. The error rate was 2.16%, 1.82%, 1.44%, and 1.11% for training data sizes of 70%, 75%, 80%, and 85%, respectively. This indicates that the predictive performance of the model improved as the size of the training data increased. As shown above, the training data size plays a significant role in the predictive performance of the model with the actual data; thus, it is important to find a suitable training data size to improve predictive performance. The summarized predictive performance of the model with the actual data is presented in Table 9.

5. Conclusions

The present study developed an ANN model using the collected data and optimized the developed model. The predictive performance of the ANN model was then validated using the energy consumption of a direct-fired absorption chiller.
When the number of input variables and neurons was set at 5 and 10, respectively, and the training data size was 85%, the average CVRMSE was the lowest and the predictive performance was the most acceptable. The ANN model was then validated with the measured data, yielding an average CVRMSE of 19.99%. Even though the CVRMSE increased somewhat, the error rate was less than 1%, indicating that the predictive performance of the ANN model was acceptable. In sum, the outcome of the present study can be used to predict the energy consumption of the chiller as well as to improve the efficiency of energy management.
The main achievements are:
  • The predictive performance was investigated by varying the number of input variables. Based on the result, it is more important to consider the correlation between the input variables and the output than to increase the number of input variables.
  • The increase in the number of neurons was not effective in improving the predictive performance of the ANN model. Thus, finding a suitable number of neurons is the key to ensuring the accuracy of the predictive performance.
  • The ANN model showed acceptable predictive performance when the training data size was set between 80% and 85%.
  • The training data size plays a significant role in the predictive performance of the ANN model.
The present study developed an ANN model to predict the energy consumption of the chiller. However, an HVAC system consists of many components, such as air handling units, fans, boilers, and pumps. Therefore, it is necessary to develop ANN models for these components in future work. By applying the methodology used in this study, the predictive performance of models for the energy consumption of these components is expected to improve. Furthermore, the outcome of the study can be used to develop more efficient ANN models with small datasets.

Author Contributions

N.S. contributed to the conceptualization, methodology, writing—original draft preparation, formal analysis, and visualization; G.H. performed writing—review and editing, and funding acquisition. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education (No. 2021R1A6A3A01087034). This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIT) (No. 2020R1C1C1010801).

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. IEA. Global Status Report for Buildings and Construction 2019; IEA: Paris, France, 2019.
2. Dai, B.; Qi, H.; Liu, S.; Zhong, Z.; Li, H.; Song, M.; Ma, M.; Sun, Z. Environmental and economical analyses of transcritical CO2 heat pump combined with direct dedicated mechanical subcooling (DMS) for space heating in China. Energy Convers. Manag. 2019, 198, 111317.
3. Dai, B.; Wang, Q.; Liu, S.; Wang, D.; Yu, L.; Li, X.; Wang, Y. Novel configuration of dual-temperature condensation and dual-temperature evaporation high-temperature heat pump system: Carbon footprint, energy consumption, and financial assessment. Energy Convers. Manag. 2023, 292, 117360.
4. US Department of Energy. An Assessment of Energy Technologies and Research Opportunities. Quadrennial Technology Review. 2015. Available online: https://www.energy.gov/quadrennial-technology-review-2015 (accessed on 11 July 2023).
5. Kim, J.-H.; Seong, N.-C.; Choi, W. Cooling load forecasting via predictive optimization of a nonlinear autoregressive exogenous (NARX) neural network model. Sustainability 2019, 11, 6535.
6. Regulations on the Promotion of Rational Use of Energy in Public Institutions; Ministry of Trade, Industry and Energy. 2017. Available online: http://www.Motie.Go.Kr/ (accessed on 16 August 2023).
7. Regulations for the Operation of Energy Management System Installation and Confirmation. Available online: https://www.Energy.Or.Kr/ (accessed on 22 August 2023).
8. Amasyali, K.; El-Gohary, N.M. A review of data-driven building energy consumption prediction studies. Renew. Sustain. Energy Rev. 2018, 81, 1192–1205.
9. Mena, R.; Rodríguez, F.; Castilla, M.; Arahal, M.R. A prediction model based on neural networks for the energy consumption of a bioclimatic building. Energy Build. 2014, 82, 142–155.
10. Yuan, Y.; Chen, Z.; Wang, Z.; Sun, Y.; Chen, Y. Attention mechanism-based transfer learning model for day-ahead energy demand forecasting of shopping mall buildings. Energy 2023, 270, 126878.
11. Yun, K.; Luck, R.; Mago, P.J.; Cho, H. Building hourly thermal load prediction using an indexed ARX model. Energy Build. 2012, 54, 225–233.
12. Shi, G.; Liu, D.; Wei, Q. Energy consumption prediction of office buildings based on echo state networks. Neurocomputing 2016, 216, 478–488.
13. Zhou, X.; Lin, W.; Kumar, R.; Cui, P.; Ma, Z. A data-driven strategy using long short term memory models and reinforcement learning to predict building electricity consumption. Appl. Energy 2022, 306, 118078.
14. Xiao, Z.; Gang, W.; Yuan, J.; Chen, Z.; Li, J.; Wang, X.; Feng, X. Impacts of data preprocessing and selection on energy consumption prediction model of HVAC systems based on deep learning. Energy Build. 2022, 258, 111832.
15. Wang, Z.; Wang, Y.; Srinivasan, R.S. A novel ensemble learning approach to support building energy use prediction. Energy Build. 2018, 159, 109–122.
16. Sun, Y.; Haghighat, F.; Fung, B.C.M. A review of the state-of-the-art in data-driven approaches for building energy prediction. Energy Build. 2020, 221, 110022.
17. Olu-Ajayi, R.; Alaka, H.; Owolabi, H.; Akanbi, L.; Ganiyu, S. Data-driven tools for building energy consumption prediction: A review. Energies 2023, 16, 2574.
18. Seong, N.; Hong, G. Comparative Evaluation of Building Cooling Load Prediction Models with Multi-Layer Neural Network Learning Algorithms. KIEAE J. 2022, 22, 35–41.
19. Seong, N.-C.; Hong, G. An Analysis of the Effect of the Data Preprocess on the Performance of Building Load Prediction Model Using Multilayer Neural Networks. J. Korean Inst. Archit. Sustain. Environ. Build. Syst. 2022, 16, 273–284.
20. Lee, C.-W.; Seong, N.-C.; Choi, W.-C. Performance Improvement and Comparative Evaluation of the Chiller Energy Consumption Forecasting Model Using Python. J. Korean Inst. Archit. Sustain. Environ. Build. Syst. 2021, 15, 252–264.
21. Junlong, Q.; Shin, J.-W.; Ko, J.-L.; Shin, S.-K. A Study on Energy Consumption Prediction from Building Energy Management System Data with Missing Values Using SSIM and VLSW Algorithms. Trans. Korean Inst. Electr. Eng. 2021, 70, 1540–1547.
22. Yoon, Y.-R.; Shin, S.-H.; Moon, H.-J. Analysis of Building Energy Consumption Patterns according to Building Types Using Clustering Methods. J. Korean Soc. Living Environ. Syst. 2017, 24, 232–237.
23. Gu, S.; Lee, H.; Yoon, J.; Kim, D. Investigation on Electric Energy Consumption Patterns of Residential Buildings in Four Cities through the Data Mining. J. Korean Sol. Energy Soc. 2022, 42, 127–139.
24. Di Nunno, F.; Granata, F. Groundwater level prediction in Apulia region (Southern Italy) using NARX neural network. Environ. Res. 2020, 190, 110062.
25. Koschwitz, D.; Frisch, J.; van Treeck, C. Data-driven heating and cooling load predictions for non-residential buildings based on support vector machine regression and NARX recurrent neural network: A comparative study on district scale. Energy 2018, 165, 134–142.
26. Kim, J.-H.; Seong, N.-C.; Choi, W.-C. Comparative evaluation of predicting energy consumption of absorption heat pump with multilayer shallow neural network training algorithms. Buildings 2022, 12, 13.
27. Kim, J.; Hong, Y.; Seong, N.; Kim, D.D. Assessment of ANN algorithms for the concentration prediction of indoor air pollutants in child daycare centers. Energies 2022, 15, 2654.
28. American Society of Heating, Refrigerating and Air-Conditioning Engineers. ASHRAE Guideline 14-2002, Measurement of Energy and Demand Savings—Measurement of Energy, Demand and Water Savings. 2002. Available online: http://www.eeperformance.org/uploads/8/6/5/0/8650231/ashrae_guideline_14-2002_measurement_of_energy_and_demand_saving.pdf (accessed on 20 July 2023).
29. M&V Guidelines: Measurement and Verification for Performance-Based Contracts; Federal Energy Management Program. Available online: https://www.Energy.Gov/eere/femp/downloads/mv-guidelines-measurement-and-verification-performance-based-contracts-version (accessed on 11 August 2023).
30. Efficiency Valuation Organization. International Performance Measurement and Verification Protocol, Concepts and Options for Determining Energy and Water Savings; EVO: North Georgia, GA, USA, 2016; Volume 3.
31. Seong, N.-C.; Kim, J.-H.; Choi, W. Adjustment of multiple variables for optimal control of building energy performance via a genetic algorithm. Buildings 2020, 10, 195.
Figure 1. The study process.
Figure 2. Structure of a multilayer neural network model.
Figure 3. The reference building overview and the model using OpenStudio.
Figure 4. The predictive performance when changing the number of input variables.
Figure 5. The predictive performance when changing the number of neurons.
Figure 6. The predictive performance when changing the size of the training data.
Figure 7. The predictive performance when changing the size of the actual training data.
Figure 8. The energy consumption prediction when using the actual data.
Figure 9. The energy consumption comparison and error rate when changing the size of the training data.
Table 1. Acceptable calibration tolerances in building energy consumption prediction.

Calibration Type | Index          | ASHRAE Guideline 14 [28] | FEMP [29] | IPMVP [30]
Monthly          | MBE_monthly    | ±5%                      | ±5%       | ±20%
                 | CvRMSE_monthly | 15%                      | 15%       | -
Hourly           | MBE_hourly     | ±10%                     | ±10%      | ±5%
                 | CvRMSE_hourly  | 30%                      | 30%       | 20%
Table 2. Simulation conditions.

Component                               | Features
Site location                           | Latitude 37.27° N, Longitude 126.99° E
Weather data                            | TRY Suwon
Load convergence tolerance value        | Delta 0.04 W (default)
Temperature convergence tolerance value | Delta 0.4 °C (default)
Heat balance algorithm                  | CTF (Conduction Transfer Function)
Simulated hours                         | 8760 h
Timestep                                | Hourly
Internal gain                           | Lighting 6 W/m2; People 20 m2/person; Plug and process 8 W/m2
Envelope summary                        | Wall 0.36 W/m2·K; Roof 0.20 W/m2·K; Window 2.40 W/m2·K; SHGC 0.497
Operation schedule                      | 7:00–18:00
Table 3. Correlation between input variables and energy consumption.

Rank | Input Variable [x(t)]                 | Spearman Correlation
1    | Condenser water temperature (°C)      | 0.72
2    | Supply chilled water flow rate (kg/s) | 0.65
3    | Outside dry-bulb temperature (°C)     | 0.54
4    | Dew-point temperature (°C)            | 0.44
5    | Supply chilled water temperature (°C) | -0.38
6    | Outside wet-bulb temperature (°C)     | 0.31
7    | Hour                                  | 0.25
Table 4. Parameter conditions for the optimization of the ANN model.

Type     | Parameter               | Value
Fixed    | Number of hidden layers | 3
Fixed    | Epochs                  | 500
Variable | Number of inputs        | 3–7
Variable | Number of neurons       | 2–20
Variable | Training data size      | 50–90%
Table 5. The predictive performance (CVRMSE, %) when changing the number of input variables (average, maximum, minimum, and standard deviation).

Number of Inputs | Period   | Average | Maximum | Minimum | SD
3                | Training | 6.35    | 9.88    | 4.18    | 1.67
                 | Testing  | 16.30   | 22.05   | 9.44    | 4.30
4                | Training | 5.69    | 9.37    | 4.34    | 1.49
                 | Testing  | 14.23   | 17.84   | 9.99    | 2.60
5                | Training | 5.77    | 8.21    | 4.03    | 1.40
                 | Testing  | 13.25   | 19.35   | 8.56    | 3.49
6                | Training | 7.38    | 11.92   | 4.46    | 1.94
                 | Testing  | 16.74   | 21.43   | 12.05   | 3.21
7                | Training | 8.43    | 11.65   | 6.72    | 1.45
                 | Testing  | 24.04   | 28.45   | 17.32   | 3.92
Table 6. The predictive performance (CVRMSE, %) when changing the number of neurons (average, maximum, minimum, and standard deviation).

Number of Neurons | Period   | Average | Maximum | Minimum | SD
2                 | Training | 22.44   | 23.65   | 20.23   | 1.21
                  | Testing  | 27.14   | 29.33   | 22.68   | 2.41
4                 | Training | 12.47   | 18.43   | 6.22    | 3.36
                  | Testing  | 19.77   | 25.01   | 12.15   | 3.79
6                 | Training | 9.70    | 12.66   | 4.83    | 2.59
                  | Testing  | 19.45   | 22.40   | 15.36   | 2.22
8                 | Training | 8.28    | 12.76   | 5.00    | 2.36
                  | Testing  | 16.80   | 20.50   | 10.37   | 3.73
10                | Training | 5.77    | 8.21    | 4.03    | 1.40
                  | Testing  | 12.55   | 16.05   | 8.56    | 2.44
12                | Training | 7.70    | 13.95   | 4.58    | 3.10
                  | Testing  | 12.66   | 16.14   | 8.20    | 2.84
14                | Training | 7.62    | 13.59   | 4.02    | 3.86
                  | Testing  | 14.52   | 17.55   | 10.27   | 2.33
16                | Training | 6.22    | 9.81    | 3.98    | 1.94
                  | Testing  | 14.59   | 16.82   | 12.05   | 1.42
18                | Training | 6.22    | 10.64   | 4.01    | 2.39
                  | Testing  | 14.66   | 16.50   | 9.28    | 2.27
20                | Training | 5.61    | 11.33   | 3.96    | 2.19
                  | Testing  | 14.74   | 17.72   | 9.57    | 2.84
Table 7. The predictive performance (CVRMSE, %) when changing the size of the training data (average, maximum, minimum, and standard deviation).

Training Data Size [%] | Period   | Average | Maximum | Minimum | SD
50                     | Training | 7.74    | 11.25   | 4.82    | 1.93
                       | Testing  | 17.69   | 24.88   | 8.80    | 5.78
55                     | Training | 7.60    | 9.08    | 4.63    | 1.38
                       | Testing  | 16.77   | 23.43   | 7.59    | 4.70
60                     | Training | 5.77    | 8.21    | 4.03    | 1.40
                       | Testing  | 12.55   | 16.05   | 8.56    | 2.44
65                     | Training | 5.36    | 6.53    | 3.98    | 0.83
                       | Testing  | 12.29   | 16.04   | 7.82    | 2.78
70                     | Training | 5.45    | 8.24    | 4.11    | 1.33
                       | Testing  | 12.26   | 16.42   | 10.23   | 1.88
75                     | Training | 5.37    | 7.59    | 4.32    | 1.05
                       | Testing  | 11.06   | 14.54   | 9.06    | 1.83
80                     | Training | 6.04    | 8.84    | 4.28    | 1.53
                       | Testing  | 9.08    | 11.85   | 7.14    | 1.81
85                     | Training | 5.76    | 9.58    | 4.48    | 1.45
                       | Testing  | 8.93    | 11.23   | 6.69    | 1.45
90                     | Training | 5.87    | 9.30    | 4.65    | 1.46
                       | Testing  | 9.79    | 12.20   | 6.17    | 2.10
Table 8. The predictive performance (CVRMSE, %) when changing the size of the actual training data (average, maximum, minimum, and standard deviation).

Training Data Size [%] | Period   | Average | Maximum | Minimum | SD
70                     | Training | 22.11   | 27.52   | 18.41   | 2.48
                       | Testing  | 26.06   | 27.85   | 23.68   | 1.53
75                     | Training | 19.73   | 22.96   | 14.95   | 2.64
                       | Testing  | 24.87   | 26.91   | 22.27   | 1.62
80                     | Training | 19.34   | 21.81   | 17.65   | 1.46
                       | Testing  | 22.77   | 25.34   | 20.41   | 1.57
85                     | Training | 18.68   | 21.62   | 15.74   | 1.72
                       | Testing  | 19.99   | 22.02   | 17.50   | 1.36
Table 9. The energy consumption comparison and error rate when changing the size of the training data.

Training Data Size | Period          | Prediction [GJ] | Actual [GJ] | Error Rate [%]
70%                | Training period | 678.83          | 639.39      | 5.81
                   | Testing period  | 341.46          | 358.83      | 5.09
                   | Total period    | 1020.29         | 998.22      | 2.16
75%                | Training period | 737.33          | 713.21      | 3.27
                   | Testing period  | 279.42          | 285.01      | 2.00
                   | Total period    | 1016.75         | 998.22      | 1.82
80%                | Training period | 754.86          | 772.42      | 2.33
                   | Testing period  | 229.17          | 225.80      | 1.47
                   | Total period    | 984.03          | 998.22      | 1.44
85%                | Training period | 812.06          | 799.14      | 1.59
                   | Testing period  | 197.38          | 199.08      | 0.86
                   | Total period    | 1009.44         | 998.22      | 1.11