Article

Electrical Energy Demand Forecasting Model Development and Evaluation with Maximum Overlap Discrete Wavelet Transform-Online Sequential Extreme Learning Machines Algorithms

1 School of Sciences, Institute of Life Sciences and the Environment, University of Southern Queensland, Toowoomba, QLD 4350, Australia
2 Management Technical College, Southern Technical University, Basrah 61001, Iraq
* Author to whom correspondence should be addressed.
Energies 2020, 13(9), 2307; https://doi.org/10.3390/en13092307
Submission received: 7 March 2020 / Revised: 29 April 2020 / Accepted: 2 May 2020 / Published: 6 May 2020
(This article belongs to the Special Issue Modelling and Simulation of Smart Energy Management Systems)

Abstract:
To support regional electricity markets, accurate and reliable energy demand (G) forecast models are vital stratagems for stakeholders in this sector. An online sequential extreme learning machine (OS-ELM) model integrated with a maximum overlap discrete wavelet transform (MODWT) algorithm was developed using daily G data obtained from three regional campuses (i.e., Toowoomba, Ipswich, and Springfield) at the University of Southern Queensland, Australia. In training the objective and benchmark models, the partial autocorrelation function (PACF) was first employed to select the most significant lagged input variables that captured historical fluctuations in the G time-series data. To address the challenges of non-stationarities associated with the model development datasets, a MODWT technique was adopted to decompose the potential model inputs into their wavelet and scaling coefficients before executing the OS-ELM model. The MODWT-PACF-OS-ELM (MPOE) performance was tested and compared with the non-wavelet equivalent based on the PACF-OS-ELM (POE) model using a range of statistical metrics, including, but not limited to, the mean absolute percentage error (MAPE%). For all of the three datasets, a significantly greater accuracy was achieved with the MPOE model relative to the POE model resulting in an MAPE = 4.31% vs. MAPE = 11.31%, respectively, for the case of the Toowoomba dataset, and a similarly high performance for the other two campuses. Therefore, considering the high efficacy of the proposed methodology, the study claims that the OS-ELM model performance can be improved quite significantly by integrating the model with the MODWT algorithm.

1. Introduction

To promote the application of appropriate strategic measures and provide accurate scheduling of electrical power in energy security platforms, a forecasting model that can reliably and precisely forecast electricity demand (G) is required. Owing to shifts in consumer energy usage, G data records exhibit large fluctuations that limit the accuracy of traditional machine learning models, for example, the artificial neural network (ANN) [1], multivariate adaptive regression spline (MARS) [1,2,3], support vector regression (SVR) [2,3], M5 model tree [3], online sequential extreme learning machine (OS-ELM) [4], and multiple linear regression (MLR) [1]. To address this issue, a data pre-processing method, implemented before running the model, is required when such data are unsteady, stochastic, or chaotic, as is common for real-life variables.
Wavelet transformation (WT) algorithm, a popular data pre-processing technique that has been widely adopted in the field of energy forecasting (e.g., [5,6,7,8,9,10]), has been largely explored to decompose the model input datasets through high and low-pass filters. By applying WT, a more coherent structure of the complex time-series can be supplied and fed to a machine learning model to significantly improve the forecast accuracy. Additionally, energy modelers can potentially address issues of non-stationary input data using the WT algorithm, thereby, assisting the model to be more responsive to the input variables’ stochastic behaviors [5]. Wavelet transformation can also provide the relevant information regarding the time-series decomposition process, including the provision of patterns of energy usage within the time and frequency domains, thereby increasing a forecasting model’s capacity to capture such valuable information at different levels of resolution [7,11]. Because of the detailed information that is produced by WT to convert the data from time domain to frequency domain, a machine learning model can work more intelligently to forecast electricity demand data. The fact that several recent studies have applied the WT algorithm to improve global forecasting accuracies in a range of parameters in several fields, for example, rainfall [11], price of electricity [7,8,10], solar radiation [5,6,9], synthetic hydrological time-series [12], flood levels [13,14], water demand [15], electricity demand [16] and streamflow [17], demonstrates the ability of the WT technique to significantly enhance forecasting accuracy.
Although several research studies (e.g., [5,6,7,8,17,18]) have implemented WT for different data forecasting purposes, a recent study in hydrological and water resources forecasting has shown that these studies may have applied WT incorrectly in the data decomposition step. In doing so, they have generated models that should not be employed for real-world forecasting problems because their accuracy is potentially falsely represented [19]. This issue can arise because: (i) future data are drawn upon when the WT uses some data from the testing period to calculate the wavelet and scaling coefficients for the training data; (ii) decomposition levels and wavelet filters are incorrectly selected; and (iii) the training/validation/testing data are split in an inappropriate manner [19]. It is important to note that these three problems have not been addressed by current studies in the field of energy forecasting when they apply WT. While some other studies [5,6,17,20,21] have tried to address these issues by applying different forms of WT multiresolution analysis (MRA), for example, discrete wavelet transform (DWT)-MRA or maximal overlap discrete wavelet transform (MODWT)-MRA, separately to the training, validation, and testing data, these approaches require the full time-series to calculate the detail and approximation coefficients, leaving some of the previously mentioned issues unresolved [19]. Therefore, these studies, too, have failed to apply WT in a manner suited to real-world forecasting problems. Consequently, as only one study has correctly applied MODWT without added MRA to forecast hydrological and water resources variables, additional studies are needed to explore the impact of MODWT and address the drawbacks cited above in the energy forecasting sector.
In the present study, an OS-ELM model, a fast, reliable, and accurate machine learning tool that can offer better generalization performance than other algorithms in a range of forecasting applications (e.g., regression, classification, or time-series) and that, unlike the basic extreme learning machine (ELM), can learn data one by one [22,23], was coupled for the first time with the correctly applied MODWT technique in an effort to forecast G data. In the OS-ELM, the input weights are selected randomly while the output weights are determined analytically [22]. First, the study selected the significant model input variables using a partial autocorrelation function (PACF) to construct a PACF-OS-ELM (POE) model. The novel MODWT-PACF-OS-ELM (MPOE) model was then built and compared with the standalone (non-wavelet) POE model to investigate the influence of WT on G forecasting, with the MODWT applied separately to each input variable to generate the wavelet and scaling coefficients used to feed the OS-ELM portion of the model. Data from the three regional campuses (Toowoomba, Ipswich, and Springfield) of the University of Southern Queensland (USQ), Australia, were drawn upon to develop and evaluate the accuracy of these techniques in forecasting daily G.
To outline how these goals were achieved, this paper is organized as follows. The theory of the OS-ELM and MODWT algorithms are presented in Section 2. Section 3 describes the study area, data, and methods, while model evaluation criteria, results, and discussions are shown in Section 4. Finally, the study limitations showing future work opportunities, and conclusions are summarized in Section 5 and Section 6, respectively.

2. Theoretical Background

2.1. Online Sequential Extreme Learning Machine Model (OS-ELM)

In this paper, the ELM-based machine learning architecture was employed to design a single-layer feed-forward neural network (SLFN), expressed as [4,24]:
$$y_k = \sum_{i=1}^{M} \rho_i \, f(w_i \cdot x_k + c_i)$$
where $k = 1, 2, \ldots, N$; $M$ is the number of hidden nodes; the $N$ training inputs (lagged variables generated from the partial autocorrelation function (PACF) of the G data, or the MODWT decompositions of those lagged data) are $x_k = \{x_k\}_{k=1}^{N} \in \mathbb{R}^N$, and the outputs (forecasted G) are $y_k = \{y_k\}_{k=1}^{N} \in \mathbb{R}$ in the training period. A high-level flowchart is presented in Figure 1 to show how these input variables were used to feed the OS-ELM model. $f(\cdot)$ is the activation function, $c_i$ denotes the threshold of the $i$th hidden node, the weight vectors that connect the $i$th hidden node with the input and output nodes are $w_i = [w_{i1}, w_{i2}, \ldots, w_{in}]^T$ and $\rho_i = [\rho_{i1}, \rho_{i2}, \ldots, \rho_{im}]^T$, respectively, and the term $w_i \cdot x_k$ refers to the inner product of $w_i$ and $x_k$.
According to [24], Equation (1) can be simplified to the form below:
$$H \rho = Y$$
where
$$H = \begin{bmatrix} f(w_1 \cdot x_1 + c_1) & \cdots & f(w_M \cdot x_1 + c_M) \\ \vdots & \ddots & \vdots \\ f(w_1 \cdot x_N + c_1) & \cdots & f(w_M \cdot x_N + c_M) \end{bmatrix}_{N \times M}$$
is the hidden layer output matrix of the neural network, $\rho = [\rho_1^T, \ldots, \rho_M^T]^T_{M \times m}$, and $Y = [y_1^T, \ldots, y_N^T]^T_{N \times m}$.
The output weights are calculated by applying the least-squares solution of the linear system as follows:
$$\rho = H^{\dagger} Y$$
where $H^{\dagger}$ denotes the Moore–Penrose generalized inverse of $H$.
With the classical ELM model, all N samples of data are used during the learning process, making the model relatively time consuming [4]; however, the OS-ELM model, such as the one developed in the present study, addresses this issue: data are only used once within the two learning stages of initialization and sequential learning [4]. The hidden layer output matrix is designed in the initialization step by allocating the input node ( w i ) and the threshold ( c i ) to a small piece of initial training data, while the second step of sequential learning is then launched on a one-by-one basis to stop reusing training data [4,22,23,25].
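The two-stage learning procedure described above can be sketched in Python (the study itself used MATLAB). The recursive update of the output weights follows the standard OS-ELM formulation; the toy data, the RBF centers and widths, the initialization chunk size, and the small ridge term are illustrative assumptions rather than the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf_hidden(X, centers, widths):
    """Hidden layer output matrix H for RBF activations (cf. Equation (2))."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / widths[None, :])

# toy training data: target is a smooth function of three lagged inputs
X = rng.random((200, 3))
y = X.sum(axis=1, keepdims=True) + 0.01 * rng.standard_normal((200, 1))

M = 20                                    # hidden neuron size
centers = rng.random((M, 3))              # randomly assigned input-side parameters
widths = np.full(M, 1.0)

# --- stage 1: initialization on a small chunk of training data ---
H0 = rbf_hidden(X[:50], centers, widths)
P = np.linalg.inv(H0.T @ H0 + 1e-6 * np.eye(M))   # small ridge term for stability
beta = P @ H0.T @ y[:50]                          # output weights (rho in Equation (3))

# --- stage 2: sequential learning, one sample at a time, data never reused ---
for k in range(50, 200):
    h = rbf_hidden(X[k:k + 1], centers, widths)   # 1 x M hidden-layer row
    Ph = P @ h.T
    P = P - (Ph @ Ph.T) / (1.0 + (h @ Ph).item()) # rank-one update of (H^T H)^-1
    beta = beta + P @ h.T @ (y[k:k + 1] - h @ beta)
```

A forecast for new inputs is then `rbf_hidden(X_new, centers, widths) @ beta`; the sequential update is algebraically equivalent to refitting the (regularized) batch least-squares solution after each new sample, which is what makes the one-by-one learning stage inexpensive.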

2.2. Maximum Overlap Discrete Wavelet Transform (MODWT)

Serving as a pre-processing method, the MODWT algorithm [26] was implemented before running the model to address non-stationarity issues in the time-series datasets by passing the input data through high- and low-pass filters, yielding the MODWT wavelet and scaling coefficients, respectively (Figure 1). These components are defined as follows [6,19,26]:
$$W_{j,i} = \sum_{l=0}^{L_j - 1} h_{j,l} \, X_{(i-l) \bmod N}$$
$$V_{j,i} = \sum_{l=0}^{L_j - 1} g_{j,l} \, X_{(i-l) \bmod N}$$
where $X$ is an input time-series vector with $N$ values; $j = 1, 2, \ldots, J$, where $J$ is the decomposition level at time $i$; $h_{j,l}$ and $g_{j,l}$ are the $j$th-level wavelet ($W_{j,i}$) and scaling ($V_{j,i}$) filters of the MODWT, respectively; and $L_j$ denotes the width of the $j$th-level filters.
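As a concrete illustration of Equations (4) and (5), the level-1 MODWT with the Haar filter can be written directly as a circular filtering operation. The rescaled Haar filters used below ($\tilde{h} = [1/2, -1/2]$, $\tilde{g} = [1/2, 1/2]$) are the standard MODWT versions of the DWT filters divided by $\sqrt{2}$; this minimal numpy sketch covers a single decomposition level only, whereas the study searched filters and levels up to $J$ with dedicated wavelet software.

```python
import numpy as np

def modwt_level1(x, h=(0.5, -0.5), g=(0.5, 0.5)):
    """Level-1 MODWT wavelet (W) and scaling (V) coefficients via circular
    convolution, Equations (4) and (5); defaults are rescaled Haar filters."""
    x = np.asarray(x, float)
    n = len(x)
    W = np.zeros(n)
    V = np.zeros(n)
    for t in range(n):
        for l in range(len(h)):
            W[t] += h[l] * x[(t - l) % n]   # high-pass: wavelet coefficients
            V[t] += g[l] * x[(t - l) % n]   # low-pass: scaling coefficients
    return W, V
```

Unlike the decimated DWT, both output series keep the full length $N$, which is why MODWT coefficients can be aligned one-to-one with the lagged model inputs; for the Haar case the squared norms of $W$ and $V$ sum exactly to that of $x$.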

3. Data and Methods in the Training Period

3.1. Study Area and Data

In the present case study, the ability of the MPOE model to forecast daily electricity demand (G) was tested using electricity use data from three regional campuses (Toowoomba, Ipswich, and Springfield) of the University of Southern Queensland (USQ), Australia. The historical data were provided by the university campus services for the periods of 1 January 2013 to 31 December 2014 for the main feed of the Toowoomba campus, and 1 September 2015 to 31 August 2016 for the main feed and Building A block of the Ipswich and Springfield campuses, respectively. Data were recorded every 15 min (96 times per day) in kilowatts (kW), giving a total of 70,080 values (including 60 zeros) for Toowoomba and 35,136 points each for Ipswich and Springfield, with 30 zeros in the Ipswich record and none in the Springfield record. Zero values were filled in by taking the average of values recorded at the same time of day across the previous month. Daily data were then obtained by summing each set of 96 values, resulting in 730 points (days) for Toowoomba and 366 points (days) each for Ipswich and Springfield.
Descriptive statistics for the daily time-series datasets are given in Table 1, while plots of the series datasets for the three university campuses are shown in Figure 2 to demonstrate the electricity demand values recorded for each day. The current G data clearly showed large fluctuations in G values, resulting in the need to implement wavelet transformation through MODWT to address non-stationary issues.

3.2. Forecast Model Development and Validation

In this study, the proposed MPOE model and its traditional non-WT equivalent, the POE model, were developed in the MATLAB environment running on an Intel i7 processor at 3.60 GHz. The original (non-wavelet) dataset with its statistically significant lagged variables, identified using the partial autocorrelation function (PACF) at the 95% confidence level, was used as the input to develop the classical POE model. Figure 3 illustrates the lags used to build the model; the first two significant lags were selected for the data from all three study sites.
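The lag-selection step can be illustrated with a short numpy sketch: the PACF is estimated with the Durbin–Levinson recursion, and a lag is retained when its PACF magnitude exceeds the approximate 95% confidence bound of $\pm 1.96/\sqrt{N}$. This is a simplified stand-in for the MATLAB tooling used in the study, and the AR(1) series below is synthetic.

```python
import numpy as np

def pacf(x, nlags):
    """Partial autocorrelation function via the Durbin-Levinson recursion."""
    x = np.asarray(x, float) - np.mean(x)
    n = len(x)
    # biased sample autocorrelations r_0 .. r_nlags
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(nlags + 1)])
    r /= r[0]
    out = np.zeros(nlags + 1)
    out[0] = 1.0
    phi = np.zeros(0)                       # AR coefficients of the previous order
    for k in range(1, nlags + 1):
        if k == 1:
            phi_kk = r[1]
            phi = np.array([phi_kk])
        else:
            phi_kk = (r[k] - phi @ r[k - 1:0:-1]) / (1.0 - phi @ r[1:k])
            phi = np.append(phi - phi_kk * phi[::-1], phi_kk)
        out[k] = phi_kk                     # PACF at lag k is the last coefficient
    return out

# synthetic AR(1) series: only lag 1 should be clearly significant
rng = np.random.default_rng(42)
x = np.zeros(5000)
for t in range(1, 5000):
    x[t] = 0.8 * x[t - 1] + rng.standard_normal()

p = pacf(x, 10)
bound = 1.96 / np.sqrt(len(x))              # approximate 95% confidence bound
significant = [k for k in range(1, 11) if abs(p[k]) > bound]
```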
On the other hand, to construct the MPOE model, wavelet transformation through the MODWT was applied to the individual PACF lagged variables, and the wavelet outputs (wavelet and scaling coefficients) were used along with the PACF lagged components as model inputs. The critical task in achieving a robust model with wavelet transformation is to identify the type of wavelet/scaling ($h_{j,l}$/$g_{j,l}$) filter and the decomposition level ($J$). As no single technique for selecting the best filter and level of decomposition can be confirmed in the literature [6,27], a trial-and-error method was employed in the present case. Providing a total of 30 wavelet filters, four widely tested wavelet families (e.g., [5,6,12,17,19]) were used: Daubechies ($db_i$, $i = 1, 2, \ldots, 10$), where $db_1$ is the same as the Haar wavelet (haar); Fejer–Korovkin ($fk_i$, $i = 4, 6, 8, 14, 18, 22$); Coiflets ($coif_i$, $i = 1, 2, \ldots, 5$); and Symlets ($sym_i$, $i = 2, 3, \ldots, 10$). The maximum level of decomposition ($J$) was computed using Equation (6) [6,17,28]:
$$J = \mathrm{int}\left(\log_2 N\right)$$
where $N$ is the number of daily data points in this work, and $\mathrm{int}(\cdot)$ is the function that returns the nearest integer. For example, for the Toowoomba campus data, a value of $J = 9$ was computed, so all possible levels of decomposition ($J = 1, 2, \ldots, 9$) were tested. More details about the MODWT filter and the decomposition level are shown in Table 2. Figure 4 shows the two MODWT wavelet coefficients (WC1 and WC2) and the MODWT scaling coefficient (SC) using lag 1 data from the Toowoomba campus with the best wavelet filter ($fk_8$) and decomposition level (2). For all study datasets, the MODWT results (wavelet and scaling coefficients) with $db_2$ and $db_3$ were found to be the same as those with $sym_2$ and $sym_3$, respectively.
While forecasting models must go through training, validation, and testing datasets, there is no single agreed-upon scenario for data splitting [2,3,5]. Accordingly, these data were divided 70:15:15 into training, validation, and testing subsets (Table 1). Data normalization, a very common practice in machine learning, was applied using Equation (7) to scale values down to the range (0, 1), thereby avoiding large numbers in the predictor values of the datasets [29]. De-normalization was then applied to the predicted data to scale them back to their original range before the models were evaluated.
$$x_{normalized} = \frac{x - x_{minimum}}{x_{maximum} - x_{minimum}}$$
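Equation (7) and the corresponding de-normalization are straightforward; a minimal sketch:

```python
import numpy as np

def normalize(x):
    """Min-max scaling to the (0, 1) range, Equation (7)."""
    xmin, xmax = float(np.min(x)), float(np.max(x))
    return (x - xmin) / (xmax - xmin), xmin, xmax

def denormalize(x_norm, xmin, xmax):
    """Scale predictions back to the original range before evaluation."""
    return x_norm * (xmax - xmin) + xmin
```

In practice, $x_{minimum}$ and $x_{maximum}$ should be taken from the training data alone, so that information from the testing period does not leak into model development, echoing the leakage concerns raised in the Introduction.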
The MATLAB-based OS-ELM function [23] was used to build the OS-ELM models in this paper. The most important step in developing an OS-ELM model is the selection of the activation function ($f(\cdot)$) and the hidden neuron size ($M$; Equation (1)). The radial basis function (RBF) was employed as the activation function, while hidden neuron sizes from 1 to 100 were tested, resulting in 100 POE models for each of the three stations. Additionally, many MPOE models were developed from the combinations of hidden neuron size ($M$), wavelet filters, and decomposition levels ($J$); for example, 100 ($M$) × 30 (wavelet filters) × 9 ($J$) = 27,000 models for the Toowoomba site alone. Table 2 summarizes the details of model development, including the factors tested in the training period.
The model accuracy statistics of Pearson correlation coefficient (r) and root-mean square error (RMSE; kW) were used to assess the performance of the POE and MPOE models in the training and validation periods, and thereby to identify the best wavelet and model parameters (Table 2).
$$r = \frac{\sum_{i=1}^{n}\left(G_i^{obs} - \overline{G^{obs}}\right)\left(G_i^{for} - \overline{G^{for}}\right)}{\sqrt{\sum_{i=1}^{n}\left(G_i^{obs} - \overline{G^{obs}}\right)^2 \cdot \sum_{i=1}^{n}\left(G_i^{for} - \overline{G^{for}}\right)^2}}$$
$$RMSE = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(G_i^{for} - G_i^{obs}\right)^2}$$
where n is the total number of values of forecast (or observed) G, G i f o r and G i o b s are the ith forecasted and observed values, while G f o r ¯ and G o b s ¯ are the means of the forecasted and observed values, respectively, in the training period.
The statistical metrics of r and RMSE indicate the greatest model accuracy when they approach 1 and 0, respectively. For all three sites, the MPOE models outperformed the POE models. For example, for the Toowoomba campus, the MPOE training/validation accuracy statistics were r = 0.96/0.94 and RMSE = 7260.42/8026.74 kW, with $fk_8$, 2, and 90 as the best wavelet filter, decomposition level, and hidden neuron size, respectively. Comparatively, the POE training/validation accuracy was poorer: r = 0.70/0.74 and RMSE = 17,715.42/16,284.72 kW for the best hidden neuron size of 9.

4. Model Evaluation and Results in the Testing Period

4.1. Model Prediction Quality

As the quality of model forecasts of G data cannot be established by a single statistical metric in the testing phase [30], additional measures besides the RMSE (Equation (9)) were used [30,31,32,33,34,35,36,37,38,39]. These included the mean absolute error (MAE), relative root-mean square error (RRMSE%), and mean absolute percentage error (MAPE%):
$$MAE = \frac{1}{n}\sum_{i=1}^{n}\left|G_i^{for} - G_i^{obs}\right|$$
$$RRMSE = 100 \times \frac{\sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(G_i^{for} - G_i^{obs}\right)^2}}{\overline{G^{obs}}}$$
$$MAPE = 100 \times \frac{1}{n}\sum_{i=1}^{n}\left|\frac{G_i^{for} - G_i^{obs}}{G_i^{obs}}\right|$$
The MAE indicates a model approaching perfection as its value approaches 0. The RRMSE and MAPE, both also best when approaching 0, assess model accuracy relative to the range and mean of the forecasted parameter when a clear evaluation cannot be provided by the RMSE or MAE alone [17]. Model performance is considered excellent when RRMSE < 10%, good if 10% < RRMSE < 20%, fair if 20% < RRMSE < 30%, and poor if RRMSE > 30% [9,31,40,41].
Further model accuracy indexes include Willmott's Index (WI), the Nash–Sutcliffe model efficiency coefficient (ENS), and Legates and McCabe's Index (LM), given below, where values closest to 1 indicate the best performance.
$$WI = 1 - \left[\frac{\sum_{i=1}^{n}\left(G_i^{for} - G_i^{obs}\right)^2}{\sum_{i=1}^{n}\left(\left|G_i^{for} - \overline{G^{obs}}\right| + \left|G_i^{obs} - \overline{G^{obs}}\right|\right)^2}\right], \quad 0 \le WI \le 1$$
$$E_{NS} = 1 - \left[\frac{\sum_{i=1}^{n}\left(G_i^{for} - G_i^{obs}\right)^2}{\sum_{i=1}^{n}\left(G_i^{obs} - \overline{G^{obs}}\right)^2}\right], \quad E_{NS} \le 1$$
$$LM = 1 - \left[\frac{\sum_{i=1}^{n}\left|G_i^{obs} - G_i^{for}\right|}{\sum_{i=1}^{n}\left|G_i^{obs} - \overline{G^{obs}}\right|}\right], \quad LM \le 1$$
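Equations (9)–(15) translate directly into code. The following numpy sketch computes the full set of test-phase metrics (the function name and dictionary layout are illustrative; MAPE assumes the observed series contains no zeros):

```python
import numpy as np

def forecast_metrics(obs, fc):
    """RMSE, MAE, RRMSE%, MAPE%, WI, ENS and LM for observed vs. forecasted G."""
    obs = np.asarray(obs, float)
    fc = np.asarray(fc, float)
    err = fc - obs
    dev = obs - obs.mean()
    rmse = np.sqrt(np.mean(err ** 2))
    return {
        "RMSE": rmse,
        "MAE": np.mean(np.abs(err)),
        "RRMSE": 100.0 * rmse / obs.mean(),
        "MAPE": 100.0 * np.mean(np.abs(err / obs)),   # observed values must be nonzero
        "WI": 1.0 - np.sum(err ** 2)
              / np.sum((np.abs(fc - obs.mean()) + np.abs(dev)) ** 2),
        "ENS": 1.0 - np.sum(err ** 2) / np.sum(dev ** 2),
        "LM": 1.0 - np.sum(np.abs(err)) / np.sum(np.abs(dev)),
    }
```

A perfect forecast gives RMSE = MAE = RRMSE = MAPE = 0 and WI = ENS = LM = 1; imperfect forecasts move each metric away from its ideal value.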
Two statistical tests were also used to establish whether the MPOE model performs better than the POE model: the Wilcoxon signed-rank test [42,43,44] and the t-test [2]. Both were performed at the 0.05 significance level with a two-tailed hypothesis.
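The paired t-statistic on the two models' |FE| series can be computed as below; for the Wilcoxon signed-rank test, a library routine (e.g., scipy.stats.wilcoxon) would normally be used rather than a hand-rolled version. The |FE| samples here are synthetic stand-ins, not the study's errors.

```python
import numpy as np

def paired_t(a, b):
    """Paired (dependent-samples) t statistic and its degrees of freedom."""
    d = np.asarray(a, float) - np.asarray(b, float)
    n = len(d)
    t = d.mean() / (d.std(ddof=1) / np.sqrt(n))
    return t, n - 1

# illustrative |FE| samples: one model's errors systematically smaller
rng = np.random.default_rng(7)
fe_mpoe = np.abs(rng.normal(1.0, 0.3, 100))   # stand-in for MPOE |FE|
fe_poe = np.abs(rng.normal(2.0, 0.3, 100))    # stand-in for POE |FE|
t_stat, dof = paired_t(fe_mpoe, fe_poe)
```

The statistic is then compared against the two-tailed critical value of Student's t with dof degrees of freedom at the 0.05 level (about ±1.98 for dof = 99).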

4.2. Results and Discussion

Using a plethora of model accuracy statistics (i.e., Equations (9)–(15)), the capability and accuracy of the MPOE model in forecasting daily electricity demand (G) were evaluated and compared to those of the traditional POE model, both drawing on the testing datasets obtained from the three USQ campuses (Toowoomba, Ipswich, and Springfield). While several models were developed and evaluated in this study, only the results from the optimum models are shown in Table 3. For all three stations' datasets, the MPOE model showed close to 50% lower values of RMSE, MAE, RRMSE, and MAPE, and nearly 50% greater values of $E_{NS}$ and LM, than the POE model. For example, for the Toowoomba campus dataset, the MPOE model (MAPE = 4.31%, LM = 0.74) clearly outperformed the POE model (MAPE = 11.31%, LM = 0.39). Moreover, the MPOE model yielded better WI values (0.98, 0.98, and 0.95) than the POE model (0.76, 0.75, and 0.67) for the Toowoomba, Ipswich, and Springfield study areas, respectively. This comparison (Table 3) demonstrates that the MPOE model yielded a better performance than the non-wavelet POE model.
Additionally, to confirm the superiority of the proposed approach and support the results introduced in Table 3, the Wilcoxon signed-rank test and t-test results are presented in Table 4 for the forecast error statistic $|FE| = |G_i^{for} - G_i^{obs}|$ generated by the MPOE model against the $|FE|$ generated by the POE model. At the 0.05 significance level with a two-tailed hypothesis, both tests returned significant results (p < 0.05). These results clearly indicate that the improvement of the MPOE model over the POE model is statistically significant.
To further examine the success of the MPOE model over the POE model for G forecasting in the testing period, the observed and forecasted values were plotted as time-series (Figure 5) and as scatterplots of observed (abscissa) against forecasted (ordinate) values for each model (Figure 6), and the models' absolute forecast errors |FE| were compared (Figure 7). The time-series plots in Figure 5 show that the wavelet models achieved greater accuracy (closer to the observed values) than the wavelet-free models for all sites.
The scatterplots (Figure 6) show the coefficient of determination ($R^2$) and the linear regression line ($G^{for} = a G^{obs} + b$, where $a$ is the slope and $b$ is the intercept) between the observed and forecasted values. Greater $R^2$ values, with $a$ closer to 1 and $b$ closer to 0, showed that the MPOE model outperformed the POE model on each of the campus datasets. For the Ipswich campus dataset, the MPOE model yielded $R^2 = 0.93$, $a = 0.99$, and $b = 993.15$, in contrast to $R^2 = 0.42$, $a = 0.47$, and $b = 24{,}601$ for the POE model.
Using boxplots (Figure 7) and all station datasets, the MPOE and POE models were compared based on their 25%, 50%, and 75% quartiles (lower, middle, and upper lines of the box) as well as the maximum and minimum values of the forecast error statistic $|FE| = |G_i^{for} - G_i^{obs}|$. The statistical error criteria generated by the MPOE model were significantly lower than those of the POE model.
Overall, from the results in Table 3 and Table 4, as well as Figure 5, Figure 6 and Figure 7, we can conclude that the MPOE model achieved better forecasting performance than the POE model, generating lower values of MAE, MAPE%, RMSE, and RRMSE% (Table 3), larger values of WI, ENS, and LM (Table 3), significant p values (Table 4), forecasted values closer to the observed values (Figure 5), better values of $R^2$, $a$, and $b$ (Figure 6), and lower |FE| (Figure 7). The reason is that the MODWT successfully addressed the non-stationarity issues in the time-series datasets before the OS-ELM model was run, thereby enhancing the forecasting accuracy.

5. Challenges and Future Work

While this study was the first to apply the most suitable wavelet transform to energy forecasting datasets, thereby achieving a high-performance MPOE model, some limitations should be addressed in future work, in particular the incorporation of external datasets, such as climate variables, which can be downloaded from different sources (e.g., SILO [45], the European Centre for Medium-Range Weather Forecasts (ECMWF) and global numerical weather prediction models [1,46], and NASA's Moderate Resolution Imaging Spectroradiometer (MODIS) [47,48,49,50]). These can be decomposed together with the lagged values using the MODWT. The OS-ELM model would then be fed with the wavelet and scaling components resulting from the climate and lagged variables to develop a very high-dimensional model. Although this study used a reasonable amount of data (730 days and 366 days), a larger power grid model should be built and evaluated using larger electricity demand (G) datasets to support national electricity markets. This could be achieved by testing the proposed method on a larger study area or by incorporating new datasets from the University of Southern Queensland (the study area) when these data become available. However, given the large number of input variables that would be generated by the MODWT, a method to select and narrow down the best input variables, or a very fast model, would be necessary to speed up the development step. Accordingly, different pre-processing techniques (e.g., iterative input selection (IIS) [51], the grouping genetic algorithm (GGA) [52], or coral reef optimization (CRO) [53,54]), along with a fast forecasting method (e.g., a deep learning strategy or a long short-term memory network [55]), should be integrated with the MODWT to improve G data forecasting accuracy.

6. Conclusions

This study developed a new energy forecasting model by integrating wavelet transformation based on the MODWT with the PACF-OS-ELM model to improve the forecasting accuracy of electricity demand (G) data, using datasets from three regional campuses (Toowoomba, Ipswich, and Springfield) of the University of Southern Queensland (USQ). The MPOE model's testing-phase forecast accuracy was then evaluated and compared to that of its classical non-wavelet equivalent (i.e., the POE model) using several statistical criteria, including the correlation coefficient (r), root-mean square error (RMSE), mean absolute error (MAE), relative root-mean square error (RRMSE%), mean absolute percentage error (MAPE%), Willmott's Index (WI), Nash–Sutcliffe efficiency coefficient (ENS), and Legates and McCabe's Index (LM), as well as two statistical tests, the Wilcoxon signed-rank test and the t-test. The MPOE model outperformed the POE model for all campus datasets.
Although the MPOE model achieved better accuracy than the basic POE model, future work is needed to address some limitations associated with the data and methods used here. External datasets, such as climate variables, together with a pre-processing technique to select the best inputs from those variables, such as IIS, could be employed to further reduce forecasting errors.
In summary, the MPOE model can supply accurate and reliable G forecasts and can therefore help regional electricity markets improve their systems by supporting more precise decisions. Future work addressing the challenges above could further improve model performance.

Author Contributions

Conceptualization, M.S.A.-M.; methodology, M.S.A.-M.; software, M.S.A.-M.; validation, M.S.A.-M.; formal analysis, M.S.A.-M.; investigation, M.S.A.-M.; resources, M.S.A.-M.; data curation, M.S.A.-M.; writing—original draft preparation, writing—review and editing, M.S.A.-M., R.C.D. and Y.L.; visualization, M.S.A.-M.; supervision, R.C.D. and Y.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

We thank Alicia Logan (Environmental Manager, campus services, University of Southern Queensland, Australia) for providing all the required data for this study. Our thanks also go to the Ministry of Higher Education and Scientific Research in the Government of Iraq for funding the first author’s PhD project and the University of Southern Queensland for providing access to various academic services. Special thanks to Professor Jan Adamowski (Department of Bioresource Engineering, Faculty of Agricultural, and Environmental Science, McGill University, Canada) and Assistant Professor John Quilty (Department of Civil and Environmental Engineering, University of Waterloo, Canada) for advice on the wavelet technique. Finally, we also would like to thank Professor Jan Adamowski again and Dr Barbara Harmes (Language Centre, Open Access College, University of Southern Queensland, Australia) for providing help in proof-reading this paper.

Conflicts of Interest

The authors declare no conflict of interest.

Acronyms

$c_i$: threshold of the $i$th hidden node
$coif_i$: Coiflets wavelet filter
$db_i$: Daubechies wavelet filter
$f(\cdot)$: SLFN activation function
$fk_i$: Fejer–Korovkin wavelet filter
$g_{j,l}$: $j$th-level scaling filter
$h_{j,l}$: $j$th-level wavelet filter
$\mathrm{int}(\cdot)$: nearest integer function
$k$: index of training samples (SLFN)
kW: kilowatts
r: Pearson's correlation coefficient
$sym_i$: Symlets wavelet filter
$w_i$: weight vector linking the $i$th hidden node with the input nodes (SLFN)
ANN: artificial neural network
CRO: coral reef optimization
DWT: discrete wavelet transform
DWT-MRA: discrete wavelet transform multiresolution analysis
ECMWF: European Centre for Medium-Range Weather Forecasts
ELM: extreme learning machine
ENS: Nash–Sutcliffe model efficiency coefficient
|FE|: absolute forecast error statistic
G: electricity demand (kW)
$G_i^{for}$: $i$th forecasted value of G (kW)
$\overline{G^{for}}$: mean of forecasted G values (kW)
POE: PACF-OS-ELM
$R^2$: coefficient of determination
RMSE: root-mean square error
RRMSE: relative root-mean square error, %
SC: scaling coefficient (MODWT)
SLFN: single-layer feed-forward neural network
$G_i^{obs}$: $i$th observed value of G (kW)
$\overline{G^{obs}}$: mean of observed G values (kW)
GGA: grouping genetic algorithm
$H$: SLFN hidden layer output matrix
$H^{\dagger}$: Moore–Penrose generalized inverse of $H$
IIS: iterative input selection
$J$: decomposition level
$L_j$: width of the $j$th-level filters
LM: Legates and McCabe's Index
$M$: hidden neuron size
MAE: mean absolute error
MAPE: mean absolute percentage error, %
MARS: multivariate adaptive regression spline
MLR: multiple linear regression
MODIS: Moderate Resolution Imaging Spectroradiometer (NASA)
MODWT: maximum overlap discrete wavelet transform
MODWT-MRA: maximum overlap discrete wavelet transform multiresolution analysis
MPOE: MODWT-PACF-OS-ELM
MRA: multiresolution analysis
$N$: number of values in a data series
OS-ELM: online sequential extreme learning machine
PACF: partial autocorrelation function
SVR: support vector regression
$V_{j,i}$: MODWT scaling coefficients
$W_{j,i}$: MODWT wavelet coefficients
WC1, WC2: MODWT wavelet coefficients
WI: Willmott's Index
WT: wavelet transform
$\rho_i$: weight vector linking the $i$th hidden node with the output nodes (SLFN)

References

  1. Al-Musaylh, M.S.; Deo, R.C.; Adamowski, J.F.; Li, Y. Short-term electricity demand forecasting using machine learning methods enriched with ground-based climate and ECMWF Reanalysis atmospheric predictors in southeast Queensland, Australia. Renew. Sustain. Energy Rev. 2019, 113, 109293.
  2. Al-Musaylh, M.S.; Deo, R.C.; Adamowski, J.; Li, Y. Short-term electricity demand forecasting with MARS, SVR and ARIMA models using aggregated demand data in Queensland, Australia. Adv. Eng. Informatics 2018, 35, 1–16. [Google Scholar] [CrossRef]
  3. Al-Musaylh, M.S.; Deo, R.C.; Li, Y.; Adamowski, J.F. Two-phase particle swarm optimized-support vector regression hybrid model integrated with improved empirical mode decomposition with adaptive noise for multiple-horizon electricity demand forecasting. Appl. Energy 2018, 217, 422–439. [Google Scholar] [CrossRef]
  4. Ali, M.; Deo, R.C.; Downs, N.J.; Maraseni, T. Multi-stage hybridized online sequential extreme learning machine integrated with Markov Chain Monte Carlo copula-Bat algorithm for rainfall forecasting. Atmospheric Res. 2018, 213, 450–464. [Google Scholar] [CrossRef]
  5. Deo, R.C.; Wen, X.; Qi, F. A wavelet-coupled support vector machine model for forecasting global incident solar radiation using limited meteorological dataset. Appl. Energy 2016, 168, 568–593. [Google Scholar] [CrossRef]
  6. Ghimire, S.; Deo, R.C.; Raj, N.; Mi, J. Wavelet-based 3-phase hybrid SVR model trained with satellite-derived predictors, particle swarm optimization and maximum overlap discrete wavelet transform for solar radiation prediction. Renew. Sustain. Energy Rev. 2019, 113, 109247. [Google Scholar] [CrossRef]
  7. Areekul, P.; Senjyu, T.; Urasaki, N.; Yona, A. Neural-wavelet Approach for Short Term Price Forecasting in Deregulated Power Market. J. Int. Counc. Electr. Eng. 2011, 1, 331–338. [Google Scholar] [CrossRef] [Green Version]
  8. Conejo, A.J.; Plazas, M.; Espinola, R.; Molina, A.B. Day-Ahead Electricity Price Forecasting Using the Wavelet Transform and ARIMA Models. IEEE Trans. Power Syst. 2005, 20, 1035–1042. [Google Scholar] [CrossRef]
  9. Mohammadi, K.; Shamshirband, S.; Tong, C.W.; Arif, M.; Petković, D.; Chong, W.T. A new hybrid support vector machine–wavelet transform approach for estimation of horizontal global solar radiation. Energy Convers. Manag. 2015, 92, 162–171. [Google Scholar] [CrossRef]
  10. Tan, Z.; Zhang, Y.-J.; Wang, J.; Xu, J. Day-ahead electricity price forecasting using wavelet transform combined with ARIMA and GARCH models. Appl. Energy 2010, 87, 3606–3610. [Google Scholar] [CrossRef]
  11. Feng, Q.; Wen, X.; Li, J. Wavelet Analysis-Support Vector Machine Coupled Models for Monthly Rainfall Forecasting in Arid Regions. Water Resour. Manag. 2014, 29, 1049–1065. [Google Scholar] [CrossRef]
  12. Maheswaran, R.; Khosa, R. Comparative study of different wavelets for hydrologic forecasting. Comput. Geosci. 2012, 46, 284–295. [Google Scholar] [CrossRef]
  13. SehgaliD, V.; Sahay, R.R.; Chatterjee, C. Effect of Utilization of Discrete Wavelet Components on Flood Forecasting Performance of Wavelet Based ANFIS Models. Water Resour. Manag. 2014, 28, 1733–1749. [Google Scholar] [CrossRef]
  14. Tiwari, M.K.; Chatterjee, C. Development of an accurate and reliable hourly flood forecasting model using wavelet–bootstrap–ANN (WBANN) hybrid approach. J. Hydrol. 2010, 394, 458–470. [Google Scholar] [CrossRef]
  15. Tiwari, M.K.; Adamowski, J. Urban water demand forecasting and uncertainty assessment using ensemble wavelet-bootstrap-neural network models. Water Resour. Res. 2013, 49, 6486–6507. [Google Scholar] [CrossRef]
  16. Zhang, B.-L.; Dong, Z.-Y. An adaptive neural-wavelet model for short term load forecasting. Electr. Power Syst. Res. 2001, 59, 121–129. [Google Scholar] [CrossRef]
  17. Prasad, R.; Deo, R.C.; Li, Y.; Maraseni, T. Input selection and performance optimization of ANN-based streamflow forecasts in the drought-prone Murray Darling Basin region using IIS and MODWT algorithm. Atmospheric Res. 2017, 197, 42–63. [Google Scholar] [CrossRef]
  18. Dghais, A.A.A.; Ismail, M.T. A comparative study between discrete wavelet transform and maximal overlap discrete wavelet transform for testing stationarity. Int. J. Math. Comput. Phys. Electr. Comput. Eng. 2013, 7, 1677–1681. [Google Scholar]
  19. Quilty, J.; Adamowski, J. Addressing the incorrect usage of wavelet-based hydrological and water resources forecasting models for real-world applications with best practices and a new forecasting framework. J. Hydrol. 2018, 563, 336–353. [Google Scholar] [CrossRef]
  20. Barzegar, R.; Moghaddam, A.A.; Adamowski, J.; Ozga-Zielinski, B. Multi-step water quality forecasting using a boosting ensemble multi-wavelet extreme learning machine model. Stoch. Environ. Res. Risk Assess. 2017, 32, 799–813. [Google Scholar] [CrossRef]
  21. Barzegar, R.; Fijani, E.; Moghaddam, A.A.; Tziritis, E. Forecasting of groundwater level fluctuations using ensemble hybrid multi-wavelet neural network-based models. Sci. Total. Environ. 2017, 599, 20–31. [Google Scholar] [CrossRef]
  22. Lan, Y.; Soh, Y.C.; Huang, G.-B. Ensemble of online sequential extreme learning machine. Neurocomputing 2009, 72, 3391–3395. [Google Scholar] [CrossRef]
  23. Liang, N.-Y.; Huang, G.-B.; Saratchandran, P.; Sundararajan, N. A Fast and Accurate Online Sequential Learning Algorithm for Feedforward Networks. IEEE Trans. Neural Networks 2006, 17, 1411–1423. [Google Scholar] [CrossRef]
  24. Huang, G.-B.; Zhu, Q.-Y.; Siew, C.-K. Extreme learning machine: Theory and applications. Neurocomputing 2006, 70, 489–501. [Google Scholar] [CrossRef]
  25. Yadav, B.; Ch, S.; Mathur, S.; Adamowski, J. Discharge forecasting using an Online Sequential Extreme Learning Machine (OS-ELM) model: A case study in Neckar River, Germany. Measurement 2016, 92, 433–445. [Google Scholar] [CrossRef]
  26. Percival, D.B.; Walden, A.T. Wavelet Methods for Time Series Analysis; Cambridge University Press: Cambridge, UK, 2000; p. 4. [Google Scholar]
  27. Rathinasamy, M.; Adamowski, J.; Khosa, R. Multiscale streamflow forecasting using a new Bayesian Model Average based ensemble multi-wavelet Volterra nonlinear method. J. Hydrol. 2013, 507, 186–200. [Google Scholar] [CrossRef]
  28. Seo, Y.; Choi, Y.; Choi, J. River Stage Modeling by Combining Maximal Overlap Discrete Wavelet Transform, Support Vector Machines and Genetic Algorithm. Water 2017, 9, 525. [Google Scholar] [CrossRef] [Green Version]
  29. Hsu, C.-W.; Chang, C.-C.; Lin, C.-J. A Practical Guide to Support Vector Classification; Technical Report. Department of Computer Science and Information Engineering, University of National Taiwan: Taipei, Taiwan, 2003; pp. 1–12. Available online: https://www.csie.ntu.edu.tw/~cjlin/papers/guide/guide.pdf (accessed on 15 January 2020).
  30. Chai, T.; Draxler, R. Root mean square error (RMSE) or mean absolute error (MAE)? – Arguments against avoiding RMSE in the literature. Geosci. Model Dev. 2014, 7, 1247–1250. [Google Scholar] [CrossRef] [Green Version]
  31. Mohammadi, K.; Shamshirband, S.; Anisi, M.H.; Alam, K.A.; Petković, D. Support vector regression based prediction of global solar radiation on a horizontal surface. Energy Convers. Manag. 2015, 91, 433–441. [Google Scholar] [CrossRef]
  32. Willmott, C.J.; Robeson, S.M.; Matsuura, K. A refined index of model performance. Int. J. Clim. 2011, 32, 2088–2094. [Google Scholar] [CrossRef]
  33. Willmott, C.J. On the Evaluation of Model Performance in Physical Geography; Springer Science and Business Media LLC: Berlin/Heidelberg, Germany, 1984; pp. 443–460. [Google Scholar]
  34. Willmott, C.J. Some comments on the evaluation of model performance. Bull. Am. Meteorol. Soc. 1982, 63, 1309–1313. [Google Scholar] [CrossRef] [Green Version]
  35. Willmott, C.J. On the validation of models. Phys. Geogr. 1981, 2, 184–194. [Google Scholar] [CrossRef]
  36. Dawson, C.; Abrahart, R.; See, L. HydroTest: A web-based toolbox of evaluation metrics for the standardised assessment of hydrological forecasts. Environ. Model. Softw. 2007, 22, 1034–1052. [Google Scholar] [CrossRef] [Green Version]
  37. LeGates, D.R.; McCabe, G.J. Evaluating the use of “goodness-of-fit” Measures in hydrologic and hydroclimatic model validation. Water Resour. Res. 1999, 35, 233–241. [Google Scholar] [CrossRef]
  38. Krause, P.; Boyle, D.P.; Bäse, F. Comparison of different efficiency criteria for hydrological model assessment. Adv. Geosci. 2005, 5, 89–97. [Google Scholar] [CrossRef] [Green Version]
  39. Nash, J.E.; Sutcliffe, J.V. River flow forecasting through conceptual models part I—A discussion of principles. J. Hydrol. 1970, 10, 282–290. [Google Scholar]
  40. Li, M.-F.; Tang, X.-P.; Wu, W.; Liu, H. General models for estimating daily global solar radiation for different solar radiation zones in mainland China. Energy Convers. Manag. 2013, 70, 139–148. [Google Scholar] [CrossRef]
  41. Heinemann, A.B.; Van Oort, P.; Fernandes, D.S.; Maia, A.D.H.N. Sensitivity of APSIM/ORYZA model due to estimation errors in solar radiation. Bragantia 2012, 71, 572–582. [Google Scholar] [CrossRef] [Green Version]
  42. Zhang, Z.; Hong, W.-C.; Li, J. Electric Load Forecasting by Hybrid Self-Recurrent Support Vector Regression Model With Variational Mode Decomposition and Improved Cuckoo Search Algorithm. IEEE Access 2020, 8, 14642–14658. [Google Scholar] [CrossRef]
  43. Dong, Y.; Zhang, Z.; Hong, W.-C. A Hybrid Seasonal Mechanism with a Chaotic Cuckoo Search Algorithm with a Support Vector Regression Model for Electric Load Forecasting. Energies 2018, 11, 1009. [Google Scholar] [CrossRef] [Green Version]
  44. Hong, W.-C.; Dong, Y.; Lai, C.-Y.; Chen, L.-Y.; Wei, S.-Y. SVR with Hybrid Chaotic Immune Algorithm for Seasonal Load Demand Forecasting. Energies 2011, 4, 960–977. [Google Scholar] [CrossRef] [Green Version]
  45. Jeffrey, S.J.; Carter, J.O.; Moodie, K.B.; Beswick, A.R. Using spatial interpolation to construct a comprehensive archive of Australian climate data. Environ. Model. Softw. 2001, 16, 309–330. [Google Scholar] [CrossRef]
  46. Ghimire, S.; Deo, R.C.; Downs, N.J.; Raj, N. Self-adaptive differential evolutionary extreme learning machines for long-term solar radiation prediction with remotely-sensed MODIS satellite and Reanalysis atmospheric products in solar-rich cities. Remote. Sens. Environ. 2018, 212, 176–198. [Google Scholar] [CrossRef]
  47. Deo, R.C.; Şahin, M. Forecasting long-term global solar radiation with an ANN algorithm coupled with satellite-derived (MODIS) land surface temperature (LST) for regional locations in Queensland. Renew. Sustain. Energy Rev. 2017, 72, 828–848. [Google Scholar] [CrossRef]
  48. Wan, Z. MODIS land-surface temperature algorithm theoretical basis document (LST ATBD). Institute for Computational Earth System Science, Santa Barbara. 1999; p. 75. Available online: https://modis.gsfc.nasa.gov/data/atbd/atbd_mod11.pdf (accessed on 10 February 2020).
  49. Wan, Z.; Zhang, Y.; Zhang, Q.; Li, Z.-L. Quality assessment and validation of the MODIS global land surface temperature. Int. J. Remote. Sens. 2004, 25, 261–274. [Google Scholar] [CrossRef]
  50. MODIS. MODIS (Moderate-Resolution Imaging Spectroradiometer). 2018. Available online: https://modis.gsfc.nasa.gov/about/media/modis_brochure.pdf (accessed on 10 February 2020).
  51. Galelli, S.; Castelletti, A. Tree-based iterative input variable selection for hydrological modeling. Water Resour. Res. 2013, 49, 4295–4310. [Google Scholar] [CrossRef]
  52. Bueno, L.C.; Nieto-Borge, J.C.; García-Díaz, P.; Rodríguez, G.; Salcedo-Sanz, S. Significant wave height and energy flux prediction for marine energy applications: A grouping genetic algorithm—Extreme Learning Machine approach. Renew. Energy 2016, 97, 380–389. [Google Scholar] [CrossRef]
  53. Salcedo-Sanz, S.; Pastor-Sánchez, Á.; Prieto, L.; Blanco-Aguilera, A.; García-Herrera, R. Feature selection in wind speed prediction systems based on a hybrid coral reefs optimization—Extreme learning machine approach. Energy Convers. Manag. 2014, 87, 10–18. [Google Scholar] [CrossRef]
  54. Salcedo-Sanz, S.; Casanova-Mateo, C.; Pastor-Sánchez, Á.; Sánchez-Girón, M. Daily global solar radiation prediction based on a hybrid Coral Reefs Optimization—Extreme Learning Machine approach. Sol. Energy 2014, 105, 91–98. [Google Scholar] [CrossRef]
  55. Ghimire, S.; Deo, R.C.; Raj, N.; Mi, J. Deep Learning Neural Networks Trained with MODIS Satellite-Derived Predictors for Long-Term Global Solar Radiation Prediction. Energies 2019, 12, 2407. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Flowchart showing the model development stages.
Figure 2. Daily electricity demand (G, kW) time-series for the three study sites of Toowoomba, Ipswich, and Springfield campuses.
Figure 3. The model input variables constructed from the statistically significant lags (at a 95% confidence interval) of the original daily G-datasets in the training period for the three study sites, based on the correlation coefficient (r) of the predictors (lags) computed using the partial autocorrelation function (PACF).
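The significant-lag screening shown in Figure 3 rests on the sample PACF, with 95% confidence bounds of roughly ±1.96/√N. As an illustration only (not the authors' code), the PACF can be computed from the sample autocorrelations with the Durbin-Levinson recursion:

```python
import numpy as np

def sample_pacf(x, nlags):
    """Sample partial autocorrelations via the Durbin-Levinson recursion."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    n = len(x)
    # Biased sample autocovariances for lags 0..nlags, then autocorrelations
    acov = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(nlags + 1)])
    rho = acov / acov[0]

    pacf = np.zeros(nlags + 1)
    pacf[0] = 1.0
    phi = np.zeros((nlags + 1, nlags + 1))
    for k in range(1, nlags + 1):
        # phi[k, k] is the partial autocorrelation at lag k
        num = rho[k] - np.dot(phi[k - 1, 1:k], rho[1:k][::-1])
        den = 1.0 - np.dot(phi[k - 1, 1:k], rho[1:k])
        phi[k, k] = num / den
        phi[k, 1:k] = phi[k - 1, 1:k] - phi[k, k] * phi[k - 1, 1:k][::-1]
        pacf[k] = phi[k, k]
    return pacf

# Lags whose |PACF| exceeds the 95% bound would be kept as model inputs:
# bound = 1.96 / np.sqrt(len(g_series))
```

For an AR(1) process, this recursion recovers the autoregressive coefficient at lag 1 and near-zero values beyond it, which is the pattern the PACF-based input selection exploits.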
Figure 4. Maximum overlap discrete wavelet transform (MODWT) coefficients constructed using lag 1 of the Toowoomba campus dataset with the optimum wavelet (scaling) filter (fk8) and wavelet level 2. WC (a,b) and SC (c) represent the wavelet and scaling coefficients, respectively.
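The decomposition in Figure 4 uses the MODWT, whose jth-level pyramid step circularly filters the previous level's scaling coefficients with the rescaled filter pair, shifting by 2^(j−1) samples per tap. Below is a minimal NumPy sketch of that pyramid algorithm with the Haar filter pair; the fk8 Fejér-Korovkin filter used in the study would slot in the same way. This is an illustrative implementation, not the authors' code.

```python
import numpy as np

def modwt(x, levels, g=(2 ** -0.5, 2 ** -0.5), h=(2 ** -0.5, -2 ** -0.5)):
    """MODWT pyramid algorithm with circular boundary handling.

    g, h: orthonormal scaling/wavelet filters (Haar by default); the MODWT
    uses the rescaled versions g/sqrt(2), h/sqrt(2). Returns (W, V): the
    wavelet coefficients W[j-1] for levels j = 1..levels and the final
    scaling coefficients V.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    g = np.asarray(g) / np.sqrt(2.0)   # rescaled MODWT filters
    h = np.asarray(h) / np.sqrt(2.0)
    v = x.copy()
    W = []
    for j in range(1, levels + 1):
        step = 2 ** (j - 1)            # effective filter upsampling at level j
        wj = np.zeros(n)
        vj = np.zeros(n)
        for l, (gl, hl) in enumerate(zip(g, h)):
            shifted = np.roll(v, step * l)   # circular shift: V[(t - 2^(j-1) l) mod N]
            wj += hl * shifted
            vj += gl * shifted
        W.append(wj)
        v = vj
    return W, v
```

The MODWT conserves energy, ‖X‖² = Σ_j ‖W_j‖² + ‖V_J‖², which offers a quick correctness check on any implementation.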
Figure 5. Observed vs. forecasted G data in the testing period with the optimal POE and MPOE models for the three study sites.
Figure 6. Scatterplots of the observed and forecasted G data in the testing phase with the optimal models of POE and MPOE. Equations of linear regression and the coefficient of determination are incorporated. (a) Toowoomba Campus. (b) Ipswich campus. (c) Springfield campus.
Figure 7. Boxplots of the absolute forecasted error |FE| in the testing dataset for the three study sites with the optimal models of POE and MPOE.
Table 1. Data splitting and descriptive statistics for the G data from the three University of Southern Queensland campus stations.

| Station | Data Period (dd-mm-yyyy) | 15-Min Data: Total | 15-Min Data: No. Zeros | Daily Data: Total | Training (70%) | Validation (15%) | Testing (15%) | Minimum (kW) | Maximum (kW) | Mean (kW) |
|---|---|---|---|---|---|---|---|---|---|---|
| Toowoomba campus (Main feed) | 01-01-2013 to 31-12-2014 | 70,080 | 60 | 730 | 512 | 109 | 109 | 81,579.70 | 195,037 | 141,328 |
| Ipswich campus (Main feed) | 01-09-2015 to 31-08-2016 | 35,136 | 30 | 366 | 256 | 55 | 55 | 23,336 | 62,378.06 | 43,716.96 |
| Springfield campus (A Block) | 01-09-2015 to 31-08-2016 | 35,136 | 0 | 366 | 256 | 55 | 55 | 521.4 | 13,536.80 | 8310.05 |
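Table 1's partitioning aggregates the raw 15-min meter readings (96 per day) to daily values and then splits the daily series chronologically into 70/15/15% training, validation, and testing subsets. A sketch of that bookkeeping follows; the exact rounding convention for the split sizes, and whether daily values are means or sums of the 15-min data, are assumptions here.

```python
import numpy as np

def daily_and_split(readings_15min, train_frac=0.70, val_frac=0.15):
    """Aggregate 15-min G readings to daily values and split chronologically
    into training/validation/testing subsets (no shuffling, since the data
    are a time series)."""
    readings = np.asarray(readings_15min, dtype=float)
    assert len(readings) % 96 == 0, "expected 96 readings per day"
    daily = readings.reshape(-1, 96).mean(axis=1)   # one value per day

    n = len(daily)
    n_train = int(round(train_frac * n))
    n_val = int(round(val_frac * n))
    train = daily[:n_train]
    val = daily[n_train:n_train + n_val]
    test = daily[n_train + n_val:]
    return train, val, test
```

For a two-year (730-day) record like Toowoomba's, this yields subsets of roughly 511, 110, and 109 days; the paper reports 512/109/109, so the authors' rounding evidently differs slightly.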
Table 2. Optimum model performance and parameters in the training and validation phases based on the correlation coefficient (r) and root-mean-square error (RMSE; kW) for the three stations with the daily forecast horizon. The models in boldface are the optimal (best performing) models.

| Station | Model | No. Hidden Neurons | No. Wavelet/Scaling Filters | No. Wavelet Levels | No. Models Developed | Training r | Training RMSE (kW) | Validation r | Validation RMSE (kW) | Best Wavelet/Scaling Filter | Best Wavelet Level | Best Hidden Neuron Size |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Toowoomba campus (Main feed) | POE | 100 | Non-wavelet model | n/a | 100 | 0.70 | 17,715.42 | 0.74 | 16,284.72 | Non-wavelet model | n/a | 9 |
| | **MPOE** | 100 | 30 | 9 | 27,000 | 0.96 | 7260.42 | 0.94 | 8026.74 | fk8 | 2 | 90 |
| Ipswich campus (Main feed) | POE | 100 | Non-wavelet model | n/a | 100 | 0.68 | 6944.43 | 0.67 | 7543.30 | Non-wavelet model | n/a | 10 |
| | **MPOE** | 100 | 30 | 8 | 24,000 | 0.97 | 2476.19 | 0.90 | 4279.08 | db2/sym2 | 3 | 54 |
| Springfield campus (A Block) | POE | 100 | Non-wavelet model | n/a | 100 | 0.65 | 1036.28 | 0.61 | 1641.08 | Non-wavelet model | n/a | 4 |
| | **MPOE** | 100 | 30 | 8 | 24,000 | 0.95 | 441.64 | 0.89 | 1164.65 | fk14 | 5 | 76 |
Table 3. Optimum model performance in the testing phase for the daily forecast horizon based on Willmott's Index (WI), Nash–Sutcliffe model efficiency coefficient (ENS), root-mean-square error (RMSE; kW), mean absolute error (MAE; kW), mean absolute percentage error (MAPE, %), relative root-mean-square error (RRMSE, %), and Legates and McCabe's Index (LM) for the three stations. The models in boldface are the optimal (best performing) models.

| Station | Model | WI | ENS | RMSE (kW) | MAE (kW) | MAPE (%) | RRMSE (%) | LM |
|---|---|---|---|---|---|---|---|---|
| Toowoomba campus (Main feed) | POE | 0.76 | 0.42 | 18,030.44 | 12,812.32 | 11.31 | 13.58 | 0.39 |
| | **MPOE** | 0.98 | 0.91 | 7267.62 | 5400.99 | 4.31 | 5.47 | 0.74 |
| Ipswich campus (Main feed) | POE | 0.75 | 0.23 | 7564.84 | 4860.76 | 16.29 | 19.29 | 0.36 |
| | **MPOE** | 0.98 | 0.93 | 2337.80 | 1980.87 | 5.46 | 5.96 | 0.74 |
| Springfield campus (A Block) | POE | 0.67 | -0.10 | 1612.92 | 1142.36 | 12.49 | 17.43 | 0.11 |
| | **MPOE** | 0.95 | 0.80 | 692.78 | 540.30 | 5.84 | 7.49 | 0.58 |
Table 4. Wilcoxon signed-rank test and t-test results for the |FE| of the MPOE model vs. the |FE| of the POE model.

| Station | Wilcoxon Signed-Rank Test p-Value | t-Test p-Value |
|---|---|---|
| Toowoomba campus (Main feed) | 0.00001 | 0.00001 |
| Ipswich campus (Main feed) | 0.00076 | 0.00053 |
| Springfield campus (A Block) | 0.00018 | 0.00043 |
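The comparison in Table 4 applies the non-parametric Wilcoxon signed-rank test and a paired t-test to the two models' absolute forecast errors |FE| on the same test days. With SciPy, the paired set-up looks like this; the error arrays below are synthetic stand-ins, not the study's data.

```python
import numpy as np
from scipy import stats

# Hypothetical paired |FE| series (kW) over the same 30 test days:
fe_poe = np.linspace(9000.0, 16000.0, 30)    # larger POE errors
fe_mpoe = np.linspace(4000.0, 6500.0, 30)    # smaller MPOE errors

# Paired tests: each test day contributes one (MPOE, POE) error pair.
w_stat, w_p = stats.wilcoxon(fe_mpoe, fe_poe)
t_stat, t_p = stats.ttest_rel(fe_mpoe, fe_poe)

# Small p-values (as in Table 4) indicate the two error distributions
# differ significantly, i.e., the MPOE improvement is unlikely to be chance.
```

Using both tests guards against distributional assumptions: the t-test assumes roughly normal paired differences, while the Wilcoxon test does not.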

Share and Cite

Al-Musaylh, M.S.; Deo, R.C.; Li, Y. Electrical Energy Demand Forecasting Model Development and Evaluation with Maximum Overlap Discrete Wavelet Transform-Online Sequential Extreme Learning Machines Algorithms. Energies 2020, 13, 2307. https://doi.org/10.3390/en13092307
