Article

Fusion Forecasting Algorithm for Short-Term Load in Power System

Tao Yu 1, Ye Wang 1, Yuchong Zhao 1, Gang Luo 1 and Shihong Yue 2
1 CCCC Tianjin Waterway Bureau Co., Ltd., Tianjin 300202, China
2 School of Electrical Automation and Information Engineering, Tianjin University, Tianjin 300072, China
* Author to whom correspondence should be addressed.
Energies 2024, 17(20), 5173; https://doi.org/10.3390/en17205173
Submission received: 21 August 2024 / Revised: 12 October 2024 / Accepted: 15 October 2024 / Published: 17 October 2024
(This article belongs to the Section F3: Power Electronics)

Abstract

Short-term load forecasting plays an important role in power system scheduling, optimization, and maintenance, but no existing typical method can consistently maintain high prediction accuracy. Hence, fusing different complementary methods has attracted increasing attention. To improve forecasting accuracy and stability, the features that affect the short-term power load are first extracted as prior knowledge, and the advantages and disadvantages of existing methods are analyzed. Then, three typical methods are used for short-term power load forecasting, and their interaction and complementarity are studied. Finally, the Choquet integral (CI) is used to fuse the three existing complementary methods. Different from other fusion methods, the CI can fully utilize the interactions and complementarity among different methods to achieve consistent forecasting results and reduce the disadvantages of a single forecasting method. Essentially, a CI with n inputs is equivalent to n! constrained feedforward neural networks, leading to a strong generalization ability in the load prediction process. Consequently, the CI-based method provides an effective way to fuse forecasts of short-term load in power systems.

1. Introduction

Short-term load forecasting is a crucial task in the power system and serves as a fundamental basis for power load scheduling, optimization, and maintenance [1]. However, power loads are real-time, rapidly changing, and non-storable. Ensuring the stability, safety, and economy of the power system through accurate forecasting has therefore always been a key issue [2].
Current effective forecasting methods fall into two categories [3]: deep learning and traditional machine learning. The deep learning methods that have been applied to short-term load forecasting in power systems include convolutional neural networks, long short-term memory networks, and lightweight networks, among others [4]. Deep learning methods can effectively solve load forecasting problems if there are sufficient representative samples and if load changes exhibit strong repeatability. However, for short-term power system load forecasting, these conditions are often not met, and thus the predicted results may have high uncertainty and produce significant errors [5]. Moreover, their interpretability and operability are often quite restricted. On the other hand, traditional machine learning methods are effective under certain conditions and usage environments. Notably, three commonly used algorithms are Kalman filtering (KF) [6], backpropagation (BP) neural networks [7], and the gray system (GS) model [8], each with its own characteristics and applicable range. These methods have the advantages of clear principles, easy realization, and good interpretability.
KF is a recursive, optimal filtering and forecasting method. It is a highly efficient and useful tool for a wide range of time series forecasting problems with temporal correlations. When KF is used for power system load forecasting, it offers easy implementation, clear principles, and stable forecasting results. However, KF is easily affected by noise and nonlinear variables [9]. For irregular load changes, constructing the state transition matrix and observation matrix in KF is the key and challenging step in practice. Moreover, possible random interference factors and noisy data also limit its forecasting accuracy and stability.
The BP neural network is widely used in traditional machine learning, with numerous open-source implementations and functional modules available in most software packages and computing platforms. Using BP for short-term load forecasting can establish causal relationships between various related factors, offer explanations for load measurements, and improve forecasting accuracy under certain conditions. However, BP networks have evident limitations, such as the difficulty of avoiding local optima of the objective function and vanishing gradients during learning [10], which have not been fundamentally solved so far.
The GS is a modeling and approximation method based on gray theory [11]. It is a simple and practical method for modeling, forecasting, and optimization under inconsistent, uncertain, and incomplete information. Currently, many uncertain and random factors affect daily power load changes, and both mechanistic and semi-mechanistic models struggle to accurately describe and quantitatively calculate these factors. In contrast, the GS method offers advantages such as simple calculations, strong practicality, and high accuracy for power forecasting, particularly when load changes are not rapid. Hence, the GS may be suitable for short-, mid-, and long-term power load forecasting. However, GS-based forecasting is less effective when samples are complex. Especially for time series with long-term and persistent load changes, the forecasting results may be inaccurate.
In summary, the above studies partially or entirely use meteorological information, daily patterns, and historical loads as the key factors of feature selection in power load forecasting [12]. Their input features are not processed according to the characteristics of the short-term load forecasting task before being fed into the forecasting model for optimization and computation. Therefore, they cannot effectively utilize existing experience and prior knowledge in the learning and forecasting models. Furthermore, most machine learning-based methods predict the power load independently and do not consider integration and time series characteristics, leading to limitations such as reduced applicability and unstable results [13]. Integrating multiple load forecasting methods can therefore provide more accurate and stable short-term forecasting for the power system, which is of practical significance and urgency.
In this study, the three commonly used short-term load forecasting methods, KF, BP, and GS, are used to model and predict short-term loads. After exploring their applicability and complementarity, the Choquet integral (CI) model [14] is proposed as a fusion forecasting method to enhance accuracy and stability. The CI is a nonlinear fusion model that considers the interactions among various methods or information sources. It can automatically identify the advantages and disadvantages of different methods by solving the model parameters, thereby assigning higher importance to more accurate sources and reducing the limitations of single methods that yield unstable or inaccurate results under different conditions. The proposed CI-based method is validated by predicting the annual load data of a district in Tianjin, China, and practical results on the applicability of the new method are obtained.

2. Related Work

2.1. KF Technique

The KF provides the state estimation of any dynamic system through recursive relationships, and is the optimal estimator for non-stationary stochastic processes under the criterion of minimum error covariance [15]. The implementation of KF consists of the following five fundamental equations [16]:

$Y_k = H_k X_k + W_k$  (measurement equation)

$X_{k+1} = A_k X_k + V_k$  (state equation)

where $A_k$ is the one-step state transition matrix; $V_k$ is the dynamic noise of the system at the $k$th moment; $H_k$ is the observation matrix; $X_k$ is an $N$-dimensional state vector; and $W_k$ is the random interference noise at the $k$th moment. The forecast at time $k+1$ is

$\tilde{X}_{k+1} = A_k \hat{X}_k$

where $\hat{X}_k$ is the filtered estimate at time $k$. After obtaining the new measurement $Y_{k+1}$, the forecasting error $(Y_{k+1} - \tilde{Y}_{k+1})$ is multiplied by an appropriate gain as the correction term for the state forecast, and the estimate at time $k+1$ is obtained as

$\hat{X}_{k+1} = A_k \hat{X}_k + K_{k+1}\,(Y_{k+1} - H_{k+1} A_k \hat{X}_k)$

After selecting the gain matrix $K_{k+1}$, the covariance matrix of the estimation error $(X_{k+1} - \hat{X}_{k+1})$ is obtained as

$P_{k+1} = E\{(X_{k+1} - \hat{X}_{k+1})(X_{k+1} - \hat{X}_{k+1})^{T}\}$
The KF forecasting result is achieved by recursively using the above five key equations.
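For illustration, the following Python sketch (not part of the original implementation) performs the one-step-ahead KF prediction and correction described above; the noise covariances Q and R and the initial state x0, P0 are illustrative assumptions that would be set from the data in practice.

```python
import numpy as np

def kalman_forecast(measurements, A, H, Q, R, x0, P0):
    """One-step-ahead KF forecasting: predict with the state equation,
    then correct the estimate with each new measurement."""
    x_est, P = np.asarray(x0, float), np.asarray(P0, float)
    forecasts = []
    for y in measurements:
        # Prediction: X~_{k+1} = A X^_k, with predicted error covariance
        x_pred = A @ x_est
        P_pred = A @ P @ A.T + Q
        forecasts.append(H @ x_pred)                  # predicted measurement Y~_{k+1}
        # Correction: Kalman gain and updated estimate X^_{k+1}
        K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
        x_est = x_pred + K @ (y - H @ x_pred)
        P = (np.eye(len(x_est)) - K @ H) @ P_pred
    return np.array(forecasts)
```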

2.2. BP Neural Network

BP first selects the factors that primarily reflect load changes as the network input X and uses the load forecasting value Y as the output. This forms a pair (X, Y) and further constructs a time series. The BP modeling steps are as follows [17]:
Step 1. Initialize parameters: the number of input layer nodes $n$, the number of hidden layer nodes $l$, the number of output layer nodes $m$, the connection weights $W_{ij}$ (input to hidden layer) and $W_{jk}$ (hidden to output layer), the hidden layer thresholds $a_j$, the output layer thresholds $b_k$, and the learning rate $\eta$.
Step 2. Compute the hidden layer. The output of the hidden layer $H_j$ is calculated as

$H_j = f\left(\sum_{i=1}^{n} W_{ij} x_i - a_j\right), \quad i = 1, 2, \ldots, n; \; j = 1, 2, \ldots, l$

where $f$ is the hidden layer excitation function.
Step 3. Compute the output. It is calculated as

$O_k = \sum_{j=1}^{l} H_j W_{jk} - b_k, \quad k = 1, 2, \ldots, m$

Step 4. Calculate the error. According to $O_k$ and the actual output $Y_k$, the error $e_k$ is calculated as

$e_k = Y_k - O_k, \quad k = 1, 2, \ldots, m$

Step 5. Update the weights. The connection weights between the nodes are updated as

$W_{ij} = W_{ij} + \eta H_j (1 - H_j)\, x_i \sum_{k=1}^{m} W_{jk} e_k, \qquad W_{jk} = W_{jk} + \eta H_j e_k$

where $i = 1, \ldots, n$; $j = 1, \ldots, l$; $k = 1, \ldots, m$.
Step 6. Update the thresholds. The thresholds are updated as

$a_j = a_j + \eta H_j (1 - H_j) \sum_{k=1}^{m} W_{jk} e_k, \quad j = 1, 2, \ldots, l; \qquad b_k = b_k + e_k, \quad k = 1, 2, \ldots, m$
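As an illustration of Steps 1–6, the sketch below trains a single-hidden-layer BP regressor in plain NumPy. The sigmoid excitation, learning rate, and random initialization are assumptions made for the example; the threshold updates follow the chain rule for the sign conventions used in this code.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_bp(X, Y, l=8, eta=0.1, epochs=500, seed=0):
    """Single-hidden-layer BP regression following Steps 1-6: forward pass,
    output error, then weight and threshold updates per training sample."""
    rng = np.random.default_rng(seed)
    n, m = X.shape[1], Y.shape[1]
    Wij = rng.normal(scale=0.5, size=(n, l))   # input -> hidden weights
    Wjk = rng.normal(scale=0.5, size=(l, m))   # hidden -> output weights
    a = np.zeros(l)                            # hidden layer thresholds
    b = np.zeros(m)                            # output layer thresholds
    for _ in range(epochs):
        for x, y in zip(X, Y):
            H = sigmoid(x @ Wij - a)           # hidden output H_j (Step 2)
            O = H @ Wjk - b                    # network output O_k (Step 3)
            e = y - O                          # error e_k (Step 4)
            Wij += eta * np.outer(x, H * (1 - H) * (Wjk @ e))   # Step 5
            Wjk += eta * np.outer(H, e)                          # Step 5
            a -= eta * H * (1 - H) * (Wjk @ e)                   # Step 6 (gradient sign
            b -= eta * e                                         #  for this convention)
    return Wij, Wjk, a, b

def bp_predict(x, Wij, Wjk, a, b):
    """Forward pass with trained parameters."""
    H = sigmoid(x @ Wij - a)
    return H @ Wjk - b
```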

2.3. GS Model

The GS model uses a historical data series to establish a differential equation as

$dX/dt + aX = u$

where $a$ and $u$ are a pair of coefficients [18].
Let $X^{(0)} = (X^{(0)}(1), X^{(0)}(2), \ldots, X^{(0)}(n))$ be the original data sequence, from which the following first-order accumulated sequence is generated:

$X^{(1)} = (X^{(1)}(1), X^{(1)}(2), \ldots, X^{(1)}(n)), \quad \text{s.t.} \; X^{(1)}(k) = \sum_{i=1}^{k} X^{(0)}(i)$

where $X^{(1)}(k)$ satisfies the discrete form of the above differential equation,

$X^{(1)}(k+1) = \left[X^{(0)}(1) - \hat{u}/\hat{a}\right] e^{-\hat{a}k} + \hat{u}/\hat{a}$

The GS model for the original data sequence $X^{(0)}$ can then be obtained by

$\hat{X}^{(0)}(k+1) = \hat{X}^{(1)}(k+1) - \hat{X}^{(1)}(k) = \left(1 - e^{\hat{a}}\right)\left[X^{(0)}(1) - \hat{u}/\hat{a}\right] e^{-\hat{a}k}$

where k = 0, 1, 2, 3, …
In the GS model, the sequence $X^{(1)}(k)$ obeys an exponential growth law, and its mean-form discrete equation is

$X^{(0)}(k+1) + \frac{1}{2}\hat{a}\left[X^{(1)}(k+1) + X^{(1)}(k)\right] = \hat{u}, \quad k = 1, 2, \ldots, n-1$

Based on the residuals between the predicted and actual values, the least squares method can estimate the values of $a$ and $u$. Hence, the power load forecasting can be realized by (15).
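A compact sketch of the GM(1,1)-style procedure above: accumulate the series, estimate a and u by least squares from the mean-form equation, and forecast with the response equation. This is an illustrative implementation, and the sample values in the usage line are made up.

```python
import numpy as np

def gm11_forecast(x0, steps=1):
    """GS (GM(1,1)) forecasting: accumulate X^(1), estimate (a, u) by
    least squares, then forecast X^(0) via the exponential response equation."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                               # accumulated sequence X^(1)
    z = 0.5 * (x1[1:] + x1[:-1])                     # background values of X^(1)
    B = np.column_stack([-z, np.ones_like(z)])
    Y = x0[1:]
    a, u = np.linalg.lstsq(B, Y, rcond=None)[0]      # least-squares estimate of a, u
    n = len(x0)
    k = np.arange(n + steps)
    x1_hat = (x0[0] - u / a) * np.exp(-a * k) + u / a    # response equation
    x0_hat = np.diff(x1_hat, prepend=x1_hat[0])          # back to the original series
    x0_hat[0] = x0[0]
    return x0_hat[n:]                                     # forecasts beyond the sample

# Illustrative use on a short, made-up load series:
print(gm11_forecast([320, 332, 341, 353, 366], steps=2))
```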

3. Proposed Fusion Method

3.1. Feature Extraction

The total load of the power system L can generally be decomposed into four components according to the type of influencing factors [19], namely
$L = L_n + L_w + L_s + L_r$
where Ln represents the normal trend of the load component, typically characterized by the load of typical historical days; Lw is the power load component related to meteorological factors, primarily influenced by changes in various meteorological factors; Ls is caused by special external factors such as holidays; and Lr is a random component of the power load, which, while having a small proportion, is usually difficult to predict. Therefore, we focus on the first three key factors.
(1) Temperature and humidity: Based on previous research experiences in power load forecasting [20], the two meteorological factors of temperature and relative humidity have the strongest correlation with power system load. Thus, we construct their daily mean and variance into a vector,
Hk = (tm(k), tv(k), hm(k), hv(k)), k = 1, 2, …, K,
where K is the total number of observed days, and tm(k) and tv(k) refer to the daily mean and variance of the temperature, and hm(k) and hv(k) those of the humidity. According to the range of annual temperature and humidity in the detection area, the ranges of the four variables can be determined. All the above vectors consist of a set SH = {Hk|k = 1, 2, …, K}.
(2) Special days. The power loads differ greatly between working and non-working days, particularly on special days such as Spring Festival, Labor Day, and National Day. Each of them usually includes a group of continuous non-working days. Hence, a feature vector representing any special day is constructed as follows,
Dq = (dq1, dq2,⋯, dqm), q = 1, 2,…,Q;
where q denotes the qth special day, m is the number of non-working days, and Q is the total number of special days. All such vectors consist of a set, SD = {Dq|q = 1, 2, …, Q}.
(3) Normal days. Owing to cyclical and continuous changes, historical power loads on the normal days of a week (excluding special days) are nearly repeated [21]. Assume that the total number of normal days is N; all normal days consist of a set, SN = {Dp|p = 1, 2, …, N}.
Therefore, the power load must be forecasted separately for special days and normal days after considering the influences of temperature and humidity. Any day to be predicted is first assigned either to SD∪SH or to SN∪SH, together with the corresponding power loads.
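The feature sets SH, SD, and SN can be assembled directly from the raw records. The sketch below is illustrative (the date handling and field layout are assumptions): it builds the daily weather vector Hk from 24 hourly readings per day and splits days into special and normal sets.

```python
import numpy as np

def daily_weather_features(temp_hourly, hum_hourly):
    """Build H_k = (t_m(k), t_v(k), h_m(k), h_v(k)): daily mean and variance
    of temperature and relative humidity from 24 hourly readings per day."""
    t = np.asarray(temp_hourly, dtype=float).reshape(-1, 24)
    h = np.asarray(hum_hourly, dtype=float).reshape(-1, 24)
    return np.column_stack([t.mean(axis=1), t.var(axis=1),
                            h.mean(axis=1), h.var(axis=1)])

def split_days(dates, special_dates):
    """Partition day indices into the special-day set S_D and normal-day set S_N."""
    special = [i for i, d in enumerate(dates) if d in special_dates]
    normal = [i for i, d in enumerate(dates) if d not in special_dates]
    return special, normal
```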
Let the forecasting model be q = F(H, c), where q is the power load and c is the data type. We use KF, BP, and the GS to construct forecasting models that approximate F(·,·) from historical data. For each predicted day, the three models provide power load forecasts, which are then integrated by the CI model to achieve a more accurate forecast of the power load. Figure 1 shows the flowchart of the proposed model.

3.2. Forecasting of the Three Existing Typical Methods

As an example, the historical load data used in this study consist of daily 24 h loads in a district of Tianjin from 1 July 2019 to 30 June 2021. The sampling interval is one hour, and 17,088 data points are collected in total. Table 1 shows their main features, such as the maximum, minimum, average, standard deviation, and variance. All data are partitioned into two groups: the data of the first year are used to construct the forecasting models, and those of the second year to evaluate their forecasting accuracy.

3.2.1. KF Forecasting

The KF forecasting method is developed using the relative power loads from SD∪SH and SN∪SH in the first year. Denote the load data at time $k$ as $q_k$, $k = 1, 2, \ldots, n$. Each piece of data is defined as a node, and the $k$th node refers to a three-dimensional vector $X_k = (q_k, q'_k, q''_k)$, where $q'_k$ and $q''_k$ are defined as

$q'_k = q_k - q_{k-1} \quad \text{and} \quad q''_k = q'_k - q'_{k-1}$

They are the first-order and second-order differences of $q_k$ and are computed recursively. The calculation process is shown in Figure 2.
After recording the load at each regular interval (1 h) of every day, 24 daily sampling values are obtained, and the three-dimensional representation of the measurement nodes $X_k = (q_k, q'_k, q''_k)$ can be formed. Hence, the KF coefficient matrix $A$ is determined as follows. Since

$q'_k = q'_{k-1} + q''_{k-1} \quad \text{and} \quad q_k = q_{k-1} + q'_{k-1} + q''_{k-1}$

the coefficient matrix $A$ in the KF equation is obtained as

$\begin{pmatrix} q_k \\ q'_k \\ q''_k \end{pmatrix} = A \begin{pmatrix} q_{k-1} \\ q'_{k-1} \\ q''_{k-1} \end{pmatrix} = \begin{pmatrix} 1 & 1 & 1 \\ 0 & 1 & 1 \\ a & b & c \end{pmatrix} \begin{pmatrix} q_{k-1} \\ q'_{k-1} \\ q''_{k-1} \end{pmatrix}$
When there is no other available prior information, the measurement matrix Hk is taken as the identity matrix.
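A small sketch of how the three-dimensional nodes and the transition matrix of the load KF might be assembled; the values passed for a, b, and c are placeholders, since the paper fits the last row of A from the first two months of each stage.

```python
import numpy as np

def load_state_nodes(q):
    """Build the 3-D nodes X_k = (q_k, q'_k, q''_k) from an hourly load series."""
    q = np.asarray(q, dtype=float)
    dq = np.diff(q, prepend=q[0])      # first-order difference q'_k
    ddq = np.diff(dq, prepend=dq[0])   # second-order difference q''_k
    return np.column_stack([q, dq, ddq])

def transition_matrix(a, b, c):
    """State transition matrix A of the load KF; (a, b, c) are placeholders
    for the fitted last row."""
    return np.array([[1.0, 1.0, 1.0],
                     [0.0, 1.0, 1.0],
                     [a,   b,   c  ]])

# Example: combine with the kalman_forecast sketch from Section 2.1, e.g.
# X = load_state_nodes(hourly_loads); A = transition_matrix(0.1, 0.2, 0.7); H = np.eye(3)
```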
Using seven special days (New Year's Day, Spring Festival, Tomb-Sweeping Day, Labor Day, Dragon Boat Festival, Mid-Autumn Festival, and National Day) as dividing points, we divide the entire year from June 2019 to June 2020 into 7 stages to determine the parameters of each forecasting model. In the same way, the year from July 2020 to July 2021 is divided into 7 stages for load forecasting. In every stage, the forecast is made for every hour of every day. The KF forecasting accuracy is evaluated in each stage by a group of statistical quantities, namely the maximal absolute error, average absolute error, average relative error, and standard error, as shown in Table 2.
The following conclusions can be inferred from Table 2.
(1) The minimum forecasting error occurs in the second stage, but the forecasting error tends to stabilize from the third stage onwards. This is because the KF method is an adaptive process with a dynamic correction function: it updates the state variables using the estimate from the previous moment and the observation at the current moment to obtain the estimate at the current stage.
(2) The maximum absolute error ranges from 64.010 to 685.982 MW, the average absolute error ranges from 43.845 to 486.239 MW, the average relative error ranges from 15.6% to 38.6%, and the standard deviation ranges from 12.137 to 166.267, respectively. All error indicators are relatively high. This is because the parameters of the KF model significantly impact the forecasting results, with the state transition matrix having a particularly notable effect on the KF forecasting. Especially in the KF forecasting process, the state matrix is inferred based on the load information of the first two months of every two quarters, and its accuracy directly affects the forecasting results.
(3) Analysis of the measurement period reveals that the average relative error of load forecasting change during the National Day Golden Week is the smallest. Load value fluctuations are relatively small in summer and winter, while they are larger in spring and autumn. This indicates that seasonal variations impact the accuracy of the forecasting results.

3.2.2. BP Forecasting

The process of solving the BP network model is as follows:
(1) Input and output layer design: The inputs of the BP network are the first-order and second-order differences of the hourly loads, and the output is the load. Therefore, the number of nodes in the input layer is 2, and that in the output layer is 1.
(2) Hidden layer design: We use a multi-input–single-output BP network with three hidden layers for load forecasting. The determination of the number of hidden layer neurons is very important, and we thus refer to the empirical formula [22]:

$l = \sqrt{n + m} + a$

where $n$ is the number of input layer neurons, $m$ is the number of output layer neurons, and $a$ is a constant in the range [1, 10]. Therefore, the number of neurons in the BP network for each quarter can be calculated (a configuration sketch is given after this list).
(3) Selection of excitation function: There are various transfer functions for BP neural networks, but with periodic data, the tansig transfer function yields smaller errors and higher stability than the logsig function. Therefore, tansig is used in the hidden layers, and the purelin function is used as the transfer function of the output layer.
(4) Network parameter determination: The network is trained for up to 5000 iterations with an expected error of 10−5. The forecasting model established on the first two months of each quarter is used to forecast the third month by the BP neural network method; the experimental results are shown in Table 3.
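A possible configuration corresponding to items (1)–(4), using scikit-learn's MLPRegressor as a stand-in for the original BP implementation; the constant a in the sizing rule and the three-hidden-layer choice follow the text, while everything else (library, default a, random seed) is an assumption for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def hidden_neurons(n, m, a=4):
    """Empirical sizing rule l = sqrt(n + m) + a, with a a constant in [1, 10]."""
    return int(round(np.sqrt(n + m))) + a

l = hidden_neurons(n=2, m=1)           # 2 inputs (1st/2nd-order differences), 1 output (load)
# tanh plays the role of tansig in the hidden layers; MLPRegressor's output
# layer is linear, analogous to purelin.
bp = MLPRegressor(hidden_layer_sizes=(l, l, l), activation="tanh",
                  max_iter=5000, tol=1e-5, random_state=0)
# bp.fit(X_train, y_train); y_hat = bp.predict(X_test)   # X: load differences, y: hourly load
```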
The results can be inferred from Table 3 as follows.
(1) In some stages, there are individual points in the predicted results with significant forecasting errors, and some forecasting values deviate from the actual values. This may be due to the inherent drawback of local extremum points in the BP algorithm.
(2) The maximum absolute error is between 26.871 and 466.754 MW, the average absolute error ranges from 20.127 to 354.853 MW, the average relative error is between 3.8% and 23.4%, and the standard deviation is between 4.092 and 70.371.
(3) In the forecasting results of each quarter, the absolute and relative errors of the first, second, and fourth stages are relatively small, while the errors of other stages are relatively large, indicating that different stages each day have a certain impact on the accuracy.

3.2.3. GS Forecasting

When using the GS, both parameters a and u must be determined. We determined them through fitting the samples in the year from June 2019 to June 2020, as shown in Table 4.
After substituting these parameters into (13), Table 5 shows the errors in all seven stages.
The following results can be inferred from Table 5.
(1) The maximum absolute error predicted by the GS model is between 6.9329 and 109.3515 MW, the average absolute error ranges from 5.054 to 87.239 MW, the average relative error is between 1.62% and 18.57%, and the standard deviation is between 1.1782 and 43.5972. Evaluation of these data shows that the GS forecasting results do not fluctuate up and down with the actual load values. The overall predicted trend exhibits exponential characteristics, which is consistent with the actual situation in the short term. However, the future load trend will not fully obey exponential growth, which is why the GS generates larger errors.
(2) The maximum absolute error, average relative error, and standard deviation of the predicted results outside the third and fifth stages are relatively small; in particular, the absolute error of the forecast in the second stage reaches a level low enough for practical load forecasting. However, the GS has certain limitations and is only suitable for modeling and forecasting problems with relatively gentle data changes. It is not suitable for situations where the data sequence grows too fast or exhibits significant fluctuations.

3.3. Fusion Based on CI Model

Let X = {x1, x2, …, xn} be a set of n inputs (components) and P(X) be the power set of X. The set function g on X, g: P(X) → [0, 1], is called a non-additive measure [23]; it represents the interrelation among the n inputs and satisfies
(1) Boundary condition: $g(\emptyset) = 0$, $g(X) = 1$ ($\emptyset$ is the empty set);
(2) Monotonicity condition: for any two sets A and B on X satisfying $A \subseteq B$, $g(A) \le g(B)$.
Let h: X → R be a normalized map with respect to the measure g; the discrete form of the CI is

$E_g = \int h \, dg = \sum_{i=1}^{n} \left[g(A_{(i)}) - g(A_{(i+1)})\right] h(x_i^*)$

where $A_{(i)} = \{x_i^*, x_{i+1}^*, \ldots, x_n^*\}$, and $h(x_1^*), h(x_2^*), \ldots, h(x_n^*)$ is a reordering of $h(x_1), h(x_2), \ldots, h(x_n)$ satisfying $h(x_1^*) \le h(x_2^*) \le \cdots \le h(x_n^*)$, with $h(x_0^*) = 0$ and $g(A_{(n+1)}) = 0$. There is the relation with (23),

$\sum_{i=1}^{n} \left[g(A_{(i)}) - g(A_{(i+1)})\right] = 1$
(24) shows that the CI can be formulated as a weighted average of the n inputs. However, each weight in the typical averaging formula relates only to its own input, while each weight in the CI relates to a subset of inputs. Therefore, the former is linear and the latter is nonlinear. In particular, the CI has a very strong generalization ability, and a CI with n inputs is equivalent to n! constrained feedforward neural networks. For example, with 3 inputs x1, x2, and x3, the CI is equivalent to 6 feedforward neural networks, as shown in Figure 3, where the weights of each feedforward neural network consist of 3 relative measures. Therefore, the CI can act as a nonlinear aggregator with strong generalization ability and interpretability.
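For illustration, a direct implementation of the discrete CI in (23) is sketched below. The measure g is passed as a dictionary over subsets of input indices; this is an editorial sketch rather than the Kappalab implementation used in the paper.

```python
def choquet_integral(h, g):
    """Discrete Choquet integral, Eq. (23): order the inputs so that
    h(x*_1) <= ... <= h(x*_n) and weight h(x*_i) by g(A_(i)) - g(A_(i+1)),
    where A_(i) = {x*_i, ..., x*_n}.  g maps frozensets of indices to [0, 1]."""
    n = len(h)
    order = sorted(range(n), key=lambda i: h[i])      # indices with h ascending
    result = 0.0
    for pos, i in enumerate(order):
        A_i = frozenset(order[pos:])                  # A_(i)
        A_next = frozenset(order[pos + 1:])           # A_(i+1); empty when i = n
        g_next = g[A_next] if A_next else 0.0         # g(A_(n+1)) = 0
        result += (g[A_i] - g_next) * h[i]
    return result
```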
The parameter determination in the CI is a key step. For the $2^n$ measure values associated with $n$ inputs, additional monotonicity constraints must hold among them, so a special solution method is required. Currently, the typical and effective method to solve the CI parameters is the heuristic least mean square (HLMS) algorithm [24]. Given a set of K training data pairs from the inputs to the output, $\{(h_1^k, h_2^k, \ldots, h_n^k), E^k \mid k = 1, 2, \ldots, K\}$, the objective function of HLMS is formulated as

$J = \min \sum_{k=1}^{K} \left(E^k - \hat{E}_g^k\right)^2$

where $\hat{E}_g^k$ is the CI value computed from $(h_1^k, h_2^k, \ldots, h_n^k)$.
To apply the CI to integrate the forecasting results from the KF, BP, and GS, we denote the variables x1, x2, and x3 as their errors from the real value. The HLMS algorithm is performed to solve the CI parameters in the open-source software Kappalab [25]. For the training samples of the three inputs x1, x2, and x3 corresponding to the three methods, Table 6 shows the CI parameters solved in Kappalab by HLMS using the samples of each quarter of the year from June 2019 to June 2020.
After substituting these parameters into (23) and using MATLAB, the CI forecasting results for the entire year from July 2020 to July 2021 are obtained by fusing the results of the three methods. Taking the four stages of the first quarter of the second year as an example, the forecasting results are shown in Figure 4, where the results are evaluated by the error between each real load and the computed load.
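As a usage illustration, the quarter-1 measure from Table 6 can be plugged into the choquet_integral sketch above to fuse three per-hour forecasts. The normalization of the inputs by an assumed full-scale value and the sample forecast numbers are simplifying assumptions, not the exact preprocessing used in the paper.

```python
# Measure values solved for quarter 1 (Table 6); indices 0, 1, 2 stand for KF, BP, GS.
g_q1 = {
    frozenset({0}): 0.351, frozenset({1}): 0.452, frozenset({2}): 0.543,
    frozenset({0, 1}): 0.422, frozenset({0, 2}): 0.523, frozenset({1, 2}): 0.732,
    frozenset({0, 1, 2}): 1.000,
}

def fuse_forecasts(kf_mw, bp_mw, gs_mw, scale=5000.0):
    """Fuse three hourly load forecasts (MW) with the CI; forecasts are
    normalized to [0, 1] by an assumed full-scale value before fusion."""
    h = [kf_mw / scale, bp_mw / scale, gs_mw / scale]
    return choquet_integral(h, g_q1) * scale

print(fuse_forecasts(3120.0, 3245.0, 3198.0))   # hypothetical forecasts for one hour
```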
The forecasting loads by the CI fusion method have reduced the problems of large errors in the first three sampling points of the KF, large errors in individual points of the BP method, and exponential changes in the forecasting values of the GS method. The forecasting errors of the proposed CI fusion for the four quarters are shown in Table 7.
Compared with the respective forecasting results from the KF, BP, and GS, the fusion results by the CI from Table 7 can be summarized as follows.
(1) The maximum forecasting error among the three methods occurs in the KF method. Although KF is a continuous iterative process, the errors at the initial points and the increase in subsequent errors can cancel each other out, and the subsequent filtering tends to stabilize. However, due to the significant errors at the first three points, the maximum errors still have a significant impact on the overall accuracy of load forecasting, resulting in various error indicators being less favorable compared to other methods.
(2) For stages 1 and 2, the GS forecasting results show the minimum average relative error, because this stage corresponds to the minimum fluctuation of the load value at the morning time of each day, and the GS model is well-suited for load forecasting with relatively flat data changes. The forecasting results of the CI fusion method are affected by the large forecasting error of the KF method, leading to inferior results compared to the GS in some stages. However, it weakens the instability and limitations of single forecasting results of both KF and the BP neural network.
(3) Comparing the errors of three single forecasting methods, the average relative error from the CI fusion forecasting results is the smallest, which overall reduces the instability and limitations of the single forecasting results of the other three methods in various stages. The advantages of the CI fusion method are obvious.

4. Conclusions

This study addresses the issues of imprecision, uncertainty, and instability in short-term power load forecasting with a single method. A new fusion method based on the Choquet integral is proposed. This method can fully use the interactions and complementarity among various methods, leading to more accurate and stable forecasting results for short-term loads. Different from other existing methods, the proposed fusion method can effectively extract the key features and parameters in the fusion process by using a typical optimization algorithm, which is easily realized and understood in practice. Therefore, the proposed method provides new ideas and a practical means for fusing multimodal and multi-resource methods in short-term load forecasting, and it can be extended to other power systems.
However, there is still room for improvement in the method proposed in this study. Firstly, further research and analysis are required on the typicality of the fused forecasting methods. Secondly, in the case of big data, current deep learning-based methods can be used to further evaluate and compare the advantages, disadvantages, and usability of the proposed method. Finally, an applicable method must be chosen to conduct a more in-depth analysis of the various factors that affect the operation of the power system and load changes, in order to effectively improve the accuracy of short-term load forecasting.

Author Contributions

Conceptualization, T.Y.; Methodology, S.Y.; Validation, Y.Z.; Data curation, G.L.; Writing—original draft, Y.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data are not publicly available because they are not a shared resource.

Conflicts of Interest

Authors Tao Yu, Ye Wang, Yuchong Zhao, and Gang Luo were employed by CCCC Tianjin Waterway Bureau Co., Ltd. The remaining author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

1. Dong, J.; Luo, L.; Zhang, Q. A parallel short-term power load forecasting method considering high-level elastic loads. IEEE Trans. Instrum. Meas. 2023, 72, 2524210.
2. Luo, L.; Dong, J.; Zhang, Q. A distributed short-term load forecasting method in consideration of holiday distinction. Sustain. Energy Grid Netw. 2024, 38, 101296.
3. Jahan, I.S.; Snasel, V.; Misak, S. Intelligent Systems for Power Load Forecasting: A Study Review. Energies 2020, 13, 6105.
4. Lai, C.S.; Yang, Y.; Pan, K.; Zhang, J.; Yuan, H.; Ng, W.W.Y. Multi-view neural network ensemble for short and midterm load forecasting. IEEE Trans. Power Syst. 2021, 36, 2992–3003.
5. He, Y.; Xu, Q.; Wan, J.; Yang, S. Short-term power load probability density forecasting based on quantile regression neural network and triangle kernel function. Energy 2016, 114, 498–512.
6. Sharma, S.; Majumdar, A.; Elvira, V.; Chouzenoux, E. Blind Kalman Filtering for Short-Term Load Forecasting. IEEE Trans. Power Syst. 2021, 35, 4916–4919.
7. Li, L.J.; Huang, W. A Short-Term Power Load Forecasting Method Based on BP Neural Network. Appl. Mech. Mater. 2014, 494–495, 1647–1650.
8. Li, Q.; Liu, S. Grey input-output analysis for fixed assets. J. Grey Syst. 2022, 29, 14–28.
9. Khan, M.E.; Dutt, D.N. An expectation-maximization algorithm based Kalman smoother approach for event-related desynchronization (ERD) estimation from EEG. IEEE Trans. Biomed. Eng. 2007, 54, 1191–1198.
10. Jetcheva, J.G.; Majidpour, M.; Chen, W.P. Neural network model ensembles for building-level electricity load forecasts. Energy Build. 2014, 84, 214–223.
11. Li, H.Z.; Guo, S.; Li, C.J.; Sun, J.Q. A hybrid annual power load forecasting model based on generalized regression neural network with fruit fly optimization algorithm. Knowl. Based Syst. 2013, 37, 378–387.
12. Xie, K.; Yi, H.; Hu, G.; Li, L.; Fan, Z. Short-term power load forecasting based on Elman neural network with particle swarm optimization. Neurocomputing 2020, 416, 136–142.
13. Liu, Y.; Wang, W.; Ghadimi, N. Electricity load forecasting by an improved forecast engine for building level consumers. Energy 2017, 139, 18–30.
14. Grabisch, M. A new algorithm for identifying fuzzy measures and its application to pattern recognition. In Proceedings of the IEEE International Conference on Fuzzy Systems, Yokohama, Japan, 20–24 March 1995; pp. 145–150.
15. Kalman, R.E. A new approach to linear filtering and prediction problems. J. Basic Eng. 1960, 82, 35–45.
16. Graves, A. Long Short-Term Memory. In Supervised Sequence Labelling with Recurrent Neural Networks; Springer: Berlin/Heidelberg, Germany, 2012; pp. 1735–1780.
17. Hochreiter, S.; Schmidhuber, J. Long short-term memory. Neural Comput. 1997, 9, 1735–1780.
18. Srivastava, N.; Hinton, G.; Krizhevsky, A.; Sutskever, I.; Salakhutdinov, R. Dropout: A simple way to prevent neural networks from overfitting. J. Mach. Learn. Res. 2014, 15, 1929–1958.
19. Tokgoz, A.; Unal, G. A RNN based time series approach for forecasting Turkish electricity load. In Proceedings of the 26th Signal Processing and Communications Applications Conference, Izmir, Turkey, 2–5 May 2018.
20. Zheng, J.; Xu, C.; Zhang, Z.; Li, X. Electric load forecasting in smart grids using long-short-term-memory based recurrent neural network. In Proceedings of the 2017 51st Annual Conference on Information Sciences and Systems (CISS), Baltimore, MD, USA, 22–24 March 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 1–6.
21. Chen, Z.; Sun, L. Short-Term Electrical Load Forecasting Based on Deep Learning LSTM Networks. Electron. Technol. 2018, 10, 11–18.
22. Shang, H.; Jiang, Z.; Xu, R.; Wang, D.; Wu, P.; Chen, Y. The dynamic mechanism of a novel stochastic neural firing pattern observed in a real biological system. Cogn. Syst. Res. 2019, 53, 123–136.
23. Angilella, S.; Greco, S.; Matarazzo, B. Non-additive robust ordinal regression: A multiple criteria decision model based on the Choquet integral. Eur. J. Oper. Res. 2010, 201, 277–288.
24. Grabisch, M. Alternative representations of discrete fuzzy measures for decision making. Int. J. Uncertain. Fuzziness Knowl. Based Syst. 1997, 5, 587–607.
25. Kappa Software. Available online: http://www.kappa.com.cn (accessed on 14 October 2024).
Figure 1. Illustration of the forecasting process of our proposed model.
Figure 2. Illustration of the process to obtain a three-dimensional vector from load nodes.
Figure 3. Equivalence of a CI and n! feedforward neural networks. Note: ①–⑱ are $g_{123}-g_{23}$, $g_{23}-g_{3}$, $g_{3}-g_{4}$, $g_{132}-g_{32}$, $g_{32}-g_{2}$, $g_{2}-g_{4}$, $g_{213}-g_{13}$, $g_{13}-g_{3}$, $g_{3}-g_{4}$, $g_{321}-g_{21}$, $g_{21}-g_{1}$, $g_{1}-g_{4}$, $g_{312}-g_{12}$, $g_{12}-g_{2}$, $g_{2}-g_{4}$, $g_{231}-g_{31}$, $g_{31}-g_{1}$, $g_{1}-g_{4}$, respectively, where g(·) refers to g(A).
Figure 4. Comparison of the forecasting loads by the three methods and their CI fusion. (a) Stage 1; (b) stage 2; (c) stage 3; (d) stage 4.
Table 1. Data features of historical loads in two years (unit: MW).

Period | Maximum | Minimum | Average | Variance | Standard Deviation
June 2019–June 2020 | 4785.15 | 1834.64 | 3206.40 | 789.93 | 28.11
July 2020–July 2021 | 3464.79 | 587.22 | 2345.73 | 312.54 | 17.68
Table 2. KF forecasting errors for the 7 stages (unit: MW).

Stage | Maximal Absolute Error | Average Absolute Error | Average Relative Error | Standard Error
1 | 685.982 | 486.239 | 29.7% | 166.267
2 | 64.010 | 43.845 | 18.0% | 12.137
3 | 550.280 | 467.083 | 25.3% | 82.295
4 | 321.439 | 224.451 | 24.3% | 49.528
5 | 436.222 | 253.580 | 37.6% | 69.458
6 | 441.775 | 310.238 | 38.6% | 73.582
7 | 358.751 | 198.239 | 15.6% | 70.223
Table 3. Analysis of forecasting results using the BP neural network (unit: MW).

Stage | Maximal Absolute Error | Average Absolute Error | Average Relative Error | Standard Error
1 | 33.545 | 26.783 | 3.8% | 14.948
2 | 26.871 | 20.127 | 9.7% | 4.092
3 | 466.754 | 354.853 | 13.9% | 70.371
4 | 83.295 | 50.385 | 10.0% | 31.331
5 | 134.968 | 100.358 | 13.1% | 49.695
6 | 129.702 | 87.453 | 11.2% | 54.244
7 | 136.513 | 89.729 | 23.4% | 49.288
Table 4. GS parameters of the four quarters.

Quarter | 1 | 2 | 3 | 4
a | 0.954 | 0.813 | 0.762 | 0.742
u | 0.322 | 0.455 | 0.651 | 0.658
Table 5. The GS forecasting results (unit: MW).

Stage | Maximal Absolute Error | Average Absolute Error | Average Relative Error | Standard Error
1 | 26.566 | 18.732 | 3.2% | 9.478
2 | 6.932 | 5.054 | 1.6% | 1.178
3 | 109.351 | 87.239 | 17.1% | 6.631
4 | 68.195 | 50.780 | 16.0% | 3.452
5 | 97.388 | 83.227 | 12.5% | 43.597
6 | 89.640 | 50.718 | 11.8% | 41.034
7 | 81.050 | 65.450 | 18.5% | 8.787
Table 6. Solved CI model parameters.

Quarter | g(x1) | g(x2) | g(x3) | g(x1, x2) | g(x1, x3) | g(x2, x3) | g(x1, x2, x3)
1 | 0.351 | 0.452 | 0.543 | 0.422 | 0.523 | 0.732 | 1.000
2 | 0.327 | 0.418 | 0.476 | 0.328 | 0.489 | 0.549 | 1.000
3 | 0.403 | 0.305 | 0.404 | 0.332 | 0.582 | 0.512 | 1.000
4 | 0.360 | 0.289 | 0.332 | 0.424 | 0.664 | 0.675 | 1.000
Table 7. Forecasting errors by the CI in the seven stages (unit: MW).

Stage | Maximal Absolute Error | Average Absolute Error | Average Relative Error | Standard Error
1 | 30.121 | 26.456 | 3.2% | 7.263
2 | 12.654 | 10.337 | 2.8% | 3.188
3 | 109.221 | 83.704 | 11.8% | 30.126
4 | 46.139 | 40.659 | 10.4% | 20.783
5 | 70.853 | 50.760 | 12.8% | 28.491
6 | 43.573 | 37.327 | 9.1% | 30.328
7 | 44.205 | 36.663 | 9.7% | 20.393