Article

Short-Term Photovoltaic Power Prediction Using Nonlinear Spiking Neural P Systems

1 School of Electrical Engineering and Electronic Information, Xihua University, Chengdu 610039, China
2 Sichuan Province Key Laboratory of Power Electronics Energy-Saving Technologies & Equipment, Xihua University, Chengdu 610039, China
3 School of Computer and Software Engineering, Xihua University, Chengdu 610039, China
* Author to whom correspondence should be addressed.
Sustainability 2024, 16(4), 1709; https://doi.org/10.3390/su16041709
Submission received: 31 December 2023 / Revised: 1 February 2024 / Accepted: 1 February 2024 / Published: 19 February 2024
(This article belongs to the Topic Solar Forecasting and Smart Photovoltaic Systems)

Abstract

To ensure high-quality electricity, improve the dependability of power systems, reduce carbon emissions, and promote the sustainable development of clean energy, short-term photovoltaic (PV) power prediction is crucial. However, PV power is highly stochastic and volatile, making accurate prediction very difficult. To address this challenging problem, this paper proposes a novel method for short-term PV power prediction based on an echo state network (ESN) model whose reservoir is a nonlinear spiking neural P (NSNP) system. First, an NSNP system, a neural-like computational model, is introduced; its nonlinear spiking mechanism enables it to effectively capture the complex nonlinear trends in PV sequences. Furthermore, an NSNP system serving as a reservoir layer is designed. In the proposed model, the input weights and NSNP reservoir weights are randomly initialized, while the output weights are trained by the Ridge Regression algorithm. This learning scheme, motivated by the learning mechanism of ESNs, gives the model adaptability to the complex nonlinear trends in PV sequences and greater flexibility. Three case studies are conducted on real datasets from Alice Springs, Australia, comparing the proposed model with 11 baseline models. The experimental outcomes demonstrate that the model performs well in PV power prediction tasks.

1. Introduction

Under the globally growing demand for green energy, photovoltaic (PV) power generation, as a sustainable, environmentally friendly, and flexible distributed energy source, has received increasing attention [1,2]. Grid-connected PV systems offer environmental advantages and considerable economic benefits. Unfortunately, due to its uncertainty and intermittency, PV power generation poses a series of challenges to the operation of the existing grid system at high penetration rates [3]. Solar energy is an environmentally friendly, renewable, cost-effective, and widely available energy option. With technological advances and cost reductions, the application of solar energy is rapidly increasing worldwide, making a significant contribution to sustainable development and the energy transition. PV power and solar thermal power are currently the two main subtypes of solar energy, with PV power generation continuing to rise at a quicker rate each year. PV power plants are often unstable and intermittent power sources because of the cyclical nature of day and night and the stochastic nature of daylight [4]. Climatic conditions, weather, and other variables all play a crucial role in influencing the generation of PV power systems. Once a significant amount of PV is added to the grid, these features create new difficulties for the electrical system, in particular making grid scheduling harder and more intricate [5]. PV power prediction is one of the most important fundamental technologies for enhancing dispatch and operation quality and reducing the required reserve capacity.
PV power prediction can be categorized according to its time scale: ultra-short term [6], short term, and medium to long term [7,8]. Short-term prediction focuses on near-term PV power changes and usually covers a time scale of minutes to hours. It holds paramount significance for power system operation and dispatch and can help grid managers make real-time adjustments and optimizations to cope with the volatility of PV power. PV power prediction can also be categorized into single-step and multi-step prediction, according to the number of time intervals predicted ahead, and into single-field and regional prediction, according to the spatial scale of the prediction [9].
In recent years, numerous approaches have been explored for PV power prediction, and they can be broadly divided into three main categories: statistical, physical, and machine learning (ML) methods [10]. For modeling and prediction, physical methods rely on the physical equations and fundamentals of PV power systems. These methods consider factors such as the light intensity, sun angle, weather conditions, and characteristics of PV modules to calculate and predict PV power output by building mathematical models and equations. Physical methods usually require detailed meteorological data and PV system parameters, as well as a good understanding of the physical characteristics of PV power generation systems. However, the model parameters in this approach may fluctuate with the operation of the PV power generation system, leading to diminished prediction accuracy. Statistical methods predict PV power based on historical data and statistical models. They typically employ techniques such as time series analysis, regression analysis, and averaging to analyze and model patterns and trends in power output. Statistical methods can account for factors such as seasonality, cyclicality, and trends and use the statistical properties of historical data to make predictions. However, they are not well suited to the nonlinear time-series behavior of solar radiation data.
ML methods utilize computer algorithms and models to learn and predict PV power from data. These methods can automatically discover patterns and associations in data by training models and can make predictions about future power based on well-trained models. Commonly employed ML algorithms include random forests, regression algorithms, support vector machines, naive Bayes, decision trees, and artificial neural networks, among others. ML methods can utilize a variety of input features, including meteorological data, temporal information, historical power data, and other relevant factors, to improve prediction accuracy. Because they can adapt to complex nonlinear relationships and large amounts of data, ML methods are well suited for instantaneous and short-term predictions, and they can continuously improve prediction performance through model training and optimization. Currently, ML methods are widely favored as one of the most popular strategies for predicting time series [11].
The most cutting-edge technology currently used in the area of ML is deep learning (DL), commonly referred to as deep neural networks (DNNs). DL models have garnered significant research attention in recent years for PV power prediction tasks due to their outstanding performance in fault detection, pattern identification, and image processing. Common DL models include the autoencoder (AE) [12], convolutional neural networks (CNNs) [13], the restricted Boltzmann machine (RBM) [14], deep belief networks (DBNs) [15], and long short-term memory networks (LSTM) [16]. Many DL models have been employed in PV power prediction because of the close connection between PV power and time series. In Ref. [17], for the challenge of predicting PV power generation within hours, a quad-kernel CNN model (QK_CNN) was employed. This model replaced the first FC layer in the traditional CNN with a global maximum pooling strategy, which reduced the amount of computation, avoided overfitting, and reduced the dimensionality to extract globally salient features. The design utilized four CNNs with various kernel sizes to capture long-term and short-term correlations in the target sequences. A hybrid DL predictive model was also used in Ref. [18]. The suggested model does not require future weather forecast data as input because it is based on wavelet packet decomposition (WPD) and LSTM networks. The hybrid model is applicable without any a priori assumptions, even when there are complex nonlinear variables in the PV power series. The LSTM network can also detect deep characteristics and concealed nonlinear correlations in historical PV power generation data. A linear weighting strategy can be used to combine the predictions from each LSTM network to achieve greater prediction precision. In Ref. 
[19], three direct prediction models for PV power were proposed: a one-dimensional CNN model, an LSTM model, and a CNN+LSTM combination, each with and without a data transformation step. By contrasting their prediction abilities, it was determined which DL model achieved the best results in predicting PV power. In Ref. [20], Liu et al. suggested a novel approach for predicting short-term PV power using a parallel bidirectional LSTM and CNN (BILSTM-CNN). This method uses three distinct decomposition methods to decompose the input feature information and then feeds it into a three-way parallel BILSTM-CNN network to improve the prediction accuracy. In Ref. [21], Yang et al. suggested a method for predicting PV power using multi-head self-attention (MSA) and LSTM. The method first uses a special MSA to determine the contextual dependencies of the input information and then uses the LSTM network to enhance the extraction of local features to improve the prediction accuracy. In Ref. [22], Lou et al. suggested a PV power prediction method that integrated a CNN, LSTM, and transfer learning. This approach applied two unsupervised methods, namely a domain adversarial neural network and edge disparity difference, combining them with a CNN-LSTM network to predict PV power and improve its accuracy. While the aforementioned methods have shown some advances in enhancing the accuracy of PV power prediction, they encounter several challenges, including excessive computational complexity, long prediction times, and too many hyperparameters. It should be noted that a profoundly complex nonlinear connection exists between the output power of a PV system and variables such as light intensity and temperature, which makes it difficult to model accurately with simple mathematical models. Furthermore, the intricacy of the model structure makes it relatively hard to explain the decision-making process of the model.
Therefore, in addition to pursuing high prediction accuracy, improving the interpretability of the model is also a direction worthy of attention in current research. In Refs. [23,24], a novel variation of the recurrent neural network (RNN), called the echo state network (ESN), was introduced. The three most common parts of an ESN are an input layer, a reservoir layer, and an output layer [25,26]. An ESN first randomly initializes its input weights and reservoir weights and then updates only the output weights through training. An ESN is able to circumvent the shortcomings of RNNs and provides an improved convergence rate [27]. In Ref. [28], Xinhui et al. proposed an ESN prediction model based on wavelet theory (WT + ESN) for PV power prediction. That study used three error indicators to compare the WT + ESN model with three traditional models and demonstrated the superiority of the WT + ESN model. In Ref. [29], Jayawardene et al. compared the Adaptive Neuro-Fuzzy Inference System (ANFIS) and an ESN. Both methods were employed to predict short-term PV power under varied weather conditions in Clemson, South Carolina, and the findings demonstrated that the ESN outperformed the ANFIS under different weather conditions. In Ref. [30], Li et al. introduced a dual-core short-term PV power prediction approach that combines an ESN and a kernel extreme learning machine (ESN-KELM). This approach employed a multiscale similar-day algorithm and the fast iterative filter decomposition method to obtain the model input data and then applied an upgraded optimization method to optimize the ESN-KELM model parameters to improve the prediction accuracy. The results confirm the benefits of the ESN-KELM model in predicting PV power. However, the highly nonlinear and intricate state vector evolution inside ESNs makes it hard to understand and explain the decision-making processes of these models.
Spiking neural P (SNP) systems [31] are a particular class of neural-like computing models that abstract the spiking mechanism of biological neurons. SNP systems have received widespread attention due to their high degree of parallelism and computational completeness. Nonlinear variants of SNP systems are referred to as nonlinear spiking neural P (NSNP) systems [32]. The nonlinear spiking mechanism of an NSNP system is good at handling complex nonlinear changes in data. It is worth mentioning that, because they mimic the spiking mechanism of biological neurons, SNP systems and their variants have been employed to develop recurrent-like time-series forecasting models, e.g., the gated spiking neural P (GSNP) system model [33] and the NSNP-system-based ESN model [34].
PV power data are essentially time-series data, and the underlying system generating these data is a complex, nonlinear, dynamical system. Moreover, PV power data exhibit short-term correlations and long-term dependencies. An ESN avoids the shortcomings of traditional RNNs, and its training process is relatively simple. A distinguishing characteristic of NSNP systems is their nonlinear spiking mechanism, which provides powerful nonlinear dynamics and is able to capture complex nonlinear characteristics. To address this challenging problem in PV power prediction, a novel short-term PV power prediction model is developed, inspired by the ESN framework and using the nonlinear spiking mechanism of NSNP systems. Structurally, the proposed model is an NSNP system configured with one input layer and one output layer, at the front and back of the system, respectively. This design is similar to an ESN, but the proposed model differs in two ways: (i) an NSNP system serves as the reservoir; (ii) the kinetic equations of NSNP systems are entirely distinct from those of the ESN and differ in their theoretical derivation. However, this design brings a benefit: a learning method similar to that of ESNs can be adapted to train the proposed model.
The following is a summary of this paper’s contributions.
(1)
This study focuses on presenting a novel model to predict PV power using NSNP systems. By improving the accuracy of PV power prediction, the utilization of power resources can be optimized and energy waste reduced, thus promoting sustainable energy utilization.
(2)
The proposed method comprehensively extracts the nonlinear characteristics of the PV sequence through an NSNP system. This system is inspired by biological neurons and is applied to PV power prediction to improve the technical efficiency, inject new perspectives into sustainability research, and deepen the understanding of the sustainable development of clean energy. This fusion of technological innovation and ecological thinking opens up a new pathway to improving the accuracy of short-term PV power generation prediction, while leading a new paradigm in sustainability research.
(3)
To comprehensively prove the versatility and effectiveness of the proposed model, three different datasets and five performance metrics are used to evaluate the model. The outcomes indicate that the proposed model substantially enhances the accuracy of short-term PV power predictions.
The remainder of the paper is segmented into the following sections. In Section 2, the overall research framework of the proposed scheme is introduced and the proposed model is thoroughly explained. In Section 3, three case examples are discussed along with the experimental findings from each. The conclusions and prospects are given in Section 4.

2. Overall Research Framework of the Proposed Scheme

Figure 1 illustrates the overall framework of this study, divided into three parts. The specific description is as follows:
(1)
First, preprocess the three sets of downloaded raw PV data, including removing outliers and filling missing values caused by machine failures. The data imputation method adopted in this study is linear interpolation. After preliminary data processing, format the data to meet the requirements of the proposed model. The data type is set to float32 in this study. To expedite the convergence speed and enhance the generalization capability of the prediction model, ensuring that the model learns on the same scale, this research normalizes all three sets of data to the range (0,1) through the min-max scaling method.
(2)
After completing all the data preprocessing, we input the processed data into the proposed PV prediction model for learning and training. In this study, the proposed model is trained using the Ridge Regression algorithm. The combination of the nonlinear spiking mechanism of NSNP systems with Ridge Regression forms an effective learning framework.
(3)
To fully demonstrate the benefits of the proposed model in short-term PV power forecasting, a comprehensive evaluation and comparison of the model is conducted using three different datasets and five evaluation metrics (RMSE, MAE, MAPE, MBE, R²).
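The preprocessing pipeline in step (1) can be sketched as follows. This is an illustrative sketch, not the authors' code; the column names are hypothetical, and pandas/NumPy are assumed for the linear interpolation, float32 cast, and min-max scaling to (0, 1).

```python
import numpy as np
import pandas as pd

def preprocess(df: pd.DataFrame) -> np.ndarray:
    """Fill missing values by linear interpolation, cast to float32,
    and min-max scale each column to the (0, 1) range."""
    df = df.interpolate(method="linear", limit_direction="both")
    x = df.to_numpy(dtype=np.float32)
    x_min, x_max = x.min(axis=0), x.max(axis=0)
    return (x - x_min) / (x_max - x_min + 1e-12)  # guard against constant columns

# Hypothetical two-column PV frame with a machine-failure gap in each column.
raw = pd.DataFrame({"active_power": [0.0, np.nan, 2.0, 3.0],
                    "temperature":  [20.0, 21.0, np.nan, 23.0]})
scaled = preprocess(raw)
```

The interpolated gap in `active_power` becomes 1.0 and is then scaled to 1/3 of the column range.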

2.1. Detailed Description of the Proposed Prediction Model

The distinguishing characteristic of NSNP systems, which are a nonlinear variation of SNP systems, is their nonlinear spiking mechanism. We first introduce a variant of NSNP systems and then derive its mathematical model according to the nonlinear spiking mechanism. Because of this mechanism, NSNP systems can display rich dynamics. This study realizes a new PV power forecasting model utilizing NSNP systems.

2.1.1. NSNP Systems

Definition 1.
An NSNP system is defined as the tuple
$$\Pi = (O, \sigma_1, \sigma_2, \ldots, \sigma_m, syn, x)$$
where
(1) $O = \{a\}$ is a singleton alphabet ($a$ is referred to as the spike);
(2) $\sigma_i = (u_i, r_i)$ is the $i$th spiking neuron, $i = 1, 2, \ldots, m$, where
(a) $u_i \in \mathbb{R}$ represents the initial state of neuron $\sigma_i$;
(b) $r_i$ represents a nonlinear spiking rule $a^{g(u)} \to a^{f(u)}$, where $g(\cdot)$ and $f(\cdot)$ represent two nonlinear functions and $u = u_i + x$;
(3) $syn \subseteq \{1, 2, \ldots, m\} \times \{1, 2, \ldots, m\}$ with $i \neq j$ for every $(i, j) \in syn$, $i, j = 1, 2, \ldots, m$ (the set of synapses);
(4) $x$ represents the external input of $\Pi$.
A directed network consisting of $m$ neurons makes up an NSNP system, where neurons serve as nodes and edges indicate synaptic connections between neurons. Figure 2 shows an NSNP system, where $\sigma_1, \sigma_2, \ldots, \sigma_m$ represent the $m$ spiking neurons. Each neuron contains an external input $x$, a state unit $u_i$, and a nonlinear rule $a^{g(u)} \to a^{f(u)}$, where $g(\cdot)$ and $f(\cdot)$ are two identical or different nonlinear functions.
Suppose that at time $t$ the current state of neuron $\sigma_i$ is represented by $u_i(t)$ and its external input by $x(t)$. Once neuron $\sigma_i$ fires, it consumes $g(u)$ spikes and generates $f(u)$ spikes, where $u = u_i(t-1) + x(t)$. The neurons connected to $\sigma_i$ receive the produced spikes along the synapses. Accordingly, the state equation of neuron $\sigma_i$ is represented by
$$u_i(t) = u_i(t-1) - g\big(u_i(t-1) + x(t)\big) + \sum_{j=1, j \neq i}^{m} f\big(u_j(t-1) + x(t)\big)$$
where $i = 1, 2, \ldots, m$. Suppose the external input is $x = \sum_{l=1}^{n} x_l$. Therefore, we have
$$u_i(t) = u_i(t-1) - g\Big(u_i(t-1) + \sum_{l=1}^{n} x_l(t)\Big) + \sum_{j=1, j \neq i}^{m} f\Big(u_j(t-1) + \sum_{l=1}^{n} x_l(t)\Big)$$
To reduce the complexity of the model, assume $g(u) = -f(u)$, so that the consumed term contributes with the same sign as the generated terms. The state equation of neuron $\sigma_i$ is then rewritten as:
$$u_i(t) = u_i(t-1) + \sum_{j=1}^{m} f\Big(u_j(t-1) + \sum_{l=1}^{n} x_l(t)\Big)$$
By parameterizing the NSNP system, a parameterized NSNP system can be obtained; the following is a description of the state equation for neuron σ i :
$$u_i(t) = \alpha \cdot u_i(t-1) + \sum_{j=1}^{m} w_{ij}^{1}\, f\Big(\beta \cdot u_j(t-1) + \sum_{l=1}^{n} w_{lj}^{2}\, x_l(t)\Big)$$
where $\alpha$, $\beta$, $w_{ij}^{1}$, and $w_{lj}^{2}$ are parameters, $i = 1, 2, \ldots, m$. Furthermore, this state equation can be expressed in matrix form:
$$u(t) = \alpha \cdot u(t-1) + W f\big(\beta \cdot u(t-1) + W_{in}\, x(t)\big)$$
where $u(t) = [u_1(t), u_2(t), \ldots, u_m(t)]^T$ and $x(t) = [x_1(t), x_2(t), \ldots, x_n(t)]^T$; $W = (w_{ij}^{1})_{m \times m}$ and $W_{in} = (w_{lj}^{2})_{n \times m}$ represent two parameter matrices; and $\alpha$ and $\beta$ represent two scalar parameters.
This parameterized NSNP system is a dynamical system, where Equation (6) is its kinetic equation or mathematical model. The underlying system generating the PV power data is essentially an unknown complex dynamical system. In this study, this parameterized NSNP system will be used to model this unknown complex dynamical system.
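As an illustrative sketch (not the authors' implementation), one update of the parameterized NSNP reservoir in Equation (6) can be written in NumPy as below. The uniform random initialization range is an assumption, and since the text stores $W_{in}$ as $n \times m$, the code applies its transpose.

```python
import numpy as np

def nsnp_step(u, x, W, W_in, alpha, beta, f=np.tanh):
    """One update of the parameterized NSNP reservoir, Equation (6):
    u(t) = alpha*u(t-1) + W f(beta*u(t-1) + W_in x(t)).
    W_in is stored (n, m) as in the text, hence the transpose."""
    return alpha * u + W @ f(beta * u + W_in.T @ x)

# Illustrative usage with randomly initialized, fixed weights.
rng = np.random.default_rng(0)
m, n = 50, 5                          # reservoir and input sizes (arbitrary)
W = rng.uniform(-0.5, 0.5, (m, m))    # NSNP reservoir weights
W_in = rng.uniform(-0.5, 0.5, (n, m)) # input weights
u = np.zeros(m)                       # initial reservoir state
u = nsnp_step(u, np.ones(n), W, W_in, alpha=0.9, beta=0.5)
```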

2.1.2. Proposed Model

ESNs are a special subclass of recurrent neural networks (RNNs) that process input sequence data by introducing echo neurons as the reservoir. This design gives ESNs better performance and trainability when processing sequence data and overcomes the vanishing-gradient and training-complexity problems of traditional RNNs. Structurally, an ESN is made up of an input layer, a reservoir, and an output (readout) layer. The reservoir contains a substantial number of interconnected neurons.
The model we propose in this paper is an entirely novel recurrent-like model. Figure 3a shows the structure of the NSNP-system-based ESN block. This block is an NSNP system with input and output (readout) layers at the front and back of the system, respectively. In contrast to the existing ESN and its variants, the NSNP-system-based ESN block uses an NSNP system as the reservoir; in other words, the NSNP system provides the dynamics of the proposed model. The proposed prediction model is shown in Figure 3b. In this model, PV data consisting of weather variables and historical power generation data are used as input to the NSNP-system-based ESN block; the block is then trained to determine $W_{out}$ and $W'_{out}$, and finally PV power prediction can be performed.
Assume that there are $n$ input neurons in the input layer and that the $n$ inputs at time $t$ are represented by $x(t) = [x_1(t), x_2(t), \ldots, x_n(t)]^T$. The reservoir (NSNP system) is made up of $m$ spiking neurons, and the output layer consists of $k$ output neurons.
Then, assume that $W = (w_{ij}^{1})_{m \times m} \in \mathbb{R}^{m \times m}$ represents the weight matrix within the NSNP system, $W_{in} = (w_{lj}^{2})_{n \times m} \in \mathbb{R}^{n \times m}$ represents the weight matrix connecting the input layer with the NSNP system, $W_{out} = (w_{ij}^{3})_{m \times k} \in \mathbb{R}^{m \times k}$ represents the weight matrix connecting the NSNP system with the output layer, and $W'_{out} = (w_{ij}^{4})_{n \times k} \in \mathbb{R}^{n \times k}$ represents the weight matrix connecting the input layer with the output layer.
The state equation of the proposed model is given by Equation (6):
$$u(t) = \alpha \cdot u(t-1) + W f\big(\beta \cdot u(t-1) + W_{in}\, x(t)\big)$$
and the output equation is:
$$y(t) = f_o\big(W_{out}\, u(t) + W'_{out}\, x(t)\big)$$
where $f$ represents a sigmoid-type function (such as $\tanh$) and $f_o$ represents a linear or nonlinear function. The two scalar parameters satisfy $\alpha \in [0, 1]$ and $\beta \in [0, 1]$. $y(t) = [y_1(t), y_2(t), \ldots, y_k(t)]^T$ represents the output vector at time $t$, and $W_{in} \in \mathbb{R}^{n \times m}$, $W \in \mathbb{R}^{m \times m}$, $W_{out} \in \mathbb{R}^{m \times k}$, and $W'_{out} \in \mathbb{R}^{n \times k}$ represent the four weight matrices. In this study, we set $f_o$ to be the identity function, i.e., $f_o(x) = x$.
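A minimal sketch of the output equation follows, with $f_o$ taken as the identity as in the paper. The variable names are illustrative; since the text stores $W_{out}$ as $m \times k$ and $W'_{out}$ as $n \times k$, the code applies their transposes to map the $m$ reservoir states and $n$ inputs onto the $k$ outputs.

```python
import numpy as np

def readout(u, x, W_out, W_out_p, f_o=lambda z: z):
    """Output equation: y(t) = f_o(W_out u(t) + W'_out x(t)),
    with the direct input-to-output term W'_out x(t) included."""
    return f_o(W_out.T @ u + W_out_p.T @ x)

# Toy example: m = 3 reservoir states, n = 2 inputs, k = 2 outputs.
W_out = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])  # (m, k)
W_out_p = np.zeros((2, 2))                              # (n, k)
y = readout(np.array([2.0, 3.0, 4.0]), np.array([5.0, 6.0]), W_out, W_out_p)
```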

2.1.3. Model Learning

The proposed model is trained using a supervised learning algorithm. As shown in Figure 3b, the initialization stage and the training stage are the two parts of the proposed model learning process.
The NSNP reservoir weight matrix $W$ and the input weight matrix $W_{in}$ are both generated at random during the initialization stage. The weight matrices $W$ and $W_{in}$ remain unchanged after initialization.
The second stage is the training stage, the purpose of which is to obtain the best possible output weight matrices $W_{out}$ and $W'_{out}$ through training.
Assume that $X(t) = [u(t)^T, x(t)^T]^T \in \mathbb{R}^{m+n}$ and that $\bar{W}_{out} = [W_{out}^T, W'^{\,T}_{out}]^T$ represents the combined output weight matrix, $x(t) = [x_1(t), x_2(t), \ldots, x_n(t)]^T \in \mathbb{R}^{n}$ represents the input vector at time $t$, and $y(t) = [y_1(t), y_2(t), \ldots, y_k(t)]^T \in \mathbb{R}^{k}$ denotes its output. Then, the proposed model's output is stated as
$$y(t) = \bar{W}_{out}\, X(t)$$
Assume that the target outputs are $y(1), y(2), \ldots, y(M)$ and that there are $M$ available sequence data $x(1), x(2), \ldots, x(M)$. Let $H = [X(1), X(2), \ldots, X(M)]$ and, likewise, $Y = [y(1), y(2), \ldots, y(M)]$. Therefore, we get
$$Y = \bar{W}_{out}\, H$$
where $\bar{W}_{out}$ can be obtained by Equation (11):
$$\bar{W}_{out} = (H^T H)^{-1} H^T Y$$
where $(H^T H)^{-1}$ stands for the inverse of $H^T H$. The model can easily overfit if the dimensions of matrix $H$ are too large. To solve this problem, a regularization term can be introduced, so the regularized objective function shown below is used in the proposed model:
$$J(\bar{W}_{out}) = \frac{1}{2}\,\big\|Y - \bar{W}_{out} H\big\|^2 + \frac{1}{2}\,\lambda\,\big\|\bar{W}_{out}\big\|^2$$
where $\lambda$ represents a regularization parameter. Thus, the output weight matrix $\bar{W}_{out}$ is obtained as:
$$\bar{W}_{out} = (H^T H + \lambda I)^{-1} H^T Y$$
where $I$ denotes the identity matrix. In the proposed model, similar to an ESN, the Ridge Regression algorithm is used to compute the output matrix $\bar{W}_{out}$, i.e.,
$$\bar{W}_{out}^{*} = \arg\min_{\bar{W}_{out}} J(\bar{W}_{out}) = \arg\min_{\bar{W}_{out}} \left\{ \frac{1}{2}\,\big\|Y - \bar{W}_{out} H\big\|^2 + \frac{1}{2}\,\lambda\,\big\|\bar{W}_{out}\big\|^2 \right\}$$
After obtaining the optimal output weights, the proposed model can predict the data in the test set. In the experiments, considering the randomness introduced by random initialization, each group of experiments was carried out 30 times; that is, the 30 runs correspond to 30 different output matrices $\bar{W}_{out}^{*}$ and thus 30 different sets of predicted values. The final evaluation was based on the average of the results of these 30 runs.
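The closed-form ridge solution of Equation (13) can be sketched as below. This is an illustrative sketch (not the authors' code), assuming the rows-as-samples convention: each row of `H` is a collected state $X(t) = [u(t);\, x(t)]$ and each row of `Y` the corresponding target.

```python
import numpy as np

def ridge_readout(H, Y, lam):
    """Closed-form ridge regression, Equation (13):
    W = (H^T H + lam * I)^{-1} H^T Y.
    H: (M, m+n) collected states, Y: (M, k) targets."""
    d = H.shape[1]
    return np.linalg.solve(H.T @ H + lam * np.eye(d), H.T @ Y)

# Sanity check: recover a known linear readout from noise-free data.
rng = np.random.default_rng(1)
H = rng.normal(size=(200, 6))
W_true = rng.normal(size=(6, 2))
W_hat = ridge_readout(H, H @ W_true, lam=1e-8)
```

With a tiny `lam`, the ridge estimate coincides with the ordinary least squares solution; a larger `lam` trades fit for robustness against overfitting, as described above.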

3. Case Study

Three benchmark datasets from three PV systems were employed to evaluate the prediction performance of the proposed model. The three benchmark datasets have been extensively utilized in existing related work. To contrast the proposed model with other existing baseline prediction models, the experiments on the three datasets are analyzed as three cases.

3.1. Case Study 1

3.1.1. Dataset

The input data are from DKASC Alice Springs PV system site 2 [35]. Alice Springs receives plenty of sunshine throughout the year, with long daylight hours in the summer (December through February). Fall and spring are sunny, while winter brings shorter daylight and longer nights. With data collected every 5 min, this dataset contains 103,954 samples from the whole year of 2019. The primary components of the data are the climate temperature (°C), active power (kW), wind direction (°), average current phase (A), global horizontal radiation (W/m²·sr), climate relative humidity (%), etc.
Each characteristic was carefully evaluated through repeated trials in this case study. Five input characteristics were chosen: the global horizontal irradiance, active power, weather temperature, relative humidity, and average current phase. These characteristics, particularly the average current phase and the global horizontal radiation, have a direct effect on PV power production. The input dataset was split into a training set and a testing set, containing 90% and 10% of the data, respectively.

3.1.2. Performance Metrics

Three performance metrics, the mean absolute error ($MAE$), root mean square error ($RMSE$), and coefficient of determination ($R^2$), are employed to assess the performance of the model. Their definitions are as follows:
$$MAE = \frac{1}{N} \sum_{t=1}^{N} |p_t - \bar{p}_t|$$
$$RMSE = \sqrt{\frac{1}{N} \sum_{t=1}^{N} (p_t - \bar{p}_t)^2}$$
$$R^2 = 1 - \frac{\sum_{t=1}^{N} (p_t - \bar{p}_t)^2}{\sum_{t=1}^{N} (p_t - \bar{p}_{period})^2}$$
where N represents the number of power samples used to determine the prediction error, p t represents the real value of PV power, p ¯ t represents the forecast value of PV power, and p ¯ p e r i o d represents the average value during the prediction period.
The $RMSE$ measures the average magnitude of the deviation between the predicted values of a model and the real values. The $RMSE$ is more sensitive to large differences between predicted and real values and is easily influenced by extreme values.
The $MAE$ reflects the average absolute deviation between the real and predicted values. The $MAE$ depicts the real condition of the error between actual and predicted values more accurately and is less susceptible to outliers than the $RMSE$.
$R^2$ measures the agreement between predicted and actual values, reflecting how well the model fits the data.
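The three metrics can be computed directly from their definitions; the following is a minimal NumPy sketch (not the authors' code), with `p` the real values, `p_hat` the forecasts, and the mean of `p` standing in for $\bar{p}_{period}$:

```python
import numpy as np

def mae(p, p_hat):
    return np.mean(np.abs(p - p_hat))

def rmse(p, p_hat):
    return np.sqrt(np.mean((p - p_hat) ** 2))

def r2(p, p_hat):
    # 1 - (sum of squared errors) / (total sum of squares around the mean)
    return 1.0 - np.sum((p - p_hat) ** 2) / np.sum((p - np.mean(p)) ** 2)

p = np.array([1.0, 2.0, 3.0, 4.0])
p_hat = p + 1.0  # every forecast off by 1 kW
```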

3.1.3. Experimental Results

To fully illustrate the superiority of the proposed model, the data in this case were processed into three different resolutions: 5-min resolution (34,637 data points), 10-min resolution (51,984 data points), and 15-min resolution (34,637 data points). The proposed model was compared with four baseline models: the CNN [36], CNN_LSTM [19], QK_CNN [17], and QK_CNN (FC) [17]. The CNN model used for comparison here employs two convolutional layers, two FC layers, and two maximum pooling layers, and the CNN portion of the CNN_LSTM model is designed consistently with this CNN.
Table 1 provides the best values of the hyperparameters utilized in Case 1. A grid search was employed to find the best value of each hyperparameter. The six hyperparameters of the proposed model are as follows: (i) $\alpha$ and $\beta$, the two scalar parameters; (ii) $n$, the number of neurons in the NSNP system; (iii) $\rho$ and $\sigma$, the spectral radius and the proportion of non-zero connections in the initialization stage, respectively; and (iv) $\lambda$, the regularization parameter.
The prediction results of the proposed model and the four baseline models are given in Table 2, which includes the prediction results at the 15 min, 10 min, and 5 min resolutions. The experimental results for the four baseline models are taken from the original literature. For the 15 min and 5 min resolutions, the proposed model surpasses the four baseline models in all indicators. For the 10 min resolution, the proposed model outperforms the four baseline models in the MAE and $R^2$ metrics, but it performs marginally worse than the CNN and QK_CNN in the RMSE metric.
From Figure 4a–c, we can see the prediction results of the proposed model on the test set at 15 min, 10 min, and 5 min resolutions, respectively, where the original and prediction curves of every group of PV power on the test set are plotted on the left, while their absolute error curves are plotted on the right. From these figures, we can observe that the predicted curves almost coincide with the original curves at the three resolutions. This suggests that the proposed model can capture the fluctuations in the PV data well. Also, we see lower absolute prediction errors at the three resolutions.

3.2. Case Study 2

3.2.1. Dataset

This dataset is derived from four years (2014–2017) of data from Alice Springs 1B in Central Australia [35] at a resolution of 5 min. The data have 12 characteristics, including the active power (kW), current phase average (A), climatic temperature (°C), climatic relative humidity (%), wind speed (m/s), horizontal diffuse radiation (W/m²·sr), etc. Using all available characteristics as model inputs, the data were processed into eight sets of different sizes, the smallest containing 52,119 data points and the largest containing 382,501 data points. A total of 90% of each set was used as the training set, and the remaining 10% as the test set.

3.2.2. Performance Metrics

For comparison with the baseline models, the MAE (mean absolute error), RMSE (root mean square error), and MAPE (mean absolute percentage error) are utilized as assessment measures for these trials. The MAE and RMSE are defined in Equations (15) and (16). The MAPE is defined as

$$\mathrm{MAPE} = \frac{1}{N}\sum_{t=1}^{N}\left|\frac{p_t - \bar{p}_t}{p_t}\right|$$

where $N$ denotes the number of test specimens, $p_t$ denotes the real PV power value, and $\bar{p}_t$ denotes the predicted value.
MAPE indicates the mean prediction error per sample as a percentage of the true value. A lower MAPE value indicates a more accurate predictive model, namely a smaller relative error between the predicted and true values.
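A minimal implementation of this MAPE definition (assuming strictly positive true values, since the metric divides by the true value):

```python
import numpy as np

def mape(p_true, p_pred):
    """Mean absolute percentage error: mean of |(true - pred) / true|."""
    p_true = np.asarray(p_true, dtype=float)
    p_pred = np.asarray(p_pred, dtype=float)
    return float(np.mean(np.abs((p_true - p_pred) / p_true)))

# Toy check: each prediction is off by exactly 10% of the true value.
print(round(mape([1.0, 2.0, 4.0], [0.9, 2.2, 3.6]), 6))  # 0.1
```

Note that near-zero true power (e.g., at night) inflates MAPE, which is one plausible reason the MAPE comparisons below behave differently from the RMSE and MAE ones.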

3.2.3. Experimental Results

Table 3 provides the parameter settings for the proposed model in this case study.
For the eight input sequences, the proposed model was compared with three baseline models: LSTM, the CNN, and CLSTM; their experimental results are taken from Ref. [19]. Table 4 displays the comparison, from which the following can be observed: (i) the proposed model obtains the lowest RMSE and MAE values for all eight input sequences; (ii) LSTM achieves the best MAPE value on 1 Y, the CNN achieves the best value on 1.5 Y, and CLSTM achieves the best MAPE values on the other six input sequences. Overall, the proposed model demonstrates a superior forecasting ability on the RMSE and MAE metrics; on the MAPE metric, however, it performs relatively poorly.
Additionally, Table 4 shows that for the LSTM, CNN, and CLSTM models, accuracy does not keep increasing with the input sequence length but instead falls. This may be attributable to the characteristics of the time series itself: once a certain amount of data has been processed, the earliest inputs lie far from the outputs, so the correlation between them is weak, and adding more data at that point can worsen rather than improve the prediction. The predictive accuracy of the proposed model, however, is far less affected by the input sequence length than that of the other three models. This may be due to the stronger ability of the NSNP system embedded in the proposed model to extract time-series features and long-term dependencies.
Figure 5a,b present the prediction performance of the proposed model at 0.5 Y and 4 Y, respectively, where the original and predicted curves for every group of PV power are plotted on the left, and their absolute error curves are plotted on the right. The outcomes show that the proposed model is highly capable of capturing the variations in the PV data series, and has a small absolute error of prediction.

3.3. Case Study 3

3.3.1. Dataset

This dataset is formed from data from the PV system of DKASC, Alice Springs [35] in Australia from 1 June 2014 to 12 June 2016. The following six variables are chosen to be the input of the proposed model: PV power (P), relative humidity ( R H ), diffuse horizontal radiation ( D H R ), ambient temperature ( A T ), wind speed ( W S ), and global horizontal radiation ( G H R ). In this case, the training set includes the data from 1 June 2014 to 31 May 2015, and the test set contains the data from 1 June 2015 to 12 June 2016. Thus, the training set and test set consist of 365-day and 378-day weather conditions and PV power generation, respectively.

3.3.2. Performance Metrics

Three assessment measures were employed for comparison with the baseline models: the root mean square error (RMSE), mean absolute percentage error (MAPE), and mean bias error (MBE). The RMSE and MAPE are given in Equations (16) and (18), respectively. The MBE is defined as

$$\mathrm{MBE} = \frac{1}{N}\sum_{t=1}^{N}\left(p_t - \bar{p}_t\right)$$

where $N$ represents the prediction horizon and $p_t$ and $\bar{p}_t$ represent the real and predicted PV power output values at time $t$, respectively.
The MBE is an indicator used to evaluate the bias of a prediction model: it measures the average deviation between the real and predicted values. With this actual-minus-predicted definition, a positive MBE indicates that the model underpredicts on average, while a negative MBE indicates overprediction.
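A minimal sketch of the MBE, following the actual-minus-predicted form of the equation above; under this sign convention, a model that consistently predicts too high yields a negative MBE.

```python
import numpy as np

def mbe(p_true, p_pred):
    """Mean bias error: average of (true - predicted)."""
    p_true = np.asarray(p_true, dtype=float)
    p_pred = np.asarray(p_pred, dtype=float)
    return float(np.mean(p_true - p_pred))

# A model that always predicts 0.5 kW too high gives MBE = -0.5.
print(mbe([1.0, 2.0, 3.0], [1.5, 2.5, 3.5]))  # -0.5
```

Because positive and negative errors cancel, MBE measures bias only; it is read alongside RMSE and MAPE rather than in place of them.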

3.3.3. Experimental Results

Table 5 provides the parameter settings associated with the proposed model. To assess its prediction performance, the model was compared with five baseline models across the four seasons of spring, summer, autumn, and winter. The five baseline models are the WPD-LSTM model, the LSTM network, the GRU, the RNN, and the multi-layer perceptron (MLP).
Seasonal variations affect PV power generation, so the proposed model and the five baseline models were evaluated season by season. Following the Australian seasons, the test set was separated into four subsets: winter (June–August), spring (September–November), summer (December–February), and autumn (March–May). Figure 6 shows the prediction results of the proposed model for each season, with the original and predicted curves of every group of PV power on the left and their absolute error curves on the right. From Figure 6, we observe that the predicted curves for all four seasons almost coincide with the original curves, with the best fit occurring in spring; the proposed model thus captures the shifting trends in the data well. Additionally, the absolute prediction errors are low across all four seasons.
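The month-based seasonal split described above can be sketched with pandas. The toy daily index and column name are assumptions for illustration; the real data are the 5-min DKASC records.

```python
import pandas as pd
import numpy as np

# Toy daily index spanning the test year (1 June 2015 - 31 May 2016).
idx = pd.date_range("2015-06-01", "2016-05-31", freq="D")
df = pd.DataFrame({"power": np.random.rand(len(idx))}, index=idx)

# Australian seasons expressed as month groups.
seasons = {
    "winter": [6, 7, 8],
    "spring": [9, 10, 11],
    "summer": [12, 1, 2],
    "autumn": [3, 4, 5],
}
subsets = {name: df[df.index.month.isin(months)]
           for name, months in seasons.items()}

print(sorted(subsets))  # ['autumn', 'spring', 'summer', 'winter']
```

Selecting by `index.month` keeps the split purely calendar-based, so every test sample lands in exactly one seasonal subset.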
Table 6 provides the results of the proposed model and the five baseline models (WPD-LSTM, LSTM, GRU, RNN, and MLP) for each season; the baseline results are taken from Ref. [18]. The comparison shows that, on the MBE, MAPE, and RMSE metrics, the proposed model outperforms all five baseline models in every season. For example, averaged over the test set, the RMSE of the proposed model is 0.0692, which is superior to the 1.0382, 1.0351, 1.0581, 1.0861, and 0.2357 of LSTM, GRU, the RNN, MLP, and WPD-LSTM, respectively. Similar results hold for the other two metrics (MAPE and MBE).
Over the entire test set, the MBE metric of the proposed model ranges from 0.0013 to 0.0074, and the absolute values of both extremes are within 1%, implying that the model has a strong fitting capability. Compared with the five baseline models, the proposed model has a significantly lower absolute MBE, showing that its prediction bias is very small. Averaged over the entire test set, the same conclusion holds for the MAPE and RMSE metrics.

4. Conclusions

PV power is a renewable and clean energy source. Accurate PV power forecasting is crucial for the optimized operation of grid-connected PV generation; it also reduces energy waste and promotes sustainable energy utilization. Given the inherent randomness and fluctuation in PV power, predicting it is a challenging task, and advanced prediction technology provides operators with essential foresight, aiding them in more effectively strategizing, scheduling, and maintaining PV generation systems. NSNP systems are a class of neural-like computing models motivated by the nonlinear mechanism of spiking neurons; their distinguishing feature is the nonlinear spike mechanism, which endows them with nonlinear dynamics. Traditional deep neural networks struggle to accurately mine the nonlinear characteristics of PV sequences. To address this problem and enhance the prediction accuracy of PV power, a novel method to forecast short-term PV power using an NSNP-system-based ESN model has been proposed. Moreover, the NSNP system, as a variant of SNP systems, is likewise inspired by biological neurons, and its application to short-term PV prediction not only improves technical efficacy but also incorporates ecological thinking. With the proposed model, we not only find a new way to improve the accuracy of short-term PV power prediction but also open a new research direction in sustainability research.
To comprehensively assess the performance of the proposed model, in this paper, we collected three distinct datasets from Alice Springs, Australia. The proposed model was compared with 11 baseline models under the same conditions. Five evaluation metrics (RMSE, MAE, MAPE, MBE, R 2 ) were employed to assess the models.
(1) Validation on the first dataset confirmed that the proposed model exhibits high prediction accuracy across three different resolutions (5 min, 10 min, 15 min).
(2) Validation on the second dataset demonstrated that the proposed model exhibits superior predictive performance across various time-series lengths (0.5 Y–4 Y).
(3) Validation on the third dataset revealed that, across the four seasons (spring, summer, autumn, and winter), the proposed model demonstrates superior performance in both forecasting accuracy and stability.
Through a comprehensive analysis of experimental results from three different datasets, the superior performance of the proposed model in short-term PV power prediction tasks has been confirmed. This indicates that the approach, which combines an ESN system and the nonlinear spiking mechanism of NSNP, is a promising new method. It can be utilized to improve the accuracy and reliability of short-term PV power predictions.
Regrettably, on the second dataset, the proposed model performed poorly on the MAPE metric. This may be attributed to the dataset's large size, as the model may struggle to adapt to prolonged environmental changes, suggesting the need for a more sophisticated model to accurately capture the long-term trends in PV power. As mentioned earlier, there are many variants of SNP systems; to better extract nonlinear features from large-scale data, developing more SNP variants and integrating them with deep neural network models is a promising avenue for future research.
It should be noted that this paper focuses on the application of an NSNP system; the model's performance may be affected by the parameter settings, which are obtained via the grid search method, so careful parameter tuning is needed in practical applications. Furthermore, although the proposed model has achieved remarkable results in the current study, we recognize that there is still room for improvement. One potential direction is to introduce an attention mechanism to capture the more important parts of the PV data more precisely.

Author Contributions

Conceptualization, Y.G. and H.P.; methodology, Y.G.; software, L.G.; validation, Y.G., J.W. and H.P.; formal analysis, Y.G.; investigation, Y.G.; resources, Y.G.; data curation, Y.G. and L.G.; writing—original draft preparation, Y.G.; writing—review and editing, Y.G.; visualization, Y.G.; supervision, J.W.; project administration, J.W.; funding acquisition, J.W. All authors have read and agreed to the published version of the manuscript.

Funding

This work was partially supported by the National Natural Science Foundation of China (nos. 62076206 and 62176216) and the Research Fund of Sichuan Science and Technology Project (no. 2022ZYD0115), China.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data that have been used are confidential.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Kabir, E.; Kumar, P.; Kumar, S.; Adelodun, A.A.; Kim, K.H. Solar energy: Potential and future prospects. Renew. Sustain. Energy Rev. 2018, 82, 894–900. [Google Scholar] [CrossRef]
  2. Creutzig, F.; Agoston, P.; Goldschmidt, J.C.; Luderer, G.; Nemet, G.; Pietzcker, R.C. The underestimated potential of solar energy to mitigate climate change. Nat. Energy 2017, 2, 17140. [Google Scholar] [CrossRef]
  3. Stein, G.; Letcher, T.M. Integration of PV generated electricity into national grids. In A Comprehensive Guide to Solar Energy Systems; Academic Press: Cambridge, MA, USA, 2018; pp. 321–332. [Google Scholar]
  4. Cervone, G.; Clemente-Harding, L.; Alessandrini, S.; Delle-Monache, L. Short-term photovoltaic power forecasting using artificial neural networks and an analog ensemble. Renew. Energy 2017, 108, 274–286. [Google Scholar] [CrossRef]
  5. Van-der-Meer, D.W.; Shepero, M.; Svensson, A.; Widén, J.; Munkhammar, J. Probabilistic forecasting of electricity consumption, photovoltaic power generation and net demand of an individual building using Gaussian processes. Appl. Energy 2018, 213, 195–207. [Google Scholar] [CrossRef]
  6. Antonanzas, J.; Osorio, N.; Escobar, R.; Urraca, R.; Martinez-de-Pison, F.J.; Antonanzas-Torres, F. Review of Photovoltaic Power Forecasting. Sol. Energy 2016, 136, 78–111. [Google Scholar] [CrossRef]
  7. Raza, M.Q.; Khosravi, A. A review on artificial intelligence-based load demand forecasting techniques for smart grid and buildings. Renew. Sustain. Energy Rev. 2015, 50, 1352–1372. [Google Scholar] [CrossRef]
  8. De Marcos, R.A.; Bello, A.; Reneses, J. Electricity price forecasting in the short-term hybridising fundamental and econometric modelling. Electr. Power Syst. Res. 2019, 2167, 240–251. [Google Scholar] [CrossRef]
  9. Ben-Taieb, S.; Bontempi, G.; Atiya, A.F.; Sorjamaa, A. A review and comparison of strategies for multi-step ahead time series forecasting based on the NN5 forecasting competition. Expert Syst. Appl. 2012, 39, 7067–7083. [Google Scholar] [CrossRef]
  10. Hodge, B.M.; Brancucci-Martinez-Anido, C.; Wang, Q.; Chartan, E.; Florita, A.; Kiviluoma, J. The combined value of wind and solar power forecasting improvements and electricity storage. Appl. Energy 2018, 214, 1–15. [Google Scholar] [CrossRef]
  11. Yagli, G.M.; Yang, D.; Srinivasan, D. Automatic hourly solar forecasting using machine learning models. Renew. Sustain. Energy Rev. 2019, 105, 487–498. [Google Scholar] [CrossRef]
  12. Chen, S.; Yu, J.; Wang, S. One-dimensional convolutional auto-encoder-based feature learning for fault diagnosis of multivariate processes. J. Process. Control 2020, 87, 54–67. [Google Scholar] [CrossRef]
  13. Yao, G.; Lei, T.; Zhong, J. A review of convolutional-neural-network-based action recognition. Pattern Recognit. Lett. 2019, 118, 14–22. [Google Scholar] [CrossRef]
  14. Xie, C.; Lv, J.; Li, Y.; Sang, Y. Cross-correlation conditional restricted Boltzmann machines for modeling motion style. Knowl. Based Syst. 2018, 159, 259–269. [Google Scholar] [CrossRef]
  15. Zhang, J.; Ling, C.; Li, S. EMG signals based human action recognition via deep belief networks. IFAC Pap. Online 2019, 52, 271–276. [Google Scholar] [CrossRef]
  16. Abdel-Nasser, M.; Mahmoud, K. Accurate photovoltaic power forecasting models using deep LSTM-RNN. Neural Comput. Appl. 2019, 31, 2727–2740. [Google Scholar] [CrossRef]
  17. Ren, X.; Zhang, F.; Zhu, H.; Liu, Y. Quad-kernel deep convolutional neural network for intra-hour photovoltaic power forecasting. Appl. Energy 2022, 323, 119682. [Google Scholar] [CrossRef]
  18. Li, P.; Zhou, K.; Lu, X.; Yang, S. A hybrid deep learning model for short-term PV power forecasting. Appl. Energy 2020, 259, 114216. [Google Scholar] [CrossRef]
  19. Wang, K.; Qi, X.; Liu, H. A comparison of day-ahead photovoltaic power forecasting models based on deep learning neural network. Appl. Energy 2019, 251, 113315. [Google Scholar] [CrossRef]
  20. Liu, Q.; Li, Y.; Jiang, H.; Chen, Y.; Zhang, J. Short-term photovoltaic power forecasting based on multiple mode decomposition and parallel bidirectional long short term combined with convolutional neural networks. Energy 2024, 286, 129580. [Google Scholar] [CrossRef]
  21. Yang, T.; Zhao, Q.; Meng, Y. Ultra-short-term photovoltaic power prediction based on multi-head probSparse self-attention and long short-term memory. J. Phys. Conf. Ser. 2023, 2558, 012007. [Google Scholar] [CrossRef]
  22. Ilias, L.; Sarmas, E.; Marinakis, V.; Askounis, D.; Doukas, H. Unsupervised domain adaptation methods for photovoltaic power forecasting. Appl. Soft Comput. 2023, 149, 110979. [Google Scholar] [CrossRef]
  23. Jaeger, H. The “Echo State” Approach to Analysing and Training Recurrent Neural Networks-with an Erratum Note; Technical Report; German National Research Center for Information Technology GMD: Bonn, Germany, 2001; Volume 148, p. 13. [Google Scholar]
  24. Jaeger, H.; Haas, H. Harnessing nonlinearity: Predicting chaotic systems and saving energy in wireless communication. Science 2004, 304, 78–80. [Google Scholar] [CrossRef] [PubMed]
  25. Jaeger, H.; Lukoševičius, M.; Popovici, D.; Siewert, U. Optimization and applications of echo state networks with leaky-integrator neurons. Neural Netw. 2007, 20, 335–352. [Google Scholar] [CrossRef] [PubMed]
  26. Lukoševicius, M.; Popovici, D.; Jaeger, H.; Siewert, U.; Park, R. Time Warping Invariant Echo State Networks; Technical Report; Planet GmbH: Berlin, Germany, 2006. [Google Scholar]
  27. Sun, X.; Li, T.; Li, Q.; Huang, Y.; Li, Y. Deep belief echo-state network and its application to time series prediction. Knowl. Based Syst. 2017, 130, 17–29. [Google Scholar] [CrossRef]
  28. Xinhui, D.; Shuai, W.; Juan, Z. Research on marine photovoltaic power forecasting based on wavelet transform and echo state network. Pol. Marit. Res. 2017, 24, 53–59. [Google Scholar] [CrossRef]
  29. Jayawardene, I.; Venayagamoorthy, G. Comparison of adaptive neuro-fuzzy inference systems and echo state networks for PV power prediction. Procedia Comput. Sci. 2015, 53, 92–102. [Google Scholar] [CrossRef]
  30. Li, N.; Li, L.; Zhang, F.; Jiao, T.; Wang, S.; Liu, X.; Wu, X. Research on short-term photovoltaic power prediction based on multi-scale similar days and ESN-KELM dual core prediction model. Energy 2023, 277, 127557. [Google Scholar] [CrossRef]
  31. Ionescu, M.; Păun, G.; Yokomori, T. Spiking neural P systems. Fundam. Inform. 2006, 71, 279–308. [Google Scholar]
  32. Peng, H.; Lv, Z.; Li, B.; Li, B.; Luo, X.; Wang, J.; Song, X. Nonlinear spiking neural P systems. Int. J. Neural Syst. 2020, 30, 2050008. [Google Scholar] [CrossRef]
  33. Liu, Q.; Long, L.; Peng, H.; Wang, J.; Yang, Q.; Song, X. Gated spiking neural P systems for time series forecasting. IEEE Trans. Neural Networks Learn. Syst. 2023, 34, 6227–6236. [Google Scholar] [CrossRef]
  34. Long, L.; Lugu, R.; Xiong, X.; Liu, Q.; Peng, H.; Wang, J.; Pérez-Jiménez, M.J. Echo spiking neural P systems. Knowl. Based Syst. 2022, 253, 109568. [Google Scholar] [CrossRef]
  35. Desert Knowledge Australia Centre. Available online: http://dkasolarcentre.com.au/historical-data/download (accessed on 3 January 2024).
  36. Oh, S.L.; Ng, E.Y.K.; Tan, R.S.; Acharya, U.R. Automated diagnosis of arrhythmia using combination of CNN and LSTM techniques with variable length heart beats. Comput. Biol. Med. 2018, 102, 278–287. [Google Scholar] [CrossRef]
Figure 1. Overall framework diagram of the proposed strategy.
Figure 2. NSNP-system composed of m spiking neurons.
Figure 3. (a) NSNP-system-based ESN block. (b) Prediction model.
Figure 4. Prediction results of the proposed model at three resolutions: (a) 15-min resolution; (b) 10-min resolution; (c) 5-min resolution.
Figure 5. Prediction results of the proposed model at two different length sequences: (a) 0.5 Y input sequence; (b) 4 Y input sequence.
Figure 6. Prediction results of the proposed model in the different seasons of (a) winter, (b) spring, (c) summer, and (d) autumn.
Table 1. Setting up the parameters for the proposed model.

Resolution (min) | α    | β     | n  | ρ    | σ    | λ
15               | 0.31 | 0.545 | 51 | 0.98 | 9.5% | 1.0 × 10−6
10               | 0.70 | 0.450 | 51 | 0.98 | 45%  | 1.0 × 10−6
5                | 0.75 | 0.400 | 51 | 0.98 | 9.5% | 1.0 × 10−6
Table 2. Comparison results of the proposed model and baseline models.

Model              | Resolution (min) | MAE   | RMSE  | R²
CNN                | 15               | 0.254 | 0.529 | 0.96
CNN_LSTM           | 15               | 0.268 | 0.547 | 0.96
QK_CNN(FC)         | 15               | 0.236 | 0.529 | 0.96
QK_CNN             | 15               | 0.230 | 0.519 | 0.96
The proposed model | 15               | 0.064 | 0.492 | 0.96
CNN                | 10               | 0.197 | 0.449 | 0.97
CNN_LSTM           | 10               | 0.200 | 0.453 | 0.97
QK_CNN(FC)         | 10               | 0.181 | 0.453 | 0.97
QK_CNN             | 10               | 0.178 | 0.448 | 0.97
The proposed model | 10               | 0.058 | 0.452 | 0.97
CNN                | 5                | 0.133 | 0.353 | 0.98
CNN_LSTM           | 5                | 0.130 | 0.352 | 0.98
QK_CNN(FC)         | 5                | 0.138 | 0.370 | 0.98
QK_CNN             | 5                | 0.124 | 0.351 | 0.98
The proposed model | 5                | 0.039 | 0.348 | 0.98
Table 3. Setting up the parameters for the proposed model.

Model Input Sequence | α    | β    | n  | ρ    | σ   | λ
0.5 Y                | 0.20 | 0.98 | 15 | 0.98 | 40% | 1.0 × 10−6
1 Y                  | 0.70 | 0.60 | 51 | 0.98 | 10% | 1.0 × 10−6
1.5 Y                | 0.50 | 0.10 | 51 | 0.98 | 10% | 1.0 × 10−6
2 Y                  | 0.09 | 0.50 | 51 | 0.98 | 9%  | 1.0 × 10−6
2.5 Y                | 0.09 | 0.60 | 51 | 0.98 | 50% | 1.0 × 10−6
3 Y                  | 0.05 | 0.40 | 51 | 0.98 | 35% | 1.0 × 10−6
3.5 Y                | 0.90 | 0.40 | 51 | 0.98 | 35% | 1.0 × 10−6
4 Y                  | 0.90 | 0.50 | 51 | 0.98 | 35% | 1.0 × 10−6
Table 4. Prediction results of different models with different input sequences (each cell: RMSE / MAE / MAPE).

Input Sequence | LSTM                  | CNN                   | CLSTM                 | The Proposed Model
0.5 Y          | 1.244 / 0.654 / 0.131 | 1.128 / 0.566 / 0.114 | 1.161 / 0.559 / 0.112 | 0.084 / 0.018 / 0.301
1 Y            | 1.393 / 0.616 / 0.103 | 1.563 / 0.640 / 0.111 | 1.434 / 0.628 / 0.105 | 0.132 / 0.025 / 0.474
1.5 Y          | 1.533 / 0.599 / 0.101 | 1.411 / 0.567 / 0.095 | 1.248 / 0.529 / 0.095 | 0.111 / 0.016 / 0.355
2 Y            | 1.320 / 0.457 / 0.068 | 0.983 / 0.452 / 0.059 | 0.941 / 0.397 / 0.052 | 0.132 / 0.020 / 0.662
2.5 Y          | 0.945 / 0.389 / 0.051 | 0.447 / 0.231 / 0.041 | 0.426 / 0.198 / 0.035 | 0.132 / 0.027 / 0.493
3 Y            | 0.398 / 0.181 / 0.032 | 0.367 / 0.140 / 0.025 | 0.343 / 0.126 / 0.022 | 0.131 / 0.016 / 0.519
3.5 Y          | 1.150 / 0.455 / 0.083 | 1.136 / 0.412 / 0.077 | 0.991 / 0.384 / 0.070 | 0.083 / 0.013 / 0.461
4 Y            | 1.465 / 0.565 / 0.089 | 0.971 / 0.478 / 0.083 | 0.886 / 0.405 / 0.080 | 0.079 / 0.010 / 0.707
Table 5. Parameters of the proposed model.

Season | α    | β   | n  | ρ    | σ  | λ
Winter | 0.01 | 0.4 | 51 | 0.98 | 9% | 1.0 × 10−6
Spring | 0.01 | 0.4 | 51 | 0.98 | 9% | 1.0 × 10−6
Summer | 0.01 | 0.4 | 51 | 0.98 | 9% | 1.0 × 10−6
Autumn | 0.01 | 0.4 | 51 | 0.98 | 9% | 1.0 × 10−6
Table 6. PV power prediction results of different models in four different seasons.

Season  | Error | WPD-LSTM | LSTM    | GRU     | RNN     | MLP     | The Proposed Model
Winter  | MBE   | 0.0396   | 0.0474  | 0.0600  | 0.1254  | 0.0451  | 0.0030
Winter  | MAPE  | 1.8681   | 5.0221  | 5.6791  | 6.4869  | 8.4689  | 0.2816
Winter  | RMSE  | 0.1526   | 0.8556  | 0.8471  | 0.8810  | 0.9161  | 0.0600
Spring  | MBE   | 0.0809   | 0.0767  | 0.0982  | 0.0667  | 0.3629  | 0.0013
Spring  | MAPE  | 2.2660   | 5.1596  | 6.1575  | 6.1352  | 9.3335  | 0.2304
Spring  | RMSE  | 0.2454   | 0.9071  | 0.9170  | 0.9340  | 1.0698  | 0.0636
Summer  | MBE   | 0.0436   | 0.0792  | 0.2067  | 0.2108  | 0.3535  | 0.0074
Summer  | MAPE  | 2.8885   | 11.1108 | 12.3369 | 12.1321 | 12.4742 | 0.5025
Summer  | RMSE  | 0.2705   | 1.2504  | 1.2388  | 1.2569  | 1.2630  | 0.0819
Autumn  | MBE   | 0.0436   | 0.1268  | 0.1201  | 0.1718  | 0.0586  | 0.0074
Autumn  | MAPE  | 2.6219   | 5.6311  | 5.460   | 7.7458  | 10.5393 | 0.4524
Autumn  | RMSE  | 0.2221   | 1.0710  | 1.0748  | 1.1022  | 1.0612  | 0.0711
Average | MBE   | 0.0067   | 0.084   | 0.1206  | 0.1442  | 0.1995  | 0.0048
Average | MAPE  | 2.4002   | 7.5978  | 8.5169  | 8.7263  | 10.1575 | 0.3667
Average | RMSE  | 0.2357   | 1.0382  | 1.0351  | 1.0581  | 1.0861  | 0.0692

Share and Cite

MDPI and ACS Style

Gao, Y.; Wang, J.; Guo, L.; Peng, H. Short-Term Photovoltaic Power Prediction Using Nonlinear Spiking Neural P Systems. Sustainability 2024, 16, 1709. https://doi.org/10.3390/su16041709
