4.1. Activation and Reasoning Process of the New Wind Power Generation Forecasting Model
In the engineering application of wind power generation forecasting, the rule activation mechanism of the IBRB model has limitations. Rules are only activated when samples fall within a fixed interval, which coarsens the input information and causes the loss of information about the distribution of input values within the interval. This limits the model’s sensitivity, prevents it from accurately reflecting the complex characteristics of wind power, and makes it difficult to meet the prediction accuracy requirements for wind power grid scheduling.
To address the above issues, the BRB-f model optimizes the rule activation method based on the practical needs of wind power generation forecasting. For each input attribute value, the BRB-f model calculates its membership degree at each reference value using a triangular membership function. This design ensures that each attribute's input activates no more than two reference values, accurately quantifying the matching degree between the input value and each reference value while also achieving smooth transitions between reference values.
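A minimal sketch of this membership calculation is given below; the wind-speed reference grid in the example is hypothetical and serves only to illustrate the at-most-two-activations property:

```python
from bisect import bisect_right

def triangular_memberships(x, refs):
    """Membership of input x at each sorted reference value, following the
    triangular functions described in the text: at most two adjacent
    reference values receive non-zero membership, and the memberships sum to 1."""
    mu = [0.0] * len(refs)
    if x <= refs[0]:
        mu[0] = 1.0
    elif x >= refs[-1]:
        mu[-1] = 1.0
    else:
        k = bisect_right(refs, x) - 1          # index of the left neighbour
        span = refs[k + 1] - refs[k]
        mu[k + 1] = (x - refs[k]) / span       # rises toward the right reference
        mu[k] = 1.0 - mu[k + 1]                # falls away from the left reference
    return mu

# Example: wind speed 6.5 m/s against hypothetical references 0, 5, 10, 15 m/s
print(triangular_memberships(6.5, [0.0, 5.0, 10.0, 15.0]))  # [0.0, 0.7, 0.3, 0.0]
```

Only the two reference values bracketing the input receive non-zero membership, which is what bounds the number of activated rules per attribute.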
The BRB-f model defines a rule for each reference value of each attribute. When an input arrives, each attribute activates one or two reference values based on fuzzy membership degrees. To enhance the reliability of the BRB-f model's inference in wind power generation forecasting, the model introduces evidential reasoning (ER) [21,22] during the inference process, incorporating the reliability of the rules into the ER analytical algorithm. The introduction of ER allows the model to fully integrate wind power generation input information. Once the rules are activated, all activated rules are fused through ER, ultimately outputting the power generation forecasting result. This multi-rule activation and weighted fusion not only ensures that the input data accurately matches the corresponding rules at different membership degrees but also improves the accuracy of the prediction results through the ER inference process. The specific formulas are as follows:
The $i$th piece of evidence $e_i$ can be represented by the following belief distribution:

$$e_i = \left\{\left(D_n, \beta_{n,i}\right),\ n = 1, \dots, N;\ \left(\Theta, \beta_{\Theta,i}\right)\right\}$$

where $\Theta = \{D_1, \dots, D_N\}$ represents the identification framework, $D_n$ represents the $n$th evaluation level of the complex system, $\beta_{n,i}$ represents the belief degree that the result is evaluated as evaluation level $D_n$, and $\beta_{\Theta,i}$ represents the belief degree assigned to the identification framework $\Theta$, indicating global ignorance.
The weight of the evidence is represented as $w_i$, and the reliability of the evidence is represented as $r_i$. They both satisfy $0 \le w_i \le 1$ and $0 \le r_i \le 1$. Therefore, the belief distribution after the combined weighting of evidence weight and evidence reliability can be expressed as follows:

$$m_i = \left\{\left(\theta, \tilde m_{\theta,i}\right),\ \forall \theta \subseteq \Theta;\ \left(P(\Theta), \tilde m_{P(\Theta),i}\right)\right\}$$

where the power set of $\Theta$ is represented by $P(\Theta)$, and the mixed probability mass $\tilde m_{\theta,i}$ of the evidence $e_i$ on the evaluation level $\theta$ is represented as follows:

$$\tilde m_{\theta,i} = \begin{cases} 0, & \theta = \emptyset \\ c_{rw,i}\, w_i \beta_{\theta,i}, & \theta \subseteq \Theta,\ \theta \neq \emptyset \\ c_{rw,i}\left(1 - r_i\right), & \theta = P(\Theta) \end{cases}$$
where $c_{rw,i} = 1/\left(1 + w_i - r_i\right)$ represents the normalization coefficient and satisfies the condition $\sum_{\theta \subseteq \Theta} \tilde m_{\theta,i} + \tilde m_{P(\Theta),i} = 1$. The combined belief degree $\beta_{\theta,e(L)}$ of $L$ pieces of independent evidence is calculated as follows:

$$\beta_{\theta,e(L)} = \frac{\hat m_{\theta,e(L)}}{\sum_{B \subseteq \Theta,\, B \neq \emptyset} \hat m_{B,e(L)}},\qquad \forall \theta \subseteq \Theta,\ \theta \neq \emptyset$$
where $\hat m_{\theta,e(i)} = \left[\left(1 - r_i\right) m_{\theta,e(i-1)} + m_{P(\Theta),e(i-1)}\, \tilde m_{\theta,i}\right] + \sum_{B \cap C = \theta} m_{B,e(i-1)}\, \tilde m_{C,i}$, and the belief degree of evaluation level $\theta$ after merging the first $i$ pieces of evidence is denoted as $\beta_{\theta,e(i)}$. This satisfies $0 \le \beta_{\theta,e(i)} \le 1$ and $\sum_{\theta \subseteq \Theta} \beta_{\theta,e(i)} = 1$.
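The weighting of a single piece of evidence described above can be checked numerically. The sketch below implements only the mixed-mass construction with $c_{rw,i} = 1/(1+w_i-r_i)$; the belief values are illustrative:

```python
def weighted_masses(beta, w, r):
    """Mixed probability masses of one piece of evidence after jointly
    weighting by its weight w and reliability r; `beta` holds the belief
    degrees over the N evaluation levels (complete distribution assumed)."""
    crw = 1.0 / (1.0 + w - r)             # normalisation coefficient c_rw
    masses = [crw * w * b for b in beta]  # mass assigned to each level
    residual = crw * (1.0 - r)            # mass left on the power set P(Theta)
    return masses, residual

masses, residual = weighted_masses([0.6, 0.3, 0.1], w=0.8, r=0.9)
print(masses, residual, sum(masses) + residual)  # total mass is approximately 1
```

The total of the level masses and the residual equals one, which is exactly the normalization condition stated above.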
The above reasoning process yields the following output belief distribution and expected output utility value:

$$y = \sum_{n=1}^{N} u\left(D_n\right) \beta_n$$

where $y$ represents the final expected utility value, and $u(D_n)$ represents the utility at level $D_n$.
Define the set of reference values activated by attribute $x_j$ as $S_j = \left\{A_{j,k} \mid \mu_{j,k}(x_j) > 0\right\}$, where $\mu_{j,k}(\cdot)$ is the triangular membership function of the $k$th reference value of the $j$th attribute. For each combination $c = \left(A_{1,k_1}, \dots, A_{M,k_M}\right) \in S_1 \times \dots \times S_M$, the activation weight is calculated as follows:

$$w_c = \frac{\prod_{j=1}^{M} \mu_{j,k_j}(x_j)}{\sum_{c' \in S_1 \times \dots \times S_M} \prod_{j=1}^{M} \mu_{j,k'_j}(x_j)}$$
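Enumerating the activated combinations and their weights can be sketched as follows; the product-and-normalize weighting mirrors the formula above, and `activation_weights` is a hypothetical helper name rather than anything from the paper:

```python
from itertools import product

def activation_weights(memberships):
    """Enumerate activated reference-value combinations and their weights.
    `memberships` holds, per attribute, the membership at every reference
    value; a combination's weight is the normalised product of the
    memberships of the reference values it picks."""
    activated = [[(k, m) for k, m in enumerate(mu) if m > 0] for mu in memberships]
    combos = []
    for combo in product(*activated):       # Cartesian product over attributes
        idx = tuple(k for k, _ in combo)
        w = 1.0
        for _, m in combo:
            w *= m
        combos.append((idx, w))
    total = sum(w for _, w in combos)
    return [(idx, w / total) for idx, w in combos]

# Two attributes, each activating two reference values
print(activation_weights([[0.0, 0.7, 0.3], [0.4, 0.6, 0.0]]))
```

With two attributes each activating two reference values, four combinations result, and their weights sum to one.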
For the combination $c$, we have a set of evidence $\{e_1, e_2, \dots, e_M\}$, where the parameters of the evidence are as follows: the belief degree distribution is $e_j = \left\{\left(D_n, \beta_{n,j}\right),\ n = 1, \dots, N\right\}$, where $\beta_{n,j}$ represents the belief degree at output level $D_n$, with the weight $w_j$ and reliability $r_j$. The normalization factor is calculated as follows:
Step 1: Calculate the evidence combination term of the $j$th attribute for output level $D_n$:

$$B_{n,j} = m_{n,j} + \bar m_{H,j} + \tilde m_{H,j},\qquad n = 1, \dots, N,\ \ j = 1, \dots, M$$

where $r_j$ represents the reliability of the attribute, $w_j$ represents the weight of the attribute, $\beta_{n,j}$ represents the belief degree of the $j$th attribute corresponding to level $D_n$, and $N$ represents the number of evaluation levels. With the hybrid weight $c_j = w_j/\left(1 + w_j - r_j\right)$, $\bar m_{H,j} = 1 - c_j$ represents the unreliability, $m_{n,j} = c_j \beta_{n,j}$ represents the belief degree assigned to output level $D_n$, and $\tilde m_{H,j} = c_j\left(1 - \sum_{n=1}^{N} \beta_{n,j}\right)$ represents the unassigned belief degree.
Step 2: Calculate the total joint effect of multi-attribute evidence on each output level. For the $N$ output levels, calculate the product of the evidence combination terms over all attributes, and then sum them up:

$$T = \sum_{n=1}^{N} \prod_{j=1}^{M} B_{n,j}$$

where $M$ represents the number of attributes.
Step 3: Calculate the evidence conflict correction item:

$$C = \left(N - 1\right)\prod_{j=1}^{M}\left(\bar m_{H,j} + \tilde m_{H,j}\right)$$

where $N - 1$ represents the conflict dimension, and $\prod_{j=1}^{M}\left(\bar m_{H,j} + \tilde m_{H,j}\right)$ represents the local ignorance conjunction term, that is, the product of the local ignorance of all attributes.
Step 4: Calculate the normalization factor:

$$\mu = \left[T - C\right]^{-1} = \left[\sum_{n=1}^{N}\prod_{j=1}^{M} B_{n,j} - \left(N - 1\right)\prod_{j=1}^{M}\left(\bar m_{H,j} + \tilde m_{H,j}\right)\right]^{-1}$$
The aggregated belief distribution is calculated through the ER process, where the combined belief degree $\beta_n$ corresponding to level $D_n$ is calculated as follows:
Step 1: Calculate the net joint item of the target level. Extract the joint evidence item $\prod_{j=1}^{M} B_{n,j}$ of the target level $D_n$, subtract the joint item of local ignorance shared across all levels, and obtain the net joint item of the target level:

$$\prod_{j=1}^{M} B_{n,j} - \prod_{j=1}^{M}\left(\bar m_{H,j} + \tilde m_{H,j}\right)$$
Step 2: Calculate the net joint term after conflict correction. Multiply the net joint term by the normalization factor $\mu$ to eliminate the interference of evidence conflict.
Step 3: Fix the global unreliability. Divide by $1 - \mu \prod_{j=1}^{M} \bar m_{H,j}$ so that the probability mass left unassigned by the unreliability of all attributes is removed from the final belief degrees.
Step 4: Calculate the aggregate confidence:

$$\beta_n = \frac{\mu\left[\prod_{j=1}^{M} B_{n,j} - \prod_{j=1}^{M}\left(\bar m_{H,j} + \tilde m_{H,j}\right)\right]}{1 - \mu\prod_{j=1}^{M}\bar m_{H,j}}$$

where $n = 1, \dots, N$, $B_{n,j}$ is the abbreviation for $m_{n,j} + \bar m_{H,j} + \tilde m_{H,j}$, and $\mu$ represents the normalization factor.
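Steps 1-4 can be traced end to end in a short script. The hybrid weight $c_j = w_j/(1+w_j-r_j)$ follows the weighted belief distribution given earlier; since the four steps above are a reconstruction, treat this as a sketch of that reconstruction rather than the authors' exact implementation:

```python
def er_combine(beliefs, weights, reliabilities):
    """Analytical ER fusion of M attribute evidences over N output levels,
    following Steps 1-4 in the text. `beliefs[j][n]` is the belief of the
    j-th attribute at the n-th output level; the example numbers are
    illustrative only."""
    M, N = len(beliefs), len(beliefs[0])
    c = [w / (1.0 + w - r) for w, r in zip(weights, reliabilities)]
    m = [[c[j] * beliefs[j][n] for n in range(N)] for j in range(M)]
    m_bar = [1.0 - c[j] for j in range(M)]                        # unreliability mass
    m_tilde = [c[j] * (1.0 - sum(beliefs[j])) for j in range(M)]  # unassigned mass

    # Steps 1-2: joint effect of all attributes on every output level
    joint = []
    for n in range(N):
        p = 1.0
        for j in range(M):
            p *= m[j][n] + m_bar[j] + m_tilde[j]
        joint.append(p)
    # local ignorance conjunction term
    ign = 1.0
    for j in range(M):
        ign *= m_bar[j] + m_tilde[j]
    # Steps 3-4: conflict correction and normalisation factor
    mu = 1.0 / (sum(joint) - (N - 1) * ign)
    # global unreliability term
    bar = 1.0
    for j in range(M):
        bar *= m_bar[j]
    return [mu * (joint[n] - ign) / (1.0 - mu * bar) for n in range(N)]

# Two attributes, three output levels (illustrative numbers)
beta = er_combine([[0.6, 0.3, 0.1], [0.2, 0.5, 0.3]], [1.0, 1.0], [0.9, 0.8])
print(beta, sum(beta))
```

When each attribute's beliefs are complete, the fused belief degrees sum to one, which is a useful sanity check on the normalization.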
The predicted output of the combination $c$ is calculated as follows:

$$\hat y_c = \sum_{n=1}^{N} u\left(D_n\right)\beta_n$$

where $u(D_n)$ is the representative value of the output level $D_n$.
The final output is the weighted average of all activation combination outputs; the formula is as follows:

$$\hat y = \sum_{c} w_c\,\hat y_c$$

where $w_c$ is the activation weight of combination $c$.
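Putting the utility projection and the weighted average together, a minimal sketch is shown below; the representative power values and activation weights are assumed for illustration, not taken from the paper:

```python
def expected_output(beta, utilities):
    """Expected utility of one activated combination."""
    return sum(u * b for u, b in zip(utilities, beta))

def brb_f_predict(combo_weights, combo_beliefs, utilities):
    """Final BRB-f output: the activation-weight-weighted average of the
    per-combination expected utilities."""
    return sum(w * expected_output(b, utilities)
               for w, b in zip(combo_weights, combo_beliefs))

utilities = [0.0, 50.0, 100.0]            # representative power levels (kW), assumed
weights = [0.28, 0.42, 0.12, 0.18]        # activation weights, sum to 1
beliefs = [[0.6, 0.3, 0.1], [0.2, 0.5, 0.3],
           [0.1, 0.6, 0.3], [0.05, 0.45, 0.5]]
print(brb_f_predict(weights, beliefs, utilities))
```

Each combination first collapses its belief distribution into a scalar expected power, and the activation weights then blend those scalars into the final forecast.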
4.2. Optimization Model of the New Wind Power Generation Forecasting Model
In the engineering practice of complex system modeling and prediction, the main goal of model optimization is to improve prediction accuracy. Given that the projection covariance matrix adaptation evolution strategy (P-CMA-ES) algorithm demonstrates fast convergence, high optimization precision, and strong robustness in continuous space optimization problems, it can effectively enhance the efficiency of parameter tuning for the BRB-f model. Therefore, this algorithm is used to optimize the BRB-f model.
To build an optimization model, the function to be optimized must first be clarified. From the engineering requirement of minimizing prediction error, the difference between the BRB-f model's predicted values and the actual values is the main optimization metric, represented as the mean square error (MSE). Based on this, the objective function to be optimized is defined as a global minimization function, with the expression as follows:

$$\min_{\beta_{n,k},\, r_k,\, \theta_k} \mathrm{MSE}\qquad \text{s.t.}\ \ 0 \le \beta_{n,k} \le 1,\ \ \sum_{n=1}^{N} \beta_{n,k} \le 1,\ \ 0 \le r_k \le 1,\ \ 0 \le \theta_k \le 1$$

In the above formula, the calculation method of MSE is as follows:

$$\mathrm{MSE} = \frac{1}{T} \sum_{t=1}^{T} \left(\hat y_t - y_t\right)^{2}$$
where $T$ represents the total number of training samples, $\hat y_t$ is the predicted output value of the BRB-f model, and $y_t$ is the actual value of the complex system. Based on this objective function, the optimization process of the P-CMA-ES algorithm is as follows:
First, parameter initialization. In the BRB-f method, the parameters to be optimized directly affect the model's prediction performance and stability, mainly including key parameters such as belief degrees, rule reliabilities, and rule weights. To facilitate efficient algorithmic search, the set of parameters to be optimized is represented in vector form as follows:

$$\Omega = \left[\beta_{1,1}, \dots, \beta_{N,K},\ r_1, \dots, r_K,\ \theta_1, \dots, \theta_K\right]^{T}$$

where $K$ is the number of rules.
Second, the sampling operation. Each generation of candidate parameters is generated through normal distribution sampling, ensuring that the samples cover the parameter space while also considering search efficiency, which can be expressed as

$$\Omega_{g+1}^{i} \sim \mathbf{m}^{g} + \sigma^{g}\,\mathcal{N}\left(0, \mathbf{C}^{g}\right),\qquad i = 1, \dots, \lambda$$

where $\Omega_{g+1}^{i}$ is the $i$th solution in the $(g+1)$th generation optimization, $\sigma^{g}$ is the step size, $\mathbf{m}^{g}$ is the mean of the search distribution in the $g$th generation, $\mathbf{C}^{g}$ is the covariance matrix, $\mathcal{N}(\cdot)$ represents the normal distribution, and $\lambda$ is the number of offspring.
Third, constrained projection. Since the parameters of the BRB-f model have physical meaning constraints, candidate solutions need to be projected onto a feasible hyperplane to prevent invalid parameter combinations from causing the model to fail. The specific implementation is as follows:

$$\Omega_{g+1}^{i}\left(1 + n_e \times (j - 1) : n_e \times j\right) = \Omega_{g+1}^{i}\left(1 + n_e \times (j - 1) : n_e \times j\right) - \mathbf{A}_e^{T}\left(\mathbf{A}_e \mathbf{A}_e^{T}\right)^{-1}\Omega_{g+1}^{i}\left(1 + n_e \times (j - 1) : n_e \times j\right)\mathbf{A}_e$$

where $\mathbf{A}_e = [1, \dots, 1]$ represents a parameter vector with all ones, $n_e$ is the number of variables with constraints, and $j = 1, \dots, m_e$, with $m_e$ being the count of equality constraints.
Fourth, mean update. Update the mean of the next generation of parameters through a weighted average to accelerate convergence. The specific operation is as follows:

$$\mathbf{m}^{g+1} = \sum_{i=1}^{\tau} h_i\,\Omega_{g+1}^{i}$$

where $h_i$ represents the weight coefficient, $\tau$ is the population size of the selected offspring, and $\Omega_{g+1}^{i}$ is the $i$th best solution among the $\lambda$ solutions in the $(g+1)$th generation.
Fifth, updating the covariance matrix. Update the covariance matrix based on the information from the population's evolution, so that the search region contracts towards the optimal solution. The specific step is as follows:

$$\mathbf{C}^{g+1} = \left(1 - c_1 - c_\tau\right)\mathbf{C}^{g} + c_1\,\mathbf{p}_c^{g+1}\left(\mathbf{p}_c^{g+1}\right)^{T} + c_\tau \sum_{i=1}^{\tau} h_i\left(\frac{\Omega_{g+1}^{i} - \mathbf{m}^{g}}{\sigma^{g}}\right)\left(\frac{\Omega_{g+1}^{i} - \mathbf{m}^{g}}{\sigma^{g}}\right)^{T}$$

where $\sigma^{g}$ represents the step size in the $g$th generation, $\mathbf{p}_c^{g+1}$ is the evolutionary path in the $(g+1)$th generation, $c_1$ and $c_\tau$ are the learning rates, and $\Omega_{g+1}^{i}$ represents the $i$th solution vector among the $\tau$ selected solution vectors in the $(g+1)$th generation.
Finally, iterate the above steps until the optimization is complete.
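As a concrete illustration, the following sketch strips the loop above to its skeleton: normal-distribution sampling, equality-constraint projection, and weighted mean update, with covariance and step-size adaptation reduced to a simple decay. The MSE objective and the target belief vector are toy stand-ins, so this shows the flow of the algorithm, not its full machinery:

```python
import random

random.seed(0)

def mse(pred, actual):
    """Mean square error, matching the objective function above."""
    return sum((p - a) ** 2 for p, a in zip(pred, actual)) / len(actual)

def p_cma_es_sketch(objective, x0, sigma=0.3, lam=20, tau=10, gens=60):
    """Greatly simplified P-CMA-ES-style loop: sample offspring, project
    each candidate onto the sum-to-one hyperplane, and move the mean
    toward the best tau candidates via rank-based weights."""
    dim = len(x0)
    mean = list(x0)
    # rank-based weight coefficients h_i (sum to 1, favour better solutions)
    h = [2.0 * (tau - i) / (tau * (tau + 1)) for i in range(tau)]
    for _ in range(gens):
        pop = [[m + sigma * random.gauss(0.0, 1.0) for m in mean] for _ in range(lam)]
        for cand in pop:                       # equality-constraint projection
            shift = (sum(cand) - 1.0) / dim
            for j in range(dim):
                cand[j] = max(0.0, cand[j] - shift)
        pop.sort(key=objective)                # rank offspring by fitness
        mean = [sum(h[i] * pop[i][j] for i in range(tau)) for j in range(dim)]
        sigma *= 0.95                          # crude stand-in for step-size control
    return mean

# Toy problem: recover the belief vector that minimises the MSE objective
target = [0.7, 0.2, 0.1]
best = p_cma_es_sketch(lambda x: mse(x, target), [1 / 3, 1 / 3, 1 / 3])
print([round(v, 3) for v in best])
```

Even without covariance adaptation, the sample-project-select-update cycle converges close to the target vector while keeping candidates on the feasible hyperplane.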
Based on the aforementioned engineering optimization process, a BRB-f model optimization structure that balances prediction accuracy with engineering feasibility can be obtained, as shown in
Figure 5.
4.3. Modeling Process of the New Wind Power Generation Forecasting Model
Step 1: Construct the initial BRB-f model. Build the initial framework for wind power generation forecasting based on BRB-f. Model parameters are determined by engineering experts based on the characteristics of wind power generation data, including the division of attribute reference values and the initialization of belief degrees.
Step 2: Introduce fuzzy membership. Calculate the membership of each attribute input to each attribute reference value using fuzzy membership functions. This eliminates the boundary jumps of traditional interval matching, thereby achieving a smooth transition.
Step 3: Rule activation and weight calculation. Determine the contribution weight of each rule based on the degree of matching, preparing a weighted set of evidence for subsequent reasoning, thus avoiding bias from a single rule.
Step 4: Evidence reasoning and rule fusion. Synthesize the evidence while accounting for rule reliabilities, perform weighted averaging, and generate a final prediction result that comprehensively reflects all pieces of evidence.
Step 5: Belief degree distribution calculation and parameter optimization. Calculate the belief distribution based on the activated rules, and introduce optimization algorithms to train and optimize the main model parameters, enhancing the model’s prediction accuracy and robustness.
Step 6: Model performance validation. Conduct a comprehensive performance evaluation of the optimized BRB-f model on a wind turbine power generation forecasting case, focusing on the accuracy and stability of the forecasts.
The modeling process of the BRB-f model is shown in
Figure 6.
4.4. Computational Complexity and Scalability Analysis
The proposed BRB-f framework inevitably incurs additional computational overhead due to the introduction of fuzzy membership, multiple rule activation and fusion, and parameter optimization. To systematically evaluate the practicality of this method, this section will analyze its computational cost, space complexity, and scalability in large-scale application scenarios.
- 1.
Fuzzy membership calculation
Let the number of reference values for a single feature be $R$; then the computational complexity for a single sample with $M$ features is $O(MR)$, and the total complexity for $T$ samples is $O(TMR)$. Memory consumption increases linearly with the number of samples, feature dimensions, and the number of reference values.
- 2.
Multi-rule activation and fusion
Combine the activated rules of multiple attributes and calculate the output belief degree using the ER algorithm. Each attribute input value can activate up to 2 rules, so the number of rule combinations for a single sample is at most $2^{M}$. For a dataset containing $M$ attributes and $N$ evaluation results, the computational complexity of a single inference using the ER algorithm is $O(MN)$, and the total complexity is $O\left(T \cdot 2^{M} \cdot MN\right)$. The complexity increases exponentially with the number of features in the dataset.
- 3.
Parameter optimization
BRB-f optimizes the belief degrees, rule reliabilities, rule weights, and other parameters through the P-CMA-ES algorithm. For a model with $K$ rules and $N$ evaluation results, the dimension of the optimized parameters is $D = K(N + 2)$, where each rule contributes $N$ belief degrees, one reliability, and one weight; the total population size is $\lambda$, the complexity of a single iteration is $O\left(\lambda D^{2}\right)$, and the total complexity is $O\left(G \lambda D^{2}\right)$ over $G$ generations.
- 4.
Overall computational burden
The overall complexity of this framework is polynomial in the sample size and rule count and is dominated by multi-rule activation and the iterative optimization phase. Although BRB-f requires more resources than IBRB to compute multi-rule activation, this overhead mainly depends on the number of features in the dataset, which can be reduced through methods such as feature selection or principal component analysis (PCA). During the iterative optimization process, the evaluations of candidate parameters are independent of each other and need not be performed sequentially, making them suitable for parallel execution.
- 5.
Scalability and feasibility
Despite the additional overhead, this framework can still be scaled to medium- and large-scale datasets. Although the complexity grows exponentially with the number of dataset features, the feature count can be reduced through mainstream dimensionality reduction methods such as feature selection and principal component analysis (PCA). The dimension of the optimized parameters grows linearly with the number of rules, but the BRB-f framework only requires setting a small number of attribute reference values and rules. The parameter optimization module supports parallel computing to improve efficiency, making this method suitable for practical engineering systems. Future work will further explore lightweight strategies and distributed deployment to enhance its applicability to large-scale scenarios.
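The complexity terms discussed in this section can be sanity-checked with back-of-envelope arithmetic. The snippet below is purely illustrative: the symbol meanings follow the analysis above, the parameter dimension uses the reconstructed $D = K(N+2)$ count, and the concrete numbers are assumptions, not measurements from the paper:

```python
def brbf_cost_summary(M, R, N, T, K, lam, gens):
    """Back-of-envelope operation counts for the complexity terms in this
    section (M features, R reference values per feature, N evaluation
    levels, T samples, K rules, lam offspring, gens generations)."""
    membership = T * M * R               # fuzzy membership: O(T*M*R)
    fusion = T * (2 ** M) * M * N        # multi-rule activation + ER fusion
    D = K * (N + 2)                      # N belief degrees + reliability + weight per rule
    optimisation = gens * lam * D ** 2   # P-CMA-ES distribution updates
    return membership, fusion, D, optimisation

# Illustrative scale: 4 features, 5 references, 5 levels, 10k samples
print(brbf_cost_summary(M=4, R=5, N=5, T=10_000, K=20, lam=40, gens=200))
```

Even at this modest scale, the optimisation term dominates, which is consistent with the observation that the iterative optimization phase drives the overall cost and benefits most from parallel evaluation.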