
Adaptive Differentiated Parrot Optimization: A Multi-Strategy Enhanced Algorithm for Global Optimization with Wind Power Forecasting Applications

1 School of Information Engineering, Sanming University, Sanming 365004, China
2 Faculty of Computers and Information Science, Mansoura University, Mansoura 35516, Egypt
3 Department of Applied Mathematics, Xi’an University of Technology, Xi’an 710054, China
* Author to whom correspondence should be addressed.
Biomimetics 2025, 10(8), 542; https://doi.org/10.3390/biomimetics10080542
Submission received: 3 July 2025 / Revised: 6 August 2025 / Accepted: 13 August 2025 / Published: 18 August 2025
(This article belongs to the Section Biological Optimisation and Management)

Abstract

The Parrot Optimization Algorithm (PO) represents a contemporary nature-inspired metaheuristic technique formulated through observations of Pyrrhura molinae parrot behavioral patterns. PO exhibits effective optimization capabilities by achieving equilibrium between exploration and exploitation phases through mimicking foraging behaviors and social interactions. Nevertheless, during iterative progression, the algorithm encounters significant obstacles in preserving population diversity and experiences declining search effectiveness, resulting in early convergence and diminished capacity to identify optimal solutions within intricate optimization landscapes. To overcome these constraints, this work presents the Adaptive Differentiated Parrot Optimization Algorithm (ADPO), which constitutes a substantial enhancement over baseline PO through the implementation of three innovative mechanisms: Mean Differential Variation (MDV), Dimension Learning-Based Hunting (DLH), and Enhanced Adaptive Mutualism (EAM). The MDV mechanism strengthens the exploration capabilities by implementing dual-phase mutation strategies that facilitate extensive search during initial iterations while promoting intensive exploitation near promising solutions during later phases. Additionally, the DLH mechanism prevents premature convergence by enabling dimension-wise adaptive learning from spatial neighbors, expanding search diversity while maintaining coordinated optimization behavior. Finally, the EAM mechanism replaces rigid cooperation with fitness-guided interactions using flexible reference solutions, ensuring optimal balance between intensification and diversification throughout the optimization process. Collectively, these mechanisms significantly improve the algorithm’s exploration, exploitation, and convergence capabilities. Furthermore, ADPO’s effectiveness was comprehensively assessed using benchmark functions from the CEC2017 and CEC2022 suites, comparing performance against 12 advanced algorithms. The results demonstrate ADPO’s exceptional convergence speed, search efficiency, and solution precision. Additionally, ADPO was applied to wind power forecasting through integration with Long Short-Term Memory (LSTM) networks, achieving remarkable improvements over conventional approaches in real-world renewable energy prediction scenarios. Specifically, ADPO outperformed competing algorithms across multiple evaluation metrics, achieving average R² values of 0.9726 in testing phases with exceptional prediction stability. Moreover, ADPO obtained superior Friedman rankings across all comparative evaluations, with values ranging from 1.42 to 2.78, demonstrating clear superiority over classical, contemporary, and recent algorithms. These outcomes validate the proposed enhancements and establish ADPO’s robustness and effectiveness in addressing complex optimization challenges.

1. Introduction

Optimization is a central problem in computational science: searching over combinations of variables that satisfy given constraints while minimizing or maximizing one or more pre-defined objectives [1]. Classical optimization tools solve such multi-variable constrained problems through rigorous mathematical developments that exploit the structure of the problem space [2]. They identify the best possible solutions in confined regions using direct or indirect computational processes that normally require accurate mathematical expressions of the objective functions and boundary constraints; examples include linear programming [3], gradient techniques [4], and conjugate gradient algorithms. However, these traditional methods often fail to scale well to large, nonlinear, or high-dimensional spaces [5].
Due to the increasing complexity and diversity of optimization problems in fields such as trajectory optimization [6], cybersecurity systems [7], the design of engineering systems [8], machine learning prediction [9], economic dispatch in microgrids [10], and blockchain [11], there is a growing demand for flexible and robust solution methods. Algorithms of metaheuristics have emerged as efficient alternatives, employing computational intelligence and stochastic search techniques to provide high-precision solutions to complicated real-life optimization problems [12,13].
Various metaheuristic approaches (MAs), such as Genetic Algorithms (GAs) [14], Differential Evolution (DE) [15], Particle Swarm Optimization (PSO) [16], the Gray Wolf Optimizer (GWO) [17], Ant Colony Optimization (ACO) [18], the Gravitational Search Algorithm (GSA) [19], and various nature-inspired variants [20,21,22,23,24,25,26,27], have shown effectiveness but still face challenges like premature convergence and poor performance in complex landscapes. With advances in contemporary science and technology, different domains increasingly present high-dimensional optimization problems characterized not only by growing complexity but also by multimodality and many local optima [28,29].
Although these smart algorithms have proven to be better at handling real-world optimization problems, they are still limited by factors such as limited exploration abilities, performance degradation at high complexity, and slow convergence on more complicated and demanding problems. Moreover, the No Free Lunch (NFL) theorem shows that no algorithm can outperform all others across all possible optimization problems [30]. This principle highlights the critical importance of tailoring and improving algorithms for specific practical applications. As a result, further improvement of these algorithms is essential in order to obtain more sensible and optimal solutions to practical optimization problems.
To deal with the inherent weaknesses of individual algorithms, researchers often employ domain-specific solutions or combine several strategies to enhance performance. For example, Zou and Wang presented an Improved Multi-Strategy Beluga Whale Optimization (IMS-BWO) algorithm that enhances the original BWO through three key strategies (roulette-based fitness distance balancing, random differential restart, and non-monopoly search with Levy motion), achieving superior performance with 69.8–100% win rates against competing algorithms [31]. Fakhouri et al. [32] suggested Hybrid COASaDE, a mixed optimization technique that combines the exploration capabilities of the Crayfish Optimization Algorithm with the adaptive exploration of self-adaptive DE. This combination of global and local designs guarantees a symmetric tradeoff between overall search and local refinement through dynamic parameter control. A neural network-assisted hybrid version of the Hiking Optimization Algorithm (HOA), which incorporated chaotic dynamics and self-designed AI optimizations to enhance design optimization in engineering, was presented by Ozcan et al. [33]. Addressing similar limitations, Xia et al. [34] proposed a Fractional Order Dung Beetle Optimizer with Reduction Factor (FORDBO) that enhances the standard DBO algorithm through good node set initialization, a reduction factor for exploration–exploitation balance, fractional order calculus for dynamic boundary adjustment, and a repetitive renewal mechanism, demonstrating superior performance against 23 competing algorithms.
The Parrot Optimization Algorithm (PO) [35] is a recent MA based on observations of the behavior of the parrot species Pyrrhura molinae. The algorithm builds on computational principles derived from the parrot's behaviors, such as foraging, resting, social communication, threat avoidance, and activity inhibition. The PO methodology follows a multi-stage approach: it maps the parrots' nesting habitat onto the spatial search surroundings and then models their social behaviors and adaptive plasticity in response to environmental stimuli. Parrots show the behavioral characteristics of social cooperators and environmental generalists and inhabit an intricate ecological niche. PO exploits these behavioral properties, incorporating environmental fluctuations and social interactions into its computational framework to solve optimization problems with high complexity and varied constraints. The algorithm demonstrates efficacy in problem-solving because it dynamically adjusts to the problem landscape through a simulation of parrot behavioral strategies.
Although PO has acknowledged applications in resolving engineering optimization challenges [36,37,38,39], it faces limitations that motivate the improved variant established in this study. First, original PO works satisfactorily on some functions but shows weaknesses such as poor accuracy and precision when faced with non-convex and high-dimensional optimization problems. Moreover, it has difficulty striking an optimal tradeoff between the exploration and exploitation stages, which leads to suboptimal solutions in hard optimization problems featuring complex search landscapes and multiple constraints. Secondly, ongoing optimization challenges in different fields, characterized by high dimensionality, large search spaces, and nonlinearity, mean that current MAs often face problems of stability and convergence. This requires consistent research activity to develop new MAs and advance existing ones to solve such computational challenges. Thirdly, the NFL theorem implies that no MA can be preferable across all optimization problems. As a consequence, new and dynamic solutions are needed to optimize algorithmic performance and overcome existing constraints. Finally, recent advances in algorithmic design indicate that it is imperative to explore and refine optimization algorithms to better serve the demands of practical computational optimization problems. Thus, the improvements to PO proposed in this paper are based on these motivations.
Based on the above justification, and to directly address the identified research gaps, this paper integrates three complementary approaches, namely Mean Differential Variation (MDV), Dimension Learning-Based Hunting (DLH), and Enhanced Adaptive Mutualism (EAM), into PO and proposes the Adaptive Differentiated Parrot Optimization (ADPO) algorithm with improved performance. To address the diversity-loss and premature-convergence gap, the MDV strategy counters the loss of population diversity through a dual-phase mutation mechanism that fosters wide exploration in initial iterations and intensive exploitation near optimal solutions in later iterations. This strategy yields a diverse set of solutions together with a high convergence rate. Moreover, to tackle the insufficient dimension-wise learning gap, the DLH strategy is introduced in an improved form that avoids premature convergence by allowing each solution to adaptively learn from its dimension-wise neighbors. This strategy creates diverse neighborhoods, since each solution maintains a local neighborhood of spatially close peers, thus facilitating coordinated yet individually directed search. Also, to overcome the rigid cooperation mechanism gap, we implement the EAM strategy, in which flexible elite or random references and fitness-based influence maintain a good balance between intensification and diversification throughout the optimization process.
These three strategies, when combined with PO, effectively tackle the inherent problems of unbalanced exploration–exploitation, limited solution accuracy, and low convergence speed. ADPO has been stringently compared with a large number of classical and state-of-the-art algorithms across several benchmark suites, including the CEC2017 and CEC2022 test functions, with rigorous statistical analysis using Friedman rank tests [40], Wilcoxon signed-rank tests [41], and broad convergence assessment. The outcomes show an excellent improvement in performance over original PO, with ADPO being notably competitive against other advanced algorithms. The applicability of the enhanced ADPO technique is also demonstrated by integrating it with LSTM networks to predict wind power as a real-world application. The ADPO-LSTM hybrid architecture addresses the key problem of hyperparameter search in renewable energy forecasting systems and shows better performance on measured wind farm station data. In these applications, ADPO maintained outstanding stability and accuracy. Thus, the primary contributions of this paper are summarized as follows:
  • A substantially improved variant of PO, known as ADPO, is presented. This variant integrates MDV, DLH, and EAM to address high-dimensional complex optimization problems in a highly efficient manner.
  • The DLH strategy enhances population diversity and boosts the exploration abilities of PO without premature convergence.
  • An EAM mechanism is designed to replace fixed cooperation with fitness-directed interaction, which improves the balance between intensification and diversification during the optimization process.
  • The MDV strategy enhances both exploration and exploitation, counteracting mutation-induced diversity loss through its dual-phase design while preserving convergence power.
  • ADPO is specifically designed to enhance convergence speed and solution accuracy while effectively avoiding local optima through comprehensive diversity preservation mechanisms.
  • Thorough numerical tests against other leading intelligent algorithms and robust optimizers on the CEC2017 and CEC2022 test suites show that ADPO performs exceptionally well in solving diverse optimization problems across multiple dimensions.
  • ADPO is successfully applied to LSTM neural networks for wind power forecasting, delivering state-of-the-art results and indicating its practical applicability to renewable energy systems.
This paper is structured as follows: Section 2 describes the basic principles and mathematical formulation of the Parrot Optimization Algorithm. In Section 3, the improved ADPO is introduced, and the three improvement strategies are elaborated in detail with their mathematical expressions and implementation processes. Section 4 presents the results of extensive numerical experiments on the CEC2017 and CEC2022 benchmark suites in various dimensions, together with detailed statistical analysis and performance evaluation. Section 5 introduces the ADPO-LSTM hybrid wind power forecasting model with an experimental evaluation of its performance on actual wind farm data. Section 6 concludes the research findings and outlines promising directions for future research on renewable energy applications and optimization algorithm development.

2. Parrot Optimization Algorithm (PO)

The Parrot Optimization Algorithm (PO) is a nature-inspired computing technique based on observations of the behavioral patterns of the sociable parrot species Pyrrhura molinae [35]. The main mechanics of the algorithm replicate five behavioral aspects common to these birds: food search, periods of rest, communication among flock members, avoidance of unfamiliar threats, and eventual cessation of activity. Each of these behaviors is mapped onto an operational step in the algorithm to dynamically balance exploration of new areas and exploitation of promising regions.

2.1. Phase 1: Population Initialization

When the algorithm starts, a population of agents is distributed at random in the search domain. The initial position of each solution, which represents a parrot in the decision space, is determined via a uniformly distributed random generator. The equation used for this initialization is as follows:
$$X_i(0) = lb + rand(0,1) \cdot (ub - lb) \tag{1}$$
Here, $X_i(0)$ is the starting position of the $i$-th parrot. The terms $lb$ and $ub$ refer to the previously defined lower and upper limits of the optimization variables, respectively. The term $rand(0,1)$ generates a random number between 0 and 1, so the generated solution will not exceed the specified range. Such random initialization creates adequate diversity in the population and lays the foundation for wide initial exploration of the problem space.
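To make the initialization concrete, the following is a minimal NumPy sketch of Equation (1); the function name and the support for per-dimension bounds are our own illustrative choices, not from the original paper.

```python
import numpy as np

def initialize_population(n_agents, dim, lb, ub, rng=None):
    """Uniform random initialization of the parrot population, Equation (1).

    lb and ub may be scalars or per-dimension arrays (an assumption here;
    the paper only states lower and upper limits).
    """
    rng = rng or np.random.default_rng()
    lb = np.asarray(lb, dtype=float)
    ub = np.asarray(ub, dtype=float)
    # X_i(0) = lb + rand(0,1) * (ub - lb), drawn independently per dimension
    return lb + rng.random((n_agents, dim)) * (ub - lb)
```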

2.2. Phase 2: Foraging

During this critical phase, each solution actively explores the search space in pursuit of improved solutions, akin to a parrot seeking food. The solution’s movement is directed by the best solution discovered so far, $X_{best}$, and the average location of all solutions, $X_{mean}(t)$. To encourage exploration and prevent stagnation, a Lévy distribution is used to add stochastic perturbations. The position of each solution is adjusted by
$$X_i(t+1) = \left(X_i(t) - X_{best}\right) \cdot Levy(dim) + rand(0,1) \cdot \left(1 - \frac{t}{T_{max}}\right)^{2t/T_{max}} \cdot X_{mean}(t) \tag{2}$$
Here, $X_i(t)$ is the position of the $i$-th solution at iteration $t$, and $T_{max}$ defines the maximum number of iterations. The component $(X_i(t) - X_{best})$ quantifies how far the solution is from the best-known solution, driving it toward better regions. The term $Levy(dim)$ facilitates both small and large steps, enabling the algorithm to escape local optima. The global behavioral tendency is incorporated through $X_{mean}(t)$, which is computed as
$$X_{mean}(t) = \frac{1}{N} \sum_{k=1}^{N} X_k(t) \tag{3}$$
where N is the number of parrots (solutions). This averaging ensures some cohesion in the flock’s movement while preserving diversity.
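As an illustration, a hedged Python sketch of the foraging move in Equations (2) and (3) follows; the Lévy step is sampled with Mantegna's algorithm, a common choice that the paper does not specify, and all function names here are ours.

```python
import numpy as np
from math import gamma

def levy(dim, beta=1.5, rng=None):
    """Levy-flight step via Mantegna's algorithm (an assumed sampler;
    PO only states that a Levy distribution is used)."""
    rng = rng or np.random.default_rng()
    sigma = (gamma(1 + beta) * np.sin(np.pi * beta / 2)
             / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

def foraging_update(X_i, X_best, X_mean, t, T_max, rng=None):
    """Foraging move of Equation (2): a Levy-scaled pull relative to X_best
    plus a decaying attraction toward the population mean."""
    rng = rng or np.random.default_rng()
    decay = (1 - t / T_max) ** (2 * t / T_max)  # shrinks as t approaches T_max
    return (X_i - X_best) * levy(X_i.size, rng=rng) + rng.random() * decay * X_mean
```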

2.3. Phase 3: Resting

Parrots take breaks at times, particularly when they are around companions. The resting step replicates this behavior by moderately limiting agent motion without halting the optimization process. Instead of moving toward distant regions, solutions remain close to the best solution and make small position adjustments. The update formula is as follows:
$$X_i(t+1) = X_i(t) + X_{best} \cdot Levy(dim) + rand(0,1) \cdot ones(1, dim) \tag{4}$$
Here, $ones(1, dim)$ denotes a row vector of ones spanning all dimensions; multiplied by a random scalar, it adds weak noise to every coordinate. This phase facilitates exploitation without hindering the convergence of the population.

2.4. Phase 4: Communication

Parrots are social creatures. During the algorithmic process, solutions exchange and synthesize information with other members of the population. Communication behavior is applied probabilistically as indicated by a random variable Q , which specifies the probability of one agent aligning with the group or acting independently. The new location can be calculated as follows:
$$X_i(t+1) = \begin{cases} 0.2 \cdot rand(0,1) \cdot \left(1 - \frac{t}{T_{max}}\right) \cdot \left(X_i(t) - X_{mean}(t)\right), & Q \le 0.5 \\ 0.2 \cdot rand(0,1) \cdot \exp\!\left(\frac{-t}{rand(0,1) \cdot T_{max}}\right), & Q > 0.5 \end{cases} \tag{5}$$
When $Q \le 0.5$, the parrot’s movement is influenced by the mean of the population, modeling collaborative behavior. If $Q > 0.5$, the solution moves away with exponential decay, signifying independence after group interaction. These dynamics foster both cohesion and divergence, which is critical for sustaining the balance between exploration and convergence.
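A small sketch of the communication rule in Equation (5) follows, under the assumption that the independent branch ($Q > 0.5$) produces the same scalar displacement in every coordinate; names are illustrative.

```python
import numpy as np

def communication_update(X_i, X_mean, t, T_max, rng=None):
    """Communication move of Equation (5): align with the flock mean when
    Q <= 0.5, otherwise drift independently with exponential decay."""
    rng = rng or np.random.default_rng()
    Q = rng.random()
    if Q <= 0.5:
        # collaborative branch: move relative to the population mean
        return 0.2 * rng.random() * (1 - t / T_max) * (X_i - X_mean)
    # independent branch: a decaying scalar applied to all coordinates
    return np.full_like(X_i, 0.2 * rng.random() * np.exp(-t / (rng.random() * T_max)))
```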

2.5. Phase 5: Strange Aversion

Parrots avoid unfamiliar animals and prefer staying close to familiar companions. The algorithm uses this natural defense mechanism: solutions are driven toward favorable areas and pushed away from less promising or deceptive regions. The position update is carried out as follows:
$$X_i(t+1) = X_i(t) + rand(0,1) \cdot \cos\!\left(0.5\pi \cdot \frac{t}{T_{max}}\right) \cdot \left(X_{best} - X_i(t)\right) - \cos\!\left(rand(0,1) \cdot \pi\right) \cdot \left(\frac{t}{T_{max}}\right)^{2/T_{max}} \cdot \left(X_i(t) - X_{best}\right) \tag{6}$$
This equation implements a dual mechanism. The first cosine term favors motion toward the optimal solution, with time-dependent modulation. The second cosine term produces repelling motion, subject to randomness that prevents stagnation. This phase is critical for reinforcing exploitation of the best-known solution without impairing population diversity and strength.
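The dual attract/repel mechanism of Equation (6) can be sketched as follows; the decomposition into separate attract and repel terms mirrors the description above, with illustrative names.

```python
import numpy as np

def strange_aversion_update(X_i, X_best, t, T_max, rng=None):
    """Strange-aversion move of Equation (6): a time-modulated attracting
    cosine term toward X_best and a randomized repelling cosine term."""
    rng = rng or np.random.default_rng()
    attract = rng.random() * np.cos(0.5 * np.pi * t / T_max) * (X_best - X_i)
    repel = np.cos(rng.random() * np.pi) * (t / T_max) ** (2 / T_max) * (X_i - X_best)
    return X_i + attract - repel
```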

2.6. Phase 6: Termination Condition

Once the maximum number of iterations $T_{max}$ is reached, the algorithm stops. This stage ensures that optimization is performed within the computational budget. At this point, the best solution found by the swarm is returned.

3. Proposed Adaptive Differentiated PO (ADPO)

Although original PO is attractive in balancing exploration and exploitation among nature-inspired optimizers, the algorithm has a number of fundamental weaknesses. Population diversity decreases rapidly as the search proceeds, putting the search at risk of premature convergence and hindering escape from local optima. The inherent rigidity of its mutualistic cooperation also limits adaptability, and dimension-level learning is ineffective in high-dimensional problems. To combat these issues, the proposed ADPO incorporates three novel mechanisms, described as follows:
  • Mean Differential Variation (MDV): This overcomes the early loss of diversity by introducing a two-phase mutation mechanism that promotes wide exploration in early iterations and intensifies searching near the best solutions in later stages.
  • Dimension Learning-Based Hunting (DLH): This prevents premature convergence by enabling each solution to adaptively learn from dimension-wise neighbors, promoting diversity and enabling coordinated yet independent directional search.
  • Enhanced Adaptive Mutualism (EAM): This integrates the rigid mutualism of PO with an adaptive cooperation model that uses fitness-based influence and flexible references to maintain balance between intensification and diversification.
These strategies collectively enhance ADPO’s global search capability, resilience against local optima, and convergence reliability, particularly in complex optimization landscapes.

3.1. Mean Differential Variation (MDV)

As PO’s optimization proceeds, the population tends to converge on elite solutions. This increases convergence speed significantly but usually leads to a quick decrease in diversity. Such diversity loss prematurely homogenizes the population, causing stagnation and entrapment in local optima.
To overcome this drawback, ADPO integrates the MDV mechanism. This strategy proposes two adaptive update formulae, which are used at different stages of the optimization. In both variants, two agents, $X_{r1}$ and $X_{r2}$, are randomly selected from the population. Two auxiliary vectors are calculated as follows:
$$X_{c1} = \frac{X_{r1} + X_{r2}}{2}, \quad X_{c2} = \frac{X_{r1} + X_{best}}{2} \tag{7}$$
In this formulation, $X_{best}$ denotes the globally best solution discovered so far. The vectors $X_{c1}$ and $X_{c2}$ serve as hybrid guides for movement, combining information from both random and elite individuals. During the first two-thirds of the total iteration budget, i.e., when $t < \frac{2}{3}T_{max}$, the update equation promotes exploration and is defined as follows:
$$X_i = X_{c1} + F \cdot (X_{c1} - X_i) + F \cdot (X_{c2} - X_i) \tag{8}$$
Here, $X_i$ is the $i$-th individual and $F$ is a constant scaling factor set to 0.25. This mechanism directs solutions toward unexplored regions derived from intermediate positions. Once the algorithm enters the final third of its iterations, the emphasis switches to local refinement. The position update for $X_i$ becomes
$$X_i = X_{best} + F \cdot (X_{c1} - X_i) + F \cdot (X_{c2} - X_i) \tag{9}$$
In this later stage, the scaling factor $F$ becomes dynamic and is calculated as $F = (1 - 2 \cdot rand(1)) \cdot 0.5$. This randomized scaling helps adjust movement intensity near $X_{best}$, enabling fine-tuning without eliminating randomness. Thus, the full mutation rule across both phases is summarized as
$$\begin{cases} F = 0.25, \quad X_i = X_{c1} + F \cdot (X_{c1} - X_i) + F \cdot (X_{c2} - X_i), & \text{if } t < \frac{2}{3}T_{max} \\ F = (1 - 2 \cdot rand(1)) \cdot 0.5, \quad X_i = X_{best} + F \cdot (X_{c1} - X_i) + F \cdot (X_{c2} - X_i), & \text{otherwise} \end{cases} \tag{10}$$
The MDV strategy optimizes the search process because it supports wider moves in the initial stage and precise exploitation in the final one. This two-phase behavior maintains diversity at the beginning and concentrates effort in the vicinity of promising regions later. The consequence for ADPO is a better balance between global search and solution refinement, leading to accelerated convergence and better solutions.
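A compact sketch of the two-phase MDV rule in Equations (7)–(10) follows; the choice to sample $X_{r1}$ and $X_{r2}$ without replacement is our assumption, as is every function name.

```python
import numpy as np

def mdv_update(X_i, X_best, population, t, T_max, rng=None):
    """Mean Differential Variation, Equations (7)-(10): exploratory moves
    around X_c1 for the first two-thirds of the run, then refinement
    around X_best with a randomized scaling factor."""
    rng = rng or np.random.default_rng()
    r1, r2 = rng.choice(len(population), size=2, replace=False)
    X_c1 = (population[r1] + population[r2]) / 2       # Equation (7)
    X_c2 = (population[r1] + X_best) / 2
    if t < 2 / 3 * T_max:                              # exploration phase
        F, base = 0.25, X_c1                           # Equation (8)
    else:                                              # exploitation phase
        F = (1 - 2 * rng.random()) * 0.5               # dynamic scaling
        base = X_best                                  # Equation (9)
    return base + F * (X_c1 - X_i) + F * (X_c2 - X_i)
```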

3.2. Dimension Learning-Based Hunting Search Strategy (DLH)

In PO, the tendency of solutions to converge quickly toward the best-known solution can be problematic in high-dimensional search landscapes. Solutions become less diverse as iterations continue. Such a reduction in the population’s search coverage limits the algorithm’s ability to explore other areas, increasing the likelihood of getting trapped in local optima. This behavior is accentuated further when solving complex multimodal problems.
To counteract this loss of diversity and keep a strong exploratory capacity throughout the search, the DLH strategy is added to the proposed ADPO. DLH promotes population diversity and information sharing by ensuring that every solution has a customized neighborhood defined by spatial closeness. Such neighborhoods enable solutions to learn adaptively from a chosen set of peers, thereby facilitating collaborative yet focused search. First, the algorithm calculates a new position $X_i(t+1)$ for the $i$-th parrot using PO’s original operators. Then the Euclidean distance between the current and new positions is calculated as follows:
$$R_i(t) = \left\| X_i(t) - X_i(t+1) \right\| \tag{11}$$
This distance value $R_i(t)$ defines a local neighborhood around $X_i(t)$, denoted $N_i(t)$, which includes all individuals $X_j(t)$ whose distance from $X_i(t)$ is less than or equal to $R_i(t)$:
$$N_i(t) = \left\{ X_j(t) \;\middle|\; \left\| X_i(t) - X_j(t) \right\| \le R_i(t), \; X_j(t) \in \text{Population} \right\} \tag{12}$$
Once the neighborhood $N_i(t)$ is established, a dimension-wise learning mechanism is employed. For each dimension $d$, a new candidate component $X_{i,d}(t+1)$ is generated using one randomly chosen neighbor $X_{n,d}(t) \in N_i(t)$ and a random individual $X_{r,d}(t)$ drawn from the population. The updated dimension is computed as follows:
$$X_{i,d}(t+1) = X_{i,d}(t) + rand \cdot \left( X_{n,d}(t) - X_{r,d}(t) \right) \tag{13}$$
This equation promotes both collaboration (via the neighbor $X_n$) and stochastic variation (via $X_r$), encouraging adaptive updates tailored to the current search state. After producing both the original candidate $X_i(t+1)$ and the dimension-learned candidate $X_i^d(t+1)$, their fitness values are compared, and the better one is chosen for the next iteration as follows:
$$X_i(t+1) = \begin{cases} X_i(t+1), & \text{if } f\left(X_i(t+1)\right) < f\left(X_i^d(t+1)\right) \\ X_i^d(t+1), & \text{otherwise} \end{cases} \tag{14}$$
This selection process ensures that only the more promising of the two solutions is retained, further refining the solution’s path.
It is worth noting that the combination of individual, local learning and directionally dissimilar solutions qualifies DLH as a mechanism that prevents premature narrowing of the search space. Furthermore, this mechanism boosts the algorithm’s ability to explore individual dimensions while making good use of information from surrounding solutions. Consequently, ADPO maintains wider diversity across its iterations and hence achieves better resilience to local optima and more robust convergence toward the global optimum.
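The DLH steps of Equations (11)–(14) can be sketched as follows; falling back to the solution itself when the neighborhood is empty is our assumption, since the paper does not state how that edge case is handled.

```python
import numpy as np

def dlh_update(i, population, X_new, fitness_fn, rng=None):
    """Dimension Learning-Based Hunting, Equations (11)-(14): build a
    radius-based neighborhood, learn dimension-wise from it, then keep
    the better of the PO candidate X_new and the DLH candidate."""
    rng = rng or np.random.default_rng()
    X_i = population[i]
    R = np.linalg.norm(X_i - X_new)                        # Equation (11)
    dists = np.linalg.norm(population - X_i, axis=1)
    neighbors = np.where(dists <= R)[0]                    # Equation (12)
    if neighbors.size == 0:
        neighbors = np.array([i])  # assumed fallback for an empty neighborhood
    X_dlh = np.empty_like(X_i)
    for d in range(X_i.size):                              # Equation (13)
        n = rng.choice(neighbors)            # random neighbor
        r = rng.integers(len(population))    # random population member
        X_dlh[d] = X_i[d] + rng.random() * (population[n, d] - population[r, d])
    # Equation (14): greedy selection between the two candidates
    return X_new if fitness_fn(X_new) < fitness_fn(X_dlh) else X_dlh
```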

3.3. Enhanced Adaptive Mutualism (EAM) Strategy

ADPO employs the EAM strategy to enhance the adaptive capacity and cooperation dynamics among solutions. This enhanced mechanism allows more biologically plausible interactions, in which the effect of a partner depends on its fitness and cooperation partners are selected flexibly. The cooperative interaction begins by selecting two individuals: $X_i(t)$, the current solution, and $X_z(t)$, a randomly chosen partner from the population. A mutual vector representing their midpoint is computed as follows:
$$MV = \frac{X_i(t) + X_z(t)}{2} \tag{15}$$
Next, a decision is made regarding the cooperation target. If a uniformly sampled variable $q \in [0,1]$ is less than 0.5, the guiding reference becomes $X_\alpha$, the mean of the top three best-performing individuals, calculated as follows:
$$X_\alpha = \frac{X_{best1} + X_{best2} + X_{best3}}{3} \tag{16}$$
In this case, the position of both participants is updated as follows:
$$X_i(t+1) = X_i(t) + rndm \cdot (X_\alpha - MV) \cdot BF_1 \tag{17}$$
$$X_z(t+1) = X_z(t) + rndm \cdot (X_\alpha - MV) \cdot BF_2 \tag{18}$$
Alternatively, if $q \ge 0.5$, the cooperation target is a randomly selected individual $X_r$, and the update becomes
$$X_i(t+1) = X_i(t) + rndm \cdot (X_r - MV) \cdot BF_1 \tag{19}$$
$$X_z(t+1) = X_z(t) + rndm \cdot (X_r - MV) \cdot BF_2 \tag{20}$$
In both cases, $rndm \in [0,1]$ is a uniform random value that controls the step size of the movement, introducing stochasticity into the update. The benefit factors $BF_1$ and $BF_2$ are not fixed but are computed from the fitness of each participant relative to the global best, as follows:
$$BF_1 = \frac{f(X_i)}{f(X_{best})}, \quad \text{if } f(X_{best}) \neq 0 \tag{21}$$
$$BF_2 = \frac{f(X_z)}{f(X_{best})}, \quad \text{if } f(X_{best}) \neq 0 \tag{22}$$
These adaptive benefit values allow solutions with higher fitness to exert stronger mutual influence, leading to more meaningful and responsive cooperation patterns.
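Putting Equations (15)–(22) together, a hedged sketch of one EAM interaction follows; setting the benefit factors to 1 when $f(X_{best}) = 0$ is our own guard for the case the text leaves unspecified, and all names are illustrative.

```python
import numpy as np

def eam_update(X_i, f_i, X_z, f_z, population, fitness, rng=None):
    """Enhanced Adaptive Mutualism, Equations (15)-(22): cooperation around
    the elite mean X_alpha or a random reference, scaled by fitness-based
    benefit factors BF1 and BF2."""
    rng = rng or np.random.default_rng()
    MV = (X_i + X_z) / 2                                  # mutual vector, Equation (15)
    order = np.argsort(fitness)
    f_best = fitness[order[0]]
    if rng.random() < 0.5:
        ref = population[order[:3]].mean(axis=0)          # X_alpha, Equation (16)
    else:
        ref = population[rng.integers(len(population))]   # random X_r
    # benefit factors, Equations (21) and (22); assumed guard for f_best == 0
    BF1 = f_i / f_best if f_best != 0 else 1.0
    BF2 = f_z / f_best if f_best != 0 else 1.0
    X_i_new = X_i + rng.random() * (ref - MV) * BF1       # Equations (17)/(19)
    X_z_new = X_z + rng.random() * (ref - MV) * BF2       # Equations (18)/(20)
    return X_i_new, X_z_new
```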
This mechanism enhances ADPO’s performance considerably by improving the balance between intensification and diversification. Elite solutions tend to be followed, so the population exploits high-potential regions, while the algorithm becomes exploratory when random references are selected. Moreover, fitness-based benefit scaling makes the strength of interaction context-sensitive and avoids inefficient or redundant movements. Therefore, the EAM strategy makes ADPO much better able to avoid getting trapped in misleading regions, more likely to converge, and more capable of exploring advantageous regions of the search landscape effectively. The main steps of the proposed ADPO are presented in Figure 1 and Algorithm 1. The overall framework shows how the three proposed enhancement strategies are integrated into the optimization process: EAM is applied first for population updating, followed by MDV for diversity preservation, then the original PO operators, and finally DLH for dimension-wise learning.
Algorithm 1: ADPO
Input: Maximum number of iterations $T_{max}$, population size $N$, number of dimensions $d$, upper bound $ub$ and lower bound $lb$.
Output: Optimal solution $X_{best}$ and the fitness value of the optimal solution, $f_{best}$.
1. Initialize the population $X_i$, $i \in \{1, 2, \ldots, N\}$, using Equation (1)
2. Evaluate the fitness value $f(X_i)$ of each solution
3. Obtain the best solution $X_{best}$ and its fitness value $f_{best}$
4. while $t \le T_{max}$ do
5.   Obtain the best solution $X_{best}$ and its fitness value $f_{best}$
6.   for $i = 1$ to $N$ do
7.     Generate a new solution $X_{new}$ using the EAM strategy, Equations (17)–(20)
8.     if $f(X_{new}) < f(X_i)$ then
9.       $X_i = X_{new}$ and $f(X_i) = f(X_{new})$
10.    end if
11.  end for
12.  for $i = 1$ to $N$ do
13.    Generate a new solution $X_{new}$ using the MDV strategy, Equations (8) and (9)
14.  end for
15.  for $i = 1$ to $N$ do
16.    $ST = randi(1, 4)$
17.    if $ST == 1$ then
18.      Update the position of the solution using Equation (2)
19.    elseif $ST == 2$ then
20.      Update the position of the solution using Equation (4)
21.    elseif $ST == 3$ then
22.      Update the position of the solution using Equation (5)
23.    elseif $ST == 4$ then
24.      Update the position of the solution using Equation (6)
25.    end if
26.  end for
27.  for $i = 1$ to $N$ do
28.    Apply the DLH strategy to each solution using Equations (11)–(13)
29.    Apply greedy selection
30.  end for
31.  Check that each solution lies within the defined boundaries and calculate fitness values
32.  Update the best solution found, $X_{best}$
33. end while
34. Return $X_{best}$ and its fitness value $f_{best}$
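For orientation, the following is a compact Python driver that mirrors Algorithm 1 by wiring together the sketches given above (initialize_population, eam_update, mdv_update, the four PO operators, and dlh_update). Clipping out-of-bound solutions and accepting the MDV candidate directly are our assumptions, since the paper does not spell out boundary handling or the MDV acceptance rule.

```python
import numpy as np

def adpo(fitness_fn, dim, lb, ub, n_agents=30, T_max=500, rng=None):
    """Skeleton of the ADPO main loop following Algorithm 1; relies on the
    helper sketches defined earlier in this section."""
    rng = rng or np.random.default_rng()
    X = initialize_population(n_agents, dim, lb, ub, rng)
    fit = np.array([fitness_fn(x) for x in X])
    for t in range(T_max):
        best = int(np.argmin(fit))
        X_mean = X.mean(axis=0)
        for i in range(n_agents):
            # EAM stage with greedy acceptance (steps 6-11)
            z = int(rng.integers(n_agents))
            cand, _ = eam_update(X[i], fit[i], X[z], fit[z], X, fit, rng)
            cand = np.clip(cand, lb, ub)
            f = fitness_fn(cand)
            if f < fit[i]:
                X[i], fit[i] = cand, f
            # MDV stage (steps 12-14); direct replacement is assumed here
            X[i] = np.clip(mdv_update(X[i], X[best], X, t, T_max, rng), lb, ub)
            # one randomly selected PO operator (steps 15-26)
            st = int(rng.integers(1, 5))
            if st == 1:
                cand = foraging_update(X[i], X[best], X_mean, t, T_max, rng)
            elif st == 2:   # resting, Equation (4)
                cand = X[i] + X[best] * levy(dim, rng=rng) + rng.random() * np.ones(dim)
            elif st == 3:
                cand = communication_update(X[i], X_mean, t, T_max, rng)
            else:
                cand = strange_aversion_update(X[i], X[best], t, T_max, rng)
            cand = np.clip(cand, lb, ub)
            # DLH stage with greedy selection (steps 27-30)
            X[i] = np.clip(dlh_update(i, X, cand, fitness_fn, rng), lb, ub)
            fit[i] = fitness_fn(X[i])
    best = int(np.argmin(fit))
    return X[best], fit[best]
```

For instance, `adpo(lambda x: float(np.sum(x**2)), dim=20, lb=-100, ub=100)` exercises the full loop on a simple sphere objective.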

4. Analysis of Global Optimization Performance

This section presents an extensive experimental analysis of ADPO, with full details of the experimental configuration and a performance comparison against competing algorithms. The test methodology compares algorithm behavior on the CEC2017 [42] and CEC2022 [43] benchmark suites, which include representative test cases that emulate real-world optimization problems. The section confirms the effectiveness of ADPO through convergence patterns and statistical distributions. In addition, extensive comparisons with state-of-the-art MAs demonstrate that ADPO effectively handles high-dimensional and complex optimization problems.

4.1. Experimental Configuration and Settings

To evaluate ADPO’s performance, we leveraged two well-known benchmark suites, the CEC2017 and CEC2022 test functions, commonly used to assess the effectiveness of optimization algorithms. The CEC2017 suite includes 29 functions divided into unimodal functions (F1, F3), multimodal functions (F4–F10), hybrid functions (F11–F20), and composition functions (F21–F30). The CEC2022 suite comprises 12 of the most complex optimization problems. In both CEC2017 and CEC2022, all benchmark functions are defined on the range [−100, 100]. These standards provide an in-depth test of global exploration and local exploitation capabilities under strict test procedures. The dimensionality is 50 for the CEC2017 functions and 20 for the CEC2022 functions. The population size was fixed at 30 individuals for both benchmark suites, and each algorithm ran 30 independent trials to ensure statistical reliability. The computational experiments were conducted on a system with an Intel Core i7-10750H processor running at 2.60 GHz and 32 GB of RAM under the Windows 10 (64-bit) operating system. The MATLAB R2024b environment was used to implement and execute the algorithms.
The optimization algorithms which were chosen to undergo the comparative analysis are powerful MAs that have shown outstanding results in various optimization fields. The comparison framework involves the following algorithms: (1) Classical MAs, which have a history of validation, such as the Arithmetic Optimization Algorithm (AOA) [44], Harris Hawk Optimization (HHO) [45], Whale Optimization Algorithm (WOA) [46], and original Parrot Optimization (PO). These foundational algorithms have proven to be robust when dealing with a variety of optimization problems. (2) Recent research works in metaheuristic optimization, such as the Spider Wasp Optimizer (SWO) [47], Golden Jackal Optimization (GJO) [48], Coati Optimization (COA) [49], IVY algorithm [50], Hiking Optimization Algorithm (HOA) [27], and Kepler Optimization Algorithm (KOA) [51]. These new algorithms present new approaches and techniques for solving complex problems. (3) High-performance optimization algorithms, including competition-winning and state-of-the-art techniques, including eCOA [52], CMAES [53], GWO_CS [54], RDWOA [55], CSOAOA [56], DAOA [57], ISSA [58], jDE [59], and IPSO_IGSA [60]. These advanced algorithms are state-of-the-art optimization technologies that possess excellent performance attributes. Detailed settings for each algorithm are listed in Table 1.
The comparative algorithms were selected according to strict criteria of relevance and proven efficiency in solving diverse optimization problems. The classical algorithms offer baseline metrics representing long-established approaches, whereas the recent techniques demonstrate newer emerging strategies. Advanced algorithms, including competition-winning solutions, represent state-of-the-art techniques, guaranteeing that the comparison covers the most advanced optimization technologies as thoroughly as possible. This structured selection ensures exhaustive evaluation of ADPO across classical, modern, and advanced optimization paradigms.

4.2. Metrics for Evaluating Optimization Performance

To impartially assess ADPO’s performance against these rivals, we used several statistical and descriptive measures: the average solution quality (AVG), standard deviation (SD), Friedman ranking (FR), and the Wilcoxon rank-sum significance test. Each metric plays its own role in profiling the consistency, reliability, and superiority of the optimizer.
  • Mean Fitness (AVG): The measure of the quality of solutions typically attained is the average fitness score over different independent runs. This measurement is useful in the evaluation of the correctness and general performance of an algorithm in repetitive usage within the same setup. It is calculated as follows:
$$AVG = \frac{1}{N} \sum_{i=1}^{N} f_i$$
where $N$ is the number of runs and $f_i$ is the fitness value obtained in the $i$-th trial.
  • Standard Deviation (SD): SD quantifies the extent of dispersion of the fitness values around the mean, providing information about the consistency and stability of the results produced by the algorithm. Smaller variations show that the optimizer provides consistent results when repeated many times. It can be calculated as follows:
$$SD = \sqrt{\frac{1}{N} \sum_{i=1}^{N} \left( f_i - AVG \right)^2}$$
  • Friedman Ranking (FR) [40]: This nonparametric statistical test ranks algorithms based on their relative performances across multiple problem instances. A lower average rank suggests superior performance. The final ranking is derived from averaging ranks over all tested functions. The Friedman test statistic is then evaluated using a chi-squared distribution to determine consistency in relative performance across functions.
  • Wilcoxon Rank-Sum Test [41]: To establish whether performance differences between ADPO and any competing algorithm are statistically meaningful, the Wilcoxon rank-sum test is utilized. A $p$-value below 0.05 denotes a significant difference. If ADPO achieves better results, it is marked with $R^+$; if no clear difference exists, it is annotated with $R^=$; and if ADPO underperforms, it is labeled with $R^-$.
The combination of these metrics provides an extensive basis to compare ADPO to other algorithms and make sure that all findings regarding the optimization performance of this algorithm are statistically aligned and indicative of its practical potential.
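As a worked illustration of these metrics, the snippet below computes AVG, SD, a Wilcoxon rank-sum p-value, and a Friedman statistic with SciPy on synthetic run data; all numbers here are fabricated for demonstration only and are not the paper's results.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# synthetic per-run best fitness values (30 independent runs each)
adpo_runs = 300 + np.abs(rng.normal(1.3, 1.0, 30))
rival_runs = 300 + np.abs(rng.normal(160.0, 40.0, 30))

avg, sd = adpo_runs.mean(), adpo_runs.std()      # AVG and SD (1/N variant)
# Wilcoxon rank-sum test: p < 0.05 marks a significant difference (R+/R-/R=)
_, p = stats.ranksums(adpo_runs, rival_runs)
print(f"AVG={avg:.3f}, SD={sd:.3f}, rank-sum p={p:.2e}")

# Friedman test: one score array per algorithm, aligned across 12 functions
scores = rng.random((12, 3))   # rows: functions, columns: algorithms
chi2, p_fr = stats.friedmanchisquare(*scores.T)
print(f"Friedman chi2={chi2:.2f}, p={p_fr:.2e}")
```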

4.3. Ablation Study

The primary objective of the ablation study is to systematically evaluate the individual contributions of each proposed enhancement strategy within ADPO and quantify their specific impact on optimization performance [61]. This analysis aims to understand how each mechanism individually contributes to the overall superiority of ADPO compared to the original PO. To conduct this comprehensive analysis, we designed five distinct algorithmic variants: (1) ADPO: the complete proposed algorithm incorporating all three enhancement strategies; (2) ADPO-MDV: a variant incorporating only the MDV strategy while maintaining the original PO framework for other operations; (3) ADPO-DLH: a variant implementing only the DLH mechanism with standard PO operations; (4) ADPO-EAM: a variant utilizing only the EAM strategy combined with original PO components; (5) PO: the baseline original PO for comparison. Each variant was evaluated using the CEC2022 benchmark suite comprising 12 functions with diverse characteristics: F1 (unimodal), F2–F4 (multimodal), F5–F8 (hybrid), and F9–F12 (composite functions). Table 2 presents the comprehensive ablation study results, revealing significant insights into the individual and collective effectiveness of each proposed enhancement strategy across all benchmark functions.
  • Enhanced Adaptive Mutualism (EAM) strategy analysis: The ablation study reveals that ADPO-EAM emerges as the most impactful individual enhancement, achieving an average rank of 1.75 and securing first place on four functions (F1, F9, F11, F12). This strategy demonstrates exceptional performance on unimodal function F1, with the best average fitness (300.839) and remarkably low standard deviation (0.390), indicating superior exploitation capability and convergence stability. On composition functions F9, F11, and F12, ADPO-EAM consistently outperforms other individual strategies, showcasing its effectiveness in handling complex multimodal landscapes through adaptive fitness-guided cooperation. The substantial improvement over the original PO validates the critical importance of flexible mutualistic interactions in optimization performance.
  • Mean Differential Variation (MDV) strategy analysis: ADPO-MDV demonstrates moderate but consistent improvements, with an average rank of 3.25, representing significant enhancement over baseline PO. This strategy shows particular strength in maintaining solution quality across diverse function types, with notable performance on multimodal functions F2–F4, where it consistently ranks third. The dual-phase mutation mechanism effectively balances exploration and exploitation, as evidenced by its reasonable standard deviation values and stable performance across all function categories. However, the strategy shows limitations on more complex hybrid and composition functions, suggesting that while MDV provides valuable diversity preservation, it requires synergistic combination with other mechanisms for optimal performance in challenging optimization landscapes.
  • Dimension Learning-Based Hunting (DLH) strategy analysis: ADPO-DLH achieves an average rank of 4.17, showing the most limited individual impact among the three proposed strategies. Interestingly, this strategy demonstrates selective effectiveness, performing competitively on hybrid function F5 (rank 2) while showing poor performance on other function types, particularly unimodal F1, where it ranks fourth. The high standard deviation values observed in several functions (notably F1 and F6) indicate instability in convergence behavior when used in isolation. This pattern suggests that DLH’s dimension-wise learning mechanism requires the stabilizing influence of other strategies to achieve consistent performance, validating its role as a complementary rather than standalone enhancement.
The complete ADPO framework achieves the best overall performance, with an average rank of 1.33 and a Friedman rank of 1.55, demonstrating superior synergistic effects when all three strategies operate collectively. ADPO secures first place on eight out of twelve functions (F2, F3, F4, F5, F6, F7, F8, F10), showcasing consistent excellence across all function categories. Most remarkably, ADPO achieves the lowest standard deviation values on most functions, indicating exceptional stability and reliability compared to individual strategy implementations.
The Friedman rank analysis confirms statistically significant differences between all algorithmic variants, validating the reliability of the observed performance differences. The substantial rank gap between the complete ADPO framework (1.55) and the individual strategies (1.87 to 4.00) provides strong statistical evidence for the necessity of integrated enhancement mechanisms. Furthermore, the consistent superiority of all enhanced variants over original PO (Friedman rank 4.34) confirms that each proposed strategy contributes meaningful improvements to the baseline optimization capability. This comprehensive ablation study conclusively demonstrates that while each individual enhancement strategy provides valuable improvements over original PO, the complete ADPO framework delivers the best overall performance.

4.4. Results Discussion Using CEC2022

The main goal of this section is to test and compare the performance of the proposed ADPO with a large number of well-known optimization algorithms. The comparison was performed with the CEC2022 benchmark suite to evaluate the different capabilities of the algorithms. The obtained results are presented in Table 3.
The unimodal function F1 tests the algorithms’ exploitation capability and convergence speed toward a single global optimum. As shown in Table 3, ADPO demonstrates exceptional performance, with an average fitness value of 301.302 and a remarkably low standard deviation of 0.981, securing the first rank. This outstanding performance indicates ADPO’s superior ability to intensify search around promising regions while maintaining consistent convergence behavior. Original PO ranks third with a significantly higher average fitness (1.60 × 10⁴), highlighting the substantial improvements achieved through the proposed enhancements. The large performance gap between ADPO and other competitors, particularly KOA (1.11 × 10⁵), which ranks last, demonstrates the effectiveness of the MDV strategy in achieving precise exploitation during the later stages of optimization.
On the other hand, the multimodal functions F2–F4 test the algorithms’ exploration ability and capacity to escape local optima in order to find global solutions. For F2, ADPO ranks second, with an average fitness of 466.259, after IVY (464.089), which came first. The close performance of these top performers implies that ADPO’s DLH strategy succeeds in sustaining population diversity. On F3, original PO ranked fifth (655.570), while ADPO ranked second (621.667) behind IVY, demonstrating increased exploration. On F4, ADPO again ranks second (878.170), with IVY in first position, indicating reliable performance across various multimodal landscapes. This strong showing, in contrast to classical methods such as WOA (932.687, rank 7) and more recent methods such as KOA (1063.219, rank 11), proves the efficiency of the proposed enhancement strategies.
Furthermore, hybrid functions mix the properties of several function types and often generate sophisticated optimization landscapes that can hamper both exploration and exploitation. On F5, Table 3 shows that ADPO is in second position (2013.369), after GJO (1900.620), demonstrating competitiveness in dealing with hybrid complexity. The algorithm is much better than original PO (2518.767, rank 4) and classical approaches such as WOA (3919.915, rank 10). On F6, ADPO achieves the best performance, with a fitness value of 4028.385 and first rank, showing excellent behavior in challenging hybrid landscapes. This strong result, contrasting with the severe failures of some competitors such as KOA (3.43 × 10⁹), testifies to the strength of the EAM strategy. ADPO holds first and second places on F7 and F8 with fitness values of 2103.721 and 2243.629, respectively, outperforming the majority of competitors and demonstrating its suitability for handling the complexities of hybrid functions.
Finally, the composition functions present the most difficult optimization situations, where many functions with various properties, global optima, and local topologies come together. Table 3 shows that ADPO performs impressively on all composition functions, ranking first on F9, F10, F11, and F12 with fitness values of 2481.000, 2539.901, 2935.671, and 2991.295, respectively. Specifically, ADPO produced an outstanding score on F9, achieving the lowest average fitness with near-perfect consistency (standard deviation of 0.282). For F10, the large performance gap between ADPO and second-ranked PO (2778.659) illustrates the algorithm’s superior capability on complex composition landscapes. The stability of its first ranking across all composition functions proves that the three suggested augmentation measures work in synergy to address the hardest optimization cases.
Overall analysis shows that ADPO achieved an average rank of 1.42 across all test functions, placing it first overall, as indicated in Table 3. This outstanding performance is a major step up from original PO (average rank 3.83, final rank 4) and clearly superior to every other algorithm. The IVY algorithm obtains second place with an average rank of 3.08, whereas classical algorithms such as WOA (average rank 6.33) and recent ones such as HOA (average rank 7.58) perform significantly worse. The stability of ADPO’s ranking across a wide variety of function types indicates the resilience and versatility of the algorithm in different optimization landscapes.
The statistical measurement via standard deviations shows the remarkable consistency of ADPO, especially on the F1 and F9 functions, where the algorithm records very low standard deviations of 0.981 and 0.282, respectively. Such consistency means that ADPO not only discovers high-quality solutions but does so reliably across independent runs. The suggested optimization approach, combining the MDV, DLH, and EAM strategies, corrects the drawbacks of the original PO while preserving its advantages, leading to a powerful and dependable optimization algorithm suitable for a variety of real-life applications.
The convergence properties of ADPO show distinctive behavior patterns that indicate the algorithm’s superior optimization dynamics across various function types. Compared to its competitors, ADPO shows rapid initial convergence followed by long, steady fine-tuning phases, increasing its effectiveness in locating good regions, as shown in Figure 2: it consistently begins fast and continues to improve. This two-phase dynamic reflects the success of the MDV strategy, which drives aggressive exploration in early iterations and shifts to accurate exploitation in the final stages, allowing ADPO to strike an optimal balance between rapid convergence and high solution quality.
On unimodal and multimodal functions (F1–F4), ADPO exhibits steep initial descent curves, attaining near-optimal solutions within the first 5000 FEs. The convergence plots demonstrate that, whereas the majority of competitors stagnate too early or display unstable behavior, ADPO maintains a smooth, monotonically decreasing trend throughout the optimization. Most notable is its performance on F1 and F2, where ADPO attains significantly lower fitness values than original PO, with the ADPO curve exhibiting exponentially fast improvements in fitness at the start, followed by a gradual leveling off. This trend shows that the DLH strategy succeeds at avoiding early convergence without losing the exploitative characteristics of the algorithm.
The hybrid functions (F5–F8) involve even more intricate convergence landscapes, but ADPO consistently performs very well against its competitors due to its adaptive convergence behavior. Moreover, Figure 2 reveals that ADPO tends to improve steadily throughout the entire optimization process, in contrast to the stagnation patterns of other algorithms. On F6, ADPO also demonstrates good convergence, with a sharp improvement from initial values of about 22 to near-optimal values of about 8, whereas competitors such as KOA and COA show little improvement. The EAM strategy plays a key role in this performance, as it allows ADPO to traverse challenging fitness landscapes through dynamic cooperation patterns, avoiding entrapment in local optima.
For the composition functions (F9–F12), the convergence characteristics of ADPO are strikingly consistent and stable. On all composition functions, the algorithm exhibits smooth, monotonically improving curves, with an early rapid-decline period followed by gradual improvement. This behavior differs from competitors that converge too soon or exhibit erratic swings in stability. The convergence curves of ADPO on F10 and F11 are especially impressive, reaching the best solutions while preserving stability throughout the optimization process. The synergy of the three enhancement strategies is reflected in these challenging settings, where ADPO has no trouble balancing exploitation and exploration needs.
The comprehensive convergence analysis indicates that ADPO adapts well to different optimization surfaces. In contrast to the mixed behavior of the competitors across function types, ADPO exhibits consistent convergence patterns that combine quick short-term improvement with long-run stability. The algorithm’s capability to prevent premature convergence while rapidly enhancing solution quality makes it a sound and effective optimizer. Thus, the convergence plots show that ADPO yields not only superior endpoints but also more efficient search trajectories, making maximal use of the available function evaluations in all tested situations.
Furthermore, the boxplot analysis in Figure 3 indicates the statistical stability and reliability of ADPO compared to the competing algorithms across all CEC2022 benchmark functions. Statistically stable optimization algorithms, i.e., those with consistent performance across many independent runs, are needed in practice, where reliable and predictable behavior is paramount. ADPO shows superior statistical stability, with boxplots that are consistently compact, frequently exhibiting short interquartile ranges and few outliers across all function types. This stability is especially noticeable in the algorithm’s ability to produce narrow fitness distributions around better mean values, showing that ADPO yields much better results and does so reliably across repeated executions.
On the unimodal function (F1) and multimodal functions (F2–F4), ADPO shows very high consistency, with compact boxplots and small variance values. ADPO is remarkably stable with minimal outliers on F1, showing exceptional reliability in producing optimal solutions in every run. This is in stark contrast to competitors such as KOA, whose broad boxplot ranges and numerous outliers point to extreme performance variance. On the multimodal functions F2–F4, ADPO has compact distributions with small interquartile ranges, while algorithms like AOA and COA have much broader boxplot ranges and various outliers, indicating unstable behavior and sensitivity to initialization. A large portion of this stability can be attributed to the MDV strategy, which offers well-organized exploration and exploitation steps that help mitigate performance variance between runs.
The hybrid functions (F5–F8) present harder optimization landscapes, but ADPO still outperforms all competitors with better statistical stability. The boxplots of ADPO on F5 and F6 remain compact, with only a few outliers, whereas those of rivals such as KOA and WOA span very large ranges with several distant outliers, implying poor performance in some runs. The stability of ADPO’s performance on F6 is especially noteworthy, showing a tight distribution around low fitness values, while competitors’ boxplots vary by several orders of magnitude. This resilience in sophisticated hybrid environments confirms the effectiveness of the DLH mechanism, which upholds population diversity without degrading the stability of solution quality. The EAM strategy also contributes to this stability by averting algorithmic stagnation and guaranteeing that progress is preserved run-to-run.
On the composition functions (F9–F12), which form the most difficult optimization settings, ADPO's statistical stability stands out even more clearly against the competitors. The boxplots show that ADPO produces stable distributions with clear median values and zero or few outliers, whereas several competing algorithms yield unstable distributions with large ranges and multiple extreme outliers. For F10 and F11, ADPO's boxplots appear tiny compared with those of representative rivals such as KOA and COA, whose widely spread distributions indicate highly unreliable performance. This outstanding stability on composition functions illustrates the three complementary enhancement strategies acting in concert to enable stable optimization even under the most difficult conditions. ADPO's robustness in such complicated settings makes it a reliable option for real optimization tasks, where performance consistency is paramount.
This detailed boxplot analysis demonstrates that ADPO is not only the best-performing but also the most stable optimization algorithm among all competitors. This robust statistical behavior, combined with its best average performance, makes ADPO especially useful in applications where both solution quality and performance consistency are key requirements. The stability reflected in the boxplot analysis confirms that the proposed improvement schemes effectively address the shortcomings of the original PO and incorporate the resilience needed for steady optimization performance across varied problem landscapes.
Statistical significance testing is vital for demonstrating the strength of algorithmic optimization results, as it provides an objective basis for comparison rather than merely reporting mean performance differences between algorithms. A rigorous assessment of algorithmic behavior must answer whether observed differences are statistically significant or not. In this analysis, two major nonparametric statistical tests are used, namely the Friedman test and the Wilcoxon rank-sum test. Together, these tests provide statistically complete evidence of ADPO's overall superiority across the whole CEC2022 benchmark set.
Figure 4 shows the Friedman test results, revealing the substantial statistical advantage of ADPO over all competing algorithms, with a p-value of 3.41 × 10⁻⁵². ADPO attains the smallest average rank of 1.74, signifying excellent overall performance; this exceptionally low value indicates that ADPO is ranked first or second on most benchmark functions. The large ranking gaps between ADPO and its competitors provide crucial evidence of its performance advantage. The IVY algorithm achieves an average rank of 3.92, over twice that of ADPO, showing considerably worse results. Algorithms such as PO (4.03) and HHO (4.08) perform moderately, with ranks close to 4, whereas recent algorithms exhibit mixed performance, with GJO ranked at 4.29 and HOA performing poorly at 7.24. The worst performers are KOA (10.45), COA (8.82), and AOA (8.37), whose ranks reflect overall poor performance on most test functions. This wide rank span (1.74–10.45) makes the performance hierarchy clearly visible, with ADPO proving statistically dominant over all of its competitors.
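For readers who wish to reproduce this style of analysis, the sketch below shows how per-function ranks, average ranks, and the Friedman p-value can be computed with SciPy. It is a minimal sketch with hypothetical fitness values (not the paper's data), assuming rows correspond to algorithms and columns to benchmark functions, with lower fitness being better.

```python
# Minimal Friedman-ranking sketch with hypothetical values (not the paper's data).
import numpy as np
from scipy.stats import friedmanchisquare, rankdata

algorithms = ["ADPO", "PO", "IVY", "KOA"]        # illustrative subset of competitors
results = np.array([                              # rows: algorithms, cols: functions
    [1.2e2, 3.4e3, 5.6e2, 7.8e1],
    [2.3e2, 4.5e3, 6.7e2, 9.0e1],
    [1.9e2, 4.1e3, 6.1e2, 8.5e1],
    [9.9e2, 8.8e3, 9.7e2, 3.2e2],
])

# Rank the algorithms on every function (1 = best), then average per algorithm.
ranks = np.apply_along_axis(rankdata, 0, results)
for name, r in zip(algorithms, ranks.mean(axis=1)):
    print(f"{name}: average rank {r:.2f}")

# The Friedman chi-square test checks whether the rank differences are significant.
stat, p_value = friedmanchisquare(*results)
print(f"Friedman statistic = {stat:.3f}, p-value = {p_value:.3e}")
```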
The pairwise Wilcoxon rank-sum test outcomes in Figure 5 and Table 4 provide conclusive statistical proof of ADPO's supremacy over each rival individually. Compared to the classical algorithms, ADPO is fully dominant on all the test functions, with the maximum R+ value of 12 and the minimum R− = 0. This absolute dominance gives clear statistical evidence of ADPO's excellence relative to these established optimizers. When compared to IVY, the results indicate R+ = 7, R− = 1, and R= = 4. The results also show a definite advantage of ADPO against GJO, achieving R+ = 8, R− = 0, and R= = 4, reflecting improvement on eight functions without a single inferior outcome. Together, these pairwise comparisons establish the statistical superiority of ADPO over the entire competitive field.
The joint statistical analysis provides overwhelming evidence of ADPO's better performance with a high level of statistical confidence. The Friedman test results indicate that the probability of obtaining such a low average rank by chance is negligible, confirming significant performance differences. The average rank of 1.74 and the wide gap to the competing algorithms provide strong statistical proof that the performance difference does not stem from random variation. These results are corroborated by the Wilcoxon rank-sum test, which revealed pairwise superiority in individual algorithm comparisons. Even against the second-best scorer, IVY, ADPO holds a clear statistical advantage of seven victories against one defeat, a substantial indication of superior optimization ability.

4.5. Results Discussion with Advanced Algorithms Using CEC2017

This section extends the performance assessment of ADPO to the CEC2017 benchmark suite, composed of 29 test functions evaluated at a dimensionality of 50, to augment the former CEC2022 analysis and deliver a definitive validation on dissimilar benchmark suites. This comparative evaluation involves the following advanced algorithms: eCOA, CMAES, DAOA, CSOAOA, GWO_CS, RDWOA, jDE, ISSA, and IPSO_IGSA. The obtained results are presented in Table 5.
The overall assessment on the 29 high-dimensional functions shows that ADPO is the superior performer, achieving the best average rank of 2.34 and holding first position overall. Such performance illustrates the scalability and resilience of ADPO in managing more complex, higher-dimensional optimization environments. The algorithm ranks first on 100% of the unimodal functions (F1, F3); on 14.29% of the multimodal functions, leading only on F4; on 50% of the hybrid functions (F11, F13, F14, F15, F20); and on 40% of the composite functions (F22, F25, F26, F28). CSOAOA is the second-best performer with a mean rank of 2.55, followed by GWO_CS (3.38), while DAOA is the worst performer with a rank of 10.00, placing last on all functions. The wide performance margins between ADPO and the competitors confirm its excellent optimization capabilities in high-dimensional spaces.
Analyzing performance across different function categories reveals ADPO's remarkable versatility and selective dominance in specific optimization challenges. On unimodal functions, ADPO demonstrates outstanding exploitation capabilities, securing first rank on both F1 and F3 with fitness values of 2.43 × 10⁶ and 2.79 × 10⁴, respectively, compared to competitors showing significantly inferior performance, ranging from 2.09 × 10⁸ to 2.62 × 10¹¹ on F1 and 1.17 × 10⁵ to 5.74 × 10⁷ on F3. For multimodal functions, ADPO achieves first rank exclusively on F4 (616.358), while showing competitive performance on other multimodal problems, with ranks typically within the top 5. The algorithm's performance on hybrid functions demonstrates selective excellence, securing first place on 5 out of 10 functions (F11, F13, F14, F15, F20), showcasing effective handling of mixed optimization characteristics. On composite functions, ADPO achieves dominance on 4 out of 10 problems (F22, F25, F26, F28), maintaining competitive rankings on the remaining functions and avoiding the catastrophic failures observed in competing algorithms.
The statistical performance analysis reveals ADPO’s strategic dominance across specific optimization scenarios, maintaining consistent competitiveness when it does not achieve first place. The algorithm achieves remarkable consistency in performance, with rankings rarely exceeding fifth place across any function, thereby contributing to its superior overall average rank. Advanced competitors like CMAES (average rank 5.97) and RDWOA (6.79) show moderate performance but lack ADPO’s strategic excellence on specific problem types. Classical hybrid approaches like IPSO_IGSA (6.86) and modern variants like ISSA (4.41) demonstrate the challenges of maintaining performance across high-dimensional optimization landscapes. The substantial rank differences between ADPO and even the second-best algorithm (CSOAOA) indicate clear performance superiority rather than marginal improvements, particularly noteworthy given ADPO’s selective dominance pattern.
The exceptional performance of ADPO on the high-dimensional CEC2017 benchmark set validates the synergistic effectiveness of the three proposed enhancement strategies in tackling complex optimization challenges with strategic precision. The MDV strategy proves particularly crucial in high-dimensional spaces by maintaining population diversity during early exploration phases while enabling precise convergence during later exploitation stages, directly contributing to ADPO’s complete dominance on unimodal functions and selective superiority on hybrid and composite problems. The DLH mechanism demonstrates superior adaptability by enabling targeted optimization approaches for different problem characteristics, explaining ADPO’s strategic excellence on specific multimodal, hybrid, and composite functions rather than uniform performance across all categories. The EAM strategy contributes significantly to performance reliability by providing flexible cooperation patterns that identify and exploit problem-specific optimization opportunities, enabling ADPO to achieve targeted dominance while maintaining consistent competitiveness across the remaining function spectrum. The collective impact of these strategies positions ADPO as a highly intelligent optimization solution that strategically adapts to problem-specific characteristics in high-dimensional spaces.
The Friedman rank analysis for the CEC2017 benchmark, shown in Figure 6, with a p-value of 4.16 × 10⁻³¹, reinforces ADPO's statistical superiority through its lowest average rank of 2.78, demonstrating consistent high-performance positioning across the 29 high-dimensional test functions. CSOAOA follows as the second-best performer with an average rank of 2.97, showing only a marginal difference of 0.19 rank points, indicating close competitive performance between these top two algorithms. The third-tier performance group includes eCOA (3.78) and GWO_CS (3.75), both achieving similar moderate rankings, while ISSA occupies the middle ground with an average rank of 4.28. The lower-performing algorithms demonstrate significant rank deterioration, with CMAES (6.02), IPSO_IGSA (6.71), and RDWOA (6.36) showing substantially inferior performance. The poorest performers include jDE (8.67) and DAOA (9.67), with DAOA achieving the worst ranking, indicating systematic performance failures across the majority of test functions.
The Wilcoxon rank-sum test results shown in Figure 7 and Table 6 provide definitive pairwise validation of ADPO's dominance over individual competitors across the 29-function benchmark. ADPO demonstrates complete statistical superiority over DAOA, RDWOA, and jDE, with R+ values of 29 in each case and R− values of 0, indicating no instances of inferior performance against these algorithms. Against CMAES and GWO_CS, ADPO shows strong dominance, with R+ values of 19 for both algorithms and minimal losses (R− = 3), establishing a clear statistical advantage. The results against CSOAOA reveal R+ = 16 and R− = 6, indicating ADPO's superiority on 16 functions compared to 6 inferior performances, providing moderate but significant statistical evidence of better performance. Even against competitive algorithms like eCOA (R+ = 22, R− = 1), ADPO maintains overwhelming statistical superiority with minimal instances of inferior performance. The comprehensive pairwise dominance patterns confirm that ADPO's low Friedman rank translates into consistent statistical advantages across individual algorithm comparisons, establishing robust evidence of performance superiority in high-dimensional optimization scenarios.
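To make the R+/R−/R= bookkeeping concrete, the sketch below tallies wins, losses, and ties from per-function run samples using SciPy's rank-sum test. It is only an illustrative sketch: the alpha threshold, the use of mean fitness to decide the winner, and the treatment of non-significant comparisons as R= ties are assumptions about the tallying convention, not a statement of the paper's exact procedure.

```python
# Hedged sketch: counting R+/R-/R= from per-function run samples (minimization).
import numpy as np
from scipy.stats import ranksums

def pairwise_counts(results_a, results_b, alpha=0.05):
    """results_a/results_b: lists of per-function arrays of run fitnesses."""
    wins = losses = ties = 0
    for fa, fb in zip(results_a, results_b):
        _, p = ranksums(fa, fb)            # two-sample Wilcoxon rank-sum test
        if p >= alpha:
            ties += 1                      # no significant difference: counted as R=
        elif np.mean(fa) < np.mean(fb):
            wins += 1                      # algorithm A significantly better: R+
        else:
            losses += 1                    # algorithm A significantly worse: R-
    return wins, losses, ties
```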

4.6. Computational Time Analysis

The computational time analysis presented in Table 7 and Table 8 reveals significant insights into the practical efficiency of ADPO compared to competing algorithms across both CEC2022 and CEC2017 benchmark suites. For the CEC2022 benchmark with 20-dimensional problems, ADPO demonstrates moderate computational overhead with average execution times ranging from 0.79 to 1.40 s across the 12 test functions, positioning it in the middle tier of computational efficiency. Notably, SWO exhibits exceptional computational speed with execution times consistently below 0.02 s, while algorithms like IVY (0.72–1.04 s) and WOA (0.09–0.28 s) show competitive efficiency. However, the computational cost of ADPO is justified by its superior optimization performance, as evidenced by its first place ranking with an average rank of 1.42, demonstrating an effective tradeoff between computational investment and solution quality.
The computational complexity becomes more pronounced in the high-dimensional CEC2017 benchmark (50D), where ADPO’s execution times increase substantially, ranging from 1.24 to 5.33 s across the 29 test functions. This scaling behavior is expected given the enhanced complexity of the three integrated strategies (MDV, DLH, EAM) and the increased dimensionality of the search space. Comparatively, DAOA maintains the fastest execution times but at the cost of significantly inferior optimization performance (final rank 10), while CMAES shows the highest computational overhead (3.18–6.08 s) despite achieving only moderate performance. The computational overhead of ADPO, while higher than some competitors, remains reasonable considering its exceptional optimization capabilities, achieving the best overall ranking and demonstrating that the additional computational investment in sophisticated diversity preservation and adaptive learning mechanisms yields substantial improvements in solution quality that justify the increased execution time for applications where optimization accuracy is paramount.

5. Proposed ADPO-LSTM Framework for Wind Power Prediction

This section presents an advanced hybrid framework that couples the proposed ADPO with Long Short-Term Memory (LSTM) neural networks to solve the hyperparameter tuning problem in wind power forecasting. The proposed model harnesses ADPO’s dynamic population diversity preservation, dimension-level learning, and adaptive cooperation to locate optimal LSTM configurations capable of accurately modeling nonlinear temporal dependencies inherent in wind power time series. The workflow is organized into several critical phases and is conceptually outlined in Figure 8. This figure illustrates the comprehensive integration framework of the proposed ADPO-LSTM system for wind power forecasting, demonstrating the synergistic relationship between the optimization algorithm and neural network architecture. The framework operates through sequential integrated phases where raw wind power data undergoes systematic preprocessing, including normalization and chronological partitioning, followed by ADPO population initialization, where each candidate solution represents a unique LSTM hyperparameter configuration vector. For each candidate, a corresponding LSTM model is instantiated and trained with validation performance measured using RMSE, serving as fitness guidance for the optimization process. The core optimization applies the three enhancement strategies sequentially: EAM updates the population through fitness-guided cooperation, MDV applies dual-phase mutation for exploration–exploitation balance, and DLH performs dimension-wise adaptive learning from spatial neighbors.
The framework exhibits tight coupling between the ADPO and LSTM components through bidirectional information flow, where LSTM performance feedback directly guides population evolution, ensuring that the optimization process is specifically tailored to wind power forecasting requirements. Upon termination, the configuration with the lowest validation RMSE is used to train the final LSTM model on the complete training dataset, followed by a comprehensive evaluation using multiple metrics.
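The overall coupling can be summarized in a short optimization loop. The sketch below is a heavily simplified, hypothetical rendering: the `evaluate` callback stands for training an LSTM and returning its validation RMSE, and the inline update steps are crude stand-ins for the EAM, MDV, and DLH operators defined earlier in the paper, not their actual formulations.

```python
# Condensed ADPO-LSTM coupling sketch; update steps are simplified stand-ins,
# not the paper's actual EAM/MDV/DLH operators.
import numpy as np

rng = np.random.default_rng(42)

def optimize(evaluate, lb, ub, pop_size=20, max_iter=50):
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    dim = lb.size
    pop = lb + rng.random((pop_size, dim)) * (ub - lb)   # random initialization
    fit = np.array([evaluate(x) for x in pop])            # validation RMSE as fitness
    best = pop[fit.argmin()].copy()                       # elite preservation

    for t in range(max_iter):
        w = 1.0 - t / max_iter                 # wide moves early, fine moves late
        for i in range(pop_size):
            a, b = pop[rng.choice(pop_size, 2, replace=False)]
            cand = pop[i] + w * (a - b)                   # stand-in for MDV mutation
            cand = cand + rng.random(dim) * (best - cand) # stand-in for EAM cooperation
            j = rng.integers(dim)                         # stand-in for DLH learning
            cand[j] = pop[rng.integers(pop_size)][j]
            cand = np.clip(cand, lb, ub)
            f = evaluate(cand)
            if f < fit[i]:                                # greedy selection
                pop[i], fit[i] = cand, f
        best = pop[fit.argmin()].copy()                   # elitism across iterations
    return best, fit.min()
```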

5.1. Dataset Overview

The evaluation of the proposed ADPO-LSTM framework was conducted using a real-world wind power dataset collected from the La Haute Borne wind farm, which encompasses data recorded at four distinct monitoring sites labeled Station A through Station D [62]. These stations offer diverse operational profiles, enabling a well-rounded understanding of the variability in turbine behavior under different environmental and mechanical conditions. This dataset is sourced from the La Haute Borne wind facility, located in northeastern France’s Grand Est region. This wind farm comprises four identical MM82 turbines manufactured by Senvion, each rated at 2050 kW, mounted at a height of 80 m with a rotor span of 82 m, and situated at an altitude of 411 m above sea level. This dataset includes multiple environmental and performance variables, such as wind speed, wind direction, ambient temperature, and power output. Among these, the turbine’s power output serves as the prediction target. The dataset spans a continuous timeline from 2016 to 2020, providing a rich temporal context that captures seasonal, daily, and short-term fluctuations in wind-generated energy. Also, Figure 9 represents the descriptive characteristics of the utilized dataset along with their range values.

5.2. Preprocessing Workflow

To support reliable model construction and prevent performance bias, the dataset was divided chronologically. Approximately 80% of the total samples, specifically the first 42,792 time-ordered observations, were used for training, covering both LSTM model learning and ADPO-driven hyperparameter tuning. This portion offers sufficient volume and variability for the optimizer to explore solution candidates effectively while allowing the LSTM model to capture long-term temporal dependencies. The remaining 20% of the data, corresponding to 10,699 entries, is strictly reserved for testing. These observations are temporally subsequent to the training data, forming a forward-looking sequence that simulates practical forecasting use cases. This time-based separation avoids leakage between training and testing, ensuring that evaluation results genuinely reflect the model's ability to predict future, unseen conditions.
Given the sequential nature of the data, this partitioning strategy is deliberately non-random to preserve the inherent order of wind power generation patterns [63]. Random splitting methods could inadvertently introduce artificial correlation between training and testing samples, inflating performance metrics. In contrast, chronological splitting reinforces the model’s learning from historical sequences and assesses its generalization over future intervals—an essential requirement for operational deployment in energy forecasting systems.
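The chronological split itself is a one-line operation once the series is in time order. The sketch below uses a random placeholder array in place of the actual dataset; the split index of 42,792 and the resulting 10,699-sample test set follow the proportions described above.

```python
# Chronological 80/20 split sketch; `data` is a placeholder for the real series.
import numpy as np

data = np.random.rand(53491)             # placeholder time-ordered wind power series
split = 42792                            # first 80% for training, as described above
train, test = data[:split], data[split:]
print(len(train), len(test))             # 42792 10699
```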
Before model training and optimization, the raw data undergoes a robust preprocessing pipeline. This includes handling missing values through temporal interpolation to maintain continuity, detecting and correcting outliers to improve stability, and normalizing features using Min-Max scaling to a fixed range of [0, 1] [64]. These preprocessing steps are carefully applied to maintain the internal structure and dependencies of the time series while preparing the dataset for compatibility with the LSTM network’s activation functions. The transformation process is as follows:
$$X_{\mathrm{norm}} = \frac{X - X_{\min}}{X_{\max} - X_{\min}}$$
In which $X$ is the original input data, and $X_{\max}$ and $X_{\min}$ are the feature-wise maximum and minimum values. This normalization promotes steady convergence and prevents features with large magnitudes from dominating the learning process. To maintain temporal structure, the normalized data is divided into overlapping sliding windows, each forming an input–output pair, so that the LSTM model is trained on the sequential behavior of the data.
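A minimal sketch of this preprocessing step is shown below, combining Min-Max scaling with overlapping sliding-window construction. The window length of 24 is illustrative (the paper does not fix it here), and the scaler's bounds should be fitted on the training portion and reused on the test portion to avoid leakage.

```python
# Min-Max scaling to [0, 1] plus sliding-window construction; window length is illustrative.
import numpy as np

series = np.random.rand(1000)                     # placeholder time-ordered series

def min_max_scale(x, x_min=None, x_max=None):
    # Fit the bounds on training data only, then reuse them for the test data.
    x_min = x.min() if x_min is None else x_min
    x_max = x.max() if x_max is None else x_max
    return (x - x_min) / (x_max - x_min), x_min, x_max

def make_windows(s, window=24):
    # Each window of `window` past values is paired with the next value as target.
    X = np.stack([s[i:i + window] for i in range(len(s) - window)])
    y = s[window:]
    return X[..., None], y                        # feature axis: (samples, window, 1)

norm, lo, hi = min_max_scale(series)
X, y = make_windows(norm)
print(X.shape, y.shape)                           # (976, 24, 1) (976,)
```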

5.3. Optimization-Based LSTM Training Initialization

Each candidate solution contains a unique combination of the following five hyperparameters critical to LSTM performance: the number of hidden units, the number of training epochs, the optimizer type, the batch size, and the learning rate decay factor. These five elements form a continuous–discrete hybrid vector, generated randomly within the corresponding bounds as follows:
$$X_{ij} = LB_j + \mathrm{rand} \cdot \left(UB_j - LB_j\right)$$
Here, $X_{ij}$ represents the $j$-th parameter of the $i$-th solution, and $LB_j$ and $UB_j$ define its corresponding lower and upper bounds. This stochastic initialization guarantees a well-spread starting population, establishing the groundwork for effective exploration of the LSTM design space.
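A minimal sketch of this initialization for the five-dimensional hybrid vector is given below. The bounds and the encoding of the optimizer type as an integer index are illustrative assumptions, not the exact values of Table 9; the integer-valued dimensions are rounded after sampling.

```python
# Initialization sketch for the continuous-discrete hyperparameter vector;
# bounds and optimizer encoding are illustrative, not Table 9's exact values.
import numpy as np

rng = np.random.default_rng(0)
LB = np.array([16,  10, 0, 16,  0.80])   # units, epochs, optimizer id, batch, decay
UB = np.array([256, 100, 2, 128, 0.99])

def init_population(pop_size=20):
    X = LB + rng.random((pop_size, LB.size)) * (UB - LB)   # X_ij = LB_j + rand*(UB_j - LB_j)
    X[:, :4] = np.round(X[:, :4])                           # discrete dimensions
    return X

pop = init_population()
print(pop[0])                                               # one decoded-ready candidate
```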
During optimization, the LSTM model associated with each candidate is instantiated and trained with its parameter configuration. The Root Mean Square Error (RMSE) measures the model's validation performance, serving as the fitness score that guides ADPO's evolutionary process [65]. ADPO then iteratively evolves the population using its three synergistic strategies. The MDV mechanism promotes large explorative moves in early generations and transitions to fine-grained refinements around the global best in later generations. DLH dynamically updates individual dimensions using feedback from neighbors in a local area, providing finer-grained, localized adjustment. The combination of these behaviors creates a self-organizing search process that can escape local minima and converge on high-performance LSTM settings.
Beyond these core operations, ADPO implicitly tracks population similarity and diversity through the DLH neighborhoods and the responses to MDV perturbations, adjusting its learning pattern to the problem's complexity. Candidates performing below the average are increasingly routed through dimension-level learning, whereas top candidates are reinforced through interactions with adaptive benefit ratios. Within this framework, ADPO directly tunes the five LSTM parameters described in Table 9.
Every ADPO candidate represents a full parameter combination and is decoded in order to build an LSTM model. These models are trained and validated, with the resulting RMSE determining candidate fitness. ADPO uses this feedback to guide the population toward better-performing solutions. Throughout, the elite solution is persistently preserved, ensuring that the best configuration discovered is not lost during the evolutionary cycles. Moreover, the base LSTM architecture used in this study comprises a single LSTM layer followed by a dense output layer. The number of hidden units within the LSTM layer is one of the tunable hyperparameters optimized by ADPO. The activation function for the LSTM layer is set to the standard hyperbolic tangent (tanh), which is suitable for capturing both positive and negative temporal dependencies in the sequence data. For the output layer, a linear activation function is used, which is appropriate for continuous regression tasks such as wind power prediction.
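The decoding of a candidate into the single-layer architecture described above can be sketched with the Keras API as follows. The mapping of the optimizer id to named optimizers is an assumption, and the learning rate decay factor is left unwired for brevity (connecting it would require a learning-rate schedule); the window length reuses the illustrative value from the preprocessing sketch.

```python
# Decoding one candidate vector into the single-layer LSTM described above (Keras sketch).
import tensorflow as tf

OPTIMIZERS = ["adam", "rmsprop", "sgd"]           # assumed encoding of optimizer type

def build_lstm(candidate, window=24):
    units, epochs, opt_id, batch, decay = candidate
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(window, 1)),                      # one feature per step
        tf.keras.layers.LSTM(int(units), activation="tanh"),    # tunable hidden units
        tf.keras.layers.Dense(1, activation="linear"),          # continuous regression output
    ])
    # The decay factor is omitted here for brevity; wiring it in would use a schedule.
    model.compile(optimizer=OPTIMIZERS[int(opt_id)], loss="mse")
    return model, int(epochs), int(batch)
```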

5.4. Fitness Evaluation

The model’s performance for each candidate configuration is assessed using RMSE, computed as
$$\mathrm{Fit}_i = \sqrt{\frac{1}{n_s} \sum_{j=1}^{n_s} \left(Y_{P_j} - Y_{T_j}\right)^2}$$
Here, $Y_{P_j}$ and $Y_{T_j}$ are the predicted and actual wind power values, respectively, and $n_s$ is the number of samples in the validation set [66]. This metric guides ADPO's refinement of the search space. In each generation, candidate solutions are updated using the composite effects of MDV-driven transitions, DLH-based local learning, and EAM-governed cooperation. Greedy selection retains superior candidates, and elitism ensures consistent tracking of the globally best-performing configuration.
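A minimal fitness sketch corresponding to this equation is shown below; it builds on the `build_lstm` helper from the previous sketch and assumes hypothetical training and validation arrays shaped as in the windowing example.

```python
# Fitness sketch: train the decoded model and return its validation RMSE.
import numpy as np

def fitness(candidate, X_tr, y_tr, X_val, y_val):
    model, epochs, batch = build_lstm(candidate)
    model.fit(X_tr, y_tr, epochs=epochs, batch_size=batch, verbose=0)
    pred = model.predict(X_val, verbose=0).ravel()
    return float(np.sqrt(np.mean((pred - y_val) ** 2)))   # RMSE guides ADPO's search
```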

5.5. Testing and Generalization Assessment

Following the optimization process, the configuration corresponding to the lowest RMSE on the validation data is selected as the optimal design. This configuration is then used to train the LSTM model on the full training set. The final model is evaluated on the test data to validate its generalization capability. Standard performance metrics, including RMSE, Mean Absolute Error (MAE), and the Coefficient of Determination (R2), are calculated to objectively quantify prediction accuracy on unseen data, thereby confirming the reliability of the optimized model.

5.6. Termination

The ADPO optimization process terminates after a predetermined number of iterations, ensuring a bounded computational budget. ADPO's progressively adaptive behaviors allow the algorithm to move quickly through the solution space in its initial phases and to concentrate on exploitation in later steps, making the procedure effective in time-limited scenarios such as renewable energy forecasting. The framework thereby balances convergence quality against runtime complexity by keeping population updates and adaptive learning strategies lightweight.

5.7. Performance Evaluation Metrics

A set of four metrics is used to comprehensively assess the prediction quality of the proposed ADPO-LSTM wind power prediction model: the Coefficient of Determination (R2), Root Mean Squared Error (RMSE), Mean Absolute Error (MAE), and the Coefficient of Variation (COV). These metrics offer complementary insights into how well, how accurately, and how reliably the model captures the underlying patterns in the wind power time series.
  • The Coefficient of Determination (R2) serves as an indicator of the proportion of variability in the actual wind power output that is successfully explained by the model’s predictions. This metric offers insight into the model’s explanatory strength, with values approaching 1 signifying near-perfect alignment between predicted and true outputs. It is calculated as follows:
$$R^2 = 1 - \frac{\sum \left(Y_{\mathrm{actual}} - Y_{\mathrm{predicted}}\right)^2}{\sum \left(Y_{\mathrm{actual}} - \bar{Y}_{\mathrm{actual}}\right)^2}$$
where $Y_{\mathrm{actual}}$ and $Y_{\mathrm{predicted}}$ refer to the observed and predicted values, respectively, and $\bar{Y}_{\mathrm{actual}}$ represents the mean of the observed data.
  • The Root Mean Squared Error (RMSE) measures the typical magnitude of the residuals, i.e., the differences between predicted and actual values. Because of its quadratic term, it penalizes larger errors more heavily than smaller ones, making it sensitive to outliers and large deviations. RMSE is calculated as follows:
$$\mathrm{RMSE} = \sqrt{\frac{1}{n} \sum \left(Y_{\mathrm{actual}} - Y_{\mathrm{predicted}}\right)^2}$$
where n is the total number of test samples used in the evaluation.
  • The Mean Absolute Error (MAE) represents the average absolute deviation between the observed and predicted values, without squaring the errors. It measures the average magnitude of prediction errors and is particularly applicable when every deviation, upward or downward, is equally significant. MAE is defined as
$$\mathrm{MAE} = \frac{1}{n} \sum \left| Y_{\mathrm{actual}} - Y_{\mathrm{predicted}} \right|$$
  • The Coefficient of Variation (COV) expresses the error in relative terms by comparing the RMSE to the mean of the observed wind power values. Expressed as a percentage, it relates the scale of the prediction error to the average output level, measuring how stable the predictions are across different operating levels:
$$\mathrm{COV} = \frac{\mathrm{RMSE}}{\bar{Y}_{\mathrm{actual}}} \times 100$$
Using these four metrics together brings both absolute error magnitudes and relative model stability into the evaluation. Such a multidimensional analysis verifies that the forecasting framework is not only accurate but also reliable under diverse wind power generation conditions.
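The four metrics follow directly from the definitions above and can be computed in a few lines; the sketch below uses a small set of hypothetical values purely as a usage example.

```python
# Computing R2, RMSE, MAE, and COV exactly as defined above.
import numpy as np

def evaluate_forecast(y_true, y_pred):
    resid = y_true - y_pred
    rmse = np.sqrt(np.mean(resid ** 2))
    mae = np.mean(np.abs(resid))
    r2 = 1.0 - np.sum(resid ** 2) / np.sum((y_true - y_true.mean()) ** 2)
    cov = rmse / y_true.mean() * 100.0        # relative error, in percent
    return {"R2": r2, "RMSE": rmse, "MAE": mae, "COV": cov}

# Example with hypothetical values:
y_true = np.array([0.52, 0.61, 0.58, 0.70])
y_pred = np.array([0.50, 0.63, 0.57, 0.68])
print(evaluate_forecast(y_true, y_pred))
```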

5.8. Experimental Results and Performance Evaluation

The overall experimental analysis shows that the optimization of ADPO has a significant influence on the performance of LSTM networks in predicting wind power in all four wind farm stations. This quantitative review contrasts baseline LSTM to several optimizing methods: PO-LSTM, SCA-LSTM, WOA-LSTM, SOA-LSTM, HHO-LSTM, and the proposed ADPO-LSTM. Various measures for performance are used in the evaluation, such as R2, RMSE, MAE, and COV, to deliver comprehensive information on the accuracy of prediction, model stability, and generalization.
The training-phase results, presented in Table 10 and Table 11, reveal significant performance variations across different optimization approaches. The baseline LSTM model exhibits consistently poor performance across all stations, with R2 values ranging from 0.6185 to 0.6875, indicating a limited capability in capturing the complex temporal patterns inherent in wind power generation data. The LSTM model’s RMSE values span 0.0024 to 0.0030, and notably high COV values ranging from 85.2153 to 118.7456 further confirm its suboptimal performance without hyperparameter optimization.
Moreover, the results in Table 10 demonstrate substantial performance improvements when MAs are applied to LSTM hyperparameter tuning for Stations A and B. HHO-LSTM achieves considerable enhancement, with R2 values of 0.8515 and 0.8495, respectively, representing an approximately 24% improvement over the baseline LSTM configuration, closely followed by PO-LSTM with R2 values of 0.8485 and 0.8475. The nature-inspired optimization approaches consistently outperform the baseline, with SCA-LSTM, WOA-LSTM, and SOA-LSTM showing progressive improvements that validate the effectiveness of population-based optimization strategies in navigating the complex LSTM hyperparameter landscape.
The most remarkable observation in Table 10 is the exceptional performance of ADPO-LSTM, which significantly outperforms all competing methods across both stations. For Station A, ADPO-LSTM achieves an R2 value of 0.9875, representing a 44% improvement over the baseline and 16% enhancement over the best competing algorithm (HHO-LSTM). The dramatically reduced RMSE values of 0.0002 and 0.0004 for Stations A and B, respectively, coupled with exceptionally low COV values of 15.8745 and 23.7412, indicate not only superior accuracy but also enhanced prediction stability. The consistent superiority across all metrics demonstrates the effectiveness of ADPO’s three-fold enhancement strategy: MDV for balanced exploration–exploitation, DLH for maintaining diversity, and EAM strategy for fitness-guided cooperation.
Furthermore, Table 11 presents the training results for Stations C and D, revealing consistent patterns, with some notable variations, in algorithm performance across different geographical locations. HHO-LSTM and PO-LSTM maintain robust performance, with R2 values ranging from 0.8085 to 0.8145, demonstrating these algorithms’ reliability across diverse wind conditions and station characteristics. The hierarchical performance pattern observed in this table is maintained, with SCA-LSTM, SOA-LSTM, and WOA-LSTM showing competitive but progressively lower performance, indicating consistent optimization capabilities across these stations.
The standout performer remains ADPO-LSTM, achieving exceptional R2 values of 0.9758 and 0.9685 for Stations C and D, respectively, representing improvements of approximately 58% over baseline LSTM and 19% over the best competing approach (HHO-LSTM). The consistently low RMSE values of 0.0002 and 0.0005, along with minimal MAE values of 0.0001 and 0.0003, demonstrate the algorithm’s superior capability in fine-tuning LSTM parameters for optimal performance. Particularly noteworthy is the substantial reduction in COV values to 25.4578 and 31.2874, indicating dramatically improved prediction stability compared to other methods. This consistent superiority validates the effectiveness of ADPO’s enhanced diversity preservation mechanisms and adaptive learning strategies in preventing premature convergence while efficiently exploring the hyperparameter space.
The testing-phase results, presented in Table 12 and Table 13, provide crucial validation of the optimized models’ generalization capabilities when applied to unseen data. These results demonstrate the critical importance of robust optimization algorithms in achieving models that generalize well beyond the training dataset.
The testing-phase results in Table 12 reveal critical insights into the generalization capabilities of different optimization approaches when applied to unseen wind power data for Stations A and B. Baseline LSTM shows significant performance degradation during testing, with R2 values dropping to 0.6625 and 0.5785, indicating poor generalization and potential overfitting to training patterns. HHO-LSTM and PO-LSTM demonstrate excellent generalization capability, maintaining strong performance with R2 values of 0.8465/0.8485 and 0.8125/0.8105, respectively, representing minimal degradation from training performance and validating the robustness of these optimization algorithms in finding generalizable LSTM configurations.
ADPO-LSTM exhibits exceptional testing performance, with R2 values of 0.9785 and 0.9798, demonstrating superior generalization capabilities that improve upon training performance in Station B. This remarkable result indicates that the diversity preservation mechanisms and enhanced exploration–exploitation balance in ADPO prevent overfitting while identifying truly optimal hyperparameter configurations. The consistently low RMSE values of 0.0009 and 0.0010, coupled with minimal COV values of 21.5874 and 27.4578, confirm that ADPO-LSTM not only achieves superior accuracy but also maintains prediction stability when applied to new data. The other optimization algorithms (SCA-LSTM, WOA-LSTM, SOA-LSTM) show moderate generalization with some performance degradation, suggesting limitations in their diversity preservation strategies for achieving robust generalization.
On the other hand, Table 13 presents testing results for Stations C and D, revealing interesting patterns in algorithm performance across different geographical and operational conditions. Baseline LSTM continues to show poor generalization, with R2 values of 0.5985 and 0.6105, accompanied by high error metrics that confirm the inadequacy of default hyperparameters for robust wind power prediction. HHO-LSTM and PO-LSTM maintain reasonable generalization capability, with R2 values around 0.77, though they show some degradation from their training performance, particularly notable in Station C, where training performance was higher. The other optimization algorithms (SCA-LSTM, SOA-LSTM, WOA-LSTM) exhibit varying degrees of performance degradation during testing, with R2 values generally ranging from 0.7085 to 0.7385, indicating limitations in their ability to identify hyperparameter configurations that generalize well to unseen data.
The most striking observation in Table 13 is the continued exceptional performance of ADPO-LSTM, which achieves R2 values of 0.9705 and 0.9615 for Stations C and D, respectively, demonstrating robust generalization capabilities across all testing scenarios. The minimal RMSE values of 0.0008 and 0.0012, along with exceptionally low MAE values of 0.0005 for both stations, confirm the algorithm’s superior ability to maintain prediction accuracy on new data. The low COV values of 29.7854 and 32.1478 indicate that ADPO-LSTM not only achieves high accuracy but also maintains consistent prediction reliability across different operational conditions. This consistent superiority across all stations and metrics validates the effectiveness of the enhanced diversity preservation mechanisms, adaptive perturbation strategies, and dimension-wise learning techniques integrated into the ADPO framework.
The overall performance across all stations fully supports the assessment of each algorithm's individual effectiveness. To summarize these findings, Table 14 and Table 15 report the overall means across all four stations during training and testing, respectively.
Table 14 and Figure 10 present the aggregate training performance across all four wind farm stations, providing a comprehensive view of each optimization algorithm’s overall effectiveness in enhancing LSTM performance for wind power prediction. Baseline LSTM demonstrates consistently poor performance, with an average R2 of 0.6443, a high RMSE of 0.0026, and an elevated COV of 99.8489, confirming the critical need for hyperparameter optimization in achieving acceptable prediction accuracy. Among the metaheuristic approaches, HHO-LSTM emerges as the best-performing traditional optimizer with an average R2 of 0.8320, representing a 29% improvement over the baseline, while PO-LSTM follows closely with an R2 of 0.8293, demonstrating the effectiveness of nature-inspired optimization strategies in navigating the complex LSTM hyperparameter landscape.
The exceptional performance of ADPO-LSTM is evident in Table 14 and Figure 10, achieving an average R2 of 0.9792, which represents a 52% improvement over baseline LSTM and an 18% enhancement over the best competing algorithm (HHO-LSTM). The dramatically reduced average RMSE of 0.0003 and MAE of 0.0002 demonstrate the algorithm’s superior capability in fine-tuning LSTM parameters for optimal prediction accuracy. Most notably, the substantial reduction in COV to 24.0902, compared to values exceeding 58 for other optimization approaches, indicates that ADPO-LSTM not only achieves superior accuracy but also provides significantly enhanced prediction stability across diverse operational conditions and geographical locations. This comprehensive improvement validates the synergistic effect of ADPO’s three enhancement strategies working in concert.
The testing-phase aggregate results in Table 15 and Figure 11 provide critical validation of the optimization algorithms’ ability to produce LSTM configurations that generalize well to unseen data. Baseline LSTM shows further degradation with an average R2 dropping to 0.6125, accompanied by increased error metrics, confirming poor generalization capabilities. HHO-LSTM maintains the best performance among traditional optimizers with an average R2 of 0.8008, though it shows some degradation from its training performance, while PO-LSTM follows closely with 0.7998, demonstrating reasonable robustness. Other algorithms exhibit varying degrees of performance decline, with WOA-LSTM showing the most significant degradation, indicating sensitivity to overfitting in their optimization strategies.
ADPO-LSTM demonstrates exceptional generalization capability with an average testing R2 of 0.9726, representing minimal degradation from its training performance and confirming the algorithm’s ability to identify truly optimal hyperparameter configurations rather than overfitted solutions. The maintained low average RMSE of 0.0010 and MAE of 0.0006, coupled with a modest COV increase to 27.7271, validate the robustness of the diversity preservation mechanisms and enhanced exploration–exploitation balance inherent in the ADPO framework. This consistent superiority in testing performance across all metrics confirms that ADPO-LSTM not only achieves superior training accuracy but also maintains this performance advantage when applied to real-world forecasting scenarios, making it highly suitable for practical wind power prediction applications.
On the other hand, Table 16 and Table 17 provide detailed experimental data exploring the effectiveness of ADPO across various prediction model architectures, namely LSTM, Bidirectional LSTM (Bi-LSTM), Extreme Learning Machine (ELM), Kernel Extreme Learning Machine (KELM), and Random Forest (RF).
The results in Table 16 show the flexibility of ADPO optimization across architectures for Stations A and B, revealing performance disparities that underscore the criticality of architecture selection in wind power forecasting tasks. ADPO-LSTM shows remarkable performance, with R2 values of 0.9785 and 0.9798 and the lowest error metrics, proving it the best combination for both stations. ADPO-Bi-LSTM delivers good second-tier performance, with R2 values of 0.8895 and 0.8985, reflecting the benefits of bidirectional processing, although these are not enough to offset the additional parameter complexity that hampers its optimization. The gap between ADPO-LSTM and ADPO-Bi-LSTM shows that architectural complexity must be weighed against optimization capability.
The results in Table 16 also reveal interesting patterns in ADPO’s effectiveness across different algorithmic paradigms. ADPO-KELM demonstrates competitive performance, particularly for Station B, where it achieves an R2 of 0.9125, suggesting that the kernel-based extreme learning machine approach can be effectively optimized when combined with ADPO’s advanced search strategies. However, the higher RMSE values compared to LSTM indicate that the reduced sequential modeling capability may limit the ultimate achievable prediction accuracy. ADPO-ELM and ADPO-RF show moderate performance improvements, with R2 values generally ranging from 0.7945 to 0.8495, demonstrating that while ADPO can enhance various architectures, the temporal modeling capabilities inherent in recurrent neural networks provide fundamental advantages for wind power prediction applications.
Table 17 extends the comparative analysis to Stations C and D, reinforcing the patterns observed in the previous analysis while revealing some station-specific variations in algorithm performance. ADPO-LSTM maintains its superior performance with R2 values of 0.9705 and 0.9615, consistently achieving the lowest error metrics across both stations and confirming its robustness across different geographical and operational conditions. ADPO-Bi-LSTM shows particularly strong performance in these stations, achieving R2 values exceeding 0.91, which suggests that the bidirectional architecture may be more effective in capturing the specific temporal patterns present in these locations’ wind data. The consistent performance hierarchy validates the architectural rankings established in the previous analysis.
The performance hierarchy observed in Table 17 confirms the general effectiveness ranking established in the previous analysis, with ADPO-KELM maintaining competitive performance (R2 values around 0.90–0.91) while other architectures show more moderate improvements. Notably, ADPO-ELM shows more variable performance across stations, ranging from 0.8175 at Station C to 0.8385 at Station D, suggesting that ELM may be more sensitive to local data characteristics when optimized with ADPO. The persistently stronger performance of recurrent architectures (LSTM, Bi-LSTM) and kernel-based methods (KELM) over traditional feedforward methods (ELM) and ensemble methods (RF) indicates that temporal modeling capabilities or advanced kernel transforms are essential to achieving optimal wind power prediction performance, even when paired with an advanced optimization algorithm such as ADPO.
The aggregate performance analysis in Table 18 and Figure 12 provides definitive evidence of ADPO-LSTM’s superiority across all evaluation metrics, achieving an average R2 of 0.9726 that significantly outperforms all other architectural combinations. The substantial performance gap between ADPO-LSTM and the second-best performer (ADPO-Bi-LSTM with an R2 of 0.9068) demonstrates that the optimization algorithm’s effectiveness is strongly dependent on the underlying model architecture’s capability to capture temporal dependencies in wind power data. The consistently low error metrics (RMSE of 0.0010, MAE of 0.0006) further validate the exceptional synergy between ADPO’s optimization strategies and LSTM’s sequential modeling capabilities.
Also, the performance hierarchy revealed in Table 18 clearly illustrates the importance of architectural selection in optimization outcomes, with recurrent neural networks (LSTM, Bi-LSTM) and advanced kernel methods (KELM) significantly outperforming other approaches. ADPO-KELM achieves respectable performance, with an R2 of 0.8958, indicating that kernel-based extreme learning machines can benefit substantially from advanced optimization, though they cannot match full LSTM’s temporal modeling performance. The moderate performance of ADPO-ELM (R2 of 0.8198) and ADPO-RF (R2 of 0.8318) demonstrates that, while ADPO can enhance various machine learning approaches, the fundamental architectural capabilities for either temporal modeling or sophisticated feature transformation remain the primary determinants of wind power prediction success.
The state-of-the-art comparison in Table 19 establishes ADPO-LSTM as the premier approach for wind power prediction, achieving the highest R2 value of 0.9726 while maintaining the lowest error metrics across all compared methodologies. The comparison reveals significant performance advantages over existing approaches, with ADPO-LSTM achieving a 0.45% improvement in R2 over the second-best-performing method (RVFL + CapSA) while dramatically reducing error metrics by orders of magnitude. The LSTM + HBO approach shows competitive R2 performance at 0.9654 but exhibits substantially higher RMSE (0.042869) and MAE (0.02998) values, highlighting the importance of sophisticated optimization strategies like ADPO in achieving balanced performance across all evaluation criteria.
The striking difference in variability between the LSTM-based and RVFL-based results highlights the significance of architectural choice in wind power forecasting. Although RVFL + CapSA yields a competitive R2 of 0.9681, its RMSE of 110.3154 is orders of magnitude greater than that of the LSTM-based methods, revealing a large gap between apparent correlation and actual prediction accuracy for the RVFL architecture. ADPO-LSTM's ability to outperform all other methods, together with a balanced performance profile free of the extreme error characteristics of competing approaches, is strong evidence of its suitability for real-world deployment, where error reduction and stability are of primary concern. The three enhancement strategies in ADPO (MDV, DLH, and EAM) act synergistically within the optimization framework without sacrificing practical implementation efficiency, avoiding the idiosyncrasies of conventional metaheuristic techniques.
To ensure the reliability and statistical significance of the experimental results, comprehensive statistical analyses were conducted across multiple independent runs for each optimization algorithm. The Wilcoxon rank test was employed to assess the statistical significance of performance differences between ADPO-LSTM and competing optimization approaches across all four wind farm stations. This nonparametric test is particularly suitable for comparing paired samples when the data distribution cannot be assumed to be normal, making it ideal for evaluating optimization algorithm performance.
Table 20 presents the statistical significance analysis results, comparing ADPO-LSTM against all competing optimization algorithms using the Wilcoxon rank test across 30 independent runs for each station. The p-values indicate the probability that the observed performance differences occurred by chance, with values below 0.05 indicating statistically significant differences at the 95% confidence level.
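The per-station significance check described here corresponds to a paired, nonparametric comparison over the 30 runs. A minimal sketch is shown below, assuming `adpo_r2` and `rival_r2` hold the R2 scores of ADPO-LSTM and one competitor over the 30 independent runs; the values are hypothetical, generated only to make the example runnable.

```python
# Paired Wilcoxon signed-rank test over 30 runs (hypothetical run scores).
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(1)
adpo_r2 = 0.97 + 0.005 * rng.standard_normal(30)     # hypothetical ADPO-LSTM scores
rival_r2 = 0.84 + 0.010 * rng.standard_normal(30)    # hypothetical competitor scores

stat, p_value = wilcoxon(adpo_r2, rival_r2)          # paired, distribution-free test
print(f"p = {p_value:.4f}")                          # p < 0.05 => significant at 95%
```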
The Wilcoxon test results in Table 20 provide compelling evidence of ADPO-LSTM's statistical superiority across all experimental scenarios. All p-values are significantly below the 0.05 threshold, with most falling below 0.002, indicating extremely strong statistical significance. The comparison with baseline LSTM shows p-values less than 0.0001 across all stations, providing overwhelming evidence that the improvements achieved by ADPO optimization are not due to random chance. The consistently low p-values across all stations validate the robustness of ADPO-LSTM's performance advantages, regardless of geographical location or operational conditions.
Especially notable is the comparison with HHO-LSTM, the strongest competing optimization algorithm. Although HHO-LSTM's performance is competitive, the Wilcoxon test yields p-values between 0.0017 and 0.0021, proving that the superior performance of ADPO-LSTM is statistically significant even against the best alternative. The overall p-value of 0.0019 for this comparison demonstrates that ADPO's advanced optimization techniques (MDV, DLH, and EAM) provide significant and repeatable benefits over basic metaheuristic algorithms. The results also show that this superiority is realized consistently across stations, with p-values fluctuating within narrow ranges, so that ADPO-LSTM's advantage is not local to particular sites but reflects a fundamental enhancement in optimization capability.
The extensive experimental study demonstrates the remarkable effectiveness of the proposed ADPO-LSTM framework for wind power prediction across multiple evaluation metrics and operating conditions. By pairing ADPO with LSTM networks, the framework overcomes the major hurdles of hyperparameter optimization for time series forecasting architectures.
The main contributions established by our experimental findings are (1) high prediction capability, with a mean R2 of 0.9726 across all testing scenarios, establishing significant advances over current state-of-the-art methods; (2) high prediction stability, with consistently low Coefficient of Variation values that guarantee stable performance in operational settings; (3) good generalization, with minimal degradation between training and testing performance; and (4) acceptable computational complexity that allows implementation in real-time wind power forecasting systems.
ADPO-LSTM is consistently better than the alternatives across all stations, metrics, and architectural comparisons, confirming the utility of the three enhancement strategies introduced into the optimization framework. These findings establish ADPO-LSTM as a state-of-the-art wind power forecasting solution that delivers considerable gains in both accuracy and reliability relative to current methods. It is also highly applicable to real-life wind power management, where accurate and reliable predictions are required for economic operation and grid stability.

6. Conclusions and Future Works

This study has introduced ADPO as an enhanced version of the Parrot Optimization Algorithm (PO), designed to address the inherent shortcomings of premature convergence and limited diversity in the original formulation. By integrating three novel mechanisms (MDV, DLH, and EAM), ADPO significantly outperforms existing optimizers while maintaining algorithmic efficiency and wide applicability. Extensive experimental comparison shows that ADPO surpasses various state-of-the-art algorithms on a broad variety of benchmark functions. Its assessment on the CEC2017 and CEC2022 suites yielded consistently superior average ranks of 2.34 and 1.42, respectively. This strong performance extends beyond theoretical benchmarks into practice, as ADPO-LSTM delivered the best wind power prediction results across multiple wind farm stations. ADPO proved robust and accurate across the spectrum of renewable energy forecasting tasks, maintaining high prediction accuracy, with R2 values greater than 0.97, under varied operating conditions and geographical sites. These findings establish it as a reliable tool for multifaceted modern optimization problems.
The broader relevance of this work lies in the field of optimization, especially for renewable energy applications and neural network hyperparameter tuning. ADPO's consistent performance across a wide selection of problem domains reflects its flexibility as a reliable approach to practical optimization problems. In particular, its ability to handle high-dimensional hyperparameter spaces fills a crucial niche in modern machine learning, where optimization tasks often involve complex parameter surfaces and long temporal dependencies.
Despite these achievements, ADPO also has constraints that should not be overlooked. Sensitivity to population size is observed in highly multimodal search landscapes, where an inappropriate setting may lead to poor results. Moreover, the computational overhead associated with the calculations demanded by the three enhancement mechanisms may become significant at very large scales, particularly in high dimensions. In some cases, the dimension-wise learning of the DLH strategy causes a temporary increase in computational complexity, and additional tuning is needed to obtain the best performance from the algorithm. These shortcomings suggest clear opportunities for further improvement.
Several directions for future research have been identified to improve ADPO and broaden its applicability. A particularly promising direction is the development of self-adaptive parameter adjustment mechanisms that increase robustness and reduce manual configuration. The algorithm can be further improved by incorporating machine learning methodology into intelligent strategy selection, making it more adaptive and computationally efficient. Moreover, a multi-objective formulation of ADPO is vital for solving practical real-life problems with conflicting objectives, especially in complex engineering design scenarios. Furthermore, knowledge- and domain-specific applications will also be explored, including solar power forecasting [69], energy storage optimization [70], renewable energy integration [71], feature selection [72,73], image segmentation [74,75], wireless sensor networks [76,77], task scheduling in cloud computing [78], human activity recognition [79], bioinformatics applications [80], autonomous vehicle path planning [81], software defects [82], medical classification [83,84], gene classification [85], path planning [86], cybersecurity threat detection [87], and dynamic traffic routing [88]. These developments aim to make ADPO a general-purpose optimization system able to solve progressively complex optimization problems whilst retaining its strong foundations of precision, effectiveness, and reliability.

Author Contributions

Conceptualization, G.L. and H.J.; methodology, M.A.-s. and G.L.; software, M.A.-s.; validation, M.A.-s.; formal analysis, M.A.-s.; investigation, G.L., H.J. and M.A.-s.; writing—original draft preparation, G.L., H.J. and M.A.-s.; writing—review and editing, G.L., M.A.-s. and G.H.; visualization, M.A.-s.; supervision, G.L. and H.J.; funding acquisition, H.J. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Cultivation Project for National Natural Science Foundation of China (PYT2204).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Acknowledgments

The authors would like to acknowledge the support of the Fujian Key Lab of Agriculture IOT Application, IOT Application Engineering Research Center of Fujian Province Colleges and Universities.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Nadimi-Shahraki, M.H.; Zamani, H.; Asghari Varzaneh, Z.; Mirjalili, S. A systematic review of the whale optimization algorithm: Theoretical foundation, improvements, and hybridizations. Arch. Comput. Methods Eng. 2023, 30, 4113–4159. [Google Scholar] [CrossRef] [PubMed]
  2. Ye, M.; Zhou, H.; Yang, H.; Hu, B.; Wang, X. Multi-strategy improved dung beetle optimization algorithm and its applications. Biomimetics 2024, 9, 291. [Google Scholar] [CrossRef]
  3. Mayer, J. Stochastic Linear Programming Algorithms: A Comparison Based on a Model Management System; Routledge: London, UK, 2022. [Google Scholar]
  4. Yang, H.; Liu, X.; Song, K. A novel gradient boosting regression tree technique optimized by improved sparrow search algorithm for predicting TBM penetration rate. Arab. J. Geosci. 2022, 15, 461. [Google Scholar] [CrossRef]
  5. Wu, Y.; Wang, L.; Li, R.; Xu, Y.; Zheng, J. A learning-based dual-population optimization algorithm for hybrid seru system scheduling with assembly. Swarm Evol. Comput. 2025, 94, 101901. [Google Scholar] [CrossRef]
  6. Howell, T.A.; Le Cleac’h, S.; Singh, S.; Florence, P.; Manchester, Z.; Sindhwani, V. Trajectory optimization with optimization-based dynamics. IEEE Robot. Autom. Lett. 2022, 7, 6750–6757. [Google Scholar] [CrossRef]
  7. Antonijevic, M.; Zivkovic, M.; Djuric Jovicic, M.; Nikolic, B.; Perisic, J.; Milovanovic, M.; Jovanovic, L.; Abdel-Salam, M.; Bacanin, N. Intrusion detection in metaverse environment internet of things systems by metaheuristics tuned two level framework. Sci. Rep. 2025, 15, 3555. [Google Scholar] [CrossRef]
  8. Lin, A.; Liao, Y.; Peng, C.; Li, X.; Zhang, X. Elite leader dwarf mongoose optimization algorithm. Sci. Rep. 2025, 15, 20911. [Google Scholar] [CrossRef] [PubMed]
  9. Tarek, Z.; Alhussan, A.A.; Khafaga, D.S.; El-Kenawy, E.-S.M.; Elshewey, A.M. A snake optimization algorithm-based feature selection framework for rapid detection of cardiovascular disease in its early stages. Biomed. Signal Process. Control 2025, 102, 107417. [Google Scholar] [CrossRef]
  10. Liang, Z.; Chung, C.Y.; Zhang, W.; Wang, Q.; Lin, W.; Wang, C. Enabling high-efficiency economic dispatch of hybrid AC/DC networked microgrids: Steady-state convex bi-directional converter models. IEEE Trans. Smart Grid 2024, 16, 45–61. [Google Scholar] [CrossRef]
  11. Benmamoun, Z.; Khlie, K.; Bektemyssova, G.; Dehghani, M.; Gherabi, Y. Bobcat Optimization Algorithm: An effective bio-inspired metaheuristic algorithm for solving supply chain optimization problems. Sci. Rep. 2024, 14, 20099. [Google Scholar] [CrossRef]
  12. Hamad, R.K.; Rashid, T.A. GOOSE algorithm: A powerful optimization tool for real-world engineering challenges and beyond. Evol. Syst. 2024, 15, 1249–1274. [Google Scholar] [CrossRef]
  13. Li, X.; Fang, W.; Zhu, S.; Zhang, X. An adaptive binary quantum-behaved particle swarm optimization algorithm for the multidimensional knapsack problem. Swarm Evol. Comput. 2024, 86, 101494. [Google Scholar] [CrossRef]
  14. Sivanandam, S.; Deepa, S. Genetic Algorithms; Springer: Berlin/Heidelberg, Germany, 2008. [Google Scholar]
  15. Storn, R.; Price, K. Differential evolution–a simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 1997, 11, 341–359. [Google Scholar] [CrossRef]
  16. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95-International Conference on Neural Networks, Perth, Australia, 27 November–1 December 1995; pp. 1942–1948. [Google Scholar]
  17. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  18. Dorigo, M.; Birattari, M.; Stutzle, T. Ant colony optimization. IEEE Comput. Intell. Mag. 2006, 1, 28–39. [Google Scholar] [CrossRef]
  19. Rashedi, E.; Nezamabadi-Pour, H.; Saryazdi, S. GSA: A gravitational search algorithm. Inf. Sci. 2009, 179, 2232–2248. [Google Scholar] [CrossRef]
  20. Xiao, Y.; Cui, H.; Khurma, R.A.; Castillo, P.A. Artificial lemming algorithm: A novel bionic meta-heuristic technique for solving real-world engineering optimization problems. Artif. Intell. Rev. 2025, 58, 84. [Google Scholar] [CrossRef]
  21. Jia, H.; Abdel-salam, M.; Hu, G. ACIVY: An Enhanced IVY Optimization Algorithm with Adaptive Cross Strategies for Complex Engineering Design and UAV Navigation. Biomimetics 2025, 10, 471. [Google Scholar] [CrossRef]
  22. Zhao, S.; Zhang, T.; Cai, L.; Yang, R. Triangulation topology aggregation optimizer: A novel mathematics-based meta-heuristic algorithm for continuous optimization and engineering applications. Expert. Syst. Appl. 2024, 238, 121744. [Google Scholar] [CrossRef]
  23. Wang, X.; Snášel, V.; Mirjalili, S.; Pan, J.-S.; Kong, L.; Shehadeh, H.A. Artificial Protozoa Optimizer (APO): A novel bio-inspired metaheuristic algorithm for engineering optimization. Knowl.-Based Syst. 2024, 295, 111737. [Google Scholar] [CrossRef]
  24. Tian, Z.; Gai, M. Football team training algorithm: A novel sport-inspired meta-heuristic optimization algorithm for global optimization. Expert. Syst. Appl. 2024, 245, 123088. [Google Scholar] [CrossRef]
  25. Taheri, A.; RahimiZadeh, K.; Beheshti, A.; Baumbach, J.; Rao, R.V.; Mirjalili, S.; Gandomi, A.H. Partial reinforcement optimizer: An evolutionary optimization algorithm. Expert. Syst. Appl. 2024, 238, 122070. [Google Scholar] [CrossRef]
  26. Sowmya, R.; Premkumar, M.; Jangir, P. Newton-Raphson-based optimizer: A new population-based metaheuristic algorithm for continuous optimization problems. Eng. Appl. Artif. Intell. 2024, 128, 107532. [Google Scholar] [CrossRef]
  27. Oladejo, S.O.; Ekwe, S.O.; Mirjalili, S. The Hiking Optimization Algorithm: A novel human-based metaheuristic approach. Knowl.-Based Syst. 2024, 296, 111880. [Google Scholar] [CrossRef]
  28. Ghasemi, M.; Deriche, M.; Trojovský, P.; Mansor, Z.; Zare, M.; Trojovská, E.; Abualigah, L.; Ezugwu, A.E.; Kadkhoda Mohammadi, S. An efficient bio-inspired algorithm based on humpback whale migration for constrained engineering optimization. Results Eng. 2025, 25, 104215. [Google Scholar] [CrossRef]
  29. Elhosseny, M.; Abdel-Salam, M.; El-Hasnony, I.M. Adaptive dynamic crayfish algorithm with multi-enhanced strategy for global high-dimensional optimization and real-engineering problems. Sci. Rep. 2025, 15, 10656. [Google Scholar] [CrossRef]
  30. Wolpert, D.H.; Macready, W.G. No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1997, 1, 67–82. [Google Scholar] [CrossRef]
  31. Zou, H.; Wang, K. Improved multi-strategy beluga whale optimization algorithm: A case study for multiple engineering optimization problems. Clust. Comput. 2025, 28, 183. [Google Scholar] [CrossRef]
  32. Fakhouri, H.N.; Ishtaiwi, A.; Makhadmeh, S.N.; Al-Betar, M.A.; Alkhalaileh, M. Novel hybrid crayfish optimization algorithm and self-adaptive differential evolution for solving complex optimization problems. Symmetry 2024, 16, 927. [Google Scholar] [CrossRef]
  33. Özcan, A.R.; Mehta, P.; Sait, S.M.; Gürses, D.; Yildiz, A.R. A new neural network–assisted hybrid chaotic hiking optimization algorithm for optimal design of engineering components. Mater. Test. 2025, 67, 1069–1078. [Google Scholar] [CrossRef]
  34. Xia, H.; Ke, Y.; Liao, R.; Zhang, H. Fractional order dung beetle optimizer with reduction factor for global optimization and industrial engineering optimization problems. Artif. Intell. Rev. 2025, 58, 308. [Google Scholar] [CrossRef]
  35. Lian, J.; Hui, G.; Ma, L.; Zhu, T.; Wu, X.; Heidari, A.A.; Chen, Y.; Chen, H. Parrot optimizer: Algorithm and applications to medical problems. Comput. Biol. Med. 2024, 172, 108064. [Google Scholar] [CrossRef] [PubMed]
  36. Liu, H.; Cai, C.; Li, P.; Tang, C.; Zhao, M.; Zheng, X.; Li, Y.; Zhao, Y.; Liu, C. Hybrid prediction method for solar photovoltaic power generation using normal cloud parrot optimization algorithm integrated with extreme learning machine. Sci. Rep. 2025, 15, 6491. [Google Scholar] [CrossRef]
  37. Aljaidi, M.; Jangir, P.; Arpita; Agrawal, S.P.; Pandya, S.B.; Parmar, A.; Alkoradees, A.F.; Khishe, M.; Jangid, R. A novel Parrot Optimizer for robust and scalable PEMFC parameter optimization. Sci. Rep. 2025, 15, 11625. [Google Scholar] [CrossRef] [PubMed]
  38. Abdel-Salam, M.; Alomari, S.A.; Yang, J.; Lee, S.; Saleem, K.; Smerat, A.; Snasel, V.; Abualigah, L. Harnessing dynamic turbulent dynamics in parrot optimization algorithm for complex high-dimensional engineering problems. Comput. Methods Appl. Mech. Eng. 2025, 440, 117908. [Google Scholar] [CrossRef]
  39. Houssein, E.H.; Emam, M.M.; Alomoush, W.; Samee, N.A.; Jamjoom, M.M.; Zhong, R.; Dhal, K.G. An efficient improved parrot optimizer for bladder cancer classification. Comput. Biol. Med. 2024, 181, 109080. [Google Scholar] [CrossRef]
  40. Siegel, S.; Castellan, N. The Friedman two-way analysis of variance by ranks. Nonparametric Stat. Behav. Sci. 1988, 174–184. [Google Scholar] [CrossRef]
  41. Derrac, J.; García, S.; Molina, D.; Herrera, F. A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms. Swarm Evol. Comput. 2011, 1, 3–18. [Google Scholar] [CrossRef]
  42. Mohamed, A.W.; Hadi, A.A.; Fattouh, A.M.; Jambi, K.M. LSHADE with semi-parameter adaptation hybrid with CMA-ES for solving CEC 2017 benchmark problems. In Proceedings of the 2017 IEEE Congress on Evolutionary Computation (CEC), San Sebastián, Spain, 5–8 June 2017; pp. 145–152. [Google Scholar]
  43. Luo, W.; Lin, X.; Li, C.; Yang, S.; Shi, Y. Benchmark functions for CEC 2022 competition on seeking multiple optima in dynamic environments. arXiv 2022, arXiv:2201.00523. [Google Scholar] [CrossRef]
  44. Abualigah, L.; Diabat, A.; Mirjalili, S.; Abd Elaziz, M.; Gandomi, A.H. The arithmetic optimization algorithm. Comput. Methods Appl. Mech. Eng. 2021, 376, 113609. [Google Scholar] [CrossRef]
  45. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris hawks optimization: Algorithm and applications. Future Gener. Comput. Syst. 2019, 97, 849–872. [Google Scholar] [CrossRef]
  46. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  47. Abdel-Basset, M.; Mohamed, R.; Jameel, M.; Abouhawwash, M. Spider wasp optimizer: A novel meta-heuristic optimization algorithm. Artif. Intell. Rev. 2023, 56, 11675–11738. [Google Scholar] [CrossRef]
  48. Chopra, N.; Ansari, M.M. Golden jackal optimization: A novel nature-inspired optimizer for engineering applications. Expert. Syst. Appl. 2022, 198, 116924. [Google Scholar] [CrossRef]
  49. Dehghani, M.; Montazeri, Z.; Trojovská, E.; Trojovský, P. Coati Optimization Algorithm: A new bio-inspired metaheuristic algorithm for solving optimization problems. Knowl.-Based Syst. 2023, 259, 110011. [Google Scholar] [CrossRef]
  50. Ghasemi, M.; Zare, M.; Trojovský, P.; Rao, R.V.; Trojovská, E.; Kandasamy, V. Optimization based on the smart behavior of plants with its engineering applications: Ivy algorithm. Knowl.-Based Syst. 2024, 295, 111850. [Google Scholar] [CrossRef]
  51. Abdel-Basset, M.; Mohamed, R.; Azeem, S.A.A.; Jameel, M.; Abouhawwash, M. Kepler optimization algorithm: A new metaheuristic algorithm inspired by Kepler’s laws of planetary motion. Knowl.-Based Syst. 2023, 268, 110454. [Google Scholar] [CrossRef]
  52. Houssein, E.H.; Hammad, A.; Emam, M.M.; Ali, A.A. An enhanced Coati Optimization Algorithm for global optimization and feature selection in EEG emotion recognition. Comput. Biol. Med. 2024, 173, 108329. [Google Scholar] [CrossRef]
  53. Iruthayarajan, M.W.; Baskar, S. Covariance matrix adaptation evolution strategy based design of centralized PID controller. Expert. Syst. Appl. 2010, 37, 5775–5781. [Google Scholar] [CrossRef]
  54. Long, W.; Cai, S.; Jiao, J.; Xu, M.; Wu, T. A new hybrid algorithm based on grey wolf optimizer and cuckoo search for parameter extraction of solar photovoltaic models. Energy Convers. Manag. 2020, 203, 112243. [Google Scholar] [CrossRef]
  55. Chen, H.; Yang, C.; Heidari, A.A.; Zhao, X. An efficient double adaptive random spare reinforced whale optimization algorithm. Expert. Syst. Appl. 2020, 154, 113018. [Google Scholar] [CrossRef]
  56. Hu, G.; Zhong, J.; Du, B.; Wei, G. An enhanced hybrid arithmetic optimization algorithm for engineering applications. Comput. Methods Appl. Mech. Eng. 2022, 394, 114901. [Google Scholar] [CrossRef]
  57. Khodadadi, N.; Snasel, V.; Mirjalili, S. Dynamic arithmetic optimization algorithm for truss optimization under natural frequency constraints. IEEE Access 2022, 10, 16188–16208. [Google Scholar] [CrossRef]
  58. Wu, H.; Zhang, A.; Han, Y.; Nan, J.; Li, K. Fast stochastic configuration network based on an improved sparrow search algorithm for fire flame recognition. Knowl.-Based Syst. 2022, 245, 108626. [Google Scholar] [CrossRef]
  59. Brest, J.; Greiner, S.; Boskovic, B.; Mernik, M.; Zumer, V. Self-adapting control parameters in differential evolution: A comparative study on numerical benchmark problems. IEEE Trans. Evol. Comput. 2006, 10, 646–657. [Google Scholar] [CrossRef]
  60. Rather, S.A.; Bala, P.S. Hybridization of constriction coefficient based particle swarm optimization and gravitational search algorithm for function optimization. In Proceedings of the International Conference on Advances in Electronics, Electrical & Computational Intelligence (ICAEEC), Malabe, Sri Lanka, 5–6 December 2019. [Google Scholar]
  61. Abdel-Salam, M.; Chhabra, A.; Braik, M.; Gharehchopogh, F.S.; Bacanin, N. A Halton Enhanced Solution-based Human Evolutionary Algorithm for Complex Optimization and Advanced Feature Selection Problems. Knowl.-Based Syst. 2025, 311, 113062. [Google Scholar] [CrossRef]
  62. Goh, H.H.; He, R.; Zhang, D.; Liu, H.; Dai, W.; Lim, C.S.; Kurniawan, T.A.; Teo, K.T.K.; Goh, K.C. A multimodal approach to chaotic renewable energy prediction using meteorological and historical information. Appl. Soft Comput. 2022, 118, 108487. [Google Scholar] [CrossRef]
  63. Askr, H.; Abdel-Salam, M.; Snášel, V.; Hassanien, A.E. A green hydrogen production model from solar powered water electrolyze based on deep chaotic Lévy gazelle optimization. Eng. Sci. Technol. Int. J. 2024, 60, 101874. [Google Scholar] [CrossRef]
  64. Rashad, M.; Abdellatif, M.S.; Rabie, A.H. An Improved Human Evolutionary Optimization Algorithm for Maximizing Green Hydrogen Generation in Intelligent Energy Management System (IEMS). Results Eng. 2025, 27, 105998. [Google Scholar] [CrossRef]
  65. Liu, Y.; Li, L.; Liu, J. Short-term wind power output prediction using hybrid-enhanced seagull optimization algorithm and support vector machine: A high-precision method. Int. J. Green. Energy 2024, 21, 2858–2871. [Google Scholar] [CrossRef]
  66. Yaghoubirad, M.; Azizi, N.; Farajollahi, M.; Ahmadi, A. Deep learning-based multistep ahead wind speed and power generation forecasting using direct method. Energy Convers. Manag. 2023, 281, 116760. [Google Scholar] [CrossRef]
  67. Ewees, A.A.; Al-qaness, M.A.; Abualigah, L.; Abd Elaziz, M. HBO-LSTM: Optimized long short term memory with heap-based optimizer for wind power forecasting. Energy Convers. Manag. 2022, 268, 116022. [Google Scholar] [CrossRef]
  68. Al-qaness, M.A.; Ewees, A.A.; Fan, H.; Abualigah, L.; Elsheikh, A.H.; Abd Elaziz, M. Wind power prediction using random vector functional link network with capuchin search algorithm. Ain Shams Eng. J. 2023, 14, 102095. [Google Scholar] [CrossRef]
  69. Zayed, M.E.; Rehman, S.; Elgendy, I.A.; Al-Shaikhi, A.; Mohandes, M.A.; Irshad, K.; Abdelrazik, A.; Alam, M.A. Benchmarking reinforcement learning and prototyping development of floating solar power system: Experimental study and LSTM modeling combined with brown-bear optimization algorithm. Energy Convers. Manag. 2025, 332, 119696. [Google Scholar] [CrossRef]
  70. Dagal, I.; Ibrahim, A.-W.; Harrison, A. Leveraging a novel grey wolf algorithm for optimization of photovoltaic-battery energy storage system under partial shading conditions. Comput. Electr. Eng. 2025, 122, 109991. [Google Scholar] [CrossRef]
  71. Nagarajan, K.; Rajagopalan, A.; Bajaj, M.; Raju, V.; Blazek, V. Enhanced wombat optimization algorithm for multi-objective optimal power flow in renewable energy and electric vehicle integrated systems. Results Eng. 2025, 25, 103671. [Google Scholar] [CrossRef]
  72. Yu, F.; Guan, J.; Wu, H.; Wang, H.; Ma, B. Multi-population differential evolution approach for feature selection with mutual information ranking. Expert. Syst. Appl. 2025, 260, 125404. [Google Scholar] [CrossRef]
  73. Abdel-salam, M.; Alomari, S.A.; Almomani, M.H.; Hu, G.; Lee, S.; Saleem, K.; Smerat, A.; Abualigah, L. Quadruple strategy-driven hiking optimization algorithm for low and high-dimensional feature selection and real-world skin cancer classification. Knowl.-Based Syst. 2025, 315, 113286. [Google Scholar] [CrossRef]
  74. Mostafa, R.R.; Khedr, A.M.; Aghbari, Z.A.; Afyouni, I.; Kamel, I.; Ahmed, N. Medical image segmentation approach based on hybrid adaptive differential evolution and crayfish optimizer. Comput. Biol. Med. 2024, 180, 109011. [Google Scholar] [CrossRef]
  75. Abdel-Salam, M.; Houssein, E.H.; Emam, M.M.; Samee, N.A.; Jamjoom, M.M.; Hu, G. An adaptive enhanced human memory algorithm for multi-level image segmentation for pathological lung cancer images. Comput. Biol. Med. 2024, 183, 109272. [Google Scholar] [CrossRef]
  76. Mostafa, R.R.; Vijayan, D.; Khedr, A.M. EGBCR-FANET: Enhanced genghis Khan shark optimizer based Bayesian-driven clustered routing model for FANETs. Veh. Commun. 2025, 54, 100935. [Google Scholar] [CrossRef]
  77. Mostafa, R.R.; Hashim, F.A.; Khedr, A.M.; AL Aghbari, Z.; Afyouni, I.; Kamel, I.; Ahmed, N. EMGODV-Hop: An efficient range-free-based WSN node localization using an enhanced mountain gazelle optimizer. J. Supercomput. 2025, 81, 140. [Google Scholar] [CrossRef]
  78. Malti, A.N.; Hakem, M.; Benmammar, B. A new hybrid multi-objective optimization algorithm for task scheduling in cloud systems. Clust. Comput. 2024, 27, 2525–2548. [Google Scholar] [CrossRef]
  79. Abdel-salam, M.; Hassanien, A.E. A Novel Dynamic Chaotic Golden Jackal Optimization Algorithm for Sensor-Based Human Activity Recognition Using Smartphones for Sustainable Smart Cities. In Artificial Intelligence for Environmental Sustainability and Green Initiatives; Springer: Berlin/Heidelberg, Germany, 2024; pp. 273–296. [Google Scholar]
  80. Manoharan, H.; Edalatpanah, S. Evolutionary bioinformatics with veiled biological database for health care operations. Comput. Biol. Med. 2025, 184, 109418. [Google Scholar] [CrossRef]
  81. Zhao, J.; Deng, C.; Yu, H.; Fei, H.; Li, D. Path planning of unmanned vehicles based on adaptive particle swarm optimization algorithm. Comput. Commun. 2024, 216, 112–129. [Google Scholar] [CrossRef]
  82. Villoth, J.P.; Zivkovic, M.; Zivkovic, T.; Abdel-salam, M.; Hammad, M.; Jovanovic, L.; Simic, V.; Bacanin, N. Two-tier deep and machine learning approach optimized by adaptive multi-population firefly algorithm for software defects prediction. Neurocomputing 2025, 630, 129695. [Google Scholar] [CrossRef]
  83. Oyelade, O.N.; Aminu, E.F.; Wang, H.; Rafferty, K. An adaptation of hybrid binary optimization algorithms for medical image feature selection in neural network for classification of breast cancer. Neurocomputing 2025, 617, 129018. [Google Scholar] [CrossRef]
  84. Nirmala, G.; Nayudu, P.P.; Kumar, A.R.; Sagar, R. Automatic cervical cancer classification using adaptive vision transformer encoder with CNN for medical application. Pattern Recognit. 2025, 160, 111201. [Google Scholar] [CrossRef]
  85. Pashaei, E.; Pashaei, E.; Mirjalili, S. Binary hiking optimization for gene selection: Insights from HNSCC RNA-Seq data. Expert. Syst. Appl. 2025, 268, 126404. [Google Scholar] [CrossRef]
  86. Yang, J.; Yan, F.; Zhang, J.; Peng, C. Hybrid chaos game and grey wolf optimization algorithms for UAV path planning. Appl. Math. Model. 2025, 142, 115979. [Google Scholar] [CrossRef]
  87. Yang, J.; Wu, Y.; Yuan, Y.; Xue, H.; Bourouis, S.; Abdel-Salam, M.; Prajapat, S.; Por, L.Y. LLM-AE-MP: Web Attack Detection Using a Large Language Model with Autoencoder and Multilayer Perceptron. Expert. Syst. Appl. 2025, 274, 126982. [Google Scholar] [CrossRef]
  88. Wang, H.; Chen, S.; Li, M.; Zhu, C.; Wang, Z. Demand-driven charging strategy-based distributed routing optimization under traffic restrictions in internet of electric vehicles. IEEE Internet Things J. 2024, 11, 35917–35927. [Google Scholar] [CrossRef]
Figure 1. Proposed ADPO.
Figure 2. Convergence curves of different algorithms using CEC2022.
Figure 3. Boxplots of different algorithms using CEC2022.
Figure 4. Friedman ranks of various algorithms using CEC2022.
Figure 5. Wilcoxon test results for ADPO versus other algorithms using CEC2022.
Figure 6. Friedman ranks of different advanced algorithms using CEC2017, 50D.
Figure 7. Wilcoxon test results of ADPO versus other advanced algorithms using CEC2017.
Figure 8. The proposed ADPO-LSTM model for wind prediction.
Figure 9. The descriptive statistics of the utilized dataset.
Figure 10. Average training R2 and COV metrics across all stations.
Figure 11. Average testing R2 and COV metrics across all stations.
Figure 12. The average R2 for various optimized DNN approaches.
Table 1. Various parameter settings.
Algorithm | Parameter Value
IVY | β1 = [1, 1.5), GV = [0, 1]
HOA | Angle of inclination = [0, 50], SF = [1, 3]
SWO | TR = 0.3, CR = 0.2
HHO | E0 changes from −1 to 1
AOA | α = 5; μ = 0.5
KOA | T̄ = 3, μ0 = 0.1, γ = 15
GJO | c1 varies from 1 to 2
WOA | k = 1, q = [−1, 1]
CMA-ES | σ = 0.5, μ = λ/2
CSOAOA | μ = 0.499, a = 5
GWO_CS | a: linear reduction from 2 to 0
RDWOA | a1 = [2, 0], a2 = [−2, −1], s = 0, b = 1
jDE | p = 0.05, F = 0.5, c = 0.1, CR = 0.9
ISSA | PP = 0.2, ST = 0.8
Table 2. Experimental results using CEC2022 for different variants of PO.
F | Metric | ADPO | ADPO-DLH | ADPO-EAM | ADPO-MDV | PO
F1 | AVG | 303.862 | 1.66×10^4 | 300.839 | 3185.297 | 1.83×10^4
F1 | SD | 2.899 | 3853.111 | 0.390 | 1423.174 | 4152.061
F1 | RAN | 2 | 4 | 1 | 3 | 5
F2 | AVG | 456.679 | 525.328 | 468.321 | 487.787 | 501.975
F2 | SD | 12.155 | 80.608 | 37.535 | 30.722 | 39.841
F2 | RAN | 1 | 5 | 2 | 3 | 4
F3 | AVG | 618.668 | 650.046 | 623.121 | 649.312 | 657.763
F3 | SD | 4.773 | 9.967 | 12.532 | 15.584 | 16.308
F3 | RAN | 1 | 4 | 2 | 3 | 5
F4 | AVG | 870.459 | 893.445 | 879.201 | 882.195 | 887.711
F4 | SD | 18.172 | 18.159 | 11.278 | 17.248 | 16.002
F4 | RAN | 1 | 5 | 2 | 3 | 4
F5 | AVG | 1793.324 | 1953.369 | 2037.207 | 2395.176 | 2758.038
F5 | SD | 386.598 | 165.996 | 492.805 | 392.801 | 361.537
F5 | RAN | 1 | 2 | 3 | 4 | 5
F6 | AVG | 4812.973 | 9.26×10^5 | 6862.081 | 11428.323 | 2.18×10^5
F6 | SD | 3762.032 | 1.39×10^6 | 6587.417 | 7408.561 | 1.75×10^5
F6 | RAN | 1 | 5 | 2 | 3 | 4
F7 | AVG | 2090.781 | 2143.831 | 2107.010 | 2128.600 | 2132.220
F7 | SD | 34.803 | 27.458 | 28.586 | 30.951 | 37.998
F7 | RAN | 1 | 5 | 2 | 3 | 4
F8 | AVG | 2231.654 | 2288.420 | 2240.885 | 2246.410 | 2277.449
F8 | SD | 7.976 | 61.781 | 43.257 | 16.325 | 57.658
F8 | RAN | 1 | 5 | 2 | 3 | 4
F9 | AVG | 2481.315 | 2575.709 | 2480.978 | 2495.260 | 2594.209
F9 | SD | 0.618 | 36.557 | 0.176 | 8.134 | 52.688
F9 | RAN | 2 | 4 | 1 | 3 | 5
F10 | AVG | 2518.107 | 2559.140 | 2557.694 | 2758.615 | 2622.379
F10 | SD | 54.563 | 121.507 | 80.237 | 813.674 | 131.213
F10 | RAN | 1 | 3 | 2 | 5 | 4
F11 | AVG | 2951.864 | 3223.154 | 2941.356 | 3013.948 | 3231.580
F11 | SD | 51.855 | 204.779 | 50.784 | 122.523 | 111.284
F11 | RAN | 2 | 4 | 1 | 3 | 5
F12 | AVG | 2978.793 | 3030.089 | 2975.518 | 2989.338 | 3040.060
F12 | SD | 28.171 | 68.964 | 23.851 | 33.773 | 43.344
F12 | RAN | 2 | 4 | 1 | 3 | 5
Average rank | 1.33 | 4.17 | 1.75 | 3.25 | 4.50
Final rank | 1 | 4 | 2 | 3 | 5
Friedman rank | 1.55 | 4.00 | 1.87 | 3.24 | 4.34
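The per-function ranks (RAN rows) and the closing Friedman rank row follow the standard mean-rank construction of the Friedman test [40]. The snippet below is a minimal sketch of that computation, not the authors' code; the two example rows reuse the F1 and F2 AVG values from Table 2, and the run-level data needed to reproduce the reported statistics exactly are not shown here.

```python
import numpy as np
from scipy.stats import rankdata

# Rows: benchmark functions; columns: algorithms in Table 2 order
# (ADPO, ADPO-DLH, ADPO-EAM, ADPO-MDV, PO). Lower objective value is better.
avg_results = np.array([
    [303.862, 1.66e4, 300.839, 3185.297, 1.83e4],    # F1 AVG
    [456.679, 525.328, 468.321, 487.787, 501.975],   # F2 AVG
])

# Rank the algorithms within each function (1 = best); ties receive
# average ranks, which is the convention underlying the Friedman test.
per_function_ranks = np.apply_along_axis(rankdata, 1, avg_results)

# Averaging the ranks over all functions yields the mean-rank row used
# to order the algorithms overall.
print(per_function_ranks.mean(axis=0))
```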
Table 3. Experimental results using CEC2022 for ADPO and other competitors.
F | Metric | ADPO | PO | IVY | HOA | HHO | AOA | SWO | COA | GJO | WOA | KOA
F1 | AVG | 301.302 | 1.60×10^4 | 2.40×10^4 | 2.77×10^4 | 1.11×10^4 | 3.38×10^4 | 2.70×10^4 | 4.31×10^4 | 1.65×10^4 | 2.48×10^4 | 1.11×10^5
F1 | SD | 0.981 | 4.46×10^3 | 1.01×10^4 | 6329.500 | 7244.094 | 1.17×10^4 | 7.67×10^3 | 1.34×10^4 | 5.42×10^3 | 9.44×10^3 | 1.21×10^5
F1 | RAN | 1 | 3 | 5 | 8 | 2 | 9 | 7 | 10 | 4 | 6 | 11
F2 | AVG | 466.259 | 529.092 | 464.089 | 1975.235 | 481.445 | 2530.853 | 790.688 | 2890.766 | 610.729 | 570.208 | 5045.097
F2 | SD | 19.242 | 40.681 | 29.231 | 465.416 | 25.270 | 1023.860 | 106.168 | 796.635 | 78.953 | 54.887 | 1488.769
F2 | RAN | 2 | 4 | 1 | 8 | 3 | 9 | 7 | 10 | 6 | 5 | 11
F3 | AVG | 621.667 | 655.570 | 606.720 | 663.939 | 661.088 | 664.580 | 650.324 | 682.391 | 626.149 | 674.481 | 704.271
F3 | SD | 10.531 | 10.121 | 11.508 | 9.539 | 10.085 | 7.646 | 7.889 | 7.860 | 10.040 | 14.432 | 10.033
F3 | RAN | 2 | 5 | 1 | 7 | 6 | 8 | 4 | 10 | 3 | 9 | 11
F4 | AVG | 878.170 | 892.127 | 872.859 | 931.436 | 886.788 | 950.451 | 962.916 | 976.083 | 897.802 | 932.687 | 1063.219
F4 | SD | 16.148 | 16.820 | 19.693 | 15.451 | 14.546 | 15.147 | 19.739 | 14.435 | 26.177 | 31.132 | 17.626
F4 | RAN | 2 | 4 | 1 | 6 | 3 | 8 | 9 | 10 | 5 | 7 | 11
F5 | AVG | 2013.369 | 2518.767 | 2368.061 | 2749.518 | 2855.258 | 2920.390 | 3476.263 | 3566.697 | 1900.620 | 3919.915 | 9733.317
F5 | SD | 402.268 | 505.119 | 182.276 | 379.229 | 268.721 | 467.652 | 940.511 | 355.377 | 407.825 | 1104.669 | 1941.003
F5 | RAN | 2 | 4 | 3 | 5 | 6 | 7 | 8 | 9 | 1 | 10 | 11
F6 | AVG | 4028.385 | 2.45×10^5 | 1.56×10^4 | 1.23×10^9 | 1.37×10^5 | 8.68×10^8 | 7.23×10^7 | 2.59×10^9 | 9.56×10^6 | 1.89×10^6 | 3.43×10^9
F6 | SD | 3172.272 | 4.90×10^5 | 5.95×10^4 | 7.60×10^8 | 6.95×10^4 | 8.86×10^8 | 6.48×10^7 | 1.15×10^9 | 1.12×10^7 | 6.14×10^6 | 1.12×10^9
F6 | RAN | 1 | 4 | 2 | 9 | 3 | 8 | 7 | 10 | 6 | 5 | 11
F7 | AVG | 2103.721 | 2143.961 | 2144.343 | 2166.174 | 2205.912 | 2226.781 | 2179.929 | 2222.639 | 2121.605 | 2206.460 | 2356.355
F7 | SD | 29.187 | 37.351 | 75.978 | 40.021 | 65.638 | 93.593 | 46.392 | 34.530 | 47.310 | 59.227 | 68.525
F7 | RAN | 1 | 3 | 4 | 5 | 7 | 10 | 6 | 9 | 2 | 8 | 11
F8 | AVG | 2243.629 | 2297.781 | 2373.706 | 2377.147 | 2255.380 | 2497.090 | 2292.650 | 2432.047 | 2240.629 | 2274.965 | 2983.494
F8 | SD | 36.205 | 76.701 | 153.618 | 134.631 | 38.109 | 182.612 | 53.938 | 183.263 | 26.135 | 65.200 | 291.237
F8 | RAN | 2 | 6 | 7 | 8 | 3 | 10 | 5 | 9 | 1 | 4 | 11
F9 | AVG | 2481.000 | 2564.784 | 2483.255 | 3260.897 | 2508.538 | 3091.623 | 2628.026 | 3477.854 | 2584.904 | 2573.909 | 3402.602
F9 | SD | 0.282 | 35.467 | 3.124 | 230.424 | 22.711 | 228.864 | 46.043 | 353.977 | 50.466 | 46.192 | 238.632
F9 | RAN | 1 | 4 | 2 | 9 | 3 | 8 | 7 | 11 | 6 | 5 | 10
F10 | AVG | 2539.901 | 2778.659 | 3648.251 | 5296.652 | 4088.618 | 5543.269 | 3760.872 | 6381.361 | 3307.365 | 4460.976 | 6499.317
F10 | SD | 103.711 | 765.195 | 1006.841 | 1352.603 | 595.663 | 914.065 | 1416.084 | 1214.830 | 1252.848 | 1231.506 | 1325.805
F10 | RAN | 1 | 2 | 4 | 8 | 6 | 9 | 5 | 10 | 3 | 7 | 11
F11 | AVG | 2935.671 | 3310.221 | 3317.437 | 7820.076 | 3003.779 | 8347.090 | 5050.069 | 8742.492 | 4638.858 | 3384.729 | 1.11×10^4
F11 | SD | 138.958 | 487.063 | 1028.409 | 606.779 | 139.075 | 1137.649 | 623.546 | 870.316 | 536.012 | 270.966 | 1390.234
F11 | RAN | 1 | 3 | 4 | 8 | 2 | 9 | 7 | 10 | 6 | 5 | 11
F12 | AVG | 2991.295 | 3041.325 | 3034.367 | 3846.387 | 3185.708 | 3807.392 | 3261.218 | 3646.558 | 3027.704 | 3086.782 | 3854.341
F12 | SD | 51.654 | 41.210 | 91.186 | 210.462 | 137.706 | 236.661 | 71.964 | 231.355 | 57.378 | 137.904 | 189.245
F12 | RAN | 1 | 4 | 3 | 10 | 6 | 9 | 7 | 8 | 2 | 5 | 11
Average rank | 1.42 | 3.83 | 3.08 | 7.58 | 4.17 | 8.67 | 6.58 | 9.67 | 3.75 | 6.33 | 10.92
Final rank | 1 | 4 | 2 | 8 | 5 | 9 | 7 | 10 | 3 | 6 | 11
Table 4. The Wilcoxon p-values of ADPO versus other algorithms using CEC2022.
F | ADPO | PO | IVY | HOA | HHO | AOA | SWO | COA | GJO | WOA | KOA
F1 | 1.50990×10^−11 | 1.50990×10^−11 | 1.50990×10^−11 | 1.50990×10^−11 | 1.50990×10^−11 | 1.50990×10^−11 | 1.50990×10^−11 | 1.50990×10^−11 | 1.50990×10^−11 | 1.50990×10^−11 | 1.50990×10^−11
F2 | 2.48760×10^−11 | 1.50990×10^−11 | 6.79720×10^−8 | 1.84490×10^−11 | 3.03290×10^−11 | 2.73100×10^−6 | 1.50990×10^−11 | 8.03110×10^−7 | 4.24240×10^−9 | 9.28370×10^−10 | 2.48760×10^−11
F3 | 1.50990×10^−11 | 1.50990×10^−11 | 1.50990×10^−11 | 4.15730×10^−3 | 1.50990×10^−11 | 7.39400×10^−1 | 1.50990×10^−11 | 1.67600×10^−8 | 1.50990×10^−11 | 1.11360×10^−9 | 1.50990×10^−11
F4 | 1.50990×10^−11 | 1.50990×10^−11 | 2.78060×10^−4 | 2.28630×10^−9 | 1.50990×10^−11 | 2.01650×10^−3 | 1.50990×10^−11 | 7.64580×10^−6 | 2.03860×10^−11 | 1.68410×10^−5 | 1.50990×10^−11
F5 | 1.50990×10^−11 | 1.50990×10^−11 | 2.74700×10^−11 | 2.35690×10^−4 | 1.50990×10^−11 | 1.02330×10^−1 | 1.50990×10^−11 | 2.09130×10^−9 | 1.50990×10^−11 | 1.07720×10^−10 | 1.50990×10^−11
F6 | 1.66920×10^−11 | 1.50990×10^−11 | 1.25970×10^−1 | 2.74700×10^−11 | 2.25220×10^−11 | 1.00000×10^0 | 1.50990×10^−11 | 5.46830×10^−11 | 1.07720×10^−10 | 2.03860×10^−11 | 1.66920×10^−11
F7 | 1.50990×10^−11 | 1.50990×10^−11 | 4.94170×10^−3 | 1.46030×10^−2 | 2.25220×10^−11 | 2.17020×10^−1 | 1.50990×10^−11 | 4.15730×10^−3 | 1.50990×10^−11 | 3.06050×10^−10 | 1.50990×10^−11
F8 | 1.46080×10^−9 | 1.50990×10^−11 | 2.70710×10^−1 | 6.43520×10^−10 | 2.54610×10^−8 | 2.78060×10^−4 | 1.50990×10^−11 | 8.40660×10^−5 | 6.01160×10^−9 | 1.84490×10^−11 | 1.46080×10^−9
F9 | 1.50990×10^−11 | 1.50990×10^−11 | 1.66920×10^−11 | 1.50990×10^−11 | 1.50990×10^−11 | 8.64990×10^−1 | 1.50990×10^−11 | 4.49670×10^−11 | 1.50990×10^−11 | 1.50990×10^−11 | 1.50990×10^−11
F10 | 5.86870×10^−10 | 1.66920×10^−11 | 2.70710×10^−1 | 4.87780×10^−10 | 6.55550×10^−9 | 5.15730×10^−3 | 1.50990×10^−11 | 2.31950×10^−5 | 7.05490×10^−10 | 1.43580×10^−10 | 5.86870×10^−10
F11 | 6.43520×10^−10 | 5.86870×10^−10 | 7.97820×10^−2 | 8.39880×10^−4 | 4.42050×10^−7 | 5.57130×10^−4 | 1.50990×10^−11 | 4.14600×10^−6 | 6.79720×10^−8 | 1.38630×10^−5 | 6.43520×10^−10
F12 | 1.50990×10^−11 | 1.50990×10^−11 | 1.50990×10^−11 | 1.50990×10^−11 | 1.50990×10^−11 | 1.09790×10^−7 | 1.50990×10^−11 | 1.50990×10^−11 | 1.50990×10^−11 | 1.50990×10^−11 | 1.50990×10^−11
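For reference, the following is a minimal sketch of how pairwise p-values of this kind can be produced with a two-sided Wilcoxon rank-sum test. The two samples below are synthetic stand-ins drawn to match the F1 means and standard deviations reported in Table 3, since the paper's per-run results are not reproduced here.

```python
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(42)

# Synthetic stand-ins for the final fitness values of 30 independent runs
# on one benchmark function; real per-run data would replace these arrays.
adpo_runs = rng.normal(loc=301.302, scale=0.981, size=30)
po_runs = rng.normal(loc=1.60e4, scale=4.46e3, size=30)

# Two-sided Wilcoxon rank-sum test: p < 0.05 is read as a statistically
# significant difference between the two algorithms' result samples.
stat, p_value = ranksums(adpo_runs, po_runs)
print(f"statistic = {stat:.3f}, p-value = {p_value:.3e}")
```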
Table 5. Experimental results using CEC2017 for ADPO and advanced competitors, 50D.
F | Metric | ADPO | eCOA | CMAES | DAOA | CSOAOA | GWO_CS | RDWOA | jDE | ISSA | IPSO_IGSA
F1 | AVG | 2.43×10^6 | 8.08×10^9 | 1.95×10^10 | 2.62×10^11 | 2.09×10^8 | 1.42×10^10 | 2.11×10^10 | 1.23×10^11 | 2.27×10^8 | 4.11×10^10
F1 | SD | 2.16×10^6 | 1.24×10^9 | 3.03×10^10 | 1.46×10^10 | 7.33×10^7 | 4.30×10^9 | 2.38×10^9 | 7.75×10^10 | 6.78×10^7 | 1.38×10^10
F1 | RAN | 1 | 4 | 6 | 10 | 2 | 5 | 7 | 9 | 3 | 8
F3 | AVG | 2.79×10^4 | 1.29×10^5 | 3.82×10^5 | 5.74×10^7 | 1.17×10^5 | 1.23×10^5 | 1.89×10^5 | 2.88×10^5 | 2.47×10^5 | 1.70×10^5
F3 | SD | 8260.825 | 1.21×10^4 | 4.93×10^4 | 1.35×10^8 | 1.18×10^4 | 1.77×10^4 | 1.45×10^4 | 4.19×10^4 | 8.13×10^4 | 1.94×10^4
F3 | RAN | 1 | 4 | 9 | 10 | 2 | 3 | 6 | 8 | 7 | 5
F4 | AVG | 616.358 | 1465.648 | 8332.552 | 1.30×10^5 | 669.436 | 1496.651 | 4063.559 | 5.20×10^4 | 718.095 | 8698.568
F4 | SD | 48.544 | 443.718 | 3064.805 | 2.80×10^4 | 41.225 | 210.031 | 887.200 | 8380.828 | 31.919 | 1866.557
F4 | RAN | 1 | 4 | 7 | 10 | 2 | 5 | 6 | 9 | 3 | 8
F5 | AVG | 840.354 | 842.542 | 549.891 | 1697.942 | 828.289 | 809.189 | 1043.451 | 1403.854 | 881.798 | 855.080
F5 | SD | 63.807 | 54.380 | 3.361 | 91.762 | 34.561 | 23.177 | 28.878 | 43.358 | 26.887 | 106.625
F5 | RAN | 4 | 5 | 1 | 10 | 3 | 2 | 8 | 9 | 7 | 6
F6 | AVG | 650.472 | 660.316 | 621.711 | 757.645 | 630.582 | 632.558 | 682.780 | 701.936 | 667.615 | 671.656
F6 | SD | 7.193 | 9.409 | 33.627 | 8.127 | 15.080 | 7.842 | 8.880 | 22.860 | 2.638 | 19.663
F6 | RAN | 4 | 5 | 1 | 10 | 2 | 3 | 8 | 9 | 6 | 7
F7 | AVG | 1342.796 | 1421.933 | 956.599 | 6057.349 | 1451.628 | 1180.698 | 1764.588 | 3617.830 | 1673.438 | 1801.288
F7 | SD | 142.095 | 192.150 | 155.021 | 466.162 | 162.112 | 96.478 | 112.411 | 1358.797 | 146.308 | 103.252
F7 | RAN | 3 | 4 | 1 | 10 | 5 | 2 | 7 | 9 | 6 | 8
F8 | AVG | 1161.837 | 1110.964 | 1178.717 | 2017.948 | 1158.436 | 1108.937 | 1321.955 | 1650.974 | 1173.708 | 1169.514
F8 | SD | 61.530 | 39.309 | 257.356 | 73.263 | 40.599 | 41.371 | 58.026 | 200.700 | 46.786 | 49.201
F8 | RAN | 4 | 2 | 7 | 10 | 3 | 1 | 8 | 9 | 6 | 5
F9 | AVG | 1.44×10^4 | 1.30×10^4 | 900.151 | 1.17×10^5 | 1.15×10^4 | 1.51×10^4 | 2.72×10^4 | 6.14×10^4 | 1.56×10^4 | 1.57×10^4
F9 | SD | 1699.155 | 563.595 | 0.235 | 1.50×10^4 | 1019.419 | 8516.092 | 2781.981 | 8150.298 | 1433.657 | 6902.928
F9 | RAN | 4 | 3 | 1 | 10 | 2 | 5 | 8 | 9 | 6 | 7
F10 | AVG | 7718.304 | 8311.301 | 1.50×10^4 | 1.69×10^4 | 6484.701 | 7483.089 | 1.26×10^4 | 1.54×10^4 | 8521.760 | 1.49×10^4
F10 | SD | 724.850 | 762.366 | 348.879 | 588.528 | 921.221 | 1289.202 | 1004.835 | 566.957 | 1119.243 | 845.519
F10 | RAN | 3 | 4 | 8 | 10 | 1 | 2 | 6 | 9 | 5 | 7
F11 | AVG | 1489.778 | 2157.984 | 6.14×10^4 | 1.14×10^5 | 3037.103 | 7629.620 | 5496.289 | 3.83×10^4 | 2492.502 | 1.74×10^4
F11 | SD | 83.804 | 322.622 | 2.07×10^4 | 8.49×10^4 | 326.477 | 2509.494 | 1162.624 | 4553.782 | 283.031 | 3825.336
F11 | RAN | 1 | 2 | 9 | 10 | 4 | 6 | 5 | 8 | 3 | 7
F12 | AVG | 3.26×10^7 | 2.31×10^8 | 2.42×10^10 | 1.60×10^11 | 2.66×10^7 | 1.56×10^9 | 4.25×10^9 | 6.21×10^10 | 4.84×10^7 | 7.34×10^9
F12 | SD | 2.00×10^7 | 1.07×10^8 | 4.70×10^9 | 1.58×10^10 | 2.60×10^7 | 6.91×10^8 | 3.93×10^9 | 1.24×10^10 | 3.26×10^7 | 2.56×10^9
F12 | RAN | 2 | 4 | 8 | 10 | 1 | 5 | 6 | 9 | 3 | 7
F13 | AVG | 4.67×10^4 | 1.92×10^5 | 1.14×10^10 | 9.09×10^10 | 4.98×10^5 | 8.85×10^7 | 4.06×10^8 | 2.71×10^10 | 6.71×10^4 | 3.10×10^8
F13 | SD | 7.32×10^3 | 3.57×10^5 | 2.70×10^9 | 1.86×10^10 | 3.71×10^5 | 3.16×10^7 | 2.23×10^8 | 1.64×10^10 | 2.44×10^4 | 4.35×10^8
F13 | RAN | 1 | 3 | 8 | 10 | 4 | 5 | 7 | 9 | 2 | 6
F14 | AVG | 2.59×10^5 | 9.18×10^5 | 2.12×10^7 | 3.26×10^8 | 1.67×10^6 | 1.38×10^6 | 6.76×10^6 | 2.27×10^7 | 1.48×10^6 | 8.51×10^6
F14 | SD | 1.33×10^5 | 5.70×10^5 | 1.33×10^7 | 2.19×10^8 | 1.10×10^6 | 1.08×10^6 | 5.70×10^6 | 7.78×10^6 | 6.99×10^5 | 8.61×10^6
F14 | RAN | 1 | 2 | 8 | 10 | 5 | 3 | 6 | 9 | 4 | 7
F15 | AVG | 2.08×10^4 | 2.81×10^4 | 1.38×10^9 | 3.28×10^10 | 5.33×10^4 | 2.55×10^7 | 7.85×10^7 | 7.45×10^9 | 2.44×10^4 | 5.55×10^7
F15 | SD | 6.09×10^3 | 7.25×10^3 | 4.23×10^8 | 1.28×10^10 | 2.96×10^4 | 3.76×10^7 | 7.93×10^7 | 5.03×10^9 | 1.18×10^4 | 1.15×10^8
F15 | RAN | 1 | 3 | 8 | 10 | 4 | 5 | 7 | 9 | 2 | 6
F16 | AVG | 4029.422 | 3861.616 | 6539.304 | 1.65×10^4 | 3239.377 | 3395.411 | 5943.220 | 8258.053 | 4081.894 | 4265.301
F16 | SD | 367.002 | 322.451 | 523.678 | 3021.001 | 669.226 | 322.737 | 618.996 | 1418.040 | 632.085 | 533.700
F16 | RAN | 4 | 3 | 8 | 10 | 1 | 2 | 7 | 9 | 5 | 6
F17 | AVG | 3260.336 | 3634.634 | 2665.323 | 520,653.499 | 3297.556 | 3160.881 | 4222.925 | 19,889.934 | 3616.944 | 3787.720
F17 | SD | 537.823 | 369.929 | 250.825 | 422,719.420 | 260.826 | 281.760 | 468.911 | 17,601.719 | 429.145 | 532.911
F17 | RAN | 3 | 6 | 1 | 10 | 4 | 2 | 8 | 9 | 5 | 7
F18 | AVG | 3.48×10^6 | 2.75×10^6 | 1.13×10^8 | 4.64×10^8 | 2.89×10^6 | 9.20×10^6 | 3.00×10^7 | 1.32×10^8 | 5.26×10^6 | 1.16×10^7
F18 | SD | 2.16×10^6 | 1.20×10^6 | 2.49×10^7 | 1.41×10^8 | 8.35×10^5 | 7.96×10^6 | 2.46×10^7 | 4.24×10^7 | 5.84×10^6 | 8.37×10^6
F18 | RAN | 3 | 1 | 8 | 10 | 2 | 5 | 7 | 9 | 4 | 6
F19 | AVG | 4.95×10^4 | 3.96×10^5 | 1.31×10^9 | 1.63×10^10 | 2.67×10^4 | 6.33×10^6 | 9.78×10^6 | 2.50×10^9 | 3.73×10^4 | 5.50×10^5
F19 | SD | 1.96×10^4 | 4.75×10^4 | 7.71×10^8 | 3.63×10^9 | 5.97×10^3 | 1.14×10^7 | 6.49×10^6 | 1.17×10^9 | 1.24×10^4 | 3.68×10^5
F19 | RAN | 3 | 4 | 8 | 10 | 1 | 6 | 7 | 9 | 2 | 5
F20 | AVG | 3066.824 | 3223.354 | 3755.252 | 5229.949 | 3169.382 | 3079.606 | 3439.724 | 4310.467 | 3547.154 | 3871.279
F20 | SD | 222.742 | 312.746 | 244.158 | 279.803 | 428.861 | 130.822 | 108.685 | 353.531 | 213.676 | 214.596
F20 | RAN | 1 | 4 | 7 | 10 | 3 | 2 | 5 | 9 | 6 | 8
F21 | AVG | 2662.173 | 2636.420 | 2625.002 | 3638.206 | 2650.265 | 2538.095 | 2978.520 | 3212.749 | 2844.101 | 2731.186
F21 | SD | 79.079 | 36.434 | 299.277 | 119.440 | 42.254 | 26.879 | 102.297 | 122.748 | 34.360 | 87.016
F21 | RAN | 5 | 3 | 2 | 10 | 4 | 1 | 8 | 9 | 7 | 6
F22 | AVG | 7956.579 | 1.12×10^4 | 1.68×10^4 | 1.85×10^4 | 8714.231 | 8372.384 | 1.35×10^4 | 1.71×10^4 | 1.07×10^4 | 1.64×10^4
F22 | SD | 4437.808 | 842.920 | 189.802 | 1001.036 | 3166.453 | 2254.177 | 736.221 | 373.442 | 490.879 | 399.462
F22 | RAN | 1 | 5 | 8 | 10 | 3 | 2 | 6 | 9 | 4 | 7
F23 | AVG | 3198.328 | 3174.853 | 3413.438 | 5257.213 | 3043.978 | 3075.586 | 3639.131 | 4205.040 | 3482.904 | 3874.196
F23 | SD | 104.785 | 65.958 | 48.089 | 401.928 | 46.192 | 52.803 | 63.119 | 160.957 | 138.772 | 219.283
F23 | RAN | 4 | 3 | 5 | 10 | 1 | 2 | 7 | 9 | 6 | 8
F24 | AVG | 3446.034 | 3266.602 | 3496.998 | 5962.147 | 3516.754 | 3246.786 | 3716.717 | 4521.426 | 3596.922 | 4138.812
F24 | SD | 140.581 | 83.274 | 54.391 | 533.209 | 115.301 | 89.314 | 91.453 | 296.736 | 119.341 | 135.882
F24 | RAN | 3 | 2 | 4 | 10 | 5 | 1 | 7 | 9 | 6 | 8
F25 | AVG | 3095.805 | 3728.568 | 4054.579 | 5.84×10^4 | 3193.396 | 4070.799 | 5167.651 | 3.04×10^4 | 3212.249 | 7009.422
F25 | SD | 28.311 | 192.851 | 1625.322 | 5190.684 | 11.453 | 435.175 | 404.137 | 3538.472 | 46.614 | 1396.288
F25 | RAN | 1 | 4 | 5 | 10 | 2 | 6 | 7 | 9 | 3 | 8
F26 | AVG | 5827.614 | 1.16×10^4 | 1.12×10^4 | 3.01×10^4 | 8143.500 | 6213.015 | 1.32×10^4 | 2.13×10^4 | 1.05×10^4 | 1.39×10^4
F26 | SD | 3845.466 | 924.356 | 70.470 | 3428.947 | 3425.603 | 1555.076 | 785.538 | 1039.129 | 3431.660 | 1758.167
F26 | RAN | 1 | 6 | 5 | 10 | 3 | 2 | 7 | 9 | 4 | 8
F27 | AVG | 3626.228 | 3866.233 | 3903.322 | 8778.909 | 3606.561 | 3687.598 | 4641.606 | 6086.369 | 3844.967 | 5882.212
F27 | SD | 228.514 | 133.222 | 55.881 | 744.725 | 172.600 | 41.701 | 421.257 | 890.516 | 162.791 | 645.253
F27 | RAN | 2 | 5 | 6 | 10 | 1 | 3 | 7 | 9 | 4 | 8
F28 | AVG | 3357.193 | 4486.564 | 1.00×10^4 | 2.49×10^4 | 3493.394 | 4404.121 | 5969.609 | 1.58×10^4 | 3574.933 | 6795.098
F28 | SD | 32.968 | 217.558 | 440.044 | 4179.754 | 57.593 | 303.131 | 330.416 | 1476.058 | 118.711 | 401.804
F28 | RAN | 1 | 5 | 8 | 10 | 2 | 4 | 6 | 9 | 3 | 7
F29 | AVG | 4581.788 | 6753.843 | 1.33×10^4 | 2.58×10^6 | 4520.731 | 4931.909 | 8368.144 | 1.94×10^4 | 5576.914 | 9624.458
F29 | SD | 367.149 | 538.216 | 4948.482 | 1.77×10^6 | 251.205 | 422.919 | 665.707 | 6958.151 | 493.070 | 2865.338
F29 | RAN | 2 | 5 | 8 | 10 | 1 | 3 | 6 | 9 | 4 | 7
F30 | AVG | 9.03×10^6 | 1.47×10^8 | 2.12×10^9 | 2.43×10^10 | 1.04×10^6 | 1.76×10^8 | 3.01×10^8 | 2.92×10^9 | 3.76×10^6 | 2.15×10^8
F30 | SD | 4.25×10^6 | 1.69×10^7 | 1.01×10^9 | 4.30×10^9 | 1.64×10^5 | 6.86×10^7 | 8.12×10^7 | 1.90×10^9 | 6.31×10^5 | 3.65×10^7
F30 | RAN | 3 | 4 | 8 | 10 | 1 | 5 | 7 | 9 | 2 | 6
Average rank | 2.34 | 3.76 | 5.97 | 10.00 | 2.55 | 3.38 | 6.79 | 8.93 | 4.41 | 6.86
Final rank | 1 | 4 | 6 | 10 | 2 | 3 | 7 | 9 | 5 | 8
Table 6. The Wilcoxon p-values for ADPO versus other advanced algorithms using CEC2017.
F | ADPO | eCOA | CMAES | DAOA | CSOAOA | GWO_CS | RDWOA | jDE | ISSA | IPSO_IGSA
F1 | 3.93940×10^−1 | 1.08230×10^−11 | 1.08230×10^−11 | 1.08230×10^−11 | 1.08230×10^−11 | 1.08230×10^−11 | 1.08230×10^−11 | 1.08230×10^−11 | 1.08230×10^−11 | 3.93940×10^−1
F3 | 1.08230×10^−11 | 9.92420×10^−1 | 1.08230×10^−11 | 6.99130×10^−1 | 1.08230×10^−11 | 3.09520×10^−1 | 1.08230×10^−11 | 2.40260×10^−1 | 1.08230×10^−11 | 1.08230×10^−11
F4 | 1.08230×10^−11 | 1.08230×10^−11 | 1.08230×10^−11 | 1.08230×10^−11 | 1.08230×10^−11 | 1.08230×10^−11 | 1.08230×10^−11 | 1.08230×10^−11 | 1.08230×10^−11 | 1.08230×10^−11
F5 | 1.08230×10^−11 | 4.84850×10^−1 | 1.08230×10^−11 | 1.08230×10^−11 | 1.08230×10^−11 | 1.08230×10^−11 | 1.08230×10^−11 | 1.08230×10^−11 | 1.08230×10^−11 | 1.08230×10^−11
F6 | 1.08230×10^−11 | 1.08230×10^−11 | 1.08230×10^−11 | 1.08230×10^−11 | 1.08230×10^−11 | 1.32030×10^−1 | 1.08230×10^−11 | 1.00000×10^0 | 1.08230×10^−11 | 1.08230×10^−11
F7 | 1.08230×10^−11 | 1.08230×10^−11 | 1.08230×10^−11 | 2.40260×10^−1 | 1.08230×10^−11 | 1.08230×10^−11 | 1.08230×10^−11 | 1.08230×10^−11 | 1.08230×10^−11 | 1.08230×10^−11
F8 | 1.08230×10^−11 | 1.29870×10^−9 | 1.08230×10^−11 | 1.08230×10^−11 | 3.09520×10^−1 | 1.08230×10^−11 | 1.08230×10^−11 | 1.08230×10^−11 | 1.08230×10^−11 | 1.08230×10^−11
F9 | 1.08230×10^−11 | 9.87010×10^−1 | 1.08230×10^−11 | 9.95670×10^−1 | 5.88740×10^−1 | 5.88740×10^−1 | 1.08230×10^−11 | 1.08230×10^−11 | 1.08230×10^−11 | 1.08230×10^−11
F10 | 9.30740×10^−9 | 6.99130×10^−1 | 1.08230×10^−11 | 6.99130×10^−1 | 1.32030×10^−1 | 1.08230×10^−11 | 7.57580×10^−11 | 1.32030×10^−1 | 1.08230×10^−11 | 9.30740×10^−9
F11 | 1.08230×10^−11 | 5.88740×10^−1 | 1.08230×10^−11 | 4.84850×10^−1 | 2.05630×10^−9 | 1.00000×10^0 | 1.29870×10^−9 | 4.84850×10^−1 | 1.08230×10^−11 | 1.08230×10^−11
F12 | 1.08230×10^−11 | 9.95670×10^−1 | 1.08230×10^−11 | 1.29870×10^−9 | 1.08230×10^−11 | 2.40260×10^−1 | 1.08230×10^−11 | 1.08230×10^−11 | 1.08230×10^−11 | 1.08230×10^−11
F13 | 1.08230×10^−11 | 9.37230×10^−1 | 1.08230×10^−11 | 9.37230×10^−1 | 1.08230×10^−11 | 1.08230×10^−11 | 2.16450×10^−11 | 1.08230×10^−11 | 1.08230×10^−11 | 1.08230×10^−11
F14 | 1.00000×10^0 | 1.08230×10^−11 | 1.08230×10^−11 | 1.00000×10^0 | 2.40260×10^−1 | 1.08230×10^−11 | 1.08230×10^−11 | 1.08230×10^−11 | 1.08230×10^−11 | 1.00000×10^0
F15 | 1.08230×10^−11 | 1.00000×10^0 | 1.08230×10^−11 | 6.99130×10^−1 | 1.08230×10^−11 | 6.99130×10^−1 | 1.08230×10^−11 | 1.08230×10^−11 | 1.08230×10^−11 | 1.08230×10^−11
F16 | 2.16450×10^−11 | 9.97840×10^−1 | 1.08230×10^−11 | 1.08230×10^−11 | 1.08230×10^−11 | 2.16450×10^−11 | 1.08230×10^−11 | 1.08230×10^−11 | 1.08230×10^−11 | 2.16450×10^−11
F17 | 5.88740×10^−1 | 1.08230×10^−11 | 1.08230×10^−11 | 9.92420×10^−1 | 1.08230×10^−11 | 1.08230×10^−11 | 2.16450×10^−11 | 9.87010×10^−1 | 1.08230×10^−11 | 5.88740×10^−1
F18 | 1.08230×10^−11 | 1.08230×10^−11 | 1.08230×10^−11 | 1.08230×10^−11 | 1.08230×10^−11 | 2.16450×10^−11 | 1.08230×10^−11 | 1.08230×10^−11 | 1.08230×10^−11 | 1.08230×10^−11
F19 | 2.16450×10^−11 | 1.08230×10^−11 | 1.08230×10^−11 | 1.08230×10^−11 | 1.08230×10^−11 | 2.05630×10^−9 | 1.08230×10^−11 | 4.32900×10^−11 | 1.08230×10^−11 | 2.16450×10^−11
F20 | 6.49350×10^−9 | 8.18180×10^−1 | 1.08230×10^−11 | 3.93940×10^−1 | 1.08230×10^−11 | 9.30740×10^−9 | 1.08230×10^−11 | 1.08230×10^−11 | 1.08230×10^−11 | 6.49350×10^−9
F21 | 1.08230×10^−11 | 1.08230×10^−11 | 1.08230×10^−11 | 1.08230×10^−11 | 1.08230×10^−11 | 1.08230×10^−11 | 1.08230×10^−11 | 1.08230×10^−11 | 1.08230×10^−11 | 1.08230×10^−11
F22 | 1.08230×10^−11 | 5.88740×10^−1 | 1.08230×10^−11 | 1.08230×10^−11 | 1.08230×10^−11 | 1.08230×10^−11 | 1.08230×10^−11 | 1.08230×10^−11 | 1.08230×10^−11 | 1.08230×10^−11
F23 | 1.08230×10^−11 | 1.08230×10^−11 | 1.08230×10^−11 | 1.08230×10^−11 | 1.08230×10^−11 | 1.08230×10^−11 | 1.08230×10^−11 | 1.08230×10^−11 | 1.08230×10^−11 | 1.08230×10^−11
F24 | 1.08230×10^−11 | 1.00000×10^0 | 1.08230×10^−11 | 1.08230×10^−11 | 1.08230×10^−11 | 9.95670×10^−1 | 1.08230×10^−11 | 1.08230×10^−11 | 1.08230×10^−11 | 1.08230×10^−11
F25 | 1.08230×10^−11 | 1.08230×10^−11 | 1.08230×10^−11 | 1.08230×10^−11 | 1.08230×10^−11 | 1.08230×10^−11 | 1.08230×10^−11 | 1.08230×10^−11 | 1.08230×10^−11 | 1.08230×10^−11
F26 | 1.00000×10^0 | 1.08230×10^−11 | 1.08230×10^−11 | 1.08230×10^−11 | 1.00000×10^0 | 1.79650×10^−1 | 1.08230×10^−11 | 1.00000×10^0 | 1.08230×10^−11 | 1.00000×10^0
F27 | 3.93940×10^−1 | 9.95670×10^−1 | 1.08230×10^−11 | 1.08230×10^−11 | 9.30740×10^−9 | 1.08230×10^−11 | 1.08230×10^−11 | 1.08230×10^−11 | 1.08230×10^−11 | 3.93940×10^−1
F28 | 1.00000×10^0 | 1.08230×10^−11 | 1.08230×10^−11 | 1.08230×10^−11 | 1.08230×10^−11 | 4.32900×10^−11 | 1.08230×10^−11 | 1.08230×10^−11 | 1.08230×10^−11 | 1.00000×10^0
F29 | 3.93940×10^−1 | 1.08230×10^−11 | 1.08230×10^−11 | 1.08230×10^−11 | 1.00000×10^0 | 1.08230×10^−11 | 1.08230×10^−11 | 1.08230×10^−11 | 1.08230×10^−11 | 3.93940×10^−1
F30 | 1.00000×10^0 | 1.08230×10^−11 | 1.08230×10^−11 | 1.08230×10^−11 | 9.37230×10^−1 | 1.08230×10^−11 | 1.08230×10^−11 | 6.49350×10^−9 | 1.08230×10^−11 | 1.00000×10^0
Table 7. Experimental average running results using CEC2022 (time in seconds).
F | ADPO | PO | IVY | HOA | HHO | AOA | SWO | COA | GJO | WOA | KOA
F1 | 0.90527 | 0.30711 | 1.02636 | 0.13643 | 0.24529 | 0.19685 | 0.00808 | 0.19791 | 0.37429 | 0.10800 | 0.01245
F2 | 0.83592 | 0.33225 | 0.85256 | 0.14001 | 0.24348 | 0.18791 | 0.00458 | 0.18577 | 0.41604 | 0.09141 | 0.00826
F3 | 1.11334 | 0.35110 | 0.83730 | 0.25103 | 0.58962 | 0.27909 | 0.00830 | 0.41503 | 0.42706 | 0.18210 | 0.01042
F4 | 0.95505 | 0.26727 | 0.73182 | 0.15731 | 0.30116 | 0.20894 | 0.00541 | 0.24958 | 0.35999 | 0.11791 | 0.00826
F5 | 1.02721 | 0.28030 | 0.75854 | 0.18023 | 0.37381 | 0.22601 | 0.00623 | 0.27336 | 0.38006 | 0.12129 | 0.00790
F6 | 0.79794 | 0.25164 | 0.85514 | 0.16841 | 0.26711 | 0.20225 | 0.00522 | 0.21654 | 0.34851 | 0.10210 | 0.00808
F7 | 1.31417 | 0.39658 | 0.99009 | 0.34585 | 0.58004 | 0.34228 | 0.01075 | 0.62644 | 0.49630 | 0.24503 | 0.01185
F8 | 1.40221 | 0.53759 | 1.04832 | 0.35390 | 0.60387 | 0.34604 | 0.01059 | 0.60033 | 0.46907 | 0.25624 | 0.01484
F9 | 1.13334 | 0.35283 | 0.82745 | 0.25306 | 0.48302 | 0.30826 | 0.00853 | 0.49496 | 0.43338 | 0.21182 | 0.01091
F10 | 0.99577 | 0.30504 | 0.72540 | 0.20768 | 0.40904 | 0.25597 | 0.00769 | 0.38480 | 0.39198 | 0.17093 | 0.00990
F11 | 1.27403 | 0.39561 | 0.86463 | 0.29485 | 0.57875 | 0.34540 | 0.00983 | 0.58785 | 0.47009 | 0.25866 | 0.01223
F12 | 1.35035 | 0.42902 | 0.87337 | 0.31900 | 0.63377 | 0.37125 | 0.01099 | 0.65977 | 0.50257 | 0.28452 | 0.01371
Table 8. Experimental average running results using CEC2017 50D (time in seconds).
F | ADPO | eCOA | CMAES | DAOA | CSOAOA | GWO_CS | RDWOA | jDE | ISSA | IPSO_IGSA
F1 | 1.67386 | 1.04293 | 4.24935 | 0.38412 | 1.11464 | 1.95731 | 0.74037 | 1.93376 | 0.50250 | 0.70356
F3 | 1.24174 | 0.78274 | 3.18205 | 0.34024 | 0.68268 | 1.48834 | 0.57144 | 1.59181 | 0.40555 | 0.60406
F4 | 1.27119 | 0.79545 | 3.65005 | 0.36421 | 0.71415 | 1.68225 | 0.61663 | 1.74907 | 0.54113 | 0.69053
F5 | 1.26769 | 0.74569 | 3.25317 | 0.35912 | 0.69168 | 1.48907 | 0.55943 | 1.71071 | 0.43242 | 0.63177
F6 | 1.24742 | 0.76707 | 4.10569 | 0.35237 | 0.70382 | 1.61792 | 0.58115 | 1.63057 | 0.53805 | 0.70901
F7 | 1.91488 | 1.35427 | 3.60396 | 0.64330 | 1.53468 | 2.08135 | 1.59306 | 2.45592 | 1.03826 | 1.03672
F8 | 1.52636 | 0.81190 | 3.90357 | 0.51789 | 0.88587 | 1.77702 | 0.76409 | 1.98049 | 0.65625 | 0.79616
F9 | 1.66943 | 0.80809 | 3.76615 | 0.38082 | 0.76675 | 1.63850 | 0.63280 | 2.10379 | 0.56871 | 0.70744
F10 | 1.37915 | 0.79232 | 3.59767 | 0.38425 | 0.80669 | 1.62632 | 0.66185 | 1.67851 | 0.49674 | 0.65528
F11 | 1.79548 | 1.28484 | 3.87048 | 0.53070 | 1.27787 | 1.95214 | 1.29371 | 2.11562 | 0.80681 | 0.85990
F12 | 1.37442 | 0.93949 | 4.00364 | 0.39737 | 0.80870 | 1.85520 | 0.81436 | 2.94907 | 0.60320 | 0.78276
F13 | 1.71143 | 1.04850 | 3.78872 | 0.49357 | 1.00667 | 1.62320 | 0.75793 | 1.99845 | 0.58631 | 0.73249
F14 | 1.43323 | 0.78878 | 3.40117 | 0.42298 | 0.85382 | 1.74562 | 0.73677 | 1.92145 | 0.55565 | 0.77742
F15 | 1.63211 | 1.06965 | 4.00738 | 0.46520 | 0.94881 | 1.72975 | 0.90415 | 1.93334 | 0.72383 | 0.74677
F16 | 1.28576 | 0.74910 | 3.45059 | 0.39278 | 0.79488 | 1.67217 | 0.71246 | 1.75685 | 0.46508 | 0.66763
F17 | 1.45320 | 0.86406 | 3.79483 | 0.39798 | 0.83967 | 1.56689 | 0.78986 | 1.76792 | 0.55344 | 0.70001
F18 | 1.46303 | 0.96405 | 3.25036 | 0.44445 | 1.02542 | 1.61823 | 0.99470 | 1.87846 | 0.68746 | 0.72306
F19 | 1.33661 | 0.82929 | 3.44134 | 0.40965 | 0.88940 | 1.74197 | 0.65882 | 1.78277 | 0.50382 | 0.66813
F20 | 2.66244 | 1.95475 | 3.91604 | 0.68843 | 1.59952 | 2.00164 | 1.88637 | 2.53899 | 1.29412 | 0.93342
F21 | 1.56125 | 1.15591 | 3.84839 | 0.51652 | 1.22942 | 1.78423 | 1.24303 | 2.07499 | 0.86073 | 0.87118
F22 | 2.76479 | 2.04891 | 4.02151 | 0.77052 | 2.01565 | 2.08038 | 1.97774 | 2.48590 | 1.23581 | 0.94357
F23 | 2.58576 | 2.10752 | 3.85183 | 0.85031 | 2.25358 | 2.09832 | 2.40603 | 2.28780 | 1.40034 | 1.04877
F24 | 2.72834 | 2.28436 | 4.08418 | 0.92072 | 2.32758 | 1.98287 | 2.42022 | 2.46055 | 1.76056 | 1.10898
F25 | 2.67659 | 2.41989 | 3.69647 | 0.81594 | 2.20327 | 2.03049 | 2.32264 | 3.23136 | 1.73430 | 1.03920
F26 | 2.67151 | 2.26036 | 3.73109 | 0.93509 | 2.26509 | 2.01687 | 2.91336 | 2.86992 | 1.91023 | 1.24187
F27 | 3.90093 | 2.87358 | 5.38594 | 1.29246 | 3.48890 | 2.72201 | 4.13461 | 3.70081 | 2.99426 | 2.10145
F28 | 5.33060 | 4.42535 | 6.08019 | 1.73411 | 4.00612 | 3.18132 | 4.94835 | 3.89686 | 3.51751 | 2.25040
F29 | 4.84923 | 3.79245 | 5.73153 | 1.50264 | 4.07170 | 3.53292 | 4.11101 | 3.95262 | 3.62204 | 2.02870
F30 | 4.16924 | 3.24062 | 6.04186 | 1.11887 | 2.95649 | 2.62767 | 3.38150 | 3.58916 | 2.24090 | 1.72986
Table 9. Parameter ranges for ADPO-LSTM.
Parameter | Symbol | Range
Number of Hidden Units | Nh | [20, 150]
Maximum Epochs | ME | [20, 200]
Optimization Method | OM | {1, 2, 3} for SGDM (1), Adam (2), or RMSProp (3)
Minimum Batch Size | MB | [64, 256]
Learning Rate Drop Factor | LF | [0.1, 0.9]
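Each ADPO candidate is a real-valued vector over these five ranges, which must be decoded into concrete LSTM training settings before a network can be trained and scored. The sketch below illustrates one plausible decoding, with bounds taken from Table 9; the function name and the rounding scheme are assumptions for illustration, not the authors' exact implementation.

```python
import numpy as np

# Lower/upper bounds in Table 9 order: Nh, ME, OM, MB, LF.
LOWER = np.array([20.0, 20.0, 1.0, 64.0, 0.1])
UPPER = np.array([150.0, 200.0, 3.0, 256.0, 0.9])

def decode_position(position):
    """Map a continuous ADPO position vector to LSTM training settings
    (hypothetical decoding; integer genes are rounded, all genes clipped)."""
    p = np.clip(position, LOWER, UPPER)
    optimizers = {1: "sgdm", 2: "adam", 3: "rmsprop"}
    return {
        "hidden_units": int(round(p[0])),
        "max_epochs": int(round(p[1])),
        "optimizer": optimizers[int(round(p[2]))],
        "batch_size": int(round(p[3])),
        "lr_drop_factor": float(p[4]),
    }

# Example: one candidate drawn from the middle of the search space.
print(decode_position(np.array([87.4, 120.2, 2.3, 128.9, 0.45])))
```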
Table 10. Training results over Stations A and B.
Model | R2 (A) | RMSE (A) | MAE (A) | COV (A) | R2 (B) | RMSE (B) | MAE (B) | COV (B)
LSTM | 0.6875 | 0.0024 | 0.0017 | 85.2153 | 0.6385 | 0.0026 | 0.0022 | 115.8742
PO-LSTM | 0.8485 | 0.0014 | 0.0007 | 51.2458 | 0.8475 | 0.0012 | 0.0015 | 60.5874
SCA-LSTM | 0.8425 | 0.0015 | 0.0009 | 53.7894 | 0.8365 | 0.0014 | 0.0016 | 63.2145
WOA-LSTM | 0.8285 | 0.0017 | 0.0011 | 59.8547 | 0.8245 | 0.0016 | 0.0018 | 66.9874
SOA-LSTM | 0.8385 | 0.0015 | 0.0010 | 55.7412 | 0.8325 | 0.0015 | 0.0017 | 64.5789
HHO-LSTM | 0.8515 | 0.0013 | 0.0008 | 50.9685 | 0.8495 | 0.0013 | 0.0014 | 59.8524
ADPO-LSTM | 0.9875 | 0.0002 | 0.0001 | 15.8745 | 0.9851 | 0.0004 | 0.0002 | 23.7412
Table 11. Training results over Stations C and D.
Model | R2 (C) | RMSE (C) | MAE (C) | COV (C) | R2 (D) | RMSE (D) | MAE (D) | COV (D)
LSTM | 0.6185 | 0.0025 | 0.0020 | 101.5874 | 0.6325 | 0.0030 | 0.0022 | 96.8745
PO-LSTM | 0.8125 | 0.0017 | 0.0012 | 63.8745 | 0.8085 | 0.0013 | 0.0012 | 62.7854
SCA-LSTM | 0.8065 | 0.0018 | 0.0013 | 66.2145 | 0.8075 | 0.0015 | 0.0013 | 65.1478
WOA-LSTM | 0.7945 | 0.0020 | 0.0015 | 69.8745 | 0.7975 | 0.0017 | 0.0014 | 68.3654
SOA-LSTM | 0.8025 | 0.0018 | 0.0014 | 67.5896 | 0.8045 | 0.0016 | 0.0013 | 66.9874
HHO-LSTM | 0.8145 | 0.0016 | 0.0011 | 62.1478 | 0.8125 | 0.0014 | 0.0011 | 61.5478
ADPO-LSTM | 0.9758 | 0.0002 | 0.0001 | 25.4578 | 0.9685 | 0.0005 | 0.0003 | 31.2874
Table 12. Testing results over Stations A and B.
Model | R2 (A) | RMSE (A) | MAE (A) | COV (A) | R2 (B) | RMSE (B) | MAE (B) | COV (B)
LSTM | 0.6625 | 0.0034 | 0.0026 | 88.7458 | 0.5785 | 0.0037 | 0.0034 | 117.8965
PO-LSTM | 0.8485 | 0.0026 | 0.0019 | 53.7851 | 0.8105 | 0.0020 | 0.0016 | 63.4785
SCA-LSTM | 0.8165 | 0.0027 | 0.0020 | 56.8547 | 0.7945 | 0.0022 | 0.0018 | 65.9874
WOA-LSTM | 0.7945 | 0.0029 | 0.0022 | 61.2458 | 0.7615 | 0.0025 | 0.0020 | 70.3654
SOA-LSTM | 0.8025 | 0.0028 | 0.0021 | 58.9874 | 0.7845 | 0.0023 | 0.0019 | 67.8745
HHO-LSTM | 0.8465 | 0.0025 | 0.0018 | 52.9635 | 0.8125 | 0.0019 | 0.0015 | 62.7854
ADPO-LSTM | 0.9785 | 0.0009 | 0.0008 | 21.5874 | 0.9798 | 0.0010 | 0.0005 | 27.4578
Table 13. Testing results over Stations C and D.
Model | R2 (C) | RMSE (C) | MAE (C) | COV (C) | R2 (D) | RMSE (D) | MAE (D) | COV (D)
LSTM | 0.5985 | 0.0037 | 0.0032 | 107.5896 | 0.6105 | 0.0039 | 0.0030 | 102.8745
PO-LSTM | 0.7705 | 0.0021 | 0.0017 | 68.1478 | 0.7695 | 0.0019 | 0.0016 | 68.7854
SCA-LSTM | 0.7345 | 0.0023 | 0.0019 | 71.2587 | 0.7385 | 0.0021 | 0.0017 | 70.1478
WOA-LSTM | 0.7085 | 0.0026 | 0.0021 | 74.8745 | 0.7215 | 0.0024 | 0.0019 | 73.5896
SOA-LSTM | 0.7245 | 0.0024 | 0.0020 | 72.9874 | 0.7325 | 0.0022 | 0.0018 | 71.8745
HHO-LSTM | 0.7725 | 0.0020 | 0.0016 | 67.3654 | 0.7715 | 0.0018 | 0.0015 | 67.9874
ADPO-LSTM | 0.9705 | 0.0008 | 0.0005 | 29.7854 | 0.9615 | 0.0012 | 0.0005 | 32.1478
Table 14. The average performance across all stations during the training phase.
Model | R2 | RMSE | MAE | COV
LSTM | 0.6443 | 0.0026 | 0.0020 | 99.8489
PO-LSTM | 0.8293 | 0.0014 | 0.0012 | 59.6033
SCA-LSTM | 0.8233 | 0.0016 | 0.0013 | 62.0916
WOA-LSTM | 0.8113 | 0.0018 | 0.0015 | 66.2705
SOA-LSTM | 0.8193 | 0.0016 | 0.0014 | 63.7243
HHO-LSTM | 0.8320 | 0.0014 | 0.0011 | 58.6241
ADPO-LSTM | 0.9792 | 0.0003 | 0.0002 | 24.0902
Table 15. The average performance across all stations during the testing phase.
Model | R2 | RMSE | MAE | COV
LSTM | 0.6125 | 0.0037 | 0.0031 | 104.2516
PO-LSTM | 0.7998 | 0.0022 | 0.0017 | 63.5242
SCA-LSTM | 0.7710 | 0.0023 | 0.0019 | 66.0622
WOA-LSTM | 0.7465 | 0.0026 | 0.0021 | 70.0438
SOA-LSTM | 0.7610 | 0.0024 | 0.0020 | 67.6685
HHO-LSTM | 0.8008 | 0.0021 | 0.0016 | 62.7754
ADPO-LSTM | 0.9726 | 0.0010 | 0.0006 | 27.7271
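As a reference for how the four reported metrics can be computed from a forecast series, the sketch below assumes COV is the RMSE normalized by the mean observed power and expressed as a percentage; that definition is an assumption consistent with the magnitudes in Tables 10-15, not a formula stated in this section.

```python
import numpy as np

def forecast_metrics(y_true, y_pred):
    """Return R2, RMSE, MAE, and COV for a wind power forecast.

    COV is assumed here to be 100 * RMSE / mean(y_true); the other
    three metrics follow their standard definitions."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_true - y_pred
    rmse = float(np.sqrt(np.mean(err ** 2)))
    mae = float(np.mean(np.abs(err)))
    r2 = 1.0 - np.sum(err ** 2) / np.sum((y_true - y_true.mean()) ** 2)
    cov = 100.0 * rmse / y_true.mean()
    return {"R2": float(r2), "RMSE": rmse, "MAE": mae, "COV": float(cov)}

# Toy usage with an illustrative series (not the study's data).
y_obs = np.array([0.012, 0.020, 0.031, 0.018, 0.025])
y_hat = np.array([0.013, 0.019, 0.029, 0.020, 0.024])
print(forecast_metrics(y_obs, y_hat))
```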
Table 16. Comparison with other optimized DNN approaches through the testing phase for stations A and B.
Model | R2 (A) | RMSE (A) | MAE (A) | R2 (B) | RMSE (B) | MAE (B)
ADPO-LSTM | 0.9785 | 0.0009 | 0.0008 | 0.9798 | 0.0010 | 0.0005
ADPO-Bi-LSTM | 0.8895 | 0.0014 | 0.0013 | 0.8985 | 0.0015 | 0.0013
ADPO-KELM | 0.8625 | 0.0014 | 0.0012 | 0.9125 | 0.0020 | 0.0012
ADPO-ELM | 0.7945 | 0.0021 | 0.0018 | 0.8285 | 0.0019 | 0.0015
ADPO-RF | 0.8075 | 0.0019 | 0.0016 | 0.8495 | 0.0017 | 0.0014
Table 17. Comparison with other optimized DNN approaches through the testing phase for stations C and D.
Model | R2 (C) | RMSE (C) | MAE (C) | R2 (D) | RMSE (D) | MAE (D)
ADPO-LSTM | 0.9705 | 0.0008 | 0.0005 | 0.9615 | 0.0012 | 0.0005
ADPO-Bi-LSTM | 0.9205 | 0.0016 | 0.0008 | 0.9185 | 0.0010 | 0.0008
ADPO-KELM | 0.9025 | 0.0021 | 0.0018 | 0.9055 | 0.0011 | 0.0008
ADPO-ELM | 0.8175 | 0.0024 | 0.0020 | 0.8385 | 0.0017 | 0.0014
ADPO-RF | 0.8375 | 0.0022 | 0.0019 | 0.8325 | 0.0016 | 0.0013
Table 18. The average metrics of various DNN approaches.
Model | R2 | RMSE | MAE
ADPO-LSTM | 0.9726 | 0.0010 | 0.0006
ADPO-Bi-LSTM | 0.9068 | 0.0014 | 0.0011
ADPO-KELM | 0.8958 | 0.0017 | 0.0013
ADPO-ELM | 0.8198 | 0.0020 | 0.0017
ADPO-RF | 0.8318 | 0.0019 | 0.0016
Table 19. Average metrics with other state-of-the-art approaches (RMSE and MAE are reported on the scales used in the respective studies).
Model | R2 | RMSE | MAE
LSTM + HBO [67] | 0.9654 | 0.042869 | 0.02998
RVFL + CapSA [68] | 0.9681 | 110.3154 | 64.452775
RVFL + SCA [68] | 0.9562 | 128.4209 | 71.53655
RVFL + GWO [68] | 0.9374 | 154.2171 | 102.3056
ADPO-LSTM (Proposed) | 0.9726 | 0.001010 | 0.00060
Table 20. Wilcoxon test p-values for ADPO-LSTM versus other approaches.
Comparison | Station A | Station B | Station C | Station D
ADPO-LSTM vs. PO-LSTM | 0.0012 | 0.0008 | 0.0015 | 0.0011
ADPO-LSTM vs. SCA-LSTM | 0.0007 | 0.0006 | 0.0009 | 0.0008
ADPO-LSTM vs. WOA-LSTM | 0.0003 | 0.0004 | 0.0005 | 0.0006
ADPO-LSTM vs. SOA-LSTM | 0.0005 | 0.0007 | 0.0008 | 0.0009
ADPO-LSTM vs. HHO-LSTM | 0.0018 | 0.0021 | 0.0019 | 0.0017
ADPO-LSTM vs. LSTM | <0.0001 | <0.0001 | <0.0001 | <0.0001