Article

An Enhanced Plant Growth Algorithm with Adam Learning, Lévy Flight, and Dynamic Stage Control

School of Computer, Jiangsu University of Science and Technology, Zhenjiang 212001, China
*
Author to whom correspondence should be addressed.
Symmetry 2026, 18(1), 64; https://doi.org/10.3390/sym18010064 (registering DOI)
Submission received: 21 November 2025 / Revised: 23 December 2025 / Accepted: 26 December 2025 / Published: 30 December 2025
(This article belongs to the Section Computer)

Abstract

This study addresses the limitations of the traditional Plant Growth Algorithm (PGA), including insufficient local exploitation, premature convergence, and performance degradation in high-dimensional optimization. To enhance search efficiency, we propose the ALDPGA (Adam–Lévy Dynamic Plant Growth Algorithm), which incorporates Adam-based adaptive gradient learning, Lévy long-tailed perturbation, and a dynamic stage-control mechanism. The method strengthens directional refinement in the light region using gradient-assisted learning and a simulated-annealing rule, while staged hybrid perturbations and adaptive learning-rate scheduling expand early exploration in the shaded region. During the cell-elongation phase, Lévy-driven dynamic trajectories guide the transition from global search to fine-grained convergence. Notably, the light and shaded regions of the algorithm are designed symmetrically, balancing exploration and exploitation. The light region reflects phototropism, fostering growth towards optimal solutions, while the shaded region adapts to explore previously underexplored areas. Extensive experiments on CEC2017, CEC2020, and CEC2022 benchmarks demonstrate improvements in optimal solutions, convergence speed, and statistical stability. Wilcoxon tests confirm the significance of these gains, and ablation studies verify the contributions of each component. ALDPGA’s enhanced robustness and optimization efficiency make it well suited for complex, multimodal, and high-dimensional problems, offering new insights into bio-inspired optimization frameworks.

1. Introduction

Optimization problems arise widely across both professional domains and everyday applications [1]. In recent years, research has increasingly focused on developing optimization techniques that are more efficient and practically applicable. The fundamental goal of optimization is to identify the optimal or near-optimal solution from a large set of candidates under given constraints. Broadly, optimization approaches can be classified into mathematical methods and metaheuristic methods. Mathematical approaches are well suited for problems with clear structures and explicit constraints, whereas metaheuristic algorithms excel in addressing complex, multimodal, and high-dimensional nonlinear problems [2]. Traditional deterministic and classical heuristic algorithms often struggle with computational limitations, making them inadequate for solving complex optimization tasks. Consequently, the continuous search for new optimization paradigms in both academia and engineering has accelerated the development of metaheuristic (intelligent) optimization algorithms [3]. Compared with calculus-based analytical techniques or exhaustive search strategies, metaheuristics offer robust and versatile global optimization capabilities, demonstrating significant performance advantages across various fields by enhancing solution efficiency and reducing computational cost.

1.1. Motivations

A novel optimization technique named the Plant Growth Algorithm (PGA) [4] was introduced in 2025. In contrast to most metaheuristic algorithms inspired by animal behaviors, PGA is rooted in the phototropic growth mechanisms of plant cells. In this framework, candidate solutions are modeled as “cells,” and their fitness values correspond to the perceived “light intensity,” reflecting solution quality.
The key innovation of PGA lies in breaking the traditional paradigm where all individuals converge toward a single global optimum. Instead, the population is partitioned into two distinct subgroups, the light region and the shaded region; this symmetric design achieves an effective partitioning of the cells’ search space. Through the coordinated effects of auxin redistribution, cell mitosis, and mutation-like processes, these two regions co-evolve toward higher perceived light intensity, effectively balancing global exploration and local exploitation.
Nevertheless, an examination of its mathematical formulation indicates that several strategic components of PGA still exhibit substantial potential for refinement.
  • Excessive reliance on global best guidance weakens late-stage local exploitation. In the cell mitosis phase, the light region functions as the primary exploitation zone of the algorithm, where individuals generally represent superior solutions within the population. However, in the original PGA, the update direction of this region depends excessively on the global best solution $X_{best}$. Although this global-guidance mechanism can accelerate convergence, it simultaneously weakens the algorithm’s capacity for thorough local neighborhood exploration in the later stages. As population members become more homogeneous, the diversity of differential terms diminishes significantly, increasing the risk of premature convergence to local optima.
  • The shaded region lacks adaptive regulation and effective global–local coordination. Meanwhile, the shaded region is mainly responsible for global exploration. Yet, its original update mechanism remains rather simplistic—primarily relying on random perturbations and difference operations with the best solution from the light region—without adaptive or dynamic regulation. Furthermore, the update mechanisms of the light and shaded regions operate relatively independently, lacking interactive information exchange. This separation undermines the synergy between global exploration and local exploitation, reducing the algorithm’s overall search effectiveness.
  • Randomized step length and direction lead to unstable convergence trajectories. In the curvature and elongation phase of the PGA’s phototropic growth process, the curvature and cell vicinity components collaboratively determine the displacement direction of each cell. Nevertheless, in this phase, both the step length and direction factor are modeled as random variables ($r_7$, $r_8$), without any adaptive regulation mechanism. The absence of mechanisms leveraging historical search information causes the convergence trajectory to exhibit pronounced oscillations and instability in high-dimensional complex spaces, thereby limiting the algorithm’s accuracy during the later stages of optimization.
  • Fixed decay of growth parameters ignores problem-dependent dynamics. Additionally, in the Mutation and Auxin Redistribution Operator phases, the parameter $\alpha$ governs the phototropic growth behavior of PGA via a fixed exponential decay function $\alpha = e^{-t/T}$. Although this time-dependent decay mechanism captures, to some extent, the natural tendency of plant growth rates to slow down over time, it overlooks dynamic factors such as problem complexity, dimensionality, and the distribution of fitness values. Consequently, during the early stages of optimization, an excessively rapid reduction in step size restricts global exploration, whereas in the later stages, overly small step sizes diminish the efficiency of local exploitation.
  • Static regional strategies fail to adapt to different search stages. Lastly, within the original structural design of PGA, the update mechanisms of the light region and shaded region remain static and do not adapt according to different stages of iteration. Theoretically, during the early phase of the search, exploration-oriented mutation processes should be emphasized to expand the search space, whereas during the later phase, convergence-oriented local refinement should be strengthened to enhance accuracy. However, PGA does not differentiate between these phase-specific behaviors, resulting in insufficient dynamic adaptability and imbalance in the overall optimization process.

1.2. Contributions

To overcome the above limitations, a series of improvements are introduced while explicitly preserving the core symmetry philosophy of the original PGA, specifically the “light-shaded collaboration and elongation” strategy.
  • Light-region enhancement via adaptive gradient learning and probabilistic acceptance. In the light region, an adaptive learning mechanism driven by gradient estimation is incorporated together with a simulated annealing–based acceptance rule. Specifically, inspired by the Adam [5] optimizer, first- and second-moment estimates of historical updates are maintained to dynamically regulate [6] the search step size and direction. This enables each individual to adjust not only according to the current best solution but also in response to historical gradient tendencies and population-level distribution patterns, thereby strengthening local exploitation while mitigating oscillations and premature convergence in high-dimensional spaces [7].
    Moreover, to prevent stagnation in the later stages, a simulated-annealing acceptance mechanism is employed. By decreasing the acceptance probability of inferior solutions through a temperature-decay schedule, the algorithm preserves population diversity in the early phase and achieves smoother convergence in the later phase, ensuring adequate exploration of the search space [8].
  • Shaded-region improvement through staged hybrid perturbation and dynamic learning-rate scheduling. In the shaded region, a staged hybrid updating [9] mechanism together with a dynamically scheduled learning rate [10] is introduced. The algorithm partitions the search into different phases according to the iteration progress: during the early stage, intensified random perturbations and differential-like operations expand the search region, facilitating broader global exploration.
    As the search advances, guidance information from the light region is gradually incorporated, steering shaded-region individuals toward promising areas of the solution space and thereby achieving a dynamic balance between exploration and exploitation. In addition, a learning-rate schedule that decays over iterations is employed to enhance update stability. This enables individuals to exhibit stronger directional consistency and higher local sensitivity in the later stages, preventing the search dynamics from becoming excessively stochastic [9].
  • Cell-elongation phase redesign using Lévy-driven global perturbation and dynamic weighting. In the cell elongation phase, a global perturbation model based on Lévy flight and a dynamic weight scheduling mechanism are incorporated. By leveraging the long-tailed nature of the Lévy distribution, this model effectively combines large-scale jumps with fine local tuning, enabling cross-regional exploration and avoiding entrapment in local optima. Additionally, the inclusion of an elite-guided factor prevents the population from collectively moving in an incorrect direction when the current best solution ($X_{best}$) is suboptimal. To achieve a better balance between convergence speed and precision, a time-varying weight parameter is introduced, guiding individuals to focus on exploration during the early stage and on exploitation in the later stage, thereby strengthening the algorithm’s dynamic equilibrium.
  • Overall contribution and paper organization. Based on the above improvements, an enhanced algorithm named ALDPGA is proposed, which integrates Adam-based adaptive gradient learning, elite-guided Lévy flight perturbation, and a dynamic phase-control mechanism within a unified symmetric growth framework.
The remainder of this paper is organized as follows: Section 2 presents the detailed workflow of the original PGA algorithm, including its optimization strategies, as well as several proven optimization mechanisms widely adopted in both deep learning and evolutionary computation, such as the Adam optimizer, the Metropolis acceptance criterion, and the traditional Lévy flight strategy. Section 3 introduces the proposed improvements to the PGA algorithm based on these classical optimization strategies, leading to the development of the ALDPGA algorithm, which incorporates an Adam-based adaptive gradient descent mechanism, an elite-guided Lévy flight strategy, and a dynamic phase control system. Section 4 discusses and analyzes the experimental results, Section 5 presents the Discussion and Future Directions, and Section 6 concludes the paper.

2. Related Work

2.1. The PGA

The Plant Growth Algorithm (PGA) is an intelligent optimization algorithm inspired by the phototropic growth behavior of plants in nature. The algorithm first divides the population into a light region and a shaded region based on fitness performance, and then updates the cells (candidate solutions) in each region using distinct strategies to search for higher light intensity, corresponding to better fitness values [4]. Its core mechanism simulates the physiological processes of plant cells under light exposure, including mitosis, cell elongation, and auxin redistribution. Through the dynamic interaction between the light region and the shaded region, PGA effectively balances global exploration and local exploitation.
At the initialization stage, PGA treats the population as a collection of plant cells $X = \{x_1, x_2, \ldots, x_N\}$ and classifies newly generated solutions based on a randomized proportion. Cells assigned to the light region $X_L$ correspond to high-quality solutions, whereas those assigned to the shaded region $X_S$ represent relatively inferior ones.
Light-region cells produce new candidate solutions by mimicking the biological process of cell division, in which a parent cell splits into two daughter cells. One daughter cell undergoes a mutation-like operation to enhance population diversity and facilitate escape from local optima. The other daughter cell models the auxin redistribution mechanism: auxin, a key plant hormone, redistributes toward the shaded side under light stimulation, promoting differential elongation and steering the growth direction toward the light source, i.e., toward the current best performing cell. The corresponding formulas for generating these two types of new solutions are given as follows:
$$X_{L_{new1}}(t) = X_{L_{rand}} + \alpha \beta r_2 \left| X_{L_{rand}} - X_i(t) \right| + \alpha \beta r_3 \left| X_{L_{best}} - X_i(t) \right|$$
$$X_{L_{new2}}(t) = X_i^L(t) + \alpha r_4 \left| X_{L_{best}}(t) - X_i^L(t) \right|$$
where $r_2$ and $r_3$ are random perturbation terms sampled from $[-1, 1]$, while $\alpha = e^{-t/T}$ denotes the growth-limiting factor that decays over iterations. The directional coefficient $\beta \in \{-1, 1\}$ (not reiterated hereafter) governs the search direction. The combined effects of the stochastic mutation factor $\beta$ and the decaying growth factor $\alpha$ enhance the stability of the update process and improve the adaptability of the cells to different search environments. Together, these components characterize the growth behavior of light-region cells under the interplay of random mutation and guidance from the current best solution.
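To make the mitosis operators concrete, the following minimal NumPy sketch reproduces the two light-region offspring of Equation (1); the function and variable names are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def light_region_offspring(X_i, X_L_rand, X_L_best, t, T, rng=None):
    """Generate the two daughter solutions of Eq. (1) for one light-region cell."""
    rng = rng or np.random.default_rng()
    alpha = np.exp(-t / T)                       # growth-limiting factor, decays with t
    beta = rng.choice([-1.0, 1.0])               # direction-switching coefficient
    r2, r3, r4 = rng.uniform(-1.0, 1.0, size=3)  # random perturbation terms in [-1, 1]
    new1 = (X_L_rand
            + alpha * beta * r2 * np.abs(X_L_rand - X_i)
            + alpha * beta * r3 * np.abs(X_L_best - X_i))
    new2 = X_i + alpha * r4 * np.abs(X_L_best - X_i)
    return new1, new2
```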
Cells in the shaded region experience insufficient illumination, and their growth dynamics are predominantly governed by intrinsic genetic factors rather than external environmental influences. Consequently, PGA adopts both the mutation operator and the auxin redistribution operator in the shaded region as well, updating the corresponding cells (solutions) through these two mechanisms.
The mutation operator models the spontaneous variations that arise in shaded-region cells due to genetic factors. Similar to the mutation mechanism in the light region, the mutation of shaded cells is independent of external illumination and is driven solely by the differences between an individual cell and a randomly selected counterpart. The auxin redistribution operator captures the transfer of auxin from the light region to the shaded region under differential light exposure. By “absorbing” auxin from the light-region cells, shaded cells promote their own elongation, contributing to the plant’s overall phototropic bending. In the original PGA, a light-region cell $X_{L_{rand}}$ is randomly chosen and the best light-region cell $X_{L_{best}}$ is used as a reference. The update equations for shaded-region cells are therefore expressed as follows:
$$X_{S_{new1}}(t) = X_i^S(t) + \alpha \beta r_5 \left| X_i^S(t) - X_{rand} \right|$$
$$X_{S_{new2}}(t) = X_{L_{rand}} + \alpha \beta r_6 \left| X_{L_{best}} - X_i^S(t) \right|$$
where $r_5$ and $r_6$ are generated in the same manner as $r_2$, with values uniformly drawn from $[-1, 1]$. They serve to enhance the diversity of the shaded-region cells, thereby preserving the algorithm’s global exploration capability. Because the behavioral changes in these cells are not directly governed by illumination, this mechanism can be interpreted as modeling the plant’s adaptive response to low-light conditions.
After these two update mechanisms are applied, all newly generated cells are integrated into the current population. The algorithm then performs fitness-based selection, retaining only the top performing individuals to ensure a stable population size. This strategy maintains population diversity while simultaneously guiding the overall search toward the optimal region.
Once the light- and shaded-region cells have completed their respective updates, the algorithm proceeds to the cell-elongation stage. This stage emulates the phototropic bending behavior of plants under uneven illumination, enabling the search to further approach the optimal solution.
The process is driven by auxin-induced differential deformation: variations in auxin concentration cause directional elongation, thereby adjusting the positions of candidate solutions and supporting fine-grained local exploitation. Cells on the shaded side typically accumulate higher auxin concentrations, and because plant stems and leaves exhibit lower sensitivity to auxin, they undergo more pronounced elongation. This differential growth leads to bending toward the light source, a mechanism that PGA leverages to model phototropic search dynamics.
The mathematical formulation of this stage consists of two components: curvature and cell vicinity.
The curvature term quantifies the discrepancy between the average fitness of the light region and the current best solution. When the light-region population exhibits poor performance, the curvature value increases, thereby enhancing the algorithm’s exploratory behavior. The curvature is computed as follows:
$$curvature = \beta \times \alpha \times \big( MeanFitness(X_L) - BestFitness(X) \big)$$
In the original PGA description, the parameters in Equation (3) are defined as follows:
  • α is an adaptive parameter that facilitates exploration in the early stages and exploitation in the later stages of the search process.
  • $\beta \in \{-1, 1\}$ is a direction-switching coefficient that enables flexible movement, mimicking variations observed in real-world growth behaviors.
  • MeanFitness ( X L ) denotes the average fitness of cells in the light region.
  • BestFitness ( X ) represents the best fitness value among all cells in the population.
When the light region maintains high solution quality, the curvature term approaches zero, indicating that the algorithm should prioritize stable convergence. Conversely, when the light region performs poorly, the curvature increases, thereby stimulating stronger exploratory behavior. The Factor of Curvature (FOC) represents the deviation of an individual cell from the best performing cell and serves as a directional adjustment mechanism that mimics the phototropic bending of plants toward the light source:
$$FOC = r_7 \times curvature \times \big( X_i(t) - X_{best}(t) \big)$$
where $r_7$ is a random number in $(-1, 1)$. Beyond the curvature component, cell vicinity plays a vital role in regulating the extent of cell elongation. By examining the spatial arrangement of neighboring cells, an individual cell can infer whether it is illuminated or occluded. In natural settings, shaded cells tend to cluster closely together, whereas cells exposed to light are typically more dispersed. The cell vicinity term incorporates these local interaction effects and captures how spatial relationships among adjacent cells influence their growth direction:
$$Cell_{vicinity} = \alpha \times \beta \times r_8 \times \frac{X_i(t) + X_{i+1}(t)}{2}$$
where $r_8$ is a random number in $(-1, 1)$. The update integrates the three aforementioned components into a unified framework.
$$X_i^{new}(t) = X_i(t) + FOC + Cell_{vicinity}$$
In the final stage, each cell is guided toward the global optimum while also performing localized exploitation through interactions with its neighboring cells. Once the cell classification and elongation steps are completed, the newly generated and existing cells are merged, and their fitness values are evaluated using the objective function. A greedy selection strategy is then applied to retain the top N individuals, ensuring that boundary violations are avoided. This cycle is repeated throughout the iterative process until termination.
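As a compact illustration of the elongation stage (Equations (3)–(6)), the sketch below applies the curvature, FOC, and cell-vicinity terms to a population matrix. Treating the first half of the (fitness-sorted) population as the light region is an assumption made only for this example, not a detail taken from the original PGA.

```python
import numpy as np

def elongation_step(X, fitness, X_best, t, T, rng=None):
    """One elongation pass: X_i <- X_i + FOC + Cell_vicinity (Eqs. (3)-(6))."""
    rng = rng or np.random.default_rng()
    alpha = np.exp(-t / T)
    beta = rng.choice([-1.0, 1.0])
    mean_fit_L = fitness[: len(fitness) // 2].mean()   # assumed light-region half
    curvature = beta * alpha * (mean_fit_L - fitness.min())
    X_new = X.copy()
    for i in range(len(X)):
        r7, r8 = rng.uniform(-1.0, 1.0, size=2)
        foc = r7 * curvature * (X[i] - X_best)
        neighbor = X[(i + 1) % len(X)]                 # wrap around for the last cell
        vicinity = alpha * beta * r8 * (X[i] + neighbor) / 2.0
        X_new[i] = X[i] + foc + vicinity
    return X_new
```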

2.2. Adam Optimizer

The Adam optimizer consolidates momentum and adaptive per-parameter learning rates into a highly efficient learning framework. As one of the most powerful and widely adopted optimization algorithms in deep learning [11], Adam has achieved great popularity due to its strong empirical performance [12]. A brief introduction to the Adam algorithm is provided below.
A fundamental component of the Adam optimizer is its use of exponentially weighted moving averages to estimate both the first-order momentum and the second-order moment of the gradient [13]. To achieve this, Adam maintains the following state variables:
$$v_t = \beta_1 v_{t-1} + (1 - \beta_1) g_t, \qquad s_t = \beta_2 s_{t-1} + (1 - \beta_2) g_t^2$$
where $\beta_1$ and $\beta_2$ are non-negative weighting coefficients, typically set to $\beta_1 = 0.9$ and $\beta_2 = 0.999$. As a result, the moving average of the second-moment estimate evolves significantly more slowly than that of the first-moment estimate [14]. It is important to note that initializing $v_0 = s_0 = 0$ introduces a substantial bias during the early iterations. This issue can be mitigated using the bias-correction terms $1 - \beta_i^t$ [12]. The corresponding bias-corrected state variables are therefore computed as follows:
$$\hat{v}_t = \frac{v_t}{1 - \beta_1^t}, \qquad \hat{s}_t = \frac{s_t}{1 - \beta_2^t}$$
With the bias-corrected moment estimates available, the update direction can be formulated. The first step is to rescale the gradient using a procedure analogous to that of RMSProp [15], yielding:
$$g_t' = \frac{\eta\, \hat{v}_t}{\sqrt{\hat{s}_t} + \epsilon}$$
Unlike RMSProp, Adam applies the momentum estimate rather than the raw gradient when performing updates. Furthermore, the rescaling uses $\frac{1}{\sqrt{\hat{s}_t} + \epsilon}$ instead of $\frac{1}{\sqrt{\hat{s}_t + \epsilon}}$, introducing a subtle difference between the two approaches. Empirically, the former yields better performance, and Adam otherwise remains conceptually similar to RMSProp. In practice, $\epsilon$ is typically set to $10^{-6}$ [12] to balance numerical stability and approximation error [14]. The final update step is then given by:
$$x_t = x_{t-1} - g_t'$$
Adam adapts the learning rate of each parameter by maintaining first- and second-moment estimates of the gradient. While the early gradient estimates may be biased toward smaller values, the bias correction mechanism effectively accelerates the initial learning dynamics. Adam has consistently demonstrated robust and competitive performance across diverse model architectures and datasets [16]. The optimization trajectory of the Adam optimizer on the Beale function is illustrated in Figure 1.
In summary, Adam combines the strengths of both Momentum and RMSProp, offering the advantage of adaptively adjusting the learning rate for each parameter. Although gradient estimates may be biased toward smaller values during the initial iterations, the bias correction mechanism accelerates early learning. By leveraging historical search information, Adam updates parameters not through simple gradient descent but through an accumulated understanding of past search behavior.
During continuous exploration of the solution space, each individual benefits from an adaptive learning rate informed by its update history, thereby preventing the search process from becoming overly random. As a result, even when the algorithm enters a local optimum basin, Adam reduces oscillatory behavior and accelerates convergence.
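For reference, a minimal sketch of a single Adam step following Equations (7)–(10) is given below; the default learning rate of 0.01 is only a placeholder matching the $\eta_{max}$ used later in Section 3.1.3, and the moment states are passed in and returned explicitly.

```python
import numpy as np

def adam_step(x, g, v, s, t, eta=0.01, beta1=0.9, beta2=0.999, eps=1e-6):
    """One Adam update of position x given gradient g; t is the 1-based step index."""
    v = beta1 * v + (1 - beta1) * g          # first-moment (momentum) estimate
    s = beta2 * s + (1 - beta2) * g ** 2     # second-moment estimate
    v_hat = v / (1 - beta1 ** t)             # bias correction of the first moment
    s_hat = s / (1 - beta2 ** t)             # bias correction of the second moment
    x = x - eta * v_hat / (np.sqrt(s_hat) + eps)
    return x, v, s
```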

2.3. Metropolis Criterion

The Metropolis acceptance criterion is a standard mechanism in Simulated Annealing (SA) [17]. When $\Delta t < 0$, the candidate solution $S$ is unconditionally accepted. If $\Delta t \ge 0$, the new solution is accepted with probability $\exp(-\Delta t / T)$. The key idea is to introduce probabilistic acceptance rather than relying on a strictly deterministic rule. By allowing “worse” solutions to be accepted with a non-zero probability, the Metropolis rule enables the algorithm to escape local optima and continue exploring the global search space. This capability underpins its extensive use in optimization and probabilistic sampling.
This mechanism of conditionally accepting inferior solutions equips the algorithm with the ability to escape local optima, which is particularly advantageous when dealing with multimodal optimization problems. In the proposed algorithm, this acceptance principle is applied within the light region. Since light-region cells correspond to relatively high-quality solutions, the algorithm can safely accept worse solutions during the early stages of the search. This prevents premature convergence to local optima and encourages continued exploration toward regions with higher global light intensity.

2.4. Lévy Flight Theory

Lévy flight [18] is a distinctive random-walk model—often referred to as Lévy walk or Lévy wandering—that mimics the stochastic foraging behavior observed in nature. During a Lévy flight, an individual performs random movements whose step lengths and directions follow a Lévy distribution. Because this distribution permits occasional long-distance jumps, the resulting search trajectory effectively integrates local exploitation with global exploration. In the context of intelligent optimization algorithms, Lévy flight step lengths are commonly generated using the Mantegna algorithm [19]:
$$s = \frac{u}{|v|^{1/\beta}}$$
where
$$u \sim N(0, \sigma_u^2), \quad v \sim N(0, \sigma_v^2), \qquad \sigma_u = \left[ \frac{\Gamma(1+\beta)\,\sin(\pi\beta/2)}{\Gamma\!\left(\frac{1+\beta}{2}\right)\beta\, 2^{(\beta-1)/2}} \right]^{1/\beta}, \quad \sigma_v = 1$$
The update rule of the algorithm is conventionally formulated as:
$$x_{t+1} = x_t + \alpha \cdot s \cdot (x_t - x_{best})$$
where α denotes the step-size scaling coefficient that controls the amplitude of the Lévy jump. The strength of Lévy flight lies in its hybrid search behavior that integrates both long and short jumps: long-distance moves enable the algorithm to escape local basins of attraction, while small corrective steps facilitate fine-tuned local refinement [18]. Together, these behaviors create a hierarchical and progressively advancing global search trajectory within the solution space.
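A short sketch of Mantegna-style step generation following Equations (11) and (12); the value $\beta = 1.5$ is a common choice and is not prescribed by the text at this point.

```python
import numpy as np
from math import gamma, sin, pi

def levy_steps(n, m, beta=1.5, rng=None):
    """Draw an (n, m) matrix of Lévy-distributed step lengths via Mantegna's method."""
    rng = rng or np.random.default_rng()
    sigma_u = ((gamma(1 + beta) * sin(pi * beta / 2))
               / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma_u, size=(n, m))
    v = rng.normal(0.0, 1.0, size=(n, m))
    return u / np.abs(v) ** (1 / beta)       # heavy-tailed step lengths
```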

2.5. Classical Optimization Algorithms

To guarantee the rationality and comprehensiveness of the experimental comparisons, several representative intelligent optimization algorithms were selected as benchmark methods in this study, which can be classified into three categories.
The first category consists of classical algorithms such as PSO (Particle Swarm Optimization) [2], GA (Genetic Algorithm) [20], and DE (Differential Evolution) [21], which are theoretically mature, include relatively few parameters, and are widely used as baseline methods. The second category consists of swarm intelligence algorithms, including GWO [22] and VO [23], whose biologically inspired mechanisms enable a balanced interplay between global exploration and local exploitation.
The third category comprises recently developed physics- and nature-inspired optimizers, including TOC [24], SAO [25], and RIME, which are derived from meteorological or physical processes and exhibit higher model complexity, stronger optimization capability, and adaptive parameter behaviors. The detailed algorithmic settings are summarized in Table 1.

3. Proposed Algorithm

The PGA algorithm exhibits strong performance in solving optimization problems, and its dual-region update mechanism—dividing the population into the light and shaded regions—effectively improves the search efficiency for optimal fitness. Nevertheless, the original PGA still encounters limitations in certain complex or high-dimensional tasks. To address these issues, this study introduces enhancement strategies for the light region, the shaded region, and the cell selection and update processes, aiming to further improve the solution accuracy and convergence speed of PGA.

3.1. Improvement in the Light Region

3.1.1. Construction of Candidate Solutions

In the original PGA algorithm, individuals in the light region are updated through linear mutation and auxin redistribution. Biologically, cells in the early stages of plant growth exhibit high activity and strong light responsiveness, making them more suitable for broad, jump-like exploration rather than fine-grained optimization. Accordingly, during the early search stage, this study retains the original mitosis-based update mechanism of PGA.
In the later stage of the search, continuing to use the original model makes the update direction overly dependent on random perturbations, hindering stable convergence and preventing the effective use of gradient information. Moreover, the strong greedy selection mechanism updates individuals only when fitness improves, making the algorithm prone to local optima and premature convergence.
Therefore, in the mid-to-late search phase, this study adopts an adaptive search strategy that leverages historical information. Specifically, Adam-based Gradient Descent and the Metropolis acceptance criterion are integrated to achieve a hybrid update process that combines gradient memory with probabilistic acceptance.
A candidate search line is first constructed to enable the initial update to progress in a momentum-like manner. In the later search stage, mild periodic perturbations are introduced to avoid local optima, while their frequency is moderated in high-dimensional spaces to prevent excessive oscillations [26]. This mechanism preserves a degree of exploration while maintaining effective local exploitation. The search line is formulated as follows:
$$X_{n1}^{t+1} = \begin{cases} (\omega + \alpha_i)\, X_L^t, & k = 1, \\[4pt] X_L^t + \sin\!\left(\dfrac{2\pi}{dim}\, t\right) \dfrac{X_{new1}^{t+1}}{2}, & 2 \le k,\ dim < 50, \\[4pt] X_L^t + \sin\!\left(\dfrac{2\pi t}{dim}\right) X_{new1}^{t+1}, & \text{otherwise} \end{cases}$$
where $k$ represents a hopping index of the search line within the interval $[1, dim/2]$, which is used to distinguish between the initial stage and subsequent stages of the search process, leading to different constructions of the search line [6]. For high-dimensional optimization problems, a dimensional threshold of 50 is adopted to mitigate excessive oscillations caused by sine–cosine perturbations [27], thereby enabling a segmented construction strategy for the search lines. The parameters $\omega$ and $\alpha_i$ in the above Equation (14) are defined as follows:
$$\omega = \mathrm{rand}() \times \left[ \frac{1}{Iteration^2}\left( t^2 - 2 \cdot Iteration \cdot t \right) + \frac{1}{2} \right], \qquad \alpha_i = \cos\big(1 - \mathrm{rand}()\big) \times 2\pi$$
In the pre-exploration phase, performing excessive momentum adjustments can introduce unnecessary computational cost and oscillatory behavior. To avoid this, the update frequency k is fixed to 1, allowing the position to be updated solely according to the first part of the equation [28]. The search line is further refined through sine–cosine perturbations, where the constant 1/2 in ω is an empirically optimized value obtained from repeated trials.
This design improves convergence speed while reducing computational overhead. The oscillatory behavior of the parameter $\omega$ is shown in Figure 2.

3.1.2. Direction Factor ζ

In ALDPGA, individuals in the light region perform not only mitosis-based updates and gradient-guided refinement but also utilize a dynamic initialization mechanism to increase the diversity of search-space exploration. The key idea is to continuously introduce new random starting positions during the iteration process, thereby reducing the risk of individuals becoming trapped in local optima [7].
During each update of the light region, two distinct individuals are randomly selected from within the light region (denoted as $a_1$ and $a_2$) [29], ensuring that both originate from the elite subset of the population. Using the position vectors $X_L(a_1)$ and $X_L(a_2)$, a new candidate initial solution is then constructed. To ensure that the newly generated position remains feasible, a boundary-checking mechanism is applied to constrain $X_{new}$.
$$X_{new} = X_L(a_1) + r\,\big( X_L(a_1) - X_L(a_2) \big), \quad a_1, a_2 \neq i, \qquad X_{new} = \min\big( \max(X_{new}, lb),\, ub \big)$$
where r is a random perturbation factor drawn from the interval [ 0 , 1 ] . It maintains diversity among newly generated individuals in the light region, allowing the algorithm to restart exploration from multiple directions and thus reducing the risk of premature convergence.
It is essential that the reference individual differs from the current one, which is both reasonable and biologically consistent. Moreover, when fewer than two individuals exist in the light region, the reference process becomes invalid. Boundary handling is also applied to ensure that all updated individuals remain within the feasible solution domain.
With this design, each iteration provides the light-region cells with distinct “gradient-guided starting points,” effectively balancing global exploration and local exploitation. As a result, ALDPGA exhibits more stable convergence behavior. On this basis, the relative fitness direction factor is defined as follows:
$$\zeta_L = \frac{f_L(a_1) - f_L(i)}{\left| f_L(a_1) - f_L(i) \right| + \varepsilon} \in \{-1, +1\}$$
The relative fitness direction factor retains only the sign of the change (improvement or deterioration), guiding $X_{L,new1(i)}^{t+1}$ to move in the appropriate direction. Because it does not depend on gradient magnitude, this approach offers strong robustness and avoids issues caused by inconsistent scaling. The small constant $\varepsilon$ is included to prevent division by zero.
Once the relative fitness direction is established, the offset direction must be determined as $P = \bar{X}_L - X_{L,new1(i)}^{t+1}$. Using the mean position of the light region rather than the best individual is preferred, as it prevents excessive deviation from the population structure. This design balances both contraction and dispersion in the search process, helping preserve population diversity while improving convergence speed.
During the early stage of the algorithm, the main goal is to perform broad and thorough exploration of the solution space. By employing diverse search strategies, the algorithm uncovers potential high-quality solutions and avoids premature convergence to local optima. Once it enters the later stage, the focus shifts toward fine-grained optimization and stable convergence, where the objective is to refine and enhance existing solutions so that the population can approach the global optimum more efficiently and in a more directed manner. Consequently, two candidate solutions are constructed as follows:
$$X_{La,new1(i)}^{t+1} = X_{L,new1(i)}^{t+1} + a_{rand}\,\zeta\,\big( G - X_L(a_1) \big)\big( X_{L,new1(i)}^{t+1} - X_L(a_2) \big)$$
$$X_{Lb,new1(i)}^{t+1} = X_L(a_1) + a_{rand}\,\big( G - X_L(a_2) \big)$$
$$a_{rand} = \left( 1 - \frac{t}{T} \right) \cdot \mathrm{rand}(1, dim)$$
where $G$ denotes the guiding point influenced by both the learning rate and the best-performing cell, which is explained in detail in Equation (21). To enable different candidate selections in the early and late phases of the search, a smoothed time-dependent probability function is introduced. This function adaptively selects a better candidate according to the dimensional index and varies smoothly over time, thereby preventing oscillatory behavior during updates. In this formulation, $j \in [1, dim]$ denotes the dimension index.
$$X_{L,n1}^{t+1}[j] = \begin{cases} X_{La,new1}^{t+1}(i)[j], & \text{with probability } p = 0.5 + 0.2 \sin\!\left( \dfrac{2\pi t}{T} \right), \\[4pt] X_{Lb,new1}^{t+1}(i)[j], & \text{otherwise.} \end{cases}$$
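The direction factor and the smoothed dimension-wise selection can be sketched as follows. Interpreting Equation (19) as "take the first candidate with probability p, independently per dimension" is an assumption consistent with the surrounding text; function names are illustrative.

```python
import numpy as np

def direction_factor(f_ref, f_i, eps=1e-12):
    """Sign-preserving relative fitness direction factor of Eq. (17), approximately +/-1."""
    return (f_ref - f_i) / (abs(f_ref - f_i) + eps)

def mix_candidates(cand_a, cand_b, t, T, rng=None):
    """Dimension-wise blend of the two candidates using the smooth probability of Eq. (19)."""
    rng = rng or np.random.default_rng()
    p = 0.5 + 0.2 * np.sin(2 * np.pi * t / T)
    mask = rng.random(cand_a.shape) < p          # independent decision per dimension
    return np.where(mask, cand_a, cand_b)
```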

3.1.3. Guiding Point

According to the Adam-inspired directional update mechanism, the position of each individual is generally refined by subtracting an adaptively scaled update term, which can be written as $x_t = x_{t-1} - g_t$, where $g_t$ denotes a directionally scaled update obtained from a surrogate directional estimation combined with the Adam moment mechanism. When the global best solution is employed as a guiding reference to enhance exploitation, the same directional refinement can be equivalently expressed by anchoring the update at the global best position, yielding $\nabla f = X_{best} - g_t$.
It should be emphasized that the update term g t does not correspond to an analytical gradient of the objective function. Since the considered optimization problems are treated as black-box problems, no derivative information is available. Instead, a gradient-like surrogate direction is constructed based on positional differences among population members and the current global best solution, which serves as heuristic guidance for directional refinement and is adaptively scaled using the Adam moment estimation mechanism, rather than performing conventional gradient descent.
Since the magnitude of the directional update may remain relatively stable when a fixed learning rate is employed, the self-adaptive learning capability of each individual can be weakened, leading to slow convergence in the early stage and oscillatory or even divergent behavior in the later stage [30]. To alleviate this issue, a linearly decaying learning-rate strategy is introduced as follows:
$$\eta_t = \eta_{max} - \left( \eta_{max} - \eta_{min} \right) \frac{t}{T}$$
where $\eta_{max} = 0.01$ and $\eta_{min} = 0.0005$, enabling large exploratory steps in the early stage and precise convergence in the later stage. A well-known issue of Adam in metaheuristic optimization is that accumulated variance may cause the update direction to flip. The dynamically decaying $\eta_t$ mitigates such oscillations in the later stage and reduces the risk of overshooting the global optimum [31]. By combining Equations (7) and (8), the guidance formulation for the best solution in the light region is expressed as follows:
$$G = X_{best} - \eta(t)\, \frac{\hat{v}_L(t)}{\sqrt{\hat{s}_L(t)} + \epsilon}$$
With a simple $[lb, ub]$ boundary constraint applied, the plant cells can converge rapidly and perform local exploitation, leading to improved solution precision. Nevertheless, the original algorithm continues to rely on a strongly greedy selection mechanism, which may cause the currently obtained best solution to remain in a local region, especially for hybrid and composite functions. To address this issue, the following acceptance criterion is introduced in this study. Specifically, the directional vector is estimated using the relative positional relationships among the global best solution, the population mean, and the current individual, which reflect a population-level tendency toward promising regions rather than a local derivative of the objective function.
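A minimal sketch of the guiding-point computation of Equations (20) and (21) is shown below, assuming the bias-corrected moments $\hat{v}_L$ and $\hat{s}_L$ of the surrogate direction have already been accumulated:

```python
import numpy as np

def guiding_point(X_best, v_hat, s_hat, t, T,
                  eta_max=0.01, eta_min=0.0005, eps=1e-6):
    """Adam-scaled guiding point anchored at the global best solution (Eqs. (20)-(21))."""
    eta_t = eta_max - (eta_max - eta_min) * t / T    # linearly decaying learning rate
    return X_best - eta_t * v_hat / (np.sqrt(s_hat) + eps)
```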

3.1.4. Metropolis Acceptance Criterion

In the convergence process of optimization algorithms, overly greedy selection—retaining only better individuals—ensures monotonic fitness improvement but often leads to premature convergence. This problem is particularly pronounced in multimodal and complex composite functions, where individuals may cluster in a local region early on, resulting in search stagnation and difficulty escaping local optima in later phases. To address this issue, the Metropolis acceptance criterion is incorporated into the light-region update mechanism, allowing inferior solutions to be accepted with a certain probability and thus preserving population diversity.
The core idea of this criterion is that when the newly generated candidate $X_{L,new1}^{t+1}$ produces a worse fitness value than the original solution, namely $\Delta f = f(X_{L,new1}^{t+1}) - f(X_L^t) > 0$, it is not rejected immediately. Instead, it is accepted with a probability governed by an exponential annealing distribution:
$$P(\mathrm{accept}) = \exp\!\left( -\frac{\Delta f}{T_L(t)} \right)$$
where T L ( t ) denotes the temperature control parameter for the light region, which regulates the algorithm’s tolerance to inferior solutions across different stages of the iteration. Its decay pattern is defined as follows:
$$T_L(t) = T_0^L \left( \frac{T_f^L}{T_0^L} \right)^{t/T}$$
where $T_0^L$ is the initial temperature, $T_f^L$ is the final temperature, and $T$ denotes the maximum number of iterations. As the iterations proceed, the temperature decreases gradually, enabling higher exploratory behavior in the early phase and promoting stable convergence in the later phase.
The specific acceptance strategy can be expressed as follows:
$$X_L^{t+1} = \begin{cases} X_{L,new1}^{t+1}, & \Delta f \le 0, \\ X_{L,new1}^{t+1}, & \Delta f > 0 \ \text{and} \ \mathrm{rand} < e^{-\Delta f / T_L(t)}, \\ X_L^t, & \text{otherwise.} \end{cases}$$
This mechanism allows the algorithm to dynamically balance local exploitation and global exploration. At the early stage, the high temperature introduces strong randomness, enabling the algorithm to accept inferior solutions with higher probability and escape local optimal regions. As the temperature decreases over iterations, the search becomes more stable, accepting inferior solutions only occasionally, thus guiding the population toward convergence to the global optimum.
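The acceptance rule of Equations (22)–(24) can be condensed into a few lines; the initial and final temperatures used here are placeholders, not tuned values reported by the paper.

```python
import numpy as np

def metropolis_accept(f_new, f_old, t, T, T0=1.0, Tf=1e-3, rng=None):
    """Return True if the candidate is kept under the light-region Metropolis rule."""
    rng = rng or np.random.default_rng()
    T_L = T0 * (Tf / T0) ** (t / T)                  # geometric temperature decay, Eq. (23)
    delta = f_new - f_old
    return delta <= 0 or rng.random() < np.exp(-delta / T_L)
```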
In summary, the combination of gradient-guided updates, the Adam adaptive learning-rate mechanism, and the Metropolis acceptance criterion equips ALDPGA with stronger robustness and enables simultaneous improvements in both stable convergence and global search performance for complex optimization problems. This effectively overcomes the shortcomings of the traditional PGA, which tends to converge slowly and become trapped in local optima in the later stages. The pseudocode of the light region is presented in Algorithm 1.
Algorithm 1 Light-Region Updating Strategy in ALDPGA.
1: Initialize $X_L$, fitness $f(X_L)$, and Adam parameters ($\beta_1$, $\beta_2$, $\eta_{max}$, $\eta_{min}$)
2: for $i = 1$ to $N_L$ do
3:    Construct candidate line $npo\_line$ with time-dependent weights
4:    Select random references $a_1, a_2 \neq i$ and compute $\zeta_L$
5:    Obtain offset $P$ and surrogate gradient $\nabla f$ via Adam-based mapping
6:    Generate two candidates $X_{La,new1}$ and $X_{Lb,new1}$
7:    Perform dimension-wise selection with smooth probability
8:    Apply boundary constraint and evaluate $\Delta f$
9:    if $\Delta f \le 0$ or $\mathrm{rand} < \exp(-\Delta f / T_L(t))$ then
10:      Accept $X_{new}$ (Metropolis acceptance)
11:    end if
12: end for
13: Update global best $X_{best}$

3.2. Improvement in the Shaded Region

In the traditional PGA, individuals in the shaded region primarily perform simple mitosis and random perturbation updates. Since this process lacks historical gradient information and directional feedback, the resulting search behavior is largely blind and fails to establish an effective global exploration gradient. In complex multimodal problems, such a highly random mechanism may enhance diversity but also leads to significant computational inefficiency.
To mitigate these problems, the shaded region adopts improvement strategies similar to those in the light region, including an Adam-based enhanced gradient update mechanism and the construction of a relative offset direction using the population mean and the relative fitness direction factor. As the overall structure parallels that of the light region, further details are omitted here.
During the update process, each individual in the shaded region also constructs a candidate search line. Unlike the light region, where the focus is on localized exploitation, the shaded region aims at cross-region global exploration. The candidate search line for shaded-region updates is formulated as follows:
$$X_{S,n1}^{t+1} = \begin{cases} (\omega + \alpha_i)\, X_S^t, & k = 1, \\[4pt] X_S^t + \sin\!\left( \dfrac{2\pi}{dim}\, t \right) X_{S,new1}^{t+1}, & k > 1. \end{cases}$$
where ω serves as the time-dependent weight and α i as the directional coefficient, consistent with their definitions in the light region. These parameters regulate the intensity of exploration and the amplitude of perturbations. By adopting this linear–periodic composite update scheme, individuals in the shaded region generate periodic directional perturbations across dimensions, strengthening both the nonlinearity and the coverage of the global search.
To incorporate global directional information, this study further introduces a relative fitness direction factor into the construction of the offset vector P:
$$\zeta_S = \frac{f(a_1) - f(i)}{\left| f(a_1) - f(i) \right| + \varepsilon}, \qquad P = \overline{X_{all}} - X_{S,new1(i)}^{t+1}.$$
where $a_1$ denotes the reference individual index, with $a_1 \in A$, and $\zeta_S \in \{-1, +1\}$ specifies the sign of the search direction. Unlike the light region, where fitness comparison is restricted to a local subset, the shaded region randomly selects its reference individual $a_1$ from the entire population $A$, ensuring that the resulting direction factor reflects global information. Furthermore, $\overline{X_{all}}$ represents the mean of the entire population rather than the mean of the shaded region alone, which preserves search breadth and diversity while guiding the population toward the global optimum.
Building upon the above formulation and incorporating the Adam-based gradient-descent mechanism, the update of the guiding point for the shaded region is given by:
$$f_{dir} = X_{best} - \eta_t\, \frac{\hat{v}_S(t)}{\sqrt{\hat{s}_S(t)} + \epsilon}$$
where $\hat{v}_S(t)$ and $\hat{s}_S(t)$ correspond to the bias-corrected first- and second-order moment terms, while $\eta_t$ denotes the dynamically decaying learning rate.
The construction of candidate solutions follows the same procedure as in the light region, while each dimension employs a probability-adaptive selection mechanism:
$$X_{S,new1(i)}^{t+1}[j] = \begin{cases} X_{Sa,new1(i)}^{t+1}[j], & \mathrm{rand}/k > \mathrm{rand}, \\[4pt] X_{Sb,new1(i)}^{t+1}[j], & \text{otherwise.} \end{cases}$$
This mechanism maintains independence and randomness in candidate selection across dimensions, enabling individuals to retain differentiated search behaviors and effectively reducing the risk of premature convergence. Once the candidate solutions are generated, standard boundary-handling procedures are applied to ensure that the updated search points remain within the feasible solution domain:
$$X_{new} = \min\big( \max(X_{new}, lb),\, ub \big), \qquad \text{accept } X_{new} \ \text{if} \ f(X_{new}) < f(X_S^t)$$
If the fitness of the new solution is better than that of the current individual, the new solution is accepted; otherwise, the individual remains at its original position. This approach balances stability and efficiency, enabling shaded-region individuals to continually refine their positions under gradient guidance.
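A brief sketch of the shaded-region selection and greedy acceptance (Equations (28) and (29)) is given below; the name f_obj stands for the black-box objective function and is an assumption of this example.

```python
import numpy as np

def shaded_select_and_accept(cand_a, cand_b, X_old, k, lb, ub, f_obj, rng=None):
    """Dimension-wise candidate mixing (rand/k > rand) followed by greedy acceptance."""
    rng = rng or np.random.default_rng()
    mask = rng.random(cand_a.shape) / k > rng.random(cand_a.shape)
    X_new = np.where(mask, cand_a, cand_b)
    X_new = np.clip(X_new, lb, ub)               # boundary handling, Eq. (29)
    return X_new if f_obj(X_new) < f_obj(X_old) else X_old
```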
After independently updating the shaded and light regions in each iteration, the algorithm reselects both the global best and the regional best solutions and periodically repartitions the two regions. This procedure maintains the dynamic balance of the overall search framework. The pseudocode of the shaded-region is presented in Algorithm 2. The initial light-region and shaded-region search stages of the plant are illustrated in Figure 3.
Algorithm 2 Shaded-Region Updating Strategy in ALDPGA.
1: The overall process is similar to Algorithm 1, with the following differences:
2:    – Reference indices $a_1, a_2$ are selected from the entire population instead of the light region only.
3:    – Direction factor $\zeta_S$ and offset $P$ are computed using the global mean $\overline{X_{all}}$.
4:    – The gradient-like update uses $f_{dir} = X_{best} - \eta_t \hat{v}_S / (\sqrt{\hat{s}_S} + \epsilon)$.
5:    – Dimension-wise candidate selection uses the adaptive rule $(\mathrm{rand}/k > \mathrm{rand})$.
6:    – Acceptance follows a pure greedy update: if $f(X_{new}) < f(X_S)$ then accept.

3.3. Improvement in the Cell Elongation Stage and the Lévy Flight Strategy

In the original PGA, the cell elongation stage models the final growth response of plant cells under light stimulation, aiming primarily at local exploitation and fine-tuning of solutions. Nonetheless, this stage presents three major shortcomings:
(1)
Lack of global exploration: During the elongation stage, the original algorithm updates individuals only through random perturbations and movement toward the current best solution, without employing long-range jump exploration. Consequently, the algorithm is easily trapped in local optima when optimizing multimodal functions.
(2)
Monolithic update mechanism: In the original algorithm, the update direction is determined solely by the linear term $(X - X_{best})$, without incorporating probabilistic perturbations or non-Gaussian randomness. As a result, the search trajectory tends to converge prematurely and exhibits limited ability to escape local optima.
(3)
Late-stage oscillation and divergence: Because the original algorithm does not include a dynamic suppression mechanism for search intensity, it continues to apply relatively strong perturbations even near the optimum, which can cause instability such as oscillation or boundary overshooting.
To address these limitations, the elongation stage in this study incorporates a multi-stage Lévy flight strategy together with a curvature-dependent nonlinear elongation mechanism. The transition between stages is regulated by a Sigmoid decay probability function, enabling dynamic balancing between global jump exploration in the early stage and fine-grained exploitation in the later stage.

3.3.1. Stage Partitioning and Probability Control Mechanism

In the cell elongation stage, a probabilistic function p(t) is employed to control the switching of search modes, defined as follows:
$$p(t) = \frac{1}{1 + \exp\!\big( -10 \times (t/T - 0.5) \big)}$$
The probability function follows a smooth Sigmoid curve, allowing the algorithm to perform global Lévy flights with high probability in the early stage and gradually shift to local elongation in the later stage. Specifically, Lévy flight is executed when rand > p ( t ) ; otherwise, local elongation is carried out. This probability-based control enables a smooth temporal transition between search modes and prevents convergence oscillations caused by abrupt phase switching.
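A two-line sketch of the mode switch driven by Equation (30); the string labels are illustrative only.

```python
import numpy as np

def search_mode(t, T, rng=None):
    """Return 'levy' with probability 1 - p(t), otherwise 'elongation'."""
    rng = rng or np.random.default_rng()
    p_t = 1.0 / (1.0 + np.exp(-10.0 * (t / T - 0.5)))   # rises smoothly from ~0 to ~1
    return "levy" if rng.random() > p_t else "elongation"
```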

3.3.2. Segmented Lévy-Based Global Exploration

Lévy flight is a non-Gaussian stochastic walk in which step lengths follow a power-law heavy-tailed distribution. This enables the search process to perform a few long-range jumps alongside numerous short-range adjustments, achieving an effective balance between global exploration and local exploitation.
To incorporate this mechanism, the Mantegna algorithm is employed to generate Lévy step lengths, formulated mathematically as follows. The first step is to define the scale parameter σ u of the Lévy distribution:
$$\sigma_u = \left[ \frac{\Gamma(1+\beta)\, \sin\!\left( \frac{\pi\beta}{2} \right)}{\Gamma\!\left( \frac{1+\beta}{2} \right) \beta\, 2^{(\beta-1)/2}} \right]^{1/\beta}$$
where $\Gamma$ denotes the Gamma function, while $\beta$ serves as the control parameter of the Lévy distribution, typically chosen within the range 1.2–1.9. Smaller values of $\beta$ lead to broader jump ranges and increase the probability of long-distance transitions. Two independent normal distributions are then sampled to construct the Lévy step-length matrix, where $n$ denotes the population size and $m$ the number of dimensions.
$$u \sim N(0, \sigma_u^2), \quad v \sim N(0, 1), \qquad Z = \left[ \frac{u_{ij}}{|v_{ij}|^{1/\beta}} \right]_{n \times m}$$
The heavy-tailed nature of the Lévy distribution implies that only a very small fraction of the step lengths correspond to long-distance jumps, enabling individuals to traverse multiple local extrema and thereby effectively avoid falling into local-optimum traps. In the ALDPGA algorithm, Lévy flight is embedded into the cell elongation stage to regulate late-stage global jumps and elite-guided movement. According to the iteration ratio τ = t / T , the algorithm adaptively selects different Lévy update strategies:
$$X_{new} = \begin{cases} X + scale \cdot decay \cdot Z \odot span, & \tau > 0.4, \\ X + scale \cdot 0.8 \cdot Z \odot (X_{best} - X_{all}) \odot R, & \text{otherwise} \end{cases}$$
where $\tau = t / Iter$ is a parameter positively correlated with $t$ that selects between the two Lévy update modes. The factor $scale$ denotes the dimension-normalization scaling factor. The term $decay = (1 - \tau)^{0.7}$ controls the gradual attenuation of the Lévy step size over time. The parameter $span = (ub - lb)$ represents the width of the search interval, and $\odot$ denotes the element-wise multiplication operator. Here, $R = \mathrm{rand}(N, dim)$ is a uniformly distributed random matrix of size $N \times dim$.
Combining the above mechanisms, the algorithm applies elite-guided Lévy flights while $\tau \le 0.4$, allowing individuals to perform directional long-range jumps around the global best solution, and switches to non-guided Lévy perturbations with decaying amplitude once $\tau > 0.4$, preserving residual global exploration as convergence proceeds. The pseudocode of the Cell Elongation and Lévy flight phase is presented in Algorithm 3.
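To illustrate the segmented update of Equation (33), the sketch below regenerates a Mantegna step matrix as in Section 2.4 and applies either the decaying non-guided jump or the elite-guided move; the 1/dim scale factor is an assumption standing in for the unspecified dimension normalization, and the greedy comparison against the previous positions is left to the caller.

```python
import numpy as np
from math import gamma, sin, pi

def levy_elongation(X, X_best, lb, ub, t, T, beta=1.5, rng=None):
    """Segmented Lévy move of Eq. (33) for a population matrix X of shape (n, dim)."""
    rng = rng or np.random.default_rng()
    n, dim = X.shape
    tau = t / T
    # Mantegna step matrix Z, same construction as in Section 2.4
    sigma_u = ((gamma(1 + beta) * sin(pi * beta / 2))
               / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    Z = rng.normal(0, sigma_u, (n, dim)) / np.abs(rng.normal(0, 1, (n, dim))) ** (1 / beta)
    scale = 1.0 / dim                               # assumed dimension normalization
    if tau > 0.4:                                   # non-guided jumps with decaying amplitude
        X_new = X + scale * (1 - tau) ** 0.7 * Z * (ub - lb)
    else:                                           # elite-guided Lévy search
        X_new = X + scale * 0.8 * Z * (X_best - X) * rng.random((n, dim))
    return np.clip(X_new, lb, ub)                   # boundary handling
```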
This approach maintains exploration while accelerating convergence. The complete procedural flow of the ALDPGA algorithm is illustrated in Figure 4.
In conclusion, the proposed enhancements systematically strengthen the original PGA from three perspectives: the light region, the shaded region, and the cell elongation stage. The synergy of these three mechanisms across the iterative process enables ALDPGA to achieve notable improvements in exploration depth, search efficiency, and convergence stability.
Algorithm 3 Cell Elongation and Lévy Flight Phase in ALDPGA.
1: Compute switching probability $p(t) = 1 / \big(1 + \exp(-10\,(t/T - 0.5))\big)$
2: if $\mathrm{rand} > p(t)$ then
3:    Perform Lévy-based global exploration:
4:       If $\tau > 0.4$, random global Lévy flight; else elite-guided Lévy search
5:    Apply boundary check and greedy update
6: else
7:    for $i = 1$ to $N$ do
8:       Compute curvature factor and vicinity interaction
9:       Update $X_{new} = X + FOC + 0.3 \times Cell\_vicinity$
10:      Apply boundary and fitness update
11:    end for
12: end if
13: Return updated population $X$

3.4. Distinction from Existing Adam- or Lévy-Enhanced Metaheuristics

It should be emphasized that, unlike generic hybrid metaheuristic algorithms that directly embed Adam or Lévy operators into the population update process, the proposed design differs in the following respects:
  • ALDPGA introduces these mechanisms in a structure-aware manner by explicitly leveraging the light–shade partitioning and phototropic growth behavior that are intrinsic to PGA.
  • Moreover, in ALDPGA, Adam is not employed as a general-purpose optimizer; instead, it is specifically applied to elite cells and functions as a growth operator analogous to biologically inspired directional growth.
  • Similarly, the Lévy mechanism in ALDPGA does not act as a simple random perturbation. Rather, it adaptively varies with the plant growth stage (i.e., the iteration process), operating either as stochastic directional exploration or elite-guided perturbation, thereby supporting the natural transition from exploration to exploitation during the cell elongation phase.
    Consequently, we argue that the dynamic stage-control strategy adopted in ALDPGA should not be regarded as an external scheduling heuristic, but rather as an intrinsic component that is highly consistent with the biological interpretation of PGA.

3.5. Analyzing the Time Complexity of ALDPGA

To analyze the computational complexity of ALDPGA, its main components and execution frequency are examined. The algorithm consists of an initialization phase, a main iterative process, including Adam-based updates in the light region, updates in the shadow region, and the cell-elongation mechanism, as well as sorting and population management.
During initialization, the population X and the first- and second-order moment vectors m and v in the Adam optimizer are generated. This process involves N individuals distributed in a D-dimensional search space, resulting in a time complexity of O ( N · D ) .
In the dimension-wise Adam updates for both the light and shadow regions, search lines are constructed and nested loops are executed. Consequently, the per-iteration computational cost for processing N individuals is O ( N · D 2 ) .
In the early-stage mitosis or standard update process, only basic vector operations and boundary checks are performed, leading to a time complexity of O(N · D).
In the cell-elongation and Lévy-flight stage, Lévy step generation and position updates are D-dimensional vector operations with a complexity of O(N · D). The subsequent elongation operation perturbs individual positions and evaluates their fitness, yielding a complexity of O(N · D + N · f_obj), where f_obj denotes the cost of a single objective-function evaluation.
For sorting and population management, the population is ranked in each iteration using a sorting procedure with a complexity of O(N log N). An additional strong sorting operation is executed every 20 iterations, which does not affect the asymptotic complexity.
This analysis focuses on the algorithmic computational cost and excludes the objective-function evaluation, which is treated as a black-box operation. Considering T iterations with dimension-wise updates and population-level operations, the overall time complexity of ALDPGA is O(T · (N · D² + N log N)). When the dimensionality D is large, the dominant term becomes O(T · N · D²). Compared with conventional algorithms such as PSO and GWO, which typically have a per-iteration complexity of O(N · D), ALDPGA introduces dimension-wise gradient exploration. By applying the Adam optimizer to perform fine-grained searches along each dimension, the algorithm achieves higher convergence accuracy and an improved capability to escape local optima, at the expense of increasing the computational complexity from linear to quadratic in D.
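As a concrete illustration under the experimental settings used below (N = 30 individuals, D = 100 dimensions), the dominant per-iteration term amounts to N · D² = 30 × 100² = 300,000 dimension-wise update steps, whereas a linear-complexity update rule would require only N · D = 3000 steps per iteration; the overhead is therefore roughly a factor of D, excluding objective-function evaluations in both cases.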
To comprehensively assess the effectiveness and cooperative impact of these enhancements, the next section presents comparative experiments on the CEC2017 [32], CEC2020 [33], and CEC2022 [34] benchmark suites. The performance of ALDPGA is evaluated across various dimensions and problem categories, and further compared with multiple classical and state-of-the-art optimization algorithms through detailed statistical analyses.

4. Experimental Results and Discussion

4.1. Experimental Setup

To ensure a comprehensive and fair evaluation of the proposed optimization algorithm, this study adopts several authoritative benchmark test suites from recent years, including 30 functions from CEC2017, 10 functions from CEC2020, and 12 functions from CEC2022. These benchmarks are used to systematically assess the performance of ALDPGA, with the detailed function categories listed in Table 2.
In addition, ALDPGA is compared with the original PGA and eight additional representative optimization algorithms. Following the CEC benchmark protocol, each test function is independently executed 30 times. For all algorithms, the termination criterion is a maximum of 3000 iterations with an initial population size of 30. The dimensional configurations used in the experiments are listed in Table 2.
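As an outline of this protocol, the following Python sketch runs an optimizer independently 30 times and reports the mean and standard deviation of the best fitness per run; the optimizer call signature is a hypothetical placeholder and not the interface of the released code.

```python
import numpy as np

def evaluate_protocol(optimizer, objective, dim, runs=30, pop_size=30, max_iter=3000):
    # `optimizer(objective, dim, pop_size, max_iter, rng)` is assumed to return
    # the best fitness found in one independent run.
    best = np.array([optimizer(objective, dim, pop_size, max_iter,
                               np.random.default_rng(run)) for run in range(runs)])
    return best.mean(), best.std()
```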

4.2. Experiment 1: Comparisons of the Solution Accuracy

This section reports the performance comparison between the proposed ALDPGA and the competing algorithms on the CEC2017, CEC2020, and CEC2022 benchmark suites. For each test function, the mean value (mean) and standard deviation (std) of the obtained results are presented, with the best performing outcomes highlighted in bold. The data are presented in scientific notation, rounded to two decimal places. The detailed results are provided in Table 3, Table 4, Table 5 and Table 6. To reduce redundancy in the main text, the performance results of the CEC2017 benchmark set with 50 dimensions are presented in the Appendix A (see Table A1).
It is worth noting that the CEC2020 and CEC2022 benchmark suites include several functions that are inherited from the classical CEC2017 benchmarks. Specifically, functions f 1 , f 9 , and f 10 in CEC2020 correspond to functions f 1 , f 23 , and f 24 in CEC2017, respectively. Consequently, when the dimensional settings and experimental configurations are kept consistent, the stochastic nature of metaheuristic algorithms has only a limited impact on the final outcomes for these inherited functions, leading to highly similar or even identical result values reported in the tables.
Table 3 summarizes the mean and std performance of ten algorithms on the CEC2017 benchmark set under the 100-dimensional configuration. In general, ALDPGA attains the first rank on 22 out of 30 benchmark functions. This clearly demonstrates the stronger overall optimization capability and the broadly consistent advantage of ALDPGA in high-dimensional scenarios.
At the function level, ALDPGA exhibits pronounced order-of-magnitude advantages on composite and complex functions. For instance, on functions F1, F3, F12, F14, F18, and F30, the mean values achieved by ALDPGA are several orders of magnitude lower than those of PGA and most comparison algorithms, indicating more efficient global exploration and improved convergence quality. These findings demonstrate that the gradient-guided strategy and the dynamic weighting mechanism effectively enhance local exploitation capability, enabling faster convergence toward the optimum.
Moreover, ALDPGA maintains relatively low std values on most functions (e.g., F1, F3, F11, F12, and F30), highlighting its good stability across 30 independent runs. Although ALDPGA does not obtain the best results on a limited number of functions (e.g., F6, F13, F15, F19, and F27–F29), suggesting that certain algorithms may hold local advantages on specific landscapes, this does not detract from its overall superiority and robustness under the 100-dimensional setting. This confirms that the incorporated dynamic stage adjustment mechanism enables strong adaptability and stability when addressing the complex search landscapes of hybrid and composite functions, thereby ensuring superior optimization performance even in high-dimensional settings.
It is worth noting that on a small number of functions (e.g., F15, F28, and F29 in CEC2017), PGA slightly outperforms ALDPGA. To further investigate this phenomenon, the landscapes of representative functions are visualized, as shown in Figure 5. A careful examination reveals that these functions are characterized by relatively smooth, basin-dominated landscapes. Under such conditions, the low-bias stochastic growth mechanism of PGA can maintain a more uniform coverage over flat regions, resulting in slightly lower mean values.
In contrast, although ALDPGA significantly enhances the overall performance on high-dimensional hybrid and composite functions, for certain smooth basin-dominated problems where excessive exploration is not critical, ALDPGA may incur minor performance degradation. This behavior reflects a typical trade-off between exploration and exploitation rather than a structural deficiency of the algorithm. Nevertheless, this phenomenon does not affect the overall conclusions, as ALDPGA consistently achieves superior rank distributions and statistically significant improvements across the entire benchmark suite.
Table 4 and Table 5 present the comparative performance results of ALDPGA and the competing algorithms on the CEC2020 benchmark set.
When the dimensionality is reduced to CEC2020-50D, the competition becomes more pronounced, implying that certain algorithms, particularly SAO, are better able to exploit advantages on specific functions. ALDPGA remains optimal or near-optimal on F 3 , F 5 , F 6 , F 7 , and F 9 , and continues to exhibit order-of-magnitude superiority on F 5 and F 7 , confirming the effectiveness of its core enhancement mechanisms on complex problems. Conversely, functions such as F 1 , F 2 , F 8 , and F 10 tend to favor problem-specific structures, local search behaviors, or stable convergence patterns, under which some algorithms can achieve marginal advantages.
In the 100-dimensional case, ALDPGA secures the first rank on nine out of ten benchmark functions, indicating a clear overall superiority. Numerically, ALDPGA achieves pronounced order-of-magnitude improvements on several challenging functions. For example, on F 1 , F 5 , and F 7 , the mean values obtained by ALDPGA are reduced by multiple orders of magnitude compared with PGA and most competitors. These results demonstrate a more effective synergy between global exploration and late-stage exploitation in high-dimensional complex landscapes.
This observation demonstrates that ALDPGA possesses enhanced adaptability and generalization ability in high-dimensional and complex optimization scenarios. With increasing dimensionality, the Adam-based gradient learning mechanism and the Lévy flight-driven stochastic global exploration become more effective within the enlarged search space, enabling the algorithm to achieve superior convergence accuracy and stable optimization behavior.
To assess both the performance of the proposed algorithm on high-dimensional complex problems and its applicability and universal effectiveness in lower-dimensional settings, additional experiments were conducted on the 20-dimensional CEC2022 benchmark suite. Compared with earlier benchmarks such as CEC2017 and CEC2020, CEC2022 introduces more intricate landscape structures and dynamic perturbation components, offering a more comprehensive evaluation of an optimization algorithm’s stability and adaptability across varying search scales.
As shown in Table 6, ALDPGA still achieves the highest number of first-rank results overall, indicating that it maintains strong overall competitiveness in low- and medium-dimensional scenarios. However, compared with the high-dimensional cases, its performance advantages are relatively more dispersed.
From a function-level perspective, ALDPGA attains the best performance on functions F 1 , F 5 , F 8 , F 10 , and F 12 . In particular, for F 1 , the mean value reaches 3.00 × 102 with an std value close to zero, demonstrating extremely high stability and reproducibility. Moreover, relatively small std values are also observed on functions such as F 8 and F 10 , indicating a more robust convergence behavior.
Overall, these results suggest that ALDPGA exhibits good generalization performance across different dimensional settings, thereby validating its effectiveness and stability.
To more clearly illustrate the performance differences between ALDPGA and the traditional PGA, three evaluation metrics—best convergence value, mean value, and standard deviation—were analyzed, as presented in Table 7. The notation “++” indicates that the algorithm achieves convergence accuracy [35] more than one order of magnitude better than the compared algorithm, whereas “+” denotes higher accuracy within the same order of magnitude. Conversely, “−−” indicates convergence accuracy more than one order of magnitude worse, while “−” signifies lower accuracy within the same order of magnitude.
According to Table 7, ALDPGA achieves strong performance on the unimodal functions f 1 – f 3 . For the best results, ALDPGA surpasses PGA on 26 functions, with 5 of them exhibiting improvements exceeding one order of magnitude. Nonetheless, its performance is slightly inferior to the original PGA on f 13 , f 15 , and f 22 . For the mean results, ALDPGA is worse than PGA on only five functions: f 13 , f 15 , f 19 , f 27 , and f 28 .
In terms of standard deviation, ALDPGA performs one order of magnitude worse on the hybrid function f 28 and shows slightly poorer results on f 4 , f 15 , f 18 , f 25 , and f 27 . For all other functions, it demonstrates superior performance. Importantly, ALDPGA exhibits remarkable advantages on complex functions such as f 12 , f 14 , and f 18 , achieving improvements of one order of magnitude in both mean accuracy and stability compared with the traditional PGA.
These results suggest that although ALDPGA experiences slight fluctuations on a few complex functions, its overall performance is still markedly superior to that of the traditional PGA. This difference mainly stems from the varying landscape characteristics of the benchmark functions, which impose different requirements on the balance between global exploration and local exploitation. For multimodal and hybrid functions, the update mechanism of PGA can still maintain certain advantages in local convergence under specific dimensional conditions. In contrast, ALDPGA generally achieves higher convergence accuracy and more stable optimization performance due to the Adam-enhanced gradient learning and the Lévy-based global exploration strategies.
The comprehensive performance comparisons between ALDPGA and the other algorithms are reported in Appendix A Table A2, Table A3, Table A4, Table A5, Table A6, Table A7, Table A8 and Table A9. The results indicate that ALDPGA does not exhibit degradation exceeding one order of magnitude relative to any competing algorithm in terms of best and mean performance. With respect to std, ALDPGA shows slightly inferior performance on only a very limited number of functions, and the difference is confined to approximately one order of magnitude when compared with certain algorithms such as DE and RIME. By contrast, when benchmarked against classical methods including PSO, TOC, and VO, ALDPGA maintains a dominant advantage across nearly all evaluation metrics.
In summary, ALDPGA exhibits enhanced global search capability and improved convergence stability across the three metrics—best result, mean performance, and standard deviation—demonstrating its effectiveness in both exploration and exploitation during the optimization process.

4.3. Experiment 2: Comparisons of Convergence Performance

In this study, convergence curves of ALDPGA, the traditional PGA, and eight additional comparison algorithms were evaluated on the 30 CEC2017 benchmark functions, as shown in Figure 6. Two to three functions of each type were selected for visualization: The red bold curve denotes the proposed ALDPGA, the blue thick curve represents the traditional PGA, and the remaining curves illustrate the performance of the other comparison methods. The figure clearly illustrates that ALDPGA achieves faster convergence and higher solution accuracy on the majority of functions, thereby confirming the effectiveness of the proposed enhancements [35]. All convergence plots for CEC2017 and CEC2020 are included in the Appendix A; see Figure A1 and Figure A2 for details.
For the unimodal functions f 1 , f 3 , ALDPGA attains markedly superior final convergence accuracy compared with the other algorithms and converges significantly faster during the early stages. This demonstrates that the Adam-based gradient learning mechanism effectively accelerates local directional adjustments, enhancing both precision and efficiency of the search process.
For the multimodal functions f 5 , f 7 , f 9 , traditional algorithms such as PSO and GWO exhibit evident premature convergence, whereas ALDPGA maintains a strong downward trend during the mid-iterations, indicating a superior capability to escape local optima. This benefit arises from the combined influence of the Lévy-flight global exploration and the dynamic stage adjustment system, which enables the population to resume global searching after being trapped in local optima.
For the hybrid and composite functions, ALDPGA exhibits smooth and strongly descending convergence curves that stabilize toward the final iterations, indicating high convergence accuracy and stability in complex search spaces. This pattern demonstrates the algorithm’s ability to adaptively balance exploration and exploitation across different stages, thereby achieving superior convergence behavior on challenging problems.

4.4. Experiment 3: Comparisons of Stability Performance

During the stability analysis stage, boxplots [36] were utilized to assess the stability of the ALDPGA algorithm. Two to three functions of each type were selected for visualization, as shown in Figure 7. All box plots for CEC2017 and CEC2020 are included in the Appendix A; see Figure A3 and Figure A4 for details. Each boxplot corresponds to the same experimental conditions and parameter settings as those used in the convergence plots. In the boxplots, the blue rectangle represents the box, the whiskers are displayed using blue dashed lines, and outliers are marked with red “+” symbols. The three horizontal lines inside the box represent, from bottom to top, the first quartile Q1, the median Q2 (displayed in red), and the third quartile Q3. The interquartile range (IQR) is defined as the height of the box, namely Q3 − Q1. Outliers are identified as data points below Q1 − 1.5 · IQR or above Q3 + 1.5 · IQR [37].
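For reference, the whisker rule described above corresponds to the following small sketch; the function name and the use of NumPy percentiles are illustrative choices.

```python
import numpy as np

def iqr_outliers(run_results):
    # Flag runs lying outside the 1.5*IQR whiskers used in the boxplots.
    x = np.asarray(run_results, dtype=float)
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return x[(x < lower) | (x > upper)]
```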
Overall, ALDPGA shows narrow box heights with the median positioned toward the lower part of the box, indicating low variability across 30 independent runs on the CEC2017 suite and consistently high convergence accuracy.
In terms of outliers, ALDPGA presents several abnormal points on the multimodal functions f 5 and f 8 , the hybrid function f 10 , and the composite functions f 20 and f 28 . This suggests that while the algorithm achieves high accuracy on these functions, its robustness is somewhat limited.
Notably, several functions ( f 10 and f 20 ) contain low outliers, indicating that the algorithm occasionally obtains even better solutions during individual runs. This highlights the algorithm’s strong global exploration capability in complex search landscapes.

4.5. Experiment 4: Statistical Significance Testing Experiments

Due to the inherent stochastic nature of metaheuristic optimization algorithms, relying solely on numerical performance results is often insufficient to reliably determine whether the observed performance differences are statistically significant. To more rigorously validate the effectiveness of the proposed ALDPGA, this study further conducts a systematic set of nonparametric statistical tests, including the Wilcoxon signed-rank [38] test with multiple comparison control, as well as the Friedman global test followed by post hoc analysis.

4.5.1. Wilcoxon Signed-Rank Test with Multiple Comparison Correction

First, the Wilcoxon signed-rank test is conducted to perform pairwise statistical comparisons between ALDPGA and the competing algorithms on the 30 benchmark functions of CEC2017 (100dim). As the performance results of metaheuristic algorithms typically violate the normality assumption, the Wilcoxon test, a distribution-free nonparametric approach, is well suited for this analysis. A two-sided hypothesis test is employed, with the significance level set to α = 0.05.
For each comparison, the positive rank sum R + and the negative rank sum R are computed to quantify the performance dominance of ALDPGA over the comparison algorithm. The corresponding calculation formulas are as follows:
R^{+} = \sum_{d_i > 0} \operatorname{rank}(d_i) + \frac{1}{2} \sum_{d_i = 0} \operatorname{rank}(d_i), \qquad R^{-} = \sum_{d_i < 0} \operatorname{rank}(d_i) + \frac{1}{2} \sum_{d_i = 0} \operatorname{rank}(d_i)
where d_i denotes the difference between the paired results of ALDPGA and the compared algorithm on the i-th benchmark function.
To mitigate the inflation of Type I error caused by multiple pairwise comparisons, Holm–Bonferroni correction and false discovery rate (FDR) correction are applied to the original p-values.
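A minimal SciPy/statsmodels sketch of this pairwise procedure is given below; the data layout (one mean error per benchmark function for each algorithm) is assumed, and SciPy's default handling of zero differences differs slightly from the half-rank convention in the equation above.

```python
from scipy.stats import wilcoxon
from statsmodels.stats.multitest import multipletests

def pairwise_wilcoxon(aldpga_errors, competitor_errors, alpha=0.05):
    # `aldpga_errors`: per-function mean errors of ALDPGA (length = number of functions).
    # `competitor_errors`: dict mapping algorithm name -> array of the same length.
    names = list(competitor_errors)
    p_raw = [wilcoxon(aldpga_errors, competitor_errors[n]).pvalue for n in names]
    p_holm = multipletests(p_raw, alpha=alpha, method="holm")[1]
    p_fdr = multipletests(p_raw, alpha=alpha, method="fdr_bh")[1]
    return {n: (p_raw[i], p_holm[i], p_fdr[i]) for i, n in enumerate(names)}
```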
The Wilcoxon test results are summarized in Table 8. For all competing algorithms, the R + values associated with ALDPGA are substantially larger than the corresponding R values, indicating that ALDPGA achieves superior performance on the majority of benchmark functions. In addition, all original p-values are well below the significance threshold. More importantly, after applying Holm–Bonferroni and FDR corrections, all adjusted p-values remain below 0.05, confirming that the performance advantages of ALDPGA are statistically significant even under strict multiple comparison control.

4.5.2. Friedman Test and Post Hoc Analysis

Although the Wilcoxon signed-rank test provides pairwise statistical evidence, it does not offer a global assessment of multiple algorithms simultaneously. Therefore, the Friedman test is further employed to perform a global statistical analysis of the overall performance differences among all algorithms across the complete benchmark set. In this test, each benchmark function is treated as a block, and algorithms are ranked according to their performance, with smaller ranks indicating better performance.
The Friedman test produces a p-value of 3.016 × 10−25, which is far below the significance level of α = 0.05. As a result, the null hypothesis of equivalent performance among all algorithms is decisively rejected, confirming the presence of statistically significant overall differences.
The average ranking results obtained from the Friedman test are shown in Table 9, which further demonstrates that ALDPGA achieved the best overall performance with an average ranking of 1.59, significantly outperforming other competing algorithms. This demonstrates that the superiority of ALDPGA is stable and consistent across the entire benchmark suite, rather than being confined to a limited number of test cases.
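In outline, the global test and the average ranks reported in Table 9 can be obtained with the following sketch, assuming a results array with one row per benchmark function and one column per algorithm, smaller values being better.

```python
import numpy as np
from scipy.stats import friedmanchisquare, rankdata

def friedman_with_average_ranks(results):
    # `results`: (n_functions, n_algorithms) array of mean errors.
    stat, p_value = friedmanchisquare(*results.T)         # one sample per algorithm
    avg_ranks = rankdata(results, axis=1).mean(axis=0)    # rank 1 = best on a function
    return p_value, avg_ranks
```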
To further identify the specific differences between ALDPGA and the competing methods, a post hoc multiple comparison analysis based on average ranks is conducted following the Friedman test, with Bonferroni correction applied to ensure conservative statistical inference. The post hoc results, reported in Table 10, show that ALDPGA significantly outperforms TOC, GWO, PSO, VO, GA, RIME, and DE in terms of overall ranking. Although ALDPGA also achieves better average ranks than SAO and PGA, these differences do not reach statistical significance after Bonferroni correction.
In summary, these statistical analysis results convincingly demonstrate that the performance improvements achieved by ALDPGA on complex and high-dimensional optimization problems are not only advantageous in terms of numerical outcomes, but are also robust and reliable from a statistical perspective. It should be noted that the lack of statistical significance for a few algorithms in the post hoc analysis is mainly attributable to the conservative nature of the Bonferroni correction, as well as the inherently strong competitiveness of these algorithms.

4.6. Experiment 5: Effectiveness of the Involved Strategies

To examine the independent contribution and effectiveness of each proposed enhancement to the overall algorithmic performance, an ablation study was designed [39]. By systematically removing or replacing specific modules or strategies and observing the resulting changes in performance, the influence of each component on the overall algorithm can be quantified. In this study, several simplified variants were developed based on the fully improved ALDPGA.
In ALDPGA-1, the Adam-based adaptive gradient learning mechanism is removed from the light-region update. The position update no longer uses first- and second-moment estimates, and individuals are updated using a conventional scaled directional rule.
ALDPGA-2 disables the Lévy-flight-based perturbation in the cell-elongation phase. The long-tailed random steps are replaced by the original PGA-style mutation mechanism, eliminating large occasional jumps. The Adam-based learning and dynamic stage-control strategy are preserved.
In ALDPGA-3, the dynamic stage-control mechanism is removed, and all algorithmic components operate with fixed parameters throughout the optimization process. The adaptive switching between exploration and exploitation phases is disabled. Adam learning and Lévy perturbation are still applied as in the full ALDPGA.
By comparing each ablated version with the complete ALDPGA, the independent contribution of each module and its influence on the overall algorithmic performance can be evaluated more intuitively. The effectiveness of the proposed strategies was assessed on the CEC2017 benchmark, where each function was independently executed 30 times with a maximum of 3000 iterations. The mean result over the 30 runs was used as the performance metric for each variant.
Similarly, the best result for each function is highlighted, and the ranking of each variant provides an intuitive indication of the effectiveness of the improvement strategies. The corresponding results are summarized in Table 11.
The results reported in Table 11 and Appendix A Table A10 demonstrate that ALDPGA consistently achieves the strongest and most stable overall performance across both dimensional settings. These findings indicate that the three enhancement mechanisms integrated into the complete ALDPGA jointly contribute to higher winning rates and improved robustness across different dimensions, with particularly concentrated advantages at the medium-dimensional setting (50D).
Although the ablation variants are not strictly inferior to ALDPGA on all functions, occasional cases are observed under the 100D setting where variants such as ALDPGA-3 or ALDPGA-1 slightly outperform ALDPGA in terms of mean performance, leading to a small number of rank1 results. Such localized superiority typically reflects a bias toward exploration or exploitation under specific fitness landscapes, such as highly multimodal or noise-sensitive functions. However, given their limited frequency and substantially lower overall rank1 counts, these cases do not alter the conclusion that the complete ALDPGA mechanism is more robust.
Further analysis of the degradation patterns among the variants provides insight into the relative contributions of individual modules. ALDPGA-2 achieves the fewest rank1 results at both 100D and 50D, indicating that removing the Lévy-driven long-tailed perturbation and dynamic elongation trajectories significantly weakens the algorithm’s ability to escape local optima in multimodal and composite functions. Although ALDPGA-1 remains competitive in both dimensional settings, its overall performance is still inferior to that of ALDPGA, highlighting the critical role of Adam-based directional and scale-adaptive updates in improving exploitation efficiency, accelerating convergence, and maintaining stability, especially in high-dimensional scenarios. Overall, ALDPGA-3 performs between ALDPGA-1 and ALDPGA-2, suggesting that stage-wise scheduling contributes to a stable balance between exploration and exploitation, albeit with a smaller impact than the combined effects of Lévy perturbation and Adam learning.
Taken together, Table 11 and the Appendix A Table A10 confirm that the complete ALDPGA achieves the highest number of rank1 results under both 50D and 100D settings of CEC2017, typically accompanied by more favorable standard deviation values. This indicates that the performance gains of ALDPGA are reflected not only in improved mean solution quality, but also in enhanced statistical stability.

4.7. Applied Engineering Challenges in Mathematical Modeling

This section aims to systematically evaluate the performance of ALDPGA in solving complex constrained optimization problems. To this end, several representative engineering benchmark problems with complicated objective functions and multiple constraints are employed to assess its optimization capability [40]. The selected benchmarks include the tension/compression spring design problem [41], the three-bar truss design problem [42], and the cantilever beam design problem [43].

4.7.1. The Tension/Compression Spring Design

The tension/compression spring design problem is a well-known constrained engineering optimization problem in mechanical system design. The objective is to minimize the total weight, or equivalently the volume, of a helical spring while satisfying prescribed load conditions and mechanical performance requirements. The design must simultaneously ensure adequate strength, stiffness, and durability, which are typically modeled by a set of linear and nonlinear inequality constraints associated with shear stress, deflection limits, and geometric feasibility. The problem involves three continuous decision variables, namely the wire diameter (X1), the mean coil diameter (X2), and the number of active coils (X3) [44]. Together, these variables govern the mechanical characteristics and manufacturability of the spring. The results of the tension/compression spring design problem are shown in Table 12 and indicate that ALDPGA achieves better performance than the competing algorithms.
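As a reference, a sketch of this problem in the formulation most commonly used in the metaheuristic literature is given below, with the constraints handled by a static penalty; the exact constants, bounds, and penalty scheme adopted in the reported experiments are assumptions here and may differ.

```python
import numpy as np

def spring_design_penalized(x, penalty=1e6):
    # x = (d, D, N): wire diameter, mean coil diameter, number of active coils.
    d, D, N = x
    weight = (N + 2.0) * D * d ** 2
    g = np.array([
        1.0 - D ** 3 * N / (71785.0 * d ** 4),                       # deflection
        (4.0 * D ** 2 - d * D) / (12566.0 * (D * d ** 3 - d ** 4))
        + 1.0 / (5108.0 * d ** 2) - 1.0,                             # shear stress
        1.0 - 140.45 * d / (D ** 2 * N),                             # surge frequency
        (d + D) / 1.5 - 1.0,                                         # outer diameter
    ])
    return weight + penalty * np.sum(np.maximum(g, 0.0) ** 2)
```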

4.7.2. Three-Bar Truss Design Problem

The three-bar truss design problem is a widely used benchmark in structural optimization. The objective is to minimize the total weight of a planar truss structure subject to stress constraints. The design variables correspond to the cross-sectional areas of the truss members and are denoted as (X1) and (X2). Despite its relatively simple structural configuration, the interaction between stress constraints and load distribution introduces pronounced nonlinear characteristics into the optimization process. The results of the three-bar truss design problem are shown in Table 13 and indicate that ALDPGA achieves better performance than the competing algorithms.
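A corresponding sketch of the three-bar truss problem, again in the formulation commonly used in the literature, is shown below; the bar length l = 100 cm, load P = 2 kN/cm², and allowable stress σ = 2 kN/cm² are assumed constants.

```python
import numpy as np

def three_bar_truss_penalized(x, penalty=1e6, l=100.0, P=2.0, sigma=2.0):
    # x = (x1, x2): cross-sectional areas of the truss members.
    x1, x2 = x
    weight = (2.0 * np.sqrt(2.0) * x1 + x2) * l
    denom = np.sqrt(2.0) * x1 ** 2 + 2.0 * x1 * x2
    g = np.array([
        (np.sqrt(2.0) * x1 + x2) / denom * P - sigma,   # stress in member 1
        x2 / denom * P - sigma,                         # stress in member 2
        P / (x1 + np.sqrt(2.0) * x2) - sigma,           # stress in member 3
    ])
    return weight + penalty * np.sum(np.maximum(g, 0.0) ** 2)
```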

4.7.3. Cantilever Beam Design Problem

The cantilever beam design problem is a well-established structural optimization benchmark in mechanical and civil engineering. The objective is to minimize the total weight of the beam while satisfying prescribed strength and stiffness requirements under external loads. The constraints typically involve limits on bending stress and tip deflection, which are essential to ensure structural safety and functional reliability. The cantilever beam is commonly modeled as a stepped structure consisting of five segments, each with its own height or width. These segmental cross-sectional dimensions are treated as the main design variables and directly affect the stress distribution and deformation behavior along the beam. The results of the cantilever beam design problem are shown in Table 14 and indicate that ALDPGA achieves better performance than the competing algorithms.
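Likewise, a sketch of the five-segment cantilever beam problem in its widely used closed form is given below; the weight coefficient 0.0624 and the deflection constants 61/37/19/7/1 follow the standard literature formulation and are assumptions with respect to the exact variant used in the experiments.

```python
import numpy as np

def cantilever_beam_penalized(x, penalty=1e6):
    # x = (x1, ..., x5): cross-sectional sizes of the five beam segments.
    x = np.asarray(x, dtype=float)
    weight = 0.0624 * np.sum(x)
    g = (61.0 / x[0] ** 3 + 37.0 / x[1] ** 3 + 19.0 / x[2] ** 3
         + 7.0 / x[3] ** 3 + 1.0 / x[4] ** 3 - 1.0)   # deflection constraint
    return weight + penalty * max(g, 0.0) ** 2
```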

5. Discussion and Future Directions

Comprehensive evaluations on the CEC2017, CEC2020, and CEC2022 benchmarks show that ALDPGA outperforms nine representative algorithms, including the original PGA. ALDPGA achieves superior best-fitness values, mean performance, and stability, particularly on complex multimodal and hybrid functions where faster convergence and higher accuracy are observed. Ablation experiments demonstrate that the Adam learning mechanism, Lévy flight strategy, and dynamic stage control all contribute critically to the performance gains, and these components exhibit notable synergistic effects.
Although ALDPGA has demonstrated strong performance, several avenues for further research remain open. First, the current implementations of Lévy flight and dynamic decay parameters are fixed. Future work could employ adaptive learning or deep policy networks to dynamically adjust the Lévy step distribution, enabling more intelligent and task-adaptive search trajectories. Second, the dynamic stage-control mechanism is currently based on a piecewise schedule determined by the iteration ratio. More flexible switching behavior may be achieved by introducing reinforcement learning or Markov Decision Processes to build finer-grained stage-discrimination models. Third, while Adam improves local exploitation in the light region, its update rule still operates within a fixed action space. Future studies may investigate adaptive gradient-fusion mechanisms defined in continuous action spaces, allowing update directions to better conform to the geometric characteristics of complex optimization landscapes.
Furthermore, ALDPGA can be extended to more challenging problem types such as constrained optimization, multi-objective optimization, large-scale optimization, and high-dimensional sparse problems. Leveraging distributed computing, GPU acceleration, and large-scale parallelization may also enhance its applicability to industrial-scale tasks. Additional biological mechanisms—such as auxin redistribution or curvature regulation—may also be incorporated to develop more realistic and efficient plant-growth-inspired optimization models.

6. Conclusions

The experimental results indicate that ALDPGA demonstrates strong advantages in high-dimensional and multimodal optimization scenarios. Therefore, several key conclusions can be drawn from this study:
  • Systematic evaluations on the CEC2017, CEC2020, and CEC2022 benchmark suites show that the algorithm surpasses both the original PGA and various classical or state-of-the-art optimizers with respect to best performance, mean fitness, and stability.
  • Through the coordinated integration of Adam–SA gradient learning, Lévy-based global perturbation, and dynamic stage control, ALDPGA maintains high global exploration capability in the early search period and achieves fast, stable convergence in later iterations. Ablation studies confirm that these components are all essential contributors to performance improvement and that significant synergistic effects exist among them.
  • Meanwhile, comprehensive nonparametric statistical analyses further confirm the robustness of the proposed ALDPGA. These results jointly verify that the superiority of ALDPGA is not incidental but statistically reliable across the benchmark suite.
  • In terms of engineering applications, ALDPGA achieves excellent optimization results on representative engineering problems, including the tension/compression spring design, three-bar truss optimization, and cantilever beam design problems. These results further demonstrate the effectiveness and reliability of ALDPGA in solving complex, constrained offline engineering optimization tasks.
Overall, this study represents the first systematic integration of Adam, Lévy flight, and dynamic stage control into the PGA framework, and the resulting performance improvements are substantial. Developing more intelligent, adaptive, and learning-driven variants of ALDPGA is an important direction for future research.

Author Contributions

Conceptualization, S.G.; Methodology, S.G.; Software, Y.X.; Validation, Y.X. and W.L.; Formal analysis, Y.X. and W.L.; Investigation, Y.X. and W.L.; Resources, Y.X.; Data curation, W.L. and B.Q.; Writing—original draft, Y.X.; Writing—review and editing, S.G.; Visualization, Y.X.; Supervision, B.Q. and S.G.; Project administration, B.Q. and S.G.; Funding acquisition, B.Q. and S.G. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China under Grant No. 62376109.

Data Availability Statement

To ensure the validity and reproducibility of the experimental results, the datasets and source code used in this study are openly available at: https://github.com/xieeix-code/The-Intelligent-Optimization-Algorithm-ALDPGA (accessed on 25 December 2025). All benchmark data sets used in this study (CEC2017, CEC2020, and CEC2022) are publicly available from the official CEC competition website.

Acknowledgments

The authors gratefully appreciate the codes and data made publicly available through the MathWorks File Exchange and sincerely thank the original contributors for providing these valuable resources. The authors also express their sincere gratitude to their two supervisors for their valuable guidance and continuous support. During the preparation of this manuscript, the authors utilized ChatGPT 5.0 to assist with language refinement and stylistic improvement. All generated content was subsequently reviewed, verified, and revised by the authors, who take full responsibility for the accuracy and integrity of the final publication.

Conflicts of Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Appendix A

Table A1. Running results of 10 algorithms in the CEC2017 (50-D).
Function | Metric | GWO | TOC | PSO | GA | DE | RIME | SAO | VO | PGA | ALDPGA
F 1 Mean9.88 × 1094.30 × 1097.11 × 1098.44 × 1052.96 × 1085.13 × 1054.95 × 1031.26 × 1059.33 × 1036.06 × 103
Std3.91 × 1091.93 × 1094.98 × 1093.14 × 1054.99 × 1081.32 × 1057.05 × 1036.14 × 1049.94 × 1036.85 × 103
F 3 Mean9.59 × 1041.21 × 1056.94 × 1043.82 × 1051.12 × 1052.37 × 1042.39 × 1055.55 × 1048.36 × 1043.00 × 102
Std1.25 × 1046.88 × 1042.76 × 1048.58 × 1042.55 × 1047.60 × 1033.98 × 1041.52 × 1041.64 × 1041.65 × 10−6
F 4 Mean1.41 × 1031.46 × 1031.24 × 1035.82 × 1025.39 × 1026.01 × 1025.18 × 1025.37 × 1025.38 × 1024.69 × 102
Std6.54 × 1024.02 × 1028.59 × 1024.63 × 1013.42 × 1014.02 × 1015.07 × 1014.98 × 1014.21 × 1015.58 × 101
F 5 Mean7.28 × 1021.03 × 1036.67 × 1021.04 × 1035.88 × 1026.84 × 1026.44 × 1029.81 × 1027.19 × 1025.83 × 102
Std3.79 × 1017.25 × 1013.65 × 1018.43 × 1012.12 × 1013.16 × 1013.63 × 1016.35 × 1013.93 × 1011.97 × 101
F 6 Mean6.19 × 1026.72 × 1026.10 × 1026.06 × 1026.02 × 1026.09 × 1026.01 × 1026.71 × 1026.15 × 1026.01 × 102
Std5.51 × 1001.11 × 1013.37 × 1005.53 × 1001.22 × 1004.67 × 1001.13 × 1008.55 × 1009.30 × 1008.11 × 10−1
F 7 Mean1.09 × 1031.73 × 1039.58 × 1021.17 × 1039.69 × 1029.50 × 1021.17 × 1031.76 × 1031.03 × 1038.65 × 102
Std6.93 × 1011.62 × 1026.93 × 1018.11 × 1011.12 × 1024.94 × 1015.16 × 1012.05 × 1026.63 × 1013.70 × 101
F 8 Mean1.05 × 1031.33 × 1039.92 × 1021.26 × 1038.95 × 1029.88 × 1029.41 × 1021.24 × 1031.03 × 1038.92 × 102
Std7.55 × 1017.04 × 1015.13 × 1016.95 × 1012.40 × 1013.27 × 1013.14 × 1016.92 × 1015.51 × 1011.98 × 101
F 9 Mean9.14 × 1031.44 × 1045.03 × 1032.11 × 1041.69 × 1034.38 × 1032.45 × 1032.25 × 1043.02 × 1031.15 × 103
Std3.69 × 1034.26 × 1033.51 × 1035.22 × 1031.02 × 1031.64 × 1031.80 × 1037.15 × 1031.18 × 1032.31 × 102
F 10 Mean7.00 × 1031.22 × 1047.15 × 1037.34 × 1031.36 × 1047.26 × 1036.55 × 1039.54 × 1037.92 × 1037.30 × 103
Std9.54 × 1021.45 × 1039.56 × 1027.06 × 1022.35 × 1038.49 × 1021.91 × 1031.22 × 1031.03 × 1031.12 × 103
F 11 Mean4.82 × 1032.83 × 1031.70 × 1031.10 × 1041.40 × 1031.47 × 1031.46 × 1031.52 × 1031.31 × 1031.29 × 103
Std1.75 × 1031.41 × 1034.80 × 1027.92 × 1032.56 × 1029.07 × 1011.28 × 1029.81 × 1015.80 × 1016.20 × 101
F 12 Mean1.18 × 1091.00 × 1093.34 × 1091.79 × 1072.99 × 1065.25 × 1071.99 × 1065.84 × 1073.57 × 1061.77 × 105
Std9.76 × 1087.91 × 1083.84 × 1096.38 × 1062.27 × 1063.81 × 1071.17 × 1063.32 × 1071.91 × 1068.52 × 104
F 13 Mean2.98 × 1082.07 × 1087.18 × 1081.13 × 1051.03 × 1049.02 × 1048.07 × 1039.62 × 1041.23 × 1049.99 × 103
Std8.28 × 1084.96 × 1081.01 × 1095.13 × 1048.53 × 1034.51 × 1046.59 × 1033.86 × 1041.02 × 1048.19 × 103
F 14 Mean9.92 × 1057.72 × 1058.46 × 1055.41 × 1065.53 × 1041.67 × 1055.10 × 1043.36 × 1051.40 × 1051.17 × 104
Std9.99 × 1059.60 × 1052.15 × 1064.11 × 1067.06 × 1041.15 × 1054.14 × 1047.56 × 1048.85 × 1041.06 × 104
F 15 Mean1.61 × 1075.70 × 1074.79 × 1068.65 × 1049.98 × 1031.11 × 1041.27 × 1047.19 × 1041.63 × 1041.28 × 104
Std2.38 × 1072.87 × 1081.81 × 1077.95 × 1048.27 × 1037.16 × 1035.66 × 1033.96 × 1041.09 × 1041.09 × 104
F 16 Mean3.10 × 1035.02 × 1033.41 × 1034.18 × 1032.80 × 1033.31 × 1033.33 × 1034.61 × 1033.34 × 1033.07 × 103
Std3.70 × 1026.27 × 1025.25 × 1025.35 × 1026.35 × 1024.53 × 1024.23 × 1025.03 × 1025.31 × 1023.84 × 102
F 17 Mean2.90 × 1034.01 × 1033.00 × 1033.94 × 1032.78 × 1033.00 × 1032.92 × 1033.90 × 1032.95 × 1032.69 × 103
Std2.97 × 1024.26 × 1022.85 × 1024.29 × 1024.25 × 1022.31 × 1022.76 × 1023.96 × 1023.61 × 1022.61 × 102
F 18 Mean5.11 × 1069.52 × 1064.67 × 1063.99 × 1063.07 × 1051.94 × 1068.14 × 1057.58 × 1051.54 × 1065.65 × 104
Std5.40 × 1062.99 × 1071.69 × 1073.93 × 1065.36 × 1051.35 × 1068.28 × 1054.59 × 1051.76 × 1064.84 × 104
F 19 Mean6.67 × 1064.03 × 1061.91 × 1062.46 × 1041.11 × 1041.77 × 1042.15 × 1043.60 × 1061.82 × 1041.66 × 104
Std2.41 × 1077.28 × 1066.18 × 1061.47 × 1049.40 × 1031.14 × 1041.33 × 1041.60 × 1061.57 × 1041.40 × 104
F 20 Mean2.92 × 1033.62 × 1032.99 × 1033.52 × 1033.04 × 1033.05 × 1032.97 × 1033.69 × 1033.11 × 1032.83 × 103
Std3.25 × 1023.48 × 1023.73 × 1024.15 × 1023.11 × 1022.78 × 1023.18 × 1024.20 × 1023.39 × 1023.01 × 102
F 21 Mean2.51 × 1032.86 × 1032.51 × 1032.93 × 1032.40 × 1032.50 × 1032.45 × 1032.84 × 1032.52 × 1032.39 × 103
Std2.95 × 1011.02 × 1025.05 × 1011.07 × 1025.35 × 1014.38 × 1013.90 × 1011.11 × 1024.58 × 1012.20 × 101
F 22 Mean9.14 × 1031.42 × 1049.00 × 1039.78 × 1031.55 × 1049.00 × 1038.31 × 1031.16 × 1049.58 × 1039.02 × 103
Std9.42 × 1021.20 × 1039.17 × 1026.39 × 1021.48 × 1037.28 × 1021.15 × 1031.04 × 1031.07 × 1039.50 × 102
F 23 Mean3.01 × 1033.73 × 1033.35 × 1033.53 × 1032.87 × 1032.95 × 1032.89 × 1033.70 × 1032.96 × 1032.83 × 103
Std6.62 × 1012.18 × 1021.68 × 1021.13 × 1024.80 × 1015.08 × 1012.55 × 1013.29 × 1025.47 × 1012.67 × 101
F 24 Mean3.17 × 1033.89 × 1033.52 × 1034.03 × 1033.03 × 1033.12 × 1033.05 × 1034.01 × 1033.06 × 1033.00 × 103
Std9.39 × 1012.23 × 1021.76 × 1021.79 × 1027.71 × 1015.50 × 1013.30 × 1012.80 × 1023.47 × 1012.15 × 101
F 25 Mean3.75 × 1033.83 × 1033.33 × 1033.09 × 1033.07 × 1033.07 × 1033.02 × 1033.09 × 1033.05 × 1033.03 × 103
Std3.28 × 1023.30 × 1025.31 × 1022.80 × 1013.52 × 1012.76 × 1013.66 × 1012.51 × 1013.96 × 1013.84 × 101
F 26 Mean6.69 × 1031.40 × 1047.38 × 1039.80 × 1035.20 × 1035.93 × 1035.26 × 1031.26 × 1045.87 × 1034.78 × 103
Std6.66 × 1022.06 × 1031.36 × 1031.01 × 1033.96 × 1027.57 × 1023.38 × 1022.38 × 1034.08 × 1022.57 × 102
F 27 Mean3.66 × 1034.06 × 1033.78 × 1034.16 × 1033.45 × 1033.46 × 1033.38 × 1034.91 × 1033.45 × 1033.44 × 103
Std1.47 × 1022.38 × 1022.58 × 1022.89 × 1021.08 × 1025.60 × 1019.12 × 1013.87 × 1021.25 × 1021.07 × 102
F 28 Mean4.39 × 1034.55 × 1034.25 × 1033.36 × 1033.35 × 1033.33 × 1033.30 × 1033.33 × 1033.31 × 1033.30 × 103
Std4.31 × 1021.12 × 1038.00 × 1022.91 × 1014.87 × 1012.84 × 1011.77 × 1011.24 × 1011.20 × 1011.89 × 101
F 29 Mean4.54 × 1037.43 × 1034.49 × 1035.11 × 1033.67 × 1034.56 × 1034.21 × 1036.59 × 1034.28 × 1033.98 × 103
Std3.03 × 1021.18 × 1033.96 × 1023.52 × 1022.37 × 1022.78 × 1023.79 × 1025.02 × 1023.74 × 1023.01 × 102
F 30 Mean1.14 × 1081.01 × 1087.52 × 1062.01 × 1061.15 × 1067.08 × 1061.13 × 1067.50 × 1071.18 × 1069.57 × 105
Std4.76 × 1078.63 × 1077.85 × 1066.70 × 1054.10 × 1052.92 × 1062.98 × 1051.69 × 1073.75 × 1051.62 × 105
Rank First00004070018
Table A2. Performance comparison of GWO and ALDPGA in the CEC2017 (100-D).
Function | GWO: Best / Mean / Std | ALDPGA: Best / Mean / Std
F 1 3.43 × 1010 ( )5.20 × 1010 ( )9.97 × 109 ( )1.00 × 102 ( + + )7.35 × 103 ( + + )9.01 × 103 ( + + )
F 3 2.41 × 105 ( )2.78 × 105 ( )2.13 × 104 ( )3.00 × 102 ( + + )5.98 × 102 ( + + )7.47 × 102 ( + + )
F 4 3.21 × 103 (−)5.16 × 103 (−)1.27 × 103 ( )4.00 × 102 (+)5.66 × 102 (+)8.03 × 101 ( + + )
F 5 1.09 × 103 (−)1.20 × 103 (−)1.12 × 102 (−)7.04 × 102 (+)7.99 × 102 (+)4.85 × 101 (+)
F 6 6.31 × 102 (−)6.40 × 102 (−)5.58 × 100 (−)6.06 × 102 (+)6.16 × 102 (+)5.18 × 100 (+)
F 7 1.79 × 103 (−)2.04 × 103 (−)1.32 × 102 (−)1.11 × 103 (+)1.32 × 103 (+)1.21 × 102 (+)
F 8 1.34 × 103 (−)1.48 × 103 (−)6.85 × 101 (+)1.00 × 103 (+)1.12 × 103 (+)7.34 × 101 (−)
F 9 1.58 × 104 (−)3.41 × 104 (−)1.22 × 104 (−)3.08 × 103 (+)5.99 × 103 (+)2.75 × 103 (+)
F 10 1.22 × 104 (−)1.70 × 104 (−)5.13 × 103 (−)1.20 × 104 (+)1.47 × 104 (+)1.18 × 103 (+)
F 11 4.61 × 104 ( )6.51 × 104 ( )1.39 × 104 ( )1.64 × 103 ( + + )2.21 × 103 ( + + )2.52 × 102 ( + + )
F 12 3.53 × 109 ( )9.24 × 109 ( )4.88 × 109 ( )1.33 × 105 ( + + )4.69 × 105 ( + + )1.78 × 105 ( + + )
F 13 1.48 × 106 ( )1.32 × 109 ( )1.51 × 109 ( )5.07 × 103 ( + + )1.76 × 104 ( + + )1.14 × 104 ( + + )
F 14 8.04 × 105 ( )7.09 × 106 ( )3.53 × 106 ( )1.70 × 104 ( + + )9.26 × 104 ( + + )4.75 × 104 ( + + )
F 15 7.15 × 104 ( )1.56 × 108 ( )2.38 × 108 ( )2.15 × 103 ( + + )9.64 × 103 ( + + )8.61 × 103 ( + + )
F 16 4.83 × 103 (−)6.16 × 103 (−)5.80 × 102 (−)3.69 × 103 (+)4.90 × 103 (+)5.48 × 102 (+)
F 17 4.40 × 103 (−)5.23 × 103 (−)8.97 × 102 (−)3.49 × 103 (+)4.51 × 103 (+)4.64 × 102 (+)
F 18 1.22 × 106 ( )6.18 × 106 ( )5.80 × 106 ( )5.68 × 104 ( + + )1.71 × 105 ( + + )7.66 × 104 ( + + )
F 19 3.99 × 106 ( )2.65 × 108 ( )3.36 × 108 ( )2.21 × 103 ( + + )1.24 × 104 ( + + )1.37 × 104 ( + + )
F 20 3.65 × 103 (−)4.80 × 103 (−)8.25 × 102 (−)3.44 × 103 (+)4.48 × 103 (+)4.74 × 102 (+)
F 21 2.80 × 103 (−)3.00 × 103 (−)1.22 × 102 (−)2.51 × 103 (+)2.63 × 103 (+)6.43 × 101 (+)
F 22 1.57 × 104 (−)1.96 × 104 (−)3.02 × 103 (−)1.51 × 104 (+)1.72 × 104 (+)1.21 × 103 (+)
F 23 3.50 × 103 (−)3.65 × 103 (−)1.39 × 102 (−)3.06 × 103 (+)3.14 × 103 (+)4.17 × 101 (+)
F 24 4.01 × 103 (−)4.27 × 103 (−)1.22 × 102 (−)3.54 × 103 (+)3.62 × 103 (+)5.43 × 101 (+)
F 25 5.53 × 103 (−)7.02 × 103 (−)1.16 × 103 ( )3.13 × 103 (+)3.26 × 103 (+)7.36 × 101 ( + + )
F 26 1.36 × 104 (−)1.54 × 104 (−)1.12 × 103 (−)8.08 × 103 (+)9.62 × 103 (+)8.18 × 102 (+)
F 27 3.91 × 103 (−)4.12 × 103 (−)1.23 × 102 (−)3.42 × 103 (+)3.64 × 103 (+)1.10 × 102 (+)
F 28 6.61 × 103 (−)8.87 × 103 (−)1.13 × 103 (+)3.10 × 103 (+)4.56 × 103 (+)3.69 × 103 (−)
F 29 6.97 × 103 (−)8.50 × 103 (−)8.69 × 102 (−)5.03 × 103 (+)6.08 × 103 (+)5.33 × 102 (+)
F 30 1.58 × 108 ( )1.67 × 109 ( )1.66 × 109 ( )6.74 × 103 ( + + )1.02 × 104 ( + + )3.62 × 103 ( + + )
+ + + 10 19 0 0 10 19 0 0 12 15 2 0 0 0 19 10 0 0 19 10 0 2 15 12
Table A3. Performance comparison of TOC and ALDPGA in the CEC2017 (100-D).
Function | TOC: Best / Mean / Std | ALDPGA: Best / Mean / Std
F 1 1.87 × 1010 ( )4.52 × 1010 ( )1.58 × 1010 ( )1.00 × 102 ( + + )7.35 × 103 ( + + )9.01 × 103 ( + + )
F 3 2.35 × 105 ( )4.30 × 105 ( )2.53 × 105 ( )3.00 × 102 ( + + )5.98 × 102 ( + + )7.47 × 102 ( + + )
F 4 3.34 × 103 (−)8.02 × 103 ( )3.20 × 103 ( )4.00 × 102 (+)5.66 × 102 ( + + )8.03 × 101 ( + + )
F 5 1.56 × 103 (−)1.85 × 103 (−)1.33 × 102 (−)7.04 × 102 (+)7.99 × 102 (+)4.85 × 101 (+)
F 6 6.77 × 102 (−)6.94 × 102 (−)6.67 × 100 (−)6.06 × 102 (+)6.16 × 102 (+)5.18 × 100 (+)
F 7 2.90 × 103 (−)3.63 × 103 (−)3.49 × 102 (−)1.11 × 103 (+)1.32 × 103 (+)1.21 × 102 (+)
F 8 2.12 × 103 (−)2.28 × 103 (−)9.68 × 101 (−)1.00 × 103 (+)1.12 × 103 (+)7.34 × 101 (+)
F 9 3.76 × 104 ( )5.72 × 104 (−)9.29 × 103 (−)3.08 × 103 ( + + )5.99 × 103 (+)2.75 × 103 (+)
F 10 2.43 × 104 (−)2.76 × 104 (−)1.68 × 103 (−)1.20 × 104 (+)1.47 × 104 (+)1.18 × 103 (+)
F 11 1.42 × 104 (−)6.47 × 104 ( )4.47 × 104 ( )1.64 × 103 (+)2.21 × 103 ( + + )2.52 × 102 ( + + )
F 12 2.72 × 109 ( )8.15 × 109 ( )5.85 × 109 ( )1.33 × 105 ( + + )4.69 × 105 ( + + )1.78 × 105 ( + + )
F 13 8.04 × 107 ( )4.01 × 108 ( )2.36 × 108 ( )5.07 × 103 ( + + )1.76 × 104 ( + + )1.14 × 104 ( + + )
F 14 1.09 × 106 ( )1.34 × 107 ( )2.09 × 107 ( )1.70 × 104 ( + + )9.26 × 104 ( + + )4.75 × 104 ( + + )
F 15 8.86 × 106 ( )7.01 × 107 ( )6.96 × 107 ( )2.15 × 103 ( + + )9.64 × 103 ( + + )8.61 × 103 ( + + )
F 16 8.12 × 103 (−)1.17 × 104 (−)1.85 × 103 (−)3.69 × 103 (+)4.90 × 103 (+)5.48 × 102 (+)
F 17 6.74 × 103 (−)1.08 × 104 (−)7.36 × 103 ( )3.49 × 103 (+)4.51 × 103 (+)4.64 × 102 ( + + )
F 18 1.12 × 106 ( )1.31 × 107 ( )1.10 × 107 ( )5.68 × 104 ( + + )1.71 × 105 ( + + )7.66 × 104 ( + + )
F 19 1.14 × 107 ( )1.93 × 108 ( )2.31 × 108 ( )2.21 × 103 ( + + )1.24 × 104 ( + + )1.37 × 104 ( + + )
F 20 5.27 × 103 (−)6.18 × 103 (−)5.46 × 102 (−)3.44 × 103 (+)4.48 × 103 (+)4.74 × 102 (+)
F 21 3.75 × 103 (−)4.12 × 103 (−)1.83 × 102 (−)2.51 × 103 (+)2.63 × 103 (+)6.43 × 101 (+)
F 22 2.46 × 104 (−)2.99 × 104 (−)2.04 × 103 (−)1.51 × 104 (+)1.72 × 104 (+)1.21 × 103 (+)
F 23 4.84 × 103 (−)5.34 × 103 (−)3.24 × 102 (−)3.06 × 103 (+)3.14 × 103 (+)4.17 × 101 (+)
F 24 5.67 × 103 (−)6.78 × 103 (−)6.13 × 102 ( )3.54 × 103 (+)3.62 × 103 (+)5.43 × 101 ( + + )
F 25 5.45 × 103 (−)8.34 × 103 (−)1.74 × 103 ( )3.13 × 103 (+)3.26 × 103 (+)7.36 × 101 ( + + )
F 26 2.34 × 104 (−)3.47 × 104 (−)5.05 × 103 (−)8.08 × 103 (+)9.62 × 103 (+)8.18 × 102 (+)
F 27 4.08 × 103 (−)4.87 × 103 (−)4.72 × 102 (−)3.42 × 103 (+)3.64 × 103 (+)1.10 × 102 (+)
F 28 6.00 × 103 (−)9.82 × 103 (−)2.27 × 103 (+)3.10 × 103 (+)4.56 × 103 (+)3.69 × 103 (−)
F 29 1.15 × 104 (−)1.59 × 104 (−)2.88 × 103 (−)5.03 × 103 (+)6.08 × 103 (+)5.33 × 102 (+)
F 30 8.32 × 107 ( )6.60 × 108 ( )4.70 × 108 ( )6.74 × 103 ( + + )1.02 × 104 ( + + )3.62 × 103 ( + + )
+ + + 10 19 0 0 11 18 0 0 14 14 1 0 0 0 19 10 0 0 18 11 0 1 14 14
Table A4. Performance comparison of PSO and ALDPGA in the CEC2017 (100-D).
Function | PSO: Best / Mean / Std | ALDPGA: Best / Mean / Std
F 1 1.67 × 1010 ( )4.04 × 1010 ( )1.62 × 1010 ( )1.00 × 102 ( + + )7.35 × 103 ( + + )9.01 × 103 ( + + )
F 3 3.22 × 105 ( )4.90 × 105 ( )9.91 × 104 ( )3.00 × 102 ( + + )5.98 × 102 ( + + )7.47 × 102 ( + + )
F 4 2.96 × 103 (−)5.57 × 103 (−)2.73 × 103 ( )4.00 × 102 (+)5.66 × 102 (+)8.03 × 101 ( + + )
F 5 9.13 × 102 (−)1.06 × 103 (−)9.30 × 101 (−)7.04 × 102 (+)7.99 × 102 (+)4.85 × 101 (+)
F 6 6.21 × 102 (−)6.32 × 102 (−)6.70 × 100 (−)6.06 × 102 (+)6.16 × 102 (+)5.18 × 100 (+)
F 7 1.24 × 103 (−)1.70 × 103 (−)2.76 × 102 (−)1.11 × 103 (+)1.32 × 103 (+)1.21 × 102 (+)
F 8 1.21 × 103 (−)1.36 × 103 (−)7.58 × 101 (−)1.00 × 103 (+)1.12 × 103 (+)7.34 × 101 (+)
F 9 1.90 × 104 (−)5.32 × 104 (−)2.14 × 104 (−)3.08 × 103 (+)5.99 × 103 (+)2.75 × 103 (+)
F 10 1.13 × 104 (+)1.56 × 104 (−)1.67 × 103 (−)1.20 × 104 (−)1.47 × 104 (+)1.18 × 103 (+)
F 11 4.28 × 103 (−)1.48 × 104 (−)1.26 × 104 ( )1.64 × 103 (+)2.21 × 103 (+)2.52 × 102 ( + + )
F 12 1.39 × 109 ( )1.46 × 1010 ( )1.20 × 1010 ( )1.33 × 105 ( + + )4.69 × 105 ( + + )1.78 × 105 ( + + )
F 13 2.41 × 105 ( )2.07 × 109 ( )2.67 × 109 ( )5.07 × 103 ( + + )1.76 × 104 ( + + )1.14 × 104 ( + + )
F 14 4.86 × 105 ( )2.41 × 106 ( )2.22 × 106 ( )1.70 × 104 ( + + )9.26 × 104 ( + + )4.75 × 104 ( + + )
F 15 1.28 × 104 (−)5.04 × 108 ( )8.00 × 108 ( )2.15 × 103 (+)9.64 × 103 ( + + )8.61 × 103 ( + + )
F 16 5.09 × 103 (−)6.33 × 103 (−)7.44 × 102 (−)3.69 × 103 (+)4.90 × 103 (+)5.48 × 102 (+)
F 17 4.24 × 103 (−)6.16 × 103 (−)1.09 × 103 (−)3.49 × 103 (+)4.51 × 103 (+)4.64 × 102 (+)
F 18 1.09 × 106 ( )7.11 × 106 ( )4.56 × 106 ( )5.68 × 104 ( + + )1.71 × 105 ( + + )7.66 × 104 ( + + )
F 19 1.08 × 106 ( )3.15 × 108 ( )4.92 × 108 ( )2.21 × 103 ( + + )1.24 × 104 ( + + )1.37 × 104 ( + + )
F 20 3.51 × 103 (−)5.24 × 103 (−)8.52 × 102 (−)3.44 × 103 (+)4.48 × 103 (+)4.74 × 102 (+)
F 21 2.94 × 103 (−)3.14 × 103 (−)1.20 × 102 (−)2.51 × 103 (+)2.63 × 103 (+)6.43 × 101 (+)
F 22 1.54 × 104 (−)1.84 × 104 (−)1.49 × 103 (−)1.51 × 104 (+)1.72 × 104 (+)1.21 × 103 (+)
F 23 4.16 × 103 (−)4.73 × 103 (−)3.33 × 102 (−)3.06 × 103 (+)3.14 × 103 (+)4.17 × 101 (+)
F 24 4.71 × 103 (−)6.00 × 103 (−)5.07 × 102 (−)3.54 × 103 (+)3.62 × 103 (+)5.43 × 101 (+)
F 25 3.58 × 103 (−)4.88 × 103 (−)1.08 × 103 ( )3.13 × 103 (+)3.26 × 103 (+)7.36 × 101 ( + + )
F 26 9.60 × 103 (−)1.88 × 104 (−)3.52 × 103 (−)8.08 × 103 (+)9.62 × 103 (+)8.18 × 102 (+)
F 27 3.68 × 103 (−)4.24 × 103 (−)4.14 × 102 (−)3.42 × 103 (+)3.64 × 103 (+)1.10 × 102 (+)
F 28 4.33 × 103 (−)9.00 × 103 (−)3.12 × 103 (+)3.10 × 103 (+)4.56 × 103 (+)3.69 × 103 (−)
F 29 6.14 × 103 (−)7.49 × 103 (−)7.24 × 102 (−)5.03 × 103 (+)6.08 × 103 (+)5.33 × 102 (+)
F 30 1.21 × 107 ( )1.28 × 109 ( )1.32 × 109 ( )6.74 × 103 ( + + )1.02 × 104 ( + + )3.62 × 103 ( + + )
+ + + 8 20 1 0 9 20 0 0 12 16 1 0 0 1 20 8 0 0 20 9 0 1 16 12
Table A5. Performance comparison of GA and ALDPGA in the CEC2017 (100-D).
Function | GA: Best / Mean / Std | ALDPGA: Best / Mean / Std
F 1 4.91 × 106 ( )9.20 × 106 ( )4.58 × 106 ( )1.00 × 102 ( + + )7.35 × 103 ( + + )9.01 × 103 ( + + )
F 3 5.54 × 105 ( )7.64 × 105 ( )1.13 × 105 ( )3.00 × 102 ( + + )5.98 × 102 ( + + )7.47 × 102 ( + + )
F 4 7.24 × 102 (−)8.76 × 102 (−)9.50 × 101 (−)4.00 × 102 (+)5.66 × 102 (+)8.03 × 101 (+)
F 5 1.56 × 103 (−)1.90 × 103 (−)1.59 × 102 (−)7.04 × 102 (+)7.99 × 102 (+)4.85 × 101 (+)
F 6 6.10 × 102 (−)6.17 × 102 (−)5.42 × 100 (−)6.06 × 102 (+)6.16 × 102 (+)5.18 × 100 (+)
F 7 1.89 × 103 (−)2.28 × 103 (−)1.84 × 102 (−)1.11 × 103 (+)1.32 × 103 (+)1.21 × 102 (+)
F 8 1.95 × 103 (−)2.20 × 103 (−)1.12 × 102 (−)1.00 × 103 (+)1.12 × 103 (+)7.34 × 101 (+)
F 9 3.42 × 104 ( )5.01 × 104 (−)6.91 × 103 (−)3.08 × 103 ( + + )5.99 × 103 (+)2.75 × 103 (+)
F 10 1.23 × 104 (−)1.53 × 104 (−)1.52 × 103 (−)1.20 × 104 (+)1.47 × 104 (+)1.18 × 103 (+)
F 11 6.63 × 104 ( )1.17 × 105 ( )2.98 × 104 ( )1.64 × 103 ( + + )2.21 × 103 ( + + )2.52 × 102 ( + + )
F 12 3.16 × 107 ( )6.97 × 107 ( )2.08 × 107 ( )1.33 × 105 ( + + )4.69 × 105 ( + + )1.78 × 105 ( + + )
F 13 3.83 × 104 (−)9.51 × 104 (−)7.27 × 104 (−)5.07 × 103 (+)1.76 × 104 (+)1.14 × 104 (+)
F 14 2.50 × 106 ( )8.34 × 106 ( )3.40 × 106 ( )1.70 × 104 ( + + )9.26 × 104 ( + + )4.75 × 104 ( + + )
F 15 1.37 × 104 (−)5.82 × 104 (−)7.06 × 104 (−)2.15 × 103 (+)9.64 × 103 (+)8.61 × 103 (+)
F 16 5.31 × 103 (−)6.61 × 103 (−)6.69 × 102 (−)3.69 × 103 (+)4.90 × 103 (+)5.48 × 102 (+)
F 17 5.08 × 103 (−)6.31 × 103 (−)7.16 × 102 (−)3.49 × 103 (+)4.51 × 103 (+)4.64 × 102 (+)
F 18 1.62 × 106 ( )8.06 × 106 ( )3.35 × 106 ( )5.68 × 104 ( + + )1.71 × 105 ( + + )7.66 × 104 ( + + )
F 19 1.86 × 104 (−)4.89 × 104 (−)2.24 × 104 (−)2.21 × 103 (+)1.24 × 104 (+)1.37 × 104 (+)
F 20 4.56 × 103 (−)5.86 × 103 (−)6.80 × 102 (−)3.44 × 103 (+)4.48 × 103 (+)4.74 × 102 (+)
F 21 3.71 × 103 (−)4.01 × 103 (−)1.36 × 102 (−)2.51 × 103 (+)2.63 × 103 (+)6.43 × 101 (+)
F 22 1.53 × 104 (−)1.80 × 104 (−)1.34 × 103 (−)1.51 × 104 (+)1.72 × 104 (+)1.21 × 103 (+)
F 23 3.77 × 103 (−)4.04 × 103 (−)1.38 × 102 (−)3.06 × 103 (+)3.14 × 103 (+)4.17 × 101 (+)
F 24 5.09 × 103 (−)5.63 × 103 (−)3.13 × 102 (−)3.54 × 103 (+)3.62 × 103 (+)5.43 × 101 (+)
F 25 3.40 × 103 (−)3.49 × 103 (−)4.27 × 101 (+)3.13 × 103 (+)3.26 × 103 (+)7.36 × 101 (−)
F 26 2.15 × 104 (−)2.53 × 104 (−)2.31 × 103 (−)8.08 × 103 (+)9.62 × 103 (+)8.18 × 102 (+)
F 27 3.79 × 103 (−)4.29 × 103 (−)2.76 × 102 (−)3.42 × 103 (+)3.64 × 103 (+)1.10 × 102 (+)
F 28 3.43 × 103 (−)3.53 × 103 (+)3.57 × 101 ( + + )3.10 × 103 (+)4.56 × 103 (−)3.69 × 103 ( )
F 29 7.09 × 103 (−)8.21 × 103 (−)5.58 × 102 (−)5.03 × 103 (+)6.08 × 103 (+)5.33 × 102 (+)
F 30 7.79 × 104 ( )2.34 × 105 ( )1.18 × 105 ( )6.74 × 103 ( + + )1.02 × 104 ( + + )3.62 × 103 ( + + )
+ + + 8 21 0 0 7 21 1 0 7 20 1 1 0 0 21 8 0 1 21 7 1 1 20 7
Table A6. Performance comparison of DE and ALDPGA in the CEC2017 (100-D).
Function | DE: Best / Mean / Std | ALDPGA: Best / Mean / Std
F 1 1.10 × 107 ( )1.86 × 109 ( )1.99 × 109 ( )1.00 × 102 ( + + )7.35 × 103 ( + + )9.01 × 103 ( + + )
F 3 4.55 × 105 ( )5.92 × 105 ( )6.73 × 104 ( )3.00 × 102 ( + + )5.98 × 102 ( + + )7.47 × 102 ( + + )
F 4 7.34 × 102 (−)8.84 × 102 (−)8.21 × 101 (−)4.00 × 102 (+)5.66 × 102 (+)8.03 × 101 (+)
F 5 7.57 × 102 (−)8.58 × 102 (−)6.71 × 101 (−)7.04 × 102 (+)7.99 × 102 (+)4.85 × 101 (+)
F 6 6.07 × 102 (−)6.12 × 102 (+)2.74 × 100 (+)6.06 × 102 (+)6.16 × 102 (−)5.18 × 100 (−)
F 7 1.30 × 103 (−)1.65 × 103 (−)3.03 × 102 (−)1.11 × 103 (+)1.32 × 103 (+)1.21 × 102 (+)
F 8 1.06 × 103 (−)1.15 × 103 (−)4.37 × 101 (+)1.00 × 103 (+)1.12 × 103 (+)7.34 × 101 (−)
F 9 6.59 × 103 (−)1.83 × 104 (−)5.16 × 103 (−)3.08 × 103 (+)5.99 × 103 (+)2.75 × 103 (+)
F 10 3.04 × 104 (−)3.19 × 104 (−)5.24 × 102 (+)1.20 × 104 (+)1.47 × 104 (+)1.18 × 103 (−)
F 11 3.76 × 103 (−)1.06 × 104 (−)4.12 × 103 ( )1.64 × 103 (+)2.21 × 103 (+)2.52 × 102 ( + + )
F 12 4.79 × 106 ( )2.46 × 108 ( )6.47 × 108 ( )1.33 × 105 ( + + )4.69 × 105 ( + + )1.78 × 105 ( + + )
F 13 2.72 × 103 (+)4.84 × 105 ( )2.39 × 106 ( )5.07 × 103 (−)1.76 × 104 ( + + )1.14 × 104 ( + + )
F 14 8.40 × 104 (−)4.30 × 105 (−)2.51 × 105 (−)1.70 × 104 (+)9.26 × 104 (+)4.75 × 104 (+)
F 15 1.98 × 103 (+)6.53 × 104 (−)3.21 × 105 ( )2.15 × 103 (−)9.64 × 103 (+)8.61 × 103 ( + + )
F 16 3.41 × 103 (+)5.19 × 103 (−)1.82 × 103 (−)3.69 × 103 (−)4.90 × 103 (+)5.48 × 102 (+)
F 17 3.29 × 103 (+)5.02 × 103 (−)1.20 × 103 (−)3.49 × 103 (−)4.51 × 103 (+)4.64 × 102 (+)
F 18 3.45 × 105 (−)1.14 × 106 (−)5.21 × 105 (−)5.68 × 104 (+)1.71 × 105 (+)7.66 × 104 (+)
F 19 2.06 × 103 (+)2.51 × 105 ( )8.52 × 105 ( )2.21 × 103 (−)1.24 × 104 ( + + )1.37 × 104 ( + + )
F 20 4.73 × 103 (−)6.60 × 103 (−)9.46 × 102 (−)3.44 × 103 (+)4.48 × 103 (+)4.74 × 102 (+)
F 21 2.59 × 103 (−)2.69 × 103 (−)5.88 × 101 (+)2.51 × 103 (+)2.63 × 103 (+)6.43 × 101 (−)
F 22 3.24 × 104 (−)3.38 × 104 (−)5.32 × 102 (+)1.51 × 104 (+)1.72 × 104 (+)1.21 × 103 (−)
F 23 3.13 × 103 (−)3.29 × 103 (−)9.09 × 101 (−)3.06 × 103 (+)3.14 × 103 (+)4.17 × 101 (+)
F 24 3.62 × 103 (−)3.88 × 103 (−)1.76 × 102 (−)3.54 × 103 (+)3.62 × 103 (+)5.43 × 101 (+)
F 25 3.41 × 103 (−)3.57 × 103 (−)8.86 × 101 (−)3.13 × 103 (+)3.26 × 103 (+)7.36 × 101 (+)
F 26 9.94 × 103 (−)1.19 × 104 (−)1.25 × 103 (−)8.08 × 103 (+)9.62 × 103 (+)8.18 × 102 (+)
F 27 3.43 × 103 (−)3.60 × 103 (+)7.17 × 101 (+)3.42 × 103 (+)3.64 × 103 (−)1.10 × 102 (−)
F 28 3.50 × 103 (−)3.82 × 103 (+)2.74 × 102 ( + + )3.10 × 103 (+)4.56 × 103 (−)3.69 × 103 ( )
F 29 4.82 × 103 (+)5.83 × 103 (+)4.81 × 102 (+)5.03 × 103 (−)6.08 × 103 (−)5.33 × 102 (−)
F 30 9.20 × 103 (−)4.11 × 105 ( )1.51 × 106 ( )6.74 × 103 (+)1.02 × 104 ( + + )3.62 × 103 ( + + )
+ + + 3 20 6 0 6 19 4 0 8 13 7 1 0 6 20 3 0 4 19 6 1 7 13 8
Table A7. Performance comparison of RIME and ALDPGA in the CEC2017 (100-D).
Function | RIME: Best / Mean / Std | ALDPGA: Best / Mean / Std
F 1 7.33 × 106 ( )1.29 × 107 ( )5.19 × 106 ( )1.00 × 102 ( + + )7.35 × 103 ( + + )9.01 × 103 ( + + )
F 3 2.91 × 105 ( )3.73 × 105 ( )5.97 × 104 ( )3.00 × 102 ( + + )5.98 × 102 ( + + )7.47 × 102 ( + + )
F 4 6.68 × 102 (−)7.90 × 102 (−)6.78 × 101 (+)4.00 × 102 (+)5.66 × 102 (+)8.03 × 101 (−)
F 5 8.93 × 102 (−)1.01 × 103 (−)8.05 × 101 (−)7.04 × 102 (+)7.99 × 102 (+)4.85 × 101 (+)
F 6 6.22 × 102 (−)6.30 × 102 (−)5.66 × 100 (−)6.06 × 102 (+)6.16 × 102 (+)5.18 × 100 (+)
F 7 1.28 × 103 (−)1.53 × 103 (−)1.16 × 102 (+)1.11 × 103 (+)1.32 × 103 (+)1.21 × 102 (−)
F 8 1.17 × 103 (−)1.31 × 103 (−)8.29 × 101 (−)1.00 × 103 (+)1.12 × 103 (+)7.34 × 101 (+)
F 9 7.86 × 103 (−)2.52 × 104 (−)8.78 × 103 (−)3.08 × 103 (+)5.99 × 103 (+)2.75 × 103 (+)
F 10 1.42 × 104 (−)1.61 × 104 (−)1.27 × 103 (−)1.20 × 104 (+)1.47 × 104 (+)1.18 × 103 (+)
F 11 3.09 × 103 (−)4.23 × 103 (−)6.04 × 102 (−)1.64 × 103 (+)2.21 × 103 (+)2.52 × 102 (+)
F 12 1.18 × 108 ( )4.38 × 108 ( )1.81 × 108 ( )1.33 × 105 ( + + )4.69 × 105 ( + + )1.78 × 105 ( + + )
F 13 7.06 × 104 ( )2.26 × 105 ( )3.87 × 105 ( )5.07 × 103 ( + + )1.76 × 104 ( + + )1.14 × 104 ( + + )
F 14 4.78 × 105 ( )2.58 × 106 ( )1.47 × 106 ( )1.70 × 104 ( + + )9.26 × 104 ( + + )4.75 × 104 ( + + )
F 15 3.22 × 104 ( )1.68 × 105 ( )4.42 × 105 ( )2.15 × 103 ( + + )9.64 × 103 ( + + )8.61 × 103 ( + + )
F 16 5.27 × 103 (−)6.58 × 103 (−)6.99 × 102 (−)3.69 × 103 (+)4.90 × 103 (+)5.48 × 102 (+)
F 17 5.02 × 103 (−)5.72 × 103 (−)5.12 × 102 (−)3.49 × 103 (+)4.51 × 103 (+)4.64 × 102 (+)
F 18 1.62 × 106 ( )5.02 × 106 ( )2.59 × 106 ( )5.68 × 104 ( + + )1.71 × 105 ( + + )7.66 × 104 ( + + )
F 19 4.76 × 105 ( )1.64 × 106 ( )9.36 × 105 ( )2.21 × 103 ( + + )1.24 × 104 ( + + )1.37 × 104 ( + + )
F 20 4.38 × 103 (−)5.32 × 103 (−)4.24 × 102 (+)3.44 × 103 (+)4.48 × 103 (+)4.74 × 102 (−)
F 21 2.75 × 103 (−)2.86 × 103 (−)7.79 × 101 (−)2.51 × 103 (+)2.63 × 103 (+)6.43 × 101 (+)
F 22 1.55 × 104 (−)1.83 × 104 (−)1.38 × 103 (−)1.51 × 104 (+)1.72 × 104 (+)1.21 × 103 (+)
F 23 3.27 × 103 (−)3.38 × 103 (−)8.05 × 101 (−)3.06 × 103 (+)3.14 × 103 (+)4.17 × 101 (+)
F 24 3.66 × 103 (−)3.88 × 103 (−)1.20 × 102 (−)3.54 × 103 (+)3.62 × 103 (+)5.43 × 101 (+)
F 25 3.35 × 103 (−)3.50 × 103 (−)7.44 × 101 (−)3.13 × 103 (+)3.26 × 103 (+)7.36 × 101 (+)
F 26 1.09 × 104 (−)1.22 × 104 (−)8.88 × 102 (−)8.08 × 103 (+)9.62 × 103 (+)8.18 × 102 (+)
F 27 3.57 × 103 (−)3.72 × 103 (−)7.01 × 101 (+)3.42 × 103 (+)3.64 × 103 (+)1.10 × 102 (−)
F 28 3.50 × 103 (−)3.58 × 103 (+)4.15 × 101 ( + + )3.10 × 103 (+)4.56 × 103 (−)3.69 × 103 ( )
F 29 7.05 × 103 (−)8.19 × 103 (−)5.48 × 102 (−)5.03 × 103 (+)6.08 × 103 (+)5.33 × 102 (+)
F 30 7.82 × 106 ( )3.06 × 107 ( )1.36 × 107 ( )6.74 × 103 ( + + )1.02 × 104 ( + + )3.62 × 103 ( + + )
Count of (−−)/(−)/(+)/(++) results — RIME: Best 9/20/0/0, Mean 9/19/1/0, Std 9/15/4/1; ALDPGA: Best 0/0/20/9, Mean 0/1/19/9, Std 1/4/15/9.
Table A8. Performance comparison of SAO and ALDPGA in the CEC2017 (100-D).
Function | SAO (Best / Mean / Std) | ALDPGA (Best / Mean / Std)
F 1 1.08 × 107 ( )3.20 × 108 ( )7.94 × 108 ( )1.00 × 102 ( + + )7.35 × 103 ( + + )9.01 × 103 ( + + )
F 3 5.22 × 105 ( )7.88 × 105 ( )1.53 × 105 ( )3.00 × 102 ( + + )5.98 × 102 ( + + )7.47 × 102 ( + + )
F 4 6.17 × 102 (−)7.20 × 102 (−)4.12 × 101 (+)4.00 × 102 (+)5.66 × 102 (+)8.03 × 101 (−)
F 5 8.47 × 102 (−)1.18 × 103 (−)2.81 × 102 (−)7.04 × 102 (+)7.99 × 102 (+)4.85 × 101 (+)
F 6 6.08 × 102 (−)6.19 × 102 (−)6.06 × 100 (−)6.06 × 102 (+)6.16 × 102 (+)5.18 × 100 (+)
F 7 1.94 × 103 (−)2.09 × 103 (−)1.04 × 102 (+)1.11 × 103 (+)1.32 × 103 (+)1.21 × 102 (−)
F 8 1.18 × 103 (−)1.43 × 103 (−)2.01 × 102 (−)1.00 × 103 (+)1.12 × 103 (+)7.34 × 101 (+)
F 9 5.41 × 103 (−)2.37 × 104 (−)1.07 × 104 (−)3.08 × 103 (+)5.99 × 103 (+)2.75 × 103 (+)
F 10 1.29 × 104 (−)1.86 × 104 (−)6.08 × 103 (−)1.20 × 104 (+)1.47 × 104 (+)1.18 × 103 (+)
F 11 8.21 × 104 ( )1.45 × 105 ( )5.63 × 104 ( )1.64 × 103 ( + + )2.21 × 103 ( + + )2.52 × 102 ( + + )
F 12 6.21 × 106 ( )3.99 × 107 ( )2.58 × 107 ( )1.33 × 105 ( + + )4.69 × 105 ( + + )1.78 × 105 ( + + )
F 13 3.52 × 103 (+)1.26 × 104 (+)8.96 × 103 (+)5.07 × 103 (−)1.76 × 104 (−)1.14 × 104 (−)
F 14 2.15 × 105 ( )7.19 × 105 (−)3.99 × 105 (−)1.70 × 104 ( + + )9.26 × 104 (+)4.75 × 104 (+)
F 15 2.04 × 103 (+)4.18 × 103 (+)3.27 × 103 (+)2.15 × 103 (−)9.64 × 103 (−)8.61 × 103 (−)
F 16 4.44 × 103 (−)5.67 × 103 (−)1.07 × 103 (−)3.69 × 103 (+)4.90 × 103 (+)5.48 × 102 (+)
F 17 3.70 × 103 (−)4.92 × 103 (−)8.18 × 102 (−)3.49 × 103 (+)4.51 × 103 (+)4.64 × 102 (+)
F 18 7.31 × 105 ( )3.04 × 106 ( )3.15 × 106 ( )5.68 × 104 ( + + )1.71 × 105 ( + + )7.66 × 104 ( + + )
F 19 2.25 × 103 (−)6.42 × 103 (+)3.70 × 103 (+)2.21 × 103 (+)1.24 × 104 (−)1.37 × 104 (−)
F 20 3.69 × 103 (−)4.98 × 103 (−)1.04 × 103 (−)3.44 × 103 (+)4.48 × 103 (+)4.74 × 102 (+)
F 21 2.68 × 103 (−)2.85 × 103 (−)1.46 × 102 (−)2.51 × 103 (+)2.63 × 103 (+)6.43 × 101 (+)
F 22 1.33 × 104 (+)1.94 × 104 (−)5.47 × 103 (−)1.51 × 104 (−)1.72 × 104 (+)1.21 × 103 (+)
F 23 3.18 × 103 (−)3.30 × 103 (−)6.27 × 101 (−)3.06 × 103 (+)3.14 × 103 (+)4.17 × 101 (+)
F 24 3.55 × 103 (−)3.72 × 103 (−)9.70 × 101 (−)3.54 × 103 (+)3.62 × 103 (+)5.43 × 101 (+)
F 25 3.33 × 103 (−)3.45 × 103 (−)7.51 × 101 (−)3.13 × 103 (+)3.26 × 103 (+)7.36 × 101 (+)
F 26 9.24 × 103 (−)1.02 × 104 (−)5.59 × 102 (+)8.08 × 103 (+)9.62 × 103 (+)8.18 × 102 (−)
F 27 3.35 × 103 (+)3.47 × 103 (+)5.79 × 101 (+)3.42 × 103 (−)3.64 × 103 (−)1.10 × 102 (−)
F 28 3.40 × 103 (−)3.48 × 103 (+)4.86 × 101 ( + + )3.10 × 103 (+)4.56 × 103 (−)3.69 × 103 ( )
F 29 5.52 × 103 (−)6.50 × 103 (−)5.13 × 102 (+)5.03 × 103 (+)6.08 × 103 (+)5.33 × 102 (−)
F 30 9.23 × 103 (−)4.61 × 104 (−)7.23 × 104 ( )6.74 × 103 (+)1.02 × 104 (+)3.62 × 103 ( + + )
Count of (−−)/(−)/(+)/(++) results — SAO: Best 6/19/4/0, Mean 5/19/5/0, Std 6/14/8/1; ALDPGA: Best 0/4/19/6, Mean 0/5/19/5, Std 1/8/14/6.
Table A9. Performance comparison of VO and ALDPGA in the CEC2017 (100-D).
Function | VO (Best / Mean / Std) | ALDPGA (Best / Mean / Std)
F 1 1.22 × 106 ( )2.02 × 106 ( )5.14 × 105 ( )1.00 × 102 ( + + )7.35 × 103 ( + + )9.01 × 103 ( + + )
F 3 2.73 × 105 ( )3.57 × 105 ( )5.12 × 104 ( )3.00 × 102 ( + + )5.98 × 102 ( + + )7.47 × 102 ( + + )
F 4 6.62 × 102 (−)8.02 × 102 (−)6.45 × 101 (+)4.00 × 102 (+)5.66 × 102 (+)8.03 × 101 (−)
F 5 1.41 × 103 (−)1.65 × 103 (−)1.42 × 102 (−)7.04 × 102 (+)7.99 × 102 (+)4.85 × 101 (+)
F 6 6.71 × 102 (−)6.82 × 102 (−)8.01 × 100 (−)6.06 × 102 (+)6.16 × 102 (+)5.18 × 100 (+)
F 7 2.86 × 103 (−)3.62 × 103 (−)4.48 × 102 (−)1.11 × 103 (+)1.32 × 103 (+)1.21 × 102 (+)
F 8 1.80 × 103 (−)2.03 × 103 (−)1.02 × 102 (−)1.00 × 103 (+)1.12 × 103 (+)7.34 × 101 (+)
F 9 3.12 × 104 ( )6.76 × 104 ( )1.68 × 104 (−)3.08 × 103 ( + + )5.99 × 103 ( + + )2.75 × 103 (+)
F 10 1.59 × 104 (−)2.17 × 104 (−)2.31 × 103 (−)1.20 × 104 (+)1.47 × 104 (+)1.18 × 103 (+)
F 11 4.00 × 103 (−)5.19 × 103 (−)7.30 × 102 (−)1.64 × 103 (+)2.21 × 103 (+)2.52 × 102 (+)
F 12 1.61 × 108 ( )3.61 × 108 ( )9.80 × 107 ( )1.33 × 105 ( + + )4.69 × 105 ( + + )1.78 × 105 ( + + )
F 13 2.68 × 104 (−)6.53 × 104 (−)1.55 × 104 (−)5.07 × 103 (+)1.76 × 104 (+)1.14 × 104 (+)
F 14 1.81 × 106 ( )2.54 × 106 ( )2.96 × 105 (−)1.70 × 104 ( + + )9.26 × 104 ( + + )4.75 × 104 (+)
F 15 2.68 × 104 ( )5.32 × 104 (−)1.65 × 104 (−)2.15 × 103 ( + + )9.64 × 103 (+)8.61 × 103 (+)
F 16 6.64 × 103 (−)8.43 × 103 (−)9.90 × 102 (−)3.69 × 103 (+)4.90 × 103 (+)5.48 × 102 (+)
F 17 5.34 × 103 (−)6.66 × 103 (−)5.74 × 102 (−)3.49 × 103 (+)4.51 × 103 (+)4.64 × 102 (+)
F 18 1.16 × 106 ( )3.32 × 106 ( )8.60 × 105 ( )5.68 × 104 ( + + )1.71 × 105 ( + + )7.66 × 104 ( + + )
F 19 1.21 × 106 ( )1.22 × 107 ( )6.44 × 106 ( )2.21 × 103 ( + + )1.24 × 104 ( + + )1.37 × 104 ( + + )
F 20 4.97 × 103 (−)6.06 × 103 (−)6.37 × 102 (−)3.44 × 103 (+)4.48 × 103 (+)4.74 × 102 (+)
F 21 2.99 × 103 (−)3.73 × 103 (−)2.77 × 102 (−)2.51 × 103 (+)2.63 × 103 (+)6.43 × 101 (+)
F 22 2.19 × 104 (−)2.47 × 104 (−)1.80 × 103 (−)1.51 × 104 (+)1.72 × 104 (+)1.21 × 103 (+)
F 23 4.56 × 103 (−)5.61 × 103 (−)5.32 × 102 ( )3.06 × 103 (+)3.14 × 103 (+)4.17 × 101 ( + + )
F 24 5.91 × 103 (−)6.91 × 103 (−)5.51 × 102 ( )3.54 × 103 (+)3.62 × 103 (+)5.43 × 101 ( + + )
F 25 3.37 × 103 (−)3.49 × 103 (−)5.25 × 101 (+)3.13 × 103 (+)3.26 × 103 (+)7.36 × 101 (−)
F 26 2.63 × 104 (−)3.41 × 104 (−)4.44 × 103 (−)8.08 × 103 (+)9.62 × 103 (+)8.18 × 102 (+)
F 27 5.44 × 103 (−)6.27 × 103 (−)6.87 × 102 (−)3.42 × 103 (+)3.64 × 103 (+)1.10 × 102 (+)
F 28 3.46 × 103 (−)3.53 × 103 (+)3.96 × 101 ( + + )3.10 × 103 (+)4.56 × 103 (−)3.69 × 103 ( )
F 29 9.01 × 103 (−)1.18 × 104 (−)1.65 × 103 (−)5.03 × 103 (+)6.08 × 103 (+)5.33 × 102 (+)
F 30 2.30 × 107 ( )4.79 × 107 ( )1.58 × 107 ( )6.74 × 103 ( + + )1.02 × 104 ( + + )3.62 × 103 ( + + )
Count of (−−)/(−)/(+)/(++) results — VO: Best 9/20/0/0, Mean 8/20/1/0, Std 8/18/2/1; ALDPGA: Best 0/0/20/9, Mean 0/1/20/8, Std 1/2/18/8.
Figure A1. Convergence curves of ALDPGA and the comparative algorithms on CEC2017 (100-D).
Figure A2. Convergence curves of ALDPGA and the comparative algorithms on CEC2020 (100-D).
Figure A3. Stability performance of ALDPGA and the comparative algorithms on CEC2017 (100-D).
Figure A4. Stability performance of ALDPGA and the comparative algorithms on CEC2020 (100-D).
Table A10. Run results of ALDPGA and several variants on CEC2017 (50-D).
Function | Metric | ALDPGA_1 | ALDPGA_2 | ALDPGA_3 | PGA | ALDPGA
F 1 Mean5.30 × 1034.15 × 1036.06 × 1039.33 × 1036.06 × 103
Std6.44 × 1035.19 × 1036.85 × 1039.94 × 1036.85 × 103
F 3 Mean1.17 × 1035.69 × 1043.00 × 1028.36 × 1043.00 × 102
Std1.33 × 1031.49 × 1041.96 × 10−61.64 × 1041.65 × 10−6
F 4 Mean5.33 × 1025.26 × 1024.72 × 1025.38 × 1024.69 × 102
Std6.29 × 1015.00 × 1015.38 × 1014.21 × 1015.58 × 101
F 5 Mean5.96 × 1027.45 × 1025.84 × 1027.19 × 1025.83 × 102
Std2.25 × 1014.10 × 1011.96 × 1013.93 × 1011.97 × 101
F 6 Mean6.05 × 1026.30 × 1026.01 × 1026.15 × 1026.01 × 102
Std8.26 × 10−11.75 × 1018.39 × 10−19.30 × 1008.11 × 10−1
F 7 Mean8.71 × 1021.11 × 1038.65 × 1021.03 × 1038.65 × 102
Std3.81 × 1011.08 × 1023.60 × 1016.63 × 1013.70 × 101
F 8 Mean8.90 × 1021.06 × 1038.92 × 1021.03 × 1038.92 × 102
Std2.28 × 1013.94 × 1012.00 × 1015.51 × 1011.98 × 101
F 9 Mean1.13 × 1038.29 × 1031.15 × 1033.02 × 1031.15 × 103
Std2.28 × 1023.49 × 1032.33 × 1021.18 × 1032.31 × 102
F 10 Mean7.78 × 1037.98 × 1037.31 × 1037.92 × 1037.30 × 103
Std8.85 × 1029.76 × 1021.13 × 1031.03 × 1031.12 × 103
F 11 Mean1.28 × 1031.33 × 1031.29 × 1031.31 × 1031.29 × 103
Std4.95 × 1017.35 × 1016.20 × 1015.80 × 1016.20 × 101
F 12 Mean3.27 × 1063.02 × 1062.18 × 1053.57 × 1061.77 × 105
Std1.50 × 1061.79 × 1069.90 × 1041.91 × 1068.52 × 104
F 13 Mean9.27 × 1031.42 × 1041.00 × 1041.23 × 1049.99 × 103
Std7.34 × 1031.13 × 1048.21 × 1031.02 × 1048.19 × 103
F 14 Mean5.73 × 1041.88 × 1051.26 × 1041.40 × 1051.17 × 104
Std3.96 × 1041.41 × 1051.00 × 1048.85 × 1041.06 × 104
F 15 Mean1.38 × 1041.33 × 1041.29 × 1041.63 × 1041.28 × 104
Std9.27 × 1039.13 × 1031.09 × 1041.09 × 1041.09 × 104
F 16 Mean3.08 × 1033.55 × 1033.07 × 1033.34 × 1033.07 × 103
Std3.99 × 1024.64 × 1023.87 × 1025.31 × 1023.84 × 102
F 17 Mean2.74 × 1033.07 × 1032.69 × 1032.95 × 1032.69 × 103
Std3.71 × 1023.22 × 1022.60 × 1023.61 × 1022.61 × 102
F 18 Mean3.03 × 1051.48 × 1067.18 × 1041.54 × 1065.65 × 104
Std3.63 × 1051.39 × 1065.78 × 1041.76 × 1064.84 × 104
F 19 Mean1.70 × 1041.87 × 1041.66 × 1041.82 × 1041.66 × 104
Std1.27 × 1041.38 × 1041.40 × 1041.57 × 1041.40 × 104
F 20 Mean2.99 × 1033.09 × 1032.80 × 1033.11 × 1032.83 × 103
Std3.56 × 1022.83 × 1023.04 × 1023.39 × 1023.01 × 102
F 21 Mean2.39 × 1032.56 × 1032.39 × 1032.52 × 1032.39 × 103
Std2.76 × 1014.70 × 1012.19 × 1014.58 × 1012.20 × 101
F 22 Mean9.29 × 1039.60 × 1038.99 × 1039.58 × 1039.02 × 103
Std1.13 × 1039.17 × 1029.71 × 1021.07 × 1039.50 × 102
F 23 Mean2.83 × 1032.97 × 1032.83 × 1032.96 × 1032.83 × 103
Std2.95 × 1014.38 × 1012.67 × 1015.47 × 1012.67 × 101
F 24 Mean3.01 × 1033.12 × 1033.00 × 1033.06 × 1033.00 × 103
Std4.21 × 1014.66 × 1012.15 × 1013.47 × 1012.15 × 101
F 25 Mean3.04 × 1033.06 × 1033.03 × 1033.05 × 1033.03 × 103
Std3.54 × 1013.97 × 1013.90 × 1013.96 × 1013.84 × 101
F 26 Mean4.79 × 1036.40 × 1034.78 × 1035.87 × 1034.78 × 103
Std2.80 × 1025.81 × 1022.57 × 1024.08 × 1022.57 × 102
F 27 Mean3.40 × 1033.46 × 1033.44 × 1033.45 × 1033.44 × 103
Std9.92 × 1017.26 × 1011.09 × 1021.25 × 1021.07 × 102
F 28 Mean3.31 × 1033.31 × 1033.30 × 1033.31 × 1033.30 × 103
Std2.91 × 1011.74 × 1011.89 × 1011.20 × 1011.89 × 101
F 29 Mean3.87 × 1034.46 × 1033.98 × 1034.28 × 1033.98 × 103
Std2.78 × 1023.64 × 1023.02 × 1023.74 × 1023.01 × 102
F 30 Mean1.09 × 1061.38 × 1069.62 × 1051.18 × 1069.57 × 105
Std2.73 × 1054.98 × 1051.61 × 1053.75 × 1051.62 × 105
Rank First | 7 | 1 | 2 | 0 | 19

References

  1. Boussaïd, I.; Lepagnot, J.; Siarry, P. A survey on optimization metaheuristics. Inf. Sci. 2013, 237, 82–117. [Google Scholar] [CrossRef]
  2. Radwan, M.; Elsayed, S.; Sarker, R.; Essam, D.; Coello, C.C. Neuro-PSO algorithm for large-scale dynamic optimization. Swarm Evol. Comput. 2025, 94, 101865. [Google Scholar] [CrossRef]
  3. Tomar, V.; Bansal, M.; Singh, P. Metaheuristic algorithms for optimization: A brief review. Eng. Proc. 2023, 59, 238. [Google Scholar]
  4. Bohat, V.K.; Hashim, F.A.; Batra, H.; Elaziz, M.A. Phototropic growth algorithm: A novel metaheuristic inspired from phototropic growth of plants. Knowl.-Based Syst. 2025, 322, 113548. [Google Scholar] [CrossRef]
  5. Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. In Proceedings of the 3rd International Conference on Learning Representations (ICLR), San Diego, CA, USA, 7–9 May 2015. [Google Scholar]
  6. Shao, Y.; Wang, J.; Sun, H.; Yu, H.; Xing, L.; Zhao, Q.; Zhang, L. An Improved BGE-Adam Optimization Algorithm Based on Entropy Weighting and Adaptive Gradient Strategy. Symmetry 2024, 16, 623. [Google Scholar] [CrossRef]
  7. Xia, Y.; Ji, Y. Application of a novel metaheuristic algorithm inspired by Adam gradient descent in distributed permutation flow shop scheduling problem and continuous engineering problems. Sci. Rep. 2025, 15, 21692. [Google Scholar] [CrossRef]
  8. Duan, Q.; Wang, L.; Kang, H.; Shen, Y.; Sun, X.; Chen, Q. Improved Salp Swarm Algorithm with Simulated Annealing Mechanism Based on Symmetric Perturbation. Symmetry 2021, 13, 1092. [Google Scholar] [CrossRef]
  9. Seyyedabbasi, A.; Tareq, W.Z.; Bačanin, N. An Effective Hybrid Metaheuristic Algorithm for Solving Global Optimization Algorithms. Multimed. Tools Appl. 2024, 83, 85103–85138. [Google Scholar] [CrossRef]
  10. Danach, K.; Harb, H.; Hejase, H.J.; Saker, L. Hybrid metaheuristic framework with reinforcement learning-based adaptation for large-scale combinatorial optimization. Eur. J. Pure Appl. Math. 2025, 18, 6602. [Google Scholar] [CrossRef]
  11. Loshchilov, I.; Hutter, F. Fixing weight decay regularization in Adam. In Proceedings of the International Conference on Learning Representations (ICLR), New Orleans, LA, USA, 6–9 May 2019. [Google Scholar]
  12. Liu, L.; Jiang, H.; He, P.; Chen, W.; Liu, X.; Gao, J.; Han, J. On the variance of the adaptive learning rate and beyond. In Proceedings of the International Conference on Learning Representations (ICLR), Addis Ababa, Ethiopia, 26–30 April 2020. [Google Scholar]
  13. Liu, M.; Yao, D.; Liu, Z.; Guo, J.; Chen, J. An improved Adam optimization algorithm combining adaptive coefficients and composite gradients based on randomized block coordinate descent. Comput. Intell. Neurosci. 2023, 2023, 4765891. [Google Scholar] [CrossRef]
  14. Zou, F.; Shen, L.; Jie, Z.; Zhang, W.; Liu, W. A Sufficient Condition for Convergences of Adam and RMSProp. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 16–20 June 2019; pp. 11127–11135. [Google Scholar]
  15. Zhang, Q.; Zhou, Y.; Zou, S. Convergence Guarantees for RMSProp and Adam in Stochastic Non-convex Optimization with Affine Noise Variance. arXiv 2024, arXiv:2404.01436. [Google Scholar]
  16. Hospodarskyy, O.; Matsiuk, V.; Matsiuk, R.; Turkot, V.; Vasylkiv, N. Understanding the Adam optimization algorithm in machine learning. In Proceedings of the CITI’2024: 2nd International Workshop on Computer Information Technologies in Industry 4.0, Ternopil, Ukraine, 12–14 June 2024. [Google Scholar]
  17. Yang, D.; Xu, B.; Xu, B.; Lu, T.; Wang, X. Application of Particle Swarm Optimization with Simulated Annealing in MIT Regularization Image Reconstruction. Symmetry 2022, 14, 275. [Google Scholar] [CrossRef]
  18. Li, J.; Pan, Q.K.; Gao, L.; Li, X.L.; Wang, C. Survey of Lévy flight-based metaheuristics for optimization. Mathematics 2022, 10, 2785. [Google Scholar] [CrossRef]
  19. Tarkhaneh, O.; Moser, I. An improved differential evolution algorithm using Archimedean spiral and neighborhood search based mutation approach for cluster analysis. Future Gener. Comput. Syst. 2019, 101, 921–939. [Google Scholar] [CrossRef]
  20. Zhang, T.; Yin, Q.; Li, S.; Guo, T.; Fan, Z. An Optimized Genetic Algorithm-Based Wavelet Image Fusion Technique for PCB Detection. Appl. Sci. 2025, 15, 3217. [Google Scholar] [CrossRef]
  21. Zhang, Y.; Wang, H.; Li, X.; Zhou, Y. An Adaptive Differential Evolution Algorithm with Strategy Self-Learning for Global Optimization. Inf. Sci. 2023, 625, 250–270. [Google Scholar]
  22. Sheng, L.; Wu, S.; Lv, Z. Modified Grey Wolf Optimizer and Application in Parameter Optimization of PI Controller. Appl. Sci. 2025, 15, 4530. [Google Scholar] [CrossRef]
  23. Hussien, A.G.; Soleimanian Gharehchopogh, F.; Bouaouda, A.; Kumar, S.; Hu, G. Recent applications and advances of African Vultures Optimization Algorithm. Artif. Intell. Rev. 2024, 57, 335. [Google Scholar] [CrossRef]
  24. Braik, M.; Al-Hiary, H.; Alzoubi, H.; Hammouri, A.; Al-Betar, M.A.; Awadallah, M.A. Tornado optimizer with Coriolis force: A novel bio-inspired meta-heuristic algorithm for solving engineering problems. Artif. Intell. Rev. 2025, 58, 123. [Google Scholar] [CrossRef]
  25. Deng, L.; Liu, S. Snow ablation optimizer: A novel metaheuristic technique for numerical optimization and engineering design. Expert Syst. Appl. 2023, 225, 120069. [Google Scholar] [CrossRef]
  26. Qin, A.K.; Suganthan, P.N. Self-adaptive differential evolution algorithm for numerical optimization. In Proceedings of the IEEE Congress on Evolutionary Computation (CEC), Edinburgh, UK, 2–5 September 2005; pp. 1785–1791. [Google Scholar]
  27. Shao, Y.; Yang, J.; Zhou, W.; Sun, H.; Xing, L.; Zhao, Q.; Zhang, L. An Improvement of Adam Based on a Cyclic Exponential Decay Learning Rate and Gradient Norm Constraints. Electronics 2024, 13, 1778. [Google Scholar] [CrossRef]
  28. Mirjalili, S.; Lewis, A. The Whale Optimization Algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  29. Sarhani, M.; Preuss, M.; Fernandez, C.M. Initialization of metaheuristics: Comprehensive review, critical analysis and research directions. Int. Trans. Oper. Res. 2023, 30, 1886–1919. [Google Scholar] [CrossRef]
  30. Sun, H.; Yu, H.; Shao, Y.; Wang, J.; Xing, L.; Zhang, L.; Zhao, Q. An Improved Adam’s Algorithm for Stomach Image Classification. Algorithms 2024, 17, 272. [Google Scholar] [CrossRef]
  31. Sun, H.; Cui, J.; Shao, Y.; Yang, J.; Xing, L.; Zhao, Q.; Zhang, L. A Gastrointestinal Image Classification Method Based on Improved Adam Algorithm. Mathematics 2024, 12, 2452. [Google Scholar] [CrossRef]
  32. Awad, N.H.; Ali, M.Z.; Liang, J.J.; Qu, B.Y.; Suganthan, P.N. Problem Definitions and Evaluation Criteria for the CEC 2017 Special Session and Competition on Single Objective Bound Constrained Real-Parameter Numerical Optimization; Technical Report; Nanyang Technological University: Singapore, 2016. [Google Scholar]
  33. Liang, J.J.; Suganthan, P.N.; Qu, B.Y.; Gong, D.W.; Yue, C.T. Problem Definitions and Evaluation Criteria for the CEC 2020 Special Session on Multimodal Multiobjective Optimization; Technical Report; Nanyang Technological University: Singapore, 2020. [Google Scholar]
  34. Luo, W.; Lin, X.; Li, C.; Yang, S.; Shi, Y. Benchmark functions for the CEC 2022 competition on seeking multiple optima in dynamic environments. arXiv 2022, arXiv:2201.00523. [Google Scholar] [CrossRef]
  35. Mirjalili, S. Nature-Inspired Optimizers: Theories, Literature Reviews and Applications. In Studies in Computational Intelligence; Springer: Cham, Switzerland, 2019; Volume 811. [Google Scholar]
  36. Li, H.; Jiang, J.; Ma, Z.; Li, L.; Liu, J.; Li, C.; Yu, Z. A Symmetry-Driven Adaptive Dual-Subpopulation Tree–Seed Algorithm for Complex Optimization with Local Optima Avoidance and Convergence Acceleration. Symmetry 2025, 17, 1200. [Google Scholar] [CrossRef]
  37. Montgomery, D.C.; Runger, G.C. Applied Statistics and Probability for Engineers, 7th ed.; Wiley: New York, NY, USA, 2018. [Google Scholar]
  38. Gao, F.; Abisado, M. Enhanced Feature Engineering Symmetry Model Based on Novel Dolphin Swarm Algorithm. Symmetry 2025, 17, 1736. [Google Scholar] [CrossRef]
  39. Li, W.; Liang, P.; Sun, B.; Sun, Y.; Huang, Y. Reinforcement learning-based particle swarm optimization with neighborhood differential mutation strategy. Swarm Evol. Comput. 2023, 78, 101274. [Google Scholar] [CrossRef]
  40. Sun, H.; Zhou, W.; Yang, J.; Shao, Y.; Xing, L.; Zhao, Q.; Zhang, L. An Improved Medical Image Classification Algorithm Based on Adam Optimizer. Mathematics 2024, 12, 2509. [Google Scholar] [CrossRef]
  41. Askarzadeh, A. A novel metaheuristic method for solving constrained engineering optimization problems: Crow search algorithm. Comput. Struct. 2016, 169, 1–12. [Google Scholar] [CrossRef]
  42. Li, Y.; Zhao, Y.; Liu, J. Dimension by dimension dynamic sine cosine algorithm for global optimization problems. Appl. Soft Comput. 2021, 98, 106933. [Google Scholar] [CrossRef]
  43. Baghmisheh, M.V.; Peimani, M.; Sadeghi, M.H.; Ettefagh, M.M.; Tabrizi, A.F. A hybrid particle swarm–Nelder–Mead optimization method for crack detection in cantilever beams. Appl. Soft Comput. 2012, 12, 2217–2226. [Google Scholar] [CrossRef]
  44. Jiang, J.; Huang, J.; Wu, J.; Luo, J.; Yang, X.; Li, W. DTSA: Dynamic Tree-Seed Algorithm with Velocity-Driven Seed Generation and Count-Based Adaptive Strategies. Symmetry 2024, 16, 795. [Google Scholar] [CrossRef]
Figure 1. Adam optimization trajectory for Beale function. The color mapping represents the contour plot of the Beale function. The four colored lines, red, blue, green, and purple, represent the optimization trajectories at different learning rates.
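To make the trajectory behaviour shown in Figure 1 easy to reproduce, the following is a minimal sketch of a plain Adam update applied to the Beale function. The starting point (−3.5, −3.5), the learning rates, and the step budget are illustrative assumptions rather than the exact settings used to generate the figure.

```python
import numpy as np

def beale(p):
    """Beale test function f(x, y); global minimum f(3, 0.5) = 0."""
    x, y = p
    return ((1.5 - x + x * y) ** 2
            + (2.25 - x + x * y ** 2) ** 2
            + (2.625 - x + x * y ** 3) ** 2)

def beale_grad(p):
    """Analytic gradient of the Beale function."""
    x, y = p
    t1 = 1.5 - x + x * y
    t2 = 2.25 - x + x * y ** 2
    t3 = 2.625 - x + x * y ** 3
    dx = 2 * t1 * (y - 1) + 2 * t2 * (y ** 2 - 1) + 2 * t3 * (y ** 3 - 1)
    dy = 2 * t1 * x + 2 * t2 * 2 * x * y + 2 * t3 * 3 * x * y ** 2
    return np.array([dx, dy])

def adam_path(start, lr, steps=500, beta1=0.9, beta2=0.999, eps=1e-8):
    """Run Adam from `start` and return the visited points (the trajectory)."""
    theta = np.array(start, dtype=float)
    m = np.zeros_like(theta)
    v = np.zeros_like(theta)
    path = [theta.copy()]
    for t in range(1, steps + 1):
        g = beale_grad(theta)
        m = beta1 * m + (1 - beta1) * g          # first-moment estimate
        v = beta2 * v + (1 - beta2) * g ** 2     # second-moment estimate
        m_hat = m / (1 - beta1 ** t)             # bias correction
        v_hat = v / (1 - beta2 ** t)
        theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
        path.append(theta.copy())
    return np.array(path)

# Trajectories for several learning rates (illustrative values only).
for lr in (0.01, 0.05, 0.1, 0.3):
    path = adam_path(start=(-3.5, -3.5), lr=lr)
    print(f"lr={lr:<4}  final point={path[-1].round(3)}  f={beale(path[-1]):.3e}")
```

Plotting each returned `path` over a contour map of `beale` yields trajectory figures of the kind shown above.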
Figure 2. ω -oscillation plot with random perturbations.
Figure 3. Cell Search Phase in the Light and Shaded Regions during the Early Plant-Growth Stage.
Figure 4. Algorithmic Flow Diagram of ALDPGA.
Figure 5. Three-dimensional landscapes of CEC2017 functions F15, F28, and F29.
Figure 6. Convergence curves of ALDPGA and the comparative algorithms on typical benchmark functions in CEC2017 (100-D).
Figure 7. Stability performance of ALDPGA and the comparative algorithms on typical benchmark functions in CEC2017 (100-D).
Table 1. Specification of Competing Algorithms.
Algorithm | Year | Description of Algorithms
GA | 1975 | GA simulates selection, crossover, and mutation, iteratively approaching optimal solutions with strong robustness and comprehensive global search.
PSO | 1995 | PSO simulates swarm cooperation, updating positions using personal and global bests, offering strong global search and fast convergence.
DE | 1997 | DE updates individuals via vector-based mutation and crossover, featuring simple structure, few parameters, and strong global search ability.
GWO | 2014 | GWO models wolf hierarchy and hunting behavior, where α, β, and δ wolves guide the population to balance exploration and exploitation.
VO | 2021 | VO models vultures' foraging and circling behaviors, using an energy factor to switch between global exploration and local exploitation.
TOC | 2022 | TOC simulates tornado updraft and spiral flows, enabling alternating global and local search through upward and rotating particle movements.
SAO | 2023 | SAO models snow-melting evaporation and aggregation, providing precise convergence and strong ability to escape local optima.
RIME | 2023 | RIME models ice-crystal formation and frost deposition, using freezing and redistribution to maintain diversity and enhance local optimization.
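To illustrate the kind of update the baselines in Table 1 perform, the sketch below implements one generation of the classic DE/rand/1/bin operator on a simple sphere objective. The control parameters (F = 0.5, CR = 0.9), the population size, and the test objective are common textbook choices, not necessarily the settings used in the experiments reported here.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    """Simple test objective used only for this illustration."""
    return float(np.sum(x ** 2))

def de_rand_1_bin(pop, fit, obj, F=0.5, CR=0.9, bounds=(-100.0, 100.0)):
    """One generation of DE/rand/1/bin: mutation, binomial crossover, greedy selection."""
    n, dim = pop.shape
    lo, hi = bounds
    new_pop, new_fit = pop.copy(), fit.copy()
    for i in range(n):
        # Mutation: scaled difference of two random individuals added to a third one.
        r1, r2, r3 = rng.choice([k for k in range(n) if k != i], size=3, replace=False)
        mutant = np.clip(pop[r1] + F * (pop[r2] - pop[r3]), lo, hi)
        # Binomial crossover: inherit each gene from the mutant with probability CR.
        mask = rng.random(dim) < CR
        mask[rng.integers(dim)] = True            # guarantee at least one mutant gene
        trial = np.where(mask, mutant, pop[i])
        # Greedy selection: keep the trial only if it improves on the parent.
        f_trial = obj(trial)
        if f_trial < fit[i]:
            new_pop[i], new_fit[i] = trial, f_trial
    return new_pop, new_fit

dim, n = 10, 30
pop = rng.uniform(-100, 100, size=(n, dim))
fit = np.array([sphere(x) for x in pop])
for _ in range(200):
    pop, fit = de_rand_1_bin(pop, fit, sphere)
print("best sphere value after 200 generations:", fit.min())
```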
Table 2. CEC2017, CEC2020, and CEC2022 Test Suites.
Test Suite | Function Categories | Dim/Iter/Runs
CEC2017 | Unimodal (f1–f3), Multimodal (f4–f10), Hybrid (f11–f20), Composition (f21–f30) | 10/30/50/100-D, 3000 Iter, 30 Runs
CEC2020 | Unimodal (f1), Basic (f2–f4), Hybrid (f5–f7), Composition (f8–f10) | 10/30/50/100-D, 3000 Iter, 30 Runs
CEC2022 | Unimodal (f1), Basic (f2–f5), Hybrid (f6–f8), Composition (f9–f12) | 10/20-D, 3000 Iter, 30 Runs
Table 3. Running results of 10 algorithms in the CEC2017 (100-D).
Function | Metric | GWO | TOC | PSO | GA | DE | RIME | SAO | VO | PGA | ALDPGA
F 1 Mean5.20 × 10104.52 × 10104.04 × 10109.20 × 1061.86 × 1091.29 × 1073.20 × 1082.02 × 1069.83 × 1047.35 × 103
Std9.97 × 1091.58 × 10101.62 × 10104.58 × 1061.99 × 1095.19 × 1067.94 × 1085.14 × 1054.06 × 1059.01 × 103
F 3 Mean2.78 × 1054.30 × 1054.90 × 1057.64 × 1055.92 × 1053.73 × 1057.88 × 1053.57 × 1053.05 × 1055.98 × 102
Std2.13 × 1042.53 × 1059.91 × 1041.13 × 1056.73 × 1045.97 × 1041.53 × 1055.12 × 1043.20 × 1047.47 × 102
F 4 Mean5.16 × 1038.02 × 1035.57 × 1038.76 × 1028.84 × 1027.90 × 1027.20 × 1028.02 × 1027.09 × 1025.66 × 102
Std1.27 × 1033.20 × 1032.73 × 1039.50 × 1018.21 × 1016.78 × 1014.12 × 1016.45 × 1014.94 × 1018.03 × 101
F 5 Mean1.20 × 1031.85 × 1031.06 × 1031.90 × 1038.58 × 1021.01 × 1031.18 × 1031.65 × 1031.14 × 1037.99 × 102
Std1.12 × 1021.33 × 1029.30 × 1011.59 × 1026.71 × 1018.05 × 1012.81 × 1021.42 × 1021.57 × 1024.85 × 101
F 6 Mean6.40 × 1026.94 × 1026.32 × 1026.17 × 1026.12 × 1026.30 × 1026.19 × 1026.82 × 1026.37 × 1026.16 × 102
Std5.58 × 1006.67 × 1006.70 × 1005.42 × 1002.74 × 1005.66 × 1006.06 × 1008.01 × 1008.91 × 1005.18 × 100
F 7 Mean2.04 × 1033.63 × 1031.70 × 1032.28 × 1031.65 × 1031.53 × 1032.09 × 1033.62 × 1031.84 × 1031.32 × 103
Std1.32 × 1023.49 × 1022.76 × 1021.84 × 1023.03 × 1021.16 × 1021.04 × 1024.48 × 1021.97 × 1021.21 × 102
F 8 Mean1.48 × 1032.28 × 1031.36 × 1032.20 × 1031.15 × 1031.31 × 1031.43 × 1032.03 × 1031.41 × 1031.12 × 103
Std6.85 × 1019.68 × 1017.58 × 1011.12 × 1024.37 × 1018.29 × 1012.01 × 1021.02 × 1021.20 × 1027.34 × 101
F 9 Mean3.41 × 1045.72 × 1045.32 × 1045.01 × 1041.83 × 1042.52 × 1042.37 × 1046.76 × 1041.64 × 1045.99 × 103
Std1.22 × 1049.29 × 1032.14 × 1046.91 × 1035.16 × 1038.78 × 1031.07 × 1041.68 × 1045.94 × 1032.75 × 103
F 10 Mean1.70 × 1042.76 × 1041.56 × 1041.53 × 1043.19 × 1041.61 × 1041.86 × 1042.17 × 1041.58 × 1041.47 × 104
Std5.13 × 1031.68 × 1031.67 × 1031.52 × 1035.24 × 1021.27 × 1036.08 × 1032.31 × 1031.47 × 1031.18 × 103
F 11 Mean6.51 × 1046.47 × 1041.48 × 1041.17 × 1051.06 × 1044.23 × 1031.45 × 1055.19 × 1035.97 × 1032.21 × 103
Std1.39 × 1044.47 × 1041.26 × 1042.98 × 1044.12 × 1036.04 × 1025.63 × 1047.30 × 1022.85 × 1032.52 × 102
F 12 Mean9.24 × 1098.15 × 1091.46 × 10106.97 × 1072.46 × 1084.38 × 1083.99 × 1073.61 × 1081.94 × 1074.69 × 105
Std4.88 × 1095.85 × 1091.20 × 10102.08 × 1076.47 × 1081.81 × 1082.58 × 1079.80 × 1071.07 × 1071.78 × 105
F 13 Mean1.32 × 1094.01 × 1082.07 × 1099.51 × 1044.84 × 1052.26 × 1051.26 × 1046.53 × 1041.51 × 1041.76 × 104
Std1.51 × 1092.36 × 1082.67 × 1097.27 × 1042.39 × 1063.87 × 1058.96 × 1031.55 × 1041.21 × 1041.14 × 104
F 14 Mean7.09 × 1061.34 × 1072.41 × 1068.34 × 1064.30 × 1052.58 × 1067.19 × 1052.54 × 1061.27 × 1069.26 × 104
Std3.53 × 1062.09 × 1072.22 × 1063.40 × 1062.51 × 1051.47 × 1063.99 × 1052.96 × 1057.03 × 1054.75 × 104
F 15 Mean1.56 × 1087.01 × 1075.04 × 1085.82 × 1046.53 × 1041.68 × 1054.18 × 1035.32 × 1047.12 × 1039.64 × 103
Std2.38 × 1086.96 × 1078.00 × 1087.06 × 1043.21 × 1054.42 × 1053.27 × 1031.65 × 1045.78 × 1038.61 × 103
F 16 Mean6.16 × 1031.17 × 1046.33 × 1036.61 × 1035.19 × 1036.58 × 1035.67 × 1038.43 × 1035.90 × 1034.90 × 103
Std5.80 × 1021.85 × 1037.44 × 1026.69 × 1021.82 × 1036.99 × 1021.07 × 1039.90 × 1029.47 × 1025.48 × 102
F 17 Mean5.23 × 1031.08 × 1046.16 × 1036.31 × 1035.02 × 1035.72 × 1034.92 × 1036.66 × 1035.15 × 1034.51 × 103
Std8.97 × 1027.36 × 1031.09 × 1037.16 × 1021.20 × 1035.12 × 1028.18 × 1025.74 × 1026.32 × 1024.64 × 102
F 18 Mean6.18 × 1061.31 × 1077.11 × 1068.06 × 1061.14 × 1065.02 × 1063.04 × 1063.32 × 1062.43 × 1061.71 × 105
Std5.80 × 1061.10 × 1074.56 × 1063.35 × 1065.21 × 1052.59 × 1063.15 × 1068.60 × 1051.08 × 1067.66 × 104
F 19 Mean2.65 × 1081.93 × 1083.15 × 1084.89 × 1042.51 × 1051.64 × 1066.42 × 1031.22 × 1071.01 × 1041.24 × 104
Std3.36 × 1082.31 × 1084.92 × 1082.24 × 1048.52 × 1059.36 × 1053.70 × 1036.44 × 1061.01 × 1041.37 × 104
F 20 Mean4.80 × 1036.18 × 1035.24 × 1035.86 × 1036.60 × 1035.32 × 1034.98 × 1036.06 × 1034.90 × 1034.48 × 103
Std8.25 × 1025.46 × 1028.52 × 1026.80 × 1029.46 × 1024.24 × 1021.04 × 1036.37 × 1025.10 × 1024.74 × 102
F 21 Mean3.00 × 1034.12 × 1033.14 × 1034.01 × 1032.69 × 1032.86 × 1032.85 × 1033.73 × 1032.96 × 1032.63 × 103
Std1.22 × 1021.83 × 1021.20 × 1021.36 × 1025.88 × 1017.79 × 1011.46 × 1022.77 × 1021.12 × 1026.43 × 101
F 22 Mean1.96 × 1042.99 × 1041.84 × 1041.80 × 1043.38 × 1041.83 × 1041.94 × 1042.47 × 1041.82 × 1041.72 × 104
Std3.02 × 1032.04 × 1031.49 × 1031.34 × 1035.32 × 1021.38 × 1035.47 × 1031.80 × 1031.46 × 1031.21 × 103
F 23 Mean3.65 × 1035.34 × 1034.73 × 1034.04 × 1033.29 × 1033.38 × 1033.30 × 1035.61 × 1033.39 × 1033.14 × 103
Std1.39 × 1023.24 × 1023.33 × 1021.38 × 1029.09 × 1018.05 × 1016.27 × 1015.32 × 1029.65 × 1014.17 × 101
F 24 Mean4.27 × 1036.78 × 1036.00 × 1035.63 × 1033.88 × 1033.88 × 1033.72 × 1036.91 × 1033.90 × 1033.62 × 103
Std1.22 × 1026.13 × 1025.07 × 1023.13 × 1021.76 × 1021.20 × 1029.70 × 1015.51 × 1021.38 × 1025.43 × 101
F 25 Mean7.02 × 1038.34 × 1034.88 × 1033.49 × 1033.57 × 1033.50 × 1033.45 × 1033.49 × 1033.39 × 1033.26 × 103
Std1.16 × 1031.74 × 1031.08 × 1034.27 × 1018.86 × 1017.44 × 1017.51 × 1015.25 × 1014.97 × 1017.36 × 101
F 26 Mean1.54 × 1043.47 × 1041.88 × 1042.53 × 1041.19 × 1041.22 × 1041.02 × 1043.41 × 1041.26 × 1049.62 × 103
Std1.12 × 1035.05 × 1033.52 × 1032.31 × 1031.25 × 1038.88 × 1025.59 × 1024.44 × 1031.22 × 1038.18 × 102
F 27 Mean4.12 × 1034.87 × 1034.24 × 1034.29 × 1033.60 × 1033.72 × 1033.47 × 1036.27 × 1033.57 × 1033.64 × 103
Std1.23 × 1024.72 × 1024.14 × 1022.76 × 1027.17 × 1017.01 × 1015.79 × 1016.87 × 1028.66 × 1011.10 × 102
F 28 Mean8.87 × 1039.82 × 1039.00 × 1033.53 × 1033.82 × 1033.58 × 1033.48 × 1033.53 × 1033.56 × 1034.56 × 103
Std1.13 × 1032.27 × 1033.12 × 1033.57 × 1012.74 × 1024.15 × 1014.86 × 1013.96 × 1012.75 × 1023.69 × 103
F 29 Mean8.50 × 1031.59 × 1047.49 × 1038.21 × 1035.83 × 1038.19 × 1036.50 × 1031.18 × 1046.82 × 1036.08 × 103
Std8.69 × 1022.88 × 1037.24 × 1025.58 × 1024.81 × 1025.48 × 1025.13 × 1021.65 × 1035.23 × 1025.33 × 102
F 30 Mean1.67 × 1096.60 × 1081.28 × 1092.34 × 1054.11 × 1053.06 × 1074.61 × 1044.79 × 1075.02 × 1041.02 × 104
Std1.66 × 1094.70 × 1081.32 × 1091.18 × 1051.51 × 1061.36 × 1077.23 × 1041.58 × 1073.19 × 1043.62 × 103
Rank First | 0 | 0 | 0 | 0 | 2 | 0 | 5 | 0 | 0 | 22
The bold text indicates the average value obtained by the best algorithm in the current function.
Table 4. Running results of 10 algorithms in the CEC2020 (50-D).
Function | Metric | GWO | TOC | PSO | GA | DE | RIME | SAO | VO | PGA | ALDPGA
F 1 Mean9.88 × 1094.30 × 1097.11 × 1098.44 × 1052.96 × 1085.13 × 1054.95 × 1031.26 × 1059.33 × 1036.06 × 103
Std3.91 × 1091.93 × 1094.98 × 1093.14 × 1054.99 × 1081.32 × 1057.05 × 1036.14 × 1049.94 × 1036.85 × 103
F 2 Mean8.07 × 1031.24 × 1047.01 × 1037.84 × 1031.45 × 1047.26 × 1036.91 × 1039.98 × 1038.03 × 1037.44 × 103
Std2.34 × 1031.24 × 1037.86 × 1027.78 × 1021.77 × 1039.97 × 1021.11 × 1039.01 × 1021.06 × 1037.74 × 102
F 3 Mean1.09 × 1031.73 × 1039.58 × 1021.17 × 1039.69 × 1029.50 × 1021.17 × 1031.76 × 1031.03 × 1038.65 × 102
Std6.93 × 1011.62 × 1026.93 × 1018.11 × 1011.12 × 1024.94 × 1015.16 × 1012.05 × 1026.63 × 1013.70 × 101
F 4 Mean1.90 × 1031.91 × 1031.91 × 1031.90 × 1032.17 × 1031.92 × 1031.94 × 1031.92 × 1031.92 × 1031.91 × 103
Std5.97 × 10−142.12 × 1012.00 × 1006.19 × 10−11.35 × 1034.79 × 1003.49 × 1007.58 × 1005.94 × 1001.90 × 100
F 5 Mean8.90 × 1061.18 × 1074.87 × 1066.48 × 1063.61 × 1052.19 × 1064.57 × 1057.71 × 1059.45 × 1055.21 × 104
Std8.30 × 1069.49 × 1064.96 × 1063.02 × 1062.00 × 1059.86 × 1052.17 × 1053.73 × 1055.13 × 1052.93 × 104
F 6 Mean2.85 × 1035.09 × 1032.79 × 1033.14 × 1032.97 × 1032.81 × 1032.58 × 1034.54 × 1033.24 × 1032.38 × 103
Std4.86 × 1027.09 × 1024.14 × 1023.95 × 1029.88 × 1022.59 × 1023.67 × 1025.93 × 1024.58 × 1023.28 × 102
F 7 Mean4.81 × 1064.65 × 1062.01 × 1067.94 × 1062.33 × 1051.23 × 1062.62 × 1054.76 × 1056.45 × 1054.11 × 104
Std4.25 × 1068.58 × 1063.26 × 1064.87 × 1061.34 × 1056.16 × 1051.54 × 1051.67 × 1054.05 × 1054.32 × 104
F 8 Mean9.14 × 1031.42 × 1049.00 × 1039.78 × 1031.55 × 1049.00 × 1038.31 × 1031.16 × 1049.58 × 1039.02 × 103
Std9.42 × 1021.20 × 1039.17 × 1026.39 × 1021.48 × 1037.28 × 1021.15 × 1031.04 × 1031.07 × 1039.50 × 102
F 9 Mean3.17 × 1033.89 × 1033.52 × 1034.03 × 1033.03 × 1033.12 × 1033.05 × 1034.01 × 1033.06 × 1033.00 × 103
Std9.39 × 1012.23 × 1021.76 × 1021.79 × 1027.71 × 1015.50 × 1013.30 × 1012.80 × 1023.47 × 1012.15 × 101
F 10 Mean3.75 × 1033.83 × 1033.33 × 1033.09 × 1033.07 × 1033.07 × 1033.02 × 1033.09 × 1033.05 × 1033.03 × 103
Std3.28 × 1023.30 × 1025.31 × 1022.80 × 1013.52 × 1012.76 × 1013.66 × 1012.51 × 1013.96 × 1013.84 × 101
Rank First | 1 | 0 | 0 | 0 | 0 | 0 | 4 | 0 | 0 | 5
The bold text indicates the average value obtained by the best algorithm in the current function.
Table 5. Running results of 10 algorithms in the CEC2020 (100-D).
Function | Metric | GWO | TOC | PSO | GA | DE | RIME | SAO | VO | PGA | ALDPGA
F 1 Mean5.20 × 10104.52 × 10104.04 × 10109.20 × 1061.86 × 1091.29 × 1073.20 × 1082.02 × 1069.83 × 1047.35 × 103
Std9.97 × 1091.58 × 10101.62 × 10104.58 × 1061.99 × 1095.19 × 1067.94 × 1085.14 × 1054.06 × 1059.01 × 103
F 2 Mean1.71 × 1042.77 × 1041.55 × 1041.57 × 1043.21 × 1041.64 × 1041.96 × 1042.22 × 1041.63 × 1041.53 × 104
Std4.17 × 1032.49 × 1031.55 × 1031.06 × 1036.31 × 1021.52 × 1036.54 × 1032.20 × 1031.45 × 1031.32 × 103
F 3 Mean2.04 × 1033.63 × 1031.70 × 1032.28 × 1031.65 × 1031.53 × 1032.09 × 1033.62 × 1031.84 × 1031.32 × 103
Std1.32 × 1023.49 × 1022.76 × 1021.84 × 1023.03 × 1021.16 × 1021.04 × 1024.48 × 1021.97 × 1021.21 × 102
F 4 Mean1.90 × 1031.92 × 1032.00 × 1041.91 × 1032.37 × 1031.99 × 1032.53 × 1031.94 × 1031.95 × 1031.91 × 103
Std1.58 × 10−133.71 × 1015.25 × 1041.16 × 1001.14 × 1031.27 × 1013.53 × 1022.30 × 1011.27 × 1015.71 × 100
F 5 Mean5.07 × 1079.11 × 1072.93 × 1074.25 × 1073.17 × 1061.05 × 1075.11 × 1061.05 × 1075.02 × 1062.94 × 105
Std2.72 × 1076.28 × 1072.67 × 1071.37 × 1071.24 × 1062.55 × 1062.23 × 1062.64 × 1062.43 × 1061.53 × 105
F 6 Mean5.32 × 1031.42 × 1045.69 × 1035.98 × 1034.82 × 1035.99 × 1035.04 × 1039.07 × 1035.97 × 1034.72 × 103
Std6.69 × 1022.89 × 1037.88 × 1027.99 × 1021.94 × 1037.04 × 1021.56 × 1031.56 × 1037.33 × 1026.03 × 102
F 7 Mean1.96 × 1072.58 × 1071.04 × 1071.63 × 1072.05 × 1067.38 × 1062.05 × 1066.02 × 1062.84 × 1069.19 × 104
Std1.03 × 1071.11 × 1076.68 × 1064.51 × 1066.68 × 1052.78 × 1068.38 × 1051.88 × 1061.41 × 1065.11 × 104
F 8 Mean1.96 × 1042.99 × 1041.84 × 1041.80 × 1043.38 × 1041.83 × 1041.94 × 1042.47 × 1041.82 × 1041.72 × 104
Std3.02 × 1032.04 × 1031.49 × 1031.34 × 1035.32 × 1021.38 × 1035.47 × 1031.80 × 1031.46 × 1031.21 × 103
F 9 Mean4.27 × 1036.78 × 1036.00 × 1035.63 × 1033.88 × 1033.88 × 1033.72 × 1036.91 × 1033.90 × 1033.62 × 103
Std1.22 × 1026.13 × 1025.07 × 1023.13 × 1021.76 × 1021.20 × 1029.70 × 1015.51 × 1021.38 × 1025.43 × 101
F 10 Mean7.02 × 1038.34 × 1034.88 × 1033.49 × 1033.57 × 1033.50 × 1033.45 × 1033.49 × 1033.39 × 1033.26 × 103
Std1.16 × 1031.74 × 1031.08 × 1034.27 × 1018.86 × 1017.44 × 1017.51 × 1015.25 × 1014.97 × 1017.36 × 101
Rank First | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9
The bold text indicates the average value obtained by the best algorithm in the current function.
Table 6. Running results of 10 algorithms in the CEC2022 (20-D).
Function | Metric | GWO | TOC | PSO | GA | DE | RIME | SAO | VO | PGA | ALDPGA
F 1 Mean9.85 × 1039.70 × 1031.64 × 1035.76 × 1044.90 × 1023.00 × 1021.37 × 1043.00 × 1025.14 × 1023.00 × 102
Std4.96 × 1031.53 × 1043.58 × 1032.01 × 1047.53 × 1022.26 × 10−19.39 × 1034.06 × 10−26.44 × 1021.02 × 10−9
F 2 Mean4.99 × 1025.06 × 1024.66 × 1024.55 × 1024.47 × 1024.56 × 1024.47 × 1024.57 × 1024.49 × 1024.47 × 102
Std4.35 × 1016.93 × 1012.40 × 1011.14 × 1011.78 × 1011.82 × 1011.45 × 1011.11 × 1011.56 × 1018.97 × 100
F 3 Mean6.04 × 1026.38 × 1026.02 × 1026.02 × 1026.00 × 1026.00 × 1026.00 × 1026.50 × 1026.00 × 1026.00 × 102
Std3.12 × 1001.23 × 1013.21 × 1002.69 × 1001.15 × 10−13.51 × 10−12.42 × 10−11.50 × 1017.26 × 10−17.44 × 10−2
F 4 Mean8.50 × 1028.87 × 1028.41 × 1029.24 × 1028.25 × 1028.57 × 1028.35 × 1028.97 × 1028.49 × 1028.32 × 102
Std1.84 × 1012.48 × 1011.16 × 1012.27 × 1011.47 × 1011.96 × 1011.15 × 1012.27 × 1011.80 × 1011.39 × 101
F 5 Mean1.13 × 1031.40 × 1039.39 × 1024.08 × 1039.01 × 1029.61 × 1029.01 × 1022.25 × 1039.09 × 1029.00 × 102
Std1.95 × 1023.06 × 1028.57 × 1019.69 × 1021.38 × 1001.70 × 1021.28 × 1006.59 × 1021.30 × 1016.75 × 10−1
F 6 Mean3.35 × 1067.46 × 1041.56 × 1051.23 × 1048.67 × 1038.84 × 1034.62 × 1035.78 × 1038.83 × 1037.70 × 103
Std8.16 × 1062.71 × 1054.52 × 1058.31 × 1036.98 × 1036.23 × 1033.85 × 1034.90 × 1037.10 × 1036.55 × 103
F 7 Mean2.08 × 1032.14 × 1032.05 × 1032.14 × 1032.03 × 1032.07 × 1032.06 × 1032.13 × 1032.07 × 1032.04 × 103
Std4.07 × 1015.72 × 1013.19 × 1019.66 × 1011.13 × 1015.22 × 1013.59 × 1013.97 × 1013.22 × 1011.50 × 101
F 8 Mean2.26 × 1032.32 × 1032.28 × 1032.38 × 1032.23 × 1032.24 × 1032.25 × 1032.34 × 1032.23 × 1032.22 × 103
Std5.19 × 1018.71 × 1017.27 × 1011.38 × 1022.22 × 1014.79 × 1014.91 × 1019.65 × 1017.93 × 1003.09 × 100
F 9 Mean2.52 × 1032.51 × 1032.53 × 1032.50 × 1032.48 × 1032.48 × 1032.48 × 1032.48 × 1032.48 × 1032.48 × 103
Std2.54 × 1013.80 × 1015.74 × 1015.84 × 1001.70 × 1003.65 × 10−28.12 × 10−126.09 × 10−25.89 × 10−28.44 × 10−7
F 10 Mean3.39 × 1034.99 × 1032.88 × 1033.29 × 1032.81 × 1032.64 × 1033.26 × 1035.16 × 1032.91 × 1032.55 × 103
Std5.77 × 1021.26 × 1033.46 × 1022.72 × 1022.97 × 1021.43 × 1027.64 × 1021.09 × 1036.87 × 1021.67 × 102
F 11 Mean3.46 × 1033.42 × 1033.53 × 1032.99 × 1032.95 × 1032.88 × 1032.95 × 1033.23 × 1032.93 × 1032.90 × 103
Std3.30 × 1027.44 × 1025.80 × 1024.60 × 1021.79 × 1021.36 × 1021.85 × 1021.35 × 1034.50 × 1016.15 × 101
F 12 Mean2.96 × 1033.04 × 1033.00 × 1033.15 × 1032.95 × 1032.96 × 1032.95 × 1033.20 × 1032.95 × 1032.94 × 103
Std1.88 × 1016.47 × 1013.56 × 1011.26 × 1021.66 × 1011.89 × 1011.15 × 1011.51 × 1028.14 × 1009.28 × 100
Rank First | 0 | 0 | 0 | 0 | 2 | 1 | 3 | 0 | 0 | 6
The bold text indicates the average value obtained by the best algorithm in the current function.
Table 7. Performance comparison of PGA and ALDPGA in the CEC2017 (100-D).
Function | PGA (Best / Mean / Std) | ALDPGA (Best / Mean / Std)
F 1 1.40 × 103 ( )9.83 × 104 ( )4.06 × 105 ( )1.00 × 102 ( + + )7.35 × 103 ( + + )9.01 × 103 ( + + )
F 3 2.62 × 105 ( )3.05 × 105 ( )3.20 × 104 ( )3.00 × 102 ( + + )5.98 × 102 ( + + )7.47 × 102 ( + + )
F 4 6.11 × 102 (−)7.09 × 102 (−)4.94 × 101 (+)4.00 × 102 (+)5.66 × 102 (+)8.03 × 101 (−)
F 5 8.72 × 102 (−)1.14 × 103 (−)1.57 × 102 (−)7.04 × 102 (+)7.99 × 102 (+)4.85 × 101 (+)
F 6 6.24 × 102 (−)6.37 × 102 (−)8.91 × 100 (−)6.06 × 102 (+)6.16 × 102 (+)5.18 × 100 (+)
F 7 1.50 × 103 (−)1.84 × 103 (−)1.97 × 102 (−)1.11 × 103 (+)1.32 × 103 (+)1.21 × 102 (+)
F 8 1.23 × 103 (−)1.41 × 103 (−)1.20 × 102 (−)1.00 × 103 (+)1.12 × 103 (+)7.34 × 101 (+)
F 9 5.83 × 103 (−)1.64 × 104 (−)5.94 × 103 (−)3.08 × 103 (+)5.99 × 103 (+)2.75 × 103 (+)
F 10 1.30 × 104 (−)1.58 × 104 (−)1.47 × 103 (−)1.20 × 104 (+)1.47 × 104 (+)1.18 × 103 (+)
F 11 2.55 × 103 (−)5.97 × 103 (−)2.85 × 103 ( )1.64 × 103 (+)2.21 × 103 (−)2.52 × 102 ( + + )
F 12 4.25 × 106 ( )1.94 × 107 ( )1.07 × 107 ( )1.33 × 105 ( + + )4.69 × 105 ( + + )1.78 × 105 ( + + )
F 13 3.33 × 103 (+)1.51 × 104 (+)1.21 × 104 (−)5.07 × 103 (−)1.76 × 104 (−)1.14 × 104 (+)
F 14 2.13 × 105 ( )1.27 × 106 ( )7.03 × 105 ( )1.70 × 104 ( + + )9.26 × 104 ( + + )4.75 × 104 ( + + )
F 15 2.14 × 103 (+)7.12 × 103 (+)5.78 × 103 (+)2.15 × 103 (−)9.64 × 103 (−)8.61 × 103 (−)
F 16 3.93 × 103 (−)5.90 × 103 (−)9.47 × 102 (−)3.69 × 103 (+)4.90 × 103 (+)5.48 × 102 (+)
F 17 3.99 × 103 (−)5.15 × 103 (−)6.32 × 102 (−)3.49 × 103 (+)4.51 × 103 (+)4.64 × 102 (+)
F 18 6.51 × 105 ( )2.43 × 106 ( )1.08 × 106 ( )5.68 × 104 ( + + )1.71 × 105 ( + + )7.66 × 104 ( + + )
F 19 2.22 × 103 (−)1.01 × 104 (+)1.01 × 104 (+)2.21 × 103 (+)1.24 × 104 (+)1.37 × 104 (−)
F 20 3.73 × 103 (−)4.90 × 103 (−)5.10 × 102 (−)3.44 × 103 (+)4.48 × 103 (+)4.74 × 102 (+)
F 21 2.75 × 103 (−)2.96 × 103 (−)1.12 × 102 (−)2.51 × 103 (+)2.63 × 103 (+)6.43 × 101 (+)
F 22 1.47 × 104 (+)1.82 × 104 (−)1.46 × 103 (−)1.51 × 104 (−)1.72 × 104 (+)1.21 × 103 (+)
F 23 3.17 × 103 (−)3.39 × 103 (−)9.65 × 101 (−)3.06 × 103 (+)3.14 × 103 (+)4.17 × 101 (+)
F 24 3.68 × 103 (−)3.90 × 103 (−)1.38 × 102 (−)3.54 × 103 (+)3.62 × 103 (+)5.43 × 101 (+)
F 25 3.32 × 103 (−)3.39 × 103 (−)4.97 × 101 (+)3.13 × 103 (+)3.26 × 103 (+)7.36 × 101 (−)
F 26 1.04 × 104 (−)1.26 × 104 (−)1.22 × 103 (−)8.08 × 103 (+)9.62 × 103 (+)8.18 × 102 (+)
F 27 3.45 × 103 (−)3.57 × 103 (+)8.66 × 101 (+)3.42 × 103 (+)3.64 × 103 (−)1.10 × 102 (−)
F 28 3.40 × 103 (−)3.56 × 103 (+)2.75 × 102 ( + + )3.10 × 103 (+)4.56 × 103 (−)3.69 × 103 ( )
F 29 5.93 × 103 (−)6.82 × 103 (−)5.23 × 102 (+)5.03 × 103 (+)6.08 × 103 (+)5.33 × 102 (−)
F 30 9.67 × 103 (−)5.02 × 104 (−)3.19 × 104 (−)6.74 × 103 (+)1.02 × 104 (+)3.62 × 103 (+)
Count of (−−)/(−)/(+)/(++) results — PGA: Best 5/21/3/0, Mean 5/19/5/0, Std 6/16/6/1; ALDPGA: Best 0/3/21/5, Mean 0/5/19/5, Std 1/6/16/6.
Table 8. Wilcoxon signed-rank test results between ALDPGA and comparative algorithms.
Comparison | R+ | R− | p_raw | p_Holm | p_FDR | Sig_raw | Sig_Holm | Sig_FDR
ALDPGA vs. GWO | 435 | 0 | 2.56 × 10⁻⁶ | 2.31 × 10⁻⁵ | 7.69 × 10⁻⁶ | true | true | true
ALDPGA vs. TOC | 435 | 0 | 2.56 × 10⁻⁶ | 2.31 × 10⁻⁵ | 7.69 × 10⁻⁶ | true | true | true
ALDPGA vs. PSO | 435 | 0 | 2.56 × 10⁻⁶ | 2.31 × 10⁻⁵ | 7.69 × 10⁻⁶ | true | true | true
ALDPGA vs. GA | 426 | 9 | 6.53 × 10⁻⁶ | 3.27 × 10⁻⁵ | 1.18 × 10⁻⁵ | true | true | true
ALDPGA vs. DE | 410 | 25 | 3.15 × 10⁻⁵ | 9.44 × 10⁻⁵ | 4.05 × 10⁻⁵ | true | true | true
ALDPGA vs. RIME | 423 | 12 | 8.85 × 10⁻⁶ | 3.54 × 10⁻⁵ | 1.33 × 10⁻⁵ | true | true | true
ALDPGA vs. SAO | 354 | 81 | 3.16 × 10⁻³ | 4.12 × 10⁻³ | 3.16 × 10⁻³ | true | true | true
ALDPGA vs. VO | 429 | 6 | 4.80 × 10⁻⁶ | 2.88 × 10⁻⁵ | 1.08 × 10⁻⁵ | true | true | true
ALDPGA vs. PGA | 360 | 75 | 2.06 × 10⁻³ | 4.12 × 10⁻³ | 2.32 × 10⁻³ | true | true | true
R+ and R− denote the sums of positive and negative ranks, respectively. p_Holm and p_FDR are adjusted p-values obtained with the Holm–Bonferroni and FDR corrections. Sig_raw, Sig_Holm, and Sig_FDR are Boolean flags indicating whether the corresponding result is significant (p < 0.05).
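The significance flags in Table 8 can be reproduced with standard statistical tooling. The sketch below pairs SciPy's Wilcoxon signed-rank test with Holm–Bonferroni and Benjamini–Hochberg (FDR) adjustments from statsmodels; the arrays are random placeholders standing in for per-function mean errors, not the paper's data.

```python
import numpy as np
from scipy.stats import wilcoxon
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(1)

# Placeholder paired samples: mean errors of ALDPGA and of 9 competitors
# on 29 benchmark functions (random numbers, for illustration only).
aldpga = rng.random(29)
competitors = {name: aldpga + rng.normal(0.2, 0.3, 29)
               for name in ["GWO", "TOC", "PSO", "GA", "DE", "RIME", "SAO", "VO", "PGA"]}

raw_p = {}
for name, other in competitors.items():
    # Two-sided Wilcoxon signed-rank test on the paired differences.
    _, p = wilcoxon(aldpga, other)
    raw_p[name] = p

names = list(raw_p)
pvals = np.array([raw_p[n] for n in names])

# Holm–Bonferroni and Benjamini–Hochberg (FDR) adjustments across the 9 comparisons.
sig_holm, p_holm, _, _ = multipletests(pvals, alpha=0.05, method="holm")
sig_fdr, p_fdr, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")

for i, n in enumerate(names):
    print(f"ALDPGA vs. {n:5s}  p_raw={pvals[i]:.2e}  "
          f"p_Holm={p_holm[i]:.2e} ({sig_holm[i]})  p_FDR={p_fdr[i]:.2e} ({sig_fdr[i]})")
```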
Table 9. Friedman test results and mean ranks of the compared algorithms.
Algorithm | Mean Rank
ALDPGA | 1.5862
PGA | 3.5517
SAO | 3.8276
DE | 4.5862
RIME | 4.9655
GA | 6.3793
GWO | 7.0000
PSO | 7.0000
VO | 7.0000
TOC | 9.1034
The Friedman test yields a p-value of 3.016 × 10⁻²⁵ (α = 0.05), indicating statistically significant differences among the compared algorithms. Lower mean ranks correspond to better overall performance.
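The following is a minimal sketch of how the Friedman statistic and the mean ranks in Table 9 can be computed, again on placeholder data rather than the measurements reported above.

```python
import numpy as np
from scipy.stats import friedmanchisquare, rankdata

rng = np.random.default_rng(2)
algorithms = ["ALDPGA", "PGA", "SAO", "DE", "RIME", "GA", "GWO", "PSO", "VO", "TOC"]

# Placeholder result matrix: rows = 29 benchmark functions, columns = algorithms.
results = rng.random((29, len(algorithms)))

# Friedman test over the algorithms (each column is one treatment).
stat, p_value = friedmanchisquare(*[results[:, j] for j in range(results.shape[1])])
print(f"Friedman chi-square = {stat:.3f}, p = {p_value:.3e}")

# Mean rank of each algorithm across functions (rank 1 = best, i.e., lowest error).
ranks = np.apply_along_axis(rankdata, 1, results)
for name, r in sorted(zip(algorithms, ranks.mean(axis=0)), key=lambda t: t[1]):
    print(f"{name:7s} mean rank = {r:.4f}")
```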
Table 10. Post-hoc comparison results between ALDPGA and other algorithms based on Friedman mean ranks.
Alg_i | Alg_j | LowerCI | Diff | UpperCI | p_adj
ALDPGA | TOC | −1.01 × 10¹ | −7.52 × 10⁰ | −4.92 × 10⁰ | 1.46 × 10⁻¹⁹
ALDPGA | GWO | −8.01 × 10⁰ | −5.41 × 10⁰ | −2.82 × 10⁰ | 4.42 × 10⁻¹⁰
ALDPGA | PSO | −8.01 × 10⁰ | −5.41 × 10⁰ | −2.82 × 10⁰ | 4.42 × 10⁻¹⁰
ALDPGA | VO | −8.01 × 10⁰ | −5.41 × 10⁰ | −2.82 × 10⁰ | 4.42 × 10⁻¹⁰
ALDPGA | GA | −7.39 × 10⁰ | −4.79 × 10⁰ | −2.20 × 10⁰ | 7.46 × 10⁻⁸
ALDPGA | RIME | −5.97 × 10⁰ | −3.38 × 10⁰ | −7.87 × 10⁻¹ | 9.61 × 10⁻⁴
ALDPGA | DE | −5.59 × 10⁰ | −3.00 × 10⁰ | −4.07 × 10⁻¹ | 7.26 × 10⁻³
ALDPGA | SAO | −4.83 × 10⁰ | −2.24 × 10⁰ | 3.51 × 10⁻¹ | 2.17 × 10⁻¹
ALDPGA | PGA | −4.56 × 10⁰ | −1.97 × 10⁰ | 6.27 × 10⁻¹ | 6.05 × 10⁻¹
Mean Rank denotes the average rank obtained from the Friedman test. LowerCI and UpperCI indicate the confidence-interval bounds of the mean-rank differences. p_adj represents the adjusted p-value from the post-hoc multiple comparisons.
Table 11. Run results of ALDPGA and several variants on CEC2017 (100-D).
Function | Metric | ALDPGA_1 | ALDPGA_2 | ALDPGA_3 | PGA | ALDPGA
F 1 Mean1.19 × 1041.04 × 1047.48 × 1039.83 × 1047.35 × 103
Std1.63 × 1041.58 × 1049.22 × 1034.06 × 1059.01 × 103
F 3 Mean1.82 × 1042.39 × 1055.41 × 1023.05 × 1055.98 × 102
Std9.51 × 1032.04 × 1042.55 × 1023.20 × 1047.47 × 102
F 4 Mean6.77 × 1027.19 × 1025.73 × 1027.09 × 1025.66 × 102
Std5.65 × 1015.20 × 1017.60 × 1014.94 × 1018.03 × 101
F 5 Mean8.06 × 1021.19 × 1037.99 × 1021.14 × 1037.99 × 102
Std5.07 × 1011.17 × 1024.85 × 1011.57 × 1024.85 × 101
F 6 Mean6.14 × 1026.50 × 1026.22 × 1026.37 × 1026.16 × 102
Std5.98 × 1001.07 × 1015.15 × 1008.91 × 1005.18 × 100
F 7 Mean1.30 × 1032.09 × 1031.33 × 1031.84 × 1031.32 × 103
Std1.14 × 1021.85 × 1021.34 × 1021.97 × 1021.21 × 102
F 8 Mean1.12 × 1031.48 × 1031.12 × 1031.41 × 1031.12 × 103
Std5.03 × 1011.02 × 1027.34 × 1011.20 × 1027.34 × 101
F 9 Mean6.12 × 1032.16 × 1046.01 × 1031.64 × 1045.99 × 103
Std2.27 × 1036.80 × 1032.76 × 1035.94 × 1032.75 × 103
F 10 Mean1.55 × 1041.58 × 1041.47 × 1041.58 × 1041.47 × 104
Std1.10 × 1031.40 × 1031.19 × 1031.47 × 1031.18 × 103
F 11 Mean2.01 × 1032.87 × 1032.21 × 1035.97 × 1032.21 × 103
Std3.19 × 1026.40 × 1022.52 × 1022.85 × 1032.52 × 102
F 12 Mean7.53 × 1061.71 × 1077.19 × 1051.94 × 1074.69 × 105
Std7.90 × 1061.16 × 1072.14 × 1051.07 × 1071.78 × 105
F 13 Mean1.57 × 1042.04 × 1041.75 × 1041.51 × 1041.76 × 104
Std1.72 × 1041.19 × 1041.14 × 1041.21 × 1041.14 × 104
F 14 Mean1.94 × 1051.08 × 1069.68 × 1041.27 × 1069.26 × 104
Std1.04 × 1056.09 × 1054.72 × 1047.03 × 1054.75 × 104
F 15 Mean1.12 × 1046.97 × 1039.66 × 1037.12 × 1039.64 × 103
Std1.09 × 1044.54 × 1038.65 × 1035.78 × 1038.61 × 103
F 16 Mean5.00 × 1036.01 × 1034.90 × 1035.90 × 1034.90 × 103
Std6.36 × 1028.27 × 1025.50 × 1029.47 × 1025.48 × 102
F 17 Mean4.81 × 1035.27 × 1034.52 × 1035.15 × 1034.51 × 103
Std5.29 × 1027.31 × 1024.65 × 1026.32 × 1024.64 × 102
F 18 Mean2.47 × 1052.75 × 1061.72 × 1052.43 × 1061.71 × 105
Std1.16 × 1051.78 × 1068.32 × 1041.08 × 1067.66 × 104
F 19 Mean1.19 × 1047.26 × 1031.25 × 1041.01 × 1041.24 × 104
Std1.30 × 1045.57 × 1031.38 × 1041.01 × 1041.37 × 104
F 20 Mean4.93 × 1035.31 × 1034.49 × 1034.90 × 1034.48 × 103
Std4.83 × 1026.39 × 1024.75 × 1025.10 × 1024.74 × 102
F 21 Mean2.63 × 1032.99 × 1032.63 × 1032.96 × 1032.63 × 103
Std4.56 × 1011.10 × 1026.43 × 1011.12 × 1026.43 × 101
F 22 Mean1.81 × 1041.82 × 1041.72 × 1041.82 × 1041.72 × 104
Std1.30 × 1031.32 × 1031.19 × 1031.46 × 1031.21 × 103
F 23 Mean3.15 × 1033.45 × 1033.14 × 1033.39 × 1033.14 × 103
Std4.60 × 1017.92 × 1014.24 × 1019.65 × 1014.17 × 101
F 24 Mean3.60 × 1033.99 × 1033.62 × 1033.90 × 1033.62 × 103
Std5.42 × 1011.30 × 1025.43 × 1011.38 × 1025.43 × 101
F 25 Mean3.32 × 1033.35 × 1033.26 × 1033.39 × 1033.26 × 103
Std4.77 × 1016.53 × 1017.37 × 1014.97 × 1017.36 × 101
F 26 Mean9.27 × 1031.33 × 1049.62 × 1031.26 × 1049.62 × 103
Std6.58 × 1029.80 × 1028.18 × 1021.22 × 1038.18 × 102
F 27 Mean3.65 × 1033.62 × 1033.64 × 1033.57 × 1033.64 × 103
Std9.15 × 1019.12 × 1011.07 × 1028.66 × 1011.10 × 102
F 28 Mean6.30 × 1033.46 × 1034.56 × 1033.56 × 1034.56 × 103
Std5.33 × 1033.74 × 1013.69 × 1032.75 × 1023.69 × 103
F 29 Mean6.28 × 1037.29 × 1036.08 × 1036.82 × 1036.08 × 103
Std5.04 × 1025.96 × 1025.29 × 1025.23 × 1025.33 × 102
F 30 Mean2.83 × 1045.05 × 1041.02 × 1045.02 × 1041.02 × 104
Std1.73 × 1045.53 × 1043.63 × 1033.19 × 1043.62 × 103
Rank First | 5 | 3 | 4 | 2 | 15
The bold text indicates the average value obtained by the best algorithm in the current function.
Table 12. Tension spring design problem.
Metric | GWO | TOC | PSO | GA | DE | RIME | SAO | VO | PGA | ALDPGA
Best | 0.013 | 0.013 | 0.013 | 0.013 | 0.013 | 0.013 | 0.013 | 0.013 | 0.013 | 0.013
Mean | 0.013 | 0.013 | 0.013 | 0.015 | 0.013 | 0.017 | 0.014 | 0.015 | 0.013 | 0.013
Std | 0.000 | 0.003 | 0.000 | 0.002 | 0.000 | 0.002 | 0.001 | 0.002 | 0.000 | 0.000
Worst | 0.013 | 0.030 | 0.014 | 0.021 | 0.013 | 0.018 | 0.018 | 0.018 | 0.014 | 0.013
X1 | 0.051 | 0.052 | 0.053 | 0.062 | 0.052 | 0.067 | 0.057 | 0.062 | 0.052 | 0.052
X2 | 0.334 | 0.375 | 0.394 | 0.632 | 0.357 | 0.851 | 0.520 | 0.680 | 0.363 | 0.370
X3 | 12.857 | 11.995 | 9.838 | 4.904 | 11.281 | 3.019 | 7.318 | 4.154 | 11.615 | 10.933
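The tension/compression spring problem is a standard constrained benchmark. Assuming the commonly used formulation below (wire diameter X1, mean coil diameter X2, number of active coils X3), the best designs in Table 12 can be checked directly; the constraint constants are the usual literature values and may differ slightly from the authors' exact setup.

```python
import numpy as np

def spring_weight(x):
    """Objective: minimise the spring weight (x3 + 2) * x2 * x1^2."""
    d, D, N = x
    return (N + 2.0) * D * d ** 2

def spring_constraints(x):
    """Standard inequality constraints g_i(x) <= 0 of the benchmark."""
    d, D, N = x
    g1 = 1.0 - (D ** 3 * N) / (71785.0 * d ** 4)
    g2 = (4.0 * D ** 2 - d * D) / (12566.0 * (D * d ** 3 - d ** 4)) + 1.0 / (5108.0 * d ** 2) - 1.0
    g3 = 1.0 - 140.45 * d / (D ** 2 * N)
    g4 = (D + d) / 1.5 - 1.0
    return np.array([g1, g2, g3, g4])

# ALDPGA design reported in Table 12 (X1, X2, X3).
x_aldpga = np.array([0.052, 0.370, 10.933])
print("weight      :", round(spring_weight(x_aldpga), 4))        # about 0.013
print("constraints :", np.round(spring_constraints(x_aldpga), 4), "(feasible if all <= 0)")
```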
Table 13. Three-bar truss design problem.
Metric | GWO | TOC | PSO | GA | DE | RIME | SAO | VO | PGA | ALDPGA
Best | 263.896 | 263.896 | 263.896 | 263.897 | 263.896 | 263.901 | 263.896 | 263.896 | 263.896 | 263.896
Mean | 263.897 | 263.896 | 263.896 | 266.287 | 263.896 | 265.257 | 263.955 | 263.909 | 263.900 | 263.898
Std | 0.001 | 0.000 | 0.000 | 2.973 | 0.000 | 2.116 | 0.156 | 0.010 | 0.008 | 0.004
Worst | 263.899 | 263.897 | 263.896 | 273.974 | 263.896 | 272.508 | 264.732 | 263.936 | 263.938 | 263.915
X1 | 0.789 | 0.789 | 0.789 | 0.798 | 0.789 | 0.801 | 0.789 | 0.788 | 0.788 | 0.790
X2 | 0.408 | 0.408 | 0.408 | 0.406 | 0.408 | 0.388 | 0.407 | 0.410 | 0.409 | 0.406
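The three-bar truss problem likewise has a compact standard form. Assuming the usual constants (bar length l = 100, load P = 2, allowable stress σ = 2), evaluating the reported designs reproduces objective values close to 263.896; the small deviation for the rounded Table 13 entries comes from truncating X1 and X2 to three decimals.

```python
import math

L_BAR, P, SIGMA = 100.0, 2.0, 2.0  # standard literature constants (assumed)

def truss_volume(x1, x2):
    """Objective: structural volume (2*sqrt(2)*x1 + x2) * L_BAR."""
    return (2.0 * math.sqrt(2.0) * x1 + x2) * L_BAR

def truss_constraints(x1, x2):
    """Stress constraints g_i <= 0 of the standard three-bar truss benchmark."""
    denom = math.sqrt(2.0) * x1 ** 2 + 2.0 * x1 * x2
    g1 = (math.sqrt(2.0) * x1 + x2) / denom * P - SIGMA
    g2 = x2 / denom * P - SIGMA
    g3 = 1.0 / (math.sqrt(2.0) * x2 + x1) * P - SIGMA
    return g1, g2, g3

designs = {
    "ALDPGA (Table 13, rounded)": (0.790, 0.406),
    "classical optimum (approx.)": (0.78868, 0.40825),
}
for label, (x1, x2) in designs.items():
    gs = [round(g, 4) for g in truss_constraints(x1, x2)]
    print(f"{label:28s} volume = {truss_volume(x1, x2):8.3f}  g = {gs}")
```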
Table 14. Cantilever beam design problem.
Metric | GWO | TOC | PSO | GA | DE | RIME | SAO | VO | PGA | ALDPGA
Best | 1.340 | 1.340 | 1.340 | 1.356 | 1.340 | 1.342 | 1.340 | 1.340 | 1.340 | 1.340
Mean | 1.340 | 1.340 | 1.340 | 1.615 | 1.340 | 1.434 | 1.340 | 1.340 | 1.340 | 1.340
Std | 0.000 | 0.000 | 0.000 | 0.209 | 0.000 | 0.099 | 0.000 | 0.000 | 0.000 | 0.000
Worst | 1.340 | 1.340 | 1.340 | 2.056 | 1.340 | 1.667 | 1.340 | 1.340 | 1.340 | 1.340
X1 | 6.018 | 6.018 | 6.016 | 7.345 | 6.015 | 6.083 | 6.016 | 6.016 | 6.013 | 6.015
X2 | 5.310 | 5.314 | 5.309 | 6.699 | 5.310 | 5.665 | 5.309 | 5.308 | 5.305 | 5.307
X3 | 4.495 | 4.492 | 4.495 | 5.115 | 4.494 | 5.028 | 4.494 | 4.494 | 4.501 | 4.496
X4 | 3.499 | 3.501 | 3.502 | 4.115 | 3.502 | 3.827 | 3.502 | 3.500 | 3.501 | 3.502
X5 | 2.152 | 2.150 | 2.153 | 2.607 | 2.152 | 2.383 | 2.153 | 2.156 | 2.154 | 2.154
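Finally, the cantilever beam problem of Table 14 reduces, in its standard form, to minimising a weighted sum of five section heights under a single deflection constraint; evaluating the reported ALDPGA design with this assumed model recovers the 1.340 optimum.

```python
def cantilever_weight(x):
    """Objective: beam weight 0.0624 * (x1 + x2 + x3 + x4 + x5)."""
    return 0.0624 * sum(x)

def cantilever_constraint(x):
    """Deflection constraint g(x) = 61/x1^3 + 37/x2^3 + 19/x3^3 + 7/x4^3 + 1/x5^3 - 1 <= 0."""
    coeffs = (61, 37, 19, 7, 1)
    return sum(c / xi ** 3 for c, xi in zip(coeffs, x)) - 1.0

# ALDPGA design reported in Table 14 (X1..X5).
x_aldpga = (6.015, 5.307, 4.496, 3.502, 2.154)
print("weight    :", round(cantilever_weight(x_aldpga), 4))          # about 1.340
print("constraint:", round(cantilever_constraint(x_aldpga), 5), "(feasible if <= 0)")
```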
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
