Article

Arctic Puffin Optimization Algorithm Integrating Opposition-Based Learning and Differential Evolution with Engineering Applications

Key Laboratory of Data Science and Artificial Intelligence of Jiangxi Education Institutes, Gannan Normal University, Ganzhou 341000, China
*
Author to whom correspondence should be addressed.
Biomimetics 2025, 10(11), 767; https://doi.org/10.3390/biomimetics10110767
Submission received: 24 October 2025 / Revised: 9 November 2025 / Accepted: 10 November 2025 / Published: 12 November 2025
(This article belongs to the Section Biological Optimisation and Management)

Abstract

The Arctic Puffin Optimization (APO) algorithm, proposed in 2024, is a swarm intelligence optimization algorithm. Like other swarm intelligence algorithms, it suffers from slow convergence in the early stage, a tendency to fall into local optima, and an insufficient balance between exploration and exploitation. To address these limitations, an improved APO (IAPO) algorithm incorporating multiple strategies is proposed. First, a mirror opposition-based learning mechanism is introduced to expand the search scope and improve the efficiency of locating the optimal solution, thereby enhancing the algorithm’s convergence accuracy and optimization speed. Second, a dynamic differential evolution strategy with adaptive parameters is integrated to improve the algorithm’s ability to escape local optima and achieve precise optimization. Comparative experiments between IAPO and eight other optimization algorithms on 20 benchmark functions, as well as the CEC 2019 and CEC 2022 test functions, show that IAPO achieves higher accuracy, faster convergence, and superior robustness, securing first-place average rankings of 1.35, 1.30, 1.25, and 1.08 on the 20 benchmark functions, CEC 2019, and the 10- and 20-dimensional CEC 2022 test sets, respectively. Finally, simulation experiments were conducted on three engineering optimization design problems. IAPO achieved optimal values of 5.2559 × 10⁻¹, 1.09 × 10³, and 1.49 × 10⁴ on these problems, ranking first in all cases. This further validates the effectiveness and practicality of the IAPO algorithm.

1. Introduction

Swarm intelligence algorithms are meta-heuristic algorithms that mimic the collective behavior of natural biological populations to address complex optimization problems [1,2,3,4]. The core idea is to achieve intelligent overall behavior through simple rules and local information exchange among individuals. In recent years, scholars have developed a range of swarm intelligence algorithms, drawing inspiration from observing and simulating the living habits of organisms. For example, the Slime Mould Algorithm (SMA) [5] mimics the foraging behavior of slime molds and utilizes adaptive weighting to adjust its search direction, rendering it suitable for high-dimensional and multi-modal optimization; however, it tends to converge slowly and often becomes trapped in local optima. The Tunicate Swarm Algorithm (TSA) [6] simulates the jet propulsion and group foraging behavior of tunicates, enhancing global exploration through a conflict avoidance mechanism. Nonetheless, it involves multiple parameters, which may lead to performance degradation and poor adaptability in high-dimensional settings. The Snake Optimizer (SO) [7] employs a male–female dual-population mechanism to increase population diversity, but its convergence accuracy is limited. The Particle Swarm Optimization (PSO) algorithm [8] features a simple principle and fast convergence, but it is prone to getting stuck in local optima and is sensitive to parameter settings. The Arctic Puffin Optimization (APO) algorithm [9] employs a versatile multi-stage strategy that integrates aerial flight and underwater foraging behaviors to better adapt to complex environments; however, the transition between its exploration and exploitation phases is insufficiently smooth, and its parameters are difficult to tune. The Differentiated Creative Search (DCS) algorithm [10] boosts search efficiency via cooperative evolution and multi-strategy fusion, yet it suffers from high implementation complexity and pronounced parameter sensitivity. In summary, these algorithms often exhibit weak global exploration, a tendency to become trapped in local optima, and low convergence accuracy in high-dimensional problems. Given that they frequently suffer from low convergence accuracy and an imbalance between exploration and exploitation, several innovative algorithmic improvements have been proposed. For example, Xia et al. [11] introduced a meta-learning-based alternating minimization method that replaces handcrafted strategies with learned adaptive ones, achieving substantial performance gains. Long et al. [12] proposed an enhanced PSO algorithm that utilizes opposition-based learning to improve initial population quality, introduces continuous mapping to enhance neighborhood search capability, and integrates particle perturbation to increase diversity and avoid local optima. Cai et al. [13] proposed a multi-strategy differentiated creative search method, introducing co-evolution to improve the efficiency of the DCS algorithm; integrating a composite fitness-distance evaluation enables balanced exploration-exploitation transitions, while linear population size reduction further enhances performance. Guo et al. [14] proposed a “sweep-rotate” gait that enhances the ability of planetary rovers to escape from granular terrain; a Bayesian optimization-based escape strategy clarifies variable influences and parameter ranges, improving optimization accuracy.
Tian et al. [15] proposed an adaptive improvement method to address the low efficiency of traditional Jump Markov Chain Monte Carlo algorithms. By dynamically adjusting the proposal distribution and employing parallel annealing techniques, they significantly enhanced the efficiency and convergence of curve fitting calculations.
Typical engineering optimization problems exhibit nonlinear, non-differentiable, or multimodal characteristics that hinder the application of traditional gradient-based methods; metaheuristic algorithms emerge as particularly suitable solutions. Their effectiveness lies in their ability to perform a global search without requiring gradient information. Numerous engineering optimization problem models have already been developed. For instance, Kumar et al. [16] addressed classical problems, including tension/compression spring design, which minimizes spring weight by optimizing three design variables, subject to four constraints. Zhou et al. [17] developed a physics-informed neural network framework for fatigue life prediction, which incorporates partial differential inequalities derived from experimental data as physical constraints. Tao et al. [18] proposed a novel turbine blade tip design methodology based on free-form deformation technology, developing a thermo-aerodynamic optimization framework that employs large-scale variable optimization.
The APO algorithm, a novel swarm intelligence algorithm introduced by Wang et al. in 2024 [9], mimics the survival and predation behaviors of the Arctic puffin. While its unique multi-stage foraging strategy presents a promising framework for optimization, empirical studies reveal that the APO, in its original form, is not exempt from the common pitfalls of swarm intelligence algorithms. These include premature convergence, susceptibility to local optima, and an inadequate balance between exploration and exploitation on certain problems, which ultimately limit its convergence precision and practical applicability. To alleviate these issues, several improvements have been proposed. Fakhouri et al. [19] introduced an adaptive differential evolution strategy that utilizes adaptive parameter control and external archives to strengthen global exploration and convergence efficiency. Sun and Wang [20] proposed a dual-strategy approach combining elite opposition-based learning with adaptive T-distribution mutation to enhance initial population quality and achieve a superior balance between global exploration and local exploitation. Zhang [21] introduced an elite reverse learning strategy that significantly improved convergence speed and the performance of communication systems. Su and Jiang [22] integrated a Gaussian mutation strategy to balance global exploration with local exploitation, employing elite opposition-based learning to expand the search space and enhance population diversity. Nevertheless, as stated by the no-free-lunch theorem [23], no single optimization algorithm can dominate all others on every possible problem. Although the aforementioned strategies enhance the optimization performance of APO, the algorithm remains prone to local optima and an imbalance between exploration and exploitation.
Therefore, this paper proposes the following improvements to APO: first, a mirror opposition-based learning mechanism with an adaptive mirroring factor is introduced. During the search process, both the current solutions and their dynamically generated mirror opposites are considered as feasible solutions, thus enhancing population diversity and the efficiency of locating optimal solutions, which enhances convergence accuracy. Second, a dynamic differential evolution strategy with an adaptive scaling factor is incorporated to aid in precise optimization and strengthen the algorithm’s ability to escape local optima.
The remainder of this paper is structured as follows: Section 2 presents the fundamental concepts of the standard APO. Section 3 details the improved APO (IAPO) through mathematical formulations, a flowchart, and the corresponding pseudocode. Section 4 conducts ablation experiments on the proposed improvement strategies and analyzes the performance of IAPO against eight comparison algorithms. Section 5 applies the nine algorithms to three engineering optimization design problems. Finally, the paper concludes and discusses potential directions for future research.
The key contributions of this study are listed below:
(a)
A mirror opposition-based learning mechanism is introduced to enhance the quality of the initial population and expand the search scope. An adaptive parameter-driven dynamic differential evolution strategy is also integrated. By adjusting parameters adaptively, the algorithm prioritizes global exploration in early iterations and shifts focus to local exploitation in later stages, thereby achieving an automatic balance between exploration and exploitation and improving the ability to escape local optima.
(b)
Comprehensive experiments are conducted on 20 benchmark functions, the CEC 2019 test set, and the CEC 2022 test set, which collectively validate the superiority of the proposed IAPO. Furthermore, simulation experiments on three engineering application problems demonstrate that IAPO outperforms other comparative algorithms in both robustness and practical applicability.

2. APO

The APO algorithm comprises three stages: population initialization, exploration (aerial flight), and exploitation (underwater foraging). A behavior transition coefficient regulates the shift between the latter two phases. The corresponding mathematical models are detailed below.

2.1. Initialization

As with other metaheuristic algorithms, the process begins with random initialization:
X_i^t = \mathrm{rand} \times (ub - lb) + lb, \quad i = 1, 2, \ldots, N,
where, for iteration t, X_i^t denotes the position of the i-th Arctic puffin; rand is a random variable uniformly distributed in [0, 1]; and ub and lb define the upper and lower bounds of the search space, respectively.
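As a minimal illustration, this uniform initialization can be sketched in a few lines of NumPy; the function name and the seeded random generator are our own choices, not part of the original APO description:

import numpy as np

def initialize_population(N, D, lb, ub, rng=None):
    # Uniform random initialization as in the formula above; lb and ub may be
    # scalars or length-D arrays. A sketch, not the authors' implementation.
    rng = np.random.default_rng() if rng is None else rng
    return rng.uniform(0.0, 1.0, size=(N, D)) * (np.asarray(ub) - np.asarray(lb)) + np.asarray(lb)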

2.2. Aerial Flight Phase (Exploration)

Arctic puffins utilize distinctive flight and foraging strategies to cope with demanding environments, flexibly switching between aerial and underwater modes to meet nutritional needs and respond to diverse conditions. This section elaborates on the two aerial strategies.
The first is aerial search, formulated as:
Y_i^{t+1} = X_i^t + (X_i^t - X_r^t) \times L(D) + R,
R = \mathrm{round}(0.5 \times (0.05 + \mathrm{rand})) \times \alpha, \quad \alpha \sim \mathrm{Normal}(0, 1),
where r is a random integer in [1, N] with r ≠ i; X_i^t denotes the i-th candidate solution; X_r^t represents a candidate solution chosen at random from the population with X_r^t ≠ X_i^t; L(D) is a Levy flight-generated random number; D corresponds to a specific dimension; and α is a value sampled from a standard normal distribution.
The second strategy involves swooping predation:
Z_i^{t+1} = Y_i^{t+1} \times S, \quad S = \tan((\mathrm{rand} - 0.5) \times \pi).
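The two aerial strategies can be sketched as follows. The paper does not specify how L(D) is generated, so the sketch assumes Mantegna's algorithm with the commonly used exponent β = 1.5; the helper names are ours:

import numpy as np
from math import gamma, sin, pi

def levy_flight(D, beta=1.5, rng=None):
    # Mantegna's algorithm for Levy-distributed steps; beta = 1.5 is an
    # assumed, commonly used value (the paper does not state it).
    rng = np.random.default_rng() if rng is None else rng
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, D)
    v = rng.normal(0.0, 1.0, D)
    return u / np.abs(v) ** (1 / beta)

def aerial_flight(X, i, rng=None):
    # One exploration move: aerial search (Y), then swooping predation (Z).
    rng = np.random.default_rng() if rng is None else rng
    N, D = X.shape
    r = rng.choice([k for k in range(N) if k != i])           # partner r != i
    R = np.round(0.5 * (0.05 + rng.random())) * rng.normal()  # perturbation term R
    Y = X[i] + (X[i] - X[r]) * levy_flight(D, rng=rng) + R
    S = np.tan((rng.random() - 0.5) * np.pi)                  # swooping factor S
    Z = Y * S
    return Y, Z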

2.3. Underwater Foraging Phase (Exploitation)

The exploitation phase in APO is modeled by the underwater foraging behavior of Arctic puffins, characterized by three strategies: gathering foraging, intensifying search, and predator avoidance.
(1) Gathering foraging
This strategy mimics the collaborative hunting of puffins, which target fish schools near the surface. This cooperative approach boosts efficiency, as individuals can identify rich diving spots and food sources by observing the group. The update formula is:
W_i^{t+1} = \begin{cases} X_{r1}^t + G \times L(D) \times (X_{r2}^t - X_{r3}^t), & \mathrm{rand} \geq 0.5, \\ X_{r1}^t + G \times (X_{r2}^t - X_{r3}^t), & \mathrm{rand} < 0.5, \end{cases}
where G is a cooperation factor used to regulate the Arctic puffin’s foraging pattern, with G = 0.5 in this paper; r_1, r_2, and r_3 are distinct random integers within [1, N] (excluding i), while X_{r1}^t, X_{r2}^t, and X_{r3}^t are candidate solutions randomly selected from the current population.
(2) Intensifying the search
As available food depletes, Arctic puffins navigate to new underwater areas to find prey and maintain their nutrient intake. This movement is modeled by:
Y_i^{t+1} = W_i^{t+1} \times (1 + f), \quad f = 0.1 \times (\mathrm{rand} - 1) \times \frac{T - t}{T},
where T is the maximum iteration count, t is the current iteration, and rand injects randomness into the adaptive factor f. This factor, which is central to the intensifying search phase along with the term (1 + f), is designed to fine-tune the Arctic puffin’s underwater position. Inspired by the puffin’s foraging flexibility, f dynamically adjusts as iterations progress, enabling the puffin to refine its position based on search status and randomness, thereby enhancing its ability to locate richer food sources.
(3) Avoiding predators
This strategy [9] simulates the collective warning behavior of Arctic puffins. Once a puffin detects a predator, it issues an alarm call, prompting other members to immediately evacuate the danger zone upon receiving the signal. The corresponding mathematical formulation is as follows:
Z_i^{t+1} = \begin{cases} X_i^t + G \times L(D) \times (X_{r1}^t - X_{r2}^t), & \mathrm{rand} \geq 0.5, \\ X_i^t + \beta \times (X_{r1}^t - X_{r2}^t), & \mathrm{rand} < 0.5, \end{cases}
where G is a cooperation factor used to regulate the Arctic puffin’s foraging pattern, with G = 0.5 in this paper; β is a random variable drawn from a uniform distribution over [0, 1]; r_1 and r_2 are distinct random integers within [1, N] (excluding i); and X_{r1}^t and X_{r2}^t represent two candidate solutions obtained via random selection from the current population. This approach employs a balancing mechanism that activates two different escape tactics based on the perceived threat level. When rand ≥ 0.5, simulating the presence of a nearby predator, puffins perform a swift escape maneuver, leading to a significant relocation to avoid danger. This response improves the algorithm’s capacity to escape local optima. On the other hand, when rand < 0.5, puffins switch to a more measured evasion tactic, permitting finer local search. Such a dual-response mechanism allows the algorithm to adapt to various search situations, balancing exploration and exploitation.
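Reusing the levy_flight helper from the previous sketch, the three exploitation strategies can be outlined as follows; this is an illustrative sketch of the formulas above, not the authors' code:

import numpy as np

def underwater_foraging(X, i, t, T, G=0.5, rng=None):
    # One exploitation move: gathering foraging (W), intensifying search (Y),
    # and predator avoidance (Z), following the three formulas above.
    rng = np.random.default_rng() if rng is None else rng
    N, D = X.shape
    r1, r2, r3 = rng.choice([k for k in range(N) if k != i], size=3, replace=False)
    # (1) Gathering foraging
    if rng.random() >= 0.5:
        W = X[r1] + G * levy_flight(D, rng=rng) * (X[r2] - X[r3])
    else:
        W = X[r1] + G * (X[r2] - X[r3])
    # (2) Intensifying the search with the adaptive factor f
    f = 0.1 * (rng.random() - 1.0) * (T - t) / T
    Y = W * (1.0 + f)
    # (3) Avoiding predators (beta ~ U(0, 1) in the second branch)
    if rng.random() >= 0.5:
        Z = X[i] + G * levy_flight(D, rng=rng) * (X[r1] - X[r2])
    else:
        Z = X[i] + rng.random() * (X[r1] - X[r2])
    return W, Y, Z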

2.4. Behavioral Transition Coefficient B

Governed by the behavioral transition coefficient B, the APO algorithm smoothly shifts from global exploration (aerial flight) in early iterations to local exploitation (underwater foraging) in later stages. This dynamic transition ensures a balanced search. The coefficient B is defined as:
B = 2 \times \log(1 / \mathrm{rand}) \times (1 - t / T),
where t is the current iteration and T is the maximum iteration count. A threshold parameter D = 0.5 is used to determine the current search phase: when B > D, the algorithm performs exploration; when B ≤ D, it switches to exploitation. This dynamic adjustment enables adaptive control over the search process throughout the optimization run.
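A sketch of this phase-switching rule, with names of our own choosing:

import numpy as np

def behavior_coefficient(t, T, rng=None):
    # B = 2 * log(1 / rand) * (1 - t / T); exploration when B > 0.5,
    # exploitation otherwise, per the transition rule above.
    rng = np.random.default_rng() if rng is None else rng
    return 2.0 * np.log(1.0 / rng.random()) * (1.0 - t / T)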

3. IAPO

To address three key limitations of the APO algorithm, namely sluggish early-stage convergence, a propensity to become trapped in local optima, and an inadequate balance between exploration and exploitation, we propose the IAPO. This enhanced algorithm incorporates a mirror opposition-based learning mechanism and a dynamic differential evolution strategy.

3.1. Mirror Opposition-Based Learning Mechanism

In many optimization scenarios, the search process often starts with random initial values and gradually approaches the optimal solution. When initial random values are close to the optimum, the problem can be solved efficiently. However, in the worst case, if the initial values lie opposite to the optimal region, the optimization process becomes time-consuming.
Without prior knowledge, it is difficult to ensure favorable initial solutions. From a logical perspective, the solution space can be explored more effectively by considering both current candidate solutions and their opposites. Introducing opposite solutions as feasible candidates can enhance the efficiency of locating the global optimum. This idea is rooted in the concept of opposition-based learning [24,25] and can be mathematically formulated as follows:
R_i^{t+1} = (X_{\max} + X_{\min}) - X_i^t.
Based on this concept, Yao et al. [26] proposed a mirror opposition-based learning mechanism inspired by convex lens imaging. Unlike conventional opposition-based learning, which produces a fixed opposite solution, the mirror opposition-based mechanism introduces adaptive perturbation via a mirror factor q, enabling dynamic adjustment of opposite solutions. This not only improves optimization accuracy but also maintains convergence speed [27]. The updated position of an individual is given by:
M_i^{t+1} = \frac{X_{\max} + X_{\min}}{2} + \frac{X_{\max} + X_{\min}}{2q} - \frac{X_i^t}{q},
where q is the mirror factor, X_max is the upper bound of the position, and X_min is the lower bound of the position.
The adaptive update formula for the mirror factor q is:
q = 10 \times \left(1 - 2 \times (t / T)^2\right).
Generally, as a machine learning strategy, opposition-based learning offers the potential to extend existing learning algorithms.
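A minimal sketch of this mechanism, assuming out-of-range opposites are clamped back to the bounds (a common practice that the paper does not state explicitly):

import numpy as np

def mirror_opposition(X, lb, ub, t, T):
    # Mirror opposition-based learning: q decays from 10 toward -10 over the
    # run; near its sign change |q| is small and the opposite point can fall
    # far outside the search range, hence the final clip to [lb, ub].
    q = 10.0 * (1.0 - 2.0 * (t / T) ** 2)
    M = (lb + ub) / 2.0 + (lb + ub) / (2.0 * q) - X / q
    return np.clip(M, lb, ub)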

3.2. Dynamic Differential Evolution Strategy

The Differential Evolution (DE) algorithm [28] shares its conceptual foundation with genetic algorithms. Both employ random initialization to generate a starting population, utilize fitness values to guide selection, and proceed iteratively through mutation, crossover, and selection operations. However, the Arctic Puffin Optimization (APO) algorithm tends to converge near local optima in the later stages of optimization, increasing the risk of premature convergence. To mitigate this limitation, a dynamic differential evolution strategy is incorporated, which introduces an adaptive scaling factor. The specific procedure is outlined below.
The initial scaling factor E 0 and crossover probability C R were determined through preliminary parameter sensitivity experiments within the range [0, 1]. A parameter set demonstrating consistent effectiveness across most test problems was selected to maintain both effective adaptation and population diversity:
E_0 = 0.4, \quad CR = 0.1.
This paper employs the DE/rand/1 mutation strategy, which is selected for its independence from the current best solution. This characteristic helps maintain population diversity and enhances the algorithm’s ability to escape local optima. Furthermore, its simple structure allows smooth integration with the proposed adaptive parameter mechanism.
V_i^{t+1} = X_{r1}^t + E \times (X_{r2}^t - X_{r3}^t),
where r_1, r_2, and r_3 are distinct integers randomly selected from the range [1, N] (excluding i), and E is an adaptive scaling factor calculated as follows:
E = E_0 \times 2^{\lambda},
\lambda = e^{1 - T / (T + 1 - t)}.
During the early iterations, the scaling factor E remains relatively high to promote global exploration. As the search progresses, the value of E gradually decreases to shift the focus toward local refinement.
Next, the crossover operation is applied to combine the mutation vector V i t + 1 with the target vector X i t according to certain rules, producing a trial vector:
U_i^{t+1} = [u_{i,1}^{t+1}, \ldots, u_{i,j}^{t+1}, \ldots, u_{i,D}^{t+1}],
where each component is defined by:
u_{i,j}^{t+1} = \begin{cases} V_{i,j}^{t+1}, & \text{if } r_j[0, 1) \leq CR \text{ or } j = r(i), \\ X_{i,j}^t, & \text{otherwise}, \end{cases}
where V_{i,j}^{t+1} is the j-th dimension value of the i-th mutant individual; r_j[0, 1) denotes the random number generated in the j-th dimension; and CR is the crossover probability. D denotes the dimension of the population, and r(i) is a random integer in [1, D], thus ensuring that the mutation vector contributes at least one component.
Finally, a greedy selection operation is performed based on fitness comparison:
X_i^{t+1} = \begin{cases} U_i^{t+1}, & \text{if } F(U_i^{t+1}) < F(X_i^t), \\ X_i^t, & \text{if } F(U_i^{t+1}) \geq F(X_i^t), \end{cases}
where F(\cdot) denotes the fitness function. This ensures that the population evolves toward improved solutions over successive generations, maintaining selection pressure toward higher-quality regions of the search space.
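The complete strategy can be sketched as one generation of DE with the adaptive scaling factor; with E_0 = 0.4 the factor decays from 0.8 at t = 1 toward 0.4 as t approaches T. This is an illustrative sketch, not the authors' implementation:

import numpy as np

def dynamic_de_step(X, fitness, func, t, T, E0=0.4, CR=0.1, rng=None):
    # DE/rand/1 mutation with adaptive scaling E, binomial crossover with a
    # guaranteed crossover dimension, and greedy selection, as formulated above.
    rng = np.random.default_rng() if rng is None else rng
    N, D = X.shape
    lam = np.exp(1.0 - T / (T + 1.0 - t))
    E = E0 * 2.0 ** lam
    for i in range(N):
        r1, r2, r3 = rng.choice([k for k in range(N) if k != i], size=3, replace=False)
        V = X[r1] + E * (X[r2] - X[r3])          # mutation
        mask = rng.random(D) <= CR               # per-dimension crossover decisions
        mask[rng.integers(D)] = True             # j = r(i): at least one dimension
        U = np.where(mask, V, X[i])              # trial vector
        fU = func(U)
        if fU < fitness[i]:                      # greedy selection
            X[i], fitness[i] = U, fU
    return X, fitness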

3.3. Algorithm Flow

The pseudocode of IAPO is shown as Algorithm 1:
Algorithm 1. IAPO
Input: Population size N, maximum iterations T, problem dimension D.
Output: Global best position Xgbest, global best fitness f(Xgbest).
1.  Initialize parameters N, T; randomly initialize the population X_i^0 and evaluate the fitness f(X_i^0) of each individual.
2.  Initialize the local best solution f(Xlbest) and position Xlbest.
3.  Initialize the global best solution f(Xgbest) and position Xgbest.
4.  for t = 1 → T
5.      Calculate the mirror factor q using Formula (10).
6.      for i = 1 → N
7.          Calculate the new position M_i^{t+1} using Formula (9).
8.          if f(M_i^{t+1}) < f(X_i^t)
9.              X_i^{t+1} = M_i^{t+1}, f(X_i^{t+1}) = f(M_i^{t+1})
10.         end if
11.     end for
12.     for i = 1 → N
13.         Calculate the behavior transition coefficient B using Formula (7).
14.         if B > 0.5
15.             Calculate the new positions Y_i^{t+1} and Z_i^{t+1} using Formulas (2)–(3); update the individual position through comparison.
16.         else
17.             Calculate the new positions W_i^{t+1}, Y_i^{t+1}, and Z_i^{t+1} using Formulas (4)–(6); update the individual position through comparison.
18.         end if
19.     end for
20.     Update f(Xlbest), Xlbest, f(Xgbest), and Xgbest.
21.     for i = 1 → N
22.         Calculate the mutation vector V_i^{t+1} using Formulas (13)–(15).
23.         Calculate the trial vector U_i^{t+1} using Formulas (16)–(17).
24.     end for
25.     for i = 1 → N
26.         if f(U_i^{t+1}) < f(X_i^t)
27.             X_i^{t+1} = U_i^{t+1}, f(X_i^{t+1}) = f(U_i^{t+1})
28.         end if
29.     end for
30.     Update f(Xgbest) and Xgbest.
31. end for
32. Return Xgbest and f(Xgbest).
To clearly describe the overall solution logic of IAPO, a flowchart of the algorithm is presented in Figure 1.

3.4. Time Complexity Analysis

Time complexity serves as a key metric for evaluating algorithmic performance, as it governs code execution efficiency. The following analysis compares the time complexities of the standard APO and the improved IAPO algorithm.
Assume the population size of Arctic puffins is N, the maximum number of iterations is T, and the dimension is D.
(1) Time complexity of APO
The APO algorithm operates through three sequential phases: population initialization, aerial flight for global exploration, and underwater foraging for local exploitation. The time complexities of these three stages are as follows:
Initialization phase: iterates N times, generating a D-dimensional vector each time, with a time complexity of O(N·D).
Exploration phase: includes two strategies, aerial search and swooping predation, each looping N times and generating a new D-dimensional solution each time, with a complexity of O(N·D). After T iterations, the time complexity is O(N·D·T).
Exploitation phase: applies three distinct strategies (gathering foraging, intensifying search, and predator avoidance), with the position updates for all three strategies requiring N × D calculations each time. After T iterations, the time complexity is O(N·D·T).
Hence, the combined time complexity of all algorithmic components amounts to O(N·D) + O(N·D·T) + O(N·D·T) ≈ O(N·D·T).
(2) Complexity analysis of the mirror opposition-based learning mechanism
This approach diversifies the search space by creating opposition-based counterparts for the population. The computation involves an inverse calculation for each dimension of every individual, so a single execution has a complexity of O(N·D). After T iterations, this results in a total time complexity of O(N·D·T).
(3) Time complexity of the dynamic differential evolution strategy
The core operations of the dynamic differential evolution strategy include mutation, crossover, and selection. The time complexities of these three operations during the iteration phase are as follows:
Mutation operation: for each individual, three random individuals are selected to calculate a D-dimensional differential vector, with a complexity of O(N·D).
Crossover operation: crossover is performed probabilistically on each dimension of each individual, with a complexity of O(N·D).
Selection operation: the fitness of the target individual is compared with that of the trial individual, with a complexity of O(N·D).
Total iteration complexity: after T iterations of the above three operations, the total time complexity is 3 × O(N·D·T) ≈ O(N·D·T).
(4) Time complexity of IAPO
The total time complexity of the IAPO algorithm is the sum of the time complexities of the APO algorithm, the mirror opposition-based learning mechanism, and the dynamic differential evolution strategy: O(N·D·T) + O(N·D·T) + O(N·D·T) ≈ O(N·D·T). In summary, the time complexity of IAPO is of the same order as that of the standard APO, so the improvements do not significantly increase the computational burden of the algorithm.

4. Simulation Experiments and Result Analysis

The performance of the proposed IAPO was rigorously evaluated using a comprehensive set of test functions, comprising 20 classical benchmarks, the CEC 2019 set, and the CEC 2022 set. The basic information of these test sets is provided in Table 1, Table 2 and Table 3, respectively.
The 20 benchmark functions listed in Table 1 [29] are divided into three groups with distinct characteristics to comprehensively assess the algorithm’s performance: F1–F7 are unimodal functions, which are characterized by the presence of a single global optimum and the absence of local optima. These functions primarily test the convergence speed of the algorithm in straightforward optimization scenarios. F8–F13 [30] are multimodal functions, which contain one global optimum and multiple local optima. They are used to evaluate the algorithm’s ability to balance global exploration and local exploitation. F14–F20 [31] are fixed-dimensional multimodal functions, which test the robustness of algorithms in exploration and exploitation capabilities under constrained dimensions. The CEC 2019 test set [32], shown in Table 2, includes 10 objective functions. Functions F1 to F3 vary in dimension and search range, while F4 to F10 are 10-dimensional. The theoretical optimum for each function in this set is 1. The CEC 2022 test set [33], detailed in Table 3, comprises twelve functions with dimensions of 10 and 20, encompassing unimodal (F1), basic (F2–F5), hybrid (F6–F8), and composition (F9–F12) types. Many of the CEC 2019 and CEC 2022 functions are multimodal [34] and present significant optimization challenges.
To ensure fairness in comparisons, all algorithms were coded in MATLAB 2022a and run on a computer with the following configuration: an AMD Ryzen 5 processor and 16 GB of RAM. The parameters were consistent across all algorithms: population size N = 30 and maximum iterations T = 500. Each algorithm was independently run 30 times. The best value (Best), mean value (Mean), and standard deviation (Std) of the results were used as evaluation indicators.
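For reproducibility, the evaluation protocol can be sketched as follows; the harness below is our own illustration (in Python rather than the MATLAB used in the experiments), where `optimizer` stands for any of the compared algorithms returning the best fitness of a single run:

import numpy as np

def evaluate_algorithm(optimizer, func, runs=30, seed=0):
    # 30 independent runs; Best, Mean, and Std of the final fitness values,
    # matching the evaluation indicators used in this section.
    finals = np.array([optimizer(func, rng=np.random.default_rng(seed + k))
                       for k in range(runs)])
    return finals.min(), finals.mean(), finals.std()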

4.1. Ablation Study on Improvement Strategies

IAPO enhances the standard APO by incorporating two strategies: a mirror opposition-based learning mechanism and a dynamic differential evolution strategy. To analyze the contribution of each strategy, two variant algorithms were constructed:
(1) BAPO: APO enhanced with the mirror opposition-based learning mechanism.
(2) DAPO: APO enhanced with the dynamic differential evolution strategy.
IAPO, APO, BAPO, and DAPO were evaluated on three distinct sets: 20 benchmark functions, along with the CEC 2019 and CEC 2022 test sets. The results of the ablation study are visualized using radar charts and ranking bar charts: Figure 2 corresponds to the 20 benchmark functions, Figure 3 to CEC 2019, and Figure 4 and Figure 5 to the 10- and 20-dimensional CEC 2022 sets, respectively.
In the accompanying radar charts, each radial axis corresponds to a specific test function, with larger values indicating poorer performance. A smaller enclosed area suggests better overall performance. Across all figures, IAPO (blue circle) consistently occupies positions closer to the center and covers the smallest area, indicating superior performance and stability. Similarly, in the ranking bar charts, IAPO consistently ranks first, validating the effectiveness of integrating both improvement strategies.

4.2. Experiments on Benchmark Functions

For a comprehensive evaluation of IAPO’s performance, comparative experiments employed the suite of benchmark functions detailed in Table 1. The following representative swarm intelligence algorithms were selected for comparison: the well-established Whale Optimization Algorithm (WOA) [35], Harris Hawks Optimization (HHO) [36], and Particle Swarm Optimization (PSO) [8]; more recently proposed algorithms, including the Water Uptake and Transport in Plants (WUTP) algorithm [37] and the Rüppell’s Fox Optimizer (RFO) [38]; two improved APO variants, JAPO [19] and ETAAPO [20]; and the standard Arctic Puffin Optimization (APO) algorithm. The parameters of each algorithm were set as follows: IAPO (E_0 = 0.4, CR = 0.1, G = 0.5, D = 0.5); WOA (a decreasing from 2 to 0, b = 1); PSO (w = 0.7, c_1 = 1.5, c_2 = 2.0); WUTP (p = 0.5, β = 0.1); RFO (β = 0.1, L = 100); APO (F = 0.5, D = 0.5); JAPO (uF = 0.5, uCR = 0.5, p_0 = 0.05, F = 0.5, B = 0.5); ETAAPO (F = 0.5, B = 0.5). The optimization results of all algorithms on the 30-dimensional functions F1–F13 and the fixed-dimensional functions F14–F20 are summarized in Table 4, where bold numbers indicate the optimal values.
For the unimodal functions F1–F4, IAPO achieved the theoretical optimum value of zero for the best, mean, and standard deviation of the results, significantly outperforming the other eight algorithms. This demonstrates IAPO’s strong global search capability and stability. On functions F5–F7, IAPO also ranked first in all three evaluation metrics. On the multimodal functions F8–F13, which contain numerous local optima, IAPO attained the theoretical optimum on F8 with the smallest standard deviation, indicating a high ability to avoid local optima. On F9–F11, both HHO and IAPO delivered excellent and identical results. For F12 and F13, IAPO again outperformed all other algorithms across all metrics.
Across the subset of fixed-dimensional multimodal functions (F14–F20), the performance of the nine algorithms was comparable on F16. IAPO performed relatively poorly on F14, ranking sixth. On F18 and F19, IAPO’s standard deviation was slightly higher than that of ETAAPO and JAPO, placing it second. However, on F15, F17, and F20, IAPO achieved the best convergence accuracy, the highest average performance, and the smallest standard deviation, thus validating its robustness and effective balance between exploration and exploitation for solving fundamental problems.
To further illustrate the convergence behavior, the convergence curves of IAPO and the other algorithms are plotted in Figure 6.
Figure 6 presents a comparative view of the convergence characteristics, highlighting IAPO’s performance against other algorithms. For most test functions—including F1 to F4, F7, F9 to F11, F15, and F17 to F20—the IAPO curve converges rapidly and attains high precision early in the process. By the 50th iteration, IAPO already shows a clear advantage in both convergence speed and convergence accuracy. On functions F5, F6, F12, and F13, IAPO reaches the optimal value after approximately 200 iterations, and its curve continues to decline gradually even up to 500 iterations, reflecting sustained search refinement. Notably, on F7 and F8, IAPO repeatedly escapes local optima and finds better solutions, demonstrating a strong ability to avoid premature convergence. Overall, these results confirm that IAPO maintains high optimization accuracy and faster convergence speed across a variety of benchmark problems.
Figure 7 presents a radar chart and a bar chart summarizing the ranking of each algorithm across the 20 benchmark functions. In the radar chart, IAPO (blue circles) encloses the smallest area and is consistently positioned closer to the center—indicating superior and more stable performance. Similarly, the bar chart confirms that IAPO achieves the highest overall ranking, underscoring its effectiveness compared to the other algorithms.

4.3. Experiments on CEC 2019 Test Functions

Following the experimental setup used for the benchmark functions in Section 4.2, the proposed IAPO algorithm was compared with WOA, HHO, PSO, WUTP, RFO, APO, JAPO, and ETAAPO. The corresponding results of the CEC 2019 test set are detailed in Table 5, with all parameter configurations consistent with those described in Section 4.2.
As shown in Table 5, IAPO achieves competitive results on most test functions. Notably, it generally exhibits smaller standard deviations than the other algorithms, indicating more stable performance. Among the ten CEC 2019 functions, IAPO ranks first on eight functions (F1, F2, F4–F8, and F10). On function F3, IAPO’s best value is slightly lower than that of JAPO, but its mean accuracy is the highest among all algorithms. On F9, IAPO’s mean value ranks third, behind JAPO and ETAAPO. Overall, IAPO demonstrates clear superiority in 80% of the functions, confirming its effectiveness in handling complex optimization problems.
Figure 8 displays the convergence curves of IAPO and the comparison algorithms on the CEC 2019 test set. The results show that IAPO converges rapidly to lower fitness values across most functions. On F1, F2, F5, and F6, it reaches high precision within a small number of iterations. By the 50th iteration, IAPO already shows significantly better accuracy and convergence speed than the other algorithms. On functions such as F3, F4, F8, and F10, IAPO repeatedly escapes local optima and continues to refine the solution, demonstrating strong local avoidance capability.
Figure 9 summarizes the ranking of each algorithm using a radar chart and a bar chart. In the radar chart, IAPO (blue circle) covers the smallest area and is positioned closest to the center, reflecting its superior and consistent performance. The bar chart further confirms that IAPO achieves the highest overall ranking.

4.4. Experiments on the CEC 2022 Test Set

The IAPO algorithm was compared with eight other swarm intelligence algorithms—WOA, HHO, PSO, WUTP, RFO, APO, JAPO, and ETAAPO—using the same parameter settings as in Section 4.2. The optimization results for the 10- and 20-dimensional CEC 2022 sets are presented in Table 6 and Table 7, respectively.
As shown in Table 6, for the 10-dimensional case, IAPO achieved the first rank on 10 out of 12 functions. On function F2, IAPO ranked second, slightly behind APO, while on F4, it placed third with a marginal difference from the top two algorithms, APO and JAPO. In the 20-dimensional setting (Table 7), IAPO ranked first on 11 functions, and second on F4, closely following JAPO. Overall, IAPO exhibited superior performance on 87.5% of the functions in the CEC 2022 test set.
Figure 10 and Figure 11 illustrate the convergence curves of IAPO and the comparison algorithms for the 10- and 20-dimensional sets, respectively. In both figures, IAPO converges more rapidly than the other algorithms, with smoother convergence curves and fewer instances of stagnation in local optima. This indicates that IAPO possesses not only faster convergence but also stronger global search capability and robustness for addressing challenging high-dimensional optimization problems.
Figure 12 and Figure 13 present the ranking comparisons based on the 10- and 20-dimensional CEC 2022 sets, respectively. In the radar charts, IAPO (blue circle) occupies the smallest area and is consistently positioned near the center, reflecting its stable and superior performance across most functions. The accompanying bar charts confirm that IAPO achieves the highest overall ranking, further validating its effectiveness.
In summary, IAPO exhibits rapid initial convergence, which is facilitated by the introduced mirror opposition-based learning. This mechanism creates a high-quality, diverse initial population, providing a favorable starting point for the optimization process. Furthermore, IAPO demonstrates exceptional performance on complex multimodal functions. This capability is primarily due to its adaptive differential evolution strategy, which dynamically balances global exploration and local exploitation throughout the iterations. Effectively introducing new individuals enables the algorithm to escape local optima consistently.

4.5. Nonparametric Statistical Analysis Using Wilcoxon Rank-Sum and Friedman Tests

To provide a statistically rigorous comparison of algorithm performance beyond basic metrics such as the best, mean, and standard deviation, this experiment employs the Wilcoxon rank-sum test at a 95% confidence level. This test assesses whether the differences between IAPO and each comparison algorithm are statistically significant. The test was implemented in MATLAB 2022a using the ranksum(x,y) function. A p-value below the 0.05 threshold indicates a statistically significant difference between the two algorithms, whereas a value above it suggests no such significance. In the results, the symbols “+”, “=”, and “−” denote that IAPO performs significantly better than, shows no significant difference from, or performs significantly worse than the comparison algorithm, respectively.
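An equivalent check can be sketched in Python with SciPy's ranksums, the counterpart of MATLAB's ranksum; the sample arrays below are purely hypothetical, and marking “+” by the lower mean is our own convention for illustration:

import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(0)
iapo_results = rng.normal(1.0, 0.1, 30)     # hypothetical 30-run fitness values
rival_results = rng.normal(1.3, 0.2, 30)    # hypothetical comparison algorithm

stat, p = ranksums(iapo_results, rival_results)
if p >= 0.05:
    mark = "="                               # no statistically significant difference
else:
    mark = "+" if iapo_results.mean() < rival_results.mean() else "-"
print(mark, p)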
The Wilcoxon test statistics for IAPO across the different test sets are summarized in Table 8. The results, presented as the number of functions where IAPO wins/ties/loses against the comparison algorithms, are 142/15/3 for the 20 benchmark functions, 70/8/2 for the CEC 2019 test set, 91/3/2 for the 10-dimensional CEC 2022 set, and 94/2/0 for the 20-dimensional CEC 2022 set. The prevalence of p-values less than 0.05 indicates that IAPO’s performance is significantly different from that of the other algorithms across most functions.
To further validate the overall performance ranking of IAPO against the comparison algorithms, the non-parametric Friedman test was conducted. For each test function, the algorithms were ranked based on their optimization results (with Rank 1 assigned to the best performer). If multiple algorithms achieved identical results, they were assigned an average rank. The test was implemented in MATLAB 2022a using the friedman(data) function. The Friedman test compares the average ranks of all algorithms across all functions, where a lower average rank indicates superior overall optimization performance. The calculation of the average rank is shown in Equation (18).
rank_a^j = \frac{1}{N_r} \sum_{i=1}^{N_r} R_i^j, \quad Averank^j = \frac{1}{N_{tf}} \sum_{a=1}^{N_{tf}} rank_a^j,
where rank_a^j denotes the average rank of the j-th algorithm over N_r independent runs on the a-th test function; N_r is the number of independent runs; R_i^j denotes the rank of the j-th algorithm among all algorithms in the i-th run; Averank^j is the overall average rank of the j-th algorithm; and N_{tf} is the number of test functions.
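The average ranks can be computed directly from the raw results; a sketch, assuming the per-run objective values are stored in an array of shape (functions, runs, algorithms):

import numpy as np
from scipy.stats import rankdata

def friedman_average_ranks(results):
    # results: shape (n_functions, n_runs, n_algorithms); lower value = better.
    per_run_ranks = rankdata(results, axis=2)   # R_i^j, ties receive average ranks
    rank_aj = per_run_ranks.mean(axis=1)        # rank_a^j for each test function
    return rank_aj.mean(axis=0)                 # Averank^j for each algorithm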
Table 9 shows the Friedman test results comparing IAPO with the eight comparison algorithms. IAPO achieved the lowest average ranks across all test sets: 1.15 on the 20 benchmark functions, 2.02 on the CEC 2019 test set, 1.50 on the 10-dimensional CEC 2022 set, and 1.48 on the 20-dimensional CEC 2022 set. Ranking first overall among the nine algorithms, IAPO demonstrates statistically superior optimization performance relative to the comparison algorithms.

5. Experiments in Practical Engineering Optimization Problems

The applicability of IAPO to practical engineering design is further assessed using three constrained optimization problems: the planetary gear train design, the heat exchanger network design (case 1), and the blending-pooling-separation problem. The performance of IAPO is compared with that of WOA, HHO, PSO, WUTP, RFO, APO, JAPO, and ETAAPO.
All experiments were conducted in MATLAB 2022a under identical hardware and software conditions. Employing a population size of 30 and a maximum of 300 iterations, each algorithm underwent 30 independent runs. The Best, Mean, and Standard Deviation (Std) values were recorded as evaluation metrics.
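All three problems are constrained, whereas the compared metaheuristics operate on unconstrained objectives. The paper does not state its constraint-handling scheme, so the sketch below shows one common option, a static penalty wrapper, with illustrative values for the penalty coefficient rho and the equality tolerance eps:

def penalized_objective(f, g_list, h_list, rho=1e6, eps=1e-4):
    # Static penalty: inequality constraints g(x) <= 0 and equality
    # constraints h(x) = 0 (relaxed to |h(x)| <= eps) are folded into f.
    def wrapped(x):
        viol = sum(max(0.0, g(x)) for g in g_list)
        viol += sum(max(0.0, abs(h(x)) - eps) for h in h_list)
        return f(x) + rho * viol
    return wrapped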

5.1. Planetary Gear Train Design Problem

The planetary gear train design problem [39] is a constrained optimization task in mechanical power transmission systems, with a schematic shown in Figure 14. Formulated for automotive applications, this problem aims to minimize the maximum gear ratio error in a planetary transmission system. The solution entails determining the gear tooth counts, modeled through six integer decision variables (the numbers of gear teeth, x_1–x_6 = N_1–N_6) and three discrete variables (the gear modules x_7, x_8 = m_1, m_2 and the number of planetary gears x_9 = p), totaling nine variables subject to eleven constraints.
Mathematically, the optimization model can be expressed as:
Consider the variable vector x = (x_1, x_2, x_3, x_4, x_5, x_6, x_7, x_8, x_9) = (N_1, N_2, N_3, N_4, N_5, N_6, m_1, m_2, p).
Minimize f(x) = \max_k |i_k - i_{0k}|, \quad k = 1, 2, \ldots, R,
where i_1 = \frac{N_6}{N_4}, \quad i_{01} = 3.11, \quad i_2 = \frac{N_6 (N_1 N_3 + N_2 N_4)}{N_1 N_3 (N_6 + N_4)}, \quad i_R = \frac{N_2 N_6}{N_1 N_3}, \quad i_{02} = 1.84.
Subject to
g_1(x) = m_2 (N_6 + 2.5) - D_{\max} \leq 0,
g_2(x) = m_1 (N_1 + N_2) + m_1 (N_2 + 2) - D_{\max} \leq 0,
g_3(x) = m_2 (N_4 + N_5) + m_2 (N_5 + 2) - D_{\max} \leq 0,
g_4(x) = |m_1 (N_1 + N_2) - m_2 (N_6 + N_3)| - m_1 - m_2 \leq 0,
g_5(x) = -(N_1 + N_2) \sin(\pi / p) + N_2 + 2 + \delta_{22} \leq 0,
g_6(x) = -(N_6 - N_3) \sin(\pi / p) + N_3 + 2 + \delta_{33} \leq 0,
g_7(x) = -(N_4 + N_5) \sin(\pi / p) + N_5 + 2 + \delta_{55} \leq 0,
g_9(x) = N_4 - N_6 + 2 N_5 + 2 \delta_{56} + 4 \leq 0,
g_{10}(x) = N_4 - N_6 + N_4 + 2 \delta_{34} + 4 \leq 0,
h_1(x) = \frac{N_6 - N_4}{p} = \text{integer}.
where
\delta_{22} = \delta_{33} = \delta_{55} = \delta_{35} = \delta_{56} = 0.5, \quad \beta = \cos^{-1} \left( \frac{(N_4 + N_5)^2 + (N_6 - N_3)^2 - (N_3 + N_5)^2}{2 (N_6 - N_3)(N_4 + N_5)} \right), \quad D_{\max} = 220.
With bounds
p ∈ {3, 4, 5}, m_1, m_2 ∈ {1.75, 2.0, 2.25, 2.5, 2.75, 3.0}, 17 ≤ N_1 ≤ 96, 14 ≤ N_2 ≤ 54, 14 ≤ N_3 ≤ 51, 17 ≤ N_4 ≤ 46, 14 ≤ N_5 ≤ 51, 48 ≤ N_6 ≤ 124.
The optimization results of IAPO and the other comparison algorithms on the planetary gear train design problem are summarized in Table 10, including the recorded best, mean, and standard deviation values. The corresponding optimal design variable sets for each algorithm are summarized in Table 11, which indicates that IAPO achieves the smallest values in all three performance metrics. Specifically, IAPO obtains the best objective value of 5.2559 × 10⁻¹ with the following optimal variable set: (35, 26, 25, 24, 20, 87, 1.5461, 2.0606, 1.4739). The convergence curve is illustrated in Figure 15, where IAPO exhibits the fastest convergence rate, with its curve consistently occupying the lowest position. These results demonstrate that IAPO is well-suited for this engineering problem and effectively minimizes the maximum gear ratio error.

5.2. Heat Exchanger Network Design Problem (Case 1)

The heat exchanger network design problem [40] involves the optimal configuration design of heat exchanger structures, representing a complex engineering optimization challenge. The problem requires heating a cold fluid stream using three hot fluids with distinct inlet temperatures, aiming to minimize the heat transfer area of the exchanger. The case considered here is the first instance (case 1) of this design problem. Its formulation includes two nonlinear and six linear equality constraints and a nonlinear objective function; moreover, nine linear inequality constraints are incorporated to account for temperature limitations.
Mathematically, the optimization model can be expressed as:
Minimize: f(x) = 35 x_1^{0.6} + 35 x_2^{0.6}.
Subject to:
h_1(x) = 200 x_1 x_4 - x_3 = 0,
h_2(x) = 200 x_2 x_6 - x_5 = 0,
h_3(x) = x_3 - 10000 (x_7 - 100) = 0,
h_4(x) = x_5 - 10000 (300 - x_7) = 0,
h_5(x) = x_3 - 10000 (600 - x_8) = 0,
h_6(x) = x_5 - 10000 (900 - x_9) = 0,
h_7(x) = x_4 \ln(x_8 - 100) - x_4 \ln(600 - x_7) - x_8 + x_7 + 500 = 0,
h_8(x) = x_6 \ln(x_9 - x_7) - x_6 \ln(600) - x_9 + x_7 + 600 = 0.
With bounds
0 ≤ x_1 ≤ 10, 0 ≤ x_2 ≤ 200, 0 ≤ x_3 ≤ 100, 0 ≤ x_4 ≤ 200, 1000 ≤ x_5 ≤ 2,000,000, 0 ≤ x_6 ≤ 600, 100 ≤ x_7 ≤ 600, 100 ≤ x_8 ≤ 600, 100 ≤ x_9 ≤ 900.
The optimization results of IAPO and the comparison algorithms on the heat exchanger network design problem (case 1) are summarized in Table 12 and Table 13. Table 12 presents the best, mean, and standard deviation values, while Table 13 lists the minimum area and corresponding optimal variables obtained by each algorithm. The results demonstrate that IAPO achieves the smallest values in all three performance metrics, attaining the minimum heat transfer area of 1.09 × 10³ with the optimal variable set: (0.809, 98.130, 78.244, 0.483, 1,999,921.277, 101.902, 100.008, 599.992, 700.008). The convergence curves are illustrated in Figure 16, where IAPO exhibits the fastest convergence rate, with its curve consistently occupying the lowest position. These results confirm that IAPO is well-suited for this problem and effectively minimizes the heat exchanger’s heat transfer area.

5.3. Blending-Pooling-Separation Problem

This problem [16] describes a typical chemical engineering unit operation that separates a three-component feed mixture into two multi-component product streams through a network of separators and splitting/blending/pooling, ultimately yielding high-purity products. The operating cost of each separator is proportional to its flow rate, and the process must satisfy mass balance constraints around each unit operation. The objective is to minimize total cost while adhering to material balances, component allocations, and flow constraints. The problem includes 15 nonlinear equality constraints, 17 linear equality constraints, five linear inequality constraints, and a linear objective function. Variables x1 to x38 denote flow indicators.
Mathematically, the optimization model can be expressed as:
Minimize: f(x) = 0.9979 + 0.00432 x_5 + 0.01517 x_{13}.
Subject to:
h_1(x) = x_4 + x_3 + x_2 + x_1 = 300,
h_2(x) = x_6 - x_8 - x_7 = 0,
h_3(x) = x_9 - x_{11} - x_{10} - x_{12} = 0,
h_4(x) = x_{14} - x_{16} - x_{17} - x_{15} = 0,
h_5(x) = x_{18} - x_{20} - x_{19} = 0,
h_6(x) = x_5 x_{21} - x_6 x_{22} - x_9 x_{23} = 0,
h_7(x) = x_5 x_{24} - x_6 x_{25} - x_9 x_{26} = 0,
h_8(x) = x_5 x_{27} - x_6 x_{28} - x_9 x_{29} = 0,
h_9(x) = x_{13} x_{30} - x_{14} x_{31} - x_{18} x_{32} = 0,
h_{10}(x) = x_{13} x_{33} - x_{14} x_{34} - x_{18} x_{35} = 0,
h_{11}(x) = x_{13} x_{36} - x_{14} x_{37} - x_{18} x_{35} = 0,
h_{12}(x) = 0.333 x_1 + x_{15} x_{31} - x_5 x_{21} = 0,
h_{13}(x) = 0.333 x_1 + x_{15} x_{34} - x_5 x_{24} = 0,
h_{14}(x) = 0.333 x_1 + x_{15} x_{37} - x_5 x_{27} = 0,
h_{15}(x) = 0.333 x_2 + x_{10} x_{23} - x_{13} x_{30} = 0,
h_{16}(x) = 0.333 x_2 + x_{10} x_{26} - x_{13} x_{33} = 0,
h_{17}(x) = 0.333 x_2 + x_{10} x_{29} - x_{13} x_{36} = 0,
h_{18}(x) = 0.333 x_3 + x_7 x_{22} + x_{11} x_{23} + x_{16} x_{31} + x_{19} x_{32} = 30,
h_{19}(x) = 0.333 x_3 + x_7 x_{25} + x_{11} x_{26} + x_{16} x_{34} + x_{19} x_{35} = 50,
h_{20}(x) = 0.333 x_3 + x_7 x_{28} + x_{11} x_{29} + x_{16} x_{37} + x_{19} x_{38} = 30,
h_{21}(x) = x_{21} + x_{24} + x_{27} = 1,
h_{22}(x) = x_{22} + x_{25} + x_{28} = 1,
h_{23}(x) = x_{23} + x_{26} + x_{29} = 1,
h_{24}(x) = x_{30} + x_{33} + x_{36} = 1,
h_{25}(x) = x_{31} + x_{34} + x_{37} = 1,
h_{26}(x) = x_{32} + x_{35} + x_{38} = 1,
h_{27}(x) = x_{25} = 0, h_{28}(x) = x_{28} = 0, h_{29}(x) = x_{23} = 0, h_{30}(x) = x_{37} = 0, h_{31}(x) = x_{32} = 0, h_{32}(x) = x_{35} = 0,
with bounds:
0 ≤ x_1, x_3, x_8, x_9, x_5, x_6, x_{14}, x_{18}, x_{10}, x_{16}, x_{13}, x_{20} ≤ 90,
0 ≤ x_2, x_4, x_7, x_{11}, x_{12}, x_{15}, x_{17}, x_{19} ≤ 150,
0 ≤ x_{21}, x_{23}, x_{24}, x_{25}, x_{27}, x_{28} ≤ 1,
0 ≤ x_{22}, x_{32}, x_{34}, x_{35}, x_{37}, x_{38} ≤ 1.2,
0 ≤ x_{26}, x_{29}, x_{30}, x_{31}, x_{33}, x_{36} ≤ 0.5.
The optimization results of IAPO and the comparison algorithms on the blending-pooling-separation problem are summarized in Table 14 and Table 15. Table 14 reports the best, mean, and standard deviation values, while Table 15 provides the minimum cost and corresponding optimal variables for each algorithm. The results show that IAPO dominates the comparison, with clear superiority across all three key metrics, attaining the lowest cost of 1.49 × 10⁴ with the optimal variable set: (32.5, 92.4, 59.9, 115.4, 28.2, 24.5, 6.8, 17.9, 3.8, 0.7, 2.2, 0.2, 80.7, 48.8, 15.4, 0.6, 32.3, 37.5, 28.9, 9.1, 0.6, 0.6, 0.1, 0.4, 0.5, 0.2, 0.6, 0.5, 0.5, 0.4, 0.4, 0.2, 0.4, 0.0, 0.9, 0.4, 0.4, 0.2). The convergence curves, illustrated in Figure 17, confirm that IAPO converges most rapidly, with its curve consistently positioned at the lowest point. These findings demonstrate that IAPO is highly effective for this problem and successfully minimizes the total system cost.
To position this work within current research trends, this paper compares IAPO with several recently proposed algorithms through literature-based benchmarking, as shown in Table 16. IAPO demonstrates optimal performance across the 20 benchmark functions, exhibits high applicability to engineering problems and strong robustness, and its overall performance is comparatively strong.

6. Conclusions and Expectations

This paper proposes an improved Arctic Puffin Optimization (IAPO) algorithm to address the shortcomings of slow initial convergence, susceptibility to local optima, and poor exploration-exploitation balance in the standard APO. The proposed IAPO algorithm integrates a mirror opposition-based learning mechanism with a dynamic differential evolution strategy. Three categories of metaheuristic algorithms were selected for comparison: classical algorithms, recently proposed swarm intelligence algorithms, and improved variants of APO. Comprehensive experiments were conducted on 20 benchmark functions, along with the CEC 2019 and CEC 2022 test sets. The results show that IAPO achieves higher accuracy, faster convergence, and superior robustness, securing first-place average rankings of 1.35, 1.30, 1.25, and 1.08 on the 20 benchmark functions, CEC 2019, and the 10- and 20-dimensional CEC 2022 test sets, respectively. Additionally, through three engineering optimization simulation experiments, IAPO achieved optimal values of 5.2559 × 10⁻¹, 1.09 × 10³, and 1.49 × 10⁴ for the respective engineering problems, ranking first in all cases. These results further validate the algorithm’s practical application value. However, the proposed IAPO also has limitations. Its performance was occasionally inferior to the standard APO on a small number of test functions. Moreover, while the adaptive strategy reduces the risk of premature convergence, it does not guarantee immunity from getting trapped in local optima in all scenarios.
Given that IAPO demonstrates superior overall performance on the test sets and the three engineering application problems despite suboptimal results on a few specific tests, future work will focus on the following directions: (1) applying IAPO to a wider range of practical engineering problems; (2) replacing the maximum iteration limit with a maximum number of fitness evaluations as the stopping criterion to ensure experimental fairness; (3) conducting comparative experiments between IAPO and more state-of-the-art algorithms on additional benchmark datasets; (4) selecting appropriate strategies based on the characteristics of different optimization problems, or combining IAPO with other algorithms; and (5) extending the IAPO algorithm to multi-objective optimization problems.

Author Contributions

Conceptualization, Y.Z.; Methodology, Y.Z. and T.W.; Software, Y.Z.; Validation, T.W.; Resources, N.Z.; Data curation, T.W. and N.Z.; Writing—original draft, Y.Z.; Writing—review & editing, Y.Z. and N.Z.; Supervision, T.W.; Project administration, T.W.; Funding acquisition, T.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research is supported by the National Natural Science Foundation of China (61966002), Key Program of Jiangxi Provincial Natural Science Foundation (20242BAB26024), and Jiangxi Provincial Postgraduate Innovation Specialty Fund Program (YC2024-S815).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data that support the findings of this study are available from the corresponding author upon request. There are no restrictions on data availability.

Conflicts of Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

References

1. Tang, J.; Liu, G.; Pan, Q. A review on representative swarm intelligence algorithms for solving optimization problems: Applications and trends. IEEE/CAA J. Autom. Sin. 2021, 8, 1627–1643.
2. Pandya, M.; Dassani, S. A novel algorithm for identifying the optimal CNN architectures regulated by swarm intelligence. Intell. Syst. Appl. 2022, 16, 200145.
3. Chakraborty, A.; Kar, A.K. Swarm Intelligence: A Review of Algorithms. In Nature-Inspired Computing and Optimization; Panigrahi, B.K., Hoda, M.N., Sharma, V., Goel, S., Eds.; Springer: Cham, Switzerland, 2017; pp. 475–494.
4. Tang, J.; Duan, H.; Lao, S. Swarm intelligence algorithms for multiple unmanned aerial vehicles collaboration: A comprehensive review. Artif. Intell. Rev. 2023, 56, 4295–4327.
5. Li, S.; Chen, H.; Wang, M.; Heidari, A.A.; Mirjalili, S. Slime mould algorithm: A new method for stochastic optimization. Future Gener. Comput. Syst. 2020, 111, 300–323.
6. Kaur, S.; Awasthi, L.K.; Sangal, A.L.; Dhiman, G. Tunicate swarm algorithm: A new bio-inspired based metaheuristic paradigm for global optimization. Eng. Appl. Artif. Intell. 2020, 90, 103541.
7. Hashim, F.A.; Hussien, A.G. Snake optimizer: A novel meta-heuristic optimization algorithm. Knowl.-Based Syst. 2022, 242, 108320.
8. Marini, F.; Walczak, B. Particle swarm optimization (PSO). A tutorial. Chemom. Intell. Lab. Syst. 2015, 149, 153–165.
9. Wang, W.; Tian, W.; Xu, D.; Liu, Y.; Wang, Z.; Lim, W.H.; Wang, B. Arctic puffin optimization: A bio-inspired metaheuristic algorithm for solving engineering design optimization. Adv. Eng. Softw. 2024, 195, 103694.
10. Duankhan, P.; Sunat, K.; Chiewchanwattana, S.; Nasa-ngium, P. The Differentiated Creative Search (DCS): Leveraging differentiated knowledge-acquisition and creative realism to address complex optimization problems. Expert Syst. Appl. 2024, 252, 123734.
11. Xia, J.Y.; Li, S.; Huang, J.J.; Yang, Z.; Jaimoukha, I.M.; Gunduz, D. Metalearning-based alternating minimization algorithm for nonconvex optimization. IEEE Trans. Neural Netw. Learn. Syst. 2022, 34, 5366–5380.
12. Long, X.; Cai, W.; Yang, L.; Huang, H. Improved particle swarm optimization with reverse learning and neighbor adjustment for space surveillance network task scheduling. Swarm Evol. Comput. 2024, 85, 101482.
13. Cai, X.; Zhang, C. An Innovative Differentiated Creative Search Based on Collaborative Development and Population Evaluation. Biomimetics 2025, 10, 260.
14. Guo, J.; Li, Y.; Huang, B.; Ding, L.; Gao, H.; Zhong, M. An online optimization escape entrapment strategy for planetary rovers based on Bayesian optimization. J. Field Robot. 2024, 41, 2518–2529.
15. Tian, Z.; Lee, A.; Zhou, S. Adaptive tempered reversible jump algorithm for Bayesian curve fitting. Inverse Probl. 2024, 40, 045024.
16. Kumar, A.; Wu, G.; Ali, M.Z.; Luo, Q.; Mallipeddi, R. A test-suite of non-convex constrained optimization problems from the real-world and some baseline results. Swarm Evol. Comput. 2020, 56, 100693.
17. Zhou, S.; Henrich, M.; Wei, Z.; Feng, F.; Yang, B.; Münstermann, S. A general physics-informed neural network framework for fatigue life prediction of metallic materials. Eng. Fract. Mech. 2025, 332, 111136.
18. Tao, Z.; Li, W.; Guo, Z.; Chen, Y.; Song, L.; Li, J. Aerothermal optimization of a turbine rotor tip configuration based on free-form deformation approach. Int. J. Heat Fluid Flow 2024, 110, 109644.
19. Fakhouri, H.N.; Alkhalaileh, M.S.; Hamad, F.; Alsharman, N.; Ghatasheh, N. Hybrid arctic puffin algorithm for solving design optimization problems. Algorithms 2024, 17, 589.
20. Sun, L.; Wang, B. Arctic puffin optimization algorithm based on multi-strategy blending. J. Comput. Commun. 2024, 12, 151–170.
21. Zhang, J. Performance Optimization in Communication Systems Using Deep Reinforcement Learning with Elite Reverse Learning Strategy—Arctic Puffin Optimization. In Proceedings of the 2025 3rd International Conference on Integrated Circuits and Communication Systems, Raichur, India, 21–22 February 2025; pp. 1–5.
22. Su, Y.; Jiang, W. Path Planning Based on the Improved Arctic Puffin Algorithm. In Proceedings of the 2025 5th International Conference on Control and Intelligent Robotics, New York, NY, USA, 20–22 June 2025; pp. 136–140.
23. Wolpert, D.H.; Macready, W.G. No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1997, 1, 67–82.
24. Tizhoosh, H.R. Opposition-based learning: A new scheme for machine intelligence. In Proceedings of the International Conference on Computational Intelligence for Modelling, Control and Automation and International Conference on Intelligent Agents, Web Technologies and Internet Commerce, Vienna, Austria, 28–30 November 2005; Volume 1, pp. 695–701.
25. Zhan, H.X.; Wang, T.H.; Zhang, X. A snake optimization algorithm integrating opposition-based learning and differential evolution strategy. J. Zhengzhou Univ. 2024, 56, 25–31.
26. Yao, L.; Yuan, P.; Tsai, C.Y.; Li, Y.; Chen, J. ESO: An enhanced snake optimizer for real-world engineering problems. Expert Syst. Appl. 2023, 230, 120594.
27. Lu, H.; Zhan, H.; Wang, T. A multi-strategy improved snake optimizer and its application to SVM parameter selection. Math. Biosci. Eng. 2024, 21, 7297–7336.
28. Qin, A.K.; Huang, V.L.; Suganthan, P.N. Differential evolution algorithm with strategy adaptation for global numerical optimization. IEEE Trans. Evol. Comput. 2008, 13, 398–417.
29. Rao, R.V.; Patel, V. Comparative performance of an elitist teaching-learning-based optimization algorithm for solving unconstrained optimization problems. Int. J. Ind. Eng. Comput. 2013, 4, 29–50.
30. Trivedi, I.N.; Pradeep, J.; Narottam, J.; Bhesdadiya, R.H.; Jangir, P.; Kumar, A.; Jangir, N. Novel adaptive whale optimization algorithm for global optimization. Indian J. Sci. Technol. 2016, 9, 1–16.
31. Sharma, P.; Raju, S. Metaheuristic optimization algorithms: A comprehensive overview and classification of benchmark test functions. Soft Comput. 2024, 28, 3123–3186.
32. Salgotra, R.; Mirjalili, S. Multi-algorithm based evolutionary strategy with adaptive mutation mechanism for constraint engineering design problems. Expert Syst. Appl. 2024, 258, 125055.
33. Chai, Y.; Chang, X.M.; Ren, S. Improved beluga whale optimization algorithm integrating multiple strategies. Comput. Eng. Appl. 2025, 61, 76–93.
34. Bujok, P.; Kolenovsky, P. Eigen crossover in cooperative model of evolutionary algorithms applied to CEC 2022 single objective numerical optimisation. In Proceedings of the 2022 IEEE Congress on Evolutionary Computation (CEC), Padua, Italy, 18–23 July 2022; pp. 1–8.
35. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67.
36. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris hawks optimization: Algorithm and applications. Future Gener. Comput. Syst. 2019, 97, 849–872.
37. Braik, M.; Al-Hiary, H. A novel meta-heuristic optimization algorithm inspired by water uptake and transport in plants. Neural Comput. Appl. 2025, 37, 13643–13724.
38. Braik, M.; Al-Hiary, H. Rüppell’s fox optimizer: A novel meta-heuristic approach for solving global optimization problems. Clust. Comput. 2025, 28, 292.
39. Xu, X.; Chen, J.; Lin, Z.; Wang, C.; Ma, X.; Li, H. Optimization design for the planetary gear train of an electric vehicle under uncertainties. Actuators 2022, 11, 49.
40. Chang, C.; Liao, Z.; Costa, A.L.H.; Zhang, N.; Yang, Y. Globally optimal design of intensified shell and tube heat exchangers using complete set trimming. Comput. Chem. Eng. 2022, 158, 107644.
41. Mohammed, B.O.; Aghdasi, H.S.; Salehpour, P. Dhole optimization algorithm: A new metaheuristic algorithm for solving optimization problems. Clust. Comput. 2025, 28, 430.
42. Askarzadeh, A.; Alipour, M.A. E2: A basic optimization method using exploration-exploitation concept. Soft Comput. 2025, 29, 5519–5539.
43. Wang, T.-L.; Gu, S.-W.; Liu, R.-J.; Chen, L.-Q.; Wang, Z.; Zeng, Z.-Q. Cuckoo catfish optimizer: A new meta-heuristic optimization algorithm. Artif. Intell. Rev. 2025, 58, 326.
44. Gao, Y.; Wang, J.; Li, C. Escape after love: Philoponella prominens optimizer and its application to 3D path planning. Clust. Comput. 2025, 28, 81.
Figure 1. Flowchart of the overall solution process of the IAPO algorithm.
Figure 2. Comparison of ablation study results on the 20 benchmark functions. (a) Radar chart illustrates the ranking of each algorithm across all test functions (in the radar charts, each axis represents a test function, with larger values indicating poorer performance; a smaller enclosed area suggests better overall performance). (b) Bar chart compares the overall average rankings of the four algorithms on the test set.
Figure 3. Comparison of ablation study results on the CEC 2019 test set. (a) Radar chart illustrates the ranking of each algorithm across all test functions. (b) Bar chart compares the overall average rankings of the four algorithms on the test set.
Figure 4. Comparison of ablation study results on the 10-dimensional CEC 2022 test set. (a) Radar chart illustrates the ranking of each algorithm across all test functions. (b) Bar chart compares the overall average rankings of the four algorithms on the test set.
Figure 5. Comparison of ablation study results on the 20-dimensional CEC 2022 test set. (a) Radar chart illustrates the ranking of each algorithm across all test functions. (b) Bar chart compares the overall average rankings of the four algorithms on the test set.
Figure 6. Comparative convergence behavior of the nine algorithms on 20 benchmark functions.
Figure 7. Performance ranking of IAPO and comparison algorithms on the 20 benchmark test functions. (a) Radar chart illustrates the ranking of each algorithm across all test functions. (b) Bar chart compares the overall average rankings of the nine algorithms on the test set.
Figure 8. Comparative convergence behavior of the nine algorithms on the CEC 2019 set.
Figure 9. Relative performance ranking of IAPO and competing algorithms on the CEC 2019 test set. (a) Radar chart illustrates the ranking of each algorithm across all test functions. (b) Bar chart compares the overall average rankings of the nine algorithms on the test set.
Figure 10. Comparative convergence behavior of the nine algorithms on the 10D CEC 2022 set.
Figure 11. Comparative convergence behavior of the nine algorithms on the 20D CEC 2022 set.
Figure 12. Performance ranking of IAPO and comparison algorithms on the 10-dimensional CEC 2022 test set. (a) Radar chart illustrates the ranking of each algorithm across all test functions. (b) Bar chart compares the overall average rankings of the nine algorithms on the test set.
Figure 13. Performance ranking of IAPO and comparison algorithms on the 20-dimensional CEC 2022 test set. (a) Radar chart illustrates the ranking of each algorithm across all test functions. (b) Bar chart compares the overall average rankings of the nine algorithms on the test set.
Figure 14. Schematic view of planetary gear train design problem.
Figure 15. Convergence curves of the nine algorithms applied to the planetary gear train design problem.
Figure 16. Convergence curves of the nine algorithms applied to the heat exchanger network design problem (case 1).
Figure 17. Convergence curves of the nine algorithms applied to the blending-pooling-separation problem.
Table 1. Set of 20 benchmark test functions.
Function | Name | Dimension | Range | Optimal Value | Function | Name | Dimension | Range | Optimal Value
F1 | Sphere | 30 | [−100,100] | 0 | F11 | Griewank | 30 | [−600,600] | 0
F2 | Schwefel 2.22 | 30 | [−10,10] | 0 | F12 | Penalized | 30 | [−50,50] | 0
F3 | Schwefel 1.2 | 30 | [−100,100] | 0 | F13 | Penalized2 | 30 | [−50,50] | 0
F4 | Schwefel 2.21 | 30 | [−100,100] | 0 | F14 | Foxholes | 2 | [−65,65] | 0.9980
F5 | Rosenbrock | 30 | [−30,30] | 0 | F15 | Kowalik’s | 4 | [−5,5] | 0.0003
F6 | Step | 30 | [−100,100] | 0 | F16 | Six-Hump Camelback | 2 | [−5,5] | −1.3106
F7 | Quartic | 30 | [−1.28,1.28] | 0 | F17 | Hartman6 | 6 | [0,1] | −3.3220
F8 | Schwefel | 30 | [−500,500] | −12,569.5 | F18 | Shekel5 | 4 | [0,10] | −10.1532
F9 | Rastrigin | 30 | [−5.12,5.12] | 0 | F19 | Shekel7 | 4 | [0,10] | −10.4029
F10 | Ackley | 30 | [−32,32] | 0 | F20 | Shekel10 | 4 | [0,10] | −10.5364
Table 2. CEC 2019 test function set.
Function | Name | Dimension | Range | Optimal Value
F1 | Storn’s Chebyshev Polynomial Fitting Problem | 9 | [−8192,8192] | 1
F2 | Inverse Hilbert Matrix Problem | 16 | [−16,384,16,384] | 1
F3 | Lennard-Jones Minimum Energy Cluster | 18 | [−4,4] | 1
F4 | Rastrigin’s Function | 10 | [−100,100] | 1
F5 | Griewank’s Function | 10 | [−100,100] | 1
F6 | Weierstrass Function | 10 | [−100,100] | 1
F7 | Modified Schaffer’s Function | 10 | [−100,100] | 1
F8 | Expanded Schaffer’s F6 Function | 10 | [−100,100] | 1
F9 | Happy Cat Function | 10 | [−100,100] | 1
F10 | Ackley Function | 10 | [−100,100] | 1
Table 3. CEC 2022 test function set.
Function | Name | Dimension | Range | Optimal Value
F1 | Shifted and Rotated Zakharov Function | 10/20 | [−100,100] | 300
F2 | Shifted and Rotated Rosenbrock’s Function | 10/20 | [−100,100] | 400
F3 | Shifted and Rotated Rastrigin’s Function | 10/20 | [−100,100] | 600
F4 | Shifted and Rotated Non-Continuous Rastrigin’s Function | 10/20 | [−100,100] | 800
F5 | Shifted and Rotated Levy Function | 10/20 | [−100,100] | 900
F6 | Hybrid Function 1 (N = 3) | 10/20 | [−100,100] | 1800
F7 | Hybrid Function 2 (N = 6) | 10/20 | [−100,100] | 2000
F8 | Hybrid Function 3 (N = 5) | 10/20 | [−100,100] | 2200
F9 | Composition Function 1 (N = 5) | 10/20 | [−100,100] | 2300
F10 | Composition Function 2 (N = 4) | 10/20 | [−100,100] | 2400
F11 | Composition Function 3 (N = 5) | 10/20 | [−100,100] | 2600
F12 | Composition Function 4 (N = 6) | 10/20 | [−100,100] | 2700
Table 4. Comparative performance of nine algorithms on 20 benchmark functions.
Function | Indicator | WOA | HHO | PSO | WUTP | RFO | APO | JAPO | ETAAPO | IAPO
F1 | Best | 1.02 × 10−86 | 6.25 × 10−111 | 4.30 × 103 | 2.58 × 10−1 | 3.25 × 103 | 1.23 × 10−4 | 8.66 × 10−5 | 6.55 × 10−21 | 0
F1 | Mean | 1.56 × 10−69 | 6.40 × 10−96 | 6.16 × 103 | 8.82 × 10−1 | 5.83 × 103 | 7.30 × 10−4 | 4.66 × 10−4 | 2.49 × 10−17 | 0
F1 | Std | 8.52 × 10−69 | 2.86 × 10−95 | 8.84 × 102 | 6.66 × 10−1 | 1.79 × 103 | 4.90 × 10−4 | 4.77 × 10−4 | 3.82 × 10−17 | 0
F2 | Best | 1.55 × 10−58 | 2.75 × 10−61 | 3.19 × 101 | 2.64 × 10−1 | 2.19 × 101 | 3.77 × 10−3 | 1.79 × 10−3 | 1.81 × 10−12 | 0
F2 | Mean | 4.61 × 10−52 | 4.58 × 10−50 | 3.57 × 101 | 6.04 × 10−1 | 3.19 × 101 | 8.05 × 10−3 | 6.29 × 10−3 | 1.50 × 10−11 | 0
F2 | Std | 1.95 × 10−51 | 2.19 × 10−49 | 2.48 × 100 | 2.96 × 10−1 | 4.75 × 100 | 2.20 × 10−3 | 3.43 × 10−3 | 1.20 × 10−11 | 0
F3 | Best | 3.13 × 104 | 3.53 × 10−99 | 9.96 × 103 | 1.11 × 104 | 3.91 × 103 | 1.92 × 10−2 | 5.22 × 10−2 | 3.07 × 10−19 | 0
F3 | Mean | 4.75 × 104 | 1.63 × 10−70 | 1.45 × 104 | 1.87 × 104 | 8.64 × 103 | 2.82 × 10−1 | 2.42 × 10−1 | 7.99 × 10−15 | 0
F3 | Std | 9.95 × 103 | 8.92 × 10−70 | 2.63 × 103 | 3.76 × 103 | 4.22 × 103 | 1.70 × 10−1 | 1.43 × 10−1 | 2.36 × 10−14 | 0
F4 | Best | 6.34 × 10−2 | 8.10 × 10−57 | 2.81 × 101 | 4.87 × 100 | 1.42 × 101 | 3.23 × 10−1 | 3.48 × 10−1 | 3.01 × 10−10 | 0
F4 | Mean | 4.34 × 101 | 2.06 × 10−48 | 3.17 × 101 | 8.90 × 100 | 2.80 × 101 | 7.05 × 10−1 | 6.86 × 10−1 | 1.24 × 10−7 | 0
F4 | Std | 2.87 × 101 | 1.12 × 10−47 | 2.53 × 100 | 2.36 × 100 | 5.78 × 100 | 2.76 × 10−1 | 2.04 × 10−1 | 3.70 × 10−7 | 0
F5 | Best | 2.69 × 101 | 4.25 × 10−6 | 1.66 × 106 | 4.82 × 101 | 4.71 × 105 | 5.58 × 10−1 | 5.55 × 10−1 | 1.87 × 10−2 | 1.48 × 10−7
F5 | Mean | 2.81 × 101 | 8.56 × 10−3 | 2.73 × 106 | 2.37 × 102 | 2.27 × 106 | 2.68 × 101 | 2.43 × 101 | 1.28 × 100 | 8.53 × 10−1
F5 | Std | 5.72 × 10−1 | 1.41 × 10−2 | 6.62 × 105 | 1.83 × 102 | 1.43 × 106 | 4.98 × 100 | 9.22 × 100 | 1.70 × 100 | 4.67 × 100
F6 | Best | 4.05 × 10−2 | 5.77 × 10−7 | 3.94 × 103 | 1.99 × 10−1 | 3.10 × 103 | 5.06 × 10−4 | 1.28 × 10−4 | 1.10 × 10−6 | 4.07 × 10−14
F6 | Mean | 3.96 × 10−1 | 2.10 × 10−4 | 5.91 × 103 | 7.42 × 10−1 | 6.17 × 103 | 2.78 × 10−3 | 1.73 × 10−3 | 2.32 × 10−2 | 4.56 × 10−13
F6 | Std | 2.52 × 10−1 | 2.47 × 10−4 | 7.22 × 102 | 4.12 × 10−1 | 1.96 × 103 | 1.82 × 10−3 | 1.55 × 10−3 | 3.61 × 10−2 | 3.00 × 10−13
F7 | Best | 1.89 × 10−4 | 2.30 × 10−6 | 1.16 × 100 | 1.40 × 10−2 | 2.42 × 10−1 | 9.59 × 10−3 | 1.68 × 10−2 | 2.39 × 10−3 | 7.82 × 10−6
F7 | Mean | 3.40 × 10−3 | 1.62 × 10−4 | 1.75 × 100 | 4.03 × 10−2 | 1.08 × 100 | 2.94 × 10−2 | 3.02 × 10−2 | 1.67 × 10−2 | 6.09 × 10−5
F7 | Std | 3.16 × 10−3 | 1.58 × 10−4 | 3.30 × 10−1 | 1.20 × 10−2 | 6.84 × 10−1 | 9.65 × 10−3 | 9.31 × 10−3 | 9.88 × 10−3 | 4.97 × 10−5
F8 | Best | −1.26 × 104 | −1.26 × 104 | −7.81 × 103 | −5.15 × 103 | −5.63 × 103 | −8.70 × 103 | −9.01 × 103 | −7.49 × 103 | −1.26 × 104
F8 | Mean | −1.05 × 104 | −1.24 × 104 | −5.85 × 103 | −4.19 × 103 | −4.05 × 103 | −6.84 × 103 | −6.91 × 103 | −5.71 × 103 | −1.26 × 104
F8 | Std | 1.79 × 103 | 6.52 × 102 | 7.23 × 102 | 3.63 × 102 | 7.00 × 102 | 1.11 × 103 | 1.12 × 103 | 7.00 × 102 | 3.84 × 10−2
F9 | Best | 0 | 0 | 2.08 × 102 | 1.92 × 102 | 1.34 × 102 | 4.05 × 101 | 3.91 × 101 | 0.00 × 100 | 0
F9 | Mean | 7.58 × 10−15 | 0 | 2.37 × 102 | 2.16 × 102 | 1.77 × 102 | 1.00 × 102 | 9.67 × 101 | 7.70 × 10−1 | 0
F9 | Std | 1.97 × 10−14 | 0 | 1.27 × 101 | 1.30 × 101 | 2.69 × 101 | 4.27 × 101 | 4.01 × 101 | 4.22 × 100 | 0
F10 | Best | 8.88 × 10−16 | 8.88 × 10−16 | 1.28 × 101 | 1.62 × 10−1 | 1.09 × 101 | 3.49 × 10−3 | 3.11 × 10−3 | 7.16 × 10−11 | 8.88 × 10−16
F10 | Mean | 4.56 × 10−15 | 8.88 × 10−16 | 1.38 × 101 | 4.42 × 10−1 | 1.32 × 101 | 7.73 × 10−3 | 5.83 × 10−3 | 3.56 × 10−10 | 8.88 × 10−16
F10 | Std | 2.55 × 10−15 | 0 | 4.28 × 10−1 | 2.01 × 10−1 | 1.10 × 100 | 3.54 × 10−3 | 2.20 × 10−3 | 3.36 × 10−10 | 0
F11 | Best | 0 | 0 | 4.22 × 101 | 4.45 × 10−1 | 2.66 × 101 | 4.74 × 10−4 | 1.96 × 10−4 | 0.00 × 100 | 0
F11 | Mean | 1.21 × 10−2 | 0 | 5.74 × 101 | 7.70 × 10−1 | 5.74 × 101 | 6.96 × 10−3 | 5.95 × 10−3 | 2.01 × 10−8 | 0
F11 | Std | 6.63 × 10−2 | 0 | 8.21 × 100 | 1.64 × 10−1 | 1.60 × 101 | 8.21 × 10−3 | 8.27 × 10−3 | 1.10 × 10−7 | 0
F12 | Best | 4.49 × 10−3 | 5.85 × 10−8 | 1.98 × 102 | 1.77 × 10−2 | 1.80 × 101 | 6.06 × 10−6 | 1.28 × 10−5 | 2.62 × 10−9 | 4.63 × 10−15
F12 | Mean | 2.48 × 10−2 | 1.17 × 10−5 | 1.66 × 105 | 2.92 × 10−1 | 6.30 × 104 | 7.00 × 10−3 | 3.51 × 10−3 | 2.54 × 10−2 | 2.27 × 10−14
F12 | Std | 2.01 × 10−2 | 1.82 × 10−5 | 1.11 × 105 | 3.72 × 10−1 | 2.12 × 105 | 2.63 × 10−2 | 1.89 × 10−2 | 1.30 × 10−1 | 1.61 × 10−14
F13 | Best | 8.61 × 10−2 | 3.71 × 10−7 | 1.02 × 106 | 1.87 × 10−1 | 1.17 × 105 | 3.01 × 10−4 | 7.81 × 10−5 | 1.27 × 10−5 | 5.03 × 10−15
F13 | Mean | 4.61 × 10−1 | 8.21 × 10−5 | 3.23 × 106 | 1.27 × 100 | 2.90 × 106 | 3.05 × 10−3 | 5.04 × 10−3 | 4.80 × 10−2 | 1.11 × 10−13
F13 | Std | 2.50 × 10−1 | 1.06 × 10−4 | 1.21 × 106 | 1.67 × 100 | 3.21 × 106 | 4.35 × 10−3 | 6.62 × 10−3 | 4.74 × 10−2 | 1.23 × 10−13
F14 | Best | 9.98 × 10−1 | 9.98 × 10−1 | 9.98 × 10−1 | 9.98 × 10−1 | 9.98 × 10−1 | 9.98 × 10−1 | 9.98 × 10−1 | 9.98 × 10−1 | 9.98 × 10−1
F14 | Mean | 2.90 × 100 | 5.01 × 100 | 9.98 × 10−1 | 9.98 × 10−1 | 2.14 × 100 | 9.98 × 10−1 | 1.06 × 100 | 9.98 × 10−1 | 1.66 × 100
F14 | Std | 2.95 × 100 | 4.42 × 100 | 3.61 × 10−4 | 1.54 × 10−16 | 2.62 × 100 | 0.00 × 100 | 3.62 × 10−1 | 0.00 × 100 | 4.37 × 100
F15 | Best | 3.08 × 10−4 | 3.08 × 10−4 | 7.05 × 10−4 | 6.16 × 10−4 | 3.07 × 10−4 | 3.07 × 10−4 | 3.07 × 10−4 | 3.07 × 10−4 | 3.07 × 10−4
F15 | Mean | 5.35 × 10−4 | 3.98 × 10−4 | 1.00 × 10−3 | 7.11 × 10−4 | 4.42 × 10−3 | 3.07 × 10−4 | 3.07 × 10−4 | 3.07 × 10−4 | 3.07 × 10−4
F15 | Std | 2.27 × 10−4 | 2.29 × 10−4 | 2.85 × 10−4 | 4.30 × 10−5 | 8.11 × 10−3 | 1.22 × 10−19 | 1.40 × 10−19 | 1.34 × 10−19 | 1.11 × 10−19
F16 | Best | −1.03 × 100 | −1.03 × 100 | −1.03 × 100 | −1.03 × 100 | −1.03 × 100 | −1.03 × 100 | −1.03 × 100 | −1.03 × 100 | −1.03 × 100
F16 | Mean | −1.03 × 100 | −1.03 × 100 | −1.03 × 100 | −1.03 × 100 | −1.03 × 100 | −1.03 × 100 | −1.03 × 100 | −1.03 × 100 | −1.03 × 100
F16 | Std | 1.48 × 10−9 | 1.85 × 10−9 | 4.73 × 10−4 | 1.57 × 10−5 | 6.78 × 10−16 | 6.78 × 10−16 | 6.78 × 10−16 | 6.78 × 10−16 | 6.78 × 10−16
F17 | Best | −3.32 × 100 | −3.26 × 100 | −3.31 × 100 | −3.32 × 100 | −3.32 × 100 | −3.32 × 100 | −3.32 × 100 | −3.32 × 100 | −3.32 × 100
F17 | Mean | −3.22 × 100 | −3.15 × 100 | −3.24 × 100 | −3.24 × 100 | −3.27 × 100 | −3.31 × 100 | −3.31 × 100 | −3.30 × 100 | −3.32 × 100
F17 | Std | 9.69 × 10−2 | 7.51 × 10−2 | 5.72 × 10−2 | 6.98 × 10−2 | 6.01 × 10−2 | 3.02 × 10−2 | 3.63 × 10−2 | 4.21 × 10−2 | 1.34 × 10−2
F18 | Best | −1.02 × 101 | −1.01 × 101 | −9.89 × 100 | −1.02 × 101 | −1.02 × 101 | −1.02 × 101 | −1.02 × 101 | −1.02 × 101 | −1.02 × 101
F18 | Mean | −8.02 × 100 | −5.83 × 100 | −6.12 × 100 | −7.42 × 100 | −6.88 × 100 | −1.02 × 101 | −1.02 × 101 | −1.02 × 101 | −1.02 × 101
F18 | Std | 2.89 × 100 | 1.78 × 100 | 2.78 × 100 | 3.25 × 100 | 3.03 × 100 | 7.07 × 10−15 | 7.07 × 10−15 | 6.63 × 10−15 | 6.90 × 10−15
F19 | Best | −1.04 × 101 | −5.09 × 100 | −9.92 × 100 | −1.04 × 101 | −1.04 × 101 | −1.04 × 101 | −1.04 × 101 | −1.04 × 101 | −1.04 × 101
F19 | Mean | −6.59 × 100 | −5.09 × 100 | −8.01 × 100 | −1.04 × 101 | −7.81 × 100 | −1.04 × 101 | −1.02 × 101 | −1.04 × 101 | −1.04 × 101
F19 | Std | 3.56 × 100 | 3.02 × 10−3 | 1.68 × 100 | 1.65 × 10−15 | 3.31 × 100 | 9.90 × 10−16 | 1.22 × 100 | 1.40 × 10−15 | 1.23 × 10−15
F20 | Best | −1.05 × 101 | −1.05 × 101 | −9.98 × 100 | −1.05 × 101 | −1.05 × 101 | −1.05 × 101 | −1.05 × 101 | −1.05 × 101 | −1.05 × 101
F20 | Mean | −6.97 × 100 | −5.39 × 100 | −8.56 × 100 | −1.05 × 101 | −8.33 × 100 | −1.05 × 101 | −1.05 × 101 | −1.05 × 101 | −1.05 × 101
F20 | Std | 3.03 × 100 | 1.47 × 100 | 1.26 × 100 | 3.09 × 10−15 | 3.47 × 100 | 2.06 × 10−15 | 1.98 × 10−15 | 2.56 × 10−15 | 1.81 × 10−15
Table 5. Comparative performance of nine algorithms on the CEC 2019 set.
Function | Indicator | WOA | HHO | PSO | WUTP | RFO | APO | JAPO | ETAAPO | IAPO
F1 | Best | 3.32 × 101 | 1.00 × 100 | 1.47 × 106 | 2.14 × 106 | 7.17 × 103 | 1.00 × 100 | 1.00 × 100 | 1.00 × 100 | 1.00 × 100
F1 | Mean | 1.04 × 107 | 1.00 × 100 | 1.81 × 107 | 6.64 × 106 | 2.97 × 106 | 3.55 × 100 | 2.83 × 100 | 2.65 × 101 | 1.00 × 100
F1 | Std | 1.41 × 107 | 0 | 8.46 × 106 | 2.98 × 106 | 4.63 × 106 | 6.65 × 100 | 4.92 × 100 | 3.66 × 101 | 0
F2 | Best | 3.11 × 103 | 4.60 × 100 | 2.62 × 103 | 3.76 × 103 | 2.24 × 102 | 3.98 × 100 | 4.15 × 100 | 4.36 × 100 | 3.93 × 100
F2 | Mean | 7.59 × 103 | 4.96 × 100 | 4.51 × 103 | 5.70 × 103 | 1.61 × 103 | 5.21 × 100 | 5.17 × 100 | 4.68 × 100 | 4.38 × 100
F2 | Std | 2.90 × 103 | 1.10 × 10−1 | 1.07 × 103 | 9.01 × 102 | 1.09 × 103 | 9.22 × 10−1 | 7.18 × 10−1 | 2.25 × 10−1 | 4.52 × 10−1
F3 | Best | 1.29 × 100 | 2.13 × 100 | 6.99 × 100 | 7.02 × 100 | 3.70 × 100 | 1.38 × 100 | 1.01 × 100 | 2.46 × 100 | 1.19 × 100
F3 | Mean | 5.62 × 100 | 4.97 × 100 | 8.49 × 100 | 9.44 × 100 | 6.93 × 100 | 4.26 × 100 | 3.73 × 100 | 3.87 × 100 | 3.01 × 100
F3 | Std | 2.03 × 100 | 1.31 × 100 | 6.67 × 10−1 | 7.62 × 10−1 | 1.39 × 100 | 1.94 × 100 | 1.85 × 100 | 8.60 × 10−1 | 1.02 × 100
F4 | Best | 2.84 × 101 | 2.62 × 101 | 2.92 × 101 | 3.46 × 101 | 2.32 × 101 | 5.97 × 100 | 4.98 × 100 | 1.43 × 101 | 1.00 × 100
F4 | Mean | 6.42 × 101 | 5.22 × 101 | 3.96 × 101 | 4.90 × 101 | 4.63 × 101 | 1.06 × 101 | 9.63 × 100 | 3.03 × 101 | 8.48 × 100
F4 | Std | 1.71 × 101 | 1.62 × 101 | 6.19 × 100 | 6.00 × 100 | 1.34 × 101 | 5.25 × 100 | 2.77 × 100 | 6.86 × 100 | 3.26 × 100
F5 | Best | 1.70 × 100 | 1.60 × 100 | 4.16 × 100 | 1.36 × 100 | 2.20 × 100 | 1.00 × 100 | 1.00 × 100 | 2.50 × 100 | 1.00 × 100
F5 | Mean | 2.56 × 100 | 2.04 × 100 | 6.29 × 100 | 1.83 × 100 | 1.48 × 101 | 1.07 × 100 | 1.05 × 100 | 8.27 × 100 | 1.04 × 100
F5 | Std | 7.74 × 10−1 | 3.31 × 10−1 | 1.23 × 100 | 2.09 × 10−1 | 1.26 × 101 | 5.53 × 10−2 | 2.99 × 10−2 | 4.14 × 100 | 4.10 × 10−2
F6 | Best | 4.88 × 100 | 3.08 × 100 | 4.92 × 100 | 1.03 × 100 | 3.52 × 100 | 1.00 × 100 | 1.00 × 100 | 2.15 × 100 | 1.00 × 100
F6 | Mean | 8.75 × 100 | 8.17 × 100 | 6.73 × 100 | 1.54 × 100 | 6.17 × 100 | 1.00 × 100 | 1.00 × 100 | 4.58 × 100 | 1.00 × 100
F6 | Std | 2.12 × 100 | 2.13 × 100 | 6.73 × 10−1 | 5.59 × 10−1 | 1.27 × 100 | 5.69 × 10−4 | 3.75 × 10−4 | 1.15 × 100 | 1.53 × 10−8
F7 | Best | 7.81 × 102 | 5.57 × 102 | 1.34 × 103 | 1.37 × 103 | 9.53 × 102 | 3.72 × 102 | 5.02 × 101 | 4.12 × 102 | 1.61 × 100
F7 | Mean | 1.37 × 103 | 1.22 × 103 | 1.67 × 103 | 1.68 × 103 | 1.48 × 103 | 9.11 × 102 | 6.86 × 102 | 1.04 × 103 | 4.61 × 102
F7 | Std | 3.07 × 102 | 2.95 × 102 | 2.27 × 102 | 1.57 × 102 | 3.12 × 102 | 3.44 × 102 | 4.24 × 102 | 2.48 × 102 | 1.88 × 102
F8 | Best | 4.02 × 100 | 3.96 × 100 | 3.87 × 100 | 4.14 × 100 | 3.79 × 100 | 2.79 × 100 | 3.12 × 100 | 2.84 × 100 | 2.53 × 100
F8 | Mean | 4.72 × 100 | 4.69 × 100 | 4.41 × 100 | 4.44 × 100 | 4.60 × 100 | 3.79 × 100 | 3.82 × 100 | 3.73 × 100 | 3.23 × 100
F8 | Std | 3.11 × 10−1 | 3.14 × 10−1 | 2.38 × 10−1 | 1.43 × 10−1 | 3.54 × 10−1 | 3.98 × 10−1 | 3.54 × 10−1 | 3.91 × 10−1 | 2.61 × 10−1
F9 | Best | 1.16 × 100 | 1.16 × 100 | 1.41 × 100 | 1.24 × 100 | 1.08 × 100 | 1.10 × 100 | 1.04 × 100 | 1.04 × 100 | 1.04 × 100
F9 | Mean | 1.42 × 100 | 1.46 × 100 | 1.57 × 100 | 1.32 × 100 | 1.73 × 100 | 1.17 × 100 | 1.16 × 100 | 1.16 × 100 | 1.17 × 100
F9 | Std | 1.70 × 10−1 | 2.22 × 10−1 | 7.93 × 10−2 | 5.11 × 10−2 | 9.09 × 10−1 | 4.08 × 10−2 | 5.56 × 10−2 | 7.19 × 10−2 | 5.34 × 10−2
F10 | Best | 2.10 × 101 | 2.10 × 101 | 2.12 × 101 | 2.13 × 101 | 2.12 × 101 | 1.00 × 100 | 1.00 × 100 | 1.04 × 101 | 1.00 × 100
F10 | Mean | 2.13 × 101 | 2.12 × 101 | 2.15 × 101 | 2.15 × 101 | 2.14 × 101 | 1.91 × 101 | 1.66 × 101 | 1.96 × 101 | 1.23 × 101
F10 | Std | 1.25 × 10−1 | 1.05 × 10−1 | 1.18 × 10−1 | 1.06 × 10−1 | 1.39 × 10−1 | 6.18 × 100 | 8.77 × 100 | 3.47 × 100 | 9.49 × 100
Table 6. Comparative performance of nine algorithms on the 10-dimensional CEC 2022 set.
Function | Indicator | WOA | HHO | PSO | WUTP | RFO | APO | JAPO | ETAAPO | IAPO
F1 | Best | 5.29 × 103 | 4.12 × 102 | 8.96 × 102 | 1.81 × 103 | 3.93 × 102 | 3.00 × 102 | 3.00 × 102 | 4.28 × 102 | 3.00 × 102
F1 | Mean | 2.64 × 104 | 9.82 × 102 | 1.68 × 103 | 3.37 × 103 | 4.38 × 103 | 3.00 × 102 | 3.00 × 102 | 1.28 × 103 | 3.00 × 102
F1 | Std | 1.19 × 104 | 4.05 × 102 | 5.17 × 102 | 1.03 × 103 | 3.22 × 103 | 3.17 × 10−7 | 2.29 × 10−7 | 5.53 × 102 | 9.89 × 10−10
F2 | Best | 4.02 × 102 | 4.00 × 102 | 4.15 × 102 | 4.68 × 102 | 4.17 × 102 | 4.00 × 102 | 4.00 × 102 | 4.01 × 102 | 4.00 × 102
F2 | Mean | 4.58 × 102 | 4.41 × 102 | 4.37 × 102 | 5.00 × 102 | 5.17 × 102 | 4.00 × 102 | 4.01 × 102 | 4.69 × 102 | 4.01 × 102
F2 | Std | 6.68 × 101 | 3.18 × 101 | 2.97 × 101 | 1.62 × 101 | 1.34 × 102 | 1.06 × 100 | 2.82 × 100 | 4.72 × 101 | 2.47 × 100
F3 | Best | 6.07 × 102 | 6.15 × 102 | 6.09 × 102 | 6.02 × 102 | 6.03 × 102 | 6.00 × 102 | 6.00 × 102 | 6.06 × 102 | 6.00 × 102
F3 | Mean | 6.40 × 102 | 6.38 × 102 | 6.14 × 102 | 6.06 × 102 | 6.24 × 102 | 6.00 × 102 | 6.00 × 102 | 6.10 × 102 | 6.00 × 102
F3 | Std | 1.40 × 101 | 1.07 × 101 | 3.05 × 100 | 1.75 × 100 | 9.79 × 100 | 2.82 × 10−4 | 6.24 × 10−5 | 3.27 × 100 | 3.66 × 10−14
F4 | Best | 8.22 × 102 | 8.14 × 102 | 8.24 × 102 | 8.23 × 102 | 8.14 × 102 | 8.03 × 102 | 8.03 × 102 | 8.09 × 102 | 8.04 × 102
F4 | Mean | 8.44 × 102 | 8.27 × 102 | 8.37 × 102 | 8.39 × 102 | 8.33 × 102 | 8.12 × 102 | 8.10 × 102 | 8.20 × 102 | 8.17 × 102
F4 | Std | 1.69 × 101 | 6.17 × 100 | 4.77 × 100 | 5.74 × 100 | 1.07 × 101 | 5.79 × 100 | 5.56 × 100 | 5.50 × 100 | 6.73 × 100
F5 | Best | 9.44 × 102 | 1.04 × 103 | 9.44 × 102 | 9.00 × 102 | 9.23 × 102 | 9.00 × 102 | 9.00 × 102 | 9.02 × 102 | 9.00 × 102
F5 | Mean | 1.51 × 103 | 1.41 × 103 | 1.00 × 103 | 9.00 × 102 | 1.17 × 103 | 9.00 × 102 | 9.00 × 102 | 9.41 × 102 | 9.00 × 102
F5 | Std | 4.78 × 102 | 1.39 × 102 | 2.63 × 101 | 6.01 × 10−5 | 1.72 × 102 | 8.29 × 10−2 | 1.99 × 10−10 | 3.20 × 101 | 3.66 × 10−14
F6 | Best | 2.48 × 103 | 1.93 × 103 | 9.93 × 104 | 8.09 × 104 | 1.83 × 103 | 1.80 × 103 | 1.80 × 103 | 1.82 × 103 | 1.80 × 103
F6 | Mean | 5.62 × 103 | 7.17 × 103 | 2.11 × 106 | 1.01 × 106 | 2.00 × 103 | 1.82 × 103 | 1.82 × 103 | 1.93 × 103 | 1.80 × 103
F6 | Std | 3.05 × 103 | 5.52 × 103 | 1.56 × 106 | 8.38 × 105 | 3.80 × 102 | 1.24 × 101 | 1.64 × 101 | 3.55 × 102 | 2.41 × 100
F7 | Best | 2.02 × 103 | 2.03 × 103 | 2.04 × 103 | 2.02 × 103 | 2.03 × 103 | 2.00 × 103 | 2.00 × 103 | 2.00 × 103 | 2.00 × 103
F7 | Mean | 2.08 × 103 | 2.08 × 103 | 2.06 × 103 | 2.03 × 103 | 2.06 × 103 | 2.00 × 103 | 2.00 × 103 | 2.03 × 103 | 2.00 × 103
F7 | Std | 3.67 × 101 | 3.48 × 101 | 7.88 × 100 | 6.17 × 100 | 2.33 × 101 | 2.83 × 100 | 5.49 × 100 | 8.33 × 100 | 1.06 × 10−3
F8 | Best | 2.22 × 103 | 2.21 × 103 | 2.23 × 103 | 2.23 × 103 | 2.21 × 103 | 2.20 × 103 | 2.20 × 103 | 2.20 × 103 | 2.20 × 103
F8 | Mean | 2.23 × 103 | 2.23 × 103 | 2.24 × 103 | 2.23 × 103 | 2.25 × 103 | 2.21 × 103 | 2.21 × 103 | 2.22 × 103 | 2.21 × 103
F8 | Std | 9.00 × 100 | 1.47 × 101 | 3.09 × 101 | 1.61 × 100 | 4.89 × 101 | 7.23 × 100 | 7.47 × 100 | 8.40 × 100 | 5.31 × 100
F9 | Best | 2.54 × 103 | 2.54 × 103 | 2.53 × 103 | 2.54 × 103 | 2.54 × 103 | 2.53 × 103 | 2.53 × 103 | 2.53 × 103 | 2.53 × 103
F9 | Mean | 2.62 × 103 | 2.61 × 103 | 2.56 × 103 | 2.55 × 103 | 2.61 × 103 | 2.53 × 103 | 2.53 × 103 | 2.58 × 103 | 2.53 × 103
F9 | Std | 5.31 × 101 | 5.29 × 101 | 5.01 × 101 | 7.24 × 100 | 4.58 × 101 | 1.25 × 10−5 | 3.14 × 10−6 | 2.74 × 101 | 0.00 × 100
F10 | Best | 2.50 × 103 | 2.50 × 103 | 2.50 × 103 | 2.50 × 103 | 2.50 × 103 | 2.50 × 103 | 2.50 × 103 | 2.50 × 103 | 2.50 × 103
F10 | Mean | 2.63 × 103 | 2.65 × 103 | 2.58 × 103 | 2.50 × 103 | 2.61 × 103 | 2.50 × 103 | 2.50 × 103 | 2.50 × 103 | 2.50 × 103
F10 | Std | 2.68 × 102 | 1.67 × 102 | 7.36 × 101 | 5.09 × 10−1 | 1.53 × 102 | 7.38 × 10−2 | 6.13 × 10−2 | 1.73 × 100 | 4.72 × 10−2
F11 | Best | 2.68 × 103 | 2.61 × 103 | 2.96 × 103 | 2.60 × 103 | 3.03 × 103 | 2.60 × 103 | 2.60 × 103 | 2.72 × 103 | 2.60 × 103
F11 | Mean | 3.00 × 103 | 2.83 × 103 | 4.41 × 103 | 2.85 × 103 | 5.15 × 103 | 2.60 × 103 | 2.62 × 103 | 2.79 × 103 | 2.60 × 103
F11 | Std | 1.48 × 102 | 1.25 × 102 | 6.74 × 102 | 1.14 × 102 | 1.99 × 103 | 6.67 × 10−6 | 8.98 × 101 | 6.45 × 101 | 4.22 × 10−13
F12 | Best | 2.89 × 103 | 2.87 × 103 | 2.87 × 103 | 2.88 × 103 | 2.87 × 103 | 2.86 × 103 | 2.86 × 103 | 2.87 × 103 | 2.86 × 103
F12 | Mean | 2.91 × 103 | 2.91 × 103 | 2.88 × 103 | 2.89 × 103 | 2.93 × 103 | 2.86 × 103 | 2.86 × 103 | 2.88 × 103 | 2.86 × 103
F12 | Std | 5.04 × 101 | 5.36 × 101 | 4.67 × 101 | 6.57 × 100 | 4.14 × 101 | 1.16 × 100 | 8.35 × 10−1 | 1.28 × 101 | 1.20 × 10−1
Table 7. Comparative performance of nine algorithms on the 20-dimensional CEC 2022 set.
Function | Indicator | WOA | HHO | PSO | WUTP | RFO | APO | JAPO | ETAAPO | IAPO
F1 | Best | 1.77 × 104 | 1.04 × 104 | 9.65 × 103 | 2.22 × 104 | 1.57 × 104 | 3.23 × 102 | 3.31 × 102 | 8.38 × 103 | 3.01 × 102
F1 | Mean | 3.90 × 104 | 2.38 × 104 | 1.43 × 104 | 3.13 × 104 | 2.81 × 104 | 5.78 × 102 | 5.79 × 102 | 1.54 × 104 | 3.16 × 102
F1 | Std | 1.37 × 104 | 7.12 × 103 | 3.24 × 103 | 6.30 × 103 | 9.19 × 103 | 2.73 × 102 | 3.51 × 102 | 3.93 × 103 | 2.08 × 101
F2 | Best | 4.83 × 102 | 4.51 × 102 | 4.97 × 102 | 5.70 × 102 | 6.54 × 102 | 4.45 × 102 | 4.45 × 102 | 5.99 × 102 | 4.45 × 102
F2 | Mean | 6.22 × 102 | 5.48 × 102 | 5.48 × 102 | 6.45 × 102 | 1.38 × 103 | 4.57 × 102 | 4.54 × 102 | 8.28 × 102 | 4.52 × 102
F2 | Std | 7.73 × 101 | 4.62 × 101 | 4.14 × 101 | 4.86 × 101 | 3.98 × 102 | 1.14 × 101 | 9.26 × 100 | 1.28 × 102 | 7.57 × 100
F3 | Best | 6.37 × 102 | 6.42 × 102 | 6.20 × 102 | 6.18 × 102 | 6.34 × 102 | 6.00 × 102 | 6.00 × 102 | 6.22 × 102 | 6.00 × 102
F3 | Mean | 6.70 × 102 | 6.62 × 102 | 6.29 × 102 | 6.26 × 102 | 6.60 × 102 | 6.00 × 102 | 6.00 × 102 | 6.35 × 102 | 6.00 × 102
F3 | Std | 1.66 × 101 | 8.07 × 100 | 4.97 × 100 | 3.56 × 100 | 1.36 × 101 | 1.10 × 10−1 | 1.11 × 10−1 | 6.94 × 100 | 3.97 × 10−5
F4 | Best | 8.65 × 102 | 8.56 × 102 | 8.97 × 102 | 9.02 × 102 | 8.87 × 102 | 8.25 × 102 | 8.18 × 102 | 8.77 × 102 | 8.24 × 102
F4 | Mean | 9.36 × 102 | 8.87 × 102 | 9.27 × 102 | 9.33 × 102 | 9.47 × 102 | 8.60 × 102 | 8.44 × 102 | 9.03 × 102 | 8.53 × 102
F4 | Std | 3.50 × 101 | 1.52 × 101 | 1.27 × 101 | 1.12 × 101 | 2.41 × 101 | 3.63 × 101 | 2.76 × 101 | 1.37 × 101 | 2.23 × 101
F5 | Best | 1.91 × 103 | 2.32 × 103 | 1.36 × 103 | 9.01 × 102 | 2.09 × 103 | 9.00 × 102 | 9.00 × 102 | 1.23 × 103 | 9.00 × 102
F5 | Mean | 4.00 × 103 | 3.03 × 103 | 1.67 × 103 | 9.38 × 102 | 3.15 × 103 | 9.01 × 102 | 9.02 × 102 | 1.62 × 103 | 9.00 × 102
F5 | Std | 1.18 × 103 | 3.36 × 102 | 1.54 × 102 | 4.70 × 101 | 7.43 × 102 | 2.08 × 100 | 3.16 × 100 | 2.60 × 102 | 8.58 × 10−2
F6 | Best | 5.10 × 105 | 5.85 × 104 | 3.44 × 107 | 3.36 × 106 | 7.74 × 103 | 1.95 × 103 | 1.89 × 103 | 3.89 × 103 | 1.89 × 103
F6 | Mean | 6.63 × 106 | 2.47 × 105 | 1.15 × 108 | 2.75 × 107 | 2.15 × 108 | 3.32 × 103 | 3.31 × 103 | 2.36 × 107 | 2.62 × 103
F6 | Std | 8.80 × 106 | 1.14 × 105 | 4.76 × 107 | 3.84 × 107 | 4.43 × 108 | 2.18 × 103 | 1.81 × 103 | 4.45 × 107 | 9.44 × 102
F7 | Best | 2.13 × 103 | 2.10 × 103 | 2.11 × 103 | 2.10 × 103 | 2.07 × 103 | 2.01 × 103 | 2.02 × 103 | 2.04 × 103 | 2.01 × 103
F7 | Mean | 2.23 × 103 | 2.18 × 103 | 2.14 × 103 | 2.13 × 103 | 2.17 × 103 | 2.04 × 103 | 2.04 × 103 | 2.08 × 103 | 2.03 × 103
F7 | Std | 5.30 × 101 | 4.40 × 101 | 3.10 × 101 | 1.32 × 101 | 7.11 × 101 | 1.14 × 101 | 1.06 × 101 | 2.30 × 101 | 3.53 × 100
F8 | Best | 2.23 × 103 | 2.24 × 103 | 2.24 × 103 | 2.25 × 103 | 2.23 × 103 | 2.22 × 103 | 2.22 × 103 | 2.23 × 103 | 2.22 × 103
F8 | Mean | 2.31 × 103 | 2.28 × 103 | 2.31 × 103 | 2.26 × 103 | 2.32 × 103 | 2.23 × 103 | 2.23 × 103 | 2.23 × 103 | 2.23 × 103
F8 | Std | 8.20 × 101 | 7.64 × 101 | 5.85 × 101 | 1.33 × 101 | 1.29 × 102 | 1.76 × 100 | 2.05 × 100 | 2.32 × 101 | 1.09 × 100
F9 | Best | 2.51 × 103 | 2.49 × 103 | 2.49 × 103 | 2.51 × 103 | 2.63 × 103 | 2.48 × 103 | 2.48 × 103 | 2.54 × 103 | 2.48 × 103
F9 | Mean | 2.60 × 103 | 2.53 × 103 | 2.53 × 103 | 2.56 × 103 | 2.80 × 103 | 2.48 × 103 | 2.48 × 103 | 2.62 × 103 | 2.48 × 103
F9 | Std | 5.59 × 101 | 3.53 × 101 | 5.43 × 101 | 2.30 × 101 | 1.19 × 102 | 4.25 × 10−1 | 6.70 × 10−1 | 5.20 × 101 | 3.66 × 10−4
F10 | Best | 2.50 × 103 | 2.50 × 103 | 2.51 × 103 | 2.51 × 103 | 2.62 × 103 | 2.50 × 103 | 2.50 × 103 | 2.52 × 103 | 2.50 × 103
F10 | Mean | 5.03 × 103 | 4.33 × 103 | 3.87 × 103 | 2.52 × 103 | 5.27 × 103 | 2.50 × 103 | 2.50 × 103 | 2.57 × 103 | 2.50 × 103
F10 | Std | 1.24 × 103 | 8.68 × 102 | 1.93 × 103 | 5.63 × 100 | 1.59 × 103 | 1.63 × 10−1 | 1.37 × 10−1 | 1.07 × 102 | 9.71 × 10−2
F11 | Best | 3.52 × 103 | 3.04 × 103 | 1.48 × 104 | 2.92 × 103 | 1.35 × 104 | 2.90 × 103 | 2.90 × 103 | 3.94 × 103 | 2.60 × 103
F11 | Mean | 3.98 × 103 | 3.74 × 103 | 1.93 × 104 | 2.94 × 103 | 2.62 × 104 | 2.90 × 103 | 2.90 × 103 | 5.51 × 103 | 2.91 × 103
F11 | Std | 2.72 × 102 | 6.53 × 102 | 2.26 × 103 | 1.22 × 101 | 1.01 × 104 | 3.28 × 10−1 | 2.93 × 10−1 | 6.20 × 102 | 6.91 × 101
F12 | Best | 2.97 × 103 | 3.00 × 103 | 2.95 × 103 | 3.10 × 103 | 3.15 × 103 | 2.94 × 103 | 2.94 × 103 | 3.07 × 103 | 2.93 × 103
F12 | Mean | 3.08 × 103 | 3.20 × 103 | 3.01 × 103 | 3.20 × 103 | 3.51 × 103 | 2.96 × 103 | 2.95 × 103 | 3.20 × 103 | 2.94 × 103
F12 | Std | 8.97 × 101 | 1.86 × 102 | 5.31 × 101 | 4.93 × 101 | 2.50 × 102 | 1.37 × 101 | 1.17 × 101 | 9.99 × 101 | 5.37 × 100
Table 8. Wilcoxon rank-sum test statistics (+/=/−).
IAPO vs. | 20 Benchmark Functions | CEC 2019 Test Set | CEC 2022 Test Set (D = 10) | CEC 2022 Test Set (D = 20)
WOA | 17/3/0 | 10/0/0 | 12/0/0 | 12/0/0
HHO | 20/0/0 | 9/0/1 | 11/1/0 | 12/0/0
PSO | 19/1/0 | 10/0/0 | 12/0/0 | 12/0/0
WUTP | 20/0/0 | 10/0/0 | 12/0/0 | 10/2/0
RFO | 17/3/0 | 9/0/1 | 12/0/0 | 12/0/0
APO | 16/3/1 | 7/3/0 | 9/2/1 | 12/0/0
JAPO | 16/3/1 | 7/3/0 | 11/0/1 | 10/2/0
ETAAPO | 17/2/1 | 8/2/0 | 12/0/0 | 12/0/0
Total | 142/15/3 | 70/8/2 | 91/3/2 | 94/2/0
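For reference, per-function +/=/− tallies of this kind are typically produced from the independent-run samples of each algorithm pair. The sketch below, assuming 30 runs per algorithm, a 0.05 significance level, and SciPy's rank-sum test, shows one common decision rule for a single minimization problem; the paper's exact run count and rule may differ.

```python
import numpy as np
from scipy.stats import ranksums

def classify(iapo_runs, rival_runs, alpha=0.05):
    # '+' : IAPO significantly better (lower error), '-' : significantly
    # worse, '=' : no significant difference at level alpha.
    p = ranksums(iapo_runs, rival_runs).pvalue
    if p >= alpha:
        return "="
    return "+" if np.mean(iapo_runs) < np.mean(rival_runs) else "-"
```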
Table 9. Friedman test results (average rank / rank per test set).
Algorithm | 20 Benchmark Functions | CEC 2019 Test Set | CEC 2022 (D = 10) Test Set | CEC 2022 (D = 20) Test Set
WOA | 4.19 / 6 | 7.42 / 9 | 7.82 / 9 | 7.49 / 8
HHO | 4.05 / 5 | 4.79 / 6 | 5.70 / 6 | 5.28 / 5
PSO | 4.77 / 7 | 4.49 / 4 | 4.54 / 4 | 3.92 / 4
WUTP | 5.85 / 8 | 7.36 / 8 | 6.76 / 7 | 6.57 / 7
RFO | 6.37 / 9 | 7.32 / 7 | 7.47 / 8 | 8.23 / 9
APO | 3.56 / 4 | 3.43 / 3 | 2.78 / 3 | 2.81 / 3
JAPO | 3.29 / 3 | 3.01 / 2 | 2.49 / 2 | 2.73 / 2
ETAAPO | 2.77 / 2 | 4.64 / 5 | 5.54 / 5 | 6.48 / 6
IAPO | 1.15 / 1 | 2.02 / 1 | 1.50 / 1 | 1.48 / 1
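Average ranks of this form can be reproduced by ranking the algorithms within each function and averaging down the columns. A minimal SciPy sketch follows; the input matrix shape and the average-tie handling of rankdata are assumptions about the protocol, not a statement of the paper's exact computation.

```python
import numpy as np
from scipy.stats import rankdata

def friedman_average_ranks(mean_errors):
    # mean_errors: array of shape (n_functions, n_algorithms), where
    # lower values are better; rankdata gives rank 1 to the smallest
    # value in each row and averages ties.
    ranks = np.apply_along_axis(rankdata, 1, mean_errors)
    return ranks.mean(axis=0)  # one average rank per algorithm
```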
Table 10. Optimization results for the planetary gear train design problem.
Indicator | WOA | HHO | PSO | WUTP | RFO | APO | JAPO | ETAAPO | IAPO
Best | 5.2735 × 10−1 | 5.2577 × 10−1 | 5.3706 × 10−1 | 5.6300 × 10−1 | 5.3236 × 10−1 | 5.2577 × 10−1 | 5.2577 × 10−1 | 5.2577 × 10−1 | 5.2559 × 10−1
Mean | 6.0987 × 10−1 | 5.3638 × 10−1 | 6.4595 × 10−1 | 7.4550 × 10−1 | 1.2129 × 100 | 5.2826 × 10−1 | 5.2875 × 10−1 | 5.3147 × 10−1 | 5.2790 × 10−1
Std | 9.4882 × 10−2 | 1.0106 × 10−2 | 2.4300 × 10−1 | 1.2548 × 10−1 | 6.9410 × 10−1 | 1.8985 × 10−3 | 3.4490 × 10−3 | 5.0353 × 10−3 | 2.0440 × 10−3
Rank | 6 | 5 | 7 | 8 | 9 | 4 | 2 | 3 | 1
Table 11. Independent variables corresponding to experimental results.
Algorithm | x1 | x2 | x3 | x4 | x5 | x6 | x7 | x8 | x9 | Best
WOA | 42 | 25 | 20 | 24 | 25 | 87 | 1.5781 | 1.9996 | 1.4096 | 5.2735 × 10−1
HHO | 42 | 30 | 24 | 24 | 23 | 87 | 1.1278 | 2.1629 | 1.2861 | 5.2577 × 10−1
PSO | 38 | 33 | 40 | 33 | 14 | 120 | 2.0000 | 1.0000 | 1.1563 | 5.3706 × 10−1
WUTP | 35 | 24 | 22 | 24 | 25 | 87 | 3.4818 | 1.6355 | 1.1661 | 5.6300 × 10−1
RFO | 51 | 26 | 20 | 29 | 36 | 107 | 2.2973 | 1.4142 | 1.0956 | 5.3236 × 10−1
APO | 37 | 22 | 20 | 24 | 25 | 87 | 3.9063 | 2.8195 | 1.3820 | 5.2577 × 10−1
JAPO | 35 | 26 | 25 | 24 | 21 | 87 | 1.8549 | 1.5882 | 1.2334 | 5.2577 × 10−1
ETAAPO | 35 | 26 | 25 | 24 | 25 | 87 | 2.9835 | 2.7809 | 1.1835 | 5.2577 × 10−1
IAPO | 35 | 26 | 25 | 24 | 20 | 87 | 1.5461 | 2.0606 | 1.4739 | 5.2559 × 10−1
Table 12. Optimization results for the heat exchanger network design problem.
Indicator | WOA | HHO | PSO | WUTP | RFO | APO | JAPO | ETAAPO | IAPO
Best | 1.31 × 1011 | 7.96 × 109 | 4.98 × 109 | 8.98 × 1013 | 9.39 × 108 | 3.23 × 106 | 2.79 × 105 | 1.40 × 108 | 1.09 × 103
Mean | 2.45 × 1014 | 3.89 × 1014 | 1.34 × 1014 | 3.19 × 1014 | 2.27 × 1014 | 1.64 × 109 | 2.95 × 108 | 7.27 × 1012 | 5.35 × 106
Std | 3.26 × 1014 | 4.11 × 1014 | 4.07 × 1014 | 1.23 × 1014 | 4.15 × 1014 | 5.05 × 109 | 5.66 × 108 | 2.69 × 1013 | 2.75 × 107
Rank | 5 | 8 | 6 | 9 | 7 | 3 | 2 | 4 | 1
Table 13. Independent variables corresponding to experimental results.
Algorithm | x1 | x2 | x3 | x4 | x5 | x6 | x7 | x8 | x9 | Best
WOA | 4.362 | 155.332 | 99.244 | 1.057 | 1,984,877.272 | 63.785 | 100.858 | 599.977 | 701.717 | 1.31 × 1011
HHO | 0.266 | 21.099 | 18.153 | 17.008 | 1,999,810.207 | 473.322 | 100.042 | 599.943 | 700.098 | 7.96 × 109
PSO | 9.243 | 80.114 | 83.735 | 0.000 | 2,000,000.000 | 124.693 | 100.000 | 600.000 | 699.916 | 4.98 × 109
WUTP | 4.919 | 51.101 | 14.953 | 88.623 | 1,912,821.010 | 169.528 | 118.924 | 595.857 | 704.465 | 8.98 × 1013
RFO | 0.022 | 24.309 | 40.891 | 157.277 | 1,999,210.439 | 411.170 | 100.060 | 599.988 | 700.044 | 9.39 × 108
APO | 3.877 | 104.385 | 67.432 | 0.092 | 1,999,897.728 | 95.794 | 100.012 | 599.994 | 700.008 | 3.23 × 106
JAPO | 0.045 | 30.486 | 61.723 | 6.930 | 1,999,935.023 | 328.011 | 100.006 | 599.993 | 700.006 | 2.79 × 105
ETAAPO | 0.235 | 190.614 | 63.126 | 4.063 | 1,999,803.081 | 52.459 | 100.024 | 599.968 | 700.034 | 1.40 × 108
IAPO | 0.809 | 98.130 | 78.244 | 0.483 | 1,999,921.277 | 101.902 | 100.008 | 599.992 | 700.008 | 1.09 × 103
Table 14. Optimization results for the blending-pooling-separation problem.
Indicator | WOA | HHO | PSO | WUTP | RFO | APO | JAPO | ETAAPO | IAPO
Best | 4.19 × 105 | 4.73 × 105 | 2.43 × 105 | 1.01 × 107 | 1.46 × 107 | 5.92 × 104 | 3.26 × 104 | 1.71 × 106 | 1.49 × 104
Mean | 2.37 × 106 | 3.75 × 106 | 2.44 × 106 | 1.56 × 107 | 7.64 × 107 | 1.15 × 105 | 1.10 × 105 | 3.39 × 106 | 4.15 × 104
Std | 2.48 × 106 | 2.73 × 106 | 2.73 × 106 | 2.98 × 106 | 3.43 × 107 | 3.62 × 104 | 3.48 × 104 | 1.08 × 106 | 2.33 × 104
Rank | 6 | 5 | 4 | 8 | 9 | 3 | 2 | 7 | 1
Table 15. Independent variables corresponding to experimental results (x1–x19 in the first block, x20–x38 and the best objective value in the second block; rows in both blocks are in the same algorithm order).
Algorithm | x1 | x2 | x3 | x4 | x5 | x6 | x7 | x8 | x9 | x10 | x11 | x12 | x13 | x14 | x15 | x16 | x17 | x18 | x19
WOA | 0 | 54 | 90 | 150 | 0 | 3 | 0 | 0.1 | 1.4 | 0.0 | 1.6 | 0.0 | 34 | 0 | 0.1 | 5 | 0 | 23 | 22
HHO | 24 | 31 | 90 | 150 | 4 | 4 | 11 | 0.2 | 2.8 | 0.0 | 5.3 | 6.2 | 15 | 18 | 2.4 | 0 | 14 | 13 | 1
PSO | 0 | 150 | 0 | 150 | 0 | 90 | 89 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 90 | 90 | 0.0 | 87 | 0 | 18 | 18
WUTP | 18 | 94 | 63 | 83 | 22 | 51 | 2 | 14.2 | 4.8 | 7.9 | 15.7 | 22.2 | 56 | 26 | 0.7 | 19 | 36 | 14 | 30
RFO | 66 | 79 | 43 | 111 | 48 | 2 | 2 | 0.2 | 39.9 | 68.5 | 11.5 | 4.5 | 30 | 72 | 26.8 | 33 | 45 | 54 | 47
APO | 43 | 71 | 51 | 135 | 25 | 28 | 19 | 9.1 | 3.0 | 0.6 | 1.2 | 1.0 | 75 | 35 | 2.0 | 27 | 5 | 28 | 2
JAPO | 54 | 79 | 35 | 133 | 45 | 33 | 13 | 19.9 | 2.6 | 0.3 | 1.1 | 1.1 | 72 | 51 | 19.0 | 12 | 20 | 39 | 38
ETAAPO | 67 | 81 | 81 | 60 | 29 | 31 | 2 | 27.5 | 0.3 | 3.6 | 0.7 | 11.3 | 54 | 61 | 6.6 | 10 | 41 | 39 | 46
IAPO | 33 | 92 | 60 | 115 | 28 | 24 | 7 | 17.9 | 3.8 | 0.7 | 2.2 | 0.2 | 81 | 49 | 15.4 | 1 | 32 | 38 | 29
Algorithm | x20 | x21 | x22 | x23 | x24 | x25 | x26 | x27 | x28 | x29 | x30 | x31 | x32 | x33 | x34 | x35 | x36 | x37 | x38 | Best
WOA | 0 | 0.0 | 1.2 | 0.0 | 0.1 | 0.0 | 0.3 | 0.0 | 0.5 | 0.1 | 0.0 | 0.5 | 0.0 | 0.5 | 0.0 | 0.8 | 0.5 | 0.7 | 0.0 | 4.19 × 105
HHO | 17 | 1.0 | 0.0 | 1.0 | 0.2 | 1.0 | 0.5 | 0.1 | 0.0 | 0.4 | 0.4 | 0.0 | 0.1 | 0.5 | 0.1 | 0.2 | 0.5 | 0.2 | 0.0 | 4.73 × 105
PSO | 1 | 0.2 | 0.0 | 0.4 | 1.0 | 0.0 | 0.4 | 1.0 | 0.0 | 0.5 | 0.5 | 0.4 | 0.0 | 0.5 | 0.3 | 0.8 | 0.5 | 0.4 | 0.0 | 2.43 × 105
WUTP | 15 | 0.4 | 0.4 | 0.3 | 0.4 | 0.3 | 0.3 | 0.5 | 0.6 | 0.4 | 0.3 | 0.1 | 0.4 | 0.3 | 0.7 | 0.4 | 0.5 | 0.8 | 0.6 | 1.01 × 107
RFO | 43 | 0.5 | 0.2 | 0.2 | 0.5 | 1.0 | 0.5 | 0.7 | 0.2 | 0.2 | 0.2 | 0.2 | 0.2 | 0.3 | 0.3 | 0.1 | 0.2 | 0.4 | 0.5 | 1.46 × 107
APO | 27 | 0.5 | 0.4 | 0.5 | 0.7 | 0.7 | 0.4 | 0.5 | 0.3 | 0.2 | 0.3 | 0.1 | 0.8 | 0.3 | 0.6 | 0.2 | 0.3 | 0.3 | 0.2 | 5.92 × 104
JAPO | 2 | 0.5 | 0.7 | 0.0 | 0.4 | 0.6 | 0.1 | 0.5 | 0.7 | 0.4 | 0.3 | 0.3 | 0.2 | 0.4 | 0.0 | 0.7 | 0.4 | 0.3 | 0.2 | 3.26 × 104
ETAAPO | 0 | 0.4 | 0.3 | 0.7 | 0.4 | 0.7 | 0.0 | 0.8 | 0.8 | 0.2 | 0.5 | 0.3 | 0.1 | 0.3 | 0.4 | 0.5 | 0.4 | 0.2 | 0.1 | 1.71 × 106
IAPO | 9 | 0.6 | 0.6 | 0.1 | 0.4 | 0.5 | 0.2 | 0.6 | 0.5 | 0.5 | 0.4 | 0.4 | 0.2 | 0.4 | 0.0 | 0.9 | 0.4 | 0.4 | 0.2 | 1.49 × 104
Table 16. Performance comparison of IAPO and the latest algorithms on 20 benchmark functions.
Algorithm | Publication Year | Number of Optimal Solutions (20 Functions) | Engineering Applicability | Robustness
IAPO | This experiment | 17 (Dim = 30) | Excellent performance | Strong (stable performance across different test sets)
DOA [41] | 2025 | 9 (Dim = 30) | Moderate performance | Poor performance on the CEC 2022 set
E2 [42] | 2025 | Not tested | Top-ranked performance | Not tested on these test sets
CCO [43] | 2025 | 18 (Dim = 10) | Not tested | Strong (superior performance across diverse test sets)
PPO [44] | 2025 | Not tested | Superior performance | Average (average performance on the CEC 2022 set)