Article

IALA: An Improved Artificial Lemming Algorithm for Unmanned Aerial Vehicle Path Planning

1
School of Mechanical Engineering, Dalian Jiaotong University, Dalian 116028, China
2
School of Materials Science and Engineering, Dalian Jiaotong University, Dalian 116021, China
*
Author to whom correspondence should be addressed.
Technologies 2026, 14(2), 91; https://doi.org/10.3390/technologies14020091
Submission received: 14 January 2026 / Revised: 22 January 2026 / Accepted: 27 January 2026 / Published: 1 February 2026
(This article belongs to the Section Information and Communication Technologies)

Abstract

With the increasing application of unmanned aerial vehicles (UAVs) in multiple fields, the path planning problem has become a key challenge in the optimization domain. This paper proposes an Improved Artificial Lemming Algorithm (IALA), which incorporates three strategies: the optimal information retention strategy based on individual historical memory, the hybrid search strategy based on differential evolution operators, and the local refined search strategy based on directed neighborhood perturbation. These strategies are designed to enhance the algorithm’s global exploration and local exploitation capabilities in tackling complex optimization problems. Subsequently, comparative experiments are conducted on the CEC2017 benchmark suite across three dimensions (30D, 50D, and 100D) against eight state-of-the-art algorithms proposed in recent years, including SBOA and DBO. The results demonstrate that IALA achieves superior performance across multiple metrics, ranking first in both the Wilcoxon rank-sum test and the Friedman ranking test. Analyses of convergence curves and data distributions further verify its excellent optimization performance and robustness. Finally, IALA and the comparative algorithms are applied to eight 3D UAV path planning scenarios and two amphibious UAV path planning models. In the independent repeated experiments across the eight scenarios, IALA attains the optimal performance 13 times in terms of the two metrics, Mean and Std. It also ranks first in the Monte Carlo experiments for the two amphibious UAV path planning models.

1. Introduction

Technological advancements have driven the increasing complexity of modern engineering problems, giving rise to optimization challenges characterized by high dimensionality and strong uncertainty [1]; the UAV path planning problem is a typical representative. Since the emergence of Unmanned Aerial Vehicles (UAVs), their rapid development has led to widespread applications [2]. Initially designed exclusively for military use, UAVs underwent practical flight tests during World War II [3]. Since then, UAVs have entered public view and been widely adopted across various industries. Over the past few decades, UAV-related products have experienced rapid development and iterative upgrades, triggering explosive growth in the number of UAVs [4]. The outbreak of COVID-19 in the winter of 2019 sparked a surge of interest in contactless delivery [5], making UAV-based transportation and logistics mainstream. Furthermore, against the backdrop of the “low-altitude economy,” UAV mission planning problems, including last-mile delivery [6], have become increasingly prominent. Beyond these application scenarios, UAVs are also deployed with precision in various specialized fields, such as irrigation support in agricultural production [7], marine environment monitoring [8], urban planning [9], search and rescue [10], and many others. As a key link in autonomous UAV flight missions, 3D path planning determines whether UAVs can safely and efficiently perform tasks in complex environments [11]. Therefore, the UAV path planning problem is a complex optimization problem in the field of robotic mission planning [12,13].
In recent years, swarm intelligence optimization algorithms have been gradually applied to the field of UAV path planning due to their global optimization capabilities [14]. As heuristic algorithms, they do not rely on gradient information and can find optimal solutions in complex search spaces, making them particularly suitable for high-dimensional and nonlinear optimization problems. To summarize, this paper addresses the key issues of UAV flight for mission scenarios in complex obstacle-terrain environments. By constructing mission environment scenarios and mathematical models, the improved swarm intelligence optimization algorithm is applied to solve the models. Finally, experiments are conducted to verify the effectiveness and robustness of the proposed model and algorithm, providing feasible solutions and ideas for future researchers. The main contributions of this study are as follows:
  • An Improved Artificial Lemming Algorithm (IALA) is proposed. On the basis of the original Artificial Lemming Algorithm (ALA), three improved strategies—Memory-based Learning Strategy, DE-hybrid Strategy, and directed neighborhood local search—are integrated to comprehensively enhance IALA’s ability to solve high-dimensional optimization problems and UAV path planning problems.
  • Multi-dimensional repeated independent experiments are conducted to compare IALA with eight other state-of-the-art algorithms on the CEC2017 benchmark suite, so as to highlight the superiority of IALA. Wilcoxon rank-sum test and Friedman ranking analysis are employed to verify the significant differences between IALA and the comparative algorithms.
  • IALA and the comparative algorithms are applied to an eight-scenario 3D UAV path planning model. Thirty independent experiments are repeated to compare the performance of IALA with other algorithms in engineering problems.
  • An amphibious UAV path planning model is constructed, and IALA along with the other eight algorithms are applied to this model. The path planning results are analyzed through visual presentation. In addition, Monte Carlo simulations of the nine algorithms are performed in this scenario to evaluate the robustness of the algorithms.
The outline of the remainder of this manuscript is as follows. Section 2 presents a brief summary of the development of existing metaheuristic algorithms. Section 3 introduces the principles and procedures of the original Artificial Lemming Algorithm (ALA). Section 4 describes and analyzes the strategies, pseudocode, and computational complexity of the improved algorithm. Section 5 conducts qualitative and quantitative tests on the CEC2017 benchmark suite, along with an analysis of the statistical test results. Section 6 carries out experimental operations and result analysis for the UAV 3D path planning model. Section 7 performs path planning experiments and Monte Carlo simulations for amphibious UAVs. Finally, Section 8 summarizes this research. The research framework is shown in Figure 1.

2. Literature Review

In both industrial applications and theoretical research, many practical problems are formulated as mathematical optimization problems. These problems are typically characterized by high computational dimensionality and complex constraints, posing significant challenges to the application of heuristic algorithms or traditional methods. The emergence of metaheuristic algorithms has provided a new approach to solving such problems. Whether for single-objective or multi-objective optimization, these algorithms explore and exploit the objective function and various constraints through iteration to find the optimal solution.
Swarm intelligence (SI) optimization algorithms are a class of metaheuristic algorithms that simulate the cooperative behaviors of biological swarms (such as wolf packs, birds, ants, etc.). Characterized by advantages including no reliance on gradient information, strong robustness, and adaptability to complex nonlinear problems, they have become a core tool for solving complex issues in fields such as engineering optimization and function approximation [15]. In recent years, with the increasing demands for high-dimensional optimization, multi-objective decision-making, and practical engineering constraints, researchers have promoted the development of swarm intelligence algorithms toward more efficient and practical directions through algorithm improvement, cross-domain integration, and application expansion.
Since the last century, a series of swarm optimization algorithms have emerged one after another, such as the Genetic Algorithm (GA) [16], Ant Colony Optimization (ACO) [17], Particle Swarm Optimization (PSO) [18], Differential Evolution (DE) [19], etc. Among them, GA is a type of evolutionary algorithm based on the theory of biological evolution. Its core idea simulates the mechanisms of “natural selection” and “genetic mutation” in nature, gradually approaching the optimal solution of the optimization problem through selection, crossover, and mutation operations during population iteration. PSO is a swarm intelligence algorithm that simulates the foraging behavior of bird flocks and the migration of fish schools. Each “particle” can be imagined as a thinking bird: it both remembers the “best foraging spot” it has visited (personal best) and perceives the “global optimal location” discovered by the entire flock (global best). By continuously adjusting its flight direction and position, it gradually approaches the optimal solution. Its core logic is simple: the velocity of a particle is jointly determined by “personal experience” (cognitive component) and “group experience” (social component), with random perturbations added to avoid convergence to local optima. The position is updated iteratively based on the velocity, eliminating the need for complex crossover and mutation operations. This “information sharing” mechanism endows PSO with a natural advantage in extensive exploration of new regions and exploitation of optimal solution areas. Additionally, it does not rely on gradient information, making it particularly suitable for nonlinear and high-dimensional problems. Beyond the aforementioned classic optimization algorithms, inspired by natural biological swarms and other natural phenomena, researchers have proposed numerous innovative algorithms with superior performance, which have been continuously improved by many scholars.
For example, Turkish scholar Karaboga et al. proposed the Artificial Bee Colony (ABC) algorithm [20], inspired by the foraging behavior of honeybees. The Sparrow Search Algorithm (SSA) [21] was proposed by Chinese scholars Xue and Shen based on the social hierarchy and foraging strategies of sparrows. Dehghani et al. developed the Coati Optimization Algorithm (COA) inspired by the behavior of coatis in nature, and simulations have demonstrated its significant advantage in balancing exploration in global search and exploitation in local search [22]. Arora et al. simulated the food search and mating behaviors of butterflies—where butterflies use olfaction to locate nectar or mating partners—and proposed and tested the Butterfly Optimization Algorithm (BOA) [23]. Zhong et al. presented a metaheuristic optimization algorithm inspired by the group behavior of beluga whales, namely the Beluga Whale Optimization (BWO) algorithm. By adaptively balancing exploration and exploitation capabilities and introducing a Lévy flight strategy to enhance global convergence performance, BWO has exhibited excellent optimization performance and robustness in standard test functions and practical engineering optimization problems [24]. Duan and Qiao proposed the Pigeon-Inspired Optimization (PIO) algorithm, a metaheuristic inspired by the navigation behavior of pigeon flocks. By combining two operators, it improves the global search capability and convergence speed during the optimization process, and has been successfully applied to the path planning problem of flying robots, showing good robustness and optimization performance [25]. The Nutcracker Optimization Algorithm (NOA) [26] simulates the spatial memory and random exploration behaviors of nutcracker birds during seed search, storage, and retrieval. Through the design of different global and local search operators, it effectively enhances the algorithm’s optimization performance in standard test functions and practical engineering problems. The Slime Mould Algorithm (SMA) [27] is a stochastic optimization algorithm proposed by Li et al., inspired by the oscillation patterns of slime moulds in nature. Its core lies in achieving optimization by simulating the behavioral rules and morphological dynamic changes of slime moulds during foraging. The Marine Predators Algorithm (MPA) [28], proposed by Faramarzi et al., draws core inspiration from the multiple behavioral mechanisms of predators in marine ecosystems. It not only incorporates the differentiated foraging strategies of various marine predators (such as sharks, tuna, etc.) but also integrates the regulation of optimal encounter probability between prey and predators, memory storage and vortex effects unique to the marine environment, and the impact of fish aggregation behavior on group optimization. The Artificial Gorilla Troops Optimizer (AGTO) [29], invented by Abdollahzadeh et al., is inspired by the social behaviors and movement characteristics of gorillas in wild survival. It constructs an optimization mechanism by simulating the natural behaviors of gorilla groups, such as cooperative interaction and territory exploration. Additionally, Australian scholar Mirjalili proposed the Grey Wolf Optimizer (GWO) [30], one of the most highly cited algorithms in recent years. Chinese scholars proposed the Special Forces Algorithm (SFA), inspired by human behaviors during task execution.
Other algorithms include Harris Hawks Optimization (HHO, 2019) [31], Black-Winged Kite Algorithm (BKA, 2024) [32], Crested Porcupine Optimizer (CPO, 2024) [33], Polar Lights Optimizer (PLO, 2024) [34], Whale Optimization Algorithm (WOA, 2016) [35], and Beaver Algorithm (BA, 2024). These methods all mimic biological processes such as natural selection or evolution, where solutions are represented as individuals that undergo reproduction and mutation to generate new and potentially improved candidate solutions for the given problem [36]. Examples of the development timeline of swarm intelligence optimization algorithms are shown in Figure 2.
In the application of numerous original algorithms, as the dimensionality of problems increases, the number of iterations and evaluations grows exponentially, leading to certain inherent challenges for such algorithms. It has gradually been found that existing algorithms more or less have certain defects, such as premature convergence, parameter sensitivity, and high complexity. Moreover, due to the wide applicability and success of swarm intelligence optimization algorithms, ongoing research directions have emerged [37]. Based on the No Free Lunch Theorem [38], different algorithms exhibit varying performances on different problems. Therefore, in the field of academic research, a large number of scholars have conducted extensive and in-depth studies on optimization algorithms.
For instance, regarding common improvement strategies: most swarm intelligence optimization algorithms randomly generate initial populations within given upper and lower bounds, resulting in significant randomness. Consequently, chaos mappings such as Logistic chaos mapping [39], Tent chaos mapping [40], and Cubic mapping have been developed and integrated with optimization algorithms to assist in finding optimal solutions within the given search space. Additionally, populations initialized using good point sets are more uniformly distributed compared to randomly generated ones.
Each swarm intelligence optimization algorithm has its own inherent optimization pattern. To achieve “large-stride exploitation + small-step exploration”, a random walk strategy, Lévy flight [41], is employed to enhance the effective convergence of the algorithm. To expand the search breadth during optimization, the opposition-based learning strategy [42] accelerates the efficiency of finding optimal solutions through the combined effect of current solutions and their opposite solutions. Common opposition-based learning strategies include random opposition-based learning, quasi-opposition-based learning, quasi-reflection learning, and dynamic opposition-based learning.
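To make the opposition-based learning idea concrete, the following is a minimal, illustrative Python/NumPy sketch that is not tied to any specific cited algorithm; the bounds lb and ub and the sphere objective are assumed placeholders.

import numpy as np

def opposition_step(population, lb, ub, objective):
    # Generate the opposite of each solution and keep the N best of the 2N candidates.
    opposite = lb + ub - population
    both = np.vstack([population, opposite])
    fitness = np.apply_along_axis(objective, 1, both)
    keep = np.argsort(fitness)[:population.shape[0]]
    return both[keep]

rng = np.random.default_rng(0)
lb, ub = -100.0, 100.0
pop = rng.uniform(lb, ub, size=(30, 10))
pop = opposition_step(pop, lb, ub, lambda x: np.sum(x ** 2))   # sphere objective (assumed)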
The mutation perturbation strategy is a randomness regulation mechanism that modifies the characteristics of individuals in the population with a low probability to help the algorithm escape local optima and maintain population diversity, such as the Cauchy–Gaussian mutation strategy. The adaptive strategy [43], as a method for dynamically optimizing algorithm parameters or search behaviors, is also frequently used in algorithm improvement. Furthermore, the multi-algorithm fusion strategy enhances the problem-solving capability of algorithms by combining the advantages or complementarities of different algorithms.
The Improved Artificial Lemming Algorithm (IALA) proposed in this paper is derived from the Artificial Lemming Algorithm (ALA), which was first presented by Xiao, Cui, et al. in 2025 [44]. Inspired by four wild survival behaviors of lemmings, ALA implements two types of search mechanisms through these behaviors: the exploration phase, dominated by long-distance migration and burrowing, is used to extensively scan the search space for promising regions, while the exploitation phase, centered on foraging and escaping from predators, focuses on local optimization within promising areas. Meanwhile, an energy-decreasing mechanism is introduced to dynamically adjust the trade-off between exploration and exploitation. This mechanism enables individuals to prioritize exploration in the early iterations and shift toward exploitation in the later stages, thereby enhancing the ability to escape local optima. However, similar to most bio-inspired metaheuristic methods, its optimization mode and the requirement for refined optimization still demand further in-depth research and improvement. A comparison of the advantages and disadvantages of some classic and recent algorithms is shown in Table 1.

3. ALA Algorithm

Lemmings are small rodents that inhabit the Arctic tundra, renowned for their rapid reproduction and collective migration. The Artificial Lemming Algorithm (ALA) is a metaheuristic algorithm that performs an optimized search by simulating four typical behaviors of lemmings. Specifically, ALA employs long-distance migration and hole-digging behaviors for global exploration, while foraging and predator-avoidance behaviors are utilized for local exploitation. Moreover, an energy-decreasing mechanism is introduced to dynamically balance exploration and exploitation. The detailed implementation of the original algorithm can be found in Reference [44].

3.1. Initialization

At the start of the algorithm, it is necessary to set parameters such as the population size N, the maximum number of iterations T_max, and the dimension of the decision variables Dim. Subsequently, the positions Z_i(0) (i = 1, ..., N) of the N lemming individuals are initialized within the search space, following a random distribution as adopted by most algorithms. The fitness value of each individual is calculated, and the current global optimal position Z_best(0) is recorded. At this point, the iteration counter t = 1 is initialized, and the cyclic iteration begins.
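As a minimal illustration (not the authors' implementation), this initialization step could be sketched in Python/NumPy as follows, with a sphere objective assumed purely as a placeholder.

import numpy as np

def initialize(N, Dim, lb, ub, objective, rng):
    # Randomly place N lemmings in [lb, ub]^Dim and record the initial global best.
    Z = rng.uniform(lb, ub, size=(N, Dim))                     # Z_i(0), i = 1, ..., N
    fitness = np.apply_along_axis(objective, 1, Z)
    best = np.argmin(fitness)
    return Z, fitness, Z[best].copy(), fitness[best]           # Z_best(0) and its fitness

rng = np.random.default_rng(0)
sphere = lambda x: np.sum(x ** 2)                              # assumed toy objective
Z, fit, Z_best, f_best = initialize(30, 30, -100.0, 100.0, sphere, rng)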

3.2. Long-Distance Migration (Exploration)

The long-distance migration behavior simulates the process where lemmings randomly conduct long-distance migration to search for better habitats when food is scarce. In ALA, the position update of the i-th individual is calculated using the following Formula (1):
Z_i(t+1) = Z_{best}(t) + F \times BM \times \left( R \times \left[ Z_{best}(t) - Z_i(t) \right] + (1 - R) \times \left[ Z_i(t) - Z_a(t) \right] \right)

F = \begin{cases} +1, & \text{if } \lfloor 2 \times rand \rfloor + 1 = 1 \\ -1, & \text{if } \lfloor 2 \times rand \rfloor + 1 = 2 \end{cases}
where Z_i(t+1) represents the position of the i-th individual in the next generation; Z_i(t) is its current position; Z_best(t) denotes the position of the current global optimal individual; Z_a(t) is the position of a randomly selected individual in the population; F is the direction flag, taking the value +1 or -1 and used to change the search direction; BM is the Brownian motion step vector following a standard normal distribution; and R = 2 × rand(1, Dim) - 1 is a 1 × Dim random vector whose elements are uniformly distributed within [-1, 1]. This update formula performs search by utilizing the position differences between the global optimal individual, the current individual, and a random individual, while employing Brownian motion to provide random perturbations, thereby enhancing the global search capability. In short, Formula (1) enables the individual to move randomly toward both the global optimal solution and a random companion, thereby achieving extensive regional exploration.
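A minimal Python/NumPy sketch of this migration update, following the reconstruction of Formula (1) above (the grouping of the bracketed terms is an assumption), is given below.

import numpy as np

def migration_update(Z_i, Z_best, Z_a, rng):
    # Long-distance migration: move toward the global best and a random companion.
    F = rng.choice([1.0, -1.0])                # direction flag
    BM = rng.standard_normal(Z_i.size)         # Brownian-motion step vector
    R = 2.0 * rng.random(Z_i.size) - 1.0       # random vector in [-1, 1]
    return Z_best + F * BM * (R * (Z_best - Z_i) + (1.0 - R) * (Z_i - Z_a))

rng = np.random.default_rng(1)
Z_i, Z_best, Z_a = rng.uniform(-100.0, 100.0, size=(3, 30))
Z_new = migration_update(Z_i, Z_best, Z_a, rng)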

3.3. Digging Holes (Exploration)

Digging behavior simulates the process in which lemmings randomly dig new burrows in their habitats. In the algorithm, the update Formula (2) corresponding to this behavior is as follows:
Z_i(t+1) = Z_i(t) + F \times L \times \left( Z_{best}(t) - Z_b(t) \right)
where Z_b(t) is the position of another randomly selected individual in the population, and L is a random factor related to the current number of iterations, which is used to control the digging distance. The value of L is given by the following Formula (3):
L = rand \times \left( 1 + \sin\left( \frac{t}{2} \right) \right)
Here, rand represents a random number within the interval [0, 1]. This mechanism dynamically correlates the digging behavior with the iterative process: in the early stages of iteration, when L takes relatively large values, individuals can conduct digging (exploration) over a wide range; as t increases, sin(t/2) fluctuates within [-1, 1], enabling dynamic adjustment of the search range. The meaning of the symbol F in this formula is the same as in Formula (1). In brief, Formula (2) enables individuals to perform random jumps near their current coordinates based on the positions of the global optimal individual and random companions, thereby forming new search regions.
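A corresponding illustrative sketch of the digging update in Formula (2), under the same assumptions as the previous snippet, is:

import numpy as np

def digging_update(Z_i, Z_best, Z_b, t, rng):
    # Digging holes: random jump driven by the gap between the best and a random peer.
    F = rng.choice([1.0, -1.0])                     # direction flag, as in Formula (1)
    L = rng.random() * (1.0 + np.sin(t / 2.0))      # iteration-dependent digging factor
    return Z_i + F * L * (Z_best - Z_b)

rng = np.random.default_rng(2)
Z_i, Z_best, Z_b = rng.uniform(-100.0, 100.0, size=(3, 30))
Z_new = digging_update(Z_i, Z_best, Z_b, t=10, rng=rng)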

3.4. Foraging (Exploitation)

Foraging behavior simulates the process in which lemmings randomly wander inside burrows to search for food sources. To mimic this behavior, ALA adopts a spiral search mechanism, and the position update Formula (4) is as follows:
Z_i(t+1) = Z_{best}(t) + F \times spiral \times rand \times Z_i(t)
where rand ∈ [0, 1] is a random number and the parameter F is the same as previously defined; spiral represents the spiral random search factor, which is defined as follows:
spiral = radius \times \left( \sin(2\pi \times rand) + \cos(2\pi \times rand) \right)

radius = \sqrt{ \sum_{j=1}^{Dim} \left( z_{best,j}(t) - z_{i,j}(t) \right)^2 }
Here, radius denotes the Euclidean distance between the current individual Z_i(t) and the global optimum Z_best(t). Formula (4) performs local search by means of random spiral jumps in the vicinity of the current optimal solution, thereby achieving local exploitation. The spiral parameter enables the search to possess directionality while preserving randomness, so as to meticulously explore the space around the optimal point.
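The spiral foraging update of Formulas (4) and (5) can be sketched as follows; this is an illustrative reconstruction rather than the reference implementation.

import numpy as np

def foraging_update(Z_i, Z_best, rng):
    # Foraging: spiral-shaped local search around the current global best.
    F = rng.choice([1.0, -1.0])
    radius = np.linalg.norm(Z_best - Z_i)           # Euclidean distance to the best
    spiral = radius * (np.sin(2.0 * np.pi * rng.random())
                       + np.cos(2.0 * np.pi * rng.random()))
    return Z_best + F * spiral * rng.random() * Z_i

rng = np.random.default_rng(3)
Z_i, Z_best = rng.uniform(-100.0, 100.0, size=(2, 30))
Z_new = foraging_update(Z_i, Z_best, rng)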

3.5. Avoiding Predators (Exploitation)

Predator avoidance behavior simulates the process in which lemmings quickly flee back to their burrows and perform deceptive movements when encountering threats from predators. The position update Formula (6) for this phase is as follows:
Z_i(t+1) = Z_{best}(t) + F \times G \times Levy(Dim) \times \left( Z_{best}(t) - Z_i(t) \right)
where G is an iteration-related escape coefficient, which is given by the following formula:
G = 2 \times \left( 1 - \frac{t}{T_{max}} \right)
As the number of iterations increases, G decreases linearly from 2 to 0, causing the escape step size to shrink gradually. Levy(Dim) denotes the Lévy-flight random step vector of dimension Dim, which is used to simulate the deceptive movements of lemmings. Regarding the specific implementation of the Lévy distribution, it is only necessary to understand that it occasionally generates random jumps with large step sizes, which helps the algorithm escape local optima. In summary, Formula (6) enables individuals to perform a Lévy-flight escape along the direction of the global optimum, randomly flee from the current local region, and enhance the local search capability.
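A sketch of the escape update in Formulas (6) and (7) is shown below; the Lévy step is generated with Mantegna's method (beta = 1.5), which is a common choice but is an assumption here, since the source does not fix the implementation.

import numpy as np
from math import gamma, pi, sin

def levy(dim, rng, beta=1.5):
    # Levy-flight step via Mantegna's method (a common implementation choice).
    sigma = (gamma(1 + beta) * sin(pi * beta / 2)
             / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

def escape_update(Z_i, Z_best, t, T_max, rng):
    # Predator avoidance: Levy-flight escape guided by the global best.
    F = rng.choice([1.0, -1.0])
    G = 2.0 * (1.0 - t / T_max)                     # escape coefficient, shrinks from 2 to 0
    return Z_best + F * G * levy(Z_i.size, rng) * (Z_best - Z_i)

rng = np.random.default_rng(4)
Z_i, Z_best = rng.uniform(-100.0, 100.0, size=(2, 30))
Z_new = escape_update(Z_i, Z_best, t=250, T_max=500, rng=rng)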

3.6. Energy Mechanism

ALA dynamically regulates the transition between the exploration and exploitation phases throughout the iterative process via the energy coefficient. The energy coefficient gradually decreases with iterations, and its calculation Formula (8) is as follows:
E(t) = 4 \times \arctan\left( 1 - \frac{t}{T_{max}} \right) \times \ln\left( \frac{1}{rand} \right)
This formula ensures that E generally decreases as t increases, while exhibiting random fluctuations. The threshold is usually set to 1: when E > 1, the algorithm is considered “energetic” and can perform exploration behaviors; when E ≤ 1, it executes exploitation behaviors. Averaged over a full run, the calculation yields P(E > 1) ≈ 0.5, meaning there is approximately a 50% chance of entering the exploration phase. This energy-decreasing mechanism ensures a smooth transition of the algorithm from global search to local search, preventing premature convergence to local optima.
In each iteration, the behavior pattern is determined according to the energy coefficient E: if E > 1 (exploration phase), either long-distance migration or digging is randomly selected for execution—for example, long-distance migration is performed with a probability of 0.3, otherwise digging is adopted. If E ≤ 1 (exploitation phase), either foraging or predator avoidance is randomly selected for execution—for example, foraging is performed with a probability of 0.5, otherwise predator avoidance is adopted. After the update is completed, the fitness values of all individuals are recalculated, the global optimum Z_best is updated, and then the next iteration is carried out until the maximum number of iterations is reached or the stopping criterion is satisfied. The above rules ensure that the search strategy switches dynamically during the iterative process. The overall flow of the ALA algorithm is shown in Figure 3 below. In this study, individuals in the population are encoded as candidate flight paths in the form of 3D waypoint sequences, and their fitness is jointly determined by multiple metrics including path length, collision penalty, and flight constraints. The global search and local search behaviors in the Artificial Lemming Algorithm (ALA) correspond to large-scale exploration and local refined adjustment in the path space, respectively, where the energy mechanism is used to dynamically control the switching of search phases. Furthermore, this paper introduces the strategies of historical memory guidance, differential evolution operator fusion, and directed neighborhood perturbation into the standard algorithmic framework. These strategies enable the algorithm to effectively refine the path for complex terrain and narrow feasible corridors while maintaining its global search capability, thereby better adapting to the constraint characteristics and engineering requirements of UAV path planning.
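The energy coefficient of Formula (8) and the resulting behavior switching can be illustrated with the following sketch, which also checks the roughly 50% exploration probability empirically; the probabilities 0.3 and 0.5 are those quoted above, and the behavior labels are placeholders for the per-behavior updates.

import numpy as np

def energy(t, T_max, rng):
    # Energy coefficient E(t) = 4 * arctan(1 - t / T_max) * ln(1 / rand).
    return 4.0 * np.arctan(1.0 - t / T_max) * np.log(1.0 / rng.random())

def select_behavior(t, T_max, rng):
    # Energy-driven switching rule between exploration and exploitation behaviors.
    if energy(t, T_max, rng) > 1.0:                               # exploration phase
        return "migration" if rng.random() < 0.3 else "digging"
    return "foraging" if rng.random() < 0.5 else "escape"         # exploitation phase

rng = np.random.default_rng(5)
T_max = 500
# Empirical check: averaged over a whole run, P(E > 1) comes out close to 0.5.
ts = rng.integers(1, T_max + 1, size=20000)
print(np.mean([energy(t, T_max, rng) > 1.0 for t in ts]), select_behavior(10, T_max, rng))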

4. Proposed Algorithm

This paper selects the Artificial Lemming Algorithm (ALA) as the basic optimization framework, mainly based on the fact that its energy-driven phased search mechanism can adaptively balance global exploration and local exploitation without relying on gradient information, which is highly compatible with the high-dimensional, nonlinear, strongly constrained and multimodal search characteristics of the 3D UAV path planning problem. Compared with common swarm intelligence optimization algorithms, ALA features a relatively simple structure and a small number of parameters, which facilitates the embedding of constraint handling and local refinement operators.
In this section, we modify the Artificial Lemming Algorithm based on its inherent properties and propose an Improved Artificial Lemming Algorithm (IALA). It comprehensively improves the performance of the original ALA through three strategies, namely the Memory-based Learning Strategy, Hybrid Search Based on DE, and directed neighborhood local search. Of these, the first two are adapted from existing strategies, while the last one is original and innovative.

4.1. Optimal Information Retention Strategy Based on Individual Historical Memory

In the original Artificial Lemming Algorithm (ALA), the position update of individuals relies entirely on the new solutions generated in the current iteration. When an individual obtains a relatively optimal solution during the random perturbation or exploration phase, this high-quality information may be destroyed by subsequent search operations, leading to unstable search directions and even performance degradation. To enhance the algorithm’s ability to memorize historical high-quality solutions, this paper introduces an optimal information retention strategy based on individual historical memory. Specifically, an independent historical memory unit is constructed for each individual in the population, which is used to record the historical optimal position and its corresponding fitness value experienced by the individual during the search process. Let the position and fitness of the i-th individual in the t-th iteration be X_i(t) and f_i(t), respectively; its historical optimal position and fitness are defined by the following Formula (9):
X_i^{pbest}(t) = \arg\min_{k \le t} f\left( X_i(k) \right), \qquad f_i^{pbest}(t) = \min_{k \le t} f\left( X_i(k) \right)
At the end of each generation, if the current fitness of an individual is better than its historical optimal fitness, the memory unit is updated; otherwise, the original memory information remains unchanged. Based on the historical memory pool of all individuals, the global historical optimal solution can be further obtained, as shown in the following Formula (10):
X^{gbest}(t) = \arg\min_{1 \le i \le N} f_i^{pbest}(t)
By introducing the individual historical memory mechanism, the algorithm can not only effectively retain the high-quality solutions discovered during the search process, but also avoid the loss of high-quality solutions caused by random updates, thereby improving the stability of the search process and the overall convergence accuracy.
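A minimal sketch of this memory-update step (Formulas (9) and (10)), operating on a whole population at once, might look like the following; the toy objective and the stand-in position update are assumptions for illustration.

import numpy as np

def update_memory(X, fitness, X_pbest, f_pbest):
    # Strategy 1: keep each individual's historical best and derive the global best.
    improved = fitness < f_pbest                     # Formula (9), evaluated incrementally
    X_pbest[improved] = X[improved]
    f_pbest[improved] = fitness[improved]
    g = np.argmin(f_pbest)                           # Formula (10): best of all memories
    return X_pbest, f_pbest, X_pbest[g].copy(), f_pbest[g]

rng = np.random.default_rng(6)
sphere = lambda x: np.sum(x ** 2)                    # assumed toy objective
X = rng.uniform(-100.0, 100.0, size=(30, 10))
f = np.apply_along_axis(sphere, 1, X)
X_pbest, f_pbest = X.copy(), f.copy()                # memory initialized at t = 0
X_new = X + rng.normal(0.0, 1.0, X.shape)            # stand-in for one ALA position update
f_new = np.apply_along_axis(sphere, 1, X_new)
X_pbest, f_pbest, X_gbest, f_gbest = update_memory(X_new, f_new, X_pbest, f_pbest)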

4.2. Hybrid Search Strategy Based on Differential Evolution Operator Fusion

Although the Artificial Lemming Algorithm (ALA) exhibits strong random search capability in the global exploration phase, its local exploitation capability is limited in the later stage of the search, making it prone to falling into local optima when dealing with complex multimodal problems. To compensate for this deficiency, this paper introduces the idea of differential evolution (DE) into ALA and constructs a DE-hybrid Strategy based on the fusion of differential evolution operators.
In this strategy, the differential mutation operation is constructed based on the current individual X_i(t) and the global historical optimal individual X^{gbest}(t), as shown in the following Formula (11):
V_i(t) = X_i(t) + F(t) \times \left( X^{gbest}(t) - X_i(t) \right)
where F(t) is the adaptive mutation factor, which changes dynamically with the number of iterations, and its calculation method is as follows:
F(t) = F_{min} + (F_{max} - F_{min}) \times \left( 1 - \frac{t}{T_{max}} \right)
To enhance the search diversity among dimensions, a binomial crossover operation is introduced to generate the trial vector U_i(t):
U_{i,j}(t) = \begin{cases} V_{i,j}(t), & \text{if } rand_j \le p \ \text{or} \ j = j_{rand} \\ X_{i,j}(t), & \text{otherwise} \end{cases}
where p is the crossover probability and j_rand is a randomly selected dimension index, which ensures that mutation occurs in at least one dimension. Subsequently, the individual position is updated via a greedy selection mechanism, as shown in the following Formula (14):
X_i(t+1) = \begin{cases} U_i(t), & \text{if } f\left( U_i(t) \right) < f\left( X_i(t) \right) \\ X_i(t), & \text{otherwise} \end{cases}
By integrating differential evolution operators, the algorithm not only maintains its inherent global exploration capability, but also significantly enhances the local exploitation capability in the later stage of the search, thus effectively improving the solution accuracy of the algorithm in complex optimization problems.
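The following sketch illustrates one generation of this DE-hybrid step (Formulas (11)–(14)); the values of F_min, F_max, and the crossover probability p are assumed here, since the excerpt does not specify them.

import numpy as np

def de_hybrid_step(X, fitness, X_gbest, t, T_max, objective, rng,
                   F_min=0.2, F_max=0.8, p=0.9):     # F_min, F_max, p are assumed values
    # Strategy 2: current-to-best mutation, binomial crossover, greedy selection.
    N, Dim = X.shape
    F_t = F_min + (F_max - F_min) * (1.0 - t / T_max)          # Formula (12)
    V = X + F_t * (X_gbest - X)                                # Formula (11)
    mask = rng.random((N, Dim)) <= p                           # Formula (13)
    mask[np.arange(N), rng.integers(Dim, size=N)] = True       # force one mutated dimension
    U = np.where(mask, V, X)
    f_U = np.apply_along_axis(objective, 1, U)
    better = f_U < fitness                                     # Formula (14)
    X[better], fitness[better] = U[better], f_U[better]
    return X, fitness

rng = np.random.default_rng(7)
sphere = lambda x: np.sum(x ** 2)
X = rng.uniform(-100.0, 100.0, size=(30, 10))
f = np.apply_along_axis(sphere, 1, X)
X, f = de_hybrid_step(X, f, X[np.argmin(f)].copy(), 100, 500, sphere, rng)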

4.3. Local Refined Search Strategy Based on Directed Neighborhood Perturbation

To further improve the search accuracy of the algorithm in the neighborhood of high-quality solutions, this paper proposes a local refined search strategy based on directed neighborhood perturbation. This strategy is mainly aimed at performing directed minor perturbations on the elite individuals in the historical memory pool, so as to fully explore potential optimal regions.
First, the memory pool is sorted according to the historical optimal fitness of the individuals, and the top N_e elite individuals are selected to form the elite subset:
X_e = \left\{ X_1^{pbest}, X_2^{pbest}, \ldots, X_{N_e}^{pbest} \right\}
For each elite individual X_k^{pbest}, calculate the Euclidean distances between it and the other elite individuals, and select the two nearest neighbors X_{k1}^{pbest} and X_{k2}^{pbest}. Based on the neighborhood difference direction, the directed perturbation search is constructed as Formula (16):
X_k^{new} = X_k^{pbest} + \alpha(t) \times rand \times \left( X_{k1}^{pbest} - X_{k2}^{pbest} \right)
where α(t) is a perturbation intensity factor that decreases with the number of iterations and is used to control the search step size:
\alpha(t) = 2 \times \left( 1 - \frac{t}{T_{max}} \right)
If the fitness of the individual is improved after perturbation, its historical memory unit will be updated. This directed neighborhood local search strategy can conduct refined exploration of the neighborhood of high-quality solutions while maintaining the rationality of the search direction, thus effectively improving the local search capability of the algorithm and the accuracy of solutions in complex search spaces.
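An illustrative sketch of this directed neighborhood perturbation (Formulas (15)–(17)) is given below; the elite-set size N_e and the toy objective are assumptions.

import numpy as np

def directed_local_search(X_pbest, f_pbest, t, T_max, objective, rng, N_e=5):
    # Strategy 3: directed perturbation of elite memories along neighbor differences.
    alpha = 2.0 * (1.0 - t / T_max)                        # Formula (17): shrinking step
    elite = np.argsort(f_pbest)[:N_e]                      # Formula (15): elite subset
    E = X_pbest[elite]
    for pos, k in enumerate(elite):
        d = np.linalg.norm(E - E[pos], axis=1)
        d[pos] = np.inf                                    # exclude the elite itself
        n1, n2 = np.argsort(d)[:2]                         # two nearest elite neighbors
        X_new = E[pos] + alpha * rng.random() * (E[n1] - E[n2])   # Formula (16)
        f_new = objective(X_new)
        if f_new < f_pbest[k]:                             # keep only improving moves
            X_pbest[k], f_pbest[k] = X_new, f_new
    return X_pbest, f_pbest

rng = np.random.default_rng(8)
sphere = lambda x: np.sum(x ** 2)
X_pbest = rng.uniform(-100.0, 100.0, size=(30, 10))
f_pbest = np.apply_along_axis(sphere, 1, X_pbest)
X_pbest, f_pbest = directed_local_search(X_pbest, f_pbest, 200, 500, sphere, rng)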
The three proposed improvement strategies are functionally complementary to each other: the individual historical memory is used to retain and propagate high-quality solutions, which significantly improves search stability and solution reproducibility; the differential evolution fusion operator enhances population diversity through mutation and crossover, thereby boosting global search capability and convergence speed; the directed neighborhood local search performs fine-tuning on the problem near constraint boundaries and narrow corridors, which strengthens solution feasibility. The optimal solutions generated by DE are written into the historical memory, which in turn serves as the reference target for local perturbation and refinement, while the refined optimal solutions are diffused again among the population via DE, forming a positive feedback loop. The three strategies achieve synergy through such interactions.

4.4. Pseudocode

Based on the previously proposed improvement strategies, the pseudocode of the newly introduced algorithm is shown in Algorithm 1.
Algorithm 1: IALA (Improved Artificial Lemming Algorithm)
Input: max iterations: Tmax, population size: N, problem dimension: Dim
Output: Optimal position: Z_best, fitness_best
1. Initialize N individuals Zi (i = 1, …, N);
2. evaluate fitness; Set Zbest = arg min fitness; pbesti = Zi
3. While t ≤ Tmax do
4.   Compute energy coefficient E(t); For each individual Zi do
5.      If E > 1 then                # Exploration phase
6.         If rand < 0.3 then        # Long-distance migration
7.            update Zi
8.         Else                      # Digging holes
9.            update Zi
10.        End If
11.     Else                         # Exploitation phase
12.        If rand < 0.5 then        # Foraging
13.           update Zi
14.        Else                      # Predator avoidance
15.           update Zi
16.        End If
17.     End If
18.     Apply boundary control and greedy selection
19.     Update pbesti if fitness(Zi) improves             ▷ Strategy 1
20.  End For
21.   Apply DE operator: current-to-best mutation and crossover       ▷ Strategy 2
22.   Perform neighborhood directed local search on elite pbest set    ▷ Strategy 3
23.   Update Z_best from pbest library
24.   t = t + 1
25. End While
26. return Z_best and fitness_best

4.5. Complexity Analysis

Let N denote the population size, D the problem dimension, T_max the maximum number of iterations, and T_f the cost of a single evaluation of the objective function. On the basis of retaining the original update mechanism, IALA introduces the memory pool, the DE operation, the directed local search, and other strategies. The additional overhead brought by the first two is a linear term at the level of a constant factor, accompanied by additional objective function evaluations. In contrast, the directed local search computes and sorts the distances between each elite individual and the other individuals, resulting in a quadratic term O(N^2 D + N^2 log N). Therefore, in the worst case, the overall time complexity of IALA is O(T_max (N^2 D + N^2 log N + N T_f)). If T_f = O(D), this simplifies to O(T_max N^2 D), and the space complexity is O(N D).
The original ALA performs only one population update, one boundary correction, and one objective function evaluation for each candidate solution in each iteration. Therefore, the time complexity per generation is O(N D + N T_f). If T_f = O(D), this simplifies to O(N D) per generation, with an overall complexity of O(T_max N D). The space complexity is also O(N D).
Compared with the original ALA, IALA improves the stability and convergence quality of solutions through three strategies, but at the cost of increasing the time complexity from linear to quadratic with respect to N .

5. CEC2017 Test

CEC2017 consists of 29 test functions, covering problems ranging from simple cases to complex high-dimensional combinatorial optimization problems. It can comprehensively evaluate the global convergence capability, exploration capability, and integrated optimization performance of the algorithms under test. We selected state-of-the-art and highly cited meta-heuristic or improved algorithms published in the past five years for comparative experiments with the IALA algorithm. The comparative algorithms include VPPSO (2023) [45], IAGWO (2024) [46], IWOA (2025) [47], DBO (2022) [48], ARO (2022) [49], AOO (2024) [50], SBOA (2024) [51], and the original ALA (2025) [44], to fully demonstrate the convergence capability, global optimization performance, and robustness of IALA. The search domain of CEC2017 was set to [-100, 100]^d. The population size of all algorithms was uniformly set to 30, the number of iterations to 500, the problem dimensions to 30, 50, and 100, and the number of independent experiments to 30. All experimental data were processed using scientific notation (retaining two decimal places), and the optimal values are bolded in each table. All experiments in this research were conducted on a Windows 10, 64-bit computing system equipped with an Intel(R) Core(TM) i5-12400 processor running at 2.5 GHz and 16 GB RAM, using MATLAB 2023a as the software platform.

5.1. Test Results and Analysis

As shown in Table 2, on the 29 CEC2017-30D benchmark test functions, the IALA algorithm achieved the optimal average value on 22 functions and the suboptimal average value on five functions, and its overall performance was significantly superior to that of other algorithms. Meanwhile, the standard deviation of IALA was also relatively small in most cases: it achieved the lowest standard deviation on 22 functions and the suboptimal standard deviation on four functions, indicating a high stability of its solutions. IALA obtained the optimal results on both unimodal functions (F1, F3); it delivered the optimal results on six out of seven multimodal functions; it achieved eight optimal results and two suboptimal results on 10 hybrid functions; and it gained six optimal results and three suboptimal results on 10 composition functions.
When the problem dimension is increased to 50 (as shown in Table 3), the IALA algorithm achieves the global optimal mean value on 20 out of all 29 test functions, including F1, F6, F9, F11, F15, F18, and F19, and ranks suboptimal on another six functions. Taking the unimodal function F1 as an example, the mean value (4440.01) and standard deviation (4610.12) of IALA are far lower than those of other competing algorithms, demonstrating a magnitude-level leading advantage, which intuitively reflects its extremely high optimization accuracy in handling high-dimensional complex problems. In terms of algorithm stability, IALA achieves the lowest standard deviation on 21 functions and suboptimal standard deviation on five functions.
To evaluate the optimization potential of the algorithm in extremely complex search spaces, the experimental dimension was further increased to 100D. As shown in Table 4, the IALA algorithm exhibits good performance advantages on the 29 benchmark functions. In the comparison of mean values across 30 independent experiments, IALA achieved the global optimal solution on 22 functions (covering F1–F9, F11–F14, F17, F18, F21, F23–F28, and F30) and obtained suboptimal results on two functions (F15, F19). IALA maintained a leading position in both unimodal functions; it outperformed other algorithms in six out of the seven multimodal functions; and it secured the optimal results in six hybrid functions and eight composition functions respectively. Evaluation based on standard deviation shows that IALA had the smallest fluctuation values on most test functions. Compared with the 30D and 50D scenarios, the stability advantage of IALA becomes increasingly significant under the 100D setting.
Convergence speed is a crucial indicator for evaluating the optimization capability of algorithms. In this paper, 16 representative functions from the CEC2017 benchmark suite are selected (unimodal: F1, F3; multimodal: F5, F6, F7, F9; hybrid: F11, F12, F14, F18, F19; composition: F21, F23, F25, F26, F30) to conduct a visual comparison of the performance of nine algorithms including IALA, VPPSO, IAGWO, IWOA, DBO, ARO, AOO, SBOA, and the original ALA on 30-dimensional problems.
Figure 4 presents the average convergence curves of each function after 30 independent runs. Overall, IALA exhibits significantly superior convergence speed and final solution accuracy compared with the comparative algorithms on most test functions: its average fitness curve usually drops rapidly in the early stage and stabilizes at a lower value, indicating that it possesses both strong global exploration capability and efficient local exploitation capability. For instance, on the unimodal functions F1 and F3, IALA takes a distinct lead in the early iterations and finally converges to a fitness level far lower than that of other algorithms; on complex hybrid functions (e.g., F12, F19, F26), IALA effectively avoids early stagnation and keeps declining to obtain better solutions, whereas most comparative algorithms (e.g., VPPSO, IAGWO, SBOA) show obvious plateaus in the middle and late stages; on the composition functions F21 and F30, IALA also maintains a steady decline and achieves optimal or near-optimal final accuracy.
Figure 5 demonstrates that IALA still maintains significant performance advantages in higher dimensions. Its convergence curves exhibit faster decline rates and lower final fitness values on most functions, indicating that the proposed mechanisms can still effectively balance exploration and exploitation when the dimension increases, thus avoiding falling into local optima.
For instance, on the multimodal functions F6 and F7, IALA achieves a substantial decline within the first 100 iterations and ultimately attains convergence accuracy far superior to that of other algorithms; on the hybrid functions F11 and F18, the curve of IALA decreases steadily and continuously, whereas most comparative algorithms experience obvious stagnation in the middle and late stages; on the composition functions F23 and F25, IALA also exhibits stronger continuous optimization capability and finally obtains the optimal or suboptimal average fitness. These results further confirm the superiority and stability of IALA in solving high-dimensional complex optimization problems.
When the problem dimension reaches 100 (as shown in Figure 6), the convergence curves of IALA can still maintain a fast decline rate and low final average fitness values on most functions, reflecting the algorithm’s excellent scalability and search efficiency when the dimension increases drastically.
For instance, on the multimodal functions F6 and F14, IALA significantly outperforms other algorithms in the early and middle iterations and continues to optimize toward better regions, whereas most comparative algorithms tend to flatten out in the middle and late stages; on the composition function F23, IALA avoids obvious premature convergence, with its curve showing a steady downward trend. Particularly, on the composition function F30, the curve of IALA exhibits a strong downward momentum from the initial stage and finally converges to a fitness level far lower than that of all comparative algorithms, which fully demonstrates its powerful global search and refined exploitation capabilities in handling highly complex, ultra-high-dimensional problems with deceptive local optima. These results indicate that IALA still possesses outstanding competitiveness and robustness in the extreme scenario of 100 dimensions.
As a classic statistical graphic tool, the box plot has distinct advantages in analyzing data distribution. It concisely and intuitively displays the central tendency, dispersion degree, and outliers of data through the minimum value, lower quartile, median, upper quartile, and maximum value, while effectively avoiding the interference of extreme values on the description of the overall distribution. Therefore, we adopt box plots to conduct a visual analysis of the results of multiple independent runs of the IALA algorithm and comparative algorithms on the CEC2017 benchmark test functions, so as to comprehensively evaluate the optimization performance and distribution characteristics of the algorithms.
It can be seen from the box plots of CEC2017-30D (Figure 7) that the proposed IALA algorithm exhibits significant superiority on most test functions. Compared with eight comparative algorithms including VPPSO, IAGWO, IWOA, DBO, ARO, AOO, SBOA, and ALA, IALA achieves a lower median, a more compact box, and fewer outliers on most functions, indicating that it has stronger global search capability, higher convergence accuracy, and better stability. For example, on functions F1, F6, F9, and F19, the box position of IALA is significantly lower than that of other algorithms, while those of DBO and ARO are significantly higher than those of the other algorithms. The median of IALA’s box is close to or reaches the optimal value, and the interquartile range is extremely small, showing strong robustness; on complex multimodal functions such as F12, F18, and F30, IALA also maintains the lowest function value distribution, whereas most comparative algorithms such as DBO and IWOA show a higher median and more outliers.
It can be seen from Figure 8 that the IALA algorithm can also exhibit significant competitive advantages in 50-dimensional complex optimization problems. IALA achieves a lower median, a smaller interquartile range, and fewer outliers on the vast majority of functions, which demonstrates its superior global convergence accuracy and algorithm stability in high-dimensional spaces. For example, on functions F3, F5, F21, and F23, the box position of IALA is significantly lower than that of other algorithms, with the median close to the theoretical optimal value, the box height being the smallest, and almost no outliers. In contrast, algorithms such as DBO and SBOA show a higher median and a wider distribution interval. On high-dimensional multimodal functions such as F7, F11, and F25, IALA also maintains the optimal function value distribution, whereas most comparative algorithms present obvious outliers and large fluctuations. Even on function F26, which has the largest fluctuation among all functions, IALA can still maintain a relatively small box and a low data distribution.
Figure 9 illustrates the performance of various algorithms under the 100-dimensional setting. On functions F11, F12, F18, and F21, the box position of IALA is significantly lower than that of other algorithms, with the median close to the theoretical optimum, an extremely narrow distribution interval, and almost no outliers. In contrast, algorithms such as IAGWO, AOO, and DBO exhibit considerably higher medians and larger fluctuations. On complex hybrid functions like F25 and F26, IALA also maintains the optimal function value distribution, whereas most comparative algorithms show larger box heights accompanied by more outliers. It is particularly noteworthy that on function F30, the data distribution of IALA is extremely stable—the box degenerates into an approximate horizontal line, with the interquartile range close to zero and no outliers at all. This outperforms the distribution intervals of other algorithms, fully demonstrating the strong robustness of the proposed improvement mechanisms in ultra-high-dimensional spaces. Overall, the performance of IALA across 16 test functions and three dimensions verifies the remarkable effectiveness of its improvement strategies in addressing high-dimensional optimization problems.

5.2. Wilcoxon Rank-Sum Test

To comprehensively verify the superior performance of the proposed algorithm, this section adopts the Wilcoxon rank-sum test to determine whether there are significant differences between the results of IALA and those of the other comparative algorithms at a significance level of 0.05. The null hypothesis H0 assumes that there is no significant difference between IALA and a given comparative algorithm. When p < 0.05, we reject the null hypothesis, indicating that there is a significant difference between the two algorithms. Conversely, when p > 0.05, we fail to reject the null hypothesis, which suggests that there is no significant difference between the algorithms, meaning their performance is comparable. The differences between the various algorithms are presented in tables, with p-values greater than 0.05 bolded.
The statistical results on the CEC2017-30D test suite are shown in Table 5. IALA is significantly superior to IAGWO, IWOA, DBO, and ARO across all 29 test functions, with most p-values reaching an extremely small magnitude of 1.83 × 10^-4. IALA shows no significant difference from VPPSO and ALA only on functions F10 and F27, and also exhibits no significant difference from AOO and SBOA on function F4. In addition, compared with SBOA, IALA has no significant difference on 10 functions, indicating that the two algorithms may have similar performance on these problems.
As shown in Table 6, on the CEC2017-50D test suite, the statistical results indicate that the IALA algorithm exhibits significant performance differences in the vast majority of cases across the 29 test functions. IALA shows significant differences from both DBO and the original ALA on all functions (with all p-values less than 0.05). Compared with VPPSO, IWOA, and ARO, there are no significant differences only on individual functions: VPPSO on F16, IWOA on F26, and ARO on F16. In comparison with IAGWO, AOO, and SBOA, there are no significant differences on two functions each: IAGWO on F13 and F28, AOO on F16 and F20, and SBOA on F17 and F29. It is worth noting that F16 is a common function on which the performance of multiple algorithms is similar to that of IALA. For SBOA, which showed no significant differences from IALA on 10 functions in the 30D scenario, the number of such functions is reduced to two.
As shown in Table 7, when the problem dimension is increased to 100D, there are significant differences between IALA and both DBO and the original ALA across all 29 test functions. Compared with VPPSO, ARO, and AOO, no significant difference is observed on only a single function each (F20 and F16, respectively). In comparison with IWOA and SBOA, there are no significant differences on two functions each. Meanwhile, no significant differences are observed between IALA and IAGWO on four functions (F16, F27, F29, and F30).

5.3. Exploration and Exploitation Analysis

To gain a better understanding of the dynamic behavior of the IALA algorithm in solving problems and to explain the performance differences of the algorithm on different types of problems, we analyzed and visualized the exploration and exploitation ratios of the proposed algorithm on the CEC2017-30D test suite. Specifically, Figure 10 presents the results on classic benchmark functions (F1, F3, F7, F9), while Figure 11 displays those on complex high-interaction functions (e.g., F11, F14, F18, F21, F26, F30). These two categories of functions exhibit distinctly different problem characteristics and pose different challenges to population-based optimizers.
IALA can adaptively adjust the balance between exploration and exploitation according to function characteristics, thereby achieving efficient global search and local refinement. On the unimodal function F1, the exploitation rate rises rapidly to nearly 100%, while the exploration rate drops quickly to close to 0%, indicating that the algorithm can quickly focus on the optimal region in a smooth unimodal landscape and achieve efficient convergence. On the simple multimodal functions F3 and F7, the exploration rate maintains a high proportion in the early and middle stages and then decreases gradually, with the exploitation rate increasing correspondingly. This reflects that the algorithm prolongs the global exploration stage in scenarios with local optima traps to avoid premature convergence. On F9, the exploitation rate also takes a dominant position relatively quickly, but the exploration rate maintains a significant proportion within the first 100 iterations and then transitions smoothly to the exploitation stage.
The functions in Figure 11 are characterized by high multimodality, asymmetry, and composition of multiple sub-functions, making them significantly more difficult to optimize than unimodal and simple multimodal functions. The results show that IALA exhibits a stronger adaptive balancing capability in these complex landscapes: in the early stage of iterations, the exploration rate maintains a high level and decreases slowly over a long iterative interval, thus providing sufficient global diversity for the population to escape local optima traps; subsequently, the exploitation rate gradually dominates to achieve refined local search. For instance, on F11 and F18, the exploitation rate rapidly reverses with the exploration rate within the first 100 iterations; on F14 and F21, the exploration rate fluctuates and decreases slowly in the early and middle stages, indicating that the algorithm dynamically maintains exploration capability according to the search state; on extremely complex composition functions such as F26 and F30, the exploration rate decreases more gently and the rising curve of the exploitation rate is smoother, with no abrupt switching occurring during the entire process. These curves fully demonstrate that IALA’s dynamic balancing mechanism effectively addresses the risk of premature convergence caused by high-complexity problems by prolonging the exploration phase and smoothly transitioning to the exploitation phase, thereby achieving higher global search efficiency and convergence accuracy.

5.4. Friedman Ranking Analysis

Figure 12 presents the Friedman ranking heatmap of the nine algorithms across different dimensions. The heatmap displays the Friedman rankings of each algorithm on the CEC2017 benchmark suite. The closer the color is to red, the higher the ranking and the better the performance; the closer the color is to blue, the lower the ranking and the poorer the performance. The performance of IALA on various function types is as follows:
It can be seen from the Friedman average ranking heatmap across the three dimensions (30D, 50D, and 100D) that the proposed IALA algorithm exhibits significant comprehensive advantages on the vast majority of test functions, consistently securing top rankings. In the 30D case, IALA ranks sixth only on the F10 function, and ranks among the top three on all other functions, with first place on most of them. In the 50D case, IALA ranks seventh on the F10 and F22 functions and fourth on the F30 function, while it places among the top three on all remaining functions. In the 100D case, except for relatively low rankings on the F10, F20, and F22 functions, IALA ranks among the top three on all other functions. Overall, as the dimension increases, IALA maintains top-three rankings on most functions, which fully verifies the algorithm’s robustness and global optimization performance across different dimensions.
Figure 13 presents a Sankey diagram based on the Friedman average rankings, which intuitively displays the performance ranking distribution of the nine algorithms across the 30D, 50D, and 100D settings in the CEC2017 benchmark suite. Obviously, across all dimensions, the connection lines flowing from the IALA algorithm (represented by the green strip) to Rank 1 are the thickest, indicating that IALA has obtained the most first-place rankings on the vast majority of test functions.
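As a reference for how such rankings are typically obtained, the following sketch computes Friedman average ranks from a matrix of mean errors (functions × algorithms, lower is better) using SciPy; it illustrates the standard procedure rather than the exact script used here.

```python
import numpy as np
from scipy.stats import rankdata, friedmanchisquare

def friedman_average_ranks(mean_errors, names):
    """mean_errors: array of shape (n_functions, n_algorithms) holding the mean
    fitness error of each algorithm on each benchmark function (lower is better)."""
    ranks = np.apply_along_axis(rankdata, 1, mean_errors)  # rank within each function
    avg_ranks = ranks.mean(axis=0)                         # average rank per algorithm
    order = np.argsort(avg_ranks)
    return {names[i]: float(avg_ranks[i]) for i in order}

# Friedman test statistic and p-value (each column is one algorithm's results):
# stat, p = friedmanchisquare(*mean_errors.T)
```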

6. Three-Dimensional Path Planning for UAV

This section investigates the 3D UAV path planning problem. Unlike the 2D scenario, 3D path planning needs to take into account multiple factors such as undulating terrain, 3D obstacle distribution, altitude constraints, and turning/climbing angle constraints. We first construct a mathematical model for multi-constraint 3D path planning, then use the IALA algorithm and comparative algorithms to solve it, and analyze the path optimization performance of each algorithm in complex environments.

6.1. Mathematical Model

Set the number of path control points to $n$. The intermediate points are given by the vectors $X = (x_1, \ldots, x_n)$, $Y = (y_1, \ldots, y_n)$, and $Z = (z_1, \ldots, z_n)$, and the coordinates of the start and end points are $p_0 = (x_0, y_0, z_0)$ and $p_{n+1} = (x_{n+1}, y_{n+1}, z_{n+1})$, respectively. The complete set of path nodes is denoted $P = \{p_0, p_1, \ldots, p_n, p_{n+1}\}$, where $p_i = (x_i, y_i, z_i^{\mathrm{rel}})$, $i = 1, \ldots, n$, and $z_i^{\mathrm{rel}}$ denotes the relative height above the ground. The set of obstacles or threats is defined as $\{T_i\}_{i=1}^{N_T}$; the $i$-th threat is represented by the vector $T_i = (x_i^T, y_i^T, R_i)$, whose projection radius on the horizontal plane is $R_i$. The total cost consists of four components: the path length term, the threat (obstacle) term, the altitude term, and the maneuverability term, combined by a weighted linear sum. The path length cost is:
$$J_1 = \sum_{k=0}^{N-2} \left\| \big( x_{k+1} - x_k,\ y_{k+1} - y_k,\ z_{k+1}^{\mathrm{abs}} - z_k^{\mathrm{abs}} \big) \right\|_2$$
where $z_k^{\mathrm{abs}}$ denotes the absolute height, i.e., the height relative to sea level.
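As an illustration, assuming the waypoints (including the start and end points) are stored as an (N, 3) array whose third column already contains absolute height, the path-length term can be evaluated as follows; variable names are illustrative.

```python
import numpy as np

def path_length_cost(waypoints_abs):
    """waypoints_abs: array of shape (N, 3) with columns (x, y, z_abs),
    including the start and end points. Returns J1, the sum of Euclidean
    segment lengths along the path."""
    diffs = np.diff(waypoints_abs, axis=0)          # (N-1, 3) segment vectors
    return float(np.sum(np.linalg.norm(diffs, axis=1)))
```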
The threat (obstacle) cost is defined as follows. For each threat $T_i = (x_i^T, y_i^T, R_i)$ and each planar path segment $[p_k^{xy}, p_{k+1}^{xy}]$, compute the minimum distance from the threat center to the segment, $d_{i,k} = \mathrm{Dist}\big((x_i^T, y_i^T), [(x_k, y_k), (x_{k+1}, y_{k+1})]\big)$. Let the thresholds be $R_{\mathrm{col},i} = R_i + r_d$ and $R_{\mathrm{safe},i} = R_i + r_d + d_{\mathrm{danger}}$, where $r_d$ denotes the UAV radius, $d_{\mathrm{danger}}$ the danger distance, and $J_\infty$ the infinite collision penalty. The segment cost and the total threat cost are:
$$c_{i,k} = \begin{cases} 0, & d_{i,k} > R_{\mathrm{safe},i} \\ J_\infty, & d_{i,k} < R_{\mathrm{col},i} \\ R_{\mathrm{safe},i} - d_{i,k}, & \text{otherwise} \end{cases} \qquad J_2 = \sum_{i=1}^{N_T} \sum_{k=0}^{N-2} c_{i,k}$$
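A minimal sketch of this threat term, using the planar point-to-segment distance and a large finite constant in place of the infinite penalty $J_\infty$, is given below; the names and the finite penalty value are illustrative assumptions.

```python
import numpy as np

J_INF = 1e10  # large finite stand-in for the "infinite" collision penalty

def point_to_segment(p, a, b):
    """Minimum distance from 2D point p to the segment [a, b]."""
    p, a, b = map(np.asarray, (p, a, b))
    ab = b - a
    denom = float(np.dot(ab, ab))
    t = 0.0 if denom == 0.0 else np.clip(np.dot(p - a, ab) / denom, 0.0, 1.0)
    return float(np.linalg.norm(p - (a + t * ab)))

def threat_cost(waypoints_xy, threats, r_uav, d_danger):
    """waypoints_xy: (N, 2) planar waypoints; threats: iterable of (xT, yT, R)."""
    J2 = 0.0
    for xT, yT, R in threats:
        R_col, R_safe = R + r_uav, R + r_uav + d_danger
        for k in range(len(waypoints_xy) - 1):
            d = point_to_segment((xT, yT), waypoints_xy[k], waypoints_xy[k + 1])
            if d < R_col:            # collision region: "infinite" penalty
                J2 += J_INF
            elif d <= R_safe:        # danger band: linear penalty
                J2 += R_safe - d
    return J2
```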
For the altitude cost, any intermediate point $i = 1, \ldots, n$ with relative height $z_i^{\mathrm{rel}} < 0$ is treated as a ground collision and assigned the penalty $J_\infty$; otherwise, the cost is the deviation from the center of the allowable altitude interval:
$$J_3 = \sum_{i=1}^{n} \begin{cases} J_\infty, & z_i^{\mathrm{rel}} < 0 \\ \left| z_i^{\mathrm{rel}} - \dfrac{z_{\max} + z_{\min}}{2} \right|, & \text{otherwise} \end{cases}$$
where $z_{\max}$ and $z_{\min}$ denote the upper and lower limits of the allowable altitude relative to the ground specified by the model.
The maneuverability cost penalizes excessive changes in the turning angle and climbing angle, reflecting the physical constraints of the UAV. For each triple of consecutive nodes $p_k, p_{k+1}, p_{k+2}$ ($k = 0, \ldots, N-3$), define the planar projection vectors $s_1 = (x_{k+1} - x_k,\ y_{k+1} - y_k,\ 0)$ and $s_2 = (x_{k+2} - x_{k+1},\ y_{k+2} - y_{k+1},\ 0)$. At node $k$, the climbing angle $\theta_k$ and the turning angle $\phi_k$ are defined as:
$$\theta_k = \operatorname{atan2}\big( z_{k+1}^{\mathrm{abs}} - z_k^{\mathrm{abs}},\ \| s_1 \|_2 \big), \qquad \phi_k = \operatorname{atan2}\big( \| s_1 \times s_2 \|_2,\ s_1 \cdot s_2 \big)$$
A cost is imposed whenever the corresponding threshold is exceeded:
$$J_4 = \sum_{k=0}^{N-3} \Big[ I\big(|\phi_k| > \Phi_{\max}\big)\,|\phi_k| + I\big(|\theta_{k+1} - \theta_k| > \Theta_{\max}\big)\,|\theta_{k+1} - \theta_k| \Big]$$
where $I(\cdot)$ is the indicator function, $\Phi_{\max}$ is the maximum allowable turning angle, and $\Theta_{\max}$ is the maximum allowable climbing-angle variation. The final cost function is expressed as follows:
$$F(x, y, z) = \sum_{i=1}^{4} b_i J_i$$
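For completeness, the maneuverability term and the weighted total cost can be evaluated roughly as sketched below. This follows the notation above, with the weights defaulting to the values used in Section 6.2; it is an illustrative sketch rather than the authors' implementation.

```python
import numpy as np

def maneuver_cost(waypoints_abs, phi_max, theta_max):
    """Penalise turning angles above phi_max and climbing-angle changes above
    theta_max, following J4. waypoints_abs: (N, 3) array with absolute heights."""
    thetas, phis = [], []
    for k in range(len(waypoints_abs) - 2):
        p0, p1, p2 = waypoints_abs[k:k + 3]
        s1 = np.array([p1[0] - p0[0], p1[1] - p0[1], 0.0])   # planar projection
        s2 = np.array([p2[0] - p1[0], p2[1] - p1[1], 0.0])
        thetas.append(np.arctan2(p1[2] - p0[2], np.linalg.norm(s1)))
        phis.append(np.arctan2(np.linalg.norm(np.cross(s1, s2)), np.dot(s1, s2)))
    J4 = sum(abs(phi) for phi in phis if abs(phi) > phi_max)
    J4 += sum(abs(thetas[k + 1] - thetas[k])
              for k in range(len(thetas) - 1)
              if abs(thetas[k + 1] - thetas[k]) > theta_max)
    return J4

def total_cost(J1, J2, J3, J4, b=(5.0, 1.0, 10.0, 1.0)):
    """Weighted linear combination of the four cost terms (b1..b4 as in Section 6.2)."""
    return b[0] * J1 + b[1] * J2 + b[2] * J3 + b[3] * J4
```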

6.2. Simulation Experiments and Discussion of Results

To evaluate the effectiveness of the proposed IALA algorithm in practical engineering problems, eight different complex scenarios are designed in the experiments of this section (as shown in Figure 14). These scenarios are constructed based on two types of maps, with each map containing four obstacle distributions, resulting in a total of eight scenarios. Red cylinders are used to simulate obstacles with different densities and distributions, and green surfaces represent undulating terrain. The UAV is required to find the shortest path while avoiding obstacles, and at the same time, take into account path smoothness, flight altitude, and other constraints.
Considering computing resources and practical requirements, the experimental settings are configured as follows: the population size is N = 30, the maximum number of iterations is 100, and the weights of the fitness function are b1 = 5, b2 = 1, b3 = 10, and b4 = 1, respectively. The number of path points is set to 10, and 30 independent experiments are conducted for each scenario.
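For reproducibility, these settings can be collected in a single configuration object; the snippet below simply mirrors the values stated above (field names are illustrative).

```python
# Illustrative experiment configuration mirroring the settings above
CONFIG = {
    "population_size": 30,           # N
    "max_iterations": 100,           # iteration budget per run
    "cost_weights": (5, 1, 10, 1),   # b1..b4 for J1..J4
    "num_waypoints": 10,             # intermediate path control points
    "independent_runs": 30,          # repetitions per scenario
    "num_scenarios": 8,
}
```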
Across the eight scenarios in Table 8, the IALA algorithm exhibits clear comprehensive advantages in terms of Mean and Std. Combining Table 8 and Figure 15, IALA ranks first in average path cost in six of the eight scenarios, ranks behind SBOA in one scenario, and behind both SBOA and AOO in another. It thus finds the best average path in 6/8 scenarios and places among the top three in the remaining two, with far more first-place rankings than any other algorithm.
In terms of algorithm stability, IALA achieves the lowest standard deviation in seven scenarios, significantly outperforming the other eight comparative algorithms. The standard deviation values of IALA generally range from 20 to 300, and are far below 100 in most scenarios, while the standard deviations of comparative algorithms often exceed 300 and even reach thousands. This fully demonstrates that IALA has extremely strong robustness and repeatability. In conclusion, the dual advantages of IALA in path effectiveness and stability verify its excellent performance and engineering application potential in practical UAV path planning problems.
In Table 9, we test whether the results of IALA differ significantly from those of the other algorithms in UAV 3D path planning at the 0.05 significance level. Most of the p-values are well below 0.05, with only seven cases showing no significant difference. The differences between IALA and IAGWO, IWOA, DBO, and ARO are highly significant, with p-values far below 0.01 and in some cases close to zero. This indicates that the path-cost performance of IALA differs significantly from that of the comparative algorithms in most scenarios, and combined with the average cost data in Table 8, this difference reflects the superiority of IALA. In Scenario 3 and Scenario 6, the p-values are as low as approximately 1.83 × 10−4, indicating the most pronounced differences. In Scenario 7 and Scenario 8, although some p-values are slightly higher, significance is still maintained. Although IALA performs comparably to a few algorithms such as VPPSO and ALA (p > 0.05) in a small number of complex scenarios such as Scenario 8, the overall distribution shows that IALA maintains a leading position in the vast majority of cases.
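In principle, the entries of Table 9 can be reproduced by applying a two-sided Wilcoxon rank-sum test to the 30 final path costs of IALA and of each competitor in a given scenario; a minimal SciPy sketch is shown below (the data layout is an assumption).

```python
from scipy.stats import ranksums

def wilcoxon_vs_baseline(costs_by_algo, baseline="IALA", alpha=0.05):
    """costs_by_algo: dict mapping algorithm name -> list of final path costs
    from the independent runs of one scenario."""
    base = costs_by_algo[baseline]
    results = {}
    for name, costs in costs_by_algo.items():
        if name == baseline:
            continue
        stat, p = ranksums(base, costs)          # two-sided Wilcoxon rank-sum test
        results[name] = (p, p < alpha)           # (p-value, significantly different?)
    return results
```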
To intuitively demonstrate the path planning performance of the IALA algorithm in handling various environments, the experiment sets the population size to 30 and the maximum number of iterations to 500, and conducts a single-run test on eight experimental scenarios. The planned trajectories are shown in Figure 16.
Across all test scenarios, compared with the comparative algorithms, the paths planned by IALA exhibit distinct advantages. In Scenario 2 and Scenario 4, some comparative algorithms (represented by blue or orange lines) generate paths with significant detours, which considerably increase the total path length. By observing scenarios with dense obstacle distributions such as Scenario 3 and Scenario 7, it can be found that IALA is capable of maintaining extremely high obstacle avoidance safety. The paths generated by IALA not only avoid collisions when bypassing the red dangerous areas but also do not deviate excessively from the obstacles. In terms of trajectory morphology, with the number of iterations set to 500, IALA can generate trajectories with high smoothness.
To quantitatively evaluate the evolutionary efficiency of the IALA algorithm, Figure 17 presents the convergence curves of the best fitness values of the algorithms shown in Figure 16 as the number of iterations increases across the eight experimental scenarios. In the initial stage of all scenarios, IALA, represented by the red curve, exhibits the steepest descending slope; in Scenarios 1, 4, and 6 in particular, IALA locks onto the high-quality solution region within an extremely short period. This indicates that the hybrid search strategy introduced by IALA greatly enhances the algorithm's global exploration efficiency in the initial stage. By the 500th iteration, the final convergence level achieved by IALA is the lowest, or nearly the lowest, among all algorithms in every one of the eight scenarios. In complex terrain scenarios such as Scenario 4 and Scenario 8 in particular, comparative algorithms such as DBO and SBOA stagnate midway, while IALA continues to explore better paths and ultimately obtains fitness values significantly superior to those of the other algorithms. The convergence-curve trends also show that algorithms such as IAGWO and IWOA exhibit an obvious plateau between the 100th and 200th generations, falling into local optima.

7. Path Planning for Amphibious UAVs

This section investigates the path planning problem of amphibious UAVs. Compared with conventional UAVs, amphibious UAVs need to take into account the constraints of both underwater and aerial flight environments, such as drag coefficients under different media and underwater/aerial threat points. Based on this, we have constructed a path model that integrates the costs of underwater paths and aerial paths.

7.1. Mathematical Model

In this model, the amphibious UAV experiences different drag coefficients in water and in air, so different path-cost coefficients $d_{\mathrm{air}}$ and $d_{\mathrm{ocean}}$ are assigned to the two media. Excluding the start and end points, let the number of intermediate control points be $m$ and the set of control points be $\{p_i\}_{i=1}^{m}$, with each control point $p_i = (p_{x,i}, p_{y,i}, p_{z,i})$; the start and end points are $p_0$ and $p_{m+1}$, respectively. The decision variable is $x = [p_{x,1}, \ldots, p_{x,m}, p_{y,1}, \ldots, p_{y,m}, p_{z,1}, \ldots, p_{z,m}]$. $r_{\mathrm{rad}}$ and $r_{\mathrm{gun}}$ denote the two types of spherical threat regions, with radii $R_{\mathrm{rad}}$ and $R_{\mathrm{gun}}$, and $h_{\mathrm{ocean}}$ denotes the water surface height. After discretization of the path into sampling points $\{r_k\}_{k=1}^{N}$, the total path length is approximated as follows:
$$L = \sum_{k=1}^{N-1} \| r_{k+1} - r_k \|_2 = \sum_{k=1}^{N-1} \sqrt{ (x_{k+1} - x_k)^2 + (y_{k+1} - y_k)^2 + (z_{k+1} - z_k)^2 }$$
To distinguish the underwater segment from the aerial segment, we find the sampling index $k_0 = \arg\min_k \big| Z(t_k) - h_{\mathrm{ocean}} \big|$ whose height is closest to the water surface. The total path is then divided into the underwater segment $L_{\mathrm{down}}$ and the aerial segment $L_{\mathrm{up}}$, defined respectively as follows:
$$L_{\mathrm{down}} = \sum_{k=1}^{k_0 - 1} \| r_{k+1} - r_k \|_2, \qquad L_{\mathrm{up}} = \sum_{k=k_0}^{N-1} \| r_{k+1} - r_k \|_2$$
Therefore, the cost of path length can be defined as follows:
$$J(x) = d_{\mathrm{ocean}} L_{\mathrm{down}}(x) + d_{\mathrm{air}} L_{\mathrm{up}}(x)$$
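A compact sketch of this medium-dependent length cost, splitting the discretized path at the sampling point closest to the water surface, is given below (the array layout is an assumption).

```python
import numpy as np

def amphibious_length_cost(samples, h_ocean, d_ocean, d_air):
    """samples: (N, 3) discretised path points (x, y, z); h_ocean: water level."""
    seg = np.linalg.norm(np.diff(samples, axis=0), axis=1)   # segment lengths
    k0 = int(np.argmin(np.abs(samples[:, 2] - h_ocean)))     # index closest to surface
    L_down = float(seg[:k0].sum())   # underwater portion
    L_up = float(seg[k0:].sum())     # aerial portion
    return d_ocean * L_down + d_air * L_up
```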
During flight, the most critical consideration besides path length is collision avoidance. A collision is deemed to have occurred if the line segment between any two consecutive waypoints intersects an obstacle or a threat area, and a penalty term is then incorporated into the cost function. Specifically, the collision penalty is determined by the number of collisions and is weighted and superimposed on the basic path-length cost, sharply reducing the fitness of infeasible paths. This penalty mechanism gradually guides the optimization toward collision-free feasible paths while maintaining the diversity of the search. Let $\{r_k\}_{k=1}^{N}$ be the discrete sampling points. A collision is deemed to occur if any of the following three conditions is satisfied:
(1) there exists $k$ such that $z_k < Z_{\mathrm{terr}}(x_k, y_k)$, where $Z_{\mathrm{terr}}(x_k, y_k)$ denotes the terrain height at the planar coordinates of the $k$-th sampling point; (2) there exists $k$ such that $r_k \notin \Omega$, where $\Omega$ denotes the admissible workspace; (3) there exists $k$ with $z_k > h_{\mathrm{ocean}}$ such that $\| (x_k, y_k) - r_{\mathrm{rad}} \|_2 < R_{\mathrm{rad}}$ or $\| (x_k, y_k) - r_{\mathrm{gun}} \|_2 < R_{\mathrm{gun}}$, i.e., an aerial sampling point falls within a threat region of radius $R_{\mathrm{rad}}$ or $R_{\mathrm{gun}}$. In addition, unknown threat points are placed both underwater and in the air to simulate the different threats of the two environments; if the flight path coincides with any of these points, a collision is likewise deemed to have occurred. The collision indicator function is defined as follows:
$$C(x) = \begin{cases} 1, & \text{collision} \\ 0, & \text{otherwise} \end{cases}$$
The final fitness incorporates the collision penalty as a multiplicative term with a penalty factor $P = 1000$:
$$F(x) = \big[ 1 + (P - 1)\,C(x) \big] \big( d_{\mathrm{ocean}} L_{\mathrm{down}}(x) + d_{\mathrm{air}} L_{\mathrm{up}}(x) \big)$$
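The collision test and the multiplicative penalty can be wired together roughly as follows; the helper arguments (terrain function, workspace bounds, threat list) are illustrative stand-ins for the scenario data, not the authors' exact interfaces.

```python
import numpy as np

P = 1000.0  # penalty factor

def collides(samples, terrain_height, lo, hi, h_ocean, threats):
    """Simplified collision check over discrete samples (x, y, z).
    terrain_height(x, y) gives the ground elevation; lo/hi bound the workspace;
    threats: list of ((cx, cy), R) spherical threat centres and radii (illustrative)."""
    for p in np.asarray(samples, dtype=float):
        x, y, z = p
        if z < terrain_height(x, y):             # below the terrain surface
            return True
        if np.any(p < lo) or np.any(p > hi):     # outside the workspace
            return True
        if z > h_ocean:                          # aerial threat spheres
            for (cx, cy), R in threats:
                if np.hypot(x - cx, y - cy) < R:
                    return True
    return False

def fitness(length_cost, is_collision):
    """F(x) = (1 + (P - 1) * C(x)) * (d_ocean * L_down + d_air * L_up),
    where length_cost is the already weighted medium-dependent length."""
    C = 1.0 if is_collision else 0.0
    return (1.0 + (P - 1.0) * C) * length_cost
```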

7.2. Simulation Experiments and Discussion of Results

In this section, the IALA algorithm and eight other state-of-the-art algorithms are applied to the path planning task of amphibious UAVs. As shown in Figure 18, two 3D terrain-and-threat maps with different levels of complexity are designed for the experiments. To reflect the real-time computational requirements of practical tasks, the experimental settings are configured as follows: the population size is set to N = 30 and the maximum number of iterations to 100. The path model is discretized into five key nodes (including the start point and the end point).
In Figure 19, since the scenario is relatively simple, most algorithms are able to find good solutions. The paths generated by VPPSO, IAGWO, IWOA, and IALA are almost straight lines, approaching the optimal solution. In terms of cost, IALA ranks first among all algorithms with a value of 183.08, achieving the global optimal solution in this scenario. The paths of AOO and SBOA bulge noticeably outward; although this detour avoids the mountain threat area, the uneven distribution of path points introduces path redundancy.
An analysis of the path planning results and cost data of the nine algorithms in the complex scenario (as shown in Figure 20) reveals that the complexity of the terrain scenario has led to a much wider performance gap among the algorithms compared with the previous simple scenario. The IALA algorithm still ranks first among all algorithms with a cost value of 193.28. DBO takes the second place with a cost of 194.72, while SBOA, which performed poorly in the simple scenario, closely follows with a cost of 194.90, demonstrating strong competitiveness. The costs of IWOA, ALA, and VPPSO are significantly higher, among which the cost of IWOA is about 30% higher than that of IALA.
As can be seen from the trajectory diagram, the mountain on the left and the aerial threat points form a narrow passage corridor, and the IALA adopts refined local search to pass through the gap between the central mountain and the threat area. This strategy shortens the voyage to the greatest extent, and at the same time avoids aerial threats by utilizing low-altitude flight, reflecting the extremely high optimization efficiency of the algorithm. The path planned by VPPSO presents a huge arc. Although it ensures safety, its overly conservative obstacle avoidance strategy leads to a sharp increase in voyage cost. ALA has unnecessary sharp course turns in the underwater segment, which reflects the instability of the algorithm in handling constrained problems.
Compared with single-run or a small number of repeated experiments, the Monte Carlo simulation can estimate the performance distribution of algorithms more stably in a statistical sense, and it is a widely adopted standard method for the robustness evaluation of stochastic optimization algorithms. This paper employs the Monte Carlo simulation to conduct multiple independent repeated experiments, so as to reduce the impact of randomness and objectively evaluate the robustness of the algorithms. In this experiment, the terrain was set to be randomly generated, and the nine algorithms were independently run 30 times each in this model.
The results in Table 10 show that IALA ranks first overall with a mean of 188.32 and a standard deviation of 1.38. Compared with the other algorithms, IALA's low standard deviation demonstrates its strong stability in randomly generated complex environments. In addition, the average convergence curve on the right shows that IALA converges markedly faster than the other algorithms, reaching the optimal value within fewer iterations. In contrast, IAGWO converges rapidly but becomes trapped in a local optimum, and DBO exhibits clear optimization failure on random terrain, with a drastically fluctuating standard deviation. This fully demonstrates that the improved search mechanism of IALA offers stronger practical value and robustness in the presence of randomness.
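In code, the Monte Carlo evaluation amounts to repeated independent runs on freshly generated random terrain, followed by mean and standard-deviation statistics; the sketch below assumes a generic optimizer interface and scenario generator.

```python
import numpy as np

def monte_carlo_evaluation(optimizer, make_random_scenario, n_runs=30, seed=0):
    """Run `optimizer(scenario, rng)` on independently generated random scenarios
    and report the mean and standard deviation of the best fitness, as in the
    robustness experiment (interfaces are illustrative assumptions)."""
    rng = np.random.default_rng(seed)
    best_costs = []
    for _ in range(n_runs):
        scenario = make_random_scenario(rng)      # random terrain + threats
        best_costs.append(optimizer(scenario, rng))
    best_costs = np.asarray(best_costs)
    return best_costs.mean(), best_costs.std()
```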

8. Summary

To address the complex optimization problem of UAV path planning, this paper proposes an Improved Artificial Lemming Algorithm (IALA). By integrating three strategies, namely the historical memory mechanism, differential evolution operator, and directed neighborhood search, the proposed algorithm significantly enhances the convergence speed, global optimization capability, and robustness of the original Artificial Lemming Algorithm (ALA) in high-dimensional problems. In the CEC2017 benchmark tests, IALA outperformed the comparative algorithms across different dimensions. Wilcoxon rank-sum test, Friedman ranking analysis and visual analysis further confirmed its advantages in balancing exploration and exploitation. In the experiments involving eight scenarios of UAV 3D path planning, IALA generated better and more stable paths, reducing the risks of detouring and collision. In the amphibious UAV model, IALA exhibited excellent performance in constrained underwater and aerial environments, and Monte Carlo simulation verified its adaptability to random environments.
Although this paper verifies the effectiveness and robustness of the IALA algorithm for UAV path planning in complex 3D environments through extensive simulation experiments, certain limitations remain. First, the algorithm has mainly been verified in simulation and has not yet been tested on real flight platforms. Second, the computational overhead in high-dimensional or large-scale scenarios may affect real-time applications. In addition, the current research focuses on static obstacle and threat environments, and the algorithm's generalization to dynamic scenarios, multi-UAV cooperative tasks, and parameter adaptivity still needs to be explored. Future work will focus on real-flight verification, real-time performance optimization, and extension to dynamic and cooperative planning, so as to improve the engineering applicability of the algorithm. In addition, References [52,53] propose, from the perspectives of the observer and the controller respectively, a robust observer capable of resisting multiple time-varying delay disturbances and a constrained Model Predictive Control (MPC) method for load-carrying quadrotors; their effectiveness has been verified through theoretical proof and numerical simulation, indicating promising prospects in the UAV field.

Author Contributions

Conceptualization, X.Z. and R.L.; methodology, R.L. and X.Z.; investigation, X.Z. and R.L.; writing—original draft preparation, R.L.; writing—review and editing, X.Z. and R.L.; supervision, S.H., X.Z. and Z.D.; funding acquisition, S.H. and X.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the research grant from the National Key Research and Development Program of China (2024YFB3312204).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Chen, Z.; Chen, Y.; Jin, X.; Chen, H.; Yu, S. CLMGO: An enhanced Moss Growth Optimizer for complex engineering problems. Results Eng. 2025, 28, 108276. [Google Scholar] [CrossRef]
  2. Zheng, X.; Liu, R.; Li, S. A novel improved dung beetle optimization algorithm for collaborative 3D path planning of UAVs. Biomimetics 2025, 10, 420. [Google Scholar] [CrossRef]
  3. Sullivan, J.M. Evolution or revolution? The rise of UAVs. IEEE Technol. Soc. Mag. 2006, 25, 43–49. [Google Scholar] [CrossRef]
  4. Kunovjanek, M.; Wankmüller, C. Containing the COVID-19 pandemic with drones-Feasibility of a drone enabled back-up transport system. Transp. Policy 2021, 106, 141–152. [Google Scholar] [CrossRef]
  5. Benarbia, T.; Kyamakya, K. A literature review of drone-based package delivery logistics systems and their implementation feasibility. Sustainability 2021, 14, 360. [Google Scholar] [CrossRef]
  6. Kim, J.; Kim, S.; Ju, C.; Son, H.I. Unmanned aerial vehicles in agriculture: A review of perspective of platform, control, and applications. IEEE Access 2019, 7, 105100–105115. [Google Scholar] [CrossRef]
  7. Ndlovu, H.S.; Odindi, J.; Sibanda, M.; Mutanga, O. A systematic review on the application of UAV-based thermal remote sensing for assessing and monitoring crop water status in crop farming systems. Int. J. Remote Sens. 2024, 45, 4923–4960. [Google Scholar] [CrossRef]
  8. Yang, Z.; Yu, X.; Dedman, S.; Rosso, M.; Zhu, J.; Yang, J.; Xia, Y.; Tian, Y.; Zhang, G.; Wang, J. UAV remote sensing applications in marine monitoring: Knowledge visualization and review. Sci. Total Environ. 2022, 838, 155939. [Google Scholar] [CrossRef] [PubMed]
  9. Guan, S.; Zhu, Z.; Wang, G. A review on UAV-based remote sensing technologies for construction and civil applications. Drones 2022, 6, 117. [Google Scholar] [CrossRef]
  10. Yang, T.; Jiang, Z.; Sun, R.; Cheng, N.; Feng, H. Maritime search and rescue based on group mobile computing for unmanned aerial vehicles and unmanned surface vehicles. IEEE Trans. Ind. Inform. 2020, 16, 7700–7708. [Google Scholar] [CrossRef]
  11. Debnath, D.; Hawary, A.F.; Ramdan, M.I.; Alvarez, F.V.; Gonzalez, F. QuickNav: An effective collision avoidance and path-planning algorithm for UAS. Drones 2023, 7, 678. [Google Scholar] [CrossRef]
  12. Shehadeh, M.A.; Kůdela, J. Benchmarking global optimization techniques for unmanned aerial vehicle path planning. Expert Syst. Appl. 2025, 293, 128645. [Google Scholar] [CrossRef]
  13. Wang, Y.; Mulvaney, D.; Sillitoe, I.; Swere, E. Robot navigation by waypoints. J. Intell. Robot. Syst. 2008, 52, 175–207. [Google Scholar] [CrossRef]
  14. Saeed, R.A.; Omri, M.; Abdel-Khalek, S.; Ali, E.S.; Alotaibi, M.F. Optimal path planning for drones based on swarm intelligence algorithm. Neural Comput. Appl. 2022, 34, 10133–10155. [Google Scholar] [CrossRef]
  15. Castellanos, A.; Cruz-Reyes, L.; Fernández, E.; Rivera, G.; Gomez-Santillan, C.; Rangel-Valdez, N. Hybridisation of swarm intelligence algorithms with multi-criteria ordinal classification: A strategy to address many-objective optimisation. Mathematics 2022, 10, 322. [Google Scholar] [CrossRef]
  16. Goldberg, D.E. Genetic Algorithms in Search, Optimization, and Machine Learning; Addison-Wesley Pub. Co.: Boston, MA, USA, 1989. [Google Scholar]
  17. Dorigo, M.; Colorni, A.; Maniezzo, V. Distributed optimization by ant colonies. In Proceedings of the First European Conference on Artificial Life, Paris, France, 11–13 December 1991; pp. 134–142. [Google Scholar]
  18. Eberhart, R.; Kennedy, J. A new optimizer using particle swarm theory. In MHS’95, Proceedings of the Sixth International Symposium on Micro Machine and Human Science, Nagoya, Japan, 4–6 October 1995; IEEE: New York, NY, USA, 1995; pp. 39–43. [Google Scholar]
  19. Storn, R.; Price, K. Differential evolution–a simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 1997, 11, 341–359. [Google Scholar] [CrossRef]
  20. Karaboga, D.; Basturk, B. On the performance of artificial bee colony (ABC) algorithm. Appl. Soft Comput. 2008, 8, 687–697. [Google Scholar] [CrossRef]
  21. Xue, J.; Shen, B. A novel swarm intelligence optimization approach: Sparrow search algorithm. Syst. Sci. Control Eng. 2020, 8, 22–34. [Google Scholar] [CrossRef]
  22. Dehghani, M.; Montazeri, Z.; Trojovská, E.; Trojovský, P. Coati Optimization Algorithm: A new bio-inspired metaheuristic algorithm for solving optimization problems. Knowl.-Based Syst. 2023, 259, 110011. [Google Scholar] [CrossRef]
  23. Arora, S.; Singh, S. Butterfly optimization algorithm: A novel approach for global optimization. Soft Comput. 2019, 23, 715–734. [Google Scholar] [CrossRef]
  24. Zhong, C.; Li, G.; Meng, Z. Beluga whale optimization: A novel nature-inspired metaheuristic algorithm. Knowl.-Based Syst. 2022, 251, 109215. [Google Scholar] [CrossRef]
  25. Duan, H.; Qiao, P. Pigeon-inspired optimization: A new swarm intelligence optimizer for air robot path planning. Int. J. Intell. Comput. Cybern. 2014, 7, 24–37. [Google Scholar] [CrossRef]
  26. Abdel-Basset, M.; Mohamed, R.; Jameel, M.; Abouhawwash, M. Nutcracker optimizer: A novel nature-inspired metaheuristic algorithm for global optimization and engineering design problems. Knowl.-Based Syst. 2023, 262, 110248. [Google Scholar] [CrossRef]
  27. Li, S.; Chen, H.; Wang, M.; Heidari, A.A.; Mirjalili, S. Slime mould algorithm: A new method for stochastic optimization. Future Gener. Comput. Syst. 2020, 111, 300–323. [Google Scholar] [CrossRef]
  28. Faramarzi, A.; Heidarinejad, M.; Mirjalili, S.; Gandomi, A.H. Marine Predators Algorithm: A nature-inspired metaheuristic. Expert Syst. Appl. 2020, 152, 113377. [Google Scholar] [CrossRef]
  29. Abdollahzadeh, B.; Soleimanian Gharehchopogh, F.; Mirjalili, S. Artificial gorilla troops optimizer: A new nature-inspired metaheuristic algorithm for global optimization problems. Int. J. Intell. Syst. 2021, 36, 5887–5958. [Google Scholar] [CrossRef]
  30. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  31. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris hawks optimization: Algorithm and applications. Future Gener. Comput. Syst. 2019, 97, 849–872. [Google Scholar] [CrossRef]
  32. Wang, J.; Wang, W.C.; Hu, X.X.; Qiu, L.; Zang, H.F. Black-winged kite algorithm: A nature-inspired meta-heuristic for solving benchmark functions and engineering problems. Artif. Intell. Rev. 2024, 57, 98. [Google Scholar] [CrossRef]
  33. Abdel-Basset, M.; Mohamed, R.; Abouhawwash, M. Crested Porcupine Optimizer: A new nature-inspired metaheuristic. Knowl.-Based Syst. 2024, 284, 111257. [Google Scholar] [CrossRef]
  34. Yuan, C.; Zhao, D.; Heidari, A.A.; Liu, L.; Chen, Y.; Chen, H. Polar lights optimizer: Algorithm and applications in image segmentation and feature selection. Neurocomputing 2024, 607, 128427. [Google Scholar] [CrossRef]
  35. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  36. Molina, D.; Poyatos, J.; Ser, J.D.; García, S.; Hussain, A.; Herrera, F. Comprehensive taxonomies of nature-and bio-inspired optimization: Inspiration versus algorithmic behavior, critical analysis recommendations. Cogn. Comput. 2020, 12, 897–939. [Google Scholar] [CrossRef]
  37. Mande, S.S.; Srinivasulu, M.; Anand, S.; Anuradha, K.; Tiwari, M.; Esakkiammal, U. Swarm Intelligence Algorithms for Optimization Problems a Survey of Recent Advances and Applications. ITM Web Conf. 2025, 76, 05008. [Google Scholar] [CrossRef]
  38. Wolpert, D.H.; Macready, W.G. No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1997, 1, 67–82. [Google Scholar] [CrossRef]
  39. Fan, C.; Ding, Q. Analysing the dynamics of digital chaotic maps via a new period search algorithm. Nonlinear Dyn. 2019, 97, 831–841. [Google Scholar] [CrossRef]
  40. Liu, M.; Zhang, Y.; Guo, J.; Chen, J.; Liu, Z. An adaptive lion swarm optimization algorithm incorporating tent chaotic search and information entropy. Int. J. Comput. Intell. Syst. 2023, 16, 39. [Google Scholar] [CrossRef]
  41. Yang, J.; Cai, Y.; Tang, D.; Chen, W.; Hu, L. Memetic quantum optimization algorithm with levy flight for high dimension function optimization. Appl. Intell. 2022, 52, 17922–17940. [Google Scholar] [CrossRef]
  42. Mohapatra, S.; Mohapatra, P. Fast random opposition-based learning Golden Jackal Optimization algorithm. Knowl.-Based Syst. 2023, 275, 110679. [Google Scholar] [CrossRef]
  43. Zheng, X.; Liu, R.; Liu, X. Simulation Application of Adaptive Strategy Hybrid Secretary Bird Optimization Algorithm in Multi-UAV 3D Path Planning. Computers 2025, 14, 439. [Google Scholar] [CrossRef]
  44. Xiao, Y.; Cui, H.; Khurma, R.A.; Castillo, P.A. Artificial lemming algorithm: A novel bionic meta-heuristic technique for solving real-world engineering optimization problems. Artif. Intell. Rev. 2025, 58, 84. [Google Scholar] [CrossRef]
  45. Shami, T.M.; Mirjalili, S.; Al-Eryani, Y.; Daoudi, K.; Izadi, S.; Abualigah, L. Velocity pausing particle swarm optimization: A novel variant for global optimization. Neural Comput. Appl. 2023, 35, 9193–9223. [Google Scholar] [CrossRef]
  46. Yu, M.; Xu, J.; Liang, W.; Qiu, Y.; Bao, S.; Tang, L. Improved multi-strategy adaptive Grey Wolf Optimization for practical engineering applications and high-dimensional problem solving. Artif. Intell. Rev. 2024, 57, 277. [Google Scholar] [CrossRef]
  47. Huang, S.; Liu, D.; Fu, Y.; Chen, J.; He, L.; Yan, J.; Yang, D. Prediction of Self-Care Behaviors in Patients Using High-Density Surface Electromyography Signals and an Improved Whale Optimization Algorithm-Based LSTM Model. J. Bionic Eng. 2025, 22, 1963–1984. [Google Scholar] [CrossRef]
  48. Xue, J.; Shen, B. Dung beetle optimizer: A new meta-heuristic algorithm for global optimization. J. Supercomput. 2023, 79, 7305–7336. [Google Scholar] [CrossRef]
  49. Wang, L.; Cao, Q.; Zhang, Z.; Mirjalili, S.; Zhao, W. Artificial rabbits optimization: A new bio-inspired meta-heuristic algorithm for solving engineering optimization problems. Eng. Appl. Artif. Intell. 2022, 114, 105082. [Google Scholar] [CrossRef]
  50. Wang, R.B.; Hu, R.B.; Geng, F.D.; Xu, L.; Chu, S.C.; Pan, J.S.; Meng, Z.-Y.; Mirjalili, S. The Animated Oat Optimization Algorithm: A nature-inspired metaheuristic for engineering optimization and a case study on Wireless Sensor Networks. Knowl.-Based Syst. 2025, 318, 113589. [Google Scholar] [CrossRef]
  51. Fu, Y.; Liu, D.; Chen, J.; He, L. Secretary bird optimization algorithm: A new metaheuristic for solving global optimization problems. Artif. Intell. Rev. 2024, 57, 123. [Google Scholar] [CrossRef]
  52. Hernández-González, O.; Targui, B.; Valencia-Palomo, G.; Guerrero-Sánchez, M.E. Robust cascade observer for a disturbance unmanned aerial vehicle carrying a load under multiple time-varying delays and uncertainties. Int. J. Syst. Sci. 2024, 55, 1056–1072. [Google Scholar] [CrossRef]
  53. Urbina-Brito, N.; Guerrero-Sánchez, M.E.; Valencia-Palomo, G.; Hernández-González, O.; López-Estrada, F.R.; Hoyo-Montaño, J.A. A predictive control strategy for aerial payload transportation with an unmanned aerial vehicle. Mathematics 2021, 9, 1822. [Google Scholar] [CrossRef]
Figure 1. Research framework diagram.
Figure 2. Examples of swarm intelligence optimization algorithms.
Figure 3. ALA algorithm flowchart.
Figure 4. CEC2017-30D average convergence curves.
Figure 5. CEC2017-50D average convergence curves.
Figure 6. CEC2017-100D average convergence curves.
Figure 7. Box distribution diagram of fitness values for CEC2017-30D.
Figure 8. Box distribution diagram of fitness values for CEC2017-50D.
Figure 9. Box distribution diagram of fitness values for CEC2017-100D.
Figure 10. Percentage of exploration/exploitation on the basic benchmark functions.
Figure 11. Percentage of exploration/exploitation on the hybrid and composition functions.
Figure 12. Heatmap of Friedman rankings for CEC2017-30D, 50D, and 100D.
Figure 13. Sankey diagram of Friedman ranking proportions for CEC2017-30D, 50D, and 100D.
Figure 14. Layout of the eight scenarios.
Figure 15. Multi-algorithm ranking comparison radar chart.
Figure 16. Top views of paths obtained by IALA and advanced algorithms in eight scenarios.
Figure 17. Convergence curves of IALA and advanced algorithms in eight scenarios.
Figure 18. Layout of the two scenarios.
Figure 19. 3D path views of IALA and advanced algorithms in the simple scenario.
Figure 20. 3D path views of IALA and advanced algorithms in the complex scenario.
Table 1. Comparison of advantages and disadvantages of some classic or latest algorithms.
Classification | Name | Year | Advantage | Disadvantage
Evolutionary Algorithm | GA | 1975 | Strong global search capability, clear underlying principles, and easy integration with other algorithms | Slow convergence speed, prone to getting stuck in local optima, high randomness in results, and complex parameter tuning
Evolutionary Algorithm | DE | 1996 | Simple structure, fast convergence speed, strong robustness, excels at handling continuous-space optimization problems | Prone to premature convergence, performance degrades in high-dimensional complex problems, sensitive to variation strategies and parameter selection
Physical Heuristic Algorithm | SA | 1983 | Good global convergence, simple structure, effectively escapes local optima | Slow convergence speed, cooling strategy significantly impacts performance, solution accuracy is sometimes insufficient
Physical Heuristic Algorithm | MSO | 2025 | Dual-strategy synergy, efficient search, minimal parameters | Dependent on initial distribution, late-stage convergence oscillations, computational complexity
Physical Heuristic Algorithm | TOA | 2025 | Precise local adjustments, intuitive algorithmic workflow, minimal control parameters, easy to understand and implement | Optimization results heavily depend on the quality of the initial solution, excessive focus on local optimization, limited global exploration capability
Physical Heuristic Algorithm | DOA | 2025 | Unique balancing mechanism with a simple core system that is easy to implement | Requires initial accumulation of experience and exhibits slow convergence
Swarm Intelligence Algorithm | PSO | 1995 | Simple concept, few parameters, fast convergence, easy to implement | Prone to local optima, low convergence accuracy in later stages, prone to oscillation
Swarm Intelligence Algorithm | ACO | 1991 | Excels at discrete combinatorial optimization with fast convergence | Computationally time-consuming, prone to premature convergence, and sensitive to initial parameters
Swarm Intelligence Algorithm | CPO | 2024 | Simple structure, few parameters, easy to implement and adjust | Leading competitive strategies may lead to population instability
Swarm Intelligence Algorithm | GO | 2024 | Fast convergence speed, high precision, and high adaptability | Parameter adjustments significantly impact performance
Swarm Intelligence Algorithm | HSO | 2025 | Global information search, escaping local optima with strong convergence accuracy | High computational complexity, convergence stability requires improvement
Table 2. F1–F30 benchmark function test results (dim = 30).
VPPSO | IAGWO | IWOA | DBO | ARO | AOO | SBOA | ALA | IALA
meanF18.38 × 1061.56 × 1061.70 × 1062.59 × 1083.55 × 1077.99 × 1042.18 × 1041.16 × 1082.16 × 103
F34.96 × 1045.87 × 1041.65 × 1051.02 × 1055.91 × 1042.52 × 1042.66 × 1043.10 × 1042.93 × 103
F45.32 × 1026.43 × 1025.22 × 1027.01 × 1025.50 × 1025.08 × 1025.05 × 1025.64 × 1025.03 × 102
F56.54 × 1026.52 × 1027.43 × 1027.49 × 1026.24 × 1026.37 × 1025.84 × 1026.28 × 1025.60 × 102
F66.39 × 1026.21 × 1026.56 × 1026.48 × 1026.14 × 1026.28 × 1026.03 × 1026.17 × 1026.00 × 102
F79.35 × 1029.11 × 1021.08 × 1031.04 × 1039.61 × 1028.76 × 1028.43 × 1029.33 × 1027.87 × 102
F89.28 × 1029.20 × 1029.80 × 1021.01 × 1039.07 × 1028.98 × 1028.80 × 1029.30 × 1028.65 × 102
F93.56 × 1034.58 × 1035.37 × 1037.10 × 1032.69 × 1034.49 × 1031.37 × 1032.31 × 1039.01 × 102
F104.82 × 1034.39 × 1035.81 × 1036.59 × 1034.54 × 1034.78 × 1034.40 × 1036.30 × 1035.15 × 103
F111.43 × 1032.11 × 1031.42 × 1032.25 × 1031.31 × 1031.28 × 1031.21 × 1031.33 × 1031.17 × 103
F122.34 × 1078.12 × 1073.41 × 1065.67 × 1073.67 × 1061.35 × 1071.14 × 1061.10 × 1071.94 × 105
F139.88 × 1042.51 × 1074.60 × 1048.23 × 1061.65 × 1041.39 × 1051.92 × 1041.37 × 1056.69 × 103
F141.13 × 1057.46 × 1053.16 × 1053.71 × 1058.44 × 1044.93 × 1044.21 × 1042.47 × 1031.48 × 103
F153.88 × 1046.48 × 1031.19 × 1041.25 × 1055.58 × 1034.56 × 1041.35 × 1043.05 × 1041.72 × 103
F162.98 × 1033.03 × 1033.03 × 1033.34 × 1032.65 × 1032.65 × 1032.36 × 1032.82 × 1032.38 × 103
F172.19 × 1032.34 × 1032.45 × 1032.57 × 1032.15 × 1032.17 × 1031.91 × 1032.17 × 1031.89 × 103
F181.22 × 1062.60 × 1063.33 × 1063.62 × 1067.95 × 1051.21 × 1065.24 × 1058.42 × 1041.30 × 104
F192.03 × 1068.73 × 1037.55 × 1031.70 × 1068.19 × 1035.13 × 1051.37 × 1042.50 × 1042.06 × 103
F202.55 × 1032.61 × 1032.71 × 1032.68 × 1032.47 × 1032.50 × 1032.26 × 1032.53 × 1032.28 × 103
F212.43 × 1032.45 × 1032.51 × 1032.55 × 1032.40 × 1032.42 × 1032.36 × 1032.44 × 1032.36 × 103
F223.61 × 1033.98 × 1034.16 × 1035.54 × 1032.34 × 1034.26 × 1032.43 × 1036.96 × 1032.31 × 103
F232.81 × 1033.11 × 1032.89 × 1033.00 × 1032.79 × 1032.79 × 1032.73 × 1032.79 × 1032.72 × 103
F242.97 × 1033.35 × 1033.09 × 1033.19 × 1032.96 × 1032.95 × 1032.90 × 1032.96 × 1032.88 × 103
F252.95 × 1032.96 × 1032.91 × 1032.98 × 1032.97 × 1032.91 × 1032.91 × 1032.92 × 1032.89 × 103
F264.70 × 1034.00 × 1035.60 × 1037.00 × 1035.21 × 1034.60 × 1033.95 × 1035.26 × 1034.16 × 103
F273.30 × 1033.27 × 1033.49 × 1033.31 × 1033.27 × 1033.25 × 1033.22 × 1033.24 × 1033.23 × 103
F283.32 × 1033.38 × 1034.64 × 1033.51 × 1033.35 × 1033.26 × 1033.25 × 1033.43 × 1033.22 × 103
F294.31 × 1034.04 × 1034.34 × 1034.40 × 1033.91 × 1033.96 × 1033.65 × 1034.03 × 1033.65 × 103
F307.84 × 1061.86 × 1043.70 × 1059.12 × 1061.05 × 1053.77 × 1065.12 × 1043.09 × 1051.60 × 104
stdF12.07 × 1075.34 × 1051.05 × 1061.77 × 1082.00 × 1073.95 × 1043.14 × 1045.77 × 1072.17 × 103
F38.40 × 1032.17 × 1044.84 × 1044.24 × 1047.81 × 1038.49 × 1037.00 × 1039.50 × 1031.77 × 103
F42.91 × 1012.69 × 1022.50 × 1011.64 × 1023.32 × 1012.64 × 1012.44 × 1013.69 × 1011.94 × 101
F54.48 × 1013.10 × 1015.17 × 1016.12 × 1013.54 × 1013.78 × 1012.08 × 1011.71 × 1011.81 × 101
F69.03 × 1009.60 × 1001.02 × 1011.34 × 1014.47 × 1001.01 × 1011.72 × 1004.91 × 1004.83 × 10−3
F75.80 × 1014.56 × 1011.01 × 1026.42 × 1016.38 × 1013.41 × 1012.96 × 1013.55 × 1012.10 × 101
F82.38 × 1012.29 × 1013.80 × 1015.82 × 1012.08 × 1012.73 × 1012.31 × 1012.90 × 1012.53 × 101
F98.78 × 1021.06 × 1031.07 × 1032.31 × 1037.36 × 1022.29 × 1033.59 × 1027.94 × 1021.56 × 100
F106.15 × 1025.67 × 1028.64 × 1021.30 × 1035.51 × 1027.01 × 1027.77 × 1026.82 × 1027.17 × 102
F111.58 × 1021.50 × 1031.21 × 1021.17 × 1036.40 × 1015.29 × 1014.22 × 1015.06 × 1013.40 × 101
F122.20 × 1072.18 × 1082.07 × 1069.67 × 1072.72 × 1061.36 × 1077.84 × 1051.48 × 1072.64 × 105
F134.50 × 1041.37 × 1084.22 × 1041.50 × 1071.29 × 1041.02 × 1051.87 × 1041.20 × 1053.44 × 103
F141.04 × 1057.35 × 1053.62 × 1057.48 × 1058.33 × 1044.89 × 1043.97 × 1042.28 × 1032.61 × 101
F151.94 × 1046.88 × 1031.33 × 1043.06 × 1055.73 × 1033.16 × 1041.34 × 1041.39 × 1041.06 × 102
F163.12 × 1022.91 × 1023.69 × 1024.82 × 1023.03 × 1022.97 × 1023.31 × 1024.06 × 1022.38 × 102
F172.21 × 1022.68 × 1022.60 × 1022.52 × 1022.01 × 1021.96 × 1021.34 × 1021.52 × 1021.06 × 102
F181.21 × 1066.28 × 1064.49 × 1065.15 × 1069.82 × 1059.49 × 1054.85 × 1054.65 × 1048.63 × 103
F191.35 × 1068.91 × 1039.82 × 1032.45 × 1066.99 × 1035.14 × 1051.49 × 1041.77 × 1042.65 × 102
F201.48 × 1021.79 × 1022.18 × 1022.54 × 1021.87 × 1022.01 × 1021.35 × 1021.61 × 1021.29 × 102
F213.39 × 1013.93 × 1014.89 × 1015.20 × 1012.07 × 1012.35 × 1011.72 × 1013.03 × 1012.04 × 101
F221.96 × 1032.11 × 1032.34 × 1032.51 × 1031.57 × 1012.12 × 1036.82 × 1022.14 × 1032.38 × 101
F235.03 × 1011.53 × 1027.45 × 1018.72 × 1014.27 × 1014.00 × 1012.38 × 1012.86 × 1011.70 × 101
F244.61 × 1011.29 × 1026.17 × 1011.36 × 1022.78 × 1014.65 × 1012.24 × 1013.27 × 1012.89 × 101
F252.95 × 1013.31 × 1012.20 × 1014.24 × 1012.97 × 1012.31 × 1011.83 × 1011.47 × 1011.37 × 100
F261.18 × 1031.82 × 1031.61 × 1031.10 × 1031.21 × 1039.19 × 1027.78 × 1025.29 × 1023.43 × 102
F274.40 × 1012.07 × 1021.38 × 1025.94 × 1012.39 × 1012.40 × 1011.14 × 1011.85 × 1011.41 × 101
F283.33 × 1011.31 × 1021.48 × 1032.92 × 1023.25 × 1013.35 × 1013.14 × 1015.43 × 1021.81 × 101
F291.88 × 1023.35 × 1023.20 × 1022.93 × 1022.47 × 1022.12 × 1022.03 × 1022.52 × 1021.10 × 102
F305.66 × 1065.16 × 1041.08 × 1061.44 × 1071.18 × 1052.53 × 1061.25 × 1053.50 × 1055.52 × 103
Table 3. F1–F30 benchmark function test results (dim = 50).
VPPSO | IAGWO | IWOA | DBO | ARO | AOO | SBOA | ALA | IALA
meanF16.95 × 1081.80 × 1071.71 × 1087.48 × 1092.81 × 1091.68 × 1061.80 × 1072.05 × 1094.44 × 103
F31.44 × 1051.72 × 1052.85 × 1052.83 × 1051.65 × 1051.34 × 1051.04 × 1051.13 × 1053.63 × 104
F48.25 × 1027.34 × 1027.07 × 1021.59 × 1031.05 × 1036.34 × 1026.31 × 1028.75 × 1025.95 × 102
F57.95 × 1027.80 × 1028.89 × 1029.60 × 1027.79 × 1027.50 × 1027.22 × 1028.44 × 1026.28 × 102
F66.52 × 1026.35 × 1026.69 × 1026.66 × 1026.30 × 1026.44 × 1026.16 × 1026.38 × 1026.00 × 102
F71.25 × 1031.19 × 1031.53 × 1031.39 × 1031.36 × 1031.05 × 1031.13 × 1031.30 × 1038.71 × 102
F81.10 × 1031.09 × 1031.16 × 1031.30 × 1031.10 × 1031.07 × 1031.01 × 1031.13 × 1039.39 × 102
F91.04 × 1041.57 × 1041.56 × 1042.96 × 1041.04 × 1041.23 × 1045.54 × 1031.06 × 1049.80 × 102
F108.34 × 1037.14 × 1038.78 × 1031.17 × 1048.03 × 1038.02 × 1036.97 × 1031.20 × 1049.84 × 103
F112.40 × 1034.67 × 1031.84 × 1035.09 × 1032.25 × 1031.57 × 1031.48 × 1032.10 × 1031.29 × 103
F121.91 × 1081.53 × 1084.35 × 1078.30 × 1088.64 × 1077.78 × 1071.45 × 1071.66 × 1083.06 × 106
F139.99 × 1042.77 × 1089.71 × 1041.47 × 1082.35 × 1051.34 × 1051.13 × 1042.06 × 1062.26 × 104
F146.17 × 1051.65 × 1071.26 × 1064.44 × 1069.00 × 1055.12 × 1052.39 × 1055.73 × 1049.23 × 103
F153.39 × 1043.90 × 1073.12 × 1048.34 × 1061.08 × 1045.24 × 1041.10 × 1041.01 × 1053.61 × 103
F163.54 × 1033.87 × 1034.03 × 1034.79 × 1033.41 × 1033.42 × 1033.13 × 1034.22 × 1033.33 × 103
F173.39 × 1033.49 × 1033.60 × 1034.18 × 1033.08 × 1033.11 × 1032.71 × 1033.44 × 1032.85 × 103
F185.18 × 1068.86 × 1065.00 × 1061.41 × 1073.34 × 1063.22 × 1062.47 × 1067.44 × 1051.28 × 105
F191.58 × 1061.53 × 1044.17 × 1048.17 × 1061.69 × 1041.20 × 1061.90 × 1041.80 × 1058.12 × 103
F203.19 × 1033.25 × 1033.73 × 1033.73 × 1033.16 × 1033.08 × 1032.88 × 1033.42 × 1033.02 × 103
F212.58 × 1032.61 × 1032.74 × 1032.87 × 1032.55 × 1032.54 × 1032.47 × 1032.63 × 1032.43 × 103
F229.40 × 1039.65 × 1031.08 × 1041.38 × 1049.31 × 1039.36 × 1037.80 × 1031.35 × 1041.15 × 104
F233.13 × 1033.78 × 1033.33 × 1033.56 × 1033.13 × 1033.05 × 1032.94 × 1033.11 × 1032.87 × 103
F243.25 × 1034.01 × 1033.42 × 1033.71 × 1033.27 × 1033.23 × 1033.10 × 1033.24 × 1033.03 × 103
F253.33 × 1033.31 × 1033.18 × 1033.65 × 1033.53 × 1033.11 × 1033.16 × 1033.38 × 1033.08 × 103
F268.19 × 1037.98 × 1037.58 × 1031.06 × 1049.19 × 1035.77 × 1036.40 × 1037.57 × 1035.11 × 103
F273.75 × 1033.56 × 1034.14 × 1033.89 × 1033.78 × 1033.64 × 1033.39 × 1033.54 × 1033.48 × 103
F283.86 × 1033.57 × 1039.15 × 1036.14 × 1034.24 × 1033.39 × 1033.47 × 1034.97 × 1033.35 × 103
F295.56 × 1035.35 × 1035.40 × 1036.65 × 1034.88 × 1034.96 × 1034.12 × 1035.21 × 1034.29 × 103
F301.21 × 1084.95 × 1063.27 × 1065.12 × 1071.05 × 1075.20 × 1071.08 × 1062.70 × 1073.96 × 106
stdF16.88 × 1084.57 × 1068.02 × 1071.13 × 10101.43 × 1097.27 × 1051.57 × 1076.54 × 1084.61 × 103
F31.66 × 1044.31 × 1046.49 × 1048.86 × 1041.68 × 1042.88 × 1041.62 × 1042.08 × 1046.92 × 103
F49.55 × 1012.05 × 1026.73 × 1011.49 × 1032.25 × 1023.76 × 1015.62 × 1018.07 × 1013.39 × 101
F55.56 × 1012.95 × 1013.86 × 1011.07 × 1023.91 × 1016.34 × 1013.75 × 1014.72 × 1013.21 × 101
F66.57 × 1005.92 × 1003.04 × 1001.35 × 1017.55 × 1009.88 × 1006.12 × 1008.21 × 1001.22 × 10−1
F71.42 × 1026.70 × 1011.41 × 1021.43 × 1021.26 × 1027.05 × 1016.81 × 1018.50 × 1013.91 × 101
F84.98 × 1013.85 × 1016.37 × 1019.88 × 1014.18 × 1014.75 × 1013.81 × 1014.74 × 1014.22 × 101
F91.98 × 1032.23 × 1032.54 × 1038.06 × 1031.77 × 1034.15 × 1032.00 × 1033.69 × 1032.04 × 102
F101.20 × 1039.04 × 1021.05 × 1032.52 × 1037.46 × 1029.51 × 1028.42 × 1021.21 × 1038.96 × 102
F113.84 × 1023.21 × 1031.99 × 1023.44 × 1035.45 × 1021.17 × 1021.13 × 1022.56 × 1023.47 × 101
F121.66 × 1087.06 × 1082.39 × 1075.35 × 1083.79 × 1073.53 × 1071.09 × 1076.27 × 1071.64 × 106
F136.62 × 1041.52 × 1091.15 × 1051.58 × 1082.04 × 1057.87 × 1049.40 × 1031.60 × 1068.65 × 103
F143.69 × 1052.61 × 1071.05 × 1065.29 × 1067.28 × 1053.16 × 1051.68 × 1058.67 × 1041.14 × 104
F151.94 × 1041.45 × 1081.61 × 1042.13 × 1076.43 × 1032.64 × 1049.31 × 1036.77 × 1041.29 × 103
F163.51 × 1025.98 × 1024.41 × 1026.05 × 1024.51 × 1024.05 × 1024.30 × 1024.70 × 1023.14 × 102
F173.04 × 1023.15 × 1023.45 × 1024.37 × 1022.79 × 1023.01 × 1023.55 × 1023.54 × 1022.41 × 102
F183.81 × 1061.99 × 1073.43 × 1061.46 × 1072.49 × 1062.19 × 1062.16 × 1066.57 × 1051.31 × 105
F191.68 × 1069.06 × 1036.24 × 1041.17 × 1071.10 × 1041.03 × 1061.39 × 1042.08 × 1055.51 × 103
F203.34 × 1022.95 × 1023.37 × 1024.06 × 1023.74 × 1022.93 × 1022.12 × 1023.77 × 1022.48 × 102
F215.43 × 1014.43 × 1018.74 × 1018.05 × 1014.96 × 1015.09 × 1013.76 × 1015.24 × 1014.45 × 101
F221.63 × 1031.96 × 1031.39 × 1032.04 × 1031.97 × 1031.04 × 1032.81 × 1032.22 × 1031.06 × 103
F238.40 × 1013.38 × 1021.39 × 1021.31 × 1026.70 × 1016.89 × 1014.44 × 1013.99 × 1013.85 × 101
F246.47 × 1012.17 × 1021.58 × 1021.92 × 1025.31 × 1018.34 × 1014.95 × 1014.87 × 1014.13 × 101
F251.00 × 1021.62 × 1023.66 × 1011.15 × 1031.70 × 1022.34 × 1015.63 × 1011.10 × 1021.90 × 101
F262.02 × 1032.24 × 1033.23 × 1031.68 × 1031.88 × 1032.04 × 1031.33 × 1036.60 × 1024.07 × 102
F271.35 × 1026.30 × 1023.97 × 1022.31 × 1021.08 × 1021.11 × 1025.95 × 1011.24 × 1027.98 × 101
F281.81 × 1025.46 × 1023.90 × 1032.16 × 1032.36 × 1024.02 × 1016.52 × 1011.92 × 1033.62 × 101
F294.27 × 1022.29 × 1036.32 × 1021.05 × 1033.66 × 1023.45 × 1023.70 × 1023.82 × 1022.92 × 102
F303.51 × 1071.55 × 1071.36 × 1066.81 × 1074.30 × 1061.65 × 1072.74 × 1051.23 × 1071.09 × 106
Table 4. F1–F30 benchmark function test results (dim = 100).
VPPSO | IAGWO | IWOA | DBO | ARO | AOO | SBOA | ALA | IALA
meanF11.86 × 10103.22 × 1088.99 × 1099.11 × 10105.17 × 10104.33 × 1081.44 × 10103.25 × 10101.20 × 107
F33.91 × 1054.95 × 1058.22 × 1056.90 × 1053.81 × 1055.85 × 1053.39 × 1053.18 × 1052.20 × 105
F42.93 × 1033.88 × 1031.97 × 1031.52 × 1046.31 × 1031.01 × 1031.68 × 1033.68 × 1038.36 × 102
F51.30 × 1031.27 × 1031.50 × 1031.64 × 1031.44 × 1031.28 × 1031.16 × 1031.52 × 1039.10 × 102
F66.62 × 1026.48 × 1026.72 × 1026.78 × 1026.56 × 1026.63 × 1026.41 × 1026.70 × 1026.03 × 102
F72.66 × 1032.28 × 1033.14 × 1032.99 × 1032.95 × 1032.02 × 1032.38 × 1032.65 × 1031.22 × 103
F81.67 × 1031.63 × 1031.94 × 1032.16 × 1031.76 × 1031.52 × 1031.50 × 1031.83 × 1031.20 × 103
F92.67 × 1044.16 × 1043.41 × 1047.73 × 1043.41 × 1043.75 × 1042.75 × 1045.20 × 1044.36 × 103
F101.79 × 1041.67 × 1041.93 × 1042.92 × 1042.02 × 1041.80 × 1041.72 × 1042.81 × 1042.40 × 104
F117.36 × 1047.53 × 1041.58 × 1052.31 × 1058.53 × 1043.65 × 1043.70 × 1046.53 × 1049.19 × 103
F121.37 × 1094.76 × 1098.58 × 1087.80 × 1093.92 × 1095.90 × 1082.99 × 1083.06 × 1093.62 × 107
F131.88 × 1052.05 × 1059.22 × 1052.66 × 1081.53 × 1077.48 × 1044.09 × 1045.87 × 1071.12 × 104
F146.03 × 1068.43 × 1064.20 × 1061.94 × 1076.20 × 1063.99 × 1063.79 × 1063.26 × 1065.35 × 105
F155.43 × 1052.36 × 1046.12 × 1041.13 × 1082.18 × 1058.38 × 1047.19 × 1035.44 × 1061.27 × 104
F167.53 × 1037.25 × 1037.03 × 1039.82 × 1036.83 × 1036.50 × 1035.73 × 1038.74 × 1036.72 × 103
F175.56 × 1031.20 × 1046.08 × 1039.10 × 1035.12 × 1035.28 × 1034.86 × 1036.55 × 1034.80 × 103
F185.44 × 1061.49 × 1077.63 × 1062.78 × 1075.38 × 1064.86 × 1064.57 × 1064.80 × 1066.80 × 105
F196.63 × 1063.13 × 1056.92 × 1051.20 × 1087.00 × 1055.03 × 1061.03 × 1041.11 × 1071.66 × 104
F205.57 × 1035.29 × 1036.00 × 1037.39 × 1035.29 × 1035.28 × 1034.67 × 1036.62 × 1035.72 × 103
F213.21 × 1034.27 × 1033.62 × 1034.00 × 1033.20 × 1033.14 × 1032.96 × 1033.35 × 1032.76 × 103
F222.12 × 1041.97 × 1042.26 × 1042.92 × 1042.24 × 1042.05 × 1042.05 × 1043.06 × 1042.65 × 104
F233.90 × 1036.03 × 1034.13 × 1034.80 × 1033.87 × 1033.82 × 1033.39 × 1033.84 × 1033.26 × 103
F244.73 × 1036.82 × 1035.01 × 1036.14 × 1034.90 × 1034.51 × 1034.06 × 1034.47 × 1033.77 × 103
F255.29 × 1034.85 × 1034.43 × 1038.62 × 1036.64 × 1033.82 × 1034.25 × 1036.04 × 1033.52 × 103
F262.16 × 1042.00 × 1042.41 × 1042.79 × 1042.53 × 1041.62 × 1041.89 × 1041.75 × 1049.97 × 103
F274.52 × 1035.39 × 1037.44 × 1034.73 × 1034.57 × 1034.11 × 1033.77 × 1033.92 × 1033.76 × 103
F286.32 × 1036.12 × 1032.15 × 1041.93 × 1049.33 × 1033.93 × 1034.91 × 1031.03 × 1043.72 × 103
F291.06 × 1047.68 × 1038.47 × 1031.14 × 1049.07 × 1038.81 × 1037.14 × 1039.64 × 1037.83 × 103
F303.65 × 1083.37 × 1081.22 × 1072.75 × 1085.73 × 1078.70 × 1071.04 × 1066.31 × 1075.24 × 105
stdF14.88 × 1097.22 × 1072.26 × 1097.28 × 10101.11 × 10101.76 × 1086.25 × 1096.40 × 1091.46 × 107
F34.62 × 1041.52 × 1051.80 × 1052.86 × 1052.74 × 1049.62 × 1042.03 × 1043.88 × 1042.89 × 104
F45.18 × 1026.13 × 1033.11 × 1021.38 × 1041.48 × 1037.81 × 1014.26 × 1026.72 × 1024.05 × 101
F51.17 × 1026.04 × 1014.62 × 1011.82 × 1026.44 × 1019.48 × 1016.16 × 1011.05 × 1028.57 × 101
F65.10 × 1005.41 × 1003.14 × 1001.28 × 1017.11 × 1006.93 × 1005.79 × 1007.68 × 1001.00 × 100
F72.98 × 1021.27 × 1022.12 × 1022.20 × 1022.01 × 1021.86 × 1021.52 × 1022.00 × 1029.07 × 101
F86.32 × 1018.12 × 1017.76 × 1011.98 × 1029.70 × 1011.09 × 1028.99 × 1019.50 × 1018.67 × 101
F98.13 × 1035.81 × 1032.47 × 1038.70 × 1034.59 × 1036.49 × 1034.04 × 1038.64 × 1032.25 × 103
F102.65 × 1032.86 × 1031.49 × 1033.61 × 1031.62 × 1031.77 × 1031.48 × 1031.48 × 1032.43 × 103
F111.12 × 1041.70 × 1046.83 × 1045.96 × 1041.85 × 1041.12 × 1049.44 × 1031.35 × 1042.24 × 103
F126.14 × 1081.04 × 10103.19 × 1082.25 × 1092.24 × 1092.47 × 1081.88 × 1081.02 × 1091.67 × 107
F136.95 × 1055.26 × 1055.28 × 1051.84 × 1081.05 × 1072.26 × 1041.80 × 1042.77 × 1074.05 × 103
F142.72 × 1063.94 × 1061.52 × 1061.20 × 1072.26 × 1061.81 × 1061.82 × 1061.70 × 1063.53 × 105
F152.70 × 1064.11 × 1042.13 × 1041.66 × 1081.38 × 1059.09 × 1043.65 × 1034.93 × 1065.01 × 103
F167.95 × 1021.56 × 1038.71 × 1021.53 × 1037.40 × 1027.13 × 1027.88 × 1028.84 × 1029.74 × 102
F175.83 × 1023.30 × 1047.13 × 1021.48 × 1035.12 × 1026.76 × 1025.75 × 1024.77 × 1024.66 × 102
F182.75 × 1062.93 × 1074.38 × 1061.80 × 1072.58 × 1063.69 × 1061.98 × 1062.96 × 1063.02 × 105
F196.10 × 1061.68 × 1067.39 × 1059.98 × 1074.54 × 1053.85 × 1061.01 × 1046.96 × 1061.38 × 104
F205.74 × 1027.45 × 1025.57 × 1027.26 × 1024.63 × 1026.33 × 1024.13 × 1025.56 × 1026.03 × 102
F211.13 × 1026.09 × 1021.15 × 1021.65 × 1029.29 × 1011.14 × 1028.00 × 1018.27 × 1019.23 × 101
F222.17 × 1032.21 × 1031.40 × 1035.27 × 1031.84 × 1031.69 × 1031.53 × 1031.39 × 1032.13 × 103
F231.18 × 1024.19 × 1022.27 × 1022.30 × 1021.27 × 1021.56 × 1027.24 × 1011.00 × 1028.98 × 101
F242.34 × 1029.20 × 1023.14 × 1023.98 × 1022.56 × 1021.51 × 1021.20 × 1021.04 × 1021.01 × 102
F254.17 × 1021.07 × 1032.19 × 1024.78 × 1036.68 × 1021.07 × 1022.73 × 1027.91 × 1025.39 × 101
F262.73 × 1035.88 × 1032.29 × 1033.36 × 1032.36 × 1034.19 × 1033.15 × 1031.09 × 1038.85 × 102
F272.55 × 1022.44 × 1031.88 × 1034.45 × 1022.42 × 1021.69 × 1021.60 × 1021.48 × 1029.95 × 101
F287.99 × 1022.67 × 1037.58 × 1035.91 × 1039.77 × 1021.49 × 1023.94 × 1024.41 × 1038.93 × 101
F291.04 × 1031.57 × 1037.79 × 1022.02 × 1037.37 × 1028.66 × 1029.22 × 1028.40 × 1025.98 × 102
F301.82 × 1089.43 × 1086.08 × 1061.71 × 1084.68 × 1074.01 × 1079.22 × 1052.30 × 1072.08 × 105
Table 5. p-value on CEC2017 (DIM = 30).
Function | VPPSO | IAGWO | IWOA | DBO | ARO | AOO | SBOA | ALA
F11.53 × 10−53.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−115.97 × 10−93.02 × 10−11
F33.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−11
F43.83 × 10−62.78 × 10−72.13 × 10−43.02 × 10−112.67 × 10−93.71 × 10−15.11 × 10−16.72 × 10−10
F52.37 × 10−103.34 × 10−113.02 × 10−113.02 × 10−113.47 × 10−101.09 × 10−105.61 × 10−54.50 × 10−11
F63.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−11
F73.69 × 10−113.02 × 10−113.02 × 10−113.02 × 10−113.69 × 10−111.61 × 10−102.03 × 10−93.02 × 10−11
F81.29 × 10−94.57 × 10−95.49 × 10−111.09 × 10−102.57 × 10−74.35 × 10−51.76 × 10−22.03 × 10−9
F93.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−11
F106.79 × 10−28.66 × 10−53.67 × 10−37.74 × 10−61.00 × 10−35.75 × 10−26.20 × 10−47.04 × 10−7
F113.02 × 10−115.00 × 10−91.46 × 10−103.02 × 10−111.33 × 10−101.55 × 10−94.71 × 10−44.98 × 10−11
F123.69 × 10−111.96 × 10−105.49 × 10−113.02 × 10−118.15 × 10−113.02 × 10−114.69 × 10−83.02 × 10−11
F133.02 × 10−111.17 × 10−41.20 × 10−83.02 × 10−117.30 × 10−43.02 × 10−117.98 × 10−23.02 × 10−11
F143.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−11
F153.02 × 10−111.36 × 10−73.02 × 10−113.02 × 10−113.82 × 10−93.02 × 10−111.78 × 10−103.02 × 10−11
F163.20 × 10−91.69 × 10−91.20 × 10−81.61 × 10−108.56 × 10−47.30 × 10−46.84 × 10−11.64 × 10−5
F177.04 × 10−76.72 × 10−106.70 × 10−111.09 × 10−108.20 × 10−72.03 × 10−76.41 × 10−14.18 × 10−9
F183.02 × 10−113.34 × 10−113.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−111.09 × 10−10
F193.02 × 10−112.87 × 10−101.46 × 10−103.02 × 10−116.70 × 10−113.34 × 10−112.23 × 10−93.02 × 10−11
F204.69 × 10−82.67 × 10−94.20 × 10−107.77 × 10−91.58 × 10−41.64 × 10−55.49 × 10−13.01 × 10−7
F216.12 × 10−101.21 × 10−103.02 × 10−113.02 × 10−111.25 × 10−74.62 × 10−106.95 × 10−18.15 × 10−11
F221.17 × 10−99.76 × 10−109.76 × 10−104.08 × 10−117.77 × 10−95.57 × 10−107.69 × 10−88.15 × 10−11
F233.02 × 10−113.02 × 10−113.02 × 10−113.02 × 10−116.70 × 10−112.03 × 10−92.71 × 10−13.34 × 10−11
F245.00 × 10−93.02 × 10−113.02 × 10−113.02 × 10−115.57 × 10−106.52 × 10−96.55 × 10−48.89 × 10−10
F253.02 × 10−113.02 × 10−116.12 × 10−103.02 × 10−113.02 × 10−111.02 × 10−51.43 × 10−53.02 × 10−11
F263.39 × 10−25.83 × 10−32.68 × 10−45.07 × 10−102.68 × 10−45.97 × 10−58.19 × 10−16.12 × 10−10
F271.46 × 10−101.11 × 10−63.02 × 10−115.46 × 10−94.20 × 10−101.64 × 10−53.27 × 10−21.54 × 10−1
F284.98 × 10−113.20 × 10−93.02 × 10−113.02 × 10−113.02 × 10−111.03 × 10−68.15 × 10−53.02 × 10−11
F293.69 × 10−112.00 × 10−57.39 × 10−113.69 × 10−111.87 × 10−56.53 × 10−85.30 × 10−15.09 × 10−8
F303.02 × 10−115.09 × 10−83.01 × 10−73.02 × 10−112.49 × 10−63.02 × 10−113.04 × 10−19.92 × 10−11
Table 6. p-value on CEC2017 (DIM = 50).
Function | VPPSO | IAGWO | IWOA | DBO | AROA | OO | SBOA | ALA
F1 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11
F3 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11
F4 | 3.02 × 10^−11 | 1.20 × 10^−8 | 2.92 × 10^−9 | 3.02 × 10^−11 | 3.02 × 10^−11 | 7.20 × 10^−5 | 1.08 × 10^−2 | 3.02 × 10^−11
F5 | 3.34 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 1.96 × 10^−10 | 2.15 × 10^−10 | 3.02 × 10^−11
F6 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11
F7 | 3.34 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 6.70 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11
F8 | 4.50 × 10^−11 | 4.50 × 10^−11 | 4.08 × 10^−11 | 3.02 × 10^−11 | 3.34 × 10^−11 | 1.78 × 10^−10 | 2.57 × 10^−7 | 3.69 × 10^−11
F9 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11
F10 | 1.60 × 10^−7 | 1.96 × 10^−10 | 6.36 × 10^−5 | 1.22 × 10^−2 | 5.00 × 10^−9 | 2.83 × 10^−8 | 4.50 × 10^−11 | 3.65 × 10^−8
F11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.47 × 10^−10 | 9.76 × 10^−10 | 3.02 × 10^−11
F12 | 3.02 × 10^−11 | 8.15 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.20 × 10^−9 | 3.02 × 10^−11
F13 | 6.72 × 10^−10 | 3.55 × 10^−1 | 3.82 × 10^−9 | 3.02 × 10^−11 | 4.50 × 10^−11 | 4.98 × 10^−11 | 5.97 × 10^−5 | 3.02 × 10^−11
F14 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.69 × 10^−11 | 3.69 × 10^−11 | 1.73 × 10^−6
F15 | 3.69 × 10^−11 | 6.55 × 10^−4 | 3.69 × 10^−11 | 3.02 × 10^−11 | 1.07 × 10^−7 | 3.02 × 10^−11 | 2.39 × 10^−4 | 3.02 × 10^−11
F16 | 5.55 × 10^−2 | 6.77 × 10^−5 | 5.53 × 10^−8 | 8.99 × 10^−11 | 8.30 × 10^−1 | 3.48 × 10^−1 | 1.56 × 10^−2 | 1.07 × 10^−9
F17 | 7.09 × 10^−8 | 1.10 × 10^−8 | 3.16 × 10^−10 | 4.50 × 10^−11 | 3.67 × 10^−3 | 4.71 × 10^−4 | 1.15 × 10^−1 | 2.60 × 10^−8
F18 | 4.08 × 10^−11 | 3.69 × 10^−11 | 3.69 × 10^−11 | 3.69 × 10^−11 | 3.34 × 10^−11 | 3.02 × 10^−11 | 6.07 × 10^−11 | 5.97 × 10^−9
F19 | 3.02 × 10^−11 | 9.03 × 10^−4 | 1.47 × 10^−7 | 3.02 × 10^−11 | 3.56 × 10^−4 | 3.02 × 10^−11 | 6.55 × 10^−4 | 6.12 × 10^−10
F20 | 4.21 × 10^−2 | 3.18 × 10^−3 | 1.20 × 10^−8 | 9.26 × 10^−9 | 3.64 × 10^−2 | 5.40 × 10^−1 | 2.15 × 10^−2 | 2.13 × 10^−5
F21 | 1.21 × 10^−10 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 1.41 × 10^−9 | 1.69 × 10^−9 | 5.61 × 10^−5 | 3.02 × 10^−11
F22 | 3.96 × 10^−8 | 1.86 × 10^−6 | 3.39 × 10^−2 | 8.29 × 10^−6 | 2.78 × 10^−7 | 6.52 × 10^−9 | 9.92 × 10^−11 | 1.56 × 10^−8
F23 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 1.09 × 10^−10 | 1.73 × 10^−6 | 3.02 × 10^−11
F24 | 6.07 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 4.50 × 10^−11 | 1.49 × 10^−6 | 3.02 × 10^−11
F25 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 2.03 × 10^−7 | 2.15 × 10^−10 | 3.02 × 10^−11
F26 | 1.31 × 10^−8 | 2.60 × 10^−5 | 2.46 × 10^−1 | 1.96 × 10^−10 | 3.69 × 10^−11 | 2.71 × 10^−2 | 1.17 × 10^−5 | 3.02 × 10^−11
F27 | 3.16 × 10^−10 | 2.42 × 10^−2 | 3.02 × 10^−11 | 5.57 × 10^−10 | 4.08 × 10^−11 | 4.11 × 10^−7 | 1.04 × 10^−4 | 3.27 × 10^−2
F28 | 3.02 × 10^−11 | 9.00 × 10^−1 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 1.17 × 10^−5 | 5.57 × 10^−10 | 3.02 × 10^−11
F29 | 7.39 × 10^−11 | 1.50 × 10^−2 | 2.92 × 10^−9 | 3.02 × 10^−11 | 1.47 × 10^−7 | 7.12 × 10^−9 | 1.05 × 10^−1 | 4.20 × 10^−10
F30 | 3.02 × 10^−11 | 7.04 × 10^−7 | 2.24 × 10^−2 | 7.39 × 10^−11 | 4.98 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11
Table 7. p-value on CEC2017 (DIM = 100).
Function | VPPSO | IAGWO | IWOA | DBO | AROA | OO | SBOA | ALA
F1 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11
F3 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 9.92 × 10^−11
F4 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 4.08 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11
F5 | 4.50 × 10^−11 | 3.34 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 4.08 × 10^−11 | 1.96 × 10^−10 | 3.02 × 10^−11
F6 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11
F7 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11
F8 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 5.49 × 10^−11 | 4.98 × 10^−11 | 3.02 × 10^−11
F9 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11
F10 | 2.67 × 10^−9 | 1.69 × 10^−9 | 2.67 × 10^−9 | 8.84 × 10^−7 | 9.83 × 10^−8 | 1.96 × 10^−10 | 8.99 × 10^−11 | 1.85 × 10^−8
F11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11
F12 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11
F13 | 3.02 × 10^−11 | 2.61 × 10^−10 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11
F14 | 3.69 × 10^−11 | 3.02 × 10^−11 | 3.69 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 5.49 × 10^−11 | 4.98 × 10^−11 | 2.37 × 10^−10
F15 | 1.96 × 10^−10 | 6.97 × 10^−3 | 3.34 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.34 × 10^−11 | 1.87 × 10^−5 | 3.02 × 10^−11
F16 | 9.52 × 10^−4 | 2.90 × 10^−1 | 1.30 × 10^−1 | 8.10 × 10^−10 | 4.12 × 10^−1 | 5.79 × 10^−1 | 1.89 × 10^−4 | 9.26 × 10^−9
F17 | 4.42 × 10^−6 | 5.97 × 10^−5 | 4.18 × 10^−9 | 3.02 × 10^−11 | 1.08 × 10^−2 | 3.85 × 10^−3 | 7.84 × 10^−1 | 4.50 × 10^−11
F18 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.34 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.69 × 10^−11
F19 | 3.02 × 10^−11 | 9.03 × 10^−4 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 2.51 × 10^−2 | 3.02 × 10^−11
F20 | 3.95 × 10^−1 | 1.63 × 10^−2 | 5.75 × 10^−2 | 2.92 × 10^−9 | 5.08 × 10^−3 | 1.27 × 10^−2 | 2.02 × 10^−8 | 1.86 × 10^−6
F21 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 4.98 × 10^−11 | 1.69 × 10^−9 | 3.02 × 10^−11
F22 | 4.18 × 10^−9 | 2.61 × 10^−10 | 5.97 × 10^−9 | 9.33 × 10^−2 | 3.08 × 10^−8 | 2.61 × 10^−10 | 1.46 × 10^−10 | 1.17 × 10^−9
F23 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.69 × 10^−11 | 9.53 × 10^−7 | 3.02 × 10^−11
F24 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 2.15 × 10^−10 | 3.02 × 10^−11
F25 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11
F26 | 3.02 × 10^−11 | 1.16 × 10^−7 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 1.11 × 10^−6 | 3.02 × 10^−11 | 3.02 × 10^−11
F27 | 3.02 × 10^−11 | 3.79 × 10^−1 | 3.02 × 10^−11 | 4.50 × 10^−11 | 3.02 × 10^−11 | 4.20 × 10^−10 | 7.39 × 10^−1 | 1.75 × 10^−5
F28 | 3.02 × 10^−11 | 3.69 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 1.36 × 10^−7 | 3.02 × 10^−11 | 3.02 × 10^−11
F29 | 4.98 × 10^−11 | 9.05 × 10^−2 | 1.11 × 10^−3 | 4.20 × 10^−10 | 1.07 × 10^−7 | 2.28 × 10^−5 | 6.91 × 10^−4 | 9.76 × 10^−10
F30 | 3.02 × 10^−11 | 1.45 × 10^−1 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.02 × 10^−11 | 3.37 × 10^−4 | 3.02 × 10^−11
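The p-values in Tables 5–7 correspond to pairwise Wilcoxon rank-sum tests between IALA and each comparison algorithm, evaluated function by function. As a minimal sketch (not the authors' code), the snippet below shows how one such entry could be computed in Python with SciPy, assuming 30 independent runs per algorithm as in the later path planning experiments; the arrays iala_runs and rival_runs are hypothetical placeholders for the final objective values of IALA and one competitor on a single CEC2017 function.

# Minimal sketch: Wilcoxon rank-sum test between IALA and one competitor
# on a single benchmark function (assumed 30 independent runs each).
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(0)
# Hypothetical stand-ins for the per-run final objective values.
iala_runs = rng.normal(loc=100.0, scale=1.0, size=30)
rival_runs = rng.normal(loc=105.0, scale=2.0, size=30)

statistic, p_value = ranksums(iala_runs, rival_runs)
print(f"p-value = {p_value:.2e}")  # compared against the 0.05 significance level

With 30 runs on each side, complete separation of the two samples drives the asymptotic two-sided p-value down to roughly the 10^−11 level, which is why values near 3.02 × 10^−11 recur throughout the tables.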
Table 8. Average cost and standard deviation of 30 independent experiments.
Scenario | Metric | VPPSO | IAGWO | IWOA | DBO | AROA | OO | SBOA | ALA | IALA
Scenario 1 | Mean | 5378.47 | 5827.43 | 6064.00 | 5874.96 | 5563.04 | 5652.05 | 5384.72 | 5777.49 | 5271.45
Scenario 1 | Std | 132.88 | 246.70 | 433.91 | 308.21 | 79.79 | 378.04 | 117.91 | 203.26 | 28.24
Scenario 2 | Mean | 5421.14 | 5675.11 | 6481.74 | 5950.07 | 5504.32 | 5474.52 | 5365.36 | 5631.72 | 5275.16
Scenario 2 | Std | 104.08 | 275.87 | 358.87 | 327.36 | 82.75 | 157.77 | 116.03 | 175.47 | 25.82
Scenario 3 | Mean | 6086.08 | 6055.68 | 8126.74 | 7084.96 | 6217.21 | 6005.43 | 5757.36 | 7041.82 | 5559.58
Scenario 3 | Std | 332.17 | 280.81 | 991.73 | 438.57 | 285.46 | 241.88 | 104.86 | 392.45 | 47.34
Scenario 4 | Mean | 7061.43 | 7077.09 | 8860.45 | 8248.59 | 7477.85 | 7117.82 | 6647.13 | 7623.86 | 6158.55
Scenario 4 | Std | 873.86 | 992.45 | 846.77 | 749.44 | 403.67 | 907.32 | 510.43 | 663.52 | 383.61
Scenario 5 | Mean | 5413.44 | 5812.97 | 6101.83 | 5641.98 | 5450.93 | 5446.25 | 5353.26 | 5512.53 | 5301.05
Scenario 5 | Std | 124.47 | 319.20 | 534.25 | 130.90 | 62.55 | 94.20 | 112.40 | 153.63 | 28.09
Scenario 6 | Mean | 5590.65 | 5807.43 | 7102.22 | 6489.68 | 5816.67 | 5522.77 | 5484.60 | 5841.87 | 5349.86
Scenario 6 | Std | 163.36 | 341.63 | 582.16 | 540.91 | 229.03 | 89.78 | 61.59 | 328.27 | 32.00
Scenario 7 | Mean | 6623.59 | 6580.92 | 7820.10 | 7041.78 | 6563.94 | 6086.49 | 6008.29 | 6592.28 | 6123.95
Scenario 7 | Std | 559.11 | 441.13 | 611.80 | 458.18 | 344.97 | 267.89 | 293.38 | 281.06 | 290.84
Scenario 8 | Mean | 6486.19 | 6940.26 | 7119.20 | 7101.76 | 7416.30 | 6433.31 | 5932.86 | 6267.92 | 6191.59
Scenario 8 | Std | 509.55 | 610.11 | 604.55 | 1058.16 | 770.65 | 542.81 | 378.27 | 577.36 | 187.29
Table 9. p-value in eight scenarios.
Scenario | VPPSO | IAGWO | IWOA | DBO | AROA | OO | SBOA | ALA
Scenario 1 | 3.76 × 10^−2 | 1.83 × 10^−4 | 1.83 × 10^−4 | 1.83 × 10^−4 | 1.83 × 10^−4 | 1.83 × 10^−4 | 4.52 × 10^−2 | 1.83 × 10^−4
Scenario 2 | 3.61 × 10^−3 | 1.83 × 10^−4 | 1.83 × 10^−4 | 1.83 × 10^−4 | 1.83 × 10^−4 | 3.61 × 10^−3 | 3.76 × 10^−2 | 7.69 × 10^−4
Scenario 3 | 1.83 × 10^−4 | 1.83 × 10^−4 | 1.83 × 10^−4 | 1.83 × 10^−4 | 1.83 × 10^−4 | 1.83 × 10^−4 | 3.30 × 10^−4 | 1.83 × 10^−4
Scenario 4 | 3.61 × 10^−3 | 4.59 × 10^−3 | 1.83 × 10^−4 | 1.83 × 10^−4 | 2.46 × 10^−4 | 5.80 × 10^−3 | 2.57 × 10^−2 | 3.30 × 10^−4
Scenario 5 | 9.11 × 10^−3 | 1.83 × 10^−4 | 5.83 × 10^−4 | 1.83 × 10^−4 | 1.83 × 10^−4 | 2.83 × 10^−3 | 4.73 × 10^−1 | 1.83 × 10^−4
Scenario 6 | 2.83 × 10^−3 | 1.83 × 10^−4 | 1.83 × 10^−4 | 1.83 × 10^−4 | 1.83 × 10^−4 | 1.83 × 10^−4 | 1.83 × 10^−4 | 1.83 × 10^−4
Scenario 7 | 3.12 × 10^−2 | 1.73 × 10^−2 | 1.83 × 10^−4 | 1.01 × 10^−3 | 5.80 × 10^−3 | 6.78 × 10^−1 | 3.07 × 10^−1 | 4.59 × 10^−3
Scenario 8 | 3.07 × 10^−1 | 5.80 × 10^−3 | 7.69 × 10^−4 | 4.52 × 10^−2 | 1.71 × 10^−3 | 3.85 × 10^−1 | 5.39 × 10^−2 | 7.34 × 10^−1
Table 10. Results of Monte Carlo simulation.
Algorithm | Mean | Std | Ranking
VPPSO | 196.55 | 8.56 | 3
IAGWO | 55,126.89 | 85,346.37 | 7
IWOA | 93,036.67 | 94,518.23 | 8
DBO | 116,809.27 | 96,913.18 | 9
AROA | 6293.90 | 33,412.36 | 5
OO | 6284.40 | 33,355.41 | 4
SBOA | 190.14 | 2.88 | 2
ALA | 24,641.74 | 63,391.69 | 6
IALA | 188.32 | 1.38 | 1
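Table 10 ranks the algorithms by the average cost obtained over the Monte Carlo runs of the amphibious UAV path planning model, with the standard deviation reported alongside. The sketch below illustrates only this aggregation step, assuming each algorithm's per-run costs are already available; the sample numbers in the costs dictionary are hypothetical placeholders, and the ranking is simply the ascending order of the mean cost (rank 1 = lowest average cost).

# Minimal sketch: aggregate Monte Carlo costs into mean, std and ranking.
import numpy as np

# Hypothetical per-run cost samples for three of the algorithms.
costs = {
    "IALA": np.array([188.1, 189.0, 187.9]),
    "SBOA": np.array([190.5, 189.8, 190.1]),
    "VPPSO": np.array([195.9, 197.2, 196.6]),
}

summary = {name: (c.mean(), c.std(ddof=1)) for name, c in costs.items()}
ordered = sorted(summary, key=lambda name: summary[name][0])  # lower mean cost is better
for rank, name in enumerate(ordered, start=1):
    mean, std = summary[name]
    print(f"{name}: mean = {mean:.2f}, std = {std:.2f}, rank = {rank}")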