Article

Research on Path Planning for Mobile Robot Using the Enhanced Artificial Lemming Algorithm

1 Engineering Training Center, Guizhou Institute of Technology, Guiyang 550025, China
2 College of Agriculture, Anshun University, Anshun 561000, China
* Author to whom correspondence should be addressed.
Mathematics 2025, 13(21), 3533; https://doi.org/10.3390/math13213533
Submission received: 28 September 2025 / Revised: 21 October 2025 / Accepted: 30 October 2025 / Published: 4 November 2025

Abstract

To address the key challenges in shortest path planning for known static obstacle maps—such as the tendency to converge to local optima in U-shaped/narrow obstacle regions, unbalanced computational efficiency, and suboptimal path quality—this paper presents an enhanced Artificial Lemming Algorithm (DMSALAs). The algorithm integrates a dynamic adaptive mechanism, a hybrid Nelder–Mead method, and a localized perturbation strategy to improve the search performance of ALAs. To validate the efficacy of DMSALAs, we conducted ablation studies and performance comparisons on the IEEE CEC 2017 and CEC 2022 benchmark suites. Furthermore, we evaluated the algorithm in mobile robot path planning scenarios, including simulated grid maps (10 × 10, 20 × 20, 30 × 30, 40 × 40) and a real-world experimental environment built by our team. These experiments confirm that DMSALAs effectively balance optimization accuracy and practical applicability in path planning problems.

1. Introduction

Meta-heuristic (MH) algorithms excel at solving complex optimization problems like mobile robot path planning in static environments, where traditional methods struggle with non-convex obstacle avoidance and high-dimensional search spaces. Their ability to balance global exploration (finding feasible paths around fixed obstacles) and local exploitation (optimizing path length) makes them particularly suitable for grid-based navigation systems. The No Free Lunch theorem [1] further justifies developing specialized variants for structured environments with predetermined obstacle configurations.
Recently, the field of MH algorithms has witnessed a remarkable increase in research focus and attention. There are many types of MH algorithms, including swarm intelligence algorithms, evolutionary algorithms, and physics-inspired algorithms. Among swarm intelligence algorithms, for example, the improved gorilla troops optimizer (IGTO) [2] represents a novel meta-heuristic approach that integrates lens opposition-based learning and adaptive β-hill climbing strategies. The Horse Herd Optimization Algorithm (HOA) [3] derives from observed social behaviors in equine populations, systematically encoding six characteristic traits across growth phases. The Greylag Goose Optimization Algorithm (GGO) [4] computationally emulates the self-organizing flight patterns of geese that emerge from altitude and meteorological variations. Binary variants of the Artificial Bee Colony algorithm (BABC) [5] preserve the defined criteria while inheriting the advantages of the original ABC algorithm. The Modified Artificial Hummingbird Algorithm (MAHA) [6] improves upon the performance of its predecessor. The Whale Optimization Algorithm (WOA) [7] is inspired by the bubble-net hunting strategy, which can precisely circumvent obstacles along local paths. The hybrid gray wolf optimization (HGWO) [8,9,10] utilizes combined metaheuristic principles to generate collision-free optimal paths for autonomous robots. The improved PSO (IPSO) [11] improves the performance of autonomous path planners by optimizing 3D route generation for speed and precision. The improved adaptive ant colony algorithm (IAACO) [12] integrates four key factors (directional guidance, obstacle exclusion, adaptive heuristic information, and adjustable pheromone decay) to optimize indoor robotic path planning. The hybrid Particle Swarm Optimization-Simulated Annealing algorithm [13,14] successfully avoids local optima entrapment and enhances search diversity, resulting in quicker convergence and decreased resource consumption. Each of these algorithms possesses distinct characteristics. For example, the IGTOs algorithm exhibits strong exploration capability, excellent balance, and high robustness; however, it suffers from increased computational complexity and requires careful parameter tuning. Similarly, the WOAs algorithm offers a simple structure, ease of implementation, and strong local exploitation ability, yet it is prone to premature convergence and demonstrates limited global exploration performance. Evolutionary algorithms include, for example, the Population State Evaluation-based Differential Evolution (PSEDEs) [15], which incorporates insights from the evolutionary states of Differential Evolution (DEs), and the enhanced Genetic Algorithm (EGAs) [15,16], which focuses on advancing path initialization techniques in continuous space, ultimately deriving the best-performing route between given start and goal coordinates. The PSEDEs algorithm features a simple structure and ease of implementation, with strong local exploitation capability and fast convergence speed. However, it is prone to becoming trapped in local optima, and its search behavior heavily depends on population distribution. The EGAs algorithm demonstrates robust global exploration ability and low dependency on specific problem structures, yet it suffers from slow convergence and high sensitivity to parameter settings.
In addition to the above two categories, there are also physics-inspired algorithms. For example, the Multi-Strategy Boosted Snow Ablation Optimizer (MSAO) [16] presents a new SAO version enhanced by four optimization strategies. The Interior Search algorithm (ISA) [17] draws methodological cues from balance and composition concepts in interior design and decoration. The Farmland Fertility algorithm (FFAs) [18] computationally models the productivity cycles inherent in arable land biomes. The MSAOs algorithm features a novel search mechanism with clear physical intuition, yet it suffers from insufficient exploitation capability. The ISAs algorithm incorporates a unique update mechanism but is prone to becoming trapped in local optima. The FFAs algorithm exhibits a strong metaphorical foundation and adaptability, along with robust global exploration ability; however, it has weaknesses in local exploitation and requires careful parameter tuning. These original and improved algorithms leverage various strategies: Levy flight [19] can improve the global search efficiency of robots in sparse obstacle environments; a bidirectional search strategy [20] can narrow the search scope and decrease the time required to identify the optimal trajectory; and an adaptive parameter strategy [21] can use a feedback mechanism to adapt parameters dynamically, thereby improving the quality of the current path solution. Many algorithms perform well in dynamic environments; however, when optimizing paths for fixed maps, they suffer from excessive exploration (such as unnecessary random walks) or lack a directional search mechanism for known obstacle structures, resulting in suboptimal path lengths.
The ALAs [22] represents an innovative swarm intelligence algorithm inspired by lemming behaviors: migratory patterns, burrowing activities, and food-searching strategies. The proposed method exhibits a straightforward architecture, implementation simplicity, and superior computational performance. It incorporates Brownian motion, random walk, and Levy flight mechanisms to achieve exploration-exploitation equilibrium, while adaptive parameter tuning progressively narrows the search range with iterations. However, the algorithm has certain limitations, including fixed parameters for search radius and jump probability, a lack of an adaptive mechanism to dynamically adjust based on population diversity or iteration progress, and an absence of a local fine-tuning strategy for the current optimal solution, which may lead to entrapment in local optima. Additionally, population diversity may decline rapidly with iterations, resulting in premature convergence. Especially when applied to mobile robot path planning problems, the ALAs algorithm further exposes the following issues: a fixed search radius makes it prone to generating collision paths in dense obstacle areas, a lack of heuristic guidance for known map structures (prioritizing search along obstacle contours), and a decline in population diversity causes path optimization to prematurely converge to suboptimal solutions.
The aforementioned algorithms have been widely applied in various research domains. However, when employed for path planning problems, they commonly face challenges such as an imbalance between exploration and exploitation, weak local search capability, and a tendency toward premature convergence—issues that also apply to the ALAs algorithm. To systematically address these limitations, this study addresses the optimal path planning problem in predefined static obstacle environments, with its primary challenges including: ensuring global optimal pathfinding within bounded constraint spaces, avoiding algorithmic entrapment in local optima within U-shaped or narrow obstacle regions, and striking a balance between computational efficiency and path optimality. To tackle these limitations, we introduce DMSALAs, which integrate a dynamic adaptive mechanism, hybrid Nelder–Mead optimization, and localized perturbation strategies to boost the search efficiency of the ALAs algorithm in path planning for mobile robots within static obstacle environments. The dynamic adaptive mechanism is designed to modulate the search granularity according to the proximity between the current trajectory and obstacles. The Hybrid Nelder–Mead optimization is employed for local smoothing of key turning points in the path. The small-range perturbation can conduct directional exploration for dead-end exits (such as the openings of U-shaped obstacles).
To verify the performance of the DMSALAs algorithm, we conducted a series of comprehensive experimental evaluations. First, to evaluate the efficacy of the three strategies, we performed ablation studies on the IEEE CECs 2017 benchmark, comparing results with six variant algorithms. Both IEEE CECs 2017 and CECs 2022 benchmarks were utilized to comprehensively assess DMSALAs against eight state-of-the-art algorithms spanning different categories. Furthermore, to explore the algorithm’s performance in mobile robot path planning, we set the shortest path as the optimization goal. DMSALAs were tested on four maps with varying dimensions (10 × 10, 20 × 20, 30 × 30, and 40 × 40), along with the real-world experimental setup constructed by our team. All experimental results effectively validated the superiority of the DMSALAs algorithm. This study provides a new solution for path length optimization based on prior maps in static environments, and its improved strategies specifically perform collaborative optimization for convergence speed and path optimality in structured obstacle spaces.

2. ALA Algorithm

ALAs primarily models key behaviors of lemmings, as depicted in Figure 1.

2.1. Initialization

In the initialization phase, similar to other algorithms, the algorithm randomly generates an initial solution within a predefined solution space Ω, where N is the population size and D is the dimension. The formula is as follows:
$$\Omega_{i,j} = LB_j + rand \cdot (UB_j - LB_j)$$
where i = 1, 2, …, N, j = 1, 2, …, D, rand ∈ [0, 1], and UB_j and LB_j are the upper and lower boundaries of the j-th dimension.
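The following minimal MATLAB sketch illustrates this random initialization; the population size, dimension, and bounds are assumed example values rather than settings taken from the paper.

% Illustrative initialization sketch; N, D, LB, and UB are assumed example values.
N  = 30;                               % population size
D  = 10;                               % problem dimension
LB = -100 * ones(1, D);                % lower boundaries
UB =  100 * ones(1, D);                % upper boundaries
Omega = LB + rand(N, D) .* (UB - LB);  % one randomly placed search agent per row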

2.2. Long-Distance Migration (Exploration)

The formula of lemmings’ long-distance migration is as follows:
$$\Omega_i^{t+1} = \Omega_{best}^{t} + F \cdot B \cdot \left[ r \cdot \left( \Omega_{best}^{t} - \Omega_i^{t} \right) + (1 - r) \cdot \left( \Omega_i^{t} - \Omega_a^{t} \right) \right]$$
where Ω_i^{t+1} and Ω_i^{t} are the locations of the i-th search agent at the (t+1)-th and t-th iterations, and Ω_best^{t} is the current optimal solution. r = 2·rand − 1 is a random number uniformly distributed within the interval [−1, 1]. It determines, at each iteration, whether an individual is more inclined to move toward the best solution or toward random exploration. When r approaches 1, the weight of the term Ω_best^{t} − Ω_i^{t} increases, enhancing the exploitation behavior and causing the individual to closely follow the current best solution. When r approaches −1, the weight of the term Ω_i^{t} − Ω_a^{t} increases due to the growing value of (1 − r), thereby strengthening the exploration behavior and promoting random walks within the search space. In this way, r adaptively adjusts the balance between exploration and exploitation for each individual at every iteration, rather than relying on a fixed strategy. F is the flag for the search direction conversion, as shown in Equation (3). It introduces a stochastic reversal mechanism into the search process: when F = 1, the individual moves in the original direction determined by the subsequent vector; when F = −1, the individual moves in the opposite direction. This mechanism effectively prevents individuals from blindly persisting in a single direction and enhances the likelihood of escaping local optima. B is a random vector representing Brownian motion; its values follow the probability density of a standard normal distribution, as shown in Equation (4), and it primarily controls the step size. B acts as a random multiplicative factor applied to the entire movement vector: a larger B value results in a significant jump, while a smaller B value leads to a fine-grained local move. B interacts with F, and the product F·B jointly determines the basic step size and direction of the current movement. This base random step is then combined with the exploration-exploitation directional vector weighted by r, ultimately determining the position update in Equation (2). Ω_a^{t} represents a randomly chosen individual within the population, where a is an integer between 1 and N.
$$F = \begin{cases} 1, & \text{if } \lfloor 2 \cdot rand + 1 \rfloor = 1 \\ -1, & \text{if } \lfloor 2 \cdot rand + 1 \rfloor = 2 \end{cases}$$
$$B = \frac{1}{\sqrt{2 \pi}} \cdot \exp\left( -\frac{x^2}{2} \right)$$
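As a rough MATLAB sketch (assuming a simple sphere objective and the initialization above purely for illustration), one long-distance migration update of a single agent could look as follows.

% Illustrative long-distance migration step for one agent; fobj and all sizes are assumptions.
fobj = @(x) sum(x.^2);                          % placeholder objective function
N = 30;  D = 10;
LB = -100 * ones(1, D);  UB = 100 * ones(1, D);
Omega = LB + rand(N, D) .* (UB - LB);           % current population
fit   = arrayfun(@(k) fobj(Omega(k, :)), (1:N)');
[~, bi] = min(fit);
Omega_best = Omega(bi, :);                      % current best agent
i = 1;                                          % agent being updated
r = 2*rand - 1;                                 % exploration/exploitation weight in [-1, 1]
F = 1 - 2*(rand >= 0.5);                        % random direction flag (+1 or -1)
B = randn(1, D);                                % Brownian-motion step vector
a = randi(N);                                   % randomly selected agent
Omega(i, :) = Omega_best + F .* B .* (r*(Omega_best - Omega(i, :)) ...
              + (1 - r)*(Omega(i, :) - Omega(a, :)));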

2.3. Digging Holes (Exploration)

The secondary behavior of lemmings entails burrowing to achieve dual purposes: self-protection and food storage. The corresponding model is described below:
$$\Omega_i^{t+1} = \Omega_i^{t} + F \cdot L \cdot \left( \Omega_{best}^{t} - \Omega_b^{t} \right)$$
where L = rand · (1 + sin(t/2)) is a randomly generated step size that periodically oscillates within the interval [0, 2). The exploration step size expands when the sine function approaches 1 and contracts when the sine function approaches −1; however, due to the addition of 1, the minimum value remains non-negative. This mechanism helps balance global search capability with the avoidance of excessive jumps. Ω_b^{t} represents a randomly chosen individual from the population, where b is an integer between 1 and N. Through the coordinated interaction of the oscillating random step size L, the stochastic direction reversal F, and the base random vector, the model effectively implements a comprehensive, randomized exploration strategy with adaptive step-size variation.
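A corresponding MATLAB sketch of the digging-holes update is given below; the population, iteration counter, and best solution are placeholder assumptions for illustration.

% Illustrative digging-holes update; the population, t, and Omega_best are placeholders.
N = 30;  D = 10;  t = 50;                      % population size, dimension, current iteration
Omega = rand(N, D);
Omega_best = Omega(1, :);
i = 1;
F = 1 - 2*(rand >= 0.5);                       % random direction flag (+1 or -1)
L = rand * (1 + sin(t/2));                     % oscillating step size in [0, 2)
b = randi(N);                                  % randomly selected agent
Omega(i, :) = Omega(i, :) + F * L * (Omega_best - Omega(b, :));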

2.4. Foraging (Exploitation)

The third behavior of lemmings involves foraging, wherein they move around within their burrows to locate food. The formula is as follows:
$$\Omega_i^{t+1} = \Omega_{best}^{t} + F \cdot spiral \cdot rand \cdot \Omega_i^{t}$$
where spiral denotes the spiral foraging term, as shown in Equation (7), and R is the search radius, as shown in Equation (8). This equation generates a local search behavior characterized by a random spiral trajectory around the optimal solution through the coordinated action of the spiral function and F. R controls the scale of the spiral, while the combination of trigonometric functions determines its specific shape and phase. Together, these components enable the spiral function to dynamically generate a spiral vector whose amplitude is proportional to the distance and whose shape varies randomly, guiding individuals in conducting fine-grained and diverse exploration around the optimal solution.
$$spiral = R \cdot \left[ \sin(2 \cdot \pi \cdot rand) + \cos(2 \cdot \pi \cdot rand) \right]$$
$$R = \sqrt{ \sum_{j=1}^{D} \left( \Omega_{best,j}^{t} - \Omega_{i,j}^{t} \right)^2 }$$
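The spiral foraging move can be sketched in MATLAB as follows; again, the population and best solution are placeholder assumptions.

% Illustrative spiral foraging update around the current best; setup values are placeholders.
N = 30;  D = 10;
Omega = rand(N, D);
Omega_best = Omega(1, :);
i = 2;
F = 1 - 2*(rand >= 0.5);                           % random direction flag (+1 or -1)
R = sqrt(sum((Omega_best - Omega(i, :)).^2));      % distance between agent i and the best
spiral = R * (sin(2*pi*rand) + cos(2*pi*rand));    % random spiral amplitude
Omega(i, :) = Omega_best + F * spiral * rand * Omega(i, :);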

2.5. Exploration and Exploitation Transition Mechanism

The transition mechanism between exploration and exploitation is governed by the energy coefficient E .
$$E = 4 \cdot \arctan\left( 1 - \frac{t}{T_{\max}} \right) \cdot \ln\left( \frac{1}{rand} \right)$$
In the initialization phase of ALAs, some control parameters are set. Afterwards, at each iteration, if the energy coefficient E > 1, ALAs enter the exploration phase, executing long-distance migration behavior or digging behavior. When E ≤ 1, ALAs perform exploitation through foraging behavior or evading natural predators. These behaviors are employed by all search agents to update new candidate solutions and determine the currently found optimal solution. As the number of iterations increases, the value of E gradually decreases, allowing search agents to transition robustly from exploration to exploitation. Once the termination condition is satisfied, the procedure terminates and outputs the global optimal solution. The DMSALAs workflow is shown in Figure 2.
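The sketch below illustrates how the energy coefficient can drive the phase switch; T_max and t are assumed example values.

% Illustrative energy-based phase selection; T_max and t are example values.
T_max = 1000;                                  % maximum number of iterations
t = 350;                                       % current iteration
E = 4 * atan(1 - t/T_max) * log(1/rand);       % energy coefficient
if E > 1
    % exploration phase: long-distance migration or digging holes
    disp('exploration phase');
else
    % exploitation phase: foraging around the current best solution
    disp('exploitation phase');
end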

3. Improvement Strategy

In this section, we implement a dynamic adaptive mechanism, a hybrid Nelder–Mead approach, and a localized perturbation strategy to improve the search performance of the ALAs algorithm.

3.1. Dynamic Adaptive Mechanism

The original algorithm features a fixed search radius and jump probability, resulting in poor robustness [23]. Consequently, we introduce an exponentially attenuating radius to enable the DMSALAs algorithm to dynamically adjust its search radius and jump probability [24,25], thereby achieving adaptability to different problem characteristics. The formula is presented as follows:
$$C = r_{\max} \cdot \exp\left( -\lambda \cdot \frac{F_{es}}{Max_{fes}} \right)$$
where the maximum search radius r_max is a Euclidean norm expressed as r_max = norm(UB − LB), the minimum search radius is r_min = 0.01 · r_max, and the radius decay coefficient is λ = 7. Let Fes and Maxfes denote the current and maximum numbers of objective function evaluations, respectively.
In the initial stage, when Fes ≈ 0, exp(−λ · Fes/Maxfes) ≈ 1, so the search radius C ≈ r_max. This allows the mobile robot to explore possible paths over a large area. In the late stage of the search, Fes → Maxfes, (Fes/Maxfes) → 1, and r_max · exp(−λ) falls to roughly r_min, so C shrinks accordingly. The algorithm then prioritizes local fine-grained search, enabling the robot to fine-tune around the discovered suboptimal paths and converge near the optimal solution. In summary, the search radius C decays exponentially with the number of evaluations Fes: the environment is scanned on a large scale using the map's diagonal length r_max in the early stage, and paths are optimized with r_min-level precision in the late stage.
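A short MATLAB sketch of this exponentially decaying radius is shown below; the bounds, evaluation counters, and the final clamp to r_min are illustrative assumptions.

% Illustrative exponentially decaying search radius; bounds and counters are assumptions.
D  = 10;
LB = -100 * ones(1, D);  UB = 100 * ones(1, D);
r_max  = norm(UB - LB);                  % maximum search radius (map diagonal)
r_min  = 0.01 * r_max;                   % minimum search radius
lambda = 7;                              % radius decay coefficient
Maxfes = 1000 * D;                       % maximum number of function evaluations
Fes    = 2500;                           % current number of evaluations (example)
C = r_max * exp(-lambda * Fes / Maxfes); % adaptive radius
C = max(C, r_min);                       % assumed clamp so the radius never drops below r_min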

3.2. The Hybrid Nelder–Mead Method

To strengthen the algorithm’s aptitude for local search, we incorporate the hybrid Nelder–Mead method into the ALAs algorithm [26,27]. The hybrid Nelder–Mead method enables adaptive local search and an elite retention strategy, replaces inferior solutions with superior ones, and prevents population degradation [28]. The pseudocode for the Hybrid Nelder–Mead Method is as follows (Algorithm 1):
Algorithm 1. Pseudocode for the Hybrid Nelder–Mead Method
IF current_Fes > 0.7 · max_Fes AND mod(current_Fes, 5) = 0 THEN
        // Perform local search using fminsearch
        [new_position, new_fitness] = fminsearch(objective_function, current_position)
                IF new_fitness < current_fitness THEN
                // Update current individual
                current_position = new_position
                global_best_fitness = new_fitness
                // Find worst individual in population
                worst_index = index_of_max(fitness_population)
                // Replace worst individual with new solution
                population[worst_index] = new_position
                fitness_population[worst_index] = new_fitness
        END IF
END IF
where worst_index denotes the index of the worst individual in the current population, new_position signifies the best solution obtained by the local search process, and new_fitness stands for the corresponding objective function value.
In the late phase of the algorithm, a hybrid Nelder–Mead approach is integrated, with the primary aim of enhancing the quality and convergence accuracy of mobile robot path planning results. Specifically, the Nelder–Mead simplex method is employed to conduct fine-grained searches on the current global optimal path (current_position), enhancing the algorithm’s exploitation capability to improve convergence precision and the quality of the solved path. Meanwhile, population diversity is maintained by replacing the worst path in the population.
When current_Fes > 0.7 · max_Fes, it signifies that the algorithm has entered the terminal phase of iteration. Additionally, mod(current_Fes, 5) = 0 means the current evaluation number current_Fes is divisible by 5. When both conditions current_Fes > 0.7 · max_Fes and mod(current_Fes, 5) = 0 are satisfied, a gradient-free Nelder–Mead local search is performed once every five evaluations. This strategy substantially cuts down the algorithm’s computational overhead while guaranteeing the quality of path planning results.
During the local search, the fminsearch function is used, which takes the objective function fobj and the current global optimal path (current_position) as inputs. If the fitness value of the new path obtained through the local enhancement search is superior to that of the current global optimal path (new_fitness < global_best_fitness), the global optimal path (current_position) and its fitness value Score are updated. This indicates that a better path for the mobile robot has been found, which is then taken as the new global optimal path.
Next, the new path and its fitness value new_fitness obtained from fminsearch are used to replace the worst path in the population. Specifically, the max function is applied to the fitness values of all paths within the population to determine the index of the worst path. The coordinate values and fitness value of this worst path are then replaced by the new path and new_fitness. This procedure iteratively discards suboptimal path solutions to enhance the overall quality of the population.
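A runnable MATLAB sketch of this hybrid refinement step is given below. The objective function, evaluation counters, and fminsearch options are assumptions for illustration; only the trigger condition, local search call, and worst-individual replacement follow the description above.

% Illustrative hybrid Nelder-Mead refinement; fobj, counters, and options are assumptions.
fobj = @(x) sum(x.^2);                          % placeholder objective (path-length surrogate)
N = 30;  D = 10;
population = rand(N, D);
fitness    = arrayfun(@(k) fobj(population(k, :)), (1:N)');
[global_best_fitness, bi] = min(fitness);
current_position = population(bi, :);
current_Fes = 7200;  max_Fes = 10000;           % example evaluation counters

if current_Fes > 0.7 * max_Fes && mod(current_Fes, 5) == 0
    opts = optimset('Display', 'off', 'MaxFunEvals', 50);   % keep the local search cheap
    [new_position, new_fitness] = fminsearch(fobj, current_position, opts);
    if new_fitness < global_best_fitness
        current_position    = new_position;     % accept the refined solution
        global_best_fitness = new_fitness;
        [~, worst_index] = max(fitness);        % locate the worst individual
        population(worst_index, :) = new_position;
        fitness(worst_index)       = new_fitness;
    end
end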

3.3. Small-Range Perturbation Strategy

To address the ALAs algorithms’ propensity to converge to local optima, we devised a localized perturbation strategy [29] grounded in an evaluation of the current population diversity. The formula is presented as follows:
$$p = p_{\min} + \left( p_{\max} - p_{\min} \right) \cdot \frac{d}{r_{\max}}$$
The maximum jump probability is p_max = 0.5, and the minimum jump probability is p_min = 0.1. To characterize the distribution of the population within the solution space and identify premature convergence of the ALAs algorithm, the standard deviation of each column of X is calculated as diversity = std(X). A larger standard deviation indicates greater individual differences in that dimension (better population diversity). The average standard deviation across all dimensions is d = mean(diversity), representing the overall average diversity of the population across all dimensions.
When the path distribution is dispersed (d ≈ r_max), p → p_max, maintaining a high jump probability with up to 50% perturbation intensity. When the path distribution is concentrated (d ≈ 0), p → p_min, reducing the jump probability while retaining a 10% basic perturbation to prevent the algorithm from falling into local optima. This mechanism ensures that mobile robots can both quickly locate feasible paths in complex environments and eliminate unnecessary jitter during the convergence phase. The perturbation formula is as follows:
$$\Omega_i^{t+1} = \Omega_i^{t} + 0.1 \cdot C \cdot rand, \quad \text{if } rand \geq p$$
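The diversity-driven perturbation can be sketched as follows; the population, decayed radius, and per-dimension random step are assumptions for illustration.

% Illustrative diversity-driven small-range perturbation; setup values are assumptions.
N = 30;  D = 10;
LB = -100 * ones(1, D);  UB = 100 * ones(1, D);
X = LB + rand(N, D) .* (UB - LB);            % current population of candidate solutions
r_max = norm(UB - LB);
p_min = 0.1;  p_max = 0.5;
diversity = std(X);                          % per-dimension standard deviation
d = mean(diversity);                         % average population diversity
p = p_min + (p_max - p_min) * d / r_max;     % adaptive jump probability
C = r_max * exp(-7 * 0.5);                   % example decayed radius at mid-run
i = 1;
if rand >= p
    X(i, :) = X(i, :) + 0.1 * C * rand(1, D);   % small-range perturbation (per-dimension rand assumed)
end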

3.4. DMSALAs Flow

The workflow of DMSALAs in mobile robot path planning is as follows (a compact code sketch of this flow is given after the list):
(a)
Initialization Phase:
Set the initial global optimal fitness value Score = inf, indicating that no optimal path solution has been found yet. Initialize the global optimal position as a zero vector X_best = zeros(1, N), indicating that path fitness has not been evaluated. Initialize the convergence curve storage parameter Convergence = [ ] to record the algorithm’s convergence characteristics during iterations.
(b)
Initial Fitness Evaluation:
Initialize the evaluation counter Fes = 0 and the global optimal path cost Score. Decode each path in the population and compute its comprehensive performance using the fitness function fobj, which optimizes for path length.
(c)
Dynamic Adaptive Adjustment Mechanism:
Formulate a dynamic adaptation parameter C to accommodate environmental dynamics encountered by the robot. Set a small-range perturbation factor p to enable local path refinement.
(d)
Search Strategy Selection
Global Search (E > 1)
If rand < 0.3, initiate a hybrid search combining the global best solution and a random individual to leverage historical information and random exploration. If rand ≥ 0.3, execute an adaptive step-size random walk for global sampling.
Local Search (E ≤ 1)
If rand < 0.5, perform a constrained spiral search around the current position.
If rand ≥ 0.5, decide further:
If rand < p, apply an adaptive Levy flight strategy to perturb the current best path X_best to escape local optima.
If rand ≥ p, execute a small-range perturbation for fine-tuning the path.
(e)
New Path Evaluation:
Compute the fitness of new paths generated by both global and local searches using the objective function fobj. Apply a greedy strategy to compare new paths with the current best path, updating the solution if the new path is superior.
(f)
Nelder–Mead Local Enhancement:
When current_Fes > 0.7 · max_Fes AND mod(current_Fes, 5) = 0 (i.e., after 70% of the evaluation budget and once every five evaluations).
Invoke the fminsearch function with parameters fobj, current_position to perform gradient-free Nelder–Mead local search.
Substitute the least optimal individual in the population with the new trajectory p and its fitness value to preserve diversity.
(g)
Termination and Output:
Upon completion of all iterations, the global optimal trajectory and its corresponding fitness value are recorded, followed by the output of the mobile robot’s optimal path plan.
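As referenced above, the following compact MATLAB skeleton mirrors the control flow of steps (a) through (g). The objective function and the individual search moves are deliberately simplified placeholders; only the overall structure (initialization, adaptive parameters, phase switching, greedy updates, and the late-stage Nelder–Mead refinement) follows the description.

% Simplified skeleton of the DMSALAs flow; fobj and the search moves are placeholder assumptions.
fobj = @(x) sum(x.^2);                                  % stand-in for the path-length fitness
N = 30;  D = 10;
LB = -100 * ones(1, D);  UB = 100 * ones(1, D);
Maxfes = 1000 * D;
X   = LB + rand(N, D) .* (UB - LB);                     % (a) initialization
fit = arrayfun(@(k) fobj(X(k, :)), (1:N)');             % (b) initial fitness evaluation
Fes = N;
[Score, bi] = min(fit);  Xbest = X(bi, :);
Convergence = [];
r_max = norm(UB - LB);  lambda = 7;  p_min = 0.1;  p_max = 0.5;
while Fes < Maxfes
    C = r_max * exp(-lambda * Fes / Maxfes);            % (c) adaptive radius
    p = p_min + (p_max - p_min) * mean(std(X)) / r_max; %     and adaptive jump probability
    for i = 1:N
        E = 4 * atan(1 - Fes/Maxfes) * log(1/rand);     % (d) phase selection
        if E > 1
            a = randi(N);                               % simplified global move
            Xnew = Xbest + randn(1, D) .* (X(i, :) - X(a, :));
        elseif rand >= p
            Xnew = X(i, :) + 0.1 * C * rand(1, D);      % small-range perturbation
        else
            Xnew = Xbest + 0.1 * C * (rand(1, D) - 0.5);% simplified local move near the best
        end
        Xnew = min(max(Xnew, LB), UB);
        fnew = fobj(Xnew);  Fes = Fes + 1;              % (e) evaluate and update greedily
        if fnew < fit(i), X(i, :) = Xnew;  fit(i) = fnew;  end
        if fnew < Score,  Score = fnew;   Xbest = Xnew;  end
    end
    if Fes > 0.7*Maxfes && mod(Fes, 5) == 0             % (f) Nelder-Mead refinement
        opts = optimset('Display', 'off', 'MaxFunEvals', 50);
        [xr, fr] = fminsearch(fobj, Xbest, opts);
        if fr < Score
            Score = fr;  Xbest = xr;
            [~, w] = max(fit);  X(w, :) = xr;  fit(w) = fr;
        end
    end
    Convergence(end + 1) = Score;                       %#ok<AGROW>
end
% (g) Xbest and Score now hold the best solution found and its fitness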

3.5. Time Complexity Analysis

To analyze the time complexity of the DMSALAs algorithm, we adopt a standard methodology. Specifically, we use the first problem from the CEC 2017 benchmark suite, which has a dimensionality of 10. For consistency across all algorithms being compared, we set a maximum of 1000 iterations. The equation is evaluated using T 0 given below:
tic;
for i = 1:1000000
    x = 0.55 + double(i);
    x = x + x;  x = x / 2;  x = x * x;  x = sqrt(x);
    x = log(x);  x = exp(x);  x = x / (x + 2);
end
toc;
T1 is the time taken by the first problem from the CEC 2017 benchmark suite (called CEC 2017 F1) for a single run, and T2 is the time taken by an algorithm to evaluate CEC 2017 F1 over five runs. The ratio (T1 − T2)/T0 reflects the relative difference between the single-run time and the five-run time of an algorithm, normalized by the benchmark time T0. A larger ratio indicates a more significant discrepancy between these two time metrics relative to T0, thereby showcasing the variations in time complexity among different algorithms when solving this problem and highlighting the time-related performance and competitiveness of a specific algorithm compared to others.
For the DMSALAs algorithm, its ratio value is 5.089566. Among the tested algorithms, this places it in second position, just behind the GWOs algorithm. Notably, it surpasses the GAs and SFOAs algorithms. The time complexity analysis is presented in Table 1.

3.6. Parameter Sensitivity Analysis

To conduct the parameter sensitivity analysis, we selected five test functions (F1, F7, F15, F23, and F30) from the CEC 2017 test suite, with dimension dim = 10 and a maximum fitness evaluation count of Maxfes = 1000 · dim. Each function was independently run for 30 rounds. Different values of the three parameters were tested: the radius decay coefficient λ ∈ {1, 3, 5, 7, 10} in the dynamic adaptive mechanism, the maximum jump probability p_max ∈ {0.1, 0.3, 0.5, 0.7, 0.9}, and the population size N ∈ {20, 30, 40, 50, 60}. The average solutions obtained by the DMSALAs algorithm on the five functions under various combinations of λ, p_max, and N are presented in Table 2 and Figure 3. The optimal value is highlighted with a red dot in Figure 3.
The sensitivity analysis of the average fitness values under different parameter settings shows that when the λ parameter varies within the range of 1 to 10, DMSALAs’ performance remains relatively stable. The best performance is achieved when λ = 7, with an average fitness value of 1677.95, which is not significantly different from the other values (fluctuating between 1677.95 and 1682.84), indicating that the λ parameter has a minor impact on the algorithm’s performance within the tested range and demonstrates good robustness. The p_max parameter exhibits a distinct sensitivity characteristic. As p_max increases from 0.1 to 0.5, the average fitness value improves from 1681.26 to 1670.95, a performance enhancement of approximately 0.6%. However, when p_max further increases to 0.9, the performance drops to 1686.48. This suggests that there is an optimal p_max value (0.5), and exceeding this range leads to performance deterioration. The sensitivity analysis of the N parameter indicates that the algorithm’s performance is optimal at N = 30 (1672.61). As N increases from 20 to 60, the performance shows an initial improvement followed by slight fluctuations, but the overall change is relatively small (1672.61–1678.60), indicating that the N parameter has a limited impact on the algorithm’s performance within a reasonable range. The results of the sensitivity analysis suggest that all three parameters influence the algorithm’s performance, but the degree and pattern of their influence vary. The DMSALAs maintain the best performance when λ = 7, p_max = 0.5, and N = 30.

4. Experimental Verifications

To verify the performance of the DMSALAs algorithm, we conducted a series of comprehensive experimental evaluations, encompassing an ablation study on the IEEE CEC 2017 benchmark and comparative assessments on both IEEE CEC 2017 and CEC 2022. To ensure fairness in algorithm evaluation, we employed the maximum number of fitness evaluations as the primary metric for assessing algorithm efficiency. We set T = 1000 and N = 30 . To maintain consistency with the majority of existing studies and enable fair comparisons, the experiments primarily focus on the standard 30D setting on the IEEE CEC 2017 and 10D on the IEEE CEC 2022. This dimensionality is widely recognized as a benchmark for evaluating algorithm performance, as it strikes a favorable balance between problem complexity and computational cost.
All experimental results reported in this section are presented as statistical averages and standard deviations derived from 30 independent trial runs. The comparative analysis employs a sign test notation system, where ‘+’ denotes superior performance of a comparison algorithm over DMSALAs on a given test function, ‘−’ indicates inferior performance relative to DMSALAs, and ‘=’ signifies statistically equivalent results. For a comprehensive performance evaluation, we implement Friedman’s test analysis, incorporating both average ranking scores and ordinal rankings. Before conducting the Friedman test, the Lilliefors normality test was performed on all results, and the corresponding p-values are presented in Table 3, Table 4 and Table 5. These dual metrics provide a quantitative assessment of each algorithm’s overall optimization capability across the benchmark suite. To enhance the credibility of our experimental results, we conducted the Wilcoxon rank-sum test for statistical significance analysis.

4.1. Ablation Experiment

An ablation study was carried out using the IEEE CEC 2017 dataset to assess the performance of various strategies. Six variants of the ALAs algorithm, namely ALAs1–ALAs6, were designed, each integrating distinct strategies. Specifically, ALAs1 features a dynamic adaptive mechanism; ALAs2 utilizes the hybrid Nelder–Mead approach; ALAs3 applies a small-scale perturbation technique; ALAs4 combines the dynamic adaptive mechanism with the hybrid Nelder–Mead method; ALAs5 integrates the dynamic adaptive mechanism and the small-scale perturbation strategy; and ALAs6 merges the hybrid Nelder–Mead approach and the small-scale perturbation technique.
As presented in Table 3, the results of the Lilliefors normality test are as follows: a total of 203 tests (29 test functions and seven algorithms), with 38 cases where the normal distribution cannot be rejected and 165 cases of non-normal distribution. These results confirm that our data indeed have significant non-parametric characteristics, thereby providing a sufficient statistical basis for the application of the Friedman test. The DMSALAs algorithm outperformed others by identifying 22 optimal solutions, with a (+/−/=) statistical outcome of (22/1/6) and a Friedman mean of 1.48, securing the top rank. Following closely, ALAs6 detected 17 optimal values, yielding a (+/−/=) statistic of (17/2/10) and a Friedman mean of 1.62, ranking second. ALAs5 identified five optimal values, with a (+/−/=) statistical result of (5/5/19) and a Friedman mean of 2.97, placing sixth. ALAs4 found six optimal solutions, recording a (+/−/=) statistic of (6/9/14) and a Friedman mean of 2.90, ranking fifth. ALAs3 recognized 11 optimal values, presenting a (+/−/=) statistical outcome of (11/7/11) and a Friedman mean of 2.55, ranking fourth. ALAs2 discovered 10 optimal solutions, with a (+/−/=) statistic of (10/4/15) and a Friedman mean of 2.34, ranking third. In contrast, ALAs1 identified only three optimal values, with a (+/−/=) statistical result of (3/11/15) and a Friedman mean of 3.52, ranking seventh. These findings not only validate the effectiveness of the three strategies mentioned above but also emphasize the remarkable superiority of the DMSALAs algorithm, as depicted in Figure 4.

4.2. Comparison Experiment on IEEE CEC2017

To thoroughly evaluate the optimization performance of the proposed DMSALAs algorithm, we performed comprehensive experiments on the IEEE CECs 2017 benchmark. DMSALAs were compared with eight MHs algorithms, including (ALAs) [22], WMAs [30], PSOs [31], SFOAs [32], BKAs [33], GOOSEs [34], BWOs [35], and GWOs [10], selected for their diverse search mechanisms and proven effectiveness in optimization tasks.
As illustrated in Table 4, the results of the Lilliefors normality test are as follows: a total of 261 tests (29 test functions and nine algorithms), with 48 cases where the normal distribution cannot be rejected and 213 cases of non-normal distribution. These results confirm that our data indeed have significant non-parametric characteristics, thereby providing a sufficient statistical basis for the application of the Friedman test. The DMSALAs successfully pinpointed 21 out of 29 optimal solutions, with a (+/−/=) statistical result of (21/0/8). Its Friedman mean score stands at 1.66, securing the top rank and significantly outperforming all comparative algorithms. ALAs achieved 12 optimal solutions, which confirms that the improvements, including dynamic adaptation mechanisms, hybrid Nelder–Mead integration, and perturbation strategies, have notably enhanced its performance. The (+/−/=) statistic is (12/0/17), accompanied by a Friedman mean of 2.03, ranking second. WMA obtained eight optimal solutions, demonstrating competitive yet less consistent results. Its (+/−/=) statistic is (8/0/21), with a Friedman mean of 3.07, placing it third. BKA identified two optimal values, yielding a (+/−/=) statistic of (2/0/27) and a Friedman mean of 3.52 (fourth place). PSOs secured three optimal solutions, with corresponding metrics of (3/0/26) and 3.86 (fifth place). GWOs only achieved one optimal value, reflecting its limitations in handling complex or high-dimensional problem spaces, with a (+/−/=) statistic of (1/0/28) and a Friedman mean of 4.21 (sixth place). Notably, BWOs, SFOAs, and GOOSEs failed to converge to any global optima, indicating inherent weaknesses in balancing exploration and exploitation. These findings, as visualized in Figure 5, unequivocally validate the dominant performance of the DMSALAs algorithm across evaluated benchmarks.

4.3. Comparison Experiment on IEEE CEC2022

To evaluate DMSALAs performance on the test set, we contrasted it with the eight previously mentioned algorithms using the IEEE CEC 2022 benchmarking framework. The experimental findings are documented in Table 5, while the convergence trajectories of each algorithm across various test functions are illustrated in Figure 6.
As presented in Table 5, the results of the Lilliefors normality test are as follows: a total of 108 tests (12 test functions and nine algorithms), with 21 cases where the normal distribution cannot be rejected and 87 cases of non-normal distribution. These results confirm that our data indeed have significant non-parametric characteristics, thereby providing a sufficient statistical basis for the application of the Friedman test. The DMSALAs successfully obtained 10 out of 12 optimal solutions, with a (+/−/=) statistical result of (10/0/2) and a Friedman mean of 1.42, ranking first and significantly outperforming all competing algorithms. ALAs secured seven optimal solutions, verifying that improvements such as dynamic adaptation, hybrid Nelder–Mead integration, and perturbation strategies effectively enhance its performance. With a (+/−/=) statistic of (7/0/5) and a Friedman mean of 1.58, it ranks second. WMAs achieved five optimal solutions, showing competitive yet less stable performance, with a (+/−/=) statistic of (5/0/7) and a Friedman mean of 2.42, ranking third. Both PSOs and GWOs identified only 2 optimal values, sharing a (+/−/=) statistical result of (2/0/10) with Friedman means of 3.08 and 3.50, ranking fourth and sixth, respectively, which reflects their limitations in handling complex high-dimensional problems. BKAs obtained three optimal values, with a (+/−/=) statistic of (3/0/9) and a Friedman mean of 3.17, ranking fifth. Notably, BWOs, SFOAs, and GOOSEs failed to converge to any global optima, indicating flaws in their exploration–exploitation balance mechanisms.

4.4. Wilcoxon Rank Sum Test

The Wilcoxon rank sum test is employed to assess the disparity between DMSALAs and the competitor methods. A significance level of 0.05 is set, and (+|=|−) denotes superiority, equality, or inferiority of the DMSALAs algorithm compared to its competitors. Table 6 illustrates the notable variances found in the majority of functions between the proposed DMSALAs and the alternative algorithms. The results are 182/0/50 on IEEE CEC 2017 and 84/0/12 on IEEE CEC 2022. These results collectively indicate that our proposed DMSALAs exhibit substantial distinctions when compared with the other eight algorithms.

5. Path Planning Problem for Mobile Robots

In this section, we conducted a comprehensive performance evaluation of the DMSALAs algorithm, validating its efficacy in solving the mobile robot path planning problem through comparative analyses with eight benchmark algorithms. The validation was performed across two distinct environments: a simulated testing scenario and the physical operational setting of the RIA-E100 robotic system.

5.1. Simulation Experiment for Mobile Robot Path Planning

To comprehensively evaluate the performance and scalability of the proposed algorithm, experiments were conducted on four grid maps of different sizes: 10 × 10, 20 × 20, 30 × 30, and 40 × 40. This size sequence establishes a uniformly increasing gradient of search space complexity, aiming to systematically assess the algorithm’s adaptability from simple to complex scenarios. The smaller-scale maps are used to verify the basic correctness of the algorithm, while the medium- and large-scale maps are employed to evaluate its convergence behavior, solution quality, and computational efficiency as the problem size increases—thereby assessing the algorithm’s scalability. The evaluated algorithmic parameters consisted of the optimal, average, and standard deviation values of the path length, calculated over 30 independent trials.
The path length in the mobile robot path planning problem is defined as:
$$f(x) = \begin{cases} \displaystyle\sum_{i=1}^{size(path,1)-1} \sqrt{\left( path(i+1,1) - path(i,1) \right)^2 + \left( path(i+1,2) - path(i,2) \right)^2}, & N_b = 0 \\ f(x) + E(1) \cdot E(2) \cdot N_b, & N_b > 0 \end{cases}$$
where p a t h represents the generated route, E ( 1 ) and E ( 2 ) represent the number of rows and columns in the map, respectively, and N b indicates the number of obstacles encountered along the path. First, a smooth path is generated through a midpoint neighborhood search followed by two smoothing processes. Subsequently, the total path length, denoted as f ( x ) , is computed by summing the Euclidean distances between each pair of adjacent points along the trajectory.
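For illustration, a minimal MATLAB sketch of this piecewise fitness is given below; the example waypoints, map size, and obstacle count are assumptions, and the smoothing steps mentioned above are omitted.

% Illustrative piecewise path-length fitness; the path, map size E, and Nb are assumptions.
E    = [20, 20];                            % number of rows and columns of the grid map
path = [1 1; 3 4; 7 9; 12 15; 20 20];       % waypoint list, one (row, column) pair per row
Nb   = 0;                                   % number of obstacles crossed by this path
seg  = diff(path, 1, 1);                    % displacement between consecutive waypoints
f    = sum(sqrt(sum(seg.^2, 2)));           % sum of Euclidean segment lengths
if Nb > 0
    f = f + E(1) * E(2) * Nb;               % penalty term for colliding paths
end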
The DMSALAs algorithm’s performance evaluation was executed using MATLAB 2023b software. Simulations were run on a desktop system featuring Windows 11 Pro, an Intel(R) Core(TM) i9-14900KF processor (3.20 GHz), and 128 GB of RAM (Hangzhou, China).
As depicted in Table 7 and Figure 7, the DMSALAs algorithm efficiently located optimal values across four maps with distinct scales and attained the optimal average performance, ranking first in the Friedman test and thereby highlighting its superior optimization capability. While DMSALAs demonstrated stable performance, they did not yield the lowest standard deviation in all cases. Notably, the BKA algorithm achieved the best standard deviation in two scenarios, whereas the BWO and WMA algorithms each excelled in one case. This suggests that the DMSALAs algorithm has room for stability improvement.
DMSALAs consistently achieved the minimum path lengths (optimal values) across all four map scenarios, demonstrating a superior equilibrium between exploration and exploitation capabilities. The algorithm also exhibited the lowest mean path length across multiple runs, underscoring its robust performance. Although the DMSALAs’ standard deviations remained stable, they were not optimal in all cases (BKAs outperformed in 2 of 4 maps). This minor variability in performance is likely attributed to random perturbations in complex mapping environments. By contrast, BKAs and BWOs occasionally demonstrated better standard deviations, primarily due to their exploitation-oriented strategies. The DMSALAs’ adaptive mechanism prioritizes optimality over strict repeatability, which explains the slightly higher variance: the trade-off emphasizes solution quality over absolute consistency.

5.2. Real-World Experimental Verification of DMSALAs for Mobile Robot Path Planning

In this section, the DMSALAs algorithm and the eight previously introduced algorithms were experimentally evaluated on the RIA-E100 mobile robot, a compact modular model from GaiTech Robotics’ RIAs series. As illustrated in Figure 8, the robot features a differential-drive configuration with two powered wheels at the front and two passive omnidirectional wheels at the rear, ensuring agile maneuverability. Its computational capabilities are supported by an Intel i5 processor, 8GB of RAM, and a 120GB SSD, enabling real-time data processing. The experimental setup integrates an Astra Orbbec RGBD camera, an Inertial Measurement Unit (IMU), and a 2D LiDAR sensor, facilitating environmental perception. Wireless communication between the control computer and the robot is established via Wi-Fi, ensuring seamless data transfer. The testing environment has a length of 3.2 m and a width of 2.7 m, with obstacles randomly distributed within the area.
In Figure 9 and Figure 10, the starting point of the map is located at the lower left corner in the actual map scenario, while the endpoint is positioned at the upper right corner. In contrast, within the Rviz environment, the map starts at the lower right corner and ends at the upper left corner. Cuboid-shaped obstacles are strategically placed to simulate challenging navigation scenarios, with the unobstructed zones designated as navigable regions. To ensure comparability, identical start and end coordinates were assigned across all algorithms. Each algorithm underwent 30 independent runs, during which the minimum path length was recorded. Subsequently, the arithmetic means and variance of these 30 trials were computed to analyze the algorithms’ performance stability statistically. Figure 9 captures the robot’s navigation process in the physical environment, and Figure 10 illustrates the corresponding occupancy grid map generated in Rviz, providing a comprehensive view of the experimental outcomes.
Table 8 presents the optimal values, mean values, and variances of path lengths derived from 30 independent runs of the DMSALAs algorithm and eight comparative algorithms. DMSALAs attained an optimal path length of 3.687 m and a mean path length of 3.881 m, both of which ranked first among all algorithms. Although the standard deviation of DMSALAs over 30 runs was not the minimum, standing in third place and trailing only the WMAs and ALAs algorithms, it still maintained a relatively low magnitude. These findings firmly validate the effectiveness of the DMSALAs algorithm in addressing mobile robot path planning challenges.

6. Conclusions

By integrating a dynamic adaptive mechanism, a hybrid Nelder–Mead method, and a small-range perturbation strategy into the ALAs algorithm, we developed the DMSALAs algorithm. To assess its efficacy, DMSALAs were first compared with six variant algorithms to validate the proposed strategies, then benchmarked against eight advanced algorithms to demonstrate its superior performance. These enhancements effectively address the original algorithm’s core limitations (fixed parameters, weak local search capability, and insufficient diversity preservation), making it better suited for complex optimization tasks. Specifically, the hybrid Nelder–Mead component plays a pivotal role in refining waypoints, minimizing detours, and ensuring smooth trajectories; the dynamic search radius adjustment prevents premature stagnation; and the perturbation strategy enables escape from concave obstacle regions.
To further verify DMSALAs applicability in mobile robot path planning, we systematically executed simulation experiments for the DMSALAs algorithm and eight comparative algorithms across four maps with varying scales. Subsequently, we carried out real-world experiments utilizing the RIA-E100 mobile robot within a custom-constructed physical environment. Our dual-approach experimental design, which combines virtual simulations across four maps of varying scales and real-world validations using the RIA-E100 robot, enabled a comprehensive evaluation of the DMSALAs algorithm and eight comparative algorithms. This methodology demonstrated DMSALAs superiority in addressing practical challenges, proving it to be a highly effective optimizer, especially for complex, high-dimensional problems. Its consistent top performance across diverse scenarios highlights its broad applicability in real-world engineering optimizations, such as robotic path planning tasks that demand both solution precision and reliability.
Notably, while DMSALAs demonstrate strong effectiveness, its standard deviation results across various tests indicate room for stability improvements, even after the implemented enhancements. This presents a key direction for future research. Additionally, as this study tested DMSALAs up to 30D problems, exploring its performance in higher-dimensional scenarios (50D and 100D) constitutes another critical research avenue. While this work focuses on mobile robot path planning, future efforts will aim to extend DMSALAs to other practical domains, including industrial robotics and search-and-rescue operations. In this study, high-performance algorithms were not included in the comparative analysis; however, performance comparisons with a broader selection of state-of-the-art algorithms will be conducted in future work and will constitute an important research direction. Furthermore, we plan to carry out rigorous ablation studies to precisely quantify the synergistic effects of the hybrid strategy. Specifically, on the CECs benchmark suite, incremental variants of the algorithm will be compared to disentangle the contribution of the enhanced local search component from the effect of increased function evaluations, thereby providing empirical support for the proposed synergy.
Although current stability and scalability limitations require further investigation, DMSALAs consistent excellence in both benchmark functions and applied path planning solidifies its status as a state-of-the-art optimization tool. Future research will prioritize bridging the gap between theoretical optimization and real-world deployment, ensuring robust performance in dynamic and uncertain environments.

Author Contributions

P.Q.: Conceptualization, Methodology, Writing, Data testing, Reviewing, Software; X.S.: Conceptualization, Funding; Z.Z.: Supervision, Reviewing. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (No. 52065010 and No. 52165063). The Key Laboratory of New Power System Operation Control of Guizhou Province (Qiankehe Platform ZSYS[2025]007).

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Qu, P.J.; Yuan, Q.N.; Du, F.L.; Gao, Q.Y. An improved manta ray foraging optimization algorithm. Sci. Rep. 2024, 14, 10301. [Google Scholar] [CrossRef]
  2. Xiao, Y.N.; Sun, X.; Guo, Y.L.; Li, S.P.; Zhang, Y.P.; Wang, Y.W. An Improved Gorilla Troops Optimizer Based on Lens Opposition-Based Learning and Adaptive β-Hill Climbing for Global Optimization. Cmes-Comput. Model. Eng. Sci. 2022, 131, 815–850. [Google Scholar] [CrossRef]
  3. MiarNaeimi, F.; Azizyan, G.; Rashki, M. Horse herd optimization algorithm: A nature-inspired algorithm for high-dimensional optimization problems. Knowl.-Based Syst. 2021, 213, 106711. [Google Scholar] [CrossRef]
  4. El-kenawy, E.M.; Khodadadi, N.; Mirjalili, S.; Abdelhamid, A.A.; Eid, M.M.; Ibrahim, A. Greylag Goose Optimization: Nature-inspired optimization algorithm. Expert Syst. Appl. 2024, 238, 122147. [Google Scholar] [CrossRef]
  5. Zhu, F.F.; Shuai, Z.H.; Lu, Y.R.; Su, H.H.; Yu, R.W.; Li, X.; Zhao, Q.; Shuai, J.W. oBABC: A one-dimensional binary artificial bee colony algorithm for binary optimization. Swarm Evol. Comput. 2024, 87, 10156. [Google Scholar] [CrossRef]
  6. Ebeed, M.; Abdelmotaleb, M.A.; Khan, N.H.; Jamal, R.; Kamel, S.; Hussien, A.G.; Zawbaa, H.M.; Jurado, F.; Sayed, K. A Modified Artificial Hummingbird Algorithm for solving optimal power flow problem in power systems. Energy Rep. 2024, 11, 982–1005. [Google Scholar] [CrossRef]
  7. Mirjalili, S.; Lewis, A. The Whale Optimization Algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  8. Belge, E.; Altan, A.; Hacioglu, R. Metaheuristic Optimization-Based Path Planning and Tracking of Quadcopter for Payload Hold-Release Mission. Electronics 2022, 11, 1208. [Google Scholar] [CrossRef]
  9. Li, W.Y.; Shi, R.H.; Dong, J. Harris hawks optimizer based on the novice protection tournament for numerical and engineering optimization problems. Appl. Intell. 2023, 53, 6133–6158. [Google Scholar] [CrossRef]
  10. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  11. Shao, S.K.; Peng, Y.; He, C.L.; Du, Y. Efficient path planning for UAV formation via comprehensively improved particle swarm optimization. Isa Trans. 2020, 97, 415–430. [Google Scholar] [CrossRef]
  12. Miao, C.W.; Chen, G.Z.; Yan, C.L.; Wu, Y.Y. Path planning optimization of indoor mobile robot based on adaptive ant colony algorithm. Comput. Ind. Eng. 2021, 156, 107230. [Google Scholar] [CrossRef]
  13. Mafarja, M.M.; Mirjalili, S. Hybrid Whale Optimization Algorithm with simulated annealing for feature selection. Neurocomputing 2017, 260, 302–312. [Google Scholar] [CrossRef]
  14. Yu, H.L.; Gao, Y.L.; Wang, L.; Meng, J.T. A Hybrid Particle Swarm Optimization Algorithm Enhanced with Nonlinear Inertial Weight and Gaussian Mutation for Job Shop Scheduling Problems. Mathematics 2020, 8, 1355. [Google Scholar] [CrossRef]
  15. Li, C.L.; Sun, G.J.; Deng, L.B.; Qiao, L.Y.; Yang, G.Q. A population state evaluation-based improvement framework for differential evolution. Inf. Sci. 2023, 629, 15–38. [Google Scholar] [CrossRef]
  16. Xiao, Y.N.; Cui, H.; Hussien, A.G.; Hashim, F.A. MSAO: A multi-strategy boosted snow ablation optimizer for global optimization and real-world engineering applications. Adv. Eng. Inform. 2024, 61, 102464. [Google Scholar] [CrossRef]
  17. Gandomi, A.H. Interior search algorithm (ISA): A novel approach for global optimization. Isa Trans. 2014, 53, 1168–1183. [Google Scholar] [CrossRef]
  18. Shayanfar, H.; Gharehchopogh, F.S. Farmland fertility: A new metaheuristic algorithm for solving continuous optimization problems. Appl. Soft Comput. 2018, 71, 728–746. [Google Scholar] [CrossRef]
  19. Padash, A.; Sandev, T.; Kantz, H.; Metzler, R.; Chechkin, A. Asymmetric Levy Flights Are More Efficient in Random Search. Fractal Fract. 2022, 6, 260. [Google Scholar] [CrossRef]
  20. Pavlik, J.A.; Sewell, E.C.; Jacobson, S.H. Two new bidirectional search algorithms. Comput. Optim. Appl. 2021, 80, 377–409. [Google Scholar] [CrossRef]
  21. Xu, Y.H.; Zhou, W.N.; Fang, J.A.; Sun, W. Adaptive lag synchronization and parameters adaptive lag identification of chaotic systems. Phys. Lett. A 2010, 374, 3441–3446. [Google Scholar] [CrossRef]
  22. Xiao, Y.N.; Cui, H.; Abu Khurma, R.; Castillo, P.A. Artificial lemming algorithm: A novel bionic meta-heuristic technique for solving real-world engineering optimization problems. Artif. Intell. Rev. 2025, 58, 84. [Google Scholar] [CrossRef]
  23. Wei, Q.L.; Liu, D.R.; Lin, H.Q. Value Iteration Adaptive Dynamic Programming for Optimal Control of Discrete-Time Nonlinear Systems. IEEE Trans. Cybern. 2016, 46, 840–853. [Google Scholar] [CrossRef] [PubMed]
  24. Zhao, Y.W.; Niu, B.; Zong, G.D.; Xu, N.; Ahmad, A.M. Event-triggered optimal decentralized control for stochastic interconnected nonlinear systems via adaptive dynamic programming. Neurocomputing 2023, 539, 126163. [Google Scholar] [CrossRef]
  25. Liang, H.J.; Liu, G.L.; Zhang, H.G.; Huang, T.W. Neural-Network-Based Event-Triggered Adaptive Control of Nonaffine Nonlinear Multiagent Systems with Dynamic Uncertainties. IEEE Trans. Neural Netw. Learn. Syst. 2021, 32, 2239–2250. [Google Scholar] [CrossRef]
  26. Mai, C.L.; Zhang, L.X.; Chao, X.W.; Hu, X.; Wei, X.Z.; Li, J. A novel MPPT technology based on dung beetle optimization algorithm for PV systems under complex partial shade conditions. Sci. Rep. 2024, 14, 6471. [Google Scholar] [CrossRef]
  27. Yin, Z.Y.; Jin, Y.F.; Shen, J.S.; Hicher, P.Y. Optimization techniques for identifying soil parameters in geotechnical engineering: Comparative study and enhancement. Int. J. Numer. Anal. Methods Geomech. 2018, 42, 70–94. [Google Scholar] [CrossRef]
  28. Ali, A.F.; Tawhid, M.A. A hybrid cuckoo search algorithm with Nelder Mead method for solving global optimization problems. Springerplus 2016, 5, 473. [Google Scholar] [CrossRef]
  29. Huang, J.; Biggs, J.D.; Bai, Y.L.; Cui, N.G. Integrated guidance and control for solar sail station-keeping with optical degradation. Adv. Space Res. 2021, 67, 2823–2833. [Google Scholar] [CrossRef]
  30. He, J.H.; Zhao, S.J.; Ding, J.Y.; Wang, Y.M. Mirage search optimization: Application to path planning and engineering design problems. Adv. Eng. Softw. 2025, 203, 103883. [Google Scholar] [CrossRef]
  31. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95-International Conference on Neural Networks, Perth, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948. [Google Scholar] [CrossRef]
  32. Jia, H.M.; Zhou, X.L.; Zhang, J.R.; Mirjalili, S. Superb Fairy-wren Optimization Algorithm: A novel metaheuristic algorithm for solving feature selection problems. Clust. Comput.-J. Netw. Softw. Tools Appl. 2025, 28, 246. [Google Scholar] [CrossRef]
  33. Wang, J.; Wang, W.C.; Hu, X.X.; Qiu, L.; Zang, H.F. Black-winged kite algorithm: A nature-inspired meta-heuristic for solving benchmark functions and engineering problems. Artif. Intell. Rev. 2024, 57, 98. [Google Scholar] [CrossRef]
  34. Hamad, R.K.; Rashid, T.A. GOOSE algorithm: A powerful optimization tool for real-world engineering challenges and beyond. Evol. Syst. 2024, 15, 1249–1274. [Google Scholar] [CrossRef]
  35. Zhong, C.T.; Li, G.; Meng, Z. Beluga whale optimization: A novel nature-inspired metaheuristic algorithm. Knowl.-Based Syst. 2022, 251, 109215. [Google Scholar] [CrossRef]
Figure 1. The ALAs mainly simulate three behaviors of lemmings: (a) long-distance migration; (b) digging holes; (c) foraging behavior.
Figure 2. DMSALAs workflow.
Figure 3. The solutions of the DMSALAs on 5 functions when λ, p_max, and N take different values.
Figure 4. Convergence curves of DMSALAs and its variants on IEEE CEC 2017.
Figure 5. Convergence curves of DMSALAs and 8 advanced algorithms on IEEE CEC 2017.
Figure 6. Convergence curves of DMSALAs and 8 advanced algorithms on IEEE CEC 2022.
Figure 7. Paths found by DMSALAs and 8 advanced algorithms.
Figure 8. Structural diagram of the RIA-E100 mobile robot.
Figure 9. The operation of the RIA-E100 mobile robot in the real environment.
Figure 10. Map of the RIA-E100 mobile robot in Rviz.
Table 1. Time Complexity Analysis.
Function | Dimension | Algorithm | T0 | T1 | T2 | (T1 − T2)/T0
CEC2017 F1 | 10 | DMSALAs | 0.045560 | 0.814482 | 0.582601 | 5.089566
CEC2017 F1 | 10 | GWO | 0.045560 | 0.626984 | 0.375094 | 5.528745
CEC2017 F1 | 10 | GA | 0.045560 | 0.782276 | 0.610065 | 3.779854
CEC2017 F1 | 10 | SFOA | 0.045560 | 0.689711 | 0.549774 | 3.071483
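The ratio in the last column normalizes the gap between the two timed quantities by the baseline time T0, so that the result is comparable across machines. The snippet below is a minimal sketch of one common way such timings are collected; the baseline arithmetic loop, the evaluation budget, the sphere objective, and the run_optimizer interface are illustrative assumptions, not the exact protocol used to produce Table 1.

```python
# Minimal sketch of a CEC-style timing measurement; the baseline loop, evaluation budget,
# objective, and optimizer interface are assumptions for illustration only.
import math
import time

def baseline_loop(n=1_000_000):
    """T0: fixed arithmetic loop that normalizes for machine speed."""
    x = 0.55
    for _ in range(n):
        x += x; x /= 2.0; x *= x; x = math.sqrt(x); x = math.log(x + 1.0); x = math.exp(x)
    return x

def timed(task):
    """Wall-clock time of a callable, in seconds."""
    start = time.perf_counter()
    task()
    return time.perf_counter() - start

def sphere(v):
    """Stand-in objective (hypothetical); any benchmark function could be used."""
    return sum(x * x for x in v)

def run_optimizer(objective, dim, max_evals):
    """Placeholder optimizer that just spends the evaluation budget (hypothetical)."""
    best = float("inf")
    for _ in range(max_evals):
        best = min(best, objective([0.1] * dim))
    return best

dim, budget = 10, 200_000
T0 = timed(baseline_loop)
T1 = timed(lambda: [sphere([0.1] * dim) for _ in range(budget)])   # raw evaluations only
T2 = timed(lambda: run_optimizer(sphere, dim, budget))             # full optimizer run
print((T1 - T2) / T0)   # ratio format reported in the last column of Table 1
```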
Table 2. The average solutions obtained by the DMSALAs on five functions under various combinations of λ, p_max, and N.
λ value | 1 | 3 | 5 | 7 | 10
Average solution | 1678.34 | 1682.84 | 1678.24 | 1677.95 | 1678.99
p_max value | 0.1 | 0.3 | 0.5 | 0.7 | 0.9
Average solution | 1681.26 | 1673.98 | 1670.95 | 1679.99 | 1686.48
N value | 20 | 30 | 40 | 50 | 60
Average solution | 1678.28 | 1672.61 | 1678.60 | 1673.34 | 1674.91
Table 3. DMSALAs performance compared with its variants on IEEE CEC 2017.
Function | Metric | DMSALAs | ALAs1 | ALAs2 | ALAs3 | ALAs4 | ALAs5 | ALAs6
CEC2017-F1Mean1.00 × 1024.82 × 1031.00 × 1028.11 × 1021.00 × 1024.33 × 1031.00 × 102
Std2.45 × 10−94.05 × 1033.39 × 10−91.10 × 1034.08 × 10−94.09 × 1034.02 × 10−9
p-value1.50 × 10−27.25 × 10−31.61 × 10−31.00 × 10−34.33 × 10−23.78 × 10−23.81 × 10−2
CEC2017-F3Mean3.00 × 1023.15 × 1023.00 × 1023.00 × 1023.00 × 1023.04 × 1023.00 × 102
Std5.36 × 10−104.10 × 1013.66 × 10−144.56 × 10−92.27 × 10−132.00 × 1011.29 × 10−13
p-value1.02 × 10−21.00 × 10−31.00 × 10−31.00 × 10−33.47 × 10−24.51 × 10−31.00 × 10−3
CEC2017-F4Mean4.00 × 1024.13 × 1024.00 × 1024.00 × 1024.00 × 1024.04 × 1024.00 × 102
Std1.01 × 1002.01 × 1011.72 × 10−138.14 × 10−11.01 × 1001.55 × 1001.01 × 100
p-value1.00 × 10−31.00 × 10−31.00 × 10−31.00 × 10−31.00 × 10−31.47 × 10−21.00 × 10−3
CEC2017-F5Mean5.21 × 1025.19 × 1025.22 × 1025.14 × 1025.16 × 1025.20 × 1025.14 × 102
Std1.00 × 1016.03 × 1006.44 × 1004.98 × 1001.00 × 1019.23 × 1004.41 × 100
p-value3.47 × 10−13.87 × 10−11.91 × 10−23.18 × 10−14.56 × 10−21.00 × 10−11.66 × 10−1
CEC2017-F6Mean6.00 × 1026.00 × 1026.00 × 1026.00 × 1026.01 × 1026.00 × 1026.00 × 102
Std7.83 × 10−15.20 × 10−11.33 × 10−12.17 × 10−11.80 × 1004.46 × 10−23.11 × 10−1
p-value1.00 × 10−31.00 × 10−31.00 × 10−31.00 × 10−31.00 × 10−31.00 × 10−31.00 × 10−3
CEC2017-F7Mean7.22 × 1027.29 × 1027.27 × 1027.22 × 1027.31 × 1027.29 × 1027.36 × 102
Std1.03 × 1018.69 × 1005.64 × 1004.46 × 1001.10 × 1011.05 × 1017.65 × 100
p-value2.45 × 10−15.00 × 10−15.00 × 10−12.64 × 10−15.00 × 10−14.62 × 10−17.40 × 10−2
CEC2017-F8Mean8.18 × 1028.19 × 1028.16 × 1028.13 × 1028.22 × 1028.20 × 1028.16 × 102
Std9.68 × 1009.44 × 1006.32 × 1004.89 × 1009.68 × 1007.74 × 1006.91 × 100
p-value1.26 × 10−25.00 × 10−13.79 × 10−11.52 × 10−11.70 × 10−24.52 × 10−13.24 × 10−1
CEC2017-F9Mean9.00 × 1029.08 × 1029.00 × 1029.02 × 1029.01 × 1029.02 × 1029.00 × 102
Std4.98 × 1002.64 × 1006.38 × 10−18.40 × 10−21.40 × 1015.56 × 1002.03 × 10−1
p-value1.00 × 10−31.00 × 10−31.00 × 10−31.00 × 10−31.00 × 10−31.00 × 10−31.00 × 10−3
CEC2017-F10Mean1.57 × 1031.81 × 1031.58 × 1031.62 × 1031.63 × 1031.85 × 1031.60 × 103
Std1.89 × 1022.31 × 1021.80 × 1022.46 × 1021.67 × 1022.41 × 1022.00 × 102
p-value2.37 × 10−22.25 × 10−11.00 × 10−19.05 × 10−31.77 × 10−12.49 × 10−13.48 × 10−2
CEC2017-F11Mean1.10 × 1031.11 × 1031.11 × 1031.12 × 1031.12 × 1031.11 × 1031.11 × 103
Std8.64 × 1004.17 × 1003.99 × 1002.73 × 1009.81 × 1004.71 × 1013.96 × 100
p-value1.00 × 10−33.73 × 10−24.54 × 10−23.39 × 10−21.00 × 10−31.00 × 10−33.67 × 10−2
CEC2017-F12Mean1.54 × 1032.74 × 1041.65 × 1031.69 × 1033.13 × 1033.15 × 1041.57 × 103
Std1.93 × 1022.03 × 1043.13 × 1021.72 × 1024.62 × 1032.17 × 1042.80 × 102
p-value5.00 × 10−13.10 × 10−24.08 × 10−25.00 × 10−11.00 × 10−33.64 × 10−38.22 × 10−2
CEC2017-F13Mean1.31 × 1031.79 × 1031.84 × 1031.38 × 1031.36 × 1031.38 × 1031.31 × 103
Std1.18 × 1023.98 × 1021.81 × 1012.39 × 1021.32 × 1023.26 × 1029.25 × 100
p-value1.00 × 10−31.03 × 10−21.00 × 10−31.00 × 10−31.00 × 10−33.36 × 10−11.00 × 10−3
CEC2017-F14Mean1.42 × 1031.44 × 1031.44 × 1031.44 × 1031.44 × 1031.42 × 1031.42 × 103
Std1.14 × 1011.02 × 1019.13 × 1007.75 × 1009.31 × 1009.41 × 1007.48 × 100
p-value2.96 × 10−14.63 × 10−21.00 × 10−31.00 × 10−35.00 × 10−12.97 × 10−11.00 × 10−3
CEC2017-F15Mean1.51 × 1031.62 × 1031.51 × 1031.65 × 1031.60 × 1031.61 × 1031.51 × 103
Std1.40 × 1026.74 × 1011.70 × 1014.11 × 1007.36 × 1015.63 × 1018.18 × 100
p-value1.00 × 10−31.50 × 10−11.00 × 10−31.24 × 10−14.29 × 10−21.59 × 10−12.61 × 10−2
CEC2017-F16Mean1.64 × 1031.69 × 1031.68 × 1031.66 × 1031.68 × 1031.66 × 1031.65 × 103
Std6.57 × 1018.47 × 1015.55 × 1018.03 × 1019.01 × 1019.03 × 1016.35 × 101
p-value1.00 × 10−32.38 × 10−31.00 × 10−31.00 × 10−31.00 × 10−31.00 × 10−31.00 × 10−3
CEC2017-F17Mean1.73 × 1031.76 × 1031.77 × 1031.78 × 1031.77 × 1031.74 × 1031.74 × 103
Std4.86 × 1014.87 × 1019.68 × 1002.26 × 1014.71 × 1015.29 × 1012.79 × 101
p-value1.00 × 10−31.00 × 10−34.54 × 10−31.00 × 10−31.00 × 10−31.00 × 10−31.00 × 10−3
CEC2017-F18Mean2.44 × 1033.82 × 1031.84 × 1031.82 × 1032.45 × 1034.83 × 1031.84 × 103
Std1.12 × 1034.31 × 1031.64 × 1014.93 × 1001.20 × 1036.81 × 1032.13 × 101
p-value1.00 × 10−31.00 × 10−33.39 × 10−11.23 × 10−31.00 × 10−31.00 × 10−31.00 × 10−3
CEC2017-F19Mean1.94 × 1032.00 × 1031.91 × 1031.90 × 1031.95 × 1031.95 × 1031.91 × 103
Std4.30 × 1011.71 × 1023.60 × 1001.43 × 1001.39 × 1024.00 × 1013.22 × 100
p-value1.00 × 10−31.00 × 10−32.03 × 10−25.00 × 10−11.00 × 10−31.00 × 10−35.29 × 10−2
CEC2017-F20Mean2.04 × 1032.07 × 1032.04 × 1032.08 × 1032.10 × 1032.05 × 1032.03 × 103
Std6.06 × 1016.55 × 1014.00 × 1013.59 × 1017.10 × 1014.54 × 1012.63 × 101
p-value1.00 × 10−31.00 × 10−31.00 × 10−31.00 × 10−36.93 × 10−21.00 × 10−31.00 × 10−3
CEC2017-F21Mean2.23 × 1032.23 × 1032.25 × 1032.24 × 1032.21 × 1032.23 × 1032.25 × 103
Std5.02 × 1014.88 × 1015.82 × 1015.58 × 1013.10 × 1015.20 × 1015.89 × 101
p-value1.00 × 10−31.00 × 10−31.00 × 10−31.00 × 10−31.00 × 10−31.00 × 10−31.00 × 10−3
CEC2017-F22Mean2.30 × 1032.44 × 1032.43 × 1032.30 × 1032.30 × 1032.30 × 1032.30 × 103
Std1.34 × 1011.30 × 1016.23 × 10−12.25 × 1014.32 × 1024.25 × 1027.33 × 10−1
p-value1.00 × 10−31.00 × 10−35.00 × 10−11.00 × 10−31.00 × 10−31.00 × 10−31.01 × 10−1
CEC2017-F23Mean2.61 × 1032.62 × 1032.62 × 1032.62 × 1032.62 × 1032.62 × 1032.61 × 103
Std7.46 × 1001.30 × 1014.83 × 1007.30 × 1007.32 × 1008.43 × 1005.63 × 100
p-value3.82 × 10−37.62 × 10−35.00 × 10−13.45 × 10−34.71 × 10−21.34 × 10−22.39 × 10−1
CEC2017-F24Mean2.73 × 1032.74 × 1032.73 × 1032.75 × 1032.75 × 1032.73 × 1032.74 × 103
Std6.38 × 1014.61 × 1016.38 × 1015.81 × 1004.74 × 1016.30 × 1014.88 × 100
p-value1.00 × 10−31.00 × 10−31.00 × 10−35.00 × 10−11.00 × 10−31.00 × 10−32.77 × 10−1
CEC2017-F25Mean2.92 × 1032.92 × 1032.92 × 1032.93 × 1032.93 × 1032.93 × 1032.92 × 103
Std3.27 × 1016.66 × 1012.27 × 1012.31 × 1016.80 × 1012.64 × 1012.39 × 101
p-value1.00 × 10−31.00 × 10−31.00 × 10−31.00 × 10−31.00 × 10−31.00 × 10−31.00 × 10−3
CEC2017-F26Mean2.94 × 1033.13 × 1033.07 × 1033.06 × 1033.11 × 1033.03 × 1032.94 × 103
Std2.91 × 1024.06 × 1021.07 × 1023.86 × 1023.14 × 1023.10 × 1021.63 × 102
p-value1.00 × 10−31.00 × 10−31.00 × 10−31.00 × 10−31.00 × 10−31.00 × 10−31.00 × 10−3
CEC2017-F27Mean3.09 × 1033.09 × 1033.09 × 1033.09 × 1033.09 × 1033.09 × 1033.09 × 103
Std2.22 × 1009.01 × 1009.52 × 1008.71 × 1002.92 × 1002.85 × 1002.22 × 100
p-value1.07 × 10−11.00 × 10−31.00 × 10−31.00 × 10−35.00 × 10−11.00 × 10−31.00 × 10−3
CEC2017-F28Mean3.17 × 1033.41 × 1033.33 × 1033.34 × 1033.18 × 1033.18 × 1033.17 × 103
Std5.64 × 1011.63 × 1025.46 × 1012.29 × 1026.29 × 1011.47 × 1026.08 × 101
p-value1.00 × 10−31.00 × 10−31.00 × 10−31.00 × 10−31.00 × 10−31.00 × 10−31.00 × 10−3
CEC2017-F29Mean3.20 × 1033.21 × 1033.17 × 1033.17 × 1033.19 × 1033.19 × 1033.17 × 103
Std6.42 × 1015.18 × 1013.33 × 1014.10 × 1014.40 × 1014.91 × 1012.96 × 101
p-value1.00 × 10−31.03 × 10−16.05 × 10−31.00 × 10−32.04 × 10−18.48 × 10−35.16 × 10−2
CEC2017-F30Mean3.29 × 1032.95 × 1053.33 × 1036.52 × 1053.31 × 1033.11 × 1053.30 × 103
Std8.67 × 1016.75 × 1051.68 × 1021.08 × 1064.76 × 1015.80 × 1055.73 × 101
p-value1.46 × 10−31.00 × 10−31.00 × 10−31.00 × 10−31.00 × 10−31.00 × 10−32.36 × 10−2
(+/−/=) Statistic | (22/1/6) | (3/11/15) | (10/4/15) | (11/7/11) | (6/9/14) | (5/5/19) | (17/2/10)
Friedman mean | 1.48 | 3.52 | 2.34 | 2.55 | 2.90 | 2.97 | 1.62
Ranking | 1 | 7 | 3 | 4 | 5 | 6 | 2
The best mean results and standard deviation values are highlighted in bold. The p-value is the result of the Lilliefors normality test.
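The Friedman mean values in Tables 3–5 summarize how each algorithm is ranked across all benchmark functions. The sketch below shows a minimal way such mean ranks can be computed from the per-function mean errors; the average-tie convention and the toy data are assumptions for illustration, not the paper's exact procedure.

```python
# Minimal sketch of the "Friedman mean" rows: rank the algorithms on each function by
# mean error (1 = best) and average the ranks over all functions. Toy data only.
import numpy as np
from scipy.stats import rankdata

def friedman_mean_ranks(mean_errors):
    """mean_errors: 2-D array, rows = functions, columns = algorithms."""
    ranks = np.vstack([rankdata(row, method="average") for row in np.asarray(mean_errors)])
    return ranks.mean(axis=0)          # one mean rank per algorithm; smaller is better

# Toy example: 3 functions, 4 algorithms
errors = [[1.0, 2.0, 3.0, 4.0],
          [1.5, 1.2, 2.0, 3.0],
          [0.9, 1.1, 1.0, 2.5]]
print(friedman_mean_ranks(errors))     # approximately [1.33, 2.00, 2.67, 4.00]
```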
Table 4. DMSALAs performance compared with 8 advanced algorithms on IEEE CEC 2017.
Function | Metric | DMSALAs | ALAs | WMAs | PSOs | SFOAs | BKAs | GOOSEs | BWOs | GWOs
CEC2017-F1Mean1.00 × 1021.65 × 1033.34 × 1031.78 × 1032.51 × 1094.28 × 1052.75 × 1037.35 × 1093.92 × 107
Std9.25 × 10−91.86 × 1033.15 × 1031.99 × 1031.8 × 1092.55 × 1052.64 × 1032.56 × 1091.99 × 108
p-value1.66 × 10−21.00 × 10−32.56 × 10−21.00 × 10−31.60 × 10−21.18 × 10−34.54 × 10−23.48 × 10−21.00 × 10−3
CEC2017-F3Mean3.00 × 1023.00 × 1023.00 × 1023.00 × 1023.20 × 1049.68 × 1033.21 × 1029.51 × 1032.56 × 103
Std6.24 × 10−108.18 × 10−93.50 × 10−141.02 × 10−31.64 × 1047.72 × 1037.71 × 1011.84 × 1032.30 × 103
p-value2.92 × 10−21.00 × 10−31.00 × 10−31.00 × 10−35.00 × 10−13.23 × 10−31.00 × 10−32.97 × 10−21.00 × 10−3
CEC2017-F4Mean4.00 × 1024.00 × 1024.00 × 1024.07 × 1025.76 × 1024.06 × 1024.10 × 1027.58 × 1024.09 × 102
Std7.28 × 10−14.78 × 10−19.05 × 10−31.21 × 1011.10 × 1024.41 × 1002.01 × 1011.32 × 1026.13 × 100
p-value1.00 × 10−33.49 × 10−21.73 × 10−21.00 × 10−31.85 × 10−26.12 × 10−31.00 × 10−35.00 × 10−11.00 × 10−3
CEC2017-F5Mean5.20 × 1025.15 × 1025.11 × 1025.47 × 1025.84 × 1025.30 × 1025.84 × 1025.86 × 1025.18 × 102
Std8.08 × 1005.91 × 1004.30 × 1001.34 × 1012.00 × 1011.01 × 1012.74 × 1011.15 × 1011.09 × 101
p-value1.96 × 10−25.00 × 10−14.25 × 10−23.37 × 10−14.41 × 10−35.00 × 10−12.41 × 10−25.00 × 10−31.92 × 10−2
CEC2017-F6Mean6.00 × 1026.00 × 1026.02 × 1026.12 × 1026.50 × 1026.12 × 1026.57 × 1026.47 × 1026.01 × 102
Std1.43 × 1009.82 × 10−12.83 × 1001.02 × 1011.49 × 1015.36 × 1001.13 × 1016.29 × 1001.76 × 100
p-value1.00 × 10−31.00 × 10−31.00 × 10−31.39 × 10−41.50 × 10−25.00 × 10−15.00 × 10−12.85 × 10−21.00 × 10−3
CEC2017-F7Mean7.33 × 1027.27 × 1027.26 × 1027.24 × 1028.42 × 1027.67 × 1021.00 × 1037.98 × 1027.32 × 102
Std1.04 × 1018.36 × 1008.58 × 1006.68 × 1004.96 × 1012.21 × 1011.91 × 1021.02 × 1019.33 × 100
p-value4.15 × 10−22.91 × 10−23.23 × 10−24.01 × 10−21.18 × 10−31.75 × 10−21.00 × 10−39.56 × 10−25.00 × 10−1
CEC2017-F8Mean8.22 × 1028.17 × 1028.12 × 1028.20 × 1028.78 × 1028.42 × 1028.60 × 1028.52 × 1028.14 × 102
Std1.03 × 1016.80 × 1005.89 × 1009.08 × 1001.95 × 1011.46 × 1012.40 × 1016.15 × 1005.86 × 100
p-value1.71 × 10−25.00 × 10−13.08 × 10−25.00 × 10−13.40 × 10−14.02 × 10−12.70 × 10−21.66 × 10−19.11 × 10−2
CEC2017-F9Mean9.00 × 1029.03 × 1029.04 × 1029.34 × 1022.48 × 1031.35 × 1032.30 × 1031.42 × 1039.08 × 102
Std1.81 × 10−14.45 × 1008.18 × 1001.31 × 1021.00 × 1032.37 × 1026.68 × 1029.67 × 1011.37 × 101
p-value1.00 × 10−31.00 × 10−31.00 × 10−31.00 × 10−31.65 × 10−21.50 × 10−21.53 × 10−23.47 × 10−11.00 × 10−3
CEC2017-F10Mean1.71 × 1031.67 × 1031.59 × 1031.90 × 1032.74 × 1031.74 × 1032.52 × 1032.46 × 1031.63 × 103
Std1.62 × 1022.57 × 1022.42 × 1023.00 × 1022.32 × 1021.89 × 1023.61 × 1021.39 × 1022.98 × 102
p-value4.75 × 10−34.61 × 10−23.37 × 10−24.00 × 10−14.77 × 10−22.83 × 10−24.50 × 10−25.00 × 10−31.59 × 10−2
CEC2017-F11Mean1.10 × 1031.11 × 1031.15 × 1031.14 × 1032.11 × 1031.15 × 1031.21 × 1032.02 × 1031.14 × 103
Std2.91 × 1009.38 × 1004.88 × 1012.07 × 1011.23 × 1037.12 × 1016.06 × 1015.64 × 1023.05 × 101
p-value2.09 × 10−21.51 × 10−21.16 × 10−22.78 × 10−21.00 × 10−31.00 × 10−32.26 × 10−11.54 × 10−21.00 × 10−3
CEC2017-F12Mean2.01 × 1031.75 × 1031.71 × 1041.62 × 1047.00 × 1075.14 × 1052.65 × 1054.73 × 1078.07 × 105
Std1.48 × 1032.23 × 1021.49 × 1041.38 × 1041.08 × 1081.47 × 1062.13 × 1053.72 × 1077.83 × 105
p-value1.00 × 10−32.80 × 10−22.80 × 10−31.42 × 10−31.00 × 10−31.00 × 10−31.00 × 10−31.85 × 10−37.33 × 10−3
CEC2017-F13Mean1.32 × 1031.34 × 1037.23 × 1038.74 × 1032.04 × 1063.16 × 1031.85 × 1049.66 × 1051.25 × 104
Std1.12 × 1016.13 × 1017.61 × 1037.19 × 1035.52 × 1063.91 × 1031.36 × 1041.01 × 1066.19 × 103
p-value1.00 × 10−31.93 × 10−21.00 × 10−39.45 × 10−31.00 × 10−31.00 × 10−31.86 × 10−21.00 × 10−31.97 × 10−2
CEC2017-F14Mean1.42 × 1031.43 × 1031.47 × 1032.16 × 1031.06 × 1041.58 × 1034.78 × 1031.66 × 1033.58 × 103
Std9.60 × 1001.18 × 1013.57 × 1011.20 × 1038.59 × 1039.21 × 1014.46 × 1037.51 × 1011.84 × 103
p-value4.16 × 10−21.00 × 10−31.44 × 10−21.00 × 10−31.48 × 10−22.32 × 10−21.00 × 10−33.22 × 10−21.00 × 10−3
CEC2017-F15Mean1.63 × 1031.51 × 1031.71 × 1032.78 × 1033.55 × 1042.70 × 1032.12 × 1047.50 × 1035.70 × 103
Std7.14 × 1012.87 × 1001.48 × 1021.53 × 1033.31 × 1041.19 × 1032.05 × 1042.17 × 1034.66 × 103
p-value1.56 × 10−13.22 × 10−11.25 × 10−11.00 × 10−31.56 × 10−21.00 × 10−37.95 × 10−39.32 × 10−33.22 × 10−3
CEC2017-F16Mean1.63 × 1031.66 × 1031.71 × 1031.86 × 1032.09 × 1031.67 × 1032.25 × 1032.03 × 1031.71 × 103
Std4.28 × 1017.35 × 1011.16 × 1021.27 × 1021.70 × 1027.21 × 1012.40 × 1028.49 × 1011.10 × 102
p-value1.00 × 10−31.00 × 10−33.85 × 10−31.08 × 10−22.68 × 10−11.00 × 10−31.37 × 10−13.53 × 10−21.76 × 10−3
CEC2017-F17Mean1.74 × 1031.77 × 1031.78 × 1031.77 × 1031.90 × 1031.75 × 1032.12 × 1031.82 × 1031.76 × 103
Std4.88 × 1012.37 × 1014.78 × 1013.77 × 1019.56 × 1011.21 × 1012.17 × 1022.21 × 1013.16 × 101
p-value3.37 × 10−31.00 × 10−32.28 × 10−31.68 × 10−39.43 × 10−33.03 × 10−21.50 × 10−35.00 × 10−11.00 × 10−3
CEC2017-F18Mean2.65 × 1031.83 × 1031.62 × 1041.38 × 1042.69 × 1068.53 × 1031.70 × 1045.65 × 1062.76 × 104
Std3.36 × 1032.75 × 1011.17 × 1041.17 × 1045.47 × 1061.04 × 1041.44 × 1045.29 × 1061.70 × 104
p-value1.00 × 10−31.00 × 10−32.48 × 10−23.81 × 10−21.00 × 10−31.00 × 10−31.00 × 10−34.17 × 10−31.00 × 10−3
CEC2017-F19Mean1.90 × 1031.94 × 1032.06 × 1034.40 × 1031.04 × 1052.40 × 1039.01 × 1034.87 × 1042.69 × 104
Std4.41 × 1011.32 × 1001.48 × 1023.28 × 1031.63 × 1052.21 × 1028.03 × 1032.85 × 1046.64 × 104
p-value1.00 × 10−35.00 × 10−12.30 × 10−31.00 × 10−31.00 × 10−33.37 × 10−31.00 × 10−31.51 × 10−21.00 × 10−3
CEC2017-F20Mean2.02 × 1032.06 × 1032.06 × 1032.11 × 1032.27 × 1032.05 × 1032.29 × 1032.22 × 1032.08 × 103
Std1.02 × 1014.88 × 1015.45 × 1017.21 × 1011.09 × 1022.75 × 1011.39 × 1024.88 × 1015.71 × 101
p-value1.00 × 10−31.76 × 10−21.00 × 10−31.76 × 10−31.50 × 10−11.00 × 10−32.50 × 10−25.00 × 10−11.00 × 10−3
CEC2017-F21Mean2.25 × 1032.25 × 1032.31 × 1032.30 × 1032.26 × 1032.29 × 1032.37 × 1032.27 × 1032.31 × 103
Std5.85 × 1015.86 × 1012.93 × 1016.62 × 1012.29 × 1015.48 × 1014.97 × 1012.69 × 1011.94 × 101
p-value1.00 × 10−31.00 × 10−31.00 × 10−31.00 × 10−31.00 × 10−31.00 × 10−31.00 × 10−35.00 × 10−11.00 × 10−3
CEC2017-F22Mean2.30 × 1032.30 × 1032.30 × 1032.30 × 1032.67 × 1032.35 × 1032.70 × 1032.65 × 1032.31 × 103
Std1.37 × 1021.44 × 1012.78 × 1008.66 × 1006.66 × 1022.43 × 1027.49 × 1022.09 × 1021.78 × 101
p-value1.00 × 10−31.00 × 10−31.00 × 10−31.00 × 10−31.00 × 10−31.00 × 10−31.00 × 10−31.17 × 10−11.00 × 10−3
CEC2017-F23Mean2.62 × 1032.62 × 1032.62 × 1032.71 × 1032.68 × 1032.62 × 1032.77 × 1032.68 × 1032.62 × 103
Std7.21 × 1007.52 × 1006.76 × 1003.34 × 1011.60 × 1015.47 × 1006.96 × 1011.20 × 1011.25 × 101
p-value1.40 × 10−22.88 × 10−21.41 × 10−23.71 × 10−22.12 × 10−23.28 × 10−22.50 × 10−25.00 × 10−15.00 × 10−1
CEC2017-F24Mean2.74 × 1032.75 × 1032.74 × 1032.79 × 1032.81 × 1032.75 × 1032.88 × 1032.75 × 1032.75 × 103
Std4.60 × 1019.84 × 1006.48 × 1001.00 × 1021.43 × 1012.59 × 1011.41 × 1027.24 × 1011.39 × 101
p-value1.00 × 10−33.64 × 10−21.09 × 10−11.00 × 10−33.40 × 10−11.00 × 10−31.00 × 10−37.96 × 10−21.00 × 10−3
CEC2017-F25Mean2.92 × 1032.92 × 1032.93 × 1032.93 × 1033.05 × 1032.93 × 1032.94 × 1033.17 × 1032.94 × 103
Std1.97 × 1012.36 × 1012.58 × 1012.26 × 1011.07 × 1021.86 × 1017.67 × 1017.60 × 1012.20 × 101
p-value1.00 × 10−31.00 × 10−31.00 × 10−31.00 × 10−33.79 × 10−34.77 × 10−21.00 × 10−35.00 × 10−11.00 × 10−3
CEC2017-F26Mean3.04 × 1032.97 × 1033.04 × 1033.19 × 1033.57 × 1033.01 × 1034.07 × 1033.76 × 1033.11 × 103
Std2.64 × 1022.03 × 1022.96 × 1025.15 × 1024.73 × 1027.43 × 1016.36 × 1022.36 × 1023.87 × 102
p-value1.00 × 10−31.00 × 10−31.00 × 10−31.36 × 10−33.92 × 10−35.00 × 10−15.59 × 10−23.08 × 10−11.00 × 10−3
CEC2017-F27Mean3.09 × 1033.09 × 1033.10 × 1033.18 × 1033.11 × 1033.09 × 1033.27 × 1033.15 × 1033.10 × 103
Std2.27 × 1009.39 × 1002.15 × 1014.13 × 1011.40 × 1011.85 × 1009.38 × 1011.30 × 1011.20 × 101
p-value4.58 × 10−11.00 × 10−31.00 × 10−35.00 × 10−11.21 × 10−23.15 × 10−31.35 × 10−35.00 × 10−11.00 × 10−3
CEC2017-F28Mean3.18 × 1033.34 × 1033.39 × 1033.32 × 1033.44 × 1033.22 × 1033.37 × 1033.55 × 1033.37 × 103
Std5.98 × 1011.78 × 1021.29 × 1021.33 × 1021.18 × 1024.98 × 1011.50 × 1021.04 × 1021.03 × 102
p-value1.00 × 10−31.00 × 10−31.00 × 10−31.00 × 10−31.74 × 10−11.61 × 10−31.00 × 10−35.00 × 10−11.00 × 10−3
CEC2017-F29Mean3.17 × 1033.21 × 1033.20 × 1033.24 × 1033.43 × 1033.18 × 1033.61 × 1033.37 × 1033.20 × 103
Std5.95 × 1012.74 × 1014.81 × 1015.76 × 1011.07 × 1023.10 × 1012.20 × 1025.75 × 1014.47 × 101
p-value7.20 × 10−26.33 × 10−21.52 × 10−21.25 × 10−15.00 × 10−11.00 × 10−33.08 × 10−12.96 × 10−11.24 × 10−3
CEC2017-F30Mean3.28 × 1035.84 × 1055.10 × 1052.75 × 1058.52 × 1051.51 × 1051.51 × 1062.84 × 1065.96 × 105
Std4.10 × 1018.16 × 1059.33 × 1054.94 × 1051.31 × 1052.25 × 1052.11 × 1062.61 × 1069.05 × 105
p-value1.92 × 10−21.00 × 10−31.00 × 10−31.00 × 10−31.00 × 10−31.00 × 10−31.00 × 10−31.00 × 10−31.00 × 10−3
(+/−/=) Statistic | (21/0/8) | (12/0/17) | (8/0/21) | (3/0/26) | (0/10/19) | (2/0/27) | (0/12/17) | (0/7/22) | (1/0/28)
Friedman mean | 1.66 | 2.03 | 3.07 | 3.86 | 6.79 | 3.52 | 6.45 | 6.31 | 4.21
Ranking | 1 | 2 | 3 | 5 | 9 | 4 | 8 | 7 | 6
The best mean results and standard deviation values are highlighted in bold. The p-value is the result of the Lilliefors normality test.
Table 5. DMSALAs performance compared with 8 advanced algorithms on IEEE CEC 2022.
Function | Metric | DMSALAs | ALAs | WMAs | PSOs | SFOAs | BKAs | GOOSEs | BWOs | GWOs
CEC2022-F1Mean3.00 × 1023.00 × 1023.00 × 1023.00 × 1022.30 × 1048.38 × 1033.23 × 1026.94 × 1032.18 × 103
Std8.06 × 10−103.16 × 10−94.09 × 10−143.40 × 10−31.24 × 1044.97 × 1031.25 × 1021.77 × 1032.15 × 103
p-value2.81 × 10−21.00 × 10−31.00 × 10−31.00 × 10−33.62 × 10−22.32 × 10−21.00 × 10−33.75 × 10−11.00 × 10−3
CEC2022-F2Mean4.02 × 1024.07 × 1024.06 × 1024.11 × 1025.88 × 1024.07 × 1024.09 × 1027.87 × 1024.30 × 102
Std2.01 × 1002.72 × 1003.32 × 1002.13 × 1011.43 × 1023.39 × 1001.57 × 1011.45 × 1022.68 × 101
p-value1.00 × 10−31.00 × 10−31.00 × 10−31.00 × 10−31.24 × 10−21.00 × 10−31.00 × 10−32.24 × 10−21.00 × 10−3
CEC2022-F3Mean6.00 × 1026.00 × 1026.00 × 1026.06 × 1026.73 × 1026.17 × 1026.38 × 1026.41 × 1026.02 × 102
Std1.46 × 1007.78 × 10−41.47 × 1006.76 × 1001.76 × 1014.54 × 1001.54 × 1015.39 × 1001.19 × 100
p-value1.00 × 10−31.00 × 10−31.00 × 10−36.98 × 10−32.50 × 10−21.50 × 10−35.00 × 10−15.00 × 10−13.63 × 10−3
CEC2022-F4Mean8.20 × 1028.14 × 1028.12 × 1028.19 × 1028.69 × 1028.37 × 1028.49 × 1028.47 × 1028.14 × 102
Std8.20 × 1006.81 × 1005.16 × 1006.67 × 1001.29 × 1011.53 × 1012.04 × 1015.45 × 1006.90 × 100
p-value1.78 × 10−24.27 × 10−22.21 × 10−22.70 × 10−21.66 × 10−22.26 × 10−21.16 × 10−21.48 × 10−22.89 × 10−3
CEC2022-F5Mean9.00 × 1029.05 × 1029.06 × 1029.55 × 1022.16 × 1031.39 × 1032.15 × 1031.42 × 1039.20 × 102
Std5.64 × 1003.55 × 10−16.33 × 1001.06 × 1027.73 × 1022.99 × 1026.09 × 1021.06 × 1023.66 × 101
p-value2.56 × 10−31.00 × 10−31.00 × 10−31.00 × 10−39.01 × 10−25.12 × 10−23.77 × 10−25.00 × 10−11.00 × 10−3
CEC2022-F6Mean1.80 × 1031.84 × 1031.93 × 1031.87 × 1031.29 × 1073.70 × 1033.00 × 1031.72 × 1063.91 × 103
Std1.11 × 1012.75 × 1011.65 × 1031.71 × 1033.12 × 1071.95 × 1032.52 × 1031.22 × 1062.38 × 103
p-value1.14 × 10−11.00 × 10−33.53 × 10−17.48 × 10−31.00 × 10−33.26 × 10−13.80 × 10−31.87 × 10−21.00 × 10−3
CEC2022-F7Mean2.02 × 1032.02 × 1032.02 × 1032.04 × 1032.10 × 1032.02 × 1032.15 × 1032.09 × 1032.03 × 103
Std6.34 × 1004.89 × 1007.32 × 1001.92 × 1013.43 × 1011.36 × 1006.72 × 1011.36 × 1011.64 × 101
p-value1.00 × 10−31.00 × 10−31.00 × 10−37.05 × 10−22.50 × 10−21.35 × 10−11.00 × 10−34.31 × 10−27.25 × 10−2
CEC2022-F8Mean2.22 × 1032.22 × 1032.22 × 1032.22 × 1032.27 × 1032.23 × 1032.42 × 1032.24 × 1032.22 × 103
Std5.49 × 1004.95 × 1009.40 × 1002.20 × 1016.42 × 1011.43 × 1001.66 × 1024.32 × 1005.75 × 100
p-value1.00 × 10−31.00 × 10−31.00 × 10−31.00 × 10−31.00 × 10−32.33 × 10−16.11 × 10−33.07 × 10−11.00 × 10−3
CEC2022-F9Mean2.49 × 1032.53 × 1032.53 × 1032.53 × 1032.68 × 1032.52 × 1032.56 × 1032.70 × 1032.56 × 103
Std7.10 × 10−120.00 × 1000.00 × 1003.67 × 1016.42 × 1012.91 × 1013.54 × 1012.00 × 1013.72 × 101
p-value1.92 × 10−31.00 × 10−31.00 × 10−31.00 × 10−35.00 × 10−11.00 × 10−34.59 × 10−31.37 × 10−21.00 × 10−3
CEC2022-F10Mean2.50 × 1032.50 × 1032.57 × 1032.56 × 1032.73 × 1032.50 × 1033.05 × 1032.55 × 1032.56 × 103
Std6.32 × 1018.71 × 1015.88 × 1016.66 × 1014.96 × 1022.27 × 10−15.25 × 1025.52 × 1016.02 × 101
p-value1.00 × 10−31.00 × 10−31.00 × 10−31.00 × 10−31.00 × 10−31.18 × 10−13.85 × 10−31.00 × 10−31.00 × 10−3
CEC2022-F11Mean2.73 × 1032.66 × 1032.78 × 1032.75 × 1033.06 × 1032.67 × 1032.76 × 1033.12 × 1032.82 × 103
Std1.61 × 1021.46 × 1021.67 × 1021.63 × 1023.42 × 1024.54 × 1011.66 × 1021.82 × 1022.01 × 102
p-value1.00 × 10−31.00 × 10−32.40 × 10−31.00 × 10−32.16 × 10−31.74 × 10−11.00 × 10−33.83 × 10−11.00 × 10−3
CEC2022-F12Mean2.86 × 1032.86 × 1032.87 × 1032.89 × 1032.87 × 1032.86 × 1033.10 × 1032.90 × 1032.86 × 103
Std2.39 × 1001.86 × 1001.09 × 1005.20 × 1017.29 × 1001.34 × 1007.53 × 1011.73 × 1011.17 × 101
p-value6.79 × 10−31.24 × 10−21.09 × 10−12.98 × 10−34.37 × 10−35.55 × 10−31.82 × 10−13.70 × 10−11.00 × 10−3
(+/−/=) Statistic | (10/0/2) | (7/0/5) | (5/0/7) | (2/0/10) | (0/5/7) | (3/0/9) | (0/4/8) | (0/3/9) | (2/0/10)
Friedman mean | 1.42 | 1.58 | 2.42 | 3.08 | 6.25 | 3.17 | 5.17 | 5.58 | 3.50
Ranking | 1 | 2 | 3 | 4 | 9 | 5 | 7 | 8 | 6
The best mean results and standard deviation values are highlighted in bold. The p-value is the result of the Lilliefors normality test.
Table 6. Wilcoxon rank sum test statistical results.
DMSALAs vs. | CEC 2017 F1–F30 (Dim = 30) | CEC 2022 F1–F12 (Dim = 10) | DMSALAs vs. | CEC 2017 F1–F30 (Dim = 30) | CEC 2022 F1–F12 (Dim = 10)
ALAs | 19/0/10 | 10/0/2 | BKA | 20/0/9 | 10/0/2
WMAs | 17/0/12 | 9/0/3 | GOOSE | 28/0/1 | 12/0/0
PSOs | 25/0/4 | 10/0/2 | BWO | 27/0/2 | 11/0/1
SFOAs | 28/0/1 | 12/0/0 | GWO | 18/0/11 | 10/0/2
Total (+|=|−) | 89/0/27 | 41/0/7 | Total (+|=|−) | 93/0/23 | 43/0/5
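Table 6 counts, for each competitor, the number of functions on which DMSALAs is significantly better, statistically equivalent, or significantly worse according to the Wilcoxon rank-sum test. The sketch below illustrates that bookkeeping; the 0.05 significance level, the use of mean errors to decide the direction of significant differences, and the synthetic run data are assumptions, not the paper's exact settings.

```python
# Minimal sketch of per-function win/tie/loss counts from pairwise Wilcoxon rank-sum tests.
# Significance level, tie-breaking rule, and data are illustrative assumptions.
import numpy as np
from scipy.stats import ranksums

def wilcoxon_summary(runs_a, runs_b, alpha=0.05):
    """runs_a, runs_b: dicts mapping function name -> array of final errors over repeated runs."""
    wins = ties = losses = 0
    for fname in runs_a:
        a, b = np.asarray(runs_a[fname]), np.asarray(runs_b[fname])
        _, p = ranksums(a, b)
        if p >= alpha:
            ties += 1                      # no significant difference
        elif a.mean() < b.mean():
            wins += 1                      # algorithm A significantly better (minimization)
        else:
            losses += 1
    return wins, ties, losses

# Example on synthetic data: counts for one algorithm pair over 29 functions, 30 runs each
rng = np.random.default_rng(0)
runs_dms = {f"F{i}": rng.normal(100, 5, 30) for i in range(1, 30)}
runs_cmp = {f"F{i}": rng.normal(105, 5, 30) for i in range(1, 30)}
print(wilcoxon_summary(runs_dms, runs_cmp))
```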
Table 7. DMSALAs performance compared with 8 advanced algorithms on 4 different maps.
Maps | Metric | DMSALAs | ALAs | WMAs | PSOs | SFOAs | BKAs | GOOSEs | BWOs | GWOs
10 × 10 | Best | 12.95742 | 12.95742 | 12.95742 | 12.95742 | 12.95742 | 12.95742 | 13.5432 | 12.95742 | 12.95742
10 × 10 | Mean | 13.14049 | 13.25876 | 14.87895 | 13.22565 | 15.97011 | 13.68594 | 26.67653 | 13.26048 | 14.48174
10 × 10 | Std | 0.645548 | 0.54497 | 1.685236 | 0.537623 | 3.718501 | 0.549642 | 5.11747 | 0.330406 | 1.479801
20 × 20 | Best | 27.54258 | 28.93703 | 30.2661 | 28.25798 | 33.43024 | 30.39802 | 38.45355 | 40.90339 | 34.57773
20 × 20 | Mean | 28.74429 | 29.20669 | 29.85713 | 29.89343 | 45.32093 | 30.7127 | 111.5527 | 48.98892 | 32.53923
20 × 20 | Std | 0.902847 | 1.230958 | 1.909247 | 2.024177 | 67.05115 | 0.732436 | 146.7869 | 66.32595 | 4.096773
30 × 30 | Best | 47.83477 | 47.96416 | 48.87945 | 48.69489 | 50.94203 | 49.83517 | 61.03903 | 54.45101 | 51.95469
30 × 30 | Mean | 53.22709 | 55.40341 | 81.68151 | 63.42001 | 426.8657 | 60.90314 | 303.6167 | 319.7956 | 69.24881
30 × 30 | Std | 6.135755 | 6.466712 | 154.7195 | 7.363908 | 420.8932 | 5.838168 | 435.3804 | 386.5595 | 12.41287
40 × 40 | Best | 64.9859 | 66.40731 | 69.147 | 70.07844 | 94.10414 | 65.58852 | 91.36641 | 75.39668 | 79.70253
40 × 40 | Mean | 90.5632 | 94.50665 | 98.02464 | 362.4215 | 1353.986 | 99.31272 | 607.4042 | 666.667 | 608.8957
40 × 40 | Std | 17.65466 | 19.12524 | 9.074725 | 606.3704 | 602.6078 | 20.52429 | 768.9426 | 1306.395 | 767.74
(+/−/=) Statistic | | (4/0/0) | (1/3/0) | (1/3/0) | (1/3/0) | (1/1/2) | (1/3/0) | (1/2/1) | (1/1/2) | (1/3/0)
Friedman mean | | 1 | 2.25 | 4.75 | 3.75 | 8 | 4.25 | 7.75 | 7.25 | 6
Ranking | | 1 | 2 | 5 | 3 | 9 | 4 | 8 | 7 | 6
The best values are highlighted in bold.
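The grid-map results in Table 7 compare algorithms by the length of the best collision-free path each finds. As a rough illustration of the quantity being minimized, the sketch below computes the Euclidean length of a waypoint path on a grid and adds a penalty for cells occupied by obstacles; the waypoint encoding and the penalty value are illustrative assumptions, not the paper's exact fitness function.

```python
# Hedged sketch of a grid-map path-length fitness; the obstacle-penalty term and the
# waypoint encoding are illustrative assumptions, not the paper's formulation.
import math

def path_length(waypoints, grid, penalty=1000.0):
    """waypoints: list of (row, col) cells from start to goal; grid[r][c] == 1 marks an obstacle."""
    length = 0.0
    for (r1, c1), (r2, c2) in zip(waypoints, waypoints[1:]):
        length += math.hypot(r2 - r1, c2 - c1)        # Euclidean step length
        if grid[r2][c2] == 1:                         # infeasible cell: add a large penalty
            length += penalty
    return length

# Example on a 10 x 10 map with one obstacle cell; the diagonal path avoids it,
# so no penalty is added.
grid = [[0] * 10 for _ in range(10)]
grid[4][5] = 1
diagonal = [(i, i) for i in range(10)]                # start (0, 0) to goal (9, 9)
print(round(path_length(diagonal, grid), 5))          # 9 * sqrt(2) ≈ 12.72792
```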
Table 8. Results of the DMSALAs and 8 advanced algorithms on the RIA-E100 robot.
Metric | DMSALAs | ALAs | WMAs | PSOs | SFOAs | BKAs | GOOSEs | BWOs | GWOs
Best (m) | 3.687 | 3.941 | 3.914 | 4.090 | 4.651 | 4.983 | 4.797 | 4.486 | 4.338
Mean (m) | 3.881 | 4.069 | 4.077 | 4.368 | 5.049 | 5.217 | 5.268 | 4.805 | 4.539
Std (m) | 0.2587 | 0.1952 | 0.1156 | 0.4897 | 0.5139 | 0.5047 | 0.4928 | 0.5915 | 0.3622
The best values are highlighted in bold.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
