Article

Application of Multi-Strategy Controlled Rime Algorithm in Path Planning for Delivery Robots

by Haokai Lv 1, Qian Qian 1,*, Jiawen Pan 1, Miao Song 2, Yong Feng 1 and Yingna Li 1

1 School of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, China
2 School of Information and Engineering, Shanghai Maritime University, Shanghai 201306, China
* Author to whom correspondence should be addressed.
Biomimetics 2025, 10(7), 476; https://doi.org/10.3390/biomimetics10070476
Submission received: 9 June 2025 / Revised: 10 July 2025 / Accepted: 14 July 2025 / Published: 19 July 2025

Abstract

As a core component of automated logistics systems, delivery robots hold significant application value in the field of unmanned delivery. This research addresses the robot path planning problem, aiming to enhance delivery efficiency and reduce operational costs through systematic improvements to the RIME optimization algorithm. Through in-depth analysis, we identified several major drawbacks of the standard RIME algorithm for path planning: insufficient global exploration capability in the initial stages, a lack of diversity in the hard rime search mechanism, and oscillatory phenomena in soft rime step size adjustment. These issues often lead to undesirable outcomes in path planning, such as local optima traps, path redundancy, or unsmooth trajectories. To address these limitations, this study proposes the Multi-Strategy Controlled Rime Algorithm (MSRIME), whose innovations manifest in three aspects. First, it constructs a multi-strategy collaborative optimization framework, utilizing the infinitely folding Fuch chaotic map for intelligent population initialization to significantly enhance solution diversity. Second, it designs a cooperative mechanism between a controlled elite strategy and an adaptive search strategy that, through a dynamic control factor, autonomously adjusts the strategy activation probability and adaptation rate, expanding the search space while ensuring convergence efficiency. Finally, it introduces a cosine annealing strategy to improve the step size adjustment mechanism, reducing parameter sensitivity and effectively preventing path distortions caused by abrupt step size changes. During the validation phase, comparative tests against two groups of algorithms demonstrated MSRIME's significant advantages in optimization capability, convergence speed, and stability.
Further experimental analysis confirmed that the algorithm's multi-strategy framework effectively suppresses the impact of coordinate and dimensional differences on path quality during iteration, making it more suitable for delivery robot path planning scenarios. Ultimately, path planning results across maps with various Building Coverage Ratios (BCR) and diverse application scenarios show that MSRIME exhibits superior performance in key indicators such as path length, running time, and smoothness, providing novel technical insights and practical solutions for interdisciplinary research between intelligent logistics and computer science.

1. Introduction

The initial development of mobile robot technology was driven by the demands of high-risk industries such as industrial and military applications. These fields require extensive human resources while also facing challenges related to safety, complexity, and high precision. In the industrial sector, early robots were primarily used in nuclear industries, petrochemical plants, offshore operations, construction, outdoor applications (such as forestry and anti-personnel mine clearance), mining, and even recreational activities [1]. Over the past few decades, Finland’s VTT Technical Research Centre has been continuously researching mobile robot technology [2], successfully applying it to underground mines, electronics factories, and other diverse scenarios. In the military sector, the United States has developed mobile systems such as the Mobile Detection Assessment Response System (MDARS) and the Spiral Track Autonomous Robot (STAR) [3], significantly reducing safety risks in high-risk military operations. While these early mobile robots represented breakthroughs in functionality and operation, their applications remained largely confined to specific industrial and military environments.
With rapid advancements in internet technology, sensor technology, computer vision, and artificial intelligence, robots have gradually transitioned from fixed, controlled environments to more complex and dynamic applications. Mobile robots now play a crucial role in search and rescue [4], goods delivery [5], unmanned services [6], and geological exploration [7]. Particularly in the context of seasonal influenza outbreaks, demand for contactless delivery services has surged. Delivery robots, capable of efficiently performing contactless deliveries in diverse environments, have gained increasing public attention. However, a core challenge in their practical application lies in efficiently planning paths, avoiding obstacles, and minimizing energy consumption in complex environments. These challenges make path planning a key research focus.
The path planning problem for delivery robots is highly complex and dynamic, with the primary objective of generating an optimal or near-optimal path from the starting point to the destination within a given environment. The algorithm design must meet the following core requirements: first, the algorithm must possess global optimality, ensuring that it can find the shortest or safest path from the starting point to the destination on a large-scale map; second, the algorithm should minimize movement-related costs to the greatest extent possible, including path length, energy consumption, and time factors [8]. Traditional path planning algorithms are often limited by local optima, low computational efficiency, and poor environmental adaptability, making it difficult to meet the aforementioned requirements.
In recent years, bio-inspired optimization algorithms have demonstrated significant advantages in path planning due to their unique biologically inspired mechanisms. These algorithms, which simulate natural biological behaviors or physical phenomena (such as swarm intelligence, biological evolution, and physicochemical processes), provide novel approaches for solving path optimization problems in complex environments. For instance, Miao et al. [9] proposed an adaptive ant colony algorithm (IAACO), achieving comprehensive global optimization for robot path planning, while Wang et al. [10] introduced a novel flamingo path-planning algorithm, both demonstrating excellent performance in path optimization. However, according to the “No Free Lunch” theorem [11], no single optimization algorithm can perform optimally across all types of path-planning problems. Therefore, achieving a balance between algorithmic specialization and adaptability is crucial to ensuring search efficiency while enabling broad application. Researchers have improved various optimization algorithms, such as enhanced genetic algorithms [12], improved sparrow search algorithms [13], and an enhanced dung beetle optimizer [14]. These advancements have helped overcome the limitations of traditional algorithms and have demonstrated significant potential in robot path planning. Consequently, researchers are increasingly focusing on bio-inspired optimization algorithms, particularly enhanced versions, for path planning applications.
The Rime Optimization Algorithm (RIME) [15] is a novel bio-inspired optimization algorithm inspired by the physical process of rime formation in nature. By simulating the layered growth pattern of soft rime on object surfaces and the penetrating crystallization characteristics of hard rime crystals, it establishes a comprehensive “growth-penetration” collaborative optimization framework. This algorithm innovatively constructs a biomimetic mapping between meteorological phenomena and intelligent computation, offering advantages such as strong robustness, few parameters, and easy implementation. Compared to other bio-inspired algorithms such as the artificial bee colony algorithm [16] and the moth-flame optimization algorithm [17], RIME exhibits superior robustness and has been widely applied in engineering and mechanical fields. For example, Ismaeel et al. [18] applied RIME to parameter identification for proton exchange membrane fuel cells (PEMFC), optimizing fuel cell performance prediction models. Similarly, Abdel-Salam et al. [19] proposed an adaptive mutual-benefit chaotic RIME optimization algorithm, which significantly improved classification accuracy and reduced feature dimensions. Notably, RIME features a simple overall structure, and as demonstrated in [15], this emerging metaheuristic algorithm significantly outperforms most optimization algorithms in global search capability. Additionally, the overall time complexity of the RIME algorithm is O(Np × D × T), where the main factors are the population size Np, the problem dimension D, and the number of iterations T, indicating high computational efficiency. Based on these characteristics, the RIME algorithm can essentially meet the two core requirements of the aforementioned path planning problem.
However, directly applying the RIME algorithm to path planning for delivery robots still has some shortcomings. First, the soft rime search strategy is triggered based on the attachment coefficient E. Since the value of E is relatively low in the early stages of the algorithm, the execution probability of soft rime search is low, leaving most agents unable to update, which reduces the convergence speed and search capability of the algorithm. Second, the hard rime piercing mechanism directly updates agents to the optimal position through a single update method, limiting the exploration ability of hard rime agents and leading to insufficient population diversity. In the path planning process, insufficient early-stage search capability and a lack of population diversity prevent the algorithm from fully exploring the search space, causing it to miss better path segments, generate overly long paths, and ultimately increase the robot's travel time and energy consumption. Finally, the environmental factor in soft rime search decreases in a stepwise manner, leading to unstable variations in search step size. This causes significant differences in step values at different iteration stages, resulting in discontinuous changes in node coordinates. Consequently, the front-to-back coordinate differences and adjacent-dimension differences increase, producing a higher number of turning points in the path and ultimately reducing the smoothness of the generated path.
To address the aforementioned issues and further enhance the application potential of the RIME algorithm in delivery robot path planning, this paper proposes a novel Multi-Strategy Controlled RIME Optimization Algorithm (MSRIME). First, based on an analysis of various chaotic mapping techniques, MSRIME introduces a population initialization method using the Fuch chaotic map [20], leveraging its chaotic properties to generate high-quality initial populations. Second, a controlled elite strategy and an adaptive search strategy are proposed to enhance the algorithm's early-stage search capability and optimize the hard rime piercing mechanism, thereby improving population diversity and convergence speed and effectively enhancing the global optimality of path planning. Additionally, a cosine annealing strategy replaces the original stepwise step-size variation mechanism, ensuring smoother step-size adjustments in soft rime search, reducing the need for manual parameter tuning, and effectively avoiding local optima caused by abrupt step-size changes. By introducing a control factor a, multi-strategy synergy is achieved, further improving the overall performance of the algorithm. To validate the optimization performance of MSRIME, extensive comparative experiments and statistical analyses were conducted. Experimental results on the CEC2017 and CEC2022 benchmark function sets demonstrate that MSRIME significantly outperforms the comparison algorithms in terms of optimization capability, convergence speed, and stability. Finally, MSRIME was applied to path planning experiments for delivery robots in four scenarios with different Building Coverage Ratios (BCR). The results show that MSRIME outperforms the comparison algorithms in key metrics such as path length, runtime, and path smoothness, further proving its superior performance in the field of path planning.
More significantly, as an enhanced version of RIME, MSRIME not only inherits RIME’s meteorological biomimetic characteristics but also integrates multiple optimization strategies through control factors, establishing a more universal “intelligence-nature” collaborative optimization framework. This not only offers a more efficient solution for delivery robot path planning but also fosters deeper interdisciplinary integration across biomimetics, computer science, and intelligent transportation.

2. Rime Optimization Algorithm (RIME)

RIME is an algorithm inspired by the natural formation of rime frost. Rime occurs when water vapor in the air accumulates without condensing and then freezes at low temperatures, adhering to tree branches and other objects. The growth of rime is influenced by factors such as temperature, wind speed, humidity, and air conditions, leading to variations in its formation under different circumstances. Additionally, due to environmental factors and growth limitations, rime cannot grow indefinitely; once it reaches a relatively stable state, its growth ceases. Based on differences in environmental conditions and wind speed, rime typically forms in two distinct types: soft rime and hard rime.

2.1. Initialization Phase

In the initialization phase of the RIME algorithm, the relevant parameters are set, including the population size (N), the dimensionality of the search space (dim), and the maximum number of iterations (T). The initial population consists of agents Ri,j, i = 1, 2, 3, …, N, j = 1, 2, 3, …, dim, generated as follows:
R_{N×dim} = rand(N, dim) × (ub − lb) + lb
Here, N × dim represents the matrix space formed by the population size and dimensionality. ub denotes the upper bound of the feasible domain for the RIME population, while lb represents the lower bound. The initial population is randomly distributed within these boundaries to ensure diversity in the search space.
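The initialization step above can be sketched in a few lines of NumPy. This is an illustrative implementation of Equation (1), assuming scalar bounds ub and lb shared by all dimensions; the function name and the rng parameter are our own.

```python
import numpy as np

def init_population(n, dim, lb, ub, rng=None):
    """Randomly initialize the rime population within [lb, ub] (Eq. (1))."""
    rng = np.random.default_rng(rng)
    # rand(N, dim) * (ub - lb) + lb
    return rng.random((n, dim)) * (ub - lb) + lb
```

Per-dimension bounds would simply replace the scalars with arrays of length dim, which NumPy broadcasts automatically.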

2.2. Soft Rime Search

The exploration and exploitation phase of the RIME algorithm consists of soft rime search and hard rime piercing. The soft rime search is primarily responsible for exploration and exploitation, while the hard rime piercing accelerates the algorithm's convergence. The details are as follows:
By simulating the movement of soft rime agents within the rime system, RIME introduces a soft rime search strategy. This strategy is controlled by an attachment coefficient E, which regulates the condensation probability of soft rime; the value of E increases as the iterations progress, following the variation process illustrated in Figure 1. To ensure a gradual and structured exploration-exploitation transition, the authors introduce two key elements. The environmental factor β is a trapezoidal function, where [·] represents a rounding operation and w controls the number of segments in the trapezoidal structure; β decreases in a stepwise manner as iterations progress, simulating the effect of external environmental conditions. The directional control variable cosθ, together with a random number r1, determines the movement direction of agents in the population. Both θ and β change dynamically over iterations, allowing the algorithm to transition smoothly between large-scale exploration and fine-tuned exploitation.
This design ensures high efficiency and precision in the optimization process. The update formula for soft RIME search is given as follows:
θ = πt/(10T)
β = 1 − [wt/T]/w
E = √(t/T)
R_{i,j}^{new} = R_{best,j} + r1·cosθ·β·(h·(ub_{ij} − lb_{ij}) + lb_{ij}),  if r2 < E
where T represents the maximum number of iterations and t is the current iteration number, controlling the progress of the algorithm. The upper and lower bounds for the j-th dimension of the i-th agent are denoted as ubij and lbij, respectively. r2 and h are random numbers within the range (0,1), where r2 determines whether the agent’s position is updated, and h, together with ubij and lbij, controls the position update range of the agent in each dimension. β is a trapezoidal decreasing function as shown in Figure 2, which works in coordination with other parameters to achieve the soft RIME search strategy.
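As a hedged sketch, the soft rime search update (Equations (2) to (5)) can be vectorized over the whole population as follows. The segment count w = 5, drawing r1 from (−1, 1), and the strict inequality r2 < E are our assumptions where the text is not explicit.

```python
import numpy as np

def soft_rime_step(R, R_best, t, T, lb, ub, w=5, rng=None):
    """One soft rime search update (Eqs. (2)-(5)), vectorized over the population."""
    rng = np.random.default_rng(rng)
    n, dim = R.shape
    theta = np.pi * t / (10 * T)            # Eq. (2)
    beta = 1 - np.floor(w * t / T) / w      # Eq. (3): stepwise (trapezoidal) decrease
    E = np.sqrt(t / T)                      # Eq. (4): attachment coefficient
    r1 = rng.uniform(-1, 1, (n, dim))       # direction, with cos(theta)
    r2 = rng.random((n, dim))               # gates whether a coordinate updates
    h = rng.random((n, dim))                # scales the per-dimension range
    new_pos = R_best + r1 * np.cos(theta) * beta * (h * (ub - lb) + lb)  # Eq. (5)
    return np.where(r2 < E, new_pos, R)     # update only where r2 < E
```

Because E = √(t/T) starts at zero, almost no agent moves in the very first iterations, which is exactly the early-stagnation weakness discussed later in Section 3.2.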

2.3. Hard Rime Piercing

RIME introduces the hard RIME piercing mechanism by simulating the movement of hard RIME agents within the RIME system. The purpose of this mechanism is to ensure that all agents in the population have a certain probability of mutating into hard-rime agents, allowing them to continue the hard-rime exploration process and enhancing the algorithm’s convergence. The specific update formula is as follows:
R_{i,j}^{new} = R_{best,j},  if r3 < F^{normr}(S_i)
where r3 is a random number within the range (0,1), and Fnormr(Si) represents the normalized fitness value of the i-th agent. During the algorithm’s search process, Equations (5) and (6) jointly simulate the position update mechanism of the rime population.
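A minimal sketch of the hard rime piercing update (Equation (6)) follows. The min-max normalization of fitness used here is our assumption; the original algorithm's F^{normr} normalization may differ, so this illustrates the gating mechanism rather than reproducing the reference implementation.

```python
import numpy as np

def hard_rime_piercing(R, R_best, fitness, rng=None):
    """Hard rime piercing (Eq. (6)): each coordinate jumps to the best agent
    with probability given by the agent's normalized fitness."""
    rng = np.random.default_rng(rng)
    n, dim = R.shape
    # Min-max normalization into [0, 1] (our assumption for F^normr)
    f = (fitness - fitness.min()) / (fitness.max() - fitness.min() + 1e-12)
    r3 = rng.random((n, dim))
    return np.where(r3 < f[:, None], R_best, R)
```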

2.4. Positive Greedy Selection Mechanism

The original algorithm employs a positive greedy selection mechanism, where, after each iteration, the updated fitness value of an agent is compared with its pre-update fitness value. If the updated fitness value is superior, both the agent and its fitness value are replaced accordingly. This ensures that the population evolves toward a better direction with each iteration, thereby enhancing exploration efficiency.
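The positive greedy selection mechanism reduces to an element-wise comparison between pre-update and post-update fitness. A minimal sketch, assuming a minimization problem:

```python
import numpy as np

def greedy_select(R_old, f_old, R_new, f_new):
    """Positive greedy selection: keep an update only where it improves
    (lowers) the fitness; otherwise retain the previous agent."""
    improved = f_new < f_old
    R = np.where(improved[:, None], R_new, R_old)
    f = np.where(improved, f_new, f_old)
    return R, f
```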

3. Multi-Strategy Controlled Rime Optimization Algorithm

3.1. Fuch Chaotic Mapping

In metaheuristic intelligent optimization algorithms, the quality of the initial population plays a crucial role in determining overall algorithm performance. A high-quality initial population enhances the global search capability of the algorithm and accelerates convergence, enabling faster identification of the optimal solution. Conversely, a low-quality initial population can restrict the search space, slow down the search process, and even cause the algorithm to fall into local optima, thereby diminishing its overall effectiveness.
However, in most swarm intelligence algorithms, agents in the initial population are generated randomly within a predefined range. This randomness introduces significant uncertainty in the initial population distribution. By replacing conventional random initialization with chaotic mapping, the chaotic properties can be leveraged to generate a highly diverse initial population, enhancing population diversity from the start. Several studies have demonstrated the effectiveness of chaotic mapping in improving initialization quality. Gao et al. [21] employed Tent chaotic mapping for initializing the whale optimization algorithm, increasing the diversity of the initial whale population, and ensuring a more uniform distribution of agent positions. Duan et al. [22] applied Tent chaotic functions along with a reverse learning mechanism to initialize the Grey Wolf Optimizer, resulting in more uniform and diverse population distributions. Wang et al. [23] used Henon chaotic mapping to initialize the Vulture Optimization Algorithm, significantly improving its search efficiency.

3.1.1. Selection of Chaotic Mapping

The RIME algorithm employs a positive greedy selection mechanism during population updates. After each iteration, agents are updated based on their fitness values, retaining only the historical best value for each agent. Additionally, the hard RIME mechanism allows certain agents to directly update their positions to the current global best agent. Furthermore, the elite control strategy (Section 3.2) and the adaptive search equation (Section 3.3) are also designed around the optimal agent. As a result, both the original and improved versions of the RIME algorithm heavily depend on the best agent at each iteration. If the initial population is of low quality and the search process overly relies on the current best agent, the algorithm may only explore regions near a local optimum, increasing the risk of stagnation. To mitigate this issue, chaotic mapping can be used to generate a diverse, random, and unpredictable initial population, reducing the likelihood of the algorithm being trapped in local optima.
To further investigate the characteristics of chaotic mappings and determine the most suitable mapping method for MSRIME, this study conducted a systematic literature review of algorithm-related publications in four top-tier journals: IEEE Transactions on Pattern Analysis and Machine Intelligence, Computers in Industry, Applied Soft Computing, and Artificial Intelligence Review. Focusing on the period from 2004 to 2024, we used the search keywords “chaotic mapping method + optimization algorithm” and statistically analyzed the usage frequency of various chaotic mapping techniques. The results are shown in Table 1. The statistical data reveal that Logistic mapping and Tent mapping are the most frequently used, each appearing in over 200 studies over the past 20 years. The results demonstrate that both are widely applied in the enhancement of optimization algorithms. In addition, multidimensional mappings enable direct interconnection of signals with similar characteristics, which facilitates effective processing and generation of data points in high-dimensional spaces, making them particularly suitable for high-density data scenarios [24]. Although the Fuch mapping appears less frequently in the statistical table, recent studies have revealed its distinctive advantages [25]: parameter-independent strong robustness, aperiodic chaotic stability, and exceptional harmonic spread-spectrum performance. These unique characteristics render it more practical than conventional chaotic mappings in specific application scenarios and technical domains. To ensure the comprehensiveness of this study, we selected not only highly cited one-dimensional chaotic maps (Logistic map and Tent map) but also incorporated a multi-dimensional chaotic map (Henon map) and a one-dimensional chaotic map with multiple unique properties (Fuch map) for more in-depth investigation.

3.1.2. Initial Value Sensitivity Analysis

The aforementioned chaotic mappings share two crucial characteristics: initial value sensitivity and chaotic properties. Initial value sensitivity refers to the phenomenon in chaotic systems where minute changes in initial conditions can lead to significant differences in system trajectories. In chaotic mappings, higher initial value sensitivity indicates that the method can generate an initial population more likely to contain high-quality agents, thus accelerating algorithm convergence.
To assess the sensitivity of chaotic systems to initial values, the bit change rate can be employed as a quantitative metric. As noted in reference [20], when a system’s initial value undergoes a minor perturbation within the range of 10−6, observable changes occur in some bit values within the chaotic sequence. The bit change rate is calculated as the ratio of the number of changed bits (b) to the total number of bits (B), expressed as a percentage (b/B × 100%). A higher percentage indicates greater sensitivity of the system to initial conditions. In this study, we adopted the experimental protocol described in the reference to calculate the bit change rate for the selected chaotic maps. For the two-dimensional Hénon chaotic map, the initial values for both dimensions were set to the same numerical value. Table 2 presents the results of the initial value sensitivity analysis for four chaotic maps.
From the table, it is evident that the Fuch map exhibits the highest overall bit change rate, demonstrating the strongest sensitivity to initial values. The Tent map follows, while the Logistic map and Hénon map show relatively weaker sensitivity to initial conditions.
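The bit change rate protocol can be sketched as follows: perturb the initial value by 10^-6, iterate both trajectories, and count differing bits between corresponding sequence values. Comparing raw IEEE-754 double-precision bit patterns is our interpretation of the protocol in [20], so treat this as illustrative.

```python
import struct

def bit_change_rate(f, x0, eps=1e-6, n_iter=100):
    """Perturb x0 by eps, iterate the map f on both trajectories, and
    return the percentage of differing IEEE-754 bits (b / B * 100%)."""
    a, b = x0, x0 + eps
    changed, total = 0, 0
    for _ in range(n_iter):
        a, b = f(a), f(b)
        bits_a = struct.unpack('<Q', struct.pack('<d', a))[0]
        bits_b = struct.unpack('<Q', struct.pack('<d', b))[0]
        changed += bin(bits_a ^ bits_b).count('1')  # differing bits this step
        total += 64
    return 100.0 * changed / total

# Example: the logistic map, an assumed stand-in for the maps in Table 2
logistic = lambda x: 4.0 * x * (1.0 - x)
```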

3.1.3. Chaotic Property Analysis

The Lyapunov exponent is a key indicator for assessing the chaotic characteristics of a system [26]. A positive Lyapunov exponent indicates chaotic behavior, with larger values corresponding to more pronounced chaotic characteristics and higher degrees of chaos. This paper conducts a Lyapunov exponent analysis for four types of chaotic mappings. In chaotic mappings, df/dx characterizes the degree of trajectory separation, with 0 being the critical point for orbit separation in chaotic systems. For the initial state x0, the trajectory x1, x2, x3, … can be obtained through iterative application of the chaotic map f(x), and a variable k is introduced to represent the initial infinitesimal distance between two trajectories. Let λ denote the trajectory separation exponent during each chaotic iteration. The separation distance between trajectories after n iterations can be expressed as f^n(x0 + k) − f^n(x0) = ke^λ. Substituting this into the chaotic mapping relationship, and letting k approach an infinitesimal value and n approach infinity, the Lyapunov exponent λ values for the four types of chaotic mappings were calculated, with the results shown in Table 3.
e^λ = [f^n(x0 + k) − f^n(x0)]/k = lim_{n→∞} df^n(x)/dx
Based on the comparison of Lyapunov exponents, the Fuch chaotic mapping exhibits the strongest chaotic characteristics, followed by the Tent chaotic mapping, while the Logistic and Henon mappings show weaker chaotic behavior.
Mappings with both high initial value sensitivity and high chaos degree are clearly more suitable for the RIME algorithm. Based on the above analysis, the Logistic and Henon mappings exhibit poor initial value sensitivity and chaotic characteristics, whereas the Fuch and Tent mappings satisfy the required conditions.
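Numerically, the Lyapunov exponent of a one-dimensional map is commonly estimated by averaging ln|f′(x)| along a trajectory, which is equivalent in the limit to the definition above. A sketch using the logistic map, whose exponent is known to be ln 2; the iteration counts and transient skip are our choices.

```python
import math

def lyapunov_exponent(f, df, x0, n_iter=10000, skip=100):
    """Estimate the Lyapunov exponent of a 1-D map by averaging ln|f'(x)|
    along a trajectory (transient iterations are discarded first)."""
    x = x0
    for _ in range(skip):                       # discard the transient
        x = f(x)
    total = 0.0
    for _ in range(n_iter):
        total += math.log(abs(df(x)) + 1e-300)  # guard against log(0)
        x = f(x)
    return total / n_iter

# Logistic map x -> 4x(1 - x); its Lyapunov exponent is ln 2 ≈ 0.693
logistic = lambda x: 4.0 * x * (1.0 - x)
dlogistic = lambda x: 4.0 - 8.0 * x
```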

3.1.4. Comparison of Chaotic Mappings

To scientifically evaluate the performance differences between Fuch mapping and Tent mapping in population initialization, this study designed a systematic comparative experimental scheme. Two variants of the RIME algorithm improved by chaotic mapping were selected: the FRIME algorithm using Fuch mapping and the TRIME algorithm using Tent mapping, while retaining the original RIME algorithm with random initialization as a control benchmark. The experiments selected the first 10 benchmark functions from the CEC2017 test set used by the original algorithm (including the F2 function, which has been officially removed but retained for comparison completeness) for performance evaluation. The population size was set to 50, the number of iterations to 100, and the dimension to 30, and each experiment was run 20 times independently.
Table 4 presents the statistical intervals of the optimization results for the three algorithms over 20 independent runs, where the interval endpoints represent the minimum (left endpoint) and maximum (right endpoint) values of the optimal solutions, respectively. To highlight the algorithmic performance advantages, the best minimum and maximum endpoint values for each test function are bolded. The experimental data show that the FRIME algorithm exhibits the highest frequency of bolded values across most test functions, demonstrating significant advantages. These results indicate that the initialization strategy based on Fuch mapping not only approximates the global optimal solution more stably but also achieves significantly higher solution accuracy than the compared algorithms.
The dispersion of the initial solutions often determines the optimization speed of the algorithm and the probability of getting trapped in a local optimum. From the above analysis, it can be seen that, compared to using Tent chaotic mapping and random generation methods, initializing the population with Fuch chaotic mapping results in a more orderly distribution, thereby minimizing the likelihood of the algorithm getting trapped in a local optimum. Therefore, this paper adopts the Fuch chaotic mapping to enhance the initialization method, with its mathematical model defined as follows:
R_{x+1} = cos(1/R_x^2) × (ub − lb) + lb
In the Fuch chaotic mapping, Rx represents the chaotic variable, and Rx ≠ 0; Equation (8) is used to generate a chaotic sequence. The initial value R0 is randomly selected from the range [0,1]. Through the iteration formula, a series of chaotic values R1, R2, …, Rmax are generated; x represents the iteration count; and max is the maximum iteration count. The specific process is as follows: in the population initialization stage of the MSRIME algorithm, the initial information is first obtained, including population size N, spatial dimension dim, maximum iteration count T, and the upper and lower bounds of the search space, ub and lb. The random number sequence is generated using Equation (8), and the random number sequence corresponding to the population agents is linearly mapped to its upper and lower bounds, resulting in the initial population R.
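A sketch of the Fuch-map initialization described above. The map's raw output lies in [−1, 1], so we linearly rescale it to [0, 1] before mapping to [lb, ub]; this rescaling, and the guard against a zero iterate, are our assumptions rather than details given in the text.

```python
import numpy as np

def fuch_init(n, dim, lb, ub, rng=None):
    """Initialize a population with the Fuch chaotic map x_{k+1} = cos(1/x_k^2),
    then linearly map the chaotic sequence into [lb, ub]."""
    rng = np.random.default_rng(rng)
    x = rng.uniform(0.1, 0.9)      # nonzero R0 in (0, 1)
    seq = np.empty(n * dim)
    for k in range(n * dim):
        x = np.cos(1.0 / (x * x))
        if x == 0.0:               # guard: avoid division by zero next step
            x = 0.5
        seq[k] = x
    u = (seq + 1.0) / 2.0          # rescale cos output from [-1, 1] to [0, 1]
    return (u * (ub - lb) + lb).reshape(n, dim)
```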

3.2. Control Elite Strategy

In the process of solving optimization problems, the dynamic evolution of the population is crucial for improving the overall performance of the algorithm. Reasonable population changes can not only enhance the algorithm's global search ability but also significantly accelerate the convergence process, thereby finding the optimal solution in a shorter time. Although the rime population initialization has been optimized using the Fuch map, as discussed in Section 3.1, inherent flaws in RIME's evolution strategy may still result in low-quality populations (analyzed in detail in the next section). To address this challenge, this section introduces a new search strategy: the control elite strategy.
During the iteration process of the original RIME algorithm, four types of rime agents satisfying different conditions may appear:
(1)
Agents that satisfy only the soft rime search condition but not the hard rime piercing condition, performing soft rime search.
(2)
Agents that satisfy only the hard rime piercing condition but not the soft rime search condition, performing hard rime piercing.
(3)
Agents that satisfy both the soft rime search and hard rime piercing conditions, performing hard rime piercing.
(4)
Agents that satisfy neither the soft rime search nor the hard rime piercing conditions, performing no operations.
The conditions for generating the aforementioned four types of individuals have different values at various stages of the algorithm’s execution. This can lead to a series of defects and shortcomings in the population of individuals during the update process. To visually illustrate how the number of agents meeting different conditions changes during various iterative phases of the algorithm, this study conducted a 100-iteration experiment with 100 agents from the original algorithm.
The data at iterations 1, 30, 60, and 90 were selected to simulate the early, early-middle, middle-late, and late stages of algorithm execution. The proportion of agents satisfying soft frost search (Condition 1), hard frost search (Conditions 2 and 3), and other agents (Condition 4) was calculated, and the results were visualized as the pie charts shown in Figure 3.
Since the adhesion coefficient E in the soft frost search is relatively low in the early stages (as shown in Figure 1), the probability of executing soft frost search is extremely low. Figure 3 also shows that in the early iterations, the proportions of the soft frost population (Soft) and hard frost population (Hard) are relatively low. Most agents remain stagnant in the initial phase due to satisfying Condition (4), resulting in low search efficiency during the early iterations.
Correspondingly, the proportion of agents falling into the "Other" category is relatively high in the early stages. Enhancing the search capability of these agents in the early phase can therefore effectively improve the algorithm's optimization performance. The core idea of the elite learning strategy is to leverage elite agents—those with high fitness in the current iteration—to guide the update of other agents, thereby accelerating convergence toward better solutions.
Specifically, this study applies the elite learning strategy to a subset of agents satisfying Condition (4), enabling some agents to move toward higher-quality agents. This defines a new Condition (5): If an agent satisfies Condition (4) and a randomly generated value is less than a predefined probability a, the agent executes the elite learning strategy.
During the iteration process, agents satisfying Condition (4) may fall into the following two scenarios:
Scenario 1: Some agents have relatively good fitness but satisfy Condition (4) due to a large random number. These agents can be retained.
Scenario 2: Some agents have poor fitness and also satisfy Condition (4) due to a small random number. These agents should execute the elite learning strategy.
To better distinguish between these two types of agents and to adapt to the increasing overall fitness of the frost population as the iteration process progresses, an adaptive control factor, a, is introduced to regulate the triggering probability of the elite learning strategy. The calculation of a is given by Equation (9), and its variation trend is shown in Figure 4. As seen in Figure 4, a increases linearly from an initial value of 0.25 to 0.75 as the number of iterations t increases. This linear increase controls the number of agents satisfying Condition (5) throughout the iterations. Furthermore, as shown in the elite learning formula (Equation (10)), the use of a along with the normalized fitness value of an agent as the triggering condition ensures that only agents with relatively small random numbers and low fitness (i.e., agents corresponding to Scenario 2 described earlier) execute the elite learning strategy.
$$a = \frac{T/2 + t}{2T} \tag{9}$$
The elite learning strategy formula is given as Equation (10):
$$R_{i,j}^{new} = \frac{R_{i,j} + R_{best,j}}{2}, \qquad F_{norm}(S_i) \le r_3 < a,\ r_2 \ge E \tag{10}$$
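Equations (9) and (10), together with trigger Condition (5), can be sketched in a few lines of Python. The function and variable names below are ours, and agents are represented as plain coordinate lists for illustration only:

```python
def control_factor(t, T):
    """Control factor a from Eq. (9): rises linearly from 0.25 to 0.75."""
    return (T / 2 + t) / (2 * T)

def elite_update(R_i, R_best):
    """Elite learning step from Eq. (10): move an agent halfway
    toward the current best agent, dimension by dimension."""
    return [(x + b) / 2 for x, b in zip(R_i, R_best)]

def triggers_elite(F_norm_i, r2, r3, a, E):
    """Condition (5): the agent is otherwise idle (r2 >= E and
    r3 >= F_norm) and its random draw r3 falls below the control
    factor a, so only low-fitness idle agents are redirected."""
    return r2 >= E and F_norm_i <= r3 < a
```

With these definitions, `control_factor(0, 100)` evaluates to 0.25 and `control_factor(100, 100)` to 0.75, matching the range shown in Figure 4.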
To more intuitively investigate the impact of the controlled elite strategy on the dynamic evolution of the population, this study selects the classical optimization test function Schwefel's Problem 1.2 [27] as the experimental subject, as shown in Equation (11):

$$F(x) = \sum_{i=1}^{n}\left(\sum_{j=1}^{i} x_j\right)^2 \tag{11}$$

This function has a well-defined global optimum, easily observable convergence behavior, a nonlinear optimization process, and strong coupling between variables. These characteristics effectively reflect an algorithm's performance in complex search spaces, making it suitable for analyzing population dynamics and demonstrating algorithm performance. The initial population size is set to 100, with the algorithm iterating 100 times. The population distribution is sampled and analyzed at three key iteration points (initial state t = 1, mid-iteration t = 50, and late iteration t = 90), with results shown in Figure 5.
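For reference, Equation (11) can be implemented directly; the coupling between variables comes from the cumulative inner sum. A minimal sketch (function name ours):

```python
def schwefel_1_2(x):
    """Schwefel's Problem 1.2 (Eq. (11)): sum over i of the squared
    cumulative sum x_1 + ... + x_i; global minimum 0 at the origin."""
    total, cum = 0.0, 0.0
    for xj in x:
        cum += xj          # inner sum_{j<=i} x_j, built incrementally
        total += cum ** 2
    return total
```

For example, `schwefel_1_2([1.0, 1.0])` gives 1² + 2² = 5, and the value is 0 only at the origin.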
In the figure, pentagram nodes represent the theoretical optimal solution, red dots indicate soft frost population agents, blue dots represent hard frost population agents, gray dots denote non-mutated agents, and green dots signify newly introduced controlled elite population agents. The experimental results show that, without the controlled elite strategy, the algorithm in the early iteration phase is primarily dominated by non-mutated agents (gray dots), indicating limited search capability at this stage. However, after introducing the controlled elite strategy, some non-mutated agents in the early iterations are assigned elite search capabilities, transforming into elite population agents (green dots).
As the iterations progress, elite population agents maintain a certain proportion in the mid-iteration phase (t = 50) but significantly decrease in the late iteration phase (t = 90), demonstrating that the control factor effectively regulates the probability of elite agent generation. The introduction of the control factor ensures that the early iteration phase is still dominated by non-mutated agents, whereas the late iteration phase is primarily driven by the soft frost and hard frost populations. This mechanism enhances early-stage exploration while preventing premature convergence or insufficient exploitation due to excessive or premature elite population generation.

3.3. Adaptive Search Strategy

As shown in Figure 3, the proportion of individuals satisfying the hard frost puncture (Hard) condition remains relatively stable, fluctuating between approximately 5% and 20% throughout the iteration process. However, as shown in Equation (6), the update mechanism of the hard frost strategy is highly uniform, depriving the population of the diversity required for search and reducing its ability to explore local regions. To address this issue, this paper proposes an adaptive search strategy applied to the update process of the Hard population.
The adaptive search strategy refers to an optimization approach that improves the search process by learning and adapting during iterations, enabling the algorithm to explore and exploit information in the search space more effectively. This strategy dynamically adjusts the search direction, step size, or search space based on problem characteristics and feedback from the search process to enhance algorithm efficiency and performance. While the introduction of the elite learning strategy mitigates the weak search capability in the early and mid-iterations of the algorithm, the search range and accuracy of the population still require further optimization. Beyond individuals satisfying conditions (4) and (5), a portion of the population meets conditions (2) or (3) and follows the Hard Frost update mechanism during the early, middle, and late stages of the iteration process. Enhancing the search range of these individuals in the early and middle stages, as well as improving their local exploitation capability in the later stages, is also a key factor in effectively increasing algorithm efficiency.
In the original algorithm, individuals that satisfy Conditions (2) or (3) are generated directly at the location of the optimal individual, which reduces population diversity during the iteration process. To enable individuals meeting the hard frost condition to explore a larger region around the global optimum in the early and middle stages, and to perform refined exploitation near the global optimum in the later stages, this section integrates the adaptive search strategy with the control factor a from Equation (9) and proposes an adaptive search equation. This equation replaces the original hard frost mechanism (Equation (6)) and adjusts the search range of the hard frost process through the adaptive variation of a. Furthermore, during the iteration process, the value of 1 − a never falls below 0.25, ensuring that even in the later stages of iteration there remains a small probability of generating relatively large step sizes, which improves the algorithm's ability to escape local optima. The specific equation is as follows:
$$R_{i,j}^{new} = R_{best,j} + (2r - 1)(1 - a)\left(R_{best,j} - R_{i,j}\right), \qquad r_3 < F_{norm}(S_i) \tag{12}$$
where r is a random number between 0 and 1, and a is the control factor. As the number of iterations increases, the search range of the hard frost mechanism will adaptively decrease.
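Equation (12) can be sketched as follows; the random source is injectable so the shrinking range is easy to check, and the names are ours rather than from any released implementation:

```python
def adaptive_hard_search(R_i, R_best, t, T, rng):
    """Adaptive hard frost update (Eq. (12)): sample a point around the
    best agent. The factor (1 - a) shrinks the range from 0.75 toward
    0.25 as iterations progress, so it never collapses to zero."""
    a = (T / 2 + t) / (2 * T)          # control factor, Eq. (9)
    new = []
    for j, best_j in enumerate(R_best):
        r = rng()                       # fresh r in [0, 1) per dimension
        new.append(best_j + (2 * r - 1) * (1 - a) * (best_j - R_i[j]))
    return new
```

With a degenerate source `rng = lambda: 0.5`, the factor (2r − 1) is zero and the update returns the best position itself, while `rng = lambda: 1.0` at t = 0 produces the widest offset, 0.75 · (R_best − R_i).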
Through the application of the elite strategy in Section 3.2, MSRIME introduces some elite-characterized Other individuals during the exploration process to enhance the algorithm’s search capability in the early and middle stages. At the same time, the adaptive search strategy optimizes the updating method for the Hard population, allowing it to adaptively increase the search range of the population individuals. The simultaneous application of these two strategies, working together in harmony, can effectively improve the overall search capability of the population.

3.4. Cosine Annealing Strategy

As mentioned in Section 2.2, β is a trapezoidal decreasing function (Equation (3)) used in the original algorithm to simulate changes in the external environment. Its value decreases with the number of iterations, enabling a stepwise transition between large-scale exploration and small-scale exploitation. The parameter w in Equation (3) needs to be manually adjusted to control the number of trapezoidal segments.
In reference [15], the parameter w is set to 5, and the resulting step size function is plotted in Figure 2. As shown in the figure, the step size gradually decreases as the number of iterations increases. However, as indicated by convergence curves (a) and (d) in Figure 11 of Section 5, by the time the iteration count reaches 4/5 of the total (i.e., at the transition from the fourth to the fifth segment of the trapezoidal function), the RIME algorithm's convergence curve has already stabilized near the current optimal solution. This phenomenon can be attributed to the five-segment trapezoidal design, in which the step size changes abruptly at certain critical points. Such discontinuous step size variations disrupt the search process, causing the algorithm to focus prematurely on fine-grained local search before fully exploring the global solution space, and thus to miss the global optimum.
Additionally, the trapezoidal function follows a predefined fixed pattern, lacking the ability to dynamically adjust based on the current search state. Furthermore, in the process of RIME ice formation, the surrounding environment does not change in a stepwise manner. To make the environmental factor β better align with the physical variations of the external environment and to smoothly adjust the step size of Soft Frost agents, the cosine annealing strategy is introduced to replace the trapezoidal function β.
In deep learning, model training typically relies on gradient descent and its variants to iteratively adjust model parameters. The learning rate is a crucial hyperparameter in these algorithms, as it determines the step size for each parameter update. Cosine annealing is a widely used learning rate scheduling strategy applied to optimize the training process of deep learning models [28]. It is formulated as Equation (13), where ηt represents the learning rate at time step t (i.e., the current iteration count), while ηmin and ηmax denote the minimum and maximum learning rates, respectively, and T is the iteration period length (i.e., the maximum number of iterations).
$$\eta_t = \eta_{min} + \frac{1}{2}\left(\eta_{max} - \eta_{min}\right)\left(1 + \cos\left(\frac{t\pi}{T}\right)\right) \tag{13}$$
Li et al. [29] found through research that, compared to the trapezoidal decay approach, cosine annealing is a superior step size adjustment method. Unlike the trapezoidal function, the smooth decay of cosine annealing facilitates a more stable convergence to better solutions during the iterative exploration process. This adjustment method helps reduce oscillations caused by abrupt step size changes, and it eliminates the need for manual parameter tuning by allowing automatic adjustments based on a predefined formula, thereby reducing the complexity of hyperparameter tuning. As demonstrated in the convergence curves presented in Section 5, in some test functions, MSRIME consistently escapes local optima in the later stages of iteration, outperforming the original algorithm.
This observation confirms that the combination of cosine annealing and adaptive search strategies effectively expands the search range, thereby increasing the probability of escaping local optima. Consequently, this paper adopts cosine annealing as the environmental factor, with its variation range set to [0,1]. The variation trend is illustrated in Figure 6, and the final formulation is given as follows:
$$\beta = \frac{1}{2}\left(1 + \cos\left(\frac{t\pi}{T}\right)\right) \tag{14}$$
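The replacement environmental factor is a one-line function; a minimal sketch (naming ours), which also makes the smooth, monotone decay from 1 to 0 easy to verify:

```python
import math

def beta_cosine(t, T):
    """Environmental factor beta (Eq. (14)): smooth cosine decay
    from 1 at t = 0 to 0 at t = T, with no segment transitions."""
    return 0.5 * (1 + math.cos(t * math.pi / T))
```

Unlike the five-segment trapezoidal function, `beta_cosine` is strictly decreasing over the whole run, so there is no iteration at which the soft frost step size jumps discontinuously.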

3.5. Selection of Control Factors and Multi-Strategy Coordination Analysis

Section 3.2 elaborates on the mechanism for setting the control factor a, which serves as the trigger condition for the elite strategy. The value of a is required to increase monotonically. While functions other than Equation (9) (e.g., Equations (15) and (16)) also exhibit this monotonic increase, this study ultimately selected Equation (9) as the control factor.
The primary objective of introducing the controlled elite strategy is to enhance the algorithm’s exploratory capability during the early stages of iteration, to a certain extent. However, it is crucial to prevent a large number of ‘frost’ individuals from abruptly transforming into ‘elite’ individuals, which could lead to premature convergence of the algorithm. Concurrently, a small proportion of elite individuals must continue to be generated and integrated into the exploitation process during the mid-to-late stages of iteration. To validate the appropriateness of this parameter selection, we conducted experiments on population size variation using Schwefel’s Problem 1.2 (from Section 3.2) as the test function. The population size was set to 100, and the number of iterations was 100. We compared three different parameter settings for a: Equation (9)’s a (hereafter referred to as a1), Equation (15)’s a (hereafter referred to as a2), and Equation (16)’s a (hereafter referred to as a3). The experimental results are presented in Figure 7.
$$a = \frac{t}{2T} \tag{15}$$
$$a = \frac{T + t}{2T} \tag{16}$$
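The three candidate control factors can be compared numerically; the endpoint values below (using the labels a1, a2, and a3 from the text; function names are ours) summarize why Equation (9) was retained:

```python
def a1(t, T):
    """Equation (9): rises linearly from 0.25 to 0.75."""
    return (T / 2 + t) / (2 * T)

def a2(t, T):
    """Equation (15): rises linearly from 0.0 to 0.5."""
    return t / (2 * T)

def a3(t, T):
    """Equation (16): rises linearly from 0.5 to 1.0."""
    return (T + t) / (2 * T)

# Endpoints over a run of T iterations: a3 starts high (many early
# elites, risking premature convergence), a2 stays low throughout
# (elite generation dies out), while a1 keeps a moderate trigger
# probability at both ends of the run.
```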
As depicted in Figure 7, the green lines represent the population changes when the controlled elite strategy is triggered. However, it is evident that the curve in Figure 7b (using a2) essentially ceases to generate elite individuals after approximately 60 iterations. Conversely, the curve in Figure 7c (using a3) shows that elite individuals constitute nearly 80% of the population in the very early stages of iteration. Both of these behaviors contradict our initial design intention. Furthermore, the upper limit of the parameter a1 = 0.75 can complement the cosine annealing strategy, ensuring that ‘hard frost’ individuals maintain a sufficiently large step size under the adaptive search strategy (Section 3.3) during the later stages of iteration. This compensates for the reduced ability of ‘soft frost’ individuals to escape local optima under cosine annealing conditions.
To validate the selection of parameters and the effectiveness of the multi-strategy combination, this section utilizes Schwefel’s Problem 1.2 as the test function. We set the initial population size to 50, ran 100 iterations, and repeated the experiment 30 times. The results are presented in Table 5. In this table, RIME1 represents the RIME algorithm integrated with the Fuch chaotic map; RIMEa1, RIMEa2, and RIMEa3 represent the RIME algorithm using the elite control strategies with parameters a1, a2, and a3 respectively; RIME2 denotes the RIME algorithm with the adaptive search strategy; and RIME3 refers to the RIME algorithm incorporating the cosine annealing strategy.
The experimental results show that among the RIMEa1, RIMEa2, and RIMEa3 groups, RIMEa1 performed best in terms of both average value (AVG) and standard deviation (STD), further confirming the rationality of selecting parameter a1 for the control factor. Conversely, RIMEa3 exhibited the poorest optimization performance, indicating that inappropriate parameter settings can lead to an excessive generation of elite individuals in the early stages, causing premature convergence. Notably, the MSRIME algorithm, which integrates a multi-strategy collaborative mechanism, significantly outperformed all single-strategy improved algorithm variants in convergence performance. Figure 8, which analyzes the multi-strategy complementary mechanism, shows that MSRIME demonstrates significant advantages in both optimization accuracy and stability compared to algorithms solely employing the Fuch chaotic map (RIME1), the elite control strategy (RIMEa1), or the cosine annealing strategy (RIME3). This outcome confirms that a carefully designed multi-strategy collaborative framework can not only effectively reduce the complexity of parameter tuning but also fully leverage the complementary strengths of each strategy, thereby comprehensively enhancing the algorithm’s overall optimization performance.

3.6. Time Complexity Analysis

Reference [30] indicates that the time complexity of the original RIME algorithm is determined primarily by the population initialization and update operations (including soft frost search and hard frost puncture). Its overall time complexity can be expressed as O(Np × D × T).
Compared with the original RIME algorithm, the proposed MSRIME algorithm introduces improvements in the following aspects. First, the population initialization method is changed from random generation to the Fuch chaotic map, but its time complexity remains O(Np × D), consistent with the original method. Second, in the update operations, the hard frost search strategy is improved into an adaptive form, with its time complexity unchanged. Additionally, the cosine annealing strategy replaces the step size adjustment mechanism of the original soft frost search; its time complexity is O(T), the same as the original mechanism. Finally, the controlled elite strategy is introduced during the update process, and its control factor update has a time complexity of O(T). In summary, the overall time complexity of the update operations in MSRIME remains O(Np × D × T), the same order as the original RIME algorithm.

3.7. Implementation of MSRIME

Based on the above analysis, this section presents the pseudocode implementation of the MSRIME algorithm (Algorithm 1). The algorithm begins by initializing the population using the Fuch chaotic map, which is known for its high sensitivity to initial values, thereby ensuring a diverse distribution of initial solutions within the search space. Subsequently, it proceeds into an iterative optimization process. At the start of each iteration, key control parameters (including E, a, etc.) and necessary random numbers are dynamically generated. Based on predefined conditions, the current population is adaptively divided into three sub-populations: the soft frost population is updated using Equation (5), the hard frost population using Equation (12), and the elite population using Equation (10). After each iteration, a strict positive greedy selection strategy is executed. Finally, upon reaching the predetermined maximum number of iterations, the algorithm outputs the optimal solution.
Algorithm 1: MSRIME Algorithm
1:  Initialize the population using the Fuch chaotic map.
2:  Obtain the current best agent and best fitness value.
3:  While t ≤ T
4:    Generate random numbers r2, r3; update E, a, and β using Equations (4), (9), and (14).
5:    If r2 < E
6:      Perform soft frost search using Equation (5).
7:    End If
8:    If r3 < Fnorm(Si)
9:      Perform hard frost search using Equation (12).
10:   Else If Fnorm(Si) ≤ r3 < a && r2 ≥ E
11:     Execute the controlled elite strategy using Equation (10).
12:   End If
13:   If F(Ri_new) < F(Ri)
14:     Replace Ri with Ri_new using the positive greedy selection mechanism.
15:   End If
16:   t = t + 1
17: End While
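Algorithm 1 can be condensed into a runnable Python sketch. Equations (4) and (5) are not reproduced in this section, so the attachment coefficient E and the soft frost step below are simplified stand-ins of our own (E = sqrt(t/T) and a small β-scaled perturbation around the best agent), and plain uniform initialization stands in for the Fuch chaotic map; only the hard frost step (Eq. (12)), the controlled elite step (Eq. (10)), the annealed β (Eq. (14)), and the greedy selection follow the text directly:

```python
import math
import random

def msrime(f, lb, ub, dim, n_pop=30, T=200, seed=0):
    """Compact sketch of Algorithm 1 (minimization over [lb, ub]^dim)."""
    rng = random.Random(seed)
    # Uniform random init stands in for the Fuch chaotic map (Section 3.1).
    pop = [[lb + rng.random() * (ub - lb) for _ in range(dim)]
           for _ in range(n_pop)]
    fit = [f(x) for x in pop]
    k = min(range(n_pop), key=lambda i: fit[i])
    best_x, best_f = pop[k][:], fit[k]
    for t in range(1, T + 1):
        E = math.sqrt(t / T)                          # stand-in for Eq. (4)
        a = (T / 2 + t) / (2 * T)                     # control factor, Eq. (9)
        beta = 0.5 * (1 + math.cos(t * math.pi / T))  # Eq. (14)
        fmin, fmax = min(fit), max(fit)
        span = (fmax - fmin) or 1.0
        for i in range(n_pop):
            fnorm = (fit[i] - fmin) / span            # normalized fitness
            r2, r3 = rng.random(), rng.random()
            new = pop[i][:]
            if r2 < E:                                # soft frost (stand-in for Eq. (5))
                new = [best_x[j] + rng.uniform(-1, 1) * beta * 0.1 * (ub - lb)
                       for j in range(dim)]
            if r3 < fnorm:                            # hard frost, Eq. (12)
                new = [best_x[j] + (2 * rng.random() - 1) * (1 - a)
                       * (best_x[j] - pop[i][j]) for j in range(dim)]
            elif fnorm <= r3 < a and r2 >= E:         # controlled elite, Eq. (10)
                new = [(x + b) / 2 for x, b in zip(pop[i], best_x)]
            new = [min(ub, max(lb, v)) for v in new]  # keep inside bounds
            nf = f(new)
            if nf < fit[i]:                           # positive greedy selection
                pop[i], fit[i] = new, nf
                if nf < best_f:
                    best_x, best_f = new[:], nf
    return best_x, best_f
```

Because of the greedy selection, the returned best fitness can never be worse than the best fitness of the initial population, regardless of the random draws.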

4. Qualitative Analysis of MSRIME

Qualitative analysis systematically explores and evaluates the properties, behavior, and characteristics of an algorithm through intuitive understanding, theoretical derivation, and experimental observation. In the study of optimization algorithms, qualitative analysis typically focuses on the overall behavior, performance trends, and performance under different conditions.
To this end, this section selects functions from the CEC2022 test function set (as shown in Table 5), which includes unimodal functions (f1), basic functions (f2–f5), hybrid functions (f6–f8), and composition functions (f9–f12). These functions are derived from corresponding basic test functions through translation, rotation, scaling, and composition, thereby increasing their complexity. This approach eliminates the defect of the basic test functions having the origin as the optimal point and removes the symmetry present in the solution space. As a result, these functions can comprehensively test the global search capability, local search capability, robustness, and adaptability of optimization algorithms.
The experiment is conducted with a problem dimension of 20, a population size of 50, and 500 iterations. The qualitative analysis of MSRIME is performed from two perspectives: First, by analyzing its convergence behavior, the convergence performance and search behavior of MSRIME are demonstrated. Then, the population diversity is explored to examine whether the algorithm can maintain a good diversity of solutions during the search process, thereby avoiding premature convergence.

4.1. Convergence Behavior Analysis

The experimental results of the convergence behavior analysis are shown in Figure 9. To better illustrate the distribution characteristics of the test functions, column (a) in the figure presents the 3D shape of the objective functions, allowing readers to intuitively understand their properties. Column (b) records the search trajectories of the algorithm in the search space, where black dots represent the population distribution and red dots denote the optimal solution, reflecting the exploration range and movement path of the algorithm. Column (c) depicts the variation trend of the average fitness value, serving as a typical convergence indicator that clearly demonstrates the convergence trend of the algorithm. Column (d) tracks the changes in the population along the first dimension during optimization, providing insight into whether the population exhibits wide-range exploration, local search behavior, and convergence tendencies. Column (e) illustrates the change in the best-found objective function value over iterations, directly reflecting the convergence performance of the algorithm.
From column (b), it can be observed that across all functions f1–f12, the population trajectories of the MSRIME algorithm converge toward the optimal solution region: the search paths are relatively concentrated and the population effectively focuses on the target area, demonstrating strong convergence characteristics. In column (c), the average fitness curve declines rapidly and then gradually stabilizes, indicating that population fitness improves quickly in the early stages before converging. Notably, for functions f1, f2, f5, f6, f8, f9, f11, and f12, the fitness curves drop sharply within the first third of the iterations, suggesting that the controlled elite strategy significantly enhances early exploration and accelerates convergence. Additionally, for functions f3, f4, and f7, the fitness curves undergo a further substantial decline even in the later iterations, demonstrating that the adaptive search and cosine annealing strategies effectively prevent the algorithm from being trapped in local optima. The one-dimensional trajectories in column (d) reveal noticeable jumps throughout the process for most functions; this sustained jumping behavior indicates that the algorithm maintains a broad search space while balancing global exploration and local exploitation. Finally, column (e) shows the best objective function value over iterations. For all functions, the descending convergence curves indicate that the algorithm progressively finds better solutions. Notably, for functions f1, f4, f6, and f7, the algorithm escapes local optima in the later iterations to discover better solutions, while for f5 and f8 the optimal solution is found as early as the mid-iterations. These observations further confirm the beneficial impact of the multi-strategy improvements on the algorithm's convergence behavior.

4.2. Population Diversity Analysis

Population diversity is a critical metric for evaluating the range and uniformity of individual distributions, directly influencing an algorithm’s global search capability and convergence performance. This section analyzes the diversity variation trends at different iterative stages through population distribution visualization and diversity measurement curves.
In Figure 10, column (a) presents the characteristics of the target convergence functions. Columns (b) to (e) illustrate the search distributions of the optimization algorithm at different iteration stages (1st, 100th, 250th, and 500th iterations), where black dots represent the population distribution and red dots denote the optimal solution.
From the figure, it can be observed that at iteration 1, which represents the initial state, all population agents are randomly distributed across the entire search space. As the number of iterations increases, up to iteration 100, there is no significant aggregation of agents around the optimal solution, indicating that the algorithm maintains comprehensive global exploration in the early stages. By iteration 250, some agents have started moving toward the vicinity of the optimal solution. At iteration 500, all functions exhibit a noticeable concentration of agents around the optimal solution, demonstrating that in the middle and late stages of the iterations, the population shifts towards refined local exploitation to further optimize the solution.
Finally, in each iteration, the population diversity is quantified by computing the mean deviation of agents from the median in each dimension and then averaging these deviations across all dimensions [31]. This can be expressed by Equation (17):
$$PD = \frac{1}{D}\sum_{j=1}^{D}\frac{1}{N}\sum_{i=1}^{N}\left|x_{i,j} - M_j\right| \tag{17}$$
where PD is the population diversity, N is the number of agents, D is the number of dimensions, xi,j is the value of agent i in dimension j, and Mj is the median of all agents in dimension j. These values are recorded and plotted as a curve, with the horizontal axis representing the number of iterations and the vertical axis representing the diversity value, reflecting the distribution changes of the population in the search space. The resulting curves are shown in column (f). Observing the curves in column (f), a common trend emerges across all functions: high diversity in the early stage, a rapid decline in the middle stage, and stabilization in the later stage. In the initial iterations, all function curves exhibit high population diversity, indicating that the use of the Fuch mapping results in a well-distributed initial population. High diversity implies significant differences between agents, facilitating global exploration. As the algorithm progresses, the population gradually converges toward the optimal solution, leading to a reduction in differences among agents. During this phase, the algorithm transitions from global exploration to local exploitation, accompanied by a decrease in population diversity. In the final convergence stage, the population becomes concentrated in a smaller region, with minimal differences among agents. These observed diversity trends confirm that the algorithm maintains good population diversity and convergence behavior.
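Equation (17) translates directly into code; a minimal sketch assuming the population is a list of coordinate lists (naming ours):

```python
import statistics

def population_diversity(pop):
    """Population diversity PD (Eq. (17)): mean absolute deviation of
    agents from the per-dimension median, averaged over all dimensions."""
    n, d = len(pop), len(pop[0])
    pd = 0.0
    for j in range(d):
        col = [pop[i][j] for i in range(n)]
        m = statistics.median(col)
        pd += sum(abs(v - m) for v in col) / n
    return pd / d
```

A fully converged population (all agents identical) gives PD = 0, while a widely spread population gives a large PD, matching the early-high, late-low trend described for column (f).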

5. Experiments and Analysis

To comprehensively and rigorously evaluate the performance of the algorithm proposed in this paper, we designed and conducted two sets of experiments in this section. A total of 16 comparative algorithms were selected, utilizing two widely used test suites. Detailed function information for these two test suites can be found in Table 6. Specific details, literature sources, and relevant parameters for all comparative algorithms discussed in Section 5 and Section 6 of this paper are available in Table 7.
The first set of experiments (detailed in Section 5.1) involved a comparative analysis of nine classic optimization algorithms on high-dimensional instances of the CEC2017 test suite. The second set of experiments (detailed in Section 5.2) evaluated seven state-of-the-art and highly relevant high-performance improved algorithms against the MSRIME algorithm on the latest CEC2022 test suite. Finally, to ensure the statistical significance of the evaluation results, we performed a statistical analysis of the average performance data from both experiment sets using the Wilcoxon rank-sum test and the Friedman test (detailed in Section 5.3).
To ensure fairness and reliability, all experiments are conducted under a unified environment: MATLAB 2019a as the software platform and an Intel(R) Core(TM) i7-8750H CPU @ 2.20 GHz as the hardware environment.

5.1. First Set of Experiments (CEC2017)

In this subsection, MSRIME is compared with nine highly cited optimization algorithms. These include traditional classical intelligent optimization algorithms such as Particle Swarm Optimization (PSO) [32], Gravitational Search Algorithm (GSA) [33], and Sparrow Search Algorithm (SSA) [34]. Additionally, high-performance optimization algorithms such as Grey Wolf Optimizer (GWO) [35] and Whale Optimization Algorithm (WOA) [36] are considered. Moreover, the comparison includes recently proposed optimization algorithms post-2021, namely the African Vulture Optimization Algorithm (AVOA) [37], Dung Beetle Optimization Algorithm (DBO) [38], Hunter-Prey Optimization Algorithm (HPO) [39], as well as the original RIME algorithm. The relevant parameters of these comparative algorithms are provided in Table 7.
To comprehensively evaluate the optimization capability of MSRIME, experiments were conducted using the widely adopted CEC 2017 test suite at a dimensionality of D = 100. The population size was set to 50, with a maximum of 500 iterations. Function F2 was excluded, and the remaining 29 functions were tested. The optimization results were analyzed based on the average value (AVG) and standard deviation (STD) from 30 independent runs. The average value reflects the algorithm’s optimization ability, while the standard deviation indicates its stability. Table 8 presents the average values and standard deviations of the experimental results, and the convergence curves are illustrated in Figure 11, where “iteration#” denotes the number of iterations.
According to the experimental data in Table 8, even in the 100-dimensional case, the MSRIME algorithm, integrating multiple improvement strategies, achieves the best performance on most test functions. Although for functions F3, F11, F19, F29, and F30, the mean value of MSRIME does not reach the optimal level, it still demonstrates a significant advantage over the original algorithm, proving the effectiveness of the proposed improvements. Notably, when handling multimodal and hybrid functions, MSRIME exhibits the best average performance across all multimodal functions, highlighting its exceptional ability to escape local optima. Furthermore, for all hybrid functions except F19 and F20, MSRIME also achieves the best mean performance and, in most cases, the lowest standard deviation. This indicates that even when faced with complex problems, MSRIME can effectively approach the global optimum while maintaining high reliability.
To further analyze the algorithm’s iterative process, this study presents the convergence graphs for 12 selected functions, as shown in Figure 11. These 12 functions were chosen from the 29 test functions based on the following criteria: the first and last functions from the unimodal and multimodal categories, and the first two and last two functions from the hybrid and composition categories.
Observing the convergence curves, MSRIME consistently demonstrates a faster convergence speed on most functions. Notably, for functions F1, F10, and F22, MSRIME is able to escape local optima in subsequent iterations, leading to superior solutions. This indicates that the combination of the cosine annealing strategy and the adaptive search strategy effectively extends the algorithm’s exploration range. Furthermore, MSRIME exhibits the strongest convergence on the majority of function curves, which further confirms that the fusion of multiple strategies not only overcomes the limitations of the original algorithm but also significantly enhances optimization efficiency.

5.2. Second Set of Experiments (CEC2022)

This section presents the test analysis for the second set of comparative algorithms, utilizing the latest CEC 2022 test suite. This set specifically includes enhanced versions of the algorithms tested in Section 5.1, with a focus on recently proposed high-performance optimization methods.
The selected comparative algorithms include improved versions of the GWO algorithm: the SOGWO algorithm [40] and the IGWO algorithm [41], as well as an enhanced version of the WOA algorithm: the EWOA algorithm [42]. All these algorithms have been frequently cited in top-tier journals over the past five years. Additionally, we selected algorithms proposed within the last three years that share similarities with the MSRIME algorithm in terms of multi-strategy improvements. These include the Multi-Strategy Whale Optimization Algorithm (MSWOA) [43] and the Multi-Strategy Sparrow Search Algorithm (MISSA) [44]. Finally, to ensure a more targeted comparison with the proposed algorithm, the latest improved versions of the RIME algorithm, namely the IRIME algorithm [45] and the ACGRIME algorithm [46], were also chosen to more accurately assess the performance advantages of the proposed algorithm.
The test results are shown in Table 9. From the table, it can be observed that MSRIME exhibits outstanding performance on the vast majority of test functions, particularly on f1, f4, f5, and f11, where its mean value is significantly better than other algorithms. In terms of standard deviation, MSRIME achieves the smallest values on multiple functions, indicating high solution stability.
Among the comparison algorithms, IGWO, as a highly cited improved version of GWO, performs well on certain functions (e.g., f3), but its overall performance is still slightly inferior to MSRIME. MSWOA, despite also employing multi-strategy improvement techniques, fails to surpass MSRIME in both mean values and standard deviations, with particularly noticeable gaps on f1, f4, and f7. MISSA, as a multi-strategy improved version of the Sparrow Search Algorithm, falls significantly short of MSRIME in overall performance. IRIME and ACGRIME, as improved versions of RIME, perform well on certain functions; in particular, ACGRIME achieves the best mean and standard deviation on f6 and f10. From the overall test results, however, MSRIME attains the optimal mean value on two-thirds of the functions, demonstrating significant advantages in optimizing complex functions. Even on functions where MSRIME does not achieve the best result, the gap between its performance and that of the best-performing algorithm is relatively small. Overall, MSRIME surpasses the other comparison algorithms in stability and optimization depth, making it a more reliable optimization method.
The convergence curves comparing the MSRIME algorithm with other high-performance improved algorithms are shown in Figure 12. From the convergence curves, it can be observed that, except for f10 in (j), MSRIME converges significantly faster than other algorithms in all other figures, achieving the optimal fitness value in the early iteration stage and maintaining stability thereafter. This phenomenon reflects that the introduction of the Fuch mapping and elite strategy control effectively improves the population quality in the early iteration stage, accelerating the convergence speed of the algorithm.
On the convergence curves of functions f4, f5, f6, f7, f10, and f11, the MSRIME algorithm exhibits multiple step-like drops in the mid-iteration stage. Similarly, on function f1, when the curve has already flattened in the late iteration stage, a significant jump-like drop is also observed. These phenomena are typical cases of escaping from the current local optimum, indicating that MSRIME possesses a superior ability to escape local optima. This further demonstrates that the combination of the cosine annealing strategy for adjusting the soft frost search step size and the adaptive search strategy’s dynamic adaptation of the search space effectively prevents the algorithm from getting trapped in local optima.

5.3. Statistical Significance Analysis

To statistically validate whether the optimization results of the improved MSRIME algorithm are superior to other comparative algorithms, we further employed the Wilcoxon signed-rank test and the Friedman test [47] for analysis in this section.
In the Wilcoxon test, three symbols (+/−/=) are used for evaluation: “+” indicates that MSRIME outperforms the other algorithm, “−” signifies that MSRIME’s performance is worse, and “=” denotes similar performance between MSRIME and the other algorithm. The Friedman test is utilized to assess the overall performance of the algorithms, where a lower ranking value indicates better performance.
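As a concrete illustration, both tests are available in SciPy; the sketch below applies them to hypothetical per-run fitness samples (the data are synthetic stand-ins, not the paper’s results, and the 0.05 significance threshold is an assumed convention):

```python
import numpy as np
from scipy.stats import wilcoxon, friedmanchisquare

rng = np.random.default_rng(0)
# Hypothetical final fitness values (minimization): 30 independent runs
# of three algorithms on one benchmark function.
msrime = rng.normal(10.0, 1.0, 30)
rival = rng.normal(12.0, 1.5, 30)
other = rng.normal(11.0, 1.2, 30)

# Paired Wilcoxon signed-rank test at the 0.05 level:
# "+" = MSRIME significantly better, "-" = significantly worse, "=" = similar.
stat, p = wilcoxon(msrime, rival)
if p >= 0.05:
    mark = "="
elif msrime.mean() < rival.mean():
    mark = "+"
else:
    mark = "-"

# Friedman test ranks all algorithms jointly; a lower mean rank is better.
chi2, p_friedman = friedmanchisquare(msrime, rival, other)
```

With clearly separated samples like these, the Wilcoxon comparison yields a “+” and the Friedman test reports a significant difference among the three algorithms.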
Table 10 presents the Wilcoxon signed-rank test results and the average Friedman ranks for both sets of experiments conducted in Section 5.1 and Section 5.2. As shown in the table, the Wilcoxon signed-rank test results demonstrate that MSRIME exhibits a significant advantage on the vast majority of test functions, with “+” outcomes in more than two-thirds of the comparisons. This indicates that MSRIME’s optimization results are statistically superior to those of the other algorithms, showcasing high reliability and stability.
Furthermore, the Friedman test results in Figure 13 and the rankings in Table 10 consistently show that MSRIME achieves the highest rank among all comparative algorithms, illustrating its overall strongest optimization capability. Therefore, based on this statistical analysis, it can be concluded that the proposed MSRIME algorithm outperforms other advanced optimization algorithms.

6. Path Planning for Delivery Robots Based on MSRIME

6.1. Robot Design and Application

As shown in Figure 14, the structural design of the delivery robot combines functionality with practicality. The robot consists of two main parts: the upper section houses the storage compartment, which adopts a modular design that allows flexible capacity adjustment based on the size and quantity of delivery items. The storage compartment is equipped with an intelligent lock system to ensure security during transportation; it also supports user authentication, enabling contactless delivery. The lower section contains the power base, the robot’s mobility core. The power base features high-performance drive motors and a rotating wheel assembly, providing precise motion control and long-lasting endurance so that the robot can navigate complex environments with agility.
As an automated mobile delivery robot, its primary task is to autonomously plan routes and efficiently transport items to designated locations. It can be deployed in various environments, including urban areas, campuses, and factories, catering to different architectural distributions. By offering convenient, efficient, and secure delivery services, the robot helps reduce labor costs and enhance operational efficiency.
The robot’s workflow consists of the following five stages:
(1)
Initialization Stage: The robot departs from a designated starting location and completes system self-checks and position initialization.
(2)
Loading Stage: The robot receives a delivery task, loads the items into the storage compartment, and confirms the item details and destination.
(3)
Path Planning Stage: Based on the current environment and destination, the robot calculates an optimal route from its current position to the target location. The optimal route is determined by considering path length, travel time, energy consumption, and safety factors.
(4)
Delivery Stage: The robot follows the planned route, detecting and avoiding obstacles during transit.
(5)
Arrival Stage: Upon reaching the destination, the robot awaits item retrieval, confirms successful handover, and then returns to its starting point to prepare for the next delivery task.
Among all workflow stages, path planning is the core process that enables autonomous navigation. This crucial task is assigned to the MSRIME algorithm, ensuring efficient and adaptive route optimization for the delivery robot.

6.2. Adaptability Study of MSRIME in Path Planning

In path planning problems, if the map size is h rows and m columns, the problem dimension dim is m − 2. Each solution in the algorithm represents a vector of length dim. The fitness function is defined as shown in Equation (18), which includes the path evaluation function Fl, the turning point evaluation function Fz, and parameters p and q as weights for the two evaluation metrics. These weights can be set according to actual requirements. Bn represents the number of nodes on the path that are located on obstacles. The path evaluation function Fl is defined as the shortest path length, calculated as shown in Equation (19). The turning point evaluation function Fz is defined as the number of turns in the path. The calculation process first determines whether the i-th point on the path has a turn using Equations (21) and (22), and then computes Fz using Equation (20), where n represents the total number of points on the path, including the starting and ending points, and the coordinates of the i-th point are (xi,yi).
$$F = \begin{cases} pF_l + qF_z, & B_n = 0 \\ h\,m\,B_n, & B_n \neq 0 \end{cases} \tag{18}$$

$$F_l = \sum_{i=1}^{n-1} \sqrt{(x_{i+1}-x_i)^2 + (y_{i+1}-y_i)^2} \tag{19}$$

$$F_z = \sum_{i=2}^{n-1} f_z \tag{20}$$

$$f_z = \begin{cases} 1, & f_t(i) \neq f_t(i+1) \\ 0, & f_t(i) = f_t(i+1) \end{cases} \tag{21}$$

$$f_t(i) = \arctan\!\left(\frac{y_i - y_{i-1}}{x_i - x_{i-1}}\right) \tag{22}$$
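The fitness definition in Equations (18)–(22) can be sketched in Python as follows. This is a minimal reading that assumes integer grid coordinates and illustrative default weights p = 1, q = 0.5 (not the paper’s values); `atan2` is used in place of `arctan` so that vertical segments are handled without division by zero.

```python
import math

def path_length(path):
    """F_l: Euclidean length of the polyline path (Eq. (19))."""
    return sum(math.hypot(x2 - x1, y2 - y1)
               for (x1, y1), (x2, y2) in zip(path, path[1:]))

def turn_count(path):
    """F_z: number of heading changes along the path (Eqs. (20)-(22))."""
    def heading(a, b):
        # atan2 instead of arctan(dy/dx): handles vertical moves safely.
        return math.atan2(b[1] - a[1], b[0] - a[0])
    turns = 0
    for i in range(1, len(path) - 1):
        if not math.isclose(heading(path[i - 1], path[i]),
                            heading(path[i], path[i + 1])):
            turns += 1
    return turns

def fitness(path, grid, p=1.0, q=0.5):
    """Eq. (18): weighted sum when feasible, penalty h*m*B_n otherwise."""
    h, m = len(grid), len(grid[0])
    # B_n: number of path nodes that land on obstacle cells (grid[y][x] == 1).
    b_n = sum(1 for (x, y) in path if grid[y][x] == 1)
    if b_n == 0:
        return p * path_length(path) + q * turn_count(path)
    return h * m * b_n
```

For example, on an empty 5 × 5 grid, the path (0,0) → (1,0) → (2,0) → (2,1) has length 3 and one turn; placing an obstacle under its last node switches the fitness to the infeasibility penalty.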
In optimization algorithms for delivery robot path planning, the variation in search step size significantly influences the coordinate changes of individuals during the iteration process, including front-to-back coordinate differences and adjacent dimension differences, thereby affecting the quality of the generated path. The front-to-back coordinate difference reflects the magnitude of an individual’s movement over consecutive iterations, and the search step size directly determines the variation amplitude of this difference. For a given node (xi,yi), its coordinate difference between iterations is denoted as Δd, as shown in Equation (23), while the average front-to-back coordinate difference Δavg is calculated using Equation (24), where old represents the coordinate before iteration, new represents the coordinate after iteration, t denotes the current iteration number, T is the total number of iterations, and N represents the population size. The adjacent dimension difference reflects the numerical difference between adjacent dimensions within the same iteration and is influenced by search mechanisms, initialization strategies, and step size variations. Its average value is computed using Equation (25), where Ri,j (t) represents the coordinate value of the i-th individual in the j-th dimension during the t-th iteration.
As described in Section 2.2 and Section 3.4, the soft frost search step size in the RIME algorithm follows a stepwise variation with multiple step jump points. When a step jump occurs, the magnitude of an individual’s iterative movement significantly increases, directly leading to a larger front-to-back coordinate difference Δd. Meanwhile, at the jump points, the variation amplitude of certain dimensions also increases considerably, causing a rise in the average adjacent dimension difference Δr, thereby increasing the number of turning points in the path and making the path more tortuous.
$$\Delta d_i(t) = \bigl|x_i^{new}(t) - x_i^{old}(t)\bigr| + \bigl|y_i^{new}(t) - y_i^{old}(t)\bigr| \tag{23}$$

$$\Delta_{avg} = \frac{1}{2NT} \sum_{t=1}^{T} \sum_{i=1}^{N} \Delta d_i(t) \tag{24}$$

$$\Delta_r = \frac{\sum_{t=1}^{T} \sum_{i=1}^{N} \sum_{j=2}^{\mathrm{dim}} \bigl|R_{i,j}(t) - R_{i,j-1}(t)\bigr|}{T(\mathrm{dim} - 1)N} \tag{25}$$
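The two diagnostics in Equations (23)–(25) can be computed directly from logged population positions. A NumPy sketch, assuming a hypothetical logging format in which positions are recorded as `(T, N, ...)` arrays:

```python
import numpy as np

def delta_avg(old_pos, new_pos):
    """Eqs. (23)-(24): average front-to-back coordinate difference.

    old_pos, new_pos: arrays of shape (T, N, 2) holding each node's (x, y)
    before and after every iteration (assumed logging layout).
    """
    d = np.abs(new_pos - old_pos).sum(axis=-1)  # Delta_d_i(t), Eq. (23)
    T, N = d.shape
    return d.sum() / (2 * N * T)

def delta_r(R):
    """Eq. (25): average adjacent-dimension difference.

    R: array of shape (T, N, dim), with R[t, i, j] the j-th coordinate of
    individual i at iteration t.
    """
    T, N, dim = R.shape
    return np.abs(np.diff(R, axis=-1)).sum() / (T * (dim - 1) * N)
```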
By introducing the Fuch chaotic mapping, controlled elite strategy, and adaptive search strategy, the MSRIME algorithm significantly enhances early-stage search capability and overall optimization performance, effectively improving the global optimality of path planning and better meeting the practical requirements of delivery robot applications. Furthermore, to address the issue in the RIME algorithm where stepwise changes in step size may degrade path quality, MSRIME adopts a cosine annealing strategy, ensuring a smooth and continuous variation of step size during iterations. This prevents the increase in front-to-back coordinate differences and adjacent dimension differences caused by abrupt step jumps, thereby enhancing the stability and quality of path planning.
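A minimal sketch of a cosine annealing step-size schedule of the kind described, with no jump points; the bounds `step_max` and `step_min` are illustrative assumptions, not the paper’s parameter values:

```python
import math

def cosine_annealed_step(t, T, step_max=1.0, step_min=0.0):
    """Step size decays smoothly and monotonically from step_max (t = 0)
    to step_min (t = T) along a half cosine, avoiding the stepwise jumps
    of the original soft-rime schedule."""
    return step_min + 0.5 * (step_max - step_min) * (1 + math.cos(math.pi * t / T))
```

The schedule starts at `step_max`, passes through the midpoint at t = T/2, and ends at `step_min`, so consecutive iterations never see an abrupt change in step size.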
To validate the effectiveness of these improvements and further analyze the applicability of MSRIME compared to RIME in the field of path planning, this study conducts six path planning experiments using RIME and MSRIME algorithms on a randomly generated grid map (as described in Section 6.3). In the experiments, the average front-to-back coordinate difference Δavg and the average adjacent dimension difference Δr are computed using Equations (24) and (25), respectively, while path length L and the number of turning points z are obtained using Equations (19) and (20). The experimental results are shown in Figure 15, with relevant statistical data summarized in Table 11.
From the experimental results, in the six tests conducted, the path length L obtained by the MSRIME algorithm was superior or equal to that of the RIME algorithm in five tests, while the number of turning points z was superior or equal in four tests. The average front-to-back coordinate difference Δavg was superior or equal to RIME’s in four tests, and the average adjacent dimension difference Δr was superior or equal in five tests. This indicates that MSRIME can generate smoother, higher-quality paths in most cases.
Notably, in experiments b, c, and e, smaller values of Δavg and Δr were indeed more conducive to producing smooth, high-quality paths. These findings suggest that although the average front-to-back coordinate difference and the average adjacent dimension difference do not directly determine the path length L and the number of turning points z, they do exert a certain influence on these metrics. Smaller values of Δavg and Δr contribute to more uniform changes in path nodes, reducing abrupt turning points and enhancing both path smoothness and global optimality.
Overall, the MSRIME algorithm not only inherits the high computational efficiency of the RIME algorithm but also introduces a single control factor, further simplifying the implementation of multiple strategies and enhancing the algorithm’s operability and stability. More importantly, MSRIME demonstrates outstanding overall optimization capability, exhibiting significant advantages over the RIME algorithm in key metrics such as global optimality in path planning, path smoothness, and path length. These results preliminarily validate MSRIME’s superior adaptability in delivery robot path planning applications, indicating that this method can generate high-quality paths more effectively.
Furthermore, the experiments in Section 6.4 will further validate these conclusions and explore the application potential of the MSRIME algorithm in delivery robot path planning, aiming to comprehensively evaluate its actual performance in different environments.

6.3. Environment Setup

In this experimental setup, grid-based modeling is employed for environment representation. This method divides the environment into a series of uniform grid cells, facilitating path planning and analysis. Each cell in the grid map represents a specific location in the environment and is labeled according to its traversability: traversable cells are marked in white, while non-traversable cells (occupied by buildings) are marked in black. Grid cells are indexed systematically according to their coordinates, following a left-to-right, bottom-to-top ordering. This systematic indexing provides a clear structure for path planning, ensuring efficient navigation.
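The left-to-right, bottom-to-top indexing can be expressed as a simple pair of conversions; the 0-based index starting at the bottom-left cell is an assumed convention:

```python
def cell_index(x, y, cols):
    """Index of cell (x, y) on a grid numbered left-to-right, bottom-to-top,
    starting from 0 at the bottom-left corner."""
    return y * cols + x

def cell_coords(idx, cols):
    """Inverse mapping: grid index back to (x, y)."""
    return idx % cols, idx // cols
```

For a 20-column map, cell (3, 2) receives index 43, and the mapping is exactly invertible.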
To enhance the realism of the simulation, the delivery robot’s map is designed based on building density and Building Coverage Ratio (BCR). The BCR is defined as the ratio of building area to land area and is commonly used to assess urban building scale and planning scope [48].
Taking the United States as an example, Soliman et al. [49] conducted a statistical analysis of the geographical distribution of BCR across various regions in the U.S. and provided detailed BCR information for the city of Chicago. The results show that the BCR in Chicago ranges from 0% to 83%, with over 75% of the areas having a BCR between 7% and 26%. Therefore, the BCR for the first map in this experiment is set within the range of 7% to 26% to reflect the most common building density. Additionally, Le et al. [50] studied the building density and BCR in highly urbanized areas, finding that the BCR in such regions varies widely, ranging from 40% to 80%. However, Lau et al. [51] pointed out that every 10% increase in building coverage can lead to a temperature rise of 0.28 °C, and a BCR exceeding 50% can impose significant burdens on the urban environment, traffic, and energy consumption. As a result, with the optimization of urban planning, new cities tend to reduce building coverage to mitigate the heat island effect and improve the urban environment. Particularly in path planning experiments, if the BCR is too high, the number of feasible paths will significantly decrease, which is unfavorable for testing algorithm performance, especially in small-scale maps. Based on this, the BCR for the second map in this experiment is set to be higher than 26% but below 50%, with the BCR for small-scale maps slightly lower than that for large-scale maps.
To comprehensively evaluate the path planning capability of the MSRIME algorithm in various delivery environments, four simulated environments with different building densities and map scales were designed based on the above analysis. The detailed specifications are presented in Table 12.
The experiment was designed with two typical regions: general areas and urbanized areas. In general areas, obstacles account for 26% of the total area and are randomly distributed. In contrast, in urbanized areas, the number of obstacles significantly increases to 40%, with a more complex layout, simulating intricate urban road conditions. The experiment maps are categorized into two scales: small-scale maps and large-scale maps. The small-scale map has a size of 20 × 20, with the starting point at (0,0) and the destination at (20,20). The large-scale map has a size of 40 × 40, with the starting point at (0,0) and the destination at (40,40). To reflect the difference in obstacle density across map scales, the obstacle proportion in small-scale maps is 5% lower than that in large-scale maps.
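A map of the kind described can be generated by sampling obstacle cells at random until the target BCR is met. A hedged sketch — the uniform placement model and the freeing of the start and goal cells are assumptions; the paper’s maps may be constructed differently:

```python
import random

def random_grid(rows, cols, bcr, seed=None):
    """Random grid map with a target Building Coverage Ratio.

    Obstacles are placed uniformly at random (assumed model); the start
    (bottom-left) and goal (top-right) cells are kept free.
    Returns grid[y][x] with 1 = obstacle (black), 0 = free (white).
    """
    rng = random.Random(seed)
    grid = [[0] * cols for _ in range(rows)]
    n_obstacles = int(round(bcr * rows * cols))
    candidates = [(x, y) for y in range(rows) for x in range(cols)
                  if (x, y) not in {(0, 0), (cols - 1, rows - 1)}]
    for x, y in rng.sample(candidates, n_obstacles):
        grid[y][x] = 1
    return grid
```

For a 20 × 20 general-area map with BCR = 0.26, this yields 104 obstacle cells while guaranteeing the start and goal remain traversable.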
To comprehensively evaluate path planning performance, the metrics of path length L (Equation (19)), execution time (denoted time), and number of path turning points z (Equation (20)) are adopted to assess energy consumption, decision-making efficiency, path feasibility, and algorithm superiority. A shorter path length and fewer turning points indicate higher path quality.
In this experiment, the comparison algorithms include the classical algorithms from Section 4, namely GWO, RIME, and DBO; the high-performance improved algorithms SOGWO and IRIME; as well as the optimization algorithms specifically designed for robot path planning, FSA and ISSA, as mentioned in references [10,13]. The number of experiments is set to 30, and the remaining parameters remain consistent with Section 5.
To quantify the performance improvement of MSRIME over the original RIME algorithm, this study calculates the performance variation rate α, which serves as a measure of the overall optimization efficiency of the improved algorithm. The specific calculation method for α is given in Equation (26), where Q represents a performance indicator such as L, time, or z; QMSRIME denotes the experimental result of the MSRIME algorithm for the corresponding indicator, while QRIME represents that of the RIME algorithm. A larger α value indicates a more significant performance improvement.
$$\alpha_Q = \frac{\bigl|Q_{MSRIME} - Q_{RIME}\bigr|}{Q_{RIME}} \times 100\% \tag{26}$$
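Read as code, Equation (26) amounts to a one-line relative-difference computation; the absolute value here is an assumption inferred from the positive rates reported in Figure 18, since both L and z are lower-is-better metrics:

```python
def performance_change_rate(q_msrime, q_rime):
    """Eq. (26) sketch: magnitude of MSRIME's change relative to RIME,
    as a percentage (absolute relative difference; assumed reading)."""
    return abs(q_msrime - q_rime) / q_rime * 100.0
```

For instance, an average path length of 94.7 for MSRIME against 100.0 for RIME gives α = 5.3%.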

6.4. Results Presentation

The experimental results for the small-scale maps are shown in Figure 16, with the corresponding summary of various performance indicators provided in Table 13. From the analysis of the images and tables, it is evident that different algorithms exhibit varying path planning strategies, regardless of whether they are applied to ordinary maps or urban maps.
In terms of final results, the MSRIME algorithm achieves the best optimal path length across all tested environments. Although it does not achieve the shortest average path length on the urban map, it is only 0.4728 units longer than the first-ranked ISSA algorithm, a minor gap. Notably, on the small-scale urban map, MSRIME achieves the best performance on all indicators except execution time, and even there the difference between MSRIME and the fastest algorithm is only 0.0622 s, demonstrating its competitive efficiency.
From the overall experimental results, MSRIME exhibits significant advantages in multiple aspects of path planning tasks, including path quality and planning efficiency.
The experimental results for the large-scale maps are shown in Figure 17, with the corresponding summary of various performance indicators provided in Table 14. Due to the lower building coverage ratio (BCR) in the ordinary map, the obstacle distribution is relatively sparse, and the robot rarely encounters complex impassable segments. As a result, the path planning results in fewer turning points. In contrast, in the urban map, where the BCR is higher, there are fewer passable routes and denser obstacles, requiring the robot to navigate around many obstacles, leading to a significant increase in turning points during path planning.
From the analysis of images and tables, it is clear that the MSRIME algorithm achieves the best results in terms of average path length in both the ordinary and urban maps. It also performs best in terms of optimal path length in the ordinary map. While the MSRIME algorithm’s optimal path length in the urban map is slightly worse than that of the ISSA algorithm, it significantly outperforms the ISSA algorithm in terms of execution time and path smoothness. This indicates that the MSRIME algorithm can generate smoother paths in less time, reducing the number of turns the robot has to make, which in turn improves motion efficiency and reduces energy consumption.
In conclusion, even when facing complex and variable road conditions in the large-scale map, the MSRIME algorithm is able to efficiently find optimal paths. Its outstanding global search ability, fast convergence performance, and good path smoothness give it a significant advantage in path planning tasks in complex environments.
Since the MSRIME algorithm introduces additional computational complexity compared to the RIME algorithm to enhance its optimization capability, its execution time does not show a significant advantage over RIME. Across the four sets of experiments, the execution time of MSRIME remains within ±0.1 s of that of RIME, indicating a minimal difference in computational efficiency. Thus, when calculating the performance change rate (α), only optimal path length, average path length, and number of path turning points were selected as evaluation metrics. The final results are presented in Figure 18.
From Figure 18, it can be observed that all three indicators show a positive change across all four maps, confirming that the overall planning capability of MSRIME has significantly improved relative to RIME. Specifically, the average path length improved by 5.8%, 5.3%, 5.1%, and 12.6% on the four maps, while the number of path turning points was reduced by 20%, 50%, 26%, and 27.8%, respectively. This indicates that under the guidance of the MSRIME algorithm, the robot can quickly adapt to different environments, plan smoother paths with fewer turning points, and minimize path length, leading to significant reductions in energy consumption.
Notably, the MSRIME algorithm excels in reducing path turning points. In the small-scale urban map, the number of turning points decreased by 50%, demonstrating effective turning point optimization, which directly enhances the robot’s overall movement efficiency. This result further reinforces the practical applicability of the MSRIME algorithm in path planning tasks.

6.5. Additional Tests for Multi-Scenario Application

Building upon the systematic validation of the delivery robot’s path planning capabilities across various BCR scenarios in Section 6.4, this section designs four distinct and functionally differentiated test scenarios for additional experiments, aiming to further investigate the algorithm’s adaptability in special environments.
Scenario 1 simulates a 20 × 20 warehouse environment, constructing a shelf-shuttling situation with five standard shelves and randomly distributed small obstacles, primarily examining the robot’s ability to navigate around shelves. Scenario 2 replicates a loop-shaped corridor structure commonly found in buildings such as schools or hotels, with navigation difficulty increased through random obstacle placement. Scenario 3 constructs a large, winding passage simulating the complex road networks within logistics parks, with starting coordinates (0, 10) and ending coordinates (40, 30). Scenario 4 deploys large elliptical obstacles on a 40 × 40 map, simulating impassable barriers in natural environments such as ponds, rock formations, or dense thickets.
Through this rigorously designed set of comparative experiments, we evaluate the robust path planning performance of the MSRIME algorithm in multimodal special scenarios. The experimental results are presented in Figure 19 and Table 15.
Analysis of the experimental data from Figure 19 and Table 15 reveals that across the four test scenarios, Scenarios 1 and 2 exhibited relatively minor performance differences among the algorithms due to the limited traversable paths. Nevertheless, the MSRIME algorithm secured a close second place in Scenario 2 for average path length, while maintaining a leading edge in crucial metrics like optimal path length and number of turns.
Particularly in the tests for Scenarios 3 and 4, although the MSRIME algorithm was slightly inferior to the DBO and FSA algorithms in terms of the number of turns in Scenario 4, its performance in the two core metrics—average path length and optimal path length—significantly surpassed both algorithms, fully demonstrating its performance advantage.
Regarding computational efficiency, the MSRIME algorithm retained the high-efficiency characteristics of the RIME algorithm, introducing only an additional time overhead of less than 0.2 s while achieving a comprehensive improvement across all performance indicators.
Cumulatively, across all four scenarios, the MSRIME algorithm achieved optimal performance in 75% of the test scenarios for the three key metrics: optimal path length, average path length, and number of turns. This result powerfully validates the algorithm’s outstanding adaptability and robustness in multi-scenario path planning tasks.

7. Conclusions

To address the prevalent issues of local optima, low computational efficiency, and poor adaptability in complex dynamic environments often encountered by traditional path planning algorithms, this paper proposes a Multi-Strategy Controlled Rime Optimization Algorithm (MSRIME). Unlike existing mainstream multi-strategy improvement algorithms, MSRIME innovatively presents a multi-strategy collaborative framework that relies on a control factor to coordinate and complement various strategies. This framework incorporates multiple effective experiments, such as chaotic characteristic analysis and population variation analysis, in the selection of individual strategies, thereby achieving a performance enhancement where the whole is greater than the sum of its parts.
In simulation experiments, MSRIME’s significant advantages were demonstrated through optimization comparison experiments and statistical analysis. Subsequently, this paper delves into the impact of coordinate differences between successive nodes and differences in adjacent dimensions within multi-dimensional space on path quality, verifying that the MSRIME algorithm can effectively mitigate these effects, thereby improving path quality. Finally, in the path planning experiments, to better align with real-world obstacle distributions, this paper analyzed the Building Coverage Ratio (BCR) design ranges of mainstream architectural environments and, for the first time, applied them to the map settings for delivery robot path planning simulation experiments. The results confirmed MSRIME’s advantages in key metrics such as path length, runtime, and path smoothness, proving its effectiveness in planning shorter, smoother, and more energy-efficient routes. This further substantiates MSRIME’s reliability in supporting path planning for delivery robots in practical applications.
The multi-strategy control framework proposed in this paper, along with the analysis of the impact of coordinate differences between successive nodes and differences in adjacent dimensions within multi-dimensional space on path quality, not only offers an alternative algorithm for smart logistics but also provides new insights for future bionic algorithm improvements and advancements in the field of path planning. However, this study still has certain limitations: first, global path planning heavily relies on precise environmental maps, which may lead to performance degradation in real-world applications due to map update delays or sensor errors; second, this paper exclusively focuses on two-dimensional path planning, without considering multi-level buildings or complex three-dimensional terrains, thus limiting the validation of the algorithm’s scalability; third, experiments were conducted solely in a simulated environment, lacking verification on physical hardware platforms, which means real-world deployment performance remains untested. Future research will address these limitations by exploring MSRIME’s applications in dynamic environments, 3D spaces, and actual hardware platforms, thereby further enhancing its practicality and robustness.

Author Contributions

Methodology and overall research plan, H.L.; data and figure curation, H.L., Q.Q. and J.P.; original draft writing preparation, H.L., Q.Q. and M.S.; proofreading and editing, Y.F. and Y.L.; revision and funding acquisition, Q.Q., Y.F. and Y.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the National Natural Science Foundation of China (32060193 to Q.Q., 62062047 to Y.F.), the Foundation of Yunnan Key Laboratory of Computer Technology Applications, the Xingdian Talents Support Program of Yunnan Province (XDYC-QNRC-2022-0149) to Q.Q., and the Yunnan High-Level Science and Technology Talents and Innovation Team Selection Special Project (202405AS350001) to Y.L.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data that support the findings of this study are available from the corresponding author upon request. There are no restrictions on data availability.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Virk, G.S. Industrial mobile robots: The future. Ind. Robot. Int. J. 1997, 24, 102–105. [Google Scholar] [CrossRef]
  2. Lehtinen, H.; Kaarmila, P.; Blom, M.; Kauppi, I.; Kerva, J. Mobile robots evolving in industrial applications. In Proceedings of the International Symposium on Robotics, Montreal, QC, Canada, 14–17 May 2000; Volume 31, pp. 96–101. [Google Scholar]
  3. Pransky, J. Mobile robots: Big benefits for US military. Ind. Robot. Int. J. 1997, 24, 126–130. [Google Scholar] [CrossRef]
  4. Sun, Z.; Yang, H.; Ma, Y.; Wang, X.; Mo, Y.; Li, H.; Jiang, Z. BIT-DMR: A humanoid dual-arm mobile robot for complex rescue operations. IEEE Robot. Autom. Lett. 2021, 7, 802–809. [Google Scholar] [CrossRef]
  5. Nouri, H.E.; Driss, O.B.; Ghédira, K. Hybrid metaheuristics for scheduling of machines and transport robots in job shop environment. Appl. Intell. 2016, 45, 808–828. [Google Scholar] [CrossRef]
  6. Kim, M.; Kim, S.; Park, S.; Choi, M.-T.; Kim, M.; Gomaa, H. Service robot for the elderly. IEEE Robot. Autom. Mag. 2009, 16, 34–45. [Google Scholar] [CrossRef]
  7. Kang, H.; Li, H.; Zhang, J.; Lu, X.; Benes, B. Flycam: Multitouch gesture controlled drone gimbal photography. IEEE Robot. Autom. Lett. 2018, 3, 3717–3724. [Google Scholar] [CrossRef]
  8. Tao, B.; Kim, J.H. Mobile robot path planning based on bi-population particle swarm optimization with random perturbation strategy. J. King Saud Univ.-Comput. Inf. Sci. 2024, 36, 101974. [Google Scholar] [CrossRef]
  9. Miao, C.; Chen, G.; Yan, C.; Wu, Y. Path planning optimization of indoor mobile robot based on adaptive ant colony algorithm. Comput. Ind. Eng. 2021, 156, 107230. [Google Scholar] [CrossRef]
  10. Wang, Z.; Liu, J. Flamingo Search Algorithm and Its Application to Path Planning Problem. In Proceedings of the 2021 4th International Conference on Artificial Intelligence and Pattern Recognition, Chongqing, China, 21–23 August 2021; pp. 567–573. [Google Scholar]
  11. Wolpert, D.H.; Macready, W.G. No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1997, 1, 67–82. [Google Scholar] [CrossRef]
  12. Li, Y.; Huang, Z.; Xie, Y. Path planning of mobile robot based on improved genetic algorithm. In Proceedings of the 2020 3rd International Conference on Electron Device and Mechanical Engineering (ICEDME), Suzhou, China, 1–3 May 2020; pp. 691–695. [Google Scholar]
  13. Liu, L.; Liang, J.; Guo, K.; Ke, C.; He, D.; Chen, J. Dynamic path planning of mobile robot based on improved sparrow search algorithm. Biomimetics 2023, 8, 182. [Google Scholar] [CrossRef] [PubMed]
  14. Zhang, W.; Zhang, H.; Zhang, X. An enhanced dung beetle optimizer with adaptive node selection and dynamic step search for mobile robots path planning. Meas. Sci. Technol. 2025, 36, 036301. [Google Scholar] [CrossRef]
  15. Su, H.; Zhao, D.; Heidari, A.A.; Liu, L.; Zhang, X.; Mafarja, M.; Chen, H. RIME: A physics-based optimization. Neurocomputing 2023, 532, 183–214. [Google Scholar] [CrossRef]
  16. Karaboga, D.; Basturk, B. A powerful and efficient algorithm for numerical function optimization: Artificial bee colony (ABC) algorithm. J. Glob. Optim. 2007, 39, 459–471. [Google Scholar] [CrossRef]
  17. Mirjalili, S. Moth-flame optimization algorithm: A novel nature-inspired heuristic paradigm. Knowl.-Based Syst. 2015, 89, 228–249. [Google Scholar] [CrossRef]
  18. Ismaeel, A.A.K.; Houssein, E.H.; Khafaga, D.S.; Aldakheel, E.A.; Said, M. Performance of rime-ice algorithm for estimating the PEM fuel cell parameters. Energy Rep. 2024, 11, 3641–3652. [Google Scholar] [CrossRef]
  19. Abdel-Salam, M.; Hu, G.; Çelik, E.; Gharehchopogh, F.S.; El-Hasnony, I.M. Chaotic RIME optimization algorithm with adaptive mutualism for feature selection problems. Comput. Biol. Med. 2024, 179, 108803. [Google Scholar] [CrossRef] [PubMed]
  20. Fu, W.; Ling, C. An Adaptive Iterative Chaos Optimization Method. J. Xi’an Jiaotong Univ. 2013, 47, 33–38. [Google Scholar]
  21. Gao, P.; Ding, H.Q.; Xu, R. Whale optimization algorithm based on skew tent chaotic map and nonlinear strategy. Acad. J. Comput. Inform. Sci. 2021, 4, 91–97. [Google Scholar]
  22. Duan, B.; Ma, Y.; Liu, J.; Jin, Y. Nonlinear Grey Wolf Optimization Algorithm Based on Chaotic Mapping and Reverse Learning Mechanism. Softw. Eng. 2023, 26, 36–40. [Google Scholar]
  23. Wang, B.; Zhang, Z.; Siarry, P.; Liu, X.; Królczyk, G.; Hua, D.; Brumercik, F.; Li, Z. A nonlinear African vulture optimization algorithm combining Henon chaotic mapping theory and reverse learning competition strategy. Expert Syst. Appl. 2024, 236, 121413. [Google Scholar] [CrossRef]
  24. Bucolo, M.; Buscarino, A.; Fortuna, L.; Gagliano, S. Multidimensional discrete chaotic maps. Front. Phys. 2022, 10, 862376. [Google Scholar] [CrossRef]
  25. Chang, H.; Lu, X.; Han, J.; Tang, T. Research on Chaotic SPWM Strategy Based on Fuch Map. In Proceedings of the 2023 IEEE PELS Students and Young Professionals Symposium (SYPS), Shanghai, China, 27–29 August 2023; pp. 1–5. [Google Scholar]
  26. Wolf, A. Quantifying chaos with Lyapunov exponents. Chaos 1986, 16, 285–317. [Google Scholar]
  27. Suganthan, P.N.; Hansen, N.; Liang, J.; Deb, K.; Chen, Y.P.; Auger, A.; Tiwari, S. Problem definitions and evaluation criteria for the CEC 2005 special session on real-parameter optimization. KanGAL Rep. 2005, 2005005, 2005. [Google Scholar]
  28. Loshchilov, I.; Hutter, F. Sgdr: Stochastic gradient descent with warm restarts. arXiv 2016, arXiv:1608.03983. [Google Scholar]
  29. Li, X.; Zhuang, Z.; Orabona, F. A second look at exponential and cosine step sizes: Simplicity, adaptivity, and performance. In Proceedings of the International Conference on Machine Learning, Vienna, Austria, 18–24 July 2021; pp. 6553–6564. [Google Scholar]
  30. Li, W.; Yang, X.; Yin, Y.; Wang, Q. A Novel Hybrid Improved RIME Algorithm for Global Optimization Problems. Biomimetics 2024, 10, 14. [Google Scholar] [CrossRef] [PubMed]
  31. Nadimi-Shahraki, M.H.; Zamani, H. DMDE: Diversity-maintained multi-trial vector differential evolution algorithm for non-decomposition large-scale global optimization. Expert Syst. Appl. 2022, 198, 116895. [Google Scholar] [CrossRef]
  32. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95-International Conference on Neural Networks, Perth, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948. [Google Scholar]
  33. Rashedi, E.; Nezamabadi-Pour, H.; Saryazdi, S. GSA: A gravitational search algorithm. Inf. Sci. 2009, 179, 2232–2248. [Google Scholar] [CrossRef]
  34. Xue, J.; Shen, B. A novel swarm intelligence optimization approach: Sparrow search algorithm. Syst. Sci. Control. Eng. 2020, 8, 22–34. [Google Scholar] [CrossRef]
  35. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  36. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  37. Abdollahzadeh, B.; Gharehchopogh, F.S.; Mirjalili, S. African vultures optimization algorithm: A new nature-inspired metaheuristic algorithm for global optimization problems. Comput. Ind. Eng. 2021, 158, 107408. [Google Scholar] [CrossRef]
  38. Xue, J.; Shen, B. Dung beetle optimizer: A new meta-heuristic algorithm for global optimization. J. Supercomput. 2023, 79, 7305–7336. [Google Scholar] [CrossRef]
  39. Naruei, I.; Keynia, F.; Molahosseini, A.S. Hunter–prey optimization: Algorithm and applications. Soft Comput. 2022, 26, 1279–1314. [Google Scholar] [CrossRef]
  40. Dhargupta, S.; Ghosh, M.; Mirjalili, S.; Sarkar, R. Selective opposition based grey wolf optimization. Expert Syst. Appl. 2020, 151, 113389. [Google Scholar] [CrossRef]
  41. Nadimi-Shahraki, M.H.; Taghian, S.; Mirjalili, S. An improved grey wolf optimizer for solving engineering problems. Expert Syst. Appl. 2021, 166, 113917. [Google Scholar] [CrossRef]
  42. Nadimi-Shahraki, M.H.; Zamani, H.; Mirjalili, S. Enhanced whale optimization algorithm for medical feature selection: A COVID-19 case study. Comput. Biol. Med. 2022, 148, 105858. [Google Scholar] [CrossRef] [PubMed]
  43. Yang, W.; Xia, K.; Fan, S.; Wang, L.; Li, T.; Zhang, J.; Feng, Y. A multi-strategy whale optimization algorithm and its application. Eng. Appl. Artif. Intell. 2022, 108, 104558. [Google Scholar] [CrossRef]
  44. Chen, G.; Zhu, D.; Chen, X. Similarity detection method of science fiction painting based on multi-strategy improved sparrow search algorithm and Gaussian pyramid. Multimed. Tools Appl. 2024, 83, 41597–41636. [Google Scholar] [CrossRef]
  45. Peng, Q.; Wang, X.; Tang, A. Feature selection for intrusion detection based on an improved rime optimization algorithm. Mol. Cell. Biomech. 2024, 21, 599. [Google Scholar] [CrossRef]
  46. Batis, M.; Chen, Y.; Wang, M.; Liu, L.; Heidari, A.A.; Chen, H. ACGRIME: Adaptive chaotic Gaussian RIME optimizer for global optimization and feature selection. Clust. Comput. 2025, 28, 61. [Google Scholar] [CrossRef]
  47. Hollander, M.; Wolfe, D.A. Nonparametric Statistical Methods: Solutions Manual to Accompany; John Wiley and Sons: Hoboken, NJ, USA, 1999. [Google Scholar]
  48. Schläpfer, M.; Lee, J.; Bettencourt, L. Urban skylines: Building heights and shapes as measures of city size. arXiv 2015, arXiv:1512.00946. [Google Scholar] [CrossRef]
  49. Soliman, A.; Mackay, A.; Schmidt, A.; Allan, B.; Wang, S. Quantifying the geographic distribution of building coverage across the US for urban sustainability studies. Comput. Environ. Urban Syst. 2018, 71, 199–208. [Google Scholar] [CrossRef]
  50. Le, Q.H.; Shin, H.; Kwon, N.; Ho, J.; Ahn, Y. Deep learning based urban building coverage ratio estimation focusing on rapid urbanization areas. Appl. Sci. 2022, 12, 11428. [Google Scholar] [CrossRef]
  51. Lau, T.K.; Lin, T.P. Investigating the relationship between air temperature and the intensity of urban development using on-site measurement, satellite imagery and machine learning. Sustain. Cities Soc. 2024, 100, 104982. [Google Scholar] [CrossRef]
Figure 1. Variation Process of Coefficient E.
Figure 2. The process of change in the β of environmental factors.
Figure 3. Population Distribution Diagram.
Figure 4. The variation process of the control factor.
Figure 5. Evolution of the Controlled Elite Population. (a) Early-stage population without the controlled elite strategy. (b) Mid-stage population without the controlled elite strategy. (c) Late-stage population without the controlled elite strategy. (d) Early-stage population with the controlled elite strategy. (e) Mid-stage population with the controlled elite strategy. (f) Late-stage population with the controlled elite strategy.
Figure 6. Cosine Annealing Image.
Figure 7. Parameter analysis experiment. (a) a1, (b) a2, (c) a3.
Figure 8. Framework of Multi-Strategy Coordination and Integration.
Figure 9. Convergence behavior analysis: (a) shows the functional characteristic diagram, (b) displays the search trajectory, (c) presents the variation trend of average fitness values, (d) illustrates the variation trend along the first dimension, and (e) demonstrates the changes in the optimal solution with iteration count.
Figure 10. Population Diversity Analysis: (a) displays the function characteristic diagram, (b) shows the search trajectory after 1 iteration, (c) presents the search trajectory after 100 iterations, (d) illustrates the search trajectory after 250 iterations, (e) demonstrates the search trajectory after 500 iterations, and (f) provides the population diversity analysis.
Figure 11. CEC2017 Test Convergence Curves: (a–l) represent the convergence curves of functions F1, F3, F4, F10, F11, F12, F19, F20, F21, F22, F29, and F30, respectively.
Figure 12. CEC2022 Test Convergence Curves: (a–l) present the convergence curves of functions f1–f12.
Figure 13. Friedman Test Results of Classical Algorithms. (a) CEC 2017. (b) CEC 2022.
Figure 14. Construction of Delivery Robot.
Figure 15. Path Planning Results: (af) sequentially display the path planning experimental results from the first to the sixth trial.
Figure 16. Experimental results of small-scale map path planning. (a) Small ordinary map. (b) Small urbanization Map.
Figure 17. Experimental results of large-scale map path planning. (a) Large ordinary map. (b) Large urbanization map.
Figure 18. Performance change rate results. (a) Small ordinary map. (b) Small urbanization map. (c) Large ordinary map. (d) Large urbanization map.
Figure 19. Additional test results. (a) Scenario 1. (b) Scenario 2. (c) Scenario 3. (d) Scenario 4.
Table 1. Statistical Analysis of Chaotic Mappings.
Name | Dimension | Frequency of Occurrence
Logistic | one-dimensional | 204
Tent | one-dimensional | 226
Chebyshev | one-dimensional | 25
Henon | multi-dimensional | 32
Circle | one-dimensional | 41
Cubic | one-dimensional | 32
Kent | one-dimensional | 14
Piecewise | one-dimensional | 20
Lozi | multi-dimensional | 5
Fuch | one-dimensional | 7
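As a rough illustration of how one of the one-dimensional maps tallied above could seed an optimizer’s population, the sketch below iterates the Fuch map, commonly written as x(n+1) = cos(1/x(n)²), and rescales its values to the search bounds. Both the map form and the scaling step are assumptions for illustration, not the paper’s exact initialization code.

```python
import math

def fuch_population(pop_size, dim, lb, ub, x0=0.3):
    """Seed a (pop_size x dim) population with a Fuch chaotic sequence.

    The map x(n+1) = cos(1 / x(n)^2) produces values in [-1, 1], which are
    then linearly rescaled to the search bounds [lb, ub].
    """
    x = x0
    population = []
    for _ in range(pop_size):
        row = []
        for _ in range(dim):
            x = math.cos(1.0 / (x * x))              # chaotic update
            row.append(lb + (x + 1.0) / 2.0 * (ub - lb))  # [-1,1] -> [lb,ub]
        population.append(row)
    return population

pop = fuch_population(pop_size=30, dim=10, lb=-100.0, ub=100.0)
```

The point of the chaotic seed is that consecutive draws are deterministic yet non-repeating, spreading initial solutions across the feasible region more evenly than naive pseudo-random sampling with a poor seed.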
Table 2. Initial Value Sensitivity Analysis.
Initial Value | Variation Range | Logistic/% | Tent/% | Henon/% | Fuch/%
0.13 | 10^−6 | 26.25 | 28.43 | 18.69 | 48.33
0.15 | 10^−6 | 26.00 | 34.37 | 18.97 | 37.51
0.18 | 10^−6 | 28.13 | 21.88 | 21.47 | 43.50
0.20 | 10^−6 | 26.25 | 28.13 | 17.72 | 34.38
0.23 | 10^−6 | 37.50 | 33.56 | 13.97 | 28.13
0.27 | 10^−6 | 31.25 | 34.37 | 16.53 | 37.50
Table 3. Lyapunov Exponent Analysis.
Chaotic Mapping | Logistic | Tent | Fuch | Henon
Lyapunov value | 0.6941 | 0.7646 | 2.8661 | 0.4312
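For context, the largest Lyapunov exponent of a one-dimensional map x(n+1) = f(x(n)) can be estimated as the long-run average of ln|f′(x(n))| along an orbit [26]. A minimal sketch for the Logistic map x(n+1) = 4x(1 − x), whose exact exponent ln 2 ≈ 0.693 matches the Logistic entry in the table; the iteration counts are illustrative choices.

```python
import math

def logistic_lyapunov(x0=0.2, transient=1000, n=100_000):
    """Estimate the Lyapunov exponent of x -> 4x(1-x) by averaging ln|f'(x)|."""
    x = x0
    for _ in range(transient):            # discard transient iterations
        x = 4.0 * x * (1.0 - x)
    total = 0.0
    for _ in range(n):
        total += math.log(abs(4.0 * (1.0 - 2.0 * x)))  # |f'(x)| = |4(1 - 2x)|
        x = 4.0 * x * (1.0 - x)
    return total / n

print(logistic_lyapunov())  # close to ln 2 ≈ 0.6931
```

A larger positive exponent means faster divergence of nearby orbits, which is why the Fuch map’s value of 2.8661 indicates stronger chaotic mixing than the other maps in the table.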
Table 4. Comparison of Chaotic Mappings.
F | FRIME | RIME | TRIME
F1 | [1.61 × 10^6, 4.37 × 10^7] | [3.91 × 10^6, 9.69 × 10^7] | [9.16 × 10^6, 6.68 × 10^7]
F2 | [1.91 × 10^87, 1.98 × 10^113] | [3.42 × 10^88, 1.00 × 10^103] | [5.41 × 10^88, 5.10 × 10^114]
F3 | [7.85 × 10^4, 1.90 × 10^5] | [8.76 × 10^4, 2.41 × 10^5] | [7.97 × 10^4, 2.42 × 10^5]
F4 | [4.48 × 10^2, 5.61 × 10^2] | [4.77 × 10^2, 6.28 × 10^2] | [4.37 × 10^2, 5.80 × 10^2]
F5 | [5.68 × 10^2, 6.92 × 10^2] | [5.81 × 10^2, 6.60 × 10^2] | [5.93 × 10^2, 7.08 × 10^2]
F6 | [6.10 × 10^2, 6.28 × 10^2] | [6.16 × 10^2, 6.35 × 10^2] | [6.09 × 10^2, 6.30 × 10^2]
F7 | [8.47 × 10^2, 9.72 × 10^2] | [8.53 × 10^2, 9.93 × 10^2] | [8.76 × 10^2, 1.01 × 10^3]
F8 | [8.83 × 10^2, 9.91 × 10^2] | [8.66 × 10^2, 9.62 × 10^2] | [8.65 × 10^2, 9.81 × 10^2]
F9 | [1.65 × 10^3, 1.46 × 10^4] | [2.28 × 10^3, 9.45 × 10^3] | [1.97 × 10^3, 1.22 × 10^4]
F10 | [3.78 × 10^3, 6.43 × 10^3] | [4.51 × 10^3, 6.77 × 10^3] | [3.83 × 10^3, 6.35 × 10^3]
Table 5. Multi-parameter and multi-strategy comparison experiment.
Metric | MSRIME | RIMEa1 | RIMEa2 | RIMEa3 | RIME1 | RIME2 | RIME3
AVG | 5.14 × 10^3 | 6.72 × 10^3 | 7.49 × 10^3 | 8.36 × 10^3 | 7.47 × 10^3 | 7.80 × 10^3 | 5.77 × 10^3
STD | 1.73 × 10^3 | 2.23 × 10^3 | 2.57 × 10^3 | 2.29 × 10^3 | 2.25 × 10^3 | 1.83 × 10^3 | 1.96 × 10^3
Table 6. Test Functions.
CEC 2017
Category | No. | Function Name | Fmin
Unimodal Functions | F1 | Shifted and Rotated Bent Cigar Function | 100
Unimodal Functions | F2 | Shifted and Rotated Sum of Different Power Function | 200
Unimodal Functions | F3 | Shifted and Rotated Zakharov Function | 300
Simple Multimodal Functions | F4 | Shifted and Rotated Rosenbrock’s Function | 400
Simple Multimodal Functions | F5 | Shifted and Rotated Rastrigin’s Function | 500
Simple Multimodal Functions | F6 | Shifted and Rotated Expanded Scaffer’s F6 Function | 600
Simple Multimodal Functions | F7 | Shifted and Rotated Lunacek Bi_Rastrigin Function | 700
Simple Multimodal Functions | F8 | Shifted and Rotated Non-Continuous Rastrigin’s Function | 800
Simple Multimodal Functions | F9 | Shifted and Rotated Levy Function | 900
Simple Multimodal Functions | F10 | Shifted and Rotated Schwefel’s Function | 1000
Hybrid Functions | F11 | Hybrid Function 1 (n = 3) | 1100
Hybrid Functions | F12 | Hybrid Function 2 (n = 3) | 1200
Hybrid Functions | F13 | Hybrid Function 3 (n = 3) | 1300
Hybrid Functions | F14 | Hybrid Function 4 (n = 4) | 1400
Hybrid Functions | F15 | Hybrid Function 5 (n = 4) | 1500
Hybrid Functions | F16 | Hybrid Function 6 (n = 4) | 1600
Hybrid Functions | F17 | Hybrid Function 6 (n = 5) | 1700
Hybrid Functions | F18 | Hybrid Function 6 (n = 5) | 1800
Hybrid Functions | F19 | Hybrid Function 6 (n = 5) | 1900
Hybrid Functions | F20 | Hybrid Function 6 (n = 6) | 2000
Composition Functions | F21 | Composition Function 1 (n = 3) | 2100
Composition Functions | F22 | Composition Function 2 (n = 3) | 2200
Composition Functions | F23 | Composition Function 3 (n = 4) | 2300
Composition Functions | F24 | Composition Function 4 (n = 4) | 2400
Composition Functions | F25 | Composition Function 5 (n = 5) | 2500
Composition Functions | F26 | Composition Function 6 (n = 5) | 2600
Composition Functions | F27 | Composition Function 7 (n = 6) | 2700
Composition Functions | F28 | Composition Function 8 (n = 6) | 2800
Composition Functions | F29 | Composition Function 9 (n = 3) | 2900
Composition Functions | F30 | Composition Function 10 (n = 3) | 3000
CEC 2022
Category | No. | Function Name | Fmin
Unimodal Function | f1 | Shifted and Full Rotated Zakharov Function | 300
Basic Functions | f2 | Shifted and Full Rotated Rosenbrock’s Function | 400
Basic Functions | f3 | Shifted and Full Rotated Expanded Schaffer’s f6 Function | 600
Basic Functions | f4 | Shifted and Full Rotated Non-Continuous Rastrigin’s Function | 800
Basic Functions | f5 | Shifted and Full Rotated Levy Function | 900
Hybrid Functions | f6 | Hybrid Function 1 (n = 3) | 1800
Hybrid Functions | f7 | Hybrid Function 2 (n = 6) | 2000
Hybrid Functions | f8 | Hybrid Function 3 (n = 5) | 2200
Composition Functions | f9 | Composition Function 1 (n = 5) | 2300
Composition Functions | f10 | Composition Function 2 (n = 4) | 2400
Composition Functions | f11 | Composition Function 3 (n = 5) | 2600
Composition Functions | f12 | Composition Function 4 (n = 6) | 2700
Search Range: [−100, 100]
Table 7. Algorithm parameter settings.
Algorithm | Reference | Full Algorithm Name | Parameter Information
FSA | [10] | Flamingo search algorithm | b = 0.1
ISSA | [13] | Improved sparrow search algorithm | p = 0.2
RIME | [15] | Rime algorithm | w = 5
PSO | [32] | Particle swarm optimization | c1 = c2 = 2, w = 0.7
GSA | [33] | Gravitational search algorithm | α = 20
SSA | [34] | Sparrow search algorithm | p = 0.2
GWO | [35] | Grey wolf optimizer | α = [2, 0]
WOA | [36] | Whale optimization algorithm | b = 1
AVOA | [37] | African vultures optimization algorithm | p = 0.5
DBO | [38] | Dung beetle optimizer | p = 0.2
HPO | [39] | Hunter–prey optimization | —
SOGWO | [40] | Selective opposition based grey wolf optimization | —
IGWO | [41] | Improved grey wolf optimizer | —
EWOA | [42] | Enhanced whale optimization algorithm | b = 1
MSWOA | [43] | Multi-strategy whale optimization algorithm | b = 1
MISSA | [44] | Multi-strategy improved sparrow search algorithm | p = 0.3
IRIME | [45] | Improved RIME optimization algorithm | w = 5
ACGRIME | [46] | Adaptive chaotic Gaussian RIME optimizer | a = 4
Table 8. CEC2017 Test Results.
Algorithm | F1 AVG | F1 STD | F3 AVG | F3 STD | F4 AVG | F4 STD | F5 AVG | F5 STD | F6 AVG | F6 STD
PSO | 3.27 × 10^8 | 6.36 × 10^7 | 4.67 × 10^5 | 8.52 × 10^4 | 7.83 × 10^2 | 1.31 × 10^2 | 1.33 × 10^3 | 7.58 × 10 | 6.70 × 10^2 | 4.67
GSA | 2.06 × 10^11 | 1.04 × 10^10 | 3.59 × 10^5 | 2.63 × 10^4 | 7.14 × 10^4 | 8.25 × 10^3 | 1.49 × 10^3 | 1.40 × 10^2 | 6.73 × 10^2 | 3.43
GWO | 4.32 × 10^10 | 1.10 × 10^10 | 4.75 × 10^5 | 6.76 × 10^4 | 3.56 × 10^3 | 1.30 × 10^3 | 1.17 × 10^3 | 8.30 × 10 | 6.39 × 10^2 | 3.53
WOA | 8.18 × 10^10 | 1.22 × 10^10 | 1.07 × 10^6 | 1.93 × 10^5 | 1.25 × 10^4 | 2.37 × 10^3 | 1.79 × 10^3 | 1.20 × 10^2 | 6.97 × 10^2 | 2.30
DBO | 9.60 × 10^10 | 7.72 × 10^10 | 7.58 × 10^5 | 3.07 × 10^5 | 6.69 × 10^3 | 4.59 × 10^3 | 1.63 × 10^3 | 1.80 × 10^2 | 6.75 × 10^2 | 1.21 × 10
AVOA | 3.11 × 10^9 | 1.60 × 10^9 | 3.30 × 10^5 | 1.99 × 10^4 | 1.52 × 10^3 | 2.06 × 10^2 | 1.36 × 10^3 | 8.41 × 10 | 6.68 × 10^2 | 2.42
RIME | 6.82 × 10^7 | 2.50 × 10^7 | 6.63 × 10^5 | 5.93 × 10^4 | 9.12 × 10^2 | 1.10 × 10^2 | 1.06 × 10^3 | 8.88 × 10 | 6.50 × 10^2 | 6.77
SSA | 2.34 × 10^11 | 4.39 × 10^10 | 4.79 × 10^5 | 4.93 × 10^4 | 8.16 × 10^4 | 1.10 × 10^4 | 2.16 × 10^3 | 8.51 × 10 | 7.11 × 10^2 | 8.62
HPO | 2.19 × 10^10 | 1.93 × 10^10 | 7.56 × 10^5 | 1.26 × 10^5 | 4.47 × 10^3 | 2.85 × 10^3 | 1.64 × 10^3 | 1.56 × 10^2 | 6.79 × 10^2 | 9.71
MSRIME | 5.51 × 10^5 | 2.84 × 10^5 | 4.34 × 10^5 | 6.80 × 10^4 | 7.78 × 10^2 | 5.86 × 10 | 9.84 × 10^2 | 6.95 × 10 | 6.33 × 10^2 | 5.24
Algorithm | F7 AVG | F7 STD | F8 AVG | F8 STD | F9 AVG | F9 STD | F10 AVG | F10 STD | F11 AVG | F11 STD
PSO | 2.34 × 10^3 | 2.64 × 10^2 | 1.73 × 10^3 | 8.39 × 10 | 6.29 × 10^4 | 1.00 × 10^4 | 1.83 × 10^4 | 9.80 × 10^2 | 8.58 × 10^3 | 3.03 × 10^3
GSA | 3.18 × 10^3 | 2.07 × 10^2 | 2.01 × 10^3 | 3.40 × 10 | 3.22 × 10^4 | 5.35 × 10^3 | 2.07 × 10^4 | 1.33 × 10^3 | 1.84 × 10^5 | 2.06 × 10^4
GWO | 1.96 × 10^3 | 8.18 × 10 | 1.52 × 10^3 | 6.98 × 10 | 3.81 × 10^4 | 1.59 × 10^4 | 2.01 × 10^4 | 7.22 × 10^3 | 7.51 × 10^4 | 1.95 × 10^4
WOA | 3.74 × 10^3 | 2.14 × 10^2 | 2.27 × 10^3 | 7.30 × 10 | 9.04 × 10^4 | 2.45 × 10^4 | 2.86 × 10^4 | 1.34 × 10^3 | 3.19 × 10^5 | 1.64 × 10^5
DBO | 2.76 × 10^3 | 2.50 × 10^2 | 2.27 × 10^3 | 2.51 × 10^2 | 6.63 × 10^4 | 7.43 × 10^3 | 2.20 × 10^4 | 5.23 × 10^3 | 2.22 × 10^5 | 5.92 × 10^4
AVOA | 3.12 × 10^3 | 1.54 × 10^2 | 1.77 × 10^3 | 9.58 × 10 | 2.72 × 10^4 | 2.85 × 10^3 | 1.62 × 10^4 | 1.85 × 10^3 | 6.21 × 10^4 | 1.99 × 10^4
RIME | 1.86 × 10^3 | 1.22 × 10^2 | 1.41 × 10^3 | 9.59 × 10 | 3.82 × 10^4 | 1.71 × 10^4 | 1.61 × 10^4 | 1.48 × 10^3 | 1.84 × 10^4 | 4.20 × 10^3
SSA | 4.61 × 10^3 | 4.88 × 10^2 | 2.58 × 10^3 | 8.98 × 10 | 9.63 × 10^4 | 1.42 × 10^4 | 3.18 × 10^4 | 1.40 × 10^3 | 1.98 × 10^5 | 3.06 × 10^4
HPO | 5.55 × 10^3 | 6.67 × 10^2 | 1.95 × 10^3 | 2.03 × 10^2 | 4.50 × 10^4 | 9.16 × 10^3 | 1.91 × 10^4 | 1.13 × 10^3 | 1.17 × 10^5 | 5.65 × 10^4
MSRIME | 1.63 × 10^3 | 7.96 × 10 | 1.31 × 10^3 | 8.06 × 10 | 2.05 × 10^4 | 4.25 × 10^3 | 1.59 × 10^4 | 9.75 × 10^2 | 1.66 × 10^4 | 2.63 × 10^3
Algorithm | F12 AVG | F12 STD | F13 AVG | F13 STD | F14 AVG | F14 STD | F15 AVG | F15 STD | F16 AVG | F16 STD
PSO | 5.54 × 10^8 | 3.08 × 10^8 | 1.40 × 10^6 | 4.38 × 10^5 | 2.97 × 10^6 | 1.53 × 10^6 | 1.41 × 10^5 | 8.39 × 10^4 | 6.36 × 10^3 | 3.29 × 10^2
GSA | 1.49 × 10^11 | 1.35 × 10^10 | 3.40 × 10^10 | 1.27 × 10^9 | 3.57 × 10^7 | 2.34 × 10^7 | 1.35 × 10^10 | 1.83 × 10^9 | 1.87 × 10^4 | 9.55 × 10^2
GWO | 8.39 × 10^9 | 3.58 × 10^9 | 1.19 × 10^9 | 9.80 × 10^8 | 1.16 × 10^7 | 3.96 × 10^6 | 2.05 × 10^8 | 3.05 × 10^8 | 6.87 × 10^3 | 1.67 × 10^2
WOA | 1.83 × 10^10 | 3.40 × 10^9 | 1.05 × 10^9 | 5.26 × 10^8 | 1.39 × 10^7 | 6.19 × 10^6 | 1.42 × 10^8 | 1.16 × 10^8 | 1.65 × 10^4 | 1.66 × 10^3
DBO | 3.64 × 10^9 | 1.36 × 10^9 | 9.63 × 10^7 | 6.92 × 10^7 | 1.26 × 10^7 | 1.29 × 10^7 | 1.01 × 10^7 | 1.60 × 10^7 | 8.81 × 10^3 | 1.81 × 10^3
AVOA | 4.69 × 10^8 | 2.85 × 10^8 | 7.01 × 10^4 | 2.40 × 10^4 | 7.52 × 10^6 | 4.06 × 10^6 | 5.72 × 10^4 | 1.57 × 10^4 | 7.04 × 10^3 | 5.76 × 10^2
RIME | 7.96 × 10^8 | 4.95 × 10^8 | 2.52 × 10^5 | 8.86 × 10^4 | 6.13 × 10^6 | 1.52 × 10^6 | 1.15 × 10^5 | 3.74 × 10^4 | 6.74 × 10^3 | 8.13 × 10^2
SSA | 1.49 × 10^11 | 2.84 × 10^10 | 3.55 × 10^10 | 5.03 × 10^9 | 6.97 × 10^7 | 2.67 × 10^7 | 1.71 × 10^10 | 1.83 × 10^9 | 2.16 × 10^4 | 2.48 × 10^3
HPO | 1.33 × 10^10 | 9.62 × 10^9 | 8.54 × 10^8 | 6.34 × 10^8 | 6.00 × 10^6 | 6.90 × 10^6 | 1.44 × 10^8 | 3.22 × 10^8 | 7.39 × 10^3 | 7.64 × 10^2
MSRIME | 3.87 × 10^8 | 2.09 × 10^8 | 6.60 × 10^4 | 1.12 × 10^4 | 2.52 × 10^6 | 1.05 × 10^6 | 5.04 × 10^4 | 1.60 × 10^4 | 6.04 × 10^3 | 8.22 × 10^2
Algorithm | F17 AVG | F17 STD | F18 AVG | F18 STD | F19 AVG | F19 STD | F20 AVG | F20 STD | F21 AVG | F21 STD
PSO | 5.28 × 10^3 | 2.85 × 10^2 | 3.20 × 10^6 | 1.18 × 10^6 | 5.70 × 10^6 | 1.73 × 10^6 | 1.73 × 10^7 | 1.11 × 10^6 | 3.74 × 10^3 | 1.61 × 10^2
GSA | 6.32 × 10^6 | 4.13 × 10^6 | 4.75 × 10^7 | 1.21 × 10^7 | 5.75 × 10^7 | 2.30 × 10^7 | 1.53 × 10^10 | 2.63 × 10^9 | 5.43 × 10^3 | 2.41 × 10^2
GWO | 5.06 × 10^3 | 6.57 × 10^2 | 1.38 × 10^7 | 9.51 × 10^6 | 9.81 × 10^6 | 4.36 × 10^6 | 5.13 × 10^8 | 5.22 × 10^8 | 3.06 × 10^3 | 7.94 × 10
WOA | 1.23 × 10^4 | 4.23 × 10^3 | 1.22 × 10^7 | 4.61 × 10^6 | 9.93 × 10^6 | 3.34 × 10^6 | 2.01 × 10^8 | 8.17 × 10^7 | 4.30 × 10^3 | 1.29 × 10^2
DBO | 9.06 × 10^3 | 1.21 × 10^3 | 1.50 × 10^7 | 7.99 × 10^6 | 2.44 × 10^7 | 1.33 × 10^7 | 3.85 × 10^7 | 3.68 × 10^7 | 3.90 × 10^3 | 2.24 × 10^2
AVOA | 6.37 × 10^3 | 5.72 × 10^2 | 3.51 × 10^6 | 1.91 × 10^6 | 3.70 × 10^6 | 1.95 × 10^6 | 1.89 × 10^6 | 4.88 × 10^5 | 3.57 × 10^3 | 1.94 × 10^2
RIME | 5.57 × 10^3 | 7.73 × 10^2 | 7.40 × 10^6 | 3.75 × 10^6 | 8.69 × 10^6 | 3.92 × 10^6 | 1.11 × 10^7 | 1.01 × 10^7 | 2.89 × 10^3 | 6.66 × 10
SSA | 2.25 × 10^6 | 6.71 × 10^5 | 1.53 × 10^8 | 9.98 × 10^7 | 1.43 × 10^8 | 4.86 × 10^7 | 1.69 × 10^10 | 1.83 × 10^9 | 4.69 × 10^3 | 1.98 × 10^2
HPO | 6.82 × 10^3 | 3.04 × 10^2 | 8.69 × 10^6 | 1.06 × 10^7 | 5.35 × 10^6 | 7.85 × 10^6 | 3.39 × 10^7 | 5.08 × 10^7 | 3.78 × 10^3 | 2.77 × 10^2
MSRIME | 4.88 × 10^3 | 2.41 × 10^2 | 2.66 × 10^6 | 1.18 × 10^6 | 4.52 × 10^6 | 1.01 × 10^6 | 1.81 × 10^6 | 3.70 × 10^6 | 2.81 × 10^3 | 3.26 × 10
Algorithm | F22 AVG | F22 STD | F23 AVG | F23 STD | F24 AVG | F24 STD | F25 AVG | F25 STD | F26 AVG | F26 STD
PSO | 2.22 × 10^4 | 5.98 × 10^2 | 5.16 × 10^3 | 2.89 × 10^2 | 5.53 × 10^3 | 3.38 × 10^2 | 3.44 × 10^3 | 3.41 × 10 | 2.11 × 10^4 | 6.47 × 10^3
GSA | 2.49 × 10^4 | 9.79 × 10^2 | 8.61 × 10^3 | 5.05 × 10^2 | 1.30 × 10^4 | 1.11 × 10^3 | 2.07 × 10^4 | 1.13 × 10^3 | 4.58 × 10^4 | 2.24 × 10^3
GWO | 2.40 × 10^4 | 6.71 × 10^3 | 3.63 × 10^3 | 7.57 × 10 | 4.22 × 10^3 | 1.24 × 10^2 | 5.94 × 10^3 | 5.24 × 10^2 | 1.54 × 10^4 | 9.71 × 10^2
WOA | 3.07 × 10^4 | 2.01 × 10^3 | 5.22 × 10^3 | 1.99 × 10^2 | 6.33 × 10^3 | 3.54 × 10^2 | 8.66 × 10^3 | 8.52 × 10^2 | 3.81 × 10^4 | 2.81 × 10^3
DBO | 2.54 × 10^4 | 2.90 × 10^3 | 4.53 × 10^3 | 1.99 × 10^2 | 5.59 × 10^3 | 1.80 × 10^2 | 1.11 × 10^4 | 8.11 × 10^3 | 2.55 × 10^4 | 2.01 × 10^3
AVOA | 2.00 × 10^4 | 1.20 × 10^3 | 4.22 × 10^3 | 2.55 × 10^2 | 5.16 × 10^3 | 3.59 × 10^2 | 4.12 × 10^3 | 1.38 × 10^2 | 2.53 × 10^4 | 2.67 × 10^3
RIME | 2.00 × 10^4 | 2.22 × 10^3 | 3.54 × 10^3 | 6.89 × 10 | 4.10 × 10^3 | 2.26 × 10^2 | 3.56 × 10^3 | 6.73 × 10 | 1.31 × 10^4 | 1.36 × 10^3
SSA | 3.51 × 10^4 | 4.85 × 10^2 | 7.25 × 10^3 | 6.06 × 10^2 | 1.20 × 10^4 | 3.89 × 10^2 | 2.61 × 10^4 | 3.30 × 10^3 | 5.01 × 10^4 | 4.52 × 10^3
HPO | 2.13 × 10^4 | 2.54 × 10^3 | 4.22 × 10^3 | 1.36 × 10^2 | 5.05 × 10^3 | 1.99 × 10^2 | 5.00 × 10^3 | 1.48 × 10^3 | 2.38 × 10^4 | 3.48 × 10^3
MSRIME | 1.79 × 10^4 | 4.60 × 10^2 | 3.38 × 10^3 | 1.50 × 10^2 | 3.92 × 10^3 | 1.24 × 10^2 | 3.40 × 10^3 | 5.22 × 10 | 1.30 × 10^4 | 1.42 × 10^3
Algorithm | F27 AVG | F27 STD | F28 AVG | F28 STD | F29 AVG | F29 STD | F30 AVG | F30 STD
PSO | 3.66 × 10^3 | 2.14 × 10^2 | 3.50 × 10^3 | 6.62 × 10 | 8.99 × 10^3 | 4.68 × 10^2 | 2.09 × 10^7 | 9.02 × 10^6
GSA | 1.60 × 10^4 | 1.66 × 10^3 | 2.77 × 10^4 | 1.82 × 10^3 | 2.83 × 10^5 | 6.67 × 10^4 | 2.99 × 10^10 | 2.99 × 10^9
GWO | 4.10 × 10^3 | 2.02 × 10^2 | 8.08 × 10^3 | 9.17 × 10^2 | 8.35 × 10^3 | 7.65 × 10^2 | 4.26 × 10^8 | 7.56 × 10^7
WOA | 5.86 × 10^3 | 5.94 × 10^2 | 1.19 × 10^4 | 1.74 × 10^2 | 1.95 × 10^4 | 3.84 × 10^3 | 1.76 × 10^9 | 8.44 × 10^8
DBO | 4.22 × 10^3 | 3.25 × 10^2 | 2.07 × 10^4 | 2.84 × 10^3 | 1.25 × 10^4 | 2.63 × 10^3 | 1.47 × 10^8 | 6.61 × 10^7
AVOA | 4.33 × 10^3 | 2.97 × 10^2 | 4.75 × 10^3 | 6.24 × 10^2 | 9.56 × 10^3 | 7.68 × 10^2 | 2.79 × 10^7 | 8.59 × 10^6
RIME | 3.85 × 10^3 | 1.24 × 10^2 | 3.68 × 10^3 | 8.13 × 10 | 8.87 × 10^3 | 8.73 × 10^2 | 1.03 × 10^8 | 5.72 × 10^7
SSA | 1.27 × 10^4 | 1.22 × 10^3 | 3.14 × 10^4 | 2.84 × 10^3 | 1.81 × 10^5 | 6.64 × 10^4 | 2.89 × 10^10 | 5.92 × 10^9
HPO | 3.92 × 10^3 | 1.38 × 10^2 | 1.34 × 10^4 | 5.72 × 10^3 | 9.03 × 10^3 | 9.70 × 10^2 | 3.95 × 10^8 | 8.74 × 10^8
MSRIME | 3.63 × 10^3 | 1.22 × 10^2 | 3.46 × 10^3 | 2.64 × 10 | 8.50 × 10^3 | 1.17 × 10^3 | 4.73 × 10^7 | 2.56 × 10^7
Table 9. CEC2022 Test Results.
Algorithm | f1 AVG | f1 STD | f2 AVG | f2 STD | f3 AVG | f3 STD | f4 AVG | f4 STD
SOGWO | 1.26 × 10^4 | 3.08 × 10^3 | 5.03 × 10^2 | 4.27 × 10 | 6.05 × 10^2 | 2.04 | 8.60 × 10^2 | 2.21 × 10
IRIME | 4.48 × 10^2 | 7.57 × 10^1 | 4.32 × 10^2 | 2.91 × 10 | 6.03 × 10^2 | 1.35 | 8.52 × 10^2 | 1.34 × 10
EWOA | 2.42 × 10^4 | 9.81 × 10^3 | 5.69 × 10^2 | 3.58 × 10 | 6.62 × 10^2 | 1.53 × 10 | 9.20 × 10^2 | 3.12 × 10
IGWO | 1.05 × 10^3 | 6.97 × 10^2 | 4.53 × 10^2 | 2.46 × 10 | 6.02 × 10^2 | 1.51 | 8.90 × 10^2 | 3.79 × 10
MSWOA | 2.56 × 10^4 | 3.68 × 10^3 | 5.73 × 10^2 | 8.13 × 10 | 6.70 × 10^2 | 1.01 × 10 | 9.23 × 10^2 | 1.91 × 10
ACGRIME | 7.18 × 10^3 | 2.75 × 10^3 | 4.49 × 10^2 | 1.53 × 10 | 6.02 × 10^2 | 7.03 × 10^−1 | 8.46 × 10^2 | 1.12 × 10
MISSA | 1.19 × 10^4 | 4.33 × 10^3 | 6.04 × 10^2 | 6.61 × 10 | 6.02 × 10^2 | 6.43 | 9.41 × 10^2 | 1.67 × 10
MSRIME | 3.00 × 10^2 | 2.30 × 10^−2 | 4.17 × 10^2 | 4.15 × 10 | 6.02 × 10^2 | 1.47 | 8.38 × 10^2 | 1.10 × 10
Algorithm | f5 AVG | f5 STD | f6 AVG | f6 STD | f7 AVG | f7 STD | f8 AVG | f8 STD
SOGWO | 1.01 × 10^3 | 6.82 × 10 | 3.33 × 10^5 | 6.49 × 10^5 | 2.05 × 10^3 | 6.44 | 2.28 × 10^3 | 6.93 × 10
IRIME | 1.12 × 10^3 | 1.40 × 10^2 | 8.04 × 10^3 | 5.03 × 10^3 | 2.06 × 10^3 | 1.79 × 10 | 2.23 × 10^3 | 1.54
EWOA | 4.25 × 10^3 | 2.90 × 10^3 | 7.38 × 10^5 | 1.01 × 10^6 | 2.23 × 10^3 | 2.36 × 10 | 2.28 × 10^3 | 5.87 × 10
IGWO | 1.06 × 10^3 | 1.51 × 10^2 | 1.75 × 10^5 | 4.71 × 10^5 | 2.04 × 10^3 | 1.59 × 10 | 2.24 × 10^3 | 6.78
MSWOA | 4.66 × 10^3 | 2.99 × 10^3 | 1.01 × 10^6 | 1.13 × 10^6 | 2.26 × 10^3 | 6.90 × 10 | 2.27 × 10^3 | 4.78 × 10
ACGRIME | 1.02 × 10^3 | 1.45 × 10^2 | 3.98 × 10^3 | 1.52 × 10^3 | 2.05 × 10^3 | 1.33 × 10 | 2.23 × 10^3 | 2.13
MISSA | 2.59 × 10^3 | 1.06 × 10^3 | 1.58 × 10^7 | 7.79 × 10^6 | 2.20 × 10^3 | 4.78 × 10 | 2.30 × 10^3 | 6.52 × 10
MSRIME | 9.28 × 10^2 | 2.51 × 10 | 5.53 × 10^3 | 3.86 × 10^3 | 2.05 × 10^3 | 1.67 × 10 | 2.23 × 10^3 | 2.55
Algorithm | f9 AVG | f9 STD | f10 AVG | f10 STD | f11 AVG | f11 STD | f12 AVG | f12 STD
SOGWO | 2.51 × 10^3 | 1.85 × 10 | 3.51 × 10^3 | 8.24 × 10^2 | 3.21 × 10^3 | 3.63 × 10^2 | 2.96 × 10^3 | 1.62 × 10
IRIME | 2.47 × 10^3 | 6.47 × 10^−1 | 2.67 × 10^3 | 2.02 × 10^2 | 3.09 × 10^3 | 2.18 × 10^2 | 2.90 × 10^3 | 1.21 × 10
EWOA | 2.56 × 10^3 | 4.36 × 10 | 4.74 × 10^3 | 1.25 × 10^3 | 3.79 × 10^3 | 4.56 × 10^2 | 3.07 × 10^3 | 8.38 × 10
IGWO | 2.48 × 10^3 | 1.04 × 10 | 3.40 × 10^3 | 1.29 × 10^3 | 3.15 × 10^3 | 1.75 × 10^2 | 2.92 × 10^3 | 6.35
MSWOA | 2.58 × 10^3 | 4.29 × 10 | 4.54 × 10^3 | 1.35 × 10^3 | 3.44 × 10^3 | 1.34 × 10^2 | 3.08 × 10^3 | 9.11 × 10
ACGRIME | 2.48 × 10^3 | 3.52 × 10^−1 | 2.51 × 10^3 | 6.24 × 10 | 2.93 × 10^3 | 6.46 × 10 | 2.94 × 10^3 | 4.05
MISSA | 2.55 × 10^3 | 5.00 × 10 | 4.80 × 10^3 | 1.94 × 10^3 | 3.48 × 10^3 | 1.68 × 10^2 | 2.96 × 10^3 | 1.00 × 10
MSRIME | 2.47 × 10^3 | 2.11 × 10^−1 | 2.61 × 10^3 | 1.29 × 10^2 | 2.90 × 10^3 | 2.27 × 10^−2 | 2.90 × 10^3 | 3.77
Table 10. Wilcoxon rank-sum test and Friedman rank.
CEC 2017: Algorithm | +/−/= | Rank
PSO | 22/3/4 | 4
GSA | 29/0/0 | 10
GWO | 25/1/3 | 5
WOA | 28/1/0 | 8
DBO | 29/0/0 | 7
AVOA | 27/1/1 | 6
RIME | 24/1/4 | 3
SSA | 29/0/0 | 9
HPO | 24/1/4 | 2
MSRIME | — | 1
CEC 2022: Algorithm | +/−/= | Rank
SOGWO | 11/0/1 | 5
IRIME | 10/0/2 | 3
EWOA | 12/0/0 | 7
IGWO | 10/1/1 | 4
MSWOA | 12/0/0 | 8
ACGRIME | 8/2/2 | 2
MISSA | 11/0/1 | 6
MSRIME | — | 1
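The Friedman ranks reported above come from ranking the algorithms on each benchmark function and averaging the ranks across functions [47]. A stdlib-only sketch of that ranking step (the input matrix is toy data, not the paper’s results):

```python
def friedman_mean_ranks(scores):
    """scores[i][j]: result of algorithm j on problem i (lower is better).

    Returns the mean rank of each algorithm across problems, with ties
    receiving the average of the tied rank positions."""
    n_probs, n_algs = len(scores), len(scores[0])
    totals = [0.0] * n_algs
    for row in scores:
        order = sorted(range(n_algs), key=lambda j: row[j])
        ranks = [0.0] * n_algs
        k = 0
        while k < n_algs:
            m = k
            # group tied values and assign their average rank
            while m + 1 < n_algs and row[order[m + 1]] == row[order[k]]:
                m += 1
            avg = (k + m) / 2.0 + 1.0
            for idx in order[k:m + 1]:
                ranks[idx] = avg
            k = m + 1
        for j in range(n_algs):
            totals[j] += ranks[j]
    return [t / n_probs for t in totals]

# toy example: 3 problems, 3 algorithms
print(friedman_mean_ranks([[1.0, 2.0, 3.0], [1.5, 1.5, 2.0], [2.0, 1.0, 3.0]]))
# → [1.5, 1.5, 3.0]
```

The algorithm with the smallest mean rank (here the first two tie) is ranked best overall, which is how the final “Rank” column in the table is ordered.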
Table 11. Experimental Results and Statistical Analysis.
Experiment Number | Algorithm | L | z | avg | r
(a) | RIME | 30.2066 | 15 | 13.0125 | 6.0611
(a) | MSRIME | 29.6722 | 17 | 13.8139 | 3.4018
(b) | RIME | 31.6208 | 16 | 14.7911 | 5.3895
(b) | MSRIME | 30.2066 | 10 | 13.8566 | 4.9674
(c) | RIME | 31.6208 | 14 | 14.2281 | 6.4446
(c) | MSRIME | 29.7990 | 6 | 12.5052 | 5.0993
(d) | RIME | 30.0285 | 10 | 13.6695 | 5.9711
(d) | MSRIME | 30.2066 | 10 | 13.0108 | 4.8837
(e) | RIME | 30.2072 | 10 | 13.9165 | 5.0276
(e) | MSRIME | 30.2066 | 9 | 13.6783 | 4.2919
(f) | RIME | 29.9755 | 7 | 14.2045 | 6.7476
(f) | MSRIME | 29.8543 | 12 | 14.6267 | 6.7497
Table 12. BCR distribution on different maps.
Quantity | Small Ordinary Map | Small Urbanization Map | Large Ordinary Map | Large Urbanization Map
Number of grid units | 400 | 400 | 1600 | 1600
BCR | 21% | 35% | 26% | 40%
Number of building units | 84 | 140 | 416 | 640
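The map settings above follow directly from BCR = building units / grid units. A small sketch that derives the number of occupied grid cells for each target BCR, using the values from the table:

```python
def building_units(total_cells, bcr):
    """Number of grid cells to mark as obstacles for a target Building Coverage Ratio."""
    return round(total_cells * bcr)

maps = {
    "small ordinary": (400, 0.21),
    "small urbanization": (400, 0.35),
    "large ordinary": (1600, 0.26),
    "large urbanization": (1600, 0.40),
}
for name, (cells, bcr) in maps.items():
    print(name, building_units(cells, bcr))  # 84, 140, 416, 640
```

Rounding to the nearest whole cell is an assumption of this sketch; the paper’s map generator may place building units differently, but the cell counts it reports are consistent with this relation.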
Table 13. Statistical Table of Small Map Experiment Results.
Environment | Algorithm | Lbest | Lavg | Time | z
Small ordinary map | IRIME | 28.5891 | 29.2471 | 0.7627 | 11
Small ordinary map | ISSA | 28.1262 | 29.0219 | 0.7639 | 9
Small ordinary map | GWO | 30.2858 | 32.6208 | 0.4463 | 10
Small ordinary map | SOGWO | 28.8514 | 29.0788 | 0.5136 | 9
Small ordinary map | RIME | 28.8438 | 30.4493 | 0.7429 | 10
Small ordinary map | DBO | 30.0512 | 31.964 | 0.6271 | 12
Small ordinary map | FSA | 30.4841 | 33.8836 | 0.8452 | 13
Small ordinary map | MSRIME | 28.0642 | 28.6825 | 0.5085 | 8
Small urbanization map | IRIME | 29.0568 | 30.0427 | 0.7882 | 12
Small urbanization map | ISSA | 29.5046 | 30.0285 | 0.6379 | 9
Small urbanization map | GWO | 31.5246 | 35.3782 | 0.5379 | 16
Small urbanization map | SOGWO | 29.9482 | 30.6143 | 0.8680 | 9
Small urbanization map | RIME | 29.9526 | 32.2066 | 0.5122 | 15
Small urbanization map | DBO | 31.7514 | 34.3137 | 0.7555 | 13
Small urbanization map | FSA | 30.0214 | 32.1421 | 0.6680 | 9
Small urbanization map | MSRIME | 29.0543 | 30.5013 | 0.5610 | 10
Table 14. Statistical Table of Large Map Experiment Results.
Environment | Algorithm | Lbest | Lavg | Time | z
Large ordinary map | IRIME | 64.0248 | 69.8665 | 1.0738 | 7
Large ordinary map | ISSA | 66.4523 | 74.5615 | 1.8894 | 10
Large ordinary map | GWO | 76.8542 | 82.7469 | 0.8977 | 25
Large ordinary map | SOGWO | 68.4652 | 70.1946 | 1.9477 | 13
Large ordinary map | RIME | 65.5216 | 68.3745 | 0.8188 | 23
Large ordinary map | DBO | 72.3542 | 76.8284 | 0.5634 | 5
Large ordinary map | FSA | 70.2540 | 72.1567 | 0.9866 | 9
Large ordinary map | MSRIME | 62.9851 | 64.8854 | 0.8392 | 17
Large urbanization map | IRIME | 68.2548 | 70.7117 | 1.6892 | 31
Large urbanization map | ISSA | 65.9546 | 67.1904 | 2.5597 | 39
Large urbanization map | GWO | 79.3562 | 86.3523 | 0.9827 | 36
Large urbanization map | SOGWO | 68.2546 | 72.6826 | 1.2707 | 36
Large urbanization map | RIME | 69.2548 | 76.8061 | 0.9801 | 36
Large urbanization map | DBO | 74.3269 | 77.0711 | 1.1861 | 8
Large urbanization map | FSA | 71.2398 | 79.8995 | 1.0878 | 11
Large urbanization map | MSRIME | 65.9856 | 67.0903 | 0.9878 | 26
Table 15. Additional test data.

Environment   Algorithm   Lbest     Lavg      Time     z
Scenario 1    IRIME       53.1828   54.7851   1.0235   20
              ISSA        53.1828   54.2864   1.2546   20
              GWO         53.7686   55.0248   1.2350   21
              SOGWO       53.2434   54.4347   1.3425   20
              RIME        53.2118   54.4347   0.8457   21
              DBO         53.2434   55.0873   1.2482   20
              FSA         54.8537   55.6936   0.8854   20
              MSRIME      53.1828   54.1048   0.8649   20
Scenario 2    IRIME       34.3716   35.2546   0.9756   16
              ISSA        34.2587   35.7297   0.8294   14
              GWO         34.5498   35.2587   0.7102   16
              SOGWO       34.5498   35.5428   0.8529   15
              RIME        34.5498   35.7213   0.5700   15
              DBO         35.6007   39.8995   0.6661   14
              FSA         34.3716   35.8124   0.6816   15
              MSRIME      34.2587   35.5498   0.6115   14
Scenario 3    IRIME       52.4108   53.6840   1.8164   38
              ISSA        51.7108   53.5428   2.5268   37
              GWO         53.9009   55.6236   2.0216   44
              SOGWO       53.0357   55.3242   4.2314   40
              RIME        52.9961   54.2687   1.3954   33
              DBO         53.8885   54.6873   2.2486   36
              FSA         52.0321   54.9851   2.3548   38
              MSRIME      50.8190   53.0924   1.5584   33
Scenario 4    IRIME       69.3028   72.8136   1.1905   14
              ISSA        69.2983   72.0354   1.8142   14
              GWO         70.2546   79.5139   1.6541   26
              SOGWO       69.2983   74.3716   1.8546   14
              RIME        67.2779   72.6218   0.9570   23
              DBO         70.2549   76.2426   1.0144   7
              FSA         69.8423   73.4646   1.1480   7
              MSRIME      67.9775   71.2524   1.1331   11
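Tables 13–15 appear to report, per algorithm, the best and mean path length (Lbest, Lavg) and mean runtime over repeated independent runs. A minimal sketch of that aggregation, with illustrative run data (the function name and sample values are not from the paper):

```python
def summarize(lengths, times):
    """Aggregate repeated planner runs into Lbest / Lavg / mean-time columns."""
    l_best = min(lengths)                    # shortest path found across runs
    l_avg = sum(lengths) / len(lengths)      # mean path length
    t_avg = sum(times) / len(times)          # mean planning time per run
    return l_best, l_avg, t_avg

# illustrative data for a handful of runs
lengths = [28.59, 29.31, 29.87, 28.92, 29.66]
times = [0.74, 0.78, 0.75, 0.77, 0.76]
print(summarize(lengths, times))
```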

Share and Cite

Lv, H.; Qian, Q.; Pan, J.; Song, M.; Feng, Y.; Li, Y. Application of Multi-Strategy Controlled Rime Algorithm in Path Planning for Delivery Robots. Biomimetics 2025, 10, 476. https://doi.org/10.3390/biomimetics10070476