Article

Multi-Strategy-Assisted Hybrid Crayfish-Inspired Optimization Algorithm for Solving Real-World Problems

1 School of Art and Design, Xi’an University of Technology, Xi’an 710054, China
2 Department of Applied Mathematics, Xi’an University of Technology, Xi’an 710054, China
* Author to whom correspondence should be addressed.
Biomimetics 2025, 10(5), 343; https://doi.org/10.3390/biomimetics10050343
Submission received: 5 March 2025 / Revised: 12 April 2025 / Accepted: 15 April 2025 / Published: 21 May 2025

Abstract

To address problems of the original crayfish optimization algorithm (COA), such as reduced population diversity, susceptibility to local optima, and insufficient convergence accuracy, a multi-strategy crayfish optimization algorithm based on differential evolution, named the ICOA, is proposed. First, an elite chaotic difference strategy is used for population initialization to generate a more uniformly distributed crayfish population and to increase the quality and diversity of the population. Second, a differential evolution strategy and a dimensional variation strategy are introduced to improve the quality of the crayfish population before each iteration, simultaneously improving the accuracy of the optimal solution and the local search ability. To enhance the position-updating approach during crayfish exploration, the Levy flight strategy is adopted; it widens the algorithm’s search range, improves its local search capability, prevents premature convergence, and enhances population stability. Finally, an adaptive parameter strategy is introduced to improve the exploitation stage, better balancing the global search and local mining abilities of the algorithm and further enhancing both its optimization ability and its ability to escape local optima. In addition, a comparison with the original COA and two groups of optimization algorithms on the CEC2019, CEC2020, and CEC2022 test sets was verified by the Wilcoxon rank sum test; the results show that the proposed ICOA is highly competitive. The performance of the ICOA is also tested against different high-performance algorithms on 6 engineering optimization examples, 30 high- and low-dimensional constrained problems, and 2 large-scale NP problems. Numerical experimental results show that the ICOA delivers superior performance on a range of engineering problems and excels at solving complex optimization problems.

1. Introduction

In recent years, optimization techniques have been increasingly employed to tackle complex problems in various disciplines, including science, engineering, and other relevant fields [1,2,3]. The dimensions and sophistication of these optimization problems have grown exponentially [4]. Traditional gradient descent and Newton methods have proved insufficient for evolving engineering needs and still have many defects when dealing with multi-extremum problems. Meta-heuristic algorithms have therefore emerged, offering many advantages, such as randomness, flexibility, ease of implementation, and no need for gradient information [5]. When optimizing a given problem, meta-heuristic algorithms can strike a balance between escaping a local optimum and converging to a point, increasing the likelihood of finding a global optimum. Meta-heuristic algorithms are extensively utilized, owing to their capacity to efficiently determine the global optimum for various optimization problems. These algorithms exhibit several desirable features, including independence from initial conditions and solution domains, robustness, and other exceptional qualities [6,7]. Their practical utility has also been demonstrated in various fields, including the multiple traveling salesman problem [8], multilevel threshold image segmentation [9], ship routing and scheduling problems [10], feature selection [11], and multilevel image segmentation [12]. Currently, meta-heuristic algorithms fall into four broad categories.
Group intelligence algorithms are inspired by the effects of animal behavior. Take the PSO [13] algorithm for instance, which draws inspiration from the foraging behavior of birds. In this algorithm, the search space is represented by a collection of particles, each of which embodies a potential solution. To improve efficiency, every particle explores both local and global optima during the search process, aiming to achieve enhanced outcomes. Finally, the velocity and weights of the particles are added to obtain the solution. The Artificial Bee Colony Algorithm (ABC) [14], Ant Colony Optimization (ACO) [15], and Cuckoo Search Optimization (CSO) [16] are the classical algorithms proposed after the PSO algorithm. In recent years, there has been a surge of novel algorithms introduced as a result of extensive research. Consider, for instance, the Snake Optimizer (SO) [17], which finds inspiration from the feeding and reproduction behaviors of snakes. Similarly, the White Shark Optimizer (WSO) [18] draws inspiration from the distinctive navigation and foraging characteristics of white sharks. Alongside these, we also have the Harris Hawk Optimizer (HHO) [19], Artificial Rabbit Optimization (ARO) [20], and Artificial Hummingbird Optimization Algorithm (AHA) [21], and various other algorithms with diverse inspirations and applications.
Algorithms that draw inspiration from various physical phenomena are known as physics-based algorithms. Most famously, Rashedi et al. proposed the physics-based Gravitational Search Algorithm (GSA) [22]. Another renowned physics-based algorithm is the Simulated Annealing Algorithm (SA) [23], which emulates the annealing process of solid matter in physics. These notable examples highlight the application of physical principles to algorithm design. Evolutionary algorithms are meta-heuristic algorithms that mimic natural evolutionary mechanisms. A prominent example of an evolutionary algorithm is the Genetic Algorithm (GA) [24], which simulates the process of biological evolution through natural selection and the hereditary mechanism described in Darwin’s theory. The GA uses binary encoding to facilitate interbreeding and mutation in a population of organisms, with each individual representing a candidate solution. Additionally, several newer evolutionary algorithms have emerged, including Evolutionary Programming (EP) [25], Differential Evolution (DE) [26], and the Virulence Optimization Algorithm (VOA) [27]. These algorithms draw inspiration from biological and evolutionary principles to optimize their design.
Meta-heuristic algorithms that draw inspiration from human behavioral habits are categorized as human-inspired algorithms [28]. One prominent example in this category is the Harmony Search (HS) algorithm [29]. Other algorithms that share similar principles include Teaching–Learning-Based Optimization (TLBO) [30], Social Group Optimization (SGO) [31], the Group Teaching Optimization Algorithm (GTOA) [32], the Brain Storming Optimization Algorithm (BSO) [33], and the Imperialist Competition Algorithm (ICA) [34]. These algorithms leverage human behavioral patterns to optimize their problem-solving approaches.
In summary, by comparing different features and advantages of meta-heuristic algorithms, the following conclusions can be drawn:
  • They are usually inspired by some natural law or mathematical theory [35].
  • No theoretical derivation is required to transform the problem into a model that is less dependent on mathematical conditions [36].
  • The complexity of the algorithm determines its search rate for finding approximate and suitable solutions [37].
There are two main solution approaches in meta-heuristic algorithms: (1) single-solution-based and (2) population-based. A single-solution-based algorithm starts with one candidate solution and improves it during iteration. A population-based meta-heuristic algorithm initializes a set of random solutions so that the population moves toward the most promising region. Compared with single-solution-based algorithms, population-based meta-heuristics balance two capabilities: exploration and exploitation [38,39]. The two complement each other, making the algorithm converge faster and improving its global search capability.
According to the No Free Lunch (NFL) theorem [40], there is no perfect algorithm; every algorithm is deficient in some aspect. Based on the foraging and summer resort behaviors of crayfish, scholars proposed a new meta-heuristic optimization algorithm, namely the Crayfish Optimization Algorithm (COA) [41]. The COA models the foraging, summer resort, and competitive behaviors of crayfish in a novel way, and balances the exploration and exploitation abilities of the algorithm by adjusting the temperature. Through a comparative analysis of experimental results, it is observed that the COA demonstrates favorable optimization performance on the CEC2014 benchmark functions, as well as on 23 standard benchmark functions and engineering application problems.
However, the COA has its limitations, for instance, low population diversity, poor convergence speed, and low global accuracy of solutions. To address these limitations and improve its optimization capabilities, the paper proposes an enhanced version of the algorithm called the ICOA. The proposed approach includes several improvements. Firstly, an elite chaotic difference strategy is introduced in the initialization stage to promote a more even distribution of crayfish and to obtain an initial population of higher quality. Additionally, to augment the population’s diversity and quality, a differential evolution strategy is integrated before the summer vacation. Moreover, during crayfish summer resort behavior, a Levy flight strategy is employed to amplify their search capability, broaden their exploration range, and enhance their capacity to evade local optima. Finally, an adaptive strategy is introduced during the competition phase to bolster the crayfish’s global search capability, resulting in enhanced solution accuracy and faster convergence in successive iterations.
The efficacy and high competitiveness of the proposed ICOA are demonstrated through comprehensive experimentation on diverse problem sets. Specifically, we evaluate the performance of the ICOA on the CEC2017, CEC2020, and CEC2022 test sets, six engineering applications, 30 high- and low-dimensional constrained problems, and two NP problems. In addition, we evaluate its ability to solve a hypersonic missile trajectory planning problem. To assess the effectiveness of the ICOA, a comprehensive comparison is conducted between its solutions and those generated by classical and advanced optimization algorithms. The results are rigorously evaluated through statistical analysis using Wilcoxon rank sum tests to ensure that any significant differences are identified. The key contributions of this research can be summarized as follows:
(i)
The ICOA is proposed by incorporating the elite chaotic difference strategy, the differential variation strategy, the Levy flight strategy, the dimensional variation strategy, and the adaptive parameter strategy.
(ii)
The effectiveness and potential of the ICOA in addressing complex optimization problems are validated through experimental results obtained from benchmark test sets such as CEC2017, CEC2019, and CEC2020. These results are compared with other state-of-the-art swarm intelligence optimization algorithms, revealing the superior performance of the ICOA. This comparative analysis highlights the algorithm’s effectiveness and reinforces its capability to tackle challenging optimization problems.
(iii)
The ICOA is applied to various real-world industrial design problems, including six specific cases. In addition, thirty high–low-dimension constraint problems, two NP problems, and one hypersonic missile trajectory planning problem are evaluated. The performance of the ICOA is methodically compared to that of classical or state-of-the-art optimization algorithms, providing insights into its efficacy and applicability across diverse problem domains.
The remainder of this study is structured as follows: Section 2 introduces the inspiration and mathematical model of the COA. Section 3 proposes the ICOA, introduces the elite chaotic difference strategy, the differential variation strategy, the Levy flight strategy, the dimensional variation strategy, and the adaptive parameter strategy, and analyzes the algorithmic complexity of the ICOA. In Section 4, we conduct numerical experiments and analyze the results of the ICOA in comparison with other optimization algorithms on the CEC2017, CEC2020, and CEC2022 test sets. In Section 5, the proposed ICOA is used to solve 6 practical industrial design problems, 30 high- and low-dimensional constrained problems, and two NP problems. Section 6 presents a discussion of this work. Finally, Section 7 provides concluding remarks and suggestions for future research directions.

2. Overview of the Crawfish Optimization Algorithm

In 2023, Heming Jia et al. [41] introduced the Crayfish Optimization Algorithm (COA), drawing inspiration from crayfish foraging, summer vacation, and competitive behaviors. The algorithm simulates scenarios where crayfish are on summer vacation, competing, or foraging, and identifies the best location based on varying temperatures. Figure 1 is a schematic diagram of crayfish derived from the literature [41].
The COA is initialized as shown in Equation (1):
$AX = [AX_1, AX_2, \ldots, AX_N] = \begin{bmatrix} AX_{1,1} & \cdots & AX_{1,j} & \cdots & AX_{1,dim} \\ \vdots & & \vdots & & \vdots \\ AX_{i,1} & \cdots & AX_{i,j} & \cdots & AX_{i,dim} \\ \vdots & & \vdots & & \vdots \\ AX_{N,1} & \cdots & AX_{N,j} & \cdots & AX_{N,dim} \end{bmatrix},$
where $AX$ is the initial population, N is the population size, dim is the number of dimensions, and $AX_{i,j}$ is the position of individual i in dimension j, whose value is obtained from Equation (2).
$AX_{i,j} = l_j + (u_j - l_j) \times r,$
where l j denotes the lower bound of the j-th dimension, u j denotes the upper bound of the j-th dimension, and r denotes a random number.
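As an illustrative sketch (not part of the original COA code), the initialization of Equations (1) and (2) can be written in Python with NumPy; the function name `init_population` is our own:

```python
import numpy as np

def init_population(N, dim, lb, ub, rng=None):
    """Random initialization of the crayfish population, Eqs. (1)-(2):
    AX[i, j] = l_j + (u_j - l_j) * r, with r ~ U(0, 1)."""
    rng = np.random.default_rng() if rng is None else rng
    lb = np.broadcast_to(np.asarray(lb, dtype=float), (dim,))
    ub = np.broadcast_to(np.asarray(ub, dtype=float), (dim,))
    r = rng.random((N, dim))          # one uniform random number per entry
    return lb + (ub - lb) * r         # N x dim population matrix AX
```

Each row of the returned matrix is one crayfish, guaranteed to lie within the per-dimension bounds.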
Changes in temperature affect the behavior of crayfish at different stages, which are defined by Equation (3).
t = r × 15 + 20 ,
where t denotes the ambient temperature and r is a random number.
$p = C_1 \times \frac{1}{\sqrt{2\pi}\,\sigma} \times \exp\left(-\frac{(t-\mu)^2}{2\sigma^2}\right),$
where µ is the temperature most suitable for crayfish, p is the foraging intake, and σ and C1 are used to control the intake of crayfish at different temperatures.
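The temperature and intake model of Equations (3) and (4) can be sketched as follows; the parameter values μ = 25, σ = 3, and C1 = 0.2 are assumptions for illustration, since the section does not fix them:

```python
import numpy as np

def temperature(rng):
    # Eq. (3): t = r * 15 + 20, so the ambient temperature lies in [20, 35] deg C
    return rng.random() * 15 + 20

def intake(t, mu=25.0, sigma=3.0, C1=0.2):
    # Eq. (4): Gaussian-shaped foraging intake p, peaked at the ideal temperature mu
    return C1 * (1.0 / (np.sqrt(2.0 * np.pi) * sigma)) * \
        np.exp(-((t - mu) ** 2) / (2.0 * sigma ** 2))
```

The intake p is maximal at μ and decays symmetrically as the temperature moves away from it.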

2.1. Summer Vacation

When the temperature exceeds a critical point, the crayfish will enter their burrows and start their summer vacation. The burrow $AX_{sh}$ is defined as follows:
$AX_{sh} = (AX_g + AX_l)/2,$
where $AX_g$ denotes the optimal position obtained so far and $AX_l$ denotes the current optimal position.
When rand < 0.5, the crayfish will avoid the heat by Equation (6), as shown in Figure 2a.
$AX_{i,j}^{t+1} = AX_{i,j}^{t} + C_2 \times r \times (AX_{sh} - AX_{i,j}^{t}),$
where t is the current iteration number, t + 1 then indicates the next iteration, and C2 is the decreasing curve, as shown in Equation (7).
$C_2 = 2 - (t/T),$
where T is the maximum value of the number of iterations.
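A minimal sketch of one summer-vacation update per Equations (5)–(7) (the function name is ours):

```python
import numpy as np

def summer_resort_step(AX_i, AX_g, AX_l, t, T, rng):
    """One summer-vacation update, Eqs. (5)-(7): the crayfish moves toward
    the burrow AX_sh with a step scaled by the decreasing curve C2."""
    AX_sh = (AX_g + AX_l) / 2.0            # Eq. (5): burrow position
    C2 = 2.0 - t / T                       # Eq. (7): decreases from 2 to 1
    r = rng.random()
    return AX_i + C2 * r * (AX_sh - AX_i)  # Eq. (6)
```

Because C2 shrinks over the iterations, steps toward the burrow become smaller as the search progresses.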

2.2. Competition Stage

As shown in Figure 2b, when temp > 30 °C and rand ≥ 0.5, crayfish compete for burrows according to Equation (8).
$AX_{i,j}^{t+1} = AX_{i,j}^{t} - AX_{z,j}^{t} + AX_{sh},$
where r is a random probability and z indexes a random individual, as defined in Equation (9).
$Z = \mathrm{round}(r \times (N - 1)) + 1.$
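The competition move of Equations (8) and (9) can be sketched as follows (0-based indexing replaces the 1-based index of Eq. (9); the function name is ours):

```python
import numpy as np

def competition_step(AX, i, AX_sh, rng):
    """Burrow competition, Eqs. (8)-(9): individual i updates its position
    relative to a randomly chosen rival z and the burrow AX_sh."""
    N = AX.shape[0]
    z = int(round(rng.random() * (N - 1)))  # Eq. (9), shifted to a 0-based index
    return AX[i] - AX[z] + AX_sh            # Eq. (8)
```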

2.3. Foraging Stage

When the temperature is at most 30 °C, it is the right time for crayfish to forage for food. At this point, the food position $AX_{food}$ is given by Equation (10):
$AX_{food} = AX_{G},$
The size of the food, denoted as AQ, is defined as follows:
$AQ = C_3 \times \mathrm{rand} \times (fitness_i / fitness_{food}),$
where $C_3$ denotes the largest food size, with a constant value of 3, $fitness_i$ is the fitness value of the i-th individual, and $fitness_{food}$ is the fitness value of the food position.
When AQ > (C3 + 1)/2, crayfish forage for food through Equation (12).
$AX_{food} = \exp\left(-\frac{1}{AQ}\right) \times AX_{food},$
After shredding, the crayfish eats the food, as shown in Equation (13).
$AX_{i,j}^{t+1} = AX_{i,j}^{t} + AX_{food} \times p \times (\cos(2\pi r) - \sin(2\pi r)),$
When AQ ≤ (C3 + 1)/2, the food is small enough to eat directly, as shown in Equation (14).
$AX_{i,j}^{t+1} = (AX_{i,j}^{t} - AX_{food}) \times p + p \times \mathrm{rand} \times AX_{i,j}^{t}.$
At this point, the crayfish has completed all of its behavioral manifestations, that is, it has completed the theoretical process of the COA.
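The whole foraging stage of Equations (10)–(14) can be sketched in one function ($C_3 = 3$ as stated in the text; the function name is ours, and `fit_food` must be nonzero):

```python
import numpy as np

def foraging_step(AX_i, AX_food, fit_i, fit_food, p, C3=3.0, rng=None):
    """Foraging stage, Eqs. (10)-(14): large food is shredded (Eq. (12))
    before being eaten (Eq. (13)); small food is eaten directly (Eq. (14))."""
    rng = np.random.default_rng() if rng is None else rng
    AQ = C3 * rng.random() * (fit_i / fit_food)        # Eq. (11): food size
    if AQ > (C3 + 1.0) / 2.0:                          # food too large to eat whole
        AX_food = np.exp(-1.0 / AQ) * AX_food          # Eq. (12): shred the food
        r = rng.random()
        return AX_i + AX_food * p * (np.cos(2 * np.pi * r)
                                     - np.sin(2 * np.pi * r))  # Eq. (13)
    return (AX_i - AX_food) * p + p * rng.random() * AX_i      # Eq. (14)
```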
Algorithm 1 is given as pseudo-code for the COA.
Algorithm 1: Crayfish optimization algorithm
Begin
     Step 1: Initialization. Set the parameters of the crayfish population.
     Step 2: Fitness calculation. By calculating the fitness value of the initialized population to get X g , X l .
     Step 3: while termination criteria are not met do
                  Defining temperature temp by Equation (3)
                      if temp > 30 do
                      Define cave X_sh according to Equation (5)
                        if rand < 0.5 do
                          Crayfish conducts the summer resort stage according to equation (6)
                        else
                            Crayfish compete for caves through Equation (8)
                      end
                  else
                          p and AQ can be found from Equation (4) and Equation (11), respectively.
                          if AQ > 2 do
                              Crayfish shreds food by Equation (12)
                              Crayfish foraging according to Equation (13)
                          else
                                 Crayfish foraging according to Equation (14)
                          end
                  end
                  Update fitness value and output X g , X l .
          end while

3. Improved Crayfish Optimization Algorithm with Mixed Strategies

Although the COA has advantages, such as strong optimization ability and fast convergence speed, it is prone to falling into local optima, which lowers its computational accuracy and limits its optimization ability. To further improve the performance of the COA, this section combines the following strategies to improve the COA and propose the ICOA.

3.1. Elite Chaos Difference Strategies

Chaotic mapping is a stochastic method in nonlinear dynamical systems [42] that initializes populations by using chaotic variables instead of random variables [43]. It can therefore search the solution space more thoroughly and efficiently than a random search that relies primarily on probability [44]. For this reason, this paper adopts a well-known chaotic map: the logistic map.
The initial population is considered to be divided into three parts, which are as follows:
(1)
Elite learning selects the population according to a certain ratio as the first part of the initialization decomposition.
$EX_i^d = e \times (UB - LB) + LB, \quad i = 1, 2, \ldots, N/10,$
where N denotes the number of populations, E X i d refers to the i-th crayfish, UB and LB are the upper and lower bounds of the d-th dimension of the population, and e is the elite probability, which is taken as 0.1 in this paper.
(2)
Logistic chaotic mapping is carried out on the remaining population according to the ratio column as the second part of the initialization solution.
$x_{t+1} = \mu x_t (1 - x_t), \quad t = 1, 2, \ldots, N/5; \qquad CX_i^d = x_{t+1} \times (ub - lb) + lb,$
where ub and lb are the upper and lower bounds of the d-th dimension of the population, $\mu \in [0, 4]$, and $x \in [0, 1]$. In this paper, $\mu = 3.9$ and $x_0 = 0.5$; $x_{t+1}$ denotes the newly generated chaotic value, and $CX_i^d$ is the second part of the candidate solution.
(3)
Differential learning, which is performed on the remaining populations, randomizes the elite populations as well as the chaotically mapped populations to be updated by the differential operation to obtain the final differential population, which is the last part of the initial solution.
$t_{j+1} = t_j + 0.5 \times (EX_j - t_j), \qquad DX_i^d = lb + t_{j+1} \times (ub - lb),$
where tj is some solution of the chaotic mapping population, ub and lb are the upper and lower bounds of the d-th dimension of the population, EXj is some candidate solution of the elite learning population, and tj+1 denotes the new solution after differentiation, and D X i d is the last part of the initialization.
The candidate solutions of the above three components are the final initialized crayfish population, expressed as Equation (18).
$X_i^d = [EX_i^d, CX_i^d, DX_i^d],$
Performing elite chaotic difference initialization prior to crayfish summer vacation improves the quality of the initial population, allowing for faster access to higher-quality solutions.
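The three-part initialization of Equations (15)–(18) can be sketched as follows. This is an interpretation, not the authors' code: the split ratios follow the text (N/10 elite, N/5 chaotic, rest difference), and normalizing candidates back to [0, 1] before the difference update of Eq. (17) is our own choice:

```python
import numpy as np

def elite_chaos_diff_init(N, dim, lb, ub, e=0.1, mu=3.9, x0=0.5, rng=None):
    """Elite chaotic difference initialization, Eqs. (15)-(18)."""
    rng = np.random.default_rng() if rng is None else rng
    n_e, n_c = max(N // 10, 1), max(N // 5, 1)
    n_d = N - n_e - n_c
    # Part 1, Eq. (15): elites placed at the fraction e of the search range
    EX = np.full((n_e, dim), e * (ub - lb) + lb)
    # Part 2, Eq. (16): logistic map x_{t+1} = mu*x_t*(1-x_t), mapped to [lb, ub]
    CX = np.empty((n_c, dim))
    x = x0
    for i in range(n_c):
        for d in range(dim):
            x = mu * x * (1.0 - x)
            CX[i, d] = lb + x * (ub - lb)
    # Part 3, Eq. (17): difference between a random chaotic and a random elite
    # candidate (both normalized back to [0, 1] first -- an interpretation choice)
    DX = np.empty((n_d, dim))
    for i in range(n_d):
        t_norm = (CX[rng.integers(n_c)] - lb) / (ub - lb)
        e_norm = (EX[rng.integers(n_e)] - lb) / (ub - lb)
        DX[i] = lb + (t_norm + 0.5 * (e_norm - t_norm)) * (ub - lb)
    return np.vstack([EX, CX, DX])   # Eq. (18): final initial population
```

All three parts stay within the search bounds, so no clipping is needed afterwards.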

3.2. Differential Variation Strategy

By employing the differential mutation strategy, the algorithm effectively enhances the quality of the population. This strategy plays a crucial role in expanding the search range of the population, preventing it from getting trapped in local optima. As a result, the search ability of the algorithm is significantly improved. Therefore, this section considers two operations of differential evolutionary algorithms: the variation operation and the selection operation.
(1)
Mutation operation
To determine the mutated individual, we calculate the vector difference between two randomly chosen individuals from the population, following a scaling process. We also consider the vector synthesis of the individual that is being mutated. This approach ensures a diverse set of genetic information is incorporated into the mutated individual. The mathematical representation for the mutation operation is as follows:
$U^{t+1} = x_{r_1}(t) + F \times (x_{r_2}(t) - x_{r_3}(t)), \quad i \neq r_1 \neq r_2 \neq r_3,$
where the scaling factor F = 0.9 and x r i ( t ) denotes the ri-th individual in the t-th generation population.
(2)
Selection operation
After the mutation operation is completed, a Levy flight operation is performed on the mutant individual $U^{t+1}$, yielding $LU^{t+1}$. A greedy choice is then made between $LU^{t+1}$ and the original mutant $U^{t+1}$, and the individual with the better fitness value enters the next iteration. The greedy choice model is as follows:
$U^{t+1} = \begin{cases} LU^{t+1}, & f(LU^{t+1}) < f(U^{t+1}) \\ U^{t+1}, & f(LU^{t+1}) \geq f(U^{t+1}) \end{cases},$
By introducing the differential evolution strategy before the crayfish summer vacation, it is beneficial to consider other individuals in this iteration to update the population information of the current iteration, effectively avoiding the situation where the previous solution is a local optimal solution. It also ensures that better individuals will be generated after the initialization of crayfish individuals and randomly learn from the previous generation of individuals, which increases the diversity of the population, expands the search range of the population, effectively avoids premature stagnation of the algorithm, and enhances its ability to jump out of the local optimum. Figure 3 shows the crayfish undergoing the mutation operation.
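A sketch of the mutation operation of Equation (19) combined with a greedy selection (F = 0.9 as stated in the text). For brevity this compares the mutant directly against the target individual rather than against its Levy-flight variant as in Eq. (20); `f_obj` and the function name are ours:

```python
import numpy as np

def de_mutate_select(X, fitness, f_obj, F=0.9, rng=None):
    """Differential mutation, Eq. (19), followed by a greedy selection in the
    spirit of Eq. (20); f_obj is the objective function to be minimized."""
    rng = np.random.default_rng() if rng is None else rng
    N = X.shape[0]
    X_new, fit_new = X.copy(), fitness.copy()
    for i in range(N):
        # three distinct random indices, all different from i
        r1, r2, r3 = rng.choice([j for j in range(N) if j != i],
                                size=3, replace=False)
        U = X[r1] + F * (X[r2] - X[r3])   # Eq. (19): mutant vector
        fU = f_obj(U)
        if fU < fit_new[i]:               # greedy choice: keep the better one
            X_new[i], fit_new[i] = U, fU
    return X_new, fit_new
```

By construction, no individual's fitness can get worse after this step.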

3.3. Levy Flight Strategy

Levy flight refers to the use of Levy flight distribution to simulate the random process of flight paths [45,46], which can describe complex random motion trajectories and phenomena with remote correlation and multi-scale properties. The crawfish burrowing process based on this strategy is shown in Equation (21).
$X_{new} = X_i + \gamma \, Levy(\beta) \oplus (X_f - X_i),$
where $X_{new}$ is the new position obtained after the Levy flight, $X_i$ is the i-th crayfish, $X_f$ is the cave the crayfish is heading to, $\gamma$ is the step scaling factor, generally taken as 0.01, $Levy(\beta)$ is the Levy random path defined in Equation (22), and $\oplus$ denotes the element-wise (dot) product operation.
$Levy(\beta) \sim u = t^{-\beta}, \quad 1 < \beta \leq 3,$
For ease of computation, define the random numbers as follows,
$s = \frac{u}{|v|^{1/\beta}},$
$u \sim N(0, \sigma_u^2), \quad v \sim N(0, 1),$
$\sigma_u = \left[\frac{\Gamma(1+\beta)\sin(\pi\beta/2)}{\Gamma\left(\frac{1+\beta}{2}\right)\,\beta\,2^{(\beta-1)/2}}\right]^{1/\beta},$
where u and v both obey normal (Gaussian) distributions, as in Equation (24); $\sigma_u$ is defined in Equation (25), and $\Gamma(x)$ is the gamma function, with $\beta$ taking the value 1.5. Figure 4 shows a plot of a simulated Levy flight trajectory.
The Levy flight strategy greatly expanded the range of the crayfish’s search when they explored their burrows during the summer vacation, increasing the variety of the search process. The search ability of the algorithm is improved, and the ability to jump out of the local optimal is enhanced.
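The Mantegna-style step of Equations (23)–(25) and the burrow move of Equation (21) can be sketched as follows (γ = 0.01 and β = 1.5 as in the text; function names are ours):

```python
import numpy as np
from math import gamma, pi, sin

def levy_step(dim, beta=1.5, rng=None):
    """Mantegna-style Levy step, Eqs. (23)-(25): s = u / |v|^(1/beta)."""
    rng = np.random.default_rng() if rng is None else rng
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)    # u ~ N(0, sigma^2), Eq. (24)
    v = rng.normal(0.0, 1.0, dim)      # v ~ N(0, 1)
    return u / np.abs(v) ** (1 / beta)

def levy_burrow_move(X_i, X_f, gamma_scale=0.01, beta=1.5, rng=None):
    # Eq. (21): X_new = X_i + gamma * Levy(beta) (element-wise) (X_f - X_i)
    return X_i + gamma_scale * levy_step(X_i.size, beta, rng) * (X_f - X_i)
```

The heavy-tailed step distribution occasionally produces long jumps, which is precisely what lets the crayfish escape local optima.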

3.4. Dimensional Variation and Adaptive Parameter Strategy

3.4.1. Dimensional Variation Strategy

For the new population generated after the differential evolution strategy, the t-distribution variational operator is introduced to perturb the optimal individual position [47]. The degrees-of-freedom parameter of the t-distribution operator varies with the number of iterations. The dimensional variation strategy is defined as follows: assuming the dimension equals d and the current optimal solution is $g_{best} = (g_{best}^1, g_{best}^2, \ldots, g_{best}^d)$, the new solution $g_{new} = (g_{new}^1, g_{new}^2, \ldots, g_{new}^d)$, obtained by mutating the current optimal solution dimension by dimension, is computed as follows:
$g_{new}^d = g_{best}^d + TD(t)_d \times g_{best}^d,$
where t is the current number of iterations, TD ( t ) is the t-distribution with a parameter t of degrees of freedom, and TD ( t ) d is the random number generated by the t-distribution in the d-th dimension. Since it is impossible to directly judge whether the new position obtained after mutation is better than the original position, this paper uses the principle of greed to judge whether to accept the new position instead of the original optimal position. The greedy principle is used to guide the population to better evolve to the optimal individual position and improve the convergence speed of the algorithm.
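The mutation of Equation (26) with the greedy acceptance described above can be sketched as follows (the function name and the minimization assumption are ours):

```python
import numpy as np

def t_dist_mutation(g_best, t_iter, f_obj, rng=None):
    """Dimension-wise t-distribution mutation of the best individual,
    Eq. (26), with greedy acceptance; f_obj is minimized."""
    rng = np.random.default_rng() if rng is None else rng
    df = max(int(t_iter), 1)   # degrees of freedom grow with the iteration count
    g_new = g_best + rng.standard_t(df, size=g_best.size) * g_best   # Eq. (26)
    # greedy principle: accept the mutant only if it improves the objective
    return g_new if f_obj(g_new) < f_obj(g_best) else g_best
```

Early on (small degrees of freedom) the t-distribution has heavy tails and perturbs strongly; as iterations grow, it approaches a standard normal, giving finer local refinement.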

3.4.2. Adaptive Parameter Strategy

Inspired by the crayfish entering the burrow during summer vacation, a variable C1 is set to control the crayfish competition for the burrow, and the decreasing curve C is transformed and applied to the competition phase. For the two random individuals Xi and Xj in the competition stage, vector difference is carried out through Equation (27), and adaptive parameter changes are carried out, and then the original XFj is added to finally obtain the new position XNi,j.
$XN_{i,j} = C \times (X_i - X_j) + XF_j,$
$C_1 = F \times 2^{\lambda},$
$\lambda = e^{\,1 - \frac{T}{T + 1 - t}},$
where C represents the adaptive parameter, F is set to 0.4, and t and T represent the current and maximum number of iterations, respectively.
The adaptive parameter strategy greatly improves the global search capability of the algorithm during the crayfish competition phase and drastically improves the convergence ability of the algorithm. Figure 5 shows the variation in the C and C1 curves.
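The adaptive parameter curve of Equations (28) and (29), as reconstructed here from the pseudo-code, can be sketched as follows (F = 0.4 as stated in the text):

```python
import numpy as np

def adaptive_params(t, T, F=0.4):
    """Adaptive competition parameters, Eqs. (28)-(29):
    lamuda = e^(1 - T/(T + 1 - t)),  C1 = F * 2^lamuda."""
    lamuda = np.exp(1.0) ** (1.0 - T / (T + 1.0 - t))   # Eq. (29)
    C1 = F * 2.0 ** lamuda                              # Eq. (28)
    return C1, lamuda
```

At t = 1 the exponent is zero, so λ = 1 and C1 = 2F; as t approaches T, λ decays toward zero and C1 decreases toward F, shifting the competition phase from exploration to exploitation.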

3.5. ICOA Pseudo-Code

Algorithm 2 gives the pseudo-code of the ICOA, and the flowchart is shown in Figure 6.
Algorithm 2: The proposed ICOA
Begin
     Step1: Initialization. Crayfish populations were initialized using the elite chaos differential strategy (i.e., Equation (18)).
     Step2: Fitness calculation. By calculating the population fitness value fitness, the optimal fitness f b e s t value as well as the corresponding individuals X b e s t were recorded;
     Step3: While (t < T) do
                     Defining temperature temp by Equation (3)
                         for i = 1 to N do
                           U^{t+1} = x_{r1}(t) + F × (x_{r2}(t) − x_{r3}(t)), i ≠ r1 ≠ r2 ≠ r3     //Mutation operation
                               U^{t+1} = LU^{t+1} if f(LU^{t+1}) < f(U^{t+1}), otherwise U^{t+1}     //Selection operation
                         end for
                              g n e w d = g b e s t d + T D ( t ) d × g b e s t d  //Dimensional variation
                         for i = 1 to N do
                             if temp > 30 do        //Summer resort stage and competition stage
                                  if rand < 0.5 do
                                          X_new = X_i + γ · Levy(β) ⊕ (X_f − X_i)
                                  else                 //Competition stage
                                    for j = 1 to Dim do
                                        z = round(rand × (N − 1)) + 1
                                        XN_{i,j} = C × (X_i − X_j) + XF_j
                                        C1 = F × 2^λ
                                        λ = e^(1 − T/(T + 1 − t))
                                 end for
                               end
                              else             //Foraging stage
                                 p = C1 × (1/(√(2π) × σ)) × exp(−(temp − μ)²/(2σ²))
                                 if Q > 2 do
                                     X_food = exp(−1/Q) × X_food
                                 else
                                     X_{i,j}^{t+1} = (X_{i,j}^t − X_food) × p + p × rand × X_{i,j}^t
                                 end
                         end for
                         Calculate and rank the fitness values.
                         Update X n e w
                             t = t+1
        Step4: Return. Return the optimum position X b e s t and fitness value f ( X b e s t ) of Crayfish
End

3.6. Time Complexity of the ICOA

The time complexity of the ICOA depends on the population size N, the dimension D, the objective function evaluation cost f per iteration, and the maximum number of iterations T. The time complexity of the ICOA is calculated from these factors, as shown in Equation (30). Figure 6 shows the flowchart of the ICOA.
$O(\mathrm{ICOA}) = O(\mathrm{norms}) + O(\mathrm{initialize}) + O(\mathrm{DE}) + O(\mathrm{cost}) + O(\mathrm{update}) = O(1) + O(ND) + O(TND) + O(TNf) + O(TND) = O(1 + ND + 2TND + TNf),$
Since the parameter T is generally set very large, the simplified expression is given in Equation (31):
$O(\mathrm{ICOA}) \approx O(TN(2D + f)).$

4. Numerical Experiment and Analysis

In this section, a set of test functions is employed to assess and analyze the performance of the ICOA. The CEC 2017, CEC 2020, and CEC 2022 test sets, consisting of 29, 10, and 12 test functions, respectively, are used to compare the results of the ICOA with those of other classical or recent optimization algorithms. The population size for all algorithms is 50, while the dimension is set at 10 with a maximum number of iterations of 1000. To minimize the influence of randomness on the algorithms, each algorithm is executed independently for 20 trials on the test functions, and the outcomes are compared among the primary algorithm sets.

4.1. ICOA Is Compared with the First Group of Optimization Algorithms

4.1.1. Comparison of the Test Set CEC 2020

In this section, a comparative analysis is conducted between the proposed ICOA and nine other intelligent optimization algorithms. The selected algorithms can be categorized into three groups: (1) classical intelligent optimization algorithms, such as the PSO [14] and the DE [27]; (2) intelligent optimization algorithms newly proposed in the last year, including the Fennec Fox Algorithm (FFA) [48], the Chernobyl Disaster Optimizer (CDO) [49], and the original Crayfish Optimization Algorithm (COA) [41]; and (3) representative intelligent optimization algorithms of recent years, including the Gray Wolf Optimization Algorithm (GWO) [50], the HHO [20], the Zebra Optimization Algorithm (ZOA) [51], and the Sparrow Search Algorithm (SSA) [52]. Table 1 provides the initial parameters of all optimization algorithms.
Table 2 presents the findings from 20 separate runs of the ICOA and the other algorithms on the 10-dimensional CEC 2020 test set. The results include the mean, standard deviation, and p-value obtained from the Wilcoxon rank sum test. The ICOA serves as the benchmark, and the statistical outcomes are reported based on 20 runs at a 95% confidence level (α = 0.05). Here, "+" means that the ICOA performs significantly worse than the compared algorithm, "=" means that there is no significant difference between the ICOA and the compared algorithm, and "−" means that the ICOA performs significantly better. The bolded data represent the optimal average values and minimum variances of the algorithms on each test function.
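The sign convention above can be reproduced with a rank sum test. The sketch below uses the normal approximation to the Wilcoxon rank sum statistic (adequate for 20-run samples, and ignoring the tie correction for brevity); the two fitness samples are illustrative, not the paper's data.

```python
import math

def wilcoxon_rank_sum_p(x, y):
    # Two-sided p-value via the normal approximation to the rank sum statistic.
    pooled = sorted(x + y)

    def avg_rank(v):
        # Tied values share the average of the ranks they occupy.
        lo = pooled.index(v) + 1
        hi = lo + pooled.count(v) - 1
        return (lo + hi) / 2

    n1, n2 = len(x), len(y)
    w = sum(avg_rank(v) for v in x)              # rank sum of the first sample
    mu = n1 * (n1 + n2 + 1) / 2                  # mean of W under H0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (w - mu) / sigma
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Illustrative final-fitness samples for two hypothetical minimizers (not paper data).
algo_a = [0.11, 0.12, 0.10, 0.13, 0.11, 0.12, 0.10, 0.12, 0.11, 0.13]
algo_b = [0.52, 0.49, 0.55, 0.51, 0.48, 0.53, 0.50, 0.54, 0.52, 0.49]

p = wilcoxon_rank_sum_p(algo_a, algo_b)
# "-": algo_a significantly better (lower fitness); "=": no significant difference.
mark = "-" if p < 0.05 and sum(algo_a) / 10 < sum(algo_b) / 10 else "="
```

With 20-run samples and a significance level of 0.05, clearly separated samples yield a "-" mark, while comparing a sample against itself gives p = 1.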
According to the results shown in Table 2, the ICOA performs best on all ten test functions, and its optimization ability is better than that of the original COA in every case. This indicates that the four strategies added to the original COA have a positive effect, improving both convergence speed and computational accuracy. For test function F1, the worst value, mean, and standard deviation of the ICOA are significantly smaller than those of the other optimization algorithms, and these values are also very close to the optimum. In addition, based on the average ranking, the ICOA is ranked first (rank = 1). The last row of Table 2 shows the results of the Wilcoxon rank sum test and the ranking of the algorithms. The results for the COA, DE, PSO, CDO, FFA, SSA, HHO, and ZOA are 0/0/10, while the result for the GWO is 0/3/7. The comprehensive analysis shows that the ICOA outperforms the other comparative algorithms on the CEC 2020 test set and has a strong competitive advantage.
Figure 7 shows the convergence curves of each algorithm on the CEC 2020 test set. It can be intuitively seen that the COA falls into local optima on F1, F2, F3, F5, F7, and F9, leading to premature convergence, while the convergence speed and accuracy of the ICOA are greatly improved compared with the COA. Moreover, on most test functions the ICOA does not stall in the later iterations but continues to converge. Comparing the convergence curves of the other algorithms, we can see that the ICOA has a faster convergence speed and higher convergence accuracy; on F5, F6, F7, and F10 in particular, its advantage is more obvious.
Figure 8 shows the box plots of each algorithm on the CEC 2020 test functions. On most test functions, the ICOA has a narrow box, a low position, and a small median, which indicates that the solutions obtained by the ICOA have high consistency and high precision. In addition, the ICOA produces fewer outliers on the 10 test functions, which indicates that the algorithm is less affected by the randomness of its strategies during operation. The box plots show that the ICOA is superior to the other comparison algorithms in terms of stability and accuracy.

4.1.2. Comparison on Test Set CEC 2022

In order to better verify the performance of the proposed ICOA, a new test set from 2022 was selected for testing. The experiment sets the population size N of all algorithms as 50, the dimension dim as 20, and the maximum number of iterations T as 1000.
Table 3 displays the mean, standard deviation, and Wilcoxon rank sum test p-values for the ICOA and the other optimization algorithms based on 20 independent runs on the CEC 2022 test set. The bolded data represent the optimal average values and minimum variances for each test function. As can be seen from Table 3, the optimization ability of the ICOA on the 12 test functions is significantly improved compared with that of the COA. On the test functions F1, F2, F6, F7, F8, F9, and F10, the ICOA achieves the best fitness value while also having a relatively small variance. Combining the Wilcoxon rank sum test statistics in the last row of Table 3 with the rankings of each algorithm, the test results for the nine comparison algorithms are as follows: 0/0/12, 0/0/12, 0/0/12, 0/0/12, 0/0/12, 1/0/11, 1/1/10, 0/0/12, and 0/1/11. It can be seen that the ICOA outperforms the COA on all the test functions. In addition, in Table 3, rank refers to the ranking of each algorithm on each test function based on the average value, mean rank is the average ranking over the 12 functions, and result is the final ranking of each algorithm on the CEC 2022 test set. The ICOA ranks first on 10 functions, as well as in mean rank and in the final result. This shows that the ICOA performs better not only on most individual test functions but also on the CEC 2022 test set as a whole. In conclusion, the ICOA greatly improves the performance of the original algorithm and demonstrates significant potential in solving the CEC 2022 test set.
To further demonstrate the performance of the proposed ICOA, we also apply Student's t-test [53,54] for statistical evaluation. T-p in Table 3 is the p-value of the t-test, and the statistical results are based on 20 runs of the ICOA at the 95% confidence level (α = 0.05). The results show that the ICOA differs significantly from the original COA on 83.3% of the test functions (10 of the 12). For the other comparison algorithms, the proportions are 100%, 58.3%, 100%, 100%, 100%, 91.7%, 58.3%, 100%, and 91.7%. It can be seen that the ICOA is significant at above 90% for most of the other algorithms, with only two algorithms at 58.3% on the CEC 2022 test set. In conclusion, the ICOA improves the performance of the original algorithm in all aspects.
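A minimal sketch of this kind of t-test comparison, using Welch's form of the t statistic (robust to unequal variances) and comparing |t| against an approximate two-sided 5% critical value of about 2.02 for roughly 38 degrees of freedom. The two 20-run samples are illustrative, not the paper's data.

```python
import math

def welch_t(x, y):
    # Welch's t statistic for two independent samples (unequal variances).
    n1, n2 = len(x), len(y)
    m1, m2 = sum(x) / n1, sum(y) / n2
    v1 = sum((v - m1) ** 2 for v in x) / (n1 - 1)   # sample variances
    v2 = sum((v - m2) ** 2 for v in y) / (n2 - 1)
    return (m1 - m2) / math.sqrt(v1 / n1 + v2 / n2)

# Illustrative 20-run fitness samples (not the paper's data).
icoa_like = [0.10 + 0.01 * (i % 3) for i in range(20)]
rival     = [0.50 + 0.02 * (i % 5) for i in range(20)]

t = welch_t(icoa_like, rival)
significant = abs(t) > 2.02   # approx. two-sided 5% critical value, df about 38
```

For an exact p-value, the t statistic would be evaluated against the t-distribution CDF; the critical-value comparison above is a common shortcut at a fixed significance level.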
Figure 9 and Figure 10 show the convergence plots and box plots for the 12 test functions, respectively. As can be seen from Figure 9, the ICOA has a faster convergence speed and higher accuracy on test functions F1, F3, F7, and F10, and continues to converge in the later iterations. On test functions F2, F4, F6, F9, F11, and F12, although the convergence speed of the ICOA is not particularly fast, it is clearly superior to the other comparison algorithms in computational accuracy. In general, when solving the CEC 2022 test functions, the ICOA performs well on 90% of them and is superior to the other comparison algorithms in terms of calculation accuracy and convergence speed. As Figure 10 shows, the ICOA produces relatively few outliers, and its median is superior on most of the test functions. Moreover, on the F3, F5, F7, F11, and F12 test functions, the box area is relatively narrow and its position is lower. In general, the ICOA has high stability and good solution accuracy.

4.2. ICOA Compared with the Second Group of Optimization Algorithms

4.2.1. Comparison on CEC 2020 Test Set

In order to further validate the performance of the proposed ICOA, a comparative analysis is conducted with nine other intelligent optimization algorithms on the CEC 2020 test set: (1) improved classical algorithms, including the Gaussian Quantum Particle Swarm Optimization (GQPSO) [55] and the Self-adaptive Differential Evolution algorithm (SaDE) [56]; (2) algorithms proposed in recent years, including the Grasshopper Optimization Algorithm (GOA) [57], the Whale Optimization Algorithm (WOA) [58], the Arithmetic Optimization Algorithm (AOA) [59], and the original crayfish optimization algorithm [41]; (3) improved high-performance algorithms, including the Improved Grey Wolf Optimizer (IGWO) [60] and the Improved Sparrow Search Algorithm (ISSA) [61]; and (4) the Tree Growth Algorithm (TTA) [62], proposed in 2018 and since cited more than 200 times. The experiments were set for all algorithms with a population size of 50, a dimension of 10, and a maximum number of iterations of 1000. Table 4 shows the initial parameters of all the optimization algorithms used in this section.
Table 5 shows the worst, best, mean, std, and Wilcoxon rank sum test p-values obtained by the ICOA and the second group of optimization algorithms after 20 independent runs on the CEC 2020 test set; the bolded data are the optimal mean and minimum variance for each test function. It can be seen from Table 5 that the optimization ability of the ICOA on the ten test functions is greatly improved compared with the COA. On test functions F1, F4, F5, F7, and F10, the ICOA attains the minimum mean fitness with the smallest standard deviation. On F3 and F6, the IGWO reaches the optimal fitness value with a small variance. In terms of average rank, SaDE, the ICOA, the IGWO, and the WOA perform better because their average ranks are less than 5. The ICOA is in the top three on nine test functions, with an average rank of 1.6, ranking first. Therefore, the performance order of the ten algorithms in solving the CEC 2020 test functions is ICOA > IGWO > SaDE > WOA > ISSA > AOA > COA > TTA > GOA > GQPSO. According to the statistical results of the Wilcoxon rank sum test in the last row of Table 5 and the ranking of each algorithm, the test results of the nine comparison algorithms are as follows: 0/0/10, 1/0/9, 0/0/10, 0/0/10, 0/0/10, 0/0/10, 0/1/9, 0/0/10, and 1/0/9. It can be seen that the ICOA is superior to the COA on all 10 test functions. In summary, the ICOA effectively improves the performance of the original algorithm and shows strong ability in solving the CEC 2020 test set.
Figure 11 and Figure 12 show the convergence plot and box plot for the 10 test functions, respectively. As can be seen from Figure 11, on the test functions F4, F5, F7, F8, F9, and F10, the ICOA has a faster convergence speed and higher accuracy, and has been converging in the later stage. When solving CEC 2020 test functions, the ICOA performs well on most test functions, and is superior to other comparison algorithms in calculation accuracy and convergence speed. From Figure 12, it can be seen that the ICOA generates fewer outliers. In most test functions, the median of the ICOA is better, the box area of the F1, F4, F5, F7, and F8 test functions is narrow, and the position is lower. In general, the ICOA has high stability and good solving precision.

4.2.2. Comparison on Test Set CEC 2017

Comparison experiments on different test sets are more convincing, so this section reports comparison experiments on the CEC 2017 test set. The experiments set the population size of all algorithms to pop = 50, the dimension to dim = 10, and the maximum number of iterations to T = 1000.
Table 6 shows the worst, best, mean, std, and Wilcoxon rank sum test p-values obtained by the ICOA and the other optimization algorithms after 20 independent runs on the CEC 2017 test set; the bolded data are the optimal mean and minimum variance for each test function. As can be seen from Table 6, the ICOA is greatly improved over the COA on the 29 test functions. On F1, F4, F7, F8, F9, F11, F12, F13, F14, F18, F19, F20, and F29, the ICOA reaches the optimal fitness value, and the variance is also small. On F15, F17, and F26, although the minimum variance is not reached, the optimal average value is achieved. In terms of average rank, the IGWO, ICOA, SaDE, and WOA perform better because their average ranks are less than 5. The ICOA is in the top three on twenty-eight test functions, in the top two on twenty-five, and in first place on fifteen, with an average rank of 1.655, ranking first. According to the statistical results of the Wilcoxon rank sum test in the last row of Table 6, combined with the ranking of each algorithm, the test results of the nine comparison algorithms are as follows: 0/0/29, 0/2/27, 0/0/29, 1/1/27, 0/0/29, 1/2/26, 3/2/24, 0/1/28, and 1/1/27. It can be seen that the ICOA is superior to the COA on all 29 test functions. In summary, the ICOA effectively improves the performance of the original algorithm and shows strong ability in solving the CEC 2017 test set.
Figure 13 shows the convergence curves of each algorithm on the CEC 2017 test set. It can be visually seen that the COA falls into local optima and converges prematurely on F3, F10, F16, F18, F19, F20, F24, F29, and F30, while the convergence speed and accuracy of the ICOA are greatly improved compared with the COA. Moreover, on most test functions the ICOA does not stall in the later iterations but continues to converge. Comparing the convergence curves of the other algorithms, we can see that the ICOA has a faster convergence speed and higher convergence accuracy; on F1, F8, F12, F13, F25, F26, and F29 in particular, its advantage is more obvious.
Figure 14 shows the box plots of each algorithm on the CEC 2017 test functions. It can be seen that on most test functions, the ICOA has a narrow box, a low position, and a small median, which indicates that the solutions obtained by the ICOA have high consistency and high precision. In addition, the ICOA produces fewer outliers on the 29 test functions, which indicates that the algorithm is less affected by strategy randomness during operation. The box plots show that the ICOA is superior to the other comparison algorithms in terms of stability and accuracy.

5. ICOA Solves All Kinds of Optimization Problems

5.1. ICOA Solves Engineering Optimization Problems

This section assesses the efficiency of the ICOA in solving real-world engineering optimization problems by comparing it with other high-performance algorithms. For the six engineering problems considered, a penalty function is first used to transform the constrained problems into unconstrained ones before the comparison. We set the population size, maximum number of iterations, and number of independent runs for all optimization algorithms to 40, 1000, and 30, respectively. The data in bold in this section are the optimal values obtained for each problem.
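The constraint handling described above can be sketched as a static penalty wrapper. The quadratic penalty term and the factor rho = 1e6 below are assumptions for illustration, not necessarily the paper's exact scheme.

```python
def penalized(f, inequality_constraints, rho=1e6):
    """Turn `min f(x) s.t. g_i(x) <= 0` into an unconstrained objective by
    adding rho * sum(max(0, g_i(x))^2). Sketch only; rho is an assumed value."""
    def wrapped(x):
        violation = sum(max(0.0, g(x)) ** 2 for g in inequality_constraints)
        return f(x) + rho * violation
    return wrapped

# Tiny example: minimize x^2 subject to x >= 1, i.e., g(x) = 1 - x <= 0.
obj = penalized(lambda x: x * x, [lambda x: 1 - x])
```

Feasible points are left untouched, while violations are penalized so heavily that an unconstrained optimizer is driven back into the feasible region.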

5.1.1. Speed Reducer Design Problem

The objective of this problem is to minimize the mass of a speed reducer subject to eleven design constraints; Figure 15 shows a schematic diagram of the problem, and the mathematical model is as follows:
Set the following:
$$IC = [\,ic_1\ ic_2\ ic_3\ ic_4\ ic_5\ ic_6\ ic_7\,]$$
Minimize
$$f(IC) = 0.7854\,ic_1 ic_2^2\left(3.3333\,ic_3^2 + 14.9334\,ic_3 - 43.0934\right) - 1.508\,ic_1\left(ic_6^2 + ic_7^2\right) + 7.4777\left(ic_6^3 + ic_7^3\right) + 0.7854\left(ic_4 ic_6^2 + ic_5 ic_7^2\right)$$
Subject to
$$
\begin{aligned}
&g_1(IC) = \frac{27}{ic_1 ic_2^2 ic_3} - 1 \le 0, \quad
g_2(IC) = \frac{397.5}{ic_1 ic_2^2 ic_3^2} - 1 \le 0, \\
&g_3(IC) = \frac{1.93\,ic_4^3}{ic_2 ic_3 ic_6^4} - 1 \le 0, \quad
g_4(IC) = \frac{1.93\,ic_5^3}{ic_2 ic_3 ic_7^4} - 1 \le 0, \\
&g_5(IC) = \frac{1}{110\,ic_6^3}\sqrt{\left(\frac{745\,ic_4}{ic_2 ic_3}\right)^2 + 16.9\times 10^6} - 1 \le 0, \\
&g_6(IC) = \frac{1}{85\,ic_7^3}\sqrt{\left(\frac{745\,ic_5}{ic_2 ic_3}\right)^2 + 157.5\times 10^6} - 1 \le 0, \\
&g_7(IC) = \frac{ic_2 ic_3}{40} - 1 \le 0, \quad
g_8(IC) = \frac{5\,ic_2}{ic_1} - 1 \le 0, \quad
g_9(IC) = \frac{ic_1}{12\,ic_2} - 1 \le 0, \\
&g_{10}(IC) = \frac{1.5\,ic_6 + 1.9}{ic_4} - 1 \le 0, \quad
g_{11}(IC) = \frac{1.1\,ic_7 + 1.9}{ic_5} - 1 \le 0.
\end{aligned}
$$
The boundaries are as follows:
$$2.6 \le ic_1 \le 3.6,\quad 0.7 \le ic_2 \le 0.8,\quad 17 \le ic_3 \le 28,\quad 7.3 \le ic_4 \le 8.3,\quad 7.3 \le ic_5 \le 8.3,\quad 2.9 \le ic_6 \le 3.9,\quad 5 \le ic_7 \le 5.5.$$
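For reference, the model above translates directly into code. The sketch below evaluates the objective and the eleven constraints (a value less than or equal to zero means the constraint is satisfied), using the standard speed-reducer coefficients, and checks a well-known near-optimal design from the literature; the test point is illustrative, not a result claimed by the paper.

```python
import math

def reducer_cost(ic):
    ic1, ic2, ic3, ic4, ic5, ic6, ic7 = ic
    return (0.7854 * ic1 * ic2**2 * (3.3333 * ic3**2 + 14.9334 * ic3 - 43.0934)
            - 1.508 * ic1 * (ic6**2 + ic7**2)
            + 7.4777 * (ic6**3 + ic7**3)
            + 0.7854 * (ic4 * ic6**2 + ic5 * ic7**2))

def reducer_constraints(ic):
    # g_i(ic) <= 0 means constraint i is satisfied.
    ic1, ic2, ic3, ic4, ic5, ic6, ic7 = ic
    return [
        27 / (ic1 * ic2**2 * ic3) - 1,
        397.5 / (ic1 * ic2**2 * ic3**2) - 1,
        1.93 * ic4**3 / (ic2 * ic3 * ic6**4) - 1,
        1.93 * ic5**3 / (ic2 * ic3 * ic7**4) - 1,
        math.sqrt((745 * ic4 / (ic2 * ic3))**2 + 16.9e6) / (110 * ic6**3) - 1,
        math.sqrt((745 * ic5 / (ic2 * ic3))**2 + 157.5e6) / (85 * ic7**3) - 1,
        ic2 * ic3 / 40 - 1,
        5 * ic2 / ic1 - 1,
        ic1 / (12 * ic2) - 1,
        (1.5 * ic6 + 1.9) / ic4 - 1,
        (1.1 * ic7 + 1.9) / ic5 - 1,
    ]

# Well-known near-optimal design from the literature (illustrative only).
x_near = (3.5, 0.7, 17.0, 7.3, 7.715317, 3.350541, 5.286654)
```

At this point, the cost is close to 2994.5 and every constraint is satisfied to within a small numerical tolerance.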
Table 7 shows the optimal value and corresponding design variables obtained by the ICOA and the other optimization algorithms for the speed reducer design problem. The comparison algorithms are GWO [51], HHO [20], DO [63], WFO [64], GOA, SSA, FFA [49], and AOA. Table 8 further gives the statistical results for this problem. It can be seen from the table that the ICOA obtains an optimal cost of 2996.348165, the best result among the compared algorithms for the speed reducer design problem.

5.1.2. Hydrodynamic Thrust Bearing Design Problems

The primary aim of this design problem is to minimize the power loss in the bearings [65]. Figure 16 illustrates this issue. This objective is mathematically expressed as follows:
Minimize
$$f(coa) = \frac{q P_0}{0.7} + E_f$$
Subject to
$$
\begin{aligned}
&c_1(coa) = 1000 - P_0 \ge 0, \quad c_2(coa) = W - 101000 \ge 0, \\
&c_3(coa) = 5000 - \frac{W}{\pi\left(r^2 - r_0^2\right)} \ge 0, \quad c_4(coa) = 50 - k \ge 0, \\
&c_5(coa) = 0.001 - \frac{0.0307}{386.4\,P_0}\cdot\frac{q}{2\pi r h} \ge 0, \\
&c_6(coa) = r - r_0 \ge 0, \quad c_7(coa) = h - 0.001 \ge 0.
\end{aligned}
$$
where
$$
W = \frac{\pi P_0}{2}\cdot\frac{r^2 - r_0^2}{\ln(r/r_0)},\quad
P_0 = \frac{6\mu q}{\pi h^3}\ln\frac{r}{r_0},\quad
E_f = 9336\,q \times 0.0307 \times 0.5\,k,\quad
k = 2\left(10^{P} - 559.7\right),\quad
h = \left(\frac{2\pi \times 750}{60}\right)^2 \frac{2\pi\mu}{E_f}\left(\frac{r^4}{4} - \frac{r_0^4}{4}\right)
$$
The boundaries are as follows:
$$1 \le r \le 16,\quad 1 \le r_0 \le 16,\quad 1\times 10^{-6} \le \mu \le 16\times 10^{-6},\quad 1 \le q \le 16.$$
Table 9 presents a comparison of the optimal values and design variables achieved by the ICOA and several other optimization algorithms on the hydrostatic thrust bearing problem. These algorithms include GWO [51], HHO [20], DO, WFO, GOA, SSA, FFA [49], and AOA [59]. The statistical results for this problem are further given in Table 10. From the table, we can observe that the ICOA achieves the lowest optimal cost for this design problem, which highlights its superiority over the other algorithms in obtaining the optimal solution.

5.1.3. Welded Beam Design Problem

The main objective of this specific problem is to formulate a design for a welded beam that minimizes the total cost [66]. Figure 17 is a simulation of this problem. The problem can be mathematically represented as follows:
Minimize
$$f(COA) = 1.10471\,coa_1^2 coa_2 + 0.04811\,coa_3 coa_4\left(14 + coa_2\right)$$
Subject to
$$
\begin{aligned}
&c_1(COA) = coa_1 - coa_4 \le 0, \quad c_2(COA) = \delta(COA) - \delta_{\max} \le 0, \\
&c_3(COA) = P - P_c(COA) \le 0, \quad c_4(COA) = \tau(COA) - \tau_{\max} \le 0, \\
&c_5(COA) = \sigma(COA) - \sigma_{\max} \le 0.
\end{aligned}
$$
where
$$
\begin{aligned}
&\tau = \sqrt{(\tau')^2 + 2\tau'\tau''\frac{coa_2}{2R} + (\tau'')^2},\quad
\tau'' = \frac{MR}{J},\quad
\tau' = \frac{P}{\sqrt{2}\,coa_1 coa_2},\quad
M = P\left(L + \frac{coa_2}{2}\right),\\
&R = \sqrt{\frac{coa_2^2}{4} + \left(\frac{coa_1 + coa_3}{2}\right)^2},\quad
J = 2\left\{\sqrt{2}\,coa_1 coa_2\left[\frac{coa_2^2}{4} + \left(\frac{coa_1 + coa_3}{2}\right)^2\right]\right\},\\
&\sigma(COA) = \frac{6PL}{coa_4 coa_3^2},\quad
\delta(COA) = \frac{6PL^3}{E\,coa_3^2 coa_4},\quad
P_c(COA) = \frac{4.013\,E\,coa_3 coa_4^3}{6L^2}\left(1 - \frac{coa_3}{2L}\sqrt{\frac{E}{4G}}\right).
\end{aligned}
$$
$$L = 14\ \mathrm{in},\quad P = 6000\ \mathrm{lb},\quad E = 30\times 10^6\ \mathrm{psi},\quad \sigma_{\max} = 30000\ \mathrm{psi},\quad \tau_{\max} = 13600\ \mathrm{psi},\quad G = 12\times 10^6\ \mathrm{psi},\quad \delta_{\max} = 0.25\ \mathrm{in}.$$
The boundaries are as follows:
$$0.1 \le coa_2, coa_3 \le 10,\quad 0.1 \le coa_4 \le 2,\quad 0.125 \le coa_1 \le 2.$$
As shown in Table 11, the optimal values and the corresponding four design variables solved by the ICOA are compared with the results obtained by COA [41], GWO [51], HHO [20], SO [18], DO [63], WFO [64], GOA, SSA, ISSA, FFA [49], and AOA [59]. The statistical results of each algorithm for solving this problem are given in Table 12. By examining the results, it becomes evident that the ICOA consistently outperforms other algorithms in terms of all metrics, indicating its capability to offer higher-quality and more stable solutions for this problem. This underscores the robust adaptability of the ICOA.

5.1.4. Robot Gripper Design Optimization Problem

This design problem primarily aims to address the range between the maximum and minimum values generated by the clamping arm of the robot [67]. Figure 18 is a simulation of this problem. The problem can be formulated as follows:
Set
$$IC = [\,ic_1\ ic_2\ ic_3\ ic_4\ ic_5\ ic_6\ ic_7\,]$$
Minimize
$$f(IC) = \max_{C} F_k(IC, C) - \min_{C} F_k(IC, C)$$
Subject to
$$
\begin{aligned}
&c_1(IC) = -Y_{\min} + y(IC, C_{\max}) \ge 0,\quad
c_2(IC) = y(IC, C_{\max}) \ge 0,\\
&c_3(IC) = Y_{\max} - y(IC, 0) \ge 0,\quad
c_4(IC) = Y_G - y(IC, 0) \ge 0,\\
&c_5(IC) = l^2 + e^2 - (a + b)^2 \ge 0,\quad
c_6(IC) = b^2 - (a - e)^2 - (l - C_{\max})^2 \ge 0,\\
&c_7(IC) = C_{\max} - l \ge 0,
\end{aligned}
$$
where
$$
\begin{aligned}
&\alpha = \cos^{-1}\!\left(\frac{ic_1^2 + g^2 - ic_5^2}{2\,ic_1 g}\right) + \phi,\quad
g = \sqrt{ic_4^2 + (C - ic_6)^2},\\
&\beta = \cos^{-1}\!\left(\frac{ic_5^2 + g^2 - ic_1^2}{2\,ic_5 g}\right) - \phi,\quad
\phi = \tan^{-1}\!\left(\frac{ic_4}{ic_6 - C}\right),\\
&y(IC, C) = 2\left(ic_2 + ic_4 + ic_3 \sin(\beta + ic_7)\right),\quad
F_k = \frac{P\,ic_5 \sin(\alpha + \beta)}{2\,ic_3 \cos\alpha},\\
&Y_{\min} = 50,\quad Y_{\max} = 100,\quad Y_G = 150,\quad C_{\max} = 100,\quad P = 100.
\end{aligned}
$$
The boundaries are as follows:
$$0 \le ic_4 \le 50,\quad 100 \le ic_3 \le 200,\quad 10 \le ic_1, ic_2, ic_5 \le 150,\quad 1 \le ic_7 \le 3.14,\quad 100 \le ic_6 \le 300.$$
Table 13 shows the optimal value and the corresponding optimal variables obtained by the ICOA and the other optimization algorithms for the robot gripper design problem. The comparison algorithms are as follows: COA [41], SCA [68], AO [69], BWO [70], DO [63], WFO [64], GOA, SSA, RSA [71], FFA [49], GRO [72], and AOA [59]. Table 14 further gives the statistical results for the robot gripper design problem. As can be seen from the table, the ICOA obtains an optimal cost of 7.2740693811E−17, which is better than that of the COA.

5.1.5. Cantilever Beam Design Issues

The engineering problem at hand is relatively straightforward, aiming to minimize the weight of a cantilever beam by utilizing five variables [73]. Figure 19 is a simulation of this problem.
Set
$$IC = [\,ic_1\ ic_2\ ic_3\ ic_4\ ic_5\,]$$
Minimize
$$f(IC) = 0.0624\left(ic_1 + ic_2 + ic_3 + ic_4 + ic_5\right)$$
Subject to
$$c(IC) = \frac{61}{ic_1^3} + \frac{37}{ic_2^3} + \frac{19}{ic_3^3} + \frac{7}{ic_4^3} + \frac{1}{ic_5^3} - 1 \le 0$$
The boundaries are as follows:
$$0.01 \le ic_1, ic_2, ic_3, ic_4, ic_5 \le 100$$
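A minimal evaluation of this model, assuming the commonly cited cantilever coefficients (objective factor 0.0624 and constraint numerators 61, 37, 19, 7, 1). The test point is the near-optimal design widely reported in the literature, included for illustration only.

```python
def cantilever_weight(ic):
    # Weight is proportional to the sum of the five section parameters.
    return 0.0624 * sum(ic)

def cantilever_constraint(ic):
    # Deflection constraint: a value <= 0 means the design is feasible.
    numerators = (61.0, 37.0, 19.0, 7.0, 1.0)
    return sum(c / x ** 3 for c, x in zip(numerators, ic)) - 1.0

# Near-optimal design commonly reported in the literature (illustrative).
x_star = (6.0160, 5.3092, 4.4943, 3.5015, 2.1527)
```

At this design the constraint is active (close to zero) and the weight is close to the commonly reported optimum of about 1.34.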
As shown in Table 15, the ICOA was used to solve the cantilever beam design problem, and the optimal value and the corresponding five design variables were compared with the results obtained by COA [41], SCA, AO, BWO, WFO, GOA, SSA, FFA [49], and AOA [59]. The numerical results of each algorithm on this problem are given in Table 16. Analysis of the results shows that the ICOA consistently produces smaller values on all the indicators, indicating that it provides higher-quality and more stable solutions for this problem and highlighting its strong performance.

5.1.6. Heat Exchanger Design Issues

This design application is taken from Hock and Schittkowski [74]. It features a linear objective function whose minimization is subject to six inequality constraints (three of which are non-convex). All eight variables are bounded. The parameters are slightly modified, and the mathematical model is as follows [74]:
Minimize
$$f(IC) = ic_1 + ic_2 + ic_3.$$
Subject to
$$
\begin{aligned}
&c_1(IC) = 0.0025\left(ic_4 + ic_6\right) - 1 \le 0,\quad
c_2(IC) = 0.0025\left(ic_5 + ic_7 - ic_4\right) - 1 \le 0,\\
&c_3(IC) = 0.01\left(ic_8 - ic_5\right) - 1 \le 0,\\
&c_4(IC) = -ic_1 ic_6 + 833.33252\,ic_4 + 100\,ic_1 - 83333.333 \le 0,\\
&c_5(IC) = -ic_2 ic_7 + 1250\,ic_5 + ic_2 ic_4 - 1250\,ic_4 \le 0,\\
&c_6(IC) = -ic_3 ic_8 + ic_3 ic_5 - 2500\,ic_5 + 1250000 \le 0.
\end{aligned}
$$
The boundaries are as follows:
$$100 \le ic_1 \le 10000,\quad 1000 \le ic_2, ic_3 \le 10000,\quad 10 \le ic_4, ic_5, ic_6, ic_7, ic_8 \le 1000.$$
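The model is easy to evaluate directly. The sketch below checks the objective and the six constraints (a value less than or equal to zero means feasible) at the best solution reported in the literature for this classic test problem, quoted here to four decimals for illustration only.

```python
def heat_cost(ic):
    # Objective: the sum of the first three design variables.
    return ic[0] + ic[1] + ic[2]

def heat_constraints(ic):
    # c_i(ic) <= 0 means constraint i is satisfied.
    ic1, ic2, ic3, ic4, ic5, ic6, ic7, ic8 = ic
    return [
        0.0025 * (ic4 + ic6) - 1,
        0.0025 * (ic5 + ic7 - ic4) - 1,
        0.01 * (ic8 - ic5) - 1,
        -ic1 * ic6 + 833.33252 * ic4 + 100 * ic1 - 83333.333,
        -ic2 * ic7 + 1250 * ic5 + ic2 * ic4 - 1250 * ic4,
        -ic3 * ic8 + ic3 * ic5 - 2500 * ic5 + 1250000,
    ]

# Best known solution reported in the literature (illustrative check; rounded).
x_best = (579.3167, 1359.9430, 5110.0710, 182.0174,
          295.5985, 217.9799, 286.4162, 395.5979)
```

At this rounded solution, all six constraints are active or satisfied to within a small tolerance, and the cost is close to 7049.33.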
As shown in Table 17, the ICOA is used to solve the heat exchanger design problem. The optimal value obtained by the ICOA and the corresponding eight variables are compared with the results obtained by ISSA, GWO [51], AOA [59], AO, COA [41], HHO [20], WFO, GOA, BWO, DO, SO [18], and GRO. Table 18 shows the numerical results of each algorithm on this problem. It can be seen that the ICOA achieves relatively better values on all the indicators, which shows that the ICOA provides more stable and more accurate solutions for the heat exchanger design problem.

5.2. ICOA Solves Constrained Optimization Problems

This section uses a set of 30 mathematical benchmark functions. Twenty of the functions are examined as low-dimensional problems, and the remaining ten as high-dimensional problems. These functions encompass six distinct forms, and each subsection provides detailed information on the specific functions under examination. Detailed information on the functions considered is available online at https://www.sfu.ca/~ssurjano/ (accessed on 14 April 2025). In this section, thirteen comparison algorithms are selected: COA [41], AO, AOA [59], DO, HHO [20], GWO, SaDE, SO [18], GRO, ISSA, BWO, WFO [64], and GOA.

5.2.1. Low-Dimensional Problems

In this section, the ICOA and other comparative algorithms are utilized to evaluate 20 benchmark functions with dimensions ranging from 2 to 6 (F1 to F20). Additionally, the summary of the optimal values obtained by each algorithm and the target values of the benchmark functions is presented in Table 19 and Figure 20. The results show that the proposed ICOA outperforms the original algorithm and most of the comparative algorithms in terms of optimization and providing values close to the optimal response of the target.
Table 19 shows that on the low-dimensional problems, the ICOA achieves optimal results on F7~F14 and F16~F20, which are close to the required objective values. On 15 of the low-dimensional problems, the ICOA obtains better results than the original algorithm, verifying that the improved crayfish algorithm is more accurate. Figure 20 gives a comparison plot of the optimal fitness values of the ICOA and the other 12 intelligent optimization algorithms on the low-dimensional problems, as well as a histogram of the difference between each algorithm's result and the objective value of each function. The comparison chart shows that the ICOA works best on F1, F2, F3, F5, F6, and F9~F20. The histogram clearly shows that, on each function, the ICOA is either equal or closer to the objective value than the original algorithm. This shows that the ICOA is more accurate than the original algorithm when dealing with low-dimensional mathematical problems.

5.2.2. Higher Dimensional Problems

In order to evaluate the performance of the algorithms on large-scale problems, 10 benchmark functions (F21 to F30) are considered, with dimensions of 30, 100, 500, and 1000. Each problem, algorithm, and dimension combination was executed 30 times. The results show that the proposed algorithm performs well in estimating the optimal response for large-scale problems. Table 20, Table 21, Table 22, Table 23 and Table 24 provide the results of all the algorithms implemented in this section.
From Table 20, we can see that the ICOA remains suitable for high-dimensional problems and achieves the best results. On functions F21 and F22, the optimal value obtained by the ICOA outperforms that of the original algorithm, providing evidence that the ICOA is more effective in solving large-scale problems.
Table 21, Table 22 and Table 23 present the optimal solutions obtained by the ICOA, as well as other comparative algorithms, on dimensions of 100, 500, and 1000 for the functions F21 to F30. In particular, it can be seen that for functions F21, F23, F26, and F28, the ICOA produces the best results, while for functions F24, F25, F29, and F30, the ICOA achieves superior optimal values compared to the original algorithm. This underscores the enhanced efficiency and accuracy of the ICOA when solving large-scale problems. Overall, the ICOA offers valuable advantages and increased accuracy in tackling large-scale problem solving.
Figure 21 gives the optimal fitness value plots of the ICOA and the other intelligent optimization algorithms on the high-dimensional problems; dimensions of 30, 100, 500, and 1000 are chosen to better illustrate the ability of the ICOA to deal with high-dimensional problems. On F21~F28, the ICOA achieves optimal results in all four dimensions, which shows that the ICOA has a strong advantage on high-dimensional problems. On F21~F29, the ICOA outperforms the original algorithm, which shows that the ICOA is more advantageous and more accurate than the original algorithm in solving high-dimensional problems.

5.3. ICOA Solving the NP-Hard Problem

In this subsection, our goal is to examine the extensibility and adaptability of the ICOA to more challenging problem fields in general, and mixed-integer nonlinear problems in particular. To evaluate the performance of the ICOA on complex problems, we apply it to two representative mixed-integer nonlinear problems. By comparing the results with other algorithms known for their good performance, we can ascertain the effectiveness of the ICOA in tackling such challenges. These algorithms are SaDE, AO, GOA, AOA, DO, HHO [20], GWO, and COA [41]. The parameters of each experiment were set according to the problem; we established the number of iterations as T = 200 and the population size as N = 50.
A.
NP1. logistics distribution [75]
Logistics distribution has been a significant factor impeding the development of the logistics industry in recent times. Especially for dairy enterprises, cold chain distribution is a very critical link. Coordination and optimization of multiple variables are required in cold storage, transportation, etc., to achieve the purposes of cost reduction and efficiency improvement.
To address this problem, the ICOA is designed for the vehicle optimization of cold chain distribution, which is verified to have the advantages of short solution time, fast convergence, and high solution accuracy by comparing with other algorithms.
Figure 22 gives the convergence plots of the ICOA and the other algorithms in searching for the shortest distribution path. The ICOA clearly achieves the best result, indicating that it solves this problem with high accuracy. Figure 23 illustrates that the ICOA achieves optimal vehicle scheduling, minimizing the cost to the organization. This shows that the ICOA is more advantageous in solving the vehicle scheduling optimization problem.
Table 24 gives the optimal values as well as the mean and variance when the ICOA solves NP1. The bolded data are the optimal values for each metric. Taking a look at Table 24, it is evident that the ICOA performs better than other methods in terms of the optimal value, mean, and variance when solving this problem. This observation supports the notion that the ICOA delivers superior quality and stability when solving the problem at hand, indicating its high accuracy and reliability.
B.
NP2. TSP issues [75]
Solving the TSP focuses on determining the shortest path by selecting the smallest distance among all available paths. Therefore, we apply the proposed ICOA to this problem to find the shortest path.
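A shortest-path fitness for the TSP can be evaluated as below. The random-key decoding (sorting a continuous vector into a permutation) is one common way to apply a continuous optimizer such as the ICOA to this discrete problem; it is our assumption for illustration, not necessarily the paper's exact encoding.

```python
import math

def tour_length(cities, tour):
    # Total length of the closed tour visiting `cities` in `tour` order.
    n = len(tour)
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % n]])
               for i in range(n))

def decode(keys):
    # Random-key decoding: the argsort of a continuous vector yields a permutation.
    return sorted(range(len(keys)), key=lambda i: keys[i])

# Four cities on a unit square: the optimal closed tour has length 4.
square = [(0.0, 0.0), (0.0, 1.0), (1.0, 1.0), (1.0, 0.0)]
```

Any crossing tour on the square is strictly longer than the perimeter tour, so the objective correctly ranks candidate permutations.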
By conducting an analysis of Figure 24 and Figure 25, it becomes evident that the ICOA outperforms other algorithms in successfully solving the traveling salesman problem (TSP). It consistently obtains the optimal solution, namely the shortest path, surpassing the performance of alternative algorithms. This shows that the ICOA has a greater advantage than other algorithms in solving the TSP, thus indicating that the ICOA has a good ability to solve the TSP.
Table 25 shows the optimal value, average value, and variance when the ICOA solves the TSP. The bolded data are the optimal values under each index. Table 25 shows that the ICOA has the lowest optimal value and the lowest average value when solving this problem. The ICOA can give a better solution to solve this problem, which shows that the ICOA has a higher precision.

6. Discussion

Compared with other classical optimization methods, the ICOA has better optimization performance. Based on three test sets, six engineering problems, high- and low-dimensional mathematical problems, and NP problems, combined with a hypersonic missile path planning verification experiment, the implementation of the ICOA is discussed in detail. On the CEC 2020 test set, the ICOA ranked first among all the compared optimization algorithms, indicating that the addition of the four strategies greatly improved the performance of the COA. On the newer CEC 2022 test set, the ICOA ranked second on two test functions but first on the remaining ten, indicating that the proposed ICOA is equally adaptable and effective on new problems. To further test the performance of the ICOA, we selected nine additional optimization algorithms and tested them on the CEC 2017 test set. The results show that the ICOA ranks first on fifteen test functions and in the top three on twenty-eight, accounting for 96.6% of the functions, with an average rank of 1.655. This comprehensively demonstrates that the proposed ICOA substantially improves the exploitation and exploration capabilities of the original COA. Finally, the proposed ICOA was applied to both high- and low-dimensional mathematical functions and NP problems, where it proved to have great advantages on high-dimensional problems. These tests verify the effectiveness and reliability of the ICOA in engineering optimization.

7. Conclusions and Future Work

The main contributions of this paper are as follows. (1) An improved crayfish optimization algorithm (ICOA) is proposed, combining the elite chaotic difference strategy, the differential evolution strategy, the Levy flight strategy, the dimensional variation strategy, and the adaptive parameter strategy. The main purpose of the ICOA is to prevent the premature convergence of the COA to local optima and to remedy its low precision on some problems, thereby improving the accuracy and extending the search ability of the COA. (2) The performance of the ICOA was verified on three test sets (CEC 2017, CEC 2020, and CEC 2022), six engineering examples of different sizes, twenty low-dimensional and ten high-dimensional mathematical functions, two large-scale NP problems, and a hypersonic missile trajectory planning problem.
The following conclusions were obtained using numerical and graphical comparisons of the ICOA with other intelligent optimization algorithms:
(1)
The elite chaotic difference strategy provides good initial solutions for the ICOA, prevents blind searching, and ensures a more uniform distribution of the population in the search space.
(2)
Against the first set of comparison algorithms, the ICOA ranks first on all functions of the CEC 2020 test set and first on ten of the twelve functions of the CEC 2022 test set. Against the second set of comparison algorithms, on the CEC 2017 and CEC 2020 test sets it achieves the best combined mean rank (rank = 1.6 and 1.517, respectively). This shows that the Levy flight, dimensional variation, and adaptive parameter strategies greatly improve the convergence and search ability of the COA.
(3)
The results of six engineering examples and of hypersonic missile trajectory planning show that the ICOA is more efficient and stable than the other algorithms in solving practical engineering problems.
(4)
The results on high- and low-dimensional mathematical problems, together with complex NP problems, demonstrate that the enhancement strategies significantly improve the optimization capability of the COA, as well as its stability on large-scale problems. These results imply that the ICOA outperforms the original algorithm in both accuracy and solution quality for large-scale problems.
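For readers unfamiliar with the Levy flight strategy credited in conclusion (2), heavy-tailed Levy steps are commonly sampled with Mantegna's algorithm, as sketched below. The ICOA's exact update rule is given in the main text; this snippet is only a generic illustration of why Levy steps help a search escape local optima.

```python
import math
import random

def levy_step(beta=1.5):
    """One Levy-distributed step via Mantegna's algorithm:
    step = u / |v|^(1/beta), with u ~ N(0, sigma_u^2) and v ~ N(0, 1)."""
    sigma_u = (
        math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
        / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))
    ) ** (1 / beta)
    u = random.gauss(0.0, sigma_u)
    v = random.gauss(0.0, 1.0)
    return u / abs(v) ** (1 / beta)

random.seed(0)
steps = [levy_step() for _ in range(2000)]
# Most steps are small (local exploitation), but the heavy tail
# occasionally produces very long jumps (global exploration).
print(max(abs(s) for s in steps), sum(abs(s) for s in steps) / len(steps))
```

The mix of many short moves and rare long jumps is what balances local refinement against escapes from local optima in Levy-guided position updates.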
Moving forward, leveraging the superior abilities of the ICOA in handling diverse complex problems, our future work will focus on optimizing even more complex global optimization problems. To validate the performance of the ICOA further, we plan to tackle real-world engineering problems, thus providing practical evidence of its effectiveness. Furthermore, the proposed ICOA has the potential to address practical problems across diverse fields. For instance, it can handle scheduling tasks in fog computing, complex engineering applications, parameter estimation in photovoltaic modeling [76], multi-threshold image segmentation [77], 3D path planning of UAVs, and many more areas.

Author Contributions

Conceptualization, W.L., Y.H. and G.H.; Methodology, W.L., Y.H., G.H. and C.Z.; Software, W.L., Y.H. and C.Z.; Validation, Y.H., G.H. and C.Z.; Formal analysis, C.Z.; Investigation, W.L., Y.H., G.H. and C.Z.; Resources, G.H.; Data curation, W.L. and Y.H.; Writing—original draft, W.L., Y.H., G.H. and C.Z.; Writing—review and editing, W.L., Y.H., G.H. and C.Z.; Visualization, W.L. and C.Z.; Supervision, W.L., G.H. and C.Z.; Project administration, G.H.; Funding acquisition, G.H. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported by the Key Research Projects of the Shaanxi Provincial Government Research Office in 2024 (grant No. 2024HZ1186), and the 2024 Shaanxi Provincial Communist Youth League and Youth Work Research Project (grant No. 2024HZ1236).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All data generated or analyzed during this study are included in this published article.

Conflicts of Interest

The authors declare no conflicts of interest.

Statement

The current research is limited to the field of intelligent optimization, whose primary beneficial use is engineering optimization, and it does not pose a threat to public health or national security. The authors acknowledge the dual-use potential of research involving the crayfish optimization algorithm and confirm that all necessary precautions have been taken to prevent potential misuse. As an ethical responsibility, the authors strictly adhere to relevant national and international laws on dual-use research of concern (DURC). The authors advocate responsible deployment, ethical consideration, regulatory compliance, and transparent reporting to mitigate misuse risks and foster beneficial outcomes.

References

  1. Garip, Z.; Karayel, D.; Çimen, M.E. A study on path planning optimization of mobile robots based on hybrid algorithm. Concurr. Comput. Pract. Exp. 2022, 34, e6721. [Google Scholar] [CrossRef]
  2. Wansasueb, K.; Panmanee, S.; Panagant, N.; Pholdee, N.; Bureerat, S.; Yildiz, A.R. Hybridised differential evolution and equilibrium optimiser with learning parameters for mechanical and aircraft wing design. Knowl.-Based Syst. 2022, 239, 107955. [Google Scholar] [CrossRef]
  3. Yuan, M.; Li, Y.; Zhang, L.; Pei, F. Research on intelligent workshop resource scheduling method based on improved NSGA-II algorithm. Robot. Comput.-Integr. Manuf. 2021, 71, 102141. [Google Scholar] [CrossRef]
  4. Zhan, Z.H.; Shi, L.; Tan, K.C.; Zhang, J. A survey on evolutionary computation for complex continuous optimization. Artif. Intell. Rev. 2022, 55, 59–110. [Google Scholar] [CrossRef]
  5. Merrikh-Bayat, F. The runner-root algorithm: A metaheuristic for solving unimodal and multimodal optimization problems inspired by runners and roots of plants in nature. Appl. Soft Comput. 2015, 33, 292–303. [Google Scholar] [CrossRef]
  6. Mzili, T.; Riffi, M.E.; Mzili, I.; Dhiman, G. A novel discrete rat swarm optimization (DRSO) algorithm for solving the traveling salesman problem. Decis. Mak. Appl. Manag. Eng. 2022, 5, 287–299. [Google Scholar] [CrossRef]
  7. Jia, H.; Sun, K.; Li, Y.; Cao, N. Improved marine predators algorithm for feature selection and SVM optimization. KSII Trans. Internet Inf. Syst. (TIIS) 2022, 16, 1128–1145. [Google Scholar]
  8. Mzili, I.; Mzili, T.; Riffi, M.E. Efficient routing optimization with discrete penguins search algorithm for MTSP. Decis. Mak. Appl. Manag. Eng. 2023, 6, 730–743. [Google Scholar] [CrossRef]
  9. Liu, Q.; Li, N.; Jia, H.; Qi, Q.; Abualigah, L. Modified remora optimization algorithm for global optimization and multilevel thresholding image segmentation. Mathematics 2022, 10, 1014. [Google Scholar] [CrossRef]
  10. Das, M.; Roy, A.; Maity, S.; Kar, S.; Sengupta, S. Solving fuzzy dynamic ship routing and scheduling problem through new genetic algorithm. Decis. Mak. Appl. Manag. Eng. 2022, 5, 329–361. [Google Scholar] [CrossRef]
  11. Jia, H.; Zhang, W.; Zheng, R.; Wang, S.; Leng, X.; Cao, N. Ensemble mutation slime mould algorithm with restart mechanism for feature selection. Int. J. Intell. Syst. 2022, 37, 2335–2370. [Google Scholar] [CrossRef]
  12. Qi, H.; Zhang, G.; Jia, H.; Xing, Z. A hybrid equilibrium optimizer algorithm for multi-level image segmentation. Math. Biosci. Eng. 2021, 18, 4648–4678. [Google Scholar] [CrossRef] [PubMed]
  13. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95-International Conference on Neural Networks, Perth, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948. [Google Scholar]
  14. Karaboga, D.; Basturk, B. A powerful and efficient algorithm for numerical function optimization: Artificial bee colony (ABC) algorithm. J. Glob. Optim. 2007, 39, 459–471. [Google Scholar] [CrossRef]
  15. Dorigo, M.; Birattari, M.; Stutzle, T. Ant colony optimization. IEEE Comput. Intell. Mag. 2006, 1, 28–39. [Google Scholar] [CrossRef]
  16. Gandomi, A.H.; Yang, X.S.; Alavi, A.H. Cuckoo search algorithm: A metaheuristic approach to solve structural optimization problems. Eng. Comput. 2013, 29, 17–35. [Google Scholar] [CrossRef]
  17. Hashim, F.A.; Hussien, A.G. Snake optimizer: A novel meta-heuristic optimization algorithm. Knowl. Based Syst. 2022, 242, 108320. [Google Scholar] [CrossRef]
  18. Braik, M.; Hammouri, A.; Atwan, J.; Al-Betar, M.A.; Awadallah, M.A. White shark optimizer: A novel bio-inspired meta-heuristic algorithm for global optimization problems. Knowl. Based Syst. 2022, 243, 108457. [Google Scholar] [CrossRef]
  19. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris hawks optimization: Algorithm and applications. Futur. Gener. Comp. Syst. 2019, 97, 849–872. [Google Scholar] [CrossRef]
  20. Wang, L.; Cao, Q.; Zhang, Z.; Mirjalili, S.; Zhao, W. Artificial rabbits optimization: A new bio-inspired meta-heuristic algorithm for solving engineering optimization problems. Eng. Appl. Artif. Intell. 2022, 114, 105082. [Google Scholar] [CrossRef]
  21. Zhao, W.; Wang, L.; Mirjalili, S. Artificial hummingbird algorithm: A new bio-inspired optimizer with its engineering applications. Comput. Methods Appl. Mech. Eng. 2022, 388, 114194. [Google Scholar] [CrossRef]
  22. Rashedi, E.; Nezamabadi-Pour, H.; Saryazdi, S. GSA: A gravitational search algorithm. Inf. Sci. 2009, 179, 2232–2248. [Google Scholar] [CrossRef]
  23. Kirkpatrick, S.; Gelatt, C.D.; Vecchi, M.P. Optimization by simulated annealing. Science 1983, 220, 671–680. [Google Scholar] [CrossRef] [PubMed]
  24. Rajeev, S.; Krishnamoorthy, C.S. Discrete optimization of structures using genetic algorithms. J. Struct. Eng. 1992, 118, 1233–1250. [Google Scholar] [CrossRef]
  25. David, B.F. Artificial Intelligence through Simulated Evolution. Evol. Comput. Foss. Rec. IEEE 1998, 227–296. [Google Scholar] [CrossRef]
  26. Storn, R.; Price, K. Differential evolution-A simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 1997, 11, 341–359. [Google Scholar] [CrossRef]
  27. Jaderyan, M.; Khotanlou, H. Virulence optimization algorithm. Appl. Soft Comput. 2016, 43, 596–618. [Google Scholar] [CrossRef]
  28. Wen, C.; Jia, H.; Wu, D.; Rao, H.; Li, S.; Liu, Q.; Abualigah, L. Modified remora optimization algorithm with multistrategies for global optimization problem. Mathematics 2022, 10, 3604. [Google Scholar] [CrossRef]
  29. Geem, Z.W.; Kim, J.H.; Loganathan, G.V. A new heuristic optimization algorithm: Harmony search. Simulation 2001, 76, 60–68. [Google Scholar] [CrossRef]
  30. Rao, R.V.; Savsani, V.J.; Vakharia, D.P. Teaching–learning-based optimization: An optimization method for continuous non-linear large scale problems. Inf. Sci. 2012, 183, 1–15. [Google Scholar] [CrossRef]
  31. Satapathy, S.; Naik, A. Social group optimization (SGO): A new population evolutionary optimization technique. Complex. Intell. Syst. 2016, 2, 173–203. [Google Scholar] [CrossRef]
  32. Zhang, Y.; Jin, Z. Group teaching optimization algorithm: A novel metaheuristic method for solving global optimization problems. Expert Syst. Appl. 2016, 148, 113246. [Google Scholar] [CrossRef]
  33. Cheng, S.; Qin, Q.; Chen, J.; Shi, Y. Brain storm optimization algorithm: A review. Artif. Intell. Rev. 2016, 46, 445–458. [Google Scholar] [CrossRef]
  34. Xing, B.; Gao, W.J.; Xing, B.; Gao, W.J. Imperialist competitive algorithm. In Innovative Computational Intelligence: A Rough Guide to 134 Clever Algorithms; Kacprzyk, J., Jain, L.C., Eds.; Springer: Berlin/Heidelberg, Germany, 2014; Volume 16, pp. 211–216. [Google Scholar]
  35. Abualigah, L.; Elaziz, M.A.; Khasawneh, A.M.; Alshinwan, M.; Ibrahim, R.A.; Alqaness, M.A.; Mirjalili, S.; Sumari, P.; Gandomi, A.H. Meta-heuristic optimization algorithms for solving real-world mechanical engineering design problems: A comprehensive survey, applications, comparative analysis, and results. Neural Comput. Appl. 2022, 34, 4081–4110. [Google Scholar] [CrossRef]
  36. Zandavi, S.M.; Chung, V.Y.Y.; Anaissi, A. Stochastic dual simplex algorithm: A novel heuristic optimization algorithm. IEEE Trans. Cybern. 2019, 51, 2725–2734. [Google Scholar] [CrossRef]
  37. Dong, W.; Zhou, M. A supervised learning and control method to improve particle swarm optimization algorithms. IEEE Trans. Syst. Man Cybern. Syst. 2016, 47, 1135–1148. [Google Scholar] [CrossRef]
  38. Črepinšek, M.; Liu, S.H.; Mernik, M. Exploration and exploitation in evolutionary algorithms: A survey. ACM Comput. Surv. 2013, 45, 35. [Google Scholar] [CrossRef]
  39. Ezugwu, A.E.; Shukla, A.K.; Nath, R.; Akinyelu, A.A.; Agushaka, J.O.; Chiroma, H.; Muhuri, P.K. Metaheuristics: A comprehensive overview and classification along with bibliometric analysis. Artif. Intell. Rev. 2021, 54, 4237–4316. [Google Scholar] [CrossRef]
  40. Wolpert, D.H.; Macready, W.G. No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1997, 1, 67–82. [Google Scholar] [CrossRef]
  41. Jia, H.; Rao, H.; Wen, C.; Mirjalili, S. Crayfish optimization algorithm. Artif. Intell. Rev. 2023, 56 (Suppl. S2), 1919–1979. [Google Scholar] [CrossRef]
  42. Xu, Y.P.; Tan, J.W.; Zhu, D.J.; Ouyang, P.; Taheri, B. Model identification of the proton exchange membrane fuel cells by extreme learning machine and a developed version of arithmetic optimization algorithm. Energy Rep. 2021, 7, 2332–2342. [Google Scholar] [CrossRef]
  43. Gandomi, A.H.; Yang, X.S.; Talatahari, S.; Alavi, A.H. Firefly algorithm with chaos. Commun. Nonlinear Sci. Numer. Simul. 2013, 18, 89–98. [Google Scholar] [CrossRef]
  44. Fister, I.; Perc, M.; Kamal, S.M.; Fister, I. A review of chaos-based firefly algorithms: Perspectives and research challenges. Appl. Math. Comput. 2015, 252, 155–165. [Google Scholar] [CrossRef]
  45. Lu, X.L.; He, G. QPSO algorithm based on Lévy flight and its application in fuzzy portfolio. Appl. Soft Comput. 2021, 99, 106894. [Google Scholar] [CrossRef]
  46. Deepa, R.; Venkataraman, R. Enhancing whale optimization algorithm with levy flight for coverage optimization in wireless sensor networks. Comput. Electr. Eng. 2021, 94, 107359. [Google Scholar] [CrossRef]
  47. Zhang, Z. A flower pollination algorithm based on t-distribution elite retention mechanism. J. Anhui Univ. Sci. Technol. (Nat. Sci. Ed.) 2018, 38, 50–58. [Google Scholar]
  48. Trojovska, E.; Dehghani, M.; Trojovský, P. Fennec Fox Optimization: A New Nature-Inspired Optimization Algorithm. IEEE Access 2022, 10, 84417–84443. [Google Scholar] [CrossRef]
  49. Shehadeh, H. Chernobyl Disaster Optimizer (CDO): A Novel Metaheuristic Method for Global Optimization. Neural Comput. Appl. 2022, 35, 10733–10749. [Google Scholar] [CrossRef]
  50. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  51. Trojovská, E.; Dehghani, M.; Trojovský, P. Zebra Optimization Algorithm: A New Bio-Inspired Optimization Algorithm for Solving Optimization Algorithm. IEEE Access 2022, 10, 49445–49473. [Google Scholar] [CrossRef]
  52. Xue, J.; Shen, B. A novel swarm intelligence optimization approach: Sparrow search algorithm. Syst. Sci. Control Eng. Open Access J. 2020, 8, 22–34. [Google Scholar] [CrossRef]
  53. Mzili, T.; Mzili, I.; Riffi, M.E. Artificial rat optimization with decision-making: A bio-inspired metaheuristic algorithm for solving the traveling salesman problem. Decis. Mak. Appl. Manag. Eng. 2023, 6, 150–176. [Google Scholar] [CrossRef]
  54. Mzili, T.; Mzili, I.; Riffi, M.E.; Kurdi, M.; Ali, A.H.; Pamucar, D.; Abualigah, L. Enhancing COVID-19 vaccination and medication distribution routing strategies in rural regions of Morocco: A comparative metaheuristics analysis. Inform. Med. Unlocked 2024, 46, 101467. [Google Scholar] [CrossRef]
  55. Sansawas, S.; Roongpipat, T.; Ruangtanusak, S.; Chaikhet, J.; Worasucheep, C.; Wattanapornprom, W. Gaussian Quantum-Behaved Particle Swarm with Learning Automata-Adaptive Attractor and Local Search. In Proceedings of the 19th International Conference on Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology (ECTI-CON), Prachuap Khiri Khan, Thailand, 24–27 May 2022; pp. 1–4. [Google Scholar]
  56. Qin, A.K.; Suganthan, P.N. Self-adaptive differential evolution algorithm for numerical optimization. In Proceedings of the IEEE Congress on Evolutionary Computation, Edinburgh, UK, 2–5 September 2005; Volume 2, pp. 1785–1791. [Google Scholar]
  57. Saremi, S.; Mirjalili, S.; Lewis, A. Grasshopper Optimisation Algorithm: Theory and application. Adv. Eng. Softw. 2017, 105, 30–47. [Google Scholar] [CrossRef]
  58. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  59. Abualigah, L.; Diabat, A.; Mirjalili, S.; Elaziz, M.A.; Gandomi, A.H. The Arithmetic Optimization Algorithm. Comput. Methods Appl. Mech. Eng. 2021, 376, 113609. [Google Scholar] [CrossRef]
  60. Nadimi-Shahraki, M.H.; Taghian, S.; Mirjalili, S. An improved grey wolf optimizer for solving engineering problems. Expert Syst. Appl. 2021, 166, 113917. [Google Scholar] [CrossRef]
  61. Song, W.; Liu, S.; Wang, X.; Wu, W. An Improved Sparrow Search Algorithm. In Proceedings of the 2020 IEEE Intl Conf on Parallel & Distributed Processing with Applications. Big Data & Cloud Computing, Sustainable Computing & Communications, Social Computing & Networking (ISPA/BDCloud/SocialCom/SustainCom), Exeter, UK, 17–19 December 2020; pp. 537–543. [Google Scholar]
  62. Cheraghalipour, A.; Hajiaghaei-Keshteli, M.; Paydar, M.M. Tree Growth Algorithm (TGA): A novel approach for solving optimization problems. Eng. Appl. Artif. Intell. 2018, 72, 393–414. [Google Scholar] [CrossRef]
  63. Zhao, S.; Zhang, T.; Ma, S.; Chen, M. Dandelion optimizer: A nature-inspired metaheuristic algorithm for engineering applications. Eng. Appl. Artif. Intell. 2022, 114, 105075. [Google Scholar] [CrossRef]
  64. Luo, K. Water flow optimizer: A nature-inspired evolutionary algorithm for global optimization. IEEE Trans. Cybern. 2021, 52, 7753–7764. [Google Scholar] [CrossRef]
  65. Santos, I.F. Controllable sliding bearings and controllable lubrication principles—An overview. Lubricants 2018, 6, 16. [Google Scholar] [CrossRef]
  66. Ragsdell, K.M.; Phillips, D.T. Optimal design of a class of welded structures using geometric programming. J. Eng. Ind. 1976, 98, 1021–1025. [Google Scholar] [CrossRef]
  67. Osyczka, A.; Krenich, S.; Karas, K. Optimum design of robot grippers using genetic algorithms. In Proceedings of the Third World Congress of Structural and Multidisciplinary Optimization, (WCSMO), Buffalo, NY, USA, 17–21 May 1999; pp. 241–243. [Google Scholar]
  68. Mirjalili, S. SCA: A Sine Cosine Algorithm for solving optimization problems. Knowl.-Based Syst. 2016, 96, 120–133. [Google Scholar] [CrossRef]
  69. Abualigah, L.; Yousri, D.; Abd Elaziz, M.; Ewees, A.A.; Al-Qaness, M.A.; Gandomi, A.H. Aquila Optimizer: A novel meta-heuristic optimization algorithm. Comput. Ind. Eng. 2021, 157, 107250. [Google Scholar] [CrossRef]
  70. Abualigah, L.; Abd Elaziz, M.; Sumari, P.; Geem, Z.W.; Gandomi, A.H. Reptile Search Algorithm (RSA): A nature-inspired meta-heuristic optimizer. Expert Syst. Appl. 2022, 191, 116158. [Google Scholar] [CrossRef]
  71. Zhong, C.; Li, G.; Meng, Z. Beluga whale optimization: A novel nature-inspired metaheuristic algorithm. Knowl.-Based Syst. 2022, 251, 109215. [Google Scholar] [CrossRef]
  72. Zolfi, K. Gold rush optimizer: A new population-based metaheuristic algorithm. Oper. Res. Decis. 2023, 33, 113–150. [Google Scholar] [CrossRef]
  73. Thanedar, P.B.; Vanderplaats, G.N. Survey of Discrete Variable Optimization for Structural Design. J. Struct. Eng. 1995, 121, 301–306. [Google Scholar] [CrossRef]
  74. Floudas, C.A.; Ciric, A.R. Strategies for overcoming uncertainties in heat exchanger network synthesis. Comput. Chem. Eng. 1989, 13, 1133–1152. [Google Scholar] [CrossRef]
  75. Yang, Z. AFO Solving Real-World Problems. 2023. [Google Scholar]
  76. Houssein, E.H.; Zaki, G.N.; Diab, A.A.Z.; Younis, E.M.G. An efficient Manta Ray Foraging Optimization algorithm for parameter extraction of three-diode photovoltaic model. Comput Electr. Eng. 2021, 94, 107304. [Google Scholar] [CrossRef]
  77. Houssein, E.H.; Hussain, K.; Abualigah, L.; Elaziz, M.A.; Alomoush, W.; Dhiman, G.; Djenouri, Y.; Cuevas, E. An improved opposition-based marine predators algorithm for global optimization and multilevel thresholding image segmentation. Knowl.-Based Syst. 2021, 229, 107348. [Google Scholar] [CrossRef]
Figure 1. Schematic diagram of a crayfish.
Figure 2. Crayfish summer resort behavior. (a) Entering the burrow. (b) Competing for the cave.
Figure 3. Mutation operation.
Figure 4. Simulated trajectory of a Levy flight.
Figure 5. Curves of the parameters C and C1.
Figure 6. Flowchart of the ICOA.
Figure 7. Convergence curves of the ICOA and the other algorithms on the 10-dimensional CEC 2020 test set.
Figure 8. Box plots of the ICOA and the other algorithms on the 10-dimensional CEC 2020 test set.
Figure 9. Convergence curves of the ICOA and the other optimization algorithms on the CEC 2022 test set.
Figure 10. Box plots of the ICOA and the other optimization algorithms over 20 runs on the CEC 2022 test set.
Figure 11. Convergence curves of the ICOA and the other optimization algorithms on the CEC 2020 test set.
Figure 12. Box plots of the ICOA and the other optimization algorithms over 20 runs on the CEC 2020 test set.
Figure 13. Convergence curves of the ICOA and the other algorithms on the 10-dimensional CEC 2017 test set.
Figure 14. Box plots of the ICOA and the other algorithms on the 10-dimensional CEC 2017 test set.
Figure 15. Schematic design of the reducer.
Figure 16. Schematic diagram of the hydrostatic thrust bearing.
Figure 17. Schematic diagram of the welded beam design problem.
Figure 18. Schematic of the robot gripper arm design problem.
Figure 19. Schematic of the cantilever beam design.
Figure 20. Comparison of the optimal values of the ICOA and the other algorithms on F1–F20, with histograms of the differences among the algorithms' results.
Figure 21. Comparison of the optimal fitness values of the ICOA and the other algorithms on F21–F30 in 30, 100, 500, and 1000 dimensions.
Figure 22. Convergence curves of the ICOA and the other comparison algorithms on the logistics distribution problem.
Figure 23. Comparison of the ICOA and the other comparison algorithms on vehicle scheduling optimization.
Figure 24. Convergence curves of the ICOA and the other comparison algorithms on the TSP.
Figure 25. Shortest paths found by the ICOA and the other comparison algorithms.
Table 1. Initial parameter settings of all algorithms.

Algorithm | Parameter Name | Parameter Value
COA | adaptive parameters (α, k) | [0,1], [0,1]
COA | C1 | 0.2
COA | C3 | 3
COA | δ | 25
COA | μ | 3
DE | scaling factor (F) | [0,1], [0,1]
DE | crossover rate (CR) | 0.9
HHO | starting energy (E0) | [−1,1]
CDO | Sγ | Rand(1,300,000) km/s
CDO | Sβ | Rand(1,270,000) km/s
CDO | Sα | Rand(1,16,000) km/s
CDO | r | Rand(0,1)
SSA | α | [0,1]
SSA | warning value (R2) | [0,1]
SSA | safety value (ST) | [0.5,1]
SSA | Q | random number obeying a normal distribution
ZOA | r | [0,1]
ZOA | I | [1,2]
ZOA | R | 0.01
ZOA | Ps (switching probability) | [0,1]
PSO | cognitive and social coefficients | 2, 2
PSO | inertia constants | [0.2,0.8]
GWO | control parameter (C) | [0,2]
ICOA | adaptive parameters (α, k) | [0,1], [0,1]
ICOA | C1 | 0.2
ICOA | C3 | 3
ICOA | δ | 25
ICOA | μ | 3
ICOA | scaling factor (F) | [0.4,0.8]
ICOA | C | [1,2]
ICOA | β | [1,3]
Table 2. Comparison of results between ICOA and other algorithms (10-dimensional CEC 2020 test set).
FunIndexCOADEPSOCDOFFASSAGWOHHOZOAICOA
F1Mean3838.9552081,649,311,26927,581,932.0913,429,818,43522,496,879,6983719.63225423,520,636.32331,765.7796304,236,883.6101.4910899
Std3007.602591636,805,261.850,249,275.92,201,887,8125,949,781,9933024.70717977,107,311.27174,444.6561629,022,2391.08284947
p-value6.796E−086.796E−086.796E−086.796E−086.796E−086.796E−086.796E−086.796E−086.796E−08
Rank3 8 6 9 10 2 5 4 7 1
F2Mean1860.0625152959.3884152168.85482533.4457943722.4313771739.7704681540.0652441906.0942951583.8163011448.116386
Std371.5677643228.2440872443.693734139.4368457271.604534297.2513639239.2009606268.9545701195.8931473165.3887062
p-value1.235E−076.796E−085.166E−066.796E−086.796E−087.577E−065.250E−012.062E−061.929E−02
Rank5 9 7 8 10 4 2 6 3 1
F3Mean765.9056692824.7519216745.4028849790.23722311162.396417776.1333762729.6126697777.9206277733.8894709720.5373713
Std24.1697910219.4832908611.104391056.80353568570.1423648925.896150229.08932689517.8577959810.276875663.212966537
p-value1.657E−076.796E−081.918E−076.796E−086.796E−086.796E−083.369E−016.796E−081.576E−06
Rank5 9 4 8 10 6 2 7 3 1
F4Mean1901.4381892481.4107432016.75550812,724.660722,852,758.0111901.8914051902.1001471906.2763312132.9950681900.926674
Std0.941761873917.4421483117.9837341541.0243893,356,626.5860.7850722810.9869710432.346971549622.1633590.275094376
p-value5.629E−046.796E−086.796E−086.796E−086.796E−087.898E−083.057E−036.796E−086.796E−08
Rank2 8 6 9 10 3 4 5 7 1
F5Mean8779.842918785,981.1194674.50296916,143.3969525,412,786.184657.76387469,736.5801524,035.2729112,421.518881710.774526
Std6940.834127458,224.59842780.5332496400.11428123,626,290.212241.418386156,126.43321,366.5986228,288.906375.91583468
p-value6.796E−086.796E−086.796E−086.796E−086.796E−086.796E−086.796E−086.796E−086.796E−08
Rank4 9 3 6 10 2 8 7 5 1
F6Mean1673.0108011969.6955531879.75861887.0778542804.4474081762.0757131725.9443851804.8502851799.6782851632.171341
Std65.8167227885.90377711107.796066959.2641346380.7092261111.04968791.85497078111.996599899.2174137366.09978203
p-value2.745E−046.796E−081.235E−076.796E−086.796E−081.576E−061.415E−051.376E−062.218E−07
Rank2 9 7 8 10 4 3 6 5 1
F7Mean3302.173721166,506.28182650.623114203,618.087610,244,437.842960.2472478129.10975811,965.880376025.0367962100.672449
Std976.9895466124,776.3342715.6411572414.289098312,006,509.81368.94123573949.52661811,200.054093260.5166450.310442338
p-value6.796E−086.796E−086.796E−086.796E−086.796E−086.796E−086.796E−086.796E−086.796E−08
Rank4 8 2 9 10 3 6 7 5 1
F8Mean2298.2491682651.3626372455.8925963178.8426384091.0285692303.5410992307.1232772313.5346352324.2334422295.819255
Std14.16200851164.2269187301.5124043350.8521508695.12832012.6221151246.1052151837.08654141425.5906797522.55983287
p-value1.481E−036.796E−086.796E−086.796E−086.796E−086.917E−072.690E−061.047E−066.796E−08
Rank2 8 7 9 10 3 4 5 6 1
F9Mean2747.1134282794.3238092824.7114652910.2954412992.4676842724.674492733.3634082778.9800952687.942172655.546401
Std7.48376544710.41871269116.374200220.1764152155.3832079497.3881133655.28598944105.0872007135.0390389117.1652515
p-value3.987E−065.227E−072.041E−056.796E−086.796E−083.499E−061.199E−016.796E−085.227E−07
Rank5 7 8 9 10 3 4 6 2 1
F10Mean2931.845093057.2078132926.7403143574.216494342.8169252913.4626942940.029412927.297172962.4133092902.5189
Std21.9867994640.9906967262.6117871982.27196576599.66813177.3671519325.2052308626.0174341347.4153396914.40075427
p-value8.572E−066.767E−082.553E−076.767E−086.767E−083.488E−063.924E−071.910E−071.910E−07
Rank5 8 3 9 10 2 6 4 7 1
Mean Rank3.78.35.48.4103.24.45.751
Result38691024751
+/=/−0/0/100/0/100/0/100/0/100/0/100/0/100/3/70/0/100/0/10-
Table 3. Comparison results of the ICOA and other algorithms (CEC 2022 test set).
FunIndexCOADEPSOCDOFFASSAGWOHHOZOAICOA
F1Mean20,199.1825876,171.53004944.21409825,500.33409228,117,680.12142.34333210,977.105892552.371539029.770081300.5671868
Std6960.12614515,952.34818576.8655192890.0341877487,428,142.81011.1826163750.7324071506.5695143380.3935640.569563772
p-value6.796E−086.796E−086.796E−086.796E−086.796E−086.796E−086.796E−086.796E−086.796E−08
T-p2.8799E−145.7637E−210.325267.1078E−530.0242752.0347E−073.653E−132.0027E−053.281E−15
Rank7 9 2 8 10 3 6 4 5 1
F2Mean462.92614881542.444397518.12795162023.3244259085.491279450.4501635476.8470318478.7412019581.5630808443.4221186
Std20.65489048226.51931255.7823210537.549266682305.80414118.5513431215.1546976127.9290982882.6800649920.22370434
p-value2.041E−056.796E−081.657E−076.796E−086.796E−083.372E−025.166E−069.748E−066.796E−08
T-p0.00165441.2315E−190.0936261.3009E−556.882E−170.00156293.7369E−070.000371123.5299E−14
Rank3 8 6 9 10 2 4 5 7 1
F3Mean623.0433595662.2285346650.0185587660.5528855724.9135447633.2590033603.6815354653.1966242642.7965557605.537832
Std15.661401085.8744107997.3454056295.48201456111.2864470113.062743082.4183273328.9366947977.0030456195.621803075
p-value1.037E−046.796E−086.796E−086.796E−086.796E−083.416E−074.903E−016.796E−086.796E−08
T-p0.000116421.42E−251.1469E−221.791E−281.3329E−326.3709E−110.18651.578E−241.03E−19
Rank3 9 6 8 10 4 1 7 5 2
F4Mean888.25330671007.725436891.7401872943.90707251106.972433892.0369531860.729065883.2562497856.8945837855.7576147
Std6.4784274616.1129078829.8195496814.4853539220.8743645720.4508004226.006155698.74899025711.6446064518.53309918
p-value2.062E−066.796E−083.705E−056.796E−086.796E−089.748E−066.949E−017.577E−067.557E−01
T-p2.8582E−075.8033E−250.33651.0869E−186.2815E−303.5426E−060.854310.000502480.84321
Rank5 9 6 8 10 7 3 4 2 1
F5Mean2273.2611587002.9850162014.6822123246.48229714,149.767662361.3269581068.2563612657.9936171726.3552551195.296702
Std684.71874471357.80889237.2145198236.41685592168.568502226.5678246166.67826252.3357777203.9597704414.2645949
p-value1.104E−056.796E−085.874E−066.796E−086.796E−081.918E−076.359E−017.898E−081.997E−04
T-p1.8187E−093.7713E−212.7513E−072.6749E−241.1337E−252.2516E−180.664363.4122E−234.9521E−09
Rank5 9 4 8 10 6 1 7 3 2
F6Mean6205.807376751,183,223.34,006,662.7374,366,063,1487,653,231,5739249.1783762,452,926.50394,197.04474,793,747.8065270.092938
Std5340.238554286,302,362.17,392,193.8162,244,628,8192,712,022,7597846.9159645,796,887.43545,125.895589,243,505.8541257.414628
p-value2.616E−016.796E−086.796E−086.796E−086.796E−084.094E−012.341E−036.796E−088.597E−06
T-p0.455348.2009E−120.388788.7357E−079.7743E−150.00189170.00957292.1898E−100.046577
Rank2 8 6 9 10 3 5 4 7 1
F7Mean2089.5921292271.915312140.2795692318.2633452543.4169732104.4500932064.5000462151.0596022107.4373662042.839982
Std39.0566110148.1604919942.7740653438.74995223126.673092834.4020229939.5329376538.373724320.4874872625.78606107
p-value9.278E−056.796E−082.960E−076.796E−086.796E−088.597E−061.782E−032.218E−071.376E−06
T-p2.9555E−073.3136E−216.4519E−128.4535E−355.2263E−243.6903E−081.032E−054.2897E−154.0238E−14
Rank3 8 6 9 10 4 2 7 5 1
F8Mean2283.482192369.1281572305.4324742249.27226812,015.431732296.6824062254.4720272253.5692132265.4691542226.704865
Std70.5527995762.0969917988.9021247.6492525558917.10485674.2469547247.3461751636.838213864.687977864.137240249
p-value2.690E−066.796E−081.413E−076.796E−086.796E−082.690E−064.155E−042.960E−072.356E−06
T-p0.00900228.5727E−120.00782896.4937E−160.000170046.9008E−070.0505680.000595750.0023227
Rank6 9 8 2 10 7 4 3 5 1
F9Mean2480.8021662736.8493022510.0318673475.3199344017.4640682480.8405272506.3978672491.9351412570.9802912480.781291
Std0.02684795346.4924232329.3811710557.60767292707.48775680.04857262221.0874052510.6907117338.614305452.59307E−05
p-value6.796E−086.796E−086.796E−086.796E−086.796E−086.796E−086.796E−086.796E−086.796E−08
T-p4.3633E−052.9457E−170.314911.4288E−402.8428E−124.3547E−067.4004E−052.3248E−072.6177E−12
Rank2 8 6 9 10 3 5 4 7 1
F10Mean3511.9598394992.2216584691.4575875530.1679678369.5200433582.3555473344.5644353760.3345363431.5294032653.69296
Std839.6821411461.4084481189.216981435.12401429.1177911696.5281943666.8767227774.5187084789.3937417436.276607
p-value8.292E−051.047E−061.576E−061.918E−076.796E−083.293E−051.444E−045.874E−064.166E−05
T-p0.00135533.9727E−082.9656E−081.9522E−241.2016E−392.2333E−067.4658E−061.0622E−110.0055316
Rank4 8 7 9 10 5 2 6 3 1
F11Mean2926.3961386941.5355663741.0105788486.32935114,492.733442930.9347783333.7492222955.7715984621.6620542900.081251
Std94.34590768612.35333341347.2144325.623629192092.598691.98415239250.377896383.67962857834.5871513112.3701873
p-value5.115E−036.796E−086.796E−086.796E−086.796E−081.014E−039.127E−083.152E−026.796E−08
T-p0.455812.3514E−260.202284.6683E−631.588E−240.707372.0536E−090.267176.1674E−09
Rank2 8 6 9 10 3 5 4 7 1
F12Mean2991.6289133126.4136923822.1632383522.0350464234.638023004.663562966.9128533100.9384383310.479562952.554545
Std65.2184376235.09782036266.718905443.82476357307.087396963.1129592621.76545959119.8779699100.273988117.37457276
p-value1.349E−036.796E−086.796E−086.796E−086.796E−082.596E−054.320E−032.960E−076.796E−08
T-p0.00066545.6551E−219.1348E−199.778E−405.179E−260.000844720.117941.1851E−052.2374E−18
Rank3 6 9 8 10 4 2 5 7 1
Mean Rank 3.750 8.250 5.500 8.000 10.000 4.25 3.333 5.000 5.250 1.167
Result 3 9 7 8 10 4 2 5 6 1
+/=/− 0/1/11 0/0/12 0/0/12 0/0/12 0/0/12 0/0/12 2/1/9 0/0/12 0/1/11 -
Table 4. Initial parameters of the optimization algorithm.
Algorithm | Parameter Name | Reference Point
SaDE | Scaling factor (F) | 0.5
 | Crossover rate (CR) | 0.9
 | Probability (p) | 0.5
GQPSO | U, ψ, t | [0, 1]
 | Contraction-expansion factor (β) | 0
 | Gaussian parameter (σ) | 0.16
GOA | Attractive force (f) | 0.5
 | Attractive length scale (l) | 1.5
 | Gravitational constant (g) | 9.8 m/s²
WOA | Spiral shape parameter (B) | [0, 1]
 | I | Rand[−1, 1]
 | Probability of a predation mechanism (P) | Rand[0, 1]
 | Convergence factor (a) | Random numbers obeying a normal distribution
AOA | Constants (C1, C2, C3, C4) | 2, 6, 1, 2
IGWO | Control parameter (C) | [0, 2]
ISSA | e | Constant
 | Step control parameter (β) | N(0, 1)
Table 5. Comparison results of ICOA and other algorithms (CEC2020 test set).
Fun | Index | COA | SaDE | GQPSO | AOA | GOA | WOA | IGWO | ISSA | TGA | ICOA
F1Mean3080.590138205.63048019,625,110,388213,202,816.44,379,490,880302.280919418,471.776942600.286025945,491,194.5101.4098254
Std2180.04528325.71488281,856,910,397469,523,140.82,300,629,583276.08780928809.9101782930.092116417,299,520.21.055445014
p-value6.796E−085.792E−016.796E−086.796E−086.796E−086.015E−076.796E−084.517E−076.796E−08
Rank 5 2 10 7 9 3 6 4 8 1
F2Mean1913.3890081893.5595653151.2977891674.8433672185.2284441881.8931781376.820821895.8206722174.848041458.234238
Std350.947033139.3642723208.173789299.890429177.503404284.7332401309.3453585454.2873522286.2958063151.7652699
p-value1.444E−046.674E−066.796E−081.988E−011.235E−072.341E−036.787E−023.605E−025.227E−07
Rank 7 5 10 3 9 4 1 6 8 2
F3Mean774.7307259729.6251739821.2219869749.065088767.3536999741.4243149720.5099737748.2369501806.9572282721.0349346
Std26.265198194.85672882611.9441412514.1234948915.6212052212.04714979.22201165720.6432196216.534194463.460709391
p-value5.166E−061.075E−016.796E−081.415E−052.563E−075.091E−042.073E−021.199E−016.796E−08
Rank 8 3 10 6 7 4 1 5 9 2
F4Mean1901.5805621901.95715476,717.823571903.6395510,187.438121904.1194451902.2757751901.3326881932.6684041900.810655
Std0.8022469660.42605012438,034.262182.3207431229688.6776242.119783680.4069468090.60273643525.781675820.259623522
p-value9.278E−056.796E−086.796E−084.539E−076.796E−083.939E−076.796E−084.601E−046.796E−08
Rank 3 4 10 6 9 7 5 2 8 1
F5Mean8256.4629737408.827254472,957.61824502.521987434,060.09812106.7902992945.7225384610.7292243,167.685591710.941224
Std7614.40585924,896.86396138,356.90392161.335567165,625.822211.39494861091.8111682432.12568621,813.4218811.33206389
p-value6.796E−085.166E−066.796E−086.796E−086.796E−082.563E−076.796E−087.898E−086.796E−08
Rank 7 6 10 4 9 2 3 5 8 1
F6Mean1655.8910511615.8909492276.266781759.6067511980.673761737.9572771603.5774891737.656581735.4974511627.109196
Std72.5534319837.67620158141.593370662.76771617125.865120584.480500773.53490747696.2371889582.3313037153.47052931
p-value1.017E−012.616E−016.796E−082.222E−042.960E−073.648E−011.988E−014.601E−041.794E−04
Rank 4 2 10 8 9 7 1 6 5 3
F7Mean3059.1102322208.463589566,560.73926167.232251227,056.13372213.3722272499.9917552450.2061469566.1761532101.420693
Std512.136508428.6440918385,508.46162838.195448373,754.5143104.6616754174.3121823243.61105366823.8435863.747408303
p-value6.796E−087.579E−046.796E−086.796E−086.796E−082.563E−079.173E−081.657E−086.796E−08
Rank 6 2 10 7 9 3 5 4 8 1
F8Mean2301.7563982301.7030512898.5935072276.1941912571.8067252304.8371572299.5170072302.4419192348.3605212295.869634
Std0.7920013191.04995385590.4690529543.57611423114.449388913.9438647223.463218822.42132057150.8513702922.57289382
p-value3.048E−041.898E−016.796E−082.853E−016.798E−061.201E−061.201E−061.443E−041.610E−04
Rank 5 4 10 1 9 7 3 6 8 2
F9Mean2719.0996132736.9448042865.8183872689.7145832791.3921092722.2243212727.0216552725.7840942636.0928812612.083805
Std75.2073740357.82998677.81898066135.628700192.6692912896.420692253.8972433185.3927301948.87360673118.8353758
p-value3.293E−052.139E−035.277E−072.471E−045.428E−012.690E−042.184E−019.173E−089.173E−08
Rank 4 8 10 3 9 5 7 6 2 1
F10Mean2920.3996542923.1104143269.5645562934.4495313076.61182916.9573282898.0958422925.2694962989.6971362905.437014
Std64.2106121323.3680262952.2151786426.84240033106.372515564.290114630.33189913823.0636375924.9049644417.26832487
p-value1.791E−043.636E−036.776E−082.219E−046.776E−083.636E−039.461E−013.696E−056.776E−08
Rank 4 5 10 7 9 3 1 6 8 2
Mean Rank 5.3 4.1 10 5.5 9 5.1 3.1 5.1 7.2 1.6
Result 6 3 10 7 9 5 2 5 8 1
+/=/− 1/0/9 1/3/6 0/0/10 1/1/8 0/0/10 0/0/10 3/1/6 0/1/9 0/0/10 -
Table 6. Comparison of results between ICOA and other algorithms (10-dimensional CEC 2017 test set).
Fun | Index | COA | SaDE | GQPSO | AOA | GOA | WOA | IGWO | ISSA | TGA | ICOA
F1Best148.67303454747.6222611,466,633,1853,822,463.598334,306,252.5100.28938252041.073121125.572396370,114,699.2159.9555967
Worst9549.96783946,364.359672,895,384,603147,860,1472,034,750,87616,182.0044534,064.539918,045.257431,313,880,4411797.617184
Mean4116.59425919,510.532512,033,179,55854,641,045.48967,098,811.12548.6629627622.6344673817.315048773,490,841.5510.4325359
Std3451.74032112,395.32945374,346,351.649,935,698.39508,659,189.64117.2766757762.9618035461.116842246,047,598.6348.6744138
p-value0.01332056.79562E−086.79562E−086.79562E−086.79562E−080.2616166.79562E−080.00904546.79562E−08
Rank 4 6 10 7 9 2 5 3 8 1
F3Best300.0115365300.026297211,337.31372302.894906210,684.80791300300.01918663003212.316168300
Worst372.363719920.380480520,813.757222313.19100718,890.01384300300.2383643300.000000111,162.33462300
Mean304.9282754346.359027115,508.93347650.926271314,528.15198300300.06428433006492.62042300
Std15.98248703139.72520792554.692367568.4728262311.0104291.74298E−120.0521755122.72674E−082171.2246314.25057E−11
p-value6.75738E−086.75738E−086.75738E−086.75738E−086.75738E−080.1004446.75738E−081.91209E−056.75738E−08
Rank 5 6 10 7 9 1 4 3 8 2
F4Best400.0319889406.3324266494.8966315403.3790854435.0131351400.0007614400.7881249400.0189761416.3654147400
Worst409.5988797409.6791913613.0620111444.1383144617.0037101407.5123122406.0013963409.3511312511.6735362403.9865791
Mean404.6699325407.3377881541.9514887419.8505524510.276305401.6576344402.1708677403.8830054453.9491199400.7973158
Std3.010232690.69842828127.5127052214.3920776141.227557592.6811307321.0373463122.66165896323.749913511.636057547
p-value1.06166E−077.876E−086.77647E−089.14744E−086.77647E−089.10523E−076.90001E−072.95221E−076.77647E−08
Rank 5 6 10 7 9 2 3 4 8 1
F5Best507.9603546517.1084588547.0449047507.960565518.4506157502.9848772500.9953028513.9294167544.4969693501.9899181
Worst526.8638492531.20374565.9932599525.4408086542.4827219519.899161516.2317563553.7275871564.9711659517.9092429
Mean517.9901144524.5896705554.8746013516.6819111530.6707215513.3821867505.2230268526.8638368555.976207509.8363908
Std4.9300601293.8348820426.7428834234.7433813837.5326703534.760443023.2899946428.6557735136.2659836053.953514287
p-value2.92486E−056.79562E−086.79562E−085.89592E−056.79562E−082.59146E−054.6804E−051.19538E−066.79562E−08
Rank 5 6 9 4 8 3 1 7 10 2
F6Best600.0646378600629.9998944601.3325972610.9706087600.7012471600.045807600611.6621673600.00015
Worst648.918318600660.8601099623.7118128636.6164527621.3363286600.0843116625.6466821626.2959508600.0045008
Mean608.8001365600649.0458874607.860604622.6489137607.2577357600.0621377605.1406886618.7073681600.001341
Std14.275197525.21631E−146.4664458526.1653847996.0974673316.3699535870.010850087.0912819164.0989792440.001080142
p-value7.89803E−081.94473E−086.79562E−086.79562E−086.79562E−086.79562E−086.79562E−080.001348586.79562E−08
Rank 7 1 10 6 9 5 3 4 8 2
F7Best712.013249711.412164818.3422902727.1132342742.2848267726.6088087712.577756716.2732618752.1640413714.3827533
Worst855.8382714725.1023536857.3651285806.1758234832.9248346797.1786426737.0763445822.6989394825.4520316726.3788175
Mean762.0943164721.2208093835.2569854755.089176776.097165742.8577434720.7600727744.8829921797.6770415718.9003091
Std46.230788033.48068535411.1303883422.2680021523.8201274116.129835426.96408944331.1181090820.624976992.927239924
p-value0.004320180.002341276.79562E−086.79562E−086.79562E−086.79562E−080.9892090.0001793646.79562E−08
Rank 7 3 10 6 8 4 2 5 9 1
F8Best805.9698249807.2457367874.0694682820.0699399834.9417722810.9445396801.9954459806.9647084825.6936112802.9875186
Worst876.6114734819.664581902.3440368851.2843135860.1755839836.8134044816.4533424871.6366218860.5124113813.3907572
Mean831.880596814.0204409886.95372837.4797929848.1388623826.664851808.7544151830.7943993843.2812621807.9906668
Std21.86918963.2129993618.2899231049.0164815388.6578460557.2584762575.01472619318.59604687.9344830593.145065154
p-value1.65708E−076.91658E−076.79562E−086.79562E−086.79562E−082.21199E−070.01332056.79562E−086.79562E−08
Rank 6 3 10 7 9 4 2 5 8 1
F9Best900.08982479001902.294749902.040387934.0248542903.7265596900.0020026900996.5772439900
Worst1194.18378900.0000062684.768821032.7612391601.1244721119.018646900.02171931768.5827531465.8854900
Mean933.9467429900.00000052299.918782927.49198471296.20627955.1920242900.00638421009.2254151173.247669900
Std76.800073381.36528E−06213.502612135.89157465205.092180354.739031350.004752395252.8979347103.47898074.16317E−12
p-value6.6344E−080.02495016.6344E−086.6344E−086.6344E−086.6344E−086.6344E−086.6344E−086.6344E−08
Rank 5 2 10 4 9 6 3 7 8 1
F10Best1340.2052721702.7230112759.2067351625.2256041781.5907661118.6256981003.9385851559.4226862072.1583651024.77159
Worst2600.1537342280.9628913373.2079442392.1234622914.7204172244.6456832200.9640332554.4460622570.3000061872.259067
Mean2003.5121352042.8198473053.5240051908.5525842241.2740631698.9406491314.6651942062.6797112310.3314811466.530483
Std346.214506135.3497372180.4873836238.866608299.1371429309.1977504358.1750795273.7979532140.259575228.2632009
p-value0.04679161.06457E−076.79562E−089.74798E−069.17277E−080.003966240.007113490.006557196.79562E−08
Rank 5 6 10 4 8 3 1 7 9 2
F11Best1113.6797361109.9825641235.4550221119.2105691159.2678031103.9949931102.0476221113.9298791137.9336661100.99496
Worst1209.0168311128.2790272054.9469191207.7199241790.6902711169.8971661112.0496741188.5987621263.0685021105.121276
Mean1149.6406661118.4106491590.2228851143.8496191294.2551161129.3111851107.4414341151.4237111206.4708971102.621416
Std28.547124015.515854063225.527905220.77982629167.904957917.512308692.51634157521.1897409833.638828611.447853224
p-value6.79562E−086.79562E−086.79562E−086.79562E−086.79562E−086.79562E−081.65708E−076.79562E−086.79562E−08
Rank 6 3 10 5 9 4 2 7 8 1
F12Best49,086.8010151,710.1332484,401,058.7312,238.8473,216,470.2922518.110565660.21814830,000.8107911,007,844.022138.110046
Worst4,925,172.527662,879.8469738,709,578.89,956,555.414188,423,358.721,571.9306987,688.828842,630,148.30749,842,458.14318.510312
Mean1,242,192.291248,406.3993233,880,579.83,353,132.96478,363,722.718393.65443834,571.38395810,606.449530,300,014.752897.07194
Std1,515,129.883188,509.6202160,770,530.53,104,917.49961,884,597.914356.64538624,143.10043767,394.480211,563,636.79604.365916
p-value6.79562E−086.79562E−086.79562E−086.79562E−086.79562E−080.0002470616.79562E−086.79562E−086.79562E−08
Rank 6 4 10 7 9 2 3 5 8 1
F13Best1464.5020631303.614389234,592.8777798.9681373181.1310741313.1669071450.970581343.7915282682.9676981302.095923
Worst30,355.929922018.73855439,033,98216,571.90555299,294.48211799.9067513770.67668926,299.72566156,322.61811313.32242
Mean5051.7624671347.56643714,765,418.4212,016.0097227,623.063151444.0672482032.51871410,697.1259557,245.247851307.709977
Std6461.487512158.049669110,008,094.312371.31540364,439.3798152.1812383564.11941367824.80635451,753.899213.113739726
p-value6.79562E−080.00101416.79562E−086.79562E−086.79562E−081.43085E−076.79562E−086.79562E−086.79562E−08
Rank 5 2 10 7 8 3 4 6 9 1
F14Best1473.5385841401.5444963160.523191458.7801521540.5981411414.2245911434.9483211430.5427751445.764191400.000465
Worst1769.1997441423.691449367,734.98121698.48708917,355.756211462.7884571478.5579021615.1602242083.9489781421.091345
Mean1561.2909721411.53260332,677.615651508.0671025232.8715951432.4867561447.5804791486.3307681612.8134981403.319026
Std79.544483547.53260618579,637.6688255.248295743527.99010512.9999495511.5054078443.96708702170.89665134.84782392
p-value6.79562E−080.0001037346.79562E−086.79562E−086.79562E−086.79562E−086.79562E−086.79562E−086.79562E−08
Rank 7 2 10 6 9 3 4 5 8 1
F15Best1642.4262611500.9776287215.2293821576.4178673075.4105821502.2671521509.2275411504.3496211615.4548481500.135994
Worst4595.4865551503.08472136,872.232172425.17768520,522.595481749.7674961565.534371782.9568345737.0779251502.575169
Mean2288.073231501.58828520,457.776911707.25354712,045.028051548.6916971527.4786271606.688482814.2931541500.969889
Std873.87582040.5447930018393.159846200.63127834949.44208858.2064366516.6602049879.787148451257.9926280.731431977
p-value6.79562E−080.00101416.79562E−086.79562E−086.79562E−086.79562E−086.79562E−086.79562E−086.79562E−08
Rank 7 2 10 6 9 4 3 5 8 1
F16Best1602.0725331600.8147691931.6797481629.6981311673.6831661600.724741601.4600341612.3637471618.7606871600.033067
Worst1879.5396761638.8958022381.8339271983.1776232130.9341681975.7750271613.0805762151.8279371841.6500881838.422985
Mean1676.4491871603.7734392172.1714741808.5678541906.6387351776.9710621603.7572321834.7440521696.7369221614.012491
Std89.503470458.312587623107.7315397132.5891305129.6763427156.56737652.777722188144.932534268.8216190552.9725289
p-value8.29242E−050.003056636.79562E−081.43085E−076.79562E−081.37606E−060.0009209131.59972E−054.53897E−07
Rank 4 2 10 7 9 6 1 8 5 3
F17Best1718.1575871703.0396531785.4933731728.8518331736.8337781723.9656141718.2294331714.2908881735.871871702.443984
Worst1787.3094241728.1329391942.1678891818.9921371820.2174551801.0462871738.5456851769.7123971801.4965421723.230288
Mean1736.1171181719.0515571849.2536321776.7016691770.355991755.0391591732.305931738.2983621763.9017221713.32226
Std18.648703988.20262099238.6070388829.9586925221.0692655818.699674165.87840630813.8710974315.975460648.378076252
p-value9.74798E−060.001625266.79562E−086.79562E−086.79562E−084.53897E−076.67365E−065.87357E−066.79562E−08
Rank 4 2 10 9 8 6 3 5 7 1
F18Best2376.9161741801.86206638,0311.40253058.4092284944.3245691824.2017082395.578491867.9729429172.5127631801.809688
Worst29,165.8445414,073.78964236,981,725.720,275.651690,437,745.232143.42292515,224.326846515.278517901,377.021823.819475
Mean12,335.732632560.00379573,228,735.247266.7785928442,913.21880.8734335197.8773983520.611854156,798.01731818.361647
Std7381.607042774.91383566,232,838.864182.19899220,607,718.2185.950898542933.6663771479.373335210,322.55845.290637159
p-value6.79562E−080.000374996.79562E−086.79562E−086.79562E−086.79562E−086.79562E−086.79562E−086.79562E−08
Rank 7 3 10 6 9 2 5 4 8 1
F19Best1941.4278581900.01429115,096.121861910.1091682528.872911903.1054081910.0381111905.3626361957.6650281900.191068
Worst2237.8966312062.8065193,586,326.5342288.843408139,537.53991953.8946711957.4072523063.36313723,836.460811903.769604
Mean2066.9250171908.7910891,034,285.8321988.08254726,112.510571916.0260281920.8372312038.7114765706.6099911901.56281
Std75.6977057136.25686385914,707.645987.6385134933,168.08214.6327191810.79753957247.57900245104.5899630.814218167
p-value6.79562E−083.93881E−076.79562E−086.79562E−086.79562E−086.79562E−086.79562E−086.79562E−086.79562E−08
Rank 7 2 10 5 9 3 4 6 8 1
F20Best2001.01459120002184.352362035.9242462055.5815042023.2887042000.6303872020.9949592034.3389322000.000003
Worst2140.6161952020.0001082383.6753362154.4769382224.5111732211.5429772029.8713492286.1793442086.0151942002.614276
Mean2031.7256282001.2559652291.7295882078.2192512133.8972352083.5469582023.1034982091.4597772054.9670872000.639073
Std38.038569144.42615390751.8989519230.9806390261.9879722558.83222438.83927607794.6106745513.91689360.629731914
p-value5.16578E−062.15196E−056.79562E−086.79562E−086.79562E−086.79562E−084.53897E−077.89803E−086.79562E−08
Rank 4 2 10 6 9 7 3 8 5 1
F21Best2200.0002882201.4428162257.2087292200.7255492244.66061922002200.0071222002208.8852272200
Worst2328.3455832329.6166042415.1286032342.0485982365.8985522350.9314412315.8730892351.2287362254.600662315.521808
Mean2299.2644122296.9062682347.6038932287.6723422336.1548012293.2405932274.585582321.9426842224.2557022272.128253
Std42.7623573250.3887429754.9773101762.563935335.489230856.5904335650.2005870731.6545117912.2842742254.36216179
p-value7.4064E−050.002560626.67365E−066.67365E−060.0009209130.003637240.8392320.00345930.597863
Rank 7 6 10 4 9 5 3 8 1 2
F22Best2211.39665123002736.1496172211.5823512287.3362632248.6769432300.3059582300.6343592289.7147372300.001265
Worst2304.2152732302.950233116.6285382399.3516422733.7516132315.3329282307.3238612305.5661782429.3369172301.877899
Mean2297.1202512301.3907482913.387052276.533142539.1890472301.8099522304.4914722302.2492022339.2651252300.85498
Std20.191166330.988251463108.137563452.14553769106.241588815.299098382.0637910271.28937287838.784971370.481061092
p-value0.04112360.6359456.79562E−080.2853056.79562E−081.59972E−051.59972E−050.000374991.59972E−05
Rank 2 4 10 1 9 5 7 6 8 3
F23Best2606.6837562609.441542717.9563292630.6230332658.3845972609.1733392600.0260682613.2304562637.8661952605.504363
Worst2624.836262628.8685632796.7357232698.6136842750.8983552648.6909242620.3307432661.0001352673.1773122614.139535
Mean2615.1384142618.130772755.6879322654.8153152692.0025462629.4435342607.7793842628.0126462658.091262610.531075
Std5.8108371854.61252357120.6168494717.7818475822.3505971113.00512515.45060242114.332148849.2599489922.092261325
p-value2.59598E−051.04727E−066.79562E−087.89803E−086.79562E−089.17277E−080.008354832.21776E−076.79562E−08
Rank 3 4 10 7 9 6 1 5 8 2
F24Best2736.23290725002717.0018142503.9253072565.9443525002500.0016032601.2387872584.0460562500
Worst2760.9671772757.5647073036.9645062860.787742893.1847872778.7077252750.9836912793.6876142783.8090262742.948767
Mean2745.5311472739.7861692863.4807422722.0028572778.8925242717.0777732722.2946642743.2653452634.9747422643.449909
Std7.72231247356.6091803684.2173108125.9998303110.558408794.0266896353.3197735351.0489140443.08010434120.1871294
p-value5.87357E−061.59972E−051.37606E−060.0002222032.56295E−070.0001608670.001782382.92486E−050.023551
Rank 8 6 10 4 9 3 5 7 1 2
F25Best2897.7799192897.7428693159.1586282897.9317963002.7035912897.7428692897.7446122897.7428692962.2203372897.742869
Worst2950.6163042946.1738653372.8055053013.0368533448.3903972960.1386022943.5529222949.4529683041.8532812943.442847
Mean2929.3246672926.2263723262.7169442936.1494843147.346032923.2965372900.3047352919.7763222991.839642906.968546
Std22.8985881623.1392314950.6609422429.30352266133.0432725.2904090910.1865630423.7370091521.1611821218.69368178
p-value9.03062E−070.0005059856.70985E−085.11864E−066.70985E−080.0003030110.05572253.9337E−066.70985E−08
Rank 6 5 10 7 9 4 1 3 8 2
F26Best2600.19100228003892.4096082607.2963953398.74681726002600.00212928002981.1515442800
Worst4091.0333173946.6394644552.2058094007.0498414480.0629833320.8590283776.6323713094.8562963213.9454232900.000003
Mean3073.1658973011.7373354221.7217112963.6269143864.6782862960.5853162928.8354822928.6275053109.6107432880.000001
Std340.8278619302.7313026173.3020535324.6732024364.7951636218.7965612210.494822593.7608930463.6276892941.03913375
p-value6.79562E−080.4999916.79562E−080.0008357176.79562E−080.0335366.79562E−080.0060316.79562E−08
Rank 7 6 10 5 9 4 3 2 8 1
F27Best3088.9780173088.9780133170.1359753093.0063683112.2452173089.517993089.0108263089.3080773099.9261843088.978013
Worst3134.8046963115.695973445.4184063200.002013264.8390693133.2731423091.1115083190.3422813106.5619163093.434321
Mean3098.3117493092.486293305.709773125.0800923208.7952643102.6444453089.4587023111.1257263103.8360893089.761299
Std9.8394047225.85856054462.4865057939.6874219538.753465128.8817167860.41947815331.928332121.8023371061.212620095
p-value2.67821E−060.0003034086.75738E−081.19538E−061.42319E−071.36981E−060.5249091.36981E−061.19538E−06
Rank 4 3 10 8 9 5 1 7 6 2
F28Best3100.00366231003679.4258683278.7431573216.85377231003100.03116431003212.1177013100
Worst3731.8129263411.8218343900.3680613750.4105993821.4771243411.8218083411.8218083736.1799733415.9157023411.821808
Mean3312.3863493262.7854373815.8190293316.7784683612.1083553238.4588973189.3212393311.5866253292.3084243208.348486
Std164.0830227115.470176464.51116625102.8066089183.5683596157.1689589140.1209232173.950962351.35725851142.9493947
p-value0.001934260.0120676.6063E−080.1400471.76322E−060.1180990.4730650.8055520.635627
Rank 7 4 10 8 9 3 1 6 5 2
F29Best3146.9759653148.898873352.3170943172.3972313183.391453162.8836483131.5096813154.997733177.6566243128.083943
Worst3336.5317763224.4773883654.7473453310.9028543428.0371443326.1592273171.988563374.3475753269.5686773147.167077
Mean3220.6106543186.2668743491.349563232.1787313292.5007533233.8011193150.5558833234.013563230.0370763133.342165
Std66.2736235419.042113585.9320043434.6041319561.2516489144.5385189611.6749215657.2980678730.302328374.760727689
p-value9.17277E−086.79562E−086.79562E−086.79562E−086.79562E−086.79562E−080.0001609817.89803E−086.79562E−08
Rank 4 3 10 6 9 7 2 8 5 1
F30Best3976.8461393908.4695055,545,090.3313275.11226624,485.186513420.7822633721.2517394354.413685128,224.86793443.647265
Worst1,384,854.4651,251,762.74332,821,146.7732,342.488407075071,251,762.7438250,23.83391,251,762.7432,055,685.5721,251,762.743
Mean469,355.0617170,001.38418,121,356.67161,197.849312,076,824.32106,806.094487,603.44456236,546.66842,956.9005106,779.5696
Std608,883.7639385,324.61576,301,155.986186,849.813213,396,013.56325,441.6895252,107.0941406,876.0247362,331.6332325,450.5459
p-value2.56295E−071.04727E−066.79562E−080.0001294059.17277E−080.0114291.37606E−065.22689E−071.05847E−07
Rank75104931682
Mean Rank 5.551 3.758 10.000 5.379 8.827 3.965 2.863 5.413 7.096 1.517
Result 7 3 10 5 9 4 2 6 8 1
+/=/− 0/0/29 0/2/27 0/0/29 1/1/27 0/0/29 1/2/26 3/2/24 0/1/28 1/1/27 -
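The p-value rows in the tables above come from the Wilcoxon rank-sum test at the 5% significance level, and each +/=/− row tallies how often a rival is significantly better than, statistically equivalent to, or significantly worse than the ICOA over the test functions. A minimal sketch of that bookkeeping (pure Python, normal approximation without tie-variance correction; the run arrays are illustrative):

```python
import math

def rank_sum_p(a, b):
    """Two-sided p-value of the Wilcoxon rank-sum test
    (normal approximation, no tie-variance correction)."""
    pooled = sorted((v, i) for i, v in enumerate(list(a) + list(b)))
    ranks = [0.0] * len(pooled)
    i = 0
    while i < len(pooled):
        j = i
        while j + 1 < len(pooled) and pooled[j + 1][0] == pooled[i][0]:
            j += 1                      # extend over a block of tied values
        avg = (i + j) / 2 + 1           # average 1-based rank for the tie block
        for k in range(i, j + 1):
            ranks[pooled[k][1]] = avg
        i = j + 1
    n1, n2 = len(a), len(b)
    w = sum(ranks[:n1])                 # rank sum of the first sample
    mu = n1 * (n1 + n2 + 1) / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    return math.erfc(abs((w - mu) / sigma) / math.sqrt(2))

def tally(icoa_runs, rival_runs, alpha=0.05):
    """'+': rival significantly better (lower error, minimization),
    '=': no significant difference, '-': rival significantly worse."""
    if rank_sum_p(icoa_runs, rival_runs) >= alpha:
        return "="
    mean = lambda xs: sum(xs) / len(xs)
    return "+" if mean(rival_runs) < mean(icoa_runs) else "-"
```

With 30 independent runs per algorithm and function, summing the '+', '=' and '-' outcomes over all functions of a suite reproduces a row such as 3/2/24.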
Table 7. Results of reducer design problems.
Algorithms | ic1 | ic2 | ic3 | ic4 | ic5 | ic6 | ic7 | Optimum Value
COA | 3.500144301 | 0.700013637 | 17 | 7.3 | 7.8 | 3.350234579 | 5.286747711 | 2996.512548
ICOA | 3.5 | 0.7 | 17 | 7.300000001 | 7.8 | 3.350214666 | 5.28668323 | 2996.348165
GWO | 3.500578809 | 0.7 | 17 | 7.372778983 | 7.8 | 3.351509936 | 5.288268428 | 2998.556168
HHO | 3.500006329 | 0.7 | 17 | 7.435689307 | 7.8 | 3.350470379 | 5.294064515 | 3002.312787
DO | 3.500021374 | 0.7 | 17 | 7.302661238 | 7.800049636 | 3.350251844 | 5.286696101 | 2996.39877
WFO | 2.605192109 | 0.700360394 | 17.02620326 | 7.594130815 | 8.116458587 | 2.902330636 | 5.000658902 | 100,002,377.9
GOA | 3.500019156 | 0.7 | 17 | 7.3 | 7.8 | 3.350313811 | 5.287116607 | 2996.656585
SSA | 2.6 | 0.7 | 17 | 7.3 | 7.8 | 2.9 | 5 | 100,002,530.8
FFA | 3.563925817 | 0.7 | 17 | 7.3 | 7.8 | 3.505693735 | 5.286829428 | 3062.924719
AOA | 3.6 | 0.7 | 17 | 8.3 | 7.8 | 3.478050297 | 5.309190914 | 3093.161996
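As a cross-check on the Optimum Value column, the speed reducer's objective in the standard formulation of this benchmark is the gearbox weight below; a minimal sketch under that assumption (variable order ic1 to ic7 as in the table; the problem's design constraints are omitted):

```python
def reducer_weight(x1, x2, x3, x4, x5, x6, x7):
    """Weight of the speed reducer (standard benchmark objective;
    constraint handling omitted)."""
    return (0.7854 * x1 * x2**2 * (3.3333 * x3**2 + 14.9334 * x3 - 43.0934)
            - 1.508 * x1 * (x6**2 + x7**2)
            + 7.4777 * (x6**3 + x7**3)
            + 0.7854 * (x4 * x6**2 + x5 * x7**2))

# ICOA solution from the table above:
w = reducer_weight(3.5, 0.7, 17, 7.300000001, 7.8, 3.350214666, 5.28668323)
```

Evaluating the ICOA solution reproduces its tabulated optimum of about 2996.348.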
Table 8. Statistical results of reducer design problems.
Algorithms | Best | Worst | Mean | Std
COA | 2996.512548 | 5440.50617 | 3119.577601 | 546.2929035
ICOA | 2996.348165 | 5628.074172 | 3139.487761 | 588.0200818
GWO | 2998.556168 | 3014.760517 | 3004.128398 | 3.68652866
HHO | 3002.312787 | 5572.407625 | 3655.28318 | 1111.043272
DO | 2996.39877 | 3006.021464 | 2999.219925 | 2.793965817
WFO | 100,002,377.9 | 100,002,470.5 | 100,002,421.3 | 27.49125769
GOA | 2996.656585 | 3000.477606 | 2997.890371 | 1.187587807
SSA | 100,002,530.8 | 100,003,226.2 | 100,002,939.3 | 167.8427193
FFA | 3062.924719 | 11,118,255.94 | 5,449,561.858 | 4,226,598.291
AOA | 3093.161996 | 3228.722518 | 3161.790426 | 52.68475681
Table 9. Hydrostatic thrust bearing design problem results.
Algorithms | r | r0 | μ | q | Optimum Value
COA | 6.665774353 | 6.680884473 | 7.26308 × 10−6 | 1 | 4665.174504
ICOA | 8.15369663 | 8.153788022 | 9.96607 × 10−6 | 1.023513439 | 958.0737549
GWO | 6.679918959 | 6.683393826 | 9.60624 × 10−6 | 2.067179509 | 3418.088762
AO | 7.987719182 | 6.54381018 | 7.83822 × 10−6 | 15.87088783 | 62,719.0719
SO | 9.090368429 | 9.091099374 | 9.03321 × 10−6 | 1 | 1814.372174
DO | 7.079945039 | 7.08152603 | 9.18667 × 10−6 | 1.007373651 | 2013.67956
WFO | 6.03724118 | 6.110820753 | 9.35983 × 10−6 | 6.036913245 | 13,500.33491
GOA | 7.9321792 | 7.94973761 | 6.09553 × 10−6 | 1 | 6351.059315
FFA | 13.57299586 | 13.57404632 | 9.80632 × 10−6 | 6.477130763 | 4267.49196
GRO | 7.794522083 | 7.872572662 | 5.58822 × 10−6 | 1.936312513 | 2414.211291
AOA | 8.98628022 | 8.987954146 | 1.01511 × 10−6 | 16 | 10,111.4088
Table 10. Statistical results of the hydrostatic thrust bearing design problem.
Algorithms | Best | Worst | Mean | Std
COA | 4665.174504 | 20,682.75954 | 8266.909451 | 3746.995955
ICOA | 958.0737549 | 22,952.13598 | 8457.336846 | 7263.332382
GWO | 3418.088762 | 8213.622118 | 4884.43496 | 1183.893831
AO | 62,719.0719 | 458,528.2129 | 210,568.1447 | 105,726.5923
SO | 1814.372174 | 55,114.30415 | 52,361.88304 | 11,897.66929
DO | 2013.67956 | 8161.476092 | 3554.351297 | 1807.90875
WFO | 13,500.33491 | 69,337.32777 | 37,717.87814 | 17,240.74021
GOA | 6351.059315 | 43,182.56002 | 14,284.76699 | 8888.626988
FFA | 4267.49196 | 13,092.37675 | 7708.85239 | 2429.408312
AOA | 10,111.4088 | 33,120.087 | 14,552.68101 | 5066.244501
Table 11. Results of welded beam design problem.
Algorithms | coa1 | coa2 | coa3 | coa4 | Optimum Value
COA | 0.205729747 | 3.253818479 | 9.03654619 | 0.205734385 | 1.69536477301484
ICOA | 0.205729878 | 3.253115802 | 9.036623911 | 0.20572964 | 1.69524705320638
GWO | 0.207136169 | 3.271908158 | 9.007111301 | 0.207370174 | 1.70713835001213
HHO | 0.20224206 | 3.327654143 | 9.411296372 | 0.204699696 | 1.75634507460288
SO | 0.199669651 | 3.52817999 | 9.014651388 | 0.206861551 | 1.72792710447185
DO | 0.205733672 | 3.253068635 | 9.036565154 | 0.205732524 | 1.69526036151233
WFO | 0.180799875 | 4.232650243 | 8.91411226 | 0.214390166 | 1.82921020034696
GOA | 0.328329978 | 2.295504799 | 7.191643387 | 0.328402222 | 2.1249262944582
SSA | 0.1 | 0.1 | 0.1 | 0.1 | 2.25348137991264
ISSA | 0.176382124 | 3.844685095 | 9.167517271 | 0.211759511 | 1.79876399875485
FFA | 0.261895948 | 2.669207594 | 8.206734753 | 0.289033286 | 2.10450411846326
AOA | 0.188910224 | 3.674031117 | 10 | 0.202829975 | 1.86950299444208
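The welded-beam objective in the standard formulation of this benchmark is the fabrication cost 1.10471 h²l + 0.04811 t b (14.0 + l); a minimal sketch under that assumption (coa1 to coa4 taken as h, l, t, b; constraint penalties omitted, so heavily penalized rows such as SSA will not reproduce):

```python
def welded_beam_cost(h, l, t, b):
    """Fabrication cost of the welded beam (standard benchmark objective;
    constraint penalties omitted)."""
    return 1.10471 * h**2 * l + 0.04811 * t * b * (14.0 + l)

# ICOA solution from the table above:
c = welded_beam_cost(0.205729878, 3.253115802, 9.036623911, 0.20572964)
```

The ICOA row evaluates to about 1.69525, matching its tabulated optimum.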
Table 12. Statistical results of welded beam design problems.
Algorithms | Best | Worst | Mean | Std
COA | 1.69536477301484 | 1.704661863 | 1.696077909 | 0.002032594
ICOA | 1.69524705320638 | 1.695279935 | 1.695249402 | 7.2712 × 10−6
GWO | 1.70713835001213 | 1.92234429 | 1.83040074 | 0.047717425
HHO | 1.75634507460288 | 2.19512924 | 1.878052523 | 0.109369283
SO | 1.72792710447185 | 2.340851079 | 1.963653238 | 0.161335053
DO | 1.69526036151233 | 1.708893175 | 1.700430465 | 0.00478447
WFO | 1.82921020034696 | 2.909820771 | 2.297240026 | 0.352133884
GOA | 2.1249262944582 | 3.920844071 | 2.726666637 | 0.550699826
SSA | 2.25348137991264 | 616,699.1066 | 62,436.4806 | 149,720.9795
ISSA | 1.79876399875485 | 8.127990023 | 2.640399736 | 1.355838167
FFA | 2.10450411846326 | 3.177283012 | 2.667251605 | 0.318784137
AOA | 1.86950299444208 | 2.654295467 | 2.244211056 | 0.281180602
Table 13. Results of robot gripper arm design problem.
Algorithms | ic1 | ic2 | ic3 | ic4 | ic5 | ic6 | ic7 | Optimum Value
COA | 99.98571581 | 38.18002065 | 199.9772657 | 0 | 10.16050765 | 100 | 1.479232863 | 7.2865327450E−17
ICOA | 100.0000044 | 38.19655187 | 199.9999998 | 0 | 16.75042424 | 100 | 1.564501204 | 7.2740693811E−17
SCA | 96.43609948 | 33.95248713 | 186.8158442 | 0 | 38.3396936 | 100 | 1.738940081 | 1.1383886011E−16
AO | 108.4779393 | 10 | 161.742106 | 0 | 150 | 100 | 3.14 | 1.2555430736E−15
BWO | 98.75999016 | 36.28125241 | 200 | 0 | 28.3460658 | 100 | 1.556346256 | 8.9127469379E−17
DO | 99.99998971 | 38.19656335 | 200 | 0 | 126.8168469 | 100 | 2.097808015 | 7.2740793402E−17
WFO | 142.5464321 | 130.5210616 | 182.2920037 | 8.150600978 | 126.6622149 | 163.4416348 | 2.556844972 | 5.2614674772E+00
GOA | 98.42713966 | 36.32268301 | 129.7971558 | 0 | 27.98004263 | 100 | 1.606259461 | 1.3372373669E−16
SSA | 10 | 10 | 100 | 0 | 10 | 100 | 1 | 3.4694372699E+102
RSA | 99.60199902 | 75.18022507 | 147.2977184 | 8.353478481 | 138.7177764 | 150.0916274 | 3.14 | 1.0268188818E+01
FFA | 100.5723118 | 30.97482821 | 100 | 0 | 10 | 100 | 1 | 3.5568574350E−16
GRO | 144.7323872 | 113.5823517 | 190.4171935 | 29.49193114 | 148.850327 | 134.6732338 | 2.860226854 | 3.0270547513E+00
AOA | 81.76879862 | 18.92275265 | 200 | 0 | 119.0401543 | 100 | 3.14 | 2.6163653784E−16
Table 14. Statistical results of robot gripper arm design problems.
Algorithms | Best | Worst | Mean | Std
COA | 7.2865327450E−17 | 6.782631033 | 3.179681189 | 1.720743265
ICOA | 7.2740693811E−17 | 3.508021841 | 0.503689008 | 1.231070738
SCA | 1.1383886011E−16 | 2.92765E−16 | 1.73937E−16 | 5.56391E−17
AO | 1.2555430736E−15 | 8.578634281 | 3.71718948 | 3.555871938
BWO | 8.9127469379E−17 | 3.93736E−16 | 1.80241E−16 | 7.65703E−17
DO | 7.2740793402E−17 | 2.984670765 | 0.557550382 | 1.145282181
WFO | 5.2614674772E+00 | 93.93936293 | 14.84181454 | 19.17351114
GOA | 1.3372373669E−16 | 10.43372632 | 5.497079008 | 4.694065699
SSA | 3.4694372699E+102 | 6.7583E+105 | 1.5136E+105 | 1.8813E+105
RSA | 1.0268188818E+01 | 1.3142E+105 | 1.1547E+104 | 2.9275E+104
FFA | 3.5568574350E−16 | 5.351968761 | 1.26434572 | 2.031519024
GRO | 3.0270547513E+00 | 4.740811605 | 3.840954694 | 0.463312144
AOA | 2.6163653784E−16 | 6.793260789 | 2.313355989 | 2.479617989
Table 15. Results of cantilever beam design problem.
Algorithms | ic1 | ic2 | ic3 | ic4 | ic5 | Optimum Value
COA | 5.973239172 | 5.27141733 | 4.46232299 | 3.476567711 | 2.137342591 | 13.3169948668511
ICOA | 5.973220012 | 5.271406257 | 4.462358417 | 3.47656667 | 2.137352002 | 13.3169948659626
SCA | 6.197047737 | 4.896502134 | 4.587031006 | 3.71125453 | 1.94626838 | 13.4483502463377
AO | 5.952877732 | 5.27980211 | 4.472859938 | 3.469528696 | 2.149892098 | 13.3172651303013
BWO | 6.093914535 | 5.245174582 | 4.454005615 | 3.425590107 | 2.10260378 | 13.3219019616545
WFO | 5.31695503 | 7.090411451 | 4.212123976 | 3.88342926 | 2.137997013 | 14.0917065721423
GOA | 5.818535006 | 5.327608816 | 4.529575276 | 3.468355812 | 2.1806418 | 13.3254317182751
FFA | 5.714835131 | 6.177210979 | 4.826957342 | 3.13185277 | 1.919368391 | 13.5983591073533
AOA | 6.148452552 | 4.677518444 | 4.75385561 | 3.907300242 | 3.115583487 | 14.0679269131395
Table 16. Statistical results of cantilever beam design problems.
Algorithms  Best  Worst  Mean  Std
COA  13.3169948668511  13.3169954152945  13.3169949326321  1.34E−07
ICOA  13.3169948659626  13.3169948743986  13.3169948663854  1.88611E−09
SCA  13.4483502463377  13.9348970502045  13.6340801048167  0.14898056
AO  13.3172651303013  13.3293166175376  13.321791966169  0.002885066
BWO  13.3219019616545  13.3962719533998  13.3461728159391  0.019786476
WFO  14.0917065721423  29.032258369552  19.0207058242824  4.048134
GOA  13.3254317182751  13.6962531075075  13.4207488075685  0.099786135
SSA  42.5490746330585  107.187171632897  77.1815851084039  16.7616407
FFA  13.5983591073533  17.0318475424042  14.9242652183795  0.970489372
AOA  14.0679269131395  37.6119650045082  19.9525131528094  6.760253941
Table 17. Results of heat exchanger design problem.
Algorithms  ic1  ic2  ic3  ic4  ic5  ic6  ic7  ic8  Optimum Value
COA  462.5427607  1013.963305  5903.791795  154.3961403  264.1452697  244.3227926  289.7539322  364.0891882  7380.297861
ICOA  648.1441113  1407.016002  5158.063247  173.9996594  293.6775363  226.0003082  280.3220856  393.6775246  7213.223361
GWO  162.8618729  1566.202632  5781.639069  107.0005946  269.0613907  270.4296038  236.4759442  369.0317921  7510.703573
HHO  2428.081726  1066.88741  5250.125875  199.5199501  289.9983826  198.7567003  305.5955845  389.9967552  8745.095011
SO  184.4612151  2100.503684  5177.788126  116.4323451  292.888505  282.6675462  221.4407806  392.888496  7462.753025
DO  866.6996969  1000.013304  5547.182004  180.6688392  278.1145075  200.891534  302.4952316  378.1138789  7413.895005
WFO  941.2057817  6999.249432  6530.665654  75.65402374  338.3660656  228.1608045  135.4661082  403.8414835  14,471.12087
GOA  465.7973521  2851.365071  4591.27227  133.8214808  317.6517498  203.863732  215.9026524  417.0136888  7908.434694
BWO  822.4187358  2149.37456  5229.952691  170.5531089  312.1534096  221.9056041  258.209572  403.7764069  8201.745986
ISSA  1289.401116  1099.059957  5000.262925  212.0593125  299.9904139  187.9379361  312.0666078  399.9902002  7388.723998
AO  102.9000008  1789.576757  8049.564424  27.7126494  179.5417586  18.01720773  138.725557  279.2501878  9942.041182
GRO  175.1487385  1456.21826  5792.948411  114.8347834  268.2822482  284.8061069  246.5522545  368.2821795  7274.950879
AOA  7491.185496  1326.515164  10000  98.17921924  198.5000802  257.1230991  226.1767422  279.214993  18,817.70066
Table 18. Statistical results of heat exchanger design problems.
Algorithms  Best  Worst  Mean  Std
COA  7380.297861  8964.851977  8051.946612  407.6809122
ICOA  7213.223361  7215.811361  7213.489702  0.520710395
GWO  7510.703573  17,167.02265  8424.547089  1677.0323
HHO  8745.095011  113,030.6782  25,639.5225  23,418.86501
SO  7462.753025  12,888.0598  8849.282895  1128.8162
DO  7413.895005  10,220.21106  8331.891441  596.9471728
WFO  14,471.12087  110,916.386  49,893.10324  25,381.69323
GOA  7908.434694  38,248.44647  17,525.92797  9165.578149
BWO  8201.745986  23,947.17552  11,544.59647  3519.229927
ISSA  7388.723998  40,162.5  11,478.79299  9054.672031
AO  9942.041182  138,486.5492  29,114.07484  31,314.68264
GRO  7274.950879  7862.407058  7465.570478  132.9925336
AOA  18,817.70066  91,992.0306  38,459.94464  16,725.11458
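The heat exchanger optima reported in Table 17 equal the sum of the first three variables (e.g., for ICOA, 648.14 + 1407.02 + 5158.06 ≈ 7213.22), which is consistent with the classical eight-variable heat exchanger design benchmark. A minimal evaluation sketch of that standard formulation follows; the constraint constants are the usual values from the benchmark literature and are an assumption here, since the paper's exact problem statement is not reproduced in this excerpt.

```python
def heat_exchanger(x):
    """Classical heat exchanger design problem (literature formulation).

    Returns (objective, constraints); a point is feasible when every g <= 0.
    """
    x1, x2, x3, x4, x5, x6, x7, x8 = x
    f = x1 + x2 + x3  # total cost is the sum of the first three variables
    g = [
        -1 + 0.0025 * (x4 + x6),
        -1 + 0.0025 * (x5 + x7 - x4),
        -1 + 0.01 * (x8 - x5),
        -x1 * x6 + 833.33252 * x4 + 100 * x1 - 83333.333,
        -x2 * x7 + 1250 * x5 + x2 * x4 - 1250 * x4,
        -x3 * x8 + 1250000 + x3 * x5 - 2500 * x5,
    ]
    return f, g

# The ICOA solution from Table 17 reproduces its reported optimum (~7213.22);
# the first three constraints are active (close to 0) at this point.
x_icoa = [648.1441113, 1407.016002, 5158.063247, 173.9996594,
          293.6775363, 226.0003082, 280.3220856, 393.6775246]
f, g = heat_exchanger(x_icoa)
```

The small residual constraint values at the tabulated solution come from the rounding of the printed variables, not from the formulation itself.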
Table 19. The optimization results of each algorithm on F1–F20.
Function  Goal  COA  ICOA  GWO  HHO  SO  DO  WFO  GOA  BWO  ISSA  AO  GRO  AOA  SaDE
F107.95803E−1603.06555E−09001.39944E−153.13344E−067.37883E−161.02931E−0501.62567E−0601.22759E−050
F200000000.0009634190000000
F30.39790.3978873580.3978873580.3978873920.3978873580.3978873580.3978873580.3978920420.3978873580.4111465770.3978873580.3978940230.3978873580.3981784770.397887358
F400.0507562150.0415411350.0441750430.0090942370.0072253460.0105292990.3717421170.050.3166301710.0001180970.2794747250.0013682970.0553023670.000470248
F500007.085E−24007.91521E−328.88831E−060001.3419E−309001.6168E−196
F601.79489E−072.5162E−201.94102E−054.04155E−081.60281E−064.29879E−051.92813981505.3038E−051.96417E−083.4404E−073.17417E−110.007447077.26818E−28
F7−2.0626−2.062611871−2.062611871−2.062611871−2.062611871−2.062611871−2.062611871−2.062611589−2.062611871−2.062611871−2.062611871−2.062611871−2.062611871−2.062611871−2.062611871
F80.9980.9980038380.9980038380.9980038380.9980038380.9980038380.9980038380.9980038390.9980038380.9980038380.9980038380.9980038380.9980038380.9980038380.998003838
F9−1−1−1−1−1−1−1−0.998323161−1−1−1−1−1−1−1
F10−1−1−1−0.999999994−0.999999999−1−1−0.596052289−1−1−1−0.99999998−1−0.999991609−1
F11−959.641−959.6406627−959.6406627−959.6406627−959.6406627−959.6406627−959.6406627−954.2726351−959.640144−959.6406626−959.6406627−959.6406625−959.6406627−959.6406605−959.6406627
F123333.000000144333.0000000013.37215504533.03140862133.001781149333
F13−3.8628−3.862779787−3.862779787−3.862719813−3.862774994−3.862779787−3.862779785−3.855931775−3.862779784−3.862514692−3.862779787−3.857798555−3.862779787−3.859632062−3.862779787
F14−3.1355−3.134494141−3.134494141−3.134488544−3.133280487−3.134494141−3.134494139−2.988602842−3.134460663−3.127934785−3.134494141−3.129923381−3.134494138−3.125722955−3.134494141
F15−3.3224−3.322367983−3.322367263−3.322283466−3.194634299−3.322367968−3.322367746−2.470374497−3.312487035−3.27368502−3.322368011−3.248899166−3.322367366−3.144075779−3.322367996
F16−19.2085−19.20850257−19.20850257−19.20849716−19.20850257−19.20850257−19.20850257−18.77858166−19.20850257−19.20813946−19.20850257−19.20832906−19.20850257−19.20850226−19.20850257
F17−4.1558−4.155786006−4.155809292−4.155804085−4.155809292−4.155809292−4.155809291−4.084770207−4.154811612−4.149230676−4.155809292−4.152046957−4.155809292−4.097316987−4.155809292
F18−1.9133−1.913222955−1.913222955−1.913222804−1.913222955−1.913222955−1.913222955−1.911128474−1.913222955−1.913091261−1.913222955−1.91322141−1.913222955−1.913199577−1.913222955
F1907.3841E−25801.56218E−408.87572E−431.65686E−371.27491E−190.0005191692.90305E−789.95372E−6404.4688E−433.94985E−8002.36907E−19
F20−1.8013−1.80130341−1.80130341−1.801297686−1.801303405−1.80130341−1.80130341−1.772270204−1.80130341−1.80016541−1.80130341−1.801294557−1.80130341−1.797607951−1.80130341
Table 20. The optimal results of various algorithms on F21–F30 (d = 30).
Function  Goal  COA  ICOA  GWO  HHO  SO  DO  WFO  GOA  BWO  ISSA  AO  GRO  AOA  SaDE
F2108.88178E−168.88178E−167.99361E−158.88178E−164.44089E−155.25639E−0614.824316268.88178E−168.88178E−168.88178E−168.88178E−164.44089E−158.88178E−165.4184E−08
F2200.6666666670.6666666670.6666668850.1709679020.249466330.66666680667,938.281570.2260309120.2159385670.0008472130.2492292830.6666666680.6666666670.666666667
F230000.00113670802.9976E−150.371966384356.2659402000000.0530252125.8483077
F2400.6275976674.11113E−060.6352942882.08407E−112.32165E−054.95983E−0725.037857661.49976E−328.1182E−314.37709E−161.82308E−070.1800096322.3317476115.58128E−15
F2504.43864E−111.32288E−171.53939E−089.05537E−101.00418E−159.48455E−120.4068905111.34978E−318.24335E−058.69894E−296.50439E−061.05736E−166.09368E−066.10458E−17
F260008.0809076700.0055359620.59437998357.3590589000000201.3498799
F270−1.019172729−1.019174434−1.019174183−1.019174434−1.019174433−1.019174434−1.009587538−1.019174434−1.019172619−1.019174415−1.019173965−1.019174434−1.019145841−1.019174045
F2806.166E−21400.0026125242.27384E−341.81897E−162.940895611137,590.71527.58861E−663.53755E−5601.06282E−399.31175E−222.45574E−703067.605705
F2902494.1728420.0009561344907.9236770.0003818270.000792362470.1056187000.7336466666.4192170.0003818273009.5541491.5551042731419.3708645527.9368613775.690232
F30−1174.97997−1076.027927−1146.711529−1041.345823−1174.984938−1174.984873−1160.847916−739.3406631−1031.975357−1174.984971−1174.984971−1127.859557−1160.554286−620.2374434−1174.984971
Table 21. The optimal results of various algorithms on F21–F30 (d = 100).
Function  Goal  COA  ICOA  GWO  HHO  SO  DO  WFO  GOA  BWO  ISSA  AO  GRO  AOA  SaDE
F2108.88178E−168.88178E−167.54952E−148.88178E−164.44089E−150.00401203216.83705138.88178E−168.88178E−168.88178E−168.88178E−164.44089E−158.88178E−160.104610124
F2200.6666666680.6666666680.6666669220.1668825110.2503116770.66666686520,744.427290.2431306520.1831173070.0010761970.2489293770.666666670.6666666670.666666667
F230000000.001528428509.92298640000056.983788470.162929738
F2405.7145785260.0492830275.6344678027.02166E−093.76805E−050.637289645224.23257671.49976E−327.98322E−300.0003638384.59827E−083.9673853018.5119947170.253489629
F2508.5895E−195.22431E−221.1093E−082.03448E−301.34978E−311.11652E−160.0071675641.34978E−311.66919E−121.34978E−314.03921E−071.34978E−311.18039E−061.34978E−31
F2600000040.344452691050.004385000000785.9269144
F270−1.019174434−1.019174434−1.019174434−1.019174434−1.019174434−1.019174411−1.018817632−1.019174434−1.019174434−1.019174434−1.019174413−1.019174434−1.019174265−1.019174434
F280002.84714E−322.3606E−2203.0845E−1770.04186974817,546.15030001.48E−3061.4657E−1841.2653E−373.595936325
F29013,944.082112.14895517822,039.570260.0012802560.00823158115,722.9603232,973.9071727,382.266470.00127275713,035.4941884.6772484115,863.7387628,502.4381926,394.17618
F30−3916.5999−3382.608802−3902.700855−2661.655296−3916.616567−3916.614685−3576.73468−1954.098272−3649.766234−3916.61657−3916.616151−3301.284001−3371.233377−1544.590776−3774.833127
Table 22. The optimal results of various algorithms on F21–F30 (d = 500).
Function  Goal  COA  ICOA  GWO  HHO  SO  DO  WFO  GOA  BWO  ISSA  AO  GRO  AOA  SaDE
F2108.88178E−168.88178E−161.03927E−088.88178E−164.44089E−152.80695827417.505986528.88178E−168.88178E−168.88178E−168.88178E−164.44089E−150.00747310912.74325313
F2200.6666788970.6669460590.6666671830.2499980030.990503599974.36505670,024,486.310.9981613450.195605570.2955949070.25034678813,942.209280.7953334727,503,030.487
F230001.79856E−14002.9322693622525.542517000005323.253883500.0040974
F2405.23782E−154.05874E−291.57557E−081.63724E−131.34978E−311.00955E−140.0165668741.34978E−311.13062E−091.34978E−319.24672E−061.34978E−310.2888591951.34978E−31
F2505.24E−154.06E−291.58E−081.64E−131.35E−311.01E−140.01661.35E−311.13E−091.35E−310.000009251.35E−310.2891.35E−31
F260002.27374E−1100803.11130945849.282670000005738.63403
F270−1.019174433−1.019174434−1.019174428−1.019174434−1.019174434−1.019174407−1.018491532−1.019174434−1.019174434−1.019174428−1.019173986−1.019174434−1.019173968−1.019174434
F280005.64931E−129.716E−2106.1291E−16015,981.2870132,843,919.290001.8327E−3047.9406E−14313.425457485,239,874.427
F290113,693.843593,787.47483131,229.24690.0315591830.343522069115,349.5417191,390.6858130,409.76330.00636378486,034.14495140,266.6566147,667.0188181,790.5584177,728.0627
F30−1.96E+04−12,115.31382−12,484.51034−9086.742048−19,583.08263−19,583.0815−13,947.91964−8638.626515−18,359.375−19,583.08285−19,567.77419−18,507.25335−11,705.00585−4215.269631−11,566.49805
Table 23. The optimal results of various algorithms on F21–F30 (d = 1000).
Function  Goal  COA  ICOA  GWO  HHO  SO  DO  WFO  GOA  BWO  ISSA  AO  GRO  AOA  SaDE
F2108.88178E−168.88178E−169.54888E−078.88178E−164.44089E−155.02823697917.678265928.88178E−168.88178E−168.88178E−168.88178E−164.44089E−150.00892968617.03175179
F2200.6666782460.6667770370.6666970330.2499995010.9997585591,888,035.074161,860,213.30.250.2500002990.3603283640.2502260713,695,576.940.96388784132,903,477.4
F230006.99055E−110071.750968655545.8590870000027,764.513893692.772093
F24086.5639315931.3230376784.216600264.39319E−060.0002852892808.0882792525.3062391.49976E−323.29719E−280.0229400210.00037006986.2148197490.402368651883.153516
F2507.37533E−151.07836E−251.79031E−084.37939E−141.34978E−314.19062E−150.0870140991.34978E−314.93075E−101.34978E−312.59717E−051.34978E−312.82718E−061.34978E−31
F260001.086833165003062.12851410,882.47635000002.01959E−0512,107.0719
F270−1.019174434−1.019174434−1.019174406−1.019174434−1.019174434−1.019174398−1.017444522−1.01917441−1.019174434−1.019174432−1.019169813−1.019174434−1.019173958−1.019174434
F280002.2473E−071.8194E−2005.9176E−1541,159,759.229123,096,879.10005.7335E−3025.3254E−13184.1347405776,716,528.52
F290264,763.676346,884.42466308,366.60590.6227057181.1041150442,688,68.4353387,129.2203293,688.66130.012727568211,723.711330,786.998331,334.9251379,589.4864373,780.5163
F30−39,165.999−20,936.71154−25,019.29056−14,894.51127−39,166.15666−39,165.84764−23,009.37501−18,144.68341−31,035.61711−39,166.1657−38,848.70446−32,474.21484−19,797.64546−6980.135197−21,420.73319
Table 24. Statistical results of solving NP problem 1.
Algorithms\Indicators  Best  Mean  Std  Rank
COA  3,898,584.167  3,898,584.167  0  5
SaDE  3,967,179.751  3,967,179.751  4.9085E−10  8
AO  4,304,761.075  4,304,761.075  9.817E−10  9
AOA  3,926,034.152  3,926,034.152  4.9085E−10  7
DO  3,839,698.352  3,839,698.352  0  4
HHO  3,818,448.364  3,818,448.364  4.9085E−10  3
GWO  3,635,403.804  3,635,403.804  0  2
GOA  3,907,841.503  3,907,841.503  0  6
ICOA  3,581,472.837  3,581,472.837  4.9085E−10  1
Table 25. Statistical results for solving the TSP.
Algorithms\Indicators  Best  Mean  Std  Rank
COA  4.03528555  4.03528555  0  4
SaDE  4.080075419  4.080075419  0  5
AO  4.224619984  4.224619984  9.36222E−16  7
AOA  4.832003947  4.832003947  0  9
DO  4.808047137  4.808047137  9.36222E−16  8
HHO  4.085211443  4.085211443  9.36222E−16  6
GWO  4.027825711  4.027825711  9.36222E−16  3
GOA  4.01468012  4.01468012  9.36222E−16  2
ICOA  4.014148636  4.014148636  0  1
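Table 25 compares the algorithms on a traveling salesman instance; the instance's city coordinates are not reproduced in this excerpt. For reference, a minimal sketch of the tour-length objective that such solvers minimize, on illustrative coordinates (the four-city unit square below is an assumption, not the paper's instance):

```python
import math

def tour_length(tour, cities):
    """Total Euclidean length of a closed tour visiting each city exactly once."""
    n = len(tour)
    return sum(
        math.dist(cities[tour[i]], cities[tour[(i + 1) % n]])
        for i in range(n)
    )

# Illustrative 4-city instance: the perimeter tour of the unit square has length 4.0.
cities = [(0, 0), (0, 1), (1, 1), (1, 0)]
best = tour_length([0, 1, 2, 3], cities)
```

A metaheuristic such as the ICOA would search over permutations of the city indices, scoring each candidate with `tour_length`; crossing tours (e.g., `[0, 2, 1, 3]` here) score strictly worse than the perimeter tour.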

Lin, W.; He, Y.; Hu, G.; Zhang, C. Multi-Strategy-Assisted Hybrid Crayfish-Inspired Optimization Algorithm for Solving Real-World Problems. Biomimetics 2025, 10, 343. https://doi.org/10.3390/biomimetics10050343

