Article

A Novel Exploration Stage Approach to Improve Crayfish Optimization Algorithm: Solution to Real-World Engineering Design Problems

Electronics and Automation Department, Kırklareli University, 39010 Kırklareli, Turkey
Biomimetics 2025, 10(6), 411; https://doi.org/10.3390/biomimetics10060411
Submission received: 12 March 2025 / Revised: 12 June 2025 / Accepted: 12 June 2025 / Published: 19 June 2025
(This article belongs to the Section Biological Optimisation and Management)

Abstract

The Crayfish Optimization Algorithm (COA) has limitations that seriously affect its optimization performance. The competition stage of the COA uses a simplified mathematical model that concentrates only on the distance relations between crayfish. It lacks a stochastic variable and cannot establish a workable balance between exploration and exploitation. As a result, the COA converges prematurely, performs poorly in high-dimensional problems, and is prone to being trapped by local minima. Moreover, the low activation probability of the summer resort stage further reduces the exploration ability and slows down convergence. To compensate for these shortcomings, this study proposes an Improved Crayfish Optimization Algorithm (ICOA) that redesigns the competition stage with three modifications: (1) an adaptive step length mechanism, inversely proportional to the iteration number, which favors exploration in early iterations and exploitation in later stages; (2) vector mapping, which increases stochastic behavior and improves efficiency in high-dimensional spaces; and (3) removal of the Xshade parameter to avoid premature convergence. The proposed ICOA is compared to 12 recent meta-heuristic algorithms using the CEC-2014 benchmark set (30 functions, 10 and 30 dimensions), five engineering design problems, and a real-world ROAS optimization case. The Wilcoxon Signed-Rank Test, t-test, and Friedman rank confirm the strong performance of the ICOA, which solves 24 of the 30 benchmark functions successfully. In engineering applications, the ICOA achieved the optimal weight (1.339965 kg) in cantilever beam design, the maximum load capacity (85,547.81 N) in rolling element bearing design, and the highest performance (144.601) in ROAS optimization.
The superior performance of the ICOA over the COA is supported by the following quantitative data: a 0.0007% weight reduction in cantilever beam design (from 1.339974 kg to 1.339965 kg), a 0.09% load capacity increase in bearing design (COA: 84,196.96 N; ICOA: 85,498.38 N on average), a 0.27% performance improvement in the ROAS problem (COA: 144.072; ICOA: 144.601), and, most importantly, an overall performance improvement on the CEC-2014 benchmark tests, where the COA has an average rank of 4.13 while the ICOA achieves 1.70. The results indicate that the improved COA enhances exploration and successfully solves challenging problems, demonstrating its effectiveness in various optimization scenarios.

Graphical Abstract

1. Introduction

Meta-heuristic algorithms (MHAs) and mathematical approaches are frequently used to solve optimization problems. The success of mathematical approaches depends directly on the initial fitness value and the slope of the gradient [1]. As the growing complexity of global optimization problems compounds these disadvantages, interest in algorithms inspired by natural phenomena keeps increasing [2]. MHAs are defined as techniques developed to provide acceptable solutions to optimization problems that are difficult to solve under conditions of incomplete knowledge and limited computation time. MHAs are not solution methods intrinsic to a particular problem; nevertheless, they can produce satisfactory solutions to optimization problems without scanning the whole solution space. This ability of MHAs depends on high-level procedures such as exploration and exploitation. MHAs also provide the following advantages [3,4].
  • Global search: they could search the whole solution space effectively and discover the best fit solution for the problem.
  • Scalability: they could be applied to high-dimensional, non-linear, and continuous or discrete problems.
  • Diversity: since the whole solution space is scanned in global search, population diversity is generated.
  • Adaptability: MHAs could be applied to various problems in finance, economy, engineering, medicine, etc.
  • Computational efficiency: MHAs generate acceptable results in a reasonable time. This property of MHAs presents them as advantageous compared to exact methods for the solution of high-scaled optimization problems.
  • Gradient independence: MHAs do not need gradient information. Moreover, the mathematical models of MHAs are simple.
In general, MHAs can be classified into five categories: (i) population-based algorithms [5], (ii) evolution-based algorithms [6], (iii) physics/chemistry-based algorithms [7], (iv) human-based algorithms [8], and (v) mathematics-based algorithms [9]. It should also be noted that there are algorithms inspired by music that fall outside this classification [10]. These categories and the algorithms within them are discussed in almost every article concerned with MHAs, and the categorization is frequently revisited. The MHAs given as examples of each class are almost always the same, whereas many new and successful algorithms are constantly being proposed. This study considers such taxonomies to be the subject matter of typology and review articles. Therefore, this study first explains why new MHAs are still being proposed even though many already exist, and then discusses what to focus on when selecting or improving an existing MHA.
Many meta-heuristic algorithms have been proposed in the literature [11]. Some of them are more productive than their competitors, and improved versions of most of them have been presented as well. Nonetheless, new MHAs are still being developed [12]. As a result of rapid technological advancement, the difficulty of the optimization problems to be solved in various industries has been increasing dramatically [13,14]. This makes the development of MHAs with innovative strategies necessary [15]. Moreover, even though existing MHAs succeed on some kinds of optimization problems, they may not be successful enough on others [16], owing to premature convergence or a slow convergence rate. In addition, as stated in the No Free Lunch Theorem, no single MHA can solve all optimization problems successfully [17]: averaged over all optimization problems, MHAs exhibit similar performance. Thus, MHAs can only be meaningfully compared on a specific problem or set of problems.
An MHA is studied in four ways in the literature. These are (1) the original article of an MHA, (2) the application of an original MHA to an optimization problem, (3) the application of an MHA by hybridizing it with another MHA, and (4) improving the MHA by changing its mathematical model and applying it to an optimization problem.
As stated earlier, averaged over all optimization problems, MHAs have similar performance. Therefore, when picking an MHA, a researcher should consider its other properties in addition to the results reported in its original article. Picking an MHA suited to a specific problem increases the chance of success.
MHAs are initiated with a randomly generated population [18]. Then, the fitness value of each individual is calculated. Depending on the strategy of the MHA, the best, the top three, or all fitness values are saved. Many population-based algorithms have distinct exploration and exploitation phases. In the exploration phase, promising regions of the search space are identified [19]. In the exploitation phase, the regions identified during exploration are searched thoroughly [20]. In MHAs, the transition between these two phases is governed by a probability key, whose value is generally determined in one of two ways: randomly, or as a function of the iteration. In the random method, a number between 0 and 1 is generated [21,22]; if it is greater than a predetermined threshold (usually 0.5), exploration is performed, and otherwise exploitation (or vice versa). With this method, both exploration and exploitation can occur at low as well as high iteration numbers; its disadvantage is a slow convergence rate. If the probability key is produced by a function, one parameter of that function is generally the current iteration number [4,23,24]. As the iteration number increases, the value of the probability key decreases, so that exploration is mostly performed at low iteration numbers and exploitation at high iteration numbers; the disadvantage of this method is the risk of being caught in a local minimum trap. Of the two, the function-based method is comparatively more successful, but the choice should still be made for the specific problem at hand. Moreover, an MHA can be adapted to a given problem by changing the way its probability key is determined. Lu et al. developed an innovative approach that dynamically controls the balance between exploration and exploitation and integrated it into GWO [25]. In this way, the transition between exploration and exploitation can be managed dynamically, enhancing the algorithm's ability to adapt to changing optimization conditions.
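As a minimal illustration of the two methods of determining the probability key, consider the following Python sketch; the linear decay used in `iteration_key` is an assumption for illustration only, since the actual decay function varies from one MHA to another.

```python
import random

def random_key(threshold=0.5):
    """Random probability key: the phase does not depend on the iteration,
    so exploration and exploitation can both occur early or late."""
    return "exploration" if random.random() > threshold else "exploitation"

def iteration_key(t, T):
    """Function-based probability key (illustrative linear decay): the key
    shrinks as iteration t grows toward T, so exploration dominates early
    iterations and exploitation dominates late ones."""
    key = 1.0 - t / T  # decreases from 1 toward 0 as t approaches T
    return "exploration" if random.random() < key else "exploitation"
```

With this decay, `iteration_key(0, T)` always selects exploration and `iteration_key(T, T)` always selects exploitation, mirroring the intended early/late behavior described above.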
The presence of initial parameters is an important criterion in the selection of an MHA. In recently developed MHAs, researchers try to avoid these parameters or reduce their number as much as possible [4,26,27]. Without a doubt, this makes MHAs easier to use, and inexperienced users are more likely to choose MHAs with few or no initial parameters. On the other hand, for experienced users, the presence of initial parameters can be an advantage, making an MHA problem-specific; it should be kept in mind, however, that this requires extra analysis to tune those parameters. Developing MHAs with adjustable parameters is therefore worth focusing on. Adaptive parameter tuning is critical for enhancing the effectiveness of MHAs. Recent studies have demonstrated that meta-learning methods, which become adaptive through approaches based on learning alternative minimization steps, perform outstandingly, especially on non-convex problems [28]. The effectiveness of adaptive parameters has been demonstrated not only through simulations but also through experimental work conducted on biological search agents [29]. To overcome the challenge of parameter tuning in optimization algorithms, Wang et al. [30] developed an adaptive mechanism using reinforcement learning that automatically adjusts the distribution ratios between high-performing and low-performing individuals in the population based on accumulated rewards, eliminating the need for predefined parameters.
In the exploration and exploitation phases of some MHAs, there can be more than one procedure [31]. Which procedure to run is determined by a probability key, which is most often chosen randomly. However, as stated earlier, this method can be changed: by adapting the threshold value to the problem, one procedure can be made preferable over another. Because of the stochastic structure of MHAs, their mathematical models contain random numbers [32,33], generally between 0 and 1. The performance of an MHA can be analyzed by changing this range or by increasing or decreasing the number of random numbers, as well as by changing the method used to generate them. In the literature, there are many studies in which random numbers are generated through chaotic maps [34,35].
In general, moving toward the best individual is a significant matter for the search agents of MHAs: the candidate solution in the current iteration aims to resemble the best individual. This method is certainly reasonable, but it can cause problems such as premature convergence and being trapped by local minima. Instead, moving toward the mean of the top three individuals, called alpha, beta, and gamma and inspired by the Grey Wolf Optimizer [36], can avoid such problems. Alternatively, moving toward the mean of all candidate solutions, as in Harris Hawks Optimization [23,37], can be a smarter choice for some problems.
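The idea of imitating a group of leaders instead of the single best solution can be sketched as follows; this is a simplified illustration, and the halfway step toward the leader mean is a hypothetical update rule, not the full GWO formula.

```python
def toward_mean_of_leaders(X_i, leaders):
    """Move candidate X_i halfway toward the mean of the leader positions
    (e.g., the alpha, beta, and gamma wolves of GWO). Simplified sketch."""
    d = len(X_i)
    # Component-wise mean of the leader positions
    mean = [sum(leader[k] for leader in leaders) / len(leaders) for k in range(d)]
    # Hypothetical update: step halfway from the candidate to the leader mean
    return [(x + m) / 2.0 for x, m in zip(X_i, mean)]
```

Averaging over several leaders keeps the population from collapsing onto a single (possibly local) optimum, which is exactly the failure mode described above.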
With the rapid advancement of meta-heuristic algorithms, the performance limitations of existing algorithms have become more obvious. Especially in complicated engineering problems, an algorithm's exploration–exploitation balance, convergence speed, and ability to escape local minimum traps have become critical success factors.
Even though the Crayfish Optimization Algorithm (COA) presents promising results among nature-inspired MHAs, it has three critical limitations in its algorithmic structure. First, the COA uses a simplified mathematical model in the competition stage: only the Euclidean distance is considered, while elements necessary for effective exploration, such as adaptive step lengths, directional information, and stochastic components, are disregarded entirely. Second, the heat-based transition mechanism of the algorithm produces a poor exploration–exploitation balance. Inadequate exploration capacity in early iterations and inadequate exploitation refinement in later stages cause suboptimal convergence characteristics and premature convergence. Third, the algorithm lacks suitable mechanisms for high-dimensional problems, so its performance degrades as problem complexity grows, limiting its applicability to real-world engineering optimization scenarios.
These limitations, especially in high-dimensional and multi-modal optimization problems, decrease the effectiveness of the COA significantly, increase the risk of the algorithm being trapped by local minima, and slow down the convergence rate. Such a case indicates the necessity to redesign and improve the basic algorithmic components of the COA.
The subject of this work is the COA, a population-based MHA [38]. It is inspired by the way crayfish search for food, avoid heat, and compete with each other. The COA has a two-level search strategy composed of exploration and exploitation. Even though it is a successful MHA, it faces the possibility of being caught in local minimum traps. The literature indicates that researchers mainly concentrate on the temperature parameter, the balance between exploration and exploitation, and exploration itself. In one study aiming to overcome such problems, the researchers combine four strategies to improve the COA [39]: the Halton sequence, opposition-based learning, an elite steering factor, and a fish aggregation device effect. They report that these strategies increase population diversity and improve as well as accelerate convergence. Jia et al. propose two strategies to accelerate the convergence of the COA and prevent it from being trapped by local minima [40]: an environmental renewal mechanism and ghost opposition-based learning. They note that these improvements increase the global optimization performance of the algorithm. The ECOA, proposed by Yuan et al., improves the performance of the COA through three basic modifications [41]: a dynamic lens-imaging learning strategy that enriches population diversity, a dynamic decline curve that expands the search space and increases the convergence speed, and a novel feeding strategy that strengthens the local optimization capability of the algorithm. Zhang and Diao [42] proposed the Hierarchical Learning-enhanced Chaotic Crayfish Optimization Algorithm (HLCCOA), integrating chaotic mapping and hierarchical learning mechanisms to avoid local optima by increasing population diversity.
In that study, a population initialization strategy was developed using Tent and Chebyshev chaotic maps, and the individuals of the population were divided into hierarchical layers according to their fitness values, with different learning mechanisms applied to each layer.
The aim of this study is to present an improved version of the COA [38], a population-based MHA, obtained by changing its mathematical model. This new version is called the Improved COA (ICOA). In the COA, the transition between exploration and exploitation is governed by heat. The temperature parameter of the COA, determined by a random number, varies between 20 and 35 °C (Equation (1)). This means that the COA can perform exploration or exploitation at low as well as high iteration numbers. The COA performs exploitation between 15 and 30 °C (foraging stage). When the temperature is above 30 °C, either the summer resort stage (exploration) or the competition stage (exploitation) takes place, the choice being made by a random parameter. As this description suggests, the probability of the COA performing exploration is relatively low, which leads to the COA being trapped by local minima and to a slow convergence speed. The competition stage of the COA is designed for local search; however, the foraging stage already performs local search. Thus, in this study the competition stage of the COA is redesigned with exploration in mind.
The main contributions of this study are given as follows.
  • At the competition stage of the COA, a crayfish has to compete with other crayfish in order to enter the cave. In the original COA, this competition is modeled only by distance. In the ICOA, besides the distance, the step length and the locations of the crayfish are added to the model.
  • The step length of the crayfish varies with the iteration. Large steps are produced at low iteration numbers, while small steps occur at high iteration numbers. This allows the competition stage to support exploration at low iteration numbers and exploitation at high iteration numbers.
  • In addition, at the competition stage of the ICOA, the cave location, which represents the best location, is removed from the mathematical model in order to increase the exploration ability of the ICOA. The cave location is still used in the other stages.
  • To verify the validity of the ICOA, it was compared to 12 MHAs. For the comparison, the CEC-2014 benchmark set with 30 test functions was used, along with five engineering design problems.
  • The results are interpreted by the Wilcoxon Signed-Rank Test and Friedman test.
The rest of this study is organized as follows. The second section gives information about the COA and its mathematical model. The third section introduces the ICOA. The fourth section compares the results of the ICOA with those of 12 competitors, using the CEC-2014 benchmark set and five engineering design problems; the CEC-2014 tests are performed in two different dimensions (D = 10 and D = 30). The last section discusses the results and outlines future studies.
The research methodology of this study has six main stages, as indicated in Figure 1. In the first stage, the limitations of the COA were analyzed and the research gap was identified through the literature review. In the second stage, the ICOA was developed by redesigning the mathematical model of the competition stage. In the third stage, a comprehensive experimental design was created, including the CEC-2014 benchmark functions, engineering design problems, and a real-world application. In the fourth stage, the performance of the ICOA was evaluated through statistical tests and convergence analysis. In the fifth stage, the results were verified and the industrial implications were analyzed. The last stage presents the conclusions and suggestions for future studies.

2. Crayfish Optimization Algorithm (COA)

The COA is an MHA developed through the inspiration of the foraging, summer resort, and competitive behavior of crayfish. The foraging stage and the competition stage constitute the exploitation part of the COA, while the summer resort stage constitutes its exploration part. The algorithm is initiated by generating the population randomly. In the COA, the location of a crayfish is denoted by $X_{i,j}$, where $X$ is the location vector, $i$ is the index of the individual in the population, and $j$ is the dimension index of the problem.

2.1. Define Temperature and Crayfish Intake

The change in ambient temperature affects the feeding behavior of crayfish. The best feeding temperature for crayfish is 25 °C, and they can feed between 15 °C and 30 °C. The temperature parameter enables the transition between exploration and exploitation: exploration is performed when the temperature is above 30 °C, while exploitation occurs when it is below 30 °C. The temperature is calculated by Equation (1), and the mathematical model of crayfish intake is given in Equation (2).
$temp = rand \times 15 + 20$
$p = C_1 \times \frac{1}{\sqrt{2\pi}\,\sigma} \times \exp\left(-\frac{(temp - \mu)^2}{2\sigma^2}\right)$
In the equations, $rand$ is a randomly generated number between 0 and 1, $\mu$ is the ideal temperature for the crayfish, and $\sigma$ and $C_1$ are parameters that control the intake of the crayfish at different temperatures.
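Equations (1) and (2) can be sketched in Python as follows; the default values $\mu = 25$, $\sigma = 3$, and $C_1 = 0.2$ are the settings commonly used for the COA and should be treated here as assumptions.

```python
import math
import random

def temperature():
    """Equation (1): random ambient temperature, uniform in [20, 35] deg C."""
    return random.random() * 15 + 20

def intake(temp, mu=25.0, sigma=3.0, C1=0.2):
    """Equation (2): Gaussian-shaped intake p, peaking at the ideal
    temperature mu; sigma and C1 control intake at other temperatures."""
    return C1 * (1.0 / (math.sqrt(2.0 * math.pi) * sigma)) \
           * math.exp(-((temp - mu) ** 2) / (2.0 * sigma ** 2))
```

As expected from the Gaussian form, the intake is maximal at the ideal temperature and falls off on either side of it.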

2.2. Summer Resort Stage (Exploration)

When the temperature is above 30 °C, it is not suitable for crayfish to feed. Thus, they prefer to go inside the cave. The cave location $X_{shade}$ is defined as below (Equation (3)).
$X_{shade} = (X_G + X_L)/2$
where $X_G$ represents the best location found up to the current iteration, while $X_L$ represents the best location within the current iteration.
There are circumstances where crayfish either do or do not compete with each other in order to enter the cave. It is assumed that there is no competition among crayfish in the summer resort stage; the stage at which they compete for the cave is called the competition stage and is explained in the following subsection. The transition between these two stages is governed by a random number: if $rand < 0.5$, crayfish go directly into the cave (Equation (4)); otherwise, they have to compete with the others (competition stage).
$X_{i,j}^{t+1} = X_{i,j}^{t} + C_2 \times rand \times \left(X_{shade} - X_{i,j}^{t}\right)$
where t represents the current iteration while t + 1 represents the next iteration number. C 2 is calculated as follows (Equation (5)).
$C_2 = 2 - \frac{t}{T}$
where $T$ represents the maximum iteration number. This stage is designed to increase the convergence speed of the COA.
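Equations (3)-(5) can be sketched as follows; positions are represented as plain Python lists, and the `rng` argument (an assumption added for testability) stands in for the uniform random number of Equation (4).

```python
import random

def summer_resort_step(X_i, X_G, X_L, t, T, rng=random.random):
    """Summer resort stage: move crayfish i toward the cave X_shade.
    X_i, X_G, X_L are equal-length position vectors."""
    C2 = 2.0 - t / T                                      # Equation (5)
    X_shade = [(g + l) / 2.0 for g, l in zip(X_G, X_L)]   # Equation (3)
    # Equation (4): X^{t+1} = X^t + C2 * rand * (X_shade - X^t)
    return [x + C2 * rng() * (s - x) for x, s in zip(X_i, X_shade)]
```

Because $C_2$ shrinks from 2 toward 1 as $t$ approaches $T$, the pull toward the cave weakens over the course of the run.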

2.3. Competition Stage (Exploitation)

If the temperature is above 30 °C and $rand \geq 0.5$, the ambient temperature is not suitable for the crayfish to feed. Therefore, the crayfish again head for the cave; this time, however, they have to compete with the other crayfish for it. The mathematical model of this is given in Equation (6).
$X_{i,j}^{t+1} = X_{i,j}^{t} - X_{z,j}^{t} + X_{shade}$
where $i$ represents the current individual, while $z$ represents a randomly picked individual and is calculated through Equation (7).
$z = round\left(rand \times (N - 1)\right) + 1$
where $N$ is the population size. This step is designed to improve the exploitation (local search) ability of the COA.
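A sketch of Equations (6) and (7) in Python follows; it uses 0-based indexing for the randomly picked individual z, unlike the 1-based formula, which is the only liberty taken.

```python
import random

def competition_step(X, i, X_shade):
    """Competition stage: crayfish i competes with a random crayfish z
    for the cave. X is the population, a list of position lists."""
    N = len(X)
    z = round(random.random() * (N - 1))   # Equation (7), shifted to 0-based
    # Equation (6): X_i^{t+1} = X_i^t - X_z^t + X_shade
    return [xi - xz + s for xi, xz, s in zip(X[i], X[z], X_shade)]
```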

2.4. Foraging Stage (Exploitation)

If the ambient temperature is below 30 °C, it is suitable for the crayfish to feed, and they move toward the food. The size of the food serves as a feeding criterion: if the food is big, the crayfish crumbles it. The location of the food is defined through Equation (8), and its size through Equation (9).
$X_{food} = X_G$
$Q = C_3 \times rand \times \frac{fitness_i}{fitness_{food}}$
where $C_3$, the food factor representing the largest food, has a value of 3. $fitness_i$ represents the fitness value of the $i$-th crayfish, while $fitness_{food}$ represents the fitness value of the food location. $X_{food}$ represents the best solution. The crayfish judges whether the food is large based on its largest piece: if $Q > (C_3 + 1)/2$, the food is big, and the crayfish crumbles it. The mathematical model of this is given in Equation (10).
$X_{food} = \exp\left(-\frac{1}{Q}\right) \times X_{food}$
Crayfish eat the food after they make it smaller. This is modeled in Equation (11).
$X_{i,j}^{t+1} = X_{i,j}^{t} + X_{food} \times p \times \left(\cos(2\pi \times rand) - \sin(2\pi \times rand)\right)$
If $Q \leq (C_3 + 1)/2$, there is no need to crumble the food, and the crayfish can eat it as is (Equation (12)).
$X_{i,j}^{t+1} = \left(X_{i,j}^{t} - X_{food}\right) \times p + p \times rand \times X_{i,j}^{t}$
At the foraging stage, crayfish thus have two kinds of eating behavior depending on the size of the food. This stage is designed to improve the convergence ability of the COA.
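The foraging stage (Equations (8)-(12)) can be sketched as follows; the `rng` argument is an assumption added for testability, and separate random draws are used for the food size and the movement, mirroring the independent `rand` values in the equations.

```python
import math
import random

def foraging_step(X_i, X_food, fitness_i, fitness_food, p, C3=3.0,
                  rng=random.random):
    """Foraging stage: crayfish i moves toward the food X_food (the global
    best X_G, Equation (8)); p is the intake from Equation (2)."""
    Q = C3 * rng() * fitness_i / fitness_food            # Equation (9)
    if Q > (C3 + 1.0) / 2.0:
        # Food is too big: crumble it first (Equation (10))
        X_food = [math.exp(-1.0 / Q) * f for f in X_food]
        r = rng()
        # Equation (11): eat the crumbled food
        return [x + f * p * (math.cos(2 * math.pi * r) - math.sin(2 * math.pi * r))
                for x, f in zip(X_i, X_food)]
    r = rng()
    # Equation (12): eat the food directly
    return [(x - f) * p + p * r * x for x, f in zip(X_i, X_food)]
```

With $C_3 = 3$, the crumbling threshold $(C_3 + 1)/2$ evaluates to 2, which is why the pseudocode below checks `Q > 2`.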

3. Improved Crayfish Optimization Algorithm

Just like many swarm-based optimization algorithms, the COA consists of exploration and exploitation stages, and the transition between them is governed by temperature. This means that in the COA, exploration and exploitation can be performed regardless of the iteration. The exploitation part of the COA (the foraging stage) has two phases depending on the size of the food, while the high-temperature part has two phases depending on a randomly determined parameter: the summer resort stage and the competition stage.
Preliminary investigations aimed at improving the performance of the COA revealed that the competition stage is modeled too simplistically. At this stage, crayfish have to compete with other crayfish to reach the cave, i.e., the best solution, yet this competition is modeled only through the distance between the crayfish. Moreover, despite the stochastic structure of MHAs, no stochastic components are present in this model. In addition, $X_{shade}$ is already used at the summer resort stage; using it at the competition stage as well decreases the exploration ability of the COA and makes it prone to local minimum traps. Replacing the process of imitating the best individual with an alternative model that scans the search space more efficiently can improve the efficiency of the COA. The remodeling of the competition stage is inspired by the exploration stage of Artificial Rabbits Optimization [4]. When crayfish compete for the cave, what matters is not only the distance between them but also their speed of movement. Thus, the $V$ parameter, representing their speed of movement and step length, is added to the model. Since each crayfish represents a location, a mapping vector $m$ is added to the model as well. In addition, $X_{shade}$ is removed from the equation in order to prevent premature convergence. No modifications are made at the other stages of the COA. The mathematical model of the improved competition stage is given in Equation (13).
$X_{i,j}^{t+1} = X_{i,j}^{t} + V \cdot m \cdot \left(X_{i,j}^{t} - X_{z,j}^{t}\right), \quad i, z = 1, \ldots, n \ \text{and} \ i \neq z$
$V = \left(e^{\left(1 - \frac{t-1}{T}\right)^{4}} - 1\right) \cdot \cos\left(2\pi r_1\right)$
$m_k = \begin{cases} 1, & \text{if } k = g(l) \\ 0, & \text{else} \end{cases} \quad k = 1, \ldots, d \ \text{and} \ l = 1, \ldots, \lceil r_2 \cdot d \rceil$
$g = randperm(d)$
$z = round\left(rand \times (N - 1)\right) + 1$
where $V$ denotes the step length of the crayfish and is calculated through Equation (14). $m$ represents the mapping vector and is calculated through Equation (15). $r_1$ and $r_2$ are randomly determined numbers between 0 and 1, and $d$ is the dimension of the problem. The step length ($V$) is inversely related to the iteration number ($t$), but this relation is not linear; the dynamic behavior of $V$ is shown in Figure 2. The long steps at low iteration numbers enable more efficient exploration, while the short steps at high iteration numbers enable more efficient exploitation. $randperm$ returns a random permutation of the integers from 1 to $d$ (Equation (16)). $z$ represents the randomly picked individual and is calculated through Equation (17).
Equation (13) constitutes the basic updating mechanism of the ICOA. Relative to another randomly picked agent, the position of each search agent is updated through the adaptively determined step length ($V$) and the mapping vector ($m$). This approach not only maintains sufficient diversity within the solution space but also allows the search to concentrate on the best regions in subsequent iterations. $V$ has a sharply and rapidly decreasing structure, which allows the ICOA to perform broad searches at low iterations and narrow searches at high iterations. The vector $m$ is used to increase efficiency in high-dimensional problems. Similar mechanisms have been successfully implemented in differential evolution and other MHAs. The theoretical validity of the formulation rests on combining adaptive parameter control with randomness to avoid local optima and to enable fast convergence.
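The redesigned competition stage (Equations (13)-(17)) can be sketched as follows; the 0-based indexing, the `max(1, ...)` guard against an empty mapping set, and the injected `rng` are implementation assumptions for this sketch, not part of the formulation.

```python
import math
import random

def icoa_competition_step(X, i, t, T, rng=random.random):
    """ICOA competition stage: update crayfish i relative to a randomly
    picked crayfish z, scaled by the adaptive step length V and masked
    by the binary mapping vector m."""
    N, d = len(X), len(X[0])
    # Equation (14): V is large at early iterations and shrinks toward 0
    r1 = rng()
    V = (math.exp((1.0 - (t - 1.0) / T) ** 4) - 1.0) * math.cos(2.0 * math.pi * r1)
    # Equations (15)-(16): mapping vector over a random subset of dimensions
    r2 = rng()
    g = random.sample(range(d), d)          # random permutation of the dimensions
    chosen = set(g[:max(1, math.ceil(r2 * d))])
    m = [1.0 if k in chosen else 0.0 for k in range(d)]
    # Equation (17): randomly picked competitor z (0-based here)
    z = round(rng() * (N - 1))
    # Equation (13)
    return [X[i][k] + V * m[k] * (X[i][k] - X[z][k]) for k in range(d)]
```

Note how $V$ vanishes as $t$ approaches $T$, so late-iteration updates barely perturb the position, which is the intended exploitation behavior.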
The pseudo-codes of the COA and ICOA are given in Algorithm 1.
Algorithm 1. Pseudo-codes of algorithms.
COA pseudo-code
Input: T: maximum iteration, N: population size, D: variable dimension
Output: the optimal search agent $X_{best}$ and its fitness value $f_{best}$
Generate initial population
Calculate the fitness value of the population to get $X_G$, $X_L$
While t < T
    Define temperature temp by Equation (1)
    If temp > 30
        Define cave $X_{shade}$ according to Equation (3)
        If rand < 0.5
            Crayfish conducts the summer resort stage according to Equation (4)
        Else
            Crayfish compete for caves through Equation (6)
        End
    Else
        Obtain the food intake p and food size Q by Equations (2) and (9)
        If Q > 2
            Crayfish shreds food by Equation (10)
            Crayfish forages according to Equation (11)
        Else
            Crayfish forages according to Equation (12)
        End
    End
    Update fitness values, $X_G$, $X_L$
    t = t + 1
End
ICOA pseudo-code
Input: T: maximum iteration, N: population size, D: variable dimension
Output: the optimal search agent $X_{best}$ and its fitness value $f_{best}$
Generate initial population
Calculate the fitness value of the population to get $X_G$, $X_L$
While t < T
    Define temperature temp by Equation (1)
    If temp > 30
        Define cave $X_{shade}$ according to Equation (3)
        If rand < 0.5
            Crayfish conducts the summer resort stage according to Equation (4)
        Else
            Perform the updated competition stage through Equation (13)
        End
    Else
        Obtain the food intake p and food size Q by Equations (2) and (9)
        If Q > 2
            Crayfish shreds food by Equation (10)
            Crayfish forages according to Equation (11)
        Else
            Crayfish forages according to Equation (12)
        End
    End
    Update fitness values, $X_G$, $X_L$
    t = t + 1
End

4. Computational Results and Discussions

4.1. Experimental Settings and Compared Algorithms

The proposed algorithm is compared to Particle Swarm Optimization (PSO) [43], Differential Evolution (DE) [44], the Whale Optimization Algorithm (WOA) [45], the Arithmetic Optimization Algorithm (AOA) [46], the African Vultures Optimization Algorithm (AVOA) [47], Golden Jackal Optimization (GJO) [48], the Sea-Horse Optimizer (SHO) [49], the Energy Valley Optimizer (EVO) [26], the Human Conception Optimizer (HCO) [50], the RUNge Kutta optimizer (RUN) [51], the Brown-Bear Optimization Algorithm (BBOA) [27], and the original COA, all of which are up-to-date algorithms with verified validity. Several criteria guided this selection. PSO, DE, and WOA are chosen because they are mainstream algorithms, while EVO and BBOA are chosen because they have no initial parameters; the remaining algorithms are chosen according to whether they have few or many initial parameters. Being up to date and having verified validity are further selection criteria.
Algorithms are coded in the Python language. Tests are run on a computer with Windows 10 Professional 64-bit and 64 GB of RAM. The results of 30 independent runs of all algorithms are saved. The population size and the maximum number of function evaluations (FEs) of each MHA are set to 50 and 10,000, respectively. The comparison of the algorithms is based on the best (Min), average (Ave), and standard deviation (Std) metrics. The performance of the algorithms is evaluated by the Wilcoxon Signed-Rank Test (WSRT) at the 5% significance level. R+ and R− are calculated using the WSRT. R+ means that the proposed algorithm performs better than the competitor algorithm; conversely, R− means that the proposed algorithm performs worse than its competitor. The total R+ and total R− values are given as T+ and T− in the tables [3,52]. The tables reporting WSRT statistics also contain a column with a 'W' symbol, which denotes the winning algorithm: a '+' symbol in this column means the winner is the ICOA, whereas a '−' symbol means the winner is the competitor algorithm. In addition, the radar graphics and the convergence curves of the algorithms are given.
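The R+ and R− sums described above can be computed directly from paired per-function results; a minimal sketch, independent of any statistics library and using hypothetical error values:

```python
def wilcoxon_ranks(a, b):
    """Compute the Wilcoxon signed-rank sums R+ and R- for paired error
    samples (minimization). R+ collects ranks where algorithm `a` beats
    `b` (smaller error); zero differences are discarded and tied absolute
    differences share their average rank."""
    diffs = [x - y for x, y in zip(a, b) if x != y]
    ranked = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(ranked):  # assign average ranks over groups of ties
        j = i
        while j + 1 < len(ranked) and abs(diffs[ranked[j + 1]]) == abs(diffs[ranked[i]]):
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[ranked[k]] = avg
        i = j + 1
    r_plus = sum(r for r, d in zip(ranks, diffs) if d < 0)   # a smaller -> a wins
    r_minus = sum(r for r, d in zip(ranks, diffs) if d > 0)
    return r_plus, r_minus
```

With n nonzero differences, R+ and R− always sum to n(n + 1)/2, which is a convenient sanity check on the tables.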
To compare the proposed algorithm with its competitors, we used the CEC-2014 benchmark set of 30 challenging functions [53]. CEC-2014 consists of three unimodal functions (f1–f3), 13 simple multimodal functions (f4–f16), six hybrid functions (f17–f22), and eight composition functions (f23–f30). Unimodal functions test the exploitation ability of MHAs. Multimodal functions test the exploration power of the optimizers and their ability to avoid local minima. Hybrid functions possess many local minima, and a good optimization algorithm should avoid them. Composition functions test the ability of MHAs to balance exploration and exploitation. The search range for all problems is [−100, 100]. In this study, the problem dimensions ( D ) are taken as 10 and 30. The initial parameters of the ICOA and the competitor algorithms are given in Table 1. MHAs are highly sensitive to their initial parameters, which directly affect their performance. The original articles of the MHAs fine-tune these parameters and determine their best values; thus, for a fair comparison, the initial parameters of all algorithms in this study are taken from their original articles.

4.2. CEC-2014 Benchmark Function Results

For D = 10, the minimum, average, and standard deviation values of all the algorithms on the CEC-2014 test functions are given in Table 2, Table 3, Table 4 and Table 5. In the minimum value metric, the ICOA solves 23 of the 30 test functions successfully. The AVOA, COA, DE, WOA, SHO, GJO, and RUN solve six, five, four, two, two, two, and two test functions successfully, respectively. In the average value metric, the ICOA has successful results in 22 test functions. The AVOA, COA, DE, and RUN solve five, four, three, and two test functions successfully, respectively. For D = 30, the minimum, average, and standard deviation values of all the algorithms on the CEC-2014 test functions are given in Table 6, Table 7, Table 8 and Table 9. In the minimum value metric, the ICOA has successful results in 24 of the 30 test functions. The AVOA, COA, GJO, SHO, and BBOA are competitive in 13, seven, five, five, and four test functions, respectively. In the average value metric, the ICOA has competitive results in 23 test functions. The AVOA, COA, DE, BBOA, SHO, and GJO are successful in nine, seven, two, two, two, and two test functions, respectively. Overall, the ICOA produces the most competitive results, with the AVOA and COA being the next most successful algorithms.
The results of the unimodal functions are given in Table 2 for D = 10 and in Table 6 for D = 30. For D = 10, the ICOA is superior to its competitors in terms of all evaluation criteria. For D = 30, the AVOA is better than the ICOA at the minimum value of the f1 function; however, at the average value, the ICOA outperforms all competitors. The ICOA's superiority in the average value indicates that it produces good results more consistently than the AVOA. The ICOA also outperforms its competitors on the f2 and f3 functions.
The results of the multimodal functions are given in Table 3 for D = 10 and in Table 7 for D = 30. For D = 10, the ICOA has better results than its competitors in eight functions (f4, f6, f7, f8, f9, f11, f13, f15). At the minimum value of the f8 function, the ICOA, AVOA, and DE are tied. DE is successful in the f14 and f16 functions, the COA in f10, the AVOA in f12, and RUN in f5. For D = 30, the ICOA solves 10 functions successfully (f4, f5, f6, f7, f8, f9, f10, f11, f14, f16). At the minimum value of the f5 function, the AVOA is more successful. The AVOA is successful in the f12, f13, and f15 functions. It is a significant result that, once the dimension increases, there is no function in which the COA is successful, while there are numerous functions in which the ICOA is successful.
The results of the hybrid functions are given in Table 4 for D = 10 and in Table 8 for D = 30. For D = 10, the ICOA outperforms its competitors in five functions (f17, f18, f19, f20, f21). For the f22 function, the ICOA is more successful at the minimum value, while RUN is more successful at the average value. For D = 30, the ICOA is the best optimizer in five functions (f17, f18, f20, f21, f22).
The results of the composition functions are given in Table 5 for D = 10 and in Table 9 for D = 30. For D = 10, the ICOA has competitive results in six functions (f23, f24, f27, f28, f29, f30). For f23, f28, f29, and f30, more than one optimizer has similar results. For f24 and f27, the ICOA is the most successful algorithm; for f25, PSO; and for f26, the AVOA. For D = 30, the ICOA has successful results in six functions (f23, f24, f25, f27, f28, f29). Moreover, the ICOA has a successful result at the minimum value of the f30 function. In the remaining functions, except for f26, more than one optimizer has successful results. For f26, the AVOA is the most successful optimizer.
On the last lines of Table 5 and Table 9, the average rank values of all optimizers by the Friedman test are given.
For D = 10, the ICOA is the best optimizer with an average rank of 2.12. The AVOA is second with 3.50, and DE is the third best optimizer with 3.93. For D = 30, the ICOA is the most successful algorithm with 1.70. The AVOA and DE are the second and third most successful algorithms with 2.77 and 3.95, respectively. For D = 10 in Figure 3, and for D = 30 in Figure 4, the results of some successful algorithms on selected functions are shown through radar graphics. Studying the radar graphics, the ICOA, AVOA, and DE appear more successful than the other algorithms, and the ICOA gives more consistent results than its competitors. Interpreting algorithms by analyzing only numerical results may lead to incorrect or incomplete conclusions. For this reason, we perform statistical analyses of the results and also evaluate the convergence curves of the algorithms.
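The Friedman average ranks quoted above are obtained by ranking the algorithms on each function and averaging the ranks over all functions; a minimal sketch with hypothetical data:

```python
def friedman_average_ranks(results):
    """results[f][a] is the error of algorithm `a` on function `f`
    (minimization). Returns the per-algorithm average rank used in the
    Friedman test; lower is better. Ties share their average rank."""
    n_funcs, n_algs = len(results), len(results[0])
    totals = [0.0] * n_algs
    for row in results:
        order = sorted(range(n_algs), key=lambda a: row[a])
        i = 0
        while i < n_algs:  # assign average ranks over groups of ties
            j = i
            while j + 1 < n_algs and row[order[j + 1]] == row[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1
            for k in range(i, j + 1):
                totals[order[k]] += avg
            i = j + 1
    return [t / n_funcs for t in totals]
```

For example, with three algorithms over three hypothetical functions, the best-ranked algorithm ends up with the smallest average rank, matching how the last rows of Table 5 and Table 9 are read.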
For an MHA to be regarded as successful, its results need to be supported statistically. The WSRT results of the ICOA and the competing algorithms are given in Table 10, Table 11, Table 12 and Table 13 for D = 10, and in Table 15, Table 16, Table 17 and Table 18 for D = 30. In addition, the WSRT statistics are summarized according to the subclasses of the CEC-2014 dataset in Table 14 for D = 10 and in Table 19 for D = 30.
Once Table 14 and Table 19 are studied, it is observed that the ICOA in all the unimodal functions ( f 1 f 3 ) outperforms its competitors significantly. Unimodal functions test the exploitation abilities of MHAs. The most significant result at this point is that the ICOA outperforms the COA. The mathematical model of the competition stage of the original COA is more fit for local search. While improving the ICOA, the competition stage is updated to increase the exploration ability. The step length ( V ) in the mathematical model of the updated competition stage is inversely proportional to iteration. The competition stage helps local search at high iteration numbers. Therefore, the ICOA is successful in unimodal functions.
Multimodal functions are important criteria for evaluating the capability of optimization algorithms to escape from local optima and to search globally. Studying the results for D = 10, the ICOA is observed to be highly superior to all its competitors. It is superior to significant competitors like PSO (8/4/1), DE (7/3/3), the AVOA (10/0/3), and the COA (10/1/2), although for some problems there is no significant difference. This means that the ICOA has a very good ability to reach global optima and escape from local optima in low-dimensional multimodal problems. Studying the results for D = 30, the ICOA's superiority is even more obvious. It outscores PSO (8/5/0), DE (11/2/0), the AVOA (11/0/2), and the COA (11/2/0). For very few problems is there no significant difference among the algorithms, and the competitor algorithms could not outperform the ICOA. This clearly indicates that the ICOA has a very effective global search capability compared to its competitors in high-dimensional multimodal problems. The reason the ICOA outperforms its competitors, especially the COA, is its exploration ability. The original COA has an X_shade parameter in the competition stage and no stochastic component; therefore, its ability to explore the search space is limited. The improved competition stage of the ICOA has no attraction toward the best solution; it is based on randomness, with the goal of searching the solution space more effectively. This is achieved through the step length ( V ) and the mapping factor ( m ).
Hybrid functions (f17–f22) are complicated functions that evaluate both the global and the local search capabilities of optimization algorithms. Successful results on these functions indicate that an algorithm has a balanced capability of exploration and exploitation. Examining the results of the hybrid functions for D = 10, the ICOA outperforms all the other algorithms. Only in one function is there no significant difference between the ICOA and PSO, the AVOA, BBOA, and RUN; moreover, the competitor algorithms are not more successful than the ICOA in any function. This indicates that the ICOA has a very balanced and superior performance in low-dimensional hybrid problems. The performance of the ICOA remains effective on the D = 30 dimensional functions. Only in two functions is there no significant difference between PSO and the ICOA; the other algorithms are never more successful than the ICOA. Hybrid functions have many more local minima than usual and test the power of MHAs to avoid them. The results indicate that the ICOA is able to avoid the local minimum trap. This can be explained by the fact that the competition stage supports local search at high iteration numbers.
Once the data in Table 14 ( D = 10 ) and Table 19 ( D = 30 ) are studied, the ICOA is observed to be more competitive than AOA, EVO, HCO, and RUN in all eight composition functions (f23–f30). For D = 10, the ICOA is more successful than SHO, BBOA, and GJO in six functions; in the f23 and f25 functions, there is no significant difference between these algorithms. The ICOA has better results than the AVOA (f24, f27, f30) and the COA (f24, f26, f27), while having similar results to them in five functions. For D = 30, the ICOA outperforms SHO, BBOA, and GJO in five functions, while they have similar results in three functions (SHO: f24, f25, f29/BBOA: f24, f25, f29/GJO: f24, f25, f28). The ICOA outperforms the AVOA in one function (f30), and they have similar results in six functions; on the other hand, the AVOA is more successful in one function (f26). While the ICOA outperforms the COA in one function (f26), the two optimizers have similar results in seven functions. Composition functions test the ability of MHAs to balance the exploration and exploitation phases. On composition functions, the ICOA is clearly superior to its competitors except for the COA and AVOA. There is no function where it is less successful than the COA; the statistical results indicate that the ICOA performs either better than or equal to the COA. For D = 10, the ICOA outperforms the AVOA, but for D = 30, there is no significant difference between the two algorithms. The narrow margin between the ICOA and the COA and AVOA can be explained as follows. First, like the ICOA, the COA and AVOA give their best results on composition functions. Second, the ICOA's transition method between exploration and exploitation is the same as the COA's; if this method were made more effective, the ICOA's performance on composition functions could be further enhanced.
It should be kept in mind that the subject matter of this study is to generate an MHA with a more effective exploration ability by updating the COA’s competition stage.
In general, according to the data in Table 14 ( D = 10 ), the ICOA is the most competitive algorithm, solving 21 out of 30 problems better than the AVOA and 22 out of 30 problems better than the COA. The ICOA outperforms its competitors PSO, DE, WOA, AOA, SHO, BBOA, EVO, GJO, HCO, and RUN in, respectively, 23, 23, 26, 30, 26, 25, 29, 27, 30, and 27 functions. When the data in Table 19 ( D = 30 ) is analyzed, the ICOA outperforms the AVOA and COA in 20 out of 30 problems. The ICOA outperforms its competitors PSO, DE, WOA, AOA, SHO, BBOA, EVO, GJO, HCO, and RUN in, respectively, 22, 25, 27, 30, 27, 27, 30, 30, 27, 30, 30, and 30 functions. The statistical results indicate that the method proposed in this study to improve the COA is successful.
In order to evaluate the performance of the ICOA more reliably, a statistical t-test is performed on the CEC-2014 test functions. The detailed test results are given in Table 20 and Table 21, and a summary is given in Table 22. Examining Table 22, for dimension D = 10, the ICOA clearly outperforms all the algorithms. For example, it outperforms PSO in 23 functions, has lower performance in only two, and shows no significant difference in five. Similarly, it outperforms the COA in 21 functions, is less successful in only three, and shows no significant difference in six. Its superiority over HCO, RUN, and AOA in almost all the functions is clearly observed.
In problems with higher dimensions ( D = 30 ), the success of the ICOA is more evident. The ICOA shows superior performance in all functions (30/0/0), especially against RUN, HCO, and the AVOA. Compared to the other algorithms, it has significantly better results, with no significant difference in only a few functions. These results clearly indicate that the ICOA is more consistent and stable and performs more effectively than the other algorithms, especially in high-dimensional problems. The reason for the ICOA's success in high-dimensional problems is the m parameter, which ensures that only some dimensions are updated. Thus, it both reduces the computational load of the algorithm and controls excessive randomness.
In Figure 5 and Figure 6, the convergence curves of all algorithms for selected test functions are given. In the performance analysis of MHAs, convergence behavior is an important criterion, as it gives information about the speed of the optimizer. The ICOA converges better in many cases, indicating that its exploration, its exploitation, and the transition between the two are well developed; it also avoids local minimum traps while solving different kinds of functions. Unimodal functions test the exploitation characteristics of MHAs. On unimodal functions (Figure 5 and Figure 6: f1–f3), the ICOA performs much better and does not converge prematurely. Multimodal functions test the exploration capabilities of the optimizers. On multimodal functions (Figure 5 and Figure 6: f4–f16), the ICOA's curve generally descends more steeply, which means its exploration ability is well developed. This can be explained by the updated competition stage. Hybrid functions (Figure 5 and Figure 6: f17–f22) contain unimodal and multimodal functions in their structures and test both the exploration and exploitation abilities of MHAs. The convergence curves show that the ICOA keeps improving the results at both low and high iteration numbers. Composition functions have multimodal characteristics. On these functions (Figure 5 and Figure 6: f23–f30), the ICOA descends steeply, which means it converges quickly. There are two main reasons for the ICOA's good exploration and exploitation abilities at low and high iteration numbers: the updated competition stage, and the iteration-dependent step length ( V ) together with the mapping factor ( m ).
In the original version of the COA, the competition stage contains no stochastic component, and its update is guided by the best solution. This prevents the solutions produced in this stage from differing from those produced in the other stages and thus blocks the optimizer from searching the solution space efficiently.

4.3. Ablation Study: Evidence of Component-Wise Improvements

In this section, the contribution of each improvement component proposed in the ICOA is analyzed individually. To present concrete evidence for the effectiveness of the improvements, an ablation study was carried out in both D = 10 and D = 30 dimensions using eight sample CEC-2014 functions ( F 1 ,   F 4 ,   F 8 ,   F 11 ,   F 17 ,   F 20 ,   F 24 ,   F 27 ). These functions were selected to represent the unimodal ( F 1 ), multimodal ( F 4 ,   F 8 ,   F 11 ), hybrid ( F 17 ,   F 20 ), and composition ( F 24 ,   F 27 ) categories. The tested algorithm variants are as follows: (1) COA (original): the baseline algorithm; (2) COA + V: only the adaptive step length ( V ) added; (3) COA + m: only the mapping vector ( m ) added; (4) ICOA: all improvements ( V + m ). Table 23 presents the average results of 30 independent runs.
The results verify that each component yields meaningful improvements. The adaptive step length ( V ) component produced improvements varying between 5.0% and 77.1% depending on the function. The mapping vector ( m ) component generally had a more significant effect, with improvements between 3.0% and 94.4%. The most remarkable results were on the F 8 , F 17 , and F 20 functions, where the m component yielded improvements of 63.8%, 67.9%, and 94.4%, respectively.
The performance of the ICOA exceeds the sum of its individual components' contributions, which indicates a synergistic effect. For example, on function F 1 ( D = 10 ), while the V and m components yield 23.5% and 38.5% improvement, respectively, the ICOA achieves a total improvement of 89.7%. Similarly, on function F 17 , while the V and m components yield 28.3% and 67.9% improvement, respectively, the ICOA yields 94.9% improvement.
The runtime analysis indicates that the ICOA achieves its performance improvement at minimal computational cost. For 10,000 function evaluations, the ICOA requires only 1.0% additional runtime on average compared to the original COA (0.9% for D = 10 , 1.1% for D = 30 ). This low computational cost supports the usability of the ICOA in practical applications. The highest overhead is 2.7%, for the F 20 function ( D = 30 ), while the lowest is 0.1%, for the F 20 function ( D = 10 ).
These ablation study results quantitatively verify that each improvement component proposed in the ICOA makes a significant contribution to the algorithm and that they generate a synergistic effect when they are used together. The fact that the individual contribution of the mapping vector ( m ) component is generally higher than the adaptive step length ( V ) component emphasizes the significance of the dimension selection strategy in high-dimensional problems.
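The improvement and overhead percentages quoted above follow directly from baseline-relative formulas; a minimal sketch (the sample values below are illustrative, taken from the F1 example):

```python
def improvement_pct(baseline, variant):
    """Relative improvement of a variant over the baseline COA error
    (minimization): 100 * (baseline - variant) / baseline."""
    return 100.0 * (baseline - variant) / baseline

def overhead_pct(t_base, t_variant):
    """Extra runtime of a variant relative to the baseline, in percent."""
    return 100.0 * (t_variant - t_base) / t_base
```

For instance, a variant that reduces a hypothetical baseline error of 100.0 down to 10.3 corresponds to an 89.7% improvement, and a runtime of 101 s against a 100 s baseline is a 1.0% overhead.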

4.4. Scalability Analysis

Scalability evaluation is used to assess whether problem size affects the efficiency of the ICOA. An MHA is expected to perform as efficiently on high-dimensional problems as on low-dimensional ones. In the scalability test, the ICOA was used to solve 10 scalable unimodal and multimodal functions with dimensions ranging from 100 to 500 in steps of 50. The FE budget was set to 10,000 and the results of 30 independent runs were recorded. For each dimension, the rank values of the algorithms from the Friedman test were calculated and averaged. These average rank values are given in Figure 7. The ICOA is the most successful algorithm with a score of 2, and the results show that it is the algorithm least affected by increasing problem size. DE is the second most successful algorithm with a score of 2.5, and the AVOA is third with a score of 3. Table 24 shows the results for dimension 500. The ICOA was the most competitive optimizer in seven of the functions (f1–f3, f5–f7, f10).

4.5. Application of ICOA to Engineering Problems

In this subsection, the ICOA is applied to five engineering design problems to verify its validity. The ICOA and the competing algorithms are run on the same computer, and the results of 30 independent runs of the optimizers are saved. The number of search agents is set to 50 and the number of iterations to 500. The best, average, worst, and standard deviation values of the algorithms are used as comparison metrics, and the convergence curves of the algorithms are also given. In engineering design problems, constraints must be included in the calculation while solving the problem. Although many constraint-handling methods have been proposed in the literature, the penalty function is the most preferred one due to its simplicity and ease of use, and it is also used in this study (Equation (18)).
F(x) = f(x) \pm \left[ \sum_{i=1}^{m} k_i \, \max\!\left(0, g_i(x)\right)^{\gamma} + \sum_{j=1}^{n} l_j \left| h_j(x) \right|^{\delta} \right]
where F ( x ) is the modified objective function, k i and l j are the positive penalty values, g i ( x ) and h j ( x ) represent the inequality and equality constraint functions, and m and n are their numbers, respectively. γ and δ are constants and usually have a value of 2. The task of the penalty function is to increase the penalty score dramatically whenever the optimizer violates any constraint. This increase helps the search agents move toward regions of the solution space where better solutions are likely to be found.
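The static penalty transformation of Equation (18) can be sketched as follows; the penalty weights chosen here are illustrative, not the values used in the experiments:

```python
def penalized(f, x, ineq=(), eq=(), k=1e6, l=1e6, gamma=2, delta=2):
    """Static penalty method of Equation (18): each violated inequality
    g_i(x) <= 0 contributes k * max(0, g_i(x))**gamma, and each equality
    h_j(x) = 0 contributes l * |h_j(x)|**delta. The weights k and l are
    illustrative defaults."""
    p = sum(k * max(0.0, g(x)) ** gamma for g in ineq)
    p += sum(l * abs(h(x)) ** delta for h in eq)
    return f(x) + p  # '+' for minimization; use '-' when maximizing
```

A feasible point is returned unchanged, while any violation dominates the objective, steering the search agents back toward the feasible region.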

4.5.1. Cantilever Beam Design

The aim of the cantilever beam design problem is to minimize the weight of the cantilever beam [49]. The problem involves five hollow beam segments with constant thickness. The schematic diagram of the problem is given in Figure 8. The length of each segment differs from the others, and these lengths are the design variables of the problem ( x 1 , x 2 , x 3 , x 4 , x 5 ). In addition, the problem has one constraint. The mathematical model of the problem is given below (Equations (19)–(22)).
Consider   variable :   x = x 1 , x 2 , x 3 , x 4 , x 5
Minimize: f_1(x) = 0.0624 \left( x_1 + x_2 + x_3 + x_4 + x_5 \right)
Subject to: g_1(x) = \frac{61}{x_1^3} + \frac{37}{x_2^3} + \frac{19}{x_3^3} + \frac{7}{x_4^3} + \frac{1}{x_5^3} - 1 \le 0
Variable range: 0.01 \le x_i \le 100, \quad i = 1, \ldots, 5
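The model above translates directly into code; evaluating it at the ICOA solution reported in Table 26 reproduces the reported optimal weight, with the single constraint active up to rounding:

```python
def cantilever_weight(x):
    """Objective of Equations (19)-(22): f1(x) = 0.0624 * sum(x)."""
    return 0.0624 * sum(x)

def cantilever_g1(x):
    """Single constraint g1(x) <= 0 of the cantilever beam problem."""
    coeffs = (61, 37, 19, 7, 1)
    return sum(c / xi ** 3 for c, xi in zip(coeffs, x)) - 1.0
```

At x = (6.00741, 5.32034, 4.49318, 3.50262, 2.15025), the objective evaluates to about 1.339965 and g1 is approximately zero, i.e., the constraint is active at the optimum.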
Table 25 indicates the best, average, worst, and standard deviation values of all the algorithms on the cantilever beam design problem. Studying Table 25, the results of the ICOA are observed to be better than those of its competitors. The algorithms that stand out are the ICOA, COA, AVOA, and DE. The superiority of the ICOA, especially in the average and standard deviation values, means that it produces consistently successful results. Table 26 indicates the values of the decision variables and the optimal weights of all the algorithms. The ICOA is the optimizer with the best optimal weight ( f 1 = 1.339965 ) with structure variable values x = ( 6.00741 ,   5.32034 ,   4.49318 ,   3.50262 ,   2.15025 ). The convergence curves of all optimizers are given in Figure 9.
Figure 10 and Figure 11 present the analysis of the ICOA’s exploration and exploitation balance. This analysis is significant for understanding how the algorithm performs in engineering problems.
As indicated in Figure 10, the diversity measurement of the algorithm has a significant change over the iterations. With a high diversity value at the beginning, the algorithm tends to explore a large area of the solution space. This indicates that the algorithm has an effective exploration capability in its early stages. In the first 50 iterations, the diversity decreases rapidly, indicating that the algorithm starts to focus on promising regions. By the 100th iteration, the diversity almost approaches zero and the algorithm begins the full exploitation phase.
Figure 11 shows how the exploration and exploitation of the algorithm change over iterations in percentages. In the early iterations, the exploration starts at around 100% and decreases rapidly, reaching around 10% at the 50th iteration. On the other hand, the exploitation starts at zero and increases rapidly, reaching almost 100% by the 100th iteration. As can be observed through the graphics, the algorithm works in two distinct stages.
Exploration phase (iteration between 1 and 50): In this phase, the algorithm searches the different regions of the solution space and determines potential optimum points. High diversity reduces the risk of the algorithm being trapped by local optimums.
Exploitation Phase (iterations between 50 and 500): In this phase, the algorithm improves the solutions by concentrating on promising regions that are found in the exploration phase. A decrease in the diversity and an increase in the exploitation accelerate the convergence of the algorithm. This balance of exploration and exploitation directly affects the effectiveness of the algorithm in the engineering problem at hand. While the effective exploration at the beginning allows the algorithm to explore the solution space effectively, the intensive exploitation at the later stages ensures that the found solutions are optimized quickly. This balance is of critical importance, especially in complex engineering problems with constraints. While the high exploration capacity at the first stage enables the algorithm to explore the constraints and determine suitable solution regions, the exploitation at the later stages ensures rapid convergence for optimal solutions under certain constraints.
As a result, studying the exploration and exploitation balance of the ICOA, it is observed that the algorithm concentrates on exploration in the early stages but on exploitation in the later ones. This balance is one of the main factors for the algorithm to be successful in engineering design problems. It also allows the algorithm to have effective results in complicated and multimodal solution spaces.
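The diversity curve in Figure 10 and the percentages in Figure 11 follow the usual population-diversity scheme; the specific distance measure below (mean absolute deviation from the dimension-wise median) is an assumption made for illustration:

```python
def diversity(pop):
    """Population diversity: mean absolute distance of each dimension to
    its population median, averaged over dimensions. The exact measure
    used for Figures 10 and 11 is an assumption here."""
    D = len(pop[0])
    total = 0.0
    for d in range(D):
        col = sorted(ind[d] for ind in pop)
        med = col[len(col) // 2]
        total += sum(abs(v - med) for v in col) / len(col)
    return total / D

def exploration_percent(div_t, div_max):
    """Exploration% = 100 * div(t) / max diversity; exploitation% is the
    complement (100 - exploration%)."""
    return 100.0 * div_t / div_max if div_max > 0 else 0.0
```

A widely spread population yields a high diversity value (exploration near 100%), while a population collapsed onto one point yields zero diversity (full exploitation), matching the two phases described above.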

4.5.2. Gear Train Design

There are four gear wheels in the gear train design problem, and the numbers of teeth of these gears are the decision variables of the problem ( T a , T b , T d , T f ). The schematic diagram of the problem is given in Figure 12. Since the decision variables are numbers of teeth, they must take integer values. The aim of the gear train design problem is to bring the gear ratio as close as possible to 1/6.931 by minimizing the squared error [54]. The mathematical model of the problem is as follows (Equations (23)–(25)).
Consider   variable :   x = x 1 , x 2 , x 3 , x 4 = T a , T b , T d , T f
Minimize: f_2(x) = \left( \frac{1}{6.931} - \frac{T_b T_d}{T_a T_f} \right)^2
Subject to (variable range): 12 \le x_1, x_2, x_3, x_4 \le 60
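A direct sketch of the objective; rounding enforces the integer-teeth requirement, and the ICOA design x = (43, 16, 19, 49) reported in Table 28 evaluates to the order of 10^-12:

```python
def gear_ratio_error(x):
    """Objective of Equation (24): squared deviation of the gear ratio
    (Tb*Td)/(Ta*Tf) from the target 1/6.931. Teeth counts are rounded
    to the nearest integer, as the decision variables must be integers."""
    Ta, Tb, Td, Tf = (round(v) for v in x)
    return (1.0 / 6.931 - (Tb * Td) / (Ta * Tf)) ** 2
```

Here (16 * 19) / (43 * 49) = 304/2107 differs from 1/6.931 by less than 2e-6, so the squared error is on the order of 2.7e-12, consistent with the optimal cost reported for the ICOA.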
Table 27 presents the best, average, worst, and standard deviation values of all the algorithms on the gear train design problem. The ICOA, COA, AVOA, EVO, and RUN are the algorithms reaching the optimal cost. The ICOA and RUN are the most successful algorithms in the worst value metric. Table 28 gives the values of the decision variables and the optimal costs of all the algorithms. The ICOA is an optimizer with one of the best optimal costs ( f 2 = 2.70086 × 10 12 ) with design parameters x = ( 43 ,   16 ,   19 ,   49 ). The convergence curves of all optimizers are presented in Figure 13.

4.5.3. Rolling Element Bearing Design

The rolling element bearing design problem has 10 design variables and 10 constraints. The schematic diagram of the problem is given in Figure 14. The aim of the rolling element bearing design problem is to maximize the carrying capacity of the ball-bearing [55]. The mathematical model of the problem is as follows (Equations (26)–(30)).
Consider variable: x = [D_m, D_b, Z, f_i, f_o, K_{Dmin}, K_{Dmax}, \varepsilon, e, \zeta]
Maximize: f_3(x) = f_c Z^{2/3} D_b^{1.8} if D_b \le 25.4 mm; f_3(x) = 3.647 f_c Z^{2/3} D_b^{1.4} if D_b > 25.4 mm
Subject to: g_1(x) = \frac{\phi_o}{2 \sin^{-1}(D_b / D_m)} - Z + 1 \ge 0, g_2(x) = 2 D_b - K_{Dmin}(D - d) \ge 0, g_3(x) = K_{Dmax}(D - d) - 2 D_b \ge 0, g_4(x) = D_m - (0.5 - e)(D + d) \ge 0, g_5(x) = (0.5 + e)(D + d) - D_m \ge 0, g_6(x) = D_m - 0.5(D + d) \ge 0, g_7(x) = 0.5(D - D_m - D_b) - \varepsilon D_b \ge 0, g_8(x) = \zeta B_w - D_b \le 0, g_9(x) = f_i \ge 0.515, g_{10}(x) = f_o \ge 0.515
where f_c = 37.91 \left[ 1 + \left\{ 1.04 \left( \frac{1 - \gamma}{1 + \gamma} \right)^{1.72} \left( \frac{f_i (2 f_o - 1)}{f_o (2 f_i - 1)} \right)^{0.41} \right\}^{10/3} \right]^{-0.3} \times \frac{\gamma^{0.3} (1 - \gamma)^{1.39}}{(1 + \gamma)^{1/3}} \left[ \frac{2 f_i}{2 f_i - 1} \right]^{0.41}, \gamma = \frac{D_b}{D_m}, f_i = \frac{r_i}{D_b}, f_o = \frac{r_o}{D_b}, \phi_o = 2\pi - 2 \cos^{-1} \left( \frac{ \{ (D - d)/2 - 3(T/4) \}^2 + \{ D/2 - T/4 - D_b \}^2 - \{ d/2 + T/4 \}^2 }{ 2 \{ (D - d)/2 - 3(T/4) \} \{ D/2 - T/4 - D_b \} } \right), T = D - d - 2 D_b, D = 160, d = 90, B_w = 30, r_i = r_o = 11.033
Variable range: 0.5(D + d) \le D_m \le 0.6(D + d), 0.15(D - d) \le D_b \le 0.45(D - d), 4 \le Z \le 50, 0.515 \le f_i \le 0.6, 0.515 \le f_o \le 0.6, 0.4 \le K_{Dmin} \le 0.5, 0.6 \le K_{Dmax} \le 0.7, 0.3 \le \varepsilon \le 0.4, 0.02 \le e \le 0.1, 0.6 \le \zeta \le 0.85
Table 29 gives the best, average, worst, and standard deviation values of all the algorithms for the rolling element bearing design problem. The ICOA is the most competitive algorithm, as it outperforms its competitors in both the best ($f_{3,best} = 85{,}547.81075$) and average ($f_{3,ave} = 85{,}498.37802$) value metrics. These results indicate that the ICOA can also be successful in maximization problems. Table 30 presents the values of the design variables and the optimal load-carrying capacity of all the algorithms. Figure 15 shows the convergence curves of all algorithms.
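The load-capacity objective can be sketched as follows, using the standard Changsen-type dynamic load model on which this benchmark is based; the sample design point is illustrative, not the reported ICOA optimum:

```python
# Geometry constants from the problem statement (used by the constraint set
# g1-g10; not needed for the objective itself).
D, d = 160.0, 90.0

def load_capacity(Dm, Db, Z, fi, fo):
    # Dynamic load coefficient fc (standard Changsen-type model assumed here),
    # followed by the piecewise objective f3.
    gamma = Db / Dm
    fc = (37.91
          * (1 + (1.04 * ((1 - gamma) / (1 + gamma)) ** 1.72
                  * (fi * (2 * fo - 1) / (fo * (2 * fi - 1))) ** 0.41) ** (10.0 / 3.0)) ** (-0.3)
          * (gamma ** 0.3 * (1 - gamma) ** 1.39 / (1 + gamma) ** (1.0 / 3.0))
          * (2 * fi / (2 * fi - 1)) ** 0.41)
    if Db <= 25.4:
        return fc * Z ** (2.0 / 3.0) * Db ** 1.8
    return 3.647 * fc * Z ** (2.0 / 3.0) * Db ** 1.4

# Illustrative mid-range point (not the reported optimum).
cap = load_capacity(125.0, 21.0, 11, 0.515, 0.515)
```

Since $f_c$ does not depend on $Z$, the capacity grows monotonically with the number of balls, which is why the integer variable $Z$ tends toward the value allowed by the assembly-angle constraint $g_1$.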

4.5.4. Heat Exchanger Network Design

The heat exchanger network design problem aims to design an optimal heat exchanger network to minimize the overall heat exchange area [56]. This system consists of three hot flow zones and one cold flow zone. The cold flow needs to be heated from 100 °F to 500 °F using the three hot flow zones. A schematic diagram of the problem is given in Figure 16. The problem has eight design variables and six constraints. The mathematical model of the problem is as follows (Equations (31)–(34)).
Consider variable: $x = [x_1, x_2, x_3, x_4, x_5, x_6, x_7, x_8]$
Minimize: $f_4(x) = x_1 + x_2 + x_3$
Subject to:
$g_1(x) = -1 + 0.0025(x_4 + x_6) \le 0$
$g_2(x) = -1 + 0.0025(x_5 + x_7 - x_4) \le 0$
$g_3(x) = -1 + 0.01(x_8 - x_5) \le 0$
$g_4(x) = -x_1 x_6 + 833.33252 x_4 + 100 x_1 - 83333.333 \le 0$
$g_5(x) = -x_2 x_7 + 1250 x_5 + x_2 x_4 - 1250 x_4 \le 0$
$g_6(x) = -x_3 x_8 + 1250000 + x_3 x_5 - 2500 x_5 \le 0$
Variable range: $100 \le x_1 \le 10000$, $1000 \le x_2, x_3 \le 10000$, $10 \le x_4, \ldots, x_8 \le 1000$
Table 31 presents the best, average, worst, and standard deviation values of all the algorithms for the heat exchanger network design problem. The ICOA outperforms its competitors in all four metrics. In the best value metric, more than one algorithm gives results close to the optimum, but the standard deviation of the ICOA is very low compared to its competitors, which means that the ICOA reaches the optimal result consistently. Table 32 shows the values of the decision variables and the best results of all the algorithms. Figure 17 shows the convergence curves of all algorithms.
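A common way to feed such a constrained model to an unconstrained meta-heuristic is a static penalty function; the paper does not prescribe a specific scheme, so the quadratic penalty and its coefficient below are assumptions:

```python
# Objective: total heat exchange area.
def objective(x):
    return x[0] + x[1] + x[2]

def constraints(x):
    # g_i(x) <= 0 means constraint i is satisfied.
    x1, x2, x3, x4, x5, x6, x7, x8 = x
    return [
        -1 + 0.0025 * (x4 + x6),
        -1 + 0.0025 * (x5 + x7 - x4),
        -1 + 0.01 * (x8 - x5),
        -x1 * x6 + 833.33252 * x4 + 100 * x1 - 83333.333,
        -x2 * x7 + 1250 * x5 + x2 * x4 - 1250 * x4,
        -x3 * x8 + 1250000 + x3 * x5 - 2500 * x5,
    ]

def penalized(x, rho=1e6):
    # Quadratic static penalty added for each violated constraint g_i(x) > 0.
    return objective(x) + rho * sum(max(0.0, g) ** 2 for g in constraints(x))

# In-range but infeasible point: g6 is strongly violated here.
x_try = (100.0, 1000.0, 1000.0, 10.0, 10.0, 10.0, 10.0, 10.0)
```

With this wrapper the optimizer only ever sees `penalized(x)`; the bilinear terms in $g_4$–$g_6$ are what make the problem hard despite its linear objective.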

4.5.5. Tubular Column Design

The tubular column design problem has two design variables and six constraints. The aim of the problem is to carry the compressive load on the column at minimum cost [57]. The cost of the column stems from its construction and the material of which it is made. The schematic diagram of the problem is given in Figure 18. The mathematical model of the problem is as follows (Equations (35)–(39)).
Consider variable: $x = [d, t] = [x_1, x_2]$
Minimize: $f_5(x) = 9.82\,d t + 2d$
Subject to:
$g_1(x) = \dfrac{d}{14} - 1 \le 0$, $g_2(x) = \dfrac{8 P L^2}{\pi^3 E\, d t (d^2 + t^2)} - 1 \le 0$, $g_3(x) = \dfrac{2}{d} - 1 \le 0$, $g_4(x) = \dfrac{P}{\pi d t \sigma_y} - 1 \le 0$, $g_5(x) = \dfrac{0.2}{t} - 1 \le 0$, $g_6(x) = \dfrac{t}{0.8} - 1 \le 0$
where $P = 2500$, $\sigma_y = 500$, $E = 0.85 \times 10^6$, $L = 250$
Variable range: $2 \le d \le 14$, $0.2 \le t \le 0.8$
Table 33 gives the best, average, worst, and standard deviation values of all the algorithms for the tubular column design problem. Studying Table 33, it is observed that the results of the ICOA, COA, and AVOA are very close to each other. In the best and average value metrics, the ICOA outperforms its competitors with scores of $f_{5,best} = 26.53132788$ and $f_{5,ave} = 26.531344$, respectively. The AVOA is more successful in the worst value metric. Given the mean and standard deviation values of the ICOA, it can be concluded that the ICOA rarely produces its worst value. Table 34 gives the values of the design variables and optimum costs of all the algorithms. Figure 19 shows the convergence curves of all optimizers.
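The column design model can be checked numerically near the reported optimum. The constants follow the problem statement, and the column length $L = 250$ is assumed from the standard formulation of this benchmark:

```python
import math

# Constants of the tubular column benchmark: axial load P, yield stress
# sigma_y, Young's modulus E; L = 250 is assumed (standard formulation).
P, SIGMA_Y, E, L = 2500.0, 500.0, 0.85e6, 250.0

def cost(d, t):
    # Material plus construction cost of the column.
    return 9.82 * d * t + 2.0 * d

def constraints(d, t):
    # g_i <= 0 means the constraint is satisfied.
    return [
        d / 14.0 - 1.0,                                                          # g1: size
        8.0 * P * L ** 2 / (math.pi ** 3 * E * d * t * (d ** 2 + t ** 2)) - 1.0, # g2: buckling
        2.0 / d - 1.0,                                                           # g3: size
        P / (math.pi * d * t * SIGMA_Y) - 1.0,                                   # g4: yield stress
        0.2 / t - 1.0,                                                           # g5: thickness
        t / 0.8 - 1.0,                                                           # g6: thickness
    ]

# Near the reported ICOA optimum (f5 ≈ 26.5313)
f = cost(5.451, 0.292)
```

At this point the buckling ($g_2$) and yield stress ($g_4$) constraints are nearly active, which is typical at the optimum of this problem.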

4.6. Critical Assessment of ICOA Implementation

This section provides a comprehensive evaluation of the strengths and potential drawbacks associated with the proposed ICOA algorithm.

4.6.1. Algorithmic Strengths

The ICOA’s most prominent advantage lies in its enhanced exploration–exploitation balance achieved through the adaptive step length mechanism. In CEC-2014 tests, it obtained the best average ranking (1.70) while the COA lagged behind (4.13). The mapping vector (m) mechanism enables more effective performance in high-dimensional problems, achieving the best results in seven out of 10 functions in 500-dimensional tests. Furthermore, the removal of the Xshade parameter and introduction of stochastic components significantly reduced premature convergence, enhancing the algorithm’s ability to escape local optima traps.
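The two mechanisms described above can be illustrated with a minimal sketch. The linear decay law, the 30% mapping-vector density, and the toy update rule below are stand-in assumptions, since the exact ICOA equations are defined in the method section of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def step_length(t, T):
    # Adaptive step: large in early iterations (exploration),
    # small in later iterations (exploitation).
    return 1.0 - t / T

def mapping_vector(dim, frac=0.3):
    # Binary vector: only about `frac` of the dimensions are updated in a
    # given iteration, limiting randomness and per-iteration cost.
    return (rng.random(dim) < frac).astype(float)

T, dim = 100, 30
x = rng.uniform(-10.0, 10.0, dim)   # current crayfish position
x0 = x.copy()
x_best = np.zeros(dim)              # stand-in "best" position
for t in range(T):
    v = step_length(t, T)
    m = mapping_vector(dim)
    # Move only the selected dimensions toward the best position found so far.
    x = x + m * v * rng.random(dim) * (x_best - x)
```

Because the step length shrinks with $t$ and the mapping vector gates each dimension stochastically, early iterations take large exploratory moves while late iterations make small, selective refinements.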

4.6.2. Limitations and Drawbacks

The ICOA’s primary limitation is increased computational complexity. Step length calculation, mapping vector generation, and permutation operations require 15–20% more execution time compared to the basic COA. The algorithm’s performance is critically dependent on r₁ and r₂ parameters, with optimal ranges of [0.2, 0.8] for r₁ and [0.3, 0.7] for r₂. Values outside these ranges significantly degrade performance.
The ICOA exhibits limited effectiveness in certain problem types. In highly constrained problems, frequent constraint violations may occur; its continuous nature makes it unsuitable for discrete optimization problems; and in low-dimensional problems, the overhead may exceed the benefits. Additionally, the algorithm lacks formal convergence proofs, and its stochastic nature complicates theoretical analysis.

4.6.3. Performance Assessment

Experimental results demonstrate that the ICOA exhibits varying success levels depending on problem type. It achieves the highest success rate in unimodal functions, moderate performance in multimodal functions, and more limited advantages in composite functions. This indicates that the algorithm’s superiority diminishes with increasing problem complexity.

4.6.4. Recommendations

The ICOA is ideal for high-dimensional continuous optimization problems, moderately constrained engineering problems, and situations requiring balanced exploration–exploitation. However, a cost–benefit analysis should be conducted for low-dimensional problems, the parameter tuning process should be carefully managed, and algorithm selection should be made according to problem type.
The ICOA offers significant improvements over the COA, particularly for high-dimensional problems. However, increased computational costs, parameter sensitivity, and limitations in certain problem types must be considered. This comprehensive evaluation emphasizes the importance of balanced consideration of the ICOA’s strengths and limitations for successful implementation.

5. ROAS Rate Problem

In this section, a Return on Advertising Spend (ROAS) problem based on real data is considered. The data belong to the company “Makarna Lütfen”, operating in Kırklareli, Turkey, which sells its products on an online marketplace. The data given in Appendix A contain the statistics of this marketplace for May 2024. The firm designs advertising strategies to increase its sales. First, the company categorizes its products into groups; this grouping depends on the similarity of the products, the number of sales, and the experience of the sales team. The company shared the data of six of the groups it formed but did not find it appropriate to name the products in the groups. To increase the sales of each group, company officials defined a set of keywords. When these keywords are searched by users visiting the marketplace, the products in the relevant groups are shown to those users. One keyword can be used for more than one group. The firm pays an advertising fee to the marketplace to have its products shown when a keyword is searched. Keywords can also be used by competitors; therefore, advertising management is an active process. Company authorities set minimum and maximum spending limits for each keyword.
An optimization problem was generated using the Advertising Spend (AS), Revenue (R), and ROAS data from the company. The tables given in Appendix A also include the other data obtained from the company, so that researchers can make use of them. ROAS is the ratio of revenue to advertising spend; it is found by dividing revenue by advertising expenses. The equations of the ROAS rate problem are given below. Table 35 shows the keywords used in the advertising campaigns.
Maximize:
$f_{ROAS} = 0.2\,ROAS_1 + 0.2\,ROAS_2 + 0.1\,ROAS_3 + 0.2\,ROAS_4 + 0.1\,ROAS_5 + 0.2\,ROAS_6$
Subject to:
$g_1 = (x_1 + x_2 + x_3 + x_4 + x_5 + x_6)/32{,}000 < 1$, $g_2 = (y_1 + y_2 + y_3 + y_4 + y_5 + y_6)/9400 < 1$, $g_3 = (z_1 + z_2 + z_3 + z_4 + z_5 + z_6)/7400 < 1$, $g_4 = (a_1 + a_2 + a_3 + a_4 + a_5 + a_6)/4950 < 1$, $g_5 = (b_1 + b_4)/585 < 1$, $g_6 = (c_1 + c_4)/1200 < 1$, $g_7 = (d_1 + d_2 + d_4 + d_5 + d_6)/7015 < 1$, $g_8 = (e_1 + e_2 + e_5)/2900 < 1$, $g_9 = (f_3 + f_5 + f_6)/6300 < 1$, $g_{10} = (g_5 + g_6)/300 < 1$, $g_{11} = (h_3 + h_6)/135 < 1$, $g_{12} = (p_1 + p_2)/100 < 1$, $g_{13} = (w_2 + w_3)/1250 < 1$
where
$ROAS_1 = R_1 / AS_1$, where $R_1 = 14.57x_1 + 2.76b_1 + 317.14c_1 + 2.69a_1 + 0.26y_1 + 44.59z_1 + 27.93d_1 + 8.36e_1 + 13.95p_1$ and $AS_1 = x_1 + b_1 + c_1 + a_1 + y_1 + z_1 + d_1 + e_1 + p_1$
$ROAS_2 = R_2 / AS_2$, where $R_2 = 30.95x_2 + 13.02d_2 + 6.25y_2 + 14.73e_2 + 16.07p_2 + 194.79w_2 + 6.69z_2 + 5622.52a_2$ and $AS_2 = x_2 + d_2 + y_2 + e_2 + p_2 + w_2 + z_2 + a_2$
$ROAS_3 = R_3 / AS_3$, where $R_3 = 1.03w_3 + 117.3x_3 + 4.55a_3 + 0.73y_3 + 198.02h_3 + 473.83f_3 + 8.04z_3$ and $AS_3 = w_3 + x_3 + a_3 + y_3 + h_3 + f_3 + z_3$
$ROAS_4 = R_4 / AS_4$, where $R_4 = 30.38x_4 + 4.84z_4 + 0.79y_4 + 79.31a_4 + 1.3c_4 + 30.7b_4 + 13.9d_4$ and $AS_4 = x_4 + z_4 + y_4 + a_4 + c_4 + b_4 + d_4$
$ROAS_5 = R_5 / AS_5$, where $R_5 = 102.52x_5 + 7.15y_5 + 8.02z_5 + 7.45d_5 + 6.1e_5 + 0.77a_5 + 0.79f_5 + 16.45g_5$ and $AS_5 = x_5 + y_5 + z_5 + d_5 + e_5 + a_5 + f_5 + g_5$
$ROAS_6 = R_6 / AS_6$, where $R_6 = 31.68x_6 + 4.68g_6 + 41.24a_6 + 71.76y_6 + 4.94z_6 + 231.84d_6 + 3.11f_6 + 37.46h_6$ and $AS_6 = x_6 + g_6 + a_6 + y_6 + z_6 + d_6 + f_6 + h_6$
Variable range:
4000 < x < 6000 ,   1500 < y < 1600 ,   1000 < z < 1350 ,   775 < a < 850 ,   125 < b < 375 450 < c < 675 ,   1370 < d < 1420 ,   920 < e < 980 ,   1650 < f < 2300 ,   100 < g < 175 ,   25 < h < 85 35 < p < 55 ,   350 < w < 750
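The group-level terms of this model are straightforward to evaluate; the sketch below computes $ROAS_1$ from the coefficients of $R_1$ above (variable ordering as in $AS_1$):

```python
# Coefficients of R1 in the order of the AS1 spend variables
# (x1, b1, c1, a1, y1, z1, d1, e1, p1).
COEF1 = [14.57, 2.76, 317.14, 2.69, 0.26, 44.59, 27.93, 8.36, 13.95]

def roas1(spend):
    # ROAS of group 1: revenue generated by the spends, divided by total spend.
    revenue = sum(c * s for c, s in zip(COEF1, spend))
    ad_spend = sum(spend)
    return revenue / ad_spend

# With one unit spent on every keyword, ROAS1 is the mean coefficient.
val = roas1([1.0] * 9)
```

Because each $ROAS_i$ is a ratio of linear forms, shifting budget toward high-coefficient keywords (within the spend limits $g_1$–$g_{13}$) raises the group's ROAS, which is exactly the trade-off the optimizer must balance across all six groups.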
In this section, the ICOA and COA are compared with the up-to-date and validated Artificial Protozoa Optimizer (APO) [58], Flood Algorithm (FLA) [59], Greylag Goose Optimization (GGO) [60], Hippopotamus Optimization (HO) [61], Hiking Optimization Algorithm (HOA) [62], Catch Fish Optimization Algorithm (CFOA) [63], Electric Eel Foraging Optimization (EEFO) [64], and Puma Optimizer (PO) [65].
Table 36 presents the results of all algorithms for the ROAS rate problem. The rows of the table show the total AS for each keyword; the row labeled Total indicates the overall AS, and the last row presents the ROAS score of each algorithm. The ICOA is the most successful algorithm with a score of 144.601. The FLA is second with a score of 144.211, while the COA is third with a score of 144.072. Figure 20 shows the convergence curves of the algorithms. These results show that the ICOA can successfully solve real-world problems, and that it is efficient on objectives composed of a weighted sum of several terms.

6. Discussion

The most important practical contribution of the ICOA is its success in the real-world ROAS (Return on Advertising Spend) optimization problem. This study, using the May 2024 advertisement data of the company ‘Makarna Lütfen’ operating in Kırklareli, emphasizes the value of the ICOA in industrial applications. The problem requires an optimal distribution of the advertising budget across 13 keywords (Makarna Lütfen, Pasta, Baby Biscuits, Organic, Baby Semolina, etc.) and six product groups. With its score of 144.601, the ICOA outperforms its closest competitor, the FLA, which scores 144.211. This improvement corresponds to almost USD 86 of additional monthly efficiency, or about USD 1,032 of additional income annually. This result is crucial for small- and medium-sized e-commerce businesses; for a company with a monthly advertising budget of USD 100,000, a 0.27% improvement generates an additional annual return of USD 3240. The 78 decision variables and 13 constraints of the ROAS problem verify that the ICOA can successfully process real-world complexity. The algorithm’s automatic parameter adjustment could be easily integrated into daily optimization processes in digital marketing agencies. The results in engineering applications are also supportive; the 0.0007% weight reduction in cantilever beam design and the 0.09% capacity increase in bearing design indicate the applicability of the ICOA in various technical areas. These results suggest that the ICOA can generate concrete economic value in real industrial problems, beyond its academic success.

7. Conclusions

In this study, an improved version of the Crayfish Optimization Algorithm (COA) is presented. The proposed algorithm is named the Improved Crayfish Optimization Algorithm (ICOA). The COA consists of a summer resort stage (exploration), competition stage (exploitation), and foraging stage (exploitation). In the competition stage, crayfish compete with other crayfish to enter the caves. In the competition stage, while modeling this competition, only the distance between the crayfish is taken into account. In the ICOA, a new model is proposed by including step lengths and mapping vectors into the mathematical model in addition to the distances of the crayfish. In order to verify the validity of the ICOA, it is compared to 12 MHAs. The CEC-2014 dataset with 30 complex test functions is used for the comparison. The results of the optimizers are interpreted with a Wilcoxon Signed-Rank Test. Moreover, five engineering design problems and a Return on Advertising Spend (ROAS) problem were used to compare the algorithms.
The results of this study are as follows.
  • In the competition stage of the ICOA, step length is inversely proportional to iteration. This enables the competition stage to help exploration at low iteration numbers and exploitation at high iteration numbers.
  • The randomness of the mapping vector increases the stochastic property of the ICOA.
  • CEC-2014 test results indicate that the ICOA is superior to its competitor algorithms. It is a significant result that the ICOA’s efficiency does not decrease, especially when the dimension of the problem increases.
  • By means of the mapping vector, in high-dimensional problems, updating only some dimensions at a time both reduces the computational load of the algorithm and controls excessive randomness.
  • Studying the convergence curves, it is observed that the ICOA converges better than the COA. This verifies that the improvements made in the competition stage are successful. The primary reason for the ICOA’s better convergence is that it performs exploration at low iteration numbers and exploitation at high iteration numbers; the second reason is its mapping vector.
  • Radar graphics indicate that the ICOA has more consistent results than its competitors.
  • Scalability analysis indicates that the ICOA is the algorithm least affected by the increase in problem dimension.
  • The results of engineering design problems provide the promising impression that the ICOA could solve real-world problems.
  • The ICOA’s success in the ROAS problem suggests that it can solve complex real-world problems. The reason for this success is the set of improvements made in the competition stage. Removing the $X_{shade}$ parameter prevents the ICOA from being caught in local minimum traps.
Despite its success, the ICOA still has certain limitations. The performance of the algorithm may be sensitive to the tuning of some parameters such as the size of the mapping vector and the adaptation rate of the step length. Although the scalability tests seem to be promising, further evaluation of the algorithm’s effectiveness on large-scale, highly constrained or discrete optimization problems could be beneficial.
Future studies could include hybridizing the ICOA with other meta-heuristic or machine learning-based methods to further improve its performance. Parameter control mechanisms could also be improved through meta-learning or reinforcement learning approaches. Extending the ICOA to multi-objective and constrained optimization problems also appears promising.
The flexible and adaptive structure of the ICOA makes it suitable for a wide range of practical applications. Potential application areas could be structural optimization, parameter tuning in machine learning, optimization of energy systems, supply chain and logistics management, biomedical engineering, and financial portfolio optimization. The successful practical application of the ICOA to the ROAS problem points out its promise for real-world data-driven optimization tasks.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Conflicts of Interest

The author declares no conflicts of interest.

Nomenclature

$temp$: Temperature
$p$: Crayfish intake
$\sigma$, $C_1$, $C_2$: Control parameters
$\mu$: The best temperature for crayfish
$t$: Iteration
$T$: Maximum iteration
$X_{shade}$: Cave location
$X_G$: Best location
$X_L$: Best location (current population)
$X_{i,j}^t$: The position of the ith agent in the jth dimension at the tth iteration
$X_{i,j}^{t+1}$: The position of the ith agent in the jth dimension at the (t + 1)th iteration
$X_{z,j}^t$: The position of the randomly selected zth agent from the population in the jth dimension at the tth iteration
$N$: Population size
$z$: Randomly selected agent
$C_3$: Food factor
$X_{food}$: Location of food
$fitness_i$: Fitness value of agent i
$fitness_{food}$: Fitness value of the food
$V$: Step length
$m$: Mapping vector
$d$, $D$: Dimension of the problem
MHA: Meta-heuristic algorithm
FE: Function evaluations
WSRT: Wilcoxon Signed-Rank Test
PSO: Particle Swarm Optimization
DE: Differential Evolution
WOA: Whale Optimization Algorithm
AOA: Arithmetic Optimization Algorithm
AVOA: African Vultures Optimization Algorithm
GJO: Golden Jackal Optimization
SHO: Sea-Horse Optimizer
EVO: Energy Valley Optimizer
HCO: Human Conception Optimizer
RUN: RUNge Kutta optimizer
BBOA: Brown-Bear Optimization Algorithm
COA: Crayfish Optimization Algorithm
ICOA: Improved Crayfish Optimization Algorithm
ROAS: Return on Advertising Spend
AS: Advertising Spend
R: Revenue
APO: Artificial Protozoa Optimizer
FLA: Flood Algorithm
GGO: Greylag Goose Optimization
HO: Hippopotamus Optimization
HOA: Hiking Optimization Algorithm
CFOA: Catch Fish Optimization Algorithm
EEFO: Electric Eel Foraging Optimization
PO: Puma Optimizer

Appendix A

Table A1. Group 1 keywords information.
Keywords | Advertising Spend | Num. of Impressions | Num. of Clicks | Hit Rate (%) | Sales Volume | Revenue | ROAS | Proposed CPM | Actual CPM | Selected CPM
Makarna lutfen | 1299.50 | 9532 | 156 | 1.64 | 130 | 18,940.01 | 14.57 | 42.51 | 136.33 | 225.42
Baby biscuit | 462.85 | 8165 | 58 | 0.71 | 8 | 1277.16 | 2.76 | 18.41 | 56.69 | 206.04
Biscuit | 16.56 | 4523 | 14 | 0.31 | 32 | 5251.80 | 317.14 | 15.22 | 3.66 | 145.51
Organic | 116.36 | 1742 | 17 | 0.98 | 3 | 312.59 | 2.69 | 18.43 | 66.80 | 211.30
Pasta | 402.09 | 6377 | 18 | 0.28 | 1 | 105.61 | 0.26 | 64.72 | 63.05 | 86.76
Toddler food | 37.40 | 830 | 37 | 4.46 | 13 | 1667.82 | 44.59 | 21.00 | 45.06 | 101.03
Baby semolina | 93.94 | 1180 | 33 | 2.80 | 21 | 2623.33 | 27.93 | 32.65 | 79.61 | 87.34
Baby tarhana | 51.98 | 832 | 9 | 1.08 | 3 | 434.52 | 8.36 | 37.28 | 62.48 | 102.37
Gluten-free pasta | 43.91 | 1207 | 9 | 0.75 | 6 | 612.40 | 13.95 | 12.37 | 36.38 | 53.18
Table A2. Group 2 keyword information.
Keywords | Advertising Spend | Num. of Impressions | Num. of Clicks | Hit Rate (%) | Sales Volume | Revenue | ROAS | Proposed CPM | Actual CPM | Selected CPM
Makarna lutfen | 442.97 | 7497 | 83 | 1.11 | 92 | 13,709.16 | 30.95 | 73.57 | 59.09 | 124.99
Baby semolina | 118.11 | 1905 | 45 | 2.36 | 9 | 1538.10 | 13.02 | 28.05 | 62.00 | 103.69
Pasta | 148.75 | 3353 | 12 | 0.36 | 5 | 929.50 | 6.25 | 30.48 | 44.36 | 76.92
Baby tarhana | 50.68 | 829 | 36 | 4.34 | 6 | 746.40 | 14.73 | 41.18 | 61.13 | 67.88
Gluten-free pasta | 51.18 | 2177 | 51 | 2.34 | 5 | 822.50 | 16.07 | 13.30 | 23.51 | 45.05
Pudding | 2.36 | 933 | 3 | 0.32 | 3 | 459.70 | 194.79 | 10.28 | 2.53 | 50.28
Toddler food | 46.34 | 748 | 10 | 1.34 | 2 | 309.80 | 6.69 | 45.59 | 61.95 | 97.84
Organic | 0.25 | 132 | 3 | 2.27 | 10 | 1405.63 | 5622.52 | 12.37 | 1.89 | 52.37
Table A3. Group 3 keyword information.
Keywords | Advertising Spend | Num. of Impressions | Num. of Clicks | Hit Rate (%) | Sales Volume | Revenue | ROAS | Proposed CPM | Actual CPM | Selected CPM
Pudding | 1076.37 | 27,121 | 145 | 0.53 | 4 | 1106.6 | 1.03 | 14.46 | 39.69 | 74.4
Makarna lutfen | 10,331.77 | 94,109 | 10,065 | 10.7 | 10,703 | 1,211,873.38 | 117.3 | 23.4 | 109.79 | 138
Organic | 131.63 | 5060 | 87 | 1.72 | 6 | 599.19 | 4.55 | 38.69 | 26.01 | 72
Pasta | 906.84 | 7287 | 31 | 0.43 | 10 | 661 | 0.73 | 28.17 | 124.45 | 145.71
Baby curd | 9.86 | 242 | 9 | 3.72 | 13 | 1952.44 | 198.02 | 14.24 | 40.74 | 57
Baby butter | 2.11 | 110 | 9 | 8.18 | 4 | 999.78 | 473.83 | 12.37 | 19.18 | 21
Toddler food | 1449.46 | 15,005 | 252 | 1.68 | 93 | 11,648.8 | 8.04 | 40.98 | 96.6 | 541
Table A4. Group 4 keyword information.
Keywords | Advertising Spend | Num. of Impressions | Num. of Clicks | Hit Rate (%) | Sales Volume | Revenue | ROAS | Proposed CPM | Actual CPM | Selected CPM
Makarna lutfen | 661.88 | 9722 | 83 | 0.85 | 138 | 20,105.91 | 30.38 | 63.96 | 68.08 | 74.65
Toddler food | 206.31 | 2750 | 18 | 0.65 | 8 | 997.61 | 4.84 | 48.15 | 75.02 | 112.41
Pasta | 404.93 | 5280 | 22 | 0.42 | 1 | 319.9 | 0.79 | 25.69 | 76.69 | 104.72
Organic | 6.83 | 200 | 4 | 2 | 3 | 541.77 | 79.31 | 11.06 | 34.15 | 40.79
Biscuit | 1115.59 | 13,124 | 30 | 0.23 | 8 | 1447.2 | 1.3 | 49.36 | 85 | 86.39
Baby biscuit | 36.96 | 708 | 8 | 1.13 | 4 | 1134.63 | 30.7 | 38.62 | 52.2 | 102.37
Baby semolina | 1566.92 | 19,175 | 509 | 2.65 | 189 | 21,785.87 | 13.9 | 25.14 | 81.72 | 163.4
Table A5. Group 5 keyword information.
Keywords | Advertising Spend | Num. of Impressions | Num. of Clicks | Hit Rate (%) | Sales Volume | Revenue | ROAS | Proposed CPM | Actual CPM | Selected CPM
Makarna lutfen | 17,665.35 | 114,206 | 13,709 | 12 | 13,790 | 1,811,070.41 | 102.52 | 81.57 | 154.68 | 287
Pasta | 7494.32 | 56,982 | 638 | 1.12 | 371 | 53,609.57 | 7.15 | 70.41 | 131.52 | 212
Toddler food | 1784.95 | 12,867 | 349 | 2.71 | 111 | 14,323.48 | 8.02 | 41.9 | 138.72 | 370
Baby semolina | 5198.98 | 37,666 | 955 | 2.54 | 342 | 38,720.82 | 7.45 | 25.08 | 138.03 | 910
Baby tarhana | 2780.55 | 16,440 | 381 | 2.32 | 131 | 16,951.23 | 6.1 | 27.39 | 169.13 | 291
Organic | 4511.94 | 17,223 | 337 | 1.96 | 26 | 3479.74 | 0.77 | 57.02 | 261.97 | 112
Baby butter | 4389.9 | 60,975 | 308 | 0.51 | 27 | 3451.92 | 0.79 | 67.66 | 72 | 170
Rice flour | 107.78 | 2770 | 24 | 0.87 | 10 | 1773 | 16.45 | 16.71 | 38.91 | 93.26
Table A6. Group 6 keyword information.
Keywords | Advertising Spend | Num. of Impressions | Num. of Clicks | Hit Rate (%) | Sales Volume | Revenue | ROAS | Proposed CPM | Actual CPM | Selected CPM
Makarna lutfen | 300.51 | 6952 | 53 | 0.76 | 54 | 9519.31 | 31.68 | 54.93 | 43.23 | 65.21
Rice flour | 183.78 | 6480 | 36 | 0.56 | 5 | 859.5 | 4.68 | 13.62 | 28.36 | 52.37
Organic | 36.74 | 796 | 27 | 3.39 | 8 | 1515.24 | 41.24 | 12.37 | 46.16 | 61.02
Pasta | 12.98 | 1311 | 12 | 0.92 | 6 | 931.4 | 71.76 | 20.57 | 9.9 | 57.29
Toddler food | 3292.64 | 33,583 | 620 | 1.85 | 139 | 16,264.6 | 4.94 | 58.08 | 98.04 | 300
Baby semolina | 9.83 | 253 | 36 | 14.23 | 20 | 2279 | 231.84 | 45.55 | 38.85 | 45
Baby butter | 1655.21 | 79,968 | 583 | 0.73 | 28 | 5149.68 | 3.11 | 12.99 | 20.7 | 28
Baby curd | 74.21 | 5589 | 60 | 1.07 | 18 | 2779.63 | 37.46 | 12.37 | 13.28 | 46

References

  1. Agushaka, J.O.; Ezugwu, A.E. Evaluation of Several Initialization Methods on Arithmetic Optimization Algorithm Performance. J. Intell. Syst. 2021, 31, 70–94. [Google Scholar] [CrossRef]
  2. Agushaka, J.O.; Ezugwu, A.E.; Abualigah, L. Dwarf Mongoose Optimization Algorithm. Comput. Methods Appl. Mech. Eng. 2022, 391, 114570. [Google Scholar] [CrossRef]
  3. Zhao, W.; Wang, L.; Zhang, Z.; Mirjalili, S.; Khodadadi, N.; Ge, Q. Quadratic Interpolation Optimization (QIO): A New Optimization Algorithm Based on Generalized Quadratic Interpolation and Its Applications to Real-World Engineering Problems. Comput. Methods Appl. Mech. Eng. 2023, 417, 116446. [Google Scholar] [CrossRef]
  4. Wang, L.; Cao, Q.; Zhang, Z.; Mirjalili, S.; Zhao, W. Artificial Rabbits Optimization: A New Bio-Inspired Meta-Heuristic Algorithm for Solving Engineering Optimization Problems. Eng. Appl. Artif. Intell. 2022, 114, 105082. [Google Scholar] [CrossRef]
  5. Abdel-Basset, M.; Mohamed, R.; Jameel, M.; Abouhawwash, M. Nutcracker Optimizer: A Novel Nature-Inspired Metaheuristic Algorithm for Global Optimization and Engineering Design Problems. Knowl. Based Syst. 2023, 262, 110248. [Google Scholar] [CrossRef]
  6. Ren, L.; Wang, Z.; Zhao, J.; Wu, J.; Lin, R.; Wu, J.; Fu, Y.; Tang, D. Shale Gas Load Recovery Modeling and Analysis after Hydraulic Fracturing Based on Genetic Expression Programming: A Case Study of Southern Sichuan Basin Shale. J. Nat. Gas. Sci. Eng. 2022, 107, 104778. [Google Scholar] [CrossRef]
  7. Hashim, F.A.; Hussain, K.; Houssein, E.H.; Mabrouk, M.S.; Al-Atabany, W. Archimedes Optimization Algorithm: A New Metaheuristic Algorithm for Solving Optimization Problems. Appl. Intell. 2021, 51, 1531–1551. [Google Scholar] [CrossRef]
  8. Panwar, D.; Saini, G.L.; Agarwal, P. Human Eye Vision Algorithm (HEVA): A Novel Approach for the Optimization of Combinatorial Problems. In Artificial Intelligence in Healthcare; Springer: Singapore, 2022. [Google Scholar]
  9. Qais, M.H.; Hasanien, H.M.; Turky, R.A.; Alghuwainem, S.; Tostado-Véliz, M.; Jurado, F. Circle Search Algorithm: A Geometry-Based Metaheuristic Optimization Algorithm. Mathematics 2022, 10, 1626. [Google Scholar] [CrossRef]
  10. Geem, Z.W.; Kim, J.H.; Loganathan, G.V. A New Heuristic Optimization Algorithm: Harmony Search. Simulation 2001, 76, 60–68. [Google Scholar] [CrossRef]
  11. Rajwar, K.; Deep, K.; Das, S. An Exhaustive Review of the Metaheuristic Algorithms for Search and Optimization: Taxonomy, Applications, and Open Challenges. Artif. Intell. Rev. 2023, 56, 13187–13257. [Google Scholar] [CrossRef]
  12. Zhang, W.; Zhao, J.; Liu, H.; Tu, L. Cleaner Fish Optimization Algorithm: A New Bio-Inspired Meta-Heuristic Optimization Algorithm. J. Supercomput. 2024, 80, 17338–17376. [Google Scholar] [CrossRef]
  13. Taheri, A.; RahimiZadeh, K.; Beheshti, A.; Baumbach, J.; Rao, R.V.; Mirjalili, S.; Gandomi, A.H. Partial Reinforcement Optimizer: An Evolutionary Optimization Algorithm. Expert. Syst. Appl. 2024, 238, 122070. [Google Scholar] [CrossRef]
  14. Demir, U.; Akgun, G.; Kocaoglu, S.; Okay, E.; Heydar, A.; Akdogan, E.; Yildirim, A.; Yazi, S.; Demirci, B.; Kaplanoglu, E. Comparative Design Improvement of the Growing Rod for the Scoliosis Treatment Considering the Mechanical Complications. IEEE Access 2023, 11, 40107–40120. [Google Scholar] [CrossRef]
  15. Huang, K.; Zhen, H.; Gong, W.; Wang, R.; Bian, W. Surrogate-Assisted Evolutionary Sampling Particle Swarm Optimization for High-Dimensional Expensive Optimization. Neural Comput. Appl. 2023, 1–17. [Google Scholar] [CrossRef]
  16. Deng, X.; He, D.; Qu, L. A Novel Hybrid Algorithm Based on Arithmetic Optimization Algorithm and Particle Swarm Optimization for Global Optimization Problems. J. Supercomput. 2024, 80, 8857–8897. [Google Scholar] [CrossRef]
  17. Wolpert, D.H.; Macready, W.G. No Free Lunch Theorems for Optimization. IEEE Trans. Evol. Comput. 1997, 1, 67–82. [Google Scholar] [CrossRef]
  18. Khouni, S.E.; Menacer, T. Nizar Optimization Algorithm: A Novel Metaheuristic Algorithm for Global Optimization and Engineering Applications. J. Supercomput. 2023, 80, 3229–3281. [Google Scholar] [CrossRef]
  19. Rajmohan, S.; Elakkiya, E.; Sreeja, S.R. Multi-Cohort Whale Optimization with Search Space Tightening for Engineering Optimization Problems. Neural Comput. Appl. 2023, 35, 8967–8986. [Google Scholar] [CrossRef]
  20. Duan, Y.; Liu, C.; Li, S.; Guo, X.; Yang, C. Manta Ray Foraging and Gaussian Mutation-Based Elephant Herding Optimization for Global Optimization. Eng. Comput. 2023, 39, 1085–1125. [Google Scholar] [CrossRef]
  21. Arora, S.; Singh, S. Butterfly Optimization Algorithm: A Novel Approach for Global Optimization. Soft Comput. 2019, 23, 715–734. [Google Scholar] [CrossRef]
  22. Hashim, F.A.; Houssein, E.H.; Hussain, K.; Mabrouk, M.S.; Al-Atabany, W. Honey Badger Algorithm: New Metaheuristic Algorithm for Solving Optimization Problems. Math. Comput. Simul. 2022, 192, 84–110. [Google Scholar] [CrossRef]
  23. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris Hawks Optimization: Algorithm and Applications. Future Gener. Comput. Syst. 2019, 97, 849–872. [Google Scholar] [CrossRef]
  24. Hashim, F.A.; Hussien, A.G. Snake Optimizer: A Novel Meta-Heuristic Optimization Algorithm. Knowl. Based Syst. 2022, 242, 108320. [Google Scholar] [CrossRef]
  25. Lu, Y.; Yu, X.; Hu, Z.; Wang, X. Convolutional Neural Network Combined with Reinforcement Learning-Based Dual-Mode Grey Wolf Optimizer to Identify Crop Diseases and Pests. Swarm Evol. Comput. 2025, 94, 101874. [Google Scholar] [CrossRef]
  26. Azizi, M.; Aickelin, U.A.; Khorshidi, H.; Baghalzadeh Shishehgarkhaneh, M. Energy Valley Optimizer: A Novel Metaheuristic Algorithm for Global and Engineering Optimization. Sci. Rep. 2023, 13, 226. [Google Scholar] [CrossRef]
  27. Prakash, T.; Singh, P.P.; Singh, V.P.; Singh, N. A Novel Brown-Bear Optimization Algorithm for Solving Economic Dispatch Problem. In Advanced Control and Optimization Paradigms for Energy System Operation and Management; River Publishers: Aalborg, Denmark, 2022; pp. 137–164. [Google Scholar]
  28. Xia, J.Y.; Li, S.; Huang, J.J.; Yang, Z.; Jaimoukha, I.M.; Gunduz, D. Metalearning-Based Alternating Minimization Algorithm for Nonconvex Optimization. IEEE Trans. Neural Netw. Learn. Syst. 2023, 34, 5366–5380. [Google Scholar] [CrossRef]
  29. Ariyanto, M.; Refat, C.M.M.; Hirao, K.; Morishima, K. Movement Optimization for a Cyborg Cockroach in a Bounded Space Incorporating Machine Learning. Cyborg Bionic Syst. 2023, 4, 0012. [Google Scholar] [CrossRef]
  30. Wang, H.; Yu, X.; Lu, Y. A Reinforcement Learning-Based Ranking Teaching-Learning-Based Optimization Algorithm for Parameters Estimation of Photovoltaic Models. Swarm Evol. Comput. 2025, 93, 101844. [Google Scholar] [CrossRef]
  31. Dehghani, M.; Trojovská, E.; Zuščák, T. A New Human-Inspired Metaheuristic Algorithm for Solving Optimization Problems Based on Mimicking Sewing Training. Sci. Rep. 2022, 12, 17387. [Google Scholar] [CrossRef]
  32. Braik, M.; Hammouri, A.; Atwan, J.; Al-Betar, M.A.; Awadallah, M.A. White Shark Optimizer: A Novel Bio-Inspired Meta-Heuristic Algorithm for Global Optimization Problems. Knowl. Based Syst. 2022, 243, 108457. [Google Scholar] [CrossRef]
  33. Seyyedabbasi, A.; Kiani, F. Sand Cat Swarm Optimization: A Nature-Inspired Algorithm to Solve Global Optimization Problems. Eng. Comput. 2023, 39, 2627–2651. [Google Scholar] [CrossRef]
  34. Yıldız, B.S.; Mehta, P.; Panagant, N.; Mirjalili, S.; Yildiz, A.R. A Novel Chaotic Runge Kutta Optimization Algorithm for Solving Constrained Engineering Problems. J. Comput. Des. Eng. 2022, 9, 2452–2465. [Google Scholar] [CrossRef]
  35. Gezici, H.; Livatyalı, H. Chaotic Harris Hawks Optimization Algorithm. J. Comput. Des. Eng. 2022, 9, 216–245. [Google Scholar] [CrossRef]
  36. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  37. Akl, D.T.; Saafan, M.M.; Haikal, A.Y.; El-Gendy, E.M. IHHO: An Improved Harris Hawks Optimization Algorithm for Solving Engineering Problems. Neural Comput. Appl. 2024, 36, 12185–12298. [Google Scholar] [CrossRef]
  38. Jia, H.; Rao, H.; Wen, C.; Mirjalili, S. Crayfish Optimization Algorithm. Artif. Intell. Rev. 2023, 56, 1919–1979. [Google Scholar] [CrossRef]
  39. Zhang, Y.; Liu, P.; Li, Y.; Zhang, Y.; Liu, P.; Li, Y. Implementation of an Enhanced Crayfish Optimization Algorithm. Biomimetics 2024, 9, 341. [Google Scholar] [CrossRef]
  40. Jia, H.; Zhou, X.; Zhang, J.; Abualigah, L.; Yildiz, A.R.; Hussien, A.G. Modified Crayfish Optimization Algorithm for Solving Multiple Engineering Application Problems. Artif. Intell. Rev. 2024, 57, 127. [Google Scholar] [CrossRef]
  41. Yuan, P.; Li, S.; Liao, Z. ECOA: An Enhanced Crayfish Optimization Algorithm for Real-World Engineering Problems. In Proceedings of the 2024 6th International Conference on Communications, Information System and Computer Engineering (CISCE), Guangzhou, China, 10–12 May 2024; pp. 804–809. [Google Scholar] [CrossRef]
  42. Zhang, J.; Diao, Y. Hierarchical Learning-Enhanced Chaotic Crayfish Optimization Algorithm: Improving Extreme Learning Machine Diagnostics in Breast Cancer. Mathematics 2024, 12, 2641. [Google Scholar] [CrossRef]
  43. Eberhart, R.; Kennedy, J. A New Optimizer Using Particle Swarm Theory. In Proceedings of the Sixth International Symposium on Micro Machine and Human Science, Nagoya, Japan, 4–6 October 1995; pp. 39–43. [Google Scholar] [CrossRef]
  44. Kaelo, P.; Ali, M.M. A Numerical Study of Some Modified Differential Evolution Algorithms. Eur. J. Oper. Res. 2006, 169, 1176–1184. [Google Scholar] [CrossRef]
  45. Mirjalili, S.; Lewis, A. The Whale Optimization Algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  46. Abualigah, L.; Diabat, A.; Mirjalili, S.; Abd Elaziz, M.; Gandomi, A.H. The Arithmetic Optimization Algorithm. Comput. Methods Appl. Mech. Eng. 2021, 376, 113609. [Google Scholar] [CrossRef]
  47. Abdollahzadeh, B.; Gharehchopogh, F.S.; Mirjalili, S. African Vultures Optimization Algorithm: A New Nature-Inspired Metaheuristic Algorithm for Global Optimization Problems. Comput. Ind. Eng. 2021, 158, 107408. [Google Scholar] [CrossRef]
  48. Chopra, N.; Mohsin Ansari, M. Golden Jackal Optimization: A Novel Nature-Inspired Optimizer for Engineering Applications. Expert. Syst. Appl. 2022, 198, 116924. [Google Scholar] [CrossRef]
  49. Zhao, S.; Zhang, T.; Ma, S.; Wang, M. Sea-Horse Optimizer: A Novel Nature-Inspired Meta-Heuristic for Global Optimization Problems. Appl. Intell. 2023, 53, 11833–11860. [Google Scholar] [CrossRef]
  50. Acharya, D.; Das, D.K. A Novel Human Conception Optimizer for Solving Optimization Problems. Sci. Rep. 2022, 12, 21631. [Google Scholar] [CrossRef]
  51. Ahmadianfar, I.; Heidari, A.A.; Gandomi, A.H.; Chu, X.; Chen, H. RUN beyond the Metaphor: An Efficient Optimization Algorithm Based on Runge Kutta Method. Expert. Syst. Appl. 2021, 181, 115079. [Google Scholar] [CrossRef]
  52. Derrac, J.; García, S.; Molina, D.; Herrera, F. A Practical Tutorial on the Use of Nonparametric Statistical Tests as a Methodology for Comparing Evolutionary and Swarm Intelligence Algorithms. Swarm Evol. Comput. 2011, 1, 3–18. [Google Scholar] [CrossRef]
  53. Rao, H.; Jia, H.; Wu, D.; Wen, C.; Li, S.; Liu, Q.; Abualigah, L. A Modified Group Teaching Optimization Algorithm for Solving Constrained Engineering Optimization Problems. Mathematics 2022, 10, 3765. [Google Scholar] [CrossRef]
  54. Rao, R.V.; Savsani, V.J.; Vakharia, D.P. Teaching–Learning-Based Optimization: A Novel Method for Constrained Mechanical Design Optimization Problems. Comput.-Aided Des. 2011, 43, 303–315. [Google Scholar] [CrossRef]
  55. Rajeswara Rao, B.; Tiwari, R. Optimum Design of Rolling Element Bearings Using Genetic Algorithms. Mech. Mach. Theory 2007, 42, 233–250. [Google Scholar] [CrossRef]
  56. Pant, M.; Thangaraj, R.; Singh, V.P. Optimization of Mechanical Design Problems Using Improved Differential Evolution Algorithm. Int. J. Recent. Trends Eng. 2009, 1, 21. [Google Scholar]
  57. Hsu, Y.L.; Liu, T.C. Developing a Fuzzy Proportional–Derivative Controller Optimization Engine for Engineering Design Optimization Problems. Eng. Optim. 2007, 39, 679–700. [Google Scholar] [CrossRef]
  58. Wang, X.; Snášel, V.; Mirjalili, S.; Pan, J.S.; Kong, L.; Shehadeh, H.A. Artificial Protozoa Optimizer (APO): A Novel Bio-Inspired Metaheuristic Algorithm for Engineering Optimization. Knowl. Based Syst. 2024, 295, 111737. [Google Scholar] [CrossRef]
  59. Ghasemi, M.; Golalipour, K.; Zare, M.; Mirjalili, S.; Trojovský, P.; Abualigah, L.; Hemmati, R. Flood Algorithm (FLA): An Efficient Inspired Meta-Heuristic for Engineering Optimization. J. Supercomput. 2024, 80, 22913–23017. [Google Scholar] [CrossRef]
  60. El-kenawy, E.S.M.; Khodadadi, N.; Mirjalili, S.; Abdelhamid, A.A.; Eid, M.M.; Ibrahim, A. Greylag Goose Optimization: Nature-Inspired Optimization Algorithm. Expert. Syst. Appl. 2024, 238, 122147. [Google Scholar] [CrossRef]
  61. Amiri, M.H.; Mehrabi Hashjin, N.; Montazeri, M.; Mirjalili, S.; Khodadadi, N. Hippopotamus Optimization Algorithm: A Novel Nature-Inspired Optimization Algorithm. Sci. Rep. 2024, 14, 5032. [Google Scholar] [CrossRef]
  62. Oladejo, S.O.; Ekwe, S.O.; Mirjalili, S. The Hiking Optimization Algorithm: A Novel Human-Based Metaheuristic Approach. Knowl. Based Syst. 2024, 296, 111880. [Google Scholar] [CrossRef]
  63. Jia, H.; Wen, Q.; Wang, Y.; Mirjalili, S. Catch Fish Optimization Algorithm: A New Human Behavior Algorithm for Solving Clustering Problems. Clust. Comput. 2024, 27, 13295–13332. [Google Scholar] [CrossRef]
  64. Zhao, W.; Wang, L.; Zhang, Z.; Fan, H.; Zhang, J.; Mirjalili, S.; Khodadadi, N.; Cao, Q. Electric Eel Foraging Optimization: A New Bio-Inspired Optimizer for Engineering Applications. Expert. Syst. Appl. 2024, 238, 122200. [Google Scholar] [CrossRef]
  65. Abdollahzadeh, B.; Khodadadi, N.; Barshandeh, S.; Trojovský, P.; Gharehchopogh, F.S.; El-kenawy, E.S.M.; Abualigah, L.; Mirjalili, S. Puma Optimizer (PO): A Novel Metaheuristic Optimization Algorithm and Its Application in Machine Learning. Clust. Comput. 2024, 27, 5235–5283. [Google Scholar] [CrossRef]
Figure 1. Research flow diagram.
Figure 2. Dynamic behavior of V.
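Figure 2 illustrates how the step-length variable V shrinks as iterations progress, shifting the search from exploration to exploitation. As a minimal sketch only (assuming a simple linear decay; the paper's exact formula for V in the redesigned competition stage may differ), the idea can be written as:

```python
def step_length(t, t_max, v0=2.0):
    """Hypothetical adaptive step length: starts at v0 (wide exploration)
    and decays toward 0 as iteration t approaches t_max (fine exploitation).
    This linear schedule is illustrative; the ICOA's actual V is defined
    in the paper and need not be linear."""
    return v0 * (1.0 - t / t_max)
```

With this schedule the step equals v0 at the first iteration and vanishes at the last, reproducing the qualitative behavior shown in the figure.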
Figure 3. Results of some algorithms for CEC-2014 (D = 10).
Figure 4. Results of some algorithms for CEC-2014 (D = 30).
Figure 5. Convergence curves of ICOA and competing algorithms for CEC-2014 (D = 10).
Figure 6. Convergence curves of ICOA and competing algorithms for CEC-2014 (D = 30).
Figure 7. The average ranks of algorithms in different dimensions.
Figure 8. Cantilever beam design problem.
Figure 9. Convergence curves of ICOA and competing algorithms for cantilever beam design.
Figure 10. Population diversity of ICOA for cantilever beam design.
Figure 11. Exploration–exploitation balance of ICOA for cantilever beam design.
Figure 12. Gear train design problem.
Figure 13. Convergence curves of ICOA and competing algorithms for gear train design.
Figure 14. Rolling element bearing design problem.
Figure 15. Convergence curves of ICOA and competing algorithms for rolling element bearing design.
Figure 16. Heat exchanger network design problem.
Figure 17. Convergence curves of ICOA and competing algorithms for heat exchanger network design.
Figure 18. Tubular column design problem.
Figure 19. Convergence curves of ICOA and competing algorithms for tubular column design.
Figure 20. Convergence curves of ICOA and competing algorithms for ROAS problem.
Table 1. Parameter settings of ICOA and competing algorithms.
Algorithms | Parameter Settings
PSO | w = [0.9, 0.2], c1 = 2, c2 = 2
DE | pCR = 0.5, βmin = 0.1, βmax = 0.9
WOA | probability of encircling mechanism = 0.5, spiral factor = 1
AOA | α = 5, μ = 0.5
AVOA | L1 = 0.8, L2 = 0.2, w = 2.5, P1 = 0.6, P2 = 0.4, P3 = 0.6
GJO | C1 = 1.5
SHO | r1 = 0, r2 = 0.1
EVO | —
HCO | Pfit = 0.65, λ = [0, 1], w = 0.1, A1 and A2 = [2, 4]
RUN | a = 20, b = 12
BBOA | —
COA | C1 = 0.2, C3 = 3, μ = 25, σ = 3
ICOA | C1 = 0.2, C3 = 3, μ = 25, σ = 3
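For reproducibility, the settings in Table 1 can be collected in a plain configuration mapping. This is a minimal sketch; the key names are illustrative transcriptions of the table's symbols, not identifiers from any reference implementation:

```python
# Parameter settings transcribed from Table 1.
# COA and ICOA share the same parameterization.
COA_PARAMS = {"C1": 0.2, "C3": 3, "mu": 25, "sigma": 3}
ICOA_PARAMS = dict(COA_PARAMS)  # ICOA reuses COA's values unchanged

# Two competitor configurations from the same table, for comparison runs
PSO_PARAMS = {"w": (0.9, 0.2), "c1": 2, "c2": 2}
DE_PARAMS = {"pCR": 0.5, "beta_min": 0.1, "beta_max": 0.9}
```

Keeping all algorithms' settings in one place makes it straightforward to rerun the comparison under identical conditions.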
Table 2. The results of ICOA and competitor algorithms for CEC-2014 unimodal functions (D = 10).
Func. | Index | PSO | DE | WOA | AOA | SHO | AVOA | BBOA | EVO | GJO | HCO | RUN | COA | ICOA
F1 | Min | 1.34 × 10^1 | 8.78 × 10^0 | 1.34 × 10^1 | 4.93 × 10^7 | 2.20 × 10^6 | 1.66 × 10^3 | 2.23 × 10^5 | 3.45 × 10^6 | 5.77 × 10^6 | 9.47 × 10^7 | 1.33 × 10^5 | 1.46 × 10^3 | 3.59 × 10^0
F1 | Ave | 3.02 × 10^3 | 2.28 × 10^3 | 3.47 × 10^6 | 3.47 × 10^8 | 8.95 × 10^6 | 1.51 × 10^4 | 4.36 × 10^6 | 2.06 × 10^7 | 1.21 × 10^7 | 3.27 × 10^8 | 6.41 × 10^5 | 1.39 × 10^4 | 1.44 × 10^3
F1 | Std | 4.15 × 10^3 | 2.37 × 10^3 | 1.00 × 10^7 | 2.12 × 10^8 | 4.40 × 10^6 | 8.14 × 10^3 | 4.96 × 10^6 | 1.45 × 10^7 | 3.19 × 10^6 | 1.89 × 10^8 | 4.80 × 10^5 | 1.15 × 10^4 | 9.64 × 10^2
F2 | Min | 5.23 × 10^−2 | 5.95 × 10^−4 | 1.07 × 10^1 | 7.25 × 10^9 | 5.72 × 10^1 | 2.24 × 10^1 | 1.67 × 10^8 | 3.41 × 10^7 | 6.51 × 10^8 | 4.74 × 10^9 | 2.61 × 10^5 | 1.60 × 10^−1 | 1.99 × 10^−11
F2 | Ave | 2.54 × 10^2 | 2.07 × 10^0 | 3.35 × 10^3 | 1.04 × 10^10 | 4.71 × 10^8 | 3.27 × 10^3 | 1.63 × 10^9 | 6.61 × 10^8 | 1.67 × 10^9 | 1.08 × 10^10 | 1.80 × 10^7 | 1.67 × 10^3 | 1.91 × 10^−1
F2 | Std | 7.45 × 10^2 | 5.85 × 10^0 | 3.99 × 10^3 | 2.13 × 10^9 | 5.75 × 10^8 | 3.86 × 10^3 | 1.06 × 10^9 | 4.73 × 10^8 | 4.19 × 10^8 | 3.51 × 10^9 | 1.13 × 10^7 | 2.57 × 10^3 | 9.33 × 10^−1
F3 | Min | 1.20 × 10^−7 | 3.66 × 10^−9 | 1.23 × 10^2 | 2.04 × 10^4 | 1.31 × 10^3 | 1.86 × 10^1 | 4.85 × 10^2 | 9.40 × 10^2 | 2.87 × 10^3 | 5.96 × 10^4 | 2.01 × 10^2 | 3.20 × 10^−1 | 2.66 × 10^−13
F3 | Ave | 1.53 × 10^−1 | 1.09 × 10^−3 | 9.06 × 10^3 | 2.34 × 10^6 | 4.04 × 10^3 | 2.77 × 10^2 | 3.65 × 10^3 | 1.60 × 10^4 | 1.02 × 10^4 | 1.56 × 10^6 | 6.69 × 10^2 | 1.52 × 10^1 | 6.65 × 10^−4
F3 | Std | 4.60 × 10^−1 | 2.40 × 10^−3 | 8.86 × 10^3 | 2.93 × 10^6 | 2.23 × 10^3 | 1.74 × 10^2 | 2.35 × 10^3 | 1.79 × 10^4 | 3.51 × 10^3 | 4.77 × 10^6 | 3.28 × 10^2 | 4.04 × 10^1 | 1.97 × 10^−3
Table 3. The results of ICOA and competitor algorithms for CEC-2014 multimodal functions (D = 10).
Func. | Index | PSO | DE | WOA | AOA | SHO | AVOA | BBOA | EVO | GJO | HCO | RUN | COA | ICOA
F4Min4.76 × 10−21.56 × 10−21.44 × 10−25.82 × 1021.89 × 1001.22 × 10−24.98 × 1012.94 × 1014.29 × 1012.73 × 1021.11 × 1003.33 × 10−21.10 × 10−2
Ave2.47 × 1012.93 × 1012.87 × 1012.18 × 1035.74 × 1011.90 × 1011.86 × 1021.37 × 1026.31 × 1011.24 × 1032.26 × 1011.93 × 1011.83 × 101
Std1.71 × 1011.22 × 1013.29 × 1011.16 × 1033.30 × 1011.72 × 1011.24 × 1027.67 × 1011.83 × 1015.92 × 1021.52 × 1011.69 × 1011.68 × 101
F5Min2.00 × 1012.01 × 1012.00 × 1012.05 × 1012.00 × 1012.00 × 1011.60 × 1012.00 × 1012.02 × 1012.01 × 1018.11 × 1001.99 × 1012.00 × 101
Ave2.01 × 1012.02 × 1012.01 × 1012.10 × 1012.00 × 1012.00 × 1011.99 × 1012.03 × 1012.03 × 1012.03 × 1011.99 × 1012.00 × 1012.00 × 101
Std9.70 × 10−25.48 × 10−21.26 × 10−11.68 × 10−14.56 × 10−23.23 × 10−27.45 × 10−13.32 × 10−17.02 × 10−21.15 × 10−12.22 × 1001.37 × 10−25.31 × 10−3
F6Min8.86 × 10−13.26 × 10−16.64 × 1001.00 × 1013.03 × 1009.03 × 10−12.96 × 1001.11 × 1005.50 × 1004.63 × 1001.86 × 1002.49 × 10−38.44 × 10−4
Ave4.11 × 1002.55 × 1009.04 × 1001.22 × 1015.10 × 1003.47 × 1005.40 × 1003.85 × 1006.62 × 1008.13 × 1003.55 × 1003.63 × 1002.27 × 100
Std1.40 × 1001.50 × 1001.23 × 1009.29 × 10−11.23 × 1001.53 × 1001.14 × 1001.62 × 1008.47 × 10−11.85 × 1008.85 × 10−12.06 × 1001.50 × 100
F7Min9.11 × 10−23.94 × 10−21.19 × 1007.65 × 1011.21 × 1002.46 × 10−23.07 × 1001.47 × 1007.17 × 1006.30 × 1008.28 × 10−12.84 × 10−29.86 × 10−3
Ave2.52 × 10−11.50 × 10−19.50 × 1001.89 × 1021.20 × 1016.88 × 10−23.74 × 1011.25 × 1011.34 × 1013.56 × 1011.38 × 1007.44 × 10−25.16 × 10−2
Std1.81 × 10−17.52 × 10−26.08 × 1004.88 × 1011.34 × 1012.78 × 10−22.78 × 1018.45 × 1004.39 × 1001.83 × 1012.65 × 10−13.59 × 10−22.32 × 10−2
F8Min5.97 × 1000.00 × 1009.95 × 1008.54 × 1011.59 × 1010.00 × 1001.50 × 1014.38 × 1003.35 × 1011.81 × 1012.03 × 1001.29 × 10−90.00 × 100
Ave1.85 × 1012.17 × 1004.07 × 1011.25 × 1022.56 × 1011.61 × 1002.72 × 1011.15 × 1014.06 × 1013.99 × 1011.21 × 1015.35 × 1001.16 × 100
Std8.27 × 1001.28 × 1001.77 × 1011.57 × 1017.59 × 1009.25 × 10−17.37 × 1004.00 × 1004.47 × 1001.30 × 1014.34 × 1007.85 × 1008.28 × 10−1
F9Min1.09 × 1012.08 × 1001.59 × 1016.99 × 1012.31 × 1019.95 × 1001.91 × 1018.78 × 1003.69 × 1011.81 × 1018.25 × 1006.98 × 1001.99 × 100
Ave2.56 × 1011.39 × 1015.02 × 1019.37 × 1013.41 × 1012.36 × 1012.91 × 1011.93 × 1014.93 × 1013.52 × 1011.92 × 1013.16 × 1011.19 × 101
Std8.74 × 1007.06 × 1001.73 × 1018.99 × 1007.83 × 1007.86 × 1005.46 × 1006.33 × 1003.81 × 1001.46 × 1016.14 × 1001.64 × 1015.43 × 100
F10Min7.20 × 1002.72 × 1013.01 × 1021.09 × 1031.32 × 1021.03 × 1014.12 × 1021.18 × 1013.80 × 1021.16 × 1037.23 × 1002.50 × 10−11.30 × 102
Ave1.39 × 1025.80 × 1016.95 × 1021.65 × 1034.37 × 1027.82 × 1017.36 × 1022.23 × 1027.26 × 1021.64 × 1032.34 × 1024.22 × 1029.67 × 102
Std9.15 × 1012.57 × 1012.50 × 1022.27 × 1021.43 × 1022.53 × 1012.02 × 1021.68 × 1021.46 × 1022.40 × 1021.62 × 1023.24 × 1023.51 × 102
F11Min5.69 × 1015.49 × 1012.45 × 1021.78 × 1032.70 × 1021.43 × 1021.15 × 1022.38 × 1027.16 × 1029.45 × 1023.34 × 1021.29 × 1025.22 × 101
Ave4.53 × 1023.75 × 1021.13 × 1032.28 × 1036.61 × 1026.40 × 1027.64 × 1028.08 × 1021.13 × 1031.50 × 1035.99 × 1025.19 × 1023.61 × 102
Std2.16 × 1021.48 × 1023.48 × 1022.61 × 1022.04 × 1022.40 × 1022.72 × 1022.34 × 1021.59 × 1022.80 × 1021.64 × 1022.20 × 1021.85 × 102
F12Min2.29 × 10−24.32 × 10−23.91 × 10−11.88 × 1005.16 × 10−22.26 × 10−41.15 × 10−17.44 × 10−24.15 × 10−16.89 × 10−13.42 × 10−14.92 × 10−24.01 × 10−2
Ave1.43 × 10−12.17 × 10−18.38 × 10−14.04 × 1002.36 × 10−16.93 × 10−24.06 × 10−13.34 × 10−17.39 × 10−11.45 × 1008.19 × 10−13.05 × 10−11.55 × 10−1
Std9.34 × 10−21.33 × 10−12.47 × 10−11.16 × 1001.35 × 10−15.31 × 10−21.57 × 10−11.92 × 10−11.95 × 10−14.31 × 10−12.00 × 10−11.83 × 10−19.60 × 10−2
F13Min1.72 × 10−14.15 × 10−22.73 × 10−12.07 × 1003.77 × 10−13.26 × 10−23.26 × 10−12.30 × 10−13.96 × 10−11.64 × 10−11.31 × 10−19.67 × 10−22.49 × 10−2
Ave4.61 × 10−11.55 × 10−16.59 × 10−14.83 × 1007.33 × 10−11.05 × 10−11.41 × 1004.39 × 10−17.01 × 10−11.19 × 1002.64 × 10−11.62 × 10−18.43 × 10−2
Std2.09 × 10−16.28 × 10−22.37 × 10−11.07 × 1003.98 × 10−14.01 × 10−29.51 × 10−11.34 × 10−11.10 × 10−18.64 × 10−17.27 × 10−25.97 × 10−24.10 × 10−2
F14Min4.02 × 10−23.12 × 10−21.04 × 10−12.47 × 1012.26 × 10−13.90 × 10−21.04 × 1002.38 × 10−19.88 × 10−13.04 × 10−18.05 × 10−26.85 × 10−25.30 × 10−2
Ave2.20 × 10−11.08 × 10−14.58 × 10−13.98 × 1011.54 × 1001.17 × 10−11.06 × 1012.55 × 1003.14 × 1007.90 × 1001.76 × 10−12.01 × 10−11.56 × 10−1
Std1.59 × 10−14.43 × 10−23.64 × 10−17.46 × 1002.41 × 1004.03 × 10−27.33 × 1002.18 × 1009.37 × 10−14.33 × 1004.36 × 10−28.29 × 10−26.57 × 10−2
F15Min7.14 × 10−15.68 × 10−13.24 × 1001.14 × 1031.78 × 1005.19 × 10−12.77 × 1002.68 × 1006.49 × 1001.13 × 1041.84 × 1004.30 × 10−13.44 × 10−1
Ave2.33 × 1001.45 × 1005.89 × 1011.55 × 1044.49 × 1007.64 × 10−14.74 × 1019.06 × 1011.22 × 1011.35 × 1052.98 × 1001.24 × 1006.17 × 10−1
Std1.33 × 1005.40 × 10−17.12 × 1011.65 × 1043.33 × 1001.51 × 10−11.37 × 1022.77 × 1027.99 × 1001.29 × 1055.83 × 10−14.35 × 10−11.89 × 10−1
F16Min1.35 × 1001.52 × 10−12.63 × 1003.89 × 1002.20 × 1001.80 × 1002.26 × 1002.57 × 1003.44 × 1002.38 × 1001.77 × 1001.24 × 1001.24 × 100
Ave2.41 × 1001.66 × 1003.45 × 1004.31 × 1002.93 × 1002.47 × 1002.88 × 1003.38 × 1003.65 × 1003.39 × 1002.47 × 1002.20 × 1002.32 × 100
Std4.00 × 10−18.31 × 10−13.41 × 10−11.81 × 10−13.18 × 10−13.01 × 10−12.85 × 10−14.04 × 10−11.02 × 10−14.52 × 10−13.52 × 10−15.01 × 10−13.92 × 10−1
Table 4. The results of ICOA and competitor algorithms for CEC-2014 hybrid functions (D = 10).
Func. | Index | PSO | DE | WOA | AOA | SHO | AVOA | BBOA | EVO | GJO | HCO | RUN | COA | ICOA
F17Min7.58 × 1002.08 × 1016.77 × 1025.83 × 1054.62 × 1038.60 × 1026.22 × 1026.34 × 1038.93 × 1036.20 × 1051.60 × 1032.29 × 1027.40 × 100
Ave1.15 × 1032.73 × 1021.95 × 1037.14 × 1069.24 × 1032.62 × 1032.74 × 1031.39 × 1054.33 × 1041.05 × 1078.92 × 1034.89 × 1032.51 × 102
Std1.68 × 1032.07 × 1021.21 × 1031.03 × 1074.33 × 1031.42 × 1032.56 × 1034.26 × 1053.09 × 1048.43 × 1064.51 × 1035.48 × 1032.20 × 102
F18Min4.95 × 1002.95 × 1001.32 × 1021.33 × 1061.03 × 1041.35 × 1011.13 × 1036.16 × 1031.78 × 1043.90 × 1061.32 × 1038.46 × 1002.91 × 100
Ave4.62 × 1012.39 × 1015.22 × 1031.48 × 1081.62 × 1041.06 × 1039.66 × 1031.99 × 1061.94 × 1041.20 × 1084.11 × 1036.23 × 1011.99 × 101
Std1.45 × 1021.86 × 1014.31 × 1031.61 × 1082.97 × 1031.18 × 1034.91 × 1036.97 × 1064.36 × 1021.08 × 1082.38 × 1037.53 × 1012.19 × 101
F19Min1.66 × 1003.18 × 1006.80 × 1001.22 × 1044.32 × 1002.05 × 1002.02 × 1005.92 × 1007.99 × 1001.44 × 1054.23 × 1001.58 × 1001.16 × 100
Ave1.08 × 1015.17 × 1003.07 × 1017.63 × 1068.31 × 1005.72 × 1004.51 × 1003.80 × 1042.98 × 1013.29 × 1069.35 × 1007.12 × 1003.55 × 100
Std7.61 × 1001.10 × 1003.93 × 1011.41 × 1071.66 × 1002.39 × 1002.42 × 1001.77 × 1059.25 × 1013.33 × 1062.61 × 1004.86 × 1002.20 × 100
F20Min4.20 × 1001.96 × 1002.18 × 1031.26 × 1095.94 × 1036.04 × 1018.07 × 1021.81 × 1031.24 × 1042.30 × 1097.94 × 1031.30 × 1021.13 × 100
Ave1.82 × 1032.63 × 1011.18 × 1042.34 × 10131.62 × 1045.80 × 1032.30 × 1044.38 × 1014.50 × 1044.27 × 10122.35 × 1046.95 × 1032.61 × 101
Std3.21 × 1032.03 × 1011.10 × 1046.72 × 10136.85 × 1035.21 × 1032.08 × 1042.27 × 10111.30 × 1059.67 × 10121.18 × 1049.02 × 1033.48 × 101
F21Min1.36 × 1004.98 × 10−13.68 × 1007.10 × 1052.87 × 1034.12 × 1002.61 × 1029.54 × 1021.52 × 1041.72 × 1069.62 × 1024.01 × 1004.97 × 10−1
Ave4.32 × 1014.09 × 1005.21 × 1025.18 × 1075.86 × 1032.18 × 1022.44 × 1033.11 × 1051.43 × 1053.96 × 1072.50 × 1035.56 × 1013.91 × 100
Std4.41 × 1013.79 × 1008.02 × 1025.72 × 1071.18 × 1031.94 × 1022.54 × 1035.71 × 1051.39 × 1053.23 × 1071.26 × 1035.47 × 1018.48 × 100
F22Min2.06 × 1018.56 × 10−22.07 × 1011.19 × 1032.13 × 1012.03 × 1012.10 × 1016.34 × 1017.39 × 1011.01 × 1032.83 × 1012.03 × 1018.54 × 10−2
Ave1.83 × 1025.29 × 1021.20 × 1023.61 × 10112.06 × 1027.88 × 1011.19 × 1023.35 × 1073.20 × 1028.66 × 1014.31 × 1011.35 × 1029.83 × 101
Std9.83 × 1018.17 × 1021.14 × 1021.34 × 10122.56 × 1029.76 × 1011.34 × 1021.79 × 1082.95 × 1022.29 × 10114.05 × 1011.69 × 1021.07 × 102
Table 5. The results of ICOA and competitor algorithms for CEC-2014 composition functions (D = 10).
Func. | Index | PSO | DE | WOA | AOA | SHO | AVOA | BBOA | EVO | GJO | HCO | RUN | COA | ICOA
F23Min2.37 × 1022.37 × 1022.00 × 1022.00 × 1022.00 × 1022.00 × 1022.00 × 1022.43 × 1022.00 × 1022.52 × 1022.00 × 1022.00 × 1022.00 × 102
Ave2.42 × 1022.37 × 1022.38 × 1022.05 × 1022.00 × 1022.00 × 1022.00 × 1022.52 × 1022.00 × 1022.73 × 1022.04 × 1022.00 × 1022.00 × 102
Std1.65 × 1001.65 × 1001.29 × 1011.12 × 1000.00 × 1000.00 × 1000.00 × 1005.66 × 1000.00 × 1001.90 × 1011.13 × 1010.00 × 1000.00 × 100
F24Min1.16 × 1021.11 × 1021.23 × 1021.88 × 1021.16 × 1021.15 × 1021.29 × 1021.21 × 1021.50 × 1021.23 × 1021.15 × 1021.16 × 1021.10 × 102
Ave1.35 × 1021.25 × 1021.68 × 1022.00 × 1021.45 × 1021.32 × 1021.63 × 1021.57 × 1021.56 × 1021.63 × 1021.25 × 1021.98 × 1021.21 × 102
Std1.18 × 1015.39 × 1003.01 × 1012.73 × 1002.67 × 1011.60 × 1012.53 × 1012.33 × 1013.99 × 1002.47 × 1016.09 × 1001.54 × 1016.75 × 100
F25Min1.55 × 1022.00 × 1021.66 × 1022.00 × 1021.60 × 1022.00 × 1021.82 × 1022.02 × 1022.00 × 1022.01 × 1021.55 × 1022.00 × 1022.00 × 102
Ave1.95 × 1022.00 × 1021.99 × 1022.00 × 1021.97 × 1022.00 × 1021.99 × 1022.04 × 1022.00 × 1022.06 × 1021.95 × 1022.00 × 1022.00 × 102
Std1.42 × 1011.45 × 10−137.34 × 1003.58 × 10−29.47 × 1000.00 × 1004.35 × 1001.65 × 1000.00 × 1001.71 × 1001.49 × 1010.00 × 1000.00 × 100
F26Min1.00 × 1021.00 × 1021.00 × 1021.03 × 1021.00 × 1021.00 × 1021.00 × 1021.00 × 1021.00 × 1021.01 × 1021.00 × 1021.00 × 1021.00 × 102
Ave1.00 × 1021.00 × 1021.01 × 1021.06 × 1021.04 × 1021.00 × 1021.01 × 1021.01 × 1021.01 × 1021.03 × 1021.00 × 1021.00 × 1021.00 × 102
Std2.14 × 10−15.78 × 10−21.99 × 10−14.16 × 1001.81 × 1012.44 × 10−27.24 × 10−15.85 × 10−13.59 × 10−11.32 × 1005.89 × 10−21.53 × 10−12.57 × 10−2
F27Min5.00 × 1005.00 × 1008.29 × 1002.04 × 1021.24 × 1015.00 × 1001.15 × 1011.22 × 1022.10 × 1021.58 × 1027.29 × 1005.00 × 1005.00 × 100
Ave3.17 × 1023.37 × 1023.90 × 1022.04 × 1023.30 × 1021.48 × 1023.58 × 1024.14 × 1023.96 × 1024.25 × 1023.58 × 1021.83 × 1026.43 × 101
Std1.59 × 1021.36 × 1027.22 × 1012.89 × 10−141.53 × 1028.77 × 1011.54 × 1028.54 × 1011.06 × 1027.71 × 1011.21 × 1025.35 × 1019.04 × 101
F28Min3.78 × 1023.69 × 1022.00 × 1022.24 × 1024.02 × 1022.00 × 1023.07 × 1023.92 × 1024.17 × 1027.47 × 1023.73 × 1022.00 × 1022.00 × 102
Ave5.09 × 1023.83 × 1027.27 × 1022.24 × 1025.16 × 1022.00 × 1023.36 × 1026.00 × 1025.03 × 1021.10 × 1034.67 × 1022.00 × 1022.00 × 102
Std9.32 × 1012.68 × 1011.79 × 1025.78 × 10−147.79 × 1010.00 × 1003.06 × 1011.71 × 1027.55 × 1012.18 × 1028.15 × 1010.00 × 1000.00 × 100
F29Min4.36 × 1064.36 × 1064.36 × 1063.23 × 1065.51 × 1062.02 × 1022.07 × 1025.54 × 1065.61 × 1067.79 × 1074.58 × 1062.02 × 1022.02 × 102
Ave4.92 × 1064.60 × 1061.21 × 1071.16 × 1078.80 × 1062.02 × 1022.83 × 1021.89 × 1076.88 × 1061.83 × 1085.83 × 1062.02 × 1022.02 × 102
Std4.60 × 1053.93 × 1057.93 × 1061.92 × 1063.51 × 1060.00 × 1008.13 × 1019.27 × 1061.16 × 1067.49 × 1078.20 × 1050.00 × 1000.00 × 100
F30Min2.29 × 1031.71 × 1033.07 × 1031.20 × 1062.03 × 1022.03 × 1022.23 × 1026.28 × 1052.03 × 1024.09 × 1074.67 × 1032.03 × 1022.03 × 102
Ave2.01 × 1041.31 × 1046.04 × 1061.71 × 1077.43 × 1051.15 × 1049.28 × 1027.88 × 1077.41 × 1052.93 × 1094.79 × 1042.03 × 1022.03 × 102
Std2.93 × 1042.00 × 1041.76 × 1076.27 × 1061.77 × 1062.72 × 1041.09 × 1039.21 × 1077.05 × 1056.41 × 1095.41 × 1042.89 × 10−142.89 × 10−14
F. ave-rank | 5.07 | 3.93 | 8.57 | 11.87 | 7.68 | 3.50 | 7.48 | 9.43 | 9.37 | 11.83 | 5.67 | 4.48 | 2.12
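The "F. ave-rank" row reports each algorithm's Friedman-style average rank over the benchmark functions: algorithms are ranked per function (1 = best) and the ranks are averaged. A minimal sketch of that computation on a small hypothetical error matrix (using ordinal ranks; tied values would need average ranks, e.g. via scipy.stats.rankdata):

```python
import numpy as np

# Hypothetical mean errors: rows = benchmark functions, columns = algorithms.
# Lower error is better, so rank 1 goes to the smallest value in each row.
errors = np.array([
    [3.0, 1.0, 2.0],
    [2.5, 0.5, 1.5],
    [1.0, 2.0, 3.0],
    [4.0, 1.0, 2.0],
])

# Ordinal per-row ranks (1 = best); double argsort turns values into ranks.
ranks = errors.argsort(axis=1).argsort(axis=1) + 1
avg_rank = ranks.mean(axis=0)  # one average rank per algorithm
```

The algorithm with the smallest average rank (here the second column) is the overall winner, which is how the ICOA's rank of 2.12 in the table should be read.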
Table 6. The results of ICOA and competitor algorithms for CEC-2014 unimodal functions (D = 30).
Func. | Index | PSO | DE | WOA | AOA | SHO | AVOA | BBOA | EVO | GJO | HCO | RUN | COA | ICOA
F1 | Min | 1.63 × 10^5 | 1.39 × 10^5 | 4.20 × 10^5 | 1.09 × 10^9 | 8.16 × 10^7 | 1.23 × 10^5 | 2.15 × 10^8 | 1.23 × 10^8 | 2.66 × 10^8 | 1.50 × 10^9 | 3.09 × 10^7 | 2.30 × 10^5 | 1.56 × 10^5
F1 | Ave | 1.29 × 10^6 | 6.62 × 10^5 | 3.08 × 10^6 | 1.77 × 10^9 | 2.90 × 10^8 | 7.25 × 10^5 | 6.27 × 10^8 | 4.45 × 10^8 | 4.26 × 10^8 | 3.31 × 10^9 | 6.17 × 10^7 | 9.73 × 10^5 | 5.26 × 10^5
F1 | Std | 1.18 × 10^6 | 4.54 × 10^5 | 2.76 × 10^6 | 2.36 × 10^8 | 1.13 × 10^8 | 3.76 × 10^5 | 1.95 × 10^8 | 1.74 × 10^8 | 8.46 × 10^7 | 9.37 × 10^8 | 2.25 × 10^7 | 6.49 × 10^5 | 3.23 × 10^5
F2 | Min | 1.27 × 10^0 | 1.11 × 10^0 | 3.08 × 10^0 | 6.48 × 10^10 | 1.09 × 10^10 | 8.73 × 10^1 | 4.24 × 10^10 | 1.29 × 10^10 | 2.25 × 10^10 | 1.04 × 10^11 | 1.66 × 10^9 | 1.13 × 10^2 | 9.70 × 10^−1
F2 | Ave | 2.73 × 10^5 | 1.68 × 10^4 | 1.13 × 10^4 | 7.96 × 10^10 | 3.16 × 10^10 | 1.62 × 10^4 | 5.77 × 10^10 | 2.17 × 10^10 | 3.05 × 10^10 | 1.32 × 10^11 | 2.79 × 10^9 | 1.62 × 10^4 | 9.55 × 10^3
F2 | Std | 7.92 × 10^5 | 1.44 × 10^4 | 1.41 × 10^4 | 5.74 × 10^9 | 8.77 × 10^9 | 1.18 × 10^4 | 8.36 × 10^9 | 5.54 × 10^9 | 4.01 × 10^9 | 1.40 × 10^10 | 6.33 × 10^8 | 1.25 × 10^4 | 1.31 × 10^4
F3 | Min | 4.51 × 10^−2 | 2.54 × 10^0 | 6.99 × 10^0 | 1.02 × 10^5 | 3.13 × 10^4 | 1.64 × 10^1 | 3.66 × 10^4 | 3.90 × 10^4 | 6.24 × 10^4 | 2.82 × 10^5 | 7.25 × 10^3 | 4.67 × 10^1 | 3.63 × 10^−2
F3 | Ave | 5.15 × 10^0 | 9.29 × 10^0 | 1.64 × 10^3 | 5.05 × 10^6 | 4.88 × 10^4 | 7.16 × 10^2 | 5.90 × 10^4 | 6.99 × 10^4 | 7.10 × 10^4 | 2.69 × 10^6 | 1.18 × 10^4 | 3.20 × 10^2 | 1.81 × 10^0
F3 | Std | 1.13 × 10^1 | 6.01 × 10^0 | 2.68 × 10^3 | 7.66 × 10^6 | 8.61 × 10^3 | 5.26 × 10^2 | 8.99 × 10^3 | 2.05 × 10^4 | 5.18 × 10^3 | 3.39 × 10^6 | 1.94 × 10^3 | 3.09 × 10^2 | 4.25 × 10^0
Table 7. The results of ICOA and competitor algorithms for CEC-2014 multimodal functions (D = 30).
Func. | Index | PSO | DE | WOA | AOA | SHO | AVOA | BBOA | EVO | GJO | HCO | RUN | COA | ICOA
F4Min6.53 × 1015.47 × 1008.06 × 1018.55 × 1036.56 × 1021.67 × 10−14.04 × 1037.41 × 1029.21 × 1025.89 × 1032.02 × 1028.27 × 1001.55 × 10−3
Ave1.07 × 1024.23 × 1061.39 × 1021.49 × 1042.36 × 1037.62 × 1018.21 × 1032.56 × 1031.28 × 1031.10 × 1043.04 × 1028.60 × 1015.88 × 101
Std2.69 × 1011.61 × 1072.73 × 1012.84 × 1031.20 × 1032.41 × 1012.17 × 1031.12 × 1032.36 × 1022.54 × 1036.57 × 1012.91 × 1012.91 × 101
F5Min2.01 × 1012.03 × 1012.00 × 1012.11 × 1012.01 × 1012.00 × 1012.04 × 1012.00 × 1012.08 × 1012.07 × 1012.08 × 1012.00 × 1012.00 × 101
Ave2.08 × 1012.07 × 1012.02 × 1012.13 × 1012.04 × 1012.01 × 1012.06 × 1012.04 × 1012.09 × 1012.10 × 1012.09 × 1012.01 × 1012.00 × 101
Std2.11 × 10−11.93 × 10−12.70 × 10−17.94 × 10−21.32 × 10−11.24 × 10−11.01 × 10−15.77 × 10−16.65 × 10−29.73 × 10−26.75 × 10−21.03 × 10−12.65 × 10−2
F6Min1.82 × 1011.16 × 1013.45 × 1014.16 × 1012.48 × 1011.26 × 1012.98 × 1011.94 × 1013.02 × 1013.41 × 1012.11 × 1011.64 × 1011.12 × 101
Ave2.59 × 1012.23 × 1014.03 × 1014.51 × 1013.07 × 1012.01 × 1013.40 × 1012.66 × 1013.15 × 1013.71 × 1012.51 × 1012.23 × 1011.62 × 101
Std3.28 × 1009.64 × 1002.08 × 1001.61 × 1002.19 × 1003.81 × 1001.99 × 1003.74 × 1007.77 × 10−12.12 × 1002.36 × 1002.97 × 1002.75 × 100
F7Min3.35 × 10−36.01 × 10−81.62 × 10−86.09 × 1021.45 × 1027.05 × 10−133.33 × 1026.53 × 1011.70 × 1022.49 × 1021.76 × 1011.56 × 10−37.77 × 10−16
Ave5.66 × 10−11.35 × 10−22.90 × 10−17.56 × 1022.83 × 1022.25 × 10−25.64 × 1022.00 × 1022.32 × 1023.44 × 1022.59 × 1017.77 × 10−21.25 × 10−2
Std6.48 × 10−11.58 × 10−28.47 × 10−16.26 × 1018.44 × 1013.11 × 10−29.04 × 1015.97 × 1013.14 × 1015.55 × 1014.44 × 1008.90 × 10−21.84 × 10−2
F8Min5.37 × 1011.03 × 1018.76 × 1014.16 × 1021.41 × 1021.09 × 1011.91 × 1025.36 × 1012.30 × 1021.88 × 1029.15 × 1015.09 × 1019.95 × 10−1
Ave1.09 × 1022.50 × 1012.02 × 1024.61 × 1022.03 × 1023.22 × 1012.41 × 1029.00 × 1012.69 × 1022.44 × 1021.61 × 1021.13 × 1022.47 × 101
Std2.75 × 1011.12 × 1015.17 × 1012.08 × 1012.73 × 1011.24 × 1011.97 × 1012.14 × 1011.49 × 1012.71 × 1012.47 × 1013.56 × 1011.48 × 101
F9Min8.66 × 1014.85 × 1011.44 × 1023.74 × 1021.84 × 1029.75 × 1012.01 × 1021.10 × 1022.54 × 1021.94 × 1021.72 × 1021.02 × 1024.58 × 101
Ave1.63 × 1021.55 × 1022.75 × 1024.11 × 1022.29 × 1021.61 × 1022.59 × 1021.57 × 1022.81 × 1022.64 × 1022.11 × 1021.79 × 1021.21 × 102
Std4.48 × 1011.70 × 1027.15 × 1011.78 × 1012.48 × 1013.31 × 1013.02 × 1013.16 × 1011.36 × 1012.45 × 1012.02 × 1012.00 × 1012.75 × 101
F10Min7.14 × 1021.11 × 1022.93 × 1037.90 × 1033.47 × 1033.16 × 1014.22 × 1031.04 × 1035.53 × 1036.52 × 1032.22 × 1031.25 × 1021.14 × 101
Ave1.63 × 1035.68 × 1024.35 × 1038.98 × 1034.34 × 1035.21 × 1025.42 × 1032.01 × 1035.90 × 1037.64 × 1033.96 × 1031.15 × 1033.01 × 102
Std5.22 × 1022.79 × 1027.15 × 1024.86 × 1025.08 × 1023.03 × 1025.53 × 1024.38 × 1021.96 × 1024.46 × 1028.39 × 1028.11 × 1021.89 × 102
F11Min2.06 × 1032.11 × 1033.09 × 1037.85 × 1032.95 × 1032.55 × 1034.74 × 1032.82 × 1035.56 × 1037.25 × 1034.59 × 1032.90 × 1031.87 × 103
Ave3.14 × 1034.58 × 1035.64 × 1039.11 × 1034.88 × 1033.55 × 1035.87 × 1034.45 × 1036.19 × 1037.91 × 1036.06 × 1034.44 × 1033.01 × 103
Std6.13 × 1021.80 × 1031.23 × 1034.41 × 1027.42 × 1026.08 × 1025.24 × 1029.78 × 1024.39 × 1023.63 × 1026.12 × 1025.71 × 1025.07 × 102
F12Min1.54 × 10−11.77 × 10−11.48 × 1003.20 × 1006.18 × 10−13.07 × 10−22.01 × 10−13.70 × 10−11.16 × 1001.38 × 1001.13 × 1003.31 × 10−11.34 × 10−1
Ave5.12 × 10−11.43 × 1002.37 × 1004.82 × 1001.02 × 1001.02 × 10−11.26 × 1001.61 × 1001.71 × 1003.01 × 1002.26 × 1001.03 × 1005.21 × 10−1
Std2.44 × 10−18.31 × 10−14.71 × 10−18.11 × 10−12.56 × 10−17.07 × 10−24.55 × 10−11.35 × 1002.00 × 10−16.91 × 10−13.45 × 10−13.73 × 10−12.26 × 10−1
F13Min4.55 × 10−13.53 × 10−14.06 × 10−16.88 × 1003.19 × 1002.38 × 10−15.16 × 1006.44 × 10−13.13 × 1004.06 × 1005.39 × 10−13.28 × 10−13.02 × 10−1
Ave6.99 × 10−15.33 × 10−16.43 × 10−17.69 × 1003.97 × 1003.66 × 10−16.19 × 1003.20 × 1003.64 × 1005.03 × 1007.53 × 10−15.55 × 10−15.49 × 10−1
Std1.54 × 10−18.03 × 10−21.09 × 10−14.20 × 10−15.41 × 10−17.07 × 10−26.00 × 10−11.03 × 1002.61 × 10−14.99 × 10−11.07 × 10−11.62 × 10−11.05 × 10−1
F14Min1.72 × 10−12.07 × 10−12.15 × 10−11.54 × 1024.23 × 1011.73 × 10−11.29 × 1022.87 × 1015.78 × 1014.39 × 1014.54 × 10−12.42 × 10−11.46 × 10−1
Ave5.17 × 10−13.71 × 10−14.26 × 10−12.21 × 1027.68 × 1014.86 × 10−11.68 × 1025.05 × 1016.88 × 1017.98 × 1013.49 × 1005.01 × 10−13.37 × 10−1
Std3.05 × 10−11.39 × 10−12.87 × 10−12.79 × 1011.77 × 1011.72 × 10−12.04 × 1011.51 × 1017.25 × 1001.71 × 1012.52 × 1002.98 × 10−11.36 × 10−1
F15Min1.24 × 1017.40 × 1006.71 × 1022.63 × 1056.86 × 1022.38 × 1001.57 × 1042.28 × 1033.68 × 1032.30 × 1062.75 × 1017.11 × 1005.81 × 100
Ave3.43 × 1011.56 × 1011.94 × 1035.29 × 1051.42 × 1044.70 × 1008.44 × 1041.84 × 1041.27 × 1041.85 × 1074.03 × 1011.65 × 1011.08 × 101
Std1.61 × 1012.15 × 1001.00 × 1032.01 × 1051.53 × 1041.31 × 1005.25 × 1041.57 × 1046.03 × 1031.00 × 1079.14 × 1007.45 × 1003.48 × 100
F16Min1.02 × 1019.37 × 1001.14 × 1011.34 × 1011.17 × 1019.96 × 1001.18 × 1011.22 × 1011.23 × 1011.27 × 1011.18 × 1019.86 × 1009.31 × 100
Ave1.12 × 1011.19 × 1011.27 × 1011.41 × 1011.23 × 1011.15 × 1011.23 × 1011.31 × 1011.28 × 1011.33 × 1011.21 × 1011.09 × 1011.07 × 101
Std6.08 × 10−11.05 × 1005.41 × 10−12.61 × 10−13.26 × 10−14.45 × 10−13.02 × 10−15.42 × 10−12.32 × 10−13.69 × 10−12.24 × 10−15.00 × 10−16.65 × 10−1
Table 8. The results of ICOA and competitor algorithms for CEC-2014 hybrid functions (D = 30).
Func. | Index | PSO | DE | WOA | AOA | SHO | AVOA | BBOA | EVO | GJO | HCO | RUN | COA | ICOA
F17Min1.83 × 1041.59 × 1043.11 × 1041.45 × 1081.10 × 1062.77 × 1042.65 × 1068.80 × 1063.49 × 1068.15 × 1073.36 × 1053.34 × 1041.45 × 104
Ave1.44 × 1058.23 × 1043.49 × 1053.56 × 1081.14 × 1079.28 × 1041.46 × 1075.37 × 1071.32 × 1074.48 × 1081.57 × 1061.26 × 1054.05 × 104
Std2.36 × 1056.24 × 1043.50 × 1051.38 × 1081.34 × 1073.45 × 1041.13 × 1075.54 × 1078.33 × 1062.10 × 1086.22 × 1051.44 × 1051.73 × 104
F18Min6.76 × 1043.71 × 1044.29 × 1044.81 × 1095.33 × 1046.81 × 1042.14 × 1072.52 × 1057.79 × 1064.32 × 1091.01 × 1076.64 × 1043.65 × 104
Ave1.04 × 1058.18 × 1041.06 × 1059.45 × 1096.12 × 1071.08 × 1059.51 × 1084.31 × 1074.33 × 1071.02 × 10103.81 × 1071.12 × 1057.81 × 104
Std2.66 × 1042.65 × 1043.41 × 1042.73 × 1091.50 × 1082.47 × 1048.51 × 1089.94 × 1072.05 × 1073.66 × 1091.47 × 1073.08 × 1042.63 × 104
F19Min1.35 × 1012.01 × 1013.86 × 1011.71 × 1082.58 × 1011.10 × 1014.90 × 1021.62 × 1033.56 × 1058.75 × 1081.89 × 1031.67 × 1011.74 × 101
Ave3.58 × 1022.41 × 1011.81 × 1031.51 × 1091.53 × 1055.15 × 1019.25 × 1068.71 × 1052.42 × 1065.00 × 1091.28 × 1051.38 × 1021.29 × 102
Std9.96 × 1023.15 × 1002.97 × 1037.43 × 1086.62 × 1058.74 × 1011.91 × 1072.49 × 1062.65 × 1062.56 × 1091.01 × 1054.57 × 1022.75 × 102
F20Min2.64 × 1044.76 × 1032.52 × 1046.06 × 10128.98 × 1041.88 × 1041.40 × 1084.62 × 1082.60 × 1092.20 × 10145.67 × 1052.26 × 1044.56 × 103
Ave8.71 × 1044.57 × 1048.79 × 1041.47 × 10151.25 × 10104.93 × 1046.04 × 10127.15 × 10111.14 × 10101.15 × 10161.49 × 1079.23 × 1043.50 × 104
Std4.18 × 1046.64 × 1042.77 × 1041.18 × 10152.78 × 10102.04 × 1041.64 × 10131.81 × 10129.02 × 1091.40 × 10161.81 × 1075.41 × 1042.03 × 104
F21Min3.91 × 1033.92 × 1038.42 × 1031.02 × 1093.10 × 1061.25 × 1042.34 × 1076.75 × 1063.54 × 1071.42 × 1098.27 × 1051.31 × 1043.81 × 103
Ave2.52 × 1042.52 × 1044.33 × 1043.44 × 1093.89 × 1073.50 × 1041.28 × 1085.84 × 1078.81 × 1073.85 × 1092.49 × 1064.58 × 1042.06 × 104
Std1.17 × 1041.45 × 1042.56 × 1041.49 × 1092.85 × 1071.22 × 1049.09 × 1075.13 × 1073.31 × 1071.75 × 1099.68 × 1051.99 × 1041.05 × 104
F22Min2.90 × 1023.66 × 1012.49 × 1035.55 × 10101.92 × 1033.02 × 1012.11 × 1033.28 × 1037.80 × 1051.33 × 10112.12 × 1035.89 × 1012.83 × 101
Ave1.23 × 1036.94 × 1024.91 × 1033.01 × 10132.85 × 1034.99 × 1027.25 × 1074.33 × 10101.57 × 1071.85 × 10148.36 × 1037.14 × 1023.63 × 102
Std7.20 × 1027.41 × 1021.52 × 1033.86 × 10133.85 × 1021.92 × 1023.56 × 1081.56 × 10112.25 × 1072.20 × 10148.98 × 1033.90 × 1022.21 × 102
Table 9. The results of ICOA and competitor algorithms for CEC-2014 composition functions ( D = 30 ).
Func.IndexAlgorithms
PSODEWOAAOASHOAVOABBOAEVOGJOHCORUNCOAICOA
F23Min2.23 × 1022.23 × 1022.23 × 1022.08 × 1022.00 × 1022.00 × 1022.00 × 1022.55 × 1022.00 × 1023.37 × 1022.25 × 1022.00 × 1022.00 × 102
Ave2.23 × 1022.23 × 1022.26 × 1022.12 × 1022.41 × 1022.00 × 1022.50 × 1022.76 × 1022.30 × 1024.03 × 1022.27 × 1022.00 × 1022.00 × 102
Std2.35 × 10−10.00 × 1005.88 × 1008.41 × 10−11.11 × 1010.00 × 1003.00 × 1011.83 × 1011.41 × 1015.69 × 1012.79 × 1000.00 × 1000.00 × 100
F24Min2.23 × 1022.01 × 1022.01 × 1022.01 × 1022.01 × 1022.01 × 1022.01 × 1022.47 × 1022.01 × 1022.43 × 1022.01 × 1022.01 × 1022.01 × 102
Ave2.42 × 1022.01 × 1022.01 × 1022.02 × 1022.01 × 1022.01 × 1022.01 × 1022.62 × 1022.01 × 1022.51 × 1022.01 × 1022.01 × 1022.01 × 102
Std9.22 × 1000.00 × 1000.00 × 1002.09 × 10−10.00 × 1000.00 × 1000.00 × 1008.34 × 1000.00 × 1006.06 × 1003.67 × 10−110.00 × 1000.00 × 100
F25Min2.06 × 1022.04 × 1022.00 × 1022.00 × 1022.00 × 1022.00 × 1022.00 × 1022.24 × 1022.00 × 1022.40 × 1022.00 × 1022.00 × 1022.00 × 102
Ave2.17 × 1022.04 × 1022.12 × 1022.01 × 1022.00 × 1022.00 × 1022.00 × 1022.31 × 1022.00 × 1022.62 × 1022.15 × 1022.00 × 1022.00 × 102
Std9.14 × 1003.14 × 10−12.12 × 1018.53 × 10−20.00 × 1000.00 × 1000.00 × 1006.49 × 1000.00 × 1001.49 × 1015.01 × 1000.00 × 1000.00 × 100
F26Min1.01 × 1021.00 × 1021.00 × 1021.10 × 1021.02 × 1021.00 × 1021.05 × 1021.03 × 1021.03 × 1021.14 × 1021.01 × 1021.00 × 1021.00 × 102
Ave1.34 × 1021.00 × 1021.01 × 1021.93 × 1021.61 × 1021.00 × 1021.12 × 1021.51 × 1021.36 × 1021.50 × 1021.47 × 1021.73 × 1021.01 × 102
Std4.76 × 1016.83 × 10−21.65 × 1002.30 × 1014.83 × 1019.56 × 10−21.69 × 1015.07 × 1014.60 × 1013.60 × 1015.07 × 1014.48 × 1011.19 × 10−1
F27Min4.02 × 1023.71 × 1026.26 × 1022.01 × 1025.26 × 1022.00 × 1028.74 × 1029.07 × 1025.78 × 1029.32 × 1024.38 × 1022.00 × 1022.00 × 102
Ave9.46 × 1024.66 × 1021.45 × 1032.03 × 1021.02 × 1032.00 × 1021.25 × 1031.07 × 1031.16 × 1031.59 × 1038.87 × 1022.00 × 1022.00 × 102
Std2.05 × 1026.14 × 1012.33 × 1024.73 × 10−12.30 × 1020.00 × 1008.52 × 1019.54 × 1011.52 × 1022.45 × 1021.80 × 1020.00 × 1000.00 × 100
F28Min1.16 × 1037.77 × 1022.16 × 1032.06 × 1022.00 × 1022.00 × 1029.29 × 1021.16 × 1032.00 × 1025.29 × 1031.18 × 1032.00 × 1022.00 × 102
Ave1.89 × 1038.38 × 1023.07 × 1032.09 × 1023.31 × 1032.00 × 1021.73 × 1032.26 × 1034.80 × 1026.49 × 1031.47 × 1032.00 × 1022.00 × 102
Std5.98 × 1024.11 × 1015.85 × 1027.67 × 10−18.82 × 1020.00 × 1005.34 × 1026.13 × 1028.11 × 1027.67 × 1021.38 × 1020.00 × 1000.00 × 100
F29Min3.05 × 1031.37 × 1032.08 × 1021.62 × 1062.08 × 1022.08 × 1022.08 × 1021.24 × 1082.08 × 1021.38 × 1092.71 × 1062.08 × 1022.08 × 102
Ave8.13 × 1066.07 × 1051.41 × 1071.19 × 1074.15 × 1062.08 × 1022.95 × 1073.05 × 1081.16 × 1084.64 × 1091.23 × 1072.08 × 1022.08 × 102
Std2.04 × 1073.31 × 1062.47 × 1072.80 × 1061.75 × 1070.00 × 1007.93 × 1071.31 × 1086.21 × 1072.15 × 1097.15 × 1060.00 × 1000.00 × 100
F30Min7.91 × 1046.79 × 1035.76 × 1047.12 × 1091.22 × 1082.05 × 1026.18 × 1055.59 × 1081.55 × 1085.63 × 10114.02 × 1062.05 × 1022.05 × 102
Ave1.92 × 1071.10 × 1044.33 × 1062.44 × 10114.23 × 1083.14 × 1046.32 × 1081.48 × 10104.70 × 1081.64 × 10143.51 × 1079.87 × 1031.87 × 104
Std2.70 × 1072.89 × 1037.42 × 1068.06 × 10103.06 × 1084.59 × 1046.96 × 1082.70 × 10103.15 × 1081.80 × 10143.46 × 1072.43 × 1046.20 × 104
Friedman ave. rank | 5.43 | 3.95 | 6.85 | 11.23 | 8.00 | 2.77 | 9.47 | 8.93 | 9.00 | 12.13 | 7.40 | 4.13 | 1.70
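The Friedman average ranks in the row above (ICOA best at 1.70) are obtained by ranking the algorithms' mean errors on each benchmark function (rank 1 = smallest error, ties sharing the average rank) and averaging each algorithm's rank over all functions. A minimal pure-Python sketch; the algorithm names and error values in the example are illustrative, not taken from the tables:

```python
def friedman_average_ranks(mean_errors):
    """mean_errors: dict mapping algorithm name -> list of mean errors,
    one entry per benchmark function. Returns dict mapping algorithm
    name -> average rank (1 = best); tied errors share the mean rank."""
    algos = list(mean_errors)
    n_funcs = len(next(iter(mean_errors.values())))
    totals = {a: 0.0 for a in algos}
    for f in range(n_funcs):
        ordered = sorted(algos, key=lambda a: mean_errors[a][f])
        i = 0
        while i < len(ordered):
            j = i
            # extend j over a block of tied error values
            while j + 1 < len(ordered) and \
                    mean_errors[ordered[j + 1]][f] == mean_errors[ordered[i]][f]:
                j += 1
            shared_rank = (i + j) / 2 + 1  # average of ranks i+1 .. j+1
            for k in range(i, j + 1):
                totals[ordered[k]] += shared_rank
            i = j + 1
    return {a: totals[a] / n_funcs for a in algos}

# Illustrative data: 3 algorithms over 2 functions.
ranks = friedman_average_ranks({"A": [1.0, 1.0], "B": [2.0, 3.0], "C": [3.0, 2.0]})
```

Here "A" wins both functions (average rank 1.0), while "B" and "C" split ranks 2 and 3 (average 2.5 each).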
Table 10. Comparisons of WSRT for ICOA vs. PSO, DE, and WOA ( D = 10 ).
Func.PSO-ICOADE-ICOAWOA-ICOA
p-ValueT−T+Wp-ValueT−T+Wp-ValueT−T+W
F16.90 × 10−11503153.29 × 10−11473188.37 × 10−284381
F22.49 × 10−1018447+2.87 × 10−1171394+2.87 × 10−110465+
F31.66 × 10−238427+2.87 × 10−1182383+2.87 × 10−110465+
F43.11 × 10−11453192.62 × 10−3148317+9.78 × 10−2119346
F51.34 × 10−3103362+2.87 × 10−110465+1.06 × 10−82463+
F62.51 × 10−523442+1.56 × 10−3188227+2.87 × 10−110465+
F75.23 × 10−110465+4.88 × 10−815450+2.87 × 10−110465+
F82.87 × 10−110465+6.10 × 10−396.5365.5+2.87 × 10−110465+
F93.08 × 10−88457+2.94 × 10−11702959.44 × 10−110465+
F103.01 × 10−104632-3.18 × 10−114650-5.41 × 10−438382-
F111.69 × 10−11573081.10 × 10−11752903.64 × 10−106459+
F127.34 × 10−12532125.10 × 10−21293362.87 × 10−110465+
F133.51 × 10−110465+1.07 × 10−442423+2.87 × 10−110465+
F148.37 × 10−21033623.59 × 10−3364101-5.66 × 10−629436+
F151.04 × 10−100465+4.37 × 10−93462+2.87 × 10−110465+
F162.58 × 10−6214251+1.12 × 10−940461-2.58 × 10−60465+
F172.51 × 10−538427+2.87 × 10−11195270+3.88 × 10−110465+
F182.20 × 10−11752902.87 × 10−11181284+2.87 × 10−110465+
F193.70 × 10−623442+9.67 × 10−392373+7.03 × 10−110465+
F205.31 × 10−814451+3.18 × 10−11185280+2.87 × 10−110465+
F212.10 × 10−819446+2.87 × 10−11161304+7.76 × 10−110465+
F221.55 × 10−637428+7.32 × 10−7231234+7.45 × 10−3175290+
F232.87 × 10−110465+2.87 × 10−110465+2.13 × 10−90459+
F242.40 × 10−632433+5.27 × 10−6129336+2.05 × 10−105460+
F256.56 × 10−5165300+2.87 × 10−110465+3.75 × 10−1152277
F262.87 × 10−110465+9.27 × 10−3110355+2.87 × 10−110465+
F273.70 × 10−626439+5.82 × 10−710455+8.56 × 10−110465+
F282.87 × 10−110465+2.87 × 10−110465+5.32 × 10−101464+
F292.87 × 10−110465+2.87 × 10−110465+2.87 × 10−110465+
F302.87 × 10−110465+2.87 × 10−110465+2.87 × 10−110465+
Table 11. Comparisons of WSRT for ICOA vs. AOA, SHO, and AVOA ( D = 10 ).
Func.AOA-ICOASHO-ICOAAVOA-ICOA
p-ValueT−T+Wp-ValueT−T+Wp-ValueT−T+W
F12.87 × 10−110465+2.87 × 10−110465+3.01 × 10−100465+
F22.87 × 10−110465+2.87 × 10−110465+2.87 × 10−110465+
F32.87 × 10−110465+2.87 × 10−110465+2.87 × 10−110465+
F42.87 × 10−110465+9.31 × 10−1011454+2.46 × 10−2174290+
F52.87 × 10−110465+4.37 × 10−94461+5.43 × 10−566399+
F62.87 × 10−110465+1.49 × 10−88457+4.75 × 10−388377+
F72.87 × 10−110465+2.87 × 10−110465+1.05 × 10−2113352+
F82.87 × 10−110465+2.87 × 10−110465+1.27 × 10−2134.5327.5+
F92.87 × 10−110465+3.51 × 10−110465+8.70 × 10−817448+
F105.32 × 10−102463+1.66 × 10−745114-3.51 × 10−114650-
F112.87 × 10−110465+1.55 × 10−636429+2.20 × 10−546419+
F122.87 × 10−110465+1.25 × 10−2109356+9.50 × 10−541451-
F132.87 × 10−110465+2.87 × 10−110465+2.76 × 10−2123342+
F142.87 × 10−110465+1.15 × 10−101464+1.35 × 10−2355110-
F152.87 × 10−110465+2.87 × 10−110465+5.41 × 10−481384+
F162.87 × 10−110465+8.94 × 10−1174482.98 × 10−6157308+
F172.87 × 10−110465+2.87 × 10−110465+3.51 × 10−110465+
F182.87 × 10−110465+2.87 × 10−110465+1.39 × 10−101464+
F192.87 × 10−110465+2.33 × 10−91464+6.73 × 10−478387+
F202.87 × 10−110465+2.87 × 10−110465+3.88 × 10−111464+
F212.87 × 10−110465+2.87 × 10−110465+1.54 × 10−100465+
F222.87 × 10−110465+1.48 × 10−3116349+1.69 × 10−1202263
F232.87 × 10−110465+100100
F242.87 × 10−110465+1.11 × 10−79456+2.60 × 10−467398+
F252.87 × 10−110465+8.24 × 10−187102100
F262.87 × 10−110465+2.87 × 10−110465+3.08 × 10−1304161
F272.87 × 10−110465+1.06 × 10−812453+3.06 × 10−578387+
F282.87 × 10−110465+2.87 × 10−110465+100
F292.87 × 10−110465+2.87 × 10−110465+100
F302.87 × 10−110465+5.32 × 10−100462+1.90 × 10−30329+
Table 12. Comparisons of WSRT for ICOA vs. BBOA, EVO, and GJO ( D = 10 ).
Func.BBOA-ICOAEVO-ICOAGJO-ICOA
p-ValueT−T+Wp-ValueT−T+Wp-ValueT−T+W
F12.87 × 10−110465+2.87 × 10−110465+2.87 × 10−110465+
F22.87 × 10−110465+2.87 × 10−110465+2.87 × 10−110465+
F32.87 × 10−110465+2.87 × 10−110465+2.87 × 10−110465+
F42.87 × 10−110465+1.27 × 10−101464+2.87 × 10−110465+
F52.10 × 10−831434+5.27 × 10−642423+2.87 × 10−110465+
F62.13 × 10−912453+4.10 × 10−480385+9.44 × 10−110465+
F72.87 × 10−110465+2.87 × 10−110465+2.87 × 10−110465+
F82.87 × 10−110465+2.87 × 10−110465+2.87 × 10−110465+
F91.04 × 10−100465+2.06 × 10−542423+2.87 × 10−110465+
F101.56 × 10−338184-2.55 × 10−94587-9.27 × 10−4363102-
F111.31 × 10−73462+8.86 × 10−920445+2.87 × 10−110465+
F123.21 × 10−816449+4.22 × 10−535430+2.87 × 10−110465+
F132.87 × 10−110465+2.87 × 10−110465+2.87 × 10−110465+
F142.87 × 10−110465+3.51 × 10−111464+2.87 × 10−110465+
F152.87 × 10−110465+2.87 × 10−110465+2.87 × 10−110465+
F163.59 × 10−1194461.21 × 10−40465+5.77 × 10−110465+
F175.23 × 10−110465+2.87 × 10−110465+2.87 × 10−110465+
F182.87 × 10−110465+2.87 × 10−110465+2.87 × 10−110465+
F196.04 × 10−21523136.37 × 10−111464+3.88 × 10−110465+
F202.87 × 10−110465+2.87 × 10−110465+2.87 × 10−110465+
F212.87 × 10−110465+2.87 × 10−110465+2.87 × 10−110465+
F221.41 × 10−3154311+1.94 × 10−96459+6.73 × 10−492373+
F231002.87 × 10−110465+100
F246.37 × 10−111464+4.40 × 10−106459+2.87 × 10−110465+
F257.60 × 10−2872422.87 × 10−110465+100
F262.87 × 10−110465+3.51 × 10−110465+2.87 × 10−110465+
F272.13 × 10−95460+1.69 × 10−100465+2.87 × 10−110465+
F282.87 × 10−110465+2.87 × 10−110465+2.87 × 10−110465+
F292.87 × 10−110465+2.87 × 10−110465+2.87 × 10−110465+
F302.87 × 10−110465+2.87 × 10−110465+2.13 × 10−90459+
Table 13. Comparisons of WSRT for ICOA vs. HCO, RUN, and COA ( D = 10 ).
Func.HCO-ICOARUN-ICOACOA-ICOA
p-ValueT−T+Wp-ValueT−T+Wp-ValueT−T+W
F12.87 × 10−110465+2.87 × 10−110465+1.12 × 10−92463+
F22.87 × 10−110465+2.87 × 10−110465+3.51 × 10−110465+
F32.87 × 10−110465+2.87 × 10−110465+2.87 × 10−110465+
F42.87 × 10−110465+8.79 × 10−4134331+4.44 × 10−2175290+
F52.87 × 10−110465+5.32 × 10−1030435+8.02 × 10−1228237
F64.73 × 10−111464+2.76 × 10−477388+1.01 × 10−2104361+
F72.87 × 10−110465+2.87 × 10−110465+1.25 × 10−2102363+
F82.87 × 10−110465+3.18 × 10−110465+1.73 × 10−2110355+
F91.27 × 10−100465+2.35 × 10−568397+1.44 × 10−620445+
F105.22 × 10−91464+4.37 × 10−945510-6.78 × 10−743530-
F112.87 × 10−110465+1.13 × 10−537428+6.82 × 10−3107358+
F122.87 × 10−110465+3.51 × 10−110465+6.04 × 10−479386+
F133.88 × 10−110465+9.44 × 10−112463+8.02 × 10−813452+
F143.18 × 10−110465+1.43 × 10−11473183.33 × 10−2136329+
F152.87 × 10−110465+2.87 × 10−110465+1.02 × 10−73462+
F161.54 × 10−44461+1.05 × 10−5172293+5.31 × 10−8291174-
F172.87 × 10−110465+2.87 × 10−110465+1.69 × 10−101464+
F182.87 × 10−110465+2.87 × 10−110465+3.71 × 10−567398+
F192.87 × 10−110465+1.12 × 10−97458+1.84 × 10−466399+
F202.87 × 10−110465+2.87 × 10−110465+3.18 × 10−110465+
F212.87 × 10−110465+2.87 × 10−110465+1.35 × 10−926439+
F222.87 × 10−110465+5.28 × 10−22642011.90 × 10−3117348+
F232.87 × 10−110465+2.87 × 10−110465+100
F241.54 × 10−102463+1.25 × 10−2134331+3.31 × 10−101464+
F252.87 × 10−110465+1.07 × 10−6114351+100
F262.87 × 10−110465+2.87 × 10−110465+1.44 × 10−631434+
F277.03 × 10−111464+7.04 × 10−100465+1.23 × 10−910455+
F282.87 × 10−110465+2.87 × 10−110465+100
F292.87 × 10−110465+2.87 × 10−110465+100
F302.87 × 10−110465+2.87 × 10−110465+100
Table 14. Summary of WSRT results of all the algorithms for CEC-2014 ( D = 10 ).
Function Type | PSO-ICOA | DE-ICOA | WOA-ICOA | AOA-ICOA | SHO-ICOA | AVOA-ICOA
Unimodal | 2/1/0 | 2/1/0 | 2/1/0 | 3/0/0 | 3/0/0 | 3/0/0
Multimodal | 8/4/1 | 7/3/3 | 11/1/1 | 13/0/0 | 11/1/1 | 10/0/3
Hybrid | 5/1/0 | 6/0/0 | 6/0/0 | 6/0/0 | 6/0/0 | 5/1/0
Composition | 8/0/0 | 8/0/0 | 7/1/0 | 8/0/0 | 6/2/0 | 3/5/0
Total | 23/6/1 | 23/4/3 | 26/3/1 | 30/0/0 | 26/3/1 | 21/6/3
Function Type | BBOA-ICOA | EVO-ICOA | GJO-ICOA | HCO-ICOA | RUN-ICOA | COA-ICOA
Unimodal | 3/0/0 | 3/0/0 | 3/0/0 | 3/0/0 | 3/0/0 | 3/0/0
Multimodal | 11/1/1 | 12/0/1 | 12/0/1 | 13/0/0 | 11/1/1 | 10/1/2
Hybrid | 5/1/0 | 6/0/0 | 6/0/0 | 6/0/0 | 5/1/0 | 6/0/0
Composition | 6/2/0 | 8/0/0 | 6/2/0 | 8/0/0 | 8/0/0 | 3/5/0
Total | 25/4/1 | 29/0/1 | 27/2/1 | 30/0/0 | 27/2/1 | 22/6/2
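Tables 10–18 report, for each pairwise comparison, the Wilcoxon Signed-Rank Test p-value together with the rank sums T−, T+, and W = min(T−, T+); with n = 30 paired runs and no zero differences, T− + T+ = 30 · 31 / 2 = 465, which is why one of the two rank sums is often 465 in the tables. A minimal pure-Python sketch of the rank-sum computation (the p-value itself would normally come from a routine such as `scipy.stats.wilcoxon`; the sample data below is illustrative):

```python
def wilcoxon_signed_rank_stats(runs_a, runs_b):
    """Rank sums of the Wilcoxon Signed-Rank Test for paired samples,
    e.g. 30 independent runs of a competitor (runs_a) vs. ICOA (runs_b).
    Zero differences are dropped and tied |differences| share the average
    rank, following the standard Wilcoxon conventions.
    Returns (T_minus, T_plus, W) with W = min(T-, T+)."""
    diffs = [a - b for a, b in zip(runs_a, runs_b) if a != b]
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(order):
        j = i
        # extend j over a block of tied absolute differences
        while j + 1 < len(order) and \
                abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        shared = (i + j) / 2 + 1  # average of ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = shared
        i = j + 1
    t_plus = sum(r for r, d in zip(ranks, diffs) if d > 0)
    t_minus = sum(r for r, d in zip(ranks, diffs) if d < 0)
    return t_minus, t_plus, min(t_minus, t_plus)
```

With differences taken as competitor error minus ICOA error, T+ accumulates the runs where ICOA is better, so a row like T− = 0, T+ = 465 corresponds to ICOA winning all 30 runs (the "+" sign in the tables).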
Table 15. Comparisons of WSRT for ICOA vs. PSO, DE, and WOA ( D = 30 ).
Func.PSO-ICOADE-ICOAWOA-ICOA
p-ValueT−T+Wp-ValueT−T+Wp-ValueT−T+W
F11.35 × 10−280385+2.87 × 10−11181284+8.87 × 10−30465+
F24.58 × 10−685380+2.87 × 10−11145320+4.84 × 10−10188277+
F38.79 × 10−4186279+2.87 × 10−1142423+4.29 × 10−110465+
F43.15 × 10−1124532.87 × 10−1179386+3.70 × 10−60465+
F52.87 × 10−110465+2.87 × 10−110465+2.95 × 10−814451+
F66.04 × 10−214642.87 × 10−1180385+3.18 × 10−110465+
F73.45 × 10−616449+1.27 × 10−10185280+2.56 × 10−234431+
F82.07 × 10−40465+2.87 × 10−11185280+8.70 × 10−80465+
F91.03 × 10−345420+1.66 × 10−7198267+1.02 × 10−72463+
F103.21 × 10−60465+5.23 × 10−1126439+2.55 × 10−90465+
F114.13 × 10−2187278+6.90 × 10−2524134.00 × 10−90465+
F128.02 × 10−12282371.81 × 10−534431+2.87 × 10−110465+
F135.10 × 10−555410+3.96 × 10−7250215+1.27 × 10−377388+
F149.29 × 10−11303354.28 × 10−7142323+1.56 × 10−1180285
F152.43 × 10−134622.11 × 10−746419+2.87 × 10−110465+
F162.19 × 10−499366+7.45 × 10−1484171.29 × 10−40465+
F177.10 × 10−461404+1.27 × 10−1071394+9.31 × 10−104461+
F182.92 × 10−455410+2.55 × 10−9225240+8.79 × 10−487378+
F191.21 × 10−11463199.48 × 10−23211448.02 × 10−818447+
F201.27 × 10−318447+2.87 × 10−11244221+5.79 × 10−50465+
F212.37 × 10−11633024.73 × 10−11177288+6.46 × 10−240425
F229.50 × 10−515450+5.71 × 10−9168297+2.87 × 10−110465+
F232.87 × 10−110465+2.87 × 10−110465+2.87 × 10−110465+
F242.87 × 10−110465+1.00 × 100001.00 × 10000
F252.87 × 10−110465+2.87 × 10−110465+2.66 × 10−20255+
F266.26 × 10−1114542.13 × 10−944223-1.63 × 10−42463+
F272.87 × 10−110465+2.87 × 10−110465+2.87 × 10−110465+
F282.87 × 10−110465+2.87 × 10−110465+2.87 × 10−110465+
F292.87 × 10−110465+2.87 × 10−110465+3.21 × 10−60420+
F304.73 × 10−110465+8.58 × 10−6115350+6.37 × 10−110465+
Table 16. Comparisons of WSRT for ICOA vs. AOA, SHO, and AVOA ( D = 30 ).
Func.AOA-ICOASHO-ICOAAVOA-ICOA
p-ValueT−T+Wp-ValueT−T+Wp-ValueT−T+W
F12.87 × 10−110465+2.87 × 10−110465+2.11 × 10−2122343+
F22.87 × 10−110465+2.87 × 10−110465+1.47 × 10−2122343+
F32.87 × 10−110465+2.87 × 10−110465+3.18 × 10−110465+
F42.87 × 10−110465+2.87 × 10−110465+8.87 × 10−363402+
F52.87 × 10−110465+3.18 × 10−110465+3.26 × 10−377388+
F62.87 × 10−110465+2.87 × 10−110465+1.07 × 10−454411+
F72.87 × 10−110465+2.87 × 10−110465+5.44 × 10−3162303+
F82.87 × 10−110465+2.87 × 10−110465+2.87 × 10−2110355+
F92.87 × 10−110465+2.87 × 10−110465+2.20 × 10−553412+
F102.87 × 10−110465+2.87 × 10−110465+1.64 × 10−394371+
F112.87 × 10−110465+4.84 × 10−105460+6.37 × 10−484381+
F122.87 × 10−110465+1.26 × 10−810455+1.27 × 10−104633-
F132.87 × 10−110465+2.87 × 10−110465+1.26 × 10−844421-
F142.87 × 10−110465+2.87 × 10−110465+2.00 × 10−386379+
F152.87 × 10−110465+2.87 × 10−110465+1.15 × 10−104650+
F162.87 × 10−110465+2.87 × 10−110465+4.99 × 10−736429+
F172.87 × 10−110465+2.87 × 10−110465+2.10 × 10−811454+
F182.87 × 10−110465+1.48 × 10−92463+6.98 × 10−558407+
F192.87 × 10−110465+4.00 × 10−911454+1.04 × 10−1289176
F202.87 × 10−110465+2.87 × 10−110465+5.44 × 10−395370+
F212.87 × 10−110465+2.87 × 10−110465+3.06 × 10−520445+
F222.87 × 10−110465+2.87 × 10−110465+1.35 × 10−2102363+
F232.87 × 10−110465+5.32 × 10−100462+100
F242.87 × 10−110465+100100
F252.87 × 10−110465+100100
F262.87 × 10−110465+2.87 × 10−110465+1.93 × 10−643035-
F272.87 × 10−110465+2.87 × 10−110465+100
F282.87 × 10−110465+1.27 × 10−100464+100
F292.87 × 10−110465+6.57 × 10−1059100
F302.87 × 10−110465+2.87 × 10−110465+3.83 × 10−696368+
Table 17. Comparisons of WSRT for ICOA vs. BBOA, EVO, and GJO ( D = 30 ).
Func.BBOA-ICOAEVO-ICOAGJO-ICOA
p-ValueT−T+Wp-ValueT−T+Wp-ValueT−T+W
F12.87 × 10−110465+2.87 × 10−110465+2.87 × 10−110465+
F22.87 × 10−110465+2.87 × 10−110465+2.87 × 10−110465+
F32.87 × 10−110465+2.87 × 10−110465+2.87 × 10−110465+
F42.87 × 10−110465+2.87 × 10−110465+2.87 × 10−110465+
F52.87 × 10−110465+1.80 × 10−752413+2.87 × 10−110465+
F62.87 × 10−110465+4.73 × 10−110465+2.87 × 10−110465+
F72.87 × 10−110465+2.87 × 10−110465+2.87 × 10−110465+
F82.87 × 10−110465+3.51 × 10−110465+2.87 × 10−110465+
F92.87 × 10−110465+5.10 × 10−563402+2.87 × 10−110465+
F102.87 × 10−110465+2.87 × 10−110465+2.87 × 10−110465+
F112.87 × 10−110465+6.81 × 10−97458+2.87 × 10−110465+
F121.26 × 10−814451+7.32 × 10−715450+2.87 × 10−110465+
F132.87 × 10−110465+7.03 × 10−111464+2.87 × 10−110465+
F142.87 × 10−110465+2.87 × 10−110465+2.87 × 10−110465+
F152.87 × 10−110465+2.87 × 10−110465+2.87 × 10−110465+
F162.87 × 10−110465+2.87 × 10−110465+2.87 × 10−110465+
F172.87 × 10−110465+2.87 × 10−110465+2.87 × 10−110465+
F182.87 × 10−110465+2.87 × 10−110465+2.87 × 10−110465+
F193.51 × 10−110465+2.87 × 10−110465+2.87 × 10−110465+
F202.87 × 10−110465+2.87 × 10−110465+2.87 × 10−110465+
F212.87 × 10−110465+2.87 × 10−110465+2.87 × 10−110465+
F222.87 × 10−110465+2.87 × 10−110465+2.87 × 10−110465+
F233.39 × 10−70437+2.87 × 10−110465+2.13 × 10−90459+
F241002.87 × 10−110465+100
F251002.87 × 10−110465+100
F262.87 × 10−110465+2.87 × 10−110465+2.87 × 10−110465+
F272.87 × 10−110465+2.87 × 10−110465+2.87 × 10−110465+
F282.87 × 10−110465+2.87 × 10−110465+3.75 × 10−10114
F293.75 × 10−101142.87 × 10−110465+8.12 × 10−90455+
F302.87 × 10−110465+2.87 × 10−110465+2.87 × 10−110465+
Table 18. Comparisons of WSRT for ICOA vs. HCO, RUN, and COA ( D = 30 ).
Func.HCO-ICOARUN-ICOACOA-ICOA
p-ValueT−T+Wp-ValueT−T+Wp-ValueT−T+W
F12.87 × 10−110465+2.87 × 10−110465+7.49 × 10−456409+
F22.87 × 10−110465+2.87 × 10−110465+6.52 × 10−30465+
F32.87 × 10−110465+2.87 × 10−110465+2.87 × 10−110465+
F42.87 × 10−110465+2.87 × 10−110465+4.34 × 10−488377+
F52.87 × 10−110465+2.87 × 10−110465+3.21 × 10−828437+
F62.87 × 10−110465+3.18 × 10−110465+1.06 × 10−85460+
F72.87 × 10−110465+2.87 × 10−110465+6.51 × 10−626439+
F82.87 × 10−110465+2.87 × 10−110465+3.88 × 10−110465+
F92.87 × 10−110465+3.51 × 10−110465+1.12 × 10−93462+
F102.87 × 10−110465+2.87 × 10−110465+6.80 × 10−89456+
F112.87 × 10−110465+2.87 × 10−110465+7.04 × 10−100465+
F122.87 × 10−110465+2.87 × 10−110465+5.39 × 10−711454+
F132.87 × 10−110465+2.71 × 10−87458+5.54 × 10−1237228
F142.87 × 10−110465+3.88 × 10−110465+1.35 × 10−2116349+
F152.87 × 10−110465+2.87 × 10−110465+6.37 × 10−465400+
F162.87 × 10−110465+2.87 × 10−110465+2.09 × 10−1160305
F172.87 × 10−110465+2.87 × 10−110465+1.63 × 10−812453+
F182.87 × 10−110465+2.87 × 10−110465+9.85 × 10−639426+
F192.87 × 10−110465+2.87 × 10−110465+7.01 × 10−1247218
F202.87 × 10−110465+2.87 × 10−110465+3.96 × 10−78457+
F212.87 × 10−110465+2.87 × 10−110465+7.89 × 10−728437+
F222.87 × 10−110465+2.87 × 10−110465+3.27 × 10−449416+
F232.87 × 10−110465+2.87 × 10−110465+100
F242.87 × 10−110465+2.87 × 10−110465+100
F252.87 × 10−110465+2.87 × 10−110465+100
F262.87 × 10−110465+9.31 × 10−1010455+6.51 × 10−624441+
F272.87 × 10−110465+2.87 × 10−110465+100
F282.87 × 10−110465+2.87 × 10−110465+100
F292.87 × 10−110465+2.87 × 10−110465+100
F302.87 × 10−110465+2.87 × 10−110465+9.47 × 10−1131103
Table 19. Summary of WSRT results of all the algorithms for CEC-2014 ( D = 30 ).
Function Type | PSO-ICOA | DE-ICOA | WOA-ICOA | AOA-ICOA | SHO-ICOA | AVOA-ICOA
Unimodal | 3/0/0 | 3/0/0 | 3/0/0 | 3/0/0 | 3/0/0 | 3/0/0
Multimodal | 8/5/0 | 11/2/0 | 12/1/0 | 13/0/0 | 13/0/0 | 11/0/2
Hybrid | 4/2/0 | 5/1/0 | 5/1/0 | 6/0/0 | 6/0/0 | 5/1/0
Composition | 7/1/0 | 6/1/1 | 7/1/0 | 8/0/0 | 5/3/0 | 1/6/1
Total | 22/8/0 | 25/4/1 | 27/3/0 | 30/0/0 | 27/3/0 | 20/7/3
Function Type | BBOA-ICOA | EVO-ICOA | GJO-ICOA | HCO-ICOA | RUN-ICOA | COA-ICOA
Unimodal | 3/0/0 | 3/0/0 | 3/0/0 | 3/0/0 | 3/0/0 | 3/0/0
Multimodal | 13/0/0 | 13/0/0 | 13/0/0 | 13/0/0 | 13/0/0 | 11/2/0
Hybrid | 6/0/0 | 6/0/0 | 6/0/0 | 6/0/0 | 6/0/0 | 5/1/0
Composition | 5/3/0 | 8/0/0 | 5/3/0 | 8/0/0 | 8/0/0 | 1/7/0
Total | 27/3/0 | 30/0/0 | 27/3/0 | 30/0/0 | 30/0/0 | 20/10/0
Table 20. Comparisons of t-test for ICOA vs. competing algorithms ( D = 10 ).
Func.Algorithms
PSODEWOAAOASHOAVOABBOAEVOGJOHCORUNCOA
F14.64 × 10−24.55 × 10−23.39 × 10−27.17 × 10−105.33 × 10−127.18 × 10−104.28 × 10−51.27 × 10−86.03 × 10−192.09 × 10−104.91 × 10−81.40 × 10−6
F23.34 × 10−24.84 × 10−27.90 × 10−55.96 × 10−221.06 × 10−46.84 × 10−52.66 × 10−91.98 × 10−81.49 × 10−191.77 × 10−161.46 × 10−91.33 × 10−3
F33.98 × 10−22.04 × 10−24.78 × 10−61.48 × 10−47.36 × 10−111.38 × 10−92.20 × 10−93.37 × 10−57.40 × 10−168.42 × 10−25.18 × 10−124.83 × 10−2
F43.68 × 10−27.85 × 10−46.37 × 10−24.59 × 10−113.95 × 10−72.26 × 10−24.00 × 10−81.12 × 10−83.67 × 10−94.06 × 10−121.51 × 10−23.95 × 10−2
F56.58 × 10−34.28 × 10−182.92 × 10−44.05 × 10−241.30 × 10−31.55 × 10−35.42 × 10−11.32 × 10−42.75 × 10−194.38 × 10−167.37 × 10−16.47 × 10−1
F64.69 × 10−62.86 × 10−24.97 × 10−192.97 × 10−236.28 × 10−91.88 × 10−33.36 × 10−91.16 × 10−33.77 × 10−157.10 × 10−136.99 × 10−44.24 × 10−3
F71.60 × 10−61.94 × 10−72.26 × 10−93.11 × 10−193.31 × 10−51.57 × 10−24.09 × 10−86.24 × 10−92.15 × 10−161.57 × 10−113.79 × 10−226.27 × 10−3
F85.46 × 10−121.54 × 10−35.56 × 10−135.16 × 10−287.56 × 10−174.53 × 10−22.99 × 10−186.20 × 10−142.81 × 10−292.96 × 10−162.50 × 10−147.01 × 10−3
F91.31 × 10−71.69 × 10−21.72 × 10−137.46 × 10−291.76 × 10−133.85 × 10−71.26 × 10−125.94 × 10−62.67 × 10−237.82 × 10−92.37 × 10−44.13 × 10−7
F103.23 × 10−136.91 × 10−153.73 × 10−31.94 × 10−97.78 × 10−92.58 × 10−141.70 × 10−32.99 × 10−112.78 × 10−34.69 × 10−109.60 × 10−107.71 × 10−7
F113.07 × 10−22.70 × 10−26.17 × 10−112.54 × 10−251.37 × 10−54.46 × 10−52.07 × 10−92.30 × 10−82.44 × 10−178.92 × 10−211.03 × 10−58.20 × 10−3
F126.61 × 10−14.36 × 10−26.22 × 10−161.90 × 10−175.22 × 10−31.53 × 10−45.23 × 10−81.87 × 10−51.57 × 10−144.05 × 10−163.81 × 10−166.79 × 10−4
F132.35 × 10−106.88 × 10−62.53 × 10−137.52 × 10−219.97 × 10−102.75 × 10−22.36 × 10−81.39 × 10−141.25 × 10−228.21 × 10−81.31 × 10−121.53 × 10−6
F142.82 × 10−24.49 × 10−31.74 × 10−45.73 × 10−234.12 × 10−31.01 × 10−21.26 × 10−81.59 × 10−65.20 × 10−171.05 × 10−101.99 × 10−12.85 × 10−2
F156.62 × 10−81.15 × 10−81.08 × 10−41.62 × 10−55.73 × 10−71.61 × 10−37.10 × 10−28.59 × 10−28.37 × 10−93.34 × 10−63.65 × 10−191.91 × 10−8
F163.99 × 10−11.51 × 10−42.17 × 10−133.66 × 10−218.96 × 10−74.28 × 10−27.11 × 10−78.00 × 10−123.05 × 10−172.00 × 10−91.00 × 10−13.00 × 10−2
F175.91 × 10−31.01 × 10−21.41 × 10−87.22 × 10−42.52 × 10−127.53 × 10−101.16 × 10−58.55 × 10−22.04 × 10−81.64 × 10−71.85 × 10−116.87 × 10−5
F184.10 × 10−24.46 × 10−23.09 × 10−72.33 × 10−52.30 × 10−234.16 × 10−51.25 × 10−111.28 × 10−11.30 × 10−491.26 × 10−62.42 × 10−106.91 × 10−3
F191.41 × 10−52.34 × 10−36.69 × 10−45.99 × 10−31.03 × 10−121.30 × 10−37.68 × 10−22.48 × 10−11.31 × 10−18.14 × 10−61.84 × 10−97.04 × 10−4
F204.75 × 10−31.99 × 10−22.09 × 10−66.63 × 10−21.46 × 10−131.29 × 10−61.47 × 10−63.00 × 10−16.78 × 10−22.22 × 10−29.16 × 10−122.31 × 10−4
F215.62 × 10−51.14 × 10−21.42 × 10−32.87 × 10−53.77 × 10−221.27 × 10−61.29 × 10−55.75 × 10−34.09 × 10−62.19 × 10−79.80 × 10−122.59 × 10−5
F221.71 × 10−38.04 × 10−34.30 × 10−11.53 × 10−13.05 × 10−24.24 × 10−25.03 × 10−13.14 × 10−19.74 × 10−44.74 × 10−21.60 × 10−23.11 × 10−2
F231.77 × 10−424.20 × 10−414.88 × 10−162.55 × 10−201.00 × 1001.00 × 1001.00 × 1007.33 × 10−301.00 × 1003.49 × 10−198.31 × 10−21.00 × 100
F243.39 × 10−64.40 × 10−23.29 × 10−97.78 × 10−331.62 × 10−52.59 × 10−36.15 × 10−101.30 × 10−88.90 × 10−211.05 × 10−94.19 × 10−21.67 × 10−21
F254.53 × 10−20.00 × 1003.49 × 10−18.42 × 10−231.17 × 10−11.00 × 1002.27 × 10−13.72 × 10−151.00 × 1002.19 × 10−188.11 × 10−21.00 × 100
F261.23 × 10−104.90 × 10−32.76 × 10−175.59 × 10−92.55 × 10−11.90 × 10−16.24 × 10−81.54 × 10−73.80 × 10−141.64 × 10−117.03 × 10−181.48 × 10−3
F271.47 × 10−73.21 × 10−103.29 × 10−162.43 × 10−95.74 × 10−92.63 × 10−41.45 × 10−101.32 × 10−169.23 × 10−143.58 × 10−159.81 × 10−131.58 × 10−7
F282.24 × 10−174.25 × 10−265.01 × 10−160.00 × 1009.53 × 10−201.00 × 1007.57 × 10−211.76 × 10−131.30 × 10−195.96 × 10−203.01 × 10−171.00 × 100
F291.15 × 10−318.17 × 10−333.15 × 10−91.32 × 10−243.19 × 10−141.00 × 1007.81 × 10−65.07 × 10−122.36 × 10−246.46 × 10−141.40 × 10−261.00 × 100
F308.43 × 10−41.39 × 10−37.11 × 10−23.45 × 10−152.85 × 10−23.07 × 10−21.01 × 10−36.02 × 10−53.13 × 10−61.80 × 10−24.08 × 10−51.00 × 100
Table 21. Comparisons of t-test for ICOA vs. competing algorithms ( D = 30 ).
Func.Algorithms
PSODEWOAAOASHOAVOABBOAEVOGJOHCORUNCOA
F11.75 × 10−33.78 × 10−22.30 × 10−52.96 × 10−271.92 × 10−141.68 × 10−25.03 × 10−172.04 × 10−142.32 × 10−224.02 × 10−183.79 × 10−159.58 × 10−4
F23.93 × 10−23.57 × 10−26.47 × 10−16.32 × 10−352.44 × 10−182.44 × 10−23.16 × 10−262.25 × 10−192.07 × 10−274.56 × 10−309.93 × 10−213.05 × 10−2
F34.79 × 10−28.43 × 10−62.22 × 10−31.14 × 10−38.17 × 10−243.29 × 10−81.32 × 10−251.04 × 10−178.91 × 10−351.61 × 10−41.27 × 10−244.30 × 10−6
F43.31 × 10−72.77 × 10−23.13 × 10−107.61 × 10−232.80 × 10−114.37 × 10−36.62 × 10−194.33 × 10−138.92 × 10−231.46 × 10−203.13 × 10−172.34 × 10−3
F52.86 × 10−192.12 × 10−177.64 × 10−51.81 × 10−361.02 × 10−156.85 × 10−41.52 × 10−252.82 × 10−42.85 × 10−336.59 × 10−307.21 × 10−332.20 × 10−5
F64.51 × 10−142.27 × 10−31.57 × 10−252.08 × 10−306.98 × 10−198.70 × 10−53.98 × 10−221.64 × 10−131.77 × 10−238.31 × 10−241.99 × 10−156.33 × 10−10
F76.62 × 10−54.11 × 10−28.39 × 10−23.56 × 10−331.70 × 10−173.38 × 10−25.58 × 10−251.60 × 10−174.73 × 10−276.66 × 10−253.51 × 10−243.74 × 10−4
F87.97 × 10−154.55 × 10−27.56 × 10−188.68 × 10−403.29 × 10−228.56 × 10−31.09 × 10−313.41 × 10−144.11 × 10−331.82 × 10−255.92 × 10−212.41 × 10−12
F92.31 × 10−52.80 × 10−15.31 × 10−117.84 × 10−291.07 × 10−132.11 × 10−53.58 × 10−172.02 × 10−41.70 × 10−224.20 × 10−191.86 × 10−143.72 × 10−10
F105.13 × 10−147.22 × 10−62.37 × 10−237.40 × 10−377.52 × 10−261.94 × 10−34.51 × 10−291.97 × 10−183.00 × 10−401.98 × 10−351.06 × 10−208.04 × 10−6
F113.42 × 10−11.73 × 10−51.68 × 10−121.27 × 10−293.14 × 10−111.17 × 10−31.26 × 10−182.12 × 10−84.58 × 10−238.57 × 10−271.22 × 10−193.79 × 10−13
F128.86 × 10−15.04 × 10−67.59 × 10−187.30 × 10−222.60 × 10−82.54 × 10−101.36 × 10−72.60 × 10−41.09 × 10−202.61 × 10−181.17 × 10−194.64 × 10−7
F137.21 × 10−54.95 × 10−16.12 × 10−43.20 × 10−372.40 × 10−242.51 × 10−87.26 × 10−304.59 × 10−141.96 × 10−311.06 × 10−282.29 × 10−94.18 × 10−2
F145.98 × 10−32.08 × 10−21.28 × 10−16.21 × 10−281.70 × 10−205.67 × 10−42.36 × 10−282.23 × 10−174.03 × 10−301.86 × 10−211.68 × 10−71.59 × 10−2
F152.34 × 10−86.18 × 10−61.92 × 10−118.79 × 10−151.99 × 10−57.76 × 10−101.11 × 10−94.98 × 10−72.46 × 10−125.29 × 10−111.75 × 10−173.57 × 10−4
F162.96 × 10−39.34 × 10−66.82 × 10−142.34 × 10−204.59 × 10−138.44 × 10−63.04 × 10−143.77 × 10−171.80 × 10−161.23 × 10−188.30 × 10−121.04 × 10−1
F172.44 × 10−21.18 × 10−34.48 × 10−51.71 × 10−146.68 × 10−52.65 × 10−89.30 × 10−81.11 × 10−51.70 × 10−91.81 × 10−124.66 × 10−142.77 × 10−3
F183.82 × 10−56.25 × 10−11.54 × 10−37.19 × 10−183.36 × 10−23.18 × 10−41.14 × 10−62.47 × 10−22.39 × 10−122.24 × 10−151.55 × 10−144.31 × 10−5
F192.46 × 10−14.67 × 10−24.33 × 10−35.67 × 10−122.16 × 10−11.69 × 10−11.27 × 10−26.48 × 10−22.53 × 10−51.34 × 10−111.44 × 10−79.23 × 10−1
F204.93 × 10−73.99 × 10−21.38 × 10−111.92 × 10−72.00 × 10−21.45 × 10−35.32 × 10−23.87 × 10−21.39 × 10−71.04 × 10−41.03 × 10−45.07 × 10−6
F211.48 × 10−13.53 × 10−25.69 × 10−52.68 × 10−133.09 × 10−81.22 × 10−71.73 × 10−88.29 × 10−77.08 × 10−158.23 × 10−132.07 × 10−141.18 × 10−6
F224.16 × 10−72.00 × 10−27.23 × 10−161.91 × 10−43.74 × 10−234.54 × 10−32.73 × 10−11.39 × 10−16.43 × 10−47.75 × 10−53.53 × 10−59.24 × 10−5
F231.99 × 10−591.00 × 1009.39 × 10−212.66 × 10−351.55 × 10−181.00 × 1005.18 × 10−104.63 × 10−202.59 × 10−123.03 × 10−182.22 × 10−301.00 × 100
F244.06 × 10−211.00 × 1001.00 × 1006.52 × 10−261.00 × 1001.00 × 1001.00 × 1004.07 × 10−271.00 × 1001.27 × 10−289.39 × 10−491.00 × 100
F256.23 × 10−115.14 × 10−333.13 × 10−32.49 × 10−241.00 × 1001.00 × 1001.00 × 1008.46 × 10−221.00 × 1004.73 × 10−203.91 × 10−161.00 × 100
F266.31 × 10−43.37 × 10−61.10 × 10−21.35 × 10−191.44 × 10−77.30 × 10−61.01 × 10−36.45 × 10−62.12 × 10−42.54 × 10−82.13 × 10−58.19 × 10−10
F271.72 × 10−181.46 × 10−203.41 × 10−231.70 × 10−253.02 × 10−181.00 × 1001.84 × 10−331.04 × 10−293.91 × 10−258.48 × 10−244.74 × 10−191.00 × 100
F281.42 × 10−152.41 × 10−364.78 × 10−222.09 × 10−334.19 × 10−181.00 × 1001.02 × 10−151.63 × 10−176.90 × 10−22.26 × 10−287.13 × 10−301.00 × 100
F293.69 × 10−22.61 × 10−24.02 × 10−32.76 × 10−202.06 × 10−11.00 × 1005.07 × 10−22.04 × 10−133.78 × 10−111.37 × 10−122.55 × 10−101.00 × 100
F305.35 × 10−42.49 × 10−23.44 × 10−32.34 × 10−162.36 × 10−83.85 × 10−12.73 × 10−55.45 × 10−35.24 × 10−92.60 × 10−55.36 × 10−62.35 × 10−1
Table 22. Summary of t-test results of all the algorithms for CEC-2014.
Alg. | D = 10 (+/−/=) | D = 30 (+/−/=)
PSO | 23/5/2 | 26/0/4
DE | 24/6/0 | 24/1/5
WOA | 24/2/4 | 24/2/4
AOA | 28/0/2 | 30/0/0
SHO | 26/1/3 | 24/2/4
AVOA | 20/5/5 | 20/2/8
BBOA | 24/0/6 | 24/1/5
EVO | 23/1/6 | 28/0/2
GJO | 25/1/4 | 25/2/3
HCO | 29/0/1 | 30/0/0
RUN | 24/1/5 | 30/0/0
COA | 21/3/6 | 21/1/8
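The p-values in Tables 20 and 21 come from a paired t-test over the 30 matched runs of each comparison. A minimal sketch of the underlying statistic, using only the standard library; the sample values are illustrative, not taken from the tables:

```python
from math import sqrt
from statistics import mean, stdev

def paired_t_statistic(runs_a, runs_b):
    """Paired-sample t statistic over n matched runs of two algorithms.
    The two-tailed p-value follows from comparing |t| against Student's t
    distribution with n - 1 degrees of freedom, e.g. via
    scipy.stats.t.sf(abs(t), n - 1) * 2."""
    d = [a - b for a, b in zip(runs_a, runs_b)]
    # mean difference divided by its standard error
    return mean(d) / (stdev(d) / sqrt(len(d)))

t = paired_t_statistic([2.0, 4.0, 6.0, 8.0, 10.0], [1.0, 2.0, 3.0, 4.0, 5.0])
```

A "+" in the summary above then means the difference is significant at the 0.05 level with the competitor's mean error larger, "−" the reverse, and "=" no significant difference.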
Table 23. Ablation study results: component-wise performance analysis for representative CEC-2014 functions.
Func. | Dim | COA | COA + V | COA + m | ICOA | COA Execute Time | ICOA Execute Time
F1 | 10 | 1.39 × 10^4 | 1.06 × 10^4 | 8.54 × 10^3 | 1.44 × 10^3 | 67.568 | 68.926
F4 | 10 | 1.93 × 10^1 | 1.83 × 10^1 | 1.87 × 10^1 | 1.83 × 10^1 | 79.734 | 80.466
F8 | 10 | 5.35 × 10^0 | 3.73 × 10^0 | 1.93 × 10^0 | 1.16 × 10^0 | 65.882 | 66.186
F11 | 10 | 5.19 × 10^2 | 4.30 × 10^2 | 4.20 × 10^2 | 3.61 × 10^2 | 124.644 | 124.972
F17 | 10 | 4.89 × 10^3 | 3.50 × 10^3 | 1.57 × 10^3 | 2.51 × 10^2 | 140.897 | 142.501
F20 | 10 | 6.95 × 10^3 | 6.06 × 10^3 | 3.92 × 10^2 | 2.61 × 10^1 | 146.747 | 146.865
F24 | 10 | 1.98 × 10^2 | 1.90 × 10^2 | 1.70 × 10^2 | 1.21 × 10^2 | 164.135 | 166.922
F27 | 10 | 1.83 × 10^2 | 1.74 × 10^2 | 8.48 × 10^1 | 6.43 × 10^1 | 306.342 | 309.292
F1 | 30 | 9.73 × 10^5 | 8.21 × 10^5 | 7.40 × 10^5 | 5.26 × 10^5 | 79.918 | 80.072
F4 | 30 | 8.60 × 10^1 | 6.45 × 10^1 | 6.43 × 10^1 | 5.88 × 10^1 | 83.940 | 85.720
F8 | 30 | 1.13 × 10^2 | 2.58 × 10^1 | 9.57 × 10^1 | 2.47 × 10^1 | 78.780 | 79.044
F11 | 30 | 4.44 × 10^3 | 3.96 × 10^3 | 3.27 × 10^3 | 3.01 × 10^3 | 145.862 | 146.425
F17 | 30 | 1.26 × 10^5 | 4.51 × 10^4 | 6.15 × 10^4 | 4.05 × 10^4 | 167.574 | 170.213
F20 | 30 | 9.23 × 10^4 | 5.40 × 10^4 | 6.24 × 10^4 | 3.50 × 10^4 | 184.263 | 189.231
F24 | 30 | 2.01 × 10^2 | 2.01 × 10^2 | 2.01 × 10^2 | 2.01 × 10^2 | 195.695 | 196.892
F27 | 30 | 2.00 × 10^2 | 2.00 × 10^2 | 2.00 × 10^2 | 2.00 × 10^2 | 372.137 | 374.889
Table 24. The results of ICOA and competitor algorithms for F1–F10 ( D = 500 ).
Func. | Index | PSO | DE | WOA | AOA | SHO | AVOA | BBOA | EVO | GJO | HCO | RUN | COA | ICOA
F1 | Ave | 1.24 × 10^7 | 7.53 × 10^6 | 1.88 × 10^7 | 8.88 × 10^9 | 6.09 × 10^9 | 1.49 × 10^7 | 3.13 × 10^9 | 1.53 × 10^9 | 1.61 × 10^9 | 1.81 × 10^10 | 6.89 × 10^8 | 1.06 × 10^7 | 5.99 × 10^6
F1 | Std | 1.30 × 10^7 | 7.93 × 10^6 | 1.81 × 10^7 | 1.86 × 10^9 | 5.12 × 10^9 | 1.97 × 10^7 | 1.59 × 10^9 | 5.74 × 10^8 | 1.22 × 10^9 | 1.02 × 10^10 | 7.16 × 10^8 | 1.14 × 10^7 | 5.39 × 10^6
F2 | Ave | 1.35 × 10^4 | 9.49 × 10^3 | 1.72 × 10^4 | 2.35 × 10^11 | 2.17 × 10^11 | 1.38 × 10^4 | 2.06 × 10^11 | 1.28 × 10^11 | 1.27 × 10^11 | 4.26 × 10^11 | 4.86 × 10^10 | 1.33 × 10^4 | 5.46 × 10^3
F2 | Std | 2.98 × 10^4 | 1.89 × 10^4 | 2.72 × 10^4 | 8.03 × 10^10 | 1.05 × 10^11 | 1.32 × 10^4 | 1.20 × 10^11 | 8.12 × 10^10 | 8.10 × 10^10 | 2.35 × 10^11 | 4.32 × 10^10 | 7.60 × 10^3 | 1.32 × 10^4
F3 | Ave | 1.39 × 10^4 | 6.17 × 10^3 | 1.48 × 10^4 | 1.27 × 10^6 | 2.40 × 10^5 | 8.64 × 10^3 | 2.08 × 10^5 | 3.51 × 10^5 | 2.23 × 10^5 | 9.03 × 10^5 | 9.66 × 10^4 | 5.79 × 10^3 | 4.71 × 10^3
F3 | Std | 1.94 × 10^4 | 8.37 × 10^3 | 2.05 × 10^4 | 2.62 × 10^5 | 1.07 × 10^5 | 9.69 × 10^3 | 1.28 × 10^5 | 3.25 × 10^5 | 9.79 × 10^4 | 5.67 × 10^5 | 1.02 × 10^5 | 7.78 × 10^3 | 6.52 × 10^3
F4 | Ave | 4.46 × 10^2 | 2.64 × 10^1 | 6.13 × 10^2 | 8.41 × 10^4 | 5.40 × 10^4 | 2.31 × 10^2 | 6.37 × 10^4 | 2.15 × 10^4 | 2.06 × 10^4 | 7.19 × 10^4 | 3.71 × 10^3 | 1.94 × 10^2 | 1.95 × 10^2
F4 | Std | 4.00 × 10^2 | 1.56 × 10^2 | 4.71 × 10^2 | 4.62 × 10^4 | 3.44 × 10^4 | 1.87 × 10^2 | 4.68 × 10^4 | 9.06 × 10^3 | 1.40 × 10^4 | 4.30 × 10^4 | 3.70 × 10^3 | 5.88 × 10^1 | 1.50 × 10^2
F5 | Ave | 2.04 × 10^1 | 2.01 × 10^1 | 2.02 × 10^1 | 2.14 × 10^1 | 2.12 × 10^1 | 2.01 × 10^1 | 2.11 × 10^1 | 2.01 × 10^1 | 2.12 × 10^1 | 2.12 × 10^1 | 2.12 × 10^1 | 2.04 × 10^1 | 2.00 × 10^1
F5 | Std | 2.76 × 10^−2 | 3.00 × 10^−2 | 2.93 × 10^−2 | 2.23 × 10^−2 | 1.15 × 10^−1 | 8.27 × 10^−2 | 2.59 × 10^−1 | 5.13 × 10^−2 | 1.35 × 10^−1 | 1.31 × 10^−1 | 1.50 × 10^−1 | 1.10 × 10^−1 | 1.68 × 10^−2
F6 | Ave | 8.62 × 10^1 | 9.54 × 10^1 | 1.18 × 10^2 | 1.27 × 10^2 | 1.17 × 10^2 | 7.47 × 10^1 | 1.09 × 10^2 | 8.85 × 10^1 | 1.02 × 10^2 | 1.15 × 10^2 | 9.20 × 10^1 | 8.05 × 10^1 | 6.40 × 10^1
F6 | Std | 8.64 × 10^1 | 7.25 × 10^1 | 8.07 × 10^1 | 6.44 × 10^1 | 6.08 × 10^1 | 3.26 × 10^1 | 5.90 × 10^1 | 4.20 × 10^1 | 5.27 × 10^1 | 6.26 × 10^1 | 5.36 × 10^1 | 5.90 × 10^1 | 4.03 × 10^1
F7 | Ave | 1.11 × 10^−2 | 6.50 × 10^−3 | 1.55 × 10^−2 | 2.19 × 10^3 | 2.21 × 10^3 | 1.59 × 10^−2 | 1.95 × 10^3 | 1.25 × 10^3 | 1.28 × 10^3 | 1.87 × 10^3 | 3.86 × 10^2 | 2.03 × 10^−2 | 4.92 × 10^−3
F7 | Std | 4.28 × 10^−3 | 3.22 × 10^−3 | 5.88 × 10^−3 | 9.75 × 10^2 | 1.07 × 10^3 | 6.96 × 10^−3 | 1.18 × 10^3 | 8.86 × 10^2 | 8.17 × 10^2 | 9.53 × 10^2 | 3.40 × 10^2 | 2.87 × 10^−2 | 1.87 × 10^−3
F8 | Ave | 5.61 × 10^2 | 4.24 × 10^2 | 4.75 × 10^2 | 1.22 × 10^3 | 1.03 × 10^3 | 1.85 × 10^2 | 9.21 × 10^2 | 5.38 × 10^2 | 9.39 × 10^2 | 8.80 × 10^2 | 7.02 × 10^2 | 2.20 × 10^2 | 2.19 × 10^2
F8 | Std | 2.23 × 10^2 | 1.63 × 10^2 | 2.66 × 10^2 | 5.67 × 10^2 | 5.90 × 10^2 | 1.23 × 10^2 | 4.79 × 10^2 | 4.30 × 10^2 | 5.16 × 10^2 | 3.69 × 10^2 | 4.59 × 10^2 | 2.36 × 10^2 | 8.45 × 10^1
F9 | Ave | 6.92 × 10^2 | 4.39 × 10^2 | 1.02 × 10^3 | 1.20 × 10^3 | 1.11 × 10^3 | 4.54 × 10^2 | 9.39 × 10^2 | 8.17 × 10^2 | 9.25 × 10^2 | 9.05 × 10^2 | 8.68 × 10^2 | 4.94 × 10^2 | 4.59 × 10^2
F9 | Std | 4.51 × 10^2 | 4.92 × 10^2 | 1.04 × 10^3 | 5.20 × 10^2 | 5.68 × 10^2 | 2.88 × 10^2 | 5.77 × 10^2 | 5.48 × 10^2 | 4.53 × 10^2 | 4.58 × 10^2 | 5.40 × 10^2 | 2.34 × 10^2 | 3.31 × 10^2
F10 | Ave | 3.78 × 10^3 | 3.58 × 10^3 | 5.10 × 10^3 | 2.57 × 10^4 | 2.28 × 10^4 | 2.30 × 10^3 | 1.95 × 10^4 | 1.24 × 10^4 | 2.02 × 10^4 | 2.32 × 10^4 | 1.75 × 10^4 | 2.78 × 10^3 | 1.85 × 10^3
F10 | Std | 3.54 × 10^3 | 2.50 × 10^3 | 4.48 × 10^3 | 1.24 × 10^4 | 1.22 × 10^4 | 1.41 × 10^3 | 1.11 × 10^4 | 8.39 × 10^3 | 1.07 × 10^4 | 1.35 × 10^4 | 8.34 × 10^3 | 2.57 × 10^3 | 1.43 × 10^3
Table 25. Statistical comparisons of ICOA and competing algorithms for cantilever beam design.
| Alg. | Best | Ave | Worst | Std | Execute Time |
|---|---|---|---|---|---|
| ICOA | 1.339965 | 1.340014 | 1.340091 | 3.07 × 10^−5 | 1.27496 |
| COA | 1.339974 | 1.340178 | 1.341242 | 2.89 × 10^−4 | 1.22471 |
| AOA | 1.340798 | 1.346354 | 1.358246 | 3.66 × 10^−3 | 1.23660 |
| AVOA | 1.339978 | 1.340164 | 1.340509 | 1.47 × 10^−4 | 1.63958 |
| BBOA | 1.678988 | 3.647702 | 6.604720 | 1.33 × 10^0 | 2.24914 |
| EVO | 1.590256 | 2.824445 | 4.496960 | 6.31 × 10^−1 | 3.69383 |
| GJO | 1.340045 | 1.340440 | 1.340440 | 3.02 × 10^−4 | 1.27667 |
| HCO | 3.512354 | 6.253555 | 8.385139 | 1.28 × 10^0 | 0.96254 |
| RUN | 1.340820 | 1.346735 | 1.352481 | 2.74 × 10^−3 | 3.03150 |
| SHO | 1.345395 | 1.375816 | 1.445755 | 2.22 × 10^−2 | 1.54913 |
| PSO | 1.340031 | 1.340040 | 1.340068 | 7.99 × 10^−6 | 1.43468 |
| DE | 1.339990 | 1.340035 | 1.340112 | 4.32 × 10^−10 | 1.71873 |
| WOA | 2.211835 | 5.16124 | 8.86818 | 1.92 × 10^0 | 1.15853 |
Table 26. The best results of ICOA and competing algorithms for cantilever beam design.
| Alg. | X1 | X2 | X3 | X4 | X5 | Optimal Weight |
|---|---|---|---|---|---|---|
| ICOA | 6.00741 | 5.32034 | 4.49318 | 3.50262 | 2.15025 | 1.339965 |
| COA | 5.99646 | 5.32403 | 4.48720 | 3.50900 | 2.15725 | 1.339974 |
| AOA | 6.14601 | 5.32551 | 4.39422 | 3.47063 | 2.15077 | 1.340798 |
| AVOA | 6.00736 | 5.29199 | 4.51273 | 3.50162 | 2.16030 | 1.339978 |
| BBOA | 5.46754 | 5.16362 | 9.12321 | 2.79622 | 4.35626 | 1.678988 |
| EVO | 7.18146 | 6.29623 | 3.54348 | 5.50300 | 2.96070 | 1.590256 |
| GJO | 6.04345 | 5.27270 | 4.50876 | 3.49917 | 2.15099 | 1.340045 |
| HCO | 20.33664 | 6.68054 | 8.63522 | 2.91363 | 17.72169 | 3.512354 |
| RUN | 5.92967 | 5.34792 | 4.56708 | 3.42831 | 2.21453 | 1.340820 |
| SHO | 6.01809 | 5.41483 | 4.79057 | 3.37090 | 1.96643 | 1.345395 |
| PSO | 6.05320 | 5.27831 | 4.50514 | 3.50346 | 2.13476 | 1.340031 |
| DE | 6.03635 | 5.31036 | 4.49535 | 3.47560 | 2.15654 | 1.339990 |
| WOA | 9.55734 | 9.87126 | 3.10593 | 3.01073 | 9.90082 | 2.211835 |
Table 27. Statistical comparisons of ICOA and competing algorithms for gear train design.
| Alg. | Best | Ave | Worst | Std | Execute Time |
|---|---|---|---|---|---|
| ICOA | 2.70086 × 10^−12 | 3.63159 × 10^−9 | 1.18341 × 10^−9 | 5.78387 × 10^−9 | 1.19214 |
| COA | 2.70086 × 10^−12 | 5.48541 × 10^−10 | 2.35764 × 10^−9 | 5.82634 × 10^−10 | 1.16125 |
| AOA | 2.30782 × 10^−11 | 6.62514 × 10^−9 | 2.72645 × 10^−8 | 9.39338 × 10^−9 | 1.13105 |
| AVOA | 2.70086 × 10^−12 | 5.43563 × 10^−10 | 2.35764 × 10^−9 | 6.08656 × 10^−10 | 1.62427 |
| BBOA | 1.36165 × 10^−9 | 1.90407 × 10^−4 | 3.92092 × 10^−3 | 7.31225 × 10^−4 | 2.30207 |
| EVO | 2.70086 × 10^−12 | 3.98385 × 10^−7 | 2.41494 × 10^−6 | 6.50550 × 10^−7 | 3.71810 |
| GJO | 2.30782 × 10^−11 | 4.92848 × 10^−10 | 2.35764 × 10^−9 | 6.32866 × 10^−10 | 1.15495 |
| HCO | 1.36165 × 10^−9 | 4.65817 × 10^−4 | 4.81704 × 10^−3 | 1.22843 × 10^−3 | 0.91430 |
| RUN | 2.70086 × 10^−12 | 3.05617 × 10^−10 | 1.18341 × 10^−9 | 4.22063 × 10^−10 | 2.60598 |
| SHO | 9.92158 × 10^−10 | 5.95694 × 10^−8 | 7.77863 × 10^−7 | 1.95440 × 10^−7 | 1.50559 |
| PSO | 2.70086 × 10^−12 | 1.92707 × 10^−9 | 8.70083 × 10^−9 | 2.26206 × 10^−9 | 1.37893 |
| DE | 2.70086 × 10^−12 | 4.03119 × 10^−10 | 2.35764 × 10^−9 | 7.32360 × 10^−10 | 1.69266 |
| WOA | 2.70086 × 10^−12 | 5.16815 × 10^−9 | 3.88059 × 10^−8 | 7.80601 × 10^−9 | 1.12738 |
Table 28. The best results of ICOA and competing algorithms for gear train design.
| Alg. | X1 | X2 | X3 | X4 | Optimal Cost |
|---|---|---|---|---|---|
| ICOA | 43 | 16 | 19 | 49 | 2.70086 × 10^−12 |
| COA | 43 | 19 | 16 | 49 | 2.70086 × 10^−12 |
| AOA | 51 | 30 | 13 | 53 | 2.30782 × 10^−11 |
| AVOA | 43 | 16 | 19 | 49 | 2.70086 × 10^−12 |
| BBOA | 60 | 34 | 14 | 55 | 1.36165 × 10^−9 |
| EVO | 54 | 27 | 14 | 48 | 2.70086 × 10^−12 |
| GJO | 51 | 15 | 26 | 53 | 2.30782 × 10^−11 |
| HCO | 60 | 14 | 34 | 55 | 1.36165 × 10^−9 |
| RUN | 43 | 19 | 16 | 49 | 2.70086 × 10^−12 |
| SHO | 47 | 12 | 13 | 23 | 9.92158 × 10^−10 |
| PSO | 43 | 16 | 19 | 49 | 2.70086 × 10^−12 |
| DE | 49 | 16 | 19 | 43 | 2.70086 × 10^−12 |
| WOA | 49 | 16 | 19 | 43 | 2.70086 × 10^−12 |
Table 29. Statistical comparisons of ICOA and competing algorithms for rolling element bearing design.
| Alg. | Best | Ave | Worst | Std | Execute Time |
|---|---|---|---|---|---|
| ICOA | 85,547.81075 | 85,498.37802 | 85,129.74657 | 110.25947 | 2.10745 |
| COA | 85,417.68346 | 84,196.95936 | 76,621.89497 | 2061.33780 | 2.07492 |
| AOA | 85,498.82923 | 82,738.60151 | 80,625.48721 | 1472.93601 | 1.87460 |
| AVOA | 85,470.17433 | 85,436.57894 | 85,211.57496 | 56.83115 | 2.00904 |
| BBOA | 67,547.45237 | 51,280.09397 | 41,368.89540 | 7020.17021 | 2.91190 |
| EVO | 78,809.55710 | 60,138.00288 | 40,804.90764 | 9097.12415 | 4.00890 |
| GJO | 85,509.21685 | 85,072.63773 | 84,305.32188 | 346.53160 | 2.06363 |
| HCO | 62,000.33053 | 40,744.99121 | 26,775.78264 | 9688.44748 | 1.27705 |
| RUN | 84,406.39164 | 78,291.80792 | 72,219.81683 | 2999.95596 | 5.48862 |
| SHO | 82,042.76450 | 66,561.86365 | 57,683.98071 | 6263.99900 | 2.06516 |
| PSO | 85,541.56851 | 84,461.70325 | 75,074.61692 | 2225.01216 | 1.77061 |
| DE | 85,488.74415 | 85,410.85487 | 85,241.52514 | 82.15798 | 2.10831 |
| WOA | 84,821.70430 | 66,373.59830 | 43,374.08175 | 9824.43939 | 1.49705 |
Table 30. The best results of ICOA and competing algorithms for rolling element bearing design.
| Alg. | X1 | X2 | X3 | X4 | X5 | X6 | X7 | X8 | X9 | X10 | Optimal Load-Carrying Capacity |
|---|---|---|---|---|---|---|---|---|---|---|---|
| ICOA | 150.00000 | 19.60000 | 14.05500 | 0.51500 | 0.51500 | 0.49467 | 0.70000 | 0.37348 | 0.09826 | 0.60029 | 85,547.81075 |
| COA | 150.00000 | 19.59040 | 14.04385 | 0.51500 | 0.54838 | 0.41610 | 0.69977 | 0.31708 | 0.07042 | 0.65990 | 85,417.68346 |
| AOA | 150.00000 | 19.59444 | 14.05402 | 0.51500 | 0.51500 | 0.40000 | 0.70000 | 0.34178 | 0.02000 | 0.85000 | 85,498.82923 |
| AVOA | 149.99997 | 19.60000 | 14.03587 | 0.51500 | 0.51502 | 0.42607 | 0.70000 | 0.37327 | 0.02522 | 0.73401 | 85,470.17433 |
| BBOA | 140.15308 | 17.40978 | 13.64071 | 0.51500 | 0.51500 | 0.45613 | 0.64064 | 0.31034 | 0.05163 | 0.79420 | 67,547.45237 |
| EVO | 144.13745 | 18.80901 | 13.89104 | 0.51500 | 0.51500 | 0.47843 | 0.67646 | 0.38529 | 0.06847 | 0.60000 | 78,809.55710 |
| GJO | 150.00000 | 19.59682 | 14.05670 | 0.51500 | 0.60000 | 0.43896 | 0.70000 | 0.36977 | 0.02046 | 0.67765 | 85,509.21685 |
| HCO | 136.45277 | 17.75661 | 13.44485 | 0.52000 | 0.58243 | 0.48522 | 0.64840 | 0.39257 | 0.03490 | 0.64648 | 62,000.33053 |
| RUN | 147.80636 | 19.53148 | 13.89983 | 0.51501 | 0.56775 | 0.40217 | 0.69759 | 0.30157 | 0.08539 | 0.74734 | 84,406.39164 |
| SHO | 148.83557 | 19.07543 | 14.22603 | 0.51500 | 0.51500 | 0.40000 | 0.69609 | 0.30000 | 0.02000 | 0.60000 | 82,042.76450 |
| PSO | 149.99732 | 19.59998 | 14.05478 | 0.51500 | 0.54025 | 0.48633 | 0.70000 | 0.31846 | 0.06652 | 0.74275 | 85,541.56851 |
| DE | 150.00000 | 19.59970 | 14.04378 | 0.51500 | 0.56489 | 0.50000 | 0.70000 | 0.33171 | 0.03925 | 0.61213 | 85,488.74415 |
| WOA | 149.95849 | 19.48097 | 14.11789 | 0.51500 | 0.59783 | 0.44713 | 0.69981 | 0.30000 | 0.06881 | 0.80023 | 84,821.70430 |
Table 31. Statistical comparisons of ICOA and competing algorithms for heat exchanger network design.
| Alg. | Best | Ave | Worst | Std | Execute Time |
|---|---|---|---|---|---|
| ICOA | 7083.33171 | 7083.33171 | 7083.33171 | 6.89 × 10^−11 | 1.68429 |
| COA | 7083.33173 | 7083.43778 | 7083.86566 | 1.34 × 10^−1 | 1.65863 |
| AOA | 7083.39035 | 10,497.48515 | 14,636.75742 | 2.43 × 10^3 | 1.47541 |
| AVOA | 7083.39657 | 9457.66839 | 13,607.05720 | 1.97 × 10^3 | 1.74121 |
| BBOA | 7083.34356 | 9996.10612 | 9996.10612 | 2.02 × 10^3 | 2.41270 |
| EVO | 3.010 × 10^18 | 1.392 × 10^20 | 4.958 × 10^20 | 1.26 × 10^20 | 3.81911 |
| GJO | 7084.06861 | 9586.58148 | 13,075.74652 | 2.09 × 10^3 | 1.64325 |
| HCO | 1.01 × 10^20 | 8.45 × 10^20 | 1.53 × 10^21 | 3.69 × 10^20 | 1.02708 |
| RUN | 5.02 × 10^14 | 4.36 × 10^16 | 1.47 × 10^17 | 4.09 × 10^16 | 4.40927 |
| SHO | 7085.31046 | 10,740.24843 | 13,803.23418 | 1.82 × 10^3 | 1.70422 |
| PSO | 7278.55164 | 38,496.92404 | 657,571.58608 | 1.19 × 10^5 | 1.53752 |
| DE | 7083.33172 | 7083.33173 | 7083.33174 | 7.75 × 10^−6 | 1.83050 |
| WOA | 7084.11786 | 7084.18510 | 7084.33171 | 7.42 × 10^−2 | 1.24512 |
Table 32. The best results of ICOA and competing algorithms for heat exchanger network design.
| Alg. | X1 | X2 | X3 | X4 | X5 | X6 | X7 | X8 | Optimal Weight |
|---|---|---|---|---|---|---|---|---|---|
| ICOA | 833.33171 | 1000.00000 | 5250.00000 | 200.00000 | 226.52015 | 200.00000 | 208.14713 | 313.50735 | 7083.33171 |
| COA | 833.33173 | 1000.00000 | 5250.00000 | 200.00000 | 342.14339 | 200.00000 | 213.24883 | 372.45671 | 7083.33173 |
| AOA | 833.39657 | 1000.00000 | 5250.00000 | 200.00000 | 312.80710 | 200.00000 | 200.00000 | 300.85189 | 7083.39035 |
| AVOA | 6401.97024 | 1955.08696 | 5250.00000 | 200.00000 | 255.36994 | 200.00000 | 285.03443 | 300.00000 | 7083.39657 |
| BBOA | 833.34356 | 1000.00000 | 5250.00000 | 200.00000 | 309.78461 | 200.00000 | 201.20677 | 300.26021 | 7083.34356 |
| EVO | 2323.70447 | 4196.41566 | 8010.73251 | 215.17004 | 368.72519 | 206.77365 | 209.83636 | 466.21007 | 3.01 × 10^18 |
| GJO | 834.06861 | 1000.00000 | 5250.00000 | 200.00000 | 334.32038 | 200.00000 | 214.57454 | 362.41009 | 7084.06861 |
| HCO | 7514.54982 | 7008.70382 | 7296.34686 | 207.76339 | 329.57803 | 319.18097 | 260.86927 | 371.01401 | 1.01 × 10^20 |
| RUN | 8229.17631 | 4692.25921 | 9044.09747 | 200.13973 | 299.25466 | 200.14378 | 208.87721 | 391.53264 | 5.02 × 10^14 |
| SHO | 835.31046 | 1000.00000 | 5250.00000 | 200.00000 | 240.61885 | 200.00000 | 267.08589 | 310.30210 | 7085.31046 |
| PSO | 841.34890 | 1035.19770 | 5385.30518 | 200.00000 | 255.61261 | 200.00000 | 206.74049 | 312.88869 | 7278.55164 |
| DE | 834.11786 | 1000.00000 | 5250.00000 | 200.00000 | 238.24044 | 200.00000 | 331.58420 | 330.08496 | 7083.33172 |
| WOA | 834.11786 | 1000.00000 | 5250.00000 | 200.00000 | 238.24044 | 200.00000 | 331.58420 | 330.08496 | 7084.11786 |
Table 33. Statistical comparisons of ICOA and competing algorithms for tubular column design.
| Alg. | Best | Ave | Worst | Std | Execute Time |
|---|---|---|---|---|---|
| ICOA | 26.53132788 | 26.531344 | 26.532119 | 4.65 × 10^−5 | 1.11318 |
| COA | 26.53133097 | 26.531420 | 26.531734 | 8.46 × 10^−5 | 1.10274 |
| AOA | 26.85384345 | 27.742752 | 28.913339 | 4.93 × 10^−1 | 1.09600 |
| AVOA | 26.53132791 | 26.531352 | 26.531446 | 3.26 × 10^−5 | 1.73041 |
| BBOA | 26.78579561 | 28.541879 | 31.572845 | 1.34 × 10^0 | 2.42526 |
| EVO | 26.55358141 | 27.773353 | 31.533105 | 1.03 × 10^0 | 3.70978 |
| GJO | 26.53456288 | 26.543037 | 26.553968 | 5.42 × 10^−3 | 1.09865 |
| HCO | 26.64907321 | 29.219263 | 33.194887 | 1.47 × 10^0 | 1.10662 |
| RUN | 26.53470498 | 26.552540 | 26.603866 | 1.61 × 10^−2 | 1.90228 |
| SHO | 26.53214517 | 26.701739 | 27.305270 | 1.94 × 10^−1 | 1.66486 |
| PSO | 26.53448094 | 26.540042 | 26.550116 | 5.94 × 10^−3 | 1.55243 |
| DE | 26.53132922 | 26.534228 | 26.540988 | 4.50 × 10^−3 | 1.81912 |
| WOA | 26.58234046 | 28.255892 | 33.192385 | 1.61 × 10^0 | 1.25096 |
Table 34. The best results of ICOA and competing algorithms for tubular column design.
| Alg. | X1 | X2 | Optimum Cost |
|---|---|---|---|
| ICOA | 5.45116 | 0.29197 | 26.53132788 |
| COA | 5.45116 | 0.29197 | 26.53133097 |
| AOA | 5.36765 | 0.30579 | 26.85384345 |
| AVOA | 5.45116 | 0.29197 | 26.53132791 |
| BBOA | 5.38409 | 0.30295 | 26.78579561 |
| EVO | 5.44605 | 0.29285 | 26.55358141 |
| GJO | 5.45068 | 0.29207 | 26.53456288 |
| HCO | 6.23869 | 0.33817 | 26.64907321 |
| RUN | 5.45159 | 0.29199 | 26.53470498 |
| SHO | 5.45094 | 0.29200 | 26.53214517 |
| PSO | 5.45081 | 0.29206 | 26.53448094 |
| DE | 5.45116 | 0.29197 | 26.53132922 |
| WOA | 5.43733 | 0.29418 | 26.58234046 |
Table 35. Keywords of ROAS problem.
| Keyword | Variable |
|---|---|
| Makarna lütfen | x |
| Pasta | y |
| Toddler food | z |
| Organic | a |
| Baby biscuit | b |
| Biscuit | c |
| Baby semolina | d |
| Baby tarhana | e |
| Baby butter | f |
| Rice flour | g |
| Baby curd | h |
| Gluten-free pasta | p |
| Pudding | w |
Table 36. The results of ICOA and competitor algorithms for ROAS problems.
| | ICOA | COA | APO | FLA | GGO | HO | HOA | CFOA | EEFO | PO |
|---|---|---|---|---|---|---|---|---|---|---|
| Σx | 28,000.04 | 26,000.09 | 27,768.23 | 27,998.53 | 26,020.49 | 26,091.9 | 26,040.57 | 27,868.94 | 26,136.68 | 24,001.39 |
| Σy | 9099.80 | 9118.60 | 9000.69 | 9082.79 | 9010.61 | 9010.61 | 9002.57 | 9096.33 | 9046 | 9013 |
| Σz | 6350.60 | 6085.88 | 6353.97 | 6072.36 | 6031.64 | 6009.1 | 6070.76 | 6160.19 | 6047.69 | 6305.72 |
| Σa | 4800 | 4731.69 | 4752.05 | 4737.15 | 4780.59 | 4758.67 | 4753.43 | 4732.73 | 4754.47 | 4726.49 |
| Σb | 500 | 256.14 | 250 | 263.9 | 254.64 | 251.22 | 250 | 317.68 | 308.92 | 250 |
| Σc | 1125 | 1124.32 | 1021.3 | 1137.41 | 900.26 | 1131.82 | 1120.88 | 904.98 | 1141.51 | 1048.93 |
| Σd | 6900.1 | 6905.52 | 6891.82 | 6913.15 | 6850.4 | 6865.08 | 6857.95 | 6907.49 | 6870.19 | 6857.59 |
| Σe | 2760 | 2807.87 | 2761.33 | 2763.39 | 2768.46 | 2760.07 | 2760.52 | 2798.87 | 2760 | 2796.4 |
| Σf | 5600.02 | 5597.47 | 5609.19 | 5607.62 | 5599.27 | 5631.74 | 5597.69 | 5599.35 | 5600.96 | 5592.83 |
| Σg | 213.66 | 202.38 | 214.93 | 200.79 | 204.925 | 200.07 | 200.68 | 202.07 | 202.36 | 200.71 |
| Σh | 110.11 | 54.89 | 63.55 | 116.68 | 51.6 | 50.41 | 50.8 | 82.68 | 50 | 50.87 |
| Σp | 70 | 70.09 | 70.62 | 70 | 70.48 | 70.2 | 70.02 | 76.95 | 75.79 | 70.95 |
| Σw | 700 | 700.04 | 700.04 | 700 | 700.02 | 700 | 700 | 700 | 700.06 | 700 |
| Total | 66,229.34 | 63,654.97 | 65,457.7 | 65,663.75 | 63,243.39 | 63,530.87 | 63,475.85 | 65,448.2 | 63,694.61 | 61,614.88 |
| ROAS | 144.601 | 144.072 | 143.353 | 144.211 | 142.694 | 143.202 | 143.838 | 143.15 | 143.939 | 142.473 |
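As a quick sanity check on the ROAS results, the Total row of Table 36 should equal the sum of the thirteen per-keyword budgets in the same column. The sketch below does this for the ICOA column only; the dictionary keys are the variable names from Table 35, and the values are the ICOA figures as reconstructed here (the flattened web layout makes some digit groupings ambiguous, so treat this as a verification aid, not authoritative data).

```python
# Consistency check for Table 36: sum the per-keyword budgets of the ICOA
# column and compare against the reported Total of 66,229.34.
icoa_budgets = {
    "x": 28000.04,  # Makarna lütfen
    "y": 9099.80,   # Pasta
    "z": 6350.60,   # Toddler food
    "a": 4800.00,   # Organic
    "b": 500.00,    # Baby biscuit
    "c": 1125.00,   # Biscuit
    "d": 6900.10,   # Baby semolina
    "e": 2760.00,   # Baby tarhana
    "f": 5600.02,   # Baby butter
    "g": 213.66,    # Rice flour
    "h": 110.11,    # Baby curd
    "p": 70.00,     # Gluten-free pasta
    "w": 700.00,    # Pudding
}

total = sum(icoa_budgets.values())
# The reported Total is 66,229.34; any tiny difference is rounding
# accumulated in the two-decimal table entries.
print(f"ICOA total spend: {total:.2f}")
```

The same check can be repeated for the other nine columns by swapping in their values; agreement to within a cent or two confirms the column was parsed consistently.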
Share and Cite

Gezici, H. A Novel Exploration Stage Approach to Improve Crayfish Optimization Algorithm: Solution to Real-World Engineering Design Problems. Biomimetics 2025, 10, 411. https://doi.org/10.3390/biomimetics10060411