Implementation of an Enhanced Crayfish Optimization Algorithm

This paper presents an enhanced crayfish optimization algorithm (ECOA) that combines four improvement strategies. First, the Halton sequence is used to improve the population initialization of the crayfish optimization algorithm. Second, a quasi opposition-based learning strategy is introduced to generate quasi-opposite solutions of the population, increasing the algorithm's search ability. Third, an elite factor guides the predation stage to reduce the blindness of that stage. Finally, the fish aggregation device effect is introduced to strengthen the algorithm's ability to escape local optima. Tests on the widely used IEEE CEC2019 test function set verify the validity of the proposed ECOA. The experimental results show that, compared with other popular algorithms, the ECOA has a faster convergence speed, greater performance stability, and a stronger ability to escape local optima. Finally, the ECOA is applied to two real-world engineering optimization problems, verifying its ability to solve practical optimization problems and its superiority over other algorithms.


Introduction
Engineering problems have become increasingly complex with the rapid development of engineering technology [1]. As a result, more and more problems need to be optimized, making optimization increasingly important [2]. Traditional optimization algorithms, such as linear programming [3], gradient descent [4], and the simplex method [5], were often used to solve these problems in the past. Although they perform well on simple problems, their limitations become apparent as engineering challenges and data-processing requirements grow: traditional algorithms struggle with high-dimensional, nonlinear, or discrete optimization problems. Swarm intelligence optimization algorithms [6] were therefore developed; they simulate the behavior of biological groups in nature and seek the optimal solution through cooperation and information exchange between individuals [7]. Compared with traditional algorithms, swarm intelligence algorithms have stronger global optimization ability, robustness, and adaptability, and can handle a variety of complex optimization tasks. For such complex problems, a swarm intelligence optimization algorithm can help find the optimal or a near-optimal solution [8], improving both solving efficiency and solution quality.
Many swarm optimization algorithms have been proposed recently owing to their simplicity and strong optimization capabilities [9]. These include the ant colony optimization algorithm (ACO) [10], inspired by the foraging behavior of ants in nature; the particle swarm optimization algorithm (PSO) [11], which simulates bird foraging behavior; and the whale optimization algorithm (WOA) [12], which simulates the unique search and encircling mechanisms of humpback whales, among others. Although these standard swarm algorithms have succeeded on many problems, they can still face challenges in some cases [13]. Scholars have proposed improved versions to address these challenges and improve the performance of swarm intelligence algorithms. For instance, Hadi Moazen et al. [14] identified shortcomings of PSO and developed the improved PSO-ELPM algorithm with elite learning, enhanced parameter updating, and exponential mutation operators; it balances the exploration and exploitation capabilities of PSO and produces higher accuracy at an acceptable time complexity, as demonstrated on the CEC 2017 test set. Similarly, Ya Shen et al. [15] proposed the MEWOA algorithm, which divides the whales into three subpopulations and adopts different update methods and evolutionary strategies; it has been applied to global optimization and engineering design problems, and its effectiveness and competitiveness have been verified. Furthermore, Fang Zhu et al. [16] found that the dung beetle optimization algorithm (DBO) was prone to falling into local optima late in the optimization and proposed the QHDBO algorithm, based on quantum computing and a multi-strategy mixture, to address this issue; experiments verified the strategy's effectiveness and showed significant improvements in the convergence speed and accuracy of DBO. These new and improved optimization algorithms have significantly advanced swarm intelligence and broadened its applications, including workshop scheduling [17], microgrid scheduling [18], vehicle path planning [19], engineering design [20], and wireless sensor layout [21].
The crayfish optimization algorithm (COA) [22], proposed in September 2023, is a new swarm intelligence optimization algorithm inspired by the summering, competition, and predation behavior of crayfish. It is competitive in global optimization and engineering optimization. However, experiments show that its convergence can be slow on some problems and that it is prone to falling into local optima. This paper presents four strategies to comprehensively improve the optimization performance of COA. The main contributions of this paper are as follows: (1) An enhanced crayfish optimization algorithm (ECOA) is proposed by combining four improvement strategies. The Halton sequence is introduced into the population initialization of COA, making the initial crayfish population more uniformly distributed, increasing initial population diversity, and improving the early convergence rate. Before the crayfish begin to summer, compete, and hunt, QOBL is applied to the population, which enlarges the search range and improves the quality of candidate solutions, accelerating convergence. Because crayfish ingest food directly when the food size is appropriate, this process has a certain blindness, so an elite factor is introduced to guide the crayfish. The fish aggregation device (FADs) effect from the marine predator algorithm (MPA) is introduced into the predation phase to enhance the ability of COA to escape local optima. (2) The proposed ECOA solves the widely used IEEE CEC2019 test function set and is compared with four standard swarm intelligence algorithms, four improved swarm intelligence algorithms, and the crayfish optimization algorithm. Five experiments were carried out: numerical experiments, iteration-curve analysis, box plot analysis, the Wilcoxon rank sum test, and ablation experiments. The experimental results show that the proposed ECOA is competitive: compared with similar algorithms, it has a faster convergence speed, higher convergence accuracy, and a stronger ability to escape local optima. (3) The ECOA is applied to two practical optimization problems, the three-bar truss design and the pressure vessel design, and compared with other algorithms; it shows higher convergence accuracy, faster convergence speed, and higher stability.
The rest of this paper is arranged as follows: Part 2 introduces the standard crayfish optimization algorithm (COA), Part 3 details the proposed enhanced crayfish optimization algorithm (ECOA), Part 4 tests the effectiveness of the ECOA and its superiority over other optimization algorithms through five experiments, Part 5 applies the ECOA to practical engineering optimization problems, and Part 6 presents the conclusion and future work.

The Crayfish Optimization Algorithm (COA)
The crayfish optimization algorithm (COA) is a novel swarm intelligence optimization algorithm inspired by the summering, competition, and predation behavior of crayfish. Crayfish are freshwater crustaceans related to shrimp that live in various freshwater habitats. Research has shown that crayfish behave differently at different ambient temperatures. In the mathematical model of COA, heat escape, competition, and predation are defined as three distinct stages, and the algorithm is driven into different stages by defining different temperature intervals. The summering stage is the exploration stage of COA, while the competition and foraging stages constitute its exploitation stage. The steps of the COA are described in detail below.

Population Initialization
The COA is a population-based algorithm that starts with population initialization to provide a suitable starting point for the subsequent optimization process. In the COA model, the location of each crayfish represents a candidate solution to a problem of dimension D, and the locations of the N crayfish constitute a set of candidate solutions X, whose expression is shown in Equation (1).
where X is the location matrix of the initial crayfish population, N is the size of the crayfish population, D is the dimension of the problem, and X_{i,j} is the initial location of the i-th crayfish in the j-th dimension, generated randomly within the search space of the problem; the expression for X_{i,j} is shown in Equation (2).
where lb_j is the lower bound of the j-th dimension of the problem variable in the search space, ub_j is the upper bound, and r is a uniformly distributed random number in [0, 1].
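As a brief illustration, the random initialization of Equations (1) and (2) can be sketched in NumPy as follows; the function name and interface are our own and not part of the original paper.

```python
import numpy as np

def initialize_population(n, dim, lb, ub, seed=None):
    """Random COA initialization per Equation (2):
    X[i, j] = lb[j] + r * (ub[j] - lb[j]), with r ~ U(0, 1)."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, dtype=float), np.asarray(ub, dtype=float)
    return lb + rng.random((n, dim)) * (ub - lb)

# Example: 30 crayfish in a 5-dimensional search space [-100, 100]^5.
X = initialize_population(30, 5, lb=[-100] * 5, ub=[100] * 5, seed=0)
```

Each row of X is one candidate solution; the same interface is reused by the Halton-based initialization introduced later.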

Define Temperature and Crawfish Food Intake
Crayfish enter different stages at different ambient temperatures. When the temperature exceeds 30 °C, the crayfish enter the summering stage. Crayfish show strong predation behavior between 15 °C and 30 °C, with 25 °C being the optimal temperature. Their food intake is also affected by temperature and is approximately normally distributed with temperature. In COA, the temperature Temp is defined as in Equation (3).
where Temp is the ambient temperature. The mathematical expression for the food intake P of crayfish is shown in Equation (4).
where µ is the optimal temperature, and C_1 and σ control the food intake of crayfish at different ambient temperatures.
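The temperature and intake model of Equations (3) and (4) can be sketched as below. The parameter values C_1 = 0.2, σ = 3, µ = 25, and the uniform temperature range [20, 35] follow our reading of the original COA paper and should be treated as assumptions rather than definitions taken from this text.

```python
import numpy as np

def temperature(rng):
    # Equation (3): Temp is drawn uniformly from [20, 35] each iteration
    # (an assumption based on the original COA paper).
    return rng.random() * 15 + 20

def food_intake(temp, mu=25.0, sigma=3.0, c1=0.2):
    # Equation (4): Gaussian-shaped intake peaking at the optimal
    # temperature mu; parameter values are assumptions.
    return c1 * (1.0 / (np.sqrt(2.0 * np.pi) * sigma)) \
              * np.exp(-((temp - mu) ** 2) / (2.0 * sigma ** 2))
```

With this form, intake is maximal at µ = 25 °C and falls off symmetrically on either side, matching the "approximately normal" description above.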

Summer Phase
When the temperature Temp is higher than 30 °C, crayfish choose the cave X_shade to escape the heat; this is the heat-escape stage of the COA. The mathematical definition of X_shade is shown in Equation (5).
where X_G is the best position found by the algorithm so far, and X_L is the best position in the current crayfish population.
Crayfish may compete for caves when escaping the heat. If there are many crayfish and few caves, multiple crayfish will compete for the same cave; if caves are plentiful, this does not occur. In COA, a random number rand between 0 and 1 determines whether competition occurs. When rand < 0.5, no other crayfish compete for the cave, and the crayfish enters the cave directly to escape the heat. The mathematical expression of this process is shown in Equation (6).
where t is the current iteration number, X^t_{i,j} is the current position of the i-th crayfish in the j-th dimension, t + 1 denotes the next generation, r is a random number in [0, 1], and the value of C_2 decreases as the iterations increase, as expressed in Equation (7).
where T is the maximum number of iterations of the algorithm.
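The no-competition cave-entry update of Equations (5)-(7) can be sketched as follows. The forms X_shade = (X_G + X_L) / 2 and C_2 = 2 − t / T follow our reading of the original COA paper and are assumptions here; the function name is our own.

```python
import numpy as np

def summer_resort_step(x, x_g, x_l, t, t_max, rng):
    """Heat-escape update (Equations (5)-(7)), no-competition branch.
    Assumes X_shade = (X_G + X_L) / 2 and C2 = 2 - t / t_max."""
    x_shade = (x_g + x_l) / 2.0          # Equation (5)
    c2 = 2.0 - t / t_max                 # Equation (7): decays over iterations
    r = rng.random(x.shape)
    return x + c2 * r * (x_shade - x)    # Equation (6)
```

Early in the run C_2 is close to 2, giving large steps toward the cave; late in the run it approaches 1, so the moves become more conservative.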

Competition Phase
When the temperature Temp is higher than 30 °C and the random number rand ≥ 0.5, multiple crayfish compete for a cave and enter the competition stage. At this stage, the position of the crayfish is updated as shown in Equation (8).
where z is a random crayfish in the population, and its expression is shown in Equation (9).
where r is a random number in [0, 1], and round(·) is the rounding function.

Predation Stage
When the temperature Temp ≤ 30 °C, the crayfish hunt: they move toward their food and eat it. The food location X_food is defined in Equation (10).
Before ingesting food, the crayfish judges the size of the food to decide how to eat it. The food size Q is defined in Equation (11),
where C_3 is the food factor, representing the maximum food size, a constant equal to 3; Fitness_i is the fitness value of the i-th crayfish, that is, its objective function value; and Fitness_food is the fitness value of the food location X_food. The crayfish judges its food against the maximum food size C_3. When Q > (C_3 + 1)/2, the food is too large, and the crayfish first tears it with its chelae (the claws, its first pair of legs); the mathematical expression is Equation (12).
After that, the crayfish alternately feeds with its second and third walking legs, a process simulated in the COA using sine and cosine functions, as shown in Equation (13).
where P is the food intake and r is a random number in [0, 1]. When Q ≤ (C_3 + 1)/2, the food size is appropriate and the crayfish ingests it directly; the position-update expression is shown in Equation (14).
where r is a random number belonging to [0, 1].
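The two predation branches of Equations (12)-(14) can be sketched together as follows. The exact update forms (exponential tearing, sine-cosine feeding, and direct ingestion) follow our reading of the original COA paper and are illustrative only; the function name is our own.

```python
import numpy as np

def predation_step(x, x_food, q, p, rng, c3=3.0):
    """Predation-stage update sketch (Equations (12)-(14));
    p is the food intake from Equation (4)."""
    r = rng.random(x.shape)
    if q > (c3 + 1.0) / 2.0:
        # Food too large: tear it with the chelae (Equation (12)), then feed
        # alternately, modeled with sine and cosine terms (Equation (13)).
        x_food = np.exp(-1.0 / q) * x_food
        return x + x_food * p * (np.cos(2 * np.pi * r) - np.sin(2 * np.pi * r))
    # Food size appropriate: ingest directly (Equation (14)).
    return (x - x_food) * p + p * r * x
```

The threshold (C_3 + 1)/2 = 2 separates the two branches, matching the description in the text.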

The Enhanced Crayfish Optimization Algorithm (ECOA)
This section describes the proposed enhanced crayfish optimization algorithm (ECOA). Because the crayfish optimization algorithm (COA) suffers from slow convergence and easily falls into local optima, this paper adopts four strategies to comprehensively enhance its optimization performance. First, the Halton sequence is used for population initialization so that the initial population is more evenly distributed in the search space. Second, a quasi opposition-based learning strategy generates quasi-opposite solutions of the population, and a greedy strategy selects the next generation, which enlarges the search space and enhances the diversity of the crayfish population. Third, the foraging stage is improved by introducing an elite guiding factor to raise the optimization rate of this stage. Finally, after the foraging stage, the FADs effect of the marine predator algorithm is introduced to strengthen the algorithm's ability to escape local optima. The enhancement strategies are detailed below.

Halton Sequence Population Initialization
In swarm intelligence optimization algorithms, population initialization creates initial solutions that can converge to better solutions through iteration [23]. This process forms the basis of the algorithm's iterations, and the quality of the initialization directly affects the iteration speed and global optimization ability. In the standard COA, crayfish populations are generated randomly. Although this initialization is simple and easy to implement, it may lead to an uneven distribution of crayfish, so that the population cannot cover the entire search space well, resulting in slow convergence or even premature convergence to a local optimum.
In contrast, the Halton sequence [24], a low-discrepancy [25] numerical sequence, distributes the generated points evenly throughout the search space. Unlike random sequences, which produce different points each time, it is deterministic. This paper uses the Halton sequence to replace the original random initialization strategy, making the initial population cover the whole search space more evenly, improving population diversity, and speeding up convergence. The expression for the initial location of each crayfish produced by Halton-sequence initialization is shown in Equation (15).
where Haltonset is a value based on the Halton sequence.
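A minimal sketch of Halton-sequence initialization (Equation (15)) is given below; it builds each dimension from the radical inverse in a distinct prime base (hard-coded here for up to 12 dimensions) and scales the points into [lb, ub]. The helper names are our own.

```python
import numpy as np

def radical_inverse(index, base):
    """Van der Corput radical inverse of `index` in the given base."""
    result, f = 0.0, 1.0 / base
    while index > 0:
        result += f * (index % base)
        index //= base
        f /= base
    return result

def halton_population(n, dim, lb, ub):
    """Halton-sequence initialization (Equation (15)): dimension j uses the
    j-th prime as its base, giving a low-discrepancy cover of [lb, ub]."""
    primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37]  # supports dim <= 12
    lb, ub = np.asarray(lb, dtype=float), np.asarray(ub, dtype=float)
    pts = np.array([[radical_inverse(i + 1, primes[j]) for j in range(dim)]
                    for i in range(n)])
    return lb + pts * (ub - lb)
```

For higher dimensions, a library generator such as `scipy.stats.qmc.Halton` (which also supports scrambling) can be used instead of the hand-rolled version.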

Quasi Opposition-Based Learning
Opposition-based learning (OBL) [26] was first proposed by Tizhoosh in 2005 and has been widely used to improve swarm intelligence optimization algorithms. The idea is to generate the opposite solution of the current population and, by comparing the current and opposite solutions, retain the better of the two as the next generation. OBL rests on the observation that the opposite of a candidate solution may be closer to the optimum, so the process enlarges the search range of the population and improves the quality of candidate solutions. The position expression of the generated opposite solutions is shown in Equation (16).
where lb is the lower bound of the problem variable in the search space, and ub is the upper bound.
OBL also has limitations. Although some OBL-based improvements perform well, opposition-based learning mainly accelerates convergence in the early iterations; in the late iterations, the generated opposite solutions may slow the algorithm down because of the excessive position change. Quasi opposition-based learning (QOBL) [27] is a more effective oppositional learning strategy evolved from OBL. It places the generated solution near the center M of the search space rather than at the exact opposite value; the value of M is shown in Equation (17). Figure 1 shows the positions of the quasi-opposite and ordinary opposite solutions. As the figure shows, the quasi-opposite solution generated by QOBL lies between the opposite solution and the center of the search space.

The expression X_i^QOBL for the position of the quasi-opposite solution generated by QOBL is shown in Equation (18).
where r is a uniformly distributed random number in [0, 1].
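Equations (16)-(18) can be sketched together as follows: the quasi-opposite point is drawn uniformly between the search-space centre M and the ordinary opposite point, and the greedy selection against the current solution is shown for one individual. The function names are our own.

```python
import numpy as np

def quasi_opposite(x, lb, ub, rng):
    """QOBL position sketch (Equations (16)-(18))."""
    lb, ub = np.asarray(lb, dtype=float), np.asarray(ub, dtype=float)
    m = (lb + ub) / 2.0            # centre of the search space (Equation (17))
    x_o = lb + ub - x              # ordinary opposite solution (Equation (16))
    r = rng.random(x.shape)
    return m + r * (x_o - m)       # quasi-opposite solution (Equation (18))

def greedy_select(x, x_qo, fitness):
    # Keep whichever of the pair has the better (smaller) fitness value.
    return x if fitness(x) <= fitness(x_qo) else x_qo
```

Because the quasi-opposite point never overshoots past the opposite point, late-iteration position changes stay smaller than with plain OBL, which is the motivation given above.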


Elite Steering Factor
Equation (14) describes that crayfish ingest food directly when the food size is appropriate, and there is a certain blindness in this process. Therefore, this paper introduces the elite factor [28] to guide the position update, as shown in Equation (19). Replacing Equation (14) with it guides the algorithm to converge faster and improves the optimization accuracy to a certain extent. If the crayfish position updated by Equation (19) falls outside the search boundary, the boundary value is taken.
where X L is the optimal position of the current crayfish population, that is the elite.

Vortex Formation and Fish Aggregation Device Effect
The fish aggregation device (FADs) effect [29] was first modeled in the marine predator algorithm. Under the FADs effect, the positions of individuals may take a long jump, which helps the algorithm escape local optima. In this paper, the FADs effect is added to the predation phase of COA: if the fitness fitness^{t+1}_i of a crayfish after the update of Equation (19) is worse than its previous fitness fitness^t_i, its position is perturbed according to the FADs effect, as expressed in Equation (20).
where X^t_i is the current position of the i-th crayfish, X^{t+1}_i is its position in the next generation, the value of FADs is 0.2, U is a binary vector whose elements are 0 or 1, r is a uniformly distributed random number in [0, 1], and X^t_{r1} and X^t_{r2} are the positions of two random individuals in the current population.
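A sketch of the FADs perturbation of Equation (20), following the branch forms of the marine predator algorithm; the adaptive factor CF and the construction of the binary mask U are taken from MPA and reflect our reading rather than this paper's exact formulation.

```python
import numpy as np

def fads_perturbation(x, lb, ub, t, t_max, pop, rng, fads=0.2):
    """FADs-effect long jump (Equation (20)), after the MPA formulation."""
    lb, ub = np.asarray(lb, dtype=float), np.asarray(ub, dtype=float)
    if rng.random() < fads:
        cf = (1.0 - t / t_max) ** (2.0 * t / t_max)      # adaptive step scale
        u = (rng.random(x.shape) < fads).astype(float)   # binary mask U
        return x + cf * (lb + rng.random(x.shape) * (ub - lb)) * u
    r = rng.random()
    r1, r2 = rng.choice(len(pop), size=2, replace=False)
    return x + (fads * (1.0 - r) + r) * (pop[r1] - pop[r2])
```

The first branch scatters a few dimensions toward random points in the search space (the long jump), while the second makes a differential move between two random individuals; only crayfish whose fitness worsened are perturbed.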

Pseudo-Code of the ECOA
The pseudo-code of the ECOA proposed in this paper is shown in Algorithm 1.

The time complexity of the ECOA is analyzed as follows. In the standard crayfish optimization algorithm (COA), N is the population size, D is the number of problem variables, and T is the maximum number of iterations; the time complexity of the standard COA is O(N × D × T). In the ECOA, the introduced Halton-sequence initialization has complexity O(N × D), the same as the original random initialization, and quasi opposition-based learning is also O(N × D). The elite factor introduced in the predation phase modifies an existing step of the original algorithm and does not increase its complexity. Because only a small number of crayfish undergo the FADs effect, its time complexity is much smaller than O(N × D). In summary, the overall complexity of the ECOA is less than O(N × D × (3 + T)). Therefore, the proposed ECOA adds little computational cost and remains on the same order of magnitude as the COA.

Experimental Scheme
In this section, we conduct simulation experiments on the IEEE CEC2019 [30] test function set to verify the optimization performance of the proposed enhanced crayfish optimization algorithm (ECOA). The name, dimension D, range, and optimal value of each function in the IEEE CEC2019 test set are shown in Table 1; the set contains 10 highly challenging single-objective minimization functions.

This paper compares the ECOA with nine swarm intelligence optimization algorithms. They include five standard optimization algorithms: the particle swarm optimization algorithm (PSO), the Aquila optimizer (AO) [31], the beluga whale optimization algorithm (BWO) [32], the golden jackal optimization algorithm (GJO) [33], and the crayfish optimization algorithm (COA); and four recently proposed improved algorithms: the chaotic sine-cosine Harris hawks optimization algorithm (CSCAHHO) [34], the adaptive opposition slime mould algorithm (AOSMA) [35], the mixed arithmetic-trigonometric optimization algorithm (ATOA) [36], and the adaptive grey wolf optimizer (AGWO) [37]. These comparison algorithms span the most widely used methods, recently proposed methods, and four advanced improved algorithms; they have demonstrated strong optimization performance in previous research, so comparisons against them better reflect the optimization ability of the proposed ECOA. The population size of all algorithms is set to 30, the maximum number of iterations to 2000, and the key parameters of each algorithm follow the original papers, as shown in Table 2.

All experiments were conducted on a computer with a Windows 10 operating system, an Intel(R) Core(TM) i7-7700HQ 2.80 GHz CPU, and 16 GB of memory. The simulation platform is MATLAB R2022a. The experiments comprise five parts: numerical experiment analysis, iteration-curve analysis, box plot analysis, the Wilcoxon rank sum test, and ablation experiments. Together, these experiments verify the effectiveness and superiority of the ECOA.

Numerical Experiment and Analysis
In this section, the results of 30 runs of each algorithm on CEC2019 are collected; the statistics include the best value (Best), the mean value (Mean), and the standard deviation (Std). The numerical results of the ECOA and the compared algorithms are shown in Table 3. Except for PSO and GJO, the ECOA and the other algorithms all reach the optimal value of 1 on F1. On F2-F9, the ECOA obtains the smallest mean and best values, and the smallest standard deviation on most test functions. Comparing the best values with the mean values shows that the proposed ECOA has higher optimization accuracy than the other algorithms, and the standard deviation statistics show that the ECOA is more robust on most test functions.

Iterative Curve Analysis
To further verify the optimization performance of the proposed ECOA, this paper averages the results of 30 independent runs of each algorithm and plots the iteration curves for intuitive comparison; the results are shown in Figure 2. On F1, apart from the slow convergence of PSO and GJO, all algorithms converge quickly to the optimal value, and the ECOA converges fastest. On F2, except for the slow convergence of PSO, GJO, ATOA, and AGWO, the algorithms converge quickly and to similar precision. On F3, F4, F7, F8, F9, and F10, although the ECOA is slightly slower than some algorithms in the early iterations, those algorithms fall into local optima as the iterations progress, while the ECOA continues to search, converges faster, and shows a stronger ability to escape local optima. On F5 and F6, the ECOA converges to a better value faster than the other algorithms. Overall, compared with both the standard and the improved algorithms, the proposed ECOA has a faster convergence speed and a stronger ability to escape local optima.



The Wilcoxon Rank Sum Test
The Wilcoxon rank sum test [39] is a nonparametric statistical method used to test whether the difference in optimization performance between two algorithms is statistically significant. This paper applies it to the results of 30 independent runs of the ECOA and of each compared algorithm on the IEEE CEC2019 test set, with the significance level set at p = 0.05. A p-value below 0.05 between the ECOA and a compared algorithm indicates a statistically significant difference in optimization performance. Table 4 shows the results of the Wilcoxon rank sum test between the ECOA and each algorithm. The p-value between the ECOA and the compared algorithms is below 0.05 on most functions, indicating a significant difference; "N/A" marks cases with no significant difference between the two algorithms. The last row of the table counts the functions on which the ECOA performs worse than, the same as, or better than each compared algorithm, denoted by "-", "=", and "+", respectively. The vast majority of p-values are below 0.05, with exceptions only on individual functions (highlighted in bold). The rank sum test therefore verifies that the ECOA's performance is significantly improved relative to the other algorithms.

Ablation Experiment
To further verify that each of the four improvement strategies adopted for the COA has a positive effect, ablation experiments are conducted. We compare HCOA (Halton-sequence initialization only), QCOA (quasi opposition-based learning only), ELCOA (elite factor only), FCOA (FADs effect only), the original COA, and the proposed ECOA, which combines all four strategies. Eight functions of different types were selected from the widely used IEEE CEC2017 test function set for the ablation experiments; their information is shown in Table 5. For fairness, the population size of every algorithm is set to 30 and the maximum number of iterations to 2000. The iteration curves of the ablation experiments are shown in Figure 4. On all eight functions, each single-strategy variant converges faster and more accurately than the original COA, and the proposed ECOA clearly outperforms all of the single-strategy variants. These experiments further verify the feasibility of each adopted strategy and show that combining the four strategies comprehensively improves the optimization performance of the original algorithm.
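The rank sum comparison used in this section can be sketched in pure Python via the usual normal approximation (for 30-run samples this is the standard approach; `scipy.stats.ranksums` provides a library implementation). The sample data below is hypothetical:

```python
import math

def rank_sum_p(x, y):
    """Two-sided Wilcoxon rank sum test p-value via the normal approximation."""
    combined = sorted((v, i) for i, v in enumerate(x + y))
    ranks = [0.0] * len(combined)
    i = 0
    while i < len(combined):
        j = i
        while j + 1 < len(combined) and combined[j + 1][0] == combined[i][0]:
            j += 1
        avg_rank = (i + j) / 2 + 1           # tied values share their average rank
        for k in range(i, j + 1):
            ranks[combined[k][1]] = avg_rank
        i = j + 1
    n1, n2 = len(x), len(y)
    w = sum(ranks[:n1])                      # rank sum of the first sample
    mean = n1 * (n1 + n2 + 1) / 2
    sd = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (w - mean) / sd
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Hypothetical 30-run results: clearly shifted vs. identical distributions.
a = [0.1 * i for i in range(30)]
b = [5 + 0.1 * i for i in range(30)]
p_diff = rank_sum_p(a, b)   # well below 0.05: significant difference
p_same = rank_sum_p(a, a)   # fully tied samples: no difference detected
assert p_diff < 0.05 < p_same
```

Counting, per compared algorithm, the functions where p < 0.05 and the ECOA's results are worse or better yields the "-"/"="/"+" tallies in the last row of Table 4.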

Practical Engineering Optimization Experiment
In this section, to verify the optimization performance of the proposed ECOA in solving practical engineering problems, we use the ECOA to solve two engineering design optimization problems [40]: the three-bar truss design problem and the pressure vessel design problem. Twelve standard and improved algorithms were tested for comparison, including the marine predator algorithm (MPA), sine cosine algorithm [41] (SCA), PSO, AO, BWO, GJO, COA, CSCAHHO, AOSMA, ATOA, and AGWO. All algorithms were run independently 30 times, with a population size of 30 and a maximum of 2000 iterations, and the experimental results were analyzed statistically.

Three-Bar Truss Design Problem
The three-bar truss design problem minimizes the volume of the truss by adjusting the cross-sectional areas (X1 and X2) under a stress constraint on each bar. The structure of the three-bar truss is shown in Figure 5, where l represents the spacing and X1, X2, and X3 represent the cross-sectional areas. The objective function is nonlinear, with two decision parameters and three inequality constraints. The fitness iteration curves of all algorithms are shown in Figure 6; compared with the other algorithms, the ECOA has a faster convergence speed and higher convergence accuracy. The numerical results of each algorithm are summarized in Table 6. The ECOA achieves higher accuracy and more stable performance.
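As context for the results above, the three-bar truss formulation commonly used in the benchmark literature (assumed here, with spacing l = 100 and load and allowable stress both equal to 2) can be written as a statically penalized objective, which is the usual way such constraints are fed to a metaheuristic:

```python
import math

# Three-bar truss design: minimize truss volume subject to stress constraints.
# Constants below follow the usual literature formulation (an assumption here).
L_SPACING, P, SIGMA = 100.0, 2.0, 2.0

def volume(x1, x2):
    """Truss volume for cross-sectional areas x1 (outer bars) and x2 (middle bar)."""
    return (2 * math.sqrt(2) * x1 + x2) * L_SPACING

def constraints(x1, x2):
    """The three stress constraints, each required to satisfy g_i(x) <= 0."""
    s2 = math.sqrt(2)
    denom = s2 * x1 ** 2 + 2 * x1 * x2
    return [
        (s2 * x1 + x2) / denom * P - SIGMA,
        x2 / denom * P - SIGMA,
        1.0 / (x1 + s2 * x2) * P - SIGMA,
    ]

def penalized_fitness(x1, x2, penalty=1e6):
    """Objective plus a static penalty on the total constraint violation."""
    viol = sum(max(0.0, g) for g in constraints(x1, x2))
    return volume(x1, x2) + penalty * viol

# The best-known solution reported in the literature lies near x1≈0.7887, x2≈0.4082,
# with volume ≈ 263.9; the first stress constraint is active there.
f = penalized_fitness(0.7887, 0.4082)
assert all(g <= 1e-3 for g in constraints(0.7887, 0.4082))
```

Any of the compared algorithms can then minimize `penalized_fitness` over the box 0 ≤ x1, x2 ≤ 1.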

Conclusions and Future Work
The crayfish optimization algorithm (COA) is a new swarm intelligence optimization algorithm that performs well in global and engineering optimization, but it also has some defects. In this paper, the standard COA is improved with a mixed strategy: (1) The Halton sequence is used to initialize the population so that the initial population is distributed more evenly in the search space, increasing the convergence speed of the early COA iterations. (2) Quasi opposition-based learning (QOBL) is introduced to generate the quasi-oppositional solution of the population; the better of the original and quasi-oppositional solutions enters the next generation, improving the quality of candidate solutions. (3) An elite guiding factor is introduced in the predation stage to avoid the blindness of this process. (4) The fish aggregation device (FADs) effect from the marine predator algorithm is introduced into the COA to enhance its ability to jump out of local optima. Based on the above, an enhanced crayfish optimization algorithm (ECOA) with better performance is proposed. This paper compares it with other standard and improved optimization algorithms on the CEC2019 test function set and on two real-world engineering optimization problems to verify the effectiveness of the ECOA. The experimental results show that the ECOA better balances exploration and exploitation and has a more robust global optimization ability. In future work, we will apply the ECOA to other practical problems, such as UAV track optimization, flexible shop scheduling, and microgrid optimization scheduling, to verify its ability to solve various complex practical problems.
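As an illustration of strategies (1) and (2), the following is a minimal sketch assuming simple box constraints [lb, ub]; the helper names are ours, not the paper's:

```python
import random

PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]  # one base per dimension (up to 10-D)

def van_der_corput(i, base):
    """i-th element of the van der Corput low-discrepancy sequence in a given base."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

def halton_population(n, dim, lb, ub):
    """Strategy (1): Halton-sequence initialization, one prime base per dimension,
    giving a more even spread over the search space than uniform random sampling."""
    return [[lb + (ub - lb) * van_der_corput(i + 1, PRIMES[d]) for d in range(dim)]
            for i in range(n)]

def quasi_opposite(x, lb, ub):
    """Strategy (2): QOBL draws each coordinate uniformly between the interval
    centre and the opposite point lb + ub - x_i."""
    centre = (lb + ub) / 2
    return [random.uniform(min(centre, lb + ub - xi), max(centre, lb + ub - xi))
            for xi in x]

pop = halton_population(30, 2, -10.0, 10.0)
qo = quasi_opposite(pop[0], -10.0, 10.0)
assert all(-10.0 <= v <= 10.0 for p in pop for v in p)
assert all(-10.0 <= v <= 10.0 for v in qo)
```

In the ECOA, the fitter of each original/quasi-opposite pair would then be kept for the next generation.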

Figure 1 .
Figure 1. Schematic diagram of individual locations of the QOBL population.


Algorithm 1 .
The pseudo-code of the ECOA.
Biomimetics 2024, 9, x FOR PEER REVIEW 7 of 20

Figure 2 .
Figure 2. Comparison of iteration curves of each algorithm on the CEC2019 test set.


Figure 3 .
Figure 3. Comparison of box plots of various algorithms on the CEC2019 test set.


Figure 4 .
Figure 4. Iteration curves of various algorithms in ablation experiments.


Figure 6 .
Figure 6. Iterative curves of various algorithms for three-bar truss design problems.


Figure 8 .
Figure 8. Iterative curves of various algorithms for pressure vessel design.


Table 1 .
Information of IEEE CEC2019 test function set.


Table 2 .
Important parameter settings of each algorithm.

Table 3 .
Numerical experimental results of each algorithm on the CEC2019 test set.

Table 4 .
Test p-values of each algorithm on the IEEE CEC2019 test set.

Table 5 .
Information on the eight test functions of IEEE CEC2017.

Table 7 .
Experimental results of various algorithms in pressure vessel design.
