1. Introduction
An optimization problem seeks the optimal value of a design objective under certain constraints. Optimization problems exist widely in intelligent production, engineering manufacturing, agricultural development, and many other fields [1]. However, in the rapidly evolving digital age, data are growing explosively and multidimensional, multimodal problems are becoming increasingly common, making many real-world problems more complex and diverse [2]. Traditional mathematical optimization methods, such as the gradient descent method [3], the conjugate gradient method [4], and the quasi-Newton method [5], often have limitations when handling discrete and other difficult problems [6]. They tend to become trapped in local optima, converge slowly, or deliver low computational accuracy. It is therefore very difficult to use traditional algorithms to perform the computation and extract meaningful information. Generally speaking, solving NP-hard problems involves finding the point in a multidimensional hyperspace that represents the optimal solution. However, this identification process is very complex, time-consuming, and computationally expensive. Therefore, it is important to find a biomimetic computing method that is fast and effective [1].
Nowadays, metaheuristic algorithms have proven to be competitive alternatives and are often used to solve highly complicated nonlinear optimization problems, such as multi-objective optimization problems [
7], multimodal optimization problems [
8], and complex constraint optimization problems [
9].
Metaheuristic algorithms can avoid local optima and have faster convergence speed, better robustness, and higher stability than traditional algorithms [
10]. The field has matured to the point where a number of classical algorithms have been proposed. These mainly include the genetic algorithm (GA), which emulates the evolutionary processes of living species [11]; differential evolution (DE), which drives the search through cooperation and competition among individuals within a group [12]; artificial immune systems (AIS), which model the body's immune response mechanism [13]; the ant colony algorithm (ACO), which uses the path-finding behavior of ants as a model [14]; particle swarm optimization (PSO), which models the actions of birds searching for food [15]; the simulated annealing algorithm (SA), which is modeled on the annealing of solid materials [16]; and the tabu search algorithm (TSA), which models the human memory process [17]. These algorithms typically arise from the imitation of specific natural phenomena and processes, or of the cognitive behaviors of living collectives, and they are simple, general, and easy to parallelize.
Based on their sources of inspiration, metaheuristic algorithms mainly comprise evolutionary algorithms, human behavior-based algorithms, physics- and chemistry-based algorithms, and swarm intelligence-based algorithms [
18,
19].
Evolutionary algorithms are based on concepts from biology and genetics and are built by modeling nature's survival-of-the-fittest laws. They advance the population according to natural selection and thereby arrive at the best solution. Conventional evolutionary algorithms are primarily represented by GA and DE; both are modeled on the principle of reproduction in nature and use operators such as crossover, selection, and mutation to update the population.
Human behavior-based algorithms are inspired by human behaviors, such as self-learning activities and social interactions [20]. Commonly used examples include the imperialist competitive algorithm [21], social-based algorithms [22], the league championship algorithm [23], and the poor and rich optimization algorithm [24].
Algorithms based on physics and chemistry mainly derive from physical laws and chemical phenomena in the universe. Among them, the SA mentioned above is a classical example. Many further algorithms have been developed from physical laws, such as the gravitational search algorithm [25], based on the law of universal gravitation; chaos optimization algorithms [26], based on the ergodicity, randomness, and regularity of chaotic phenomena; optical optimization algorithms [27], based on the principle of optical reflection; and the black hole algorithm, based on strong attractive forces [28].
Swarm intelligence-based algorithms simulate the behavior of natural populations such as ants, birds, bees, whales, lions, and wolves. Each group of organisms searches for the best location through behaviors such as cooperation and hunting. The representative algorithms are the PSO and ACO referred to above. Many other algorithms of this type exist, such as beluga whale optimization [29], the grey wolf optimizer [30], the marine predator algorithm [31], the white shark optimizer [32], and the emperor penguin optimizer [33].
For metaheuristic algorithms, the balance between exploration and exploitation directly determines performance [34]. An imbalance between them directly reduces the precision of problem-solving: weak exploration prevents the population from reaching new regions of the search space and leads to entrapment in local optima, while weak exploitation limits the population's ability to refine the optimal value. This is a prevalent issue with current optimization methodologies and is precisely what algorithmic improvements aim to address.
M. Dehghani et al. [
35] presented the coati optimization algorithm (COA) in 2023. Coatis are very active, agile in movement, and highly adaptable; they forage during the day and rest in trees at night. Iguanas are one of the coati's favorite foods, and coatis often cooperate to prey on them, while coatis themselves also face the risk of being preyed upon. COA was therefore inspired by the strategies coatis use when attacking iguanas as well as the strategies they use when facing and evading predators. Although COA is highly competitive on some problems, it still has room for improvement. According to the literature [36], COA often exhibits premature convergence and is highly susceptible to falling into local optima. Moreover, in experiments, COA is consistently at a disadvantage relative to some newly proposed metaheuristic algorithms on large-scale problems, and the low diversity of COA populations cannot be ignored either. As a result, many researchers have enhanced COA to solve more sophisticated engineering problems. F.A. Hashim et al. [37] proposed an efficient adaptive mutation COA and applied it to feature selection and global optimization. P. Tamilarasu and G. Singaravel [38] used an improved COA to achieve efficient scheduling in cloud computing environments. K. Thirumoorthy and J.J.B. J [39] improved the COA and applied it to breast cancer classification.
Nevertheless, the No Free Lunch theorem [40] indicates that no single algorithm is capable of addressing every optimization challenge flawlessly; excellent performance on one problem may not translate into a viable solution on another, unrelated problem. As a result, researchers need to keep developing new algorithms or making targeted improvements to existing algorithms to cope with increasingly complex real-world problems. Improving existing algorithms is therefore very necessary.
Consequently, in this paper, the chaotic mapping strategy, the lens imaging reverse learning strategy, the crossover strategy, and the Lévy flight strategy are applied to improve the COA. Firstly, the chaotic mapping strategy [1] is introduced in the population initialization stage, using chaotic sequences to obtain a higher-quality initial population. Secondly, the lens imaging reverse learning strategy [41] not only improves population diversity but also enlarges the scope of the search. In the early stage, the Lévy flight strategy [42] is applied, allowing the population to escape local optima and expanding the search capability. Finally, the introduction of the crisscross optimization algorithm [43] helps to mitigate premature convergence. The amalgamation of these strategies augments the optimization capability of the COA. The innovations and main contributions of this paper are as follows.
- (1)
The enhanced COA consists of four strategies, namely the chaotic mapping strategy, the lens imaging reverse learning strategy, the Lévy flight strategy, and the crossover strategy.
- (2)
The effects of 10 common chaotic mapping strategies on improving the COA are analyzed, and the optimal strategy is finally selected.
- (3)
The CMRLCCOA is compared with the original COA, six new algorithms proposed in the last two years, four classic and well-recognized algorithms, and three improved algorithms, all tested on the functions included in the CEC2017 and CEC2019 function sets. In addition, dimensions of 50 and 100 are used for the CEC2017 test set.
- (4)
CMRLCCOA is used to solve three engineering optimization problems, including a single-stage cylindrical gear reducer, a welded beam design problem, and a cantilever beam design problem.
- (5)
This paper establishes a mathematical model of the cruise trajectory of a hypersonic vehicle and solves the path planning problem with the newly proposed CMRLCCOA. Furthermore, the results of nine algorithms are compared. Thus, the reliability of CMRLCCOA is verified.
The remainder of this paper is organized as follows:
Section 2 briefly describes the mathematical model of COA.
Section 3 describes the detailed structure of the CMRLCCOA algorithm. The performance of the CMRLCCOA is evaluated on the basis of numerical experimental results in
Section 4.
Section 5 solves three real-world problems using CMRLCCOA. In
Section 6, the cruise ballistic trajectory problem for hypersonic vehicles is modeled and solved using CMRLCCOA. Finally,
Section 7 summarizes this paper.
3. Multi-Strategy Enhanced COA
This part uses four strategies to strengthen COA: the chaotic mapping strategy, the lens imaging reverse learning strategy, the Lévy flight strategy, and the crossover strategy. The newly proposed CMRLCCOA addresses the shortcomings of COA, which is prone to falling into local optima and converging prematurely. The improvement strategies are presented next and the results are briefly analyzed.
3.1. Chaos Mapping Strategy
The traditional COA sets its initial population randomly, which makes it difficult to spread individuals throughout the search space, resulting in a lack of diversity in the original coati population and restricting its flexibility. Chaotic dynamics were first described by Lorenz in 1963 [44]. Chaotic maps have characteristics such as randomness, ergodicity, and regularity [45]. This strategy can guarantee the diversity of the original population. Therefore, many intelligent algorithms employ chaotic mapping strategies to strengthen their optimization. Zeng et al. used chaotic mapping to generate a random yet regular initial particle swarm, improving global search capability [46]. Xin et al. applied the chaotic mapping method to reinforce the sparrow optimization algorithm [47].
Chaos theory mainly studies the behavior of dynamic systems that are sensitive to their initial states. To generate an initial population through chaotic mapping, a one-dimensional chaotic map is first chosen, a random initial value is specified, and a series of points is generated iteratively. Chaotic mapping strategies can improve population diversity, the success rate, and convergence.
Table 1 describes ten common chaotic mapping functions. To have a clearer perception of these functions,
Figure 4 visualizes some of the initialization functions. The image shows that these mappings allow random initial population positions to be evenly distributed in the search space. For the selection of different initialization methods, please see
Section 4.3.
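As a concrete illustration of chaotic initialization, the Python sketch below uses the Sine map in its common form x_{k+1} = sin(πx_k); the exact map definitions in Table 1 may differ, and the function and variable names here are illustrative, not taken from the paper's implementation.

import numpy as np

def sine_map_init(n_agents, dim, lb, ub, seed=None):
    """Initialize a population with a Sine chaotic map (illustrative sketch).

    Assumes the Sine map x_{k+1} = sin(pi * x_k) on (0, 1); the exact map
    parameters used in the paper's Table 1 may differ.
    """
    rng = np.random.default_rng(seed)
    lb = np.asarray(lb, dtype=float)
    ub = np.asarray(ub, dtype=float)
    pop = np.empty((n_agents, dim))
    x = rng.uniform(0.1, 0.9, size=dim)      # random, non-degenerate seed values
    for i in range(n_agents):
        x = np.abs(np.sin(np.pi * x))        # one chaotic iteration per agent
        pop[i] = lb + x * (ub - lb)          # map chaotic values into [lb, ub]
    return pop

# Example: 30 coatis in 10 dimensions over [-100, 100]^10
population = sine_map_init(30, 10, -100 * np.ones(10), 100 * np.ones(10), seed=1)

Because successive chaotic iterates are deterministic yet ergodic, the resulting individuals tend to cover the search space more evenly than purely random sampling, which is the motivation stated above.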
3.2. Lens Imaging Reverse Learning Strategy
Many intelligent optimization algorithms suffer from low population diversity in the later stages of the iteration and then struggle to find optimal solutions. When a searching coati falls into a local optimum, it is difficult for it to jump out. Consequently, a reverse learning strategy is introduced in this paper. This strategy is an optimization mechanism [58] that extends the algorithm's search area by computing the inverse solution of the current position, thereby increasing the likelihood of discovering the ideal solution. However, the reverse learning strategy needs to be combined with the principle of lens imaging to achieve better results [59].
Imaging by a convex lens follows an optical law. A convex lens has an object on one side and a real image on the other side of the lens. The diagram is depicted in
Figure 5.
The lens imaging formula can be derived from Figure 5 as follows:
1/u + 1/v = 1/f
where u is the object distance, v is the image distance, and f is the focal length.
For the lens imaging reverse learning strategy, an individual P is imaged through a convex lens in one-dimensional space, as shown in Figure 6. The principle of lens imaging is expressed as Equation (9).
Equation (10) is the solution formula for lens imaging reverse learning, which is extended to D-dimensional optimization problems. The reverse learning formula based on lens imaging is then obtained as follows:
where pj is the minimum and qj is the maximum in the j-th dimension, and Xj' is the lens-imaging inverse solution of Xj.
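For reference, the lens-imaging reverse learning rule commonly used in the literature takes the following form; the scaling factor k (the ratio between object and image heights of the lens) is introduced here only for illustration and is not part of the definitions above:

\[
X_j' = \frac{p_j + q_j}{2} + \frac{p_j + q_j}{2k} - \frac{X_j}{k}, \qquad j = 1, 2, \ldots, D.
\]

With k = 1, this reduces to the basic opposition-based rule X_j' = p_j + q_j − X_j, which is why larger k values concentrate the inverse solutions nearer the center of the search interval.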
3.3. Lévy Flight Strategy
In the COA, the position update is highly influenced by the iguana. However, the position update range of iguanas is small, so the search space and solution space of this algorithm are limited. The Lévy flight strategy is a stochastic behavior strategy proposed by Paul Lévy in 1937 [
60], used to simulate the step size and direction during random walking or search processes. In this paper, the Lévy flight strategy is incorporated into the search phase of the COA to enlarge the search scope.
Figure 7 depicts the Lévy distribution along with its trajectories in two- and three-dimensional spaces. This random wandering behavior can effectively increase population diversity, which in turn allows individuals to explore a wider space. The Lévy flight process can be described as a random walk, as shown in Equation (11).
The Lévy step length can be calculated using the Mantegna method, as shown in Equation (12).
where
λ is set to be 1.5, and
μ and
v follow a normal distribution.
where Γ is the gamma function.
Therefore, Equation (15) is utilized to change the position of the coati.
where sign(
rand – ½) can take three values, namely −1, 0, or 1.
α represents the control quantity of step length, which can be expressed using Equation (16).
where
α0 is set to be 0.01.
Then, substituting Equation (16), Equation (15) can be represented in its final form.
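A minimal Python sketch of this mechanism is given below. The Mantegna-style step follows the standard formulation with λ = 1.5 as stated above, while the position-update rule (scaling α by the distance to the best solution, and the names levy_step, levy_update, x_best) is an illustrative assumption rather than a transcription of Equations (15)–(17).

import numpy as np
from math import gamma, pi, sin

def levy_step(dim, lam=1.5):
    """Mantegna-style Levy step of length dim; lam is the Levy index from the text."""
    sigma_u = (gamma(1 + lam) * sin(pi * lam / 2) /
               (gamma((1 + lam) / 2) * lam * 2 ** ((lam - 1) / 2))) ** (1 / lam)
    u = np.random.normal(0.0, sigma_u, dim)
    v = np.random.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / lam)

def levy_update(x, x_best, alpha0=0.01):
    """Generic Levy-flight position update around the best solution (sketch only)."""
    step = alpha0 * levy_step(x.size) * (x - x_best)        # alpha scales with distance to best
    return x + step * np.sign(np.random.rand(x.size) - 0.5)  # random walk direction per dimension

# Example usage with a hypothetical 10-dimensional coati and best position
x = np.random.uniform(-100, 100, 10)
x_best = np.zeros(10)
x_new = levy_update(x, x_best)

Heavy-tailed steps like these occasionally produce long jumps, which is what lets the population leave the small neighborhood dictated by the iguana.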
3.4. Crisscross Optimization Algorithm
Meng et al. proposed the crisscross optimization algorithm (CSO) [
43]. The algorithm uses horizontal and vertical crossover to exchange information, which can effectively alleviate entrapment in local optima.
3.4.1. Horizontal Crossover
Before performing the crossover operation, two individuals are paired. Subsequently, the crossover is performed on the variables in the corresponding dimensions to generate new offspring. Assuming the
m-th and
n-th individuals are paired, the crossover operation is performed as follows:
where Mhcm,j and Mhcn,j are the offspring of Xm,j and Xn,j, respectively, and Xm,j and Xn,j are two randomly paired individuals in the population; r1 and r2 are uniformly distributed random numbers in [0, 1], and c1 and c2 are uniformly distributed random numbers in [−1, 1].
The first term in Equations (18) and (19) carries the particle's own current information, the second term represents the mutual influence between the two different particles, and these two terms are combined through the weight factor r1. The third term widens the search interval. The resulting solutions Mhcm,j and Mhcn,j must be compared with the parent particles Xm,j and Xn,j in terms of fitness, and the solutions with better fitness are retained for the next iteration.
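As an illustration (not the authors' exact implementation), the Python sketch below pairs individuals at random, applies the standard CSO horizontal crossover update M = r·Xm + (1 − r)·Xn + c·(Xm − Xn), which is consistent with the description above, and keeps an offspring only when it is fitter than its parent; the random pairing scheme and the helper fitness_fn are assumptions.

import numpy as np

def horizontal_crossover(X, fitness_fn):
    """One round of CSO-style horizontal crossover with greedy competition (sketch)."""
    N, D = X.shape
    X_new = X.copy()
    order = np.random.permutation(N)
    for k in range(0, N - 1, 2):                       # pair individuals at random
        m, n = order[k], order[k + 1]
        r1, r2 = np.random.rand(D), np.random.rand(D)
        c1, c2 = np.random.uniform(-1, 1, D), np.random.uniform(-1, 1, D)
        child_m = r1 * X[m] + (1 - r1) * X[n] + c1 * (X[m] - X[n])
        child_n = r2 * X[n] + (1 - r2) * X[m] + c2 * (X[n] - X[m])
        if fitness_fn(child_m) < fitness_fn(X[m]):     # keep offspring only if fitter (minimization)
            X_new[m] = child_m
        if fitness_fn(child_n) < fitness_fn(X[n]):
            X_new[n] = child_n
    return X_new

# Example usage on the sphere function
sphere = lambda x: float(np.sum(x ** 2))
pop = np.random.uniform(-5, 5, (30, 10))
pop = horizontal_crossover(pop, sphere)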
3.4.2. Vertical Crossover
Vertical crossover is executed between distinct dimensions of an individual. Because different dimensions have different value ranges, they need to be normalized before crossing. Each vertical crossover generates only one offspring and updates only one of its dimensions.
where r is a uniformly distributed random number in [0, 1].
Vertical crossover can allow a dimension that has fallen into a local optimum to escape without damaging the information in the other dimension. Thus, in general, this strategy is effective in keeping the population from stagnating in local minima, and the probability of a vertical crossover is set lower than that of a horizontal crossover.
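A corresponding sketch of vertical crossover between two randomly chosen dimensions of an individual, with the same greedy parent–offspring competition, is shown below; the crossover probability p_vc and the helper names are illustrative assumptions, and the variables are assumed to have been normalized to a common range beforehand, as noted above.

import numpy as np

def vertical_crossover(X, fitness_fn, p_vc=0.6):
    """One round of CSO-style vertical crossover (sketch); p_vc is an assumed probability."""
    N, D = X.shape
    X_new = X.copy()
    for i in range(N):
        if np.random.rand() < p_vc:                         # applied less often than horizontal crossover
            j1, j2 = np.random.choice(D, size=2, replace=False)
            r = np.random.rand()
            child = X[i].copy()
            child[j1] = r * X[i, j1] + (1 - r) * X[i, j2]   # only dimension j1 is updated
            if fitness_fn(child) < fitness_fn(X[i]):        # greedy competition with the parent
                X_new[i] = child
    return X_new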
3.4.3. Competitive Operator
There is a competitive relationship between the offspring population and the parent population. Only if the fitness value of an offspring is better than that of its parent will it be retained and proceed to the next iteration; otherwise, the parent continues to be retained. As a result of this simple competitive mechanism, individuals move rapidly toward regions of the search space with good fitness, close to the optimal solution. For example, for horizontal crossover, the competition operator is defined as
3.5. The Framework of CMRLCCOA
Inspired by the above strategies (chaotic mapping, lens imaging reverse learning, Lévy flight, and the crossover strategy), we propose a new hybrid metaheuristic algorithm, CMRLCCOA. These strategies greatly strengthen the stability and optimization capability of the algorithm. The specific steps for solving a D-dimensional minimization problem using CMRLCCOA are as follows:
Step 1: Initialize some parameters of CMRLCCOA—the number of search agents N, dimension of the solution D, boundaries of variables ub and lb, and number of iterations Miter.
Step 2: Initialize the population of N coatis using chaotic mapping.
Step 3: The fitness values for each candidate solution are computed. Afterwards, record the best fitness value fbest and the optimal position Xbest.
Step 4: Use the convex lens imaging reverse learning strategy to update the N initial solutions by Equation (11), then calculate the fitness values, retaining the better fitness values and solutions.
Step 5: While Citer < Miter, update the location of the iguana.
Step 6: For the first half of the coatis, use Equation (2) to change the location of the i-th coati, and then use Equation (7) to update its position.
Step 7: For the latter half of the coatis, first set the iguana's random location using Equation (3), then use Equation (4) to compute the new position of the i-th coati, and finally use Equation (7) to update its position.
Step 8: Using the Lévy flight strategy, update the coati's position by Equation (18), evaluate the candidate solutions, and retain the optimal solution and its corresponding position.
Step 9: In the second exploitation stage, first calculate the local variable boundaries by Equation (5), change the location of the i-th coati using Equation (6), and update the optimal solution using Equation (7).
Step 10: Using the crisscross optimization strategy, perform horizontal and vertical crossover on the coati individuals by Equations (19)–(21) to obtain the offspring population, and then keep the better individuals from the parent and offspring populations.
Step 11: Set Citer = Citer + 1; if Citer < Miter, return to Step 5. Otherwise, output the optimal location and fitness value obtained for the problem.
To show the structure of the CMRLCCOA more clearly, the flowchart of CMRLCCOA is illustrated in
Figure 8. Additionally, the pseudo-code of CMRLCCOA is shown in Algorithm 1.
Algorithm 1: The proposed CMRLCCOA |
Input: Number of coatis (N), Number of variables (D), and maximum iterations (Miter). |
Output: Optimal fitness value fbest and Xbest. |
1: Construct the initial value for the agents through chaotic maps. |
2: Computing fitness values for coati populations. |
3: Using convex lens imaging reverse learning strategy to change the coatis’ position by Equation (11). |
4: Compare fitness values and retain the optimal fitness values and corresponding positions. |
5: While t ≤ Miter |
6: For i = 1 to N/2 |
7: Igu = Xbest; I = round(1 + rand(1,1)). |
8: Change the position of the coati by xi,j = xi,j + b × (Iguj − I×xi,j). |
9: Update position by Equation (7). |
10: End For |
11: For i = N/2 to N |
12: Igu = lb + rand × (ub − lb). |
13: If fitness(i) > fitness(Igu) |
14: Change the position by xi,j = xi,j + b × (Iguj – I × xi,j). |
15: Else |
16: Change the position by xi,j = xi,j + b × (I × xi,j − Iguj). |
17: End If |
18: Update position by Equation (7). |
19: End For |
20: Using Lévy strategy to update the position of the i-th coati by Equation (18). |
21: Calculate the fitness of coatis. |
22: If the fitness of coati < fitness(i) |
23: x(i) = coati;fit(i) = fit(coati). |
24: End If |
25: For i = 1 to N |
26: LbLocal = lb/t; UbLocal = ub/t. |
27: If rand < 0.5 |
28: Update the position of the coatis by xi,j = xi,j + (1 − 2b) × (LbLocal + b × (UbLocal − LbLocal)). |
29: Update position by Equation (7). |
30: Else |
31: For j = 1 to D |
32: r1 and r2 are stochastic numbers in [0, 1]; c1 and c2 are stochastic numbers in [−1, 1]. |
33: Update the position of the individuals using Equations (18) and (19). |
34: Calculate the fitness values of the coatis. |
35: End For |
36: End If |
37: End For |
38: For i = 1 to N-1 |
39: For j = 1 to D |
40: Generate a uniformly distributed random value r in [0, 1]. |
42: Update the position of the individuals using Equation (20). |
43: Calculate the fitness values of the coatis. |
44: End For |
45: End For |
46: t = t + 1, |
47: End While |
3.6. The Time Complexity of CMRLCCOA
This subsection investigates the time complexity of CMRLCCOA. First, we analyze the COA. The population size and the number of problem variables mainly determine the time complexity. In the initialization phase, the complexity of COA is O(ND), where N is the size of the coati population and D is the number of variables. Among the four improvement strategies of the presented CMRLCCOA, the chaotic mapping and lens imaging reverse learning strategies do not increase the complexity. In the first stage, the time complexity is O(NDT) + O(NDT/2) + O(NDT). In the second stage, the complexity is O(NDT) + O((N − 1)DT). Thus, the complexity can be characterized as Equation (22).
4. Numerical Experiments and Comparison with Other Algorithms
In
Section 4, we conduct experiments using functions from the CEC2017 and CEC2019 test suites: 29 functions from CEC2017 and 10 from CEC2019. The number of iterations in this experiment is 500, and the population consists of thirty individuals. The dimensions for CEC2017 are 50 and 100. The CMRLCCOA is compared with fourteen existing metaheuristic algorithms, including four recognized classical algorithms, PSO (particle swarm optimization) [
15], DE (differential evolution) [
12], SA (simulated annealing) [
16], and ABC (Artificial Bee Colony Algorithm) [
61]; six recently proposed algorithms, KOA (Kepler Optimization Algorithm) [
62], SWO (Spider Wasp Optimizer) [
63], GMO (Geometric Mean Optimizer) [
64], OMA (Optical Microscope Algorithm) [
65], TROA (Tyrannosaurus Optimization Algorithm) [
66], and GO (GOOSE Algorithm) [
67]; and three improved algorithms, ISSA (Improved Sparrow Search Algorithm) [
68], IGWO (Improved Grey Wolf Optimizer) [
69], and EWOA (Enhanced Whale Optimization Algorithm) [
70]. Each comparison algorithm is run independently 20 times. Then, the optimum value, the worst value, the mean value, the standard deviation, and the rank are calculated for all results. Furthermore, the Wilcoxon signed rank test is performed to further check the quality of CMRLCCOA. The parameters of the other metaheuristic algorithms are listed in Table 2. Finally, all tests are run in MATLAB R2020b on a 2.11 GHz quad-core Intel(R) Core(TM) i5 with 8.00 GB of memory.
4.1. Introduction to Test Sets
A total of 39 functions were used for testing in this experiment, which come from CEC2017 [
71] and CEC2019 [
72].
CEC2017 is a benchmark suite widely used to test intelligent algorithms on a range of optimization problems. Its functions are rotated and shifted, which increases the difficulty of optimization, and the suite is highly accepted. There are four types of benchmark functions: unimodal, multimodal, hybrid, and composite. The unimodal functions (cec01, cec03) have only a global minimum and no local minima; this type of function verifies the convergence of an algorithm. Multimodal functions (cec04–cec10) have local minima; such functions verify the ability to escape local optima, and algorithms that perform well on them generally possess strong exploration capabilities. Hybrid functions (cec11–cec20) assign each sub-function a certain weight, thereby combining the properties of the sub-functions; these functions effectively verify the ability to find the global optimum. Composite functions (cec21–cec30) add bias values and weights for each sub-function; such functions allow us to assess the accuracy of algorithms, and comprehensive performance is demonstrated on them. To show the details of these functions more clearly, partial function diagrams are shown in
Figure 9.
In addition, this experiment also uses the CEC2019 test set to assess the algorithm’s capability. The CEC2019 test set [
73] is a very effective benchmark set for testing the performance of metaheuristic algorithms. Among its functions, cec01–cec03 have different dimensions and ranges and are neither shifted nor rotated, while cec04–cec10 are ten-dimensional minimization problems that are shifted and rotated. This test set is known as the "100-Digit Challenge" and is often used in international competitions. Some of the functions of CEC2019 are shown in
Figure 10.
4.2. Assessment of Indicators
To appraise the efficacy of all algorithms, we adopt several performance metrics for the analysis: the optimal value (Best), the worst value (Worst), the mean value (Ave), the standard deviation (Std), and Rank. Their equations are presented in Equations (23)–(26). Comparing these metrics allows the performance to be analyzed. It is worth noting that Ave indicates the precision of the algorithm when addressing a specific problem category, while stability can be judged from Std. Rank is obtained by comparing Ave and Std; a smaller Rank means that the algorithm performs better in solving a particular problem.
- (4)
Standard deviation (Std)
where m is the number of independent runs of the algorithm, and fi* is the best value obtained in the i-th independent run.
4.3. Effect of Different Chaotic Mapping Functions on CMRLCCOA
Chaotic mapping produces a random sequence for initializing the population, yielding better initial solutions [74]. Different chaotic maps generate initial populations in different ways and therefore give different results; a good map produces high-quality initial solutions and greatly aids the subsequent optimization of the algorithm. In this context, the 10 functions taken from CEC2019 are optimized using 10 of the more common chaotic mapping methods to analyze and compare the impact of different chaotic mappings.
Table 1 lists these 10 recognized chaotic mapping methods.
Table 3 displays the effect and ranking of the 10 chaotic mapping strategies applied to CMRLCCOA. From the data, it can be seen that the Sine mapping strategy yields the best optimization results with the smallest total rank, followed by the Bernoulli mapping strategy and the Circle mapping strategy. It can therefore be concluded that the Sine chaotic mapping strategy performs best and is the one adopted in this method. To show the comparison more clearly, the experimental data are visualized in Figure 11, where the horizontal coordinate lists the test functions and the vertical coordinate lists the algorithm variants obtained with the different mapping methods. Figure 11 indicates that the Sine chaotic mapping strategy ranks first on four of the test functions. This also shows that these strategies can relatively effectively improve the performance of COA.
4.4. Comparison of Optimization Results for CEC2017
To examine CMRLCCOA's competence in exploration, exploitation, and escaping local optima, the more competitive CEC2017 test suite was chosen for this paper. CEC2017 [71] is highly recognized and widely used to validate performance in all aspects. Note that cec02 is not included in this paper because it could not be tested.
4.4.1. Experimental Results of the CEC2017 Test Suite
In this experiment, CMRLCCOA is run 20 times independently with 14 other algorithms, and finally, Ave, Std, and Rank are calculated.
Table A1 and
Table A2 show the results obtained by the 15 algorithms in 50 and 100 dimensions. The top-ranked values are highlighted in bold.
An examination of the tabulated data shows that CMRLCCOA is ranked first overall, with average rankings of 3 and 2.6552 for dim = 50 and 100, respectively. This result shows that the improved CMRLCCOA optimizes well in different dimensions and consistently provides excellent output values. Most of the optimal values obtained by CMRLCCOA outperform those of the other algorithms, which shows that CMRLCCOA adapts to different types of functions. CMRLCCOA optimizes nine test functions best at dim = 50 (cec03, cec11–12, cec21, cec23, and cec27–30). At dim = 100, CMRLCCOA best optimizes 13 test functions (cec03, cec04, cec07, cec11–12, cec14, cec15, cec20–21, cec23–24, cec27–29). GMO also optimizes well, performing strongly on 10 of the 50-dimensional and 100-dimensional problems, second only to CMRLCCOA, and ranks second overall. In contrast, COA, KOA, SWO, TROA, and GO do not show strong optimization ability. In summary, CMRLCCOA significantly outperforms COA as well as the other 13 intelligent optimization algorithms in 50 and 100 dimensions.
The Wilcoxon signed rank test verifies the variability of results obtained by different algorithms [
75]. The significance results for
dim = 50 and 100 are shown in
Table A3 and
Table A4. “+/=/-” means that the comparative algorithm is significantly better/equal/worse than the CMRLCCOA. The observation of the data reveals that the Wilcoxon test results for COA, KOA, SWO, TROA, and GO are 0/0/29 at
dim = 50 and 100, indicating that these five algorithms are inferior to CMRLCCOA on all test functions. Meanwhile, the Wilcoxon signed rank results of GMO in the two dimensions are 6/10/13 and 6/7/16, which are comparatively better. ISSA and IGWO also perform relatively well; each is better than CMRLCCOA on six functions. However, when dim = 50 or 100, the ABC, EWOA, SA, and DE algorithms perform significantly worse than CMRLCCOA. This result also shows that CMRLCCOA can address different types of problems.
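As an illustration of how such a pairwise comparison can be computed (not the authors' exact script), a paired Wilcoxon signed rank test over per-function mean results in Python might look as follows; the arrays hold placeholder values only:

import numpy as np
from scipy.stats import wilcoxon

# Placeholder per-function mean errors over the 29 CEC2017 functions (illustrative only)
rng = np.random.default_rng(0)
cmrlccoa_means = rng.random(29)
competitor_means = cmrlccoa_means + rng.normal(0.05, 0.1, 29)

# Paired, two-sided Wilcoxon signed rank test
stat, p_value = wilcoxon(cmrlccoa_means, competitor_means)
better = int(np.sum(competitor_means < cmrlccoa_means))   # functions where the competitor wins
worse = int(np.sum(competitor_means > cmrlccoa_means))    # functions where CMRLCCOA wins
print(f"p = {p_value:.4f}; competitor better/worse on {better}/{worse} of 29 functions")

Counting wins, ties, and losses per function in this way yields the "+/=/-" summaries reported above.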
Similar to
Figure 11,
Figure 12 shows two heat maps of the results obtained by all compared algorithms on CEC2017 for the two dimensions. The vertical coordinates list all algorithms involved in the comparison, so the performance of every algorithm can be read intuitively from the heat map. In both dimensions, the squares corresponding to the CMRLCCOA proposed in this paper are largely bluer than those of the other algorithms, whereas the GO and TROA rows are consistently red. This indicates that these two algorithms perform poorly on this test set and that their performance needs to be improved.
4.4.2. Convergence Curves for Iterations
Figure 13 and
Figure 14 show partial convergence curves at
dim = 50 and 100. From the figure, it can be concluded that CMRLCCOA converges better on cec09, cec20–21, cec24, cec27, and cec28 at
dim = 50. When
dim = 100, CMRLCCOA converges better on functions cec07, cec09, cec12, cec20, cec25, and cec27–28. In addition, CMRLCCOA shows a steeper curve during the iterations of most functions, which indicates that it converges faster in the early stages. This is made possible by the initialization strategy, which allows the population to explore a larger area, so the algorithm converges to the neighborhood of the optimum very quickly. In summary, CMRLCCOA can find the optimal solution quickly and can solve sophisticated optimization problems.
The optimization ability of CMRLCCOA varies when dealing with different functions. As can be noticed from
Figure 13 and
Figure 14, CMRLCCOA avoids interference well when optimizing both unimodal and multimodal functions, converging rapidly to the vicinity of the optimal solution in both cases, and it also converges quickly on the hybrid and composite functions. The curves show that the slope of their early portion is very large, almost vertical, so the optimal candidate solution can be found within fewer iterations, indicating that the algorithm is highly responsive. In addition, CMRLCCOA maintains stability and continuity during the iterations of most functions, with good convergence accuracy. In short, CMRLCCOA is able to solve the functions in CEC2017 efficiently.
4.4.3. Boxplot of Experimental Results
Combined with the convergence curves above, the corresponding box plots are given here. A box plot is a chart that describes the dispersion of data and provides a good description of outliers and skewness. The length of a box reflects the stability of an algorithm: the narrower the box, the more stable and robust the algorithm. The upper edge of the box is the upper quartile and the lower edge is the lower quartile. Because of the randomized nature of the algorithms, some outliers are generated during optimization; to visually demonstrate the quality of the optimization results, box plots of the results at
dim = 50 and 100 are given, as shown in
Figure 15 and
Figure 16. CMRLCCOA has less variation in the upper and lower distances than the other algorithms, especially at
dim = 50 for cec22, cec24, cec27, and cec28, and at
dim = 100 for cec01, cec09, cec11, cec12, cec25, and cec27–28. These functions verify the stability of CMRLCCOA. However, CMRLCCOA also shows some "+" markers in the box plots of some functions, indicating that the algorithm produces some outliers and retains a degree of uncertainty and randomness.
In conclusion, for most of the functions, CMRLCCOA is shorter and the upper and lower boundaries are closer together compared to the other 14 algorithms, which indicates that CMRLCCOA is more stable and has better minimum values compared to others.
4.5. Comparison of Optimization Results for CEC2019
This part of the numerical experiments is performed using the 10 functions in CEC2019 [
76]. First, the experiment is set up with 20 independent repetitions, after which the Ave, Best, Worst, Std, and Rank of the 20 results are computed. Second, we obtain the convergence curves during the algorithm runs. All comparison algorithms are consistent with the above experiments.
Table 4 lists the calculation results, with first-ranked data marked in bold. Box plots are then drawn to visualize the quality of the solutions, and a radar chart shows more intuitively how each algorithm ranks on each test function.
4.5.1. Statistical Results on CEC2019
As indicated in
Table 4, the average rank of CMRLCCOA is 2.3, placing it first overall, better than the other algorithms and significantly better than COA. This indicates that CMRLCCOA obtains solutions of higher quality than the others. In addition, CMRLCCOA clearly performs best on five functions (cec01, cec04, cec05, cec07, cec08). It ranks first on cec01, indicating that it performs well on low-dimensional test functions, and it is superior to the others on cec04 and cec05, indicating that it also suits higher-dimensional test functions. CMRLCCOA shows excellent optimization ability on cec07 and cec08. GMO also performs excellently on some problems and ranks second. In contrast, the other algorithms do not solve these functions well.
Table 5 shows the final test results for the Wilcoxon signed rank [
77]. A look at the data in
Table 5 reveals that the Wilcoxon signed rank test outputs for COA, KOA, SWO, GMO, OMA, TROA, GO, PSO, DE, SA, ABC, ISSA, IGWO, and EWOA are 0/1/9, 0/0/10, 0/0/10, 3/2/5, 1/1/8, 0/0/10, 0/0/10, 2/2/6, 2/3/5, 0/3/7, 0/1/9, 2/2/6, 2/1/7, and 0/1/9, respectively. It can be seen that KOA, SWO, TROA, and GO are worse than CMRLCCOA on all the tested functions, which shows that CMRLCCOA has better performance and is competitive.
4.5.2. Convergence Curves for Iterations
The convergence curves of comparison algorithms are shown in
Figure 17. CMRLCCOA has a very smooth iteration curve and approaches the optimal solution quickly. This shows that CMRLCCOA converges faster than the other algorithms, so it is capable of solving both high- and low-dimensional problems. In the experiments on the high-dimensional cec03 function, CMRLCCOA converges very fast and moves rapidly toward the optimal solution. It reaches the neighborhood of the optimum within few iterations when solving functions cec05, cec07, and cec09, and its convergence is significantly better. For cec08, GMO is closest to the optimal value and its convergence is also excellent. In addition, as shown in
Figure 12, the CMRLCCOA algorithm has a very large slope, almost vertical, on the early convergence curves of most functions, indicating that the algorithm has a high sensitivity. Also, PSO, IGWO, OMA, and GMO algorithms show good competence in certain functions. The results show that CMRLCCOA converges faster, gradually approaches the optimal solution, and has better optimization ability than others.
4.5.3. Boxplot of Experimental Results
Figure 18 illustrates box plots of CMRLCCOA and the other comparative algorithms optimizing the CEC2019 test functions. It can be seen from the figure that CMRLCCOA has a lower median and a narrower interquartile range, especially for the functions cec01, cec02, cec05, and cec10. This shows that the solutions of CMRLCCOA are more concentrated than those of the other algorithms and are robust. However, CMRLCCOA produces outliers in the optimization of some functions, such as cec08 and cec09, which indicates that the algorithm is unstable to some extent.
4.5.4. Radargram Behavior Analysis
A radar chart displays multidimensional data, shows how much weight each variable carries in a data set, and can be used to present performance data. To visualize the performance ranking of all algorithms on the different test functions,
Figure 19 illustrates the radar chart of the rankings on the 10 test functions. A larger filled area indicates a worse overall ranking of the algorithm. From
Figure 19, it can be concluded that CMRLCCOA has the smallest area, which indicates that CMRLCCOA has the smallest total ranking and the best overall optimization capability. Furthermore, GMO and IGWO also show better performance.
To show the results on the test set more clearly, they are also presented as stacked histograms. As shown in Figure 20, the total height of the CMRLCCOA bar is the lowest, indicating that CMRLCCOA has the best overall performance and is effective. This shows that combining multiple strategies with COA to construct CMRLCCOA is both effective and successful.
6. Real Application: Hypersonic Vehicle Cruise Trajectory Optimization
Hypersonic technology is an important milestone in the history of armaments and equipment. It greatly enriches offensive and defensive confrontation in near space, represents a country's future ability to develop and utilize space, is an important symbol of an army's combat power and survivability, and has broad application prospects and great military value. The main advantages of hypersonic vehicles are high flight speed, high flight altitude, strong penetration and defense capability, and great destructive power. In future informatized and intelligent combat, hypersonic vehicles can exploit these characteristics to play a major role [84].
Since hypersonic vehicles fly extremely fast, the environment becomes more complex when the vehicle enters the re-entry or cruise phase, which requires a control system that is extremely stable while achieving precise control. Because of the extremely high speed, the missile cannot make a sharp turn in the air; therefore, in some instances, it is necessary to limit the curvature and turn rate of the vehicle trajectory [85]. Many scholars worldwide have studied this problem. In this section, we model the path planning of hypersonic cruise missiles and apply CMRLCCOA to solve the problem [86].
6.1. Background and Establishment of the Model
With the continuous development of weapons technology, military defense and control systems are gradually being improved. A traditional ballistic missile path faces the risk of being predicted or even intercepted, which is unsafe. In practical ballistic path optimization design, different tactical indicators often correspond to different optimization objectives. Hypersonic cruise missiles fly extremely fast and can change their trajectory, thus greatly reducing the risk of interception; these characteristics make it possible to attack targets with very short warning times and at very high speeds. However, current research in this area is relatively limited and has not achieved a major breakthrough. In this section, we consider cruise missile trajectories at hypersonic speeds, first considering only the following two conditions:
The hypersonic flight threat area and trajectory map are shown in
Figure 24. The constrained region is shown in
Figure 25, which shows the positional coordinates of the vehicle in relation to the radar. In this paper, a radar-centered range of 400 km is used as the solution space, and the vertical distance between the defense units is required to be as large as possible, thus increasing the lateral distance that the interceptor missile must cover.
Hypersonic missile trajectory modeling needs to satisfy certain conditions. Assume that there are a total of n cubic curves; the curvature of the i-th curve is denoted as pi(t), and the length of each curve, the derivative of its curvature, and the control points are defined accordingly.
Optimization Objective: the length of the missile trajectory curve is minimized while the curvature derivative is maximized.
Constraints: the feasible domain, continuity constraints, and maximum curvature constraints.
Decision Variables: the control vertices.
Minimize
subject to
where k1 and k2 are the weighting factors.
The above is a more complex minimization problem proposed in this paper. Next, CMRLCCOA is used to solve it for hypersonic missiles.
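Purely as an illustration of how the two objectives can be combined through k1 and k2, a weighted-sum scalarization might take a form such as the following; the symbols L_i (length of the i-th curve segment) and p_max (maximum allowed curvature) are assumptions introduced here, not the formulation given above:

\[
\min_{\text{control vertices}} \; J = k_1 \sum_{i=1}^{n} L_i \;-\; k_2 \sum_{i=1}^{n} \max_{t}\,\bigl|p_i'(t)\bigr|,
\qquad \text{s.t. feasibility, continuity, and } \bigl|p_i(t)\bigr| \le p_{\max}.
\]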
6.2. Solving the Model
CMRLCCOA, as well as KOA, TROA, BWO, AO, HBA, SWO, GMO, OMA, and GO, is applied to the hypersonic cruise ballistic optimization problem.
Table 11 lists the optimal ballistic lengths obtained by the 10 algorithms. It can be observed from the table that the shortest length obtained by CMRLCCOA is 51.231801 km, which is smaller than the results calculated by the other algorithms, indicating that CMRLCCOA performs better.
7. Summary and Outlook
In this paper, four strategies are used to improve COA, leading to the proposed CMRLCCOA. First, in the population initialization phase, the coati population is initialized using the Sine chaotic mapping function to avoid the drawbacks of purely random initialization. Second, a lens imaging reverse learning strategy is applied to update the location of the coati population again; this strategy expands the search space and enhances the quality of the coati population. Then, the Lévy flight strategy allows coatis to move over a wide range in the search space, reducing the constraint imposed by the iguana and making the algorithm better at finding the global optimum. Finally, the crossover strategy reduces search blind spots and improves the algorithm's accuracy. Experiments are conducted on the CEC2017 and CEC2019 test suites, with dimensions of 50 and 100 used for CEC2017. The optimization results of CMRLCCOA are compared with COA, six new algorithms proposed in the last two years, four classical and well-recognized algorithms, and three enhanced algorithms, and the newly proposed CMRLCCOA achieves better results and higher performance. In addition, CMRLCCOA obtains better solutions to the three engineering problems. Finally, this paper also establishes a model of the hypersonic vehicle cruise ballistic problem; CMRLCCOA performs best in solving this trajectory optimization, reflecting its strong optimization capability and stability.
To conclude, this study has strong scientific and practical value. Possible future work is as follows: although the proposed CMRLCCOA has enhanced optimization capability and accelerated convergence, it still has room for improvement in computational complexity and computation time, and it will be further optimized in follow-up work. In addition, we will continue to build on CMRLCCOA to obtain better solutions and apply it to many complicated optimization problems, including route planning [
87,
88], image segmentation problems [
89], workshop scheduling [
90], feature selection [
91,
92], shape optimization [
76], and engineering optimization [
93], and further expand the application field of intelligent algorithms.