Multi-Strategy-Improved Growth Optimizer and Its Applications

The growth optimizer (GO) is a novel metaheuristic algorithm designed to tackle complex optimization problems. Despite its advantages of simplicity and high efficiency, GO often suffers from local stagnation when dealing with discretized, high-dimensional, and multi-constraint problems. To address these issues, this paper proposes an enhanced version of GO called CODGBGO. This algorithm incorporates three strategies to enhance its performance. First, the Circle-OBL initialization strategy is employed to improve the quality of the initial population. Second, an exploration strategy is implemented to improve population diversity and the algorithm's ability to escape local optimum traps. Finally, an exploitation strategy is utilized to improve the convergence speed and accuracy of the algorithm. To validate the performance of CODGBGO, it is applied to the CEC2017 and CEC2020 benchmark suites, 18 feature selection problems, and 4 real-world engineering optimization problems. The experiments demonstrate that CODGBGO effectively addresses the challenges posed by complex optimization problems and offers a promising approach.


Introduction
Many realistic problems in scientific research and engineering applications can be converted into optimization problems [1,2]. Solving them involves many challenges and difficulties, such as nonlinearity, discretization, and high complexity [3,4]. Metaheuristic algorithms are an effective method for this class of problems and are widely used in real-world optimization due to their simplicity and ease of implementation [5,6].
In recent years, researchers have proposed a large number of optimization algorithms to solve realistic optimization problems. In this paper, these algorithms are categorized into four groups: evolution-based algorithms, population-based algorithms, chemistry- and physics-based algorithms, and human-based algorithms. Evolution-based algorithms are typically represented by Evolutionary Strategies (ES) [7], Biogeography-Based Optimization (BBO) [8], and the Genetic Algorithm (GA) [9]. Population-based algorithms include the Fire Hawk Optimizer (FHO) [10], the Bat Algorithm (BA) [11], the fox optimizer [12], and Golden Jackal Optimization (GJO) [13]. Chemistry- and physics-based algorithms include the Gravitational Search Algorithm (GSA) [14], the Big Bang-Big Crunch algorithm (BB-BC) [15], Simulated Annealing (SA) [16], Magnetic Optimization Algorithms (MOA) [17], Water Evaporation Optimization (WEO) [18], and Atom Search Optimization (ASO) [19]. Human-based algorithms include search and rescue optimization [20], Human Mental Search (HMS) [21], and the arithmetic optimization algorithm (AOA) [22]. However, as optimization problems become highly complex, these algorithms exhibit drawbacks in solving real-world problems, such as getting stuck in local stagnation and poor convergence performance. Therefore, optimization algorithms with improved strategies are widely used to solve complex real-world optimization problems.
Several researchers have proposed enhanced optimization algorithms to tackle specific engineering design challenges. A.M. Shaheen et al. introduced the enhanced equilibrium optimization algorithm (IEOA) for power distribution network configuration, achieving optimal allocation of distributed generators [23]. Bahaeddin Turkoglu et al. introduced a binary artificial algae algorithm to enhance feature selection and classification accuracy [24]. Gang Hu developed an enhanced variant of the black widow optimization algorithm for feature selection, yielding notable performance outcomes [25]. Min Xu proposed a novel binary arithmetic optimization algorithm (BAOA) that outperformed competitors on benchmark feature selection datasets [26]. Gang Hu et al. proposed an enhanced hybrid arithmetic optimization algorithm named CSOAOA for solving engineering problems and demonstrated its utility on real optimization problems [27].
Although existing heuristic algorithms have made good progress in solving optimization problems, the number of combinations an algorithm must search grows exponentially as the dimensionality of the problem expands and the complexity of its constraints increases. Existing algorithms often suffer from premature convergence, meaning that they may stop at suboptimal solution sets before reaching the global optimum. The fundamental reason is that their exploration and exploitation performance is weak, which limits their effectiveness in capturing the intrinsic patterns and features of the actual optimization problem. Therefore, it is necessary to explore a new, suitable, and efficient metaheuristic algorithm that can fully explore the search space during optimization in order to better mitigate the challenges of high-dimensional problems. Fortunately, the growth optimizer (GO), inspired by the learning process of individuals during social growth, has been proven to be a robust tool with high exploration capability [28]. Relevant studies have shown that GO has strong exploration capability and application scalability. For example, GO has been successfully applied to the parameter identification of solar photovoltaic cells [29,30], multi-level threshold image segmentation and wireless sensor network node deployment [31], and enhancing intrusion detection systems in the internet of things and cloud environments [32].
To the best of our knowledge, existing papers have not attempted to propose an improved GO applicable to both continuous and discrete optimization problems. Considering that GO may fall into local optima when solving high-dimensional complex optimization problems, this paper proposes an enhanced growth optimizer (CODGBGO) that combines three strategies. The Circle-OBL initialization strategy is used to generate initial solutions with a good distribution. The exploration strategy expands the search space through self-learning within a region of radius R and through learning the differences among individuals, which enhances the exploration ability of the algorithm and its ability to jump out of local optimum traps. The exploitation strategy greatly improves the convergence speed and accuracy of the algorithm through the guidance of optimal individuals. The main contributions of this paper are summarized as follows:

• The Circle-OBL initialization strategy produces better-distributed initial populations, improving the performance of the algorithm.

• The exploration strategy improves the global exploration performance of the algorithm by self-learning within a radius R and improves population diversity by learning the differences between individuals.

• The exploitation strategy improves the convergence speed and accuracy of the algorithm through the guidance of the optimal individuals.

• CODGBGO is proposed by combining the above strategies, and it is confirmed to be a promising optimization method on numerical optimization, feature selection, and engineering optimization problems.
The remainder of this paper is structured as follows: Section 2 presents the fundamental theory of the original GO. Section 3 introduces the proposed CODGBGO. Section 4 applies CODGBGO to various problem sets, including IEEE CEC2017 [33] and IEEE CEC2020 [34], as well as 18 feature selection problems and 4 mechanical design problems. Section 5 provides conclusions and future directions.

The Theory of Growth Optimizer
Inspired by the process of personal growth, GO was introduced in 2022 as a novel optimizer [28]. Learning and reflection are the two key stages of GO; they complement each other and foster individual growth. Learning means drawing knowledge from the gaps between different individuals, while reflecting means improving one's knowledge by summarizing one's strengths and weaknesses.

Learning Stage
In the learning stage, growth resistance ($GR$) indicates the amount of knowledge that an individual has learned. Assuming the $i$th individual is represented as $x_i = (x_{i1}, x_{i2}, \dots, x_{iD})$, learners learn from and benefit from the knowledge gaps $Gap_k$, formed between the best, better, worse, and two random individuals:

$$Gap_1 = x_{best} - x_{better},\quad Gap_2 = x_{best} - x_{worse},\quad Gap_3 = x_{better} - x_{worse},\quad Gap_4 = x_{L1} - x_{L2} \qquad (1)$$

Generally, individuals tend to learn from information with a large knowledge gap. A learning factor ($LF$) is introduced for each knowledge gap, where $LF_k$ affects an individual's learning efficiency for the $k$th knowledge gap:

$$LF_k = \frac{\lVert Gap_k \rVert}{\sum_{k=1}^{4} \lVert Gap_k \rVert} \qquad (2)$$

where $LF_k$ is the normalization of the $k$th knowledge gap and its value lies in the range [0, 1]. A larger $LF_k$ indicates that the individual learns more efficiently from the $k$th knowledge gap, and vice versa.
The extent of knowledge learning varies across individuals; $SF_i$ denotes the learning extent of the $i$th individual:

$$SF_i = \frac{GR_i}{GR_{\max}} \qquad (3)$$

where $GR_i$ represents the growth resistance of the $i$th individual and $GR_{\max}$ represents the maximum growth resistance. A larger $SF_i$ indicates that the individual has a greater range of knowledge to learn and tends to perform the exploration phase; conversely, a smaller $SF_i$ tends toward the exploitation phase.
Individuals grow by learning from the knowledge gaps. The amount of knowledge acquired by the $i$th individual from the $k$th knowledge gap is denoted $KA_k$:

$$KA_k = SF_i \times LF_k \times Gap_k \qquad (4)$$

The specific learning process of the $i$th individual is then expressed as:

$$x_i^{new} = x_i(It) + KA_1 + KA_2 + KA_3 + KA_4 \qquad (5)$$

where $It$ represents the current iteration number and $x_i^{new}$ represents the new state of the $i$th individual after learning.
Adjusting individuals during the learning stage may improve their quality, but it may also cause regression. Individuals whose quality improves are retained; individuals whose quality regresses are retained only with probability $P_2$ and are otherwise discarded:

$$x_i(It+1) = \begin{cases} x_i^{new}, & f(x_i^{new}) < f(x_i(It)) \\ x_i^{new}, & f(x_i^{new}) \ge f(x_i(It)) \ \text{and} \ r_1 < P_2 \\ x_i(It), & \text{otherwise} \end{cases} \qquad (6)$$

where $r_1$ denotes a random number in the range [0, 1], $P_2$ takes the value 0.001, and $f(x_i^{new})$ denotes the objective function value of the $i$th individual after learning.
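The learning-stage equations can be sketched in code. The sketch below is a minimal illustration, not the authors' implementation: the four gap vectors are assumed to be supplied by the caller (built from the best, better, worse, and two random individuals), and all function and variable names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def learning_stage(x_i, gaps, gr_i, gr_max, p2, f, old_fit):
    """One GO learning update for individual x_i (illustrative sketch).

    gaps   : list of knowledge-gap vectors Gap_1..Gap_4
    gr_i   : growth resistance of this individual
    gr_max : maximum growth resistance in the population
    """
    norms = np.array([np.linalg.norm(g) for g in gaps])
    lf = norms / (norms.sum() + 1e-12)                      # Eq. (2): learning factors
    sf = gr_i / gr_max                                      # Eq. (3): learning extent
    ka = [sf * lf[k] * gaps[k] for k in range(len(gaps))]   # Eq. (4): knowledge acquired
    x_new = x_i + sum(ka)                                   # Eq. (5): learning update
    new_fit = f(x_new)
    if new_fit < old_fit or rng.random() < p2:              # Eq. (6): keep improvements,
        return x_new, new_fit                               # or a worse move with prob P2
    return x_i, old_fit
```

The retention branch mirrors Equation (6): a regressed individual survives only with the small probability P2 = 0.001.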

Reflection Stage
The reflection stage is mainly a stage in which the individual retains his good aspects by reflecting on his deficiencies and, for the bad aspects, learns from the good individuals. For aspects that cannot be remedied, it is necessary to abandon the previous knowledge and learn systematically again. The reflection stage of GO is expressed as:

$$x_{i,j}^{new} = \begin{cases} lb + r_5 \times (ub - lb), & r_2 < P_3 \ \text{and} \ r_4 < AF \\ R_j + r_3 \times (R_j - x_{i,j}(It)), & r_2 < P_3 \ \text{and} \ r_4 \ge AF \\ x_{i,j}(It), & \text{otherwise} \end{cases} \qquad (7)$$

$$AF = 0.01 + 0.99 \times \left(1 - \frac{FEs}{MaxFEs}\right) \qquad (8)$$

where $lb$ and $ub$ are the lower and upper bounds of the problem, respectively; $r_2$, $r_3$, $r_4$, $r_5$ are pseudorandom numbers in the range [0, 1]; $P_3$ represents the probability of reflection, which takes the value 0.3 in this paper; $AF$ represents the attenuation factor, which decreases linearly from 1 to 0.01 as $FEs$ increases; $FEs$ represents the current number of function evaluations; and $MaxFEs$ represents the maximal number of function evaluations. $R_j$ denotes the $j$th-dimension information of the leader and elite individuals: if the $i$th individual needs to learn the $j$th dimension from other excellent individuals, an upper-level individual guides it.
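The reflection stage can be sketched as follows. The branch structure (per-dimension reflection with probability P3, re-initialization when a random draw falls below the attenuation factor) and the linear AF decay follow the description in the text; names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def reflection_stage(x_i, r_guide, lb, ub, p3, fes, max_fes):
    """One GO reflection update (illustrative sketch of Eqs. 7-8).

    r_guide : per-dimension guidance from the leader/elite individual (R_j)
    """
    af = 0.01 + 0.99 * (1.0 - fes / max_fes)     # Eq. (8): decays linearly 1 -> 0.01
    x_new = x_i.copy()
    for j in range(x_i.size):
        if rng.random() < p3:                     # reflect this dimension
            if rng.random() < af:                 # knowledge cannot be remedied:
                x_new[j] = lb + rng.random() * (ub - lb)   # relearn from scratch
            else:                                 # learn from the superior individual
                x_new[j] = r_guide[j] + rng.random() * (r_guide[j] - x_i[j])
    return x_new
```

Because AF shrinks over the run, wholesale re-initialization becomes rarer in later iterations, shifting the stage from exploration toward refinement.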

Implementation of GO for Optimization
In this subsection, a detailed implementation of GO is given; Algorithm 1 shows the pseudo-code of GO. The implementation steps of the GO algorithm are as follows: Step 1: Initialize the run parameters, including the population size ($N$), the dimension of the problem to be solved ($D$), the bounds $ub$ and $lb$ of the problem to be solved, and the parameters $P_1$, $P_2$, $P_3$.
Step 2: Initialize the population and calculate the fitness function values based on the initialization parameters. Each individual is a $1 \times D$ vector; the whole population is an $N \times D$ matrix.
Step 3: Enter the main loop. Sort the individuals in the population by $GR$ to obtain the optimal solution $x_{best}$; $x_{best}$ is updated in each iteration.
Step 4: Execute the learning stage by first selecting the elite individual $x_{better}$, the bottom individual $x_{worse}$, and the random individuals $x_{L1}$ and $x_{L2}$, then sequentially using Equations (1)-(6) to update the position of the $i$th individual, updating the global optimum $x_{gbest}$ in a timely manner. The number of function evaluations $FEs$ is incremented by 1.
Step 5: Execute the reflection stage by updating the $j$th dimension of the $i$th individual according to Equations (7) and (8), then use Equation (6) to update the $i$th individual's position, updating the global optimal solution $x_{gbest}$ in a timely manner. The number of function evaluations $FEs$ is incremented by 1.
Step 6: If the termination condition is met, return the global optimal solution $x_{gbest}$; otherwise, go to Step 3 and continue execution.
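The six steps can be tied together in a compact skeleton. This is an illustrative stand-in, assuming simplified learning and reflection moves (a best-guided step and a probabilistic per-dimension reset) rather than the full Equations (1)-(8):

```python
import numpy as np

def go_skeleton(f, n, d, lb, ub, max_fes, p2=0.001, p3=0.3):
    """Step-by-step skeleton of the GO main loop (illustrative simplification)."""
    rng = np.random.default_rng(2)
    pop = lb + rng.random((n, d)) * (ub - lb)    # Step 2: initialize N x D matrix
    fit = np.apply_along_axis(f, 1, pop)
    fes = n
    while fes < max_fes:                         # Steps 3-6: main loop
        order = np.argsort(fit)                  # sort so pop[0] is the best
        pop, fit = pop[order], fit[order]
        x_best = pop[0]
        for i in range(n):
            # Step 4 (simplified learning): move along the gap toward x_best
            trial = pop[i] + rng.random(d) * (x_best - pop[i])
            tf = f(trial); fes += 1
            if tf < fit[i] or rng.random() < p2:  # Eq. (6)-style retention
                pop[i], fit[i] = trial, tf
            # Step 5 (simplified reflection): reset each dimension with prob P3
            trial = pop[i].copy()
            mask = rng.random(d) < p3
            trial[mask] = lb + rng.random(mask.sum()) * (ub - lb)
            tf = f(trial); fes += 1
            if tf < fit[i]:
                pop[i], fit[i] = trial, tf
    return pop[np.argmin(fit)], fit.min()        # Step 6: return x_gbest
```

Running it on a sphere function with a small budget already drives the best fitness well below that of the random initial population.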

A Multi-Strategy Enhanced Growth Optimizer
The original GO has the advantages of fast convergence and a simple structure. However, when dealing with complex high-dimensional optimization problems, the algorithm lacks population diversity during iteration, so it tends to become trapped in local optima; its exploitation capability is also deficient, costing convergence accuracy and speed. In this work, the Circle-OBL initialization strategy, the exploration strategy, and the exploitation strategy are introduced into the original GO to form CODGBGO. In CODGBGO, first, the Circle-OBL initialization strategy generates a well-distributed initial population to improve the overall quality of the population. Second, the exploration strategy is applied in the learning stage to improve population diversity, which improves the algorithm's ability to escape local optimization traps. Finally, the exploitation strategy is employed in the reflection stage to improve the exploitation performance, speed up convergence, and improve convergence accuracy.

Circle-OBL Initialization Strategy
The literature [35] indicates that improving the initialization scheme with chaotic mapping and OBL strategies can generate initial populations of better solution quality, leading to better optimization results. Chaotic mapping addresses premature convergence by generating initial solutions with higher diversity, while the OBL strategy accelerates convergence by exploring a wider region of the solution space during initialization. Therefore, in this subsection, the population is initialized using the Circle-OBL initialization strategy, where the circle chaotic map is given by Equation (9) [36]:

$$z_{k+1} = \mathrm{mod}\left(z_k + b - \frac{a}{2\pi}\sin(2\pi z_k),\ 1\right) \qquad (9)$$
According to the literature [36], where mod represents the modulo operation and the control parameters are commonly set to $a = 0.5$ and $b = 0.2$, sequences generated by the circle chaotic map are more diverse. Figure 1a,b show the sequence distributions after 500 iterations for pseudo-random mapping and circle chaotic mapping, respectively; the circle chaotic sequences traverse the space better. The individuals are then initialized from the generated circle chaotic sequences, $x_{i,j} = lb_j + z_{i,j} \times (ub_j - lb_j)$ (10), and simultaneously subjected to opposition-based learning, $\breve{x}_{i,j} = lb_j + ub_j - x_{i,j}$, expressed as Equation (11); the fitter candidates are retained in the initial population.
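A minimal sketch of the Circle-OBL initialization: a circle chaotic sequence seeds the population, an opposition-based population is generated, and the fitter half of the combined pool is kept. The map constants a = 0.5 and b = 0.2 and the keep-the-best selection are common choices assumed here, not taken verbatim from the paper.

```python
import numpy as np

def circle_obl_init(f, n, d, lb, ub, a=0.5, b=0.2):
    """Circle-OBL initialization sketch (Eqs. 9-11, illustrative)."""
    z = np.empty((n, d))
    z[0] = np.random.default_rng(3).random(d)     # chaotic seed in (0, 1)
    for k in range(1, n):                          # circle chaotic map, Eq. (9)
        z[k] = np.mod(z[k - 1] + b - (a / (2 * np.pi)) * np.sin(2 * np.pi * z[k - 1]), 1.0)
    pop = lb + z * (ub - lb)                       # Eq. (10): chaotic population
    opp = lb + ub - pop                            # Eq. (11): opposition-based learning
    both = np.vstack([pop, opp])
    fit = np.apply_along_axis(f, 1, both)
    keep = np.argsort(fit)[:n]                     # keep the N fittest of the 2N candidates
    return both[keep], fit[keep]
```

The returned population is sorted by fitness and stays inside the box bounds by construction.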

Exploration Strategy
The original GO is prone to falling into local optimum traps due to a lack of population diversity in the late stage of iteration. In this section, an exploration strategy is proposed to enhance population diversity and thereby improve the algorithm's ability to jump out of local optimum traps. The strategy contains a differential strategy part and a self-search strategy part.
The literature [37] indicates that the differential strategy can effectively enhance an algorithm's global search capability and its ability to jump out of local optimum traps. Inspired by this, the differential strategy is introduced into the algorithm in this section. The main idea is to learn from the knowledge gap between the current individual and a random individual, and from the knowledge gap between two random individuals; the two knowledge gaps are expressed as:

$$Gap_{ki} = x_i(It) - x_{r1}(It), \qquad Gap_{DE} = x_{r2}(It) - x_{r3}(It) \qquad (12, 13)$$

where $Gap_{ki}$ denotes the knowledge gap between the current individual and a random individual, and $Gap_{DE}$ denotes the knowledge gap between two random individuals. The $i$th individual then learns equally from the two gaps:

$$x_i^{new} = x_i(It) + r_6 \times \frac{Gap_{ki} + Gap_{DE}}{2} \qquad (14)$$

where $r_6$ denotes a random number in the range [0, 1]. The literature [38] indicates that individuals can effectively enhance the global search performance of an algorithm by learning within a certain range. Inspired by this, a self-learning strategy is proposed in this subsection: an individual improves itself by learning knowledge within a radius $R$. A normally distributed random number, whose distribution is wide, is suitable as a learning radius and strengthens the global search ability; however, a large search radius late in the iteration would slow convergence, so the normally distributed random number is multiplied by a tail term, governed by the parameter $\alpha$ and decaying as $FEs$ increases, to balance global and local search (Equation (15)). The self-search process of the $i$th individual is then

$$x_i^{new} = x_i(It) + R \qquad (16)$$

The equal combination of the differential strategy and the self-search strategy forms the exploration strategy. The process is shown in Figure 2, where the green line represents the simulation of the exploration strategy. Through the self-learning strategy, the individual jumps out of the local optimal region by learning in the region of radius R, and through the differential strategy, the individual learns from the differences among other individuals, improving its global search ability. It can also be seen that the population's exploration space is enlarged, confirming that the exploration strategy enhances the algorithm's global search ability. The new state of the $i$th individual is subsequently retained using elite retention:

$$x_i(It+1) = \begin{cases} x_i^{new}, & f(x_i^{new}) < f(x_i(It)) \\ x_i(It), & \text{otherwise} \end{cases} \qquad (17)$$

where $f(\cdot)$ denotes the objective function value.
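An illustrative sketch of the exploration move, combining the two knowledge gaps with a self-search step of radius R. The equal 1/2 weighting and the decaying tail term `alpha ** (fes / max_fes)` are assumptions standing in for the exact formulas, which are not fully recoverable from the text.

```python
import numpy as np

rng = np.random.default_rng(4)

def exploration_move(pop, i, fes, max_fes, alpha=0.8):
    """Exploration-strategy sketch: differential learning plus self-search."""
    n, d = pop.shape
    r1, r2, r3 = rng.choice(n, size=3, replace=False)  # random partners
    gap_ki = pop[i] - pop[r1]            # gap: current vs. random individual
    gap_de = pop[r2] - pop[r3]           # gap: two random individuals
    diff = 0.5 * (gap_ki + gap_de)       # equal weight on both gaps
    radius = rng.standard_normal(d) * alpha ** (fes / max_fes)  # self-search radius R
    return pop[i] + rng.random(d) * diff + radius
```

Elite retention (Equation (17)) would then keep the trial point only if it improves the objective value.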

Exploitation Strategy
The literature [39] states that bootstrapping by the optimal individual can effectively enhance exploitation and improve convergence speed. Inspired by this, an exploitation strategy based on optimal-individual bootstrapping is proposed in this section. At the same time, to guard against falling into a local optimum, a differential term is also included so as to balance the exploitation and exploration phases while enhancing the exploitation capability as much as possible. The gap is represented as:

$$Gap_{4/ki} = x_{k4}(It) - x_i(It) \qquad (18)$$

where $x_{k4}$ denotes a random individual and $Gap_{4/ki}$ denotes the gap between the current individual and the random individual. To further enhance exploitation, the global optimal individual guides the position update of the current individual; this process is shown in Figure 3. The individual quickly approaches the global optimal individual and, thanks to the differential term, avoids falling into the local optimal region, which improves both the convergence speed and the practical exploitation ability of the algorithm:

$$x_i^{new} = x_i(It) + r_7 \times (x_{gbest}(It) - x_i(It)) + r_8 \times Gap_{4/ki} \qquad (19)$$

where $r_7$ and $r_8$ denote random numbers in the range [0, 1]. Figure 3. The process of the exploitation strategy.
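A hedged sketch of the exploitation move: a pull toward the global best plus a differential term toward a random individual. The random coefficients are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(5)

def exploitation_move(x_i, x_best, x_rand):
    """Exploitation-strategy sketch: best-individual bootstrapping with a
    differential escape term toward a random individual."""
    d = x_i.size
    pull = rng.random(d) * (x_best - x_i)    # bootstrapping toward the global best
    diff = rng.random(d) * (x_rand - x_i)    # Gap_{4/ki}: differential escape term
    return x_i + pull + diff
```

When the individual already coincides with both the best and the random partner, both terms vanish and the position is unchanged, so the move is a pure contraction toward promising regions.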

Implementation of CODGBGO for Optimization
In this subsection, a detailed implementation of CODGBGO is given; Algorithm 2 shows the pseudo-code of CODGBGO. The implementation steps of the CODGBGO algorithm are as follows: Step 1: Initialize the run parameters, including the population size ($N$), the problem dimension ($D$), the bounds $ub$ and $lb$, and the parameters $P_1$, $P_2$, $P_3$, $\alpha$, and $\beta$.
Step 2: Use the Circle-OBL initialization strategy. Initialize the population according to Equations (10) and (11) and calculate the objective function values based on the initialization parameters. Each individual is a $1 \times D$ vector; the whole population is an $N \times D$ matrix.
Step 3: Enter the main loop. Sort the individuals in the population by $GR$ to obtain the optimal solution $x_{best}$; $x_{best}$ is updated in each iteration. Step 4: Execute the learning stage: depending on a random switch, either sequentially use Equations (1)-(6) to update the position of the $i$th individual, or use Equations (12)-(17) (the exploration strategy) instead, updating the global optimum $x_{gbest}$ in a timely manner. The number of function evaluations $FEs$ is incremented by 1.
Step 5: Execute the reflection stage: depending on a random switch, either update the $j$th dimension of the $i$th individual according to Equations (7) and (8) and then use Equation (6) to update the individual's position, or update the position of the $i$th individual according to Equations (17)-(19) (the exploitation strategy), updating the global optimal solution $x_{gbest}$ in a timely manner. The number of function evaluations $FEs$ is incremented by 1.
Step 6: If the termination condition is met, return the global optimal solution $x_{gbest}$; otherwise, go to Step 3 and continue execution.

Computational Complexity
This section analyzes the computational complexity of the proposed CODGBGO algorithm. The computational complexity of the original GO initialization is $O(N)$. In each iteration, each member of the population undergoes two stages of updating and evaluation of its objective function, so the computational complexity of the update process is $O(2 \times T \times N \times D)$, where $T$ is the maximum number of iterations. Therefore, the computational complexity of GO is $O(N \times (2TD + 1))$. Compared to GO, CODGBGO introduces the Circle-OBL initialization strategy in the initialization process, so the computational complexity of initialization is $O(2N)$. The introduction of the exploration and exploitation strategies does not change the original GO logic, so the computational complexity of the update process remains $O(2 \times T \times N \times D)$. Therefore, the computational complexity of CODGBGO is $O(2N \times (TD + 1))$.

Experimental Results
In this section, we conduct a series of experiments to evaluate the performance of the proposed CODGBGO. First, its performance on numerical optimization problems is evaluated using the CEC2017 and CEC2020 test sets. Second, its performance on discretized problems is evaluated using 18 feature selection problems. Finally, four constrained engineering applications are solved to evaluate its performance on realistic constrained optimization problems. Meanwhile, to evaluate the solution performance of CODGBGO comprehensively and objectively, it is compared with numerous algorithms, including classical algorithms, improved algorithms, highly cited algorithms, popular algorithms, new algorithms, and superior algorithms.
To ensure the fairness of the experiments, the population size was set to 60 and the maximum number of function evaluations to 60,000. The test dimensions were set to 30D, 50D, and 100D for CEC2017, and 10D and 20D for CEC2020. All experiments were performed on Windows 11, with code executed in the MATLAB R2021b environment.

Comparison Algorithm and CEC Benchmark Problems
To comprehensively and objectively evaluate the performance of the proposed CODGBGO, this paper compares it with several existing optimization algorithms on the IEEE CEC2017 and IEEE CEC2020 test function sets, detailed in Tables 1 and 2, which involve unimodal, multimodal, hybrid, and composition functions. A unimodal function contains only one optimum and is mainly used to verify the local search capability of CODGBGO. Multimodal, hybrid, and composition functions are more complex, and because they contain multiple local optima, algorithms easily fall into local optimum traps when solving them. Therefore, the tests on multimodal, hybrid, and composition functions are mainly used to verify the performance of CODGBGO on complex problems, to check its ability to jump out of local optima, and to evaluate the balance between the global and local search phases. Meanwhile, to reflect the performance of CODGBGO comprehensively, 21 comparison algorithms are selected for experimental comparison, including classical algorithms, improved algorithms, highly cited algorithms, popular algorithms, new algorithms, and superior algorithms. The specific parameter configurations are shown in Table 3.

In this subsection, the effectiveness of each added strategy is tested. CODGBGO is formed by integrating the Circle-OBL initialization strategy, the exploration strategy, and the exploitation strategy into GO. To verify the effectiveness of each strategy, each is individually integrated into GO to form a new combination: the Circle-OBL initialization strategy yields COGO, the exploration strategy yields DGGO, and the exploitation strategy yields BGO. These combinations were evaluated on the CEC2017 test function set at 30D, with each experiment run independently 30 times, and the results were ranked using the Friedman mean rank test to verify the effectiveness of the strategies. The experimental results are shown in Table 4. From Table 4, it can be observed that the rankings of COGO, DGGO, and BGO are all superior to GO, indicating that each strategy individually enhances the performance of GO; combining the three strategies into GO yields an even more effective enhancement.
In this subsection, the two control parameters of CODGBGO are also studied in detail to determine the optimal parameter combination. To avoid experimental redundancy, a preliminary round of testing was performed before selecting {α, β}: parameter α showed a significant advantage in the interval [0.7, 0.9], and parameter β in the interval [0.85, 0.95]. To determine the final values, parameter α is divided into the set {0.7, 0.8, 0.9} at intervals of 0.1 and parameter β into the set {0.85, 0.9, 0.95} at intervals of 0.05, giving 9 combinations:
{0.7, 0.85}, {0.7, 0.9}, {0.7, 0.95}, {0.8, 0.85}, {0.8, 0.9}, {0.8, 0.95}, {0.9, 0.85}, {0.9, 0.9}, and {0.9, 0.95}.In order to determine the best parameter combinations, these nine combinations are tested in the CEC2017 test function set with a test dimension of 30D.Each experiment was run independently 30 times, using the Friedman mean rank test to rank the experimental results and determine the optimal parameter combination.The experimental results are shown in Table 5.
From Table 5, it can be found that when the value of β is 0.85, the sum of the final rankings of the combinations {0.7, 0.85}, {0.8, 0.85}, and {0.9, 0.85} is 24. When the value of β is 0.9, the sum of the final rankings of the combinations {0.7, 0.9}, {0.8, 0.9}, and {0.9, 0.9} is 13. When the value of β is 0.95, the sum of the final rankings of the combinations {0.7, 0.95}, {0.8, 0.95}, and {0.9, 0.95} is 8. In summary, the algorithm achieves better results when β is 0.95. In the same way, when the value of α is 0.7, the sum of the final rankings of the combinations {0.7, 0.85}, {0.7, 0.9}, and {0.7, 0.95} is 14. When the value of α is 0.8, the sum of the final rankings of the combinations {0.8, 0.85}, {0.8, 0.9}, and {0.8, 0.95} is 11. When the value of α is 0.9, the sum of the final rankings of the combinations {0.9, 0.85}, {0.9, 0.9}, and {0.9, 0.95} is 20. In summary, the algorithm achieves better results when α takes the value 0.8. Therefore, the combination {0.8, 0.95} is chosen as the best parameter combination for the subsequent experiments in this paper.
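The Friedman mean-rank tabulation used for the strategy and parameter comparisons can be sketched as below; ties are broken by column order in this sketch, whereas a full Friedman test averages tied ranks.

```python
import numpy as np

def friedman_mean_ranks(results):
    """Mean-rank tabulation sketch for comparing algorithms.

    results : (problems x algorithms) array of mean errors; lower is better.
    Returns the mean rank of each algorithm across problems (1 = best).
    Ties are broken by column order here; a full Friedman test would
    assign the average of tied ranks instead.
    """
    results = np.asarray(results, dtype=float)
    ranks = results.argsort(axis=1).argsort(axis=1) + 1  # per-problem ranks
    return ranks.mean(axis=0)                             # mean rank per algorithm
```

The algorithm (or parameter combination) with the smallest mean rank wins the comparison, matching how Tables 4 and 5 are read.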

Experimental Results
To further evaluate the performance of CODGBGO, in this subsection the proposed CODGBGO is experimentally compared with 10 common optimization algorithms on the IEEE CEC2017 test functions, where the compared algorithms include classical, highly cited, improved, and new algorithms. The test functions involve unimodal, multimodal, hybrid, and composition functions. The performance of CODGBGO is comprehensively assessed through population diversity analysis, exploration and exploitation analysis, numerical analysis, convergence and stability analysis, non-parametric analysis, and expanded analysis.

Population Diversity Analysis
Good population diversity helps the algorithm converge quickly to the global optimal solution and avoid getting trapped in local optima. In this section, we analyze the differences in population diversity between GO and CODGBGO on the IEEE CEC2017 test function set with a test dimension of 30D. Population diversity is calculated as:

$$I_C(t) = \frac{1}{D}\sum_{j=1}^{D} c_j(t) \qquad (20)$$

where $I_C(t)$ denotes the population diversity in generation $t$ and $c_j(t)$ denotes the centrifugal degree of the $j$th dimension in generation $t$:

$$c_j(t) = \frac{1}{N}\sum_{i=1}^{N}\left|x_{ij}(t) - \bar{x}_j(t)\right| \qquad (21)$$

where $\bar{x}_j(t)$ is the $j$th component of the population centroid. The experimental results are shown in Figure 4. As can be seen from Figure 4, the introduction of the Circle-OBL initialization strategy gives CODGBGO a higher initial population diversity than GO. Additionally, the introduction of the exploration strategy ensures that CODGBGO consistently maintains higher population diversity than GO throughout the iterations. In conclusion, CODGBGO can more effectively avoid getting trapped in local optima, allowing it to converge to the global optimal solution faster.
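The diversity measure can be sketched as follows, assuming the mean absolute deviation from the population centroid as the per-dimension centrifugal degree (the exact norm used in the paper is an assumption of this sketch):

```python
import numpy as np

def population_diversity(pop):
    """Population-diversity sketch: per-dimension centrifugal degree c_j(t)
    (mean absolute deviation from the centroid), averaged over dimensions."""
    centroid = pop.mean(axis=0)                # population centroid per dimension
    c = np.abs(pop - centroid).mean(axis=0)    # c_j(t) for each dimension j
    return c.mean()                            # I_C(t)
```

A population collapsed onto a single point scores zero, while a widely spread population scores high, which is exactly the quantity tracked over iterations in Figure 4.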

Exploration and Exploitation Analysis
Exploration and exploitation are the two most important stages in metaheuristic algorithms, and effectively controlling them helps enhance algorithm performance. In this section, the exploration and exploitation stages of CODGBGO are analyzed using the IEEE CEC2017 test function set with a test dimension of 30D. The exploration ratio is calculated using Equation (22) and the exploitation ratio using Equation (23):

$$XPL\% = \frac{Div(t)}{Div_{\max}} \times 100, \qquad XPT\% = \frac{\left|Div(t) - Div_{\max}\right|}{Div_{\max}} \times 100 \qquad (22, 23)$$
where () Di t denotes the dimension diversity measurement denoted as Equation ( 24), The experimental results are shown in Figure 5. From Figure 5, it can be observed that in the early iterations, CODGBGO explores the search space with a high percentage, indicating its ability to effectively explore the search space.Subsequently, due to the introduction of the exploitation strategy into GO, the exploitation ratio gradually increases, enhancing the algorithm's convergence speed and precision.Throughout the iteration process, CODGBGO achieves a good balance between exploration and exploitation stages, effectively avoiding the problem of algorithm stagnation in local optima and improving the algorithm's solving performance.As can be seen from Table 6, when the test dimension is 30D, CODGBGO ranks first on the 24 test functions, demonstrating a very excellent solving performance.On the unimodal function F1, the solution accuracy is weaker than that of INFO, which is mainly due to the fact that INFO adopts a more advanced exploitation technique, which guarantees the convergence accuracy, while the exploitation strategy introduced in this paper is only second to INFO.However, on the simple multimodal functions F4 to F10, CODGBGO occupies a great advantage and ranks first in 85.7% of cases, which indicates that due to the introduction of the exploration strategy, the algorithm's ability to jump out of the local optimum trap has been strengthened.Meanwhile, on the complex multimodal functions consisting of hybrid and composition functions, CODGBGO obtains a winning percentage of 90%, which indicates that the exploitation strategy and exploration strategy proposed in this paper make the two phases of the algorithm well-balanced, which enhances the algorithm's performance of the globally optimal search and can effectively cope with the challenges posed by the complex multimodal problems.Meanwhile, from a comprehensive point of view, CODGBGO outperforms other algorithms in 
terms of both the average ranking and the percentage of the first ranking, which indirectly confirms the role of the strategy proposed in this paper in boosting the performance of the algorithm.
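Equations (22)–(24) are not reproduced here; a widely used diversity-based formulation of these ratios (an assumption of this sketch, not necessarily identical to the paper's Equations (22)–(24)) measures, per iteration, the mean absolute deviation of each dimension from its median and normalizes by the maximum diversity observed:

```python
def diversity(pop):
    # Dimension-wise diversity: mean absolute deviation of each
    # dimension from its median, averaged over all dimensions.
    dims = len(pop[0])
    n = len(pop)
    total = 0.0
    for j in range(dims):
        col = sorted(ind[j] for ind in pop)
        med = col[n // 2] if n % 2 else 0.5 * (col[n // 2 - 1] + col[n // 2])
        total += sum(abs(med - ind[j]) for ind in pop) / n
    return total / dims

def xpl_xpt(div_history):
    # Exploration ratio: current diversity relative to the maximum
    # diversity observed so far; exploitation ratio is its complement.
    div_max = max(div_history)
    xpl = [100.0 * d / div_max for d in div_history]
    xpt = [100.0 * abs(d - div_max) / div_max for d in div_history]
    return xpl, xpt
```

Recording `diversity(pop)` once per iteration and then calling `xpl_xpt` yields curves of the kind plotted in Figure 5.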
Meanwhile, when the test dimension is 50D, CODGBGO also demonstrates strong solution performance, ranking first on 82.7% of the test functions. Among them, it dominates 85.7% of the simple multimodal test functions, which shows that the proposed exploration strategy can still effectively improve the algorithm's ability to escape local optima as the test dimension increases. On the complex multimodal functions it wins in 90% of cases, confirming that the proposed exploitation and exploration strategies remain effective as the dimension grows. The overall first-place ranking likewise shows that the proposed strategies continue to benefit the algorithm as the problem dimension increases.

Finally, when the test dimension is 100D, the solution performance of all the algorithms declines. Inevitably, the performance of CODGBGO also declines slightly, but the overall ranking shows that the proposed CODGBGO still holds an advantage on high-dimensional test problems. Specifically, it achieves a winning rate of 60% on the complex multimodal problems, which indirectly reflects that the exploration and exploitation strategies still contribute to performance on high-dimensional problems.

To summarize, the CEC2017 tests across different dimensions confirm that the proposed exploration and exploitation strategies effectively benefit the algorithm. A drawback is that this benefit gradually diminishes as the test dimension rises, but it remains within an acceptable range.

Convergence and Stability Analysis
In addition to solution accuracy, the convergence speed and stability of an algorithm are also important. This subsection analyzes the convergence and stability of CODGBGO. Experiments are conducted using the IEEE CEC2017 test function set with a test dimension of 30D, and the results are shown in Figures 6 and 7. From Figure 6, it can be seen that in most cases CODGBGO takes the lead after 30,000 function evaluations, converging faster and more accurately. Meanwhile, Figure 7 shows that CODGBGO has higher solution stability.

Nonparametric Analysis
In this subsection, non-parametric tests are used to compare the differences between CODGBGO and the competitive algorithms, with experiments conducted on the IEEE CEC2017 test function set at test dimensions of 30D, 50D, and 100D. First, the Wilcoxon rank sum test is applied, and the results are shown in Table 9, Table 10, and Table 11, respectively. A significance factor of p < 0.05 indicates a significant difference between a competitive algorithm and CODGBGO; otherwise, there is no significant difference. When a significant difference exists, the competitive algorithm is judged superior or inferior to CODGBGO by comparing the means: '+' indicates that the competitive algorithm is significantly better than CODGBGO, '−' that it is significantly weaker, and '=' that there is no significant difference. As can be seen from Tables 9-11, CODGBGO significantly outperforms the other competing algorithms in most cases, demonstrating strong comprehensive solution performance. Secondly, the experimental results are ranked using the Friedman mean rank test; the results in Table 12 show that CODGBGO is ahead of the competing algorithms in average rank across all test dimensions. In summary, the proposed CODGBGO is more competitive than the other algorithms on the IEEE CEC2017 test set.

Next, the performance differences between CODGBGO and several superior algorithms are examined using the IEEE CEC2020 test function set, with test dimensions of 10D and 20D.
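The decision rule described above (a Wilcoxon rank-sum test at p < 0.05, followed by a mean comparison to assign '+', '−', or '=') can be sketched in pure Python. The normal approximation without tie correction is an assumption of this sketch:

```python
import math

def rank_sum_test(a, b):
    # Wilcoxon rank-sum test, normal approximation (no tie correction).
    # Returns the two-sided p-value.
    combined = sorted((v, src) for src, xs in ((0, a), (1, b)) for v in xs)
    rank_of = [0.0] * len(combined)
    i = 0
    while i < len(combined):
        j = i
        while j < len(combined) and combined[j][0] == combined[i][0]:
            j += 1
        avg = (i + 1 + j) / 2.0  # average of 1-based ranks i+1 .. j
        for k in range(i, j):
            rank_of[k] = avg
        i = j
    n1, n2 = len(a), len(b)
    w = sum(r for r, (v, src) in zip(rank_of, combined) if src == 0)
    mean = n1 * (n1 + n2 + 1) / 2.0
    sd = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (w - mean) / sd
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))

def mark(p, mean_comp, mean_codgbgo, alpha=0.05):
    # '+': competitor significantly better (lower mean, minimization),
    # '-': significantly weaker, '=': no significant difference.
    if p >= alpha:
        return '='
    return '+' if mean_comp < mean_codgbgo else '-'
```

In practice a library routine such as `scipy.stats.ranksums` would be used; the sketch only makes the tabled '+/−/=' labeling explicit.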

Table 9. p-values of the IEEE CEC2017 test function set for 30D.
Firstly, the algorithms' performance is evaluated using the mean and standard deviation metrics, with the results shown in Table 13. From Table 13 we can see that when the test dimension is 10D, CODGBGO ranks first on 80% of the test functions and achieves good solution performance. It is weaker than ALSHADE on test function F8 and weaker than ALSHADE, IMODE, and LSHADE on test function F9, which shows that, compared with the best existing optimization algorithms, CODGBGO still suffers from low convergence performance on specific problems; this is attributed to the remaining room for improvement in the proposed exploration and exploitation strategies. From a comprehensive point of view, however, CODGBGO is undeniably an excellent algorithm. Meanwhile, when the test dimension is 20D, consistent with the above analysis, CODGBGO still has the best comprehensive performance, though some room for improvement remains.
Secondly, the Wilcoxon rank sum test is used to compare the differences between CODGBGO and the superior algorithms, with the experimental results shown in Table 14. From Table 14, it can be observed that at a test dimension of 10D, CODGBGO significantly outperforms ALSHADE on five test functions and significantly outperforms IMODE and LSHADE on four test functions. At a test dimension of 20D, CODGBGO significantly outperforms ALSHADE and LSHADE on seven test functions and IMODE on six, demonstrating strong competitiveness. Finally, the Friedman mean rank test is used to rank the experimental results, presented in Table 15. From Table 15, it can be seen that CODGBGO achieves the top ranking in all test dimensions, outperforming the other superior algorithms and demonstrating higher solution stability and performance.
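The average-rank computation underlying the Friedman comparisons reported in Tables 12 and 15 can be sketched as follows (a minimal illustration, assuming minimization and average ranks on ties):

```python
def friedman_mean_ranks(scores):
    # scores[alg][problem]: result of each algorithm on each problem
    # (lower is better). For each problem, algorithms are ranked
    # (ties receive the average rank); the per-algorithm mean rank
    # over all problems is the quantity tabled in a Friedman comparison.
    n_alg, n_prob = len(scores), len(scores[0])
    mean_ranks = [0.0] * n_alg
    for p in range(n_prob):
        order = sorted(range(n_alg), key=lambda a: scores[a][p])
        i = 0
        while i < n_alg:
            j = i
            while j < n_alg and scores[order[j]][p] == scores[order[i]][p]:
                j += 1
            avg = (i + 1 + j) / 2.0  # average of 1-based ranks i+1 .. j
            for k in range(i, j):
                mean_ranks[order[k]] += avg / n_prob
            i = j
    return mean_ranks
```

The significance test itself (the Friedman chi-square statistic) would normally come from a library routine such as `scipy.stats.friedmanchisquare`.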

Results for Feature Selection Problems
Previous experiments have verified that the proposed CODGBGO performs efficiently on numerical optimization problems. In this section, its performance on discretized optimization problems is evaluated by applying it to feature selection. The feature selection problem can be defined as reducing the dimensionality of the original dataset's features to improve classification accuracy; it can be viewed as a discretized optimization problem, and CODGBGO is applied in this paper to solve this challenging problem.

Establishment of an Optimization Model
The feature selection problem is a multidimensional optimization problem whose objective is to obtain the highest classification accuracy using the minimum number of features. The fitness function of the feature selection model is defined as

Fit(x) = α · error(x) + β · R/n,  β = 1 − α,

where x denotes the selected feature subset, error(x) denotes the classification error of the selected feature subset, R denotes the size of the selected feature subset, n denotes the number of features in the original dataset, and α ∈ [0, 1] balances the two terms; in this paper, α takes the value 0.9.
To deal with the feature selection problem, CODGBGO is discretized by encoding each individual in the population as a binary vector. An individual is written as x_i = (x_{i,1}, x_{i,2}, ..., x_{i,D}), where D denotes the dimension of the problem, i.e., the number of features in the original dataset; x_{i,j} is a binary variable, with x_{i,j} = 1 indicating that the jth feature is selected and x_{i,j} = 0 that it is not. During the initialization phase, N individuals are generated: variables are drawn randomly from the interval [0, 1] and then converted to binary by a threshold of 0.5, i.e., x_{i,j} = 1 if the random value exceeds 0.5 and x_{i,j} = 0 otherwise, for i = 1, 2, ..., N and j = 1, 2, ..., D.
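A minimal sketch of this fitness function and binary initialization, taking α = 0.9 as stated above (computing the classification error itself is elided here):

```python
import random

ALPHA = 0.9  # weight on classification error; 1 - ALPHA weighs subset size

def fitness(error, mask):
    # mask is the binary feature vector; R = number of selected
    # features, n = total number of features in the dataset.
    r = sum(mask)
    n = len(mask)
    return ALPHA * error + (1.0 - ALPHA) * r / n

def init_binary_population(pop_size, n_features, rng=random.random):
    # Continuous values drawn in [0, 1], thresholded at 0.5:
    # x_ij = 1 iff the random draw exceeds 0.5.
    return [[1 if rng() > 0.5 else 0 for _ in range(n_features)]
            for _ in range(pop_size)]
```

Lower fitness is better: a subset that halves the features at equal error reduces the second term by 0.05 per halving step.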

Experimental Analyses
UCI is a machine learning repository maintained by the University of California, Irvine; it is available at http://archive.ics.uci.edu/mL/index.php (accessed on 4 March 2024). In this paper, 18 datasets are selected to evaluate the performance of the proposed CODGBGO on discrete optimization problems; their descriptions are shown in Table 16. The comparison algorithms are PSO, DE, GWO, MVO, WOA, ABO, BOA, HHO, EO, and GO. The classification accuracy of a selected feature subset is computed with the K-nearest neighbor (KNN) algorithm, with K set to 5. Performance is evaluated using 5-fold cross-validation: the dataset is divided into five parts, one for testing and four for training, so that the prediction accuracy of a feature subset can be estimated effectively. The 10 algorithms above are compared experimentally with CODGBGO. The population size is set to 60, the maximum number of iterations to 100, and each experiment is run independently 30 times. The following metrics are used to evaluate the performance of CODGBGO:
Fitness values: obtained from the 30 independent runs; they comprise the best value, the worst value, and the mean value.
Classification accuracy: the average classification accuracy over the 30 independent experiments, where each run's accuracy is computed by applying KNN to the selected feature subset.
Feature subset size: the average size of the selected feature subset over the 30 independent experiments.
Rank: the ranking under the different evaluation criteria (e.g., mean, best, worst), used to visualize the performance of each algorithm.
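The KNN-plus-5-fold evaluation described above can be sketched in pure Python (a stand-in sketch; the paper does not specify its tooling, and dataset loading is elided):

```python
import math
import random

def knn_predict(train, query, k=5):
    # train: list of (features, label) pairs. Majority vote over the
    # k nearest neighbours by Euclidean distance.
    nearest = sorted(train, key=lambda s: math.dist(s[0], query))[:k]
    votes = {}
    for _, label in nearest:
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)

def cv_accuracy(samples, k=5, folds=5, seed=0):
    # 5-fold cross-validation: each fold is held out once for testing
    # while the remaining four folds train the classifier.
    idx = list(range(len(samples)))
    random.Random(seed).shuffle(idx)
    correct = 0
    for f in range(folds):
        test_idx = set(idx[f::folds])
        train = [samples[i] for i in idx if i not in test_idx]
        for i in test_idx:
            if knn_predict(train, samples[i][0], k) == samples[i][1]:
                correct += 1
    return correct / len(samples)
```

The classification error used by the fitness function is then `1 - cv_accuracy(...)` on the columns kept by the binary mask. In practice a library such as scikit-learn would replace this sketch.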
Table 17 shows the best, mean, and worst fitness values and the ranking on each metric for 30 independent runs on the different datasets. As can be seen in Table 17, CODGBGO ranked first on all 18 datasets in the best fitness value metric, a 100% win rate. On the mean fitness value metric, it ranked first on 17 datasets and was weaker than DE only on the Zoo dataset, with an average ranking of 1.0556. On the worst fitness value metric, it ranked first on 16 datasets, being weaker than GWO on the IonosphereEW dataset and weaker than GO on the Landsat dataset, with an average ranking of 1.1111. The proposed CODGBGO thus ranks first overall on all three metrics: best, average, and worst fitness value. This analysis confirms that, owing to the introduced exploitation and exploration strategies, CODGBGO handles feature selection problems more efficiently. Compared with the comparison algorithms, CODGBGO also possesses higher scalability, suggesting that it may be very competitive on other engineering optimization problems as well. However, it inevitably falls into a local optimum on specific problems and loses its global optimization capability, which indicates that there is still room for improvement in the exploration and exploitation strategies proposed in this paper.

Figure 8 shows the distribution of the best fitness values produced by CODGBGO over the 30 independent runs. As can be seen in Figure 8, the stability of CODGBGO outperforms that of the comparison algorithms on most datasets, showing that CODGBGO solves the feature selection problem with higher stability and can serve as an effective method for it. Besides solution accuracy and stability, the convergence speed of an algorithm is crucial when solving real-world problems. Figure 9 shows the average convergence curves of CODGBGO on the feature selection problems; its convergence speed and accuracy are better than those of the other comparison algorithms on most datasets. Overall, CODGBGO offers higher computational accuracy, better stability, and faster convergence on the feature selection problem.

Table 18 shows the Wilcoxon rank sum test results on the different datasets, where '+', '−', and '=' have the meanings described earlier, with a significance factor of 0.05. As can be seen from Table 18, CODGBGO significantly outperforms PSO, MVO, WOA, ABO, BOA, HHO, and EO on 18 datasets, DE and GWO on 17 datasets, and GO on 15 datasets, again showing that CODGBGO is an effective method for feature selection. In addition, Tables 19 and 20 show the average classification accuracy and the average feature subset size, respectively; Figure 10 shows the average accuracy ranking, and Table 21 the running time for each dataset. As seen in Table 19 and Figure 10, CODGBGO finished first on 14 datasets, second on 3 datasets, and only 8th on the BreastEW dataset, with an average accuracy ranking of 1.5556, placing it first overall. As seen in Table 20, CODGBGO has an average ranking of 4 in average selected feature subset size, ultimately ranking second behind EO; however, EO's average classification accuracy is weaker than CODGBGO's. This indicates that EO removes strongly correlated feature attributes during feature selection, reducing classification accuracy, which is attributed to EO's global optimization ability being inferior to CODGBGO's. While CODGBGO is weaker than EO in feature subset size, its classification accuracy is better than that of the other algorithms, confirming that, thanks to the introduced strategies, CODGBGO achieves a better balance between global and local search and has stronger global optimization ability.

Finally, Figure 11 shows the combined performance on six evaluation criteria: best fitness value, average fitness value, worst fitness value, average accuracy, average running time, and average feature subset size. As can be seen in Figure 11, CODGBGO does not dominate in average runtime, but it remains within an acceptable range; it is weaker than EO in average feature subset size but outperforms EO in classification accuracy, and it outperforms the comparison algorithms in all other metrics. In summary, the comprehensive performance of CODGBGO is better than that of the comparison algorithms, and it is an effective method for solving the feature selection problem.

Results for Constrained Engineering Application Problems
In the previous experiments, the ability of CODGBGO to solve numerical and discretized optimization problems was verified. In this section, the proposed CODGBGO is applied to four constrained engineering optimization problems to evaluate its performance in that setting. The selected problems are the 10-bar truss design, the tension/compression spring design, the weight minimization of a speed reducer, and the welded beam design. The comparison algorithms are GO, INFO, SSA, TLBO, BESD, DE, PSO, SO, FOPSO, and ICGWO. The population size was set to 60, the maximum number of function evaluations to 60,000, and each experiment was run independently 30 times.

Results on 10-Bar Truss Design [61]
The primary objective of this problem is to minimize the weight of the truss structure while ensuring that certain frequency constraints are met. The structure is shown in Figure 12. The optimal solutions of the proposed CODGBGO and the comparative algorithms are shown in Table 22, and the statistical results are presented in Table 23. From Table 22, it can be observed that CODGBGO ranks first, obtaining the optimal solution X = (0.003515, 0.001471, 0.003511, 0.001472, 0.000065, 0.000456, 0.002368, 0.002371, 0.001244, 0.001241) with a fitness function value of 524.4511. Furthermore, Table 23 demonstrates that CODGBGO outperforms the other algorithms in terms of the best, worst, and average fitness function values and the standard deviation, ranking first in all categories. In conclusion, CODGBGO is an effective method for solving the 10-bar truss design problem.

Results on Tension/Compression Spring Design
The primary aim of this problem is to optimize the mass of a tension or compression spring. The problem involves four constraints, and three variables are employed to compute the mass: the wire diameter (x1), the average coil diameter (x2), and the number of active coils (x3). The structure is shown in Figure 13. The optimal solutions of the proposed CODGBGO and the comparative algorithms are presented in Table 24, and the corresponding statistical results in Table 25. From Table 24, it can be observed that GO, DE, SO, and CODGBGO all rank first simultaneously. Specifically, CODGBGO achieves the best solution with a fitness function value of 0.012665, corresponding to X = (0.051701, 0.356996, 11.272677). Moreover, from Table 25, CODGBGO ranks second in the worst fitness function value, weaker than DE, but first in the average fitness function value. Based on this analysis, CODGBGO is considered suitable for solving the tension/compression spring design problem.
Table 24. The optimal solution of the proposed CODGBGO and comparative algorithms for the tension/compression spring design.

Results on Weight Minimization of a Speed Reducer [63]
This task encompasses the development of a speed reduction mechanism for a compact aircraft engine. The structure is shown in Figure 14. The optimal solutions of the proposed CODGBGO and the comparative algorithms are shown in Table 26, and the statistical results are presented in Table 27. From Table 26, it can be observed that GO, INFO, TLBO, PSO, SO, and CODGBGO all rank first simultaneously. CODGBGO obtains the best solution with a fitness function value of 2994.424466, where X = (3.500000, 0.700000, 17.000000, 7.300000, 7.715320, 3.350541, 5.286654). Additionally, Table 27 reveals that CODGBGO shares the top rank with GO, INFO, TLBO, and PSO regarding the worst and average fitness function values. Based on these findings, this study concludes that CODGBGO is well suited for the weight minimization of a speed reducer.

Results on Welded Beam Design
The primary aim of this problem is to optimize the design of a welded beam while minimizing costs. The problem involves five constraints, and four variables are utilized in the design of the welded beam. The structure is shown in Figure 15. The optimal solutions of the proposed CODGBGO and the comparative algorithms are shown in Table 28, and the statistical results are presented in Table 29. From Table 28, it can be observed that GO, INFO, TLBO, DE, PSO, SO, and CODGBGO are all ranked first simultaneously. CODGBGO achieves the best solution with a fitness function value of 1.670218, where X = (0.198832, 3.337365, 9.192024, 0.198832). Additionally, from Table 29, CODGBGO shares the top rank with GO, INFO, and TLBO in terms of the worst fitness function value, and is jointly first with GO, INFO, TLBO, and DE in the average fitness function value. In conclusion, based on the above analysis, this study concludes that CODGBGO is suitable for solving the welded beam design problem.

Conclusions and Future Directions
In this work, the Circle-OBL initialization strategy, an exploration strategy, and an exploitation strategy are integrated into GO to form an enhanced GO called CODGBGO. In CODGBGO, the first step is to provide a good initial population using the Circle-OBL initialization strategy, which helps enhance the search quality. Secondly, the exploration strategy is adopted to enhance population diversity and the algorithm's ability to escape local optimum traps. Finally, the exploitation strategy is used to improve the exploitation ability of the algorithm and enhance its convergence performance. A large number of experiments confirm that CODGBGO has good advantages and scalability in continuous, discrete, and engineering optimization; in the comprehensive evaluations this is reflected in the good balance between the exploitation and exploration phases, which gives the algorithm better global search performance. However, the analysis of the unimodal problems shows that the exploitation strategy proposed in this paper needs further improvement to enhance exploitation performance. Meanwhile, when dealing with high-dimensional discrete combinatorial problems, the proposed CODGBGO performs weaker than traditional optimization algorithms in specific cases, despite its superior comprehensive performance. Finally, although it shows good results on engineering optimization problems, it needs further testing in prospective applications such as nonlinear systems.
Therefore, our future work will focus on the following aspects: (1) further improving the proposed CODGBGO to enhance its solution performance; (2) developing algorithms tailored to high-dimensional combinatorial optimization problems; (3) incorporating complex optimization problems from nonlinear systems into the testing of the algorithm in order to evaluate its performance more comprehensively.

where x_{i,D} denotes the Dth-dimension knowledge of the ith individual. The value of GR_i is taken as the objective function value of individual x_i; a larger GR_i indicates that the individual has learned less knowledge, and vice versa. An individual learns by examining the gaps between other individuals in the population and grows in the process. Four gaps are mainly modeled in the learning stage: between the leader and the elite (Gap_1), between the leader and the bottom (Gap_2), between the elite and the bottom (Gap_3), and between two random individuals (Gap_4); each gap is described as the difference between the two individuals of the corresponding types.

Here the opposing individual of the ith individual is obtained by opposition-based learning. The N chaotic individuals compose a chaotic population P_4 and the N opposing individuals an opposing population P_4*; the two populations are combined as P = P_4 ∪ P_4*, and the top N individuals with the best objective function values form the final initialized population.
f(x_i) denotes the objective function value of the ith individual after the exploration strategy.
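The four gap vectors of GO's learning stage can be sketched as follows; the leader/elite/bottom selection here is a simplifying assumption (GO's exact selection and weighting rules are given by the equations referenced above):

```python
import random

def gaps(pop, fitness):
    # pop: list of position vectors; fitness: parallel list of objective
    # values (minimisation). leader = best individual, elite = a
    # top-ranked individual, bottom = a low-ranked individual
    # (the exact choices in GO involve randomised top/bottom groups).
    order = sorted(range(len(pop)), key=lambda i: fitness[i])
    leader, elite, bottom = pop[order[0]], pop[order[1]], pop[order[-1]]
    r1, r2 = random.sample(range(len(pop)), 2)
    sub = lambda a, b: [x - y for x, y in zip(a, b)]
    gap1 = sub(leader, elite)     # leader vs elite
    gap2 = sub(leader, bottom)    # leader vs bottom
    gap3 = sub(elite, bottom)     # elite vs bottom
    gap4 = sub(pop[r1], pop[r2])  # two random individuals
    return gap1, gap2, gap3, gap4
```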

Figure 2. The process of exploration strategy.

Figure 4. Comparison of population diversity between GO and CODGBGO.

Figure 5. The exploration and exploitation ratio of CODGBGO.

Numerical Analysis
In this section, the performance of CODGBGO in solving numerical optimization problems is analyzed. The IEEE CEC2017 test functions are used for the experiments, and comparisons are made with 10 other state-of-the-art optimization algorithms. The test dimensions are 30D, 50D, and 100D, and the experimental results are presented in Table 6, Table 7, and Table 8, respectively, where the bold numbers in the tables indicate the top rankings.

Figure 6. Comparison of convergence curves on the IEEE CEC2017 test function set.

Figure 7. Comparison of stability on the IEEE CEC2017 test function set.

Figure 8. The boxplot of comparison algorithms on feature selection.

Figure 9. The convergence curve plots of comparison algorithms on feature selection.

Figure 10. Comparison of average classification accuracy.

Figure 11. Combined performance on six evaluation criteria.

Figure 13. The tension/compression spring design structure.

Figure 14. The speed reducer design structure.

Table 1. Information about the IEEE CEC2017 function test set.

Table 2. Information about the IEEE CEC2020 function test set.

Table 3. The comparison algorithm details and parameter settings.

Table 4. Friedman mean rank test results for different combinations of strategies.

Table 5. Friedman mean rank test results for different parameter combinations.


Table 6. Comparison of numerical results for the IEEE CEC2017 test function set for 30D.

Table 7. Comparison of numerical results for the IEEE CEC2017 test function set for 50D.

Table 8. Comparison of numerical results for the IEEE CEC2017 test function set for 100D.

Table 10. p-values of the IEEE CEC2017 test function set for 50D.

Table 11. p-values of the IEEE CEC2017 test function set for 100D.

Table 12. Friedman mean rank test results for the IEEE CEC2017 test function set.

Table 13. Comparison of numerical results on the IEEE CEC2020 test function set.

Table 14. p-values of the IEEE CEC2020 test function set.

Table 15. Friedman mean rank test results for the IEEE CEC2020 test function set.

Table 16. The information from 18 datasets.

Table 17. Comparison of algorithms for feature selection.

Table 18. p-values of the Wilcoxon rank sum test on different datasets.

Table 19. The mean classification accuracy of the comparison algorithms.

Table 20. The mean size of feature subsets selected by the comparison algorithms.

Table 21. Mean runtime of feature selection based on different algorithms.

Table 22. The optimal solution of the proposed CODGBGO and comparative algorithms for the 10-bar truss design problem.

Table 23. Statistical results of the proposed CODGBGO and comparative algorithms for the 10-bar truss design problem.

Table 25. Statistical results of the proposed CODGBGO and comparative algorithms for the tension/compression spring design.

Table 26. The optimal solution of the proposed CODGBGO and comparative algorithms for the weight minimization of a speed reducer.

Table 27. Statistical results of the proposed CODGBGO and comparative algorithms for the weight minimization of a speed reducer.

Table 28. The optimal solution of the proposed CODGBGO and comparative algorithms for the welded beam design.

Table 29. Statistical results of the proposed CODGBGO and comparative algorithms for the welded beam design.