A Novel Approach to Combinatorial Problems: Binary Growth Optimizer Algorithm

The set-covering problem aims to find the smallest possible set of subsets that covers all the elements of a larger set. The difficulty of the set-covering problem increases as the number of elements and sets grows, making it a complex problem for which traditional integer programming solutions may become inefficient on real-life instances. Given this complexity, various metaheuristics have been successfully applied to the set-covering problem and related problems. This study introduces, implements, and analyzes a novel metaheuristic inspired by the well-established Growth Optimizer algorithm. Drawing insights from human behavioral patterns, that approach has shown promise in optimizing complex problems in continuous domains, where experimental results demonstrate its effectiveness and competitiveness compared to other strategies. Here, the Growth Optimizer algorithm is modified and adapted to the realm of binary optimization for solving the set-covering problem, resulting in the Binary Growth Optimizer algorithm. Upon implementation and analysis of its outcomes, the findings illustrate its capability to achieve competitive and efficient solutions in terms of resolution time and result quality.


Introduction
There is a series of real-world problems that exhibit high complexity [1][2][3][4][5]. These problems are characterized by the presence of multiple local optima and a global optimum that, given the high number of variables and current computing capabilities, would take an impractical amount of time to find by solving the mathematical functions that model them. Therefore, techniques have been studied and developed to obtain optimal solutions efficiently, with metaheuristic algorithms being among the most popular. These algorithms stand out mainly because of their flexibility, their ease of implementation, and their lack of need for gradients when solving a problem.
The set-covering problem is an optimization challenge in the fields of computer theory and operations research. In this problem, the goal is to find the most efficient way to cover a set of elements using a smaller set of subsets, where each subset has an associated cost. The objective is to minimize the total cost by selecting an appropriate set of subsets so that all elements are covered at least once. The set-covering problem has applications in various fields, such as route planning, resource allocation, and general decision-making. Solving this problem involves striking a balance between the number of selected subsets and the total cost required to achieve complete coverage [6].
One of the most relevant characteristics a metaheuristic should have is a set of operators that allow both exploration and exploitation of the search space. Exploitation refers to the algorithm's ability to perform a local search, while exploration refers to its ability to perform global searches, thus enabling the heuristic to find optima throughout the entire search space.
The main aims of our work are the following:
• Implement a binary version of the Growth Optimizer.
• Solve the set-covering problem with our proposal.
• Carry out a deep analysis of the behavior of our proposal. We analyze the convergence, execution times, best results obtained, distribution of fitness across the independent runs executed, and balance of exploration and exploitation.
The present work is organized as follows. Section 2 defines combinatorial optimization problems. Section 3 describes the concept of metaheuristics. Section 4 describes the theoretical background and the set-covering problem along with its mathematical model. Section 5 defines the Growth Optimizer algorithm and presents its mathematical model and operators. Section 6 presents our proposal: a Binary Growth Optimizer for solving the SCP. In Section 7, an analysis and a discussion of the obtained results are carried out. Finally, Section 8 provides the conclusions and future work.

Combinatorial Optimization Problems
Optimization problems are a type of problem in mathematics and computer science that involve finding the best solution from a set of possible solutions, according to certain criteria or constraints. In these problems, the objective is to maximize or minimize an objective function, which usually represents a measure of quality or efficiency [7].
One of the major challenges in combinatorial optimization is that current computing capacity often cannot deliver optimal solutions efficiently. Among the most famous problems is the Traveling Salesman Problem [5], in which a salesman must find the shortest route that visits n cities in order to minimize travel costs. The number of candidate solutions grows factorially with the number of cities: there are on the order of n! possible tours. Even using the most powerful computer available today, exhaustively solving an instance with 20 cities would require approximately 15,000 years. In 1954, techniques such as linear programming, heuristics, and branch and bound were first used to solve instances of the Traveling Salesman Problem, and these techniques remain among the most successful methods for such problems to this day.
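The factorial growth mentioned above can be illustrated in a few lines of Python (the 15,000-year figure is the estimate quoted in this section; the tour counts below are exact):

```python
import math

# The number of candidate tours for a TSP instance grows factorially with
# the number of cities n, which is why exhaustive search quickly becomes hopeless.
for n in (5, 10, 20):
    print(n, math.factorial(n))
# For n = 20 there are already 2,432,902,008,176,640,000 orderings to consider.
```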
In order to classify combinatorial optimization problems by complexity, various research efforts have led to four basic categories: P, NP, NP-complete, and NP-hard. P problems are those that can be solved in polynomial time in the problem size n by deterministic algorithms. NP problems are those that can be solved in polynomial time by nondeterministic algorithms; equivalently, a candidate solution can be verified in polynomial time. NP-complete problems are the most challenging problems within NP: every problem in NP can be transformed into them through a polynomial-time reduction. Lastly, NP-hard problems are at least as difficult as every problem in NP, but no algorithm is known that can solve them in polynomial time. It is important to note that all NP-complete problems are within the NP-hard classification, but not all NP-hard problems fall under the NP-complete category [8].

Continuous Optimization Problems
Continuous optimization problems are those in which the goal is to minimize an objective function within a search space defined by continuous boundaries or constraints. In these problems, solutions are continuous values, and the objective function can be computationally expensive to evaluate. There is no requirement for discrete data structures, and the search space is continuous [9].

Discrete Optimization Problems
Discrete optimization problems are characterized by having solutions represented as discrete data structures rather than continuous values. These data structures can include ordinal, categorical, or binary variables, permutations, strings, trees, or other discrete representations. In discrete optimization problems, continuous boundaries or constraints are often not necessary. These problems involve finding the best combination or configuration of elements within a discrete set of options [9].

Metaheuristics
Metaheuristics, also known as stochastic search algorithms, are characterized by an iterative search that uses stochastic procedures to generate the next iterates. Each new iterate may contain a candidate solution that becomes the new best local optimum.
These algorithms are considered robust and easy to implement because they do not rely on structural information from an objective function, such as gradient information or convexity. This feature has contributed to their popularity in the field of combinatorial optimization. However, many of these algorithms require specifying various configuration parameters, such as population sizes, variation operators, or distribution functions, making it necessary to fine-tune them to solve different problems.
The most basic metaheuristics are instance-based. These algorithms maintain a single solution or a population of candidate solutions. The construction of new candidate solutions depends explicitly on the solutions generated previously. Prominent examples of representatives in this category include simulated annealing [10], evolutionary algorithms (EAs) [11], and tabu search [12]. To delve deeper into metaheuristics, the book by El-Ghazali Talbi is recommended [13].

The Set-Covering Problem (SCP)
The SCP is a classical optimization problem defined by a binary matrix A of m rows and n columns, where each cell a_ij ∈ {0, 1}. Let I = {1, 2, ..., m} and J = {1, 2, ..., n} represent the sets of rows and columns, respectively, and let c_j denote the cost of column j. The primary objective of the problem is to minimize the cost associated with a subset S ⊆ J, subject to the constraint that every row i ∈ I must be covered by at least one column j ∈ S. The inclusion of column j in the solution subset S is represented by x_j = 1, and x_j = 0 otherwise. The set-covering problem (SCP) can thus be formally stated as

Minimize Σ_{j ∈ J} c_j x_j (1)

subject to

Σ_{j ∈ J} a_ij x_j ≥ 1, ∀ i ∈ I (2)

x_j ∈ {0, 1}, ∀ j ∈ J (binary variables) (3)
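As a concrete illustration of the model above, the following sketch evaluates the objective and the coverage constraint on a tiny hypothetical instance (the matrix A and costs c are invented for illustration, not taken from any benchmark):

```python
# Toy SCP instance: rows are elements to cover, columns are subsets with costs c[j].
A = [
    [1, 0, 1, 0],  # element 0 is covered by subsets 0 and 2
    [1, 1, 0, 0],  # element 1 is covered by subsets 0 and 1
    [0, 1, 0, 1],  # element 2 is covered by subsets 1 and 3
]
c = [3, 2, 1, 4]

def is_feasible(x):
    """Constraint (2): every row i must be covered by at least one selected column j."""
    return all(any(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A)))

def cost(x):
    """Objective (1): total cost of the selected columns."""
    return sum(cj * xj for cj, xj in zip(c, x))

x = [1, 1, 0, 0]                 # select subsets 0 and 1
print(is_feasible(x), cost(x))   # feasible, total cost 3 + 2 = 5
```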

Applications
The SCP has a variety of real-world applications, making it highly relevant in the optimization field. These real-world problems often pose a significant computational burden on teams, necessitating the development of techniques to obtain feasible solutions within a reasonable timeframe. Examples of these real-world applications include the following.

Organization of the Serbian Postal Network
Efficiently organizing a postal network that ensures global service coverage is a challenging task. It involves the strategic placement of physical stations or access points where local residents can send parcels or deposit items. This problem is subject to additional constraints related to population density, access point placement costs, and city size. The primary objective is to minimize the number of permanent postal units within the postal network. This optimization process reduces the operational costs for the postal operator and minimizes the total number of employees involved in the service [14].

Sibling Relationship Reconstruction
In the field of genetics, there is a challenge in modeling and reconstructing sibling relationships among individuals of a single generation when parental genetic information is unavailable.This problem holds significant importance, as knowledge of familial relationships is crucial in biological applications, such as studies of mating systems, the population management of endangered species, or the estimation of hereditary traits [15].

Periodic Vehicle Routing Problem
In this context, the problem involves determining the minimum-cost routes for each day within a given planning horizon. These routes come with constraints that require each customer to be visited a specified number of times (chosen from a set of valid day combinations) and ensure that the required quantity of products is delivered during each visit. Another critical constraint is that the number of daily routes (each respecting the vehicle's capacity) must not exceed the total number of available vehicles [16].

Solving Set-Covering Problem Review
The set-covering problem seeks a subset of decision variables that satisfies coverage at minimum cost. Crawford et al. proposed an improved binary monkey search algorithm (MSA) to handle the SCP [6]. The algorithm employs a novel climbing process to enhance exploration capability and a new cooperative evolution scheme to reduce the number of infeasible solutions. Jaszkiewicz compared the computational efficiency of three state-of-the-art multi-objective metaheuristic algorithms on the SCP [17]; computational effort was compared at the same solution quality by calculating the average of scalarizing functions over representative samples. Kılıç and Yüzgeç proposed an antlion optimization (ALO) algorithm for the quadratic assignment problem based on tournament selection [18]. In the random walking process of ALO, a tournament selection strategy replaces the roulette method, and several equations in ALO are modified.
The minimum labeling spanning tree (MLST) is an NP-hard problem commonly applied in communication networks and data compression. To address this problem, Lin et al. introduced a binary FA that repairs infeasible solutions and eliminates redundant labels [19], making the algorithm more suitable for discrete optimization. Vehicular ad hoc networks (VANETs) require robust paths connecting all nodes to achieve reliable and efficient information transmission, but classic graph theory only yields a minimum spanning tree (MST). Zhang and Zhang proposed a binary-coded ABC algorithm to solve the construction of spanning trees and applied the algorithm to roadside-vehicle communication [20]. Da et al. proposed an improved maximum vertex cover algorithm to meet the strict time complexity constraints of mixed-integer linear programs (MILPs), combining the proposed algorithm with multi-start local search [21]. Further state-of-the-art approaches are shown in Table 1.
Table 1. Examples of metaheuristics used for set-covering problem resolution and their applications.

Ref. | Optimization Algorithm | Application | Convergence | Complexity
[22] | GA | Parameter calibration | Low | Low
[23] | Binary PSO | Input variable selection in ELM | Medium | High
[24] | Binary PSO | Parameter optimization in ELM | Medium | High
[25] | GA | Variable selection in hot metal desulfurization kinetics | Low | Low
[26] | Binary GWO | ESN | Low | High
[27] | Binary CSO | Parameter optimization in MRE isolator | Low | High
[28] | CSO and salp swarm algorithm | CNN | Medium | High
[29] | Binary CSA | CNN | Low | High
[30] | DE and binary DE | NN | Medium | High
[31] | Binary PSO and BBO | Set ret reduction | Fast | High
[32] | Binary PSO | Parameter optimization in Electric Vehicles | Medium | Low

Inspiration
Growth Optimization (GO) is a metaheuristic inspired by human behavior and how individuals develop in their surroundings [33]. In this metaheuristic, each potential solution is associated with an individual within a population. These individuals are ranked based on their Growth Resistance GR, which corresponds to the value of the objective function when evaluating the solution. This ranking divides the population into three groups based on a parameter P1: the upper level (positions 1 to P1, with the first being the leader), the middle level (positions P1 + 1 to N − P1), and the lower level (positions N − P1 + 1 to N).
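The three-level ranking can be sketched as follows; the population indices, fitness values, and P1 below are illustrative, and GR is taken to be the objective value under minimization:

```python
def split_levels(population, fitness, P1):
    """Rank individuals by Growth Resistance (objective value, minimization)
    and split them into the upper, middle, and lower levels."""
    order = sorted(range(len(population)), key=lambda i: fitness[i])
    upper = order[:P1]                # positions 1..P1; order[0] is the leader
    middle = order[P1:len(order) - P1]
    lower = order[len(order) - P1:]   # the P1 worst-ranked individuals
    return upper, middle, lower

# Illustrative population of 6 individuals with made-up fitness values.
fit = [0.9, 0.1, 0.5, 0.7, 0.3, 0.8]
upper, middle, lower = split_levels(list(range(6)), fit, P1=2)
print(upper, middle, lower)  # [1, 4] [2, 3] [5, 0]
```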

Mathematical Modeling
The GO algorithm consists of two main phases for generating solutions: the learning phase and the reflection phase, as shown in Algorithm 1.

Learning Phase
In this phase, the algorithm generates movements by using the differences between individuals, reflecting how much an individual should learn based on their knowledge gap compared to others, as described in Equation (4), where x_best represents the best solution, x_better represents one of the next P1 − 1 best individuals, and x_worse is one of the P1 lowest-ranked individuals in the population. Both x_L1 and x_L2 are random individuals different from the ith individual.
Metrics such as the learning factor LF_k in Equation (5) and SF_i in Equation (6) are used to control how much an individual should learn based on the knowledge gap and their resistance to change. LF_k measures the influence of gap k on individual i, while SF_i evaluates an individual's resistance to change compared to the rest of the population.
To represent the acquired knowledge and generate a new candidate solution in Equation (7), the knowledge acquisition KA formula is used, allowing each individual i to absorb knowledge from various gaps using Equation (8), where It is the current iteration number and x_i is the ith individual, who absorbs the acquired knowledge. Subsequently, an adjustment phase is carried out, in which the algorithm evaluates whether the solutions of the next iteration are better than the previous ones (the problem is modeled here as a minimization problem). If not, these solutions can be retained with a small retention probability, controlled by the parameter P2 and r1, a uniformly distributed random number in the range [0, 1], preventing the loss of an individual's effort, as modeled in Equation (9).

Reflection Phase
In this phase, individuals seek to compensate for or overcome their deficiencies. ub and lb are the upper and lower bounds of the search domain, P3 is the parameter that controls the probability of reflection, and r2, r3, r4, and r5 are uniformly distributed random numbers in the range [0, 1] used to determine how individuals adjust their solutions, as modeled in Equations (10) and (11).
The algorithm also incorporates an attenuation factor AF that depends on the current number of evaluations FE and the maximum number of evaluations maxFE. As the algorithm progresses, the AF value converges to 0.001, indicating that individuals avoid frequent reinitialization and make the most of their progress. R denotes an individual at the upper level, and it serves as a reflective learning guide for the current individual i. R_j is the knowledge of the jth aspect of R.
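The exact AF expression is given in the original GO paper and is not reproduced here; a hedged sketch consistent with the behavior described (a factor that starts near 1 and decays to 0.001 as FE approaches maxFE) might look like:

```python
def attenuation_factor(FE, maxFE):
    # Illustrative decay only (form assumed, not taken from the paper):
    # AF ≈ 1 at the start of the search and converges to 0.001 as the
    # evaluation budget is exhausted, discouraging late reinitialization.
    return 0.001 + 0.999 * (1.0 - FE / maxFE)

print(attenuation_factor(0, 1000))     # 1.0
print(attenuation_factor(1000, 1000))  # 0.001
```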

A New Binary Growth Optimizer
The binarization techniques used with continuous MHs involve transferring values from the continuous domain to binary domains, with the aim of maintaining the quality of moves and generating high-quality binary solutions. While some MHs operate on binary domains without a binarization scheme, studies have demonstrated that continuous MHs supported by a binarization scheme perform exceptionally well on multiple NP-hard combinatorial problems [34]. Examples of such MHs include the binary bat algorithm [35,36], binary particle swarm optimization [37], binary sine-cosine algorithm [38,39], binary salp swarm algorithm [40], binary grey wolf optimizer [39,41], binary dragonfly algorithm [42,43], binary whale optimization algorithm [39], and binary magnetic optimization algorithm [44].
In the scientific literature, two main groups of binarization schemes used to solve combinatorial problems can be identified [45]. The first group comprises operators that do not alter the operations of the different elements of the MH. Within this group, two-step techniques stand out as the most widely used in recent years, as they are considered the most efficient in terms of convergence and their ability to find optimal solutions. These techniques modify the solution in a first step and discretize it into a 0 or a 1 in a second step [34]. The angle modulation technique also belongs to this group, as it has been shown to be effective in solving combinatorial problems [46]. The second group of binarization schemes includes methods that alter the normal operation of an MH, for example, the quantum binary approach, which applies quantum mechanisms to solve combinatorial problems [47]. Set-based approaches, which focus on the selection of solution sets to improve the efficiency of the MH, are also included in this group. Finally, clustering-based techniques, such as the k-means approach [48,49], are also considered part of this second group, as they modify the normal operation of the MH to improve its ability to find optimal solutions.
Unlike the original GO approach, where candidate solutions lie in the continuous domain, the proposed Binary Growth Optimizer (BGO) uses binary strings to represent solutions, where each decision variable takes the value 0 or 1, as shown in Figure 1.
This new proposal, detailed in pseudocode in Algorithm 2, features the same moves as GO, with the addition of three key steps for obtaining binary solutions. First, the initial population is generated in a binary manner, and a two-step binarization is carried out after the learning and reflection phases. These moves involve new parameters and functions in the algorithm that need to be adjusted as well. In the scientific community, two-step binarization schemes are very relevant [50] and have been widely used to solve a variety of combinatorial problems [51]. As the name suggests, this binarization scheme consists of two stages. The first stage applies a transfer function [52], which maps the values generated by the continuous MH into the continuous interval between 0 and 1. The second stage applies a binarization rule, which discretizes the numbers within that interval into binary values. This technique has been shown to be effective in solving combinatorial problems, since it preserves the quality moves of the continuous MH while generating high-quality binary solutions.

Transfer Function
In 1997, Kennedy and his team [53] introduced the concept of transfer functions in the field of optimization. The significant advantage of these functions lies in their ability to provide, at low computational cost, probability values in the range from 0 to 1. There are several types of transfer functions, with the "S" and "V" families being among the most popular [52,54]. Their utility derives from their capacity to transform values generated by continuous metaheuristics into the continuous interval from 0 to 1.
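By way of example, one commonly used member of each family can be written as follows; these specific forms are taken from the general binarization literature and are illustrative, as the paper specifies only that a V-type function was used in the experiments:

```python
import math

# Common S-shaped and V-shaped transfer functions: both map a continuous
# value produced by the metaheuristic into the interval [0, 1].
def s_shape(x):
    return 1.0 / (1.0 + math.exp(-x))  # classic sigmoid (S family)

def v_shape(x):
    return abs(math.tanh(x))           # one widely used V-shaped variant

print(s_shape(0.0))  # 0.5
print(v_shape(0.0))  # 0.0
```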
It is important to note that there is no one-size-fits-all transfer function. This is due to the well-known "no free lunch" theorem [55], which states that there is no universal optimization algorithm that excels in all situations. Consequently, this theorem leaves room for experimentation and the development of new optimization algorithms.

Binarization Rules
In the binarization stage, the probabilities obtained through the transfer function are converted into binary values, specifically 0 or 1, by applying a binarization rule. The choice of binarization rule is crucial, as it directly influences the effectiveness of the solutions in the binary metaheuristic context.
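A minimal sketch of the full two-step scheme follows, assuming a V-shaped transfer function and the common "standard" rule (draw a uniform random number and set the bit to 1 when it falls below the transfer probability); the continuous solution values are invented for illustration:

```python
import math
import random

def v_shape(x):
    # V-shaped transfer function (one common choice, assumed for illustration).
    return abs(math.tanh(x))

def standard_rule(x, rng=random):
    # Step 2: discretize the probability into a bit.
    return 1 if rng.random() < v_shape(x) else 0

random.seed(0)  # fixed seed so the example is reproducible
continuous_solution = [1.7, -0.2, 0.05, -3.1]
binary_solution = [standard_rule(x) for x in continuous_solution]
print(binary_solution)  # large-magnitude values are likely to map to 1
```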

Experimental Setup
For the experiments, the BGO was compared with three widely known metaheuristics, the grey wolf optimizer (GWO) [56], the Pendulum Search Algorithm (PSA) [57], and the sine-cosine algorithm (SCA) [58], over a selection of 49 different SCP instances. Each instance underwent a total of 31 experiments with a V-type transfer function. The main features of the instances are shown in Table 2, where the density corresponds to the percentage of non-zero entries in the matrix. Regarding the configuration of the metaheuristics, we used the following strategies:
• Solution initialization: As a solution initialization strategy, we used the generation of random solutions. Since we were solving a binary problem, each decision variable was randomly assigned a value of 1 or 0.
• Termination conditions: In the literature, we found different termination criteria, such as the number of calls to the objective function [59] or the total number of iterations [60][61][62] that the metaheuristic will perform. We considered the total number of iterations as the completion criterion in our work. After preliminary experiments, it was determined to use 30 as the total number of iterations.

• Population size: As we worked with population metaheuristics, defining the number of individuals to use in the experimentation was key. After preliminary experiments, it was determined that 40 individuals would be used as the population size.
The hardware used in the experiments included an Intel Core i5-9400F processor operating at 2.90 GHz, 16.00 GB of RAM, and a 64-bit operating system with an x64 processor. Since we were experimenting with stochastic algorithms, we ran each experiment 31 times independently. All the code used for our experimentation can be found in [63].
These experiments were subjected to an analysis that delivered comparative convergence graphs, boxplots, time graphs, and exploration vs. exploitation charts.Additionally, a statistical analysis was conducted to assess the behavior of the Binary Growth Optimizer and enable a comparison with other implemented metaheuristics, thereby establishing a common framework for comparison with typical techniques.

Experimental Results
Table 3 shows the results obtained in the experimentation. Table 3 has the following format: The first column refers to the solved SCP instance. The second column refers to the optimum of the instance; if the instance has no known optimum, it is marked with "-". The remaining columns come in groups of three, one group per metaheuristic. The first column of each group refers to the best result obtained in the 31 independent executions. The second refers to the average fitness obtained in the 31 independent executions. The third refers to the Relative Percentage Distance (RPD), calculated based on Equation (12), where Opt corresponds to the optimum of the instance and Best corresponds to the best value obtained in the experiment. By reviewing Table 3, we can see that the BGO obtains competitive results compared to the other algorithms. We can even highlight that the BGO reaches the optimum in six instances.
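Equation (12) is not reproduced in this excerpt; assuming the usual definition of the RPD, 100 × (Best − Opt)/Opt, it can be computed as follows (the values below are invented, not taken from Table 3):

```python
def rpd(best, opt):
    # Relative Percentage Distance: percentage gap between the best value
    # found and the known optimum (0 means the optimum was reached).
    return 100.0 * (best - opt) / opt

print(rpd(best=520, opt=512))  # 1.5625
print(rpd(best=512, opt=512))  # 0.0, i.e., the optimum was reached
```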

Convergence vs. Iterations
One crucial aspect when evaluating the performance of metaheuristics is the speed at which they converge towards an optimal or near-optimal solution. In Figure 2, a consistent trend is observed: the Binary Growth Optimizer demonstrates a remarkable ability to converge more rapidly than the other three metaheuristics. The following can be highlighted for each of them:
• Binary Growth Optimizer: In all cases, the Binary Growth Optimizer exhibited faster convergence in fewer iterations. This suggests that this metaheuristic can find high-quality solutions in fewer iterations, which could be beneficial in practical applications where efficiency is crucial.
• Grey wolf optimizer: While the grey wolf optimizer did not reach the convergence speed of the Growth Optimizer, it still stood out as the second-best option in terms of convergence speed. This indicates its ability to find reasonable solutions within a reasonable time frame.
• Sine-cosine algorithm: This algorithm showed a moderate convergence speed compared to the two aforementioned metaheuristics. Although it may not be the fastest choice, it remains an effective metaheuristic for addressing SCP instances.

• Pendulum Search Algorithm: In the graphs, the PSA exhibited slower convergence compared to the other three metaheuristics. This suggests that it may require more iterations to reach comparable solutions.

Fitness Distribution
For the instance scp43 in Figure 3c, significant differences in the results are observed among the various metaheuristics. The BGO displays a boxplot that is close to a line, indicating rapid convergence towards low fitness values. On the other hand, the other metaheuristics maintain more uniform boxplots, suggesting more consistent performance in this particular instance.
For the instance scp52 in Figure 3l, all the metaheuristics exhibit fairly similar performance, with minimal differences in the maximum and minimum values obtained. In particular, the PSA presents higher maximum values compared to the others, while the GWO shows the lowest minimum values. This suggests that all the metaheuristics converge within a range of similar solutions in this instance, although the PSA tends to explore solutions with a higher maximum fitness value, indicating a broader search for high-quality solutions.
For the instance scpb4 in Figure 3ah, the results are mostly similar among the different metaheuristics, although the PSA displays a slightly higher boxplot compared to the others. This suggests that in this particular instance, the PSA may have a tendency to explore solutions with a moderately higher level of fitness compared to the other metaheuristics.
For the instance scpb5 in Figure 3ai, an interesting pattern is observed in the results of the metaheuristics. All the metaheuristics show a distribution of fitness values that remains within a relatively narrow range. However, it is important to note that the PSA and GWO exhibit higher maximum fitness values compared to the other metaheuristics in this particular instance. This suggests that both the PSA and GWO have the capability to explore solutions with a higher maximum fitness value in this specific problem configuration.
This variation in the results emphasizes the importance of considering the characteristics and conditions of each individual instance when selecting the most suitable metaheuristic to address optimization problems.
For the instance scpnrg5 in Figure 3au, it is observed that all the metaheuristics maintain boxplots with high performance values. However, it is noteworthy that the BGO displays the lowest minimum values in this instance.
For the instance scpnrh2 in Figure 3av, it is observed that the SCA and BGO maintain similar boxplots, while the GWO suggests a tendency to converge towards lower fitness solutions while retaining the capability to reach higher values.
For the instance scpnrh4 in Figure 3aw, all the metaheuristics exhibit fairly similar performance, with minimal differences in the maximum and minimum values obtained.
For the instance scpnrh5 in Figure 3ax, interesting patterns are observed in the results of the different metaheuristics.In particular, the following are true:

• The SCA exhibits a complete range of maximum and minimum fitness values, suggesting a broad exploration of solutions.
• The PSA stands out for having higher maximum values than the other metaheuristics, although it also presents exceptionally low whiskers. This indicates its ability to reach high-quality solutions but also a tendency towards less optimal solutions.
• The GWO closely follows the SCA, covering a similar fitness range but with more low solutions, indicating effective exploration of the search space.
• The BGO has values slightly lower than the average and shows whiskers at the upper end, suggesting a tendency to converge towards better fitness solutions in this specific instance.
These results highlight how each metaheuristic has its own way of approaching the problem based on its specific characteristics and parameters, which can lead to varied results in different problem configurations.

Exploration vs. Exploitation
Next, Figure 4 presents the exploration and exploitation of the BGO algorithm when solving the scp43 instance. This allows us to analyze the movements made by the metaheuristic and how it finds new solutions.
The graph shows a quick convergence to 93 percent exploitation and 7 percent exploration. This early convergence indicates that the metaheuristic is capable of finding a neighborhood of solutions close to the optimum within a few iterations.

Time vs. Iterations
Figure 5 presents graphs that display time as a function of iterations for the different metaheuristics. One notable finding is that the BGO requires more time in the initial iterations compared to the other metaheuristics.
This suggests that the BGO makes more time-consuming initial moves. However, this initial time investment pays off significantly, because the BGO tends to converge more rapidly towards high-quality solutions than the other metaheuristics. In other words, the BGO achieves early convergence towards an optimum in fewer iterations than the other MHs.

Statistical Tests
Before conducting the statistical test, it was essential to determine the appropriate type of test to be employed, distinguishing between a parametric and a non-parametric test.
A non-parametric statistical test was deemed suitable for our analysis, since the data originated from stochastic algorithm runs and cannot be assumed to follow a normal distribution. Given the independence of our sample sets, the Wilcoxon-Mann-Whitney test [64,65] was selected for the statistical analysis. This test allows for the identification of algorithms that are significantly better than others, offering detailed information about pairs of algorithms with significant differences in performance.
Utilizing the scipy Python library, the chosen statistical test was applied through the function scipy.stats.mannwhitneyu, with the "alternative" parameter set to "less". The evaluation contrasted two distinct metaheuristics: a resulting p-value below 0.05 indicates that sample MhA is statistically smaller than sample MhB. Accordingly, the hypotheses were formulated in Equations (13) and (14): if the obtained p-value from the statistical test was less than 0.05, H0 was rejected and MhA was deemed to outperform MhB. This comparison takes into account that the problem is a minimization problem. To verify the findings presented, a Wilcoxon-Mann-Whitney test was conducted for each instance.
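The test described above can be reproduced as follows. The fitness lists here are short placeholders standing in for the per-run results of two metaheuristics on one instance, not the paper's actual data:

```python
from scipy.stats import mannwhitneyu

# Placeholder fitness values from independent runs on one SCP instance
# (illustrative only; real experiments would use the recorded run results).
fitness_mha = [512, 514, 515, 516, 518, 520, 521]   # e.g., BGO
fitness_mhb = [530, 533, 535, 538, 540, 542, 545]   # e.g., a competitor

# H0: MhA's fitness values are not statistically smaller than MhB's.
# alternative="less" tests whether MhA tends to produce smaller values,
# which, for a minimization problem, means MhA performs better.
stat, p_value = mannwhitneyu(fitness_mha, fitness_mhb, alternative="less")

if p_value < 0.05:
    print("Reject H0: MhA is statistically better (smaller fitness).")
else:
    print("No significant evidence that MhA outperforms MhB.")
```

Running this pairwise for every instance and every competitor yields the matrix of p-values reported in Table 4.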
Table 4 shows the statistical test results when comparing the BGO against the other three metaheuristics in each solved instance. When the BGO is statistically better than another metaheuristic, the obtained p-value is shown in green and bold; when the BGO is statistically worse, the p-value is shown in red and bold. Otherwise, there is no statistical difference between the BGO and the other metaheuristic.
Based on the findings presented in Table 4, the summary in Table 5 shows the total number of occurrences where the BGO exhibits statistically lesser results (win), statistically greater results (loss), and no significant difference compared to the other metaheuristics. These findings underscore the robust performance of the BGO, which achieves strongly competitive and even superior outcomes. The analysis of results across the twelve instances of the SCP reveals several interesting trends.

1. Rapid Convergence of the Binary Growth Optimizer (BGO): The BGO demonstrated swift convergence towards solutions with low fitness compared to the other metaheuristics. This suggests that the BGO can find high-quality solutions in fewer iterations.
2. High Competitiveness of the BGO: Comparing the results obtained in Table 5, there is statistical evidence of the BGO's strong performance against well-recognized metaheuristics, achieving very good and often superior results.
3. Behaviors across Different Instances: As expected, and like its competitors, the BGO exhibits varying behavior across the studied instances, sometimes reaching highly precise fitness values and sometimes not. It is crucial to consider this factor when implementing the BGO for problem-solving in real-world scenarios beyond controlled laboratory settings.
4. Success of Adapting the Growth Optimizer into the Binary Growth Optimizer: The Growth Optimizer is a high-performing metaheuristic in continuous optimization spaces, where various parameter configurations were tested to find the best-performing one. In this work, the Binary Growth Optimizer was run with the recommended parameters; altering these parameters might produce different, potentially superior outcomes.
5. Binarization Strategies: The use of binarization strategies is crucial for solving SCPs; utilizing the V-type transfer function yielded excellent results. Other transfer function types remain to be explored to characterize the BGO's behavior comprehensively.
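The V-type binarization mentioned above is usually a two-step scheme: a V-shaped transfer function maps each continuous decision variable to a probability, and a binarization rule then decides the bit. The sketch below assumes the common V-shape |x / sqrt(1 + x^2)| and the complement rule; the paper's exact function and rule may differ:

```python
import math
import random

def v_shape(x):
    """V-shaped transfer function mapping a continuous value to [0, 1)."""
    return abs(x / math.sqrt(1.0 + x * x))

def binarize(continuous_solution, previous_binary, rng=random):
    """Two-step binarization: transfer function followed by the complement rule.

    For each dimension j, if a uniform draw falls below V(x_j), the previous
    bit is flipped; otherwise it is kept.
    """
    return [
        1 - b if rng.random() < v_shape(x) else b
        for x, b in zip(continuous_solution, previous_binary)
    ]
```

Large continuous displacements thus translate into high flip probabilities, while values near zero leave the binary solution unchanged, which is what makes V-shaped functions well suited to exploitation-heavy searches.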
In summary, each metaheuristic displayed unique characteristics and strategies depending on the specific SCP instance. The BGO stood out for its rapid convergence in few iterations, its high competitiveness, and its excellent results.

Underlying Mechanisms
The following distinctive features of the BGO facilitate this performance and set it apart from other metaheuristic methods.

• Integration of Learning and Reflection: The BGO distinguishes itself through a dual approach that combines two fundamental phases: the learning phase and the reflection phase. In the learning phase, the algorithm explores the search space by selecting "better" and "worse" solutions for each individual, fostering diversity and global exploration. In the reflection phase, the BGO applies reflection to the decision variables to generate new random positions, promoting local exploitation and convergence towards optimal solutions.
• Adaptive Dynamics and Escape from Local Minima: The BGO introduces adaptive dynamics that adjust the probability of accepting worse solutions during the search. This feature allows the algorithm to adapt to the changing nature of the search landscape, emphasizing exploration in early stages and exploitation in later stages, thereby enhancing its ability to find high-quality solutions across different types of optimization problems. To address the challenge of local minima, the BGO incorporates diversification mechanisms that occasionally accept worse solutions with a small probability. This strategy helps the algorithm avoid getting stuck in suboptimal local minima and explore new regions of the search space, increasing the chances of finding globally optimal solutions.
• Well-Defined Exploration and Exploitation Stages: The BGO conducts extensive sampling of the search space in the early iterations and then focuses on promising regions to refine solutions. This alternation between exploration and exploitation contributes to an efficient search and rapid convergence towards optimal solutions.
• Longer Initialization Time with Rapid Convergence Compared to Other Metaheuristics: The BGO may require more time during initialization because of its dual approach and adaptive dynamics. However, once the search is underway, it converges towards optimal solutions faster than the other metaheuristics thanks to its ability to explore and exploit the search space effectively.
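The adaptive acceptance of worse solutions described above can be sketched as an iteration-dependent probability. The decay schedule, constants, and function names below are illustrative assumptions for exposition only, not the BGO's exact rule:

```python
import math
import random

def acceptance_probability(delta, iteration, max_iterations):
    """Probability of accepting a worse candidate (delta = new - old fitness).

    Better or equal candidates (delta <= 0) are always accepted. For worse
    candidates, the probability decays with search progress, so exploration
    dominates early and exploitation dominates late. The exponential schedule
    is an illustrative choice, not the exact BGO rule.
    """
    if delta <= 0:
        return 1.0
    progress = iteration / max_iterations
    return math.exp(-delta * (1.0 + 10.0 * progress))

def maybe_accept(old_fit, new_fit, iteration, max_iterations, rng=random):
    """Stochastically decide whether to keep the new candidate solution."""
    return rng.random() < acceptance_probability(
        new_fit - old_fit, iteration, max_iterations
    )
```

Early in the run a small uphill move is accepted fairly often, while the same move near the end is almost always rejected, which is the diversification behavior that lets the algorithm escape local minima.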

Conclusions
This analysis of twelve instances of the set-covering problem (SCP) revealed distinct trends among the evaluated metaheuristics. The Binary Growth Optimizer (BGO) emerged as a standout performer, showing rapid convergence towards low-fitness solutions in comparison to its counterparts. This rapid convergence suggests that the BGO can attain high-quality solutions within a reduced number of iterations, signifying its efficiency in problem-solving.
The BGO's success can be attributed to its unique mechanisms, including the integration of learning and reflection, adaptive dynamics, and well-defined exploration and exploitation stages. These mechanisms enable the BGO to adapt dynamically to changing landscapes, effectively explore diverse regions of the search space, and converge towards high-quality solutions in a reduced number of iterations.
Moreover, statistical comparisons against well-established metaheuristics demonstrated the BGO's high competitiveness, as it consistently achieved excellent and often superior results. However, it is essential to note that the BGO displayed diverse behaviors across different instances, implying variability in its performance. Understanding these variances is crucial when implementing the BGO for real-world problem-solving outside controlled environments.
The successful adaptation of the Growth Optimizer into the Binary Growth Optimizer highlights the potential of parameter optimization, indicating that fine-tuning could further enhance the BGO's performance.
Additionally, the importance of binarization strategies in solving SCPs was underscored, particularly the success observed with the V-type transfer function. Further exploration of alternative transfer function types remains an open avenue for comprehensively understanding the BGO's behavior.
In essence, the BGO's distinguishing traits of rapid convergence, competitiveness, and consistently excellent outcomes position it as a promising metaheuristic for solving SCPs, while acknowledging the need for deeper exploration and parameter optimization for its maximal utilization. The BGO's unique approach makes it suitable for a wide range of multidimensional optimization problems across various domains, including engineering, logistics, planning, and bioinformatics, given its ability to find high-quality solutions in complex search spaces. This versatility renders it a valuable tool for researchers and professionals facing optimization challenges in their respective fields.

Table 2 .
Description of instances.

Table 4 .
Matrix of p-values from the Wilcoxon-Mann-Whitney test for the BGO against the SCA, PSA, and GWO in different instances. In green, the p-values lower than 0.05, where the BGO exhibits statistically lesser results; in red, the p-values greater than 0.95, where the BGO exhibits statistically greater results compared to the other metaheuristics.