MOBCA: Multi-Objective Besiege and Conquer Algorithm

The besiege and conquer algorithm (BCA) has shown excellent performance on single-objective optimization problems. However, there is no literature studying the BCA on multi-objective optimization problems. Therefore, this paper proposes a new multi-objective besiege and conquer algorithm (MOBCA) to solve multi-objective optimization problems. A grid mechanism, an archive mechanism, and a leader selection mechanism are integrated into the BCA to estimate the Pareto-optimal solutions and approach the Pareto-optimal front. The proposed algorithm is compared with MOPSO, MOEA/D, and NSGAIII on the IMOP and ZDT benchmark functions. The experimental results show that the proposed algorithm obtains competitive results in terms of the accuracy of the Pareto-optimal solutions.


Introduction
With the advent of the era of artificial intelligence, swarm intelligence evolutionary algorithms have received increasing attention [1]. Swarm intelligence algorithms are designed by researchers inspired by natural phenomena or population activities [2]. These algorithms are often used to solve complex optimization problems [3].
Optimization problems can be divided into single-objective and multi-objective optimization problems [4]. Single-objective optimization problems have more specific objectives than multi-objective ones [5], so solving multi-objective optimization problems is more complicated. Solving a multi-objective optimization problem requires a decision maker to choose the final result [6]; therefore, multiple different results are required for decision-makers to choose from. There are usually two ways to deal with multi-objective problems: a priori and a posteriori [7,8].
A priori means that the decision-maker first assigns weights to the objectives of the problem to be optimized. In this way, the multi-objective problem can be transformed into a single-objective problem [9], which can then be solved by a single-objective optimization algorithm. As a result, only one solution is obtained as the optimal solution, and the optimizer must be run multiple times if multiple optimal solutions are required [10]. On the contrary, when dealing with multi-objective problems, the a posteriori method ensures that the nature of the problem does not change; in other words, the problem is not transformed into a single-objective problem [11]. This enables the design parameters or conditions to vary within a certain range while obtaining a set of different solutions. The decision maker then chooses a solution from the obtained solution set according to his own preference or the actual situation [12].
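The a priori (weighted-sum) approach described above can be illustrated with a short Python sketch. The toy objectives f1 and f2, the grid search, and all function names are ours, for illustration only; each weight vector collapses the problem to a single-objective one and yields exactly one solution, so covering the Pareto front requires repeated runs with different weights.

```python
# Hypothetical bi-objective problem: minimize f1 and f2 over x in [0, 1].
def f1(x):
    return x ** 2

def f2(x):
    return (x - 1) ** 2

def weighted_sum(x, w1, w2):
    """Collapse the two objectives into one scalar via a priori weights."""
    return w1 * f1(x) + w2 * f2(x)

def solve_scalarized(w1, w2, samples=10001):
    """Solve the scalarized problem by a simple grid search on [0, 1]."""
    xs = [i / (samples - 1) for i in range(samples)]
    return min(xs, key=lambda x: weighted_sum(x, w1, w2))

# One run per weight vector: three runs, three different "optimal" solutions.
for w1 in (0.2, 0.5, 0.8):
    x_star = solve_scalarized(w1, 1.0 - w1)
    print(f"w1={w1:.1f} -> x*={x_star:.3f}")
```

Note how changing the weights moves the single returned solution along the trade-off curve, which is exactly why the a priori method must be rerun to obtain multiple Pareto-optimal solutions.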
The multi-objective optimizer aims to find a set of non-dominated solutions, obtain more non-dominated solutions through iteration, and improve the quality of the solutions [13]. The resulting non-dominated solutions can trade off between multiple objectives [14]. Compared with the a priori method, the a posteriori method retains more of the original characteristics of the multi-objective problem, meaning it can obtain more optimal solutions in a short time [15].
There are many multi-objective optimization algorithms in the literature that solve multi-objective problems. Most are inspired by single-objective algorithms, such as the grey wolf optimizer (GWO) [16], particle swarm optimization (PSO) [17], the ant lion optimization algorithm (ALO) [18], the grasshopper optimization algorithm (GOA) [5], the genetic algorithm (GA), and the evolutionary algorithm (EA). These algorithms are summarized in Table 1. Many meta-heuristic multi-objective optimization algorithms (MOAs) have been proposed according to the literature review above, and many different types of MOAs exist to solve different types of multi-objective optimization problems (MOPs). However, the existing MOAs face a low speed of convergence [31], are easily trapped by local optima [32,33], and lack diversity in solutions when solving MOPs [34,35]. Based on the single-objective besiege and conquer algorithm (BCA), we propose the MOBCA to address these problems. The contributions can be summarized as follows: (i) The single-objective BCA outperforms other algorithms in the literature and does not easily converge to local optima; therefore, we attempt to demonstrate its effectiveness on MOPs.
(ii) The BCA has been equipped with a grid mechanism to guarantee the diversity in the convergence process.
The rest of the paper is organized as follows: Section 2 provides a literature review concerning MOPs. Section 3 proposes the multi-objective besiege and conquer algorithm. Section 4 presents, discusses, and analyzes the experimental results on the ZDT, IMOP, and real-world problems. Finally, Section 5 concludes the work and suggests future works.

Multi-Objective Optimization Problem
Multi-objective optimization is a classic optimization problem in which multiple objectives need to be optimized at the same time. Therefore, the algorithm is required to find the maximum or minimum of multiple objective functions simultaneously [9]. In contrast, a single-objective optimization problem has only one objective function to optimize, so the fitness values of candidate solutions can be compared easily.
A multi-objective optimization problem can be defined as follows:

Minimize: F(x) = [ f_1(x), f_2(x), ..., f_o(x) ]
Subject to: g_i(x) >= 0, i = 1, 2, ..., m
h_i(x) = 0, i = 1, 2, ..., p
L_i <= x_i <= U_i, i = 1, 2, ..., n

where n is the number of variables, o is the number of objective functions, m is the number of inequality constraints, p is the number of equality constraints, g_i is the i-th inequality constraint, h_i indicates the i-th equality constraint, and [L_i, U_i] are the boundaries of the i-th variable.
In Figure 1, we obtain two different solutions, x_1 and x_2, which correspond to different evaluation values, f(x_1) and f(x_2). We can clearly mark these two fitness values on a one-dimensional coordinate axis whose origin is the point o, and thus clearly see the relationship between the fitness values corresponding to the two solutions. That is, if we need the smallest fitness value for the problem, solution x_2 in the figure is preferable.
For multi-objective problems, each evaluation produces multiple fitness values; that is, each evaluation process yields at least two evaluation values, as shown in Figure 2a,b. For example, during two evaluation processes, two sets of fitness values are generated as follows:

Fitness_set = { [ f_1(x), f_2(x) ], [ f_1(y), f_2(y) ] }

where x is the first solution of Fitness_set, f_1 is the first evaluation function, f_1(x) is the evaluation value obtained by x through the first evaluation function, and f_2 and f_2(x) are defined analogously. y is the second solution of Fitness_set; its evaluation functions f_1 and f_2 are the same as for solution x, but the obtained evaluation values are different.
In Figure 2a, all the fitness values of solution x are smaller than those of solution y. From this, we can easily judge that x is a better solution than y.
In Figure 2b, we can observe that f_1(x) < f_1(y) and f_2(x) > f_2(y), so it is difficult to judge whether x or y is better. As mentioned above, the evaluation of solutions is more complicated than in the single-objective case: when we evaluate a multi-objective solution, it is difficult to determine a single standard to judge its quality. So, the introduction of a series of Pareto definitions is necessary [36].

Definition 1. Pareto Optimality: A solution x ∈ X is deemed Pareto-optimal if and only if there exists no solution y ∈ X with f_i(y) <= f_i(x) for all i ∈ {1, ..., o} and f_j(y) < f_j(x) for at least one j; that is, no other solution dominates x.

Definition 3. Pareto-optimal Set: The set including all Pareto-optimal solutions is called the Pareto set:

P_s = { x ∈ X | there is no y ∈ X that dominates x }

Definition 4. Pareto-optimal Front: The set containing the objective function values of the Pareto-optimal set:

P_f = { F(x) | x ∈ P_s }

In Figure 3a,b, all solutions that lie on the Pareto front dominate those that do not. The set of points located on the Pareto front constitutes the Pareto-optimal solution set, which is the optimal solution to the multi-objective optimization problem.
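The dominance relation and the Pareto set defined above can be sketched in a few lines of Python. The helper names are ours, not from the MOBCA implementation; minimization is assumed for both objectives.

```python
def dominates(fx, fy):
    """fx dominates fy (minimization): no worse in every objective,
    strictly better in at least one."""
    return (all(a <= b for a, b in zip(fx, fy))
            and any(a < b for a, b in zip(fx, fy)))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

pts = [(1.0, 4.0), (2.0, 2.0), (3.0, 1.0), (2.5, 3.0)]  # (f1, f2) pairs
print(pareto_front(pts))  # (2.5, 3.0) is dominated by (2.0, 2.0)
```

The first three points mirror the situation of Figure 2b: none of them dominates another, so all three belong to the Pareto set.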

Multi-Objective Optimization in Metaheuristics
The ultimate goal of a multi-objective optimization algorithm is to determine a Pareto-optimal solution set and ensure that the solutions within this set are uniformly distributed across the Pareto front [37]. Such a solution set can provide decision-makers with more high-quality choices. Therefore, when solving multi-objective optimization problems, we mainly focus on two issues: the diversity of the solution set and the distance between the solutions and the true Pareto front [38].
Metaheuristic optimization algorithms can perform well in solving multi-objective optimization problems [39]. A meta-heuristic algorithm usually uses a random strategy to generate populations and adaptively adjusts the positions of the randomly generated offspring, which can converge to cover the Pareto front. The parallelism and scalability of meta-heuristic algorithms are also noteworthy, enabling them to solve large-scale and complex multi-objective optimization problems.
In recent decades, multi-objective optimization algorithms have received extensive attention due to their powerful ability to solve complex optimization problems [40]. Some novel and excellent multi-objective optimization algorithms have been proposed: the multi-objective grey wolf optimizer (MOGWO) [8], the multi-objective ant lion optimizer (MOALO) [11], the multi-objective whale optimization algorithm (MOWOA) [20], the vector-evaluated genetic algorithm (VEGA) [30], the niched Pareto genetic algorithm (NPGA) [28], the non-dominated sorting genetic algorithm (NSGA) [22], the non-dominated sorting genetic algorithm II (NSGA-II) [23], the non-dominated sorting genetic algorithm III (NSGA-III) [24], and the multi-objective evolutionary algorithm based on decomposition (MOEA/D) [26]. The above algorithms can approach the real Pareto-optimal front on some multi-objective problems. However, according to the NFL theorem, there may be problems that these algorithms cannot solve [41]. So, new multi-objective algorithms should be proposed. In the next section, a new multi-objective optimization algorithm is proposed to solve the multi-objective optimization problem.

Besiege and Conquer Algorithm (BCA)
The BCA was proposed by Jiang et al. in 2023. To enable the BCA to solve multi-objective problems, the single-objective version of the BCA deserves a brief introduction here. The BCA optimization process and its location update process can be briefly summarized by the following mathematical formulas: where S^{t+1}_{j,d} is the j-th soldier in the d-th dimension at the (t+1)-th iteration, B^t_d is the current best army at the t-th iteration, A^t_{i,d} is the i-th army in the d-th dimension at the t-th iteration, A^t_{r,d} is a random army in the d-th dimension at the t-th iteration, and k_1 and k_2 are the disturbance coefficients.
k_1 and k_2 can be calculated using the following two equations. The parameter BCB is set to switch between global search and local search; it is dynamically changed according to the distance between the current army and the best army. If the current army is the best army in the population, the parameter BCB is changed to 0.2 in order to enhance the global search, which avoids local stagnation.
The BCA starts the optimization process with a set of randomly generated populations. In the optimization process, the initial population is 1/3 of the set population size, which means that each army has three soldiers. During the position update process, each army generates three soldiers according to the position update Equation (1). However, the positions of these soldiers are only stored when the army is updated: a soldier's position is eliminated if its fitness value is not better than that of the armies when the evaluation process is completed. When all armies are updated, the algorithm judges the distance relationship between the current army and the global optimum, so it can adjust the value of BCB according to the distance, which controls the global or local search. This is an aggressive position update strategy, and it also gives the BCA better global optimization capabilities.
The number of function evaluations (FEs) is described by Equation (4), where nSoldiers is the number of soldiers in each army and nArmies is the number of armies. The algorithm outputs the best solutions when FEs reaches the maximum number of function evaluations (Max_FEs).
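Since Equation (4) itself is not reproduced here, we assume the natural reading of the text above: the evaluation budget grows by nSoldiers × nArmies per iteration. Under that assumption, the budget bookkeeping can be sketched as follows (helper names are ours):

```python
def total_fes(n_armies, n_soldiers, iterations):
    """FEs consumed after `iterations` full iterations: every army evaluates
    n_soldiers new candidate positions per iteration (our reading of Eq. (4))."""
    return n_armies * n_soldiers * iterations

def iterations_within_budget(n_armies, n_soldiers, max_fes):
    """Number of full iterations the optimizer can run before FEs reaches Max_FEs."""
    return max_fes // (n_armies * n_soldiers)

print(iterations_within_budget(10, 3, 10_000))  # -> 333
```

With 10 armies of 3 soldiers, a budget of 10,000 evaluations allows 333 full iterations, which matches the experimental budget used later in the paper.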

Multi-Objective Besiege and Conquer Algorithm (MOBCA)
To apply the BCA to multi-objective optimization problems, we need to introduce related mechanisms. These mechanisms function roughly as in MOPSO [19] and include the grid, archive, and leader selection mechanisms.
The purpose of the grid mechanism is to determine the location of each solution in the objective space. During the initialization phase, the objective space is divided into several grids according to the scalar parameter div. As the optimization process proceeds, more solutions are obtained. If a solution falls outside the current grid at some iteration, the grid is updated to cover the new solution. Specifically, the grid is used to determine the relative positions between solutions in the objective space, and this information is used to decide whether a solution is retained, which is associated with the archive. In the literature, MOGWO utilizes the grid mechanism to decide the α, β, and δ wolves [8]: a solution located in the sparsest grid is chosen as the α wolf, while solutions in the second- and third-sparsest grids are chosen as β and δ. This method leads the algorithm to maintain diversity in solutions.
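The grid mechanism can be sketched as follows; this is our minimal illustration, not the MOBCA implementation (which is in MATLAB). Each objective range is split into `div` segments, and each solution is mapped to an integer cell index; the per-cell counts supply the crowding information used later.

```python
def grid_index(point, mins, maxs, div):
    """Locate an objective vector in a hypergrid with `div` segments per objective."""
    idx = []
    for v, lo, hi in zip(point, mins, maxs):
        width = (hi - lo) / div or 1.0           # guard against a degenerate range
        idx.append(min(int((v - lo) / width), div - 1))  # clamp boundary points
    return tuple(idx)

def grid_occupancy(points, div=10):
    """Count how many solutions share each grid cell (crowding information).
    Recomputing mins/maxs mimics updating the grid to cover new solutions."""
    m = len(points[0])
    mins = [min(p[k] for p in points) for k in range(m)]
    maxs = [max(p[k] for p in points) for k in range(m)]
    occupancy = {}
    for p in points:
        cell = grid_index(p, mins, maxs, div)
        occupancy[cell] = occupancy.get(cell, 0) + 1
    return occupancy
```

A cell with a high count is crowded; a cell with count 1 is sparse, which is the property MOGWO exploits when picking the α, β, and δ wolves.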
The archive stores non-dominated solutions during the iterations. The maximum size of the archive is equal to the population size. In each iteration, the archive members are compared with the new offspring population, and the archive is updated with the latest non-dominated solutions as its new members. In this replacement process, the archive size may exceed the maximum limit, and the grid mechanism comes in handy to maintain the diversity of the non-dominated population: the grid locations are rearranged if the archive size exceeds the limit, and surplus members are deleted according to crowding-degree information. Solutions that occupy a crowded grid are more likely to be deleted. In GWO, the algorithm only stores the first three wolves in the population [16]; however, this is not suitable for an MOA. We expect to obtain more diverse solutions to guide the subsequent optimization process, so an archive equal to the population size is necessary to store the solutions.
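The archive update just described can be sketched as follows. This is our illustrative version, not the paper's code: `crowding` stands in for the grid-based crowding degree (e.g., the occupancy of a solution's grid cell), and deterministic removal of the most crowded member replaces the paper's grid-based deletion.

```python
def dominates(fx, fy):
    """fx dominates fy (minimization)."""
    return (all(a <= b for a, b in zip(fx, fy))
            and any(a < b for a, b in zip(fx, fy)))

def update_archive(archive, offspring, max_size, crowding):
    """Merge offspring into the archive, keep the non-dominated members,
    then truncate solutions from the most crowded regions first."""
    merged = list(archive) + list(offspring)
    nondom = [p for p in merged
              if not any(dominates(q, p) for q in merged if q is not p)]
    while len(nondom) > max_size:
        # `crowding(p)` is a hypothetical helper returning p's crowding degree.
        nondom.remove(max(nondom, key=crowding))
    return nondom
```

In the MOBCA, `crowding` would be derived from the grid occupancy, so that members of crowded cells are deleted first and the surviving archive stays well spread over the front.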
In PSO, the iterative update of particles is based on the global best and the personal best [17]. In MOPs, the personal best is easily updated if the new solution dominates the old one; in contrast, the global best is hard to update. In single-objective optimization, it is easy to select a leader to guide the population toward a promising area near the global optimum. However, in multi-objective optimization, it is hard to compare one solution with another because of the Pareto-optimality concepts mentioned before. Therefore, we must use a leader selection mechanism to solve this problem. In multi-objective problems, promising solutions approach the Pareto front and have good population diversity, and the archive is composed of many such promising solutions. So, the leader selection mechanism uses the roulette-wheel method to select a solution from the least crowded region of the archive as the leader.
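The roulette-wheel leader selection can be sketched as below; again this is our illustration, with assumed names. Each archive member's selection weight is the inverse of its grid-cell occupancy, so members of sparse cells are chosen more often, biasing the search toward under-explored parts of the front.

```python
import random

def select_leader(archive, cells):
    """Roulette-wheel leader selection biased toward sparse grid cells.
    `cells[i]` is the grid cell of archive[i]; sparser cells get larger slices."""
    counts = {}
    for c in cells:
        counts[c] = counts.get(c, 0) + 1
    weights = [1.0 / counts[c] for c in cells]       # inverse crowding degree
    i = random.choices(range(len(archive)), weights=weights, k=1)[0]
    return archive[i]

archive = [(1.0, 4.0), (1.1, 3.9), (3.0, 1.0)]
cells = [(0, 9), (0, 9), (9, 0)]   # the first two share a crowded cell
leader = select_leader(archive, cells)
```

Here the lone occupant of cell (9, 0) receives twice the selection probability of either member of the crowded cell, which is the diversity-preserving bias the text describes.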
In the MOBCA, the archive mechanism is introduced into the BCA to store non-dominated solutions. The global best solution is selected from the archive to lead the population. In the MOBCA, every soldier may be assigned a different global best, which guarantees diversity in the population: each soldier moves in a different direction toward the Pareto front. Once the armies exceed the maximum size, redundant armies are deleted on the basis of the grid mechanism. The convergence of the MOBCA is guaranteed because it employs the same mathematical model as the BCA. During the optimization process, the BCA converges by changing the positions of the search agents; this behavior guarantees the convergence ability of an algorithm in the search process according to [42]. The MOBCA inherits all the characteristics of the BCA, which means that the search agents explore and exploit the search space in the same manner.
The main difference is that the MOBCA is based on a set of archive members, while the BCA only saves and improves three of the best solutions.
The workflow of the BCA is shown in Figure 4, and its pseudo code is presented in Algorithm 1.

Computational Complexity
The complexity of the proposed MOBCA depends on the number of decision variables, the number of objectives, and the population size. Consider a multi-objective problem with D decision variables, M objectives, and N particles as an example. The MOBCA mainly consists of updating the soldiers, updating the archive, and updating the armies.
The complexity of updating the soldiers is determined by N, D, and Max_FEs. This process is executed Max_FEs/N times; thus, its computational complexity is O(D × N × Max_FEs/N).
Updating the archive involves deleting redundant solutions, and its complexity is O(M × N² × Max_FEs/N). In the MOBCA, after the archive update, we must also update the armies for the next generation; this process is similar to the archive update, and its complexity is O(M × N² × Max_FEs/N + nArmies × Max_FEs/N).
Since D, M, and nArmies are far smaller than N, the final complexity of the proposed MOBCA is O(Max_FEs/N × N²) = O(Max_FEs × N).

Experimental Settings
The experiment code is executed in a MATLAB R2022b environment under the Windows 10 operating system. All simulations are carried out on a computer with an Intel(R) Core(TM) i7-8750H CPU @ 2.20 GHz and 16 GB of memory.
Different from the evaluation of a single-objective optimization algorithm, the performance of a multi-objective optimization algorithm needs to be evaluated with other metrics. The inverted generational distance (IGD), which measures convergence [44], can be expressed as Equation (5):

IGD = (1 / nt) × Σ_{i=1}^{nt} d_i

where nt is the size of the true Pareto-optimal solution set, d_i indicates the Euclidean distance (ED) between the i-th member of the true Pareto-optimal solution set and the nearest member of the obtained solution set, and n represents the number of obtained Pareto solutions. IGD evaluates the distance between the Pareto-optimal front and the actually obtained solutions. However, evaluating the sparsity of the obtained Pareto solution set requires another evaluation indicator: hypervolume (HV).
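The IGD metric can be computed directly from its definition, as in the Python sketch below (our code; one common form of the metric, averaging the nearest-neighbour distances over the true front, is assumed here, since some papers average squared distances instead).

```python
import math

def igd(true_front, obtained):
    """Inverted generational distance: average distance from each point of the
    true Pareto front to its nearest obtained solution (lower is better)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return (sum(min(dist(t, s) for s in obtained) for t in true_front)
            / len(true_front))
```

Because every true-front point contributes a nearest-neighbour distance, IGD penalizes both poor convergence (obtained points far from the front) and poor coverage (regions of the front with no nearby obtained point).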
Hypervolume is a widely employed performance metric in the domain of multiobjective optimization [45][46][47].It quantifies the hypervolume enclosed by a set of solutions within the objective space, representing the volume of the space dominated by these solutions.HV serves as a crucial indicator for assessing the quality of solution sets generated by different multi-objective optimization algorithms, where larger HV values typically indicate superior performance.
The calculation of HV involves determining the volume enclosed by the Pareto front formed by the set of solutions; it is computed as the integral of the dominated portion of the objective space. Formally, the hypervolume is calculated using Equation (6):

HV(X) = ∫_{R^m} 1_{H(X,z)}(z') dz'

where X represents the set of solutions, R^m is the objective space, 1 is the indicator function, and H(X, z) is the dominated region defined in Equation (7):

H(X, z) = { z' ∈ R^m | ∃ x ∈ X : F(x) ⪯ z' ⪯ z (componentwise) }

Here, z is a reference point in the objective space. The integral spans the entire objective space, and the computation involves evaluating the contribution of each solution in X to the overall hypervolume.
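For two objectives, the dominated region is a union of axis-aligned rectangles, so HV can be computed with a simple sweep. The sketch below is our illustration for the bi-objective minimization case; the reference point `ref` must be dominated by every front member it is meant to bound.

```python
def hypervolume_2d(front, ref):
    """Hypervolume of a 2-objective minimization front w.r.t. reference point
    `ref`: the area dominated by the front and bounded by `ref` (larger is better)."""
    # Keep only points that actually dominate the reference point,
    # sorted by increasing f1 (ties broken by f2).
    pts = sorted(p for p in front if p[0] <= ref[0] and p[1] <= ref[1])
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 < prev_f2:                         # skip dominated points
            hv += (ref[0] - f1) * (prev_f2 - f2)  # new rectangular slab
            prev_f2 = f2
    return hv

# Three mutually non-dominated points; each adds one rectangle of area 3, 2, 1.
print(hypervolume_2d([(1.0, 3.0), (2.0, 2.0), (3.0, 1.0)], (4.0, 4.0)))  # -> 6.0
```

With more objectives the dominated region is no longer a simple union of rectangles in one sweep order, which is why exact HV computation becomes expensive as the number of objectives grows.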
Utilizing the above evaluation metrics allows us to quantitatively compare the MOBCA with MOPSO, NSGAIII, and MOEA/D. In addition, we illustrate the best set of Pareto-optimal solutions obtained by each algorithm in the search space, which allows us to compare the performance of the algorithms qualitatively. All algorithms are run 30 times on the test problems, and the statistical results of these 30 runs are provided in Tables 2 and 3; note that we use 10,000 function evaluations for each algorithm. The qualitative results are provided in Figures 5-7. In Tables 2 and 3, "NaN" indicates that no feasible solution was found; "+" and "−" indicate the number of test problems on which the compared algorithm shows significantly better or worse performance than the MOBCA, respectively, and "=" indicates no significant difference between the MOBCA and the compared algorithm.

Experimental Results and Discussion
The results of algorithms on the test functions are presented in Tables 2 and 3, and the best Pareto optimal fronts obtained by all algorithms are illustrated in Figure 5.At the same time, the tracking results of HV and IGD during the iteration process are shown in Figures 6 and 7.

Results on ZDT Test Suite
Tables 2 and 3 show that the MOBCA outperforms the other algorithms on three of the five ZDT test problems. The corresponding columns show the higher accuracy and better robustness of the MOBCA compared to the others on ZDT1, ZDT3, and ZDT6.
The shapes of the Pareto-optimal fronts obtained by the four algorithms on ZDT1, ZDT2, ZDT3, and ZDT6 are illustrated in Figure 5a-d. Inspecting these figures, it may be observed that NSGAIII shows the poorest convergence despite its good coverage on ZDT6. However, NSGAIII and the MOBCA both provide very good convergence toward all true Pareto-optimal fronts. The most interesting pattern is that the Pareto-optimal solutions obtained by NSGAIII show higher coverage than the MOBCA on ZDT2, whereas the coverage of the MOBCA on ZDT3 is better than that of NSGAIII. This shows that the MOBCA has the potential to outperform NSGAIII in finding a Pareto-optimal front with separated regions.
The convergence curves of HV and IGD are presented in Figures 6a-d and 7a-d. Figures 6a,b and 7a,b show that the MOBCA converges to the Pareto front faster than NSGAIII, meaning the MOBCA can quickly converge to and cover the Pareto front in comparison with NSGAIII. However, the MOBCA only demonstrates a weak advantage on ZDT3 compared to NSGAIII. Based on Table 2, we can conclude that although the MOBCA shows only a weak advantage on ZDT3, it is a stable and robust way to solve separated MOPs.

Results on IMOP Test Suite
The previous section investigated the performance of the proposed MOBCA on the ZDT test suite, most of whose test functions are not multi-modal. To benchmark the proposed algorithm on a more challenging test set, this subsection employs the IMOP benchmark functions. These are among the most difficult test functions in the multi-objective optimization literature and can confirm whether the superiority of the MOBCA is significant or not.
Inspecting the results in Table 2, it is evident that the MOBCA outperforms MOEA/D, MOPSO, and NSGAIII on all IMOP test functions. Since IGD is a good metric for benchmarking the convergence of an algorithm, these results indicate that the MOBCA has better convergence on these benchmark functions. To observe the coverage of the algorithms, the HV metric provides a quantitative analysis in Table 3. The MOBCA failed to achieve the best results on IMOP1; we noticed that it obtained the worst standard deviation on IMOP1 regarding HV. This means that, over the 30 experimental runs, the HV obtained by the MOBCA is unstable, which may be caused by local optima that make the convergence of the MOBCA stagnate.
Figure 5e-l shows the Pareto fronts obtained by the algorithms on the IMOPs. The figures show that the MOBCA is closer to the true Pareto front, and its coverage is broader than that of the other algorithms. Especially in Figure 5f-i,k, the MOBCA demonstrates overwhelming advantages in the coverage of the Pareto front. This means that the MOBCA can provide decision-makers with more high-quality solutions when solving practical problems. Although the MOBCA provides results similar to NSGAIII when solving IMOP5, as shown in Figure 5i, it is significantly better than NSGAIII when solving IMOP7, as shown in Figure 5k.
The convergence speed of the MOBCA and the coverage of the obtained Pareto front are reflected by the HV and IGD. Figures 6 and 7 demonstrate that the MOBCA converges to the Pareto-optimal front more quickly on most of the IMOPs. In particular, Figures 6f,k and 7f,k show that the MOBCA can avoid local optima effectively compared with the other algorithms. The MOBCA was developed based on the single-objective BCA, so it inherits the excellent convergence speed of the BCA; Figures 6e,h,l and 7e,h,l support this view.

Results on Real-World Problems
The last part of Tables 2 and 3 presents real-world multi-objective optimization problems (RWMOPs). The MOBCA shows outstanding results in terms of the IGD metric but fails to achieve good results in HV. Due to the complexity and dynamics of real-life problems, the MOBCA, initially designed to solve benchmark multi-objective problems, may have certain flaws. However, it is still undeniable that, with its current performance, the MOBCA can provide high-quality solutions compared with the other algorithms.

Discussion
The qualitative and quantitative results show that the MOBCA benefits from high convergence and coverage. The high convergence of the MOBCA is inherited from the BCA. The main mechanisms that guarantee convergence in the BCA and MOBCA are the besiege mechanism and the conquer mechanism. These two mechanisms emphasize exploitation, and convergence increases in proportion to the number of iterations. Since, in the MOBCA, we select one solution from the archive in every iteration for each army and require the soldiers to move around the armies, being trapped by local optima might be a concern. However, the results prove that the MOBCA does not suffer from local optima.

Conclusions and Future Work
This paper proposes a multi-objective version of the population-based optimization algorithm BCA. By introducing the grid mechanism, archive mechanism, and leader selection mechanism and redesigning them for the single-objective BCA, a new multi-objective algorithm, the MOBCA, is generated, which enables the BCA to deal with multi-objective optimization problems.
The proposed MOBCA is compared with several excellent multi-objective optimization algorithms inspired by single-objective algorithms: MOPSO, MOEA/D, and NSGAIII. Our experimental results show that the MOBCA is superior to all compared algorithms in this paper in terms of convergence. Furthermore, the archive mechanism and grid mechanism ensure the diversity of the distribution of the Pareto-optimal solutions obtained by the MOBCA.
Based on the experimental results, the MOBCA shows a convergence speed similar to that of the BCA and can converge to the Pareto front effectively on some MOPs. The diversity of its solutions is also better than that of the compared algorithms, except on the real-world problems. This shows that the proposed MOBCA still has certain flaws in terms of diversity, and there is considerable room for improvement.
For future work, a new multi-objective optimization mechanism should be introduced to ensure the diversity of the distribution of Pareto optimal solutions obtained by the algorithm.At the same time, a new position update formula should be considered because the MOBCA failed to determine a sufficient number of Pareto optimal solutions in some test functions.

Figure 2. Different situations in a multi-objective problem (for example, two objectives). (left) Situation 1: solution x dominates y. (right) Situation 2: x and y do not dominate each other.

Figure 3. Pareto fronts for different optimization directions.

Table 1. Classical and novel multi-objective optimization algorithms.
Figure 4. The workflow of the BCA algorithm.
Figure 6. The convergence of hypervolume with the number of function evaluations.
Figure 7. The convergence of inverted generational distance with the number of function evaluations.