A Modification of the Imperialist Competitive Algorithm with Hybrid Methods for Multi-Objective Optimization Problems

Abstract: This paper proposes MOHMICA, a modification of the imperialist competitive algorithm with hybrid methods for multi-objective optimization problems, built on HMICA, a hybrid modification of the imperialist competitive algorithm. The motivation is an obvious limitation of HMICA: it can solve only single-objective optimization problems, not multi-objective ones. In order to adapt to the characteristics of multi-objective optimization problems, this paper improves the establishment of the initial empires, the colony allocation mechanism and the empire competition in HMICA, and introduces an external archiving strategy. A total of 12 benchmark functions are calculated, including 10 bi-objective and 2 tri-objective benchmarks, and four metrics are used to verify the quality of MOHMICA. Then, a new comprehensive evaluation method is proposed, called the "radar map method", which can evaluate both the convergence and the distribution performance of a multi-objective optimization algorithm. As the four coordinate axes of the radar maps show, this is a symmetrical evaluation method. For this evaluation method, the larger the radar map area, the better the calculation result of the algorithm. Using this new evaluation method, the algorithm proposed in this paper is compared with seven other high-quality algorithms. The radar map area of MOHMICA is at least 14.06% larger than that of the other algorithms, proving that MOHMICA has advantages as a whole.


Introduction
In the fields of production processes, engineering applications, and management and decision-making within complex systems, multi-objective optimization problems are more common than single-objective ones. However, it is very difficult to find a solution that makes all objective functions optimal simultaneously, because the objectives conflict with one another. Therefore, there is rarely a single global optimal solution; instead, a set of Pareto optimal solutions is formed, balancing the values of the various objective functions. In this case, the solution process becomes more complex than in single-objective optimization, and it is difficult to obtain a uniformly distributed approximate Pareto optimal solution set. Accordingly, it is of theoretical and practical significance to study the solution of such problems.

Description of Constrained Optimization
Generally, a multi-objective optimization problem can be described by Formula (1):

min F(x) = { f_1(x), f_2(x), ..., f_m(x) }
s.t. g_i(x) ≤ 0, i = 1, 2, ..., p
     h_j(x) = 0, j = 1, 2, ..., q
     u_k ≤ x_k ≤ v_k, x ∈ R^n, k = 1, 2, ..., n                (1)

where { f_1(x), f_2(x), ..., f_m(x) } are the individual objective functions; g_i(x) ≤ 0 is the i-th inequality constraint of the problem in Formula (1), with p the number of inequality constraints; h_j(x) = 0 is the j-th equality constraint, with q the number of equality constraints; and u_k and v_k are the lower and upper bounds of x_k, respectively. The set D = { x ∈ S | g_i(x) ≤ 0, h_j(x) = 0, i = 1, 2, ..., p, j = 1, 2, ..., q } that meets all inequality and equality constraints within the search space S = { x ∈ R^n | u_k ≤ x_k ≤ v_k, k = 1, 2, ..., n } is called the feasible region of the constrained optimization problem in Formula (1). If a solution x ∈ D, x is called a feasible solution; otherwise, it is called an infeasible solution. For two solutions x_1 = (x_11, x_12, ..., x_1n) and x_2 = (x_21, x_22, ..., x_2n), if x_1 is better than x_2 in every objective, or better in some objectives and equal in the rest, then x_1 dominates x_2; here, x_1 is the dominant solution and x_2 is the dominated solution. Otherwise, x_1 and x_2 are non-dominated with respect to each other.
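The dominance relation defined above can be sketched in a few lines of Python (minimization of all objectives is assumed throughout):

```python
def dominates(f1, f2):
    """Return True if objective vector f1 Pareto-dominates f2 (minimization):
    f1 is no worse in every objective and strictly better in at least one."""
    no_worse = all(a <= b for a, b in zip(f1, f2))
    strictly_better = any(a < b for a, b in zip(f1, f2))
    return no_worse and strictly_better
```

For example, (1, 2) dominates (2, 3), while (1, 3) and (2, 2) are mutually non-dominated.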

Related Work
This section is divided into two parts: multi-objective swarm and evolutionary algorithms, and multi-objective imperialist competitive algorithms.

Multi-Objective Swarm and Evolutionary Algorithms
Swarm and evolutionary algorithms can use a population to search in the optimal direction, making the whole population approach the Pareto front and finally obtaining an approximate Pareto front. There have been several studies on swarm and evolutionary algorithms for multi-objective optimization since Schaffer [1] proposed the vector evaluated genetic algorithm (VEGA). Well-known algorithms include the multiple objective genetic algorithm (MOGA) proposed by Fonseca and Fleming [2], the Pareto envelope-based selection algorithm II (PESA-II) proposed by Corne [3], the non-dominated sorting genetic algorithm (NSGA) [4] and NSGA-II [5] proposed by Deb, multi-objective particle swarm optimization (MOPSO) proposed by Coello [6], the multi-objective evolutionary algorithm based on decomposition (MOEA/D) proposed by Q. Zhang [7] and the multi-objective artificial bee colony algorithm proposed by Akbari [8].
When solving complex multi-objective optimization problems, the above algorithms may have one or more of the following problems: (1) As the number of objective functions increases, the proportion of non-dominated solutions in the population also increases, which slows down the search process; (2) For a high-dimensional objective space, the computational complexity of maintaining diversity is too high, and it is difficult to find the neighbouring elements of a solution; (3) The indexes for evaluating the comprehensive performance of an algorithm are poor: almost all evaluation indexes can assess only one of the convergence and the distribution of the population, so it is presently difficult to evaluate both comprehensively for swarm and evolutionary algorithms solving multi-objective optimization; (4) For a high-dimensional objective space, how to visualize the results is also a difficult problem.
In recent years, many new swarm and evolutionary algorithms and their improved variants have also been applied effectively to multi-objective optimization. Mirjalili proposed the multi-objective grasshopper optimization algorithm (MOGOA) [9], the multi-objective ant lion optimizer (MOALO) [10] and the multi-objective grey wolf optimizer (MOGWO) [11]. MOGOA, based on the grasshopper optimization algorithm (GOA), introduces an archive and a target selection mechanism into GOA to handle multiple objectives; for most multi-objective problems it is competitive in both convergence and distribution. MOALO, based on the ant lion optimizer (ALO), was tested on 17 case studies, including 5 unconstrained functions, 5 constrained benchmarks and 7 engineering design optimizations, and most of its results were better than those of NSGA-II and MOPSO. MOGWO, based on the grey wolf optimizer (GWO), uses a fixed-size external archive to save the non-dominated solutions found during iteration, together with a grid-based approach to maintain and adaptively assess the Pareto front; after solving the CEC 2009 [12] benchmarks, its results were compared with those of MOPSO and MOEA/D. Based on MOGWO and an adaptive chaotic mutation strategy, Liu proposed a multiple-search-strategy multi-objective grey wolf optimizer (MMOGWO) [13]. An elitism strategy is also introduced into MMOGWO to search for more potential Pareto optimal solutions and preserve the diversity of the approximated solution set.
MMOGWO was verified on several multi-objective benchmark functions and obtained competitive results. Based on stochastic fractal search (SFS), Khalilpourazari [14] proposed multi-objective stochastic fractal search (MOSFS) with two new components, an archive and a leader selection mechanism; applied to the welded beam design problem, it obtained better results than MOPSO and MOGWO. Got [15] extended the whale optimization algorithm (WOA) into a new multi-objective algorithm called the guided population archive whale optimization algorithm (GPAWOA). This algorithm uses an external archive to store the non-dominated solutions found while solving the optimization problem; leaders are selected from the archive to guide the population towards promising regions of the search space, and a crowding-distance mechanism is incorporated into the WOA to maintain diversity. The algorithm obtained good results, but its initialization leaves room for improvement. In the future, some new swarm and evolutionary algorithms, including the aquila optimizer (AO) [16], the reptile search algorithm (RSA) [17] and the arithmetic optimization algorithm (AOA) [18], could be extended to solve multi-objective optimization.
In recent years, there has also been research on solving multi-objective optimization problems using ICA and its various modifications. Enaytifar [33] proposed the multi-objective imperialist competitive algorithm (MOICA). The main calculation steps of MOICA follow the basic ICA strictly, which causes several problems, including premature convergence: empire competition reduces the number of empires, so the computation can terminate before the iteration count reaches its maximum, because convergence is too fast and empires die out during the competition. Moreover, several steps of MOICA leave room for improvement, including the search ability and the convergence speed. To address these problems, researchers have proposed modified versions of MOICA. Ghasemi [34] proposed a modified bare-bones multi-objective imperialist competitive algorithm (MGBICA), introducing a Gaussian bare-bones operator into empire assimilation to enhance population diversity. MGBICA was then applied to multi-objective optimal electric power planning, namely the optimal power flow (OPF) and optimal reactive power dispatch (ORPD) problems. In this algorithm, the steps other than assimilation still leave room for modification.
Mohammad [35] improved MOICA with a new step in which all countries move towards the optimal imperialist, and used the algorithm to design the variables of a brushless DC motor so as to maximize efficiency and minimize total mass. Such a design can increase the convergence speed, but it also increases the possibility of falling into local optima, and it cannot solve the problem that the number of empires may shrink through imperialist competition so that the iteration terminates before the iteration count reaches its maximum. Piroozfard [36] designed an improved multi-objective imperialist competitive algorithm to solve a multi-objective job shop scheduling optimization problem with low carbon emissions; the algorithm obtains good results for the model established in that paper, but its scope of application is clearly limited. When Khanali [37] researched multi-objective energy optimization and environmental emissions for a walnut production system, another modified MOICA was proposed; it solved the multi-objective optimization for the walnut production system and obtained the most environmentally and economically beneficial energy consumption. To solve flexible job shop scheduling problems with transportation and sequence-dependent setup times (FJSSP-TSDST), a complex multi-objective problem, Li [38] proposed a new MOICA named the imperialist competitive algorithm with feedback (FICA), which introduces a new assimilation and adaptive revolution mechanism with feedback; to improve the search ability, a novel competition mechanism based on transferring solutions among empires is also presented.
In addition, some improved ICA algorithms that currently solve only single-objective optimization have the potential, through further improvement, to solve multi-objective problems. A hybrid of ICA and the Harris hawks optimizer (HHO) [39], called imperialist competitive Harris hawks optimization (ICHHO), was proposed to solve common optimization problems; 23 benchmarks were calculated and the results compared with ICA and HHO, and the hybrid obtained better results than both basic algorithms. To solve the assembly flow shop scheduling problem, Li [40] proposed the imperialist competitive algorithm with empire cooperation (ECICA), which uses a new imperialist competitive method with adaptive cooperation between the strongest and weakest empires. Tao [41] presented an improved ICA called the discrete imperialist competitive algorithm (DICA) to solve the resource-constrained hybrid flow-shop problem with energy consumption (RCHFSP-EC), designing a new decoding method that considers resource allocation. Finally, a series of real shop scheduling instances were calculated and compared with other high-quality heuristic algorithms, and DICA obtained satisfactory results.

The Main Content of This Paper
From the above literature on the improvement and application of multi-objective imperialist competitive algorithms, these kinds of algorithms have the following three problems. First, most algorithms fail to solve the problem that the number of empires is reduced by imperialist competition; when only one empire remains, the calculation cannot be carried out, which may lead to the early termination of the iteration. Second, in each step of the various modified imperialist competitive algorithms, most algorithms cannot consider both local search and global search. Third, when solving practical problems, some algorithms are limited: they are applicable only to the specific problem being solved and are not universal.
Therefore, in order to solve the above problems of multi-objective optimization using ICA, this paper proposes a new multi-objective imperialist competitive algorithm, called MOHMICA, based on a modification of the imperialist competitive algorithm, HMICA, in the literature [42].
The scientific contribution of this paper covers two aspects, algorithm theory and the evaluation of algorithm performance: (1) From the perspective of algorithm theory, this paper proposes a new scheme for solving multi-objective optimization problems based on HMICA; by calculating 12 multi-objective benchmarks and comparing with several recent high-quality algorithms, the proposed algorithm is shown to have certain advantages; (2) From the perspective of algorithm performance evaluation, this paper proposes a comprehensive evaluation method for multi-objective optimization algorithms that uses multiple evaluation metrics.
The second part of this paper introduces MOHMICA, the proposed algorithm. The third part introduces the design of the numerical simulations, including performance metrics, comparison algorithms, simulation settings and environment. The fourth part presents the calculation results and discussion, and the fifth part gives the conclusions and future research.

The Proposed Algorithm
The steps of MOHMICA include initialization of solutions, the establishment of the initial empires, the development of imperialists and assimilation of colonies, empire interaction, empire revolution, empire competition and external archive.
Among these steps, initialization of solutions, the development of imperialists and assimilation of colonies, empire interaction, and empire revolution are the same as HMICA. In this paper, the steps of the establishment of the initial empires, empire competition and external archive strategy are as follows.

The Establishment of the Initial Empires
Firstly, generate N initial solutions, namely N countries, using Halton sequences, and then sort these N initial solutions. The rules are as follows: the smaller the number of dominated solutions associated with a solution in the whole population, the better the solution is; (4) if two solutions are mutually non-dominated feasible solutions and their numbers of dominated solutions in the whole population are the same, the crowding distance is compared: the larger the crowding distance, the better the solution. The calculation of the crowding distance can be found in the literature [5].
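The crowding distance used in rule (4) follows the NSGA-II scheme of [5]. A minimal Python sketch, assuming the input is a list of objective vectors belonging to one non-dominated set:

```python
def crowding_distance(objs):
    """NSGA-II crowding distance [5]: for each objective, sort the set,
    give the boundary solutions infinite distance, and add each interior
    solution's normalized gap between its two neighbours."""
    n = len(objs)
    dist = [0.0] * n
    if n == 0:
        return dist
    for k in range(len(objs[0])):          # one pass per objective
        order = sorted(range(n), key=lambda i: objs[i][k])
        lo, hi = objs[order[0]][k], objs[order[-1]][k]
        dist[order[0]] = dist[order[-1]] = float("inf")
        for j in range(1, n - 1):
            if hi > lo:
                dist[order[j]] += (objs[order[j + 1]][k]
                                   - objs[order[j - 1]][k]) / (hi - lo)
    return dist
```

Boundary solutions in every objective receive an infinite distance, so they are always preferred; interior solutions in sparser regions receive larger values.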
After sorting, the countries are divided into N_imp empires. Each empire is composed of one imperialist and several colonies; that is, all countries consist of N_imp imperialists and N_col colonies, where N = N_imp + N_col. For the top N_imp − 1 imperialists, the number of colonies randomly assigned to each imperialist is given by Formula (2), and the remaining colonies are assigned to the last imperialist.
where NC_i is the number of colonies allocated to the i-th imperialist, round(•) is the integer closest to •, and randi(0, 1) is a random number equal to 0 or 1.
Allocating colonies in this way avoids the disadvantage that the empire-power formula of the basic ICA cannot be used in multi-objective optimization, and it simplifies the colony allocation step. Meanwhile, when N_imp^2 < N_col, it ensures that each imperialist is assigned at least one colony.

Empire Competition
Competition among empires is a process of redistributing the colonies owned by each empire. The steps are as follows: Step 1. Compare the quality of the empires and rank them to find the strongest empire and the weakest empire.
Step 2. If the weakest empire has colonies, find the weakest colony in the weakest empire as the annexed country. If there are no colonies in the weakest empire, the imperialist itself will be annexed by other empires.
Step 3. Randomly put the annexed country into another empire. The rules for ranking the strength of empires are as follows: (1) Compare the number of infeasible solutions in each empire; the empire with fewer infeasible solutions is better; (2) If the numbers of infeasible solutions of two empires are the same, compare the number of dominated solutions; the lower the number of dominated solutions, the better the empire; (3) If both of the above are the same, compare the average crowding distance of each empire; the larger the average crowding distance, the stronger the empire.
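The three-level ranking rule above amounts to a lexicographic sort key. A hedged Python sketch, where the empire representation (a list of country records with `feasible`, `n_dominated` and `crowding` fields) is a hypothetical data layout chosen for illustration, not the paper's actual one:

```python
def empire_rank_key(empire):
    """Lexicographic strength key for an empire, per the three rules:
    (1) fewer infeasible solutions, (2) fewer dominated solutions,
    (3) larger average crowding distance (negated so that a smaller
    key always means a stronger empire)."""
    n_infeasible = sum(1 for c in empire if not c["feasible"])
    n_dominated = sum(c["n_dominated"] for c in empire)
    avg_crowding = sum(c["crowding"] for c in empire) / len(empire)
    return (n_infeasible, n_dominated, -avg_crowding)

# Ranking, strongest empire first:
# empires.sort(key=empire_rank_key)
```

Python compares tuples element by element, so the second and third criteria only break ties left by the earlier ones, exactly as rules (1)-(3) require.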

External Archive Strategy
When solving multi-objective optimization problems, non-dominated solutions cannot be compared directly, so distribution indexes such as the crowding distance must be used to compare solution quality. Since a certain number of non-dominated solutions is generated in each iteration, and to prevent these solutions from being lost in the next iteration, an external archive is established: it stores the non-dominated solutions, merges those obtained in each iteration, and deletes the duplicate or dominated individuals in the archive, so that the elite individuals of the calculation process are retained. The specific archiving procedure in this paper is as follows:
Step 1. Arrange the non-dominated solutions obtained in each iteration according to the crowding distance, place them into the external archive, and delete the duplicate solutions in the archive;
Step 2. Update the external archive: recalculate the number of dominated solutions and the crowding distance of each solution in the archive, and set to positive infinity the crowding distance of the D solutions with the minimum value in any single objective, where D is the number of objective functions;
Step 3. Delete the dominated solutions from the updated archive and sort the rest by crowding distance. If the number of non-dominated solutions exceeds the maximum size of the external archive, delete the part beyond the maximum size. In particular, to preserve more potential elite solutions, the archive size can be enlarged to a certain extent, for example, to twice the population size;
Step 4. Find the colony with the largest number of dominated solutions among all colonies, replace it with the solution with the largest crowding distance in the external archive (excluding solutions whose crowding distance is positive infinity), and then carry out the next iteration.
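Steps 1-3 of the archive strategy can be sketched as below. This is a simplified illustration, not the paper's exact procedure: the infinite-crowding bookkeeping of Step 2 and the colony replacement of Step 4 are omitted, and a compact NSGA-II-style crowding helper [5] is used for the truncation.

```python
def dominates(a, b):
    # True if objective vector a Pareto-dominates b (minimization)
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def crowding(objs):
    # NSGA-II-style crowding distance [5] for a list of objective vectors
    n, dist = len(objs), [0.0] * len(objs)
    for k in range(len(objs[0])):
        order = sorted(range(n), key=lambda i: objs[i][k])
        lo, hi = objs[order[0]][k], objs[order[-1]][k]
        dist[order[0]] = dist[order[-1]] = float("inf")
        for j in range(1, n - 1):
            if hi > lo:
                dist[order[j]] += (objs[order[j + 1]][k]
                                   - objs[order[j - 1]][k]) / (hi - lo)
    return dist

def update_archive(archive, new_points, max_size):
    """Merge the archive with newly found points, drop duplicates and
    dominated points, then keep the max_size most spread-out solutions."""
    merged = []
    for s in archive + new_points:
        if s not in merged:                       # Step 1: deduplicate
            merged.append(s)
    nd = [s for s in merged                       # Steps 2-3: non-dominated only
          if not any(dominates(o, s) for o in merged if o != s)]
    if not nd:
        return []
    cd = crowding(nd)                             # truncate by crowding distance
    order = sorted(range(len(nd)), key=lambda i: -cd[i])
    return [nd[i] for i in order[:max_size]]
```

Boundary solutions carry infinite crowding distance, so truncation discards crowded interior points first, which is the usual way such archives preserve spread.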

Implementation of the Proposed Algorithm
After the improvement with the hybrid methods, the pseudo code of MOHMICA is obtained (Algorithm 1), as shown below.

Algorithm 1: Pseudo code of MOHMICA
Generate N initial solutions (countries) using Halton sequences.
Calculate the function values, the violation values (if the optimization has constraints), the number of dominated solutions and the crowding distance of the initial countries.
Sort the initial solutions according to the sorting rules in Section 2.1.
Create empires according to the colony allocation rules in Section 2.1.
for each iteration do
    The development of imperialists and the assimilation of colonies: according to the literature [42].
    Recalculate the function values, the violation values (if the optimization has constraints), the number of dominated solutions and the crowding distance of the countries.
    Empire revolution: according to the literature [42].
    Recalculate the function values, the violation values (if the optimization has constraints), the number of dominated solutions and the crowding distance of the countries.
    Empire interaction: according to the literature [42].
    Empire competition: according to Section 2.2 of this paper.
    Update the external archive: according to Section 2.3 of this paper.
end for

Experimental Design
This part introduces the benchmark functions calculated in this paper, the performance metrics, the comparison algorithms, and the simulation settings and environment.

Benchmark Functions
In order to verify the effectiveness of the algorithm proposed in this paper, 12 benchmark functions are calculated by MOHMICA, including SCH [5], FON [5], ZDT1-ZDT4 of the ZDT series [5], and 6 benchmarks from the UF series of CEC 2009. Among them, UF8 and UF10 are tri-objective benchmarks and the others are bi-objective. The mathematical expressions of all benchmarks are shown in Table 1.

Performance Metrics
In order to evaluate the convergence and distribution of the solutions, this paper uses four metrics: convergence metric (CM), diversity metric (DM), generational distance (GD) and inverted generational distance (IGD). These four metrics are introduced as follows.

(1) Convergence metric. This metric reflects the distance between the approximate Pareto front and the real Pareto front: the smaller the value, the closer the obtained solutions are to the real Pareto front, and the better the convergence. The calculation is shown in Equation (3):

CM(PF, PF*) = (1/n_nd) Σ_{x∈PF} min_{y∈PF*} ||x − y||                (3)

where PF is the calculated approximate Pareto front, PF* is the real Pareto front, n_nd is the number of non-dominated solutions, and ||•|| denotes the Euclidean distance. In particular, CM = 0 means the calculated Pareto front coincides with the true Pareto front.
(2) Diversity metric. This metric measures the distribution of the non-dominated solutions: the smaller its value, the better the distribution. The calculation is shown in Equation (4):

DM(PF, PF*) = (d_f + d_l + Σ_{i=1}^{n_nd − 1} |d_i − d̄|) / (d_f + d_l + (n_nd − 1) d̄)                (4)

where d_f and d_l are the Euclidean distances between the extreme non-dominated solutions and the boundary solutions of the obtained non-dominated solution set, d_i is the Euclidean distance between consecutive solutions of the obtained set, and d̄ is the mean of all d_i.

(3) Generational distance
This metric refers to the distance between the whole approximate Pareto front obtained by the algorithm and the real Pareto front: the smaller the GD, the closer the solutions are to the real Pareto front, and the better the convergence of the algorithm. The calculation is shown in Equation (5):

GD(PF, PF*) = ( Σ_{x∈PF} min_{y∈PF*} ||x − y||^2 )^{1/2} / n_nd                (5)

(4) Inverted generational distance. This metric refers to the distance between the real Pareto front and the approximate Pareto front obtained by the algorithm. To some extent, it is a comprehensive metric that measures both the convergence and the diversity of an algorithm: the smaller the IGD, the better the algorithm. The calculation is shown in Equation (6):

IGD(PF*, PF) = (1/n_PF) Σ_{y∈PF*} min_{x∈PF} ||y − x||                (6)

where n_PF is the number of points of the real Pareto front.
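The convergence-related metrics can be sketched in Python following their standard definitions, consistent with the surrounding prose (CM and IGD as mean nearest Euclidean distances, GD in its common root-of-sum-of-squares form); note that minor variants of these formulas exist in the literature:

```python
import math

def cm(pf, pf_true):
    # Convergence metric: mean Euclidean distance from each of the
    # n_nd obtained points to its nearest point on the real front.
    return sum(min(math.dist(x, y) for y in pf_true) for x in pf) / len(pf)

def gd(pf, pf_true):
    # Generational distance: root of the sum of squared nearest
    # distances, divided by the number of obtained points.
    return math.sqrt(sum(min(math.dist(x, y) for y in pf_true) ** 2
                         for x in pf)) / len(pf)

def igd(pf, pf_true):
    # Inverted generational distance: mean nearest distance from each
    # of the n_PF real-front points to the approximate front.
    return sum(min(math.dist(y, x) for x in pf) for y in pf_true) / len(pf_true)
```

All three vanish when the approximate front coincides with the real front; CM and GD penalize points far from the real front, while IGD also penalizes real-front regions that the approximation fails to cover.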

Comparison Algorithm and Simulation Setting
In this paper, each benchmark function is run independently 20 times with the MOHMICA algorithm and then compared with several multi-objective algorithms that have achieved good results on these kinds of problems in recent years, including PESA-II, MOEA/D, NSGA-II, MOABC, MOALO, MOGOA and MMOGWO. In particular, the parameter settings of PESA-II and MOEA/D are the same as in [3,7]. The data of the other algorithms are taken from [13].
The simulation environment is Windows 10, Intel ® Core (TM) i7-10875H CPU @ 2.30 GHz with a 16.00 GB RAM memory with a running environment of MATLAB 2017b.
The initial population size of the MOHMICA algorithm is set to 100, and the size of the external archive is set to 200. For SCH and FON, the maximum number of iterations is 50, meaning the maximum number of evaluations is 5000. For the other bi-objective functions, the maximum number of iterations is 250, that is, 25,000 evaluations. For the tri-objective benchmark functions, the maximum number of iterations is 500, that is, 50,000 evaluations. To ensure that comparisons between algorithms on the same function are fair, the population size, maximum number of iterations and maximum number of evaluations of all comparison algorithms are the same as those of MOHMICA. The overall rankings are summarized in Table 6; from this table, MOHMICA ranked first more often than any other algorithm.

Results and Discussion
From Tables 2-6, some patterns in the metrics can be observed for each benchmark function calculated by MOHMICA. For the convergence metrics CM and GD, MOHMICA has an obvious advantage in general. For the distribution metric DM, MOHMICA ranked first more often than any other algorithm. For IGD, the comprehensive ranking of the proposed algorithm was slightly lower than those of MOALO and MMOGWO, but significantly higher than those of the other algorithms; the reason is the low ranking on the SCH and UF2 functions. On the whole, the more complex a benchmark function is, the better the result obtained by MOHMICA. The results of all benchmark functions calculated by the different algorithms in Tables 2-5 can be verified quantitatively by the Wilcoxon test on the four metrics of each algorithm. The test is conducted at three levels of significance, namely α = 0.01, α = 0.05 and α = 0.1, with the following statistical hypotheses: (1) H0: the results of the two algorithms are homogeneous; (2) H1: the results of the two algorithms are heterogeneous.
According to the results of the Wilcoxon test in Table 7, the following conclusions can be obtained:


A New Method for Evaluating Multi-Objective Optimization Algorithm
For the common metrics evaluating the quality of multi-objective optimization algorithms at present, CM, DM, GD and IGD all have some limitations. Specifically, CM and GD are convergence metrics from different perspectives. DM is a metric to evaluate the distribution of solutions in the approximate Pareto front. Although IGD is generally considered to be a comprehensive evaluation metric that can take into account the convergence and distribution of the solutions, it also has some limitations. On the one hand, a different number of the sampling points on the real Pareto front may affect the results of IGD; on the other hand, for those optimization problems with more than three objective functions, the convergence and distribution of the solutions obtained by algorithms cannot be seen from IGD because those solutions cannot be expressed visually. Therefore, it is of some theoretical significance to combine multiple metrics representing the convergence and distribution of the solutions of multi-objective optimization algorithms and propose a comprehensive evaluation method that can be expressed visually. The specific methods are as follows.
Firstly, each metric result of the benchmark functions calculated by the different algorithms is processed logarithmically, as shown in Equation (7):

w = −lg v + u,  u = |[−lg v]_min| + 1                (7)

In Equation (7), v represents the mean value of CM, DM, GD and IGD, respectively; [•] denotes the integer part of •; and w is the logarithmically processed data.

Then, the radar map of each benchmark function is drawn using w_CM, w_DM, w_GD and w_IGD of the different algorithms, as shown in Figures 13-16. The drawing method is as follows. Starting from the origin, the lengths w_CM, w_DM, w_GD and w_IGD form the four half-diagonals. w_CM and w_DM form one diagonal of the quadrilateral of the radar map, because these two metrics directly characterize the convergence degree and the distribution degree of the approximate Pareto front, respectively. w_GD and w_IGD form the other diagonal, because they represent the distance from the approximate Pareto front obtained by an algorithm to the real Pareto front and the distance from the real Pareto front to the approximate front, respectively.

The larger the area of the radar map, the better the comprehensive result of the benchmark function obtained by the algorithm. For the 12 benchmark functions calculated by MOHMICA and the comparison algorithms in this paper, the larger the average area of an algorithm's 12 radar maps, the stronger its comprehensive ability on multi-objective optimization problems. Moreover, given the logarithmic transformation, when the radar map areas of two algorithms on the same benchmark function are close, there is little performance difference between them. The calculation results are shown in Table 8.
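As an illustration, the logarithmic transformation and the radar map area can be sketched as below. This sketch rests on two assumptions inferred from the description rather than stated explicitly in the text: the transformation takes the form w = −lg v + u with u = |[−lg v]_min| + 1 making every axis length positive, and the minimum inside u is taken over the values being compared in one call.

```python
import math

def radar_weights(v_cm, v_dm, v_gd, v_igd):
    """Logarithmic axis lengths: w = -lg(v) + u, where the shift
    u = |floor(min(-lg v))| + 1 makes every length positive.
    (The scope of the min is assumed to be these four values.)"""
    neglog = [-math.log10(v) for v in (v_cm, v_dm, v_gd, v_igd)]
    u = abs(math.floor(min(neglog))) + 1
    return [x + u for x in neglog]

def radar_area(w_cm, w_dm, w_gd, w_igd):
    """Area of the four-axis radar quadrilateral: w_CM and w_DM lie on
    one diagonal, w_GD and w_IGD on the other; the diagonals are
    perpendicular, so area = d1 * d2 / 2."""
    return 0.5 * (w_cm + w_dm) * (w_gd + w_igd)
```

Since smaller metric values yield larger −lg v, better (smaller) CM, DM, GD and IGD all enlarge the quadrilateral, which is why a larger radar map area indicates a better overall result.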
From the results in Table 8, the average radar map area of MOHMICA is the largest among the algorithms compared in this paper, being at least 14.06% larger than that of any other algorithm. This shows that the comprehensive ability of MOHMICA in calculating the benchmark functions is also the strongest. Meanwhile, among all the algorithms, MOHMICA produces the radar map with the largest area most often.

Conclusions and Future Research
This paper aimed to address the shortcoming of HMICA that it can only solve single-objective optimization problems, and proposed the MOHMICA algorithm. In order to adapt to the characteristics of multi-objective optimization problems, MOHMICA updates the colony allocation strategy during empire creation on the basis of HMICA, and adds an external archiving step.
In order to verify the performance of MOHMICA, this paper calculated 12 common benchmark functions, including 10 bi-objective and 2 tri-objective benchmarks. Then, seven high-quality algorithms were compared with the proposed algorithm using four metrics: CM, DM, GD and IGD. After ranking and performing the Wilcoxon test, the proposed algorithm was found to have certain advantages over the other algorithms on most metrics, but this was not enough to prove that it has an obvious advantage on every function. Therefore, a new comprehensive evaluation method called the "radar map method" was proposed as another contribution of this paper, which evaluates the comprehensive ability, covering both the convergence and the distribution of the approximate Pareto fronts obtained by different algorithms. The coordinate axes of the radar map are CM, DM, GD and IGD. After evaluating the algorithms compared with MOHMICA using the radar map method, the comprehensive ability of MOHMICA was found to be the best among all the algorithms.
For future research, three directions are recommended for improvement. First, in order to obtain a better Pareto front distribution than the algorithm proposed in this paper when solving optimization problems with more than two objective functions, the external archiving strategy may need to be further improved. Second, in order to reduce the time consumption and complexity of MOHMICA, the operators in some of its steps may need to be replaced with simpler ones. Lastly, the application field needs to be considered. Applying MOHMICA to real-world problems, including vehicle routing, industrial production management and production process scheduling optimization, is also important to explore in future research.