Article

A Two-Archive Many-Objective Optimization Algorithm Based on D-Domination and Decomposition

1 School of Computer Science, Shaanxi Normal University, Xi’an 710119, China
2 Fujian Provincial Key Laboratory of Big Data Mining and Applications, Fujian University of Technology, Fuzhou 350011, China
* Author to whom correspondence should be addressed.
Algorithms 2022, 15(11), 392; https://doi.org/10.3390/a15110392
Submission received: 11 September 2022 / Revised: 11 October 2022 / Accepted: 14 October 2022 / Published: 24 October 2022
(This article belongs to the Section Evolutionary Algorithms and Machine Learning)

Abstract

Decomposition-based evolutionary algorithms are popular for solving multi-objective optimization problems. They use weight vectors and aggregation functions to maintain convergence and diversity. However, it is hard to balance diversity and convergence in a high-dimensional objective space. In order to discriminate among solutions and balance convergence and diversity in high-dimensional objective spaces, a two-archive many-objective optimization algorithm based on D-dominance and decomposition (Two Arch-D) is proposed. In Two Arch-D, the D-dominance method, together with an adaptive parameter-adjustment strategy, applies selection pressure to the population to identify better solutions. A two-archive strategy then balances convergence and diversity: after the solutions in the convergence archive are classified, an improved Tchebycheff function is used to evaluate the solution set and retain the better solutions, while the diversity archive maintains diversity by keeping any two solutions as far apart and as different as possible. Finally, Two Arch-D is compared with seven other multi-objective evolutionary algorithms on 45 many-objective test problems (with 5, 10 and 15 objectives). The good performance of the algorithm is verified by the description and analysis of the experimental results.

1. Introduction

Many-objective optimization problems (MaOPs) are widely encountered in real life. Generally speaking, a MaOP can be described mathematically as follows [1]:
\[
\min F(x) = (f_1(x), f_2(x), \ldots, f_m(x)), \quad \text{s.t.}\ x \in \Omega \tag{1}
\]
where x = (x_1, x_2, …, x_n) is an n-dimensional decision vector, Ω is the n-dimensional decision space and F: Ω → R^m consists of m conflicting objective functions. Some important concepts follow. Given two solutions x_A, x_B ∈ Ω, x_A is said to Pareto-dominate x_B, denoted x_A ≺ x_B, if and only if f_i(x_A) ≤ f_i(x_B) for all i = 1, …, m and f_j(x_A) < f_j(x_B) for at least one j ∈ {1, …, m}. A solution x* ∈ Ω is called Pareto optimal if and only if there is no x ∈ Ω such that x ≺ x*.
At present, MaOPs arise widely in network optimization [2,3], power systems [4], chemical processes and structure optimization [5]. Consequently, many many-objective evolutionary algorithms have been proposed in recent years to solve MaOPs. In 2017, Delice et al. proposed a particle swarm optimization algorithm [6] based on negative knowledge, which can search the solution space effectively. Khajehzadeh et al. combined an adaptive gravitational search algorithm with pattern search [7] to improve the local search ability of the algorithm. Reference [8] proposed a firefly algorithm based on opposition-based learning, which uses opposition concepts to generate the initial population and update positions.
With the development of optimization technology, many-objective optimization algorithms are increasingly used to solve MaOPs [9,10,11]. The larger the number of objectives, the greater the proportion of non-dominated solutions among the candidate solutions, which weakens the selection pressure toward the Pareto front (PF). Maintaining population diversity in a high-dimensional objective space is also a challenging task [12]. To address these problems, scholars have proposed many algorithms, which can be divided into the following three types according to their methods.
The first type is indicator-based algorithms. For example, an optimization algorithm [13] based on a new enhanced inverted generational distance indicator was proposed, which automatically adjusts the reference points according to the contribution of the new indicator, and a many-objective optimization algorithm [14] based on the IGD indicator and nadir-point estimation uses IGD to select solutions with good convergence and diversity. These methods use indicators to evaluate the convergence and diversity of solutions and to guide the search process, so the final solution set depends mainly on the characteristics of the chosen indicator, such as the hypervolume (HV) indicator [15], the R2 indicator [16], the I_{ε+} indicator [17] and the inverted generational distance (IGD). HV is one of the most commonly used indicators [18,19], but it requires a large amount of computation, generally exponential in the number of objectives. Evaluating IGD requires the true Pareto front, which is unknown in real-life problems.
The second category is based on the Pareto dominance relationship, e.g., NSGA-II/SDR [20], GrEA [21], VaEA [22] and B-NSGA-III [23], which use improved Pareto dominance to select solutions. The larger the number of objectives, the larger the number of mutually non-dominated solutions [24]. Pareto-dominance-based methods then rely mainly on diversity measures to distinguish solutions, so the population loses selection pressure and diversity cannot be ensured. To address these issues, more effort should be devoted to designing diversity maintenance mechanisms.
The third type is based on decomposition. The main idea is to transform a MaOP into a set of subproblems by a decomposition method and to optimize the subproblems in a collaborative way using evolutionary search. The most representative algorithm is MOEA/D, proposed in 2007 [25]. Many similar algorithms have since emerged, such as MOEA/DD [26], MOEA/AD [10], MOEA/D-SOM [27], MOEA/D-AM2M [28], hpaEA [29] and ar-MOEA [30]. These algorithms have gained wide popularity due to their good performance. The decomposition-based method uses the objective values of the subproblems to update the whole population, which reduces the computational cost. However, it relies on a set of weight vectors to decompose the MaOP: the weight vectors guide the evolution of the subproblems, so the performance of the algorithm depends strongly on them.
To better balance the convergence and diversity of the population and to reduce the influence of the weight vectors on algorithm performance, this paper combines the “two-archive” algorithm with decomposition methods. Its main contributions can be summarized as follows.
  • The D-dominance method, together with an adaptive parameter-adjustment strategy, is used to apply selection pressure to the population and identify better solutions. Meanwhile, the dominating and dominated regions can be adjusted through the parameter β, which makes D-dominance more flexible when dealing with MaOPs;
  • A two-archive strategy is used to balance the convergence and diversity of solutions, yielding a set of solutions with good convergence.
The rest of the article is structured as follows: Section 2 describes the algorithm and its methods in detail; Section 3 presents the experimental results together with the related analysis and discussion; finally, conclusions are drawn in Section 4.

2. Materials and Methods

2.1. D-Dominance Methods

Penalty-based Boundary Intersection (PBI) is one of the most common decomposition methods; it is defined as follows:
\[
\min\ g^{pbi}(x \mid \lambda, z^*) = d_1 + \theta d_2 \tag{2}
\]
\[
d_1(z^*, \lambda, x) = \frac{\left\| (F(x) - z^*)^{\mathrm{T}} \lambda \right\|}{\| \lambda \|} \tag{3}
\]
\[
d_2(z^*, \lambda, x, d_1) = \left\| F(x) - \left( z^* + d_1 \frac{\lambda}{\| \lambda \|} \right) \right\| \tag{4}
\]
where x ∈ Ω and θ > 0 is a preset penalty parameter, usually set to 0.5 here. d_1 is the length of the projection of F(x) onto the direction λ, and d_2 is the perpendicular distance from F(x) to that direction. The farther F(x) lies from λ, the greater the penalty value d_2, which constrains the algorithm to generate solutions along the direction of the weight vector. Suppose y is the projection of F(x) onto the line L, as shown in Figure 1; then the distance from z* to y is d_1, and the distance from F(x) to L is d_2.
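As a concrete illustration, the PBI quantities of Formulas (2)–(4) can be computed as in the following minimal Python/NumPy sketch (the function and variable names are ours, not from the paper):

```python
import numpy as np

def pbi(fx, lam, z_star, theta=0.5):
    """Penalty-based Boundary Intersection value of a solution.

    fx     : objective vector F(x)
    lam    : weight/direction vector lambda
    z_star : ideal (reference) point z*
    theta  : penalty parameter (0.5, as in the text above)
    """
    diff = fx - z_star
    lam_unit = lam / np.linalg.norm(lam)
    d1 = abs(np.dot(diff, lam_unit))            # projection length along lambda
    d2 = np.linalg.norm(diff - d1 * lam_unit)   # perpendicular distance to the line
    return d1 + theta * d2, d1, d2
```

A larger θ penalizes deviation from the weight-vector direction more strongly, which trades convergence pressure for adherence to the decomposition direction.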
Chen and Liu established a new relationship between decomposition and dominance through PBI decomposition, namely D-dominance [31]. Given the definitions of d_1 and d_2 in the PBI decomposition, a parameter β ∈ (0, π/2) and a unit decomposition vector v, D-dominance is defined as follows.
If d_1(F(x)) + d_2(F(x) − F(y)) cot(β) < d_1(F(y)), then F(y) is said to be D-dominated by F(x), written F(x) ≺_D F(y).
Otherwise, if d_1(F(y)) + d_2(F(y) − F(x)) cot(β) < d_1(F(x)), then F(x) is D-dominated by F(y), written F(y) ≺_D F(x).
Otherwise, F(x) and F(y) are called mutually non-D-dominated. Here d_1(·) and d_2(·) are calculated by Equations (3) and (4). We use d_1 to evaluate the convergence of x and d_2 to measure population diversity; we want d_1 to be as small as possible, while d_2 is increased for diversity. If F(x) ≺_D F(y), then F(x) deviates from the unit vector by a larger angle than F(y), while d_1(F(y)) is greater than d_1(F(x)) in distance; conversely, if F(y) ≺_D F(x), then F(y) deviates by the larger angle and d_1(F(x)) is greater than d_1(F(y)).
Figure 2 shows an example of D-dominance. In the figure there are three pairs of solutions, (x, x_1), (y, y_1) and (q, q_1), and two decomposition vectors, V and V_1, where V_1 is a translated copy of V. To judge whether x is D-dominated by y, translate V to V_1 so that it passes through x and take the β projection; that is, the PBI-β decomposition value of y with respect to V_1 is y_1. The figure shows intuitively that x is greater than the projection value y_1; that is, x is D-dominated by y. To determine whether q is D-dominated by x, one only needs to translate the decomposition vector V so that it passes through q and find the β projection value, i.e., x_1; however, x_1 is less than q. At the same time, the projection q_1 of q onto V_1 is intuitively greater than x. Therefore, q and x are mutually non-D-dominated.
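The pairwise D-dominance test can be sketched as follows (our own Python rendering, under the simplifying assumption that the ideal point is the origin, so d_1 reduces to the projection length onto the unit vector v):

```python
import numpy as np

def d_dominates(fx, fy, v, beta):
    """True if F(x) D-dominates F(y), i.e.
    d1(F(x)) + d2(F(x) - F(y)) * cot(beta) < d1(F(y))."""
    v = v / np.linalg.norm(v)
    d1x = abs(np.dot(fx, v))                          # projection of F(x) along v
    d1y = abs(np.dot(fy, v))                          # projection of F(y) along v
    diff = fx - fy
    d2 = np.linalg.norm(diff - np.dot(diff, v) * v)   # perpendicular component
    return d1x + d2 / np.tan(beta) < d1y
```

As β approaches π/2, cot(β) tends to 0 and the test degenerates to comparing projection lengths; smaller β makes cot(β) larger and so shrinks the dominated region.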
The D-dominance method reduces the dependence on the decomposition vectors and enables each individual to be updated around itself. At the same time, the D-dominated region can be adjusted flexibly through the parameter β. In order to set β reasonably, we use Formula (5) to adjust it automatically:
\[
\beta = \frac{\pi}{1.5} \left( 1 - \frac{1}{1 + \exp\left( -3 \cdot \frac{currentgen}{maxgen} \right)} \right) \tag{5}
\]
where currentgen is the current generation and maxgen is the maximal generation. β starts at its largest value and decreases monotonically as the generations proceed. The larger β is, the higher the selection pressure and the fewer the non-D-dominated solutions. This schedule is designed to exert more selection pressure on the population in the early stage of evolution, bringing it rapidly closer to the PF, and then to reduce the pressure gradually so that the population spreads better along the PF. Compared with Pareto dominance, D-dominance has adjustable dominating and dominated regions, which makes it more flexible in dealing with MaOPs.
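The schedule of Formula (5) can be sketched as below. Note that the sign inside the exponential is reconstructed (an assumption on our part) so that β decreases monotonically, as the text describes:

```python
import math

def adaptive_beta(current_gen, max_gen):
    """Adaptive beta of Formula (5): starts at pi/3 when current_gen = 0 and
    decays monotonically, so selection pressure is highest early in the run."""
    return math.pi * (1 - 1 / (1 + math.exp(-3 * current_gen / max_gen))) / 1.5
```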

2.2. Two Archives Strategy

The two-archive algorithm is a multi-objective evolutionary algorithm that balances convergence and diversity using a diversity archive (DA) and a convergence archive (CA). Non-dominated solutions and their descendants are added to the CA, while dominated solutions and their descendants are added to the DA. Solutions that exceed the CA and DA size limits are deleted. However, as the objective dimension increases, the size of the CA grows enormously, leaving little space for the DA. When the total size of both archives overflows, a removal strategy must be applied.
First, the solutions in the CA are classified by a set of weight vectors (γ_1, γ_2, …, γ_N) generated from the initial population. The purpose is to obtain a uniformly distributed solution set and thereby maintain diversity. The following formulas are used to classify the solutions in the CA.
\[
CA_i = \left\{ x \;\middle|\; x \in CA,\ \Delta(F(x), \gamma_i) = \max_{1 \le j \le N} \Delta(F(x), \gamma_j) \right\} \tag{6}
\]
\[
\Delta(F(x), \gamma_i) = \frac{\gamma_i (F(x) - Z)^{\mathrm{T}}}{\| \gamma_i \| \, \| F(x) - Z \|}, \quad i = 1, 2, \ldots, N \tag{7}
\]
where Z = (Z_1, Z_2, …, Z_m) is a reference point and Δ(F(x), γ_i) is the cosine of the angle between F(x) − Z and γ_i.
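Classification by Formulas (6) and (7) amounts to assigning each CA member to the weight vector of maximum cosine; a minimal sketch (the names are ours):

```python
import numpy as np

def classify_ca(ca_objs, gammas, z):
    """Group CA members by the weight vector with the largest cosine
    similarity (Formulas (6)-(7)). Returns one index list per weight vector.

    ca_objs : (n, m) array of objective vectors F(x)
    gammas  : (N, m) array of weight vectors
    z       : reference point Z
    """
    groups = [[] for _ in range(len(gammas))]
    for idx, f in enumerate(ca_objs):
        d = f - z
        cos = gammas @ d / (np.linalg.norm(gammas, axis=1) * np.linalg.norm(d))
        groups[int(np.argmax(cos))].append(idx)
    return groups
```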
In the DA, diversity is maintained by selecting solutions that are as far apart and as different from one another as possible. The distance value of a solution is calculated by Formula (8):
\[
d(x) = \min \left\{ \left\| F(x) - F(y) \right\|_2 \cdot \left\| F(x) - F(y) \right\| \;\middle|\; y \in POP,\ y \neq x \right\} \tag{8}
\]
where ‖F(x) − F(y)‖_2 and ‖F(x) − F(y)‖ represent the distance and the difference between the two solutions, respectively.
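One plausible reading of Formula (8) can be sketched as follows; note that the second norm is left unspecified in the text, so taking it to be the max-norm "difference" is our assumption:

```python
import numpy as np

def da_distances(objs):
    """d(x) per one reading of Formula (8): the minimum over y != x of the
    product of the Euclidean distance and the max-norm difference between
    F(x) and F(y). The DA keeps solutions with large d(x), i.e. the most
    isolated ones."""
    n = len(objs)
    d = np.full(n, np.inf)
    for i in range(n):
        for j in range(n):
            if i != j:
                diff = objs[i] - objs[j]
                d[i] = min(d[i], np.linalg.norm(diff) * np.max(np.abs(diff)))
    return d
```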

2.3. Updating Strategy

In order to update the population, the algorithm adopts non-D-dominated sorting and the Tchebycheff aggregation function to preserve solutions with better convergence and distribution. To maintain the distribution of the population, a set of weight vectors (γ_1, γ_2, …, γ_m) is given. According to the cosine of the angle between each solution and the weight vectors, every solution in the CA is assigned to the group of its nearest weight vector, and only one solution is retained per group. First, the cosine of the angle between each solution in the CA and each given weight vector is calculated according to Formulas (6) and (7); the larger the cosine, the smaller the angle. After all assignments are completed, the solutions in the CA are divided into m groups. For each weight vector, if there is exactly one corresponding solution, that solution is kept directly. If there is more than one, the solutions in the group are evaluated with the improved Tchebycheff function, the D-dominance method is used to increase the selection pressure and, finally, the best solution in each group is retained; that is, the solution with the smallest Tchebycheff function value is selected according to Formula (9). If a weight vector has no corresponding solution, the selection range is extended to the whole CA, and the solution in the CA with the smallest Tchebycheff value is retained.
\[
\underset{x \in \Omega}{\operatorname{minimize}}\ g^{te}(x \mid \lambda^i, z) = \max_{1 \le j \le m} \left\{ \left| f_j(x) - z_j \right| / \lambda_j^i \right\} \tag{9}
\]
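The Tchebycheff value of Formula (9) is a simple maximum over weighted objective deviations; a minimal sketch (the eps guard against zero weight components is our addition, not the paper's):

```python
import numpy as np

def tchebycheff(fx, lam, z, eps=1e-6):
    """g^te(x | lambda^i, z) = max_j |f_j(x) - z_j| / lambda_j^i (Formula (9)).
    eps guards against zero weight components (an assumption of this sketch)."""
    return float(np.max(np.abs(fx - z) / np.maximum(lam, eps)))
```

Within each group, the solution with the smallest g^te value is the one that would be retained.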

2.4. The Proposed Algorithm

In order to better discriminate solutions and better balance convergence and diversity in a high-dimensional objective space, we propose Two Arch-D. Benefiting from D-dominance, the decomposition vectors in Two Arch-D are free vectors: subpopulation search and update no longer rely on the decomposition vector but are centered on the individuals themselves, which greatly reduces each individual's dependence on its associated decomposition vector during evolution.
This section describes in detail how the proposed D-dominance method and update strategy work within the two-archive framework. The main ideas are as follows.
First, K unit vectors are used to divide the objective space R_+^m into K sub-regions, each satisfying Formula (10); that is, u belongs to Ω_k if and only if the angle between v_k and u is the smallest among all K direction vectors. Problem (1) is thereby decomposed into multiple optimization subproblems, which are solved in a cooperative manner. K subpopulations are maintained in each generation, and each subpopulation always contains multiple feasible solutions. Finally, these feasible solutions are output to the population EPOP.
\[
\Omega_k = \left\{ u \in \mathbb{R}_+^m \;\middle|\; \langle u, v_k \rangle \le \langle u, v_j \rangle,\ \forall j = 1, 2, \ldots, K \right\} \tag{10}
\]
In order to obtain a well-converged solution set, Formulas (6) and (7) are used to classify the CA, the improved Tchebycheff function is used to assess the solutions, and the solution with the smallest Tchebycheff value is retained. To keep the solutions diverse, Formula (8) is used to select into the diversity archive solutions whose pairwise distances are large. Algorithm 1 gives the framework of the algorithm, and the pseudo-code for determining the CA is given in Algorithm 2.
Algorithm 1: Proposed algorithm framework
Input:
  MaOP(1).
  A set of weight vectors (γ_1, γ_2, …, γ_m).
  K: number of subproblems.
  MaxGen: the maximal generation.
  Genetic operators and their associated parameters.
Output: EPOP
Initialization: unit direction vectors and population.
while the stopping criterion is not met do
  for k = 1 to K do
    for x ∈ P_k do
      Pick a solution y at random from P_k;
      Generate a new solution z by the genetic operators;
      R = R ∪ {z};
    end for
  end for
  Collect all solutions in ∪_{k=1}^{K} P_k.
  Determine the convergence archive CA.
  Determine the diversity archive DA: select solutions using Formula (8).
  EPOP = CA ∪ DA.
end while
Algorithm 2: Determine the convergence archive CA
Input:
  CA: a set of solutions.
  Weight vectors (γ_1, γ_2, …, γ_m).
Output: CA_i: CA_1, CA_2, …, CA_m.
Classify the solutions in CA according to Formulas (6) and (7).
Use non-D-dominated sorting and the Tchebycheff aggregation function to preserve solutions with better convergence and distribution.
for i = 1, 2, …, m do
  if CA_i ≠ ∅ then
    Evaluate the solutions in CA_i by Formula (9) and keep only the best solution in each group.
  end if
end for

3. Results and Discussion

3.1. Test Problem

In this part, Two Arch-D is compared with seven other MOEAs (NSGAIII, MOEA/DD, KnEA, RVEA, hpaEA, VaEA and NSGAIISDR) on the 15 test functions of the CEC2018 MaOP competition to verify its effectiveness. NSGAIII [32] selects the Pareto solution set using a reference-point-based method, whereas NSGA-II screens by crowding distance. MOEA/DD [26] combines dominance and decomposition methods, using their respective advantages to balance convergence and diversity. The KnEA algorithm [33] does not need an additional diversity maintenance mechanism, which reduces its computational complexity. RVEA [34] is a many-objective optimization algorithm guided by reference vectors; in a high-dimensional objective space, it employs an angle-penalized distance to select solutions. hpaEA [29] is an evolutionary algorithm based on adjacent hyperplanes that selects non-dominated solutions with a newly proposed selection strategy. In VaEA [22], to maintain the diversity and convergence of solutions, a maximum-vector-angle-first strategy is used to select solutions, and worse solutions are replaced by others. NSGA-II/SDR [20] is an improved many-objective optimization algorithm based on NSGA-II that uses a new dominance relation based on the angles of candidate solutions to select convergent solutions.
In order to test the ability of the Two Arch-D algorithm to solve various MaOPs, 15 benchmark functions with different characteristics from the CEC2018 MaOP competition [35] were selected for testing. These benchmark functions represent a variety of real-world scenarios well, with properties such as multimodal, disconnected and degenerate fronts. The main features, expressions and PF characteristics of the 15 test functions are described in the literature [36]. To test the performance of the Two Arch-D algorithm, the number of objectives of each test function is set to 5, 10 and 15, respectively, giving 45 problem instances in total.

3.2. Parameter Setting

The experiments are run on PlatEMO (http://bimk.ahu.edu.cn/12957/list.htm, accessed on 10 September 2022) [37], an evolutionary multi-objective optimization platform based on MATLAB. Each algorithm is run independently 20 times on each test function. The parameters of the Two Arch-D algorithm are set as follows: the external population is twice as large as N, and the maximum number of function evaluations is 100,000. Other parameter settings are the same as those of the CEC2018 MaOP competition [35]. The population size N and the maximum number of function evaluations of the NSGAIII, MOEA/DD, KnEA, RVEA, hpaEA, VaEA and NSGAIISDR comparison algorithms are the same as those of Two Arch-D, and the default parameters of the PlatEMO platform are used otherwise.

3.3. Performance Indicator

In the experimental part, the IGD [38] indicator is used to evaluate the algorithms. It assesses both the convergence and the diversity of an algorithm by calculating the average of the minimum distances from each point on the true Pareto front to the solution set obtained by the algorithm: the smaller the IGD, the closer the population is to the Pareto front and the better the algorithm performs. Computing IGD requires the true Pareto front, which is often unknown in real optimization problems; in the experiments, we uniformly sampled 10,000 points on the Pareto front of each test problem to calculate the IGD. In addition, the average IGD values of the algorithms were compared using the Wilcoxon rank-sum test [38] at a significance level of 0.05. The IGD is calculated by Formula (11):
\[
IGD(P^*, P) = \frac{\sum_{x^* \in P^*} d(x^*, P)}{|P^*|} \tag{11}
\]
where P is the solution set obtained by the algorithm, P* is a set of points sampled from the PF and d(x*, P) is the minimum Euclidean distance between the point x* in the reference set P* and the points of the solution set P. The smaller the average IGD, the closer P is to the true PF and the more evenly it is distributed over the whole PF. Therefore, the IGD indicator can measure both the convergence and the diversity of the solutions to a certain extent.
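Formula (11) can be computed directly; a sketch assuming P* and P are given as arrays of objective vectors:

```python
import numpy as np

def igd(p_star, p):
    """IGD(P*, P): mean over reference points of the minimum Euclidean
    distance to the obtained solution set (Formula (11)). Smaller is better."""
    p_star = np.asarray(p_star, dtype=float)
    p = np.asarray(p, dtype=float)
    # pairwise distances: (|P*|, |P|), then the min over P for each reference point
    dists = np.linalg.norm(p_star[:, None, :] - p[None, :, :], axis=2)
    return float(dists.min(axis=1).mean())
```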

3.4. Comparison and Analysis of Results

Table 1 and Table 2 show the means and standard deviations of the IGD values obtained by Two Arch-D and the seven comparison algorithms over 20 independent runs on the 45 test problems (with 5, 10 and 15 objectives). In the tables, the best result for each test problem is highlighted in bold; “+” indicates that the comparison algorithm performs better than Two Arch-D, “−” that it performs worse and “=” that the two algorithms perform similarly.
From the experimental results, it can be seen that on the first 15 test problems (5 objectives), the average IGD values obtained by Two Arch-D on MaF1, MaF3, MaF4, MaF6, MaF7, MaF9 and MaF12 are significantly better than those of the compared algorithms. On the 10-objective problems, the average IGD values obtained by Two Arch-D on the six problems MaF3, MaF5, MaF6 and MaF10–MaF12 outperform those of the compared algorithms. On the last 15 test problems (15 objectives), the average IGD values obtained by Two Arch-D on MaF2 and MaF5 outperform those of the other comparison algorithms.
Among the 45 problems, Two Arch-D is statistically better than the other compared algorithms on 15 problems, which indicates that the Two Arch-D algorithm performs well on IGD. Two Arch-D performed better than NSGAIII, MOEA/DD, KnEA, RVEA, hpaEA, VaEA and NSGAIISDR on 23, 23, 23, 26, 17, 14 and 18 problems, respectively, while these algorithms performed similarly to Two Arch-D on two, two, three, two, two, four and zero problems, respectively; on 20, 17, 19, 17, 26, 27 and 27 problems, respectively, the average IGD of Two Arch-D was smaller than that of the corresponding compared algorithm. In summary, the experimental data show that the Two Arch-D algorithm is superior to the NSGAIII, MOEA/DD, KnEA and RVEA algorithms on most test problems, which also demonstrates its excellent performance in balancing convergence and diversity.
The test problem MaF3 clearly reflects the convergence of an algorithm, so Figure 3, Figure 4 and Figure 5 show how the IGD value obtained by Two Arch-D on the MaF3 problem (5, 10 and 15 objectives) varies with the number of evaluations. It can be seen that the proposed algorithm converges well.

3.5. Value of Parameter β

In order to make the comparison accurate, the population size, objective dimension and number of function evaluations for each test problem are kept the same, and only the value of the parameter β, whose range is (0, π/2), is changed. To determine the optimal value of β, the performance of the algorithm under different values of β is tested on the 15 test problems. Each test problem was run independently 20 times, and the average IGD values obtained are shown in Table 3, Table 4 and Table 5. In general, different values of β affect the algorithm's performance on most problems. For MaF1–MaF5, the algorithm performs well when β is 0.2, 0.6, 0.8, 0.9 or 1.2. For MaF6–MaF10, the average IGD is small when β is 0.2 or 1.0. For MaF11–MaF15, different values of β had little effect on the algorithm and the average IGD changed little; the algorithm was especially stable on MaF11, MaF12 and MaF13. Overall, across the 15 test problems, β values of 0.2, 0.3 and 1.0 most often gave the best results.

3.6. Discussion

High-dimensional many-objective optimization problems are closely related to everyday life and production and are widely used in various fields, such as power scheduling, network optimization, chemical production, structural design and path planning, so research on high-dimensional many-objective optimization algorithms has important practical significance. For example, the proposed algorithm can be applied to the operation and scheduling of high-speed railway trains. Owing to the continuous growth of passenger traffic and the increasing demand for high-speed railway service quality, it is necessary to optimize the operation-adjustment method of high-speed railway trains. Taking the punctuality rate of train operation, passenger comfort and delay time as optimization objectives, a multi-objective optimization model can be established to minimize the total number of train delays, and the proposed algorithm can be applied to provide dispatchers with a fast and reasonable train operation adjustment scheme.

4. Conclusions

This paper uses a CA and a DA to balance diversity and convergence. On this basis, the adaptive D-dominance method and the improved Tchebycheff function are used to preserve convergent solutions, and a set of uniformly distributed solutions is obtained. The adaptive D-dominance method works in two ways: on the one hand, the β value is adjusted automatically during algorithm execution, which makes it more reasonable; on the other hand, the D-dominance method is implemented on the basis of PBI decomposition, making the update of each individual in the population centered on itself and reducing the dependence on the decomposition vectors. At the same time, the improved Tchebycheff function exerts selection pressure on the population to select solutions with better convergence. The goal of better identifying solutions and balancing convergence and diversity in a high-dimensional objective space is thus achieved, and the effectiveness of the proposed method is verified by a series of comparative experiments. The main future work is to apply the proposed algorithm to the operation and scheduling of high-speed railway trains discussed in Section 3.6.

Author Contributions

Conceptualization, N.Y., X.X. and C.D.; methodology, N.Y., X.X. and C.D.; software, N.Y. and X.X.; validation, N.Y.; formal analysis, N.Y. and C.D.; data curation, N.Y., X.X. and C.D.; writing—original draft preparation, N.Y.; writing—review and editing, N.Y. and C.D. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (nos. 12271326, 61806120, 61502290, 61401263, 61672334 and 61673251), the China Postdoctoral Science Foundation (no. 2015M582606), the Industrial Research Project of Science and Technology in Shaanxi Province (nos. 2015GY016 and 2017JQ6063), the Fundamental Research Fund for the Central Universities (no. GK202003071) and the Natural Science Basic Research Plan in Shaanxi Province of China (no. 2022JM-354).

Data Availability Statement

The data used to support the findings of this study are available from the corresponding author upon request.

Acknowledgments

Xinyu Liang, Zhu Liu, Cheng Peng and Zhibin Zhou provided assistance with this work, and the authors thank them for their help.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Coello Coello, C.A.; Lamont, G.B.; Van Veldhuizen, D.A. (Eds.) Basic Concepts; Springer: Boston, MA, USA, 2007; pp. 1–60.
  2. Koessler, E.; Almomani, A. Hybrid particle swarm optimization and pattern search algorithm. Optim. Eng. 2021, 22, 1539–1555.
  3. Cherki, I.; Chaker, A.; Djidar, Z.; Khalfallah, N.; Benzergua, F. A Sequential Hybridization of Genetic Algorithm and Particle Swarm Optimization for the Optimal Reactive Power Flow. Sustainability 2019, 11, 3862.
  4. Eslami, M.; Shareef, H.; Mohamed, A.; Khajehzadeh, M. Damping controller design for power system oscillations using hybrid GA-SQP. Int. Rev. Electr. Eng. 2011, 6, 888–896.
  5. Kaveh, A.; Ghazaan, M.; Abadi, A.A. An improved water strider algorithm for optimal design of skeletal structures. Period. Polytech. Civ. Eng. 2020, 64, 1284–1305.
  6. Delice, A.Y.; Aydoğan, E.K.; Özcan, U.; İlkay, M.S. A modified particle swarm optimization algorithm to mixed-model two-sided assembly line balancing. J. Intell. Manuf. 2017, 28, 23–36.
  7. Mohammad, A.K.; Raihan, T.M.; Mahdiyeh, E. Multi-objective optimisation of retaining walls using hybrid adaptive gravitational search algorithm. Civ. Eng. Environ. Syst. 2014, 31, 229–242.
  8. Khajehzadeh, M.; Taha, M.R.; Eslami, M. Opposition-based firefly algorithm for earth slope stability evaluation. China Ocean Eng. 2014, 28, 713–724.
  9. Dong, N.; Dai, C. An improvement decomposition-based multi-objective evolutionary algorithm using multi-search strategy. Knowl.-Based Syst. 2019, 163, 572–580.
  10. Wu, M.; Li, K.; Kwong, S.; Zhang, Q. Evolutionary many-objective optimization based on adversarial decomposition. IEEE Trans. Cybern. 2020, 50, 753–764.
  11. Liu, S.; Lin, Q.; Tan, K.C.; Gong, M.; Coello, C.A.C. A fuzzy decomposition-based multi/many-objective evolutionary algorithm. IEEE Trans. Cybern. 2022, 52, 3495–3509.
  12. Ji, H.; Dai, C. A simplified hypervolume-based evolutionary algorithm for many-objective optimization. Complexity 2020, 2020, 8353154.
  13. Tian, Y.; Cheng, R.; Zhang, X.; Cheng, F.; Jin, Y. An indicator-based multiobjective evolutionary algorithm with reference point adaptation for better versatility. IEEE Trans. Evol. Comput. 2018, 22, 609–622.
  14. Sun, Y.; Yen, G.G.; Yi, Z. IGD indicator-based evolutionary algorithm for many-objective optimization problems. IEEE Trans. Evol. Comput. 2019, 23, 173–187.
  15. Bader, J.; Zitzler, E. HypE: An algorithm for fast hypervolume-based many-objective optimization. Evol. Comput. 2011, 19, 45–76.
  16. Shang, K.; Ishibuchi, H. A new hypervolume-based evolutionary algorithm for many-objective optimization. IEEE Trans. Evol. Comput. 2020, 24, 839–852.
  17. Liang, Z.; Luo, T.; Hu, K.; Ma, X.; Zhu, Z. An indicator-based many-objective evolutionary algorithm with boundary protection. IEEE Trans. Cybern. 2021, 51, 4553–4566.
  18. While, L.; Hingston, P.; Barone, L.; Huband, S. A faster algorithm for calculating hypervolume. IEEE Trans. Evol. Comput. 2006, 10, 29–38.
  19. Menchaca-Mendez, A.; Coello, C.A.C. A new selection mechanism based on hypervolume and its locality property. In Proceedings of the 2013 IEEE Congress on Evolutionary Computation, Cancun, Mexico, 20–23 June 2013; pp. 924–931.
  20. Tian, Y.; Cheng, R.; Zhang, X.; Su, Y.; Jin, Y. A strengthened dominance relation considering convergence and diversity for evolutionary many-objective optimization. IEEE Trans. Evol. Comput. 2019, 23, 331–345.
  21. Yang, S.; Li, M.; Liu, X.; Zheng, J. A grid-based evolutionary algorithm for many-objective optimization. IEEE Trans. Evol. Comput. 2013, 17, 721–736.
  22. Xiang, Y.; Zhou, Y.; Li, M.; Chen, Z. A vector angle-based evolutionary algorithm for unconstrained many-objective optimization. IEEE Trans. Evol. Comput. 2017, 21, 131–152.
  23. Seada, H.; Abouhawwash, M.; Deb, K. Multiphase balance of diversity and convergence in multiobjective optimization. IEEE Trans. Evol. Comput. 2019, 23, 503–513.
  24. Xiang, Y.; Zhou, Y.; Yang, X.; Huang, H. A many-objective evolutionary algorithm with Pareto-adaptive reference points. IEEE Trans. Evol. Comput. 2020, 24, 99–113.
  25. Zhang, Q.; Li, H. Moea/d: A multiobjective evolutionary algorithm based on decomposition. IEEE Trans. Evol. Comput. 2007, 11, 712–731. [Google Scholar] [CrossRef]
  26. Li, K.; Deb, K.; Zhang, Q.; Kwong, S. An evolutionary many-objective optimization algorithm based on dominance and decomposition. IEEE Trans. Evol. Comput. 2015, 19, 694–716. [Google Scholar] [CrossRef]
  27. Gu, F.; Cheung, Y.-M. Self-organizing map-based weight design for decomposition-based many-objective evolutionary algorithm. IEEE Trans. Evol. Comput. 2018, 22, 211–225. [Google Scholar] [CrossRef]
  28. Liu, H.-L.; Chen, L.; Zhang, Q.; Deb, K. Adaptively allocating search effort in challenging many-objective optimization problems. IEEE Trans. Evol. Comput. 2018, 22, 433–448. [Google Scholar] [CrossRef]
  29. Chen, H.; Tian, Y.; Pedrycz, W.; Wu, G.; Wang, R.; Wang, L. Hyperplane assisted evolutionary algorithm for many-objective optimization problems. IEEE Trans. Cybern. 2020, 50, 3367–3380. [Google Scholar] [CrossRef]
  30. Yi, J.; Bai, J.; He, H.; Peng, J.; Tang, D. ar-moea: A novel preference-based dominance relation for evolutionary multiobjective optimization. IEEE Trans. Evol. Comput. 2019, 23, 788–802. [Google Scholar] [CrossRef]
  31. Chen, L.; Liu, H.; Tan, K.C.; Cheung, Y.; Wang, Y. Evolutionary many-objective algorithm using decomposition-based dominance relationship. IEEE Trans. Cybern. 2019, 49, 4129–4139. [Google Scholar] [CrossRef]
  32. Deb, K.; Jain, H. An evolutionary many-objective optimization algorithm using reference-point-based nondominated sorting approach, part i: Solving problems with box constraints. IEEE Trans. Evol. Comput. 2014, 18, 577–601. [Google Scholar] [CrossRef]
  33. Zhang, X.; Tian, Y.; Jin, Y. A knee point-driven evolutionary algorithm for many-objective optimization. IEEE Trans. Evol. Comput. 2015, 19, 761–776. [Google Scholar] [CrossRef]
  34. Cheng, R.; Jin, Y.; Olhofer, M.; Sendhoff, B. A reference vector guided evolutionary algorithm for many-objective opti-mization. IEEE Trans. Evol. Comput. 2016, 20, 773–791. [Google Scholar] [CrossRef]
  35. Cheng, R.; Li, M.; Tian, Y.; Xiang, X.; Zhang, X.; Yang, S.; Jin, Y.; Yao, X. Benchmark functions for cec’2018 competition on many-objective optimization. Complex Intell. Syst. 2017, 3, 1–22. [Google Scholar] [CrossRef]
  36. Steel, R.G.D.; Torrie, J.H. Principles and Procedures of Statistics. (With Special Reference to the Biological Sciences.); McGraw-Hill Book Company: New York, NY, USA; Toronto, ON, Canada; London, UK, 1960; pp. 207–208. [Google Scholar]
  37. Tian, Y.; Cheng, R.; Zhang, X.; Jin, Y. Platemo: A matlab platform for evolutionary multi-objective optimization [educational forum]. IEEE Comput. Intell. Mag. 2017, 12, 73–87. [Google Scholar] [CrossRef] [Green Version]
  38. Zitzler, E.; Thiele, L.; Laumanns, M.; Fonseca, C.M.; da Fonseca, V.G. Performance assessment of multiobjective optimizers: An analysis and review. IEEE Trans. Evol. Comput. 2003, 7, 117–132. [Google Scholar] [CrossRef]
Figure 1. Penalty-based Boundary Intersection (PBI).
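Figure 1 illustrates the PBI aggregation used by decomposition-based methods such as MOEA/D [25]. As a reference, here is a minimal sketch of the standard PBI scalarizing function (ideal point z*, weight vector w, penalty factor theta); the parameter settings used in this paper are not restated here:

```python
import math

def pbi(f, w, z, theta=5.0):
    """Penalty-based Boundary Intersection scalarization (Zhang & Li [25]).

    f: objective vector of a solution, w: weight vector, z: ideal point.
    Returns d1 + theta * d2, where d1 measures convergence along the
    weight direction and d2 penalizes deviation from that direction.
    """
    diff = [fi - zi for fi, zi in zip(f, z)]
    wnorm = math.sqrt(sum(wi * wi for wi in w))
    # d1: length of the projection of (f - z) onto the weight direction
    d1 = abs(sum(di * wi for di, wi in zip(diff, w))) / wnorm
    # d2: perpendicular distance from (f - z) to the weight direction
    d2 = math.sqrt(sum((di - d1 * wi / wnorm) ** 2 for di, wi in zip(diff, w)))
    return d1 + theta * d2
```

A solution lying exactly on the weight direction incurs no penalty (d2 = 0), so minimizing PBI trades off convergence (d1) against diversity relative to the weight vector (d2) through theta.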
Figure 2. D-dominance example diagram.
Figure 3. Change of the IGD value with the number of evaluations (5 objectives).
Figure 4. Change of the IGD value with the number of evaluations (10 objectives).
Figure 5. Change of the IGD value with the number of evaluations (15 objectives).
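Figures 3–5 plot the IGD indicator [38] against the number of function evaluations. For reference, a minimal sketch of the standard IGD computation (mean distance from each sampled reference-front point to its nearest solution in the approximation set):

```python
import math

def igd(approx, reference):
    """Inverted Generational Distance.

    approx: list of objective vectors produced by the algorithm.
    reference: list of points sampled from the true Pareto front.
    Lower is better; IGD = 0 means every reference point is matched exactly.
    """
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    # For each reference point, find the closest approximation point,
    # then average those minimal distances over the reference set.
    return sum(min(dist(r, p) for p in approx) for r in reference) / len(reference)
```

Because the averaging runs over the reference set, IGD penalizes both poor convergence and gaps in coverage of the front, which is why it is used as the single quality measure in Tables 1 and 2.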
Table 1. IGD mean and standard deviation values of each algorithm.
| Problems | Two Arch-D | NSGAIII | MOEA/DD | KnEA | RVEA | hpaEA | VaEA | NSGAII-SDR |
|---|---|---|---|---|---|---|---|---|
| MaF1-5 | 1.6189×10⁻¹ (8.91×10⁻²) | 2.0558×10⁻¹ (6.77×10⁻³) = | 2.9579×10⁻¹ (4.68×10⁻³) − | 1.7136×10⁻¹ (1.91×10⁻³) = | 2.8832×10⁻¹ (4.10×10⁻²) − | 1.6675×10⁻¹ (7.86×10⁻⁴) = | 1.6959×10⁻¹ (6.82×10⁻⁴) = | 1.6728×10⁻¹ (1.48×10⁻³) − |
| MaF2-5 | 1.0182×10⁻¹ (2.78×10⁻³) | 1.3232×10⁻¹ (2.26×10⁻³) = | 1.3679×10⁻¹ (3.22×10⁻³) = | 1.3553×10⁻¹ (1.50×10⁻³) = | 1.3208×10⁻¹ (1.24×10⁻²) = | 9.4521×10⁻² (1.63×10⁻³) + | 9.0742×10⁻² (1.51×10⁻³) + | 9.5664×10⁻² (1.72×10⁻³) + |
| MaF3-5 | 8.2609×10⁻² (4.18×10⁻³) | 9.0837×10⁻² (3.56×10⁻³) − | 1.1869×10⁻¹ (3.27×10⁻³) − | 1.6911×10⁻¹ (7.46×10⁻²) − | 8.4313×10⁻² (3.70×10⁻³) = | 2.3479×10⁻¹ (4.96×10⁻¹) − | 2.0613×10⁻¹ (1.19×10⁻¹) − | 1.4569×10⁻¹ (1.06×10⁻²) − |
| MaF4-5 | 3.0854×10⁰ (5.97×10²) | 3.4086×10⁰ (2.28×10⁻¹) − | 7.8385×10⁰ (3.24×10⁻¹) − | 3.8983×10⁰ (1.85×10⁻¹) − | 4.9479×10⁰ (1.13×10⁰) − | 3.1267×10⁰ (4.82×10⁻²) − | 3.1139×10⁰ (5.58×10⁻¹) = | 3.2691×10⁰ (1.70×10⁻¹) − |
| MaF5-5 | 2.4984×10⁰ (7.43×10⁻¹) | 2.5687×10⁰ (8.57×10⁻²) − | 6.8677×10⁰ (1.82×10⁻¹) − | 2.7249×10⁰ (4.53×10⁻²) − | 2.6461×10⁰ (8.22×10⁻¹) − | 2.6487×10⁰ (1.25×10⁰) − | 1.7817×10⁰ (2.12×10⁻²) + | 1.3930×10¹ (1.67×10⁰) + |
| MaF6-5 | 7.2501×10⁻³ (4.29×10⁻²) | 8.0726×10⁻² (1.02×10⁻³) − | 7.2735×10⁻² (4.76×10⁻¹) − | 8.0282×10⁻³ (1.16×10⁻³) − | 9.2862×10⁻² (8.64×10⁻²) − | 7.3210×10⁻³ (5.44×10⁻³) = | 7.9108×10⁻³ (1.29×10⁻³) = | 1.0689×10⁻² (1.88×10⁻³) − |
| MaF7-5 | 2.6563×10⁻¹ (1.67×10⁰) | 3.5428×10⁻¹ (4.01×10⁻³) − | 9.8822×10⁻¹ (6.53×10⁻¹) − | 3.2756×10⁻¹ (4.02×10⁻³) − | 6.6489×10⁻¹ (2.14×10⁻¹) − | 2.9005×10⁻¹ (7.52×10⁻²) − | 2.7418×10⁻¹ (4.05×10⁻³) = | 3.1979×10⁻¹ (2.63×10⁻²) − |
| MaF8-5 | 2.6988×10⁻¹ (4.89×10¹) | 2.7666×10⁻¹ (6.39×10⁻³) − | 3.5750×10⁻¹ (1.68×10⁰) − | 2.9077×10⁻¹ (2.83×10⁻²) − | 4.5119×10⁻¹ (7.52×10⁻²) − | 8.0480×10⁻² (1.15×10⁻³) + | 8.5273×10⁻² (1.86×10⁻³) + | 1.0078×10⁻¹ (5.62×10⁻³) + |
| MaF9-5 | 1.0636×10⁻¹ (7.21×10¹) | 6.0092×10⁻¹ (1.52×10⁻¹) − | 2.4862×10⁻¹ (1.30×10⁰) − | 4.3125×10⁻¹ (1.43×10⁻¹) − | 3.8018×10⁻¹ (3.58×10⁻¹) − | 3.0522×10⁻¹ (7.37×10⁻³) − | 4.5541×10⁻¹ (2.53×10⁻¹) − | 1.3954×10⁻¹ (6.56×10⁻³) − |
| MaF10-5 | 2.1531×10⁰ (6.33×10⁻²) | 4.8581×10⁻¹ (3.16×10⁻²) + | 7.8860×10⁻¹ (9.97×10⁻²) + | 5.4787×10⁻¹ (7.68×10⁻³) + | 4.2839×10⁻¹ (2.32×10⁻¹) + | 8.2042×10⁻¹ (2.36×10⁻¹) + | 3.9001×10⁻¹ (1.05×10⁻²) + | 7.0136×10⁻¹ (1.08×10⁻¹) + |
| MaF11-5 | 1.0579×10⁰ (9.68×10⁻²) | 8.3222×10⁻¹ (1.86×10⁻³) + | 4.4351×10⁰ (2.41×10⁻²) − | 6.0541×10⁻¹ (9.28×10⁻³) + | 4.9736×10⁻¹ (4.24×10⁻²) + | 4.6437×10⁻¹ (1.78×10⁻²) + | 3.9941×10⁻¹ (3.90×10⁻³) + | 4.9583×10⁻¹ (4.73×10⁻²) + |
| MaF12-5 | 1.0745×10⁰ (9.24×10⁻²) | 1.1697×10⁰ (3.05×10⁻³) − | 1.3644×10⁰ (1.69×10⁻²) − | 1.2294×10⁰ (5.59×10⁻³) − | 1.2215×10⁰ (2.35×10⁻²) − | 1.2810×10⁰ (8.98×10⁻³) − | 1.2540×10⁰ (6.69×10⁻³) − | 1.1161×10⁰ (9.47×10⁻³) − |
| MaF13-5 | 3.3819×10⁻¹ (1.63×10⁻¹) | 2.9729×10⁻¹ (4.49×10⁻³) + | 2.5779×10⁻¹ (1.96×10⁻¹) + | 2.2398×10⁻¹ (1.03×10⁻²) + | 6.9643×10⁻¹ (9.40×10⁻²) − | 8.3813×10⁻² (5.78×10⁻³) + | 1.3760×10⁻¹ (1.98×10⁻²) + | 1.3864×10⁻¹ (9.91×10⁻³) + |
| MaF14-5 | 9.9542×10¹ (1.26×10²) | 7.9707×10⁻¹ (3.49×10⁻¹) + | 3.8723×10⁻¹ (2.29×10⁰) + | 7.6784×10⁻¹ (4.80×10⁻¹) + | 7.1840×10⁻¹ (3.38×10⁰) + | 2.2656×10⁰ (8.02×10⁻¹) + | 1.9552×10⁰ (1.11×10⁰) + | 4.7653×10⁻¹ (8.70×10⁻²) + |
| MaF15-5 | 2.5200×10¹ (3.40×10⁰) | 1.6373×10⁰ (2.74×10⁻¹) + | 5.8242×10⁰ (6.55×10⁻¹) + | 3.3027×10⁰ (8.56×10⁻¹) + | 1.3832×10⁰ (1.05×10⁰) + | 1.1633×10⁰ (5.57×10⁻²) + | 1.1731×10⁰ (3.97×10⁻²) + | 7.9347×10⁻¹ (2.48×10⁻²) + |
| MaF1-10 | 3.8847×10⁻¹ (1.15×10⁻¹) | 3.2801×10⁻¹ (5.12×10⁻³) + | 4.7539×10⁻¹ (1.96×10⁻²) − | 2.4884×10⁻¹ (2.25×10⁻³) + | 6.7891×10⁻¹ (6.34×10⁻²) − | 2.3295×10⁻¹ (1.25×10⁻³) + | 2.3898×10⁻¹ (1.56×10⁻³) + | 2.3036×10⁻¹ (1.88×10⁻³) + |
| MaF2-10 | 2.6095×10⁻¹ (2.28×10⁻²) | 2.1939×10⁻¹ (2.81×10⁻²) + | 2.7208×10⁻¹ (3.81×10⁻²) − | 1.6838×10⁻¹ (3.12×10⁻³) + | 4.5747×10⁻¹ (6.32×10⁻²) − | 3.3075×10⁻¹ (1.56×10⁻²) − | 1.9052×10⁻¹ (3.47×10⁻³) + | 2.3787×10⁻¹ (2.22×10⁻²) + |
| MaF3-10 | 1.9296×10⁻¹ (1.19×10⁻¹) | 3.6882×10² (4.27×10²) − | 2.1532×10⁻¹ (5.21×10⁻³) − | 9.4119×10³ (3.95×10³) − | 2.3506×10⁻¹ (2.40×10⁰) − | 3.0333×10³ (2.29×10³) − | 1.0104×10³ (1.70×10³) − | 2.5282×10⁻¹ (2.35×10⁻³) − |
| MaF4-10 | 1.0634×10² (1.75×10³) | 9.0035×10¹ (7.89×10⁰) + | 4.1675×10² (3.19×10³) − | 8.0853×10¹ (4.30×10¹) + | 2.3817×10² (3.46×10¹) − | 5.8700×10¹ (3.48×10⁰) + | 5.9376×10¹ (5.01×10⁰) + | 2.0630×10² (3.24×10¹) − |
| MaF5-10 | 6.1000×10¹ (1.86×10¹) | 7.7751×10¹ (8.43×10⁻¹) − | 2.5978×10² (1.98×10¹) − | 6.4025×10¹ (4.63×10⁰) − | 9.0194×10¹ (9.37×10⁰) − | 6.5317×10¹ (2.05×10⁰) − | 6.1263×10¹ (2.03×10⁰) − | 3.0626×10² (2.40×10⁻¹) − |
| MaF6-10 | 1.7214×10⁻¹ (7.50×10⁰) | 3.0201×10⁻¹ (4.52×10⁻¹) − | 1.8762×10⁻¹ (5.45×10⁻¹) − | 6.9385×10⁰ (6.82×10⁰) − | 3.8519×10⁻¹ (2.23×10⁻²) − | 3.4760×10⁰ (3.58×10⁰) − | 3.5467×10⁰ (1.60×10⁰) − | 1.2812×10⁰ (2.95×10⁻³) − |
| MaF7-10 | 1.7575×10¹ (3.46×10⁰) | 1.1201×10⁰ (9.37×10⁻²) + | 2.6346×10⁰ (2.16×10⁻¹) + | 8.3232×10⁻¹ (6.98×10⁻³) + | 2.9247×10⁰ (4.32×10⁻¹) + | 9.3718×10⁻¹ (4.22×10⁻²) + | 9.9884×10⁻¹ (1.61×10⁻²) + | 1.7941×10⁰ (2.60×10⁻¹) + |
| MaF8-10 | 1.2987×10⁰ (6.17×10¹) | 4.9619×10⁻¹ (5.50×10⁻²) + | 9.4960×10⁻¹ (3.31×10⁰) + | 1.6105×10⁻¹ (7.65×10⁻³) + | 9.5714×10⁻¹ (1.61×10⁻¹) + | 1.2975×10⁻¹ (1.89×10⁻²) + | 1.3411×10⁻¹ (2.51×10⁻³) + | 1.8149×10⁻¹ (2.55×10⁻²) + |
Table 2. IGD mean and standard deviation values of each algorithm.
| Problems | Two Arch-D | NSGAIII | MOEA/DD | KnEA | RVEA | hpaEA | VaEA | NSGAII-SDR |
|---|---|---|---|---|---|---|---|---|
| MaF9-10 | 1.3767×10⁰ (1.01×10²) | 5.3925×10⁻¹ (1.40×10⁻¹) + | 6.5337×10⁰ (2.89×10⁰) − | 3.1205×10¹ (2.96×10¹) − | 8.5061×10⁻¹ (2.00×10⁻¹) + | 3.0532×10⁻¹ (1.20×10⁻¹) + | 2.1405×10⁻¹ (8.91×10⁻²) + | 1.9701×10⁻¹ (8.42×10⁻³) + |
| MaF10-10 | 1.0928×10⁰ (3.81×10⁻²) | 1.1583×10⁰ (7.23×10⁻²) − | 1.9436×10⁰ (6.45×10⁻²) − | 1.2529×10⁰ (1.71×10⁻²) − | 1.2849×10⁰ (4.34×10⁻²) − | 1.4610×10⁰ (2.14×10⁻¹) − | 1.1478×10⁰ (7.93×10⁻²) − | 1.7939×10⁰ (1.25×10⁻¹) − |
| MaF11-10 | 1.0464×10⁰ (3.71×10⁻¹) | 4.3444×10⁰ (9.03×10⁻²) − | 1.7965×10⁰ (3.07×10⁻²) − | 2.7405×10⁰ (3.32×10⁻²) − | 8.8117×10⁰ (1.82×10⁰) − | 1.1432×10⁰ (4.12×10⁻²) − | 1.0746×10⁰ (1.62×10⁻²) − | 1.6378×10⁰ (1.10×10⁻¹) − |
| MaF12-10 | 4.0685×10⁰ (3.46×10⁻¹) | 4.3884×10⁰ (3.84×10⁻²) − | 6.5672×10⁰ (1.10×10⁻²) − | 5.2698×10⁰ (2.66×10⁻²) − | 4.8662×10⁰ (5.81×10⁻²) − | 5.1899×10⁰ (2.59×10⁻¹) − | 4.2516×10⁰ (2.85×10⁻²) − | 4.6239×10⁰ (5.15×10⁻²) − |
| MaF13-10 | 2.5486×10⁻¹ (1.47×10⁻¹) | 4.1263×10⁻¹ (2.72×10⁻²) − | 4.4820×10⁻¹ (5.24×10⁻³) − | 2.6770×10⁻¹ (1.62×10⁻²) − | 8.9039×10⁻¹ (2.71×10⁻¹) − | 1.1376×10⁻¹ (7.67×10⁻³) + | 1.7319×10⁻¹ (2.72×10⁻²) + | 1.8447×10⁻¹ (1.17×10⁻²) + |
| MaF14-10 | 3.6053×10³ (3.91×10³) | 1.6137×10⁰ (4.10×10⁰) + | 5.2802×10⁻¹ (2.72×10⁰) + | 2.0355×10² (3.21×10²) + | 6.1964×10⁻¹ (1.83×10⁻¹) + | 1.1869×10² (1.80×10²) + | 1.0076×10¹ (5.00×10⁰) + | 1.3157×10⁰ (1.64×10⁻¹) + |
| MaF15-10 | 4.0687×10¹ (5.71×10⁰) | 3.2906×10⁰ (2.38×10⁰) + | 1.6765×10¹ (2.80×10⁰) + | 4.7879×10¹ (1.66×10¹) − | 1.0670×10⁰ (6.03×10⁻²) + | 2.7897×10⁰ (1.12×10⁰) + | 1.3290×10⁰ (3.20×10⁻¹) + | 1.1020×10⁰ (2.85×10⁻²) + |
| MaF1-15 | 3.3100×10⁻¹ (1.17×10⁻¹) | 4.2070×10⁻¹ (6.85×10⁻³) − | 5.4084×10⁻¹ (3.32×10⁻²) − | 3.9638×10⁻¹ (2.42×10⁻²) − | 6.6399×10⁻¹ (6.72×10⁻²) − | 2.8224×10⁻¹ (5.46×10⁻³) + | 2.8315×10⁻¹ (1.65×10⁻³) + | 2.9106×10⁻¹ (5.77×10⁻³) + |
| MaF2-15 | 1.8845×10⁻¹ (3.09×10⁻²) | 2.4723×10⁻¹ (2.39×10⁻²) − | 4.2432×10⁻¹ (4.19×10⁻²) − | 1.9211×10⁻¹ (6.51×10⁻³) = | 5.0625×10⁻¹ (1.65×10⁻¹) − | 4.8030×10⁻¹ (1.64×10⁻²) − | 2.0511×10⁻¹ (3.40×10⁻³) − | 3.1962×10⁻¹ (3.20×10⁻²) − |
| MaF3-15 | 8.1378×10⁻¹ (3.04×10⁻³) | 1.7598×10⁰ (5.09×10⁰) − | 1.1909×10⁻¹ (4.94×10⁰) + | 4.9242×10³ (5.41×10³) − | 1.3833×10⁻¹ (1.20×10⁻¹) + | 9.0950×10³ (5.42×10³) − | 7.5296×10³ (9.92×10³) − | 1.4109×10⁻¹ (1.47×10⁻³) + |
| MaF4-15 | 3.6131×10³ (6.49×10³) | 4.7241×10³ (3.12×10²) + | 1.4906×10³ (2.14×10³) + | 1.9243×10³ (4.70×10²) + | 8.1507×10³ (1.80×10³) + | 1.6215×10³ (1.78×10²) + | 1.7141×10³ (3.33×10²) + | 7.7858×10³ (1.40×10³) + |
| MaF5-15 | 2.3753×10³ (4.40×10²) | 4.0975×10³ (1.98×10⁻¹) − | 7.2914×10³ (6.70×10¹) − | 3.6769×10³ (1.93×10²) − | 2.9293×10³ (2.66×10²) − | 2.3912×10³ (9.95×10¹) − | 2.5676×10³ (6.42×10¹) − | 7.2431×10³ (2.55×10²) − |
| MaF6-15 | 1.9554×10⁻¹ (8.80×10⁻¹) | 3.7384×10⁻¹ (1.64×10⁻¹) − | 1.6390×10⁻¹ (3.10×10⁻³) = | 4.8147×10¹ (9.85×10⁰) − | 2.3346×10⁻¹ (2.38×10⁻¹) − | 1.3535×10¹ (8.20×10⁰) − | 3.6300×10⁰ (1.23×10⁰) − | 7.5413×10⁻³ (1.28×10⁻³) + |
| MaF7-15 | 3.0745×10⁰ (2.07×10⁻²) | 2.8347×10⁰ (4.41×10⁻²) + | 3.4554×10⁰ (4.28×10⁻²) − | 2.4546×10⁰ (2.82×10⁻¹) + | 2.7596×10⁰ (5.18×10⁻¹) + | 3.0156×10⁰ (4.01×10⁻¹) + | 2.3330×10⁰ (2.40×10⁻¹) + | 4.2609×10⁰ (5.20×10⁻¹) − |
| MaF8-15 | 1.6376×10⁻¹ (8.74×10⁻³) | 4.1663×10⁻¹ (4.81×10⁻³) − | 1.3276×10⁰ (1.84×10⁻²) − | 1.4318×10⁻¹ (5.31×10⁻³) + | 1.2941×10⁰ (1.85×10⁻¹) − | 1.5480×10⁻¹ (3.41×10⁻³) + | 1.7215×10⁻¹ (3.85×10⁻³) − | 2.2177×10⁻¹ (2.06×10⁻²) − |
| MaF9-15 | 1.5617×10² (9.97×10¹) | 2.1172×10⁰ (4.19×10⁰) + | 9.5364×10⁻¹ (1.33×10⁻²) + | 4.4992×10⁻¹ (5.32×10⁻¹) + | 1.8558×10⁰ (2.41×10⁰) + | 1.4924×10⁻¹ (3.88×10⁻³) + | 2.9089×10⁻¹ (1.57×10⁻¹) + | 2.1064×10⁻¹ (9.27×10⁻³) + |
| MaF10-15 | 3.9705×10⁰ (3.55×10⁻²) | 1.8887×10⁰ (8.30×10⁻²) + | 2.0407×10⁰ (6.91×10⁻²) + | 1.9456×10⁰ (3.63×10⁻²) + | 1.9865×10⁰ (5.56×10⁻²) + | 2.0644×10⁰ (8.86×10⁻²) + | 1.7254×10⁰ (1.10×10⁻¹) + | 2.4796×10⁰ (4.84×10⁻²) + |
| MaF11-15 | 4.1226×10⁰ (9.97×10⁻¹) | 6.7039×10⁰ (4.53×10⁻¹) − | 3.5955×10¹ (3.70×10⁻²) + | 4.6635×10⁰ (7.08×10⁻²) − | 4.6274×10⁰ (1.05×10⁻¹) − | 1.9243×10⁰ (9.90×10⁻²) + | 1.6971×10⁰ (4.82×10⁻²) + | 2.3690×10⁰ (6.63×10⁻²) + |
| MaF12-15 | 1.3829×10¹ (8.47×10⁻¹) | 8.0424×10⁰ (9.96×10⁻²) + | 8.6339×10⁰ (1.59×10⁻¹) + | 6.2047×10⁰ (6.25×10⁻²) + | 7.4553×10⁰ (2.91×10⁻¹) + | 1.2237×10¹ (7.56×10⁻¹) + | 7.1922×10⁰ (8.65×10⁻²) + | 8.2112×10⁰ (8.47×10⁻¹) + |
| MaF13-15 | 5.0668×10⁻¹ (2.05×10⁻¹) | 1.6947×10⁰ (8.43×10⁻²) − | 3.8754×10⁻¹ (4.13×10⁻²) + | 1.1619×10⁻¹ (1.07×10⁻²) + | 1.2745×10⁰ (7.34×10⁻¹) − | 1.3703×10⁻¹ (1.11×10⁻²) + | 1.9194×10⁻¹ (3.47×10⁻²) + | 1.8125×10⁻¹ (1.08×10⁻²) + |
| MaF14-15 | 1.2021×10² (1.47×10²) | 1.4878×10⁰ (7.32×10⁻¹) + | 5.1629×10⁻¹ (5.96×10⁻³) + | 5.0679×10¹ (5.17×10²) + | 7.4847×10⁻¹ (2.33×10⁻¹) + | 7.5186×10⁰ (8.66×10⁰) + | 7.6563×10⁰ (6.64×10⁰) + | 1.2116×10⁰ (1.70×10⁻¹) + |
| MaF15-15 | 5.8367×10¹ (9.43×10⁰) | 4.6599×10⁰ (4.41×10⁰) + | 1.6858×10⁰ (2.91×10⁻¹) + | 1.2239×10² (3.09×10¹) − | 1.3129×10⁰ (6.45×10⁻²) + | 9.7637×10⁰ (2.42×10⁰) + | 2.6989×10⁰ (2.73×10⁻¹) + | 1.2697×10⁰ (3.16×10⁻²) + |
| +/−/= | | 20/23/2 | 17/26/2 | 19/23/3 | 17/26/2 | 26/17/2 | 27/14/4 | 27/18/0 |
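In Tables 1 and 2, "+", "−", and "=" mark problems on which a competitor achieves a significantly lower IGD than Two Arch-D, a significantly higher one, or a statistically similar one; the "+/−/=" row tallies these counts per competitor over all 45 problems. A common procedure for producing such labels (assumed here for illustration; the paper cites standard statistical methodology [36]) is a two-sided rank-sum test at the 0.05 level, sketched below using the normal approximation and no tie correction:

```python
import math

def rank_sum_p(a, b):
    """Approximate two-sided Wilcoxon rank-sum p-value (normal
    approximation, no tie correction). a, b: samples of IGD values."""
    # Rank the pooled samples; group 0 marks values coming from `a`.
    combined = sorted((v, grp) for grp, vals in enumerate((a, b)) for v in vals)
    w = sum(rank + 1 for rank, (_, grp) in enumerate(combined) if grp == 0)
    n1, n2 = len(a), len(b)
    mean = n1 * (n1 + n2 + 1) / 2.0
    sd = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (w - mean) / sd
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided tail probability

def compare(baseline_igd, competitor_igd, alpha=0.05):
    """Return '+', '-', or '=' from the competitor's point of view:
    '+' significantly better (lower IGD), '-' significantly worse."""
    if rank_sum_p(baseline_igd, competitor_igd) >= alpha:
        return '='
    mean = lambda xs: sum(xs) / len(xs)
    return '+' if mean(competitor_igd) < mean(baseline_igd) else '-'
```

Running `compare` over the per-problem IGD samples of each competitor against Two Arch-D and counting the three labels yields a summary row of the same form as the final line of Table 2.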
Table 3. IGD mean obtained by Two Arch-D with different β.
| Problems | β = 0.1 | β = 0.2 | β = 0.3 | β = 0.4 | β = 0.5 |
|---|---|---|---|---|---|
| MaF1-5 | 1.9361×10⁻¹ | 1.0156×10⁰ | 8.7439×10⁻¹ | 1.0869×10⁰ | 1.0674×10⁰ |
| MaF2-5 | 1.2540×10⁰ | 1.9446×10⁻¹ | 1.2052×10⁰ | 1.0837×10⁰ | 1.2382×10⁰ |
| MaF3-5 | 1.0926×10⁰ | 1.6368×10⁻¹ | 1.1206×10⁰ | 8.5473×10⁻¹ | 9.9766×10⁻¹ |
| MaF4-5 | 1.1434×10⁰ | 1.0369×10⁰ | 9.6906×10⁻¹ | 9.8466×10⁻¹ | 1.2896×10⁰ |
| MaF5-5 | 1.0570×10⁰ | 2.3708×10⁻¹ | 1.1298×10⁰ | 1.0517×10⁰ | 9.8865×10⁻¹ |
| MaF6-5 | 1.0442×10⁰ | 1.9064×10⁻¹ | 1.0836×10⁰ | 1.1109×10⁰ | 9.3124×10⁻¹ |
| MaF7-5 | 1.1546×10⁰ | 3.2421×10⁻¹ | 7.3017×10⁻¹ | 1.0234×10⁰ | 9.0161×10⁻¹ |
| MaF8-5 | 1.1382×10⁰ | 1.4518×10⁻¹ | 1.1256×10⁰ | 1.5466×10⁻¹ | 1.1746×10⁰ |
| MaF9-5 | 1.0274×10⁰ | 1.3021×10⁰ | 1.0739×10⁰ | 1.0344×10⁰ | 1.1065×10⁰ |
| MaF10-5 | 2.2074×10⁻¹ | 1.7793×10⁻¹ | 9.6060×10⁻¹ | 9.2230×10⁻¹ | 1.1880×10⁰ |
| MaF11-5 | 2.6770×10⁻¹ | 1.9138×10⁻¹ | 9.7079×10⁻¹ | 9.9605×10⁻¹ | 9.8776×10⁻¹ |
| MaF12-5 | 2.3673×10⁻¹ | 1.1375×10⁰ | 9.7738×10⁻¹ | 8.7156×10⁻¹ | 9.4840×10⁻¹ |
| MaF13-5 | 1.0145×10⁰ | 1.2126×10⁰ | 9.2151×10⁻¹ | 8.8058×10⁻¹ | 1.1845×10⁰ |
| MaF14-5 | 1.1896×10⁰ | 1.0181×10⁰ | 1.1256×10⁰ | 1.2166×10⁻¹ | 1.1124×10⁻¹ |
| MaF15-5 | 1.1282×10⁰ | 1.6536×10⁻¹ | 1.0052×10⁰ | 1.1078×10⁰ | 1.2553×10⁰ |
Table 4. IGD mean obtained by Two Arch-D with different β.
| Problems | β = 0.6 | β = 0.7 | β = 0.8 | β = 0.9 | β = 1.0 |
|---|---|---|---|---|---|
| MaF1-5 | 9.4293×10⁻¹ | 1.0189×10⁰ | 9.2574×10⁻¹ | 9.9358×10⁻¹ | 1.0574×10⁰ |
| MaF2-5 | 9.7363×10⁻¹ | 1.0439×10⁰ | 8.3417×10⁻¹ | 1.0067×10⁰ | 8.8881×10⁻¹ |
| MaF3-5 | 9.1630×10⁻¹ | 1.0596×10⁰ | 1.2256×10⁰ | 8.6536×10⁻¹ | 8.7178×10⁻¹ |
| MaF4-5 | 1.1585×10⁰ | 9.8553×10⁻¹ | 9.1146×10⁻¹ | 1.0135×10⁰ | 1.1021×10⁰ |
| MaF5-5 | 1.0614×10⁰ | 8.9829×10⁻¹ | 1.1919×10⁰ | 8.4124×10⁻¹ | 1.0229×10⁰ |
| MaF6-5 | 9.0160×10⁻¹ | 1.0544×10⁰ | 1.1214×10⁰ | 7.8145×10⁻¹ | 8.8379×10⁻¹ |
| MaF7-5 | 1.0889×10⁰ | 9.5638×10⁻¹ | 1.0512×10⁰ | 9.6275×10⁻¹ | 8.8720×10⁻¹ |
| MaF8-5 | 1.1479×10⁰ | 1.2521×10⁰ | 1.1092×10⁰ | 1.1545×10⁰ | 1.0437×10⁰ |
| MaF9-5 | 1.3618×10⁻¹ | 1.6021×10⁰ | 3.1440×10⁻¹ | 1.3218×10⁻¹ | 1.3046×10⁰ |
| MaF10-5 | 1.0188×10⁰ | 1.0159×10⁰ | 9.8462×10⁻¹ | 1.0523×10⁰ | 9.8052×10⁻¹ |
| MaF11-5 | 9.3733×10⁻¹ | 9.0826×10⁻¹ | 1.0970×10⁰ | 8.8298×10⁻¹ | 9.7087×10⁻¹ |
| MaF12-5 | 1.2302×10⁰ | 9.8316×10⁻¹ | 1.1134×10⁰ | 9.8456×10⁻¹ | 8.0840×10⁻¹ |
| MaF13-5 | 9.1040×10⁻¹ | 8.9130×10⁻¹ | 9.9249×10⁻¹ | 1.0142×10⁰ | 1.0193×10⁰ |
| MaF14-5 | 1.1236×10⁰ | 1.1630×10⁻¹ | 1.3638×10⁻¹ | 3.3340×10⁻¹ | 1.1313×10⁰ |
| MaF15-5 | 1.1382×10⁰ | 1.1540×10⁰ | 1.0589×10⁰ | 1.1714×10⁰ | 1.3945×10⁰ |
Table 5. IGD mean obtained by Two Arch-D with different β.
| Problems | β = 1.1 | β = 1.2 | β = 1.3 | β = 1.4 | β = 1.5 |
|---|---|---|---|---|---|
| MaF1-5 | 1.1071×10⁰ | 9.4357×10⁻¹ | 1.1988×10⁰ | 1.0151×10⁰ | 1.0058×10⁰ |
| MaF2-5 | 1.2518×10⁻¹ | 6.8046×10⁻¹ | 1.0479×10⁰ | 1.0136×10⁰ | 1.0081×10⁰ |
| MaF3-5 | 8.4891×10⁻¹ | 8.5646×10⁻¹ | 1.1052×10⁰ | 8.1836×10⁻¹ | 9.9570×10⁻¹ |
| MaF4-5 | 1.0521×10⁰ | 8.4880×10⁻¹ | 9.2458×10⁻¹ | 1.0162×10⁰ | 7.8815×10⁻¹ |
| MaF5-5 | 1.1118×10⁰ | 1.0258×10⁰ | 9.6185×10⁻¹ | 9.7589×10⁻¹ | 9.2210×10⁻¹ |
| MaF6-5 | 1.1501×10⁰ | 8.9643×10⁻¹ | 1.0135×10⁰ | 1.1382×10⁰ | 1.1913×10⁰ |
| MaF7-5 | 8.5490×10⁻¹ | 1.1539×10⁰ | 1.2590×10⁰ | 8.3618×10⁻¹ | 1.0272×10⁰ |
| MaF8-5 | 1.2517×10⁰ | 5.8080×10⁻¹ | 1.2218×10⁻¹ | 1.0908×10⁰ | 2.9950×10⁻¹ |
| MaF9-5 | 1.1466×10⁻¹ | 1.3154×10⁰ | 1.0069×10⁰ | 1.1003×10⁰ | 1.1337×10⁰ |
| MaF10-5 | 9.8591×10⁻¹ | 1.1810×10⁰ | 9.6601×10⁻¹ | 1.0520×10⁰ | 1.0716×10⁰ |
| MaF11-5 | 8.1082×10⁻¹ | 1.0238×10⁰ | 1.2674×10⁰ | 9.8526×10⁻¹ | 9.5035×10⁻¹ |
| MaF12-5 | 8.4588×10⁻¹ | 9.1533×10⁻¹ | 9.3684×10⁻¹ | 8.2870×10⁻¹ | 1.0397×10⁰ |
| MaF13-5 | 1.1351×10⁰ | 1.1304×10⁰ | 8.3463×10⁻¹ | 9.3790×10⁻¹ | 4.8714×10⁻¹ |
| MaF14-5 | 1.0556×10⁰ | 3.1750×10⁻¹ | 1.3422×10⁰ | 1.1218×10⁻¹ | 1.3640×10⁰ |
| MaF15-5 | 1.8720×10⁻¹ | 6.1190×10⁻¹ | 1.1714×10⁰ | 2.5440×10⁻¹ | 1.1067×10⁰ |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Ye, N.; Dai, C.; Xue, X. A Two-Archive Many-Objective Optimization Algorithm Based on D-Domination and Decomposition. Algorithms 2022, 15, 392. https://doi.org/10.3390/a15110392

