Mathematics | Article | Open Access | 10 November 2021
A Mating Selection Based on Modified Strengthened Dominance Relation for NSGA-III

1 Department of Mathematics, National Institute of Technology Silchar, Assam 788010, India
2 Department of Artificial Intelligence, School of Electronics Engineering, Kyungpook National University, Daegu 41566, Korea
* Author to whom correspondence should be addressed.
This article belongs to the Special Issue Numerical Analysis and Scientific Computing

Abstract

In multi/many-objective evolutionary algorithms (MOEAs), numerous modified dominance relations have been proposed to alleviate the degraded convergence pressure of Pareto dominance as the number of objectives increases. Recently, the strengthened dominance relation (SDR) was proposed, in which the dominance area of a solution is determined by its convergence degree and a niche size (θ̄). Later, in the controlled SDR (CSDR), θ̄ and an additional parameter (k) associated with the convergence degree are dynamically adjusted depending on the iteration count. Depending on the problem characteristics and the distribution of the current population, different situations require different values of k, rendering the linear reduction of k based on the generation count ineffective. This is because a particular value of k biases the dominance relation towards a particular region of the Pareto front (PF). For the same reason, using SDR or CSDR in the environmental selection cannot preserve the diversity of solutions required to cover the entire PF. Therefore, we propose an MOEA, referred to as NSGA-III*, in which (1) a modified SDR (MSDR)-based mating selection with an adaptive ensemble of the parameter k prioritizes parents from specific sections of the PF depending on k, and (2) the traditional weight-vector and non-dominated-sorting-based environmental selection of NSGA-III preserves solutions covering the entire PF. The performance of NSGA-III* compares favourably with state-of-the-art MOEAs on the DTLZ and WFG test suites with up to 10 objectives.

1. Introduction

In the literature [1], evolutionary algorithms (EAs) have demonstrated their ability to tackle a variety of optimization problems efficiently. Many real-world optimization problems involve several conflicting objectives that must be optimized simultaneously. Without prior preference information, the existence of conflicting objectives makes it impossible to find a single solution that is globally optimal with respect to all of the objectives. In such a situation, instead of a total order between solutions, only a partial order may be expected, resulting in a solution set consisting of alternative solutions that represent different trade-offs among the objectives. However, one of the difficulties in multi-objective optimization, compared to single-objective optimization, is that there is no unique or straightforward quality assessment method to rank all the obtained solutions and to guide the search towards better regions. Multi-objective evolutionary algorithms (MOEAs) are particularly suitable for this task because they simultaneously evolve a population of candidate solutions, which facilitates the search for a set of Pareto non-dominated solutions in a single run of the algorithm. Classically, a multi-objective problem (MOP) can be briefly stated as
$$\min\; f(x) = (f_1(x), f_2(x), \ldots, f_M(x)) \quad \text{s.t.} \quad x = (x_1, x_2, \ldots, x_n) \in X,\; X \subseteq \mathbb{R}^n$$
However, when the number of objectives in a problem exceeds three, the problem is typically known as a many-objective optimization problem (MaOP), and many-objective evolutionary algorithms (MaOEAs) face considerable difficulties. First and foremost, most of the solutions in a population become non-dominated with respect to each other as the number of objectives increases. Due to this tendency, the selection pressure towards the Pareto front (PF) deteriorates significantly, making the convergence of MaOEAs very difficult, especially for those that use the Pareto dominance relation as the key selection criterion. In many-objective optimization, the phenomenon whereby most of the candidate solutions become incomparable in the sense of Pareto dominance is referred to as dominance resistance [2]. Because of dominance resistance, the ranking of the solutions depends on the secondary selection criterion, which is a diversity measure. Therefore, in the mating selection, the selection of effective solutions to generate offspring depends on the crowding distance [3]. A variety of approaches have been suggested to address the issues raised by MaOPs in the context of the challenges discussed above. The most important component of any MaOEA is the selection criterion, specifically the environmental selection criterion, which is used to reduce the oversized population. Generally, MOEAs/MaOEAs can be roughly divided into three categories according to their environmental selection strategies: dominance-based [3,4,5], decomposition-based [6,7,8] and indicator-based [9,10,11]. Dominance-based methods mainly use the concept of Pareto dominance along with some diversity measurement criteria; NSGA-II [3] and PDMOEAs [12] are well-known methods in this category. Decomposition-based methods decompose an MOP/MaOP into a set of sub-problems and optimize them simultaneously. MOEA/D [6], NSGA-III [13], MOEA/DD [8] and RVEA [14] are some popular approaches in this category, where a set of pre-defined, uniformly distributed reference vectors [15] is utilized to manage population convergence and diversity. Some of these approaches also incorporate Pareto dominance as a primary criterion to enhance the convergence of the population. Indicator-based approaches quantify the quality of candidate solutions by an indicator value, which is a single value obtained by combining the information present in the M objective values. The indicator-based EA (IBEA) [9], HypE [10] and ISDE+ [11] are some representative algorithms in this category.
To enhance the performance of dominance-based MOEAs, the first and most intuitive idea is to establish a new dominance relation or modify the traditional Pareto dominance to increase the selection pressure towards the PF. In the literature, many dominance relations have been proposed in recent years [16,17,18,19,20]. The controlling dominance area of solutions (CDAS) [16] and its adaptive version, referred to as self-CDAS (S-CDAS) [18], improve the convergence pressure by expanding the dominance area. In the generalized Pareto optimality (GPO) [20] and α-dominance [2], the dominance area is expanded by modifying the definition of the dominance relation. Dominance relations such as ε-dominance [21] and grid dominance [22] are based on the gridding of the objective space. θ-dominance [19] and RP-dominance [23] were proposed to suit decomposition-based MaOEAs. Recently, in [24], a new dominance relation referred to as the strengthened dominance relation (SDR) was proposed, together with a tailored niching technique that considers the angles between candidate solutions. In this niching technique, the niche size (θ̄) is adaptively determined by the distribution of the candidate solutions, and within each niche of size θ̄, the solution with the best convergence degree, defined as the sum of the objectives, is selected. However, with SDR there is a possibility that some dominated solutions may be considered as non-dominated. To overcome this, a modification of SDR referred to as the controlled strengthened dominance relation (CSDR) was proposed in [25], where SDR is combined with traditional Pareto dominance. In addition, CSDR introduces two parameters, k and a, into the convergence degree and the niche size, respectively, to control the dominance area; both are adapted based on the generation count. In other words, CSDR degenerates into SDR when the traditional Pareto dominance condition is removed and the parameters k and a are set to 1 and 50, respectively.
However, the use of SDR or CSDR in the niching process of the oversized population during the environmental selection results in the loss of some promising solutions, because a particular value of k emphasizes certain sections of the PF. In other words, the use of SDR or CSDR in the environmental selection may not be suitable. In addition, in the mating selection, a particular value of k concentrates on a particular region of the PF. Therefore, the setting of the parameter k should depend on the distribution of the current population; a linear reduction of k is not appropriate for efficient parent selection for offspring generation.
Motivated by the observations that (1) by controlling the parameter k, different sections of the PF can be emphasized, and (2) different stages of the evolution require different values of k depending on the status of the population, we propose a mating selection that employs a modified SDR with an adaptive ensemble of parameters, where the probability of applying each parameter value in the ensemble depends on its success rate. The environmental selection, however, remains similar to that of the traditional NSGA-III because it preserves solutions from different parts of the PF, thus resulting in the selection of solutions that are diverse and represent the entire PF.
The remainder of this article is organized as follows. Section 2 covers the literature on different dominance relationships and the motivation for the current study. Section 3 describes the modified SDR and NSGA-III* framework with a modified SDR-based mating selection. Section 4 presents the experimental setup and comparison results of NSGA-III* with a number of state-of-the-art MOEAs/MaOEAs. Section 5 concludes the paper.

3. Controlled Strengthened Dominance-Based Mating Selection with Adaptive Ensemble of Parameters for NSGA-III (NSGA-III*)

In this section, the concept of the modified SDR is first described, followed by the framework of NSGA-III*. Based on the observations mentioned in the previous section, a modified SDR (MSDR) is proposed, according to which a solution x is said to MSDR-dominate another solution y (i.e., x ≺_MSDR y) if and only if
$$\begin{cases} \mathrm{Con}(x) < \mathrm{Con}(y), & \theta_{xy} \le \bar{\theta} \\ \mathrm{Con}(x) \cdot \dfrac{\theta_{xy}}{\bar{\theta}} < \mathrm{Con}(y), & \theta_{xy} > \bar{\theta} \end{cases}$$
where the definitions of Con(x), θ_xy and θ̄ are the same as in Equation (3). The adaptation of θ̄ is the same as in CSDR. However, the value of k is selected from a fixed pool of values sampled from the range (0, 2); in the current study, the size of the pool is set to five. The selection of a parameter from the pool is probabilistic, where the probabilities are adapted depending on the number of successful offspring members produced by each parameter value in the pool.
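For concreteness, the following Python sketch shows how the MSDR comparison above could be evaluated for a pair of normalized objective vectors. Since Equation (3) is not reproduced in this section, the convergence degree Con is implemented here as an assumed k-powered sum of objectives and θ_xy as the angle between the two vectors; both are placeholders rather than the authors' exact definitions.

    import numpy as np

    def angle(fx, fy):
        # Angle (radians) between two objective vectors.
        cos = np.dot(fx, fy) / (np.linalg.norm(fx) * np.linalg.norm(fy) + 1e-12)
        return np.arccos(np.clip(cos, -1.0, 1.0))

    def con(fx, k):
        # Convergence degree: a k-powered sum of objectives is an assumed
        # placeholder for the definition in Equation (3).
        return np.sum(fx ** k)

    def msdr_dominates(fx, fy, theta_bar, k):
        # True if x MSDR-dominates y under the two-case definition above.
        t = angle(fx, fy)
        if t <= theta_bar:
            return con(fx, k) < con(fy, k)
        return con(fx, k) * (t / theta_bar) < con(fy, k)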
The basic framework of the proposed NSGA-III* is as follows:
(1) Mating selection that employs the modified SDR (MSDR).
(2) Environmental selection similar to that of standard NSGA-III, based on weight vectors and traditional Pareto dominance.
NSGA-III* starts with a parent population P_0 of size N (Algorithm 1, Line 1). In each generation t, the mating selection is performed by probabilistically selecting a k value from the pool, and a mating pool M_t is created (Algorithm 1, Line 4). After mating selection, the offspring population Q_t of size N is created (Algorithm 1, Line 5). The offspring population Q_t and the parent population P_t are combined to form R_t (Algorithm 1, Line 6) and normalized (Algorithm 1, Line 7). Through environmental selection, the best N solutions are selected from R_t to form the population of the next generation, P_{t+1} (Algorithm 1, Line 8).
Algorithm 1: NSGA-III* pseudo-code
Input: P_0 (initial population), N (population size), W (set of weight vectors), the pool of k values k = (k_1, k_2, ..., k_p), pr_t = (pr_{1,t}, pr_{2,t}, ..., pr_{p,t}) (probability of selecting each k), t_max (maximum number of generations)
Output: P_{t_max} (final population)
   01: P_0 ← Generate initial population (N)
   02: t = 1
   03: While (t < t_max) do
   04:   M_t ← Mating_selection(P_t, N, k, pr_t)
   05:   Q_t ← Variation(M_t, N)
   06:   R_t ← P_t ∪ Q_t
   07:   R_t ← Normalization(R_t)
   08:   P_{t+1} ← Environmental_selection(W, R_t, N)
   09:   t = t + 1
   10:   pr_{t+1} ← Adapt_pr(P_{t+1}, pr_t)
   11: End While
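As a rough illustration of how these steps fit together, the following Python sketch mirrors Algorithm 1 line by line. The six callables are stand-ins for the sub-procedures detailed in Sections 3.1 to 3.5, and the equal initial probabilities are an assumption made here for the sketch (the text only states that pr_0 is set during initialization).

    import numpy as np

    def nsga_iii_star(init_pop, mating_selection, variation, normalize,
                      environmental_selection, adapt_pr, W, N, k_pool, t_max):
        # Line-by-line transcription of Algorithm 1; the callables correspond to
        # the sub-procedures of Sections 3.1-3.5 and are supplied by the caller.
        P = init_pop(N)                                   # Line 01
        pr = np.full(len(k_pool), 1.0 / len(k_pool))      # equal initial pr (assumed)
        t = 1                                             # Line 02
        while t < t_max:                                  # Line 03
            M = mating_selection(P, N, k_pool, pr)        # Line 04
            Q = variation(M, N)                           # Line 05
            R = np.concatenate([P, Q])                    # Line 06
            R = normalize(R)                              # Line 07
            P = environmental_selection(W, R, N)          # Line 08
            t += 1                                        # Line 09
            pr = adapt_pr(P, pr)                          # Line 10
        return P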

3.1. Initialization

A set of uniform weight vectors W is generated using the NBI method [15]; subsequently, a population P_0 of size N (= |W|) is initialized within the permissible variable bounds. The pool of values corresponding to the parameter k and their initial selection probabilities are set as k = (k_1, k_2, ..., k_p) and pr_0 = (pr_{1,0}, pr_{2,0}, ..., pr_{p,0}), respectively. The size of the pool (p) is set to five in the current study. The effect of the parameter values in the ensemble on the performance of the algorithm is demonstrated in Section 4.
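A possible sketch of this initialization step is given below in Python. The simplex-lattice construction is the standard Das and Dennis (NBI) scheme from [15]; the number of divisions H, the box bounds and the example dimensions are illustrative assumptions, not values prescribed here.

    import itertools
    import numpy as np

    def das_dennis_weights(M, H):
        # All M-tuples of non-negative integers summing to H, divided by H,
        # i.e. the uniform simplex-lattice (NBI) weight vectors.
        weights = []
        for c in itertools.combinations(range(H + M - 1), M - 1):
            parts = np.diff([-1, *c, H + M - 1]) - 1   # stars-and-bars gaps
            weights.append(parts / H)
        return np.array(weights)

    def initialize_population(N, lower, upper, rng=np.random.default_rng(0)):
        # P_0: N decision vectors sampled uniformly within the variable bounds.
        lower, upper = np.asarray(lower), np.asarray(upper)
        return lower + rng.random((N, lower.size)) * (upper - lower)

    # Illustrative usage: M = 3 objectives with H = 12 divisions gives 91 vectors.
    W = das_dennis_weights(M=3, H=12)
    P0 = initialize_population(N=len(W), lower=[0.0] * 7, upper=[1.0] * 7)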

3.2. Mating Selection with Modified SDR and Offspring Generation

At each generation t, the parameter value corresponding to a is obtained from Equation (6). In addition, for each value k_i in the pool k = (k_1, k_2, ..., k_p), the population P_t is sorted based on the modified SDR (Algorithm 2, Line 02). After sorting, (pr_{i,t} × N) solutions are selected into the mating pool through binary tournament selection, where the probabilities corresponding to the parameter values of k at generation t are given by pr_t = (pr_{1,t}, pr_{2,t}, ..., pr_{p,t}) (Algorithm 2, Line 03). After repeating the process for each value k_i, the mating pool M_t = ∪_{i=1,...,p} M_{i,t} is formed (Algorithm 2, Line 05). Since the mating pool is formed considering different values of k, the parents selected for offspring generation are sampled from different regions of the PF. In addition, as the probabilities of the parameter values are adapted based on their performance, the k values that perform better are given a chance to produce more offspring members (Algorithm 4). In other words, the region of the PF that corresponds to the k values with high probabilities is given priority, and more offspring members are generated in that region. In conclusion, depending on the nature of the problem and the distribution of the population members, different stages of the evolution require different k values, and the selection probabilities of the parameter values in the pool reflect the state of the current population. The details regarding the adaptation of these probabilities are presented in Section 3.5 (Algorithm 4). After forming the mating pool, the variation operators, namely crossover and mutation, are employed to produce the offspring members (Algorithm 1, Line 05). In the current study, simulated binary crossover (SBX) and polynomial mutation (PM) are employed.
Algorithm 2: Mating_selection(P_t, N, k, pr_t)
Input: P_t (population at generation t), N (population size), k = (k_1, k_2, ..., k_p), pr_t = (pr_{1,t}, pr_{2,t}, ..., pr_{p,t}) (probability of selecting each k at generation t)
Output: M_t (mating pool)
   01: For i = 1 : p
   02:   Perform the MSDR sorting with k_i on P_t and calculate the crowding degree of the solutions.
   03:   M_{i,t} = Select (pr_{i,t} × N) solutions through binary tournament selection.
   04: End For
   05: M_t = ∪_{i=1,...,p} M_{i,t}
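A minimal Python sketch of this mating selection is shown below. It reuses the msdr_dominates helper sketched earlier in this section, ranks solutions by a simple MSDR-dominance count as a stand-in for the full MSDR sorting, and omits the crowding-degree tie-break of Line 02 for brevity; these simplifications are assumptions of the sketch rather than the exact procedure.

    import numpy as np

    def mating_selection(F, N, k_pool, pr, theta_bar, rng=np.random.default_rng(1)):
        # F is an (n, M) matrix of normalized objective values of P_t.
        def msdr_rank(k):
            n = len(F)
            rank = np.zeros(n, dtype=int)
            for i in range(n):
                for j in range(n):
                    if i != j and msdr_dominates(F[j], F[i], theta_bar, k):
                        rank[i] += 1          # number of solutions MSDR-dominating i
            return rank

        mating_pool = []
        for k_i, pr_i in zip(k_pool, pr):                 # Line 01: loop over the pool
            rank = msdr_rank(k_i)                         # Line 02: MSDR-based ranking
            for _ in range(int(round(pr_i * N))):         # Line 03: quota for this k_i
                a, b = rng.integers(0, len(F), size=2)    # binary tournament selection
                mating_pool.append(a if rank[a] <= rank[b] else b)
        return np.array(mating_pool)                      # Line 05: indices forming M_t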

3.3. Normalization

Normalization is an essential tool that maps the unscaled objective space to a scaled one so as to handle badly scaled objectives. In NSGA-III*, the normalization of the jth population member is given in Equation (7).
$$F_i^j = \frac{f_i^j - z_i^*}{z_i^{nad} - z_i^*}, \quad i = 1, 2, \ldots, M$$
where z_i^* and z_i^{nad} are the lowest and highest values of the ith objective function, respectively.
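In Python, this normalization could look as follows; taking z* and z^nad as the per-objective minimum and maximum of the current merged population, and the small epsilon guarding against division by zero, are assumptions of the sketch.

    import numpy as np

    def normalize(F):
        # Equation (7): translate by the ideal point z* and scale by the nadir
        # estimate z_nad, taken here as the column-wise minimum and maximum of F.
        z_star = F.min(axis=0)
        z_nad = F.max(axis=0)
        return (F - z_star) / (z_nad - z_star + 1e-12)   # epsilon avoids division by zero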

3.4. Environmental Selection

To perform the environmental selection (Algorithm 3), where the goal is to select N solutions for the (t+1)th generation (P_{t+1}) from the combined population R_t of size 2N from generation t, a set of pre-defined weight vectors W is used [15]. First, non-dominated sorting based on traditional Pareto dominance is performed on R_t, which is the result of the normalization of R_t (Algorithm 3, Line 01). Based on the number of individuals in the sub-fronts, if the condition ∑_{i=1}^{l} |Front_i| = N is satisfied, then P_{t+1} = ∪_{i=1}^{l} Front_i is considered as the parent population of the next generation (Algorithm 3, Lines 2~4). Otherwise, the association is performed between the set of weight vectors W and the combined population R_t (Algorithm 3, Line 5). Each solution is associated with a weight vector based on the angle between them. If multiple solutions are associated with the same weight vector, then the solution with the minimum perpendicular distance to that weight vector is selected (Algorithm 3, Line 6). After the association process, the best associated solutions are added to P_{t+1}, and if |P_{t+1}| < N, then the weight vectors without any associated solutions are identified as ineffective weight vectors (IWV) (Algorithm 3, Line 7). These ineffective weight vectors are then associated with the members of the last front (Algorithm 3, Line 8). The |IWV| associated solutions selected in this way are denoted by U (Algorithm 3, Line 9). Finally, the combined population P_{t+1} = P_{t+1} ∪ U represents the parent population for the next generation (Algorithm 3, Line 10).
Algorithm 3: Environmental_selection(W, R_t, N)
Input: R_t (merged population at generation t), W (reference point set), N (population size)
Output: P_{t+1} (parent population for the next generation)
   01: [Front, Max_Front] = Non-Dominated_Sort(R_t)
   02: If ∑_{i=1}^{l} |Front_i| = N for some l ∈ {1, 2, ..., Max_Front}
   03:   P_{t+1} = ∪_{i=1}^{l} Front_i
   04: Else
   05:   association = Associate(R_t, W) // associate the solutions of R_t (except the last-front solutions) with W
   06:   P_{t+1} = best associated solutions; if more than one solution is associated with a weight vector, select the one with the minimum perpendicular distance
   07:   W_ineffective = weight vectors that have no solutions associated with them
   08:   ineffective_solution = Associate(R_t(Max_Front), W_ineffective)
   09:   U = select the best associated solution from ineffective_solution for each ineffective weight vector
   10:   P_{t+1} = P_{t+1} ∪ U
   11: End If
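The sketch below illustrates one way the association-based selection of Algorithm 3 could be coded in Python. It assumes the non-dominated fronts are supplied as index arrays, reads the exact-fit condition of Lines 02-03 in a simplified way (all fronts together contain exactly N solutions), and associates solutions with weight vectors by angle with a perpendicular-distance tie-break; it is an approximation of the procedure, not the authors' implementation.

    import numpy as np

    def associate(F, W):
        # For each normalized objective vector, the index of the weight vector
        # with the largest cosine similarity (i.e., the smallest angle).
        Fn = F / (np.linalg.norm(F, axis=1, keepdims=True) + 1e-12)
        Wn = W / np.linalg.norm(W, axis=1, keepdims=True)
        return np.argmax(Fn @ Wn.T, axis=1)

    def perp_dist(f, w):
        # Perpendicular distance of f from the line spanned by weight vector w.
        w = w / np.linalg.norm(w)
        return np.linalg.norm(f - np.dot(f, w) * w)

    def environmental_selection(W, F, fronts, N):
        # 'fronts' is the list of index arrays from Pareto non-dominated sorting
        # of the merged, normalized objective matrix F.
        if sum(len(fr) for fr in fronts) == N:            # Lines 02-03 (simplified)
            return np.concatenate(fronts)
        first = np.concatenate(fronts[:-1]) if len(fronts) > 1 else fronts[0]
        last = fronts[-1]
        assoc = associate(F[first], W)                    # Line 05
        selected = []
        for j in range(len(W)):
            members = first[assoc == j]
            if len(members):                              # Line 06: closest member wins
                d = [perp_dist(F[m], W[j]) for m in members]
                selected.append(members[int(np.argmin(d))])
            else:                                         # Lines 07-09: ineffective weight
                d = [perp_dist(F[m], W[j]) for m in last] # vectors re-associated with the
                selected.append(last[int(np.argmin(d))])  # last front
        return np.array(selected)[:N]                     # Line 10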
In [24,25], the modified dominance relationships, namely SDR and CSDR, are employed in the environmental selection, in addition to the mating selection. However, in the current work, the environmental selection is based on traditional Pareto dominance so that the environmental selection process is not biased to any section of the PF.

3.5. Adaptation of the Probability of Parameters in the Ensemble Pool

As mentioned in Section 3.2, the probabilities pr_t = (pr_{1,t}, pr_{2,t}, ..., pr_{p,t}) of applying the parameter values in the pool k need to be adapted over the generations (Algorithm 4). At the end of each generation t, the number of solutions produced by each k_i that entered the parent population is counted to modify the probabilities. In other words, after obtaining the parent set P_{t+1} from the environmental selection, the number of surviving solutions for each k_i is counted as C = (C_1, C_2, ..., C_p) (Algorithm 4, Line 1). Each C_i is then normalized as C_i = C_i / ∑_{j=1}^{p} C_j, i = 1, 2, ..., p (Algorithm 4, Line 2). The probabilities of the parameters in the ensemble pool are then updated using a weighted sum in which the performance of the current generation is given a weight of 0.3. In addition, the probability of applying any parameter value in the pool cannot fall below a minimum threshold of 0.05. Finally, the probabilities are renormalized (Algorithm 4, Lines 3~5).
Algorithm 4: Adapt_pr(P_{t+1}, pr_t)
Input: P_{t+1} (population at generation t+1), pr_t (probabilities at generation t)
Output: pr_{t+1} (probabilities after adaptation)
   01: C = (C_1, C_2, ..., C_p) ← count the number of solutions in P_{t+1} produced with each k_i
   02: C_i = C_i / ∑_{j=1}^{p} C_j, ∀ i = 1, 2, ..., p
   03: temp_i = 0.7 pr_{i,t} + 0.3 C_i
   04: temp_i = max(0.05, temp_i)
   05: pr_{i,t+1} = temp_i / ∑_{j=1}^{p} temp_j, ∀ i = 1, 2, ..., p
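A compact Python rendering of this update is given below; for simplicity it takes the per-k survivor counts as an argument rather than extracting them from P_{t+1}, and it guards against a zero count sum, which are assumptions of the sketch.

    import numpy as np

    def adapt_pr(success_counts, pr, weight=0.3, floor=0.05):
        # success_counts[i]: number of members of P_{t+1} produced with k_i.
        C = np.asarray(success_counts, dtype=float)
        C = C / max(C.sum(), 1.0)                          # Line 02: normalize counts
        temp = (1 - weight) * np.asarray(pr) + weight * C  # Line 03: 0.7*pr + 0.3*C
        temp = np.maximum(floor, temp)                     # Line 04: minimum of 0.05
        return temp / temp.sum()                           # Line 05: renormalize

    # Example: pr = [0.2]*5 and counts (40, 25, 20, 10, 5) raise the probability
    # of the first k value above 0.2 while keeping every entry at least 0.05.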

4. Experimental Setup, Results and Discussion

Experiments were conducted on 16 scalable test problems from the DTLZ and WFG test suites, comprising seven and nine problems, respectively. For each test problem, 2, 4, 6, 8 and 10 objectives were considered. The parameter values employed are as presented in [11]. In order to compare the efficiency of NSGA-III* with the state-of-the-art algorithms, a quantitative indicator, namely the hypervolume (HV), was employed; a larger HV value indicates a better algorithm. In this experiment, we first normalized the objective vectors before calculating the HV, and the reference point was set as (1, ..., 1) ∈ R^M. To evaluate the HV, we used Monte Carlo sampling with 10^6 sampling points. In each instance, 30 independent runs were performed for each algorithm on a PC with a 3.30 GHz Intel(R) Core(TM) i7-8700 CPU, 16 GB RAM and the Windows 10 Pro 64-bit operating system. As a stopping criterion, the maximum number of generations was set to 700 for DTLZ1 and WFG2, 1000 for DTLZ3 and WFG1, and 250 for the other problems (DTLZ2, DTLZ4-7 and WFG3-9). All algorithms employed a population size (N) of 100, 165, 182, 240 and 275 for 2, 4, 6, 8 and 10 objectives, respectively. Simulated binary crossover and polynomial mutation were employed with distribution indices n_c = 20 and n_m = 20 and probabilities p_c = 1.0 and p_m = 1/D, respectively, where D is the number of decision variables.
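For reference, the Monte Carlo HV estimate described above can be sketched in Python as follows, assuming a minimization problem whose non-dominated set has already been normalized to [0, 1]^M with the reference point (1, ..., 1); this is an illustrative estimator, not the code used for the reported results.

    import numpy as np

    def hv_monte_carlo(F, n_samples=10**6, rng=np.random.default_rng(0)):
        # F: (n, M) normalized non-dominated objective vectors (minimization).
        M = F.shape[1]
        S = rng.random((n_samples, M))                    # uniform samples in [0, 1]^M
        dominated = np.zeros(n_samples, dtype=bool)
        for f in F:                                       # a sample is dominated if some
            dominated |= np.all(f <= S, axis=1)           # solution is <= it in every objective
        return dominated.mean()                           # fraction of the unit box covered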
The only additional component introduced into NSGA-III* is the pool of parameter values for k. To investigate the robustness of NSGA-III* with respect to the composition of the pool, we performed simulations with different sets of values. In all the sets, it was ensured that the pool of k values was diverse, representative and covered the entire range. In addition, the size of the pool should not be large; therefore, it was set to five. The experiments were conducted by incorporating five different pools corresponding to the parameter k into NSGA-III*, named NSGA-III1* = (1.2, 0.7, 0.5, 0.3, 0.1), NSGA-III2* = (1.5, 1.2, 1, 0.5, 0.3), NSGA-III3* = (1.5, 1, 0.7, 0.5, 0.1), NSGA-III4* = (1.3, 1.1, 0.9, 0.5, 0.3) and NSGA-III5* = (1.2, 1, 0.8, 0.4, 0.1).
The experimental analysis was performed on the 16 scalable test problems from the DTLZ and WFG test suites. The results are presented in the supplementary file, and the pair-wise comparisons are summarized in Table 1 in terms of the number of wins (W), losses (L) and ties (T). From the results, it is evident that the performance of NSGA-III* is quite robust with respect to the selection of the pool, which is apparent from the performance similarity (represented by T) of over 80% between the different versions of the ensemble. However, NSGA-III2*, with the pool k = (1.5, 1.2, 1, 0.5, 0.3), is the best-suited among them, with a slightly better performance. Therefore, the simulation results corresponding to k = (1.5, 1.2, 1, 0.5, 0.3), referred to as NSGA-III*, are used for the comparison with the state-of-the-art algorithms in Table 2.
Table 1. Summary of the experimental results demonstrating the robustness of the NSGA-III* to the parameter values in the ensemble pool.
Table 2. Comparison of HV and statistical results on DTLZ and WFG test problems (“+”–WIN, “≈”–TIE, “−”–LOSS).
To demonstrate the effect of different instances of NSGA-II/CSDR [25], where (1) the k values are linearly reduced with the generation count as in Equation (5), and (2) both the mating and environmental selections employ CSDR, all the instances of NSGA-II/CSDR considered start with k_max = 1.6. However, the rates of reduction Δk employed are 0.6, 0.5, 0.4 and 0.2, and the instances are referred to as NSGA-II/CSDR1, NSGA-II/CSDR2, NSGA-II/CSDR3 and NSGA-II/CSDR4, respectively. The simulations are performed on the 3-objective instances of DTLZ1 and DTLZ3, and the final populations are depicted in Figure 3 and Figure 4. From the figures, it is evident that none of the four NSGA-II/CSDR instances can produce well-distributed solutions on these problems. In other words, even though all the instances of NSGA-II/CSDR start with the same k_max, due to the different reduction rates Δk, the value of the parameter k in the final generation is different. As mentioned earlier, since different values of k emphasize different regions of the PF, the use of CSDR in the environmental selection introduces a bias that results in a non-uniform distribution of solutions. This is evident from the simulation results depicted in Figure 3 and Figure 4.
Figure 3. Final populations on the 3-objective DTLZ1 obtained by (a) NSGA-II/CSDR1, (b) NSGA-II/CSDR2, (c) NSGA-II/CSDR3 and (d) NSGA-II/CSDR4.
Figure 4. Final populations on the 3-objective DTLZ3 obtained by (a) NSGA-II/CSDR1, (b) NSGA-II/CSDR2, (c) NSGA-II/CSDR3 and (d) NSGA-II/CSDR4.
To demonstrate that different values of k are effective at different stages of the evolution, the changes in the probabilities of the parameter values in the ensemble pool are plotted in Figure 5 for the 4- and 8-objective instances of DTLZ3 and WFG1. From the figure, it is evident that, depending on the characteristics of the problem and the distribution of the current population, different values of k in the pool are effective at different stages of the evolution. In addition, there is no standard pattern of reducing k that is suitable for all the problems. Therefore, continuously reducing the parameter k, as done in NSGA-II/CSDR, is not suitable.
Figure 5. Probability of parameters during evolution on 4- and 8-objective DTLZ3 and WFG1. (a) 4-objective DTLZ3, (b) 4-objective WFG1, (c) 8-objective DTLZ3 and (d) 8-objective WFG1.
First, we compare the performance of NSGA-II, NSGA-III, NSGA-II/SDR and NSGA-II/CSDR to demonstrate the effect of not using Pareto dominance in the environmental selection. On the 2-objective instances of DTLZ1, DTLZ2 and DTLZ3, NSGA-II, NSGA-III and NSGA-II/CSDR, which employ Pareto dominance in the environmental selection, perform better than NSGA-II/SDR, which does not. The performance of the proposed NSGA-III* is comparable to the best result because it also employs Pareto dominance in the environmental selection.
On the higher-objective (> 4) instances of WFG4 to WFG9, the performance of NSGA-II/SDR is better than that of NSGA-II/CSDR according to the simulation results. This might be due to the linear reduction of k with respect to the generation count in NSGA-II/CSDR compared to the constant setting (k = 1) in NSGA-II/SDR. In other words, the linear reduction of k with generations is not suitable for all problems with diverse characteristics. However, the performance of NSGA-III* is comparable to or better than the best of NSGA-II/SDR and NSGA-II/CSDR, which indicates the effectiveness of the adaptive ensemble in finding a suitable k depending on the characteristics of the problem and the distribution of the population.
The performance of the proposed NSGA-III* was compared with that of state-of-the-art MOEAs, namely NSGA-II, NSGA-II/SDR, NSGA-II/CSDR, MOEA/D, MOEA/D-DE, NSGA-III, TDEA and ISDE+. The experimental results (mean and standard deviation of the normalized HV) on the benchmark suites are presented in Table 2. In addition, statistical tests (t-tests) at a 5% significance level were conducted to assess the significance of the difference between the mean metric values yielded by NSGA-III* and the state-of-the-art algorithms. The signs “+”, “−” and “≈” against the HV values indicate that NSGA-III* is statistically “better”, “worse” or “comparable”, respectively, compared with the corresponding algorithm. The last row of Table 2 summarizes the overall performance of NSGA-III* in terms of the number of instances where it is better (Win-W), comparable (Tie-T) or worse (Loss-L) than the corresponding algorithm.
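The per-instance comparison can be reproduced with a standard two-sample t-test; the snippet below illustrates the decision rule on hypothetical HV samples for two algorithms (the numbers are synthetic placeholders, not results reported here).

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    hv_star = rng.normal(0.85, 0.01, 30)      # hypothetical 30-run HV sample, NSGA-III*
    hv_other = rng.normal(0.83, 0.02, 30)     # hypothetical 30-run HV sample, competitor

    t_stat, p_value = stats.ttest_ind(hv_star, hv_other)
    if p_value >= 0.05:
        mark = "≈"                            # comparable at the 5% significance level
    else:
        mark = "+" if hv_star.mean() > hv_other.mean() else "−"
    print(mark, round(p_value, 4))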
For better visualization, the comparison with the state-of-the-art algorithms in terms of wins, ties and losses is summarized in Figure 6.
Figure 6. Comparison of NSGA-III* with other state-of-the-art algorithms.
As shown in Figure 6 and Table 2, NSGA-III* significantly outperforms or is comparable to NSGA-II, NSGA-II/SDR, NSGA-II/CSDR, MOEA/D, MOEA/D-DE, NSGA-III, TDEA and ISDE+ in 71⁄80 ≈ 88.75%, 66⁄80 ≈ 82.5%, 72⁄80 ≈ 90%, 70⁄80 ≈ 87.5%, 67⁄80 ≈ 83.75%, 74⁄80 ≈ 92.5%, 69⁄80 ≈ 86.25% and 56⁄80 ≈ 70% of the cases, respectively. In other words, NSGA-III* consistently performs better than the state-of-the-art algorithms. This can be attributed to the modified SDR-based mating selection with an ensemble of k values, which ensures that solutions are uniformly sampled over the entire range of the PF by prioritizing the respective k values. In addition, the weight vector-based environmental selection with Pareto dominance provides the required diversity on the PF without any bias towards specific regions. Furthermore, NSGA-III* is implemented in MATLAB using the PlatEMO [35] framework. The source code is accessible at https://github.com/Saykat1993/Mating-Selection-based-on-Modified-Strengthened-Dominance-Relation-for-NSGA-III.git (accessed on 2 November 2021).

5. Conclusions

In this manuscript, a modified strengthened dominance relation (MSDR) with an adaptive ensemble of parameter values that can enforce convergence is proposed. A multi/many-objective evolutionary algorithm (MOEA), referred to as NSGA-III*, that employs mating selection based on MSDR and environmental selection using weight vectors and Pareto dominance is then proposed. In NSGA-III*, the probability of applying each parameter value in the ensemble is adapted based on the performance of the parameters; in other words, the probabilities change depending on the nature of the problem and the distribution of the population. The environmental selection with Pareto dominance enables the diversity of the solutions over the entire Pareto front due to its unbiased nature. The performance of the proposed NSGA-III* framework is compared with that of various state-of-the-art MOEAs on standard benchmark test suites.

Supplementary Materials

The following are available online at https://www.mdpi.com/article/10.3390/math9222837/s1. In the supplementary materials, the hypervolume comparisons of NSGA-III* for the different pools of k values, named NSGA-III1* = (1.2, 0.7, 0.5, 0.3, 0.1), NSGA-III2* = (1.5, 1.2, 1, 0.5, 0.3), NSGA-III3* = (1.5, 1, 0.7, 0.5, 0.1), NSGA-III4* = (1.3, 1.1, 0.9, 0.5, 0.3) and NSGA-III5* = (1.2, 1, 0.8, 0.4, 0.1), are presented. In Table S1, the performance of NSGA-III1* is compared with that of the other variants; in a similar fashion, the performance of NSGA-III2*, NSGA-III3*, NSGA-III4* and NSGA-III5* is presented in Tables S2–S5, respectively.

Author Contributions

Conceptualization, S.D., S.S.R.M. and R.M.; methodology, S.D., S.S.R.M. and R.M.; software, S.D.; validation, S.D. and R.M.; formal analysis, S.D., S.S.R.M. and R.M.; investigation, K.N.D. and D.-G.L.; resources, K.N.D. and D.-G.L.; data curation, S.D. and R.M.; writing—original draft preparation, S.D. and S.S.R.M.; writing—review and editing, R.M., K.N.D. and D.-G.L.; visualization, S.D.; supervision, R.M.; project administration, K.N.D.; funding acquisition, R.M. and D.-G.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Research Foundation (NRF), Korea, under Project BK21.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Goldberg, D.E. Genetic Algorithms in Search, Optimization and Machine Learning; Addison-Wesley Longman Publishing Co., Inc.: Boston, MA, USA, 1989. [Google Scholar]
  2. Ikeda, K.; Kita, H.; Kobayashi, S. Failure of Pareto-based MOEAs: Does non-dominated really mean near to optimal? In Proceedings of the 2001 Congress on Evolutionary Computation (IEEE Cat. No.01TH8546), Seoul, Korea, 27–30 May 2001; Volume 2, pp. 957–962. [Google Scholar]
  3. Deb, K.; Pratap, A.; Agarwal, S.; Meyarivan, T. A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans. Evol. Comput. 2002, 6, 182–197. [Google Scholar] [CrossRef] [Green Version]
  4. Knowles, J.; Corne, D. The Pareto archived evolution strategy: A new baseline algorithm for Pareto multiobjective optimisation. In Proceedings of the 1999 Congress on Evolutionary Computation-CEC99 (Cat. No. 99TH8406), Washington, DC, USA, 6–9 July 1999; Volume 1, pp. 98–105. [Google Scholar]
  5. Zitzler, E.; Laumanns, M.; Thiele, L. SPEA2: Improving the Strength Pareto Evolutionary Algorithm; Eidgenössische Technische Hochschule Zürich (ETH), Institut für Technische Informatik und Kommunikationsnetze (TIK): Zurich, Switzerland, 2001; Volume 103. [Google Scholar]
  6. Zhang, Q.; Li, H. MOEA/D: A Multiobjective Evolutionary Algorithm Based on Decomposition. IEEE Trans. Evol. Comput. 2007, 11, 712–731. [Google Scholar] [CrossRef]
  7. Liu, H.; Gu, F.; Zhang, Q. Decomposition of a Multiobjective Optimization Problem Into a Number of Simple Multiobjective Subproblems. IEEE Trans. Evol. Comput. 2014, 18, 450–455. [Google Scholar] [CrossRef] [Green Version]
  8. Li, K.; Deb, K.; Zhang, Q.; Kwong, S. An Evolutionary Many-Objective Optimization Algorithm Based on Dominance and Decomposition. IEEE Trans. Evol. Comput. 2015, 19, 694–716. [Google Scholar] [CrossRef]
  9. Ishibuchi, H.; Tsukamoto, N.; Sakane, Y.; Nojima, Y. Indicator-based evolutionary algorithm with hypervolume approximation by achievement scalarizing functions. In Proceedings of the 12th Annual Conference on Genetic and Evolutionary Computation, Portland, OR, USA, 11–13 July 2010. [Google Scholar]
  10. Bader, J.; Zitzler, E. HypE: An Algorithm for Fast Hypervolume-Based Many-Objective Optimization. Evol. Comput. 2011, 19, 45–76. [Google Scholar] [CrossRef] [PubMed]
  11. Pamulapati, T.; Mallipeddi, R.; Suganthan, P.N. ISDE +—An Indicator for Multi and Many-Objective Optimization. IEEE Trans. Evol. Comput. 2019, 23, 346–352. [Google Scholar] [CrossRef]
  12. Palakonda, V.; Mallipeddi, R. Pareto Dominance-Based Algorithms With Ranking Methods for Many-Objective Optimization. IEEE Access 2017, 5, 11043–11053. [Google Scholar] [CrossRef]
  13. Deb, K.; Jain, H. An Evolutionary Many-Objective Optimization Algorithm Using Reference-Point-Based Nondominated Sorting Approach, Part I: Solving Problems With Box Constraints. IEEE Trans. Evol. Comput. 2014, 18, 577–601. [Google Scholar] [CrossRef]
  14. Cheng, R.; Jin, Y.; Olhofer, M.; Sendhoff, B. A Reference Vector Guided Evolutionary Algorithm for Many-Objective Optimization. IEEE Trans. Evol. Comput. 2016, 20, 773–791. [Google Scholar] [CrossRef] [Green Version]
  15. Das, I.; Dennis, J.E. Normal-Boundary Intersection: A New Method for Generating the Pareto Surface in Nonlinear Multicriteria Optimization Problems. SIAM J. Optim. 1998, 8, 631–657. [Google Scholar] [CrossRef] [Green Version]
  16. Sato, H.; Aguirre, H.E.; Tanaka, K. Controlling Dominance Area of Solutions and Its Impact on the Performance of MOEAs. In Evolutionary Multi-Criterion Optimization; Obayashi, S., Deb, K., Poloni, C., Hiroyasu, T., Murata, T., Eds.; Springer: Berlin/Heidelberg, Germany, 2007; pp. 5–20. [Google Scholar]
  17. Wang, G.; Jiang, H. Fuzzy-Dominance and Its Application in Evolutionary Many Objective Optimization. In Proceedings of the 2007 International Conference on Computational Intelligence and Security Workshops (CISW 2007), Harbin, China, 15–19 December 2007; pp. 195–198. [Google Scholar]
  18. Sato, H.; Aguirre, H.E.; Tanaka, K. Self-Controlling Dominance Area of Solutions in Evolutionary Many-Objective Optimization. In Simulated Evolution and Learning; Deb, K., Ed.; Springer: Berlin/Heidelberg, Germany, 2010; pp. 455–465. [Google Scholar]
  19. Yuan, Y.; Xu, H.; Wang, B.; Yao, X. A New Dominance Relation-Based Evolutionary Algorithm for Many-Objective Optimization. IEEE Trans. Evol. Comput. 2016, 20, 16–37. [Google Scholar] [CrossRef]
  20. Zhu, C.; Xu, L.; Goodman, E.D. Generalization of Pareto-Optimality for Many-Objective Evolutionary Optimization. IEEE Trans. Evol. Comput. 2016, 20, 299–315. [Google Scholar] [CrossRef]
  21. Laumanns, M.; Thiele, L.; Deb, K.; Zitzler, E. Combining convergence and diversity in evolutionary multiobjective optimization. Evol. Comput. 2002, 10, 263–282. [Google Scholar] [CrossRef]
  22. Yang, S.; Li, M.; Liu, X.; Zheng, J. A Grid-Based Evolutionary Algorithm for Many-Objective Optimization. IEEE Trans. Evol. Comput. 2013, 17, 721–736. [Google Scholar] [CrossRef]
  23. Elarbi, M.; Bechikh, S.; Gupta, A.; Said, L.B.; Ong, Y. A New Decomposition-Based NSGA-II for Many-Objective Optimization. IEEE Trans. Syst. Man Cybern. Syst. 2018, 48, 1191–1210. [Google Scholar] [CrossRef]
  24. Tian, Y.; Cheng, R.; Zhang, X.; Su, Y.; Jin, Y. A Strengthened Dominance Relation Considering Convergence and Diversity for Evolutionary Many-Objective Optimization. IEEE Trans. Evol. Comput. 2019, 23, 331–345. [Google Scholar] [CrossRef] [Green Version]
  25. Shen, J.; Wang, P.; Wang, X. A Controlled Strengthened Dominance Relation for Evolutionary Many-Objective Optimization. IEEE Trans. Cybern. 2020, 1–13. [Google Scholar] [CrossRef]
  26. Srinivas, N.; Deb, K. Muiltiobjective Optimization Using Nondominated Sorting in Genetic Algorithms. Evol. Comput. 1994, 2, 221–248. [Google Scholar] [CrossRef]
  27. Li, M.; Yang, S.; Liu, X. Shift-Based Density Estimation for Pareto-Based Algorithms in Many-Objective Optimization. IEEE Trans. Evol. Comput. 2014, 18, 348–365. [Google Scholar] [CrossRef] [Green Version]
  28. Hernández-Díaz, A.G.; Santana-Quintero, L.V.; Coello, C.A.C.; Molina, J. Pareto-adaptive ε-dominance. Evol. Comput. 2007, 15, 493–517. [Google Scholar] [CrossRef] [PubMed]
  29. Batista, L.S.; Campelo, F.; Guimarães, F.G.; Ramírez, J.A. Pareto Cone ε-Dominance: Improving Convergence and Diversity in Multiobjective Evolutionary Algorithms. In Evolutionary Multi-Criterion Optimization; Takahashi, R.H.C., Deb, K., Wanner, E.F., Greco, S., Eds.; Springer: Berlin/Heidelberg, Germany, 2011; pp. 76–90. [Google Scholar]
  30. Liu, Y.; Zhu, N.; Li, K.; Li, M.; Zheng, J.; Li, K. An angle dominance criterion for evolutionary many-objective optimization. Inf. Sci. 2020, 509, 376–399. [Google Scholar] [CrossRef]
  31. Farina, M.; Amato, P. A fuzzy definition of “optimality” for many-criteria optimization problems. IEEE Trans. Syst. Man Cybern. Part A Syst. Hum. 2004, 34, 315–326. [Google Scholar] [CrossRef]
  32. Zou, X.; Chen, Y.; Liu, M.; Kang, L. A New Evolutionary Algorithm for Solving Many-Objective Optimization Problems. IEEE Trans. Syst. Man Cybern. Part B Cybern. 2008, 38, 1402–1412. [Google Scholar]
  33. He, Z.; Yen, G.G.; Zhang, J. Fuzzy-Based Pareto Optimality for Many-Objective Evolutionary Algorithms. IEEE Trans. Evol. Comput. 2014, 18, 269–285. [Google Scholar] [CrossRef]
  34. Zitzler, E.; Deb, K.; Thiele, L. Comparison of Multiobjective Evolutionary Algorithms: Empirical Results. Evol. Comput. 2000, 8, 173–195. [Google Scholar] [CrossRef] [Green Version]
  35. Tian, Y.; Cheng, R.; Zhang, X.; Jin, Y. PlatEMO: A MATLAB Platform for Evolutionary Multi-Objective Optimization [Educational Forum]. IEEE Comput. Intell. Mag. 2017, 12, 73–87. [Google Scholar] [CrossRef] [Green Version]
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
