Article

A Mating Selection Based on Modified Strengthened Dominance Relation for NSGA-III

1 Department of Mathematics, National Institute of Technology Silchar, Assam 788010, India
2 Department of Artificial Intelligence, School of Electronics Engineering, Kyungpook National University, Daegu 41566, Korea
* Author to whom correspondence should be addressed.
Mathematics 2021, 9(22), 2837; https://doi.org/10.3390/math9222837
Submission received: 9 October 2021 / Revised: 3 November 2021 / Accepted: 3 November 2021 / Published: 10 November 2021
(This article belongs to the Special Issue Numerical Analysis and Scientific Computing)

Abstract:
In multi/many-objective evolutionary algorithms (MOEAs), to alleviate the degraded convergence pressure of Pareto dominance as the number of objectives increases, numerous modified dominance relations have been proposed. Recently, the strengthened dominance relation (SDR) was proposed, where the dominance area of a solution is determined by the convergence degree and the niche size (θ̄). Later, in controlled SDR (CSDR), θ̄ and an additional parameter (k) associated with the convergence degree are dynamically adjusted depending on the iteration count. Depending on the problem characteristics and the distribution of the current population, different situations require different values of k, rendering the linear reduction of k based on the generation count ineffective. This is because a particular value of k is expected to bias the dominance relation towards a particular region on the Pareto front (PF). In addition, for the same reason, using SDR or CSDR in the environmental selection cannot preserve the diversity of solutions required to cover the entire PF. Therefore, we propose an MOEA, referred to as NSGA-III*, where (1) a modified SDR (MSDR)-based mating selection with an adaptive ensemble of the parameter k prioritizes parents from specific sections of the PF depending on k, and (2) the traditional weight vector and non-dominated sorting-based environmental selection of NSGA-III protects the solutions corresponding to the entire PF. The performance of NSGA-III* is favourably compared with state-of-the-art MOEAs on the DTLZ and WFG test suites with up to 10 objectives.

1. Introduction

In the literature [1], evolutionary algorithms (EAs) have demonstrated their ability to tackle a variety of optimization problems efficiently. Many real-world optimization problems involve several conflicting objectives that must be optimized simultaneously. Without prior preference information, the existence of conflicting objectives inevitably makes it impossible to find a single solution that is globally optimal with respect to all of the objectives. In such a situation, instead of a total order between solutions, only partial orders may be anticipated, resulting in a solution set consisting of alternative solutions that represent different trade-offs among the objectives. However, one of the difficulties in multi-objective optimization, compared to single-objective optimization, is that there is no unique or straightforward quality assessment method to rank all the solutions obtained and to guide the search process towards better regions. Multi-objective evolutionary algorithms (MOEAs) are particularly suitable for this task because they simultaneously evolve a population of potential solutions to the problem at hand, which facilitates the search for a set of Pareto non-dominated solutions in a single run of the algorithm. Classically, a multi-objective problem (MOP) can be briefly stated as
$$\min \; \mathbf{f}(\mathbf{x}) = \left( f_1(\mathbf{x}), f_2(\mathbf{x}), \ldots, f_M(\mathbf{x}) \right) \quad \text{s.t.} \quad \mathbf{x} = (x_1, x_2, \ldots, x_n) \in X, \; X \subseteq \mathbb{R}^n$$
However, as the number of objectives incorporated in a problem is more than three, typically known as many-objective optimization problems (MaOPs), many-objective evolutionary algorithms (MaOEAs) have undergone a lot of difficulties. First and foremost, most of the solutions in a population become non-dominated with each other with the increase in the number of objectives. Due to this tendency, the selection pressure toward the Pareto front (PF) deteriorates significantly, making the convergence process of MaOEAs very difficult, especially for the MaOEAs that use the Pareto dominance relation as a key selection criterion. In many-objective optimization, the phenomenon where most of the candidate solutions become incomparable in the sense of Pareto dominance is referred to as dominance resistance [2]. Because of dominance resistance, the ranking of the solutions would depend on the secondary selection criterion, which is diversity measure. Therefore, in the mating selection, the selection of the effective solutions to generate offspring depends on the crowding distance [3]. A variety of approaches have been suggested to address the issues raised by MaOPs in the context of the challenges discussed above. The utmost criterion in any MaOEA is the selection criterion, specifically the environmental selection criterion, which is used to reduce the overabundance of population members. Generally, MOEAs/MaOEAs can be roughly divided into three categories according to their environmental selection strategies: dominance-based [3,4,5], decomposition-based [6,7,8] and indicator-based [9,10,11]. Dominance-based methods mainly use the concept of Pareto dominance along with some diversity measurement criteria. NSGA-II [3] and PDMOEAs [12] are well-known in this category. Decomposition-based methods decompose an MOP/MaOP into a set of sub-problems and optimize them simultaneously. MOEA/D [6], NSGA-III [13], MOEA/DD [8] and RVEA [14] are some popular approaches under this category where a set of pre-defined, uniformly distributed reference vectors [15] are utilized to manage population convergence and diversity. Some of these approaches also incorporate Pareto dominance as a primary criterion to enhance the convergence of the population. Indicator-based approaches quantify the quality of candidate solutions by the indicator value which is a singular value obtained by combining information present in M-objective values. Indicator-based EA (IBEA) [9], HypE [10] and ISDE+ [11] are some representative algorithms in this category.
To enhance the performance of dominance-based MOEAs, the first and most intuitive idea is to establish a new dominance relation or modify the traditional Pareto dominance to increase the selection pressure toward the PF. In literature, many dominance relationships have been proposed in the last couple of years [16,17,18,19,20]. The controlling dominance area of solutions (CDAS) [16] and its adaptive version, referred to as self-CDAS (S-CDAS) [18], improve the convergence pressure by expanding the dominance area. In the generalized Pareto optimality (GPO) [20] and α-dominance [2], the dominance area is expanded by modifying the definition of dominance relation. Dominance relations such as ϵ –dominance [21] and grid dominance [22] are based on the gridding of the objective space. θ-dominance [19] and RP-dominance [23] are proposed to suit decomposition-based MaOEAs. Recently, in [24], a new dominance relation referred to as the strengthened dominance relation (SDR) was proposed where a tailored niching technique considering the angles between the candidate solutions was developed. In the proposed niching technique, the niche size ( θ ¯ ) is adaptively determined by the distribution of the candidate solutions. In each niche of size θ ¯ , the solution with the best convergence degree defined by the sum of the objectives is selected. However, in SDR there is a possibility that some dominated solutions may be considered as non-dominated. To overcome this, a modification to SDR referred to as the controlled strengthened dominance relation (CSDR) was proposed in [25], where SDR is combined with traditional Pareto dominance. In addition, CSDR introduces two parameters k and a into convergence degree and niche size, to control the dominance area and to be adapted based on the generation count. In other words, CSDR degenerates into SDR when the traditional Pareto dominance condition is removed and the parameters k and a are set to 1 and 50, respectively.
However, the use of SDR or CSDR in the niching process of the oversized population during the environmental selection results in the loss of some promising solutions as k value stresses on some sections of the PF. In other words, the usage of SDR or CSDR in environmental selection may not be suitable. In addition, in the mating selection, a particular value of k concentrates on a particular region of the PF. Therefore, the setting of the parameter k should depend on the distribution of the current population. In other words, the linear reduction in k would not be appropriate for the efficient parent selection for offspring generation.
Motivated by the observations that (1) by controlling the parameter k , different sections of the PF can be emphasized, and (2) different stages of the evolutions require different parameter values of k depending on the status of the population, we propose a mating selection that employs modified SDR with an adaptive ensemble of parameters where the probability of applying the parameter values in the ensemble depends on the success rate of the parameter values. However, the environmental selection is similar to the traditional NSGA-III because it is able to preserve solutions from the different parts of the PF, thus resulting in the selection of solutions that are diverse and represent the entire PF.
The remainder of this article is organized as follows. Section 2 covers the literature on different dominance relationships and the motivation for the current study. Section 3 describes the modified SDR and NSGA-III* framework with a modified SDR-based mating selection. Section 4 presents the experimental setup and comparison results of NSGA-III* with a number of state-of-the-art MOEAs/MaOEAs. Section 5 concludes the paper.

2. Related Study and Motivation

Given an MOP with M-objectives that are conflicting, as shown in Equation (1), the goal of MOEAs/MaOEAs is to find an optimal set of Pareto-optimal solutions (PS) whose objective values are usually referred to as a Pareto front (PF) that covers the entire decision-making range. In addition, since MOEAs employ a fixed population to cover the entire range, the solutions are expected to be diverse. Therefore, the goal of MOEAs is to start with the random initialization of N solutions, referred to as population size, and drive the population close to the PF while maintaining the spread of solutions to cover the entire PF. MOEAs employ two main selection steps referred to as the mating selection and environmental selection to accomplish the task. The aim of the mating selection is to select better population members so that better offspring members can be produced through variation operators. On the contrary, the goal of environmental selection is to select a set of N solutions from the 2 N solutions, which is the combined set of population and offspring members. In other words, the aim of mating selection is to prioritize better population members in the generation of offspring members, while the aim of environmental selection is to preserve better solutions for further generations. Therefore, for better convergence and diversity in MOEAs, both the selection operators play a crucial role. To promote convergence, both the selection operators employ different mechanisms. The most popular among them is Pareto dominance which can be enforced through non-dominated sorting [3,26].
In an MOP, when the goal is to minimize all the objectives f_i simultaneously, a candidate solution x Pareto dominates another solution y (i.e., x ≺ y) if and only if
$$\begin{cases} \forall\, i \in \{1, 2, \ldots, M\}: \; f_i(\mathbf{x}) \le f_i(\mathbf{y}) \\ \exists\, j \in \{1, 2, \ldots, M\}: \; f_j(\mathbf{x}) < f_j(\mathbf{y}) \end{cases}$$
If neither x dominates y nor y dominates x, then x and y are said to be “incomparable”; alternatively, both solutions are “non-dominated” with respect to each other. As M, i.e., the number of objectives, increases, the probability of a solution being dominated by the other solutions decreases. In other words, in a given set of solutions, most of the solutions are labeled as “non-dominated”. Therefore, during the mating and environmental selections, the primary selection criterion, i.e., Pareto dominance, fails to distinguish solutions, so the selection depends entirely on the secondary selection criterion, e.g., crowding distance [3], density estimation [27], etc. In other words, Pareto dominance, which is expected to promote convergence, does not play any role in either selection step. Selecting solutions based on diversity alone is not expected to promote convergence, thus leading to the failure of the MOEAs.
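As a minimal illustration of the Pareto dominance test above, the following sketch (Python/NumPy, with a function name of our choosing) checks whether one objective vector dominates another under minimization; it is not part of the authors' implementation.

import numpy as np

def pareto_dominates(fx, fy):
    # True if fx is no worse than fy in every objective and strictly better in at least one
    fx, fy = np.asarray(fx, float), np.asarray(fy, float)
    return bool(np.all(fx <= fy) and np.any(fx < fy))

# Example: (0.2, 0.5) dominates (0.3, 0.5), while (0.2, 0.7) and (0.3, 0.5) are incomparable
print(pareto_dominates([0.2, 0.5], [0.3, 0.5]))   # True
print(pareto_dominates([0.2, 0.7], [0.3, 0.5]))   # False
print(pareto_dominates([0.3, 0.5], [0.2, 0.7]))   # False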
To address the convergence issue, various approaches have been proposed to increase the probability that two candidate solutions are distinguishable and thus improve the selection pressure. The different approaches can be categorized as:
  • Approaches that expand the dominance area by modifying the objective values, such as CDAS [16] and S-CDAS [18].
  • Approaches that expand the dominance area by modifying the dominance relation, such as α-dominance [2] and generalized Pareto optimality (GPO) [20].
  • Approaches that employ gridding in the objective space, such as ϵ-dominance [21], paϵ-dominance [28], cone ϵ-dominance [29], grid dominance [22] and angle dominance [30].
  • Approaches that use fuzzy logic to define new dominance relations, such as (1-k) dominance [31], L-dominance [32] and fuzzy dominance [33].
As illustrated in [24]:
(a) Pareto dominance, (1-k) dominance and L-dominance are good at achieving diversity but poor at promoting convergence.
(b) CDAS, GPO and grid-based methods are good at achieving convergence but poor at maintaining diversity.
(c) S-CDAS is poor at promoting both diversity and convergence.
To overcome these issues, the strengthened dominance relation (SDR) and the controlled strengthened dominance relation (CSDR) were proposed. According to CSDR, a solution x is said to CSDR-dominate a solution y (denoted as x ≺_CSDR y) if and only if
$$\text{(i) } \mathbf{x} \text{ Pareto dominates } \mathbf{y}, \quad \text{or} \quad \text{(ii) } \begin{cases} Con(\mathbf{x}) < Con(\mathbf{y}), & \theta_{xy} \le \bar{\theta} \\ Con(\mathbf{x}) \cdot \dfrac{\theta_{xy}}{\bar{\theta}} < Con(\mathbf{y}), & \theta_{xy} > \bar{\theta} \end{cases}$$
where $Con(\mathbf{x}) = \sum_{i=1}^{M} f_i(\mathbf{x})^{k}$ is a metric measuring the convergence degree of x [11], and θ_xy is the acute angle between the objective vectors of two candidate solutions x and y in a population P, expressed as
$$\theta_{xy} = \arccos \left( \frac{\mathbf{f}(\mathbf{x}) \cdot \mathbf{f}(\mathbf{y})}{\left| \mathbf{f}(\mathbf{x}) \right| \, \left| \mathbf{f}(\mathbf{y}) \right|} \right)$$
and θ̄ is the niche size, which is set to the $(a \cdot |P|/100)$-th ($a \in [1, 100]$) minimum element of
$$\left\{ \min_{\mathbf{q} \in P \setminus \{\mathbf{p}\}} \theta_{pq} \;\middle|\; \mathbf{p} \in P \right\}$$
Before calculating C o n ( x ) and θ x y , the solutions in P are normalized with respect to the ideal and nadir points. The ideal point in the objective space is a vector composed of the optimum of each objective function. On the other hand, the nadir point is a vector made up of the worst of each objective function in the objective space.
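The following sketch illustrates, on already-normalized objective vectors, the quantities used by SDR/CSDR (the convergence degree Con, the acute angle θ_xy and the niche size θ̄) together with the CSDR test of Equation (3). It is only our illustrative reading of the definitions above (e.g., the exponent k applied inside the sum and the ceiling used when indexing the (a·|P|/100)-th element are interpretations), and the function names are ours, not from [24,25].

import numpy as np

def con(fx, k=1.0):
    # Convergence degree: sum of the (normalized) objective values raised to the power k
    return float(np.sum(np.asarray(fx, float) ** k))

def acute_angle(fx, fy):
    # Acute angle between two objective vectors
    fx, fy = np.asarray(fx, float), np.asarray(fy, float)
    c = np.dot(fx, fy) / (np.linalg.norm(fx) * np.linalg.norm(fy) + 1e-12)
    return float(np.arccos(np.clip(c, -1.0, 1.0)))

def niche_size(F, a=50):
    # theta_bar: the (a*|P|/100)-th smallest of each solution's minimum angle to the rest of P
    n = len(F)
    min_angles = [min(acute_angle(F[i], F[j]) for j in range(n) if j != i) for i in range(n)]
    idx = max(0, int(np.ceil(a * n / 100.0)) - 1)
    return sorted(min_angles)[idx]

def csdr_dominates(fx, fy, k, theta_bar):
    # Equation (3): Pareto dominance, or the angle-controlled convergence comparison
    fx, fy = np.asarray(fx, float), np.asarray(fy, float)
    if np.all(fx <= fy) and np.any(fx < fy):
        return True
    t = acute_angle(fx, fy)
    if t <= theta_bar:
        return con(fx, k) < con(fy, k)
    return con(fx, k) * t / theta_bar < con(fy, k)

Dropping the Pareto-dominance branch and fixing k = 1 and a = 50 recovers SDR, as noted below.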
In CSDR, by removing the first condition (Equation (3)i) related to Pareto dominance and setting the parameters k and a to 1 and 50, respectively, it degenerates into SDR. According to the definition of SDR, some of the Pareto-dominated solutions might be classified as non-dominated, which is not desired. However, in [24], it is claimed that since only a few candidate solutions in the population are Pareto-dominated on MaOPs, the classification of dominated solutions as non-dominated solutions has little influence on the performance of MOEA that employs SDR. However, this contradicts the claim regarding dominance resistance, and classifying dominated solutions as non-dominated further aggravates the issue. Therefore, CSDR combines SDR with traditional Pareto dominance and demonstrates a significant improvement in the performance in certain problem instances. However, the use of Pareto dominance in combination with SDR increases the computational complexity of the process.
According to condition (ii) of Equation (3), which is applicable to both CSDR and SDR, the dominance relationship of solutions in the population is mainly controlled by the niche size θ̄. According to the first case of condition (ii), if the acute angle between solution x and solution y is smaller than θ̄, then the convergence degree determines whether x ≺_CSDR y or x ≺_SDR y. Hence, in each niche, in addition to preserving diversity, the required convergence pressure is enforced. In the second case of condition (ii), even if the acute angle between x and y is greater than θ̄, x ≺_CSDR y or x ≺_SDR y is possible if the convergence degree of x is much smaller than that of y. However, as θ_xy increases, the probability that x ≺_CSDR y or x ≺_SDR y decreases.
The number of niches, the diversity of the solutions and the dominance area of the solutions depend on the niche size (θ̄). According to Equation (4), the niche size θ̄ can be controlled by adjusting the parameter a. In other words, the proportion of the dominance area increases with the increase of a, strengthening the convergence pressure towards the PF due to a smaller number of niches. On the other hand, a small value of a is expected to improve the diversity due to a large number of niches. However, the lower and upper bounds of a have to be restricted. In [25], it is justified that values around 50 would be better, mostly in the range from 40 to 60. This is based on the intuition that, since the environmental selection of the majority of current MOEAs consistently chooses half of the combined population acquired at each generation, the target of adapting θ̄ is to guarantee that the ratio of non-dominated solutions in a given candidate set is around 0.5.
To demonstrate the effect of k, keeping the niche size θ̄ constant, three solutions (A, B and C) are considered in a bi-objective space corresponding to a convex MOP, and the convergence degree of the solutions when k varies from 0 to 2 is plotted in Figure 1. From Figure 1, it is evident that A always has a better convergence degree than C because A ≺ C. For k > 1, the candidate solutions (A and C) from the center region of the objective space have a better convergence degree. On the other hand, for 0 < k < 1, corner solutions (B) away from the centre of the objective space have a better convergence degree. Based on the observation in [25], it can be concluded that (1) larger values of k enhance the convergence pressure, and (2) small values of k enhance diversity.
Based on the above intuition, in CSDR, the parameters k and a can be adjusted as follows:
$$k = k_{max} - \Delta k \cdot \left( \frac{t}{t_{max}} \right)$$
$$a = a_{max} - \Delta a \cdot \left( \frac{t}{t_{max}} \right)$$
where k_max and a_max are predefined initial values, Δk and Δa are the total variations of k and a, respectively, and t_max is the maximum number of generations. The parameter settings are detailed in [25]. The decrease of the parameters k and a with respect to the generation count is based on the notion that dynamic dominance relations improve the performance of MOEAs. In other words, high convergence pressure is exerted to push the population towards the PF in the early phase of evolution, and as the search progresses, population diversity is enforced in the selection to generate well-distributed solutions. The value of a starts at 60 in the initial generation and is reduced to 40 in the final generation. In other words, with a large value of a, the niche size (θ̄) is large, resulting in a smaller number of niches. This helps the better segregation of solutions and thus promotes convergence. On the other hand, a smaller niche size (θ̄) in the later stages increases the number of niches to accommodate enough well-spread solutions. Therefore, with respect to the parameter a, starting with a large value and reducing it over the generations shifts the focus from convergence to diversity as the number of generations increases.
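For concreteness, a small sketch of the linear schedules in Equations (5) and (6) is given below. The default values k_max = 1.6, Δk = 0.6, a_max = 60 and Δa = 20 are only examples taken from the discussion in this paper and Section 4, not necessarily the exact settings of [25].

def csdr_schedule(t, t_max, k_max=1.6, delta_k=0.6, a_max=60.0, delta_a=20.0):
    # Linear reduction of k and a with the generation count t (Equations (5) and (6))
    k = k_max - delta_k * (t / t_max)
    a = a_max - delta_a * (t / t_max)
    return k, a

# Example: (k, a) moves from (1.6, 60.0) at t = 0 to (1.0, 40.0) at t = t_max
print(csdr_schedule(0, 250), csdr_schedule(250, 250))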
However, as mentioned earlier, the observation that larger values of k promote convergence and smaller values of k promote diversity is not correct. Actually, depending on the nature of the problem, different values of k prioritize different regions of the PF. In other words, in a convex problem (ZDT1 [34]), as the value of k is decreased, the focus shifts from the center of the PF to the edges of the PF, as shown in Figure 2a with five different solutions taken from the true PF. In a concave problem (ZDT2 [34]), however, the contrary happens, as shown in Figure 2b. Therefore, the actual observation is that as k changes, the dominance relation shifts its focus to different regions of the PF.
In addition, the use of CSDR in the environmental selection might result in issues such as: (1) As the adaptation of k is based on the number of generations, in a convex problem, if the population has already converged to the central part of the optimal PF by the time k comes down to a value below 1, then decreasing k further allows the algorithm to slowly shift the focus to the edges; in this case, the reduction in k combined with the decrease in a helps CSDR perform well. However, if k comes down to a value below 1 before the population converges to the optimal PF in the central region, then decreasing k further only concentrates on the edges. Therefore, the centre part of the PF is left unexplored, resulting in degraded performance. In other words, some sections of the PF would not be adequately explored. (2) As the niche size is reduced with the generation count, the number of niches increases, resulting in some sparse regions during the search process. Therefore, different values of the parameter k would be helpful at different stages depending on the distribution of the current population.
Thus, the reduction of k with respect to the generation count heavily depends on the maximum generation count and is not appropriate for obtaining a well-converged and diverse set of solutions. Motivated by this observation, we propose a modified SDR that employs an ensemble of values of the parameter k. The probability of employing each parameter value in the ensemble pool is adapted over the generations depending on its performance. The modified SDR is only employed in the mating selection to select better parents for offspring generation. In contrast, the environmental selection is based on traditional Pareto dominance because of its unbiased nature and ability to preserve solutions corresponding to the entire PF.

3. Controlled Strengthened Dominance-Based Mating Selection with Adaptive Ensemble of Parameters for NSGA-III (NSGA-III*)

In this section, the concept of the modified SDR is initially described, followed by the framework of NSGA-III*. Based on the observations mentioned in the previous section, a modified SDR (MSDR) is proposed, according to which a solution x is said to MSDR-dominate another solution y (i.e., x ≺_MSDR y) if and only if
$$\begin{cases} Con(\mathbf{x}) < Con(\mathbf{y}), & \theta_{xy} \le \bar{\theta} \\ Con(\mathbf{x}) \cdot \dfrac{\theta_{xy}}{\bar{\theta}} < Con(\mathbf{y}), & \theta_{xy} > \bar{\theta} \end{cases}$$
where the definitions of Con(x), θ_xy and θ̄ are the same as in Equation (3). The adaptation of θ̄ is the same as in CSDR. However, the value of k is selected from a fixed pool of values sampled from the range (0, 2). In the current study, the size of the pool is set to five. The selection of the parameters from the pool is probabilistic, where the probabilities are adapted depending on the number of successful offspring members produced by the parameter values in the pool.
The basic framework of the proposed NSGA-III* is as follows:
(1)
Mating selection that employs modified SDR.
(2)
Environmental selection is similar to standard NSGA-III with weight vectors and traditional Pareto dominance.
NSGA-III* starts with a parent population P_0 of size N (Algorithm 1, Line 1). In each generation (t), the mating selection is performed by probabilistically selecting a k value from the pool, and a mating pool M_t is created (Algorithm 1, Line 4). After mating selection, the offspring population (Q_t) of size N is created (Algorithm 1, Line 5). The offspring population (Q_t) and population (P_t) are combined to form R_t (Algorithm 1, Line 6) and normalized (Algorithm 1, Line 7). Through environmental selection, the best N solutions are selected from R_t to form the population members of the next generation P_{t+1} (Algorithm 1, Line 8).
Algorithm 1: NSGA-III* pseudo-code
Input: P_0 (initial population), N (population size), W (set of weight vectors), k = (k_1, k_2, ..., k_p) (pool of k values), pr_t = (pr_{1,t}, pr_{2,t}, ..., pr_{p,t}) (probability of selecting each k), t_max (maximum number of generations)
Output: P_{t_max} (final population)
   01: P_0 ← Generate initial population (N)
   02: t = 1
   03: While (t < t_max) do
   04:   M_t ← Mating_selection (P_t, N, k, pr_t)
   05:   Q_t ← Variation (M_t, N)
   06:   R_t ← P_t ∪ Q_t
   07:   R_t ← Normalization (R_t)
   08:   P_{t+1} ← Environmental_selection (W, R_t, N)
   09:   pr_{t+1} = Adapt_pr (P_{t+1}, pr_t)
   10:   t = t + 1
   11: End While

3.1. Initialization

A set of uniform weight vectors (W) is generated using the NBI method [15]; subsequently, a population P_0 of size N (= |W|) is initialized within the permissible boundaries. The pool of values and their initial probabilities of selection corresponding to the parameter k are set as k = (k_1, k_2, ..., k_p) and pr_0 = (pr_{1,0}, pr_{2,0}, ..., pr_{p,0}), respectively. The size of the pool (p) is set to five in the current study. The effect of the parameter values in the ensemble on the performance of the algorithm is demonstrated in Section 4.
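A possible initialization is sketched below: it generates the Das and Dennis (NBI) weight vectors [15], a random population within the box bounds, and the k pool with equal initial selection probabilities. The equal initial probabilities and the helper names are our assumptions for illustration; the paper only states that pr_0 is set.

import numpy as np
from itertools import combinations

def das_dennis_weights(M, H):
    # All M-dimensional weight vectors with components i/H summing to 1 (NBI method [15])
    W = []
    for dividers in combinations(range(H + M - 1), M - 1):
        prev, parts = -1, []
        for d in list(dividers) + [H + M - 1]:
            parts.append(d - prev - 1)
            prev = d
        W.append(np.array(parts) / H)
    return np.array(W)

def initialize(M, H, D, lower, upper, seed=0):
    rng = np.random.default_rng(seed)
    W = das_dennis_weights(M, H)
    N = len(W)                                     # population size N = |W|
    P0 = rng.uniform(lower, upper, size=(N, D))    # random population within the bounds
    k_pool = [1.5, 1.2, 1.0, 0.5, 0.3]             # e.g., the NSGA-III2* pool of Section 4
    pr0 = np.full(len(k_pool), 1.0 / len(k_pool))  # equal initial selection probabilities (assumption)
    return W, P0, k_pool, pr0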

3.2. Mating Selection with Modified SDR and Offspring Generation

At each generation (t), the parameter value corresponding to a is obtained from Equation (6). In addition, considering each value k_i in the pool k = (k_1, k_2, ..., k_p), the population P_t is sorted based on the modified SDR (Algorithm 2, Line 02). After sorting, (pr_{i,t} × N) solutions are selected into the mating pool through binary tournament selection, where the probabilities corresponding to each parameter value of k at generation t are given by pr_t = (pr_{1,t}, pr_{2,t}, ..., pr_{p,t}) (Algorithm 2, Line 03). After repeating the process for each value k_i, the mating pool M_t = ∪_{i=1,2,...,p} M_{i,t} is formed (Algorithm 2, Line 05). Since the mating pool is formed considering different values of k, the parents selected for offspring generation are sampled from different regions of the PF. In addition, as the probabilities of the parameter values are adapted based on their performance, the k values that perform better are given a chance to produce more offspring members (Algorithm 4). In other words, the region of the PF that corresponds to the k values with high probabilities is given priority, and more offspring members are generated in that region. In conclusion, depending on the nature of the problem and the distribution of the population members, different stages of the evolution require different k values. Moreover, the selection probabilities of each parameter value in the pool reflect the state of the current population. The details regarding the adaptation of the probabilities corresponding to the k parameter values in the pool are presented in Section 3.5 (Algorithm 4). After forming the mating pool, the variation operators, namely crossover and mutation, are employed to produce the offspring members (Algorithm 1, Line 05). In the current study, simulated binary crossover (SBX) and polynomial mutation (PM) are employed.
Algorithm 2: Mating_selection (P_t, N, k, pr_t)
Input: P_t (population at generation t), N (population size), k = (k_1, k_2, ..., k_p), pr_t = (pr_{1,t}, pr_{2,t}, ..., pr_{p,t}) (probability of selecting each k_i at generation t)
Output: M_t (mating pool)
   01: For i = 1 : p
   02:   Perform the MSDR sorting with k_i on P_t and calculate the crowding degree of the solutions
   03:   M_{i,t} = Select (pr_{i,t} × N) solutions by binary tournament selection
   04: End For
   05: M_t = ∪_{i=1,2,...,p} M_{i,t}
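A simplified sketch of Algorithm 2 follows. It takes an MSDR predicate such as the csdr_dominates function above (with the Pareto branch removed), ranks each solution by the number of solutions that MSDR-dominate it, and uses that rank in a binary tournament. The crowding-degree tiebreak of Line 02 is omitted here, and all names are ours.

import numpy as np

def domination_rank(F, dominates):
    # Rank = number of solutions dominating each solution (0 means non-dominated under the relation)
    n = len(F)
    return np.array([sum(dominates(F[j], F[i]) for j in range(n) if j != i) for i in range(n)])

def mating_selection(P, F, N, k_pool, pr, msdr_dominates, theta_bar, rng):
    # Algorithm 2 (simplified): for each k_i, rank P with MSDR and fill pr_i*N slots of
    # the mating pool by binary tournament; record which k_i produced each parent.
    mating_pool, origin = [], []
    for i, (k_i, p_i) in enumerate(zip(k_pool, pr)):
        rank = domination_rank(F, lambda a, b: msdr_dominates(a, b, k_i, theta_bar))
        for _ in range(int(round(p_i * N))):
            a, b = rng.integers(0, len(P), size=2)
            winner = a if rank[a] <= rank[b] else b     # binary tournament on MSDR rank
            mating_pool.append(P[winner])
            origin.append(i)
    return np.array(mating_pool), np.array(origin)

Tracking the originating k_i of each parent is one way to support the success counting used later in Section 3.5.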

3.3. Normalization

Normalization is an essential tool to map badly scaled objectives onto a common scale. In NSGA-III*, the normalization of the j-th population member is given in Equation (7).
$$F_i^j = \frac{f_i^j - z_i^{*}}{z_i^{nad} - z_i^{*}}, \quad i = 1, 2, \ldots, M$$
where $z_i^{*}$ and $z_i^{nad}$ are the lowest (ideal) and highest (nadir) values of the i-th objective function, respectively.
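A one-function sketch of Equation (7), estimating the ideal and nadir points from the column-wise minima and maxima of the current objective matrix:

import numpy as np

def normalize(F):
    # Equation (7): (f - z*) / (z_nad - z*), with z* and z_nad taken column-wise from F
    F = np.asarray(F, float)
    z_star, z_nad = F.min(axis=0), F.max(axis=0)
    return (F - z_star) / (z_nad - z_star + 1e-12)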

3.4. Environmental Selection

To perform the environmental selection (Algorithm 3), where the goal is to select N solutions for the (t + 1)-th generation (P_{t+1}) from the combined population R_t of size 2N from generation t, a set of pre-defined weight vectors W is used [15]. First, the non-dominated sorting based on traditional Pareto dominance is performed on R_t′, which is the result of normalizing R_t (Algorithm 3, Line 01). Based on the number of individuals in the sub-fronts, if the condition Σ_{i=1}^{l} |Front_i| == N is satisfied, then P_{t+1} = ∪_{i=1}^{l} Front_i is considered as the parent population of the next generation (Algorithm 3, Lines 2~4). Otherwise, the association is performed between the set of weight vectors W and the combined population R_t′ (Algorithm 3, Line 5). Each solution is associated with a weight vector based on the angle between them. If multiple solutions are associated with the same weight vector, then the solution with the minimum perpendicular distance to that particular weight vector is selected (Algorithm 3, Line 6). After the association process, the best associated solutions are added to P_{t+1}, and if |P_{t+1}| < N, then the weight vectors without any associated solutions are identified as ineffective weight vectors (IWV) (Algorithm 3, Line 7). These ineffective weight vectors again try to associate with the members of the last front (Algorithm 3, Line 8). The |IWV| associated solutions selected in this way are represented by U (Algorithm 3, Line 9). Finally, the combined population P_{t+1} = P_{t+1} ∪ U represents the parent population for the next generation (Algorithm 3, Line 10).
Algorithm 3: Environmental_selection (W, R_t, N)
Input: R_t (merged population at generation t), W (reference point set), N (population size)
Output: P_{t+1} (population/parent for the next generation)
   01: [Front, Max_Front] = Non-Dominated Sort (R_t)
   02: If Σ_{i=1}^{l} |Front_i| == N, where l ∈ {1, 2, ..., Max_Front}
   03:   P_{t+1} = ∪_{i=1}^{l} Front_i
   04: Else
   05:   association = Associate (R_t, W) // Associating solutions of R_t (except last-front solutions) to W
   06:   P_{t+1} = Select the best associated solution (by perpendicular distance) whenever more than one solution is associated with a weight vector
   07:   W_ineffective = Find the weight vectors that have no solution associated with them
   08:   ineffective_solution = Associate (R_t(Front_{Max_Front}), W_ineffective)
   09:   U = Select the best associated solution for each vector in W_ineffective from ineffective_solution
   10:   P_{t+1} = P_{t+1} ∪ U
   11: End If
In [24,25], the modified dominance relationships, namely SDR and CSDR, are employed in the environmental selection, in addition to the mating selection. However, in the current work, the environmental selection is based on traditional Pareto dominance so that the environmental selection process is not biased to any section of the PF.
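The association step of Algorithm 3 can be sketched as follows: each (normalized) solution is attached to the weight vector with the smallest angle, the closest associated solution per vector (by perpendicular distance) is kept, and vectors that received no solution are reported as ineffective. This is a simplified illustration of Lines 5–7 only, with names of our choosing.

import numpy as np

def perpendicular_distance(f, w):
    # Distance from objective vector f to the ray spanned by weight vector w
    w = w / np.linalg.norm(w)
    return float(np.linalg.norm(f - np.dot(f, w) * w))

def associate_and_select(F, W):
    F, W = np.asarray(F, float), np.asarray(W, float)
    Fn = F / (np.linalg.norm(F, axis=1, keepdims=True) + 1e-12)
    Wn = W / np.linalg.norm(W, axis=1, keepdims=True)
    owner = np.argmax(np.clip(Fn @ Wn.T, -1.0, 1.0), axis=1)   # smallest angle = largest cosine
    selected, ineffective = [], []
    for j in range(len(W)):
        members = np.where(owner == j)[0]
        if members.size == 0:
            ineffective.append(j)                               # ineffective weight vector
        else:
            d = [perpendicular_distance(F[i], W[j]) for i in members]
            selected.append(int(members[int(np.argmin(d))]))    # keep the closest associated solution
    return selected, ineffective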

3.5. Adaptation of the Probability of Parameters in the Ensemble Pool

As mentioned in Section 3.2, the probabilities pr_t = (pr_{1,t}, pr_{2,t}, ..., pr_{p,t}) of applying the parameter values in the pool k need to be adapted over the generations (Algorithm 4). At the end of each generation (t), the number of solutions produced with each k_i that entered the parent population is counted and used to modify the probabilities. In other words, after obtaining the parent set P_{t+1} from the environmental selection, the number of surviving solutions for each k_i is counted as C = (C_1, C_2, ..., C_p) (Algorithm 4, Line 1). Each C_i is normalized as C_i = C_i / Σ_{j=1,...,p} C_j, ∀ i = 1, ..., p (Algorithm 4, Line 2). The probabilities of the parameters in the ensemble pool are then updated using a weighted function in which the performance of the current generation is given a weight of 0.3. In addition, the probability of applying any parameter value in the pool cannot go below a minimum threshold of 0.05. Finally, the probabilities are re-normalized (Algorithm 4, Lines 3~5).
Algorithm 4: Adapt_pr (P_{t+1}, pr_t)
Input: P_{t+1} (population at generation t + 1), pr_t (selection probabilities at generation t)
Output: pr_{t+1} (probabilities after adaptation)
   01: C = (C_1, C_2, ..., C_p): count the number of surviving solutions produced with each k_i
   02: C_i = C_i / Σ_{j=1,...,p} C_j, ∀ i = 1, ..., p
   03: temp_i = 0.7 · pr_{i,t} + 0.3 · C_i, ∀ i = 1, ..., p
   04: temp_i = max (0.05, temp_i), ∀ i = 1, ..., p
   05: pr_{i,t+1} = temp_i / Σ_{j=1,...,p} temp_j, ∀ i = 1, ..., p
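A direct sketch of Algorithm 4 (function and variable names are ours):

import numpy as np

def adapt_pr(pr, counts, weight=0.3, floor=0.05):
    # Blend the previous probabilities with the normalized success counts of each k_i,
    # clamp at the 0.05 floor and re-normalize (Algorithm 4, Lines 2-5).
    counts = np.asarray(counts, float)
    C = counts / max(counts.sum(), 1e-12)
    temp = (1.0 - weight) * np.asarray(pr, float) + weight * C
    temp = np.maximum(floor, temp)
    return temp / temp.sum()

# Example: with pr = (0.2, ..., 0.2) and counts (40, 30, 20, 10, 0), the first k gains probability
print(adapt_pr([0.2] * 5, [40, 30, 20, 10, 0]))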

4. Experimental Setup, Results and Discussion

Experiments were conducted on 16 scalable test problems from the DTLZ and WFG test suites, comprising seven and nine problems, respectively. For each test problem, 2-, 4-, 6-, 8- and 10-objective instances were considered. The parameter values employed are as presented in [11]. In order to compare the efficiency of NSGA-III* with the state-of-the-art algorithms, a quantitative indicator, namely the hypervolume (HV), was employed; a larger HV value indicates a better algorithm. In this experiment, the objective vectors were first normalized before calculating the HV. The reference point was set as (1, ..., 1) ∈ R^M. To evaluate the HV, Monte Carlo sampling with 10^6 sampling points was used. In each instance, 30 independent runs were performed for each algorithm on a PC with a 3.30 GHz Intel(R) Core(TM) i7-8700 CPU and a Windows 10 Pro 64-bit operating system with 16 GB RAM. As a stopping criterion, the maximum number of generations was set to 700 for DTLZ1 and WFG2, to 1000 for DTLZ3 and WFG1, and to 250 for the other problems (DTLZ2, DTLZ4–7 and WFG3–9). All algorithms employed a population size (N) of 100, 165, 182, 240 and 275 for 2, 4, 6, 8 and 10 objectives, respectively. Simulated binary crossover and polynomial mutation with distribution indices and probabilities set to n_c = 20, n_m = 20, p_c = 1.0 and p_m = 1/D, respectively, were employed.
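The Monte Carlo HV estimation described above can be sketched as follows for a minimization problem whose objectives have already been normalized to [0, 1] with the reference point (1, ..., 1). This is an illustrative estimator, not the PlatEMO routine used for the reported results.

import numpy as np

def hv_monte_carlo(F, n_samples=10**6, seed=0):
    # Fraction of uniform samples in [0, 1]^M dominated by at least one member of F;
    # with the reference point (1, ..., 1) this fraction equals the hypervolume estimate.
    F = np.asarray(F, float)
    rng = np.random.default_rng(seed)
    S = rng.uniform(0.0, 1.0, size=(n_samples, F.shape[1]))
    dominated = np.zeros(n_samples, dtype=bool)
    for f in F:
        dominated |= np.all(S >= f, axis=1)
    return float(dominated.mean())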
The only additional component introduced into NSGA-III* is the pool of values of the parameter k. To investigate the robustness of the NSGA-III* algorithm with respect to the selection of the pool, simulations were performed with different sets of values. In all the sets, it was ensured that the pool of k values was diverse, representative and covered the entire range. In addition, the size of the pool should not be large; therefore, the size of the pool was set to five. The experiments were conducted by incorporating five different diverse and representative pools of the parameter k into NSGA-III*, named NSGA-III1* = (1.2, 0.7, 0.5, 0.3, 0.1), NSGA-III2* = (1.5, 1.2, 1, 0.5, 0.3), NSGA-III3* = (1.5, 1, 0.7, 0.5, 0.1), NSGA-III4* = (1.3, 1.1, 0.9, 0.5, 0.3) and NSGA-III5* = (1.2, 1, 0.8, 0.4, 0.1).
The experimental analysis was performed on the 16 scalable test problems from the DTLZ and WFG test suites. The results are presented in the supplementary file, and the pair-wise comparisons are summarized in Table 1 with respect to the number of wins (W), losses (L) and ties (T). From the results, it is evident that the performance of NSGA-III* with respect to the selection of the pool is quite robust, which is apparent from the performance similarity (represented by T) of over 80% between the different versions of the ensemble. However, NSGA-III2* with the pool k = (1.5, 1.2, 1, 0.5, 0.3) is the best-suited setting among them, with slightly better performance. Therefore, the simulation results corresponding to k = (1.5, 1.2, 1, 0.5, 0.3), referred to as NSGA-III*, are used to compare with the state-of-the-art algorithms in Table 2.
To demonstrate the effect of different instances of NSGA-II/CSDR [25], where (1) the k values are linearly reduced based on the generation count as in Equation (5), and (2) both the mating and environmental selections employ CSDR, all the instances of NSGA-II/CSDR considered start with k_max = 1.6. However, the rates of reduction Δk employed are 0.6, 0.5, 0.4 and 0.2, and the instances are referred to as NSGA-II/CSDR1, NSGA-II/CSDR2, NSGA-II/CSDR3 and NSGA-II/CSDR4, respectively. The simulations are performed on the 3-objective instances of DTLZ1 and DTLZ3. The plots corresponding to the final population are depicted in Figure 3 and Figure 4. From the figures, it is evident that none of the four instances of NSGA-II/CSDR can produce well-distributed solutions on these problems. In other words, even though all the instances of NSGA-II/CSDR start with the same k_max, the value of the parameter k in the final generation differs due to the different Δk values. As mentioned earlier, since different values of k emphasize different regions of the PF, the use of CSDR in the environmental selection introduces a bias, resulting in a non-uniform distribution of solutions. This is evident from the simulation results depicted in Figure 3 and Figure 4.
To demonstrate that different values of k are effective at different stages of the evolution, the changes in the probabilities of the parameter values in the ensemble pool are plotted in Figure 5. For the plots, the 4- and 8-objective instances of DTLZ3 and WFG1 were considered. From the figure, it is evident that, depending on the characteristics of the problem and the distribution of the current population, different values of k in the pool are effective at different stages of the evolution. In addition, there is no standard pattern of reducing k that is suitable for all the problems. Therefore, continuously reducing the parameter k, as done in NSGA-II/CSDR, is not suitable.
First, we compare the performance of NSGA-II, NSGA-III, NSGA-II/SDR and NSGA-II/CSDR to demonstrate the effect of not using Pareto dominance in the environmental selection. On the 2-objective instances of DTLZ1, DTLZ2 and DTLZ3, it is evident that NSGA-II, NSGA-III and NSGA-II/CSDR, which employ Pareto dominance in the environmental selection, perform better than NSGA-II/SDR, which does not. The performance of the proposed NSGA-III* is comparable to the best result, as it also employs Pareto dominance in the environmental selection.
For higher numbers of objectives (>4) on WFG4 to WFG9, the performance of NSGA-II/SDR is better than that of NSGA-II/CSDR, according to the simulation results. This might be due to the linear reduction of k with respect to the generation count in NSGA-II/CSDR compared with the constant setting (k = 1) in NSGA-II/SDR. In other words, the linear reduction of k with the generations is not suitable for all problems with diverse characteristics. However, the performance of NSGA-III* is comparable to or better than the best of NSGA-II/SDR and NSGA-II/CSDR, which indicates the effectiveness of the adaptive ensemble in finding a suitable k depending on the characteristics of the problem and the distribution of the population.
The performance of the proposed NSGA-III* was compared with state-of-the-art MOEAs such as NSGA-II, NSGA-II/SDR, NSGA-II/CSDR, MOEA/D, MOEA/D-DE, NSGA-III, TDEA and ISDE+. The experimental results (mean and standard deviation values of the normalized HV) on the benchmark suites are presented in Table 2. In addition, statistical tests (t-test) at a 5% significance level were conducted to compare the significance of the difference between the mean metric values yielded by NSGA-III* and the state-of-the-art algorithms. The signs “+”, “−” and “≈” against the HV values indicate that NSGA-III* is statistically “better”, “worse” and “comparable”, respectively, compared with the corresponding algorithm. The last row of Table 2 represents the overall performance of NSGA-III* in terms of the number of instances where it is better (Win-W), comparable (Tie-T) and worse (Loss-L) with respect to the corresponding algorithm.
For better visualization, the performance of the state-of-the-art algorithms in terms of wins, ties and losses is summarized in Figure 6.
As shown in Figure 6 and Table 2, NSGA-III* significantly outperforms or is comparable to NSGA-II, NSGA-II/SDR, NSGA-II/CSDR, MOEA/D, MOEA/D-DE, NSGA-III, TDEA and ISDE+ in 71/80 ≈ 88.75%, 66/80 ≈ 82.5%, 72/80 ≈ 90%, 70/80 ≈ 87.5%, 67/80 ≈ 83.75%, 74/80 ≈ 92.5%, 69/80 ≈ 86.25% and 56/80 ≈ 70% of the cases, respectively. In other words, NSGA-III* consistently performs better than the state-of-the-art algorithms. This can be attributed to the modified SDR-based mating selection with an ensemble of k values, which ensures that solutions are sampled over the entire range of the PF by prioritizing the respective k values. In addition, the weight vector-based environmental selection using Pareto dominance provides the required diversity on the PF without any bias towards specific regions. Furthermore, NSGA-III* is implemented in MATLAB using the PlatEMO [35] framework. The source code is accessible at https://github.com/Saykat1993/Mating-Selection-based-on-Modified-Strengthened-Dominance-Relation-for-NSGA-III.git (accessed on 2 November 2021).

5. Conclusions

In this manuscript, a modified strengthened dominance relation (MSDR) with an adaptive ensemble of parameter values that can enforce convergence is proposed. A multi/many-objective evolutionary algorithm (MOEA) that employs mating selection based on MSDR and environmental selection using weight vectors and Pareto dominance is proposed, referred to as NSGA-III*. In the proposed NSGA-III*, the probability of applying different parameter values in the ensemble is adapted based on the performance of the parameters. In other words, the probability of the parameters changes depending on the nature of the problem and the distribution of the population. The environmental selection with Pareto dominance enables the diversity of the solutions over the entire Pareto front due to its unbiased nature. The performance of the proposed NSGA-III* framework is compared with various state-of-the-art MOEA algorithms on standard benchmark test suites.

Supplementary Materials

The following are available online at https://www.mdpi.com/article/10.3390/math9222837/s1. In the supplementary materials, the hypervolume comparisons of NSGA-III* for the different pools of k, namely NSGA-III1* = (1.2, 0.7, 0.5, 0.3, 0.1), NSGA-III2* = (1.5, 1.2, 1, 0.5, 0.3), NSGA-III3* = (1.5, 1, 0.7, 0.5, 0.1), NSGA-III4* = (1.3, 1.1, 0.9, 0.5, 0.3) and NSGA-III5* = (1.2, 1, 0.8, 0.4, 0.1), are presented. In Table S1, the performance of NSGA-III1* is compared with that of the other variants. In a similar fashion, the performance of NSGA-III2*, NSGA-III3*, NSGA-III4* and NSGA-III5* is presented in Tables S2–S5, respectively.

Author Contributions

Conceptualization, S.D., S.S.R.M. and R.M.; methodology, S.D., S.S.R.M. and R.M.; software, S.D.; validation, S.D. and R.M.; formal analysis, S.D., S.S.R.M. and R.M.; investigation, K.N.D. and D.-G.L.; resources, K.N.D. and D.-G.L.; data curation, S.D. and R.M.; writing—original draft preparation, S.D. and S.S.R.M.; writing—review and editing, R.M., K.N.D. and D.-G.L.; visualization, S.D.; supervision, R.M.; project administration, K.N.D.; funding acquisition, R.M. and D.-G.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Research Foundation (NRF), Korea, under Project BK21.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Goldberg, D.E. Genetic Algorithms in Search, Optimization and Machine Learning; Addison-Wesley Longman Publishing Co., Inc.: Boston, MA, USA, 1989. [Google Scholar]
  2. Ikeda, K.; Kita, H.; Kobayashi, S. Failure of Pareto-based MOEAs: Does non-dominated really mean near to optimal? In Proceedings of the 2001 Congress on Evolutionary Computation (IEEE Cat. No.01TH8546), Seoul, Korea, 27–30 May 2001; Volume 2, pp. 957–962. [Google Scholar]
  3. Deb, K.; Pratap, A.; Agarwal, S.; Meyarivan, T. A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans. Evol. Comput. 2002, 6, 182–197. [Google Scholar] [CrossRef] [Green Version]
  4. Knowles, J.; Corne, D. The Pareto archived evolution strategy: A new baseline algorithm for Pareto multiobjective optimisation. In Proceedings of the 1999 Congress on Evolutionary Computation-CEC99 (Cat. No. 99TH8406), Washington, DC, USA, 6–9 July 1999; Volume 1, pp. 98–105. [Google Scholar]
  5. Zitzler, E.; Laumanns, M.; Thiele, L. SPEA2: Improving the Strength Pareto Evolutionary Algorithm; Eidgenössische Technische Hochschule Zürich (ETH), Institut für Technische Informatik und Kommunikationsnetze (TIK): Zurich, Switzerland, 2001; Volume 103. [Google Scholar]
  6. Zhang, Q.; Li, H. MOEA/D: A Multiobjective Evolutionary Algorithm Based on Decomposition. IEEE Trans. Evol. Comput. 2007, 11, 712–731. [Google Scholar] [CrossRef]
  7. Liu, H.; Gu, F.; Zhang, Q. Decomposition of a Multiobjective Optimization Problem Into a Number of Simple Multiobjective Subproblems. IEEE Trans. Evol. Comput. 2014, 18, 450–455. [Google Scholar] [CrossRef] [Green Version]
  8. Li, K.; Deb, K.; Zhang, Q.; Kwong, S. An Evolutionary Many-Objective Optimization Algorithm Based on Dominance and Decomposition. IEEE Trans. Evol. Comput. 2015, 19, 694–716. [Google Scholar] [CrossRef]
  9. Ishibuchi, H.; Tsukamoto, N.; Sakane, Y.; Nojima, Y. Indicator-based evolutionary algorithm with hypervolume approximation by achievement scalarizing functions. In Proceedings of the 12th Annual Conference on Genetic and Evolutionary Computation, Portland, OR, USA, 11–13 July 2010. [Google Scholar]
  10. Bader, J.; Zitzler, E. HypE: An Algorithm for Fast Hypervolume-Based Many-Objective Optimization. Evol. Comput. 2011, 19, 45–76. [Google Scholar] [CrossRef] [PubMed]
  11. Pamulapati, T.; Mallipeddi, R.; Suganthan, P.N. ISDE +—An Indicator for Multi and Many-Objective Optimization. IEEE Trans. Evol. Comput. 2019, 23, 346–352. [Google Scholar] [CrossRef]
  12. Palakonda, V.; Mallipeddi, R. Pareto Dominance-Based Algorithms With Ranking Methods for Many-Objective Optimization. IEEE Access 2017, 5, 11043–11053. [Google Scholar] [CrossRef]
  13. Deb, K.; Jain, H. An Evolutionary Many-Objective Optimization Algorithm Using Reference-Point-Based Nondominated Sorting Approach, Part I: Solving Problems With Box Constraints. IEEE Trans. Evol. Comput. 2014, 18, 577–601. [Google Scholar] [CrossRef]
  14. Cheng, R.; Jin, Y.; Olhofer, M.; Sendhoff, B. A Reference Vector Guided Evolutionary Algorithm for Many-Objective Optimization. IEEE Trans. Evol. Comput. 2016, 20, 773–791. [Google Scholar] [CrossRef] [Green Version]
  15. Das, I.; Dennis, J.E. Normal-Boundary Intersection: A New Method for Generating the Pareto Surface in Nonlinear Multicriteria Optimization Problems. SIAM J. Optim. 1998, 8, 631–657. [Google Scholar] [CrossRef] [Green Version]
  16. Sato, H.; Aguirre, H.E.; Tanaka, K. Controlling Dominance Area of Solutions and Its Impact on the Performance of MOEAs. In Evolutionary Multi-Criterion Optimization; Obayashi, S., Deb, K., Poloni, C., Hiroyasu, T., Murata, T., Eds.; Springer: Berlin/Heidelberg, Germany, 2007; pp. 5–20. [Google Scholar]
  17. Wang, G.; Jiang, H. Fuzzy-Dominance and Its Application in Evolutionary Many Objective Optimization. In Proceedings of the 2007 International Conference on Computational Intelligence and Security Workshops (CISW 2007), Harbin, China, 15–19 December 2007; pp. 195–198. [Google Scholar]
  18. Sato, H.; Aguirre, H.E.; Tanaka, K. Self-Controlling Dominance Area of Solutions in Evolutionary Many-Objective Optimization. In Simulated Evolution and Learning; Deb, K., Ed.; Springer: Berlin/Heidelberg, Germany, 2010; pp. 455–465. [Google Scholar]
  19. Yuan, Y.; Xu, H.; Wang, B.; Yao, X. A New Dominance Relation-Based Evolutionary Algorithm for Many-Objective Optimization. IEEE Trans. Evol. Comput. 2016, 20, 16–37. [Google Scholar] [CrossRef]
  20. Zhu, C.; Xu, L.; Goodman, E.D. Generalization of Pareto-Optimality for Many-Objective Evolutionary Optimization. IEEE Trans. Evol. Comput. 2016, 20, 299–315. [Google Scholar] [CrossRef]
  21. Laumanns, M.; Thiele, L.; Deb, K.; Zitzler, E. Combining convergence and diversity in evolutionary multiobjective optimization. Evol. Comput. 2002, 10, 263–282. [Google Scholar] [CrossRef]
  22. Yang, S.; Li, M.; Liu, X.; Zheng, J. A Grid-Based Evolutionary Algorithm for Many-Objective Optimization. IEEE Trans. Evol. Comput. 2013, 17, 721–736. [Google Scholar] [CrossRef]
  23. Elarbi, M.; Bechikh, S.; Gupta, A.; Said, L.B.; Ong, Y. A New Decomposition-Based NSGA-II for Many-Objective Optimization. IEEE Trans. Syst. Man Cybern. Syst. 2018, 48, 1191–1210. [Google Scholar] [CrossRef]
  24. Tian, Y.; Cheng, R.; Zhang, X.; Su, Y.; Jin, Y. A Strengthened Dominance Relation Considering Convergence and Diversity for Evolutionary Many-Objective Optimization. IEEE Trans. Evol. Comput. 2019, 23, 331–345. [Google Scholar] [CrossRef] [Green Version]
  25. Shen, J.; Wang, P.; Wang, X. A Controlled Strengthened Dominance Relation for Evolutionary Many-Objective Optimization. IEEE Trans. Cybern. 2020, 1–13. [Google Scholar] [CrossRef]
  26. Srinivas, N.; Deb, K. Muiltiobjective Optimization Using Nondominated Sorting in Genetic Algorithms. Evol. Comput. 1994, 2, 221–248. [Google Scholar] [CrossRef]
  27. Li, M.; Yang, S.; Liu, X. Shift-Based Density Estimation for Pareto-Based Algorithms in Many-Objective Optimization. IEEE Trans. Evol. Comput. 2014, 18, 348–365. [Google Scholar] [CrossRef] [Green Version]
  28. Hernández-Díaz, A.G.; Santana-Quintero, L.V.; Coello, C.A.C.; Molina, J. Pareto-adaptive ε-dominance. Evol. Comput. 2007, 15, 493–517. [Google Scholar] [CrossRef] [PubMed]
  29. Batista, L.S.; Campelo, F.; Guimarães, F.G.; Ramírez, J.A. Pareto Cone ε-Dominance: Improving Convergence and Diversity in Multiobjective Evolutionary Algorithms. In Evolutionary Multi-Criterion Optimization; Takahashi, R.H.C., Deb, K., Wanner, E.F., Greco, S., Eds.; Springer: Berlin/Heidelberg, Germany, 2011; pp. 76–90. [Google Scholar]
  30. Liu, Y.; Zhu, N.; Li, K.; Li, M.; Zheng, J.; Li, K. An angle dominance criterion for evolutionary many-objective optimization. Inf. Sci. 2020, 509, 376–399. [Google Scholar] [CrossRef]
  31. Farina, M.; Amato, P. A fuzzy definition of “optimality” for many-criteria optimization problems. IEEE Trans. Syst. Man Cybern. Part A Syst. Hum. 2004, 34, 315–326. [Google Scholar] [CrossRef]
  32. Zou, X.; Chen, Y.; Liu, M.; Kang, L. A New Evolutionary Algorithm for Solving Many-Objective Optimization Problems. IEEE Trans. Syst. Man Cybern. Part B Cybern. 2008, 38, 1402–1412. [Google Scholar]
  33. He, Z.; Yen, G.G.; Zhang, J. Fuzzy-Based Pareto Optimality for Many-Objective Evolutionary Algorithms. IEEE Trans. Evol. Comput. 2014, 18, 269–285. [Google Scholar] [CrossRef]
  34. Zitzler, E.; Deb, K.; Thiele, L. Comparison of Multiobjective Evolutionary Algorithms: Empirical Results. Evol. Comput. 2000, 8, 173–195. [Google Scholar] [CrossRef] [Green Version]
  35. Tian, Y.; Cheng, R.; Zhang, X.; Jin, Y. PlatEMO: A MATLAB Platform for Evolutionary Multi-Objective Optimization [Educational Forum]. IEEE Comput. Intell. Mag. 2017, 12, 73–87. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Convergence degree of solutions A(0.5, 0.5), B(0.1, 0.9) and C(0.6, 0.6) with variation in parameter k ∈ [0, 2] [24].
Figure 2. (a) Convergence degree of solutions A1 (0.1, 0.9), B1 (0.3, 0.5), C1 (0.4, 0.6), D1 (0.7, 0.2) and E1 (0.9, 0.1) for a convex problem and (b) convergence degree of solutions A2 (0.1, 0.9), B2 (0.3, 0.9), C2 (0.5, 0.7), D2 (0.7, 0.4) and E2 (0.9, 0.1) for a concave problem, with k ∈ [0, 2].
Figure 3. Pareto front of the 3-objective DTLZ1 problem obtained by (a) NSGA-II/CSDR1, (b) NSGA-II/CSDR2, (c) NSGA-II/CSDR3 and (d) NSGA-II/CSDR4.
Figure 4. Pareto front of the 3-objective DTLZ3 problem obtained by (a) NSGA-II/CSDR1, (b) NSGA-II/CSDR2, (c) NSGA-II/CSDR3 and (d) NSGA-II/CSDR4.
Figure 5. Probability of the parameters during evolution on 4- and 8-objective DTLZ3 and WFG1. (a) 4-objective DTLZ3, (b) 4-objective WFG1, (c) 8-objective DTLZ3 and (d) 8-objective WFG1.
Figure 6. Comparison of NSGA-III* with other state-of-the-art algorithms.
Table 1. Summary of the experimental results demonstrating the robustness of NSGA-III* to the parameter values in the ensemble pool.
W-L-T        | NSGA-III1* | NSGA-III2* | NSGA-III3* | NSGA-III4* | NSGA-III5*
NSGA-III1*   | X          | 7-9-64     | 4-3-73     | 4-4-72     | 8-7-65
NSGA-III2*   | 10-7-63    | X          | 11-9-60    | 9-5-66     | 9-4-67
NSGA-III3*   | 3-4-73     | 10-7-63    | X          | 2-4-74     | 3-4-73
NSGA-III4*   | 5-4-71     | 6-7-67     | 4-2-74     | X          | 4-2-74
NSGA-III5*   | 7-5-68     | 4-7-69     | 4-3-73     | 3-3-74     | X
Table 2. Comparison of HV and statistical results on the DTLZ and WFG test problems (“+”–WIN, “≈”–TIE, “−”–LOSS).
Problem | M | NSGA-III* | NSGA-II | NSGA-II/SDR | NSGA-II/CSDR | MOEA/D | MOEA/D-DE | NSGA-III | TDEA | ISDE+
DTLZ1 | 2 | 5.83 × 10−1 (5.40 × 10−5) | 5.81 × 10−1 (4.25 × 10−4) + | 3.00 × 10−1 (8.25 × 10−2) + | 5.82 × 10−1 (2.63 × 10−4) + | 5.82 × 10−1 (3.16 × 10−4) + | 5.83 × 10−1 (1.78 × 10−6) − | 5.82 × 10−1 (1.74 × 10−4) + | 5.82 × 10−1 (1.86 × 10−4) + | 5.82 × 10−1 (2.45 × 10−4) +
DTLZ1 | 4 | 9.45 × 10−1 (2.41 × 10−4) | 9.29 × 10−1 (2.02 × 10−3) + | 9.03 × 10−1 (1.35 × 10−2) + | 9.16 × 10−1 (1.05 × 10−1) ≈ | 9.45 × 10−1 (3.35 × 10−4) ≈ | 7.90 × 10−1 (1.36 × 10−1) + | 9.45 × 10−1 (2.22 × 10−4) ≈ | 9.45 × 10−1 (2.31 × 10−4) ≈ | 9.36 × 10−1 (2.45 × 10−3) +
DTLZ1 | 6 | 9.90 × 10−1 (1.13 × 10−4) | 2.96 × 10−1 (3.85 × 10−1) + | 9.38 × 10−1 (1.80 × 10−2) + | 9.40 × 10−1 (1.39 × 10−1) + | 9.90 × 10−1 (2.00 × 10−4) + | 9.59 × 10−1 (1.16 × 10−2) + | 9.90 × 10−1 (1.31 × 10−4) ≈ | 9.90 × 10−1 (1.40 × 10−4) + | 9.83 × 10−1 (1.70 × 10−3) +
DTLZ1 | 8 | 9.97 × 10−1 (1.14 × 10−3) | 8.71 × 10−3 (4.77 × 10−2) + | 9.53 × 10−1 (1.40 × 10−2) + | 9.88 × 10−1 (2.31 × 10−3) + | 9.93 × 10−1 (1.43 × 10−3) + | 9.67 × 10−1 (2.02 × 10−3) + | 9.97 × 10−1 (1.11 × 10−3) ≈ | 9.98 × 10−1 (3.99 × 10−4) ≈ | 9.95 × 10−1 (8.43 × 10−4) +
DTLZ1 | 10 | 9.97 × 10−1 (1.49 × 10−2) | 0 × 100 (0 × 100) + | 9.35 × 10−1 (2.11 × 10−2) + | 9.99 × 10−1 (5.24 × 10−4) ≈ | 9.99 × 10−1 (6.33 × 10−5) ≈ | 9.72 × 10−1 (1.72 × 10−3) + | 9.97 × 10−1 (1.39 × 10−2) ≈ | 9.82 × 10−1 (6.61 × 10−2) ≈ | 9.98 × 10−1 (4.31 × 10−4) ≈
DTLZ2 | 2 | 3.47 × 10−1 (1.33 × 10−7) | 3.47 × 10−1 (1.64 × 10−4) + | 1.80 × 10−1 (1.09 × 10−2) + | 3.47 × 10−1 (1.85 × 10−4) + | 3.47 × 10−1 (1.11 × 10−5) + | 3.47 × 10−1 (3.19 × 10−5) + | 3.47 × 10−1 (6.72 × 10−6) + | 3.47 × 10−1 (9.83 × 10−6) + | 3.47 × 10−1 (1.46 × 10−4) +
DTLZ2 | 4 | 7.15 × 10−1 (4.62 × 10−4) | 6.40 × 10−1 (7.59 × 10−3) + | 6.16 × 10−1 (1.50 × 10−1) + | 6.64 × 10−1 (5.85 × 10−3) + | 7.14 × 10−1 (5.36 × 10−4) + | 6.20 × 10−1 (3.30 × 10−3) + | 7.14 × 10−1 (5.32 × 10−4) + | 7.15 × 10−1 (4.34 × 10−4) + | 7.12 × 10−1 (1.68 × 10−3) +
DTLZ2 | 6 | 8.61 × 10−1 (4.42 × 10−4) | 8.38 × 10−3 (3.19 × 10−2) + | 8.45 × 10−1 (3.80 × 10−3) + | 7.81 × 10−1 (1.12 × 10−2) + | 8.58 × 10−1 (5.49 × 10−4) + | 6.49 × 10−1 (1.98 × 10−2) + | 8.57 × 10−1 (7.28 × 10−4) + | 8.59 × 10−1 (4.61 × 10−4) + | 8.65 × 10−1 (1.71 × 10−3) −
DTLZ2 | 8 | 9.17 × 10−1 (2.53 × 10−2) | 3.29 × 10−4 (1.38 × 10−3) + | 9.26 × 10−1 (4.04 × 10−3) − | 9.05 × 10−1 (3.46 × 10−3) + | 9.17 × 10−1 (1.50 × 10−3) ≈ | 5.88 × 10−1 (1.26 × 10−2) + | 9.11 × 10−1 (2.03 × 10−2) ≈ | 9.23 × 10−1 (7.14 × 10−4) ≈ | 9.38 × 10−1 (1.63 × 10−3) −
DTLZ2 | 10 | 9.66 × 10−1 (1.17 × 10−2) | 2.84 × 10−3 (6.45 × 10−3) + | 9.63 × 10−1 (2.99 × 10−3) ≈ | 9.57 × 10−1 (1.45 × 10−3) + | 9.70 × 10−1 (3.57 × 10−4) − | 5.49 × 10−1 (1.60 × 10−2) + | 9.58 × 10−1 (1.71 × 10−2) + | 9.68 × 10−1 (2.54 × 10−4) ≈ | 9.70 × 10−1 (1.36 × 10−3) −
DTLZ3 | 2 | 3.46 × 10−1 (1.08 × 10−3) | 3.46 × 10−1 (9.23 × 10−4) ≈ | 1.94 × 10−1 (2.42 × 10−2) + | 3.46 × 10−1 (9.28 × 10−4) ≈ | 3.46 × 10−1 (8.47 × 10−4) ≈ | 3.45 × 10−1 (1.41 × 10−3) ≈ | 3.46 × 10−1 (1.15 × 10−3) ≈ | 3.46 × 10−1 (7.66 × 10−4) − | 3.45 × 10−1 (1.61 × 10−3) ≈
DTLZ3 | 4 | 7.14 × 10−1 (1.32 × 10−3) | 6.53 × 10−1 (9.16 × 10−3) + | 6.85 × 10−1 (6.78 × 10−2) + | 6.75 × 10−1 (4.53 × 10−3) + | 7.13 × 10−1 (2.53 × 10−3) + | 5.96 × 10−1 (3.10 × 10−2) + | 7.13 × 10−1 (1.57 × 10−3) ≈ | 7.14 × 10−1 (1.89 × 10−3) ≈ | 7.11 × 10−1 (2.98 × 10−3) +
DTLZ3 | 6 | 8.56 × 10−1 (3.34 × 10−3) | 0 × 100 (0 × 100) + | 8.48 × 10−1 (3.19 × 10−3) + | 8.09 × 10−1 (8.03 × 10−3) + | 8.57 × 10−1 (4.06 × 10−3) ≈ | 6.32 × 10−1 (4.56 × 10−2) + | 8.55 × 10−1 (3.74 × 10−3) ≈ | 8.59 × 10−1 (1.15 × 10−3) − | 8.60 × 10−1 (4.00 × 10−3) −
DTLZ3 | 8 | 9.17 × 10−1 (3.14 × 10−2) | 0 × 100 (0 × 100) + | 9.27 × 10−1 (4.02 × 10−3) ≈ | 9.10 × 10−1 (1.28 × 10−2) ≈ | 9.21 × 10−1 (3.85 × 10−3) ≈ | 5.81 × 10−1 (1.57 × 10−2) + | 9.12 × 10−1 (5.14 × 10−2) ≈ | 9.15 × 10−1 (5.61 × 10−2) ≈ | 9.32 × 10−1 (3.42 × 10−3) −
DTLZ3 | 10 | 8.85 × 10−1 (2.30 × 10−1) | 0 × 100 (0 × 100) + | 9.63 × 10−1 (2.27 × 10−3) − | 9.60 × 10−1 (2.65 × 10−3) − | 9.44 × 10−1 (1.36 × 10−1) ≈ | 5.42 × 10−1 (3.11 × 10−2) + | 8.86 × 10−1 (2.42 × 10−1) ≈ | 9.46 × 10−1 (4.77 × 10−2) ≈ | 9.62 × 10−1 (2.68 × 10−3) −
DTLZ4 | 2 | 3.47 × 10−1 (1.20 × 10−4) | 2.70 × 10−1 (1.19 × 10−1) + | 2.57 × 10−1 (7.13 × 10−2) + | 3.38 × 10−1 (4.67 × 10−2) ≈ | 2.51 × 10−1 (1.24 × 10−1) + | 3.47 × 10−1 (8.80 × 10−5) + | 3.13 × 10−1 (8.86 × 10−2) + | 3.04 × 10−1 (9.71 × 10−2) + | 3.47 × 10−1 (1.56 × 10−4) +
DTLZ4 | 4 | 6.89 × 10−1 (6.09 × 10−2) | 6.47 × 10−1 (7.08 × 10−3) + | 3.37 × 10−1 (5.62 × 10−2) + | 6.64 × 10−1 (4.82 × 10−3) + | 5.34 × 10−1 (1.89 × 10−1) + | 6.07 × 10−1 (1.47 × 10−2) + | 7.09 × 10−1 (2.96 × 10−2) ≈ | 7.09 × 10−1 (3.12 × 10−2) ≈ | 7.12 × 10−1 (1.59 × 10−3) −
DTLZ4 | 6 | 8.51 × 10−1 (3.20 × 10−2) | 8.29 × 10−2 (1.52 × 10−1) + | 3.95 × 10−1 (9.04 × 10−2) + | 7.97 × 10−1 (7.67 × 10−3) + | 7.25 × 10−1 (9.07 × 10−2) + | 6.95 × 10−1 (2.35 × 10−2) + | 8.52 × 10−1 (2.64 × 10−2) ≈ | 8.60 × 10−1 (5.25 × 10−4) ≈ | 8.58 × 10−1 (1.58 × 10−2) ≈
DTLZ4 | 8 | 9.12 × 10−1 (3.05 × 10−2) | 4.54 × 10−4 (2.37 × 10−3) + | 4.75 × 10−1 (6.14 × 10−2) + | 9.14 × 10−1 (3.61 × 10−3) ≈ | 7.79 × 10−1 (1.19 × 10−1) + | 6.50 × 10−1 (2.23 × 10−2) + | 9.06 × 10−1 (2.98 × 10−2) ≈ | 9.25 × 10−1 (4.00 × 10−4) − | 9.33 × 10−1 (4.44 × 10−3) −
DTLZ4 | 10 | 9.66 × 10−1 (1.08 × 10−2) | 1.89 × 10−4 (7.26 × 10−4) + | 5.69 × 10−1 (8.52 × 10−2) + | 9.62 × 10−1 (1.21 × 10−3) + | 8.68 × 10−1 (9.44 × 10−2) + | 6.31 × 10−1 (2.54 × 10−2) + | 9.66 × 10−1 (1.13 × 10−2) ≈ | 9.70 × 10−1 (2.42 × 10−4) − | 9.66 × 10−1 (1.26 × 10−3) ≈
DTLZ5 | 2 | 3.47 × 10−1 (7.05 × 10−7) | 3.47 × 10−1 (1.82 × 10−4) + | 1.79 × 10−1 (7.76 × 10−3) + | 3.47 × 10−1 (2.15 × 10−4) + | 3.47 × 10−1 (1.04 × 10−5) + | 3.47 × 10−1 (3.63 × 10−5) + | 3.47 × 10−1 (5.63 × 10−6) + | 3.47 × 10−1 (5.76 × 10−6) + | 3.47 × 10−1 (1.73 × 10−4) +
DTLZ5 | 4 | 1.40 × 10−1 (2.10 × 10−3) | 1.42 × 10−1 (2.19 × 10−3) − | 1.30 × 10−1 (4.39 × 10−3) + | 1.40 × 10−1 (2.22 × 10−3) ≈ | 1.47 × 10−1 (3.06 × 10−4) − | 1.45 × 10−1 (4.30 × 10−4) − | 1.41 × 10−1 (2.31 × 10−3) − | 1.20 × 10−1 (9.23 × 10−3) + | 1.33 × 10−1 (2.94 × 10−3) +
DTLZ5 | 6 | 9.75 × 10−2 (4.12 × 10−3) | 9.63 × 10−2 (7.95 × 10−3) ≈ | 9.66 × 10−2 (2.68 × 10−3) ≈ | 9.14 × 10−2 (3.37 × 10−3) + | 1.15 × 10−1 (2.59 × 10−4) − | 1.11 × 10−1 (3.89 × 10−4) − | 9.06 × 10−2 (6.21 × 10−3) ≈ | 1.00 × 10−1 (2.45 × 10−3) − | 9.20 × 10−2 (1.98 × 10−3) +
DTLZ5 | 8 | 9.21 × 10−2 (2.27 × 10−3) | 6.82 × 10−2 (2.39 × 10−2) + | 8.85 × 10−2 (2.04 × 10−3) + | 8.06 × 10−2 (6.63 × 10−3) + | 1.04 × 10−1 (2.98 × 10−4) − | 1.02 × 10−1 (3.17 × 10−4) − | 7.82 × 10−2 (1.15 × 10−2) ≈ | 9.25 × 10−2 (2.65 × 10−3) ≈ | 8.29 × 10−2 (4.70 × 10−3) +
DTLZ5 | 10 | 8.62 × 10−2 (5.54 × 10−3) | 4.73 × 10−2 (2.31 × 10−2) + | 8.41 × 10−2 (4.54 × 10−3) ≈ | 7.45 × 10−2 (9.08 × 10−3) + | 1.00 × 10−1 (2.75 × 10−4) − | 9.81 × 10−2 (2.66 × 10−4) − | 7.72 × 10−2 (1.21 × 10−2) ≈ | 9.28 × 10−2 (1.36 × 10−3) − | 7.55 × 10−2 (6.32 × 10−3) +
DTLZ6 | 2 | 3.47 × 10−1 (4.08 × 10−8) | 3.46 × 10−1 (1.61 × 10−4) + | 3.18 × 10−1 (3.58 × 10−2) + | 3.47 × 10−1 (1.03 × 10−4) − | 3.47 × 10−1 (4.71 × 10−5) ≈ | 3.47 × 10−1 (9.23 × 10−8) + | 3.47 × 10−1 (3.30 × 10−7) ≈ | 3.47 × 10−1 (2.26 × 10−7) + | 3.47 × 10−1 (1.27 × 10−4) ≈
DTLZ6 | 4 | 1.33 × 10−1 (1.03 × 10−2) | 1.14 × 10−1 (2.35 × 10−2) + | 1.33 × 10−1 (3.60 × 10−3) ≈ | 1.33 × 10−1 (4.83 × 10−3) ≈ | 1.47 × 10−1 (6.94 × 10−4) − | 1.45 × 10−1 (4.78 × 10−4) − | 1.37 × 10−1 (4.57 × 10−3) − | 1.13 × 10−1 (9.61 × 10−3) + | 1.27 × 10−1 (5.53 × 10−3) +
DTLZ6 | 6 | 9.21 × 10−2 (2.91 × 10−3) | 0 × 100 (0 × 100) + | 9.92 × 10−2 (4.32 × 10−3) − | 8.38 × 10−2 (1.50 × 10−2) + | 1.14 × 10−1 (3.19 × 10−3) − | 1.12 × 10−1 (3.07 × 10−4) − | 6.05 × 10−2 (4.13 × 10−2) + | 9.12 × 10−2 (5.20 × 10−4) + | 9.63 × 10−2 (2.85 × 10−3) −
DTLZ6 | 8 | 9.11 × 10−2 (5.32 × 10−4) | 0 × 100 (0 × 100) + | 8.87 × 10−2 (1.57 × 10−2) ≈ | 6.17 × 10−2 (3.56 × 10−2) + | 1.04 × 10−1 (3.10 × 10−4) − | 1.02 × 10−1 (2.27 × 10−4) − | 2.70 × 10−2 (3.89 × 10−2) + | 9.11 × 10−2 (2.88 × 10−4) ≈ | 9.11 × 10−2 (1.49 × 10−3) ≈
DTLZ6 | 10 | 8.18 × 10−2 (2.77 × 10−2) | 0 × 100 (0 × 100) + | 8.75 × 10−2 (9.37 × 10−3) ≈ | 5.66 × 10−2 (3.71 × 10−2) + | 9.97 × 10−2 (1.42 × 10−3) − | 9.82 × 10−2 (2.33 × 10−4) − | 5.83 × 10−3 (2.08 × 10−2) + | 9.11 × 10−2 (3.68 × 10−4) − | 8.38 × 10−2 (1.98 × 10−2) ≈
DTLZ7 | 2 | 2.43 × 10−1 (1.87 × 10−5) | 2.43 × 10−1 (4.22 × 10−5) + | 2.42 × 10−1 (3.55 × 10−4) + | 2.43 × 10−1 (3.61 × 10−5) + | 2.16 × 10−1 (3.12 × 10−2) + | 2.08 × 10−1 (3.34 × 10−2) + | 2.43 × 10−1 (3.30 × 10−5) + | 2.43 × 10−1 (3.05 × 10−5) + | 2.42 × 10−1 (2.84 × 10−4) +
DTLZ7 | 4 | 2.58 × 10−1 (5.50 × 10−3) | 2.42 × 10−1 (2.97 × 10−3) + | 2.69 × 10−1 (2.33 × 10−3) − | 2.61 × 10−1 (2.17 × 10−3) − | 1.88 × 10−1 (4.91 × 10−3) + | 3.01 × 10−2 (1.72 × 10−2) + | 2.54 × 10−1 (6.27 × 10−3) + | 2.68 × 10−1 (5.30 × 10−3) − | 2.73 × 10−1 (6.21 × 10−3) −
DTLZ7 | 6 | 2.31 × 10−1 (2.74 × 10−3) | 5.10 × 10−2 (1.56 × 10−2) + | 2.37 × 10−1 (3.43 × 10−3) − | 1.74 × 10−1 (5.64 × 10−3) + | 9.88 × 10−3 (2.47 × 10−2) + | 1.35 × 10−3 (2.01 × 10−3) + | 2.19 × 10−1 (3.85 × 10−3) + | 1.73 × 10−1 (2.43 × 10−2) + | 2.40 × 10−1 (7.17 × 10−3) −
DTLZ7 | 8 | 2.12 × 10−1 (2.94 × 10−3) | 2.36 × 10−4 (4.00 × 10−4) + | 2.00 × 10−1 (3.27 × 10−3) + | 1.12 × 10−1 (1.28 × 10−2) + | 5.61 × 10−5 (2.46 × 10−5) + | 1.25 × 10−4 (3.90 × 10−4) + | 1.80 × 10−1 (5.63 × 10−3) + | 1.76 × 10−1 (1.63 × 10−2) + | 2.12 × 10−1 (8.96 × 10−3) ≈
DTLZ7 | 10 | 1.81 × 10−1 (7.78 × 10−3) | 1.97 × 10−6 (5.32 × 10−6) + | 1.54 × 10−1 (4.08 × 10−2) + | 1.00 × 10−1 (1.68 × 10−2) + | 1.20 × 10−4 (2.53 × 10−4) + | 2.07 × 10−4 (5.66 × 10−4) + | 1.60 × 10−1 (5.69 × 10−3) + | 1.71 × 10−1 (1.49 × 10−2) + | 1.81 × 10−1 (1.76 × 10−2) ≈
WFG1 | 2 | 6.77 × 10−1 (5.95 × 10−3) | 6.92 × 10−1 (1.70 × 10−3) − | 6.88 × 10−1 (2.33 × 10−3) − | 6.80 × 10−1 (1.62 × 10−2) ≈ | 6.58 × 10−1 (8.94 × 10−3) + | 5.46 × 10−1 (8.38 × 10−2) + | 6.74 × 10−1 (1.03 × 10−2) ≈ | 6.73 × 10−1 (8.24 × 10−3) + | 6.67 × 10−1 (2.26 × 10−2) +
WFG1 | 4 | 9.84 × 10−1 (8.16 × 10−3) | 9.65 × 10−1 (4.63 × 10−3) + | 9.68 × 10−1 (5.70 × 10−3) + | 9.80 × 10−1 (2.25 × 10−3) + | 9.50 × 10−1 (1.37 × 10−2) + | 6.16 × 10−1 (7.07 × 10−2) + | 9.88 × 10−1 (4.80 × 10−3) − | 9.89 × 10−1 (1.54 × 10−3) − | 9.80 × 10−1 (2.77 × 10−3) +
WFG1 | 6 | 8.70 × 10−1 (2.21 × 10−2) | 9.67 × 10−1 (1.45 × 10−2) − | 9.66 × 10−1 (1.98 × 10−2) − | 9.49 × 10−1 (1.85 × 10−2) − | 9.36 × 10−1 (9.53 × 10−3) − | 5.86 × 10−1 (6.87 × 10−2) + | 8.93 × 10−1 (2.52 × 10−2) − | 9.34 × 10−1 (1.87 × 10−2) − | 9.91 × 10−1 (1.48 × 10−3) −
WFG1 | 8 | 9.83 × 10−1 (2.32 × 10−2) | 9.94 × 10−1 (1.82 × 10−3) − | 9.84 × 10−1 (1.60 × 10−2) ≈ | 9.98 × 10−1 (6.47 × 10−4) − | 9.16 × 10−1 (4.01 × 10−2) + | 5.22 × 10−1 (7.13 × 10−2) + | 9.98 × 10−1 (8.33 × 10−4) − | 9.96 × 10−1 (1.03 × 10−3) − | 9.94 × 10−1 (1.25 × 10−3) −
WFG1 | 10 | 9.99 × 10−1 (6.37 × 10−4) | 9.97 × 10−1 (1.10 × 10−3) + | 9.88 × 10−1 (1.13 × 10−2) + | 9.99 × 10−1 (2.30 × 10−4) − | 7.24 × 10−1 (1.04 × 10−1) + | 9.81 × 10−1 (1.74 × 10−2) + | 9.99 × 10−1 (2.49 × 10−4) − | 9.96 × 10−1 (8.08 × 10−4) + | 9.95 × 10−1 (1.33 × 10−3) +
WFG2 | 2 | 6.32 × 10−1 (3.82 × 10−4) | 6.32 × 10−1 (4.76 × 10−4) − | 6.29 × 10−1 (1.09 × 10−3) + | 6.32 × 10−1 (6.03 × 10−4) ≈ | 6.20 × 10−1 (3.14 × 10−3) + | 6.29 × 10−1 (8.18 × 10−4) + | 6.32 × 10−1 (4.99 × 10−4) ≈ | 6.32 × 10−1 (4.49 × 10−4) + | 6.32 × 10−1 (4.57 × 10−4) ≈
WFG2 | 4 | 9.88 × 10−1 (4.99 × 10−4) | 9.74 × 10−1 (1.88 × 10−3) + | 9.60 × 10−1 (5.88 × 10−3) + | 9.79 × 10−1 (1.75 × 10−3) + | 9.58 × 10−1 (1.03 × 10−2) + | 9.05 × 10−1 (1.07 × 10−2) + | 9.86 × 10−1 (7.99 × 10−4) + | 9.87 × 10−1 (6.80 × 10−4) + | 9.77 × 10−1 (3.25 × 10−3) +
WFG2 | 6 | 9.95 × 10−1 (9.24 × 10−4) | 9.90 × 10−1 (1.88 × 10−3) + | 9.68 × 10−1 (5.14 × 10−3) + | 9.92 × 10−1 (9.60 × 10−4) + | 9.25 × 10−1 (1.76 × 10−2) + | 9.62 × 10−1 (1.04 × 10−2) + | 9.92 × 10−1 (1.58 × 10−3) + | 9.87 × 10−1 (1.55 × 10−3) + | 9.83 × 10−1 (3.98 × 10−3) +
WFG2 | 8 | 9.96 × 10−1 (1.11 × 10−3) | 9.97 × 10−1 (9.71 × 10−4) ≈ | 9.81 × 10−1 (5.41 × 10−3) + | 9.98 × 10−1 (3.77 × 10−4) − | 9.25 × 10−1 (9.32 × 10−3) + | 9.92 × 10−1 (2.93 × 10−3) + | 9.95 × 10−1 (1.92 × 10−3) + | 9.90 × 10−1 (5.65 × 10−3) + | 9.92 × 10−1 (1.83 × 10−3) +
WFG2 | 10 | 9.97 × 10−1 (1.09 × 10−3) | 9.98 × 10−1 (7.40 × 10−4) − | 9.85 × 10−1 (6.00 × 10−3) + | 9.99 × 10−1 (2.11 × 10−4) − | 9.37 × 10−1 (3.93 × 10−3) + | 9.97 × 10−1 (1.83 × 10−3) − | 9.97 × 10−1 (1.37 × 10−3) ≈ | 9.92 × 10−1 (2.11 × 10−3) + | 9.94 × 10−1 (1.16 × 10−3) +
WFG3 | 2 | 5.82 × 10−1 (2.75 × 10−4) | 5.77 × 10−1 (1.19 × 10−3) + | 5.79 × 10−1 (7.36 × 10−4) + | 5.78 × 10−1 (8.48 × 10−4) + | 5.63 × 10−1 (7.68 × 10−3) + | 5.73 × 10−1 (1.78 × 10−3) + | 5.78 × 10−1 (1.10 × 10−3) + | 5.79 × 10−1 (1.03 × 10−3) + | 5.80 × 10−1 (4.45 × 10−4) +
WFG3 | 4 | 2.56 × 10−1 (9.22 × 10−3) | 2.82 × 10−1 (9.46 × 10−3) − | 2.67 × 10−1 (9.85 × 10−3) − | 2.58 × 10−1 (1.34 × 10−2) ≈ | 1.05 × 10−1 (4.51 × 10−2) + | 1.43 × 10−1 (3.70 × 10−2) + | 2.37 × 10−1 (1.05 × 10−2) + | 2.43 × 10−1 (1.70 × 10−2) + | 2.75 × 10−1 (1.03 × 10−2) −
WFG3 | 6 | 5.37 × 10−2 (1.83 × 10−2) | 8.83 × 10−2 (2.71 × 10−2) − | 1.02 × 10−1 (1.88 × 10−2) − | 5.02 × 10−2 (1.90 × 10−2) ≈ | 0 × 100 (0 × 100) + | 6.63 × 10−2 (1.33 × 10−2) − | 9.86 × 10−3 (8.14 × 10−3) + | 2.57 × 10−2 (1.26 × 10−2) + | 8.83 × 10−2 (2.12 × 10−2) −
WFG3 | 8 | 1.99 × 10−2 (1.76 × 10−2) | 5.11 × 10−2 (1.77 × 10−2) − | 6.65 × 10−3 (9.64 × 10−3) + | 6.87 × 10−4 (1.61 × 10−3) + | 0 × 100 (0 × 100) + | 6.18 × 10−2 (1.52 × 10−2) − | 6.60 × 10−4 (1.64 × 10−3) + | 8.27 × 10−3 (1.16 × 10−2) + | 4.73 × 10−3 (8.71 × 10−3) +
WFG3 | 10 | 2.06 × 10−3 (5.58 × 10−3) | 1.43 × 10−3 (4.83 × 10−3) ≈ | 0 × 100 (0 × 100) + | 0 × 100 (0 × 100) + | 0 × 100 (0 × 100) + | 8.80 × 10−2 (1.87 × 10−3) − | 0 × 100 (0 × 100) + | 1.26 × 10−4 (6.92 × 10−4) + | 0 × 100 (0 × 100) +
WFG4 | 2 | 3.47 × 10−1 (4.10 × 10−5) | 3.45 × 10−1 (3.95 × 10−4) + | 3.45 × 10−1 (4.32 × 10−4) + | 3.45 × 10−1 (3.98 × 10−4) + | 3.30 × 10−1 (2.24 × 10−3) + | 3.14 × 10−1 (2.85 × 10−3) + | 3.45 × 10−1 (5.10 × 10−4) + | 3.45 × 10−1 (5.75 × 10−4) + | 3.46 × 10−1 (2.58 × 10−4) +
WFG4 | 4 | 7.12 × 10−1 (6.59 × 10−4) | 6.19 × 10−1 (7.37 × 10−3) + | 6.84 × 10−1 (3.35 × 10−3) + | 6.36 × 10−1 (6.36 × 10−3) + | 6.63 × 10−1 (5.78 × 10−3) + | 4.79 × 10−1 (1.92 × 10−2) + | 6.87 × 10−1 (2.05 × 10−3) + | 6.89 × 10−1 (2.42 × 10−3) + | 7.02 × 10−1 (2.17 × 10−3) +
WFG4 | 6 | 8.48 × 10−1 (1.21 × 10−3) | 6.60 × 10−1 (1.61 × 10−2) + | 8.20 × 10−1 (3.78 × 10−3) + | 7.54 × 10−1 (7.40 × 10−3) + | 5.19 × 10−1 (3.68 × 10−2) + | 5.71 × 10−1 (2.99 × 10−2) + | 7.98 × 10−1 (4.93 × 10−3) + | 8.04 × 10−1 (4.18 × 10−3) + | 8.40 × 10−1 (4.02 × 10−3) +
WFG4 | 8 | 9.07 × 10−1 (2.21 × 10−2) | 6.68 × 10−1 (2.12 × 10−2) + | 9.05 × 10−1 (3.76 × 10−3) ≈ | 8.63 × 10−1 (8.02 × 10−3) + | 3.70 × 10−1 (3.47 × 10−2) + | 6.58 × 10−1 (3.48 × 10−2) + | 8.59 × 10−1 (6.53 × 10−3) + | 8.64 × 10−1 (4.93 × 10−3) + | 9.11 × 10−1 (4.40 × 10−3) ≈
WFG4 | 10 | 9.60 × 10−1 (2.80 × 10−3) | 6.68 × 10−1 (2.43 × 10−2) + | 9.41 × 10−1 (3.66 × 10−3) + | 9.23 × 10−1 (6.08 × 10−3) + | 3.97 × 10−1 (4.99 × 10−2) + | 6.67 × 10−1 (4.80 × 10−2) + | 9.13 × 10−1 (6.28 × 10−3) + | 9.20 × 10−1 (5.79 × 10−3) + | 9.30 × 10−1 (4.38 × 10−3) +
WFG5 | 2 | 3.13 × 10−1 (2.95 × 10−5) | 3.13 × 10−1 (5.11 × 10−4) + | 3.12 × 10−1 (1.47 × 10−3) + | 3.13 × 10−1 (1.01 × 10−3) + | 3.05 × 10−1 (1.09 × 10−3) + | 3.07 × 10−1 (6.08 × 10−4) + | 3.12 × 10−1 (1.77 × 10−3) + | 3.12 × 10−1 (1.27 × 10−3) + | 3.12 × 10−1 (1.62 × 10−3) +
WFG5 | 4 | 6.69 × 10−1 (4.72 × 10−4) | 5.95 × 10−1 (7.96 × 10−3) + | 6.54 × 10−1 (2.54 × 10−3) + | 6.11 × 10−1 (4.81 × 10−3) + | 6.36 × 10−1 (4.83 × 10−3) + | 4.64 × 10−1 (9.20 × 10−3) + | 6.62 × 10−1 (1.31 × 10−3) + | 6.63 × 10−1 (1.26 × 10−3) + | 6.64 × 10−1 (1.79 × 10−3) +
WFG5 | 6 | 8.05 × 10−1 (4.21 × 10−4) | 6.25 × 10−1 (1.51 × 10−2) + | 7.88 × 10−1 (4.19 × 10−3) + | 7.32 × 10−1 (6.85 × 10−3) + | 5.66 × 10−1 (1.97 × 10−2) + | 5.35 × 10−1 (3.35 × 10−2) + | 7.81 × 10−1 (3.08 × 10−3) + | 7.85 × 10−1 (2.04 × 10−3) + | 8.05 × 10−1 (2.40 × 10−3) ≈
WFG5 | 8 | 8.63 × 10−1 (4.66 × 10−4) | 5.87 × 10−1 (2.06 × 10−2) + | 8.60 × 10−1 (2.50 × 10−3) + | 8.21 × 10−1 (7.41 × 10−3) + | 4.92 × 10−1 (2.17 × 10−2) + | 5.70 × 10−1 (2.33 × 10−2) + | 8.32 × 10−1 (3.30 × 10−3) + | 8.33 × 10−1 (2.82 × 10−3) + | 8.68 × 10−1 (2.56 × 10−3) −
WFG5 | 10 | 9.04 × 10−1 (2.35 × 10−4) | 5.98 × 10−1 (2.20 × 10−2) + | 8.91 × 10−1 (2.32 × 10−3) + | 8.71 × 10−1 (4.63 × 10−3) + | 4.63 × 10−1 (2.54 × 10−2) + | 5.34 × 10−1 (4.71 × 10−2) + | 8.82 × 10−1 (2.00 × 10−3) + | 8.84 × 10−1 (1.60 × 10−3) + | 8.91 × 10−1 (2.23 × 10−3) +
WFG6 | 2 | 3.07 × 10−1 (5.28 × 10−3) | 3.04 × 10−1 (8.24 × 10−3) + | 3.06 × 10−1 (7.35 × 10−3) ≈ | 3.09 × 10−1 (7.63 × 10−3) ≈ | 2.95 × 10−1 (1.22 × 10−2) + | 2.80 × 10−1 (3.61 × 10−2) + | 3.06 × 10−1 (9.24 × 10−3) ≈ | 3.06 × 10−1 (8.13 × 10−3) ≈ | 3.09 × 10−1 (8.40 × 10−3) ≈
WFG6 | 4 | 6.61 × 10−1 (8.38 × 10−3) | 5.66 × 10−1 (1.34 × 10−2) + | 6.39 × 10−1 (8.67 × 10−3) + | 5.79 × 10−1 (1.28 × 10−2) + | 6.14 × 10−1 (1.72 × 10−2) + | 4.46 × 10−1 (2.05 × 10−2) + | 6.35 × 10−1 (8.92 × 10−3) + | 6.41 × 10−1 (8.44 × 10−3) + | 6.57 × 10−1 (9.53 × 10−3) ≈
WFG6 | 6 | 7.98 × 10−1 (9.34 × 10−3) | 6.13 × 10−1 (2.72 × 10−2) + | 7.82 × 10−1 (1.02 × 10−2) + | 6.75 × 10−1 (1.30 × 10−2) + | 3.89 × 10−1 (3.62 × 10−2) + | 5.49 × 10−1 (1.89 × 10−2) + | 7.55 × 10−1 (1.03 × 10−2) + | 7.63 × 10−1 (1.03 × 10−2) + | 8.01 × 10−1 (8.94 × 10−3) ≈
WFG6 | 8 | 8.39 × 10−1 (1.86 × 10−2) | 6.39 × 10−1 (3.48 × 10−2) + | 8.45 × 10−1 (1.78 × 10−2) ≈ | 7.82 × 10−1 (1.83 × 10−2) + | 2.65 × 10−1 (2.53 × 10−2) + | 4.74 × 10−1 (7.38 × 10−2) + | 8.00 × 10−1 (2.43 × 10−2) + | 8.01 × 10−1 (1.49 × 10−2) + | 8.49 × 10−1 (1.72 × 10−2) −
WFG6 | 10 | 8.75 × 10−1 (2.12 × 10−2) | 6.45 × 10−1 (3.52 × 10−2) + | 8.77 × 10−1 (1.80 × 10−2) ≈ | 8.48 × 10−1 (2.63 × 10−2) + | 2.22 × 10−1 (5.35 × 10−2) + | 4.67 × 10−1 (8.26 × 10−2) + | 8.44 × 10−1 (1.46 × 10−2) + | 8.50 × 10−1 (1.21 × 10−2) + | 8.80 × 10−1 (1.62 × 10−2) ≈
WFG7 | 2 | 3.47 × 10−1 (1.98 × 10−5) | 3.45 × 10−1 (3.99 × 10−4) + | 3.46 × 10−1 (2.21 × 10−4) + | 3.46 × 10−1 (2.64 × 10−4) + | 3.29 × 10−1 (4.96 × 10−3) + | 3.43 × 10−1 (5.82 × 10−4) + | 3.46 × 10−1 (2.34 × 10−4) + | 3.46 × 10−1 (2.45 × 10−4) + | 3.47 × 10−1 (1.98 × 10−4) +
WFG7 | 4 | 7.12 × 10−1 (6.03 × 10−4) | 6.25 × 10−1 (7.05 × 10−3) + | 6.99 × 10−1 (1.87 × 10−3) + | 6.52 × 10−1 (5.47 × 10−3) + | 6.47 × 10−1 (1.74 × 10−2) + | 4.94 × 10−1 (1.74 × 10−2) + | 6.94 × 10−1 (2.29 × 10−3) + | 6.99 × 10−1 (1.44 × 10−3) + | 7.10 × 10−1 (1.93 × 10−3) +
WFG7 | 6 | 8.55 × 10−1 (9.75 × 10−4) | 6.18 × 10−1 (2.67 × 10−2) + | 8.44 × 10−1 (3.37 × 10−3) + | 7.73 × 10−1 (6.24 × 10−3) + | 4.90 × 10−1 (3.20 × 10−2) + | 5.05 × 10−1 (3.54 × 10−2) + | 8.05 × 10−1 (8.03 × 10−3) + | 8.23 × 10−1 (4.82 × 10−3) + | 8.61 × 10−1 (1.81 × 10−3) −
WFG7 | 8 | 9.17 × 10−1 (1.20 × 10−3) | 6.52 × 10−1 (3.30 × 10−2) + | 9.20 × 10−1 (2.83 × 10−3) − | 8.87 × 10−1 (5.92 × 10−3) + | 3.63 × 10−1 (2.24 × 10−2) + | 5.47 × 10−1 (3.30 × 10−2) + | 8.65 × 10−1 (8.44 × 10−3) + | 8.80 × 10−1 (5.09 × 10−3) + | 9.29 × 10−1 (1.91 × 10−3) −
WFG7 | 10 | 9.62 × 10−1 (1.16 × 10−2) | 6.65 × 10−1 (2.67 × 10−2) + | 9.58 × 10−1 (2.22 × 10−3) + | 9.44 × 10−1 (3.54 × 10−3) + | 3.20 × 10−1 (4.77 × 10−2) + | 5.33 × 10−1 (5.95 × 10−2) + | 9.33 × 10−1 (8.63 × 10−3) + | 9.39 × 10−1 (3.60 × 10−3) + | 9.56 × 10−1 (2.56 × 10−3) +
WFG8 | 2 | 3.02 × 10−1 (4.46 × 10−4) | 2.98 × 10−1 (1.18 × 10−3) + | 2.97 × 10−1 (1.45 × 10−3) + | 2.99 × 10−1 (1.38 × 10−3) + | 2.83 × 10−1 (6.67 × 10−3) + | 2.94 × 10−1 (2.34 × 10−3) + | 2.98 × 10−1 (1.40 × 10−3) + | 2.96 × 10−1 (3.10 × 10−3) + | 3.00 × 10−1 (1.21 × 10−3) +
WFG8 | 4 | 6.35 × 10−1 (9.54 × 10−4) | 5.29 × 10−1 (6.87 × 10−3) + | 6.08 × 10−1 (3.07 × 10−3) + | 5.49 × 10−1 (5.59 × 10−3) + | 5.90 × 10−1 (5.84 × 10−3) + | 3.59 × 10−1 (1.51 × 10−2) + | 6.10 × 10−1 (4.16 × 10−3) + | 6.11 × 10−1 (3.20 × 10−3) + | 6.28 × 10−1 (2.21 × 10−3) +
WFG8 | 6 | 7.68 × 10−1 (1.54 × 10−3) | 5.70 × 10−1 (9.22 × 10−3) + | 7.29 × 10−1 (4.50 × 10−3) + | 6.24 × 10−1 (9.39 × 10−3) + | 2.19 × 10−1 (7.96 × 10−2) + | 3.49 × 10−1 (3.97 × 10−2) + | 7.14 × 10−1 (1.12 × 10−2) + | 7.12 × 10−1 (6.46 × 10−3) + | 7.60 × 10−1 (4.14 × 10−3) +
WFG8 | 8 | 7.84 × 10−1 (2.39 × 10−2) | 6.09 × 10−1 (1.36 × 10−2) + | 8.12 × 10−1 (1.62 × 10−2) − | 6.56 × 10−1 (1.09 × 10−2) + | 6.10 × 10−2 (2.63 × 10−2) + | 3.97 × 10−1 (5.91 × 10−2) + | 7.25 × 10−1 (1.28 × 10−2) + | 7.06 × 10−1 (1.73 × 10−2) + | 8.11 × 10−1 (1.52 × 10−2) −
WFG8 | 10 | 8.62 × 10−1 (1.41 × 10−2) | 6.36 × 10−1 (1.82 × 10−2) + | 8.91 × 10−1 (2.22 × 10−2) − | 7.35 × 10−1 (1.88 × 10−2) + | 5.49 × 10−2 (4.26 × 10−2) + | 3.88 × 10−1 (5.97 × 10−2) + | 8.33 × 10−1 (1.18 × 10−2) + | 8.11 × 10−1 (1.25 × 10−2) + | 9.02 × 10−1 (2.54 × 10−2) −
WFG9 | 2 | 3.40 × 10−1 (1.67 × 10−2) | 3.29 × 10−1 (2.69 × 10−2) + | 3.26 × 10−1 (3.10 × 10−2) + | 3.33 × 10−1 (2.22 × 10−2) ≈ | 3.02 × 10−1 (2.37 × 10−2) + | 3.21 × 10−1 (2.47 × 10−2) + | 3.27 × 10−1 (2.31 × 10−2) + | 3.31 × 10−1 (2.19 × 10−2) ≈ | 3.34 × 10−1 (2.32 × 10−2) ≈
WFG9 | 4 | 6.77 × 10−1 (3.05 × 10−3) | 5.78 × 10−1 (1.26 × 10−2) + | 6.68 × 10−1 (3.76 × 10−3) + | 6.27 × 10−1 (6.21 × 10−3) + | 5.75 × 10−1 (3.97 × 10−2) + | 4.81 × 10−1 (3.70 × 10−2) + | 6.37 × 10−1 (1.94 × 10−2) + | 6.53 × 10−1 (7.16 × 10−3) + | 6.84 × 10−1 (3.58 × 10−3) −
WFG9 | 6 | 7.90 × 10−1 (1.70 × 10−2) | 5.13 × 10−1 (2.09 × 10−2) + | 7.90 × 10−1 (2.36 × 10−2) ≈ | 7.26 × 10−1 (9.88 × 10−3) + | 4.89 × 10−1 (6.91 × 10−2) + | 5.53 × 10−1 (4.41 × 10−2) + | 7.14 × 10−1 (2.75 × 10−2) + | 7.49 × 10−1 (1.37 × 10−2) + | 8.12 × 10−1 (4.04 × 10−3) −
WFG9 | 8 | 8.40 × 10−1 (3.89 × 10−2) | 5.79 × 10−1 (2.57 × 10−2) + | 8.67 × 10−1 (3.91 × 10−3) − | 8.07 × 10−1 (8.73 × 10−3) + | 3.21 × 10−1 (1.32 × 10−1) + | 6.19 × 10−1 (3.92 × 10−2) + | 7.49 × 10−1 (5.26 × 10−2) + | 7.98 × 10−1 (3.06 × 10−2) + | 8.72 × 10−1 (4.57 × 10−3) −
WFG9 | 10 | 8.74 × 10−1 (5.65 × 10−2) | 5.72 × 10−1 (3.08 × 10−2) + | 8.99 × 10−1 (5.22 × 10−3) − | 8.51 × 10−1 (6.37 × 10−3) + | 3.06 × 10−1 (1.18 × 10−1) + | 5.35 × 10−1 (6.45 × 10−2) + | 8.17 × 10−1 (5.67 × 10−2) + | 8.51 × 10−1 (1.07 × 10−2) + | 8.88 × 10−1 (4.59 × 10−3) ≈
W/T/L |  |  | 67/4/9 | 53/13/14 | 58/14/8 | 62/8/10 | 66/1/13 | 52/22/6 | 55/14/11 | 38/18/24
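The "+", "≈" and "−" marks in Table 2 indicate, for every test instance, whether NSGA-III* is statistically better than, equivalent to, or worse than the corresponding peer algorithm in terms of HV. The Python sketch below is illustrative only and is not the authors' evaluation script: the helper names (hv_2d, label), the reference point, the 30-run sample size and the Wilcoxon rank-sum test at the 0.05 level are assumptions made for the example, not details taken from the paper.

import numpy as np
from scipy.stats import ranksums

def hv_2d(points, ref):
    # Hypervolume of a 2-objective (minimization) non-dominated set w.r.t. a reference point.
    pts = sorted((p for p in points if p[0] <= ref[0] and p[1] <= ref[1]),
                 key=lambda p: p[0])          # ascending f1, hence descending f2
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        hv += (ref[0] - f1) * (prev_f2 - f2)  # slab dominated only by this point
        prev_f2 = f2
    return hv

def label(hv_ours, hv_peer, alpha=0.05):
    # '+' = NSGA-III* significantly better, '−' = worse, '≈' = no significant difference,
    # following the Table 2 convention where '+' in a peer column counts as a win for NSGA-III*.
    _, p = ranksums(hv_ours, hv_peer)         # rank-sum test over per-run HV samples (assumed test)
    if p >= alpha:
        return "≈"
    return "+" if np.mean(hv_ours) > np.mean(hv_peer) else "−"

# Hypothetical per-run HV samples of two algorithms on one test instance (30 runs assumed).
rng = np.random.default_rng(0)
ours = rng.normal(0.583, 5e-4, 30)
peer = rng.normal(0.581, 5e-4, 30)
print(label(ours, peer))                      # typically '+': the first sample is significantly larger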
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
