Mathematical and Computational Applications
  • Feature Paper
  • Article
  • Open Access

8 January 2020

Non-Epsilon Dominated Evolutionary Algorithm for the Set of Approximate Solutions

1 Department of Engineering Science, University of Oxford, Oxford OX1 3PJ, UK
2 Departamento de Computación, CINVESTAV-IPN, Mexico City 07360, Mexico
3 School of Engineering, University of California Merced, Merced, CA 95343, USA
* Author to whom correspondence should be addressed.
This article belongs to the Special Issue Numerical and Evolutionary Optimization 2019

Abstract

In this paper, we present a novel evolutionary algorithm for the computation of approximate solutions for multi-objective optimization problems. These solutions are of particular interest to the decision-maker as backup solutions, since they offer similar quality but lie in different regions of the decision space. The novel algorithm uses a subpopulation approach to exert pressure towards the Pareto front while exploring promising areas for approximate solutions. Furthermore, the algorithm uses an external archiver to maintain a suitable representation in both decision and objective space. The novel algorithm computes an approximation of the set of interest with good quality in terms of the averaged Hausdorff distance. We support these statements with results on academic problems from the literature and on an application to non-uniform beams.

1. Introduction

In many real-world engineering problems, one is faced with the task of optimizing several objectives concurrently, leading to a multi-objective optimization problem (MOP). The common goal for such problems is to identify the set of optimal solutions (the so-called Pareto set) and its image in objective space, the Pareto front. These kinds of problems are of great interest in areas such as optimal control [,], telecommunications [], transportation [,], and healthcare [,]. In all of the previous cases, the authors considered several conflicting objectives, and their goal was to provide solutions representing trade-offs among those objectives.
However, in practice, the decision-maker may not always be interested in the best solutions—for instance, when these solutions are sensitive to perturbations or are not realizable in practice [,,,]. Thus, there exists an additional challenge: one has to search not only for solutions with good performance but also for solutions that can be implemented in practice.
In this context, the set of approximate solutions [] is an excellent candidate to enhance the solutions given to the decision-maker. Given the allowed deterioration ϵ for each objective, one can compute a set that is “close” to the Pareto front but that can have different properties in decision space. For instance, the decision-maker could find a solution that is realizable in practice or one that serves as a backup in case the selected option is no longer available. For this purpose, we use −ϵ-dominance. In this setting, a solution x −ϵ-dominates a solution y if F(x) + ϵ dominates F(y) in the Pareto sense (see Definition 3 for the formal definition).
In this work, we propose a novel evolutionary algorithm that aims for the computation of the set of approximate solutions. The algorithm uses two subpopulations: the first puts pressure towards the Pareto front, while the second explores promising areas for approximate solutions. For this purpose, the algorithm ranks the solutions according to −ϵ-dominance and to how well distributed the solutions are in both decision and objective space. Furthermore, the algorithm incorporates an external archiver to maintain a suitable representation in both decision and objective space. The novel algorithm is capable of computing an approximation of the set of interest with good quality regarding the averaged Hausdorff distance [].
The remainder of this paper is organized as follows: in Section 2, we state the notation and background required for the understanding of the article. In Section 3, we present the components of the proposed algorithm. In Section 4, we show some numerical results and study an application to non-uniform beams. Finally, in Section 5, we conclude and outline some paths for future work.

2. Background

Here, we consider continuous multi-objective optimization problems of the form

min_{x ∈ Q} F(x),  (MOP)

where Q ⊆ ℝⁿ and F is defined as the vector of the objective functions

F : Q → ℝᵏ,  F(x) = (f₁(x), …, f_k(x))ᵀ,

and where each objective f_i : Q → ℝ is continuous. In multi-objective optimization, optimality is defined by the concept of dominance [].
Definition 1.
(a) Let v, w ∈ ℝᵏ. Then, the vector v is less than w (v <_p w) if v_i < w_i for all i ∈ {1, …, k}. The relation ≤_p is defined analogously.
(b) y ∈ Q is dominated by a point x ∈ Q (x ≺ y) with respect to (MOP) if F(x) ≤_p F(y) and F(x) ≠ F(y); otherwise, y is called non-dominated by x.
(c) x ∈ Q is a Pareto point if there is no y ∈ Q that dominates x.
(d) x ∈ Q is weakly Pareto optimal if there exists no y ∈ Q such that F(y) <_p F(x).
The set P_Q of Pareto points is called the Pareto set, and its image F(P_Q) the Pareto front. Typically, both sets form (k − 1)-dimensional objects [].
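The dominance test of Definition 1(b) can be implemented directly; the following Python sketch (names are ours, not from the paper) assumes minimization and objective vectors given as tuples:

```python
def dominates(fx, fy):
    """Return True if objective vector fx Pareto-dominates fy, i.e.,
    fx <=_p fy componentwise and fx != fy (minimization assumed)."""
    return (all(a <= b for a, b in zip(fx, fy))
            and any(a < b for a, b in zip(fx, fy)))
```

For instance, (1, 2) dominates (2, 2), while (1, 3) and (3, 1) are mutually non-dominated.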
In some cases, it is worth looking for solutions that are “close” in objective space to the optimal solutions, for instance, as backup solutions. These solutions form the so-called set of nearly optimal solutions.
In order to define the set of nearly optimal solutions (P_{Q,ϵ}), we now present ϵ-dominance [,], which can be viewed as a weaker concept of dominance and which is the basis of the approximation concept used in the sequel.
Definition 2
(ϵ-dominance []). Let ϵ = (ϵ₁, …, ϵ_k) ∈ ℝ₊ᵏ and x, y ∈ Q. x is said to ϵ-dominate y (x ≺_ϵ y) with respect to (MOP) if F(x) − ϵ ≤_p F(y) and F(x) − ϵ ≠ F(y).
Definition 3
(−ϵ-dominance []). Let ϵ = (ϵ₁, …, ϵ_k) ∈ ℝ₊ᵏ and x, y ∈ Q. x is said to −ϵ-dominate y (x ≺_{−ϵ} y) with respect to (MOP) if F(x) + ϵ ≤_p F(y) and F(x) + ϵ ≠ F(y).
Both definitions are analogous (Definition 3 is obtained from Definition 2 by replacing ϵ with −ϵ). However, in this work, we introduce −ϵ-dominance, as it can be viewed as a stronger concept of dominance, and we use it to define our set of interest.
Definition 4
(Set of approximate solutions []). Denote by P_{Q,ϵ} the set of points in Q ⊆ ℝⁿ that are not −ϵ-dominated by any other point in Q, i.e.,

P_{Q,ϵ} := { x ∈ Q | ∄ y ∈ Q : y ≺_{−ϵ} x }.
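On a finite candidate set, Definition 4 can be evaluated by brute force. The following Python sketch (function names are ours) implements −ϵ-dominance as in Definition 3 and keeps every candidate that no other candidate −ϵ-dominates:

```python
def eps_dominates(fx, fy, eps):
    """x -eps-dominates y (Definition 3): F(x) + eps <=_p F(y), F(x) + eps != F(y)."""
    shifted = [a + e for a, e in zip(fx, eps)]
    return all(s <= b for s, b in zip(shifted, fy)) and shifted != list(fy)

def approx_solutions(points, F, eps):
    """Brute-force P_{Q,eps} over a finite candidate set (Definition 4):
    keep every point not -eps-dominated by any other candidate."""
    images = [F(x) for x in points]
    return [x for i, x in enumerate(points)
            if not any(eps_dominates(images[j], images[i], eps)
                       for j in range(len(points)) if j != i)]
```

For F(x) = (x, (x − 1)²) and ϵ = (0.5, 0.5), the candidate x = 3 is −ϵ-dominated by x = 0 and discarded, while x = 0 and x = 1 are kept.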
Thus, in the remainder of this work, we aim to compute the set P_{Q,ϵ} with multi-objective evolutionary algorithms [,]. To the best of the authors’ knowledge, there exist only a few algorithms that aim to find approximations of P_{Q,ϵ}. Most of the approaches use the archiver ArchiveUpdate_{P_{Q,ϵ}} as the critical component to maintain a representation of the set of interest. This archiver keeps all points that are not −ϵ-dominated by any other considered candidate. Furthermore, the archiver is monotone, and it converges in the limit to the set of interest P_{Q,ϵ} in the Hausdorff sense [].

4. The Proposed Algorithm

In this section, we present an evolutionary multi-objective algorithm that aims for a finite representation of P_{Q,ϵ} (Algorithm 1). We refer to this algorithm as the non-ϵ-dominated sorting genetic algorithm (NϵSGA). First, the algorithm initializes two random subpopulations (P and R). The subpopulation R evolves as in the classical NSGA-II []; this subpopulation aims to find the Pareto front. The subpopulation P seeks to find the set of approximate solutions. At each iteration, the individuals are ranked using Algorithm 3 according to −ϵ-dominance. Then, we apply a density estimator based on the nearest neighbor of each solution in both objective and decision space (line 13 of Algorithm 1). The algorithm selects half the population from the solutions best spaced in objective space and the other half from those best spaced in decision space. This mechanism allows the algorithm to maintain better diversity in the population.
Furthermore, the archiver is applied to maintain the non-−ϵ-dominated solutions. Finally, the algorithm performs a migration between the populations every given number of generations. Note that the algorithm maintains diversity in both decision and objective space. This feature allows keeping nearly optimal solutions that are similar in objective space but different in decision space. To achieve this goal, half the population is selected according to the nearest-neighbor distance in objective space and half according to the distance in decision space. In the following sections, we detail the components of the algorithm.
Algorithm 1 Non-ϵ-Dominated Sorting EMOA
Require: number of generations n_gen, population size n_pop, ϵ ∈ ℝ₊ᵏ, δ_x ∈ ℝ₊, δ_y ∈ ℝ₊, n_f ∈ [1, …, n_gen], n_r ∈ [1, …, n_pop]
Ensure: updated archive A
1: Generate initial population P₁
2: Generate initial population R₁
3: A ← ArchiveUpdate_{P_{Q,ϵ}}D_{xy}(P₁, [ ], ϵ, δ_x, δ_y)
4: for i = 1 to n_gen do
5:     Select λ/2 parents Q_i with tournament selection
6:     Õ_i ← SBXCrossover(Q_i)
7:     O_i ← PolynomialMutation(Õ_i)
8:     P̃_i ← P_i ∪ O_i
9:     F ← Non-ϵ-DominatedSorting(P̃_i)
10:    P_{i+1} ← ∅
11:    j ← 1
12:    while |P_{i+1}| + |F_j| ≤ n_pop do
13:        NearestNeighborDistance(F_j)    ▹ in both decision and objective space
14:        P_{i+1} ← P_{i+1} ∪ F_j
15:        j ← j + 1
16:    end while
17:    Sort(F_j)    ▹ by nearest-neighbor distance
18:    P_{i+1} ← P_{i+1} ∪ F_j[1 : n_pop − |P_{i+1}|]
19:    S_i ← Evolve R_i as in NSGA-II
20:    R_{i+1} ← Select from S_i using the usual non-dominated ranking and crowding distance
21:    A_{i+1} ← ArchiveUpdate_{P_{Q,ϵ}}D_{xy}(O_i ∪ S_i, A_i, ϵ, δ_x, δ_y)
22:    if mod(i, n_f) == 0 then
23:        Exchange n_r individuals at random between P_{i+1} and R_{i+1}
24:    end if
25: end for

4.1. An Archiver for the Set of Approximate Solutions

Recently, Schütze et al. [] proposed the archiver ArchiveUpdate_{P_{Q,ϵ}}D_{xy} (see Algorithm 2). This archiver aims to maintain a representation of the set of approximate solutions with a good distribution in both decision and objective space, according to the user-defined preferences δ_x, δ_y ∈ ℝ₊. The archiver includes a candidate solution if no solution in the archive −ϵ-dominates it and if no solution is ‘close’ to it in both decision and objective space (as measured by δ_x and δ_y, respectively). An important feature of this algorithm is that it converges to the set of approximate solutions in the limit (up to a discretization error given by δ_x and δ_y). In the following, we briefly describe the archiver, since it is the cornerstone of the proposed algorithm.
Given a set of candidate solutions P and an initial archive A, the algorithm iterates over all candidate solutions p ∈ P and tries to add each of them to the archive. A solution p ∈ P is included if there is no solution a ∈ A that −ϵ-dominates p and if there is no other solution that is ‘close’ in both decision and objective space in terms of the distance dist(·,·).
Theorem 3 states that the archiver is monotone: the Hausdorff distance between the image of the archive at iteration l, F(A_l), and the image of the set of non-−ϵ-dominated solutions up to iteration l, F(P_{C_l,ϵ}), is bounded by max(δ_y, dist(F(P_{C_l,ϵ+2δ_y}), F(P_{C_l,ϵ}))).
Theorem 3.
Let l ∈ ℕ, ϵ ∈ ℝ₊ᵏ, P₁, …, P_l ⊂ ℝⁿ be finite sets, δ_x = 0, and A_i, i = 1, …, l, be obtained by ArchiveUpdate_{P_{Q,ϵ}}D_{xy} as in Algorithm 1. Then, for C_l = ∪_{i=1}^{l} P_i, it holds that:
(i) dist(F(P_{C_l,ϵ}), F(A_l)) ≤ δ_y,
(ii) dist(F(A_l), F(P_{C_l,ϵ})) ≤ dist(F(P_{C_l,ϵ+2δ_y}), F(P_{C_l,ϵ})),
(iii) d_H(F(P_{C_l,ϵ}), F(A_l)) ≤ max(δ_y, dist(F(P_{C_l,ϵ+2δ_y}), F(P_{C_l,ϵ}))).
Proof. 
See []. □
Note that checking −ϵ-dominance between two solutions takes constant time with respect to the sizes of P and A. Thus, the complexity of the archiver is O(|P||A|), which in the worst case is quadratic (O(|P|²)).
Algorithm 2 A := ArchiveUpdate_{P_{Q,ϵ}}D_{xy}
Require: population P, archive A₀, ϵ ∈ ℝ₊ᵏ, δ_x ∈ ℝ₊, δ_y ∈ ℝ₊
Ensure: updated archive A
1: A := A₀
2: for all p ∈ P do
3:     if ∄ a₁ ∈ A : a₁ ≺_{−ϵ} p and ∄ a₂ ∈ A : (dist(F(a₂), F(p)) ≤ δ_y and dist(a₂, p) ≤ δ_x) then
4:         A ← A ∪ {p}
5:         Â = { a₁ ∈ A | ∄ a₂ ∈ A : a₂ ≺_{−(ϵ+δ_y)} a₁ }
6:         for all a ∈ A ∖ Â do
7:             if p ≺_{−(ϵ+δ_y)} a and dist(a, Â) ≥ 2δ_x then
8:                 A ← A ∖ {a}
9:             end if
10:        end for
11:    end if
12: end for
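Under our reading of Algorithm 2 (several comparison operators were lost in typesetting, so the thresholds ≤ δ_y, ≤ δ_x, and ≥ 2δ_x are assumptions, and dist is taken to be the Euclidean distance), the archiver can be sketched in Python as follows; all names are ours:

```python
import math

def eps_dominates(fx, fy, eps):
    """x -eps-dominates y (Definition 3): F(x) + eps <=_p F(y), F(x) + eps != F(y)."""
    shifted = [a + e for a, e in zip(fx, eps)]
    return all(s <= b for s, b in zip(shifted, fy)) and shifted != list(fy)

def archive_update(P, A0, F, eps, dx, dy):
    """Sketch of ArchiveUpdate_{P_{Q,eps}}D_{xy} (Algorithm 2).
    P: candidate decision vectors (tuples); A0: initial archive;
    F: maps a decision vector to its objective tuple; dx, dy: spacing thresholds."""
    A = list(A0)
    for p in P:
        fp = F(p)
        not_dominated = not any(eps_dominates(F(a), fp, eps) for a in A)
        not_close = not any(math.dist(F(a), fp) <= dy and math.dist(a, p) <= dx
                            for a in A)
        if not_dominated and not_close:
            A.append(p)
            eps_dy = [e + dy for e in eps]
            # A_hat: archive members not -(eps+dy)-dominated by any other member
            A_hat = [a1 for a1 in A
                     if not any(eps_dominates(F(a2), F(a1), eps_dy) for a2 in A)]
            for a in [a for a in A if a not in A_hat]:
                if eps_dominates(fp, F(a), eps_dy) and \
                        min(math.dist(a, b) for b in A_hat) >= 2 * dx:
                    A.remove(a)
    return A
```

Each candidate is compared against the whole archive, which is where the O(|P||A|) cost noted above comes from.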

4.2. Ranking Nearly Optimal Solutions

Next, we present an algorithm to rank approximate solutions. Algorithm 3 is inspired by the omni-optimizer []. In this case, the best solutions are those that are not −ϵ-dominated by any other solution in the population and are also well distributed in both decision and objective space. The second layer is formed by the solutions that are not −ϵ-dominated but are not well distributed. The subsequent layers follow the same pattern.
Algorithm 3 Non-ϵ-Dominated Sorting
Require: P₀, ϵ, δ_x, δ_y
Ensure: F
1: while P_i ≠ ∅ do
2:     A_{i+1} ← ArchiveUpdate_{P_{Q,ϵ}}D_{xy}(P_i, [ ], ϵ, δ_x, δ_y)
3:     P_{i+1} ← P_i ∖ A_{i+1}
4:     F_i ← A_{i+1}
5: end while
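The layering of Algorithm 3 can be sketched by repeatedly peeling off the non-−ϵ-dominated subset of the remaining population. For brevity, this sketch (names ours) omits the δ_x/δ_y spacing test that the full algorithm applies through the archiver:

```python
def eps_dominates(fx, fy, eps):
    """x -eps-dominates y (Definition 3): F(x) + eps <=_p F(y), F(x) + eps != F(y)."""
    shifted = [a + e for a, e in zip(fx, eps)]
    return all(s <= b for s, b in zip(shifted, fy)) and shifted != list(fy)

def non_eps_dominated_sorting(points, F, eps):
    """Peel the population into fronts F_1, F_2, ...: each front holds the
    points not -eps-dominated by any other remaining point."""
    remaining = list(points)
    fronts = []
    while remaining:
        imgs = [F(x) for x in remaining]
        front_idx = [i for i in range(len(remaining))
                     if not any(eps_dominates(imgs[j], imgs[i], eps)
                                for j in range(len(remaining)) if j != i)]
        fronts.append([remaining[i] for i in front_idx])
        remaining = [x for i, x in enumerate(remaining) if i not in front_idx]
    return fronts
```

Since ϵ > 0 makes −ϵ-dominance a strict partial order, every peel extracts at least one point and the loop terminates.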
Figure 1 shows the different fronts in different colors. The use of δ_y and δ_x provides a criterion to discriminate between solutions that are not −ϵ-dominated: without this criterion (right), all solutions in blue would belong to the first front, whereas with it (left) each front contains fewer solutions, and their rank is decided by how well spaced they are in both spaces. This feature is important since it makes it possible to impose a preference on the rank via the spacing between solutions.
Figure 1. Ranking nearly optimal solutions for the first five ranks on the Lamé superspheres MOP (LSS) with α = 0.5 []. The figure shows the rankings with the proposed algorithm using δ_y (left) and without it (right).

4.3. Using Subpopulations

Since −ϵ-dominance can be viewed as a relaxed form of Pareto dominance, it can be expected that more solutions are non-−ϵ-dominated than under classical dominance. This generates a potential issue: the algorithm could stagnate at the early stages.
To avoid this issue, we propose the use of two subpopulations to improve the pressure towards the set of interest. Each subpopulation has a different aim: the first approximates the global Pareto set/front, while the second approximates P_{Q,ϵ}. Figure 2 shows the first rank for each subpopulation. A vital aspect of the approach is how information is shared between the subpopulations. In the following, we describe two schemes to share individuals.
Figure 2. The figure shows in yellow the feasible space and in blue the first rank for each subpopulation on LSS with α = 0.5—on the left, the first-ranked solutions that aim for the global Pareto front; on the right, the first-ranked solutions that aim for the set of approximate solutions.
  • Migration: every n_f generations, n_r individuals are exchanged between the populations. The individuals are chosen at random from the rank-one individuals.
  • Crossover between subpopulations: every n_f generations, crossover is performed between the populations, with one parent chosen at random from each population.
As can be observed, the use of subpopulations introduces three extra parameters to the algorithm, namely:
  • n_f: the frequency of migration,
  • n_r: the number of individuals to be exchanged, and
  • ratio: the ratio of individuals between the subpopulations, in the range [0, 1].
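The migration scheme above can be sketched as a simple in-place exchange; names are ours, and the restriction to rank-one individuals is omitted for brevity:

```python
import random

def migrate(P, R, n_r, rng=random):
    """Exchange n_r randomly chosen individuals between subpopulations
    P and R, in place (called every n_f generations)."""
    idx_p = rng.sample(range(len(P)), n_r)
    idx_r = rng.sample(range(len(R)), n_r)
    for i, j in zip(idx_p, idx_r):
        P[i], R[j] = R[j], P[i]
```

Because indices within each list are sampled without replacement, exactly n_r individuals cross over in each direction and the combined population is preserved.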
Thus, the population in charge of looking for optimal solutions puts pressure on the second population and helps to determine the set of approximate solutions, while the latter population is in charge of exploiting the promising regions. This allows finding local Pareto fronts if they are ‘close enough’ in terms of −ϵ-dominance.

4.4. Complexity of the Novel Algorithm

In the following, we comment on the complexity of N ϵ SGA. Let G be the number of generations and | P | the size of the population.
  • Initialization: in this step, all the individuals are generated at random and evaluated. This task takes O ( | P | ) .
  • External archiver: from the previous section, the complexity of the archiver is O(|P||A|), where |A| is the size of the archive. Note that the maximum size of the archive at the end of the execution is G|P|.
  • Non-ϵ-dominated sorting: ArchiveUpdate_{P_{Q,ϵ}}D_{xy} is executed until no solutions remain in the population P. In the worst case, each rank contains only one solution, and the archiver is executed |P| times. Thus, the complexity of this part of the algorithm is O(|P|³).
  • Main loop: the main loop selects the parents and applies the genetic operators (each of these tasks takes O(|P|)); then, the algorithm applies the non-ϵ-dominated sorting (O(|P|³)); next, it finds the solutions that are well distributed in terms of the closest neighbor (O(|P|²)); and, finally, it applies the external archiver (O(|P||A|)). All these operations are executed G times. Thus, the complexity of the algorithm is O(max(G|P|³, (G|P|)²)), the maximum of the non-ϵ-dominated sorting and external archiver terms.

5. Numerical Results

In this section, we present the numerical results corresponding to the evolutionary algorithm. We performed several experiments to validate the novel algorithm. First, we validate the use of an external archiver in the evolutionary algorithm. Next, we compare several subpopulation strategies. Finally, we compare the resulting algorithm with the enhanced state-of-the-art algorithms.
The algorithms chosen are modifications of P_{Q,ϵ}-NSGA-II and P_{Q,ϵ}-MOEA. In the literature, both algorithms aim for well-distributed solutions only in objective space; in this case, we enhanced them with ArchiveUpdate_{P_{Q,ϵ}}D_{xy} to allow a fair comparison with the proposed algorithm.
In all cases, we used the following problems: Deb99 [], two-on-one [], sym-part [], S. Schäffler, R. Schultz and K. Weinzierl (SSW) [], omni test [], and Lamé superspheres (LSS) []. Next, 20 independent runs were performed with the following parameters: 100 individuals, 100 generations, crossover rate = 0.9, mutation rate = 1/n, distribution index for crossover η_c = 20, and distribution index for mutation η_m = 20. Table 1 shows the ϵ, δ_y and δ_x values used for each problem. Furthermore, all cases were measured with the Δ₂ indicator, and a Wilcoxon rank-sum test was performed with a significance level α = 0.05. Finally, in all Δ₂ tables, bold font represents the best mean value, and the arrows represent:
Table 1. Parameters used for the experiments.
  • ↑ rank sum rejects the null hypothesis of equal medians and the algorithm has a better median,
  • ↓ rank sum rejects the null hypothesis of equal medians, and the algorithm has a worse median and
  • ↔ rank sum cannot reject the null hypothesis of equal medians.
The comparison is always made against the algorithm in the first column. We have selected the Δ₂ indicator since it allows us to compare the distance between the reference solution and the approximation found by the algorithms in both decision and objective space. Since we aim to compute the set of approximate solutions, classical indicators used to measure Pareto fronts cannot be applied in a straightforward manner and should be studied carefully before being used in this context.
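For reference, Δ₂ is the averaged Hausdorff distance with p = 2, i.e., the maximum of the power-mean generational distance GD_p and inverted generational distance IGD_p []. A minimal Python sketch (function name ours):

```python
import math

def delta_p(A, R, p=2):
    """Averaged Hausdorff distance Delta_p = max(GD_p, IGD_p) between a
    finite approximation A and a reference set R (points as tuples);
    p = 2 gives the Delta_2 indicator used in the tables."""
    gd = (sum(min(math.dist(a, r) for r in R) ** p for a in A) / len(A)) ** (1 / p)
    igd = (sum(min(math.dist(a, r) for a in A) ** p for r in R) / len(R)) ** (1 / p)
    return max(gd, igd)
```

The same function applies in decision and in objective space, simply by feeding it decision vectors or their images.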

5.1. Validation of the Use of an External Archiver

We first validate the use of an external archiver in the novel algorithm. Note that NϵSGA2A denotes the version of the algorithm that uses the external archiver. Table 2 and Table 3 show the mean and standard deviation of the Δ₂ values obtained by the algorithms. From the results, we can observe that in all six problems the use of the external archiver has a significant impact on the algorithm, which indicates an advantage of the novel archiver. It is important to note that the external archiver does not require any additional function evaluations; thus, the comparison is fair regarding the number of evaluations used.
Table 2. Averaged Δ 2 in decision space for the use of the external archiver and the base case.
Table 3. Averaged Δ 2 in objective space for the use of the external archiver and the base case.

5.2. Validation of the Use of Subpopulations

Furthermore, we test whether the use of subpopulations has a positive effect on the novel algorithm. NϵSGA2Is1 denotes the approach that uses migration, and NϵSGA2Is2 denotes the approach that uses crossover between the subpopulations. Table 4 and Table 5 show the mean and standard deviation of the Δ₂ values obtained by the algorithms. From the results, we can observe that the approach that uses migration is outperformed in four of the twelve comparisons (considering both decision and objective space). On the other hand, the approach that uses crossover between subpopulations is outperformed by the base algorithm in three of twelve comparisons. As could be expected, the algorithms that use subpopulations obtain better results in objective space while, in some cases, sacrificing quality in decision space.
Table 4. Averaged Δ 2 for the algorithms that use subpopulations and the base case in decision space.
Table 5. Averaged Δ 2 for the algorithms that use subpopulations and the base case in objective space.

5.3. Comparison to State-of-the-Art Algorithms

Now, we compare the proposed algorithm with P_{Q,ϵ}D_{xy}-NSGA-II and P_{Q,ϵ}D_{xy}-MOEA. In addition to the MOPs used previously, in this section we consider three distance-based multi/many-objective point problems (DBMOPPs) [,,]. These problems allow us to visually examine the behavior of the algorithms in many-objective settings, since they can be scaled in the number of objectives. In this context, K sets of vectors are defined, where the kth set, V_k = {v₁, …, v_{m_k}}, determines the quality of a design vector x ∈ Q on the kth objective. This can be computed as

f_k(x) = min_{v ∈ V_k} dist(x, v),

where m_k is the number of elements of V_k, which depends on k, and dist(x, v) denotes the Euclidean distance. An alternative representation is given by a centre (c), a circle radius (r), and an angle for each objective-minimizing vector. From this framework, we generate three problems with k = 3, 5, 10 objectives. Each problem has nine connected components with centers c = [[−1, 1], [−1, 0], [−1, −1], [0, 1], [0, 0], [0, −1], [1, 1], [1, 0], [1, −1]]. Furthermore, the radii are r = [0.15, 0.1, 0.15, 0.15, 0.1, 0.15, 0.15, 0.1, 0.15] and the angles a = [a_i : a_i = 2iπ/k] for i = 1, …, k. Figure 3 shows the structure of each problem. Each polygon represents a local Pareto set. The global Pareto sets are those centered at [[−1, 0], [0, 1], [0, −1], [1, 0]]. For these problems, we used ϵ = [0.05]ᵏ, δ_x = [0.02, 0.02], δ_y = [0.01]ᵏ and −10 ≤ x₁, x₂ ≤ 10.
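The distance-based objective above reduces to a nearest-attractor computation; the sketch below (names ours) also shows one reading, which is an assumption on our part, of how the (c, r, a) encoding yields an objective-minimizing vector:

```python
import math

def dbmopp_objective(x, V_k):
    """f_k(x): Euclidean distance from x to the nearest
    objective-minimizing vector in V_k."""
    return min(math.dist(x, v) for v in V_k)

def minimizing_vector(center, radius, angle):
    """Assumed decoding of the (c, r, a) representation: the point on the
    circle of the given center and radius at the given angle."""
    cx, cy = center
    return (cx + radius * math.cos(angle), cy + radius * math.sin(angle))
```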
Figure 3. Local Pareto set regions of the DBMOPP problems generated for k = 3 , 5 , 10 objectives.
Figure 4, Figure 5, Figure 6 and Figure 7 show the median approximations of P_{Q,ϵ} and F(P_{Q,ϵ}) obtained by the algorithms. It is possible to observe that in all figures P_{Q,ϵ}D_{xy}-MOEA has the worst performance. Next, P_{Q,ϵ}D_{xy}-NSGA-II misses several connected components that are part of the set of approximate solutions. Figure 6 shows a clear example where P_{Q,ϵ}D_{xy}-NSGA-II focuses on a few regions of the decision space, while the proposed algorithm is capable of finding all connected components. Furthermore, Figure 8, Figure 9 and Figure 10 show the box plots in both decision and objective space for the problems considered. Table 6 and Table 7 show the mean and standard deviation of the Δ₂ values obtained by the algorithms.
Figure 4. Numerical results on Deb99 for the comparison of N ϵ SGA2 and the state-of-the-art algorithms.
Figure 5. Numerical results on SSW for the comparison of N ϵ SGA2 and the state-of-the-art algorithms.
Figure 6. Numerical results on Omni test for the comparison of N ϵ SGA2 and the state-of-the-art algorithms.
Figure 7. Numerical results on DBMOPPs for the comparison of N ϵ SGA2 and the state-of-the-art algorithms.
Figure 8. Box plots of Δ 2 in decision space of N ϵ SGA, P Q , ϵ D x y -NSGA-II and P Q , ϵ D x y -MOEA/D from left to right.
Figure 9. Box plots of Δ 2 in objective space of N ϵ SGA, P Q , ϵ D x y -NSGA-II and P Q , ϵ D x y -MOEA/D from left to right.
Figure 10. Box plots of Δ 2 in decision space (left) and objective space (right) of N ϵ SGA, P Q , ϵ D x y -NSGA-II and P Q , ϵ D x y -MOEA/D.
Table 6. Averaged Δ 2 in decision space for the comparison of the state-of-the-art algorithms.
Table 7. Averaged Δ 2 in objective space for the comparison of the state-of-the-art algorithms.
From the results, we can observe that the novel algorithm outperforms P_{Q,ϵ}D_{xy}-NSGA-II in eight out of nine problems in decision space and in seven in objective space. When compared with P_{Q,ϵ}D_{xy}-MOEA, the novel algorithm outperforms it in all nine problems in both decision and objective space. This shows that improving the search engine to maintain solutions well spread in decision and objective space is advantageous when approximating P_{Q,ϵ}.
To conclude the comparison to state-of-the-art algorithms, we aggregate the results using single-winner election strategies; we have selected the Borda and Condorcet methods for our analysis [].
In the Borda method, each algorithm receives, for each ranking, a number of points equal to the number of algorithms it defeats. In the Condorcet method, if algorithm A defeats every other algorithm in a pairwise majority vote, then A is ranked first. Table 8 shows the count of times algorithm i was significantly better than algorithm j according to the Wilcoxon rank-sum test. In this case, both the Borda count and the Condorcet method prefer the novel method. Furthermore, in terms of computational complexity, both state-of-the-art methods have a complexity of O((G|P|)²), since both make use of the external archiver. This complexity is comparable to that of the novel method; in some cases, due to the non-ϵ-dominated sorting, the novel algorithm can have a higher complexity.
Table 8. Summary of times a method outperforms another with statistical significance.

5.4. Application to Non-Uniform Beam

Finally, we consider a non-uniform beam problem taken from [,]. Non-uniform beams are commonly used structures in many applications, such as ship hulls, rocket surfaces, and aircraft fuselages. In this application, structural and acoustic properties serve as constraints and performance indices of candidate non-uniform beams.
The primary goal of structural-acoustic design is to create a light-weight, stable, and quiet structure. The multi-objective optimization of non-uniform beams seeks the balance among those aims. Here, we consider the following bi-objective problem:
The first objective, P̄, is the integral of the radiated sound power over a range of frequencies,

P̄ = ∫_{200}^{600} P dω,

where P is the average acoustic power radiated by the vibrating beam per unit width over one period of vibration (see [] for the details of the implementation).
The second objective is the total mass of the beam, which can be expressed as

m_tot = Σ_{i=1}^{N} ρ h_i L / N,

where 1 mm ≤ h_i(x) ≤ 15 mm is the average height of the ith segment and N = 10 is the number of interpolation coordinates (i.e., the decision space is 10-dimensional). The beam length is L = 1.0 m, and the mass density is ρ = 2643 kg/m³.
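The mass objective can be coded directly; `total_mass` is our name for this sketch, and segment heights are assumed to be given in meters:

```python
def total_mass(h, L=1.0, rho=2643.0):
    """m_tot = sum_{i=1}^{N} rho * h_i * L / N, with h_i the average height
    (in meters) of segment i and N = len(h) the number of segments."""
    N = len(h)
    return sum(rho * h_i * L / N for h_i in h)
```

For a uniform 10 mm beam (h_i = 0.01 m, N = 10), this gives 26.43 kg.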
The parameters for the archiver were set to ϵ = [ 1 , 0 . 00003 ] , δ x = 2 . 5 × 10 3 and δ y = [ 0 . 3 , 0 . 000003 ] . That is to say, we are willing to accept a deterioration of 1 kg and 0 . 3 × 10 5 W/m/s.
The parameters were as follows:
  • Population size: 500,
  • Number of generations: 200,
  • The rest of the parameters were set as before.
Figure 11 shows the approximated set P_{Q,ϵ}. The results show that the algorithm is capable of finding the region of interest, in agreement with previous studies [,], using 100,000 function evaluations, whereas previous work using only the archiver with random points required 10 million function evaluations [] to obtain similar results.
Figure 11. Approximation of P_{Q,ϵ} obtained with NϵSGA.

6. Conclusions and Future Work

In this work, we have proposed a novel multi-objective evolutionary algorithm that aims to compute the set of approximate solutions. The algorithm uses the archiver ArchiveUpdate_{P_{Q,ϵ}}D_{xy} to maintain a well-distributed representation in both decision and objective space. Moreover, the algorithm ranks the solutions according to −ϵ-dominance and their distribution in both decision and objective space. This allows exploring different regions of the search space where approximate solutions might be located and maintaining them in the archive.
Furthermore, we observed that the interplay of generator and archiver is a delicate problem, involving (among others) a proper balance of local and global search and a suitable density distribution for all generational operators. The results show that the novel algorithm is competitive against other state-of-the-art methods on academic problems. Finally, we applied the algorithm to a non-uniform beam design problem, obtaining results comparable to those of the state of the art while using significantly fewer resources.

Author Contributions

Conceptualization, C.I.H.C. and O.S.; Data curation, C.I.H.C.; Formal analysis, C.I.H.C., O.S., J.-Q.S. and S.O.-B.; Funding acquisition, O.S.; Investigation, C.I.H.C.; Methodology, C.I.H.C., O.S. and J.-Q.S.; Project administration, J.-Q.S. and S.O.-B.; Resources, O.S., J.-Q.S. and S.O.-B.; Software, C.I.H.C.; Supervision, O.S., J.-Q.S. and S.O.-B.; Validation, C.I.H.C.; Visualization, C.I.H.C.; Writing—original draft, C.I.H.C.; Writing—review and editing, C.I.H.C., O.S., J.-Q.S. and S.O.-B. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Conacyt grant number 711172, project 285599 and Cinvestav-Conacyt project no. 231.

Acknowledgments

The first author acknowledges Conacyt for funding No. 711172.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Hernández, C.; Naranjani, Y.; Sardahi, Y.; Liang, W.; Schütze, O.; Sun, J.Q. Simple cell mapping method for multi-objective optimal feedback control design. Int. J. Dyn. Control 2013, 1, 231–238.
  2. Sardahi, Y.; Sun, J.Q.; Hernández, C.; Schütze, O. Many-Objective Optimal and Robust Design of Proportional-Integral-Derivative Controls with a State Observer. J. Dyn. Syst. Meas. Control 2017, 139, 024502.
  3. Zakrzewska, A.; d’Andreagiovanni, F.; Ruepp, S.; Berger, M.S. Biobjective optimization of radio access technology selection and resource allocation in heterogeneous wireless networks. In Proceedings of the 2013 11th International Symposium and Workshops on Modeling and Optimization in Mobile, Ad Hoc and Wireless Networks (WiOpt), Tsukuba Science City, Japan, 13–17 May 2013; pp. 652–658.
  4. Andrade, A.R.; Teixeira, P.F. Biobjective optimization model for maintenance and renewal decisions related to rail track geometry. Transp. Res. Record 2011, 2261, 163–170.
  5. Stepanov, A.; Smith, J.M. Multi-objective evacuation routing in transportation networks. Eur. J. Oper. Res. 2009, 198, 435–446.
  6. Meskens, N.; Duvivier, D.; Hanset, A. Multi-objective operating room scheduling considering desiderata of the surgical team. Decis. Support Syst. 2013, 55, 650–659.
  7. Dibene, J.C.; Maldonado, Y.; Vera, C.; de Oliveira, M.; Trujillo, L.; Schütze, O. Optimizing the location of ambulances in Tijuana, Mexico. Comput. Biol. Med. 2017, 80, 107–115.
  8. Beyer, H.G.; Sendhoff, B. Robust optimization—A comprehensive survey. Comput. Methods Appl. Mech. Eng. 2007, 196, 3190–3218.
  9. Jin, Y.; Branke, J. Evolutionary optimization in uncertain environments—A survey. IEEE Trans. Evol. Comput. 2005, 9, 303–317.
  10. Deb, K.; Gupta, H. Introducing Robustness in Multi-Objective Optimization. Evol. Comput. 2006, 14, 463–494.
  11. Avigad, G.; Branke, J. Embedded Evolutionary Multi-Objective Optimization for Worst Case Robustness. In Proceedings of the 10th Annual Conference on Genetic and Evolutionary Computation, GECCO ’08, Atlanta, GA, USA, 12–16 July 2008; pp. 617–624.
  12. Loridan, P. ϵ-Solutions in Vector Minimization Problems. J. Optim. Theory Appl. 1984, 42, 265–276.
  13. Schütze, O.; Esquivel, X.; Lara, A.; Coello Coello, C.A. Using the Averaged Hausdorff Distance as a Performance Measure in Evolutionary Multi-Objective Optimization. IEEE Trans. Evol. Comput. 2012, 16, 504–522.
  14. Pareto, V. Manual of Political Economy; The MacMillan Press: New York, NY, USA, 1971.
  15. Hillermeier, C. Nonlinear Multiobjective Optimization—A Generalized Homotopy Approach; Birkhäuser: Basel, Switzerland, 2001.
  16. White, D.J. Epsilon efficiency. J. Optim. Theory Appl. 1986, 49, 319–337.
  17. Deb, K. Multi-Objective Optimization Using Evolutionary Algorithms; John Wiley & Sons: Chichester, UK, 2001.
  18. Coello Coello, C.A.; Lamont, G.B.; Van Veldhuizen, D.A. Evolutionary Algorithms for Solving Multi-Objective Problems, 2nd ed.; Springer: New York, NY, USA, 2007.
  19. Schütze, O.; Hernandez, C.; Talbi, E.G.; Sun, J.Q.; Naranjani, Y.; Xiong, F.R. Archivers for the representation of the set of approximate solutions for MOPs. J. Heuristics 2019, 25, 71–105.
  20. Eichfelder, G. Scalarizations for adaptively solving multi-objective optimization problems. Comput. Optim. Appl. 2009, 44, 249.
  21. Das, I.; Dennis, J. Normal-boundary intersection: A new method for generating the Pareto surface in nonlinear multicriteria optimization problems. SIAM J. Optim. 1998, 8, 631–657.
  22. Fliege, J. Gap-free computation of Pareto-points by quadratic scalarizations. Math. Methods Oper. Res. 2004, 59, 69–89.
  23. Zadeh, L. Optimality and Non-Scalar-Valued Performance Criteria. IEEE Trans. Autom. Control 1963, 8, 59–60.
  24. Bowman, V.J. On the Relationship of the Tchebycheff Norm and the Efficient Frontier of Multiple-Criteria Objectives. In Multiple Criteria Decision Making; Springer: Berlin/Heidelberg, Germany, 1976; Volume 130, pp. 76–86.
  25. Fliege, J.; Svaiter, B.F. Steepest Descent Methods for Multicriteria Optimization. Math. Methods Oper. Res. 2000, 51, 479–494.
  26. Bosman, P.A.N. On Gradients and Hybrid Evolutionary Algorithms for Real-Valued Multiobjective Optimization. IEEE Trans. Evol. Comput. 2012, 16, 51–69.
  27. Lara, A. Using Gradient Based Information to Build Hybrid Multi-Objective Evolutionary Algorithms. Ph.D. Thesis, CINVESTAV-IPN, Mexico City, Mexico, 2012.
  28. Lara, A.; Alvarado, S.; Salomon, S.; Avigad, G.; Coello, C.A.C.; Schütze, O. The gradient free directed search method as local search within multi-objective evolutionary algorithms. In EVOLVE—A Bridge Between Probability, Set Oriented Numerics, and Evolutionary Computation (EVOLVE II); Springer: Berlin/Heidelberg, Germany, 2013; pp. 153–168.
  29. Dellnitz, M.; Schütze, O.; Hestermeyer, T. Covering Pareto Sets by Multilevel Subdivision Techniques. J. Optim. Theory Appl. 2005, 124, 113–136.
  30. Hernández, C.; Schütze, O.; Sun, J.Q. Global Multi-Objective Optimization by Means of Cell Mapping Techniques. In EVOLVE—A Bridge Between Probability, Set Oriented Numerics and Evolutionary Computation VII; Emmerich, M., Deutz, A., Schütze, O., Legrand, P., Tantar, E., Tantar, A.A., Eds.; Springer: Cham, Switzerland, 2017; pp. 25–56.
  31. Arrondo, A.G.; Redondo, J.L.; Fernández, J.; Ortigosa, P.M. Parallelization of a non-linear multi-objective optimization algorithm: Application to a location problem. Appl. Math. Comput. 2015, 255, 114–124.
  32. Deb, K.; Pratap, A.; Agarwal, S.; Meyarivan, T. A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans. Evol. Comput. 2002, 6, 182–197.
  33. Zitzler, E.; Thiele, L. Multiobjective Evolutionary Algorithms: A Comparative Case Study and the Strength Pareto Evolutionary Algorithm. IEEE Trans. Evol. Comput. 1999, 3, 257–271.
  34. Zitzler, E.; Laumanns, M.; Thiele, L. SPEA2: Improving the Strength Pareto Evolutionary Algorithm for Multiobjective Optimization. In Evolutionary Methods for Design, Optimisation and Control with Application to Industrial Problems; Giannakoglou, K., Ed.; International Center for Numerical Methods in Engineering (CIMNE): Barcelona, Spain, 2002; pp. 95–100.
  35. Zhang, Q.; Li, H. MOEA/D: A Multi-Objective Evolutionary Algorithm Based on Decomposition. IEEE Trans. Evol. Comput. 2007, 11, 712–731.
  36. Zuiani, F.; Vasile, M. Multi Agent Collaborative Search Based on Tchebycheff Decomposition. Comput. Optim. Appl. 2013, 56, 189–208.
  37. Moubayed, N.A.; Petrovski, A.; McCall, J. D2MOPSO: MOPSO Based on Decomposition and Dominance with Archiving Using Crowding Distance in Objective and Solution Spaces. Evol. Comput. 2014, 22.
  38. Beume, N.; Naujoks, B.; Emmerich, M. SMS-EMOA: Multiobjective selection based on dominated hypervolume. Eur. J. Oper. Res. 2007, 181, 1653–1669.
  39. Zitzler, E.; Thiele, L.; Bader, J. SPAM: Set Preference Algorithm for multiobjective optimization. In Parallel Problem Solving From Nature–PPSN X; Springer: Berlin/Heidelberg, Germany, 2008; pp. 847–858.
  40. Wagner, T.; Trautmann, H. Integration of Preferences in Hypervolume-Based Multiobjective Evolutionary Algorithms by Means of Desirability Functions. IEEE Trans. Evol. Comput. 2010, 14, 688–701.
  41. Rudolph, G.; Schütze, O.; Grimme, C.; Domínguez-Medina, C.; Trautmann, H. Optimal averaged Hausdorff archives for bi-objective problems: Theoretical and numerical results. Comput. Optim. Appl. 2016, 64, 589–618.
  42. Schütze, O.; Domínguez-Medina, C.; Cruz-Cortés, N.; de la Fraga, L.G.; Sun, J.Q.; Toscano, G.; Landa, R. A scalar optimization approach for averaged Hausdorff approximations of the Pareto front. Eng. Optim. 2016, 48, 1593–1617.
  43. Schütze, O.; Coello, C.A.C.; Talbi, E.G. Approximating the ε-efficient set of an MOP with stochastic search algorithms. In Proceedings of the Mexican International Conference on Artificial Intelligence, Aguascalientes, Mexico, 4–10 November 2007; pp. 128–138.
  44. Rudolph, G.; Naujoks, B.; Preuss, M. Capabilities of EMOA to Detect and Preserve Equivalent Pareto Subsets. In Proceedings of the 4th International Conference on Evolutionary Multi-Criterion Optimization, Matsushima, Japan, 5–8 March 2007; pp. 36–50.
  45. Schütze, O.; Vasile, M.; Coello Coello, C. On the Benefit of ϵ-Efficient Solutions in Multi Objective Space Mission Design. In Proceedings of the International Conference on Metaheuristics and Nature Inspired Computing, Hammamet, Tunisia, 6–11 October 2008.
  46. Schütze, O.; Vasile, M.; Junge, O.; Dellnitz, M.; Izzo, D. Designing optimal low thrust gravity assist trajectories using space pruning and a multi-objective approach. Eng. Optim. 2009, 41, 155–181.
  47. Schütze, O.; Vasile, M.; Coello Coello, C.A. Computing the set of epsilon-efficient solutions in multiobjective space mission design. J. Aerosp. Comput. Inf. Commun. 2011, 8, 53–70.
  48. Hernández, C.; Sun, J.Q.; Schütze, O. Computing the set of approximate solutions of a multi-objective optimization problem by means of cell mapping techniques. In EVOLVE—A Bridge Between Probability, Set Oriented Numerics, and Evolutionary Computation IV; Springer: Berlin, Germany, 2013; pp. 171–188.
  49. Xia, H.; Zhuang, J.; Yu, D. Multi-objective unsupervised feature selection algorithm utilizing redundancy measure and negative epsilon-dominance for fault diagnosis. Neurocomputing 2014, 146, 113–124.
  50. Ishibuchi, H.; Hitotsuyanagi, Y.; Tsukamoto, N.; Nojima, Y. Many-Objective Test Problems to Visually Examine the Behavior of Multiobjective Evolution in a Decision Space. In Parallel Problem Solving from Nature, PPSN XI; Schaefer, R., Cotta, C., Kołodziej, J., Rudolph, G., Eds.; Springer: Berlin/Heidelberg, Germany, 2010; pp. 91–100.
  51. Li, M.; Yang, S.; Liu, X. A test problem for visual investigation of high-dimensional multi-objective search. In Proceedings of the 2014 IEEE Congress on Evolutionary Computation (CEC), Beijing, China, 6–11 July 2014; pp. 2140–2147.
  52. Bogoya, J.M.; Vargas, A.; Schütze, O. The Averaged Hausdorff Distances in Multi-Objective Optimization: A Review. Mathematics 2019, 7, 894.
  53. Deb, K.; Tiwari, S. Omni-optimizer: A generic evolutionary algorithm for single and multi-objective optimization. Eur. J. Oper. Res. 2008, 185, 1062–1087.
  54. Emmerich, M.; Deutz, A. Test problems based on Lamé superspheres. In Proceedings of the 4th International Conference on Evolutionary Multi-Criterion Optimization, Matsushima, Japan, 5–8 March 2007; pp. 922–936.
  55. Preuss, M.; Naujoks, B.; Rudolph, G. Pareto Set and EMOA Behavior for Simple Multimodal Multiobjective Functions. In Proceedings of the 9th International Conference on Parallel Problem Solving from Nature, Reykjavik, Iceland, 9–13 September 2006; pp. 513–522.
  56. Schaeffler, S.; Schultz, R.; Weinzierl, K. Stochastic Method for the Solution of Unconstrained Vector Optimization Problems. J. Optim. Theory Appl. 2002, 114, 209–222.
  57. Fieldsend, J.E.; Chugh, T.; Allmendinger, R.; Miettinen, K. A Feature Rich Distance-Based Many-Objective Visualisable Test Problem Generator. In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO ’19), Prague, Czech Republic, 13–17 July 2019; pp. 541–549.
  58. Brandt, F.; Conitzer, V.; Endriss, U.; Lang, J.; Procaccia, A.D. Handbook of Computational Social Choice; Cambridge University Press: Cambridge, UK, 2016.
  59. Sun, J.Q. Vibration and sound radiation of non-uniform beams. J. Sound Vib. 1995, 185, 827–843.
  60. He, M.X.; Xiong, F.R.; Sun, J.Q. Multi-Objective Optimization of Elastic Beams for Noise Reduction. ASME J. Vib. Acoust. 2017, 139, 051014.
