Non-Epsilon Dominated Evolutionary Algorithm for the Set of Approximate Solutions

Abstract: In this paper, we present a novel evolutionary algorithm for the computation of approximate solutions for multi-objective optimization problems. These solutions are of particular interest to the decision-maker as backup solutions, since they provide similar quality in different regions of the decision space. The novel algorithm uses a subpopulation approach to put pressure towards the Pareto front while exploring promising areas for approximate solutions. Furthermore, the algorithm uses an external archiver to maintain a suitable representation in both decision and objective space. The novel algorithm is capable of computing an approximation of the set of interest with good quality in terms of the averaged Hausdorff distance. We underline the statements on some academic problems from the literature and an application to non-uniform beams.


Introduction
In many real-world engineering problems, one is faced with the situation that several objectives have to be optimized concurrently, leading to a multi-objective optimization problem (MOP). The common goal for such problems is to identify the set of optimal solutions (the so-called Pareto set) and its image in objective space, the Pareto front. These kinds of problems are of great interest in areas such as optimal control [1,2], telecommunications [3], transportation [4,5], and healthcare [6,7]. In all of the previous cases, the authors considered several conflicting objectives, and their goal was to provide solutions that represent trade-offs among them.
However, in practice, the decision-maker may not always be interested in the optimal solutions, for instance, when these solutions are sensitive to perturbations or are not realizable in practice [8][9][10][11]. Thus, there exists an additional challenge: one has to search not only for solutions with good performance but also for solutions that are possible to implement.
In this context, the set of approximate solutions [12] is an excellent candidate to enhance the solutions given to the decision-maker. Given the allowed deterioration ε_i for each objective f_i, one can compute a set that is "close" to the Pareto front but that can have different properties in decision space. For instance, the decision-maker could find a solution that is realizable in practice or one that serves as a backup in case the selected option is no longer available. For this purpose, we use −ε-dominance. In this setting, a solution x −ε-dominates a solution y if F(x) + ε dominates F(y) in the Pareto sense (see Definition 3 for the formal definition).
In this work, we propose a novel evolutionary algorithm that aims for the computation of the set of approximate solutions. The algorithm uses two subpopulations: the first puts pressure towards the Pareto front, while the second explores promising areas for approximate solutions. For this purpose, the algorithm ranks the solutions according to −ε-dominance and according to how well distributed the solutions are in both decision and objective space. Furthermore, the algorithm incorporates an external archiver to maintain a suitable representation in both decision and objective space. The novel algorithm is capable of computing an approximation of the set of interest with good quality regarding the averaged Hausdorff distance [13].
The remainder of this paper is organized as follows: in Section 2, we state the notation and some background required for the understanding of the article. In Section 3, we present the components of the proposed algorithm. In Section 4, we show some numerical results, and we study an application to a non-uniform beam. Finally, in Section 5, we conclude and give some paths for future work.

Background
Here, we consider continuous multi-objective optimization problems of the form

min_{x ∈ Q} F(x),    (MOP)

where Q ⊆ R^n and F is defined as the vector of the objective functions, F(x) = (f_1(x), . . . , f_k(x)), where each objective f_i : Q → R is continuous. In multi-objective optimization, optimality is defined by the concept of dominance [14].

Definition 1.
(a) Let v, w ∈ R^k. Then, the vector v is less than w (v <_p w) if v_i < w_i for all i ∈ {1, . . . , k}. The relation ≤_p is defined analogously.
(b) y ∈ Q is dominated by a point x ∈ Q (x ≺ y) with respect to (MOP) if F(x) ≤_p F(y) and F(x) ≠ F(y); else, y is called non-dominated by x.
(c) x ∈ Q is a Pareto point if there is no y ∈ Q that dominates x.
(d) x ∈ Q is weakly Pareto optimal if there exists no y ∈ Q such that F(y) <_p F(x).
The set P Q of Pareto points is called the Pareto set and its image F(P Q ) the Pareto front. Typically, both sets form (k − 1)-dimensional objects [15].
In some cases, it is worth looking for solutions that are 'close' in objective space to the optimal solutions, for instance, as backup solutions. These solutions form the so-called set of nearly optimal solutions.
In order to define the set of nearly optimal solutions (P_{Q,ε}), we now present ε-dominance [12,16], which can be viewed as a weaker concept of dominance and will be the basis of the approximation concept we use in the sequel. To define the set of nearly optimal solutions, we need the following definitions.

Definition 2 (ε-dominance [12,16]). Let ε = (ε_1, . . . , ε_k) ∈ R^k_+ and x, y ∈ Q. Then, x ε-dominates y (x ≺_ε y) with respect to (MOP) if F(x) − ε ≤_p F(y) and F(x) − ε ≠ F(y).

Definition 3 (−ε-dominance [12]). Let ε = (ε_1, . . . , ε_k) ∈ R^k_+ and x, y ∈ Q. Then, x −ε-dominates y (x ≺_{−ε} y) with respect to (MOP) if F(x) + ε ≤_p F(y) and F(x) + ε ≠ F(y).

Both definitions are identical up to the sign of ε. However, in this work, we use −ε-dominance, as it can be viewed as a stronger concept than Pareto dominance and since we use it to define our set of interest.
Definition 4 (Set of approximate solutions [12]). Denote by P_{Q,ε} the set of points in Q ⊂ R^n that are not −ε-dominated by any other point in Q, i.e.,

P_{Q,ε} := {x ∈ Q | ∄ y ∈ Q : y ≺_{−ε} x}.    (2)

Thus, in the remainder of this work, we will aim to compute the set P_{Q,ε} with multi-objective evolutionary algorithms [17,18]. To the best of the authors' knowledge, there exist only a few algorithms that aim to find approximations of P_{Q,ε}. Most of the approaches use the ArchiveUpdateP_{Q,ε} as the critical component to maintain a representation of the set of interest. This archiver keeps all points that are not −ε-dominated by any other considered candidate. Furthermore, the archiver is monotone, and it converges in the limit to the set of interest P_{Q,ε} in the Hausdorff sense [19].
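For illustration, the −ε-dominance test from Definition 3 can be sketched in a few lines of Python (a minimal sketch; the function name and list-based vector representation are ours, not from the paper):

```python
def minus_eps_dominates(fx, fy, eps):
    """True if objective vector fx -eps-dominates fy, i.e. the shifted
    vector F(x) + eps Pareto-dominates F(y): componentwise <= with
    strict inequality in at least one component."""
    shifted = [a + e for a, e in zip(fx, eps)]
    return (all(s <= b for s, b in zip(shifted, fy))
            and any(s < b for s, b in zip(shifted, fy)))
```

With eps = 0, this reduces to classical Pareto dominance; a larger eps makes the test harder to pass, which is why more points survive as non-−ε-dominated.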

Related Work
In general, we can classify the methods into mathematical programming techniques and heuristic methods (with multi-objective evolutionary algorithms being the most prominent). Mathematical programming techniques typically give guarantees on the solutions that they can find (e.g., convergence to local optima). However, they usually require the MOP to be continuous and, in some cases, differentiable or even twice continuously differentiable. On the other hand, heuristic methods are a good option when the structure of the problem is unknown (e.g., black-box problems), but they typically do not guarantee convergence. In the following, we describe several methods to solve an MOP from both families, and we finish this section with a description of methods that aim to find the set of approximate solutions.

Mathematical Programming Techniques
One of the classical ideas to solve an MOP with mathematical programming techniques is to transform the problem into an auxiliary single-objective optimization problem (SOP). With this approach, we simplify the problem by reducing the number of objectives to one. Once we do this, we are able to use one of the numerous methods that have been proposed to solve SOPs. However, typically the solution of an SOP consists of only one point, while the solution of an MOP is a set. Thus, the Pareto set can be approximated (in some cases not entirely) by solving a clever sequence of SOPs [20][21][22].
The weighted sum method [23]: the weighted sum method is probably the oldest scalarization method. The underlying idea is to assign to each objective a certain weight α_i ≥ 0 and to minimize the resulting weighted sum. Given an MOP, the weighted sum problem can be stated as follows:

min_{x ∈ Q} ∑_{i=1}^{k} α_i f_i(x).    (3)

The main advantage of the weighted sum method is that one can expect to find Pareto optimal solutions; to be more precise:

Theorem 1. Let α_i > 0 for all i ∈ {1, . . . , k}; then, a solution of Problem (3) is Pareto optimal.
On the other hand, the proper choice of α, though it appears to be intuitive at first sight, is in some instances a delicate problem. Furthermore, the images of (global) solutions of Problem (3) cannot be located in parts of the Pareto front where it is concave. That is, not all points of the Pareto front can be reached when using the weighted sum method, which represents a severe drawback.

Weighted Tchebycheff method [24]: the aim of the weighted Tchebycheff method is to find a point whose image is as close as possible to a given reference point Z ∈ R^k. For the distance assignment, the weighted Tchebycheff metric is used: let α ∈ R^k with α_i ≥ 0, i = 1, . . . , k, and ∑_{i=1}^{k} α_i = 1, and let Z = (z_1, . . . , z_k) ∈ R^k; then, the weighted Tchebycheff method reads as follows:

min_{x ∈ Q} max_{i=1,...,k} α_i |f_i(x) − z_i|.    (4)

Note that the solution of Problem (4) depends on Z as well as on α. The main advantage of the weighted Tchebycheff method is that, by a proper choice of these vectors, every point on the Pareto front can be reached.

Theorem 2. Let x* ∈ Q be Pareto optimal. Then, there exists α ∈ R^k_+ such that x* is a solution of Problem (4), where Z is chosen as the utopian vector of the MOP.
The utopian vector F* = (f*_1, . . . , f*_k) of an MOP consists of the minimum objective values f*_i of each function f_i. On the other hand, the proper choice of Z and α might also become a delicate problem in particular cases.
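As a small illustration of both scalarizations, consider the toy bi-objective problem f_1(x) = x², f_2(x) = (x − 2)² on Q = [0, 2], solved here by plain grid search (a sketch; the problem and helper names are ours, chosen only for illustration):

```python
# Toy bi-objective problem: f1(x) = x^2, f2(x) = (x - 2)^2, x in [0, 2].
def F(x):
    return (x * x, (x - 2.0) ** 2)

def weighted_sum(alpha, xs):
    # Problem (3): minimize sum_i alpha_i * f_i(x) over the grid xs
    return min(xs, key=lambda x: sum(a * f for a, f in zip(alpha, F(x))))

def weighted_tchebycheff(alpha, z, xs):
    # Problem (4): minimize max_i alpha_i * |f_i(x) - z_i| over the grid xs
    return min(xs, key=lambda x: max(a * abs(f - zi)
                                     for a, f, zi in zip(alpha, F(x), z)))

xs = [2.0 * i / 1000.0 for i in range(1001)]   # discretized Q = [0, 2]
x_ws = weighted_sum((0.5, 0.5), xs)
x_tc = weighted_tchebycheff((0.5, 0.5), (0.0, 0.0), xs)  # Z = utopian (0, 0)
```

For equal weights, both scalarizations pick x = 1, the "balanced" trade-off; sweeping α traces out (parts of) the Pareto front, subject to the concavity caveat above for the weighted sum.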
Normal boundary intersection [21]: the normal boundary intersection (NBI) method [21] computes finite-size approximations of the Pareto front in the following two steps:

1. The convex hull of individual minima (CHIM) is computed, which is the (k − 1)-simplex connecting the objective values of the minimum of each objective f_i, i = 1, . . . , k (i.e., the utopian).

2. Points y_i from the CHIM are selected, and the point x*_i ∈ Q is computed such that the image F(x*_i) has the maximal distance from y_i in the direction that is normal to the CHIM and points toward the origin.

The latter is called the NBI subproblem and can, in mathematical terms, be stated as follows: given a point y_i on the CHIM and a direction α ∈ R^k normal to the CHIM, solve

max_{x ∈ Q, t ∈ R} t   s.t.   y_i + t α = F(x).    (5)

Problem (5) can be helpful since there are scenarios where the aim is to steer the search in a certain direction given in objective space. On the other hand, solutions of Problem (5) do not have to be Pareto optimal [21].

Multi-Objective Evolutionary Algorithms
An alternative to the classical methods is given by the area known as evolutionary multi-objective optimization, which has developed a wide variety of methods known as multi-objective evolutionary algorithms (MOEAs). One of their advantages is that they do not require gradient information about the problem; instead, they rely on stochastic search procedures. Another advantage is that they produce an approximation of the entire Pareto set and Pareto front in one execution. Examples of these methods can be found in [17,18,31]. In the following, we review the most representative methodologies to design a MOEA and the most important exponent of each.
Dominance-based methods: methods of this kind rely on the dominance relation directly. In general, these methods have two main components: a dominance ranking and a density estimator. The dominance ranking sorts the solutions according to the partial order induced by Pareto dominance. This mechanism puts pressure towards the Pareto front. The density estimator is introduced to obtain a total order of the solutions, and thus it allows a comparison between solutions even when they are mutually non-dominated. It is based on the idea that the decision-maker would like a good distribution of the solutions on the front (diversity).
One of the most popular methods in this class is the non-dominated sorting genetic algorithm II (NSGA-II) that was proposed in [32]. The NSGA-II has two main mechanisms. The first one is non-dominated sorting, where the idea is to rank the current solutions using the Pareto dominance relation, i.e., the ranking indicates the front to which each solution belongs. This helps to identify the best solutions and to use them to generate new ones.
The second mechanism is called crowding distance. The idea is to measure the distance from a given point to its neighbors. This helps to identify the solutions with smaller distance, as this means that there are more solutions in that region.
These components aim to accomplish the goals of convergence and spread, respectively. Due to these mechanisms, NSGA-II has a good overall performance, and it has become a prominent method when two or three objectives are considered. Other methods in this class can be found in [33,34].
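The crowding distance just described can be sketched as follows (a minimal re-implementation for illustration, not the NSGA-II reference code):

```python
def crowding_distance(front):
    """Crowding distance as in NSGA-II (sketch). `front` is a list of
    objective vectors of equal length; returns one distance per point,
    with boundary points assigned infinity."""
    n, k = len(front), len(front[0])
    dist = [0.0] * n
    for m in range(k):
        order = sorted(range(n), key=lambda i: front[i][m])
        span = front[order[-1]][m] - front[order[0]][m] or 1.0
        dist[order[0]] = dist[order[-1]] = float("inf")
        for j in range(1, n - 1):
            i = order[j]
            if dist[i] != float("inf"):
                # distance between the two neighbors along objective m
                dist[i] += (front[order[j + 1]][m] - front[order[j - 1]][m]) / span
    return dist
```

Points with a larger crowding distance sit in sparser regions and are therefore preferred during environmental selection.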
Decomposition-based methods: methods of this kind rely on scalarization techniques to solve an MOP. The idea behind these methods is that a set of well-distributed scalarization problems will also generate a good distribution of points along the Pareto front. They use the evaluation of the scalar problems instead of the evaluation of the MOP. These methods do not need a density estimator since it is encoded in the selection of the scalar problems.
The most prominent algorithm in this class is the MOEA based on decomposition (MOEA/D), which was proposed in [35]. The main idea of this method is to decompose the original MOP into N different SOPs, also called subproblems. Then, the algorithm solves these subproblems using information from neighboring subproblems. The decomposition is made using one of the following methods: weighted sum, weighted Tchebycheff, or PBI. Other methods in this class can be found in [36,37].
Indicator based methods: these methods rely on performance indicators to assign the contribution of an individual to the solution found by the algorithm. By using a performance indicator, these methods reduce an MOP to an SOP. In most cases, these methods do not need a density estimator, since it is coded in the indicator.
One of the methods that has received the most attention is the SMS-EMOA [38]. This method is a (µ + 1) evolution strategy that uses the hypervolume indicator to guide the search. The method computes the contribution to the hypervolume of each solution; next, the solution with the minimum contribution is removed from the population. In order to improve the running time of the algorithm, SMS-EMOA is combined with dominance-based ranking: the indicator is used to rank the solutions only when the whole population is non-dominated. Other methods in this class are presented in [39,40] for the hypervolume indicator and in [41,42] for the ∆_p indicator.

Methods for the Set of Approximate Solutions
In [43], the authors proposed to couple a generic evolutionary algorithm with ArchiveUpdateP_{Q,ε}. The method was tested on the Tanaka and sym-part [44] problems to show its capabilities. Later, in [45], the authors presented a modified version of NSGA-II [32], named P_{Q,ε}-NSGA-II. The method uses an external archiver to maintain a representation of P_{Q,ε} and was tested on a space mission design application. Furthermore, in [46], the method was applied to solve a two-impulse transfer to the asteroid Apophis and an Earth-Venus-Mercury sequence problem.
Later, Schuetze et al. [47] proposed the P_{Q,ε}-MOEA, which is a steady-state archive-based MOEA that uses ArchiveUpdateP_{Q,ε} for the population. The algorithm draws two individuals at random from the archiver and generates two new solutions that are introduced to the archiver.
One of the few methods that use a different search engine was presented in [48]. The work proposed an adaptation of the simple cell mapping method to find the set of approximate solutions using ArchiveUpdateP_{Q,ε}D_y. The archiver aims for discretizations of P_{Q,ε} in objective space by ensuring that every two solutions are at least δ_y ∈ R away from each other. The results showed that the algorithm was capable of outperforming previous algorithms in low-dimensional problems (n ≤ 3) on several academic MOPs.
More recently, in [49], the authors designed an algorithm for fault tolerance applications. The method uses two different kinds of problem-dependent mutation and one mutation for diversity. In this case, a child replaces a random individual if it is better, or with probability one half if they are non-dominated. The method uses a bounded version of ArchiveUpdateP_{Q,ε} at the end of each generation.
The previous approaches based on evolutionary algorithms have a potential drawback: they are not capable of exploiting equivalent, or nearly equivalent, Pareto sets [44,50,51]. This issue prevents the algorithms from finding solutions that have similar properties in objective space but differ in decision space, which can be interesting to the decision-maker. Furthermore, −ε-dominance dilutes the dominance relation; thus, it becomes harder for the algorithms to put pressure towards the set of interest. Our proposed algorithm aims to cope with these drawbacks, and we will compare it with the P_{Q,ε}-NSGA-II and P_{Q,ε}-MOEA in terms of the ∆_2 indicator in both decision and objective space [13,52]. The ∆_2 indicator measures the averaged Hausdorff distance between a reference set and an approximation.

The Proposed Algorithm
In this section, we present an evolutionary multi-objective algorithm that aims for a finite representation of P_{Q,ε} (1). We will refer to this algorithm as the non-ε-dominated sorting genetic algorithm (NεSGA). First, the algorithm initializes two random subpopulations (P and R). The subpopulation R evolves as in the classical NSGA-II [32]; this subpopulation aims to find the Pareto front. The subpopulation P seeks to find the set of approximate solutions. At each iteration, the individuals are ranked using Algorithm 3 according to −ε-dominance. Then, we apply a density estimator based on the nearest neighbor of each solution in both objective and decision space (line 13 of Algorithm 1). Thus, the algorithm selects half the population from those better spaced in objective space and does the same for decision space. This mechanism allows the algorithm to have better diversity in the population.
Furthermore, the archiver is applied to maintain the non-ε-dominated solutions. Finally, the algorithm performs a migration between the populations every specified number of generations. Note that the algorithm uses diversity in both decision and objective space. This feature allows keeping nearly optimal solutions that are similar in objective space but different in decision space. To achieve this goal, half the population is selected according to their nearest-neighbor distance in objective space and half according to decision space. In the following sections, we detail the components of the algorithm.


An Archiver for the Set of Approximate Solutions
Recently, Schütze et al. [19] proposed the archiver ArchiveUpdateP_{Q,ε}D_{xy} (see Algorithm 2). This archiver aims to maintain a representation of the set of approximate solutions with a good distribution in both decision and objective space according to user-defined preferences δ_x, δ_y ∈ R. The archiver will include a candidate solution if there is no solution in the archiver that −ε-dominates it and if there is no solution 'close' to it in both decision and objective space (measured by δ_x and δ_y, respectively). An important feature of this algorithm is that it converges to the set of approximate solutions in the limit (up to a discretization error given by δ_x and δ_y). In the following, we briefly describe the archiver since it is the cornerstone of the proposed algorithm.
Given a set of candidate solutions P and an initial archive A, the algorithm iterates over the candidate solutions p ∈ P and tries to add them to the archive. A solution p ∈ P will be included if there is no solution a ∈ A that −ε-dominates p and if there is no other solution that is 'close' in both decision and objective space in terms of the distance dist_∞(·, ·).
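The acceptance rule just described can be sketched as follows. This is a simplified sketch under our own naming: the published archiver also prunes archive entries that become −ε-dominated, which is omitted here for brevity.

```python
def dist_inf(u, v):
    """Infinity-norm distance between two vectors."""
    return max(abs(a - b) for a, b in zip(u, v))

def minus_eps_dominates(fx, fy, eps):
    s = [a + e for a, e in zip(fx, eps)]
    return all(a <= b for a, b in zip(s, fy)) and any(a < b for a, b in zip(s, fy))

def archive_update(P, A, F, eps, dx, dy):
    """One ArchiveUpdateP_{Q,eps}D_{xy}-style pass (sketch): accept p
    unless some archived a -eps-dominates it, or some archived a is
    within dx in decision space AND within dy in objective space."""
    for p in P:
        fp = F(p)
        dominated = any(minus_eps_dominates(F(a), fp, eps) for a in A)
        close = any(dist_inf(a, p) <= dx and dist_inf(F(a), fp) <= dy for a in A)
        if not dominated and not close:
            A.append(p)
    return A
```

The δ_x/δ_y test is what keeps the archive finite: near-duplicates in both spaces are rejected, while solutions that are close in objective space but distant in decision space (equivalent Pareto sets) are kept.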
Theorem 3 states that the archiver is monotone. By monotone, we mean that the Hausdorff distance between the image of the archive at iteration l, F(A_l), and the image of the set of non-−ε-dominated solutions up to iteration l, F(P_{C_l,ε}), is bounded by max(δ_y, dist(F(P_{C_l,ε+2δ_y}), F(P_{C_l,ε}))).

Theorem 3. Let l ∈ N, ε ∈ R^k_+, P_1, . . . , P_l ⊂ R^n be finite sets, δ_x = 0, and A_i, i = 1, . . . , l, be obtained by ArchiveUpdateP_{Q,ε}D_{xy} as in Algorithm 1. Then, for C_l = ∪_{i=1}^{l} P_i, it holds that

dist_H(F(A_l), F(P_{C_l,ε})) ≤ max(δ_y, dist(F(P_{C_l,ε+2δ_y}), F(P_{C_l,ε}))).

Proof. See [19].
Note that the −ε-dominance operation takes constant time with respect to the sizes of P and A. Thus, the complexity of the archiver is O(|P||A|), which in the worst case is quadratic (O(|P|^2)).

Ranking Nearly Optimal Solutions
Next, we present an algorithm to rank approximate solutions. Algorithm 3 is inspired by the omni-optimizer [53]. In this case, the best solutions are those that are non-−ε-dominated by any other solution in the population and are also well distributed in both decision and objective space. The second layer is formed by those solutions that are non-−ε-dominated but were not well distributed. The next layers follow the same pattern.
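The layer-peeling idea behind this ranking can be sketched as follows. This is our own simplified illustration: it only peels off successive non-−ε-dominated fronts, while the paper's Algorithm 3 additionally splits layers by the diversity criterion.

```python
def minus_eps_dominates(fx, fy, eps):
    s = [a + e for a, e in zip(fx, eps)]
    return all(a <= b for a, b in zip(s, fy)) and any(a < b for a, b in zip(s, fy))

def rank_layers(pop, F, eps):
    """Repeatedly peel off the solutions not -eps-dominated by any
    remaining solution; layer index is the rank (sketch)."""
    remaining = list(pop)
    layers = []
    while remaining:
        objs = [F(p) for p in remaining]
        keep = [not any(minus_eps_dominates(objs[j], objs[i], eps)
                        for j in range(len(remaining)) if j != i)
                for i in range(len(remaining))]
        layers.append([p for p, k in zip(remaining, keep) if k])
        remaining = [p for p, k in zip(remaining, keep) if not k]
    return layers
```

Since −ε-dominance with ε ≥ 0 is a strict partial order, every pass keeps at least one solution, so the loop always terminates.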

Using Subpopulations
Since −ε-dominance is a stronger requirement than Pareto dominance, it can be expected that more solutions are non-−ε-dominated when compared to classical dominance. This fact generates a potential issue, since the algorithm could stagnate at the early stages.
To avoid this issue, we propose the use of two subpopulations to improve the pressure of the novel algorithm towards the set of interest. Each subpopulation has a different aim: the first one aims to approximate the global Pareto set/front, while the second one seeks to approximate P_{Q,ε}. Figure 2 shows the first rank for each subpopulation. A vital aspect of the approach is how the information between the subpopulations is shared. Thus, in the following, we describe two schemes to share individuals via migration.

1. Migration: every n_f generations, n_r individuals are exchanged between the populations. The individuals are chosen at random from the rank-one individuals.

2. Crossover between subpopulations: every n_f generations, crossover is performed between the populations. One parent is chosen from each population at random.

As can be observed, the use of subpopulations introduces three extra parameters to the algorithm, namely:
• n_f: the frequency of migration,
• n_r: the number of individuals to be exchanged, and
• ratio: the ratio of individuals between the subpopulations, in the range [0, 1].
Thus, the population in charge of looking for optimal solutions puts pressure on the second population and helps to determine the set of approximate solutions, while the latter population is in charge of exploiting the promising regions. This allows finding local Pareto fronts if they are 'close enough' in terms of −ε-dominance.

Complexity of the Novel Algorithm
In the following, we comment on the complexity of N SGA. Let G be the number of generations and |P| the size of the population.

• Initialization: in this step, all the individuals are generated at random and evaluated. This task takes O(|P|).
• External archiver: from the previous section, the complexity of the archiver is O(|P||A|), where |A| is the size of the archive. Note that the maximum size of the archive at the end of the execution is G|P|.
• Non-ε-dominated sorting: while there are solutions left in the population P, ArchiveUpdateP_{Q,ε}D_{xy} is executed. In the worst case, each rank has only one solution, and the archiver is executed |P| times. Thus, the complexity of this part of the algorithm is O(|P|^3).
• Main loop: the main loop selects the parents and applies the genetic operators (each of these tasks takes O(|P|)); then, the algorithm uses the non-ε-dominated sorting (O(|P|^3)); next, it finds those solutions that are well distributed in terms of the closest neighbor (O(|P|^2)); and finally, it applies the external archiver (O(|P|^2)). Note that all these operations are executed G times. Thus, the complexity of the algorithm is O(max(G|P|^3, (G|P|)^2)), i.e., the maximum of the non-ε-dominated sorting and the external archiver.

Numerical Results
In this section, we present the numerical results corresponding to the evolutionary algorithm. We performed several experiments to validate the novel algorithm. First, we validate the use of an external archiver in the evolutionary algorithm. Next, we compare several subpopulation strategies. Finally, we compare the resulting algorithm with enhanced state-of-the-art algorithms.
The algorithms chosen are modifications of P_{Q,ε}-NSGA-II and P_{Q,ε}-MOEA. In the literature, both algorithms aim for well-distributed solutions only in objective space; in this case, we enhanced them with ArchiveUpdateP_{Q,ε}D_{xy} to allow a fair comparison with the proposed algorithm.
In all cases, we used the following problems: Deb99 [17], two-on-one [55], sym-part [44], S. Schäffler, R. Schultz and K. Weinzierl (SSW) [56], omni test [53], and Lamé superspheres (LSS) [54]. Next, 20 independent experiments were performed with the following parameters: 100 individuals, 100 generations, crossover rate = 0.9, mutation rate = 1/n, distribution index for crossover eta_c = 20, and distribution index for mutation eta_m = 20. Table 1 shows the ε, δ_y, and δ_x values used for each problem. Furthermore, all cases were measured with the ∆_2 indicator, and a Wilcoxon rank-sum test was performed with a significance level α = 0.05. Finally, in all ∆_2 tables, the bold font represents the best mean value, and the arrows represent the following:
• ↑ the rank-sum test rejects the null hypothesis of equal medians, and the algorithm has a better median;
• ↓ the rank-sum test rejects the null hypothesis of equal medians, and the algorithm has a worse median; and
• ↔ the rank-sum test cannot reject the null hypothesis of equal medians.
The comparison is always made with the algorithm in the first column. We have selected the ∆_2 indicator since it allows us to compare the distance between the reference set and the approximation found by the algorithms in both decision and objective space. Since we aim to compute the set of approximate solutions, classical indicators used to measure Pareto fronts cannot be applied in a straightforward manner and should be carefully studied if one would like to use them in this context.

Validation of the Use of an External Archiver
We first validate the use of an external archiver in the novel algorithm. Note that NεSGA2A denotes the version of the algorithm that uses the external archiver. Tables 2 and 3 show the mean and standard deviation of the ∆_2 values obtained by the algorithms. From the results, we can observe that, in the six problems, the use of the external archiver has a significant impact on the algorithm. This indicates an advantage of using the novel archiver. It is important to notice that the external archiver does not require any additional function evaluations; thus, the comparison is fair regarding the number of evaluations used. Table 2. Averaged ∆ 2 in decision space for the use of the external archiver and the base case.

Validation of the Use of Subpopulations
Furthermore, we examine whether the use of subpopulations has a positive effect on the novel algorithm. NεSGA2Is1 denotes the approach that uses migration, and NεSGA2Is2 denotes the approach that uses crossover between the subpopulations. Tables 4 and 5 show the mean and standard deviation of the ∆_2 values obtained by the algorithms. From the results, we can observe that the approach that uses migration is outperformed in four of the twelve comparisons (considering both decision and objective space). On the other hand, the approach that uses crossover between subpopulations is outperformed by the base algorithm in three of twelve comparisons. As could be expected, the algorithms that use subpopulations obtain better results in terms of objective space while, in some cases, sacrificing decision space. Table 4. Averaged ∆ 2 for the algorithms that use subpopulations and the base case in decision space.

Comparison to State-of-the-Art Algorithms
Now, we compare the proposed algorithm with P_{Q,ε}D_{xy}-NSGA-II and P_{Q,ε}D_{xy}-MOEA. In addition to the MOPs used previously, in this section, we consider three distance-based multi/many-objective point problems (DBMOPP) [50,51,57]. These problems allow us to visually examine the behavior of many-objective evolution, as they can be scaled in the number of objectives. In this context, there are K sets of vectors, where the kth set, V_k = {v_1, . . . , v_{m_k}}, determines the quality of a design vector x ∈ Q on the kth objective. This can be computed as

f_k(x) = min_{i=1,...,m_k} dist(x, v_i),

where m_k is the number of elements of V_k, which depends on k, and dist(x, v) denotes the Euclidean distance. An alternative representation is given by a centre (c), a circle radius (r), and an angle for each objective-minimizing vector. From this framework, we generate three problems. Figures 4-7 show the median approximations to P_{Q,ε} and F(P_{Q,ε}) obtained by the algorithms. It is possible to observe that, in all figures, P_{Q,ε}D_{xy}-MOEA has the worst performance. Next, P_{Q,ε}D_{xy}-NSGA-II misses several connected components of the set of approximate solutions. Figure 6 shows a clear example where P_{Q,ε}D_{xy}-NSGA-II focuses on a few regions of the decision space, while the proposed algorithm is capable of finding all connected components. Furthermore, Figures 8-10 show the box plots in both decision and objective space for the problems considered. Tables 6 and 7 show the mean and standard deviation of the ∆_2 values obtained by the algorithms.
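The DBMOPP objective described above is simple to evaluate; a minimal sketch (function name and data layout are ours) is:

```python
import math

def dbmopp_objective(x, V_k):
    """DBMOPP-style kth objective (sketch): Euclidean distance from the
    design vector x to the closest vector in the set V_k."""
    return min(math.dist(x, v) for v in V_k)
```

Each objective is minimized by an entire region of decision space (the points of V_k and ties between them), which is what makes these problems useful for studying equivalent or nearly equivalent solution sets.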
From the results, we can observe that the novel algorithm outperforms P_{Q,ε}D_{xy}-NSGA-II in eight out of nine problems according to decision space and in seven according to objective space. When compared with P_{Q,ε}D_{xy}-MOEA, the novel algorithm outperforms it in all nine problems according to both decision and objective space. This shows that an improvement in the search engine to maintain solutions well spread in decision and objective space is advantageous when approximating P_{Q,ε}. To conclude the comparison to state-of-the-art algorithms, we aggregate the results by conducting a single-election-winner strategy. Here, we have selected the Borda and Condorcet methods to perform our analysis [58].
In the Borda method, each ranking assigns to algorithm X a number of points equal to the number of algorithms it defeats. In the Condorcet method, if algorithm A defeats every other algorithm in a pairwise majority vote, then A is ranked first. Table 8 shows the count of times algorithm i was significantly better than algorithm j according to the Wilcoxon rank-sum test. In this case, both the Borda count and the Condorcet method prefer the novel method. Furthermore, in terms of computational complexity, both state-of-the-art methods have a complexity of O((G|P|)^2), since both algorithms make use of the external archiver. This complexity is comparable with that of the novel method; in some cases, due to the non-ε-dominated sorting, the novel algorithm can have a higher complexity. Table 8. Summary of times a method outperforms another with statistical significance.

Application to Non-Uniform Beam
Finally, we consider a non-uniform beam problem taken from [59,60]. Non-uniform beams are a commonly used structure in many applications, such as ship hulls, rocket surfaces, and aircraft fuselages. In this application, structural and acoustic properties are presented as constraints and performance indices of candidate non-uniform beams.
The primary goal of structural-acoustic design is to create a light-weight, stable, and quiet structure. The multi-objective optimization of non-uniform beams seeks the balance among those aims. Here, we will consider the following bi-objective problem: the first objective, P̄, is the integration of the radiated sound power over a range of frequencies, where P is the average acoustic power radiated by the vibrating beam per unit width over one period of vibration (see [60] for the details of the implementation). The second objective is the total mass of the beam, which can be expressed as

M(x) = ρ (L/N) ∑_{i=1}^{N} h_i(x),

where 1 ≤ h_i(x) ≤ 15 mm is the average height of the ith segment, and the number of interpolation coordinates is N = 10 (i.e., the decision space is 10-dimensional). The beam length is L = 1.0 m, and the mass density is ρ = 2643 kg/m^3. The parameters for the archiver were set to ε = [1, 0.00003], δ_x = 2.5 × 10^{-3}, and δ_y = [0.3, 0.000003]. That is to say, we are willing to accept a deterioration of 1 kg and 0.3 × 10^{-5} W/m/s. The parameters were as follows:
• population size: 500,
• number of generations: 200,
• the rest of the parameters were set as before.
Figure 11 shows the approximated set P_{Q,ε}. The results show that the algorithm is capable of finding the region of interest, according to previous studies [19,60], with 100,000 function evaluations, while in previous work, using only the archiver with random points, it took 10 million function evaluations [19] to obtain similar results.

Conclusions and Future Work
In this work, we have proposed a novel multi-objective evolutionary algorithm that aims to compute the set of approximate solutions. The algorithm uses the archiver ArchiveUpdateP_{Q,ε}D_{xy} to maintain a well-distributed representation in both decision and objective space. Moreover, the algorithm ranks the solutions according to −ε-dominance and their distribution in both decision and objective space. This allows exploring different regions of the search space where approximate solutions might be located and maintaining them in the archive.
Furthermore, we observed that the interplay of generator and archiver is a delicate problem, involving (among others) a proper balance of local/global search and a suitable density distribution for all generational operators. The results show that the novel algorithm is competitive against other state-of-the-art methods on academic problems. Finally, we applied the algorithm to a non-uniform beam design problem, obtaining results comparable to those of the state-of-the-art while using significantly fewer resources.