Article

Reference Set Generator: A Method for Pareto Front Approximation and Reference Set Generation

by Angel E. Rodriguez-Fernandez 1,*, Hao Wang 2,* and Oliver Schütze 1
1 Departamento de Computación, Centro de Investigación y de Estudios Avanzados del IPN, Mexico City 07360, Mexico
2 Leiden Institute of Advanced Computer Science and Applied Quantum Algorithms, Leiden University, 2311 EZ Leiden, The Netherlands
* Authors to whom correspondence should be addressed.
Mathematics 2025, 13(10), 1626; https://doi.org/10.3390/math13101626
Submission received: 27 March 2025 / Revised: 25 April 2025 / Accepted: 3 May 2025 / Published: 15 May 2025

Abstract

In this paper, we address the problem of obtaining bias-free and complete finite-size approximations of the solution sets (Pareto fronts) of multi-objective optimization problems (MOPs). Such approximations are, in particular, required for the fair usage of distance-based performance indicators, which are frequently used in evolutionary multi-objective optimization (EMO). If the Pareto front approximations are biased or incomplete, the use of these performance indicators can lead to misleading or false information. To address this issue, we propose the Reference Set Generator (RSG), which can, in principle, be applied to Pareto fronts of any shape and dimension. We finally demonstrate the strength of the novel approach on several benchmark problems.

1. Introduction

Multi-objective optimization has become an integral part of decision-making for many real-world problems. In a multi-objective optimization problem (MOP), one is faced with the task of concurrently optimizing $k$ individual objectives. The set of optimal solutions is called the Pareto set, and its image is the Pareto front. The latter set is, in many cases, the most important one for the decision maker (DM), since it provides an overview of the attainable optimal performances. What makes MOPs hard to deal with is that one can expect both the Pareto set and the Pareto front to form—at least locally and under certain assumptions on the model—objects of dimension $k-1$ [1]. For the numerical treatment of MOPs, specialized evolutionary algorithms, called multi-objective evolutionary algorithms (MOEAs), have caught the interest of many researchers and practitioners during the last three decades [2]. MOEAs are population-based and hence allow one to obtain a finite approximation of the entire solution set in one run of the algorithm. For the performance assessment of the outcome sets, several different indicators have been proposed so far (e.g., [3,4,5,6,7,8]). Some of these performance indicators are distance-based and require a "suitable" finite-size representation of the Pareto front. While a vast variety of different MOEAs have been proposed and analyzed to date, it is fair to say that the generation of suitable reference sets has played a rather minor role in the evolutionary multi-objective optimization (EMO) community. It is evident that such reference sets should be complete. Furthermore, as we show in this work, a biased representation can lead to misleading or even false information.
To fill this gap, we propose in this work the Reference Set Generator (RSG). The main steps of the RSG are as follows: (i) a first approximation $A_y$ of the Pareto front is either taken or generated. This set can, in principle, be of arbitrary size, and its elements can be non-uniformly distributed along the Pareto front (i.e., biased). However, all of these elements have to be "close enough" to the set of interest. In order to obtain a bias-free approximation, (ii) component detection and (iii) a filling step are applied to $A_y$. Finally, (iv) a reduction step is applied. The RSG is applicable to MOPs with Pareto fronts of, in principle, any shape and dimension. This is in contrast to existing methods for generating such reference sets, which require an analytic expression of either the Pareto front or the Pareto set (or a tight superset of it). Furthermore, the resulting reference set can be of any desired magnitude. We will show the strength of the novel method on several benchmark problems.
The remainder of this paper is organized as follows: in Section 2, we briefly recall the background required for the understanding of this work and discuss the related work as well as the performance indicators that benefit from our approach. In Section 3, we motivate the need for bias-free and complete finite-size Pareto front approximations. In Section 4, we propose the Reference Set Generator (RSG), which aims for such sets. In Section 5, we present numerical results on selected benchmark problems and compare the RSG to related algorithms. Finally, we draw our conclusions in Section 6 and outline possible paths for future research.

2. Background and Related Work

We consider multi-objective optimization problems (MOPs) that can be mathematically expressed via
$$\min_{x \in Q} F(x). \tag{MOP}$$
Hereby, the map $F$ is defined as
$$F: Q \to \mathbb{R}^k, \qquad F(x) = (f_1(x), \ldots, f_k(x))^T,$$
where we assume each of the individual objectives, $f_i: Q \to \mathbb{R}$, $i = 1, \ldots, k$, to be continuous. We stress, however, that the method we propose in the sequel, the RSG, can, in principle, also be applied to discrete problems. $Q$ is the domain of the objective functions, which is typically expressed by equality and inequality constraints.
In order to define the optimality of an MOP, one can use the concept of dominance [9].
Definition 1.
(a) Let $v, w \in \mathbb{R}^k$. Then, the vector $v$ is less than $w$ ($v <_p w$) if $v_i < w_i$ for all $i \in \{1, \ldots, k\}$. The relation $\leq_p$ is defined analogously.
(b) $y \in Q$ is dominated by a point $x \in Q$ ($x \prec y$) with respect to (MOP) if $F(x) \leq_p F(y)$ and $F(x) \neq F(y)$.
(c) $x \in Q$ is called a Pareto point or Pareto optimal if there is no $y \in Q$ that dominates $x$.
(d) The set $P_Q$ of Pareto optimal solutions,
$$P_Q := \{x \in Q : \nexists\, y \in Q \text{ s.t. } y \prec x\},$$
is called the Pareto set.
(e) The image $F(P_Q)$ of the Pareto set is called the Pareto front.
One can expect that both the Pareto set and the Pareto front form, under certain conditions and at least locally, objects of dimension $k-1$. For details, we refer to [1]. Due to this "curse of dimensionality", it is hence not possible for a multi-objective evolutionary algorithm (MOEA) to keep all promising candidate solutions (e.g., all non-dominated ones) during the algorithm run. It is thus inevitable—at least for continuous problems—to select which of the promising solutions should be kept in order to obtain a "suitable" approximation of the solution set (in most cases, the Pareto front of the given MOP). Within MOEAs, this process is termed "selection". Another term, which can be used synonymously, is "archiving". The latter is typically used when the MOEA is equipped with an external archive.
Most of the existing MOEAs can be divided into three main classes: First, there exist MOEAs that are based on the concept of dominance (e.g., [10,11,12,13]). Second, there exist MOEAs that are based on decompositions (e.g., [14,15,16,17,18]), and third, there are MOEAs that make use of an indicator function (e.g., [19,20,21]). The selection strategies of the first generation of MOEAs of the first class are based on a combination of non-dominated sorting and niching (e.g., [22,23,24]). Later, elite preservation was included, leading to increased overall performance (and, as a consequence, better Pareto front approximations). This holds, e.g., for SPEA [25], PAES [13], SPEA-II [11], and NSGA-II [10]. Theoretical studies on selection mechanisms have been done by the groups of Rudolph [26,27,28,29] and Hanne [30,31,32,33]. All of these studies deal with the ability of the populations to reach the Pareto set/front; the distributions of the individuals along the Pareto sets/fronts, on the other hand, have not been considered. The selection mechanisms of the MOEAs within the second and third class follow directly from the construction of the algorithms: selection in a decomposition-based MOEA is done by considering the values of the chosen scalarization functions. Analogously, the selection in an indicator-based MOEA is done by considering the indicator contributions.
Existing (external) archiving strategies can also be divided into three classes: (a) unbounded archivers, (b) implicitly bounded archivers, and (c) bounded archivers. Unbounded archivers store all promising solutions during the algorithm run. The magnitudes of such archives can exceed any given threshold if the algorithm is run long enough. Unbounded archivers have, e.g., been used and analyzed in [34,35,36,37,38,39,40]. $\epsilon$-dominance [41] can be viewed as a weaker concept of dominance. This relation allows single solutions to "cover" entire parts of the Pareto front of a given MOP, which is the basis for most implicitly bounded archivers. Such strategies were first considered in the context of evolutionary multi-objective optimization (EMO) by Laumanns et al. [42]. Later works proposed and analyzed different approximations (such as gap-free approximations of the Pareto front) and dealt with different sets of interest (e.g., the consideration of all nearly optimal or all $\epsilon$-locally optimal solutions) [38,39,43]. Finally, bounded archivers have, e.g., been proposed in [44,45], where adaptive grid selections have been utilized. Bounded archivers tailored to the use of the hypervolume indicator have been suggested in [46,47]. In ref. [48], a bounded archiver aiming for Hausdorff approximations of the Pareto front is presented and analyzed. Laumanns and Zenklusen have proposed two bounded archivers that aim for $\epsilon$-approximations of the Pareto front [49].
All of the selection/archiving strategies mentioned above have in common that they aim for a "best approximation" of the Pareto front out of the given (finite) data. A related but slightly different problem is to generate a "suitable" (in particular, complete and bias-free) finite-size approximation of the set of interest $S$ for the sake of comparisons, even if $S$ is known approximately or even analytically but is not of a "trivial" shape (e.g., a line segment, a simplex, or a perfectly spherical front). It is known that most selection mechanisms have a non-monotonic behavior, which may result in entire regions of the Pareto front not being covered. Some of the external archivers have monotonic behavior and even aim for gap-free approximations; on the other hand, uniformity of the final archive cannot be guaranteed. Both issues are, e.g., discussed in [43]. The evolutionary multi-objective optimization platform PlatEMO [50] provides reference sets for many benchmark problems. The underlying method, proposed by Tian et al. [50], uses uniform sampling on the simplex and then maps these points to the particular Pareto front of each problem. However, an analytical expression or characterization of the Pareto front is required, and in some cases, the obtained set is not completely uniform. Furthermore, pymoo [51]—a multi-objective optimization framework in Python—provides built-in functions to obtain reference sets of the Pareto fronts for selected MOPs. This is mainly the case for problems whose fronts are given analytically. Some other reference sets are provided with fixed magnitude; however, no direct information is provided as to how they have been obtained. A method related to the RSG can be found in [52]; it has the aim of guiding the iterates of a particular Newton method toward the Pareto front. In this work, we extend this idea for the purpose of generating complete and bias-free Pareto front approximations of relatively large magnitudes (in particular, compared to the population sizes used in EMO). The RSG is, in principle, applicable to Pareto fronts of any shape or dimension. For this, however, an initial reference set has to be given. The retrieval of this set is problem-dependent, and it is not always clear how to obtain a suitable approximation (though we give some guidelines as to which method may be most promising for a given MOP).
Finally, reference sets such as the ones generated by the RSG are helpful for the evaluation of the performance qualities of candidate sets (populations) in EMO. More precisely, such sets are required for all distance-based indicators. The earliest of such indicators are the Generational Distance (GD, [3]) and the Inverted Generational Distance (IGD, [5]). Later, the indicator $\Delta_p$ [6] was proposed, which is a combination of slight variants of the GD and IGD and can be viewed as an averaged version of the Hausdorff distance $d_H$. So far, there exist several extensions of these performance indicators. For instance, the consideration of continuous sets—either only the Pareto front or also the candidate solution set—has been done in [8,53], leading to modifications of IGD and $\Delta_p$. The indicators IGD+ [7] and DOA [54] are modifications of the IGD that are Pareto-compliant.

3. Motivation

Here, we motivate the need for complete and bias-free finite-size Pareto front approximations; the Reference Set Generator (RSG), which targets such sets, will be proposed in the next section.
Distance-based indicators require a "suitable" finite-size approximation of the Pareto front in order to give a "correct" value for the approximation quality of the considered candidate solution set. In particular, this holds for the abovementioned indicators GD, IGD, and $\Delta_p$, together with their variants. Such representations are ideally uniformly spread along the entire Pareto front [53]. This, however, is a non-trivial task unless the Pareto front is given analytically and has a relatively simple form (e.g., linear or perfectly spherical). Regrettably, this is the case for only a few test problems (e.g., DTLZ1 and DTLZ2). On the other hand, there exist quite a few benchmark MOPs where the shape of the Pareto set is relatively simple. For such problems, it is tempting to choose uniform samples from the Pareto set (i.e., $X = \{s_1, \ldots, s_m\}$, where all $s_i \in P_Q$) and to use the respective image $Y = F(X)$ to represent the Pareto front. The following discussion shows, however, that this approach has to be handled with care, since it can induce unwanted biases in the approximations that, in turn, may result in misleading indicator values.
As the first example, consider the one-dimensional bi-objective problem
$$F(x) = \begin{pmatrix} 1 - \dfrac{1}{x} \\[4pt] \dfrac{1}{x} \end{pmatrix}.$$
Let the domain be given by $Q = [0.1, 3]$; then, the Pareto set is identical to $Q$, and the Pareto front is the line segment that connects the points $a = (-9, 10)^T$ and $b = (2/3, 1/3)^T$. Figure 1 shows the result when using $N$ equally spaced points along the Pareto set. As can be seen for $N = 50$ and $N = 500$, there is a clear bias of the images toward the lower right end of the Pareto front. For $N = 10{,}000$, the Pareto front approximation is "complete" (at least from the practical point of view) and appears to be perfect. However, it possesses the same bias. To see the impact of the reference set on the performance indicators, consider the two hypothetical outcomes (e.g., possible results from different MOP solvers):
$$A := \left\{\, a + \tfrac{2i-1}{10}\,(b-a) \;\middle|\; i \in \{1, \ldots, 5\} \,\right\}, \quad \text{and} \quad B := \left\{\, a + \tfrac{i}{10}\,(b-a) \;\middle|\; i \in \{6, \ldots, 10\} \,\right\}.$$
Figure 2 shows the two sets together with the Pareto front. Note that $A$ is the perfect five-element approximation of the Pareto front: the elements are equally distributed along the Pareto front, and the extreme points are shifted "halfway in" [53]. The set $B$ is certainly not perfect, as it, e.g., fails to "cover" more than half of the front. Table 1 shows the values $I(O, R)$ for different distance-based indicators, the outcomes $O \in \{A, B\}$, and different representations $R$ of the Pareto front. For $R = R^x_{10{,}000}$ (the one shown in Figure 1c), all indicators—except $d_H = \Delta_\infty$—yield lower values for $B$ than for $A$, indicating (erroneously) that $B$ is better than $A$. This is not the case for $d_H$, since the Hausdorff distance is determined by the maximum of the considered distances and not by an average of those (however, $d_H$ has other disadvantages in the context of EMO, most prominently that it punishes single outliers [6]). The situation changes when selecting $R = R^y_{10{,}000}$ as the representation of the front. This representation also contains $N = 10{,}000$ elements, but these are chosen uniformly along the Pareto front. Now, there is a tie for the two GD variants, and for all other indicators, $A$ leads to better values than $B$. These values are indeed very close to the "correct" values: all exact GD values are equal to zero, since $A$ and $B$ are contained in the Pareto front. In order to compute the exact IGD values, a particular integral has to be solved [53]. When using $R^y_{10{,}000}$ as the representation, the computation of the IGD values can be interpreted as a Riemann sum with $N = 10{,}000$ equally sized sub-intervals, leading to practically perfect values.
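To make this interpretation explicit, a sketch in the notation above: if $R^y_N = \{r_1, \ldots, r_N\}$ is placed equidistantly along the front, then
$$\mathrm{IGD}(O, R^y_N) = \frac{1}{N} \sum_{i=1}^{N} \operatorname{dist}(r_i, O) \;\approx\; \frac{1}{|L|} \int_{F(P_Q)} \operatorname{dist}(y, O)\, \mathrm{d}s,$$
where $\operatorname{dist}(r, O) = \min_{o \in O} \|r - o\|$ and $|L|$ denotes the length of the front; the right-hand side is simply the average of the distance function over the Pareto front.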
We repeat the process, but now using only $N = 100$ elements for the representation (see Table 1). We can see the same trend, i.e., that $B$ appears to be better for $R^x_{100}$, while $A$ appears to be better when using $R^y_{100}$. Furthermore, we see that the indicator values for $R^y_{100}$ are already quite close to the exact values (i.e., those obtained when using $R^y_{10{,}000}$). While the proper choice of $N$ may not be an issue for bi-objective problems ($k = 2$), it may become important for a larger number of objectives due to the "curse of dimensionality": at least for continuous problems, one can expect that the Pareto front forms, under certain (mild) assumptions, a manifold of dimension $k-1$ [1].
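This experiment is easy to replicate. The following minimal NumPy sketch (ours, not part of the RSG code) builds $A$, $B$, and the two representations from the definitions above and evaluates the plain GD/IGD distance means, without the variants of [6]:

import numpy as np

a = np.array([-9.0, 10.0])              # F(0.1)
b = np.array([2.0 / 3.0, 1.0 / 3.0])    # F(3)

# Hypothetical outcome sets A and B from the text.
A = np.array([a + (2 * i - 1) / 10 * (b - a) for i in range(1, 6)])
B = np.array([a + i / 10 * (b - a) for i in range(6, 11)])

def F(x):
    """The bi-objective example F(x) = (1 - 1/x, 1/x)."""
    return np.stack([1.0 - 1.0 / x, 1.0 / x], axis=1)

N = 10_000
R_x = F(np.linspace(0.1, 3.0, N))        # uniform along the Pareto set (biased image)
t = np.linspace(0.0, 1.0, N)[:, None]
R_y = a + t * (b - a)                    # uniform along the Pareto front

def gd(O, R):
    """Generational Distance: mean distance from each point of O to R."""
    d = np.linalg.norm(O[:, None, :] - R[None, :, :], axis=2)
    return d.min(axis=1).mean()

def igd(O, R):
    """Inverted Generational Distance: distances from R to O."""
    return gd(R, O)

for name, R in [("R_x (biased)", R_x), ("R_y (uniform)", R_y)]:
    print(f"{name}: IGD(A)={igd(A, R):.4f}  IGD(B)={igd(B, R):.4f}")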
We next consider further test problems. Figure 3, Figure 4, Figure 5 and Figure 6 show the Pareto front approximations for the commonly used test problems DTLZ1, DTLZ2, ZDT1, and ZDT3, respectively, where the representation has been obtained via uniform sampling along the Pareto set in decision variable space. In all cases, the Pareto front representations appear to be perfect for sufficiently large values of $N$. However, certain biases can be observed for lower values of $N$. For all test problems, we selected two points out of the representation and show the distances to their nearest neighbors (denoted by $d(y, NN_y)$). These distances differ by one order of magnitude, which confirms that the solutions are not uniformly distributed along the fronts. Using such representations, the same issues as those discussed above can arise.
To conclude, a suitable representation of the Pareto front of a given MOP is crucial when considering distance-based performance indicators that average the considered distances. Such representations are ideally equally distributed over the front. If the representation contains a bias, this may result in misleading indicator values, leading, in turn, to a wrong evaluation of the obtained results. In particular, performing the sampling along the Pareto set is, though tempting, not appropriate for such distance-based indicators. In the sequel, we will propose a method that aims to achieve uniform Pareto front representations.

4. Reference Set Generator (RSG)

In the following, we present the RSG, a method that aims to generate complete and bias-free Pareto front approximations. We first present the general idea of the method and then discuss each step in detail.

4.1. General Idea

We assume in the following that we are interested in a Pareto front approximation of size $N$ for a given MOP. Furthermore, we assume that we are given a set $A_y$ of (in principle) arbitrary size of non-dominated, possibly non-uniformly distributed points that are "close enough" to the PF. The computation of $A_y$ is, in general, a non-trivial task; below, we will discuss different strategies to obtain suitable approximations. Given these data, the Reference Set Generator (RSG) consists of three main steps: component detection, filling, and reduction. More precisely, given $A_y$, which may have imperfections or biases in the approximation, the idea is to fill the gaps between the points within each connected component of $A_y$. This leads to a more complete set $F$ with a higher cardinality than $A_y$, which can then be reduced to obtain a uniform reference set of size $N$. Note that PFs can be disconnected, and if we simply fill the gaps in $A_y$, we may introduce points that do not belong to the PF (see Figure 7b for such an example). Therefore, component detection must be performed before applying the filling process to each detected component (Figure 7a).
The general procedure of the RSG is presented in Algorithm 1, and each of the main steps will be explained in the following subsections.
Figure 8, Figure 9, Figure 10, Figure 11, Figure 12 and Figure 13 show all the steps of RSG on the test problems ZDT3, DTLZ7, WFG2, CONV3, CONV3-4, and CONV4-2F (we will discuss these examples in more detail in Section 5).
Algorithm 1 Reference Set Generator (RSG)
Input: Starting set $A_y$, filling size $N_f$, output size $N$.
Output: PF reference $Z$.
1: $C = \{C_1, \ldots, C_{n_c}\} \leftarrow$ ComponentDetection($A_y$)
2: for $i = 1, \ldots, n_c$ do                ▹ for each component
3:    $F_i \leftarrow$ Filling($C_i$, $N_f \cdot |C_i| / |A_y|$)
4: end for
5: $F \leftarrow \bigcup_i F_i$
6: $Z \leftarrow$ Reduction($F$, $N$)
7: return $Z$
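For illustration, the following is a minimal end-to-end sketch of Algorithm 1 for $k = 2$, assuming scikit-learn for DBSCAN and k-means. The parameter values are illustrative defaults rather than the tuned settings of Section 4.2, and the filling is the arc-length resampling of Algorithm 3 in vectorized form:

import numpy as np
from sklearn.cluster import DBSCAN, KMeans

def rsg_2d(A_y, N_f=2000, N=100, eps=0.1, min_pts=3):
    """A_y: (l, 2) array of non-dominated points close to the Pareto front."""
    labels = DBSCAN(eps=eps, min_samples=min_pts).fit_predict(A_y)  # component detection
    filled = []
    for lab in sorted(set(labels) - {-1}):                          # -1 marks outliers
        C = A_y[labels == lab]
        C = C[np.argsort(C[:, 0])]                                  # sort by f_1
        n_i = max(2, round(N_f * len(C) / len(A_y)))                # proportional budget
        seg = np.linalg.norm(np.diff(C, axis=0), axis=1)            # segment lengths L_i
        s = np.concatenate([[0.0], np.cumsum(seg)])                 # cumulative arc length
        t = np.linspace(0.0, s[-1], n_i)                            # equidistant targets
        filled.append(np.column_stack([np.interp(t, s, C[:, d]) for d in (0, 1)]))
    F = np.vstack(filled)                                           # filled set
    return KMeans(n_clusters=N, n_init=10).fit(F).cluster_centers_  # reduction to Z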

4.2. Component Detection

Since the Pareto front might be disconnected, a component detection on $A_y$ is needed. We use DBSCAN [55] in objective space for this purpose for three main reasons: (i) the number of components does not need to be known a priori, (ii) the method detects outliers, and (iii) we have observed that a density-based approach works better than a distance-based one (e.g., k-means) for component detection. DBSCAN has two parameters: $minpts$ and $r$. The selection of these parameters is problem-dependent. Depending on the information available about the Pareto front, we suggest the following values:
  • If it is known that the PF is connected, this step can simply be omitted. Note that the main application of the RSG is the approximation of Pareto fronts of known benchmark functions, where the shapes of the PFs are at least roughly known.
  • If the range of the PF is roughly known or normalized and the PF is disconnected (or at least suspected to be), $r$ can be set to 10% of the range and $minpts$ to 10% of $|A_y|$ when few components are expected. Alternatively, $r$ can be set to a smaller value (e.g., 2–3%) and $minpts$ to 1–2% of $|A_y|$ when several components are anticipated.
  • If no information about the PF is known a priori, we make the component detection process "parameter-free" by running a small grid search for the best values of $minpts$ and $r$ based on the weakest link function defined in [56]. Based on our experiments, we suggest setting the parameter values to $minpts \in \{2, 3\}$ and $r \in \{0.10\bar{d}, 0.11\bar{d}, \ldots, 0.15\bar{d}\}$ for bi-objective problems and to $minpts \in \{3, 4\}$ and $r \in \{0.19\bar{d}, 0.20\bar{d}, \ldots, 0.23\bar{d}\}$ otherwise, where $\bar{d} = \frac{2}{\ell(\ell-1)} \sum_{i<j} \|p_i - p_j\|$ is the average pairwise distance between all points $p_i \in A_y$ and $\ell = |A_y|$.
A summary of the component detection process is presented in Algorithm 2. The parameters $r$ and $minpts$ are given as grid search ranges: $r\_interval$ and $minpts\_interval$ are the input variables containing the lower and upper bounds of the grid search for $r$ and $minpts$, respectively. The step size for $r$ is also needed and is given in the input variable $r\_step$; no step size is needed for $minpts$, as this variable is an integer. With these values, the grid search values are defined as $minpts \in \{minpts\_min, minpts\_min + 1, \ldots, minpts\_max\}$ and $r \in \{r\_min, r\_min + r\_step, r\_min + 2\,r\_step, \ldots, r\_max\}$. If the grid search is not needed, simply set $r\_interval$ to $[r, r]$ and $minpts\_interval$ to $[minpts, minpts]$, using the desired values for $r$ and $minpts$; the component detection will then be performed only once with those values. In the following, we describe the remaining steps for a single connected component. If multiple components exist, the procedures must be repeated analogously for each component $C_i$ identified by Algorithm 2.
Algorithm 2 Component Detection
Input: starting set $A_y = \{p_1, \ldots, p_\ell\}$, number of objectives $k$, range of $r$ used for the grid search $r\_interval = [r\_min, r\_max]$, size of the step taken for $r$ in the grid search $r\_step$, range of $minpts$ used for the grid search $minpts\_range = [minpts\_min, minpts\_max]$.
Output: Number of clusters $n_c$, clusters $C = \{C_1, \ldots, C_{n_c}\}$.
1: Set $wl_{min} \leftarrow \infty$
2: for $minpts \in \{minpts\_min, minpts\_min + 1, \ldots, minpts\_max\}$ do
3:    for $r \in \{r\_min, r\_min + r\_step, r\_min + 2\,r\_step, \ldots, r\_max\}$ do
4:        $C_t, n_{c_t} \leftarrow$ DBSCAN($A_y$, $r$, $minpts$)
5:        $wl \leftarrow$ WeakestLink($C_t$)
6:        if $wl \leq wl_{min}$ then
7:            $C \leftarrow C_t$
8:            $n_c \leftarrow n_{c_t}$
9:            $wl_{min} \leftarrow wl$
10:        end if
11:    end for
12: end for
13: return $C = \{C_1, \ldots, C_{n_c}\}$, $n_c$
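A sketch of this grid search, assuming scikit-learn and SciPy; weakest_link is an assumed user-supplied callable implementing the weakest link criterion of [56] (lower is better), which we do not reproduce here:

import itertools
import numpy as np
from scipy.spatial.distance import pdist
from sklearn.cluster import DBSCAN

def detect_components(A_y, weakest_link, k):
    d_bar = pdist(A_y).mean()                       # average pairwise distance
    if k == 2:
        minpts_grid, r_grid = (2, 3), d_bar * np.arange(0.10, 0.155, 0.01)
    else:
        minpts_grid, r_grid = (3, 4), d_bar * np.arange(0.19, 0.235, 0.01)
    best_labels, best_score = None, np.inf
    for minpts, r in itertools.product(minpts_grid, r_grid):
        labels = DBSCAN(eps=r, min_samples=minpts).fit_predict(A_y)
        score = weakest_link(A_y, labels)
        if score <= best_score:                     # keep the best-scoring clustering
            best_labels, best_score = labels, score
    return [A_y[best_labels == lab] for lab in sorted(set(best_labels) - {-1})]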

4.3. Filling

Even if we know the PS a priori, a uniform sampling of the PS will, in general, not result in a uniform sampling of the PF. We assume that we have a set of points $A_y$ that is not uniformly distributed. However, if we fill the gaps and select $N$ points from the filled set, we obtain a more uniform set, leading to better IGD approximations when selecting points from these filled sets. The idea behind the filling step is to create a set that is as uniform as possible so that the reduction step (in particular, k-means) does not become stuck in non-uniform local optima, which would lead to non-uniform final sets. The next task is, therefore, to compute $N_f$ solutions that are ideally uniformly distributed along $A_y$.
This process is performed differently for $k = 2$ and $k \geq 3$ objectives:
  • For $k = 2$, we sort the points of $A_y = \{p_1, \ldots, p_\ell\}$ in increasing order of $f_1$, i.e., the first objective. Then, we consider the piecewise linear curve formed by the segments between $p_1$ and $p_2$, $p_2$ and $p_3$, and so on. The total length of this curve is given by $|L| = \sum_{i=1}^{\ell-1} |L_i|$, where $|L_i| = \|p_i - p_{i+1}\|_2$. To perform the filling, we arrange the $N_f$ desired points along the curve $L$ such that the first point is $p_1$ and the subsequent points are distributed equidistantly along $L$. This is achieved by placing each point at a distance of $\delta = |L| / (N_f - 1)$ from the previous one along $L$. See Algorithm 3 for details.
  • The filling process for $k \geq 3$ consists of several intermediate steps; see Algorithm 4 for a general outline of the procedure. First, to better represent $A_y$ (particularly for the filling step), we triangulate this set in $(k-1)$-dimensional space. This is done because the PF of a continuous MOP forms a set whose dimension is at most $k-1$. To achieve this, we compute a "normal vector" $\eta$ to $A_y$ using Equation (8), and then we project $A_y$ onto the $(k-1)$-dimensional hyperplane normal to $\eta$, obtaining the projected set $P^{k-1}$. After this, we compute the Delaunay triangulation [57] of $P^{k-1}$, which provides a triangulation $\mathcal{T}$ that can be used in the original $k$-dimensional space. For some PFs, the triangulation may include triangles (or simplices for $k > 3$) that extend beyond $A_y$ (Figure 12d), so a removal strategy is applied to eliminate these triangles and obtain the final triangulation $T$. Finally, each triangle $t_i \in T$ is uniformly filled at random with a number of points proportional to its area (or volume for $k > 3$), resulting in the filled set $F$ of size $N_f$.
    We now describe each step in more detail:
    - Computing the "normal vector" $\eta$: Since the front is not known, we compute the normal direction $\eta$ orthogonal to the convex hull defined by the minimal elements of $A_y$. More precisely, we compute $\eta$ as follows: if $A_y = \{p_1, \ldots, p_\ell\}$, choose
    $$y^m_{(i)} \in \arg\min_{j = 1, \ldots, \ell} p_{j,i}, \qquad i = 1, \ldots, k,$$
    where $p_{j,i}$ denotes the $i$-th element of $p_j$, and set
    $$M := \left( y^m_{(2)} - y^m_{(1)}, \; y^m_{(3)} - y^m_{(1)}, \; \ldots, \; y^m_{(k)} - y^m_{(1)} \right) \in \mathbb{R}^{k \times (k-1)}.$$
    Next, compute a QR factorization of $M$, i.e.,
    $$M = QR = (q_1, \ldots, q_k) R,$$
    where $Q \in \mathbb{R}^{k \times k}$ is an orthogonal matrix with column vectors $q_i$, and $R \in \mathbb{R}^{k \times (k-1)}$ is a right upper triangular matrix. Then, the vector
    $$\eta = \operatorname{sgn}(q_{k,1}) \, \frac{q_k}{\|q_k\|_2} \tag{8}$$
    is the desired shifting direction. Since $Q$ is orthogonal, the vectors $v_1 := q_1, \ldots, v_{k-1} := q_{k-1}$ form an orthonormal basis of the hyperplane that is orthogonal to $\eta$. That is, these vectors can be used for the construction of $P^{k-1}$.
    - $(k-1)$-Projection $P^{k-1}$: We use $\eta$ as the first axis of a new coordinate system $(\eta, v_1, \ldots, v_{k-1})$, where the vectors $v_i$ are defined as above. In this coordinate system, the orthonormal vectors $v_1, \ldots, v_{k-1}$ form the basis of a hyperplane orthogonal to $\eta$. The projection $P^{k-1}$ of the points of $A_y$ onto this hyperplane is obtained by first expressing each point $p_i \in A_y$ in the new coordinate system as $p_i = \beta^i \eta + \beta^i_1 v_1 + \cdots + \beta^i_{k-1} v_{k-1}$ and then removing the first coordinate, yielding $p_i^{k-1} = \beta^i_1 v_1 + \cdots + \beta^i_{k-1} v_{k-1}$. Finally, $P^{k-1} = \{p_1^{k-1}, \ldots, p_\ell^{k-1}\}$.
    - Delaunay Triangulation $\mathcal{T}$: Compute the Delaunay triangulation of $P^{k-1}$. This returns $\mathcal{T}$, a list of size $\tilde{\delta}$ containing the indices of the points of $P^{k-1}$ that form the triangles (or simplices for $k > 3$). The list $\mathcal{T}$ serves as the triangulation of the $k$-dimensional set $A_y$, which is possible because $\mathcal{T}$ consists of indices, making it independent of the dimension. We use $\tilde{\delta}$ to denote the number of triangles obtained, $\{\mathcal{T}(i,1), \ldots, \mathcal{T}(i,k)\}$ to denote the indices of the vertices forming triangle $i$, and $\{p_{\mathcal{T}(i,1)}, \ldots, p_{\mathcal{T}(i,k)}\}$ to denote the corresponding vertices of triangle $i$.
    - Triangle Cleaning $T$: We identify three types of unwanted triangles: those with large sides, those with large areas, and those where the matrix containing the coordinates of the vertices has a large condition number. The type of cleaning applied depends on the problem; however, the procedure remains the same for any problematic triangle case and is outlined in Algorithm 5. First, the property $\rho_i$ (area, largest side, or condition number) is computed for all triangles $i = 1, \ldots, \tilde{\delta}$. Next, all triangles $i$ with $\rho_i > \tau \bar{\rho}$, where $\bar{\rho}$ denotes the mean of the $\rho_i$, are removed.
    - Triangle Filling $F$: For each triangle $t_i \in T$ with area $a_i$, we generate $\lceil \frac{a_i}{A} N_f \rceil$ points uniformly at random inside $t_i$, following the procedure described in [58]. That is, the number of points is proportional to the area (or volume) of each triangle (or simplex). Here, $A = \sum_{i=1}^{\delta} a_i$ is the total area of the triangulation.
Algorithm 3 Filling ($k = 2$ Objectives)
Input: starting set $A_y = \{p_1, \ldots, p_\ell\}$, filling size $N_f$
Output: Filled set $F = \{y_1, \ldots, y_{N_f}\}$
1: $X = \{x_1, \ldots, x_\ell\} \leftarrow$ sort $A_y$ according to its first objective $f_1$
2: $L_i \leftarrow \|x_i - x_{i+1}\|$, $i = 1, \ldots, \ell - 1$
3: $L \leftarrow \sum_{i=1}^{\ell-1} L_i$
4: $\delta \leftarrow L / (N_f - 1)$
5: $dist\_left \leftarrow (0, 0, \ldots, 0) \in \mathbb{R}^{\ell}$
6: for $i = 1 : \ell - 1$ do                ▹ compute number of points per segment
7:    $ratio \leftarrow (L_i + dist\_left(i)) / \delta$
8:    $points\_per\_segment(i) \leftarrow \lfloor ratio \rfloor$
9:    $dist\_left(i+1) \leftarrow (ratio - \lfloor ratio \rfloor) \cdot \delta$
10: end for
11: $count \leftarrow 1$
12: for $i = 1 : \ell - 1$ do                ▹ for each line segment
13:    if $points\_per\_segment(i) > 0$ then                ▹ check if a point lands in segment $L_i$
14:        $\nu_i := (x_{i+1} - x_i) / L_i$
15:        $y_{count} \leftarrow x_i + (\delta - dist\_left(i)) \cdot \nu_i$
16:        $count \leftarrow count + 1$
17:        for $j = 2 : points\_per\_segment(i)$ do                ▹ if $L_i$ has more than one point
18:            $y_{count} \leftarrow y_{count-1} + \delta \cdot \nu_i$
19:            $count \leftarrow count + 1$
20:        end for
21:    end if
22: end for
23: return $F = \{y_1, \ldots, y_{N_f}\}$
Algorithm 4 Filling ($k \geq 3$ Objectives)
Input: starting set $A_y = \{p_1, \ldots, p_\ell\}$, filling size $N_f$
Output: Filled set $F = \{y_1, \ldots, y_{N_f}\}$
1: $\eta \leftarrow$ normal_vector($A_y$)
2: $P^{k-1} \leftarrow$ projection($A_y$, $\eta$)
3: $\mathcal{T} \leftarrow$ DelaunayTriangulation($P^{k-1}$)
4: $T \leftarrow$ TriangleCleaning($\mathcal{T}$)
5: $F \leftarrow$ TriangleFilling($T$, $A_y$)
6: return $F = \{y_1, \ldots, y_{N_f}\}$
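A compact sketch of Algorithm 4, assuming SciPy for the Delaunay triangulation. The cleaning of Algorithm 5 is omitted here, and the simplices are sampled with uniform barycentric (Dirichlet) weights, a standard alternative to the triangle procedure of [58]:

import numpy as np
from scipy.spatial import Delaunay

def normal_vector(A_y):
    """Normal direction (Eq. (8)) and an orthonormal basis of the hyperplane."""
    y_min = A_y[np.argmin(A_y, axis=0)]             # one minimizer per objective
    M = (y_min[1:] - y_min[0]).T                    # k x (k-1) matrix of M
    Q, _ = np.linalg.qr(M, mode="complete")         # Q is a k x k orthogonal matrix
    eta = np.sign(Q[0, -1]) * Q[:, -1]              # sgn(q_{k,1}) * q_k
    return eta, Q[:, :-1]                           # eta and v_1, ..., v_{k-1}

def fill(A_y, N_f, seed=0):
    """Filled set F of roughly N_f points for a single connected component."""
    rng = np.random.default_rng(seed)
    _, V = normal_vector(A_y)
    P = A_y @ V                                     # projected set P^{k-1}
    simp = A_y[Delaunay(P).simplices]               # (n_tri, k, k): vertices in R^k
    edges = simp[:, 1:, :] - simp[:, :1, :]
    gram = edges @ edges.transpose(0, 2, 1)
    vol = np.sqrt(np.abs(np.linalg.det(gram)))      # proportional to (k-1)-volumes
    counts = np.maximum(1, np.round(N_f * vol / vol.sum()).astype(int))
    points = [rng.dirichlet(np.ones(s.shape[0]), size=n) @ s
              for s, n in zip(simp, counts)]        # uniform points in each simplex
    return np.vstack(points)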
Algorithm 5 Triangle Cleaning
Input: starting set $A_y = \{p_1, \ldots, p_\ell\}$, triangulation $\mathcal{T}$, number of triangles $\tilde{\delta}$, parameter threshold $\tau$
Output: cleaned triangulation $T$ of size $\delta$
1: if chosen property is area then
2:    $\rho_i \leftarrow \frac{1}{k!} \left[ \det\left( M_i M_i^T \right) \right]^{1/2}$, $i = 1, \ldots, \tilde{\delta}$, where $M_i$ is the matrix with rows $p_{\mathcal{T}(i,2)}^T - p_{\mathcal{T}(i,1)}^T, \; p_{\mathcal{T}(i,3)}^T - p_{\mathcal{T}(i,1)}^T, \; \ldots, \; p_{\mathcal{T}(i,k)}^T - p_{\mathcal{T}(i,1)}^T$
3: else if chosen property is largest side then
4:    $\rho_i \leftarrow$ largest side of the simplex with vertices $\{p_{\mathcal{T}(i,1)}, \ldots, p_{\mathcal{T}(i,k)}\}$
5: else if chosen property is condition number then
6:    $\rho_i \leftarrow \kappa\left( p_{\mathcal{T}(i,1)} \; p_{\mathcal{T}(i,2)} \; \cdots \; p_{\mathcal{T}(i,k)} \right)$                ▹ $\kappa(A)$ is the condition number of matrix $A$
7: end if
8: $\bar{\rho} \leftarrow \frac{1}{\tilde{\delta}} \sum_{i=1}^{\tilde{\delta}} \rho_i$
9: $T \leftarrow \mathcal{T}$
10: for $i = 1, \ldots, \tilde{\delta}$ do
11:    if $\rho_i > \tau \cdot \bar{\rho}$ then remove triangle $i$ from $T$
12: end for
13: return $T$, number of triangles $\delta$
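A sketch of the cleaning step for the "largest side" property (the variant used for all reference sets in this work), operating on the (n_tri, k, k) simplex vertex array produced in the previous sketch:

import numpy as np

def clean_largest_side(simp, tau):
    k = simp.shape[1]
    i, j = np.triu_indices(k, 1)                             # all vertex index pairs
    sides = np.linalg.norm(simp[:, i] - simp[:, j], axis=2)  # side lengths per simplex
    rho = sides.max(axis=1)                                  # rho_i = largest side
    return simp[rho <= tau * rho.mean()]                     # keep rho_i <= tau * mean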

4.4. Reduction

Once we have computed the filled set $F$, we need to select $N$ points that are ideally evenly distributed along $F$. To this end, we use k-means clustering with $N$ clusters, as there is a strong relationship between k-means and optimal IGD subset selection [59,60]. The resulting $N$ cluster centroids form the PF reference set $Z$. The use of k-means for the reduction can be modified in the RSG code; alternatives such as k-medoids and spectral clustering are also supported (this is an input parameter in the code). Note that this reduction step can be further adapted to generate reference sets tailored to other types of indicators (i.e., those that are not distance-based), since one of the outputs of the RSG is the raw filled set $F$.
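A minimal sketch of this step, assuming scikit-learn's KMeans; max_iter mirrors the bound of 500 k-means iterations mentioned in Section 4.6:

from sklearn.cluster import KMeans

def reduction(F, N, seed=0):
    """Select N well-spread reference points from the filled set F."""
    km = KMeans(n_clusters=N, n_init=10, max_iter=500, random_state=seed).fit(F)
    return km.cluster_centers_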

4.5. Obtaining $A_y$

The RSG requires an initial approximation $A_y$ of the Pareto front. Note that, by construction of the algorithm, this set can have small imperfections (which can be removed by the filling step) and can also have biases in the approximation (handled by the reduction step). However, it is desired that $A_y$ "captures" the shape of the entire Pareto front; in particular, the RSG is not capable of detecting that an entire region of the Pareto front is missing in $A_y$. The computation of such an approximation is certainly problem-dependent. For our computations, we have used the following three main procedures to obtain $A_y$, depending on the complexity of the PF shape:
-
Sampling: In the easiest case, either the PS or the PF is given analytically, which is the case for several benchmark problems. If the sampling can be performed directly in objective space (e.g., for linear fronts), the remaining steps of the RSG may not be needed to further improve the quality of the solution set. If the sampling is performed in decision variable space, the elements of the resulting image $A_y$ are not necessarily uniformly distributed along the Pareto front, as discussed above. However, in that case, the filling and reduction steps may help to remove biases. We have used sampling, e.g., for the test problems DTLZ1, CONV3, and CONV3-4.
-
Continuation: If neither the PS nor the PF has an "easy" shape, one alternative is to make use of multi-objective continuation methods, possibly in combination with several different starting points and a non-dominance test. In particular, we have used the Pareto Tracer (PT, [61,62]), a state-of-the-art continuation method that is able to treat problems of, in principle, any dimensions (both $n$ and $k$), can handle general constraints, and can even detect local degeneration of the solution set. Continuation is advisable if the PS/PF consists of relatively few connected components and if the gradient information can at least be approximated. We have used PT, e.g., for the test problems WFG2, DTLZ5, and DTLZ6.
-
Archiving: The result of an MOEA or any other MOP solver can, of course, be taken. This could be the final population or archive, obtained by merging several populations of the same or several runs [52], or by using external (unbounded) archives [43]. Note that this includes taking a reference set from a given repository. Archiving is advisable if none of the above techniques can be applied successfully. We have used archiving, e.g., for the test problems DTLZ1-4, DTLZ7, ZDT1-6, CONV3, CONV3-4, and CONV4-2F.

4.6. Complexity Analysis

The overall time complexity is $O(\gamma \ell^2 + \tau_2 N k N_f)$ for $k = 2$ (regardless of the number of components) and $O(\gamma \ell^2 + \tilde{\delta} k^2 + \tau_2 N k N_f)$ for $k \geq 3$, where $\ell$ is the size of the initial approximation $A_y$, $k$ is the number of objectives, $\tilde{\delta}$ is the number of triangles in the Delaunay triangulation, $\tau_2$ is the number of iterations of k-means (bounded to 500 in this work), $N$ is the desired size of the reference set $Z$, and $N_f$ is the size of the filling. This assumes that $k \ll \ell$ and that the triangle-cleaning method used is based on the longest side (which was the method applied to all the references presented in this work). Typically, obtaining a decent approximation requires a large value of $N_f$, making the clustering step the dominant one and thus reducing the overall complexity to $O(\tau_2 N k N_f)$ for any $k$. We now present the time complexity analysis in detail for each step separately, considering a single component:
  • Component Detection: The time complexity is $O(\ell^2 + \gamma(\ell \log \ell + \ell^2))$, which accounts for the computation of the average distance plus the size of the grid search ($\gamma$) multiplied by the sum of the complexities of DBSCAN and the WeakestLink computation. Here, $\ell$ is the size of $A_y$, and $\gamma$ represents the number of parameter combinations of the grid search, with $\gamma = 14$ for $k = 2$ and $\gamma = 10$ for $k \geq 3$ using the values suggested for the case where no information about the PF is known a priori. If it is known beforehand that the Pareto front is connected, then the parameters of DBSCAN can be adjusted accordingly, and $\gamma$ can be set to 1.
  • Filling: The time complexity depends on the number of objectives:
    - For $k = 2$, the time complexity is $O(\ell \log \ell + k N_f)$, which accounts for sorting and placing the $N_f$ points along the line segments.
    - For $k \geq 3$, the time complexity is $O([k\ell + k(k-1)^2] + k^2 \ell + \ell \log \ell + [(k-1)\delta + k N_f])$ due to the computations involved in determining the normal vector $\eta$, changing coordinates and projecting, performing the Delaunay triangulation, and filling the triangles. Here, $\delta$ represents the size of the cleaned Delaunay triangulation, i.e., the number of triangles. Additionally, the triangle cleaning must be considered, though its complexity depends on the method used. It is given by $O(\tilde{\delta} k^3)$ when the cleaning is based on the area or the condition number (due to the determinant computation) or $O(\tilde{\delta} k^2)$ when the cleaning is based on the longest side.
  • Select Reference Set $Z$: The time complexity is $O(\tau_2 N k N_f)$ due to the k-means clustering algorithm. Here, $\tau_2$ is the number of iterations of k-means.
The space complexity of the RSG is dominated by the reduction step (i.e., k-means) and is given by $O(k(N_f + N))$, where $k$ is the number of objectives, $N_f$ is the size of the filling set, and $N$ is the desired reference set size. Since typically $N_f \gg N$, this simplifies to $O(k N_f)$.

5. Numerical Results

In this section, we show the strength of the novel approach on selected benchmark test problems. We further show—as far as possible—comparisons to related methods. See Appendix A for the definitions of the test problems CONV3, CONV3-4, and CONV4-2F.
First, we demonstrate the working principle of the RSG on selected test problems with different characteristics (number of objectives, choice of the initial set $A_y$, and shape of the Pareto front). For all problems, we show all main steps of the RSG: initial solution, component detection, filling, and selection. For the latter, we show the obtained reference sets for different values of $N$ to illustrate the effect of the new method. In particular, we have used the following six test problems (refer to Figure 8, Figure 9, Figure 10, Figure 11, Figure 12 and Figure 13): ZDT3 ($k = 2$ objectives, disconnected, convex–concave Pareto front), DTLZ7 ($k = 3$, disconnected, convex–concave PF), WFG2 ($k = 3$, disconnected, convex PF, where the latter refers to each connected component), CONV3 ($k = 3$, connected, convex PF), CONV3-4 ($k = 3$, connected, convex PF, where one part of the PF is nearly degenerate), and CONV4-2F ($k = 4$, disconnected, convex PF). Note that the starting sets $A_y$ for ZDT3 and DTLZ7 contain (slight) biases that were removed by the RSG. The applicability to CONV3, CONV3-4, and CONV4-2F shows the universality of the RSG: in contrast to other methods that generate reference sets, the RSG does not, in principle, need any analytical information about the PF. It requires, however, a "suitable" initial solution $A_y$, whose computation is a non-trivial, problem-dependent task. Refer to Section 4.5 for general guidelines. In the captions of Figure 8, Figure 9, Figure 10, Figure 11, Figure 12 and Figure 13, we describe how we obtained this set for each problem.
Figure 14 shows some numerical results of the RSG on DTLZ1 and DTLZ2 with and without the filling step. As can be seen, the candidate solutions are more evenly distributed when using the filling. Without filling, a bias can be observed at the top end of the fronts, similar to the one obtained by uniform sampling along the Pareto set (refer to Figure 3 and Figure 4 and the related discussion).
Table 2 and Table 3 show the running times (in seconds) of the RSG on selected bi-objective and three-objective problems for different filling sizes. For the starting set $A_y$, we used the reference provided by PlatEMO, with a size of 100 for bi-objective problems and a problem-dependent size for three-objective problems. The size of the final reference set was fixed at 100 for bi-objective and 300 for three-objective problems. Naturally, the larger the filling size, the longer the runtime of the RSG. However, reference sizes of 100 and 300 are typical standards for comparisons, meaning that, in general, the RSG only needs to be run once per problem.
Additionally, a comparison between the RSG, PlatEMO [50], pymoo [51], and the filling step is presented in Figure 15 and Figure 16 for the WFG2, ZDT1, ZDT3, DTLZ2, Convex DTLZ2 (CDTLZ2), C2-DTLZ2, and DTLZ7 test problems. Although the reference sets provided by PlatEMO and pymoo are of high quality, they still exhibit some bias in certain bi-objective problems (such as ZDT1 and ZDT3) and especially in three-objective problems such as WFG2 and CDTLZ2. Furthermore, for problems like WFG2 and DTLZ7, the reference sets of PlatEMO and pymoo are limited to a fixed number of points, in contrast to the RSG, which can generate any desired number of points.

6. Conclusions and Future Work

In this paper, we have addressed the problem of obtaining bias-free and complete finite-size approximations of the solution sets (Pareto fronts) of multi-objective optimization problems (MOPs). Such approximations are, in particular, required for the fair usage of distance-based performance indicators, which are frequently used in evolutionary multi-objective optimization (EMO). If the Pareto front approximations are biased or incomplete, the use of these performance indicators can lead to misleading or false information. To address this issue, we have proposed the Reference Set Generator (RSG). This method starts with an initial (probably biased) approximation of the Pareto front. An unbiased approximation is then computed via component detection, filling, and a reduction to the desired size. The RSG can be applied to Pareto fronts of any shape and dimension. We have finally demonstrated the strength of the novel approach on several benchmark problems.
In the future, we intend to use the RSG on the Pareto fronts of all commonly used continuous test problems. Special attention has to be paid to problems with degenerated fronts (i.e., problems where the Pareto front does not locally form an object of dimension $k-1$). In the current approach, we handled degeneracy using the Pareto Tracer (PT) to obtain a filled set and, from there, used the reduction step. For future work, we will explore whether the projection can be modified to fill such sets, which may lead to a more general approach to handling degeneration.
Another important aspect is scalability. In order to apply the method to higher-dimensional problems, less costly variants have to be considered. For example, to avoid performing a grid search, alternative parameter selection methods can be explored. Similarly, for the reduction step, a faster k-means variant or a different subset selection method could be used. A study on the effect of the k-means initialization is also left as future work. Next, we stress that we have designed the RSG for Pareto front approximations. There are, however, other sets of interest in the context of multi-objective optimization that may be worth investigating. These include the entire Pareto set, locally optimal solutions as considered in multi-objective multimodal optimization (e.g., [38,63]), and the families of Pareto sets/fronts in the context of dynamic multi-objective optimization [64].

Author Contributions

All authors contributed equally to this work. All authors have read and agreed to the published version of the manuscript.

Funding

O. Schütze acknowledges support from the CONAHCYT, project CBF2023-2024-1463.

Data Availability Statement

The code and data presented in this study are openly available on GitHub at https://github.com/aerfangel/RSG (accessed on 24 April 2025). See also the website of the third author for more information about the RSG (https://neo.cinvestav.mx/Group/, accessed on 24 April 2025).

Acknowledgments

Angel E. Rodriguez-Fernandez acknowledges support from the SECIHTI to pursue his postdoc fellowship at the CINVESTAV-IPN.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
$A_y$: Current PF approximation
$\ell$: Size of $A_y$, the starting PF approximation
$N$: Desired size of the approximation
$Z$: RSG result; reference set of size $N$
$F$: Filled set
$N_f$: Size of the filling
$C_i$: $i$-th detected component
$C$: Set of all detected components
$L$: Total length of the 2D curve
$\mathcal{T}$: Delaunay triangulation
$\tilde{\delta}$: Number of triangles in $\mathcal{T}$
$T$: Cleaned triangulation
$\delta$: Number of triangles in $T$
$\eta$: Normal vector
$P^{k-1}$: Projected $A_y$
$\rho$: Selected cleaning property (area/volume, largest side, or condition number)
$\rho_i$: Value of property $\rho$ for triangle $i$
$\tau$: Threshold for removing triangles
$a_i$: Area/volume of triangle/simplex $i$
$A$: Total area/volume of the triangulation
$r$: Radius of DBSCAN

Appendix A. Function Definitions

  • CONV3
    $F: [-3, 3]^3 \to \mathbb{R}^3$, $F(x) = (f_1(x), f_2(x), f_3(x))^T$, where
    $f_i(x) = \|x - a_i\|_2^2$, $i = 1, 2, 3$,
    $a_1 = (-1, -1, -1)^T$, $a_2 = (1, 1, 1)^T$, $a_3 = (-1, 1, -1)^T$.
  • CONV3-4
    $F: [-3, 3]^3 \to \mathbb{R}^3$, $F(x) = (f_1(x), f_2(x), f_3(x))^T$, where
    $f_1(x) = (x_1 - a_{1,1})^4 + (x_2 - a_{1,2})^2 + (x_3 - a_{1,3})^2$,
    $f_2(x) = (x_1 - a_{2,1})^2 + (x_2 - a_{2,2})^4 + (x_3 - a_{2,3})^2$,
    $f_3(x) = (x_1 - a_{3,1})^2 + (x_2 - a_{3,2})^2 + (x_3 - a_{3,3})^4$,
    with the same anchor points $a_1$, $a_2$, $a_3$ as for CONV3; here, $a_{i,j}$ denotes the $j$-th component of $a_i$.
  • CONV4-2F
    $F: [-3, 3]^4 \to \mathbb{R}^4$, $F(x) = (f_1(x), f_2(x), f_3(x), f_4(x))^T$, where
    $$f_i(x) = \begin{cases} \|x + \mathbf{1} - a_i\|_2^2 - 3.5\,\sigma & \text{if } x <_p (0, 0, 0, 0)^T \\ \|x - a_i\|_2^2 & \text{otherwise,} \end{cases}$$
    $\phi_1 = (0, \|a_1 - a_2\|_2^2, \|a_1 - a_3\|_2^2, \|a_1 - a_4\|_2^2)^T$,
    $\phi_4 = (\|a_4 - a_1\|_2^2, \|a_4 - a_2\|_2^2, \|a_4 - a_3\|_2^2, 0)^T$,
    $\sigma = \|\phi_4 - \phi_1\|$,
    $a_1 = (1, 0, 0, 0)^T$, $a_2 = (0, 1, 0, 0)^T$, $a_3 = (0, 0, 1, 0)^T$, $a_4 = (0, 0, 0, 1)^T$, $\mathbf{1} = (1, 1, 1, 1)^T$.
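For convenience, a sketch transcribing CONV3 and CONV3-4 in NumPy; the anchor vectors $a_i$ follow the reconstruction given in this appendix:

import numpy as np

A = np.array([[-1.0, -1.0, -1.0],    # a_1
              [ 1.0,  1.0,  1.0],    # a_2
              [-1.0,  1.0, -1.0]])   # a_3

def conv3(x):
    """CONV3: f_i(x) = ||x - a_i||_2^2 on [-3, 3]^3."""
    x = np.asarray(x, dtype=float)
    return np.sum((x - A) ** 2, axis=1)

def conv3_4(x):
    """CONV3-4: as CONV3, but the i-th coordinate term of f_i has exponent 4."""
    x = np.asarray(x, dtype=float)
    diff = x - A                          # row i holds x - a_i
    f = np.sum(diff ** 2, axis=1)
    d = np.diag(diff)                     # (x_i - a_{i,i}) for each objective i
    return f - d ** 2 + d ** 4            # swap the squared i-th term for the 4th power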

References

  1. Hillermeier, C. Nonlinear Multiobjective Optimization: A Generalized Homotopy Approach; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2001; Volume 135.
  2. Coello Coello, C.A.; Goodman, E.; Miettinen, K.; Saxena, D.; Schütze, O.; Thiele, L. Interview: Kalyanmoy Deb Talks about Formation, Development and Challenges of the EMO Community, Important Positions in His Career, and Issues Faced Getting His Works Published. Math. Comput. Appl. 2023, 28, 34.
  3. Van Veldhuizen, D.A. Multiobjective Evolutionary Algorithms: Classifications, Analyses, and New Innovations; Technical Report; Air Force Institute of Technology: Dayton, OH, USA, 1999.
  4. Zitzler, E.; Thiele, L.; Laumanns, M.; Fonseca, C.M.; Grunert da Fonseca, V. Performance assessment of multiobjective optimizers: An analysis and review. IEEE Trans. Evol. Comput. 2003, 7, 117–132.
  5. Coello Coello, C.A.; Cruz Cortés, N. Solving Multiobjective Optimization Problems Using an Artificial Immune System. Genet. Program. Evolvable Mach. 2005, 6, 163–190.
  6. Schütze, O.; Esquivel, X.; Lara, A.; Coello Coello, C.A. Using the averaged Hausdorff distance as a performance measure in evolutionary multi-objective optimization. IEEE Trans. Evol. Comput. 2012, 16, 504–522.
  7. Ishibuchi, H.; Masuda, H.; Nojima, Y. A Study on Performance Evaluation Ability of a Modified Inverted Generational Distance Indicator. In Proceedings of the GECCO'15: Genetic and Evolutionary Computation Conference, Madrid, Spain, 11–15 July 2015; pp. 695–702.
  8. Bogoya, J.M.; Vargas, A.; Cuate, O.; Schütze, O. A (p,q)-Averaged Hausdorff Distance for Arbitrary Measurable Sets. Math. Comput. Appl. 2018, 23, 51.
  9. Deb, K.; Ehrgott, M. On Generalized Dominance Structures for Multi-Objective Optimization. Math. Comput. Appl. 2023, 28, 100.
  10. Deb, K.; Pratap, A.; Agarwal, S.; Meyarivan, T. A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans. Evol. Comput. 2002, 6, 182–197.
  11. Zitzler, E.; Laumanns, M.; Thiele, L. SPEA2: Improving the Strength Pareto Evolutionary Algorithm for Multiobjective Optimization. In Proceedings of the Evolutionary Methods for Design, Optimisation and Control with Application to Industrial Problems (EUROGEN 2001), Athens, Greece, 19–21 September 2001; Giannakoglou, K., Tsahalis, D., Periaux, J., Papailiou, K., Eds.; International Center for Numerical Methods in Engineering (CIMNE): Barcelona, Spain, 2002; pp. 95–100.
  12. Fonseca, C.M.; Fleming, P.J. An overview of evolutionary algorithms in multiobjective optimization. Evol. Comput. 1995, 3, 1–16.
  13. Knowles, J.D.; Corne, D.W. Approximating the nondominated front using the Pareto Archived Evolution Strategy. Evol. Comput. 2000, 8, 149–172.
  14. Zhang, Q.; Li, H. MOEA/D: A Multi-objective Evolutionary Algorithm Based on Decomposition. IEEE Trans. Evol. Comput. 2007, 11, 712–731.
  15. Deb, K.; Jain, H. An evolutionary many-objective optimization algorithm using reference-point-based nondominated sorting approach, part I: Solving problems with box constraints. IEEE Trans. Evol. Comput. 2014, 18, 577–601.
  16. Jain, H.; Deb, K. An Evolutionary Many-Objective Optimization Algorithm Using Reference-Point Based Nondominated Sorting Approach, Part II: Handling Constraints and Extending to an Adaptive Approach. IEEE Trans. Evol. Comput. 2014, 18, 602–622.
  17. Zuiani, F.; Vasile, M. Multi Agent Collaborative Search based on Tchebycheff decomposition. Comput. Optim. Appl. 2013, 56, 189–208.
  18. Moubayed, N.A.; Petrovski, A.; McCall, J. D2MOPSO: MOPSO Based on Decomposition and Dominance with Archiving Using Crowding Distance in Objective and Solution Spaces. Evol. Comput. 2014, 22, 47–77.
  19. Beume, N.; Naujoks, B.; Emmerich, M.T.M. SMS-EMOA: Multiobjective selection based on dominated hypervolume. Eur. J. Oper. Res. 2007, 181, 1653–1669.
  20. Zitzler, E.; Thiele, L.; Bader, J. SPAM: Set Preference Algorithm for Multiobjective Optimization. In Proceedings of the Parallel Problem Solving from Nature PPSN X, Dortmund, Germany, 13–17 September 2008; pp. 847–858.
  21. Wagner, T.; Trautmann, H. Integration of preferences in hypervolume-based multiobjective evolutionary algorithms by means of desirability functions. IEEE Trans. Evol. Comput. 2010, 14, 688–701.
  22. Fonseca, C.M.; Fleming, P.J. Genetic algorithms for multiobjective optimization: Formulation, discussion, and generalization. In Proceedings of the 5th International Conference on Genetic Algorithms, Champaign, IL, USA, 17–21 July 1993; pp. 416–423.
  23. Srinivas, N.; Deb, K. Multiobjective optimization using nondominated sorting in genetic algorithms. Evol. Comput. 1994, 2, 221–248.
  24. Horn, J.; Nafpliotis, N.; Goldberg, D.E. A niched Pareto genetic algorithm for multiobjective optimization. In Proceedings of the First IEEE Conference on Evolutionary Computation, IEEE World Congress on Computational Intelligence, Orlando, FL, USA, 27–29 June 1994; IEEE Press: Piscataway, NJ, USA, 1994; pp. 82–87.
  25. Zitzler, E.; Thiele, L. Multiobjective evolutionary algorithms: A comparative case study and the strength Pareto approach. IEEE Trans. Evol. Comput. 1999, 3, 257–271.
  26. Rudolph, G. Finite Markov Chain results in evolutionary computation: A Tour d'Horizon. Fundam. Inform. 1998, 35, 67–89.
  27. Rudolph, G. On a multi-objective evolutionary algorithm and its convergence to the Pareto set. In Proceedings of the IEEE International Conference on Evolutionary Computation (ICEC 1998), Anchorage, AK, USA, 4–9 May 1998; IEEE Press: Piscataway, NJ, USA, 1998; pp. 511–516.
  28. Rudolph, G.; Agapie, A. Convergence Properties of Some Multi-Objective Evolutionary Algorithms. In Proceedings of the Congress on Evolutionary Computation (CEC 2000), La Jolla, CA, USA, 16–19 July 2000; IEEE Press: Piscataway, NJ, USA, 2000.
  29. Rudolph, G. Evolutionary Search under Partially Ordered Fitness Sets. In Proceedings of the International NAISO Congress on Information Science Innovations (ISI 2001), Dubai, United Arab Emirates, 17–21 March 2001; ICSC Academic Press: Sliedrecht, The Netherlands, 2001; pp. 818–822.
  30. Hanne, T. On the convergence of multiobjective evolutionary algorithms. Eur. J. Oper. Res. 1999, 117, 553–564.
  31. Hanne, T. Global multiobjective optimization with evolutionary algorithms: Selection mechanisms and mutation control. In Proceedings of the First International Conference on Evolutionary Multi-Criterion Optimization, EMO 2001, Zurich, Switzerland, 7–9 March 2001; Springer: Berlin/Heidelberg, Germany, 2001; pp. 197–212.
  32. Hanne, T. A multiobjective evolutionary algorithm for approximating the efficient set. Eur. J. Oper. Res. 2007, 176, 1723–1734.
  33. Hanne, T. A Primal-Dual Multiobjective Evolutionary Algorithm for Approximating the Efficient Set. In Proceedings of the IEEE Congress on Evolutionary Computation (CEC 2007), Singapore, 25–28 September 2007; IEEE Press: Piscataway, NJ, USA, 2007; pp. 3127–3134.
  34. Brockhoff, D.; Tran, T.D.; Hansen, N. Benchmarking numerical multiobjective optimizers revisited. In Proceedings of the 2015 Annual Conference on Genetic and Evolutionary Computation, Madrid, Spain, 11–15 July 2015; pp. 639–646.
  35. Wang, R.; Zhou, Z.; Ishibuchi, H.; Liao, T.; Zhang, T. Localized weighted sum method for many-objective optimization. IEEE Trans. Evol. Comput. 2016, 22, 3–18.
  36. Pang, L.M.; Ishibuchi, H.; Shang, K. Algorithm Configurations of MOEA/D with an Unbounded External Archive. arXiv 2020, arXiv:2007.13352.
  37. Nan, Y.; Shu, T.; Ishibuchi, H. Effects of External Archives on the Performance of Multi-Objective Evolutionary Algorithms on Real-World Problems. In Proceedings of the 2023 IEEE Congress on Evolutionary Computation (CEC), Chicago, IL, USA, 1–5 July 2023; pp. 1–8.
  38. Rodriguez-Fernandez, A.E.; Schäpermeier, L.; Hernández, C.; Kerschke, P.; Trautmann, H.; Schütze, O. Finding ϵ-Locally Optimal Solutions for Multi-Objective Multimodal Optimization. IEEE Trans. Evol. Comput. 2024.
  39. Schütze, O.; Rodriguez-Fernandez, A.E.; Segura, C.; Hernández, C. Finding the Set of Nearly Optimal Solutions of a Multi-Objective Optimization Problem. IEEE Trans. Evol. Comput. 2024, 29, 145–157.
  40. Nan, Y.; Ishibuchi, H.; Pang, L.M. Small Population Size is Enough in Many Cases with External Archives. In Evolutionary Multi-Criterion Optimization, Proceedings of the 13th International Conference, EMO 2025, Canberra, ACT, Australia, 4–7 March 2025; Singh, H., Ray, T., Knowles, J., Li, X., Branke, J., Wang, B., Oyama, A., Eds.; Springer Nature: Singapore, 2025; pp. 99–113.
  41. Loridan, P. ϵ-Solutions in Vector Minimization Problems. J. Optim. Theory Appl. 1984, 42, 265–276.
  42. Laumanns, M.; Thiele, L.; Deb, K.; Zitzler, E. Combining convergence and diversity in evolutionary multiobjective optimization. Evol. Comput. 2002, 10, 263–282.
  43. Schütze, O.; Hernández, C. Archiving Strategies for Evolutionary Multi-Objective Optimization Algorithms; Springer: Berlin/Heidelberg, Germany, 2021.
  44. Knowles, J.D.; Corne, D.W. Properties of an adaptive archiving algorithm for storing nondominated vectors. IEEE Trans. Evol. Comput. 2003, 7, 100–116.
  45. Knowles, J.D.; Corne, D.W. Bounded Pareto archiving: Theory and practice. In Metaheuristics for Multiobjective Optimisation; Springer: Berlin/Heidelberg, Germany, 2004; pp. 39–64.
  46. Knowles, J.D.; Corne, D.W.; Fleischer, M. Bounded archiving using the Lebesgue measure. In Proceedings of the IEEE Congress on Evolutionary Computation, Canberra, ACT, Australia, 8–12 December 2003; IEEE Press: Piscataway, NJ, USA, 2003; pp. 2490–2497.
  47. López-Ibáñez, M.; Knowles, J.D.; Laumanns, M. On Sequential Online Archiving of Objective Vectors. In Evolutionary Multi-Criterion Optimization (EMO 2011), Proceedings of the 6th International Conference, EMO 2011, Ouro Preto, Brazil, 5–8 April 2011; Springer: Berlin/Heidelberg, Germany, 2011; pp. 46–60.
  48. Castellanos, C.I.H.; Schütze, O. A Bounded Archiver for Hausdorff Approximations of the Pareto Front for Multi-Objective Evolutionary Algorithms. Math. Comput. Appl. 2022, 27, 48.
  49. Laumanns, M.; Zenklusen, R. Stochastic convergence of random search methods to fixed size Pareto front approximations. Eur. J. Oper. Res. 2011, 213, 414–421.
  50. Tian, Y.; Cheng, R.; Zhang, X.; Jin, Y. PlatEMO: A MATLAB platform for evolutionary multi-objective optimization. IEEE Comput. Intell. Mag. 2017, 12, 73–87.
  51. Blank, J.; Deb, K. Pymoo: Multi-Objective Optimization in Python. IEEE Access 2020, 8, 89497–89509.
  52. Wang, H.; Rodriguez-Fernandez, A.E.; Uribe, L.; Deutz, A.; Cortés-Piña, O.; Schütze, O. A Newton Method for Hausdorff Approximations of the Pareto Front Within Multi-objective Evolutionary Algorithms. IEEE Trans. Evol. Comput. 2024.
  52. Wang, H.; Rodriguez-Fernandez, A.E.; Uribe, L.; Deutz, A.; Cortés-Piña, O.; Schütze, O. A Newton Method for Hausdorff Approximations of the Pareto Front Within Multi-objective Evolutionary Algorithms. IEEE Trans. Evol. Comput. 2024. [Google Scholar] [CrossRef]
  53. Rudolph, G.; Schütze, O.; Grimme, C.; Domínguez-Medina, C.; Trautmann, H. Optimal averaged Hausdorff archives for bi-objective problems: Theoretical and numerical results. Comput. Optim. Appl. 2016, 64, 589–618. [Google Scholar] [CrossRef]
  54. Dilettoso, E.; Rizzo, S.A.; Salerno, N. A Weakly Pareto Compliant Quality Indicator. Math. Comput. Appl. 2017, 22, 25. [Google Scholar] [CrossRef]
  55. Ester, M.; Kriegel, H.P.; Sander, J.; Xu, X. A Density-Based Algorithm for Discovering Clusters in Large Spatial Databases with Noise. In Proceedings of the KDD, Portland, OR, USA, 2–4 August 1996; Simoudis, S., Han, J., Fayyad, U., Eds.; AAAI Press: Menlo Park, CA, USA, 1996; pp. 226–231. [Google Scholar]
  56. Ben-David, S.; Ackerman, M. Measures of Clustering Quality: A Working Set of Axioms for Clustering. In Proceedings of the Advances in Neural Information Processing Systems, Vancouver, BC, Canada, 8–10 December 2008; Koller, D., Schuurmans, D., Bengio, Y., Bottou, L., Eds.; Curran Associates, Inc.: Red Hook, NY, USA, 2008; Volume 21. [Google Scholar]
  57. Delaunay, B. Sur la sphère vide. Bull. L’AcadeÉmie Des Sci. L’URSS Cl. Des Sci. MathÉmatiques 1934, 1934, 793–800. [Google Scholar]
  58. Smith, N.A.; Tromble, R.W. Sampling Uniformly from the Unit Simplex; Johns Hopkins University: Baltimore, MD, USA, 2004. [Google Scholar]
  59. Uribe, L.; Bogoya, J.M.; Vargas, A.; Lara, A.; Rudolph, G.; Schütze, O. A Set Based Newton Method for the Averaged Hausdorff Distance for Multi-Objective Reference Set Problems. Mathematics 2020, 8, 1822. [Google Scholar] [CrossRef]
  60. Chen, W.; Ishibuchi, H.; Shang, K. Clustering-Based Subset Selection in Evolutionary Multiobjective Optimization. In Proceedings of the 2021 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Melbourne, Australia, 17–20 October 2021; pp. 468–475. [Google Scholar] [CrossRef]
  61. Martín, A.; Schütze, O. Pareto Tracer: A predictor-corrector method for multi-objective optimization problems. Eng. Optim. 2018, 50, 516–536. [Google Scholar] [CrossRef]
  62. Schütze, O.; Cuate, O. The Pareto Tracer for the treatment of degenerated multi-objective optimization problems. Eng. Optim. 2024, 57, 261–286. [Google Scholar] [CrossRef]
  63. Li, W.; Yao, X.; Zhang, T.; Wang, R.; Wang, L. Hierarchy ranking method for multimodal multiobjective optimization with local Pareto fronts. IEEE Trans. Evol. Comput. 2022, 27, 98–110. [Google Scholar] [CrossRef]
  64. Cai, X.; Wu, L.; Zhao, T.; Wu, D.; Zhang, W.; Chen, J. Dynamic adaptive multi-objective optimization algorithm based on type detection. Inf. Sci. 2024, 654, 119867. [Google Scholar] [CrossRef]
Figure 1. Pareto front representations R_N^x of MOP (3) when using N = 50, 500, and 10,000 equally distributed samples along the Pareto set.
Figure 2. Representation R_{10,000}^x of the Pareto front of MOP (3), together with the two hypothetical outcomes A and B.
Figure 3. Pareto front approximations for DTLZ1 resulting from uniformly sampling the Pareto set with N points. Distances to the nearest neighbors in (a): d(y_1, NN_{y_1}) = 0.06818124; d(y_2, NN_{y_2}) = 0.00872971.
Figure 4. Pareto front approximations for DTLZ2 resulting from uniformly sampling the Pareto set with N points. Distances to the nearest neighbors in (a): d(y_1, NN_{y_1}) = 0.17431148; d(y_2, NN_{y_2}) = 0.03026887.
Figure 5. Pareto front approximations for ZDT1 resulting from uniformly sampling the Pareto set with N points. Distances to the nearest neighbors in (a): d(y_1, NN_{y_1}) = 0.10101010; d(y_2, NN_{y_2}) = 0.01144074.
Figure 6. Pareto front approximations for ZDT3 resulting from uniformly sampling the Pareto set with N points. Distances to the nearest neighbors in (a): d(y_1, NN_{y_1}) = 0.06683458; d(y_2, NN_{y_2}) = 0.00448699.
Figure 7. Filling with (a) and without (b) component detection for ZDT3. Note that points not on the Pareto front are included if the component detection step is omitted (b).
Figure 8. The main steps of RSG on ZDT3, as well as three reference sets for N = 1000, 100, and 50. For this problem, we have taken the starting set A_y from PlatEMO and have set N_f = 10,000. In (b), detected connected components are represented by different colors.
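The component detection in (b) splits the front approximation into its connected pieces before filling. The paper's actual procedure is specified earlier in the text; as a rough illustration, a density-based clustering in objective space can serve the same purpose. In the sketch below, the use of DBSCAN and the parameters eps and min_samples are our hypothetical choices:

```python
import numpy as np
from sklearn.cluster import DBSCAN

def detect_components(A_y: np.ndarray, eps: float, min_samples: int = 5):
    """Split a front approximation A_y (an n x k array of objective vectors)
    into connected components via density-based clustering.

    Points labeled -1 (noise) are discarded. eps must be chosen relative to
    the point density of A_y.
    """
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(A_y)
    return [A_y[labels == c] for c in sorted(set(labels)) if c != -1]
```

For ZDT3, a suitable eps returns the disconnected pieces of the front as separate arrays; omitting the step lets the filling produce points off the front, as shown in Figure 7b.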
Figure 9. The main steps of RSG on DTLZ7, as well as three reference sets for N = 1000, 500, and 300. We have taken A_y from pymoo and have set N_f = 100,000. (k,l) show the same results, but (l) uses the same range for all variables, indicating the uniformity of the solution set. In (b), detected connected components are represented by different colors.
Figure 10. The main steps of RSG on WFG2, as well as the reference set for N = 500. We have obtained A_y from the Pareto Tracer (PT), together with a non-dominance test, and have set N_f = 3,500,000. In (b), detected connected components are represented by different colors.
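Several of the starting sets A_y here and below are post-processed with a non-dominance test before the filling step. A minimal sketch of such a filter, assuming minimization in all objectives:

```python
import numpy as np

def nondominated(F: np.ndarray) -> np.ndarray:
    """Return the rows of F (an n x k array of objective vectors, to be
    minimized) that are not Pareto-dominated by any other row."""
    keep = np.ones(F.shape[0], dtype=bool)
    for i in range(F.shape[0]):
        # Row j dominates row i if it is <= in every objective and
        # strictly < in at least one objective.
        if (np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)).any():
            keep[i] = False
    return F[keep]
```

The quadratic cost of this pairwise test is unproblematic in this setting, since reference sets are constructed once, offline.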
Figure 11. The main steps of RSG on CONV3, as well as reference sets for N = 1000, 500, and 300. We have obtained A_y via uniform sampling along the Pareto set (together with a non-dominance test) and have set N_f = 10,000.
Figure 12. The main steps of RSG on CONV3-4, as well as reference sets for N = 1000, 500, and 300. We have obtained A_y via uniform sampling along the Pareto set (together with a non-dominance test) and have set N_f = 10,000.
Figure 13. The main steps of RSG on CONV4-2F, as well as reference sets for N = 1000, 500, and 300. We have obtained A_y by merging the final populations of 30 independent runs of NSGA-III (population size 500, 400 generations), together with a non-dominance test, and have set N_f = 100,000. In (b), detected connected components are represented by different colors.
Figure 14. Effect of the filling step, shown on DTLZ2 (above) and DTLZ1 (below). The left-hand reference sets were obtained using RSG with a starting set A_y of size 300, which was filled with N_f = 1,000,000 points and then reduced to a reference set of N = 300 points using k-means. The right-hand reference sets were obtained without filling, by uniformly sampling 1,000,000 points on the Pareto set and then reducing the obtained front to N = 300 points using k-means. In both cases without filling (right-hand side), a bias can be observed on the upper part of the Pareto front: around f_3 = 1 for DTLZ2 and around f_3 = 0.5 for DTLZ1.
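Per the caption, the reduction from the filled set to N points is performed with k-means. Whether the final reference points are the cluster centroids themselves or their nearest members of the filled set is fixed in the method description earlier in the paper; the sketch below, which snaps each centroid to its closest member so that the result lies on the computed front, should therefore be read as one plausible variant:

```python
import numpy as np
from sklearn.cluster import KMeans

def reduce_to_reference_set(F: np.ndarray, N: int, seed: int = 0) -> np.ndarray:
    """Reduce a large (filled) front approximation F to at most N reference
    points: cluster F with k-means and keep, for each centroid, the member
    of F closest to it (duplicate picks, if any, are removed)."""
    km = KMeans(n_clusters=N, n_init=10, random_state=seed).fit(F)
    idx = [int(np.argmin(np.linalg.norm(F - c, axis=1)))
           for c in km.cluster_centers_]
    return F[np.unique(idx)]
```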
Figure 15. The first part of the comparison between RSG (first column), PlatEMO (second column), pymoo (third column), and the filling step F of RSG (fourth column). A reference set of size 100 was used for bi-objective problems and of size 300 for three-objective problems. For bi-objective problems, all methods produced exactly 100 points. For three-objective problems, however, PlatEMO and pymoo did not always yield exactly 300 points (for example, PlatEMO for WFG2 and DTLZ7, and pymoo for WFG2, DTLZ7, and IDTLZ2).
Figure 16. The second part of the comparison between RSG (first column), PlatEMO (second column), pymoo (third column), and the filling step F of RSG (fourth column). A reference set of size 100 was used for bi-objective problems and of size 300 for three-objective problems. For bi-objective problems, all methods produced exactly 100 points. For three-objective problems, however, PlatEMO and pymoo did not always yield exactly 300 points (for example, PlatEMO for WFG2 and DTLZ7, and pymoo for WFG2, DTLZ7, and IDTLZ2). Figures marked as N/A indicate that no reference set was provided.
Table 1. Indicator values I(O, R) for different indicators, the outcomes O ∈ {A, B}, and the representations R ∈ {R_100^x, R_100^y, R_{10,000}^x, R_{10,000}^y} of the Pareto front. Better indicator values are displayed in bold font.

I(O, R)               GD_1     GD_2     IGD_1    IGD_2    IGD+     Δ_1      Δ_2      Δ
I(A, R_100^x)         0.5118   0.7384   0.9084   0.9873   0.6423   0.9084   0.9873   1.3671
I(B, R_100^x)         0.0698   0.1002   0.4522   1.0744   0.3198   0.4522   1.0744   8.2024
I(A, R_100^y)         0.0684   0.0684   0.6835   0.7883   0.4833   0.6835   0.7883   1.2987
I(B, R_100^y)         0.0684   0.0684   2.5974   3.6765   1.8367   2.5974   3.6765   8.1341
I(A, R_{10,000}^x)    0.0028   0.0032   0.8968   0.9776   0.6341   0.8968   0.9776   1.3671
I(B, R_{10,000}^x)    0.0008   0.0010   0.4117   0.8792   0.2911   0.4117   0.8792   8.2024
I(A, R_{10,000}^y)    0.0007   0.0007   0.6835   0.7893   0.4833   0.6835   0.7893   1.3664
I(B, R_{10,000}^y)    0.0007   0.0007   2.5974   3.6767   1.8367   2.5974   3.6767   8.2018
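For reference, the distance-based indicators reported in Table 1 follow the definitions that are standard in the EMO literature: GD_p is the p-mean of the distances from the outcome set O to the reference set R, IGD_p reverses the two roles, and the averaged Hausdorff distance Δ_p = max(GD_p, IGD_p). This is consistent with the table, where every Δ_p entry coincides with the corresponding IGD_p entry. A minimal Python sketch (the exact normalization used here, and the definition of the unsubscripted Δ column, may differ):

```python
import numpy as np
from scipy.spatial.distance import cdist

def gd_p(O: np.ndarray, R: np.ndarray, p: float = 1.0) -> float:
    """Generational distance GD_p: p-mean of the distances from each
    outcome point in O to its nearest reference point in R."""
    d = cdist(O, R).min(axis=1)
    return float(np.mean(d ** p) ** (1.0 / p))

def igd_p(O: np.ndarray, R: np.ndarray, p: float = 1.0) -> float:
    """Inverted generational distance IGD_p: p-mean of the distances from
    each reference point in R to its nearest outcome point in O."""
    d = cdist(R, O).min(axis=1)
    return float(np.mean(d ** p) ** (1.0 / p))

def delta_p(O: np.ndarray, R: np.ndarray, p: float = 1.0) -> float:
    """Averaged Hausdorff distance Delta_p = max(GD_p, IGD_p)."""
    return max(gd_p(O, R, p), igd_p(O, R, p))
```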
Table 2. RSG running time (in seconds) for bi-objective problems, varying the size N_f of the filling set. The size of A_y was fixed at 100 for all problems, and the size of the final reference set was set to N = 100.

        N_f =   1000     5000     10,000   50,000
ZDT1            0.050    0.127    0.434    11.548
ZDT3            0.030    0.097    0.266    6.272
ZDT4            0.016    0.086    0.289    6.639
ZDT6            0.015    0.093    0.376    9.296
Table 3. RSG running time (in seconds) for problems with k = 3 objectives, varying the size N_f of the filling set. The size of A_y was set to 300, 1024, 1134, and 990 for DTLZ2, DTLZ7, C2_DTLZ2, and CDTLZ2, respectively. The size of the final reference set was fixed at N = 300.

            N_f =   10,000   50,000   100,000   500,000   1,000,000
DTLZ2               0.271    3.164    8.059     235.510   1049.119
DTLZ7               1.305    3.359    7.962     230.973   981.419
C2_DTLZ2            1.710    3.698    8.510     216.197   1025.821
CDTLZ2              0.329    3.595    9.452     247.404   1013.905