Article

A (p,q)-Averaged Hausdorff Distance for Arbitrary Measurable Sets

Johan M. Bogoya 1, Andrés Vargas 1, Oliver Cuate 2 and Oliver Schütze 2
1 Department of Mathematics, Pontificia Universidad Javeriana, 110231 Bogotá, Colombia
2 Computer Science Department, Cinvestav-IPN, 07360 Mexico City, Mexico
* Author to whom correspondence should be addressed.
Math. Comput. Appl. 2018, 23(3), 51; https://doi.org/10.3390/mca23030051
Submission received: 22 July 2018 / Revised: 8 September 2018 / Accepted: 16 September 2018 / Published: 18 September 2018
(This article belongs to the Special Issue Numerical and Evolutionary Optimization)

Abstract

The Hausdorff distance is a widely used tool to measure the distance between different sets. For the approximation of certain objects via stochastic search algorithms this distance is, however, of limited use as it punishes single outliers. As a remedy in the context of evolutionary multi-objective optimization (EMO), the averaged Hausdorff distance Δ_p has been proposed, which is better suited as an indicator for the performance assessment of EMO algorithms since such methods tend to generate outliers. Later on, the two-parameter indicator Δ_{p,q} was proposed for finite sets as an extension of Δ_p which also averages distances but enjoys additional desirable metric properties. In this paper, we extend Δ_{p,q} to a continuous function between general bounded subsets of finite measure inside a metric measure space. In particular, this extension applies to bounded subsets of ℝ^k endowed with the Euclidean metric, which is the natural context for EMO applications. We show that our extension preserves the nice metric properties of the finite case, and finally provide some useful numerical examples that arise in EMO.

1. Introduction

The Hausdorff distance d_H (see [1]) is an established and widely used tool to measure the proximity of different sets. It is, among others, used in several research fields such as image matching (e.g., [2,3,4]), the approximation of manifolds in dynamical systems ([5,6,7]), fractal geometry ([8]), and the context of convergence analysis in multi-objective optimization ([9,10,11,12,13]). One major reason for the use of d_H is that it defines a metric on the set of all nonempty bounded closed sets in a metric space. However, one characteristic of the Hausdorff distance is that it heavily punishes single outliers, which is a severe drawback in many cases. For instance, it is known that stochastic search algorithms are generally quite effective in the (global) approximation of certain objects; however, it is also known that these approximations may come with a few outliers (e.g., [14]). For those cases, the approximation quality is not reflected by the value of the Hausdorff distance.
As a remedy, in the context of evolutionary multi-objective optimization, Schütze et al. [14] made a first effort to propose the averaged Hausdorff distance Δ_p. As opposed to d_H, this indicator averages the distances involved in the proximity measure of the given sets and is hence much more suitable in the context of stochastic search, as single (or few) outliers in a candidate solution set are no longer punished as hard. On the other hand, compared to d_H, Δ_p has two shortcomings: (i) it only defines an inframetric instead of a metric; and (ii) it is only defined for finite approximations of the solution set. In the particular context of continuous multi-objective optimization, it is known that the solution set, the so-called Pareto set, and its image, the Pareto front, form manifolds of certain dimensions. Hence, it is natural that the candidate solution set (i.e., the set computed by a given solver) is not restricted to finitely many points, but may also form a continuous set. This is in fact already the case for set-based optimization techniques such as the cell-to-cell mappings ([15,16,17]) and the subdivision techniques ([10,18,19]). In the context of evolutionary multi-objective optimization, typically a finite set of candidate solutions (a population) is generated ([20,21,22,23]). However, also here it is a rather natural approach to construct a continuous set out of the final population using, e.g., interpolation techniques (see [24,25]).
In [26], a modification of the Δ_p indicator called the (p,q)-averaged Hausdorff distance Δ_{p,q} has been introduced by the first two authors. This indicator generalizes the averaged Hausdorff distance Δ_p, is strongly related to the Hausdorff distance d_H, and admits an expression in terms of the matrix (p,q)-norm ∥·∥_{p,q}. Moreover, when 1 ≤ p, q < ∞ it is a proper metric, while for the remaining cases with |p|, |q| ≥ 1 it is still an inframetric. In addition, when finding optimal archives the parameters p and q play crucial geometrical roles: in the context of EMO, p handles the closeness to the Pareto front and q handles the dispersion. The indicator, however, is restricted to finite sets.
In this work, we propose a more general version of the Δ_{p,q} indicator that can be applied to general measurable subsets and that preserves the useful advantages of the finite case. Consideration is also given to the Pareto compliance of an intermediate indicator GD_{p,q} that is employed to define Δ_{p,q}. The indicator is hence the first one that can be used in the context of multi-objective optimization with continuous approximations of the Pareto set/front as described above. Numerical results on two well-known evolutionary algorithms will show the benefit of such continuous archives compared to the discrete ones that have been used so far for lack of a suitable performance indicator.
This paper is organized as follows: In Section 2, we briefly state the background required for the understanding of this work. In Section 3, we introduce the extended versions of the GD_{p,q} and Δ_{p,q} indicators, discussing their properties and providing some sufficient criteria for the Pareto compliance of the first one. In Section 4, we present some numerical results that show the applicability and the benefit of the novel indicator, in particular in the context of multi-objective optimization. Finally, we draw our conclusions and present possible paths for future research in Section 5.

2. Preliminaries

In this section, we briefly present the required background on integral power means and multi-objective optimization that will be needed for our purposes. Throughout the document we employ the notation ℝ^× := ℝ \ {0} and ℝ̄ := [−∞, ∞] for simplicity.

2.1. Integral Power Means

The theory can be presented in the general setting of metric measure spaces, briefly outlined below, but for simplicity the reader may assume that the specific context of our interest is that of well-behaved bounded subsets of the n-dimensional Euclidean space R n endowed with its standard Lebesgue measure which gives rise to the conventional notion of volume (when it is defined). For a quick review of measure spaces see [27] (Section 1.4), and for a simple explanation of the Lebesgue measure see [28] (Chapter 2). Integral means appear already in [29] (Chapter 6). A comprehensive account on the properties of means can be found in [30].
For greater generality, we recall that (Σ, d, μ) is called a metric measure space if (Σ, d) is a metric space with a measure μ defined on its Borel σ-algebra M(Σ), i.e., the smallest σ-algebra containing all the open subsets of the metric topology of (Σ, d). A measure μ is said to be finite if μ(Σ) < ∞, and in this case Σ is called a finite-measure space.
Now, given p ∈ ℝ^× and any measurable function f : X ⊆ Σ → [0, ∞) over a finite-measure set X, we can define the p-average of f over X (or the p-power mean of f over X) by
\[ M_p^{x\in X}(f(x)) := \left( \frac{1}{\mu(X)} \int_X f(x)^p \, d\mu \right)^{1/p}. \tag{1} \]
Henceforth, the integral on the right-hand side will be abbreviated by the barred (averaged) integral sign,
\[ \fint_X f^p \, d\mu := \frac{1}{\mu(X)} \int_X f(x)^p \, d\mu. \]
If necessary, when the measure μ is clear from the context, the element dμ will be written as dx to emphasize the variable of integration x. In addition, the notation M_p(f(X)) := M_p^{x∈X}(f(x)) and |X| := μ(X) will also be employed to simplify expressions whenever appropriate.
Let us note that for p ≥ 1 we have M_p(f(X)) = μ(X)^{−1/p} ∥f∥_p, where ∥·∥_p denotes the p-norm of the Lebesgue space L^p(X, μ). Furthermore, it is not difficult to show, with the aid of L'Hôpital's rule, that the integral power mean M_p can be extended to the cases p = ±∞. Indeed, if f ≢ 0, denoting the essential supremum and essential infimum of f on X by ∥f∥_∞ := ess sup_{x∈X} f(x) and ∥1/f∥_∞^{−1} := ess inf_{x∈X} f(x), respectively, it follows that
\[ M_\infty^{x\in X}(f(x)) := \lim_{p\to\infty} \left( \fint_X f^p \, d\mu \right)^{1/p} = \|f\|_\infty \lim_{p\to\infty} \left( \fint_X \left(\frac{f(x)}{\|f\|_\infty}\right)^{p} d\mu \right)^{1/p} = \|f\|_\infty, \]
because the last integrand is smaller than 1 and the limit is 1. Similarly,
\[ M_{-\infty}^{x\in X}(f(x)) := \lim_{p\to-\infty} \left( \fint_X f^p \, d\mu \right)^{1/p} = \lim_{p\to\infty} \left( \fint_X \left(\frac{1}{f}\right)^{p} d\mu \right)^{-1/p} = \frac{1}{\|1/f\|_\infty}. \]
We recall that ∥·∥_∞ is precisely the norm of the Lebesgue space L^∞(X, μ). We can also define M_p when p = 0 as follows:
\[ M_0^{x\in X}(f(x)) := \exp\left( \fint_X \log f \, d\mu \right). \]
It can be considered the integral generalization of the notion of geometric mean for finitely many elements.
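To make the integral power means above concrete, here is a minimal Python sketch (not part of the original paper) that estimates M_p by Monte Carlo sampling, with X represented by uniform samples; the function and variable names are illustrative assumptions of this sketch.

```python
import numpy as np

def power_mean(values, p):
    """p-power mean of non-negative sample values; handles p = 0 and p = +/-inf."""
    values = np.asarray(values, dtype=float)
    if p == 0:                        # geometric mean (the M_0 case above)
        return np.exp(np.mean(np.log(values)))
    if p == np.inf:                   # essential supremum limit
        return values.max()
    if p == -np.inf:                  # essential infimum limit
        return values.min()
    return np.mean(values ** p) ** (1.0 / p)

# Usage: M_p of f(x) = 1 + x over X = [0, 1]; uniform samples stand in for the
# normalized integral (1/mu(X)) * int_X.
rng = np.random.default_rng(0)
xs = rng.uniform(0.0, 1.0, size=100_000)
for p in (-np.inf, -1, 0, 1, 2, np.inf):
    print(p, power_mean(1.0 + xs, p))
```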

2.2. Multi-objective Optimization

As an application of the ( p , q ) -distances, we will consider in this work continuous multi-objective optimization problems (MOPs). Problems of this kind can be expressed mathematically as
\[ \min \{ F(x) : x \in Q \subseteq \mathbb{R}^n \}, \tag{2} \]
where the function F is defined as a vector of objective functions
\[ F : Q \subseteq \mathbb{R}^n \to \mathbb{R}^k, \qquad F(x) := (f_1(x), \ldots, f_k(x)). \]
We will assume here that all objectives f_i : Q → ℝ, for i ∈ {1, …, k}, are continuous. The optimality of MOPs is typically defined via the concept of dominance (see [31]).
Definition 1.
In the context of MOPs the following are standard notions:
(i)
Let v = (v_1, …, v_k) and w = (w_1, …, w_k) ∈ ℝ^k. Then the vector v is less than w (denoted v <_p w) if v_i < w_i for all i ∈ {1, …, k}. The relation ≤_p is defined analogously.
(ii)
A vector y ∈ Q is dominated by a vector x ∈ Q (in short: x ≺ y) with respect to (2) if F(x) ≤_p F(y) and F(x) ≠ F(y), i.e., there exists a j ∈ {1, …, k} such that f_j(x) < f_j(y) (a small code sketch of this relation is given right after this definition).
(iii)
A point x ∈ Q is called Pareto optimal or a Pareto point if there is no y ∈ Q which dominates x.
(iv)
The set of all Pareto optimal solutions is called the Pareto set, denoted by P_Q.
(v)
The image of the Pareto set, F(P_Q), is called the Pareto front.
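As a concrete illustration of Definition 1, the following minimal Python sketch (my own, not taken from the paper) implements the dominance relation and a naive Pareto filter for a finite set of objective vectors; the names dominates and pareto_filter are assumptions of this sketch.

```python
import numpy as np

def dominates(Fx, Fy):
    """True if objective vector Fx dominates Fy (minimization): Fx <=_p Fy and Fx != Fy."""
    Fx, Fy = np.asarray(Fx, dtype=float), np.asarray(Fy, dtype=float)
    return bool(np.all(Fx <= Fy) and np.any(Fx < Fy))

def pareto_filter(F_values):
    """Indices of the non-dominated rows of F_values (a naive O(m^2) filter)."""
    F_values = np.asarray(F_values, dtype=float)
    return [i for i, Fi in enumerate(F_values)
            if not any(dominates(Fj, Fi) for j, Fj in enumerate(F_values) if j != i)]

# Usage: (1, 2) and (2, 1) are mutually non-dominated; (2, 2) is dominated by both.
print(pareto_filter([(1, 2), (2, 1), (2, 2)]))   # -> [0, 1]
```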
It is known that under certain mild smoothness assumptions the Pareto set and the Pareto front define (k − 1)-dimensional objects [32]. Hence, for set-oriented solvers such as cell mapping, subdivision techniques, and evolutionary algorithms, the question naturally arises of how to measure the approximation quality of the obtained solution set with respect to the Pareto set/front. To accomplish this task, several performance indicators have been proposed in the specialized literature; examples include the hypervolume indicator [21,33], the R2 indicator [34], the IGD+ [35], and the DOA [36]. Moreover, in the context of multi-criteria decision-making processes, the properties of some distance measures, such as the Hamming, Euclidean, and Hausdorff metrics, are investigated in [37,38]. In this work, we will focus on a new variant of the Hausdorff distance [6]. For the convenience of the reader, we recall the most important definitions in the following.
Definition 2.
Let u, v ∈ ℝ^n, let A, B ⊂ ℝ^n be arbitrary sets, and let ∥·∥ be a vector norm. The Hausdorff distance d_H(·,·) is defined as follows:
(i)
dist(u, A) := inf{ ∥u − v∥ : v ∈ A },
(ii)
dist(B, A) := sup{ dist(u, A) : u ∈ B },
(iii)
d_H(A, B) := max{ dist(A, B), dist(B, A) }.
The Hausdorff distance d_H is widely used in many fields. It is, however, of limited practical use when measuring the distance of the outcome of a stochastic search method such as an evolutionary algorithm to the Pareto set/front. The main reason for this is that evolutionary algorithms may generate outliers that are punished too strongly by d_H. As a remedy, the averaged Hausdorff distance has been proposed in [14]. In this study the vector norm is the 2-norm, i.e., the Euclidean norm.
Definition 3
(Schütze et al. [14]). For p ∈ ℕ and finite sets A, B ⊂ ℝ^n, the value
\[ \Delta_p(A,B) := \max\{ \mathrm{GD}_p(A,B),\; \mathrm{IGD}_p(A,B) \}, \]
where
\[ \mathrm{GD}_p(A,B) := \left( \frac{1}{|A|} \sum_{a\in A} d(a,B)^p \right)^{1/p} \quad\text{and}\quad \mathrm{IGD}_p(A,B) := \left( \frac{1}{|B|} \sum_{b\in B} d(b,A)^p \right)^{1/p}, \]
is called the averaged Hausdorff distance between A and B.
The indicator Δ_p can be viewed as a composition of slight variations of the Generational Distance (GD, see [39]) and the Inverted Generational Distance (IGD, see [40]). It holds that Δ_∞ = d_H, but for finite values of p the indicator Δ_p averages the distances considered in d_H. More precisely, the larger the value of p, the harder single outliers are punished by Δ_p. Hence, as opposed to d_H, the distance Δ_p does not punish single (or few) outliers in a candidate set too hard. For more discussion about Δ_p and its relation to other indicators we refer to [14,41].
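As an illustration of Definition 3, the following NumPy sketch computes GD_p, IGD_p, and Δ_p for finite point sets and recovers d_H as the limiting case p → ∞; the function names and the toy data are my own and not taken from the paper.

```python
import numpy as np

def pairwise_dist(A, B):
    """Euclidean distances d(a, b) for all a in A, b in B; result has shape (|A|, |B|)."""
    return np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)

def gd_p(A, B, p):
    d = pairwise_dist(A, B).min(axis=1)          # d(a, B) for every a in A
    return d.max() if np.isinf(p) else np.mean(d ** p) ** (1.0 / p)

def delta_p(A, B, p):
    return max(gd_p(A, B, p), gd_p(B, A, p))     # IGD_p(A, B) equals GD_p(B, A)

def hausdorff(A, B):
    return delta_p(A, B, np.inf)                 # Delta_inf coincides with d_H

# Usage: the single outlier in B dominates d_H but is punished far less by Delta_1.
A = np.array([[0.0, 0.0], [1.0, 0.0]])
B = np.array([[0.0, 0.1], [1.0, 0.1], [0.5, 5.0]])
print(delta_p(A, B, 1), hausdorff(A, B))         # roughly 1.7 versus roughly 5.0
```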
Definition 4
(Vargas–Bogoya [26]). For p, q ∈ ℝ^× and finite sets A, B ⊂ ℝ^n, the value
\[ \Delta_{p,q}(A,B) := \max\{ \mathrm{GD}_{p,q}(A, B\setminus A),\; \mathrm{GD}_{p,q}(B, A\setminus B) \}, \]
where
\[ \mathrm{GD}_{p,q}(A,B) := \left( \frac{1}{|A|} \sum_{a\in A} \left( \frac{1}{|B|} \sum_{b\in B} d(a,b)^q \right)^{p/q} \right)^{1/p}, \]
is called the (p,q)-averaged Hausdorff distance between A and B.
For finite sets, the indicator Δ_{p,q} introduced in [26] was also defined for p or q = 0, and even for p or q = ±∞. It is a generalization of Δ_p in the sense that between disjoint subsets we have
\[ \lim_{q\to-\infty} \Delta_{p,q} = \Delta_p. \]
The parameters p and q can be modified independently in order to produce archives with a customized spread (depending on q) located at a customized closeness (depending on p) to the Pareto front.
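The finite-set indicator of Definition 4 can be sketched in a few lines of Python; the helpers below are illustrative only, and the usage hints at the limit displayed above by comparing a moderately and a strongly negative q.

```python
import numpy as np

def power_mean(values, t):
    values = np.asarray(values, dtype=float)
    if t == 0:
        return np.exp(np.mean(np.log(values)))
    if np.isinf(t):
        return values.max() if t > 0 else values.min()
    return np.mean(values ** t) ** (1.0 / t)

def gd_pq(A, B, p, q):
    """GD_{p,q}(A, B): p-mean over a in A of the q-mean over b in B of d(a, b)."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return power_mean([power_mean(row, q) for row in d], p)

def setdiff_rows(A, B):
    """Rows of A not occurring in B (exact matches only); assumes the result is non-empty."""
    b_rows = {tuple(b) for b in B}
    return np.array([a for a in A if tuple(a) not in b_rows])

def delta_pq(A, B, p, q):
    return max(gd_pq(A, setdiff_rows(B, A), p, q),
               gd_pq(B, setdiff_rows(A, B), p, q))

# Usage: for a strongly negative q the value approaches Delta_p (cf. the limit above).
A = np.array([[0.0, 0.0], [1.0, 0.0]])
B = np.array([[0.0, 0.2], [1.0, 0.2], [0.5, 2.0]])
print(delta_pq(A, B, p=2, q=-1), delta_pq(A, B, p=2, q=-50))
```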
Finally, let us recall that one characteristic of a performance indicator is Pareto compliance: for two subsets A and B we say that A ≺ B if for every b ∈ B there exists an element a ∈ A such that a ≺ b. If this does not hold, we write A ⊀ B. We say that a performance indicator I is Pareto compliant if for any two sets A and B with A ≺ B and B ⊀ A it follows that I(A) ≤ I(B). We refer to [42] for details.

3. The (p,q)-Averaged Hausdorff Distance for Measurable Sets

3.1. Properties of Integral Power Means

We start by summarizing some fundamental properties of integral power means that will be needed for our subsequent calculations.
Theorem 1.
Let X and Y denote finite-measure spaces, f, g : X → [0, ∞) non-negative measurable functions, and d : X × Y → [0, ∞) a measurable function with respect to the product measure on X × Y. The integral power mean M satisfies the following properties:
(i)
If p ∈ ℝ̄ and k ∈ [0, ∞), then M_p^{x∈X}(k) = k and M_p^{x∈X}(k f(x)) = k M_p^{x∈X}(f(x)).
(ii)
For any p ∈ ℝ̄, we have M_p^{x∈X}(M_p^{y∈Y}(d(x,y))) = M_p^{y∈Y}(M_p^{x∈X}(d(x,y))).
(iii)
If 1 ≤ p ≤ ∞, then M_p^{x∈X}(f(x) + g(x)) ≤ M_p^{x∈X}(f(x)) + M_p^{x∈X}(g(x)).
(iv)
If p ∈ ℝ̄ and f(x) ≤ g(x) for all x ∈ X, then M_p^{x∈X}(f(x)) ≤ M_p^{x∈X}(g(x)).
(v)
For p, q ∈ ℝ̄ with 0 < p ≤ q, we have that M_p^{x∈X}(f(x)) ≤ M_q^{x∈X}(f(x)).
Proof. 
The proofs of (i) and (ii) are straightforward. To prove (iii) we only need the Minkowski inequality,
\[ M_p^{x\in X}(f(x)+g(x)) = \mu(X)^{-1/p}\, \|f+g\|_p \le \mu(X)^{-1/p}\big( \|f\|_p + \|g\|_p \big) = M_p^{x\in X}(f(x)) + M_p^{x\in X}(g(x)). \]
The proof of (iv) is also straightforward from the definitions, and a simple proof of (v) can be given as a particular case of [43] (Theorem 3), which we recall here for completeness. For a positive real v, consider the function
\[ \omega_r(v) := \int_1^v t^{\,r-1}\, dt = \begin{cases} \dfrac{v^r - 1}{r}, & r \ne 0; \\[4pt] \log v, & r = 0. \end{cases} \]
Since the function t^u, with t a positive constant and u ≥ 0, is increasing with respect to u, we easily get ω_r(v) ≤ ω_s(v) for 0 ≤ r ≤ s and every v > 0. Consider the following linear integral operator
\[ J[f] := \frac{1}{\mu(X)} \int_X f \, d\mu = \fint_X f \, d\mu. \]
Assume first that p > 0; then for any x ∈ X,
\[ J\!\left[\omega_p\!\left(\frac{f(x)}{M_p(f(X))}\right)\right] = \frac{1}{p}\left( J\!\left[\left(\frac{f(x)}{M_p(f(X))}\right)^{p}\right] - J[1] \right) = \frac{1}{p}\left( \frac{1}{(M_p(f(X)))^p} \fint_X f^p \, d\mu - \fint_X d\mu \right) = 0. \]
Similarly, if p = 0 we have
\[ J\!\left[\omega_0\!\left(\frac{f(x)}{M_0(f(X))}\right)\right] = J\big[\log(f(x)) - \log(M_0(f(X)))\big] = \fint_X \log f \, d\mu - \log(M_0(f(X)))\, J[1] = 0. \]
Suppose now that 0 ≤ p < q. Since ω_p(·) ≤ ω_q(·) implies J(ω_p(·)) ≤ J(ω_q(·)), we obtain
\[ 0 \le J\!\left[\omega_q\!\left(\frac{f(x)}{M_p(f(X))}\right)\right] = \frac{1}{q}\left( \left(\frac{M_q(f(X))}{M_p(f(X))}\right)^{q} - 1 \right), \]
from which it follows that M_p^{x∈X}(f(x)) ≤ M_q^{x∈X}(f(x)). □
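As a quick numerical sanity check of property (v) (an illustration, not a proof), one can verify the monotonicity of the power means on random samples; the sampling setup below is my own choice.

```python
import numpy as np

rng = np.random.default_rng(1)
samples = rng.uniform(0.5, 3.0, size=200_000)      # values of f on uniform samples of X

def M(values, p):
    return np.mean(values ** p) ** (1.0 / p)

ps = [0.5, 1.0, 2.0, 4.0, 8.0]
means = [M(samples, p) for p in ps]
print(means)                                        # a non-decreasing sequence
assert all(m1 <= m2 + 1e-12 for m1, m2 in zip(means, means[1:]))
```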

3.2. Definition of Δ_{p,q} for Measurable Sets

With the aid of Theorem 1 we generalize the results of [26] (Section 3). For easy reference, we provide here slightly abbreviated but complete proofs. Given a metric measure space (Σ, d, μ), let M(Σ) denote the σ-algebra of all measurable subsets of Σ and let M_{<∞}(Σ) refer to those elements of M(Σ) having finite measure. As should be expected from the context, any set relation obtained from calculations involving an underlying measure μ should be understood to hold in a measure-theoretic sense, i.e., almost everywhere (a.e.). For example, for X, Y ∈ M_{<∞}(Σ), a result saying X = Y, or X ⊆ Y, actually holds almost everywhere, which means that μ(X △ Y) = 0, or μ(X \ Y) = 0, respectively. Thus, it is convenient in this setting to identify a set X ∈ M_{<∞}(Σ) with the whole equivalence class [X] := {Y : X = Y a.e.}, and to think of these classes as the elements of M_{<∞}(Σ), removing the need for the a.e. qualifier. Also, to avoid an overload of parentheses in the forthcoming expressions, the distance d(x, y) between x, y ∈ Σ will be abbreviated by d_{x,y}.
Definition 5.
For p, q ∈ ℝ^×, the generational (p,q)-distance GD_{p,q}(X, Y) between two sets X, Y ∈ M_{<∞}(Σ) is given by
\[ \mathrm{GD}_{p,q}(X,Y) := M_p^{x\in X}\big( M_q^{y\in Y}(d_{x,y}) \big) = \left( \fint_X \left( \fint_Y d_{x,y}^{\,q}\, dy \right)^{p/q} dx \right)^{1/p}, \]
where the sets X and Y are implicitly assumed to be disjoint when p < 0 or q < 0 .
As in the finite case, the definition of GD_{p,q} can easily be extended to p, q ∈ ℝ̄, but it still has two undesirable drawbacks: first, GD_{p,q}(X, X) can be different from zero; and second, in general the values of GD_{p,q}(X, Y) and GD_{p,q}(Y, X) can differ. Thus this indicator does not define a metric. To obtain a proper metric we introduce the following modification.
Definition 6.
The (p,q)-averaged Hausdorff distance is the map Δ_{p,q} : M_{<∞}(Σ) × M_{<∞}(Σ) → [0, ∞) given by
\[ \Delta_{p,q}(X,Y) := \max\big\{ \mathrm{GD}_{p,q}(X, Y\setminus X),\; \mathrm{GD}_{p,q}(Y, X\setminus Y) \big\}. \]
Remark 1.
For finite subsets X, Y ⊂ ℝ^n endowed with the standard counting measure μ, the previous notions of GD_{p,q} and Δ_{p,q} coincide with the ones in Definition 4.
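The following Monte Carlo sketch (my own, with illustrative names) approximates Definitions 5 and 6 for sets represented by uniform samples; the disjointness required for negative p and q is assumed, so the caller simply passes samples of the set differences.

```python
import numpy as np

def power_mean(values, t):
    values = np.asarray(values, dtype=float)
    if t == 0:
        return np.exp(np.mean(np.log(values)))
    return np.mean(values ** t) ** (1.0 / t)

def gd_pq_mc(xs, ys, p, q):
    """Estimate GD_{p,q}(X, Y) from samples xs ~ Unif(X) and ys ~ Unif(Y)."""
    d = np.linalg.norm(xs[:, None, :] - ys[None, :, :], axis=-1)   # d(x_i, y_j)
    inner = np.array([power_mean(row, q) for row in d])            # M_q over Y
    return power_mean(inner, p)                                    # M_p over X

def delta_pq_mc(xs, ys_minus_xs, ys, xs_minus_ys, p, q):
    """Estimate Delta_{p,q}(X, Y); the caller supplies samples of the set differences."""
    return max(gd_pq_mc(xs, ys_minus_xs, p, q), gd_pq_mc(ys, xs_minus_ys, p, q))

# Usage: X is the unit segment on the x-axis and Y a parallel segment at height 0.3;
# the sets are disjoint, so samples of Y minus X are simply samples of Y (and vice versa).
rng = np.random.default_rng(2)
xs = np.column_stack([rng.uniform(0, 1, 2000), np.zeros(2000)])
ys = np.column_stack([rng.uniform(0, 1, 2000), np.full(2000, 0.3)])
print(delta_pq_mc(xs, ys, ys, xs, p=2, q=-2))
```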
Figure 1 illustrates how the shape of the Δ_{p,q}-metric balls B_ε := {x ∈ ℝ^2 : Δ_{p,q}(A, x) ≤ ε} around a discrete set A of ten points (which approximates a segment of negative slope in the plane) varies as p and q take several different values. Notice that for negative values of p and q the balls' shapes resemble the shape of A and enclose all of its points.

3.3. Metric Properties

The extension of Δ_{p,q} to measurable sets given in Definition 6 preserves the nice metric properties of the finite version considered in [26] (Section 3). In particular, Theorem 1 enables us to show a result analogous to [26] (Theorem 3.3).
Theorem 2.
Suppose that 1 ≤ p, q < ∞. Then the generational (p,q)-distance GD_{p,q} satisfies the triangle inequality, namely
\[ \mathrm{GD}_{p,q}(X,Z) \le \mathrm{GD}_{p,q}(X,Y) + \mathrm{GD}_{p,q}(Y,Z) \]
for any sets X, Y, Z ∈ M_{<∞}(Σ).
Proof. 
From the triangle inequality for the metric d(·,·) we have
\[ d_{x,z} \le d_{x,y} + d_{y,z} \qquad (x \in X,\; y \in Y,\; z \in Z). \]
Taking the q-average over Z on both sides and using Theorem 1 (i)–(iii) yields
\[ M_q^{z\in Z}(d_{x,z}) \le M_q^{z\in Z}(d_{x,y} + d_{y,z}) \le M_q^{z\in Z}(d_{x,y}) + M_q^{z\in Z}(d_{y,z}) = d_{x,y} + M_q^{z\in Z}(d_{y,z}). \tag{3} \]
Now, we consider two cases for the parameters 1 ≤ p, q < ∞, independently.
Case p ≤ q: Taking the p-average over X on both sides of (3) and using Theorem 1 (i), (iii), and (iv), we get
\[ M_p^{x\in X}\big(M_q^{z\in Z}(d_{x,z})\big) \le M_p^{x\in X}\big(d_{x,y} + M_q^{z\in Z}(d_{y,z})\big) \le M_p^{x\in X}(d_{x,y}) + M_q^{z\in Z}(d_{y,z}). \tag{4} \]
In this expression, the LHS is precisely GD_{p,q}(X, Z), which does not depend on Y. We now take the p-average over Y on both sides of (4) and use Theorem 1 (i), (iii), and (iv), to obtain
\[ \mathrm{GD}_{p,q}(X,Z) \le M_p^{y\in Y}\big(M_p^{x\in X}(d_{x,y}) + M_q^{z\in Z}(d_{y,z})\big) \le M_p^{y\in Y}\big(M_p^{x\in X}(d_{x,y})\big) + \mathrm{GD}_{p,q}(Y,Z). \]
To finish this case note that from Theorem 1 (ii), (iv), and (v), we have that
\[ M_p^{y\in Y}\big(M_p^{x\in X}(d_{x,y})\big) = M_p^{x\in X}\big(M_p^{y\in Y}(d_{x,y})\big) \le M_p^{x\in X}\big(M_q^{y\in Y}(d_{x,y})\big) = \mathrm{GD}_{p,q}(X,Y), \]
which proves the claim.
Case q ≤ p: Here, we note that the LHS of (3) does not depend on Y, and take the q-average over Y on both sides of (3). Hence, Theorem 1 (i), (iii)–(v) yield
\[ M_q^{z\in Z}(d_{x,z}) \le M_q^{y\in Y}\big(d_{x,y} + M_q^{z\in Z}(d_{y,z})\big) \le M_q^{y\in Y}(d_{x,y}) + \mathrm{GD}_{p,q}(Y,Z). \]
Lastly, we take the p-average over X and use Theorem 1 (ii)–(iv), to obtain
\[ \mathrm{GD}_{p,q}(X,Z) \le M_p^{x\in X}\big(M_q^{y\in Y}(d_{x,y})\big) + \mathrm{GD}_{p,q}(Y,Z) = \mathrm{GD}_{p,q}(X,Y) + \mathrm{GD}_{p,q}(Y,Z), \]
which is the required result. □
Corollary 1.
For p, q ∈ ℝ^×, the (p,q)-averaged Hausdorff distance Δ_{p,q} is a semimetric on the collection M_{<∞}(Σ) of all measurable subsets of Σ with finite measure. Moreover, between disjoint sets, Δ_{p,q} is a proper metric on M_{<∞}(Σ) for 1 ≤ p, q < ∞.
Proof. 
Definition 6 easily implies that Δ_{p,q}(·,·) ≥ 0 as well as Δ_{p,q}(X, Y) = Δ_{p,q}(Y, X) for every pair X, Y ∈ M_{<∞}(Σ) and all p, q ∈ ℝ^×. From Definition 5 we can see that GD_{p,q}(X, Y \ X) = 0 if and only if X = ∅ or Y ⊆ X (and hence Y \ X = ∅). We thus find, for X, Y ≠ ∅, that
\[ \Delta_{p,q}(X,Y) = 0 \quad \text{if and only if} \quad X = Y. \]
We have shown that Δ_{p,q} is a semimetric on M_{<∞}(Σ), and since the maximum of two functions satisfying the triangle inequality also satisfies it, Theorem 2 shows that Δ_{p,q} satisfies the triangle inequality when 1 ≤ p, q < ∞. □
Theorem 3.
Suppose that for any sets X, Y, Z ∈ M_{<∞}(Σ) there exist some constants 0 < r < R such that r ≤ d_{u,v} ≤ R holds for all pairs (u, v) in X × Y, X × Z, or Y × Z. Then, for all non-simultaneously positive p, q ∈ ℝ^× with |p|, |q| ≥ 1, the generational (p,q)-distance GD_{p,q} satisfies the following relaxed triangle inequality:
\[ \mathrm{GD}_{p,q}(X,Z) \le \frac{R^2}{r^2}\big( \mathrm{GD}_{p,q}(X,Y) + \mathrm{GD}_{p,q}(Y,Z) \big). \]
Proof. 
We prove the theorem in three steps.
Step 1: Take p ∈ ℝ^× and assume that q < 0; we will show that
\[ \mathrm{GD}_{p,|q|}(X,Y) \le \frac{R}{r}\, \mathrm{GD}_{p,q}(X,Y). \tag{5} \]
For any x ∈ X and all y_1, y_2 ∈ Y we have r/R ≤ d_{x,y_1}/d_{x,y_2} ≤ R/r, thus
\[ \frac{R}{r} \ge \left( \fint_Y\!\fint_Y \left(\frac{d_{x,y_1}}{d_{x,y_2}}\right)^{|q|} dy_1\, dy_2 \right)^{1/|q|} = \left( \fint_Y d_{x,y_1}^{\,|q|}\, dy_1 \right)^{1/|q|} \left( \fint_Y d_{x,y_2}^{\,-|q|}\, dy_2 \right)^{1/|q|}. \]
Using the fact that q = −|q|, we get
\[ \left( \fint_Y d_{x,y_1}^{\,|q|}\, dy_1 \right)^{1/|q|} \le \frac{R}{r} \left( \fint_Y d_{x,y_2}^{\,q}\, dy_2 \right)^{1/q}, \]
which, by (1), proves that M_{|q|}^{y∈Y}(d_{x,y}) ≤ (R/r) M_q^{y∈Y}(d_{x,y}). Calculating the p-average M_p^{x∈X} of both sides, and using Theorem 1 (i) and (iv), we finally get M_p^{x∈X}(M_{|q|}^{y∈Y}(d_{x,y})) ≤ (R/r) M_p^{x∈X}(M_q^{y∈Y}(d_{x,y})), which, by Definition 5, is precisely (5).
Step 2: Now take q ∈ ℝ^× and assume that p < 0; we will show that
\[ \mathrm{GD}_{|p|,q}(X,Y) \le \frac{R}{r}\, \mathrm{GD}_{p,q}(X,Y). \tag{6} \]
By hypothesis, for any y ∈ Y and all x_1, x_2 ∈ X we have r/R ≤ d_{x_1,y}/d_{x_2,y} ≤ R/r. Therefore, proceeding as before and applying again Theorem 1 (i) and (iv), we conclude that M_q^{y∈Y}(d_{x_1,y}) ≤ (R/r) M_q^{y∈Y}(d_{x_2,y}). Hence,
\[ \left( \fint_X M_q^{y\in Y}(d_{x_1,y})^{|p|}\, dx_1 \right)^{1/|p|} \left( \fint_X M_q^{y\in Y}(d_{x_2,y})^{\,p}\, dx_2 \right)^{1/|p|} = \left( \fint_X\!\fint_X \left(\frac{M_q^{y\in Y}(d_{x_1,y})}{M_q^{y\in Y}(d_{x_2,y})}\right)^{|p|} dx_1\, dx_2 \right)^{1/|p|} \le \frac{R}{r}, \]
from which we deduce
\[ \left( \fint_X M_q^{y\in Y}(d_{x_1,y})^{|p|}\, dx_1 \right)^{1/|p|} \le \frac{R}{r} \left( \fint_X M_q^{y\in Y}(d_{x_2,y})^{\,p}\, dx_2 \right)^{1/p}. \]
Using (1), the previous inequality can be written as
\[ M_{|p|}^{x\in X}\big(M_q^{y\in Y}(d_{x,y})\big) \le \frac{R}{r}\, M_p^{x\in X}\big(M_q^{y\in Y}(d_{x,y})\big), \]
which, by Definition 5, is precisely (6).
Step 3: From the previous two steps we easily obtain
\[ \mathrm{GD}_{|p|,|q|}(X,Y) \le \frac{R}{r}\, \mathrm{GD}_{|p|,q}(X,Y) \le \frac{R^2}{r^2}\, \mathrm{GD}_{p,q}(X,Y). \tag{7} \]
Theorem 1 (iv) and Definition 5 imply that GD_{p,q}(X, Z) ≤ GD_{|p|,|q|}(X, Z). Finally, the triangle inequality for GD_{|p|,|q|} (Theorem 2) together with (7) produces the desired relation
\[ \mathrm{GD}_{p,q}(X,Z) \le \mathrm{GD}_{|p|,|q|}(X,Y) + \mathrm{GD}_{|p|,|q|}(Y,Z) \le \frac{R^2}{r^2}\big( \mathrm{GD}_{p,q}(X,Y) + \mathrm{GD}_{p,q}(Y,Z) \big). \]
 □
Remark 2.
When the pair (p, q) lies in the light-gray or violet regions of Figure 2, the distance GD_{p,q} satisfies a relaxed triangle inequality, with the drawback that the constant R²/r² depends on the condition that r ≤ d_{u,v} ≤ R for all pairs (u, v) in X × Y, X × Z, or Y × Z. For bounded and separated sets this condition always holds, and on those sets the associated (p,q)-averaged Hausdorff distance Δ_{p,q} becomes an inframetric, as the following corollary implies.
Corollary 2.
Under the same hypotheses of Theorem 3 we have
\[ \Delta_{p,q}(X,Z) \le \frac{R^2}{r^2}\big( \Delta_{p,q}(X,Y) + \Delta_{p,q}(Y,Z) \big). \]
Proof. 
It follows immediately from Theorem 3 and Definition 6. □
Theorem 4.
Let X, Y ∈ M_{<∞}(Σ) and suppose that p, p′, q, q′ ∈ ℝ̄ satisfy p ≤ p′ and q ≤ q′. Then
\[ \Delta_{p,q}(X,Y) \le \Delta_{p',q}(X,Y) \quad\text{and}\quad \Delta_{p,q}(X,Y) \le \Delta_{p,q'}(X,Y). \]
Proof. 
It follows easily from Theorem 1 (v) and Definition 6. □

3.4. Pareto-Compliance

We now return to the setting of MOPs to consider the behavior of the generalized GD_{p,q} and Δ_{p,q} distances as performance indicators by studying their Pareto compliance. A discussion of the Pareto compliance of the indicators GD_p and Δ_p appeared in [14] (Section 3). Similar observations are valid for the new (p,q)-indicators, but a detailed and complete account is part of ongoing research and will appear elsewhere. Here, as a first approach to the compliance question, we present a basic result that describes the behavior of the indicator GD_{p,q} under stronger assumptions than the compliance notion mentioned at the end of Section 2.2.
Let us assume that, given a decision space Q ⊆ ℝ^n, a MOP has an associated objective function F : Q → ℝ^k, with objective space F(Q) ⊆ ℝ^k endowed with the Euclidean distance d(·,·) and the inherited Lebesgue measure μ. Furthermore, let P_Q denote the Pareto set and F(P_Q) ⊆ ℝ^k the corresponding Pareto front. If X ⊆ Q denotes an approximating subset (or archive), the explicit GD_{p,q}-performance indicator assigned to X is given by
\[ I_{p,q}^{\mathrm{GD}}(X) := \mathrm{GD}_{p,q}\big( F(X),\, F(P_Q) \big). \]
For the following statement, let us recall that a partition of a set X is a collection of disjoint and non-empty subsets of X whose union is the whole of X. Furthermore, for any q ∈ ℝ̄ we abbreviate the q-averaged distance of F(u) ∈ F(Q) to the Pareto front F(P_Q) by δ_q(u) := M_q^{v∈P_Q}( d(F(u), F(v)) ).
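Before stating the result, here is a small sampling-based sketch of δ_q and of the indicator I_{p,q}^{GD}; the arrays F_archive and F_front stand for point samples of F(X) and F(P_Q), and all names and the toy data are illustrative assumptions of this sketch.

```python
import numpy as np

def power_mean(values, t):
    values = np.asarray(values, dtype=float)
    if t == 0:
        return np.exp(np.mean(np.log(values)))
    return np.mean(values ** t) ** (1.0 / t)

def delta_q(Fu, F_front, q):
    """q-averaged distance of the objective vector Fu to the sampled Pareto front."""
    return power_mean(np.linalg.norm(F_front - Fu, axis=1), q)

def indicator_gd_pq(F_archive, F_front, p, q):
    """Estimate of I^GD_{p,q}(X): the p-mean over the archive of delta_q."""
    return power_mean([delta_q(Fu, F_front, q) for Fu in F_archive], p)

# Usage with a toy linear front and two archive points:
front = np.array([[t, 1 - t] for t in np.linspace(0, 1, 101)])
print(indicator_gd_pq(np.array([[0.3, 0.8], [0.9, 0.2]]), front, p=1, q=-2))
```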
Theorem 5.
Suppose that for fixed p, q ∈ ℝ̄ a pair of measurable archives X, Y ⊆ Q satisfies the following conditions:
(i)
X and Y admit finite partitions X = ⋃_{i=1}^{m} X_i and Y = ⋃_{i=1}^{m} Y_i such that for each i ∈ {1, …, m}:
(a)
X_i ⊆ X and Y_i ⊆ Y are subsets of non-null finite measure.
(b)
∀ x ∈ X_i, ∀ y ∈ Y_i : x ≺ y.
(ii)
∀ x ∈ X, ∀ y ∈ Y : if x ≺ y then δ_q(x) ≤ δ_q(y).
Then I_{p,q}^{GD}(X) ≤ I_{p,q}^{GD}(Y).
Proof. 
From condition (i) the archives X and Y admit partitions into the same number m of subsets, and from (ii) it is clear that for any i ∈ {1, …, m}, if x ∈ X_i and y ∈ Y_i then δ_q(x) ≤ δ_q(y). Hence, taking integral p-averages over X_i, and then over Y_i, of the quantities on both sides of this inequality we obtain for each i that
\[ a_i^{\,p} := \frac{1}{|X_i|} \int_{X_i} \delta_q(x)^p\, dx \;\le\; \frac{1}{|Y_i|} \int_{Y_i} \delta_q(y)^p\, dy =: b_i^{\,p}. \tag{8} \]
Now, for each i ∈ {1, …, m} for which the inequality |X_i|/|X| ≤ |Y_i|/|Y| does not hold, we can further subdivide X_i into a sufficiently large partition of m_i non-null finite-measure subsets X_{i,1}, X_{i,2}, …, X_{i,m_i}, so that for all j ∈ {1, …, m_i} we can guarantee that
\[ w_{i,j} := \frac{|X_{i,j}|}{|X|} \;\le\; \frac{|Y_i|}{|Y|} =: \widetilde{w}_i. \tag{9} \]
Please note that this is possible due to the assumption that X_i has non-null finite measure. Since part (b) of condition (i) still holds for these subsets (i.e., ∀ x ∈ X_{i,j}, ∀ y ∈ Y_i : x ≺ y), an analogous relation to Inequality (8) is valid for them. Explicitly, for each i ∈ {1, …, m} and all j ∈ {1, …, m_i} we have
\[ a_{i,j}^{\,p} := \frac{1}{|X_{i,j}|} \int_{X_{i,j}} \delta_q(x)^p\, dx \;\le\; \frac{1}{|Y_i|} \int_{Y_i} \delta_q(y)^p\, dy =: b_i^{\,p}. \]
Due to the chosen partitions of X and Y, it is clear that |X| = Σ_{i=1}^{m} |X_i|, where |X_i| = Σ_{j=1}^{m_i} |X_{i,j}|, and |Y| = Σ_{i=1}^{m} |Y_i|. Therefore, with the notation of (9) it follows that Σ_{i=1}^{m} Σ_{j=1}^{m_i} w_{i,j} = Σ_{i=1}^{m} w̃_i = 1, which implies that the quantities w_{i,j} and w̃_i can be regarded as normalized weights appropriate for taking weighted averages. Using that 0 ≤ a_{i,j} ≤ b_i and 0 ≤ w_{i,j} ≤ w̃_i ≤ 1, simple properties of (discrete) weighted power means ensure that Σ_{i=1}^{m} Σ_{j=1}^{m_i} w_{i,j} a_{i,j}^p ≤ Σ_{i=1}^{m} w̃_i b_i^p. Thus, we can finally write
\[ I_{p,q}^{\mathrm{GD}}(X)^p = \frac{1}{|X|} \sum_{i=1}^{m} \sum_{j=1}^{m_i} \int_{X_{i,j}} \delta_q(x)^p\, dx = \sum_{i=1}^{m} \sum_{j=1}^{m_i} \frac{|X_{i,j}|}{|X|}\, a_{i,j}^{\,p} = \sum_{i=1}^{m} \sum_{j=1}^{m_i} w_{i,j}\, a_{i,j}^{\,p} \le \sum_{i=1}^{m} \widetilde{w}_i\, b_i^{\,p} = \sum_{i=1}^{m} \frac{|Y_i|}{|Y|}\, b_i^{\,p} = \frac{1}{|Y|} \sum_{i=1}^{m} \int_{Y_i} \delta_q(y)^p\, dy = I_{p,q}^{\mathrm{GD}}(Y)^p, \]
proving the statement. □
Remark 3.
Condition (i) of Theorem 5 implies the simpler (and weaker) dominance conditions:
(a')
X ≺ Y (i.e., for every y ∈ Y there exists x ∈ X such that x ≺ y), and
(b')
for every x ∈ X there exists y ∈ Y such that x ≺ y.
In many simple examples for which (a') and (b') hold, it is not difficult to find the partitions needed for Theorem 5 (i); however, this is not always possible, and the question of when such partitions exist will not be considered here. Figure 3 shows examples where (a') and (b') hold and the inequality I_{p,q}^{GD}(X) ≤ I_{p,q}^{GD}(Y) is both true (left) and false (right). In these cases it can be shown that X and Y do satisfy (left), respectively do not satisfy (right), the requirements of Theorem 5 (i).
Remark 4.
Another important observation is that condition (ii) of Theorem 5 allows some freedom in the choice of an appropriate q ∈ ℝ̄ such that the inequality δ_q(x) ≤ δ_q(y) holds whenever x ≺ y, ensuring compliance with Pareto optimality. This freedom is not available for the indicator GD_p, because in that case δ_q(x) is replaced by the corresponding quantity for q → −∞, which is the standard distance from the point F(x) to the set F(P_Q), namely d(F(x), F(P_Q)). The possibility of choosing a value of q according to the problem is clearly an advantage and provides an argument in favor of the generalized version GD_{p,q}.

4. Numerical Examples

In this section, we demonstrate the applicability and usefulness of the new distance measure on two examples.

4.1. General Example

As a first example we consider the following sets within the Euclidean plane ℝ^2: the first set, A, is the line segment connecting the two points a = (−1, 0) and b = (1, 0), i.e.,
\[ A = \overline{ab}. \tag{10} \]
Next, for some given δ > 0 and any fixed value of ε > 0 we consider sets B_δ defined as the union of line segments
\[ B_\delta = \overline{c\,d_\delta} \cup \overline{e_\delta f_\delta} \cup \overline{g_\delta h}, \tag{11} \]
where c = (−1, ε), d_δ = (−δ, ε), e_δ = (−δ, 1), f_δ = (δ, 1), g_δ = (δ, ε), and h = (1, ε) are the segment end-points in ℝ^2. Hereby, a set B_δ can be seen as a certain approximation of A, where the segment from e_δ to f_δ can be considered to be the outlier in the approximation.
Figure 4 shows the sets A and B_δ for the values δ ∈ {0.05, 0.10, 0.20, 0.40} and ε = 0.10. Apparently, for smaller δ the outlier region gets smaller, and hence the approximation B_δ of A gets "better". This is reflected by the values of the (p,q)-distance in Table 1.
On the other hand, when choosing the classical Hausdorff distance, all values of d_H(A, B_δ) are equal to 1, regardless of the choice of δ > 0. Hence, the (p,q)-distance is more appropriate in this example for identifying "better" approximations.
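The example can be reproduced in spirit with the following Python sketch, in which both sets are discretized into dense point samples and the finite-set formula of Definition 4 is applied; the resulting numbers therefore only approximate the continuous indicator and need not match Table 1 exactly.

```python
import numpy as np

def power_mean(values, t):
    values = np.asarray(values, dtype=float)
    if t == 0:
        return np.exp(np.mean(np.log(values)))
    return np.mean(values ** t) ** (1.0 / t)

def gd_pq(A, B, p, q):
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return power_mean([power_mean(row, q) for row in d], p)

def delta_pq(A, B, p, q):      # A and B are disjoint here, so no set difference is needed
    return max(gd_pq(A, B, p, q), gd_pq(B, A, p, q))

def segment(a, b, n=200):
    t = np.linspace(0.0, 1.0, n)[:, None]
    return (1 - t) * np.asarray(a, dtype=float) + t * np.asarray(b, dtype=float)

A = segment((-1, 0), (1, 0), 400)

def B(delta, eps=0.10):
    return np.vstack([segment((-1, eps), (-delta, eps)),
                      segment((-delta, 1), (delta, 1)),
                      segment((delta, eps), (1, eps))])

for delta in (0.05, 0.10, 0.20, 0.40):
    print(delta, delta_pq(A, B(delta), p=1, q=-1))
```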

4.2. Approximation of Pareto Sets/Fronts

As a second example we consider the approximation of the Pareto set and front of a given MOP. For this, we define the following bi-objective problem, which is known as the Lamé super-sphere problem [32]:
\[ \min_{x\in\mathbb{R}^n} F(x), \qquad F : \mathbb{R}^n \to \mathbb{R}^2, \tag{12} \]
where F(x) = (f_1(x), f_2(x)) is given by
\[ f_1(x) = \left( \frac{1}{n}\sum_{i=1}^n x_i^2 \right)^{\gamma/2} \quad\text{and}\quad f_2(x) = \left( \frac{1}{n}\sum_{i=1}^n (x_i-1)^2 \right)^{\gamma/2} \]
for x ∈ ℝ^n and γ ∈ ℝ. Figure 5 and Figure 6 show the Pareto sets and fronts for the special cases n = 2 with γ = 2 and γ = 1/2, respectively.
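For reference, the objectives of MOP (12) can be written in a few lines of Python; sampling the diagonal segment between the origin and (1, …, 1), which forms the Pareto set of this problem (cf. Figure 5 and Figure 6), yields a reference Pareto front for indicator computations. Names and sample sizes are my own choices.

```python
import numpy as np

def lame_objectives(x, gamma):
    """Objectives f1, f2 of the Lame super-sphere problem (12) for a decision vector x."""
    x = np.asarray(x, dtype=float)
    n = x.size
    f1 = (np.sum(x ** 2) / n) ** (gamma / 2.0)
    f2 = (np.sum((x - 1.0) ** 2) / n) ** (gamma / 2.0)
    return np.array([f1, f2])

# Sample the diagonal Pareto set for n = 2 and gamma = 2 to obtain a reference front.
ts = np.linspace(0.0, 1.0, 200)
front = np.array([lame_objectives(np.array([t, t]), gamma=2.0) for t in ts])
print(front[:3])
```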
As a first step, we consider a simple hypothetical example to illustrate the concept of continuous archives in the context of evolutionary multi-objective optimization. For this, assume we are given the discrete archive A = {x_1, …, x_5} ⊂ ℝ^2, where
\[ x_1 = (0.0129, 0.0421),\quad x_2 = (0.2525, 0.2912),\quad x_3 = (0.4903, 0.4035),\quad x_4 = (0.6258, 0.6912),\quad x_5 = (1.0212, 0.9930). \]
The set A hence consists of only five candidate solutions. Analogously, the image F(A) of A can be considered as an approximation of the Pareto front that likewise consists of five candidate solutions. Now, in order to improve the quality of the approximation, instead of A one can consider the polygonal line defined by the elements of A, namely
\[ B := \overline{x_1 x_2} \cup \cdots \cup \overline{x_4 x_5}. \]
In what follows, we will call this polygon the continuous archive. The approximations A, B, F(A), and F(B) are shown in Figure 7 and Figure 8. By visual inspection, the approximation quality increases significantly when using the linear interpolation, in particular in objective space. This is reflected by the (p,q)-distances shown in Table 2, where we observe the following general behavior: first, the distances in both decision and objective space decrease from the finite to the continuous archives, and this effect is stronger in objective space; second, in accordance with Theorem 4, the distances decrease as q decreases.
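A sketch of the continuous-archive construction: the discrete archive A is densified by linear interpolation along the polygonal line B, and Δ_{p,q} is estimated in objective space against a sampled Pareto front. The helper functions and sample sizes are my own choices, so the printed values approximate, but do not reproduce, the numbers in Table 2.

```python
import numpy as np

def power_mean(values, t):
    values = np.asarray(values, dtype=float)
    if t == 0:
        return np.exp(np.mean(np.log(values)))
    return np.mean(values ** t) ** (1.0 / t)

def gd_pq(A, B, p, q):
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return power_mean([power_mean(row, q) for row in d], p)

def delta_pq(A, B, p, q):
    return max(gd_pq(A, B, p, q), gd_pq(B, A, p, q))

def densify(points, per_edge=50):
    """Sample the polygonal line through `points` with `per_edge` points per segment."""
    pieces = []
    for a, b in zip(points[:-1], points[1:]):
        t = np.linspace(0.0, 1.0, per_edge, endpoint=False)[:, None]
        pieces.append((1 - t) * a + t * b)
    pieces.append(points[-1:])
    return np.vstack(pieces)

def F(x, gamma=2.0):
    """Objectives of MOP (12) for n = 2."""
    return np.array([(np.mean(x ** 2)) ** (gamma / 2),
                     (np.mean((x - 1) ** 2)) ** (gamma / 2)])

A = np.array([[0.0129, 0.0421], [0.2525, 0.2912], [0.4903, 0.4035],
              [0.6258, 0.6912], [1.0212, 0.9930]])
B = densify(A)                                        # continuous archive (polygonal line)
front = np.array([F(np.array([t, t])) for t in np.linspace(0, 1, 300)])

for archive in (A, B):
    FA = np.array([F(x) for x in archive])
    print(delta_pq(FA, front, p=1, q=-1))             # objective-space indicator values
```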
In a next step, we consider discrete and continuous archives resulting from two of the most popular EMO algorithms: NSGA-II [44] and MOEA/D [45]; see Table 3 for the parameter settings of these algorithms. To this end, we first consider the result of NSGA-II with a population size of 12 after 500 generations; see Figure 9 and Figure 10 and Table 4 for the numerical results. Finally, we consider the generational MOEA/D algorithm to obtain 500 finite archives of 12 elements each; see Figure 11 and Figure 12 and Table 5 for the numerical results.
For both EMO algorithms, it can be observed that the indicator values for the continuous archives are significantly better than those for the respective discrete archives. Note also that the Δ_{p,q} values oscillate for NSGA-II, which is a typical behavior for this dominance-based algorithm. These oscillations, however, are less pronounced for the continuous archives.
To further investigate the last statement, we finally consider the convex bi-objective problem f_1, f_2 : ℝ^3 → ℝ, where x = (x_1, x_2, x_3) and
\[ f_1(x) = (x_1+1)^2 + x_2^2 + x_3^2, \qquad f_2(x) = (x_1-1)^2 + x_2^2 + x_3^2. \tag{14} \]
The Pareto set of MOP (14) is the line segment connecting the points (−1, 0, 0) and (1, 0, 0), and the Pareto front is shown in Figure 13.
Figure 14 and Table 6 show the Δ_{p,q} values for both the discrete and the continuous archives obtained by NSGA-II using a population size of 20. As can be seen, the continuous archives again achieve much better indicator values, and the amplitudes of the oscillations are significantly smaller compared to the discrete archives. This is confirmed by Figure 15, Figure 16 and Figure 17, which show the results of the discrete and continuous archives after 300, 400, and 500 generations, respectively. As can be seen, NSGA-II is able to compute solutions along the Pareto front, however with a varying distribution along this set (in fact, it is known that there is no "limit archive" for NSGA-II, since this algorithm is not indicator-based). In turn, for each of the results of NSGA-II, all of the continuous archives represent, at least by visual inspection, perfect approximations of the Pareto front, which is reflected by the good Δ_{p,q} values.
Concluding, the results presented in this section strongly indicate the value of the new indicator, which is able to assess the performance of continuous archives. Though in principle other indicators could also be extended to continuous sets, this has not been done so far, and it is not a straightforward task; hence, no comparisons to other indicators can be presented here. The results further indicate the benefit of using continuous archives instead of the discrete ones that are classically used. This would, among others, allow for smaller population sizes, which would in turn reduce the computational burden of the evolutionary algorithms (note that the time complexity of all MOEAs in each generation is quadratic in the population size). The verification of this statement, however, is left for future work, as it goes beyond the scope of this study.

5. Conclusions and Future Work

In this work, we have proposed extensions of the existing GD_{p,q} and Δ_{p,q} performance indicators that allow computing the distance between two general measurable sets. In particular, this is a natural setting in multi-objective optimization because the solution set of such a problem typically forms an object of a certain dimension (and is thus not given by finitely many points). We have shown that the extended indicators keep the nice metric properties of their finite-version predecessors (see [14,26]). Moreover, for GD_{p,q}, sufficient conditions have been provided ensuring that a certain compliance with Pareto optimality of this indicator can be guaranteed. Further study is needed to determine the precise relation between these conditions and others appearing in the literature.
We have demonstrated the applicability and usefulness of the novel indicator on examples related to evolutionary multi-objective optimization.
As part of future work, we intend to further investigate the use of Δ_{p,q} within evolutionary multi-objective optimization. For instance, it might be interesting to integrate this performance indicator into an evolutionary multi-objective optimization algorithm, as was done, e.g., in [46] for its predecessor Δ_p. Although it is clear that the individual roles of p and q are related to the convexity of the metric neighborhoods of points and sets, further research is needed to elucidate more precisely useful ways to exploit their joint behavior in concrete situations. Additionally, to understand the behavior of Δ_{p,q} in relation to Pareto compliance and to complete the partial results established in Section 3.4 for GD_{p,q}, consideration should be given to the inverted generational indicator IGD_{p,q}. Finally, one interesting aspect is to see whether the indicator can be used as a proximity measure in other research fields.

Author Contributions

J.B. and A.V. obtained the theoretical results concerning the ( p , q ) -averaged Hausdorff distance. O.S. and O.C. conceived and designed the experiments; J.B. and O.C. performed the experiments and provided the related figures and tables; O.S. analyzed the data and revised the text. J.B. and A.V. wrote the paper.

Acknowledgments

We would like to thank the referees for their useful comments and suggestions, which helped to improve our manuscript. The first two authors were partially supported by the research project ID-PRY: 7919 of the Faculty of Sciences, Pontificia Universidad Javeriana, Bogotá, Colombia. The last two authors acknowledge support from the Conacyt Basic Science Group project no. 285599.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Heinonen, J. Lectures on Analysis on Metric Spaces; Springer: New York, NY, USA, 2001.
2. Huttenlocher, D.P.; Klanderman, G.A.; Rucklidge, W.A. Comparing Images Using the Hausdorff Distance. IEEE Trans. Pattern Anal. Mach. Intell. 1993, 15, 850–863.
3. Yi, X.; Camps, O.I. Line-Based Recognition Using A Multidimensional Hausdorff Distance. IEEE Trans. Pattern Anal. Mach. Intell. 1999, 21, 901–916.
4. De Carvalho, F.; de Souza, R.; Chavent, M.; Lechevallier, Y. Adaptive Hausdorff distances and dynamic clustering of symbolic interval data. Pattern Recognit. Lett. 2006, 27, 167–179.
5. Dellnitz, M.; Hohmann, A. A subdivision algorithm for the computation of unstable manifolds and global attractors. Numerische Mathematik 1997, 75, 293–317.
6. Aulbach, B.; Rasmussen, M.; Siegmund, S. Approximation of attractors of nonautonomous dynamical systems. Discret. Contin. Dyn. Syst. Ser. B 2005, 5, 215–238.
7. Emmerich, M.; Deutz, A.H. Test Problems Based on Lamé Superspheres. In Proceedings of the 4th International Conference on Evolutionary Multi-criterion Optimization, Matsushima, Japan, 5–8 March 2007; Springer: Berlin/Heidelberg, Germany, 2007; pp. 922–936.
8. Falconer, K. Fractal Geometry: Mathematical Foundations and Applications, 2nd ed.; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 2003.
9. Schütze, O. Set Oriented Methods for Global Optimization. Ph.D. Thesis, University of Paderborn, Paderborn, Germany, 2004.
10. Dellnitz, M.; Schütze, O.; Hestermeyer, T. Covering Pareto Sets by Multilevel Subdivision Techniques. J. Optim. Theory Appl. 2005, 124, 113–155.
11. Padberg, K. Numerical Analysis of Transport in Dynamical Systems. Ph.D. Thesis, University of Paderborn, Paderborn, Germany, 2005.
12. Schütze, O.; Coello Coello, C.A.; Mostaghim, S.; Talbi, E.G.; Dellnitz, M. Hybridizing Evolutionary Strategies with Continuation Methods for Solving Multi-Objective Problems. Eng. Optim. 2008, 40, 383–402.
13. Schütze, O.; Laumanns, M.; Coello Coello, C.A.; Dellnitz, M.; Talbi, E.G. Convergence of Stochastic Search Algorithms to Finite Size Pareto Set Approximations. J. Glob. Optim. 2008, 41, 559–577.
14. Schütze, O.; Esquivel, X.; Lara, A.; Coello Coello, C.A. Using the averaged Hausdorff distance as a performance measure in evolutionary multiobjective optimization. IEEE Trans. Evol. Comput. 2012, 16, 504–522.
15. Hernández, C.; Naranjani, Y.; Sardahi, Y.; Liang, W.; Schütze, O.; Sun, J.Q. Simple Cell Mapping Method for Multi-objective Optimal Feedback Control Design. Int. J. Dyn. Control 2013, 1, 231–238.
16. Siwel, J.; Yew-Soon, O.; Jie, Z.; Liang, F. Consistencies and contradictions of performance metrics in multiobjective optimization. IEEE Trans. Evol. Comput. 2014, 44, 2329–2404.
17. Sun, J.Q.; Xiong, F.R.; Schütze, O.; Hernández, C. Cell Mapping Methods—Algorithmic Approaches and Applications; Springer: Singapore, 2019.
18. Jahn, J. Multiobjective search algorithm with subdivision technique. Comput. Optim. Appl. 2006, 35, 161–175.
19. Schütze, O.; Vasile, M.; Junge, O.; Dellnitz, M.; Izzo, D. Designing optimal low thrust gravity assist trajectories using space pruning and a multi-objective approach. Eng. Optim. 2009, 41, 155–181.
20. Deb, K. Multi-Objective Optimization Using Evolutionary Algorithms; John Wiley & Sons: Chichester, UK, 2001.
21. Beume, N.; Naujoks, B.; Emmerich, M. SMS-EMOA: Multiobjective selection based on dominated hypervolume. Eur. J. Oper. Res. 2007, 181, 1653–1669.
22. Garg, H. Solving structural engineering design optimization problems using an artificial bee colony algorithm. J. Ind. Manag. Optim. 2014, 10, 777–794.
23. Garg, H. A hybrid PSO-GA algorithm for constrained optimization problems. Appl. Math. Comput. 2016, 274, 292–305.
24. Zapotecas-Martínez, S.; López-Jaimes, A.; García-Nájera, A. LIBEA: A Lebesgue Indicator-Based Evolutionary Algorithm for Multi-objective Optimization. Swarm Evolut. Comput. 2018.
25. Hartikainen, M.; Miettinen, K.; Wiecek, M. PAINT: Pareto front interpolation for nonlinear multiobjective optimization. Comput. Optim. Appl. 2012, 52, 845–867.
26. Vargas, A.; Bogoya, J.M. A generalization of the averaged Hausdorff distance. Computación y Sistemas 2018, 22, 331–345.
27. Tao, T. An Introduction to Measure Theory (Graduate Studies in Mathematics); American Mathematical Society: Providence, RI, USA, 2011.
28. Jones, F. Lebesgue Integration on Euclidean Space; Jones and Bartlett Publishers: Boston, MA, USA, 2001.
29. Hardy, G.H.; Littlewood, J.E.; Pólya, G. Inequalities, 2nd ed.; Cambridge University Press: Cambridge, UK, 1952.
30. Bullen, P.S. Handbook of Means and Their Inequalities; Kluwer Academic Publishers Group: Dordrecht, The Netherlands, 2003.
31. Pareto, V. Manual of Political Economy; The Macmillan Press: London, UK, 1971.
32. Hillermeier, C. Nonlinear Multiobjective Optimization: A Generalized Homotopy Approach; Springer Science & Business Media: Berlin, Germany, 2001.
33. Zitzler, E.; Thiele, L. Multiobjective evolutionary algorithms: A comparative case study and the strength Pareto approach. IEEE Trans. Evol. Comput. 1999, 3, 257–271.
34. Brockhoff, D.; Wagner, T.; Trautmann, H. On the Properties of the R2 Indicator. In Proceedings of the 14th Annual Conference on Genetic and Evolutionary Computation, Philadelphia, PA, USA, 7–11 July 2012; ACM: New York, NY, USA, 2012; pp. 465–472.
35. Ishibuchi, H.; Masuda, H.; Nojima, Y. A Study on Performance Evaluation Ability of a Modified Inverted Generational Distance Indicator. In Proceedings of the 2015 Annual Conference on Genetic and Evolutionary Computation, Madrid, Spain, 11–15 July 2015; ACM: New York, NY, USA, 2015; pp. 695–702.
36. Dilettoso, E.; Rizzo, S.A.; Salerno, N. A Weakly Pareto Compliant Quality Indicator. Math. Comput. Appl. 2017, 22, 25.
37. Garg, H.; Kumar, K. Distance measures for connection number sets based on set pair analysis and its applications to decision-making process. Appl. Intell. 2018, 48, 1–14.
38. Singh, S.; Garg, H. Distance measures between type-2 intuitionistic fuzzy sets and their application to multicriteria decision-making process. Appl. Intell. 2017, 46, 788–799.
39. Van Veldhuizen, D.A.; Lamont, G.B. Multiobjective evolutionary algorithm test suites. In Proceedings of the 1999 ACM Symposium on Applied Computing, San Antonio, TX, USA, 28 February–2 March 1999; ACM: New York, NY, USA, 1999; pp. 351–357.
40. Coello Coello, C.A.; Cruz Cortés, N. Solving Multiobjective Optimization Problems using an Artificial Immune System. Genet. Program. Evolvable Mach. 2005, 6, 163–190.
41. Rudolph, G.; Schütze, O.; Grimme, C.; Domínguez-Medina, C.; Trautmann, H. Optimal averaged Hausdorff archives for bi-objective problems: Theoretical and numerical results. Comput. Optim. Appl. 2016, 64, 589–618.
42. Zitzler, E.; Thiele, L.; Laumanns, M.; Fonseca, C.M.; da Fonseca, V.G. Performance assessment of multiobjective optimizers: An analysis and review. IEEE Trans. Evol. Comput. 2003, 7, 117–132.
43. Witkowski, A. A new proof of the monotonicity property of power means. JIPAM. J. Inequal. Pure Appl. Math. 2004, 5, 73.
44. Deb, K.; Pratap, A.; Agarwal, S.; Meyarivan, T. A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans. Evol. Comput. 2002, 6, 182–197.
45. Zhang, Q.; Li, H. MOEA/D: A Multiobjective Evolutionary Algorithm Based on Decomposition. IEEE Trans. Evol. Comput. 2007, 11, 712–731.
46. Schütze, O.; Domínguez-Medina, C.; Cruz-Cortés, N.; de la Fraga, L.G.; Sun, J.Q.; Toscano, G.; Landa, R. A scalar optimization approach for averaged Hausdorff approximations of the Pareto front. Eng. Optim. 2016, 48, 1593–1617.
Figure 1. Table of Δ_{p,q}-neighborhoods of increasing radius around a discrete set of ten equidistant points along the line y = −x in ℝ^2, showing how their shapes change for different values of p and q.
Figure 2. Representation of key regions on the (p,q)-plane. Corollary 1 shows that Δ_{p,q} is a proper metric in the violet region and Corollary 2 shows that it is an inframetric in the orange and light-gray ones. Numerical evidence suggests that Δ_{p,q} is still a proper metric in the orange regions.
Figure 3. (Left) Example of a Pareto front F(P_Q) with two archives satisfying condition (i) of Theorem 5, for which I_{p,q}^{GD}(X) ≤ I_{p,q}^{GD}(Y). (Right) Example of a Pareto front F(P_Q) with two archives satisfying conditions (a') and (b') of Remark 3 but for which I_{p,q}^{GD}(X) > I_{p,q}^{GD}(Y). In this case partitions of the archives satisfying Theorem 5 (i) do not exist.
Figure 4. Example of four approximations of A (black horizontal segment) with B δ (blue piecewise function) for four different values of δ and fixed ε = 0.10 .
Figure 5. Pareto set (left) and front (right) of MOP (12) for n = 2 with γ = 2 .
Figure 6. Pareto set (left) and front (right) of MOP (12) for n = 2 with γ = 1 / 2 .
Figure 7. Left: Approximations A (blue dots) and B (blue polygon line) of the Pareto set (green thick line) of MOP (12) for n = 2 . Right: corresponding approximations F ( A ) and F ( B ) of the Pareto front, for γ = 2 .
Figure 8. Left: Approximations A (blue dots) and B (blue polygon line) of the Pareto set (green thick line) of MOP (12) for n = 2 . Right: corresponding approximations F ( A ) and F ( B ) of the Pareto front, for γ = 1 / 2 .
Figure 9. Left: approximations A (blue dots) corresponding to the 500th generation of the NSGA-II algorithm, and B (blue polygon line) of the Pareto set (green thick line) of MOP (12) for n = 2 . Right: respective approximations F ( A ) and F ( B ) of the Pareto front for γ = 2 .
Figure 10. Left: approximations A (blue dots) corresponding to the 500th generation of the NSGA-II algorithm, and B (blue polygon line) of the Pareto set (green thick line) of MOP (12) for n = 2 . Right: respective approximations F ( A ) and F ( B ) of the Pareto front for γ = 1 / 2 .
Figure 11. Left: approximations A (blue dots), corresponding to the 500th generation of the MOEA/D algorithm, and B (blue polygon line) of the Pareto set (green thick line) of MOP (12) for n = 2. Right: respective approximations F(A) and F(B) of the Pareto front for γ = 2.
Figure 12. Left: approximations A (blue dots), corresponding to the 500th generation of the MOEA/D algorithm, and B (blue polygon line) of the Pareto set (green thick line) of MOP (12) for n = 2. Right: respective approximations F(A) and F(B) of the Pareto front for γ = 1/2.
Figure 13. Pareto set (left) and front (right) of MOP (14).
Figure 14. Δ p , q values for the discrete (black curve) and the continuous archives (blue curve) of NSGA-II for MOP (14).
Figure 15. Left: Approximations A (blue dots) and B (blue continuous polygon line) of the Pareto set of MOP (14) in the 300th generation. Right: corresponding approximations F ( A ) and F ( B ) of the Pareto front.
Figure 16. Left: Approximations A (blue dots) and B (blue continuous polygon line) of the Pareto set of MOP (14) in the 400th generation. Right: corresponding approximations F ( A ) and F ( B ) of the Pareto front.
Figure 17. Left: Approximations A (blue dots) and B (blue continuous polygon line) of the Pareto set of MOP (14) in the 500th generation. Right: corresponding approximations F ( A ) and F ( B ) of the Pareto front.
Table 1. Δ_{p,q} values for A and B_δ in (10) and (11), for different values of p, q, and δ, with fixed ε = 0.10.

p   q        Δ_{p,q}(A, B_0.05)   Δ_{p,q}(A, B_0.10)   Δ_{p,q}(A, B_0.20)   Δ_{p,q}(A, B_0.40)
1   1        0.7149               0.7464               0.8091               0.9324
1   −1       0.4105               0.4506               0.5311               0.6945
1   −100     0.1503               0.1961               0.2878               0.4711
1   −200     0.1479               0.1934               0.2844               0.4663
1   −10,000  0.1451               0.1901               0.2802               0.4602
Table 2. Δ_{p,q} values for the Pareto set/front approximations for MOP (12).

γ = 2
p   q        Decision Space (Finite Archive, Continuous Archive)   Objective Space (Finite Archive, Continuous Archive)
1   1        0.5314, 0.4841                                        0.4369, 0.3943
1   −1       0.2732, 0.1750                                        0.2095, 0.0945
1   −100     0.1140, 0.0213                                        0.0893, 0.0018
1   −200     0.1131, 0.0208                                        0.0886, 0.0017
1   −10,000  0.1122, 0.0202                                        0.0879, 0.0017

γ = 1/2
p   q        Decision Space (Finite Archive, Continuous Archive)   Objective Space (Finite Archive, Continuous Archive)
1   1        0.5314, 0.4841                                        0.5629, 0.5024
1   −1       0.2732, 0.1750                                        0.2807, 0.1072
1   −100     0.1140, 0.0213                                        0.1202, 0.0015
1   −200     0.1131, 0.0208                                        0.1192, 0.0015
1   −10,000  0.1122, 0.0202                                        0.1183, 0.0014
Table 3. Parameter setting for NSGA-II and MOEA/D.

NSGA-II:
  Population size: 12
  Number of generations: 500
  Crossover probability: 0.8
  Mutation probability: 1/n
  Distribution index for crossover: 20
  Distribution index for mutation: 20

MOEA/D:
  Population size: 12
  # weight vectors: 12
  Number of generations: 500
  Crossover probability: 1
  Mutation probability: 1/n
  Distribution index for crossover: 30
  Distribution index for mutation: 20
  Aggregation function: Tchebycheff
  Neighborhood size: 3
Table 4. Δ_{p,q} values for the Pareto front approximations for MOP (12) using the NSGA-II archives, with p = 1, q = 10.

Generation   γ = 1/2: Finite Archive, Continuous Archive   γ = 2: Finite Archive, Continuous Archive
50 0.0439 0.0147 0.0696 0.0160
100 0.0498 0.0109 0.0540 0.0102
200 0.0613 0.0118 0.0716 0.0207
250 0.0651 0.0265 0.0572 0.0061
400 0.0602 0.0102 0.0723 0.0276
450 0.0630 0.0154 0.0584 0.0088
460 0.0612 0.0154 0.0658 0.0098
470 0.0523 0.0102 0.0566 0.0083
480 0.0754 0.0269 0.0684 0.0241
490 0.0510 0.0091 0.0584 0.0118
500 0.0722 0.0097 0.0560 0.0103
Table 5. Δ_{p,q} values for the Pareto front approximations for MOP (12) using the MOEA/D archives, with p = 1, q = 10.

Generation   γ = 1/2: Finite Archive, Continuous Archive   γ = 2: Finite Archive, Continuous Archive
50 0.0610 0.0171 0.0648 0.0119
100 0.0519 0.0051 0.1093 0.0016
200 0.0536 0.0037 0.0781 0.0009
250 0.0522 0.0037 0.0790 0.0008
400 0.0511 0.0017 0.0784 0.0009
450 0.0511 0.0017 0.0784 0.0009
460 0.0509 0.0012 0.0784 0.0009
470 0.0509 0.0012 0.0784 0.0009
480 0.0509 0.0010 0.0783 0.0009
490 0.0509 0.0010 0.0783 0.0009
500 0.0509 0.0010 0.0783 0.0009
Table 6. Δ_{p,q} values for the discrete and continuous archives of NSGA-II for MOP (14). The results are averaged over 20 independent runs.

Generation   Continuous Archive   Finite Archive
20 0.1333 0.2401
40 0.0176 0.1451
60 0.0090 0.1561
80 0.0088 0.1355
100 0.0065 0.1472
120 0.0074 0.1412
140 0.0081 0.1395
160 0.0075 0.1549
180 0.0092 0.1468
200 0.0074 0.1429
220 0.0066 0.1408
240 0.0075 0.1397
260 0.0066 0.1460
280 0.0074 0.1439
300 0.0084 0.1421
320 0.0070 0.1352
340 0.0070 0.1373
360 0.0081 0.1454
380 0.0079 0.1413
400 0.0066 0.1388
420 0.0063 0.1400
440 0.0097 0.1384
460 0.0067 0.1418
480 0.0067 0.1421
500 0.0076 0.1426
