
Unified Polynomial Dynamic Programming Algorithms for P-Center Variants in a 2D Pareto Front †

1
Université Paris-Saclay, CNRS, Laboratoire Interdisciplinaire des Sciences du Numérique, 91400 Orsay, France
2
Sony Computer Science Laboratories Inc., Tokyo 141-0022, Japan
3
CNRS UMR 9189-CRIStAL-Centre de Recherche en Informatique Signal et Automatique de Lille, Université Lille, F-59000 Lille, France
*
Author to whom correspondence should be addressed.
This paper is an extended version of our paper published in OLA 2020, International Conference in Optimization and Learning, Cadiz, Spain, 17–19 February 2020.
Mathematics 2021, 9(4), 453; https://doi.org/10.3390/math9040453
Submission received: 21 December 2020 / Revised: 12 February 2021 / Accepted: 16 February 2021 / Published: 23 February 2021
(This article belongs to the Special Issue Mathematical Methods for Operations Research Problems)

Abstract:
With many efficient solutions for a multi-objective optimization problem, this paper aims to cluster the Pareto front in a given number of clusters K and to detect isolated points. K-center problems and variants are investigated with a unified formulation considering the discrete and continuous versions, partial K-center problems, and their min-sum-K-radii variants. In dimension three (or higher), this induces NP-hard complexities. In the planar case, a common optimality property is proven: non-nested optimal solutions exist. This induces a common dynamic programming algorithm running in polynomial time. Specific improvements hold for some variants, such as K-center problems and min-sum-K-radii on a line. When applied to N points and allowing M < N points to be uncovered, the K-center and min-sum-K-radii variants are, respectively, solvable in O(K(M+1)N log N) and O(K(M+1)N^2) time. Such complexity results allow an efficient and straightforward implementation. Parallel implementations can also be designed for a practical speed-up. Their application inside multi-objective heuristics is discussed to archive partial Pareto fronts, with a special interest in partial clustering variants.

1. Introduction

This paper is motivated by real-world applications of multi-objective optimization (MOO). Some optimization problems are driven by more than one objective function, with conflicting optimization directions. For example, one may minimize financial costs while maximizing the robustness to uncertainties or minimizing the environmental impact [1,2]. In such cases, higher levels of robustness or sustainability are likely to induce additional financial costs. Pareto dominance, preferring one solution to another if it is better for all the objectives, is a weak dominance rule. With conflicting objectives, several non-dominated points in the objective space can be generated, defining efficient solutions, which are the best compromises. A Pareto front (PF) is the projection in the objective space of the efficient solutions [3]. MOO approaches may generate large sets of efficient solutions using Pareto dominance [3]. Summarizing the shape of a PF may be required for presentation to decision makers. In such a context, clustering problems are useful to support decision making, presenting a view of a PF in clusters, the density of points in each cluster, or selecting the most central cluster points as representative points. Note that similar problems are of interest for population-based MOO heuristics, such as evolutionary algorithms, to archive representative points of partial Pareto fronts, or to select diversified efficient solutions for mutation or cross-over operators [4,5].
With N points in a PF, one wishes to define K ≤ N clusters while minimizing a measure of dissimilarity. The K-center problems, both in the discrete and continuous versions, define the cluster costs in this paper, covering the PF with K identical balls while minimizing the radius of the balls used. By definition, the ball centers belong to the PF for the discrete K-center version, whereas the continuous version is similar to geometric covering problems, without any constraint on the localization of centers. Furthermore, min-sum-radii and min-sum-diameter are min-sum clustering variants, where the covering balls are not necessarily identical. For each variant, one can also consider partial clustering variants, where a given percentage (or number) of points can be ignored in the covering constraints, which is useful when modelling outliers in the data.
The K-center problems are NP-hard in the general case [6], but also for the specific case in R^2 using the Euclidean distance [7]. This implies that K-center problems in a three-dimensional (3D) PF are also NP-hard, the planar case being equivalent to an affine 3D PF. We consider the case of two-dimensional (2D) PF in this paper, focusing on polynomial complexity results. This has applications to bi-objective optimization; the 3D PF and higher dimensions are presented as perspectives for this work. Note that 2D PF are a generalization of one-dimensional (1D) cases, where polynomial complexity results are known [8,9]. A preliminary work proved that K-center clustering variants in a 2D PF are solvable in polynomial time using a Dynamic Programming (DP) algorithm [10]. This paper improves these algorithms for these variants, with an extension to min-sum clustering variants, partial clustering, and Chebyshev and Minkowski distances. The properties of the DP algorithms are discussed for efficient implementation, including parallelization.
This paper is organized as follows. The considered problems are defined formally with unified notations in Section 2. In Section 3, related state-of-the-art elements are discussed. In Section 4 and Section 5, intermediate results and specific complexity results for sub-cases are presented. In Section 6, a unified DP algorithm with a proven polynomial complexity is designed. In Section 7, specific improvements are presented. In Section 8, the implications and applications of the results of Section 5, Section 6 and Section 7 are discussed. In Section 9, our contributions are summarized, with a discussion of future research directions.

2. Problem Statement and Notation

In this paper, integer intervals are denoted as [[a,b]] = [a,b] ∩ Z. Let E = {x_1, …, x_N} = {x_i}_{i∈[[1,N]]} be a set of N elements of R^2, such that for all i ≠ j, x_i I x_j, defining the binary relations ≺, ≼, I for all y = (y^1, y^2), z = (z^1, z^2) ∈ R^2 with
y ≺ z ⟺ y^1 < z^1 and y^2 > z^2 (1)
y ≼ z ⟺ y ≺ z or y = z (2)
y I z ⟺ y ≺ z or z ≺ y (3)
These hypotheses on E define a 2D PF considering the minimization of two objectives [3,11]. Such a configuration is illustrated in Figure 1. Without loss of generality, transforming an objective f to maximize into the objective −f to minimize allows for the consideration of the minimization of two objectives; this assumption impacts the sense of the inequalities in the relations ≺, ≼, I. A PF can also be seen as the result of a Skyline operator [12]. A 2D PF can be extracted from any subset of R^2 using an output-sensitive algorithm [13], or using any MOO approach [3,14].
The results of this paper will be given using the Chebyshev and Minkowski distances, generically denoting by d(y,z) the l_∞ and l_m norm-induced distances, respectively. For a given m > 0, the Minkowski distance is denoted d_m, and given by the formula
∀ y = (y^1, y^2), z = (z^1, z^2) ∈ R^2, d_m(y,z) = (|y^1 − z^1|^m + |y^2 − z^2|^m)^{1/m} (4)
The case m = 2 corresponds to the Euclidean distance; it is the usual case for our application. The limit m → ∞ defines the Chebyshev distance, denoted d_∞ and given by the formula
∀ y = (y^1, y^2), z = (z^1, z^2) ∈ R^2, d_∞(y,z) = max(|y^1 − z^1|, |y^2 − z^2|) (5)
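As an illustrative sketch (the function names are ours, not from the paper), the two distances above translate directly into code; for large m, the Minkowski distance approaches the Chebyshev distance:

```python
def minkowski(y, z, m):
    # d_m(y, z) = (|y^1 - z^1|^m + |y^2 - z^2|^m)^(1/m)
    return (abs(y[0] - z[0]) ** m + abs(y[1] - z[1]) ** m) ** (1.0 / m)

def chebyshev(y, z):
    # d_inf(y, z) = max(|y^1 - z^1|, |y^2 - z^2|)
    return max(abs(y[0] - z[0]), abs(y[1] - z[1]))
```

For m = 2, this is the usual Euclidean distance.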
Once a distance d is defined, the dissimilarity of a subset of points E′ ⊆ E is defined using the radius of the minimal enclosing ball containing E′. Numerically, this dissimilarity function, denoted f_C, can be written as
∀ E′ ⊆ E, f_C(E′) = min_{y∈R^2} max_{x∈E′} d(x,y) (6)
A discrete variant considers enclosing balls centered in one of the given points. Numerically, this dissimilarity function, denoted f_D, can be written as
∀ E′ ⊆ E, f_D(E′) = min_{y∈E′} max_{x∈E′} d(x,y) (7)
For the sake of having unified notations for common results and proofs, we define γ ∈ {0,1} to indicate which version of the dissimilarity function is considered: γ = 0 (respectively, γ = 1) indicates that the continuous (respectively, discrete) version is used, f_γ thus denoting f_0 = f_C (respectively, f_1 = f_D). Note that γ will be related to complexity results, which motivated this notation choice.
For each subset of points E′ ⊆ E and integer K ≥ 1, we define Π_K(E′) as the set of all the possible partitions of E′ into K subsets. The continuous and discrete K-center problems are optimization problems with Π_K(E) as the set of feasible solutions, covering E with K identical balls while minimizing the radius of the balls used
min_{π ∈ Π_K(E)} max_{P ∈ π} f_γ(P) (8)
The continuous and discrete K-center problems in a 2D PF are denoted K-γ-CP2DPF. Another covering variant, denoted the min-sum-K-radii problem, covers the points with non-identical balls, while minimizing the sum of the radii of the balls. We consider the following extension of min-sum-K-radii problems, with α > 0 being a real number
min_{π ∈ Π_K(E)} Σ_{P ∈ π} f_γ(P)^α (9)
α = 1 corresponds to the standard min-sum-K-radii problem. α = 2 with the standard Euclidean distance is equivalent to the minimization of the area defined by the covering disks. For the sake of unifying notations for results and proofs, we define a generic operator ⊕ ∈ {+, max} to denote, respectively, sum-clustering and max-clustering. This defines the generic optimization problems
min_{π ∈ Π_K(E)} ⊕_{P ∈ π} f_γ(P)^α (10)
Lastly, we consider a partial clustering extension of problems (10), similarly to the partial p-center problem [15]. The covering with balls mainly concerns the extreme points, which makes the results highly dependent on outliers. One may consider that a certain number M < N of the points may be outliers, so that M points can be removed in the evaluation. This can be written as
min_{E′ ⊆ E : |E ∖ E′| ≤ M} min_{π ∈ Π_K(E′)} ⊕_{P ∈ π} f_γ(P)^α (11)
Problem (11) is denoted K-M-⊕-(α,γ)-BC2DPF. Sometimes, the partial covering is defined by a maximal percentage of outliers. In this case, even if this percentage is much smaller than 100%, we have M = Θ(N), which we have to keep in mind for the complexity results. The K-center problems, extending K-γ-CP2DPF with partial covering, are the K-M-max-(α,γ)-BC2DPF problems for all α > 0; the value of α does not matter for max-clustering, defining the same optimal solutions as α = 1. The standard min-sum-K-radii problem, equivalent to the min-sum-diameter problem, corresponds to the K-0-+-(1,γ)-BC2DPF problems for the discrete and continuous versions; the K-M-+-(1,γ)-BC2DPF problems consider partial covering for min-sum-K-radii problems.
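To make the unified notation concrete, the following brute-force sketch evaluates the discrete case γ = 1 of K-M-⊕-(α,γ)-BC2DPF by enumerating outlier sets and partitions. It is only meant to clarify the definitions on tiny instances; all names are ours, and it is in no way the paper's polynomial algorithm:

```python
from itertools import combinations

def f_D(P, d):
    # discrete cluster cost (gamma = 1): best center among the cluster's own points
    return min(max(d(x, y) for x in P) for y in P)

def partitions(items, K):
    # enumerate all partitions of `items` into at most K non-empty subsets
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for part in partitions(rest, K):
        # put `first` into an existing block
        for i in range(len(part)):
            yield part[:i] + [part[i] + [first]] + part[i + 1:]
        # or open a new block if fewer than K exist
        if len(part) < K:
            yield part + [[first]]

def brute_force(E, K, M, d, alpha=1, agg=max):
    # exhaustive evaluation of K-M-agg-(alpha, 1)-BC2DPF on a tiny instance
    best = float("inf")
    for m in range(M + 1):
        for removed in combinations(range(len(E)), m):
            kept = [E[i] for i in range(len(E)) if i not in removed]
            for part in partitions(kept, K):
                cost = agg(f_D(P, d) ** alpha for P in part)
                best = min(best, cost)
    return best
```

With agg=max this is the (partial) K-center problem; with agg=sum, the (partial) min-sum-K-radii variant.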

3. Related Works

This section describes works related to our contributions, presenting the state of the art for p-center problems and for clustering points in a PF. For a more detailed survey of the results for the p-center problems, we refer to [16].

3.1. Solving P-Center Problems and Complexity Results

Generally, the p-center problem consists of locating p facilities among possible locations and assigning n clients, denoted c_1, c_2, …, c_n, to the facilities in order to minimize the maximum distance between a client and the facility to which it is allocated. The continuous p-center problem assumes that any location can be chosen, whereas the discrete p-center problem considers a subset of m potential sites, denoted f_1, f_2, …, f_m, and distances d_{i,j} for all i ∈ [[1,n]] and j ∈ [[1,m]]. Discrete p-center problems can be formulated with bipartite graphs, modeling that some assignments are infeasible. In the discrete p-center problem defined in Section 2, the points f_1, f_2, …, f_m are exactly c_1, c_2, …, c_n, and the distances are defined using a norm, so that the triangle inequality holds for such variants.
P-center problems are NP-hard [6,17]. Furthermore, for all α < 2, computing an α-approximation for the discrete p-center problem with triangle inequality is NP-hard [18]. Two approximation algorithms were provided for the discrete p-center problem, running in O(pn log n) time and in O(np) time, respectively [19,20]. The discrete p-center problem in R^2 with a Euclidean distance is also NP-hard [17]. Defining binary variables x_{i,j} ∈ {0,1} and y_j ∈ {0,1}, with x_{i,j} = 1 if and only if the client i is assigned to the facility j, and y_j = 1 if and only if location f_j is chosen as a facility, the following Integer Linear Programming (ILP) formulation models the discrete p-center problem [21]
min_{x,y,z} z (12a)
s.t.: Σ_{j=1}^m d_{i,j} x_{i,j} ≤ z ∀i ∈ [[1,n]] (12b)
Σ_{j=1}^m y_j = p (12c)
Σ_{j=1}^m x_{i,j} = 1 ∀i ∈ [[1,n]] (12d)
x_{i,j} ≤ y_j ∀(i,j) ∈ [[1,n]] × [[1,m]] (12e)
x_{i,j}, y_j ∈ {0,1} ∀(i,j) ∈ [[1,n]] × [[1,m]] (12f)
Constraints (12b) are implied by a standard linearization of the original min–max objective function. Constraint (12c) fixes the number of open facilities to p. Constraints (12d) assign each client to exactly one facility. Constraints (12e) are necessary to induce that any considered assignment x_{i,j} = 1 implies that facility j is open, with y_j = 1. Tighter ILP formulations than (12) were proposed, with efficient exact algorithms relying on the ILP models [22,23]. Exponential exact algorithms were also designed for the continuous p-center problem [24,25]. An n^{O(p)}-time algorithm was provided for the continuous Euclidean p-center problem in the plane [26]. An n^{O(p^{1−1/d})}-time algorithm is available for the continuous p-center problem in R^d under the Euclidean and L_∞ metrics [27].
Specific cases of p-center problems are solvable in polynomial time. The continuous 1-center problem is exactly the minimum covering ball problem, which has a linear complexity in R^2. Indeed, a "prune and search" algorithm finds the optimal bounding sphere and runs in linear time if the dimension is fixed as a constant [28]. In dimension d, its complexity is in O((d+1)(d+1)! n) time, which is impractical for high-dimensional applications [28]. The discrete 1-center problem is solved in O(n log n) time, using furthest-neighbor Voronoi diagrams [29]. The continuous and planar 2-center problem is solved in randomized expected O(n log^2 n) time [30,31]. The discrete and planar 2-center problem is solvable in O(n^{4/3} log^5 n) time [32].
1D p-center problems, or equivalently those with points located on a line, have specific complexity results with polynomial DP algorithms. The discrete 1D k-center problem is solvable in O(n) time [33]. The continuous and planar k-center problem on a line, finding k disks with centers on a given line l, is solvable in polynomial time: in O(n^2 log^2 n) time with the first algorithm by [29], and in O(nk log n) time and O(n) space with the improved version provided by [34]. An intensively studied extension of the 1D sub-cases is the p-center problem in a tree structure. The continuous p-center problem is solvable in O(n log^3 n) time in a tree structure [7]. The discrete p-center problem is solvable in O(n log n) time in a tree structure [35].
Rectilinear p-center problems, using the Chebyshev distance, were less studied. Such a distance is useful for complexity results; however, it has fewer applications than the Euclidean or Minkowski norms. In the planar case, O(n) algorithms are available for the rectilinear 1-center and 2-center problems, and the rectilinear 3-center problem can be solved in O(n log n) time [36]. In a general dimension d, the continuous and discrete versions of rectilinear p-center problems are solvable in O(n) and O(n log^{d−2} n log log n + n log n) time, respectively. Specific complexity results for rectilinear 2-center problems are also available [37].

3.2. Solving Variants of P-Center Problems and Complexity Results

Variants of p-center problems were studied less intensively than the standard p-center problems. The partial variants were introduced in 1999 by [15], whereas a preliminary work in 1981 considered a partial weighted one-center variant and a DP algorithm to solve it, running in O(n^2 log n) time [38]. The partial discrete p-center problem can be formulated as an ILP starting from the formulation provided by [21], as written in (12). Indeed, considering that n_0 points can be uncovered, constraints (12d) become inequalities Σ_{j=1}^m x_{i,j} ≤ 1 for all i ∈ [[1,n]], and the maximal number of unassigned points is set to n_0, adding the constraint Σ_{i=1}^n Σ_{j=1}^m x_{i,j} ≥ n − n_0. Similarly, the sum-clustering variants K-M-+-(α,γ)-BC2DPF can be written as the following ILP
min_{z, r ≥ 0} Σ_{n=1}^N r_n (13a)
s.t.: d(x_n, x_{n′})^α z_{n,n′} ≤ r_n ∀(n,n′) ∈ [[1,N]]^2 (13b)
Σ_{n=1}^N z_{n,n} = K (13c)
Σ_{n=1}^N z_{n,n′} ≤ 1 ∀n′ ∈ [[1,N]] (13d)
Σ_{n=1}^N Σ_{n′=1}^N z_{n,n′} ≥ N − M (13e)
z_{n,n′} ≤ z_{n,n} ∀(n,n′) ∈ [[1,N]]^2 (13f)
z_{n,n′} ∈ {0,1} ∀(n,n′) ∈ [[1,N]]^2 (13g)
r_n ≥ 0 ∀n ∈ [[1,N]] (13h)
In this ILP formulation, binary variables z_{n,n′} ∈ {0,1} are defined such that z_{n,n′} = 1 if and only if the points x_n and x_{n′} are assigned to the same cluster, with x_n being the discrete center. Continuous variables r_n ≥ 0 denote the powered radius of the ball centered in x_n if x_n is chosen as a center, and r_n = 0 otherwise. Constraints (13b) are a standard linearization of the non-linear objective function. z_{n,n} indicates whether point x_n is chosen as a center: (13c) implies that K such variables are nonzero, and (13f) ensures that a nonzero variable z_{n,n′} implies that the corresponding z_{n,n} is nonzero. (13d) and (13e) allow the extension to partial variants, as discussed before.
Min-sum-radii and min-sum-diameter problems were rarely studied. However, such objective functions are useful for meta-heuristics to break some "plateau" effects [39]. Min-sum diameter clustering is NP-hard in the general case and polynomial within a tree structure [40]. The NP-hardness was also proven in metrics induced by weighted planar graphs [41]. Approximation algorithms were studied for min-sum diameter clustering. A logarithmic approximation with a constant-factor blowup in the number of clusters was provided by [42]. In the planar case with Euclidean distances, a polynomial-time approximation scheme was designed [43].

3.3. Clustering/Selecting Points in Pareto Frontiers

Here, we summarize the results related to the selection or the clustering of points in a PF, with applications to MOO algorithms. Polynomial complexity results induced by the 2D PF structure are an interesting property, as clustering problems are NP-hard in general [17,44,45].
To the best of our knowledge, no specific work focused on the PF sub-cases of k-center problems and variants before our preliminary work [10]. The Distance-Based Representative Skyline has similarities with the discrete p-center problem in a 2D PF, although exact optimization may not be fully available in the Skyline application, which makes a significant difference [46,47]. The preliminary results proved that K-γ-CP2DPF is solvable in O(KN log^γ N) time using O(N) additional memory space [10]. Partial extensions and min-sum-k-radii variants were not considered for 2D PF. We note that the 2D PF case is an extension of the 1D case, with 1D cases being equivalent to the cases of an affine 2D PF. In the study of complexity results, a tree structure is a more studied extension of 1D cases. The discrete k-center problem on a tree structure, and thus the 1D sub-case, is solvable in O(N) time [33]. 3D PF cases are NP-hard, as already mentioned in the introduction, this being a consequence of the NP-hardness of the general planar case.
Maximization of the quality of discrete representations of Pareto sets was studied with the hypervolume measure in the Hypervolume Subset Selection (HSS) problem [48,49]. The HSS problem is known to be NP-hard in dimension 3 (and greater dimensions) [50]. HSS is solvable with an exact algorithm in N^{O(K)} time, and a polynomial-time approximation scheme exists for any constant dimension d [50]. The 2D case is solvable in polynomial time with a DP algorithm, with a complexity in O(KN^2) time and O(KN) space [49]. The time complexity of this DP algorithm was improved to O(KN + N log N) by [51], and to O(K(N − K) + N log N) by [52].
The selection of points in a 2D PF, maximizing the diversity, can also be formulated using p-dispersion problems. The Max–Min and Max–Sum p-dispersion problems are NP-hard [53,54], and remain NP-hard when the distances fulfill the triangle inequality [53,54]. The planar (2D) Max–Min p-dispersion problem is also NP-hard [9]. The one-dimensional (1D) cases of the Max–Min and Max–Sum p-dispersion problems are solvable in polynomial time, with a similar DP algorithm running in O(max{pN, N log N}) time [8,9]. In a 2D PF, Max–Min p-dispersion was proven to be solvable in polynomial time, with a DP algorithm running in O(pN log N) time and O(N) space [55]. Other variants of p-dispersion problems were also proven to be solvable in polynomial time in a 2D PF using DP algorithms [55].
Similar results exist for k-means, k-medoid and k-median clustering. K-means is NP-hard for 2D cases, and thus for 3D PF [44]. K-median and K-medoid problems have been known to be NP-hard in dimension 2 since [17], whereas the specific case of a 2D PF was proven to be solvable in O(N^3) time with DP algorithms [11,56]. The restriction of k-means to a 2D PF would also be solvable in O(N^3) time with a DP algorithm if a conjecture were proven [57]. We note that an affine 2D PF is a line in R^2, where clustering is equivalent to the 1D cases. 1D k-means was proven to be solvable in polynomial time with a DP algorithm in O(KN^2) time and O(KN) space. This complexity was improved with a DP algorithm in O(KN) time and O(N) space [58]. This is thus the complexity of K-means in an affine 2D PF.

4. Intermediate Results

4.1. Indexation and Distances in a 2D PF

Lemma 1.
≼ is an order relation, and ≺ is a transitive relation:
∀ x, y, z ∈ R^2, x ≺ y and y ≺ z ⟹ x ≺ z (14)
Lemma 1 implies an order among the points of E, allowing for a re-indexation in O(N log N) time
Proposition 1
(Total order). The points (x_i) can be re-indexed in O(N log N) time, such that
∀(i_1, i_2) ∈ [[1,N]]^2, i_1 < i_2 ⟹ x_{i_1} ≺ x_{i_2} (15)
∀(i_1, i_2) ∈ [[1,N]]^2, i_1 ≤ i_2 ⟹ x_{i_1} ≼ x_{i_2} (16)
Proof. 
We index E such that the first coordinate is increasing. This sorting procedure runs in O(N log N) time. Let (i_1, i_2) ∈ [[1,N]]^2, with i_1 < i_2. We thus have x_{i_1}^1 < x_{i_2}^1. Having x_{i_1} I x_{i_2} implies that x_{i_1}^2 > x_{i_2}^2. x_{i_1}^1 < x_{i_2}^1 and x_{i_1}^2 > x_{i_2}^2 is, by definition, x_{i_1} ≺ x_{i_2}. □
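The re-indexation of Proposition 1 is a plain sort on the first objective. A minimal sketch (the helper name is ours), including a sanity check of the 2D PF hypothesis:

```python
def reindex_pf(points):
    # O(N log N) sort by increasing first objective (re-indexation of Proposition 1)
    E = sorted(points)
    # on a 2D PF, the second objective must then strictly decrease
    for (a1, a2), (b1, b2) in zip(E, E[1:]):
        assert a1 < b1 and a2 > b2, "points are not pairwise incomparable"
    return E
```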
The re-indexation also implies monotonic relations among the distances between points of the 2D PF
Lemma 2.
We suppose that E is re-indexed as in Proposition 1. Letting d be a Minkowski, Euclidean or Chebyshev distance, we obtain the following monotonicity relations
∀(i_1, i_2, i_3) ∈ [[1,N]]^3, i_1 ≤ i_2 < i_3 ⟹ d(x_{i_1}, x_{i_2}) < d(x_{i_1}, x_{i_3}) (17)
∀(i_1, i_2, i_3) ∈ [[1,N]]^3, i_1 < i_2 ≤ i_3 ⟹ d(x_{i_2}, x_{i_3}) < d(x_{i_1}, x_{i_3}) (18)
Proof. 
We first note that the equality cases are trivial, so we can suppose that i_1 < i_2 < i_3 in the following proof. We prove property (17); the proof of (18) is analogous.
Let i_1 < i_2 < i_3. We write x_{i_1} = (x_{i_1}^1, x_{i_1}^2), x_{i_2} = (x_{i_2}^1, x_{i_2}^2) and x_{i_3} = (x_{i_3}^1, x_{i_3}^2). The re-indexation of Proposition 1 ensures x_{i_1}^1 < x_{i_2}^1 < x_{i_3}^1 and x_{i_1}^2 > x_{i_2}^2 > x_{i_3}^2. With x_{i_3}^1 − x_{i_1}^1 > x_{i_2}^1 − x_{i_1}^1 > 0, we have |x_{i_1}^1 − x_{i_2}^1| < |x_{i_1}^1 − x_{i_3}^1|. With x_{i_3}^2 − x_{i_1}^2 < x_{i_2}^2 − x_{i_1}^2 < 0, we have |x_{i_1}^2 − x_{i_2}^2| < |x_{i_1}^2 − x_{i_3}^2|. Thus, for any m > 0, d_m(x_{i_1}, x_{i_2}) < (|x_{i_1}^1 − x_{i_3}^1|^m + |x_{i_1}^2 − x_{i_3}^2|^m)^{1/m} = d_m(x_{i_1}, x_{i_3}), and also d_∞(x_{i_1}, x_{i_2}) = max(|x_{i_1}^1 − x_{i_2}^1|, |x_{i_1}^2 − x_{i_2}^2|) < max(|x_{i_1}^1 − x_{i_3}^1|, |x_{i_1}^2 − x_{i_3}^2|) = d_∞(x_{i_1}, x_{i_3}). Hence, the result is proven for the Euclidean, Minkowski and Chebyshev distances. □

4.2. Lemmas Related to Cluster Costs

This section provides the relations needed to compute or compare cluster costs. Firstly, one notes that the computation of cluster costs is easy in a 2D PF in the continuous clustering case.
Lemma 3.
Let P ⊆ E, such that card(P) ≥ 1. Let i (respectively, i′) be the minimal (respectively, maximal) index of the points of P with the indexation of Proposition 1. Then, f_C(P) can be computed with f_C(P) = (1/2) d(x_i, x_{i′}).
To prove Lemma 3, we use Lemmas 4 and 5.
Lemma 4.
Let P ⊆ E, such that card(P) ≥ 1. Let i (respectively, i′) be the minimal (respectively, maximal) index of the points of P with the indexation of Proposition 1. We denote by O = (x_i + x_{i′})/2 the midpoint of the segment [x_i, x_{i′}]. Then, using a Minkowski or Chebyshev distance d, we have for all x ∈ P: d(x, O) ≤ d(x_i, O) = d(x_{i′}, O).
Proof of Lemma 4: 
We denote r = d(x_i, O) = d(x_{i′}, O) = (1/2) d(x_i, x_{i′}), the equality being trivial as the points O, x_i, x_{i′} are on a line and d is a norm-induced distance. Let x ∈ P. We calculate the distances in a new system of coordinates, translating the original coordinates so that O is the new origin (which is compatible with the definition of Pareto optimality). x_i and x_{i′} have coordinates (−a, b) and (a, −b) in the new coordinate system, with a, b > 0 and a^m + b^m = r^m if a Minkowski distance is used, or max(a, b) = r for the Chebyshev distance. We denote by (a′, b′) the coordinates of x. x_i ≼ x ≼ x_{i′} implies that −a ≤ a′ ≤ a and −b ≤ b′ ≤ b, i.e., |a′| ≤ a and |b′| ≤ b, which implies d(x, O) ≤ r, using Minkowski or Chebyshev distances. □
Lemma 5.
Let P ⊆ E, such that card(P) ≥ 1. Let i (respectively, i′) be the minimal (respectively, maximal) index of the points of P with the indexation of Proposition 1. We denote by O = (x_i + x_{i′})/2 the midpoint of the segment [x_i, x_{i′}]. Then, using a Minkowski or Chebyshev distance d, we have for all y ∈ R^2: d(x_i, O) = d(x_{i′}, O) ≤ max(d(x_i, y), d(x_{i′}, y)).
Proof of Lemma 5: 
As previously, let r = d(x_i, O) = d(x_{i′}, O) = (1/2) d(x_i, x_{i′}). Let y ∈ R^2. We have to prove that d(x_i, y) ≥ r or d(x_{i′}, y) ≥ r. Supposing that d(x_i, y) < r implies that y ≺ O. Then, having y ≺ O ≼ x_{i′} implies d(x_{i′}, y) > d(x_{i′}, O) = r, with the monotonicity argument of Lemma 2. □
Proof of Lemma 3: 
We first note that f_C(P) = min_{y∈R^2} max_{x∈P} d(x, y) ≤ max_{x∈P} d(x, O), using the particular point O = (x_i + x_{i′})/2. Using Lemma 4, max_{x∈P} d(x, O) ≤ r, and thus f_C(P) ≤ r, with r = d(x_i, O) = d(x_{i′}, O) = (1/2) d(x_i, x_{i′}). Reciprocally, for all y ∈ R^2, r ≤ max(d(x_i, y), d(x_{i′}, y)) using Lemma 5, and thus r ≤ max_{x∈P} d(x, y). This implies that r ≤ min_{y∈R^2} max_{x∈P} d(x, y) = f_C(P). □
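Lemma 3 gives the continuous cost in O(1) from the two extreme points, the midpoint of the extremes being an optimal center. A small numerical check of this closed form with the Euclidean distance (helper names are ours):

```python
import math

def euclid(y, z):
    return math.hypot(y[0] - z[0], y[1] - z[1])

def f_C(P, d=euclid):
    # Lemma 3: f_C(P) = d(x_i, x_i') / 2, with x_i, x_i' the two extreme points
    P = sorted(P)  # re-indexation of Proposition 1
    return d(P[0], P[-1]) / 2

def max_dist(P, center, d=euclid):
    # radius needed to cover P with a ball centered at `center`
    return max(d(x, center) for x in P)
```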
Lemma 6.
Let P ⊆ E, such that card(P) ≥ 3. Let i (respectively, i′) be the minimal (respectively, maximal) index of the points of P. Then
f_D(P) = min_{j∈[[i+1,i′−1]], x_j∈P} max(d(x_j, x_i), d(x_j, x_{i′})) (19)
Proof. 
Let y ∈ P ∖ {x_i, x_{i′}}. We denote by j ∈ [[i, i′]] the index such that y = x_j. Applying Lemma 2 to i < j < i′, for all k ∈ [[i, i′]], we have d(x_j, x_k) ≤ max(d(x_j, x_i), d(x_j, x_{i′})). Then
f_D(P) = min_{j∈[[i,i′]], x_j∈P} max_{x∈P} d(x, x_j)
f_D(P) = min_{j∈[[i,i′]], x_j∈P} max(max(d(x_j, x_i), d(x_j, x_{i′})), max_{k∈[[i,i′]], x_k∈P} d(x_j, x_k))
f_D(P) = min_{j∈[[i,i′]], x_j∈P} max(d(x_j, x_i), d(x_j, x_{i′}))
Lastly, we notice that the extreme points are not optimal centers. Indeed, max(d(x_i, x_i), d(x_i, x_{i′})) = d(x_i, x_{i′}) > max(d(x_{i+1}, x_i), d(x_{i+1}, x_{i′})) with Lemma 2, i.e., i is not optimal in the last minimization, being dominated by i + 1. Similarly, i′ is dominated by i′ − 1. □
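Lemma 6 reduces the discrete cost to distances toward the two extreme points. A quick cross-check against the direct definition of f_D on a small 2D PF (helper names are ours):

```python
def f_D_naive(P, d):
    # direct definition: best center among the points themselves
    return min(max(d(x, y) for x in P) for y in P)

def f_D_extremes(P, d):
    # Lemma 6: only the distances to the two extreme points matter (card(P) >= 3)
    P = sorted(P)  # re-indexation of Proposition 1
    lo, hi = P[0], P[-1]
    return min(max(d(x, lo), d(x, hi)) for x in P[1:-1])
```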
Lemma 7.
Let γ ∈ {0,1} and let P′ ⊆ P ⊆ E. We have f_γ(P′) ≤ f_γ(P).
Proof. 
Using the order of Proposition 1, let i (respectively, i′) be the minimal index of the points of P (respectively, P′), and let j (respectively, j′) be the maximal index of the points of P (respectively, P′). f_C(P′) ≤ f_C(P) is trivial using Lemmas 2 and 3. To prove f_D(P′) ≤ f_D(P), we use i ≤ i′ ≤ j′ ≤ j and Lemmas 2 and 6:
f_D(P′) = min_{k∈[[i′,j′]], x_k∈P′} max(d(x_k, x_{i′}), d(x_{j′}, x_k))
≤ min_{k∈[[i′,j′]], x_k∈P′} max(d(x_k, x_i), d(x_j, x_k))
≤ min_{k∈[[i,j]], x_k∈P} max(d(x_k, x_i), d(x_j, x_k)) = f_D(P)
□
Lemma 8.
Let γ ∈ {0,1}. Let P ⊆ E, such that card(P) ≥ 1. Let i (respectively, i′) be the minimal (respectively, maximal) index of the points of P. For all P′ ⊆ P, such that x_i, x_{i′} ∈ P′, we have f_γ(P′) = f_γ(P).
Proof. 
Let P′ ⊆ P such that x_i, x_{i′} ∈ P′. With Lemma 7, we have f_γ(P′) ≤ f_γ(P). f_C(P′) = f_C(P) is trivial using Lemma 3, so that we only have to prove f_D(P) ≤ f_D(P′):
f_D(P) = min_{k∈[[i,i′]], x_k∈P} max(d(x_k, x_i), d(x_k, x_{i′})) ≤ min_{k∈[[i,i′]], x_k∈P′} max(d(x_k, x_i), d(x_k, x_{i′})) = f_D(P′) □

4.3. Optimality of Non-Nested Clustering

In this section, we prove that the non-nested clustering property, the extension of interval clustering from 1D to a 2D PF, allows for the computation of optimal solutions, which will be a key element for a DP algorithm. For (partial) p-center problems, i.e., K-M-max-(α,γ)-BC2DPF, optimal solutions may exist without fulfilling the non-nested property, whereas for K-M-+-(α,0)-BC2DPF problems, the non-nested property is a necessary condition of optimality.
Lemma 9.
Let γ ∈ {0,1}; let M ≥ 0. There is an optimal solution of 1-M-⊕-(α,γ)-BC2DPF of the shape C_{i,i′} = {x_j}_{j∈[[i,i′]]} = {x ∈ E | ∃j ∈ [[i,i′]], x = x_j}, with i′ − i + 1 ≥ N − M.
Proof. 
Let C ⊆ E define an optimal solution of 1-M-⊕-(α,γ)-BC2DPF, with OPT the optimal cost, |C| ≥ N − M and f_γ(C) = OPT. Let i (respectively, i′) be the minimal (respectively, maximal) index of the points of C, using the order of Proposition 1. C ⊆ C_{i,i′} with x_i, x_{i′} ∈ C, so Lemma 8 applies and f_γ(C_{i,i′}) = f_γ(C) = OPT. |C_{i,i′}| ≥ |C| ≥ N − M; thus, C_{i,i′} defines an optimal solution of 1-M-⊕-(α,γ)-BC2DPF. □
Proposition 2.
Let E = (x_i) be a 2D PF, re-indexed as in Proposition 1. There are optimal solutions of K-M-⊕-(α,γ)-BC2DPF using only clusters of the shape C_{i,i′} = {x_j}_{j∈[[i,i′]]} = {x ∈ E | ∃j ∈ [[i,i′]], x = x_j}.
Proof. 
We prove the result by induction on K ∈ N*. For K = 1, Lemma 9 gives the initialization.
Let us suppose K > 1, with the Induction Hypothesis (IH) that Proposition 2 is true for the (K−1)-M′-⊕-(α,γ)-BC2DPF problems for any M′ ≥ 0. Let π be an optimal solution of K-M-⊕-(α,γ)-BC2DPF, and let OPT be the optimal cost. Let X ⊆ E be the subset of the non-selected points, |X| ≤ M, and C_1, …, C_K the K subsets defining the costs, so that X, C_1, …, C_K is a partition of E and ⊕_{k=1}^{K} f_γ(C_k)^α = OPT. Let N′ be the maximal index such that x_{N′} ∉ X; necessarily, N′ ≥ N − M. We reindex the clusters C_k such that x_{N′} ∈ C_K. Let i be the minimal index such that x_i ∈ C_K.
We consider the subsets C′_K = {x_j}_{j∈[[i,N′]]}, X′ = X ∩ {x_j}_{j∈[[1,i−1]]} and C′_k = C_k ∩ {x_j}_{j∈[[1,i−1]]} for all k ∈ [[1,K−1]]. It is clear that X′, C′_1, …, C′_{K−1} is a partition of {x_j}_{j∈[[1,i−1]]}, and that X′ ∪ {x_j}_{j∈[[N′+1,N]]}, C′_1, …, C′_K is a partition of E. For all k ∈ [[1,K−1]], C′_k ⊆ C_k, so that f_γ(C′_k) ≤ f_γ(C_k) (Lemma 7); furthermore, C_K ⊆ C′_K with x_i, x_{N′} ∈ C_K, so that f_γ(C′_K) = f_γ(C_K) (Lemma 8).
X′ ∪ {x_j}_{j∈[[N′+1,N]]}, C′_1, …, C′_K is a partition of E with at most M non-selected points, and ⊕_{k=1}^{K} f_γ(C′_k)^α ≤ OPT, so that C′_1, …, C′_K is an optimal solution of K-M-⊕-(α,γ)-BC2DPF. C′_1, …, C′_{K−1} is an optimal solution of the (K−1)-|X′|-⊕-(α,γ)-BC2DPF problem applied to the points E′ = ⋃_{k=1}^{K−1} C′_k ∪ X′ = {x_j}_{j∈[[1,i−1]]}. Letting OPT′ be the optimal cost of this (K−1)-|X′|-⊕-(α,γ)-BC2DPF problem, we have OPT = OPT′ ⊕ f_γ(C′_K)^α. Applying the IH for (K−1)-|X′|-⊕-(α,γ)-BC2DPF to the points E′, there is an optimal solution C″_1, …, C″_{K−1} among E′ using only clusters of the shape C_{i,i′} = {x_j}_{j∈[[i,i′]]} = {x ∈ E | ∃j ∈ [[i,i′]], x = x_j}. We have ⊕_{k=1}^{K−1} f_γ(C″_k)^α = OPT′, and thus (⊕_{k=1}^{K−1} f_γ(C″_k)^α) ⊕ f_γ(C′_K)^α = OPT. C″_1, …, C″_{K−1}, C′_K is thus an optimal solution of K-M-⊕-(α,γ)-BC2DPF in E using only clusters of the shape C_{i,i′}. Hence, the result is proven by induction. □
Proposition 3.
There is an optimal solution of K-M-⊕- ( α , 0 ) -BC2DPF, removing exactly M points in the partial clustering.
Proof. 
Starting with an optimal solution of K-M-⊕-(α, 0)-BC2DPF, let OPT be the optimal cost, let X ⊆ E be the subset of the non-selected points, |X| ≤ M, and let C_1, …, C_K be the K subsets defining the costs, so that X, C_1, …, C_K is a partition of E. Removing arbitrarily M − |X| points from C_1, …, C_K, we obtain clusters C′_1, …, C′_K such that, for all k ∈ [[1, K]], C′_k ⊆ C_k, and thus f_0(C′_k) ≤ f_0(C_k) (Lemma 7). This implies ⊕_{k=1}^{K} f_0(C′_k)^α ≤ ⊕_{k=1}^{K} f_0(C_k)^α = OPT, and thus the clusters C′_1, …, C′_K and the outliers X′ = E ∖ ⋃_k C′_k define an optimal solution of K-M-⊕-(α, 0)-BC2DPF with exactly M outliers. □
Reciprocally, one may investigate whether the optimality conditions of Propositions 2 and 3 are necessary. They are not necessary in general. For instance, with E = {(3,1); (2,2); (1,3)}, K = M = 1 and the discrete function f_D, i.e., γ = 1, the selection of each pair of points defines an optimal solution, with the same cost as the selection of the three points; the latter solution does not fulfill the property of Proposition 3. The optimal solution selecting the two extreme points does not fulfill the property of Proposition 2 either. The optimality conditions are necessary in the case of sum-clustering, using the continuous measure of the enclosing disk.
Proposition 4.
Let an optimal solution of K-M-+-(α, 0)-BC2DPF be defined with X ⊆ E as the subset of outliers, with |X| ≤ M, and C_1, …, C_K as the K subsets defining the optimal cost. We then have:
(i) |E ∖ ⋃_{k=1}^{K} C_k| = M; in other words, exactly M points are not selected.
(ii) For each k ∈ [[1, K]], defining i_k = min{i ∈ [[1, N]] | x_i ∈ C_k} and j_k = max{i ∈ [[1, N]] | x_i ∈ C_k}, we have C_k = {x_i}_{i∈[[i_k,j_k]]}.
Proof. 
Starting with an optimal solution of K-M-+-(α, 0)-BC2DPF, let OPT be the optimal cost, let X ⊆ E be the subset of the non-selected points, |X| ≤ M, and let C_1, …, C_K be the K subsets defining the costs, so that X, C_1, …, C_K is a partition of E. We prove (i) and (ii) ad absurdum.
If |X| < M, one may remove one extreme point of the cluster C_1, defining C′_1. With Lemmas 2 and 3, we have f_C(C′_1) < f_C(C_1), and f_C(C′_1)^α + Σ_{k=2}^{K} f_C(C_k)^α < f_C(C_1)^α + Σ_{k=2}^{K} f_C(C_k)^α = OPT. This is in contradiction with the optimality of C_1, …, C_K, since C′_1, C_2, …, C_K defines a strictly better solution for K-M-+-(α, 0)-BC2DPF. (i) is thus proven ad absurdum.
If (ii) is not fulfilled by a cluster C_k, there is x_i ∉ C_k with i ∈ [[i_k, j_k]]. If x_i ∈ X, we obtain a strictly better solution than the optimal one with X′ = X ∪ {x_{i_k}} ∖ {x_i} and C′_k = C_k ∪ {x_i} ∖ {x_{i_k}}. If x_i ∈ C_l with l ≠ k, we have nested clusters C_l and C_k. We suppose that i_k < i_l (otherwise, the reasoning is symmetrical). We define a strictly better solution than the optimal one with C′_l = C_l ∪ {x_i} ∖ {x_{i_l}} and C′_k = C_k ∪ {x_{i_l}} ∖ {x_i}. (ii) is thus proven ad absurdum. □

4.4. Computation of Cluster Costs

Using Proposition 2, only the costs of clusters of the shape C_{i,i′} have to be computed. This section presents the efficient computation of such cluster costs. Once the points are sorted using Proposition 1, each cluster cost f_C(C_{i,i′}) can be computed in O(1) time using Lemma 3. This yields a time complexity in O(N²) to compute all the cluster costs f_C(C_{i,i′}) for 1 ≤ i ≤ i′ ≤ N.
Equation (19) ensures that each cluster cost f_D(C_{i,i′}) can be computed in O(i′ − i) time for all i < i′. Actually, Algorithm 1 and Proposition 5 allow for computations in O(log(i′ − i)) time once the points are sorted following Proposition 1, with a dichotomic and logarithmic search.
Lemma 10.
Let (i, i′) with i < i′. The application f_{i,i′} : j ∈ [[i, i′]] ↦ max(d(x_j, x_i), d(x_j, x_{i′})) decreases before reaching a minimum f_{i,i′}(l), with possibly f_{i,i′}(l+1) = f_{i,i′}(l), and then increases for j ∈ [[l+1, i′]].
Proof. We define g_{i,i′} : j ∈ [[i, i′]] ↦ d(x_j, x_i) and h_{i,i′} : j ∈ [[i, i′]] ↦ d(x_j, x_{i′}).
Let i < i′. Lemma 2, applied to x_i and any consecutive j, j+1 with i ≤ j < i′, ensures that g_{i,i′} is increasing. Similarly, Lemma 2, applied to x_{i′}, ensures that h_{i,i′} is decreasing.
Let A = {j ∈ [[i, i′]] | ∀m ∈ [[i, j]], g_{i,i′}(m) < h_{i,i′}(m)}. g_{i,i′}(i) = 0 and h_{i,i′}(i) = d(x_i, x_{i′}) > 0, so that i ∈ A. A is a non-empty and bounded subset of N, so that A has a maximum; we note l = max A. h_{i,i′}(i′) = 0 and g_{i,i′}(i′) = d(x_i, x_{i′}) > 0, so that i′ ∉ A and l < i′.
Let j ∈ [[i, l−1]]. g_{i,i′}(j) < g_{i,i′}(j+1) and h_{i,i′}(j+1) < h_{i,i′}(j), using the monotony of g_{i,i′} and h_{i,i′}. f_{i,i′}(j+1) = max(g_{i,i′}(j+1), h_{i,i′}(j+1)) = h_{i,i′}(j+1) and f_{i,i′}(j) = max(g_{i,i′}(j), h_{i,i′}(j)) = h_{i,i′}(j), as j, j+1 ∈ A. Hence, f_{i,i′}(j+1) = h_{i,i′}(j+1) < h_{i,i′}(j) = f_{i,i′}(j). This proves that f_{i,i′} is decreasing in [[i, l]].
Since l = max A, we have l+1 ∉ A, and thus g_{i,i′}(l+1) ≥ h_{i,i′}(l+1).
Let j ∈ [[l+1, i′−1]]. j+1 > j ≥ l+1, so g_{i,i′}(j+1) > g_{i,i′}(j) ≥ g_{i,i′}(l+1) ≥ h_{i,i′}(l+1) ≥ h_{i,i′}(j) > h_{i,i′}(j+1), using the monotony of g_{i,i′} and h_{i,i′}. Thus, f_{i,i′}(j) = g_{i,i′}(j) for all j ∈ [[l+1, i′]]. This proves that f_{i,i′} is increasing in [[l+1, i′]].
Lastly, the minimum of f_{i,i′} is reached in l or in l+1, depending on the sign of f_{i,i′}(l+1) − f_{i,i′}(l). If f_{i,i′}(l+1) = f_{i,i′}(l), there are two minimizers, l and l+1. Otherwise, there is a unique minimizer l_0 ∈ {l, l+1}, and f_{i,i′} decreases before increasing. □
Algorithm 1: Computation of f_D(C_{i,i′})
input: indexes i < i′, a distance d
output: the cost f_D(C_{i,i′})
   
    define i̲ := i, v̲ := d(x_i, x_{i′}), i¯ := i′, v¯ := d(x_i, x_{i′})
   while i¯ − i̲ ≥ 2
       i″ := ⌊(i¯ + i̲)/2⌋
      if d(x_{i″}, x_i) < d(x_{i″}, x_{i′}) then i̲ := i″ and v̲ := d(x_{i″}, x_{i′})
          else i¯ := i″ and v¯ := d(x_{i″}, x_i)
   end while
return min(v̲, v¯)
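The dichotomic search above can be sketched in Python as follows. This is an illustrative sketch (not the authors' implementation), assuming 0-based indices, the Euclidean distance, and points already sorted in the order of Proposition 1; the function name is ours.

```python
import math

def discrete_cluster_cost(pts, i, ip):
    """Sketch of Algorithm 1 (0-based indices): cost f_D(C_{i,i'}), using the
    unimodality of f_{i,i'}(j) = max(d(x_j, x_i), d(x_j, x_{i'})) (Lemma 10);
    pts is assumed sorted in the order of Proposition 1."""
    d = math.dist
    lo, v_lo = i, d(pts[i], pts[ip])   # f_{i,i'}(i)  = d(x_i, x_{i'})
    hi, v_hi = ip, d(pts[i], pts[ip])  # f_{i,i'}(i') = d(x_i, x_{i'})
    while hi - lo >= 2:
        mid = (lo + hi) // 2
        if d(pts[mid], pts[i]) < d(pts[mid], pts[ip]):  # decreasing phase of f
            lo, v_lo = mid, d(pts[mid], pts[ip])
        else:                                           # increasing phase of f
            hi, v_hi = mid, d(pts[mid], pts[i])
    return min(v_lo, v_hi)
```

Each iteration evaluates only two distances, matching the improvement discussed in Remark 1.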
Proposition 5.
Let E = {x_1, …, x_N} be N points of R², such that for all i < j, x_i ≺ x_j. Computing the cost f_D(C_{i,i′}) for any cluster C_{i,i′} has a complexity in O(log(i′ − i)) time, using O(1) additional memory space.
Proof. 
Let i < i′. Let us prove the correctness and complexity of Algorithm 1. Algorithm 1 is a dichotomic and logarithmic search; it iterates O(log(i′ − i)) times, with each iteration running in O(1) time. The correctness of Algorithm 1 is a consequence of Lemma 10 and of the following loop invariant: there exists a minimizer j* of f_{i,i′} with i̲ ≤ j* ≤ i¯, with also v̲ = f_{i,i′}(i̲) and v¯ = f_{i,i′}(i¯). By construction in Algorithm 1, we always have d(x_{i̲}, x_i) < d(x_{i̲}, x_{i′}), and thus f_{i,i′}(i̲) = d(x_{i̲}, x_{i′}). This implies that f_{i,i′}(i̲ − 1) = d(x_{i̲−1}, x_{i′}) > f_{i,i′}(i̲), and thus i̲ ≤ j*, using Lemma 10. Similarly, we always have d(x_{i¯}, x_i) ≥ d(x_{i¯}, x_{i′}), and thus f_{i,i′}(i¯) = d(x_{i¯}, x_i) and f_{i,i′}(i¯ + 1) = d(x_{i¯+1}, x_i) > f_{i,i′}(i¯), so that j* ≤ i¯ with Lemma 10. At the convergence of the dichotomic search, i¯ − i̲ = 1 and j* is i̲ or i¯; therefore, the optimal value is f_D(C_{i,i′}) = f_{i,i′}(j*) = min(v̲, v¯). □
Remark 1.
Algorithm 1 improves the previously proposed binary search algorithm [10]. While it has the same logarithmic complexity, it requires two times fewer calls of the distance function. Indeed, the previous version of the dichotomic algorithm computed, at each iteration, f_{i,i′}(i″) and f_{i,i′}(i″ + 1) to determine whether i″ is in the increasing or the decreasing phase of f_{i,i′}. In Algorithm 1, the computations provided at each iteration are equivalent to the evaluation of only f_{i,i′}(i″), computing d(x_{i″}, x_i) and d(x_{i″}, x_{i′}).
Proposition 5 allows the computation of f_D(C_{i,i′}) for all i < i′ in O(N² log N) time. Now, we prove that the costs f_D(C_{i,i′}) for all i < i′ can be computed in O(N²) time instead of O(N² log N), with N independent computations, each computing one line of the cost matrix in O(N) time. Two schemes are proposed: Algorithm 2 computes f_D(C_{j,j′})^α for all j′ ∈ [[j; N]] for a given j ∈ [[1; N]], and Algorithm 3 computes f_D(C_{j,j′})^α for all j ∈ [[1; j′]] for a given j′ ∈ [[1; N]].
Lemma 11.
Let i, i′ ∈ [[1, N]], with i + 1 < i′. Let c ∈ [[i+1, i′−1]], such that f_{i,i′}(c) = f_D(C_{i,i′}).
(i) If i′ < N, then there is c′, such that c ≤ c′ ≤ i′, with f_{i,i′+1}(c′) = min_{l∈[[i+1,i′]]} f_{i,i′+1}(l) = f_D(C_{i,i′+1}).
(ii) If i > 1, then there is c′, such that i ≤ c′ ≤ c, with f_{i−1,i′}(c′) = min_{l∈[[i,i′−1]]} f_{i−1,i′}(l) = f_D(C_{i−1,i′}).
Proof. 
We prove (i); we suppose that i′ < N, and we prove that, for all c″ < c, f_{i,i′+1}(c″) ≥ f_{i,i′+1}(c), so that either c is a minimizer of f_{i,i′+1} or a minimizer greater than c exists. (ii) is similarly proven. Let c″ < c. f_{i,i′}(c) = f_D(C_{i,i′}) implies f_{i,i′}(c″) ≥ f_{i,i′}(c) and, with Lemma 10, f_{i,i′} is decreasing in [[c″, c]], i.e., f_{i,i′}(c‴) = d(x_{c‴}, x_{i′}) for all c‴ ∈ [[c″, c]]. We thus have d(x_{c″}, x_{i′}) ≥ d(x_c, x_{i′}) and, with Lemma 2, d(x_{c″}, x_{i′+1}) ≥ d(x_c, x_{i′+1}). Furthermore, d(x_{c″}, x_i) ≤ d(x_{c″}, x_{i′}) ≤ d(x_{c″}, x_{i′+1}), and thus f_{i,i′+1}(c″) = d(x_{c″}, x_{i′+1}). f_{i,i′}(c) = d(x_c, x_{i′}) implies that d(x_c, x_i) ≤ d(x_c, x_{i′}) ≤ d(x_c, x_{i′+1}), and thus f_{i,i′+1}(c) = d(x_c, x_{i′+1}). Hence, f_{i,i′+1}(c″) = d(x_{c″}, x_{i′+1}) ≥ d(x_c, x_{i′+1}) = f_{i,i′+1}(c). □
Proposition 6.
Let E = {x_1, …, x_N} be N points of R², such that for all i < j, x_i ≺ x_j. Algorithm 2 computes f_D(C_{j,j′})^α for all j′ ∈ [[j; N]] for a given j ∈ [[1; N]] in O(N) time, using O(N) memory space.
Proof. 
The validity of Algorithm 2 is based on Lemmas 10 and 11: once a discrete center c is known for f_D(C_{j,j′})^α, a center c′ of f_D(C_{j,j′+1})^α can be found with c′ ≥ c, and Lemma 10 gives the stopping criterion proving a discrete center. Let us prove the time complexity; the space complexity is obviously within O(N) memory space. In Algorithm 2, each computation f_{j,j′}(curCtr) runs in O(1) time; we have to count the number of calls of this function. In each loop in j′, one computation is used for the initialization; the total number of calls for these initializations is N − j ≤ N. Then, denoting c_N ≤ N the center found for C_{j,N}, the total number of iterations of the inner loops is at most c_N − j ≤ N. Lastly, there are less than 2N calls of f_{j,j′}(curCtr); Algorithm 2 runs in O(N) time. □
Algorithm 2: Computing f_D(C_{j,j′})^α for all j′ ∈ [[j; N]] for a given j ∈ [[1; N]]
Input: E = {x_1, …, x_N}, N points of R² indexed with Proposition 1, j ∈ [[1; N]], α > 0
Output: for all j′ ∈ [[j; N]], v_{j′} = f_D(C_{j,j′})^α
  
   define vector v with v_{j′} := 0 for all j′ ∈ [[j; N]]
   define curCtr := j + 1, curCost := 0
  for j′ := j + 1 to N
       curCost := f_{j,j′}(curCtr)
      while curCost ≥ f_{j,j′}(curCtr + 1)
          curCtr := curCtr + 1
          curCost := f_{j,j′}(curCtr)
      end while
       v_{j′} := curCost^α
    end for
return vector v
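Algorithm 2 can be sketched in Python as follows; this is an illustrative sketch under our assumptions (0-based indices, Euclidean distance, a guard added so the sliding center never leaves the cluster), with function names of our own.

```python
import math

def row_of_discrete_costs(pts, j, alpha=1.0):
    """Sketch of Algorithm 2 (0-based): v[j'] = f_D(C_{j,j'})^alpha for all j' >= j,
    sliding the discrete center monotonically to the right (Lemma 11),
    for an amortized O(N) total cost; pts is assumed sorted (Proposition 1)."""
    d = math.dist
    n = len(pts)
    f = lambda jp, c: max(d(pts[c], pts[j]), d(pts[c], pts[jp]))  # f_{j,j'}(c)
    v = [0.0] * n                      # v[j] = 0: a singleton cluster has cost 0
    cur_ctr = j                        # current discrete center
    for jp in range(j + 1, n):
        cur_cost = f(jp, cur_ctr)
        # advance the center while it does not worsen the cost (Lemma 10)
        while cur_ctr + 1 <= jp and cur_cost >= f(jp, cur_ctr + 1):
            cur_ctr += 1
            cur_cost = f(jp, cur_ctr)
        v[jp] = cur_cost ** alpha
    return v[j:]
```

The symmetric scheme of Algorithm 3 is obtained by iterating j downwards from j′ − 1 and sliding the center to the left.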
Proposition 7.
Let E = {x_1, …, x_N} be N points of R², such that for all i < j, x_i ≺ x_j. Algorithm 3 computes f_D(C_{j,j′})^α for all j ∈ [[1; j′]] for a given j′ ∈ [[1; N]] in O(N) time, using O(N) memory space.
Algorithm 3: Computing f_D(C_{j,j′})^α for all j ∈ [[1; j′]] for a given j′ ∈ [[1; N]]
Input: E = {x_1, …, x_N}, N points of R² indexed with Proposition 1, j′ ∈ [[1; N]], α > 0
Output: for all j ∈ [[1; j′]], v_j = f_D(C_{j,j′})^α
  
   define vector v with v_j := 0 for all j ∈ [[1; j′]]
   define curCtr := j′ − 1, curCost := 0
  for j := j′ − 1 down to 1
       curCost := f_{j,j′}(curCtr)
      while curCost ≥ f_{j,j′}(curCtr − 1)
          curCtr := curCtr − 1
          curCost := f_{j,j′}(curCtr)
      end while
       v_j := curCost^α
    end for
return vector v
Proof. 
The proof is analogous to that of Proposition 6, using Algorithm 3 and Lemma 11 (ii) instead of Algorithm 2 and Lemma 11 (i). □

5. Particular Sub-Cases

Some particular sub-cases have specific complexity results, which are presented in this section.

5.1. Sub-Cases with K = 1

We first note that the sub-cases K = 1 show no difference between the 1-0-+-(α, γ)-BC2DPF and 1-0-max-(1, γ)-BC2DPF problems, defining the continuous or the discrete version of 1-center problems. Similarly, the 1-M-+-(α, γ)-BC2DPF and 1-M-max-(1, γ)-BC2DPF problems define the continuous or the discrete version of partial 1-center problems. The 1-center optimization problem has a trivial solution: the unique partition of E in one subset is E. To solve the 1-center problem, it is necessary to compute the radius of the minimum enclosing disk covering all the points of E (centered in one point of E for the discrete version). Once the points are re-indexed with Proposition 1, the cost computation is in O(1) time for the continuous version using Lemma 3, and in O(log N) time for the discrete version using Proposition 5. The cost of the re-indexation, in O(N log N) time, dominates the overall time complexity with such an approach. One may improve this complexity without re-indexing E.
Proposition 8.
Let E = {x_1, …, x_N} be a subset of N points of R², such that for all i ≠ j, x_i I x_j. The 1-0-⊕-(α, γ)-BC2DPF problems are solvable in O(N) time, using O(1) additional memory space.
Proof. 
Using Lemma 3 or Lemma 6, the computation of f_γ is at most in O(N) time once the extreme elements following the order ≺ have been computed. The computation of the extreme points is also in O(N) time, with one traversal of the elements of E, storing only the current minimal and maximal elements for the order relation ≺. Finally, 1-center problems are solvable in linear time. □
Proposition 9.
Let M ∈ N*, and let E = {x_1, …, x_N} be a subset of N points of R², such that for all i ≠ j, x_i I x_j. The continuous partial 1-center, i.e., the 1-M-⊕-(α, 0)-BC2DPF problems, is solvable in O(N min(M, log N)) time. The discrete partial 1-center, i.e., the 1-M-⊕-(α, 1)-BC2DPF problems, is solvable in O(N log N) time.
Proof. 
Using Proposition 2, partial 1-center problems are computed equivalently with min_{m∈[[0;M]]} f_γ(C_{1+m, N−M+m})^α.
For the continuous and the discrete case, re-indexing the whole PF with Proposition 1 runs in O(N log N) time, leading to M + 1 cost computations in O(1) or O(log(N − M)) time, which are dominated by the complexity of re-indexing. The time complexity for both cases is thus at most in O(N log N). In the continuous case, i.e., γ = 0, one only requires the M + 1 minimal and maximal points for the total order ≺ to compute the cluster costs. If M < log N, one may use one traversal of E, storing the current M + 1 minimal and maximal points, which has a complexity in O(MN) time. Choosing the best among the two possible algorithms, the time complexity is in O(N min(M, log N)). □
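The enumeration above can be sketched in Python for the continuous case. This is an illustrative sketch under our assumptions: 0-based indices, Euclidean distance, and the continuous cluster cost taken as half the distance between the two extreme points; the function name is ours.

```python
import math

def continuous_partial_one_center(points, M):
    """Sketch of the continuous partial 1-center (Proposition 9, 0-based):
    drop exactly M points, m from the front and M-m from the back, keeping
    the cluster C_{1+m, N-M+m}; the continuous cost of a cluster is assumed
    to be half the distance between its two extreme points."""
    pts = sorted(points)  # order of Proposition 1 for a 2D PF
    n = len(pts)
    return min(math.dist(pts[m], pts[n - M + m - 1]) / 2 for m in range(M + 1))
```

A single traversal keeping the M + 1 extreme points on each side, as in the proof, would avoid the O(N log N) sort.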

5.2. Sub-Cases with K = 2

Specific cases with K = 2 define two clusters and one separation, as defined in Proposition 2. For these cases, specific complexity results are provided by enumerating all the possible separations.
Proposition 10.
Let E = {x_1, …, x_N} be N points of R², such that for all i ≠ j, x_i I x_j. The 2-0-⊕-(α, γ)-BC2DPF problems are solvable in O(N log N) time, using O(N^γ) additional memory space.
Proof. 
Using Proposition 2, optimal solutions exist considering two clusters C_{1,i} and C_{i+1,N}; one enumerates the possible separations i ∈ [[1; N−1]]. First, the re-indexation phase runs in O(N log N) time, which will be the bottleneck for the time complexity. Enumerating the N − 1 values f_γ(C_{1,i})^α ⊕ f_γ(C_{i+1,N})^α and storing the minimal value induces N − 1 computations in O(1) time for the continuous case γ = 0, using O(1) additional memory space: the current best value and the corresponding index. Considering the discrete case, one uses O(N) additional memory space to store the costs f_γ(C_{1,i})^α and f_γ(C_{i+1,N})^α, given by Algorithms 2 and 3, to maintain the time complexity result. □
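The separation enumeration can be sketched in Python for the continuous sum variant. This is an illustrative sketch under our assumptions (0-based indices, Euclidean distance, continuous cluster cost as half the distance between extremes), not the authors' implementation.

```python
import math

def two_cluster_cost(points, alpha=1.0):
    """Sketch of Proposition 10 for the continuous sum variant (0-based):
    enumerate the N-1 separations i between clusters C_{1,i} and C_{i+1,N};
    the continuous cost of a cluster is assumed to be half the distance
    between its two extreme points."""
    pts = sorted(points)  # order of Proposition 1
    n = len(pts)
    r = lambda a, b: (math.dist(pts[a], pts[b]) / 2) ** alpha  # f_0(C_{a,b})^alpha
    return min(r(0, i) + r(i + 1, n - 1) for i in range(n - 1))
```

Replacing the sum by max gives the corresponding 2-center variant.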
One can extend the previous complexity results with the partial covering extension.
Proposition 11.
Let E = {x_1, …, x_N} be a subset of N points of R², such that for all i ≠ j, x_i I x_j. The 2-M-⊕-(α, γ)-BC2DPF problems are solvable in O(N((M+1)² + log N)) time and O(N^γ) additional memory space, or in O(N((M+1)² log^γ N + log N)) time and O(1) additional memory space.
Proof. 
After the re-indexation phase, running in O(N log N) time, Proposition 2 ensures that there is an optimal solution for 2-M-⊕-(α, γ)-BC2DPF removing the m_1 ≥ 0 first indexes, the m_3 ≥ 0 last indexes, and m_2 ≥ 0 points between the two selected clusters, with m_1 + m_2 + m_3 ≤ M. Using Proposition 3, there is an optimal solution defining exactly M outliers, so that we can consider that m_1 + m_2 + m_3 = M. Denoting i as the last index of the first cluster, the first selected cluster is C_{1+m_1, i}; the second one is C_{i+m_2+1, N−M+m_1+m_2}. We have i ≥ m_1 + 1 and i + m_2 + 1 ≤ N − M + m_1 + m_2, i.e., i ≤ N − M + m_1 − 1. We denote with X the set of the feasible (i, m_1, m_2):
X = {(i, m_1, m_2) ∈ [[1; N]] × [[0; M]]², 0 ≤ m_1 + m_2 ≤ M and m_1 + 1 ≤ i ≤ N − M + m_1 − 1}
Computing an optimal solution for 2-M-⊕-(α, γ)-BC2DPF brings the following enumeration:
OPT = min_{(i,m_1,m_2)∈X} f_γ(C_{1+m_1, i})^α ⊕ f_γ(C_{i+m_2+1, N−M+m_1+m_2})^α    (20)
In the continuous case (i.e., γ = 0), we use O((M+1)²) computations to enumerate the possible m_1, m_2, and O(N) computations to enumerate the possible i once m_1, m_2 are defined. With cost computations running in O(1) time, the computation of (20) by enumeration runs in O(N(M+1)²) time, after the re-indexation in O(N log N) time. This induces the time complexity announced for γ = 0. This computation uses O(1) additional memory space, storing only the best current solution (i, m_1, m_2) ∈ X and its cost; this is also the announced memory complexity.
In the discrete case (i.e., γ = 1), we use O((M+1)²) computations to enumerate the possible m_1, m_2, and O(N) cost computations in O(log N) time to enumerate the possible i once m_1, m_2 are fixed. This uses O(1) additional memory space, and the total time complexity is in O(N(M+1)² log N). To decrease the time complexity, one can use two vectors of size N to store, for a given m_1, m_2, the cluster costs f_γ(C_{1+m_1,i})^α and f_γ(C_{i+m_2+1,N−M+m_1+m_2})^α, given by Algorithms 2 and 3, so that the total time complexity remains in O(N((M+1)² + log N)) with O(N) additional memory space. These two variants, using O(1) or O(N) additional memory space, induce the time complexities announced in Proposition 11. □

5.3. Continuous Min-Sum K-Radii on a Line

To the best of our knowledge, the 1D continuous min-sum-k-radii and min-sum-diameter problems were not previously studied. Specific properties hold, as proven in Lemma 12, allowing for a time complexity in O(N log N).
Lemma 12.
Let E = {x_1, …, x_N} be N points on a line of R², indexed such that for all i < j, x_i ≺ x_j. The min-sum-k-radii problem on a line, K-0-+-(1, 0)-BC2DPF, is equivalent to selecting the K − 1 highest values of the distances among consecutive points, the extremities of such segments defining the extreme points of the enclosing disks.
Proof. 
Let a feasible and non-nested solution of K-0-+-(1, 0)-BC2DPF be defined with the clusters C_{a_1,b_1}, C_{a_2,b_2}, …, C_{a_K,b_K}, such that 1 = a_1 ≤ b_1 < a_2 ≤ b_2 < ⋯ < a_K ≤ b_K = N. Using the alignment property, we obtain:
d(x_1, x_N) = Σ_{i=1}^{N−1} d(x_i, x_{i+1}) = Σ_{k=1}^{K} d(x_{a_k}, x_{b_k}) + Σ_{k=2}^{K} d(x_{b_{k−1}}, x_{a_k}) = 2 Σ_{k=1}^{K} f_0(C_{a_k,b_k}) + Σ_{k=2}^{K} d(x_{b_{k−1}}, x_{a_k})
d(x_1, x_N) being constant, minimizing K-0-+-(1, 0)-BC2DPF is thus equivalent to maximizing the sum of K − 1 distinct distances among consecutive points. The latter problem is solved by computing the K − 1 highest distances among consecutive points. □
Proposition 12.
Let E = { x 1 , , x N } be a subset of N points of R 2 on a line. K-0-+- ( 1 , 0 ) -BC2DPF, the continuous min-sum-k-radii, is solvable in O ( N log N ) time and O ( N ) memory space.
Proof. 
Lemma 12 ensures the validity of Algorithm 4, which determines the K − 1 highest values of the distances among consecutive points. The additional memory space in Algorithm 4 is in O(N), computing the list of consecutive distances. Sorting the distances and the re-indexation both have a time complexity in O(N log N). □
Algorithm 4: Continuous min-sum K-radii on a line
Input: K ∈ N*, N points of R² on a line, E = {x_1, …, x_N}
  
   re-index E using Proposition 1
   initialize vector v with v_i := (i, d(x_i, x_{i+1})) for i ∈ [[1; N−1]]
   initialize vector w with w_j := 0 for j ∈ [[1; K−1]]
   sort vector v by increasing distance d(x_i, x_{i+1})
   for the K − 1 elements of v with the maximal distances d(x_i, x_{i+1}), store the indexes i in w
   sort w in the increasing order
   initialize P := ∅, i̲ := i¯ := 1, OPT := 0
  for j ∈ [[1; K−1]] in the increasing order
     i¯ := w_j
     add C_{i̲,i¯} in P
     OPT := OPT + f_C(C_{i̲,i¯})
     i̲ := i¯ + 1
  end for
   add C_{i̲,N} in P
   OPT := OPT + f_C(C_{i̲,N})
  
return O P T the optimal cost and the partition of selected clusters P
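Algorithm 4 can be sketched in Python as follows. This is an illustrative sketch under our assumptions: 0-based indices, Euclidean distance, collinear points, and the radius of a collinear cluster taken as half the distance between its extreme points; the function name is ours.

```python
import math

def min_sum_radii_on_line(points, K):
    """Sketch of Algorithm 4 (0-based): for collinear points, cut at the K-1
    largest consecutive gaps (Lemma 12); returns the optimal cost and the
    clusters as (start, end) index pairs."""
    pts = sorted(points)  # order of Proposition 1 for collinear PF points
    n = len(pts)
    # indexes of consecutive gaps, sorted by decreasing gap length
    gaps = sorted(range(n - 1), key=lambda i: math.dist(pts[i], pts[i + 1]), reverse=True)
    cuts = sorted(gaps[:K - 1])  # the K-1 largest gaps, back in index order
    clusters, start, total = [], 0, 0.0
    for end in cuts + [n - 1]:
        clusters.append((start, end))
        total += math.dist(pts[start], pts[end]) / 2  # radius of a collinear cluster
        start = end + 1
    return total, clusters
```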

6. Unified DP Algorithm and Complexity Results

Proposition 2 allows for the design of a common DP algorithm for p-center problems and variants, and for the proof of polynomial complexities. The key element is to design the Bellman equations.
Proposition 13
(Bellman equations). Defining O_{i,k,m} as the optimal cost of k-m-⊕-(α, γ)-BC2DPF among the points indexed in [[1, i]], for all i ∈ [[1, N]], k ∈ [[1, K]] and m ∈ [[0, M]], we have the following induction relations:
∀i ∈ [[1, N]], O_{i,1,0} = f_γ(C_{1,i})^α    (21)
∀m ∈ [[1, M]], ∀k ∈ [[1, K]], ∀i ∈ [[1, m+k]], O_{i,k,m} = 0    (22)
∀m ∈ [[1, M]], ∀i ∈ [[m+2, N]], O_{i,1,m} = min(O_{i−1,1,m−1}, f_γ(C_{1+m,i})^α)    (23)
∀k ∈ [[2, K]], ∀i ∈ [[k+1, N]], O_{i,k,0} = min_{j∈[[k,i]]} O_{j−1,k−1,0} ⊕ f_γ(C_{j,i})^α    (24)
∀m ∈ [[1, M]], ∀k ∈ [[2, K]], ∀i ∈ [[k+m+1, N]],
O_{i,k,m} = min(O_{i−1,k,m−1}, min_{j∈[[k+m,i]]} O_{j−1,k−1,m} ⊕ f_γ(C_{j,i})^α)    (25)
Proof. (21) is the standard 1-center case. (22) is a trivial case, where it is possible to fill the clusters with singletons, with a null and optimal cost. (23) is a recursion formula among the partial 1-center cases: an optimal solution of 1-m-⊕-(α, γ)-BC2DPF among the points indexed in [[1, i]] either selects the point x_i, and the optimal solution is then the cluster C_{1+m,i} with Proposition 3, with a cost f_γ(C_{1+m,i})^α, or it does not select the point x_i, yielding an optimal solution of 1-(m−1)-⊕-(α, γ)-BC2DPF among the points indexed in [[1, i−1]]. (24) is a recursion formula among the k-0-⊕-(α, γ)-BC2DPF cases among the points indexed in [[1, i]]; it generalizes the one from [10] for the powered sum-radii cases, and the proof is similar. Let k ∈ [[2, K]] and i ∈ [[k+1, N]]. Let j ∈ [[k, i]]; when selecting an optimal solution of (k−1)-0-⊕-(α, γ)-BC2DPF among the points indexed in [[1, j−1]] and adding the cluster C_{j,i}, a feasible solution is obtained for k-0-⊕-(α, γ)-BC2DPF among the points indexed in [[1, i]], with a cost O_{j−1,k−1,0} ⊕ f_γ(C_{j,i})^α. This last cost is greater than or equal to the optimal cost; thus, O_{i,k,0} ≤ O_{j−1,k−1,0} ⊕ f_γ(C_{j,i})^α. Such inequalities are valid for all j ∈ [[k, i]]; this implies:
O_{i,k,0} ≤ min_{j∈[[k,i]]} O_{j−1,k−1,0} ⊕ f_γ(C_{j,i})^α    (26)
Let j_1 < j_2 < ⋯ < j_{k−1} < j_k = i be indexes such that C_{1,j_1}, C_{j_1+1,j_2}, …, C_{j_{k−1}+1,i} defines an optimal solution of k-0-⊕-(α, γ)-BC2DPF among the points indexed in [[1, i]]; its cost is O_{i,k,0}. Necessarily, C_{1,j_1}, C_{j_1+1,j_2}, …, C_{j_{k−2}+1,j_{k−1}} defines an optimal solution of (k−1)-0-⊕-(α, γ)-BC2DPF among the points indexed in [[1, j_{k−1}]]. Otherwise, a strictly better solution for O_{i,k,0} could be constructed by adding the cluster C_{j_{k−1}+1,i}. We thus have O_{i,k,0} = O_{j_{k−1},k−1,0} ⊕ f_γ(C_{j_{k−1}+1,i})^α. Combined with (26), this proves O_{i,k,0} = min_{j∈[[k,i]]} O_{j−1,k−1,0} ⊕ f_γ(C_{j,i})^α.
Lastly, we prove (25). Let m ∈ [[1, M]], k ∈ [[2, K]], i ∈ [[k+m+1, N]]. O_{i,k,m} ≤ O_{i−1,k,m−1}; each solution of k-(m−1)-⊕-(α, γ)-BC2DPF among the points indexed in [[1, i−1]] defines a solution of k-m-⊕-(α, γ)-BC2DPF among the points indexed in [[1, i]], selecting the point x_i as an outlier. Let O′_{i,k,m} be the optimal cost of k-m-⊕-(α, γ)-BC2DPF among the points indexed in [[1, i]] when the point x_i is necessarily selected; we have O_{i,k,m} ≤ O′_{i,k,m}. O′_{i,k,m} is defined by a cluster C_{j,i} and an optimal solution of (k−1)-m-⊕-(α, γ)-BC2DPF among the points indexed in [[1, j−1]], so that O′_{i,k,m} = min_{j∈[[k+m,i]]} O_{j−1,k−1,m} ⊕ f_γ(C_{j,i})^α. We thus have:
O_{i,k,m} ≤ min(O_{i−1,k,m−1}, min_{j∈[[k+m,i]]} O_{j−1,k−1,m} ⊕ f_γ(C_{j,i})^α)    (27)
Reciprocally, let a_1 ≤ b_1 < a_2 ≤ b_2 < ⋯ < a_k ≤ b_k ≤ i be indexes such that C_{a_1,b_1}, C_{a_2,b_2}, …, C_{a_k,b_k} defines an optimal solution of k-m-⊕-(α, γ)-BC2DPF among the points indexed in [[1, i]]; its cost is O_{i,k,m}. If b_k = i, then O_{i,k,m} = O′_{i,k,m}, and (27) is an equality. If b_k < i, then C_{a_1,b_1}, C_{a_2,b_2}, …, C_{a_k,b_k} defines an optimal solution of k-(m−1)-⊕-(α, γ)-BC2DPF among the points indexed in [[1, i−1]]; its cost is O_{i−1,k,m−1}. We thus have O_{i,k,m} = O_{i−1,k,m−1}, and (27) is an equality. Finally, (25) is proven by disjunction. □
The Bellman equations of Proposition 13 allow the computation of the optimal value O_{N,K,M} by induction. A first method is a recursive implementation of the Bellman equations to compute the cost O_{N,K,M}, storing the intermediate computations O_{i,k,m} in a memoized implementation. An iterative implementation is provided in Algorithm 5, using a defined order for the computations of the elements O_{i,k,m}. An advantage of Algorithm 5 is that independent computations are highlighted for a parallel implementation. For both methods computing the optimal cost O_{N,K,M}, backtracking operations in the DP matrix of computed costs allow for the recovery of the assignment of clusters and outliers in an optimal solution.
In Algorithm 5, note that some useless computations are not processed. When having to compute O_{N,K,M}, the computations O_{N,k,m} with k + m < K + M are useless; O_{N−1,K,M} will also never be called. Generally, the triangular elements O_{N−n,k,m} with n + k + m < K + M are useless. The DP matrix O_{i,k,m} is not fully constructed in Algorithm 5, removing such useless elements.
Algorithm 5: unified DP algorithm for K-M-⊕- ( α , γ ) -BC2DPF
Input: - N points of R², E = {x_1, …, x_N}, such that for all i ≠ j, x_i I x_j;
    - Parameters: K N * , M N , { + , max } , γ { 0 , 1 } and α > 0 ;
  
   sort E following the order of Proposition 1
   initialize matrix O with O_{i,k,m} := 0 for all m ∈ [[0; M]], k ∈ [[1; K]], i ∈ [[k; N−K+k]]
  
   compute f_γ(C_{1,i})^α for all i ∈ [[1; N−K+1]] and store it in O_{i,1,0} := f_γ(C_{1,i})^α
  
  for i := 2 to N
     compute and store f_γ(C_{i′,i})^α for all i′ ∈ [[1; i]]
     compute O_{i,k,0} := min_{j∈[[k,i]]} O_{j−1,k−1,0} ⊕ f_γ(C_{j,i})^α for all k ∈ [[2; min(K, i)]]
    for m := 1 to min(M, i−2)
        compute O_{i,1,m} := min(O_{i−1,1,m−1}, f_γ(C_{1+m,i})^α)
       for k := 2 to min(K, i−m)
           compute O_{i,k,m} := min(O_{i−1,k,m−1}, min_{j∈[[k+m,i]]} O_{j−1,k−1,m} ⊕ f_γ(C_{j,i})^α)
       end for
    end for
     delete the stored f_γ(C_{i′,i})^α for all i′ ∈ [[1; i]]
  end for
  
   initialize P := ∅, i̲ := i¯ := N, m := M
  for k = K to 1 with increment k k 1
     compute i¯ := min{i ∈ [[i̲ − m; i̲]] | O_{i̲,k,m} = O_{i,k,m−i̲+i}}
     m := m − i̲ + i¯
     compute and store f_γ(C_{i′,i¯})^α for all i′ ∈ [[1; i¯]]
     find i̲ ∈ [[1, i¯]] such that i̲ := arg min_{j∈[[k+m,i¯]]} O_{j−1,k−1,m} ⊕ f_γ(C_{j,i¯})^α
     add C_{i̲,i¯} in P
     delete the stored f_γ(C_{i′,i¯})^α for all i′ ∈ [[1; i¯]]
  end for
  
return O_{N,K,M}, the optimal cost, and the selected clusters P
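The Bellman recursion can be sketched with a memoized Python implementation, here for one variant only: the continuous sum case (⊕ = +, α = 1, γ = 0), i.e., partial min-sum-radii. This is an illustrative sketch under our assumptions (0-based indices, Euclidean distance, continuous cluster cost as half the distance between extremes), and the function names are ours.

```python
import math
from functools import lru_cache

def dp_optimal_cost(points, K, M):
    """Memoized sketch of the Bellman equations (Proposition 13) for the
    continuous sum variant: min-sum-radii with at most M outliers."""
    pts = sorted(points)  # order of Proposition 1

    def cost(j, i):  # f_0(C_{j,i}): assumed half the distance between extremes
        return math.dist(pts[j], pts[i]) / 2

    @lru_cache(maxsize=None)
    def O(i, k, m):
        """Optimal cost among points [0..i], k clusters, at most m outliers."""
        if i + 1 <= k + m:      # (22): singletons and outliers suffice, cost 0
            return 0.0
        best = math.inf
        if m >= 1:              # x_i taken as an outlier
            best = O(i - 1, k, m - 1)
        if k == 1:              # (21) and (23): cluster C_{1+m,i}
            best = min(best, cost(m, i))
        else:                   # (24) and (25): last cluster C_{j,i}
            for j in range(k + m - 1, i + 1):
                best = min(best, O(j - 1, k - 1, m) + cost(j, i))
        return best

    return O(len(pts) - 1, K, M)
```

Memoization mirrors the DP matrix of Algorithm 5; the iterative version additionally controls the computation order for parallelism and memory deletion.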
Theorem 1.
Let E = {x_1, …, x_N} be a subset of N points of R², such that for all i ≠ j, x_i I x_j. When applied to the 2D PF E for K ≥ 2, the K-M-⊕-(α, γ)-BC2DPF problems are solvable to optimality in polynomial time using Algorithm 5, with a complexity in O(KN²(1+M)) time and O(KN(1+M)) memory space.
Proof. 
The validity of Algorithm 5 is proven by induction: each cell of the DP matrix O_{i,k,m} is computed using only cells that were previously computed to optimality. Once the required cells are computed, a standard backtracking algorithm is applied to compute the clusters. Let us analyze the complexity; let K ≥ 2. The space complexity is in O(KN(1+M)), the size of the DP matrix, with the intermediate computations of cluster costs using at most O(N) memory space, only remembering such vectors thanks to the deleting operations. Let us analyze the time complexity. Sorting and indexing the elements of E (Proposition 1) has a time complexity in O(N log N). Once the costs f_γ(C_{i′,i})^α are computed and stored, each cell of the DP matrix is computed in at most O(N) time using Formulas (21)–(25). This induces a total complexity in O(KN²(1+M)) time. The cluster costs are computed using N times Algorithm 3 and one time Algorithm 2; this has a time complexity in O(N²), which is negligible compared to the O(KN²(1+M)) time computation of the cells of the DP matrix. The K backtracking operations require an O(N²) time computation of the costs f_γ(C_{i′,i¯})^α for all i′ ∈ [[1; i¯]] and a given i¯, M operations in O(1) time to compute min{i ∈ [[i̲−m; i̲]] | O_{i̲,k,m} = O_{i,k,m−i̲+i}}, and O(N) operations in O(1) time to compute arg min_{j∈[[k+m,i¯]]} O_{j−1,k−1,m} ⊕ f_γ(C_{j,i¯})^α. Finally, the backtracking operations require O(KN²) time, which is negligible compared to the previous computation in O(KN²(1+M)) time. □

7. Specific Improvements

This section investigates how the complexity results of Theorem 1 may be improved, and how to speed up Algorithm 5 from a theoretical and a practical viewpoint.

7.1. Improving Time Complexity for Standard and Partial P-Center Problems

In Algorithm 5, the bottleneck for the complexity is the computations min j ∈ [ [ k + m , i ] ] O j − 1 , k − 1 , m ⊕ f γ ( C j , i ) α , for i ∈ [ [ 2 , N ] ] , k ∈ [ [ 2 , min ( K , i ) ] ] , m ∈ [ [ 0 , i − k ] ] . When ⊕ = max , this section proves that such a minimization can be processed in O ( log N ) time instead of the O ( N ) naive enumeration, improving the time complexity in the p-center cases.
Lemma 13.
Let k [ [ 1 , K ] ] and j [ [ 1 , N ] ] . The application m [ [ 0 , M ] ] O j , k , m is decreasing.
Proof. 
Let m ∈ [ [ 1 , M ] ] . For each E ′ ⊂ E , any feasible solution of k- ( m − 1 ) -⊕- ( α , γ ) -BC2DPF in E ′ is a feasible solution of k-m-⊕- ( α , γ ) -BC2DPF, with the partial versions defined by problems (11). In particular, an optimal solution of k- ( m − 1 ) -⊕- ( α , γ ) -BC2DPF is feasible for k-m-⊕- ( α , γ ) -BC2DPF, which implies O j , k , m ≤ O j , k , m − 1 . □
Lemma 14.
Let k [ [ 1 , K ] ] and m [ [ 0 , M ] ] . The application j [ [ 1 , N ] ] O j , k , m is increasing.
Proof. 
We first note that the case k = 1 is implied by Lemma 7, so that we can suppose in the following that k ≥ 2 . Let k ∈ [ [ 2 , K ] ] , m ∈ [ [ 0 , M ] ] and j ∈ [ [ 2 , N ] ] . Let π ∈ Π K ( E ) be an optimal solution of k-m-⊕- ( α , γ ) -BC2DPF among the points indexed in [ [ 1 , j ] ] ; its cost is O j , k , m . Let X ⊂ E be the subset of the non-selected points, with | X | ≤ m , and C 1 , , C k the k subsets defining the costs, so that X , C 1 , , C k is a partition of the points indexed in [ [ 1 , j ] ] and ⊕ k ′ = 1 k f γ ( C k ′ ) α = O j , k , m . If x j ∈ X , then O j , k , m = O j − 1 , k , m − 1 ≥ O j − 1 , k , m using Lemma 13, which is the result. To end the proof, we suppose that x j ∉ X and re-index the clusters such that x j ∈ C k . We consider the clusters C 1 ′ , , C k ′ = C 1 , , C k − 1 , C k ∖ { x j } . With X, a partition of ( x l ) l ∈ [ [ 1 , j − 1 ] ] is defined, with at most m outliers, so that it defines a feasible solution of the optimization problem defining O j − 1 , k , m , with a cost O P T ′ ≥ O j − 1 , k , m . Using Lemma 7, O P T ′ ≤ O j , k , m , so that O j − 1 , k , m ≤ O j , k , m . □
Lemma 15.
Let i ∈ [ [ 2 , N ] ] , k ∈ [ [ 2 , min ( K , i ) ] ] , m ∈ [ [ 0 , i − k ] ] . Let g i , k , m : j ∈ [ [ 2 , i ] ] ↦ max ( O j − 1 , k − 1 , m , f γ ( C j , i ) α ) . There is l ∈ [ [ 2 , i ] ] , such that g i , k , m is decreasing for j ∈ [ [ 2 , l ] ] , and then increasing for j ∈ [ [ l + 1 , i ] ] . For j < l , g i , k , m ( j ) = f γ ( C j , i ) α and for j > l , g i , k , m ( j ) = O j − 1 , k − 1 , m .
Proof. 
Similarly to the proof of Lemma 10, the following applications are monotone:
j ∈ [ [ 1 , i ] ] ↦ f γ ( C j , i ) α decreases with Lemma 7,
j ∈ [ [ 1 , N ] ] ↦ O j , k , m increases for all k with Lemma 14. □
Proposition 14.
Let i ∈ [ [ 2 , N ] ] , k ∈ [ [ 2 , K ] ] , m ∈ [ [ 0 , M ] ] . Let γ ∈ { 0 , 1 } . Once the values O i ′ , k − 1 , m in the DP matrix of Algorithm 5 are computed, Algorithm 6 computes O i , k , m = min j ∈ [ [ k + m , i ] ] max ( O j − 1 , k − 1 , m , f γ ( C j , i ) α ) calling O ( log i ) cost computations f γ ( C j , i ) . This induces a time complexity in O ( log 1 + γ i ) using straightforward computations of the cluster costs with Propositions 3 and 5.
Algorithm 6: Dichotomic computation of min j [ [ k + m , i ] ] max O j 1 , k 1 , m , f γ ( C j , i ) α
  input: indexes i ∈ [ [ 2 , N ] ] , k ∈ [ [ 2 , min ( K , i ) ] ] , m ∈ [ [ 0 , i − k ] ] , α > 0 , γ ∈ { 0 , 1 } ;
      a vector v containing v j : = O j , k − 1 , m for all j ∈ [ [ 1 , i − 1 ] ] .
  
   define i ̲ : = k + m , v ̲ : = f γ ( C k + m , i ) α ,
   define i ¯ : = i , v ¯ : = v i − 1 ,
  while i ¯ i ̲ 2
     j : = ⌊ ( i ̲ + i ¯ ) / 2 ⌋
    if f γ ( C j , i ) α < v j − 1 then set i ¯ : = j and v ¯ : = v j − 1
    else set i ̲ : = j and v ̲ : = f γ ( C j , i ) α
  end while
return min ( v ̲ , v ¯ )
Proof. 
Algorithm 6 is a dichotomic search based on Lemma 15, similarly to Algorithm 1 derived from Lemma 10. Algorithm 6 performs O ( log i ) cost computations f γ ( C j , i ) . In the discrete case, each such computation runs in O ( log i ) time with Proposition 5, whereas it is in O ( 1 ) in the continuous case with Lemma 3. In both cases, the final time complexity is in O ( log 1 + γ i ) . □
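For illustration, the dichotomic search of Algorithm 6 may be sketched as follows in Python. The names are assumptions of this sketch: cost ( j ) stands for the decreasing cluster costs f γ ( C j , i ) α and v for the stored DP values O j , k − 1 , m , increasing in j.

```python
def dichotomic_min_max(cost, v, lo, hi):
    """Minimize g(j) = max(v[j-1], cost(j)) for j in [lo, hi], where cost
    is decreasing in j and v is increasing in j: by Lemma 15, g is first
    decreasing, then increasing, so a dichotomic search applies."""
    lo_val = cost(lo)   # g(lo): the max is reached by the cost term
    hi_val = v[hi - 1]  # g(hi): the max is reached by the DP term
    while hi - lo >= 2:
        j = (lo + hi) // 2
        if cost(j) < v[j - 1]:       # g(j) = v[j-1]: increasing side
            hi, hi_val = j, v[j - 1]
        else:                        # g(j) = cost(j): decreasing side
            lo, lo_val = j, cost(j)
    return min(lo_val, hi_val)
```

Each iteration halves the search interval, so that only O ( log i ) cost computations are called, in line with Proposition 14.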
Computing min j ∈ [ [ k + m , i ] ] max ( O j − 1 , k − 1 , m , f γ ( C j , i ) α ) in O ( log i ) time instead of O ( i ) in the proof of Theorem 1, the complexity results are updated for the p-center problems and variants.
Theorem 2.
Let E = { x 1 , , x N } be a subset of N points of R 2 , such that for all i ≠ j , x i I x j . When applied to the 2D PF E for K ≥ 2 , the K-M-max- ( α , γ ) -BC2DPF problems are solvable to optimality in polynomial time using Algorithm 5 with Algorithm 6, with a complexity in O ( K N ( 1 + M ) log N ) time and O ( K N ( 1 + M ) ) space.
Proof. 
The validity of Algorithm 5 using Algorithm 6 inside is implied by the validity of Algorithm 6, proven in Proposition 14. Updating the time complexity with Proposition 14, the new time complexity for continuous K-center problems is in O ( K N ( 1 + M ) log N ) time instead of O ( K ( 1 + M ) N 2 ) previously. For the discrete versions, using Proposition 14 with computations of discrete cluster costs with Proposition 5 would induce a time complexity in O ( K N ( 1 + M ) log 2 N ) . The complexity is decreased to O ( K N ( 1 + M ) log N ) , as the cluster costs are already computed and stored in Algorithm 5, so that the cost computations within Algorithm 6 are in O ( 1 ) . This induces the same complexity for the discrete and continuous K-center variants. □
Remark 2.
For the standard discrete p-center, Theorem 2 improves the time complexity given in the preliminary paper [10], from O ( p N log 2 N ) to O ( p N log N ) . Another improvement was given by Algorithm 1; the former computation of cluster costs has the same asymptotic complexity, but requires twice as many computations. This proportional factor is non-negligible in practice.

7.2. Improving Space Complexity for Standard P-Center Problems

For standard p-center problems, Algorithm 5 has a space complexity in O ( K N ) , the size of the DP matrix. This section proves that it is possible to reduce the space complexity to O ( N ) .
One can compute the DP matrix for k-centers “line-by-line”, with k increasing. This does not change the validity of the algorithm, each computation using values that were previously computed to optimality. Two main differences occur compared to Algorithm 5. On one hand, the ( k + 1 ) -center values use only the k-center computations, so that the computations with k ′ < k can be deleted once all the required k-center values are computed, when only the K-center values, and especially the optimal cost, are needed. On the other hand, the computations of cluster costs are not factorized as in Algorithm 5; this makes no difference in the continuous version, where Lemma 3 allows recomputing cluster costs in O ( 1 ) time when needed, whereas recomputing each cost runs in O ( log N ) time for the discrete version with Algorithm 1.
This order of operations slightly degrades the time complexity for the discrete variant, without inducing a change in the continuous variant. However, it only allows for the computation of the optimal value; a remaining difficulty is that the backtracking operations, as written in Algorithm 5, require the stored values of the whole DP matrix. The issue is to obtain alternative backtracking algorithms that compute an optimal solution of the standard p-center problems using only the optimal value provided by the DP iterations, with a complexity of at most O ( K N log γ N ) time and O ( N ) memory space. Algorithms 7 and 8 have such properties.
Algorithm 7: Backtracking algorithm using O ( N ) memory space
  input: - γ { 0 , 1 } to specify the clustering measure;
    - N points of a 2D PF, E = { z 1 , , z N } , sorted such that for all i < j , z i ≺ z j ;
    - K N the number of clusters;
    - O P T , the optimal cost of K- γ -CP2DPF;
  output: P an optimal partition of K- γ -CP2DPF.
  
   initialize m a x I d : = N , m i n I d : = N , P = , a set of sub-intervals of [ [ 1 ; N ] ] .
  for k : = K to 2 with increment k ← k − 1
     set m i n I d : = m a x I d
    while f γ ( C m i n I d − 1 , m a x I d ) ≤ O P T do m i n I d : = m i n I d − 1 end while
     add [ [ m i n I d , m a x I d ] ] in P
     m a x I d : = m i n I d − 1
  end for
   add [ [ 1 , m a x I d ] ] in P
return P
Algorithm 8: Backtracking algorithm using O ( N ) memory space
  input: - γ { 0 , 1 } to specify the clustering measure;
    - N points of a 2D PF, E = { z 1 , , z N } , sorted such that for all i < j , z i ≺ z j ;
    - K N the number of clusters;
    - O P T , the optimal cost of K- γ -CP2DPF;
  output: P an optimal partition of K- γ -CP2DPF.
  
   initialize m i n I d : = 1 , m a x I d : = 1 , P : = , a set of sub-intervals of [ [ 1 ; N ] ] .
  for k : = 2 to K with increment k ← k + 1
     set m a x I d : = m i n I d
    while f γ ( C m i n I d , m a x I d + 1 ) ≤ O P T do m a x I d : = m a x I d + 1 end while
     add [ [ m i n I d , m a x I d ] ] in P
     set m i n I d : = m a x I d + 1
  end for
   add [ [ m i n I d , N ] ] in P
return P
Lemma 16.
Let K ∈ N , K ≥ 2 . Let E = { z 1 , , z N } , sorted such that for all i < j , z i ≺ z j . For the discrete and continuous K-center problems, the indexes given by Algorithm 7 are lower bounds of the indexes of any optimal solution. Denoting [ [ 1 , i 1 ] ] , [ [ i 1 + 1 , i 2 ] ] , , [ [ i K − 1 + 1 , N ] ] the indexes given by Algorithm 7, and [ [ 1 , i 1 ′ ] ] , [ [ i 1 ′ + 1 , i 2 ′ ] ] , , [ [ i K − 1 ′ + 1 , N ] ] the indexes of an optimal solution, we have, for all k ∈ [ [ 1 , K − 1 ] ] , i k ≤ i k ′ .
Proof. 
This lemma is proven by a decreasing induction on k, starting from k = K − 1 . The case k = K − 1 is furnished by the first step of Algorithm 7, as j ∈ [ [ 1 , N ] ] ↦ f γ ( C j , N ) decreases with Lemma 7. With i k ≤ i k ′ given for an index k, i k − 1 ≤ i k − 1 ′ is implied by Lemma 2 and d ( z i k , z i k − 1 − 1 ) > O P T . □
Algorithm 8 is similar to Algorithm 7, with iterations increasing the indexes of the points of E. The validity is similarly proven, and this provides the upper bounds for the indexes of any optimal solution of K-center problems.
Lemma 17.
Let K ∈ N , K ≥ 2 . Let E = { z 1 , , z N } , sorted such that for all i < j , z i ≺ z j . For K-center problems, the indexes given by Algorithm 8 are upper bounds of the indexes of any optimal solution. Denoting [ [ 1 , i 1 ] ] , [ [ i 1 + 1 , i 2 ] ] , , [ [ i K − 1 + 1 , N ] ] the indexes given by Algorithm 8, and [ [ 1 , i 1 ′ ] ] , [ [ i 1 ′ + 1 , i 2 ′ ] ] , , [ [ i K − 1 ′ + 1 , N ] ] the indexes of an optimal solution, we have, for all k ∈ [ [ 1 , K − 1 ] ] , i k ≥ i k ′ .
Proposition 15.
Once the optimal cost of a p-center problem is computed, Algorithms 7 and 8 compute an optimal partition in O ( N log N ) time using O ( 1 ) additional memory space.
Proof. 
We consider the proof for Algorithm 7; it is symmetrical for Algorithm 8. Let O P T be the optimal cost of K-center clustering with f γ . Let [ [ 1 , i 1 ] ] , [ [ i 1 + 1 , i 2 ] ] , , [ [ i K − 1 + 1 , N ] ] be the indexes given by Algorithm 7. By construction, all the clusters C defined by the indexes [ [ i k + 1 , i k + 1 ] ] for k ≥ 1 verify f γ ( C ) ≤ O P T . Let C 1 be the cluster defined by [ [ 1 , i 1 ] ] ; we have to prove that f γ ( C 1 ) ≤ O P T to conclude the optimality of the clustering defined by Algorithm 7. Let [ [ 1 , i 1 ′ ] ] , [ [ i 1 ′ + 1 , i 2 ′ ] ] , , [ [ i K − 1 ′ + 1 , N ] ] be the indexes defining an optimal solution. Lemma 16 ensures that i 1 ≤ i 1 ′ , and thus Lemma 7 assures f γ ( C 1 , i 1 ) ≤ f γ ( C 1 , i 1 ′ ) ≤ O P T . Analyzing the complexity, Algorithm 7 calls the clustering cost function at most K + N ≤ 2 N times, without requiring stored elements; the complexity is in O ( N log γ N ) time. □
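The greedy backtracking of Algorithm 7 may be sketched as follows in Python; the signature is an assumption of this sketch, with cost ( a , b ) standing for the cluster cost f γ ( C a , b ) of the points indexed in [ [ a , b ] ] .

```python
def backtrack_from_top(cost, n, K, OPT):
    """Rebuild an optimal K-center partition of points 1..n from the
    optimal value OPT alone, as in Algorithm 7: clusters are built from
    the top index downward, each one extended as long as its cost stays
    within OPT.  Only O(1) memory is used beyond the output."""
    partition = []
    max_id = n
    for _ in range(K - 1):           # build clusters K, K-1, ..., 2
        min_id = max_id
        while min_id > 1 and cost(min_id - 1, max_id) <= OPT:
            min_id -= 1              # extend the cluster downward
        partition.append((min_id, max_id))
        max_id = min_id - 1
    partition.append((1, max_id))    # remaining points: first cluster
    return partition[::-1]
```

The guard min_id > 1, an addition of this sketch, avoids an out-of-range cost computation when a cluster absorbs all the remaining points.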
Remark 3.
Finding the biggest cluster with a given extremity and a bounded cost can also be achieved by a dichotomic search. This would induce a complexity in O ( K log 1 + γ N ) . To avoid a separate analysis of the case K = O ( N ) and γ = 1 , Algorithms 7 and 8 provide a common algorithm running in O ( N log N ) time, which is enough for the following complexity results.
The previous improvements, written in Algorithm 9, allow for new complexity results with an O ( N ) memory space for K-center problems.
Algorithm 9: p-center clustering in a 2DPF with a O(N) memory space
Input:
- N points of R 2 , E = { x 1 , , x N } such that for all i ≠ j , x i I x j ;
- γ { 0 , 1 } to specify the clustering measure;
- K N the number of clusters.
  
   initialize matrix O with O i , k : = 0 for all i ∈ [ [ 1 ; N ] ] , k ∈ [ [ 1 ; K − 1 ] ]
   sort E following the order of Proposition 1
   compute and store O i , 1 : = f γ ( C 1 , i ) for all i [ [ 1 ; N ] ] (with Algorithm 2 if γ = 1 )
  for k = 2 to K 1
    for i = k + 1 to N − K + k
       compute and store O i , k : = min j [ [ 2 , i ] ] max ( O j 1 , k 1 , f γ ( C j , i ) ) (Algorithm 6)
    end for
     delete the stored O i , k 1 for all i
  end for
   O P T : = min j [ [ 2 , N ] ] max ( O j 1 , K 1 , f γ ( C j , N ) ) with Algorithm 6
return O P T the optimal cost and a partition P given by backtracking Algorithm 7 or 8
Theorem 3.
Let E = { x 1 , , x N } be a subset of N points of R 2 , such that for all i ≠ j , x i I x j . When applied to the 2D PF E for K ≥ 2 , the standard continuous and discrete K-center problems, i.e., K-0-max- ( α , γ ) -BC2DPF, are solvable with a complexity in O ( K N log 1 + γ N ) time and O ( N )  space.
Remark 4.
The continuous case improves the result obtained after Theorem 2, with the same time complexity and an improvement in the space complexity. For the discrete variant, improving the space complexity to O ( N ) instead of O ( K N ) induces a very slight degradation of the time complexity, from O ( K N log N ) to O ( K N log 2 N ) . Depending on the value of K and with stronger constraints on memory space, this second version may be preferable.
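The line-by-line DP of Section 7.2 may be sketched as follows in Python, keeping a single row of the DP matrix in memory as in Algorithm 9. The names are assumptions of this sketch, with cost ( a , b ) standing for the cluster cost f γ ( C a , b ) , and the inner minimization is written naively in O ( N ) per cell, where Algorithm 6 would bring it down to O ( log N ) .

```python
def kcenter_low_memory(cost, n, K):
    """Optimal K-center value over points 1..n with O(n) memory:
    row[i] stores the optimal cost covering points 1..i with k
    clusters; the row for k-1 clusters is dropped after each step,
    so only the optimal value (not the partition) is returned."""
    row = [0.0] + [cost(1, i) for i in range(1, n + 1)]  # k = 1
    for k in range(2, K + 1):
        new = [float("inf")] * (n + 1)
        for i in range(k, n + 1):
            new[i] = min(max(row[j - 1], cost(j, i))
                         for j in range(2, i + 1))
        row = new           # the previous row is no longer needed
    return row[n]
```

An optimal partition is then recovered from the returned value with the backtracking of Algorithm 7 or 8.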

7.3. Improving Space Complexity for Partial P-Center Problems?

This section tries to generalize the previous results to the partial K-center problems, i.e., K-M-max- ( α , γ ) -BC2DPF with M > 0 . The key element is to obtain a backtracking algorithm that does not use the DP matrix. Algorithm 10 extends Algorithm 7, considering all the possible numbers of outliers between clusters k and k + 1 for k ∈ [ [ 0 , K − 1 ] ] , as well as the outliers after the last cluster. A feasible solution with the optimal cost is then found by iterating Algorithm 7 on at least one of these sub-cases.
Algorithm 10: Backtracking algorithm for K-M-max- ( α , γ ) -BC2DPF with M > 0
  input: - a K-M-max- ( α , γ ) -BC2DPF problem
    - N points of a 2D PF, E = { z 1 , , z N } , sorted such that for all i < j , z i ≺ z j ;
    - O P T , the optimal cost of K-M-max- ( α , γ ) -BC2DPF problem;
  output: P an optimal partition of K-M-max- ( α , γ ) -BC2DPF problem.
  
  for each vector x of K + 1 non-negative elements such that ∑ k = 0 K x [ k ] = M
     initialize m a x I d : = N − x [ K ] , m i n I d : = N − x [ K ] , P : = ∅ , a set of sub-intervals of [ [ 1 ; N ] ] .
    for k = K to 2 with increment k ← k − 1
       set m i n I d : = m a x I d
      while f γ ( C m i n I d − 1 , m a x I d ) α ≤ O P T do m i n I d : = m i n I d − 1 end while
       add [ [ m i n I d , m a x I d ] ] in P
       set m a x I d : = m i n I d − 1 − x [ k − 1 ]
    end for
    if f γ ( C 1 + x [ 0 ] , m a x I d ) α ≤ O P T then add [ [ 1 + x [ 0 ] , m a x I d ] ] in P and return P
  end for
return error “OPT is not a feasible cost for K-M-max- ( α , γ ) -BC2DPF ”
It is crucial to analyze the time complexity induced by this enumeration. The number of vectors x of K + 1 non-negative elements such that ∑ k = 0 K x [ k ] = M is in Θ ( K M ) for a fixed M, so that this enumeration would not remain polynomial for large values of M. For M = 1 , a time complexity in O ( K N log N ) is induced, which is acceptable within the complexity of the computation of the DP matrix, whereas M ≥ 2 would already dramatically degrade the time complexity. Hence, we extend the improvement of the space complexity only for M = 1 , with Algorithm 11.
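The counting argument may be checked by a brute-force enumeration, sketched below with a hypothetical function name: the vectors x of K + 1 non-negative elements summing to M are counted by the stars-and-bars formula C ( M + K , K ) , giving K + 1 vectors for M = 1 and a growth in Θ ( K M ) for a fixed M ≥ 2 .

```python
from itertools import product
from math import comb

def outlier_vectors(K, M):
    """Enumerate the vectors x of K + 1 non-negative entries with
    sum M, as enumerated by Algorithm 10.  Brute force, for checking
    the stars-and-bars count comb(M + K, K) on small instances."""
    return [x for x in product(range(M + 1), repeat=K + 1)
            if sum(x) == M]
```

For instance, len(outlier_vectors(4, 1)) is 5 = comb(5, 4), whereas len(outlier_vectors(3, 2)) is 10 = comb(5, 3).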
Theorem 4.
Let E = { x 1 , , x N } be a subset of N points of R 2 , such that for all i ≠ j , x i I x j . When applied to the 2D PF E for K ≥ 2 , the partial K-center problems K-1-max- ( α , γ ) -BC2DPF are solvable with a complexity in O ( K N log 1 + γ N ) time and O ( N ) space.

7.4. Speeding-Up DP for Sum-Radii Problems

Similarly to Algorithm 6, this section tries to speed up the computations min j ∈ [ [ k + m , i ] ] O j − 1 , k − 1 , m + f γ ( C j , i ) α , which are the bottleneck for the time complexity in Algorithm 5. This section presents a stopping criterion to avoid useless computations in the O ( N ) naive enumeration, but without providing proofs of time complexity improvements.
Algorithm 11: Partial p-center K-1-max- ( 1 , γ ) -BC2DPF with a O(N) memory space
Input:
- N points of R 2 , E = { x 1 , , x N } such that for all i ≠ j , x i I x j ;
- γ { 0 , 1 } to specify the clustering measure;
- K N , K 2 the number of clusters.
  
   initialize matrix O with O i , k , m : = 0 for all i [ [ 1 ; N ] ] , k [ [ 1 ; K 1 ] ] , m [ [ 0 ; 1 ] ]
   sort E following the order of Proposition 1
   compute and store O i , 1 , 0 : = f γ ( C 1 , i ) for all i [ [ 1 ; N ] ] (with Algorithm 2 if γ = 1 )         
   compute and store f γ ( C 2 , i ) for all i [ [ 2 ; N ] ] (with Algorithm 2 if γ = 1 )
   compute and store O i , 1 , 1 : = min ( f γ ( C 2 , i ) , O i − 1 , 1 , 0 ) for all i ∈ [ [ 2 ; N ] ]
  for k = 2 to K
    for i : = k + 1 to N K + k
       compute and store O i , k , 0 : = min j ∈ [ [ 2 , i ] ] max ( O j − 1 , k − 1 , 0 , f γ ( C j , i ) ) (Algorithm 6)
       compute and store O i , k , 1 : = min j ∈ [ [ k + 1 , i ] ] max ( O j − 1 , k − 1 , 1 , f γ ( C j , i ) )
       O i , k , 1 : = min ( O i − 1 , k , 0 , O i , k , 1 )
    end for
     delete the stored O i , k 1 , m for all i , m
  end for
return O N , K , 1 the optimal cost and a partition P given by backtracking Algorithm 10
Proposition 16.
Let m ∈ [ [ 0 , M ] ] , i ∈ [ [ 1 , N ] ] and k ∈ [ [ 2 , K ] ] . Let β be an upper bound for O i , k , m . We suppose there exists j 0 ∈ [ [ 1 , i ] ] such that f γ ( C j 0 , i ) α ≥ β . Then, each optimal index j * , such that O i , k , m = O j * − 1 , k − 1 , m + f γ ( C j * , i ) α , necessarily fulfills j * > j 0 . In other words, O i , k , m = min j ∈ [ [ max ( k + m , j 0 + 1 ) , i ] ] O j − 1 , k − 1 , m + f γ ( C j , i ) α .
Proof. 
With f γ ( C j 0 , i ) α ≥ β , Lemma 7 implies that for all j < j 0 , f γ ( C j , i ) α > f γ ( C j 0 , i ) α ≥ β . Using O j , k , m ≥ 0 for all j , k , this implies that for all j < j 0 , O j − 1 , k − 1 , m + f γ ( C j , i ) α > β ≥ O i , k , m , so that any optimal index in the minimization O i , k , m = min j ∈ [ [ k + m , i ] ] O j − 1 , k − 1 , m + f γ ( C j , i ) α is necessarily superior to j 0 . □
Proposition 16 can be applied to compute each value of the DP matrix using fewer computations than the naive enumeration. In the enumeration, β is updated to the best current value of O j − 1 , k − 1 , m + f γ ( C j , i ) α . The indexes are enumerated in a decreasing way, starting from j = i , until an index j 0 is found such that f γ ( C j 0 , i ) α ≥ β ; no more enumeration is then required, Proposition 16 ensuring that the partial enumeration is sufficient to find the wished minimal value. This is a practical improvement, but we do not furnish proof of complexity improvements, as it is likely that this would not change the worst-case complexity.
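This pruned enumeration may be sketched as follows in Python, with hypothetical names: prev_row stands for the values O j , k − 1 , m and cost ( j , i ) for f γ ( C j , i ) α , decreasing in j.

```python
def min_sum_cell(cost, prev_row, i, lo):
    """Compute min over j in [lo, i] of prev_row[j-1] + cost(j, i),
    enumerating j in decreasing order and stopping, as in Proposition
    16, as soon as cost(j, i) alone reaches the best value found:
    since cost increases when j decreases and prev_row is
    non-negative, no smaller index can improve the minimum."""
    best = float("inf")
    for j in range(i, lo - 1, -1):
        c = cost(j, i)
        if c >= best:   # stopping criterion of Proposition 16
            break
        best = min(best, prev_row[j - 1] + c)
    return best
```

The worst-case cost per cell remains O ( N ) , as discussed above, but the enumeration often stops early in practice.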

8. Discussion

8.1. Importance of the 2D PF Hypothesis, Summarizing Complexity Results

Planar p-center problems were not studied previously in the PF case. The 2D PF hypothesis is crucial for the complexity results and the efficiency of the solving algorithms. Table 1 compares the available complexity results for 1D and 2D cases of some k-center variants.
The complexity for 2D PF cases is very similar to the 1D cases; the 2D PF extension does not induce major difficulties in terms of complexity results. However, 2D PF cases may induce significant differences compared to the general 2D cases. The p-center problems are NP-hard in a planar Euclidean space [17], whereas adding the PF hypothesis leads to the polynomial complexity of Theorem 1, which allows for an efficient, straightforward implementation. Two properties of a 2D PF were crucial for these results: the 1D structure implied by Proposition 1, which allows an extension of DP algorithms [58,59], and Lemmas 3 and 6, which allow quick computations of cluster costs. Note that rectangular p-center problems have a better complexity using general planar results than using our Theorems 2 and 3: our algorithms only use common properties of Chebyshev and Minkowski distances, whereas significant improvements are provided using specificities of the Chebyshev distance.
Note that our complexity results take into account the complexity of the initial re-indexation with Proposition 1. This O ( N log N ) phase may be the bottleneck for the final complexity. Some papers mention results considering that the data are already in memory (avoiding an O ( N ) traversal of the input data) and already sorted. In our applications, MOO methods such as epsilon-constraint provide already sorted points [3]. With this way of calculating the complexity, our algorithms for the continuous and discrete 2-center problems in a 2D PF would have, respectively, a complexity in O ( log N ) and O ( log 2 N ) time. A notable advantage of the algorithms specialized for a 2D PF over the general 2D cases is that they are simple and easy to implement.

8.2. Equivalent Optimal Solutions for P-Center Problems

Lemmas 16 and 17 emphasize that many optimal solutions may exist; the lower and upper bounds may define a very large funnel. We also note that many optimal solutions can be nested, i.e., not verifying Proposition 2. For real-world applications, having well-balanced clusters is more natural and often wished for. Algorithms 7 and 8 provide the most unbalanced solutions. One may balance the sizes of the covering balls, or the number of points in the clusters. Both types of solutions may be obtained using simple and fast post-processing. For example, one may proceed with a steepest-descent local search solving two-center sub-problems for consecutive clusters in the current solution. For balancing the size of clusters, iterating two-center computations induces marginal computations in O ( log 1 + γ N ) time for each iteration with Algorithm 6. Such complexity holds once the points are re-indexed using Proposition 1; one such computation in O ( N log N ) allows for many neighborhood computations running in O ( log 1 + γ N ) time, and the sorting time is amortized.

8.3. Towards a Parallel Implementation

Complexity issues are raised to speed up the convergence of the algorithms in practice. An additional way to speed up the algorithms in practice is to consider implementation issues, especially parallel implementation properties in multi- or many-core environments. In Algorithm 5, the values of the DP matrix O i , k , m for a given i ∈ [ [ 1 ; N ] ] require only the values O j , k , m for all j < i . Independent computations can thus be operated at the iteration i of Algorithm 5, once the cluster costs f γ ( C i ′ , i ) α for all i ′ ∈ [ [ 1 ; i ] ] have been computed, which is not the most time-consuming part when using Algorithms 2 and 3. This is a very useful property for a parallel implementation, requiring only N − 1 synchronizations to process O ( K N 2 ( 1 + M ) ) operations. Hence, a parallel implementation of Algorithm 5 is straightforward with a shared-memory parallelization, using OpenMP for instance in C/C++, or higher-level programming languages such as Python, Julia or Chapel [60]. One may also consider an intensive parallelization in a many-core environment, such as General Purpose Graphical Processing Units (GPGPU). A difficulty in this case may be the large memory size that is required in Algorithm 5.
The variants of Section 7, which construct the DP matrix faster, both for k-center and min-sum k-radii problems, are not compatible with an efficient GPGPU parallelization; one would then prefer the naive and fixed-size enumeration of Algorithm 5, even with its worse time complexity for the sequential algorithm. Comparing the sequential algorithm to the GPGPU parallelization, having many independent parallelized computations allows a large proportional speed-up factor with GPGPU, which can compensate for the worse asymptotic complexity for reasonably sized instances. Shared-memory parallelization, such as OpenMP, is compatible with the improvements provided in Section 7. Contrary to Algorithm 5, Algorithms 9 and 11 compute the DP matrix with index k increasing, with O ( N ) independent computations induced at each iteration. With such algorithms, only K − 2 synchronizations are required, instead of N − 1 for Algorithm 5, which is a better property for parallelization. The O ( N ) memory versions are also useful for GPGPU parallelization, where memory space is more constrained than when storing a DP matrix in the RAM.
Previously, the parallelization of the DP matrix construction was discussed, as this is the bottleneck in time complexity. The initial sorting algorithm can also be parallelized on GPGPU if needed; the sorting time is negligible in most cases. The backtracking algorithm to obtain the clusters is sequential, but with a low complexity in general, so that a parallelization of this phase is not crucial. We note that there is only one case where the backtracking algorithm has the same complexity as the construction of the DP matrix: the DP variant in O ( N ) memory space proposed in Algorithm 11 with Algorithm 10 as a specific backtrack. In this specific case, the O ( K ) tests with different positions of the chosen outlier are independent, which allows a specific parallelization of Algorithm 10.

8.4. Applications to Bi-Objective Meta-Heuristics

The initial motivation of this work was to support decision makers when an MOO approach without preference furnishes a large set of non-dominated solutions. In this application, the value of K is small, allowing for human analyses to elicit some preferences, and optimality is not strictly required. Our work can also be applied to a partial PF furnished by population meta-heuristics [5]. The complexity results allow for the use of Algorithms 5, 9 and 11 inside MOO meta-heuristics. Archiving the PF is a common issue for population meta-heuristics facing multi-objective optimization problems [4,5]. A key issue is keeping diversified points of the PF in the archive, to compute diversified solutions along the current PF.
Algorithms 5, 9 and 11 can be used to address this issue, embedded in MOO approaches, similarly to [49]. Archiving diversified solutions of Pareto sets has applications for the diversification of genetic algorithms, to select diversified solutions for cross-over and mutation phases [61,62], but also for swarm particle optimization heuristics [63]. In these applications, clustering has to run quickly; the complexity results and the parallelization properties are useful in such a context.
For applications to MOO meta-heuristics like evolutionary algorithms, the partial versions are particularly useful. Indeed, the partial versions may detect outliers that are isolated from the other points. For such points, it is natural to process intensification operators to look for efficient solutions in their neighborhood, which makes the former outlier less isolated. Such a process is interesting for obtaining a better-balanced distribution of the points along the PF, which is a crucial point when dealing with MOO meta-heuristics.

8.5. How to Choose K , M ?

A crucial point in clustering applications is the selection of an appropriate value of K. A too-small value of K may miss structures that are well-captured with K + 1 representative clusters. Real-world applications seek the best compromise between the minimization of K and the minimization of the dissimilarity within the clusters. Similarly to [11], the properties of DP can be used to achieve this goal. With the DP Algorithm 9, many couples { ( k , O N , k ) } k are computed, using the optimal k-center values with k clusters. Having defined a maximal value K , the complexity for computing these points is in O ( N K log 1 + γ N ) . When searching for good values of k, the elbow technique may be applied. Backtracking operations may be used for many solutions without changing the complexity. The same ideas are applicable along the M index. In the previously described context of MOO meta-heuristics, the sensitivity to the M parameter is more important than the sensitivity to the parameter K, as the number of archived points is known and fixed regarding other considerations, such as the allowed size of the population.
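The elbow technique may be sketched as follows in Python; this is a generic heuristic on the computed couples ( k , O N , k ) , not a procedure specified in the paper: the selected k maximizes the second difference of the decreasing costs, i.e., the point where adding one more cluster stops paying off.

```python
def elbow_k(values):
    """Pick k by an elbow rule: values[k-1] is the optimal cost with k
    clusters, for k = 1..K', a decreasing sequence; return the k with
    the largest second difference of the costs."""
    K = len(values)
    best_k, best_gain = 1, float("-inf")
    for k in range(2, K):   # the second difference needs k-1 and k+1
        gain = (values[k - 2] - values[k - 1]) - (values[k - 1] - values[k])
        if gain > best_gain:
            best_k, best_gain = k, gain
    return best_k
```

For instance, on the cost sequence 10, 4, 3.5, 3.2 for k = 1, ..., 4, the sharpest elbow is at k = 2.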

9. Conclusions and Perspectives

This paper examined the properties of p-center problems and variants in the special case of a discrete set of non-dominated points in a 2D space, using Euclidean, Minkowski or Chebyshev distances. A common characterization of optimal clusters is proven for the discrete and continuous variants of the p-center problems and variants. This allows solving these problems to optimality with a unified DP algorithm of polynomial complexity. Some complexity results for the 2D PF case improve the general ones in 2D. The presented algorithms are useful for MOO approaches. The complexity results, in O ( K N log N ) time for the standard K-center problems and in O ( K N 2 ) time for the standard min-sum k-radii problems, are useful for applications with a large PF. When applied to N points and allowing to uncover M < N points, the partial K-center and min-sum-K-radii variants are, respectively, solvable in O ( K ( M + 1 ) N log N ) and O ( K ( M + 1 ) N 2 ) time. Furthermore, the DP algorithms have interesting properties for efficient parallel implementations in a shared-memory environment, such as OpenMP, or using GPGPU. This allows their application to a very large PF with short solving times. For applications to MOO meta-heuristics such as evolutionary algorithms, the partial versions are useful for the detection of outliers, around which intensification phases may be processed in order to obtain a better distribution of the points along the PF.
Future perspectives include the extension of these results to other clustering algorithms. The weighted versions of p-center variants were not studied in this paper, which was motivated by MOO perspectives; future work shall consider extending our algorithms to weighted variants. Regarding MOO applications, extending the results to dimension three is of interest for MOO problems with three objectives. However, clustering a 3D PF will be an NP-hard problem as soon as the general 2D case is proven to be NP-hard. The perspective in such cases is to design specific approximation algorithms for a 3D PF.

Author Contributions

Conceptualization, N.D. and F.N.; Methodology, N.D. and F.N.; Validation, E.-G.T. and F.N.; Writing–original draft preparation, N.D.; Writing—review and editing, N.D.; Supervision, E.-G.T. and F.N. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Peugeot, T.; Dupin, N.; Sembely, M.J.; Dubecq, C. MBSE, PLM, MIP and Robust Optimization for System of Systems Management, Application to SCCOA French Air Defense Program. In Complex Systems Design & Management; Springer: Berlin/Heidelberg, Germany, 2017; pp. 29–40.
2. Dupin, N.; Talbi, E. Matheuristics to optimize refueling and maintenance planning of nuclear power plants. J. Heuristics 2020, 1–43.
3. Ehrgott, M.; Gandibleux, X. Multiobjective combinatorial optimization-theory, methodology, and applications. In Multiple Criteria Optimization: State of the Art Annotated Bibliographic Surveys; Springer: Berlin/Heidelberg, Germany, 2003; pp. 369–444.
4. Schuetze, O.; Hernandez, C.; Talbi, E.; Sun, J.; Naranjani, Y.; Xiong, F. Archivers for the representation of the set of approximate solutions for MOPs. J. Heuristics 2019, 25, 71–105.
5. Talbi, E. Metaheuristics: From Design to Implementation; Wiley: Hoboken, NJ, USA, 2009; Volume 74.
6. Hsu, W.; Nemhauser, G. Easy and hard bottleneck location problems. Discret. Appl. Math. 1979, 1, 209–215.
7. Megiddo, N.; Tamir, A. New results on the complexity of p-centre problems. SIAM J. Comput. 1983, 12, 751–758.
8. Ravi, S.; Rosenkrantz, D.; Tayi, G. Heuristic and special case algorithms for dispersion problems. Oper. Res. 1994, 42, 299–310.
9. Wang, D.; Kuo, Y. A study on two geometric location problems. Inf. Process. Lett. 1988, 28, 281–286.
10. Dupin, N.; Nielsen, F.; Talbi, E. Clustering a 2d Pareto Front: P-center problems are solvable in polynomial time. In Proceedings of the International Conference on Optimization and Learning, Cádiz, Spain, 17–19 February 2020; pp. 179–191.
11. Dupin, N.; Nielsen, F.; Talbi, E. k-medoids clustering is solvable in polynomial time for a 2d Pareto front. In Proceedings of the World Congress on Global Optimization, Metz, France, 8–10 July 2019; pp. 790–799.
12. Borzsony, S.; Kossmann, D.; Stocker, K. The skyline operator. In Proceedings of the 17th International Conference on Data Engineering, Heidelberg, Germany, 2–6 April 2001; pp. 421–430.
13. Nielsen, F. Output-sensitive peeling of convex and maximal layers. Inf. Process. Lett. 1996, 59, 255–259.
14. Arana-Jiménez, M.; Sánchez-Gil, C. On generating the set of nondominated solutions of a linear programming problem with parameterized fuzzy numbers. J. Glob. Optim. 2020, 77, 27–52.
15. Daskin, M.; Owen, S. Two New Location Covering Problems: The Partial P-Center Problem and the Partial Set Covering Problem. Geogr. Anal. 1999, 31, 217–235.
16. Calik, H.; Labbé, M.; Yaman, H. p-Center problems. In Location Science; Springer: Berlin/Heidelberg, Germany, 2015; pp. 79–92.
17. Megiddo, N.; Supowit, K. On the complexity of some common geometric location problems. SIAM J. Comput. 1984, 13, 182–196.
18. Hochbaum, D. When are NP-hard location problems easy? Ann. Oper. Res. 1984, 1, 201–214.
19. Hochbaum, D.; Shmoys, D. A best possible heuristic for the k-center problem. Math. Oper. Res. 1985, 10, 180–184.
20. Gonzalez, T. Clustering to minimize the maximum intercluster distance. Theor. Comput. Sci. 1985, 38, 293–306.
21. Daskin, M. Network and Discrete Location: Models, Algorithms and Applications; Wiley: Hoboken, NJ, USA, 1995.
22. Calik, H.; Tansel, B. Double bound method for solving the p-center location problem. Comput. Oper. Res. 2013, 40, 2991–2999.
23. Elloumi, S.; Labbé, M.; Pochet, Y. A new formulation and resolution method for the p-center problem. INFORMS J. Comput. 2004, 16, 84–94.
24. Callaghan, B.; Salhi, S.; Nagy, G. Speeding up the optimal method of Drezner for the p-centre problem in the plane. Eur. J. Oper. Res. 2017, 257, 722–734.
25. Drezner, Z. The p-centre problem—heuristic and optimal algorithms. J. Oper. Res. Soc. 1984, 35, 741–748.
26. Hwang, R.; Lee, R.; Chang, R. The slab dividing approach to solve the Euclidean P-Center problem. Algorithmica 1993, 9, 1–22.
27. Agarwal, P.; Procopiuc, C. Exact and approximation algorithms for clustering. Algorithmica 2002, 33, 201–226.
28. Megiddo, N. Linear-time algorithms for linear programming in R3 and related problems. SIAM J. Comput. 1983, 12, 759–776.
29. Brass, P.; Knauer, C.; Na, H.; Shin, C.; Vigneron, A. Computing k-centers on a line. arXiv 2009, arXiv:0902.3282.
30. Sharir, M. A near-linear algorithm for the planar 2-center problem. Discret. Comput. Geom. 1997, 18, 125–134.
31. Eppstein, D. Faster construction of planar two-centers. SODA 1997, 97, 131–138.
32. Agarwal, P.; Sharir, M.; Welzl, E. The discrete 2-center problem. Discret. Comput. Geom. 1998, 20, 287–305.
33. Frederickson, G. Parametric search and locating supply centers in trees. In Workshop on Algorithms and Data Structures; Springer: Berlin/Heidelberg, Germany, 1991; pp. 299–319.
34. Karmakar, A.; Das, S.; Nandy, S.; Bhattacharya, B. Some variations on constrained minimum enclosing circle problem. J. Comb. Optim. 2013, 25, 176–190.
35. Chen, D.; Li, J.; Wang, H. Efficient algorithms for the one-dimensional k-center problem. Theor. Comput. Sci. 2015, 592, 135–142.
36. Drezner, Z. On the rectangular p-center problem. Nav. Res. Logist. (NRL) 1987, 34, 229–234.
37. Katz, M.J.; Kedem, K.; Segal, M. Discrete rectilinear 2-center problems. Comput. Geom. 2000, 15, 203–214.
38. Drezner, Z. On a modified one-center model. Manag. Sci. 1981, 27, 848–851.
39. Hansen, P.; Jaumard, B. Cluster analysis and mathematical programming. Math. Program. 1997, 79, 191–215.
40. Doddi, S.; Marathe, M.; Ravi, S.; Taylor, D.; Widmayer, P. Approximation algorithms for clustering to minimize the sum of diameters. Nord. J. Comput. 2000, 7, 185–203.
41. Gibson, M.; Kanade, G.; Krohn, E.; Pirwani, I.A.; Varadarajan, K. On metric clustering to minimize the sum of radii. Algorithmica 2010, 57, 484–498.
42. Charikar, M.; Panigrahy, R. Clustering to minimize the sum of cluster diameters. J. Comput. Syst. Sci. 2004, 68, 417–441.
43. Behsaz, B.; Salavatipour, M. On minimum sum of radii and diameters clustering. Algorithmica 2015, 73, 143–165.
44. Mahajan, M.; Nimbhorkar, P.; Varadarajan, K. The planar k-means problem is NP-hard. Theor. Comput. Sci. 2012, 442, 13–21.
45. Shang, Y. Generalized K-Core percolation in networks with community structure. SIAM J. Appl. Math. 2020, 80, 1272–1289.
46. Tao, Y.; Ding, L.; Lin, X.; Pei, J. Distance-based representative skyline. In Proceedings of the 2009 IEEE 25th International Conference on Data Engineering, Shanghai, China, 29 March–2 April 2009; pp. 892–903.
47. Cabello, S. Faster Distance-Based Representative Skyline and k-Center Along Pareto Front in the Plane. arXiv 2020, arXiv:2012.15381.
48. Sayın, S. Measuring the quality of discrete representations of efficient sets in multiple objective mathematical programming. Math. Program. 2000, 87, 543–560.
49. Auger, A.; Bader, J.; Brockhoff, D.; Zitzler, E. Investigating and exploiting the bias of the weighted hypervolume to articulate user preferences. In Proceedings of the GECCO 2009, Montreal, QC, Canada, 8–12 July 2009; pp. 563–570.
50. Bringmann, K.; Cabello, S.; Emmerich, M. Maximum Volume Subset Selection for Anchored Boxes. In Proceedings of the 33rd International Symposium on Computational Geometry (SoCG 2017), Brisbane, Australia, 4–7 July 2017; Aronov, B., Katz, M.J., Eds.; Volume 77, pp. 22:1–22:15.
51. Bringmann, K.; Friedrich, T.; Klitzke, P. Two-dimensional subset selection for hypervolume and epsilon-indicator. In Proceedings of the Annual Conference on Genetic and Evolutionary Computation, Vancouver, BC, Canada, 12–16 June 2014; pp. 589–596.
52. Kuhn, T.; Fonseca, C.; Paquete, L.; Ruzika, S.; Duarte, M.; Figueira, J. Hypervolume subset selection in two dimensions: Formulations and algorithms. Evol. Comput. 2016, 24, 411–425.
53. Erkut, E. The discrete p-dispersion problem. Eur. J. Oper. Res. 1990, 46, 48–60.
54. Hansen, P.; Moon, I. Dispersing facilities on a network. Cahiers du GERAD 1995.
55. Dupin, N. Polynomial algorithms for p-dispersion problems in a 2d Pareto Front. arXiv 2020, arXiv:2002.11830.
56. Dupin, N.; Nielsen, F.; Talbi, E. k-medoids and p-median clustering are solvable in polynomial time for a 2d Pareto front. arXiv 2018, arXiv:1806.02098.
57. Dupin, N.; Nielsen, F.; Talbi, E. Dynamic Programming heuristic for k-means Clustering among a 2-dimensional Pareto Frontier. In Proceedings of the 7th International Conference on Metaheuristics and Nature Inspired Computing, Marrakech, Morocco, 27–31 October 2018.
58. Grønlund, A.; Larsen, K.; Mathiasen, A.; Nielsen, J.; Schneider, S.; Song, M. Fast exact k-means, k-medians and Bregman divergence clustering in 1d. arXiv 2017, arXiv:1701.07204.
59. Wang, H.; Song, M. Ckmeans.1d.dp: Optimal k-means clustering in one dimension by dynamic programming. R J. 2011, 3, 29.
60. Gmys, J.; Carneiro, T.; Melab, N.; Talbi, E.; Tuyttens, D. A comparative study of high-productivity high-performance programming languages for parallel metaheuristics. Swarm Evol. Comput. 2020, 57, 100720.
61. Zio, E.; Bazzo, R. A clustering procedure for reducing the number of representative solutions in the Pareto Front of multiobjective optimization problems. Eur. J. Oper. Res. 2011, 210, 624–634.
62. Samorani, M.; Wang, Y.; Lv, Z.; Glover, F. Clustering-driven evolutionary algorithms: An application of path relinking to the quadratic unconstrained binary optimization problem. J. Heuristics 2019, 25, 629–642.
63. Pulido, G.; Coello, C. Using clustering techniques to improve the performance of a multi-objective particle swarm optimizer. In Proceedings of the Genetic and Evolutionary Computation Conference, Seattle, WA, USA, 26–30 June 2004; pp. 225–237.
Figure 1. Illustration of a 2D Pareto Front (PF) with 15 points and the indexation implied by Proposition 1.
Table 1. Comparison of the time complexity for 2D PF cases to the 1D and 2D cases.
Problem | 1D Complexity | Our 2D PF Complexity | 2D Complexity
Cont. min-sum-K-radii | O(N log N), Proposition 12 | O(K N^2), Theorem 1 | NP-hard [40]
Cont. p-center | O(N log^3 N) [7] | O(p N log N), Theorems 2 and 3 | NP-hard [17]
Discr. p-center | O(N) [33] | O(p N log N), Theorem 2 | NP-hard [17]
Cont. 1-center | O(N) [20] | O(N), Proposition 8 | O(N) [20]
Discr. 1-center | - | O(N), Proposition 8 | O(N log N) [29]
Cont. 2-center | - | O(N log N), Proposition 10 | O(N log^2 N) [28]
Discr. 2-center | - | O(N log N), Proposition 10 | O(N^(4/3) log^5 N) [32]
Partial 1-center | - | O(N min(M, log N)), Proposition 9 | O(N^2 log N) [38]
Rect. 1-center | O(N) [36] | O(N), Proposition 2 | O(N) [36]
Rect. 2-center | O(N) [36] | O(N log N), Proposition 10 | O(N) [36]
Cont. rect. p-center | O(N) [36] | O(p N log N), Theorem 3 | O(N) [36]
Discr. rect. p-center | O(N log N) [36] | O(p N log N), Theorem 2 | O(N log N) [36]

Share and Cite

MDPI and ACS Style

Dupin, N.; Nielsen, F.; Talbi, E.-G. Unified Polynomial Dynamic Programming Algorithms for P-Center Variants in a 2D Pareto Front. Mathematics 2021, 9, 453. https://doi.org/10.3390/math9040453

