Adaptive Metrics for Adaptive Samples

by Nicholas J. Cavanna 1,2,† and Donald R. Sheehy 1,3,*,†

1 Department of Computer Science and Engineering, University of Connecticut, Storrs, CT 06269, USA
2 Swift Health Systems Inc., Irvine, CA 92617, USA
3 Department of Computer Science, North Carolina State University, Raleigh, NC 27695, USA
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.

Algorithms 2020, 13(8), 200; https://doi.org/10.3390/a13080200
Submission received: 1 July 2020 / Revised: 31 July 2020 / Accepted: 3 August 2020 / Published: 18 August 2020
(This article belongs to the Special Issue Topological Data Analysis)

Abstract: We generalize the local-feature size definition of adaptive sampling used in surface reconstruction to relate it to an alternative metric on Euclidean space. In the new metric, adaptive samples become uniform samples, making it simpler both to give adaptive sampling versions of homological inference results and to prove topological guarantees using the critical points theory of distance functions. This ultimately leads to an algorithm for homology inference from samples whose spacing depends on their distance to a discrete representation of the complement space.

1. Introduction

1.1. From Points to Topology

Both surface reconstruction and homology inference are algorithmic problems that take points as input and produce a topological representation of the underlying space from which the points were drawn. In surface reconstruction, one often wants a homeomorphic reconstruction in the form of a triangulation, whereas in homology inference, it suffices to compute the homology groups. Although the two problems are similar in many respects, the general trend is that with weaker conditions on the input (i.e., noisier samples), one can only hope for weaker guarantees on the output (homology rather than homeomorphism). There is one aspect of these theories that directly contradicts this trend: many surface reconstruction algorithms are able to work with an adaptive sample, while most homology inference algorithms require a uniform sample. Here and throughout, we use "uniform" in the Hausdorff sense, not the statistical sense. An adaptive sample has a density that adapts to some local sizing function. Thus, areas that require higher fidelity will have higher density (and a smaller scale), while areas that can get by with less fidelity will have lower density (and a larger scale).
There have been some notable works that have bridged this gap between surface reconstruction and homology inference for adaptive samples. Most theoretically guaranteed surface reconstruction algorithms assume an input that is sufficiently dense with respect to the distance to the medial axis, a kind of skeleton describing the complement of the underlying shape. Cazals et al. [1] introduced the conformal alpha shape filtration as a way to build triangulations at different scales that have local connectivity related to the local feature size. Although their stated goal was surface reconstruction, the work employed many of the methods of homology inference. Chazal and Lieutier [2,3] gave a more direct generalization of methods in surface reconstruction with adaptive samples to homology inference, achieving some guarantees for smooth manifolds assuming both upper and lower bounds on the density. Dey et al. [4] gave a homology inference algorithm for manifold data that attempts to sample a subset of the medial axis in order to approximate the local feature size. This work was the main motivation for the current paper, and we adopted their notation of X for the space and L for the approximation to the complement space. We extend these works by providing guaranteed homology inference for a much more general class of samples and spaces; we do not require the space to be a manifold or the sample to adapt to the medial axis.

1.2. From Surface Reconstruction to Homology Inference

To reconstruct a surface from a point set, one needs the sample to be sufficiently dense with respect to not just the local curvature of the surface, but also the distance to parts of the surface that are close in the embedding, but far in geodesic distance. Otherwise, algorithms have no way of identifying which geometrically close sample points correspond to local neighborhoods in the surface. Adaptive sampling with respect to the so-called local feature size as introduced by Amenta and Bern [5] neatly characterizes such “good” samples and was then used in many later works on surface reconstruction with topological guarantees [6]. There is an extensive literature on the problem in high dimensions (see [7,8] for recent examples), and the problem remains an active research area. Such adaptive samples are in contrast to uniform samples for which a single parameter determines the density. That parameter is usually driven by the minimum of the local feature size and results in a much larger sample.
Later work on generalizations of surface reconstruction and homology inference related the topology of unions of balls centered at a sample $\hat{X}$ near the unknown set $X$ to the topology of $X$ itself. The most well-known such results were by Niyogi et al. [9,10]. A union of balls with a fixed radius can be viewed as a sublevel set of the distance function to $\hat{X}$. If we have an adaptive sample, then we would like to scale the radii of the balls as well. However, if the sample is adaptive with respect to a local feature size defined as the distance to an unknown set $L$, another approximation $\hat{L}$ near $L$ is necessary. Indeed, one interpretation of some Voronoi-based surface reconstruction algorithms is that an approximation $\hat{L}$ to the medial axis $L$ is computed from the Voronoi diagram of the sample $\hat{X}$ of the unknown surface $X$.
We present a new perspective on adaptive samples. For any pair of disjoint, compact sets $X$ and $L$, we define a metric on $\mathbb{R}^d \setminus L$ with the property that a uniform sample of $X$ in the new metric corresponds to an adaptive sample in the Euclidean metric. We call this the metric induced by $L$ or simply the induced metric. This new metric can also be extended to an arbitrarily close Riemannian metric over the same domain. Our main motivation is to connect adaptive sampling theory to the critical point theory of distance functions used extensively to prove topological guarantees in topological data analysis [2,11,12]. That theory gives natural topological equivalences between sublevel sets of distance functions to compact sets in Riemannian metrics. Thus, we propose to use the induced metric as the underlying ideal object and then relate it to a union of Euclidean balls constructed from approximations of $X$ and $L$. Our metric can be viewed as a smoothed version of a metric used by Clarkson [13]. Our new formulation reveals connections with work on path planning [14,15] and density-based distances [16,17]. These are all constructions where one looks at a conformal change of metric induced by a subset of Euclidean space.

1.3. Overview

We lay out the main objects of study in Section 2. This includes the induced metric and a discrete approximation. Throughout the paper, we will relate these two objects or variations thereof for different purposes. In Section 3.1, we prove the relationship between the adaptive samples used in surface reconstruction and uniform samples in the induced metric. The definition of the induced metric does not lend itself to direct computation. Therefore, in Section 3.2, we bound the interleaving distance between the induced metric and its discrete approximation. This interleaving is then used in Section 3.3 to give a homology inference algorithm that is guaranteed to recover the homology of a sublevel set of the induced metric under certain sampling conditions.

2. Methods

Let $L$ and $X$ be compact subsets of $\mathbb{R}^d$ with respect to the Euclidean metric. For $x, y \in \mathbb{R}^d$, define $\mathrm{Path}(x, y)$ to be the set of bounded piecewise-$C^1$ paths from $x$ to $y$, parametrized by the Euclidean arc length. Similarly, $\mathrm{Path}(x, S) := \bigcup_{s \in S} \mathrm{Path}(x, s)$ denotes all paths from $x$ to a set $S$.
For any compact set $L \subset \mathbb{R}^d$, define $f_L(\cdot) : \mathbb{R}^d \to \mathbb{R}$ by:
$$f_L(x) := \min_{\ell \in L} \| x - \ell \|.$$
Define:
$$d_L(x, y) := \min_{\gamma \in \mathrm{Path}(x, y)} \int_\gamma \frac{\mathrm{d}z}{f_L(z)}.$$
The length of a unit-speed path $\gamma : [0, a] \to \mathbb{R}^d$ is denoted as:
$$|\gamma| := \int_\gamma \mathrm{d}z = \int_0^a \mathrm{d}t,$$
and its length in the metric induced by $L$ is written $|\gamma|_L := \int_\gamma \frac{\mathrm{d}z}{f_L(z)}$.
For $y \in \mathbb{R}^d$, define:
$$f_X^L(y) := d_L(y, X) = \min_{x \in X} d_L(y, x),$$
and:
$$\widehat{f_X^L}(y) := \min_{x \in X} \frac{\| y - x \|}{f_L(x)}.$$
Note that $f_X^L(\cdot)$ is a distance function, while $\widehat{f_X^L}(\cdot)$ is not. The latter function can be interpreted as a first-order approximation of the former.
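For finite point sets, both $f_L$ and $\widehat{f_X^L}$ are directly computable. The following minimal numpy sketch (our own; the function names are not from the paper) evaluates them:

```python
import numpy as np
from scipy.spatial.distance import cdist

def f_L(points, L):
    """Euclidean distance from each query point to its nearest point of L."""
    return cdist(points, L).min(axis=1)

def f_XL_hat(points, X, L):
    """First-order approximation: min over x in X of ||y - x|| / f_L(x)."""
    return (cdist(points, X) / f_L(X, L)[None, :]).min(axis=1)

# Example: L is the origin and X is the unit circle around it.
L_pts = np.zeros((1, 2))
theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
X_pts = np.c_[np.cos(theta), np.sin(theta)]
print(f_XL_hat(np.array([[1.5, 0.0]]), X_pts, L_pts))  # approximately 0.5
```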
Definition 1.
For any compact set $X \subset \mathbb{R}^d \setminus L$, for some compact set $L \subset \mathbb{R}^d$, the α-offsets with respect to $d_L$ are:
$$A_X^L(\alpha) := \{ x \in \mathbb{R}^d \mid f_X^L(x) \le \alpha \}.$$
The distance function $f_L(\cdot)$ can be transformed into an arbitrarily close smooth function $\tilde{f}_L(\cdot)$ [18], yielding a Riemannian metric $\tilde{d}_L$ defined in an identical manner as $d_L$. From this, one has corresponding α-offsets $\tilde{A}_X^L(\alpha)$ that are arbitrarily close to $A_X^L(\alpha)$. We will encounter this smoothed version in Section 3.3.
We will approximate the offsets $A_X^L(\alpha)$ by a union of balls as follows.
Definition 2.
For any compact set $X \subset \mathbb{R}^d \setminus L$, for some compact set $L \subset \mathbb{R}^d$, the approximate α-offsets with respect to $d_L$ are:
$$B_X^L(\alpha) := (\widehat{f_X^L})^{-1}[0, \alpha] = \bigcup_{x \in X} \mathrm{ball}(x, \alpha f_L(x)).$$
A useful property of $f_X^L(\cdot)$ is that it is a 1-Lipschitz function. In general, a function $f$ between two metric spaces $(X, d_X)$ and $(Y, d_Y)$ is said to be $k$-Lipschitz if for all $x, y \in X$, $d_Y(f(x), f(y)) \le k \, d_X(x, y)$.
Lemma 1.
The function $f_X^L$ is 1-Lipschitz from the metric space $(\mathbb{R}^d, d_L)$ to $\mathbb{R}$.
Proof.
Fix any $a, b \in \mathbb{R}^d$. There exist a point $x \in X$ and a path $\gamma_1 \in \mathrm{Path}(a, x)$ such that $f_X^L(a) = \int_{\gamma_1} \frac{\mathrm{d}z}{f_L(z)}$. Likewise, there exists $\gamma_2 \in \mathrm{Path}(a, b)$ such that $d_L(a, b) = \int_{\gamma_2} \frac{\mathrm{d}z}{f_L(z)}$.
The concatenation of the reverse of $\gamma_2$ with $\gamma_1$ is a path $\gamma_3 \in \mathrm{Path}(b, X)$. Thus, $f_X^L(b) \le \int_{\gamma_3} \frac{\mathrm{d}z}{f_L(z)} \le f_X^L(a) + d_L(a, b)$. As this holds for all $a, b$, we conclude that $|f_X^L(a) - f_X^L(b)| \le d_L(a, b)$, as desired. □
We can use $f_X^L$ to define the Hausdorff distance, which is a metric between compact sets. This metric is useful for stating bounds on the quality, or uniformity, of a sample near a set.
Definition 3.
The Hausdorff distance between two compact sets $X, Y \subset (\mathbb{R}^d, d_L)$ is defined as:
$$d_H^L(X, Y) = \max\left\{ \max_{x \in X} f_Y^L(x), \; \max_{y \in Y} f_X^L(y) \right\}.$$
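The same formula applies verbatim to finite sets under any metric. A small sketch (ours; the helper name is hypothetical) computes the Hausdorff distance for a user-supplied distance function, so that plugging in an approximation of $d_L$ yields an estimate of $d_H^L$:

```python
def hausdorff(X, Y, dist):
    """Hausdorff distance between finite collections X and Y, where
    dist(a, b) is any metric on their elements."""
    d_XY = max(min(dist(x, y) for y in Y) for x in X)
    d_YX = max(min(dist(x, y) for x in X) for y in Y)
    return max(d_XY, d_YX)

# Example with the Euclidean metric on points of the real line:
euclid = lambda a, b: abs(a - b)
print(hausdorff([0.0, 1.0], [0.0, 1.5], euclid))  # 0.5
```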
If the Hausdorff distance between a compact set and a sample is bounded, Lemma 3 shows that their α -offsets are interleaved at particular scales.
The following is the definition of an adaptive sample that we will use throughout. For the special case when $X$ is a manifold and $L$ is its medial axis, it corresponds to the ε-sample used in surface reconstruction.
Definition 4.
Given a compact set $L \subset \mathbb{R}^d$ and compact sets $X, \hat{X} \subset \mathbb{R}^d \setminus L$ such that $\hat{X} \subseteq X$, we say that $\hat{X}$ is an ε-sample of $X$, for $\varepsilon \in [0, 1)$, if for all $x \in X$, there exists $p \in \hat{X}$ such that $\| x - p \| \le \varepsilon f_L(x)$.
This definition is closely related to that of the approximate α-offsets, because if $\hat{X}$ is an ε-sample of $X$, then for all $x \in X$, $\mathrm{ball}(x, \varepsilon f_L(x)) \cap \hat{X} \ne \emptyset$.
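The paper does not prescribe an algorithm for evaluating $d_L$; a natural discrete proxy (a sketch of our own, not the authors' method) is to run shortest paths in a neighborhood graph on sample points, weighting each edge by a one-point quadrature of $\int_\gamma \mathrm{d}z / f_L(z)$ along the corresponding segment:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import dijkstra
from scipy.spatial.distance import cdist

def approx_d_L(P, L, k=8):
    """Approximate d_L between all pairs of sample points P (rows) by
    shortest paths in a k-nearest-neighbor graph. Edge {i, j} is weighted
    by ||P[i] - P[j]|| / f_L(midpoint). Sketch only; a denser sample and
    subdivided edges tighten the approximation."""
    n = len(P)
    D = cdist(P, P)
    order = np.argsort(D, axis=1)
    W = np.zeros((n, n))            # zero entries act as absent edges
    for i in range(n):
        for j in order[i, 1:k + 1]:
            mid = (P[i] + P[j]) / 2.0
            f_mid = cdist(mid[None, :], L).min()
            W[i, j] = W[j, i] = D[i, j] / f_mid
    return dijkstra(csr_matrix(W), directed=False)
```

Feeding the resulting pairwise distances into the hausdorff sketch above then gives a crude estimate of $d_H^L$ for finite samples.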

3. Results

Lemma 3.
Consider $\hat{X}, X \subset \mathbb{R}^d \setminus L$ such that $d_H^L(\hat{X}, X) \le \delta$. Then, for all $\alpha \ge 0$, $A_X^L(\alpha) \subseteq A_{\hat{X}}^L(\alpha + \delta)$ and $A_{\hat{X}}^L(\alpha) \subseteq A_X^L(\alpha + \delta)$.
Proof.
Fix $y \in A_X^L(\alpha)$. By definition, $f_X^L(y) \le \alpha$, which implies that there exists $x \in X$ such that $d_L(x, y) \le \alpha$. The assumption $d_H^L(\hat{X}, X) \le \delta$ implies that for all $x \in X$, $f_{\hat{X}}^L(x) \le \delta$. Now, by Lemma 1, $f_{\hat{X}}^L(y) \le f_{\hat{X}}^L(x) + d_L(x, y) \le \delta + \alpha$, implying $y \in A_{\hat{X}}^L(\alpha + \delta)$. By a symmetric argument, the other inclusion holds. □
Lemma 4 relates the lengths of a path $\gamma$ with respect to two distance-to-set functions, assuming the two sets are close in Euclidean Hausdorff distance.
Lemma 4.
Let $L, \hat{L}$ be two compact sets such that $d_H(L, \hat{L}) \le \varepsilon$ for some $\varepsilon > 0$. For all unit-speed paths $\gamma : [0, a] \to \mathbb{R}^d \setminus L^{c\varepsilon}$, where $c$ is some positive constant, we have the following inequalities.
$$\left(1 - \tfrac{1}{c}\right) |\gamma|_{\hat{L}} \le |\gamma|_L \le \left(1 + \tfrac{1}{c}\right) |\gamma|_{\hat{L}}.$$
Proof.
Take an arbitrary unit-speed path $\gamma : [0, a] \to \mathbb{R}^d \setminus L^{c\varepsilon}$, where $d_H(L, \hat{L}) \le \varepsilon$. Since the image of the path $\gamma$ is a subset of $\mathbb{R}^d \setminus L^{c\varepsilon}$, for all $z \in \gamma$, $f_L(z) > c\varepsilon$. By the Hausdorff distance between $L$ and $\hat{L}$, we have $f_L(z) \le f_{\hat{L}}(z) + \varepsilon < f_{\hat{L}}(z) + \frac{f_L(z)}{c}$. Likewise, we have that $f_L(z) \ge f_{\hat{L}}(z) - \varepsilon > f_{\hat{L}}(z) - \frac{f_L(z)}{c}$. Rearranging both of these, we have that
$$\frac{1 - \frac{1}{c}}{f_{\hat{L}}(z)} < \frac{1}{f_L(z)} < \frac{1 + \frac{1}{c}}{f_{\hat{L}}(z)}.$$
By the definition of $|\gamma|_L$ and $|\gamma|_{\hat{L}}$, these inequalities imply that $(1 - \frac{1}{c}) |\gamma|_{\hat{L}} \le |\gamma|_L \le (1 + \frac{1}{c}) |\gamma|_{\hat{L}}$. □
The following lemma bounds how close to $L$ a shortest path to a compact set $X$ can pass, and thereby supplies a constant $c$ satisfying Lemma 4 that depends on the compact set $X \subset \mathbb{R}^d \setminus L$ one is working with. Here $d(X, L) := \min_{x \in X} f_L(x)$ denotes the minimum Euclidean distance between $X$ and $L$, and $L^\beta$ denotes the Euclidean β-offsets of $L$ (see Appendix A).
Lemma 5.
Take a compact set $L \subset \mathbb{R}^d$, a compact set $X \subset \mathbb{R}^d \setminus L$, and $y \in A_X^L(\delta)$, for $\delta < 1$. If $\gamma$ is the shortest path from $y$ to $X$ with respect to $d_L$, then:
$$\gamma \subset \mathbb{R}^d \setminus L^{\left(1 - \frac{\delta}{1 - \delta}\right) d(X, L)}.$$
Proof.
Since $y \in A_X^L(\delta)$, $f_X^L(y) \le \delta$, so there exists $x \in X$ such that $d_L(x, y) \le \delta$. Take $\gamma$ as the shortest path from $y$ to $X$. For all $z \in \gamma$, $d_L(x, z) \le d_L(x, y) \le \delta$.
By Lemma 10, $\| x - z \| \le \frac{\delta}{1 - \delta} f_L(x)$, and by $f_L$ being 1-Lipschitz, we have that $f_L(z) \ge f_L(x) - \| x - z \| \ge (1 - \frac{\delta}{1 - \delta}) f_L(x) \ge (1 - \frac{\delta}{1 - \delta}) d(X, L)$. This means that every point on the path $\gamma$ is at least distance $(1 - \frac{\delta}{1 - \delta}) d(X, L)$ away from $L$. □
We define a noisy ε-sample, for $\varepsilon < 1$, of a compact set $X \subset \mathbb{R}^d \setminus L$ with respect to $f_L$, for some compact set $L$, as a compact set $\hat{X} \subset \mathbb{R}^d \setminus L$ such that for all $x \in X$, there exists $p \in \hat{X}$ such that $\| x - p \| \le \varepsilon f_L(x)$, and likewise, for all $p \in \hat{X}$, there exists $x \in X$ such that $\| x - p \| \le \varepsilon f_L(x)$. The following lemmas relate a noisy ε-sample to the Hausdorff distance between the sample $\hat{X}$ and the set $X$ and vice versa.
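For finite arrays, this two-sided condition is a direct computation. A sketch (ours; the helper name is hypothetical):

```python
import numpy as np
from scipy.spatial.distance import cdist

def is_noisy_eps_sample(X_hat, X, L, eps):
    """Check the two-sided noisy eps-sample condition for finite arrays.
    R[i, j] = ||x_i - p_j|| / (eps * f_L(x_i)); the condition asks every
    row and every column of R to contain an entry at most 1."""
    R = cdist(X, X_hat) / (eps * cdist(X, L).min(axis=1))[:, None]
    return bool((R.min(axis=1) <= 1).all() and (R.min(axis=0) <= 1).all())
```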
Lemma 6.
Consider a compact set $L$ and compact sets $X, \hat{X} \subset \mathbb{R}^d \setminus L$. If $\hat{X}$ is a noisy ε-sample of $X$ with respect to $f_L$, for $\varepsilon < 1$, then $d_H^L(\hat{X}, X) \le \frac{\varepsilon}{1 - \varepsilon}$.
Proof.
Given $x \in X$, by definition, there exists $p \in \hat{X}$ such that $\| x - p \| \le \varepsilon f_L(x)$. By Lemma 10, $d_L(x, p) \le \frac{\varepsilon}{1 - \varepsilon}$, so for all $x \in X$, $f_{\hat{X}}^L(x) \le \frac{\varepsilon}{1 - \varepsilon}$.
Furthermore, given $p \in \hat{X}$, there exists $x \in X$ such that $\| x - p \| \le \varepsilon f_L(x)$, so for all $p \in \hat{X}$, $f_X^L(p) \le \frac{\varepsilon}{1 - \varepsilon}$; thus, $d_H^L(\hat{X}, X) \le \frac{\varepsilon}{1 - \varepsilon}$. □
Lemma 7.
Consider a compact set $L$ and compact sets $X, \hat{X} \subset \mathbb{R}^d \setminus L$. If $d_H^L(\hat{X}, X) \le \varepsilon < \frac{1}{2}$, then $\hat{X}$ is a noisy $\frac{\varepsilon}{1 - \varepsilon}$-sample of $X$ with respect to $f_L$.
Proof.
$d_H^L(\hat{X}, X) \le \varepsilon$ implies that for all $p \in \hat{X}$, $f_X^L(p) \le \varepsilon$. Thus, there exists $x \in X$ such that $d_L(x, p) \le \varepsilon$. By Lemma 10, $\| x - p \| \le \frac{\varepsilon}{1 - \varepsilon} f_L(x)$.
Similarly, $d_H^L(\hat{X}, X) \le \varepsilon$ implies that for all $x \in X$, $f_{\hat{X}}^L(x) \le \varepsilon$; thus, there exists $p \in \hat{X}$ such that $d_L(x, p) \le \varepsilon$, and thus, $\| x - p \| \le \frac{\varepsilon}{1 - \varepsilon} f_L(x)$. Since $\varepsilon < \frac{1}{2}$, then $\frac{\varepsilon}{1 - \varepsilon} < 1$, so $\hat{X}$ is a noisy $\frac{\varepsilon}{1 - \varepsilon}$-sample of $X$. □
Lemma 8.
Given a compact set $L \subset \mathbb{R}^d$ and a compact set $X \subset \mathbb{R}^d \setminus L$, for $\varepsilon < 1$, $A_X^L(\varepsilon) \subseteq B_X^L(\frac{\varepsilon}{1 - \varepsilon})$.
Proof.
Take $y \in A_X^L(\varepsilon)$, so that $f_X^L(y) \le \varepsilon$. Thus, there exists $x \in X$ such that $d_L(x, y) \le \varepsilon$. By Lemma 10, this implies that $\| x - y \| \le \frac{\varepsilon}{1 - \varepsilon} f_L(x)$, which implies that $y \in B_X^L(\frac{\varepsilon}{1 - \varepsilon})$. □
Lemma 9.
Given a compact set $L \subset \mathbb{R}^d$ and a compact set $X \subset \mathbb{R}^d \setminus L$, for $\varepsilon < 1$, $B_X^L(\varepsilon) \subseteq A_X^L(\frac{\varepsilon}{1 - \varepsilon})$.
Proof.
Consider $y \in B_X^L(\varepsilon)$. Thus, $y \in \mathrm{ball}(x, \varepsilon f_L(x))$ for some $x \in X$, so $\| x - y \| \le \varepsilon f_L(x)$. Applying Lemma 10, we then have that $d_L(x, y) \le \frac{\varepsilon}{1 - \varepsilon}$, and as $f_X^L(y) \le d_L(x, y)$, $y \in A_X^L(\frac{\varepsilon}{1 - \varepsilon})$. □

3.1. Adaptive Sampling

In this section, we prove that a uniform sample in the induced metric corresponds to an adaptive sample in the Euclidean metric and vice versa. The key to this proof is the following lemma, which will also be used for the more elaborate interleaving results of Section 3.2.
Lemma 10.
Let $L \subset \mathbb{R}^d$ be a compact set, and let $a, b \in \mathbb{R}^d \setminus L$. Then, the following two statements hold for all $\delta \in [0, 1)$.
(i)
If $d_L(a, b) \le \delta$, then $\| a - b \| \le f_L(a) \frac{\delta}{1 - \delta}$.
(ii)
If $\| a - b \| \le f_L(a) \delta$, then $d_L(a, b) \le \frac{\delta}{1 - \delta}$.
Proof.
To prove (i), we assume $d_L(a, b) \le \delta$. Let $\gamma$ be the path in $\mathrm{Path}(a, b)$ such that $d_L(a, b) = \int_\gamma \frac{\mathrm{d}z}{f_L(z)} \le \delta$. Since $f_L$ is 1-Lipschitz, every $z \in \gamma$ satisfies $f_L(z) \le f_L(a) + |\gamma|$, which gives the following inequalities.
$$|\gamma| = \int_\gamma \mathrm{d}z = (f_L(a) + |\gamma|) \int_\gamma \frac{\mathrm{d}z}{f_L(a) + |\gamma|} \le (f_L(a) + |\gamma|) \int_\gamma \frac{\mathrm{d}z}{f_L(z)} \le (f_L(a) + |\gamma|) \, \delta$$
It follows that $|\gamma| \le \frac{\delta}{1 - \delta} f_L(a)$. Because $\| a - b \|$ is the length of the shortest path between $a$ and $b$ in the Euclidean metric, we conclude that $\| a - b \| \le |\gamma| \le \frac{\delta}{1 - \delta} f_L(a)$.
Next, we prove (ii). Assume $\| a - b \| \le \delta f_L(a)$. For all points $z$ in the straight line segment $\overline{ab}$,
$$f_L(z) \ge f_L(a) - \| a - z \| \ge f_L(a) - \| a - b \| \ge (1 - \delta) f_L(a).$$
This implies the following inequality.
$$d_L(a, b) = \inf_{\gamma \in \mathrm{Path}(a, b)} \int_\gamma \frac{\mathrm{d}z}{f_L(z)} \le \int_{\overline{ab}} \frac{\mathrm{d}z}{f_L(z)} \le \frac{1}{(1 - \delta) f_L(a)} \int_{\overline{ab}} \mathrm{d}z = \frac{\| a - b \|}{(1 - \delta) f_L(a)} \le \frac{\delta}{1 - \delta}.$$
□
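As a sanity check, statement (ii) can be verified numerically in a simple case. The sketch below (ours) takes $L = \{0\}$ in the plane, so $f_L(z) = \|z\|$, and upper-bounds $d_L(a, b)$ by integrating $1 / f_L$ along the straight segment $\overline{ab}$:

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(200):
    a = rng.uniform(10.0, 20.0, size=2)        # keep a well away from L = {0}
    delta = rng.uniform(0.05, 0.9)
    u = rng.normal(size=2)
    b = a + delta * np.linalg.norm(a) * u / np.linalg.norm(u)  # ||a-b|| = delta*f_L(a)
    ts = np.linspace(0.0, 1.0, 2001)
    seg = a[None, :] + ts[:, None] * (b - a)[None, :]
    integrand = 1.0 / np.linalg.norm(seg, axis=1)              # 1 / f_L(z)
    d_upper = np.trapz(integrand, dx=np.linalg.norm(b - a) / 2000)
    assert d_upper <= delta / (1 - delta) + 1e-3               # Lemma 10 (ii)
```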
We can now state the main theorem relating adaptive samples in the Euclidean metric to uniform samples in the metric induced by a set L.
Theorem 1.
Let $L$ and $X$ be compact sets; let $\hat{X} \subseteq X$ be a sample; and let $\varepsilon \in [0, 1)$ be a constant. If $\hat{X}$ is an ε-sample of $X$ with respect to the distance to $L$, then $d_H^L(X, \hat{X}) \le \frac{\varepsilon}{1 - \varepsilon}$. Furthermore, if $d_H^L(X, \hat{X}) \le \varepsilon < \frac{1}{2}$, then $\hat{X}$ is an $\frac{\varepsilon}{1 - \varepsilon}$-sample of $X$ with respect to the distance to $L$.
Proof.
Given $x \in X$, there exists $p \in \hat{X}$ such that $\| x - p \| \le \varepsilon f_L(x)$. By Lemma 10, $d_L(x, p) \le \frac{\varepsilon}{1 - \varepsilon}$, so for all $x \in X$, $f_{\hat{X}}^L(x) \le \frac{\varepsilon}{1 - \varepsilon}$. As $\hat{X} \subseteq X$, we also have $f_X^L(p) = 0$ for every $p \in \hat{X}$, and this proves $d_H^L(\hat{X}, X) \le \frac{\varepsilon}{1 - \varepsilon}$.
Furthermore, $d_H^L(\hat{X}, X) \le \varepsilon < \frac{1}{2}$ implies that for all $x \in X$, $f_{\hat{X}}^L(x) \le \varepsilon$; thus, there exists $p \in \hat{X}$ such that $d_L(x, p) \le \varepsilon$. Thus, by Lemma 10, $\| x - p \| \le \frac{\varepsilon}{1 - \varepsilon} f_L(x)$. Since $\varepsilon < \frac{1}{2}$, then $\frac{\varepsilon}{1 - \varepsilon} < 1$, so $\hat{X}$ is an $\frac{\varepsilon}{1 - \varepsilon}$-sample of $X$. □

3.2. Interleaving

A filtration is a nested family of sets. In this paper, we consider filtrations $F$ parameterized by a real number $\alpha \ge 0$ so that $F(\alpha) \subseteq \mathbb{R}^d$, and whenever $\alpha < \beta$, we have $F(\alpha) \subseteq F(\beta)$. Often, our filtrations are sublevel filtrations of a real-valued function $f : \mathbb{R}^d \to \mathbb{R}$. The sublevel filtration $F$ corresponding to the function $f$ is defined as:
$$F(\alpha) := \{ x \in \mathbb{R}^d \mid f(x) \le \alpha \}.$$
Definition 5.
A pair of filtrations $(F, G)$ is $(h_1, h_2)$-interleaved in an interval $(s, t)$ if $F(r) \subseteq G(h_1(r))$ whenever $r, h_1(r) \in (s, t)$ and $G(r) \subseteq F(h_2(r))$ whenever $r, h_2(r) \in (s, t)$. We require the functions $h_1, h_2$ to be nondecreasing on $(s, t)$.
The following lemma gives us an easy way to combine interleavings.
Lemma 11.
If $(F, G)$ is $(h_1, h_2)$-interleaved in $(s_1, t_1)$ and $(G, H)$ is $(h_3, h_4)$-interleaved in $(s_2, t_2)$, then $(F, H)$ is $(h_3 \circ h_1, h_2 \circ h_4)$-interleaved in $(s_3, t_3)$, where $s_3 = \max\{s_1, s_2\}$ and $t_3 = \min\{t_1, t_2\}$.
Proof.
If $r, (h_3 \circ h_1)(r) \in (s_3, t_3)$, then we have $F(r) \subseteq G(h_1(r)) \subseteq H((h_3 \circ h_1)(r))$. Similarly, if $r, (h_2 \circ h_4)(r) \in (s_3, t_3)$, then $H(r) \subseteq G(h_4(r)) \subseteq F((h_2 \circ h_4)(r))$. □
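Lemma 11 translates directly into code. In the sketch below (our own representation, not the paper's), an interleaving is a triple of the two maps and the interval:

```python
def compose_interleavings(first, second):
    """Lemma 11: compose (F, G)- and (G, H)-interleavings into an
    (F, H)-interleaving by composing the maps and intersecting intervals."""
    h1, h2, (s1, t1) = first     # (F, G) is (h1, h2)-interleaved in (s1, t1)
    h3, h4, (s2, t2) = second    # (G, H) is (h3, h4)-interleaved in (s2, t2)
    return (lambda r: h3(h1(r)),           # F(r) included in H(h3(h1(r)))
            lambda r: h2(h4(r)),           # H(r) included in F(h2(h4(r)))
            (max(s1, s2), min(t1, t2)))
```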

3.2.1. Approximating X with $\hat{X}$

Ultimately, the goal is to relate $A_X^L$, the offsets in the induced metric, to $B_{\hat{X}}^{\hat{L}}$, the approximate offsets computed from approximations (or samples) to both $X$ and $L$. This relationship will be given by an interleaving that is built up from an interleaving for each approximation step. For each of the following lemmas, let $L, \hat{L} \subset \mathbb{R}^d$ and $X, \hat{X} \subset \mathbb{R}^d \setminus (L \cup \hat{L})$ be compact sets.
Lemma 12.
If $d_H^L(\hat{X}, X) \le \varepsilon$, then $(A_X^L, A_{\hat{X}}^L)$ are $(h_1, h_1)$-interleaved in $(0, \infty)$, where $h_1(r) = r + \varepsilon$.
Proof.
This lemma is a reinterpretation of Lemma 3 in the interleaving notation. □

3.2.2. Approximating the Induced Metric

It is much easier to use a union of Euclidean balls to model the sublevel sets of the distance function $f_X^L$. Below, we show that this is a reasonable approximation. The following results may also be viewed as a strengthening of the adaptive sampling result of the previous section (Theorem 1).
Lemma 13.
The pair $(A_{\hat{X}}^L, B_{\hat{X}}^L)$ is $(h_2, h_2)$-interleaved in $(0, 1)$, where $h_2(r) = \frac{r}{1 - r}$.
Proof.
It will suffice to show that for $r \in [0, 1)$, $A_{\hat{X}}^L(r) \subseteq B_{\hat{X}}^L(\frac{r}{1 - r})$, and for $r \in [0, \frac{1}{2})$, $B_{\hat{X}}^L(r) \subseteq A_{\hat{X}}^L(\frac{r}{1 - r})$.
Take $y \in A_{\hat{X}}^L(r)$, so that $f_{\hat{X}}^L(y) \le r$. Thus, there exists $x \in \hat{X}$ such that $d_L(x, y) \le r$. By Lemma 10, this implies that $\| x - y \| \le \frac{r}{1 - r} f_L(x)$, which implies that $y \in B_{\hat{X}}^L(\frac{r}{1 - r})$.
Consider any point $y \in B_{\hat{X}}^L(r)$. For some $x \in \hat{X}$, we have $y \in \mathrm{ball}(x, r f_L(x))$, so $\| x - y \| \le r f_L(x)$. Applying Lemma 10, we have that $d_L(x, y) \le \frac{r}{1 - r}$. Finally, $y \in A_{\hat{X}}^L(\frac{r}{1 - r})$, because $f_{\hat{X}}^L(y) \le d_L(x, y)$. □

3.2.3. Approximating L with $\hat{L}$

Usually, the set $L$ is unknown at the start and must be estimated from the input. For example, if $L$ is the medial axis of $X$, there are several known techniques for approximating $L$ by taking some vertices of the Voronoi diagram [5,6]. We would like to give some sampling conditions that allow us to replace $L$ with an approximation $\hat{L}$. Interestingly, the sampling condition for $\hat{L}$ is dual to the one used for $\hat{X}$: we require $d_H^{\hat{X}}(L, \hat{L}) \le \delta$. In other words, $\hat{L}$ must be an adaptive sample with respect to the distance to $\hat{X}$.
Lemma 14.
If $d_H^{\hat{X}}(L, \hat{L}) \le \delta < 1$, then $(B_{\hat{X}}^L, B_{\hat{X}}^{\hat{L}})$ is $(h_3, h_3)$-interleaved in $(0, \infty)$, where $h_3(r) = \frac{r}{1 - \delta}$.
Proof.
Fix any $x \in B_{\hat{X}}^L(r)$. There is a point $p \in \hat{X}$ such that $\frac{\| x - p \|}{f_L(p)} \le r$. Moreover, there is a nearest point $z \in \hat{L}$ to $p$ such that $f_{\hat{L}}(p) = \| p - z \|$. Lemma 10 and the assumption that $d_H^{\hat{X}}(L, \hat{L}) \le \delta$ together imply that there exists $y \in L$ such that:
$$\| y - z \| \le \frac{\delta}{1 - \delta} f_{\hat{X}}(z). \quad (1)$$
It then follows from the definitions that:
$$f_{\hat{X}}(z) = \min_{q \in \hat{X}} \| z - q \| \le \| z - p \| = f_{\hat{L}}(p). \quad (2)$$
Therefore, we can bound $f_L(p)$ in terms of $f_{\hat{L}}(p)$ as follows:
$$f_L(p) \le \| y - p \| \le \| y - z \| + \| z - p \| \le \frac{1}{1 - \delta} f_{\hat{L}}(p),$$
where the first inequality holds because $y \in L$, the second is the triangle inequality, and the third follows from (1) and (2).
Therefore,
$$\frac{\| x - p \|}{f_{\hat{L}}(p)} \le \frac{\| x - p \|}{(1 - \delta) f_L(p)} \le \frac{r}{1 - \delta} = h_3(r).$$
Therefore, $x \in B_{\hat{X}}^{\hat{L}}(h_3(r))$, and so, we conclude that $B_{\hat{X}}^L(r) \subseteq B_{\hat{X}}^{\hat{L}}(h_3(r))$. The proof that $B_{\hat{X}}^{\hat{L}}(r) \subseteq B_{\hat{X}}^L(h_3(r))$ is symmetric. □

3.2.4. Putting It All Together

We can now use Lemma 11 to combine the interleavings established in Lemmas 12–14.
Theorem 2.
Let $L, \hat{L} \subset \mathbb{R}^d$ and $X, \hat{X} \subset \mathbb{R}^d \setminus (L \cup \hat{L})$ be compact sets. If $d_H^{\hat{X}}(L, \hat{L}) \le \delta < 1$ and $d_H^L(\hat{X}, X) \le \varepsilon < 1$, then $(A_X^L, B_{\hat{X}}^{\hat{L}})$ are $(h_4, h_5)$-interleaved in $(0, 1)$, where $h_4(r) = \frac{r + \varepsilon}{(1 - r - \varepsilon)(1 - \delta)}$ and $h_5(r) = \frac{r}{1 - \delta - r} + \varepsilon$.
Proof.
We use Lemma 11 to combine the interleavings from Lemmas 12–14 to conclude that the pair $(A_X^L, B_{\hat{X}}^{\hat{L}})$ is $(h_3 \circ h_2 \circ h_1, h_1 \circ h_2 \circ h_3)$-interleaved in $(0, 1)$. To complete the proof, we expand $h_3 \circ h_2 \circ h_1$ and $h_1 \circ h_2 \circ h_3$ as follows.
$$(h_3 \circ h_2 \circ h_1)(r) = (h_3 \circ h_2)(r + \varepsilon) = h_3\!\left( \frac{r + \varepsilon}{1 - r - \varepsilon} \right) = \frac{r + \varepsilon}{(1 - r - \varepsilon)(1 - \delta)}$$
$$(h_1 \circ h_2 \circ h_3)(r) = (h_1 \circ h_2)\!\left( \frac{r}{1 - \delta} \right) = h_1\!\left( \frac{r}{1 - \delta - r} \right) = \frac{r}{1 - \delta - r} + \varepsilon$$
Therefore, we have that $h_4(r) = \frac{r + \varepsilon}{(1 - r - \varepsilon)(1 - \delta)}$ and $h_5(r) = \frac{r}{1 - \delta - r} + \varepsilon$. □
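Compositions like these are easy to get wrong by hand; a short sympy check (our own, not part of the paper) confirms the closed forms:

```python
import sympy as sp

r, eps, delta = sp.symbols('r epsilon delta', positive=True)
h1 = lambda x: x + eps               # Lemma 12
h2 = lambda x: x / (1 - x)           # Lemma 13
h3 = lambda x: x / (1 - delta)       # Lemma 14

h4 = (r + eps) / ((1 - r - eps) * (1 - delta))
h5 = r / (1 - delta - r) + eps
assert sp.simplify(h3(h2(h1(r))) - h4) == 0
assert sp.simplify(h1(h2(h3(r))) - h5) == 0
```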

3.3. Smooth Adaptive Distance and Homology Inference

In the preceding sections, we showed how to approximate (via interleaving) $A_X^L$, the sublevel sets of the distance to $X$ in the induced metric, using a finite set of Euclidean balls, $B_{\hat{X}}^{\hat{L}}$. Now, we show how and when such an approximation gives a guarantee about the underlying space $X$ itself. This is substantially more difficult, because it requires us to relate the sublevel sets of the induced metric to an object we do not have direct access to. As such, we will require some stronger hypotheses.
We will first review the critical point theory of distance functions. Then, we show how to smooth the induced metric to an arbitrarily close Riemannian metric, rendering the critical point theory applicable. Then, we put these together to prove the main inference result of the paper, Theorem 3.

3.3.1. Critical Points of Distance Functions

In this section, we give a minimal presentation of the critical point theory of distance functions to explain and motivate the results about interleaving offsets of distance functions in Riemannian manifolds. The main fact we use is that such interleavings lead immediately to results about homology inference (Lemma 16).
For a smooth Riemannian manifold $M$ and a compact subset $X \subset M$, one can consider the function $f_X : M \to \mathbb{R}$ that maps each point in $M$ to the distance to its nearest point in $X$ as measured by the metric on the manifold. The gradient of $f_X$ can be defined on $M$, and the critical points are those points for which the gradient is zero. The critical values of $f_X$ are those values $r$ such that $f_X^{-1}(r)$ contains a critical point. The critical point theory of distance functions developed by Grove and others [11] extends the ideas from Morse theory to such distance functions. In particular, the theory gives the following result.
Lemma 15
(Grove [11]). If $[r, r']$ contains no critical values, then the inclusion $f_X^{-1}([0, r]) \hookrightarrow f_X^{-1}([0, r'])$ is a homotopy equivalence.
This means that for intervals that do not contain critical values, the inclusion maps in the filtration $F(r) := f_X^{-1}([0, r])$, $r \ge 0$, are all homotopy equivalences and therefore induce isomorphisms in homology. This is used to give some information about the homology of filtrations that are interleaved with $F$.
We write $H_*$ to denote homology over a field. Therefore, for a set $X \subseteq \mathbb{R}^d$, we have a vector space $H_*(X)$, and for a continuous map $f : X \to Y$, we have a linear map $H_*(f)$. For the canonical inclusion map $X \hookrightarrow Y$ for a subset $X \subseteq Y$, we will denote the corresponding linear map in homology as $H_*(X \hookrightarrow Y)$. The image of this map is denoted $\mathrm{im}\, H_*(X \hookrightarrow Y)$.
Lemma 16.
Let $f_X$ be the distance function to a compact set in a Riemannian manifold such that $[r, r']$ contains no critical values of $f_X$. Let $F$ be the sublevel filtration of $f_X$, and let $G$ be a filtration such that $(F, G)$ are $(h_1, h_2)$-interleaved in $(r, r')$. If $(h_2 \circ h_1 \circ h_2 \circ h_1)(r) < r'$, then:
$$\mathrm{im}\, H_*\big( G(h_1(r)) \hookrightarrow G((h_1 \circ h_2 \circ h_1)(r)) \big) \cong H_*(F(r)).$$
Proof.
The interleaving and the hypotheses imply that we have the following inclusions.
$$F(r) \subseteq G(h_1(r)) \subseteq F((h_2 \circ h_1)(r)) \subseteq G((h_1 \circ h_2 \circ h_1)(r)) \subseteq F((h_2 \circ h_1 \circ h_2 \circ h_1)(r))$$
The preceding lemma implies that the maps $F(r) \hookrightarrow F((h_2 \circ h_1)(r))$, $F((h_2 \circ h_1)(r)) \hookrightarrow F((h_2 \circ h_1 \circ h_2 \circ h_1)(r))$, and $F(r) \hookrightarrow F((h_2 \circ h_1 \circ h_2 \circ h_1)(r))$ all induce isomorphisms in homology. It follows that $\mathrm{im}\, H_*( G(h_1(r)) \hookrightarrow G((h_1 \circ h_2 \circ h_1)(r)) ) \cong H_*(F(r))$, because the inclusion of spaces in $G$ is factored through a space in $F$, and it factors an inclusion of spaces in $F$, all of which are isomorphic in homology. □

3.3.2. Smoothing the Metric

To apply the critical point theory of distance functions to the induced metric directly, we would need it to be a smooth Riemannian manifold. Although it is not smooth, we can smooth it with an arbitrarily small change. The process, though a little technical, is not surprising, nor very difficult. It proceeds in three steps.
  • We smooth the distance to $L$. This is the source of non-smoothness in the induced metric. This replaces $f_L$ with a smooth approximation, $\tilde{f}_L$.
  • The smoothed distance to $L$ is used to define the smoothed induced metric $\tilde{d}_L$ analogously to the original construction of $d_L$.
  • The induced distance function $f_X^L$ can then be replaced by its smoothed version $\tilde{f}_X^L$, and the corresponding smoothed offsets $\tilde{A}_X^L$ are then well defined.
The complete construction of the smoothed offsets is presented in Appendix A. The end result is an interleaving between the smoothed offsets $\tilde{A}_X^L$ and the approximate offsets $B_{\hat{X}}^{\hat{L}}$, as expressed in the following lemma.
Lemma 17.
Given $\alpha, \beta \in (0, 1)$, consider compact sets $\hat{L} \subseteq L \subset \mathbb{R}^d$ and compact sets $\hat{X} \subseteq X \subset \mathbb{R}^d \setminus L^\beta$, such that $d_H^{\hat{X}}(L, \hat{L}) \le \delta < 1$ and $d_H^L(\hat{X}, X) \le \varepsilon < 1$. Then, $(\tilde{A}_X^L, B_{\hat{X}}^{\hat{L}})$ are $(h_8, h_9)$-interleaved on $(0, 1)$, where $h_8(r) = \frac{(1 + \alpha) r + \varepsilon}{(1 - (1 + \alpha) r - \varepsilon)(1 - \delta)}$ and $h_9(r) = \frac{r}{(1 - \alpha)(1 - \delta - r)} + \frac{\varepsilon}{1 - \alpha}$.
Proof. 
The proof can be found in Appendix A.1. □

3.3.3. The Weak Feature Size

Chazal and Lieutier [19] introduced the weak feature size ($\mathrm{wfs}$) as the least positive critical value of a Riemannian distance function. We denote the weak feature size with respect to $\tilde{f}_X^L(\cdot)$ as $\mathrm{wfs}^L(X)$. In light of the critical point theory of distance functions, a bound on the weak feature size gives a guaranteed interval with no critical points. This allows one to infer the homology from another filtration (usually one that is discrete and built from data) as long as the second filtration is interleaved in that critical-point-free interval.
Lemma 18
(Adapted from [19], Theorem 4.2; see also [20]). Let $S$ and $\hat{S}$ be compact subsets of $\mathbb{R}^d$. If $d_H(S, \hat{S}) \le \varepsilon$ and $\mathrm{wfs}(S) > 4\varepsilon$, then for all sufficiently small $\eta > 0$,
$$H_*(A_S(\eta)) \cong \mathrm{im}\, H_*\big( A_{\hat{S}}(\varepsilon) \hookrightarrow A_{\hat{S}}(3\varepsilon) \big).$$
The key idea in that proof is that the Hausdorff bound gives an interleaving, while the weak feature size bound gives the interval without critical points. The technical condition regarding $\eta$ is present to account for strange compact sets that may be homologically different from their arbitrarily small offsets. It is reasonable to assume that for some sufficiently small $\eta$, $H_*(A_S(\eta)) \cong H_*(S)$, and thus, one could “compute” the homology of $S$ using only the sample $\hat{S}$.
Most previous uses of the weak feature size have been in Euclidean spaces, but the critical point theory of distance functions can be applied more broadly to other smooth Riemannian manifolds. This is why we introduced it as $\mathrm{wfs}^L$ (with the superscript) to indicate the underlying metric.

3.3.4. Homology Inference

We have now introduced all the necessary pieces to prove our main homology inference result.
Theorem 3.
Given $\alpha, \beta \in (0, 1)$, consider compact sets $\hat{L} \subseteq L \subset \mathbb{R}^d$ and compact sets $\hat{X} \subseteq X \subset \mathbb{R}^d \setminus L^\beta$, such that $d_H^{\hat{X}}(L, \hat{L}) \le \delta < 1$ and $d_H^L(\hat{X}, X) \le \varepsilon < 1$. Define the real-valued functions $\Psi$ and $\Phi$ as:
$$\Psi(r) := \frac{(1 + \alpha) r + \varepsilon}{(1 - (1 + \alpha) r - \varepsilon)(1 - \delta)}$$
and:
$$\Phi(r) := \frac{r}{(1 - \alpha)(1 - \delta - r)} + \frac{\varepsilon}{1 - \alpha}.$$
Given any $\eta > 0$ such that $(\Phi \circ \Psi \circ \Phi \circ \Psi)(\eta) < 1$, if $\mathrm{wfs}^L(X) > (\Phi \circ \Psi \circ \Phi \circ \Psi)(\eta)$, then:
$$H_*(\tilde{A}_X^L(\eta)) \cong \mathrm{im}\, H_*\big( B_{\hat{X}}^{\hat{L}}(\Psi(\eta)) \hookrightarrow B_{\hat{X}}^{\hat{L}}((\Psi \circ \Phi \circ \Psi)(\eta)) \big).$$
Proof.
Given $\eta > 0$ such that $(\Phi \circ \Psi \circ \Phi \circ \Psi)(\eta) < 1$, we have the following sequence of inclusions as a result of Lemma 17.
$$\tilde{A}_X^L(\eta) \overset{a}{\hookrightarrow} B_{\hat{X}}^{\hat{L}}(\Psi(\eta)) \overset{b}{\hookrightarrow} \tilde{A}_X^L((\Phi \circ \Psi)(\eta)) \overset{c}{\hookrightarrow} B_{\hat{X}}^{\hat{L}}((\Psi \circ \Phi \circ \Psi)(\eta)) \overset{d}{\hookrightarrow} \tilde{A}_X^L((\Phi \circ \Psi \circ \Phi \circ \Psi)(\eta)) \quad (4)$$
As we assume that $\mathrm{wfs}^L(X) > (\Phi \circ \Psi \circ \Phi \circ \Psi)(\eta)$, by the definition of the weak feature size, Lemma 15 implies that the inclusions $b \circ a$ and $d \circ c$ are homotopy equivalences. We remind the reader that if two spaces are homotopy equivalent, all the induced homology maps between the spaces are isomorphisms. By applying homology to each space and inclusion in the previous sequence, we have the following sequence of homology groups, where $b_* \circ a_*$ and $d_* \circ c_*$ are isomorphisms.
$$H_*(\tilde{A}_X^L(\eta)) \overset{a_*}{\to} H_*(B_{\hat{X}}^{\hat{L}}(\Psi(\eta))) \overset{b_*}{\to} H_*(\tilde{A}_X^L((\Phi \circ \Psi)(\eta))) \overset{c_*}{\to} H_*(B_{\hat{X}}^{\hat{L}}((\Psi \circ \Phi \circ \Psi)(\eta))) \overset{d_*}{\to} H_*(\tilde{A}_X^L((\Phi \circ \Psi \circ \Phi \circ \Psi)(\eta)))$$
The aforementioned isomorphisms $b_* \circ a_*$ and $d_* \circ c_*$ factor through $H_*(B_{\hat{X}}^{\hat{L}}(\Psi(\eta)))$ and $H_*(B_{\hat{X}}^{\hat{L}}((\Psi \circ \Phi \circ \Psi)(\eta)))$, respectively, proving that $b_*$ is surjective and $c_*$ is injective. We then have that $H_*(\tilde{A}_X^L(\eta)) \cong H_*(\tilde{A}_X^L((\Phi \circ \Psi)(\eta))) \cong \mathrm{im}\, b_* \cong \mathrm{im}(c_* \circ b_*)$. □

3.3.5. Computing the Homology

The last step is to relate the smoothed offsets to something that can be computed. It will generally be the case that the approximation $\hat{X}$ of $X$ is not just compact, but also finite. Then, for any scale $\alpha \ge 0$, we have that $B_{\hat{X}}^{\hat{L}}(\alpha)$ is the union of a finite set of Euclidean balls.
The nerve theorem provides a natural way to compute the homology of a union of Euclidean balls. The nerve of a collection $U$ of sets is the set of all subsets of $U$ that have a nonempty intersection. It has the structure of a simplicial complex, whose homology can be directly computed by standard matrix reduction algorithms. When all nonempty intersections are contractible, the cover is said to be good. A cover by Euclidean balls (or any convex shapes) is always good. For good covers, the nerve theorem, a standard result in algebraic topology [21], implies that:
$$H_*\big( \mathrm{Nrv}(\{ \mathrm{ball}(x, \alpha f_{\hat{L}}(x)) \}_{x \in \hat{X}}) \big) \cong H_*\big( B_{\hat{X}}^{\hat{L}}(\alpha) \big).$$
This is the most basic way to compute the homology of a union of balls and is used throughout topological data analysis.
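For intuition, the 1-skeleton of this nerve is easy to build directly, since two Euclidean balls intersect exactly when the distance between their centers is at most the sum of their radii. The sketch below (ours) computes the edges of the adaptive nerve; the higher-dimensional simplices and the homology computation itself are what standard persistence software provides:

```python
import numpy as np
from scipy.spatial.distance import cdist

def adaptive_nerve_edges(X_hat, L_hat, alpha):
    """Edges of Nrv({ball(x, alpha * f_Lhat(x))} for x in X_hat):
    centers i and j are joined iff the balls around them intersect,
    i.e., ||x_i - x_j|| <= alpha * (f_Lhat(x_i) + f_Lhat(x_j))."""
    radii = alpha * cdist(X_hat, L_hat).min(axis=1)   # adaptive radii
    D = cdist(X_hat, X_hat)
    i, j = np.nonzero(np.triu(D <= radii[:, None] + radii[None, :], k=1))
    return list(zip(i.tolist(), j.tolist()))
```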
In our case, we are not just computing the homology of the union, but also the homology of the inclusion map. This computation will require a slightly stronger result. The persistent nerve lemma [20], applied to Diagram (4) when combined with the above isomorphisms, yields the following.
$$H_*(\tilde{A}_X^L(\eta)) \cong \mathrm{im}\, H_*\Big( \mathrm{Nrv}(\{ \mathrm{ball}(x, \Psi(\eta) f_{\hat{L}}(x)) \}_{x \in \hat{X}}) \hookrightarrow \mathrm{Nrv}(\{ \mathrm{ball}(x, (\Psi \circ \Phi \circ \Psi)(\eta) f_{\hat{L}}(x)) \}_{x \in \hat{X}}) \Big).$$
This last statement turns the isomorphism into an algorithm, because standard algorithms [22] can compute the homology of the inclusion of the nerves.

4. Conclusions

We present an alternative metric on Euclidean space that connects adaptive sampling and uniform sampling. We show how to apply classical results from the critical point theory of distance functions to infer topological properties of the underlying space from such samples. This provides a connection between methods in surface reconstruction (based on adaptive sampling) and homology inference (based on uniform sampling).
We show in Theorem 1 that there is a precise relationship between samples that are uniform with respect to $d_L$ at some scale and those same samples being adaptive in the Euclidean metric. In Theorem 2, we show that we can interleave the sublevel sets of our distance function under this alternative metric with the metric balls resulting from our approximation of the metric, assuming that both $\hat{X}$ and $\hat{L}$ are well sampled with respect to the Hausdorff distances induced by $d_L$ and $d_{\hat{X}}$, respectively. Finally, we show how to extend the critical point theory of distance functions and the weak feature size to give theoretical guarantees on homology inference from finite samples of $X$ and $L$ using the induced metric (Theorem 3).
The main limitation of adaptive metrics is that they require two sets as input, one to define the set and one to define the metric. In many instances, this is not available. However, we expect that the approach could find wider use in problems with labeled data. For example, data with binary labels may be viewed as the two sets X and L. Then, each set defines a metric on the other, where the metric is scaled according to how close it is to the other set. This is the subject of ongoing and future work.

Author Contributions

Writing—original draft, N.J.C. and D.R.S.; Writing—review & editing, N.J.C. and D.R.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partially supported by the National Science Foundation under Grants CCF-1464379, CCF-1525978, and CCF-1652218.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Details on Metric Smoothing

This section includes the full construction and relevant lemmas about the smoothed version of the induced metric.
For a compact set $L \subset \mathbb{R}^d$ and $\beta \ge 0$, denote by $L^\beta := \{ x \in \mathbb{R}^d \mid \min_{y \in L} \| x - y \| \le \beta \}$ the offsets of $L$ with respect to the Euclidean metric. The following lemma gives upper and lower bounds on the value of a smoothing $\tilde{f}_L$ of the distance-to-set function $f_L$, defined on the complement of an arbitrarily small offset of $L$.
Lemma A1.
Consider a compact set $L \subset \mathbb{R}^d$. Given $\alpha \in (0, 1)$, for all $\beta \in (0, 1)$, there exists a smooth function $\tilde{f}_L : \mathbb{R}^d \setminus L^\beta \to \mathbb{R}$ such that for all $x \in \mathbb{R}^d \setminus L^\beta$, $(1 - \alpha) f_L(x) < \tilde{f}_L(x) < (1 + \alpha) f_L(x)$.
Proof.
By a standard result from [18], for all $\varepsilon > 0$, there exists a smoothing $\tilde{f}_L : \mathbb{R}^d \setminus L^\beta \to \mathbb{R}$ of the distance function $f_L$ such that $\| f_L - \tilde{f}_L \|_\infty < \varepsilon$. Choose $\varepsilon = \beta \alpha$, for the given $\alpha \in (0, 1)$. By the approximation property of $\tilde{f}_L$, for all $x \in \mathbb{R}^d \setminus L^\beta$, we have that $f_L(x) - \varepsilon < \tilde{f}_L(x) < f_L(x) + \varepsilon$. Note also that for all $x \in \mathbb{R}^d \setminus L^\beta$, $f_L(x) > \beta = \varepsilon / \alpha$, and thus, $\alpha f_L(x) > \varepsilon$. Combining the aforementioned, we have that $(1 - \alpha) f_L(x) < f_L(x) - \varepsilon$ and $f_L(x) + \varepsilon < (1 + \alpha) f_L(x)$. □
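A one-dimensional illustration of this lemma (our own, with an arbitrary grid and kernel width; [18] gives the general existence result): convolving $f_L$ with a narrow Gaussian kernel produces a smooth function that stays inside the $(1 \pm \alpha) f_L$ envelope away from $L$.

```python
import numpy as np

xs = np.linspace(-4.0, 4.0, 4001)
dx = xs[1] - xs[0]
fL = np.abs(xs)                           # f_L for L = {0} on the real line

sigma = 0.05                              # kernel width (arbitrary choice)
half = int(4 * sigma / dx)
kernel = np.exp(-0.5 * ((np.arange(-half, half + 1) * dx) / sigma) ** 2)
kernel /= kernel.sum()
fL_tilde = np.convolve(fL, kernel, mode="same")

beta, alpha = 0.5, 0.1
interior = (np.abs(xs) > beta) & (np.abs(xs) < 3.5)  # avoid grid-boundary artifacts
assert np.all(fL_tilde[interior] > (1 - alpha) * fL[interior])
assert np.all(fL_tilde[interior] < (1 + alpha) * fL[interior])
```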
Consider $\tilde{f}_L$ as defined in Lemma A1. Using this, we can define a smooth adaptive distance function $\tilde{f}_X^L$ and provide upper and lower bounds on its value with respect to the original adaptive distance function $f_X^L$. For $x, y \in \mathbb{R}^d \setminus L^\beta$, we define:
$$\tilde{d}_L(x, y) := \inf_{\gamma \in \mathrm{Path}(x, y)} \int_\gamma \frac{\mathrm{d}z}{\tilde{f}_L(z)}$$
and $\tilde{f}_X^L(y) := \tilde{d}_L(y, X)$.
Lemma A2.
Given $\alpha, \beta \in (0, 1)$ and a smooth function $\tilde{f}_L$ defined on $\mathbb{R}^d \setminus L^\beta$ approximating $f_L$, consider a compact set $X \subset \mathbb{R}^d \setminus L^\beta$. The Riemannian distance function $\tilde{f}_X^L(\cdot) := \tilde{d}_L(\cdot, X)$ satisfies the following property for all $y \in \mathbb{R}^d \setminus L^\beta$:
$$\frac{1}{1 + \alpha} f_X^L(y) < \tilde{f}_X^L(y) < \frac{1}{1 - \alpha} f_X^L(y).$$
Proof.
Given two points $x, y \in \mathbb{R}^d \setminus L^\beta$ and any $\varepsilon > 0$, consider $\gamma, \gamma' \in \mathrm{Path}(x, y)$ such that $d_L(x, y) \le \int_\gamma \frac{\mathrm{d}z}{f_L(z)} \le d_L(x, y) + \varepsilon$ and $\tilde{d}_L(x, y) \le \int_{\gamma'} \frac{\mathrm{d}z}{\tilde{f}_L(z)} \le \tilde{d}_L(x, y) + \varepsilon$. We then have the following inequalities, resulting from inverting the inequalities in Lemma A1.
$$\tilde{d}_L(x, y) \le \int_\gamma \frac{\mathrm{d}z}{\tilde{f}_L(z)} < \frac{1}{1 - \alpha} \int_\gamma \frac{\mathrm{d}z}{f_L(z)} \le \frac{1}{1 - \alpha} d_L(x, y) + \frac{\varepsilon}{1 - \alpha},$$
and:
$$\frac{1}{1 + \alpha} d_L(x, y) \le \frac{1}{1 + \alpha} \int_{\gamma'} \frac{\mathrm{d}z}{f_L(z)} < \int_{\gamma'} \frac{\mathrm{d}z}{\tilde{f}_L(z)} \le \tilde{d}_L(x, y) + \varepsilon.$$
Since these inequalities hold for all $\varepsilon > 0$, we can conclude that for all pairs $x, y \in \mathbb{R}^d \setminus L^\beta$, $\frac{1}{1 + \alpha} d_L(x, y) \le \tilde{d}_L(x, y) \le \frac{1}{1 - \alpha} d_L(x, y)$.
Now, consider $y \in \mathbb{R}^d \setminus L^\beta$. Define $x := \mathrm{argmin}_{x \in X} d_L(y, x)$ and $x' := \mathrm{argmin}_{x \in X} \tilde{d}_L(y, x)$. We remind the reader that these points’ existence is guaranteed by the extreme value theorem. Combining these definitions with the previous inequality, we know that:
$$\frac{1}{1 + \alpha} d_L(y, x) \le \frac{1}{1 + \alpha} d_L(y, x') < \tilde{d}_L(y, x') \le \tilde{d}_L(y, x) < \frac{1}{1 - \alpha} d_L(y, x).$$
By applying the definitions of both adaptive distance functions to the previous expression, we obtain the desired inequality,
$$\frac{1}{1 + \alpha} f_X^L(y) < \tilde{f}_X^L(y) < \frac{1}{1 - \alpha} f_X^L(y). \ \Box$$
Define the Riemannian adaptive offsets of $X$ as $\tilde{A}_X^L(\alpha) := \{ x \in \mathbb{R}^d \setminus L^\beta \mid \tilde{f}_X^L(x) \le \alpha \}$, and denote the corresponding filtration by $\tilde{A}_X^L$. The following result restates Lemma A2 in the language of filtrations and establishes an interleaving of the Riemannian adaptive offsets with the original adaptive offsets.
Corollary A1.
Let $L \subset \mathbb{R}^d$ be a compact set. Given $\alpha, \beta \in (0, 1)$, for compact $X \subset \mathbb{R}^d \setminus L^\beta$, there exists a Riemannian distance function $\tilde{f}_X^L : \mathbb{R}^d \setminus L^\beta \to \mathbb{R}$ such that $(\tilde{A}_X^L, A_X^L)$ are $(h_6, h_7)$-interleaved on $(0, \infty)$, where $h_6(r) = (1 + \alpha) r$ and $h_7(r) = \frac{r}{1 - \alpha}$.
Proof.
By Lemma A2, there exists a Riemannian distance function $\tilde{f}_X^L : \mathbb{R}^d \setminus L^\beta \to \mathbb{R}$ such that for all $y \in \mathbb{R}^d \setminus L^\beta$,
$$\frac{1}{1 + \alpha} f_X^L(y) < \tilde{f}_X^L(y) < \frac{1}{1 - \alpha} f_X^L(y),$$
so for $r \in (0, \infty)$ and $y \in \tilde{A}_X^L(r)$, $\tilde{f}_X^L(y) \le r$, and thus, $f_X^L(y) \le (1 + \alpha) r$, which implies that $y \in A_X^L((1 + \alpha) r)$, so $\tilde{A}_X^L(r) \subseteq A_X^L((1 + \alpha) r)$.
On the other hand, for $r \in (0, \infty)$ and $y \in A_X^L(r)$, $f_X^L(y) \le r$, and thus, $\tilde{f}_X^L(y) \le \frac{r}{1 - \alpha}$, so $A_X^L(r) \subseteq \tilde{A}_X^L(\frac{r}{1 - \alpha})$. □
Combining the previous corollary with Theorem 2 from Section 3.2.4, we obtain an interleaving between the smoothed adaptive offsets and the approximate offsets. Since the smoothed adaptive offsets form the sublevel filtration of a Riemannian distance function, we may apply Lemma 16 and standard topological data analysis techniques to this interleaving, giving a method of homology inference for arbitrarily small offsets of $X$.

Appendix A.1. Proof of Lemma 17

Proof.
The hypotheses of the statement satisfy the hypotheses of both Theorem 2 and Corollary A1, so one knows that $(A_X^L, B_{\hat{X}}^{\hat{L}})$ are $(h_4, h_5)$-interleaved on $(0, 1)$, where $h_4(r) = \frac{r + \varepsilon}{(1 - r - \varepsilon)(1 - \delta)}$ and $h_5(r) = \frac{r}{1 - \delta - r} + \varepsilon$. Furthermore, $(\tilde{A}_X^L, A_X^L)$ are $(h_6, h_7)$-interleaved on $(0, \infty)$, where $h_6(r) = (1 + \alpha) r$ and $h_7(r) = \frac{r}{1 - \alpha}$. By applying Lemma 11 to these two interleavings, we conclude that $(\tilde{A}_X^L, B_{\hat{X}}^{\hat{L}})$ are $(h_4 \circ h_6, h_7 \circ h_5)$-interleaved on $(0, 1)$, and expanding these compositions yields the stated $h_8$ and $h_9$. □
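As with Theorem 2, the composed maps can be checked symbolically; the following sympy snippet (ours) verifies that $h_8 = h_4 \circ h_6$ and $h_9 = h_7 \circ h_5$ match the closed forms in Lemma 17:

```python
import sympy as sp

r, a, e, d = sp.symbols('r alpha epsilon delta', positive=True)
h4 = lambda x: (x + e) / ((1 - x - e) * (1 - d))   # Theorem 2
h5 = lambda x: x / (1 - d - x) + e
h6 = lambda x: (1 + a) * x                         # Corollary A1
h7 = lambda x: x / (1 - a)

h8 = ((1 + a) * r + e) / ((1 - (1 + a) * r - e) * (1 - d))
h9 = r / ((1 - a) * (1 - d - r)) + e / (1 - a)
assert sp.simplify(h4(h6(r)) - h8) == 0
assert sp.simplify(h7(h5(r)) - h9) == 0
```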

References

  1. Cazals, F.; Giesen, J.; Pauly, M.; Zomorodian, A. The conformal alpha shape filtration. Vis. Comput. 2006, 22, 531–540.
  2. Chazal, F.; Lieutier, A. Smooth Manifold Reconstruction from Noisy and Non-Uniform Approximation with Guarantees. Comput. Geom. Theory Appl. 2008, 40, 156–170.
  3. Chazal, F.; Lieutier, A. Topology Guaranteeing Manifold Reconstruction using Distance Function to Noisy Data. In Proceedings of the 22nd ACM Symposium on Computational Geometry, Sedona, AZ, USA, 5–7 June 2006.
  4. Dey, T.K.; Dong, Z.; Wang, Y. Parameter-free topology inference and sparsification for data on manifolds. In Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms, Barcelona, Spain, 16–19 January 2017; pp. 2733–2747.
  5. Amenta, N.; Bern, M.; Kamvysselis, M. A new Voronoi-based surface reconstruction algorithm. In Proceedings of the 25th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH ’98), 1998; pp. 415–421.
  6. Dey, T.K. Curve and Surface Reconstruction: Algorithms with Mathematical Analysis; Cambridge University Press: Cambridge, UK, 2007.
  7. Boissonnat, J.D.; Dyer, R.; Ghosh, A. Delaunay Triangulation of Manifolds. Found. Comput. Math. 2018, 18, 399–431.
  8. Boissonnat, J.D.; Wintraecken, M. The Topological Correctness of PL-Approximations of Isomanifolds. In Proceedings of the 36th International Symposium on Computational Geometry (SoCG 2020), Zurich, Switzerland, 23–26 June 2020; Leibniz International Proceedings in Informatics (LIPIcs); Cabello, S., Chen, D.Z., Eds.; Schloss Dagstuhl–Leibniz-Zentrum für Informatik: Dagstuhl, Germany, 2020; Volume 164, pp. 20:1–20:18.
  9. Niyogi, P.; Smale, S.; Weinberger, S. Finding the Homology of Submanifolds with High Confidence from Random Samples. Discret. Comput. Geom. 2008, 39, 419–441.
  10. Niyogi, P.; Smale, S.; Weinberger, S. A Topological View of Unsupervised Learning from Noisy Data. SIAM J. Comput. 2011, 40, 646–663.
  11. Grove, K. Critical point theory for distance functions. Proc. Symp. Pure Math. 1993, 54, 357–385.
  12. Chazal, F.; Cohen-Steiner, D.; Lieutier, A. A Sampling Theory for Compact Sets in Euclidean Space. Discret. Comput. Geom. 2009, 41, 461–479.
  13. Clarkson, K.L. Building triangulations using ε-nets. In Proceedings of the Thirty-Eighth Annual ACM Symposium on Theory of Computing, Seattle, WA, USA, 21–23 May 2006; pp. 326–335.
  14. Wein, R.; van den Berg, J.; Halperin, D. Planning High-quality Paths and Corridors Amidst Obstacles. Int. J. Robot. Res. 2008, 27, 1213–1231.
  15. Agarwal, P.K.; Fox, K.; Salzman, O. An efficient algorithm for computing high quality paths amid polygonal obstacles. In Proceedings of the 27th Annual ACM-SIAM Symposium on Discrete Algorithms, Arlington, VA, USA, 10–12 January 2016; pp. 1179–1192.
  16. Cohen, M.B.; Fasy, B.T.; Miller, G.L.; Nayyeri, A.; Sheehy, D.R.; Velingker, A. Approximating Nearest Neighbor Distances. In Proceedings of the Algorithms and Data Structures Symposium, Victoria, BC, Canada, 5–7 August 2015; pp. 200–211.
  17. Chu, T.; Miller, G.L.; Sheehy, D.R. Exact computation of a manifold metric via Lipschitz Embeddings and Shortest Paths on a Graph. In Proceedings of the ACM-SIAM Symposium on Discrete Algorithms, Salt Lake City, UT, USA, 5–8 January 2020.
  18. Greene, R.E.; Wu, H. C∞ approximations of convex, subharmonic, and plurisubharmonic functions. Ann. Sci. Éc. Norm. Supér. 1979, 12, 47–84.
  19. Chazal, F.; Lieutier, A. Weak Feature Size and Persistent Homology: Computing Homology of Solids in ℝn from Noisy Data Samples. In Proceedings of the 21st ACM Symposium on Computational Geometry, Pisa, Italy, 6–8 June 2005; pp. 255–262.
  20. Chazal, F.; Oudot, S.Y. Towards Persistence-Based Reconstruction in Euclidean Spaces. In Proceedings of the 24th ACM Symposium on Computational Geometry, College Park, MD, USA, 9–11 June 2008; pp. 232–241.
  21. Hatcher, A. Algebraic Topology; Cambridge University Press: Cambridge, UK, 2001.
  22. Edelsbrunner, H.; Letscher, D.; Zomorodian, A. Topological Persistence and Simplification. Discret. Comput. Geom. 2002, 28, 511–533.
