We generalize the local-feature size definition of adaptive sampling used in surface reconstruction to relate it to an alternative metric on Euclidean space. In the new metric, adaptive samples become uniform samples, making it simpler both to give adaptive sampling versions of homological inference results and to prove topological guarantees using the critical point theory of distance functions. This ultimately leads to an algorithm for homology inference from samples whose spacing depends on their distance to a discrete representation of the complement space.
Both surface reconstruction and homology inference are algorithmic problems that take points as input and produce a topological representation of the underlying space from which the points were drawn. In surface reconstruction, one often wants a homeomorphic reconstruction in the form of a triangulation, whereas in homology inference, it suffices to compute the homology groups. Although similar in many respects, the general trend is that with weaker conditions on the input (i.e., noisier samples), one can only hope for weaker guarantees on the output (homology rather than homeomorphism). There is one aspect of these theories that directly contradicts this trend: many surface reconstruction algorithms are able to work with an adaptive sample, while most homology inference algorithms require a uniform sample. Here and throughout, we use “uniform” in the Hausdorff sense, not the statistical sense. An adaptive sample has a density that adapts to some local sizing function. Thus, areas that require higher fidelity will have higher density (and a smaller scale), while areas that can get by with less fidelity will have lower density (and larger scale).
There have been some notable works that have bridged this gap between surface reconstruction and homology inference for adaptive samples. Most theoretically guaranteed surface reconstruction algorithms assume an input that is sufficiently dense with respect to the distance to the medial axis, a kind of skeleton describing the complement of the underlying shape. Cazals et al.  introduced the conformal alpha shape filtration as a way to build triangulations at different scales that have local connectivity related to the local feature size. Although their stated goal was surface reconstruction, the work employed many of the methods of homology inference. Chazal and Lieutier [2,3] gave a more direct generalization of methods in surface reconstruction with adaptive samples to homology inference, achieving some guarantees for smooth manifolds assuming both upper and lower bounds on the density. Dey et al.  gave a homology inference algorithm for manifold data that attempts to sample a subset of the medial axis in order to approximate the local feature size. This work was the main motivation for the current paper, and we adopted their notation of X for the space and L for the approximation to the complement space. We extend these works by providing guaranteed homology inference for a much more general class of samples and spaces; we do not require the space to be a manifold or the sample to adapt to the medial axis.
1.2. From Surface Reconstruction to Homology Inference
To reconstruct a surface from a point set, one needs the sample to be sufficiently dense with respect to not just the local curvature of the surface, but also the distance to parts of the surface that are close in the embedding, but far in geodesic distance. Otherwise, algorithms have no way of identifying which geometrically close sample points correspond to local neighborhoods in the surface. Adaptive sampling with respect to the so-called local feature size as introduced by Amenta and Bern  neatly characterizes such “good” samples and was then used in many later works on surface reconstruction with topological guarantees . There is an extensive literature on the problem in high dimensions (see [7,8] for recent examples), and the problem remains an active research area. Such adaptive samples are in contrast to uniform samples for which a single parameter determines the density. That parameter is usually driven by the minimum of the local feature size and results in a much larger sample.
Later work on generalizations of surface reconstruction and homology inference related the topology of unions of balls centered at a sample near the unknown set X to the topology of X itself. The most well-known such results were by Niyogi et al. [9,10]. A union of balls with a fixed radius can be viewed as a sublevel set of the distance function to the set of centers. If we have an adaptive sample, then we would like to scale the radii of the balls as well. However, if the sample is adaptive with respect to a local feature size defined as the distance to an unknown set L, another approximation near L is necessary. Indeed, one interpretation of some Voronoi-based surface reconstruction algorithms is that an approximation to the medial axis L is computed from the Voronoi diagram of the sample of the unknown surface X.
We present a new perspective on adaptive samples. For any pair of disjoint, compact sets X and L, we define a metric on $\mathbb{R}^d$ with the property that a uniform sample of X in the new metric corresponds to an adaptive sample in the Euclidean metric. We call this the metric induced by L or simply the induced metric. This new metric can also be extended to an arbitrarily close Riemannian metric over the same domain. Our main motivation is to connect adaptive sampling theory to the critical point theory of distance functions used extensively to prove topological guarantees in topological data analysis [2,11,12]. That theory gives natural topological equivalences between sublevel sets of distance functions to compact sets in Riemannian metrics. Thus, we propose to use the induced metric as the underlying ideal object and then relate it to a union of Euclidean balls constructed from approximations of X and L. Our metric can be viewed as a smoothed version of a metric used by Clarkson. Our new formulation reveals connections with work on path planning [14,15] and density-based distances [16,17]. These are all constructions where one looks at conformal change of metrics induced by subsets of Euclidean space.
We lay out the main objects of study in Section 2. This includes the induced metric and a discrete approximation. Throughout the paper, we will relate these two objects or variations thereof for different purposes. In Section 3.1, we prove the relationship between the adaptive samples used in surface reconstruction and uniform samples in the induced metric. The definition of the induced metric does not lend itself to direct computation. Therefore, in Section 3.2, we bound the interleaving distance between the induced metric and its discrete approximation. This interleaving is then used in Section 3.3 to give a homology inference algorithm that is guaranteed to recover the homology of a sublevel set of the induced metric under certain sampling conditions.
Let $L$ and $X$ be compact subsets of $\mathbb{R}^d$ with respect to the Euclidean metric. For $x, y \in \mathbb{R}^d$, define $\mathrm{path}(x,y)$ to be the set of bounded piecewise-$C^1$ paths from $x$ to $y$, parametrized by the Euclidean arc-length. Similarly, $\mathrm{path}(x,S)$ denotes all paths from $x$ to a set $S$.

For any compact set $L \subset \mathbb{R}^d$, define $d_L : \mathbb{R}^d \to \mathbb{R}$ by:
$$d_L(x) := \min_{y \in L} \|x - y\|.$$

The length of a unit-speed path $\gamma : [0, \ell] \to \mathbb{R}^d \setminus L$ in the metric induced by $L$ is denoted as:
$$\ell_L(\gamma) := \int_0^{\ell} \frac{dt}{d_L(\gamma(t))}.$$

For $x \in \mathbb{R}^d \setminus L$ and compact $X \subset \mathbb{R}^d \setminus L$, define:
$$f_X(x) := \inf_{\gamma \in \mathrm{path}(x, X)} \ell_L(\gamma) \quad \text{and} \quad \hat{f}_X(x) := \frac{d_X(x)}{d_L(x)}.$$

Note that $f_X$ is a distance function, while $\hat{f}_X$ is not. The latter function can be interpreted as a first-order approximation of the former.
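To make these two quantities concrete, here is a minimal Python sketch for finite point-set stand-ins of X and L. The function names are ours, not the paper's, and the ratio computed below is the first-order approximation, not the true path-based induced distance:

```python
import math

def dist_to_set(x, S):
    """Euclidean distance from a point x to a finite point set S."""
    return min(math.dist(x, p) for p in S)

def approx_adaptive_dist(x, X, L):
    """First-order approximation of the induced distance from x to X:
    the Euclidean distance to X, rescaled by the distance from x to L."""
    return dist_to_set(x, X) / dist_to_set(x, L)
```

For example, a point at Euclidean distance 1 from X and 1 from L has approximate adaptive distance 1, while moving the same point farther from L shrinks the value.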
For any compact set $X \subseteq \mathbb{R}^d \setminus L$, for some compact set $L \subset \mathbb{R}^d$, the $\alpha$-offsets with respect to $L$ are:
$$X^L_\alpha := \{ x \in \mathbb{R}^d \setminus L : f_X(x) \le \alpha \}.$$
The distance function can be transformed into an arbitrarily close smooth function , yielding a Riemannian metric defined in an identical manner as . From this, one has corresponding -offsets that are arbitrarily close to . We will encounter this smoother version in Section 3.3.
We will approximate the offsets by a union of balls as follows.
For any compact set $X \subseteq \mathbb{R}^d \setminus L$, for some compact set $L \subset \mathbb{R}^d$, the approximate $\alpha$-offsets with respect to $L$ are:
$$\hat{X}^L_\alpha := \bigcup_{x \in X} \mathrm{ball}(x, \alpha\, d_L(x)).$$
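A minimal membership test for such a union of balls, written as a sketch for finite point sets and assuming the radius of the ball around each point scales with that point's Euclidean distance to L:

```python
import math

def in_approx_offsets(x, X, L, alpha):
    """Membership test for the approximate alpha-offsets, modeled as a
    union of Euclidean balls: one ball per point p of X, with radius
    alpha times the distance from p to L."""
    d_L = lambda q: min(math.dist(q, l) for l in L)
    return any(math.dist(x, p) <= alpha * d_L(p) for p in X)
```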
A useful property of $f_X$ is that it is a one-Lipschitz function. In general, a function $f$ between two metric spaces $(S, d_S)$ and $(T, d_T)$ is said to be $k$-Lipschitz if for all $a, b \in S$, $d_T(f(a), f(b)) \le k\, d_S(a, b)$.
The function is one-Lipschitz from the metric space to .
Fix any . There exists point and a path such that . Likewise, there exists such that .
This implies that the concatenation of and is a path in . Thus, . As this holds for all , we conclude that , as desired. □
We can use to define the Hausdorff distance, which is a metric between compact sets. This metric is useful for stating bounds on the quality, or uniformity, of a sample near a set.
The Hausdorff distance between two compact sets $A$ and $B$ is defined as:
$$d_H(A, B) := \max\left\{ \max_{a \in A} d_B(a),\ \max_{b \in B} d_A(b) \right\}.$$
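For finite point sets, the Hausdorff distance can be computed directly from the distance-to-set function; a small illustrative sketch (our naming, not the paper's):

```python
import math

def hausdorff(A, B):
    """Hausdorff distance between two finite point sets, built from the
    distance-to-set function d(x, S) = min over s in S of |x - s|."""
    d = lambda x, S: min(math.dist(x, s) for s in S)
    return max(max(d(a, B) for a in A), max(d(b, A) for b in B))
```

Note the asymmetry of the two inner terms: each direction alone is only a one-sided bound, and the Hausdorff distance takes the larger of the two.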
If the Hausdorff distance between a compact set and a sample is bounded, Lemma 3 shows that their -offsets are interleaved at particular scales.
Let be such that . Then, for all , and .
Let be any point. By the definition of , we have . Therefore, there exists such that . The Hausdorff assumption that implies that for all , we have . By Lemma 1, , implying . The second inclusion is proven by a symmetric argument. □
The following is the definition of an adaptive sample we will use throughout. For the special case when X is a manifold and L is its medial axis, it corresponds to the -sample used in surface reconstruction.
Given a compact set $L \subset \mathbb{R}^d$ and compact sets $\tilde{X}, X \subset \mathbb{R}^d \setminus L$ such that $\tilde{X} \subseteq X$, we say that $\tilde{X}$ is an $\varepsilon$-sample of $X$, for $\varepsilon \in (0,1)$, if for all $x \in X$, there exists $p \in \tilde{X}$ such that $\|x - p\| \le \varepsilon\, d_L(x)$.
This definition is closely related to that of the approximate -offsets, because if is an -sample of X, then for all , .
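A sketch of checking the ε-sample condition on finite sets, assuming the sampling condition scales by the distance from each point x of X to L (the Amenta–Bern-style convention); all names here are illustrative:

```python
import math

def is_adaptive_sample(sample, X, L, eps):
    """Check the epsilon-sample condition: every point x of X has a
    sample point within eps times the distance from x to L."""
    d_L = lambda q: min(math.dist(q, l) for l in L)
    return all(
        min(math.dist(x, p) for p in sample) <= eps * d_L(x)
        for x in X
    )
```

In the example below, points of X far from L may be sparsely sampled, while the same sample fails for a smaller ε.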
Consider to be such that . Then, for all , and .
Fix . By definition, , which implies that there exists such that . , which implies that for all , . Now, by Lemma 1, , implying . By a symmetric argument, the other statement holds. □
Lemma 4 relates the lengths of a path with respect to two distance-to-set functions, assuming the corresponding sets are close in Hausdorff distance with respect to the Euclidean metric.
Let be two compact sets such that for some . For all unit-speed, , where for some positive c, we have the following inequalities.
Take an arbitrary unit-speed path where . Since the image of the path is a subset of , then for all , . By the Hausdorff distance between L and , we have . Likewise, we have that . Rearranging both of these, we have that .
By the definition of and , these inequalities imply that □
The following lemma bounds how close to L a shortest path to a compact set X can come, and it provides a constant c satisfying Lemma 4 that depends on which compact set one is working with.
Take compact set , compact set , and , for . If γ is the shortest path from y to X with respect to , then:
Since , , so there exists such that . Take as the shortest path from y to X. For all , .
By Lemma 10, , and by being Lipschitz, we have that . This means that every point on the path is at least distance away from L. □
We define a noisy -sample, for , of compact with respect to for some compact set L as a compact set such that for all , there exists such that . Likewise, for all , there exists , such that . The following theorems relate a noisy -sample to the Hausdorff distance between the sample and the set X and vice versa.
Consider compact set L and compact . If is a noisy ε-sample of X with respect to , for , then .
Given , by definition, there exists such that . By Lemma 10, , so for all, , .
Furthermore, given , there exists such that , so for all , ; thus, . □
Consider compact set L and sets . If , then is a noisy -sample of X with respect to .
implies that for all , . Thus, there exists such that . By Lemma 10, .
Similarly, implies that for all , ; thus, there exists such that , and thus, . Since , then , so is a noisy -sample of X. □
Given compact set and compact set , for , .
Take so that . Thus, there exists such that . By Lemma 10, this implies that , which implies that . □
Given compact set and compact set , for , .
Consider . Thus, , for some , so . Applying Lemma 10, we then have that , and as , . □
3.1. Adaptive Sampling
In this section, we prove that a uniform sample in the induced metric corresponds to an adaptive sample in the Euclidean metric and vice versa. The key to this proof is the following lemma, which will also be used for the more elaborate interleaving results of Section 3.2.
Let be a compact set, and let . Then, the following two statements hold for all .
If , then .
If , then .
To prove (i), we assume . Let be the path in such that . Then, we have the following inequalities following from the Lipschitz property of .
It follows that . Because is the length of the shortest path between a and b in the Euclidean metric, we conclude that .
Next we prove (ii). Assume . For all points z in the straight line segment ,
This implies the following inequality.
We can now state the main theorem relating adaptive samples in the Euclidean metric to uniform samples in the metric induced by a set L.
Let L and X be compact sets; let be a sample; and let be a constant. If is an ε-sample of X with respect to the distance to L, then . Furthermore, if , then is an -sample of X with respect to the distance to L.
Given , there exists such that . By Lemma 10, , so for all , . As , this proves .
Furthermore, implies that for all , ; thus, there exists such that . Thus, by Lemma 10 . Since , then , so is an -sample of X. □
A filtration is a nested family of sets. In this paper, we consider filtrations $F = (F_\alpha)_{\alpha \in \mathbb{R}}$ parameterized by a real number $\alpha$ so that, whenever $\alpha \le \beta$, we have $F_\alpha \subseteq F_\beta$. Often, our filtrations are sublevel filtrations of a real-valued function $f : \mathbb{R}^d \to \mathbb{R}$. The sublevel filtration $F$ corresponding to the function $f$ is defined as:
$$F_\alpha := f^{-1}((-\infty, \alpha]).$$
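For intuition, a sublevel filtration restricted to a finite domain is easy to enumerate, and its defining nesting property can be checked directly (an illustrative sketch, not part of the formal development):

```python
def sublevel(f, alpha, domain):
    """Sublevel set F_alpha = {x : f(x) <= alpha}, restricted to a
    finite domain so it can be enumerated."""
    return {x for x in domain if f(x) <= alpha}
```

By construction, increasing alpha can only add points, so the sets are nested.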
A pair of filtrations $(F, G)$ is $(h_1, h_2)$-interleaved in an interval $I$ if $F_\alpha \subseteq G_{h_1(\alpha)}$ whenever $\alpha, h_1(\alpha) \in I$ and $G_\alpha \subseteq F_{h_2(\alpha)}$ whenever $\alpha, h_2(\alpha) \in I$. We require the functions $h_1, h_2$ to be nondecreasing in $I$.
The following lemma gives us an easy way to combine interleavings.
If $(F, G)$ is $(h_1, h_1')$-interleaved in $I_1$ and $(G, H)$ is $(h_2, h_2')$-interleaved in $I_2$, then $(F, H)$ is $(h, h')$-interleaved in $I_1 \cap I_2$, where $h = h_2 \circ h_1$ and $h' = h_1' \circ h_2'$.

If $\alpha, h_1(\alpha) \in I_1 \cap I_2$, then we have $F_\alpha \subseteq G_{h_1(\alpha)} \subseteq H_{h_2(h_1(\alpha))}$. Similarly, if $\alpha, h_2'(\alpha) \in I_1 \cap I_2$, then $H_\alpha \subseteq G_{h_2'(\alpha)} \subseteq F_{h_1'(h_2'(\alpha))}$. □
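In code, combining interleavings is just function composition. The sketch below assumes the convention that a pair (F, G) being (h, h')-interleaved means F_a is contained in G_{h(a)} and G_a is contained in F_{h'(a)}:

```python
def compose_interleavings(h1, h1p, h2, h2p):
    """Compose two interleavings: if F_a is contained in G_{h1(a)} and
    G_a in F_{h1p(a)}, and likewise G_a in H_{h2(a)} and H_a in
    G_{h2p(a)}, then F_a is contained in H_{h2(h1(a))} and H_a in
    F_{h1p(h2p(a))}."""
    h = lambda a: h2(h1(a))
    hp = lambda a: h1p(h2p(a))
    return h, hp
```

For instance, composing an additive shift with a multiplicative one yields the expected combined shift in each direction.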
3.2.1. Approximating X with
Ultimately, the goal is to relate , the offsets in the induced metric, to , the approximate offsets computed from approximations (or samples) to both X and L. This relationship will be given by an interleaving that is built up from an interleaving for each approximation step. For each of the following lemmas, let and be compact sets.
If , then are -interleaved in , where .
This lemma is a reinterpretation of Lemma 3 in the interleaving notation. □
3.2.2. Approximating the Induced Metric
It is much easier to use a union of Euclidean balls to model the sublevel sets of the distance function . Below, we show that this is a reasonable approximation. The following results may also be viewed as a strengthening of the adaptive sampling result of the previous section (Theorem 1).
The pair is -interleaved in , where .
It will suffice to show that for , , and for , .
Take so that . Thus, there exists such that . By Lemma 10, this implies that , which implies that .
Consider any point . For some , we have , so . Applying Lemma 10, we have that . Finally, , because . □
3.2.3. Approximating L with
Usually, the set L is unknown at the start and must be estimated from the input. For example, if L is the medial axis of X, there are several known techniques for approximating L by taking some vertices of the Voronoi diagram [5,6]. We would like to give some sampling conditions that allow us to replace L with an approximation . Interestingly, the sampling conditions for are dual to those used for : we require . In other words, must be an adaptive sample with respect to the distance to .
If , then is -interleaved in , where .
Fix any . There is a point such that . Moreover, there is a nearest point to x such that . Lemma 10 and the assumption that together imply that there exists such that:
It then follows from the definitions that:
Therefore, we can bound in terms of as follows.
Therefore, , and so, we conclude that . The proof is symmetric to show that □
3.2.4. Putting It All Together
We can now use Lemma 11 to combine the interleavings established in Lemmas 12–14.
Let and be compact sets. If and , then are -interleaved in , where and .
We use Lemma 11 to combine the interleavings from Lemmas 12–14 to conclude that the pair is -interleaved in . To complete the proof, we expand and as follows.
Therefore, we have that and . □
3.3. Smooth Adaptive Distance and Homology Inference
In the preceding sections, we showed how to approximate (via interleaving) , the sublevels of the distance to X in the induced metric, using a finite set of Euclidean balls, . Now, we show how and when such an approximation gives a guarantee about the underlying space X itself. This is substantially more difficult, because it requires us to relate the sublevels of the induced metric to an object we do not have direct access to. As such, we will require some stronger hypotheses.
We will first review the critical point theory of distance functions. Then, we show how to smooth the induced metric to an arbitrarily close Riemannian metric, rendering the critical point theory applicable. Then, we put these together to prove the main inference result of the paper, Theorem 3.
3.3.1. Critical Points of Distance Functions
In this section, we give a minimal presentation of the critical point theory of distance functions to explain and motivate the results about interleaving offsets of distance functions in Riemannian manifolds. The main fact we use is that such interleavings lead immediately to results about homology inference (Lemma 16).
For a smooth Riemannian manifold M and a compact subset , one can consider the function that maps each point in M to the distance to its nearest point in X as measured by the metric on the manifold. The gradient of can be defined on M, and the critical points are those points for which the gradient is zero. The critical values of are those values of r such that contains a critical point. The critical point theory of distance functions developed by Grove and others  extends the ideas from Morse theory to such distance functions. In particular, the theory gives the following result.
(Grove ). If contains no critical values, then is a homotopy equivalence.
This means that for intervals that do not contain critical values, the inclusion maps in the filtration are all homotopy equivalences and therefore induce isomorphisms in homology. This is used to give some information about the homology of filtrations that are interleaved with F.
We write to denote homology over a field. Therefore, for a set , we have a vector space , and for a continuous map , we have a linear map . For the canonical inclusion map for a subset , we will denote the corresponding linear map in homology as . The image of this map is denoted .
Let be the distance function to a compact set in a Riemannian manifold such that contains no critical values of . Let F be the sublevel filtration of , and let G be a filtration such that are -interleaved in . If , then:
The interleaving and the hypotheses imply that we have the following inclusions.
The preceding lemma implies that the maps , , and  all induce isomorphisms in homology. It follows that , because the inclusion of spaces in G factors through a space in F, and it in turn factors an inclusion of spaces in F, all of which induce isomorphisms in homology. □
3.3.2. Smoothing the Metric
To apply the critical point theory of distance functions to the induced metric directly, we would need it to be a smooth Riemannian metric. Although it is not smooth, we can smooth it with an arbitrarily small change. The process, though a little technical, is neither surprising nor very difficult. It proceeds in three steps.
We smooth the distance to L. This is the source of non-smoothness in the induced metric. This replaces with a smooth approximation, .
The smoothed distance to L is used to define the smoothed induced metric analogously to the original construction of .
The induced distance function can then be replaced by its smoothed version , and the corresponding smoothed offsets are then well defined.
The complete construction of the smoothed offsets is presented in Appendix A. The end result is an interleaving between the induced offsets and the smoothed version as expressed in the following lemma.
Given , consider compact sets and compact sets , such that and , then are -interleaved on , where and .
3.3.3. The Weak Feature Size

Chazal and Lieutier  introduced the weak feature size () as the least positive critical value of a Riemannian distance function. We denote the weak feature size with respect to as . In light of the critical point theory of distance functions, a bound on the weak feature size gives a guaranteed interval with no critical points. This allows one to infer the homology from another filtration (usually one that is discrete and built from data) as long as the second filtration is interleaved in that critical point free interval.
(Adapted from  Theorem 4.2; see also ). Let S and be compact subsets of . If and , then for all sufficiently small ,
The key idea in that proof is that the Hausdorff bound gives an interleaving, while the weak feature size bound gives the interval without critical points. The technical condition regarding  is present to account for strange compact sets that may be homologically different from their arbitrarily small offsets. It is reasonable to assume that this condition holds for some sufficiently small offset, and thus, one could “compute” the homology of S using only the sample .
Most previous uses of the weak feature size have been applied in Euclidean spaces, but the critical point theory of distance functions can be applied more broadly to other smooth Riemannian manifolds. This is why we introduced it as (with the superscript) to indicate the underlying metric.
3.3.4. Homology Inference
We have now introduced all the necessary pieces to prove our main homology inference result.
Given , consider compact sets and compact sets , such that and . Define the real-valued functions and as:
Given any , such that , if , then:
Given such that , we have the following sequence of inclusions as a result of Lemma 17.
As we assume that , by the definition of the weak feature size, Lemma 16 implies that the inclusions and are homotopy equivalences. We remind the reader that if two spaces are homotopy equivalent, all the induced homology maps between the spaces are isomorphisms. By applying homology to each space and inclusion in the previous sequence, we have the following sequence of homology groups, where and are isomorphisms.
The aforementioned isomorphisms and factor through and , respectively, proving that is surjective and is injective. We then have that . □
3.3.5. Computing the Homology
The last step is to relate the smoothed offsets to something that can be computed. It will generally be the case that the approximation of X is not just compact, but also finite. Then, for any scale , we have that is the union of a finite set of Euclidean balls.
The nerve theorem provides a natural way to compute the homology of a union of Euclidean balls. The nerve of a collection U of sets is the set of all subsets of U whose members have a nonempty common intersection. It has the structure of a simplicial complex, whose homology can be computed directly by standard matrix reduction algorithms. When all nonempty intersections are contractible, the cover is said to be good. A cover by Euclidean balls (or any convex shape) is always good. For good covers, the nerve theorem, a standard result in algebraic topology , implies that:
This is the most basic way to compute the homology of union of balls and is used throughout topological data analysis.
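As an illustrative sketch (not the paper's algorithm): the zeroth homology of a union of balls can already be read off the 1-skeleton of the nerve, since two balls intersect exactly when their centers are within the sum of the radii. Higher-dimensional homology requires the higher-dimensional nerve simplices and the matrix reduction mentioned above.

```python
import math

def nerve_components(centers, radii):
    """Number of connected components (zeroth Betti number) of a union
    of balls, computed from the 1-skeleton of its nerve via union-find."""
    n = len(centers)
    parent = list(range(n))
    def find(i):
        # Union-find with path compression.
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in range(n):
        for j in range(i + 1, n):
            # Two balls intersect iff center distance <= sum of radii.
            if math.dist(centers[i], centers[j]) <= radii[i] + radii[j]:
                parent[find(i)] = find(j)
    return len({find(i) for i in range(n)})
```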
In our case, we are not just computing the homology of the union, but also the homology of the inclusion map. This computation will require a slightly stronger result. The persistent nerve lemma , applied to Diagram (4) when combined with the above isomorphisms, yields the following.
This last statement turns the isomorphism into an algorithm, because standard algorithms  can compute the homology of the inclusion of the nerves.
4. Conclusions

We present an alternative metric in Euclidean space that connects adaptive sampling and uniform sampling. We show how to apply classical results from the critical point theory of distance functions to infer topological properties of the underlying space from such samples. This provides a connection between methods in surface reconstruction (based on adaptive sampling) and homology inference (based on uniform sampling).
We show in Theorem 1 that there is a precise relationship between samples taken uniformly with respect to the induced metric at some scale and those same samples being adaptive in the Euclidean metric. In Theorem 2, we show that the sublevel sets of our distance function under this alternative metric can be interleaved with the metric balls resulting from our approximation of the metric, assuming that both X and L are uniformly well sampled with respect to the corresponding Hausdorff distances. Finally, we show how to fully extend the critical point theory of distance functions and the weak feature size to give theoretical guarantees on homology inference from finite samples of X and L using the induced metric (Theorem 3).
The main limitation of adaptive metrics is that they require two sets as input, one to define the set and one to define the metric. In many instances, this is not available. However, we expect that the approach could find wider use in problems with labeled data. For example, data with binary labels may be viewed as the two sets X and L. Then, each set defines a metric on the other, where the metric is scaled according to how close it is to the other set. This is the subject of ongoing and future work.
Author Contributions: Writing—original draft, N.J.C. and D.R.S.; Writing—review & editing, N.J.C. and D.R.S. All authors have read and agreed to the published version of the manuscript.

Funding: This research was partially supported by the National Science Foundation under Grants CCF-1464379, CCF-1525978, and CCF-1652218.
Conflicts of Interest
The authors declare no conflict of interest.
Appendix A. Details on Metric Smoothing
This section includes the full construction and relevant lemmas about the smoothed version of the induced metric.
For a compact set and , denote by the offsets of L with respect to the Euclidean metric. The following lemma gives upper and lower bounds on the value of a smoothing of the distance-to-set function , , which is defined on an arbitrarily smaller subset of Euclidean space.
Consider a compact set . Given , for all , there exists a smooth function such that for all , .
By a standard result from , for all , there exists a smoothing of the distance function such that . Choose , for the given . By the approximation property of , for all , we have that . Note also that for all , and thus, . Combining the aforementioned, we have that and . □
Consider as defined in Lemma A1. Using this we can define a smooth adaptive distance function and provide upper and lower bounds on its value with respect to the original adaptive distance function . For , we define:
Given and a smooth function defined on approximating , consider a compact set . The Riemannian distance function satisfies the following property for all ,
Given two points and any , consider such that and . We then have the following inequalities resulting from inverting the inequalities in Lemma A1.
Since these inequalities hold for all , we can conclude that for all pairs , .
Now, consider . Define and . We remind the reader that these points’ existence is guaranteed by the extreme value theorem. By examining these variables with respect to the previous inequality we know that:
By applying the definitions of both adaptive distance functions to the previous expression, we obtain the desired inequality,
Define the Riemannian adaptive offsets of X as , and denote the corresponding filtration by . The following result reestablishes Lemma A2 in the language of filtrations and establishes an interleaving of the Riemannian adaptive offsets with the original adaptive offsets.
Let be a compact set. Given , for compact , there exists a Riemannian distance function , such that are -interleaved on , where and .
By Lemma A2, there exists a Riemannian distance function , such that for all ,
so for and , , and thus, , which implies that , so .
On the other hand, for and , , and thus, , so □
Combining the previous corollary with Theorem 2 in Section 3.2.4, we obtain an interleaving between the smoothed adaptive offsets and the approximate offsets. This then allows us to apply Lemma 16 and standard topological data analysis techniques to this interleaving, giving a method of homology inference for arbitrarily small offsets of X, as we have a Riemannian distance function generating the smoothed adaptive offsets' filtration.
Appendix A.1. Proof of Lemma 17
The hypotheses of the statement satisfy the hypotheses of both Theorem 2 and Corollary A1, so one knows that  are -interleaved on , where  and . Furthermore,  are -interleaved on , where  and . By applying Lemma 11 and composing the necessary functions, we achieve the stated interleavings. □
References

1. Cazals, F.; Giesen, J.; Pauly, M.; Zomorodian, A. The conformal alpha shape filtration. Vis. Comput. 2006, 22, 531–540.
2. Chazal, F.; Lieutier, A. Smooth Manifold Reconstruction from Noisy and Non-Uniform Approximation with Guarantees. Comput. Geom. Theory Appl. 2008, 40, 156–170.
3. Chazal, F.; Lieutier, A. Topology Guaranteeing Manifold Reconstruction using Distance Function to Noisy Data. In Proceedings of the 22nd ACM Symposium on Computational Geometry, Sedona, AZ, USA, 5–7 June 2006.
4. Dey, T.K.; Dong, Z.; Wang, Y. Parameter-free topology inference and sparsification for data on manifolds. In Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms, Barcelona, Spain, 16–19 January 2017; pp. 2733–2747.
5. Amenta, N.; Bern, M.; Kamvysselis, M. A new Voronoi-based surface reconstruction algorithm. In Proceedings of the 25th Annual Conference on Computer Graphics and Interactive Techniques, Anaheim, CA, USA, 21–25 July 2013; pp. 415–421.
6. Dey, T.K. Curve and Surface Reconstruction: Algorithms with Mathematical Analysis; Cambridge University Press: Cambridge, UK, 2007.
7. Boissonnat, J.D.; Dyer, R.; Ghosh, A. Delaunay Triangulation of Manifolds. Found. Comput. Math. 2018, 18, 399–431.
8. Boissonnat, J.D.; Wintraecken, M. The Topological Correctness of PL-Approximations of Isomanifolds. In Proceedings of the 36th International Symposium on Computational Geometry (SoCG 2020), Zurich, Switzerland, 23–26 June 2020; Leibniz International Proceedings in Informatics (LIPIcs); Cabello, S., Chen, D.Z., Eds.; Schloss Dagstuhl–Leibniz-Zentrum für Informatik: Dagstuhl, Germany, 2020; Volume 164, pp. 20:1–20:18.
9. Niyogi, P.; Smale, S.; Weinberger, S. Finding the Homology of Submanifolds with High Confidence from Random Samples. Discret. Comput. Geom. 2008, 39, 419–441.
10. Niyogi, P.; Smale, S.; Weinberger, S. A Topological View of Unsupervised Learning from Noisy Data. SIAM J. Comput. 2011, 40, 646–663.
11. Grove, K. Critical point theory for distance functions. Proc. Symp. Pure Math. 1993, 54, 357–385.
12. Chazal, F.; Cohen-Steiner, D.; Lieutier, A. A Sampling Theory for Compact Sets in Euclidean Space. Discret. Comput. Geom. 2009, 41, 461–479.
13. Clarkson, K.L. Building triangulations using ε-nets. In Proceedings of the Thirty-Eighth Annual ACM Symposium on Theory of Computing, Seattle, WA, USA, 21–23 May 2006; pp. 326–335.
14. Wein, R.; van den Berg, J.; Halperin, D. Planning High-quality Paths and Corridors Amidst Obstacles. Int. J. Robot. Res. 2008, 27, 1213–1231.
15. Agarwal, P.K.; Fox, K.; Salzman, O. An efficient algorithm for computing high quality paths amid polygonal obstacles. In Proceedings of the 27th Annual ACM-SIAM Symposium on Discrete Algorithms, Arlington, VA, USA, 10–12 January 2016; pp. 1179–1192.
16. Cohen, M.B.; Fasy, B.T.; Miller, G.L.; Nayyeri, A.; Sheehy, D.R.; Velingker, A. Approximating Nearest Neighbor Distances. In Proceedings of the Algorithms and Data Structures Symposium, Victoria, BC, Canada, 5–7 August 2015; pp. 200–211.
17. Chu, T.; Miller, G.L.; Sheehy, D.R. Exact computation of a manifold metric via Lipschitz Embeddings and Shortest Paths on a Graph. In Proceedings of the ACM-SIAM Symposium on Discrete Algorithms, Salt Lake City, UT, USA, 5–8 January 2020.
18. Green, R.; Wu, H. C∞ approximations of convex, subharmonic, and plurisubharmonic functions. Ann. Sci. Éc. Norm. Sup. 1979, 12, 47–84.
19. Chazal, F.; Lieutier, A. Weak Feature Size and Persistent Homology: Computing Homology of Solids in Rn from Noisy Data Samples. In Proceedings of the 21st ACM Symposium on Computational Geometry, Pisa, Italy, 6–8 June 2005; pp. 255–262.
20. Chazal, F.; Oudot, S.Y. Towards Persistence-Based Reconstruction in Euclidean Spaces. In Proceedings of the 24th ACM Symposium on Computational Geometry, College Park, MD, USA, 9–11 June 2008; pp. 232–241.
21. Hatcher, A. Algebraic Topology; Cambridge University Press: Cambridge, UK, 2001.
22. Edelsbrunner, H.; Letscher, D.; Zomorodian, A. Topological Persistence and Simplification. Discret. Comput. Geom. 2002, 4, 511–533.
The statements, opinions and data contained in the journal Algorithms are solely those of the individual authors and contributors and not of the publisher and the editor(s). MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.