## 2. Methods

Let L and X be compact subsets of ${\mathbb{R}}^{d}$ with respect to the Euclidean metric. For $x,y\in {\mathbb{R}}^{d}$, define $\mathrm{Path}(x,y)$ to be the set of bounded piecewise-${C}^{1}$ paths from x to y, parametrized by the Euclidean arc-length. Similarly, $\mathrm{Path}(x,S):={\displaystyle \bigcup _{s\in S}}\mathrm{Path}(x,s)$ denotes all paths from x to a set S.

For any compact set $L\subseteq {\mathbb{R}}^{d}$, define ${f}_{L}(\cdot ):{\mathbb{R}}^{d}\to \mathbb{R}$ by:

$$ {f}_{L}(x):=\min_{\ell \in L}\Vert x-\ell \Vert . $$

The length of a unit-speed path $\gamma :[0,a]\to {\mathbb{R}}^{d}$ is denoted as $\left|\gamma \right|:=a$, and its length in the metric induced by L is:

$$ {\left|\gamma \right|}^{L}:={\int }_{\gamma }\frac{dz}{{f}_{L}(z)}={\int }_{0}^{a}\frac{dt}{{f}_{L}(\gamma (t))}. $$

The induced metric is then ${\mathrm{d}}^{L}(x,y):=\inf_{\gamma \in \mathrm{Path}(x,y)}{\left|\gamma \right|}^{L}$. For $y\in {\mathbb{R}}^{d}$, define:

$$ {f}_{X}^{L}(y):=\min_{x\in X}{\mathrm{d}}^{L}(x,y)=\inf_{\gamma \in \mathrm{Path}(y,X)}{\left|\gamma \right|}^{L} $$

and:

$$ \widehat{{f}_{X}^{L}}(y):=\min_{x\in X}\frac{\Vert x-y\Vert }{{f}_{L}(x)}. $$

Note that ${f}_{X}^{L}(\cdot )$ is a distance function, while $\widehat{{f}_{X}^{L}}(\cdot )$ is not. The latter function can be interpreted as a first-order approximation of the former.
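For finite point sets, these quantities are easy to approximate numerically. The following sketch is our own illustration (the helper names `f_L` and `induced_length` are ours, not the paper's): it computes the distance to a finite set L and a Riemann-sum approximation of the induced length ${\left|\gamma \right|}^{L}$ of a polyline.

```python
import math

def f_L(z, L):
    """Euclidean distance from the point z to the finite set L."""
    return min(math.dist(z, p) for p in L)

def induced_length(path, L, steps=1000):
    """Midpoint-rule approximation of |path|^L = integral of dz / f_L(z)
    along a polyline given as a list of points."""
    total = 0.0
    for a, b in zip(path, path[1:]):
        seg = math.dist(a, b)
        for i in range(steps):
            t = (i + 0.5) / steps
            z = tuple(ai + t * (bi - ai) for ai, bi in zip(a, b))
            total += (seg / steps) / f_L(z, L)
    return total

# The same Euclidean length is "longer" in the induced metric when
# it is traversed close to L than when it is far from L.
L = [(0.0, 0.0)]
near = induced_length([(0.1, 0.0), (0.1, 0.1)], L)   # segment near L
far = induced_length([(10.0, 0.0), (10.0, 0.1)], L)  # same length, far from L
```

Both segments have Euclidean length 0.1, but the one near L accumulates far more induced length, which is exactly the adaptive behavior the metric is designed to produce.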

**Definition** **1.** For any compact set $X\subset {\mathbb{R}}^{d}\backslash L$, for some compact set $L\subset {\mathbb{R}}^{d}$, the α-offsets with respect to ${\mathrm{d}}^{L}$ are:

$$ {A}_{X}^{L}(\alpha ):=\{y\in {\mathbb{R}}^{d}\mid {f}_{X}^{L}(y)\le \alpha \}. $$

The distance function ${f}_{L}(\cdot )$ can be transformed into an arbitrarily close smooth function ${\tilde{f}}_{L}(\cdot )$ [18], yielding a Riemannian metric ${\tilde{\mathrm{d}}}^{L}$ defined in an identical manner as ${\mathrm{d}}^{L}$. From this, one has corresponding $\alpha $-offsets ${\tilde{A}}_{X}^{L}(\alpha )$ that are arbitrarily close to ${A}_{X}^{L}(\alpha )$. We will encounter this smoother version in Section 3.3.

We will approximate the offsets ${A}_{X}^{L}(\alpha )$ by a union of balls as follows.

**Definition** **2.** For any compact set $X\subset {\mathbb{R}}^{d}\backslash L$, for some compact set $L\subset {\mathbb{R}}^{d}$, the approximate α-offsets with respect to ${\mathrm{d}}^{L}$ are:

$$ {B}_{X}^{L}(\alpha ):=\{y\in {\mathbb{R}}^{d}\mid \widehat{{f}_{X}^{L}}(y)\le \alpha \}={\displaystyle \bigcup _{x\in X}}\mathrm{ball}(x,\alpha {f}_{L}(x)). $$

A useful property of ${f}_{X}^{L}(\cdot )$ is that it is a one-Lipschitz function. In general, a function f between two metric spaces $(X,{\mathrm{d}}_{X})$ and $(Y,{\mathrm{d}}_{Y})$ is said to be k-Lipschitz if for all $x,y\in X$, ${\mathrm{d}}_{Y}(f(x),f(y))\le k{\mathrm{d}}_{X}(x,y)$.
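For finite X and L, membership in the approximate α-offsets reduces to a union-of-balls test. The snippet below is our own illustration (the helper names are ours):

```python
import math

def f_L(x, L):
    """Euclidean distance from x to the finite set L."""
    return min(math.dist(x, p) for p in L)

def in_approx_offset(y, X, L, alpha):
    """Test y in B_X^L(alpha): is y inside some ball(x, alpha * f_L(x))?"""
    return any(math.dist(x, y) <= alpha * f_L(x, L) for x in X)

L = [(0.0, 0.0)]
X = [(1.0, 0.0), (3.0, 0.0)]
# Radii adapt to the distance from L: 0.5 around (1,0) and 1.5 around (3,0).
inside = in_approx_offset((1.4, 0.0), X, L, 0.5)
outside = in_approx_offset((0.4, 0.0), X, L, 0.5)
```

The point (1.4, 0) lies in the ball of adaptive radius 0.5 around (1, 0), while (0.4, 0), which is closer to L, lies in neither ball.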

**Lemma** **1.** The function ${f}_{X}^{L}$ is one-Lipschitz from the metric space $({\mathbb{R}}^{d},{\mathrm{d}}^{L})$ to $\mathbb{R}$.

**Proof.** Fix any $a,b\in {\mathbb{R}}^{d}$. There exist a point $x\in X$ and a path ${\gamma}_{1}\in \mathrm{Path}(a,x)$ such that ${f}_{X}^{L}(a)={\int}_{{\gamma}_{1}}\frac{dz}{{f}_{L}(z)}$. Likewise, there exists ${\gamma}_{2}\in \mathrm{Path}(a,b)$ such that ${\mathrm{d}}^{L}(a,b)={\int}_{{\gamma}_{2}}\frac{dz}{{f}_{L}(z)}$.

Concatenating ${\gamma}_{2}$ (traversed from b to a) with ${\gamma}_{1}$ yields a path ${\gamma}_{3}$ in $\mathrm{Path}(b,X)$. Thus, ${f}_{X}^{L}(b)\le {\int}_{{\gamma}_{3}}\frac{dz}{{f}_{L}(z)}\le {f}_{X}^{L}(a)+{\mathrm{d}}^{L}(a,b)$. As this holds for all $a,b$, we conclude that $|{f}_{X}^{L}(a)-{f}_{X}^{L}(b)|\le {\mathrm{d}}^{L}(a,b)$, as desired. □

We can use ${f}_{X}^{L}$ to define the Hausdorff distance, which is a metric between compact sets. This metric is useful for stating bounds on the quality, or uniformity, of a sample near a set.

**Definition** **3.** The Hausdorff distance between two compact sets $X,Y\subseteq ({\mathbb{R}}^{d},{\mathrm{d}}^{L})$ is defined as:

$$ {\mathrm{d}}_{H}^{L}(X,Y):=\max \left\{\max_{x\in X}{f}_{Y}^{L}(x),\max_{y\in Y}{f}_{X}^{L}(y)\right\}. $$

If the Hausdorff distance between a compact set and a sample is bounded, Lemma 3 shows that their $\alpha $-offsets are interleaved at particular scales.

The following is the definition of an adaptive sample we will use throughout. For the special case when X is a manifold and L is its medial axis, it corresponds to the $\epsilon $-sample used in surface reconstruction.

**Definition** **4.** Given a compact set $L\subset {\mathbb{R}}^{d}$ and compact sets $X,\widehat{X}\subset {\mathbb{R}}^{d}\backslash L$ such that $\widehat{X}\subseteq X$, we say that $\widehat{X}$ is an ε-sample of X, for $\epsilon \in [0,1)$, if for all $x\in X$, there exists $p\in \widehat{X}$ such that $\Vert x-p\Vert \le \epsilon {f}_{L}(x)$.

This definition is closely related to that of the approximate $\alpha $-offsets, because if $\widehat{X}$ is an $\epsilon $-sample of X, then for all $x\in X$, $\mathrm{ball}(x,\epsilon {f}_{L}(x))\cap \widehat{X}\ne \varnothing $.
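For finite sets, Definition 4 can be checked directly. The sketch below is our own illustration (the helper `is_eps_sample` is hypothetical, not from the paper) using the Euclidean distance to a finite L:

```python
import math

def f_L(x, L):
    """Euclidean distance from x to the finite set L."""
    return min(math.dist(x, p) for p in L)

def is_eps_sample(X_hat, X, L, eps):
    """Check Definition 4: every x in X has some p in X_hat with
    ||x - p|| <= eps * f_L(x). All sets are finite point lists."""
    return all(
        any(math.dist(x, p) <= eps * f_L(x, L) for p in X_hat)
        for x in X
    )

# Points on a horizontal line, with L a single point below it. The
# permitted gap between samples grows with the distance to L.
L = [(0.0, -1.0)]
X = [(i / 10, 0.0) for i in range(-20, 21)]
X_hat = [(-2.0, 0.0), (-1.0, 0.0), (0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
```

With this configuration, `is_eps_sample(X_hat, X, L, 0.5)` holds: a spacing of 1.0 between samples is allowed because every point of X is at least distance 1 from L, whereas a much smaller ε would fail.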

## 3. Results

**Lemma** **3.** Consider $\widehat{X},X\subseteq {\mathbb{R}}^{d}\backslash L$ to be such that ${\mathrm{d}}_{H}^{L}(\widehat{X},X)\le \delta $. Then, for all $\alpha \ge 0$, ${A}_{X}^{L}(\alpha )\subseteq {A}_{\widehat{X}}^{L}(\alpha +\delta )$ and ${A}_{\widehat{X}}^{L}(\alpha )\subseteq {A}_{X}^{L}(\alpha +\delta )$.

**Proof.** Fix $y\in {A}_{X}^{L}(\alpha )$. By definition, ${f}_{X}^{L}(y)\le \alpha $, which implies that there exists $x\in X$ such that ${\mathrm{d}}^{L}(x,y)\le \alpha $. The assumption ${\mathrm{d}}_{H}^{L}(\widehat{X},X)\le \delta $ implies that for all $x\in X$, ${f}_{\widehat{X}}^{L}(x)\le \delta $. Now, by Lemma 1, ${f}_{\widehat{X}}^{L}(y)\le {f}_{\widehat{X}}^{L}(x)+{\mathrm{d}}^{L}(x,y)\le \delta +\alpha $, implying $y\in {A}_{\widehat{X}}^{L}(\alpha +\delta )$. By a symmetric argument, the other statement holds. □

Lemma 4 relates the length of a path $\gamma $ with respect to two distance-to-set functions, assuming the two sets are close in the Euclidean Hausdorff distance.

**Lemma** **4.** Let $L,\widehat{L}$ be two compact sets such that ${\mathrm{d}}_{H}(L,\widehat{L})\le \epsilon $ for some $\epsilon >0$. For all unit-speed paths $\gamma :[0,a]\to {\mathbb{R}}^{d}\backslash {L}^{c\epsilon}$, for some positive c, we have the following inequalities.

$$ \left(1-\frac{1}{c}\right){\left|\gamma \right|}^{\widehat{L}}\le {\left|\gamma \right|}^{L}\le \left(1+\frac{1}{c}\right){\left|\gamma \right|}^{\widehat{L}} $$

**Proof.** Take an arbitrary unit-speed path $\gamma :[0,a]\to {\mathbb{R}}^{d}\backslash {L}^{c\epsilon}$. Since the image of $\gamma $ is a subset of ${\mathbb{R}}^{d}\backslash {L}^{c\epsilon}$, for all $z\in \gamma $, ${f}_{L}(z)>c\epsilon $. By the Hausdorff bound between L and $\widehat{L}$, we have ${f}_{L}(z)\le {f}_{\widehat{L}}(z)+\epsilon <{f}_{\widehat{L}}(z)+\frac{{f}_{L}(z)}{c}$. Likewise, we have that ${f}_{L}(z)\ge {f}_{\widehat{L}}(z)-\epsilon >{f}_{\widehat{L}}(z)-\frac{{f}_{L}(z)}{c}$. Rearranging both of these, we have that $\frac{1-\frac{1}{c}}{{f}_{\widehat{L}}(z)}<\frac{1}{{f}_{L}(z)}<\frac{1+\frac{1}{c}}{{f}_{\widehat{L}}(z)}$.

By the definition of ${\left|\gamma \right|}^{L}$ and ${\left|\gamma \right|}^{\widehat{L}}$, these inequalities imply that $(1-\frac{1}{c}){\left|\gamma \right|}^{\widehat{L}}\le {\left|\gamma \right|}^{L}\le (1+\frac{1}{c}){\left|\gamma \right|}^{\widehat{L}}.$ □
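The bound can be sanity-checked numerically. In this sketch (our own illustration, with L, the perturbation ε, and the constant c chosen arbitrarily), we compare discretized induced lengths of one path under L and a perturbed $\widehat{L}$:

```python
import math

def f(z, L):
    """Euclidean distance from z to the finite set L."""
    return min(math.dist(z, p) for p in L)

def induced_len(path_pts, L):
    """Riemann-sum approximation of the integral of dz / f(z)
    along a densely sampled polyline."""
    total = 0.0
    for a, b in zip(path_pts, path_pts[1:]):
        mid = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)
        total += math.dist(a, b) / f(mid, L)
    return total

# Perturb L by eps; the path stays at distance >= 1 > c*eps from L,
# so Lemma 4 bounds the two induced lengths within a (1 +/- 1/c) factor.
eps, c = 0.1, 5
L = [(0.0, 0.0)]
L_hat = [(eps, 0.0)]                           # d_H(L, L_hat) = eps
pts = [(1.0, t / 1000) for t in range(1001)]   # vertical unit segment at x = 1
len_L = induced_len(pts, L)
len_Lhat = induced_len(pts, L_hat)
```

Here `len_L` is about 0.88 and the discretized lengths land comfortably inside the (1 ± 1/5) band the lemma guarantees.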

The following lemma bounds how close a shortest path to a compact set X can come to L, providing a constant c satisfying Lemma 4 that depends on the compact set $X\subset {\mathbb{R}}^{d}\backslash L$ one is working with.

**Lemma** **5.** Take compact set $L\subset {\mathbb{R}}^{d}$, compact set $X\subset {\mathbb{R}}^{d}\backslash L$, and $y\in {A}_{X}^{L}(\delta )$, for $\delta <1$. If γ is the shortest path from y to X with respect to ${\mathrm{d}}^{L}$, then for all $z\in \gamma $:

$$ {f}_{L}(z)\ge \left(1-\frac{\delta }{1-\delta }\right){\mathrm{d}}_{H}(X,L). $$

**Proof.** Since $y\in {A}_{X}^{L}(\delta )$, ${f}_{X}^{L}(y)\le \delta $, so there exists $x\in X$ such that ${\mathrm{d}}^{L}(x,y)\le \delta $. Take $\gamma $ as the shortest path from y to X, ending at $x\in X$. For all $z\in \gamma $, ${\mathrm{d}}^{L}(x,z)\le {\mathrm{d}}^{L}(x,y)\le \delta $.

By Lemma 10, $\Vert x-z\Vert \le \frac{\delta}{1-\delta}{f}_{L}(x)$, and by ${f}_{L}$ being Lipschitz, we have that ${f}_{L}(z)\ge {f}_{L}(x)-\Vert x-z\Vert \ge (1-\frac{\delta}{1-\delta}){f}_{L}(x)\ge (1-\frac{\delta}{1-\delta}){\mathrm{d}}_{H}(X,L)$. This means that every point on the path $\gamma $ is at least distance $(1-\frac{\delta}{1-\delta}){\mathrm{d}}_{H}(X,L)$ away from L. □

We define a noisy $\epsilon $-sample, for $\epsilon <1$, of compact $X\subseteq {\mathbb{R}}^{d}\backslash L$ with respect to ${f}_{L}$ for some compact set L as a compact set $\widehat{X}\subseteq {\mathbb{R}}^{d}\backslash L$ such that for all $x\in X$, there exists $p\in \widehat{X}$ such that $\Vert x-p\Vert \le \epsilon {f}_{L}(x)$. Likewise, for all $p\in \widehat{X}$, there exists $x\in X$ such that $\Vert x-p\Vert \le \epsilon {f}_{L}(x)$. The following lemmas relate a noisy $\epsilon $-sample to the Hausdorff distance between the sample $\widehat{X}$ and the set X and vice versa.

**Lemma** **6.** Consider compact set L and compact $X,\widehat{X}\subset {\mathbb{R}}^{d}\backslash L$. If $\widehat{X}$ is a noisy ε-sample of X with respect to ${f}_{L}$, for $\epsilon <1$, then ${\mathrm{d}}_{H}^{L}(\widehat{X},X)\le \frac{\epsilon}{1-\epsilon}$.

**Proof.** Given $x\in X$, by definition, there exists $p\in \widehat{X}$ such that $\Vert x-p\Vert \le \epsilon {f}_{L}(x)$. By Lemma 10, ${\mathrm{d}}^{L}(x,p)\le \frac{\epsilon}{1-\epsilon}$, so for all $x\in X$, ${f}_{\widehat{X}}^{L}(x)\le \frac{\epsilon}{1-\epsilon}$.

Furthermore, given $p\in \widehat{X}$, there exists $x\in X$ such that $\Vert x-p\Vert \le \epsilon {f}_{L}(x)$, so for all $p\in \widehat{X}$, ${f}_{X}^{L}(p)\le \frac{\epsilon}{1-\epsilon}$; thus, ${\mathrm{d}}_{H}^{L}(\widehat{X},X)\le \frac{\epsilon}{1-\epsilon}$. □

**Lemma** **7.** Consider compact set L and sets $X,\widehat{X}\subset {\mathbb{R}}^{d}\backslash L$. If ${\mathrm{d}}_{H}^{L}(\widehat{X},X)\le \epsilon <\frac{1}{2}$, then $\widehat{X}$ is a noisy $\frac{\epsilon}{1-\epsilon}$-sample of X with respect to ${f}_{L}$.

**Proof.** ${\mathrm{d}}_{H}^{L}(\widehat{X},X)\le \epsilon $ implies that for all $p\in \widehat{X}$, ${f}_{X}^{L}(p)\le \epsilon $. Thus, there exists $x\in X$ such that ${\mathrm{d}}^{L}(x,p)\le \epsilon $. By Lemma 10, $\Vert x-p\Vert \le \frac{\epsilon}{1-\epsilon}{f}_{L}(x)$.

Similarly, ${\mathrm{d}}_{H}^{L}(\widehat{X},X)\le \epsilon $ implies that for all $x\in X$, ${f}_{\widehat{X}}^{L}(x)\le \epsilon $; thus, there exists $p\in \widehat{X}$ such that ${\mathrm{d}}^{L}(x,p)\le \epsilon $, and thus, $\Vert x-p\Vert \le \frac{\epsilon}{1-\epsilon}{f}_{L}(x)$. Since $\epsilon <\frac{1}{2}$, we have $\frac{\epsilon}{1-\epsilon}<1$, so $\widehat{X}$ is a noisy $\frac{\epsilon}{1-\epsilon}$-sample of X. □

**Lemma** **8.** Given compact set $L\subset {\mathbb{R}}^{d}$ and compact set $X\subset {\mathbb{R}}^{d}\backslash L$, for $\epsilon <1$, ${A}_{X}^{L}(\epsilon )\subseteq {B}_{X}^{L}(\frac{\epsilon}{1-\epsilon})$.

**Proof.** Take $y\in {A}_{X}^{L}(\epsilon )$ so that ${f}_{X}^{L}(y)\le \epsilon $. Thus, there exists $x\in X$ such that ${\mathrm{d}}^{L}(x,y)\le \epsilon $. By Lemma 10, this implies that $\Vert x-y\Vert \le \frac{\epsilon}{1-\epsilon}{f}_{L}(x)$, which implies that $y\in {B}_{X}^{L}(\frac{\epsilon}{1-\epsilon})$. □

**Lemma** **9.** Given compact set $L\subset {\mathbb{R}}^{d}$ and compact set $X\subset {\mathbb{R}}^{d}\backslash L$, for $\epsilon <1$, ${B}_{X}^{L}(\epsilon )\subseteq {A}_{X}^{L}(\frac{\epsilon}{1-\epsilon})$.

**Proof.** Consider $y\in {B}_{X}^{L}(\epsilon )$. Thus, $y\in \mathrm{ball}(x,\epsilon {f}_{L}(x))$, for some $x\in X$, so $\Vert x-y\Vert \le \epsilon {f}_{L}(x)$. Applying Lemma 10, we then have that ${\mathrm{d}}^{L}(x,y)\le \frac{\epsilon}{1-\epsilon}$, and as ${f}_{X}^{L}(y)\le {\mathrm{d}}^{L}(x,y)$, $y\in {A}_{X}^{L}(\frac{\epsilon}{1-\epsilon})$. □

#### 3.1. Adaptive Sampling

In this section, we prove that a uniform sample in the induced metric corresponds to an adaptive sample in the Euclidean metric and vice versa. The key to this proof is the following lemma, which will also be used for the more elaborate interleaving results of Section 3.2.

**Lemma** **10.** Let $L\subset {\mathbb{R}}^{d}$ be a compact set, and let $a,b\in {\mathbb{R}}^{d}\backslash L$. Then, the following two statements hold for all $\delta \in [0,1)$.

- (i)
If ${\mathrm{d}}^{L}(a,b)\le \delta $, then $\frac{\Vert a-b\Vert}{{f}_{L}(a)}\le \frac{\delta}{1-\delta}$.

- (ii)
If $\frac{\Vert a-b\Vert}{{f}_{L}(a)}\le \delta $, then ${\mathrm{d}}^{L}(a,b)\le \frac{\delta}{1-\delta}$.

**Proof.** To prove (i), we assume ${\mathrm{d}}^{L}(a,b)\le \delta $. Let $\gamma \in \mathrm{Path}(a,b)$ be such that ${\mathrm{d}}^{L}(a,b)={\int}_{\gamma}\frac{dz}{{f}_{L}(z)}\le \delta $. For every $z\in \gamma $, the one-Lipschitz property of ${f}_{L}$ in the Euclidean metric gives ${f}_{L}(z)\le {f}_{L}(a)+\Vert z-a\Vert \le {f}_{L}(a)+\left|\gamma \right|$, yielding the following inequalities.

$$ \delta \ge {\int }_{\gamma }\frac{dz}{{f}_{L}(z)}\ge {\int }_{\gamma }\frac{dz}{{f}_{L}(a)+\left|\gamma \right|}=\frac{\left|\gamma \right|}{{f}_{L}(a)+\left|\gamma \right|} $$

It follows that $\left|\gamma \right|\le \frac{\delta}{1-\delta}{f}_{L}(a)$. Because $\Vert a-b\Vert $ is the length of the shortest path between a and b in the Euclidean metric, we conclude that $\Vert a-b\Vert \le \left|\gamma \right|\le \frac{\delta}{1-\delta}{f}_{L}(a)$.

Next, we prove (ii). Assume $\frac{\Vert a-b\Vert}{{f}_{L}(a)}\le \delta $. For all points z in the straight line segment $\overline{ab}$,

$$ {f}_{L}(z)\ge {f}_{L}(a)-\Vert z-a\Vert \ge {f}_{L}(a)-\Vert a-b\Vert \ge (1-\delta ){f}_{L}(a). $$

This implies the following inequality.

$$ {\mathrm{d}}^{L}(a,b)\le {\int }_{\overline{ab}}\frac{dz}{{f}_{L}(z)}\le \frac{\Vert a-b\Vert }{(1-\delta ){f}_{L}(a)}\le \frac{\delta }{1-\delta } $$

□
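Part (ii) can be checked numerically: the straight segment upper-bounds the infimum defining ${\mathrm{d}}^{L}(a,b)$, so a discretized integral along it should respect the bound. This sketch is our own illustration, with L, a, and b chosen arbitrarily:

```python
import math

def f_L(z, L):
    """Euclidean distance from z to the finite set L."""
    return min(math.dist(z, p) for p in L)

def segment_induced_length(a, b, L, steps=2000):
    """Midpoint-rule approximation of the integral of dz / f_L(z)
    along the straight segment from a to b."""
    seg = math.dist(a, b)
    total = 0.0
    for i in range(steps):
        t = (i + 0.5) / steps
        z = tuple(ai + t * (bi - ai) for ai, bi in zip(a, b))
        total += (seg / steps) / f_L(z, L)
    return total

# Lemma 10 (ii): if ||a - b|| / f_L(a) <= delta, then d^L(a,b) <= delta/(1-delta).
# d^L is an infimum over all paths, so the segment gives an upper bound on it.
L = [(0.0, 0.0)]
a, b = (1.0, 0.0), (1.0, 0.4)
delta = math.dist(a, b) / f_L(a, L)        # = 0.4
upper = segment_induced_length(a, b, L)    # upper bound on d^L(a, b)
bound = delta / (1 - delta)                # = 2/3
```

The discretized segment length comes out near 0.39, well under the guaranteed bound of 2/3; the lemma's bound is loose here because the segment moves away from L.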

We can now state the main theorem relating adaptive samples in the Euclidean metric to uniform samples in the metric induced by a set L.

**Theorem** **1.** Let L and X be compact sets; let $\widehat{X}\subset X$ be a sample; and let $\epsilon \in [0,1)$ be a constant. If $\widehat{X}$ is an ε-sample of X with respect to the distance to L, then ${\mathrm{d}}_{H}^{L}(X,\widehat{X})\le \frac{\epsilon}{1-\epsilon}$. Furthermore, if ${\mathrm{d}}_{H}^{L}(X,\widehat{X})\le \epsilon <\frac{1}{2}$, then $\widehat{X}$ is an $\frac{\epsilon}{1-\epsilon}$-sample of X with respect to the distance to L.

**Proof.** Given $x\in X$, there exists $p\in \widehat{X}$ such that $\Vert x-p\Vert \le \epsilon {f}_{L}(x)$. By Lemma 10, ${\mathrm{d}}^{L}(x,p)\le \frac{\epsilon}{1-\epsilon}$, so for all $x\in X$, ${f}_{\widehat{X}}^{L}(x)\le \frac{\epsilon}{1-\epsilon}$. As $\widehat{X}\subseteq X$, this proves ${\mathrm{d}}_{H}^{L}(\widehat{X},X)\le \frac{\epsilon}{1-\epsilon}$.

Furthermore, ${\mathrm{d}}_{H}^{L}(\widehat{X},X)\le \epsilon <\frac{1}{2}$ implies that for all $x\in X$, ${f}_{\widehat{X}}^{L}(x)\le \epsilon $; thus, there exists $p\in \widehat{X}$ such that ${\mathrm{d}}^{L}(x,p)\le \epsilon $. Thus, by Lemma 10, $\Vert x-p\Vert \le \frac{\epsilon}{1-\epsilon}{f}_{L}(x)$. Since $\epsilon <\frac{1}{2}$, we have $\frac{\epsilon}{1-\epsilon}<1$, so $\widehat{X}$ is an $\frac{\epsilon}{1-\epsilon}$-sample of X. □

#### 3.2. Interleaving

A filtration is a nested family of sets. In this paper, we consider filtrations F parameterized by a real number $\alpha \ge 0$ so that $F(\alpha )\subset {\mathbb{R}}^{d}$, and whenever $\alpha <\beta $, we have $F(\alpha )\subseteq F(\beta )$. Often, our filtrations are sublevel filtrations of a real-valued function $f:{\mathbb{R}}^{d}\to \mathbb{R}$. The sublevel filtration F corresponding to the function f is defined as:

$$ F(\alpha ):=\{x\in {\mathbb{R}}^{d}\mid f(x)\le \alpha \}. $$

**Definition** **5.** A pair of filtrations $(F,G)$ is $({h}_{1},{h}_{2})$-interleaved in an interval $(s,t)$ if $F(r)\subseteq G({h}_{1}(r))$ whenever $r,{h}_{1}(r)\in (s,t)$ and $G(r)\subseteq F({h}_{2}(r))$ whenever $r,{h}_{2}(r)\in (s,t)$. We require the functions ${h}_{1},{h}_{2}$ to be nondecreasing in $(s,t)$.

The following lemma gives us an easy way to combine interleavings.

**Lemma** **11.** If $(F,G)$ is $({h}_{1},{h}_{2})$-interleaved in $({s}_{1},{t}_{1})$ and $(G,H)$ is $({h}_{3},{h}_{4})$-interleaved in $({s}_{2},{t}_{2})$, then $(F,H)$ is $({h}_{3}\circ {h}_{1},{h}_{2}\circ {h}_{4})$-interleaved in $({s}_{3},{t}_{3})$, where ${s}_{3}=max\{{s}_{1},{s}_{2}\}$ and ${t}_{3}=min\{{t}_{1},{t}_{2}\}$.

**Proof.** If $r,{h}_{3}({h}_{1}(r))\in ({s}_{3},{t}_{3})$, then we have $F(r)\subseteq G({h}_{1}(r))\subseteq H({h}_{3}({h}_{1}(r)))$. Similarly, if $r,{h}_{2}({h}_{4}(r))\in ({s}_{3},{t}_{3})$, then $H(r)\subseteq G({h}_{4}(r))\subseteq F({h}_{2}({h}_{4}(r)))$. □
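A toy instance makes the composition concrete. In this sketch (our own, not from the paper), F, G, and H are sublevel filtrations of three shifted functions on an integer grid, so each adjacent pair is interleaved by simple shift maps, and Lemma 11 composes them:

```python
xs = range(101)

def sublevel(shift):
    """Sublevel filtration of f(x) = x - shift over the grid xs."""
    return lambda r: {x for x in xs if x - shift <= r}

F, G, H = sublevel(0), sublevel(10), sublevel(30)
h1 = lambda r: r         # F(r) is contained in G(h1(r))
h2 = lambda r: r + 10    # G(r) is contained in F(h2(r))
h3 = lambda r: r         # G(r) is contained in H(h3(r))
h4 = lambda r: r + 20    # H(r) is contained in G(h4(r))

# Lemma 11: (F, H) is (h3 . h1, h2 . h4)-interleaved.
ok = all(F(r) <= H(h3(h1(r))) and H(r) <= F(h2(h4(r))) for r in (20, 50, 70))
```

The set inclusions hold at every tested scale, and the composed shift `h2(h4(r)) = r + 30` is exactly the offset between F and H.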

#### 3.2.1. Approximating X with $\widehat{X}$

Ultimately, the goal is to relate ${A}_{X}^{L}$, the offsets in the induced metric, to ${B}_{\widehat{X}}^{\widehat{L}}$, the approximate offsets computed from approximations (or samples) to both X and L. This relationship will be given by an interleaving that is built up from an interleaving for each approximation step. For each of the following lemmas, let $L,\widehat{L}\subset {\mathbb{R}}^{d}$ and $X,\widehat{X}\subset {\mathbb{R}}^{d}\backslash (L\cup \widehat{L})$ be compact sets.

**Lemma** **12.** If ${\mathrm{d}}_{H}^{L}(\widehat{X},X)\le \epsilon $, then $({A}_{X}^{L},{A}_{\widehat{X}}^{L})$ are $({h}_{1},{h}_{1})$-interleaved in $(0,\infty )$, where ${h}_{1}(r)=r+\epsilon $.

**Proof.** This lemma is a reinterpretation of Lemma 3 in the interleaving notation. □

#### 3.2.2. Approximating the Induced Metric

It is much easier to use a union of Euclidean balls to model the sublevel sets of the distance function ${f}_{X}^{L}$. Below, we show that this is a reasonable approximation. The following results may also be viewed as a strengthening of the adaptive sampling result of the previous section (Theorem 1).

**Lemma** **13.** The pair $({A}_{\widehat{X}}^{L},{B}_{\widehat{X}}^{L})$ is $({h}_{2},{h}_{2})$-interleaved in $(0,1)$, where ${h}_{2}(r)=\frac{r}{1-r}$.

**Proof.** It will suffice to show that for $r\in [0,1)$, ${A}_{\widehat{X}}^{L}(r)\subseteq {B}_{\widehat{X}}^{L}(\frac{r}{1-r})$, and for $r\in [0,\frac{1}{2})$, ${B}_{\widehat{X}}^{L}(r)\subseteq {A}_{\widehat{X}}^{L}(\frac{r}{1-r})$.

Take $y\in {A}_{\widehat{X}}^{L}(r)$ so that ${f}_{\widehat{X}}^{L}(y)\le r$. Thus, there exists $x\in \widehat{X}$ such that ${\mathrm{d}}^{L}(x,y)\le r$. By Lemma 10, this implies that $\Vert x-y\Vert \le \frac{r}{1-r}{f}_{L}(x)$, which implies that $y\in {B}_{\widehat{X}}^{L}(\frac{r}{1-r})$.

Consider any point $y\in {B}_{\widehat{X}}^{L}(r)$. For some $x\in \widehat{X}$, we have $y\in \mathrm{ball}(x,r{f}_{L}(x))$, so $\Vert x-y\Vert \le r{f}_{L}(x)$. Applying Lemma 10, we have that ${\mathrm{d}}^{L}(x,y)\le \frac{r}{1-r}$. Finally, $y\in {A}_{\widehat{X}}^{L}(\frac{r}{1-r})$, because ${f}_{\widehat{X}}^{L}(y)\le {\mathrm{d}}^{L}(x,y)$. □

#### 3.2.3. Approximating L with $\widehat{L}$

Usually, the set L is unknown at the start and must be estimated from the input. For example, if L is the medial axis of X, there are several known techniques for approximating L by taking some vertices of the Voronoi diagram [5,6]. We would like to give some sampling conditions that allow us to replace L with an approximation $\widehat{L}$. Interestingly, the sampling conditions for $\widehat{X}$ are dual to those used for $\widehat{L}$: we require ${\mathrm{d}}_{H}^{\widehat{X}}(L,\widehat{L})\le \epsilon $. In other words, $\widehat{L}$ must be an adaptive sample with respect to the distance to $\widehat{X}$.

**Lemma** **14.** If ${\mathrm{d}}_{H}^{\widehat{X}}(L,\widehat{L})\le \delta <1$, then $({B}_{\widehat{X}}^{L},{B}_{\widehat{X}}^{\widehat{L}})$ is $({h}_{3},{h}_{3})$-interleaved in $(0,\infty )$, where ${h}_{3}(r)=\frac{r}{1-\delta}$.

**Proof.** Fix any $x\in {B}_{\widehat{X}}^{L}(r)$. There is a point $p\in \widehat{X}$ such that $\frac{\Vert x-p\Vert}{{f}_{L}(p)}\le r$. Moreover, there is a nearest point $z\in \widehat{L}$ to p such that ${f}_{\widehat{L}}(p)=\Vert p-z\Vert $. Lemma 10 and the assumption that ${\mathrm{d}}_{H}^{\widehat{X}}(L,\widehat{L})\le \delta $ together imply that there exists $y\in L$ such that:

$$ \Vert y-z\Vert \le \frac{\delta }{1-\delta }{f}_{\widehat{X}}(z). $$

It then follows from the definitions that:

$$ {f}_{\widehat{X}}(z)\le \Vert p-z\Vert ={f}_{\widehat{L}}(p). $$

Therefore, we can bound ${f}_{L}(p)$ in terms of ${f}_{\widehat{L}}(p)$ as follows.

$$ {f}_{L}(p)\le \Vert p-z\Vert +\Vert z-y\Vert \le {f}_{\widehat{L}}(p)+\frac{\delta }{1-\delta }{f}_{\widehat{L}}(p)=\frac{{f}_{\widehat{L}}(p)}{1-\delta } $$

Hence, $\frac{\Vert x-p\Vert}{{f}_{\widehat{L}}(p)}\le \frac{\Vert x-p\Vert}{(1-\delta ){f}_{L}(p)}\le \frac{r}{1-\delta}={h}_{3}(r)$. Therefore, $x\in {B}_{\widehat{X}}^{\widehat{L}}({h}_{3}(r))$, and so, we conclude that ${B}_{\widehat{X}}^{L}(r)\subseteq {B}_{\widehat{X}}^{\widehat{L}}({h}_{3}(r))$. The proof that ${B}_{\widehat{X}}^{\widehat{L}}(r)\subseteq {B}_{\widehat{X}}^{L}({h}_{3}(r))$ is symmetric. □

#### 3.2.4. Putting It All Together

We can now use Lemma 11 to combine the interleavings established in Lemmas 12–14.

**Theorem** **2.** Let $L,\widehat{L}\subset {\mathbb{R}}^{d}$ and $X,\widehat{X}\subset {\mathbb{R}}^{d}\backslash (L\cup \widehat{L})$ be compact sets. If ${\mathrm{d}}_{H}^{\widehat{X}}(L,\widehat{L})\le \delta <1$ and ${\mathrm{d}}_{H}^{L}(\widehat{X},X)\le \epsilon <1$, then $({A}_{X}^{L},{B}_{\widehat{X}}^{\widehat{L}})$ are $({h}_{4},{h}_{5})$-interleaved in $(0,1)$, where ${h}_{4}(r)=\frac{r+\epsilon}{(1-r-\epsilon )(1-\delta )}$ and ${h}_{5}(r)=\frac{r}{1-\delta -r}+\epsilon $.

**Proof.** We use Lemma 11 to combine the interleavings from Lemmas 12–14 to conclude that the pair $({A}_{X}^{L},{B}_{\widehat{X}}^{\widehat{L}})$ is $({h}_{3}\circ {h}_{2}\circ {h}_{1},{h}_{1}\circ {h}_{2}\circ {h}_{3})$-interleaved in $(0,1)$. To complete the proof, we expand ${h}_{3}\circ {h}_{2}\circ {h}_{1}$ and ${h}_{1}\circ {h}_{2}\circ {h}_{3}$ as follows.

$$ ({h}_{3}\circ {h}_{2}\circ {h}_{1})(r)={h}_{3}\left(\frac{r+\epsilon }{1-r-\epsilon }\right)=\frac{r+\epsilon }{(1-r-\epsilon )(1-\delta )} $$

$$ ({h}_{1}\circ {h}_{2}\circ {h}_{3})(r)={h}_{1}\left(\frac{r}{1-\delta -r}\right)=\frac{r}{1-\delta -r}+\epsilon $$

Therefore, we have that ${h}_{4}(r)=\frac{r+\epsilon}{(1-r-\epsilon )(1-\delta )}$ and ${h}_{5}(r)=\frac{r}{1-\delta -r}+\epsilon $. □
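The closed forms can be double-checked numerically. In this sketch (ours, with arbitrary small values of ε and δ), we compose the interleaving maps of Lemmas 12–14 and compare against the formulas in Theorem 2:

```python
# Compose h1, h2, h3 (Lemmas 12-14) and check the closed forms h4, h5
# claimed in Theorem 2. eps and delta are arbitrary values in (0, 1).
eps, delta = 0.05, 0.1

h1 = lambda r: r + eps              # Lemma 12
h2 = lambda r: r / (1 - r)          # Lemma 13
h3 = lambda r: r / (1 - delta)      # Lemma 14

h4 = lambda r: (r + eps) / ((1 - r - eps) * (1 - delta))
h5 = lambda r: r / (1 - delta - r) + eps

# The compositions agree with the closed forms at several scales.
for r in (0.1, 0.2, 0.3):
    assert abs(h3(h2(h1(r))) - h4(r)) < 1e-12
    assert abs(h1(h2(h3(r))) - h5(r)) < 1e-12
```

Note that ε enters via $h_1$ and δ via $h_3$, which is why $h_4$ carries $(1-\delta)$ in the denominator and $h_5$ ends with $+\epsilon$.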

#### 3.3. Smooth Adaptive Distance and Homology Inference

In the preceding sections, we showed how to approximate (via interleaving) ${A}_{X}^{L}$, the sublevels of the distance to X in the induced metric, using a finite set of Euclidean balls, ${B}_{\widehat{X}}^{\widehat{L}}$. Now, we show how and when such an approximation gives a guarantee about the underlying space X itself. This is substantially more difficult, because it requires us to relate the sublevels of the induced metric to an object we do not have direct access to. As such, we will require some stronger hypotheses.

First, we review the critical point theory of distance functions. Then, we show how to smooth the induced metric to an arbitrarily close Riemannian metric, rendering the critical point theory applicable. Finally, we put these together to prove the main inference result of the paper, Theorem 3.

#### 3.3.1. Critical Points of Distance Functions

In this section, we give a minimal presentation of the critical point theory of distance functions to explain and motivate the results about interleaving offsets of distance functions in Riemannian manifolds. The main fact we use is that such interleavings lead immediately to results about homology inference (Lemma 16).

For a smooth Riemannian manifold M and a compact subset $X\subset M$, one can consider the function ${f}_{X}:M\to \mathbb{R}$ that maps each point in M to the distance to its nearest point in X as measured by the metric on the manifold. The gradient of ${f}_{X}$ can be defined on M, and the critical points are those points for which the gradient is zero. The critical values of ${f}_{X}$ are those values of r such that ${f}_{X}^{-1}(r)$ contains a critical point. The critical point theory of distance functions developed by Grove and others [11] extends the ideas from Morse theory to such distance functions. In particular, the theory gives the following result.

**Lemma** **15** (Grove [11]). If $[r,{r}^{\prime}]$ contains no critical values, then ${f}_{X}^{-1}([0,r])\hookrightarrow {f}_{X}^{-1}([0,{r}^{\prime}])$ is a homotopy equivalence.

This means that for intervals that do not contain critical values, the inclusion maps in the filtration $F(r):={f}_{X}^{-1}([0,r])$, $r\ge 0$, are all homotopy equivalences and therefore induce isomorphisms in homology. This is used to give some information about the homology of filtrations that are interleaved with F.

We write ${H}_{*}$ to denote homology over a field. Therefore, for a set $X\subseteq {\mathbb{R}}^{d}$, we have a vector space ${H}_{*}(X)$, and for a continuous map $f:X\to Y$, we have a linear map ${H}_{*}(f)$. For the canonical inclusion map $X\hookrightarrow Y$ for a subset $X\subseteq Y$, we will denote the corresponding linear map in homology as ${H}_{*}(X\hookrightarrow Y)$. The image of this map is denoted $\mathrm{im}\phantom{\rule{0.277778em}{0ex}}{H}_{*}(X\hookrightarrow Y)$.

**Lemma** **16.** Let ${f}_{X}$ be the distance function to a compact set in a Riemannian manifold such that $[r,{r}^{\prime}]$ contains no critical values of ${f}_{X}$. Let F be the sublevel filtration of ${f}_{X}$, and let G be a filtration such that $(F,G)$ are $({h}_{1},{h}_{2})$-interleaved in $(r,{r}^{\prime})$. If $({h}_{2}\circ {h}_{1}\circ {h}_{2}\circ {h}_{1})(r)<{r}^{\prime}$, then:

$$ \mathrm{im}\phantom{\rule{0.277778em}{0ex}}{H}_{*}(G({h}_{1}(r))\hookrightarrow G(({h}_{1}\circ {h}_{2}\circ {h}_{1})(r)))\cong {H}_{*}(F(r)). $$

**Proof.** The interleaving and the hypotheses imply that we have the following inclusions.

$$ F(r)\subseteq G({h}_{1}(r))\subseteq F(({h}_{2}\circ {h}_{1})(r))\subseteq G(({h}_{1}\circ {h}_{2}\circ {h}_{1})(r))\subseteq F(({h}_{2}\circ {h}_{1}\circ {h}_{2}\circ {h}_{1})(r)) $$

The preceding lemma implies that the maps $F(r)\hookrightarrow F(({h}_{2}\circ {h}_{1})(r))$, $F(({h}_{2}\circ {h}_{1})(r))\hookrightarrow F(({h}_{2}\circ {h}_{1}\circ {h}_{2}\circ {h}_{1})(r))$, and $F(r)\hookrightarrow F(({h}_{2}\circ {h}_{1}\circ {h}_{2}\circ {h}_{1})(r))$ all induce isomorphisms in homology. It follows that $\mathrm{im}\phantom{\rule{0.277778em}{0ex}}{H}_{*}(G({h}_{1}(r))\hookrightarrow G(({h}_{1}\circ {h}_{2}\circ {h}_{1})(r)))\cong {H}_{*}(F(r))$, because the inclusion of spaces in G is factored through a space in F, and it factors an inclusion of spaces, all of which are isomorphic in homology. □

#### 3.3.2. Smoothing the Metric

To apply the critical point theory of distance functions to the induced metric directly, we would need it to be a smooth Riemannian manifold. Although it is not smooth, we can smooth it with an arbitrarily small change. The process, though a little technical, is not surprising, nor very difficult. It proceeds in three steps.

- (i) We smooth the distance to L, which is the source of non-smoothness in the induced metric. This replaces ${f}_{L}$ with a smooth approximation, $\tilde{{f}_{L}}$.

- (ii) The smoothed distance to L is used to define the smoothed induced metric $\tilde{{\mathrm{d}}^{L}}$ analogously to the original construction of ${\mathrm{d}}^{L}$.

- (iii) The induced distance function ${f}_{X}^{L}$ can then be replaced by its smoothed version $\tilde{{f}_{X}^{L}}$, and the corresponding smoothed offsets $\tilde{{A}_{X}^{L}}$ are then well defined.

The complete construction of the smoothed offsets is presented in Appendix A. The end result is an interleaving between the induced offsets ${A}_{X}^{L}$ and the smoothed version $\tilde{{A}_{X}^{L}}$ as expressed in the following lemma.

**Lemma** **17.** Given $\alpha ,\beta \in (0,1)$, consider compact sets $\widehat{L}\subseteq L\subset {\mathbb{R}}^{d}$ and compact sets $\widehat{X}\subseteq X\subset {\mathbb{R}}^{d}\backslash {L}^{\beta}$ such that ${\mathrm{d}}_{H}^{\widehat{X}}(L,\widehat{L})\le \delta <1$ and ${\mathrm{d}}_{H}^{L}(\widehat{X},X)\le \epsilon <1$. Then, $({\tilde{A}}_{X}^{L},{B}_{\widehat{X}}^{\widehat{L}})$ are $({h}_{8},{h}_{9})$-interleaved in $(0,1)$, where ${h}_{8}(r)=\frac{r+\alpha r+\epsilon}{(1-r-r\alpha -\epsilon )(1-\delta )}$ and ${h}_{9}(r)=\frac{r}{(1-\alpha )(1-\delta -r)}+\frac{\epsilon}{1-\alpha}$.

#### 3.3.3. The Weak Feature Size

Chazal and Lieutier [19] introduced the weak feature size ($\mathrm{wfs}$) as the least positive critical value of a Riemannian distance function. We denote the weak feature size with respect to $\tilde{{f}_{X}^{L}}(\cdot )$ as ${\mathrm{wfs}}^{L}(X)$. In light of the critical point theory of distance functions, a bound on the weak feature size gives a guaranteed interval with no critical points. This allows one to infer the homology from another filtration (usually one that is discrete and built from data) as long as the second filtration is interleaved in that critical-point-free interval.

**Lemma** **18** (Adapted from [19] Theorem 4.2; see also [20]). Let S and $\widehat{S}$ be compact subsets of ${\mathbb{R}}^{d}$. If ${\mathrm{d}}_{H}(S,\widehat{S})<\epsilon $ and $\mathrm{wfs}(S)>4\epsilon $, then for all sufficiently small $\eta >0$,

$$ {H}_{*}({A}_{S}(\eta ))\cong \mathrm{im}\phantom{\rule{0.277778em}{0ex}}{H}_{*}({A}_{\widehat{S}}(\epsilon )\hookrightarrow {A}_{\widehat{S}}(3\epsilon )). $$

The key idea in that proof is that the Hausdorff bound gives an interleaving, while the weak feature size bound gives the interval without critical points. The technical condition regarding $\eta $ is present to account for strange compact sets that may be homologically different from their arbitrarily small offsets. It is reasonable to assume that for some sufficiently small $\eta $, ${H}_{*}({A}_{S}(\eta ))\cong {H}_{*}(S)$, and thus, one could “compute” the homology of S using only the sample $\widehat{S}$.
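The hypothesis of Lemma 18 is a Euclidean Hausdorff bound, which is simple to evaluate for finite samples. The helper below is our own illustration, applied to a dense and a sparse sample of the unit circle:

```python
import math

def hausdorff(S, S_hat):
    """Euclidean Hausdorff distance between finite point sets: the max
    over both sets of the distance to the nearest point of the other."""
    d = lambda a, B: min(math.dist(a, b) for b in B)
    return max(max(d(s, S_hat) for s in S), max(d(p, S) for p in S_hat))

def circle(n):
    """n equally spaced points on the unit circle."""
    return [(math.cos(2 * math.pi * k / n), math.sin(2 * math.pi * k / n))
            for k in range(n)]

S, S_hat = circle(100), circle(10)
h = hausdorff(S, S_hat)   # roughly 2*sin(pi/10 / 2), about 0.313
```

Here the sparse sample is a subset of the dense one, so the distance is realized by dense points midway between consecutive sparse points.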

Most previous uses of the weak feature size have been applied in Euclidean spaces, but the critical point theory of distance functions can be applied more broadly to other smooth Riemannian manifolds. This is why we introduced it as ${\mathrm{wfs}}^{L}$ (with the superscript) to indicate the underlying metric.

#### 3.3.4. Homology Inference

We have now introduced all the necessary pieces to prove our main homology inference result.

**Theorem** **3.** Given $\alpha ,\beta \in (0,1)$, consider compact sets $\widehat{L}\subseteq L\subset {\mathbb{R}}^{d}$ and compact sets $\widehat{X}\subseteq X\subset {\mathbb{R}}^{d}\backslash {L}^{\beta}$, such that ${\mathrm{d}}_{H}^{\widehat{X}}(L,\widehat{L})\le \delta <1$ and ${\mathrm{d}}_{H}^{L}(\widehat{X},X)\le \epsilon <1$. Define the real-valued functions $\mathsf{\Psi}$ and $\mathsf{\Phi}$ as:

$$ \mathsf{\Psi}(r):={h}_{8}(r)=\frac{r+\alpha r+\epsilon }{(1-r-r\alpha -\epsilon )(1-\delta )}\phantom{\rule{1em}{0ex}}\text{and}\phantom{\rule{1em}{0ex}}\mathsf{\Phi}(r):={h}_{9}(r)=\frac{r}{(1-\alpha )(1-\delta -r)}+\frac{\epsilon }{1-\alpha }. $$

Given any $\eta >0$, such that $\mathsf{\Phi}\mathsf{\Psi}\mathsf{\Phi}\mathsf{\Psi}(\eta )<1$, if ${\mathrm{wfs}}^{L}(X)>\mathsf{\Phi}\mathsf{\Psi}\mathsf{\Phi}\mathsf{\Psi}(\eta )$, then:

$$ {H}_{*}({\tilde{A}}_{X}^{L}(\eta ))\cong \mathrm{im}\phantom{\rule{0.277778em}{0ex}}{H}_{*}({B}_{\widehat{X}}^{\widehat{L}}(\mathsf{\Psi}(\eta ))\hookrightarrow {B}_{\widehat{X}}^{\widehat{L}}(\mathsf{\Psi}\mathsf{\Phi}\mathsf{\Psi}(\eta ))). $$

**Proof.** Given $\eta >0$ such that $\mathsf{\Phi}\mathsf{\Psi}\mathsf{\Phi}\mathsf{\Psi}(\eta )<1$, we have the following sequence of inclusions as a result of Lemma 17.

$$ {\tilde{A}}_{X}^{L}(\eta )\stackrel{a}{\hookrightarrow }{B}_{\widehat{X}}^{\widehat{L}}(\mathsf{\Psi}(\eta ))\stackrel{b}{\hookrightarrow }{\tilde{A}}_{X}^{L}(\mathsf{\Phi}\mathsf{\Psi}(\eta ))\stackrel{c}{\hookrightarrow }{B}_{\widehat{X}}^{\widehat{L}}(\mathsf{\Psi}\mathsf{\Phi}\mathsf{\Psi}(\eta ))\stackrel{d}{\hookrightarrow }{\tilde{A}}_{X}^{L}(\mathsf{\Phi}\mathsf{\Psi}\mathsf{\Phi}\mathsf{\Psi}(\eta )) $$

As we assume that ${\mathrm{wfs}}^{L}(X)>\mathsf{\Phi}\mathsf{\Psi}\mathsf{\Phi}\mathsf{\Psi}(\eta )$, by the definition of the weak feature size, Lemma 16 implies that the inclusions $b\circ a$ and $d\circ c$ are homotopy equivalences. We remind the reader that if two spaces are homotopy equivalent, all the induced homology maps between the spaces are isomorphisms. By applying homology to each space and inclusion in the previous sequence, we have the following sequence of homology groups, where ${b}_{*}\circ {a}_{*}$ and ${d}_{*}\circ {c}_{*}$ are isomorphisms.

$$ {H}_{*}({\tilde{A}}_{X}^{L}(\eta ))\stackrel{{a}_{*}}{\to }{H}_{*}({B}_{\widehat{X}}^{\widehat{L}}(\mathsf{\Psi}(\eta )))\stackrel{{b}_{*}}{\to }{H}_{*}({\tilde{A}}_{X}^{L}(\mathsf{\Phi}\mathsf{\Psi}(\eta )))\stackrel{{c}_{*}}{\to }{H}_{*}({B}_{\widehat{X}}^{\widehat{L}}(\mathsf{\Psi}\mathsf{\Phi}\mathsf{\Psi}(\eta )))\stackrel{{d}_{*}}{\to }{H}_{*}({\tilde{A}}_{X}^{L}(\mathsf{\Phi}\mathsf{\Psi}\mathsf{\Phi}\mathsf{\Psi}(\eta ))) $$

The aforementioned isomorphisms ${b}_{*}\circ {a}_{*}$ and ${d}_{*}\circ {c}_{*}$ factor through ${H}_{*}({B}_{\widehat{X}}^{\widehat{L}}(\mathsf{\Psi}(\eta )))$ and ${H}_{*}({B}_{\widehat{X}}^{\widehat{L}}(\mathsf{\Psi}\mathsf{\Phi}\mathsf{\Psi}(\eta )))$, respectively, proving that ${b}_{*}$ is surjective and ${c}_{*}$ is injective. We then have that ${H}_{*}({\tilde{A}}_{X}^{L}(\eta ))\cong {H}_{*}({\tilde{A}}_{X}^{L}(\mathsf{\Phi}\mathsf{\Psi}(\eta )))\cong \mathrm{im}\phantom{\rule{0.277778em}{0ex}}{b}_{*}\cong \mathrm{im}\phantom{\rule{0.277778em}{0ex}}({c}_{*}\circ {b}_{*})$. □

#### 3.3.5. Computing the Homology

The last step is to relate the smoothed offsets to something that can be computed. It will generally be the case that the approximation $\widehat{X}$ of X is not just compact, but also finite. Then, for any scale $\alpha \ge 0$, we have that ${B}_{\widehat{X}}^{\widehat{L}}(\alpha )$ is the union of a finite set of Euclidean balls.

The nerve theorem provides a natural way to compute the homology of a union of Euclidean balls. The nerve of a collection U of sets is the set of all subsets of U that have a nonempty intersection. It has the structure of a simplicial complex, whose homology can be directly computed by standard matrix reduction algorithms. When all nonempty intersections are contractible, the cover is said to be good. A cover by Euclidean balls (or any convex shape) is always good. For good covers, the nerve theorem, a standard result in algebraic topology [21], implies that:

$$ {H}_{*}\left(\bigcup _{u\in U}u\right)\cong {H}_{*}(\mathrm{Nrv}\phantom{\rule{0.166667em}{0ex}}U). $$

This is the most basic way to compute the homology of a union of balls and is used throughout topological data analysis.
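For Euclidean balls, a pairwise intersection is just a distance test, so the vertices and edges of the nerve can be assembled directly. The sketch below is our own illustration and builds only the 1-skeleton, since testing higher-order ball intersections takes more care than a distance comparison:

```python
import math
from itertools import combinations

def nerve_edges(centers, radii):
    """Vertices and edges of the nerve of a set of Euclidean balls:
    one vertex per ball, and an edge for each pair of balls whose
    intersection is nonempty (center distance <= sum of radii).
    Higher simplices are omitted in this sketch."""
    verts = list(range(len(centers)))
    edges = [
        (i, j) for i, j in combinations(verts, 2)
        if math.dist(centers[i], centers[j]) <= radii[i] + radii[j]
    ]
    return verts, edges

# Three balls on a line: only the first two overlap.
centers = [(0.0, 0.0), (1.0, 0.0), (3.0, 0.0)]
radii = [0.6, 0.6, 0.6]
verts, edges = nerve_edges(centers, radii)
```

On this input the nerve's 1-skeleton has three vertices and the single edge joining the two overlapping balls, reflecting that the union has two connected components.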

In our case, we are not just computing the homology of the union, but also the homology of the inclusion map. This computation will require a slightly stronger result. The persistent nerve lemma [20], applied to Diagram (4) when combined with the above isomorphisms, yields the following, where ${U}_{\alpha}$ denotes the cover of ${B}_{\widehat{X}}^{\widehat{L}}(\alpha )$ by its defining Euclidean balls.

$$ \mathrm{im}\phantom{\rule{0.277778em}{0ex}}{H}_{*}({B}_{\widehat{X}}^{\widehat{L}}(\mathsf{\Psi}(\eta ))\hookrightarrow {B}_{\widehat{X}}^{\widehat{L}}(\mathsf{\Psi}\mathsf{\Phi}\mathsf{\Psi}(\eta )))\cong \mathrm{im}\phantom{\rule{0.277778em}{0ex}}{H}_{*}(\mathrm{Nrv}\phantom{\rule{0.166667em}{0ex}}{U}_{\mathsf{\Psi}(\eta )}\hookrightarrow \mathrm{Nrv}\phantom{\rule{0.166667em}{0ex}}{U}_{\mathsf{\Psi}\mathsf{\Phi}\mathsf{\Psi}(\eta )}) $$

This last statement turns the isomorphism into an algorithm, because standard algorithms [

22] can compute the homology of the inclusion of the nerves.

## 4. Conclusions

We present an alternative metric in Euclidean space that connects adaptive sampling and uniform sampling. We show how to apply classical results from the critical point theory of distance functions to infer topological properties of the underlying space from such samples. This provides a connection between methods in surface reconstruction (based on adaptive sampling) and homology inference (based on uniform sampling).

We show in Theorem 1 that there is a precise relationship between samples taken uniformly with respect to ${\mathrm{d}}^{L}$ at some scale and those same samples being adaptive in the Euclidean metric. In Theorem 2, we show that we can interleave the sublevel sets of our distance function under this alternative metric with the metric balls resulting from our approximation of the metric, assuming that both $\widehat{X}$ and $\widehat{L}$ are uniformly well sampled with respect to the Hausdorff distances induced by ${\mathrm{d}}^{L}$ and ${\mathrm{d}}^{\widehat{X}}$. Finally, we show how to fully extend the critical point theory of distance functions and the weak feature size to give theoretical guarantees on homology inference from finite samples of X and L using the induced metric (Theorem 3).

The main limitation of adaptive metrics is that they require two sets as input, one to define the set and one to define the metric. In many instances, this is not available. However, we expect that the approach could find wider use in problems with labeled data. For example, data with binary labels may be viewed as the two sets X and L. Each set then induces a metric on the other, scaled according to the distance to it. This is the subject of ongoing and future work.