Article

Deformation Analysis Using B-Spline Surface with Correlated Terrestrial Laser Scanner Observations—A Bridge Under Load

1 Geodetic Institute, Leibniz Universität Hannover, Nienburger Str. 1, 30167 Hannover, Germany
2 Institut für Geoinformation und Vermessung Dessau, Anhalt University of Applied Sciences, Seminarplatz 2a, 06846 Dessau, Germany
* Author to whom correspondence should be addressed.
Remote Sens. 2020, 12(5), 829; https://doi.org/10.3390/rs12050829
Submission received: 6 February 2020 / Revised: 27 February 2020 / Accepted: 2 March 2020 / Published: 4 March 2020

Abstract
The choice of an appropriate metric is mandatory to perform deformation analysis between two point clouds (PC)—the distance has to be trustworthy and, simultaneously, robust against measurement noise, which may be correlated and heteroscedastic. The Hausdorff distance (HD) and its averaged version (AHD) are widely used to compute local distances between two PC and are implemented in nearly all commercial software. Unfortunately, they are affected by measurement noise, particularly when correlations are present. In this contribution, we focus on terrestrial laser scanner (TLS) observations and assess the impact of neglecting correlations on the distance computation when a mathematical approximation is performed. The results of the simulations are extended to real observations from a bridge under load. Highly accurate laser tracker (LT) measurements were available for this experiment: they allow the comparison of the HD and AHD between two raw PC, or between their mathematical approximations, with respect to reference values. Based on these results, we determine which distance is better suited for local deformation analysis in the case of heteroscedastic and correlated TLS observations. Finally, we set up a novel bootstrap testing procedure for this distance when the PC are approximated with B-spline surfaces.


1. Introduction

Computing the distance between two objects is an important task in domains such as shape registration [1], shape approximation and simplification [2] or pattern recognition [3]. In the field of engineering geodesy, the distance between objects recorded at different times allows the estimation of deformation magnitudes [4] and their associated risks (see, e.g., Reference [5] for bridges, Reference [6] for dams and Reference [7] for risk management).
The raw point clouds (PC) from a static or kinematic terrestrial laser scanner (TLS) can be analyzed in commercial software. Provided that a registration of the PC is performed (e.g., Reference [8]), maps of deformation magnitudes are formed by building the difference between the PC recorded at two different epochs and allow visualization of the corresponding strength of deformation. The metric to compute the distance between PC is usually based on cloud-to-cloud (C2C), cloud-to-mesh (C2M) or mesh-to-mesh (M2M) strategies. The Multiscale Model to Model Cloud Comparison (M3C2), implemented in CloudCompare, is one possibility for assessing signed distances by smoothing the PC in a predefined zone [9]. The reader can refer to Reference [10] for a description of the different methods used in commercial software. All results remain dependent on the quality of the raw TLS observations. Exemplarily:
  • the presence of noise will significantly affect the detection of the closest point in the second epoch with the C2C algorithm. Different variances are due to the scanning geometry [11] or properties of the objects scanned [12,13];
  • correlations between range measurements influence the deformation magnitude computed by reducing the number of observations available [14,15,16]. They may impact the M3C2 algorithm, depending on the radii chosen for the computation [17].
One way to avoid the related under- or overdetermination of the distance is to approximate the PC with a parametric model [18,19]. This strategy is similar to noise filtering of the raw observations and involves two steps:
  • choice of the mathematical approximation of the PC. In the field of geodesy, the regression B-spline approximation, as introduced by Reference [20], allows great flexibility in modelling raw TLS observations: no predetermined geometric primitives, such as circles, planes or cylinders, restrict the fitting [21]. Other strategies exist, such as penalized splines [22] or spline patches [23]; they seem less suitable for applications with noisy and scattered observations from TLS PC (please refer to Reference [24] for a short review of the different methods);
  • choice of the distance [25]. The distance chosen has to fulfil certain conditions, such as being robust against noise and outliers to ensure its trustworthiness, particularly when the objects are close to each other [26]. Furthermore, it should correspond to the problem under consideration; that is, shape recognition or image comparison may require a different definition than object matching applications [27]. When a complex object is modelled, maps that allow for a visualization of pointwise deformation magnitudes to detect changes are more relevant than a global measure of distance [28]. Distances based on the maximum norm of parametric representations may not estimate the real distance correctly [29] and cannot be applied to piecewise algebraic spline curves [30]. An alternative is the widely used Hausdorff distance (HD), which can estimate either the distance between two raw PC or between their B-spline approximations [31]. Unfortunately, the traditional HD only provides a global measure of the distance and is known to be sensitive to outliers. Alternatives were proposed, including the Hausdorff quantile [32], variants for close objects [26] and for the specific case of B-spline curves [33], spatially coherent matching [34] and the averaged Hausdorff distance (AHD) [27].
Parametric approximations of PC and mathematical definitions of distances are often associated with an increase of complexity. Fortunately, the additional effort involved in performing a deformation analysis with mathematical approximations of TLS PC rather than with the raw observations is worthwhile. In this contribution, we aim to convince practitioners of this by answering the following three scientific questions:
(1) Is a mathematical approximation of the noisy TLS PC beneficial for a trustworthy distance computation?
(2) How does correlated noise affect the distance between mathematical surfaces? Which metric is better suited in the case of correlated observations?
(3) Which specific statistical test has to be applied when testing for deformation based on a distance between mathematical approximations of TLS PC?
We will build our answers for (1) and (2) on both simulations and real data analyses and focus on local approximation. In a first step, we will simulate the PC of TLS raw observations in polar co-ordinates. They are known to be both heteroscedastic ([12,35]) and correlated ([15,36]). Thus, we will extend the stochastic model of TLS measurements using a separable covariance function to simulate correlated range observations ([37,38]). We will use Monte Carlo (MC) simulations to generate random vectors for the correlated noise by means of a Cholesky factorization. This allows us to analyze the impact of correlations on the HD-based distances in a general case, as well as to determine which mathematical distance is the most trustworthy in the case of correlations.
In a second step, we will confirm the simulation results using small surfaces of real data from a bridge under load. We will draw a parallel between the correlations and density reduction of the PC via gridding and show how gridding positively affects the distances computed with a mathematical approximation of the raw TLS observations. Reference deformation magnitudes are available from a pointwise laser tracker (LT) for the sake of comparison.
The estimation of the deformation magnitude itself is meaningless without assessing its significance, that is, answering the question (3): “can the null-hypothesis that no deformation occurs be rejected or not?”. In this contribution, we propose a novel and specific test strategy based on the HD or AHD between mathematical approximations. Because the distribution of the test statistics is not tractable, we will combine MC simulations with a bootstrapping approach to validate a rigorous test procedure for testing deformation.
The remainder of the contribution is as follows: in Section 2, we introduce the mathematical approximation of curves and surfaces using B-splines basis functions, focusing on the regression B-splines. The HD and AHD will be presented. In Section 3, the theoretical derivations will be applied and validated by means of simulations of PC with different correlated noises for a null-deformation case. Section 4 is dedicated to a real case study of surface fitting using TLS observations from a bridge under load. A comparison between the deformation magnitudes obtained by gridding the raw PC and the values provided by the LT will highlight the potential of the AHD, provided that a local approximation is performed with a reduction in point density. The specific bootstrap testing procedure for the AHD is described in Appendix B.

2. Mathematical Background

2.1. Approximation of Observations with B-Splines Basis Functions

2.1.1. B-Spline Curves

We start with the fundamental problem of having m observed points that we wish to approximate by a smooth curve thanks to an efficient and numerically stable method ([20,39]). In this contribution, we will make use of the B-spline basis, which offers powerful local control through the control points (CP). We define the corresponding curve $C(x)$ as:
$$C(x) = \sum_{i=1}^{n} B_{i,d}(x)\, p_i, \qquad (1)$$
where $p_i$ is the $i$th CP from a total of $n$ and $B_{i,d} = B_{i,d,t}$ is the B-spline basis function of degree $d$, depending on the non-decreasing sequence of real numbers $t = (t_i)_{i=1}^{n+d+1}$, called knots. $x$ is an independent variable, which varies in the interval $[t_i, t_{i+1})$. $B_{i,d}$ is given by the recurrence relation [39]:
$$B_{i,d}(x) = \frac{x - t_i}{t_{i+d} - t_i}\, B_{i,d-1}(x) + \frac{t_{i+d+1} - x}{t_{i+d+1} - t_{i+1}}\, B_{i+1,d-1}(x),$$
starting with
$$B_{i,0}(x) = \begin{cases} 1, & x \in [t_i, t_{i+1}) \\ 0, & \text{else}, \end{cases}$$
with the convention that anything divided by zero is zero.
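For illustration, the recurrence can be implemented directly. The following minimal Python sketch (not the authors' implementation) evaluates a single basis function with the 0/0 = 0 convention; the function name, zero-based indexing and the example knot vector are assumptions chosen for demonstration only.

```python
import numpy as np

def bspline_basis(i, d, t, x):
    """Evaluate the B-spline basis function B_{i,d,t}(x) by the recurrence
    relation, with the convention 0/0 = 0. Indices are zero-based here,
    unlike the one-based notation of the text."""
    if d == 0:
        return 1.0 if t[i] <= x < t[i + 1] else 0.0
    left, right = 0.0, 0.0
    if t[i + d] > t[i]:                  # skip the term when the denominator vanishes
        left = (x - t[i]) / (t[i + d] - t[i]) * bspline_basis(i, d - 1, t, x)
    if t[i + d + 1] > t[i + 1]:
        right = (t[i + d + 1] - x) / (t[i + d + 1] - t[i + 1]) * bspline_basis(i + 1, d - 1, t, x)
    return left + right

# Example: cubic basis functions (d = 3) on a clamped knot vector
d = 3
t = np.array([0, 0, 0, 0, 0.5, 1, 1, 1, 1])   # n = 5 basis functions
x = 0.3
values = [bspline_basis(i, d, t, x) for i in range(len(t) - d - 1)]
print(values, sum(values))                     # partition of unity: the sum is 1
```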

2.1.2. Approximation of Scattered Points with B-Spline Curves

The Least-Square Problem

Observations in real applications are the result of noisy measurements. We solve the approximation problem in the least-squares (LS) sense through the following minimization problem [20]:
$$\min_{C \in \mathbb{S}_{d,t}} \sum_{i=1}^{m} w_i \left( y_i - C(x_i) \right)^2, \qquad (2)$$
with $C(x)$ being an element of $\mathbb{S}_{d,t}$, the linear space of all linear combinations of B-splines defined by $\mathbb{S}_{d,t} = \mathrm{span}\{B_{1,d}, \ldots, B_{n,d}\}$; $\min$ means minimum, $y = [x_i, y_i]_{i=1}^{m}$ is the observation vector with $x_1 < \cdots < x_m$ and $w_i$ the $i$th corresponding weight. In a more general matrix form, (2) is equivalent to finding the vector of length $n$ of CP $p = [p_1, \ldots, p_n]$ that solves the linear weighted LS problem:
$$\min_{p \in \mathbb{R}^n} \left\| A p - b \right\|_{\Sigma}^2, \qquad (3)$$
with $\|\cdot\|$ being the usual Euclidean norm. The elements of the matrix $A \in \mathbb{R}^{m \times n}$ are $a_{i,j} = B_{j,d}(x_i)$, and $b_i = y_i$ are the components of $b \in \mathbb{R}^m$. $\Sigma$ is the variance-covariance matrix (VCM) of the error term $v = A p - b$.
The solution of (3) gives the estimated CP vector $\hat{p}_0$ by means of
$$\hat{p}_0 = (A^T \Sigma^{-1} A)^{-1} A^T \Sigma^{-1} b, \qquad (4)$$
with $\cdot^T$ denoting the transpose. We have $E(\hat{p}_0) = p$, $E(\cdot)$ being the expectation operator. Since the true $\Sigma$ is unknown, the feasible LS estimator $\hat{p} = (A^T \hat{\Sigma}^{-1} A)^{-1} A^T \hat{\Sigma}^{-1} b$ is used in practice, where $\hat{\Sigma}$ is an estimate of $\Sigma$.
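A minimal sketch of the weighted LS estimation of the CP, Equation (4), is given below. It reuses the hypothetical `bspline_basis` helper from the previous sketch, assumes the observations lie strictly inside the knot range, and is an illustration under those assumptions rather than the implementation used in the paper.

```python
import numpy as np

def fit_bspline_curve(x, y, t, d, Sigma):
    """Weighted LS estimate of the CP: p_hat = (A^T S^-1 A)^-1 A^T S^-1 b,
    where Sigma is the (possibly fully populated) VCM of the observations y."""
    m, n = len(x), len(t) - d - 1
    A = np.array([[bspline_basis(j, d, t, xi) for j in range(n)] for xi in x])
    W = np.linalg.inv(Sigma)                  # for large m, a Cholesky solve is preferable
    N = A.T @ W @ A                           # normal-equation matrix
    p_hat = np.linalg.solve(N, A.T @ W @ y)   # estimated control points
    Sigma_p = np.linalg.inv(N)                # VCM of the estimated CP
    return p_hat, Sigma_p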

The Parametrization of the Point Cloud

Parametric B-spline curves take values in $\mathbb{R}^2$ and are defined by letting the CP be points in $\mathbb{R}^2$ instead of real numbers. Intuitively, the parameter provides a measure of the time to travel along the curve and can be adapted to the data. Different methods can be used (e.g., uniform, chord length or centripetal parametrization; [40]). They all have shortcomings, which should not be underestimated for complicated geometries [41]. An exhaustive description of the parametrization is beyond the scope of the present paper.

The Number of CP

The CP form a control polygon, which is a rough sketch of the curve itself. Moving one CP influences the curve locally and not globally. The number of CP to estimate is linked to the length of the knot vector and can be iteratively adjusted. An optimal CP number should avoid the fitting of measurement noise. Information criteria (IC; [42]) are an alternative to heuristic methods and provide a useful tool to assess this optimal number. Two criteria are widely used: the Akaike information criterion (AIC), which minimizes the Kullback-Leibler divergence of the assumed model from the true, data-generating model, and the Bayesian information criterion (BIC). The latter is based on the inherent assumption that the true model exists [43]. They are defined in their common versions as:
$$AIC = -2\, l(\hat{p}) + 2n, \qquad BIC = -2\, l(\hat{p}) + \log(m)\, n, \qquad (5)$$
where $l(\hat{p})$ is the log-likelihood of the estimated parameters. Using this formulation, a minimum is searched for.
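Assuming a Gaussian error model with VCM $\Sigma$, the log-likelihood in (5) can be evaluated from the LS residuals. The sketch below shows one possible way to compute both criteria, taking the number of estimated CP as the model dimension; the function names and the exact likelihood convention are illustrative assumptions, not the authors' code.

```python
import numpy as np

def gaussian_log_likelihood(residuals, Sigma):
    """Log-likelihood of Gaussian residuals v = A p_hat - b with VCM Sigma."""
    m = len(residuals)
    _, logdet = np.linalg.slogdet(Sigma)
    quad = residuals @ np.linalg.solve(Sigma, residuals)
    return -0.5 * (m * np.log(2 * np.pi) + logdet + quad)

def aic_bic(residuals, Sigma, n_cp):
    """Equation (5): both criteria penalize the number of estimated CP."""
    ll = gaussian_log_likelihood(residuals, Sigma)
    m = len(residuals)
    return -2 * ll + 2 * n_cp, -2 * ll + np.log(m) * n_cp
```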

2.2. B-Spline Surfaces

We construct parametric B-spline surfaces as tensor product surfaces depending on the B-spline functions. The approximation method for curves described in the previous section can be generalized to $\mathbb{R}^3$, provided that a suitable parametrisation $(u,v)$ for the discrete data has been chosen. The parametric B-spline surface $S(u,v)$ is expressed as
$$S(u,v) = \sum_{i=1}^{n} \sum_{j=1}^{r} B_{i,d,t^{(u)}}(u)\, B_{j,d,t^{(v)}}(v)\, p_{ij}, \qquad (6)$$
where $t^{(u)} = (t_i^{(u)})_{i=1}^{n+d+1}$ and $t^{(v)} = (t_j^{(v)})_{j=1}^{r+d+1}$ are the knot vectors associated with the B-spline functions, assumed to be of the same degree $d$ for the sake of simplicity. $p_{ij}$ are the CP in $\mathbb{R}^3$ and $n$, $r$ are the numbers of CP to estimate in the $u$ and $v$ directions, respectively. We define $z$ as the value of the surface $S(u,v)$ at $(x,y)$. Without loss of generality, we will skip the superscripts $(u)$, $(v)$ from now on.
The LS approximation method can be used for fitting surfaces to scattered and noisy data in $\mathbb{R}^3$. Due to the definition of the surface by means of a tensor product, the minimization problem is directly related to the univariate one and is only a generalization of the methodology described in Section 2.1.
We will restrict ourselves to cubic B-splines, that is, $d = 3$. They are considered to be optimal for approximating smooth objects without sharp edges and corners. The observations are often gridded in advance to avoid the problem of solving a large system of equations. In this contribution, gridded real observations, non-gridded real data and simulated TLS observations will be used.
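A tensor-product surface as in (6) can be evaluated point by point once the two knot vectors and the grid of CP are available. The following sketch, again reusing the hypothetical `bspline_basis` helper introduced earlier, illustrates the double summation; it is a simplified example under these assumptions, not the authors' implementation.

```python
import numpy as np

def eval_bspline_surface(u, v, t_u, t_v, d, P):
    """Evaluate the tensor-product surface (6) at parameters (u, v).
    P has shape (n, r, 3): one 3D control point per (i, j)."""
    n, r, _ = P.shape
    Bu = np.array([bspline_basis(i, d, t_u, u) for i in range(n)])
    Bv = np.array([bspline_basis(j, d, t_v, v) for j in range(r)])
    return np.einsum('i,j,ijk->k', Bu, Bv, P)   # sum_i sum_j Bu_i * Bv_j * p_ij
```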

2.3. Deformation Analysis

We apply the previous theoretical developments to TLS observations, starting with two raw PC of the same object recorded at different epochs. The polar observations are transformed into Cartesian co-ordinates and approximated with B-spline surfaces following the methodology presented in Section 2.2. For the purpose of deformation analysis, we compute the distance—also called the "deformation magnitude"—between these two approximated surfaces $S_1$ and $S_2$. In this section, we briefly discuss different methods of distance computation.

2.3.1. Suboptimal Intuitive Approaches

We introduce two intuitive approaches to compute the distance between surfaces:
  • The first one makes use of a gridded PC and is defined as the difference between the $z$ co-ordinates of $S_1$ and $S_2$, $D_{grid} = \| z_{i,j}^1 - z_{i,j}^2 \|$, where $\|\cdot\|$ is the Euclidean norm. $z_{i,j}^1$ and $z_{i,j}^2$ are the values of $S_1$ and $S_2$ at the grid points $(x_i^1, y_i^1)$ and $(x_i^2, y_i^2)$, respectively. We note that $S_1$ and $S_2$ may have been computed with different optimal numbers of CP, that is, we may have $n_1 \neq n_2$, $m_1 \neq m_2$. Due to the gridding, $D_{grid}$ should only be used when the deformation can be assumed to be unidirectional (i.e., in the $z$-direction).
  • A second idea is to define the distance at the parameter level between the estimated vectors of CP $\hat{p}_1$ and $\hat{p}_2$ for $S_1$ and $S_2$, respectively, as $D_{CP} = \| \hat{p}_1 - \hat{p}_2 \|$. Clearly, $D_{CP}$ is meaningless when $n_1 \neq n_2$, $m_1 \neq m_2$, since the sizes of the two control polygons differ.

2.3.2. The Hausdorff Distance

In order to overcome the drawbacks of the two intuitive approaches mentioned above, we propose to quantify the deformation magnitude between two parametric B-spline surfaces by computing their HD (see, e.g., References [29,30,31]). First, we define the distance $D(po_1, S_2)$ between a point $po_1 = [x_{po_1}\; y_{po_1}\; z_{po_1}]^T$ belonging to $S_1$ and the surface $S_2$ as $D(po_1, S_2) = \min_{po_2 \in S_2} \| po_1 - po_2 \|$, where $po_2 = [x_{po_2}\; y_{po_2}\; z_{po_2}]^T$ is any point of $S_2$. From this definition, the HD between $S_1$ and $S_2$ is obtained by taking the maximum:
$$D(S_1, S_2) = \max_{po_1 \in S_1} D(po_1, S_2). \qquad (7)$$
It is convenient to introduce the symmetric HD as $D_s(S_1, S_2) = \max(D(S_1, S_2), D(S_2, S_1))$, since the computation of a one-sided distance potentially leads to an underestimated value [44]. Reference [30] shows that the HD is related to the computation of binormal lines for parametric surfaces. These lines are normal to both surfaces, at $po_{1,0}$ with parameter $(u_0, v_0)$ and at $po_{2,0}$. Thus, after having detected the so-called antipodal points $po_{1,0}$ and $po_{2,0}$, the minimum of the distance can be easily computed. This distance is, therefore, independent of the number of CP used to approximate each data set.
We will denote by $S_1(x_{j_{HD}}, y_{j_{HD}})$ the value $z_{j_{HD}}^1$ at the point $j_{HD}$ where the HD occurs on $S_1$, and similarly by $S_2(x_{i_{HD}}, y_{i_{HD}})$ the value at the point $i_{HD}$ where the HD occurs on $S_2$. Note that $i_{HD}$ may differ from $j_{HD}$.

2.3.3. The Averaged Hausdorff Distance

The maximum distance involved in (7) to compute the HD can be distorted by the noise of the observations and will not accurately reflect the global deformation between two objects. Rather than the HD, we propose to estimate the AHD, that is, an averaged value of the HD [24]. The AHD is based on the mean value of the distances, defined as $D_{mean}(S_1, S_2) = \mathrm{mean}_{po_1 \in S_1} D(po_1, S_2)$. With $D_{mean}(S_2, S_1) = \mathrm{mean}_{po_2 \in S_2} D(po_2, S_1)$, we have
$$D_{s\_ave}(S_1, S_2) = \max\left(D_{mean}(S_1, S_2), D_{mean}(S_2, S_1)\right). \qquad (8)$$
Our choice is justified by the fact that the AHD is known to be less sensitive to observation noise [27].
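When the two surfaces are densely sampled into point sets, both $D_s$ (7) and $D_{s\_ave}$ (8) can be approximated with nearest-neighbour queries. The sketch below uses a k-d tree for the inner minimum; note that the paper computes the HD between parametric surfaces via antipodal points, so this discretized version is only an illustrative approximation under the assumption of dense sampling.

```python
import numpy as np
from scipy.spatial import cKDTree

def hausdorff_distances(S1, S2):
    """Approximate symmetric HD (7) and AHD (8) between two surfaces
    discretized as (N, 3) point arrays."""
    d12, _ = cKDTree(S2).query(S1)     # D(po1, S2) for every sampled point of S1
    d21, _ = cKDTree(S1).query(S2)     # D(po2, S1) for every sampled point of S2
    hd = max(d12.max(), d21.max())     # symmetric Hausdorff distance D_s
    ahd = max(d12.mean(), d21.mean())  # averaged Hausdorff distance D_s_ave
    return hd, ahd
```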

Note

Nearly all predefined distances in standard software for raw PC processing are based on the HD or the AHD, either without mathematical approximation or with simplistic local approximations based on planes [45]. B-spline surface approximation allows a more general and detailed description of the PC than the local strategies used in conventional software.

3. Simulations

In this section, we will analyze the HD and AHD between two simulated noisy PC and their approximations. We wish to answer the first two questions raised in the introduction in a controlled framework. To that aim, we will investigate if a mathematical approximation is beneficial for a trustworthy distance computation and how the correlated noise affects the distance computation.
Our simulations are based on the generation of two “no deformation” sets of raw correlated and uncorrelated reference observations in polar co-ordinates. We describe the specific correlation model used to generate correlated TLS observations in Appendix A.

3.1. Generating Noisy Surfaces

We generate sample points from a given mathematical reference surface $S_{simu}$. The latter is assumed to correspond to the probability density function of a two-dimensional normal distribution with mean $\mu = [5\; 5]^T$ ($T$ standing for transpose) and VCM $\Sigma_s = \mathrm{diag}(0.2, 0.2)$. The choice of the reference surface is justified to avoid oscillations of the approximation due to sharp edges or variations [46]. The x- and y-co-ordinates are uniformly sampled with a resolution of 0.5 in the interval $[1, 10]$. The corresponding surface is shown in Figure 1 (left).
We generate two realizations of the same surface S s i m u having the same stochastic properties. The methodology can be summarized as follows:
We add noise to the predefined grid surface points of $S_{simu}$ to obtain a pointwise sorted gridded surface $S_{simu}^1$: $S_{simu}^1 = S_{simu} + G^T W_{noise,1}$. $N_1 = G^T W_{noise,1}$ is called a realization of the noise, where $G^T$ is the lower triangular matrix of the Cholesky factorization of the noise VCM $\Sigma_{noise}$, with $\Sigma_{noise} = G^T G$ [47]. We generate the vector $W_{noise,1}$, of the same size as $S_{simu}^1$, with Matlab's random number generator randn, which yields a realization of a normally distributed vector with mean 0 and variance 1. The surface $S_{simu}^2$ is built similarly using a second white noise vector $W_{noise,2}$. We call $N_2 = G^T W_{noise,2}$ the second realization of the noise.
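A minimal sketch of this noise generation is given below. Note that numpy's Cholesky factorization returns the lower-triangular factor $L$ with $\Sigma_{noise} = L L^T$, which plays the role of $G^T$ in the notation above; the function name is an illustrative assumption.

```python
import numpy as np

def correlated_noise(Sigma_noise, rng=None):
    """One realization N = G^T W of correlated noise with Sigma_noise = G^T G.
    numpy returns the lower-triangular Cholesky factor L with Sigma = L L^T,
    which corresponds to G^T in the text."""
    rng = rng or np.random.default_rng()
    L = np.linalg.cholesky(Sigma_noise)            # lower-triangular factor
    w = rng.standard_normal(Sigma_noise.shape[0])  # white noise, mean 0, variance 1
    return L @ w

# Two epochs of the same noise-free (flattened) grid surface S_simu:
# S_simu_1 = S_simu + correlated_noise(Sigma_noise)
# S_simu_2 = S_simu + correlated_noise(Sigma_noise)
```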

3.2. Generating the Reference Noise VCM Σnoise

The set-up of Σ n o i s e is mandatory to generate the noisy surfaces. We simulate three kinds of VCM with an increased degree of complexity:
(i) Simple VCM: $\Sigma_{noise} = \sigma_\rho^2 I$. The identity matrix $I$ is scaled by a factor $\sigma_\rho^2$ defined in the next section.
(ii) Complex VCM, degree 1: $\Sigma_{noise} = \Sigma_{MAC}$, assuming heteroscedasticity of the raw polar observations and mathematical correlations (MAC) due to the transformation to Cartesian co-ordinates in the B-spline approximation.
(iii) Complex VCM, degree 2: $\Sigma_{noise} = \Sigma_{TC}$, assuming, in addition to (ii), also temporally correlated polar observations.
These matrices are further described for a better understanding in the next sections.

3.2.1. Case (ii)

We follow Reference [37] and build the block-diagonal VCM $\Sigma_{noise,polar}$ of the polar observations:
$$\Sigma_{noise,polar} = \begin{bmatrix} \sigma_{HA}^2 I & 0 & 0 \\ 0 & \sigma_{VA}^2 I & 0 \\ 0 & 0 & \Sigma_{noise,\rho} \end{bmatrix}.$$
We call $\rho$ the range and $VA$ and $HA$ the vertical and horizontal angles, respectively. We assume a constant standard deviation (STD) of $\sigma_{HA} = \sigma_{VA} = 2.5$ mgon for both normally distributed angles. Two STDs $\sigma_\rho$ for the range are chosen to build $\Sigma_{noise,\rho} = \sigma_\rho^2 I$, following the manufacturer's specifications of a Zoller+Fröhlich Imager 5006H TLS:
(1) $\sigma_{\rho,2} = 0.7$ mm, which corresponds to an object observed at a close distance (<10 m), and
(2) $\sigma_{\rho,1} = 7$ mm, for an object scanned at a distance greater than 25 m or under unfavorable scanning conditions.
Starting from $\Sigma_{noise,polar}$, we furthermore make use of the error propagation law to compute the VCM of the transformed Cartesian observations, that is, $\Sigma_{noise} = \Sigma_{MAC}$. This step is justified by the need to use Cartesian observations to compute the B-spline surface (Section 2). The same two range variances are used to scale the identity matrix of case (i) for the sake of comparison between models. In these simulations and in the following case study, we will assume the range variance $\sigma_\rho^2$ to be known; rough estimates of the range variance are available in real cases using the intensity model or the manufacturer's specifications.
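The per-point variance propagation can be sketched as follows. The spherical convention used here (vertical angle measured from the zenith), the function name and the treatment of each point independently are assumptions for illustration: the angle STDs must be given in radians, and range correlations between points would additionally populate the off-diagonal blocks of the full $\Sigma_{MAC}$.

```python
import numpy as np

def polar_to_cartesian_vcm(rho, ha, va, sigma_rho, sigma_ha, sigma_va):
    """Variance propagation for a single TLS point, assuming
    x = rho*sin(va)*cos(ha), y = rho*sin(va)*sin(ha), z = rho*cos(va)
    (va measured from the zenith; adapt the Jacobian to the instrument).
    Angle STDs in radians, e.g. 2.5 mgon = 2.5e-3 * pi / 200 rad.
    Returns the 3x3 VCM of the Cartesian coordinates of that point."""
    J = np.array([
        [np.sin(va) * np.cos(ha), -rho * np.sin(va) * np.sin(ha),  rho * np.cos(va) * np.cos(ha)],
        [np.sin(va) * np.sin(ha),  rho * np.sin(va) * np.cos(ha),  rho * np.cos(va) * np.sin(ha)],
        [np.cos(va),               0.0,                           -rho * np.sin(va)],
    ])
    Sigma_polar = np.diag([sigma_rho**2, sigma_ha**2, sigma_va**2])
    return J @ Sigma_polar @ J.T
```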

3.2.2. Case (iii)

As many effects (atmospheric, surface- or sensor-based) can potentially correlate the range measurements, the assumption of heteroscedasticity alone, made in case (ii), is fairly unrealistic. Assessing the correlation structure of the TLS range with a general model is a complex task. An empirically based method was proposed in Reference [36]: the residuals of an LS adjustment of a scanned plane were fitted with an exponential function. This function is known to have a substantial limitation in most geostatistical studies due to the small degree of smoothness of the covariance function [38]. Additionally, methods using an empirical fitting of the autocovariance function have severe drawbacks in the case of fractional noise. In this contribution, we follow Reference [16], in which the correlation structure of Global Navigation Satellite System observations is modelled with a general Matérn covariance function [48]. An analogy drawn between TLS and Global Navigation Satellite System observations makes the application of this flexible function to describe the structure of TLS range correlations plausible [17]. The two parameters—smoothness and range—involved in the Matérn model are presented in Appendix A.

Our Assumptions

  • We model the correlation of the range as being temporal, that is, time-dependent. Range measurements are a measure of time [49]: any spatial effects stemming from the reflected surface can be included in the variance factor. The latter could, for example, follow the physically plausible intensity model, as proposed in Reference [12].
  • The covariance function proposed is said to be separable, that is, it separates the temporal from the spatial effects [50]. We will here assume a temporal spacing of 1 s between the simulated observations.

Building the VCM

We build the fully populated VCM $\Sigma_{noise,\rho}$ by associating with each range observation the time label $t_i$ at which the measurement was made (see Figure 1, right). In a first approach, we neglect the time taken by the sensor to return to the second column and consider the first point of the second column to be equally spaced with respect to the observations of the first column. Including this short time offset would decrease the correlations, that is, bring the results closer to case (ii).
Finally, we build the fully populated, pointwise sorted VCM of the range measurements from the vector of correlations. This VCM has a Toeplitz structure and is scaled so that the variance of the range $\rho$ corresponds to the two cases described previously (Section 3.2.1). Similarly to case (ii), the VCM of the raw TLS observations is transformed by accounting for mathematical correlations. This leads to a fully populated VCM of the Cartesian co-ordinates $\Sigma_{noise} = \Sigma_{TC}$.
We simulate two kinds of correlation structures with different Matérn parameter values: a low correlation range, $[\alpha, \nu] = [1, 2]$, and $[\alpha, \nu] = [0.01, 2]$, for which the correlations prevail over larger lags. We intentionally impose mean-squared differentiability of the Matérn covariance function at the origin (near $t_i = 0$) by taking $\nu > 1$ [38]. Taking $\nu < 1$ imposes a strong limitation, since the correlation length decreases sharply at the origin, leading to sparse VCM and inverses. This effect decreases the impact of fully populated matrices in the LS adjustment and statistical tests [51]. The study of the temporal correlation structure of TLS range measurements is beyond the scope of this paper and will be addressed in a subsequent contribution.

3.3. Approximated VCM in the LS Adjustment

The assumption that the true VCM of the raw observations is known is fairly unrealistic in real applications. Thus, we propose to additionally assess the impact of an approximated VCM on the distance computation derived from the regression B-spline approximation. We gradually mis-specify the true VCM, as presented in Table 1. This table has to be read as follows: for case (ii), the true VCM is $\Sigma = \Sigma_{MAC}$ and is simplified using the scaled identity matrix $\hat{\Sigma} = \sigma_\rho^2 I$. For case (iii), we simulate two steps of simplification: firstly, we neglect the temporal correlations ($\hat{\Sigma} = \Sigma_{MAC}$) and, secondly, also the mathematical correlations ($\hat{\Sigma} = \sigma_\rho^2 I$).

3.4. Determining the Optimal Number of CP Using Information Criteria

We simulate a total of four PC for cases (i) and (ii) and four PC for case (iii) with two different correlation structures and range variances. We compute 100 runs of each simulation with an MC approach. One run here corresponds to the generation of two epochs simultaneously.
The mathematical modelization of the simulated PC is performed using the regression B-spline surface approximation developed in Section 2. The parametrization is made with the chord length method, which gives satisfactory results for regular and rectangular-shaped PC. The knot vector is determined using the method of Piegl and Tiller [40]. The optimal number of CP in the two directions is iteratively determined for each of the eight cases with the BIC and AIC approaches (see Section 2.1.2). Since the AIC gave the same results as BIC, the results are not shown for the sake of brevity.
The results given by the BIC are presented in Table 2 and are identical for each MC run. Correlated and uncorrelated noise vectors lead to different optimal numbers of CP.

3.5. Results

The means over all MC runs of $D_s(S_{simu}^1, S_{simu}^2)$ and $D_s(S_1, S_2)$ are calculated, as well as $D_{s\_ave}(S_{simu}^1, S_{simu}^2)$ and $D_{s\_ave}(S_1, S_2)$. These values correspond to the distance between the raw simulated PC and the distance between their approximated surfaces, respectively. Additionally, the STDs of the series obtained from the 100 MC runs are given. The stochastic models used to approximate the surfaces are varied according to Table 1.
In Table 3, we intentionally choose to present only the results from the extreme case (iii), corresponding to a high correlation level and a high range variance. This is justified for the sake of the brevity and clarity of this contribution. The other results can be deduced from this particular one and are summarized in text form in the following paragraphs.

3.5.1. Impact of the Simplified Stochastic Model on HD and AHD: Mathematical Approximations

We expect the HD and AHD between the mathematical approximations to be as close as possible to 0, since the simulated PC correspond to a "non-deformation" case. Any discrepancy can be assigned to the LS solution itself when noisy observations with the wrong VCM are approximated. Additionally, the chosen distance may be inappropriate. The results of Table 3, as well as those of the other simulations (described in text form), are interpreted in this light.

Use of a Correct VCM

When we use the correct VCM to approximate the PC with the regression B-spline surface (Table 3, second line), the distances (HD or AHD) are close to 0. For case (iii) and $\sigma_\rho = 0.007$ m, the distance reaches 0.0029 m for the AHD but a higher value of 0.0084 m for the HD. We additionally find 0.0011 versus 0.0012 m for $\sigma_\rho = 0.0007$ m (STD $3 \times 10^{-4}$ m) for the AHD and HD, respectively. For case (ii), which corresponds to $\sigma_\rho = 0.007$ m and a reference $\Sigma_{noise} = \Sigma_{MAC}$, the AHD reaches 0.0046 m and the HD 0.0232 m (STD $3 \times 10^{-4}$ and $4 \times 10^{-3}$ m, respectively). This is a stronger difference compared to case (iii). For case (i) ($\sigma_\rho = 0.007$ m, $\Sigma_{noise} = \sigma_\rho^2 I$), the AHD reaches 0.1097 m and the HD 0.3534 m (STD $3 \times 10^{-3}$ m for both). Thus, we clearly see that the AHD gives values that are closer to 0 than the HD for all cases under consideration.

Use of An Approximated VCM

As described in Section 3.3, we use approximated VCMs in the LS and distance computation (Table 3, third and fourth lines). In this case, we can distinguish the following:
  • under correlated noise, the approximated VCM used in the LS computation strongly affects the determination of both the HD and the AHD: the difference ratio reaches, for case (iii), more than 75% for the HD and 200% for the AHD. This result was found to hold true for all cases under consideration, that is, independently of the correlation structure and the variance factor. Thus, a correct stochastic model is unavoidable for a trustworthy distance. Exemplarily, for case (iii) with $[\alpha, \nu] = [0.01, 2]$ and $\sigma_{\rho,1} = 0.007$ m, the ratio of the difference between the approximated and the reference distance to the reference, for $\hat{\Sigma} = \Sigma_{MAC}$ or $\hat{\Sigma} = \sigma_\rho^2 I$, reaches 200% for the AHD (Table 3). Decreasing the correlation length decreases the ratio: for case (iii) and $[\alpha, \nu] = [1, 2]$, the latter is found to be only 7% smaller than the reference value when the VCM is mis-specified ($\hat{\Sigma} = \Sigma_{MAC}$ or $\hat{\Sigma} = \sigma_\rho^2 I$). This result is found to be independent of the $\sigma_\rho$ chosen. For $\sigma_{\rho,2} = 0.0007$ m, the same ratio is 10% smaller: a small range variance impacts the distance computed with a mis-specified VCM less strongly.
  • When the observations are only mathematically correlated, simplifying the stochastic model by neglecting the mathematical correlations, that is, taking $\hat{\Sigma} = \sigma_\rho^2 I$, did not affect the HD or the AHD significantly for $\sigma_{\rho,2} = 0.0007$ m. By increasing the range STD to $\sigma_{\rho,1} = 0.007$ m, the ratio for the AHD was increased by 15%. This result highlights the importance of accounting for mathematical correlations under unfavorable scanning conditions, that is, a high range variance.
Independently of the case under consideration and the approximated VCM used, the AHD was always about four times smaller than the HD and, thus, closer to the expected 0 value. The AHD was less influenced by a wrong stochastic model than the HD, except for case (iii) and $\sigma_{\rho,2} = 0.0007$ m. We attribute this finding to the averaging effect of the AHD.

3.5.2. Impact of the Simplified Stochastic Model on the HD and AHD: PC

When the distances are computed based on the raw observations, we still expect the AHD and the HD to be as close as possible to 0. Discrepancies are due to the noise introduced to generate the simulated PC and depend on the distance chosen.
The HD and AHD based on the simulated PC have both higher values and higher STDs compared with the values obtained with a mathematical modelization (see Section 3.5.1). This result is particularly significant when the PC noise is correlated. Indeed, a difference of up to 135% for the AHD was obtained for case (iii) and $\sigma_{\rho,1} = 0.007$ m ("PC," last line in Table 3). For case (iii) and $\sigma_{\rho,2} = 0.0007$ m, we find a difference of 85% for the AHD and more than 400% for the HD. For case (ii), differences of 80% and 30% for the AHD and HD, respectively, are reached for both $\sigma_{\rho,1}$ and $\sigma_{\rho,2}$. For case (i), we notice a difference of 22% versus 75% for the AHD and the HD, respectively. This finding highlights the filtering effect of the surface approximation on the underlying PC noise. It is, thus, particularly advantageous to approximate the PC mathematically for distance computation in the case of correlations.
We note more generally that correlations also had a positive effect on the distance computation with raw observations. In case (iii), a decrease of both its value and STD with respect to cases (ii) and (i) was identified. Exemplarily, we find for $\sigma_{\rho,2} = 0.0007$ m:
case (i): AHD of 0.3534 m (STD $4 \times 10^{-3}$ m)
case (ii): AHD of 0.0012 m (STD $5 \times 10^{-5}$ m)
case (iii) and $[\alpha, \nu] = [0.01, 2]$: AHD of 0.0011 m (STD $2.5 \times 10^{-5}$ m).
When the raw observations are used, correlations act implicitly as a reduction of the available information, that is, similar to a point density reduction [9]. In real applications, they are related to the gridding of the raw observations, which reduces the number of observations of the PC. This implication is further developed in Section 4 with real observations from a bridge under load.

3.5.3. Statistical Testing for Deformation

In Appendix B, we propose a rigorous statistical test for the significance of the distance between mathematical approximations of TLS observations. We applied this derivation to the simulated PC. The null hypothesis $H_0$ that no deformation occurs was strongly supported for all cases under consideration. The test values varied between 0.3 and 1, whereas the smallest ones were obtained for the correlated cases (iii) under the assumption that $\hat{\Sigma} = \sigma_\rho^2 I$. This highlights once more the importance of an adequate stochastic model, particularly in the presence of correlations, although the absolute $p_v$ values should not be overinterpreted [51].

3.6. Conclusions of the Simulations

Using the results of the simulations, we provide a first answer to the following questions:
  • How does correlated noise affect the distance? Which distance is better suited in the case of noisy observations?
Based on the results of the simulations and when the raw observations are used (PC), correlated noise affects the distance computation positively. It has an effect similar to a reduction of the number of observations available. When a mathematical approximation of the PC is performed, the best stochastic model should be used in the LS adjustment to obtain a trustworthy distance. The impact becomes less pronounced as the range variance and correlation length decrease. The AHD is better suited than the HD for computing the distance between raw or approximated PC.
  • Why should we use a mathematical approximation of the noisy PC?
A mathematical approximation of the PC using, for example, B-spline surfaces is beneficial to assess a distance as close as possible to the reference value. Moreover, it allows the derivation of a rigorous statistical testing procedure based on the distance chosen, as developed in Appendix B and validated within a simulation framework.

4. Case Study

The previous simulations, for which the noise structure was known and controlled, have highlighted the impact of correlations on the HD and AHD. In this section, we propose to apply these derivations to a real case study.
We will analyze the HD and AHD computed with and without mathematical modelization of a real PC. We will compare the values with a reference one obtained with a more precise sensor: a pointwise LT. The rigorous statistical test procedure for deformation will be further applied.

4.1. A Bridge Under Load

We use real data from a bridge under load to assess the advantages of a mathematical modelization regarding processing the raw observations to estimate and test magnitudes of deformation.
The data set corresponds to a historic masonry arch bridge over the river Aller near Verden in Germany. Deformations were artificially generated by increasing load weights on specific parts of the bridge to simulate the impact induced, for example, by car traffic [52]. Within the scope of the load testing, a standard load of 1.0 MN (100 t) was defined, and further loadings of up to five times the standard load were realized. Thus, a maximum load of approximately 6.0 MN was applied, produced by four hydraulic cylinders mounted on the arch. The TLS profiles were captured using a Zoller+Fröhlich Imager 5006H at a sampling rate of 500,000 points per second. In a pre-processing step, the PC was cleaned by removing interfering objects on the arch surface, that is, other sensor installations such as prisms for the laser tracker and strain gauges. The first evaluation step in post-processing was the referencing of the 3D point clouds in the coordinate system of the structure. The corresponding results are shown in Reference [52], Table 1. The mean standard deviation of the 3D points was 0.2 mm, with a maximum of 0.4 mm, which shows the quality of the referencing and, at the same time, guarantees a stable laser scanner position during the load test. In this contribution, we intentionally focus on the deformation between the reference PC without load (called E00 for epoch 0) and the PC corresponding to the maximum deformation occurring at the 5th epoch (E55). Further details about the experiment can be found in Reference [5], with a comparison of the LT deformation magnitudes and the M3C2 distance for all load steps.
Figure 2 is a photograph of the bridge, with a localization of the two LT points under consideration. The TLS was positioned approximately in the middle of the bridge so that the parts under load could be optimally scanned at a short distance, that is, from 5 m in the up-direction.
We aim to compare the HD and AHD with the deformation magnitude obtained with a highly accurate LT. As the latter measures a pointwise distance, we selected two small square patches of 25 × 25 cm from the whole PC in the direct neighborhood of the two LT points L8 and L13. The zone around L8 was scanned with a less favorable geometry than the patch around L13 regarding point density, incidence angle, footprint size and range (see Figure 2).
These two points were chosen intentionally due to:
  • their comparable and small deformation magnitudes of approximately 4 mm between steps E00 and E55 around the reference LT points and
  • the two different scanning geometries.
Please note that we do not intend to make a systematic investigation of the impact of the scanning geometry on the quantities of interest here; hence, no further details are given. Our goal in this contribution is to compare the HD and AHD with the LT values and to validate a procedure for testing deformation. Further investigations are left to subsequent, dedicated contributions.

4.2. Mathematical Modelling

In order to approximate the small patches mathematically with B-spline surfaces, the parameterization was carried out using a uniform method, which is justified by their relatively smooth and uncomplicated geometries. We chose an equidistant knot vector for the same reason. A B-spline approximation is preferred over a Gauss-Helmert model [53], since the surfaces are not exactly planar and cannot be exactly approximated with an inclined plane.
Three strategies were adopted for the surface fitting to simplify the computation and reduce the point densities of the PC:
(i) In a pre-processing step, the extracted PC were gridded, that is, the X- and Y-axis were each divided into ten steps. For each of the 100 cells, the means of the X, Y and Z values were computed to reduce the number of observations. The value of 10 was chosen as the highest one leading to the occurrence of at least one point in each cell (see the sketch after this list).
(ii) The extracted PC were gridded similarly to (i), but the X- and Y-axis were each divided into 5 steps, which corresponds to 25 cells.
(iii) The whole PC were used without gridding, that is, no reduction of the PC point density was performed.
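A possible implementation of the gridding used in strategies (i) and (ii) is sketched below; the cell assignment and the handling of empty cells are illustrative assumptions, not the exact pre-processing of the case study.

```python
import numpy as np

def grid_point_cloud(points, n_cells=10):
    """Reduce an (N, 3) patch PC by averaging X, Y, Z inside an
    n_cells x n_cells grid spanned by the X- and Y-extent of the patch."""
    x, y = points[:, 0], points[:, 1]
    ix = np.clip(((x - x.min()) / (np.ptp(x) + 1e-12) * n_cells).astype(int), 0, n_cells - 1)
    iy = np.clip(((y - y.min()) / (np.ptp(y) + 1e-12) * n_cells).astype(int), 0, n_cells - 1)
    gridded = []
    for cx in range(n_cells):
        for cy in range(n_cells):
            mask = (ix == cx) & (iy == cy)
            if mask.any():                       # keep only non-empty cells
                gridded.append(points[mask].mean(axis=0))
    return np.array(gridded)
```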
The BIC was used to compute the optimal number of CP. For (i) and (ii), the estimation of 4 CP in both directions was found to be optimal for the patches, whereas for (iii), 6 CP were estimated in both directions for both L8 and L13.
The different mathematical surfaces obtained with and without gridding are shown in Figure 3. Figure 3 right highlights how the gridding of the PC (case (i)) affects the fitting by smoothing or filtering the PC. Figure 3 left shows more details of the surface as all available scanned points are used (case (iii)).

Note on the Stochastic Model:

When a gridding of the PC is performed, the temporal correlations are lost: an averaging of the values within one cell is performed and the time matching becomes meaningless. We, therefore, make use of the simplified stochastic model corresponding to case (ii) and account only for heteroscedasticity and mathematical correlations in the LS adjustment. The intensity model is used to compute the range variance [37]. As expected from the scanning geometry, and because the TLS was situated under the middle of the bridge (Figure 2), we obtained a large mean intensity of 1,557,500 Inc for L13, leading to $\sigma_\rho^{L13} \approx 0.5$ mm, whereas for L8 the mean intensity reached 358,900 Inc, corresponding to $\sigma_\rho^{L8} \approx 1$ mm.
For case (iii) (B-spline approximation without gridding), we intentionally chose to neglect correlations when computing the mathematical approximation of the PC. This is justified by the computational burden associated with fully populated VCM and the relatively low impact on the distance for the range variance under consideration (below the sub-mm level, see Section 3.5).

4.3. Computation of the HD and AHD

The HD or AHD computed with the three gridding strategies are not expected to give similar values.
Due to the smaller reduction of the PC point density, the HD and AHD from the B-spline approximation (i) will be closer to the values obtained with the PC (case (iii)). Because the HD is a local distance measure, a stronger influence of unexpected local details is anticipated, particularly when few points are condensed in a grid cell. In the simulation section, we stated that correlations act to reduce the number of observations available and affect the distance computation with raw observations positively. This effect is similar to a gridding of the PC and allows us to conjecture that an optimal gridding exists that leads to the reference value of the distance. The reference value corresponds here to the pointwise LT distance.
The mathematical approximations of the PC will lead to a distance closer to the reference one when optimal gridding of the raw PC is performed.
The corresponding results are presented in Table 4 and confirm these expectations. Both HD and AHD values are compared with the LT values (last column) for the four cases under consideration.

Gridded Observations: Case (i) and (ii)

In the case of gridded observations (Table 4, first line), the LT deformation magnitudes are closer to the AHD than to the HD. The HD (Table 4, third column) is higher than the AHD (Table 4, second column) in all cases.
A reduction of the PC to 16 values per cell (L8, case (i); Table 4, first line) logically leads to an AHD closer to the value obtained without mathematical approximation (PC). It is nearly 0.2 mm above the value given by the LT (4.29 mm versus 4.07 mm). We link this effect to the lower noise reduction compared with (ii).
A high point averaging, corresponding to 300 PC points in a cell (L13, case (ii)), is associated with a low AHD. The latter is smaller by 0.23 mm compared with case (i) (Table 4, third line). However, the difference is below the noise variance of the range and should not be overinterpreted. Similarly, we found an underestimated value of 3.8 mm for point L8 when averaging 600 points per cell (not presented in Table 4). Thus, the loss of information due to a strong PC density reduction affects the AHD negatively when compared with the LT deformation magnitude.

No Gridding, Case (iii)

When the whole PC is used for surface fitting rather than a gridded version, the AHD values are higher by up to 0.5 mm for L8 and 0.3 mm for L13 compared with the optimal values obtained with an approximated gridded PC (Table 4, second column). These values are below the noise variance of the range but show the effect of the PC smoothing on the distance computation (e.g., Figure 3).
From Table 4 (third column), the HD for cases (i) and (iii) is higher than for case (ii). This gives an additional argument in favor of the AHD, that is, the averaging decreases the impact of potential local artefacts when compared with the LT pointwise deformation magnitude.
In Table 4 (last column), we added the results found with the usual M3C2 method [10]. They show an underestimation of the distance, with 3.20 mm versus the LT value of 4.07 mm for L8 and 4.70 mm versus 4.96 mm for L13.

4.4. Testing for Deformation

Even if the deformation magnitude is obvious with respect to the estimated noise STD for both L8 and L13, we aimed to validate the testing methodology presented in Appendix B. We, thus, make use of the bootstrapping approach to derive the p-values of the a priori test statistics $T_{HD}$ and $T_{AHD}$ under the stochastic model $\hat{\Sigma}_{MAC}$ with the estimated $\sigma_\rho$. Following the simulations, we use $K_{BS} = 99$ samples to test for the significance of the deformation. The bootstrap sample generated under $H_0$ ("no deformation") is defined as the average of the two surfaces E00 and E55 for the two points under consideration. No evidence for $H_0$ could be identified, as the p-values obtained for the AHD were approximately 0, far below the significance level $\alpha_{test}$ of 0.05. We conclude that the deformation magnitude based on the AHD is statistically significant.

4.5. Discussion

In this case study, we found that around 60–70 points per cell (L8, case (i), and L13, case (ii)) are optimal to ensure an AHD close to the deformation magnitude obtained with the LT. This finding is far-reaching when a comparison of mathematical approximations of the TLS PC with LT values is intended:
  • an optimal grid setting for a good correspondence between the deformation magnitudes computed from two different sensors exists: a higher point density may lead to different point correspondences in the two epochs, particularly in the case of a small deformation. The optimal cell size depends on the point density inside one cell and could be assigned by means of a calibration based on a sensor comparison (LT and TLS).
  • We further pointed out that the AHD is less influenced by a suboptimal fitting, that is, an inappropriate parametrization, knot vector or number of CP, and is more trustworthy for local deformation analysis than the HD. This finding confirms the results of the previous simulations: the AHD is more appropriate than a maximum value (the HD) for the sake of comparison with LT values. This is due to the averaging of the AHD when a local deformation analysis is performed. A statistical test of the significance of deformation should be based on this distance.
We noticed that the point density reduction affects the distance computed with B-spline approximations positively, up to a given stage where not enough information is available for a correct fitting. This confirms our conjecture that an optimal gridding of the raw PC exists for which the AHD corresponds to the reference value, that is, an implicit accounting for correlations.
Using standard settings, we found an underestimation of the deformation with the M3C2 method. This finding is coherent with our results on the point density inside one cell. Consequently, we strongly recommend performing a local fitting when magnitudes have to be precisely estimated. Consistency regarding the point density and distance computation is mandatory for the sake of comparison between deformation magnitudes obtained with different sensors.

5. Conclusions

The potential of TLS-based deformation analysis is high due to the fast and simple data acquisition, the high point density and the possibility of scanning whole areas of interest. B-spline surfaces can approximate the PC mathematically for rigorous statistical testing of deformation and to filter the noise of TLS observations.
A numerical evaluation of the magnitude of deformation can be obtained by computing a distance between the PC or their approximated counterparts. In this contribution, we focused on the HD and the AHD. The latter was shown to be a powerful alternative to the HD in the case of correlated observations.
Three questions were answered:
  • A mathematical approximation of the noisy TLS PC is beneficial for a trustworthy distance computation: B-spline surface approximation from scattered PC acts as filtering the correlated and heteroscedastic noise from TLS observations. The AHD computed was closer to the reference one for both simulated and real data analysis when a B-spline surface fitting was performed. Additionally, a pre-gridding of the raw PC for a real scenario affected the distance computation positively by further reducing the observations available.
  • A rigorous statistical test for deformation can only be performed based on parametric surfaces. That is one of the most significant advantages of the mathematical approximation. Because the distribution of the test statistic for deformation based on the AHD is not tractable, we proposed and validated a novel bootstrap approach for the test decision.
  • Correlated noise affects the distance computation between PC for both raw and approximated observations. In the case of an approximation of the PC with regression B-spline surfaces, an optimal stochastic model in the LS adjustment is mandatory to reach the optimal value of the distance: both mathematical and temporal correlations should be accounted for.
In a real application, the impact of the noise on the distance can be decreased by an optimal gridding of the raw observations, similar to accounting for correlations. Consequently, a calibration using a highly accurate sensor could be performed in advance. The size of the cell depends on the point density within the surface under consideration. Further analysis will be performed to fix the optimal grid size by means of calibration. We will also validate the proposed correlation model by analyzing the residuals of the B-spline surface approximation.

Author Contributions

G.K.: conceptualization, methodology, investigation, analysis and writing, H.A. and B.K.: conceptualization, methodology (bootstrap simulations), review. All authors have read and agreed to the published version of the manuscript.

Funding

The publication of this article was funded by the Open Access fund of Leibniz Universität Hannover.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. The Matérn Model

In this contribution, we use the Matérn covariance function [47] to compute $\Sigma_{TC}$. The main parameters of this model—which can be extended to account for anisotropy and nonstationarity [50]—are briefly presented here.
In its simplest, spatial form, the Matérn covariance function $C_{matern}$ is defined by $C_{matern}(r) = \phi\, (\alpha r)^{\nu} K_{\nu}(\alpha r)$, where $r > 0$ is the Euclidean distance between two points in space and $\nu$ is the smoothness parameter related to the mean-squared differentiability of the field at the origin. $K_{\nu}$ denotes the modified Bessel function of the second kind of order $\nu$, and $\alpha$ is a range parameter that controls how quickly $C_{matern}$ decreases as $r$ increases. The function is usually normalized to 1 with the scaling parameter $\phi$ and can easily be scaled to any other variance, for example, using the intensity model of Reference [12], as proposed in this contribution. Here, the distance $r$ is replaced by the time lag $t_i$ to obtain a temporal covariance function.
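The following sketch shows one way to evaluate this correlation function and to build the Toeplitz range VCM described in Section 3.2.2; the normalization constant $2^{1-\nu}/\Gamma(\nu)$ and the example parameter values are assumptions chosen for illustration.

```python
import numpy as np
from scipy.special import kv, gamma
from scipy.linalg import toeplitz

def matern_correlation(lag, alpha, nu):
    """Matern correlation, C(r) ~ (alpha*r)^nu * K_nu(alpha*r),
    normalized so that C(0) = 1 via the factor 2^(1-nu)/Gamma(nu)."""
    lag = np.atleast_1d(np.abs(lag)).astype(float)
    c = np.ones_like(lag)
    pos = lag > 0
    c[pos] = (2 ** (1 - nu) / gamma(nu)) * (alpha * lag[pos]) ** nu * kv(nu, alpha * lag[pos])
    return c

# Toeplitz VCM of m range measurements with 1 s spacing, scaled to sigma_rho^2
m, sigma_rho = 100, 0.007
lags = np.arange(m)                        # temporal lags in seconds
Sigma_rho = sigma_rho**2 * toeplitz(matern_correlation(lags, alpha=0.01, nu=2))
```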
Figure A1 (left) displays $C_{matern}$ scaled to 1 (i.e., the correlation function) for different choices of the shape parameter $\nu$. The parameter clearly specifies the rate of decay of the covariance function at the origin and is, thus, related to the high-frequency content in the spectral domain. When Toeplitz VCM are built with this covariance function, their inverses become more fully populated as $\nu$ increases and, consequently, the impact of accounting for correlations in the LS adjustment is stronger [16]. The following cases are well known for a 1D field and should be mentioned:
  • $\nu = 1/2$ corresponds to the exponential covariance function, that is, a strong decay at the origin;
  • $\nu = 1$ corresponds to the Markov process of first order;
  • $\nu = \infty$ corresponds to the squared exponential covariance function, which describes a physically less plausible, infinitely differentiable random field at the origin. The case $\nu = 4$ in Figure A1 (left) highlights the meaning of this assumption, that is, a slowly decaying $C_{matern}$ at the origin, potentially leading to numerical problems when the corresponding VCM have to be inverted.
Figure A1 (right) shows the different correlation functions obtained by varying the range parameter $\alpha$. As can be seen, $\alpha$ is linked with the speed at which the covariance function decays to 0. Please note that other parametrizations of the Matérn function are possible [38]. The parameters, including the variance, can be estimated with maximum likelihood or cross-validation methods, possibly fixing one parameter to reduce the computational burden [54].
Figure A1. Matérn correlation function. Left: variation of the smoothness parameter $\nu$ with $\alpha = 0.05$. Right: variation of the range parameter $\alpha$, keeping $\nu$ fixed to 2.

Appendix B. Bootstrap Statistical Test for Deformation

A parametric surface modelization allows the significance of the deformation magnitude to be statistically and rigorously assessed.

Appendix B.1. Test Statistics and the Null Hypothesis

In order to test the significance of the HD and AHD, we define the null and alternative hypotheses of the test by $H_0: E\{D_s\} = 0$ vs. $H_1: E\{D_s\} \neq 0$ and $H_0: E\{D_{s\_ave}\} = 0$ vs. $H_1: E\{D_{s\_ave}\} \neq 0$, that is, the null hypothesis states that no deformation happened. $E\{\cdot\}$ is the expectation operator. As we aim to measure the deviations from $H_0$ as a distance measure, we follow Reference [42] and choose the HD test statistic
T_HD = ( [x_iHD y_iHD z_iHD]_S2 − [x_jHD y_jHD z_jHD]_S1 ) Σ_ΔΔ^{−1} ( [x_iHD y_iHD z_iHD]_S2 − [x_jHD y_jHD z_jHD]_S1 )^T
This a priori test statistic is similar to the congruency statistic used, for example, to test for deformation in geodetic networks [4]. It is directly derived from Reference [46], in which gridded surface points were used to compute the test statistic. The proposed test statistic (9) is more general, as the points of the two surfaces under consideration are the ones where the HD occurs. As mentioned previously, the HD defined by (7) is based on the closest distance between two points at different epochs, so that the point i_HD may differ from j_HD.
We define Σ_ΔΔ as the VCM of the estimated surface differences. Using the error propagation law and neglecting cross-correlations, we have Σ_ΔΔ = (A_1^T Σ_iHD^{−1} A_1)^{−1} + (A_2^T Σ_jHD^{−1} A_2)^{−1}, where A_1 and A_2 correspond to the design matrices defined in (3) for epoch 1 and 2, respectively. Σ_iHD and Σ_jHD are the submatrices of Σ_S1 = (A_1^T Σ_noise,1^{−1} A_1)^{−1} and Σ_S2 = (A_2^T Σ_noise,2^{−1} A_2)^{−1} at the HD points on the two surfaces, whereas Σ_noise,1 and Σ_noise,2 are the VCM specified in Section 3.
T_AHD is defined similarly as the mean of the weighted sums of squares of the surface difference vectors over all pairs of corresponding (i.e., closest) points on the two surfaces.
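As a minimal sketch, the quadratic form T_HD can be evaluated as follows once the two HD points and the propagated VCM of their difference are available. The function and variable names are ours, and the propagation of Σ_ΔΔ is assumed to have been carried out beforehand as described above.

```python
import numpy as np


def hd_test_statistic(p_epoch2, p_epoch1, Sigma_dd):
    """Quadratic form T_HD = d^T Sigma_dd^{-1} d at the points where the HD occurs.

    p_epoch2, p_epoch1 : 3-vectors [x, y, z] of the HD points on the two surfaces
    Sigma_dd           : 3x3 VCM of the point difference (Sigma_DeltaDelta),
                         propagated from the LS adjustments as described above
    """
    d = np.asarray(p_epoch2, dtype=float) - np.asarray(p_epoch1, dtype=float)
    # solve() avoids forming the explicit inverse of Sigma_dd
    return float(d @ np.linalg.solve(np.asarray(Sigma_dd, dtype=float), d))
```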

Appendix B.2. Bootstrap Approach

Our test statistics are based on the computation of the HD and AHD. They are nonlinear functions of the estimated surface points, so that exact test distributions are unavailable. To overcome this drawback, we use a parametric bootstrap method in the sense of Reference [55] to make a test decision at a prescribed significance level α_test.
In this appendix, we provide a short description of the four steps of the bootstrapping method, which utilizes an MC simulation of the empirical p-value, according to Reference [56].
  • Testing step: the bootstrapping approach starts by computing T_HD and T_AHD, or their a posteriori counterparts, for the two estimated surfaces. Because these quantities have to be compared with a critical value that is not available analytically, a large number of observation vectors is generated under H_0. A so-called bootstrap sample is defined, here taken as the mean of the two estimated surfaces, that is, S_H0 = (S_2 + S_1)/2. We consider, therefore, the mean surface as not being deformed, that is, as generated under H_0.
  • Generating step: the generating step begins with the computation of the noise vectors N_1 and N_2 following the methodology of Section 3.1. Adding them to S_H0, we generate two noisy surfaces, which we approximate with regression B-spline surfaces. Finally, the HD and AHD between the two approximations are computed. For one iteration k_BS, we call the corresponding test statistics T_HD^k_BS and T_AHD^k_BS. Please note that we use a parametric approach, that is, the random numbers are generated independently, so that no resampling with replacement of the residuals of the LS approximation is performed.
  • Evaluation step: K_BS iterations are carried out. Following Reference [57], the loss of power of the test is proportional to the inverse of K_BS; we fixed K_BS = 99 to keep the computation manageable. The p-value is estimated by p̂_v,HD = (1/K_BS) Σ_{k_BS=1}^{K_BS} I(T_HD^k_BS > T_HD), according to Reference [56], to determine how extreme the observed values T_HD and T_AHD are in comparison with the K_BS values T_HD^k_BS and T_AHD^k_BS generated under H_0. I is an indicator function, which takes the value 1 when T_HD^k_BS > T_HD and 0 otherwise.
  • Decision step: a large p̂_v,HD indicates strong support of H_0 by the observations. H_0 is rejected if p̂_v,HD < α_test, where α_test is the specified significance level, usually taken as 0.05.
We obtain p̂_v,AHD by using T_AHD instead of T_HD. The methodology of the bootstrapping approach is summarized in Figure A2.
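A compact sketch of the resulting bootstrap loop is given below. Here, simulate_statistic is a placeholder for the generating step (noise simulation, B-spline approximation and HD or AHD computation), which depends on the models of Section 3 and is not spelled out; all names are ours.

```python
import numpy as np


def bootstrap_p_value(T_obs, S_H0, simulate_statistic,
                      K_BS=99, alpha_test=0.05, seed=None):
    """Parametric bootstrap p-value for the HD (or AHD) test statistic.

    T_obs              : test statistic computed from the two estimated surfaces
    S_H0               : mean surface used to generate observations under H0
    simulate_statistic : placeholder callable implementing the generating step
                         (noise simulation, B-spline approximation, HD/AHD
                         computation); returns one bootstrap statistic
    """
    rng = np.random.default_rng(seed)
    exceed = 0
    for _ in range(K_BS):
        T_k = simulate_statistic(S_H0, rng)   # one replication under H0
        exceed += int(T_k > T_obs)
    p_hat = exceed / K_BS                     # empirical p-value
    return p_hat, p_hat < alpha_test          # True in second slot: reject H0
```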
Figure A2. Flowchart explaining the bootstrap simulation.

References

  1. Hu, S.; Wallner, J. A second order algorithm for orthogonal projection onto curves and surfaces. Comput. Aided Geom. Des. 2005, 22, 251–260.
  2. Guthe, M.; Borodin, P.; Klein, R. Fast and accurate Hausdorff distance calculation between meshes. J. WSCG 2005, 13, 41–48.
  3. Alt, H.; Scharf, L. Computing the Hausdorff distance between sets of curves. In Proceedings of the 20th European Workshop on Computational Geometry (EWCG), Seville, Spain, 24–25 March 2004; pp. 233–236.
  4. Pelzer, H. Zur Analyse geodätischer Deformationsmessungen; Verlag der Bayer. Akad. d. Wiss.: München, Germany, 1971; p. 86.
  5. Paffenholz, J.A.; Huge, J.; Stenz, U. Integration von Lasertracking und Laserscanning zur optimalen Bestimmung von lastinduzierten Gewölbeverformungen. AVN Allg. Vermess.-Nachr. 2018, 125, 75–89.
  6. Alba, M.; Fregonese, L.; Prandi, F.; Scaioni, M.; Valgoi, P. Structural monitoring of a large dam by terrestrial laser scanning. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2006, 36, 1–6.
  7. Caballero, D.; Esteban, J.; Izquierdo, B. ORCHESTRA: A unified and open architecture for risk management applications. Geophys. Res. Abstr. 2007, 9, 08557.
  8. Besl, P.J.; McKay, N.D. A method for registration of 3D shapes. IEEE Trans. Pattern Anal. Mach. Intell. 1992, 14, 239–256.
  9. Holst, C.; Kuhlmann, H. Challenges and present fields of action at laser scanner based deformation analysis. J. Appl. Geod. 2016, 10, 17–25.
  10. Lague, D.; Brodu, N.; Leroux, J. Accurate 3D comparison of complex topography with terrestrial laser scanner: Application to the Rangitikei canyon (N-Z). ISPRS J. Photogramm. Remote Sens. 2013, 82, 10–26.
  11. Soudarissanane, S.; Lindenbergh, R.; Menenti, M.; Teunissen, P. Scanning geometry: Influencing factor on the quality of terrestrial laser scanning points. ISPRS J. Photogramm. Remote Sens. 2011, 66, 389–399.
  12. Wujanz, D.; Burger, M.; Mettenleiter, M.; Neitzel, F. An intensity-based stochastic model for terrestrial laser scanners. ISPRS J. Photogramm. Remote Sens. 2017, 125, 146–155.
  13. Wujanz, D.; Burger, M.; Tschirschwitz, F.; Nietzschmann, T.; Neitzel, F.; Kersten, T.P. Determination of intensity-based stochastic models for terrestrial laser scanners utilizing 3D-point clouds. Sensors 2018, 18, 2187.
  14. Holst, C.; Artz, T.; Kuhlmann, H. Biased and unbiased estimates based on laser scans of surfaces with unknown deformations. J. Appl. Geod. 2014, 8, 169–183.
  15. Jurek, T.; Kuhlmann, H.; Holst, C. Impact of spatial correlations on the surface estimation based on terrestrial laser scanning. J. Appl. Geod. 2017, 11, 143–155.
  16. Kermarrec, G.; Schön, S. On the Matérn covariance family: A proposal for modelling temporal correlations based on turbulence theory. J. Geod. 2014, 88, 1061–1079.
  17. Kermarrec, G.; Neumann, I.; Alkhatib, H.; Schön, S. The stochastic model for Global Navigation Satellite Systems and terrestrial laser scanning observations: A proposal to account for correlations in least squares adjustment. J. Appl. Geod. 2018, 13, 93–104.
  18. Mémoli, F.; Sapiro, G. Comparing point clouds. In The 2004 Eurographics/ACM SIGGRAPH Symposium; Boissonnat, J.-D., Alliez, P., Eds.; ACM: New York, NY, USA, 2004; p. 32.
  19. Monserrat, O.; Crosetto, M. Deformation measurement using terrestrial laser scanning data and least squares 3D surface matching. ISPRS J. Photogramm. 2008, 63, 142–154.
  20. Koch, K.R. Fitting free-form surfaces to laserscan data by NURBS. AVN Allg. Vermess.-Nachr. 2009, 116, 134–140.
  21. Lindenbergh, R.; Pietrzyk, P. Change detection and deformation analysis using static and mobile laser scanning. Appl. Geomat. 2015, 7, 65–74.
  22. Eilers, P.H.C.; Marx, B.D. Flexible smoothing with B-splines and penalties. Stat. Sci. 1996, 11, 89–121.
  23. Engleitner, N.; Jüttler, B. Patchwork B-spline refinement. Comput. Aided Des. 2017, 90, 168–179.
  24. Aguilera, A.M.; Aguilera-Morillo, M.C. Comparative study of different B-spline approaches for functional data. Math. Comput. Model. 2013, 58, 1568–1579.
  25. Bogacki, P.; Weinstein, S.E. Generalized Fréchet distance between curves. In Mathematical Methods for Curves and Surfaces II; Daehlen, M., Lyche, T., Schumaker, L.L., Eds.; Vanderbilt University Press: Nashville, TN, USA, 1998; pp. 25–32.
  26. Kim, Y.-J.; Oh, Y.-T.; Yoon, S.-H.; Kim, M.-S.; Elber, G. Efficient Hausdorff distance computation for freeform geometric models in close proximity. Comput. Aided Des. 2013, 45, 270–276.
  27. Dubuisson, M.P.; Jain, A.K. A modified Hausdorff distance for object matching. In Proceedings of the 12th IAPR International Conference on Pattern Recognition, Jerusalem, Israel, 9–13 October 1994; Volume 1, pp. 566–568.
  28. Scaioni, M.; Roncella, R.; Alba, M.I. Change detection and deformation analysis in point clouds: Application to rock face monitoring. Photogramm. Eng. Remote Sens. 2013, 79, 441–455.
  29. Jüttler, B. Bounding the Hausdorff distance of implicitly defined and/or parametric curves. In Mathematical Methods in CAGD; Lyche, T., Schumaker, L.L., Eds.; Academic Press: Oslo, Norway, 2000; pp. 1–10.
  30. Elber, G.; Grandine, T. Hausdorff and minimal distances between parametric free forms in R2 and R3. In Advances in Geometric Modeling and Processing, Proceedings of the 5th International Conference, GMP 2008, Hangzhou, China, 23–25 April 2008; Chen, F., Juettler, B., Eds.; Lecture Notes in Computer Science 4975; Springer: Berlin, Germany, 2008; pp. 191–204.
  31. Shapiro, M.D.; Blaschko, M.B. On Hausdorff Distance Measures; Technical Report UM-CS-2004-071; Department of Computer Science, University of Massachusetts Amherst: Amherst, MA, USA, 2004.
  32. Huttenlocher, D.P.; Klanderman, G.A.; Rucklidge, W.J. Comparing images using the Hausdorff distance. IEEE Trans. Pattern Anal. Mach. Intell. 1993, 15, 850–862.
  33. Chen, X.D.; Ma, W.; Xu, G.; Paul, J.C. Computing the Hausdorff distance between two B-spline curves. Comput. Aided Des. 2010, 42, 1197–1206.
  34. Boykov, Y.; Huttenlocher, D. A new Bayesian framework for object recognition. IEEE CVPR 1999, 2, 517–523.
  35. Boehler, W.; Marbs, A.A. 3D scanning instruments. In Proceedings of the CIPA WG6 International Workshop on Scanning for Cultural Heritage Recording, Corfu, Greece, 1–2 September 2002.
  36. Kauker, S.; Schwieger, V. A synthetic covariance matrix for monitoring by terrestrial laser scanning. J. Appl. Geod. 2017, 11, 77–87.
  37. Kermarrec, G.; Alkhatib, H.; Neumann, I. On the sensitivity of the parameters of the intensity-based stochastic model for terrestrial laser scanner. Case study: B-spline approximation. Sensors 2018, 18, 2964.
  38. Stein, M.L. Interpolation of Spatial Data: Some Theory for Kriging; Springer: New York, NY, USA, 1999.
  39. De Boor, C. On calculating with B-splines. J. Approx. Theory 1972, 6, 50–62.
  40. Piegl, L.; Tiller, W. The NURBS Book; Springer Science & Business Media: Berlin, Germany, 1997.
  41. Harmening, C.; Neuner, H. Choosing the optimal number of B-spline control points (Part 1: Methodology and approximation of curves). J. Appl. Geod. 2016, 10, 139–157.
  42. Alkhatib, H.; Kargoll, B.; Bureick, J.; Paffenholz, J.A. Statistical evaluation of the B-splines approximation of 3D point clouds. In Proceedings of the 2018 FIG Congress, Istanbul, Turkey, 6–11 May 2018.
  43. Burnham, K.P.; Anderson, D.R. Model Selection and Multimodel Inference; Springer: New York, NY, USA, 2002.
  44. Aspert, N.; Santa-Cruz, D.; Ebrahimi, T. Measuring errors between surfaces using the Hausdorff distance. In Proceedings of the IEEE International Conference on Multimedia and Expo, Lausanne, Switzerland, 26–29 August 2002; Volume 1, pp. 705–708.
  45. Kermarrec, G.; Alkhatib, H.; Paffenholz, J.-A. Original 3D-Punktwolken oder Approximation mit B-Splines: Verformungsanalyse mit CloudCompare. In Tagungsband GeoMonitoring 2019, Proceedings of the GeoMonitoring, Hannover, Germany, 14–15 March 2019; Alkhatib, H., Paffenholz, J.A., Eds.; Leibniz Universität Hannover: Hanover, Germany, 2019; pp. 165–176.
  46. Zhao, X.; Kermarrec, G.; Kargoll, B.; Alkhatib, H.; Neumann, I. Statistical evaluation of the influence of the stochastic model on geometry-based deformation analysis. J. Appl. Geod. 2017, 11, 4.
  47. Gentle, J.E. Random Number Generation and Monte Carlo Methods; Springer: Berlin, Germany, 1998.
  48. Matérn, B. Spatial variation – Stochastic models and their applications to some problems in forest survey sampling investigations. Rep. For. Res. Inst. Sweden 1960, 49, 1–144.
  49. Rueger, J.M. Electronic Distance Measurement; Springer: Berlin/Heidelberg, Germany, 1996.
  50. Gelfand, A.E.; Diggle, P.J.; Fuentes, M.; Guttorp, P. Handbook of Spatial Statistics; Chapman & Hall/CRC Handbooks of Modern Statistical Methods: London, UK, 2010.
  51. Kermarrec, G.; Paffenholz, J.-A.; Alkhatib, H. How significant are differences obtained by neglecting correlations when testing for deformation: A real case study using bootstrapping with terrestrial laser scanner observations approximated by B-spline surfaces. Sensors 2019, 19, 3640.
  52. Schacht, G.; Piehler, J.; Müller, J.Z.A.; Marx, S. Belastungsversuche an einer historischen Eisenbahn-Gewölbebrücke. Bautechnik 2017, 94, 125–130.
  53. Lenzmann, L.; Lenzmann, E. Strenge Auswertung des nichtlinearen Gauß-Helmert-Modells. AVN Allg. Vermess.-Nachr. 2004, 111, 68–73.
  54. Kaufman, C.G.; Shaby, B.A. The role of the range parameter for estimation and prediction in geostatistics. Biometrika 2012, 100, 473–484.
  55. Efron, B. Bootstrap methods: Another look at the jackknife. Ann. Stat. 1979, 7, 1–26.
  56. MacKinnon, J.G. Bootstrap Hypothesis Testing; Queen's Economics Department Working Paper No. 1127; Queen's University: Kingston, ON, Canada, 2007.
  57. Davidson, R.; MacKinnon, J.G. Bootstrap tests: How many bootstraps? Econom. Rev. 2000, 19, 55–68.
Figure 1. Left: the reference surface corresponds to a Gaussian probability density function. Right: time in increasing order starting at the beginning of each row is associated with each point. The time needed by the terrestrial laser scanner (TLS) to come back to its initial X-position is neglected in a first approximation.
Figure 2. Bottom and top: representation of the bridge under load with the location of the two patches L13 and L8 under consideration. The load was positioned approximately in the middle of the bridge, under which the TLS was positioned (images adapted from Reference [5]).
Figure 3. L13: effect of the point cloud reduction by gridding the raw observations. Left: B-spline fitting without gridding. Right: surface approximation with gridding, case (i).
Table 1. Approximation of the variance-covariance matrix (VCM) in the least-squares (LS) adjustment to estimate the B-spline surfaces for the three cases under consideration. σ_ρ² can take the values σ_ρ,1² or σ_ρ,2².
Case (i): true VCM Σ = σ_ρ² I.
Case (ii): true VCM Σ = Σ_MAC; simplification 1: Σ̂ = σ_ρ² I.
Case (iii): true VCM Σ = Σ_TC; simplification 1: Σ̂ = σ_ρ² I; simplification 2: Σ̂ = Σ_MAC.
Table 2. The optimal number of control points (CP) in both directions (n/m) determined with the Bayesian information criterion (BIC) for cases (i), (ii) and (iii), corresponding to different noise structures. Unit of STD is [m]. Case (i): simple case, heteroscedasticity. Case (ii): complexity degree 1: heteroscedasticity + MAC. Case (iii): complexity degree 2: heteroscedasticity + MAC + temporal correlations (low and high).
Case (i), Σ_noise = σ_ρ² I (σ_ρ,1 = 0.0007, σ_ρ,2 = 0.007): BIC (n/m) = 9/10
Case (ii), Σ_noise = Σ_MAC (σ_ρ,1 = 0.0007, σ_ρ,2 = 0.007): BIC (n/m) = 11/10
Case (iii), Σ_noise = Σ_TC (σ_ρ,1 = 0.0007, σ_ρ,2 = 0.007, [α, ν] = [1, 2]): BIC (n/m) = 11/10
Case (iii), Σ_noise = Σ_TC (σ_ρ,1 = 0.0007, σ_ρ,2 = 0.007, [α, ν] = [0.01, 2]): BIC (n/m) = 11/10
Table 3. Results of the Monte Carlo (MC) simulations for case (iii) under gradual misspecification of the stochastic model. Given are the Hausdorff distance (HD), its averaged derivation (AHD) and the difference ratio defined as 100·(HD − HD_ref)/HD_ref, where HD_ref and HD are the HD obtained under the reference VCM Σ_noise and under the different approximating VCM Σ̂, respectively. Units of STD and distances are [m].
Case (iii), Σ_noise = Σ_TC, σ_ρ,1 = 0.007, [α, ν] = [0.01, 2]:
Σ̂ = Σ_TC (reference VCM): HD 0.0084 (STD 3.6 × 10⁻³); AHD 0.0029 (STD 1.3 × 10⁻³)
Σ̂ = Σ_MAC (only MAC): HD 0.0175 (107%, STD 6.2 × 10⁻³); AHD 0.0090 (207%, STD 4.8 × 10⁻³)
Σ̂ = σ_ρ² I (no correlation): HD 0.0148 (76%, STD 6.1 × 10⁻³); AHD 0.0090 (207%, STD 4.9 × 10⁻³)
PC (no approximation): HD 0.0153 (82%, STD 5.7 × 10⁻³); AHD 0.0092 (135%, STD 4.8 × 10⁻³)
Table 4. The HD and AHD for the two points L8 and L13 under consideration. Cases (i) and (ii) correspond to a B-spline approximation with a reduction of the PC via gridding; case (iii) corresponds to B-spline fitting without gridding, using the whole PC. For all approximations, we took Σ̂ = Σ_MAC. PC means that no mathematical approximation was performed. The values are compared with the Euclidean distance obtained from the LT observations between the two epochs, as well as with the usual M3C2 distance.
L13 (σ_ρ = 0.5 mm); reference: LT = 4.96 mm, M3C2 = 4.70 mm
Gridded observations, B-splines (i), 74 points/cell: AHD 4.90 mm, HD 5.62 mm
Gridded observations, B-splines (ii), 300 points/cell: AHD 4.80 mm, HD 5.53 mm
No gridding, B-splines (iii): AHD 5.21 mm, HD 6.70 mm
No gridding, PC (raw observations): AHD 5.58 mm, HD 7.24 mm

L8 (σ_ρ = 0.5 mm); reference: LT = 4.07 mm, M3C2 = 3.20 mm
Gridded observations, B-splines (i), 16 points/cell: AHD 4.29 mm, HD 9.71 mm
Gridded observations, B-splines (ii), 66 points/cell: AHD 4.06 mm, HD 5.39 mm
No gridding, B-splines (iii): AHD 4.51 mm, HD 11.00 mm
No gridding, PC (raw observations): AHD 5.09 mm, HD 9.82 mm
