Article

Robust Land-Surface Parameterisation for Repeated Topographic Surveys in Dynamic Environments with Adaptive State-Space Models

by Daniel R. Newman and Yuichi S. Hayakawa *
Faculty of Environmental Earth Science, Hokkaido University, N10W5, Kita Ward, Sapporo 060-0810, Hokkaido, Japan
* Author to whom correspondence should be addressed.
JSPS International Research Fellow.
Remote Sens. 2025, 17(12), 1993; https://doi.org/10.3390/rs17121993
Submission received: 25 February 2025 / Revised: 3 June 2025 / Accepted: 5 June 2025 / Published: 9 June 2025

Abstract

The proliferation of unmanned aerial vehicles has enabled cost-effective topographic surveys to be collected at high frequencies. However, terrain analyses rarely take advantage of the information provided by repeated observations. As a result, the ability to characterize the topographic surface and surface changes resulting from dynamic surface processes is undermined by the accumulation and propagation of uncertainty. Accurate surface model parameterisation benefits all derived local characteristics, such as surface slope and curvature. To address this, several advances in adaptive Kalman filtering were evaluated with respect to surface model coefficient estimation error, and the sensitivity to initial noise statistics was tested. A simple surface with exactly known parameters was simulated for a set of common geomorphological change regimes and survey temporal distributions. The results confirmed that all Kalman filters reduced error relative to a least-squares estimator under static conditions. Only adaptive filters outperformed a least-squares estimator under dynamic conditions, where average error was often reduced by approximately 50%, and up to 80%. However, adaptive Kalman filters exhibited up to a 40% increase in maximum error relative to a least-squares estimator in response to sudden surface changes, returning to lower error within 15–25 epochs. The adaptive Kalman filters were sensitive to the overestimation of measurement noise greater than two orders of magnitude from the true noise, resulting in degraded performance. Adaptive Kalman filters consistently and substantially reduced spatio-temporal coefficient error, which includes an estimate of local vertical displacement. The results demonstrated that adaptive Kalman filters address challenges related to the sensitivity of conventional Kalman filter performance to sub-optimal parameterisation, and they are robust estimators for both terrain analysis and surface change analysis when multiple surveys are available. Therefore, adaptive Kalman filters are well-suited for analyzing the local properties of topographic surfaces in general.

1. Introduction

Advances in remote sensing platforms, including Unmanned Aerial Vehicles (UAVs) and sensors, have increased the ability to collect high-resolution topographic data with flexible revisit times. Data collected from UAVs equipped with Light Detection And Ranging (LiDAR) sensors [1], cameras [2] and Interferometric Synthetic Aperture Radar (InSAR) [3] are readily processed into Digital Elevation Models (DEMs), and other elevation data sets [4]. While it is generally the case that survey capabilities are mission specific (e.g., [5,6]), it is now possible to survey several square kilometers within hours in the field [7]. Moreover, vertical and horizontal accuracy are frequently between 1 and 20 cm, with spatial resolutions between 5 and 50 cm, depending on the type of sensor, environmental conditions, and vegetation density, among other factors (e.g., [1,2,8,9,10,11,12]).
The properties of UAV-based topographic surveys are ideal for monitoring a wide variety of natural hazards [13]. The collection of multi-temporal topographic data also enables geomorphic change detection, which attempts to make inferences about the surface evolution based on observations of topographic change between surveys [14]. For example, InSAR is commonly used to infer motion from differential phase shifts in line-of-sight observations [15], and this information has proven useful for the validation of subsidence models [16]. For other sensors, a common practice to introduce the temporal dimension to elevation observations is through the subtraction of DEMs representing data collected at different times. This operation is called Difference of DEMs (DoD) when applied to raster elevation models, and is often used to calculate vertical or volumetric differences (i.e., the displacement of matter) resulting from geomorphological processes (e.g., [14,17,18,19]). This represents a dynamic analogue to the process–form relationship between geomorphological processes and the resulting landforms and landscape [20], where dynamic processes result in observable surficial changes over time.
However, a major problem with directly comparing multiple sets of observations is the inclusion and propagation of error and uncertainty. Several interdependent factors combine to make accounting for local uncertainty challenging [21]. In addition to artifacts [22], DEMs contain errors from interpolation, differentiation, and displacement [23]. It is well known that errors present in the source data are propagated to results and may be exaggerated by an analysis [24], and thus DEM subtraction, which uses two DEMs, suffers doubly. Minimizing error in surface models is important when calculating surface derivatives such as slope and various curvatures [25,26] because the sensitivity of these calculations to noise and error increases with derivative order [27]. Kalman filters are a class of discrete-time state-space models that are well-suited to address noise and minimize uncertainty in dynamic systems using multiple observations. In fact, Kalman filters are optimal estimators if the noise statistical properties are known and Gaussian [28]. Non-linear, non-Gaussian state models such as particle filters and other sequential Monte Carlo methods may be used with better results because they make fewer assumptions about the data; however, this comes at the cost of increased computation [29]. However, the linear surface models used to extract spatial partial derivatives are compatible with conventional linear Kalman filter estimators.
Despite these advantages, Kalman filters have seen limited application to digital terrain analysis. DEM generation from InSAR data is a common application for Kalman filters because of their suitability for processing raw InSAR data [30,31]. Similar use cases are found in bathymetric applications where Kalman filters provide robust estimates of the sea floor position [32,33]. Alternatively, Kalman filters have also been applied to outlier detection and removal [34]. Wang [35] used a spatial Kalman filter (i.e., single time with neighboring observations) to remove outliers prior to surface differentiation. Similarly, Lawrence and Celestine [36] used Kalman filters to estimate elevation and its first-order spatial partial derivatives to smooth DEM values. Orti et al. [37] used Kalman filters to integrate multi-user topographic observations to improve gully delineation. However, the sensitivity to initial parameterisation of noise statistics, the difficulty in estimating these parameters, and the large impact they have on performance [38] are all major challenges that have limited the adoption of Kalman filters for terrain analysis. This problem is further complicated by the fact that many local surface models require an accurate characterization of local noise statistics. This is not feasible if these noise statistics vary spatially (e.g., [21,39]) or temporally. While automatic calibration algorithms exist (e.g., [40,41,42,43]), they require many epochs of data to search for the optimal solution. Winiwarter et al. [44] developed a spatio-temporal Kalman filter to compare LiDAR point clouds, reporting favourable noise reduction properties but also sensitivity to large and sudden surface changes. Thus, a performant filter must adapt not only to unknown and potentially variable noise statistics but also to a variety of local surface dynamics.
Rather than denoising DEMs or removing artifacts, the purpose of this research is to improve the accuracy of local surface model coefficient estimates and associated temporal changes under static and dynamic geomorphological regimes. This research combines several advances in adaptive Kalman filtering into an algorithm that exploits the information provided by multiple topographic observations from repeat surveys. Several filter configurations are implemented, and the sensitivity of each filter to sub-optimal noise statistic parameterisation is evaluated using simulated surface models with exactly known values. Multiple dynamic regimes are simulated to assess how each filter responds to model violations (i.e., observations that are inconsistent with the model). Establishing filter performance characteristics in a controlled setting is necessary to understand estimator performance and develop confidence in all derived data.

2. Methods

2.1. Advances in State-Space Models and the Kalman Filter

State-space models estimate a set of state variables, which may be observed or unobserved, given observations over time. These models have widespread adoption in engineering and science to predict future states, estimate current states, or smooth previous states [45]. The general form of a discrete-time state-space model and the associated measurement model are
x_k = A_k x_{k−1} + B_k u_k + ω_k,    ω_k ~ N(0, Q_k)     (1)
ẑ_k = C_k x_k + D_k u_k + ν_k,    ν_k ~ N(0, R_k)     (2)
where x_k ∈ ℝ^{b×1} is the state vector with b states, ẑ_k ∈ ℝ^{a×1} is the output vector with a elements, u_k is a control vector, ω_k and ν_k are noise vectors drawn from multivariate normal distributions with process noise covariance Q_k ∈ ℝ^{b×b} and measurement noise covariance R_k ∈ ℝ^{a×a}, respectively, A_k ∈ ℝ^{b×b} is the state transition matrix, which extrapolates the state vector to the next time step, B_k is the input matrix, C_k ∈ ℝ^{a×b} is the measurement matrix, which converts the state variables to outputs, D_k is the feed-forward matrix, and k ∈ ℕ is a discrete-time index.
The Kalman filter algorithm is a popular recursive implementation of a state-space model that minimizes estimate error [46] and has mathematical connections to Bayesian filtering [47]. The recursive property only requires that the current state and state uncertainty matrix ( P ) are stored, and they are updated as new information is collected. A Conventional Kalman filter (CKF) is a two-stage, optimal, linear, minimum variance-of-error filter that fuses a priori predictions with a posteriori corrections once observations are available at each time step. The linear equations of the Kalman filter are presented for a single epoch (k) below:
Stage 1:
Prediction
x_{k|k−1} = A_k x_{k−1|k−1} + B_k u_k     (3)
P_{k|k−1} = A_k P_{k−1|k−1} A_k^⊤ + Q_k     (4)
Stage 2:
Measurement update
S_k = C_k P_{k|k−1} C_k^⊤ + R_k     (5)
K_k = P_{k|k−1} C_k^⊤ S_k^{−1}     (6)
x_{k|k} = x_{k|k−1} + K_k (z_k − C_k x_{k|k−1})     (7)
P_{k|k} = (I − K_k C_k) P_{k|k−1}     (8)
where z_k ∈ ℝ^{a×1} is a vector of observations, S_k ∈ ℝ^{a×a} is the innovation covariance matrix, and K_k ∈ ℝ^{b×a} is the Kalman gain matrix.
Initial values for x, P, R, and Q must be provided. x can be set to zero and updated with the filter functions, or estimated in some way prior to Kalman filtering. A common practice to initialize P involves creating a diagonal matrix with a large number (e.g., 10³) so that the first update stage favours incoming data, and it is subsequently updated to more appropriate values. Estimating R and Q is significantly more challenging, and both can strongly impact filter performance [28]. A common practice is to estimate these values from a combination of expert knowledge of the system and tuning [48]. Q is particularly challenging to estimate because it represents an abstraction of a system with potentially unobservable components. Underestimating Q limits error minimization, and overestimating Q increases uncertainty such that incoming noisy measurements dominate the state estimate. In addition to optimization strategies that use large data sets to tune these parameters (e.g., [41,42,43,49]), several advances have been made to Kalman filter implementations to improve performance with sub-optimal noise statistic estimates.
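For illustration, the prediction and measurement update stages of Equations (3)–(8) can be written compactly with NumPy. The following sketch is a minimal, control-free example; the function names and the use of a dense matrix inverse are illustrative assumptions rather than part of the implementation evaluated in this paper.

import numpy as np

def ckf_predict(x, P, A, Q):
    # Prediction stage, Equations (3) and (4), assuming no control input (B u = 0)
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    return x_pred, P_pred

def ckf_update(x_pred, P_pred, z, C, R):
    # Measurement update stage, Equations (5) to (8)
    S = C @ P_pred @ C.T + R                 # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)      # Kalman gain
    x = x_pred + K @ (z - C @ x_pred)        # corrected state
    P = (np.eye(len(x_pred)) - K @ C) @ P_pred
    return x, P

In practice, P would be initialized with a large diagonal value (e.g., 10³·I) as described above so that the first update favours the incoming observations.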

2.1.1. Adaptive Filters

Several techniques have been developed to address the challenge of estimating state and measurement covariance. Adaptive filters are a popular solution that involves modulating various parameters to minimize the negative impact of sub-optimal initial estimates. Sage and Husa [50] developed a covariance-matching technique to evaluate the auto-covariance of state and measurement error vectors over a number of time lags. As the number of lags increases, the covariance estimates approach the true values at the expense of computational resources to store the residuals. An innovation-based approach was later developed to update covariance estimates in real-time [51,52]. Sorenson and Sacks [53] developed a fading memory filter as an alternative approach to reduce the effect of previous epochs on current estimates by using an adaptive factor to inflate state uncertainty. Yang et al. [54] developed this idea further, adapting the factor based on robust maximum-likelihood estimation. Subsequent research combined the adaptive factors to mix the current covariance estimate with that of the previous epoch, thus addressing the limitations of computing the auto-covariance [55] with an adaptive factor [56].
This research employs adaptive Kalman filters based on an adaptive estimator [54,57]. An adaptive parameter α ∈ [0, 1] is used to minimize the impact of previous epochs, favouring recent observations in state estimates. Yang et al. [54] note that the adaptive estimator yields a variance-Weighted Least Squares (WLS) estimator when α_k = 0, the normal Kalman equations when α_k = 1, and balances the contributions of these estimators when 0 < α_k < 1. An adaptive estimator can be expressed using the following Kalman equations:
x_{k|k} = x_{k|k−1} + K̄_k (z_k − C_k x_{k|k−1})     (9)
P_{k|k} = (I − K̄_k C_k) P_{k|k−1} / α_k     (10)
where K̄_k is the adaptive Kalman gain matrix:
K̄_k = (1/α_k) P_{k|k−1} C_k^⊤ [ (1/α_k) C_k P_{k|k−1} C_k^⊤ + R_k ]^{−1}     (11)
The calculation of α_k is based on the discrepancy between the predicted state and an estimate based on current information using Equation (12). Yang and Gao [58] demonstrated that an approximately optimal adaptive factor quantifies the state discrepancy relative to the system uncertainty. This can be evaluated by a learning score υ_k using Equation (13). The learning score is normalized using a descending three-segment function [54] based on a normal segment, a weight reduction segment, and an elimination segment, defined by Equation (14).
Δx_k = x_{k|k−1} − (C_k^⊤ R_k^{−1} C_k)^{−1} C_k^⊤ R_k^{−1} z_k     (12)
υ_k = ‖Δx_k‖ / √(tr(P_{k|k−1}))     (13)
α(υ, c_1, c_2) =
    1,                                       υ ≤ c_1
    (c_1/υ) · [(c_2 − υ)/(c_2 − c_1)]²,      c_1 < υ ≤ c_2
    0,                                       υ > c_2     (14)
where c 1 delineates the normal segment from the weight reduction segment and c 2 delineates the elimination segment. The lower limit defines the state discrepancy at which a standard Kalman filter is used and the upper limit defines the state discrepancy at which a Weighted Least Squares (WLS) estimator is used. Both upper and lower limits can be determined experimentally, or by using an appropriate probability distribution [59]. Alternatively, a two-segment version that asymptotically approaches zero can be used to control the initiation of adaptation using Equation (15).
α(υ, c_1) =
    1,            υ ≤ c_1
    (c_1/υ)²,     υ > c_1     (15)
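The two adaptive factor functions can be expressed directly from Equations (14) and (15). The following Python sketch uses hypothetical function names and assumes the learning score υ and the thresholds c_1 and c_2 are supplied by the caller.

def alpha_three_segment(upsilon, c1, c2):
    # Descending three-segment function, Equation (14)
    if upsilon <= c1:
        return 1.0                                                 # normal segment: behave as a CKF
    if upsilon <= c2:
        return (c1 / upsilon) * ((c2 - upsilon) / (c2 - c1)) ** 2  # weight reduction segment
    return 0.0                                                     # elimination segment: fall back to WLS

def alpha_two_segment(upsilon, c1):
    # Asymptotic two-segment function, Equation (15)
    return 1.0 if upsilon <= c1 else (c1 / upsilon) ** 2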

2.1.2. Covariance Estimation

The matrices R k and Q k can also be implemented to be estimated recursively. However, both matrices are interdependent, and varying both simultaneously may lead to unstable estimates [60]. Only an adaptive Q k is considered since measurement error data are often available for topographic applications. A common approach called covariance-matching uses a moving window [50] to approximate the true values (c.f., [51,52], for the derivation). Equation (16) is used to calculate the residual vector, which is averaged over several time lags in Equation (17), yielding an estimate of process noise covariance using Equation (18). Note that the estimate is based on the residual derived from the a posteriori estimate instead of the innovation, which is based on the a priori estimate. It is possible to use either the innovation or residual; however, the residual is generally regarded as the superior measure [52,60].
δ_k = z_k − C_k x_{k|k}     (16)
E[δ_k δ_k^⊤] = (1/m) Σ_{j=0}^{m−1} δ_{k−j} δ_{k−j}^⊤     (17)
Q_k = E[δ_k δ_k^⊤] + K_k E[δ_k δ_k^⊤] K_k^⊤     (18)
where m is the number of time lags.
An adaptive parameter is used to blend previous and current process covariance estimates, serving a similar function as α in Section 2.1.1. This allows an on-line estimation scheme [55] that emulates a windowed approach while avoiding the computing resources required to store and analyze the windowed time lags using Equation (19). This can be combined with a covariance-scaling approach to inflate or deflate the process noise covariance [61] while preserving the internal structure of the process noise model (i.e., the noise model used to initialize Q 0 ). A noise variance parameter q is used to scale the process uncertainty model based on the ratio between the estimated and predicted process uncertainty matrices in Equation (20). Equation (21) applies the resulting scalar to the process noise matrix of the next epoch. Note that Equation (20) can be modified by taking the square root to smooth the multiplier, as suggested by Ding et al. [61]; however, this is not required.
Q̂_k = α_k Q_k + (1 − α_k) K_k δ_k δ_k^⊤ K_k^⊤     (19)
q_{k+1} = q_k · tr(Q̂_k) / tr(Q_k)     (20)
Q_{k+1} = Q_k × q_{k+1}     (21)
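As a sketch of the on-line covariance blending and scaling in Equations (19)–(21), the step can be written as follows. The function name and argument layout are assumptions made for illustration; delta is the a posteriori residual from Equation (16), and K is the gain used in the last update.

import numpy as np

def scale_process_noise(Q, q, alpha, K, delta):
    # Blend previous and current process covariance estimates, Equation (19)
    Q_hat = alpha * Q + (1.0 - alpha) * (K @ np.outer(delta, delta) @ K.T)
    # Update the scalar noise variance parameter, Equation (20)
    q_next = q * np.trace(Q_hat) / np.trace(Q)
    # Rescale the process noise model while preserving its internal structure, Equation (21)
    return Q * q_next, q_next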

2.1.3. M-Type Robust Estimators

Huber's [62] maximum likelihood-type estimators (M-type, or M-theory) are a class of robust estimators that are suitable when noise statistics are unknown [63]. The premise is that individual measurement error covariance is inflated to match the corresponding normalized residual. A robust M-type Kalman filter is simultaneously robust to outliers [54] and to underestimated measurement noise covariance. This addresses common sources of non-systematic error in topographic data, such as non-terrain artifacts, by inflating the uncertainty of affected observations. The robust equivalent weight matrix (ρ̄) of R is obtained from Huber's algorithm, which minimizes the cost function
J(x_k) = Σ_{i=1}^{b} ρ(r_i)     (22)
using the score function
ρ(r_i) =
    (1/2) r_i²,               |r_i| ≤ c_3
    c_3 |r_i| − (1/2) c_3²,   |r_i| > c_3     (23)
where r_i is an element of the innovation vector standardized by the corresponding measurement variance (i.e., R_{i,i}) and c_3 is a constant usually set between 1.3 and 2.0 [54].
Setting the partial derivative of Equation (22) to zero yields the equivalent weight functions using Equation (24) for independent measurement covariance and Equation (25) for dependent measurement covariance [64].
ρ̄_{i,i} =
    1/σ_{i,i},                 |r_i| ≤ c_3
    c_3 / (σ_{i,i} |r_i|),     |r_i| > c_3     (24)
ρ̄_{i,j} =
    1/σ_{i,j},                               |r_i| ≤ c_3, |r_j| ≤ c_3
    c_3 / (σ_{i,j} max(|r_i|, |r_j|)),       |r_i| > c_3, |r_j| > c_3     (25)
Finally, the robust measurement noise covariance matrix is obtained by inverting the equivalent weight matrix
R̄_k = ρ̄_k^{−1}     (26)
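A minimal sketch of the M-type re-weighting for the independent (diagonal) case of Equations (23), (24), and (26) is given below. It assumes a diagonal R and a supplied innovation vector; the function name and default c_3 value are illustrative.

import numpy as np

def robust_measurement_covariance(innovation, R, c3=1.5):
    sigma = np.diag(R)                           # measurement variances R[i, i]
    r = innovation / np.sqrt(sigma)              # standardized residuals r_i
    # Equivalent weights, Equation (24): unchanged when |r_i| <= c3, down-weighted otherwise
    weights = np.where(np.abs(r) <= c3, 1.0 / sigma, c3 / (sigma * np.abs(r)))
    # Equation (26): the robust covariance is the inverse of the equivalent weight matrix
    return np.diag(1.0 / weights)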

2.2. Robust Self-Adaptive Algorithm

The algorithm proposed by Yang et al. [54] builds upon the CKF described in Section 2.1, implementing a robust M-type Kalman filter by replacing R_k with R̄_k and implementing the adaptive α parameter described in Section 2.1.1. Several modifications are made to improve the ability to parameterize the adaptive functions in the model. First, the learning score in Equation (13) is replaced with a related metric that accounts for correlated state estimate error, the Normalized Estimate Error Squared (NEES) in Equation (27). NEES is a χ²-distributed variable that normalizes the state discrepancy using the state uncertainty matrix [40]. Equation (28) provides a convenient means to select the upper threshold for Equation (14) based on a one-tailed probability (γ), determining whether the state estimate is sufficiently different from the current WLS estimate given the amount of uncertainty in the system, where the degrees of freedom equal the length of the state discrepancy vector. This function provides an intuitive and state-length-agnostic method to parameterize c_1 and c_2 based on probabilities γ_1 and γ_2, respectively. Similarly, thresholds can be parameterized using a two-tailed distribution where γ_1 = 1 − γ_2. Using a small lower threshold increases the probability of α-based inflation occurring, which in turn increases the state discrepancy required to trigger another adaptive response, while the upper threshold limits maximum error.
υ_k = Δx_k^⊤ P_{k|k−1}^{−1} Δx_k     (27)
c = F_{χ²}^{−1}(γ, dim(Δx_k))     (28)
where F_{χ²}^{−1} is the inverse cumulative distribution function of the χ² distribution.
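In practice, Equation (28) reduces to a call to the inverse χ² cumulative distribution function. The snippet below uses SciPy and assumes, for illustration only, a state discrepancy vector of length six together with the probabilities adopted later in Section 2.4.1.

from scipy.stats import chi2

dim_dx = 6                        # length of the state discrepancy vector (illustrative)
c1 = chi2.ppf(0.05, df=dim_dx)    # lower threshold (gamma_1 = 0.05): end of the normal segment
c2 = chi2.ppf(0.95, df=dim_dx)    # upper threshold (gamma_2 = 0.95): start of the elimination segment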
Defining the elimination segment prevents large errors from over-inflating uncertainty, which approaches infinity as α approaches 0. However, adaptive filters encounter division by zero errors when α = 0 according to Equations (10) and (11). This is addressed by replacing the adaptive estimator in Equation (9) with the WLS estimator defined in Equation (29). Similarly, the adaptive uncertainty in Equation (10) is re-initialized to account for the decoupled state and state uncertainty using Equation (30) followed by CKF Equations (6) and (8). This modified version of the Yang et al. [54] algorithm will be referred to as the Adaptively-Robust Kalman filter (ARK) hereafter.
x̂_{k|k} = (C_k^⊤ R_k^{−1} C_k)^{−1} C_k^⊤ R_k^{−1} z_k     (29)
P_{k|k−1} = A_k A_k^⊤     (30)
ARK was further modified to include the adaptive process noise covariance scaling approach described in Section 2.1.2. This version will be referred to as the Robust Self-Adaptive Kalman filter (RSAK) because Q is able to self-select a more appropriate variance based on the α parameter. Algorithm 1 implements RSAK, which becomes ARK by omitting line 16. Note that the algorithm is shown for readability and is not optimized for efficiency.
Algorithm 1 Robust Self-Adaptive Kalman filter
Require: P, R, Q, x, q, γ_1, γ_2, c_3
1:  c_1 ← F_{χ²}^{−1}(γ_1, dim(x))
2:  c_2 ← F_{χ²}^{−1}(γ_2, dim(x))
3:  for all k do
4:      Δt ← t_k − t_{k−1}
5:      Update A, Q with Δt, q
6:      CKF predict() function using Equations (3) and (4)
7:      function update(z)
8:          R̄ ← ρ̄(z, R, c_3)^{−1}
9:          Δx ← x − (C^⊤ R̄^{−1} C)^{−1} C^⊤ R̄^{−1} z
10:         α ← α(Δx^⊤ P^{−1} Δx, c_1, c_2)
11:         if α > 0 then
12:             K̄ ← (1/α) P C^⊤ [ (1/α) C P C^⊤ + R̄ ]^{−1}
13:             x ← x + K̄ (z − C x)
14:             P ← (I − K̄ C) P / α
15:             δ ← z − C x
16:             q ← q × tr( α Q + (1 − α) K̄ δ δ^⊤ K̄^⊤ ) / tr(Q)
17:         else
18:             x ← (C^⊤ R̄^{−1} C)^{−1} C^⊤ R̄^{−1} z
19:             K̄ ← A A^⊤ C^⊤ ( C A A^⊤ C^⊤ + R̄ )^{−1}
20:             P ← (I − K̄ C) A A^⊤
21:         end if
22:     end function
23:     k ← k + 1
24: end for

2.3. Surface Model and Problem Formulation

Consider the elevation of a surface as the graph of a bivariate polynomial with monomial basis in Cartesian coordinates with finite degree d
z(x, y) = β_0 + β_1 x + β_2 y + β_3 x² + β_4 xy + β_5 y² + … + β_c y^d     (31)
These polynomial models are used to analyze the surface geometry about a point based on the vector of model coefficients (β). Using a local coordinate system and differentiating a surface model with d ≥ 2 at the origin (x = y = 0) yields the spatial derivatives up to second order found in Equation (32). These spatial derivatives are used to characterize several surface properties, such as slope, aspect, and several curvatures [25] using differential geometry. Thus, the accurate estimation of the model coefficients directly contributes to more accurate characterization of local surface properties.
p = ∂z/∂x,    q = ∂z/∂y,    r = ∂²z/∂x²,    s = ∂²z/(∂x∂y),    t = ∂²z/∂y²     (32)
Now consider the change in surface elevation in discrete time separated by the time interval Δt = t_k − t_{k−1}. Because both polynomials belong to the same ring ℝ[x, y], differences can be calculated by polynomial subtraction with Equation (33), and the average rate of change can be calculated by scalar division with Equation (34), with both operating on the coefficients and resulting in a vector (β̇) containing the spatio-temporal relationships. Note that the intercept of Equation (33) is an estimator of DoD.
z_Δ(x, y) = z_k(x, y) − z_{k−1}(x, y) = (β_{0,k} − β_{0,k−1}) + (β_{1,k} − β_{1,k−1}) x + … + (β_{c,k} − β_{c,k−1}) y^d     (33)
and the rate of change
ż(x, y) = z_Δ(x, y) / Δt     (34)
approximating the spatio-temporal partial derivatives
[ ∂z/∂t    ∂²z/(∂x∂t)    ∂²z/(∂y∂t)    ∂³z/(∂x²∂t)    ∂³z/(∂x∂y∂t)    ∂³z/(∂y²∂t) ]     (35)
The surface model is parameterised from a sufficiently large set of spatially distributed elevation observations sampled from a neighbourhood L about location p_0 = (x, y),
(z, p) ← { (z_n, f(p_0, p_n)) | p_n ∈ L }     (36)
where the function f centers the local coordinate system on p 0 .
This can be expressed in matrix form by performing monomial basis expansion on the position observations
φ_d(p) = [ p_0^{i_0} p_1^{i_1} | i ∈ ℕ², |i| ≤ d ]     (37)
such that
[ z_0 ⋯ z_a ]^⊤ = [ φ_d(p_0) ; ⋯ ; φ_d(p_a) ] [ β_0 ⋯ β_c ]^⊤,    i.e.,    z = Φ β
where z ∈ ℝ^{a×1}, Φ ∈ ℝ^{a×c}, and β ∈ ℝ^{c×1}.
The coefficients of Equations (31) and (34) can be obtained by Ordinary Least Squares (OLS) regression using Equations (38) and (39), respectively. Note that Equation (39) requires that both sets of observations are coincident.
β = (Φ^⊤ Φ)^{−1} Φ^⊤ z     (38)
β̇ = (Φ^⊤ Φ)^{−1} Φ^⊤ (z_k − z_{k−1}) / Δt     (39)
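To make the observation model concrete, the sketch below builds a second-order design matrix Φ ordered as in Equation (31) for a hypothetical 5 × 5 local window and evaluates the OLS estimators of Equations (38) and (39). The window spacing, coefficient ordering, and function names are illustrative assumptions.

import numpy as np

def design_matrix(x, y):
    # Monomial basis expansion of Equation (37) for d = 2, ordered as in Equation (31)
    return np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])

offsets = (np.arange(5) - 2) * 0.15                 # local coordinates of a 5 x 5 window, 0.15 m spacing
xx, yy = np.meshgrid(offsets, offsets)
Phi = design_matrix(xx.ravel(), yy.ravel())

def ols_beta(Phi, z):
    # Equation (38): OLS surface coefficients
    return np.linalg.solve(Phi.T @ Phi, Phi.T @ z)

def ols_beta_dot(Phi, z_k, z_km1, dt):
    # Equation (39): OLS spatio-temporal (rate of change) coefficients from coincident observations
    return np.linalg.solve(Phi.T @ Phi, Phi.T @ (z_k - z_km1) / dt)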
Representing the surface model within the framework of the Kalman filter is achieved in Equation (40) by rearranging Equation (34) and expressing it in matrix form. The state vector x now includes the surface and spatio-temporal coefficients (with length b = 2c), and the state transition matrix A is defined to extrapolate coefficients over the time interval. Equation (40) integrates the spatio-temporal coefficients over the time interval and adds them to the previous state to arrive at current model coefficients representing the mutated surface. The measurement matrix C in Equation (41) is extended from Equation (37) with a zero block so that the β̇ coefficients do not contribute to the predicted observations.
x_k = A_k x_{k−1} + ω_k = [ I , I·Δt_k ; 0 , I ] [ β_{k−1} ; β̇_{k−1} ] + ω_k,    ω_k ~ N(0, Q_k)     (40)
ẑ_k = C_k x_k + ν_k = [ Φ_k , 0 ] [ β_k ; β̇_k ] + ν_k,    ν_k ~ N(0, R_k)     (41)
where I ∈ ℝ^{c×c} is an identity matrix and I·Δt_k is the identity matrix multiplied by the scalar time interval. Note the use of block matrices regarding matrix dimensions from previous equations.
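The block structure of Equations (40) and (41) can be assembled as follows for a surface model with c coefficients; the helper names and the default c = 6 (a second-order model) are assumptions for illustration.

import numpy as np

def transition_matrix(dt, c=6):
    # Equation (40): beta is advanced by beta_dot * dt, and beta_dot is carried forward
    I = np.eye(c)
    return np.block([[I, I * dt], [np.zeros((c, c)), I]])

def measurement_matrix(Phi):
    # Equation (41): only the beta block maps to observations; the beta_dot block is zeroed
    a, c = Phi.shape
    return np.hstack([Phi, np.zeros((a, c))])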

Modifications to the Kalman Update Function

Since the spatio-temporal coefficients are not used by the observation model, additional measures must be taken to estimate these coefficients when the WLS estimator is triggered (i.e., α = 0). The feed-forward matrix from Equation (2) is repurposed and set to D_k = [ 0 , Φ_k ] to estimate the spatio-temporal coefficients using the difference between current and previous observations scaled by the elapsed time. Thus, Equation (29) is modified as
x̂_{k|k} = (C_k^⊤ R̄_k^{−1} C_k)^{−1} C_k^⊤ R̄_k^{−1} z_k + (D_k^⊤ R̂_{k|k−1}^{−1} D_k)^{−1} D_k^⊤ R̂_{k|k−1}^{−1} (z_k − z_{k−1}) / Δt_k     (42)
where R̂_{k|k−1} is a diagonal matrix
R̂_{k|k−1}[i,i] = VAR(R̄_{i,i,k}) + VAR(R̄_{i,i,k−1}) − 2 COV(R̄_{i,i,k}, R̄_{i,i,k−1})     (43)
Note that this requires observations from the previous epoch, which introduces a dependency between the error terms from Equation (2) for both sets of observations. Equation (43) is simplified by assuming that the covariance term is 0, which results only in the addition of the robust measurement error variances from each epoch. While this assumption is dubious and unlikely to be true, this estimator is only used as an emergency correction to extreme model violations. Line 19 in Algorithm 1 was modified to use R̂_{k|k−1} to reflect the elevated uncertainty.
The NEES score function, Equation (27), was modified to use only the observable components of the state vector and the associated uncertainty using Equation (44). This prevents models from being penalized for diverging from the rate of change estimates derived from Equation (42), which are expected to carry increased uncertainty. The additional noise present in the WLS rate of change estimate may impact α and unnecessarily inflate state uncertainty if the entire state vector is used.
υ = Δx_{[i≤c]}^⊤ (P_{[i≤c, j≤c]})^{−1} Δx_{[i≤c]}     (44)

2.4. Evaluation

The performance of Kalman filters in various configurations was evaluated using simulated surfaces. Local surface model coefficients are rarely known in practice, requiring simulations to establish exactly known state coefficients (x̃_k). All simulations lasted 1000 epochs, and each simulation was run n = 500 times to account for different random noise vectors added to the observations. Random number generators were seeded based on the simulation index to ensure that all simulations received the same noise vectors. Filter performance is reported as the average Euclidean distance between the estimated and true state vectors over all repeated simulations for a given epoch using Equation (45) and is referred to as error hereafter. All epoch metrics are averaged across the 500 simulations in the same manner as Equation (45).
ε_k = (1/n) Σ_{i=1}^{n} ‖x_{k,i} − x̃_{k,i}‖     (45)

2.4.1. Filter Configurations

The OLS, CKF, ARK, and RSAK estimators were implemented with multiple models of process noise and compared. OLS is a conventional surface model estimator and CKF is a non-adaptive Kalman filter; both are included for reference. ARK and RSAK are both M-type robust estimators; however, ARK is only α adaptive, while RSAK is α and Q adaptive. All estimators were initialized with the following parameters:
γ_1 = 0.05,    γ_2 = 0.95,    c_3 = 1.5,    x_0 = OLS(z_0),    P_0 = I,    R_0 = I·r_0²,    Q_0 = Q(Δt, q_0)
where γ_1 is the opposite tail of γ_2, which is slightly smaller than the values Yang and Xu [59] used to identify faulty observations; the initial state estimate x_0 is calculated with Equation (38) and padded with zeros; r_0 is an initial estimate of the measurement Root Mean Squared Error (RMSE) and is squared to represent error variance; q_0 is an initial estimate of the process noise variance; and Q_0 is calculated with one of Equations (46)–(48).
All Kalman filters were initialized with first-order continuous white noise, which models the effect of zero-mean continuous variability in kinematic components [65] and is defined by Equation (48). Both ARK and RSAK were initialized with process uncertainty described by multiple continuous white noise model orders in addition to the first-order model due to the high sensitivity and variability of Δ t . Zero-order continuous white noise models are less sensitive to Δ t and only affect the surface model coefficients, and are defined by Equation (47). Finally, Equation (46) was used to examine the effect of α only adaptation by ignoring process uncertainty (i.e., a null matrix). The noise model order is labelled as a suffix to each estimator. Additionally, the suffix ‘A’ was added to denote estimators configured with an asymptotic α function that use Equation (15).
Q(Δt, q) = 0     (46)
Q(Δt, q) = [ I·Δt , 0 ; 0 , 0 ] · q     (47)
Q(Δt, q) = [ I·Δt³/3 , I·Δt²/2 ; I·Δt²/2 , I·Δt ] · q     (48)
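The three process noise configurations in Equations (46)–(48) can be generated with a single helper, sketched below; the function signature is an illustrative assumption, with c surface coefficients and order selecting the null, zero-order, or first-order model.

import numpy as np

def process_noise(dt, q, c=6, order=1):
    I = np.eye(c)
    Z = np.zeros((c, c))
    if order is None:
        return np.zeros((2 * c, 2 * c))                     # Equation (46): null model (alpha-only adaptation)
    if order == 0:
        return np.block([[I * dt, Z], [Z, Z]]) * q          # Equation (47): zero-order model
    return np.block([[I * dt**3 / 3, I * dt**2 / 2],        # Equation (48): first-order model
                     [I * dt**2 / 2, I * dt]]) * q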
Two temporal distributions were used to simulate the effect of unevenly spaced surveys in units of days. Δt_k was drawn from either a normal or uniform distribution based on an average of 183 days (i.e., biannual surveys). The normal distribution represents some variability in survey date, while the uniform distribution represents random temporal sampling, new knowledge of system activity, or opportunistic surveys. The parameters used for the normal and uniform distributions are given in Equations (49) and (50), respectively. This variability was added to evaluate the impact of different process noise models, which are dependent on Δt and affect covariance estimation and scaling. The time distributions are visualized in Figure 1A.
Δt_{k,N} ~ N(183, 15)     (49)
Δt_{k,U} ~ U(30, 336)     (50)
Combinations of initial values for r_0 ∈ {10⁻³, 10⁻², 10⁻¹, 10} and q_0 ∈ {10⁻²⁴, 10⁻¹⁶, 10⁻⁸} were used to test the sensitivity of all filters to initial parameterisation using all cases and temporal distributions. This analysis was paired with a parameter space search on the ranges 10⁻¹ < r_0 < 10¹, 10⁻¹⁶ ≤ q_0 ≤ 10⁰, and 0.01 < γ_1 < 0.58 for a more targeted assessment. The parameter space search was conducted to identify the point at which the poor estimation of noise statistics began to degrade filter performance. All parameter space searches were performed using the uniform time distribution, the irregular case, and the true noise statistics.

2.4.2. Simulating a Simple Surface

A true state vector is defined to represent the temporal evolution of an initial surface model by a change model. A second-order surface model starts with the coefficients given in Equation (51). Dynamism was simulated by extrapolating the true state by a change model with coefficients given in Equation (52), or with zeros. The surface and change models were based on a rough field estimate. The observation model was based on a 5 × 5 grid with 0.15 m spacing. The true state was projected into observation space to generate true observations and was corrupted with noise drawn from a multivariate normal distribution, Equation (53), before use in the update function.
β̃_0 = [ 130.0, 0.425, 0.725, 0.185, 0.095, 0.160 ]^⊤     (51)
β̃̇ = [ 6.920×10⁻⁴, 1.500×10⁻⁴, 1.155×10⁻⁵, 1.230×10⁻³, 4.615×10⁻⁴, 2.650×10⁻⁴ ]^⊤     (52)
z_k = Φ_k β̃_k + ν_k,    ν_k ~ N(0, I·r̃²)     (53)
where r̃ = 0.01 is a scalar representing one centimeter elevation RMSE to simulate DEM measurement noise with exactly known parameters.
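A sketch of the observation simulation in Equation (53) is shown below, assuming the design matrix Φ from Section 2.3 and true surface coefficients such as those in Equation (51). The seeding scheme shown is illustrative of, not identical to, the per-simulation seeding described above.

import numpy as np

def simulate_observations(Phi, beta_true, r=0.01, seed=0):
    # Equation (53): project the true surface into observation space and add Gaussian noise
    rng = np.random.default_rng(seed)
    a = Phi.shape[0]
    noise = rng.multivariate_normal(np.zeros(a), np.eye(a) * r**2)
    return Phi @ beta_true + noise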
The temporal application of the change model to the initial model was governed by four cases of surface dynamism. Each case represents a simplified temporal pattern designed to approximate dynamic surface processes and apply stress to each filter by deliberately violating the model with unpredictable, temporally non-linear changes. A small amount of process noise (q̃) with exactly known parameters was added to the state vector at every epoch.
Case 1:
Static models a perfectly static surface by never adding the change model or process noise to the state (i.e., q ˜ = 0 ). Thus, the only change apparent to the filter is from observation noise. This case acts as a control by eliminating all surface change and isolating the effects of repeated observations.
Case 2:
Periodic models periodic processes such as seasonal erosion and deposition by scaling the change model continuously between [−0.25, 0.75] using Equation (54). Note that this imparts a periodic trend based on the epoch rather than the time difference. A first-order white process noise vector with variance q̃ = 10⁻¹⁶ was added at every epoch.
Case 3:
Irregular models a surface that starts and stops changing intermittently, mimicking some landslide behaviors. This is implemented by selecting five random ‘start’ epochs, which then receive constant change for 25 epochs before returning to static. A first-order white process noise vector with variance q̃ = 10⁻¹⁶ was added at every epoch.
Case 4:
Catastrophe models a surface that experiences a natural disaster, such as a large mass movement. This is implemented as a single epoch where 5 m of elevation is subtracted and the remaining coefficients are randomized. Two catastrophic events are simulated 365 epochs apart, and the second event was subjected to an additional 25 epochs of constant change to simulate sustained effects following a large disturbance. Note that the catastrophe represents a highly non-linear impulse error signature with non-Gaussian properties. A first-order white process noise vector with q̃ = 10⁻¹⁶ was added at every epoch.
β̃̇_k = β̃̇ × [ 0.5 cos( (2π/365) k + 2π − arccos(−0.25/0.5) ) + 0.25 ]     (54)
Each case is visualized in Figure 1B using the uniform temporal distribution. 3D visualizations of the initial surface, an epoch of change, and a catastrophe are presented in Figure 1C–E.

2.4.3. Spatial Analysis

A preliminary assessment of a spatially distributed array of Kalman filters was conducted on a small 500 × 500 cell DEM with a spatial resolution of 0.15 m, collected with UAV-LiDAR near Atsuma, Hokkaido, Japan (42.780°N, 142.020°E). A field of spatially auto-correlated noise was generated using a turning bands simulation [66] with five iterations and an auto-correlation range of 75 m. All noise fields were centered and scaled such that the mean was zero and the standard deviation was 0.1 m. This experiment simulates a static surface where the initial uncorrupted DEM was assumed to be true, and 50 epochs of corrupted observations were analyzed using 5 × 5 cell observation windows. The ARK-∅A filter was compared to an OLS estimator to eliminate the choice of initial process noise, which does not apply to a static surface. All ARK-∅A filters were initialized with r_0 = 1.0 and P_0 = I, and the state was initialized with an OLS estimate. An example of a single epoch of this process is visualized in Figure 2.
The estimated elevation (i.e., the intercept β_0) for each of the 250,000 filters was recorded at each epoch and compared to the true DEM. Additionally, the spatial distributions of minimal curvature were generated for a qualitative assessment. Minimal curvature (k_min) is the curvature of a principal section with the lowest value of curvature [25,26] and can be calculated from the spatial partial derivatives defined in Equation (32) using Equation (55). Minimal curvature values tend to be large when the local geometry resembles a channel, which skews negatively when linear errors are abundant [27].
k_min = [ −(1 + q²)r + 2pqs − (1 + p²)t ] / [ 2√((1 + p² + q²)³) ] − √( [ ((1 + q²)r − 2pqs + (1 + p²)t) / (2√((1 + p² + q²)³)) ]² − (rt − s²)/(1 + p² + q²)² )     (55)
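Minimal curvature can be evaluated from the partial derivatives of Equation (32) via the mean and Gaussian curvatures, which is algebraically equivalent to the form of Equation (55) given above. The sketch below is illustrative; the clamping of small negative values under the square root is a numerical safeguard added here, not part of the original formulation.

import numpy as np

def minimal_curvature(p, q, r, s, t):
    w = 1.0 + p**2 + q**2
    H = -((1 + q**2) * r - 2 * p * q * s + (1 + p**2) * t) / (2 * np.sqrt(w**3))  # mean curvature
    K = (r * t - s**2) / w**2                                                     # Gaussian curvature
    return H - np.sqrt(np.maximum(H**2 - K, 0.0))                                 # Equation (55)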

3. Results

3.1. Aggregate Performance

The average error values for all cases, r_0, and q_0, excluding the individual parameter space search results, were aggregated, and the error distributions are summarized in Figure 3. The rows record the error for the β and β̇ coefficients and the columns record the temporal distributions. The increased measurement noise due to DEM subtraction is evident when comparing the OLS model error distributions between the β and β̇ coefficients. This is particularly apparent for the uniform distribution (Figure 3B,D). The filters with null or zero-order process noise models tended to perform better than the first-order filters. Interestingly, the ARK-∅ and ARK-∅A filters achieved low error across all simulations by α adaptation alone, independently of any process noise model. While the error distributions for the β coefficients were relatively similar for OLS and the Kalman filters, all Kalman filters had much lower β̇ coefficient error.

3.2. Sensitivity to Initial Parameters

The simulation results were summarized as the average and maximum state error over all epochs using the normal time distribution for the static, periodic, irregular, and catastrophic cases in Table 1, Table 2, Table 3 and Table 4, respectively. These tables provide a coarse impression of the impact of r_0 and q_0 choice combinations on filter performance. In general, all Kalman filters achieved lower average error than OLS. However, the maximum error for all adaptive Kalman filters was commonly near or slightly higher than OLS. Otherwise, relative to the OLS estimator, the adaptive Kalman filters reduced average error by approximately 50% for most simulations (Table 1, Table 2, Table 3 and Table 4), and up to approximately 80% (Table 1). All filters performed particularly poorly when noise was greatly overestimated (i.e., r_0 = 10), and the performance of CKF at very low process noise (i.e., q_0 = 10⁻²⁴) was extremely poor. The RSAK-0 filter also performed particularly poorly for the periodic case (Table 2), and both RSAK-1 and RSAK-1A had notably large maximum errors for the irregular and catastrophe cases (Table 3 and Table 4). The choice of q_0 had minimal impact on average and maximum error for all adaptive Kalman filters when it was accurately estimated or underestimated, and performance approached that of the OLS estimator when it was overestimated. Typical maximum error for the adaptive Kalman filters was 30% or 40% greater than the OLS maximum error when r_0 ≤ r̃, and the two were roughly equivalent when r_0 = 10⁻¹ (Table 1, Table 2, Table 3 and Table 4). Note that ARK-∅A performance data are not available for the catastrophe case with the lowest noise parameter settings due to fatal matrix inversion errors.
The r_0 parameter space was searched to find the value at which r_0 began to negatively affect filter error. Simulations used the uniform time distribution and the irregular case, with q_0 = q̃, to assess the impact of r_0 values between 10⁻¹ ≤ r_0 ≤ 10¹ on filter performance (Figure 4). The null and zero-order noise models began to diverge in performance at r_0 ≈ 1.51 and r_0 ≈ 0.81, respectively, with RSAK-0 demonstrating a particularly clear divergence (Figure 4A,C). A smooth degradation in performance with increasing r_0 for ARK-∅A contrasts with the sudden jumps observed in the bounded ARK-∅ variant (Figure 4A,B). The first-order filters had different points of divergence, with ARK-1 diverging smoothly at r_0 ≈ 1.51 and RSAK-1 diverging suddenly at r_0 ≈ 0.81 (Figure 4D,E). These results establish that adaptive filter performance was stable when the r_0 estimate was within approximately two orders of magnitude of the true measurement noise.
The q_0 parameter space was searched to find the value at which q_0 began to negatively affect error. Simulations used the uniform time distribution and the irregular case, with r_0 = r̃, to assess the impact of q_0 values between 10⁻¹⁶ ≤ q_0 ≤ 10⁰ on filter performance (Figure 5). The zero-order noise model showed that error remained consistently low when q_0 ≤ 10⁻⁴, and then suddenly increased to approximately the same error as the OLS estimator (Figure 5A). All three first-order noise model filters showed a smooth transition from low error to matching the OLS estimator performance when q_0 ≥ 10⁻¹⁰ (Figure 5B,C). All three first-order noise model filters also showed an error impulse at q_0 = 10⁻⁸. Greatly overestimating q_0 slightly elevated the error relative to the OLS estimator for all tested filters; however, this effect was less pronounced for RSAK-0.
The γ_1 parameter space was searched to find the value at which γ_1 began to negatively affect the error. Simulations used the uniform time distribution and the irregular case, with r_0 = r̃, q_0 = q̃, and γ_2 = 0.95, to assess the impact of γ_1 values between 0.01 ≤ γ_1 ≤ 0.58 on filter performance (Figure 6). The ARK-∅A and RSAK-1 filters both experienced increasingly large error impulses as γ_1 increased. The other filters showed relatively low error irrespective of the noise model used. However, during long periods of static surface dynamics (e.g., 600 < k < 800), some filters achieved the lowest error with γ_1 = 0.05 while others achieved it with γ_1 = 0.58. In general, lower γ_1 reduced the maximum error and increased the minimum error, effectively narrowing the error envelope around the WLS estimate used in the state discrepancy calculation (e.g., Figure 6C).

3.3. Response to Case Dynamics

The average state error (ε_k) is plotted as a function of epoch and shows how the performance of each filter reacted to the dynamics characterized by each case. Figure 7 shows the results for the uniform time distribution, r_0 = 10⁻³, and q_0 = 10⁻²⁴ as an example. Other simulation results follow a similar pattern, and performance can be inferred by cross-referencing with Section 3.2. Recall that only the measurement noise vector is added in the static case, and that no change or process noise is added. The error for each filter converged on a lower limit defined by the model noise parameters, which is lowest for CKF and RSAK-0 and highest for RSAK-1 and RSAK-1A (Figure 7A). The periodic case demonstrated an inability of RSAK-0 to track the constant periodic change, while only RSAK-1 and RSAK-1A were able to consistently outperform OLS (Figure 7B). The irregular case in general, and the inset image for the catastrophe case, exemplify how each filter responded to sudden change (Figure 7C,D). RSAK-1 required fewer epochs to transition from inferior to superior performance relative to OLS than all other filters (approximately 12 epochs for RSAK-1 and 30 epochs for the other filters, Figure 7D). The asymptotic filters exhibited large error impulses, while all of the other filters exhibited comparable maximum error across cases, with the notable exception of RSAK-1 on the second catastrophic event (Figure 7C,D). The differential performance based on process noise model order demonstrates the trade-off between error minimization and responsiveness through the noise parameters.
Figure 8 shows the average β and β̇ error for the irregular and catastrophe cases and all filters except CKF, where r_0 = 10⁻¹ and q_0 = 10⁻¹⁶. The WLS estimator and state reset (i.e., α = 0) can be observed truncating the error of both components to the same error as the OLS estimator (Figure 8). This pattern of error truncation was only observed for the β coefficients when r_0 = 10⁻¹, as documented in Table 4. The close relationship between both sets of coefficients is demonstrated as the small error in the β̇ coefficients is extrapolated into larger β coefficient errors. However, the non-linear temporal signature of the catastrophe is observed as a very large β̇ impulse error in the asymptotic filters, despite the absence of a similar error signature in the β component. Otherwise, all non-asymptotic adaptive filters yielded a lower β̇ error than the OLS estimator.

3.4. Mode of Adaptation

The α_k and q_k values were recorded for the normal time distribution with r_0 = r̃ and q_0 = q̃ to demonstrate the mode by which the ARK and RSAK filters adapted to the irregular case. Figure 9A shows that non-asymptotic filters responded to the initiation and cessation of change with a WLS state reset, triggered by the α = 0 condition. RSAK-1 and RSAK-1A were the only filters that consistently retained α values near 1. All other filters maintained α values near 0.95, irrespective of the process noise model. Figure 9B,C record the q_k multiplier used to scale the Q matrix for the RSAK-0 and RSAK-1 models, where both filters adjusted the original value immediately at the first epoch. RSAK-0 estimated q_k values around 10⁻⁷ and was resilient to the added observation noise vector (observed as a small confidence interval), but fluctuated over time. RSAK-1 stabilized at q_k ≈ 10⁻¹¹, with greater variance observed between simulations. However, neither filter accurately estimated the true process noise.

3.5. Spatial Distributions

A 2D array of ARK-∅A and OLS filters was constructed to analyze a small DEM. The time series of predicted elevation error relative to the true surface is shown in Figure 10A with solid lines. The OLS estimate error was consistently equivalent to the standard deviation of the added noise. The ARK-∅A error demonstrated an immediate reduction in estimation error from 0.1 m by 0.04 m (a reduction of approximately 40%), where it varied from epoch 10 onward. Both OLS and ARK-∅A increased the negative skew of the minimal curvature; however, OLS increased the negative skew by a larger margin than ARK-∅A, Figure 10A. The spatial distributions of minimal curvature in Figure 10B1–D1,B2–D2 show that the spatial pattern of minimal curvature remained negatively affected by spatially auto-correlated noise throughout the experiment. However, upon closer inspection, the ARK-∅A distribution in Figure 10D2 showed a substantially lower impact of the linear artifacts than the OLS distribution in Figure 10D1.

4. Discussion

The use of Kalman filters for topographic analysis compared favourably to OLS estimators when multiple surveys are available. While OLS estimators do not risk degraded performance due to poor parameterisation, they do not account for measurement or system noise, nor do they estimate uncertainty. Kalman filters demonstrated the potential to lower error relative to OLS; however, they risk severe error if parameterisation is sub-optimal or error is non-linear, resulting in unreliable state estimates, as reported elsewhere [38,44]. CKF exemplified this behavior, with some parameterisations yielding lower error than the OLS estimator but often producing much higher errors. All adaptive filters were effective at modeling the dynamics defined by all cases tested and are superior to OLS if the surface can be assumed static. The adaptive Kalman filters generally outperformed OLS and CKF across all parameterisations for the dynamic cases, reducing the average error by up to 80%. Maximum errors were contained by the WLS state reset to within approximately 30% above the OLS maximum, addressing the propensity of Kalman filters to become unreliable in response to model violations. These patterns held true for both time distributions, with the only notable differences being a slightly increased error variance for the uniform distribution and a higher maximum error, exceeding OLS by approximately 40%. While adaptive Kalman filters do not guarantee superior performance compared to OLS estimators, the potential for error reduction is greater than the potential error increase.
All adaptive filters outperformed OLS by a large margin when estimating the spatio-temporal coefficients (Figure 3). The extrapolation of the β̇ coefficients strongly penalizes errors, while DEM subtraction incorporates multiple measurement noise vectors (Figure 3D). This is likely related to the larger innovation variance systematically deflating the Kalman gain, reducing the impact of measurements on the subsequent state estimates and allowing a faster error reduction in the β̇ components. Thus, the state estimate relies more on the predicted observations based on β̇ than on subsequent measurements. However, the use of asymptotic filters for highly non-linear dynamics, such as the catastrophe case, should be approached with caution due to the large impulse error in the β̇ components and the lack of a WLS reset (e.g., Figure 8D). While adding second-order temporal coefficients (i.e., acceleration) may improve the response to and recovery from non-linear change, it may result in extreme sensitivity to noise and irregular temporal spacing. Analyzing the local spatial properties of surface change has received little attention in terrain analysis. A similar idea has been explored in the related field of surface metrology, where measuring geometric deviations between surfaces is of interest [67]. The spatio-temporal coefficients may provide information for characterizing local surface change, such as vertical displacement or surface geometry. These data are complementary to non-local surface change analyses (e.g., translation) derived from InSAR or other methods (e.g., [30,68]).
Parameter selection is largely dependent on the desired error envelope. Filters were relatively sensitive to the choice of r_0, favoring overestimation by one order of magnitude, but not more than two orders of magnitude (Figure 4 and Table 1, Table 2, Table 3 and Table 4). Slightly overestimating r_0 relative to the true noise consistently yielded the lowest average error. Overestimating q_0 forced the estimator to converge on the current estimate by increasing uncertainty to favor incoming observations (Figure 5). All RSAK filters immediately adjusted the q_0 value, rendering the initial estimate somewhat irrelevant. However, there is a theoretical upper limit, because a large enough q_0 may inflate uncertainty in Equation (4) such that no observed state discrepancies are large enough to yield α < 1 after normalization, preventing adaptation from occurring. Thus, all adaptive filters should be initialized with a greatly underestimated q_0. The error envelope is strongly affected by γ_1, where low γ_1 thresholds allow the current WLS estimate to guide adaptation and control most model divergences. This diverges from the notion of optimality devised by Yang and Gao [58], where a much larger margin of error is recommended before initiating adaptation. This discrepancy is likely due to the frequent violation of model assumptions by the various surface dynamics, which reflects a theoretical incompatibility with temporally non-linear dynamics. However, the results demonstrated that γ_2 acts as a guard against excessively large error due to non-linear surface dynamics by truncating error at the WLS estimate (e.g., Figure 8). However, there is no obvious solution to guarantee that Kalman filter error remains below WLS error, because Kalman filters do not typically operate with awareness of true values. Instead, error for complex dynamics can only be contained within a margin about the WLS estimate defined by γ.
The choice of filter and noise model determines the reaction time to a dynamic event. For the static case, RSAK-0 and CKF had the lowest minimum error and RSAK-1 and RSAK-1A had the highest, with all ARK filters performing in between (e.g., Figure 7A,C). However, RSAK-0 performed very poorly in the periodic case, and therefore cannot be considered reliable for dynamic environments. The irregular and catastrophe cases show that RSAK-1 recovers from a state reset in the fewest epochs, requiring approximately 10 epochs to achieve lower error than OLS compared to the approximately 25 epochs required by the other adaptive filters (Figure 7D). However, the performance discrepancy between RSAK-1 and ARK-1 for the catastrophe case reveals unpredictable performance driven by the Q estimation, suggesting that ARK filters are more reliable. Alternatively, ARK-∅ and ARK-∅A were able to achieve comparable performance with the other filters without the complications of designing the process noise model. This suggests that, at least in the tested cases, α-based adaptation alone can effectively absorb process noise. As a result, ARK-∅ balances several performance trade-offs and represents a suitable filter for general analysis (i.e., without a priori process knowledge). The asymptotic filters, especially ARK-∅A, have a smooth adaptive function that may preserve spatial continuity when mapping spatial distributions by avoiding a threshold-dependent state reset. However, the absence of the state reset permits very large impulse error in the β̇ components of the catastrophe case (Figure 8) and caused fatal linear algebra errors when r_0 = 10⁻³. In practice, catastrophic events such as landslides or earthquakes are temporally well-defined, so the error impulse for asymptotic filters can be partially addressed by manually re-initializing the filter when an event occurs, at the cost of reintroducing noise.
Topographic surveys likely have spatially variable measurement uncertainty resulting from interactions between slope and point density [69,70], slope and horizontal position error and vertical noise [14,39,71,72], or the inclusion of DEM artifacts such as those derived from vegetation and point classification (e.g., [73]). Performance was somewhat sensitive to the choice of r 0 , requiring an accurate estimate by a margin between one and two orders of magnitude before causing unstable performance (Figure 4). The total survey measurement error is the sum of a reasonably well-known set of sources [74,75,76]. Total error estimates for LiDAR surveys typically span a single order of magnitude (e.g., [75,77,78]), although many factors contribute to the actual magnitude. This suggests that the sensitivity of the adaptive Kalman filters to r 0 falls within common measurement error ranges, and that the adaptive filters are therefore suitable for application on LiDAR-derived topographic data.
The inclusion of non-terrain data as surface measurements remains a pervasive challenge known to severely degrade data quality [79,80] and is compounded when comparing multiple surveys. These errors should be considered as a separate issue from other sources of measurement uncertainty because they occur at much larger scales and tend not to have systematic causes (e.g., LiDAR point mis-classification). While several algorithms address this issue (e.g., [81,82,83]), none guarantee that all artifacts are addressed. Yang and Xu [59] show that a three-segment M-type robust function can be used to nullify faulty or extremely unexpected measurements, which could identify and remove artifacts. This observation-space strategy mirrors the state reset used in this research to mitigate unexpected state estimates. Thus, the current configuration of M-type robust estimation provides an adaptive mechanism to address locally varying measurement noise, provides a potential mechanism to address artifacts through the c_3 parameter, and can be extended to implement the three-segment function used by Yang and Xu [59]. Using larger windows can also apply some smoothing to mitigate the influence of artifact errors on the surface coefficients while simultaneously implementing multiscale topographic characterization. However, models based on large neighborhoods may interact negatively with the M-type robust function by inflating uncertainty due to topographic roughness rather than measurement noise. Newman et al. [84] showed that the RMSE of a quadratic surface model increases with neighborhood size, increasing the probability of artificial measurement uncertainty inflation from a poor model fit over large or rough areas (i.e., truncation error). Other methods for multiscale terrain analysis, such as Gaussian smoothing [85], smooth excess surface roughness, which may improve performance in a multiscale context.
The similar performance between the normal and uniform time distributions suggests that the adaptive Kalman filters are suitable for imprecise survey temporal distributions (Figure 3). Even with recent technological advances in UAV platforms, mission planning for topographic surveys is subject to a wide variety of constraints that limit when surveys can actually be conducted. While these discrete-time filters are appropriate for most topographic survey mission designs and field campaigns, some technologies, such as terrestrial LiDAR, have been used to conduct continuous monitoring with high temporal resolution (e.g., minute- to hour-scale intervals, [86,87]). These shorter intervals may benefit from continuous-time variants of the Kalman filter (e.g., [88]). Otherwise, the interval between surveys must be short enough to resolve the rate of motion of the measured surface and avoid over-generalizing the temporal dynamics. The large error impulses observed for the catastrophe case (Figure 7D and Figure 8B,D) reinforce the poor suitability of Kalman filters for highly non-linear surface dynamics, which is consistent with other conclusions [44]. While the state reset provides a partial solution, it may be more suitable to reformulate the problem for Extended Kalman filters, Unscented Kalman filters, or particle filters, all of which are more robust to non-linear and non-Gaussian data [89,90].
The Kalman filter has considerably higher memory requirements than the OLS or finite difference methods commonly used in terrain analysis due to the additional uncertainty matrices and the larger set of state coefficients. A DEM or point cloud consists of millions of locations that must be processed for a single epoch before the addition of new data. Given the large memory requirement of a Kalman filter, any spatially distributed implementation will likely be too large to fit into memory and will likely require database-backed memory management using permanent storage (e.g., [91]). However, several properties are favourable to managing these limitations. Several matrices, such as A and C for regularly gridded raster data, and an independent version of R, can be shared between multiple locations to reduce memory and storage requirements. The implementation presented in Algorithm 1 uses the Δt, rₖ, and qₖ scalars to reconstruct the extrapolation and covariance matrices at the beginning of each epoch to minimize persistent data requirements. As a result, only the state vector containing the model coefficients, the state uncertainty matrix recording local coefficient uncertainty, and the scalar qₖ must be saved between epochs. This also raises the issue of measurement independence, since spatial auto-correlation is expected over short distances, especially for interpolated DEMs. The results of the spatial analysis in Figure 10 demonstrate that Kalman filters are capable of minimizing even highly spatially auto-correlated noise, reducing the error of all elevation estimates by approximately 40%. While it is possible to encode spatial auto-correlation in the measurement noise matrix [92], the R̂ function currently returns only a diagonal matrix. Future research is required to conduct a more rigorous evaluation of spatially auto-correlated noise in order to further improve Kalman filter performance.
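The per-location persistence strategy can be illustrated with a minimal sketch. It assumes a quadratic surface model whose six coefficients are paired with rate terms, a block "constant-velocity" extrapolation matrix, a random-walk process-noise model acting on the rates, and an independent diagonal R; these structures are standard kinematic Kalman formulations used here for exposition, not the exact matrices of Algorithm 1.

```python
import numpy as np

N_COEF = 6  # assumed quadratic surface model: six coefficients plus six rate terms

def rebuild_epoch_matrices(dt, r_k, q_k, n_obs):
    """Rebuild the extrapolation and covariance matrices from the three
    persisted scalars (dt, r_k, q_k) at the start of an epoch.

    Illustrative sketch only: the block structure and the random-walk
    process-noise model are assumptions, not the article's Algorithm 1.
    """
    I = np.eye(N_COEF)
    Z = np.zeros((N_COEF, N_COEF))
    A = np.block([[I, dt * I],              # coefficients advance by their rates
                  [Z, I]])                  # rates persist between epochs
    Q = np.zeros((2 * N_COEF, 2 * N_COEF))
    Q[N_COEF:, N_COEF:] = q_k * dt * I      # process noise applied to the rate terms
    R = r_k * np.eye(n_obs)                 # independent measurement noise
    return A, Q, R

# Only x (coefficients and rates), P (their covariance), and the scalar q_k
# need to be stored per grid cell; A, Q, and R are rebuilt each epoch, which
# keeps the per-location storage footprint small.
```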

5. Conclusions

True surface parameters are rarely known exactly in practice. Thus, the quality of surface characterization depends on estimator quality, with little ability to validate parameter estimates. This research sought to evaluate the potential for adaptive Kalman filters to exploit information from multiple topographic surveys to improve surface model coefficient estimates. The results showed that adaptive Kalman filters are generally able to reduce error relative to OLS estimators by 50–80%. However, error was elevated by approximately 40% for up to 25 epochs when an unexpected change in the dynamic regime occurred.
The combination of adaptive filters and M-type robust estimators simultaneously addressed several issues limiting the application of Kalman filters to topographic analysis. Many of the tested adaptive Kalman filter configurations achieved favourable performance and were relatively insensitive to initial parameterisation. Process noise estimation in combination with adaptation yielded the fastest error recovery times; however, its performance was not reliable. Instead, robust filter performance was achieved by relying on early α-only adaptation with small γ₁ thresholds and an underestimated or null q₀. ARK-∅ and its asymptotic variant ARK-∅A achieved consistently favourable performance while bypassing process noise entirely, representing general options for surface modeling and mapping, respectively. Performance was insensitive to overestimating the measurement noise by one to two orders of magnitude from the true variance, and the lowest error was achieved when r₀ was overestimated by one order of magnitude. Despite comparable performance to current methods, highly non-linear surface dynamics remain a challenge for adaptive filters.
Adaptive Kalman filters have several implications for terrain analysis beyond this research. Primarily, the results show that adaptive Kalman filters reliably reduce the influence of noise on surface parameter estimates, resulting in lower error. This increases the accuracy of local surface characteristics derived from the surface coefficients, which is particularly valuable when calculating higher-order surface derivatives or when minimizing the negative impact of noise and artifacts. Moreover, the spatio-temporal coefficients provide novel information with which local surface dynamics can be interrogated from a time series of surface observations, which may be useful for validating other models (e.g., volumetric change) or characterizing the impact of various dynamic processes on the Earth's surface. The ability to achieve low error for static and dynamic landscapes over vastly different time distributions is highly valuable for terrain analysis in general. Several limitations must be overcome to apply this method in a spatially distributed fashion. Memory management is particularly important given the relatively large state vector and uncertainty matrices. Subsequent research must address these limitations and assess the spatial distribution of multiple independent model errors.

Author Contributions

Conceptualization, D.R.N. and Y.S.H.; methodology, D.R.N.; software, D.R.N.; validation, D.R.N.; formal analysis, D.R.N.; investigation, D.R.N.; resources, D.R.N. and Y.S.H.; data curation, D.R.N.; writing—original draft preparation, D.R.N.; writing—review and editing, Y.S.H.; visualization, D.R.N.; supervision, Y.S.H.; project administration, D.R.N. and Y.S.H.; funding acquisition, D.R.N. and Y.S.H. All authors have read and agreed to the published version of the manuscript.

Funding

This study was supported by JSPS KAKENHI Grant Numbers JP23KF0180, JP23K20541, and JP23K23639.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Hashemi-Beni, L.; Jones, J.; Thompson, G.; Johnson, C.; Gebrehiwot, A. Challenges and opportunities for UAV-based digital elevation model generation for flood-risk management: A case of Princeville, North Carolina. Sensors 2018, 18, 3843. [Google Scholar] [CrossRef] [PubMed]
  2. Uysal, M.; Toprak, A.S.; Polat, N. DEM generation with UAV Photogrammetry and accuracy analysis in Sahitler hill. Measurement 2015, 73, 539–543. [Google Scholar] [CrossRef]
  3. Huang, Y.; Yu, M.; Xu, Q.; Sawada, K.; Moriguchi, S.; Yashima, A.; Liu, C.; Xue, L. InSAR-derived digital elevation models for terrain change analysis of earthquake-triggered flow-like landslides based on ALOS/PALSAR imagery. Environ. Earth Sci. 2015, 73, 7661–7668. [Google Scholar] [CrossRef]
  4. Nex, F.; Remondino, F. UAV for 3D mapping applications: A review. Appl. Geomat. 2014, 6, 1–15. [Google Scholar] [CrossRef]
  5. Mueller, M.M.; Dietenberger, S.; Nestler, M.; Hese, S.; Ziemer, J.; Bachmann, F.; Leiber, J.; Dubois, C.; Thiel, C. Novel UAV Flight Designs for Accuracy Optimization of Structure from Motion Data Products. Remote Sens. 2023, 15, 4308. [Google Scholar] [CrossRef]
  6. Ruzgiene, B.; Berteška, T.; Gečyte, S.; Jakubauskiene, E.; Česlovas Aksamitauskas, V. The surface modelling based on UAV Photogrammetry and qualitative estimation. Meas. J. Int. Meas. Confed. 2015, 73, 619–627. [Google Scholar] [CrossRef]
  7. Oguchi, T.; Hayakawa, Y.S.; Wasklewicz, T. Remote Data in Fluvial Geomorphology: Characteristics and Applications. In Treatise on Geomorphology, 2nd ed.; Academic Press: Cambridge, MA, USA, 2022; pp. 1116–1142. [Google Scholar] [CrossRef]
  8. Fuad, N.A.; Ismail, Z.; Majid, Z.; Darwin, N.; Ariff, M.F.M.; Idris, K.M.; Yusoff, A.R. Accuracy evaluation of digital terrain model based on different flying altitudes and conditional of terrain using UAV LiDAR technology. IOP Conf. Ser. Earth Environ. Sci. 2018, 169, 12100. [Google Scholar] [CrossRef]
  9. Gonçalves, J.A.; Henriques, R. UAV photogrammetry for topographic monitoring of coastal areas. ISPRS J. Photogramm. Remote Sens. 2015, 104, 101–111. [Google Scholar] [CrossRef]
  10. Kucharczyk, M.; Hugenholtz, C.H.; Zou, X. UAV–LiDAR accuracy in vegetated terrain. J. Unmanned Veh. Syst. 2018, 6, 212–234. [Google Scholar] [CrossRef]
  11. Salach, A.; Bakuła, K.; Pilarska, M.; Ostrowski, W.; Górski, K.; Kurczyński, Z. Accuracy Assessment of Point Clouds from LiDAR and Dense Image Matching Acquired Using the UAV Platform for DTM Creation. ISPRS Int. J. Geo-Inf. 2018, 7, 342. [Google Scholar] [CrossRef]
  12. Santise, M.; Fornari, M.; Forlani, G.; Roncella, R. Evaluation of DEM generation accuracy from UAS imagery. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2014, XL-5, 529–536. [Google Scholar] [CrossRef]
  13. Giordan, D.; Hayakawa, Y.; Nex, F.; Remondino, F.; Tarolli, P. Review article: The use of remotely piloted aircraft systems (RPASs) for natural hazards monitoring and management. Nat. Hazards Earth Syst. Sci. 2018, 18, 1079–1096. [Google Scholar] [CrossRef]
  14. James, L.A.; Hodgson, M.E.; Ghoshal, S.; Latiolais, M.M. Geomorphic change detection using historic maps and DEM differencing: The temporal dimension of geospatial analysis. Geomorphology 2012, 137, 181–198. [Google Scholar] [CrossRef]
  15. Hu, X.; Lu, Z.; Pierson, T.C.; Kramer, R.; George, D.L. Combining InSAR and GPS to Determine Transient Movement and Thickness of a Seasonally Active Low-Gradient Translational Landslide. Geophys. Res. Lett. 2018, 45, 1453–1462. [Google Scholar] [CrossRef]
  16. Li, L.; Zhang, M.; Katzenstein, K. Calibration of a Land Subsidence Model Using InSAR Data via the Ensemble Kalman Filter. Groundwater 2017, 55, 871–878. [Google Scholar] [CrossRef]
  17. de Sousa, A.M.; Viana, C.D.; Garcia, G.P.B.; Grohmann, C.H. Monitoring Geological Risk Areas in the City of São Paulo Based on Multi-Temporal High-Resolution 3D Models. Remote Sens. 2023, 15, 3028. [Google Scholar] [CrossRef]
  18. Tsunetaka, H.; Hotta, N.; Hayakawa, Y.S.; Imaizumi, F. Spatial accuracy assessment of unmanned aerial vehicle-based structures from motion multi-view stereo photogrammetry for geomorphic observations in initiation zones of debris flows, Ohya landslide, Japan. Prog. Earth Planet. Sci. 2020, 7, 24. [Google Scholar] [CrossRef]
  19. Williams, R.D. DEMs of Difference. Geomorphol. Tech. 2012, 2, 1–17. [Google Scholar]
  20. Miliaresis, G.C. Quantification of Terrain Processes. In Advances in Digital Terrain Analysis; Springer: Berlin/Heidelberg, Germany, 2008; pp. 13–28. [Google Scholar] [CrossRef]
  21. Wheaton, J.M.; Brasington, J.; Darby, S.E.; Sear, D.A. Accounting for uncertainty in DEMs from repeat topographic surveys: Improved sediment budgets. Earth Surf. Process. Landf. 2010, 35, 136–156. [Google Scholar] [CrossRef]
  22. Desmet, P.J.J. Effects of Interpolation Errors on the Analysis of DEMs. Earth Surf. Process. Landf. 1997, 22, 563–580. [Google Scholar] [CrossRef]
  23. Florinsky, I.V. Errors of signal processing in digital terrain modelling. Int. J. Geogr. Inf. Sci. 2002, 16, 475–501. [Google Scholar] [CrossRef]
  24. Oksanen, J.; Sarjakoski, T. Error propagation of DEM-based surface derivatives. Comput. Geosci. 2005, 31, 1015–1027. [Google Scholar] [CrossRef]
  25. Florinsky, I.V. An illustrated introduction to general geomorphometry. Prog. Phys. Geogr. 2017, 41, 723–752. [Google Scholar] [CrossRef]
  26. Minár, J.; Evans, I.S.; Jenčo, M. A comprehensive system of definitions of land surface (topographic) curvatures, with implications for their application in geoscience modelling and prediction. Earth-Sci. Rev. 2020, 211, 103414. [Google Scholar] [CrossRef]
  27. Sofia, G.; Pirotti, F.; Tarolli, P. Variations in multiscale curvature distribution and signatures of LiDAR DTM errors. Earth Surf. Process. Landf. 2013, 38, 1116–1134. [Google Scholar] [CrossRef]
  28. Humpherys, J.; Redd, P.; West, J. A Fresh Look at the Kalman Filter. SIAM Rev. 2012, 54, 801–823. [Google Scholar] [CrossRef]
  29. Cappe, O.; Godsill, S.J.; Moulines, E. An Overview of Existing Methods and Recent Advances in Sequential Monte Carlo. Proc. IEEE 2007, 95, 899–924. [Google Scholar] [CrossRef]
  30. Cai, J.; Liu, G.; Jia, H.; Zhang, B.; Wu, R.; Fu, Y.; Xiang, W.; Mao, W.; Wang, X.; Zhang, R. A new algorithm for landslide dynamic monitoring with high temporal resolution by Kalman filter integration of multiplatform time-series InSAR processing. Int. J. Appl. Earth Obs. Geoinf. 2022, 110, 102812. [Google Scholar] [CrossRef]
  31. Zhang, X.; Zeng, Q.; Jiao, J.; Zhang, J. Fusion of space-borne multi-baseline and multi-frequency interferometric results based on extended Kalman filter to generate high quality DEMs. ISPRS J. Photogramm. Remote Sens. 2016, 111, 32–44. [Google Scholar] [CrossRef]
  32. Seoane, L.; Ramillien, G.; Beirens, B.; Darrozes, J.; Rouxel, D.; Schmitt, T.; Salaün, C.; Frappart, F. Regional Seafloor Topography by Extended Kalman Filtering of Marine Gravity Data without Ship-Track Information. Remote Sens. 2022, 14, 169. [Google Scholar] [CrossRef]
  33. Zhou, T.; Yuan, W.; Sun, Y.; Xu, C.; Chen, B. A quality factor of forecasting error for sounding data in MBES. Meas. Sci. Technol. 2022, 33, 85014. [Google Scholar] [CrossRef]
  34. Ghannadi, M.A.; Alebooye, S.; Izadi, M.; Moradi, A. A method for Sentinel-1 DEM outlier removal using 2-D Kalman filter. Geocarto Int. 2022, 37, 2237–2251. [Google Scholar] [CrossRef]
  35. Wang, P. Applying two dimensional Kalman filtering for digital terrain modelling. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 1998, 32, 649–656. [Google Scholar]
  36. Lawrence, H.; Celestine, M.H. Enhancing Terrain Analysis from Digital Elevation Models Using 2-D Kalman Filtering Technique. J. Geogr. Environ. Earth Sci. Int. 2024, 28, 40–51. [Google Scholar] [CrossRef]
  37. Orti, M.V.; Anders, K.; Ajayi, O.; Bubenzer, O.; Höfle, B. Integrating multi-user digitising actions for mapping gully outlines using a combined approach of Kalman filtering and machine learning. ISPRS Open J. Photogramm. Remote Sens. 2024, 12, 100059. [Google Scholar] [CrossRef]
  38. Heuvelink, G.B.M.; Schoorl, J.M.; Veldkamp, A.; Pennock, D.J. Space–time Kalman filtering of soil redistribution. Geoderma 2006, 133, 124–137. [Google Scholar] [CrossRef]
  39. James, M.R.; Antoniazza, G.; Robson, S.; Lane, S.N. Mitigating systematic error in topographic models for geomorphic change detection: Accuracy, precision and considerations beyond off-nadir imagery. Earth Surf. Process. Landf. 2020, 45, 2251–2271. [Google Scholar] [CrossRef]
  40. Chen, Z.; Heckman, C.; Julier, S.; Ahmed, N. Weak in the NEES?: Auto-Tuning Kalman Filters with Bayesian Optimization. In Proceedings of the 2018 21st International Conference on Information Fusion (FUSION), Cambridge, UK, 10–13 July 2018; pp. 1072–1079. [Google Scholar] [CrossRef]
  41. Kaba, A.; Kıyak, E. Optimizing a Kalman filter with an evolutionary algorithm for nonlinear quadrotor attitude dynamics. J. Comput. Sci. 2020, 39, 101051. [Google Scholar] [CrossRef]
  42. Karasalo, M.; Hu, X. An optimization approach to adaptive Kalman filtering. Automatica 2011, 47, 1785–1793. [Google Scholar] [CrossRef]
  43. Wen, W.; Pfeifer, T.; Bai, X.; Hsu, L.T. Factor graph optimization for GNSS/INS integration: A comparison with the extended Kalman filter. NAVIG. J. Inst. Navig. 2021, 68, 315–331. [Google Scholar] [CrossRef]
  44. Winiwarter, L.; Anders, K.; Czerwonka-Schröder, D.; Höfle, B. Full four-dimensional change analysis of topographic point cloud time series using Kalman filtering. Earth Surf. Dyn. 2023, 11, 593–613. [Google Scholar] [CrossRef]
  45. Simon, D. Optimal State Estimation: Kalman, H Infinity, and Nonlinear Approaches; John Wiley & Sons: Hoboken, NJ, USA, 2006. [Google Scholar]
  46. Kalman, R.E. A New Approach to Linear Filtering and Prediction Problems. J. Basic Eng. 1960, 82, 35–45. [Google Scholar] [CrossRef]
  47. Gurajala, R.; Choppala, P.B.; Meka, J.S.; Teal, P.D. Derivation of the Kalman filter in a Bayesian filtering perspective. In Proceedings of the 2021 2nd International Conference on Range Technology (ICORT), Balasore, India, 5–6 August 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 1–5. [Google Scholar] [CrossRef]
  48. Saha, M.; Ghosh, R.; Goswami, B. Robustness and Sensitivity Metrics for Tuning the Extended Kalman Filter. IEEE Trans. Instrum. Meas. 2014, 63, 964–971. [Google Scholar] [CrossRef]
  49. Greenberg, I.; Yannay, N.; Mannor, S. Optimization or Architecture: How to Hack Kalman Filtering. In Proceedings of the Advances in Neural Information Processing Systems, New Orleans, LA, USA, 10–16 December 2023; Oh, A., Naumann, T., Globerson, A., Saenko, K., Hardt, M., Levine, S., Eds.; Curran Associates, Inc.: Red Hook, NY, USA, 2023; Volume 36, pp. 50482–50505. [Google Scholar]
  50. Sage, A.P.; Husa, G.W. Adaptive filtering with unknown prior statistics. Jt. Autom. Control Conf. 1969, 7, 760–769. [Google Scholar]
  51. Mohamed, A.H.; Schwarz, K.P. Adaptive Kalman filtering for INS/GPS. J. Geod. 1999, 73, 193–203. [Google Scholar] [CrossRef]
  52. Wang, J. Stochastic Modeling for Real-Time Kinematic GPS/GLONASS Positioning. Navigation 1999, 46, 297–305. [Google Scholar] [CrossRef]
  53. Sorenson, H.W.; Sacks, J.E. Recursive fading memory filtering. Inf. Sci. 1971, 3, 101–119. [Google Scholar] [CrossRef]
  54. Yang, Y.; He, H.; Xu, G. Adaptively robust filtering for kinematic geodetic positioning. J. Geod. 2001, 75, 109–116. [Google Scholar] [CrossRef]
  55. Akhlaghi, S.; Zhou, N.; Huang, Z. Adaptive adjustment of noise covariance in Kalman filter for dynamic state estimation. In Proceedings of the 2017 IEEE Power & Energy Society General Meeting, Chicago, IL, USA, 16–20 July 2017; pp. 1–5. [Google Scholar] [CrossRef]
  56. Chen, Y.W.; Tu, K.M. Robust self-adaptive Kalman filter with application in target tracking. Meas. Control 2022, 55, 935–944. [Google Scholar] [CrossRef]
  57. Yang, Y.; Xu, T.; He, H. On adaptively kinematic filtering. Sel. Pap. Engl. Acta Geod. Cartogr. Sin. 2001, 200, 25–32. [Google Scholar]
  58. Yang, Y.; Gao, W. An optimal adaptive Kalman filter. J. Geod. 2006, 80, 177–183. [Google Scholar] [CrossRef]
  59. Yang, Y.; Xu, J. GNSS receiver autonomous integrity monitoring (RAIM) algorithm based on robust estimation. Geod. Geodyn. 2016, 7, 117–123. [Google Scholar] [CrossRef]
  60. Almagbile, A.; Wang, J.; Ding, W. Evaluating the Performances of Adaptive Kalman Filter Methods in GPS/INS Integration. J. Glob. Position. Syst. 2010, 9, 33–40. [Google Scholar] [CrossRef]
  61. Ding, W.; Wang, J.; Rizos, C.; Kinlyside, D. Improving adaptive kalman estimation in GPS/INS integration. J. Navig. 2007, 60, 517–529. [Google Scholar] [CrossRef]
  62. Huber, P.J. Robust Estimation of a Location Parameter. Ann. Math. Stat. 1964, 35, 73–101. [Google Scholar] [CrossRef]
  63. Durovic, Z.M.; Kovacevic, B.D. Robust estimation with unknown noise statistics. IEEE Trans. Autom. Control 1999, 44, 1292–1296. [Google Scholar] [CrossRef]
  64. Yang, Y. Robust estimation for dependent observations. Manuscripta Geod. 1994, 19, 10–17. [Google Scholar]
  65. Schwarz, K.P. A comparison of GPS kinematic models for the determination of position and velocity along a trajectory. Manuscripta Geod. 1989, 14, 345–353. [Google Scholar] [CrossRef]
  66. Lindsay, J.B. WhiteboxTools User Manual. 2025. Available online: http://whiteboxgeo.com/manual/wbt_book/preface.html (accessed on 1 May 2025).
  67. Zhao, C.; Lv, J.; Du, S. Geometrical deviation modeling and monitoring of 3D surface based on multi-output Gaussian process. Measurement 2022, 199, 111569. [Google Scholar] [CrossRef]
  68. Tondaś, D.; Ilieva, M.; van Leijen, F.; van der Marel, H.; Rohm, W. Kalman filter-based integration of GNSS and InSAR observations for local nonlinear strong deformations. J. Geod. 2023, 97, 109. [Google Scholar] [CrossRef]
  69. Chow, T.E.; Hodgson, M.E. Effects of lidar post-spacing and DEM resolution to mean slope estimation. Int. J. Geogr. Inf. Sci. 2009, 23, 1277–1295. [Google Scholar] [CrossRef]
  70. Su, J.; Bork, E. Influence of vegetation, slope, and lidar sampling angle on DEM accuracy. Photogramm. Eng. Remote Sens. 2006, 72, 1265–1274. [Google Scholar] [CrossRef]
  71. Rastogi, G.; Agrawal, R.; Ajai. Bias corrections of CartoDEM using ICESat-GLAS data in hilly regions. GIScience Remote Sens. 2015, 52, 571–585. [Google Scholar] [CrossRef]
  72. Schaffrath, K.R.; Belmont, P.; Wheaton, J.M. Landscape-scale geomorphic change detection: Quantifying spatially variable uncertainty and circumventing legacy data issues. Geomorphology 2015, 250, 334–348. [Google Scholar] [CrossRef]
  73. Yan, W.Y. Airborne Lidar Data Artifacts: What we know thus far. IEEE Geosci. Remote Sens. Mag. 2023, 11, 21–45. [Google Scholar] [CrossRef]
  74. Aguilar, F.J.; Mills, J.P. Accuracy assessment of lidar-derived digital elevation models. Photogramm. Rec. 2008, 23, 148–169. [Google Scholar] [CrossRef]
  75. Aguilar, F.J.; Mills, J.P.; Delgado, J.; Aguilar, M.A.; Negreiros, J.G.; Pérez, J.L. Modelling vertical error in LiDAR-derived digital elevation models. ISPRS J. Photogramm. Remote Sens. 2010, 65, 103–110. [Google Scholar] [CrossRef]
  76. Leigh, C.L.; Kidner, D.B.; Thomas, M.C. The Use of LiDAR in Digital Surface Modelling: Issues and Errors. Trans. GIS 2009, 13, 345–361. [Google Scholar] [CrossRef]
  77. Bater, C.W.; Coops, N.C. Evaluating error associated with lidar-derived DEM interpolation. Comput. Geosci. 2009, 35, 289–300. [Google Scholar] [CrossRef]
  78. Toth, C.; Jóźków, G. Remote sensing platforms and sensors: A survey. ISPRS J. Photogramm. Remote Sens. 2016, 115, 22–36. [Google Scholar] [CrossRef]
  79. Liu, X. Airborne LiDAR for DEM generation: Some critical issues. Prog. Phys. Geogr. Earth Environ. 2008, 32, 31–49. [Google Scholar] [CrossRef]
  80. Polidori, L.; Hage, M.E. Digital elevation model quality assessment methods: A critical review. Remote Sens. 2020, 12, 3522. [Google Scholar] [CrossRef]
  81. Arefi, H.; Hahn, M. A morphological reconstruction algorithm for separating off-terrain points from terrain points in laser scanning data. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2005, 36, 120–125. [Google Scholar]
  82. Lindsay, J. A New Method for the Removal of Off-Terrain Objects from LiDAR-Derived Raster Surface Models; Technical Report; University of Guelph: Guelph, ON, Canada, 2018. [Google Scholar]
  83. Vosselman, G. Slope based filtering of laser altimetry data. Int. Arch. Photogramm. Remote Sens. 2000, 33, 935–942. [Google Scholar]
  84. Newman, D.R.; Cockburn, J.M.H.; Drǎguţ, L.; Lindsay, J.B. Evaluating Scaling Frameworks for Multiscale Geomorphometric Analysis. Geomatics 2022, 2, 36–51. [Google Scholar] [CrossRef]
  85. Newman, D.; Cockburn, J.; Drǎguţ, L.; Lindsay, J. Local scale optimization of geomorphometric land surface parameters using scale-standardized Gaussian scale-space. Comput. Geosci. 2022, 165, 105144. [Google Scholar] [CrossRef]
  86. Arshad, B.; Barthelemy, J.; Perez, P. Autonomous Lidar-Based Monitoring of Coastal Lagoon Entrances. Remote Sens. 2021, 13, 1320. [Google Scholar] [CrossRef]
  87. O’Dea, A.; Brodie, K.L.; Hartzell, P. Continuous Coastal Monitoring with an Automated Terrestrial Lidar Scanner. J. Mar. Sci. Eng. 2019, 7, 37. [Google Scholar] [CrossRef]
  88. Lange, T.; Stannat, W. On the continuous time limit of the ensemble Kalman filter. Math. Comput. 2021, 90, 233–265. [Google Scholar] [CrossRef]
  89. Gustafsson, F. Particle filter theory and practice with positioning applications. IEEE Aerosp. Electron. Syst. Mag. 2010, 25, 53–82. [Google Scholar] [CrossRef]
  90. Gustafsson, F.; Hendeby, G. Some Relations Between Extended and Unscented Kalman Filters. IEEE Trans. Signal Process. 2012, 60, 545–555. [Google Scholar] [CrossRef]
  91. Zhang, H.; Chen, G.; Ooi, B.C.; Tan, K.L.; Zhang, M. In-Memory Big Data Management and Processing: A Survey. IEEE Trans. Knowl. Data Eng. 2015, 27, 1920–1948. [Google Scholar] [CrossRef]
  92. Rougier, J.; Brady, A.; Bamber, J.; Chuter, S.; Royston, S.; Vishwakarma, B.D.; Westaway, R.; Ziegler, Y. The scope of the Kalman filter for spatio-temporal applications in environmental science. Environmetrics 2022, 34, e2773. [Google Scholar] [CrossRef]
Figure 1. The value of Δtₖ for both temporal distributions is shown in Subfigure (A). Subfigure (B) shows the Euclidean distance between the true state vector (x̃ₖ) and the original true state vector (x̃₀) at every epoch, illustrating how the true state evolves for each case. 3D visualizations of the initial surface, change over 365 days, and a catastrophe are presented in Subfigures (C–E). The black dots in Subfigures (C–E) are discrete observations based on a 5 × 5 window.
Figure 2. Subfigure (A) shows the hillshade and contours with 5 m spacing for the uncorrupted DEM. Subfigure (B) shows the spatially auto-correlated noise field generated by the turning bands simulation, where blue values are positive, red values are negative, and white values are 0. Subfigure (C) shows the DEM corrupted by spatially auto-correlated noise, which has been exaggerated to improve visualization.
Figure 3. Box plots showing the aggregated model error distribution for each filter. The left column (Subfigures (A,C)) shows the results for Δt drawn from a normal distribution and the right column (Subfigures (B,D)) shows results drawn from a uniform distribution. The top row (Subfigures (A,B)) shows the β coefficient error and the bottom row (Subfigures (C,D)) shows the β coefficient error.
Figure 4. The state error (ϵₖ) time series for the uniform time distribution and the irregular case with q₀ = q̃. The ARK-∅, ARK-∅A, RSAK-0, ARK-1, and RSAK-1 filter configurations are shown in Subfigures (A–E), respectively. The lowest r₀ value that showed a notable divergence from the previous value is annotated.
Figure 5. The state error (ϵₖ) time series for the uniform time distribution and the irregular case with r₀ = r̃. The RSAK-0, ARK-1, RSAK-1, and RSAK-1A filter configurations are shown in Subfigures (A–D), respectively. The lowest q₀ value that showed a notable divergence from the previous value is annotated.
Figure 6. The state error (ϵₖ) time series for the uniform time distribution and the irregular case with r₀ = r̃, q₀ = q̃, and γ₂ = 0.95. The ARK-∅, ARK-∅A, RSAK-0, ARK-1, and RSAK-1 filter configurations are shown in Subfigures (A–E), respectively. The lowest γ₁ value that showed a notable divergence from the previous value is annotated.
Figure 7. The state error (ϵₖ) time series and the 95% confidence interval of the estimated mean using initial parameters r₀ = 10⁻³ and q₀ = 10⁻²⁴, shown for the uniform time distribution. The static, gradual, irregular, and catastrophe cases are shown in Subfigures (A–D), respectively. The catastrophe case features an additional inset image to show detailed responses to the catastrophe. Note that ARK-∅A was omitted from the catastrophe case.
Figure 8. The state error (ϵₖ) time series and the 95% confidence interval of the estimated mean using initial parameters r₀ = 10⁻¹ and q₀ = 10⁻¹⁶, shown for the normal time distribution. The errors for the β and β coefficients are shown on the top and bottom rows, respectively. The irregular and catastrophe cases are shown in the left and right columns, respectively. The catastrophe case features additional inset images to show detailed responses to the catastrophe.
Figure 9. The αₖ and qₖ adaptive parameter values for the irregular case and normal time distribution where r₀ = 10⁻² and q₀ = 10⁻¹⁶. Subfigure (A) shows the average αₖ value for all adaptive filters. Subfigures (B,C) show the qₖ values for the RSAK-0 and RSAK-1 filters, respectively, which use different noise model orders. The 95% confidence interval is provided for Subfigures (B,C), but it was removed from Subfigure (A) to improve clarity.
Figure 10. Subfigure (A) shows the time series of predicted elevation error on the left-hand side using solid lines and the change in kₘᵢₙ skew on the right-hand side using dotted lines. Subfigures (B1,B2) show the OLS and ARK-∅A spatial distributions of kₘᵢₙ at epoch 5, while Subfigures (C1,C2) represent epoch 35. Subfigures (D1,D2) are inset images from Subfigures (C1,C2), respectively, to show the spatial distributions in greater detail.
Table 1. The average and maximum state error (ϵ) for all epochs of the normal time distribution, for all r₀ and q₀ values, and the static case. For reference, the OLS method had an average state error of 0.082 and a maximum state error of 0.086 for all simulations. The true values are r̃ = 10⁻² and q̃ = 10⁻¹⁶. Values exceeding the comparable OLS value are marked in bold.

| Filter | q₀ | Avg ϵ: r₀ = 10⁻³ | 10⁻² | 10⁻¹ | 10¹ | Max ϵ: r₀ = 10⁻³ | 10⁻² | 10⁻¹ | 10¹ |
|---|---|---|---|---|---|---|---|---|---|
| CKF-1 | 10⁻²⁴ | 0.032 | 0.034 | 0.034 | 0.044 | 0.081 | 0.081 | 0.081 | 0.081 |
| CKF-1 | 10⁻¹⁶ | 0.023 | 0.014 | 0.013 | 0.039 | 0.081 | 0.081 | 0.081 | 0.081 |
| CKF-1 | 10⁻⁸ | 0.081 | 0.076 | 0.060 | 0.023 | 0.086 | 0.085 | 0.081 | 0.081 |
| ARK-∅ | – | 0.036 | 0.024 | 0.017 | 0.044 | 0.081 | 0.081 | 0.081 | 0.081 |
| ARK-∅A | – | 0.032 | 0.026 | 0.018 | 0.044 | 0.081 | 0.081 | 0.081 | 0.081 |
| ARK-0 | 10⁻²⁴ | 0.036 | 0.024 | 0.017 | 0.044 | 0.081 | 0.081 | 0.081 | 0.081 |
| ARK-0 | 10⁻¹⁶ | 0.036 | 0.024 | 0.017 | 0.044 | 0.081 | 0.081 | 0.081 | 0.081 |
| ARK-0 | 10⁻⁸ | 0.031 | 0.023 | 0.016 | 0.044 | 0.081 | 0.081 | 0.081 | 0.081 |
| RSAK-0 | 10⁻²⁴ | 0.025 | 0.022 | 0.016 | 0.044 | 0.081 | 0.081 | 0.081 | 0.081 |
| RSAK-0 | 10⁻¹⁶ | 0.025 | 0.022 | 0.016 | 0.044 | 0.081 | 0.081 | 0.081 | 0.081 |
| RSAK-0 | 10⁻⁸ | 0.026 | 0.022 | 0.016 | 0.044 | 0.081 | 0.081 | 0.081 | 0.081 |
| ARK-1 | 10⁻²⁴ | 0.036 | 0.024 | 0.017 | 0.044 | 0.081 | 0.081 | 0.081 | 0.081 |
| ARK-1 | 10⁻¹⁶ | 0.036 | 0.024 | 0.017 | 0.039 | 0.081 | 0.081 | 0.081 | 0.081 |
| ARK-1 | 10⁻⁸ | 0.071 | 0.070 | 0.058 | 0.023 | 0.085 | 0.087 | 0.081 | 0.081 |
| RSAK-1 | 10⁻²⁴ | 0.055 | 0.051 | 0.031 | 0.044 | 0.081 | 0.081 | 0.081 | 0.081 |
| RSAK-1 | 10⁻¹⁶ | 0.055 | 0.051 | 0.031 | 0.039 | 0.081 | 0.081 | 0.081 | 0.081 |
| RSAK-1 | 10⁻⁸ | 0.071 | 0.070 | 0.058 | 0.023 | 0.084 | 0.087 | 0.081 | 0.081 |
| RSAK-1A | 10⁻²⁴ | 0.056 | 0.052 | 0.032 | 0.044 | 0.081 | 0.081 | 0.081 | 0.081 |
| RSAK-1A | 10⁻¹⁶ | 0.057 | 0.052 | 0.032 | 0.039 | 0.081 | 0.081 | 0.081 | 0.081 |
| RSAK-1A | 10⁻⁸ | 0.071 | 0.070 | 0.058 | 0.023 | 0.085 | 0.087 | 0.081 | 0.081 |
Table 2. The average and maximum state error (ϵ) for all epochs of the normal time distribution, for all r₀ and q₀ values, and the periodic case. For reference, the OLS method had an average state error of 0.082 and a maximum state error of 0.086 for all simulations. The true values are r̃ = 10⁻² and q̃ = 10⁻¹⁶. Values exceeding the comparable OLS value are marked in bold.

| Filter | q₀ | Avg ϵ: r₀ = 10⁻³ | 10⁻² | 10⁻¹ | 10¹ | Max ϵ: r₀ = 10⁻³ | 10⁻² | 10⁻¹ | 10¹ |
|---|---|---|---|---|---|---|---|---|---|
| CKF-1 | 10⁻²⁴ | 20.025 | 22.214 | 22.829 | 24.831 | 32.283 | 36.760 | 38.167 | 41.761 |
| CKF-1 | 10⁻¹⁶ | 0.277 | 2.254 | 4.938 | 21.605 | 0.521 | 4.127 | 9.957 | 34.786 |
| CKF-1 | 10⁻⁸ | 0.081 | 0.076 | 0.060 | 0.277 | 0.086 | 0.085 | 0.081 | 0.521 |
| ARK-∅ | – | 0.082 | 0.076 | 0.051 | 1.244 | 0.103 | 0.096 | 0.081 | 1.873 |
| ARK-∅A | – | 0.079 | 0.075 | 0.051 | 1.231 | 0.100 | 0.095 | 0.081 | 1.853 |
| ARK-0 | 10⁻²⁴ | 0.082 | 0.076 | 0.051 | 1.244 | 0.103 | 0.096 | 0.081 | 1.873 |
| ARK-0 | 10⁻¹⁶ | 0.082 | 0.077 | 0.051 | 1.244 | 0.103 | 0.097 | 0.081 | 1.873 |
| ARK-0 | 10⁻⁸ | 0.080 | 0.076 | 0.052 | 1.244 | 0.099 | 0.096 | 0.081 | 1.874 |
| RSAK-0 | 10⁻²⁴ | 0.115 | 0.117 | 0.178 | 1.645 | 0.166 | 0.170 | 0.274 | 2.501 |
| RSAK-0 | 10⁻¹⁶ | 0.115 | 0.117 | 0.178 | 1.645 | 0.164 | 0.170 | 0.274 | 2.501 |
| RSAK-0 | 10⁻⁸ | 0.116 | 0.117 | 0.178 | 1.645 | 0.165 | 0.166 | 0.274 | 2.501 |
| ARK-1 | 10⁻²⁴ | 0.082 | 0.077 | 0.051 | 1.244 | 0.101 | 0.096 | 0.081 | 1.873 |
| ARK-1 | 10⁻¹⁶ | 0.083 | 0.076 | 0.051 | 1.244 | 0.103 | 0.097 | 0.081 | 1.875 |
| ARK-1 | 10⁻⁸ | 0.072 | 0.071 | 0.058 | 0.277 | 0.087 | 0.085 | 0.081 | 0.521 |
| RSAK-1 | 10⁻²⁴ | 0.064 | 0.063 | 0.063 | 2.908 | 0.083 | 0.083 | 0.111 | 5.327 |
| RSAK-1 | 10⁻¹⁶ | 0.064 | 0.063 | 0.063 | 2.909 | 0.083 | 0.085 | 0.111 | 5.327 |
| RSAK-1 | 10⁻⁸ | 0.072 | 0.071 | 0.058 | 0.277 | 0.087 | 0.085 | 0.081 | 0.521 |
| RSAK-1A | 10⁻²⁴ | 0.064 | 0.063 | 0.063 | 2.889 | 0.081 | 0.083 | 0.105 | 5.290 |
| RSAK-1A | 10⁻¹⁶ | 0.065 | 0.063 | 0.063 | 2.889 | 0.082 | 0.082 | 0.105 | 5.290 |
| RSAK-1A | 10⁻⁸ | 0.072 | 0.071 | 0.058 | 0.277 | 0.086 | 0.085 | 0.081 | 0.521 |
Table 3. The average and maximum state error (ϵ) for all epochs of the normal time distribution, for all r₀ and q₀ values, and the irregular case. For reference, the OLS method had an average state error of 0.082 and a maximum state error of 0.086 for all simulations. The true values are r̃ = 10⁻² and q̃ = 10⁻¹⁶. Values exceeding the comparable OLS value are marked in bold.

| Filter | q₀ | Avg ϵ: r₀ = 10⁻³ | 10⁻² | 10⁻¹ | 10¹ | Max ϵ: r₀ = 10⁻³ | 10⁻² | 10⁻¹ | 10¹ |
|---|---|---|---|---|---|---|---|---|---|
| CKF-1 | 10⁻²⁴ | 9.534 | 10.482 | 10.794 | 11.977 | 16.003 | 17.279 | 17.518 | 18.446 |
| CKF-1 | 10⁻¹⁶ | 0.292 | 0.737 | 2.387 | 10.440 | 1.776 | 3.942 | 7.229 | 16.802 |
| CKF-1 | 10⁻⁸ | 0.081 | 0.076 | 0.061 | 0.292 | 0.086 | 0.085 | 0.126 | 1.776 |
| ARK-∅ | – | 0.052 | 0.045 | 0.026 | 0.696 | 0.117 | 0.117 | 0.084 | 2.111 |
| ARK-∅A | – | 0.050 | 0.045 | 0.028 | 0.687 | 0.168 | 0.134 | 0.115 | 2.054 |
| ARK-0 | 10⁻²⁴ | 0.052 | 0.045 | 0.026 | 0.696 | 0.118 | 0.117 | 0.084 | 2.111 |
| ARK-0 | 10⁻¹⁶ | 0.052 | 0.045 | 0.026 | 0.696 | 0.116 | 0.116 | 0.084 | 2.111 |
| ARK-0 | 10⁻⁸ | 0.050 | 0.046 | 0.025 | 0.697 | 0.117 | 0.116 | 0.084 | 2.113 |
| RSAK-0 | 10⁻²⁴ | 0.049 | 0.047 | 0.025 | 1.135 | 0.116 | 0.117 | 0.084 | 3.378 |
| RSAK-0 | 10⁻¹⁶ | 0.049 | 0.047 | 0.025 | 1.135 | 0.117 | 0.117 | 0.084 | 3.378 |
| RSAK-0 | 10⁻⁸ | 0.049 | 0.047 | 0.025 | 1.135 | 0.118 | 0.117 | 0.084 | 3.378 |
| ARK-1 | 10⁻²⁴ | 0.053 | 0.045 | 0.026 | 0.696 | 0.117 | 0.116 | 0.084 | 2.111 |
| ARK-1 | 10⁻¹⁶ | 0.052 | 0.045 | 0.025 | 0.703 | 0.117 | 0.116 | 0.084 | 2.160 |
| ARK-1 | 10⁻⁸ | 0.076 | 0.076 | 0.062 | 0.292 | 0.345 | 0.341 | 0.313 | 1.776 |
| RSAK-1 | 10⁻²⁴ | 0.061 | 0.057 | 0.037 | 0.547 | 0.216 | 0.160 | 0.134 | 3.189 |
| RSAK-1 | 10⁻¹⁶ | 0.063 | 0.057 | 0.037 | 0.547 | 0.227 | 0.158 | 0.135 | 3.189 |
| RSAK-1 | 10⁻⁸ | 0.076 | 0.076 | 0.062 | 0.292 | 0.344 | 0.342 | 0.313 | 1.776 |
| RSAK-1A | 10⁻²⁴ | 0.070 | 0.068 | 0.053 | 0.534 | 0.299 | 0.289 | 0.266 | 3.127 |
| RSAK-1A | 10⁻¹⁶ | 0.069 | 0.067 | 0.053 | 0.534 | 0.297 | 0.287 | 0.266 | 3.127 |
| RSAK-1A | 10⁻⁸ | 0.076 | 0.076 | 0.062 | 0.292 | 0.343 | 0.342 | 0.313 | 1.776 |
Table 4. The average and maximum state error (ϵ) for all epochs of the normal time distribution, for all r₀ and q₀ values, and the catastrophe case. For reference, the OLS method had an average state error of 0.082 and a maximum state error of 0.086 for all simulations. The true values are r̃ = 10⁻² and q̃ = 10⁻¹⁶. Values exceeding the comparable OLS value are marked in bold.

| Filter | q₀ | Avg ϵ: r₀ = 10⁻³ | 10⁻² | 10⁻¹ | 10¹ | Max ϵ: r₀ = 10⁻³ | 10⁻² | 10⁻¹ | 10¹ |
|---|---|---|---|---|---|---|---|---|---|
| CKF-1 | 10⁻²⁴ | 2.919 | 4.347 | 4.913 | 5.021 | 9.179 | 11.181 | 11.713 | 11.730 |
| CKF-1 | 10⁻¹⁶ | 0.119 | 0.373 | 1.008 | 3.018 | 3.624 | 4.484 | 6.535 | 9.182 |
| CKF-1 | 10⁻⁸ | 0.081 | 0.077 | 0.062 | 0.119 | 0.096 | 0.243 | 0.912 | 3.624 |
| ARK-∅ | – | 0.044 | 0.034 | 0.021 | 1.335 | 0.128 | 0.128 | 0.121 | 3.043 |
| ARK-∅A | – | N/A | 0.035 | 0.022 | 0.325 | N/A | 0.131 | 0.131 | 1.932 |
| ARK-0 | 10⁻²⁴ | 0.044 | 0.034 | 0.021 | 1.335 | 0.128 | 0.128 | 0.121 | 3.043 |
| ARK-0 | 10⁻¹⁶ | 0.044 | 0.034 | 0.021 | 1.335 | 0.128 | 0.128 | 0.121 | 3.043 |
| ARK-0 | 10⁻⁸ | 0.040 | 0.034 | 0.020 | 1.335 | 0.128 | 0.128 | 0.121 | 3.043 |
| RSAK-0 | 10⁻²⁴ | 0.037 | 0.034 | 0.020 | 1.386 | 0.128 | 0.128 | 0.121 | 3.247 |
| RSAK-0 | 10⁻¹⁶ | 0.037 | 0.034 | 0.020 | 1.386 | 0.128 | 0.128 | 0.121 | 3.247 |
| RSAK-0 | 10⁻⁸ | 0.037 | 0.035 | 0.020 | 1.386 | 0.128 | 0.128 | 0.121 | 3.247 |
| ARK-1 | 10⁻²⁴ | 0.044 | 0.034 | 0.021 | 1.335 | 0.128 | 0.128 | 0.121 | 3.043 |
| ARK-1 | 10⁻¹⁶ | 0.043 | 0.034 | 0.020 | 1.335 | 0.128 | 0.128 | 0.121 | 3.043 |
| ARK-1 | 10⁻⁸ | 0.072 | 0.072 | 0.059 | 0.096 | 0.258 | 0.257 | 0.230 | 1.449 |
| RSAK-1 | 10⁻²⁴ | 0.055 | 0.054 | 0.045 | 0.189 | 0.246 | 0.247 | 0.248 | 1.906 |
| RSAK-1 | 10⁻¹⁶ | 0.055 | 0.054 | 0.045 | 0.189 | 0.246 | 0.246 | 0.248 | 1.906 |
| RSAK-1 | 10⁻⁸ | 0.058 | 0.057 | 0.049 | 0.169 | 0.248 | 0.248 | 0.261 | 1.867 |
| RSAK-1A | 10⁻²⁴ | 0.054 | 0.052 | 0.036 | 0.082 | 0.169 | 0.165 | 0.126 | 1.092 |
| RSAK-1A | 10⁻¹⁶ | 0.054 | 0.052 | 0.036 | 0.082 | 0.170 | 0.164 | 0.126 | 1.092 |
| RSAK-1A | 10⁻⁸ | 0.055 | 0.052 | 0.037 | 0.060 | 0.170 | 0.167 | 0.126 | 0.965 |