Article

Multigrid/Multiresolution Interpolation: Reducing Oversmoothing and Other Sampling Effects

by Daniel Rodriguez-Perez 1,*,† and Noela Sanchez-Carnero 2,3,†
1 Departamento de Física Matemática y de Fluidos, Facultad de Ciencias, UNED, Avda. Esparta s/n, 28232 Las Rozas, Madrid, Spain
2 Centro para el Estudio de Sistemas Marinos (CESIMAR), CCT CONICET-CENPAT, Bv. Almirante Brown 2915, Puerto Madryn U9120ACD, Chubut, Argentina
3 Grupo de Oceanografia Fisica (GOFUVI), Facultade de Ciencias do Mar, Campus de Vigo, Lagoas-Marcosende, Illa de Toralla s/n, 36331 Vigo, Pontevedra, Spain
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Geomatics 2022, 2(3), 236-253; https://doi.org/10.3390/geomatics2030014
Submission received: 25 May 2022 / Revised: 14 June 2022 / Accepted: 18 June 2022 / Published: 22 June 2022
(This article belongs to the Special Issue Advances in Ocean Mapping and Nautical Cartography)

Abstract: Traditional interpolation methods, such as IDW, kriging, radial basis functions, and regularized splines, are commonly used to generate digital elevation models (DEMs). All of these methods have strong statistical and analytical foundations (such as the assumption of randomly distributed data points from a Gaussian correlated stochastic surface); however, when data are acquired non-homogeneously (e.g., along transects), all of them show over- or under-smoothing of the interpolated surface depending on the local point density. As a result, actual information is lost in high point density areas (caused by over-smoothing) or artifacts appear around uneven density areas ("pimple" or "transect" effects). In this paper, we introduce a simple but robust multigrid/multiresolution interpolation (MMI) method that adapts to the spatial resolution available, acting as an exact interpolator where data exist and as a smoothing generalizer where data are missing, while always fulfilling the statistical requirement that the mathematical expectation of the surface height at the working resolution equals the mean height of the data at that same scale. MMI is efficient enough to use K-fold cross-validation to estimate local errors. We also introduce a fractal extrapolation that simulates the elevation in data-depleted areas (rendering a visually realistic surface and also realistic error estimations). In this work, MMI is applied to reconstruct a real DEM, thus testing its accuracy and local error estimation capabilities under different sampling strategies (random points and transects). It is also applied to compute the bathymetry of the Gulf of San Jorge (Argentina) from multisource data of different origins and sampling qualities. The results show visually realistic surfaces with estimated local validation errors that are within the bounds of direct DEM comparison, in the case of the simulation, and within 10% of the typical deviation of the bathymetric surface in the real calculation.

1. Introduction

Digital elevation models (DEMs) are important tools to study the Earth's surface and model the processes taking place over it; hazard mapping, climate impact studies, geological and environmental modeling, and atmospheric and marine flow simulations (including tide prediction) are just a few of their current applications [1,2,3,4]. A grid DEM represents the continuous surface interpolated through (discrete) points where elevation has been measured and recorded, and is usually represented as an image whose pixels contain elevation data. High-resolution DEMs (∼1 m) appearing in the late 1990s allowed geomorphological exploration with unprecedented detail, both by visual analysis of shaded DEMs (which provide an easy inspection of features at various scales) [5] and through geomorphological indices quantified from the raster image [6,7]. Finding the best DEM generalization (i.e., interpolation) for the scale of topographical features of interest is a key element for multiscale analysis of structural topographic features [5,8,9].
Assessing the accuracy of DEMs is a pending issue, especially for the submerged part of the Earth, where both density and distribution of acoustic bathymetric measurements [10] and spatial resolution (either of interpolation or of indirect gravimetric inversion) are limited. Furthermore, DEM quality is also affected by characteristics of the surface or terrain roughness, cell size or spatial resolution, and the chosen interpolation method (and decisions made about its parameters) [11,12].
Currently, there are several open-access global DEMs of the emerged Earth with moderate resolution, such as the Shuttle Radar Topography Mission model (SRTM, 1 arc second, approximately 30 m horizontal and 16 m vertical resolution) [13], the ASTER global DEM (GDEM v3, 2.4 arc seconds, approximately 90 m horizontal and 12 m vertical resolution) [14,15], the Japan Aerospace Exploration Agency (JAXA) AW3D high-resolution global digital surface model (5 m horizontal and 6.5 m vertical resolution) [16], and the ICESat GLAH14 (6 m horizontal and 15 cm vertical resolution) [17,18].
Mapping the submerged bottom of the seas and oceans has required more work. The best known example of an open-source bathymetric DEM is the General Bathymetric Chart of the Oceans (GEBCO) [19,20]. Elaborating this DEM involves cleaning and harmonizing data sources and then interpolating them into a surface. Often, this is an iterative process, as source data cleaning (and, sometimes, harmonization) cannot be done without an estimated DEM. The acquisition of acoustic data over large areas is very expensive (for a given spatial resolution, cost grows with the square of the area), so crowdsourcing strategies are being used to build large databases of bathymetric information [21], GEBCO being one of the most successful in terms of integration of multiple sources.
Interpolation methods can be grossly grouped into deterministic, geostatistical and machine learning methods (see the reviews [22,23,24] for more details):
  • Deterministic interpolation methods include nearest (natural) neighbour (NN) [25], inverse distance weighting (IDW) [26], or trend surface mapping (TS) [27]. These methods often work better with homogeneous distributions of data points. There are also models, such as ANUDEM (a.k.a. ArcGIS TOPO2GRID) [28], that are designed to interpolate data along curves (e.g., isolines or river basins).
  • Geostatistical interpolation is commonly known as kriging, which estimates elevation using the best linear unbiased predictor under certain stationarity assumptions [29,30]. There are many variants that overcome some limitations of those statistical assumptions (such as indicator kriging) or improve prediction based on co-variables (co-kriging).
  • Machine learning interpolation methods apply interpolation/classification methods to group “likewise” measurements thus enhancing their efficiency by using previous results. Despite the widespread use of machine learning, its use applied to spatial data is still a field of research; dealing with spatial heterogeneity and the problem of scale are areas in which these techniques can excel (see [31,32]). These methods are also showing their great potential when dealing with multi-source multi-quality data [33].
Interpolated DEMs often present "pimple" artifacts. These are typical of exact interpolation methods, where they appear around sampling points (quite common in IDW), but they also appear in approximate (e.g., geostatistical) methods, and are usually removed by filtering the resulting DEM or by increasing the search window. This may, however, cause oversmoothing if the estimated correlation length is larger than the details available in particular areas with higher sampling density; this has been addressed by variance correction methods [34,35]. Other common artifacts in DEMs are "transect" artifacts, very common in bathymetric DEMs, which appear where data density is higher (along transects) in contrast with the rest of the raster, which is generalized. Some statistical resampling methods have also been devised to address this problem [36,37]. The non-uniform sampling of terrain data can also be caused by selection bias in topographic data (e.g., limited to easily accessible areas), leading to scarcely sampled areas compared with other highly sampled ones. High-accuracy surface modeling methods have been proposed which attempt to overcome this limitation by imposing differential geometry constraints that preserve the expected topographical continuity [38] or by introducing pre-interpolated features (e.g., isolines) in the interpolation [36]. Of course, the alternative is to increase sampling effort in undersampled areas; however, this is not always feasible.
The spatial resolution required from a DEM depends largely on the focus of our study interest. For example, a continental DEM or an ocean-wide bathymetry do not require resolving details smaller than several kilometers. On the other hand, the study of coastal tidal dynamics or coastal geomorphometry, or lake or water dam bathymetry may require resolving details of tens of meters or even meters [4,39,40,41,42]. When dealing with large areas involving continental scale features data size grows rapidly making it almost impossible to efficiently estimate elevation at points where data are not available, hence techniques are required that are able to efficiently handle large data sets. This interest in multiple scales across large geographical areas has led naturally to multiscale algorithms, either to improve computation of traditional geostatistical interpolations [43], to get advantage of wavelet interpolation algorithms [44], to complete information (especially in bathymetries) by “superresolution” (techniques inherited from digital image inpainting) [45,46], to store and get access to scale-dependent information [47], to analyze scale-dependent geomorphological features [5,9], or even to extrapolate the topography to finer resolutions than available from the data in what is called geostatistical simulation [48,49].
Spatial interpolation methods, whether multiscale or not, usually make assumptions about the sampling process (e.g., random independent point-wise sampling), surface statistical properties (e.g., Gaussian height distribution, functional form of the variogram), neighborhood shape and extension (e.g., triangulation, look-up distance, look-up directions or quadrants), smoothness penalization, or other parameters (curvature constraints, wavelet family, etc.). This makes the choice difficult in common working conditions, statistical assumptions difficult to test, and algorithm parameters difficult to adjust, with "desirable visual aspect" being the most used heuristic criterion in choosing the interpolation, and software availability and computer memory and processing time the other criteria. The latter are very dependent on the number of points to be interpolated, which again calls for efficient multiresolution approaches.
The goal of this article is to describe a multigrid/multiresolution interpolation (MMI) based on simple (if not simplistic) hypotheses about the data, which is able to solve many of the problems other interpolation methods have, while being fast and extensible. For that, we will first introduce a top-down multigrid/multiscale method which addresses these problems while making the simplest hypotheses about the input data or about the interpolated surface (Section 2.1). Then we will show how to use it for surface extrapolation (assuming a self-affine multifractal terrain model, in Section 2.3), and for cross-validation, later used for data filtering and outlier detection (Section 2.4). We will apply this algorithm to two case studies in the area of the Gulf of San Jorge (in Argentina's Patagonia, described in Section 3): one based on synthetic data extracted from the SRTM DEM of the coastal area (Section 3.1), and another based on actual multi-source bathymetric data in order to compute the bathymetric surface of the Gulf (Section 3.2). We will discuss our proposal based on these case studies and on the current bibliography (Section 4) and, finally, draw some conclusions.

2. Method

Mathematically speaking, interpolation means filling in the gaps of our information about a function based on the information we have about that function, especially, but not limited to, the values that function takes at some known points. In what follows, we will construct a multigrid/multiresolution interpolation (MMI) method keeping in mind the geometrical relationships, the properties of exactness, regularity and smoothing, and the statistical expectation of the methods described in the introduction. We will also focus on surface interpolation, i.e., interpolation of a real-valued function $f$ defined on an interval $I = [a,b] \times [b,c] \subset \mathbb{R}^2$; without loss of generality, we will assume that interval to be $I = [0,1] \times [0,1]$.

2.1. Top-Down Multigrid/Multiresolution Algorithm

Although some interpolation methods aim at providing a grand final mathematical formula to approximate the function $f$ at any point $x \in I$, often that formula is not used; instead, an iterative method estimates the value of $f$ at $x$ from its values $f(x_i)$ at the observation points $x_i \in I$. In addition, in practice, we are often interested in obtaining the average value of the function in some neighborhood $B \subset I$ of $x$, the precise value at $x$ often being experimentally inaccessible. Based on these two practical approximations, we formulate our multigrid method as follows:
  • Start with a partition of $I$ in $2^{n_0} \times 2^{n_0}$ intervals of the form
    $$B_{ij} = \left[ i \cdot 2^{-(n+1)},\, (i+1) \cdot 2^{-(n+1)} \right] \times \left[ j \cdot 2^{-(n+1)},\, (j+1) \cdot 2^{-(n+1)} \right]$$
    with $n = n_0 \in \mathbb{N}$ and $i, j = 0, 1, \ldots, 2^n - 1$. Thus, the sidelength of each $B_{ij}$ is equal to $2^{-(n+1)}$, the sidelength of $I$ being equal to 1.
  • Choose those $B_{ij}$ such that for some $k$ there is some observation point $x_k \in B_{ij}$. Let us call $N_{ij}$ the number of those observation points inside $B_{ij}$ and estimate the average value of $f$ in $B_{ij}$ to be
    $$\hat{f}(B_{ij}) = \left\langle f(x_k) \right\rangle_{x_k \in B_{ij}} = \frac{1}{N_{ij}} \sum_{x_k \in B_{ij}} f(x_k) \qquad (1)$$
    This means that our estimation of $f$ in $B_{ij}$ is the most likely one (maximum likelihood), given by the arithmetic mean of the $N_{ij}$ measured points inside $B_{ij}$.
  • Let us now focus on some $B^*_{ij}$ such that there is no $x_k \in B^*_{ij}$. Let us consider its neighbor intervals, of the form $B_{i \pm \{0,1\},\, j \pm \{0,1\}}$, such that the value of $\hat{f}$ could be computed in them; let us denote that set of neighbor intervals $\mathcal{N}_{ij}$. Then, we will interpolate
    $$\hat{f}(B^*_{ij}) = \frac{\sum_{B \in \mathcal{N}_{ij}} w_B\, \hat{f}(B)}{\sum_{B \in \mathcal{N}_{ij}} w_B} \qquad (2)$$
    where the $w_B$ are weights assigned to the intervals $B \in \mathcal{N}_{ij}$. The simplest weight assignment would be the number of points inside $B$, that is $w_{B_{kl}} = N_{kl}$, meaning that we take $B^*_{ij}$ as a part of the larger set $\bar{B}_{ij} = B^*_{ij} \cup \bigcup_{B \in \mathcal{N}_{ij}} B$ and then estimate $\hat{f}$ as the average of $f$ over the points measured in that enlarged set $\bar{B}_{ij}$. Under this assumption, we can also interpolate the number of expected measurement points in $B^*_{ij}$ (e.g., after a new statistically independent measurement of the function) as
    $$N^*_{ij} = \frac{\sum_{B \in \mathcal{N}_{ij}} w_B\, N_B}{\sum_{B \in \mathcal{N}_{ij}} w_B} \qquad (3)$$
    equating $N_{B_{ij}} = N_{ij}$ in subindices notation.
    Remark 1.
    For a partition of $I$ with $n > n_0$, the expression "such that the value of $\hat{f}$ could be computed in them" will also include the rough estimation of $\hat{f}$ (and of $N_B$, $B \in \mathcal{N}_{ij}$) from the previous partition $n-1$ given by (4) below.
  • Now, we will refine the partition of $I$ by defining, for each $B_{ij}$, four subintervals (quadtree structure) $B_{ij,kl}$ with $k, l = 0, 1$. If our partition of $I$ was made in $2^n \times 2^n$ intervals, then this one will be in $2^{n+1} \times 2^{n+1}$ intervals of the form
    $$B_{ij,kl} = \left[ (2i+k) \cdot 2^{-(n+2)},\, (2i+k+1) \cdot 2^{-(n+2)} \right] \times \left[ (2j+l) \cdot 2^{-(n+2)},\, (2j+l+1) \cdot 2^{-(n+2)} \right]$$
    and assign to each of these subintervals the following values of $\hat{f}$ and $N_{ij,kl}$ (until a better approximation is made):
    $$\hat{f}(B_{ij,kl}) = \hat{f}(B_{ij}), \qquad N_{ij,kl} = \frac{1}{4} N_{ij} \qquad (4)$$
  • At this point, we have, for the partition of $I$ in $2^{n+1} \times 2^{n+1}$ intervals, a rough estimation of $\hat{f}$ and $N_{ij,kl}$ in each of its subintervals. Then, we can relabel those $B_{ij,kl}$ subintervals applying the substitution $(ij, kl) \to (2i+k,\, 2j+l)$ and go back to step 2 to calculate an improved interpolation on a new $n+1 \to n$ partition with updated intervals $B_{ij}$ of sidelength $2^{-n}$.
The multigrid quadtree refinement structure of the algorithm makes it reach a spatial resolution of $r$ (i.e., $r$ is the sidelength of any of the $B_{ij}$ intervals in the last iteration) in $-\log_2(r) - n_0 + 1$ iterations of the previous 5 steps. We only run through the scales in one direction, top-down, hence the title of this section.
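The five steps above can be sketched compactly on dense arrays. The following Python sketch is our own simplification, not part of the method as published: the function name is ours, and neighborhoods wrap periodically at the edges (via np.roll) purely for brevity. It bins the points at each level, keeps exact cell means where data exist, fills empty cells with the count-weighted neighbor average of Equations (2) and (3), and refines by quadtree inheritance, Equation (4):

```python
import numpy as np

def mmi_interpolate(x, y, z, n0=2, n_max=6):
    """Dense-array sketch of the top-down MMI loop.

    x, y: point coordinates scaled to [0, 1); z: measured heights.
    Returns the 2**n_max x 2**n_max grid of estimated cell means.
    """
    x, y, z = map(np.asarray, (x, y, z))
    f_hat = n_hat = None
    for n in range(n0, n_max + 1):
        m = 2 ** n
        # Step 2: bin the points and take exact means where data exist.
        i = np.minimum((x * m).astype(int), m - 1)
        j = np.minimum((y * m).astype(int), m - 1)
        sums, counts = np.zeros((m, m)), np.zeros((m, m))
        np.add.at(sums, (i, j), z)
        np.add.at(counts, (i, j), 1.0)
        if f_hat is None:                       # coarsest level
            f_hat, n_hat = np.zeros((m, m)), np.zeros((m, m))
        has_data = counts > 0
        f = np.divide(sums, counts, out=f_hat.copy(), where=has_data)
        w = np.where(has_data, counts, n_hat)   # w_B = N_B (Remark 1 for empty cells)
        # Step 3: re-estimate empty cells from their 8 neighbors, Eqs. (2)-(3).
        num_f, num_n, den = np.zeros((m, m)), np.zeros((m, m)), np.zeros((m, m))
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                if di == 0 and dj == 0:
                    continue
                wn = np.roll(np.roll(w, di, 0), dj, 1)
                num_f += wn * np.roll(np.roll(f, di, 0), dj, 1)
                num_n += wn * wn
                den += wn
        empty = (~has_data) & (den > 0)
        f_hat = np.where(empty, np.divide(num_f, den, out=f.copy(), where=den > 0), f)
        n_hat = np.where(empty, np.divide(num_n, den, out=w.copy(), where=den > 0), w)
        # Steps 4-5: quadtree refinement; children inherit f and a quarter of N.
        if n < n_max:
            f_hat = np.kron(f_hat, np.ones((2, 2)))
            n_hat = np.kron(n_hat, np.ones((2, 2))) / 4.0
    return f_hat
```

A production implementation would use a sparse quadtree instead of dense arrays and would clip, rather than wrap, the neighborhoods at the domain boundary.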

2.2. Some Properties of the Algorithm

  • Exactness: The method is an exact interpolator, meaning that, for any partition of $I$ in $2^n \times 2^n$ subintervals, the interpolated $\hat{f}(B_{ij})$ is the mean of the observed values of $f$ at points within $B_{ij} \subset I$, in particular for $B_{ij}$ containing one single point (which is the usual meaning of an exact interpolation method).
  • Smoothing: Smoothing of the surface is done during the down-scaling process, applying the nearest-neighbor weighted averaging of Equations (2) and (3). The neighborhood can be restricted to first neighbors only, extended to second neighbors, or weighted unevenly (e.g., assigning a 0.614 weight to second neighbors, assuming octagonal symmetry). In order to get smoother surfaces, the application of Equation (1) can be stopped at some resolution $n_s$, applying from there on only the generalization operation; the method will then not be exact at the highest resolution (i.e., pointwise).
  • Statistical expectation: At every resolution level $n$, pixels containing data points are assigned the average value of elevation, which is an unbiased estimator of the mean. However, pixels not containing data points are estimated from their surrounding pixels, either at that resolution, $n$, if these contain data points, or at the previous resolution, $n-1$, if they do not. Equations (2) and (3), when used to estimate $\hat{f}(B^*_{ij})$ and $N^*_{ij}$ using as $w_B$ the $N_{ij}$ known up to that level, operate as unbiased estimators acting on unbiased estimations, and will thus provide the unbiased expected value of $f(B_{ij})$ when averaged over all possible data samplings. As in the case of ordinary kriging, the underlying hypothesis is that $f$ is "locally constant", hence the neighborhood averaging.
  • Sensitivity to outliers: As long as the method is based on data averages (or estimated averages), outliers will have an effect on the results. They cannot be safely removed unless strong statistical assumptions (for instance, based on the asymptotic standard error of the mean) are made scale-wide, because the same error correction should be applied at all scales. This will be assessed using K-fold cross-validation (see Section 2.4 below).
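The generalization (smoothing) step can be examined in isolation: it is just a count-weighted neighborhood average, optionally modulated geometrically (the 0.614 second-neighbor weight mentioned above). A minimal sketch follows; the function name and the periodic edge handling are our own simplifications:

```python
import numpy as np

def fill_empty(f, counts, diag_weight=0.614):
    """Fill cells with no data using a weighted average of their 8 neighbors.

    Weights are the per-cell data counts (w_B = N_B), times a geometric
    factor: 1 for side neighbors, diag_weight for diagonal (second) ones.
    Edges are treated as periodic (np.roll) for brevity only.
    """
    f, counts = np.asarray(f, float), np.asarray(counts, float)
    num, den = np.zeros_like(f), np.zeros_like(f)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di == 0 and dj == 0:
                continue
            geom = diag_weight if (di != 0 and dj != 0) else 1.0
            w = geom * np.roll(np.roll(counts, di, 0), dj, 1)
            num += w * np.roll(np.roll(f, di, 0), dj, 1)
            den += w
    fillable = (counts == 0) & (den > 0)
    return np.where(fillable, num / np.where(den > 0, den, 1.0), f)
```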

2.3. Fractal Extrapolation

Geological surfaces, and particularly bathymetric surfaces, are known to evolve through scale-independent transformations and have often been characterized as self-affine fractals [50] or multifractals [51,52,53,54] whose Hurst exponent or multifractality spectrum can be related to their geophysical evolution [6,53].
The well-known "middle point displacement" method [55] has been used to construct visually realistic landscape surfaces; it applies a simple rule to successively refine a triangulated surface (with some degree of randomness). Although there are variants of this method (among others, to generate multifractal surfaces [56]), the key idea is to refine the triangulated surface by inserting a new point inside each of its faces (e.g., at the center of the triangles) and assigning to it a height equal to some average of the heights of the previous triangle's vertices plus a randomly distributed zero-mean displacement with variance $\sigma^2$ proportional to $L^{2H}$, $L$ being the side-length of the triangle. The new points, once included in the triangulation, multiply the number of triangles by 3, and the new triangulated surface is transformed by applying the same rule until the required spatial resolution (defined by the triangle side-length $L$) is achieved.
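For reference, the square-grid variant of midpoint displacement (often called the diamond-square algorithm; the text above describes the triangulated variant, but the $2^{-H}$ scaling rule for the displacement amplitude is the same) can be sketched as follows. The parameter names are ours:

```python
import numpy as np

def midpoint_displacement(n_levels, H=0.8, sigma=1.0, seed=0):
    """Square-grid midpoint displacement sketch.

    Returns a (2**n_levels + 1)-sided grid of heights whose displacement
    amplitude shrinks by 2**-H each time the scale is halved.
    """
    rng = np.random.default_rng(seed)
    size = 2 ** n_levels + 1
    f = np.zeros((size, size))
    step, s = size - 1, sigma
    while step > 1:
        half = step // 2
        # Cell centers: average of the 4 corners plus a random displacement.
        for i in range(half, size, step):
            for j in range(half, size, step):
                avg = (f[i - half, j - half] + f[i - half, j + half] +
                       f[i + half, j - half] + f[i + half, j + half]) / 4.0
                f[i, j] = avg + s * rng.uniform(-1, 1)
        # Edge midpoints: average of the available axial neighbors.
        for i in range(0, size, half):
            start = half if (i // half) % 2 == 0 else 0
            for j in range(start, size, step):
                nbrs = []
                if i - half >= 0: nbrs.append(f[i - half, j])
                if i + half < size: nbrs.append(f[i + half, j])
                if j - half >= 0: nbrs.append(f[i, j - half])
                if j + half < size: nbrs.append(f[i, j + half])
                f[i, j] = np.mean(nbrs) + s * rng.uniform(-1, 1)
        step = half
        s *= 2.0 ** (-H)   # halving the scale L shrinks the displacement by 2^-H
    return f
```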
Given the similarities of this "middle point displacement" construction with our interpolation method, we will adopt it to modify Equation (2) in order to allow for a fractal simulation (or extrapolation) of $\hat{f}$ in those intervals $B^*_{ij}$ without actual measurements $x_k$. So we will just estimate
$$\hat{f}(B^*_{ij}) = \frac{\sum_{B \in \mathcal{N}_{ij}} w_B\, \hat{f}(B)}{\sum_{B \in \mathcal{N}_{ij}} w_B} + \frac{1}{\sqrt{12}}\, s_n \times \eta \qquad (5)$$
where $\eta$ is a uniformly distributed random variable in $[-1, 1]$ and $s_n$ is the roughness of the surface (typical deviation) at the scale $L = 2^{-(n+1)}$, given by
$$s_n = \sigma_r \times (L/r)^H \qquad (6)$$
where $r$ is the reference resolution (usually, the final interpolated map resolution), $\sigma_r$ is the estimated roughness at that resolution $r$ (i.e., the root mean square difference between surface heights measured at that resolution), and $H$ is the Hurst exponent.
Usually, $H$ will not be known beforehand, so it can be estimated:
  • Globally: from the global mean roughness at the smallest scale (one pixel of the final interpolated map), computed from the neighbor height differences $\Delta f$ between intervals containing observation points. If there are $K$ such pairs of neighboring intervals, then $\sigma_r^2 = s_N^2 = \frac{1}{K} \sum_{k=1}^{K} (\Delta f_k)^2$. The value of $H$ is estimated from the previous resolution roughness, $s_{n-1}$, which is already known: $H = \log(s_{n-1}/s_N) / \log(2L/r)$. Going global maximizes the number $K$, thus improving the estimation; however, local roughness could vary from one part of the domain to another.
  • Locally: in this strategy, a value of $\sigma_r^2$ is estimated in each interval, using only the neighbor height differences $\Delta f$ of observation points within that $n$-th resolution interval (of size $L$). However, whenever there are no pairs of neighboring points within that interval, $\sigma_r^2$ is estimated from the previous resolution (of size $2L$) by the same interpolation method used to estimate $\hat{f}$. This implies that not only $\hat{f}(B_{ij})$ has to be interpolated, but also $\hat{\sigma}_r(B_{ij})$, using the same algorithm.
We will use the local approach in this article.
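The global strategy can be sketched directly on a regular grid of heights: roughness is measured from neighbor differences at the pixel scale and again on a 2x2 block-averaged grid, and $H$ follows from the ratio of the two. This is a minimal sketch under our own simplifications (the function name is ours, and it assumes a gap-free grid rather than scattered observation points):

```python
import numpy as np

def hurst_global(dem):
    """Global Hurst exponent estimate from neighbor height differences,
    comparing rms roughness at the pixel scale r and at scale 2r."""
    dem = np.asarray(dem, float)
    # Roughness at scale r: rms difference between adjacent pixels.
    d1 = np.concatenate([(dem[1:, :] - dem[:-1, :]).ravel(),
                         (dem[:, 1:] - dem[:, :-1]).ravel()])
    s_r = np.sqrt(np.mean(d1 ** 2))
    # Roughness at scale 2r: same statistic on a 2x2 block-averaged grid.
    c = dem[: dem.shape[0] // 2 * 2, : dem.shape[1] // 2 * 2]
    c = 0.25 * (c[0::2, 0::2] + c[1::2, 0::2] + c[0::2, 1::2] + c[1::2, 1::2])
    d2 = np.concatenate([(c[1:, :] - c[:-1, :]).ravel(),
                         (c[:, 1:] - c[:, :-1]).ravel()])
    s_2r = np.sqrt(np.mean(d2 ** 2))
    # H = log(s_{n-1} / s_N) / log(2 L / r), with L = r here.
    return np.log(s_2r / s_r) / np.log(2.0)
```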
Remark 2.
Notice that the global estimation of $H$ would play the role of the covariance structure estimation used in ordinary kriging, assuming a power-law semivariogram model for the entire area, i.e., a stationary covariance structure. The local approximation would allow for a non-stationary process, similar to universal kriging, and also results in multifractal structures. The main difference here is that the fractal structure is computed from measured data as close to the actual scale as possible, the simulation being applied only where necessary, i.e., on intervals with no data for estimation.

2.4. Surface Validation and Error Estimation

We would like to know how accurate the surface estimation is, given a random sample of measurement points $(x_i, f(x_i))$. The common method to assess goodness of fit is validation, that is, using a part of the points not used to fit the function $f$ to compute the distance between the estimated values $\hat{f}$ at those points and the actually measured values. However, this method only provides a pointwise (at each $x_i$) or a global (e.g., the mean square error) estimation of error. A bootstrap cross-validation, at the other extreme, would repeat the interpolation a large number of times $K$, using each time an independent random sample (extracted "with repetition") of measurement points, and then estimate the local interpolation error from the distribution of interpolation replicas $\{\hat{f}^{(k)}\}_{k=1}^{K}$.
In this article, we use a more modest and realizable estimation process, based on K-fold cross-validation. The interpolation will be repeated $K$ times, leaving out $1/K$-th of the data each time. Then, instead of only testing the accuracy of the interpolation with respect to that $1/K$-th of the data, we will estimate the local interpolation standard error $\Delta \hat{f}_{CV}(x)$ from the set of $K$ interpolation replicas $\{\hat{f}^{(k)}\}_{k=1}^{K}$ as
$$\Delta \hat{f}_{CV}^2(x) = \sum_{p=1}^{K} \left( \hat{f}^{(p)}(x) - \hat{f}_{CV}(x) \right)^2 \qquad (7)$$
where
$$\hat{f}_{CV}(x) = \frac{1}{K} \sum_{q=1}^{K} \hat{f}^{(q)}(x) \qquad (8)$$
is the mean cross-validation surface.
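This K-fold estimate can be sketched generically, for any gridding function standing in for the interpolator (the function names and the array-based point format are our assumptions):

```python
import numpy as np

def kfold_error_maps(points, interpolate, K=10, seed=0):
    """K-fold cross-validation local error: interpolate K times, each time
    leaving one fold out, then take the spread of the K replicas.

    points: (N, 3) array of (x, y, z) samples; interpolate: any function
    mapping such an array to a 2-D grid (it stands in for the interpolator).
    """
    rng = np.random.default_rng(seed)
    folds = rng.permutation(len(points)) % K          # even random partition
    replicas = np.stack([interpolate(points[folds != k]) for k in range(K)])
    f_cv = replicas.mean(axis=0)                      # mean CV surface
    err_cv = np.sqrt(((replicas - f_cv) ** 2).sum(axis=0))  # local error
    return f_cv, err_cv
```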
Remark 3.
Apart from the obvious problem posed by bootstrap of computing a large number of interpolations, the condition of independent random samples poses a problem when measurement data are inherently correlated, as is the case with sampling transects. To address the problem of spatial correlation of points along a transect, we will adopt an "object-oriented" K-fold partition of the data: we will subset each transect into smaller sub-transects of equal length (25 km was a practical choice for the case studies below), randomly assigning each of them to one of the $K$ partitions of the data. We will use $K = 10$, which is a common choice in the literature [57].
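This "object-oriented" fold assignment can be sketched as follows (the function name and the per-point along-track distance input are our assumptions; the key point is that whole sub-transects, not individual points, are assigned to folds):

```python
import numpy as np

def transect_folds(dist_along, transect_id, K=10, chunk_km=25.0, seed=0):
    """Cut each transect into chunk_km-long sub-transects and assign each
    sub-transect (not each point) to a random fold, preserving along-track
    correlation within folds.

    dist_along: along-track distance of each point, in km;
    transect_id: which transect each point belongs to.
    Returns a fold label (0..K-1) per point.
    """
    rng = np.random.default_rng(seed)
    dist_along = np.asarray(dist_along, float)
    transect_id = np.asarray(transect_id)
    folds = np.empty(len(dist_along), dtype=int)
    for t in np.unique(transect_id):
        sel = transect_id == t
        chunk = (dist_along[sel] // chunk_km).astype(int)
        labels = rng.integers(0, K, size=chunk.max() + 1)  # one fold per chunk
        folds[sel] = labels[chunk]
    return folds
```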

3. Case Studies

In this section, we will apply our interpolation method to reconstruct and assess the quality of two surfaces interpolated from sampled data. First, we will sample data from an area of the SRTM digital elevation model and test the accuracy of our interpolation both from the sampled data (using the K-fold error estimation) and by comparison with the actual model. Then, we will use bathymetric measurements acquired over an area of equivalent size and compute the accuracy of our interpolation from those sampled data; in this case, we do not have a bathymetric model more accurate (i.e., computed from more extensive data) than our result, hence the interest of the first case study.
Our study cases are located in the Gulf of San Jorge (GSJ) and its adjacent coastal area. The GSJ is the largest gulf of the Argentinian Patagonian shelf, with an extension of 39,340 km² and a mouth nearly 250 km wide, located between 45° S (Cape Dos Bahías) and 47° S (Cape Tres Puntas) (Figure 1). This gulf is a semi-open basin mainly covered by silt, with coarse granulometric fractions at the north and south ends of the gulf [58,59]; it reaches about 100 m of depth in its center, and its mouth has depths ranging from about 90 m in the north and center to 50–60 m at the south end, where the basin is demarcated from the adjacent shelf by a pronounced sill. The tidal regime in the GSJ is semidiurnal, with tidal amplitudes ranging between 3–5 m [60,61].
The continental vicinity of the GSJ forms part of the hydrocarbon-producing GSJ basin surrounded by the North Patagonian Massif (north), Deseado Massif (south), and the Andes (west) [62]. These massifs appear in the GSJ as Jurassic rhyolitic volcanic rock outcrops, the larger one located in the northeast (close to Cape Dos Bahías). The GSJ basin plateau is mainly covered by Eocene-Miocene sedimentary rocks of the Sarmiento and Patagonia Formations [63], as well as Quaternary fluvio-glacial deposits (“Rodados Patagónicos”; [64]). This plateau reaches the coast as cliffs or gravel/sand beach-ridges [60].
The GSJ is a very interesting and complex management case, since several interests coexist in it [65]. On the one hand, the GSJ is one of the most relevant areas of the Argentine coast in terms of biodiversity and productivity, with relevant areas for marine conservation because of the presence of reproductive aggregations and foraging grounds of many marine birds and mammals. Moreover, it houses major fisheries targeting valuable shrimp, hake, scallop, and king crab stocks [66,67]. On the other hand, its hydrocarbon-producing geology makes it the site of offshore oil platforms [62]. Since each of these processes and activities (oceanographic, fisheries, oil platforms, etc.) extends beyond the limits of the Gulf, we have included in our study the adjacent areas, limited to the north by 44°20′ S (Cabo Raso), to the south by 48°05′ S (Punta Buque), and to the east by the 64° W meridian (Figure 1).

3.1. SRTM Digital Elevation Model Sample Reconstruction

We selected the area between 69.6° and 65.7° W and between 48.1° and 44.2° S, shown in Figure 1 (solid-line rectangle), for our experiments. The SRTM30 tiles corresponding to this area were merged, resampled, and reprojected onto a 90 m UTM grid (zone 19 S); the area includes a total surface of 84,400 km² in the emerged zone. Data samples were extracted using two different sampling strategies:
  • random point subsampling;
  • transect subsampling with 25 km long straight parallel transects.
Sampling density, that is, the fraction of land points of the grid included in these samples, was set to $p = 2^{-n}$, with $n = 4, 5, 6, 7, 8$ (that is, from $p \approx 0.004$ to $0.063$). From those samples, a digital elevation model was interpolated with and without fractal extrapolation. For every sampling strategy and density, the average interpolation bias
$$\langle \Delta \hat{f} \rangle = \langle \hat{f}_{CV} - f \rangle,$$
the root mean square error
$$\Delta \hat{f}_{rms} = \sqrt{\langle (\Delta \hat{f})^2 \rangle},$$
the 50% and 90% interquantile ranges of $\Delta \hat{f}$, denoted $IQ_{50\%}\, \Delta \hat{f}$ and $IQ_{90\%}\, \Delta \hat{f}$, and the correlation coefficient between $\hat{f}_{CV}$ and $f$, $\mathrm{cor}(\hat{f}_{CV}, f)$, were computed by direct comparison of the estimated $\hat{f}$ with the full SRTM data $f$. The K-fold cross-validation mean square errors
$$\Delta \hat{f}_{CV}^{rms} = \sqrt{\langle \Delta \hat{f}_{CV}^2 \rangle}$$
are also included in Table 1 and Table 2. The K-fold cross-validation estimated standard error $\Delta \hat{f}_{CV}(x)$, as well as the standard error map of the interpolated surface, from which the table values were computed, are shown in Figure 2. The sampling density of the highlighted column, $p = 2^{-6} \approx 0.0156$, is the closest one to the sampling density of our case in Section 3.2, $p = 0.0181$.

3.2. Gulf of San Jorge Bathymetry Interpolation

Now our area lies between 67.7° and 64.0° W and between 48.1° and 44.2° S, as shown in Figure 1 (dashed-line rectangle), enclosing a marine area of 85,600 km². We used a number of data sources with different spatial sampling strategies (along transects and pointwise), densities, depth reference levels, etc.:
  • Acoustic data from single and split-beam echosounders (SBES): This type of data is distributed in transects, within which there is a very high density of sounding points (depending on the vessel speed and the ping rate, but not greater than one sounding point every ten meters). In addition, the vertical resolution, although dependent on the working frequency, is usually less than 50 cm . In our study case we have several sources of this bathymetric information:
    • The bathymetric data repository published by the National Institute for Fisheries Research and Development (INIDEP) of Argentina, which regularly conducts stock assessment surveys. This repository has a horizontal resolution of one sounding point every 5 m (see details in [68]). In our study area, there were 85,085 sounding points, with depths between 11.5 and 123.1 m. These data are distributed in transects located mainly in the northern and southern areas of the GSJ, with less density in the central area.
    • Data from oceanographic campaigns collected in the framework of research project PICT 2016-0218, from the analysis of oceanographic and fishing campaigns carried out by different Argentine institutions. This database consisted of 147,755 bathymetric points, with depths between 4.2 and 146.7 m. These data are distributed throughout the study area in transects with a mostly NW-SE orientation.
    • Data from coastal campaigns. There were 4281 bathymetric points, with depth values between −2 m (negative values indicate points above the low-tide level, that is, the intertidal area) and 71.6 m, all of them acquired with portable echosounders from small vessels. These data lie in areas very close to the coast, in the north of the GSJ.
    Considering the tidal amplitude ranges in the GSJ, all measured depths were referred to a reference low-tide level by applying a tide correction computed with the open OSU Tide Prediction Software (OTPS, available from https://www.tpxo.net/otps; access date 17 June 2022) [69].
  • Acoustic data from multibeam echosounders (MBES) and interferometric sidescan sonar (ISSS): unlike SBES, these acoustic sounders provide wide swath coverage at very high vertical and horizontal resolutions (up to a few centimeters). For our study area, these data come from three acoustic surveys in coastal areas (north of the GSJ), two with MBES and one with ISSS. For this work, the bathymetric surfaces were subsampled onto a 50 m grid. In total, 11,305 bathymetric points were included, with depth values between 5.2 and 121.3 m.
  • Data from nautical charts: nautical charts are always a basic source of bathymetric information, in this case developed and maintained by the Naval Hydrography Service (Servicio de Hidrografía Naval) of Argentina. For our study area, data from six nautical charts were used; one of these charts covered the entire area, while the other five cover smaller coastal areas, located to the north and west of the gulf, in higher detail. In total, 3522 bathymetric points were used, with depths between 0.3 and 119 m.
  • Data from the citizen-science project “Observadores a bordo” (on-board observers, POBCh). Most of the GSJ waters are under the jurisdiction of the province of Chubut, whose Fisheries Secretariat has run the POBCh program for years to monitor fisheries. In this program, along with fishing data, depth data were recorded at the places where fishing sets were made (together with date and time). After cleaning this database, we used 38,249 bathymetric points in our study area, with depths between 2.8 and 123 m, distributed throughout the entire GSJ except for the SW quadrant, which is under the jurisdiction of another province. Depth data were also corrected using OTPS based on the observers' annotated coordinates and local times.
  • Coastline. The 0 m isoline of the SRTM30 model was used as the boundary between the emerged and submerged areas. Points were generated along this line (which also includes islands), separated by 20–30 m (one second of arc, corresponding to the SRTM resolution) and assigned a depth value of 0 m. For the study area, 59,128 points were included, from Santa Elena Bay in the north to Punta Buque. The coastline is used as a boundary condition and is therefore not included in the cross-validation process (i.e., it is always included in the interpolation) [36].
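The tide correction applied to the SBES and POBCh depths is conceptually simple once a tide prediction is available. The sketch below illustrates the idea in Python; the `Sounding` container, the `predict_tide` callback (whose role is played by OTPS in this work), and the `low_tide_level` parameter are illustrative names of ours, not part of any published API.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Sounding:
    lon: float        # degrees E
    lat: float        # degrees N
    time: datetime    # acquisition time
    depth: float      # metres below the instantaneous sea surface

def correct_to_low_tide(soundings, predict_tide, low_tide_level):
    """Refer measured depths to a common low-tide datum.

    `predict_tide(lon, lat, time)` returns the modelled tide height
    (metres above the model's reference level); `low_tide_level` is
    the chosen low-tide reference on that same level.
    """
    corrected = []
    for s in soundings:
        tide = predict_tide(s.lon, s.lat, s.time)
        # Depth below the low-tide datum = measured depth minus the
        # height of the instantaneous surface above that datum.
        corrected.append(s.depth - (tide - low_tide_level))
    return corrected
```

With a predicted tide of 2 m above the model reference and a low-tide level of −1 m, a measured 10 m depth becomes 7 m below the low-tide datum.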
To harmonize the data, they were subsampled to one point every 50 m along each transect (to reduce the importance bias caused by larger sounding densities) and projected onto a 90 m UTM grid (zone 20 S). Whenever a new data source was projected onto this grid, its depth measurements were shifted to agree on average with the already-projected data sources; as a reference, nautical charts were added second, just after the coastline data. The total number of data points within the study area was 248,443.
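The per-source vertical shift described above amounts to estimating a mean offset over grid cells where the new source overlaps already-accepted data, then subtracting it. A minimal sketch, in which the grid dictionary, point tuples, and function name are our own illustrative assumptions rather than the authors' implementation:

```python
def harmonize(grid, new_points, cell=90.0):
    """Shift a new data source so that, on average, it agrees with
    the data already projected onto the grid.

    `grid` maps (i, j) cell indices to already-accepted mean depths;
    `new_points` is a list of (x, y, depth) tuples in grid coordinates;
    `cell` is the grid spacing (90 m UTM cells in the text).
    """
    # Bin the new source onto the same grid cells.
    binned = {}
    for x, y, d in new_points:
        key = (int(x // cell), int(y // cell))
        binned.setdefault(key, []).append(d)
    # Mean offset over the cells where both sources have data.
    overlaps = [sum(ds) / len(ds) - grid[k]
                for k, ds in binned.items() if k in grid]
    offset = sum(overlaps) / len(overlaps) if overlaps else 0.0
    # Apply the vertical shift to the whole source.
    return [(x, y, d - offset) for x, y, d in new_points], offset
```

Sources are processed in order of reliability, so each shift anchors the new data to the already-harmonized ensemble.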
Remark 4.
Although in some sense this variety of bathymetric sources can be seen as crowdsourced data, all of the datasets were acquired in the context of scientific research programs and had previously been curated and subjected to quality tests to remove erroneous data. For example, SBES acoustic transects were tested for false bottom detections and missing echoes. Similarly, POBCh data were checked for points far off their neighbors' depths (usually erroneous manual annotations), and those points were removed from the dataset.

Outlier Detection

The input data contained a number of points that cross-validation revealed to be far off the mean interpolated surface, noticeably farther than the local standard error Δ f ^ CV ( x i ). To detect and remove them from the input data, we applied the algorithm known as Tukey fences [70] to the measurement errors f ^ CV ( x i ) − f ( x i ). The algorithm consists of computing the interquartile interval of these measurement errors and removing those points that depart from either interval bound by more than k Tuck times its length. That is, only observation points such that
$$ Q_{25\%}\,\Delta\hat{f}_{\mathrm{CV}} - k_{\mathrm{Tuck}} \times IQ_{50\%}\,\Delta\hat{f}_{\mathrm{CV}} \;<\; \hat{f}_{\mathrm{CV}}(x_i) - f(x_i) \;<\; Q_{75\%}\,\Delta\hat{f}_{\mathrm{CV}} + k_{\mathrm{Tuck}} \times IQ_{50\%}\,\Delta\hat{f}_{\mathrm{CV}} $$
are kept. According to [70], a value of k Tuck = 1.5 detects outliers, and k Tuck = 3 detects “far off” points; we used k Tuck = 2 here. We also removed points where Δ f ^ CV ( x i ) / f ^ CV ( x i ) > 0.5, that is, where the cross-validation relative standard error was above 50%; those points clearly did not contribute any information to the interpolation. In total, 18,080 points were removed from the interpolation in the study area based on these criteria.
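As a concrete illustration, the Tukey-fence criterion above fits in a few lines. This is a generic sketch (function and variable names are ours), with quartiles computed by linear interpolation between order statistics:

```python
def tukey_fence_mask(errors, k=2.0):
    """Tukey-fence outlier detection on cross-validation residuals.

    `errors` are the per-point residuals f_CV(x_i) - f(x_i); points
    outside [Q25 - k*IQR, Q75 + k*IQR] are flagged (mask = False).
    k = 1.5 flags 'outliers', k = 3 'far off' points; the text uses k = 2.
    """
    s = sorted(errors)
    n = len(s)

    def quantile(q):
        # Linear interpolation between neighbouring order statistics.
        pos = q * (n - 1)
        lo, hi = int(pos), min(int(pos) + 1, n - 1)
        return s[lo] + (pos - lo) * (s[hi] - s[lo])

    q25, q75 = quantile(0.25), quantile(0.75)
    iqr = q75 - q25
    low, high = q25 - k * iqr, q75 + k * iqr
    return [low <= e <= high for e in errors]
```

Points whose mask entry is False are dropped before re-running the interpolation.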
The interpolation was then carried out again, giving the results summarized in Figure 3 and Table 3.

4. Discussion

Above, we presented and tested a multigrid/multiresolution interpolation (MMI) method with four desirable qualities: it is fast, with relatively low RAM requirements (in its simplest version); extensible; based on the fewest possible statistical hypotheses; and locally exact (i.e., at each pixel scale the interpolated values coincide with the average of the measured data).

4.1. Assessment of the Interpolations

The potential of MMI is shown in the study cases above. One (Figure 2) covers an emerged topography with very different reliefs: from mountains in the north-west (nearing the Andes mountain range) to the southern plains of the Patagonian steppe; in addition, a hilly structure runs almost parallel to the coast, from the city of Comodoro (at the midpoint of the GSJ) to the north, which, although not reaching high altitudes, stands out from the surrounding plains. The other (Figure 3) covers a submerged area combining sandy (south) and rocky (north) coasts, island chains (north), flat sedimentary bottoms (center), a basin-delimiting sill (east), etc. In both cases, the interpolated surface follows the topography at the larger scales, but also at the smaller ones if enough data are available, with no appreciable oversmoothing. This is shown numerically by the close values of the mean and standard deviation of the original SRTM and interpolated DEMs, with differences below 3% for the mean and below 10% for the standard deviation at reasonable sampling densities (even for transect sampling).
Regarding the interpolation using SRTM-sampled data, statistical analysis shows that interpolation cross-validation errors depend strongly on both sampling density and sampling strategy (see Table 1 and Table 2): random sampling gives rms standard errors ranging from about 8.5 m (p = 0.062) to 20 m (p = 0.004), while transect sampling ranges from 28 m (p = 0.062) to 105 m (p = 0.004), i.e., 4 to 5 times larger; this quantifies the loss of accuracy far from the transects. The relationship between the cross-validation error Δ f ^ CV rms and the standard interpolation error Δ f ^ rms is approximately linear in this range of sampling densities, with Δ f ^ CV rms slightly underestimating the error computed from direct comparison with the original SRTM; nevertheless, Δ f ^ CV rms lies within the 50% and 90% interquantile errors. The inclusion of fractal extrapolation adds to these errors, as expected, but not by a statistically significant amount. We can draw on this to analyse the GSJ interpolation (Figure 3 and Table 3).
Visually, the GSJ interpolated surface does not show any marked transect artifacts. However, “pimple” effects are slightly visible, especially at points from the POBCh data source, in the northern rocky shores (which are naturally irregular) and in the flat sedimentary plateau; remarkably, the Tukey fences did not remove these points as outliers, so these “pimples” could simply reflect real bottom roughness or the need for more sampling in the voids around them. When fractal extrapolation is applied, both effects are masked to some degree by the artificial fractal roughness. A clear case is observed in front of Cape Tres Puntas, where data are scarce and yet the surface is rough, which also attenuates the effect of the south-leading oceanographic survey transects (but, in turn, increases the estimated cross-validation error). On the contrary, in the western part of the Gulf the interpolated surface shows a flat bottom both with and without fractal extrapolation; this agrees with the known features of the sedimentary seabed in this zone and is confirmed by the relatively low cross-validation error in that area, although in other areas a low error could result from a lack of data there or nearby.

4.2. Assessment of the Method

The idea of multigrid methods appeared in computational mathematics [71] as a way to speed up the solution of partial differential equations, and has been interpreted as a preconditioner of the resulting system of linear equations. This not only makes their solution faster but also numerically more accurate. That was also the goal of using hierarchical bases and wavelets in interpolation methods [44]. Other multiscale methods, such as Laplacian/Gaussian pyramid methods in image processing [72] or other wavelet-based methods, have focused either on image information representation and compression or on feature analysis [5,44]. In some sense our MMI is related to them, as it uses multiple-scale grids (the quadtree structure) that could be formally related to the simple Haar wavelet basis; however, it is difficult to relate those previous works to the interpolation we perform here, with randomly distributed point and transect samples and mean-surface estimation at each resolution.
Our MMI method is easier to compare with other common interpolation methods such as IDW or kriging. It shares with them that interpolated values are computed as convex linear combinations (i.e., weighted averages) of measured data, without imposing further conditions on the resulting surface. Contrary to kriging, MMI requires neither the computation of the semivariogram nor stationarity assumptions, which makes it, on the one hand, a (more) parameter-free method and, on the other hand, more adaptable to extended areas with subareas of very different elevation profiles. Like those two methods, it is a convex method: interpolated elevations are weighted averages of measured ones, so it cannot predict a crest or a valley unless these features were captured by the sampling of the elevation or bathymetric surface (however, see below for further improvements along these lines). MMI, whose underlying idea is simply the spatial averaging of measurements inside a tile, is easy to interpret, at least locally; this it shares with ordinary kriging (OK), which is the best linear unbiased predictor, that is, an estimator of the expected mean elevation based on correlated nearby measurements. The difference is that MMI assumes measurements to be reliable and aims at interpolating the surface that contains these points, instead of the surface that is the estimated mean of the stochastic surface to which the measured points belong.
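The “spatial averaging of measurements inside a tile” idea can be made concrete with a minimal sketch: average the data within 2×2 tiles up a raster pyramid, then, coming back down, keep the tile means where data exist and let empty cells inherit the next coarser mean. This is only an illustration of the multigrid averaging principle, under the assumption of a square power-of-two raster, and not the authors' full published algorithm (which also handles the quadtree bookkeeping and boundaries):

```python
import numpy as np

def mmi_sketch(values, mask):
    """Minimal multigrid/multiresolution averaging sketch.

    `values`: 2D array of measured heights (square, power-of-two sized);
    `mask`:   boolean array, True where a measurement exists.
    Returns a full raster: exact tile means where data exist, coarser
    means elsewhere.
    """
    # Coarsening pass: accumulate sums and counts over 2x2 tiles.
    levels = [(values * mask, mask.astype(float))]
    while levels[-1][0].shape[0] > 1:
        v, w = levels[-1]
        v2 = v.reshape(v.shape[0] // 2, 2, v.shape[1] // 2, 2).sum(axis=(1, 3))
        w2 = w.reshape(w.shape[0] // 2, 2, w.shape[1] // 2, 2).sum(axis=(1, 3))
        levels.append((v2, w2))
    # Refinement pass: divide sums by counts; empty cells inherit the
    # mean of their parent tile at the next coarser level.
    coarse = None
    for v, w in reversed(levels):
        mean = np.where(w > 0, v / np.maximum(w, 1), 0.0)
        if coarse is not None:
            up = np.repeat(np.repeat(coarse, 2, axis=0), 2, axis=1)
            mean = np.where(w > 0, mean, up)
        coarse = mean
    return coarse
```

Note how this reproduces the “locally exact” property discussed above: at pixels with data the output equals the measured mean, while data-free pixels receive a smoothed value from the coarsest level that covers them.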
However, MMI lacks the predictive capabilities of machine learning methods, which can make predictions based on the geophysical features of the area and are not limited to linear combinations of observations [73]. Contrary to these methods, it can only detect outliers from a statistical assessment of the interpolated bathymetry, as we performed for the GSJ, although, as with any statistical assessment, this is not a risk-free decision. In any case, performing this statistical assessment of the final interpolated surface using K-fold cross-validation, as in this work, has other advantages, such as the spatialization of the estimated error, which is very important in cases with inhomogeneous surfaces, as in the SRTM simulation, or inhomogeneous sampling, as in the GSJ bathymetry [74]. Another potential weakness relates to using data from different sources without taking into account their different levels of accuracy. In our approach we took these accuracy levels into account only with regard to the vertical reference in the harmonization step (shifting each dataset's reference to match, on average, the previously accepted, more reliable datasets at their crossing points). From there on, we applied the common method of rejecting points far off the general surface trend [21]. Other approaches, such as reweighting the data depending on their distance to the average surface (taking into account, or not, the local cross-validation error), would have gone against our goal of an interpolation method with the fewest assumptions.
Computationally speaking, MMI also has a number of advantages. First, interpolation time is mostly independent of the number of points, as the algorithm runs on the quadtree raster pyramid; hence, only the final raster size determines that time (roughly multiplying it by 4 with every halving of the pixel size). This means it is advantageous when interpolating a large number of data points, as in our bathymetry example: a 3346 × 4928 raster interpolation of 339,874 bathymetric points (padded to 4096 × 8192 pixels for computation) took on average 17 min on an Intel(R) Core(TM) i7-8750H CPU @ 2.20 GHz; the fractal extrapolation took longer: 120 min. Also, being based on local raster operations, it can be adapted to GPU parallel computation (something we have not addressed in our simulations). The fractal extrapolation extension is neither as time-efficient nor as easy to parallelize on the GPU: first, it involves estimating the (multi)fractal distribution parameters, and after that it requires the use of random numbers for the simulation (an issue that has been addressed in other areas, such as Monte Carlo simulations [75], but that nevertheless increases the complexity of the GPU operations).
Our fractal extrapolation is based on the widely explored characterization of the Earth's topography as multifractal [51,53,54]. It takes particular advantage of transect sampling, which has been exploited in the past for fractal characterization [76]. It can be seen as a particular approach to geostatistical simulation that attempts to include complex fine-scale features into (or onto) coarse-resolution DEMs, taking into account the larger-scale spatial height distribution to estimate the smaller ones [48]; our estimation method is parametric, as it assumes a fractal model. Preserving surface roughness, even if simulated, helps terrain classification and regionalization based on geomorphological features, usually computed as focal statistics of the elevation distribution [7,9], followed by terrain classification based on feature distribution across the study area [39,77]; otherwise, smooth interpolated areas would appear as unreal separate classes. From the most basic standpoint of DEM assessment, fractal extrapolation provides a more realistic estimation of error: in areas where the interpolated DEM is totally determined by distant measurements, the error can be underestimated based on error propagation (whether or not an underlying convex formula and Gaussian process are assumed) or on cross-validation. However, simulating a stochastic surface with the same properties observed in measured areas gives a more conservative error estimation. Although our method achieves this, some of the simulated features are too random (due to isotropy) and do not prolong the natural trends observed in the area (see, for example, the southern area in front of Cape Tres Puntas in Figure 3).
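For intuition, the kind of scale-dependent roughness such a simulation injects can be illustrated with classic one-dimensional midpoint displacement [55], where the random displacement amplitude shrinks by a factor 2^(-H) per refinement level. This is a generic monofractal illustration of the principle, not the authors' multifractal estimator; all names and parameters below are illustrative:

```python
import random

def midpoint_displacement(n_levels, hurst=0.8, sigma=10.0, seed=1):
    """1D midpoint-displacement simulation of fractional-Brownian-like
    roughness across a data gap (e.g., between two transect crossings).

    `hurst` controls roughness (lower H => rougher profile); the
    displacement standard deviation is scaled by 2**(-H) per level.
    """
    rng = random.Random(seed)
    profile = [0.0, 0.0]                # known endpoints of the gap
    s = sigma
    for _ in range(n_levels):
        refined = []
        for a, b in zip(profile, profile[1:]):
            refined.append(a)
            # Midpoint = mean of neighbours + random displacement.
            refined.append(0.5 * (a + b) + rng.gauss(0.0, s))
        refined.append(profile[-1])
        profile = refined
        s *= 2.0 ** (-hurst)            # scale-dependent roughness
    return profile
```

After n levels the profile has 2^n + 1 samples, exact at the known endpoints and increasingly rough between them, mirroring how the extrapolation fills voids with statistically plausible relief rather than a smooth surface.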
Future improvements of the MMI algorithm may include extending the generalization window to perform a least-squares approximation of a curved surface, weakening the current assumption of a locally flat surface, and allowing the inclusion of anisotropy in the fractal extrapolation. This would render more realistic groove- and ridge-like features in continuity with the known elevation data [51].

5. Conclusions

In this article, we introduced a multigrid/multiresolution interpolation (MMI) method. The goals of the method are simplicity, both in implementation and in statistical and other assumptions, and scalability, to efficiently interpolate large datasets. The quadtree multigrid raster approach makes the method fast and memory-efficient. This allows the use of K-fold cross-validation to compute local interpolation standard errors, which not only inform about the interpolation quality but also help assess input data quality through outlier detection; this is important when working with heterogeneous data, as in our Gulf of San Jorge bathymetry case study.
The (multi)fractal extrapolation method simulates natural roughness in areas with no data (e.g., between transects). On the one hand, it simulates roughness with the same scale and statistical topographical properties observed in the data (especially in transect data); on the other hand, it provides a more realistic assessment of the DEM K-fold cross-validation uncertainty, based on the well-established multifractal nature of the Earth's relief.
We have applied MMI to synthetic (SRTM elevation model) and real (Gulf of San Jorge bathymetry) DEM interpolation problems, showing how errors depend on sampling strategy and density, and how K-fold cross-validation does a reasonably good job of assessing local and global errors. The results show visually realistic surfaces with varying levels of detail, i.e., no oversmoothing, while also reducing transect and “pimple” artifacts to a minimum across a geomorphologically rich area.

Author Contributions

D.R.-P. developed the mathematical and computational aspects of the article. N.S.-C. obtained the data and performed the initial standardization and cleaning. Both authors contributed to writing and revising the article. All authors have read and agreed to the published version of the manuscript.

Funding

This work was partially supported by the Agencia Nacional de Promoción Científica y Tecnológica (ANPCyT) of Argentina through project PICT 2016-0218.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The code implementing the algorithms described and some sample data can be found in the GitHub public repository https://github.com/daniel-rperez/mrinterp (access date 17 June 2022).

Acknowledgments

The authors would like to thank Jesus San Martin for his suggestions about method validation. The authors would also like to acknowledge the joint project “Fortalecimiento de la Gestión y Protección de la Biodiversidad Costero Marina en Áreas Ecológicas clave y la Aplicación del Enfoque Ecosistémico de la Pesca (EEP)” between the United Nations' Food and Agriculture Organization (FAO) and the Ministerio de Ambiente y Desarrollo Sostenible de la Nación Argentina, GCP/ARG/025/GFF, in whose framework some of the methods presented in this article were tested.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Badura, J.; Przybylski, B. Application of digital elevation models to geological and geomorphological studies—some examples. Przegląd Geol. 2005, 53, 977–983.
  2. Ogania, J.; Puno, G.; Alivio, M.; Taylaran, J. Effect of digital elevation model’s resolution in producing flood hazard maps. Glob. J. Environ. Sci. Manag. 2019, 5, 95–106.
  3. Bove, G.; Becker, A.; Sweeney, B.; Vousdoukas, M.; Kulp, S. A method for regional estimation of climate change exposure of coastal infrastructure: Case of USVI and the influence of digital elevation models on assessments. Sci. Total Environ. 2020, 710, 136162.
  4. Green, J.; Pugh, D.T. Bardsey—An island in a strong tidal stream: Underestimating coastal tides due to unresolved topography. Ocean Sci. 2020, 16, 1337–1345.
  5. Kalbermatten, M.; Van De Ville, D.; Turberg, P.; Tuia, D.; Joost, S. Multiscale analysis of geomorphological and geological features in high resolution digital elevation models using the wavelet transform. Geomorphology 2012, 138, 352–363.
  6. Sofia, G. Combining geomorphometry, feature extraction techniques and Earth-surface processes research: The way forward. Geomorphology 2020, 355, 107055.
  7. Lecours, V.; Dolan, M.F.; Micallef, A.; Lucieer, V.L. A review of marine geomorphometry, the quantitative study of the seafloor. Hydrol. Earth Syst. Sci. 2016, 20, 3207–3244.
  8. Marceau, D.J.; Hay, G.J. Remote sensing contributions to the scale issue. Can. J. Remote Sens. 1999, 25, 357–366.
  9. Newman, D.R.; Cockburn, J.M.; Draguţ, L.; Lindsay, J.B. Evaluating Scaling Frameworks for Multiscale Geomorphometric Analysis. Geomatics 2022, 2, 36–51.
  10. Alcaras, E.; Amoroso, P.P.; Parente, C. The Influence of Interpolated Point Location and Density on 3D Bathymetric Models Generated by Kriging Methods: An Application on the Giglio Island Seabed (Italy). Geosciences 2022, 12, 62.
  11. Hengl, T. Finding the right pixel size. Comput. Geosci. 2006, 32, 1283–1298.
  12. Habib, M.; Alzubi, Y.; Malkawi, A.; Awwad, M. Impact of interpolation techniques on the accuracy of large-scale digital elevation model. Open Geosci. 2020, 12, 190–202.
  13. Farr, T.G.; Rosen, P.A.; Caro, E.; Crippen, R.; Duren, R.; Hensley, S.; Kobrick, M.; Paller, M.; Rodriguez, E.; Roth, L.; et al. The shuttle radar topography mission. Rev. Geophys. 2007, 45, RG2004.
  14. Abrams, M.; Crippen, R.; Fujisada, H. ASTER global digital elevation model (GDEM) and ASTER global water body dataset (ASTWBD). Remote Sens. 2020, 12, 1156.
  15. Tachikawa, T.; Hato, M.; Kaku, M.; Iwasaki, A. Characteristics of ASTER GDEM version 2. In Proceedings of the 2011 IEEE International Geoscience and Remote Sensing Symposium, Vancouver, BC, Canada, 24–29 July 2011; pp. 3657–3660.
  16. Tadono, T.; Ishida, H.; Oda, F.; Naito, S.; Minakawa, K.; Iwamoto, H. Precise Global DEM Generation by ALOS PRISM. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2014, II-4, 71–76.
  17. Abshire, J.B.; Sun, X.; Riris, H.; Sirota, J.M.; McGarry, J.F.; Palm, S.; Yi, D.; Liiva, P. Geoscience Laser Altimeter System (GLAS) on the ICESat Mission: On-orbit measurement performance. Geophys. Res. Lett. 2005, 32, L21S02.
  18. Shuman, C.A.; Zwally, H.J.; Schutz, B.E.; Brenner, A.C.; DiMarzio, J.P.; Suchdeo, V.P.; Fricker, H.A. ICESat Antarctic elevation data: Preliminary precision and accuracy assessment. Geophys. Res. Lett. 2006, 33, L07501.
  19. Hall, J. GEBCO Centennial Special Issue—Charting the secret world of the ocean floor: The GEBCO project 1903–2003. Mar. Geophys. Res. 2006, 27, 1–5.
  20. Weatherall, P.; Marks, K.M.; Jakobsson, M.; Schmitt, T.; Tani, S.; Arndt, J.E.; Rovere, M.; Chayes, D.; Ferrini, V.; Wigley, R. A new digital bathymetric model of the world’s oceans. Earth Space Sci. 2015, 2, 331–345.
  21. Novaczek, E.; Devillers, R.; Edinger, E. Generating higher resolution regional seafloor maps from crowd-sourced bathymetry. PLoS ONE 2019, 14, e0216792.
  22. Li, J.; Heap, A.D. A Review of Spatial Interpolation Methods for Environmental Scientists; Record 2008/23; Geoscience Australia: Canberra, Australia, 2008; p. 137. Available online: http://www.ga.gov.au/servlet/BigObjFileManager?bigobjid=GA12526 (accessed on 17 June 2022).
  23. Li, J.; Heap, A.D. Spatial interpolation methods applied in the environmental sciences: A review. Environ. Model. Softw. 2014, 53, 173–189.
  24. Jiang, Z. A survey on spatial prediction methods. IEEE Trans. Knowl. Data Eng. 2018, 31, 1645–1664.
  25. Yanalak, M. Sibson (natural neighbour) and non-Sibsonian interpolation for digital elevation model (DEM). Surv. Rev. 2004, 37, 360–376.
  26. Shepard, D. A two-dimensional interpolation function for irregularly-spaced data. In Proceedings of the 1968 23rd ACM National Conference, New York, NY, USA, 27–29 August 1968; pp. 517–524.
  27. Chorley, R.J.; Haggett, P. Trend-surface mapping in geographical research. Trans. Inst. Br. Geogr. 1965, 37, 47–67.
  28. Hutchinson, M.F. A new procedure for gridding elevation and stream line data with automatic removal of spurious pits. J. Hydrol. 1989, 106, 211–232.
  29. Van der Meer, F. Remote-sensing image analysis and geostatistics. Int. J. Remote Sens. 2012, 33, 5644–5676.
  30. Maroufpoor, S.; Bozorg-Haddad, O.; Chu, X. Chapter 9—Geostatistics: Principles and methods. In Handbook of Probabilistic Models; Samui, P., Tien Bui, D., Chakraborty, S., Deo, R.C., Eds.; Butterworth-Heinemann: Cambridge, MA, USA, 2020; pp. 229–242.
  31. Kopczewska, K. Spatial machine learning: New opportunities for regional science. Ann. Reg. Sci. 2022, 68, 713–755.
  32. Nikparvar, B.; Thill, J.C. Machine learning of spatial data. ISPRS Int. J. Geo-Inf. 2021, 10, 600.
  33. Kamolov, A.A.; Park, S. Prediction of Depth of Seawater Using Fuzzy C-Means Clustering Algorithm of Crowdsourced SONAR Data. Sustainability 2021, 13, 5823.
  34. Rezaee, H.; Asghari, O.; Yamamoto, J. On the reduction of the ordinary kriging smoothing effect. J. Min. Environ. 2011, 2, 102–117.
  35. Wang, Q.; Xiao, H.; Wu, W.; Su, F.; Zuo, X.; Yao, G.; Zheng, G. Reconstructing High-Precision Coral Reef Geomorphology from Active Remote Sensing Datasets: A Robust Spatial Variability Modified Ordinary Kriging Method. Remote Sens. 2022, 14, 253.
  36. Sánchez-Carnero, N.; Aceña, S.; Rodríguez-Pérez, D.; Couñago, E.; Fraile, P.; Freire, J. Fast and low-cost method for VBES bathymetry generation in coastal areas. Estuar. Coast. Shelf Sci. 2012, 114, 175–182.
  37. Li, Y.; Rendas, M.J. Tuning interpolation methods for environmental uni-dimensional (transect) surveys. In Proceedings of the OCEANS 2015-MTS/IEEE, Washington, DC, USA, 19–22 October 2015; pp. 1–8.
  38. Wang, J.; Zhao, M.W.; Jiang, L.; Yang, C.C.; Huang, X.L.; Xu, Y.; Lu, J. A new strategy combined HASM and classical interpolation methods for DEM construction in areas without sufficient terrain data. J. Mt. Sci. 2021, 18, 2761–2775.
  39. Sánchez-Carnero, N.; Rodríguez-Pérez, D. A sea bottom classification of the Robredo area in the Northern San Jorge Gulf (Argentina). Geo-Mar. Lett. 2021, 41, 1–14.
  40. Ibrahim, P.O.; Sternberg, H. Bathymetric Survey for Enhancing the Volumetric Capacity of Tagwai Dam in Nigeria via Leapfrogging Approach. Geomatics 2021, 1, 246–257.
  41. Perivolioti, T.M.; Mouratidis, A.; Terzopoulos, D.; Kalaitzis, P.; Ampatzidis, D.; Tušer, M.; Frouzova, J.; Bobori, D. Production, Validation and Morphometric Analysis of a Digital Terrain Model for Lake Trichonis Using Geospatial Technologies and Hydroacoustics. ISPRS Int. J. Geo-Inf. 2021, 10, 91.
  42. Liu, K.; Song, C. Modeling lake bathymetry and water storage from DEM data constrained by limited underwater surveys. J. Hydrol. 2022, 604, 127260.
  43. Tran, T.T. Improving variogram reproduction on dense simulation grids. Comput. Geosci. 1994, 20, 1161–1168.
  44. Yaou, M.H.; Chang, W.T. Fast surface interpolation using multiresolution wavelet transform. IEEE Trans. Pattern Anal. Mach. Intell. 1994, 16, 673–688.
  45. Yutani, T.; Yono, O.; Kuwatani, T.; Matsuoka, D.; Kaneko, J.; Hidaka, M.; Kasaya, T.; Kido, Y.; Ishikawa, Y.; Ueki, T.; et al. Super-Resolution and Feature Extraction for Ocean Bathymetric Maps Using Sparse Coding. Sensors 2022, 22, 3198.
  46. Zhang, Y.; Yu, W. Comparison of DEM Super-Resolution Methods Based on Interpolation and Neural Networks. Sensors 2022, 22, 745.
  47. Shekhar, P.; Patra, A.; Stefanescu, E.R. Multilevel methods for sparse representation of topographical data. Procedia Comput. Sci. 2016, 80, 887–896.
  48. Rasera, L.G.; Gravey, M.; Lane, S.N.; Mariethoz, G. Downscaling images with trends using multiple-point statistics simulation: An application to digital elevation models. Math. Geosci. 2020, 52, 145–187.
  49. Zakeri, F.; Mariethoz, G. A review of geostatistical simulation models applied to satellite remote sensing: Methods and applications. Remote Sens. Environ. 2021, 259, 112381.
  50. Blondel, P. Quantitative Analyses of Morphological Data; Springer Geology; Springer: Cham, Switzerland, 2018; pp. 63–74.
  51. Gagnon, J.S.; Lovejoy, S.; Schertzer, D. Multifractal earth topography. Nonlinear Process. Geophys. 2006, 13, 541–570.
  52. Henrico, I. Optimal interpolation method to predict the bathymetry of Saldanha Bay. Trans. GIS 2021, 25, 1991–2009.
  53. Herzfeld, U.C.; Overbeck, C. Analysis and simulation of scale-dependent fractal surfaces with application to seafloor morphology. Comput. Geosci. 1999, 25, 979–1007.
  54. McClean, C.J.; Evans, I.S. Apparent fractal dimensions from continental scale digital elevation models using variogram methods. Trans. GIS 2000, 4, 361–378.
  55. Saupe, D. Algorithms for random fractals. In The Science of Fractal Images; Springer: Berlin, Germany, 1988; pp. 71–136.
  56. Ebert, D.S.; Musgrave, F.K.; Peachey, D.; Perlin, K.; Worley, S. Texturing & Modeling: A Procedural Approach; Morgan Kaufmann: San Francisco, CA, USA, 2003.
  57. Wadoux, A.M.C.; Heuvelink, G.B.; de Bruin, S.; Brus, D.J. Spatial cross-validation is not the right way to evaluate map accuracy. Ecol. Model. 2021, 457, 109692.
  58. Fernández, M.; Roux, A.; Fernández, E.; Caló, J.; Marcos, A.; Aldacur, H. Grain-size analysis of surficial sediments from Golfo San Jorge, Argentina. J. Mar. Biol. Assoc. U. K. 2003, 83, 1193–1197.
  59. Desiage, P.A.; Montero-Serrano, J.C.; St-Onge, G.; Crespi-Abril, A.C.; Giarratano, E.; Gil, M.N.; Haller, M.J. Quantifying sources and transport pathways of surface sediments in the Gulf of San Jorge, central Patagonia (Argentina). Oceanography 2018, 31, 92–103.
  60. Isla, F.I.; Iantanos, N.; Estrada, E. Playas reflectivas y disipativas macromareales del Golfo San Jorge, Chubut. Rev. Asoc. Argent. Sedimentol. 2002, 9, 155–164.
  61. Carbajal, J.C.; Rivas, A.L.; Chavanne, C. High-frequency frontal displacements south of San Jorge Gulf during a tidal cycle near spring and neap phases: Biological implications between tidal states. Oceanography 2018, 31, 60–69.
  62. Sylwan, C.A. Geology of the Golfo San Jorge Basin, Argentina. Geología de la Cuenca del Golfo San Jorge, Argentina. J. Iber. Geol. 2001, 27, 123–158.
  63. Cuitiño, J.I.; Scasso, R.A.; Ventura Santos, R.; Mancini, L.H. Sr ages for the Chenque Formation in the Comodoro Rivadavia region (Golfo San Jorge basin, Argentina): Stratigraphic implications. Lat. Am. J. Sedimentol. Basin Anal. 2015, 22, 13–28.
  64. Martinez, O.A.; Kutschker, A. The ‘Rodados Patagónicos’ (Patagonian shingle formation) of eastern Patagonia: Environmental conditions of gravel sedimentation. Biol. J. Linn. Soc. 2011, 103, 336–345.
  65. St-Onge, G.; Ferreyra, G.A. Introduction to the Special Issue on the Gulf of San Jorge (Patagonia, Argentina). Oceanography 2018, 31, 14–15.
  66. Góngora, M.E.; González-Zevallos, D.; Pettovello, A.; Mendía, L. Caracterización de las principales pesquerías del golfo San Jorge Patagonia, Argentina. Lat. Am. J. Aquat. Res. 2012, 40, 1–11.
  67. De la Garza, J.; Moriondo Danovaro, P.; Fernández, M.; Ravalli, C.; Souto, V.; Waessle, J. An Overview of the Argentine Red Shrimp (Pleoticus muelleri, Decapoda, Solenoceridae) Fishery in Argentina: Biology, Fishing, Management and Ecological Interactions. 2017. Available online: http://hdl.handle.net/1834/15133 (accessed on 17 June 2022).
  68. Sonvico, P.; Cascallares, G.; Madirolas, A.; Cabreira, A.; Menna, B.V. Repositorio de Líneas Batimétricas de las Campañas de Investigación del INIDEP; INIDEP Report ASES 053; INIDEP: Mar del Plata, Argentina, 2021.
  69. Egbert, G.D.; Erofeeva, S.Y. Efficient inverse modeling of barotropic ocean tides. J. Atmos. Ocean. Technol. 2002, 19, 183–204.
  70. Tukey, J.W. Exploratory Data Analysis; Addison-Wesley Publishing Company: Reading, MA, USA, 1977; Volume 2.
  71. Hackbusch, W. Multi-Grid Methods and Applications; Springer Science & Business Media: Berlin, Germany, 2013; Volume 4.
  72. Adelson, E.; Anderson, C.; Bergen, J.; Burt, P.; Ogden, J. Pyramid Methods in Image Processing. RCA Eng. 1983, 29. [Google Scholar]
  73. Sekulić, A.; Kilibarda, M.; Heuvelink, G.; Nikolić, M.; Bajat, B. Random forest spatial interpolation. Remote Sens. 2020, 12, 1687. [Google Scholar] [CrossRef]
  74. Liu, P.; Jin, S.; Wu, Z. Assessment of the Seafloor Topography Accuracy in the Emperor Seamount Chain by Ship-Based Water Depth Data and Satellite-Based Gravity Data. Sensors 2022, 22, 3189. [Google Scholar] [CrossRef] [PubMed]
  75. Manssen, M.; Weigel, M.; Hartmann, A.K. Random number generators for massively parallel simulations on GPU. Eur. Phys. J. Spec. Top. 2012, 210, 53–71. [Google Scholar] [CrossRef] [Green Version]
  76. Malinverno, A. Segmentation of topographic profiles of the seafloor based on a self-affine model. IEEE J. Ocean. Eng. 1989, 14, 348–359. [Google Scholar] [CrossRef]
  77. Wilson, M.F.J.; O’Connell, B.; Brown, C.; Guinan, J.C.; Grehan, A.J. Multiscale Terrain Analysis of Multibeam Bathymetry Data for Habitat Mapping on the Continental Slope. Mar. Geod. 2007, 30, 3–35. [Google Scholar] [CrossRef] [Green Version]
Figure 1. The area of the Gulf of San Jorge, with the delimitation of the land and ocean regions where the MMI algorithm has been tested.
Figure 2. (A) Random points used to sample SRTM90 with p = 0.0156; (B) DEM interpolated from the point samples (f̂_CV); (C) K-fold cross-validation standard error Δf̂_CV. (D–F) Same meaning, respectively, but using transect sampling.
Figure 3. (A) Bathymetric acoustic sounding points and transects in the Gulf of San Jorge; (B) MMI-interpolated bathymetry (f̂_CV); (C) Cross-validation local standard error Δf̂_CV. (D,E) Same as (B,C), respectively, but including fractal extrapolation in the algorithm.
Table 1. K-fold and other statistics of the interpolated DEM using simulated data extracted from SRTM at randomly distributed points (p denotes point density per pixel). MMI was applied without and with fractal extrapolation. The SRTM90 column contains an assessment of the SRTM resampling error based on the original 30 m resolution SRTM (using K-fold cross-validation) for comparison. Values are in meters.

                   Simple interpolation                      Fractal extrapolation                    SRTM90
p =              2^−8    2^−7    2^−6    2^−5    2^−4      2^−8    2^−7    2^−6    2^−5    2^−4
z̄               308.6   308.7   308.7   308.6   308.6     308.6   308.7   308.6   308.6   308.6     308.4
σ_z             184.2   185.1   185.6   185.9   186.2     184.2   185.1   185.7   186.0   186.2     187.2
⟨Δf̂_CV⟩_rms      12.55   10.15    8.23    6.65    5.40     19.33   16.47   15.18   14.58   15.75      2.50
⟨Δf̂⟩             0.044   0.152   0.087   0.045   0.065     0.052   0.182   0.115   0.023   0.088    −0.007
⟨Δf̂⟩_rms        20.23   16.22   13.11   10.54    8.48     20.85   16.80   13.78   11.44    9.90      0.85
IQ_50% Δf̂        8.00    6.53    5.10    4.01    3.13     12.72   10.34    8.58    7.32    7.02      1.25
IQ_90% Δf̂       26.45   21.28   17.24   13.86   11.14     37.88   32.33   28.95   26.51   27.35      5.10
cor(f̂_CV, f)     0.9945  0.9965  0.9975  0.9985  0.9990    0.9944  0.9964  0.9973  0.9984  0.9989    1.000
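The statistics reported in these tables can all be derived from the residuals of K-fold cross-validation. As a minimal sketch (not the authors' code: the residuals below are simulated, and IQ_p is interpreted here as the central p% interquantile range of the residual distribution, in the exploratory spirit of Tukey [70]):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: true heights f and cross-validated predictions
# f_hat at n held-out sample points (both simulated for illustration).
n = 10_000
f = rng.normal(300.0, 180.0, size=n)
f_hat = f + rng.normal(0.05, 12.0, size=n)

residuals = f_hat - f

bias = residuals.mean()                      # <Δf̂>: mean residual
rms = np.sqrt(np.mean(residuals ** 2))      # <Δf̂>_rms: root-mean-square error
# Central interquantile ranges: IQ_50% spans the 25th-75th percentiles,
# IQ_90% the 5th-95th percentiles of the residual distribution.
iq50 = np.quantile(residuals, 0.75) - np.quantile(residuals, 0.25)
iq90 = np.quantile(residuals, 0.95) - np.quantile(residuals, 0.05)
corr = np.corrcoef(f_hat, f)[0, 1]          # cor(f̂_CV, f)
```

Being robust order statistics, the interquantile ranges are less sensitive to the few large extrapolation errors than the rms error, which is why the two columns diverge most where fractal extrapolation is active.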
Table 2. K-fold and other statistics of the interpolated DEM using simulated data extracted from SRTM along random 25 km transects; p denotes the fraction of the raster sampled by the transects. MMI was applied without and with fractal extrapolation. The SRTM90 column shows the SRTM resampling error (see Table 1). Values are in meters.

                   Simple interpolation                      Fractal extrapolation                    SRTM90
p =              2^−8    2^−7    2^−6    2^−5    2^−4      2^−8    2^−7    2^−6    2^−5    2^−4
z̄               298.5   311.0   310.9   307.9   307.8     299.4   311.3   310.9   307.9   307.8     308.4
σ_z             140.0   170.6   180.7   178.4   182.6     139.7   170.2   180.8   178.2   182.5     187.2
⟨Δf̂_CV⟩_rms      73.33   53.39   52.70   31.49   23.98    136.55  109.22   88.23   61.90   44.72      2.50
⟨Δf̂⟩            −9.992   2.467   2.355  −0.616  −0.698    −9.168   2.775   2.403  −0.684  −0.708    −0.007
Var Δf̂         105.56   79.76   55.92   40.38   28.05    111.59   85.59   60.45   43.97   30.68      0.85
IQ_50% Δf̂       49.02   42.17   32.34   20.42   14.99     89.00   78.49   66.26   44.98   31.60      1.25
IQ_90% Δf̂      153.37  108.89  106.08   66.91   50.70    247.59  196.91  172.97  126.35   92.25      5.10
cor(f̂_CV, f)     0.8430  0.9113  0.9589  0.9787  0.9899    0.8398  0.9082  0.9582  0.9780  0.9895    1.000
Table 3. Statistics of the interpolated bathymetry with real data from the Gulf of San Jorge using the MMI interpolation without and with fractal extrapolation. Values are in meters.

p = 0.0181        Simple interpolation   Fractal extrapolation
z̄                       81.32                  81.24
σ_z                     24.06                  24.20
⟨Δf̂_CV⟩_rms              2.02                   9.65
IQ_50% Δf̂_CV             1.28                   4.77
IQ_90% Δf̂_CV             4.08                  22.87
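The fractal-extrapolation columns depend on synthesizing plausible relief in data-depleted areas. The paper's exact scheme is not reproduced in this excerpt; the following is a generic 1-D midpoint-displacement sketch of the random-fractal synthesis family described by Saupe [55], with the function name and the roughness parameter chosen here purely for illustration:

```python
import numpy as np

def midpoint_displacement(n_levels, roughness=0.7, rng=None):
    """1-D midpoint-displacement fractal profile (after Saupe, 1988).

    Each refinement level halves the grid spacing and scales the random
    midpoint displacement by `roughness` (~ 2**(-H) for Hurst exponent H).
    """
    rng = rng or np.random.default_rng()
    z = np.array([0.0, 0.0])  # fixed endpoints of the profile
    scale = 1.0
    for _ in range(n_levels):
        # Displace the midpoint of every current segment by Gaussian noise.
        mid = 0.5 * (z[:-1] + z[1:]) + rng.normal(0.0, scale, size=len(z) - 1)
        out = np.empty(2 * len(z) - 1)
        out[0::2] = z     # keep existing nodes
        out[1::2] = mid   # interleave new midpoints
        z = out
        scale *= roughness
    return z

profile = midpoint_displacement(8, rng=np.random.default_rng(1))
# len(profile) == 2**8 + 1; endpoints remain at their original values
```

Because only midpoints are perturbed, the synthesized relief honors the constrained nodes exactly while adding statistically self-affine detail between them, which is the property that makes such surfaces (and the resulting cross-validation errors) look realistic in unsampled gaps.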
Rodriguez-Perez, D.; Sanchez-Carnero, N. Multigrid/Multiresolution Interpolation: Reducing Oversmoothing and Other Sampling Effects. Geomatics 2022, 2, 236-253. https://doi.org/10.3390/geomatics2030014