Different algorithms have recently been developed for the treatment of LiDAR PCs [4] and especially TLS (Terrestrial Laser Scanner) data [7], entailing a wider use of PCs acquired by photogrammetric systems in several geoscience fields [2]. Examples include a classification algorithm based on a multi-scale dimensionality criterion called “CAractérisation de NUages de POints” (CANUPO) [9], the geomorphological change detection algorithm “Multi-scale Model to Model Cloud Comparison” (M3C2) [14], and four-dimensional (4D) workflows for detecting changes in PCs [11,15]. Notably, Kromer et al. [11] proposed computing the median distances of a series of point clouds acquired at different time steps with respect to a fixed reference, extending the M3C2 algorithm [14] to 4D. Although Kromer et al. [11] and Lague et al. [14] achieved a better level of detection in PC comparison by minimizing point scattering around a central value, they did not improve the PC itself.
Although these algorithms were originally developed with the aim of improving results by taking into account the properties and errors of LiDAR datasets, they can be used with PCs captured by different sensors (LiDAR, sonar, etc.); nevertheless, the particular characteristics of photogrammetric data, such as non-linear and time-variant errors, require the development of new methodologies specifically designed to overcome the constraints of this technique. Interestingly, improving PC quality by stacking multiple low-quality datasets, such as those generated using time-lapse cameras, has not been explored yet.
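To make the multi-temporal (4D) filtering idea of Kromer et al. [11] more tangible, the sketch below shows one simplified way to compute, for each reference point, the median of the signed change values over a temporal window. This is a hypothetical illustration of temporal median filtering only; the actual M3C2/4D algorithm computes distances along local surface normals and includes uncertainty estimation, which is omitted here.

```python
import numpy as np

def temporal_median_change(change_series, window=5):
    """Median-filter per-point change values over a sliding temporal window.

    change_series : (T, N) array of signed distances between each of the T
                    epoch clouds and a fixed reference cloud, for N core points
    window        : number of consecutive epochs combined into each median
    """
    t, n = change_series.shape
    filtered = np.empty((t - window + 1, n))
    for i in range(t - window + 1):
        # The median over the window suppresses scatter around the central value
        # while preserving genuine, persistent surface change.
        filtered[i] = np.median(change_series[i:i + window], axis=0)
    return filtered
```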
1.1. Photogrammetry vs. LiDAR 3D Point Cloud Errors
The transformation of digital images into photogrammetric PCs using the technique called Structure from Motion (SfM) consists of estimating the three-dimensional position of points in space from two-dimensional (2D) images [3]. Nowadays, both commercial and open-source software products are available for this purpose [17]. In addition to the SfM algorithms, these suites incorporate different tools and utilities to improve the quality of the processing, such as the masking of images, the automatic detection of markers, the refinement of camera calibration and the filtering of points based on their quality.
Since the SfM technique is based on iterations of various processes during the bundle adjustment, the obtained PC is one possible solution among many. The quality of the generated PC depends both on the number of homologous points identified and on the quality of the bundle adjustment [18]. For this reason, any repeatability analysis carried out using different images of the same site implies the generation of photogrammetric models with significantly different geometries, due to the different random solutions in both camera position estimation and internal calibration parameters. These geometric differences, which can be called geometric errors, will be more or less significant depending on the quality of the homologous points and of the bundle adjustment, which in turn depends on the quality of the photographs.
This type of error contrasts strongly with the errors observed in LiDAR PCs. LiDAR, whose operation is extensively described in Petrie and Toth [19] and Jaboyedoff et al. [6], generates geometrically consistent models; however, due to its measurement principle, the PCs obtained contain random Gaussian noise distributed over all points [11]. This dispersion, called random error, is comparatively easy to deal with and was the theoretical basis for the development of many algorithms for the processing of LiDAR data [14].
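To illustrate why this kind of random error is straightforward to suppress, the short sketch below (a hypothetical numerical example, not taken from the cited works) simulates repeated LiDAR-like range measurements corrupted by zero-mean Gaussian noise and shows that averaging n repetitions reduces the dispersion by roughly a factor of sqrt(n); the systematic, low-frequency geometric errors typical of photogrammetric PCs do not cancel out in this way.

```python
import numpy as np

rng = np.random.default_rng(42)

true_range = 100.0          # metres, "true" distance to the measured points
sigma = 0.02                # 2 cm random (Gaussian) error per measurement
n_repeats = 25              # number of repeated acquisitions to average

# Simulate repeated noisy measurements of 10,000 independent points.
measurements = true_range + rng.normal(0.0, sigma, size=(n_repeats, 10000))

single_shot_std = measurements[0].std()        # dispersion of one acquisition
stacked_std = measurements.mean(axis=0).std()  # dispersion after averaging n acquisitions

print(f"single acquisition std : {single_shot_std:.4f} m")
print(f"stack of {n_repeats} average std : {stacked_std:.4f} m")
print(f"expected reduction     : 1/sqrt({n_repeats}) = {1 / np.sqrt(n_repeats):.3f}")
```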
Several studies have shown that the use of photogrammetric techniques does not entail a significant loss of accuracy with respect to LiDAR PCs [16]. It should be noted, however, that these studies are mostly based on large-scale or detailed laboratory analyses in which hundreds of photographs and numerous control points were used to generate the models. It is well known that the accuracy of LiDAR data depends mainly on the device used to acquire them. In contrast, the accuracy of photogrammetric data depends on many other factors, such as the camera setup, the weather conditions and the sharpness of the acquired images.
1.2. Landslide Monitoring Using Photogrammetry
The use of digital photogrammetry for landslide monitoring has evolved considerably over the past few years. Photogrammetric monitoring methods can be divided into those using a single camera position combined with Digital Image Correlation (DIC) techniques [24] to detect changes between two images captured at different times [25], and those that use images taken from different positions (multi-view methods) [27]. One advantage of these photogrammetric techniques is that, in addition to the analysis of current changes, they allow old deformations to be studied by scanning analogue photographs taken before the development of the methodology [28].
Model comparisons and difference calculations can be obtained from: (a) one-dimensional (1D) values derived from topographic sections extracted from Digital Terrain Models (DTM) [28]; (b) 2D displacements in the image plane obtained using DIC methods [26]; (c) 3D differences from the comparison of PCs obtained by SfM techniques [27]; and (d) 4D analyses based on multi-temporal PC comparisons [29].
As mentioned by Gabrieli et al. [25], it is possible to obtain 3D information on the displacement of a landslide with a single camera, using complex DIC methods and a reference DTM. Even so, several factors, such as the orientation of the deformation, the range and the magnitude of the movement, have a decisive effect on the accuracy of the results. In addition, classical DIC is strongly influenced by changes in illumination [30] and normally assumes that the main deformation field is parallel to the internal camera coordinate system, which is not always the case. Thus, the deformation field should subsequently be ortho-rectified using a DTM (e.g., Travelletti et al. [26]).
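To make the DIC principle mentioned above concrete, the following sketch (a simplified, hypothetical example, not the implementation used in the cited studies) estimates the integer-pixel displacement of a small template between two grayscale images using normalized cross-correlation, the core operation of classical DIC.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def dic_displacement(img0, img1, top_left, patch=32, search=10):
    """Find the integer-pixel shift of a patch from img0 inside img1.

    img0, img1 : 2D arrays (grayscale images at t0 and t1, co-registered)
    top_left   : (row, col) of the template in img0
    patch      : template size in pixels
    search     : half-size of the search window around the original position
    """
    r0, c0 = top_left
    template = img0[r0:r0 + patch, c0:c0 + patch]
    best_score, best_shift = -np.inf, (0, 0)
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            r, c = r0 + dr, c0 + dc
            if r < 0 or c < 0:
                continue  # candidate window falls outside the image
            candidate = img1[r:r + patch, c:c + patch]
            if candidate.shape != template.shape:
                continue
            score = ncc(template, candidate)
            if score > best_score:
                best_score, best_shift = score, (dr, dc)
    return best_shift, best_score
```

Repeating this matching for a grid of templates yields a 2D displacement field in the image plane which, as noted above, must then be ortho-rectified with a DTM to obtain ground displacements.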
The use of methods based on multi-view techniques for the study of landslides also carries some limitations, as described in Tannant [31] and Cardenal et al. [27]. These studies can be approached from two different angles. The first consists of obtaining a large number of images from different camera positions to provide algorithms with a large amount of data. The second strategy consists of implementing 4D monitoring by using photogrammetric systems with fixed cameras. In this case, the number of images is limited, as it depends directly on the number of cameras installed. These workflows allow the generation of 3D models with a high temporal (sub-daily) frequency, allowing detailed monitoring of deformation as well as 4D analysis.
The most recent studies in the field of geosciences [29] emphasize the importance of two key elements of the SfM process for obtaining good quality results. The first is the need to perform lens calibration to obtain the internal parameters of the camera. These parameters, some of which are highly sensitive, allow the elimination of radial and tangential distortion, correcting the deformations of the resulting models. The second element consists of positioning ground control points (GCP) in order to estimate the fit of the models to the real surface. James et al. [33] developed tools to optimize SfM processes by analyzing the parameters that allow better ground control quality; they processed the output of ground control point and point precision analyses using Monte Carlo approaches. These improvements allow the calculation of 3D uncertainty-based topographic change from precision maps. Santise et al. [34] tried to improve photogrammetric models by reducing image noise through a three-image merge (see Section 1.3). Additionally, Parente et al. [18] demonstrated an improvement in monitoring quality when a fixed-camera approach was adopted, even with a poor camera calibration procedure. However, there are no algorithms or workflows available to improve photogrammetric models via the modification of the calculated PCs based on error reduction strategies.
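As an illustration of how the calibrated internal parameters mentioned above are used, the sketch below applies the standard Brown radial–tangential distortion model to normalized image coordinates. This is a generic textbook formulation with hypothetical coefficient values, not the specific calibration procedure of the cited works.

```python
import numpy as np

def apply_brown_distortion(x, y, k1, k2, k3, p1, p2):
    """Map undistorted normalized image coordinates (x, y) to their
    distorted positions using the Brown radial-tangential model."""
    r2 = x ** 2 + y ** 2
    radial = 1.0 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    x_d = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x ** 2)
    y_d = y * radial + p1 * (r2 + 2.0 * y ** 2) + 2.0 * p2 * x * y
    return x_d, y_d

# Hypothetical calibration coefficients for a low-cost wide-angle lens.
k1, k2, k3 = -0.25, 0.08, 0.0
p1, p2 = 1e-4, -2e-4

x, y = np.meshgrid(np.linspace(-0.5, 0.5, 5), np.linspace(-0.5, 0.5, 5))
x_d, y_d = apply_brown_distortion(x, y, k1, k2, k3, p1, p2)
print(np.max(np.hypot(x_d - x, y_d - y)))  # largest displacement caused by distortion
```

Correcting an image inverts this mapping, typically by iterating the forward model; SfM suites apply this correction internally once the coefficients have been estimated during calibration.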
Although several authors have emphasized the importance of using GCP and lens calibration [2], in some contexts, such as landslide monitoring, this is impossible due to the unfeasibility of installing targets such as GCP on inaccessible slopes. In addition, obtaining calibration parameters that remain fixed over time is difficult on a high-frequency (time-lapse) basis in a fixed area and in remote locations. These limitations, combined with the reduced number of images obtained by fixed camera systems, imply a reduction in the quality of the resulting models compared to those obtained with TLS.
Based on the above, the accuracy and resolution of photogrammetry-derived PCs are strongly controlled not only by internal characteristics (instrument specifications) but also by external considerations (range, number and setup of stations, deformation magnitude, etc.). While TLS is thought to provide a more robust dataset than SfM, no single technology is universally best suited to all situations because of the wide variety of fieldwork setups and instrumental considerations [35]. On the one hand, the limited number of stations and the subsequent occlusions in TLS-derived PCs have been highlighted as one of the main TLS limitations [36]. On the other hand, loss of fine-scale detail due to rounding off or over-smoothing of SfM-derived surfaces on sharp outcrop corners has been observed by several authors (e.g., Wilkinson et al. [35] and Nouwakpo [37]).
Compared to TLS, digital photogrammetry is considered a low-cost landslide monitoring system [2]. In addition, the cost of photographic devices (700 €–3000 €) can be considerably reduced by using very low-cost photographic sensor and lens combinations (50 €), thus producing very low-cost monitoring systems. These systems are based on combinations of photographic microsensors (3.76 × 2.74 mm) and low-quality lenses attached to small microcomputers that are designed to obtain, process and store the acquired images. They are designed essentially to operate autonomously and in remote areas [34] (see Section 2.3.2). These configurations are used because they are easy to obtain, easy to program and can be installed in active areas without concerns about damage due to their low cost. However, the use of these devices implies a lower quality of the photogrammetric models due to the low resolution of the sensor, as well as the poor quality of the lenses. For this reason, when very low-cost photogrammetric systems are used, new methodologies are required to improve the quality of the photogrammetric models, such as the one presented in this article.
1.3. Techniques for Image Stacking (2D)
Using 2D stacking algorithms to enhance digital imagery is a common strategy in several disciplines such as astronomy, computer vision and microscopy; recent examples of astronomical image processing include successful attempts to increase the signal-to-noise ratio (SNR) [39] and the combination of different wavelengths to de-noise imagery of celestial bodies [40]. Image stacking strategies using photographs taken at different f-stops or with the focus point on different parts of the subject (known as “f-stop stacking” or “focus stacking”, respectively) have also been used to extend the depth of field of the composite images in order to overcome blurriness [41]. In addition, various 2D stacking strategies have been tested to derive high-quality imagery from a series of 2D photographs, leading to considerable improvements in photogrammetric models, e.g., when using super-resolution images [42]. However, stacking 2D images under specific conditions might not always entail noteworthy increases in SNR, as reported by Santise et al. [34].
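The basic mechanism behind these 2D stacking strategies can be sketched as follows (a hypothetical minimal example, assuming perfectly co-registered frames of a static scene): averaging n images leaves the signal unchanged while attenuating uncorrelated noise, so the SNR grows roughly with sqrt(n).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical static scene (the "signal") and n noisy time-lapse frames of it.
scene = rng.uniform(0.2, 0.8, size=(256, 256))
n_frames, noise_sigma = 16, 0.05
frames = scene + rng.normal(0.0, noise_sigma, size=(n_frames, 256, 256))

stacked = frames.mean(axis=0)  # simple mean stack of the co-registered frames

def snr(image, reference):
    """Ratio between the reference signal power and the residual noise power, in dB."""
    noise = image - reference
    return 10.0 * np.log10((reference ** 2).mean() / (noise ** 2).mean())

print(f"single frame SNR : {snr(frames[0], scene):.1f} dB")
print(f"stack of {n_frames} SNR  : {snr(stacked, scene):.1f} dB")  # ~10*log10(16) = 12 dB higher
```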
Similarly, diverse image stacking techniques are commonly employed in satellite Interferometric Synthetic Aperture Radar (InSAR) monitoring of ground deformation, as recently reported by Selvakumaran et al. [43]. Indeed, atmospheric noise is filtered when using these strategies, leading to a higher SNR and more accurate DInSAR time series, as pointed out by Manconi [30]. In the same way, 2D stacking plays an important role in other fields such as seismic data processing, improving the overall SNR and quality of seismic data [44]. Although several publications describing stacking techniques on 2D matrices were found in the literature, no publications dealing with the improvement of 3D objects (e.g., PCs) were identified during our literature review.
1.4. Aim and Objectives
The aim of this manuscript is to present and validate a workflow to enhance the monitoring capabilities of time-lapse camera systems by stacking individual 3D point clouds generated from Multi View Stereo (MVS) photogrammetry.
The proposed workflow allows the accuracy of the individual PCs to be improved by getting the most out of the iterative solutions obtained during the bundle adjustment process, a key step in the MVS photogrammetry workflow. More specifically, bundle adjustment resolves an indeterminate system with more unknowns than equations (intrinsic and extrinsic camera parameters vs. number of cameras/perspectives, respectively); thus, multiple solutions in the form of PCs satisfying these equations are possible. Since the average value of the “range” coordinate in a local coordinate system converges for a large enough sample size (i.e., total number of PCs), a gain in precision can be obtained by stacking the PCs and averaging this value, allowing the correction of individual geometric aberrations, as shown below.
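The following sketch outlines the core idea in simplified form (a conceptual illustration under stated assumptions, not the exact implementation presented later): given a series of co-registered PCs of the same static scene, each point of a reference cloud is replaced by the average position of its nearest neighbours in every other cloud, which attenuates the cloud-to-cloud geometric variability. The distance threshold, the averaging of full 3D positions rather than the local “range” coordinate alone, and the use of `scipy.spatial.cKDTree` are choices made only for this illustration.

```python
import numpy as np
from scipy.spatial import cKDTree

def stack_point_clouds(reference, clouds, max_dist=0.05):
    """Average a reference PC with a stack of co-registered PCs.

    reference : (N, 3) array, the cloud whose geometry will be refined
    clouds    : list of (Mi, 3) arrays, co-registered PCs of the same scene
    max_dist  : neighbours farther than this (in model units) are ignored,
                so occluded or changed areas do not contaminate the mean
    """
    sums = reference.copy()
    counts = np.ones(len(reference))
    for cloud in clouds:
        tree = cKDTree(cloud)
        dist, idx = tree.query(reference, k=1)  # nearest neighbour per reference point
        valid = dist < max_dist
        sums[valid] += cloud[idx[valid]]
        counts[valid] += 1
    return sums / counts[:, None]               # per-point mean of homologous positions
```

Averaging (or taking the median of) the stacked positions point by point plays the same role for 3D geometry that frame averaging plays for 2D imagery in Section 1.3.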
The proposed workflow was tested using a synthetic point cloud, created using mathematical functions that attempted to emulate the photogrammetric models, and data collected from a rock cliff located in Puigcercós (Catalonia, NE Spain), using very low-cost photogrammetric systems specially developed for this experiment. This work demonstrates that the proposed workflow is especially well-suited for improving precision when a high temporal sampling procedure can be set up or when low-cost time-lapse camera systems are being used, or both.