Article

Application of UAV Photogrammetry and Multispectral Image Analysis for Identifying Land Use and Vegetation Cover Succession in Former Mining Areas

by Volker Reinprecht * and Daniel Scott Kieffer
Institute of Applied Geosciences, Graz University of Technology and NAWI Graz Geocenter, Rechbauerstraße 12, 8010 Graz, Austria
* Author to whom correspondence should be addressed.
Remote Sens. 2025, 17(3), 405; https://doi.org/10.3390/rs17030405
Submission received: 18 November 2024 / Revised: 3 January 2025 / Accepted: 21 January 2025 / Published: 24 January 2025

Abstract

Variations in vegetation indices derived from multispectral images and digital terrain models from satellite imagery have been successfully used for reclamation and hazard management in former mining areas. However, low spatial resolution and the lack of sufficiently detailed information on surface morphology have restricted such studies to large sites. This study investigates the application of small unmanned aerial vehicles (UAVs) equipped with multispectral sensors for land cover classification and vegetation monitoring. The application of UAVs bridges the gap between large-scale satellite remote sensing techniques and terrestrial surveys. Photogrammetric terrain models and orthoimages (RGB and multispectral) obtained from repeated mapping flights between November 2023 and May 2024 were combined with an ALS-based reference terrain model for object-based image classification. The collected data enabled differentiation between natural forests and areas affected by former mining activities, as well as the identification of variations in vegetation density and growth rates in former mining areas. The results confirm that small UAVs provide a versatile and efficient platform for classifying and monitoring mining areas and forested landslides.

1. Introduction

Unmanned aerial vehicles (UAVs) equipped with multispectral sensors have significantly contributed to the fields of precision agriculture and forest management. High-resolution orthoimages, digital elevation models and georeferenced images from UAV-based mapping missions have been used for crop management, plant counting and classification, vegetation health assessment and prescription planning [1,2,3]. The geometric and radiometric data provided by these platforms allow the machine learning-based identification of anomalies caused by past land use or geomorphological processes [4,5,6]. These technological advances, however, have yet to find broad utilization in the fields of civil engineering and mining, where the focus is usually on vegetation-free areas [7,8,9,10]. Transferring and adapting these developments can significantly advance site investigation, mine reclamation and landslide mapping by integrating ecological information into the monitoring and hazard management processes [11,12,13,14,15]. Additional information on vegetation state can be used to monitor the recovery process of former mining areas and identify geotechnical hazards under vegetation cover.
Slope failures of pit walls or mine dumps represent a substantial threat that extends beyond the operational life of a mine, impeding the restoration process and hindering the sustainable reuse of mining sites [16,17,18]. Systematic monitoring of vegetation succession on former landslides and mining sites provides insight into the regrowth process that can be used to differentiate recently active from dormant zones within a region [11,12,19]. Satellite and aerial images have been successfully employed to detect forest disturbances or monitor landslides under dense tree cover. These methods relied either on time series of vegetation indices or on SAR images to identify disturbances in the vegetation or succession patterns following landslides or windthrow events [20,21,22,23]. However, the low spatial resolution of satellite imagery and the lack of reference elevation models have limited their application to large events. The compact size and high-resolution imagery of small drones equipped with multispectral sensors allow for localized studies of vegetation recovery on reclamation areas or landslides. Clouds, shadows or obstructions that affect satellite and, to a lesser extent, aerial images can be largely avoided due to the low flight altitude and flexible flight planning [24,25]. The acquired geometric and radiometric information also aids the analysis of vegetation response during mine reclamation or after landslide events [12,26,27].
This study evaluates the use of small UAVs (<1000 g) equipped with multispectral and RGB sensors for land cover analysis in forested areas. The hypothesis is that UAVs can provide datasets with higher accuracy, flexibility and representativeness than those obtained by satellite- or aircraft-mounted sensors. The main investigative goals are to determine the optimal UAV setup, identify different land cover types in former mining areas and assess vegetation recovery patterns.
Four mapping flights were performed with a DJI Mavic 3M UAV platform (M = multispectral; SZ DJI Technology Co. Ltd., Shenzhen, China) between November 2023 and May 2024 to find the optimal flight setup and period for data collection. The resulting terrain models and orthoimages (RGB, green, red, red-edge and near-infrared channels) were combined with airborne laser scanning (ALS) terrain models from the national survey for object-based image analysis (OBIA). The classification workflow was designed with open-source software to implement additional machine learning and optimization methods.

2. Study Site

The quarry at Gossendorf (Styria, Austria) was a source of “Trass”, a metasomatic alteration product of volcanic rocks used as a raw material for cement due to its pozzolanic properties [28,29]. It was active from 1948 to 2003 and was decommissioned in 2008 [29]. Since that time, the former mining area has progressively been overgrown by trees and shrubs (Figure 1). The quarry is located on the northern slope of the “Bschaidkogel” and has a total height of ~80 m with four mining levels and an interior size of approximately 400 × 300 m.
The mining activities were affected by repeated landslides occurring along the southwestern pit wall, resulting in a circular-shaped area with typical hummocky landslide morphology (Figure 1B). The study site includes landfills (mine waste dumps), cut slopes and an untreated landslide. The landfills and cut slopes have been overgrown by a mixed forest consisting of coniferous and deciduous trees. The compact size and the diverse landforms make it an ideal site to evaluate mapping techniques for land cover classification in forested areas.

3. Investigative Methods

3.1. UAV Mapping Flights

The DJI Mavic 3M (M3M) is a small UAV with a maximum takeoff weight of 1050 g, released in late 2022 [30,31]. The device is equipped with a gimbal housing an RGB camera with a 4/3″ CMOS sensor (20 MPx) and four smaller cameras with 1/2.8″ CMOS sensors (5 MPx). The smaller sensors record the reflectance of the green (560 ± 16 nm), red (650 ± 16 nm), red-edge (730 ± 16 nm) and near-infrared (860 ± 26 nm) wavelengths separately. The RGB camera has an equivalent focal length of 24 mm and a mechanical shutter, while the multispectral cameras have an equivalent focal length of 25 mm and an electronic shutter. The UAV uses an upward-oriented sunlight sensor to record the solar irradiance and provide a scale factor for correcting the measured reflectance values [32]. An additional RTK antenna was mounted on top of the Mavic 3M to connect it to the APOS NTRIP service, allowing for a positioning accuracy of ±5 cm [33,34]. Four UAV flights were performed from November 2023 to May 2024 (each flight between 11:00 and 12:00). The flight missions were programmed with DJI-Pilot 2, using an “oblique” mission setup for the initial flight (November 2023) and a “nadir” mission setup (top-down view) for all subsequent flights. The oblique mapping setup splits the image acquisition process into a series of cross-aligned tracks that include both nadir and oblique photographs to increase the model coverage and quality [35,36]. The second mission (December 2023) was used to evaluate potential benefits or shortcomings of 8-bit images over the standard 16-bit files. All images were captured at an elevation of 100 m above ground level defined by a digital elevation model (“terrain follow mission”), with the aircraft speed limited to 7 m/s to avoid motion blur and reduce the rolling shutter effect on the multispectral dataset [37].
The images were processed into digital terrain models and orthoimages with the photogrammetric modeling software “Agisoft Metashape Professional” (version 1.8.1) [38], following established guidelines for RGB and multispectral datasets [39,40]. Prior to processing, the image coordinates were converted from WGS84 to the local Cartesian reference coordinate system (Gauss-Krueger M34, EPSG:31256). The camera position accuracy was specified as 0.1 m [34], and the image coordinate accuracy was set to 0.5 pix during alignment, using 40,000 keypoints and 4000 tie points per image. Sunlight sensor recordings were used to correct the image reflectance, but this correction was not suited to the 8-bit images, as it resulted in a significant reduction in the number of tie points (<10% remaining points). The bundle adjustment was optimized for the parameters focal length (f), principal point offset (cx, cy) and radial distortion (k1, k2), including a rolling shutter correction for the multispectral images [37,41]. The photogrammetric models were further processed into dense point clouds and rasterized to digital surface models (pDSM) and orthoimages with a resolution of 0.05 × 0.05 m. The multispectral orthoimage (MS-O) and the photogrammetric digital surface model derived from the RGB camera were then used to extract features for classification, while the RGB orthoimage (RGB-O) was utilized for segmentation.
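The WGS84-to-Gauss-Krueger conversion of the camera positions can be scripted prior to the Metashape import. The following is a minimal sketch using pyproj; it is not the original processing script, and the file and column names are hypothetical.

```python
# Illustrative sketch (not the original processing script): reproject camera
# reference positions from WGS84 (EPSG:4326) to Gauss-Krueger M34 (EPSG:31256)
# before importing them into Metashape. File and column names are assumptions.
import pandas as pd
from pyproj import Transformer

transformer = Transformer.from_crs("EPSG:4326", "EPSG:31256", always_xy=True)

cams = pd.read_csv("camera_positions_wgs84.csv")  # hypothetical export: label, lon, lat, alt
cams["x"], cams["y"] = transformer.transform(cams["lon"].values, cams["lat"].values)
cams[["label", "x", "y", "alt"]].to_csv("camera_positions_gk_m34.csv", index=False)
```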

3.2. Object Based Image Classification and Analysis Workflow

Classification methods focusing on individual pixels are prone to noisy results and false classifications when applied to datasets with a spatial resolution smaller than the objects of interest. OBIA uses image segments (“objects”) instead of individual pixels for classification. This method is therefore less susceptible to varying illumination, shadows or local occlusions that occur in high-resolution datasets [42,43,44,45,46]. The image segments are used to combine spectral and geometric information from multiple remote sensing data sources (elevation models, optical imagery, radar imagery), with zonal statistics serving as input to the classification process. OBIA has been successfully employed to identify landslides, vegetation types or debris-covered rock glaciers using high-resolution satellite imagery or drone images [47,48,49].
The segmentation settings were tested on the initial flight datasets using the Felzenszwalb, Quickshift, Watershed and SLIC algorithms, as implemented in the Scikit-Image library [50,51,52,53]. The algorithm that provided the best balance between computation time, model accuracy and segment size was selected for the actual classification. Morphometric parameters, including differential DEM height (dDEM), differential DSM height (dDSM), curvature, slope and roughness, were calculated from the RGB-based pDSM [27,54]. ALS terrain models from 2009 (resolution 1 × 1 m), provided by the provincial government of Styria, were used as reference datasets for the alignment of the digital surface models and the calculation of dDEMs and dDSMs. The RGB-based UAV surface model was utilized due to its higher resolution and enhanced DEM quality. The morphometric indices “slope”, “curvature” and “roughness” were calculated using the Zevenbergen–Thorne and Horn methods [55,56]. The vegetation indices NDVI, NDRE, NDWI, CI and BI were employed for training instead of the individual band information from the multispectral orthoimages (MS-O) to reduce the impact of overexposure and camera sensitivity. These indices comprise band combinations of all available image bands in the dataset (except the “panchromatic” RGB) and are correlated with plant health, phenotype and environmental conditions [32,57]. All data sources and their application, as well as the extracted morphometric and spectral features, are summarized in Table 1.
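As an illustration of this feature engineering, the vegetation indices can be derived directly from the multispectral orthoimage bands with Rasterio and NumPy. The sketch below assumes a hypothetical file name and a band order of green, red, red-edge and near-infrared; the remaining indices (NDWI, CI, BI) follow analogously.

```python
# Minimal sketch: NDVI and NDRE from the multispectral orthoimage (MS-O).
# Assumed band order: 1 = green, 2 = red, 3 = red-edge, 4 = near-infrared.
import rasterio

with rasterio.open("ms_orthoimage.tif") as src:
    red, rededge, nir = (src.read(b).astype("float32") for b in (2, 3, 4))
    profile = src.profile

eps = 1e-6  # guard against division by zero over no-data pixels
ndvi = (nir - red) / (nir + red + eps)
ndre = (nir - rededge) / (nir + rededge + eps)

profile.update(count=2, dtype="float32")
with rasterio.open("vegetation_indices.tif", "w", **profile) as dst:
    dst.write(ndvi, 1)
    dst.write(ndre, 2)
```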
Training location selection and map design were performed with QGIS (version 3.38.3), which provides an advanced graphical user interface for the selection of appropriate training points [58]. The classes (1) “Ground” (bare earth, paths and forest ground), (2) “Slope” (mine slopes with sparse vegetation), (3) “Forest, natural” (forest surrounding the quarry, mostly deciduous trees) and (4) “Forest, artificial” (forest areas within the quarry, mixed coniferous and deciduous trees) were selected for training. These classes allow the differentiation between former mining areas that are now covered by the mixed forest (forest, artificial), mine slopes and the surrounding forest. The training points (35 per class, total: 140) for each class were obtained from existing reference maps representing the quarry in 1961, 1978, 1982 and 2008 [29] and the digital terrain model (Figure 2). The position of the training points was kept constant over all epochs to analyze the effect of growing vegetation on the model results. Variations in NDVI, NDRE, dDEM and dDSM over time were analyzed within the landslide area, the former mining areas (mine and mine dump) and the natural forest to investigate differences in vegetation cover and the growth process.
The necessary processing steps for UAV products involve co-registration, resampling, segmentation and model training, which are not completely implemented in common GIS applications. Consequently, a Python-based (version 3.11.11) approach was favored, given its capacity to execute all the requisite steps within a single processing script, which can be readily adapted to future tasks [45,59]. The preprocessing and classification framework combines libraries for data processing (NumPy, version 1.26.4 and Pandas, version 2.2.2), spatial data handling (Rasterio, version 1.4.1; GeoPandas, version 1.0.1; GeoUtils, version 0.1.9; and xDEM, version 0.0.20), image processing (OpenCV, version 4.10.0 and Scikit-Image, version 0.24.0) and machine learning (Scikit-Learn, version 1.6.0) [60,61,62,63,64,65,66,67,68]. The processing steps are enumerated below and summarized in Figure 3.

3.2.1. Data Preparation and Feature Engineering (Step 1)

All datasets were transformed to a common reference coordinate system (EPSG-Code: 31256), then clipped and resampled to matching resolutions of 0.1 m. The pDSM was co-registered to the ALS reference terrain model (rDTM) using the Nuth–Kääb method and a hand-drawn ground mask that only includes the ground at the mining entrance [65,69]. The pDSMs and the reference elevation models (rDTM, rDSM) were used to calculate the height above the reference DTM (dDEM), the height above the DSM (dDSM) and morphometric indices. The RGB-based pDSM was selected over the multispectral dataset due to the larger camera sensor and mechanical shutter providing superior geometric results and enhanced DEM quality.
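A minimal sketch of this co-registration step is given below, using the xDEM and GeoUtils APIs (which differ slightly between versions). File names and the stable-ground mask layer are hypothetical, and the sketch does not reproduce the study's exact settings.

```python
# Sketch of Step 1: Nuth-Kaab co-registration of the photogrammetric DSM (pDSM)
# to the ALS reference DTM (rDTM) and calculation of the dDEM. File names and
# the stable-ground mask are assumptions; the xDEM API may differ by version.
import geoutils as gu
import xdem

ref_dtm = xdem.DEM("als_reference_dtm.tif")                # ALS reference terrain model (rDTM)
pdsm = xdem.DEM("uav_pdsm_epoch1.tif").reproject(ref_dtm)  # RGB-based photogrammetric DSM

ground = gu.Vector("ground_mask.gpkg")                     # hand-drawn stable-ground polygon
inlier_mask = ground.create_mask(ref_dtm)

coreg = xdem.coreg.NuthKaab()
coreg.fit(ref_dtm, pdsm, inlier_mask=inlier_mask)
pdsm_aligned = coreg.apply(pdsm)

ddem = pdsm_aligned - ref_dtm                              # height above the reference DTM (dDEM)
ddem.save("ddem_epoch1.tif")
```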

3.2.2. Image Segmentation and Zonal Statistics (Step 2)

The RGB orthoimage (RGB-O) was segmented using the Felzenszwalb–Huttenlocher implementation from the Scikit-Image library. This algorithm uses intensity and color information to maximize the variance between neighboring segments and minimize the variance within each segment [51]. The segmentation results were visually evaluated by comparing the geometry to real-world features (e.g., trees, pathways, bushes) present on the orthoimages (Figure 2B). Zonal statistics (mean, standard deviation, minimum and maximum) of the corresponding features (e.g., morphometric and radiometric indices) within the areas enclosed by each segment were extracted as input data for the image classification. Highly correlated features with a Pearson correlation coefficient of >0.90 were removed from the feature space to avoid overfitting and improve the model robustness [59].
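The segmentation and zonal-statistics extraction can be sketched as follows. The Felzenszwalb parameters and file names are illustrative placeholders rather than the tuned values used in the study, and all rasters are assumed to share the common grid described in Step 1.

```python
# Sketch of Step 2: segment the RGB orthoimage and extract zonal statistics of
# one example feature (NDVI) per segment. Parameters and file names are assumed.
import numpy as np
import pandas as pd
import rasterio
from skimage.segmentation import felzenszwalb

with rasterio.open("rgb_orthoimage.tif") as src:
    rgb = np.moveaxis(src.read([1, 2, 3]), 0, -1)          # (rows, cols, 3)

segments = felzenszwalb(rgb, scale=100, sigma=0.8, min_size=50)

with rasterio.open("vegetation_indices.tif") as src:
    ndvi = src.read(1).astype("float32")                   # same grid as the RGB orthoimage

pixels = pd.DataFrame({"segment": segments.ravel(), "ndvi": ndvi.ravel()})
stats = pixels.groupby("segment")["ndvi"].agg(["mean", "std", "min", "max"])
stats.columns = [f"ndvi_{c}" for c in stats.columns]

# In the full workflow this is repeated for every morphometric and spectral
# feature; columns with a pairwise Pearson correlation > 0.90 are then dropped.
corr = stats.corr().abs()
upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
to_drop = [col for col in upper.columns if (upper[col] > 0.90).any()]
features = stats.drop(columns=to_drop)
```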

3.2.3. Training Point Setup (Step 3)

Training points were mapped with QGIS using the UAV datasets and orthoimages provided by the government of Styria (https://data.steiermark.at; accessed on 2 January 2025). The points were spatially joined with the corresponding segments, including a buffer with a diameter of 0.5 m. The implemented buffer combines adjacent segments and avoids the selection of small single segments caused by inhomogeneous illumination or occlusions during data acquisition. The selection of the segments within this buffer distance increased the number of segments from 140 to approximately 450. The segmentation process yielded variable patch sizes between epochs and classes, resulting in an imbalanced sample distribution. To address the impact of class imbalance on model performance, stratified cross-validation and the incorporation of class weights during model training were employed [59].
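The buffered spatial join can be expressed with GeoPandas as in the sketch below; it assumes the segments were vectorized to polygons and that the training points carry a class column, and the file names are hypothetical.

```python
# Sketch of Step 3: attach class labels to all segments that intersect a 0.5 m
# (diameter) buffer around each training point. File/column names are assumed.
import geopandas as gpd

segments = gpd.read_file("segments.gpkg")            # polygonized image segments
points = gpd.read_file("training_points.gpkg")       # 140 labelled points, column "class"

buffered = points.copy()
buffered["geometry"] = points.geometry.buffer(0.25)  # 0.25 m radius = 0.5 m diameter

training_segments = gpd.sjoin(segments, buffered[["class", "geometry"]],
                              how="inner", predicate="intersects")
print(len(points), "points ->", len(training_segments), "training segments")
```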

3.2.4. Model Training and Evaluation (Step 4)

Before the training process was initiated, the feature dataset was split into a training dataset and a holdout dataset using a ratio of 80/20 (training/holdout). The holdout dataset was kept separate to test the final model performance on data that were completely excluded from the training and optimization process [70]. The random forest classifier was chosen for the model training and classification steps because it tends to provide good classification results even on high-dimensional input data and provides a feature importance rating to discriminate the input features [59,71,72]. This classifier uses randomized subsets of the input data to create multiple decision trees and aggregates their predictions by majority vote. Approximately two thirds of the samples (in-bag samples) are used for training each tree, while the remaining (out-of-bag) samples are kept for validation and model optimization. This approach increases the robustness of the classification and provides a rating of the input parameters for model optimization [70]. The best combination of hyperparameters for the classifier was determined during a 10-fold stratified cross-validation process using the global model accuracy as the quality metric. The number of estimators, the estimator depth and the class weight were varied during the validation process. Stratified cross-validation preserves the percentage of samples of each class when the data are split, ensuring that each class is evenly represented during training and reducing the impact of imbalanced samples [59,70]. During cross-validation, the training dataset is split into k subsets, with k − 1 subsets used for training and the remaining subset used for validation. For a 10-fold cross-validation (k = 10), each fold thus uses 90% of the dataset for training and 10% for validation.
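A condensed sketch of this training setup with Scikit-Learn is shown below; the parameter grid, random seed and hypothetical feature table are illustrative and do not reproduce the study's exact configuration.

```python
# Sketch of Step 4: 80/20 train/holdout split and a 10-fold stratified
# cross-validated hyperparameter search for the random forest classifier.
# The feature table is a hypothetical export from Steps 2 and 3.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, StratifiedKFold, train_test_split

table = pd.read_csv("training_segment_features.csv")      # zonal statistics + "class" column
X, y = table.drop(columns="class").values, table["class"].values

X_train, X_hold, y_train, y_hold = train_test_split(
    X, y, test_size=0.20, stratify=y, random_state=42)

param_grid = {
    "n_estimators": [100, 300, 500],        # number of estimators
    "max_depth": [None, 10, 20],            # estimator depth
    "class_weight": [None, "balanced"],     # class weighting
}
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=42)
search = GridSearchCV(RandomForestClassifier(random_state=42),
                      param_grid, cv=cv, scoring="accuracy", n_jobs=-1)
search.fit(X_train, y_train)

print("best parameters:", search.best_params_)
print("holdout accuracy:", search.best_estimator_.score(X_hold, y_hold))
```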
The global performance of the model was evaluated using the accuracy score, the kappa score and weighted recall and precision, obtained during an additional 10-fold cross-validation run with the optimized hyperparameter combination. A global model accuracy of >0.85 and a kappa score of >0.8 were set as target values for the global model performance [70,73,74]. The results for each fold were concatenated into a combined confusion matrix representing the total classification results of the cross-validation process. This combined matrix and the corresponding accuracy, precision, recall, F1-score and balanced accuracy score were used to check the performance on the individual classes and assess the impact of the data splitting process [59,70]. Additionally, the model performance on previously unseen data was checked by comparing the cross-validation results with the results on the holdout dataset. The trained model was finally applied to the full feature dataset and the classification results were exported as a GeoPackage file.
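The pooled per-fold evaluation can be sketched as a continuation of the previous block, reusing its search, cv, X_train and y_train objects; the metric calls are standard Scikit-Learn functions.

```python
# Continuation of the previous sketch: pool the predictions of all ten folds
# into a single confusion matrix and derive the class-wise quality metrics.
from sklearn.metrics import (balanced_accuracy_score, classification_report,
                             cohen_kappa_score, confusion_matrix)
from sklearn.model_selection import cross_val_predict

y_cv = cross_val_predict(search.best_estimator_, X_train, y_train, cv=cv)

print(confusion_matrix(y_train, y_cv))                     # combined confusion matrix
print("kappa:", round(cohen_kappa_score(y_train, y_cv), 3))
print("balanced accuracy:", round(balanced_accuracy_score(y_train, y_cv), 3))
print(classification_report(y_train, y_cv))                # per-class precision/recall/F1
```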

3.2.5. Post Processing and Analytics (Step 5)

The post-processing and analysis steps in QGIS involved the removal of no-data segments based on the elevation or multispectral attributes. The remaining segments were converted into a raster dataset using the class as a burn-in value and filtered with a majority filter (kernel size of 5 × 5 pixels) to remove small, isolated segments and smooth the segment edges. Additionally, the mean NDVI, NDRE, height above ground (dDEM) and height above the DSM (dDSM) for each land use zone (“mine”, “mine dump”, “landslide” and “natural forest”) were extracted to investigate the variation in these features over time.
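Outside QGIS, the rasterization and majority filtering can also be scripted. The sketch below uses Rasterio and a generic SciPy filter as a stand-in for the QGIS majority filter; file names, column names and the nodata convention are assumptions.

```python
# Sketch of Step 5: burn the predicted class into a raster aligned with the
# orthoimage grid and smooth it with a 5 x 5 majority filter. File names,
# column names and the nodata convention (0) are assumptions.
import geopandas as gpd
import numpy as np
import rasterio
from rasterio import features
from scipy.ndimage import generic_filter

classified = gpd.read_file("classified_segments.gpkg")     # polygons with integer "class_id"

with rasterio.open("rgb_orthoimage.tif") as src:
    profile, transform = src.profile, src.transform
    shape = (src.height, src.width)

class_raster = features.rasterize(
    zip(classified.geometry, classified["class_id"]),
    out_shape=shape, transform=transform, fill=0, dtype="uint8")

def majority(window):
    """Most frequent class value within the 5 x 5 window."""
    return np.bincount(window.astype(int)).argmax()

smoothed = generic_filter(class_raster, majority, size=5)  # slow but illustrative

profile.update(count=1, dtype="uint8", nodata=0)
with rasterio.open("landcover_classes.tif", "w", **profile) as dst:
    dst.write(smoothed.astype("uint8"), 1)
```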

4. Results

4.1. Photogrammetric Modeling and Image Segmentation Performance

The flight conditions, the ground sampling distance (GSD) and the reprojection error (RE) are summarized in Table 2. The average reprojection error was between 0.55 pixels (RGB) and 0.60 pixels (multispectral) (Table 2). The relatively high reprojection errors compared to the recommendation defined by the USGS [40] can be attributed to the dense vegetation cover, resulting in obstructed areas, shadows and the unavoidable movement of branches and leaves. As indicated in Table 2, the benefit of the oblique images to the RE was very limited, even though about twice as many images were collected. To increase processing speed and reduce flight time and file size, a nadir-oriented setup was therefore used for all missions after the initial flight.
The optimal segmentation method was selected from the algorithms included in the Scikit-Image library (version 0.24.0) during an initial test run on the dataset from epoch 1 (Table 3). The Felzenszwalb–Huttenlocher algorithm provided the best balance between processing runtime and global cross-validation accuracy. The remaining algorithms resulted either in more than seven times longer processing times or in inferior model accuracy. The resulting average segment size of 9.5 pix (0.95 m²) corresponds to approximately 13–30% of the typical tree crown dimension in the project area (diameter ~2–3 m, Figure 2B). The combination of these small segments with a buffer distance of 0.5 m around each sampling point resulted in patch groups that were able to characterize the terrain classes (“ground”; “slope”) as well as the forest classes (“forest, artificial”; “forest, natural”) during the classification.

4.2. Classification Results

The model demonstrated good performance in categorizing the terrain classes “ground” and “slope” during the initial three epochs (November 2023 to April 2024) but encountered difficulties in differentiating between the natural and artificial forest (Figure 4). The model performance varied between the flights, providing the best results on the dataset for epoch 3 (April, Figure 4C) and the poorest results for epoch 4 (May, Figure 4D). The feature importance ranking from the random forest classifier identified the morphological features (dDEM, dDSM, curvature, roughness) as the most relevant classification parameters, followed by the NDVI, the NDRE and the BI.
The OBIA-based classification was able to identify all four classes in each epoch, achieving global model accuracy between 0.87 and 0.97 and a kappa score between 0.82 and 0.96 (Figure 5A). As with the photogrammetric modeling, the classification results provided by the oblique mapping setup did not yield any beneficial effects. The scatter of the cross-validation parameters (Figure 5A) and the differences between validation and holdout datasets suggest a possible model bias caused by the varying quality of the training and validation segments, with a higher impact during epochs 2 and 4 (Figure 5B,C). This effect is also evident in the lower balanced accuracy observed in epoch 4, where the increasing vegetation density limited the visibility of the reference points. These results were partly anticipated due to the spectral similarities between the terrain and forest classes and the seasonal variation in the vegetation cover. Precision and recall were balanced in all epochs, with only a few exceptions. The classes “ground” and “natural forest” were susceptible to underclassification (type 2 error, low recall dominates), while the class “slope” exhibited a tendency towards overclassification (type 1 error, low precision dominates), especially during epoch 4.
The best results were obtained during epoch 3 (April), just prior to the full closure of the forest leaf cover. For this epoch, the scatter of performance parameters during cross-validation was minimal and all class performance metrics were close to 0.9. During the final epoch (May 2024), the terrain visibility was limited by dense vegetation cover, leading to decreased model performance in all classes and a wide scatter during the cross-validation process (Figure 5A). Problematic areas occurred in the sparse forest along the southeastern border of the mapping area (Figure 4; Zone A2), where the effects of shadows and model boundaries were particularly evident. The former mine dump west of the mine entrance was never actively used for earthworks due to the local terrain and was only cleared for minor access roads (Figure 4; Zone B1). As a result, the forest cover in this area remained largely unchanged, reflecting the natural forest conditions.

4.3. Temporal Variations in NDVI and Growth Height

The spatial distribution of the classes “forest, artificial”, “ground” and “slope” agrees with the boundaries of areas affected by former mining activities. The “artificial forest” class covers the benches of the pit mine and extends in a northeastern direction to the dumping site (Figure 2A and Figure 4). The “slope” class occurs along the benches and the landslide headscarp and as a series of incorrectly classified patches at the eastern boundary of the mapping area (“X” in Figure 4). This misclassification was caused by individual trees (high local relief), shadows and model boundary effects.
The landslide forms a rotational slump that originated in the forest and slid as a detached block into the mining area [75]. The interior of the slide is covered with the “natural forest” class, while “artificial forest” grows along cracks and ridges within the slide mass (~480 m contour line; Figure 4, zone B3). This results in a heterogeneous and patchy vegetation cover on the slide, consisting of a mixture of shrubs and trees, that differs from the surrounding forest. These differences in vegetation cover resulted in distinct temporal variations in the vegetation indices (NDVI, NDRE) and morphology (growth height) between the former mining areas, the landslide and the natural forest (Figure 6).
In the former mining areas (“mine”, “mine dump”), the NDVI and NDRE were generally higher during the winter period due to the dominance of coniferous trees and bushes. However, these indices reached lower maxima due to the presence of vegetation-free zones. The NDRE exhibited higher sensitivity throughout the growing period, making it more suitable than the NDVI for analyzing temporal variations. This finding is consistent with previous studies that have identified red edge-based indices as more robust in areas covered by dense vegetation or forests [76,77].
The average heights above the digital elevation model (dDEM) and the digital surface model (dDSM) are influenced by the canopy height and vegetation cover density, which in turn affect the visible ground in the photogrammetric models [12,78]. Temporal variations in dDEM and dDSM within the former mining areas reflect the closure of the leaf cover and the growth of shrubs at the base of the mine and on the benches (Figure 4). Within the landslide area, the elevation differences between the pDSM and the reference elevation models exceed those in the natural forest zones, while the vegetation indices and the growth height increase slightly more slowly between April and May. This pattern is caused by the mixed forest and shrub cover, combined with several exposed bare soil areas within the landslide.

5. Discussion

5.1. Flight Setup and Optimal Mapping Season

The DJI M3M can provide relevant radiometric and geometric classification parameters (canopy height, NDRE, NDVI) during a single flight mission. The quality of the resulting datasets was significantly influenced by the flight parameters, the season and the illumination. Flight setups that employ nadir image collection at altitudes between 100 and 120 m above ground provide terrain models and orthoimages with sufficient ground resolution (~5 cm/pix) for classification tasks in forested areas. This setup utilizes the permitted flight altitude of up to 120 m in the most accessible OPEN category, according to the UAV regulations in the European Union [79]. Lower altitudes would improve the spatial resolution but would result in increased data size, processing cost and flight duration. Flight setups at lower heights would therefore be beneficial for individual plant identification and counting tasks but could also result in more noise in forested areas, where moving branches and leaves cannot be avoided [1,3,80].
The achieved reprojection error (RE) ranged from 0.4 to nearly 1.0 pix for both camera models, with the largest error occurring on the 8-bit multispectral dataset (epoch 2, December 2023). An RE above the USGS-recommended threshold of 0.3 pix is typical for photogrammetric models in forested areas [40,81]. The combined effects of illumination conditions, seasonality and data type (16-bit vs. 8-bit) had a greater influence on the results than the photogrammetric model quality. Consequently, for land cover classification, a less restrictive evaluation of model quality is acceptable, with an RE between 0.5 and 0.8 still being adequate. Despite the availability of modeling approaches that mitigate the rolling shutter effect, it remains necessary to keep the flight speed below 10 m/s to avoid distortion of the multispectral images caused by the rolling shutter [37].
The optimal period for data collection in temperate zones was found to be at the beginning and end of the vegetative season (March to April and October to November). This timeframe provides the highest variance of the vegetation indices and offers the best compromise with respect to ground visibility. Although the winter season provides enhanced ground visibility, a certain amount of variation in the vegetation cover is needed for classification purposes. Therefore, mapping flights during the winter season (leaf-off phase) are generally not recommended for classification. Overcast days are preferable to sunny conditions as they reduce the effects of shadows and avoid overexposure. Images with 8-bit data should not be used, as they negatively affect the quality of the photogrammetric model and render the irradiance sensor unusable. This effect was less pronounced during the leaf-free period (epoch 2) but would have been more problematic during seasons with more variation in the reflectance of the vegetation cover (spring, autumn).

5.2. Application for Landcover Classification

The Python-based OBIA implementation facilitates the co-registration of multiple epochs, image segmentation, data preparation and model evaluation in a single processing workflow. This provides a versatile alternative to the methods implemented in common remote sensing software, which offer limited options for machine learning and advanced image analysis compared to specialized Python libraries [45,64,65,66,67,68]. The implementation includes all necessary preprocessing steps for spatial datasets provided by UAV photogrammetry and can be further adapted to alternative algorithms for machine learning or image segmentation. The inclusion of a wider range of segmentation methods would enhance the object generation process and allow the incorporation of feature morphology into the training process [44]. Model performance and robustness can be further improved by including temporal variations from multiple datasets as training features or by recursively eliminating non-persistent training points [19,20,82].
Combining photogrammetric surface models (pDSMs) and multispectral orthoimages (MS-O) from UAV flights with preexisting ALS-based elevation models (rDTM, rDSM) is an efficient approach for terrain classification in forested areas. It provides a flexible addition or alternative to current strategies for mine reclamation or hazard management in forested areas, which are usually based on satellite imagery [11,19]. The high-resolution data permitted the identification of disturbances in the forest cover resulting from earthworks (classes “slope” and “ground”) and the regrowth of forest on former mining sites (class “forest, artificial”).
The calibration flights were necessary to determine the best time of year for data collection and the seasonal variance of the training features. This process can be omitted if the focus is solely on land cover classification and the flights are not performed during the peak vegetation period. Errors in the classification process were caused either by low variance in the radiometric data (leaf-off phase; leaf maximum in summer) or by areas with tree shadows. Both problems can be reduced with an optimized flight setup. Systematic classification should ideally begin while a mine is still active. This would enable the monitoring and adjustment of technical measures for regrowth and enhance the reclamation process, particularly in areas prone to slope failures [83,84]. However, a more detailed differentiation of former mining areas, including the reclamation process or the identification of recent slope movements, would require a time series of measurements.

5.3. Application for Vegetation Monitoring

The data provided by the tested small UAV allow vegetation recovery in mining sites or on vegetation-covered landslides to be mapped and monitored in much greater detail than is possible with current satellite imagery. The differential vegetation recovery, expressed through varying growth rates and vegetation indices, can be used to monitor the reclamation of mining areas and landslides [18,19,83,84]. The application of UAVs for vegetation or landslide monitoring requires a background value for natural, undisturbed forest areas, obtained either from a time series of satellite imagery (despite its low spatial resolution) or through multiple UAV flights. The remaining disadvantages of UAVs compared to satellite imagery are lower temporal resolution, limited ground coverage and flight restrictions. Combining the high temporal and radiometric resolution of satellite imagery with the flexibility and high spatial resolution of UAV-based data is one of the current research goals in precision agriculture. The integration of these two technologies has the potential to mitigate their respective limitations, thereby enhancing overall accuracy and efficiency. Such a combination would be of particular benefit in the domains of mine reclamation and hazard mapping in forested regions [1,3,85,86].
The parameters NDVI, NDRE, dDEM and dDSM showed a clear difference between the former mining area and the stable natural forest area at the field site. This indicates that the canopy cover exhibits a different growth pattern than the surrounding forest during the vegetative season. However, the NDVI was not as reliable as the NDRE, as it reached its maximum value rapidly over a short period [76,87]. Aerial images or high-resolution satellite images are therefore only a limited alternative to UAV images for detailed monitoring tasks, as they provide lower spatial resolution and usually only RGB+NIR radiometry [24,25,88]. The classified datasets can be used to assess damage caused by wind or landslide events and to quantify vegetation succession [5,6]. In combination with ALS-based landslide inventories, dormant landslides within forests can be differentiated from recently active landslides by the presence of disturbances or anomalies within the vegetation cover [4,82,83].
An additional advantage of these small devices is their transportability and flight times, which are particularly beneficial in remote regions. The integration of high-resolution spectral data that reflect the vegetation conditions provides an alternative to landslide maps that rely purely on morphology. This would aid the process of hazard mitigation in former mines or landslide-prone areas.

6. Conclusions

The application of small UAV platforms with multispectral cameras for landcover classification in former mining areas was evaluated based on mapping flights from November 2023 to May 2024. The optimal flight setup and seasonal conditions were tested and their efficacy for landcover classification and the assessment of vegetation recovery was determined. The main conclusions of this study are provided below.
  • Small UAVs with combined multispectral and RGB camera systems provide a versatile and efficient platform for classifying and monitoring mining areas and forested landslides. Data acquisition and flight setup can be optimized for the specific site conditions to ensure representative and concise training data.
  • The most effective period for data acquisition is at the end or beginning of the growing season during overcast conditions. The use of 8-bit imagery should be avoided as it interferes with the alignment process and the use of the irradiance sensor.
  • High resolution geometric and radiometric data from UAVs provide optimal training features to obtain accurate classification models. The integration of a reference terrain model with repeated UAV flights enables differentiation between natural forest and former mining areas, as well as the determination of variable growth patterns.
  • Disturbances in the forest cover resulting from earthworks and the regrowth of forest on former mining sites can be efficiently detected and classified. Morphological features (dDEM, dDSM, curvature, roughness) are the most relevant classification parameters, followed by the NDRE and the Brightness Index (BI).
  • Vegetation patterns in the former mining areas and on the landslide show a different time-dependent variation than the surrounding natural forest. Variations in NDVI, NDRE, dDEM and dDSM exhibit characteristic temporal patterns, with their lowest values observed in December and their highest in May. Among these parameters, the NDRE demonstrated a relatively higher variance than the NDVI.
  • Former mining areas are characterized by distinct spectral indices (both NDVI and NDRE) and demonstrate reduced variability in the growth height compared to natural forests (expressed by the dDEM and dDSM). This can be attributed to the presence of varying plant species (predominantly coniferous trees), the sparser vegetation and the younger age of the vegetation.
  • Future applications of the methodology described herein could be used to optimize mine reclamation strategies, according to the monitoring results. This requires a reference dataset prior to reclamation activities and a series of repeated surveys during the reclamation. Additionally, the combined multispectral and geometrical data could provide an efficient supplement to DEM-based landslide monitoring concepts by differentiating between active and dormant landslide areas according to vegetation patterns.

Author Contributions

Conceptualization, V.R.; methodology and field tests, V.R.; programming and modeling, V.R.; visualization, V.R.; validation, V.R. and D.S.K.; formal analysis, V.R.; writing—original draft preparation, V.R. and D.S.K.; writing—review and editing, V.R. and D.S.K.; project administration, V.R. and D.S.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the TU Graz Open Access Publishing Fund.

Data Availability Statement

All raw data will be made available upon request to V.R.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Avtar, R.; Watanabe, T. Unmanned Aerial Vehicle: Applications in Agriculture and Environment; Springer International Publishing: Cham, Switzerland, 2020; ISBN 978-3-030-27156-5. [Google Scholar]
  2. Rejeb, A.; Abdollahi, A.; Rejeb, K.; Treiblmaier, H. Drones in Agriculture: A Review and Bibliometric Analysis. Comput. Electron. Agric. 2022, 198, 107017. [Google Scholar] [CrossRef]
  3. Ecke, S.; Dempewolf, J.; Frey, J.; Schwaller, A.; Endres, E.; Klemmt, H.-J.; Tiede, D.; Seifert, T. UAV-Based Forest Health Monitoring: A Systematic Review. Remote Sens. 2022, 14, 3205. [Google Scholar] [CrossRef]
  4. Lusiana, N.; Shinohara, Y.; Imaizumi, F. Quantifying Effects of Changes in Forest Age Distribution on the Landslide Frequency in Japan. Nat. Hazards 2024, 120, 8551–8570. [Google Scholar] [CrossRef]
  5. Cabral, R.P.; Da Silva, G.F.; De Almeida, A.Q.; Bonilla-Bedoya, S.; Dias, H.M.; De Mendonça, A.R.; Rodrigues, N.M.M.; Valente, C.C.A.; Oliveira, K.; Gonçalves, F.G.; et al. Mapping of the Successional Stage of a Secondary Forest Using Point Clouds Derived from UAV Photogrammetry. Remote Sens. 2023, 15, 509. [Google Scholar] [CrossRef]
  6. Chen, C.; Li, C.; Huang, C.; Lin, H.; Zelený, D. Secondary Succession on Landslides in Submontane Forests of Central Taiwan: Environmental Drivers and Restoration Strategies. Appl. Veg. Sci. 2022, 25, e12635. [Google Scholar] [CrossRef]
  7. Choi, H.-W.; Kim, H.-J.; Kim, S.-K.; Na, W.S. An Overview of Drone Applications in the Construction Industry. Drones 2023, 7, 515. [Google Scholar] [CrossRef]
  8. Shahmoradi, J.; Talebi, E.; Roghanchi, P.; Hassanalian, M. A Comprehensive Review of Applications of Drone Technology in the Mining Industry. Drones 2020, 4, 34. [Google Scholar] [CrossRef]
  9. Sun, J.; Yuan, G.; Song, L.; Zhang, H. Unmanned Aerial Vehicles (UAVs) in Landslide Investigation and Monitoring: A Review. Drones 2024, 8, 30. [Google Scholar] [CrossRef]
  10. Ren, H.; Zhao, Y.; Xiao, W.; Hu, Z. A Review of UAV Monitoring in Mining Areas: Current Status and Future Perspectives. Int. J. Coal Sci. Technol. 2019, 6, 320–333. [Google Scholar] [CrossRef]
  11. Hu, J.; Ye, B.; Bai, Z.; Feng, Y. Remote Sensing Monitoring of Vegetation Reclamation in the Antaibao Open-Pit Mine. Remote Sens. 2022, 14, 5634. [Google Scholar] [CrossRef]
  12. Ilinca, V.; Șandric, I.; Chițu, Z.; Irimia, R.; Gheuca, I. UAV Applications to Assess Short-Term Dynamics of Slow-Moving Landslides under Dense Forest Cover. Landslides 2022, 19, 1717–1734. [Google Scholar] [CrossRef]
  13. Moudrý, V.; Gdulová, K.; Fogl, M.; Klápště, P.; Urban, R.; Komárek, J.; Moudrá, L.; Štroner, M.; Barták, V.; Solský, M. Comparison of Leaf-off and Leaf-on Combined UAV Imagery and Airborne LiDAR for Assessment of a Post-Mining Site Terrain and Vegetation Structure: Prospects for Monitoring Hazards and Restoration Success. Appl. Geogr. 2019, 104, 32–41. [Google Scholar] [CrossRef]
  14. Park, S.; Choi, Y. Applications of Unmanned Aerial Vehicles in Mining from Exploration to Reclamation: A Review. Minerals 2020, 10, 663. [Google Scholar] [CrossRef]
  15. Al Heib, M.M.; Franck, C.; Djizanne, H.; Degas, M. Post-Mining Multi-Hazard Assessment for Sustainable Development. Sustainability 2023, 15, 8139. [Google Scholar] [CrossRef]
  16. Guo, S.; Yang, S.; Liu, C. Mining Heritage Reuse Risks: A Systematic Review. Sustainability 2024, 16, 4048. [Google Scholar] [CrossRef]
  17. Meng, H.; Wu, J.; Zhang, C.; Wu, K. Mechanism Analysis and Process Inversion of the “7.26” Landslide in the West Open-Pit Mine of Fushun, China. Water 2023, 15, 2652. [Google Scholar] [CrossRef]
  18. Zapico, I.; Molina, A.; Laronne, J.B.; Sánchez Castillo, L.; Martín Duque, J.F. Stabilization by Geomorphic Reclamation of a Rotational Landslide in an Abandoned Mine next to the Alto Tajo Natural Park. Eng. Geol. 2020, 264, 105321. [Google Scholar] [CrossRef]
  19. Thapa, P.S.; Daimaru, H.; Yanai, S. Analyzing Vegetation Recovery and Erosion Status after a Large Landslide at Mt. Hakusan, Central Japan. Ecol. Eng. 2024, 198, 107144. [Google Scholar] [CrossRef]
  20. Fu, S.; De Jong, S.M.; Deijns, A.; Geertsema, M.; De Haas, T. The SWADE Model for Landslide Dating in Time Series of Optical Satellite Imagery. Landslides 2023, 20, 913–932. [Google Scholar] [CrossRef]
  21. Van Den Eeckhaut, M.; Kerle, N.; Hervás, J.; Supper, R. Mapping of Landslides Under Dense Vegetation Cover Using Object-Oriented Analysis and LiDAR Derivatives. In Landslide Science and Practice; Margottini, C., Canuti, P., Sassa, K., Eds.; Springer: Berlin/Heidelberg, Germany, 2013; pp. 103–109. ISBN 978-3-642-31324-0. [Google Scholar]
  22. Santangelo, M.; Cardinali, M.; Bucci, F.; Fiorucci, F.; Mondini, A.C. Exploring Event Landslide Mapping Using Sentinel-1 SAR Backscatter Products. Geomorphology 2022, 397, 108021. [Google Scholar] [CrossRef]
  23. Li, D.; Tang, X.; Tu, Z.; Fang, C.; Ju, Y. Automatic Detection of Forested Landslides: A Case Study in Jiuzhaigou County, China. Remote Sens. 2023, 15, 3850. [Google Scholar] [CrossRef]
  24. Reinprecht, V.; Klass, C.; Kieffer, S. Aerial Imagery for Geological Hazard Management in Alpine Catchments. Geomech. Tunn. 2024, 17, 553–560. [Google Scholar] [CrossRef]
  25. Lillesand, T.M.; Kiefer, R.W.; Chipman, J.W. Remote Sensing and Image Interpretation, 7th ed.; Wiley: Hoboken, NJ, USA, 2015; ISBN 978-1-118-34328-9. [Google Scholar]
  26. Fernández, T.; Pérez, J.; Cardenal, J.; Gómez, J.; Colomo, C.; Delgado, J. Analysis of Landslide Evolution Affecting Olive Groves Using UAV and Photogrammetric Techniques. Remote Sens. 2016, 8, 837. [Google Scholar] [CrossRef]
  27. Xu, Y.; Zhu, H.; Hu, C.; Liu, H.; Cheng, Y. Deep Learning of DEM Image Texture for Landform Classification in the Shandong Area, China. Front. Earth Sci. 2022, 16, 352–367. [Google Scholar] [CrossRef]
  28. Weber, L. Handbuch der Lagerstätten der Erze, Industrieminerale und Energierohstoffe Österreichs. In Archiv für Lagerstättenforschung; Geologische Bundesanst: Wien, Austria, 1997; ISBN 978-3-900312-98-5. [Google Scholar]
  29. Jauk, J. Der Gossendorfer Bergbau—Materialien Für Eine Bildungsbezogene Nachnutzung. Master's Thesis, Karl-Franzens Universität Graz, Graz, Austria, 2018. [Google Scholar]
  30. DJI Mavic 3M User Manual [v1.7] 2024.06. Available online: https://ag.dji.com/de/mavic-3-m/downloads (accessed on 2 January 2025).
  31. Qu, T.; Li, Y.; Zhao, Q.; Yin, Y.; Wang, Y.; Li, F.; Zhang, W. Drone-Based Multispectral Remote Sensing Inversion for Typical Crop Soil Moisture under Dry Farming Conditions. Agriculture 2024, 14, 484. [Google Scholar] [CrossRef]
  32. Deng, L.; Mao, Z.; Li, X.; Hu, Z.; Duan, F.; Yan, Y. UAV-Based Multispectral Remote Sensing for Precision Agriculture: A Comparison between Different Cameras. ISPRS J. Photogramm. Remote Sens. 2018, 146, 124–136. [Google Scholar] [CrossRef]
  33. BEV-APOS Austrian Positioning Service. Available online: www.bev.gv.at/Services/Produkte/Grundlagenvermessung/APOS (accessed on 2 January 2025).
  34. Liu, X.; Lian, X.; Yang, W.; Wang, F.; Han, Y.; Zhang, Y. Accuracy Assessment of a UAV Direct Georeferencing Method and Impact of the Configuration of Ground Control Points. Drones 2022, 6, 30. [Google Scholar] [CrossRef]
  35. James, M.R.; Robson, S. Mitigating Systematic Error in Topographic Models Derived from UAV and Ground-based Image Networks. Earth Surf. Process. Landf. 2014, 39, 1413–1420. [Google Scholar] [CrossRef]
  36. Jiménez-Jiménez, S.I.; Ojeda-Bustamante, W.; Marcial-Pablo, M.; Enciso, J. Digital Terrain Models Generated with Low-Cost UAV Photogrammetry: Methodology and Accuracy. ISPRS Int. J. Geo-Inf. 2021, 10, 285. [Google Scholar] [CrossRef]
  37. Vautherin, J.; Rutishauser, S.; Schneider-Zapp, K.; Choi, H.F.; Chovancova, V.; Glass, A.; Strecha, C. Photogrammetric accuracy and modeling of rolling shutter cameras. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, III-3, 139–146. [Google Scholar] [CrossRef]
  38. Agisoft LLC. Agisoft Metashape Professional (Version 1.8.1). Available online: www.agisoft.com (accessed on 2 January 2025). [Google Scholar]
  39. Saczuk, E. Processing Multi-Spectral Imagery with Agisoft MetaShape Pro. 2020. Available online: https://pressbooks.bccampus.ca/ericsaczuk/ (accessed on 2 January 2025).
  40. Over, J.S.R.; Ritchie, A.C.; Brown, J.; Kranenburg, C.J. Processing Coastal Imagery with Agisoft Metashape Professional Edition, Version 1.6—Structure from Motion Workflow Documentation; USGS Open-File Report; USGS Publications Warehouse: Reston, VA, USA, 2021. [Google Scholar]
  41. Luhmann, T.; Robson, S.; Kyle, S.; Boehm, J. Close-Range Photogrammetry and 3D Imaging, 4th ed.; De Gruyter Stem: Boston, MA, USA, 2023; ISBN 978-3-11-102935-1. [Google Scholar]
  42. Blaschke, T. Object Based Image Analysis for Remote Sensing. ISPRS J. Photogramm. Remote Sens. 2010, 65, 2–16. [Google Scholar] [CrossRef]
  43. Castillejo-González, I.L.; López-Granados, F.; García-Ferrer, A.; Peña-Barragán, J.M.; Jurado-Expósito, M.; De La Orden, M.S.; González-Audicana, M. Object- and Pixel-Based Analysis for Mapping Crops and Their Agro-Environmental Associated Measures Using QuickBird Imagery. Comput. Electron. Agric. 2009, 68, 207–215. [Google Scholar] [CrossRef]
  44. Hossain, M.D.; Chen, D. Segmentation for Object-Based Image Analysis (OBIA): A Review of Algorithms and Challenges from Remote Sensing Perspective. ISPRS J. Photogramm. Remote Sens. 2019, 150, 115–134. [Google Scholar] [CrossRef]
  45. Clewley, D.; Bunting, P.; Shepherd, J.; Gillingham, S.; Flood, N.; Dymond, J.; Lucas, R.; Armston, J.; Moghaddam, M. A Python-Based Open Source System for Geographic Object-Based Image Analysis (GEOBIA) Utilizing Raster Attribute Tables. Remote Sens. 2014, 6, 6111–6135. [Google Scholar] [CrossRef]
  46. Hay, G.J.; Castilla, G. Object-based image analysis: Strengths, weaknesses, opportunities and threats (SWOT). In Proceedings of the 1st international Conference on Object-based Image Analysis (OBIA 2006), Salzburg, Austria, 4–5 July 2006; Available online: https://www.isprs.org/proceedings/XXXVI/4-C42/ (accessed on 2 January 2025).
  47. Amatya, P.; Kirschbaum, D.; Stanley, T.; Tanyas, H. Landslide Mapping Using Object-Based Image Analysis and Open Source Tools. Eng. Geol. 2021, 282, 106000. [Google Scholar] [CrossRef]
  48. Robson, B.A.; Bolch, T.; MacDonell, S.; Hölbling, D.; Rastner, P.; Schaffer, N. Automated Detection of Rock Glaciers Using Deep Learning and Object-Based Image Analysis. Remote Sens. Environ. 2020, 250, 112033. [Google Scholar] [CrossRef]
  49. Machala, M.; Zejdová, L. Forest Mapping Through Object-Based Image Analysis of Multispectral and LiDAR Aerial Data. Eur. J. Remote Sens. 2014, 47, 117–131. [Google Scholar] [CrossRef]
  50. Soille, P.J.; Ansoult, M.M. Automated Basin Delineation from Digital Elevation Models Using Mathematical Morphology. Signal Process. 1990, 20, 171–182. [Google Scholar] [CrossRef]
  51. Felzenszwalb, P.F.; Huttenlocher, D.P. Efficient Graph-Based Image Segmentation. Int. J. Comput. Vis. 2004, 59, 167–181. [Google Scholar] [CrossRef]
  52. Achanta, R.; Shaji, A.; Smith, K.; Lucchi, A.; Fua, P.; Süsstrunk, S. SLIC Superpixels Compared to State-of-the-Art Superpixel Methods. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 2274–2282. [Google Scholar] [CrossRef]
  53. Vedaldi, A.; Soatto, S. Quick Shift and Kernel Methods for Mode Seeking. In Computer Vision—ECCV 2008; Forsyth, D., Torr, P., Zisserman, A., Eds.; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2008; Volume 5305, pp. 705–718. ISBN 978-3-540-88692-1. [Google Scholar]
  54. Florinsky, I.V. Digital Terrain Analysis in Soil Science and Geology, 2nd ed.; Elsevier: London, UK, 2016; ISBN 978-0-12-804632-6. [Google Scholar]
  55. Horn, B.K.P. Hill Shading and the Reflectance Map. Proc. IEEE 1981, 69, 14–47. [Google Scholar] [CrossRef]
  56. Zevenbergen, L.W.; Thorne, C.R. Quantitative Analysis of Land Surface Topography. Earth Surf. Process. Landf. 1987, 12, 47–56. [Google Scholar] [CrossRef]
  57. Polykretis, C.; Grillakis, M.; Alexakis, D. Exploring the Impact of Various Spectral Indices on Land Cover Change Detection Using Change Vector Analysis: A Case Study of Crete Island, Greece. Remote Sens. 2020, 12, 319. [Google Scholar] [CrossRef]
  58. QGIS Development Team. QGIS Geographic Information System (Version 3.38.3). Available online: https://qgis.org (accessed on 2 January 2025).
  59. Gallatin, K.; Albon, C. Machine Learning with Python Cookbook: Practical Solutions from Preprocessing to Deep Learning, 2nd ed.; O’Reilly Media Inc.: Sebastopol, CA, USA, 2023; ISBN 978-1-09-813572-0. [Google Scholar]
  60. Harris, C.R.; Millman, K.J.; Van Der Walt, S.J.; Gommers, R.; Virtanen, P.; Cournapeau, D.; Wieser, E.; Taylor, J.; Berg, S.; Smith, N.J. Array Programming with NumPy. Nature 2020, 585, 357–362. [Google Scholar] [CrossRef]
  61. Pandas Development Team. pandas (Version 2.2.2). Available online: https://github.com/pandas-dev/pandas.git (accessed on 2 January 2025).
  62. Gillies, S. Rasterio: Geospatial Raster I/O for Programmers (Version 1.4.1). Available online: https://github.com/rasterio/rasterio.git (accessed on 2 January 2025).
  63. Jordahl, K.; Bossche, J.V.D.; Fleischmann, M.; Wasserman, J.; McBride, J.; Gerard, J.; Tratner, J.; Perry, M.; Badaracco, A.G.; Farmer, C. GeoPandas (Version 1.0.1). Available online: https://github.com/geopandas/geopandas.git (accessed on 2 January 2025).
  64. GeoUtils Contributors. GeoUtils (Version 0.1.9). Available online: https://github.com/GlacioHack/geoutils.git (accessed on 2 January 2025).
  65. xDEM Contributors. xDEM (Version 0.0.20). Available online: https://github.com/GlacioHack/xdem.git (accessed on 2 January 2025).
  66. OpenCV Contributors. OpenCV (Version 4.10.0). Available online: https://github.com/opencv/opencv.git (accessed on 2 January 2025).
  67. Van Der Walt, S.; Schönberger, J.L.; Nunez-Iglesias, J.; Boulogne, F.; Warner, J.D.; Yager, N.; Gouillart, E.; Yu, T. Scikit-Image: Image Processing in Python. PeerJ 2014, 2, e453. [Google Scholar] [CrossRef]
  68. Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Dubourg, V.; et al. Scikit-Learn: Machine Learning in Python. J. Mach. Learn. Res. 2011, 12, 2826–2830. [Google Scholar]
  69. Nuth, C.; Kääb, A. Co-Registration and Bias Corrections of Satellite Elevation Data Sets for Quantifying Glacier Thickness Change. Cryosphere 2011, 5, 271–290. [Google Scholar] [CrossRef]
  70. Amr, T. Hands-on Machine Learning with Scikit-Learn and Scientific Python Toolkits: A Practical Guide to Implementing Supervised and Unsupervised Machine Learning Algorithms in Python; Packt: Birmingham, UK; Mumbai, India, 2020; ISBN 978-1-83882-604-8. [Google Scholar]
  71. Belgiu, M.; Drăguţ, L. Random Forest in Remote Sensing: A Review of Applications and Future Directions. ISPRS J. Photogramm. Remote Sens. 2016, 114, 24–31. [Google Scholar] [CrossRef]
  72. Gislason, P.O.; Benediktsson, J.A.; Sveinsson, J.R. Random Forests for Land Cover Classification. Pattern Recognit. Lett. 2006, 27, 294–300. [Google Scholar] [CrossRef]
  73. Rimal, Y.; Sharma, N.; Alsadoon, A. The Accuracy of Machine Learning Models Relies on Hyperparameter Tuning: Student Result Classification Using Random Forest, Randomized Search, Grid Search, Bayesian, Genetic, and Optuna Algorithms. Multimed. Tools Appl. 2024, 83, 74349–74364. [Google Scholar] [CrossRef]
  74. Landis, J.R.; Koch, G.G. The Measurement of Observer Agreement for Categorical Data. Biometrics 1977, 33, 159. [Google Scholar] [CrossRef]
  75. Burns, W.J.; Madin, I.P. Protocol for Inventory Mapping of Landslide Deposits from Light Detection and Ranging (Lidar) Imagery. Oregon Department of Geology and Mineral Industries Special Paper No. 42. 2009. Available online: https://pubs.oregon.gov/dogami/sp/p-SP-42.htm (accessed on 2 January 2025).
  76. Boiarskii, B. Comparison of NDVI and NDRE Indices to Detect Differences in Vegetation and Chlorophyll Content. J. Mech. Contin. Math. Sci. 2019, 4, 20–29. [Google Scholar] [CrossRef]
  77. Liu, M.; Zhan, Y.; Li, J.; Kang, Y.; Sun, X.; Gu, X.; Wei, X.; Wang, C.; Li, L.; Gao, H.; et al. Validation of Red-Edge Vegetation Indices in Vegetation Classification in Tropical Monsoon Region—A Case Study in Wenchang, Hainan, China. Remote Sens. 2024, 16, 1865. [Google Scholar] [CrossRef]
  78. McNicol, I.M.; Mitchard, E.T.A.; Aquino, C.; Burt, A.; Carstairs, H.; Dassi, C.; Modinga Dikongo, A.; Disney, M.I. To What Extent Can UAV Photogrammetry Replicate UAV LiDAR to Determine Forest Structure? A Test in Two Contrasting Tropical Forests. J. Geophys. Res. Biogeosci. 2021, 126, e2021JG006586. [Google Scholar] [CrossRef]
  79. European Commission. Commission Implementing Regulation (EU) 2019/947 of 24 May 2019 on the Rules and Procedures for the Operation of Unmanned Aircraft; Official Journal of the European Union; Publications Office of the European Union: Luxembourg, 2019; pp. 45–71. Available online: https://skybrary.aero/articles/regulation-2019947-rules-and-procedures-unmanned-aircraft (accessed on 2 January 2025).
  80. Maes, W.H.; Steppe, K. Perspectives for Remote Sensing with Unmanned Aerial Vehicles in Precision Agriculture. Trends Plant Sci. 2019, 24, 152–164. [Google Scholar] [CrossRef]
  81. Kameyama, S.; Sugiura, K. Effects of Differences in Structure from Motion Software on Image Processing of Unmanned Aerial Vehicle Photography and Estimation of Crown Area and Tree Height in Forests. Remote Sens. 2021, 13, 626. [Google Scholar] [CrossRef]
  82. Dislich, C.; Huth, A. Modelling the Impact of Shallow Landslides on Forest Structure in Tropical Montane Forests. Ecol. Model. 2012, 239, 40–53. [Google Scholar] [CrossRef]
  83. Buma, B.; Pawlik, Ł. Post-landslide Soil and Vegetation Recovery in a Dry, Montane System Is Slow and Patchy. Ecosphere 2021, 12, e03346. [Google Scholar] [CrossRef]
  84. Vachova, P.; Vach, M.; Skalicky, M.; Walmsley, A.; Berka, M.; Kraus, K.; Hnilickova, H.; Vinduskova, O.; Mudrak, O. Reclaimed Mine Sites: Forests and Plant Diversity. Diversity 2021, 14, 13. [Google Scholar] [CrossRef]
  85. Govi, D.; Pappalardo, S.E.; De Marchi, M.; Meggio, F. From Space to Field: Combining Satellite, UAV and Agronomic Data in an Open-Source Methodology for the Validation of NDVI Maps in Precision Viticulture. Remote Sens. 2024, 16, 735. [Google Scholar] [CrossRef]
  86. Dash, J.P.; Pearse, G.D.; Watt, M.S. UAV Multispectral Imagery Can Complement Satellite Data for Monitoring Forest Health. Remote Sens. 2018, 10, 1216. [Google Scholar] [CrossRef]
  87. Mutanga, O.; Skidmore, A.K. Narrow Band Vegetation Indices Overcome the Saturation Problem in Biomass Estimation. Int. J. Remote Sens. 2004, 25, 3999–4014. [Google Scholar] [CrossRef]
  88. Agrafiotis, P.; Georgopoulos, A. Comparative Assessment of Very High Resolution Satellite and Aerial Orthoimagery. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2015, XL-3/W2, 1–7. [Google Scholar] [CrossRef]
Figure 1. (A) Overview of the study site (“Trassbruch Gossendorf”) based on the digital elevation model; (B) oblique photograph. Former mining and mine dump areas, access roads and the landslide area are highlighted in (A).
Figure 2. (A) Study site with the boundaries of former mining, mine dump and landslide-affected areas. (B) Subset of the southern slope, visualizing the segmentation, the effect of the 0.5 m buffer around the sampling points, and the typical tree crown dimension (diameter ~2–3 m).
Figure 3. Python-based OBIA workflow, including a summary of each processing step.
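Because Figure 3 summarizes the workflow only graphically, the following hedged sketch illustrates how the classification step of such an OBIA workflow can be implemented with a Random Forest classifier and randomized hyperparameter search in Scikit-Learn [68,71,73]. The per-segment feature table, its column names and the file name are assumptions for illustration, not the authors' code.

```python
# Hedged sketch of an OBIA classification step: Random Forest with randomized
# hyperparameter search. The per-segment feature table is hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV, train_test_split

segments = pd.read_csv("segment_features.csv")  # hypothetical per-segment feature table
features = ["ndvi", "ndre", "ndwi", "ci", "bi", "slope", "roughness", "dDEM", "dDSM"]
X, y = segments[features], segments["label"]

# Hold out 20% of the labelled segments for an independent accuracy assessment
X_train, X_holdout, y_train, y_holdout = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

search = RandomizedSearchCV(
    RandomForestClassifier(random_state=42),
    param_distributions={
        "n_estimators": [100, 250, 500],
        "max_depth": [None, 10, 20],
        "min_samples_leaf": [1, 3, 5],
    },
    n_iter=10, cv=5, scoring="f1_macro", random_state=42,
)
search.fit(X_train, y_train)
print("best parameters:", search.best_params_)
print("holdout accuracy:", search.best_estimator_.score(X_holdout, y_holdout))
```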
Figure 4. Classified map datasets for all four classification periods. (A) November 2023 (sunny, oblique flight); (B) December 2023 (overcast, nadir flight); (C) April 2024 (overcast, nadir flight); (D) May 2024 (sunny, nadir flight). [X] = area prone to misclassification (Zone A2), [Y] = old mine dump (Zone B1), which was only partially cleared for operation.
Figure 5. (A) Parameter variation during the cross-validation process (global and per-class performance metrics). (B) Classification metrics for all flight epochs, including the combined confusion matrices. (C) Confusion matrices derived from the holdout dataset (holdout confusion matrix). The confusion matrices were normalized row-wise (horizontal direction), and the corresponding sample numbers are given in square brackets.
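The holdout metrics reported in Figure 5 can be reproduced along the following lines. This is a minimal sketch with placeholder label arrays rather than the study's actual evaluation script; it shows a row-normalized confusion matrix, overall accuracy and Cohen's kappa computed with Scikit-Learn [68].

```python
# Minimal sketch: row-normalized confusion matrix, overall accuracy and
# Cohen's kappa for a holdout set. Label arrays are illustrative placeholders.
import numpy as np
from sklearn.metrics import confusion_matrix, accuracy_score, cohen_kappa_score

y_true = np.array(["forest", "mine", "forest", "dump", "mine", "forest"])
y_pred = np.array(["forest", "mine", "dump",   "dump", "mine", "forest"])

labels = ["forest", "mine", "dump"]
cm = confusion_matrix(y_true, y_pred, labels=labels)

# Normalize along rows ("horizontal direction"), as in Figure 5
cm_norm = cm / cm.sum(axis=1, keepdims=True)

print("overall accuracy:", accuracy_score(y_true, y_pred))
print("Cohen's kappa:   ", cohen_kappa_score(y_true, y_pred))
print(np.round(cm_norm, 2))
```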
Figure 6. Time series of the mean NDVI, NDRE, height above the rDTM (dDTM) and height above the rDSM (dDSM), extracted from the former mining zones (mine dump, mine), the landslide area and the natural forest.
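A schematic example of how zonal time series such as those in Figure 6 can be extracted with rasterio [62] and GeoPandas [63] is given below. The file names, the zone layer and its "name" attribute are hypothetical, and a single-band NDVI raster per epoch is assumed.

```python
# Hedged sketch: mean NDVI per zone polygon for each flight epoch.
# File names and the zone layer structure are assumptions.
import numpy as np
import geopandas as gpd
import rasterio
from rasterio.mask import mask

zones = gpd.read_file("zones.gpkg")  # hypothetical polygons: mine, mine dump, landslide, forest
epochs = ["ndvi_2023-11.tif", "ndvi_2023-12.tif", "ndvi_2024-04.tif", "ndvi_2024-05.tif"]

records = []
for path in epochs:
    with rasterio.open(path) as src:
        zones_src = zones.to_crs(src.crs)  # reproject polygons to the raster CRS
        for _, zone in zones_src.iterrows():
            # Clip the single-band NDVI raster to the zone polygon; outside pixels become NaN
            data, _ = mask(src, [zone.geometry], crop=True, nodata=np.nan, filled=True)
            records.append({"epoch": path, "zone": zone["name"],
                            "mean_ndvi": float(np.nanmean(data))})

print(records)
```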
Table 1. Optical imagery, elevation models (data source) and extracted features (morphometry, spectral indices) used for classification and segmentation.

Data Source | Symbol | Application and Extracted Features
RGB Orthoimage | RGB-O | Used for image segmentation
RGB Digital Surface Model | pDSM | Used for morphometric feature extraction: slope, curvature, roughness, dDEM, dDSM
Multispectral Orthoimage | MS-O | Used for spectral feature extraction:
  NDVI (Normalized Difference Vegetation Index) = (NIR − Red)/(NIR + Red)
  NDRE (Normalized Difference Red Edge Index) = (NIR − RE)/(NIR + RE)
  NDWI (Normalized Difference Water Index) = (Green − NIR)/(Green + NIR)
  CI (Coloration Index) = (Green − Red)/(Green + Red)
  BI (Brightness Index) = ((Red² + Green²) × 0.5)^0.5
ALS Digital Terrain Model (Date: 2009; Resolution 1 × 1 m) | rDTM | Used for vertical co-registration and calculation of the height above the terrain model (dDEM)
ALS Digital Surface Model (Date: 2009; Resolution 1 × 1 m) | rDSM | Used to calculate the height above the surface model (dDSM)
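The spectral indices listed in Table 1 can be computed directly from the multispectral orthoimage bands, for example with rasterio [62] and NumPy [60]. In the sketch below, the file name and the band order (Green, Red, Red Edge, NIR) are assumptions that must be adapted to the actual sensor export.

```python
# Hedged sketch of the spectral indices in Table 1, computed from a
# multispectral orthoimage. Band order and file name are assumptions.
import numpy as np
import rasterio

with rasterio.open("multispectral_ortho.tif") as src:        # hypothetical file name
    green, red, rededge, nir = src.read().astype("float32")  # assumed 4-band order

eps = 1e-9  # guard against division by zero
ndvi = (nir - red) / (nir + red + eps)
ndre = (nir - rededge) / (nir + rededge + eps)
ndwi = (green - nir) / (green + nir + eps)
ci   = (green - red) / (green + red + eps)
bi   = np.sqrt((red**2 + green**2) * 0.5)
```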
Table 2. Flight epochs, illumination conditions, setup and photogrammetric parameters of the resulting datasets. GSD = ground sampling distance; RE = reprojection error; RGB = RGB dataset; MS = multispectral dataset.

Flight Epoch | Illumination Condition | Flight Setup/Image Datatype | GSD RGB/MS [cm/pix] | RE RGB/MS [pix]
18 November 2023 | Sunny | Oblique/16-bit TIFF | 3.3/4.8 | 0.7/0.4
29 December 2023 | Overcast | Nadir/8-bit TIFF | 3.0/5.1 | 0.6/1.0
6 April 2024 | Overcast | Nadir/16-bit TIFF | 2.9/5.0 | 0.5/0.5
25 May 2024 | Sunny | Nadir/16-bit TIFF | 2.9/4.4 | 0.4/0.5
Table 3. Comparison of image segmentation algorithms implemented in the Scikit-Image library (version: 1.6.0; UAV dataset: November 2023). The runtime was scaled to the optimal method (Felzenszwalb).

Method | Algorithm Runtime | Average Segment Size [pix/m²] | Number of Segments [Count] | Classification Accuracy
Felzenszwalb | 1.0 × Felzenszwalb | 9.5/0.95 | 75,776 | 0.93
Quickshift | 5.3 × Felzenszwalb | 5.9/0.59 | 145,252 | 0.89
Watershed | 7.1 × Felzenszwalb | 4.7/0.47 | 79,920 | 0.82
SLIC | 0.4 × Felzenszwalb | 2.5/0.25 | 68,383 | 0.86
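The comparison in Table 3 can be reproduced in outline with the segmentation functions of Scikit-Image [67]. The parameter values in the sketch below are illustrative defaults rather than the settings used in the study, and the input file name is hypothetical.

```python
# Hedged sketch: comparing scikit-image segmentation algorithms on an RGB orthoimage.
# Parameters are illustrative, not the study's settings.
import time
import numpy as np
from skimage import io
from skimage.color import rgb2gray
from skimage.filters import sobel
from skimage.segmentation import felzenszwalb, quickshift, slic, watershed

rgb = io.imread("rgb_orthoimage.tif")[:, :, :3]  # hypothetical file name
rgb = rgb / rgb.max()                            # scale to [0, 1] for the segmenters

methods = {
    "Felzenszwalb": lambda img: felzenszwalb(img, scale=100, sigma=0.5, min_size=50),
    "Quickshift":   lambda img: quickshift(img, kernel_size=3, max_dist=6, ratio=0.5),
    "SLIC":         lambda img: slic(img, n_segments=50000, compactness=10, start_label=1),
    "Watershed":    lambda img: watershed(sobel(rgb2gray(img)), markers=50000, compactness=0.001),
}

for name, segment in methods.items():
    t0 = time.time()
    labels = segment(rgb)
    n = len(np.unique(labels))
    print(f"{name}: {n} segments, {time.time() - t0:.1f} s, "
          f"mean size {labels.size / n:.1f} pix")
```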
