Article

Extraction of Sparse Vegetation Cover in Deserts Based on UAV Remote Sensing

1 Institute of Ecological Conservation and Restoration, Chinese Academy of Forestry, Beijing 100091, China
2 Institute of Desertification Studies, Chinese Academy of Forestry, Beijing 100091, China
3 School of Pastoral Agriculture Science and Technology, Lanzhou University, Lanzhou 730000, China
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Remote Sens. 2025, 17(15), 2665; https://doi.org/10.3390/rs17152665
Submission received: 30 June 2025 / Revised: 25 July 2025 / Accepted: 30 July 2025 / Published: 1 August 2025

Abstract

The unique characteristics of desert vegetation, such as distinctive leaf morphology, discrete canopy structures, and sparse, uneven distribution, pose significant challenges for remote sensing-based estimation of fractional vegetation cover (FVC). Unmanned Aerial Vehicle (UAV) systems can accurately distinguish vegetation patches, extract weak vegetation signals, and navigate complex terrain, making them suitable for small-scale FVC extraction. In this study, we selected a floodplain fan with Caragana korshinskii Kom as the constructive species in Hatengtaohai National Nature Reserve, Bayannur, Inner Mongolia, China, as our study area. We investigated remote sensing extraction methods for desert sparse vegetation cover by placing samples across three gradients: the top, middle, and edge of the fan. We then acquired UAV multispectral images; evaluated the applicability of various vegetation indices (VIs) using supervised classification, linear regression models, and machine learning; and explored the feasibility and stability of multiple machine learning models in this region. Our results indicate the following: (1) Multispectral vegetation indices are superior to visible-light vegetation indices and more suitable for FVC extraction in vegetation-sparse desert regions. (2) Comparing five machine learning regression models showed that the XGBoost and KNN models exhibited relatively low estimation performance in the study area. The spatial distribution of plots appeared to influence the stability of the SVM model when estimating FVC. In contrast, the RF and LASSO models demonstrated robust stability across both training and testing datasets. Notably, the RF model achieved the best inversion performance (R2 = 0.876, RMSE = 0.020, MAE = 0.016), indicating that RF is one of the most suitable models for retrieving FVC in naturally sparse desert vegetation.
This study provides a valuable contribution to the limited existing research on remote sensing-based estimation of FVC and characterization of spatial heterogeneity in small-scale desert sparse vegetation ecosystems dominated by a single species.

1. Introduction

Remote sensing technology provides an efficient and cost-effective approach for large-scale ecological monitoring of fractional vegetation cover (FVC) changes [1,2,3]. Moreover, existing remote sensing monitoring models for FVC are well developed and widely applied at medium to large regional or global scales [4,5,6]. However, owing to variation in climate, topography, and other factors, plant morphology, vegetation types, and community composition differ significantly across regions. This results in complex and diverse ecosystem structures and functions, increasing the difficulty and uncertainty of accurately estimating FVC from remote sensing [7]. Desert vegetation is one of the most distinctive vegetation types, forming the backbone of desert ecosystems and playing a crucial role in their stability. Precise extraction of desert vegetation information is therefore of great significance for ecological environment monitoring and desertification control [8,9,10]. However, desert vegetation, having long adapted to extreme environments, exhibits leaf degeneration, fragmented canopies, sparse distribution, and an extremely simple community structure [11]. In remote sensing imagery, its spectral response is strongly influenced by the bright reflectance spectrum of the soil. Consequently, the spectral characteristics of bare soil and vegetation are often nonlinearly mixed, producing a “double-peak” phenomenon and making the “red edge” of the desert vegetation spectral curve less distinct. As a result, the spectral curves of desert vegetation do not exhibit the typical spectral characteristics of healthy green plants [12,13]. This deprives generic remote sensing estimation models of their universal applicability [14,15], resulting in cases in which desert vegetation is simplified or overlooked entirely because of its extremely low NDVI [16].
Among the various inversion methods, the vegetation index (VI) approach is the most widely used, demonstrating good and stable performance in areas with moderate to high FVC [17,18]. However, in sparsely vegetated desert regions, challenges such as low sensitivity to soil background and complexities in mixed pixel decomposition often result in either overestimation or underestimation of the FVC [19,20,21]. For instance, using NDVI tends to underestimate FVC in regions of low cover because NDVI is heavily saturated in areas of high FVC. Moreover, NDVI is prone to exaggerating the vegetation signal under high-reflectance backgrounds, thereby causing a serious underestimation of actual FVC [22,23]. To address the misestimation issues associated with vegetation indices, researchers have attempted to develop more robust new vegetation indices by adjusting parameters or incorporating additional spectral bands to improve their accuracy and generalization ability in specific regions, such as the commonly used soil-adjusted vegetation indices (SAVIs), Modified SAVI (MSAVI) [24,25], and the Total Ratio Vegetation Index (TRVI) which is commonly used in sparse forests in arid and semi-arid regions [26]. Although some vegetation indices perform well in arid areas, their effectiveness in regions of extremely low FVC remains significantly limited [27,28].
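The soil-adjusted indices mentioned above follow standard formulations; as a minimal illustrative sketch (assuming band values are surface reflectance in [0, 1]; SAVI's soil factor L = 0.5 is the common default, not a value taken from this study):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red)

def savi(nir, red, L=0.5):
    """Soil-Adjusted Vegetation Index; L dampens soil-background influence."""
    return (1 + L) * (nir - red) / (nir + red + L)

def msavi(nir, red):
    """Modified SAVI with a self-adjusting soil factor."""
    return (2 * nir + 1 - np.sqrt((2 * nir + 1) ** 2 - 8 * (nir - red))) / 2

# Example pixel: NIR = 0.40, red = 0.10 (actual bright-soil desert values vary)
v_ndvi, v_savi, v_msavi = ndvi(0.40, 0.10), savi(0.40, 0.10), msavi(0.40, 0.10)
```

Because MSAVI replaces the fixed L with a self-adjusting term, it tends to sit between NDVI and SAVI for low-cover pixels, which is one reason it is favored in sparse-vegetation settings.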
Hyperspectral remote sensing data, with its higher spectral resolution and richer spectral information, enables fine-grained identification of vegetation species and types, significantly improving classification accuracy [29,30]. Using relevant spectral mixture models can effectively eliminate background interference and improve the estimation accuracy of sparse FVC in desert areas [31]. However, studies have shown that in areas with sparse vegetation distribution where FVC < 30%, the reliability of using multiple endmember spectral mixture analysis (MESMA) based on hyperspectral imagery to retrieve vegetation types still requires further validation [32]. In addition, the application of hyperspectral data is greatly limited by high acquisition costs, complexity of data processing, and the constraints of data acquisition platforms [33]. In contrast, multispectral remote sensing data, which are relatively easy and inexpensive to acquire, have been widely used [34,35,36]. An Unmanned Aerial Vehicle (UAV) platform can carry various sensors, with low cost, high efficiency, and real-time data acquisition. Multispectral UAV imagery, with its notable advantages of high spatial resolution, multiple spectral bands, and low cost, effectively compensates for the limitations of traditional satellite remote sensing. It plays an irreplaceable role, particularly in small-scale, fine-resolution, and rapid-response applications, thereby offering significant benefits for FVC estimation in arid regions [37]. A study [34] conducted at a research station in an arid region of Australia utilized 15 ultra-high spatial resolution multispectral UAV images to estimate FVC through three approaches: RF regression, spectral unmixing, and object-based classification. The results demonstrated that multispectral UAV imagery exhibits considerable potential for FVC estimation in arid environments at ultra-high resolutions. 
Existing studies employing UAV imagery for FVC estimation have largely focused on croplands or areas with relatively dense vegetation cover [38,39]. In contrast, research specifically targeting sparse vegetation cover in desert regions remains limited, particularly for areas dominated by a single plant species.
Current studies utilizing UAV-based multispectral imagery in arid regions are predominantly focused on environmental monitoring [40,41], species identification in arid regions [42,43] and FVC and biomass estimation [44,45]. In arid region research, study areas are primarily focused on Gobi deserts, desert steppes, arid-oasis transition zones, and oasis areas [46,47,48]. These research regions generally exhibit diverse vegetation types, large variations in FVC yet relatively high vegetation density, and a large number of ground-truth samples. Consequently, models are often conducted based on a mixture of multiple vegetation types. At a fine spatial scale, studies on remote sensing-based inversion of FVC and quantitative characterization of spatial heterogeneity remain limited in desert areas with natural sparse vegetation dominated by a single species, especially when the number of ground-truth samples is relatively small. This study provides a valuable contribution to the limited existing research on remote sensing-based estimation of FVC and characterization of spatial heterogeneity in small-scale desert sparse vegetation ecosystems dominated by a single species.
In this study, we aim to explore and evaluate the applicability of UAV-based spectral vegetation indices in desert areas with sparse vegetation, and to investigate and compare the feasibility and stability of various machine learning (ML) models in such environments. We conducted a gradient analysis on an alluvial fan with Caragana korshinskii Kom as the constructive species. First, we used the result of supervised classification with post-classification optimization of UAV multispectral images as the true value of FVC (FVCT). Second, we calculated multiple VIs (both multispectral and visible VIs) and constructed linear regression models against FVCT, which allowed us to determine the optimal VIs. Finally, we used the optimal VIs, texture, and other features as input parameters for ML models to estimate FVC, thereby determining the optimal model for extracting sparse-vegetation FVC. Previous studies have consistently demonstrated that RF is the most accurate model for predicting FVC with large sample datasets [27]; our research verifies this, showing that RF maintains high accuracy even with smaller sample datasets. These findings contribute valuable insights to the growing body of research on UAV-based detection and estimation of FVC in similar regions with low vegetation cover.

2. Materials and Methods

2.1. Study Area and Sample Layout

The study area is located in a floodplain fan at the southeast edge of Wolf Mountain, Dengkou County, Bayannur City, Inner Mongolia, China (Figure 1). The protected area is dominated by mountains, deserts, and barrens, with an average annual temperature of 7.6 °C and an average annual precipitation of 119.0 mm; precipitation from June to September accounts for 78.8% of the annual total [49]. Vegetation types in this area include desert vegetation, saline meadow vegetation, herbaceous marsh vegetation, and desert steppe vegetation; desert vegetation includes Haloxylon ammodendron (C. A. Mey.) Bunge, Potaninia mongolica Maxim, Caragana korshinskii Kom, and Nitraria tangutorum Bobrov, etc. [50]. The sample site selected for this study is a single shrub community with Caragana korshinskii Kom as the constructive species (N 40°40′54.97″ ~ 40°44′15.12″, E 106°26′25.28″ ~ 106°29′41.32″) (Figure 2). Caragana korshinskii Kom is a shrub, or occasionally a small tree, of the genus Caragana in the legume family, 1–4 m in height; its old branches are golden yellow and glossy, while the young branches are covered with white soft hairs. It is an important species for establishing windbreak and sand-fixing forests as well as soil and water conservation forests in Northwest China (Figure 2).
The field experiment was conducted from late August to early September 2024. A total of 12 plots, each 100 m × 100 m, were established across three gradient zones (fan apex, mid-fan, and fan edge) on 4 distinct channels of the alluvial fan (numbered 1–4 sequentially from southwest to northeast) (Figure 1). Each plot was then divided into 4 × 4 (16) subplots of 25 m × 25 m, yielding a total of 192 subplots.

2.2. Acquisition of UAV Aerial Images and Pre-Processing

To acquire orthophotos of the plots, we utilized a DJI Matrice 300 RTK drone, manufactured by Shenzhen Dajiang Innovation Technology Co., Ltd., headquartered in Shenzhen, Guangdong Province, China, equipped with a DJI MS600 Pro 6-band multispectral camera: blue (450 ± 30 nm), green (555 ± 27 nm), red (660 ± 22 nm), rededge (720 ± 10 nm), rededge_750 (750 ± 10 nm), and near-infrared (840 ± 30 nm) and a vertical FOV of 38.1°, with an effective resolution of 1.2 million pixels. The flight altitude was consistently 50 m, with a forward overlap of 75% and a side overlap of 80%. All flights were conducted between 11:00 and 14:00 Beijing Time under clear and cloud-free conditions to minimize potential errors caused by low solar angles and rapid illumination changes. Radiometric calibration was performed using DJI-certified standard reflectance calibration panels, which were imaged on the ground before each flight. These panels provide known reflectance values for the six camera bands (blue, green, red, rededge, rededge_750 and near-infrared), corresponding to 0.62, 0.61, 0.60, 0.61, 0.61, and 0.60, respectively. After image acquisition, the digital number (DN) values of each band were normalized and converted to surface reflectance using Pix4Dmapper software (version 4.5.6) to correct for sensor response differences and variations in illumination intensity. Within the software, preprocessing steps including image tile alignment, point cloud reconstruction, radiometric calibration, and orthorectification were completed, resulting in six-band multispectral orthomosaic images with a spatial resolution of 3.74 cm covering all sample plots.
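The panel-based conversion described above is handled inside Pix4Dmapper; conceptually it follows a single-panel empirical-line correction, which can be sketched as follows (the DN values here are hypothetical, and the real pipeline additionally compensates for vignetting, exposure, and illumination changes):

```python
import numpy as np

# Known panel reflectance for the red band (Section 2.2 gives 0.60);
# all DN values below are hypothetical illustration data.
PANEL_REFLECTANCE_RED = 0.60

def dn_to_reflectance(dn_band, dn_panel_mean, panel_reflectance):
    """Single-panel empirical-line correction: scale DN values so that
    the calibration panel's mean DN maps to its known reflectance."""
    return dn_band * (panel_reflectance / dn_panel_mean)

dn_red = np.array([[1200.0, 2400.0],
                   [3600.0, 4800.0]])  # hypothetical red-band DN tile
refl_red = dn_to_reflectance(dn_red, dn_panel_mean=4800.0,
                             panel_reflectance=PANEL_REFLECTANCE_RED)
```

The same scaling is applied per band, each with its own panel reflectance and panel DN, before index computation.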

2.3. Supervised Classification

2.3.1. ROI Creation

Small sample datasets are usually extracted using supervised classification and color threshold segmentation [51]. According to the structural characteristics of the vegetation in the study area, the maximum likelihood method was adopted to classify the plots into vegetation and non-vegetation categories [52]. The principles of ROI labeling are as follows [53] (Figure 3): (1) Vegetation in the study area was dominated by Caragana korshinskii, and areas with a uniform and continuous canopy distribution were selected for labeling. For the small number of herbaceous plants whose spectral characteristics differ significantly from those of the dominant species, mixed areas were avoided as much as possible to prevent interference with classification. The labeled samples should cover vegetation characteristics under different light conditions, including shaded and directly lit areas, while staying away from border zones or unvegetated areas, to ensure that the selected samples are homogeneous and representative. (2) Annotation of non-vegetation categories should select typical unvegetated surface features in the area, such as bare soil and sand. The samples need to be clear and homogeneous, avoiding mixed vegetation areas or transitional zones near vegetation edges. To better capture the diversity of non-vegetation categories, surface areas with different brightness and textures should be selected, including bright sand and dark bare soil.

2.3.2. Calculation of Sample Separation

To evaluate the ROIs, separability tools were used to calculate the Jeffreys–Matusita distance and Transformed Divergence. The values of these two parameters determine whether training sample selection is reasonable: when a parameter value is 1.9–2.0, there is good separability between samples, i.e., the samples are qualified; when it is 1.0–1.8, the samples must be reselected; and when it is 0.1–1.0, the difference between the two sample classes is very small and the classes should be merged into one [54]. In this study, the separability between the two feature classes in the ROIs was good, with both metrics greater than 1.85.
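For two classes modeled as Gaussians in spectral space, the Jeffreys–Matusita distance used above can be sketched as follows (a minimal numpy version; ENVI's separability tool may differ in implementation details):

```python
import numpy as np

def jeffries_matusita(m1, C1, m2, C2):
    """Jeffreys-Matusita distance between two Gaussian class models
    (mean vector and covariance matrix per class); range [0, 2]."""
    m1, m2 = np.asarray(m1, float), np.asarray(m2, float)
    C1, C2 = np.asarray(C1, float), np.asarray(C2, float)
    C = (C1 + C2) / 2.0
    d = m1 - m2
    # Bhattacharyya distance between the two class distributions
    B = 0.125 * d @ np.linalg.solve(C, d) + 0.5 * np.log(
        np.linalg.det(C) / np.sqrt(np.linalg.det(C1) * np.linalg.det(C2)))
    return 2.0 * (1.0 - np.exp(-B))
```

Identical classes give a distance of 0, while well-separated classes approach the upper bound of 2, which is why values above ~1.9 are read as good separability.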

2.3.3. FVC Estimation and Accuracy Evaluation

The FVC is estimated as the ratio of pixel counts, as in Equation (1). The sample set was manually corrected based on field surveys, and ground-truth ROIs were used for confusion matrix analysis to validate the classification results. Overall accuracy (OA) and the Kappa coefficient were used as accuracy evaluation metrics.
\( FVC = \frac{N_{veg}}{N_{total}} \)  (1)
\( OA = \frac{\sum_{i=1}^{n} x_{ii}}{N} \)  (2)
\( Kappa = \frac{N \sum_{i=1}^{n} x_{ii} - \sum_{i=1}^{n} x_{i+} x_{+i}}{N^{2} - \sum_{i=1}^{n} x_{i+} x_{+i}} \)  (3)
where \(N_{veg}\) is the number of vegetation pixels in an image, \(N_{total}\) is the total number of pixels in the image, \(n\) is the number of categories, \(x_{ii}\) is the number of correctly classified samples of category \(i\), \(N\) is the total number of reference samples, \(x_{+i}\) is the total number of actual reference samples of category \(i\), and \(x_{i+}\) is the total number of samples classified into category \(i\).
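The accuracy metrics above can be computed directly from a confusion matrix; a minimal sketch (the 2×2 matrix below is illustrative, not data from this study):

```python
import numpy as np

def overall_accuracy(cm):
    """OA: correctly classified samples over all reference samples."""
    return np.trace(cm) / cm.sum()

def kappa(cm):
    """Kappa coefficient from a confusion matrix
    (rows: predicted classes, columns: reference classes)."""
    N = cm.sum()
    observed = np.trace(cm) / N
    expected = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / N ** 2
    return (observed - expected) / (1.0 - expected)

# Illustrative 2x2 matrix (vegetation / non-vegetation)
cm = np.array([[90.0, 10.0],
               [5.0, 95.0]])
oa, k = overall_accuracy(cm), kappa(cm)
```

Kappa discounts the agreement expected by chance, which is why it is reported alongside OA for imbalanced classes such as sparse vegetation against dominant bare soil.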

2.4. Vegetation Index Method

This study selected 17 commonly used VIs, including 7 multispectral vegetation indices (MVIs) and 10 visible-light vegetation indices (VVIs) (Table 1). The binarization results of VI images obtained from UAV data using 10 threshold segmentation methods were comparatively analyzed, using the NDVI image of plot 1 as an example. In terms of spatial representation in the binarized images, the Isodata, Mean, and Minimum Error methods commonly produced pronounced striping artifacts when processing NDVI images, which failed to accurately reflect the actual spatial distribution of vegetation. In areas with relatively concentrated vegetation, these methods tended to convert scattered vegetation patches into continuous regions, leading to misclassification of background areas as vegetation and consequently overestimating vegetation cover. The Yen method almost entirely failed to identify any vegetation areas and can thus be considered ineffective [55]. The Moments method connected discrete vegetation patches into continuous blocks in some regions, resulting in unstable segmentation outcomes. To further quantify the applicability of each method, the classification accuracy of the ten thresholding methods was systematically evaluated against manually optimized ground-truth images (Table 2). The results showed that the Triangle [56] and Otsu methods performed best across all accuracy metrics, with minimal differences between them. The Maximum Entropy method ranked just below these two in overall suitability. The Otsu method is based on maximizing between-class variance, offers automatic threshold selection, simplicity, and efficiency, and is widely used to quickly distinguish vegetated from non-vegetated areas in remote sensing imagery [57,58]. It is also commonly used in studies on desert vegetation information extraction [59,60].
Furthermore, as the Otsu algorithm is embedded within the ENVI 5.6 software, it facilitates efficient, batch processing and is well-suited to the scale of remote sensing data used in this study. Therefore, the Otsu method was ultimately adopted in this study as the binarization approach for vegetation identification.
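Otsu's between-class-variance criterion, as applied here to binarize a VI image and derive FVC as the vegetated-pixel fraction (Equation (1)), can be sketched as follows (the bimodal NDVI sample below is synthetic, chosen to mimic sparse cover over bright soil):

```python
import numpy as np

def otsu_threshold(values, nbins=256):
    """Otsu's method: choose the threshold maximizing the
    between-class variance of the two resulting groups."""
    hist, edges = np.histogram(values, bins=nbins)
    hist = hist.astype(float)
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(hist)                  # class-0 (background) weight
    w1 = w0[-1] - w0                      # class-1 (foreground) weight
    mu0 = np.cumsum(hist * centers)
    total = mu0[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        m0 = mu0 / w0
        m1 = (total - mu0) / w1
        between = w0 * w1 * (m0 - m1) ** 2
    return centers[np.nanargmax(between)]

# Synthetic NDVI sample: bright-soil background plus sparse canopy pixels
rng = np.random.default_rng(0)
ndvi = np.concatenate([rng.normal(0.05, 0.02, 900),   # soil background
                       rng.normal(0.45, 0.05, 100)])  # sparse vegetation
t = otsu_threshold(ndvi)
fvc = (ndvi > t).mean()   # Equation (1): vegetated pixels / total pixels
```

With a clear bimodal histogram the threshold falls in the gap between the soil and canopy modes, so the recovered cover fraction matches the simulated 10% share of vegetation pixels.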
For accuracy validation, linear regression models between FVCT and each VI were established. Spearman’s correlation coefficient (r) and the coefficient of determination (R2) were used to determine whether these 17 VIs could be used for subsequent model predictions.

2.5. Machine Learning Regression Models

2.5.1. Model Selection and Its Application to Arid Areas

In recent years, spatial data mining, knowledge discovery, and ML have shown their advantages in remote sensing signal recognition and processing [73,74]. In view of the significant interference from non-vegetative factors such as soil and shadows, the strong spatial heterogeneity, and the limited number of samples in the study area, this study employs five ML methods for regression prediction: RF, Extreme Gradient Boosting (XGBoost), Least Absolute Shrinkage and Selection Operator (LASSO), SVM, and K-Nearest Neighbor (KNN). RF constructs multiple decision trees to solve regression and classification problems, effectively reducing model variance and overfitting [75]. It can determine the overall explanatory power of all detection factors and the relative importance of each factor in studies of severely degraded vegetation areas and arid regions, while producing classification results with relatively high accuracy [76,77]. XGBoost is a boosting method that constructs a series of decision trees step by step, with each tree trying to correct the errors of the previous one [78]. This method has been proven to produce reliable inversion results across different scales when applied to medium- and high-resolution satellite imagery [77]. KNN generates estimates by calculating the similarity between unobserved locations and plots, selecting the k most similar plots, and computing a weighted average of their observations [79]. The algorithm is simple to implement and easy to understand; it performs well on small datasets and does not require a training process. Ge et al. [80] found that RF had the highest classification accuracy for wasteland and that KNN had the highest computational efficiency, although its classification accuracy and stability were slightly lower than those of RF. KNN and XGBoost tend to overfit the training data when applied to datasets with limited sample sizes; hence, their training accuracy offers little reference value.
LASSO is a linear regression model that reduces model complexity through L1 regularization for feature selection [81]. Wang et al. [82] used the LASSO model to regress predicted NDVI against original values in the Northwest Arid Zone, and the results showed high consistency between the two, with a correlation as high as 0.9. SVM classifies data by finding an optimal hyperplane, has strong generalization ability, and performs especially well when the dataset is small and high-dimensional [83], but still has limitations in extracting pure shadows and sparse vegetation [84]. The five ML models selected in this study exhibit strong adaptability and flexibility, can handle complex nonlinear relationships, have robust noise resistance, and demonstrate excellent prediction performance.

2.5.2. Model Validation and Evaluation

To comprehensively evaluate the performance and generalization ability of the machine learning models, this study adopted a multi-level model validation strategy. First, 5-fold cross-validation was employed to assess the baseline performance of the initial models. Specifically, the complete dataset was evenly divided into five subsets. In each iteration, one subset was used as the validation set while the remaining four were used for training. This process was repeated five times, and the average of the validation results was calculated to reduce evaluation variability caused by different data splits, thereby enhancing the stability and reliability of model assessment [85]. Building on this, Grid Search with Cross-Validation (GridSearchCV) was introduced to further improve model prediction performance and identify the optimal hyperparameter combination [86]. A parameter grid containing multiple combinations was constructed on the training set, and internal 5-fold cross-validation was conducted to evaluate the performance of each combination. This process effectively enhanced the model’s adaptability across different feature spaces. Finally, an independent test set, accounting for 30% of the total dataset, was used to evaluate the optimized model. Several metrics were computed, including the coefficient of determination (R2), root mean square error (RMSE), and mean absolute error (MAE), to comprehensively assess the model’s generalization performance on unseen data. This multi-stage validation process ensured the robustness of model training and effectively mitigated the risk of overfitting.
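The validation strategy above (5-fold cross-validation, GridSearchCV on the training split, a 30% hold-out test set, and R2/RMSE/MAE metrics) can be sketched with scikit-learn; the data below are synthetic stand-ins for the 192 subplots and 18 feature parameters, and the parameter grid is illustrative:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.metrics import r2_score, mean_squared_error, mean_absolute_error

rng = np.random.default_rng(42)
X = rng.uniform(size=(192, 18))                        # synthetic feature table
y = 0.05 + 0.15 * X[:, 0] + rng.normal(0, 0.01, 192)   # synthetic FVC target

# 70/30 split: 30% held out as the independent test set
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Grid search with internal 5-fold cross-validation on the training set
grid = GridSearchCV(
    RandomForestRegressor(random_state=0),
    param_grid={"n_estimators": [100, 300], "max_depth": [None, 6]},
    cv=5, scoring="r2")
grid.fit(X_tr, y_tr)

# Evaluate the tuned model on unseen data
pred = grid.best_estimator_.predict(X_te)
r2 = r2_score(y_te, pred)
rmse = mean_squared_error(y_te, pred) ** 0.5
mae = mean_absolute_error(y_te, pred)
```

The same skeleton applies to the other four models by swapping the estimator and its parameter grid.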
\( R^{2} = 1 - \frac{\sum_{i=1}^{n}(y_i - \hat{y}_i)^{2}}{\sum_{i=1}^{n}(y_i - \bar{y})^{2}} \)  (4)
\( RMSE = \sqrt{\frac{\sum_{i=1}^{n}(y_i - \hat{y}_i)^{2}}{n}} \)  (5)
\( MAE = \frac{\sum_{i=1}^{n}|y_i - \hat{y}_i|}{n} \)  (6)
where \(n\) is the number of samples, \(y_i\) is the actual observation, \(\hat{y}_i\) is the model prediction, and \(\bar{y}\) is the mean of the actual observations.

3. Results Analysis

3.1. Supervised Classification

In this study, Maximum Likelihood Classification (MLC) was used to classify plot-scale FVC based on the spectral characteristics of the UAV images and the spatial distribution of vegetation. The results show that the FVC of the plots obtained with this method ranges from 5% to 20%, the overall accuracies are all higher than 91%, and the Kappa coefficients are all greater than 0.8 (Table 3). MLC, as a classical statistical classification method, relies on training samples to estimate the probability distribution of each class. It is well-suited for scenarios with a large number of spectral bands and clear class separability [87]. These results indicate that in natural desert sparse vegetation areas, supervised classification of high-spatial-resolution UAV images with manual labeling and post-classification optimization can yield classification results with high confidence and stability that accurately reflect the actual FVC of the sample site [88]. Therefore, the manually corrected plot FVC from the supervised classification results was used as the true value in the subsequent analyses.
The plot-scale FVC obtained through supervised classification shows that channels at different positions on the alluvial fan exhibit different cover characteristics with decreasing elevation (from fan apex to mid-fan to fan edge). The FVC of channels 1 and 3 increased continuously, that of channel 2 first increased and then decreased, and that of channel 4 decreased throughout, which is closely related to the water redistribution process dominated by microtopographic changes [46]. In the floodplain fan selected for this study, channels 1 and 3 are wide, indicating a large cross-sectional flow area, high flow volume, relatively fast flow, and good moisture conditions that improve further downstream, favoring vegetation growth; their cover therefore increased markedly. Channel 2 is narrow despite lying in the middle of the fan, indicating a slightly higher elevation than channels 1 and 3 and thus a smaller water allocation; however, because it lies in the center of the floodplain fan, the water allocated at the fan apex and mid-fan is comparatively ample, so FVC is higher there, while the reduced allocation at the fan edge lowers FVC. Channel 4 lies at the very edge of the flood fan, close to the mountain, and receives the poorest moisture conditions; its FVC therefore decreases with distance from the outcrop.

3.2. Otsu Vegetation Index Method

All 17 vegetation index images of the 12 plots were binarized and the FVC was calculated using Equation (1); the FVC values obtained from the VI-method inversion are denoted FVCVI. Overall, FVCMVI exhibits a spatial distribution pattern consistent with FVCT across the top, middle, and edge of the fan, although instances of overestimation and underestimation are also observed. The FVC values obtained from the 10 VVIs show significant discrepancies from the ground-truth values, with prominent outliers (Figure 4). This indicates that visible spectral indices cannot adequately characterize the spectral response of the plant canopy, limiting their applicability and making accurate extraction of sparse desert FVC with visible indices alone difficult [89]. In contrast, incorporating red-edge and NIR bands significantly enhances the indices’ sensitivity to canopy structure and biochemical parameters, effectively suppressing background noise and improving inversion accuracy.
The relationship between the plot FVCT and the inverted value of each VI was evaluated using Spearman’s correlation coefficient (r) and goodness-of-fit (R2). The results showed that the correlations between FVCT and the MVIs (SRRededge, NDVI, MSAVI, ARVI, SAVI, RVI) are significantly positive. Among these MVIs, FVCT shows the highest correlation with NDVI (r = 0.85); its correlation coefficients with the other MVIs range from 0.79 to 0.82. The correlations between FVCT and the VVIs NGBDI, RGRI, and GRRI are moderately significant and positive (r close to 0.5), while those with VARI, CIVE, and EGRBDI are low (r < 0.35), indicating weak correlation; in particular, the correlation coefficient between FVCT and EGRBDI is 0.06, implying virtually no correlation (Figure 5). This study conducted linear regressions between FVCT and the various VIs. Preliminary results revealed heteroscedasticity in some variables (Appendix A). To address this, weighted least squares (WLS) regression was applied to refit the models. The WLS approach improved goodness-of-fit and yielded statistically more significant regression coefficients (p values well below 0.001, except for EGRBDI). Although WLS partially enhanced model stability, heteroscedasticity was not fully eliminated, indicating potential limitations of the linear modeling approach. The best fit is observed between FVCT and NDVI (R2 = 0.78), followed by SAVI (R2 = 0.77) and SRRededge (R2 = 0.73); the fits for ARVI and RVI are identical (R2 = 0.70). The goodness-of-fit between FVCT and NGRDI, NGBDI, RGRI, GRRI, and MGRVI ranges between 0.2 and 0.35, the remaining indices are lower still, and EGRBDI shows the worst fit (R2 = 0.01). Overall, FVCT was significantly correlated with the MVIs (SAVI, NDVI, MSAVI, SRRededge, RVI, and ARVI), while its fit to the VVIs was extremely poor.
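The weighted least squares refit mentioned above can be sketched in closed form (the weights here are arbitrary positive values for illustration; in practice they would be derived from the estimated error variances):

```python
import numpy as np

def wls_fit(x, y, w):
    """Weighted least squares fit of y = b0 + b1 * x; w are positive
    observation weights (larger weight = more trusted observation)."""
    X = np.column_stack([np.ones_like(x), x])
    W = np.diag(w)
    # Solve the weighted normal equations (X^T W X) beta = X^T W y
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])   # e.g., a VI value per plot
y = 2.0 + 3.0 * x                          # noise-free line for illustration
w = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # illustrative weights
b0, b1 = wls_fit(x, y, w)
```

Down-weighting high-variance observations is what lets WLS stabilize the regression coefficients when residual variance grows with the VI value.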
The application of visible bands in this region primarily faces challenges related to background interference and signal dilution caused by sparse vegetation canopies. Due to the sparse and discontinuous distribution of vegetation canopies, non-vegetated features such as soil, bare ground, and rocks further enhance the spectral mixing effect, making vegetation signals within individual pixels easily masked by the reflectance characteristics of the background. In addition, VVIs mainly rely on enhanced reflectance in the green band [90]. When exposed desert surfaces and rocks exhibit strong reflectance, these indices may overestimate FVC (Figure 4). In contrast, the near-infrared (NIR) band is more sensitive to the internal structure of vegetation [91], and the distinct differences in reflectance between vegetation and background elements (such as soil, rocks, and litter) in the NIR region make it possible for MVIs such as NDVI and SAVI to more accurately capture vegetation information in desert areas. This significantly improves the detectability of low-cover vegetation and provides more reliable vegetation estimates [92].

3.3. Constructing Machine Learning Models Using Multiple Feature Parameters

In addition to the six significant VIs (SAVI, NDVI, MSAVI, SRRededge, RVI, and ARVI), the feature parameters input to the ML regression models in this study included further spectral features (the reflectance of the six UAV image bands), texture features (the gray-level co-occurrence matrix mean (GLCM_mean), homogeneity, contrast, and entropy), microtopographic features, and spatial distribution characteristics (fan apex, fan middle, and fan edge), for a total of 18 parameters; the FVCT of the corresponding samples served as the dependent variable. The study constructed five multidimensional-feature ML regression models using FVCT as the dependent variable and the spectral, texture, and microtopographic features as the independent variables. The results show (Table 4, Figure 6) that, among all the tested models, the RF model demonstrated the best fitting performance (R2 = 0.876, RMSE = 0.020, MAE = 0.016) and exhibited high robustness to noisy data and outliers; the SVM model ranked second (R2 = 0.874, RMSE = 0.020, MAE = 0.017). Overall, the spatial distribution of plots had little impact on the performance of the ML models. Among them, LASSO was the most stable, while the RF model not only exhibited high stability but also achieved the highest simulation accuracy and the best regression performance, making it the most suitable for estimating sparse FVC in desert regions.
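The RF regression setup described above can be sketched as follows; scikit-learn is assumed, and the synthetic 18-feature matrix, sample size, and hyperparameters are illustrative placeholders rather than the paper’s actual configuration:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error, mean_absolute_error

rng = np.random.default_rng(42)

# Synthetic stand-in for the 18-parameter feature matrix (VIs, band reflectance,
# texture, microtopography, spatial position) and the FVC_T target.
n, n_features = 192, 18
X = rng.random((n, n_features))
y = 0.15 * X[:, 0] + 0.05 * X[:, 1] + rng.normal(0, 0.01, n)  # low-cover FVC

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

rf = RandomForestRegressor(n_estimators=500, random_state=0)
rf.fit(X_tr, y_tr)
pred = rf.predict(X_te)

# The three accuracy metrics reported in Table 4.
r2 = r2_score(y_te, pred)
rmse = mean_squared_error(y_te, pred) ** 0.5
mae = mean_absolute_error(y_te, pred)
print(f"R2 = {r2:.3f}, RMSE = {rmse:.3f}, MAE = {mae:.3f}")
```

The same train/evaluate loop applies to the other four models by swapping in `SVR`, `Lasso`, `KNeighborsRegressor`, or an XGBoost regressor.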
To enhance the interpretability and transparency of the models, this study employed SHAP (SHapley Additive exPlanations) to evaluate the contribution of different features to five ML models [93]. SHAP is a model-agnostic interpretation method based on cooperative game theory, which attributes the prediction of an instance to its individual features by computing their marginal contributions across all possible feature combinations [94]. Compared with traditional importance metrics, SHAP provides consistent and locally accurate explanations, making it particularly suitable for interpreting complex machine learning models, including ensemble methods [95]. The SHAP beeswarm plot for the RF model (Figure 6b) indicates that spectral variables—such as NDVI, SAVI, and MSAVI—contribute significantly more than texture features (e.g., GLCM_mean, Contrast). VIs like NDVI, SAVI, and MSAVI exhibited the highest explanatory power in the RF model, highlighting their strong capability in capturing sparse vegetation dynamics in arid regions. Meanwhile, raw spectral reflectance features and certain texture features showed moderate importance, whereas DEM and spatial distribution-related features made relatively minor contributions. The inclusion of such low-contribution variables may increase model complexity without substantially improving prediction accuracy. Feature importance analysis across models revealed that spatial distribution features held relatively high importance only in the SVM model, while their influence was negligible in the other four. Given their limited overall impact, spatial distribution features were excluded from the comparative feature importance statistics. To comprehensively assess the relative contribution of different feature types, four categories—VIs, spectral reflectance, texture, and DEM—were evaluated across five ML models (Figure 7). Results show that spectral features consistently played a dominant role. 
In the SVM model, spectral reflectance made the greatest relative contribution, while in the remaining four models, VIs were the most influential. Both the RF and KNN models demonstrated a higher capacity to integrate texture and DEM features, reflecting their adaptability to high-dimensional inputs. In conclusion, for FVC estimation in arid and sparsely vegetated areas, VIs that include the NIR band should be prioritized. In addition, selecting complementary feature types based on the characteristics of each model can further improve model robustness and generalization performance.

4. Discussion

4.1. Extraction of FVC in Plots

Vegetation classification of sparsely vegetated areas based on UAV and maximum likelihood methods has been shown to be reliable [96,97,98]. In this study, the classification accuracy results from 12 sample plots demonstrated the overall good performance of the classification method. To assess the consistency between the classification results and the actual ROI labels in boundary regions, we introduced the Average Boundary Mismatch Ratio (ABMR) as an auxiliary metric. Although ABMR is not a standard accuracy metric, similar approaches have been employed in previous studies to evaluate classification boundary shifts or uncertainties [99], offering an effective means to reflect the influence of edge heterogeneity on supervised classification results. The observed ABMR values ranged between 0.5 and 0.7, indicating that conventional metrics such as OA and Kappa coefficient, while effective in assessing overall classification performance, are insufficient to reveal the specific impact of spatial heterogeneity on classification errors in boundary areas. In contrast, the mismatch ratio serves as a complementary indicator, providing a more sensitive reflection of error accumulation in edge regions. This issue is particularly evident in shrub–bare land transitional zones, where severe spectral mixing between vegetation and non-vegetation leads to greater classification uncertainty along the boundaries. Therefore, we recommend that future studies evaluate classification accuracy separately for boundary and interior regions to more comprehensively characterize the applicability and robustness of classification algorithms across different spatial positions, especially in ecotonal areas with pronounced land cover transitions and strong spatial heterogeneity. 
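Since the text does not give a formula for ABMR, one plausible formulation, a mismatch ratio computed inside a morphological buffer around the reference class boundaries, can be sketched as follows; the buffer definition and width are assumptions:

```python
import numpy as np
from scipy.ndimage import binary_dilation, binary_erosion

def boundary_mismatch_ratio(pred, truth, width=2):
    """Fraction of disagreeing pixels inside a buffer around class boundaries.

    One plausible reading of the ABMR described in the text: the buffer is the
    dilation of the reference mask minus its erosion (both by 'width' pixels),
    i.e., a band straddling the vegetation/background boundary. The exact
    definition and width are assumptions, not taken from the paper.
    """
    boundary = binary_dilation(truth, iterations=width) & ~binary_erosion(truth, iterations=width)
    return float((pred[boundary] != truth[boundary]).mean())

# Toy example: a square shrub patch whose predicted edge is shifted by one pixel,
# mimicking the boundary disagreement seen in shrub-bare land transition zones.
truth = np.zeros((20, 20), dtype=bool)
truth[5:15, 5:15] = True
pred = np.zeros_like(truth)
pred[6:16, 6:16] = True

print(boundary_mismatch_ratio(pred, truth))
```

Averaging this ratio over all plots would yield a plot-level ABMR comparable to the 0.5–0.7 range reported above.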
In addition, we suggest adopting more robust classification approaches in boundary regions to improve accuracy—such as deep learning methods that integrate spatial and spectral features (e.g., U-Net, DeepLab series), or ensemble learning techniques (e.g., integrated RF or XGBoost combined with spatial filtering strategies). These approaches are better equipped to capture the complex structural features of boundary regions and hold promise for enhancing classification accuracy in such challenging areas.

4.2. Discussion on the Applicability of VIs in Arid Regions with Sparse Vegetation

A previous study [45] applied some of the VIs used here in desert-oasis ecotones encompassing both artificial and natural vegetation types. The results indicated that FVC values across sample plots varied dynamically within the range (0, 100%), with high sample diversity and smooth, stable overall variation. Moreover, the extracted vegetation information was minimally affected by soil background, demonstrating the overall robustness of these indices. However, the current study area is characterized by a single species with low FVC (<20%), typical of arid, sparse vegetation. The physiological features of the dominant species include small leaves covered with white pubescence, severely degraded and yellowed branches, and low chlorophyll content. These features make the visible spectral bands more susceptible to background, topographic, and atmospheric influences. Such factors amplify signal fluctuations, resulting in unstable variations in the values of the VVIs and thereby hindering effective discrimination between vegetation and soil [100]. In this study, the linear fit between EGRBDI and the ground-truth FVC was the poorest (R2 = 0.01). In contrast, MVIs incorporate the NIR band, which is more sensitive to the internal structure and biomass of plant leaves. Under sparse vegetation conditions, this leads to a higher signal-to-noise ratio and consequently improves estimation accuracy [101]. Additionally, desert soils are often dry and sandy or silty, with high reflectance, particularly in the red spectral region. Such background reflectance tends to destabilize the performance of VVIs, making their response to vegetation changes weak or distorted [102]. By introducing spectral bands that exhibit greater contrast between vegetation and soil, such as the NIR or red-edge bands, MVIs can effectively suppress soil background interference [103] and enhance the detection of sparse, patchy vegetation.
In this study, the estimation accuracy of VVIs was significantly lower than that of MVIs. Therefore, it is recommended that VIs incorporating NIR bands be prioritized in similar environments to enhance spectral contrast between vegetation and soil, thereby improving the accuracy and stability of vegetation identification and FVC estimation.

4.3. Assessment of Model Transferability and Ecological Adaptability in ML

Although numerous studies have evaluated the performance of ML models across different ecological systems, most findings are still region-specific and exhibit certain limitations in generalizability. For instance, SVM achieved 95% classification accuracy for sparse vegetation in the Sabah Al-Ahmad Nature Reserve [104], while the RF model reached 94% accuracy in distinguishing urban-natural mixed vegetation types in the UAE, including wetland vegetation, urban vegetation, cropland, and artificial/natural forests [105]. Decision trees yielded an accuracy of 87% in estimating FVC of dominant shrubs in typical grassland ecosystems [42]. Additionally, XGBoost and LASSO models have demonstrated high accuracy in long-term vegetation monitoring in salt marshes and arid regions, respectively [106,107]. These studies suggest that various models can perform well under specific ecological contexts; however, their universality and transferability across regions with differing ecological conditions, vegetation structure complexities, and spatial resolutions remain uncertain. In the present study, the RF model exhibited the best performance under sparse, structurally simple, and spectrally distinct vegetation conditions typical of desert regions. This is largely attributable to the RF model’s strengths in handling high-dimensional features, integrating multi-source data, and tolerating noise. Nevertheless, due to its “ensemble of static decision trees” structure, RF may encounter limitations when applied to highly heterogeneous or spatiotemporally dynamic ecosystems—such as seasonal grasslands, salt marshes, or areas with intense anthropogenic disturbance—where it may struggle to delineate fuzzy class boundaries or generalize well, especially when training samples fail to capture the full spectrum of landscape variability. This increases the risk of overfitting and reduced extrapolation performance. 
Therefore, while RF performs favorably in the study area, its applicability may be relatively constrained and requires systematic validation across a broader range of desert ecosystems. Given the high spatial heterogeneity in vegetation cover, geomorphology, soil background, and moisture availability in desert landscapes, model transferability becomes particularly critical. The SVM’s ability to define decision boundaries under small sample conditions, XGBoost’s iterative optimization mechanism, and LASSO’s feature selection strength may offer better adaptability in different ecological scenarios. Future research should thus investigate the robustness and transferability of various ML models along ecological gradients across broader spatial scales in arid regions, to avoid overgeneralization based solely on localized results. In particular, the applicability of the RF model should be further tested in more heterogeneous desert sub-regions—such as interdune depressions, saline-alkaline patches, and lowland oases—to determine its performance boundaries and provide more generalizable guidance for model selection across diverse ecological backgrounds.

4.4. Limitations and Future Prospects

Previous research has shown that RF achieves high classification and regression accuracy for large-scale sparse-vegetation mapping in arid regions [45,108]. Under the present experimental conditions—characterized by arid environments, sparse vegetation, and a small training sample—RF again delivered the best regression performance, further confirming its robustness and suitability in complex ecological settings. Nevertheless, this study focused exclusively on a single sparse shrub community. Given the physiological and spectral diversity of desert vegetation, the advantages of RF remain to be verified across other arid ecosystems and multi-species scenarios. Future work will expand the sample size and incorporate additional vegetation types to systematically assess and enhance the model’s generalization capacity. Moreover, integrating LiDAR and hyperspectral data to increase spatial and spectral resolution, together with advanced deep-learning approaches, is expected to further improve the accuracy of FVC retrieval in desert landscapes.

5. Conclusions

In this study, we focused on a floodplain fan dominated by a single constructive species, Caragana korshinskii Kom, and investigated it by dividing the area into three gradients. High-resolution UAV images and a supervised classification method were employed to generate FVC results with a high degree of confidence, achieved through manual annotation and subsequent post-classification optimization. We compared linear regression models between FVCT and each of 17 vegetation indices and used them to select the input parameters of the training sets for the subsequent ML models. Five ML models were developed using multispectral vegetation indices combined with the spectral reflectance of the six bands, texture features, and topographic features of the UAV imagery. We then explored and compared the feasibility and applicability of these models for estimating vegetation information in low-FVC areas. The findings are presented below:
(1)
Regression analysis between the UAV-derived VIs and FVCT showed that the MVIs provided a notably better fit than the VVIs. Furthermore, the MVIs exhibited a significant positive correlation with FVCT, whereas the correlations between the VVIs and FVCT were extremely weak, and even negative in some cases. MVIs were thus relatively more effective in capturing vegetation variation in low-FVC areas and provided more accurate vegetation information. These findings offer useful references for future vegetation studies in similar regions.
(2)
Among the ML regression models examined, the RF and LASSO models exhibited strong stability across both the training and testing sets, regardless of changes in moisture or elevation gradients; of the two, the RF model achieved the best fitting performance. XGBoost and KNN showed weak generalization ability and low prediction accuracy, and the prediction accuracy of SVM fluctuated with changes in the gradient, demonstrating insufficient stability; these three models were therefore less suitable for this study area. This research confirms that the RF model remains the optimal choice under conditions of severe desertification with low FVC and small sample sizes.

Author Contributions

J.H.: writing—original draft, investigation, methodology, software, conceptualization, visualization, and writing—review and editing; J.Z. (Jinlei Zhu): conceptualization and writing—review and editing; X.C.: investigation, writing—review and editing, supervision, and funding acquisition; L.X.: investigation; Z.Q.: investigation; Y.L.: investigation; X.W.: investigation; J.Z. (Jiaxiu Zou): investigation. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Science & Technology Fundamental Resources Investigation Program (Grant No. 2023FY100703), the Key R&D Projects in Xinjiang Autonomous Region (2024B03025-1), a Program from the National Forestry and Grassland Administration (202401), and the National Natural Science Foundation of China (41971398).

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors upon request.

Acknowledgments

We declare that Jie Han and Jinlei Zhu contributed equally to this work.

Conflicts of Interest

The authors declare that they have no known competing financial interests or personal relationships.

Appendix A

Table A1. Coefficient of determination (R2) for the linear relationship between FVCT and 17 VIs.

VI        | F (F-Statistic)  | p Value | R2   | Heteroscedasticity | WLS_R2
SRRededge | F(1, 190) = 160  | <0.001  | 0.61 | TRUE               | 0.73
NDVI      | F(1, 190) = 542  | <0.001  | 0.66 | TRUE               | 0.78
MSAVI     | F(1, 190) = 203  | <0.001  | 0.62 | FALSE              | /
ARVI      | F(1, 190) = 327  | <0.001  | 0.59 | TRUE               | 0.70
SAVI      | F(1, 190) = 238  | <0.001  | 0.68 | TRUE               | 0.77
RVI       | F(1, 190) = 404  | <0.001  | 0.56 | TRUE               | 0.70
DVI       | F(1, 190) = 24   | <0.001  | 0.16 | FALSE              | /
VDVI      | F(1, 190) = 67   | <0.001  | 0.19 | FALSE              | /
EXG       | F(1, 190) = 56   | <0.001  | 0.15 | FALSE              | /
NGRDI     | F(1, 190) = 32   | <0.001  | 0.29 | TRUE               | 0.33
NGBDI     | F(1, 190) = 33   | <0.001  | 0.29 | FALSE              | /
RGRI      | F(1, 190) = 16   | <0.001  | 0.28 | TRUE               | 0.29
GRRI      | F(1, 190) = 33   | <0.001  | 0.29 | FALSE              | /
VARI      | F(1, 190) = 16   | <0.001  | 0.16 | TRUE               | 0.10
CIVE      | F(1, 190) = 17   | <0.001  | 0.17 | TRUE               | 0.17
MGRVI     | F(1, 190) = 32   | <0.001  | 0.29 | TRUE               | 0.33
EGRBDI    | F(1, 190) = 2    | >0.05   | 0.01 | TRUE               | 0.01

References

  1. Goetz, A.F.; Rock, B.N.; Rowan, L.C. Remote sensing for exploration; an overview. Econ. Geol. 1983, 78, 573–590. [Google Scholar] [CrossRef]
  2. Wu, H.; Li, Z.-L. Scale issues in remote sensing: A review on analysis, processing and modeling. Sensors 2009, 9, 1768–1793. [Google Scholar] [CrossRef]
  3. Li, L.; Xin, X.; Zhao, J.; Yang, A.; Wu, S.; Zhang, H.; Yu, S. Remote sensing monitoring and assessment of global vegetation status and changes during 2016–2020. Sensors 2023, 23, 8452. [Google Scholar] [CrossRef]
  4. Xie, Y.; Sha, Z.; Yu, M. Remote sensing imagery in vegetation mapping: A review. J. Plant Ecol. 2008, 1, 9–23. [Google Scholar] [CrossRef]
  5. Haertel, V.F.; Shimabukuro, Y. Spectral linear mixing model in low spatial resolution image data. IEEE Trans. Geosci. Remote Sens. 2005, 43, 2555–2562. [Google Scholar] [CrossRef]
  6. Zhu, W.-Q.; Pan, Y.-Z.; Zhang, J.-S. Estimation of net primary productivity of Chinese terrestrial vegetation based on remote sensing. Chin. J. Plant Ecol. 2007, 31, 413. [Google Scholar]
  7. Räsänen, A.; Virtanen, T. Data and resolution requirements in mapping vegetation in spatially heterogeneous landscapes. Remote Sens. Environ. 2019, 230, 111207. [Google Scholar] [CrossRef]
  8. Jiapaer, G.; Xi, C.; Bao, A.-M. Coverage extraction and up-scaling of sparse desert vegetation in arid area. Yingyong Shengtai Xuebao 2009, 20, 2925–2934. [Google Scholar]
  9. Jia, X.; Li, J.; Ye, J.; Fei, B.; Bao, F.; Xu, X.; Zhang, L.; Wu, B. Estimating carbon storage of desert ecosystems in China. Int. J. Digit. Earth 2023, 16, 4113–4125. [Google Scholar] [CrossRef]
  10. Zhao, X.; Tan, S.; Li, Y.; Wu, H.; Wu, R. Quantitative analysis of fractional vegetation cover in southern Sichuan urban agglomeration using optimal parameter geographic detector model, China. Ecol. Indic. 2024, 158, 111529. [Google Scholar] [CrossRef]
  11. Munson, S.; Webb, R.; Hubbard, J. A comparison of methods to assess long-term changes in Sonoran Desert vegetation. J. Arid Environ. 2011, 75, 1228–1231. [Google Scholar] [CrossRef]
  12. Shupe, S.M.; Marsh, S.E. Cover-and density-based vegetation classifications of the Sonoran Desert using Landsat TM and ERS-1 SAR imagery. Remote Sens. Environ. 2004, 93, 131–149. [Google Scholar] [CrossRef]
  13. Wei, H.; Yang, X.; Zhang, B.; Ding, F.; Zhang, W.; Liu, S.; Chen, F. Hyper-spectral characteristics of rolled-leaf desert vegetation in the Hexi Corridor, China. J. Arid Land 2019, 11, 332–344. [Google Scholar] [CrossRef]
  14. Zhang, C.; Chen, Y.; Lu, D. Detecting fractional land-cover change in arid and semiarid urban landscapes with multitemporal Landsat Thematic mapper imagery. GIScience Remote Sens. 2015, 52, 700–722. [Google Scholar] [CrossRef]
  15. Deng, M.; Meng, X.; Lu, Y.; Li, Z.; Zhao, L.; Niu, H.; Chen, H.; Shang, L.; Wang, S.; Sheng, D. The response of vegetation to regional climate change on the Tibetan Plateau based on remote sensing products and the dynamic global vegetation model. Remote Sens. 2022, 14, 3337. [Google Scholar] [CrossRef]
  16. Kogan, F.N. Remote sensing of weather impacts on vegetation in non-homogeneous areas. Int. J. Remote Sens. 1990, 11, 1405–1419. [Google Scholar] [CrossRef]
  17. Broge, N.H.; Leblanc, E. Comparing prediction power and stability of broadband and hyperspectral vegetation indices for estimation of green leaf area index and canopy chlorophyll density. Remote Sens. Environ. 2001, 76, 156–172. [Google Scholar] [CrossRef]
  18. Xue, J.; Su, B. Significant remote sensing vegetation indices: A review of developments and applications. J. Sens. 2017, 2017, 1353691. [Google Scholar] [CrossRef]
  19. Yan, K.; Gao, S.; Chi, H.; Qi, J.; Song, W.; Tong, Y.; Mu, X.; Yan, G. Evaluation of the vegetation-index-based dimidiate pixel model for fractional vegetation cover estimation. IEEE Trans. Geosci. Remote Sens. 2021, 60, 1–14. [Google Scholar] [CrossRef]
  20. Jiapaer, G.; Chen, X.; Bao, A.J.A.; Meteorology, F. A comparison of methods for estimating fractional vegetation cover in arid regions. Agric. For. Meteorol. 2011, 151, 1698–1710. [Google Scholar] [CrossRef]
  21. Zhaoming, W. A theoretical review of vegetation extraction methods based on UAV. IOP Conf. Ser. Earth Environ. Sci. 2020, 546, 032019. [Google Scholar] [CrossRef]
  22. Huete, A.R.; Liu, H.; van Leeuwen, W.J. The use of vegetation indices in forested regions: Issues of linearity and saturation. In Proceedings of the IGARSS’97. 1997 IEEE International Geoscience and Remote Sensing Symposium Proceedings. Remote Sensing—A Scientific Vision for Sustainable Development, Singapore, 3–8 August 1997; pp. 1966–1968. [Google Scholar]
  23. Dawelbait, M.; Morari, F. Limits and potentialities of studying dryland vegetation using the optical remote sensing. Ital. J. Agron. 2008, 3, 97–106. [Google Scholar] [CrossRef]
  24. Huete, A.R. A soil-adjusted vegetation index (SAVI). Remote Sens. Environ. 1988, 25, 295–309. [Google Scholar] [CrossRef]
  25. Qi, J.; Chehbouni, A.; Huete, A.R.; Kerr, Y.H.; Sorooshian, S. A modified soil adjusted vegetation index. Remote Sens. Environ. 1994, 48, 119–126. [Google Scholar] [CrossRef]
  26. Fadaei, H.; Suzuki, R.; Sakai, T.; Torii, K. A proposed new vegetation index, the total ratio vegetation index (TRVI), for arid and semi-arid regions. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2012, 39, 403–407. [Google Scholar] [CrossRef]
  27. Almalki, R.; Khaki, M.; Saco, P.M.; Rodriguez, J.F. Monitoring and mapping vegetation cover changes in arid and semi-arid areas using remote sensing technology: A review. Remote Sens. 2022, 14, 5143. [Google Scholar] [CrossRef]
  28. Elvidge, C.D.; Lyon, R.J. Influence of rock-soil spectral variation on the assessment of green biomass. Remote Sens. Environ. 1985, 17, 265–279. [Google Scholar] [CrossRef]
  29. Khdery, G.A.; Farg, E.; Arafat, S.M. Natural vegetation cover analysis in Wadi Hagul, Egypt using hyperspectral remote sensing approach. Egypt. J. Remote Sens. Space Sci. 2019, 22, 253–262. [Google Scholar] [CrossRef]
  30. Aburaed, N.; Alkhatib, M.Q.; Marshall, S.; Zabalza, J.; Al Ahmad, H. A review of spatial enhancement of hyperspectral remote sensing imaging techniques. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2023, 16, 2275–2300. [Google Scholar] [CrossRef]
  31. Li, X.-S.; Gao, Z.-H.; Li, Z.-Y.; Bai, L.-N.; Wang, B.-Y. Estimation of sparse vegetation coverage in arid region based on hyperspectral mixed pixel decompositon. Yingyong Shengtai Xuebao 2010, 21, 152–158. [Google Scholar]
  32. Okin, G.S.; Roberts, D.A.; Murray, B.; Okin, W.J. Practical limits on hyperspectral vegetation discrimination in arid and semiarid environments. Remote Sens. Environ. 2001, 77, 212–225. [Google Scholar] [CrossRef]
  33. Sarić, R.; Nguyen, V.D.; Burge, T.; Berkowitz, O.; Trtílek, M.; Whelan, J.; Lewsey, M.G.; Čustović, E. Applications of hyperspectral imaging in plant phenotyping. Trends Plant Sci. 2022, 27, 301–315. [Google Scholar] [CrossRef]
  34. Melville, B.; Fisher, A.; Lucieer, A. Ultra-high spatial resolution fractional vegetation cover from unmanned aerial multispectral imagery. Int. J. Appl. Earth Obs. Geoinf. 2019, 78, 14–24. [Google Scholar] [CrossRef]
  35. Choi, S.K.; Lee, S.K.; Jung, S.H.; Choi, J.W.; Choi, D.Y.; Chun, S.J. Estimation of fractional vegetation cover in sand dunes using multi-spectral images from fixed-wing UAV. J. Korean Soc. Surv. Geod. Photogramm. Cartogr. 2016, 34, 431–441. [Google Scholar] [CrossRef]
  36. Dash, J.P.; Pearse, G.D.; Watt, M.S. UAV multispectral imagery can complement satellite data for monitoring forest health. Remote Sens. 2018, 10, 1216. [Google Scholar] [CrossRef]
  37. Zhao, Y.; Liu, X.; Wang, Y.; Zheng, Z.; Zheng, S.; Zhao, D.; Bai, Y. UAV-based individual shrub aboveground biomass estimation calibrated against terrestrial LiDAR in a shrub-encroached grassland. Int. J. Appl. Earth Obs. 2021, 101, 102358. [Google Scholar] [CrossRef]
  38. Osco, L.P.; De Arruda, M.d.S.; Junior, J.M.; Da Silva, N.B.; Ramos, A.P.M.; Moryia, É.A.S.; Imai, N.N.; Pereira, D.R.; Creste, J.E.; Matsubara, E.T. A convolutional neural network approach for counting and geolocating citrus-trees in UAV multispectral imagery. ISPRS J. Photogramm. Remote Sens. 2020, 160, 97–106. [Google Scholar] [CrossRef]
  39. Zhang, L.; Zhang, H.; Niu, Y.; Han, W. Mapping maize water stress based on UAV multispectral remote sensing. Remote Sens. 2019, 11, 605. [Google Scholar] [CrossRef]
  40. Zhou, X.; Zhou, T.; Fang, S.; Han, B.; He, Q. Investigation of the vertical distribution characteristics and microphysical properties of summer mineral dust masses over the taklimakan desert using an unmanned aerial vehicle. Remote Sens. 2023, 15, 3556. [Google Scholar] [CrossRef]
  41. Wang, Z.; Shi, Y.; Zhang, Y. Review of desert mobility assessment and desertification monitoring based on remote sensing. Remote Sens. 2023, 15, 4412. [Google Scholar] [CrossRef]
  42. Yang, H.; Du, J. Classification of desert steppe species based on unmanned aerial vehicle hyperspectral remote sensing and continuum removal vegetation indices. Optik 2021, 247, 167877. [Google Scholar] [CrossRef]
  43. Zhang, T.; Bi, Y.; Zhu, X.; Gao, X. Identification and classification of small sample desert grassland vegetation communities based on dynamic graph convolution and UAV hyperspectral imagery. Sensors 2023, 23, 2856. [Google Scholar] [CrossRef] [PubMed]
  44. Lin, X.; Chen, J.; Lou, P.; Yi, S.; Qin, Y.; You, H.; Han, X. Improving the estimation of alpine grassland fractional vegetation cover using optimized algorithms and multi-dimensional features. Plant Methods 2021, 17, 96. [Google Scholar] [CrossRef]
  45. Wang, N.; Guo, Y.; Wei, X.; Zhou, M.; Wang, H.; Bai, Y. UAV-based remote sensing using visible and multispectral indices for the estimation of vegetation cover in an oasis of a desert. Ecol. Indic. 2022, 141, 109155. [Google Scholar] [CrossRef]
  46. Zhang, H.; Feng, Y.; Guan, W.; Cao, X.; Li, Z.; Ding, J. Using unmanned aerial vehicles to quantify spatial patterns of dominant vegetation along an elevation gradient in the typical Gobi region in Xinjiang, Northwest China. Glob. Ecol. Conserv. 2021, 27, e01571. [Google Scholar] [CrossRef]
  47. Yue, J.; Yang, G.; Tian, Q.; Feng, H.; Xu, K.; Zhou, C. Estimate of winter-wheat above-ground biomass based on UAV ultrahigh-ground-resolution image textures and vegetation indices. ISPRS J. Photogramm. Remote Sens. 2019, 150, 226–244. [Google Scholar] [CrossRef]
  48. Liang, Y.; Kou, W.; Lai, H.; Wang, J.; Wang, Q.; Xu, W.; Wang, H.; Lu, N. Improved estimation of aboveground biomass in rubber plantations by fusing spectral and textural information from UAV-based RGB imagery. Ecol. Indic. 2022, 142, 109286. [Google Scholar] [CrossRef]
  49. Zhang, J. Investigation on Distribution and Quantity of Rock Sheep in Summer in Hatengtaohai National Nature Reserve of Inner Mongolia. Inn. Mong. For. Investig. Des. 2022, 45, 67–69. [Google Scholar]
  50. Zhao, N. Evaluation Research on Shrubs Resources—Take Caraganakorshinskiikom in Dengkou, Bayannaoer City as an Example. Master’s Thesis, Inner Mongolia Agricultural University, Inner Mongolia, China, 2011. [Google Scholar]
  51. Yue, J.; Guo, W.; Yang, G.; Zhou, C.; Feng, H.; Qiao, H. Method for accurate multi-growth-stage estimation of fractional vegetation cover using unmanned aerial vehicle remote sensing. Plant Methods 2021, 17, 51. [Google Scholar] [CrossRef]
  52. Al-Ali, Z.; Abdullah, M.; Asadalla, N.; Gholoum, M. A comparative study of remote sensing classification methods for monitoring and assessing desert vegetation using a UAV-based multispectral sensor. Environ. Monit. Assess. 2020, 192, 389. [Google Scholar] [CrossRef]
  53. Zhou, H.; Fu, L.; Sharma, R.P.; Lei, Y.; Guo, J. A hybrid approach of combining random forest with texture analysis and VDVI for desert vegetation mapping Based on UAV RGB Data. Remote Sens. 2021, 13, 1891. [Google Scholar] [CrossRef]
  54. Fonseca, L.M.G.; Namikawa, L.M.; Castejon, E.F. Digital image processing in remote sensing. In Proceedings of the 2009 Tutorials of the XXII Brazilian Symposium on Computer Graphics and Image Processing, IEEE Computer Society1730 Massachusetts Ave., Washington, DC, USA, 11–14 October 2009; pp. 59–71. [Google Scholar]
  55. Lins, R.D.; de Almeida, M.M.; Bernardino, R.B.; Jesus, D.; Oliveira, J.M. Assessing binarization techniques for document images. In Proceedings of the 2017 ACM Symposium on Document Engineering, Valletta, Malta, 4–7 September 2017; pp. 183–192. [Google Scholar]
  56. Mustafa, W.A.; Aziz, H.; Khairunizam, W.; Ibrahim, Z.; Shahriman, A.; Razlan, Z.M. Review of different binarization approaches on degraded document images. In Proceedings of the 2018 International Conference on Computational Approach in Smart Systems Design and Applications (ICASSDA), Kuching, Malaysia, 15–17 August 2018; pp. 1–8. [Google Scholar]
  57. Otsu, N. A threshold selection method from gray-level histograms. Automatica 1975, 11, 23–27. [Google Scholar] [CrossRef]
  58. Dutta, K.; Talukdar, D.; Bora, S.S. Segmentation of unhealthy leaves in cruciferous crops for early disease detection using vegetative indices and Otsu thresholding of aerial images. Measurement 2022, 189, 110478. [Google Scholar] [CrossRef]
  59. Li, H.; Shi, Q.; Wan, Y.; Shi, H.; Imin, B. Influence of surface water on desert vegetation expansion at the landscape scale: A case study of the Daliyabuyi Oasis, Taklamakan desert. Sustainability 2021, 13, 9522. [Google Scholar] [CrossRef]
  60. Chen, Z.; Huang, M.; Xiao, C.; Qi, S.; Du, W.; Zhu, D.; Altan, O. Integrating remote sensing and spatiotemporal analysis to characterize artificial vegetation restoration suitability in desert areas: A Case Study of Mu Us Sandy Land. Remote Sens. 2022, 14, 4736. [Google Scholar] [CrossRef]
  61. Pettorelli, N.; Vik, J.O.; Mysterud, A.; Gaillard, J.-M.; Tucker, C.J.; Stenseth, N.C. Using the satellite-derived NDVI to assess ecological responses to environmental change. Trends Ecol. Evol. 2005, 20, 503–510. [Google Scholar] [CrossRef] [PubMed]
  62. Gitelson, A.A.; Keydan, G.P.; Merzlyak, M.N. Three-band model for noninvasive estimation of chlorophyll, carotenoids, and anthocyanin contents in higher plant leaves. Geophys. Res. Lett. 2006, 33, L11402. [Google Scholar] [CrossRef]
  63. Gao, L.; Wang, X.; Johnson, B.A.; Tian, Q.; Wang, Y.; Verrelst, J.; Mu, X.; Gu, X. Remote sensing algorithms for estimation of fractional vegetation cover using pure vegetation index values: A review. ISPRS J. Photogramm. Remote Sens. 2020, 159, 364–377. [Google Scholar] [CrossRef]
  64. Schlerf, M.; Atzberger, C.; Hill, J. Remote sensing of forest biophysical variables using HyMap imaging spectrometer data. Remote Sens. Environ. 2005, 95, 177–194. [Google Scholar] [CrossRef]
  65. Li, X. Quantitative Retrieval of Sparse Vegetation Cover in Arid Regions Using Hyperspectral Data. Ph.D. Thesis, Chinese Academy of Forestry, Beijing, China, 2008. [Google Scholar]
  66. Naji, T.A. Study of vegetation cover distribution using DVI, PVI, WDVI indices with 2D-space plot. J. Phys. Conf. Ser. 2018, 1003, 012083. [Google Scholar] [CrossRef]
  67. Yuan, H.; Liu, Z.; Cai, Y.; Zhao, B. Research on vegetation information extraction from visible UAV remote sensing images. In Proceedings of the 2018 Fifth International Workshop on Earth Observation and Remote Sensing Applications (EORSA), Xi’an, China, 18–20 June 2018; pp. 1–5. [Google Scholar]
  68. Kerkech, M.; Hafiane, A.; Canals, R. Deep leaning approach with colorimetric spaces and vegetation indices for vine diseases detection in UAV images. Comput. Electron. Agric. 2018, 155, 237–243. [Google Scholar] [CrossRef]
  69. Schneider, P.; Roberts, D.; Kyriakidis, P. A VARI-based relative greenness from MODIS data for computing the Fire Potential Index. Remote Sens. Environ. 2008, 112, 1151–1167. [Google Scholar] [CrossRef]
  70. Beniaich, A.; Silva, M.L.N.; Avalos, F.A.P.; de Menezes, M.D.; Cândido, B.M. Determination of vegetation cover index under different soil management systems of cover plants by using an unmanned aerial vehicle with an onboard digital photographic camera. Semin. Ciênc. Agrár. 2019, 40, 49–66. [Google Scholar] [CrossRef]
  71. Evstatiev, B.; Mladenova, T.; Valov, N.; Zhelyazkova, T.; Gerdzhikova, M.; Todorova, M.; Grozeva, N.; Sevov, A.; Stanchev, G. Fast pasture classification method using ground-based camera and the modified green red vegetation index (MGRVI). Int. J. Adv. Comput. Sci. Appl. 2023, 14, 45–51. [Google Scholar] [CrossRef]
  72. Xu, X.; Liu, L.; Han, P.; Gong, X.; Zhang, Q. Accuracy of vegetation indices in assessing different grades of grassland desertification from UAV. Int. J. Environ. Res. Public Health 2022, 19, 16793. [Google Scholar] [CrossRef]
  73. Yang, L.; Jia, K.; Liang, S.; Liu, J.; Wang, X. Comparison of four machine learning methods for generating the GLASS fractional vegetation cover product from MODIS data. Remote Sens. 2016, 8, 682. [Google Scholar] [CrossRef]
  74. Jeon, G. Advanced Machine Learning and Deep Learning Approaches for Remote Sensing. Remote Sens. 2023, 15, 2876. [Google Scholar] [CrossRef]
  75. Song, D.-X.; Wang, Z.; He, T.; Wang, H.; Liang, S. Estimation and validation of 30 m fractional vegetation cover over China through integrated use of Landsat 8 and Gaofen 2 data. Sci. Remote Sens. 2022, 6, 100058. [Google Scholar] [CrossRef]
  76. Wang, Z.; Zhang, T.; Pei, C.; Zhao, X.; Li, Y.; Hu, S.; Bu, C.; Zhang, Q. Multisource remote sensing monitoring and analysis of the driving forces of vegetation restoration in the Mu Us sandy land. Land 2022, 11, 1553. [Google Scholar] [CrossRef]
  77. Chen, X.; Sun, Y.; Qin, X.; Cai, J.; Cai, M.; Hou, X.; Yang, K.; Zhang, H. Assessing the Potential of UAV for Large-Scale Fractional Vegetation Cover Mapping with Satellite Data and Machine Learning. Remote Sens. 2024, 16, 3587. [Google Scholar] [CrossRef]
  78. Sun, F.; Chen, D.; Li, L.; Zhang, Q.; Yuan, X.; Liao, Z.; Xiang, C.; Liu, L.; Zhou, J.; Shrestha, M. Machine Learning Models Based on UAV Oblique Images Improved Above-Ground Biomass Estimation Accuracy Across Diverse Grasslands on the Qinghai–Tibetan Plateau. Land Degrad. Dev. 2025, 36, 585–598. [Google Scholar] [CrossRef]
  79. Jiang, F.; Smith, A.R.; Kutia, M.; Wang, G.; Liu, H.; Sun, H. A modified KNN method for mapping the leaf area index in arid and semi-arid areas of China. Remote Sens. 2020, 12, 1884. [Google Scholar] [CrossRef]
  80. Ge, G.; Shi, Z.; Zhu, Y.; Yang, X.; Hao, Y. Land use/cover classification in an arid desert-oasis mosaic landscape of China using remote sensed imagery: Performance assessment of four machine learning algorithms. Glob. Ecol. Conserv. 2020, 22, e00971. [Google Scholar] [CrossRef]
  81. Abdelbaki, A.; Udelhoven, T. A review of hybrid approaches for quantitative assessment of crop traits using optical remote sensing: Research trends and future directions. Remote Sens. 2022, 14, 3515. [Google Scholar] [CrossRef]
  82. Wang, S.; Liu, Q.; Huang, C. Vegetation change and its response to climate extremes in the arid region of Northwest China. Remote Sens. 2021, 13, 1230. [Google Scholar] [CrossRef]
  83. Huang, W.; Li, W.; Xu, J.; Ma, X.; Li, C.; Liu, C. Hyperspectral monitoring driven by machine learning methods for grassland above-ground biomass. Remote Sens. 2022, 14, 2086. [Google Scholar] [CrossRef]
  84. Sun, C.; Ma, Y.; Pan, H.; Wang, Q.; Guo, J.; Li, N.; Ran, H. Methods for Extracting Fractional Vegetation Cover from Differentiated Scenarios Based on Unmanned Aerial Vehicle Imagery. Land 2024, 13, 1840. [Google Scholar] [CrossRef]
  85. Jung, Y. Multiple predicting K-fold cross-validation for model selection. J. Nonparametr. Stat. 2018, 30, 197–215. [Google Scholar] [CrossRef]
  86. Ahmad, G.N.; Fatima, H.; Ullah, S.; Saidi, A.S. Efficient medical diagnosis of human heart diseases using machine learning techniques with and without GridSearchCV. IEEE Access 2022, 10, 80151–80173. [Google Scholar] [CrossRef]
  87. Sisodia, P.S.; Tiwari, V.; Kumar, A. Analysis of supervised maximum likelihood classification for remote sensing image. In Proceedings of the International Conference on Recent Advances and Innovations in Engineering (ICRAIE-2014), Jaipur, India, 9–11 May 2014; pp. 1–4. [Google Scholar]
  88. Gao, S.-h.; Yan, Y.-z.; Yuan, Y.; Zhang, N.; Ma, L.; Zhang, Q. Comprehensive degradation index for monitoring desert grassland using UAV multispectral imagery. Ecol. Indic. 2024, 165, 112194. [Google Scholar] [CrossRef]
  89. Silver, M.; Tiwari, A.; Karnieli, A. Identifying vegetation in arid regions using object-based image analysis with RGB-only aerial imagery. Remote Sens. 2019, 11, 2308. [Google Scholar] [CrossRef]
  90. Jensen, J.R. Remote Sensing of the Environment: An Earth Resource Perspective 2/e; Pearson Education India: Chennai, India, 2009. [Google Scholar]
  91. Knipling, E.B. Physical and physiological basis for the reflectance of visible and near-infrared radiation from vegetation. Remote Sens. Environ. 1970, 1, 155–159. [Google Scholar] [CrossRef]
  92. Song, Z.; Lu, Y.; Ding, Z.; Sun, D.; Jia, Y.; Sun, W. A new remote sensing desert vegetation detection index. Remote Sens. 2023, 15, 5742. [Google Scholar] [CrossRef]
  93. Salih, A.M.; Raisi-Estabragh, Z.; Galazzo, I.B.; Radeva, P.; Petersen, S.E.; Lekadir, K.; Menegaz, G. A perspective on explainable artificial intelligence methods: SHAP and LIME. Adv. Intell. Syst. 2025, 7, 2400304. [Google Scholar] [CrossRef]
  94. Gholami, H.; Darvishi, E.; Moradi, N.; Mohammadifar, A.; Song, Y.; Li, Y.; Niu, B.; Kaskaoutis, D.; Pradhan, B. An interpretable (explainable) model based on machine learning and SHAP interpretation technique for mapping wind erosion hazard. Environ. Sci. Pollut. Res. 2024, 31, 64628–64643. [Google Scholar] [CrossRef]
  95. Li, Z. Extracting spatial effects from machine learning model using local interpretation method: An example of SHAP and XGBoost. Comput. Environ. Urban Syst. 2022, 96, 101845. [Google Scholar] [CrossRef]
  96. Xu, Y.; Shu, X.; Tao, M.; Sun, Y.; Liu, W.; Dong, G.; He, Q.; Li, J.; Li, Y.; Deng, L.; et al. Vegetation Information Extraction for Restoration of Sandy Land in Northwest Sichuan Based on Unmanned Aerial Vehicles and Machine Learning. J. Sichuan Agric. Univ. 2024, 42, 181–187. [Google Scholar]
  97. Xia, L.; Zhang, R.; Chen, L.; Huang, Y.; Xu, G.; Wen, Y.; Yi, T. Monitor cotton budding using SVM and UAV images. Appl. Sci. 2019, 9, 4312. [Google Scholar] [CrossRef]
  98. Agarwal, A.; Singh, A.K.; Kumar, S.; Singh, D. Critical analysis of classification techniques for precision agriculture monitoring using satellite and drone. In Proceedings of the 2018 IEEE 13th International Conference on Industrial and Information Systems (ICIIS), Rupnagar, India, 1–2 December 2018; pp. 83–88. [Google Scholar]
  99. Foody, G.M. Approaches for the production and evaluation of fuzzy land cover classifications from remotely-sensed data. Int. J. Remote Sens. 1996, 17, 1317–1340. [Google Scholar] [CrossRef]
  100. Vahidi, M.; Shafian, S.; Thomas, S.; Maguire, R. Pasture biomass estimation using ultra-high-resolution RGB UAVs images and deep learning. Remote Sens. 2023, 15, 5714. [Google Scholar] [CrossRef]
  101. Asner, G.P. Biophysical and biochemical sources of variability in canopy reflectance. Remote Sens. Environ. 1998, 64, 234–253. [Google Scholar] [CrossRef]
  102. Franklin, J.; Duncan, J.; Turner, D.L. Reflectance of vegetation and soil in Chihuahuan desert plant communities from ground radiometry using SPOT wavebands. Remote Sens. Environ. 1993, 46, 291–304. [Google Scholar] [CrossRef]
  103. Batadlan, B.D.; Paringit, E.C.; Santillan, J.R.; Caparas, A.S.; Fabila, J.L. Analysis of background variations in computed spectral vegetation indices and its implications for mapping mangrove forests using satellite imagery. In Proceedings of the 4th ERDT Conference, Manila, Philippines, 11 September 2009. [Google Scholar]
  104. Abdullah, M.M.; Addae-Wireko, L.; Tena-Gonzalez, G.A. Assessing native desert vegetation recovery in a war-affected area using multispectral and hyperspectral imagery: A case study of the Sabah Al-Ahmad Nature Reserve, Kuwait. Restor. Ecol. 2017, 25, 982–993. [Google Scholar] [CrossRef]
  105. Dahy, B.; Issa, S.; Saleous, N. Random forest for classifying and monitoring 50 years of vegetation dynamics in three desert cities of the UAE. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2022, 43, 69–76. [Google Scholar] [CrossRef]
  106. Zheng, J.; Sun, C.; Zhao, S.; Hu, M.; Zhang, S.; Li, J. Classification of salt marsh vegetation in the Yangtze River Delta of China using the pixel-level time-series and XGBoost algorithm. J. Remote Sens. 2023, 3, 0036. [Google Scholar] [CrossRef]
  107. Ma, N.; Cao, S.; Bai, T.; Yang, Z.; Cai, Z.; Sun, W. Assessment of Vegetation Dynamics in Xinjiang Using NDVI Data and Machine Learning Models from 2000 to 2023. Sustainability 2025, 17, 306. [Google Scholar] [CrossRef]
  108. Guo, Y.; Wang, N.; Wei, X.; Zhou, M.; Wang, H.; Bai, Y. Desert oasis vegetation information extraction by PLANET and unmanned aerial vehicle image fusion. Ecol. Indic. 2024, 166, 112516. [Google Scholar] [CrossRef]
Figure 1. Location of the study sites.
Figure 2. Plot scenes (a,b); an individual (c) and leaves (d) of Caragana korshinskii Kom.
Figure 3. Detailed examples of vegetation ROI annotation for Caragana korshinskii (a–d) and bare soil: gravel (e), gully (f), and sand dune (g,h). Note: The images are labeled (a–h) from left to right and top to bottom.
Figure 4. Comparison of FVCT and FVC estimates derived from 17 VIs across 12 plots. Note: The range from 0 to 1 represents the values of FVC.
Figure 5. Correlation matrix of FVCT and the VIs; *** indicates a significant correlation at the p < 0.001 level, ** at the p < 0.01 level, and * at the p < 0.05 level (two-sided).
Figure 6. Fitting results of five machine learning regression models. Note: The red dashed line represents the 1:1 reference line.
Figure 7. SHAP-based relative contribution (%) of VIs, Spectral Bands, Texture, and DEM features to five ML models.
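Figure 7 reports each feature group's share of the total model attribution. One common way to obtain such percentages is to average the absolute SHAP values per feature and normalize the per-group sums; the sketch below illustrates this aggregation step with made-up values (the function name, feature names, and groups are illustrative, not the study's):

```python
import numpy as np

def group_contributions(shap_values, feature_names, groups):
    """Aggregate per-feature mean |SHAP| values into relative
    group contributions (%), as in a grouped-importance plot.

    shap_values  : (n_samples, n_features) array of SHAP values
    feature_names: one name per column of shap_values
    groups       : dict mapping group label -> set of feature names
    """
    mean_abs = np.abs(shap_values).mean(axis=0)  # mean |SHAP| per feature
    totals = {}
    for label, members in groups.items():
        idx = [i for i, f in enumerate(feature_names) if f in members]
        totals[label] = mean_abs[idx].sum()
    grand = sum(totals.values())
    return {label: 100.0 * t / grand for label, t in totals.items()}

# Illustrative attributions for four features across two samples.
sv = np.array([[1.0, -1.0, 0.5, 0.5],
               [-1.0, 1.0, -0.5, 0.5]])
names = ["NDVI", "SAVI", "red", "DEM"]
groups = {"VIs": {"NDVI", "SAVI"}, "Bands": {"red"}, "DEM": {"DEM"}}
print(group_contributions(sv, names, groups))
```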
Table 1. Overview of vegetation indices used.

| Name | Abbreviation | Formula | References |
|---|---|---|---|
| Normalized difference vegetation index | NDVI | (ρ_NIR − ρ_R)/(ρ_NIR + ρ_R) ¹ | [61] |
| Red-edge simple ratio vegetation index | SR_Rededge | ρ_NIR/ρ_Rededge ¹ | [62,63] |
| Ratio vegetation index | RVI | ρ_NIR/ρ_R ¹ | [64] |
| Soil-adjusted vegetation index | SAVI | ((ρ_NIR − ρ_R)/(ρ_NIR + ρ_R + L))(1 + L) ² | [24] |
| Modified soil-adjusted vegetation index | MSAVI | (2ρ_NIR + 1 − √((2ρ_NIR + 1)² − 8(ρ_NIR − ρ_R)))/2 ¹ | [25] |
| Atmospherically resistant vegetation index | ARVI | (ρ_NIR − ρ_R + γ(ρ_B − ρ_R))/(ρ_NIR + ρ_R − γ(ρ_B − ρ_R)) ² | [65] |
| Difference vegetation index | DVI | ρ_NIR − ρ_R ¹ | [66] |
| Visible-band difference vegetation index | VDVI | (2ρ_G − ρ_B − ρ_R)/(2ρ_G + ρ_B + ρ_R) ¹ | [67] |
| Normalized green–blue difference index | NGBDI | (ρ_G − ρ_B)/(ρ_G + ρ_B) ¹ | [67] |
| Normalized green–red difference index | NGRDI | (ρ_G − ρ_R)/(ρ_G + ρ_R) ¹ | [67] |
| Excess green | EXG | (2ρ_G − ρ_B − ρ_R)/(ρ_G + ρ_B + ρ_R) ¹ | [68] |
| Red–green ratio index | RGRI | ρ_R/ρ_G ¹ | [67] |
| Blue–green ratio index | BGRI | ρ_B/ρ_G ¹ | [67] |
| Visible atmospherically resistant index | VARI | (ρ_G − ρ_R)/(ρ_G + ρ_R − ρ_B) ¹ | [69] |
| Color index of vegetation extraction | CIVE | ((0.441ρ_R − 0.881ρ_G + 0.385ρ_B)/(ρ_G + ρ_B + ρ_R)) + 18.078745 ¹ | [70] |
| Modified green–red vegetation index | MGRVI | (ρ_G² − ρ_R²)/(ρ_G² + ρ_R²) ¹ | [71] |
| Excess green–red–blue difference index | EGRBDI | (4ρ_G² − ρ_B·ρ_R)/(4ρ_G² + ρ_B·ρ_R) | [72] |

¹ ρ denotes band reflectance; NIR, R, G, B, and Rededge are the near-infrared, red, green, blue, and red-edge bands. ² L is the soil adjustment parameter (0.5 here), and γ is the atmospheric path-radiation correction factor (1 here).
Table 2. Accuracy assessment of ten thresholding methods applied to the NDVI image of plot 1.

| Method | Accuracy | Kappa | Precision | Recall |
|---|---|---|---|---|
| Triangle | 0.979 | 0.709 | 0.992 | 0.563 |
| Otsu | 0.977 | 0.671 | 0.997 | 0.518 |
| Yen | 0.952 | 0.001 | 1.000 | 0.001 |
| Adaptive + Otsu | 0.765 | 0.222 | 0.168 | 0.974 |
| Local Adaptive | 0.342 | 0.001 | 0.049 | 0.682 |
| Isodata | 0.388 | 0.044 | 0.070 | 0.944 |
| Mean | 0.846 | 0.322 | 0.233 | 0.950 |
| Minimum Error | 0.359 | 0.039 | 0.068 | 0.956 |
| Maximum Entropy | 0.975 | 0.643 | 0.998 | 0.487 |
| Moments | 0.509 | 0.032 | 0.064 | 0.675 |
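For reference, the Otsu criterion in Table 2 selects the NDVI threshold that maximizes the between-class variance of the image histogram. A minimal NumPy sketch of this criterion (not the authors' implementation; the synthetic bimodal NDVI sample is illustrative) is:

```python
import numpy as np

def otsu_threshold(values, nbins=256):
    """Otsu's method: return the bin center that maximizes the
    between-class variance of the histogram of `values`."""
    hist, edges = np.histogram(values, bins=nbins)
    centers = (edges[:-1] + edges[1:]) / 2
    p = hist / hist.sum()                 # bin probabilities
    omega = np.cumsum(p)                  # class-0 cumulative probability
    mu = np.cumsum(p * centers)           # class-0 cumulative mean
    mu_t = mu[-1]                         # global mean
    denom = omega * (1.0 - omega)
    denom[denom == 0] = np.nan            # guard empty classes
    sigma_b2 = (mu_t * omega - mu) ** 2 / denom  # between-class variance
    return centers[np.nanargmax(sigma_b2)]

# Synthetic bimodal NDVI sample: bare soil around 0.05, vegetation around 0.55.
rng = np.random.default_rng(0)
ndvi = np.concatenate([rng.normal(0.05, 0.02, 500),
                       rng.normal(0.55, 0.08, 500)])
print(otsu_threshold(ndvi))
```

The threshold lands between the two modes, separating soil from vegetation pixels.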
Table 3. Supervised classification results and accuracy assessment of the plots.

| Flood fan | Location | Plot number | OA (%) | Kappa | FVC (%) |
|---|---|---|---|---|---|
| NO. 1 | Apex | 1 | 99.526 | 0.981 | 9.1662 |
| NO. 1 | Middle | 2 | 98.985 | 0.977 | 11.4111 |
| NO. 1 | Edge | 3 | 95.821 | 0.929 | 13.5288 |
| NO. 2 | Apex | 4 | 99.460 | 0.965 | 5.8605 |
| NO. 2 | Middle | 5 | 93.498 | 0.884 | 14.2587 |
| NO. 2 | Edge | 6 | 97.886 | 0.933 | 9.4482 |
| NO. 3 | Apex | 7 | 98.859 | 0.894 | 6.492 |
| NO. 3 | Middle | 8 | 94.104 | 0.810 | 8.978 |
| NO. 3 | Edge | 9 | 98.430 | 0.966 | 15.880 |
| NO. 4 | Apex | 10 | 99.457 | 0.977 | 17.945 |
| NO. 4 | Middle | 11 | 98.795 | 0.949 | 11.639 |
| NO. 4 | Edge | 12 | 91.214 | 0.838 | 11.085 |
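The OA and Kappa values in Table 3 follow the standard confusion-matrix definitions: OA is the observed agreement, and Kappa corrects it for chance agreement. A minimal sketch (the confusion matrix below is illustrative, not from the study):

```python
import numpy as np

def oa_kappa(cm):
    """Overall accuracy and Cohen's kappa from a confusion matrix
    (rows: reference classes, columns: predicted classes)."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                                 # observed agreement (OA)
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2   # chance agreement
    kappa = (po - pe) / (1.0 - pe)
    return po, kappa

# Illustrative two-class (vegetation vs. bare soil) confusion matrix.
cm = [[90, 10],
      [5, 95]]
print(oa_kappa(cm))
```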
Table 4. Accuracy assessment of the five models in estimating FVCT.

| Model | Training R² | Training RMSE | Training MAE | Testing R² | Testing RMSE | Testing MAE |
|---|---|---|---|---|---|---|
| RF | 0.970 | 0.008 | 0.006 | 0.876 | 0.020 | 0.016 |
| XGBoost | 0.972 | 0.009 | 0.006 | 0.808 | 0.020 | 0.015 |
| LASSO | 0.818 | 0.021 | 0.015 | 0.805 | 0.025 | 0.020 |
| SVM | 0.968 | 0.009 | 0.007 | 0.874 | 0.020 | 0.017 |
| KNN | 0.911 | 0.016 | 0.011 | 0.784 | 0.021 | 0.016 |
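The R², RMSE, and MAE reported in Table 4 follow their usual definitions. A minimal sketch with illustrative numbers (not the study's data):

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """R-squared, root-mean-square error, and mean absolute error."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    resid = y_true - y_pred
    ss_res = (resid ** 2).sum()
    ss_tot = ((y_true - y_true.mean()) ** 2).sum()
    r2 = 1.0 - ss_res / ss_tot          # coefficient of determination
    rmse = np.sqrt((resid ** 2).mean()) # root-mean-square error
    mae = np.abs(resid).mean()          # mean absolute error
    return r2, rmse, mae

# Illustrative observed vs. predicted FVC values.
print(regression_metrics([0.10, 0.12, 0.14, 0.18], [0.11, 0.12, 0.15, 0.17]))
```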
Share and Cite

Han, J.; Zhu, J.; Cao, X.; Xi, L.; Qi, Z.; Li, Y.; Wang, X.; Zou, J. Extraction of Sparse Vegetation Cover in Deserts Based on UAV Remote Sensing. Remote Sens. 2025, 17, 2665. https://doi.org/10.3390/rs17152665