Land
  • Article
  • Open Access

5 November 2024

Methods for Extracting Fractional Vegetation Cover from Differentiated Scenarios Based on Unmanned Aerial Vehicle Imagery

1 College of Ecology and Environment, Xinjiang University, Urumqi 830046, China
2 College of Geography and Remote Sensing Sciences, Xinjiang University, Urumqi 830046, China
3 Xinjiang Key Laboratory of Oasis Ecology, Xinjiang University, Urumqi 830046, China
4 Forestry and Grassland Work Station of Xinjiang Production and Construction Corps, Urumqi 830046, China

Abstract

Fractional vegetation cover (FVC) plays a key role in ecological and environmental status assessment because it directly reflects the extent and status of vegetation cover, an important component of ecosystems. FVC estimation methods have evolved from traditional manual interpretation to advanced remote sensing technologies, such as satellite data analysis and unmanned aerial vehicle (UAV) image processing. Extraction methods based on high-resolution UAV data are increasingly studied in the fields of ecology and remote sensing. However, research on UAV-based FVC extraction against the backdrop of the high soil reflectance in arid regions remains scarce. In this paper, based on 12 UAV visible light images in differentiated scenarios in the Ebinur Lake basin, Xinjiang, China, various methods are used for high-precision FVC estimation: Otsu’s thresholding method combined with 12 Visible Vegetation Indices (abbreviated as Otsu-VVIs) (excess green index, excess red index, excess red minus green index, normalized green–red difference index, normalized green–blue difference index, red–green ratio index, color index of vegetation extraction, visible-band-modified soil-adjusted vegetation index, excess green minus red index, modified green–red vegetation index, red–green–blue vegetation index, visible-band difference vegetation index), color space method (red, green, blue, hue, saturation, value, lightness, ‘a’ (Green–Red component), and ‘b’ (Blue–Yellow component)), linear mixing model (LMM), and two machine learning algorithms (a support vector machine and a neural network). The results show that the following methods exhibit high accuracy in FVC extraction across differentiated scenarios: Otsu–CIVE, color space method (‘a’: Green–Red component), LMM, and SVM (Accuracy > 0.75, Precision > 0.8, kappa coefficient > 0.6). Nonetheless, higher scene complexity and image entropy reduce the applicability of precise FVC extraction methods.
This study facilitates accurate, efficient extraction of vegetation information in differentiated scenarios within arid and semiarid regions, providing key technical references for FVC estimation in similar arid areas.

1. Introduction

Fractional vegetation cover (FVC), typically expressed as a percentage, represents the proportion of land surface covered by vegetation. It serves as a critical parameter for assessing vegetation health and understanding the interactions between global change and terrestrial ecosystems [1,2]. Moreover, FVC plays an essential role in evaluating ecosystem models and predicting ecological changes [3,4], highlighting its ecological, environmental, and societal relevance. Accurate FVC estimation aids in monitoring ecosystem health and contributes to the sustainable management of natural resources, essential for both environmental conservation and human society [5,6].
Arid and semiarid regions, such as Xinjiang, have distinctive ecosystems shaped by their unique geographical and climatic conditions. These regions are dominated by desert and grassland systems, where key vegetation types include desert plants like desert willow and poplar, semiarid species like bitterweed and reed, and mountain shrubs like saxaul and tamarisk [7,8,9]. These vegetation types play significant ecological roles, including mitigating wind erosion, preventing desertification, supporting wildlife habitats, and maintaining soil stability and water cycles. Given the ecological importance of this vegetation in Xinjiang, monitoring FVC is crucial for assessing regional ecological quality. However, traditional FVC retrieval methods face significant challenges due to environmental disturbances, necessitating the development of high-precision, repeatable techniques for accurate vegetation monitoring [10,11,12].
FVC extraction methods have traditionally relied on ground measurement and remote sensing. Ground-based methods, while precise, are resource-intensive and limited to small spatial and temporal scales, making them impractical in harsh environments [13,14]. Satellite remote sensing, on the other hand, provides wide coverage and allows large-scale vegetation monitoring, but its accuracy diminishes in arid and semiarid areas where vegetation is sparse and fragmented. The heterogeneity of these regions, coupled with complex terrain, further complicates satellite-based FVC extraction, and specialized methods are required for better accuracy [15,16].
In recent years, unmanned aerial vehicle (UAV) remote sensing has emerged as a promising alternative for estimating FVC, offering high spatial resolution, flexibility in data collection, and strong resistance to environmental interference [17,18]. Additionally, advances in machine learning algorithms have enhanced the ability to extract geoinformation from high-resolution UAV images, contributing to improved monitoring of vegetation dynamics and biodiversity [19,20]. Various techniques have been applied for FVC extraction, although the effectiveness of different algorithms varies depending on the region and vegetation characteristics [21,22].
This paper discusses different FVC extraction methods for arid and semiarid areas under varying entropy-difference conditions. The following Python-based methods are used: the Otsu–VVI method, color space method, linear mixing model [LMM], and two machine learning algorithms (a support vector machine [SVM] and a neural network [NN]). The precision of 24 extraction methods is validated using manually labeled points, confusion matrices, and kappa coefficients to select optimal algorithms suitable for arid and semiarid areas. This study aims to provide a scientific algorithmic basis and reference for the use of UAV monitoring in extracting FVC in such environments.

2. Materials

2.1. Study Area

The study area is in the Ebinur Lake basin, geographically situated between 43°38′ N and 45°52′ N and between 79°53′ E and 85°02′ E (Figure 1). The region features various geomorphological types, predominantly plains. Vegetation types are diverse and include the following. Saline vegetation dominates the area and is characterized by salt-tolerant plants; common species include Suaeda salsa and Haloxylon ammodendron [23], which survive the saline conditions typical of the region. Wetland vegetation includes reedbeds dominated by Phragmites australis and Typha, which provide important habitat for waterfowl and act as natural water filters. Desert shrubs in the surrounding arid areas include drought-tolerant plants such as Tamarix chinensis Lour. and Haloxylon ammodendron [24,25], which play an important role in stabilizing the soil and preventing erosion. Riparian vegetation at the edge of the lake, such as poplar (Populus) and willow (Salix), provides important habitat for birds and other wildlife [26,27]. In recent years, human activities, especially agricultural expansion, irrigation water deployment, industrial development, dam construction, and the pumping of water from supply rivers, have reduced the inflow to the lake, causing the water level to drop and the lake to shrink drastically; this reduction threatens biodiversity, especially the flora and fauna that depend on these ecosystems for survival [28]. Changes in the Ebinur Lake basin therefore have important implications for scientific research and conservation efforts [29].
Figure 1. Overview map of the study area (the red numbers indicate the plot numbers (1–12), and the different colors indicate the entropy levels (high, medium, and low) of the 12 plots).

2.2. UAV Data and Preprocessing

The UAV used in this study is the DJI Phantom 4 RTK SE, which is equipped with a quadrotor flight system and a 20-megapixel camera capable of capturing image information in three visible light bands (red: 650 ± 16 nm; green: 560 ± 16 nm; blue: 450 ± 16 nm [RGB]). The experiments were conducted from 28 July 2022 to 1 August 2022 using 3D photogrammetry (grid flight). Flight conditions were wind speeds below 8 m/s, clear weather, visibility greater than 5 km, and a solar altitude angle greater than 45° at the time of flight. The flight altitude was 40 m with 80% lateral and 80% longitudinal overlap. Ground control points were established in each flight test site (total error < 0.5 m) to ensure correct georegistration of the UAV orthoimages (Figure 2). The collected UAV images were processed using Pix4D 4.4.10 (https://www.pix4d.com) (accessed on 11 July 2023) to obtain orthoimages with a pixel spatial resolution of 0.03 m. FVC was extracted from the UAV image data using a combination of Python-based methods: the Otsu–VVI method, the color space method, an LMM, and two machine learning algorithms (an SVM and an NN). These methods were evaluated for accuracy, and the optimal FVC extraction method was selected.
Figure 2. Flowchart of FVC extraction from UAV image data.

3. Methods

3.1. Scenarios and Entropy

Differentiated scenarios typically refer to specific environments influenced by varying geographic, climatic, and soil conditions and human activities. The concept of differentiated scenarios was used in this study to understand and analyze vegetation distribution and changes accurately under different environmental conditions. Based on manual visual interpretation, 12 orthorectified visible light (RGB) drone images obtained in the experiments were categorized into six scenarios according to vegetation distribution and surface complexity: (1) sparse shrub areas with similar backgrounds, (2) mixed grass–shrub areas with distinct ground vegetation demarcation, (3) mixed zone of sparse herbs and shrubs, (4) extensive shrub areas with minor grassland integration, (5) mixed grass–shrub areas with indistinguishable soil backgrounds, and (6) complex vegetation types with high cover and architectural interference.
Herein, entropy was used to describe these scenarios quantitatively. In remote sensing image processing, entropy is commonly used to measure the randomness or richness of information in image pixels [30]. Specifically for vegetation cover images, entropy can be used to analyze and differentiate the extent and type of vegetation coverage because different vegetation types and coverage exhibit various textures and gray-level variabilities. High entropy values indicate a wide and complex distribution of image pixel values, corresponding to areas with complex or mixed vegetation. Low entropy values suggest that the area is relatively uniform, with little variation in color and brightness, indicating simple vegetation areas. Entropy is typically calculated using a gray-level cooccurrence matrix (GLCM), a statistical method for characterizing image texture features [31]. A GLCM calculates the frequency distribution of gray-level similarity or dissimilarity between an image pixel and its neighboring pixels. Entropy is a statistical measure from this matrix used to describe the complexity and irregularity of image textures. The calculation formula is [31]
$H = -\sum_{i} p_i \log_2 p_i,$
$p_i = C_i / N,$
where $p_i$ is the probability of the $i$th gray level occurring in the image, $C_i$ is the number of occurrences of gray level $i$, and $N$ is the total number of pixels in the image.
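As a concrete illustration, the histogram entropy defined above can be computed directly with NumPy. This is a minimal sketch assuming an 8-bit grayscale input; the function name `image_entropy` is illustrative, not from the paper (GLCM-based texture entropy would additionally require building the cooccurrence matrix first):

```python
import numpy as np

def image_entropy(gray, levels=256):
    """Shannon entropy H = -sum(p_i * log2(p_i)) of a grayscale image's
    histogram, where p_i = C_i / N (integer gray levels in [0, levels))."""
    counts = np.bincount(gray.ravel(), minlength=levels).astype(float)
    p = counts / counts.sum()   # p_i = C_i / N
    p = p[p > 0]                # drop empty bins: 0 * log2(0) -> 0
    return float(-(p * np.log2(p)).sum())
```

A perfectly uniform image yields H = 0, while an image split evenly between two gray levels yields H = 1 bit, matching the intuition that higher entropy reflects richer, more complex scenes.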

3.2. FVC Extraction Methods

(1)
Otsu–VVI Method
Otsu thresholding, also known as the maximum interclass variance method, is an adaptive thresholding method for image segmentation based on image data. Its principle involves calculating the image’s grayscale histogram to determine a threshold that divides the image into the foreground and background. VVIs (Table 1) are used to extract vegetation information from remote sensing images using the differential reflection between vegetation and nonvegetation areas [32,33]. These indices, typically calculated using visible light reflectance values, are based on data from different spectral bands [34]. Combining Otsu thresholding with VVIs leverages their respective advantages to enhance the accuracy and reliability of vegetation cover extraction.
Table 1. Visible Vegetation Indices (VVIs) and their formulas.
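To illustrate the Otsu–VVI idea, the sketch below computes one VVI from Table 1, the excess green index (ExG = 2G − R − B), and applies a histogram-based Otsu threshold to classify vegetation pixels. The helper names and the 256-bin histogram are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def excess_green(rgb):
    # rgb: (H, W, 3) float array in [0, 1]; ExG = 2G - R - B
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 2 * g - r - b

def otsu_threshold(values, bins=256):
    # Choose the threshold maximizing between-class variance of the histogram.
    hist, edges = np.histogram(values.ravel(), bins=bins)
    mids = (edges[:-1] + edges[1:]) / 2
    w = hist.astype(float) / hist.sum()
    best_t, best_var = mids[0], -1.0
    for k in range(1, bins):
        w0, w1 = w[:k].sum(), w[k:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (w[:k] * mids[:k]).sum() / w0
        mu1 = (w[k:] * mids[k:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2   # between-class variance
        if var > best_var:
            best_var, best_t = var, mids[k]
    return best_t

def fvc_otsu_exg(rgb):
    exg = excess_green(rgb)
    veg = exg >= otsu_threshold(exg)   # foreground = vegetation
    return veg.mean()                  # fraction of vegetated pixels
```

Any of the other eleven VVIs in Table 1 can be substituted for `excess_green` without changing the thresholding step.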
(2)
Color Space Method
The color space method for FVC extraction is based on the absorption and reflection characteristics of vegetation at different wavelengths. This method involves converting color remote sensing images into an appropriate color space, commonly RGB; hue, saturation, value (HSV); and CIELab. Depending on the chosen color space, the original image is transformed into the corresponding color space, and the relevant color channels (R, G, B, H, S, V, L, a, and b) are extracted. Appropriate thresholds are set to segregate vegetated and nonvegetated areas within the image. Different color channels may require varying thresholds. Finally, FVC is obtained from UAV image data by calculating the proportion of vegetated pixels to nonvegetated ones in the binary image.
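As a simplified sketch of the channel-thresholding step, the code below uses a plain green–red opponent channel (R − G) as a stand-in for the CIELab ‘a’ component; a faithful ‘a’ channel would require a full RGB → XYZ → Lab conversion, and the threshold of 0 is an illustrative assumption:

```python
import numpy as np

def green_red_component(rgb):
    # Simplified green-red opponent channel: negative where green dominates.
    # Loosely analogous to CIELab 'a'; a full conversion goes RGB -> XYZ -> Lab.
    return rgb[..., 0] - rgb[..., 1]

def fvc_color_space(rgb, threshold=0.0):
    a = green_red_component(rgb)
    veg = a < threshold   # vegetation: green channel exceeds red
    return veg.mean()     # proportion of vegetated pixels
```

In practice the threshold for each channel (R, G, B, H, S, V, L, a, b) would be tuned per scene, as the text notes.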
(3)
LMM
An LMM posits that the reflectance value of a pixel in a specific spectral band is a linear combination of the reflectance values of the pixel’s endmember components and their respective abundances [45].
$Ref_i = \sum_{j=1}^{m} P_{i,j} + \varepsilon_i$
where $i = 1, 2, \ldots, n$ ($n$ is the number of spectral bands); $j = 1, 2, \ldots, m$ ($m$ is the number of endmember components within a pixel); $Ref_i$ is the mixed-pixel reflectance value; $P_{i,j}$ is the reflectance value of the $j$th endmember component in the $i$th spectral band; and $\varepsilon_i$ is the error in the $i$th band.
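A small unmixing sketch follows, using the common abundance-weighted form of the LMM, in which each pure endmember spectrum is scaled by its fractional abundance and abundances are solved per pixel by least squares; the two-endmember setup (vegetation and soil) and all function names are illustrative assumptions:

```python
import numpy as np

def unmix_pixel(ref, endmembers):
    # ref: (n_bands,); endmembers: (n_bands, m) matrix of pure spectra.
    # Solve ref ~= endmembers @ f, then clip negatives and renormalize so
    # the abundances satisfy the sum-to-one constraint.
    f, *_ = np.linalg.lstsq(endmembers, ref, rcond=None)
    f = np.clip(f, 0, None)
    s = f.sum()
    return f / s if s > 0 else f

def fvc_lmm(image, veg_spec, soil_spec):
    # image: (H, W, n_bands); FVC = mean vegetation abundance over all pixels.
    E = np.stack([veg_spec, soil_spec], axis=1)
    flat = image.reshape(-1, image.shape[-1])
    abundances = np.array([unmix_pixel(p, E) for p in flat])
    return float(abundances[:, 0].mean())
```

With well-chosen endmember spectra, the vegetation abundance of each pixel is a sub-pixel FVC estimate, which is the main advantage of the LMM over hard thresholding.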
(4)
SVM
An SVM fundamentally operates by identifying a hyperplane in the feature space that maximizes the margin between different classes. This hyperplane serves as a decision boundary that discriminates between classes. For data that are not linearly separable, an SVM uses kernel techniques to project these data into a higher-dimensional space to identify a linear separating hyperplane. An SVM solves the optimization problem [46]
$\min_{w,b} \; \frac{1}{2}\|w\|^2 + C \sum_{i=1}^{n} \xi_i$
subject to the constraints
$y_i (w \cdot x_i + b) \ge 1 - \xi_i, \quad \xi_i \ge 0, \quad i = 1, \ldots, n,$
where w is the normal vector to the hyperplane; b is the bias of the hyperplane; C is a regularization parameter that controls the misclassification penalty; ξ i denotes slack variables, allowing some data points to be on the incorrect side of the margin; y i is the label of each sample, typically +1 or −1; and x i denotes the feature vectors.
The SVM model parameter settings include the kernel type, penalty coefficient C , and gamma. C balances the accuracy of classification with the smoothness of the decision surface. A larger C can reduce training errors but may lead to overfitting, whereas a smaller C enhances the model’s robustness to noise but may increase training errors. The gamma parameter determines the reach of a single training example’s influence, thereby affecting classification granularity or smoothness.
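The soft-margin objective above can be minimized directly by sub-gradient descent on the hinge loss. The from-scratch sketch below is illustrative only (a real pipeline would typically use an SVM library with kernel support; the learning rate, epoch count, and function names are my assumptions):

```python
import numpy as np

def train_linear_svm(X, y, C=1.0, lr=0.01, epochs=200):
    """Minimal linear SVM via sub-gradient descent on
    (1/2)||w||^2 + C * sum(max(0, 1 - y_i (w.x_i + b)))."""
    rng = np.random.default_rng(0)
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        for i in rng.permutation(n):
            if y[i] * (X[i] @ w + b) < 1:       # margin violated: hinge active
                w = w - lr * (w - C * y[i] * X[i])
                b = b + lr * C * y[i]
            else:                                # only the regularizer shrinks w
                w = w - lr * w
    return w, b

def predict(X, w, b):
    return np.where(X @ w + b >= 0, 1, -1)
```

Here C plays exactly the role described below: a larger C penalizes margin violations more heavily, while a smaller C tolerates noise at the cost of more training error.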
(5)
NN
Numerous nodes (neurons) are organized in a hierarchical structure within an NN. Each neuron processes input signals through an activation function and produces output signals for the next layer. In an NN, each connection has a weight, which is adjusted during training through backpropagation algorithms to minimize the error between the model outputs and the true labels. The basic equations of an NN involve forward propagation and error backpropagation processes. In forward propagation, the output y of each neuron is calculated as [36]
$y = f\!\left(\sum_{i=1}^{n} w_i x_i + b\right),$
where x i is the input value; w i is the weight; b is the bias; and f is the activation function, whose common functions include sigmoid, tanh, or ReLU. During backpropagation, the network minimizes the loss function, typically a function of the prediction error, such as the mean squared error or cross-entropy loss.
The number of hidden layers and nodes, activation function, learning rate, number of epochs, and batch size are commonly used parameters in traditional MLP NNs. The numbers of hidden layers and their nodes define the network depth and width, respectively, with each hidden layer potentially having a distinct number of nodes. Additional layers and nodes can increase model complexity and lead to overfitting. Common activation functions include sigmoid, tanh, and ReLU, which introduce nonlinear factors allowing the network to learn complex data patterns. The learning rate, which determines the step size of weight adjustments, is a crucial parameter in optimization algorithms. The number of iterations, or epochs, is the number of times the entire training dataset is used to update the model weights, with multiple iterations helping the network learn better. During training, data are divided into batches, each used to compute model errors and update weights.
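The forward-propagation equation above reduces, for a single neuron, to a weighted sum passed through an activation function. A minimal sketch with a sigmoid activation (function names are illustrative):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def neuron_forward(x, w, b, activation=sigmoid):
    # y = f(sum_i w_i * x_i + b): weighted sum of inputs, bias, then activation
    return activation(np.dot(w, x) + b)
```

A full MLP stacks many such neurons into layers and, during training, adjusts every w and b by backpropagating the loss gradient, as described above.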

3.3. Precision Evaluation

The vegetated and nonvegetated areas in the 12 UAV images were manually marked in this study (Figure 3). Vegetation and nonvegetation points were marked in proportion to plot area and to vegetation sparsity, type, and complexity. For images 1–12, the numbers of point markers were 200, 400, 600, 300, 600, 500, 200, 500, 400, 400, and 400, respectively. The 12 marked images were then used to extract vegetation cover using the Otsu–VVI method, color space method, LMM, and two machine learning methods (SVM and NN), followed by accuracy validation using confusion matrices and kappa coefficients.
Figure 3. Thumbnails of 12 markers.
(1)
Confusion Matrix
The FVC extraction results of different methods were evaluated using confusion matrices and accuracy metrics [37] (Formulas (7)–(12); Table 2).
$\text{Accuracy} = \frac{a + d}{a + b + c + d}$ (7)
$\text{Precision} = \frac{a}{a + c}$ (8)
$\text{Recall} = \frac{a}{a + b}$ (9)
$f = \frac{2a}{2a + b + c}$ (10)
$OA = \frac{c}{c + d}$ (11)
$UA = \frac{b}{a + b}$ (12)
Table 2. Confusion matrix for precision validation.
Several key statistical metrics were used to assess the FVC extraction accuracy of the classification models: total accuracy, precision, recall, overestimation error, underestimation error, and the f score. These metrics are defined as follows:
Accuracy: the proportion of correctly identified observations (vegetation and nonvegetation in this study) to the total number of observations. This is the most straightforward performance metric; a higher value indicates better overall model performance.
Precision: the proportion of correctly predicted vegetation observations out of all predicted vegetation. A higher value indicates higher accuracy in the predicted vegetation samples.
Recall: the proportion of actual vegetation observations that are correctly predicted as vegetation. A higher value indicates a stronger ability of the model to extract vegetation.
f score: the harmonic mean of precision and recall, ranging between 0 and 1. Values closer to 1 indicate better FVC extraction effectiveness.
Overestimation error (OA): the probability of nonvegetation observations being incorrectly predicted as vegetation.
Underestimation error (UA): the probability of vegetation observations being missed or undetected.
These metrics constitute a comprehensive framework for evaluating the performance of vegetation classification models in different aspects. Total accuracy reflects the overall accuracy of each model. Underestimation and overestimation errors provide information about potential misjudgments by each model.
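Given the four cell counts of the confusion matrix in Table 2 (here taken as a = vegetation correctly classified, b = vegetation missed, c = nonvegetation predicted as vegetation, d = nonvegetation correctly classified, a layout inferred from Formulas (7)–(12)), all six metrics can be computed together:

```python
def confusion_metrics(a, b, c, d):
    # a: true veg, b: veg missed, c: non-veg called veg, d: true non-veg
    # (cell layout assumed from Formulas (7)-(12))
    return {
        "accuracy":  (a + d) / (a + b + c + d),
        "precision": a / (a + c),
        "recall":    a / (a + b),
        "f":         2 * a / (2 * a + b + c),
        "OA":        c / (c + d),   # overestimation error
        "UA":        b / (a + b),   # underestimation error
    }
```

For example, with a = 50, b = 10, c = 5, d = 35, accuracy is 0.85 and precision is 50/55 ≈ 0.91.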
(2)
Kappa Coefficient
The kappa coefficient is a statistical measure used to assess classification accuracy, especially in terms of accounting for random agreement. In the classification of FVC, the kappa coefficient was used to understand the degree of consistency between the classification results and actual conditions, providing insights beyond those offered by overall accuracy alone. The kappa coefficient of each FVC extraction method was calculated as [38]
$N = a + b + c + d,$
$X = a + d,$
$E = \frac{(a + c)(a + b)}{N} + \frac{(b + d)(c + d)}{N},$
$\text{Kappa} = \frac{X - E}{N - E},$
where N is the total number of observations, X is the number of correctly classified observations, and E is the number of agreements expected by chance. The kappa coefficient ranges from −1 (total disagreement) to +1 (perfect agreement); a value of 0 indicates that the observed agreement is no better than random. In practice, a higher kappa value signifies better classification performance while accounting for random agreement.
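The kappa computation maps directly onto the confusion matrix counts, with the chance-agreement term E formed from the products of the marginal totals (the standard form for Cohen's kappa on a 2×2 matrix; the cell labels match the layout assumed for Formulas (7)–(12)):

```python
def kappa_coefficient(a, b, c, d):
    # Cohen's kappa from 2x2 confusion matrix counts.
    N = a + b + c + d             # total observations
    X = a + d                     # observed agreement
    E = ((a + c) * (a + b)) / N + ((b + d) * (c + d)) / N  # chance agreement
    return (X - E) / (N - E)
```

With a = 50, b = 10, c = 5, d = 35, E = 51 and kappa = 34/49 ≈ 0.69, a substantially better-than-chance result.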

4. Results

4.1. Scenarios and Entropy

Entropy values were calculated for the 12 images across the six scenarios (Table 3). Based on the average entropy value and standard deviation, entropy classes were assigned as follows: high entropy (values greater than the mean plus 0.5 times the standard deviation), low entropy (values less than the mean minus 0.5 times the standard deviation), and medium entropy (values in between). These categories were used to classify each of the 12 images as low, medium, or high entropy. Subsequently, FVC was extracted from these images using the Otsu–VVI method, color space method, LMM, SVM, and NN.
Table 3. Classification of entropy and differentiated scenarios.
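The mean ± 0.5 SD classification rule can be sketched in a few lines (this assumes the "high" class lies above the mean plus 0.5 standard deviations, the direction implied by the low-entropy definition; the function name is illustrative):

```python
import numpy as np

def classify_entropy(values):
    # Assign low / medium / high classes around mean +/- 0.5 standard deviations.
    v = np.asarray(values, dtype=float)
    mean, sd = v.mean(), v.std()
    lo, hi = mean - 0.5 * sd, mean + 0.5 * sd
    return np.where(v > hi, "high", np.where(v < lo, "low", "medium"))
```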

4.2. Otsu–VVIs

Twelve Otsu–VVI methods were used to extract FVC from the 12 orthorectified RGB drone images, and extraction result maps were generated (Figure 4; Table 4).
Figure 4. Thumbnails of 12 UAV results extracted based on the Otsu–VVI method.
Table 4. Accuracy verification of Otsu–VVI method a.

4.3. Color Space Method

FVC was extracted from the 12 images using the color space method, and FVC extraction result maps were produced (Figure 5; Table 5).
Figure 5. Thumbnails of 12 UAV image results extracted based on the color space method. R, G, B, H, S, V, L, a, and b denote the respective color space components.
Table 5. Accuracy verification of color space method a.

4.4. LMM

The LMM was used to extract FVC from the 12 images, and extraction outcome images were obtained (Figure 6; Table 6).
Figure 6. Thumbnails of 12 UAV image results extracted based on the LMM.
Table 6. Accuracy verification of LMM a.

4.5. SVM

The SVM was used to extract FVC from the 12 images, and extraction outcome images were obtained (Figure 7; Table 7).
Figure 7. Thumbnails of 12 UAV image results extracted based on the SVM.
Table 7. Accuracy verification of SVM a.

4.6. NN

The NN was used to extract FVC from the 12 images, and extraction outcome images were generated (Figure 8; Table 8).
Figure 8. Thumbnails of 12 UAV image results extracted based on the NN.
Table 8. Accuracy verification of NN a.

4.7. Confusion Matrix

(1)
Otsu–VVIs
The vegetation indices EXG, VDVI, RGBVI, V-MSAVI, and CIVE display universality and stability across various entropy conditions (Figure 9). For the low-entropy images, these indices demonstrate an Accuracy range of 0.7–0.96, indicating excellent differentiation between vegetation and nonvegetation. Precision ranges from 0.85 to 1, showing high vegetation recognition precision. Although Recall is generally good, some misclassification occurs. For the medium-entropy images, Accuracy drops slightly to 0.75–0.94, but Precision increases to 0.87–1. Recall is similar to that for the low-entropy images (prone to misclassification). For the high-entropy images, Accuracy ranges from 0.74 to 0.86 and Precision is 0.98–1; however, the vegetation extraction accuracy is notably lower than that for the low- and medium-entropy images. Recall is still generally good but includes some misclassifications.
Figure 9. Thumbnails of heat maps of high, medium, and low entropy values.
(2)
Color Space Method
Each color component (R, G, B, H, S, V, L, a, and b) exhibits distinct stability and universality across entropy conditions (Figure 9). For the low-entropy images, Accuracy is 0.53–0.97 and Precision is 0.56–1, indicating good overall model performance and high vegetation extraction precision. For the medium-entropy images, Accuracy is 0.31–0.94 and Precision is 0.07–0.98, showing notable declines in overall performance and vegetation extraction precision compared with the low-entropy images. For the high-entropy images, Accuracy ranges from 0.34 to 0.96 and Precision from 0.41 to 0.98, indicating lower overall performance and extraction precision than for the low- and medium-entropy images. Furthermore, more misclassifications are observed here than for the low- and medium-entropy images.
(3)
LMM
The LMM achieves consistently strong overall performance, vegetation extraction precision, and accuracy across the low-, medium-, and high-entropy images (Figure 9). Specifically, for the low-entropy images, Accuracy is between 0.79 and 0.89, Precision is 0.87–0.97, and Recall is between 0.59 and 0.9, with few misclassifications. For the medium-entropy images, Accuracy is between 0.88 and 0.91, Precision is between 0.87 and 0.99, and Recall is 0.77–0.91, with few misclassifications. For the high-entropy images, Accuracy is 0.89–0.94, Precision is 0.87–0.99, and Recall is 0.80–0.97, with few misclassifications. Overall, the model's performance and vegetation extraction accuracy and precision are notably better than those of the Otsu–VVI and color space methods.
(4)
SVM
For the low-entropy images, FVC extraction Accuracy is 0.89–0.94, Precision is 0.87–0.99, and Recall is 0.78–0.96, with few misclassifications (Figure 9). For the medium-entropy images, Accuracy ranges from 0.89 to 0.98, Precision from 0.86 to 0.99, and Recall from 0.86 to 0.98, with few misclassifications. For the high-entropy images, Accuracy is 0.76–0.95, Precision is 0.88–0.95, and Recall is 0.61–0.99, with few misclassifications. These results indicate high stability and precision across all entropy conditions.
(5)
NN
For the low-entropy images, Accuracy ranges from 0.74 to 0.93, Precision from 0.8 to 1, and Recall from 0.49 to 0.91, with few misclassifications (Figure 9), suggesting good overall model performance and notably good vegetation extraction precision. For the medium-entropy images, Accuracy is 0.45–0.99, Precision is 0.45–0.99, and Recall is 0.46–0.99, with few misclassifications, showing good overall model performance and vegetation extraction precision.

4.8. Kappa Coefficient

(1)
Otsu–VVIs
For images with low, medium, and high entropy values, the VVIs EXG, VDVI, RGBVI, V-MSAVI, and CIVE show optimal vegetation extraction accuracy. In particular, the CIVE demonstrates high precision across all entropy conditions, with kappa coefficients ranging from 0.54 to 0.94 (Figure 10; Table 9).
Figure 10. Thumbnails of radars of high, medium, and low entropy values.
Table 9. Kappa of FVC extraction using Otsu–VVIs.
(2)
Color Space Method
The ‘a’ component achieves high accuracy in images with low, medium, and high entropy values, with kappa coefficients ranging from 0.63 to 0.89. This indicates that this component has significant discriminative ability in extracting vegetation cover across various entropy conditions (Table 10).
Table 10. Kappa of FVC extraction using color space method.
(3)
LMM
This model shows higher vegetation extraction accuracy in the low-entropy images, with kappa coefficients between 0.62 and 0.78. For the medium-entropy images, the kappa coefficients range from 0.76 to 0.89, indicating an improvement in vegetation extraction accuracy compared with that for the low-entropy images. In the high-entropy images, the kappa coefficients range from 0.78 to 0.89, suggesting that the model’s overall precision in vegetation extraction is higher than that for the low-entropy images but lower than that for the medium-entropy images (Table 11).
Table 11. Kappa of FVC extraction using LMM.
(4)
SVM
In FVC extraction using the SVM, the low-entropy images show kappa coefficients ranging from 0.75 to 0.96, indicating high vegetation extraction accuracy. The medium-entropy images have kappa coefficients ranging from 0.79 to 0.97, showing a slight decline in accuracy compared with the low-entropy images, but the overall performance of the model remains good. The high-entropy images have kappa coefficients of 0.52 and 0.91, indicating a decrease in vegetation extraction accuracy compared with that for the low- and medium-entropy images (Table 12).
Table 12. Kappa of FVC extraction using SVM.
(5)
NN
Except for the first low-entropy image, which has a kappa coefficient of 0.49, the low-entropy images have kappa coefficients of 0.68–0.86, indicating low precision in sparse vegetation areas. Except for the fourth medium-entropy image, which has a kappa of −0.1, the medium-entropy images have kappa coefficients ranging from 0.82 to 0.97, showing high precision in extracting widely distributed vegetation but lower precision for sporadically distributed vegetation. Except for the eighth high-entropy image, which has a kappa coefficient of 0.50, the high-entropy images have kappa coefficients of 0.70 to 0.89, indicating lower precision in areas with sparse herbaceous vegetation and widespread soil backgrounds (Table 13).
Table 13. Kappa of FVC extraction using NN.

5. Discussion

5.1. Comparison of Differentiated Scenarios and Entropy for FVC Extraction

(1)
Sparse Shrub Areas with Similar Backgrounds (No. 1 and No. 2)
Both are low-entropy images. In No. 1, the sparse yellow vegetation blends visually with its soil background in color and texture. This reduces the applicability of extraction methods other than the vegetation indices EXG, VDVI, RGBVI, V-MSAVI, and CIVE and the ‘b’ component of the color space method; the remaining methods struggle to distinguish such subtle differences, affecting the accuracy and reliability of the overall analysis. Conversely, No. 2 displays a clearer contrast between vegetation and background, improving the applicability and effectiveness of up to 20 high-precision FVC extraction methods (including those above). This highlights the importance of environmental background differences for the selection of vegetation extraction methods and data interpretation. Xie et al. (2020) introduced a new red–green–blue ratio vegetation index, which achieved 93.5% accuracy in vegetation cover extraction using simple RGB data [39]; however, that index is not well suited to FVC extraction in arid and semiarid regions.
(2)
Mixed Grass–Shrub Areas with Distinct Ground Vegetation Demarcation (No. 3 and No. 10)
Both images cover mixed grass–shrub areas but differ markedly in the distinctiveness of surface vegetation; even so, the same high-precision FVC extraction methods apply to both images, with the exception of the ‘S’ component. Specifically, No. 10, a low-entropy image, is uniform and simple in content, and the ‘S’ component (saturation) demonstrates high accuracy in vegetation extraction there. In environments with simple, sparse vegetation distributions, the ‘S’ component can therefore effectively distinguish vegetated from nonvegetated areas, enhancing extraction accuracy. No. 3, a medium-entropy image, is visually more complex and diverse, containing more information and noise owing to its mix of vegetation, nonvegetation, and varied surface features, which reduces the performance of the ‘S’ component in vegetation extraction. This complexity contrasts sharply with the simpler, sparser vegetation distribution in No. 10, further highlighting how environmental complexity affects the choice and effectiveness of UAV data analysis methods, as noted by Mariana et al. (2017) [40].
(3)
Areas Cooccupied by Shrubs and Sparse Herbaceous Vegetation (No. 4 and No. 12)
In these scenarios, the same methods apply to both images except the NN, whose efficiency differs between them. No. 4, a medium-entropy image, has a complex distribution of shrubs and grass, including unevenly distributed, densely interwoven vegetation structures and variable surface features, which increase classification difficulty and compromise the performance of the NN model. Similar to the findings of Yan et al. (2019), environmental complexity significantly hinders NN performance [41]. By contrast, No. 12, a low-entropy image, may have a simpler or more regular distribution of vegetation and terrain despite also containing shrubs and sparse grass, providing a more manageable data structure for the NN. Additionally, No. 12 may benefit from better lighting conditions, further enhancing data processing efficiency and accuracy. Thus, environmental complexity significantly affects the effectiveness of NN methods in vegetation classification.
(4)
Extensive Shrub Areas with Minor Grassland Integration (No. 6 and No. 7)
Both are medium-entropy images. Up to 19 methods are suitable for these extensively and uniformly vegetated areas, mainly because the adequate resolution and spectral information of the images allow these high-precision methods to better process and analyze the characteristics of extensive, uniform vegetation distributions. Zhang et al. (2022) demonstrated that random forest models perform particularly well in processing such uniform vegetation distributions in arid regions [42]. Additionally, these methods benefit from their algorithms’ high data processing capability and robustness to environmental noise and background variations, which are crucial for extensively vegetated areas. Therefore, all methods except the five vegetation indices NGRDI, EXR, EXER, RGRI, and NGBDI enable a comprehensive assessment and accurate extraction of vegetation cover in such environments.
(5)
Mixed Grass–Shrub Areas with Indistinguishable Soil Backgrounds (No. 9)
In this medium-entropy image, the spectral properties of the soil closely resemble those of the surrounding vegetation, so traditional vegetation indices cannot effectively distinguish between vegetation and soil. This significantly reduces the number of suitable high-precision vegetation extraction methods; within the color space method, only the R, S, and ‘a’ components exhibit high extraction precision. As suggested by Catherine et al. (2013), data quality should be optimized through adjustments in image capture timing, lighting conditions, and camera angles to improve vegetation extraction in such complex scenarios [43]. Flying under optimal sunlight conditions, such as early morning or late evening, avoids the high reflectance and intense shadows caused by direct midday sunlight and reduces spectral differences between vegetation and soil due to changes in solar angle. Adjusting the UAV’s shooting angle captures more dimensional surface information, increasing the visual and spectral distinction between vegetation and soil in images. Together, these measures improve the recognition and classification accuracy of vegetation features in UAV images and optimize vegetation cover estimates in areas with complex soil backgrounds.
(6)
Complex Vegetation Types with High Cover and Architectural Interference (No. 5, 8, and 11)
No. 5 and No. 8 are high-entropy images, whereas No. 11 is a medium-entropy image. In this scenario, the same high-precision vegetation extraction methods apply to No. 5 and No. 11 owing to their similar settings. For No. 8, only 13 FVC extraction methods are available, fewer than for the other two images. Specifically, only six components (G, B, H, S, V, and ‘b’) of the color space method are suitable. This is mainly because of the extensive presence of sparse herbaceous vegetation and exposed soil in No. 8, which lowers the accuracy of the remaining color space components in distinguishing between vegetation and nonvegetation. Li et al. (2019) similarly highlighted the limitations of vegetation indices in complex, mixed environments [44].
In conclusion, the effectiveness of vegetation cover extraction in different entropy scenarios significantly depends on the entropy level and scene complexity, so selecting extraction methods suitable for specific backgrounds and conditions is crucial. Future research should further explore how to integrate the advantages of various methods—traditional vegetation indices, machine learning techniques, and recent image processing algorithms—to adapt to diverse environmental scenarios and improve vegetation extraction precision (Figure 11).
Figure 11. (1)–(3) Original image, SVM and NN. Red: FVC; white: non-FVC.
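The entropy levels used to group the scenarios above can be quantified as the Shannon entropy of an image's grayscale histogram; a minimal sketch (the 256-bin histogram is an assumption, not a parameter stated in the paper):

```python
import numpy as np

def image_entropy(gray, bins=256):
    """Shannon entropy (bits) of a grayscale image's intensity histogram.

    gray: float array with values in [0, 1].
    """
    hist, _ = np.histogram(gray, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins; 0 * log(0) is taken as 0
    return float(-np.sum(p * np.log2(p)))
```

A perfectly uniform scene yields 0 bits, while a highly varied scene approaches log2(bins) = 8 bits, matching the intuition that high-entropy images carry more information and noise.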

5.2. Comparison of FVC Extraction Methods

This study uses UAV image data and 24 FVC extraction methods to explore vegetation cover extraction in typical arid areas in Xinjiang across different entropy values. These methods encompass widely used vegetation indices, the color space method, LMM, and machine learning algorithms (SVM and NN) that are becoming mainstream in various research domains. Under different entropy conditions, the optimal FVC extraction methods are the CIVE, the ‘a’ component of the color space method, LMM, and machine learning. However, each method has distinct strengths and limitations in extracting FVC under specific conditions, such as shadows, vegetation under shadows, yellow vegetation, sparse vegetation, and biological soil crusts. These are detailed as follows (Figure 12, 15 m × 14 m):
Figure 12. (1) Original, (2) CIVE, (3) ‘a’ component, (4) LMM, (5) SVM, (6) NN. White: FVC; black: non-FVC. The red squares in subplot a highlight the particular vegetation encountered during the extraction process and its particular environment.
(1)
CIVE
The CIVE performs well in distinguishing pure shadows from vegetation but identifies biological soil crusts, sparse vegetation, yellow vegetation, and their branches as nonvegetation, which aligns with findings in desert regions by Hao et al. (2020) [45]. Vegetation under shadows is also frequently misclassified. This necessitates strict control over lighting and angles during UAV image capture to enhance FVC extraction accuracy under these conditions.
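The Otsu–CIVE pipeline can be sketched as follows: compute CIVE = 0.441R − 0.811G + 0.385B + 18.78745 per pixel, then binarize with Otsu's threshold, with lower CIVE values indicating vegetation. A minimal numpy sketch, not the authors' implementation:

```python
import numpy as np

def cive(rgb):
    """Color Index of Vegetation Extraction for an RGB image in [0, 1]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.441 * r - 0.811 * g + 0.385 * b + 18.78745

def otsu_threshold(values, bins=256):
    """Otsu's method: the threshold maximizing between-class variance."""
    hist, edges = np.histogram(values, bins=bins)
    centers = (edges[:-1] + edges[1:]) / 2
    p = hist / hist.sum()
    w0 = np.cumsum(p)           # cumulative weight of the lower class
    m = np.cumsum(p * centers)  # cumulative mean
    mt = m[-1]                  # total mean
    with np.errstate(divide="ignore", invalid="ignore"):
        var_b = (mt * w0 - m) ** 2 / (w0 * (1 - w0))
    var_b = np.nan_to_num(var_b)  # edges with empty classes contribute 0
    return centers[np.argmax(var_b)]
```

A vegetation mask then follows as `cive(img) < otsu_threshold(cive(img).ravel())`, since green pixels drive the index down through the −0.811G term.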
(2)
Color Space Method
The ‘a’ component improves extraction accuracy for sparse vegetation and pure shadow parts but identifies biological soil crusts, vegetation under shadows, yellow vegetation, and branch areas as nonvegetation, consistent with previous research on urban vegetation cover mapping [46]. Threshold adjustment can further optimize the distinction between vegetation and nonvegetation.
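The ‘a’ (green–red) component comes from the CIELAB color space, where green vegetation yields negative ‘a’ and reddish soil positive ‘a’. A minimal sRGB-to-‘a’ conversion sketch under the standard D65 white point (a generic colorimetric conversion, not the authors' code):

```python
import numpy as np

def rgb_to_lab_a(rgb):
    """CIELAB 'a' (green-red) component from sRGB values in [0, 1], channels last."""
    rgb = np.asarray(rgb, dtype=float)
    # sRGB gamma decoding -> linear RGB
    lin = np.where(rgb > 0.04045, ((rgb + 0.055) / 1.055) ** 2.4, rgb / 12.92)
    # linear RGB -> XYZ (sRGB primaries, D65)
    M = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = lin @ M.T
    xyz = xyz / np.array([0.95047, 1.0, 1.08883])  # normalize by D65 white
    f = np.where(xyz > 0.008856, np.cbrt(xyz), 7.787 * xyz + 16 / 116)
    return 500.0 * (f[..., 0] - f[..., 1])
```

Thresholding the ‘a’ image, e.g. with Otsu's method or an adjusted fixed cutoff near zero, then separates green vegetation from soil background.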
(3)
LMM
Compared with the CIVE and the ‘a’ component, the LMM significantly improves extraction accuracy for pure shadows, sparse vegetation, vegetation under shadows, and biological soil crusts, but it has limitations with yellow vegetation and branches. Its overall accuracy is significantly higher than that of the CIVE and the ‘a’ component, as reported by Ni (2023) [46].
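A two-endmember linear mixing model treats each pixel as pixel = f·veg + (1 − f)·soil and solves for the vegetation fraction f by least squares. A minimal sketch; the endmember spectra passed in are placeholders for the pure vegetation and soil pixels that would be sampled from the imagery:

```python
import numpy as np

def lmm_fvc(pixels, veg_em, soil_em):
    """Least-squares vegetation fraction under a 2-endmember linear mixing model.

    pixel = f * veg + (1 - f) * soil  =>  pixel - soil = f * (veg - soil),
    solved per pixel in closed form and clipped to the physical range [0, 1].
    pixels: (..., bands) float array; veg_em, soil_em: (bands,) endmember spectra.
    """
    veg_em = np.asarray(veg_em, float)
    soil_em = np.asarray(soil_em, float)
    d = veg_em - soil_em
    f = (np.asarray(pixels, float) - soil_em) @ d / (d @ d)
    return np.clip(f, 0.0, 1.0)
```

Averaging f over all pixels gives the scene-level FVC, which is why the LMM handles mixed pixels (sparse vegetation, crusts, shadow edges) more gracefully than a hard binary threshold.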
(4)
Machine Learning Algorithms
The SVM and NN outperform the other methods in extracting yellow vegetation and its branches. However, they have limitations in extracting pure shadow and sparse vegetation parts, with the NN performing notably worse. Vegetation under shadows and biological soil crusts are also not effectively recognized, with the SVM generally outperforming the NN. Moreover, despite these methods’ high extraction accuracy for pure green vegetation, limitations remain in extracting biological soil crusts, yellow vegetation, branches, shadows, and vegetation under shadows. Future research should explore combining these methods’ strengths to develop more accurate and practical strategies, particularly for vegetation cover extraction in arid and semiarid areas [47].
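A pixelwise SVM classifier for FVC can be sketched with scikit-learn (assumed available); the training pixels, class means, noise level, and kernel settings below are hypothetical stand-ins for the labeled samples the authors drew from their UAV imagery:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Hypothetical labeled training pixels: dark-green vegetation vs. bright
# reddish soil, each with small Gaussian noise around an assumed class mean.
veg = rng.normal([0.15, 0.45, 0.12], 0.03, size=(200, 3))
soil = rng.normal([0.60, 0.50, 0.40], 0.03, size=(200, 3))
X = np.vstack([veg, soil])
y = np.array([1] * 200 + [0] * 200)  # 1 = vegetation, 0 = background

clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X, y)

def classify_fvc(image):
    """Fraction of pixels labelled vegetation in an (H, W, 3) float RGB image."""
    labels = clf.predict(image.reshape(-1, 3))
    return labels.mean()
```

Because the decision boundary is learned from examples rather than a fixed index formula, such a classifier can be taught to accept yellow vegetation and branches, provided those classes are represented in the training set.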

5.3. Accuracy of UAV Remote Sensing Images

UAV remote sensing bridges the gap between ground measurements and low-spatial-resolution satellite sensing, providing fine centimeter-level ground data without the constraints of timing or other factors [48]. This particularly holds in specific small-scale monitoring tasks, where UAV operation costs are significantly lower than those of traditional satellite sensing methods. In practical applications, high-resolution UAV imagery captures more detail, greatly enhancing the accuracy of assessments of vegetation type, health, and cover; this resolution is one of the technology's greatest advantages, allowing researchers to observe and analyze surface features meticulously. However, a critical open question is whether the accuracy of vegetation cover extraction increases linearly with the resolution of the original images or whether the effect of resolution on FVC extraction stops being significant beyond a certain threshold. This issue should be addressed to optimize UAV remote sensing applications in ecological monitoring and environmental management. Current research indicates that although high-resolution imagery provides rich surface information, it can also introduce noise, especially in areas with unclear vegetation boundaries or dense vegetation, potentially decreasing FVC extraction accuracy [49]. Moreover, high image resolution typically entails considerable data processing demands, requiring more advanced hardware and incurring longer processing times and higher costs. Therefore, determining the optimal image resolution to balance accuracy and cost is an important direction for future research. Experimental and theoretical analyses should systematically assess FVC extraction performance at different resolutions and establish precise vegetation monitoring models.
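One way to probe the resolution question empirically is to degrade a fine-resolution vegetation mask to progressively coarser grids and recompute FVC at each scale; a minimal block-averaging sketch (the 0.5 per-block cover cutoff is an illustrative choice, not a value from the paper):

```python
import numpy as np

def downsample_mask_fvc(mask, factor):
    """FVC of a boolean vegetation mask after coarsening by an integer factor.

    The mask is averaged over factor x factor blocks, each block is
    re-labelled vegetation if its fine-scale cover is >= 0.5, and the
    coarse FVC is the fraction of vegetation blocks.
    """
    h, w = mask.shape
    h2, w2 = h // factor * factor, w // factor * factor  # crop to full blocks
    blocks = mask[:h2, :w2].reshape(h2 // factor, factor, w2 // factor, factor)
    frac = blocks.mean(axis=(1, 3))  # per-block fractional cover
    return (frac >= 0.5).mean()
```

Plotting this coarse FVC against the fine-resolution value for a range of factors shows how quickly mixed-pixel effects bias the estimate as resolution degrades.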
Finally, the lack of near-infrared (NIR) and multispectral data in this study greatly limits the ability to capture nuances in vegetation health, especially in the later stages of growth. Without NIR or multispectral imagery, more detailed information on plant health and soil–vegetation interactions cannot be obtained. NIR data, by contrast, can distinguish between healthy and stressed vegetation and are widely used in vegetation studies.

6. Conclusions

This study utilizes UAV data combined with Otsu–VVIs (EXG, EXR, EXER, NGRDI, NGBDI, RGRI, CIVE, V-MSAVI, EXGR, MGRVI, RGBVI, and VDVI), the color space method (R, G, B, H, S, V, L, a, and b), LMM, and two machine learning algorithms (SVM and NN) to extract fractional vegetation cover in different entropy scenarios in the arid regions of Xinjiang. The most effective methods for extracting fractional vegetation cover against the backdrop of the strong reflective soil found in arid and semiarid regions were identified and validated: the CIVE, the ‘a’ component in the color space method, LMM, and SVM.
Regarding the Otsu–VVIs in high-, medium-, and low-entropy images, the CIVE outperforms the other vegetation indices, with Accuracy = 0.77–0.97 and Precision = 0.82–1. These results highlight the CIVE’s superior overall performance, accuracy, and precision in vegetation extraction, with low rates of missed and false detections.
Regarding the color space method in high-, medium-, and low-entropy images, the ‘a’ component demonstrates superior FVC extraction accuracy compared with the other color components, with Accuracy = 0.82–0.95 and Precision = 0.75–0.96. These results emphasize the applicability of the ‘a’ component in extracting vegetation cover in the arid regions of Xinjiang.
The LMM achieves Accuracy = 0.81–0.94 and Precision = 0.87–0.99 in FVC extraction across high-, medium-, and low-entropy images. Therefore, the LMM provides high precision and accuracy in extracting vegetation cover in Xinjiang’s arid regions, with low rates of missed and false detections.
The SVM achieves Accuracy = 0.76–0.98 and Precision = 0.88–0.95 in FVC extraction across high-, medium-, and low-entropy images. Hence, the SVM is highly applicable for extracting FVC under different entropy conditions in Xinjiang’s arid and semiarid regions. By contrast, the NN shows relatively lower accuracy in extracting vegetation cover under varying entropy conditions among sparse distributions of shrubs and grasslands in arid regions.
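The Accuracy, Precision, and kappa values reported above follow from the binary confusion matrix of predicted versus reference vegetation pixels; a minimal sketch of the standard definitions:

```python
import numpy as np

def fvc_metrics(pred, truth):
    """Accuracy, Precision, and Cohen's kappa for binary vegetation masks."""
    pred = np.asarray(pred, bool).ravel()
    truth = np.asarray(truth, bool).ravel()
    tp = np.sum(pred & truth)    # vegetation correctly detected
    tn = np.sum(~pred & ~truth)  # background correctly rejected
    fp = np.sum(pred & ~truth)   # false detections
    fn = np.sum(~pred & truth)   # missed detections
    n = tp + tn + fp + fn
    acc = (tp + tn) / n
    prec = tp / (tp + fp) if (tp + fp) else 0.0
    # expected chance agreement for Cohen's kappa
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
    kappa = (acc - pe) / (1 - pe) if pe < 1 else 1.0
    return acc, prec, kappa
```

Kappa corrects Accuracy for chance agreement, which is why the paper's threshold of kappa > 0.6 is a stricter criterion than Accuracy > 0.75 in scenes dominated by one class.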

Author Contributions

Conceptualization, Y.M. and C.S.; methodology, C.S.; software, C.S.; validation, C.S., H.P., and J.G.; formal analysis, C.S.; investigation, C.S. and N.L.; resources, Y.M. and H.R.; data curation, C.S. and Q.W.; writing—original draft preparation, C.S.; writing—review and editing, Y.M. and C.S.; visualization, C.S. and H.P.; supervision, Y.M. and H.R.; project administration, Y.M.; funding acquisition, Y.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the carbon storage, turnover, biological origins, and future scenario prediction of representative wetland ecosystems in Xinjiang (Grant No. 2023D01D01), sponsored by the Natural Science Foundation of Xinjiang Uygur Autonomous Region, and supported by the Third Xinjiang Scientific Expedition Program (Grant No. 2021xjkk1400).

Data Availability Statement

All data included in this study are available upon request from the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Liu, C.; Zhang, X.; Wang, T.; Chen, G.; Zhu, K.; Wang, Q.; Wang, J. Detection of vegetation coverage changes in the Yellow River Basin from 2003 to 2020. Ecol. Indic. 2022, 138, 108818.
  2. Song, J.M.; Xia, N.; Hai, W.Y.; Tang, M.Y. Spatial-temporal variation of vegetation in Gobi region of Xinjiang based on NDVI-DFI model. Southwest China J. Agric. Sci. 2022, 35, 2867–2875.
  3. Zhao, J.; Li, J.; Zhang, Z.X.; Wu, S.L.; Zhong, B.; Liu, Q.H. A dataset of 16 m/10-day fractional vegetation cover of MuSyQ GF-series (2018–2020, China, Version 01). China Sci. Data 2022, 7, 221–230.
  4. Zhang, S.; Yang, R.; Wenxing, H.; Wang, L.; Shuang, L.; Song, H.; Zhao, W.; Li, L. Analysis of Fractional Vegetation Cover Changes and Driving Forces on Both Banks of Yongding River Before and After Ecological Water Replenishment. Ecol. Environ. Sci. 2023, 32, 264–273.
  5. Chen, X.; Lv, X.; Ma, L.; Chen, A.; Zhang, Q.; Zhang, Z. Optimization and Validation of Hyperspectral Estimation Capability of Cotton Leaf Nitrogen Based on SPA and RF. Remote Sens. 2022, 14, 5201.
  6. Li, Y.; Sun, J.; Wang, M.; Guo, J.; Wei, X.; Shukla, M.K.; Qi, Y. Spatiotemporal Variation of Fractional Vegetation Cover and Its Response to Climate Change and Topography Characteristics in Shaanxi Province, China. Appl. Sci. 2023, 13, 11532.
  7. Cai, Y.; Zhang, M.; Lin, H. Estimating the Urban Fractional Vegetation Cover Using an Object-Based Mixture Analysis Method and Sentinel-2 MSI Imagery. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 341–350.
  8. Liang, J.; Liu, D. Automated estimation of daily surface water fraction from MODIS and Landsat images using Gaussian process regression. Int. J. Remote Sens. 2021, 42, 4261–4283.
  9. Jia, X.; Shao, M.; Zhu, Y.; Luo, Y. Soil moisture decline due to afforestation across the Loess Plateau, China. J. Hydrol. 2017, 546, 113–122.
  10. Elmendorf, S.C.; Henry, G.H.; Hollister, R.D.; Björk, R.G.; Bjorkman, A.D.; Callaghan, T.V.; Collier, L.S.; Cooper, E.J.; Cornelissen, H.C.; Day, T.A.; et al. Global assessment of experimental climate warming on tundra vegetation: Heterogeneity over space and time. Ecol. Lett. 2012, 15, 164–175.
  11. Wang, N.; Guo, Y.; Wei, X.; Zhou, M.; Wang, H.; Bai, Y. UAV-based remote sensing using visible and multi-spectral indices for the estimation of vegetation cover in an oasis of a desert. Ecol. Indic. 2022, 141, 109155.
  12. Zhong, G.; Chen, J.; Huang, R.; Yi, S.; Qin, Y.; You, H.; Han, X.; Zhou, G. High Spatial Resolution Fractional Vegetation Coverage Inversion Based on UAV and Sentinel-2 Data: A Case Study of Alpine Grassland. Remote Sens. 2023, 15, 4266.
  13. Brazier, R.E.; Turnbull, L.; Wainwright, J.; Bol, R. Carbon loss by water erosion in drylands: Implications from a study of vegetation change in the south-west USA. Hydrol. Process. 2013, 28, 2212–2222.
  14. Maurya, A.K.; Bhargava, N.; Singh, D. Efficient selection of SAR features using ML based algorithms for accurate FVC estimation. Adv. Space Res. 2022, 70, 1795–1809.
  15. Fernández-Guisuraga, J.M.; Verrelst, J.; Calvo, L.; Suárez-Seoane, S. Hybrid inversion of radiative transfer models based on high spatial resolution satellite reflectance data improves fractional vegetation cover retrieval in heterogeneous ecological systems after fire. Remote Sens. Environ. 2021, 255, 12304.
  16. Getzin, S.; Wiegand, K.; Schöning, I. Assessing biodiversity in forests using very high-resolution images and unmanned aerial vehicles. Methods Ecol. Evol. 2011, 3, 397–404.
  17. Wu, S.; Deng, L.; Zhai, J.; Lu, Z.; Wu, Y.; Chen, Y.; Guo, L.; Gao, H. Approach for Monitoring Spatiotemporal Changes in Fractional Vegetation Cover Through Unmanned Aerial System-Guided-Satellite Survey: A Case Study in Mining Area. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2023, 16, 5502–5513.
  18. Zhang, T.; Liu, D. Estimating fractional vegetation cover from multispectral unmixing modeled with local endmember variability and spatial contextual information. ISPRS J. Photogramm. Remote Sens. 2024, 209, 481–499.
  19. Wang, H.; Han, D.; Mu, Y.; Jiang, L.; Yao, X.; Bai, Y.; Lu, Q.; Wang, F. Landscape-level vegetation classification and fractional woody and herbaceous vegetation cover estimation over the dryland ecosystems by unmanned aerial vehicle platform. Agric. For. Meteorol. 2019, 278, 107665.
  20. Ashapure, A.; Jung, J.; Chang, A.; Oh, S.; Maeda, M.; Landivar, J. A Comparative Study of RGB and Multispectral Sensor-Based Cotton Canopy Cover Modelling Using Multi-Temporal UAS Data. Remote Sens. 2019, 11, 2757.
  21. Du, M.; Li, M.; Noguchi, N.; Ji, J.; Ye, M. Retrieval of Fractional Vegetation Cover from Remote Sensing Image of Unmanned Aerial Vehicle Based on Mixed Pixel Decomposition Method. Drones 2023, 7, 43.
  22. Guo, W.; Rage, U.K.; Ninomiya, S. Illumination invariant segmentation of vegetation for time series wheat images based on decision tree model. Comput. Electron. Agric. 2013, 96, 58–66.
  23. Hu, Q.; Yang, J.; Xu, B.; Huang, J.; Memon, M.S.; Yin, G.; Zeng, Y.; Zhao, J.; Liu, K. Evaluation of Global Decametric-Resolution LAI, FAPAR and FVC Estimates Derived from Sentinel-2 Imagery. Remote Sens. 2020, 12, 912.
  24. Wang, B.; Jia, K.; Liang, S.; Xie, X.; Wei, X.; Zhao, X.; Yao, Y.; Zhang, X. Assessment of Sentinel-2 MSI Spectral Band Reflectances for Estimating Fractional Vegetation Cover. Remote Sens. 2018, 10, 1927.
  25. Li, J.; Fan, W.; Li, M. Application of linear mixing spectral model to classification of multi-spectral remote sensing image. J. Northeast. For. Univ. 2008, 36, 45–69.
  26. Hultquist, C.; Chen, G.; Zhao, K. A comparison of Gaussian process regression, random forests and support vector regression for burn severity assessment in diseased forests. Remote Sens. Lett. 2014, 5, 723–732.
  27. Durbha, S.S.; King, R.L.; Younan, N.H. Support vector machines regression for retrieval of leaf area index from multiangle imaging spectroradiometer. Remote Sens. Environ. 2007, 107, 348–361.
  28. Gränzig, T.; Fassnacht, F.E.; Kleinschmit, B.; Foerster, M. Mapping the fractional coverage of the invasive shrub Ulex europaeus with multi-temporal Sentinel-2 imagery utilizing UAV orthoimages and a new spatial optimization approach. Int. J. Appl. Earth Obs. Geoinf. 2021, 96, 102281.
  29. Mao, H.; Meng, J.; Ji, F.; Zhang, Q.; Fang, H. Comparison of Machine Learning Regression Algorithms for Cotton Leaf Area Index Retrieval Using Sentinel-2 Spectral Bands. Appl. Sci. 2019, 9, 1459.
  30. Jamin, A.; Humeau-Heurtier, A. (Multiscale) Cross-Entropy Methods: A Review. Entropy 2019, 22, 45.
  31. Rankine, W.J.M.; Tait, P.G. Miscellaneous Scientific Papers; C. Griffin: Glasgow, Scotland, 1881.
  32. Liu, Y.; Mu, X.; Wang, H.; Yan, G. A novel method for extracting green fractional vegetation cover from digital images. J. Veg. Sci. 2011, 23, 406–418.
  33. Geng, X.; Wang, X.; Fang, H.; Ye, J.; Han, L.; Gong, Y.; Cai, D. Vegetation coverage of desert ecosystems in the Qinghai-Tibet Plateau is underestimated. Ecol. Indic. 2022, 137, 108780.
  34. Sainui, J.; Pattanasatean, P. Color Classification based on Pixel Intensity Values. In Proceedings of the 2018 19th IEEE/ACIS International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD), Busan, Republic of Korea, 27–29 June 2018; pp. 302–306.
  35. Pan, Y.; Wu, W.; He, J.; Zhu, J.; Su, X.; Li, W.; Li, D.; Yao, X.; Cheng, T.; Zhu, Y.; et al. A novel approach for estimating fractional cover of crops by correcting angular effect using radiative transfer models and UAV multi-angular spectral data. Comput. Electron. Agric. 2024, 222, 109030.
  36. Meyer, G.E.; Mehta, T.; Kocher, M.F.; Mortensen, D.A.; Samal, A. Textural imaging and discriminant analysis for distinguishing weeds for spot spraying. Trans. ASAE 1998, 41, 1189–1197.
  37. Meyer, G.E.; Hindman, T.W.; Laksmi, K. Machine vision detection parameters for plant species identification. In Proceedings of the SPIE the International Society for Optical Engineering, Boston, MA, USA, 14 January 1999; pp. 124–523.
  38. Meyer, G.E.; Neto, J.C. Verification of color vegetation indices for automated crop imaging applications. Comput. Electron. Agric. 2008, 63, 282–293.
  39. Verrelst, J.; Schaepman, M.E.; Koetz, B.; Kneubühler, M. Angular sensitivity analysis of vegetation indices derived from CHRIS/PROBA data. Remote Sens. Environ. 2008, 112, 2341–2353.
  40. Liu, J.; Wei, L.; Zheng, Z.; Du, J. Vegetation cover change and its response to climate extremes in the Yellow River Basin. Sci. Total Environ. 2023, 905, 167366.
  41. Zaiming, Z.; Yanming, Y.; Benqing, C. Research on Vegetation Extraction and Fractional Vegetation Cover of Spartina Alterniflora Using UAV Images. Remote Sens. Technol. Appl. 2017, 32, 714–720.
  42. Huete, A.; Liu, H.; de Lira, G.; Batchily, K.; Escadafal, R. A soil color index to adjust for soil and litter noise in vegetation index imagery of arid regions. In Proceedings of the IGARSS’94—1994 IEEE International Geoscience and Remote Sensing Symposium, Pasadena, CA, USA, 8–12 August 1994; pp. 1042–1043.
  43. Yan, G.; Li, L.; Coy, A.; Mu, X.; Chen, S.; Xie, D.; Zhang, W.; Shen, Q.; Zhou, H. Improving the estimation of fractional vegetation cover from UAV RGB imagery by colour unmixing. ISPRS J. Photogramm. Remote Sens. 2019, 158, 23–34.
  44. Li, L.; Mu, X.; Macfarlane, C.; Song, W.; Chen, J.; Yan, K.; Yan, G. A half-Gaussian fitting method for estimating fractional vegetation cover of corn crops using unmanned aerial vehicle images. Agric. For. Meteorol. 2018, 262, 379–390.
  45. Van de Voorde, T.; Vlaeminck, J.; Canters, F. Comparing Different Approaches for Mapping Urban Vegetation Cover from Landsat ETM+ Data: A Case Study on Brussels. Sensors 2008, 8, 3880–3902.
  46. Zhang, D.; Ni, H. Inversion of Forest Biomass Based on Multi-Source Remote Sensing Images. Sensors 2023, 23, 9313.
  47. Hao, M.; Qin, L.; Mao, P.; Luo, J.; Zhao, W.; Qiu, G. Unmanned aerial vehicle (UAV) based methodology for spatial distribution pattern analysis of desert vegetation. J. Desert Res. 2020, 40, 169–179.
  48. Keesstra, S.D.; Bouma, J.; Wallinga, J.; Tittonell, P.; Smith, P.; Cerdà, A.; Montanarella, L.; Quinton, J.N.; Pachepsky, Y.; van der Putten, W.H.; et al. The significance of soils and soil science towards realization of the United Nations Sustainable Development Goals. Soil 2016, 2, 111–128.
  49. Croft, T.L.; Phillips, T.N. Least-Squares Proper Generalized Decompositions for Weakly Coercive Elliptic Problems. SIAM J. Sci. Comput. 2017, 39, A1366–A1388.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
