Article

Effects of Multi-Growth Periods UAV Images on Classifying Karst Wetland Vegetation Communities Using Object-Based Optimization Stacking Algorithm

College of Geomatics and Geoinformation, Guilin University of Technology, Guilin 541000, China
* Author to whom correspondence should be addressed.
Remote Sens. 2023, 15(16), 4003; https://doi.org/10.3390/rs15164003
Submission received: 10 July 2023 / Revised: 4 August 2023 / Accepted: 9 August 2023 / Published: 12 August 2023
(This article belongs to the Special Issue Remote Sensing for Wetland Restoration)

Abstract
Combining machine learning algorithms with multi-temporal remote sensing data for the fine classification of wetland vegetation has received wide attention from researchers. However, wetland vegetation exhibits different physiological characteristics and phenological information in different growth periods, so it is worth exploring how to use these growth-period characteristics to achieve fine classification of vegetation communities. To address these issues, we developed an ensemble learning model by stacking the Random Forest (RF), CatBoost, and XGBoost algorithms for karst wetland vegetation community mapping and evaluated its classification performance using UAV images from three growth periods. We constructed six classification scenarios to quantitatively evaluate the effects of combining multi-growth-period UAV images on identifying vegetation communities in the Huixian Karst Wetland of International Importance. Finally, we clarified the influence and contribution of different feature bands on vegetation community classification from local and global perspectives using the SHAP (Shapley Additive explanations) method. The results indicated that (1) the overall accuracies of the four algorithms ranged from 82.03% to 93.37%, with classification performance ranked Stacking > CatBoost > RF > XGBoost. (2) The Stacking algorithm significantly improved the classification results of vegetation communities, especially Huakolasa, Reed-Imperate, Linden-Camphora, and Cephalanthus tetrandrus-Paliurus ramosissimus; it showed better classification performance and generalization ability than the other three machine learning algorithms. (3) The combination of spring, summer, and autumn UAV images produced the highest classification accuracy (OA, 93.37%). Among the three single growth periods, summer imagery achieved the highest classification accuracy (OA, 85.94%), followed by spring (OA, 85.32%) and autumn (OA, 84.47%). (4) Interpretation of the black-box Stacking model outputs found that vegetation indexes and texture features contributed more to classifying karst wetland vegetation communities than the original spectral bands, geometry features, and position features. The vegetation indexes (COM and NGBDI) and texture features (Homogeneity and Standard Deviation) were very sensitive when distinguishing Bermudagrass, Bamboo, and Linden-Camphora. These findings provide a scientific basis for the protection, restoration, and sustainable development of karst wetlands.

1. Introduction

Wetlands are precious natural resources that provide ecological values such as carbon sequestration [1], water purification [2], and soil conservation [3], and they are therefore known as “the kidneys of the earth” [4]. Owing to their unique hydrological and geological conditions, karst landscapes have become one of the wetland types with significant research value, and karst wetlands are an important wetland type widely distributed in karst areas. The Huixian Karst International Important Wetland (hereafter the Huixian Karst Wetland) is located in the core of the East Asian Karst region, the third largest karst region in the world, and is currently the largest karst wetland in China [5]. It covers a complete vegetation sequence of terrestrial vegetation–wet vegetation–water-holding vegetation–submerged vegetation–floating vegetation and is the largest and most representative karst wetland among the world’s subtropical peak-forest plains. Clarifying vegetation distribution patterns within wetlands is a core element of wetland plant diversity research, and vegetation regeneration, succession, and spatial pattern distribution play an important role in wetland ecosystem stability and biodiversity [6]. Accurate and detailed classification of wetland vegetation communities is therefore important. However, because of the high spatial and temporal heterogeneity of wetlands [7] and the wide variation in plant species and vegetation structure, the fine classification of wetland vegetation is a challenging task. Remote sensing (RS) systems offer frequent Earth observation datasets with diverse characteristics and extensive coverage [8], which makes them highly appealing for monitoring wetland dynamics at both local and global scales [9]. Moreover, RS has been proven to enhance the accuracy of wetland classification and change detection [10].
However, traditional remote sensing platforms such as satellites or aviation are characterized by high cost, vulnerability to weather, and low temporal and spatial resolution, and their applications still face many difficulties. To change this situation, low-altitude unmanned aerial vehicle (UAV) remote sensing technology has become a cost-effective alternative for the flexible acquisition of ultra-high spatial and temporal resolution remote sensing images and has been widely used in wetland vegetation classification and mapping [11,12], tree species identification [13], biomass estimation [14], and vegetation cover estimation [15]. In addition, it has developed as an important tool for fine classification and monitoring wetland vegetation communities [16].
Different vegetation has different physiological information and phenological characteristics [17], and timely and accurate information about the canopy in different growing periods is essential for the fine classification of vegetation communities. However, because of the high spectral similarity and poor separability of the wetland plant canopy, single-temporal remote sensing images alone often cannot achieve satisfactory classification results. The integration of multi-temporal remote sensing imagery can adequately capture physiological and physical differences among features and has been demonstrated to achieve higher classification accuracy than single-temporal imagery [18,19]. Van et al. [20] used a partial least squares random forest (PLS-RF) algorithm based on satellite multispectral imagery to assess the separability of six wetland vegetation communities in the subtropical coastal zone over four seasons and showed that the combined images of the three temporal phases of autumn, winter, and spring produced the highest mean overall classification accuracy (OA) (86 ± 3.1%) compared with mono-temporal classification. Kollert et al. [21] used Sentinel-2 multispectral images of the spring, summer, and autumn seasons for tree species classification based on the random forest (RF) algorithm and found that the combined images of the three temporal phases achieved a higher classification accuracy than any single temporal image, with an OA of 84.4%. These studies illustrate that the integration of multi-temporal data provides complementary information and thus improves vegetation classification accuracy. However, the critical periods of vegetation growth include budding, flowering, and maturation; not every period of imagery provides useful information, and using images from all available temporal phases may not achieve the best classification performance [22]. Piaser et al. [23] used Sentinel-2 images to classify aquatic vegetation in temperate wetlands and assessed the classification accuracy of eight machine learning algorithms; the results showed that the classification scenario covering the widest temporal range (April–November) achieved better results than the scenarios covering narrower temporal ranges (May–October, June–September, and July–August). Macintyre [24] used multi-temporal (spring, summer, autumn, and winter) Sentinel-2 imagery for wetland vegetation classification and showed that spring imagery achieved the highest classification accuracy among the mono-temporal classification scenarios, while the multi-temporal feature dataset combining autumn and spring imagery produced the best results among all classification scenarios. These studies demonstrate that multi-temporal remote sensing imagery has great potential for wetland vegetation classification and that the combination of images from different growth periods affects the final classification results. Therefore, in this study, we used unmanned aerial vehicle (UAV) images from different growth periods as the data source to evaluate the differences in vegetation community classification effectiveness among growth periods, aiming to identify suitable growth-period image features for distinguishing the various vegetation communities.
Previous research has confirmed that compared to the traditional pixel-based classification methods, combining the Object-Based Image Analysis (OBIA) approach with high-spatial-resolution remote sensing images can better capture spatial relationships between land features and distinguish land cover types with similar spectral characteristics [25], resulting in more accurate wetland vegetation classification results [26]. OBIA first uses a segmentation algorithm to split the image into a number of uniform pixel aggregates and then fully exploits the semantic features of the image [27], which are set as inputs in the machine learning algorithm for object classification. Machine learning algorithms have the advantages of strong model generalization [28] and can automate the processing of large amounts of data [29], and joint OBIA and machine learning algorithms have become one of the important methods for fine classification of wetland vegetation in recent years [30,31,32]. Fu et al. [12] combined the object-based RF-DT algorithm and UAV RGB images to classify wetland vegetation communities with an overall classification accuracy of more than 85%. Zhou et al. [33] used RGB imagery for wetland vegetation classification and evaluated ten OBIA scenarios using five machine learning algorithms (Bayesian, K-nearest neighbor (KNN), SVM, decision tree (DT), and RF), and the results showed that the UAV-based RGB imagery combined with OBIA achieved high accuracy of wetland vegetation classification (OA of 90.73%). The above studies indicated that object-based machine learning algorithms have prospective applications and accurate classification performance in the field of wetland vegetation classification. However, utilizing only a single classifier often suffers from a degree of instability and overfitting [34], and is susceptible to the quality of the dataset and field sample data. 
Stacking Ensemble Learning (SEL) as a combinatorial strategy can integrate the advantages of multiple classifiers to form a complementary advantage, thus achieving better classification results. The Stacking mechanism involves training base models in a hierarchical manner and then training a combiner (also known as a meta-classifier) that generates the final predictions based on the predictions of the base models. It has been proven to have good classification performance and generalization ability [35]. Cai et al. [36] used multi-temporal Sentinel-1 and Sentinel-2 images combined with the Stacking algorithm to extract vegetation information in the Dongting Lake wetlands, and the study demonstrated that the overall accuracy and Kappa coefficient (92.46% and 0.92) of object-based Stacking were higher than those of single classifiers (SVM, RF, and KNN). This effectively demonstrated the superiority of the object-based Stacking algorithm over individual classifiers in vegetation classification in highly heterogeneous areas. The above study shows that Stacking algorithms with multi-classifier integration have better performance in wetland classification. However, there is still a lack of relevant research to demonstrate the suitability of multi-temporal UAV-RGB images combined with an object-based Stacking algorithm for karst wetland vegetation classification. Therefore, this paper used OBIA to stack three machine learning models to construct the SEL model and compared the differences in classification effects of the four classifiers to demonstrate the stability and generalization ability of the Stacking model.
Machine learning models need to achieve high accuracy in predicting target attributes from a large number of input features, which makes the model’s operation and its influencing factors difficult to explain; such models are known as “black-box” models [37]. To explain the influence of input features on target variables, feature attribution must be analyzed systematically. Several studies have used importance evaluation methods such as linear regression [38] and random forest [39] to explain shallow machine learning model results; such methods can quickly filter out the features that most affect model performance and provide a visual understanding of each feature’s contribution to the model. However, these methods have difficulty explaining negative contributions of input feature variables and struggle to provide consistent explanations for complex models. Shapley Additive explanations (SHAP) uses the concept of Shapley values from game theory to attribute features in machine learning algorithms; it can interpret different types of models and rationalize their predictions [40]. SHAP provides detailed explanations of the input variables with respect to both the model calculation process and its outputs, offering local and global explanations that effectively address the limitations of traditional feature importance analysis methods. However, SHAP-based explanation of feature variables and models has not yet been applied to wetland vegetation classification.
Therefore, this paper uses the SHAP method to analyze the classification model and multi-temporal image data to reveal the complex details of how different input features affect the local and global scale outputs, and to explore the best image feature variables for fine classification of each wetland vegetation community.
To fill the above research gaps, this study focuses on vegetation community mapping in the Huixian Karst International Important Wetland in Guilin City, southern China, by combining Object-Based Image Analysis (OBIA) with the SEL algorithm using multi-growth-period UAV images. The study aims to clarify the impact of combining UAV RGB images from different growth periods on classifying vegetation communities. The main contributions of this study are as follows:
  • We quantitatively evaluated the differences in the classification accuracy of single-growth-period UAV RGB images for vegetation communities mapping and explored the appropriate growth period images and their sensitive feature bands for distinguishing each vegetation community in a karst wetland.
  • We stacked an ensemble learning classification model using three machine learning algorithms (RF, XGBoost, and CatBoost) and examined its stability and generalization ability for vegetation communities’ classification using different growth period image combinations.
  • We explored the classification performance of different growth period UAV image combinations and quantitatively evaluated the effect of different growth period scenarios on vegetation communities mapping.
  • We interpreted the contributors of local and global feature variables to classify karst wetland vegetation communities using the Shapley Additive explanations (SHAP) method for black-box stacking model outputs and extracted the sensitive image features for distinguishing each vegetation community in different growth periods.

2. Materials and Methods

2.1. Study Area

In this study, we used the Huixian Karst Wetland of International Importance (25°05′20″~25°06′55″N, 110°10′50″~110°14′21″E) in Guilin, Guangxi Zhuang Autonomous Region, China, as the study area (Figure 1). The study area is the largest original ecological karst wetland in China. The wetland landscape and its surrounding environment are extremely rare and of great research value within the national and even global karst peak-forest plain landscape, with an average annual temperature of 16.5~20.5 °C and a ground elevation between 147 and 292 m. The study area has a subtropical monsoon climate that is wet throughout the year, with abundant precipitation, a rich variety of land cover types, and a complex natural landscape pattern. Typical representative vegetation types of the study area include Huakolasa, Bermudagrass, Miscanthus, Reed-Imperate, the invasive Water Hyacinth community, and other wetland vegetation. The study area lies in the core protection zone of this wetland, covering a complete vegetation sequence of terrestrial vegetation–wet vegetation–submerged vegetation–floating vegetation, which is typical and representative. Because vegetation canopy spectra change greatly across growth periods and different vegetation types also have different phenological characteristics, this paper combined images from several growth periods in this study area to explore the differences in the fine classification of karst wetland vegetation communities. This study provides an important scientific basis and technical support for clarifying the spatial distribution pattern of karst wetland vegetation and for carrying out ecological protection and sustainable development of wetlands.

2.2. Multi-Temporal UAV Image Acquisition and Field Measurements

In this paper, we used a DJI Matrice M300 quadcopter drone to collect RGB image data of vegetation in three growth periods: 15 March 2022, 2 July 2022, and 23 October 2022. Figure 1a,b show scenes from the actual field measurements. The flight altitude was 90–110 m, and the flights took place from 10:00 a.m. to 3:00 p.m. (UTC + 8:00), with clear weather, no wind, and good visibility during acquisition, as shown in Table 1. Flight paths were automatically generated using ground station software (DJI GS Pro v1.8.3) with a forward overlap of 75–85% and a side overlap of 65–80%. We used Pix4Dmapper software v4.5.6 to process the acquired UAV images; the main steps included checking image quality, matching images, measuring and solving aerial triangulation, generating dense point clouds, and finally generating a digital orthophoto map (DOM) of the study area. The projection coordinate system was uniformly set to WGS 1984 UTM Zone 49N. Figure 1c–e display the DOMs of the study area acquired in March, July, and October, respectively.
In this paper, sample points were collected by simultaneous ground survey and visual interpretation of ultra-high-spatial-resolution UAV imagery (Table 2) and were divided into 13 categories: Huakolasa (HK); Bermudagrass (BG); Reed-Imperate (RI); Miscanthus (MC); Water Hyacinth (WH); Lotus (LT); Algae (AG); Karst River (KR); Bamboo (BO); Cephalanthus tetrandrus-Paliurus ramosissimus (CP); Linden-Camphora (LC); Cropland-beach (CB); and Human-made matter (HM). The March imagery contained only 12 categories because of the absence of Lotus. All samples were randomly divided into 70% for training and 30% for testing.

2.3. Methods

The technical process of this paper (Figure 2) consists of five main parts: (1) First, we preprocessed the original March, July, and October UAV images of the study area and constructed different growth-period combinations for the classification of karst wetland vegetation communities. (2) We performed multiscale segmentation for each growth-period scenario, extracted spectral features, geometry features, position features, texture features, and vegetation indexes, and constructed a multidimensional feature dataset for each scenario. (3) We used high-correlation elimination and recursive feature elimination (RFE) for feature selection to remove redundant data; we then stacked the RF, XGBoost, and CatBoost algorithms to build an ensemble learning classification model with adaptive parameter tuning and model training, and applied these algorithms to finely classify the karst wetland vegetation communities. (4) We evaluated the classification performance of the stacking model with accuracy metrics to explore the differences in classification accuracy among algorithms and their effect on each vegetation community in the karst wetland. (5) We explored the effects of different growth-period combination scenarios on the classification of karst wetland vegetation and their differing effects on individual vegetation communities. Finally, we used the SHAP model interpreter to interpret the classification results for both global and local variables in order to identify the most sensitive features and the variables that contributed most to the classification results.

2.3.1. Creation of Multi-Temporal Dataset and Data Dimensionality Reduction

Previous studies showed that combined OBIA and high-resolution remote sensing imagery can show better classification performance in wetland vegetation community classification [41]. In this paper, we used the multiresolution segmentation (MRS) algorithm to segment images of different growth periods and used the ESP2 tool [42] to determine the optimal segmentation scale. The final optimal segmentation scale parameters were determined to be 180, and the shape factor and compactness were 0.3 and 0.5, respectively. Then we calculated features such as spectral features (4), texture features (12), geometry features (14), position features (5), and vegetation indexes (17) for each segmented object (the detailed information on vegetation index is shown in Table 3). Finally, we used the above features to classify the karst wetland vegetation communities in a comprehensive way. The detailed information on each scenario used to construct multidimensional dataset features is shown in Table 4.
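As an illustration of how such RGB-derived vegetation indexes are computed per segmented object or pixel, the sketch below implements NGBDI (Normalized Green–Blue Difference Index, one of the indexes referred to in this study) as (G − B)/(G + B); the toy image is a hypothetical stand-in for the UAV orthophoto.

```python
import numpy as np

def ngbdi(rgb):
    """Normalized Green-Blue Difference Index, (G - B) / (G + B),
    computed per pixel from an (H, W, 3) RGB array."""
    g = rgb[..., 1].astype(np.float64)
    b = rgb[..., 2].astype(np.float64)
    denom = g + b
    # guard against division by zero on fully dark pixels
    return np.where(denom > 0, (g - b) / np.where(denom > 0, denom, 1), 0.0)

# toy 1x2 image: a green-dominated pixel and a blue-dominated pixel
img = np.array([[[10, 200, 50], [10, 50, 200]]], dtype=np.uint8)
print(ngbdi(img))  # [[ 0.6 -0.6]]
```

Per-object index values would then be obtained by averaging the pixel values inside each segment produced by the MRS algorithm.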
The large amount of redundant information and the highly correlated variables in the object-based multidimensional feature dataset not only reduce processing efficiency but also affect model accuracy and stability. The Recursive Feature Elimination (RFE) algorithm [43] is a feature selection method that helps improve model performance by reducing dimensionality and discarding irrelevant features [44]; it automatically selects the most important features to enhance the classification accuracy and generalization capability of the model. In this paper, we combined high-correlation elimination with the RFE method to achieve dimensionality reduction and variable selection. The correlation coefficient threshold was set to 0.95 for the single-growth-period classification scenarios and 0.8 for the multi-growth-period scenarios. After eliminating the highly correlated variables, variable selection was performed using the RFE method. Finally, the classification model of karst wetland vegetation communities was constructed from the selected feature variables of each classification scenario. Table 5 shows the variation in the number of variables and the RFE model accuracy. The model training accuracy of the RFE algorithm was higher than 85% and the RMSE was lower than 1.7, indicating that the RFE algorithm achieved high accuracy and low prediction error on the training set.
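A minimal sketch of this two-step reduction, using scikit-learn's RFE with a random-forest ranker on a synthetic feature table (the real inputs are the object-based features of Table 4; the dataset, column names, and the 10-feature target here are illustrative assumptions):

```python
import numpy as np
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE

# synthetic stand-in for an object-based multidimensional feature table
X, y = make_classification(n_samples=300, n_features=20, n_informative=8,
                           n_redundant=0, random_state=0)
df = pd.DataFrame(X, columns=[f"f{i}" for i in range(20)])

# step 1: drop one member of every feature pair whose |r| exceeds the
# threshold (0.95 for single-period scenarios, 0.8 for multi-period ones)
corr = df.corr().abs()
upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
drop = [c for c in upper.columns if (upper[c] > 0.95).any()]
df = df.drop(columns=drop)

# step 2: recursive feature elimination with a random-forest ranker
rfe = RFE(RandomForestClassifier(n_estimators=100, random_state=0),
          n_features_to_select=10)
rfe.fit(df.values, y)
selected = df.columns[rfe.support_].tolist()
print(len(selected))  # 10
```

The surviving columns (`selected`) would then feed the classification models for each scenario.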

2.3.2. Stacking Ensembled Learning Classification Model

Stacking is able to combine the advantages of multiple base models to reduce the deviation and variance of the model, thus improving the generalization ability of the model [45]. In addition, Stacking can integrate the prediction results of different models to balance the generalization ability and fitting ability of the model on the complex data, which reduces the risk of overfitting [46] and improves the prediction performance of the model. Random Forest (RF) can efficiently and quickly handle high-dimensional data and a large number of training samples, and it has some robustness to outliers and noise [47]. XGBoost (eXtreme Gradient Boosting) is able to capture the nonlinear relationships and interaction effects in the data and performs well for complex prediction problems [48]. In addition, its built-in regularization term can control the complexity of the model and effectively prevent overfitting [49]. CatBoost (Categorical Boosting) has advantages such as automatic processing of category features and missing values, automatic feature scaling, and speed and efficiency [50]. Therefore, in this study, we selected RF, XGBoost, and CatBoost as the base models and used the grid search method for adaptive parameter tuning to obtain the best parameter values of the models under each scenario, and the specific tuning parameters of each base model are shown in Table 6. The optimal parameters for each scenario are shown in Appendix A.
The construction process of the SEL model is shown in Figure 3: (1) First, we fed the multidimensional dataset obtained after variable selection into the three base models, dividing the data randomly into 70% for training and 30% for testing. (2) We trained each base model using five-fold cross-validation on the training set and averaged the five out-of-fold predictions to obtain each base model’s average prediction. (3) We selected the base model with the highest average accuracy as the meta model and stacked the base models’ predictions to form new training and testing sets. (4) Finally, we fed the base models’ predictions as new features into the meta model and trained it on the new training data to produce the final classification results.
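The workflow above can be sketched with scikit-learn's `StackingClassifier`. Note this is an approximation under stated assumptions: `GradientBoostingClassifier` stands in for the XGBoost/CatBoost base learners (those are external libraries), and the synthetic data, class count, and parameter values are illustrative rather than the paper's settings.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (GradientBoostingClassifier,
                              RandomForestClassifier, StackingClassifier)
from sklearn.model_selection import train_test_split

# synthetic stand-in for the object-based feature table (13 classes in the paper)
X, y = make_classification(n_samples=600, n_features=15, n_informative=10,
                           n_classes=5, random_state=0)
# 70/30 split, mirroring the paper's sampling scheme
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# base models; GradientBoosting stands in for XGBoost/CatBoost here
base = [("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("gb", GradientBoostingClassifier(n_estimators=50, random_state=0))]

# five-fold out-of-fold predictions of the base models feed the meta model;
# an RF meta model mirrors the paper's choice of the best base model as combiner
stack = StackingClassifier(estimators=base,
                           final_estimator=RandomForestClassifier(random_state=0),
                           cv=5)
stack.fit(X_tr, y_tr)
print(round(stack.score(X_te, y_te), 2))
```

`StackingClassifier` internally performs steps (2)–(4): cross-validated base-model predictions become the meta model's training features.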

2.3.3. Accuracy Metrics

In this paper, we used validation samples to evaluate the classification accuracy of karst wetland vegetation communities between different models and different growth periods. We evaluated the differences in classification results between different models and different classification scenarios qualitatively by visual presentation. Overall classification accuracy (OA) was used to evaluate the classification performance of the base model and Stacking model quantitatively. The mean F1 score (mF1) was used to evaluate the difference in classification accuracy between different algorithms for vegetation communities. The F1 score was used to evaluate the differences in classification accuracy of SEL algorithms quantitatively on different classification scenarios and different vegetation communities. To assess the statistical significance of the variations among the classification outcomes generated by the four algorithms, this study employed McNemar’s chi-square test [51]. The test was utilized to analyze the significance of discrepancies in the classification results across the different algorithms at the 95% confidence level. In particular, the F1 score was the harmonic average of precision and recall [52], which can be combined to consider the accuracy and completeness of the classifier. The formula is as follows:
Precision = TP / (TP + FP)
Recall = TP / (TP + FN)
F1 = 2 × (Precision × Recall) / (Precision + Recall)
where TP is a true positive, FP is a false positive, and FN is a false negative.
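A minimal numeric illustration of the three formulas above (the TP/FP/FN counts are hypothetical):

```python
def precision_recall_f1(tp, fp, fn):
    """Per-class precision, recall, and their harmonic mean (F1)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# e.g. 80 correct detections of a community, 20 false alarms, 10 misses
p, r, f1 = precision_recall_f1(tp=80, fp=20, fn=10)
print(round(p, 3), round(r, 3), round(f1, 3))  # 0.8 0.889 0.842
```

The mean F1 (mF1) reported in the paper would simply average these per-class F1 scores across all vegetation communities.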
McNemar’s chi-square test is based on a 2 × 2 contingency table of paired classification outcomes, with cells defined as follows:
a: The number of paired samples with positive classification results on both occasions (judged positive on both occasions).
b: The number of paired samples that have a positive result for the first classification and a negative result for the second classification.
c: The number of paired samples with negative results for the first classification and positive results for the second classification.
d: The number of paired samples with negative classification results on both occasions (judged negative on both occasions).
The McNemar statistic is calculated as:
χ² = (b − c)² / (b + c)
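A minimal sketch of the statistic on hypothetical discordant-pair counts (only b and c enter the formula; with one degree of freedom, χ² > 3.84 indicates a significant difference at the 95% confidence level):

```python
def mcnemar_chi2(b, c):
    """McNemar statistic over the discordant pairs b and c:
    samples classified correctly by one model but not the other."""
    return (b - c) ** 2 / (b + c)

# e.g. model A alone correct on 40 samples, model B alone correct on 20
chi2 = mcnemar_chi2(b=40, c=20)
print(round(chi2, 2))  # 6.67
significant = chi2 > 3.84  # 95% critical value, 1 degree of freedom
```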

2.3.4. Model Interpretation and Feature Importance Analysis

Machine learning models usually comprise complex algorithms and weighting parameters and are called “black boxes” due to their lack of interpretability. In order to understand their internal decision mechanisms and improve model interpretability, researchers proposed some feature importance analysis approaches to help machine learning models transform “black boxes” into “white boxes” [53]. SHAP (Shapley Additive explanations), proposed by Lundberg and Lee [54], can explain the output of the black-box model and assess the positive and negative contributions to the input features at the local and global levels [55]. SHAP is based on the Shapley value of game theory and can provide a way to estimate the contribution of each feature. Moreover, it can help us understand the overall regularity of the model and the reasons for individual predictions. In this paper, we used the Shapley value of the SHAP algorithm to quantify the contribution of each feature in each classification scenario to the model prediction as a metric for importance analysis. Moreover, we quantified the contribution of different growth periods and different image features to the classification effect of karst wetland vegetation communities at the local scale (single growth period) and the global scale (image combination), respectively.
In SHAP, each feature’s contribution to the model output f_x is allocated according to its marginal contribution. For a given sample, the SHAP value ϕ_i of feature i is calculated as follows:
ϕ_i = Σ_{S ⊆ N\{i}} [ |S|! (M − |S| − 1)! / M! ] · [ f_x(S ∪ {i}) − f_x(S) ]
where N is the set of all M input features and S is a subset of N \ {i} with |S| elements. A linear function g of the binary features is then defined based on the following additive feature attribution method:
g(z′) = ϕ_0 + Σ_{i=1}^{M} ϕ_i z′_i
where z′_i ∈ {0, 1} equals 1 when feature i is observed and 0 otherwise, and M is the number of input features.
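A brute-force illustration of the Shapley-value definition above (feasible only for a handful of features; the feature names and contributions form a toy additive value function, not the paper's model):

```python
from itertools import combinations
from math import factorial

def shapley_values(value_fn, features):
    """Exact Shapley values: weighted marginal contribution of each
    feature i over all subsets S of the remaining features."""
    M = len(features)
    phi = {}
    for i in features:
        rest = [f for f in features if f != i]
        total = 0.0
        for r in range(M):
            for S in combinations(rest, r):
                # weight |S|! (M - |S| - 1)! / M! from the formula above
                w = factorial(len(S)) * factorial(M - len(S) - 1) / factorial(M)
                total += w * (value_fn(set(S) | {i}) - value_fn(set(S)))
        phi[i] = total
    return phi

# toy additive "model": each present feature contributes a fixed amount
contrib = {"ndvi": 0.5, "homogeneity": 0.3, "area": 0.2}
v = lambda S: sum(contrib[f] for f in S)
phi = shapley_values(v, list(contrib))
print({k: round(x, 3) for k, x in phi.items()})
# {'ndvi': 0.5, 'homogeneity': 0.3, 'area': 0.2}
```

For an additive value function the Shapley values recover each feature's own contribution exactly, which is the consistency property that SHAP-based feature attribution builds on; practical SHAP implementations approximate this sum efficiently for tree ensembles.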

3. Results

3.1. Classification Results of Base Model vs. Stacking Ensemble Learning Model

3.1.1. Comparison of Classification Accuracy of Different Algorithms

To qualitatively assess the differences in classification ability among the four algorithms, we present the classification results of the vegetation communities in the six scenarios (Figure 4). Visual comparison showed that, in Scenario 1, all four algorithms recognized BG and HK well but differed markedly in their classification of AG, BO, and LC. Except for the Stacking algorithm, the other three algorithms confused BO with LC to some degree, and misclassification between AG and KR also occurred. In Scenario 2, the XGBoost and CatBoost algorithms misclassified a large amount of LT as AG, while the RF and Stacking algorithms delineated the distribution range of LT more completely. In Scenario 3, the XGBoost algorithm misclassified a large amount of KR as AG, and RF misclassified a small amount of KR as AG, likely because of the geographical and spectral similarity between the two, while the Stacking algorithm accurately identified KR; Stacking also showed a clear advantage in recognizing CP. In Scenario 4, the other three algorithms misclassified LC as HK and CP to varying degrees, whereas the Stacking algorithm largely alleviated this problem. The four algorithms also differed in delineating BG boundaries: all three base models misclassified shadows as other categories, while the Stacking algorithm correctly classified the vegetation types in the shadowed areas. In Scenario 5, the Stacking algorithm portrayed KR most completely.
The other three algorithms all misclassified KR as AG to varying degrees, while the Stacking algorithm delineated the distribution range and boundary of KR more completely and continuously. In Scenario 6, all four algorithms identified BO fairly accurately, although the three base models misclassified a small number of BO shadows as LC, whereas the Stacking algorithm identified BO accurately; in addition, Stacking extracted KR most accurately and drew a clear boundary. These results showed that the Stacking algorithm recognized most karst wetland vegetation communities well and significantly improved the classification of many vegetation communities.
To evaluate the differences in classification accuracy among the four algorithms, we analyzed their OA under the six classification scenarios (Figure 5a). The box distributions showed that Stacking improved applicability relative to the single classification models and achieved the highest mean OA across the six scenarios (89.1%), exceeding RF, XGBoost, and CatBoost by 1.6%, 2.5%, and 1.4%, respectively. Comparing the OA ranges of the four models showed that the RF algorithm was the most sensitive to the data combination: integrating different data for marsh vegetation classification produced the largest accuracy gains with RF. Analyzing the lengths of the error bars across the six scenarios, XGBoost was the most stable for wetland vegetation community classification, followed by Stacking, while CatBoost fluctuated over the widest range. The scatter distribution (Figure 5a) showed that most scenarios exceeded 90% OA when Stacking or RF was used for the classification of karst wetland vegetation communities, indicating that integrating different growth periods can significantly improve the overall classification accuracy. The scatter plots also showed that, for all four models, the scenarios combining multiple growth periods achieved significantly higher accuracy than the single-growth-period scenarios.
The highest overall classification accuracy was obtained for the scenario combining spring, summer, and autumn images, and the lowest (OA of 82.03–84.47%) for the scenario using only autumn images. These results demonstrate that the Stacking algorithm offers better classification performance and higher overall stability for karst wetland vegetation communities.
To analyze the performance differences among the four algorithms more rigorously and to evaluate the effects of the different models on each scenario's classification results, we used McNemar's test to assess the significance of the differences between the algorithms' classification results. The heat map in Figure 5b shows that, at the 95% confidence level, the difference between Stacking and XGBoost in Scenario 6 was the most significant, followed by that between Stacking and CatBoost. This indicated that, for vegetation community classification in karst wetland, the Stacking algorithm was more sensitive to the combination of image features than XGBoost and CatBoost. The difference between Stacking and RF in Scenario 5 was the least significant, with a test value of 3.92; nevertheless, the statistical analysis showed significant differences between Stacking and RF in all four scenarios, indicating substantial performance differences between the two methods on this classification task. In Scenario 3, all pairs of algorithms differed significantly except Stacking and RF. These results demonstrate the significant differences between the classification results of the Stacking algorithm and those of the other algorithms. Combined with the box plot distributions in Figure 5a, the classification ability of the four algorithms ranked Stacking > CatBoost > RF > XGBoost, confirming that the Stacking algorithm outperformed the base models in classification effect and performance.
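McNemar's test compares two classifiers on the same test samples using only the cases where they disagree. A minimal sketch with the standard continuity correction (the sample counts below are illustrative, not taken from the paper):

```python
def mcnemar_statistic(correct_a, correct_b):
    """McNemar's chi-square statistic with continuity correction.

    Inputs are two per-sample correctness vectors (True = that
    classifier labeled the sample correctly). Only the discordant
    pairs b (A right, B wrong) and c (A wrong, B right) matter.
    """
    b = sum(1 for a, bb in zip(correct_a, correct_b) if a and not bb)
    c = sum(1 for a, bb in zip(correct_a, correct_b) if not a and bb)
    if b + c == 0:
        return 0.0  # classifiers never disagree
    return (abs(b - c) - 1) ** 2 / (b + c)
```

A statistic above 3.84 (the chi-square critical value at the 95% level with one degree of freedom) indicates a significant difference, which is how a test value such as the 3.92 reported above sits just past the significance threshold.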

3.1.2. Classification Results of Different Algorithms of Different Vegetation Communities

To quantitatively evaluate how well the different growth periods and algorithms identified each vegetation community, this paper statistically analyzed the applicability of the algorithms and growth periods for each community (Figure 6). As Figure 6 shows, the F1 scores of all classes were higher than 0.8, indicating that the four algorithms identified the different vegetation communities with high accuracy. The F1 scores of HK, RI, MC, WH, and HM were concentrated around the mean, indicating that they were less affected by the choice of classifier and classification scenario and that their classification results were more stable. The spread of the distribution curves indicated that LT, KR, LC, and CP fluctuated over a wide range and were strongly influenced by the classification scenario and classifier; using different classifiers or scenarios can significantly change the classification accuracy of these classes. Judging from the box-center means, HK had the highest classification accuracy, which may be due to its clear outline, regular shape, and distinctive features. In addition, HK, BG, and KR reached the maximum F1 score of 1, indicating that the highest classification accuracy can be achieved for these classes with suitable algorithms or classification scenarios. These results showed that the four algorithms can achieve fine classification of the karst vegetation communities and that different communities exhibited different sensitivities to the classifiers and growth periods.
To further quantify the differences in classification accuracy among the algorithms for each vegetation community, this paper computed and visualized the mean F1 scores (mF1) of the different classes for the four algorithms across the growth periods (Figure 7). The histogram shows that all four algorithms achieved high-precision classification of the different classes (mF1 > 0.74), confirming their effectiveness for karst wetland vegetation communities. The Stacking algorithm produced the highest mF1 for HK, RI, KR, LC, CP, and HM; RF for MC, WH, and CB; and CatBoost for BG, LT, AG, and BO. Notably, XGBoost did not yield the highest accuracy for any community, and the Stacking algorithm generally produced the best classification results. HK achieved the highest classification accuracy among all vegetation communities, with an mF1 of 0.97, and all four algorithms exceeded an mF1 of 0.9 for HK, indicating consistently accurate classification of this community. Conversely, CP exhibited the lowest mF1 (0.74), with the CatBoost algorithm showing the most severe misclassification; using the Stacking classifier raised CP's mF1 by 0.147, effectively improving the probability of correct categorization. The error bars revealed that LT and KR had the longest error lines under the RF classifier, indicating unstable classification and large error fluctuations; LT, KR, and CP fluctuated considerably under XGBoost, while KR and CP were unstable under CatBoost.
However, the Stacking technique substantially mitigated these large errors for LT, KR, and CP, yielding a more stable classification effect. Furthermore, compared with the other three classifiers, Stacking exhibited the shortest error lines for HK, RI, KR, LC, CP, and HM, indicating superior stability and the highest classification accuracy for these six vegetation communities. These results demonstrate that Stacking achieves the highest classification accuracy for most classes and offers improved classification and generalization capabilities for karst wetland vegetation communities, enabling accurate identification of the different communities.
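The per-class F1 scores underlying comparisons like these combine each community's precision (how much of what was labeled as the class truly is) and recall (how much of the class was found). A minimal sketch of the computation; the class labels in the usage below are illustrative:

```python
def per_class_f1(y_true, y_pred, classes=None):
    """Per-class F1 from paired label lists.

    F1 = 2 * precision * recall / (precision + recall), computed
    one-vs-rest for each class.
    """
    classes = classes or sorted(set(y_true) | set(y_pred))
    scores = {}
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        scores[c] = (2 * precision * recall / (precision + recall)
                     if precision + recall else 0.0)
    return scores
```

Averaging these per-class F1 values over algorithms or scenarios gives the mF1 values reported in Figure 7.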

3.2. Classification Results of Single Growth Period vs. Different Growth Periods

3.2.1. Classification Results of Vegetation Communities during Different Growth Periods

To explore the differences in the classification of karst wetland vegetation communities across growth periods, this study compared the single-growth-period scenarios with the three-growth-period scenario (spring, summer, and autumn) that achieved the highest accuracy using the Stacking model. Figure 8 qualitatively shows the classification results of the different growth-period scenarios based on the Stacking algorithm. Figure 8a shows that in the single-growth-period scenarios the shadows of KR were misclassified as AG, which could be attributed to the similarity of their spectral information. The three-growth-period scenario largely corrected this misclassification and identified BO and WH more accurately, because Scenario 6 can fully exploit the spectral information of the different temporal phases to capture the differences between BO, WH, and the other vegetation communities and improve their separability. In Figure 8b, the March, July, and October images each misclassified part of the LC shadows as CP, whereas combining the images from the three growth periods delineated the distribution range of LC and classified its shadows correctly. In addition, the combined multi-growth-period images not only classified HM within KR correctly but also identified KR within AG better, achieving better results than the single-growth-period images. Figure 8c shows that the single-growth-period scenarios classified BG and CP similarly; however, they all misclassified CP into other categories to some extent, whereas the combined three-growth-period images classified CP more accurately.
Figure 8d shows that all four scenarios recognized CB and KR well, but the March and October scenarios misclassified LC shadows into other classes to some degree; the July scenario reduced this misclassification somewhat, and the combination of the three growth periods improved it markedly. These findings indicated that the Stacking-based scenario combining images from the three growth periods (spring, summer, and autumn) substantially reduced misclassification and omission, ultimately achieving the highest classification accuracy.
To quantitatively evaluate the differences in classification accuracy across growth periods for each class, this paper computed box plots of the F1 score distributions of the different classes in the six scenarios based on the SEL model (Figure 9). Figure 9a displays the classification results of the different classes in Scenario 6, which achieved the highest accuracy; the Stacking model in this scenario classified the majority of classes correctly. Figure 9b shows the maximum, minimum, and mean F1 scores for the different scenarios and classes. Judging from the maxima, HK, BG, MC, WH, KR, LC, CB, and HM achieved their highest F1 scores in Scenario 6, while RI, WH, KR, LC, CP, and HM recorded their lowest F1 scores in Scenario 3. Among the single-growth-period scenarios, Scenario 2 achieved the highest classification accuracy, with an overall accuracy 0.62% and 1.47% higher than Scenarios 1 and 3, respectively. This indicated that summer images of karst wetland vegetation yield better classification results than spring or autumn images, with the spring scenario second and the autumn scenario lowest. Among the multi-growth-period scenarios, the combination of spring, summer, and autumn images achieved the highest classification accuracy, and the combination of summer and autumn images outperformed the combination of spring and summer images.
The single-growth-period scenarios were all less accurate than the multi-growth-period scenarios, indicating that combining UAV RGB images of vegetation from different growth periods can effectively improve the classification accuracy of karst wetland vegetation communities. Overall, the autumn-only scenario produced the lowest classification accuracy and the three-growth-period scenario (spring, summer, and autumn) the highest. The box lengths showed that BG, LT, KR, BO, LC, and CP were the most sensitive to the scenario combination, with the choice of scenario having the most obvious impact on these classes. The differences in classification accuracy were largest between the single-growth-period Scenario 3 and the scenario combining the three growth periods, and the different scenarios improved BG, LT, KR, BO, LC, and CP most significantly.

3.2.2. Classification Results of Combined Multiple Growth Period Vegetation Communities

To quantitatively evaluate how the different growth-period scenarios improved the classification accuracy of karst wetland vegetation communities, this paper constructed box plots of the F1 score growth rates for BG, LT, KR, BO, LC, and CP (Figure 10). Figure 10a presents the F1 score growth rates for these six classes, calculated as the differences between Scenario 6 and each of the other five scenarios. Table 7 quantitatively compares the F1 scores of the three-growth-period scenario with the other five scenarios. Table 7 shows that combining the three growth periods improved the classification accuracy of the vegetation communities to some extent, with the most obvious effect on KR (F1 score improved by 0.18). Figure 10 shows that the growth rates differed among classes. Judging from the medians and means of the six classes, BG exhibited the highest growth rate (0.30), while LT showed a negative growth rate, indicating that combining the spring, summer, and autumn imagery benefited the identification of BG but somewhat inhibited the recognition of LT. The error bars in the box plot revealed that the growth-rate distributions of KR and CP were the most unstable, indicating that these two classes were highly sensitive to the scenario combination. Figure 10b shows the confusion matrix among the six classes on the testing set. Except for BG, the classes were confused to some degree in the July image, while combining the images of the three growth periods significantly reduced this confusion, and LT and KR in the testing set were classified entirely correctly.
In the July images, 8% of BO was misclassified as LC, but combining the three growth periods reduced this confusion rate to 6%. In addition, 25% of CP was misclassified as LC and 5% as BO in the July images, but only 21% of CP was misclassified as LC with the three growth periods combined, a clear improvement. In conclusion, combining the three growth-period images substantially improved the classification accuracy of most classes, reduced confusion, and achieved better classification results.
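Row-normalized confusion matrices of this kind report, for each true class, the percentage of samples assigned to every predicted class. A minimal sketch of the computation; the counts in the usage below are illustrative, not the paper's data:

```python
def row_normalized_confusion(y_true, y_pred, classes):
    """Confusion matrix with each row normalized to percentages,
    so an entry reads as 'x% of true class T was predicted as P'."""
    counts = {t: {p: 0 for p in classes} for t in classes}
    for t, p in zip(y_true, y_pred):
        counts[t][p] += 1
    percent = {}
    for t in classes:
        total = sum(counts[t].values())
        percent[t] = {p: (100.0 * counts[t][p] / total if total else 0.0)
                      for p in classes}
    return percent
```

With 25 BO samples of which 2 are predicted as LC, the BO row reports an 8% confusion rate with LC, matching the reading of Figure 10b above.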
As shown in Section 3.1.1, the July images achieved the highest classification accuracy among the single growth periods and the October images the lowest, while combining the March, July, and October images achieved the highest accuracy overall. To investigate the effect of multi-growth-period combinations on improving classification accuracy, this paper used the combined March, July, and October scenario as the baseline and compared its accuracy with that of the July (Figure 11a) and October (Figure 11b) scenarios. Figure 11 shows the F1 score distributions and fluctuation ranges for the six classes. The fluctuation ranges show that only LT exhibited negative growth; the combined multi-growth-period scenario improved the classification of all other classes. Figure 11a shows that the combined scenario had the most significant effect on BO and LC, with F1 score increases of 0.15. Figure 11b shows that KR exhibited the largest accuracy difference between October and the combined multi-temporal scenario, with the three-growth-period combination improving KR most significantly (F1 score increased by 0.18). The bar graphs in Figure 11 show that the F1 scores of BG and KR reached 1 in the combined scenario, indicating that it can effectively improve the classification accuracy of BG and KR. Overall, combining the spring, summer, and autumn images effectively improved the classification accuracy of most classes and enabled high-precision identification of the different vegetation communities.

3.3. Explanation of Variable Importance for Vegetation Community Mapping

3.3.1. Sensitive Bands for Vegetation Communities Mapping in Different Growth Periods

To examine the sensitivity of different wetland vegetation communities to various imaging features within the same growth period, this study visualized the local contribution of each imaging feature to the classification of individual communities, for both the summer growth period and the combined spring, summer, and autumn scenario (Figure 12). Figure 12 shows the top 10 most important features across the three base models, using the mean |SHAP value| as the metric and focusing on the vegetation communities found to be most sensitive to the growth-period combination (BG, LT, KR, BO, LC, and CP). The bar chart highlighted COM, NGBDI, and VDVI as the top three contributors to the overall classification of the SEL model. Among the combined growth periods, July NGBDI and July IPCA emerged as the most influential features for the overall classification contribution. Figure 12 further revealed the varying sensitivity of different classes to specific features in different growth periods. Figure 12a–c indicate that BG was more sensitive to COM among the vegetation indexes, LT was more responsive to StD_Green, and KR exhibited heightened sensitivity to Area; this sensitivity could be attributed to the larger patches formed during KR segmentation, which facilitate classification. BO demonstrated high sensitivity to VDVI, while LC and CP were most influenced by NGBDI among the vegetation indexes. Figure 12d–f illustrate that, among the combined growth-period images, BG and KR were most sensitive to July NGBDI, while BO was relatively sensitive to July IPCA. Texture features such as Homog and Standard Deviation played a significant role in LC recognition, and StD_Red emerged as the most sensitive feature for CP.
These findings indicated that vegetation indexes and texture features were the most important for identifying vegetation communities, surpassing spectral bands, geometry features, and position features, with position features showing the lowest sensitivity for wetland vegetation communities.
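The mean |SHAP value| metric used for these local-importance rankings averages the absolute attribution of each feature across samples, so features that push predictions strongly in either direction rank highest. A minimal sketch of the computation; the feature names in the usage below are illustrative:

```python
def mean_abs_shap_ranking(shap_values, feature_names, top_k=10):
    """Rank features by mean |SHAP value| across samples.

    shap_values: list of per-sample attribution rows, one value
    per feature (as produced by a SHAP explainer for one class).
    Returns (feature, importance) pairs, most important first.
    """
    n_samples = len(shap_values)
    importance = []
    for j, name in enumerate(feature_names):
        mean_abs = sum(abs(row[j]) for row in shap_values) / n_samples
        importance.append((name, mean_abs))
    importance.sort(key=lambda t: t[1], reverse=True)
    return importance[:top_k]
```

Repeating this per vegetation community gives the class-specific bar charts in Figure 12; signs are discarded here, which is why the beeswarm plots of Section 3.3.2 are still needed to see direction of contribution.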

3.3.2. Global Contribution of Different Feature Bands to Vegetation Communities’ Classification

To further evaluate the impact of different features on the classification of karst wetland vegetation communities and understand the mechanisms of feature attribution, this study used the SHAP model interpreter to visualize, at a global scale, the effects of different features in the two best-performing scenarios: the summer growth period (the most accurate single-growth-period scenario) and the combined three-growth-period scenario (the most accurate multi-growth-period scenario) (Figure 13). Each row in the figure represents a feature, with the SHAP value plotted on the horizontal axis; features are ordered by mean absolute SHAP value. Figure 13a–c show that the July texture feature was the most important feature for the Stacking algorithm in identifying karst wetland vegetation communities. The vegetation index COM was also highly significant, ranking second, third, and first in the RF, XGBoost, and CatBoost classifiers, respectively. Figure 13d–f show that July NGBDI achieved the highest importance score among the three combined growth periods; the red (high-value) portion of its range spread the widest, indicating the largest positive contribution to the Stacking model, and its dense concentration indicated a significant impact on classification accuracy. In addition, Mean Red in the March growth period contributed significantly to the classification of vegetation communities in the combined three growth periods.
To further quantify the importance of the different imaging features in the classification of wetland vegetation communities, this study counted the importance ranking and percentage of each feature type based on Figure 13 (Table 8). Table 8 shows that in July the vegetation indexes held the highest importance ranking, accounting for 55.6%, while the texture features accounted for 44.4%. As the number of features considered increased, the proportion of vegetation indexes gradually decreased from 55.6% to 40%, and the proportion of texture features decreased from 46.6% to 26.7%, while the proportions of spectral, geometry, and position features continued to increase. These trends indicate that as more features are considered, the influence of vegetation indexes and texture features on classification gradually decreases, while spectral, geometry, and position features become increasingly significant for wetland vegetation identification. Furthermore, the importance proportions of the top three feature types in the combined three growth periods were consistent with July, with vegetation indexes and texture features accounting for the same proportions. The fluctuation patterns of feature importance among the five categories in the combined three growth periods were generally consistent with those in July: as the number of features increased, the proportion of vegetation indexes gradually decreased, the proportion of texture features first increased and then decreased, and the proportions of spectral, geometry, and position features gradually increased.

4. Discussion

Previous studies have demonstrated that integrating images from different growth periods can improve the classification accuracy of wetland vegetation [36,56]. In this study, we used ensemble machine learning algorithms with UAV images collected in the spring, summer, and autumn growth periods of karst wetland vegetation and assessed the accuracy of six classification scenarios built from different growth-period combinations. The results indicated that among the single-growth-period scenarios, the summer images achieved the highest classification accuracy, with an overall accuracy 0.62% higher than the spring scenario and 1.47% higher than the autumn scenario. The autumn images classified less accurately than the spring images, consistent with the findings of [57]. Among the multi-growth-period scenarios, the combination of spring, summer, and autumn images achieved the highest classification accuracy (OA of 93.37%), 1.01% higher than the spring-summer combination and 0.33% higher than the summer-autumn combination, indicating that integrating images from all three growth periods improves vegetation classification accuracy and achieves superior results. However, owing to the significant spectral differences among karst wetland vegetation and the spatial homogeneity of vegetation communities across growth periods, a single-growth-period image usually could not achieve the highest classification accuracy for all communities. We found a certain degree of confusion between CP and LC in the summer images.
Combining the three growth-period images substantially alleviated this confusion. The F1 score of KR in the autumn growth period was 81.6%, but the scenario combining three growth periods significantly improved its classification accuracy (the F1 score increased by 0.18). The combined multi-growth-period scenario had the most notable effect on BO and LC, each gaining 0.15 in F1 score. LT achieved its highest classification accuracy (F1 score of 0.9) in the scenario combining summer and autumn images, indicating that this combination is the most suitable for LT classification. BG and KR achieved their highest classification accuracy (F1 scores of 1) with the three growth periods combined. These findings further support the view that multi-temporal imagery leverages the physiological information and phenological characteristics of vegetation, improving classification accuracy through the complementary information provided by the different images. These conclusions provide theoretical support for vegetation classification with multi-temporal images and an important guide for wetland classification.
Because karst wetland vegetation varies markedly in space across temporal phases, a single machine learning algorithm often cannot achieve satisfactory classification results in every phase. In this paper, we combined OBIA with ultra-high-resolution UAV images and stacked RF, XGBoost, and CatBoost models to form an SEL model, evaluating its classification performance and adaptability across six scenarios covering different growth periods. The qualitative results demonstrated that the Stacking algorithm delineated the boundary between BO and LC more accurately, reduced the confusion between AG and KR to some extent, and significantly reduced misclassification and omission between classes, leading to superior classification outcomes. The quantitative results ranked the classification performance of the four algorithms as Stacking > CatBoost > RF > XGBoost. The Stacking algorithm achieved the highest OA in five of the classification scenarios, including all of the multi-growth-period combinations, showing that the SEL algorithm suits image combinations of different temporal phases and achieved the highest classification accuracy in most growth periods. Comparing the per-community accuracies of the four algorithms, the Stacking algorithm improved the classification of HK, RI, KR, LC, CP, and HM, with the most significant improvement for KR (F1 score increased from 0.818 to 0.94). Stacking also significantly reduced the large errors that the other classifiers produced for LT, KR, and CP, exhibiting a more stable classification effect.
This highlights the Stacking algorithm's capability to enhance the classification of various vegetation communities and achieve higher accuracy. However, owing to the poor separability of wetland vegetation and the complexity of the classes, all four algorithms inevitably misclassified some vegetation communities, reflecting the difficulty machine learning models face in separating classes with similar characteristics during training. Even so, the Stacking algorithm consistently outperformed the other three algorithms, which we attribute to stacking enriching the training data so that the meta-model can improve prediction accuracy by exploiting the heterogeneous structures of the base learners [58]. In this study, we used the base model with the highest average accuracy as the meta-model, which may have certain limitations; in future research, we will explore an adaptive stacking approach to enhance the robustness of the model. Furthermore, the SEL modeling approach holds potential for retrieving physiological parameters of wetland vegetation, such as canopy carbon content, chlorophyll content, and leaf area index (LAI), and for supporting wetland ecosystem diversity. Although deep learning algorithms have gained popularity in wetland vegetation classification [59], they are limited to pixel-based semantic segmentation [60], and training them requires large quantities of labeled data, increasing cost and computational requirements [61]. Generating labeled data is particularly challenging given the complexity and scattered distribution of wetland vegetation types, and complex terrain can significantly affect the classification accuracy of deep learning algorithms [62].
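The stacking mechanism described here — base learners produce out-of-fold predictions that become the training features of a meta-learner — can be sketched in miniature. The toy version below substitutes two hand-rolled one-feature threshold classifiers and a simple tabular meta-learner for RF, XGBoost, CatBoost, and the paper's meta-model; it illustrates the mechanism only and is not the authors' implementation:

```python
from collections import Counter, defaultdict

def make_threshold_model(feature_idx):
    """A weak binary base learner: splits one feature at the midpoint
    of the two class means (assumes exactly two classes)."""
    def fit(X, y):
        sums, counts = defaultdict(float), Counter()
        for row, label in zip(X, y):
            sums[label] += row[feature_idx]
            counts[label] += 1
        means = {c: sums[c] / counts[c] for c in counts}
        lo, hi = sorted(means, key=means.get)
        return ((means[lo] + means[hi]) / 2, lo, hi)
    def predict(state, row):
        threshold, lo, hi = state
        return lo if row[feature_idx] <= threshold else hi
    return fit, predict

def stack_fit(models, X, y, n_folds=5):
    """Stacking: out-of-fold base predictions become the training
    features of a tabular meta-learner (majority label per pattern)."""
    n = len(X)
    meta_rows = [None] * n
    for k in range(n_folds):
        train = [i for i in range(n) if i % n_folds != k]
        states = [fit([X[i] for i in train], [y[i] for i in train])
                  for fit, _ in models]
        for i in range(n):
            if i % n_folds == k:
                meta_rows[i] = tuple(pred(s, X[i])
                                     for (_, pred), s in zip(models, states))
    table = defaultdict(Counter)
    for pattern, label in zip(meta_rows, y):
        table[pattern][label] += 1
    meta = {p: c.most_common(1)[0][0] for p, c in table.items()}
    # Refit the base models on the full training set for later inference.
    return meta, [fit(X, y) for fit, _ in models]

def stack_predict(meta, states, models, row):
    pattern = tuple(pred(s, row) for (_, pred), s in zip(models, states))
    # Fall back to the first base prediction for unseen patterns.
    return meta.get(pattern, pattern[0])
```

The out-of-fold step is the essential detail: the meta-learner only ever sees base predictions made on samples the base models were not trained on, which is what lets it correct their systematic errors instead of memorizing their training fit.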
In future research, we will attempt to use deep learning algorithms for wetland vegetation classification in karst areas and explore the differences in classification performance compared to machine learning algorithms.
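The stacked generalization described above can be sketched with scikit-learn's StackingClassifier. This is a minimal illustration under stated assumptions, not the paper's implementation: GradientBoostingClassifier stands in for XGBoost/CatBoost so the sketch needs only scikit-learn, the data are synthetic, and a logistic-regression meta model replaces the paper's choice of reusing the best base model.

```python
# Hedged sketch of a stacking ensemble (SEL-style): base learners produce
# out-of-fold predictions (cv=5) that train a meta model without leakage.
from sklearn.datasets import make_classification
from sklearn.ensemble import (GradientBoostingClassifier,
                              RandomForestClassifier, StackingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the object-based feature table (not the UAV data).
X, y = make_classification(n_samples=400, n_features=20, n_informative=8,
                           n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

base_learners = [
    ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
    ("gb", GradientBoostingClassifier(max_depth=4, random_state=0)),  # stand-in
]
stack = StackingClassifier(estimators=base_learners,
                           final_estimator=LogisticRegression(max_iter=1000),
                           cv=5)
stack.fit(X_tr, y_tr)
acc = stack.score(X_te, y_te)
print(round(acc, 3))
```

In practice, the xgboost and catboost estimators expose scikit-learn-compatible wrappers and could be dropped into `base_learners` directly.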
This study found that SHAP values not only make machine learning models interpretable but also provide an importance analysis for complex models usually treated as "black boxes", offering local and global explanations of the sign and magnitude of each feature variable's contribution to wetland vegetation classification. These findings are consistent with previous studies [63,64]. At the local scale, this study assessed the contribution of different features to the classification of individual vegetation communities. The results showed that in the summer growth period, COM, NGBDI, and VDVI contributed most to the overall classification of the SEL model, while in the combined three growth periods, July NGBDI and July IPCA made the largest overall contributions. In addition, different land cover types exhibited varying sensitivities to different spectral bands across growth periods. For instance, BG was most sensitive to COM among the vegetation indexes in the summer growth period, but to summer NGBDI in the combined three growth periods; similarly, BO was highly sensitive to VDVI in the summer growth period, but relatively sensitive to summer IPCA in the combined periods. The texture features Homog and Standard Deviation were also highly important for LC identification. Integrating these sensitive bands can provide important theoretical support for subsequent wetland vegetation community classification. At the global scale, this study evaluated the contributions of different feature bands to the classification results; texture features in the summer growth period were the most important features for the Stacking algorithm in identifying wetland vegetation communities in karst areas.
The vegetation index COM was also a highly important feature. For the combination of the three growth periods, the Mean Red texture feature in the spring growth period contributed substantially to the classification of wetland vegetation communities. Furthermore, the importance rankings and proportions of different feature types revealed that as the number of features increased, the influence of vegetation indexes and texture features on the classification gradually diminished, while spectral, geometry, and position features became increasingly important for wetland vegetation identification. Overall, COM and NGBDI contributed significantly to wetland vegetation identification, whereas spectral, position, and geometry features were relatively less important. These results identify the most influential features for classifying karst wetland vegetation, improving classification efficiency and enabling high-precision fine classification; by revealing the sensitivity of each community to different features, they provide more accurate and efficient technical guidance for wetland vegetation conservation and planning.
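The SHAP principle invoked above can be illustrated from scratch: a feature's exact Shapley value is its marginal contribution averaged over all feature orderings. The toy model below is hypothetical (a made-up score mixing a main effect and an interaction); a real analysis would apply the shap library's TreeExplainer to the trained classifier.

```python
# Exact Shapley values by enumerating all feature orderings (stdlib only).
from itertools import permutations

def model(x):
    # Hypothetical toy score: main effects plus one interaction term.
    return 2.0 * x[0] + x[1] + 0.5 * x[0] * x[2]

def shapley(x, baseline):
    n = len(x)
    phi = [0.0] * n
    orders = list(permutations(range(n)))
    for order in orders:
        z = list(baseline)
        prev = model(z)
        for i in order:
            z[i] = x[i]              # reveal feature i in this ordering
            phi[i] += model(z) - prev  # marginal contribution of feature i
            prev = model(z)
    return [p / len(orders) for p in phi]

x, base = [1.0, 2.0, 3.0], [0.0, 0.0, 0.0]
phi = shapley(x, base)
# Local accuracy: the contributions sum to f(x) - f(baseline).
print([round(p, 3) for p in phi], round(sum(phi), 3))
```

By symmetry, the interaction term's effect is split evenly between features 0 and 2, which is exactly the behavior a SHAP summary plot aggregates over many samples.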

5. Conclusions

This study utilized high-resolution UAV images from different growth periods and an optimized Stacking algorithm to perform fine classification of vegetation communities in the Huixian Karst Wetland of International Importance. We quantitatively evaluated the impact of combining images from different growth periods on vegetation community mapping and validated the classification performance and generalization ability of the SEL model using six classification scenarios. This study confirmed that the object-based SEL model can significantly improve the classification performance of most vegetation communities. The overall classification accuracies (84.47–93.37%) differed significantly at the 95% confidence interval. The performance and stability of the four algorithms for karst wetland vegetation classification were ranked as Stacking > CatBoost > RF > XGBoost. Our study demonstrated that combining UAV images from the three growth periods achieved better classification accuracy (OA of 93.37%) than any single growth period and significantly improved the classification accuracy of Bermudagrass, Karst River, Bamboo, Linden-Camphora, and Cephalanthus tetrandrus-Paliurus ramosissimus. We found that summer UAV images were more suitable for distinguishing vegetation communities than those from spring and autumn. The model interpretation results based on SHAP confirmed that vegetation indexes and texture features contributed substantially to classifying karst wetland vegetation communities, whereas spectral, geometry, and position features contributed little to their identification. COM and NGBDI showed the highest contributions to karst wetland vegetation community mapping; in particular, Bermudagrass displayed a high sensitivity to COM.
This study can provide crucial technical support for vegetation mapping in other karst wetlands and can also be applied to research on the retrieval of wetland vegetation physiological parameters.

Author Contributions

Conceptualization, Y.Z. and B.F.; methodology, B.F.; software, Y.Z.; validation, X.S., H.Y. and S.Z.; formal analysis, Y.W.; investigation, T.D.; resources, X.S.; data curation, S.Z.; writing—original draft preparation, Y.Z.; writing—review and editing, H.K.; visualization, B.F.; supervision, B.F.; project administration, B.F.; funding acquisition, B.F. All authors have read and agreed to the published version of the manuscript.

Funding

This study was funded by the Guangxi Science and Technology Program (Grant Number GuikeAD20159037), the Innovation Project of Guangxi Graduate Education (Grant Number YCSW2023353), the "BaGui Scholars" program of the provincial government of Guangxi, the National Natural Science Foundation of China (Grant Number 42122009), and the Guilin University of Technology Foundation (Grant Number GUTQDJJ2017096).

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Optimal Parameters for the Three Base Models

| Scenario | RF | XGBoost | CatBoost |
|---|---|---|---|
| May | mtry = 6, ntree = 500 | max_depth = 4, eta = 0.2 | learning_rate = 0.05, max_depth = 5 |
| July | mtry = 6, ntree = 1000 | max_depth = 6, eta = 0.2 | learning_rate = 0.1, max_depth = 6 |
| October | mtry = 8, ntree = 500 | max_depth = 6, eta = 0.1 | learning_rate = 0.1, max_depth = 6 |
| May + July | mtry = 5, ntree = 2000 | max_depth = 8, eta = 0.3 | learning_rate = 0.2, max_depth = 8 |
| July + October | mtry = 6, ntree = 1000 | max_depth = 6, eta = 0.25 | learning_rate = 0.25, max_depth = 5 |
| May + July + October | mtry = 8, ntree = 1000 | max_depth = 8, eta = 0.1 | learning_rate = 0.25, max_depth = 6 |
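The tuned settings in Appendix A can be kept as a configuration lookup so each scenario's three base models are built consistently. This is a sketch only: the key names follow each library's own conventions (randomForest's mtry/ntree, XGBoost's eta, CatBoost's learning_rate), and instantiating the actual models would require those packages.

```python
# Appendix A hyperparameters encoded as a scenario -> model -> params lookup.
OPTIMAL_PARAMS = {
    "May":        {"RF": {"mtry": 6, "ntree": 500},
                   "XGBoost": {"max_depth": 4, "eta": 0.2},
                   "CatBoost": {"learning_rate": 0.05, "max_depth": 5}},
    "July":       {"RF": {"mtry": 6, "ntree": 1000},
                   "XGBoost": {"max_depth": 6, "eta": 0.2},
                   "CatBoost": {"learning_rate": 0.1, "max_depth": 6}},
    "October":    {"RF": {"mtry": 8, "ntree": 500},
                   "XGBoost": {"max_depth": 6, "eta": 0.1},
                   "CatBoost": {"learning_rate": 0.1, "max_depth": 6}},
    "May + July": {"RF": {"mtry": 5, "ntree": 2000},
                   "XGBoost": {"max_depth": 8, "eta": 0.3},
                   "CatBoost": {"learning_rate": 0.2, "max_depth": 8}},
    "July + October": {"RF": {"mtry": 6, "ntree": 1000},
                       "XGBoost": {"max_depth": 6, "eta": 0.25},
                       "CatBoost": {"learning_rate": 0.25, "max_depth": 5}},
    "May + July + October": {"RF": {"mtry": 8, "ntree": 1000},
                             "XGBoost": {"max_depth": 8, "eta": 0.1},
                             "CatBoost": {"learning_rate": 0.25, "max_depth": 6}},
}

print(OPTIMAL_PARAMS["July"]["CatBoost"])
```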

References

  1. Dai, X.; Yang, G.; Liu, D.; Wan, R. Vegetation carbon sequestration mapping in herbaceous wetlands by using a MODIS EVI time-series data set: A case in Poyang lake wetland, China. Remote Sens. 2020, 12, 3000. [Google Scholar] [CrossRef]
  2. Ding, Y.; Li, Z.; Peng, S. Global analysis of time-lag and-accumulation effects of climate on vegetation growth. Int. J. Appl. Earth Obs. Geoinf. 2020, 92, 102179. [Google Scholar] [CrossRef]
  3. Zhou, J.; Wu, J.; Gong, Y. Valuing wetland ecosystem services based on benefit transfer: A meta-analysis of China wetland studies. J. Clean. Prod. 2020, 276, 122988. [Google Scholar] [CrossRef]
  4. Amani, M.; Salehi, B.; Mahdavi, S.; Granger, J.; Brisco, B. Wetland classification in Newfoundland and Labrador using multi-source SAR and optical data integration. GISci. Remote Sens. 2017, 54, 779–796. [Google Scholar] [CrossRef]
  5. Deng, T.; Fu, B.; Liu, M.; He, H.; Fan, D.; Li, L.; Huang, L.; Gao, E. Comparison of multi-class and fusion of multiple single-class SegNet model for mapping karst wetland vegetation using UAV images. Sci. Rep. 2022, 12, 13270. [Google Scholar] [CrossRef]
  6. Cai, Y.; Liang, J.; Zhang, P.; Wang, Q.; Wu, Y.; Ding, Y.; Wang, H.; Fu, C.; Sun, J. Review on strategies of close-to-natural wetland restoration and a brief case plan for a typical wetland in northern China. Chemosphere 2021, 285, 131534. [Google Scholar] [CrossRef]
  7. Chi, Y.; Sun, J.; Liu, W.; Wang, J.; Zhao, M. Mapping coastal wetland soil salinity in different seasons using an improved comprehensive land surface factor system. Ecol. Indic. 2019, 107, 105517. [Google Scholar] [CrossRef]
  8. Mirmazloumi, S.M.; Moghimi, A.; Ranjgar, B.; Mohseni, F.; Ghorbanian, A.; Ahmadi, S.A.; Amani, M.; Brisco, B. Status and trends of wetland studies in Canada using remote sensing technology with a focus on wetland classification: A bibliographic analysis. Remote Sens. 2021, 13, 4025. [Google Scholar] [CrossRef]
  9. Arroyo-Mora, J.; Kalacska, M.; Soffer, R.; Ifimov, G.; Leblanc, G.; Schaaf, E.; Lucanus, O. Evaluation of phenospectral dynamics with Sentinel-2A using a bottom-up approach in a northern ombrotrophic peatland. Remote Sens. Environ. 2018, 216, 544–560. [Google Scholar] [CrossRef]
  10. Ahmed, K.R.; Akter, S.; Marandi, A.; Schüth, C. A simple and robust wetland classification approach by using optical indices, unsupervised and supervised machine learning algorithms. Remote Sens. Appl. Soc. Environ. 2021, 23, 100569. [Google Scholar] [CrossRef]
  11. Chen, J.; Chen, Z.; Huang, R.; You, H.; Han, X.; Yue, T.; Zhou, G. The Effects of Spatial Resolution and Resampling on the Classification Accuracy of Wetland Vegetation Species and Ground Objects: A Study Based on High Spatial Resolution UAV Images. Drones 2023, 7, 61. [Google Scholar] [CrossRef]
  12. Fu, B.; Liu, M.; He, H.; Lan, F.; He, X.; Liu, L.; Huang, L.; Fan, D.; Zhao, M.; Jia, Z. Comparison of optimized object-based RF-DT algorithm and SegNet algorithm for classifying Karst wetland vegetation communities using ultra-high spatial resolution UAV data. Int. J. Appl. Earth Obs. Geoinf. 2021, 104, 102553. [Google Scholar] [CrossRef]
  13. Diez, Y.; Kentsch, S.; Fukuda, M.; Caceres, M.L.L.; Moritake, K.; Cabezas, M. Deep learning in forestry using uav-acquired rgb data: A practical review. Remote Sens. 2021, 13, 2837. [Google Scholar] [CrossRef]
  14. Li, B.; Xu, X.; Zhang, L.; Han, J.; Bian, C.; Li, G.; Liu, J.; Jin, L. Above-ground biomass estimation and yield prediction in potato by using UAV-based RGB and hyperspectral imaging. ISPRS J. Photogramm. Remote Sens. 2020, 162, 161–172. [Google Scholar] [CrossRef]
  15. Yan, G.; Li, L.; Coy, A.; Mu, X.; Chen, S.; Xie, D.; Zhang, W.; Shen, Q.; Zhou, H. Improving the estimation of fractional vegetation cover from UAV RGB imagery by colour unmixing. ISPRS J. Photogramm. Remote Sens. 2019, 158, 23–34. [Google Scholar] [CrossRef]
  16. Bhatnagar, S.; Gill, L.; Ghosh, B. Drone image segmentation using machine and deep learning for mapping raised bog vegetation communities. Remote Sens. 2020, 12, 2602. [Google Scholar] [CrossRef]
  17. Yi, Z.; Jia, L.; Chen, Q. Crop classification using multi-temporal Sentinel-2 data in the Shiyang River Basin of China. Remote Sens. 2020, 12, 4052. [Google Scholar] [CrossRef]
  18. Zhao, Y.; Feng, D.; Yu, L.; Cheng, Y.; Zhang, M.; Liu, X.; Xu, Y.; Fang, L.; Zhu, Z.; Gong, P. Long-term land cover dynamics (1986–2016) of Northeast China derived from a multi-temporal Landsat archive. Remote Sens. 2019, 11, 599. [Google Scholar] [CrossRef]
  19. Fu, B.; Xie, S.; He, H.; Zuo, P.; Sun, J.; Liu, L.; Huang, L.; Fan, D.; Gao, E. Synergy of multi-temporal polarimetric SAR and optical image satellite for mapping of marsh vegetation using object-based random forest algorithm. Ecol. Indic. 2021, 131, 108173. [Google Scholar] [CrossRef]
  20. van Deventer, H.; Cho, M.A.; Mutanga, O. Multi-season RapidEye imagery improves the classification of wetland and dryland communities in a subtropical coastal region. ISPRS J. Photogramm. Remote Sens. 2019, 157, 171–187. [Google Scholar] [CrossRef]
  21. Kollert, A.; Bremer, M.; Löw, M.; Rutzinger, M. Exploring the potential of land surface phenology and seasonal cloud free composites of one year of Sentinel-2 imagery for tree species mapping in a mountainous region. Int. J. Appl. Earth Obs. Geoinf. 2021, 94, 102208. [Google Scholar] [CrossRef]
  22. Hu, Q.; Wu, W.; Song, Q.; Lu, M.; Chen, D.; Yu, Q.; Tang, H. How do temporal and spectral features matter in crop classification in Heilongjiang Province, China? J. Integr. Agric. 2017, 16, 324–336. [Google Scholar] [CrossRef]
  23. Piaser, E.; Villa, P. Evaluating capabilities of machine learning algorithms for aquatic vegetation classification in temperate wetlands using multi-temporal Sentinel-2 data. Int. J. Appl. Earth Obs. Geoinf. 2023, 117, 103202. [Google Scholar] [CrossRef]
  24. Macintyre, P.; Van Niekerk, A.; Mucina, L. Efficacy of multi-season Sentinel-2 imagery for compositional vegetation classification. Int. J. Appl. Earth Obs. Geoinf. 2020, 85, 101980. [Google Scholar] [CrossRef]
  25. Tang, Y.; Qiu, F.; Jing, L.; Shi, F.; Li, X. Integrating spectral variability and spatial distribution for object-based image analysis using curve matching approaches. ISPRS J. Photogramm. Remote Sens. 2020, 169, 320–336. [Google Scholar] [CrossRef]
  26. Mao, D.; Wang, Z.; Du, B.; Li, L.; Tian, Y.; Jia, M.; Zeng, Y.; Song, K.; Jiang, M.; Wang, Y. National wetland mapping in China: A new product resulting from object-based and hierarchical classification of Landsat 8 OLI images. ISPRS J. Photogramm. Remote Sens. 2020, 164, 11–25. [Google Scholar] [CrossRef]
  27. Fallatah, A.; Jones, S.; Wallace, L.; Mitchell, D. Combining object-based machine learning with long-term time-series analysis for informal settlement identification. Remote Sens. 2022, 14, 1226. [Google Scholar] [CrossRef]
  28. Liu, H.; Lang, B. Machine learning and deep learning methods for intrusion detection systems: A survey. Appl. Sci. 2019, 9, 4396. [Google Scholar] [CrossRef]
  29. Mallick, J.; Talukdar, S.; Pal, S.; Rahman, A. A novel classifier for improving wetland mapping by integrating image fusion techniques and ensemble machine learning classifiers. Ecol. Inform. 2021, 65, 101426. [Google Scholar] [CrossRef]
  30. Liu, T.; Abd-Elrahman, A.; Morton, J.; Wilhelm, V.L. Comparing fully convolutional networks, random forest, support vector machine, and patch-based deep convolutional neural networks for object-based wetland mapping using images from small unmanned aircraft system. GISci. Remote Sens. 2018, 55, 243–264. [Google Scholar] [CrossRef]
  31. Mohammadimanesh, F.; Salehi, B.; Mahdianpari, M.; Motagh, M.; Brisco, B. An efficient feature optimization for wetland mapping by synergistic use of SAR intensity, interferometry, and polarimetry data. Int. J. Appl. Earth Obs. Geoinf. 2018, 73, 450–462. [Google Scholar] [CrossRef]
  32. Yao, H.; Fu, B.; Zhang, Y.; Li, S.; Xie, S.; Qin, J.; Fan, D.; Gao, E. Combination of Hyperspectral and Quad-Polarization SAR Images to Classify Marsh Vegetation Using Stacking Ensemble Learning Algorithm. Remote Sens. 2022, 14, 5478. [Google Scholar] [CrossRef]
  33. Zhou, R.; Yang, C.; Li, E.; Cai, X.; Yang, J.; Xia, Y. Object-based wetland vegetation classification using multi-feature selection of unoccupied aerial vehicle RGB imagery. Remote Sens. 2021, 13, 4910. [Google Scholar] [CrossRef]
  34. Khan, I.U.; Javeid, N.; Taylor, C.J.; Gamage, K.A.; Ma, X. A stacked machine and deep learning-based approach for analysing electricity theft in smart grids. IEEE Trans. Smart Grid 2021, 13, 1633–1644. [Google Scholar] [CrossRef]
  35. Wen, L.; Hughes, M. Coastal wetland mapping using ensemble learning algorithms: A comparative study of bagging, boosting and stacking techniques. Remote Sens. 2020, 12, 1683. [Google Scholar] [CrossRef]
  36. Cai, Y.; Li, X.; Zhang, M.; Lin, H. Mapping wetland using the object-based stacked generalization method based on multi-temporal optical and SAR data. Int. J. Appl. Earth Obs. Geoinf. 2020, 92, 102164. [Google Scholar] [CrossRef]
  37. Rudin, C. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell. 2019, 1, 206–215. [Google Scholar] [CrossRef]
  38. Chen, J.; de Hoogh, K.; Gulliver, J.; Hoffmann, B.; Hertel, O.; Ketzel, M.; Bauwelinck, M.; Van Donkelaar, A.; Hvidtfeldt, U.A.; Katsouyanni, K. A comparison of linear regression, regularization, and machine learning algorithms to develop Europe-wide spatial models of fine particles and nitrogen dioxide. Environ. Int. 2019, 130, 104934. [Google Scholar] [CrossRef]
  39. Arora, N.; Kaur, P.D. A Bolasso based consistent feature selection enabled random forest classification algorithm: An application to credit risk assessment. Appl. Soft Comput. 2020, 86, 105936. [Google Scholar] [CrossRef]
  40. Rodríguez-Pérez, R.; Bajorath, J. Interpretation of compound activity predictions from complex machine learning models using local approximations and shapley values. J. Med. Chem. 2019, 63, 8761–8777. [Google Scholar] [CrossRef]
  41. Fu, B.; Wang, Y.; Campbell, A.; Li, Y.; Zhang, B.; Yin, S.; Xing, Z.; Jin, X. Comparison of object-based and pixel-based Random Forest algorithm for wetland vegetation mapping using high spatial resolution GF-1 and SAR data. Ecol. Indic. 2017, 73, 105–117. [Google Scholar] [CrossRef]
  42. Belgiu, M.; Drǎguţ, L. Comparing supervised and unsupervised multiresolution segmentation approaches for extracting buildings from very high resolution imagery. ISPRS J. Photogramm. Remote Sens. 2014, 96, 67–75. [Google Scholar] [CrossRef] [PubMed]
  43. Ramezan, C.A. Transferability of Recursive Feature Elimination (RFE)-Derived Feature Sets for Support Vector Machine Land Cover Classification. Remote Sens. 2022, 14, 6218. [Google Scholar] [CrossRef]
  44. Zhang, S.; Zhang, J.; Li, X.; Du, X.; Zhao, T.; Hou, Q.; Jin, X. Estimating the grade of storm surge disaster loss in coastal areas of China via machine learning algorithms. Ecol. Indic. 2022, 136, 108533. [Google Scholar] [CrossRef]
  45. Wu, J.; Guo, P.; Cheng, Y.; Zhu, H.; Wang, X.-B.; Shao, X. Ensemble generalized multiclass support-vector-machine-based health evaluation of complex degradation systems. IEEE/ASME Trans. Mechatron. 2020, 25, 2230–2240. [Google Scholar] [CrossRef]
  46. Li, M.; Yan, C.; Liu, W. The network loan risk prediction model based on Convolutional neural network and Stacking fusion model. Appl. Soft Comput. 2021, 113, 107961. [Google Scholar] [CrossRef]
  47. Malekloo, A.; Ozer, E.; AlHamaydeh, M.; Girolami, M. Machine learning and structural health monitoring overview with emerging technology and high-dimensional data source highlights. Struct. Health Monit. 2022, 21, 1906–1955. [Google Scholar] [CrossRef]
  48. Meng, Y.; Yang, N.; Qian, Z.; Zhang, G. What makes an online review more helpful: An interpretation framework using XGBoost and SHAP values. J. Theor. Appl. Electron. Commer. Res. 2020, 16, 466–490. [Google Scholar] [CrossRef]
  49. Budholiya, K.; Shrivastava, S.K.; Sharma, V. An optimized XGBoost based diagnostic system for effective prediction of heart disease. J. King Saud Univ. Comput. Inf. Sci. 2022, 34, 4514–4523. [Google Scholar] [CrossRef]
  50. Toharudin, T.; Caraka, R.E.; Pratiwi, I.R.; Kim, Y.; Gio, P.U.; Sakti, A.D.; Noh, M.; Nugraha, F.A.L.; Pontoh, R.S.; Putri, T.H. Boosting Algorithm to handle Unbalanced Classification of PM2.5 Concentration Levels by Observing Meteorological Parameters in Jakarta-Indonesia using AdaBoost, XGBoost, CatBoost, and LightGBM. IEEE Access 2023, 11, 35680–35696. [Google Scholar] [CrossRef]
  51. McNemar, Q. Note on the sampling error of the difference between correlated proportions or percentages. Psychometrika 1947, 12, 153–157. [Google Scholar] [CrossRef] [PubMed]
  52. Raju, V.G.; Lakshmi, K.P.; Jain, V.M.; Kalidindi, A.; Padma, V. Study the influence of normalization/transformation process on the accuracy of supervised classification. In Proceedings of the 2020 Third International Conference on Smart Systems and Inventive Technology (ICSSIT), Tirunelveli, India, 20–22 August 2020; pp. 729–735. [Google Scholar]
  53. Ludwig, C.; Walli, A.; Schleicher, C.; Weichselbaum, J.; Riffler, M. A highly automated algorithm for wetland detection using multi-temporal optical satellite data. Remote Sens. Environ. 2019, 224, 333–351. [Google Scholar] [CrossRef]
  54. Lundberg, S.M.; Erion, G.; Chen, H.; DeGrave, A.; Prutkin, J.M.; Nair, B.; Katz, R.; Himmelfarb, J.; Bansal, N.; Lee, S.-I. From local explanations to global understanding with explainable AI for trees. Nat. Mach. Intell. 2020, 2, 56–67. [Google Scholar] [CrossRef]
  55. Wen, X.; Xie, Y.; Wu, L.; Jiang, L. Quantifying and comparing the effects of key risk factors on various types of roadway segment crashes with LightGBM and SHAP. Accid. Anal. Prev. 2021, 159, 106261. [Google Scholar] [CrossRef]
  56. Judah, A.; Hu, B. The integration of multi-source remotely-sensed data in support of the classification of wetlands. Remote Sens. 2019, 11, 1537. [Google Scholar] [CrossRef]
  57. Zhao, C.; Jia, M.; Wang, Z.; Mao, D.; Wang, Y. Toward a better understanding of coastal salt marsh mapping: A case from China using dual-temporal images. Remote Sens. Environ. 2023, 295, 113664. [Google Scholar] [CrossRef]
  58. Wang, R.; Lu, S.; Feng, W. A novel improved model for building energy consumption prediction based on model integration. Appl. Energy 2020, 262, 114561. [Google Scholar] [CrossRef]
  59. Li, Y.; Fu, B.; Sun, X.; Fan, D.; Wang, Y.; He, H.; Gao, E.; He, W.; Yao, Y. Comparison of Different Transfer Learning Methods for Classification of Mangrove Communities Using MCCUNet and UAV Multispectral Images. Remote Sens. 2022, 14, 5533. [Google Scholar] [CrossRef]
  60. Minaee, S.; Boykov, Y.; Porikli, F.; Plaza, A.; Kehtarnavaz, N.; Terzopoulos, D. Image segmentation using deep learning: A survey. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 44, 3523–3542. [Google Scholar] [CrossRef]
  61. Goh, G.D.; Sing, S.L.; Yeong, W.Y. A review on machine learning in 3D printing: Applications, potential, and challenges. Artif. Intell. Rev. 2021, 54, 63–94. [Google Scholar] [CrossRef]
  62. Hartling, S.; Sagan, V.; Sidike, P.; Maimaitijiang, M.; Carron, J. Urban tree species classification using a WorldView-2/3 and LiDAR data fusion approach and deep learning. Sensors 2019, 19, 1284. [Google Scholar] [CrossRef] [PubMed]
  63. Mangalathu, S.; Hwang, S.-H.; Jeon, J.-S. Failure mode and effects analysis of RC members based on machine-learning-based SHapley Additive exPlanations (SHAP) approach. Eng. Struct. 2020, 219, 110927. [Google Scholar] [CrossRef]
  64. Rodríguez-Pérez, R.; Bajorath, J. Interpretation of machine learning models using shapley values: Application to compound potency and multi-target activity predictions. J. Comput. Aided Mol. Des. 2020, 34, 1013–1026. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Geographical location of the study area. (a,b) Photos of field measurements; (c–e) UAV-RGB true-color images in March, July, and October, respectively.
Figure 2. Workflow of this study.
Figure 3. The framework of the stacking ensemble learning model.
Figure 4. Qualitative comparison of classification results of different classification scenarios based on four algorithms (HM: Human-made matter; CP: Cephalanthus tetrandrus-Paliurus ramosissimus; CB: Cropland-beach; LT: Lotus; RI: Reed-Imperate; HK: Huakolasa; KR: Karst River; BO: Bamboo; LC: Linden-Camphora; WH: Water Hyacinth; AG: Algae; BG: Bermudagrass; MC: Miscanthus).
Figure 5. (a) Overall accuracy distribution of the four algorithms; (b) statistical tests for classification results based on the four classifiers.
Figure 6. Statistical analysis of F1 scores of different vegetation communities by all classification scenarios and classifiers (HM: Human-made matter; CP: Cephalanthus tetrandrus-Paliurus ramosissimus; CB: Cropland-beach; LT: Lotus; RI: Reed-Imperate; HK: Huakolasa; KR: Karst River; BO: Bamboo; LC: Linden-Camphora; WH: Water Hyacinth; AG: Algae; BG: Bermudagrass; MC: Miscanthus).
Figure 7. Comparison of mF1 of different vegetation communities based on four classification models. The bold number indicates the highest mF1 for each vegetation community.
Figure 8. Comparison of the classification results of different growth periods based on the Stacking algorithm. (a–d) represent four different areas. (HM: Human-made matter; CP: Cephalanthus tetrandrus-Paliurus ramosissimus; CB: Cropland-beach; LT: Lotus; RI: Reed-Imperate; HK: Huakolasa; KR: Karst River; BO: Bamboo; LC: Linden-Camphora; WH: Water Hyacinth; AG: Algae; BG: Bermudagrass; MC: Miscanthus).
Figure 9. Comparison of classification results for each vegetation community based on the Stacking algorithm. (a) represents global classification results and (b) represents the F1 score of different vegetation communities in different scenarios based on the Stacking algorithm.
Figure 10. The F1 score growth rate and confusion matrix for different vegetation communities. (a) shows box plots of the F1 score growth rates. (b,c) show the confusion matrices for different vegetation communities in July and March + July + October.
Figure 11. Growth rate and distribution of the F1 score for different vegetation communities in different growth periods. (a) represents the comparison between July and March + July + October. (b) represents the comparison between October and March + July + October.
Figure 12. Sensitivity analysis of the contribution of feature variables in the summer and the combined spring, summer, and autumn growth period scenarios to distinguish vegetation communities. (a–c) Feature importance of UAV images in the summer growth period; (d–f) feature importance of UAV images based on the combined three growth periods.
Figure 13. Comparison of the importance of different features in different growth periods based on the SHAP method, where (a–c) represent feature importance in March and (d–f) represent feature importance in the combined three growth periods. The horizontal coordinate is the SHAP value, representing the weight of the effect of each feature variable on the classification results; each row represents a feature, and the color indicates the magnitude of the feature value. Red indicates a positive contribution to the classification and blue a negative contribution; the wider the colored region, the greater the effect of the feature on the classification.
Table 1. Summary of UAV data information.

| Acquisition Time | Growth Period | Number of Images | Resolution (m) |
|---|---|---|---|
| 15 March 2022 | Spring | 1756 | 0.02 |
| 2 July 2022 | Summer | 1693 | 0.03 |
| 23 October 2022 | Autumn | 1535 | 0.02 |
Table 2. Description of sample information.

| Categories | March | July | October | March + July | July + October | March + July + October |
|---|---|---|---|---|---|---|
| HK | 43 | 50 | 43 | 46 | 49 | 49 |
| BG | 76 | 85 | 92 | 83 | 77 | 77 |
| RI | 35 | 33 | 30 | 32 | 30 | 33 |
| MC | 42 | 36 | 36 | 41 | 37 | 40 |
| WH | 50 | 52 | 50 | 52 | 44 | 44 |
| LT | — | 40 | 36 | 47 | 39 | 40 |
| AG | 65 | 60 | 49 | 56 | 55 | 55 |
| KR | 54 | 52 | 72 | 56 | 58 | 58 |
| BO | 70 | 61 | 70 | 66 | 61 | 61 |
| CP | 57 | 58 | 52 | 56 | 56 | 54 |
| LC | 82 | 75 | 80 | 70 | 80 | 81 |
| CB | 87 | 67 | 67 | 63 | 66 | 66 |
| HM | 50 | 50 | 50 | 48 | 48 | 48 |
| Total | 711 | 719 | 727 | 716 | 700 | 706 |
Table 3. Definition of vegetation indices (VIs) (R, G, and B refer to DN values for the red, green, and blue bands, respectively).

| Vegetation Indices | Formula |
|---|---|
| Excess green index (ExG) | 2G − R − B |
| Excess green minus excess red index (ExGR) | ExG − 1.4R − G |
| Vegetation index (VEG) | G / (R^0.67 × B^0.33) |
| Color index of vegetation (CIVE) | 0.44R − 0.88G + 0.39B + 18.79 |
| Combination index (COM) | 0.25ExG + 0.3ExGR + 0.33CIVE + 0.12VEG |
| Combination index 2 (COM2) | 0.36ExG + 0.47CIVE + 0.17VEG |
| Normalized green-red difference index (NGRDI) | (G − R) / (G + R) |
| Normalized green-blue difference index (NGBDI) | (G − B) / (G + B) |
| Visible-band difference vegetation index (VDVI) | (2G − R − B) / (2G + R + B) |
| Red-green ratio index (RGRI) | R / G |
| Blue-green ratio index (BGRI) | B / G |
| Woebbecke index (WI) | (G − B) / (R − G) |
| Red-green-blue ratio index (RGBRI) | (R + B) / 2G |
| Red-green-blue vegetation index (RGBVI) | (G² − R × B) / (G² + R × B) |
| Kawashima index (IKAW) | (R − B) / (R + B) |
| Visible atmospherically resistant index (VARI) | (G − R) / (G + R − B) |
| Principal component analysis index (IPCA) | 0.994\|R − B\| + 0.961\|G − B\| + 0.914\|G − R\| |
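As a sketch of how the visible-band indices above are computed from the per-pixel (or per-object mean) DN values, the following implements a handful of them. Guards against zero denominators are omitted for brevity, and the sample DN values are hypothetical.

```python
def vegetation_indices(R, G, B):
    """Compute a few of the visible-band VIs from Table 3 for one pixel,
    where R, G, and B are the red, green, and blue DN values."""
    exg = 2 * G - R - B                       # excess green
    exgr = exg - 1.4 * R - G                  # excess green minus excess red
    cive = 0.44 * R - 0.88 * G + 0.39 * B + 18.79
    ngrdi = (G - R) / (G + R)                 # normalized green-red difference
    vdvi = (2 * G - R - B) / (2 * G + R + B)  # visible-band difference VI
    return {"ExG": exg, "ExGR": exgr, "CIVE": cive,
            "NGRDI": ngrdi, "VDVI": vdvi}

# Example: a green-dominated pixel (hypothetical DN values)
print(vegetation_indices(R=60.0, G=120.0, B=50.0))
```

In an object-based workflow these would be computed per segmented object, typically from the object's mean band values, rather than per raw pixel.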
Table 4. Description of six classification scenarios with different feature combinations and their numbers of feature variables.

| Scenarios | Phases | Features | Number |
|---|---|---|---|
| 1 | March | TF_3 + PF_3 + VIs_3 + SF_3 + GF_3 | 80 |
| 2 | July | TF_7 + PF_7 + VIs_7 + SF_7 + GF_7 | 80 |
| 3 | October | TF_10 + PF_10 + VIs_10 + SF_10 + GF_10 | 80 |
| 4 | March + July | TF_3 + PF_3 + VIs_3 + SF_3 + GF_3 + TF_7 + PF_7 + VIs_7 + SF_7 + GF_7 | 139 |
| 5 | July + October | TF_7 + PF_7 + VIs_7 + SF_7 + GF_7 + TF_10 + PF_10 + VIs_10 + SF_10 + GF_10 | 139 |
| 6 | March + July + October | TF_3 + PF_3 + VIs_3 + SF_3 + GF_3 + TF_7 + PF_7 + VIs_7 + SF_7 + GF_7 + TF_10 + PF_10 + VIs_10 + SF_10 + GF_10 | 198 |

TF, texture features; PF, position features; VIs, vegetation indices; SF, spectral features (original R, G, and B bands); GF, geometry features. The subscripts 3, 7, and 10 denote March, July, and October, respectively.
Table 5. Data dimensionality reduction of the multi-temporal dataset based on RFE-based variable selection.

| Phases | Correlation Coefficient | Original → After Eliminating High Correlation → After RFE-Based Variable Selection | Training Accuracy of RFE Model | RMSE |
|---|---|---|---|---|
| March | 0.95 | 80 → 43 → 33 | 0.865 | 1.670 |
| July | 0.95 | 80 → 36 → 30 | 0.855 | 1.672 |
| October | 0.95 | 80 → 33 → 23 | 0.875 | 1.694 |
| March + July | 0.80 | 139 → 55 → 52 | 0.938 | 1.255 |
| July + October | 0.80 | 139 → 52 → 31 | 0.935 | 1.402 |
| March + July + October | 0.80 | 198 → 64 → 56 | 0.926 | 1.263 |
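The first reduction step in Table 5 (dropping features whose pairwise correlation exceeds the threshold: 0.95 for single periods, 0.80 for multi-period stacks) can be sketched as a greedy scan that keeps one representative of each highly correlated group. The feature columns below are hypothetical, and the subsequent RFE stage (iteratively refitting a model and eliminating the weakest features) is omitted.

```python
from statistics import mean

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length sequences."""
    ma, mb = mean(a), mean(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db)

def drop_correlated(features, threshold):
    """Greedy version of the correlation-elimination step: scan features in
    order and keep a feature only if it is not highly correlated with any
    already-kept feature."""
    kept = []
    for name, col in features.items():
        if all(abs(pearson(col, features[k])) <= threshold for k in kept):
            kept.append(name)
    return kept

# Hypothetical feature columns: two near-duplicate VIs and one texture band
features = {
    "ExG":  [1.0, 2.0, 3.0, 4.0],
    "VDVI": [1.1, 2.1, 2.9, 4.2],   # nearly identical to ExG
    "GLCM": [4.0, 1.0, 3.0, 2.0],
}
print(drop_correlated(features, threshold=0.95))
```

Here VDVI is discarded because its correlation with ExG exceeds 0.95, while the weakly correlated texture feature survives.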
Table 6. Summary of optimal parameters of three base models.

| Models | Parameters | Tuning Range | Tuning Step Size |
|---|---|---|---|
| RF | mtry | 4–8 | 1 |
|  | ntree | 500–2000 | 500 |
| XGBoost | max_depth | 4–8 | 1 |
|  | eta | 0.05–0.3 | 0.05 |
| CatBoost | learning_rate | 0.05–0.3 | 0.05 |
|  | max_depth | 4–8 | 1 |
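The ranges and step sizes in Table 6 expand into small candidate sets that can be searched exhaustively with cross-validation. The helper below is an illustrative sketch of that expansion only, not the paper's tuning code; model fitting and CV scoring are omitted.

```python
from itertools import product

def grid_candidates(params):
    """Expand {name: (low, high, step)} tuning ranges, as in Table 6,
    into a list of concrete parameter settings."""
    def steps(lo, hi, step):
        vals, v = [], lo
        while v <= hi + 1e-9:          # tolerance for float accumulation
            vals.append(round(v, 10))
            v += step
        return vals
    names = list(params)
    grids = [steps(*params[name]) for name in names]
    return [dict(zip(names, combo)) for combo in product(*grids)]

# RF grid from Table 6: mtry 4-8 (step 1) x ntree 500-2000 (step 500)
rf_grid = grid_candidates({"mtry": (4, 8, 1), "ntree": (500, 2000, 500)})
print(len(rf_grid))  # 5 mtry values x 4 ntree values = 20 candidates
```

Each candidate dictionary would then be scored by cross-validated accuracy, and the best setting retained for the corresponding base model of the stacking ensemble.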
Table 7. F1 score growth rate of vegetation communities based on combined multi-growth periods of UAV images (M: March; J: July; O: October).

| Classes | (M + J + O) − M | (M + J + O) − J | (M + J + O) − O | (M + J + O) − (M + J) | (M + J + O) − (J + O) |
|---|---|---|---|---|---|
| BG | 30.43% | 6.25% | 13.89% | 7.94% | 1.59% |
| LT | 0 | −11.96% | −3.57% | −0.86% | −15.41% |
| KR | 2.13% | 6.25% | 18.37% | 0.00% | 9.09% |
| BO | 15.46% | 14.41% | 5.12% | −2.96% | −1.49% |
| LC | 9.55% | 14.18% | 16.36% | 3.35% | 4.32% |
| CP | 7.12% | 10.45% | 17.52% | 1.08% | 1.49% |
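The entries in Table 7 are relative F1 changes between classification scenarios. A minimal sketch, assuming the growth rate is the percentage change of the combined-period F1 over the single-period F1; the confusion counts below are hypothetical, not the paper's.

```python
def f1(tp, fp, fn):
    """F1 score from true-positive, false-positive, and false-negative counts."""
    p = tp / (tp + fp)            # precision
    r = tp / (tp + fn)            # recall
    return 2 * p * r / (p + r)

def growth_rate(f1_combined, f1_single):
    """Relative F1 change in percent, as assumed for Table 7:
    (F1_combined - F1_single) / F1_single * 100."""
    return 100.0 * (f1_combined - f1_single) / f1_single

# Hypothetical confusion counts for one class in two scenarios
f1_march = f1(tp=46, fp=20, fn=23)   # single-period scenario (March)
f1_all = f1(tp=60, fp=10, fn=9)      # combined-period scenario (M + J + O)
print(round(growth_rate(f1_all, f1_march), 2))
```

A positive value indicates that adding growth periods improved the F1 score for that class, while a negative value (as for LT in several columns) indicates a decline.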
Table 8. Percentage of variable importance in different growth periods (the numbers in the table represent the proportions of each type of feature variable in the SHAP value ranking).

| Phases | Features | Top 3 | Top 5 | Top 10 | Top 15 |
|---|---|---|---|---|---|
| July | Vegetation Indexes | 55.6% | 46.6% | 46.7% | 40% |
|  | Texture Features | 44.4% | 40% | 30% | 26.7% |
|  | Spectral Features | 0.0% | 6.7% | 10% | 13.3% |
|  | Geometry Features | 0.0% | 6.7% | 10% | 13.3% |
|  | Position Features | 0.0% | 0.0% | 3.3% | 6.7% |
| March + July + October | Vegetation Indexes | 55.6% | 46.6% | 43.3% | 42.2% |
|  | Texture Features | 44.4% | 46.6% | 46.7% | 44.4% |
|  | Spectral Features | 0.0% | 0.0% | 6.7% | 4.4% |
|  | Geometry Features | 0.0% | 0.0% | 3.3% | 4.4% |
|  | Position Features | 0.0% | 6.8% | 0.0% | 2.2% |

Zhang, Y.; Fu, B.; Sun, X.; Yao, H.; Zhang, S.; Wu, Y.; Kuang, H.; Deng, T. Effects of Multi-Growth Periods UAV Images on Classifying Karst Wetland Vegetation Communities Using Object-Based Optimization Stacking Algorithm. Remote Sens. 2023, 15, 4003. https://doi.org/10.3390/rs15164003
