Article

Tree Species Classification Using UAV-Based RGB Images and Spectral Information on the Loess Plateau, China

1 Key Comprehensive Laboratory of Forestry, Northwest A&F University, Xianyang 712100, China
2 College of Tropical Crops, Yunnan Agricultural University, Pu’er 665099, China
3 Key Laboratory of Silviculture on the Loess Plateau, State Forestry Administration, College of Forestry, Northwest A&F University, Xianyang 712100, China
* Author to whom correspondence should be addressed.
Drones 2025, 9(4), 296; https://doi.org/10.3390/drones9040296
Submission received: 28 February 2025 / Revised: 3 April 2025 / Accepted: 7 April 2025 / Published: 10 April 2025
(This article belongs to the Section Drones in Agriculture and Forestry)

Abstract

Accurate and efficient tree species classification and mapping are crucial for forest management and conservation, especially on the Loess Plateau, where forest quality urgently needs improvement. This study selected three research sites—Yongshou (YS), Zhengning (ZN), and Yanchang (YC)—on the Loess Plateau and classified the main forest tree species using RGB images acquired by an unmanned aerial vehicle (UAV). The RGB images were normalized, and vegetation indices (VIs) were extracted. Feature selection was performed using the Boruta algorithm. Two classifiers, Support Vector Machine (SVM) and Random Forest (RF), were used to evaluate the contribution of different input features to classification and their performance differences across regions. The results showed that YC achieved the best classification performance (overall accuracy (OA) > 83%, Kappa ≥ 0.78), followed by ZN and YS. The addition of VIs significantly improved classification accuracy, particularly in the YS region with its imbalanced sample distribution, where OA increased by more than 13.27% and Kappa improved by more than 0.17. Feature selection retained most of the advantages of the complete feature set while achieving only slightly lower accuracy. Both RF and SVM are effective for tree species classification based on RGB images, with comparable performance (OA difference ≤ 1.5%, Kappa difference < 0.02). This study demonstrates the feasibility of UAV-based RGB images for tree species classification on the Loess Plateau and the great potential of RGB-based VIs, especially in areas with imbalanced class distributions. It provides a viable approach and methodology for tree species classification based on RGB images.

1. Introduction

One-third of the earth’s land area is forest [1], which plays a vital role in sequestering carbon and releasing oxygen, regulating climate, preserving soil and water, conserving water sources, and purifying the air [2,3,4]. Therefore, there is considerable interest in obtaining information on the state of forests. Trees are the dominant growth form in forests. Mapping tree species accurately and in a timely manner serves as a critical foundation for natural resource management and conservation, such as precision silviculture practices, biodiversity monitoring, and the evaluation of terrestrial carbon pool dynamics [3,4,5].
So far, remote sensing has been widely used for tree species classification [6,7,8,9]. Satellite images cover a large area but have low spatial and temporal resolution and suffer from severe cloud contamination, making it challenging to obtain high-quality satellite images [10]. In contrast, UAV-based remote sensing has many advantages: (1) it effectively reduces cloud contamination; (2) it efficiently and cost-effectively collects data with high spatial, spectral, and temporal resolution; and (3) it offers a relatively flexible choice of operating area. It has therefore been applied in tree species classification and many other fields [11,12,13,14,15]. These advantages increase the feasibility and practicality of UAVs for classifying forest tree species.
The potential of RGB images for tree species classification has also been convincingly demonstrated [16,17,18]. Easy-to-use consumer drones can obtain the RGB image data required for classification at a reasonable price. This expands the applicability of the approach in a variety of contexts. While existing studies have demonstrated the feasibility of tree species classification using RGB imagery, current approaches predominantly rely on deep learning architectures [19,20,21,22,23,24]. However, the substantial dependency of these algorithms on large volumes of high-quality training data [25] poses significant constraints for practical implementation, particularly in sample-limited areas. A promising alternative lies in synergizing UAV-collected RGB imagery with classification approaches that balance moderate sample requirements with sustained accuracy, potentially enabling effective tree species mapping in data-scarce regions.
RF and SVM models are effective in processing high-dimensional and complex datasets, so they are widely used in tree species classification [26,27,28,29]. They can still achieve good results even with a small number of samples [30,31] and unbalanced samples [32,33,34,35]. Furthermore, both classifiers have been found to provide a high level of accuracy with no significant differences in tree species classification [13,35,36,37,38,39]. This methodology could serve as a computationally efficient alternative to deep learning architectures for UAV RGB-based tree species classification under limited training sample conditions.
The use of VIs improves classification performance compared with using spectral signatures alone [25,40]. To date, research on VIs for tree species classification has mainly focused on multispectral and hyperspectral images; few studies have assessed the impact of VIs derived from UAV-based RGB images on the classification of forest tree species, and relatively little is known about the contribution of RGB-based spectral information to tree species classification. Investigating the spectral attributes inherent in UAV-acquired RGB imagery therefore presents a scientifically grounded pathway to enhance the classification efficacy of RF and SVM models in tree species discrimination tasks.
The Loess Plateau is a typical area where artificial forests are concentrated. In recent decades, several large-scale forestry projects have been implemented to restore the ecological environment, such as the “Grain for Green” program [41]. These programs have been highly successful in increasing forest cover, and there is now an urgent need to monitor forest quality and vegetation changes in the region, for which tree species classification is the basis. Although numerous local and international studies have addressed forest tree species classification and mapping, systematic research on tree species classification in the Loess Plateau region remains limited. For example, Zhang et al. [42] applied multi-source remote sensing data to classify forest types in the Loess Plateau, but their approach lacked species-level resolution.
Despite substantial progress in RGB image-based tree species classification, current methodologies predominantly depend on deep learning architectures. These approaches face operational constraints in data-scarce contexts requiring large labeled training datasets. This study proposes a pragmatic alternative by integrating UAV-RGB imagery with sample-efficient machine learning classifiers (SVM and RF), systematically assessing its operational feasibility and ecological generalizability in the Loess Plateau region. Therefore, we selected three study sites to include as many representative forest species of the Loess Plateau as possible within a limited area. The three research sites encompass several tree species, including Robinia pseudoacacia, Platycladus orientalis, Pinus tabuliformis, Quercus wutaishanica, Larix gmelinii, Populus davidiana, Hippophae rhamnoides, and several other less common broadleaf species. Our main research objectives are as follows: (1) to evaluate the feasibility of combining RGB images with two machine learning algorithms for accurately mapping forest tree species on the Loess Plateau; (2) to investigate the impact of input features on the classification results of tree species on the Loess Plateau; and (3) to explore the differences in classification outcomes across various sites on the Loess Plateau.

2. Materials and Methods

2.1. Study Area

We conducted this study on the Loess Plateau in China, including three sites: YC, ZN, and YS. All three research sites are plantations. Site ZN is located in Zhengning County, Gansu Province (108°29′47″ E–108°29′58″ E, 35°30′30″ N–35°30′43″ N). It lies in the loess tableland and gully region [43], covering an area of 1.34 ha (Figure 1b). The climate in this area is classified as a temperate continental monsoon humid and semi-humid climate. The main tree species here include R. pseudoacacia, P. tabuliformis, L. gmelinii, and P. davidiana. The predominant shrub species are H. rhamnoides, Crataegus pinnatifida, Elaeagnus pungens, Spiraea chinensis, Euonymus alatus, and Lespedeza formosa. Site YC is located in Yanchang County, Shaanxi Province (109°46′ E–109°46′23″ E, 36°30′27″ N–36°30′45″ N). It is situated in the loess hilly and gully region [43] and spans an area of 3.81 ha (Figure 1c). This area experiences a semi-arid continental climate. The primary tree species include Q. wutaishanica, R. pseudoacacia, Betula platyphylla, and P. davidiana, while the main shrub species are E. alatus, Caragana korshinskii, and Salix cheilophila. The YS study site is located in Yongshou County, Shaanxi Province (108°0′7″ E–108°6′15″ E, 34°47′9″ N–34°48′28″ N). It lies in the loess tableland and gully region [43] and covers an area of 1.79 km2 (Figure 1d). This area falls under a warm temperate continental climate. The dominant tree species are R. pseudoacacia, P. orientalis, and P. tabuliformis; R. pseudoacacia accounts for more than 70% of the forested area. The dominant shrubs are Rubus mesogaeus, Spiraea fritschiana, E. alatus, Rosa hugonis, Lespedeza floribunda, and Celastrus orbiculatus.

2.2. Data Collection

2.2.1. Image Acquisition and Preprocessing

In July 2022, a DJI Phantom 4 Multispectral UAV (DJI Innovations, Shenzhen, China) was used to collect orthophotos of the study area. July falls within the growing season, when the spectral differences between tree species are evident. The UAV photogrammetric survey was conducted under optimal atmospheric stability conditions to minimize shadow-induced spectral distortions. To ensure the accuracy of the data, we calibrated the sensors using the DJI Assistant 2 for Phantom software before the flight mission; this included forward, downward, and backward calibration. Flight parameters were configured with a 100 m above-ground-level altitude (ground spatial resolution of 5 cm), maintaining 80% longitudinal and 80% lateral image overlap to ensure stereoscopic coverage. The geospatial positioning infrastructure integrated dual-frequency global navigation satellite system (GNSS) receivers with onboard RTK/PPK hybrid positioning, achieving a planimetric accuracy of 1.8 ± 0.3 cm and a vertical accuracy of 2.7 ± 0.5 cm through tight coupling with a D-RTK 2 base station. The other UAV mission parameters are systematically documented in Table 1.
The multispectral reconstruction workflow was implemented in DJI Terra v3.0 based on the Structure from Motion (SfM) algorithm. Raw multispectral images were imported, enabling automated aerial triangulation (AT) through the software’s integration of camera calibration parameters and onboard Positioning and Orientation System (POS) data. The AT workflow was configured with Agricultural Field as the scene type to optimize feature matching for crop canopy structures; computations were executed in Local Machine mode, and high feature point density was selected to enhance vegetation pattern recognition in the multispectral bands. To achieve centimeter-level georeferencing precision, 19 ground control points (GCPs) (YS: 9; ZN: 5; YC: 5) were manually deployed across the field and measured using an RTK GNSS receiver (Hi-Target Navigation Tech Co., Ltd., Guangzhou, China). This hybrid approach, combining aerial AT with terrestrial GCP constraints, ultimately generated a 2D orthomosaic model (Table 2). Subsequent analysis employed only the visible spectrum subset (460–670 nm) for classification.

2.2.2. Field Survey Data

Field validation was conducted synchronously with the UAV photogrammetric operations during July 2022 to minimize the temporal discrepancy between image acquisition and ground truthing. An RTK GNSS receiver (HI-TARGET V200, Hi-Target Navigation Tech Co., Ltd., Guangzhou, China) was employed to georeference 1467 arboreal specimens in differential positioning mode. To ensure spatial correspondence, each geodetic coordinate was registered with <5 cm positional tolerance relative to its UAV-derived orthophoto centroid. To ensure that the surveyed trees were statistically representative of the species distribution at each study site, they were selected by systematic random sampling within the study area. The number of tree species objects is listed in Table 3.

2.3. Feature Extraction and Selection

2.3.1. Calculation of VIs

Within the visible electromagnetic spectrum (400–700 nm), the digital number (DN) values derived from the red, green, and blue spectral bands provide quantifiable measures of forest canopy radiometric characteristics. Converting the DN values of remote sensing images to reflectance typically requires the spectral response function of the sensor; however, DJI does not provide detailed spectral response function data in its official technical documentation [44]. Therefore, we did not convert DNs to reflectance. To account for the effect of illumination, this research normalized the DNs of the R, G, and B bands using Equations (1)–(3) [45]. The normalized DNs (R’, G’, B’) were employed for the calculation of the RGB-based VIs. In this paper, 16 related VIs were used to classify the tree species, as they are closely related to plant growth and development (e.g., canopy closure, density, and leaf area index (LAI)). Their calculation formulas are shown in Table S1 [46,47,48,49,50,51,52].
R’ = R/(R + G + B)
G’ = G/(R + G + B)
B’ = B/(R + G + B)
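As a concrete illustration, the normalization in Equations (1)–(3) and one representative RGB-based VI can be sketched in Python. The Excess Green index (ExG = 2G’ − R’ − B’) is used here only as a typical example of an RGB VI; the 16 indices actually used are listed in Table S1.

```python
def normalize_dn(r, g, b):
    """Normalize raw DN values by total visible brightness (Equations (1)-(3))."""
    total = r + g + b
    return r / total, g / total, b / total

def excess_green(r_n, g_n, b_n):
    """Excess Green index (ExG = 2G' - R' - B'), a typical RGB-based VI."""
    return 2 * g_n - r_n - b_n

# Example: a pixel with DN values (100, 150, 50)
r_n, g_n, b_n = normalize_dn(100, 150, 50)   # -> (1/3, 1/2, 1/6)
exg = excess_green(r_n, g_n, b_n)            # -> 0.5
```

Because each normalized band is a fraction of total brightness, uniform illumination changes cancel out, which is the motivation for computing the VIs from R’, G’, and B’ rather than the raw DNs.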

2.3.2. Feature Selection

Feature selection is a pivotal aspect of classification processes [53]. Redundant features affect the performance and stability of classification models, which is detrimental [54]. Hence, the number of features must be minimized and optimized by eliminating redundant features to achieve the highest similarity within clusters and the lowest similarity between clusters [55].
The Boruta algorithm is a feature selection wrapper method that utilizes the RF algorithm to identify all relevant variables [56]. It determines important features by comparing the Z-scores of the original features with those of randomly generated shadow attributes. The algorithm is comprehensive in that it aims to identify all features associated with the dependent variable, not just an optimal subset. It also exhibits strong robustness in handling interactions among variables and fluctuations in importance metrics, making it a potent tool for variable selection [57]. In our study, the algorithm was run using the “Boruta” package in R 4.1.0.
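The shadow-attribute comparison at the heart of Boruta can be sketched in a few lines. The following is a simplified Python illustration of the idea only (the study used the R “Boruta” package; the full algorithm additionally wraps this comparison in a statistical test over many iterations):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def boruta_style_hits(X, y, n_rounds=5, seed=0):
    """Fraction of rounds in which each real feature's RF importance
    exceeds the best shadow-feature importance. Shadow features are
    column-wise permuted copies of X, which destroy any link to y."""
    rng = np.random.default_rng(seed)
    n_feat = X.shape[1]
    hits = np.zeros(n_feat)
    for i in range(n_rounds):
        shadow = rng.permuted(X, axis=0)           # shuffle each column independently
        rf = RandomForestClassifier(n_estimators=100, random_state=seed + i)
        rf.fit(np.hstack([X, shadow]), y)
        imp = rf.feature_importances_
        hits += imp[:n_feat] > imp[n_feat:].max()  # did it beat the best shadow?
    return hits / n_rounds
```

Features that beat the best shadow in (nearly) every round correspond to those Boruta confirms as relevant; features that rarely do would be rejected.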
To systematically evaluate feature importance in classification, we conducted a preliminary experiment employing incremental feature stacking. Based on the feature ranking outcomes, we sequentially integrated feature vectors while continuously monitoring classification accuracy variations on the validation set. The experimental results demonstrate that the first seven features substantially enhanced model performance, whereas incorporating the eighth feature yielded diminishing returns in accuracy improvement. Consequently, the top seven ranked features were selected to construct the optimal feature subspace.
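The incremental stacking procedure described above is simple enough to express directly. A minimal sketch follows; the `evaluate` callback and the stopping tolerance are illustrative assumptions, not values taken from the study:

```python
def incremental_feature_stack(ranked_features, evaluate, min_gain=0.005):
    """Add features in ranked order; stop once the validation-accuracy
    gain from the next feature falls below min_gain."""
    subset, best_acc = [], 0.0
    for feat in ranked_features:
        acc = evaluate(subset + [feat])
        if subset and acc - best_acc < min_gain:
            break                      # diminishing returns: stop stacking
        subset, best_acc = subset + [feat], acc
    return subset, best_acc
```

With a ranking from Boruta and an `evaluate` function that trains a classifier on the candidate subset and returns validation accuracy, this reproduces the "stack until the gain flattens" behavior that led to the seven-feature subspace.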

2.4. Tree Species Classification

To systematically evaluate the contributions of various features and the performance of different classification algorithms, 8 cases were established through controlled combinations of feature sets and classifiers, as detailed in Table 4.
Object-based supervised image classification was conducted using two machine learning classifiers, SVM and RF, to assess the performance of different input features in classifying forest tree species.
Introduced by Breiman [58], RF was chosen in this study because it has the advantages of improving the accuracy of classification results substantially, reducing the influence of outliers, and avoiding overfitting [59]. Therefore, it has been widely utilized in vegetation mapping studies [60,61]. Some studies have found that 500 is the optimal number of trees (Ntree) [62]. Therefore, in this study, Ntree was set at 500. Following the research conducted by Ghosh et al. [13], Mtry was set at the number of input features. For example, when inputting RGBVI features, Mtry was set at 22.
SVM is the other machine learning algorithm employed in this study, developed by Cortes and Vapnik [63]. Previous research has demonstrated that SVM is resilient to high data dimensionality and performs well with small training sample sizes [64,65,66]. A radial basis function (RBF) kernel was selected due to its established efficiency [67] and fewer computational challenges [68]. The RBF parameters were configured with a cost value of 1 and a gamma value of 0.0001.
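The two classifier configurations can be reproduced with scikit-learn, used here as an illustrative stand-in for the ArcGIS Pro tooling actually employed; the hyperparameter names map as Ntree → n_estimators, Mtry → max_features, and cost → C. The toy data are assumptions for demonstration only:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

# RF: Ntree = 500; Mtry = number of input features (22 for the RGBVI set).
# Note that setting Mtry to all features removes per-split feature
# subsampling, leaving bagging as the only source of randomness.
rf = RandomForestClassifier(n_estimators=500, max_features=22, random_state=0)

# SVM: RBF kernel with cost = 1 and gamma = 0.0001, as in the study.
svm = SVC(kernel="rbf", C=1.0, gamma=0.0001)

# Toy data with 22 features, standing in for the RGBVI feature set.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 22))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
rf.fit(X, y)
svm.fit(X, y)
```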
Confusion matrices were generated to calculate the OA, Kappa index, user’s accuracy (UA), producer’s accuracy (PA), and F1 score [69]. OA was used to assess the overall performance of the classification, while the Kappa index was used to assess the consistency of the classification results. PA, UA, and the F1 score were employed to assess the ability to distinguish individual tree species. As suggested in [70], a stratified random sampling approach based on species was implemented to ensure the robustness of the accuracy assessment.
In this study, the measured data were divided into two parts: 70% was used for model training and 30% was used for accuracy assessment. We conducted the segmentation process, tree species classification, and accuracy assessment using the ArcGIS Pro API v2.8 for Python.
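All of the accuracy metrics above can be derived directly from a confusion matrix. A minimal sketch, with rows as reference classes and columns as predicted classes (so PA corresponds to recall and UA to precision):

```python
import numpy as np

def accuracy_metrics(cm):
    """OA, Kappa, and per-class PA, UA, F1 from a confusion matrix
    (rows = reference labels, columns = predicted labels)."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    diag = np.diag(cm)
    pa = diag / cm.sum(axis=1)                 # producer's accuracy (recall)
    ua = diag / cm.sum(axis=0)                 # user's accuracy (precision)
    f1 = 2 * pa * ua / (pa + ua)
    oa = diag.sum() / n
    # Kappa: agreement beyond the chance agreement expected from the marginals
    pe = (cm.sum(axis=1) * cm.sum(axis=0)).sum() / n ** 2
    kappa = (oa - pe) / (1 - pe)
    return oa, kappa, pa, ua, f1
```

For example, a two-class matrix [[50, 10], [5, 35]] gives OA = 0.85 and Kappa ≈ 0.69; the gap between the two reflects the chance-agreement correction that makes Kappa a consistency measure.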

2.5. Overview of the Proposed Method

The methods and experimental steps of this study are shown in Figure 2. The framework includes four steps: (1) data acquisition; (2) feature extraction and selection; (3) classification with the two classifiers, SVM and RF; and (4) accuracy assessment.

3. Results

3.1. Feature Analysis

A total of 516 valid samples across nine categories were used to analyze sample separability, which was assessed using one-way ANOVA. The results showed that the values of all 22 features depend significantly on sample type at the 95% confidence level (p < 0.01), and the between-group variance is significantly greater than the within-group variance (Figure 3, Table S2). This demonstrates good separability among the different sample types.
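The separability test amounts to comparing between-group and within-group variance; a minimal one-way ANOVA F statistic can be computed as follows (the sample values here are illustrative, not the study's feature data):

```python
import numpy as np

def one_way_anova_f(groups):
    """F statistic for one-way ANOVA: mean square between groups
    divided by mean square within groups."""
    groups = [np.asarray(g, dtype=float) for g in groups]
    all_vals = np.concatenate(groups)
    grand_mean = all_vals.mean()
    k, n = len(groups), all_vals.size
    ss_between = sum(g.size * (g.mean() - grand_mean) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Two well-separated sample groups yield a large F value,
# i.e., between-group variance dominates within-group variance.
f_sep = one_way_anova_f([[1.0, 1.1, 0.9], [5.0, 5.1, 4.9]])
```

A large F (with a correspondingly small p-value) for a feature indicates that its values differ systematically across tree species classes, which is what "good separability" means here.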
Figure 4 shows the results of the feature importance analysis using the Boruta algorithm. In this analysis, 22 features were considered: the 3 original bands, the 3 normalized DNs, and the 16 RGB-based vegetation indices. The results show that the importance values of all 22 features far exceed those of the shadow features; thus, all features are deemed significant. Among them, the blue band has the highest importance, followed by the INT and the green band, which scored markedly higher than the rest. The importance values of the remaining 19 variables are similar to one another and all exceed the maximum value of the shadow features.

3.2. Tree Species Classification

3.2.1. Tree Species Classification Based on RF

Table 5 shows the overall accuracy of tree species classification using the RF classifier. In all three sites, RGBVIs and RGBFS consistently improved the overall classification performance. RGBVIs performed best, with OA (Kappa) values of 84.96% (0.80), 86.67% (0.84), and 87.69% (0.84), followed by RGBFS with slightly lower OA and Kappa. However, the RGBR’G’B’ feature set did not significantly improve classification and even caused a slight decrease (OA: 1.5%; Kappa: 0.02) in ZN compared to the RGB feature set.
Figure 5 presents a comparison of class accuracies for the three regions based on four input features using the RF classifier. It is evident that the RGBVI and RGBFS features generally improved class accuracy, particularly in YS and ZN.
In YS, the identification of R. pseudoacacia and the “other” category was significantly better than that of P. tabuliformis and P. orientalis. When compared to the sole use of RGB imagery, the F1 scores for R. pseudoacacia, P. orientalis, P. tabuliformis, and the “other” category increased by 13.37% (10.20%), 16.00% (20.00%), 8.28% (6.81%), and 14.97% (13.05%), respectively, when RGBVI (RGBFS) features were used as inputs.
In ZN, there was a significant disparity between UA and PA when using the RGB and RGBR’G’B’ features, leading to considerable misclassification and omission issues, particularly for H. rhamnoides, for which the disparity reached 25.68%. The use of the RGBVI and RGBFS features notably alleviated these problems. Compared to the sole use of RGB imagery, when RGBVI (RGBFS) features were input, the F1 scores for L. gmelinii, P. tabuliformis, R. pseudoacacia, P. davidiana, H. rhamnoides, and the “other” category improved by 7.88% (8.24%), 11.74% (10.09%), 12.22% (3.98%), 9.21% (7.92%), 10.73% (8.01%), and 4.76% (−3.56%), respectively.
In YC, all categories generally exhibited high accuracy, with F1 scores exceeding 80%, and the issues of misclassification and omission were notably less severe than in YS and ZN. However, the RGBVIs and RGBFS features did not significantly improve class accuracy. When using RGBVI (RGBFS) features, the accuracy for the Q. wutaishanica did not improve and even decreased by 3.16%. Conversely, the F1 scores for R. pseudoacacia, broadleaf species, and the “other” category increased by 4.13% (4.56%), 5.38% (4.10%), and 6.65% (8.17%), respectively.

3.2.2. Tree Species Classification Based on SVM

Based on the overall accuracy across the three regions, when using the SVM classifier, the other three feature sets improved OA (by 3.5–15.05%) and Kappa (by 0.05–0.2) to varying degrees compared to using RGB images alone (Table 6). Among them, the improvement from RGBVIs was the most obvious: in YS, OA and Kappa increased by 15.05% and 0.19, respectively. Meanwhile, RGBR’G’B’ and RGBFS did not show consistent superiority.
Figure 6 illustrates the class accuracy for the three regions based on the SVM classifier using four input features. In the various sites, the RGBVIs and RGBFS features led to varying degrees of improvement in class accuracy.
In YS, R. pseudoacacia and the “other” category were identified significantly better than P. tabuliformis and P. orientalis. Compared to the sole use of RGB imagery, when RGBVI (RGBFS) features were employed, the F1 score for R. pseudoacacia, P. orientalis, P. tabuliformis, and the “other” category increased by 17.05% (14.11%), 17.78% (15.15%), 11.11% (11.11%), and 11.18% (11.18%), respectively.
In ZN, there were significant issues with misclassification and omission when using the RGB and RGBR’G’B’ features. Taking H. rhamnoides as an example, the disparity between UA and PA reached as high as 20.57%. The use of the RGBVI and RGBFS features significantly alleviated this problem: when RGBVI (RGBFS) features were input, the F1 score for H. rhamnoides improved dramatically, increasing by 19.05% (13.33%) compared to the sole use of RGB imagery. In contrast, the F1 scores for L. gmelinii, P. tabuliformis, R. pseudoacacia, and P. davidiana showed only slight improvements, while the F1 score for the “other” category experienced a minor decline (0.93% and 3.12%), though it remained above 80%.
Overall, the accuracy for all categories in YC is higher than that in YS and ZN, particularly after adding extra features, with the F1 score exceeding 85%. Additionally, the issues of misclassification and omission are significantly less severe compared to YS and ZN. In contrast to YS and ZN, all three additional features contributed to varying degrees of improvement in class accuracy: R. pseudoacacia improved by 7.73% to 9.52%, Q. wutaishanica by 1.66% to 4.56%, broadleaf species by 2.38% to 4.90%, and the “other” category by 2.28% to 5.6%. Among these, RGBVIs had the most significant impact, while RGBFS and RGBR’G’B’ did not demonstrate a consistent superiority.

3.2.3. Spatial Difference Analysis of Tree Species Classification

We evaluated the classification effectiveness of the three sites based on the average and optimal classification accuracy for each feature set. As shown in Figure 7, regardless of the input features, the average OA, Kappa, and F1 score followed a consistent trend: YC > ZN > YS. The maximum OA (OA_max) and maximum Kappa (Kappa_max) followed the same trend, except that YS and ZN are nearly equal under the RGBFS features. In terms of F1_max, YC significantly outperformed both ZN and YS; when RGB imagery alone is used, ZN is higher than YS, whereas when RGBVI features are input, YS surpasses ZN.
This study compared the performance of the SVM and RF classifiers using the three additional feature sets against the sole use of RGB imagery, assessing OA, Kappa, and the average F1 score for each tree species. The results are illustrated in Figure 8. In all three study sites, the RGBVI features exhibited the best enhancement effects. Regardless of the classifier used, the improvement in accuracy for the YS area was the most significant, with OA increasing by 2.65% to 15.04%, Kappa values rising by 0.03 to 0.20, and F1 scores increasing by 2.01% to 14.28%. In the YC area, all three feature sets demonstrated relatively stable and comparable enhancement effects, with OA improving by 2.31% to 6.15%, Kappa by 0.03 to 0.08, and F1 scores by 2.57% to 6.15%. For ZN, when using the RF classifier, the RGBR’G’B’ features exhibited some inhibitory effects; however, the RGBVI features significantly enhanced classification performance.
To compare the optimal classification effects across study sites, the best features selected for each region were used to evaluate classification performance. As shown in Table 7, when employing the RF classifier, the classification effectiveness ranked as follows: YC > ZN > YS, with OA exceeding 84% in all regions, Kappa values equal to or greater than 0.8, and F1 scores also exceeding 84%. When the SVM classifier was utilized, a similar trend was observed (YC > ZN > YS); however, the performance of YC was notably superior to the other two regions, achieving an OA of 90%, a Kappa of 0.87, and an F1 score of 90.08%.

3.3. Tree Species Mapping

Through the research conducted in Section 3.2, we identified the optimal classification combinations for each study area, ultimately producing maps and statistics on the area distribution and proportions of each category, as shown in Figure 9.
In YS, although the tree species distribution is simple, there is a severe imbalance in category distribution. R. pseudoacacia dominates, accounting for 70.7% of the total area, followed by non-forested land at 22.2%. The proportions of P. tabuliformis and P. orientalis are minimal, comprising only 6.6% and 0.5%, respectively.
In contrast, ZN exhibits a relatively uniform and complex distribution of categories. The most prevalent tree species is R. pseudoacacia, covering 1.88 ha (33.2%), followed by P. tabuliformis at 1.31 ha (23.2%), L. gmelinii at 0.62 ha (11%), P. davidiana at 0.29 ha (5.1%), and H. rhamnoides, which only occupies 0.22 ha (3.9%).
The category distribution in YC is relatively even, primarily consisting of broadleaf species. The largest area is occupied by Q. wutaishanica, covering 5.40 ha (30.6%), followed by broadleaf trees at 28.3%, while R. pseudoacacia accounts for only 19.5% (3.45 ha).

4. Discussion

4.1. Visible Band and Derived VIs

4.1.1. The Importance of Visible Band in Classification

The visible band, especially the blue band, plays a significant role in mapping tree species. Our results indicated that the visible band is feasible for discriminating dominant tree species on the Loess Plateau. Numerous studies based on multispectral satellite imagery [71,72,73] and aerial imagery [74,75,76,77] have also highlighted the importance of the visible band. In the feature importance test results of this study, the blue band was significantly more important than other features, indicating its crucial role in tree species classification. This may be due to the blue band’s sensitivity to chlorophyll and its resistance to canopy shading [78]. Many previous studies have also highlighted the significance of the blue band [14,17,78,79]. Therefore, the blue band will become increasingly crucial in future research [17,80].

4.1.2. The Importance of VIs in Classification

This study quantitatively evaluated 16 VIs derived from standard RGB spectral bands for their efficacy in tree species discrimination. Our findings revealed the outstanding contribution of VIs derived from RGB images in accurate tree species classification. This may be because the RGB image-derived VI correlates with various aspects of plant growth and development, including canopy closure, density, leaf area index, biomass, and yield [81,82,83,84]. In addition, empirical investigations confirm the extensive integration of UAV-acquired RGB imagery-derived VIs into forest monitoring and management [85,86]. This further confirms the critical potential of UAV-based RGB images and their derived VIs in vegetation monitoring, especially tree species identification.

4.2. Feature Selection

In classification tasks, feature selection aims to identify the most pertinent features for accurate classification, which helps improve computational speed and classification accuracy [87]. In this study, we selected the top seven ranked features as the RGBFS feature set and compared the results with RGBVIs, as shown in Figure 10. Although RGBFS achieved good results in all cases, there was still a slight gap compared to RGBVIs. The largest difference was observed with the RF classifier at site ZN, where OA, Kappa, and F1_avg were 3.67%, 0.04, and 3.66% lower than RGBVIs, respectively. This finding differs slightly from earlier studies [72,88], which reported that selected features achieved the highest accuracy. However, some studies [89,90,91] also concluded that feature selection does not necessarily yield better results than using all features combined. We think this discrepancy may be due to differences in the number of features. Aneece et al. pointed out that classification accuracy increases with the number of features and tends to stabilize at around 15–20 features [92]. In this study, the maximum number of features was 22, which may explain why RGBVIs showed higher accuracy than RGBFS.

4.3. Classification in Regions with Imbalanced Class Distributions

4.3.1. The Effect of Imbalanced Class Distributions in Classification

Class imbalance negatively affected the results for specific classes. In our study, YS performed the worst among the three sites, while R. pseudoacacia was classified more stably and accurately than the other classes. This may be attributed to the severe class imbalance in the tree species distribution within the study area, where more than 70% of the pure forests are R. pseudoacacia. Such imbalance may bias the predicted probability distribution of the classification model toward R. pseudoacacia. Many previous studies have observed similar patterns [71,93,94,95,96].
In cases of imbalanced class distribution, training samples can significantly impact classification results. In this study, P. orientalis and P. tabuliformis exhibited poor classification performance. This may be because they typically appear in mixed forests within the research area and are rarely found in pure stands, especially P. tabuliformis. As a result, it is challenging to find reliable reference samples. Therefore, some of the observed misclassifications may be due to errors in the reference dataset. Another potential reason could be spectral overlap due to the broad spectral range of some tree species [34].

4.3.2. Mitigating the Impact of Class Imbalance on Classification

To mitigate the effect of class imbalance on classification results, it is usually necessary to draw approximately the same number of sampling units for each class [97,98]. This can be achieved, for example, by down-sampling the majority class (R. pseudoacacia) to match the sample size of the minority classes (P. tabuliformis, P. orientalis) [99]. However, research on this issue remains relatively scarce to date; we will study it systematically and in depth in future work.
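The down-sampling strategy described above can be sketched as follows; the class labels and counts are illustrative, not the study's actual sample sizes.

```python
import numpy as np

def downsample_to_minority(X, y, seed=0):
    """Randomly down-sample every class to the size of the rarest class."""
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    n_min = counts.min()
    keep = np.concatenate([
        rng.choice(np.where(y == c)[0], size=n_min, replace=False)
        for c in classes
    ])
    return X[keep], y[keep]

# Illustrative imbalance: 70 majority samples vs 15 each for two minority species
y = np.array(["Rp"] * 70 + ["Pt"] * 15 + ["Po"] * 15)
X = np.arange(len(y)).reshape(-1, 1)
Xb, yb = downsample_to_minority(X, y)
```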

4.4. Comparison of RF and SVM Classifiers

Our analysis revealed no consistent superiority between the RF and SVM classifiers across the three study sites (Figure 11). At site YS (Figure 11a), neither algorithm demonstrated statistically dominant performance across the evaluated metrics. At site ZN (Figure 11b), SVM achieved systematically higher accuracy than RF when using the RGB and RGBR’G’B’ features, whereas this pattern was reversed with RGBVIs and FS. In contrast, SVM consistently outperformed RF at site YC (Figure 11c) regardless of the input feature set, suggesting region-specific algorithm sensitivity to ecological heterogeneity.
Our findings align with previous comparative studies of SVM and RF in forest and land cover classification [13,34,100]. Cetin and Yastikli [27] reported RF’s superior performance over SVM in urban tree species mapping, whereas other studies observed the opposite trend under specific conditions [101,102]. A meta-analysis by Sheykhmousa et al. [36] of 32 remote sensing classification studies employing both algorithms revealed RF outperformed SVM in 19 cases. This discrepancy may stem from RF’s inherent advantage in processing low-spatial-resolution imagery, where its ensemble-based feature selection effectively mitigates spectral noise. However, no consistent algorithmic superiority was observed with medium-to-high spatial resolution data, likely due to SVM’s enhanced capability to resolve fine-scale spectral-textural patterns.
These results suggest that neither classifier is universally superior for UAV RGB-based tree species classification. The optimal algorithm choice should be guided by spatio-spectral characteristics of the target imagery and ecological complexity of the study area, rather than relying on generic performance assumptions.
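A side-by-side comparison of the two classifiers on the same feature set, as performed above, can be sketched with scikit-learn; the synthetic data stand in for the study's 22 spectral and VI features, and the hyperparameters shown are illustrative, not those tuned in this study.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in for per-crown feature vectors (e.g. RGB bands + derived VIs)
X, y = make_classification(n_samples=500, n_features=22, n_informative=8,
                           n_classes=4, random_state=0)

rf = RandomForestClassifier(n_estimators=300, random_state=0)
# SVM is scale-sensitive, so standardize features inside a pipeline
svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10, gamma="scale"))

# Mean overall accuracy (OA) under 5-fold cross-validation
rf_oa = cross_val_score(rf, X, y, cv=5).mean()
svm_oa = cross_val_score(svm, X, y, cv=5).mean()
```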

4.5. Limitations and Future Research Directions

Impact of image acquisition conditions: The classification performance varied significantly across the three study areas in this research. These discrepancies may be partially attributable to variations in data acquisition conditions—such as illumination intensity, solar zenith angle, atmospheric conditions, and local elevation—across different regions. However, because quantitative environmental parameters were not recorded during data acquisition in this study, a rigorous analysis of these factors was not feasible. Future investigations could validate this hypothesis through systematic controlled experiments designed to isolate and quantify the impacts of specific environmental variables on classification accuracy.
Integration of multisensor data for enhanced tree species classification: While UAV-based RGB imagery demonstrates preliminary feasibility in tree species classification, its accuracy and robustness require further improvement. Hyperspectral and LiDAR systems have shown considerable potential in this domain, particularly as technological advancements have rendered them increasingly accessible. Future research should prioritize the development of robust classification frameworks that synergistically integrate multi-source data (e.g., spectral, structural, and spatial features) to achieve higher classification consistency across heterogeneous forest environments.
Incorporation of textural features for enhanced classification accuracy: In a study on tree species classification utilizing UAV-based multispectral imagery, Abdollahnejad et al. [103] demonstrated that the integration of textural variables improved the OA by 4.24%. This finding highlights the significant potential of textural features in UAV-driven tree species discrimination. Future research should prioritize leveraging the abundant textural information inherent in high-spatial-resolution RGB imagery acquired by UAV systems, which may further optimize classification performance.
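As a minimal illustration of one such textural feature, a local-variance map can be computed over a sliding window of a single image band; GLCM statistics such as those used by Abdollahnejad et al. [103] would be extracted in an analogous per-window fashion. The function name and window size here are illustrative.

```python
import numpy as np

def local_variance(band, win=3):
    """Local-variance texture of a single image band.

    Returns a (H-win+1, W-win+1) map; higher values indicate rougher texture.
    """
    windows = np.lib.stride_tricks.sliding_window_view(band, (win, win))
    return windows.var(axis=(-2, -1))

# A flat region has zero texture; a checkerboard-like region does not
flat = np.full((5, 5), 100.0)
rough = np.indices((5, 5)).sum(axis=0) % 2 * 50.0
```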
Addressing class imbalance in forest resource classification: Future research should address the pervasive class distribution skew in forest resource inventories through two strategies: (1) integrating adaptive resampling techniques during sample selection to balance species representation and (2) developing cross-region transfer learning frameworks to leverage external ecological knowledge. These approaches collectively aim to enhance classification reliability by improving dataset representativeness and model generalizability for underrepresented species.

5. Conclusions

This study evaluated the feasibility of UAV-based RGB images for tree species classification on the Loess Plateau. Through a comparative analysis of classification accuracy across diverse input features, study sites, and classifiers, the following conclusions were drawn:
(1)
RGBVIs significantly enhanced classification accuracy, yielding the best performance during the classification process;
(2)
Feature selection retained most of the advantages of the complete feature set, though its accuracy remained slightly lower than that of RGBVIs;
(3)
Site-specific accuracy ranked YC > ZN > YS, with RGBVI-driven improvements inversely correlating with baseline performance. This variation may be attributed to the differences in tree species distribution across the three sites;
(4)
SVM and RF achieved comparable results, though SVM excelled in deciduous–coniferous mixed forest.
These findings suggest that UAV-based RGB imagery and its derived VIs constitute a viable technical solution for tree species classification on the Loess Plateau. This method offers a promising solution for forest tree species classification in the region and provides a valuable reference for similar studies in areas with limited and imbalanced sample distributions.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/drones9040296/s1, Table S1: Summary of VIs derived from the UAV-based RGB images; Table S2: ANOVA analysis of feature values among various training samples (p < 0.01).

Author Contributions

The authors confirm contributions to this paper as follows: study conception and design: Z.Z. and Z.L.; data collection: Z.L., S.Y., Q.Y., M.Z. and D.Y.; analysis and interpretation of results: Z.L. and S.Y.; draft manuscript preparation: Z.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the following: (1) The National Key Research and Development Program of China named “Quality improvement technology of low-efficiency plantation forest ecosystem in the Loess Plateau” [grant number 2022YFF1300400]. (2) The National Key Research and Development Program of China: “Key Technologies and Demonstration of Planted Forest Landscape Management in the Loess Gully and Tableland Region” (2017–2020).

Data Availability Statement

The data that were used are confidential.

Acknowledgments

This work was supported by the National Key Research and Development Program of China named “Quality improvement technology of low-efficiency plantation forest ecosystem in the Loess Plateau” [grant number 2022YFF1300400]; Subject: Multifunctional enhancement of Robinia pseudoacacia forests in hilly and gully areas and techniques for maintaining vegetation stability on the Loess Plateau [grant number 2022YFF1300405]; and Special Topic: Distribution Pattern, Causes and Change Trends of Low-Efficiency Robinia pseudoacacia plantations.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Herforth, A.; Bai, Y.; Venkat, A.; Mahrt, K.; Ebel, A.; Masters, W.A. Cost and Affordability of Healthy Diets Across and Within Countries: Background Paper for the State of Food Security and Nutrition in the World; FAO Agricultural Development Economics Technical Study No. 9; Food & Agriculture Organization: Rome, Italy, 2020; Volume 9. [Google Scholar]
  2. Bonan, G.B. Forests, climate, and public policy: A 500-year interdisciplinary odyssey. Annu. Rev. Ecol. Evol. Syst. 2016, 47, 97–121. [Google Scholar] [CrossRef]
  3. Xiao, J.; Chevallier, F.; Gomez, C.; Guanter, L.; Hicke, J.A.; Huete, A.R.; Ichii, K.; Ni, W.; Pang, Y.; Rahman, A.F.; et al. Remote sensing of the terrestrial carbon cycle: A review of advances over 50 years. Remote Sens. Environ. 2020, 233, 111383. [Google Scholar] [CrossRef]
  4. Chiarucci, A.; Piovesan, G. Need for a global map of forest naturalness for a sustainable future. Conserv. Biol. 2019, 34, 368–372. [Google Scholar] [CrossRef]
  5. Wang, R.; Gamon, J.A. Remote sensing of terrestrial plant biodiversity. Remote Sens. Environ. 2019, 231, 111218. [Google Scholar] [CrossRef]
  6. Deur, M.; Gašparović, M.; Balenović, I. Tree species classification in mixed deciduous forests using very high spatial resolution satellite imagery and machine learning methods. Remote Sens. 2020, 12, 3926. [Google Scholar] [CrossRef]
  7. Qi, S.; Song, B.; Liu, C.; Gong, P.; Luo, J.; Zhang, M.; Xiong, T. Bamboo forest mapping in China using the dense Landsat 8 image archive and Google Earth Engine. Remote Sens. 2022, 14, 762. [Google Scholar] [CrossRef]
  8. Pu, R.; Landry, S. Mapping urban tree species by integrating multi-seasonal high resolution pléiades satellite imagery with airborne LiDAR data. Urban For. Urban Green. 2020, 53, 126675. [Google Scholar] [CrossRef]
  9. Liu, H.; Wu, C. Crown-level tree species classification from AISA hyperspectral imagery using an innovative pixel-weighting approach. Int. J. Appl. Earth Obs. Geoinf. 2018, 68, 298–307. [Google Scholar] [CrossRef]
  10. Wang, L.; Jia, M.; Yin, D.; Tian, J. A review of remote sensing for mangrove forests: 1956–2018. Remote Sens. Environ. 2019, 231, 111223. [Google Scholar] [CrossRef]
  11. Sothe, C.; Dalponte, M.; de Almeida, C.M.; Schimalski, M.B.; Lima, C.L.; Liesenberg, V.; Miyoshi, G.T.; Tommaselli, A.M.G. Tree species classification in a highly diverse subtropical forest integrating UAV-based photogrammetric point cloud and hyperspectral data. Remote Sens. 2019, 11, 1338. [Google Scholar] [CrossRef]
  12. Cao, J.; Leng, W.; Liu, K.; Liu, L.; He, Z.; Zhu, Y. Object-based mangrove species classification using unmanned aerial vehicle hyperspectral images and digital surface models. Remote Sens. 2018, 10, 89. [Google Scholar] [CrossRef]
  13. Lee, E.-R.; Baek, W.-K.; Jung, H.-S. Mapping tree species using CNN from bi-seasonal high-resolution drone optic and LiDAR data. Remote Sens. 2023, 15, 2140. [Google Scholar] [CrossRef]
  14. Fassnacht, F.E.; Latifi, H.; Stereńczak, K.; Modzelewska, A.; Lefsky, M.; Waser, L.T.; Straub, C.; Ghosh, A. Review of studies on tree species classification from remotely sensed data. Remote Sens. Environ. 2016, 186, 64–87. [Google Scholar] [CrossRef]
  15. Wagner, F.H.; Ferreira, M.P.; Sanchez, A.; Hirye, M.C.M.; Zortea, M.; Gloor, E.; Phillips, O.L.; de Souza Filho, C.R.; Shimabukuro, Y.E.; Aragão, L.E.O.C. Individual tree crown delineation in a highly diverse tropical forest using very high resolution satellite images. ISPRS J. Photogramm. Remote Sens. 2018, 145, 362–377. [Google Scholar] [CrossRef]
  16. Schiefer, F.; Kattenborn, T.; Frick, A.; Frey, J.; Schall, P.; Koch, B.; Schmidtlein, S. Mapping forest tree species in high resolution UAV-based RGB-imagery by means of convolutional neural networks. ISPRS J. Photogramm. Remote Sens. 2020, 170, 205–215. [Google Scholar] [CrossRef]
  17. Grybas, H.; Congalton, R.G. A comparison of multi-temporal RGB and multispectral UAS imagery for tree species classification in heterogeneous New Hampshire Forests. Remote Sens. 2021, 13, 2631. [Google Scholar] [CrossRef]
  18. Zhang, C.; Xia, K.; Feng, H.; Yang, Y.; Du, X. Tree species classification using deep learning and RGB optical images obtained by an unmanned aerial vehicle. J. For. Res. 2021, 32, 1879–1888. [Google Scholar] [CrossRef]
  19. Egli, S.; Höpke, M. CNN-based tree species classification using high resolution RGB image data from automated UAV observations. Remote Sens. 2020, 12, 3892. [Google Scholar] [CrossRef]
  20. Moura, M.M.; de Oliveira, L.E.S.; Sanquetta, C.R.; Bastos, A.; Mohan, M.; Corte, A.P.D. Towards Amazon forest restoration: Automatic detection of species from UAV imagery. Remote Sens. 2021, 13, 2627. [Google Scholar] [CrossRef]
  21. Shi, W.; Wang, S.; Yue, H.; Wang, D.; Ye, H.; Sun, L.; Sun, J.; Liu, J.; Deng, Z.; Rao, Y.; et al. Identifying tree species in a warm-temperate deciduous forest by combining multi-rotor and fixed-wing unmanned aerial vehicles. Drones 2023, 7, 353. [Google Scholar] [CrossRef]
  22. Ferreira, M.P.; de Almeida, D.R.A.; Papa, D.D.A.; Minervino, J.B.S.; Veras, H.F.P.; Formighieri, A.; Santos, C.A.N.; Ferreira, M.A.D.; Figueiredo, E.O.; Ferreira, E.J.L. Individual tree detection and species classification of Amazonian palms using UAV images and deep learning. For. Ecol. Manag. 2020, 475, 118397. [Google Scholar] [CrossRef]
  23. Huang, H.; Li, F.; Fan, P.; Chen, M.; Yang, X.; Lu, M.; Sheng, X.; Pu, H.; Zhu, P. Amdnet: A modern uav rgb remote-sensing tree species image segmentation model based on dual-attention residual and structure re-parameterization. Forests 2023, 14, 549. [Google Scholar] [CrossRef]
  24. Zhang, C.; Zhou, J.; Wang, H.; Tan, T.; Cui, M.; Huang, Z.; Wang, P.; Zhang, L. Multi-species individual tree segmentation and identification based on improved mask R-CNN and UAV imagery in mixed forests. Remote Sens. 2022, 14, 874. [Google Scholar] [CrossRef]
  25. Maschler, J.; Atzberger, C.; Immitzer, M. Individual tree crown segmentation and classification of 13 tree species using airborne hyperspectral data. Remote Sens. 2018, 10, 1218. [Google Scholar] [CrossRef]
  26. Jones, T.G.; Coops, N.C.; Sharma, T. Assessing the utility of airborne hyperspectral and LiDAR data for species distribution mapping in the coastal Pacific Northwest, Canada. Remote Sens. Environ. 2010, 114, 2841–2852. [Google Scholar] [CrossRef]
  27. Cetin, Z.; Yastikli, N. The use of machine learning algorithms in urban tree species classification. ISPRS Int. J. Geo-Inf. 2022, 11, 226. [Google Scholar] [CrossRef]
  28. Liu, L.; Coops, N.C.; Aven, N.W.; Pang, Y. Mapping urban tree species using integrated airborne hyperspectral and LiDAR remote sensing data. Remote Sens. Environ. 2017, 200, 170–182. [Google Scholar] [CrossRef]
  29. Hartling, S.; Sagan, V.; Sidike, P.; Maimaitijiang, M.; Carron, J. Urban tree species classification using a WorldView-2/3 and LiDAR data fusion approach and deep learning. Sensors 2019, 19, 1284. [Google Scholar] [CrossRef]
  30. Yu, X.; Hyyppä, J.; Litkey, P.; Kaartinen, H.; Vastaranta, M.; Holopainen, M. Single-sensor solution to tree species classification using multispectral airborne laser scanning. Remote Sens. 2017, 9, 108. [Google Scholar] [CrossRef]
  31. Mountrakis, G.; Im, J.; Ogole, C. Support vector machines in remote sensing: A review. ISPRS J. Photogramm. Remote Sens. 2011, 66, 247–259. [Google Scholar] [CrossRef]
  32. Feret, J.-B.; Asner, G.P. Tree species discrimination in tropical forests using airborne imaging spectroscopy. IEEE Trans. Geosci. Remote Sens. 2012, 51, 73–84. [Google Scholar] [CrossRef]
  33. Baldeck, C.A.; Asner, G.P.; Martin, R.E.; Anderson, C.B.; Knapp, D.E.; Kellner, J.R.; Wright, S.J. Operational tree species mapping in a diverse tropical forest with airborne imaging spectroscopy. PLoS ONE 2015, 10, e0118403. [Google Scholar] [CrossRef] [PubMed]
  34. Ballanti, L.; Blesius, L.; Hines, E.; Kruse, B. Tree species classification using hyperspectral imagery: A comparison of two classifiers. Remote Sens. 2016, 8, 445. [Google Scholar] [CrossRef]
  35. Ferreira, M.P.; Zortea, M.; Zanotta, D.C.; Shimabukuro, Y.E.; de Souza Filho, C.R. Mapping tree species in tropical seasonal semi-deciduous forests with hyperspectral and multispectral data. Remote Sens. Environ. 2016, 179, 66–78. [Google Scholar] [CrossRef]
  36. Sheykhmousa, M.; Mahdianpari, M.; Ghanbari, H.; Mohammadimanesh, F.; Ghamisi, P.; Homayouni, S. Support vector machine versus random forest for remote sensing image classification: A meta-analysis and systematic review. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 6308–6325. [Google Scholar] [CrossRef]
  37. Deng, S.; Katoh, M.; Yu, X.; Hyyppä, J.; Gao, T. Comparison of tree species classifications at the individual tree level by combining ALS data and RGB images using different algorithms. Remote Sens. 2016, 8, 1034. [Google Scholar] [CrossRef]
  38. Piiroinen, R.; Heiskanen, J.; Maeda, E.; Viinikka, A.; Pellikka, P. Classification of tree species in a diverse African agroforestry landscape using imaging spectroscopy and laser scanning. Remote Sens. 2017, 9, 875. [Google Scholar] [CrossRef]
  39. Raczko, E.; Zagajewski, B. Comparison of support vector machine, random forest and neural network classifiers for tree species classification on airborne hyperspectral APEX images. Eur. J. Remote Sens. 2017, 50, 144–154. [Google Scholar] [CrossRef]
  40. Immitzer, M.; Neuwirth, M.; Böck, S.; Brenner, H.; Vuolo, F.; Atzberger, C. Optimal input features for tree species classification in Central Europe based on multi-temporal Sentinel-2 data. Remote Sens. 2019, 11, 2599. [Google Scholar] [CrossRef]
  41. Huang, W.; Wang, P.; He, L.; Liu, B. Improvement of water yield and net primary productivity ecosystem services in the Loess Plateau of China since the “Grain for Green” project. Ecol. Indic. 2023, 154, 110707. [Google Scholar] [CrossRef]
  42. Zhang, M.; Yin, D.; Li, Z.; Zhao, Z. Improved identification of forest types in the Loess plateau using Multi-Source remote sensing data, transfer learning, and neural residual networks. Remote Sens. 2024, 16, 2096. [Google Scholar] [CrossRef]
  43. Yang, Y.F.; Wang, B.; Wang, G.L.; Li, Z.S. Ecological regionalization and overview of the Loess Plateau. Acta Ecol. Sin. 2019, 39, 7389–7397. [Google Scholar]
  44. Daniels, L.; Eeckhout, E.; Wieme, J.; Dejaegher, Y.; Audenaert, K.; Maes, W.H. Identifying the optimal radiometric calibration method for UAV-based multispectral imaging. Remote Sens. 2023, 15, 2909. [Google Scholar] [CrossRef]
  45. Liang, Y.; Kou, W.; Lai, H.; Wang, J.; Wang, Q.; Xu, W.; Wang, H.; Lu, N. Improved estimation of aboveground biomass in rubber plantations by fusing spectral and textural information from UAV-based RGB imagery. Ecol. Indic. 2022, 142, 109286. [Google Scholar] [CrossRef]
  46. Guijarro, M.; Pajares, G.; Riomoros, I.; Herrera, P.; Burgos-Artizzu, X.; Ribeiro, A. Automatic segmentation of relevant textures in agricultural images. Comput. Electron. Agric. 2011, 75, 75–83. [Google Scholar] [CrossRef]
  47. Meyer, G.E.; Neto, J.C. Verification of color vegetation indices for automated crop imaging applications. Comput. Electron. Agric. 2008, 63, 282–293. [Google Scholar] [CrossRef]
  48. Louhaichi, M.; Borman, M.M.; Johnson, D.E. Spatially located platform and aerial photography for documentation of grazing impacts on wheat. Geocarto Int. 2001, 16, 65–70. [Google Scholar] [CrossRef]
  49. Kawashima, S.; Nakatani, M. An algorithm for estimating chlorophyll content in leaves using a video camera. Ann. Bot. 1998, 81, 49–54. [Google Scholar] [CrossRef]
  50. Bendig, J.; Yu, K.; Aasen, H.; Bolten, A.; Bennertz, S.; Broscheit, J.; Gnyp, M.L.; Bareth, G. Combining UAV-based plant height from crop surface models, visible, and near infrared vegetation indices for biomass monitoring in barley. Int. J. Appl. Earth Obs. Geoinf. 2015, 39, 79–87. [Google Scholar] [CrossRef]
  51. Gitelson, A.A.; Kaufman, Y.J.; Stark, R.; Rundquist, D. Novel algorithms for remote estimation of vegetation fraction. Remote Sens. Environ. 2002, 80, 76–87. [Google Scholar] [CrossRef]
  52. Woebbecke, D.M.; Meyer, G.E.; Von Bargen, K.; Mortensen, D.A. Color indices for weed identification under various soil, residue, and lighting conditions. Trans. ASAE 1995, 38, 259–269. [Google Scholar] [CrossRef]
  53. Laliberte, A.; Browning, D.; Rango, A. A comparison of three feature selection methods for object-based classification of sub-decimeter resolution UltraCam-L imagery. Int. J. Appl. Earth Obs. Geoinf. 2012, 15, 70–78. [Google Scholar] [CrossRef]
  54. Zhang, H.; Li, Q.; Liu, J.; Du, X.; Dong, T.; McNairn, H.; Champagne, C.; Liu, M.; Shang, J. Object-based crop classification using multi-temporal SPOT-5 imagery and textural features with a Random Forest classifier. Geocarto Int. 2018, 33, 1017–1035. [Google Scholar] [CrossRef]
  55. Peña-Barragán, J.M.; Ngugi, M.K.; Plant, R.E.; Six, J. Object-based crop identification using multiple vegetation indices, textural features and crop phenology. Remote Sens. Environ. 2011, 115, 1301–1316. [Google Scholar] [CrossRef]
  56. Kursa, M.B.; Rudnicki, W.R. Feature Selection with the Boruta Package. J. Stat. Softw. 2010, 36, 1–13. [Google Scholar] [CrossRef]
  57. Degenhardt, F.; Seifert, S.; Szymczak, S. Evaluation of variable selection methods for random forests and omics data sets. Briefings Bioinform. 2019, 20, 492–503. [Google Scholar] [CrossRef]
  58. Breiman, L. Random forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef]
  59. Cutler, D.R.; Edwards, T.C., Jr.; Beard, K.H.; Cutler, A.; Hess, K.T.; Gibson, J.; Lawler, J.J. Random forests for classification in ecology. Ecology 2007, 88, 2783–2792. [Google Scholar] [CrossRef]
  60. Rüetschi, M.; Weber, D.; Koch, T.L.; Waser, L.T.; Small, D.; Ginzler, C. Countrywide mapping of shrub forest using multi-sensor data and bias correction techniques. Int. J. Appl. Earth Obs. Geoinf. 2021, 105, 102613. [Google Scholar] [CrossRef]
  61. Hermosilla, T.; Bastyr, A.; Coops, N.C.; White, J.C.; Wulder, M.A. Mapping the presence and distribution of tree species in Canada’s forested ecosystems. Remote Sens. Environ. 2022, 282, 113276. [Google Scholar] [CrossRef]
  62. Belgiu, M.; Drăguţ, L. Random forest in remote sensing: A review of applications and future directions. ISPRS J. Photogramm. Remote Sens. 2016, 114, 24–31. [Google Scholar] [CrossRef]
  63. Cortes, C.; Vapnik, V. Support-vector networks. Mach. Learn. 1995, 20, 273–297. [Google Scholar] [CrossRef]
  64. Li, D.; Ke, Y.; Gong, H.; Li, X. Object-based urban tree species classification using bi-temporal WorldView-2 and WorldView-3 images. Remote Sens. 2015, 7, 16917–16937. [Google Scholar] [CrossRef]
  65. Omer, G.; Mutanga, O.; Abdel-Rahman, E.M.; Adam, E. Performance of support vector machines and artificial neural network for mapping endangered tree species using WorldView-2 data in Dukuduku forest, South Africa. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 4825–4840. [Google Scholar] [CrossRef]
  66. Xie, Z.; Chen, Y.; Lu, D.; Li, G.; Chen, E. Classification of land cover, forest, and tree species classes with ZiYuan-3 multispectral and stereo data. Remote Sens. 2019, 11, 164. [Google Scholar] [CrossRef]
  67. Marcinkowska-Ochtyra, A.; Zagajewski, B.; Raczko, E.; Ochtyra, A.; Jarocińska, A. Classification of high-mountain vegetation communities within a diverse Giant Mountains ecosystem using airborne APEX hyperspectral imagery. Remote Sens. 2018, 10, 570. [Google Scholar] [CrossRef]
  68. Melgani, F.; Bruzzone, L. Classification of hyperspectral remote sensing images with support vector machines. IEEE Trans. Geosci. Remote Sens. 2004, 42, 1778–1790. [Google Scholar] [CrossRef]
  69. Cohen, J. A coefficient of agreement for nominal scales. Educ. Psychol. Meas. 1960, 20, 37–46. [Google Scholar] [CrossRef]
  70. Olofsson, P.; Foody, G.M.; Herold, M.; Stehman, S.V.; Woodcock, C.E.; Wulder, M.A. Good practices for estimating area and assessing accuracy of land change. Remote Sens. Environ. 2014, 148, 42–57. [Google Scholar] [CrossRef]
  71. Immitzer, M.; Atzberger, C.; Koukal, T. Tree species classification with Random Forest using very high spatial resolution 8-band WorldView-2 satellite data. Remote Sens. 2012, 4, 2661–2693. [Google Scholar] [CrossRef]
  72. Peerbhay, K.Y.; Mutanga, O.; Ismail, R. Investigating the capability of few strategically placed Worldview-2 multispectral bands to discriminate forest species in KwaZulu-Natal, South Africa. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2013, 7, 307–316. [Google Scholar] [CrossRef]
  73. Waser, L.T.; Küchler, M.; Jütte, K.; Stampfer, T. Evaluating the potential of WorldView-2 data to classify tree species and different levels of ash mortality. Remote Sens. 2014, 6, 4515–4545. [Google Scholar] [CrossRef]
  74. Erikson, M. Species classification of individually segmented tree crowns in high-resolution aerial images using radiometric and morphologic image measures. Remote Sens. Environ. 2004, 91, 469–477. [Google Scholar] [CrossRef]
  75. Katoh, M.; Gougeon, F.A.; Leckie, D.G. Application of high-resolution airborne data using individual tree crowns in Japanese conifer plantations. J. For. Res. 2009, 14, 10–19. [Google Scholar] [CrossRef]
  76. Waser, L.T.; Klonus, S.; Ehlers, M.; Küchler, M.; Jung, A. Potential of digital sensors for land cover and tree species classifications-A case study in the framework of the DGPF-project. Photogramm. Fernerkund. Geoinf. 2010, 2010, 141–156. [Google Scholar] [CrossRef]
  77. Nezami, S.; Khoramshahi, E.; Nevalainen, O.; Pölönen, I.; Honkavaara, E. Tree species classification of drone hyperspectral and RGB imagery with deep learning convolutional neural networks. Remote Sens. 2020, 12, 1070. [Google Scholar] [CrossRef]
  78. Key, T.A. Comparison of multispectral and multitemporal information in high spatial resolution imagery for classification of individual tree species in a temperate hardwood forest. Remote Sens. Environ. 2001, 75, 100–112. [Google Scholar] [CrossRef]
  79. Milas, A.S.; Arend, K.; Mayer, C.; Simonson, M.A.; Mackey, S. Different colours of shadows: Classification of UAV images. Int. J. Remote Sens. 2017, 38, 3084–3100. [Google Scholar] [CrossRef]
  80. Miyoshi, G.T.; Imai, N.N.; Tommaselli, A.M.G.; de Moraes, M.V.A.; Honkavaara, E. Evaluation of hyperspectral multitemporal information to improve tree species identification in the highly diverse Atlantic forest. Remote Sens. 2020, 12, 244. [Google Scholar] [CrossRef]
  81. Córcoles, J.I.; Ortega, J.F.; Hernández, D.; Moreno, M.A. Estimation of leaf area index in onion (Allium cepa L.) using an unmanned aerial vehicle. Biosyst. Eng. 2013, 115, 31–42. [Google Scholar] [CrossRef]
  82. Zhang, J.; Li, W.; Ogunbona, P.O.; Wang, P.; Tang, C. RGB-D-based action recognition datasets: A survey. Pattern Recognit. 2016, 60, 86–105. [Google Scholar] [CrossRef]
  83. Griffith, D.C.; Hay, G.J. Integrating geobia, machine learning, and volunteered geographic information to map vegetation over rooftops. ISPRS Int. J. Geo-Inf. 2018, 7, 462. [Google Scholar] [CrossRef]
  84. Sanches, G.M.; Duft, D.G.; Kölln, O.T.; Luciano, A.C.d.S.; De Castro, S.G.Q.; Okuno, F.M.; Franco, H.C.J. The potential for RGB images obtained using unmanned aerial vehicle to assess and predict yield in sugarcane fields. Int. J. Remote Sens. 2018, 39, 5402–5414. [Google Scholar] [CrossRef]
  85. Lu, B.; He, Y.; Dao, P.D. Comparing the performance of multispectral and hyperspectral images for estimating vegetation properties. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2019, 12, 1784–1797. [Google Scholar] [CrossRef]
  86. Lu, J.; Cheng, D.; Geng, C.; Zhang, Z.; Xiang, Y.; Hu, T. Combining plant height, canopy coverage and vegetation index from UAV-based RGB images to estimate leaf nitrogen concentration of summer maize. Biosyst. Eng. 2021, 202, 42–54. [Google Scholar] [CrossRef]
  87. Hsu, H.-H.; Hsieh, C.-W.; Lu, M.-D. Hybrid feature selection by combining filters and wrappers. Expert Syst. Appl. 2011, 38, 8144–8150. [Google Scholar] [CrossRef]
  88. Cho, M.A.; Debba, P.; Mathieu, R.; Naidoo, L.; van Aardt, J.; Asner, G.P. Improving discrimination of savanna tree species through a multiple-endmember spectral angle mapper approach: Canopy-level analysis. IEEE Trans. Geosci. Remote Sens. 2010, 48, 4133–4142. [Google Scholar] [CrossRef]
  89. Chan, J.C.-W.; Paelinckx, D. Evaluation of Random Forest and Adaboost tree-based ensemble classification and spectral band selection for ecotope mapping using airborne hyperspectral imagery. Remote Sens. Environ. 2008, 112, 2999–3011. [Google Scholar] [CrossRef]
  90. Alonzo, M.; Bookhagen, B.; Roberts, D.A. Urban tree species mapping using hyperspectral and lidar data fusion. Remote Sens. Environ. 2014, 148, 70–83. [Google Scholar] [CrossRef]
  91. Hennessy, A.; Clarke, K.; Lewis, M. Hyperspectral classification of plants: A review of waveband selection generalizability. Remote Sens. 2020, 12, 113. [Google Scholar] [CrossRef]
  92. Aneece, I.; Thenkabail, P. Accuracies achieved in classifying five leading world crop types and their growth stages using optimal earth observing-1 hyperion hyperspectral narrowbands on google earth engine. Remote Sens. 2018, 10, 2027. [Google Scholar] [CrossRef]
  93. Immitzer, M.; Vuolo, F.; Atzberger, C. First experience with Sentinel-2 data for crop and tree species classifications in central Europe. Remote Sens. 2016, 8, 166. [Google Scholar] [CrossRef]
  94. Sheeren, D.; Fauvel, M.; Josipović, V.; Lopes, M.; Planque, C.; Willm, J.; Dejoux, J.-F. Tree species classification in temperate forests using formosat-2 satellite image time Series. Remote Sens. 2016, 8, 734. [Google Scholar] [CrossRef]
  95. Fassnacht, F.E.; Mangold, D.; Schäfer, J.; Immitzer, M.; Kattenborn, T.; Koch, B.; Latifi, H. Estimating stand density, biomass and tree species from very high resolution stereo-imagery—Towards an all-in-one sensor for forestry applications? For. Int. J. For. Res. 2017, 90, 613–631. [Google Scholar] [CrossRef]
  96. Grabska, E.; Hostert, P.; Pflugmacher, D.; Ostapowicz, K. Forest stand species mapping using the Sentinel-2 time series. Remote Sens. 2019, 11, 1197. [Google Scholar] [CrossRef]
  97. Vluymans, S. Dealing with Imbalanced and Weakly Labelled Data in Machine Learning Using Fuzzy and Rough Set Methods; Springer Nature: Dordrecht, The Netherlands, 2019; pp. 81–110. [Google Scholar] [CrossRef]
  98. Maxwell, A.E.; Warner, T.A.; Fang, F. Implementation of machine-learning classification in remote sensing: An applied review. Int. J. Remote Sens. 2018, 39, 2784–2817. [Google Scholar] [CrossRef]
  99. Chen, Z.; Bai, Z.; Sinha, B.K. Ranked Set Sampling: Theory and Applications; Springer: New York, NY, USA, 2004; Volume 176. [Google Scholar]
  100. Dietterich, T.G. Approximate statistical tests for comparing supervised classification learning algorithms. Neural Comput. 1998, 10, 1895–1923. [Google Scholar] [CrossRef]
  101. Zagajewski, B.; Kluczek, M.; Raczko, E.; Njegovec, A.; Dabija, A.; Kycko, M. Comparison of random forest, support vector machines, and neural networks for post-disaster forest species mapping of the Krkonoše/Karkonosze Transboundary Biosphere Reserve. Remote Sens. 2021, 13, 2581. [Google Scholar] [CrossRef]
  102. Nasiri, V.; Beloiu, M.; Darvishsefat, A.A.; Griess, V.C.; Maftei, C.; Waser, L.T. Mapping tree species composition in a Caspian temperate mixed forest based on spectral-temporal metrics and machine learning. Int. J. Appl. Earth Obs. Geoinf. 2023, 116, 103154. [Google Scholar] [CrossRef]
  103. Abdollahnejad, A.; Panagiotidis, D. Tree species classification and health status assessment for a mixed broadleaf-conifer forest with UAS multispectral imaging. Remote Sens. 2020, 12, 3722. [Google Scholar] [CrossRef]
Figure 1. Location of the study sites (a) and the RGB images of site ZN (b), site YC (c), and site YS (d).
Figure 2. Workflow diagram of the classification approach.
Figure 3. Variance analysis of spectral features of each class.
Figure 4. Feature importance ranked by the Boruta algorithm (V1: Red, V2: Green, V3: Blue, V4: R’, V5: G’, V6: B’, V7: CIVE, V8: ExB, V9: ExG, V10: ExGR, V11: ExR, V12: GBRI, V13: GLA, V14: GRRI, V15: GRVI, V16: IKAW, V17: INT, V18: MGRVI, V19: RBRI, V20: RGBVI, V21: VARI, and V22: WI).
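Several of the RGB vegetation indices ranked in Figure 4 are derived directly from the normalized (chromatic) band values R’, G’, B’. The sketch below uses the commonly published definitions of a few of them (ExG, ExR, GRVI, MGRVI, RGBVI, VARI); these are the standard literature formulas and are an assumption here, since the paper’s exact expressions are not reproduced in this excerpt.

```python
import numpy as np

def rgb_vegetation_indices(R, G, B):
    """Compute a subset of RGB vegetation indices from digital numbers.

    Standard literature definitions are assumed; the study's exact
    formulas may differ. Inputs may be scalars or arrays.
    """
    R, G, B = (np.asarray(x, dtype=float) for x in (R, G, B))
    total = R + G + B
    # Chromatic (normalized) coordinates R', G', B'
    r, g, b = R / total, G / total, B / total
    return {
        "ExG":   2 * g - r - b,                      # excess green
        "ExR":   1.4 * r - g,                        # excess red
        "GRVI":  (g - r) / (g + r),                  # green-red VI
        "MGRVI": (g**2 - r**2) / (g**2 + r**2),      # modified GRVI
        "RGBVI": (g**2 - b * r) / (g**2 + b * r),    # RGB VI
        "VARI":  (g - r) / (g + r - b),              # vis. atmospherically resistant
    }

# Example: a green canopy pixel (R=50, G=150, B=100)
vis = rgb_vegetation_indices(50, 150, 100)
```

Applied per pixel (or per segment mean) of the orthomosaic, these indices extend the three raw bands into the RGBVIs feature set used in the classification cases.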
Figure 5. User accuracy, producer accuracy, and F1 score of classification based on RF classifier.
Figure 6. User accuracy, producer accuracy, and F1 score of classification based on SVM classifier.
Figure 7. Comparison of average accuracy and maximum accuracy across different regions, regardless of the classifier (F1_max is the maximum value among the average F1 score values of each tree species under each classification case).
Figure 8. Comparison of the effects of features on classification accuracy improvement across different regions (F1_avg represents the average F1 score of each tree species).
Figure 9. Tree species map of three sites: YS (a), ZN (b), and YC (c).
Figure 10. Comparison of classification effects between RGBVIs and RGBFS (F1_avg represents the average F1 score of each tree species).
Figure 11. Comparative performance of RF versus SVM across study sites: (a) YS, (b) ZN, and (c) YC.
Table 1. Summary of UAV flight missions across study sites.

| Study Site | Flight Date | Flight Time | Number of Flights | Number of Images |
|---|---|---|---|---|
| YS | 31 July 2022 | 11:00–15:00 | 15 | 11,598 |
| ZN | 26 July 2022 | 16:30 | 3 | 1176 |
| YC | 14 July 2022 | 10:10 | 4 | 2796 |
Table 2. Band parameters of Phantom-Multispectral lens.

| Band | Wavelength (nm) | Bandwidth (nm) |
|---|---|---|
| Blue | 450 | 32 |
| Green | 560 | 32 |
| Red | 650 | 32 |
| Red-edge | 730 | 32 |
| Near-Infrared | 840 | 52 |
Table 3. Classes classified in our study with the number of training samples.

| Study Site | Species and Class | Acronym | No. of Training Samples | No. of Test Samples |
|---|---|---|---|---|
| YS | Robinia pseudoacacia | RP | 115 | 49 |
| YS | Platycladus orientalis | PO | 69 | 30 |
| YS | Pinus tabuliformis | PT | 67 | 29 |
| YS | Other | OT | 74 | 32 |
| YC | Robinia pseudoacacia | RP | 71 | 30 |
| YC | Quercus wutaishanica | QW | 74 | 32 |
| YC | Broadleaf tree | BL | 69 | 30 |
| YC | Other | OT | 73 | 31 |
| ZN | Robinia pseudoacacia | RP | 69 | 30 |
| ZN | Pinus tabuliformis | PT | 67 | 29 |
| ZN | Larix gmelinii | LG | 78 | 33 |
| ZN | Populus davidiana | PD | 65 | 28 |
| ZN | Hippophae rhamnoides | HR | 67 | 29 |
| ZN | Other | OT | 69 | 30 |
Table 4. Eight object-based classification case designs.

| Case | Classification Feature | Classifier |
|---|---|---|
| 1 | RGB | RF |
| 2 | RGB | SVM |
| 3 | RGB + R’ + G’ + B’ (RGBR’G’B’) | RF |
| 4 | RGB + R’ + G’ + B’ (RGBR’G’B’) | SVM |
| 5 | RGB + VIs (RGBVIs) | RF |
| 6 | RGB + VIs (RGBVIs) | SVM |
| 7 | RGB + FS (RGBFS) | RF |
| 8 | RGB + FS (RGBFS) | SVM |
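Each case pairs one feature set with one of the two classifiers. A minimal sketch of how such a case could be run with scikit-learn is shown below; this is an assumption for illustration (the paper does not state its software stack here), and the synthetic feature table and hyperparameters are purely hypothetical, not the study’s tuned values.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical feature table: rows = image objects, columns = one of the
# feature sets from Table 4 (e.g. R, G, B plus selected VIs).
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 6))        # stand-in for R, G, B, ExG, GRVI, RGBVI
y = rng.integers(0, 4, size=200)     # four classes, as at sites YS and YC

classifiers = {
    "RF":  RandomForestClassifier(n_estimators=200, random_state=0),
    # SVM benefits from feature scaling, so wrap it in a pipeline.
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0)),
}
for name, clf in classifiers.items():
    clf.fit(X, y)
    pred = clf.predict(X)
```

In practice the feature columns would come from the segmented orthomosaic, and accuracy would be assessed on the held-out test samples of Table 3 rather than the training data.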
Table 5. Overall accuracy and Kappa coefficient of classification based on RF classifier (cells give OA (%) / Kappa).

| Study Site | RGB | RGBR’G’B’ | RGBVIs | RGBFS |
|---|---|---|---|---|
| YS | 71.68 / 0.62 | 74.34 / 0.66 | 84.96 / 0.80 | 84.07 / 0.79 |
| ZN | 77.50 / 0.73 | 76.00 / 0.71 | 86.67 / 0.84 | 83.00 / 0.80 |
| YC | 83.85 / 0.78 | 86.15 / 0.81 | 87.69 / 0.84 | 86.92 / 0.83 |
Table 6. Overall accuracy and Kappa coefficient of classification based on SVM classifier (cells give OA (%) / Kappa).

| Study Site | RGB | RGBR’G’B’ | RGBVIs | RGBFS |
|---|---|---|---|---|
| YS | 68.14 / 0.58 | 78.76 / 0.71 | 83.19 / 0.77 | 81.42 / 0.75 |
| ZN | 78.00 / 0.74 | 81.50 / 0.78 | 84.00 / 0.81 | 82.00 / 0.78 |
| YC | 83.85 / 0.78 | 88.46 / 0.85 | 90.00 / 0.87 | 87.69 / 0.84 |
Table 7. Comparison of classification effects in different regions based on optimal features (F1_avg represents the average F1 score of each tree species).

| Classifier | Study Site | OA (%) | Kappa | F1_avg (%) |
|---|---|---|---|---|
| RF | YS | 84.96 | 0.80 | 84.49 |
| RF | ZN | 86.67 | 0.84 | 86.67 |
| RF | YC | 87.69 | 0.84 | 87.68 |
| SVM | YS | 83.19 | 0.77 | 82.95 |
| SVM | ZN | 84.00 | 0.81 | 84.04 |
| SVM | YC | 90.00 | 0.87 | 90.08 |
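The OA and Kappa values reported in Tables 5–7 follow the standard confusion-matrix definitions: OA is the proportion of correctly classified test samples, and Cohen’s Kappa corrects that agreement for the agreement expected by chance. A small self-contained sketch (the toy matrix is hypothetical, not from the study):

```python
import numpy as np

def oa_and_kappa(cm):
    """Overall accuracy and Cohen's Kappa from a confusion matrix
    (rows = reference classes, columns = predicted classes)."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                                 # observed agreement = OA
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2   # chance agreement
    return po, (po - pe) / (1 - pe)

cm = [[45, 5], [10, 40]]   # toy two-class confusion matrix
oa, kappa = oa_and_kappa(cm)
```

scikit-learn’s `cohen_kappa_score` computes the same statistic from label vectors instead of a pre-built matrix.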

Share and Cite

Li, Z.; Yu, S.; Ye, Q.; Zhang, M.; Yin, D.; Zhao, Z. Tree Species Classification Using UAV-Based RGB Images and Spectral Information on the Loess Plateau, China. Drones 2025, 9, 296. https://doi.org/10.3390/drones9040296