Article

Advancing Tree Species Classification with Multi-Temporal UAV Imagery, GEOBIA, and Machine Learning

1 Department of Land Surveying and Geo-Informatics, The Hong Kong Polytechnic University, Hong Kong SAR, China
2 Smart Sensing for Climate and Development, Center for Geographical Information System, University of the Punjab, Lahore 54590, Pakistan
3 Research Institute for Land and Space, The Hong Kong Polytechnic University, Hong Kong SAR, China
4 Center for Aviation and Space Exploration, King Fahd University of Petroleum & Minerals, Dhahran 31261, Saudi Arabia
5 Department of Architecture and City Design, King Fahd University of Petroleum & Minerals, Dhahran 31261, Saudi Arabia
6 Water and Agriculture Division, National Engineering Services (NESPAK), Lahore 53234, Pakistan
* Author to whom correspondence should be addressed.
Geomatics 2025, 5(3), 42; https://doi.org/10.3390/geomatics5030042
Submission received: 19 July 2025 / Revised: 29 August 2025 / Accepted: 4 September 2025 / Published: 7 September 2025

Abstract

Accurate classification of tree species is crucial for forest management and biodiversity conservation. Remote sensing technology offers a unique capability for classifying and mapping trees across large areas; however, accurately extracting and identifying individual trees remains challenging due to the limitations of available imagery and phenological variations. This study presents a novel integrated machine learning (ML) and Geographic Object-Based Image Analysis (GEOBIA) framework to enhance tree species classification in a botanical garden using multi-temporal unmanned aerial vehicle (UAV) imagery. High-resolution UAV imagery (2.3 cm/pixel) was acquired across four seasons (summer, autumn, winter, and early spring) to capture phenological changes. Spectral, textural, geometrical, and canopy height features were extracted using GEOBIA and then evaluated with four ML models: Random Forest (RF), Extra Trees (ET), eXtreme Gradient Boosting (XGBoost), and Support Vector Machine (SVM). Multi-temporal data significantly outperformed single-date imagery, with RF achieving the highest overall accuracy (86%, F1-score 0.85, kappa 0.83) compared to 57–75% for single-date classifications. Canopy height and textural features were dominant for species identification, indicating the importance of structural variations. Despite the limitations of a moderate sample size and a controlled botanical garden setting, this approach offers a robust framework for forest and urban landscape managers as well as remote sensing professionals, by optimizing UAV-based strategies for precise tree species identification and mapping to support urban and natural forest conservation.

1. Introduction

Trees are integral components of terrestrial ecosystems and vital landscape architects that significantly shape the Earth’s biosphere [1]. They shape not only forest ecosystems but also various other environments, including urban landscapes and agricultural systems. Different tree species contribute uniquely to ecosystem functions through their growth rates, wood qualities, carbon sequestration, nutrient cycling, and pollutant absorption capabilities. These contributions depend on the tree type, leaf structure, surface area, and physiological characteristics [2,3,4,5]. Accurate classification and identification of tree species are essential for effective forest management [6], urban green space management [7], and the sustainable development of ecosystems [8]. Tree species identification and classification have long remained a challenge [9,10]; in conventional forest management practice, tree species classification was primarily conducted through laborious, time-consuming, and inefficient ground surveys [10].
Remote sensing has revolutionized tree species identification and classification by enabling large-scale and non-invasive mapping [11]. Medium-resolution satellite datasets, such as Landsat, MODIS, and Sentinel-2, have been widely used for tree classification [12,13,14,15], but are constrained by coarse spatial and temporal resolution, which obscures the fine-scale canopy details and phenological variations critical for distinguishing species [16,17,18,19]. Unmanned aerial vehicle (UAV) photogrammetry has emerged in forestry to address these limitations [20,21,22] by providing centimetre-level spatial resolution and capturing detailed leaf-level and canopy structure information [23,24,25]. Recent studies have enhanced tree species mapping and identification using UAV imagery, both in dense forests [26,27,28,29,30,31] and for urban species [32,33,34,35,36]. Despite these advancements, many previous studies rely on single-date imagery, overlooking the temporal phenological changes that significantly influence species identification [7,37,38,39].
Furthermore, machine learning (ML) provides further improvements to remote sensing by using spectral, textural, and structural variables for species-specific tree identification [24,40,41,42]. Various machine learning techniques, such as support vector machine (SVM), random forest (RF), K-nearest neighbour (KNN), eXtreme Gradient Boosting (XGBoost), Extra Trees (ET), and neural networks (NN), have been used for tree species classification [7,43,44,45,46,47,48]. Many studies have also explored deep learning (DL) methods, particularly convolutional neural networks (CNNs) such as ResNet and Vision Transformers, for tree species detection and classification using satellite and UAV data, achieving accuracies of up to 98% [31,49,50,51,52,53,54]. While these DL methods are effective at automatically extracting features from complex multimodal data, they typically demand extensive training datasets and high computational resources. In contrast, this study emphasizes traditional ML models, valuing their interpretability and efficiency when working with smaller phenological UAV datasets. Single-date images are generally less sensitive to temporal features [55], resulting in misclassification, especially in heterogeneous ecosystems such as botanical gardens or urban forests. In this context, Geographic Object-Based Image Analysis (GEOBIA), a remote sensing image analysis technique that segments images into meaningful, homogeneous objects rather than analyzing individual pixels and enables classification based on spectral, geometrical, and textural attributes [56,57,58,59], emerges as a promising approach. An approach that combines ML, multi-temporal UAV imagery, and GEOBIA offers the potential to incorporate spectral, geometric, textural, and canopy-height characteristics from different seasons, but to date, only a few studies have evaluated it systematically across different ecosystems.
This study presents a novel framework integrating high-resolution multi-temporal UAV imagery with GEOBIA and ML to enhance tree species classification in a botanical garden. Using high-resolution (2.3 cm/pixel) imagery acquired across four seasons (summer, autumn, winter, and early spring), this study identifies the best ML model for tree species classification using different predictors and multi-temporal data. The work contributes to the advancement of UAV-based remote sensing methods for tree species identification, with a focus on achieving high efficiency in terms of both time and cost. The main objectives of this study are: (1) to evaluate the potential of multi-temporal high-resolution UAV imagery for the classification of tree species, (2) to identify the key features (spectral, geometric, textural, and canopy height) distinguishing the species, and (3) to evaluate the performance of different ML models (RF, ET, XGBoost, and SVM).

2. Materials and Methods

2.1. Study Area

The study area was selected based on several criteria: the trees should be representative of similar areas elsewhere, the site should be free of tall buildings, there should be no source of magnetic interference nearby, and the site should not lie within any no-fly zone for UAVs. Considering these criteria, the botanical garden of the University of the Punjab, Quaid-e-Azam campus (31.499547° N, 74.299938° E), situated in Lahore, Punjab province, was selected. Figure 1 shows the University of the Punjab’s botanical garden, which is characterized by a rich diversity of tree species and contributes significantly to the local tree flora [60]. The total area of the botanical garden is about 32 acres (~13 ha), with a wide variety of native and non-native trees. With more than four trees per 100 square metres, the garden’s density is representative of a dense urban forest. Lahore has four distinct seasons: winter (December to February), spring (March to May), summer (June to August), and autumn (September to November) [61]. High species diversity and abundance, combined with distinct seasonal variations, allow the phenological effects of various tree species on classification accuracy to be studied.

2.2. Methods

This study developed a novel framework by integrating multi-temporal UAV imagery, GEOBIA, and ML to classify tree species in a botanical garden. The workflow comprised two primary steps: (a) data acquisition and extraction of the tree crown with its attributes (spectral, geometrical, textural, and canopy height), and (b) evaluation of machine learning models for species classification using ground truth information. Spectral, textural, geometrical, and canopy height attributes were extracted for all tree crowns delineated using GEOBIA. ML models were used to identify the most important features and, finally, the top-performing classifier for accurately classifying tree species was selected. The process flow comprised data acquisition, processing, and classification, as illustrated in the schematic diagram (Figure 2).

2.3. Image Acquisition and Data Preprocessing

Multi-temporal imagery of the study area was acquired using a DJI (Da-Jiang Innovations, Shenzhen, Guangdong, China) Mavic 2 Pro UAV (Figure 3a) with a high-resolution Hasselblad RGB camera featuring a 1-inch CMOS sensor and a 28 mm equivalent focal length [62]. The data were acquired in four flights, one in each season (summer, autumn, winter, and early spring), following the same flight path developed for the study (Figure 3b). All datasets were acquired between 10:00 and 11:00 a.m. (Pakistan Standard Time) on clear-sky days with 80% overlap, a nadir view angle (90° camera angle), and a flying height of 100 m (Table 1). Figure 4 shows the location of the ground control points, which were surveyed using Real-Time Kinematic Global Navigation Satellite System (RTK GNSS) positioning to process and rectify the raw dataset in Agisoft Metashape Professional (ver. 1.7.6) software using the structure-from-motion photogrammetry method. A digital surface model (DSM) and high-resolution ortho-mosaic images with an average resolution of 2.3 cm per pixel were generated, and canopy height was extracted by subtracting the digital terrain model (DTM) from the DSM. While the high resolution enables detailed canopy capture, potential limitations include occlusion effects from overlapping canopies in dense areas and radiometric inconsistencies due to seasonal variations in sun angle, potentially affecting spectral feature stability across datasets.
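The canopy height layer used in the feature extraction is simply the per-pixel difference between these two surfaces. The following minimal sketch assumes the DSM and DTM have been exported from the photogrammetric processing as co-registered GeoTIFFs (file names are hypothetical) and computes the difference with rasterio.

```python
# Minimal sketch: derive the canopy height model (CHM) as DSM minus DTM.
# File names are hypothetical; both rasters are assumed to be co-registered
# exports from the photogrammetric processing.
import numpy as np
import rasterio

with rasterio.open("dsm_summer.tif") as dsm_src, rasterio.open("dtm_summer.tif") as dtm_src:
    dsm = dsm_src.read(1).astype("float32")
    dtm = dtm_src.read(1).astype("float32")
    profile = dsm_src.profile          # reuse the DSM's georeferencing

chm = np.clip(dsm - dtm, 0, None)      # canopy height; clip small negative artefacts

profile.update(dtype="float32", count=1)
with rasterio.open("chm_summer.tif", "w", **profile) as dst:
    dst.write(chm, 1)
```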

2.4. Field Reference Data

The botanical garden is divided into different sections, and a field survey was conducted in May 2024 to collect information about tree species from all sections using a comprehensive inventory method to assess tree species diversity and distribution [63,64]. This involved identifying and recording trees across the sections, with an emphasis on mature individuals. The data collection included recording the precise spatial location, local name, scientific name, and family name of each species. The most sampled species was Jatropha (Jatropha integerrima) from the Euphorbiaceae family, with 39 samples, followed by Teak Wood (Tectona grandis) from the Verbenaceae family, with 33 samples, and Amaltas (Cassia fistula) from the Fabaceae family, with 25 samples. Other species included Villayati Shishum (Millettia ovalifolia), Karenwood (Heterophragma adenophyllum), Alstonia (Alstonia scholaris), Chir Pine (Pinus roxburghii), and Dhrek/Bakain (Melia azedarach), with sample sizes ranging from 15 to 22, reflecting the diversity of the surveyed area. The collected site data were later converted into a GIS-based point shapefile for further analysis; Figure 4 shows the spatial distribution of the data, and a summary of the field inventory is provided in Table 2.

2.5. Tree Crown Extraction

Tree crowns were extracted using GEOBIA [65,66], following the method proposed by Onishi and Ise [24]. Multiresolution segmentation, incorporating the DSM, slope, and RGB bands of the UAV imagery, was employed in eCognition Developer (ver. 9) software (Trimble, Inc., Westminster, CO, USA) to generate image object primitives. Segmentation parameters (Scale = 100, Compactness = 0.5, and Shape = 0.2) were selected using a trial-and-error approach, with values adjusted iteratively to achieve optimal segmentation results. These settings effectively reduced misclassification in dense foliage and enhanced the accuracy of downstream feature extraction, as validated by visual comparison with the field observations. To exclude low-lying vegetation and ground-level segments, the DTM and DSM were utilized, and further refinement was achieved by removing non-tree segments using rule sets based on spectral, textural, and canopy height features. The resulting extracted crown covers are illustrated in Figure 5.
To improve the accuracy of tree species classification and understand the effect of different features on the classification result, textural, contextual, shape, and geometric features were used in addition to spectral features [38,67]. Spectral features included the standard deviation and mean of the RGB bands of tree crowns. Eight geometric features were calculated: asymmetry, border index, compactness, density, elliptic fit, main direction, roundness, and shape index. The mean and standard deviation of canopy height, along with the slope, were computed as terrain features. Two types of textural features, grey level co-occurrence matrix (GLCM) and grey level difference vectors (GLDV), were calculated for every tree object using all pixels within the segmented object [68]. Eight GLCM and two GLDV features, as outlined in Table 3, were extracted for every tree object. A label-encoding technique was used to convert tree species names into a numeric format, facilitating machine readability. Each tree species in the label data was assigned a unique value ranging from 0 to 7 for further processing.
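The per-object GLCM and GLDV measures in this study were computed in eCognition. As an illustrative sketch only (not the software actually used), the fragment below shows an equivalent GLCM calculation for a single crown chip with scikit-image, together with the label encoding of the eight species names using scikit-learn; the random chip stands in for a real crown raster.

```python
# Illustrative sketch only: the per-object GLCM/GLDV features in this study were
# computed in eCognition. Here an equivalent GLCM calculation is shown for one
# (synthetic) crown chip with scikit-image, plus label encoding of species names.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.preprocessing import LabelEncoder

crown_chip = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # stand-in for a crown raster

glcm = graycomatrix(crown_chip, distances=[1], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True)
texture = {prop: graycoprops(glcm, prop).mean()
           for prop in ("contrast", "dissimilarity", "homogeneity",
                        "energy", "correlation", "ASM")}

# Label encoding: each species name is mapped to a unique integer from 0 to 7.
species = ["Jatropha integerrima", "Tectona grandis", "Cassia fistula",
           "Millettia ovalifolia", "Heterophragma adenophyllum",
           "Alstonia scholaris", "Pinus roxburghii", "Melia azedarach"]
encoder = LabelEncoder()
codes = encoder.fit_transform(species)     # alphabetical order determines the codes
```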

2.6. Machine Learning Models

Four supervised machine learning models (Random Forest (RF), Extra Trees (ET), eXtreme Gradient Boosting (XGBoost), and Support Vector Machine (SVM)) were implemented using PyCaret in Python (ver. 3.9) [67]. The sample data were divided into 70:30 training and testing datasets to balance model training and evaluation, given the dataset size [45]. To address class imbalance, the Synthetic Minority Over-sampling Technique (SMOTE) was applied to the training set. This technique generates synthetic samples for minority classes by interpolating between existing minority instances and their nearest neighbours [69].
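A minimal sketch of this sampling strategy is shown below, assuming the per-crown features and encoded labels have been exported to a table (file and column names are hypothetical). SMOTE from imbalanced-learn is applied to the training split only, so synthetic samples do not leak into the test set; PyCaret's setup() exposes a comparable fix_imbalance option.

```python
# Minimal sketch of the sampling strategy, assuming the per-crown features and
# encoded species labels have been exported to a table (names hypothetical).
import pandas as pd
from sklearn.model_selection import train_test_split
from imblearn.over_sampling import SMOTE

df = pd.read_csv("crown_features.csv")
X = df.drop(columns=["species_code"])
y = df["species_code"]

# 70:30 stratified split; SMOTE is applied to the training portion only so that
# synthetic minority samples never leak into the test set.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.30, stratify=y, random_state=42)

X_train_bal, y_train_bal = SMOTE(random_state=42).fit_resample(X_train, y_train)
```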

2.6.1. Random Forest (RF)

RF is one of the most widely used supervised machine learning models, as it can solve both classification and regression problems with high accuracy [70,71,72,73]. It is composed of numerous decision trees, each of which returns a class prediction [74]. The predictions of the individual trees are aggregated, and the class receiving the majority of votes is taken as the final result. RF has been applied extensively in forestry for tree species identification and classification using medium-resolution satellite imagery such as Landsat and Sentinel-2 [19,75,76,77,78], as well as high-resolution imagery such as WorldView [46,79,80,81]. However, its application to individual tree species identification using high-resolution UAV imagery combined with GEOBIA has been less common [47,48], which underscores the novelty of our integrated approach in this study. Hyperparameters were optimized using PyCaret’s grid search, resulting in n_estimators = 100, min_samples_leaf = 1, and min_samples_split = 2. Split quality was measured with the ‘gini’ criterion. To increase model diversity and avoid overfitting, the maximum number of features considered at each split (‘max_features’) was set to the square root of the total number of available features (sqrt).
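The reported configuration maps onto scikit-learn's RandomForestClassifier, the estimator behind PyCaret's RF model; a minimal sketch, reusing the balanced training split from the sketch in Section 2.6:

```python
# Sketch of the reported RF configuration with scikit-learn's RandomForestClassifier;
# training data come from the SMOTE step above.
from sklearn.ensemble import RandomForestClassifier

rf = RandomForestClassifier(
    n_estimators=100,        # number of trees
    criterion="gini",        # split-quality measure
    min_samples_leaf=1,
    min_samples_split=2,
    max_features="sqrt",     # sqrt of the feature count considered at each split
    random_state=42)

rf.fit(X_train_bal, y_train_bal)
y_pred_rf = rf.predict(X_test)
```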

2.6.2. Extra Trees (ET)

ET, also known as Extremely Randomized Trees, is another supervised tree-based ensemble model for classification and regression. It selects split points at random and draws a random subset of the features at each split [82]. Its main benefits are computational speed and low processing cost, and it requires less parameter tuning and optimization than other ensembles such as Random Forest [83]. For the input parameters, n_estimators was set to 100, and min_samples_leaf and min_samples_split were 1 and 2, respectively. Split quality was measured with the ‘gini’ criterion, and the parameter ‘max_features’ was set to sqrt.
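A corresponding sketch with scikit-learn's ExtraTreesClassifier and the same reported parameter values, again assuming the balanced training split from Section 2.6:

```python
# Corresponding sketch for Extra Trees with the same reported parameter values.
from sklearn.ensemble import ExtraTreesClassifier

et = ExtraTreesClassifier(
    n_estimators=100,
    criterion="gini",
    min_samples_leaf=1,
    min_samples_split=2,
    max_features="sqrt",
    random_state=42)

et.fit(X_train_bal, y_train_bal)
y_pred_et = et.predict(X_test)
```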

2.6.3. eXtreme Gradient Boost (XGBoost)

XGBoost is an extremely powerful supervised machine learning model, known for its speed and strong performance. It employs gradient boosting, building an ensemble of weak learners (principally decision trees) to create an accurate predictive model. A sparsity-aware approach to handling sparse data and a weighted quantile-based framework for approximate learning enhance its performance and make it well suited for classification [84,85]. For hyperparameter optimization, 50 trees were used, as XGBoost typically performs well with fewer trees because its sequential boosting process corrects the errors of prior trees [84]. The learning rate was set to 0.15 to balance convergence speed and model performance. To control overfitting and model complexity, max_depth and min_child_weight were set to 11 and 4, respectively.
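A sketch of this configuration with the xgboost Python package follows; the multi-class objective is an assumption made explicit here, since only the tuned tree count, learning rate, depth, and child weight are reported.

```python
# Sketch of the reported XGBoost configuration using the xgboost Python package;
# the multi-class objective is an assumption, as only the tuned values are reported.
from xgboost import XGBClassifier

xgb = XGBClassifier(
    n_estimators=50,              # 50 boosting rounds
    learning_rate=0.15,
    max_depth=11,
    min_child_weight=4,
    objective="multi:softprob",   # eight-species classification
    random_state=42)

xgb.fit(X_train_bal, y_train_bal)
y_pred_xgb = xgb.predict(X_test)
```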

2.6.4. Support Vector Machine (SVM)

In the 1990s, Vapnik introduced a non-parametric supervised machine learning model known as SVM [86]. It separates two classes by finding the hyperplane that best separates the closest data points belonging to each class [87]. Depending on the number of features in the input data, this decision boundary appears as a line in two-dimensional space or as a hyperplane in n-dimensional space [88,89]. Data were classified using an SVM with a linear kernel, with ‘none’ assigned as the class_weight for dealing with imbalanced datasets. ‘Invscaling’ was used as the learning_rate for efficient training and convergence, and max_iter was set to 1000 for efficient learning, to avoid overfitting, and to balance regularization and model complexity. Ridge (L2) regularization was applied as the penalty.
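The combination of a linear kernel, an 'invscaling' learning rate, an L2 penalty, and a max_iter budget matches scikit-learn's SGDClassifier with a hinge loss, i.e. a linear SVM trained by stochastic gradient descent, which is assumed here to be the underlying estimator; eta0 is an additional assumed value, since 'invscaling' requires a non-zero initial rate.

```python
# Hedged sketch: the settings described above are mapped onto SGDClassifier with a
# hinge loss (a linear SVM trained by stochastic gradient descent); eta0 is assumed.
from sklearn.linear_model import SGDClassifier

svm = SGDClassifier(
    loss="hinge",                 # linear SVM objective
    penalty="l2",                 # ridge (L2) regularization
    learning_rate="invscaling",
    eta0=0.01,                    # assumed initial learning rate
    max_iter=1000,
    class_weight=None,
    random_state=42)

svm.fit(X_train_bal, y_train_bal)
y_pred_svm = svm.predict(X_test)
```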

2.7. Model Scenarios

Classification was performed in two steps, focusing on the selection of canopy height, spectral, geometrical, and textural attributes. Single-date datasets were first classified to analyze classification performance and classification patterns in each season. Second, the seasonal datasets were combined into a single comprehensive multi-temporal dataset, which was classified using the same four machine learning models. This approach provides a comprehensive assessment of model accuracy and allows tree species classification accuracy to be compared between single-date and multi-temporal datasets.
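One plausible reading of the multi-temporal scenario, in which the per-crown feature tables from the four seasonal flights are joined on a common crown identifier with season suffixes, is sketched below (all file and column names are hypothetical).

```python
# Sketch of the multi-temporal scenario under the assumption that per-crown feature
# tables from the four seasonal flights are joined on a common crown identifier,
# with season suffixes keeping the feature columns distinct (names hypothetical).
import pandas as pd

seasons = ["summer", "autumn", "winter", "spring"]
tables = [pd.read_csv(f"crown_features_{s}.csv").set_index("crown_id").add_suffix(f"_{s}")
          for s in seasons]

multi_temporal = pd.concat(tables, axis=1, join="inner")   # crowns present in all seasons
labels = pd.read_csv("crown_species.csv").set_index("crown_id")["species_code"]
dataset = multi_temporal.join(labels, how="inner")
```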

2.8. Accuracy Assessment

Accuracy assessment is essential to confirm the effectiveness of the proposed method and to validate the different machine learning classifications. To assess how well a supervised learning model performs, predictions are generated on the test data and compared to the actual labels in a confusion matrix. The performance of the models was measured with several performance indexes, i.e., overall accuracy (OA), precision, recall, F1-score, kappa, and Matthews Correlation Coefficient (MCC) (Equations (1)–(5)). Confusion matrices were plotted for each machine learning model to further analyze model performance based on different features. To assess potential multicollinearity and identify key discriminative features, Shapley Additive exPlanations (SHAP) values were computed for all the models, revealing the mean absolute contribution of each feature [90]. The analysis of these outputs assessed the efficacy and computational performance of the models:
$$\text{Precision} = \frac{TP}{TP + FP} \tag{1}$$

$$\text{Recall} = \frac{TP}{TP + FN} \tag{2}$$

$$\text{F1-score} = 2 \times \frac{\text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}} \tag{3}$$

$$\text{Kappa} = \frac{P_o - P_e}{1 - P_e} \tag{4}$$

where $P_o$ and $P_e$ are the observed and expected agreement, respectively, and

$$\text{MCC} = \frac{TP \times TN - FP \times FN}{\sqrt{(TP + FP)(TP + FN)(TN + FP)(TN + FN)}} \tag{5}$$

where TP, TN, FP, and FN denote true positives, true negatives, false positives, and false negatives, respectively.
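A sketch of this assessment with scikit-learn's metrics module, mirroring Equations (1)–(5); weighted averaging over the eight classes is an assumption for the multi-class precision, recall, and F1-score.

```python
# Sketch of the accuracy assessment with scikit-learn, mirroring Equations (1)-(5);
# weighted averaging over the eight classes is assumed for the multi-class scores.
from sklearn.metrics import (accuracy_score, precision_score, recall_score, f1_score,
                             cohen_kappa_score, matthews_corrcoef, confusion_matrix)

y_pred = rf.predict(X_test)                        # e.g. the fitted RF from Section 2.6.1

oa    = accuracy_score(y_test, y_pred)             # overall accuracy
prec  = precision_score(y_test, y_pred, average="weighted")
rec   = recall_score(y_test, y_pred, average="weighted")
f1    = f1_score(y_test, y_pred, average="weighted")
kappa = cohen_kappa_score(y_test, y_pred)
mcc   = matthews_corrcoef(y_test, y_pred)
cm    = confusion_matrix(y_test, y_pred)           # per-species error structure
```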

3. Results

3.1. Evaluating Tree Classification Across Single-Date Imagery Using Different Machine Learning Models

Initially, each single-date UAV dataset was classified individually using the four machine learning models (RF, ET, XGBoost, and SVM), and the classification results for tree species were evaluated. As shown in Figure 6, Melia azedarach was classified effectively by all machine learning models during the summer. Pinus roxburghii exhibited good classification performance across all four models in autumn and showed notable classification accuracy in winter as well. Jatropha integerrima was classified well across all four models and all seasons. However, the OA of single-date classification remained relatively low, with most species being misclassified.
To assess the performance of the machine learning models, they were first examined on single-date imagery. The results in Table 4 show that during the summer, RF outperformed the other models, achieving an OA of 75% and an F1-score of 0.73. In autumn, the OA of RF decreased, while ET performed better than the other models (XGBoost and SVM), achieving an OA of 67% and an F1-score of 0.62. For winter, RF performed well, with an OA of 68% and an F1-score of 0.66. XGBoost showed good performance on the early spring data, with an OA of 64% and an F1-score of 0.62. Overall, the OA remained relatively low, ranging from 57% to 75%, due to the misclassification of trees. However, the summer imagery showed better accuracy across all four models compared to the other seasons. Figure 6 illustrates the classification accuracy for individual tree species, highlighting how species such as Pinus roxburghii and Heterophragma adenophyllum achieve high classification accuracy in certain seasons due to phenological variations.

3.2. Assessment of Tree Species Classification from Multi-Temporal Data

All the temporal data were merged, and the models were trained and tested on the combined features of the multi-temporal data. The effectiveness of the four machine learning models (RF, ET, XGBoost, and SVM) for tree species classification using multi-temporal UAV data was assessed, with a focus on OA. The results indicate that the multi-temporal UAV data, considering all spectral, geometrical, and textural features together, achieved higher OA than the single-season data: 86% for RF, followed by ET at 84%, XGBoost at 83%, and SVM at 75% (Figure 7a). RF outperformed ET, SVM, and XGBoost with a maximum OA of 86%, F1-score of 0.85, and kappa of 0.83 (Table 5, Figure 7b). The OA range increased from 57–75% for single-date data to 77–86% for multi-temporal data, demonstrating an improvement in the classification results. This enhancement underscores the importance of temporal data in accurately classifying tree species.
The confusion matrix provides a comprehensive overview of a model’s performance by displaying the true positive, true negative, false positive, and false negative predictions. The classification accuracy of each tree species using multi-temporal data varies across species, highlighting strengths and weaknesses for different tree species (Figure 8). For Pinus roxburghii, the RF classifier showed high precision, indicating its effectiveness in correctly identifying this species with few false negatives and false positives. Millettia ovalifolia was not well classified by RF and had lower precision. The ET classifier performed very well and showed high precision across several species, particularly Heterophragma adenophyllum and Pinus roxburghii, making very accurate positive predictions. XGBoost exhibited high precision for Tectona grandis and Pinus roxburghii, and the SVM classifier showed perfect precision for Jatropha integerrima and Tectona grandis. The RF and ET classifiers showed balanced performance across different species, making them effective models for tree species classification using multi-temporal data. To address class imbalance, SMOTE oversampling was applied to the training set to reduce bias toward majority classes. Table 6 presents the per-class precision, recall, and F1-scores for the RF model using multi-temporal data, highlighting the effectiveness of SMOTE in improving performance across all classes. Minority classes, such as Heterophragma adenophyllum (F1-score of 0.59) and Pinus roxburghii (F1-score of 0.67), still present some challenges, but the weighted F1-score aligns closely with the OA, indicating no significant overfitting to the majority classes.
Feature importance for different ML models was assessed using SHAP analysis to identify the most significant contributors to the tree species classification with multi-temporal data. The top six variables across all models highlight how different feature types played distinct roles in classification (Figure 9). For the RF classifier, canopy height is the most influential feature in the classification, followed by the texture feature. In the case of the ET classifier, GLDV angular second moment is the most significant, with canopy height and texture features also playing important roles. For the XGBoost classifier, spectral features are the primary contributors, followed by canopy height and texture features. Similarly, for the SVM classifier, spectral features are important, with texture features also being significant. These results indicate that canopy height plays a crucial role in multi-temporal data classification, followed by textural and spectral features.
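As a hedged illustration of how such a ranking can be produced (the study's exact SHAP workflow is not shown), the sketch below applies shap.TreeExplainer to the fitted RF model and aggregates mean absolute SHAP values per feature; the list-versus-array handling accounts for differences between shap versions.

```python
# Hedged sketch of a SHAP-based ranking: TreeExplainer is applied to the fitted RF
# and mean absolute SHAP values are aggregated per feature.
import numpy as np
import shap

explainer = shap.TreeExplainer(rf)
shap_values = explainer.shap_values(X_test)

# Stack to (classes, samples, features) whether shap returns a list of per-class
# arrays or a single 3-D array with classes on the last axis.
sv = np.stack(shap_values) if isinstance(shap_values, list) else np.moveaxis(shap_values, -1, 0)
importance = np.abs(sv).mean(axis=(0, 1))          # mean |SHAP| per feature

ranking = sorted(zip(X_test.columns, importance), key=lambda t: t[1], reverse=True)
print(ranking[:6])                                 # top contributors, as in Figure 9
```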
The RF classifier was considered robust based on its highest accuracy, and it was used to classify all tree objects in the study area using spectral, geometrical, and textural features of multi-temporal data. Figure 10 shows a map of the spatial distribution of dominant tree species in a portion of the botanical garden.

4. Discussion

This research validated the importance of multi-temporal UAV imagery integrated with GEOBIA and ML for high-accuracy tree species classification in a botanical garden. Taking advantage of high spatial resolution (2.3 cm/pixel) imagery from four separate seasonal datasets (summer, autumn, winter, and early spring), the RF classifier showed a significant increase in accuracy when moving from single-date to multi-temporal data. This is consistent with many existing studies on the importance of multi-temporal imagery for capturing phenological characteristics [91,92,93,94,95]. The enhancement could be due to a combination of factors. First, the phenological behaviour of trees fluctuates seasonally, highlighting the necessity of incorporating seasonal variations to improve classification. This finding is consistent with several studies that utilize the phenological features of trees, based on multi-temporal information, to improve classification performance [76,94,96]. Second, the temporal dimension reduces the errors and noise associated with single-date images: through the use of multi-temporal imagery, anomalies can be averaged out and the signals of particular tree species enhanced [97]. The choice of season is also crucial. In the summer months the vegetation is vigorous, as the monsoon results in dense green cover; the resulting lush, fully foliated canopies are critical for classification, and features from the summer dataset markedly improve model capability.
In addition, feature importance was evaluated using SHAP analysis to investigate the contribution of different variables. Canopy height is an effective characteristic for classification; accordingly, some researchers have applied canopy height to improve precision [71]. Canopy height is also a key factor in tree segmentation [98], and tree height-related metrics derived from it can be among the most significant predictors in classification [99]. This study aligns with these findings, as canopy height was the most significant feature influencing classification accuracy. Textural features are important predictors too, particularly in seasonal analyses. Texture features can help to distinguish trees with similar spectral attributes, resulting in increased classification accuracy [100]. Deur et al. [79] also improved classification accuracy by introducing textural features alongside spectral bands. Consistent with this, in our findings the textural features were the second most important variables in tree classification. Guo et al. [38] also used several features for UAV image classification, reporting that geometric features had a negative impact on their classification accuracy, potentially due to variability in crown shapes within species. To study this result in more detail, we conducted an additional classification with all four machine learning classifiers, excluding the geometric features from the input samples. The OA remained consistent with the original, indicating that the geometric features had no adverse effect on model performance in this study and made little difference to the classification outcomes.
Second, to assess how well the machine learning models classified tree species, their performance was first evaluated on single-date imagery and then on the multi-temporal dataset. According to our results, the RF model performed better on the multi-temporal dataset than the other models, owing to its non-parametric structure and its ability to handle imbalanced datasets, which improves its predictive capability. These inferences agree with prior tree species classification studies, which consistently show that RF yields more accurate image classification. Adugna et al. [78] compared RF and SVM models using satellite images and found that RF performed better than SVM, achieving an OA of 0.86 compared to 0.84 for SVM. The authors found that RF is more effective in managing large input datasets, where SVM often struggles. Similarly, Herbryn-Baidy and Rees [101] assessed various machine learning models for land cover classification using Sentinel-2 data, showing that the RF algorithm achieved the highest OA; however, due to the low resolution of the imagery, the accuracy of forest classification was relatively low, at just 77%. For high-resolution satellite imagery, Jombo et al. [102] utilized WorldView-2 satellite data with RF to classify tree species in a heterogeneous environment, achieving an OA of 97%. Man et al. [103] applied RF to multispectral UAV imagery for urban tree species classification, demonstrating strong performance in heterogeneous environments. In our study, RF likewise outperformed the other algorithms, clearly establishing it as the best option for tree species classification.
The overall framework’s high accuracy (86% with RF) demonstrates the power of multi-temporal UAV imagery for capturing phenological variation, aligning with Jiang et al. [94], who emphasize seasonal data for species mapping. Accuracy varies by species across seasons; our results generally show the highest accuracy in the summer months for all ML algorithms, whereas Pinus roxburghii exhibits strong performance in autumn and winter due to its distinct phenological characteristics. This highlights a key contribution of this research: acquiring UAV imagery in optimal months tailored to specific tree species maximizes classification accuracy. This temporal flexibility of UAV imagery, combined with high spatial resolution, provides a significant advantage over available medium- and high-resolution satellite imagery. However, a limitation of this study is the focus on a botanical garden, which, while diverse, may not fully represent natural forest dynamics. The moderate sample size may limit generalizability to larger, more heterogeneous forests. Practical challenges, such as heavy cloud cover during the monsoon season, may reduce image quality, hinder drone operations, and limit scalability in cloud-prone regions. While homogeneous cloud cover can sometimes provide favorable conditions for data acquisition by minimizing shadows, it may also reduce the available solar flux, posing additional limitations. The computational cost of data acquisition and processing may also constrain broader applications. Future research should consider integrating other sensors with UAVs, such as multispectral and hyperspectral sensors, to capture spectral variations more accurately within specific classes. Furthermore, LiDAR technology can be employed with UAVs to estimate tree height reliably due to its ability to penetrate canopies [10].

5. Conclusions

This study provides a novel framework that integrates high-resolution UAV imagery with GEOBIA and ML for precise tree species identification and classification. Multi-temporal imagery (2.3 cm/pixel) was collected across four seasons (summer, autumn, winter, and early spring) at the University of the Punjab’s botanical garden. Four different ML models (RF, ET, XGBoost, and SVM) were tested and compared to classify the eight tree species, including Jatropha integerrima and Tectona grandis. The results demonstrate that RF achieved the highest performance, with an OA of 86% (F1-score 0.85, kappa 0.83). This is an improvement over single-date imagery classification, whose overall accuracy ranged from 57% to 75%. These results highlight the essential role of multi-temporal data in capturing the structural and phenological variations needed for accurate species classification. This study highlights the importance of canopy height and textural features as key predictors in reducing misclassification, aligning with prior research that emphasizes the use of structural and phenological data in remote sensing applications. The integration of spectral, textural, and geometrical features extracted through GEOBIA, combined with the robustness of the RF classifier, provides an efficient approach for tree species classification in diverse ecosystems. These findings have significant implications for urban green space management, sustainable forest management, and biodiversity conservation, offering urban planners and remote-sensing experts a high-precision tool to monitor and manage tree species.
Despite these promising results, this study has several limitations that remain to be addressed. The research was conducted in the controlled environment of a botanical garden, which may not fully capture the complexity of a natural forest ecosystem. The UAV imagery used was limited to three RGB bands, potentially restricting spectral differentiation. Class imbalance in the dataset may further limit generalizability, potentially introducing biases in biodiversity assessments despite SMOTE mitigation. A moderate sample size may reduce robustness in heterogeneous environments. Integrating advanced UAV payloads, such as multispectral or hyperspectral sensors or LiDAR, would enhance the characterization of the spectral and structural properties of the trees. Expanding this framework to larger and more heterogeneous forest ecosystems would further validate its robustness. This work advances the growing field of UAV-based remote sensing by demonstrating the effectiveness of ML analysis and GEOBIA applied to multi-temporal UAV imagery for precise tree species classification. The findings of this study could contribute to the development of robust and adaptable methodologies for more accurate and efficient monitoring of tree species, thereby supporting global efforts in ecosystem preservation and sustainable land management.

Author Contributions

Conceptualization, S.A., H.Q., M.U. and X.D.; methodology, H.Q., M.U. and S.A.; software, M.U. and H.Q.; literature review, H.Q.; writing—original draft preparation, H.Q., M.U. and S.A.; formal analysis, H.Q., M.U., S.A., U.A., M.B. and N.S.; data curation, M.U. and H.Q.; validation, H.Q. and H.M.K.; supervision, X.D., S.A. and M.U.; resources, X.D. and S.A.; writing—review and editing, H.Q., S.A., X.D., M.U. and M.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data is available on request from the authors.

Acknowledgments

The authors would like to express their sincere gratitude to all those who contributed to this work. We thank colleagues and collaborators who assisted in field surveys, data acquisition, and provided invaluable administrative and logistical support. The authors also acknowledge the support drawn from the Research Grants Council (RGC) of the Hong Kong Special Administrative Region (PolyU 152318/22E, 152344/23E), the National Science Foundation of China (42330717), the Innovative Technology Commission (ITC) (K-BBY1 – Smart Railway Technology), the Guangdong–Hong Kong Joint Laboratory for Marine Infrastructure (Hong Kong, China), and the Research Postgraduate Scholarship awarded by The Hong Kong Polytechnic University (PolyU) for doctoral study.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
GEOBIA: Geographic object-based image analysis
UAV: Unmanned aerial vehicle
RF: Random Forest
ET: Extra Trees
XGBoost: eXtreme Gradient Boosting
SVM: Support Vector Machine
KNN: K-nearest neighbour
NN: Neural network
OA: Overall accuracy
DJI: Da-Jiang Innovations
CMOS: Complementary metal-oxide-semiconductor
PST: Pakistan Standard Time
RTK: Real-time kinematic
GNSS: Global Navigation Satellite System
GCP: Ground control point
DSM: Digital surface model
GIS: Geographic information system
RGB: Red, green and blue
DTM: Digital terrain model
GLCM: Grey-level co-occurrence matrix
GLDV: Grey-level difference vectors
MCC: Matthews correlation coefficient

References

  1. Kozlowski, G.; Song, Y.G. Importance, Tools, and Challenges of Protecting Trees. Sustainability 2022, 14, 13107. Available online: https://www.mdpi.com/2071-1050/14/20/13107/htm (accessed on 30 December 2024). [CrossRef]
  2. Schnabel, F.; Schwarz, J.A.; Dănescu, A.; Fichtner, A.; Nock, C.A.; Bauhus, J.; Potvin, C. Drivers of productivity and its temporal stability in a tropical tree diversity experiment. Glob. Chang. Biol. 2019, 25, 4257–4272. Available online: https://onlinelibrary.wiley.com/doi/full/10.1111/gcb.14792 (accessed on 30 December 2024). [CrossRef]
  3. Fan, Z.P.; Wang, Q.; Li, F.Y.; Sun, X.K. Absorption Ability of Different Tree Species to S, Cl and Heavy Metals in Urban Forest Ecosystem. Adv. Mater. Res. 2012, 14, 48–53. Available online: https://www.scientific.net/AMR.518-523.48 (accessed on 12 May 2024). [CrossRef]
  4. Zhang, W.-K.; Wang, B.; Niu, X. Study on the Adsorption Capacities for Airborne Particulates of Landscape Plants in Different Polluted Regions in Beijing (China). Int. J. Environ. Res. Public Health 2015, 12, 9623–9638. [Google Scholar] [CrossRef]
  5. Wang, K.; Wang, T.; Liu, X. A Review: Individual Tree Species Classification Using Integrated Airborne LiDAR and Optical Imagery with a Focus on the Urban Environment. Forests 2018, 10, 1. Available online: https://www.mdpi.com/1999-4907/10/1/1 (accessed on 12 May 2025). [CrossRef]
  6. Naseri, M.H.; Shataee Jouibary, S.; Habashi, H. Analysis of forest tree dieback using UltraCam and UAV imagery. Scand. J. For. Res. 2023, 38, 392–400. Available online: https://www.tandfonline.com/doi/full/10.1080/02827581.2023.2231349 (accessed on 30 December 2024). [CrossRef]
  7. Wang, Y.; Wang, J.; Chang, S.; Sun, L.; An, L.; Chen, Y.; Xu, J. Classification of street tree species using uav tilt photogrammetry. Remote Sens. 2021, 13, 216. [Google Scholar] [CrossRef]
  8. Wu, J.; Man, Q.; Yang, X.; Dong, P.; Ma, X.; Liu, C.; Han, C. Fine Classification of Urban Tree Species Based on UAV-Based RGB Imagery and LiDAR Data. Forests 2024, 15, 390. [Google Scholar] [CrossRef]
  9. Chiang, S.H.; Valdez, M. Tree Species Classification by Integrating Satellite Imagery and Topographic Variables Using Maximum Entropy Method in a Mongolian Forest. Forests 2019, 10, 961. Available online: https://www.mdpi.com/1999-4907/10/11/961/htm (accessed on 23 January 2025).
  10. Qin, H.; Zhou, W.; Yao, Y.; Wang, W. Individual tree segmentation and tree species classification in subtropical broadleaf forests using UAV-based LiDAR, hyperspectral, and ultrahigh-resolution RGB data. Remote Sens. Environ. 2022, 280, 113143. [Google Scholar] [CrossRef]
  11. Fassnacht, F.E.; Latifi, H.; Stereńczak, K.; Modzelewska, A.; Lefsky, M.; Waser, L.T.; Straub, C.; Ghosh, A. Review of studies on tree species classification from remotely sensed data. Remote Sens. Environ. 2016, 186, 64–87. Available online: https://www.sciencedirect.com/science/article/pii/S0034425716303169 (accessed on 30 November 2022). [CrossRef]
  12. Manzoor, A.T.; Habib, N.; Abbas, S. Performance of NDVI and GOSIF in monitoring vegetation responses to rainfall in a desert ecosystem. Remote Sens. Appl. Soc. Environ. 2025, 39, 101621. Available online: https://www.sciencedirect.com/science/article/pii/S2352938525001740 (accessed on 25 August 2025). [CrossRef]
  13. Usman, M.; Gul, A.A.; Abbas, S.; Rabbani, U.; Irteza, S.M. Identifying morphological hotspots in large rivers by optimizing image enhancement. Remote Sens. Lett. 2023, 14, 1173–1185. [Google Scholar] [CrossRef]
  14. Abbas, S.; Akhter, A.M.; Mushtaque, M.; Usama, R.M.; Rasool, A.; Umar, M.; Mahmood, M.U.; Malik, M. Enhancing vegetation cover growth through water spreading bunds and dry afforestation in arid rangelands: Integrating remote sensing indicators with field insights. Sci. Total Environ. 2025, 978, 179382. Available online: https://www.sciencedirect.com/science/article/pii/S0048969725010186 (accessed on 27 August 2025). [CrossRef]
  15. Umar, M.; Abbas, S. Advancing mangrove mapping with the integrated mangrove index (IMI) and the role of tidal dynamics. Int. J. Remote Sens. 2025, 46, 4409–4429. [Google Scholar] [CrossRef]
  16. Reese, H.M.; Lillesand, T.M.; Nagel, D.E.; Stewart, J.S.; Goldmann, R.A.; Simmons, T.E.; Chipman, J.W.; Tessar, P.A. Statewide land cover derived from multiseasonal Landsat TM data: A retrospective of the WISCLAND project. Remote Sens. Environ. 2002, 82, 224–237. [Google Scholar] [CrossRef]
  17. Zhu, X.; Liu, D. Accurate mapping of forest types using dense seasonal landsat time-series. ISPRS J. Photogramm. Remote Sens. 2014, 96, 1–11. [Google Scholar] [CrossRef]
  18. Porwal, M.C.; Pant, D.N. Forest cover type and landuse mapping using landsat Thematic Mapper False colour Composite a case study for chakrata in western himalayas U.P. J. Indian Soc. Remote Sens. 1989, 17, 33–40. [Google Scholar] [CrossRef]
  19. Persson, M.; Lindberg, E.; Reese, H. Tree Species Classification with Multi-Temporal Sentinel-2 Data. Remote Sens. 2018, 10, 1794. Available online: https://www.mdpi.com/2072-4292/10/11/1794 (accessed on 1 July 2023). [CrossRef]
  20. Hassaan, O.; Nasir, A.K.; Roth, H.; Khan, M.F. Precision Forestry: Trees Counting in Urban Areas Using Visible Imagery based on an Unmanned Aerial Vehicle. IFAC-PapersOnLine 2016, 49, 16–21. Available online: https://www.sciencedirect.com/science/article/pii/S2405896316315658 (accessed on 11 June 2023). [CrossRef]
  21. Detka, J.; Coyle, H.; Gomez, M.; Gilbert, G.S. A Drone-Powered Deep Learning Methodology for High Precision Remote Sensing in California’s Coastal Shrubs. Drones 2023, 7, 421. [Google Scholar] [CrossRef]
  22. Bian, L.; Zhang, H.; Ge, Y.; Čepl, J.; Stejskal, J.; EL-Kassaby, Y.A. Closing the gap between phenotyping and genotyping: Review of advanced, image-based phenotyping technologies in forestry. Ann. For. Sci. 2022, 79, 22. [Google Scholar] [CrossRef]
  23. Diez, Y.; Kentsch, S.; Fukuda, M.; Caceres, M.L.L.; Moritake, K.; Cabezas, M. Deep learning in forestry using uav-acquired rgb data: A practical review. Remote Sens. 2021, 13, 2837. [Google Scholar] [CrossRef]
  24. Onishi, M.; Ise, T. Explainable identification and mapping of trees using UAV RGB image and deep learning. Sci. Rep. 2021, 11, 903. [Google Scholar] [CrossRef] [PubMed]
  25. Chianucci, F.; Disperati, L.; Guzzi, D.; Bianchini, D.; Nardino, V.; Lastri, C.; Rindinella, A.; Corona, P. Estimation of canopy attributes in beech forests using true colour digital images from a small fixed-wing UAV. Int. J. Appl. Earth Obs. Geoinf. 2016, 47, 60–68. Available online: https://www.sciencedirect.com/science/article/pii/S0303243415300702 (accessed on 11 June 2023). [CrossRef]
  26. Williams, J. UAV survey mapping of illegal deforestation in Madagascar. Plants People Planet 2024, 6, 1413–1424. Available online: https://nph.onlinelibrary.wiley.com/doi/10.1002/ppp3.10533 (accessed on 24 August 2025). [CrossRef]
  27. Mäyrä, J.; Tanhuanpää, T.; Kuzmin, A.; Heinaro, E.; Kumpula, T.; Vihervaara, P. Using UAV images and deep learning to enhance the mapping of deadwood in boreal forests. Remote Sens. Environ. 2025, 329, 114906. Available online: https://www.sciencedirect.com/science/article/pii/S0034425725003104 (accessed on 24 August 2025). [CrossRef]
  28. Shi, W.; Liao, X.; Wang, S.; Ye, H.; Wang, D.; Yue, H.; Liu, J. Evaluation of a CNN model to map vegetation classification in a subalpine coniferous forest using UAV imagery. Ecol. Inform. 2025, 87, 103111. Available online: https://linkinghub.elsevier.com/retrieve/pii/S1574954125001207 (accessed on 24 August 2025). [CrossRef]
  29. SHAMTA, I.; Demir, B.E. Development of a deep learning-based surveillance system for forest fire detection and monitoring using UAV. PLoS ONE 2024, 19, e0299058. [Google Scholar] [CrossRef] [PubMed]
  30. Troles, J.; Schmid, U.; Fan, W.; Tian, J. BAMFORESTS: Bamberg Benchmark Forest Dataset of Individual Tree Crowns in Very-High-Resolution UAV Images. Remote Sens. 2024, 16, 1935. Available online: https://www.mdpi.com/2072-4292/16/11/1935 (accessed on 20 August 2025). [CrossRef]
  31. Joshi, D.; Witharana, C. Vision Transformer-Based Unhealthy Tree Crown Detection in Mixed Northeastern US Forests and Evaluation of Annotation Uncertainty. Remote Sens. 2025, 17, 1066. Available online: https://www.mdpi.com/2072-4292/17/6/1066/htm (accessed on 14 August 2025). [CrossRef]
  32. Ferreira, M.P.; dos Santos, D.R.; Ferrari, F.; Coelho, L.C.T.; Martins, G.B.; Feitosa, R.Q. Improving urban tree species classification by deep-learning based fusion of digital aerial images and LiDAR. Urban For. Urban Green. 2024, 94, 128240. Available online: https://linkinghub.elsevier.com/retrieve/pii/S1618866724000384 (accessed on 22 August 2025). [CrossRef]
  33. Jo, W.-K.; Park, J.-H. High-Accuracy Tree Type Classification in Urban Forests Using Drone-Based RGB Imagery and Optimized SVM. Korean J. Remote Sens. 2025, 41, 209–223. Available online: http://kjrs.or.kr/journal/view.html?doi=10.7780/kjrs.2025.41.1.17 (accessed on 23 August 2025). [CrossRef]
  34. DAmato, J.P.; Rinaldi, P.; Boroni, G. Urban tree surveying using aerial UAV images and machine learning algorithms. World J. Inf. Syst. 2025, 1, 71–82. Available online: https://wjis.org/index.php/wjis/article/view/20 (accessed on 25 August 2025). [CrossRef]
  35. Azizi, Z.; Miraki, M. Individual urban trees detection based on point clouds derived from UAV-RGB imagery and local maxima algorithm, a case study of Fateh Garden, Iran. Environ. Dev. Sustain. 2024, 26, 2331–2344. [Google Scholar] [CrossRef]
  36. Li, X.; Wang, L.; Guan, H.; Chen, K.; Zang, Y.; Yu, Y. Urban Tree Species Classification Using UAV-Based Multispectral Images and LiDAR Point Clouds. J. Geovisualization Spat. Anal. 2024, 8, 1–17. [Google Scholar] [CrossRef]
  37. Schiefer, F.; Kattenborn, T.; Frick, A.; Frey, J.; Schall, P.; Koch, B.; Schmidtlein, S. Mapping forest tree species in high resolution UAV-based RGB-imagery by means of convolutional neural networks. ISPRS J. Photogramm. Remote Sens. 2020, 170, 205–215. Available online: https://www.sciencedirect.com/science/article/pii/S0924271620302938 (accessed on 17 June 2023). [CrossRef]
  38. Guo, Q.; Zhang, J.; Guo, S.; Ye, Z.; Deng, H.; Hou, X.; Zhang, H. Urban Tree Classification Based on Object-Oriented Approach and Random Forest Algorithm Using Unmanned Aerial Vehicle (UAV) Multispectral Imagery. Remote Sens. 2022, 14, 3885. Available online: https://www.mdpi.com/2072-4292/14/16/3885/htm (accessed on 14 December 2024). [CrossRef]
  39. Sprott, A.H.; Piwowar, J.M. How to recognize different types of trees from quite a long way away: Combining UAV and spaceborne imagery for stand-level tree species identification. J. Unmanned Veh. Syst. 2021, 9, 166–181. [Google Scholar] [CrossRef]
  40. Hartling, S.; Sagan, V.; Maimaitijiang, M. Urban tree species classification using UAV-based multi-sensor data fusion and machine learning. GIScience Remote Sens. 2021, 58, 1250–1275. Available online: https://www.tandfonline.com/doi/abs/10.1080/15481603.2021.1974275 (accessed on 3 February 2025). [CrossRef]
  41. Adhikari, A.; Kumar, M.; Agrawal, S.; Raghavendra, S. An Integrated Object and Machine Learning Approach for Tree Canopy Extraction from UAV Datasets. J. Indian Soc. Remote Sens. 2021, 49, 471–478. Available online: https://link.springer.com/article/10.1007/s12524-020-01240-2 (accessed on 3 February 2025). [CrossRef]
  42. da Silva, S.D.P.; Eugenio, F.C.; Fantinel, R.A.; de Amaral, L.P.; dos Santos, A.R.; Mallmann, C.L.; dos Santos, F.D.; Pereira, R.S.; Ruoso, R. Modeling and detection of invasive trees using UAV image and machine learning in a subtropical forest in Brazil. Ecol. Inform. 2023, 74, 101989. Available online: https://linkinghub.elsevier.com/retrieve/pii/S1574954123000183 (accessed on 3 February 2025). [CrossRef]
  43. Amin, G.; Imtiaz, I.; Haroon, E.; Saqib, N.; Shahzad, M.; Nazeer, M. Assessment of Machine Learning Algorithms for Land Cover Classification in a Complex Mountainous Landscape. J. Geovisualization Spat. Anal. 2024, 8, 34. [Google Scholar] [CrossRef]
  44. Bakacsy, L.; Tobak, Z.; van Leeuwen, B.; Szilassi, P.; Biró, C.; Szatmári, J. Drone-Based Identification and Monitoring of Two Invasive Alien Plant Species in Open Sand Grasslands by Six RGB Vegetation Indices. Drones 2023, 7, 207. [Google Scholar] [CrossRef]
  45. Sohl, M.A.; Mahmood, S.A.; Rasheed, M.U. Comparative performance of four machine learning models for land cover classification in a low-cost UAV ultra-high-resolution RGB-only orthomosaic. Earth Sci. Inform. 2024, 17, 2869–2885. [Google Scholar] [CrossRef]
  46. Usman, M.; Ejaz, M.; Nichol, J.E.; Farid, M.S.; Abbas, S.; Khan, M.H. A Comparison of Machine Learning Models for Mapping Tree Species Using WorldView-2 Imagery in the Agroforestry Landscape of West Africa. ISPRS Int. J. Geo-Inf. 2023, 12, 142. [Google Scholar] [CrossRef]
  47. Mishra, N.B.; Mainali, K.P.; Shrestha, B.B.; Radenz, J.; Karki, D. Species-Level Vegetation Mapping in a Himalayan Treeline Ecotone Using Unmanned Aerial System (UAS) Imagery. ISPRS Int. J. Geo-Inf. 2018, 7, 445. Available online: https://www.mdpi.com/2220-9964/7/11/445/htm (accessed on 22 August 2025). [CrossRef]
Figure 1. Location map of the University of the Punjab’s botanical garden, Lahore, Pakistan.
Figure 2. Schematic diagram depicting approach and methodology.
Figure 3. (a) DJI Mavic 2 Pro; (b) flight route and data-capture map.
Figure 4. Spatial distribution of ground reference data and orthogonal view of canopy structure of dominant tree species in the study area.
Figure 5. True-colour composite of the summer UAV imagery with tree crowns extracted using GEOBIA, delineated by black boundaries.
Figure 6. Classification accuracy of individual tree species using machine learning models across different seasons.
Figure 7. Overall classification accuracy of (a) different classifiers on single-date and multi-temporal data and (b) RF, ET, XGBoost, and SVM on multi-temporal data.
Figure 8. Confusion matrix of multi-temporal data using (a) RF Classifier, (b) ET Classifier, (c) XGBoost, and (d) SVM.
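A confusion matrix of this kind can be produced from held-out predictions with scikit-learn, as in the minimal sketch below; the labels and predictions are random placeholders rather than the study’s outputs.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay

# Random placeholder test labels and predictions for the 8 species classes,
# with roughly 86% of predictions forced to agree with the labels.
rng = np.random.default_rng(2)
y_true = rng.integers(0, 8, size=82)
y_pred = np.where(rng.random(82) < 0.86, y_true, rng.integers(0, 8, size=82))

# Rows = reference classes, columns = predicted classes.
cm = confusion_matrix(y_true, y_pred, labels=np.arange(8))
ConfusionMatrixDisplay(cm).plot(cmap="Blues")
plt.title("Placeholder confusion matrix (multi-temporal composite)")
plt.tight_layout()
plt.show()
```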
Figure 9. Feature importance from SHAP analysis for the multi-temporal dataset: (a) Random Forest, (b) Extra Trees, (c) XGBoost, and (d) Support Vector Machine.
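SHAP-based importances such as those in Figure 9 are commonly derived with a TreeExplainer over a fitted tree ensemble. The sketch below ranks features by mean absolute SHAP value on random placeholder data; because the array layout returned by the shap package differs between releases for multiclass models, the feature axis is located by its length, which only assumes that the number of features differs from the numbers of samples and classes.

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

# Placeholder multi-temporal feature matrix (186 crown objects x 40 features)
# and integer labels for 8 species classes.
rng = np.random.default_rng(3)
X = rng.normal(size=(186, 40))
y = rng.integers(0, 8, size=186)

rf = RandomForestClassifier(n_estimators=300, random_state=42).fit(X, y)

# TreeExplainer returns per-class SHAP values for multiclass models, either as
# a list of arrays or a single 3-D array depending on the shap release.
sv = np.abs(np.asarray(shap.TreeExplainer(rf).shap_values(X)))
feature_axis = [ax for ax, n in enumerate(sv.shape) if n == X.shape[1]][0]
importance = sv.mean(axis=tuple(ax for ax in range(sv.ndim) if ax != feature_axis))

# Global ranking analogous to a SHAP bar summary plot.
for idx in np.argsort(importance)[::-1][:10]:
    print(f"feature_{idx}: mean |SHAP| = {importance[idx]:.4f}")
```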
Figure 10. Map of the spatial distribution of dominant tree species in the study area.
Table 1. Description of the data acquired from the UAV.
Sr. No. | Flight Date | Season | Camera Angle | Image Overlap | Flight Height
1 | 30 August 2023 | Summer | 90° | 80% | 100 m
2 | 29 October 2023 | Autumn | 90° | 80% | 100 m
3 | 28 January 2024 | Winter | 90° | 80% | 100 m
4 | 11 February 2024 | Early Spring | 90° | 80% | 100 m
Table 2. Distribution of reference data samples for each tree species.
No. | Local Name | Scientific Name | Family Name | Samples
1 | Jatropha | Jatropha integerrima | Euphorbiaceae | 39
2 | Teak Wood | Tectona grandis | Verbenaceae | 33
3 | Amaltas | Cassia fistula | Fabaceae | 25
4 | Villayati Shishum | Millettia ovalifolia | Fabaceae | 22
5 | Karenwood | Heterophragma adenophyllum | Bignoniaceae | 18
6 | Alstonia | Alstonia scholaris | Apocynaceae | 17
7 | Chir Pine | Pinus roxburghii | Pinaceae | 17
8 | Dhrek, Bakain | Melia azedarach | Meliaceae | 15
Table 3. Description of the textural features derived from the grey level co-occurrence matrix (GLCM).
Sr. No. | Texture Feature | Calculation Equation | Feature Description
1 | Mean | $\mathrm{Mean} = \sum_{i=1}^{N_g} \sum_{j=1}^{N_g} i \times P(i,j)$ | The mean value in the GLCM window.
2 | Homogeneity | $\mathrm{Hom} = \sum_{i=1}^{N_g} \sum_{j=1}^{N_g} \frac{P(i,j)}{1 + (i-j)^2}$ | Weights values by the inverse of the contrast weight, so the weight decreases gradually as values lie further from the diagonal.
3 | Contrast | $\mathrm{Con} = \sum_{i=1}^{N_g} \sum_{j=1}^{N_g} (i-j)^2 \times P(i,j)$ | Measures local diversity in an image.
4 | Dissimilarity | $\mathrm{Dis} = \sum_{i=1}^{N_g} \sum_{j=1}^{N_g} |i-j| \times P(i,j)$ | Very similar to contrast, but it increases linearly.
5 | Entropy | $\mathrm{Ent} = -\sum_{i=1}^{N_g} \sum_{j=1}^{N_g} P(i,j) \times \log P(i,j)$ | High value if the values in the GLCM are equally distributed.
6 | Angular Second Moment | $\mathrm{ASM} = \sum_{i=1}^{N_g} \sum_{j=1}^{N_g} P(i,j)^2$ | The uniformity of grey levels in the GLCM window.
7 | Correlation | $\mathrm{Cor} = \frac{\sum_{i=1}^{N_g} \sum_{j=1}^{N_g} (i \times j)\, P(i,j) - \mu_i \mu_j}{\sigma_i \sigma_j}$ | A linear relationship between the grey levels of adjacent pixels.
8 | Standard Deviation | $\mathrm{STD} = \sqrt{\sum_{i=1}^{N_g} \sum_{j=1}^{N_g} P(i,j) \times (i - \mathrm{Mean})^2}$ | Dispersion of grey levels around the mean for the combination of neighbour and reference pixels.
9 | GLDV Angular Second Moment | $\mathrm{VASM} = \sum_{k=0}^{N-1} V_k^2$ | Quantifies the uniformity of texture in an image.
10 | GLDV Entropy | $\mathrm{VENT} = -\sum_{k=0}^{N-1} V_k \ln V_k$ | Measures the randomness or unpredictability of the texture.
Note: $P(i,j)$ represents the probability of each pixel pair $(i,j)$, where $i$ and $j$ are the grey tones of the reference and neighbour pixels (the coordinates of the co-occurrence matrix space); $N_g$ is the number of distinct grey levels in the quantised image, reflecting the grey value range of the original image; $\mu_i$, $\mu_j$ and $\sigma_i$, $\sigma_j$ are the means and standard deviations of $P(i,j)$ along $i$ and $j$, respectively; and $V_k$ is the normalised grey-level difference vector (GLDV).
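For readers who wish to reproduce these descriptors, the following is a minimal NumPy sketch of the Table 3 features computed for a single greyscale crown patch. The quantisation depth, the pixel-pair offset, and the random `patch` array are illustrative assumptions rather than the settings used in this study.

```python
import numpy as np

def glcm_features(patch, levels=32, offset=(0, 1)):
    """Compute the Table 3 GLCM/GLDV features for one greyscale patch.

    patch  : 2-D array of intensities (any range), quantised to `levels` bins.
    offset : pixel-pair displacement (dy, dx); (0, 1) = horizontal neighbour.
    """
    # Quantise the patch to Ng = `levels` grey levels.
    q = np.floor((patch - patch.min()) / (np.ptp(patch) + 1e-12) * (levels - 1)).astype(int)

    # Build the symmetric, normalised co-occurrence matrix P(i, j).
    dy, dx = offset
    rows, cols = q.shape
    P = np.zeros((levels, levels), dtype=float)
    for r in range(max(0, -dy), rows - max(0, dy)):
        for c in range(max(0, -dx), cols - max(0, dx)):
            P[q[r, c], q[r + dy, c + dx]] += 1
    P = P + P.T          # count both (i, j) and (j, i) pairings
    P /= P.sum()         # normalise counts to probabilities

    i, j = np.meshgrid(np.arange(levels), np.arange(levels), indexing="ij")
    mu_i, mu_j = (i * P).sum(), (j * P).sum()
    sd_i = np.sqrt(((i - mu_i) ** 2 * P).sum())   # = STD of Table 3
    sd_j = np.sqrt(((j - mu_j) ** 2 * P).sum())
    nz = P > 0                                    # avoid log(0)

    # Grey-level difference vector V_k for the GLDV features.
    k = np.abs(i - j)
    V = np.array([P[k == d].sum() for d in range(levels)])
    Vnz = V > 0

    return {
        "mean": mu_i,
        "homogeneity": (P / (1.0 + (i - j) ** 2)).sum(),
        "contrast": (((i - j) ** 2) * P).sum(),
        "dissimilarity": (np.abs(i - j) * P).sum(),
        "entropy": -(P[nz] * np.log(P[nz])).sum(),
        "asm": (P ** 2).sum(),
        "correlation": ((i * j * P).sum() - mu_i * mu_j) / (sd_i * sd_j + 1e-12),
        "std": sd_i,
        "gldv_asm": (V ** 2).sum(),
        "gldv_entropy": -(V[Vnz] * np.log(V[Vnz])).sum(),
    }

# Example on a random patch standing in for one segmented tree-crown object.
features = glcm_features(np.random.rand(64, 64))
print(features)
```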
Table 4. Overall accuracies used to compare ML algorithms in the different seasons.
Summer
Model | OA | Precision | Recall | F1 | Kappa | MCC
RF | 75% | 0.76 | 0.75 | 0.73 | 0.71 | 0.72
ET | 74% | 0.74 | 0.74 | 0.72 | 0.69 | 0.70
XGBoost | 74% | 0.77 | 0.74 | 0.73 | 0.70 | 0.71
SVM | 72% | 0.72 | 0.72 | 0.70 | 0.67 | 0.68
Autumn
Model | OA | Precision | Recall | F1 | Kappa | MCC
RF | 63% | 0.62 | 0.63 | 0.60 | 0.57 | 0.58
ET | 67% | 0.68 | 0.67 | 0.65 | 0.62 | 0.63
XGBoost | 65% | 0.64 | 0.65 | 0.63 | 0.59 | 0.60
SVM | 57% | 0.59 | 0.57 | 0.55 | 0.50 | 0.52
Winter
Model | OA | Precision | Recall | F1 | Kappa | MCC
RF | 68% | 0.68 | 0.70 | 0.66 | 0.63 | 0.64
ET | 66% | 0.66 | 0.64 | 0.63 | 0.60 | 0.61
XGBoost | 64% | 0.64 | 0.61 | 0.61 | 0.58 | 0.59
SVM | 65% | 0.65 | 0.65 | 0.62 | 0.59 | 0.61
Early Spring
Model | OA | Precision | Recall | F1 | Kappa | MCC
RF | 61% | 0.61 | 0.59 | 0.58 | 0.54 | 0.55
ET | 62% | 0.62 | 0.59 | 0.58 | 0.55 | 0.56
XGBoost | 64% | 0.64 | 0.62 | 0.62 | 0.58 | 0.59
SVM | 62% | 0.63 | 0.62 | 0.58 | 0.55 | 0.57
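As an illustration of how per-season scores of this kind can be generated, the sketch below trains the four classifiers with scikit-learn and XGBoost and reports the same metrics. The feature matrix, labels, hyper-parameters, and train/test split are placeholders, not the configuration used in this study.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, cohen_kappa_score, matthews_corrcoef)
from xgboost import XGBClassifier

# Placeholder object-level feature table for one season and integer species labels.
rng = np.random.default_rng(0)
X_summer = rng.normal(size=(186, 40))     # 186 crown objects x 40 features
y = rng.integers(0, 8, size=186)          # 8 species classes

X_tr, X_te, y_tr, y_te = train_test_split(
    X_summer, y, test_size=0.25, stratify=y, random_state=42)

models = {
    "RF": RandomForestClassifier(n_estimators=500, random_state=42),
    "ET": ExtraTreesClassifier(n_estimators=500, random_state=42),
    "XGBoost": XGBClassifier(n_estimators=500, learning_rate=0.1,
                             eval_metric="mlogloss", random_state=42),
    "SVM": SVC(kernel="rbf", C=10, gamma="scale"),
}

for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    print(name,
          f"OA={accuracy_score(y_te, pred):.2%}",
          f"P={precision_score(y_te, pred, average='macro', zero_division=0):.2f}",
          f"R={recall_score(y_te, pred, average='macro', zero_division=0):.2f}",
          f"F1={f1_score(y_te, pred, average='macro', zero_division=0):.2f}",
          f"kappa={cohen_kappa_score(y_te, pred):.2f}",
          f"MCC={matthews_corrcoef(y_te, pred):.2f}")
```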
Table 5. Performance comparison of machine learning models on the multi-temporal image composite.
Model | OA | Precision | Recall | F1 | Kappa | MCC
RF | 86% | 0.88 | 0.86 | 0.85 | 0.83 | 0.84
ET | 84% | 0.87 | 0.84 | 0.83 | 0.80 | 0.81
XGBoost | 83% | 0.85 | 0.83 | 0.82 | 0.80 | 0.80
SVM | 77% | 0.80 | 0.77 | 0.76 | 0.74 | 0.75
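A multi-temporal composite of the kind evaluated above can be assembled by stacking the per-season object features side by side. The pandas sketch below shows one possible layout; the object IDs, column names, and season suffixes are assumptions for illustration only.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
object_ids = [f"crown_{i}" for i in range(186)]   # hypothetical crown-object IDs

def fake_season_table(n_features=10):
    """Placeholder for one season's GEOBIA feature table (one row per crown)."""
    data = rng.normal(size=(len(object_ids), n_features))
    return pd.DataFrame(data, index=object_ids,
                        columns=[f"feat{j}" for j in range(n_features)])

seasons = {s: fake_season_table() for s in ["summer", "autumn", "winter", "spring"]}

# Suffix each season's columns and join on the object index so every crown
# carries one stacked row of multi-temporal features.
multi_temporal = pd.concat(
    [df.add_suffix(f"_{s}") for s, df in seasons.items()],
    axis=1, join="inner",
)
print(multi_temporal.shape)   # (186, 40) for this placeholder example
```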
Table 6. Per-class precision, recall, and F1-scores for the RF model on multi-temporal data, highlighting performance across imbalanced classes.
Local Name (Species Name) | Precision | Recall | F1 | Support
Jatropha (Jatropha integerrima) | 1.00 | 1.00 | 1.00 | 14
Teak Wood (Tectona grandis) | 0.67 | 0.86 | 0.75 | 14
Amaltas (Cassia fistula) | 0.91 | 0.77 | 0.83 | 13
Villayati Shishum (Millettia ovalifolia) | 0.70 | 0.70 | 0.70 | 10
Karenwood (Heterophragma adenophyllum) | 0.63 | 0.56 | 0.59 | 9
Alstonia (Alstonia scholaris) | 1.00 | 1.00 | 1.00 | 8
Chir Pine (Pinus roxburghii) | 0.80 | 0.57 | 0.67 | 7
Dhrek, Bakain (Melia azedarach) | 0.80 | 1.00 | 0.93 | 7
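Per-class scores such as these can be obtained directly from held-out predictions with scikit-learn's classification_report, as sketched below on randomly generated placeholder labels; the species list is used only for readable output.

```python
import numpy as np
from sklearn.metrics import classification_report

species = ["Jatropha", "Teak Wood", "Amaltas", "Villayati Shishum",
           "Karenwood", "Alstonia", "Chir Pine", "Dhrek/Bakain"]

# Random placeholder held-out labels and predictions; in practice these would
# come from the fitted RF model on the multi-temporal test split.
rng = np.random.default_rng(1)
y_true = rng.integers(0, len(species), size=82)
y_pred = np.where(rng.random(82) < 0.85, y_true,
                  rng.integers(0, len(species), size=82))

# Per-class precision, recall, F1, and support, as tabulated above.
print(classification_report(y_true, y_pred,
                            labels=np.arange(len(species)),
                            target_names=species, zero_division=0))
```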
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
