Article

A Multi-Source Remote Sensing Identification Framework for Coconut Palm Mapping

1 Hainan Aerospace Technology Innovation Center, Wenchang 571399, China
2 Hainan Aerospace Information Research Institute, Wenchang 571399, China
* Author to whom correspondence should be addressed.
Remote Sens. 2026, 18(1), 102; https://doi.org/10.3390/rs18010102
Submission received: 10 November 2025 / Revised: 21 December 2025 / Accepted: 24 December 2025 / Published: 27 December 2025

Highlights

What are the main findings?
  • A GEE-based multi-source remote sensing framework integrating Sentinel-1 SAR, Sentinel-2 MSI, and SRTM topographic data was developed, achieving an overall accuracy (OA) of 92.51% for coconut palm mapping in tropical monsoon regions.
  • RF-OOB feature selection identified an optimal 15-dimensional subset (64% reduction in dimensionality) that maintained high accuracy (OA = 92.83%), with the Canopy Water Index (CWI), Green Chlorophyll Index (GCI), and VV-polarized backscattering coefficient (σVV) emerging as the most informative features.
What are the implications of the main findings?
  • The framework addresses key challenges (cloud cover, spectral similarity, feature redundancy) in tropical vegetation mapping, enabling efficient large-scale monitoring of coconut palms for agricultural planning and pest management.
  • The framework, validated by internal tests and independent UAV-based verification, provides a reliable technical reference for mapping other tropical economic forests, supporting precision agriculture and ecological resource management.

Abstract

Coconut palms (Cocos nucifera L.) are a critical economic and ecological resource in Wenchang City, Hainan. Accurate mapping of their spatial distribution is essential for precision agricultural planning and effective pest and disease management. However, in tropical monsoon regions, persistent cloud cover, spectral similarity with other evergreen species, and redundancy among high-dimensional features hinder the performance of optical classification. To address these challenges, we developed a scalable multi-source remote sensing framework on the Google Earth Engine (GEE) with an emphasis on species-oriented feature design rather than generic feature stacking. The framework integrates Sentinel-1 SAR, Sentinel-2 MSI, and SRTM topographic data to construct a 42-dimensional feature set encompassing spectral, polarimetric, textural, and topographic attributes. Using Random Forest (RF) importance ranking and out-of-bag (OOB) error analysis, an optimal 15-feature subset was identified. Four feature combination schemes were designed to assess the contribution of each data source. The fused dataset achieved an overall accuracy (OA) of 92.51% (Kappa = 0.8928), while the RF-OOB optimized subset maintained a comparable OA of 92.83% (Kappa = 0.8975) with a 64% reduction in dimensionality. Canopy Water Index (CWI), Green Chlorophyll Index (GCI), and VV-polarized backscattering coefficient (σVV) were identified as the most discriminative features. Independent UAV validation (0.07 m resolution) in a 50 km2 area of Chongxing Town confirmed the model’s robustness (OA = 90.17%, Kappa = 0.8617). This study provides an efficient and robust framework for large-scale monitoring of tropical economic forests such as coconut palms.

1. Introduction

Coconut palms (Cocos nucifera L.) are evergreen trees widely distributed in tropical coastal regions, providing significant economic, ecological, and cultural value [1]. In Wenchang City, Hainan Province, China, they constitute a cornerstone of the agricultural economy and ecosystem services. Therefore, accurately mapping the distribution of coconut palms is crucial for agricultural planning, ecological management, and pest and disease monitoring. Remote sensing has long been established as a powerful tool for monitoring agricultural and forestry ecosystems [2,3]. However, acquiring reliable spatial information on coconut palms in tropical environments remains challenging due to persistent cloud cover, complex canopy structures, and high spectral similarity with other perennial tree crops. Optical remote sensing is frequently compromised by persistent cloud cover, leading to data gaps and temporal discontinuities [4]. Moreover, coconut palms share highly similar spectral signatures with other evergreen species such as oil palm and rubber trees, which limits the accuracy of classifications based solely on spectral information [5,6]. Although synthetic aperture radar (SAR) provides all-weather, day-and-night observation capabilities, the scattering mechanisms associated with the unique canopy architecture of coconut palms have not been fully elucidated, and classifications relying only on SAR data generally yield suboptimal accuracy.
Recent advances in multi-source remote sensing have opened new avenues for improving vegetation classification in complex tropical environments. Integrating optical, SAR, and topographic information enables complementary strengths of different sensors to be exploited, thereby enhancing the recognition of heterogeneous vegetation types, especially perennial economic crops [7,8]. Sentinel-1 SAR data are sensitive to vegetation structure and dielectric properties, Sentinel-2 multispectral data provide rich spectral information, and topographic data (such as SRTM DEM) facilitate understanding of the influence of terrain on vegetation distribution [9]. Numerous studies have demonstrated that combining Sentinel-1 and Sentinel-2 data enhances crop and forest classification accuracy compared with using single-source imagery. Van Tricht et al. [10] found that when using a Random Forest classifier, a combination of radar and optical imagery always outperformed a classification based on single-sensor inputs in terms of crop-mapping accuracy. Subsequent research by Reji [11] and Liu et al. [12] further confirmed that multi-sensor fusion and deep learning frameworks can substantially increase classification robustness in heterogeneous tropical environments. Despite these advances, multi-source data fusion also introduces challenges related to feature redundancy and computational complexity. Correlated spectral indices or overlapping texture metrics can reduce classifier stability and increase model uncertainty, highlighting the need for effective feature selection strategies. Several studies have shown that removing redundant or irrelevant features can improve classification efficiency and stability. For instance, Belgiu and Drăguţ [13] reported that the performance of Random Forests in land-cover mapping depends strongly on the selection of non-redundant predictors, as correlated variables can bias variable importance and increase model variance. 
Similarly, Rodriguez-Galiano et al. [14] found that reducing input dimensionality through feature importance ranking or recursive elimination significantly enhances classification accuracy and computational efficiency in Sentinel-based vegetation mapping. More recent studies have confirmed that optimized feature subsets combining optical, radar, and topographic variables can achieve comparable or even higher accuracy with fewer features, reducing model complexity and improving transferability across regions [15,16].
Compared with staple crops such as rice and wheat, or natural tropical forests, research on coconut palm mapping remains limited and spatially fragmented. At the global scale, Descals et al. [17] produced a 20 m global map of closed-canopy coconut palms using a U-Net model trained on annual composites of Sentinel-1 and Sentinel-2. The map achieved a relatively high overall accuracy, but showed very low producer’s accuracy (11.3%) in regions with sparse or open canopy conditions, indicating substantial omission errors outside closed-canopy areas. Worachairungreung et al. [18] used UAV-based RGB imagery with object detection models to classify individual coconut trees within plantations, demonstrating high precision under clear tree crowns and homogeneous canopy conditions. Similarly, Prabowo and Nasahara used 0.5 m Pleiades imagery combined with Histogram of Oriented Gradients (HOG) features and a Support Vector Machine classifier to detect and count coconut trees in experimental plots; while detection was accurate for well-defined trees, performance decreased when crowns overlapped or shadowing increased [19]. Despite these efforts, no comprehensive framework currently integrates optical and SAR data with feature selection and multi-scale validation for coconut palm mapping in tropical coastal regions of China.
In response to these research challenges, this study focuses on Wenchang City and develops a scalable multi-source remote sensing framework on the Google Earth Engine (GEE) platform. The specific objectives are: (1) to construct a high-dimensional feature space integrating Sentinel-1 polarimetric, Sentinel-2 spectral, textural, and topographic variables, aiming to resolve the spectral similarity between coconut palms and morphologically similar species; (2) to identify the optimal feature subset using Random Forest (RF) importance ranking and Out-of-Bag (OOB) error analysis, thereby balancing classification accuracy and computational efficiency; (3) to unveil the specific biophysical mechanisms, particularly the roles of canopy water content and vertical structure, that drive the identification of coconut palms; and (4) to conduct independent validation using UAV imagery with a spatial resolution of 0.07 m to confirm the robustness of the results across different spatial scales. This study evaluates the applicability and limitations of an established framework when applied to coconut palm mapping at the regional scale.

2. Materials and Methods

2.1. Study Area and Data

2.1.1. Study Area

Wenchang City (19°21′–20°01′N, 110°28′–111°03′E) is located in northeastern Hainan Island, bordering the South China Sea to the east and the Qiongzhou Strait to the north, with Haikou, Ding’an, and Qionghai lying to the west. Known as China’s “Hometown of Coconuts”, Wenchang covers 2459.98 km2 and features a landscape dominated by low hills, terraces, and coastal plains, with terrain generally sloping from southwest to northeast (Figure 1). The average elevation is 42.55 m, and areas below 50 m account for the majority of the land surface. The region experiences a tropical marine monsoon climate, with a mean annual temperature of 24 °C and annual precipitation of about 1800 mm. Its sandy loam and latosol soils, coupled with abundant heat and rainfall, provide highly suitable conditions for coconut cultivation. As the core coconut-producing region in Hainan, Wenchang accounts for 45% of the province’s planting area and yield, with an economic output value of 2.5 billion yuan in 2024. The study area includes 17 towns and 2 state farms. Among them, Chongxing Town, characterized by typical coconut distribution patterns, was selected as the UAV validation site. This high-resolution UAV dataset complements the medium to low resolution Sentinel-1/2 imagery, creating an ideal experimental setting for evaluating the synergistic extraction of coconut palms using multi-source data.

2.1.2. Data Sources and Preprocessing

Sentinel-1 SAR data were obtained from the “COPERNICUS/S1_GRD” collection on the GEE platform. The dataset, derived from multi-looked Level-1 terrain-corrected products, covers the period from 6 September to 5 December 2024. This specific temporal window was selected based on three criteria: (1) it corresponds to the post-rainy season with relatively lower cloud cover; (2) it captures the stable canopy structure of evergreen coconut palms; and (3) it leverages the enhanced structural contrast following Super Typhoon Yagi (September 2024), where the high wind resistance of coconut palms distinguished them from other lodging-prone vegetation. Data acquired in the Interferometric Wide Swath (IW) mode with ascending orbits were used, including VV and VH polarizations and incidence angle information. A total of 21 valid scenes with 10 m spatial resolution were processed. Preprocessing steps included radiometric normalization using incidence angles to reduce terrain-induced variations, application of a 2.5 m median filter to suppress speckle noise, daily image compositing, and temporal median synthesis to produce the final backscatter dataset [20].
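The speckle-suppression step above can be illustrated with a minimal sketch. This is not the authors' GEE pipeline (which operates on `COPERNICUS/S1_GRD` image collections); it is a pure-Python sliding-window median filter on a small backscatter grid, with an assumed 3×3 window and border handling by shrinking the window at image edges.

```python
def median_filter(grid, radius=1):
    """Return a median-filtered copy of a 2-D list of backscatter values (dB).

    A (2*radius+1)-square neighbourhood is gathered around each pixel,
    clipped at the image borders, and its median replaces the pixel value.
    """
    rows, cols = len(grid), len(grid[0])
    out = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # Collect the neighbourhood, clipped at image borders.
            vals = [grid[rr][cc]
                    for rr in range(max(0, r - radius), min(rows, r + radius + 1))
                    for cc in range(max(0, c - radius), min(cols, c + radius + 1))]
            vals.sort()
            n = len(vals)
            out[r][c] = vals[n // 2] if n % 2 else 0.5 * (vals[n // 2 - 1] + vals[n // 2])
    return out

# A single bright speckle spike inside a homogeneous -12 dB patch is removed:
grid = [[-12.0, -12.0, -12.0],
        [-12.0,   0.0, -12.0],
        [-12.0, -12.0, -12.0]]
smoothed = median_filter(grid)
```

On GEE the analogous operation would be a focal median over the backscatter bands; the window size here is illustrative only.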
Sentinel-2 multispectral imagery was obtained from the “COPERNICUS/S2_SR_HARMONIZED” product on the GEE platform, consisting of Level-2A atmospherically corrected data. The same acquisition period as Sentinel-1 was used, with 43 images (cloud cover < 40%). Choosing this short-term window effectively minimized temporal gaps and synthesis noise caused by the persistent cloudiness typical of other seasons. Ten-meter bands (B2–B8A) and resampled 20 m bands (B11–B12) were retained. Cloud and shadow masking was performed using the Scene Classification Layer (SCL), and a hierarchical temporal gap filling approach was applied, adopting median composites from the previous 30, 60, and 180 days. A cloud-free composite at 10 m resolution was ultimately generated [21].
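The hierarchical gap-filling logic described above can be sketched as a per-pixel fallback cascade: use the cloud-free target composite where available, otherwise fall back to progressively longer look-back medians (30, 60, then 180 days). The "None means masked" convention below is an assumption of this sketch, not the paper's implementation.

```python
def fill_gaps(target, fallbacks):
    """Fill masked pixels of a composite from a cascade of fallback composites.

    target:    2-D list with None marking cloud/shadow-masked pixels.
    fallbacks: list of 2-D lists ordered from shortest to longest
               look-back window (e.g. 30-, 60-, 180-day medians).
    """
    rows, cols = len(target), len(target[0])
    filled = [row[:] for row in target]
    for r in range(rows):
        for c in range(cols):
            if filled[r][c] is None:
                # First fallback with a valid value wins (shortest window preferred).
                for fb in fallbacks:
                    if fb[r][c] is not None:
                        filled[r][c] = fb[r][c]
                        break
    return filled

# One-row example: pixel 2 is filled from the 30-day median,
# pixel 3 only from the 60-day median.
target = [[0.30, None, None]]
fb30   = [[None, 0.21, None]]
fb60   = [[None, 0.40, 0.18]]
composite = fill_gaps(target, [fb30, fb60])
```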
Topographic variables were derived from the “USGS/SRTMGL1_003” dataset (30 m resolution), resampled to 10 m. Four variables (elevation, slope, aspect, and hillshade) were extracted. Although the terrain in Wenchang is generally gentle, a mask for slopes > 60° was strictly applied, as such areas exist sporadically. This step aims to filter out DEM artifacts and non-terrain noise, ensuring that extreme outliers do not affect the normalization of topographic variables. These variables were subsequently used as auxiliary predictors to improve land-cover separability [22].
Sample data were collected using a spatially stratified sampling strategy to guarantee statistical validity and spatial representativeness. Training and validation samples were generated through manual interpretation of sub-meter Google Earth imagery. Sample reliability was verified by incorporating field survey knowledge and cross-validation with multi-temporal remote sensing images. This protocol promoted spatial representativeness by distributing samples across different towns, elevations, and coconut palm planting densities (as shown in Figure 1), which helped minimize spatial autocorrelation. Four land cover classes were considered: coconut palms (184 samples), residential areas (412), water bodies (203), and others (524). The ‘others’ category encompasses grasslands, bare land, and other crops, specifically consolidating morphologically similar species (e.g., betel nut and rubber trees) into a unified background class. This aggregation strategy was adopted to maintain sample size stability and model robustness. A total of 1323 samples were collected, which were randomly divided into training (70%) and validation (30%) sets with balanced category proportions to maintain statistical consistency.
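The 70/30 split with balanced category proportions can be sketched as a per-class shuffle-and-slice. The fixed seed and the `(id, label)` sample representation are illustrative assumptions; the study's exact randomisation procedure is not specified.

```python
import random

def stratified_split(samples, train_frac=0.7, seed=42):
    """Split (id, class_label) samples into train/test sets, applying the
    split within each class so category proportions stay balanced."""
    by_class = {}
    for s in samples:
        by_class.setdefault(s[1], []).append(s)
    rng = random.Random(seed)
    train, test = [], []
    for label, group in sorted(by_class.items()):
        rng.shuffle(group)
        k = round(len(group) * train_frac)   # 70% of each class to training
        train.extend(group[:k])
        test.extend(group[k:])
    return train, test

# Two of the paper's classes: 184 coconut palm and 412 residential samples
# yield 129 + 288 training and 55 + 124 validation samples.
samples = [(i, "coconut") for i in range(184)] + \
          [(i, "residential") for i in range(412)]
train, test = stratified_split(samples)
```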
Independent validation data were acquired in Chongxing Town in November 2024 using a DJI Matrice 350 RTK UAV (DJI Technology Co., Ltd., Shenzhen, China) equipped with a Zenmuse L2 sensor (DJI Technology Co., Ltd., Shenzhen, China). Data preprocessing was performed using DJI Terra software (v4.3). The workflow primarily involved aerotriangulation (AT) to solve the spatial position and orientation of images, followed by orthorectification and mosaicking to generate a seamless Digital Orthophoto Map (DOM). The final orthoimages have a spatial resolution of 0.07 m, covering about 220 km2. A 50 km2 sub-area was selected as the validation zone, where 519 samples were manually labeled: coconut palms (147), residential areas (90), water bodies (68), and others (214). These samples were used exclusively for independent validation at fine scales.

2.2. Research Methods

2.2.1. Overall Technical Workflow

This study adopts a progressive technical workflow of “data preprocessing and fusion–multi-dimensional feature extraction–feature combination scheme design–feature selection and classification modeling–dual-scale accuracy validation” (Figure 2). First, preprocessing was performed on Sentinel-1 SAR images, Sentinel-2 MSI images, and DEM data. On this basis, SAR polarimetric parameters, spectral and vegetation indices, topographic parameters, and textural features from the near-infrared (NIR) band of Sentinel-2 were extracted to form a multi-source feature dataset with a unified resolution. Given that the 10 m spatial resolution of Sentinel imagery is comparable to the average canopy size of individual coconut palms, a pixel-based classification strategy was adopted to ensure the detection of both contiguous plantations and scattered trees, while GLCM texture features were incorporated to provide spatial contextual information. Subsequently, multiple sets of feature combination schemes were designed to compare the contributions of different data sources to classification accuracy. The RF algorithm was used to conduct feature importance ranking, and the optimal feature subset was selected for classification modeling. Finally, accuracy evaluation of the classification results was carried out by combining ground validation samples and independent UAV samples to analyze the reliability and generalization performance of the model.

2.2.2. Feature Extraction

To fully capture the differential characteristics of coconut palms in terms of radar scattering, spectral response, textural structure, and topographic suitability, this study constructed a 42-dimensional feature set covering four major categories, including SAR polarimetric parameters, spectral and vegetation indices, NIR textural features, and DEM topographic factors, based on preprocessed images (Table 1). Among these, 10 core spectral bands of Sentinel-2 and vegetation indices collectively characterize spectral differences in vegetation; SAR polarimetric parameters capture vegetation structure information; and Gray-Level Co-Occurrence Matrix (GLCM) textures were extracted from the Sentinel-2 near-infrared band, as NIR is highly sensitive to vegetation canopy structure and spatial heterogeneity. For texture calculation, a 3 × 3 moving window with a step length of 1 pixel was adopted to compute GLCMs in four directions (0°, 45°, 90°, and 135°), and the final texture values were obtained by averaging the results across all directions. Considering the computational burden and correlations among several GLCM derivatives, nine texture measures were selected, including Angular Second Moment (ASM), Contrast, Correlation, Variance, Inverse Difference Moment (IDM), Sum Average, Sum Variance, Sum Entropy, and Entropy. These measures have demonstrated effectiveness in characterizing vegetation structural differences and reflect spatial heterogeneity. In addition, DEM parameters provide topographic constraint information [23]. The selection of these features was guided by the biological and structural characteristics of coconut palms, such as their evergreen canopy, high water content, and distinctive sparse crown structure. Accordingly, indices sensitive to canopy moisture, chlorophyll concentration, and spatial heterogeneity were prioritized to enhance class separability from other tropical perennial vegetation.

2.2.3. Design of Feature Combination Schemes

To systematically assess the classification contributions of different feature types, four feature combination schemes were designed:
  • Scheme 1 (SAR Polarimetric Features): Comprises only the 9-dimensional polarimetric features derived from Sentinel-1, including σVV, σVH, incidence angle, and polarimetric parameters (e.g., Hc, mC). This scheme evaluates microwave remote sensing’s capability to identify coconut palm structural characteristics.
  • Scheme 2 (Optical Spectral Features): Includes 20-dimensional spectral features from Sentinel-2, encompassing original spectral bands and vegetation indices. It assesses the spectral discriminative power of optical remote sensing.
  • Scheme 3 (Texture and Topographic Features): Integrates 13-dimensional textural and topographic features, consisting of 9-dimensional GLCM textural parameters and 4-dimensional topographic metrics. While Wenchang is predominantly flat, topographic variables were included alongside texture to represent the complete set of spatial and environmental constraints, independent of spectral reflectance. This scheme aims to evaluate how spatial context and environmental context collectively supplement pure spectral information.
  • Scheme 4 (Multi-Source Fusion Features): Incorporates all 42-dimensional features of multi-source data, aiming to evaluate the classification performance of multi-dimensional feature synergy.
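Two of the Scheme 2 vegetation indices highlighted in this study can be written out explicitly for Sentinel-2 surface reflectances (B3 = green, B4 = red, B8 = NIR). These are the standard formulations of NDVI and the Green Chlorophyll Index; the paper's exact CWI definition is not given in this excerpt, so it is omitted here.

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    return (nir - red) / (nir + red)

def gci(nir, green):
    """Green Chlorophyll Index: NIR / Green - 1. Uses the green band, which
    is less prone than NDVI to saturation over high-biomass tropical canopies."""
    return nir / green - 1.0

# Example reflectances for a dense evergreen canopy (values illustrative):
v_ndvi = ndvi(nir=0.5, red=0.1)    # ~0.667
v_gci = gci(nir=0.5, green=0.1)    # ~4.0
```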

2.2.4. Feature Selection and Classification

Feature selection was conducted to scientifically screen key features for enhancing both classification efficiency and accuracy, and this objective was achieved using the OOB error analysis method based on the RF algorithm. Specifically, an initial RF model was constructed with key hyperparameters configured as follows: number of trees = 400, max_depth = None (to fully fit the complex feature space of multi-source data), mtry = 6 (calculated as the integer square root of the input feature dimension p = 42), and bootstrap = True (default bootstrap sampling strategy). All 42 candidate features served as input variables for the model. Feature importance was quantified by the magnitude of reduction in Gini impurity (a metric reflecting node classification ambiguity), and a higher importance value indicated that the feature made a more significant contribution to improving classification accuracy. Subsequently, all features were sorted in descending order of their importance scores. On this basis, a series of sub-models were sequentially built, with each sub-model incorporating the top n features (where n ranged from 1 to 42). For each sub-model, the OOB error was computed. By plotting the curve of OOB error against the number of input features, the feature subset corresponding to the minimum OOB error was identified as the optimal feature set. This approach ensured that while maintaining high classification precision, the balance between “maximizing accuracy” and “minimizing feature dimensionality” was achieved. The classification model also employed the RF algorithm, with parameters optimized via preliminary experiments. During training, samples were randomly split into a training set (926 samples) and a validation set (397 samples) at a 7:3 ratio. Comparative experiments across four schemes evaluated the contribution of different features.
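The final selection step above reduces to a simple rule: given features already ranked by Gini importance and the OOB error of each top-n sub-model, keep the smallest prefix that attains the minimum OOB error. The feature names and error values below are illustrative stand-ins, not the paper's full results.

```python
def select_optimal_subset(ranked_features, oob_errors):
    """Return the smallest top-n prefix with minimum OOB error.

    ranked_features: names sorted by descending RF importance.
    oob_errors:      oob_errors[n-1] is the OOB error of the sub-model
                     built on the top n features.
    """
    # min() with a key returns the first index achieving the minimum,
    # i.e. the smallest subset among ties.
    best_n = min(range(len(oob_errors)), key=lambda i: oob_errors[i]) + 1
    return ranked_features[:best_n]

# Toy example mirroring the shape of the paper's curve: error drops
# sharply, bottoms out, then fluctuates as redundant features are added.
ranked = ["CWI", "GCI", "sigma_VV", "NIR_asm", "B11"]
oob = [0.515, 0.300, 0.140, 0.125, 0.130]
optimal = select_optimal_subset(ranked, oob)
```

In the study this procedure settled on n = 15 of the 42 candidates, the 64% dimensionality reduction reported in the Results.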

2.2.5. Accuracy Evaluation

To comprehensively and objectively verify the reliability of classification results, this study constructed a dual-layer accuracy evaluation system of “in-model validation–independent validation” to quantify classification accuracy from multiple dimensions. For in-model validation, a confusion matrix was built based on 30% of the validation samples (397 samples), with four core indicators calculated: OA, Kappa Coefficient, Producer’s Accuracy (PA) and User’s Accuracy (UA). Independent validation was conducted in a typical 50 km2 area of Chongxing Town. As shown in Figure 3, using 0.07 m-resolution UAV high-resolution images, 147 coconut palm samples and 372 non-coconut palm samples (including residential areas, water bodies, and other vegetation) were manually labeled. The OA and Kappa coefficient of this independent sample set were calculated to evaluate the model’s generalization ability in high-resolution scenarios. Since the spatial resolution of UAV data is much higher than that of Sentinel series data, it can accurately identify the morphological details of coconut palms, thus effectively verifying the model’s ability to capture fine-grained features.
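The four in-model indicators (OA, Kappa, PA, UA) can be computed from a confusion matrix as sketched below. The convention assumed here is rows = reference (validation) labels, columns = predicted labels; the paper does not state its matrix orientation.

```python
def accuracy_metrics(cm):
    """OA, Kappa, and per-class PA/UA from a square confusion matrix.

    cm[i][j] = number of validation samples of reference class i
               predicted as class j.
    """
    n = len(cm)
    total = sum(sum(row) for row in cm)
    diag = sum(cm[i][i] for i in range(n))
    row_tot = [sum(cm[i]) for i in range(n)]                      # reference totals
    col_tot = [sum(cm[i][j] for i in range(n)) for j in range(n)]  # predicted totals
    oa = diag / total
    # Expected chance agreement, used by the Kappa coefficient.
    pe = sum(row_tot[i] * col_tot[i] for i in range(n)) / total ** 2
    kappa = (oa - pe) / (1 - pe)
    pa = [cm[i][i] / row_tot[i] for i in range(n)]  # producer's accuracy (omission)
    ua = [cm[i][i] / col_tot[i] for i in range(n)]  # user's accuracy (commission)
    return oa, kappa, pa, ua

# Hypothetical 2-class matrix (coconut vs. non-coconut), for illustration:
oa, kappa, pa, ua = accuracy_metrics([[50, 5], [10, 35]])
```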

3. Results

3.1. Results and Analysis of Feature Selection

Feature selection was implemented using the RF algorithm to balance classification accuracy with computational efficiency. As visualized in Figure 4a, the importance of the 42 candidate features exhibited a strong correlation with the biological traits of coconut palms. The CWI ranked first, capturing the high water content of coconut palms’ evergreen canopies via shortwave infrared bands to distinguish them from deciduous vegetation. The GCI followed with an importance value of 162.476, leveraging the enrichment of chlorophyll b in coconut palm leaves to effectively differentiate them from herbaceous plants. The SAR polarimetric feature σVV ranked third (importance = 155.104); its sensitivity to the unique “sparse canopy + thick trunk” structure of coconut palms complemented optical data to mitigate confusion with dense shrubs. Traditional vegetation indices, such as NDVI, ranked much lower at the 24th position with an importance value of 130.361. This relatively low ranking does not imply poor discriminative capability but rather reflects feature redundancy and saturation issues. Since NDVI is highly correlated with the top-ranked indices (e.g., GCI, GNDVI), the information it carries was largely captured by the latter. Furthermore, GCI utilizes the green band, which is more sensitive to chlorophyll variability in high-biomass tropical vegetation, thereby avoiding the saturation effects often observed with NDVI. Topographic features exhibited relatively low importance, consistent with the study area’s gentle terrain. Overall, the top 20 features integrated optical bands, radar features, vegetation indices, and textural features, forming an “optically dominant, radar-assisted, texturally supplementary” multi-source pattern that supported accurate coconut palm identification.
The OOB error curve further quantifies the relationship between the number of features and classification performance (Figure 4b). When the number of features increased from 1 to 15, the OOB error dropped sharply from 51.5% to 12.5%, representing a 75.7% decrease. As the number of features continued to increase to 42, the error only fluctuated slightly within the range of 12.5–14.5%. Redundant features made no significant contribution to accuracy improvement; instead, they introduced noise and reduced model stability. This result verifies the effectiveness of feature selection, indicating that 15 features already contain the core information required for classification, with the optimal 15-dimensional subset (detailed in Table 2) accounting for 78% of total discriminative information.
Beyond the dominant optical spectral traits and SAR structural sensitivity, the inclusion of textural features is critical to resolving confusion between coconut palms and morphologically similar species. Figure 5 supports this with UAV ground truth (Figure 5a–c) and quantitative texture statistics (Figure 5d): coconut palms show a higher NIR_asm value (0.3502) than betel nut palms (0.1951), and a lower NIR_contrast value (0.1442) than both betel nut palms (0.2210) and rubber trees (0.2219). This difference allows coconut palms to be distinguished from morphologically similar species when spectral or SAR signals overlap. Visually, the coconut palm distribution derived from the optimal subset (Figure 6, Optimal Feature Subset (a) and (b)) matches UAV reference imagery more closely than single-source schemes, while maintaining the spatial coherence of Scheme 4 and avoiding redundancy. Quantitatively, Table 3 shows the subset achieves an OA of 92.83% and a Kappa coefficient of 0.8975, which outperforms Scheme 4 (OA = 92.51%, Kappa = 0.8928) even though it uses 64% fewer features than the original 42-dimensional set. This validates feature selection’s ability to compress the feature space while retaining critical information, achieving “accuracy-efficiency” synergy for large-scale coconut palm mapping.

3.2. Comparison of Classification Accuracy Among Different Feature Combination Schemes

Table 3 presents the quantitative accuracy metrics for each feature combination scheme, while Figure 6 visually contrasts the extraction results against 0.07 m-resolution UAV reference imagery in two representative areas (a and b) of Wenchang City. As indicated by the comparative data, the classification performance varies significantly across different feature sets, with multi-source fusion demonstrating a clear advantage over single-source approaches.
In Scheme 1 (SAR Polarimetric Features), the classification relies solely on the 9-dimensional polarimetric parameters acquired by Sentinel-1. The PA for water bodies reaches 94.37%, highlighting SAR’s strength in delineating water boundaries. By contrast, the PA for coconut palms is only 69.70%, which indicates that microwave signals alone cannot effectively distinguish coconut palms from other land cover types. As shown in Scheme 1 (a) and Scheme 1 (b) of Figure 6, visual inspection reveals that the extracted coconut palm distribution exhibits fragmented characteristics and significant mixing with other land covers, consistent with the relatively low PA for coconut palms.
Scheme 2 (Optical Spectral Features) integrates 20-dimensional optical spectral features from Sentinel-2, delivering a notable improvement in classification performance: it achieves an OA of 91.86%, a Kappa coefficient of 0.8834, and a PA of 83.33% for coconut palms. In Scheme 2 (a) and Scheme 2 (b) of Figure 6, coconut palm patches exhibit greater contiguity and spatial consistency compared to Scheme 1. This outcome confirms that optical data captures the unique spectral traits of coconut palms, thereby distinguishing them from other land covers for more reliable identification.
By comparison, Scheme 3 (Texture and Topographic Features) integrates 13-dimensional texture and topographic features and exhibits the lowest classification performance among the four schemes, with an OA of 84.82% and a Kappa coefficient of 0.7816. From Scheme 3 (a) and Scheme 3 (b) in Figure 6, the coconut palm mapping results appear sparse and inaccurate, with a PA for coconut palms of only 66.67%. This finding highlights the limitation of using spatial structure information alone.
Notably, Scheme 4 (Multi-Source Fusion) integrates all 42-dimensional features and achieves the highest classification performance among the four schemes. While Scheme 2 (Optical Spectral Features) attained a respectable Overall Accuracy of 91.86%, it exhibited relatively high omission error for coconut palms with a Producer’s Accuracy of only 83.33%, primarily due to spectral saturation and similarity between coconut palms and other evergreen species. In contrast, Scheme 4 yields an OA of 92.51% and a Kappa coefficient of 0.8928, with the PA for coconut palms increasing to 87.12% and the UA for residential areas reaching 95.79%. This improvement of approximately 3.79 percentage points in coconut palm PA stems from the integration of structural information from SAR, particularly VV backscatter, and spatial context from texture features, which complement optical spectral data to separate coconut palms from spectrally similar evergreen vegetation. In Scheme 4 (a) and Scheme 4 (b) of Figure 6, the derived coconut palm distribution is more extensive and coherent, with patches closely aligning with the spatial patterns observed in the UAV reference imagery. These metrics confirm that multi-source fusion mitigates the limitations of single-source data while leveraging complementary information to enhance classification accuracy.
However, the Optimal Feature Subset (RF-OOB Selected) achieved the highest accuracy with an OA of 92.83%, slightly surpassing the full 42-dimensional Scheme 4. Despite reducing feature dimensionality to 15, this optimized subset retained the critical complementary information from SAR and texture features, maintaining the improved coconut palm PA of 87.12% compared to the optical-only Scheme 2. This dual achievement of high accuracy and dimensionality reduction is attributed to the mitigation of the Hughes phenomenon. The original high-dimensional feature set contained redundant variables (e.g., highly correlated texture metrics) and irrelevant noise (e.g., topographic factors in flat areas), which can introduce bias into the classifier. By removing these redundant features while preserving key discriminative information, the optimized subset reduced model complexity and the risk of overfitting, thereby enhancing the model’s generalization capability on the validation dataset.

3.3. Coconut Palm Distribution Extraction Results and Multi-Scale Validation

Based on the classification results of the optimal feature subset, the spatial distribution of coconut palms in Wenchang City is clearly delineated. By integrating dual-scale validation (regional scale and UAV fine scale), this study systematically evaluated the reliability of classification accuracy and error mechanisms, providing empirical support for the method’s applicability.
The classification results (Figure 7) reveal that coconut palms in Wenchang City exhibit an overall pattern of “coastal aggregation and northern scattering”, with significant spatial heterogeneity, a pattern closely tied to regional natural conditions and human activity intensity. In coastal areas (e.g., Dongjiao Town, Longlou Town), coconut palms are distributed contiguously, which is highly consistent with visual interpretation of high-resolution Google Earth imagery. Regulated by the marine climate, this area features suitable humidity and well-drained sandy soil, ideal for large-scale coconut palm growth; coupled with a long history of artificial cultivation, it forms a core distribution zone. In central areas (e.g., Wencheng Town, Huiwen Town), coconut palms mainly follow a “village-surrounding + farmland intercropping” pattern, with fragmented patches that interlace with cash crops (e.g., rubber plantations, pepper gardens), and the classification results accurately capture this human-modified landscape. This fragmented distribution reflects the “courtyard economy” model typical of local rural communities, where coconuts are planted as auxiliary crops. In northwestern areas (e.g., Jinshan Town, Fengpo Town), coconut palms are scattered sporadically on gentle slopes, forming a mixed landscape with shrubs and drylands. Restricted by soil and water conditions, coconut palms have low density, and occasional confusion with tall herbs occurs in classification results (due to similar spectral and scattering features under low coverage).
Multi-scale validation results confirm the reliability of classification accuracy. At the regional scale, a detailed analysis of confusion matrix statistics derived from 397 random validation samples (Table 3) shows that the optimal feature subset achieves an OA of 92.83% and a Kappa coefficient of 0.8975, verifying the model’s excellent overall ability to distinguish land cover types in the study area. Further analysis of class-specific accuracy reveals that the model performs exceptionally well in excluding non-target classes, as evidenced by the high User’s Accuracy (UA) for coconut palms (92.74%), indicating low commission errors and that pixels classified as coconut palms are highly likely to be correct. However, the Producer’s Accuracy (PA) for coconut palms is slightly lower at 87.12%, indicating that omission errors are the primary source of uncertainty; these errors mainly stem from confusion with other tall arbors and occur primarily in mixed planting areas where young or sparse coconut palms are misclassified into the “Others” category due to the dominance of background signals from understory vegetation. Nevertheless, the balanced performance across all metrics confirms that the 15 selected features comprising optical, SAR, and textural variables successfully capture the multi-dimensional characteristics required for accurate species-level mapping, meeting the reliability requirements for regional statistical accuracy and confirming the model’s ability to identify the overall distribution of coconut palms in the study area.
Among all classes, water bodies have the highest PA (95.74%), benefiting from the synergistic enhancement of NDWI and σVH, which enables accurate separation of water bodies from other ground objects. To further analyze error characteristics at the micro scale, independent validation was conducted in the core area of Chongxing Town using UAV imagery with a resolution of 0.07 m. A total of 519 samples were collected, and the corresponding confusion matrix is presented in Table 4. Accuracy analysis based on this matrix shows that the OA reaches 90.17% with a Kappa coefficient of 0.8617, slightly below the regional validation results. This discrepancy arises mainly because high-resolution data capture more micro-scale interlacing of ground objects (e.g., overlaps between coconut palms and building edges). For coconut palms, the PA is 89.12% and the UA is 97.04%, with errors stemming from two sources: (1) confusion between coconut palms and other Arecaceae species, whose spectral and scattering characteristics partially overlap; and (2) omission of coconut palms obscured by buildings in dense residential areas.
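As a check, the UAV-scale figures quoted above can be reproduced directly from the Table 4 confusion matrix with a few lines of NumPy (a verification sketch, not part of the original processing chain; rows are UAV reference classes, columns are classification results):

```python
import numpy as np

# Table 4 confusion matrix; class order: coconut palms,
# residential areas, water bodies, others.
cm = np.array([
    [131,   1,  1,  14],   # coconut palms (reference)
    [  0,  84,  1,   5],   # residential areas
    [  0,   0, 68,   0],   # water bodies
    [  4,  16,  9, 185],   # others
])

n = cm.sum()
po = np.trace(cm) / n                           # observed agreement (OA)
pe = (cm.sum(axis=1) @ cm.sum(axis=0)) / n**2   # chance agreement
kappa = (po - pe) / (1 - pe)

pa_coconut = cm[0, 0] / cm[0].sum()     # producer's accuracy (omission view)
ua_coconut = cm[0, 0] / cm[:, 0].sum()  # user's accuracy (commission view)

print(f"OA = {po:.2%}, Kappa = {kappa:.4f}")            # OA = 90.17%, Kappa = 0.8617
print(f"PA = {pa_coconut:.2%}, UA = {ua_coconut:.2%}")  # PA = 89.12%, UA = 97.04%
```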

4. Discussion

This study achieved high-precision extraction of coconut palms in Wenchang City through multi-source remote sensing fusion and feature selection technology. Beyond classification accuracy, the methodological value of this study lies in its capacity to leverage complementary structural and physiological information, which is essential for species-level discrimination in complex tropical environments.

4.1. Physical and Biological Interpretations of Feature Mechanisms

The superior performance of the optimized feature subset can be primarily attributed to the physical interpretability and ecological relevance of the selected features, rather than to purely statistical correlations. From a microwave remote sensing perspective, the high importance of the Sentinel-1 VV-polarized backscattering coefficient aligns with established scattering theories regarding C-band SAR interactions with agricultural targets. According to McNairn and Brisco, C-band backscatter is sensitive to the geometric structure and dielectric properties of the canopy [34]. Specifically, co-polarized (VV) signals often dominate in identifying vertical structures where the electric field vector aligns with the vertical orientation of stems or fronds. For coconut palms, the distinct vertical arrangement of the trunk and the radial geometry of the large pinnate fronds create a structural configuration that favors VV responsiveness. In contrast, cross-polarized backscatter (VH) is typically associated with volume scattering generated by multiple reflections within dense, randomly oriented canopies, such as those found in mixed shrublands or dense forests [35]. This polarization-dependent scattering behavior provides a robust physical basis for separating the vertically structured coconut palms from vegetation with more chaotic canopy geometries.
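The dual-pol descriptors behind this reasoning derive from the linear-scale VH/VV ratio q; following the expressions listed in Table 1, a per-pixel sketch with purely illustrative dB values (the specific backscatter numbers below are assumptions, not measurements from the study area):

```python
def dual_pol_params(sigma_vv_db, sigma_vh_db):
    """Polarization ratio q, co-pol purity mC, and DpRVIc from dB backscatter."""
    q = 10 ** (sigma_vh_db / 10) / 10 ** (sigma_vv_db / 10)  # linear-scale VH/VV
    m_c = (1 - q) / (1 + q)               # co-pol purity parameter
    dprvi_c = q * (q + 3) / (q + 1) ** 2  # dual-pol radar vegetation index
    return q, m_c, dprvi_c

# Hypothetical C-band values: a vertically structured palm canopy keeps VV
# well above VH, giving a small q and a high co-pol purity mC.
q, m_c, dprvi_c = dual_pol_params(sigma_vv_db=-8.0, sigma_vh_db=-15.0)
print(round(q, 3), round(m_c, 3), round(dprvi_c, 3))
```

A denser, randomly oriented canopy raises VH relative to VV, pushing q toward 1, mC toward 0, and DpRVIc upward, which is exactly the contrast the classifier exploits.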
In the optical domain, the dominance of CWI and GCI reflects their specific sensitivity to plant physiological limitations. Research by Ceccato et al. established that reflectance in the Short-Wave Infrared (SWIR) region is governed primarily by leaf water content and equivalent water thickness (EWT), which allows SWIR-based indices like CWI to detect variations in canopy water status that visible bands cannot capture [36]. This mechanism enables the differentiation of coconut palms, which maintain high water content year-round, from seasonally drier vegetation. Similarly, Gitelson et al. [37] demonstrated that Green Chlorophyll Indices (GCI) possess a higher sensitivity to chlorophyll content in high-biomass vegetation compared to NDVI, which tends to saturate when canopy density is high. By avoiding saturation, GCI effectively captures the subtle physiological variations between coconut palms and other evergreen species (e.g., rubber trees) that appear spectrally similar in standard red/NIR bands.
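Both indices are simple band arithmetic; following the formulas in Table 1 (Sentinel-2 bands: B3 = green, B8 = NIR, B8A = narrow NIR, B11/B12 = SWIR1/SWIR2), a per-pixel sketch with hypothetical surface reflectances (the numeric values below are assumptions for illustration):

```python
import numpy as np

def gci(b8, b3):
    """Green Chlorophyll Index: (NIR / green) - 1."""
    return b8 / b3 - 1.0

def cwi(b11, b12, b8a):
    """Canopy Water Index: SWIR normalized difference plus an NIR-based term."""
    return (b11 - b12) / (b11 + b12) + 0.03 * np.log(b8a)

# Hypothetical reflectances for a dense, well-watered palm canopy.
b3, b8, b8a, b11, b12 = 0.06, 0.42, 0.45, 0.18, 0.09
print(gci(b8, b3))        # strong NIR/green contrast -> high chlorophyll signal
print(cwi(b11, b12, b8a))
```

Water stress raises B12 relative to B11 and lowers CWI, while chlorophyll loss narrows the B8/B3 ratio and lowers GCI, which is what makes the pair physiologically discriminative.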
Texture features further enhance class separability by introducing spatial information independent of spectral magnitude. The high ranking of the Angular Second Moment (NIR_asm) suggests that the spatial arrangement of the canopy is a critical discriminator. Haralick et al. defined ASM as a measure of textural uniformity (or energy) [38]. In this study, the regular planting intervals and the distinct star-shaped crown geometry of coconut palms result in a higher degree of local spatial homogeneity (high ASM) at the 10 m resolution than the more heterogeneous and irregular texture of mixed betel nut or rubber plantations. This confirms that texture metrics are a necessary complement for resolving spectral confusion among species that are morphologically distinct yet spectrally overlapping.
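The ASM statistic itself is easy to make concrete. The following is a simplified NumPy sketch (one horizontal offset and a few grey levels, rather than the full GLCM configuration used in the study): build a co-occurrence matrix of adjacent pixel pairs, normalize it to probabilities, and sum the squared entries. A homogeneous canopy concentrates the GLCM in few cells and thus yields a high ASM.

```python
import numpy as np

def glcm_asm(img, levels=4):
    """ASM (energy) of the horizontal-offset GLCM of a quantized image."""
    glcm = np.zeros((levels, levels))
    # Count co-occurrences of each pixel with its right-hand neighbour.
    for a, b in zip(img[:, :-1].ravel(), img[:, 1:].ravel()):
        glcm[a, b] += 1
    p = glcm / glcm.sum()          # normalize to co-occurrence probabilities
    return float((p ** 2).sum())

uniform = np.zeros((8, 8), dtype=int)          # perfectly homogeneous patch
checker = np.indices((8, 8)).sum(axis=0) % 2   # maximally heterogeneous patch
print(glcm_asm(uniform))   # 1.0: all co-occurrence mass in a single cell
print(glcm_asm(checker))   # 0.5: mass split between two cells
```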

4.2. Synergy of Multi-Source Fusion and Comparative Analysis

The integration of these physically meaningful features significantly enhances the robustness of the classification framework, particularly in addressing the limitations of single-source data. Frequent rainy and cloudy weather in tropical regions causes data gaps in optical images, but the all-weather observation capability of Sentinel-1 SAR data can overcome this limitation. Specifically, the sensitivity of its polarimetric features to the “thick trunk and sparse canopy” structure of coconut palms, combined with the physiological features captured by Sentinel-2 optical data, forms dual “structural–physiological” constraints. This effectively alleviates the problem of spectral confusion between coconut palms and evergreen vegetation. Notably, Figure 5 reinforces the irreplaceable role of textural features in this complementarity: via its UAV ground truth (Figure 5a–c) and quantitative texture data (Figure 5d), the comparison of coconut palms, betel nut palms, and rubber trees shows that NIR_asm and NIR_contrast directly reflect canopy structural differences (e.g., coconut palms’ sparse pinnate canopy vs. betel nut palms’ dense fine foliage). These texture metrics fill the gap where spectral indices or SAR parameters fail to distinguish morphologically similar species, further reducing misclassification caused by overlapping signals [10]. In the specific context of Scheme 3 (Texture and Topographic Features), classification performance was largely driven by texture variables. Given the relatively flat terrain of the study area, topographic parameters served mainly as auxiliary environmental constraints, whereas texture features provided critical micro-scale structural information independent of spectral reflectance. This observation is consistent with the general principle of advantage complementarity in multi-source remote sensing, whereby each data type contributes distinct and non-redundant information in complex vegetation environments. 
Moreover, by orienting feature selection toward the biological characteristics of coconut palms, this study renders the synergistic effect more species-specific.
Comparisons with existing state-of-the-art studies further clarify the tangible improvements offered by this framework. Previous large-scale coconut mapping efforts, such as the global map produced by Descals et al. [17], predominantly relied on deep learning techniques applied to high-resolution satellite imagery. While their approach excels in identifying visual patterns of coconut crowns in cloud-free mosaics, it remains susceptible to the persistent cloud cover typical of tropical monsoon regions and lacks sensitivity to internal physiological traits. In contrast, our proposed framework integrates active microwave sensing (SAR) and biochemical vegetation indices (CWI, GCI), which operate independently of visual spectral bands. By incorporating SAR data that penetrates cloud cover and spectral indices sensitive to canopy water and chlorophyll content, our method demonstrates superior robustness in distinguishing coconut palms from morphologically similar species like betel nut trees, even under conditions where visual RGB features might be ambiguous or unavailable. This suggests that incorporating multi-source “structural–physiological” features offers a more reliable pathway for species-specific monitoring in complex tropical environments than relying on optical geometric features alone.

4.3. Model Robustness and Future Perspectives

The proposed framework effectively distinguishes coconut palms from morphologically similar vegetation such as betel nut and rubber trees (Figure 5), demonstrating strong species-level discriminative capability. The robustness of the model is substantiated by the high consistency between the regional-scale validation (OA = 92.83%) and the independent fine-scale UAV validation (OA = 90.17%). Despite this overall stability, uncertainty varies slightly with landscape complexity: misclassifications were more frequent in fragmented, low-density planting areas compared to contiguous coastal plantations, primarily due to the mixed-pixel effect at 10 m resolution.
This study focused on single-period imagery to ensure data consistency and to evaluate the feasibility of multi-source feature fusion under typical tropical conditions. In future research, the framework could be extended to incorporate time-series observations, enabling dynamic monitoring of seasonal canopy variations. Furthermore, integrating hyperspectral or biochemical indicators such as red-edge parameters may further enhance the separability of related palm species. Overall, these potential extensions would broaden the applicability of the proposed approach to long-term and large-scale tropical vegetation monitoring.

5. Conclusions

This study proposes and validates a GEE-based intelligent feature selection framework driven by multi-source remote sensing fusion and Random Forest, applied to the regional-scale extraction of coconut palms in Wenchang City. The main conclusions are as follows:
Multi-source data synergy significantly improves classification accuracy: By fusing Sentinel-1 SAR, Sentinel-2 optical, SRTM topographic, and texture features, a 42-dimensional multi-source feature set was constructed, achieving an OA of 92.51% (Kappa = 0.8928) and thus outperforming any single-data-source scenario.
Feature selection balances efficiency and accuracy: Based on RF OOB error analysis, a 15-dimensional optimal feature subset was selected. While reducing feature dimensionality by 64%, it raised the OA to 92.83% (Kappa = 0.8975), confirming that feature selection can effectively compress the feature space and improve model generalization.
The mechanism of key features is clear: CWI, GCI, and σVV are the core features for coconut palm identification, contributing over 40% cumulatively. CWI and GCI capture physiological characteristics (canopy water and chlorophyll status), while σVV captures vertical canopy structure, jointly distinguishing coconut palms from other ground objects.
Dual-scale validation confirms method reliability: The regional-scale validation achieves an OA of 92.83%, while the independent UAV validation reaches an OA of 90.17% (Kappa = 0.8617). Errors are mainly caused by confusion with other Arecaceae plants and occlusion by buildings, indicating that the method is suitable for regional-scale coconut palm resource surveys and has the potential to be extended to other tropical regions.
This study provides an efficient technical solution for the dynamic monitoring of coconut palm resources in tropical regions. In the future, the extraction accuracy in complex scenarios can be further improved by integrating time-series data and deep learning models.

Author Contributions

Conceptualization, T.W. and N.W.; methodology, T.W. and N.W.; validation, T.W., N.W., C.L. and W.B.; formal analysis, T.W.; investigation, N.W., C.L. and W.B.; resources, X.Y. and X.-M.L.; data curation, T.W.; writing—original draft preparation, T.W.; writing—review and editing, N.W. and X.-M.L.; visualization, T.W.; supervision, X.Y. and X.-M.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded in part by the Key Research and Development Project of Hainan Province (ZDYF2024XDNY257), and Hainan Province Science and Technology Special Fund (ATIC-2023010004).

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Yang, Y.; Iqbal, A.; Qadri, R. Breeding of coconut (Cocos nucifera L.): The tree of life. In Advances in Plant Breeding Strategies: Fruits; Springer: Berlin/Heidelberg, Germany, 2018; Volume 3, pp. 673–725.
2. Xie, Y.; Sha, Z.; Yu, M. Remote sensing imagery in vegetation mapping: A review. J. Plant Ecol. 2008, 1, 9–23.
3. Thenkabail, P.S.; Lyon, J.G.; Huete, A. Advances in hyperspectral remote sensing of vegetation and agricultural crops. In Fundamentals, Sensor Systems, Spectral Libraries, and Data Mining for Vegetation; CRC Press: Boca Raton, FL, USA, 2018; pp. 3–37.
4. Shen, H.; Li, X.; Cheng, Q.; Zeng, C.; Yang, G.; Li, H.; Zhang, L. Missing information reconstruction of remote sensing data: A technical review. IEEE Geosci. Remote Sens. Mag. 2015, 3, 61–85.
5. Monsalve-Tellez, J.M.; Torres-León, J.L.; Garcés-Gómez, Y.A. Evaluation of SAR and optical image fusion methods in oil palm crop cover classification using the random forest algorithm. Agriculture 2022, 12, 955.
6. Sarzynski, T.; Giam, X.; Carrasco, L.; Lee, J.S.H. Combining radar and optical imagery to map oil palm plantations in Sumatra, Indonesia, using the Google Earth Engine. Remote Sens. 2020, 12, 1220.
7. Orynbaikyzy, A.; Gessner, U.; Conrad, C. Crop type classification using a combination of optical and radar remote sensing data: A review. Int. J. Remote Sens. 2019, 40, 6553–6595.
8. McNairn, H.; Champagne, C.; Shang, J.; Holmstrom, D.; Reichert, G. Integration of optical and Synthetic Aperture Radar (SAR) imagery for delivering operational annual crop inventories. ISPRS J. Photogramm. Remote Sens. 2009, 64, 434–449.
9. Yang, J.; Xin, Z.; Huang, Y.; Liang, X. Multi-source remote sensing data shows a significant increase in vegetation on the Tibetan Plateau since 2000. Prog. Phys. Geogr. Earth Environ. 2023, 47, 597–624.
10. Van Tricht, K.; Gobin, A.; Gilliams, S.; Piccard, I. Synergistic use of radar Sentinel-1 and optical Sentinel-2 imagery for crop mapping: A case study for Belgium. Remote Sens. 2018, 10, 1642.
11. Reji, J.; Nidamanuri, R.R. Deep Learning-Based Multi-Sensor Approach for Precision Agricultural Crop Classification Based on Nitrogen Levels. IEEE Geosci. Remote Sens. Lett. 2025, 22, 2502405.
12. Liu, N.; Zhao, Q.; Williams, R.; Barrett, B. Enhanced crop classification through integrated optical and SAR data: A deep learning approach for multi-source image fusion. Int. J. Remote Sens. 2024, 45, 7605–7633.
13. Belgiu, M.; Drăguţ, L. Random forest in remote sensing: A review of applications and future directions. ISPRS J. Photogramm. Remote Sens. 2016, 114, 24–31.
14. Rodriguez-Galiano, V.F.; Ghimire, B.; Rogan, J.; Chica-Olmo, M.; Rigol-Sanchez, J.P. An assessment of the effectiveness of a random forest classifier for land-cover classification. ISPRS J. Photogramm. Remote Sens. 2012, 67, 93–104.
15. Stromann, O.; Nascetti, A.; Yousif, O.; Ban, Y. Dimensionality reduction and feature selection for object-based land cover classification based on Sentinel-1 and Sentinel-2 time series using Google Earth Engine. Remote Sens. 2019, 12, 76.
16. Yang, G.; Wang, J.; Qi, Z. Maize Classification in Arid Regions via Spatiotemporal Feature Optimization and Multi-Source Remote Sensing Integration. Agronomy 2025, 15, 1667.
17. Descals, A.; Wich, S.; Szantoi, Z.; Struebig, M.J.; Dennis, R.; Hatton, Z.; Ariffin, T.; Unus, N.; Gaveau, D.L.; Meijaard, E. High-resolution global map of closed-canopy coconut palm. Earth Syst. Sci. Data 2023, 15, 3991–4010.
18. Worachairungreung, M.; Kulpanich, N.; Sae-ngow, P.; Thanakunwutthirot, K.; Anurak, K.; Hemwan, P. Classification of Coconut Trees Within Plantations from UAV Images Using Deep Learning with Faster R-CNN and Mask R-CNN. J. Hum. Earth Future 2024, 5, 560–573.
19. Prabowo, Y.; Nasahara, K.N. Detecting and Counting Coconut Trees in Pleiades Satellite Imagery using Histogram of Oriented Gradients and Support Vector Machine. Int. J. Remote Sens. Earth Sci. 2019, 16, 87–98.
20. Varghese, A.O.; Suryavanshi, A.; Joshi, A.K. Analysis of different polarimetric target decomposition methods in forest density classification using C band SAR data. Int. J. Remote Sens. 2016, 37, 694–709.
21. Caiza-Morales, L.; Gómez, C.; Torres, R.; Nicolau, A.P.; Olano, J.M. MANGLEE: A tool for mapping and monitoring MANgrove ecosystem on Google Earth Engine—A case study in Ecuador. J. Geovis. Spat. Anal. 2024, 8, 17.
22. Wang, Z.; Liu, D.; Wang, M. Mapping main grain crops and change analysis in the West Liaohe River Basin with limited samples based on Google Earth Engine. Remote Sens. 2023, 15, 5515.
23. Numbisi, F.N.; Van Coillie, F.M.; De Wulf, R. Delineation of cocoa agroforests using multiseason Sentinel-1 SAR images: A low grey level range reduces uncertainties in GLCM texture-based mapping. ISPRS Int. J. Geo-Inf. 2019, 8, 179.
24. Valero, S.; Arnaud, L.; Planells, M.; Ceschia, E. Synergy of Sentinel-1 and Sentinel-2 imagery for early seasonal agricultural crop mapping. Remote Sens. 2021, 13, 4891.
25. Ma, C.; Li, X.; McCabe, M.F. Retrieval of high-resolution soil moisture through combination of Sentinel-1 and Sentinel-2 data. Remote Sens. 2020, 12, 2303.
26. Guo, F.; Feng, Q.; Yang, S.; Yang, W. Estimation of potato canopy leaf water content in various growth stages using UAV hyperspectral remote sensing and machine learning. Front. Plant Sci. 2024, 15, 1458589.
27. Zhang, H.; Li, J.; Gu, C.; Guan, L.; Wang, X.; Mumtaz, F.; Dong, Y.; Zhao, J.; Liu, Q.; Lin, S. The 10m-resolution global leaf chlorophyll content product using Sentinel-2 based on chlorophyll sensitive index CSI. Earth Syst. Sci. Data Discuss. 2025, 2025, 1–23.
28. Gan, Y.; Wang, Q.; Matsuzawa, T.; Song, G.; Iio, A. Multivariate regressions coupling colorimetric and textural features derived from UAV-based RGB images can trace spatiotemporal variations of LAI well in a deciduous forest. Int. J. Remote Sens. 2023, 44, 4559–4577.
29. Fu, Y.; Yang, G.; Song, X.; Li, Z.; Xu, X.; Feng, H.; Zhao, C. Improved estimation of winter wheat aboveground biomass using multiscale textures extracted from UAV-based digital images and hyperspectral feature analysis. Remote Sens. 2021, 13, 581.
30. Harfenmeister, K.; Itzerott, S.; Weltzien, C.; Spengler, D. Agricultural monitoring using polarimetric decomposition parameters of Sentinel-1 data. Remote Sens. 2021, 13, 575.
31. Mandal, D.; Kumar, V.; Ratha, D.; Dey, S.; Bhattacharya, A.; Lopez-Sanchez, J.M.; McNairn, H.; Rao, Y.S. Dual polarimetric radar vegetation index for crop growth monitoring using Sentinel-1 SAR data. Remote Sens. Environ. 2020, 247, 111954.
32. Mandal, D.; Ratha, D.; Bhattacharya, A.; Kumar, V.; McNairn, H.; Rao, Y.S.; Frery, A.C. A radar vegetation index for crop monitoring using compact polarimetric SAR data. IEEE Trans. Geosci. Remote Sens. 2020, 58, 6321–6335.
33. Rau, M.I.; Julzarika, A.; Yoshikawa, N.; Nagano, T.; Kimura, M.; Setiawan, B.I.; Ha, L.T. Application of topographic elevation data generated by remote sensing approaches to flood inundation analysis model. Paddy Water Environ. 2024, 22, 285–299.
34. McNairn, H.; Brisco, B. The application of C-band polarimetric SAR for agriculture: A review. Can. J. Remote Sens. 2004, 30, 525–542.
35. Freeman, A.; Durden, S.L. A three-component scattering model for polarimetric SAR data. IEEE Trans. Geosci. Remote Sens. 1998, 36, 963–973.
36. Ceccato, P.; Flasse, S.; Tarantola, S.; Jacquemoud, S.; Grégoire, J.M. Detecting vegetation leaf water content using reflectance in the optical domain. Remote Sens. Environ. 2001, 77, 22–33.
37. Gitelson, A.A.; Viña, A.; Ciganda, V.S.; Rundquist, D.C.; Arkebauer, T.J. Remote estimation of canopy chlorophyll content in crops. Geophys. Res. Lett. 2005, 32, L08403.
38. Haralick, R.M.; Shanmugam, K.; Dinstein, I. Textural features for image classification. IEEE Trans. Syst. Man Cybern. 1973, SMC-3, 610–621.
Figure 1. Geographic location and sample distribution of the study area: (a) Map of Wenchang City, showing terrain, township boundaries, category-specific sample points, lakes and reservoirs; (b) Location of Wenchang City within Hainan Island.
Figure 2. Overall technical workflow for coconut palm extraction in Wenchang City.
Figure 3. Independent UAV validation in a sub-area of Chongxing Town, Wenchang City: (a) Verification points of coconut palms overlaid on UAV imagery; (b) Dense contiguous coconut palms; (c) Sparse scattered coconut palms (red boxes indicate coconut palms).
Figure 4. RF-based feature analysis for coconut palm mapping: (a) Importance of 42 candidate features; (b) Trend of Out-of-Bag Error with increasing feature number.
Figure 5. Comparative texture features of different tree species: (a–c) UAV reference images of coconut palm, betel nut palm, and rubber tree regions, respectively, with their corresponding NIR_asm and NIR_corr texture maps; (d) Mean values of normalized texture features for the three tree species.
Figure 6. Comparison of coconut palm extraction results from different schemes with UAV reference imagery in representative areas (a,b) of Wenchang City.
Figure 7. Spatial distribution of coconut palms in Wenchang City identified by the optimal feature subset.
Table 1. Classification and Details of the 42-Dimensional Multi-Source Feature Set for Coconut Palm Extraction.

Spectral Features. Band features: B2, B3, B4, B5, B6, B7, B8, B8A, B11, B12. Vegetation index features: NDVI (Normalized Difference Vegetation Index), NDWI (Normalized Difference Water Index), EVI (Enhanced Vegetation Index), GNDVI (Green Normalized Difference Vegetation Index), SAVI (Soil-Adjusted Vegetation Index), GCI (Green Chlorophyll Index), DVI (Difference Vegetation Index), GDVI (Green Difference Vegetation Index), Chlorophyll (chlorophyll content), CWI (Canopy Water Index) [24,25,26,27]. Formulas:
  NDVI = (B8 − B4)/(B8 + B4)
  NDWI = (B3 − B8)/(B3 + B8)
  EVI = 2.5 × (B8 − B4)/(B8 + 6 × B4 − 7.5 × B2 + 1)
  GNDVI = (B8 − B3)/(B8 + B3)
  SAVI = (1 + 0.5) × (B8 − B4)/(B8 + B4 + 0.5)
  GCI = (B8/B3) − 1
  DVI = B8 − B4
  GDVI = B8 − B3
  Chlorophyll = 3.2 × (B8A − B5)/(B8A + B5) + 0.15 × (B8A − B4)
  CWI = (B11 − B12)/(B11 + B12) + 0.03 × ln(B8A)

Textural Features. NIR_asm (Angular Second Moment), NIR_contrast (Contrast), NIR_corr (Correlation), NIR_var (Variance), NIR_idm (Inverse Difference Moment), NIR_savg (Sum Average), NIR_svar (Sum Variance), NIR_sent (Sum Entropy), NIR_ent (Entropy) [28,29].

Radar Features. Backscattering features: σVV (VV-polarized backscattering coefficient), σVH (VH-polarized backscattering coefficient) [30]. Polarimetric decomposition features: θC (pseudo scattering-type parameter), HC (pseudo scattering entropy parameter), mC (co-pol purity parameter), DpRVIc (dual-polarimetric radar vegetation index), Class (scattering classification class), Ratio (polarization ratio), Inc (incidence angle) [31,32]. Formulas:
  Ratio = σVH/σVV (linear scale)
  θC = arctan[(1 − q)²/(1 − q + q²)] × 180/π
  HC = −Σ_{k=1,2} Pk log2 Pk, with P1 = 1/(1 + q), P2 = q/(1 + q)
  mC = (1 − q)/(1 + q)
  DpRVIc = q(q + 3)/(q + 1)²
  Class = f(HC, θC)

Topographic Features. Elevation, Slope, Aspect, Hillshade [33].

Notes: Sentinel-2 bands correspond to Blue (B2), Green (B3), Red (B4), Red Edge 1 (B5), Red Edge 2 (B6), Red Edge 3 (B7), NIR (B8), NIR Narrow (B8A), SWIR1 (B11), and SWIR2 (B12); the Sentinel-1 parameter q denotes the polarization ratio (Ratio).
Table 2. Ranking of 15 optimal features and their importance (descending order).

Rank  Feature        Importance
1     CWI            183.292
2     GCI            162.476
3     σVV            155.104
4     B2             154.035
5     Inc            153.743
6     B3             152.699
7     B12            152.389
8     B4             150.823
9     GNDVI          150.166
10    B5             149.983
11    B7             148.375
12    NIR_contrast   147.841
13    B11            145.831
14    B6             141.353
15    NIR_asm        139.392
Table 3. Classification accuracy comparison of different feature combination schemes. PA and UA values are listed in the order [Coconut Palms/Residential Areas/Water Bodies/Others].

Scheme 1 (SAR Polarimetric Features): 9 features; OA 87.09%; Kappa 0.8143; PA 69.70/83.45/94.37/93.24; UA 85.98/87.88/95.04/84.15
Scheme 2 (Optical Spectral Features): 20 features; OA 91.86%; Kappa 0.8834; PA 83.33/90.29/96.45/94.32; UA 90.91/94.72/97.84/88.13
Scheme 3 (Texture and Topographic Features): 13 features; OA 84.82%; Kappa 0.7816; PA 66.67/83.09/93.66/89.19; UA 88.89/81.34/93.66/83.12
Scheme 4 (Multi-Source Fusion): 42 features; OA 92.51%; Kappa 0.8928; PA 87.12/89.93/95.74/95.14; UA 92.00/95.79/97.83/88.66
Optimal Feature Subset (RF-OOB Selected): 15 features; OA 92.83%; Kappa 0.8975; PA 87.12/91.01/95.74/95.14; UA 92.74/94.76/97.83/89.80
Table 4. Confusion Matrix of Micro-Scale Validation (UAV Data). Rows are UAV reference classes; columns are classification results.

Reference (UAV)      Coconut Palms  Residential Areas  Water Bodies  Others  Total
Coconut Palms        131            1                  1             14      147
Residential Areas    0              84                 1             5       90
Water Bodies         0              0                  68            0       68
Others               4              16                 9             185     214
Total                135            101                79            204     519

Share and Cite

Wen, T.; Wang, N.; Yao, X.; Li, C.; Bi, W.; Li, X.-M. A Multi-Source Remote Sensing Identification Framework for Coconut Palm Mapping. Remote Sens. 2026, 18, 102. https://doi.org/10.3390/rs18010102
