Article

High-Resolution Mapping Coastal Wetland Vegetation Using Frequency-Augmented Deep Learning Method

1 School of Marine Science and Technology, Northwestern Polytechnical University, Xi’an 710072, China
2 National Marine Environmental Monitoring Center, Dalian 116023, China
3 Liaoning Provincial Natural Resources Affairs Service Center, Shenyang 110001, China
4 Shandong Territorial Spatial Planning Institute, Jinan 250014, China
* Author to whom correspondence should be addressed.
Remote Sens. 2026, 18(2), 247; https://doi.org/10.3390/rs18020247
Submission received: 2 December 2025 / Revised: 31 December 2025 / Accepted: 6 January 2026 / Published: 13 January 2026

Highlights

  • First use of very-high-resolution (2 cm) imagery for mapping coastal wetland vegetation: we combined UAV imagery with deep learning techniques to perform fine-grained classification of wetland vegetation.
  • A method for coastal vegetation classification from high-resolution imagery: we propose AFDFNet, a network that augments frequency-domain features. Experimental results demonstrate that AFDFNet consistently outperforms existing deep learning models, achieving state-of-the-art performance.
  • A Multi-scale Feature Enhancement Module: we designed this module to compensate for the misclassification caused by the lack of frequency-domain features and contextual information in standard networks.

Abstract

Coastal wetland vegetation exhibits pronounced spectral mixing, complex mosaic spatial patterns, and small target sizes, posing considerable challenges for fine-grained classification in high-resolution UAV imagery. Current deep learning approaches to remote sensing classification rely mainly on spectral and structural features, while the frequency-domain characteristics of ground objects remain underexploited. To address these issues, this study proposes a vegetation classification model that integrates spatial-domain and frequency-domain features. The model enhances global contextual modeling through a large-kernel convolution branch, while a frequency-domain interaction branch separates and fuses low-frequency structural information with high-frequency details. In addition, a shallow auxiliary supervision module is introduced to improve local detail learning and stabilize training. With a compact parameter scale suitable for real-world deployment, the proposed framework adapts effectively to high-resolution remote sensing scenarios. Experiments on typical coastal wetland vegetation, including Reeds, Spartina alterniflora, and Suaeda salsa, demonstrate that the proposed method consistently outperforms representative segmentation models such as UNet, DeepLabV3, TransUNet, SegFormer, D-LinkNet, and MCCA across multiple metrics, including Accuracy, Recall, F1 Score, and mIoU. Overall, the results show that the proposed model effectively addresses the challenges of subtle spectral differences, pervasive species mixture, and intricate structural details, offering a robust and efficient solution for UAV-based wetland vegetation mapping and ecological monitoring.

1. Introduction

Coastal wetlands are among the most ecologically valuable ecosystems on Earth. They play indispensable roles in maintaining biodiversity, sequestering carbon, mitigating emissions, preventing shoreline erosion, and regulating regional climate systems [1]. As a key component of these ecosystems, wetland vegetation not only reflects their structural and functional attributes but also provides critical indicators for ecological succession, invasive species expansion, and restoration processes [2].
Traditional investigations of coastal wetland vegetation have largely relied on field surveys, which are expensive, spatially and temporally constrained, and relatively inefficient [3,4,5]. Remote sensing offers an effective alternative, enabling large-scale and long-term monitoring while reducing labor and resource requirements. Among various remote sensing platforms, unmanned aerial vehicles (UAVs) have gained widespread attention due to their high mobility, low cost, and exceptional spatiotemporal resolution. UAV-based remote sensing has therefore been increasingly applied in ecological monitoring, agricultural management, and wetland assessment [6]. Previous studies have demonstrated the potential of UAV imagery in coastal vegetation research, including:
Akinaga et al. used high-resolution drone imagery to assess blue carbon reserves in tidal flat seagrass beds [7]. Liu et al. applied deep learning methods to the classification of coastal habitats based on drones and compared different data fusion and semantic segmentation techniques [8]. James et al. explored the modeling and prediction of coastal vegetation and environmental information from near-infrared (NIR) UAV imagery, which has reference value for vegetation indices and classification [9]. Morgan et al. studied how to estimate the aboveground biomass (AGB) of intertidal wetland vegetation using UAV multispectral imagery combined with ground samples [10]. While UAV remote sensing provides abundant high-resolution data, the sheer volume of such imagery poses considerable challenges for data processing and analysis. To process massive amounts of high-resolution image data more efficiently, computer vision techniques are becoming increasingly dominant.
Advancements in computer vision have greatly propelled remote sensing image analysis. From early digital image processing techniques to modern deep learning-based approaches, these methods have proven highly effective for UAV imagery. Deep learning, in particular, eliminates the need for handcrafted feature design and offers strong representational capacity, making it the dominant paradigm in current remote sensing research. Compared with traditional digital image processing methods, deep learning-based approaches exhibit distinct advantages in coastal wetland studies. Conventional methods often rely on manually designed features and fixed thresholds, which are sensitive to illumination variations, seasonal changes, and the complex spectral–spatial heterogeneity commonly observed in coastal wetland environments. In contrast, deep learning models are capable of automatically learning hierarchical and discriminative features from data, enabling more robust representation of diverse wetland components such as vegetation, tidal flats, and water bodies. Architectures such as convolutional neural networks (CNNs), attention mechanisms, and vision transformers have shown remarkable performance in coastal vegetation classification and recognition [11,12]. Wang et al. achieved high-precision pixel-level mangrove species classification using a semantic segmentation network [13]. Ke et al. conducted detailed mapping and change detection in the Liaohe Estuary using time-series imagery and deep learning models [14]. Cruz et al. identified an invasive species English glasswort in salt marsh and mudflat environments using a U-Net architecture built on an Inception-v3 backbone [15]. These studies collectively highlight the effectiveness of deep learning for species extraction and recognition in coastal wetlands. However, most existing methods focus primarily on spatial textures and spectral features, while largely overlooking the rich information embedded in images’ frequency domain.
Although numerous studies have advanced coastal wetland monitoring at regional and temporal scales, most large-scale or long-term analyses still rely on moderate- and low-resolution satellite imagery such as Landsat, MODIS, and Sentinel-2 [16,17,18,19,20]. These datasets provide long-term records, global coverage, and free accessibility, making them well suited for time-series analysis and large-scale mapping [21]. As a result, many national, regional, and global wetland mapping products rely on imagery with spatial resolutions of 10–30 m. In contrast, studies utilizing UAV or commercial sub-5 m and sub-meter resolution imagery are typically localized case studies used to validate, refine, or interpret heterogeneity in medium-resolution products and have not yet become mainstream [22].
In summary, existing research on coastal wetland vegetation classification is still dominated by medium- and low-resolution satellite imagery. Studies leveraging ultra-high-resolution UAV imagery remain limited, and most current methods overlook critical frequency-domain characteristics. Consequently, these approaches often struggle with small vegetation targets and complex co-occurrence patterns. To address these challenges, this study constructs an ultra-high-resolution RGB UAV dataset for coastal vegetation classification, covering representative species such as Reeds, Spartina alterniflora, and Suaeda salsa, as well as water bodies. Furthermore, we propose AFDFNet, an augmented frequency-domain feature network. Experimental results demonstrate that AFDFNet consistently outperforms existing deep learning models, achieving state-of-the-art performance.

2. Materials and Methods

2.1. Dataset Construction

The data used in this study were acquired using an airborne UAV manufactured by Feima Robotics (Beijing, China). The UAV captured true orthophoto imagery with a spatial resolution of 0.02 m using an RGB camera. After preprocessing steps including stitching and mosaicking, the resulting image measured 22,169 × 20,732 pixels and was projected onto the WGS-84 coordinate system. The processed imagery and the geographic extent of the study area are shown in Figure 1.
This drone survey was conducted in the coastal wetlands of Dongying City, Shandong Province, China, covering an area of approximately 0.18 square kilometers (about 443 m × 415 m, as implied by the mosaic dimensions at 0.02 m resolution). Dongying is located at the mouth of the Yellow River and is a typical delta wetland. Vegetation in this region exhibits distinct zonation patterns. Suaeda salsa primarily grows in low-lying saline–alkaline zones of the intertidal area and is one of the most representative halophytic communities in the Yellow River Delta [23]. Reeds are widely distributed across freshwater and slightly brackish wetlands and serve as the dominant native species in the region [24]. The invasive species Spartina alterniflora has rapidly expanded along the coastal zone, forming complex competitive and coexisting relationships with Reeds in certain areas.
Figure 2 shows the constructed dataset. The selected images fully cover all major vegetation types within the study area, and pixel-level labels were manually annotated through visual interpretation. The dataset covers six categories: Reeds, Spartina alterniflora, Suaeda salsa, weeds, water bodies, and bare tidal flats. Suaeda salsa, a species beneficial for wetland environmental management, occurs as weak, small-scale targets and was therefore given the largest sample size. Reeds (Phragmites australis) and Spartina alterniflora represent the native and invasive species of the study area, reflecting vegetation invasion in the wetland. Other sparsely distributed vegetation was grouped as weeds, with water bodies and bare tidal flats serving as background; these categories received smaller sample sizes. Specifically, a total of 160 image–label pairs of 256 × 256 pixels were generated: 32 images each for Reeds and Spartina alterniflora; 48 images for Suaeda salsa, which is a weak target in this area; 16 images for weeds; and 16 images for water bodies and exposed tidal flats, which are mostly background. The dataset was divided into training and testing sets at a 7:3 ratio.
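As a concrete illustration of this tiling and splitting step, the following is a minimal Python sketch. It is not the authors' preprocessing code; the array shapes and function names are assumptions for illustration only.

```python
import numpy as np

def tile_image_pairs(image, label, tile=256, stride=256):
    """Cut an orthomosaic and its label raster into 256 x 256 pairs.
    image: (H, W, 3) uint8 RGB mosaic; label: (H, W) class-index map."""
    pairs = []
    H, W = label.shape
    for top in range(0, H - tile + 1, stride):
        for left in range(0, W - tile + 1, stride):
            pairs.append((image[top:top + tile, left:left + tile],
                          label[top:top + tile, left:left + tile]))
    return pairs

def split_train_test(pairs, ratio=0.7, seed=42):
    """Shuffle and split the tile pairs at a 7:3 ratio."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(pairs))
    cut = int(len(pairs) * ratio)
    return [pairs[i] for i in idx[:cut]], [pairs[i] for i in idx[cut:]]
```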

2.2. AFDFNet Model Development

2.2.1. Overall Framework

As a typical transitional ecosystem, coastal wetlands exhibit highly mixed spectral characteristics. Vegetation types such as Reeds, Spartina alterniflora, and halophytic plants often coexist and intermingle within the same area, and their canopy structures and moisture conditions change dynamically. In addition, Suaeda salsa pixels frequently contain mixed spectral responses from adjacent tidal flats, making it difficult for traditional remote sensing imagery to maintain spectral purity at the pixel level. Based on the dataset constructed in Section 2.1, a representative image was selected for each category and its per-band pixel mean was calculated to obtain a spectral curve. The spectral curves obtained from the UAV imagery are shown in Figure 3.
Moreover, the characteristic coexistence and mosaic patterns of coastal wetland vegetation further blur class boundaries. For instance, Reeds and Spartina alterniflora often appear in mixed stands along the wetland ecotone, leading to overlapping textures and spectral signatures among vegetation types. Such interactions reduce inter-class separability and pose significant challenges for conventional classification methods [25,26]. To address these issues, we propose AFDFNet, a deep learning framework specifically designed for UAV-based coastal wetland vegetation classification.
The proposed method is illustrated in Figure 4. The architecture adopts an encoder–decoder structure, with ResNet-50 serving as the backbone network [27]. ResNet-50, which won the ImageNet 2015 classification task, remains one of the most widely used feature extractors for high-resolution remote sensing imagery. The backbone extracts four hierarchical feature maps from shallow to deep levels. These feature maps are then fed into the Multi-scale Feature Enhancement Module (MFEM), which contains two parallel branches: a large-kernel convolution branch, designed to capture contextual dependencies among different vegetation types, and a frequency-domain interaction branch, used to extract informative frequency components that are typically overlooked by spatial-domain models. After processing through the MFEM, the enhanced features are progressively upsampled from deep to shallow layers and fused via skip connections. The final output is upsampled to the original image resolution for loss computation.
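For reference, a minimal PyTorch sketch of such a four-stage ResNet-50 feature extractor is shown below. This is an assumed wiring built on torchvision, not the authors' released code.

```python
import torch
import torch.nn as nn
from torchvision import models

class ResNet50Encoder(nn.Module):
    """Returns the layer1..layer4 feature maps that would feed the MFEMs."""
    def __init__(self, pretrained=True):
        super().__init__()
        net = models.resnet50(weights="IMAGENET1K_V1" if pretrained else None)
        self.stem = nn.Sequential(net.conv1, net.bn1, net.relu, net.maxpool)
        self.stages = nn.ModuleList([net.layer1, net.layer2, net.layer3, net.layer4])

    def forward(self, x):
        x = self.stem(x)
        feats = []
        for stage in self.stages:
            x = stage(x)
            feats.append(x)  # strides 4, 8, 16, 32; channels 256, 512, 1024, 2048
        return feats

# A 256 x 256 RGB tile yields feature maps of 64, 32, 16, and 8 pixels per side.
encoder = ResNet50Encoder(pretrained=False)
feats = encoder(torch.randn(1, 3, 256, 256))
print([tuple(f.shape) for f in feats])
```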

2.2.2. Multi-Scale Feature Enhancement Module (MFEM)

In coastal wetland vegetation classification, incorporating contextual modeling and frequency-domain feature extraction is essential. On the one hand, many vegetation patches in wetlands (e.g., Suaeda salsa) exist as small or weakly expressed targets [28], making it difficult for conventional local convolutions to achieve adequate discriminative capability. On the other hand, mixed stands and mosaic patterns are widespread in coastal wetlands, and UAV imagery typically provides limited spectral dimensionality. As a result, single-pixel spectral and textural information is often insufficient. In this context, frequency-domain features can effectively separate low-frequency regional structures from high-frequency boundary details [29], thereby improving the model’s sensitivity to ambiguous boundaries and mixed pixels.
To address the limitations of traditional wetland vegetation classification methods—which often overlook frequency-domain and contextual information—we design an embedded Multi-scale Feature Enhancement Module (MFEM), shown in Figure 5. The MFEM consists of two parallel branches: a large-kernel convolution branch and a frequency-domain interaction branch. This design reduces misclassification and omission errors caused by insufficient contextual or frequency cues in standard deep networks.
(1)
Large-kernel Convolution Branch
To enhance contextual modeling capability, this branch employs large convolution kernels to expand the receptive field and capture richer spatial relationships. Specifically, the feature maps f extracted from the backbone are processed through two large-kernel convolutions (5 × 5 and 7 × 7) to extract multi-scale spatial features. The resulting feature maps are concatenated along the channel dimension and passed through both max pooling and average pooling to obtain statistical descriptors at two scales. These pooled features are then fed into a 7 × 7 convolution to generate two spatial attention masks, which modulate the outputs of the 5 × 5 and 7 × 7 convolutions through element-wise multiplication. Finally, the enhanced features are fused and forwarded to the next stage.
To preserve the fine spatial details inherent in ultra-high-resolution UAV imagery, the feature map resolution remains unchanged throughout this branch.
The process can be mathematically expressed as:
$L = \mathrm{conv}_{5 \times 5}(f) \times m_1 + \mathrm{conv}_{7 \times 7}(f) \times m_2$
where
  • $L$ denotes the output of the large-kernel branch,
  • $f$ is the backbone feature map,
  • $m_1$ and $m_2$ represent the spatial attention masks generated via average pooling and max pooling, respectively, and $\times$ denotes element-wise multiplication.
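A minimal PyTorch sketch of this branch follows. The channel-wise reading of the average/max pooling step is an assumption, since the text does not fully specify it; class and parameter names are illustrative.

```python
import torch
import torch.nn as nn

class LargeKernelBranch(nn.Module):
    """Sketch of L = conv5x5(f) * m1 + conv7x7(f) * m2."""
    def __init__(self, channels):
        super().__init__()
        self.conv5 = nn.Conv2d(channels, channels, 5, padding=2)
        self.conv7 = nn.Conv2d(channels, channels, 7, padding=3)
        # 7x7 conv over the two pooled descriptors -> two spatial masks
        self.mask_conv = nn.Conv2d(2, 2, 7, padding=3)

    def forward(self, f):
        a, b = self.conv5(f), self.conv7(f)          # multi-scale spatial features
        cat = torch.cat([a, b], dim=1)
        avg = cat.mean(dim=1, keepdim=True)          # average-pooled descriptor
        mx = cat.max(dim=1, keepdim=True).values     # max-pooled descriptor
        masks = torch.sigmoid(self.mask_conv(torch.cat([avg, mx], dim=1)))
        m1, m2 = masks[:, 0:1], masks[:, 1:2]
        return a * m1 + b * m2                       # resolution is preserved
```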
(2)
Frequency-domain Interaction Branch
This branch aims to strengthen representation learning from a frequency-domain perspective. Starting from the same backbone feature map f, the branch performs explicit separation of high-frequency and low-frequency components to enable multi-frequency feature modeling.
A 1 × 1 convolution is first applied to extract high-frequency information while preserving spatial resolution. Meanwhile, low-frequency features are obtained by applying average pooling followed by a 1 × 1 convolution to the downsampled feature map. Based on these decomposed components, an Octave Convolution [30] is used to enable bidirectional interaction and fusion between high- and low-frequency signals. Octave Convolution (OctConv) is designed to reduce spatial redundancy in convolutional neural networks by explicitly decomposing feature maps into high- and low-frequency components. Instead of processing all feature channels at the same spatial resolution, OctConv allocates high-frequency features to full-resolution maps to preserve fine spatial details, while low-frequency features are computed at a reduced resolution to capture global contextual information. Information exchange between the two frequency components is achieved through four convolutional pathways, including high-to-high, high-to-low, low-to-low, and low-to-high transformations, where downsampling and upsampling operations are used to enable cross-frequency interaction. By performing convolution on low-frequency features at a coarser spatial scale, OctConv effectively enlarges the receptive field and reduces computational cost, while maintaining the ability to model both detailed structures and large-scale semantic patterns. As a learnable operator, the Octave Convolution adaptively adjusts cross-frequency information flow during training, achieving effective fusion of complementary spectral–spatial cues.
The computation is formulated as:
$F = \mathrm{OctaveConv}\big(\mathrm{conv}_{1 \times 1}(f),\; \mathrm{conv}_{1 \times 1}(\mathrm{downsample}(f))\big)$
where
  • $F$ is the output of the frequency-domain interaction branch,
  • $f$ is the backbone feature map, and the two $\mathrm{conv}_{1 \times 1}$ terms produce the high- and low-frequency inputs of the Octave Convolution, respectively.
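To make the cross-frequency exchange concrete, here is a simplified, single-layer stand-in for this branch. The actual model uses Octave Convolution [30]; this sketch only spells out the four high/low pathways with plain convolutions under assumed shapes and is not the authors' implementation.

```python
import torch.nn as nn
import torch.nn.functional as F

class FrequencyBranch(nn.Module):
    """Simplified stand-in for the frequency-domain interaction branch."""
    def __init__(self, channels):
        super().__init__()
        self.to_high = nn.Conv2d(channels, channels, 1)  # 1x1 conv, full resolution
        self.to_low = nn.Conv2d(channels, channels, 1)   # 1x1 conv after pooling
        self.hh = nn.Conv2d(channels, channels, 3, padding=1)  # high-to-high
        self.hl = nn.Conv2d(channels, channels, 3, padding=1)  # high-to-low
        self.ll = nn.Conv2d(channels, channels, 3, padding=1)  # low-to-low
        self.lh = nn.Conv2d(channels, channels, 3, padding=1)  # low-to-high

    def forward(self, f):
        high = self.to_high(f)                           # high-frequency split
        low = self.to_low(F.avg_pool2d(f, 2))            # low-frequency split
        size = high.shape[-2:]
        # Bidirectional exchange: pooling and upsampling bridge the two scales.
        h_out = self.hh(high) + F.interpolate(self.lh(low), size=size,
                                              mode="bilinear", align_corners=False)
        l_out = self.ll(low) + self.hl(F.avg_pool2d(high, 2))
        # Fuse the two frequency components into one full-resolution map.
        return h_out + F.interpolate(l_out, size=size,
                                     mode="bilinear", align_corners=False)
```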
(3)
Decoder with Cascaded MFEMs
Multiple MFEMs are stacked to form the decoder of the proposed model. The four feature maps enhanced by MFEM are progressively upsampled from deep to shallow layers. At each stage, the upsampled feature map is passed through a 3 × 3 convolution and concatenated with the corresponding MFEM output from the next layer. This process repeats until the feature map resolution matches that of the input image.
The decoding process can be expressed as:
$D_{i+1} = (L_i + F_i) + D_i$
$D_0 = L_0 + F_0$
where
  • $D_i$ denotes the decoder features at each scale, and $L_i$ and $F_i$ denote the outputs of the two MFEM branches at the corresponding level.
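A compact PyTorch sketch of this cascaded decoding is given below. It follows the additive form of the equations above (the prose also mentions concatenation, so the fusion operator is an assumption), and the channel counts are illustrative, taken from a ResNet-50 backbone.

```python
import torch.nn as nn
import torch.nn.functional as F

class MFEMDecoder(nn.Module):
    """Cascaded decoding sketch following D_{i+1} = (L_i + F_i) + D_i."""
    def __init__(self, channels=(2048, 1024, 512, 256)):
        super().__init__()
        # one 3x3 conv per transition between adjacent pyramid levels
        self.refine = nn.ModuleList([
            nn.Conv2d(c_deep, c_shallow, 3, padding=1)
            for c_deep, c_shallow in zip(channels[:-1], channels[1:])
        ])

    def forward(self, mfem_outs):
        # mfem_outs: MFEM-enhanced maps (L_i + F_i), ordered deep -> shallow
        d = mfem_outs[0]                                 # D_0 = L_0 + F_0
        for conv, skip in zip(self.refine, mfem_outs[1:]):
            d = F.interpolate(d, size=skip.shape[-2:],   # upsample D_i
                              mode="bilinear", align_corners=False)
            d = conv(d) + skip                           # D_{i+1} = (L + F) + D_i
        return d                                         # then upsample to input size
```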

2.2.3. Loss Function

To enhance the model’s ability to recognize fine-grained structures of coastal vegetation, this study adopts a joint loss function consisting of two components. The first component is the conventional semantic segmentation loss, where pixel-wise cross-entropy is applied to constrain the final semantic prediction. This loss guides the model to accurately learn the spatial distribution patterns of different vegetation types and serves as the primary supervisory signal driving model convergence and feature learning.
However, in coastal wetland environments, vegetation targets are often characterized by small object sizes, blurred boundaries, and mixed or mosaic distributions [31]. Relying solely on deep-layer features during final prediction may lead to insufficient utilization of detailed information contained in shallow features. Therefore, a second loss component is introduced: an auxiliary supervision applied to the auxiliary prediction generated from shallow features, also using cross-entropy. Shallow features retain higher-resolution textures and edge cues; enforcing additional supervision on them helps the model better capture small-scale structures and boundary details, and promotes multi-level feature consistency throughout the network.
The loss functions are defined as follows:
$l_{ce1}(Y, \hat{Y}) = \dfrac{\sum_{i=0}^{H} \sum_{j=0}^{W} \left[ -\sum_{l=1}^{L} Y_{i,j}^{l} \log\big(\hat{Y}_{i,j}^{l}\big) \right]}{H \times W}$
$l_{ce2}(Y, \mathrm{upsample}(\hat{Y}')) = \dfrac{\sum_{i=0}^{H} \sum_{j=0}^{W} \left[ -\sum_{l=1}^{L} Y_{i,j}^{l} \log\big(\mathrm{upsample}(\hat{Y}')_{i,j}^{l}\big) \right]}{H \times W}$
where
  • $\hat{Y}$ denotes the prediction generated from the final decoding layer,
  • $\hat{Y}'$ denotes the auxiliary prediction obtained from deeper decoder features,
  • $Y$ denotes the ground-truth label,
  • $H$ and $W$ denote the image height and width,
  • $i$ and $j$ denote the pixel row and column indices,
  • $L$ denotes the number of classes.
The total loss is defined as the sum of the main-branch and auxiliary-branch losses. This dual-supervision mechanism not only enhances the representational capability of the model but also improves its discriminative power and generalization performance when handling complex vegetation patterns in coastal wetlands:
$l_{total} = l_{ce1} + l_{ce2}$
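In PyTorch terms, this dual supervision reduces to two cross-entropy terms; a minimal sketch follows, with tensor names and shapes assumed for illustration.

```python
import torch.nn.functional as F

def joint_loss(main_logits, aux_logits, target):
    """Dual-supervision loss sketch: l_total = l_ce1 + l_ce2.
    main_logits: (B, L, H, W) final prediction; aux_logits: (B, L, h, w)
    auxiliary prediction from deeper decoder features; target: (B, H, W)
    long tensor of class indices."""
    l_ce1 = F.cross_entropy(main_logits, target)
    aux_up = F.interpolate(aux_logits, size=target.shape[-2:],
                           mode="bilinear", align_corners=False)
    l_ce2 = F.cross_entropy(aux_up, target)
    return l_ce1 + l_ce2
```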

2.3. Model Evaluation Indicators

To comprehensively evaluate the performance of the model in coastal vegetation classification, this study adopts several commonly used semantic segmentation and classification metrics, including Overall Accuracy (OA), Kappa coefficient (Kappa), Recall, and Mean Intersection over Union (mIoU) [32]. The calculation formulas for all indicators are shown in Table 1. These indicators assess the model’s classification capability from multiple perspectives and provide a holistic evaluation of its effectiveness in identifying small vegetation patches, ambiguous boundaries, and mixed or mosaic distribution areas.
TP, TN, FP, and FN are defined as follows:
TP (true positives): the number of pixels that are positive in the label and positive in the prediction.
TN (true negatives): the number of pixels that are negative in the label and negative in the prediction.
FP (false positives): the number of pixels that are negative in the label but positive in the prediction.
FN (false negatives): the number of pixels that are positive in the label but negative in the prediction.
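For completeness, the following sketch computes the Table 1 metrics from a confusion matrix, with Recall and mIoU averaged over N classes (a generalization of the binary forms in Table 1). It is an illustrative helper, not the authors' evaluation code.

```python
import numpy as np

def metrics_from_confusion(cm):
    """Compute OA, Kappa, mean Recall, and mIoU from an N x N confusion
    matrix (rows = reference labels, columns = predictions)."""
    n = cm.sum()
    tp = np.diag(cm).astype(float)
    fp = cm.sum(axis=0) - tp            # predicted as the class, other label
    fn = cm.sum(axis=1) - tp            # labeled as the class, missed
    oa = tp.sum() / n                   # overall accuracy (P0)
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / (n ** 2)
    kappa = (oa - pe) / (1 - pe)
    recall = (tp / np.maximum(tp + fn, 1)).mean()     # TP / (TP + FN)
    miou = (tp / np.maximum(tp + fp + fn, 1)).mean()  # per-class IoU mean
    return oa, kappa, recall, miou
```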

2.4. Comparison Method

To thoroughly validate the effectiveness of the proposed method in coastal wetland vegetation classification, this study selects a set of representative semantic segmentation models for comparison. These models include classical convolution-based architectures, encoder–decoder structures, Transformer-based frameworks, and multi-scale feature fusion methods. Each model emphasizes different mechanisms of feature representation, enabling a comprehensive assessment of the proposed model’s advantages in detecting small vegetation targets, handling ambiguous boundaries, and extracting multi-scale features.
The comparison set includes four semantic segmentation models originally designed for natural images—UNet [33], DeepLabV3 [34], TransUNet [35], and SegFormer [36]—as well as two models specifically developed for remote sensing applications: D-LinkNet [37] and the boundary supervision-aided multiscale channel-wise cross-attention network (MCCA) [38].

3. Results

3.1. Coastal Wetland Vegetation Classification Mapping and Analysis

In this section, large-scale coastal wetland vegetation classification mapping was conducted using the proposed AFDFNet. The classification results are shown in Figure 6. As illustrated, AFDFNet maintains strong performance when applied to UAV high-resolution imagery of the coastal wetland area, producing classification outputs that closely align with the spatial patterns observed in the RGB images.
Based on the mapping results, we further calculated the pixel proportion of each vegetation type within the study area and selected three representative species (Spartina alterniflora, Reeds, and Suaeda salsa) to evaluate the species-specific recognition accuracy of AFDFNet. The statistics are presented in Figure 7. The results indicate that the study area contains approximately 8% Reeds, 12% Spartina alterniflora, and 23% Suaeda salsa. The high proportion of Suaeda salsa shows that the area supports a substantial amount of this native salt-tolerant species, which plays an important role in maintaining ecosystem function [39]. This suggests that the ecological condition of the region is generally healthy.
Meanwhile, the proportion of Reeds is lower than that of Spartina alterniflora. Given that Reeds is an indigenous dominant species commonly found in estuarine wetlands, whereas Spartina alterniflora is an invasive species, this distribution pattern indicates that the local vegetation ecosystem is being affected by the spread of Spartina alterniflora.
The proposed AFDFNet model exhibits excellent identification accuracy for all three vegetation types. The identification accuracy for each species exceeds 90%, further confirming that the AFDFNet model maintains robust performance even in complex wetland environments with coexisting species and mosaic spatial patterns.

3.2. Comparison and Analysis of Model Results

The performance of all models on the test set is summarized in Table 2, and the visualized results are shown in Figure 8. As the results in Figure 8b,f show, the proposed method maintains excellent performance in the face of complex symbiotic relationships between weak vegetation such as Suaeda salsa and mixed stands of Spartina alterniflora and Reeds. The proposed AFDFNet exhibits outstanding performance, demonstrating clear advantages over other mainstream deep learning methods in terms of Overall Accuracy, Kappa coefficient, and Recall. In addition, TransUNet, which integrates Transformer and CNN architectures, also achieves competitive results and attains the best performance in Kappa. Specifically, compared with TransUNet, the proposed method improves Accuracy by 0.93%, mIoU by 1.89%, and Recall by 3.05%.
From Table 2, it can be observed that models using Vision Transformer (ViT)-based encoders struggle to maintain high performance when applied to ultra-high-resolution remote sensing imagery. The visual comparison further confirms this limitation. As shown in Figure 8f, SegFormer faces challenges in identifying regions where Reeds and Spartina alterniflora coexist. The model lacks sufficient detail representation, misclassifying clustered Spartina alterniflora as Reeds. This issue may arise because SegFormer employs a ViT encoder combined with a lightweight multi-layer perceptron (MLP) decoder. While this design enables efficient global context modeling and multi-scale feature aggregation, the MLP decoder lacks explicit spatial structural constraints and therefore does not recover fine-grained spatial details as effectively as convolution-based decoders [40]. Consequently, in complex boundaries or fine-textured regions, such as the small clustered distribution of Spartina alterniflora, the model produces blurred or overly smoothed predictions, causing small vegetation patches to be absorbed into surrounding Reeds areas.
A similar problem is observed with DeepLabV3 (Figure 8b). DeepLabV3 employs an Atrous Spatial Pyramid Pooling (ASPP) module to enlarge the receptive field through dilated convolutions; however, the sparse sampling nature of dilated convolution tends to weaken high-frequency texture information, such as the fine leaf structures of Spartina alterniflora [41]. In mixed-species zones, the suppression of high-frequency cues causes the model to rely more heavily on low-frequency regional shapes during classification, which ultimately leads to misclassification of Spartina alterniflora as Reeds.

4. Discussion

4.1. Evaluation of the AFDFNet Based on Ablation Experiments and Parameter Analysis

To further verify the contribution of each module within the proposed framework, this section employs the ResNet-50 backbone combined with a decoder in which the MFEM is removed as the baseline network. Based on this baseline, the following ablation variants were designed:
(1) baseline + a: the model with the frequency-domain interaction branch removed and replaced by convolutional layers containing an equivalent number of parameters;
(2) baseline + b: the model with the large-kernel convolution branch removed and replaced by convolutional layers of similar parameter scale;
(3) baseline + a + b: the full dual-branch model with the auxiliary loss for deep-layer feature prediction removed.
The ablation results are presented in Table 3. It can be observed that each module contributes meaningfully to the overall performance improvement of AFDFNet in coastal vegetation classification. The substantial performance drop after removing the frequency-domain branch highlights the essential role of frequency-domain features. In coastal wetlands, vegetation types often exhibit weak spectral differences and highly interwoven, mosaic-like spatial patterns, making it difficult for models to distinguish species such as Reeds and Spartina alterniflora using spatial-domain cues alone [42]. Frequency-domain interactions enable explicit separation of high- and low-frequency components and strengthen both structural and boundary representations [43]. Therefore, removing this branch significantly weakens the model’s ability to capture fine-grained textures and delineate complex boundaries.
After removing the large-kernel convolution branch, the degradation in performance is even more pronounced: Accuracy decreases by 1.19%, Kappa by 5.22%, mIoU by 5.91%, and Recall by 4.8%. These results demonstrate that a large receptive field is indispensable for capturing the broad contextual information underlying wetland vegetation distribution. Coastal wetlands commonly exhibit interlaced vegetation patches, blurred boundaries, and small fragmented targets; traditional convolutional layers, limited by their narrow receptive field, struggle to model long-range dependencies [44]. In contrast, large-kernel convolutions substantially enhance global representation capacity, enabling the model to better recognize the macro-structures present in mixed-species regions [45]. Consequently, removing this branch results in the most severe performance deterioration.
When the auxiliary loss function is removed, the performance decline is comparatively modest (Accuracy decreases by 0.54%, Kappa by 0.44%, mIoU by 0.38%, and Recall by 1.03%), but the impact remains noticeable. The shallow-layer supervision primarily strengthens the sensitivity of early features to local structures and boundary details, while also improving training stability and convergence. Although its influence is not as dominant as the two core branches, the auxiliary supervision still improves detail prediction quality and provides measurable benefits in areas with complex boundaries.
In summary, the large-kernel convolution branch and the frequency-domain interaction branch are the key contributors to performance improvements. These two modules, respectively, enhance large-scale contextual modeling and frequency-domain feature representation. The auxiliary loss further promotes fine-detail learning and training efficiency. Working synergistically, these components allow the model to effectively handle spectral similarities, complex textures, and mosaic-like vegetation patterns in coastal wetlands, ultimately improving accuracy and robustness.
Model parameter size is another critical factor influencing practical deployment in high-resolution remote sensing classification tasks. UAV imagery often contains extremely large spatial dimensions and pixel densities, with a single image reaching thousands to tens of thousands of pixels. The inference speed of a model thus directly affects classification efficiency [46]. To evaluate the efficiency of the proposed framework during inference, we plotted the model parameter sizes against their corresponding mIoU values, as shown in Figure 9. The results show that AFDFNet achieves a favorable balance between accuracy and efficiency, outperforming other methods in both aspects. AFDFNet is capable of producing highly accurate coastal vegetation classification results with a relatively small number of parameters, demonstrating strong potential as a cost-effective solution for large-scale remote sensing applications.

4.2. Influence of Frequency-Domain Features on the Classification of Typical Coastal Vegetation

Building upon the ablation experiments, this section further investigates the impact of incorporating frequency-domain features on the classification accuracy of typical coastal vegetation types. Using the baseline + a model introduced in Section 4.1, we computed three evaluation metrics—Accuracy, IoU, and Recall—for three representative vegetation categories (Reeds, Spartina alterniflora, and Suaeda salsa). These results were then compared with the model performance obtained in Section 3.1. The comparison results are shown in Figure 10. Overall, the introduction of frequency-domain features leads to a performance pattern characterized as “overall improvement, local variation, and detail–structure trade-off.”
From the Accuracy metric, all three vegetation types exhibit performance gains: Reeds increases by 0.22%, Spartina alterniflora by 1.08%, and Suaeda salsa by 1.96%. This demonstrates that frequency-domain features enhance the model’s global discriminative ability, particularly in scenarios where spectral differences are subtle but texture differences are more pronounced. The larger improvements in Spartina alterniflora and Suaeda salsa suggest that frequency-domain cues are especially beneficial for vegetation types with distinctive texture patterns or small-scale patch structures.
Regarding the IoU metric, Reeds and Spartina alterniflora show improvements of 0.07% and 1.10%, respectively, indicating that frequency-domain features help the model better handle fine-grained textures and complex boundaries, especially for Spartina alterniflora, which typically exhibits dense, clustered growth patterns. However, Suaeda salsa shows a decrease of 0.57% in IoU. This suggests that while high-frequency enhancement strengthens detail representation, it may also amplify noise in small or boundary-blurred patches, causing boundary instability and reducing overall region-level consistency.
For Recall, all three vegetation types exhibit declines: Reeds drops by 0.03%, Spartina alterniflora by 1.31%, and Suaeda salsa—the most affected—by 2.90%. This indicates that frequency-domain enhancement, while improving edge sharpness and local texture discrimination, may slightly weaken the model’s sensitivity to the global structure of vegetation patches, leading to omission errors. This effect is most pronounced in Suaeda salsa, whose patches are typically small and have low spectral contrast. High-frequency amplification may overemphasize local variations, causing fragmentation of these patches and ultimately lowering Recall.

5. Conclusions

This study proposes a vegetation classification model tailored for high-resolution UAV imagery of coastal wetlands, where vegetation exhibits significant spectral mixing, complex symbiotic–mosaic patterns, and small target sizes. The model integrates spatial-domain and frequency-domain features through a dual-branch design: a large-kernel convolution branch enhances contextual modeling capability, while a frequency-domain interaction branch separates and fuses low-frequency structural information with high-frequency boundary details. In addition, shallow-layer auxiliary supervision is introduced to improve local detail learning and enhance training stability. The overall architecture maintains strong representational capacity while preserving a compact parameter size, making it suitable for real-world deployment in high-resolution remote sensing applications. Experimental results demonstrate that the proposed model achieves superior performance compared with baseline and state-of-the-art methods when identifying three typical coastal wetland vegetation types—Reeds, Spartina alterniflora, and Suaeda salsa.
Overall, the developed framework effectively addresses the challenges of weak spectral separability, widespread mixed growth patterns, and complex fine-scale structures in coastal wetland vegetation. It consistently outperforms mainstream segmentation models such as UNet, TransUNet, SegFormer, and DeepLabv3 across multiple key evaluation metrics, showing strong robustness and adaptability. The method provides a feasible and efficient technical pathway for UAV-based wetland ecological monitoring, invasive species detection, and landscape pattern analysis.
Future research may further explore adaptive frequency-domain fusion strategies, enhanced recall mechanisms for small objects, and multi-source or multi-temporal data integration to continuously improve the model’s generalization ability and practical value in complex wetland ecosystems.

Author Contributions

Conceptualization, N.G.; Methodology, N.G.; Software, N.G.; Validation, N.G.; Formal analysis, Y.Y.; Investigation, X.D.; Resources, X.D.; Data curation, N.G. and P.X.; Writing—original draft, N.G.; Writing—review & editing, N.G.; Visualization, E.G.; Supervision, Y.Y.; Project administration, Y.Y.; Funding acquisition, Y.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Gedan, K.B.; Kirwan, M.L.; Wolanski, E.; Barbier, E.B.; Silliman, B.R. The Present and Future Role of Coastal Wetland Vegetation in Protecting Shorelines: Answering Recent Challenges to the Paradigm. Clim. Change 2011, 106, 7–29. [Google Scholar] [CrossRef]
  2. Van De Vijsel, R.C.; Van Belzen, J.; Bouma, T.J.; Van Der Wal, D.; Borsje, B.W.; Temmerman, S.; Cornacchia, L.; Gourgue, O.; Van De Koppel, J. Vegetation Controls on Channel Network Complexity in Coastal Wetlands. Nat. Commun. 2023, 14, 7158. [Google Scholar] [CrossRef] [PubMed]
  3. Feagin, R.A.; Lozada-Bernard, S.M.; Ravens, T.M.; Möller, I.; Yeager, K.M.; Baird, A.H. Does Vegetation Prevent Wave Erosion of Salt Marsh Edges? Proc. Natl. Acad. Sci. USA 2009, 106, 10109–10113. [Google Scholar] [CrossRef] [PubMed]
  4. Mcleod, E.; Chmura, G.L.; Bouillon, S.; Salm, R.; Björk, M.; Duarte, C.M.; Lovelock, C.E.; Schlesinger, W.H.; Silliman, B.R. A Blueprint for Blue Carbon: Toward an Improved Understanding of the Role of Vegetated Coastal Habitats in Sequestering CO2. Front. Ecol. Environ. 2011, 9, 552–560. [Google Scholar] [CrossRef]
  5. Duarte, C.M.; Middelburg, J.J.; Caraco, N. Major Role of Marine Vegetation on the Oceanic Carbon Cycle. Biogeosciences 2005, 2, 1–8. [Google Scholar] [CrossRef]
  6. Li, S.; Dragicevic, S.; Castro, F.A.; Sester, M.; Winter, S.; Coltekin, A.; Pettit, C.; Jiang, B.; Haworth, J.; Stein, A.; et al. Geospatial Big Data Handling Theory and Methods: A Review and Research Challenges. ISPRS J. Photogramm. Remote Sens. 2016, 115, 119–133. [Google Scholar] [CrossRef]
  7. Akinaga, T.; Saito, M.; Onodera, S.; Hyodo, F. UAV Visual Imagery-Based Evaluation of Blue Carbon as Seagrass Beds on a Tidal Flat Scale. Remote Sens. Appl. Soc. Environ. 2025, 37, 101430. [Google Scholar] [CrossRef]
  8. Liu, Y.; Liu, Q.; Sample, J.E.; Hancke, K.; Salberg, A.-B. Coastal habitat mapping with UAV multi-sensor data: An experiment among DCNN-based approaches. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2022, 3, 439–445. [Google Scholar] [CrossRef]
  9. James, D.; Collin, A.; Mury, A.; Letard, M. Enhancing UAV coastal mapping using infrared pansharpening. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2021, 43, 257–264. [Google Scholar] [CrossRef]
  10. Morgan, G.R.; Stevenson, L.; Wang, C.; Avtar, R. UAS Remote Sensing for Coastal Wetland Vegetation Biomass Estimation: A Destructive vs. Non-Destructive Sampling Experiment. Remote Sens. 2025, 17, 2335. [Google Scholar] [CrossRef]
  11. Morgan, G.R.; Hodgson, M.E.; Wang, C.; Schill, S.R. Unmanned Aerial Remote Sensing of Coastal Vegetation: A Review. Ann. GIS 2022, 28, 385–399. [Google Scholar] [CrossRef]
  12. Zhang, Z.; Shen, X.; Yan, C.; Li, R.; Li, B. Unveiling Seaward Expansion Pattern in Mangrove Forests Using UAV Remote Sensing and Deep Learning. Ecol. Indic. 2025, 178, 114054. [Google Scholar] [CrossRef]
  13. Wang, X.; Zhang, Y.; Ca, J.; Qin, Q.; Feng, Y.; Yan, J. Semantic Segmentation Network for Mangrove Tree Species Based on UAV Remote Sensing Images. Sci. Rep. 2024, 14, 29860. [Google Scholar] [CrossRef] [PubMed]
  14. Ke, L.; Lu, Y.; Tan, Q.; Zhao, Y.; Wang, Q. Precise Mapping of Coastal Wetlands Using Time-Series Remote Sensing Images and Deep Learning Model. Front. For. Glob. Change 2024, 7, 1409985. [Google Scholar] [CrossRef]
  15. Cruz, C.; McGuinness, K.; Perrin, P.M.; O’Connell, J.; Martin, J.R.; Connolly, J. Improving the Mapping of Coastal Invasive Species Using UAV Imagery and Deep Learning. Int. J. Remote Sens. 2023, 44, 5713–5735. [Google Scholar] [CrossRef]
  16. Yuan, S.; Liang, X.; Lin, T.; Chen, S.; Liu, R.; Wang, J.; Zhang, H.; Gong, P. A Comprehensive Review of Remote Sensing in Wetland Classification and Mapping. arXiv 2025, arXiv:2504.10842. [Google Scholar] [CrossRef]
  17. Zhang, X.; Liu, L.; Zhao, T.; Chen, X.; Lin, S.; Wang, J.; Mi, J.; Liu, W. GWL_FCS30: A Global 30 m Wetland Map with a Fine Classification System Using Multi-Sourced and Time-Series Remote Sensing Imagery in 2020. Earth Syst. Sci. Data 2023, 15, 265–293. [Google Scholar] [CrossRef]
  18. Wang, X.; Xiao, X.; Zou, Z.; Hou, L.; Qin, Y.; Dong, J.; Doughty, R.B.; Chen, B.; Zhang, X.; Chen, Y.; et al. Mapping Coastal Wetlands of China Using Time Series Landsat Images in 2018 and Google Earth Engine. ISPRS J. Photogramm. Remote Sens. 2020, 163, 312–326. [Google Scholar] [CrossRef]
  19. Zhang, X.; Liu, L.; Zhao, T.; Wang, J.; Liu, W.; Chen, X. Global Annual Wetland Dataset at 30 m with a Fine Classification System from 2000 to 2022. Sci. Data 2024, 11, 310. [Google Scholar] [CrossRef]
  20. Peng, K.; Jiang, W.; Hou, P.; Wu, Z.; Cui, T. Detailed Wetland-Type Classification Using Landsat-8 Time-Series Images: A Pixel- and Object-Based Algorithm with Knowledge (POK). GIScience Remote Sens. 2024, 61, 2293525. [Google Scholar] [CrossRef]
  21. Klemas, V. Remote Sensing of Coastal Wetland Biomass: An Overview. J. Coast. Res. 2013, 290, 1016–1028. [Google Scholar] [CrossRef]
  22. Doughty, C.L.; Cavanaugh, K.C. Mapping Coastal Wetland Biomass from High Resolution Unmanned Aerial Vehicle (UAV) Imagery. Remote Sens. 2019, 11, 540. [Google Scholar] [CrossRef]
  23. Du, Y.; Wang, J.; Liu, Z.; Yu, H.; Li, Z.; Cheng, H. Evaluation on Spaceborne Multispectral Images, Airborne Hyperspectral, and LiDAR Data for Extracting Spatial Distribution and Estimating Aboveground Biomass of Wetland Vegetation Suaeda salsa. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2019, 12, 200–209. [Google Scholar] [CrossRef]
  24. Cai, L.; Tang, D.; Levy, G.; Liu, D. Remote Sensing of the Impacts of Construction in Coastal Waters on Suspended Particulate Matter Concentration—the Case of the Yangtze River Delta, China. Int. J. Remote Sens. 2016, 37, 2132–2147. [Google Scholar] [CrossRef]
  25. Ouyang, Z.-T.; Gao, Y.; Xie, X.; Guo, H.-Q.; Zhang, T.-T.; Zhao, B. Spectral Discrimination of the Invasive Plant Spartina Alterniflora at Multiple Phenological Stages in a Saltmarsh Wetland. PLoS ONE 2013, 8, e67315. [Google Scholar] [CrossRef]
  26. Wang, R.; Su, Y.; Sun, X.; Wang, M.; Feng, M. Rapid and Automated Mapping Method of Spartina Alterniflora Combines Tidal Imagery and Phenological Characteristics. Environ. Monit. Assess. 2025, 197, 1136. [Google Scholar] [CrossRef]
  27. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; IEEE: Las Vegas, NV, USA, 2016; pp. 770–778. [Google Scholar]
  28. Gao, N.; Du, X.; Yang, M.; Zhao, X.; Gao, E.; Yang, Y. Extraction of Suaeda Salsa from UAV Imagery Assisted by Adaptive Capture of Contextual Information. Remote Sens. 2025, 17, 2022. [Google Scholar] [CrossRef]
  29. Gao, F.; Fu, M.; Cao, J.; Dong, J.; Du, Q. Adaptive Frequency Enhancement Network for Remote Sensing Image Semantic Segmentation. IEEE Trans. Geosci. Remote Sens. 2025, 63, 5619415. [Google Scholar] [CrossRef]
  30. Chen, Y.; Fan, H.; Xu, B.; Yan, Z.; Kalantidis, Y.; Rohrbach, M.; Yan, S.; Feng, J. Drop an Octave: Reducing Spatial Redundancy in Convolutional Neural Networks with Octave Convolution. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019. [Google Scholar]
  31. Huang, Y.; Wang, J.; Wu, P.; Duan, Z.; Li, X.; Tang, J. Impacts of Spartina Alterniflora Invasion on Coastal Carbon Cycling Within a Native Phragmites Australis-Dominated Wetland. Agric. For. Meteorol. 2025, 363, 110405. [Google Scholar] [CrossRef]
  32. Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning; MIT Press: Cambridge, MA, USA, 2016; ISBN 978-0-262-33737-3. [Google Scholar]
  33. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation; Springer international publishing: Cham, Switzerland, 2015. [Google Scholar]
  34. Chen, L.-C.; Papandreou, G.; Schroff, F.; Adam, H. Rethinking Atrous Convolution for Semantic Image Segmentation. arXiv 2017, arXiv:1706.05587. [Google Scholar]
  35. Chen, J.; Lu, Y.; Yu, Q.; Luo, X.; Adeli, E.; Wang, Y.; Lu, L.; Yuille, A.L.; Zhou, Y. TransUNet: Transformers Make Strong Encoders for Medical Image Segmentation. arXiv 2021, arXiv:2102.04306. [Google Scholar] [CrossRef]
  36. Xie, E.; Wang, W.; Yu, Z.; Anandkumar, A.; Alvarez, J.M.; Luo, P. SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers. Adv. Neural Inf. Process. Syst. 2021, 34, 12077–12090. [Google Scholar]
  37. Zhou, L.; Zhang, C.; Wu, M. D-LinkNet: LinkNet with Pretrained Encoder and Dilated Convolution for High Resolution Satellite Imagery Road Extraction. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Salt Lake City, UT, USA, 18–22 June 2018; IEEE: Salt Lake City, UT, USA, 2018; pp. 192–1924. [Google Scholar]
  38. Zheng, J.; Shao, A.; Yan, Y.; Wu, J.; Zhang, M. Remote Sensing Semantic Segmentation via Boundary Supervision-Aided Multiscale Channelwise Cross Attention Network. IEEE Trans. Geosci. Remote Sens. 2023, 61, 1–14. [Google Scholar] [CrossRef]
  39. Han, K.; Wang, Y.; Chen, H.; Chen, X.; Guo, J.; Liu, Z.; Tang, Y.; Xiao, A.; Xu, C.; Xu, Y.; et al. A Survey on Vision Transformer. IEEE Trans. Pattern Anal. Mach. Intell. 2023, 45, 87–110. [Google Scholar] [CrossRef]
  40. Minaee, S.; Boykov, Y.Y.; Porikli, F.; Plaza, A.J.; Kehtarnavaz, N.; Terzopoulos, D. Image Segmentation Using Deep Learning: A Survey. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 44, 3523–3542. [Google Scholar] [CrossRef] [PubMed]
  41. Ke, Y.; Han, Y.; Cui, L.; Sun, P.; Min, Y.; Wang, Z.; Zhuo, Z.; Zhou, Q.; Yin, X.; Zhou, D. Suaeda Salsa Spectral Index for Suaeda Salsa Mapping and Fractional Cover Estimation in Intertidal Wetlands. ISPRS J. Photogramm. Remote Sens. 2024, 207, 104–121. [Google Scholar] [CrossRef]
  42. Harris, J.M.; Broussard, W.P.; Nelson, J.A. Evaluating Coastal Wetland Restoration Using Drones and High-Resolution Imagery. Estuaries Coasts 2024, 47, 1359–1375. [Google Scholar] [CrossRef]
  43. Yang, Y.; Yuan, G.; Li, J. SFFNet: A Wavelet-Based Spatial and Frequency Domain Fusion Network for Remote Sensing Segmentation. IEEE Trans. Geosci. Remote Sens. 2024, 62, 3000617. [Google Scholar] [CrossRef]
  44. Yang, M.; Qin, J.; Wang, X.; Gu, Y. Research on the Wetland Vegetation Classification Method Based on Cross-Satellite Hyperspectral Images. JMSE 2025, 13, 801. [Google Scholar] [CrossRef]
  45. Liu, G.; Liu, C.; Wu, X.; Li, Y.; Zhang, X.; Xu, J. Optimization of Remote-Sensing Image-Segmentation Decoder Based on Multi-Dilation and Large-Kernel Convolution. Remote Sens. 2024, 16, 2851. [Google Scholar] [CrossRef]
  46. Wang, X.; Shu, L.; Han, R.; Yang, F.; Gordon, T.; Wang, X.; Xu, H. A Survey of Farmland Boundary Extraction Technology Based on Remote Sensing Images. Electronics 2023, 12, 1156. [Google Scholar] [CrossRef]
Figure 1. Geographic location of the UAV-acquired imagery data.
Figure 2. Training dataset used for the proposed model. Representative categories: (a,b) Spartina alterniflora; (c–e) Suaeda salsa; (f–h) Reeds and weeds; (i) weeds; (j) water bodies and bare tidal flats.
Figure 3. Spectral signatures of typical coastal wetland vegetation species.
Figure 4. Overall framework of the proposed method.
Figure 5. Illustration of the Multi-scale Feature Enhancement Module (MFEM).
Figure 6. Coastal wetland vegetation classification results: (a) RGB imagery; (b) classification map.
Figure 7. Mapping statistics: category proportion pie chart and class-wise accuracy bar chart.
Figure 8. Visual comparison among different models: (a,b) Suaeda salsa growing area; (c–f) weeds, Reeds, and Spartina alterniflora symbiotic area; (g,h) water and mudflats.
Figure 9. Model accuracy versus number of parameters.
Figure 10. Heatmap showing the impact of introducing the frequency-interaction branch on typical coastal vegetation metrics.
Table 1. Formulas for the evaluation metrics used in model accuracy assessment.
  • Accuracy: $\frac{TP + TN}{TP + FP + TN + FN}$ (8)
  • Kappa: $\frac{P_0 - P_e}{1 - P_e}$ (9)
  • $P_0 = \frac{TP + TN}{TN + TP + FP + FN}$ (10)
  • $P_e = \frac{(TP + FP)(TP + FN) + (FN + TN)(FP + TN)}{(TN + TP + FN + FP)^2}$ (11)
  • Recall: $\frac{TP}{TP + FN}$ (12)
  • mIoU: $\left(\frac{TP}{TP + FP + FN} + \frac{TN}{TN + FP + FN}\right) / N$ (13)
Table 2. Comparison of classification accuracies among different models.

Model       Accuracy  Kappa   mIoU    Recall
UNet        0.9032    0.7511  0.5422  0.6032
AFDFNet     0.9485    0.8274  0.6922  0.7371
TransUNet   0.9392    0.8566  0.6733  0.7066
D-LinkNet   0.8791    0.6894  0.4876  0.5431
DeepLabV3   0.8922    0.6334  0.6100  0.6822
MCCA        0.9086    0.7689  0.6499  0.7147
SegFormer   0.8527    0.6367  0.5368  0.6007
Table 3. Results of the ablation experiments for the proposed model.

Model             Accuracy  Kappa   mIoU    Recall
Baseline          0.9073    0.8578  0.6092  0.6322
Baseline + a      0.9351    0.7963  0.6331  0.7066
Baseline + b      0.9366    0.7752  0.6564  0.6891
Baseline + a + b  0.9431    0.8522  0.6884  0.7268