Article

FEMNet: A Feature-Enriched Mamba Network for Cloud Detection in Remote Sensing Imagery

by Weixing Liu 1, Bin Luo 1, Jun Liu 1,*, Han Nie 1 and Xin Su 2

1 State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan 430079, China
2 School of Artificial Intelligence, Wuhan University, Wuhan 430079, China
* Author to whom correspondence should be addressed.
Remote Sens. 2025, 17(15), 2639; https://doi.org/10.3390/rs17152639
Submission received: 11 June 2025 / Revised: 12 July 2025 / Accepted: 21 July 2025 / Published: 30 July 2025

Abstract

Accurate and efficient cloud detection is critical for maintaining the usability of optical remote sensing imagery, particularly in large-scale Earth observation systems. In this study, we propose FEMNet, a lightweight dual-branch network that combines state space modeling with convolutional encoding for multi-class cloud segmentation. The Mamba-based encoder captures long-range semantic dependencies with linear complexity, while a parallel CNN path preserves spatial detail. To address the semantic inconsistency across feature hierarchies and limited context perception in decoding, we introduce the following two targeted modules: a cross-stage semantic enhancement (CSSE) block that adaptively aligns low- and high-level features, and a multi-scale context aggregation (MSCA) block that integrates contextual cues at multiple resolutions. Extensive experiments on five benchmark datasets demonstrate that FEMNet achieves state-of-the-art performance across both binary and multi-class settings, while requiring only 4.4M parameters and 1.3G multiply–accumulate operations. These results highlight FEMNet’s suitability for resource-efficient deployment in real-world remote sensing applications.

1. Introduction

Cloud contamination represents one of the most significant challenges in optical remote sensing, with studies indicating that up to 66% of satellite images may be partially or completely affected by cloud cover [1,2]. This widespread occlusion severely limits the effective utilization of Earth observation data by masking critical land surface features and introducing radiometric uncertainties that compromise downstream analysis. Consequently, accurate and efficient cloud detection has become an indispensable preprocessing step for numerous remote sensing applications, including land cover classification, environmental monitoring, and change detection [3,4].
The complexity of cloud detection stems from the highly variable nature of cloud formations—ranging from thin, semi-transparent cirrus to thick stratocumulus—which often exhibit spectral signatures similar to other bright surface features such as snow and ice [5,6]. Additionally, cloud boundaries are frequently irregular and diffuse [7], while cloud shadows introduce further complexity, as they often share low reflectance characteristics with other dark surfaces [8] such as water bodies and terrain shadows. These spectral ambiguities increase the risk of misclassification, rendering robust pixel-wise discrimination particularly challenging [2].
Early cloud detection methods primarily relied on rule-based algorithms and spectral thresholding techniques such as FMask [9]. While computationally efficient, these methods exhibit limited generalization capability across diverse atmospheric conditions and sensor characteristics. The advent of deep learning revolutionized cloud detection methodologies, with CNNs emerging as the dominant paradigm. Representative models like CDNet [10], LPMSNet [11], and CRSNet [12] employ encoder–decoder architectures enhanced with attention mechanisms and multi-scale feature fusion. However, CNNs face inherent limitations in capturing long-range spatial dependencies due to their localized receptive fields [13,14]. This makes it difficult to exploit the contextual relationship between spatially disconnected but semantically related regions, such as clouds and their corresponding shadows [15], especially in large-scale scenes.
To address these limitations, Vision Transformers have been increasingly adopted in remote sensing applications [16,17], leveraging self-attention mechanisms to capture global dependencies. Many hybrid approaches [18,19,20,21] combine Transformers with CNNs, seeking to balance global semantic understanding with fine-grained spatial detail preservation. Despite their effectiveness, Transformer-based architectures introduce significant computational overhead due to quadratic attention complexity, limiting their scalability for high-resolution imagery and real-time applications [22]. Recently, State Space Models (SSMs) have emerged as a compelling alternative. The Mamba architecture [23] introduces selective state space modeling with linear time complexity, while VMamba [24] adapts this to 2D spatial modeling. Remote sensing applications including ChangeMamba [25], RSCaMa [26], and LCCDMamba [27] have demonstrated Mamba’s effectiveness for scalable representation learning.
While Mamba-based architectures show promise for efficient global modeling, they do not fully address specific structural challenges in encoder–decoder segmentation networks. Two critical limitations persist: first, conventional skip connections often introduce semantic misalignment between spatially detailed shallow features and semantically rich deep features, particularly problematic for thin cloud boundaries; second, standard decoders lack explicit multi-scale context integration, reducing their ability to distinguish between small fragmented clouds and large homogeneous formations. To address these challenges, we propose FEMNet, a dual-stream architecture combining Mamba-based long-range modeling with lightweight CNN-based spatial encoding. Our approach incorporates two specialized modules as follows: the cross-stage semantic enhancement (CSSE) module addresses semantic misalignment through adaptive gating, while the multi-scale context aggregation (MSCA) module enriches multi-scale understanding through resolution-aware pooling and fusion.
We evaluate FEMNet on five public datasets covering both binary and multi-class cloud segmentation scenarios. Our results demonstrate consistent improvements over state-of-the-art methods, including SCTNet and HRCloudNet, while maintaining computational efficiency. Notably, FEMNet achieves a 3.67% improvement in mIoU on the L8 Biome dataset with only 4.4 million parameters and 1.3G MACs, confirming its practical value for operational deployment.
Our main contributions are as follows:
  • We propose FEMNet, a novel dual-stream architecture that effectively combines Mamba-based global modeling with CNN-based spatial detail encoding for efficient and accurate cloud segmentation.
  • We design the cross-stage semantic enhancement (CSSE) module to resolve semantic misalignment between encoder and decoder features through adaptive gating guided by high-level contextual information.
  • We introduce the multi-scale context aggregation (MSCA) module to enhance decoder-scale awareness through lightweight multi-resolution pooling and fusion strategies.
  • We demonstrate superior segmentation accuracy and computational efficiency across five diverse datasets, validating FEMNet’s effectiveness.

2. Related Work

2.1. CNN-Based Cloud Segmentation

Convolutional Neural Networks established the foundation for modern cloud detection through learnable feature representations. CDNet [10] pioneered encoder–decoder structures with skip connections, while DBNet [28] and CDUNet [29] introduced dual-branch architectures for enhanced contextual fusion. Recent advances include LPMSNet [11], which employs location-aware pooling and multi-scale attention, and CRSNet [12], which targets small cloud detection through strip pyramid attention. However, CNNs face fundamental limitations in modeling long-range dependencies due to their local receptive fields, particularly problematic when distinguishing clouds from spectrally similar surface features across large spatial extents.

2.2. Transformer and Hybrid Architectures

Vision Transformers address CNN limitations through self-attention mechanisms capable of capturing global contextual relationships. SCTNet [21] demonstrated token-wise self-attention for global context integration, while hybrid approaches like MAFNet [20] and MCANet [18] combine CNN spatial processing with Transformer global modeling. These methods seek to balance fine-grained detail preservation with semantic abstraction. Despite promising results, Transformer-based approaches introduce significant computational overhead due to quadratic attention complexity, limiting their scalability for high-resolution imagery and real-time applications.

2.3. Mamba and State Space Models

State Space Models offer a promising alternative with linear complexity for long-range modeling. Mamba [23] introduces selective state space mechanisms with hardware-aware implementations, while VMamba [24] adapts this to 2D spatial modeling via SS2D modules. In remote sensing, recent works, including Samba, ChangeMamba [25], RSCaMa [26], and LCCDMamba [27], demonstrate Mamba’s effectiveness across diverse applications from semantic segmentation to change detection. These approaches confirm the potential of SSM-based designs for scalable and globally aware modeling in geospatial tasks.
While each architectural paradigm addresses specific limitations of its predecessors, many recent works have adopted dual-branch encoder designs to improve semantic representation. DDRNet [30] builds two deep resolution-specific branches with repeated bilateral fusion, enhancing boundary detail and global context in urban scenes. DE-UNet [31] integrates two parallel CNN encoders into a U-Net framework, enabling the simultaneous capture of fine-grained textures and holistic structure. ST-UNet [32] combines CNNs with Swin Transformers through a relational aggregation module to integrate local and global cues hierarchically. Similarly, RS3Mamba [33] constructs a Mamba-based auxiliary branch alongside a convolutional backbone, using a collaborative fusion mechanism to adaptively refine dual-stream features. Inspired by these dual-stream principles, our proposed FEMNet employs a lightweight parallel encoder composed of a Mamba branch for global modeling and a CNN branch for spatial detail preservation.

3. Methodology

FEMNet is a lightweight yet effective semantic segmentation framework, developed for multi-class cloud detection in remote sensing imagery. It is specifically designed to tackle the challenges posed by spectral ambiguity—such as the confusion between clouds, snow, and bright land surfaces—and the need for fine-grained structural delineation across diverse terrestrial backgrounds. The architecture simultaneously captures high-resolution spatial textures and long-range semantic dependencies, while maintaining computational efficiency suitable for large-scale satellite data processing.
As shown in Figure 1, FEMNet comprises the following four main components: a dual-branch encoder, a multi-scale context aggregation (MSCA) module, a cross-stage semantic enhancement (CSSE) module, and a lightweight decoder. The encoder integrates convolutional blocks for spatial detail extraction with a Mamba-based sequence encoder for capturing semantic context. The MSCA module refines deep features by integrating contextual cues across scales. In parallel, the CSSE module modulates early-stage features using semantic information from deeper layers, promoting alignment between low- and high-level representations. The decoder then progressively fuses features from multiple levels to produce the final segmentation map.
This section is organized as follows: Section 3.1 introduces the overall architecture. Section 3.2 explains the Mamba-based multi-scale encoder. Section 3.3 and Section 3.4 describe the MSCA and CSSE modules. Section 3.5 details the loss function and evaluation metrics.

3.1. Network Architecture Overview

FEMNet processes the input image using two parallel encoding branches. The first pathway is a shallow convolutional encoder that extracts high-resolution spatial details, which are essential for recovering thin cloud edges and subtle shadow boundaries. In parallel, the second pathway utilizes a Mamba-based SegMAN encoder that operates hierarchically to extract multi-level semantic representations. This dual-stream design ensures that both local textures and global context are preserved throughout the network.
Formally, given an input image $x \in \mathbb{R}^{3 \times H \times W}$, the CNN branch generates a low-level feature map $f_{\mathrm{cnn}} \in \mathbb{R}^{16 \times \frac{H}{2} \times \frac{W}{2}}$ through a sequence of convolutional and downsampling blocks. Simultaneously, the SegMAN encoder produces a set of hierarchical semantic features $\{f_1, f_2, f_3, f_4\}$ with progressively decreasing spatial resolutions and increasing channel dimensions. Specifically, $f_1 \in \mathbb{R}^{32 \times \frac{H}{4} \times \frac{W}{4}}$, $f_2 \in \mathbb{R}^{64 \times \frac{H}{8} \times \frac{W}{8}}$, $f_3 \in \mathbb{R}^{144 \times \frac{H}{16} \times \frac{W}{16}}$, and $f_4 \in \mathbb{R}^{192 \times \frac{H}{32} \times \frac{W}{32}}$, capturing increasingly abstract semantic representations.
The deepest feature $f_4$ is refined by the MSCA module, while the lowest-level feature $f_1$ is modulated by the CSSE block using high-level priors from $f_4$. The decoder integrates these representations through progressive upsampling and convolution, with skip connections enabling multi-scale fusion. At the final stage, the decoder output is concatenated with the shallow CNN feature (projected to a compatible dimension) and fused through a convolutional block. This design ensures both semantic completeness and spatial precision in the final prediction.
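To make the tensor shapes above concrete, the following sketch traces the dual-branch forward pass for a 256 × 256 input. It is an illustrative PyTorch outline only: the class names `ShallowCNNBranch` and `SemanticBranchStub` are placeholders, and the Mamba-based SegMAN branch is stubbed with strided convolutions purely to reproduce the stated channel widths and strides, not its actual computation.

```python
import torch
import torch.nn as nn

class ShallowCNNBranch(nn.Module):
    """CNN path: 3 x H x W -> 16 x H/2 x W/2 (spatial detail)."""
    def __init__(self):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1),
            nn.BatchNorm2d(16),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)

class SemanticBranchStub(nn.Module):
    """Stand-in for the SegMAN/Mamba encoder: emits f1..f4 with the channel
    widths (32, 64, 144, 192) and strides (4, 8, 16, 32) described above."""
    def __init__(self, widths=(32, 64, 144, 192)):
        super().__init__()
        chans = (3,) + tuple(widths)
        self.stages = nn.ModuleList(
            nn.Conv2d(chans[i], chans[i + 1], 3,
                      stride=4 if i == 0 else 2, padding=1)
            for i in range(4)
        )

    def forward(self, x):
        feats = []
        for stage in self.stages:
            x = stage(x)
            feats.append(x)
        return feats  # [f1, f2, f3, f4]

x = torch.randn(1, 3, 256, 256)
f_cnn = ShallowCNNBranch()(x)             # (1, 16, 128, 128)
f1, f2, f3, f4 = SemanticBranchStub()(x)  # strides 4 / 8 / 16 / 32
print(f_cnn.shape, f1.shape, f2.shape, f3.shape, f4.shape)
```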
To qualitatively assess the contributions of the dual-stream encoder design, we visualize the intermediate feature maps from the CNN and Mamba branches, along with the fused representation. As shown in Figure 2, the CNN branch captures fine-grained texture details such as thin clouds and sharp edges, but it tends to produce noisy or fragmented responses. In contrast, the Mamba encoder focuses on global semantic consistency, leading to smoother but spatially coarser activation maps. The fused features successfully combine the strengths of both branches, which aligns well with our design motivation for combining local and global representations in FEMNet.

3.2. Mamba-Based Multi-Scale Encoder

Traditional convolutional encoders, while efficient for local feature extraction, struggle to model long-range dependencies, which are critical for capturing large-scale and spatially disconnected cloud patterns. Transformer-based models address this limitation through global self-attention but suffer from quadratic computational cost, which hinders their scalability for high-resolution remote sensing imagery. To balance efficiency and modeling capacity, we adopt Mamba—a state space model (SSM) that offers linear-time sequence modeling with dynamic parameterization.
Mamba formulates sequence modeling as a continuous-time dynamical system as follows:
$$\frac{\mathrm{d}h(t)}{\mathrm{d}t} = A\,h(t) + B\,x(t), \qquad y(t) = C\,h(t) + D\,x(t),$$
where $h(t)$ is the latent state vector and $A$, $B$, $C$, $D$ are learnable matrices. Discretization via zero-order hold yields the following:
$$\bar{A} = \exp(\Delta A), \qquad \bar{B} = (\Delta A)^{-1}\left(\bar{A} - I\right)\Delta B,$$
$$h_k = \bar{A}\,h_{k-1} + \bar{B}\,x_k, \qquad y_k = C\,h_k + D\,x_k.$$
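A toy numerical sketch of this discretization and recurrence is given below, assuming a diagonal state matrix and a scalar input channel; the real Mamba implementation uses an input-dependent step size and a hardware-aware parallel scan rather than this explicit loop.

```python
import torch

# Toy illustration of the zero-order-hold discretization and SSM recurrence
# above, for a diagonal (and therefore elementwise-invertible) state matrix A.
torch.manual_seed(0)
L, N = 16, 4                       # sequence length, state dimension
A = -torch.rand(N)                 # diagonal, stable state matrix
B = torch.randn(N)
C = torch.randn(N)
D = torch.randn(1)
delta = 0.1                        # step size (input-dependent in Mamba)

A_bar = torch.exp(delta * A)                  # A_bar = exp(delta * A)
B_bar = (A_bar - 1.0) / A * B                 # elementwise A^{-1}(A_bar - I)B

x = torch.randn(L)                 # scalar input sequence
h = torch.zeros(N)
ys = []
for k in range(L):
    h = A_bar * h + B_bar * x[k]              # h_k = A_bar h_{k-1} + B_bar x_k
    ys.append((C * h).sum() + D * x[k])       # y_k = C h_k + D x_k
y = torch.stack([v.squeeze() for v in ys])
print(y.shape)                     # torch.Size([16])
```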
To enable adaptive modeling, Mamba [23,24] makes the parameters $B$, $C$ and the step size $\Delta$ input-dependent. As illustrated in Figure 3, each SS2D block consists of the following three key components: cross-scan, selective scanning with S6 blocks, and cross-merge. Cross-scan unfolds the 2D feature map into four sequences along different spatial traversal directions (e.g., left-to-right, top-to-bottom, and their reverses), enabling each pixel to participate in multiple directional contexts. Selective scanning applies a dedicated S6 block to each directional sequence independently, allowing efficient 1D modeling with selective information flow. Finally, cross-merge reshapes and aggregates the four directional outputs to reconstruct the 2D feature map, typically by summing corresponding pixel responses across directions.
This SS2D design empowers each pixel to aggregate contextual information from multiple orientations, thereby constructing a global receptive field in a computation-friendly manner. Compared to traditional 2D convolutions or attention, SS2D maintains linear complexity while enhancing spatial awareness. Integrated into a four-stage encoder with neighborhood attention and residual connections, this module enables FEMNet to capture both local detail and long-range semantic structure in complex cloud scenes.
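The cross-scan and cross-merge steps can be illustrated at the tensor level as follows. In this simplified sketch the per-direction S6 processing is replaced by an identity map, and the function names `cross_scan` and `cross_merge` are illustrative rather than the actual VMamba API.

```python
import torch

def cross_scan(feat):
    """Unfold a (C, H, W) map into four directional sequences of shape (C, H*W):
    row-major forward/reverse and column-major forward/reverse."""
    C, H, W = feat.shape
    row = feat.reshape(C, H * W)                   # left-to-right, row by row
    col = feat.permute(0, 2, 1).reshape(C, H * W)  # top-to-bottom, column by column
    return [row, row.flip(-1), col, col.flip(-1)]

def cross_merge(seqs, H, W):
    """Fold the four directional outputs back to (C, H, W) and sum them."""
    row, row_r, col, col_r = seqs
    C = row.shape[0]
    out = row.reshape(C, H, W)
    out = out + row_r.flip(-1).reshape(C, H, W)
    out = out + col.reshape(C, W, H).permute(0, 2, 1)
    out = out + col_r.flip(-1).reshape(C, W, H).permute(0, 2, 1)
    return out

feat = torch.randn(8, 4, 4)
seqs = cross_scan(feat)                  # each S6 block would process one sequence here
merged = cross_merge(seqs, 4, 4)         # with identity "S6", merged == 4 * feat
print(torch.allclose(merged, 4 * feat))  # True (up to float tolerance)
```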

3.3. Multi-Scale Context Aggregation

Contextual modeling plays a crucial role in cloud detection, particularly when segmenting large or diffuse cloud regions where local features alone may be insufficient. However, deep layers in encoder–decoder networks typically suffer from a loss of spatial resolution, which limits their ability to retain multi-scale semantic cues. To address this limitation, we propose the Multi-Scale Context Aggregation (MSCA) module to enhance the semantic depth of high-level features while recovering spatial context.
As illustrated in Figure 4a, the Multi-Scale Context Aggregation (MSCA) module enhances the deepest semantic feature $f_4 \in \mathbb{R}^{192 \times \frac{H}{32} \times \frac{W}{32}}$ by aggregating contextual cues from multiple spatial scales.
Specifically, two parallel branches apply average pooling with kernel sizes of 2 and 4, respectively, followed by bilinear upsampling to restore the original resolution. The resulting pooled features are concatenated with the original $f_4$, yielding an aggregated tensor of size $\mathbb{R}^{576 \times \frac{H}{32} \times \frac{W}{32}}$:
$$f_{\mathrm{agg}} = \mathrm{Concat}\left(f_4,\ \mathrm{Up}\!\left(\mathrm{AvgPool}_2(f_4)\right),\ \mathrm{Up}\!\left(\mathrm{AvgPool}_4(f_4)\right)\right).$$
The fused representation $f_{\mathrm{agg}}$ is then passed through a convolutional layer and a ReLU activation to produce the output feature $f_{\mathrm{msca}} \in \mathbb{R}^{192 \times \frac{H}{32} \times \frac{W}{32}}$:
$$f_{\mathrm{msca}} = \mathrm{ReLU}\left(\mathrm{Conv}(f_{\mathrm{agg}})\right).$$
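A compact PyTorch rendering of this computation might look as follows; it is a sketch of the two equations above, and the 3 × 3 kernel size of the fusion convolution is an assumption not specified in the text.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MSCA(nn.Module):
    """Multi-Scale Context Aggregation sketch: pool f4 at scales 2 and 4,
    upsample, concatenate with f4, then fuse with a convolution and ReLU."""
    def __init__(self, channels=192, kernel_size=3):
        super().__init__()
        # 3 * 192 = 576 concatenated channels fused back to 192
        self.fuse = nn.Conv2d(3 * channels, channels, kernel_size,
                              padding=kernel_size // 2)

    def forward(self, f4):
        h, w = f4.shape[-2:]
        p2 = F.interpolate(F.avg_pool2d(f4, 2), size=(h, w),
                           mode="bilinear", align_corners=False)
        p4 = F.interpolate(F.avg_pool2d(f4, 4), size=(h, w),
                           mode="bilinear", align_corners=False)
        f_agg = torch.cat([f4, p2, p4], dim=1)   # 576 x H/32 x W/32
        return F.relu(self.fuse(f_agg))          # f_msca: 192 x H/32 x W/32

f4 = torch.randn(1, 192, 8, 8)   # H/32 x W/32 for a 256 x 256 input
print(MSCA()(f4).shape)          # torch.Size([1, 192, 8, 8])
```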
This design enables FEMNet to capture richer contextual information across multiple receptive fields without significantly increasing model complexity. As illustrated in Figure 5, the MSCA module transforms fragmented semantic features into more regionally coherent activations, reinforcing large-area consistency and improving the network’s ability to delineate spatially extensive or spectrally ambiguous cloud structures.

3.4. Cross-Stage Semantic Enhancement

Encoder–decoder architectures often suffer from semantic inconsistencies when merging low-level features (rich in spatial detail) with high-level features (rich in semantics). This is especially detrimental in cloud segmentation, where boundary precision and semantic coherence are critical. To alleviate this, we propose the Cross-Stage Semantic Enhancement (CSSE) module.
As illustrated in Figure 4b, the CSSE module takes a low-level spatial feature $f_1 \in \mathbb{R}^{32 \times \frac{H}{4} \times \frac{W}{4}}$ and a high-level semantic feature $f_{\mathrm{msca}} \in \mathbb{R}^{192 \times \frac{H}{32} \times \frac{W}{32}}$ as inputs.
First, $f_{\mathrm{msca}}$ is projected using a $1 \times 1$ convolution and activated by a sigmoid function to generate a spatial attention map $M$, which is then upsampled to match the resolution of $f_1$:
$$M = \sigma\left(\mathrm{Up}\!\left(\mathrm{Conv}(f_{\mathrm{msca}})\right)\right),$$
where $\sigma(\cdot)$ denotes the sigmoid activation.
Meanwhile, $f_1$ is refined through two convolutional layers with interleaved ReLU and batch normalization as follows:
$$f_1' = \mathrm{BN}\left(\mathrm{Conv}\!\left(\mathrm{ReLU}\!\left(\mathrm{Conv}(f_1)\right)\right)\right).$$
Then, the attention map $M$ is applied to the refined feature $f_1'$ via element-wise multiplication, and the result is concatenated with the original $f_1$ as follows:
$$f_{\mathrm{cat}} = \mathrm{Concat}\left(f_1,\ f_1' \odot M\right),$$
where $\odot$ denotes element-wise multiplication.
Finally, the concatenated feature is passed through another convolution–BN–ReLU sequence to produce the output feature $f_{\mathrm{csse}}$ as follows:
$$f_{\mathrm{csse}} = \mathrm{ReLU}\left(\mathrm{BN}\!\left(\mathrm{Conv}(f_{\mathrm{cat}})\right)\right) \in \mathbb{R}^{32 \times \frac{H}{4} \times \frac{W}{4}}.$$
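The CSSE computation can likewise be sketched in PyTorch as below; the single-channel attention projection and the 3 × 3 kernels of the refinement convolutions are assumptions, since the text only fixes the 1 × 1 projection and the overall Conv–BN–ReLU ordering.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CSSE(nn.Module):
    """Cross-Stage Semantic Enhancement sketch: a sigmoid attention map derived
    from the high-level feature gates the refined low-level feature."""
    def __init__(self, low_ch=32, high_ch=192):
        super().__init__()
        self.attn_proj = nn.Conv2d(high_ch, 1, kernel_size=1)  # 1x1 projection for M
        self.refine = nn.Sequential(                            # f1' = BN(Conv(ReLU(Conv(f1))))
            nn.Conv2d(low_ch, low_ch, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(low_ch, low_ch, 3, padding=1),
            nn.BatchNorm2d(low_ch),
        )
        self.out = nn.Sequential(                               # ReLU(BN(Conv(f_cat)))
            nn.Conv2d(2 * low_ch, low_ch, 3, padding=1),
            nn.BatchNorm2d(low_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, f1, f_msca):
        m = self.attn_proj(f_msca)
        m = F.interpolate(m, size=f1.shape[-2:], mode="bilinear",
                          align_corners=False)
        m = torch.sigmoid(m)                        # M = sigmoid(Up(Conv(f_msca)))
        f1_ref = self.refine(f1)
        f_cat = torch.cat([f1, f1_ref * m], dim=1)  # Concat(f1, f1' * M)
        return self.out(f_cat)                      # f_csse: 32 x H/4 x W/4

f1, f_msca = torch.randn(1, 32, 64, 64), torch.randn(1, 192, 8, 8)
print(CSSE()(f1, f_msca).shape)  # torch.Size([1, 32, 64, 64])
```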
As shown in Figure 6, the CSSE module suppresses redundant textures in shallow features and emphasizes cloud boundaries, promoting more semantically consistent decoding. By selectively preserving spatially relevant information, it improves the network’s ability to recover fine cloud structures and mitigate false positives in complex backgrounds.

3.5. Loss Function and Evaluation Metrics

To supervise the multi-class cloud segmentation task, we adopt the standard pixel-wise cross-entropy loss, formulated as follows:
$$\mathcal{L}_{\mathrm{CE}} = -\frac{1}{N}\sum_{n=1}^{N}\sum_{k=1}^{K} y_k^{(n)} \log \hat{y}_k^{(n)},$$
where $N$ denotes the number of training pixels (or samples), $K$ is the number of cloud categories (e.g., clear sky, cloud shadow, thin cloud, and thick cloud), $y_k^{(n)} \in \{0, 1\}$ is the ground-truth label for the $k$-th class of the $n$-th sample, and $\hat{y}_k^{(n)} \in [0, 1]$ is the predicted probability for that class. This formulation penalizes class-wise divergence between predictions and ground-truth distributions.
To evaluate segmentation performance, we adopt four commonly used metrics computed at the pixel level: overall accuracy (aAcc), mean accuracy (mAcc), mean Intersection over Union (mIoU), and mean Dice coefficient (mDice). These metrics are defined as follows:
Overall Accuracy (aAcc) measures the proportion of correctly classified pixels over the entire dataset as follows:
$$\mathrm{aAcc} = \frac{TP + TN}{TP + TN + FP + FN},$$
where $TP$, $TN$, $FP$, and $FN$ denote the total numbers of true positives, true negatives, false positives, and false negatives across all classes.
Mean Accuracy (mAcc) computes the average per-class accuracy, helping mitigate class imbalance as follows:
$$\mathrm{mAcc} = \frac{1}{C}\sum_{i=1}^{C} \frac{TP_i}{TP_i + FN_i},$$
where $TP_i$ and $FN_i$ refer to the numbers of true positives and false negatives for class $i$, respectively.
Mean Intersection over Union (mIoU) evaluates the average region overlap between prediction and ground truth for each class as follows:
$$\mathrm{mIoU} = \frac{1}{C}\sum_{i=1}^{C} \frac{TP_i}{TP_i + FP_i + FN_i}.$$
Mean Dice Coefficient (mDice) provides a harmonic measure of precision and recall, particularly sensitive to boundary and small-object segmentation as follows:
$$\mathrm{mDice} = \frac{1}{C}\sum_{i=1}^{C} \frac{2\,TP_i}{2\,TP_i + FP_i + FN_i}.$$
These metrics jointly reflect the model’s performance in both region-level accuracy and class-specific consistency, providing a comprehensive evaluation for multi-class cloud segmentation tasks.
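For reference, all four metrics can be computed from a single confusion matrix. The sketch below is an illustrative PyTorch helper (`segmentation_metrics` is a hypothetical function, not part of any released toolkit) that mirrors the per-class TP/FP/FN definitions above; aAcc is computed as overall pixel accuracy (diagonal sum over total).

```python
import torch

def segmentation_metrics(pred, target, num_classes):
    """Compute aAcc, mAcc, mIoU, mDice from integer label maps."""
    # Confusion matrix: rows = ground truth, columns = prediction.
    idx = target.flatten() * num_classes + pred.flatten()
    conf = torch.bincount(idx, minlength=num_classes ** 2)
    conf = conf.reshape(num_classes, num_classes)

    tp = conf.diag().float()
    fp = conf.sum(dim=0).float() - tp   # predicted as class i, labeled otherwise
    fn = conf.sum(dim=1).float() - tp   # labeled as class i, predicted otherwise

    aacc = tp.sum() / conf.sum()
    macc = (tp / (tp + fn).clamp(min=1)).mean()
    miou = (tp / (tp + fp + fn).clamp(min=1)).mean()
    mdice = (2 * tp / (2 * tp + fp + fn).clamp(min=1)).mean()
    return aacc.item(), macc.item(), miou.item(), mdice.item()

# Example with random 4-class maps (clear sky, cloud shadow, thin cloud, thick cloud):
pred = torch.randint(0, 4, (512, 512))
target = torch.randint(0, 4, (512, 512))
print(segmentation_metrics(pred, target, num_classes=4))
```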

4. Experiments

In this section, we evaluate FEMNet’s performance on both binary and multi-class cloud detection tasks using multiple public remote sensing datasets. We detail the datasets, implementation setup, baseline methods, and quantitative and qualitative comparisons with state-of-the-art approaches.

4.1. Experimental Setup

4.1.1. Datasets

For binary cloud segmentation, we conduct experiments on the following three publicly available binary cloud detection datasets: HRC_WHU [34], GF1MS-WHU [35], and GF2MS-WHU [35]. All images in these datasets are uniformly padded and cropped into non-overlapping patches of size 256 × 256 for training and evaluation.
  • HRC_WHU is a high-resolution dataset collected from Google Earth, with spatial resolutions ranging from 0.5 to 15 m. It covers diverse geographic regions across the globe and consists of 120 training images and 30 test images.
  • GF1MS-WHU is derived from the GF-1 satellite with an 8 m resolution. It includes 6344 training and 4086 test images, collected over various regions in China.
  • GF2MS-WHU originates from the GF-2 satellite, offering a 4 m resolution. This dataset contains 14,357 training and 7560 test images.
For multi-class cloud segmentation, we evaluate our method on the following two multi-class datasets: L8 Biome [36] and CloudSEN12 [37]. The multi-class cloud segmentation task involves the following four semantic categories: clear sky, thin cloud, thick cloud, and cloud shadow. For both datasets, images are preprocessed into 512 × 512 patches to facilitate efficient training.
  • L8 Biome consists of imagery from the Landsat-8 OLI sensor. Scenes are categorized into eight biome types as follows: Urban, Barren, Forest, Shrubland, Grass/Cropland, Snow/Ice, Wetlands, and Water. Thin clouds are defined as semi-transparent regions with moderate reflectance and soft boundaries, whereas thick clouds are optically dense, high-albedo regions with distinct shapes. Cloud shadows are annotated by visual inspection based on spatial alignment with overlying clouds and their low reflectance signatures. The dataset includes 7931 training and 2643 test images.
  • CloudSEN12 is constructed from Sentinel-2 satellite imagery and provides both Level-1C (L1C) and Level-2A (L2A) data. Each version contains 8490 training and 975 test images. Thin clouds exhibit weaker opacity and are often partially transparent, while thick clouds are optically saturated. Shadows are inferred based on cloud position and solar geometry but may exhibit labeling uncertainty in complex terrain.

4.1.2. Implementation Details

All experiments are conducted on a single NVIDIA RTX 2080Ti GPU (11 GB). The models are trained using the RMSprop optimizer with a learning rate of $1 \times 10^{-4}$ and a batch size of 4. The maximum number of training iterations is set to 40,000. The Mamba encoder is initialized using SegMAN-Tiny [38]. Standard data augmentations, including flipping, scaling, and color jittering, are applied.
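The stated optimization settings can be summarized in a minimal training-loop outline such as the one below; `model` and `train_loader` are placeholders for FEMNet and the patch dataloader, and the loop reflects only the hyperparameters listed here (RMSprop, learning rate 1e-4, batch size 4, 40,000 iterations, cross-entropy loss), not the authors' full training pipeline.

```python
import torch
from torch import nn, optim

def train(model, train_loader, max_iters=40_000,
          device="cuda" if torch.cuda.is_available() else "cpu"):
    """Minimal outline of the stated training setup (placeholders for data/model)."""
    model = model.to(device).train()
    optimizer = optim.RMSprop(model.parameters(), lr=1e-4)
    criterion = nn.CrossEntropyLoss()   # pixel-wise cross-entropy

    it = 0
    while it < max_iters:
        for images, labels in train_loader:      # batch size 4 in the paper
            images, labels = images.to(device), labels.to(device)
            logits = model(images)               # (B, num_classes, H, W)
            loss = criterion(logits, labels)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            it += 1
            if it >= max_iters:
                break
    return model
```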
We evaluate and compare our method against a wide range of existing cloud detection approaches, including CDNetv1 [10], CDNetv2 [39], KappaMask [40], DBNet [28], UNetMobv2 [37], SCNN [41], MCDNet [42], HRCloudNet [43], SCTNet [21], and SegMAN [38]. To better understand the characteristics of existing methods, we categorize them based on their architectural design. CDNetv1, CDNetv2, and KappaMask are representative single-stream CNN models that follow the encoder–decoder paradigm. UNetMobv2 and SCNN are lightweight CNN-based methods, where the former employs depthwise separable convolutions for efficiency, and the latter uses a minimal three-layer structure without any pooling operations to preserve fine details. DBNet and SCTNet are hybrid models that integrate CNN and Transformer branches to balance local and global features; among them, SCTNet is specifically optimized for lightweight deployment. MCDNet introduces a dual-input mechanism to better distinguish thin clouds, while HRCloudNet leverages multi-resolution representations for enhanced cloud structure awareness. SegMAN employs a fully Mamba-based encoder–decoder architecture, enhanced with Neighborhood Attention to mitigate detail loss by capturing local dependencies.

4.2. Comparisons with State-of-the-Art Methods

4.2.1. Results of Binary Cloud Detection

We report binary cloud segmentation results on the HRC_WHU, GF1MS-WHU, and GF2MS-WHU datasets. Quantitative comparisons are presented in Table 1, Table 2 and Table 3, while representative visualizations are provided in Figure 7, where cloud regions are displayed in white. Across all three datasets, FEMNet achieves leading performance, with mIoU values of 89.68%, 93.32%, and 88.24% on HRC_WHU, GF1MS-WHU, and GF2MS-WHU, respectively. These consistent gains highlight FEMNet’s ability to adapt across diverse spatial resolutions and scene complexities.
On the HRC_WHU dataset, FEMNet exceeds SCTNet by 0.46% in mIoU. HRC_WHU features heterogeneous acquisition conditions and relatively sparse training data, making generalization especially challenging. Qualitatively, FEMNet produces cloud masks that better conform to annotated shapes, avoiding both under-segmentation of thin structures and overextension into clear-sky areas. In contrast, models such as UNetMobv2 and SegMAN occasionally exhibit over-smoothing or omission of minor cloud components, particularly near ambiguous edges.
For GF1MS-WHU, where overall model performance is high due to abundant samples and consistent data quality, FEMNet still achieves a modest gain over SCTNet (93.32% vs. 93.22%). Compared to SegMAN, FEMNet produces comparable interior region segmentation but demonstrates improved boundary adherence, especially where annotations include fine protrusions or disconnected fragments.
On the GF2MS-WHU dataset, which contains higher variability in cloud thickness and complex land-atmosphere backgrounds, FEMNet yields the largest performance margin—a 1.25% mIoU gain over SCTNet and 3.73% over SegMAN. Visual comparisons show that FEMNet preserves fragmented or fuzzy cloud components more faithfully, particularly in low-contrast conditions. Unlike KappaMask or MCDNet, which frequently exhibit missing regions or false activations, FEMNet maintains spatial coherence without introducing artificial artifacts.
Real-world clouds, especially thin cirrus or complex formations, often possess irregular, fractal-like perimeters. Over-smoothing such boundaries can lead to the underrepresentation of valid cloud regions. FEMNet addresses this by adaptively balancing structure preservation with semantic consistency, avoiding excessive regularization while reducing noise. Taken together, these results affirm the benefit of FEMNet’s dual-branch design and decoder enhancements. Its strong performance across both quantitative metrics and qualitative visual fidelity illustrates robustness across cloud morphologies, acquisition domains, and resolution settings.

4.2.2. Results of Multi-Class Cloud Segmentation

Table 4 summarizes the quantitative results on the CloudSEN12 High L1C dataset, and the corresponding visual comparisons are shown in Figure 8. Compared with binary classification, this setting introduces additional ambiguity, particularly between visually similar classes such as thin clouds and haze, or between shadows and water bodies. To further validate the generalizability of FEMNet, we include a comparison with DDRNet [30], a widely used real-time semantic segmentation model originally developed for traffic scene understanding. Its inclusion serves as a baseline for evaluating FEMNet against general-purpose lightweight architectures.
FEMNet achieves the highest mIoU score of 71.78% with consistently strong performance across other metrics on the L1C dataset. Compared to SegMAN and SCTNet, FEMNet offers enhanced segmentation precision in spatially heterogeneous scenes. In Figure 8, FEMNet captures fine-scale structures of thin clouds more accurately, especially along coastlines and vegetated areas, where SegMAN tends to over-smooth or merge thin cloud boundaries with the background. For cloud shadows, which often overlap dark terrain or water, FEMNet demonstrates improved discriminative ability, reducing over-segmentation and better aligning with ground truth. These observations confirm that our model not only excels numerically but also produces more interpretable predictions in complex atmospheric contexts.
On the CloudSEN12 High L2A dataset, FEMNet again achieves the best overall performance with a mIoU of 72.53%, aAcc of 89.64%, and mDice of 83.15%, outperforming the next-best model SCTNet by margins of 1.83% in mIoU and 1.27% in mDice (Table 5). Visual comparisons in Figure 9 show that FEMNet maintains semantic consistency across thin clouds and shadows, even when atmospheric correction increases spectral distortion. SegMAN and other baselines often misclassify thin clouds as clear sky or conflate cloud shadows with terrain shadows. FEMNet, by contrast, yields smoother and more distinct boundaries across all four semantic classes, confirming its robustness to radiometric variation and ambiguous boundaries.
To evaluate geographic generalization, we further conduct experiments on the L8 Biome dataset, which includes diverse land cover types such as Forest, Barren, Snow/Ice, and Wetlands. As reported in Table 6, FEMNet achieves the highest mIoU of 69.70% and mDice of 80.49%, outperforming SegMAN by 0.93% in mIoU and 1.08% in mDice. While SegMAN attains a slightly higher aAcc, FEMNet offers better inter-class discrimination, especially in spectrally complex regions.
In Figure 10, the visual analysis shows that FEMNet excels in separating thin clouds from snow surfaces, which is a frequent source of confusion for other methods. Moreover, cloud shadows are correctly distinguished from water bodies, which are often misclassified by SegMAN and KappaMask. These results confirm that FEMNet generalizes well across varying terrain and atmospheric conditions, maintaining both high accuracy and semantic coherence across different biomes.

5. Discussion

5.1. Ablation Study

To assess the contribution of each proposed component, we conducted an ablation study on the L8 Biome dataset, as shown in Table 7. The baseline model comprises a Mamba-based encoder and a UNet-style decoder.
Among the variants, removing the Dual-Stream Encoder (DSE) causes the most notable performance degradation, with mIoU dropping from 69.70% to 66.67% and mDice from 80.49% to 78.00%. This emphasizes the essential role of combining convolutional and Mamba-based features for effective semantic segmentation. Similarly, excluding the CSSE module results in a noticeable performance drop, particularly in mIoU (69.01%) and mDice (79.79%), underlining its role in aligning low-level spatial details with high-level semantics. Interestingly, removing the MSCA module yields the highest aAcc (90.19%), yet it significantly lowers mAcc (76.33%), suggesting that although overall pixel-level accuracy improves, class-wise balance deteriorates. Collectively, these results confirm that the integration of DSE, CSSE, and MSCA contributes complementary strengths, leading to optimal overall performance.

5.2. Model Efficiency

We evaluate FEMNet’s efficiency in terms of parameter count, computation, inference speed, model size, and memory usage. As shown in Table 8, FEMNet contains only 4.4 million parameters and 1.3 billion MACs, which is substantially lower than heavy models like DBNet and KappaMask. While SCTNet is even smaller (0.7M parameters), FEMNet achieves better segmentation performance with only a slight increase in computation.
On an NVIDIA RTX 2080Ti (batch size 64), FEMNet processes 331 images per second, offering a good trade-off between speed and accuracy. Although SCTNet reaches a higher throughput (1047 img/s), its segmentation quality is notably lower.
FEMNet also demonstrates compactness in storage and memory usage. The model file is only 17.9 MB, and inference with 256 × 256 inputs (batch size 1) requires 548 MB of GPU memory as measured by gpustat. This enables efficient deployment on memory-limited platforms such as edge devices and satellite systems.
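Figures of this kind can be approximated with a short benchmark script such as the sketch below, which reports parameter count and throughput for 256 × 256 inputs; MACs and GPU memory are not measured here (the paper uses an external profiler and gpustat for those), and the stand-in model in the example call is purely illustrative.

```python
import time
import torch

def benchmark(model, batch_size=64, size=256, warmup=10, iters=50,
              device="cuda" if torch.cuda.is_available() else "cpu"):
    """Report parameter count (M) and throughput (img/s) for size x size inputs."""
    model = model.to(device).eval()
    params_m = sum(p.numel() for p in model.parameters()) / 1e6
    x = torch.randn(batch_size, 3, size, size, device=device)

    with torch.no_grad():
        for _ in range(warmup):            # warm-up to stabilize clocks and caches
            model(x)
        if device == "cuda":
            torch.cuda.synchronize()
        start = time.perf_counter()
        for _ in range(iters):
            model(x)
        if device == "cuda":
            torch.cuda.synchronize()
        elapsed = time.perf_counter() - start

    return params_m, iters * batch_size / elapsed

# Example with a stand-in model (replace with FEMNet):
params_m, img_per_s = benchmark(torch.nn.Conv2d(3, 4, 3, padding=1))
print(f"{params_m:.2f} M params, {img_per_s:.0f} img/s")
```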

6. Conclusions

This paper presents FEMNet, a lightweight and feature-enriched architecture for cloud detection in optical remote sensing imagery. By combining a Mamba-based state space encoder with a parallel CNN stream, FEMNet effectively captures both long-range semantic dependencies and fine-grained spatial details. To mitigate semantic inconsistency and enhance contextual awareness, which are two key limitations in conventional encoder–decoder frameworks, we introduce the following two targeted modules: the cross-stage semantic enhancement block for semantic alignment across feature hierarchies, and the multi-scale context aggregation module for efficient context fusion across spatial resolutions. Extensive experiments across five benchmark datasets validate FEMNet’s superior performance over existing CNN-, Transformer-, and hybrid-based methods in both binary and multi-class segmentation scenarios. In future work, we aim to extend FEMNet with domain-adaptive training strategies to improve its generalization across varying seasonal patterns, geographic regions, and sensor modalities.

Author Contributions

Conceptualization, W.L. and J.L.; methodology, W.L.; software, W.L.; validation, W.L., H.N. and J.L.; formal analysis, W.L.; investigation, W.L.; resources, J.L. and X.S.; data curation, W.L.; writing—original draft preparation, W.L.; writing—review and editing, W.L. and H.N.; visualization, W.L.; supervision, B.L. and X.S.; project administration, J.L.; funding acquisition, B.L. and J.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (grant numbers 61571332 and 42230108) and the Key Program for Basic Research of China (grant number JCKY2023206B026).

Data Availability Statement

Our research data are available at https://huggingface.co/datasets/XavierJiezou/cloudseg-datasets (accessed on 20 July 2025).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Zhang, J.; Wu, J.; Wang, H.; Wang, Y.; Li, Y. Cloud detection method using CNN based on cascaded feature attention and channel attention. IEEE Trans. Geosci. Remote Sens. 2021, 60, 1–17. [Google Scholar] [CrossRef]
  2. Wang, Z.; Zhao, L.; Meng, J.; Han, Y.; Li, X.; Jiang, R.; Chen, J.; Li, H. Deep learning-based cloud detection for optical remote sensing images: A Survey. Remote Sens. 2024, 16, 4583. [Google Scholar] [CrossRef]
  3. Ghassemi, S.; Magli, E. Convolutional neural networks for on-board cloud screening. Remote Sens. 2019, 11, 1417. [Google Scholar] [CrossRef]
  4. Li, Z.; Shen, H.; Weng, Q.; Zhang, Y.; Dou, P.; Zhang, L. Cloud and cloud shadow detection for optical satellite imagery: Features, algorithms, validation, and prospects. ISPRS J. Photogramm. Remote Sens. 2022, 188, 89–108. [Google Scholar] [CrossRef]
  5. Mohajerani, S.; Krammer, T.A.; Saeedi, P. Cloud detection algorithm for remote sensing images using fully convolutional neural networks. arXiv 2018, arXiv:1810.05782. [Google Scholar] [CrossRef]
  6. Francis, A.; Sidiropoulos, P.; Muller, J.P. CloudFCN: Accurate and robust cloud detection for satellite imagery with deep learning. Remote Sens. 2019, 11, 2312. [Google Scholar] [CrossRef]
  7. Wu, K.; Xu, Z.; Lyu, X.; Ren, P. Cloud detection with boundary nets. ISPRS J. Photogramm. Remote Sens. 2022, 186, 218–231. [Google Scholar] [CrossRef]
  8. Zhu, Z.; Woodcock, C.E. Object-based cloud and cloud shadow detection in Landsat imagery. Remote Sens. Environ. 2012, 118, 83–94. [Google Scholar] [CrossRef]
  9. Qiu, S.; Zhu, Z.; He, B. Fmask 4.0: Improved cloud and cloud shadow detection in Landsats 4–8 and Sentinel-2 imagery. Remote Sens. Environ. 2019, 231, 111205. [Google Scholar] [CrossRef]
  10. Yang, J.; Guo, J.; Yue, H.; Liu, Z.; Hu, H.; Li, K. CDnet: CNN-based cloud detection for remote sensing imagery. IEEE Trans. Geosci. Remote Sens. 2019, 57, 6195–6211. [Google Scholar] [CrossRef]
  11. Dai, X.; Chen, K.; Xia, M.; Weng, L.; Lin, H. LPMSNet: Location pooling multi-scale network for cloud and cloud shadow segmentation. Remote Sens. 2023, 15, 4005. [Google Scholar] [CrossRef]
  12. Zhang, C.; Weng, L.; Ding, L.; Xia, M.; Lin, H. CRSNet: Cloud and cloud shadow refinement segmentation networks for remote sensing imagery. Remote Sens. 2023, 15, 1664. [Google Scholar] [CrossRef]
  13. Luo, W.; Li, Y.; Urtasun, R.; Zemel, R. Understanding the effective receptive field in deep convolutional neural networks. In Proceedings of the 29th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain, 5–10 December 2016. [Google Scholar]
  14. Chen, X.; Li, Z.; Jiang, J.; Han, Z.; Deng, S.; Li, Z.; Fang, T.; Huo, H.; Li, Q.; Liu, M. Adaptive effective receptive field convolution for semantic segmentation of VHR remote sensing images. IEEE Trans. Geosci. Remote Sens. 2020, 59, 3532–3546. [Google Scholar] [CrossRef]
  15. Li, J.; Wang, Q. CSDFormer: A cloud and shadow detection method for landsat images based on transformer. Int. J. Appl. Earth Obs. Geoinf. 2024, 129, 103799. [Google Scholar] [CrossRef]
  16. Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv 2020, arXiv:2010.11929. [Google Scholar]
  17. Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; Guo, B. Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, BC, Canada, 11–17 October 2021; pp. 10012–10022. [Google Scholar]
  18. Hu, K.; Zhang, E.; Xia, M.; Weng, L.; Lin, H. Mcanet: A multi-branch network for cloud/snow segmentation in high-resolution remote sensing images. Remote Sens. 2023, 15, 1055. [Google Scholar] [CrossRef]
  19. Ge, W.; Yang, X.; Jiang, R.; Shao, W.; Zhang, L. CD-CTFM: A lightweight CNN-transformer network for remote sensing cloud detection fusing multiscale features. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2024, 17, 4538–4551. [Google Scholar] [CrossRef]
  20. Gu, H.; Gu, G.; Liu, Y.; Lin, H.; Xu, Y. Multi-Branch Attention Fusion Network for Cloud and Cloud Shadow Segmentation. Remote Sens. 2024, 16, 2308. [Google Scholar] [CrossRef]
  21. Liu, W.; Luo, B.; Liu, J.; Nie, H.; Su, X. SCTNet: A Shallow CNN-Transformer Network with Statistics-Driven Modules for Cloud Detection. IEEE Geosci. Remote Sens. Lett. 2025, 22, 1–5. [Google Scholar] [CrossRef]
  22. Pang, Y.; Yao, L.; Luo, Y.; Dong, C.; Kong, Q.; Chen, B. RepSViT: An efficient vision transformer based on spiking neural networks for object recognition in satellite on-orbit remote sensing images. IEEE Trans. Geosci. Remote Sens. 2024, 62, 1–16. [Google Scholar] [CrossRef]
  23. Gu, A.; Dao, T. Mamba: Linear-time sequence modeling with selective state spaces. arXiv 2023, arXiv:2312.00752. [Google Scholar] [CrossRef]
  24. Liu, Y.; Tian, Y.; Zhao, Y.; Yu, H.; Xie, L.; Wang, Y.; Ye, Q.; Jiao, J.; Liu, Y. Vmamba: Visual state space model. Proc. Adv. Neural Inf. Process. Syst. 2024, 37, 103031–103063. [Google Scholar]
  25. Chen, H.; Song, J.; Han, C.; Xia, J.; Yokoya, N. Changemamba: Remote sensing change detection with spatio-temporal state space model. IEEE Trans. Geosci. Remote Sens. 2024, 62, 1–20. [Google Scholar] [CrossRef]
  26. Liu, C.; Chen, K.; Chen, B.; Zhang, H.; Zou, Z.; Shi, Z. Rscama: Remote sensing image change captioning with state space model. IEEE Geosci. Remote Sens. Lett. 2024, 21, 1–5. [Google Scholar] [CrossRef]
  27. Huang, J.; Yuan, X.; Lam, C.T.; Wang, Y.; Xia, M. LCCDMamba: Visual State Space Model for Land Cover Change Detection of VHR Remote Sensing Images. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2025, 18, 5765–5781. [Google Scholar] [CrossRef]
  28. Lu, C.; Xia, M.; Qian, M.; Chen, B. Dual-branch network for cloud and cloud shadow segmentation. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–12. [Google Scholar] [CrossRef]
  29. Hu, K.; Zhang, D.; Xia, M. CDUNet: Cloud detection UNet for remote sensing imagery. Remote Sens. 2021, 13, 4533. [Google Scholar] [CrossRef]
  30. Pan, H.; Hong, Y.; Sun, W.; Jia, Y. Deep dual-resolution networks for real-time and accurate semantic segmentation of traffic scenes. IEEE Trans. Intell. Transp. Syst. 2022, 24, 3448–3460. [Google Scholar] [CrossRef]
  31. Liu, Y.; Song, S.; Wang, M.; Gao, H.; Liu, J. DE-Unet: Dual-Encoder U-Net for Ultra-High Resolution Remote Sensing Image Segmentation. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2025, 18, 12290–12302. [Google Scholar] [CrossRef]
  32. He, X.; Zhou, Y.; Zhao, J.; Zhang, D.; Yao, R.; Xue, Y. Swin transformer embedding UNet for remote sensing image semantic segmentation. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–15. [Google Scholar] [CrossRef]
  33. Ma, X.; Zhang, X.; Pun, M.O. RS3Mamba: Visual State Space Model for Remote Sensing Image Semantic Segmentation. IEEE Geosci. Remote Sens. Lett. 2024, 21, 1–5. [Google Scholar] [CrossRef]
  34. Li, Z.; Shen, H.; Cheng, Q.; Liu, Y.; You, S.; He, Z. Deep learning based cloud detection for medium and high resolution remote sensing images of different sensors. ISPRS J. Photogramm. Remote Sens. 2019, 150, 197–212. [Google Scholar] [CrossRef]
  35. Zhu, S.; Li, Z.; Shen, H. Transferring Deep Models for Cloud Detection in Multisensor Images via Weakly Supervised Learning. IEEE Trans. Geosci. Remote Sens. 2024, 62, 1–18. [Google Scholar] [CrossRef]
  36. Foga, S.; Scaramuzza, P.L.; Guo, S.; Zhu, Z.; Dilley, R.D., Jr.; Beckmann, T.; Schmidt, G.L.; Dwyer, J.L.; Hughes, M.J.; Laue, B. Cloud detection algorithm comparison and validation for operational Landsat data products. Remote Sens. Environ. 2017, 194, 379–390. [Google Scholar] [CrossRef]
  37. Aybar, C.; Ysuhuaylas, L.; Loja, J.; Gonzales, K.; Herrera, F.; Bautista, L.; Yali, R.; Flores, A.; Diaz, L.; Cuenca, N.; et al. CloudSEN12, a global dataset for semantic understanding of cloud and cloud shadow in Sentinel-2. Sci. Data 2022, 9, 782. [Google Scholar] [CrossRef] [PubMed]
  38. Fu, Y.; Lou, M.; Yu, Y. SegMAN: Omni-scale Context Modeling with State Space Models and Local Attention for Semantic Segmentation. In Proceedings of the Computer Vision and Pattern Recognition, Nashville, TN, USA, 11–15 June 2025; pp. 19077–19087. [Google Scholar]
  39. Guo, J.; Yang, J.; Yue, H.; Tan, H.; Hou, C.; Li, K. CDnetV2: CNN-based cloud detection for remote sensing imagery with cloud-snow coexistence. IEEE Trans. Geosci. Remote Sens. 2020, 59, 700–713. [Google Scholar] [CrossRef]
  40. Domnich, M.; Sünter, I.; Trofimov, H.; Wold, O.; Harun, F.; Kostiukhin, A.; Järveoja, M.; Veske, M.; Tamm, T.; Voormansik, K.; et al. KappaMask: AI-based cloudmask processor for Sentinel-2. Remote Sens. 2021, 13, 4100. [Google Scholar] [CrossRef]
  41. Chai, D.; Huang, J.; Wu, M.; Yang, X.; Wang, R. Remote sensing image cloud detection using a shallow convolutional neural network. ISPRS J. Photogramm. Remote Sens. 2024, 209, 66–84. [Google Scholar] [CrossRef]
  42. Dong, J.; Wang, Y.; Yang, Y.; Yang, M.; Chen, J. MCDNet: Multilevel cloud detection network for remote sensing images based on dual-perspective change-guided and multi-scale feature fusion. Int. J. Appl. Earth Obs. Geoinf. 2024, 129, 103820. [Google Scholar] [CrossRef]
  43. Li, J.; Xue, T.; Zhao, J.; Ge, J.; Min, Y.; Su, W.; Zhan, K. High-resolution cloud detection network. J. Electron. Imaging 2024, 33, 043027. [Google Scholar] [CrossRef]
Figure 1. Overview of the proposed FEMNet architecture. The network adopts a dual-branch encoder design as follows: a Mamba-based encoder extracts hierarchical semantic features $f_1$ to $f_4$, while a shallow CNN encoder retains high-resolution spatial details. The MSCA module processes $f_4$ using average pooling at multiple scales and produces a context-enriched feature $f_{\mathrm{msca}}$. The CSSE module receives low-level feature $f_1$ and high-level semantic feature $f_{\mathrm{msca}}$, and it outputs an aligned feature $f_{\mathrm{csse}}$ through gated fusion. The decoder progressively upsamples and refines features to recover full-resolution predictions. See Section 3 for the internal structure of each module.
Figure 2. Feature visualization of FEMNet’s dual-branch encoder. The first column shows the input remote sensing images. The second and third columns visualize features extracted from the shallow CNN branch and the Mamba branch, respectively. The last column presents the fused features after dual-branch integration.
Figure 3. Internal structure of the SegMAN encoder with Mamba. (a) Each stage consists of a residual backbone composed of Layer Normalization, Neighborhood Attention (NAttn), 2D Selective-Scan (SS2D) module, and convolution, followed by a Feed-Forward Network (FFN). (b) The SS2D module unfolds the input feature map along four spatial directions via cross-scan (left-to-right, right-to-left, top-to-bottom, bottom-to-top), transforming the 2D input into directional sequences. Each sequence is processed independently by a selective state space model (S6 block), which dynamically generates SSM parameters using learnable linear projections (e.g., for $B$, $C$, $\Delta$). These outputs are then reshaped and aggregated via Cross-Merge to reconstruct a globally contextualized 2D representation.
Figure 4. Auxiliary modules. (a) Multi-Scale Context Aggregation (MSCA): The input $f_4$ is the deepest semantic feature map, which typically exhibits sparse and fragmented activations due to resolution loss. MSCA enhances this feature by applying average pooling at multiple scales (2, 4), upsampling, and concatenation, followed by a convolutional refinement to produce $f_{\mathrm{msca}}$, which contains more coherent and context-aware activations. (b) Cross-Stage Semantic Enhancement (CSSE): The module takes the low-level feature $f_1$ and semantic feature $f_{\mathrm{msca}}$ as input. The latter is transformed into a spatial attention map $M$ via a $1 \times 1$ convolution and sigmoid activation. $M$ modulates $f_1$ through element-wise multiplication, and the result is concatenated with the original $f_1$, followed by a convolutional block to produce the enhanced feature $f_{\mathrm{csse}}$. This improves semantic consistency and boundary localization in the decoding path.
Figure 5. Visualization of feature maps before and after the Multi-Scale Context Aggregation (MSCA) module.
Figure 6. Visualization of feature maps before and after the Cross-Stage Semantic Enhancement (CSSE) module.
Figure 7. Qualitative results on the (top) HRC_WHU, (middle) GF1MS-WHU, and (bottom) GF2MS-WHU datasets.
Figure 8. Qualitative comparison on the CloudSEN12 High L1C dataset.
Figure 9. Qualitative comparison on the CloudSEN12 High L2A dataset.
Figure 10. Qualitative comparison on the L8 Biome dataset.
Table 1. Evaluation results on the HRC_WHU dataset. Bold highlights the best result.
| Method | Venue | aAcc | mIoU | mAcc | mDice |
|---|---|---|---|---|---|
| CDNetv1 [10] | TGRS 2019 | 89.88 | 77.79 | 89.93 | 87.20 |
| CDNetv2 [39] | TGRS 2020 | 89.71 | 76.75 | 87.46 | 86.46 |
| KappaMask [40] | RS 2021 | 84.73 | 67.48 | 80.30 | 79.74 |
| DBNet [28] | TGRS 2022 | 90.11 | 77.78 | 88.80 | 87.17 |
| UNetMobv2 [37] | Sci. Data 2022 | 92.13 | 79.91 | 85.61 | 88.45 |
| SCNN [41] | ISPRS 2024 | 74.51 | 57.22 | 81.27 | 72.31 |
| MCDNet [42] | JAG 2024 | 75.14 | 53.50 | 68.91 | 67.96 |
| HRCloudNet [43] | JEI 2024 | 92.93 | 83.44 | 92.39 | 90.79 |
| SCTNet [21] | GRSL 2025 | 95.75 | 89.22 | 93.90 | 94.22 |
| SegMAN [38] | CVPR 2025 | 94.64 | 86.22 | 90.79 | 92.45 |
| Ours | RS 2025 | **95.86** | **89.68** | **95.17** | **94.49** |
Table 2. Evaluation results on the GF1MS-WHU dataset. Bold highlights the best result.
| Method | Venue | aAcc | mIoU | mAcc | mDice |
|---|---|---|---|---|---|
| CDNetv1 [10] | TGRS 2019 | 96.81 | 81.82 | 92.75 | 89.27 |
| CDNetv2 [39] | TGRS 2020 | 97.55 | 84.93 | 92.96 | 91.36 |
| KappaMask [40] | RS 2021 | 98.91 | 92.42 | 95.05 | 95.94 |
| DBNet [28] | TGRS 2022 | 98.73 | 91.36 | 95.19 | 95.33 |
| UNetMobv2 [37] | Sci. Data 2022 | 98.82 | 91.71 | 93.99 | 95.53 |
| SCNN [41] | ISPRS 2024 | 97.18 | 81.68 | 87.21 | 89.13 |
| MCDNet [42] | JAG 2024 | 97.55 | 85.16 | 93.97 | 91.51 |
| HRCloudNet [43] | JEI 2024 | 98.80 | 91.86 | 95.82 | 95.62 |
| SCTNet [21] | GRSL 2025 | 99.05 | 93.22 | **96.69** | 96.40 |
| SegMAN [38] | CVPR 2025 | 98.85 | 91.78 | 95.32 | 95.58 |
| Ours | RS 2025 | **99.07** | **93.32** | 96.65 | **96.46** |
Table 3. Evaluation results on the GF2MS-WHU dataset. Bold highlights the best result.
| Method | Venue | aAcc | mIoU | mAcc | mDice |
|---|---|---|---|---|---|
| CDNetv1 [10] | TGRS 2019 | 92.42 | 78.20 | 83.07 | 87.17 |
| CDNetv2 [39] | TGRS 2020 | 92.63 | 78.84 | 83.67 | 87.61 |
| KappaMask [40] | RS 2021 | 90.30 | 72.00 | 77.64 | 82.57 |
| DBNet [28] | TGRS 2022 | 92.61 | 78.68 | 83.39 | 87.50 |
| UNetMobv2 [37] | Sci. Data 2022 | 93.22 | 80.44 | 84.86 | 88.70 |
| SCNN [41] | ISPRS 2024 | 91.99 | 76.99 | 82.06 | 86.32 |
| MCDNet [42] | JAG 2024 | 92.30 | 78.36 | 83.95 | 87.31 |
| HRCloudNet [43] | JEI 2024 | 91.46 | 75.57 | 80.95 | 85.29 |
| SCTNet [21] | GRSL 2025 | 95.51 | 86.99 | 91.63 | 92.86 |
| SegMAN [38] | CVPR 2025 | 94.76 | 84.51 | 88.66 | 91.33 |
| Ours | RS 2025 | **96.00** | **88.24** | **92.17** | **93.61** |
Table 4. Evaluation results on the CloudSEN12 High L1C dataset. Bold highlights the best result.
| Method | Venue | aAcc | mIoU | mAcc | mDice |
|---|---|---|---|---|---|
| CDNetv1 [10] | TGRS 2019 | 83.48 | 60.35 | 72.46 | 73.30 |
| CDNetv2 [39] | TGRS 2020 | 86.68 | 65.60 | 75.65 | 77.83 |
| KappaMask [40] | RS 2021 | 76.27 | 41.27 | 50.76 | 49.33 |
| DBNet [28] | TGRS 2022 | 86.83 | 65.52 | 75.15 | 77.68 |
| UNetMobv2 [37] | Sci. Data 2022 | 89.52 | 71.65 | 81.05 | 82.47 |
| DDRNet [30] | TITS 2022 | 86.56 | 65.64 | 76.20 | 77.94 |
| SCNN [41] | ISPRS 2024 | 60.19 | 22.75 | 33.69 | 30.88 |
| MCDNet [42] | JAG 2024 | 72.68 | 44.80 | 59.99 | 58.08 |
| HRCloudNet [43] | JEI 2024 | 87.86 | 68.26 | 78.13 | 79.88 |
| SCTNet [21] | GRSL 2025 | 89.04 | 71.21 | **81.82** | 82.24 |
| SegMAN [38] | CVPR 2025 | 88.97 | 70.80 | 80.45 | 81.83 |
| Ours | RS 2025 | **89.62** | **71.78** | 80.90 | **82.56** |
Table 5. Evaluation results on the CloudSEN12 High L2A dataset. Bold highlights the best result.
| Method | Venue | aAcc | mIoU | mAcc | mDice |
|---|---|---|---|---|---|
| CDNetv1 [10] | TGRS 2019 | 84.74 | 62.39 | 73.50 | 75.15 |
| CDNetv2 [39] | TGRS 2020 | 86.88 | 66.05 | 75.91 | 78.19 |
| KappaMask [40] | RS 2021 | 79.60 | 45.28 | 55.47 | 53.56 |
| DBNet [28] | TGRS 2022 | 86.82 | 65.65 | 75.52 | 77.81 |
| UNetMobv2 [37] | Sci. Data 2022 | 88.96 | 70.36 | 79.57 | 81.45 |
| DDRNet [30] | TITS 2022 | 85.87 | 64.27 | 75.23 | 76.72 |
| SCNN [41] | ISPRS 2024 | 67.14 | 28.76 | 40.50 | 36.48 |
| MCDNet [42] | JAG 2024 | 75.67 | 46.52 | 58.54 | 59.59 |
| HRCloudNet [43] | JEI 2024 | 88.35 | 68.35 | 77.35 | 79.85 |
| SCTNet [21] | GRSL 2025 | 88.96 | 70.80 | 81.09 | 81.88 |
| SegMAN [38] | CVPR 2025 | 88.38 | 69.05 | 78.51 | 80.45 |
| Ours | RS 2025 | **89.64** | **72.53** | **83.14** | **83.15** |
Table 6. Evaluation results on the L8 Biome dataset. Bold highlights the best result.
| Method | Venue | aAcc | mIoU | mAcc | mDice |
|---|---|---|---|---|---|
| CDNetv1 [10] | TGRS 2019 | 68.16 | 34.58 | 45.59 | 45.80 |
| CDNetv2 [39] | TGRS 2020 | 78.56 | 43.63 | 52.98 | 52.75 |
| KappaMask [40] | RS 2021 | 76.63 | 42.12 | 51.73 | 51.35 |
| DBNet [28] | TGRS 2022 | 83.62 | 51.41 | 59.70 | 60.99 |
| UNetMobv2 [37] | Sci. Data 2022 | 82.00 | 47.76 | 56.29 | 56.91 |
| DDRNet [30] | TITS 2022 | 85.13 | 55.29 | 64.07 | 65.78 |
| SCNN [41] | ISPRS 2024 | 71.22 | 32.38 | 44.06 | 39.30 |
| MCDNet [42] | JAG 2024 | 69.75 | 33.85 | 44.82 | 42.76 |
| HRCloudNet [43] | JEI 2024 | 77.04 | 43.51 | 53.77 | 53.52 |
| SCTNet [21] | GRSL 2025 | 88.52 | 66.03 | 75.85 | 77.26 |
| SegMAN [38] | CVPR 2025 | **90.09** | 68.77 | 78.03 | 79.41 |
| Ours | RS 2025 | 89.78 | **69.70** | **82.98** | **80.49** |
Table 7. Ablation results on the L8 Biome dataset. Bold highlights the best result.
| Method | aAcc | mIoU | mAcc | mDice |
|---|---|---|---|---|
| w/o DSE | 88.66 | 66.67 | 79.44 | 78.00 |
| w/o CSSE | 89.79 | 69.01 | 80.08 | 79.79 |
| w/o MSCA | **90.19** | 68.41 | 76.33 | 79.05 |
| Ours | 89.78 | **69.70** | **82.98** | **80.49** |
Table 8. Computational complexity analysis (input size: 256 × 256). Bold highlights the best result.
| Method | Params (M) | MACs (G) | Throughput (img/s) | Model Size (MB) | Memory (MB) |
|---|---|---|---|---|---|
| KappaMask [40] | 31.0 | 54.7 | 137 | 124.2 | 734 |
| DBNet [28] | 95.1 | 28.5 | 127 | 381.8 | 1660 |
| UNetMobv2 [37] | 6.6 | 3.4 | 945 | 26.8 | 552 |
| SCTNet [21] | **0.7** | **1.0** | **1047** | **3.1** | **534** |
| SegMAN [38] | 7.4 | 1.3 | 307 | 29.8 | 674 |
| FEMNet | 4.4 | 1.3 | 331 | 17.9 | 548 |
