Search Results (1,033)

Search Parameters:
Keywords = radar imagery

28 pages, 32247 KB  
Article
A Dual-Resolution Network Based on Orthogonal Components for Building Extraction from VHR PolSAR Images
by Songhao Ni, Fuhai Zhao, Mingjie Zheng, Zhen Chen and Xiuqing Liu
Remote Sens. 2026, 18(2), 305; https://doi.org/10.3390/rs18020305 - 16 Jan 2026
Viewed by 64
Abstract
Sub-meter-resolution Polarimetric Synthetic Aperture Radar (PolSAR) imagery enables precise building footprint extraction but introduces complex scattering correlated with fine spatial structures. This renders both traditional methods, which rely on simplified scattering models, and existing deep learning approaches, which sacrifice spatial detail through multi-looking, inadequate for high-precision extraction tasks. To address this, we propose an Orthogonal Dual-Resolution Network (ODRNet) for end-to-end, precise segmentation directly from single-look complex (SLC) data. Unlike complex-valued neural networks, which suffer from high computational cost and optimization difficulties, our approach decomposes complex-valued data into its orthogonal real and imaginary components, which are then concurrently fed into a Dual-Resolution Branch (DRB) with Bilateral Information Fusion (BIF) to balance the trade-off between semantic and spatial detail. Crucially, we introduce an auxiliary Polarization Orientation Angle (POA) regression task to enforce physical consistency between the orthogonal branches. To tackle the challenge of diverse building scales, we design a Multi-scale Aggregation Pyramid Pooling Module (MAPPM) to enhance contextual awareness and a Pixel-attention Fusion (PAF) module to adaptively fuse dual-branch features. Furthermore, we construct a VHR PolSAR building footprint segmentation dataset to support related research. Experimental results demonstrate that ODRNet achieves 64.3% IoU and a 78.27% F1-score on our dataset, and 73.61% IoU with an 84.8% F1-score on a large-scale SLC scene, confirming the method's significant potential and effectiveness for high-precision building extraction directly from SLC data.
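
The paper's core preprocessing idea, decomposing SLC data into orthogonal real and imaginary channels that a real-valued dual-branch network can consume, can be illustrated with a minimal sketch. The channel count, tile size, and layout below are assumptions, not ODRNet's actual configuration:

```python
import numpy as np

# Hypothetical SLC PolSAR tile: 4 polarimetric channels (e.g., HH, HV, VH, VV),
# each a complex-valued 2-D array.
slc = (np.random.randn(4, 512, 512)
       + 1j * np.random.randn(4, 512, 512)).astype(np.complex64)

# Decompose each complex channel into its orthogonal real and imaginary
# components, yielding a real-valued 8-channel tensor that a conventional
# (real-valued) CNN can process directly, avoiding complex-valued layers.
real_part = slc.real                                             # (4, H, W)
imag_part = slc.imag                                             # (4, H, W)
network_input = np.concatenate([real_part, imag_part], axis=0)   # (8, H, W)

print(network_input.shape, network_input.dtype)  # (8, 512, 512) float32
```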

25 pages, 2339 KB  
Article
An Operational Ground-Based Vicarious Radiometric Calibration Method for Thermal Infrared Sensors: A Case Study of GF-5A WTI
by Jingwei Bai, Yunfei Bao, Guangyao Zhou, Shuyan Zhang, Hong Guan, Mingmin Zhang, Yongchao Zhao and Kang Jiang
Remote Sens. 2026, 18(2), 302; https://doi.org/10.3390/rs18020302 - 16 Jan 2026
Viewed by 90
Abstract
High-resolution TIR missions require sustained and well-characterized radiometric accuracy to support applications such as land surface temperature retrieval, drought monitoring, and surface energy budget analysis. To address this need, we develop an operational and automated ground-based vicarious radiometric calibration framework for TIR sensors and demonstrate its performance using the Wide-swath Thermal Infrared Imager (WTI) onboard Gaofen-5 01A (GF-5A). Three arid Gobi calibration sites were selected by integrating Moderate Resolution Imaging Spectroradiometer (MODIS) cloud products, Shuttle Radar Topography Mission (SRTM)-derived topography, and WTI-based radiometric uniformity metrics to ensure low cloud cover, flat terrain, and high spatial homogeneity. Automated ground stations deployed at Golmud, Dachaidan, and Dunhuang have continuously recorded 1 min contact surface temperature since October 2023. Field-measured emissivity spectra, Integrated Global Radiosonde Archive (IGRA) radiosonde profiles, and MODTRAN (MODerate resolution atmospheric TRANsmission) v5.2 simulations were combined to compute top-of-atmosphere (TOA) radiances, which were subsequently collocated with WTI imagery. After data screening and gain-stratified regression, linear calibration coefficients were derived for each TIR band. Based on 189 scenes from February–July 2024, all four bands exhibit strong linearity (R² > 0.979). Validation using 45 independent scenes yields a mean brightness-temperature root-mean-square error (RMSE) of 0.67 K. A full radiometric-chain uncertainty budget, including contact temperature, emissivity, atmospheric profiles, and radiative transfer modeling, results in a combined standard uncertainty of 1.41 K. The proposed framework provides a low-maintenance, traceable, and high-frequency solution for the long-term on-orbit radiometric calibration of GF-5A WTI and establishes a reproducible pathway for future TIR missions requiring sustained calibration stability.
(This article belongs to the Special Issue Radiometric Calibration of Satellite Sensors Used in Remote Sensing)
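
The calibration step, fitting per-band linear coefficients that relate sensor output to simulated TOA radiance, can be sketched as below. The synthetic data and variable names are illustrative; the paper's actual gain stratification and screening criteria are more involved:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical collocated samples for one TIR band: sensor digital numbers
# vs. MODTRAN-simulated top-of-atmosphere (TOA) radiances.
dn = rng.uniform(200, 800, size=189)
toa_radiance = 0.012 * dn + 0.35 + rng.normal(0, 0.05, size=189)

# Linear vicarious calibration: L = gain * DN + offset (least squares).
gain, offset = np.polyfit(dn, toa_radiance, deg=1)

predicted = gain * dn + offset
ss_res = np.sum((toa_radiance - predicted) ** 2)
ss_tot = np.sum((toa_radiance - toa_radiance.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot
rmse = np.sqrt(np.mean((toa_radiance - predicted) ** 2))

print(f"gain={gain:.4f}, offset={offset:.4f}, "
      f"R^2={r_squared:.3f}, RMSE={rmse:.3f}")
```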

32 pages, 10741 KB  
Article
A Robust Deep Learning Ensemble Framework for Waterbody Detection Using High-Resolution X-Band SAR Under Data-Constrained Conditions
by Soyeon Choi, Seung Hee Kim, Son V. Nghiem, Menas Kafatos, Minha Choi, Jinsoo Kim and Yangwon Lee
Remote Sens. 2026, 18(2), 301; https://doi.org/10.3390/rs18020301 - 16 Jan 2026
Viewed by 106
Abstract
Accurate delineation of inland waterbodies is critical for applications such as hydrological monitoring, disaster preparedness and response, and environmental management. While optical satellite imagery is hindered by cloud cover or low-light conditions, Synthetic Aperture Radar (SAR) provides consistent surface observations regardless of weather or illumination. This study introduces a deep learning-based ensemble framework for precise inland waterbody detection using high-resolution X-band Capella SAR imagery. To improve the discrimination of water from spectrally similar non-water surfaces (e.g., roads and urban structures), an 8-channel input configuration was developed by incorporating auxiliary geospatial features such as height above nearest drainage (HAND), slope, and land cover classification. Four advanced deep learning segmentation models, Proportional–Integral–Derivative Network (PIDNet), Mask2Former, Swin Transformer, and Kernel Network (K-Net), were systematically evaluated via cross-validation, and their outputs were combined using a weighted average ensemble strategy. The proposed ensemble model achieved an Intersection over Union (IoU) of 0.9422 and an F1-score of 0.9703 in blind testing, indicating high accuracy. While the ensemble's gains over the best single model (IoU: 0.9371) were moderate, its enhanced operational reliability through balanced Precision–Recall performance provides significant practical value for flood and water resource monitoring with high-resolution SAR imagery, particularly under the data-constrained conditions of commercial satellite platforms.
(This article belongs to the Section AI Remote Sensing)
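
The weighted average ensemble itself is simple to express; the sketch below assumes the four models emit per-pixel water probabilities and uses placeholder weights (the paper derives its weights from cross-validation performance):

```python
import numpy as np

# Hypothetical per-pixel water probability maps from the four models.
prob_maps = {
    "pidnet":      np.random.rand(256, 256),
    "mask2former": np.random.rand(256, 256),
    "swin":        np.random.rand(256, 256),
    "knet":        np.random.rand(256, 256),
}
weights = {"pidnet": 0.20, "mask2former": 0.30, "swin": 0.25, "knet": 0.25}

# Weighted average ensemble, then threshold to a binary water mask.
ensemble = sum(w * prob_maps[name] for name, w in weights.items())
water_mask = ensemble > 0.5
print(water_mask.mean())  # fraction of pixels classified as water
```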

29 pages, 34498 KB  
Article
From Sparse to Refined Samples: Iterative Enhancement-Based PDLCM for Multi-Annual 10 m Rice Mapping in the Middle-Lower Yangtze
by Lingbo Yang, Jiancong Dong, Cong Xu, Jingfeng Huang, Yichen Wang, Huiqin Ma, Zhongxin Chen, Limin Wang and Jingcheng Zhang
Remote Sens. 2026, 18(2), 209; https://doi.org/10.3390/rs18020209 - 8 Jan 2026
Viewed by 140
Abstract
Accurate mapping of rice cultivation is vital for ensuring food security, reducing greenhouse gas emissions, and achieving sustainable development goals. However, large-scale deep learning-based crop mapping remains limited due to the demand for vast, uniformly distributed, high-quality samples. To address this challenge, we propose a Progressive Deep Learning Crop Mapping (PDLCM) framework for national-scale, high-resolution rice mapping. Beginning with a small set of localized rice and non-rice samples, PDLCM progressively refines model performance through iterative enhancement of positive and negative samples, effectively mitigating sample scarcity and spatial heterogeneity. By combining time-series Sentinel-2 optical data with Sentinel-1 synthetic aperture radar imagery, the framework captures distinctive phenological characteristics of rice while resolving spatiotemporal inconsistencies in large datasets. Applying PDLCM, we produced 10 m rice maps from 2022 to 2024 across the middle and lower Yangtze River Basin, covering more than one million square kilometers. The results achieved an overall accuracy of 96.8% and an F1 score of 0.88, demonstrating strong spatial and temporal generalization. All datasets and source codes are publicly accessible, supporting SDG 2 and providing a transferable paradigm for operational large-scale crop mapping.
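
A minimal skeleton of such an iterative sample-enhancement loop is sketched below, using a random-forest stand-in for the paper's deep network and synthetic features; the confidence thresholds and selection rules are assumptions, not PDLCM's actual criteria:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)

# Synthetic per-pixel feature matrix (stand-in for Sentinel-1/2 time-series
# features) and a sparse initial labeled set.
X = rng.normal(size=(10000, 12))
y_true = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)     # stand-in "rice" truth
labeled = rng.choice(len(X), size=200, replace=False)  # small initial sample set
X_lab, y_lab = X[labeled], y_true[labeled]

# Progressive refinement: retrain, then absorb high-confidence predictions
# as new positive/negative samples.
for round_ in range(3):
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_lab, y_lab)
    prob = clf.predict_proba(X)[:, 1]
    confident = (prob > 0.95) | (prob < 0.05)
    X_lab = X[confident]
    y_lab = (prob[confident] > 0.5).astype(int)
    print(f"round {round_}: {confident.sum()} refined samples")
```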

22 pages, 3276 KB  
Article
AFR-CR: An Adaptive Frequency Domain Feature Reconstruction-Based Method for Cloud Removal via SAR-Assisted Remote Sensing Image Fusion
by Xiufang Zhou, Qirui Fang, Xunqiang Gong, Shuting Yang, Tieding Lu, Yuting Wan, Ailong Ma and Yanfei Zhong
Remote Sens. 2026, 18(2), 201; https://doi.org/10.3390/rs18020201 - 8 Jan 2026
Viewed by 278
Abstract
Optical imagery is often contaminated by clouds to varying degrees, which greatly affects the interpretation and analysis of images. Synthetic Aperture Radar (SAR) can penetrate clouds and mist, and a common strategy in SAR-assisted cloud removal is to fuse SAR and optical data and leverage deep learning networks to reconstruct cloud-free optical imagery. However, these methods do not fully consider the characteristics of the frequency domain during feature integration, resulting in blurred edges in the generated cloud-free optical images. Therefore, an adaptive frequency-domain feature reconstruction-based cloud removal method is proposed to address this problem. The proposed method comprises four key sequential stages. First, shallow features are extracted by fusing optical and SAR images. Second, a Transformer-based encoder captures multi-scale semantic features. Subsequently, the Frequency Domain Decoupling Module (FDDM) is employed: utilizing a Dynamic Mask Generation mechanism, it explicitly decomposes features into low-frequency structures and high-frequency details, effectively suppressing cloud interference while preserving surface textures. Finally, robust information interaction is facilitated by the Cross-Frequency Reconstruction Module (CFRM) via transposed cross-attention, ensuring precise fusion and reconstruction. Experimental evaluation on the M3R-CR dataset confirms that the proposed approach achieves the best results on all four evaluated metrics, surpassing eight other state-of-the-art methods and demonstrating its effectiveness in SAR–optical fusion for cloud removal.
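
The low/high-frequency split underlying the FDDM can be sketched with a fixed radial mask in the 2-D Fourier domain; note this fixed cutoff is a simplification of the paper's learned Dynamic Mask Generation:

```python
import torch

def frequency_decouple(features: torch.Tensor, cutoff_ratio: float = 0.25):
    """Split a feature map (B, C, H, W) into low- and high-frequency parts
    via a radial low-pass mask in the shifted 2-D Fourier domain."""
    _, _, h, w = features.shape
    spectrum = torch.fft.fftshift(torch.fft.fft2(features), dim=(-2, -1))

    # Centered low-pass mask of radius cutoff_ratio * min(h, w) / 2.
    yy, xx = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    radius = torch.sqrt((yy - h / 2) ** 2 + (xx - w / 2) ** 2)
    low_mask = (radius <= cutoff_ratio * min(h, w) / 2).to(features.dtype)

    low = torch.fft.ifft2(
        torch.fft.ifftshift(spectrum * low_mask, dim=(-2, -1))).real
    high = features - low   # residual carries edges and fine texture
    return low, high

low, high = frequency_decouple(torch.randn(1, 16, 64, 64))
print(low.shape, high.shape)
```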

36 pages, 5941 KB  
Review
Physics-Driven SAR Target Detection: A Review and Perspective
by Xinyi Li, Lei Liu, Gang Wan, Fengjie Zheng, Shihao Guo, Guangde Sun, Ziyan Wang and Xiaoxuan Liu
Remote Sens. 2026, 18(2), 200; https://doi.org/10.3390/rs18020200 - 7 Jan 2026
Viewed by 334
Abstract
Synthetic Aperture Radar (SAR) is highly valuable for target detection due to its all-weather, day-night operational capability and a degree of ground-penetration potential. However, traditional SAR target detection methods often directly adapt algorithms designed for optical imagery, simplistically treating SAR data as grayscale images. This approach overlooks SAR's unique physical nature, failing to account for key factors such as backscatter variations across polarizations, target representation changes across resolutions, and detection threshold shifts due to clutter background heterogeneity. These limitations lead to insufficient cross-polarization adaptability, feature masking, and recognition accuracy degraded by clutter interference. To address these challenges, this paper systematically reviews recent research advances in SAR target detection, focusing on physical constraints including polarization characteristics, scattering mechanisms, signal-domain properties, and resolution effects. Finally, it outlines promising research directions to guide future developments in physics-aware SAR target detection.

23 pages, 14919 KB  
Article
Estimating Economic Activity from Satellite Embeddings
by Xiangqi Yue, Zhong Zhao and Kun Hu
Appl. Sci. 2026, 16(2), 582; https://doi.org/10.3390/app16020582 - 6 Jan 2026
Viewed by 262
Abstract
Earth Embedding (EMB) is a method that adapts embedding techniques from Large Language Models (LLMs) to compress the information contained in multiple remote sensing satellite images into feature vectors. This article introduces a new approach to measuring economic activity from EMBs. Using the Google Satellite Embedding Dataset (GSED), we extract a 64-dimensional representation of the Earth's surface that integrates optical and radar imagery. A neural network maps these embeddings to nighttime light (NTL) intensity, yielding a 32-dimensional "income-aware" feature space aligned with economic variation. We then predict GDP levels and growth rates across countries and compare the results with those of traditional NTL-based models. The EMB-based estimator achieves substantially lower mean squared error in estimating GDP levels, and combining the two sources yields the best overall accuracy. Further analysis shows that EMB performs particularly well in low-statistical-capacity and high-income economies. These results suggest that satellite embeddings can provide a scalable, globally consistent framework for monitoring economic development and validating official statistics.
(This article belongs to the Collection Space Applications)
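
A minimal sketch of the described mapping, a network regressing 64-d embeddings onto NTL intensity with a 32-d penultimate "income-aware" layer, is below. Layer sizes other than the 64-d input and 32-d feature space are assumptions:

```python
import torch
import torch.nn as nn

class NTLHead(nn.Module):
    """Maps 64-d satellite embeddings to nighttime-light intensity; the
    32-d penultimate activations serve as income-aware features."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, 32), nn.ReLU(),   # 32-d income-aware features
        )
        self.regressor = nn.Linear(32, 1)    # predicted NTL intensity

    def forward(self, emb):
        feats = self.encoder(emb)
        return self.regressor(feats), feats

model = NTLHead()
ntl_pred, income_features = model(torch.randn(8, 64))
print(ntl_pred.shape, income_features.shape)  # (8, 1) (8, 32)
```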

25 pages, 7922 KB  
Article
Generation of Rainfall Maps from GK2A Satellite Images Using Deep Learning
by Yerim Lim, Yeji Choi, Eunbin Kim, Yong-Jae Moon and Hyun-Jin Jeong
Remote Sens. 2026, 18(2), 188; https://doi.org/10.3390/rs18020188 - 6 Jan 2026
Viewed by 198
Abstract
Accurate rainfall monitoring is essential for mitigating hydrometeorological disasters and understanding hydrological changes under climate change. This study presents a deep learning-based rainfall estimation framework using multispectral GEO-KOMPSAT-2A (GK2A) satellite imagery. The analysis primarily focuses on daytime observations to take advantage of visible-channel information, which provides richer representations of cloud characteristics during daylight conditions. The core model, Model-HSP, is built on the Pix2PixCC architecture and trained with Hybrid Surface Precipitation (HSP) data from weather radar. To further enhance accuracy, an ensemble model (Model-ENS) integrates the outputs of Model-HSP and a radar-based Model-CMX, leveraging their complementary strengths for improved generalization, robustness, and stability across rainfall regimes. Performance was evaluated at 2 km and 4 km spatial resolutions over two periods, a one-year span from May 2023 to April 2024 and the August 2023 monsoon season, using RMSE and the correlation coefficient (CC) as quantitative metrics. Case analyses confirmed the superior capability of Model-ENS in capturing rainfall distribution, intensity, and temporal evolution across diverse weather conditions. These findings show that deep learning greatly enhances GEO satellite rainfall estimation, enabling real-time, high-resolution monitoring even in radar-sparse or limited-coverage regions, and offering strong potential for global and regional hydrometeorological and climate research applications.
(This article belongs to the Special Issue Advance of Radar Meteorology and Hydrology II)
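
The two quantitative metrics used here, RMSE and CC, are standard and easy to reproduce; the sketch below evaluates a synthetic estimate against a radar reference (the gamma-distributed toy rainfall field is purely illustrative):

```python
import numpy as np

def rmse(pred, obs):
    """Root-mean-square error between estimated and reference rainfall."""
    return np.sqrt(np.mean((pred - obs) ** 2))

def cc(pred, obs):
    """Pearson correlation coefficient over all grid cells."""
    return np.corrcoef(pred.ravel(), obs.ravel())[0, 1]

# Synthetic example: model rainfall vs. radar HSP reference on a grid.
obs = np.random.gamma(shape=2.0, scale=1.5, size=(128, 128))
pred = obs + np.random.normal(0, 0.8, size=obs.shape)
print(f"RMSE={rmse(pred, obs):.2f}, CC={cc(pred, obs):.3f}")
```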

21 pages, 5796 KB  
Article
Statistical Grid-Based Analysis of Anthropogenic Film Pollution in Coastal Waters According to SAR Satellite Data Series
by Valery Bondur, Victoria Studenova and Viktor Zamshin
J. Mar. Sci. Eng. 2026, 14(1), 79; https://doi.org/10.3390/jmse14010079 - 31 Dec 2025
Viewed by 211
Abstract
This study addresses the quantitative analysis of anthropogenic film pollution (AFP) in water areas from synthetic aperture radar (SAR) satellite imagery. A quantitative analysis of AFP was conducted for coastal waters in the northern sector of the Black Sea and in Avacha Gulf, using a method based on the statistical processing of AFP detections within the cells of a regular spatial grid. Time series of Sentinel-1 SAR satellite imagery served as the initial data. Spatiotemporal distributions of the proposed quantitative criterion (eAFP, ppm), which characterizes the intensity of AFP impact within selected regions of marine waters by measuring the relative frequency of AFP events, were calculated and analyzed. The investigated areas include the 2024–2025 emergency fuel oil spill near the Kerch Strait (eAFP values near the tanker wreckage reached ~13,000 ppm) and the 2021 emergency oil spill near the Novorossiysk terminal (eAFP ≤ 6000 ppm). These accidents led to an approximately 3–6-fold increase in eAFP values against the background level of 0–2000 ppm. The spatiotemporal variability of eAFP across various water areas and under different conditions is demonstrated and discussed.
(This article belongs to the Section Marine Pollution)
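
One plausible reading of the eAFP criterion, a per-cell relative frequency of AFP events expressed in parts per million, can be sketched as follows; the exact definition in the paper may differ, so treat this as an assumption-laden illustration:

```python
import numpy as np

def compute_eafp(detections: np.ndarray, observations: np.ndarray) -> np.ndarray:
    """Relative frequency of AFP events per grid cell, in parts per million.

    detections[i, j]   -- SAR scenes in which AFP was identified in cell (i, j)
    observations[i, j] -- valid SAR observations of cell (i, j)
    """
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(observations > 0,
                        detections / observations * 1e6, np.nan)

# Toy 3x3 grid: a cell observed 200 times with 1 detection -> 5000 ppm.
det = np.array([[0, 1, 0], [2, 0, 0], [0, 0, 1]])
obs = np.full((3, 3), 200)
print(compute_eafp(det, obs))
```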

25 pages, 6462 KB  
Article
YOLO-CMFM: A Visible-SAR Multimodal Object Detection Method Based on Edge-Guided and Gated Cross-Attention Fusion
by Xuyang Zhao, Lijun Zhao, Keli Shi, Ruotian Ren and Zheng Zhang
Remote Sens. 2026, 18(1), 136; https://doi.org/10.3390/rs18010136 - 31 Dec 2025
Viewed by 425
Abstract
To address the challenges of cross-modal feature misalignment and ineffective information fusion caused by the inherent differences in imaging mechanisms, noise statistics, and semantic representations between visible and synthetic aperture radar (SAR) imagery, this paper proposes a multimodal remote sensing object detection method, YOLO-CMFM. Built upon the Ultralytics YOLOv11 framework, the proposed approach introduces a Cross-Modal Fusion Module (CMFM) that systematically enhances detection accuracy and robustness from the perspectives of modality alignment, feature interaction, and adaptive fusion. Specifically, (1) a Learnable Edge-Guided Attention (LEGA) module leverages a learnable Gaussian saliency prior to achieve edge-oriented cross-modal alignment, effectively mitigating edge-structure mismatches across modalities; (2) a Bidirectional Cross-Attention (BCA) module enables deep semantic interaction and global contextual aggregation; and (3) a Context-Guided Gating (CGG) module dynamically generates complementary weights from multimodal source features and global contextual information, achieving adaptive fusion across modalities. Extensive experiments on the OGSOD 1.0 dataset demonstrate that YOLO-CMFM achieves an mAP@50 of 96.2% and an mAP@50:95 of 75.1%. While remaining competitive with mainstream approaches at lower IoU thresholds, the method significantly outperforms existing counterparts at high IoU thresholds, highlighting its superior capability in precise object localization. Experimental results on the OSPRC dataset further show that the method achieves consistently stable gains under varied imaging conditions, including diverse SAR polarizations, spatial resolutions, and cloud occlusion. Moreover, the CMFM can be flexibly integrated into different detection frameworks, validating its strong generalization and transferability in multimodal remote sensing object detection tasks.
(This article belongs to the Special Issue Intelligent Processing of Multimodal Remote Sensing Data)
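
A context-guided gate in the spirit of the CGG module can be sketched as below: global context from both modalities drives a per-channel sigmoid weight that blends visible and SAR features. Layer sizes and the exact gating form are assumptions:

```python
import torch
import torch.nn as nn

class ContextGate(nn.Module):
    """Per-channel gate computed from pooled visible + SAR context."""
    def __init__(self, channels: int):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(2 * channels, channels), nn.ReLU(),
            nn.Linear(channels, channels), nn.Sigmoid(),
        )

    def forward(self, vis, sar):
        # Global average pooling summarizes each modality's context.
        ctx = torch.cat([vis.mean(dim=(2, 3)), sar.mean(dim=(2, 3))], dim=1)
        g = self.fc(ctx)[:, :, None, None]   # gate in [0, 1], per channel
        return g * vis + (1 - g) * sar       # adaptive complementary fusion

fused = ContextGate(64)(torch.randn(2, 64, 32, 32), torch.randn(2, 64, 32, 32))
print(fused.shape)  # (2, 64, 32, 32)
```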

19 pages, 4383 KB  
Article
Integrating GAN-Generated SAR and Optical Imagery for Building Damage Mapping
by Chia Yee Ho, Bruno Adriano, Gerald Baier, Erick Mas, Sesa Wiguna, Magaly Koch and Shunichi Koshimura
Remote Sens. 2026, 18(1), 134; https://doi.org/10.3390/rs18010134 - 31 Dec 2025
Viewed by 476
Abstract
Reliable assessment of building damage is essential for effective disaster management. Synthetic Aperture Radar (SAR) has become a valuable tool for damage detection, as it operates independently of daylight and weather conditions. However, the limited availability of high-resolution pre-disaster SAR data remains a major obstacle to accurate damage evaluation, constraining the applicability of traditional change-detection approaches. This study proposes a comprehensive framework that leverages generated SAR data alongside optical imagery for building damage detection and further examines the influence of elevation data quality on SAR synthesis and model performance. The method integrates SAR image synthesis from a Digital Surface Model (DSM) and land cover inputs with a multimodal deep learning architecture capable of jointly localizing buildings and classifying damage levels. Two data modality scenarios are evaluated: a change-detection setting using authentic pre-disaster SAR and another using GAN-generated SAR, both combined with post-disaster SAR imagery for building damage assessment. Experimental results demonstrate that GAN-generated SAR can effectively substitute for authentic SAR in multimodal damage mapping. Models using generated pre-disaster SAR achieved comparable or superior performance to those using authentic SAR, with F1 scores of 0.730, 0.442, and 0.790 for the survived, moderate, and destroyed classes, respectively. Ablation studies further reveal that the model relies more heavily on land cover segmentation than on fine elevation details, suggesting that coarse-resolution (30 m) DSMs are sufficient as auxiliary input. Incorporating additional training regions further improved generalization and inter-class balance, confirming that high-quality generated SAR can serve as a viable alternative in the absence of authentic SAR for scalable post-disaster building damage assessment.
(This article belongs to the Collection Feature Papers for Section Environmental Remote Sensing)

26 pages, 48691 KB  
Article
A Multi-Channel Convolutional Neural Network Model for Detecting Active Landslides Using Multi-Source Fusion Images
by Jun Wang, Hongdong Fan, Wanbing Tuo and Yiru Ren
Remote Sens. 2026, 18(1), 126; https://doi.org/10.3390/rs18010126 - 30 Dec 2025
Viewed by 296
Abstract
Interferometric Synthetic Aperture Radar (InSAR) has demonstrated significant advantages in detecting active landslides, and the proliferation of computing technology has enabled the combination of InSAR and deep learning, offering an innovative approach to automated landslide detection. However, InSAR-based detection faces two persistent challenges: (1) the difficulty of distinguishing active landslides from other deformation phenomena, which leads to high false alarm rates; and (2) insufficient accuracy in delineating precise landslide boundaries due to low image contrast. Incorporating multi-source data and multi-branch feature extraction networks can alleviate these problems, yet it inevitably increases computational cost and model complexity. To address these issues, this study first constructs a multi-source fusion image dataset combining optical remote sensing imagery, DEM-derived slope information, and InSAR deformation data. It then proposes a multi-channel instance segmentation framework named MCLD R-CNN (Multi-Channel Landslide Detection R-CNN). The proposed network accepts multi-channel inputs and integrates a landslide-focused attention mechanism, which enhances the model's ability to capture landslide-specific features. The experimental findings indicate that the proposed strategy effectively addresses the aforementioned challenges, and MCLD R-CNN achieves superior detection accuracy and generalization ability compared to other benchmark models.
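
Building one multi-source fusion tile of the kind described, optical imagery, DEM-derived slope, and InSAR deformation stacked into a single multi-channel input, might look like the sketch below; the value ranges and normalization are illustrative assumptions:

```python
import numpy as np

# Hypothetical co-registered layers for one 512x512 tile.
rgb = np.random.rand(512, 512, 3)                      # optical, already [0, 1]
slope = np.random.uniform(0, 60, (512, 512))           # degrees, from DEM
deformation = np.random.uniform(-120, 40, (512, 512))  # mm/yr, line of sight

# Scale auxiliary layers to comparable ranges before stacking.
slope_n = slope / 90.0
defo_n = (deformation - deformation.min()) / np.ptp(deformation)

# Stack into a 5-channel input for a multi-channel network.
fusion = np.dstack([rgb, slope_n, defo_n])             # (512, 512, 5)
print(fusion.shape)
```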

29 pages, 29721 KB  
Article
MFF-Net: Flood Detection from SAR Images Using Multi-Frequency and Fuzzy Uncertainty Fusion
by Yahui Gao, Xiaochuan Wang, Zili Zhang, Xiaoming Chen, Ruijun Liu and Xiaohui Liang
Remote Sens. 2026, 18(1), 123; https://doi.org/10.3390/rs18010123 - 29 Dec 2025
Viewed by 227
Abstract
Synthetic Aperture Radar (SAR) images are highly valuable for detecting water surfaces, which are characterized by low roughness and minimal microwave reflection, making SAR essential for flood detection. Despite these advantages, SAR imagery still faces inherent challenges, particularly systematic noise, which limits the accuracy of pixel-level flood detection and causes fine-grained flood areas to be easily overlooked. To tackle these challenges, this study proposes a novel flood detection algorithm, the multi-frequency fuzzy uncertainty fusion network (MFF-Net), built upon a multi-scale architecture. The multi-frequency feature extraction module in MFF-Net extracts frequency features at different levels, mitigating systematic noise in the SAR images and improving the accuracy of pixel-level flood detection. The fuzzy uncertainty fusion module further mitigates noise interference and more effectively detects subtle flood areas that may otherwise be overlooked. Together, these modules significantly enhance the detection of fine-grained flood areas. Experiments validate the effectiveness of MFF-Net on SAR benchmarks, achieving IoU scores of 50.2% on the MMflood dataset, 45.07% on the Sen1Floods11 dataset, 44.35% on the ETCI 2021 dataset, and 57.27% on the SAR Poyang Lake Water Body Sample dataset. The method has also been tested on actual flood events.
(This article belongs to the Section Remote Sensing Image Processing)
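
The IoU metric reported on these benchmarks is straightforward to compute for binary flood masks; a minimal reference implementation:

```python
import numpy as np

def iou(pred: np.ndarray, truth: np.ndarray) -> float:
    """Intersection over Union for binary flood masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return float(inter / union) if union else 1.0

# Toy example: two overlapping 20x20 flood patches.
truth = np.zeros((64, 64), dtype=bool); truth[20:40, 20:40] = True
pred = np.zeros_like(truth);            pred[22:42, 22:42] = True
print(f"IoU = {iou(pred, truth):.3f}")
```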

21 pages, 5125 KB  
Article
Estimating Soil Moisture Using Multimodal Remote Sensing and Transfer Optimization Techniques
by Jingke Liu, Lin Liu, Weidong Yu and Xingbin Wang
Remote Sens. 2026, 18(1), 84; https://doi.org/10.3390/rs18010084 - 26 Dec 2025
Viewed by 363
Abstract
Surface soil moisture (SSM) is essential for crop growth, irrigation management, and drought monitoring. However, conventional field-based measurements offer limited spatial and temporal coverage, making it difficult to capture environmental variability at scale. This study introduces a multimodal soil moisture estimation framework that combines synthetic aperture radar (SAR), optical imagery, vegetation indices, digital elevation models (DEM), meteorological data, and spatio-temporal metadata. To strengthen model performance and adaptability, an intermediate fine-tuning strategy is applied to two datasets comprising 10,571 images and 3772 samples, improving generalization and transferability across regions. The framework is evaluated across diverse agro-ecological zones, including farmlands, alpine grasslands, and environmentally fragile areas, and benchmarked against single-modality methods. Results (RMSE = 4.5834%, R² = 0.8956) show consistently high accuracy and stability, enabling the production of reliable field-scale soil moisture maps. By addressing the spatial and temporal challenges of soil monitoring, this framework provides essential information for precision irrigation: it supports site-specific water management, promotes efficient water use, and enhances drought resilience at both farm and regional scales.
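
A bare-bones skeleton of an intermediate fine-tuning strategy is sketched below: the regressor is first fitted on the larger intermediate dataset, then adapted to the smaller target set at a reduced learning rate. The model, feature dimension, learning rates, and synthetic data are all assumptions:

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 1))
loss_fn = nn.MSELoss()

def fit(model, x, y, lr, epochs):
    """Full-batch training loop (sketch only)."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(x).squeeze(-1), y)
        loss.backward()
        opt.step()

# Synthetic stand-ins for the intermediate and target datasets.
x_mid, y_mid = torch.randn(10571, 32), torch.rand(10571) * 40
x_tgt, y_tgt = torch.randn(3772, 32), torch.rand(3772) * 40

fit(model, x_mid, y_mid, lr=1e-3, epochs=50)  # stage 1: intermediate fine-tuning
fit(model, x_tgt, y_tgt, lr=1e-4, epochs=50)  # stage 2: adapt to target region
```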

43 pages, 42157 KB  
Article
SAREval: A Multi-Dimensional and Multi-Task Benchmark for Evaluating Visual Language Models on SAR Image Understanding
by Ziyan Wang, Lei Liu, Gang Wan, Yuchen Lu, Fengjie Zheng, Guangde Sun, Yixiang Huang, Shihao Guo, Xinyi Li and Liang Yuan
Remote Sens. 2026, 18(1), 82; https://doi.org/10.3390/rs18010082 - 25 Dec 2025
Viewed by 406
Abstract
Vision-Language Models (VLMs) demonstrate significant potential for remote sensing interpretation through multimodal fusion and semantic representation of imagery. However, their adaptation to Synthetic Aperture Radar (SAR) remains challenging due to fundamental differences in imaging mechanisms and physical properties compared to optical remote sensing. To address this gap, we present SAREval, the first comprehensive benchmark specifically designed for SAR image understanding. SAREval incorporates SAR-specific characteristics, including scattering mechanisms and polarization features, through a hierarchical framework spanning perception, reasoning, and robustness capabilities, and encompasses 20 tasks from image classification to physical-attribute inference with over 10,000 high-quality image–text pairs. Extensive experiments on 11 mainstream VLMs reveal substantial limitations in SAR image interpretation: models achieve merely 25.35% accuracy in fine-grained ship classification and have significant difficulty establishing mappings between visual features and physical parameters. Furthermore, some models exhibit unexpected performance improvements under specific noise conditions, challenging conventional assumptions about robustness. SAREval establishes an essential foundation for developing and evaluating VLMs in SAR image interpretation, providing standardized assessment protocols and quality-controlled annotations for cross-modal remote sensing research.
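
Per-task accuracy scoring of the kind reported (e.g., 25.35% on fine-grained ship classification) reduces to a simple loop over prediction records. The record schema below ("task", "answer", "prediction") is hypothetical, not SAREval's actual format:

```python
from collections import defaultdict

# Hypothetical model outputs on a handful of benchmark items.
records = [
    {"task": "ship_classification", "answer": "cargo",  "prediction": "tanker"},
    {"task": "ship_classification", "answer": "tanker", "prediction": "tanker"},
    {"task": "polarization_id",     "answer": "VV",     "prediction": "VV"},
]

correct, total = defaultdict(int), defaultdict(int)
for r in records:
    total[r["task"]] += 1
    correct[r["task"]] += r["prediction"] == r["answer"]

for task in total:
    print(f"{task}: {100 * correct[task] / total[task]:.1f}% accuracy")
```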
