Search Results (2,832)

Search Parameters:
Keywords = synthetic aperture radar (SAR) image

29 pages, 12050 KiB  
Article
PolSAR-SFCGN: An End-to-End PolSAR Superpixel Fully Convolutional Generation Network
by Mengxuan Zhang, Jingyuan Shi, Long Liu, Wenbo Zhang, Jie Feng, Jin Zhu and Boce Chu
Remote Sens. 2025, 17(15), 2723; https://doi.org/10.3390/rs17152723 - 6 Aug 2025
Abstract
Polarimetric Synthetic Aperture Radar (PolSAR) image classification is one of the most important applications in remote sensing. Effective superpixel generation can improve the efficiency of the subsequent classification task and suppress the influence of speckle noise to some extent. Most classical PolSAR superpixel generation approaches rely on hand-crafted features, some operating only on pseudocolor images; they do not make full use of the polarimetric information and do not necessarily yield high-quality superpixels. Deep learning methods can extract effective deep features, but they are difficult to combine with superpixel generation in a truly end-to-end fashion. To address these issues, this study proposes an end-to-end fully convolutional superpixel generation network for PolSAR images. It integrates polarimetric feature extraction and superpixel generation into a single step: superpixels are produced directly from deep polarimetric features, with no traditional clustering stage, improving both the quality and the efficiency of superpixel generation. Experimental results on various PolSAR datasets show that the proposed method fits the real boundaries of different types of ground objects effectively and efficiently, and that excellent classification performance can be obtained by attaching a very simple classification network, which improves the efficiency of subsequent PolSAR image classification tasks.
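The abstract does not detail the architecture, but end-to-end superpixel networks of this kind are commonly built as an FCN that predicts, for every pixel, soft association scores to the nine neighbouring superpixel grid cells, trainable directly with no clustering stage. The sketch below illustrates that pattern; the layer widths and the 9-channel real-valued input (e.g., an encoding of the polarimetric coherency matrix) are assumptions, not the authors' design.

```python
# Minimal sketch (not the authors' exact network): an FCN maps a PolSAR
# feature tensor to soft pixel-to-superpixel associations over the 3x3
# neighbouring grid cells, so the whole pipeline is trainable end to end.
import torch
import torch.nn as nn

class SuperpixelFCN(nn.Module):
    def __init__(self, in_ch=9):  # assumed real-valued coherency-matrix encoding
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True),
        )
        # One score per pixel for each of the 3x3 neighbouring superpixel cells.
        self.assoc = nn.Conv2d(64, 9, 3, padding=1)

    def forward(self, x):
        return torch.softmax(self.assoc(self.features(x)), dim=1)

q = SuperpixelFCN()(torch.randn(1, 9, 128, 128))    # dummy PolSAR patch
print(q.shape, float(q.sum(dim=1).mean()))          # (1, 9, 128, 128), ~1.0
```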
20 pages, 7088 KiB  
Article
SAR Images Despeckling Using Subaperture Decomposition and Non-Local Low-Rank Tensor Approximation
by Xinwei An, Hongcheng Zeng, Zhaohong Li, Wei Yang, Wei Xiong, Yamin Wang and Yanfang Liu
Remote Sens. 2025, 17(15), 2716; https://doi.org/10.3390/rs17152716 - 6 Aug 2025
Abstract
Synthetic aperture radar (SAR) images suffer from speckle noise due to their imaging mechanism, which deteriorates image interpretability and hinders subsequent tasks like target detection and recognition. Traditional denoising methods fall short of the demands for high-quality SAR image processing, and deep learning approaches trained on synthetic datasets exhibit poor generalization because noise-free real SAR images are unattainable. To solve this problem and improve the quality of SAR images, a speckle noise suppression method based on subaperture decomposition and non-local low-rank tensor approximation is proposed. Subaperture decomposition yields azimuth-frame subimages with high global structural similarity, which are modeled as low-rank and formed into a 3D tensor. The tensor is decomposed to derive a low-dimensional orthogonal basis and low-rank representation, followed by non-local denoising and iterative regularization in the low-rank subspace for data reconstruction. Experiments on simulated and real SAR images demonstrate that the proposed method outperforms state-of-the-art techniques in speckle suppression, significantly improving SAR image quality.
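As a rough illustration of the low-rank step only (not the paper's full pipeline, which adds non-local denoising and iterative regularization in the subspace), the sketch below stacks toy subaperture intensity images into a 3D tensor, unfolds it along the subaperture mode, and projects onto a truncated orthogonal basis; the shapes, speckle model, and rank are assumptions.

```python
# Toy illustration: K subaperture images with shared structure but
# independent speckle are stacked, unfolded, and projected onto a
# truncated orthogonal basis obtained by SVD.
import numpy as np

rng = np.random.default_rng(0)
K, H, W = 8, 64, 64
clean = rng.gamma(2.0, 1.0, (H, W))                   # shared scene structure
stack = np.stack([clean * rng.gamma(4.0, 0.25, (H, W)) for _ in range(K)])

X = stack.reshape(K, -1)                              # mode-1 unfolding: K x (H*W)
U, s, Vt = np.linalg.svd(X, full_matrices=False)      # orthogonal basis U
r = 2                                                 # assumed low rank
X_lr = U[:, :r] @ np.diag(s[:r]) @ Vt[:r]             # low-rank projection
denoised = X_lr.reshape(K, H, W).mean(axis=0)         # crude fused estimate
print(denoised.shape)                                 # (64, 64)
```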
27 pages, 14923 KiB  
Article
Multi-Sensor Flood Mapping in Urban and Agricultural Landscapes of the Netherlands Using SAR and Optical Data with Random Forest Classifier
by Omer Gokberk Narin, Aliihsan Sekertekin, Caglar Bayik, Filiz Bektas Balcik, Mahmut Arıkan, Fusun Balik Sanli and Saygin Abdikan
Remote Sens. 2025, 17(15), 2712; https://doi.org/10.3390/rs17152712 - 5 Aug 2025
Abstract
Floods are among the most harmful natural disasters, and climate change has made them more dangerous for urban structures and agricultural fields. This research presents a comprehensive flood mapping approach that combines multi-sensor satellite data with a machine learning method to evaluate the July 2021 flood in the Netherlands. Twenty-five feature scenarios were developed from combinations of Sentinel-1, Landsat-8, and Radarsat-2 imagery, using backscattering coefficients together with optical Normalized Difference Water Index (NDWI) and Hue, Saturation, and Value (HSV) images and Synthetic Aperture Radar (SAR)-derived Grey Level Co-occurrence Matrix (GLCM) texture features. The Random Forest (RF) classifier was optimized before being applied to two flood-prone regions: Zutphen’s urban area and Heijen’s agricultural land. The multi-sensor fusion scenarios (S18, S20, and S25) achieved the highest classification performance, with overall accuracy reaching 96.4% (Kappa = 0.906–0.949) in Zutphen and 87.5% (Kappa = 0.754–0.833) in Heijen. F1 scores for the flood class ranged from 0.742 to 0.969 in Zutphen and from 0.626 to 0.969 in Heijen across scenarios. Adding SAR texture metrics enhanced flood boundary identification in both urban and agricultural settings. Radarsat-2 contributed little to the overall results, as the freely available Sentinel-1 and Landsat-8 data proved more effective. This study demonstrates that combining SAR and optical features with texture information yields a robust and scalable flood mapping system, and that RF classification performs well in diverse landscape settings.
(This article belongs to the Special Issue Remote Sensing Applications in Flood Forecasting and Monitoring)
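The classification stage lends itself to a compact sketch: stack per-pixel SAR backscatter, NDWI, and GLCM texture values as features and fit a Random Forest. The synthetic features, labelling rule, and train/test split below are illustrative stand-ins for the paper's scenario data.

```python
# Hedged sketch of the RF classification stage on toy per-pixel features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 5000
X = np.column_stack([
    rng.normal(-12, 3, n),    # Sentinel-1 VV backscatter (dB)
    rng.normal(-19, 3, n),    # Sentinel-1 VH backscatter (dB)
    rng.uniform(-1, 1, n),    # Landsat-8 NDWI
    rng.uniform(0, 1, n),     # GLCM homogeneity texture
])
y = (X[:, 2] > 0.2) & (X[:, 0] < -13)    # toy "flooded" labelling rule

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"toy overall accuracy: {rf.score(X_te, y_te):.3f}")
```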
17 pages, 12127 KiB  
Article
Shoreline Response to Hurricane Otis and Flooding Impact from Hurricane John in Acapulco, Mexico
by Luis Valderrama-Landeros, Iliana Pérez-Espinosa, Edgar Villeda-Chávez, Rafael Alarcón-Medina and Francisco Flores-de-Santiago
Coasts 2025, 5(3), 28; https://doi.org/10.3390/coasts5030028 - 4 Aug 2025
Abstract
The city of Acapulco was impacted by two near-consecutive hurricanes. On 25 October 2023, Hurricane Otis made landfall as a Category 5 storm, the highest on the Saffir–Simpson scale, causing extensive coastal destruction through extreme winds and waves. Nearly one year later (23 September 2024), Hurricane John—a Category 2 storm—caused severe flooding despite its lower intensity, primarily due to its unusual trajectory and prolonged rainfall. Digital shoreline analysis of PlanetScope images captured one month before and after Hurricane Otis revealed that the southern coast of Acapulco, specifically Zona Diamante—where the major seafront hotels are located—experienced substantial shoreline erosion (94 ha) and damage. In the northwestern section of the study area, the Coyuca Bar underwent the most dramatic geomorphological change in surface area: the bar disappeared completely on 26 October, producing a shoreline retreat of 85 m immediately after the passage of Hurricane Otis. Sentinel-1 Synthetic Aperture Radar (SAR) data showed that Hurricane John inundated 2385 ha, about four times the area flooded by Hurricane Otis (567 ha). The retrofitted QGIS methodology proved highly reliable when compared against the limited in situ local reports available. Given the increasing frequency of intense hurricanes, these methods and findings are relevant to other coastal areas for monitoring and managing local communities affected by severe climate events.
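Converting a binary Sentinel-1 water mask into an inundated-area figure of the kind quoted above (2385 ha vs. 567 ha) is a short computation; the sketch below assumes 10 m pixels and a toy mask.

```python
# Minimal sketch: binary water mask -> inundated area in hectares,
# assuming 10 m pixels (1 ha = 10,000 m^2).
import numpy as np

def flooded_hectares(water_mask: np.ndarray, pixel_size_m: float = 10.0) -> float:
    return water_mask.sum() * pixel_size_m**2 / 1e4

mask = np.zeros((1000, 1000), dtype=bool)
mask[200:700, 300:800] = True        # toy inundated region
print(flooded_hectares(mask))        # 2500.0 ha
```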
48 pages, 16562 KiB  
Article
Dense Matching with Low Computational Complexity for Disparity Estimation in the Radargrammetric Approach of SAR Intensity Images
by Hamid Jannati, Mohammad Javad Valadan Zoej, Ebrahim Ghaderpour and Paolo Mazzanti
Remote Sens. 2025, 17(15), 2693; https://doi.org/10.3390/rs17152693 - 3 Aug 2025
Abstract
Synthetic Aperture Radar (SAR) and optical imagery both have high potential for extracting digital elevation models (DEMs). The two main approaches for deriving elevation models from SAR data are interferometry (InSAR) and radargrammetry. Adapted from photogrammetric principles, radargrammetry relies on disparity model estimation as its core component. Matching strategies in radargrammetry typically follow local, global, or semi-global methodologies. Local methods can be more accurate, especially in low-texture SAR images, but require larger kernel sizes, and their computational complexity grows quadratically with kernel size. Conversely, global and semi-global models produce more consistent, higher-quality disparity maps but are computationally more intensive than local methods with small kernels and require more memory (RAM). In this study, inspired by the advantages of local matching algorithms, a novel, computationally efficient model is proposed for extracting corresponding pixels in SAR-intensity stereo images. To enhance accuracy, the proposed two-stage algorithm operates without an image pyramid structure. Notably, unlike traditional local and global models, the computational complexity of the proposed approach remains stable as the input size or kernel dimensions increase, while memory consumption stays low. Compared with a pyramid-based local normalized cross-correlation (NCC) algorithm and adaptive semi-global matching (SGM) models, the proposed method achieves accuracy comparable to adaptive SGM while reducing processing time by up to 50% relative to pyramid SGM and achieving a 35-fold speedup over the local NCC algorithm with an optimal kernel size. Validated on a Sentinel-1 stereo pair with a 10 m ground-pixel size, the proposed algorithm yields a DEM with an average accuracy of 34.1 m.
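The complexity claim has a well-known mechanism behind it: zero-mean NCC over a window can be assembled from sliding-window means computed with uniform (box) filters, whose per-pixel cost does not depend on the window size. The sketch below demonstrates that construction for a fixed integer disparity search; it mirrors the complexity argument but is not the authors' algorithm.

```python
# NCC from box-filtered moments: cost is independent of the window size.
import numpy as np
from scipy.ndimage import uniform_filter

def ncc_map(left, right, win=15, eps=1e-8):
    mu_l, mu_r = uniform_filter(left, win), uniform_filter(right, win)
    cov = uniform_filter(left * right, win) - mu_l * mu_r
    var_l = uniform_filter(left**2, win) - mu_l**2
    var_r = uniform_filter(right**2, win) - mu_r**2
    return cov / np.sqrt(np.maximum(var_l * var_r, eps))

rng = np.random.default_rng(2)
L = rng.gamma(1.0, 1.0, (256, 256))          # toy SAR intensity pair
R = np.roll(L, 3, axis=1)                    # true disparity: 3 pixels
scores = [ncc_map(L, np.roll(R, -d, axis=1)).mean() for d in range(6)]
print(int(np.argmax(scores)))                # 3
```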
24 pages, 29785 KiB  
Article
Multi-Scale Feature Extraction with 3D Complex-Valued Network for PolSAR Image Classification
by Nana Jiang, Wenbo Zhao, Jiao Guo, Qiang Zhao and Jubo Zhu
Remote Sens. 2025, 17(15), 2663; https://doi.org/10.3390/rs17152663 - 1 Aug 2025
Abstract
Compared to traditional real-valued neural networks, which process only amplitude information, complex-valued neural networks handle both amplitude and phase information, leading to superior performance in polarimetric synthetic aperture radar (PolSAR) image classification tasks. This paper proposes a multi-scale feature extraction (MSFE) method based on a 3D complex-valued network to improve classification accuracy by fully leveraging multi-scale features, including phase information. We first designed a complex-valued three-dimensional network framework combining complex-valued 3D convolution (CV-3DConv) with complex-valued squeeze-and-excitation (CV-SE) modules. This framework simultaneously captures spatial and polarimetric features, including both amplitude and phase information, from PolSAR images. Furthermore, to address the robustness degradation caused by limited labeled samples, we introduced a multi-scale learning strategy that jointly models global and local features: global features capture overall semantic information, while local features help the network capture region-specific semantics. By integrating multi-scale receptive fields, the strategy lets the two kinds of features complement each other and improves information utilization. Extensive experiments on four benchmark datasets demonstrated that the proposed method outperforms various comparison methods and maintains high classification accuracy across different sampling rates, validating its effectiveness and robustness.
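A complex-valued convolution is usually realized with two real-weight convolutions combined as (Wr + jWi)(xr + jxi); the sketch below shows a minimal CV-3DConv in that style, with the channel counts and input shape as assumptions rather than the paper's configuration.

```python
# Minimal CV-3DConv sketch: two real Conv3d weight sets implement the
# complex product, so both amplitude and phase contribute to the output.
import torch
import torch.nn as nn

class ComplexConv3d(nn.Module):
    def __init__(self, in_ch, out_ch, k=3, p=1):
        super().__init__()
        self.conv_r = nn.Conv3d(in_ch, out_ch, k, padding=p)
        self.conv_i = nn.Conv3d(in_ch, out_ch, k, padding=p)

    def forward(self, x_r, x_i):
        # (Wr*xr - Wi*xi) + j(Wr*xi + Wi*xr)
        return (self.conv_r(x_r) - self.conv_i(x_i),
                self.conv_r(x_i) + self.conv_i(x_r))

x_r = torch.randn(1, 1, 6, 32, 32)    # (batch, ch, polarimetric dim, H, W)
x_i = torch.randn(1, 1, 6, 32, 32)
y_r, y_i = ComplexConv3d(1, 8)(x_r, x_i)
print(y_r.shape, y_i.shape)
```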
15 pages, 4258 KiB  
Article
Complex-Scene SAR Aircraft Recognition Combining Attention Mechanism and Inner Convolution Operator
by Wansi Liu, Huan Wang, Jiapeng Duan, Lixiang Cao, Teng Feng and Xiaomin Tian
Sensors 2025, 25(15), 4749; https://doi.org/10.3390/s25154749 - 1 Aug 2025
Abstract
Synthetic aperture radar (SAR), as an active microwave imaging system, is capable of all-weather, all-time observation. To address the challenges of aircraft detection in SAR images—complex background interference caused by the continuous scattering of airport buildings, and the demand for real-time processing—this paper proposes YOLOv7-MTI, a recognition model that combines an attention mechanism with involution by integrating a Multi-TASP-Conv network (MTCN) module and the involution operator. The MTCN module extracts low-level semantic and spatial information using a shared lightweight attention gate that achieves cross-dimensional interaction between channels and space with very few parameters, capturing dependencies among multiple dimensions and improving feature representation. Involution adaptively adjusts the weights of spatial positions through dynamically parameterized convolution kernels, strengthening the discrete strong scattering points characteristic of aircraft and suppressing the continuous scattering of the background, thereby alleviating complex background interference. Experiments on the SAR-AIRcraft-1.0 dataset, which covers seven categories (A220, A320/321, A330, ARJ21, Boeing737, Boeing787, and other aircraft), show that YOLOv7-MTI reaches 93.51% mAP and 96.45% mRecall, outperforming Faster R-CNN, SSD, YOLOv5, YOLOv7, and YOLOv8. Compared with the baseline YOLOv7, mAP improves by 1.47%, mRecall by 1.64%, and FPS by 8.27%, achieving an effective balance between accuracy and speed and providing useful ideas for SAR aircraft recognition.
(This article belongs to the Section Radar Sensors)
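Involution replaces static convolution kernels with kernels generated per spatial position from the input itself, which is how it can emphasize the discrete strong scatterers of aircraft while damping continuous background returns. A minimal PyTorch sketch of the operator follows; the reduction ratio and group count are assumptions.

```python
# Minimal involution: kernels are generated per pixel from the input and
# shared across channels within each group (spatially adaptive weights).
import torch
import torch.nn as nn

class Involution(nn.Module):
    def __init__(self, ch, k=3, groups=1, reduce=4):
        super().__init__()
        self.k, self.groups = k, groups
        self.kernel_gen = nn.Sequential(
            nn.Conv2d(ch, ch // reduce, 1), nn.ReLU(inplace=True),
            nn.Conv2d(ch // reduce, k * k * groups, 1),
        )
        self.unfold = nn.Unfold(k, padding=k // 2)

    def forward(self, x):
        b, c, h, w = x.shape
        kernels = self.kernel_gen(x).view(b, self.groups, 1, self.k**2, h, w)
        patches = self.unfold(x).view(b, self.groups, c // self.groups,
                                      self.k**2, h, w)
        return (kernels * patches).sum(dim=3).view(b, c, h, w)

print(Involution(16)(torch.randn(1, 16, 40, 40)).shape)  # (1, 16, 40, 40)
```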
23 pages, 8942 KiB  
Article
Optical and SAR Image Registration in Equatorial Cloudy Regions Guided by Automatically Point-Prompted Cloud Masks
by Yifan Liao, Shuo Li, Mingyang Gao, Shizhong Li, Wei Qin, Qiang Xiong, Cong Lin, Qi Chen and Pengjie Tao
Remote Sens. 2025, 17(15), 2630; https://doi.org/10.3390/rs17152630 - 29 Jul 2025
Abstract
The equator’s unique combination of high humidity and temperature renders optical satellite imagery highly susceptible to persistent cloud cover. In contrast, synthetic aperture radar (SAR) offers a robust alternative thanks to its ability to penetrate clouds with microwave imaging. This study addresses cloud-induced data gaps and cross-sensor geometric biases with an optical and SAR image-matching framework designed specifically for cloud-prone equatorial regions. We use a prompt-driven visual segmentation model with automatic prompt-point generation to produce cloud masks that guide cross-modal feature matching and the joint adjustment of optical and SAR data. The result is a comprehensive digital orthophoto map (DOM) with high geometric consistency that retains the fine spatial detail of optical data and the all-weather reliability of SAR. We validate the approach across four equatorial regions using five satellite platforms with varying spatial resolutions and revisit intervals. Even in areas with more than 50 percent cloud cover, the method maintains sub-pixel accuracy at manually verified check points and delivers complete DOM products, establishing a reliable foundation for downstream environmental monitoring and ecosystem analysis.
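One way to automate prompting, sketched below, is to pick bright, locally smooth pixels as candidate cloud prompts. The heuristic thresholds are assumptions, and the call to the point-promptable segmentation model itself (e.g., a SAM-style predictor) is deliberately omitted, since the abstract does not name the backbone.

```python
# Heuristic prompt generation: bright, low-texture pixels are likely cloud
# interiors; the resulting (row, col) points would seed the segmenter.
import numpy as np
from scipy.ndimage import uniform_filter

def cloud_prompt_points(optical, brightness_q=0.9, max_points=20):
    """Return (row, col) prompt points on bright, locally smooth pixels."""
    mu = uniform_filter(optical, 9)
    local_std = np.sqrt(np.maximum(uniform_filter(optical**2, 9) - mu**2, 0))
    bright = optical > np.quantile(optical, brightness_q)
    smooth = local_std < np.quantile(local_std, 0.1)
    rows, cols = np.nonzero(bright & smooth)
    idx = np.linspace(0, len(rows) - 1, min(max_points, len(rows))).astype(int)
    return np.stack([rows[idx], cols[idx]], axis=1)

img = np.random.default_rng(3).uniform(0, 0.4, (256, 256))
img[60:120, 80:160] = 0.9                 # toy bright "cloud"
print(cloud_prompt_points(img)[:5])       # points inside the bright patch
```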
26 pages, 6806 KiB  
Article
Fine Recognition of MEO SAR Ship Targets Based on a Multi-Level Focusing-Classification Strategy
by Zhaohong Li, Wei Yang, Can Su, Hongcheng Zeng, Yamin Wang, Jiayi Guo and Huaping Xu
Remote Sens. 2025, 17(15), 2599; https://doi.org/10.3390/rs17152599 - 26 Jul 2025
Abstract
Medium Earth Orbit (MEO) spaceborne Synthetic Aperture Radar (SAR) offers wide coverage, which can significantly improve maritime ship surveillance. However, because of the huge computational load of imaging processing and the severe defocusing caused by ship motion, traditional ship recognition performed in the focused image domain cannot process MEO SAR data efficiently. To address this, a multi-level focusing-classification strategy for MEO SAR ship recognition is proposed that operates on range-compressed ship data. First, fast global coarse-focusing compensates for sailing motion errors. A coarse-classification network then assigns major target categories, from which local region image slices are extracted. Next, fine-focusing corrects high-order motion errors, followed by fine-classification of the image slices to produce the final ship class. Equivalent MEO SAR ship images generated from real LEO SAR data are used to construct training and testing datasets, and simulated MEO SAR ship data are used to evaluate the generalization of the whole method. The experimental results demonstrate that the proposed method achieves high classification precision. Since only local region slices are used in the second-level processing step, the complex computation of fine-focusing the full image is avoided, significantly improving overall efficiency.
(This article belongs to the Special Issue Advances in Remote Sensing Image Target Detection and Recognition)
8 pages, 4452 KiB  
Proceeding Paper
Synthetic Aperture Radar Imagery Modelling and Simulation for Investigating the Composite Scattering Between Targets and the Environment
by Raphaël Valeri, Fabrice Comblet, Ali Khenchaf, Jacques Petit-Frère and Philippe Pouliguen
Eng. Proc. 2025, 94(1), 11; https://doi.org/10.3390/engproc2025094011 - 25 Jul 2025
Abstract
The high resolution of Synthetic Aperture Radar (SAR) imagery, in addition to its capability to see through clouds and rain, makes it a crucial remote sensing technique. However, SAR images are very sensitive to the radar parameters, the observation geometry, and the scene’s characteristics. Moreover, for a complex scene of interest with targets located on rough soil, composite scattering between the target and the surface occurs and creates distortions in the SAR image. These characteristics can make SAR images difficult to analyse and process. To better understand the complex EM phenomena and their signatures in SAR images, we propose a methodology for generating raw SAR signals and SAR images for scenes of interest with a target located on a rough surface. The entire radar acquisition chain is considered: the sensor parameters, the atmospheric attenuation, the interactions between the incident EM field and the scene, and the SAR image formation. Simulation results are presented for a rough dielectric soil and a canonical target modelled as a Perfect Electric Conductor (PEC). These results highlight the importance of the composite scattering signature between the target and the soil: its power is 21 dB higher than that of the target for the target–soil configuration considered. Finally, these simulations retrieve characteristics present in actual SAR images and show the potential of the presented model for investigating EM phenomena and their signatures in SAR images.
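At the core of raw-signal simulation is the azimuth phase history of each scatterer; the toy sketch below generates it for a single point target under stop-and-go geometry and compresses it with a matched filter. All parameters are illustrative, not those of the paper's model.

```python
# Toy raw-signal sketch: the azimuth phase history exp(-j*4*pi*R(eta)/lambda)
# of one point target, then compression via circular matched filtering.
import numpy as np

c, fc = 3e8, 9.6e9                          # assumed X-band carrier
lam = c / fc
v, R0 = 150.0, 8000.0                       # platform speed (m/s), closest range (m)
eta = np.linspace(-1.0, 1.0, 2048)          # slow time (s)
R = np.sqrt(R0**2 + (v * eta)**2)           # instantaneous slant range
signal = np.exp(-1j * 4 * np.pi * R / lam)  # azimuth phase history

# Correlating the signal with its own replica compresses it to a peak at
# zero lag, the 1-D analogue of azimuth focusing.
S = np.fft.fft(signal)
compressed = np.fft.ifft(S * np.conj(S))
print(int(np.argmax(np.abs(compressed))))   # 0
```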
25 pages, 19515 KiB  
Article
Towards Efficient SAR Ship Detection: Multi-Level Feature Fusion and Lightweight Network Design
by Wei Xu, Zengyuan Guo, Pingping Huang, Weixian Tan and Zhiqi Gao
Remote Sens. 2025, 17(15), 2588; https://doi.org/10.3390/rs17152588 - 24 Jul 2025
Abstract
Synthetic Aperture Radar (SAR) provides all-weather, all-time imaging, enabling reliable maritime ship detection under challenging weather and lighting conditions. However, most high-precision detection models rely on complex architectures and large parameter counts, limiting their applicability to resource-constrained platforms such as satellite-based systems, where model size, computational load, and power consumption are tightly restricted. Guided by the principles of lightweight design, robustness, and energy-efficiency optimization, this study proposes a three-stage collaborative multi-level feature fusion framework that reduces model complexity without compromising detection performance. First, the backbone network integrates depthwise separable convolutions and a Convolutional Block Attention Module (CBAM) to suppress background clutter and extract effective features. Building on this, a cross-layer feature interaction mechanism is introduced via the Multi-Scale Coordinated Fusion (MSCF) and Bi-EMA Enhanced Fusion (Bi-EF) modules to strengthen joint spatial-channel perception. To further enhance detection, Efficient Feature Learning (EFL) modules are embedded in the neck to improve feature representation. Experiments on the SAR Ship Detection Dataset (SSDD) show that the method, with only 1.6 M parameters, achieves a mean average precision (mAP) of 98.35% in complex scenarios, including inshore and offshore environments. It thus resolves the trade-off between accuracy and hardware resource requirements that traditional methods cannot satisfy simultaneously, providing a new technical path for real-time SAR ship detection on satellite platforms.
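Depthwise separable convolution is the usual lever for a parameter budget this small: it factorizes a standard convolution into a per-channel spatial filter and a 1x1 channel mixer. The sketch below compares parameter counts; the channel sizes are arbitrary.

```python
# Depthwise + pointwise convolution vs. a standard convolution of the same
# input/output shape: roughly an order of magnitude fewer parameters.
import torch
import torch.nn as nn

class DWSeparableConv(nn.Module):
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, k, padding=k // 2, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

count = lambda m: sum(p.numel() for p in m.parameters())
print(count(nn.Conv2d(64, 128, 3, padding=1)), count(DWSeparableConv(64, 128)))
# 73856 vs 8960 parameters
```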
22 pages, 16984 KiB  
Article
Small Ship Detection Based on Improved Neural Network Algorithm and SAR Images
by Jiaqi Li, Hongyuan Huo, Li Guo, De Zhang, Wei Feng, Yi Lian and Long He
Remote Sens. 2025, 17(15), 2586; https://doi.org/10.3390/rs17152586 - 24 Jul 2025
Abstract
Synthetic aperture radar images can be used for ship target detection. However, unclear ship outlines in SAR images, together with noise and land background, make ship detection—especially small-ship detection—difficult and reduce its accuracy. Therefore, starting from the YOLOv5s model, this paper improves its backbone and feature fusion networks to increase detection accuracy. First, the LSKModule is used to improve the YOLOv5s backbone: it adaptively aggregates features extracted by large convolution kernels to fully capture context information while enhancing key features and suppressing noise interference. Second, multiple Depthwise Separable Convolution layers are added to the SPPF (Spatial Pyramid Pooling-Fast) structure; at the cost of a small number of extra parameters and computations, this extracts features from different receptive fields. Third, the feature fusion network of YOLOv5s is improved with BiFPN, and the shallow feature map is exploited to improve small-target detection. Finally, a CoordConv module is added before the YOLOv5 detect head, introducing two coordinate channels into the convolution operation to further improve detection accuracy. The method reaches an mAP50 of 97.6% on the SSDD dataset and 91.7% on the HRSID dataset and is compared with a variety of advanced target detection models; the results show that its detection accuracy is higher than that of similar target detection algorithms.
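CoordConv is simple to state in code: concatenate two normalized coordinate channels to the feature map before convolving, giving the detect head explicit position information. A minimal sketch follows; the channel sizes are arbitrary.

```python
# Convolution with two extra coordinate channels (x, y in [-1, 1]).
import torch
import torch.nn as nn

class CoordConv(nn.Module):
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.conv = nn.Conv2d(in_ch + 2, out_ch, k, padding=k // 2)

    def forward(self, x):
        b, _, h, w = x.shape
        ys = torch.linspace(-1, 1, h, device=x.device).view(1, 1, h, 1).expand(b, 1, h, w)
        xs = torch.linspace(-1, 1, w, device=x.device).view(1, 1, 1, w).expand(b, 1, h, w)
        return self.conv(torch.cat([x, ys, xs], dim=1))

print(CoordConv(256, 255)(torch.randn(1, 256, 20, 20)).shape)  # (1, 255, 20, 20)
```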
22 pages, 2420 KiB  
Article
BiEHFFNet: A Water Body Detection Network for SAR Images Based on Bi-Encoder and Hybrid Feature Fusion
by Bin Han, Xin Huang and Feng Xue
Mathematics 2025, 13(15), 2347; https://doi.org/10.3390/math13152347 - 23 Jul 2025
Abstract
Water body detection in synthetic aperture radar (SAR) imagery plays a critical role in applications such as disaster response, water resource management, and environmental monitoring, yet it remains challenging due to complex background interference in SAR images. To address this, a bi-encoder and hybrid feature fusion network (BiEHFFNet) is proposed for accurate water body detection. First, a bi-encoder structure based on ResNet and Swin Transformer jointly extracts local spatial details and global contextual information, enhancing feature representation in complex scenarios; in addition, the convolutional block attention module (CBAM) suppresses irrelevant information in the output features of each ResNet stage. Second, a cross-attention-based hybrid feature fusion (CABHFF) module interactively integrates local and global features through cross-attention, followed by channel attention, to achieve effective hybrid feature fusion and improve the model’s ability to capture water structures. Third, a multi-scale content-aware upsampling (MSCAU) module integrates atrous spatial pyramid pooling (ASPP) with Content-Aware ReAssembly of FEatures (CARAFE) to enhance multi-scale contextual learning while alleviating the feature distortion caused by upsampling. Finally, a composite loss function combining Dice loss and Active Contour loss provides stronger boundary supervision. Experiments on the ALOS PALSAR dataset demonstrate that BiEHFFNet outperforms existing methods across multiple evaluation metrics, achieving more accurate water body detection.
(This article belongs to the Special Issue Advanced Mathematical Methods in Remote Sensing)
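The composite loss is straightforward to sketch: a soft Dice term for region overlap plus a boundary term. Below, the Active Contour component is represented only by its length (total-variation) term, a common AC-loss ingredient; the 0.1 weight and this AC formulation are assumptions, not the paper's exact loss.

```python
# Soft Dice (region overlap) plus a contour-length term (boundary smoothness).
import torch

def dice_loss(pred, target, eps=1e-6):
    p, t = torch.sigmoid(pred).flatten(1), target.flatten(1)
    inter = (p * t).sum(dim=1)
    return 1 - ((2 * inter + eps) / (p.sum(dim=1) + t.sum(dim=1) + eps)).mean()

def contour_length(pred):
    p = torch.sigmoid(pred)
    dy = (p[:, :, 1:, :] - p[:, :, :-1, :]).abs().mean()
    dx = (p[:, :, :, 1:] - p[:, :, :, :-1]).abs().mean()
    return dx + dy  # penalizes ragged water boundaries

logits = torch.randn(2, 1, 64, 64, requires_grad=True)
target = (torch.rand(2, 1, 64, 64) > 0.5).float()
loss = dice_loss(logits, target) + 0.1 * contour_length(logits)
loss.backward()
print(float(loss))
```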
23 pages, 24301 KiB  
Article
Robust Optical and SAR Image Registration Using Weighted Feature Fusion
by Ao Luo, Anxi Yu, Yongsheng Zhang, Wenhao Tong and Huatao Yu
Remote Sens. 2025, 17(15), 2544; https://doi.org/10.3390/rs17152544 - 22 Jul 2025
Abstract
Image registration is the fundamental basis for the joint interpretation of synthetic aperture radar (SAR) and optical images. Robust registration remains challenging, however, due to significant regional heterogeneity in remote sensing scenes (e.g., urban and marine areas co-existing within a single image). To overcome this challenge, this article proposes a novel optical–SAR image registration method named Gradient and Standard Deviation Feature Weighted Fusion (GDWF). First, a block-local standard deviation (Block-LSD) operator extracts block-based feature points with regional adaptability. A dual-modal feature description is then developed, constructing both gradient-based descriptors and local standard deviation (LSD) descriptors for the neighborhoods surrounding the detected feature points. To further enhance matching robustness, a confidence-weighted feature fusion strategy is proposed: a reliability evaluation model for the similarity measurement maps dynamically optimizes the contribution weights of the gradient and LSD features, ensuring adaptive performance under varying conditions. The method is compared with the state-of-the-art algorithms MOGF, CFOG, and FED-HOPC on several optical and SAR datasets. The experimental results demonstrate that GDWF achieves the best registration accuracy and robustness among all compared methods, effectively handling optical–SAR image pairs with significant regional heterogeneity.
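The two descriptor cues and the fusion rule can be sketched compactly: a local-standard-deviation map as the LSD feature, and similarity surfaces fused with weights derived from a per-map reliability score. The peak-minus-mean reliability proxy below is an assumption standing in for the paper's reliability evaluation model.

```python
# LSD feature map plus confidence-weighted fusion of two similarity maps.
import numpy as np
from scipy.ndimage import uniform_filter

def local_std(img, win=9):
    mu = uniform_filter(img, win)
    return np.sqrt(np.maximum(uniform_filter(img**2, win) - mu**2, 0))

def fuse(sim_grad, sim_lsd):
    conf = np.array([s.max() - s.mean() for s in (sim_grad, sim_lsd)])
    w = conf / conf.sum()                 # confidence-weighted fusion
    return w[0] * sim_grad + w[1] * sim_lsd

rng = np.random.default_rng(4)
print(local_std(rng.gamma(1.0, 1.0, (64, 64))).shape)   # (64, 64) LSD map

sim_g = rng.random((21, 21)); sim_g[10, 10] = 3.0       # sharp gradient peak
sim_l = rng.random((21, 21))                            # flat LSD response
print(np.unravel_index(fuse(sim_g, sim_l).argmax(), (21, 21)))  # (10, 10)
```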
23 pages, 7457 KiB  
Article
An Efficient Ship Target Integrated Imaging and Detection Framework (ST-IIDF) for Space-Borne SAR Echo Data
by Can Su, Wei Yang, Yongchen Pan, Hongcheng Zeng, Yamin Wang, Jie Chen, Zhixiang Huang, Wei Xiong, Jie Chen and Chunsheng Li
Remote Sens. 2025, 17(15), 2545; https://doi.org/10.3390/rs17152545 - 22 Jul 2025
Abstract
Because ship targets are sparsely distributed in wide-area offshore scenes, the typical cascade of imaging followed by detection for space-borne Synthetic Aperture Radar (SAR) echo data consumes substantial computational time and resources, severely affecting the timeliness of ship information acquisition. We therefore propose a ship target integrated imaging and detection framework (ST-IIDF) for SAR oceanic region data. A two-step filtering structure inserted into the SAR imaging process extracts the potential ship target areas and accelerates the whole pipeline. First, an improved peak-valley detection method based on one-dimensional scattering characteristics locates the range gate units containing ship targets. Second, a dynamic quantization method applied to the imaged range gate units further narrows the azimuth region. Finally, a lightweight YOLO neural network eliminates false alarm areas and produces accurate ship positions. In experiments on Hisea-1 and Pujiang-2 data with sparse target scenes, the framework maintains over 90% ship detection accuracy while running on average 35.95 times faster. The framework suits ship detection tasks with high timeliness requirements and provides an effective solution for real-time onboard processing.
(This article belongs to the Special Issue Efficient Object Detection Based on Remote Sensing Images)
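The first filtering step reduces to 1-D peak screening on the range-compressed energy profile; a toy sketch with scipy follows. The threshold rule and clutter model are assumptions, not the paper's improved peak-valley detector.

```python
# Toy screening step: keep range gates rising far above the clutter level;
# only these would be imaged and passed to the detector.
import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(5)
profile = rng.gamma(1.0, 1.0, 4096)      # sea-clutter energy per range gate
profile[1200:1210] += 40                 # toy ship returns
profile[3000:3006] += 35

thresh = profile.mean() + 5 * profile.std()
peaks, _ = find_peaks(profile, height=thresh, distance=50)
print(peaks)                             # gates near 1200 and 3000
```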