Search Results (369)

Search Parameters:
Keywords = SAR ship images

26 pages, 6806 KiB  
Article
Fine Recognition of MEO SAR Ship Targets Based on a Multi-Level Focusing-Classification Strategy
by Zhaohong Li, Wei Yang, Can Su, Hongcheng Zeng, Yamin Wang, Jiayi Guo and Huaping Xu
Remote Sens. 2025, 17(15), 2599; https://doi.org/10.3390/rs17152599 - 26 Jul 2025
Viewed by 339
Abstract
The Medium Earth Orbit (MEO) spaceborne Synthetic Aperture Radar (SAR) offers wide coverage, which can significantly improve maritime ship target surveillance performance. However, due to the huge computational load required for imaging processing and the severe defocusing caused by ship motions, traditional ship recognition conducted in the focused image domain cannot process MEO SAR data efficiently. To address this issue, a multi-level focusing-classification strategy for MEO SAR ship recognition is proposed, which is applied to the range-compressed ship data domain. Firstly, global fast coarse-focusing is conducted to compensate for sailing motion errors. Then, a coarse-classification network is designed to realize major target category classification, based on which local region image slices are extracted. Next, fine-focusing is performed to correct high-order motion errors, followed by fine-classification of the image slices to obtain the final ship class. Equivalent MEO SAR ship images generated from real LEO SAR data are utilized to construct the training and testing datasets, and simulated MEO SAR ship data are also used to evaluate the generalization of the whole method. The experimental results demonstrate that the proposed method can achieve high classification precision. Since only local region slices are used during the second-level processing step, the complex computations induced by fine-focusing the full image are avoided, thereby significantly improving overall efficiency.
(This article belongs to the Special Issue Advances in Remote Sensing Image Target Detection and Recognition)
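The two-level flow summarized above can be illustrated with a short, hypothetical sketch. Assuming PyTorch, the placeholder coarse_focus/fine_focus functions, the toy SmallCNN classifier, and the centre-crop slice extraction below are stand-ins for the paper's focusing steps and networks, not the authors' implementation.

```python
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    """Placeholder classifier; the paper's coarse/fine networks are not public."""
    def __init__(self, num_classes):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))
        self.fc = nn.Linear(32, num_classes)

    def forward(self, x):
        return self.fc(self.features(x).flatten(1))

def coarse_focus(range_compressed):      # stand-in for global fast coarse-focusing
    return range_compressed

def fine_focus(image_slice):             # stand-in for local high-order motion correction
    return image_slice

def two_level_recognition(range_compressed, coarse_net, fine_net, slice_size=64):
    img = coarse_focus(range_compressed)             # level 1: coarse focus on full data
    coarse_label = coarse_net(img).argmax(dim=1)     # major target category
    _, _, h, w = img.shape                           # level 2: refine only a local slice
    top, left = (h - slice_size) // 2, (w - slice_size) // 2
    refined = fine_focus(img[:, :, top:top + slice_size, left:left + slice_size])
    fine_label = fine_net(refined).argmax(dim=1)     # final ship class
    return coarse_label, fine_label

# toy usage: two_level_recognition(torch.randn(1, 1, 256, 256), SmallCNN(3), SmallCNN(8))
```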

25 pages, 19515 KiB  
Article
Towards Efficient SAR Ship Detection: Multi-Level Feature Fusion and Lightweight Network Design
by Wei Xu, Zengyuan Guo, Pingping Huang, Weixian Tan and Zhiqi Gao
Remote Sens. 2025, 17(15), 2588; https://doi.org/10.3390/rs17152588 - 24 Jul 2025
Viewed by 376
Abstract
Synthetic Aperture Radar (SAR) provides all-weather, all-time imaging capabilities, enabling reliable maritime ship detection under challenging weather and lighting conditions. However, most high-precision detection models rely on complex architectures and large-scale parameters, limiting their applicability to resource-constrained platforms such as satellite-based systems, where model size, computational load, and power consumption are tightly restricted. Thus, guided by the principles of lightweight design, robustness, and energy efficiency optimization, this study proposes a three-stage collaborative multi-level feature fusion framework to reduce model complexity without compromising detection performance. Firstly, the backbone network integrates depthwise separable convolutions and a Convolutional Block Attention Module (CBAM) to suppress background clutter and extract effective features. Building upon this, a cross-layer feature interaction mechanism is introduced via the Multi-Scale Coordinated Fusion (MSCF) and Bi-EMA Enhanced Fusion (Bi-EF) modules to strengthen joint spatial-channel perception. To further enhance detection capability, Efficient Feature Learning (EFL) modules are embedded in the neck to improve feature representation. Experiments on the SAR Ship Detection Dataset (SSDD) show that this method, with only 1.6 M parameters, achieves a mean average precision (mAP) of 98.35% in complex scenarios, including inshore and offshore environments. It resolves the long-standing difficulty of traditional methods in balancing accuracy against hardware resource constraints, providing a new technical path for real-time SAR ship detection on satellite platforms.
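The two backbone ingredients named above are standard building blocks. A minimal PyTorch sketch of a depthwise separable convolution and a conventional CBAM block follows; the reduction ratio and spatial kernel size are the common defaults, not necessarily the authors' settings.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise conv followed by a pointwise conv, the usual factorisation used to cut
    parameters and FLOPs."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, stride=stride, padding=1,
                                   groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.SiLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

class CBAM(nn.Module):
    """Standard Convolutional Block Attention Module: channel attention followed by
    spatial attention."""
    def __init__(self, channels, reduction=16, spatial_kernel=7):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False))
        self.spatial = nn.Conv2d(2, 1, spatial_kernel, padding=spatial_kernel // 2, bias=False)

    def forward(self, x):
        avg = self.mlp(x.mean(dim=(2, 3), keepdim=True))
        mx = self.mlp(x.amax(dim=(2, 3), keepdim=True))
        x = x * torch.sigmoid(avg + mx)                               # channel attention
        s = torch.cat([x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))                     # spatial attention
```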

22 pages, 16984 KiB  
Article
Small Ship Detection Based on Improved Neural Network Algorithm and SAR Images
by Jiaqi Li, Hongyuan Huo, Li Guo, De Zhang, Wei Feng, Yi Lian and Long He
Remote Sens. 2025, 17(15), 2586; https://doi.org/10.3390/rs17152586 - 24 Jul 2025
Viewed by 286
Abstract
Synthetic aperture radar images can be used for ship target detection. However, because ship outlines in SAR images are often unclear, noise and land background increase the difficulty and reduce the accuracy of ship detection, especially for small target ships. Therefore, based on the YOLOv5s model, this paper improves its backbone network and feature fusion network to raise the accuracy of ship target recognition. First, the LSKModule is used to improve the backbone network of YOLOv5s: features extracted by large convolution kernels are adaptively aggregated to fully capture context information, while key features are enhanced and noise interference is suppressed. Secondly, multiple Depthwise Separable Convolution layers are added to the SPPF (Spatial Pyramid Pooling-Fast) structure; although a small number of parameters and calculations are introduced, features from different receptive fields can be extracted. Third, the feature fusion network of YOLOv5s is improved based on BiFPN, and the shallow feature map is used to optimize small target detection performance. Finally, the CoordConv module is added before the detection head of YOLOv5, adding two coordinate channels during the convolution operation to further improve the accuracy of target detection. The mAP50 of this method reaches 97.6% on the SSDD dataset and 91.7% on the HRSID dataset, and the method was compared with a variety of advanced target detection models. The results show that its detection accuracy is higher than that of other similar target detection algorithms.
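The CoordConv idea mentioned in the last step is easy to show in isolation. The sketch below, assuming PyTorch, appends two normalised coordinate channels before a standard convolution; the kernel size and normalisation range are illustrative choices.

```python
import torch
import torch.nn as nn

class CoordConv(nn.Module):
    """Convolution that first appends two channels holding normalised x/y coordinates,
    so the filter can reason about absolute position (CoordConv idea)."""
    def __init__(self, in_ch, out_ch, kernel_size=3, stride=1, padding=1):
        super().__init__()
        self.conv = nn.Conv2d(in_ch + 2, out_ch, kernel_size, stride=stride, padding=padding)

    def forward(self, x):
        n, _, h, w = x.shape
        ys = torch.linspace(-1.0, 1.0, h, device=x.device).view(1, 1, h, 1).expand(n, 1, h, w)
        xs = torch.linspace(-1.0, 1.0, w, device=x.device).view(1, 1, 1, w).expand(n, 1, h, w)
        return self.conv(torch.cat([x, xs, ys], dim=1))
```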

23 pages, 7457 KiB  
Article
An Efficient Ship Target Integrated Imaging and Detection Framework (ST-IIDF) for Space-Borne SAR Echo Data
by Can Su, Wei Yang, Yongchen Pan, Hongcheng Zeng, Yamin Wang, Jie Chen, Zhixiang Huang, Wei Xiong, Jie Chen and Chunsheng Li
Remote Sens. 2025, 17(15), 2545; https://doi.org/10.3390/rs17152545 - 22 Jul 2025
Viewed by 328
Abstract
Due to the sparse distribution of ship targets in wide-area offshore scenarios, the typical cascade mode of imaging and detection for space-borne Synthetic Aperture Radar (SAR) echo data would consume substantial computational time and resources, severely affecting the timeliness of ship target information acquisition tasks. Therefore, we propose a ship target integrated imaging and detection framework (ST-IIDF) for SAR oceanic region data. A two-step filtering structure is added in the SAR imaging process to extract the potential areas of ship targets, which can accelerate the whole process. First, an improved peak-valley detection method based on one-dimensional scattering characteristics is used to locate the range gate units for ship targets. Second, a dynamic quantization method is applied to the imaged range gate units to further determine the azimuth region. Finally, a lightweight YOLO neural network is used to eliminate false alarm areas and obtain accurate positions of the ship targets. Through experiments on Hisea-1 and Pujiang-2 data, within sparse target scenes, the framework maintains over 90% accuracy in ship target detection, with an average processing speed increase of 35.95 times. The framework can be applied to ship target detection tasks with high timeliness requirements and provides an effective solution for real-time onboard processing. Full article
(This article belongs to the Special Issue Efficient Object Detection Based on Remote Sensing Images)
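The first filtering step, locating candidate range gates from a one-dimensional scattering profile, can be imitated with ordinary peak picking. The NumPy/SciPy sketch below uses an arbitrary global threshold and guard width and is only a stand-in for the paper's improved peak-valley detector.

```python
import numpy as np
from scipy.signal import find_peaks

def candidate_range_gates(rc_data, guard=8, k=3.0):
    """Toy locator for range gates that may contain ships. `rc_data` is range-compressed
    data (complex array, shape [azimuth, range]); `guard` and `k` are illustrative."""
    profile = np.abs(rc_data).mean(axis=0)                 # 1-D range energy profile
    thr = profile.mean() + k * profile.std()               # simple global threshold
    peaks, _ = find_peaks(profile, height=thr, distance=guard)
    # expand each peak into a small gate interval for later azimuth processing
    return [(max(p - guard, 0), min(p + guard, profile.size - 1)) for p in peaks]
```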

28 pages, 43087 KiB  
Article
LWSARDet: A Lightweight SAR Small Ship Target Detection Network Based on a Position–Morphology Matching Mechanism
by Yuliang Zhao, Yang Du, Qiutong Wang, Changhe Li, Yan Miao, Tengfei Wang and Xiangyu Song
Remote Sens. 2025, 17(14), 2514; https://doi.org/10.3390/rs17142514 - 19 Jul 2025
Viewed by 404
Abstract
The all-weather imaging capability of synthetic aperture radar (SAR) confers unique advantages for maritime surveillance. However, ship detection under complex sea conditions still faces challenges, such as high-frequency noise interference and the limited computational power of edge computing platforms. To address these challenges, we propose a lightweight SAR small ship detection network, LWSARDet, which mitigates feature redundancy and reduces computational complexity in existing models. Specifically, based on the YOLOv5 framework, a dual lightweighting strategy is adopted as follows: On the one hand, to address the limited nonlinear representation ability of the original network, a global channel attention mechanism is embedded and a feature extraction module, GCCR-GhostNet, is constructed, which effectively enhances the network's feature extraction capability and high-frequency noise suppression while reducing computational cost. On the other hand, to reduce feature dilution and computational redundancy in traditional detection heads when focusing on small targets, we replace conventional convolutions with simple linear transformations and design a lightweight detection head, LSD-Head. Furthermore, we propose a Position–Morphology Matching IoU loss function, P-MIoU, which integrates center distance constraints and morphological penalty mechanisms to more precisely capture the spatial and structural differences between predicted and ground truth bounding boxes. Extensive experiments conducted on the High-Resolution SAR Image Dataset (HRSID) and the SAR Ship Detection Dataset (SSDD) demonstrate that LWSARDet achieves superior overall performance compared to existing state-of-the-art (SOTA) methods.
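To make the idea of combining an IoU term with centre-distance and shape penalties concrete, here is an illustrative axis-aligned sketch in PyTorch. The actual P-MIoU is defined for oriented boxes and uses its own morphology term, so the weights and penalties below are assumptions.

```python
import torch

def iou_center_shape_loss(pred, target, eps=1e-7):
    """Illustrative stand-in: IoU loss plus a DIoU-style centre-distance term and a simple
    width/height shape penalty. Boxes are (x1, y1, x2, y2), shape (N, 4)."""
    x1 = torch.max(pred[:, 0], target[:, 0]); y1 = torch.max(pred[:, 1], target[:, 1])
    x2 = torch.min(pred[:, 2], target[:, 2]); y2 = torch.min(pred[:, 3], target[:, 3])
    inter = (x2 - x1).clamp(min=0) * (y2 - y1).clamp(min=0)
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    iou = inter / (area_p + area_t - inter + eps)
    # normalised centre distance over the enclosing box diagonal (as in DIoU)
    cxp, cyp = (pred[:, 0] + pred[:, 2]) / 2, (pred[:, 1] + pred[:, 3]) / 2
    cxt, cyt = (target[:, 0] + target[:, 2]) / 2, (target[:, 1] + target[:, 3]) / 2
    ex1 = torch.min(pred[:, 0], target[:, 0]); ey1 = torch.min(pred[:, 1], target[:, 1])
    ex2 = torch.max(pred[:, 2], target[:, 2]); ey2 = torch.max(pred[:, 3], target[:, 3])
    center = ((cxp - cxt) ** 2 + (cyp - cyt) ** 2) / ((ex2 - ex1) ** 2 + (ey2 - ey1) ** 2 + eps)
    # simple morphology penalty: relative mismatch of widths and heights
    wp, hp = pred[:, 2] - pred[:, 0], pred[:, 3] - pred[:, 1]
    wt, ht = target[:, 2] - target[:, 0], target[:, 3] - target[:, 1]
    shape = (torch.abs(wp - wt) / (torch.max(wp, wt) + eps) +
             torch.abs(hp - ht) / (torch.max(hp, ht) + eps)) / 2
    return (1 - iou + center + 0.5 * shape).mean()
```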

24 pages, 40762 KiB  
Article
Multiscale Task-Decoupled Oriented SAR Ship Detection Network Based on Size-Aware Balanced Strategy
by Shun He, Ruirui Yuan, Zhiwei Yang and Jiaxue Liu
Remote Sens. 2025, 17(13), 2257; https://doi.org/10.3390/rs17132257 - 30 Jun 2025
Viewed by 328
Abstract
Current synthetic aperture radar (SAR) ship datasets exhibit a notable disparity in the distribution of large, medium, and small ship targets. This imbalance makes it difficult for the relatively small number of large and medium-sized ships to be trained effectively, resulting in many false alarms. Therefore, to address the issues of scale diversity, intra-class imbalance in ship data, and the feature conflict problem associated with traditional coupled detection heads, we propose a multiscale task-decoupled oriented SAR ship target detector based on a size-aware balanced strategy. First, multiscale target features are extracted using the multikernel heterogeneous perception module (MKHP). Meanwhile, the triple-attention module is introduced to establish long-range channel dependencies and alleviate the issue of small target feature annihilation, which effectively enhances the feature characterization ability of the model. Second, given the differences in the feature information required by the detection and classification tasks, a channel attention-based task decoupling dual-head (CAT2D) detection head structure is introduced to address the inherent conflict between classification and localization. Finally, a new size-aware balanced (SAB) loss strategy is proposed to guide the network to focus on scarce targets during training and alleviate the intra-class imbalance problem. The ablation experiments on SSDD+ reflect the contribution of each component, and the results of the comparison experiments on the RSDD-SAR and HRSID datasets show that the proposed method outperforms other state-of-the-art detection models. Furthermore, our approach exhibits superior detection coverage in both offshore and inshore ship detection scenarios.
(This article belongs to the Section Remote Sensing Image Processing)
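One common way to realize a size-aware reweighting is to up-weight whichever size group is scarce. The NumPy sketch below uses class-balanced effective-number weighting with COCO-style size bins as an illustrative stand-in; it is not the paper's SAB formulation.

```python
import numpy as np

def size_balance_weights(box_areas, bins=(32 ** 2, 96 ** 2), beta=0.999):
    """Toy per-target weights that up-weight scarce size groups (small/medium/large).
    `bins` and `beta` are illustrative."""
    box_areas = np.asarray(box_areas, dtype=float)
    groups = np.digitize(box_areas, bins)                  # 0=small, 1=medium, 2=large
    counts = np.bincount(groups, minlength=3).astype(float)
    eff = (1.0 - np.power(beta, counts)) / (1.0 - beta)    # effective number per group
    w = 1.0 / np.maximum(eff, 1e-6)
    w = w / w.sum() * 3.0                                  # normalise to mean weight 1
    return w[groups]                                       # one weight per target
```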

21 pages, 5214 KiB  
Article
YOLO-SAR: An Enhanced Multi-Scale Ship Detection Method in Low-Light Environments
by Zihang Xiong, Mei Wang, Ruixiang Kan and Jiayu Zhang
Appl. Sci. 2025, 15(13), 7288; https://doi.org/10.3390/app15137288 - 28 Jun 2025
Viewed by 356
Abstract
Nowadays, object detection has become increasingly crucial in various Internet-of-Things (IoT) systems, and ship detection is an essential component of this field. In low-illumination scenes, traditional ship detection algorithms often struggle due to poor visibility and blurred details in RGB video streams. To address this weakness, we create the Lowship dataset and propose the YOLO-SAR framework, which is based on the You Only Look Once (YOLO) architecture. For ship detection under such challenging conditions, the main contributions of this work are as follows: (i) a low-illumination image-enhancement module that adaptively improves multi-scale feature perception in low-illumination scenes; (ii) receptive-field attention convolution to compensate for weak long-range modeling; and (iii) an Adaptively Spatial Feature Fusion head to refine the multi-scale learning of ship features. Experiments show that our method achieves 92.9% precision and raises mAP@0.5 to 93.8%, outperforming mainstream approaches. These state-of-the-art results confirm the significant practical value of our approach.
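Low-light enhancement can be as simple as brightness-adaptive gamma correction. The toy NumPy function below illustrates only the general idea; the module described in the paper is a learned, multi-scale component, so everything here is an assumption for illustration.

```python
import numpy as np

def adaptive_gamma(img, target_mean=0.45, eps=1e-6):
    """Very simple adaptive low-light enhancement: choose a gamma so the mean brightness
    of the 0-1 normalised image moves toward `target_mean` (illustrative values)."""
    img = img.astype(np.float32) / 255.0
    mean = float(img.mean())
    gamma = np.log(target_mean + eps) / np.log(mean + eps)   # gamma < 1 brightens dark frames
    return np.clip(img ** gamma, 0.0, 1.0)
```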

26 pages, 6668 KiB  
Article
Dark Ship Detection via Optical and SAR Collaboration: An Improved Multi-Feature Association Method Between Remote Sensing Images and AIS Data
by Fan Li, Kun Yu, Chao Yuan, Yichen Tian, Guang Yang, Kai Yin and Youguang Li
Remote Sens. 2025, 17(13), 2201; https://doi.org/10.3390/rs17132201 - 26 Jun 2025
Viewed by 635
Abstract
Dark ships, vessels that deliberately disable their AIS signals, constitute a grave maritime safety hazard, and their detection is hindered by over-reliance on AIS, inadequate surveillance coverage, and significant mismatch rates. This paper proposes an improved multi-feature association method that integrates satellite remote sensing and AIS data, with a focus on oriented bounding box course estimation, to improve the detection of dark ships and enhance maritime surveillance. Firstly, an oriented bounding box object detection model (YOLOv11n-OBB) is trained to overcome the limitations of horizontal bounding boxes in representing orientation. Secondly, by integrating position, dimensions (length and width), and course characteristics, we devise a joint cost function to evaluate the combined significance of multiple features. Subsequently, an advanced JVC global optimization algorithm is employed to ensure high-precision association in dense scenes. Finally, by integrating data from the Gaofen-6 (optical) and Gaofen-3B (SAR) satellites, a day-and-night collaborative monitoring framework is constructed to address the blind spots of single-sensor monitoring during night-time or adverse weather conditions. Our results indicate that the detection model achieves a high average precision (AP50) of 0.986 on the optical dataset and 0.903 on the SAR dataset. The association accuracy of the multi-feature association algorithm is 91.74% for optical image and AIS data matching and 91.33% for SAR image and AIS data matching, and the association rate reaches 96.03% (optical) and 74.24% (SAR). This study provides an efficient technical tool for maritime safety regulation through multi-source data fusion and algorithm innovation.
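The association step described above builds a joint cost from position, size, and course and then solves a global assignment. The sketch below uses SciPy's Hungarian solver as a stand-in for the JVC algorithm; the feature normalisers, weights, and gate are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(det, ais, w=(0.5, 0.3, 0.2), gate=2.0):
    """Toy multi-feature association. Each row of `det`/`ais` (NumPy arrays) is
    (x, y, length, width, course_deg); weights and normalisers are illustrative."""
    d_pos = np.linalg.norm(det[:, None, :2] - ais[None, :, :2], axis=-1) / 500.0
    d_dim = np.abs(det[:, None, 2:4] - ais[None, :, 2:4]).sum(-1) / 50.0
    diff = np.abs(det[:, None, 4] - ais[None, :, 4]) % 360.0
    d_course = np.minimum(diff, 360.0 - diff) / 90.0
    cost = w[0] * d_pos + w[1] * d_dim + w[2] * d_course   # joint cost matrix
    rows, cols = linear_sum_assignment(cost)               # global assignment
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] < gate]  # gate out poor matches
```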

22 pages, 11308 KiB  
Article
TIAR-SAR: An Oriented SAR Ship Detector Combining a Task Interaction Head Architecture with Composite Angle Regression
by Yu Gu, Minding Fang and Dongliang Peng
Remote Sens. 2025, 17(12), 2049; https://doi.org/10.3390/rs17122049 - 13 Jun 2025
Viewed by 484
Abstract
Oriented ship detection in Synthetic Aperture Radar (SAR) images has broad applications in maritime surveillance and other fields. While deep learning advancements have significantly improved ship detection performance, persistent challenges remain for existing methods, including the inherent misalignment between regression and classification tasks and the boundary discontinuity problem in oriented object detection. These issues hinder efficient and accurate ship detection in complex scenarios. To address these challenges, we propose TIAR-SAR, a novel oriented SAR ship detector featuring a task interaction head and composite angle regression. First, we propose a task interaction detection head (Tihead) capable of predicting both oriented bounding boxes (OBBs) and horizontal bounding boxes (HBBs) simultaneously. Within the Tihead, a "decompose-then-interact" structure is designed; it not only mitigates feature misalignment but also promotes feature interaction between the regression and classification tasks, thereby enhancing prediction consistency. Second, we propose a joint angle refinement mechanism (JARM). The JARM addresses the non-differentiability of the traditional rotated Intersection over Union (IoU) loss through a composite angle regression loss (CARL) function, which strategically combines direct and indirect angle regression. A boundary angle correction mechanism (BACM) is then designed to enhance angle estimation accuracy: during inference, BACM dynamically replaces an object's OBB prediction with its corresponding HBB if the OBB exhibits excessive angle deviation when the object's angle is near the predefined boundary. Finally, the performance and applicability of the proposed methods are evaluated through extensive experiments on multiple public datasets, including SRSDD, HRSID, and DOTAv1. Experimental results on the SRSDD dataset demonstrate that the mAP50 of the proposed method reaches 63.91%, an improvement of 4.17% over baseline methods. The detector achieves 17.42 FPS on 1024 × 1024 images using an RTX 2080 Ti GPU, with a model size of only 21.92 MB. Comparative experiments with other state-of-the-art methods on the HRSID dataset demonstrate the proposed method's superior detection performance in complex nearshore scenarios. Furthermore, when further tested on the DOTAv1 dataset, the mAP50 reaches 79.1%.
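Composite angle regression can be imitated by mixing a direct term on the angle with an indirect term on its sine and cosine, which sidesteps the wrap-around discontinuity at the angular boundary. The PyTorch sketch below only illustrates that idea; the paper's CARL terms and weighting may differ.

```python
import torch
import torch.nn.functional as F

def composite_angle_loss(theta_pred, theta_gt, alpha=0.5):
    """Illustrative mix of direct and indirect (sin/cos) angle regression.
    Angles are in radians; `alpha` is an assumed blending weight."""
    direct = F.smooth_l1_loss(theta_pred, theta_gt)
    indirect = (F.smooth_l1_loss(torch.sin(theta_pred), torch.sin(theta_gt)) +
                F.smooth_l1_loss(torch.cos(theta_pred), torch.cos(theta_gt)))
    return alpha * direct + (1 - alpha) * indirect
```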

21 pages, 4072 KiB  
Article
ST-YOLOv8: Small-Target Ship Detection in SAR Images Targeting Specific Marine Environments
by Fei Gao, Yang Tian, Yongliang Wu and Yunxia Zhang
Appl. Sci. 2025, 15(12), 6666; https://doi.org/10.3390/app15126666 - 13 Jun 2025
Viewed by 375
Abstract
Synthetic Aperture Radar (SAR) image ship detection faces challenges such as distinguishing ships from other terrain and structures, especially in complex marine environments. The motivation behind this work is to enhance detection accuracy while minimizing false positives, which is crucial for applications such as defense vessel monitoring and civilian search and rescue operations. To achieve this goal, we propose several architectural improvements to You Only Look Once version 8 Nano (YOLOv8n) and present Small Target-YOLOv8 (ST-YOLOv8), a novel lightweight SAR ship detection model based on the enhanced YOLOv8n framework. The C2f module in the backbone's transition sections is replaced by the Conv_Online Reparameterized Convolution (C_OREPA) module, reducing convolutional complexity and improving efficiency. The Atrous Spatial Pyramid Pooling (ASPP) module is added at the end of the backbone to extract finer features from smaller and more complex ship targets. In the neck network, the Shuffle Attention (SA) module is employed before each upsampling step to improve upsampling quality. Additionally, we replace the Complete Intersection over Union (C-IoU) loss function with the Wise Intersection over Union (W-IoU) loss function, which enhances bounding box precision. We conducted ablation experiments on two widely used multimodal SAR datasets. The proposed model significantly outperforms the YOLOv8n baseline, achieving 94.1% accuracy, 82% recall, and an 87.6% F1 score on the SAR Ship Detection Dataset (SSDD), and 92.7% accuracy, 84.5% recall, and an 88.1% F1 score on the SAR Ship Dataset_v0 (SSDv0). Furthermore, the ST-YOLOv8 model outperforms several state-of-the-art multi-scale ship detection algorithms on both datasets. In summary, by integrating advanced neural network architectures and optimization techniques, ST-YOLOv8 significantly improves detection accuracy and reduces false detection rates, making it well suited to complex backgrounds and multi-scale ship detection. Future work will focus on lightweight model optimization for deployment on mobile platforms to broaden its applicability across different scenarios.
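ASPP is a standard block; a minimal DeepLab-style PyTorch version is shown below, with illustrative dilation rates that need not match the paper's configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ASPP(nn.Module):
    """Atrous Spatial Pyramid Pooling: parallel dilated convolutions plus a global
    image-pooling branch, concatenated and projected back to `out_ch` channels."""
    def __init__(self, in_ch, out_ch, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 1 if r == 1 else 3,
                          padding=0 if r == 1 else r, dilation=r, bias=False),
                nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))
            for r in rates])
        self.image_pool = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(in_ch, out_ch, 1, bias=False),
            nn.ReLU(inplace=True))
        self.project = nn.Sequential(
            nn.Conv2d(out_ch * (len(rates) + 1), out_ch, 1, bias=False),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))

    def forward(self, x):
        feats = [b(x) for b in self.branches]
        pooled = F.interpolate(self.image_pool(x), size=x.shape[-2:], mode="bilinear",
                               align_corners=False)
        return self.project(torch.cat(feats + [pooled], dim=1))
```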

20 pages, 5050 KiB  
Article
Deep Metric Learning for Fine-Grained Ship Classification in SAR Images with Sidelobe Interference
by Haibin Zhu, Yaxin Mu, Wupeng Xie, Kang Xing, Bin Tan, Yashi Zhou, Zhongde Yu, Zhiying Cui, Chuang Zhang, Xin Liu and Zhenghuan Xia
Remote Sens. 2025, 17(11), 1835; https://doi.org/10.3390/rs17111835 - 24 May 2025
Viewed by 513
Abstract
Sidelobe interference often causes different targets to exhibit similar features, diminishing fine-grained classification accuracy; this effect is particularly pronounced when the available data are limited. To address these issues, a novel classification framework for sidelobe-affected SAR imagery is proposed. First, a method based on maximum median filtering is adopted to remove sidelobes by exploiting local grayscale differences between the target and the sidelobes, constructing a high-quality SAR dataset. Second, a deep metric learning network is constructed for fine-grained classification. To enhance classification performance on limited samples, a feature extraction module integrating a lightweight attention mechanism is designed to extract discriminative features, and a hybrid loss function is proposed to strengthen intra-class correlation and inter-class separability. Experimental results on the FUSAR-Ship dataset demonstrate that the method exhibits excellent sidelobe suppression performance. Furthermore, the proposed framework achieves an accuracy of 84.18% across five ship target categories, outperforming existing methods and significantly enhancing classification performance under sidelobe interference and limited data.
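A common way to pursue intra-class compactness and inter-class separability is to add a metric term to the usual classification loss. The PyTorch sketch below pairs cross-entropy with a naively mined triplet loss as an illustrative hybrid objective; the paper's hybrid loss is formulated differently, and the margin and weight here are assumptions.

```python
import torch
import torch.nn as nn

class HybridMetricLoss(nn.Module):
    """Illustrative hybrid objective: cross-entropy plus an in-batch triplet margin term."""
    def __init__(self, margin=0.3, lam=0.5):
        super().__init__()
        self.ce = nn.CrossEntropyLoss()
        self.triplet = nn.TripletMarginLoss(margin=margin)
        self.lam = lam

    def forward(self, logits, embeddings, labels):
        # naive mining: for each anchor take the first positive and first negative found
        anchors, positives, negatives = [], [], []
        for i in range(len(labels)):
            pos = (labels == labels[i]).nonzero(as_tuple=True)[0]
            neg = (labels != labels[i]).nonzero(as_tuple=True)[0]
            pos = pos[pos != i]
            if len(pos) and len(neg):
                anchors.append(i); positives.append(int(pos[0])); negatives.append(int(neg[0]))
        tri = 0.0
        if anchors:
            idx = lambda lst: torch.as_tensor(lst, device=embeddings.device)
            tri = self.triplet(embeddings[idx(anchors)],
                               embeddings[idx(positives)],
                               embeddings[idx(negatives)])
        return self.ce(logits, labels) + self.lam * tri
```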

24 pages, 24381 KiB  
Article
AJANet: SAR Ship Detection Network Based on Adaptive Channel Attention and Large Separable Kernel Adaptation
by Yishuang Chen, Jie Chen, Long Sun, Bocai Wu and Hui Xu
Remote Sens. 2025, 17(10), 1745; https://doi.org/10.3390/rs17101745 - 16 May 2025
Cited by 1 | Viewed by 448
Abstract
Due to issues such as low resolution, scattering noise, and background clutter, ship detection in Synthetic Aperture Radar (SAR) images remains challenging, especially in inshore regions, where ships and background clutter exhibit similar scattering characteristics. To overcome these challenges, this paper proposes a novel SAR ship detection framework that integrates adaptive channel attention with large kernel adaptation. The proposed method improves multi-scale contextual information extraction by enhancing feature map interactions at different scales, effectively reducing false positives, missed detections, and localization ambiguities, especially in complex inshore environments. It also includes an adaptive channel attention block that adjusts attention weights according to the dimensions of the input feature maps, enabling the model to prioritize local information and improve sensitivity to small object features in SAR images. In addition, a large kernel attention block with adaptive kernel size is introduced to automatically adjust the receptive field and extract abundant context information at different detection layers. Experimental evaluations on the SSDD and HRSID SAR ship datasets indicate that our method achieves excellent detection performance compared with current methods and demonstrate its effectiveness in overcoming SAR ship detection challenges.
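Channel attention that adapts to the feature-map dimensions can be illustrated with an ECA-style block, where the 1-D kernel size is derived from the channel count. This is a generic stand-in for, not a reproduction of, the adaptive attention blocks described above.

```python
import math
import torch
import torch.nn as nn

class AdaptiveChannelAttention(nn.Module):
    """ECA-style channel attention whose 1-D kernel size grows with the channel count."""
    def __init__(self, channels, gamma=2, b=1):
        super().__init__()
        k = int(abs((math.log2(channels) + b) / gamma))
        k = k if k % 2 else k + 1                       # kernel size must be odd
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)
        self.pool = nn.AdaptiveAvgPool2d(1)

    def forward(self, x):
        y = self.pool(x)                                # (N, C, 1, 1)
        y = self.conv(y.squeeze(-1).transpose(-1, -2))  # treat channels as a 1-D sequence
        y = torch.sigmoid(y.transpose(-1, -2).unsqueeze(-1))
        return x * y                                    # per-channel reweighting
```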

20 pages, 6387 KiB  
Article
Denoising and Feature Enhancement Network for Target Detection Based on SAR Images
by Cheng Yang, Chengyu Li and Yongfeng Zhu
Remote Sens. 2025, 17(10), 1739; https://doi.org/10.3390/rs17101739 - 16 May 2025
Cited by 2 | Viewed by 673
Abstract
Synthetic aperture radar (SAR) is characterized by its all-weather monitoring capability and high-resolution imaging, and it plays a crucial role in operations such as marine salvage and strategic deployments. However, existing vessel detection technologies face challenges such as occlusion and deformation of targets in multi-scale target detection, as well as significant interference noise in complex scenarios such as coastal areas and ports. To address these issues, this paper proposes a YOLOv8-based algorithm for detecting ship targets in complex backgrounds in SAR images, named DFENet (Denoising and Feature Enhancement Network). First, we design a background suppression and target enhancement module (BSTEM), which aims to suppress noise interference in complex backgrounds. Second, we propose a feature enhancement attention module (FEAM) to enhance the network's ability to extract edge and contour features and to improve its dynamic awareness of critical areas. Experiments conducted on public datasets demonstrate the effectiveness and superiority of DFENet: compared with the benchmark network, mAP75 on SSDD and HRSID is improved by 2.3% and 2.9%, respectively. In summary, DFENet performs well in scenarios with significant background interference or high demands on positioning accuracy, indicating strong potential for various applications.

21 pages, 52785 KiB  
Article
MC-ASFF-ShipYOLO: Improved Algorithm for Small-Target and Multi-Scale Ship Detection for Synthetic Aperture Radar (SAR) Images
by Yubin Xu, Haiyan Pan, Lingqun Wang and Ran Zou
Sensors 2025, 25(9), 2940; https://doi.org/10.3390/s25092940 - 7 May 2025
Viewed by 791
Abstract
Synthetic aperture radar (SAR) ship detection holds significant application value in maritime monitoring, marine traffic management, and safety maintenance. Despite remarkable advances in deep-learning-based detection methods, performance remains constrained by the vast size differences between ships, the limited feature information of small targets, and complex environmental interference in SAR imagery. Although many studies have separately tackled small target identification and multi-scale detection in SAR imagery, integrated approaches that jointly address both challenges within a unified framework for SAR ship detection are still relatively scarce. This study presents MC-ASFF-ShipYOLO (Monte Carlo Attention—Adaptively Spatial Feature Fusion—ShipYOLO), a novel framework addressing both small target recognition and multi-scale ship detection. Two key innovations distinguish our approach: (1) We introduce a Monte Carlo Attention (MCAttn) module into the backbone network that employs random sampling pooling operations to generate attention maps for feature map weighting, enhancing focus on small targets and improving their detection performance. (2) We add Adaptively Spatial Feature Fusion (ASFF) modules to the detection head that adaptively learn spatial fusion weights across feature layers and perform dynamic feature fusion, ensuring consistent ship representations across scales and mitigating feature conflicts, thereby enhancing multi-scale detection capability. Experiments are conducted on a newly constructed dataset combining HRSID and SSDD. Ablation results demonstrate that, compared to the baseline, MC-ASFF-ShipYOLO achieves improvements of 1.39% in precision, 2.63% in recall, 2.28% in AP50, and 3.04% in AP, indicating a significant enhancement in overall detection performance. Furthermore, comparative experiments show that our method outperforms mainstream models: even under high-confidence thresholds, MC-ASFF-ShipYOLO predicts more high-quality detection boxes, offering a valuable solution for advancing SAR ship detection technology.
(This article belongs to the Special Issue Recent Advances in Synthetic Aperture Radar (SAR) Remote Sensing)
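ASFF, the second innovation listed above, learns per-pixel weights for blending feature levels. A minimal PyTorch sketch for three levels that already share a channel count is shown below; the resizing mode, 1x1 weight convolutions, and single-output simplification are assumptions rather than the paper's exact module.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ASFF(nn.Module):
    """Minimal Adaptively Spatial Feature Fusion: resize all levels to the target level's
    spatial size and blend them with per-pixel softmax weights."""
    def __init__(self, channels):
        super().__init__()
        self.weight_convs = nn.ModuleList([nn.Conv2d(channels, 1, 1) for _ in range(3)])

    def forward(self, feats, target_idx=1):
        size = feats[target_idx].shape[-2:]
        resized = [F.interpolate(f, size=size, mode="nearest") if f.shape[-2:] != size else f
                   for f in feats]
        logits = torch.cat([conv(f) for conv, f in zip(self.weight_convs, resized)], dim=1)
        weights = torch.softmax(logits, dim=1)          # (N, 3, H, W) spatial fusion weights
        return sum(weights[:, i:i + 1] * resized[i] for i in range(3))
```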

19 pages, 12427 KiB  
Article
Oriented SAR Ship Detection Based on Edge Deformable Convolution and Point Set Representation
by Tianyue Guan, Sheng Chang, Yunkai Deng, Fengli Xue, Chunle Wang and Xiaoxue Jia
Remote Sens. 2025, 17(9), 1612; https://doi.org/10.3390/rs17091612 - 1 May 2025
Cited by 1 | Viewed by 699
Abstract
Ship detection in synthetic aperture radar (SAR) images holds significant importance for both military and civilian applications, including maritime traffic supervision, marine search and rescue operations, and emergency response initiatives. Although extensive research has been conducted in this field, the interference of speckle noise in SAR images and the potential discontinuity of target contours continue to pose challenges for the accurate detection of multi-directional ships in complex scenes. To address these issues, we propose a novel ship detection method for SAR images that combines edge deformable convolution with point set representation. By integrating edge deformable convolution into the backbone networks, we learn the correlations between discontinuous target blocks in SAR images, which effectively suppresses speckle noise while capturing the overall offset characteristics of targets. On this basis, a multi-directional ship detection module utilizing radial basis function (RBF) point set representation is developed. By constructing a point set transformation function, we establish efficient geometric alignment between the point set and the predicted rotated box, and we impose constraints on the penalty term associated with the point set transformation to ensure accurate mapping between point set features and oriented prediction boxes. This methodology enables the precise detection of multi-directional ship targets even in dense scenes. The experimental results on two publicly available datasets, RSDD-SAR and SSDD, demonstrate that our proposed method achieves state-of-the-art performance when benchmarked against other advanced detection models.
