Search Results (81)

Search Parameters:
Keywords = real aperture radar modulation

24 pages, 29785 KiB  
Article
Multi-Scale Feature Extraction with 3D Complex-Valued Network for PolSAR Image Classification
by Nana Jiang, Wenbo Zhao, Jiao Guo, Qiang Zhao and Jubo Zhu
Remote Sens. 2025, 17(15), 2663; https://doi.org/10.3390/rs17152663 (registering DOI) - 1 Aug 2025
Abstract
Compared to traditional real-valued neural networks, which process only amplitude information, complex-valued neural networks handle both amplitude and phase information, leading to superior performance in polarimetric synthetic aperture radar (PolSAR) image classification tasks. This paper proposes a multi-scale feature extraction (MSFE) method based on a 3D complex-valued network to improve classification accuracy by fully leveraging multi-scale features, including phase information. We first designed a complex-valued three-dimensional network framework combining complex-valued 3D convolution (CV-3DConv) with complex-valued squeeze-and-excitation (CV-SE) modules. This framework is capable of simultaneously capturing spatial and polarimetric features, including both amplitude and phase information, from PolSAR images. Furthermore, to address robustness degradation from limited labeled samples, we introduced a multi-scale learning strategy that jointly models global and local features. Specifically, global features extract overall semantic information, while local features help the network capture region-specific semantics. This strategy enhances information utilization by integrating multi-scale receptive fields, complementing feature advantages. Extensive experiments on four benchmark datasets demonstrated that the proposed method outperforms various comparison methods, maintaining high classification accuracy across different sampling rates, thus validating its effectiveness and robustness. Full article
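The complex-valued convolution this abstract builds on (CV-3DConv) reduces to four real convolutions, since (Wr + iWi) * (xr + ixi) = (Wr*xr − Wi*xi) + i(Wr*xi + Wi*xr). Below is a minimal NumPy sketch of that standard decomposition; it is illustrative only, not the authors' implementation (which adds CV-SE attention, multi-scale branches, and learned weights):

```python
import numpy as np

def conv3d_real(x, w):
    """Naive 'valid' 3-D correlation for small arrays (real or complex)."""
    D, H, W = x.shape
    d, h, wk = w.shape
    out = np.zeros((D - d + 1, H - h + 1, W - wk + 1),
                   dtype=np.result_type(x, w))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                out[i, j, k] = np.sum(x[i:i+d, j:j+h, k:k+wk] * w)
    return out

def cv_conv3d(x, w):
    """Complex-valued conv as four real convs on (real, imag) parts."""
    rr = conv3d_real(x.real, w.real)
    ii = conv3d_real(x.imag, w.imag)
    ri = conv3d_real(x.real, w.imag)
    ir = conv3d_real(x.imag, w.real)
    return (rr - ii) + 1j * (ri + ir)
```

In a real network each of the four real convolutions shares its kernels across the whole tensor, so a complex layer costs roughly 4x the multiplications of a real layer of the same shape while preserving the phase information the abstract emphasizes.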

15 pages, 4258 KiB  
Article
Complex-Scene SAR Aircraft Recognition Combining Attention Mechanism and Inner Convolution Operator
by Wansi Liu, Huan Wang, Jiapeng Duan, Lixiang Cao, Teng Feng and Xiaomin Tian
Sensors 2025, 25(15), 4749; https://doi.org/10.3390/s25154749 (registering DOI) - 1 Aug 2025
Abstract
Synthetic aperture radar (SAR), as an active microwave imaging system, has the capability of all-weather and all-time observation. In response to the challenges of aircraft detection in SAR images due to the complex background interference caused by the continuous scattering of airport buildings and the demand for real-time processing, this paper proposes a YOLOv7-MTI recognition model that combines the attention mechanism and involution. By integrating the MTCN module and involution, performance is enhanced. The Multi-TASP-Conv network (MTCN) module aims to effectively extract low-level semantic and spatial information using a shared lightweight attention gate structure to achieve cross-dimensional interaction between “channels and space” with very few parameters, capturing the dependencies among multiple dimensions and improving feature representation ability. Involution helps the model adaptively adjust the weights of spatial positions through dynamic parameterized convolution kernels, strengthening the discrete strong scattering points specific to aircraft and suppressing the continuous scattering of the background, thereby alleviating the interference of complex backgrounds. Experiments on the SAR-AIRcraft-1.0 dataset, which includes seven categories such as A220, A320/321, A330, ARJ21, Boeing737, Boeing787, and others, show that the mAP and mRecall of YOLOv7-MTI reach 93.51% and 96.45%, respectively, outperforming Faster R-CNN, SSD, YOLOv5, YOLOv7, and YOLOv8. Compared with the basic YOLOv7, mAP is improved by 1.47%, mRecall by 1.64%, and FPS by 8.27%, achieving an effective balance between accuracy and speed, providing research ideas for SAR aircraft recognition. Full article
(This article belongs to the Section Radar Sensors)

25 pages, 19515 KiB  
Article
Towards Efficient SAR Ship Detection: Multi-Level Feature Fusion and Lightweight Network Design
by Wei Xu, Zengyuan Guo, Pingping Huang, Weixian Tan and Zhiqi Gao
Remote Sens. 2025, 17(15), 2588; https://doi.org/10.3390/rs17152588 - 24 Jul 2025
Abstract
Synthetic Aperture Radar (SAR) provides all-weather, all-time imaging capabilities, enabling reliable maritime ship detection under challenging weather and lighting conditions. However, most high-precision detection models rely on complex architectures and large-scale parameters, limiting their applicability to resource-constrained platforms such as satellite-based systems, where model size, computational load, and power consumption are tightly restricted. Thus, guided by the principles of lightweight design, robustness, and energy efficiency optimization, this study proposes a three-stage collaborative multi-level feature fusion framework to reduce model complexity without compromising detection performance. Firstly, the backbone network integrates depthwise separable convolutions and a Convolutional Block Attention Module (CBAM) to suppress background clutter and extract effective features. Building upon this, a cross-layer feature interaction mechanism is introduced via the Multi-Scale Coordinated Fusion (MSCF) and Bi-EMA Enhanced Fusion (Bi-EF) modules to strengthen joint spatial-channel perception. To further enhance detection capability, Efficient Feature Learning (EFL) modules are embedded in the neck to improve feature representation. Experiments on the SAR Ship Detection Dataset (SSDD) show that this method, with only 1.6 M parameters, achieves a mean average precision (mAP) of 98.35% in complex scenarios, including inshore and offshore environments. It resolves the difficulty faced by traditional methods, which cannot simultaneously satisfy accuracy and hardware resource constraints, providing a new technical path for real-time SAR ship detection on satellite platforms. Full article
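The parameter savings that motivate the depthwise separable convolutions mentioned above are easy to quantify: a standard k x k convolution costs k*k*Cin*Cout weights, while the depthwise-plus-pointwise factorization costs k*k*Cin + Cin*Cout. A small sketch (the channel counts are illustrative, not taken from the paper):

```python
def conv_params(c_in, c_out, k):
    """Weight count of a standard k x k convolution (bias ignored)."""
    return k * k * c_in * c_out

def dws_conv_params(c_in, c_out, k):
    """Weight count of the depthwise separable factorization."""
    depthwise = k * k * c_in   # one k x k filter per input channel
    pointwise = c_in * c_out   # 1 x 1 conv that mixes channels
    return depthwise + pointwise

# e.g. a 3x3 layer mapping 64 -> 128 channels
std = conv_params(64, 128, 3)      # 73728 weights
dws = dws_conv_params(64, 128, 3)  # 576 + 8192 = 8768 weights
```

For typical channel widths the ratio approaches k*k (here roughly 8.4x fewer weights), which is why the factorization is a staple of models targeting satellite-class hardware budgets.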

19 pages, 2468 KiB  
Article
A Dual-Branch Spatial-Frequency Domain Fusion Method with Cross Attention for SAR Image Target Recognition
by Chao Li, Jiacheng Ni, Ying Luo, Dan Wang and Qun Zhang
Remote Sens. 2025, 17(14), 2378; https://doi.org/10.3390/rs17142378 - 10 Jul 2025
Abstract
Synthetic aperture radar (SAR) image target recognition has important application values in security reconnaissance and disaster monitoring. However, due to speckle noise and target orientation sensitivity in SAR images, traditional spatial domain recognition methods face challenges in accuracy and robustness. To effectively address these challenges, we propose a dual-branch spatial-frequency domain fusion recognition method with cross-attention, achieving deep fusion of spatial and frequency domain features. In the spatial domain, we propose an enhanced multi-scale feature extraction module (EMFE), which adopts a multi-branch parallel structure to effectively enhance the network’s multi-scale feature representation capability. Combining frequency domain guided attention, the model focuses on key regional features in the spatial domain. In the frequency domain, we design a hybrid frequency domain transformation module (HFDT) that extracts real and imaginary features through Fourier transform to capture the global structure of the image. Meanwhile, we introduce a spatially guided frequency domain attention to enhance the discriminative capability of frequency domain features. Finally, we propose a cross-domain feature fusion (CDFF) module, which achieves bidirectional interaction and optimal fusion of spatial-frequency domain features through cross attention and adaptive feature fusion. Experimental results demonstrate that our method achieves significantly superior recognition accuracy compared to existing methods on the MSTAR dataset. Full article
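The HFDT idea of feeding real and imaginary Fourier components to the network as separate channels can be sketched in a few lines of NumPy. This shows only the generic transform step, not the paper's learned module or its spatially guided attention:

```python
import numpy as np

def hybrid_freq_features(img):
    """Centered 2-D spectrum of an image, split into real and imaginary
    channels so a real-valued network can consume global structure."""
    spec = np.fft.fftshift(np.fft.fft2(img))
    return np.stack([spec.real, spec.imag])  # shape (2, H, W)
```

Because every frequency bin mixes contributions from the whole image, these channels carry global structure that complements the local receptive fields of the spatial branch.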

22 pages, 27201 KiB  
Article
Spatiotemporal Interactive Learning for Cloud Removal Based on Multi-Temporal SAR–Optical Images
by Chenrui Xu, Zhenfei Wang, Liang Chen and Xiangchao Meng
Remote Sens. 2025, 17(13), 2169; https://doi.org/10.3390/rs17132169 - 24 Jun 2025
Abstract
Optical remote sensing images suffer from information loss due to cloud interference, while Synthetic Aperture Radar (SAR), with all-weather and day–night imaging capability, provides crucial auxiliary data for cloud removal and reconstruction. However, existing cloud removal methods face the following key challenges: insufficient utilization of spatiotemporal information in multi-temporal data, and fusion challenges arising from fundamentally different imaging mechanisms between optical and SAR images. To address these challenges, a spatiotemporal feature interaction-based cloud removal method is proposed to effectively fuse SAR and optical images. Built upon a conditional generative adversarial network framework, the method incorporates three key modules: a multi-temporal spatiotemporal feature joint extraction module, a spatiotemporal information interaction module, and a spatiotemporal discriminator module. These components jointly establish a many-to-many spatiotemporal interactive learning network, which separately extracts and fuses spatiotemporal features from multi-temporal SAR–optical image pairs to generate temporally consistent, cloud-free image sequences. Experiments on both simulated and real datasets demonstrate the superior performance of the proposed method. Full article

26 pages, 42046 KiB  
Article
High-Resolution Wide-Beam Millimeter-Wave ArcSAR System for Urban Infrastructure Monitoring
by Wenjie Shen, Wenxing Lv, Yanping Wang, Yun Lin, Yang Li, Zechao Bai and Kuai Yu
Remote Sens. 2025, 17(12), 2043; https://doi.org/10.3390/rs17122043 - 13 Jun 2025
Abstract
Arc scanning synthetic aperture radar (ArcSAR) can achieve high-resolution panoramic imaging and retrieve submillimeter-level deformation information. To monitor buildings in a city scenario, ArcSAR must be lightweight and cost-effective, with high resolution, mid-range coverage (around a hundred meters), and low power consumption. In this study, a novel high-resolution wide-beam single-chip millimeter-wave (mmwave) ArcSAR system, together with an imaging algorithm, is presented. First, to handle the non-uniform azimuth sampling caused by motor motion, a high-accuracy angular coder is used in the system design. The coder sends the radar a hardware trigger signal when rotated to a specific angle, so that uniform angular sampling is achieved despite the unstable rotation of the motor. Second, the maximum azimuth sampling angle that avoids aliasing is derived from the Nyquist theorem. This mathematical relation allows the proposed ArcSAR system to acquire data by setting the azimuth sampling angle interval. Third, the range cell migration (RCM) phenomenon is severe because mmwave radar has a wide azimuth beamwidth and a high frequency, and ArcSAR has a curved synthetic aperture. Therefore, the fourth-order RCM model based on the range-Doppler (RD) algorithm is reformulated in terms of a uniform azimuth angle to suit the system and implemented. The proposed system uses the TI 6843 module as the radar sensor, with an azimuth beamwidth of 64°. The performance of the system and the corresponding imaging algorithm are thoroughly analyzed and validated via simulations and real-data experiments. The output image covers a 360° and 180 m area at an azimuth resolution of 0.2°. The results show that the proposed system has good application prospects, and the design principles can support the improvement of current ArcSARs. Full article
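A Nyquist-style bound on the azimuth sampling angle can be illustrated as follows. This is the classic two-way criterion that the arc-length sample spacing satisfy d = r * dtheta <= lambda / (4 * sin(beamwidth / 2)); it may differ in detail from the paper's derivation, and the carrier frequency and rotation-arm length used below are assumed values, not figures from the paper:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def max_azimuth_step_deg(f_hz, beamwidth_deg, arm_m):
    """Largest azimuth angular step (degrees) keeping the arc-length
    sample spacing within the two-way Nyquist bound
    d = r * dtheta <= lambda / (4 * sin(beamwidth / 2))."""
    lam = C / f_hz
    d_max = lam / (4.0 * math.sin(math.radians(beamwidth_deg) / 2.0))
    return math.degrees(d_max / arm_m)

# assumed: 60 GHz band radar, 64 deg beamwidth, 0.25 m rotation arm
step = max_azimuth_step_deg(60e9, 64.0, 0.25)  # roughly half a degree
```

The wide 64° beam makes the bound tight, which is exactly why the hardware angular trigger matters: sampling must stay below this step even when the motor speed fluctuates.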

21 pages, 6270 KiB  
Article
Cross-Level Adaptive Feature Aggregation Network for Arbitrary-Oriented SAR Ship Detection
by Lu Qian, Junyi Hu, Haohao Ren, Jie Lin, Xu Luo, Lin Zou and Yun Zhou
Remote Sens. 2025, 17(10), 1770; https://doi.org/10.3390/rs17101770 - 19 May 2025
Abstract
The rapid progress of deep learning has significantly enhanced the development of ship detection using synthetic aperture radar (SAR). However, the diversity of ship sizes, arbitrary orientations, densely arranged ships, etc., have been hindering the improvement of SAR ship detection accuracy. In response to these challenges, this study introduces a new detection approach called a cross-level adaptive feature aggregation network (CLAFANet) to achieve arbitrary-oriented multi-scale SAR ship detection. Specifically, we first construct a hierarchical backbone network based on a residual architecture to extract multi-scale features of ship objects from large-scale SAR imagery. Considering the multi-scale nature of ship objects, we then resort to the idea of self-attention to develop a cross-level adaptive feature aggregation (CLAFA) mechanism, which can not only alleviate the semantic gap between cross-level features but also improve the feature representation capabilities of multi-scale ships. To better adapt to the arbitrary orientation of ship objects in real application scenarios, we put forward a frequency-selective phase-shifting coder (FSPSC) module for arbitrary-oriented SAR ship detection tasks, which is dedicated to mapping the rotation angle of the object bounding box to different phases and exploits frequency-selective phase-shifting to solve the periodic ambiguity problem of the rotated bounding box. Qualitative and quantitative experiments conducted on two public datasets demonstrate that the proposed CLAFANet achieves competitive performance compared to some state-of-the-art methods in arbitrary-oriented SAR ship detection. Full article
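The periodic-ambiguity problem that FSPSC addresses arises because a rotated box at 0° and at 180° is the same box, so naive angle regression has a discontinuous loss at the boundary. A common remedy, shown here as a generic sine/cosine coder rather than the paper's frequency-selective phase-shifting scheme, is to regress the doubled angle as a continuous 2-vector:

```python
import math

def encode_angle(theta_deg):
    """Map a box angle with 180-degree periodicity to a continuous
    2-vector by doubling the angle before taking cos/sin."""
    t = math.radians(theta_deg) * 2.0
    return math.cos(t), math.sin(t)

def decode_angle(c, s):
    """Invert the coding back to an angle in [0, 180)."""
    return math.degrees(math.atan2(s, c) / 2.0) % 180.0
```

Angles just below 180° and just above 0° map to nearby vectors, so the regression target no longer jumps at the period boundary.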

10 pages, 2448 KiB  
Article
Image Generation and Super-Resolution Reconstruction of Synthetic Aperture Radar Images Based on an Improved Single-Image Generative Adversarial Network
by Xuguang Yang, Lixia Nie, Yun Zhang and Ling Zhang
Information 2025, 16(5), 370; https://doi.org/10.3390/info16050370 - 30 Apr 2025
Abstract
This paper presents a novel method for the super-resolution reconstruction and generation of synthetic aperture radar (SAR) images with an improved single-image generative adversarial network (ISinGAN). Unlike traditional machine learning methods typically requiring large datasets, SinGAN needs only a single input image to extract internal structural details and generate high-quality samples. To improve this framework further, we introduced SinGAN with a self-attention module and incorporated noise specific to SAR images. These enhancements ensure that the generated images are more aligned with real-world SAR scenarios while also improving the robustness of the SinGAN framework. Experimental results demonstrate that ISinGAN significantly enhances SAR image resolution and target recognition performance. Full article
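The "noise specific to SAR images" mentioned above is typically multiplicative speckle. A minimal sketch of injecting unit-mean gamma speckle, the usual L-look intensity model, is shown here as an assumption about what such noise injection looks like, not as the authors' code:

```python
import numpy as np

def add_speckle(img, looks=4, seed=0):
    """Multiply an intensity image by gamma-distributed speckle with
    unit mean (shape = looks, scale = 1/looks), the standard L-look
    SAR intensity noise model."""
    rng = np.random.default_rng(seed)
    noise = rng.gamma(shape=looks, scale=1.0 / looks, size=img.shape)
    return img * noise
```

Because the noise is multiplicative with unit mean, brightness is preserved on average while local variance grows with intensity, matching the grainy texture a generator must learn to reproduce for realistic SAR samples.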
(This article belongs to the Section Artificial Intelligence)

20 pages, 6790 KiB  
Article
LD-Det: Lightweight Ship Target Detection Method in SAR Images via Dual Domain Feature Fusion
by Hang Yu, Bingzong Liu, Lei Wang and Teng Li
Remote Sens. 2025, 17(9), 1562; https://doi.org/10.3390/rs17091562 - 28 Apr 2025
Abstract
Ship detection technology represents a significant research focus within the application domain of synthetic aperture radar. Among all the detection methods, the deep learning method stands out for its high accuracy and high efficiency. However, large-scale deep learning models require substantial computing power and large equipment, which makes them unsuitable for real-time detection on edge platforms. Therefore, to achieve fast data transmission and low computational complexity, the design of lightweight computing models has become a research hotspot. To overcome the high complexity of existing deep learning models and balance efficiency against accuracy, this paper proposes a lightweight dual-domain feature fusion detection model (LD-Det) for ship target detection. The model comprises three effective modules: (1) a wavelet transform method for image compression and frequency-domain feature extraction; (2) a lightweight partial convolution module for channel feature extraction; and (3) an improved multidimensional attention module that assigns weights to features of different dimensions. Additionally, we propose a hybrid IoU loss function specifically designed to enhance the detection of small objects, improving localization accuracy and robustness. We then introduce these modules into the YOLOv8 detection algorithm for implementation. Experiments were designed to verify LD-Det's effectiveness. Compared with other models, LD-Det is not only lighter but also maintains the precision of ship target detection. The experimental results on the SSDD dataset demonstrate that the proposed LD-Det model improves precision (P) by 1.4 percentage points while reducing the number of model parameters by 20% compared to the baseline. LD-Det effectively balances lightweight efficiency and detection accuracy, making it highly advantageous for deployment on edge platforms compared to other models. Full article
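The wavelet-transform step in module (1) can be illustrated with a one-level 2-D Haar decomposition, which yields a quarter-size low-pass approximation (the compressed image) plus three frequency-detail bands. This is a generic sketch under the assumption of a Haar basis, not LD-Det's actual transform:

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar transform. Expects even height and width;
    returns quarter-size LL, LH, HL, HH subbands."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    ll = (a + b + c + d) / 4.0  # low-pass approximation
    lh = (a + b - c - d) / 4.0  # horizontal detail
    hl = (a - b + c - d) / 4.0  # vertical detail
    hh = (a - b - c + d) / 4.0  # diagonal detail
    return ll, lh, hl, hh
```

Feeding the LL band to later layers quarters the spatial cost, while the detail bands retain the edge information that small ship targets depend on.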

30 pages, 20114 KiB  
Article
Multi-Feature Lightweight DeeplabV3+ Network for Polarimetric SAR Image Classification with Attention Mechanism
by Junfei Shi, Shanshan Ji, Haiyan Jin, Yuanlin Zhang, Maoguo Gong and Weisi Lin
Remote Sens. 2025, 17(8), 1422; https://doi.org/10.3390/rs17081422 - 16 Apr 2025
Abstract
Polarimetric Synthetic Aperture Radar (PolSAR) is an advanced remote sensing technology that provides rich polarimetric information. Deep learning methods have proven to be an effective tool for PolSAR image classification. However, relying solely on source data input makes it challenging to effectively classify all land cover targets, especially heterogeneous targets with significant scattering variations, such as urban areas and forests. In addition, multiple features can provide complementary information, and feature selection is crucial for classification. To address these issues, we propose a novel attention mechanism-based multi-feature lightweight DeeplabV3+ network for PolSAR image classification. The proposed method integrates feature extraction, learning, selection, and classification into an end-to-end network framework. Initially, three kinds of complementary features are extracted to serve as inputs to the network: polarimetric original data, statistical and scattering features, and textural and contour features. Subsequently, a lightweight DeeplabV3+ network is designed to conduct multi-scale feature learning on the extracted multidimensional features. Finally, an attention mechanism-based feature selection module is integrated into the network model, adaptively learning weights for multi-scale features; this enhances discriminative features while suppressing redundant or confusing ones. Experiments are conducted on five real PolSAR data sets, and the results demonstrate that the proposed method achieves more precise boundaries and smoother regions than state-of-the-art algorithms. In this paper, we develop a novel multi-feature learning framework, achieving a fast and effective classification network for PolSAR images. Full article
(This article belongs to the Special Issue Remote Sensing Image Classification: Theory and Application)

20 pages, 3115 KiB  
Article
Global SAR Spectral Analysis of Intermediate Ocean Waves: Statistics and Derived Real Aperture Radar Modulation
by Kehan Li and Huimin Li
Remote Sens. 2025, 17(8), 1416; https://doi.org/10.3390/rs17081416 - 16 Apr 2025
Abstract
Spaceborne synthetic aperture radar (SAR) has been proven capable of observing the directional ocean wave spectrum across the global ocean. Most efforts focus on integrated wave parameters to characterize the imaged ocean wave properties. The newly proposed spectrum-based radar parameter, the mean cross-spectrum (MACS), is investigated using SAR image spectral properties of range-traveling waves at a wavelength of 20 m, based on Sentinel-1 wave mode acquisitions of high spatial resolution (5 m). The magnitude of MACS is documented relative to environmental conditions (wind speed and direction) in terms of its variation for two polarizations at two incidence angles. This parameter exhibits distinct upwind–downwind asymmetry and polarization ratio at the two incidence angles (23.8° and 36.8°). In addition, by comparing the SAR measurements with simulated MACS, we derive an improved real aperture radar (RAR) modulation transfer function. The results obtained in this study should help retrieve a more accurate ocean wave spectrum based on the improved RAR modulation. Full article
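A cross-spectrum of the kind MACS is built from can be computed from two co-registered images (e.g. sub-looks separated in time) as the product of one spectrum with the conjugate of the other. This is only the generic spectral step, sketched as an assumption; the paper's MACS averages such spectra over many Sentinel-1 wave-mode acquisitions:

```python
import numpy as np

def cross_spectrum(img1, img2):
    """Cross-spectrum of two co-registered image looks: F1 * conj(F2),
    centered with fftshift. Phase encodes wave propagation between looks."""
    F1 = np.fft.fft2(img1)
    F2 = np.fft.fft2(img2)
    return np.fft.fftshift(F1 * np.conj(F2))
```

For identical looks this reduces to the (real, non-negative) power spectrum; a time lag between looks puts the propagation information into the imaginary part, which is what makes cross-spectra useful for resolving wave direction ambiguity.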
(This article belongs to the Special Issue SAR Monitoring of Marine and Coastal Environments)

23 pages, 68704 KiB  
Article
Adaptive Barrage Jamming Against SAR Based on Prior Information and Scene Segmentation
by Zhengwei Guo, Longyuan Wang, Zhenchang Liu, Zewen Fu, Ning Li and Xuebo Zhang
Remote Sens. 2025, 17(7), 1303; https://doi.org/10.3390/rs17071303 - 5 Apr 2025
Abstract
Due to its easy implementation and strong jamming effect, barrage jamming against synthetic aperture radar (SAR) has received extensive attention in the field of electronic countermeasures. However, most barrage jamming methods still have limitations, such as uncontrollable jamming position and coverage, and high power requirements. To address these issues, an improved barrage jamming method is proposed in this paper. The proposed method fully exploits prior information about the region of interest (ROI) to realize precise jamming with controllable position, coverage, and power. The ROI is first divided into several sub-scenes according to the obtained prior information, and the signal is intercepted. Then the frequency response function of the jammer for each sub-scene is generated. This frequency response function, which consists of a position modulation function and a jamming coverage function, is decomposed into slow-time-dependent parts and slow-time-independent parts. The slow-time-independent parts are generated offline in advance, which guarantees the real-time performance of the proposed method. Finally, the intercepted signal is modulated by the frequency response function to generate the two-dimensional controllable jamming effect. Theoretical analysis and simulation results show that the proposed method can produce jamming effects with controllable position and coverage, and the utilization efficiency of jamming power is improved. Full article

25 pages, 11034 KiB  
Article
A Novel Deep Unfolding Network for Multi-Band SAR Sparse Imaging and Autofocusing
by Xiaopeng Li, Mengyang Zhan, Yiheng Liang, Yinwei Li, Gang Xu and Bingnan Wang
Remote Sens. 2025, 17(7), 1279; https://doi.org/10.3390/rs17071279 - 3 Apr 2025
Abstract
The sparse imaging network of synthetic aperture radar (SAR) is usually designed end to end and has a limited adaptability to radar systems of different bands. Meanwhile, the implementation of the sparse imaging algorithm depends on the sparsity of the target scene and usually adopts a fixed L1 regularization solution, which has a mediocre reconstruction effect on complex scenes. In this paper, a novel SAR imaging deep unfolding network based on approximate observation is proposed for multi-band SAR systems. Firstly, the approximate observation module is separated from the optimal solution network model and selected according to the multi-band radar echo. Secondly, to realize the SAR imaging of non-sparse scenes, Lp regularization is used to constrain the uncertain transform domain of the target scene. The adaptive optimization of Lp parameters is realized by using a data-driven approach. Furthermore, considering that phase errors may be introduced in the real SAR system during echo acquisition, an error estimation module is added to the network to estimate and compensate for the phase errors. Finally, the results from both simulated and real data experiments demonstrate that the proposed method exhibits outstanding performance under 0.22 THz and 9.6 GHz echo data: high-resolution SAR focused images are achieved under four different sparsity conditions of 20%, 40%, 60%, and 80%. These results fully validate the strong adaptability and robustness of the proposed method to diverse SAR system configurations and complex target scenarios. Full article
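Deep unfolding networks of this kind unroll an iterative sparse solver into network layers. The fixed-L1 baseline that the paper's learnable Lp constraint generalizes is the ISTA shrinkage iteration, sketched here with a generic linear operator `A` standing in for the SAR observation model (an illustration of the unrolled algorithm, not the paper's network):

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t * ||x||_1 -- the shrinkage step that a
    deep unfolding network turns into a learnable layer."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, lam=0.1, step=None, n_iter=200):
    """Basic ISTA for min_x 0.5 * ||Ax - y||^2 + lam * ||x||_1.
    Each loop body corresponds to one layer of the unfolded network."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - step * A.T @ (A @ x - y), step * lam)
    return x
```

Unfolding replaces the hand-set `lam` and `step` (and, in the paper, the regularization exponent p) with parameters learned from data, which is what lets the network handle non-sparse scenes where a fixed L1 penalty performs poorly.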
(This article belongs to the Special Issue Microwave Remote Sensing for Object Detection (2nd Edition))

21 pages, 3926 KiB  
Article
S4Det: Breadth and Accurate Sine Single-Stage Ship Detection for Remote Sense SAR Imagery
by Mingjin Zhang, Yingfeng Zhu, Longyi Li, Jie Guo, Zhengkun Liu and Yunsong Li
Remote Sens. 2025, 17(5), 900; https://doi.org/10.3390/rs17050900 - 4 Mar 2025
Abstract
Synthetic Aperture Radar (SAR) is a remote sensing technology that can realize all-weather and all-day monitoring, and it is widely used in ocean ship monitoring tasks. Recently, many oriented detectors have been applied to ship detection in SAR images. However, these methods often find it difficult to balance detection accuracy and speed, and the noise around targets in inshore SAR scenes degrades detection performance. In addition, the rotation representation still suffers from the problem of boundary discontinuity. To address these issues, we propose S4Det, a Sinusoidal Single-Stage SAR image detection method that enables real-time oriented ship target detection. Two key mechanisms were designed to address inshore scene processing and angle regression challenges. Specifically, a Breadth Search Compensation Module (BSCM) resolves the limited detection capability observed in inshore scenarios. Neural Discrete Codebook Learning is strategically integrated with Multi-scale Large Kernel Attention, capturing context information around the target and mitigating the information loss inherent in dilated convolutions. To tackle the boundary discontinuity arising from the periodic nature of the target regression angle, we developed a Sine Fourier Transform Coding (SFTC) technique. The angle is represented using diverse sine components, and the discrete Fourier transform is applied to convert these periodic components to the frequency domain for processing. Finally, our S4Det achieved 92.2% mAP and 31+ FPS on the RSSDD dataset using an RTX A5000 GPU, outperforming prevalent mainstream oriented detection networks. The robustness of the proposed S4Det was also verified on another public RSDD dataset. Full article
(This article belongs to the Section AI Remote Sensing)

17 pages, 904 KiB  
Article
Apple Detection via Near-Field MIMO-SAR Imaging: A Multi-Scale and Context-Aware Approach
by Yuanping Shi, Yanheng Ma and Liang Geng
Sensors 2025, 25(5), 1536; https://doi.org/10.3390/s25051536 - 1 Mar 2025
Abstract
Accurate fruit detection is of great importance for yield assessment, timely harvesting, and orchard management strategy optimization in precision agriculture. Traditional optical imaging methods are limited by lighting and meteorological conditions, making it difficult to obtain stable, high-quality data. Therefore, this study utilizes near-field millimeter-wave MIMO-SAR (Multiple Input Multiple Output Synthetic Aperture Radar) technology, which is capable of all-day and all-weather imaging, to perform high-precision detection of apple targets in orchards. This paper first constructs a near-field millimeter-wave MIMO-SAR imaging system and performs multi-angle imaging on real fruit tree samples, obtaining about 150 sets of SAR-optical paired data, covering approximately 2000 accurately annotated apple targets. Addressing challenges such as weak scattering, low texture contrast, and complex backgrounds in SAR images, we propose an innovative detection framework integrating Dynamic Spatial Pyramid Pooling (DSPP), Recursive Feature Fusion Network (RFN), and Context-Aware Feature Enhancement (CAFE) modules. DSPP employs a learnable adaptive mechanism to dynamically adjust multi-scale feature representations, enhancing sensitivity to apple targets of varying sizes and distributions; RFN uses a multi-round iterative feature fusion strategy to gradually refine semantic consistency and stability, improving the robustness of feature representation under weak texture and high noise scenarios; and the CAFE module, based on attention mechanisms, explicitly models global and local associations, fully utilizing the scene context in texture-poor SAR conditions to enhance the discriminability of apple targets. Experimental results show that the proposed method achieves significant improvements in average precision (AP), recall rate, and F1 score on the constructed near-field millimeter-wave SAR apple dataset compared to various classic and mainstream detectors. Ablation studies confirm the synergistic effect of DSPP, RFN, and CAFE. Qualitative analysis demonstrates that the detection framework proposed in this paper can still stably locate apple targets even under conditions of leaf occlusion, complex backgrounds, and weak scattering. This research provides a beneficial reference and technical basis for using SAR data in fruit detection and yield estimation in precision agriculture. Full article
(This article belongs to the Section Smart Agriculture)