Search Results (352)

Search Parameters:
Keywords = aperture interference

13 pages, 835 KB  
Article
Enhanced Nanoparticle Detection Using Momentum-Space Filtering for Interferometric Scattering Microscopy (iSCAT)
by Xiang Zhang and Yatao Yang
Photonics 2025, 12(10), 945; https://doi.org/10.3390/photonics12100945 - 23 Sep 2025
Viewed by 151
Abstract
Interferometric scattering microscopy (iSCAT) is a powerful tool for single-particle detection. However, its detection sensitivity is significantly limited by high-frequency noise. In this paper, we propose a novel method that leverages frequency-component analysis in the Fourier domain to enhance interference patterns and thereby improve detection accuracy. The bright–dark ring feature in momentum space is effectively restored by a combined filter that suppresses high-frequency noise and compensates for aperture attenuation, improving the structural similarity index measure from 0.73 to 0.98. We validate this method on gold nanoparticle samples. The results demonstrate its great potential to advance single-particle tracking by enhancing background suppression in iSCAT applications. Full article
(This article belongs to the Special Issue Research, Development and Application of Raman Scattering Technology)
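
As a rough illustration of the momentum-space filtering idea described in the abstract, the sketch below applies an FFT to an image, suppresses high spatial frequencies, and re-boosts an annular band to mimic compensation of aperture attenuation, then scores the result with SSIM. The cutoff radii, ring gain, and synthetic images are illustrative assumptions, not parameters from the paper.

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

def momentum_space_filter(frame, r_lowpass=0.35, ring=(0.10, 0.30), ring_gain=1.8):
    """Illustrative Fourier-domain filter: cut high-frequency noise and re-boost
    an annular (bright-dark ring) band assumed to be attenuated by the aperture."""
    ny, nx = frame.shape
    fy = np.fft.fftfreq(ny)[:, None]
    fx = np.fft.fftfreq(nx)[None, :]
    rho = np.hypot(fx, fy)                         # normalized spatial frequency
    mask = (rho <= r_lowpass).astype(float)        # low-pass: remove high-frequency noise
    mask[(rho >= ring[0]) & (rho <= ring[1])] *= ring_gain  # boost the ring band
    return np.real(np.fft.ifft2(np.fft.fft2(frame) * mask))

# Toy usage with synthetic data standing in for an iSCAT frame and its reference.
rng = np.random.default_rng(0)
reference = rng.standard_normal((256, 256))
restored = momentum_space_filter(reference + 0.5 * rng.standard_normal((256, 256)))
print("SSIM:", ssim(reference, restored, data_range=float(restored.max() - restored.min())))
```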

24 pages, 12464 KB  
Article
Hierarchical Frequency-Guided Knowledge Reconstruction for SAR Incremental Target Detection
by Yu Tian, Zongyong Cui, Zheng Zhou and Zongjie Cao
Remote Sens. 2025, 17(18), 3214; https://doi.org/10.3390/rs17183214 - 17 Sep 2025
Viewed by 241
Abstract
Synthetic Aperture Radar (SAR) incremental target detection faces challenges from the limitations of incremental learning frameworks and the distinctive properties of SAR imagery. The limited spatial representation of targets, combined with strong background interference and fluctuating scattering characteristics, leads to unstable feature learning when new classes are introduced. These factors exacerbate representation mismatches between existing and incremental tasks, resulting in significant degradation in detection performance. To address these challenges, we propose a novel incremental learning framework featuring Hierarchical Frequency-guided Knowledge Reconstruction (HFKR). HFKR leverages wavelet-based frequency decomposition and cross-domain feature reconstruction to enhance consistency between global and detailed features throughout the incremental learning process. Specifically, we analyze how representation mismatch manifests in feature space and how it affects detection accuracy, and we investigate the correlation between hierarchical semantic features and frequency-domain components. Based on these insights, HFKR is embedded within the feature transfer phase, where frequency-guided decomposition and reconstruction facilitate seamless integration of new and old task features, thereby maintaining model stability across updates. Extensive experiments on two benchmark SAR datasets, MSAR and SARAIRcraft, demonstrate that our method delivers superior performance compared to existing incremental detection approaches. Furthermore, its robustness in multi-step incremental scenarios highlights the potential of HFKR for broader applications in SAR image analysis. Full article
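
To make the wavelet-based frequency decomposition step concrete, here is a minimal sketch using PyWavelets: a feature map is split into a low-frequency approximation band and high-frequency detail bands, which could then be recombined after per-band processing. The wavelet choice and the plain round-trip shown here are assumptions for illustration; this is not the authors' HFKR implementation.

```python
import numpy as np
import pywt

def frequency_decompose(feature_map, wavelet="haar"):
    """Single-level 2-D DWT: low-frequency approximation plus three detail bands."""
    cA, details = pywt.dwt2(feature_map, wavelet)
    return cA, details

def frequency_reconstruct(low, details, wavelet="haar"):
    """Recombine the bands (e.g. after mixing old-task and new-task features per band)."""
    return pywt.idwt2((low, details), wavelet)

feat = np.random.rand(64, 64).astype(np.float32)
low, details = frequency_decompose(feat)
rebuilt = frequency_reconstruct(low, details)
print(np.allclose(feat, rebuilt, atol=1e-5))  # lossless round-trip without band edits
```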

22 pages, 8527 KB  
Article
MCEM: Multi-Cue Fusion with Clutter Invariant Learning for Real-Time SAR Ship Detection
by Haowei Chen, Manman He, Zhen Yang and Lixin Gan
Sensors 2025, 25(18), 5736; https://doi.org/10.3390/s25185736 - 14 Sep 2025
Viewed by 416
Abstract
Small-vessel detection in Synthetic Aperture Radar (SAR) imagery constitutes a critical capability for maritime surveillance systems. However, prevailing methodologies such as sea-clutter statistical models and deep learning-based detectors face three fundamental limitations: weak target scattering signatures, complex sea clutter interference, and computational inefficiency. These challenges create inherent trade-offs between noise suppression and feature preservation while hindering high-resolution representation learning. To address these constraints, we propose the Multi-cue Efficient Maritime detector (MCEM), an anchor-free framework integrating three synergistic components: a Feature Extraction Module (FEM) with scale-adaptive convolutions for enhanced signature representation; a Feature Fusion Module (F2M) decoupling target-background ambiguities; and a Detection Head Module (DHM) optimizing the accuracy-efficiency balance. Comprehensive evaluations demonstrate MCEM’s state-of-the-art performance: it achieves 45.1% APS on HRSID (+2.3 pp over YOLOv8) and 77.7% APL on SSDD (+13.9 pp over the same baseline), two challenging high-clutter SAR datasets. The framework enables robust maritime surveillance in complex oceanic conditions, particularly excelling in small-target detection amidst high clutter. Full article
(This article belongs to the Section Sensing and Imaging)

16 pages, 6539 KB  
Article
A High-Precision Ionospheric Channel Estimation Method Based on Oblique Projection and Double-Space Decomposition
by Zhengkai Wei, Baiyang Guo, Zhihui Li and Qingsong Zhou
Sensors 2025, 25(18), 5727; https://doi.org/10.3390/s25185727 - 14 Sep 2025
Viewed by 580
Abstract
Accurate ionospheric channel estimation is of great significance for acquiring ionospheric structure, correcting errors in remote sensing data, high-precision Synthetic Aperture Radar (SAR) imaging, over-the-horizon (OTH) detection, and establishing stable communication links. Traditional super-resolution channel estimation algorithms face challenges with multipath correlation and noise interference when estimating ionospheric channel information. Meanwhile, some super-resolution algorithms struggle to meet real-time measurement requirements due to their high computational complexity. In this paper, we propose the Cross-correlation Oblique Projection Pursuit (CC-OPMP) algorithm, which, within a greedy framework, combines an anti-interference correlation-based atom-selection strategy with a dual-space multipath-separation mechanism to effectively suppress noise and separate neighboring multipath components. Simulations demonstrate that the CC-OPMP algorithm outperforms competing algorithms in both channel estimation accuracy and computational efficiency. Full article
(This article belongs to the Section Intelligent Sensors)
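
For readers unfamiliar with greedy sparse channel estimation, the following sketch shows the generic pattern the abstract builds on: pick the delay-dictionary atom most correlated with the residual, refit the selected atoms by least squares, and repeat. The dictionary, path count, and noise level are made-up toy values; the authors' CC-OPMP atom-selection metric and dual-space separation are not reproduced here.

```python
import numpy as np

def greedy_channel_estimate(y, dictionary, n_paths):
    """Generic greedy sparse estimate of a multipath channel: repeatedly pick the
    dictionary atom (delay hypothesis) most correlated with the residual, then
    re-fit all selected atoms by least squares (an OMP-style loop)."""
    residual = y.copy()
    support = []
    coeffs = np.zeros(dictionary.shape[1], dtype=complex)
    for _ in range(n_paths):
        correlations = np.abs(dictionary.conj().T @ residual)
        correlations[support] = 0.0            # do not reselect an atom
        support.append(int(np.argmax(correlations)))
        sub = dictionary[:, support]
        amp, *_ = np.linalg.lstsq(sub, y, rcond=None)
        residual = y - sub @ amp
    coeffs[support] = amp
    return coeffs

# Toy usage: a random delay dictionary and a 3-path channel.
rng = np.random.default_rng(1)
D = rng.standard_normal((128, 64)) + 1j * rng.standard_normal((128, 64))
D /= np.linalg.norm(D, axis=0)
true = np.zeros(64, dtype=complex); true[[5, 20, 41]] = [1.0, 0.6, 0.3]
y = D @ true + 0.01 * rng.standard_normal(128)
print(np.nonzero(greedy_channel_estimate(y, D, 3))[0])  # expected support: [5 20 41]
```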

10 pages, 1943 KB  
Article
Crosstalk Simulation of Magnets for Siam Photon Source II Storage Ring
by Warissara Tangyotkhajorn, Thongchai Leetha, Supachai Prawanta and Prapaiwan Sunwong
Particles 2025, 8(3), 80; https://doi.org/10.3390/particles8030080 - 13 Sep 2025
Viewed by 248
Abstract
During the detailed design of magnets for the storage ring of Siam Photon Source II (SPS-II), the influence of magnetic crosstalk between adjacent magnets in the compact Double Triple Bend Achromat (DTBA) lattice was investigated. Using Opera-3D magnetostatic simulation, six magnet pairs were analyzed to investigate the changes in the magnetic field distribution along the electron trajectory and the integrated magnetic field within each magnet aperture. The study employed polynomial and Fourier analyses to calculate multipole field components. Results indicate that magnetic crosstalk affects the field distribution in the region between magnets, particularly for the defocusing quadrupole and dipole magnet pair (QD2-D01) and the focusing quadrupole and octupole magnet pair (QF42-OF1), which have pole-to-pole distances of 153.37 mm and 116.45 mm, respectively. Although these separations exceed the estimated fringe field regions, deviations of up to 1% in the main field components were observed. Notably, even an unpowered neighboring magnet contributes to magnetic field distortion due to the modified magnetic flux distribution. Crosstalk effects on the higher-order multipole fields are mostly within the acceptable limit, except for the extra quadrupole field from QD2 found in the dipole D01 magnet. This study highlights the effects of magnetic interference in a tightly packed lattice and underscores the need to include complete multipole field data, with crosstalk taken into account, in the SPS-II lattice model in order to ensure accurate beam dynamics simulations and to predict operating current adjustments for machine commissioning. Full article
(This article belongs to the Special Issue Generation and Application of High-Power Radiation Sources 2025)
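
The polynomial and Fourier analyses used to extract multipole components can be sketched briefly: sample the complex field B_y + iB_x on a reference circle inside the aperture and take a discrete Fourier transform, so that harmonic n gives the 2(n+1)-pole content at that radius. The field model and coefficient values below are illustrative, not SPS-II data.

```python
import numpy as np

def multipole_at_reference_radius(field_on_circle, n_max=6):
    """Fourier analysis of the complex field B_y + iB_x sampled on a reference
    circle: harmonic n is the 2(n+1)-pole content evaluated at that radius."""
    samples = np.asarray(field_on_circle, dtype=complex)
    coeffs = np.fft.fft(samples) / samples.size
    return coeffs[:n_max + 1]   # index 0: dipole, 1: quadrupole, 2: sextupole, ...

# Synthetic example: a quadrupole field of 1.0 T at the reference radius with a
# 1% sextupole error term (values are illustrative, not SPS-II numbers).
theta = np.linspace(0.0, 2 * np.pi, 256, endpoint=False)
u = np.exp(1j * theta)                   # points on the reference circle, normalized
field = 1.0 * u + 0.01 * u**2
print(np.round(np.abs(multipole_at_reference_radius(field)), 4))  # [0. 1. 0.01 0 ...]
```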

33 pages, 10857 KB  
Article
A Damage-Based Fully Coupled DFN Study of Fracture-Driven Interactions in Zipper Fracturing for Shale Gas Production
by Fushen Liu, Yang Mou, Fenggang Wen, Zhiguang Yao, Xinzheng Yi, Rui Xu and Nanlin Zhang
Energies 2025, 18(17), 4722; https://doi.org/10.3390/en18174722 - 4 Sep 2025
Viewed by 751
Abstract
As a significant energy source enabling the global energy transition, efficient shale gas development is critical for diversifying supplies and reducing carbon emissions. Zipper fracturing is widely used to enhance the stimulated reservoir volume (SRV) by generating complex fracture networks in shale reservoirs. However, recent trends of reduced well spacing and increased injection intensity have significantly intensified interwell interference, particularly fracture-driven interactions (FDIs), leading to early production decline and well integrity issues. This study develops a fully coupled hydro–mechanical–damage (HMD) numerical model incorporating an explicit discrete fracture network (DFN), the opening and closure of fractures, and an aperture–permeability relationship to capture the nonlinear mechanical behavior of natural fractures and their role in FDIs. After model validation, sensitivity analyses are conducted. Results show that when the horizontal differential stress exceeds 12 MPa, fractures tend to propagate as single dominant planes due to stress concentration, increasing the risk of FDIs and reducing the effective SRV. Increasing well spacing from 60 m to 110 m delays or eliminates FDIs while significantly improving reservoir stimulation. The fracture approach angle governs the interaction mechanisms between hydraulic and natural fractures, influencing the deflection and branching behavior of primary fractures. Injection rate exerts a dual influence on fracture extension and FDI risk, requiring an optimized balance between stimulation efficiency and interference control. This work enriches the multi-physics coupling theory of FDIs during fracturing and supports better fracturing design and optimization in shale gas production. Full article

36 pages, 25793 KB  
Article
DATNet: Dynamic Adaptive Transformer Network for SAR Image Denoising
by Yan Shen, Yazhou Chen, Yuming Wang, Liyun Ma and Xiaolu Zhang
Remote Sens. 2025, 17(17), 3031; https://doi.org/10.3390/rs17173031 - 1 Sep 2025
Viewed by 935
Abstract
To address the detail blurring and structural distortion caused by speckle noise, additive white noise, and hybrid noise in synthetic aperture radar (SAR) images, this paper proposes a Dynamic Adaptive Transformer Network (DAT-Net) for SAR image denoising that integrates a dynamic gated attention module and a frequency-domain multi-expert enhancement module. The proposed model leverages a multi-scale encoder–decoder framework, combining local convolutional feature extraction with global self-attention mechanisms to transcend the limitations of conventional approaches restricted to single noise types, thereby achieving adaptive suppression of multi-source noise contamination. Key innovations comprise the following: (1) a Dynamic Gated Attention Module (DGAM) employing dual-path feature embedding and dynamic thresholding mechanisms to precisely characterize the spatial heterogeneity of noise; (2) a Frequency-domain Multi-Expert Enhancement (FMEE) module utilizing Fourier decomposition and expert network ensembles for collaborative optimization of high-frequency and low-frequency components; (3) lightweight Multi-scale Convolution Blocks (MCB) enhancing cross-scale feature fusion capabilities. Experimental results demonstrate that DAT-Net achieves quantifiable performance enhancement in both simulated and real SAR environments. Compared with other denoising algorithms, the proposed methodology exhibits superior noise suppression across diverse noise scenarios while preserving intrinsic textural features. Full article

27 pages, 1157 KB  
Article
An Ultra-Lightweight and High-Precision Underwater Object Detection Algorithm for SAS Images
by Deyin Xu, Yisong He, Jiahui Su, Lu Qiu, Lixiong Lin, Jiachun Zheng and Zhiping Xu
Remote Sens. 2025, 17(17), 3027; https://doi.org/10.3390/rs17173027 - 1 Sep 2025
Viewed by 893
Abstract
Underwater Object Detection (UOD) based on Synthetic Aperture Sonar (SAS) images is one of the core tasks of underwater intelligent perception systems. However, the existing UOD methods suffer from excessive model redundancy, high computational demands, and severe image quality degradation due to noise. To mitigate these issues, this paper proposes an ultra-lightweight and high-precision underwater object detection method for SAS images. Based on a single-stage detection framework, four efficient and representative lightweight modules are developed, focusing on three key stages: feature extraction, feature fusion, and feature enhancement. For feature extraction, the Dilated-Attention Aggregation Feature Module (DAAFM) is introduced, which leverages a multi-scale Dilated Attention mechanism for strengthening the model’s capability to perceive key information, thereby improving the expressiveness and spatial coverage of extracted features. For feature fusion, the Channel–Spatial Parallel Attention with Gated Enhancement (CSPA-Gate) module is proposed, which integrates channel–spatial parallel modeling and gated enhancement to achieve effective fusion of multi-level semantic features and dynamic response to salient regions. In terms of feature enhancement, the Spatial Gated Channel Attention Module (SGCAM) is introduced to strengthen the model’s ability to discriminate the importance of feature channels through spatial gating, thereby improving robustness to complex background interference. Furthermore, the Context-Aware Feature Enhancement Module (CAFEM) is designed to guide feature learning using contextual structural information, enhancing semantic consistency and feature stability from a global perspective. To alleviate the challenge of limited sample size of real sonar images, a diffusion generative model is employed to synthesize a set of pseudo-sonar images, which are then combined with the real sonar dataset to construct an augmented training set. A two-stage training strategy is proposed: the model is first trained on the real dataset and then fine-tuned on the synthetic dataset to enhance generalization and improve detection robustness. The SCTD dataset results confirm that the proposed technique achieves better precision than the baseline model with only 10% of its parameter size. Notably, on a hybrid dataset, the proposed method surpasses Faster R-CNN by 10.3% in mAP50 while using only 9% of its parameters. Full article
(This article belongs to the Special Issue Underwater Remote Sensing: Status, New Challenges and Opportunities)

20 pages, 7901 KB  
Article
Millimeter-Wave Interferometric Synthetic Aperture Radiometer Imaging via Non-Local Similarity Learning
by Jin Yang, Zhixiang Cao, Qingbo Li and Yuehua Li
Electronics 2025, 14(17), 3452; https://doi.org/10.3390/electronics14173452 - 29 Aug 2025
Viewed by 406
Abstract
In this study, we propose a novel pixel-level non-local similarity (PNS)-based reconstruction method for millimeter-wave interferometric synthetic aperture radiometer (InSAR) imaging. Unlike traditional compressed sensing (CS) methods, which rely on predefined sparse transforms and often introduce artifacts, our approach leverages structural redundancies in InSAR images through an enhanced sparse representation model with dynamically filtered coefficients. This design simultaneously preserves fine details and suppresses noise interference. Furthermore, an iterative refinement mechanism incorporates raw sampled data fidelity constraints, enhancing reconstruction accuracy. Simulations and physical experiments demonstrate that the proposed InSAR-PNS method significantly outperforms conventional techniques: it achieves a 1.93 dB average peak signal-to-noise ratio (PSNR) improvement over CS-based reconstruction while operating at reduced sampling ratios compared to Nyquist-rate fast Fourier transform (FFT) methods. The framework provides a practical and efficient solution for high-fidelity millimeter-wave InSAR imaging under sub-Nyquist sampling conditions. Full article
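
For reference, the PSNR figure quoted above follows the standard definition, sketched below; the peak value and the toy arrays are placeholders rather than anything from the reported experiments.

```python
import numpy as np

def psnr(reference, reconstruction, peak=1.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE)."""
    err = np.asarray(reference, float) - np.asarray(reconstruction, float)
    return 10.0 * np.log10(peak ** 2 / np.mean(err ** 2))

rng = np.random.default_rng(0)
img = rng.random((128, 128))
noisy = np.clip(img + 0.02 * rng.standard_normal(img.shape), 0.0, 1.0)
print(psnr(img, noisy))  # roughly 34 dB for 2% additive noise
```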

31 pages, 3129 KB  
Review
A Review on Gas Pipeline Leak Detection: Acoustic-Based, OGI-Based, and Multimodal Fusion Methods
by Yankun Gong, Chao Bao, Zhengxi He, Yifan Jian, Xiaoye Wang, Haineng Huang and Xintai Song
Information 2025, 16(9), 731; https://doi.org/10.3390/info16090731 - 25 Aug 2025
Cited by 1 | Viewed by 1040
Abstract
Pipelines play a vital role in material transportation within industrial settings. This review synthesizes detection technologies for early-stage small gas leaks from pipelines in the industrial sector, with a focus on acoustic-based methods, optical gas imaging (OGI), and multimodal fusion approaches. It encompasses detection principles, inherent challenges, mitigation strategies, and the state of the art (SOTA). Small leaks refer to low flow leakage originating from defects with apertures at millimeter or submillimeter scales, posing significant detection difficulties. Acoustic detection leverages the acoustic wave signals generated by gas leaks for non-contact monitoring, offering advantages such as rapid response and broad coverage. However, its susceptibility to environmental noise interference often triggers false alarms. This limitation can be mitigated through time-frequency analysis, multi-sensor fusion, and deep-learning algorithms—effectively enhancing leak signals, suppressing background noise, and thereby improving the system’s detection robustness and accuracy. OGI utilizes infrared imaging technology to visualize leakage gas and is applicable to the detection of various polar gases. Its primary limitations include low image resolution, low contrast, and interference from complex backgrounds. Mitigation techniques involve background subtraction, optical flow estimation, fully convolutional neural networks (FCNNs), and vision transformers (ViTs), which enhance image contrast and extract multi-scale features to boost detection precision. Multimodal fusion technology integrates data from diverse sensors, such as acoustic and optical devices. Key challenges lie in achieving spatiotemporal synchronization across multiple sensors and effectively fusing heterogeneous data streams. Current methodologies primarily utilize decision-level fusion and feature-level fusion techniques. Decision-level fusion offers high flexibility and ease of implementation but lacks inter-feature interaction; it is less effective than feature-level fusion when correlations exist between heterogeneous features. Feature-level fusion amalgamates data from different modalities during the feature extraction phase, generating a unified cross-modal representation that effectively resolves inter-modal heterogeneity. In conclusion, we posit that multimodal fusion holds significant potential for further enhancing detection accuracy beyond the capabilities of existing single-modality technologies and is poised to become a major focus of future research in this domain. Full article

23 pages, 6449 KB  
Article
Development of the Stitching—Oblique Incidence Interferometry Measurement Method for the Surface Flatness of Large-Scale and Elongated Ceramic Parts
by Shuai Wang, Zepei Zheng, Wule Zhu, Bosong Duan, Zhi-Zheng Ju and Bingfeng Ju
Sensors 2025, 25(17), 5270; https://doi.org/10.3390/s25175270 - 24 Aug 2025
Viewed by 860
Abstract
With the increasing demand for high-performance ceramic guideways in precision industries, accurate flatness measurement of large-scale, rough ceramic surfaces remains challenging. This paper proposes a novel method combining oblique-incidence laser interferometry and sub-aperture stitching to overcome the limitations of conventional techniques. The oblique-incidence approach enhances interference signal strength on low-reflectivity surfaces, while stitching integrates high-resolution sub-aperture measurements for full-surface characterization. Numerical simulations validated the method’s feasibility, showing consistent reconstruction of surfaces with flatness values of 1–20 μm. Experimental validation on a 1050 mm × 130 mm SiC guideway achieved a full-surface measurement with a PV of 2.76 μm and an RMS of 0.59 μm, demonstrating high agreement with traditional methods in polished regions. The technique enabled rapid monitoring of a 39 h lapping process, during which the flatness converged from 13.97 μm to 2.76 μm, proving its efficacy for in-process feedback in ultra-precision manufacturing. Full article
(This article belongs to the Section Physical Sensors)
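
A compact sketch of the two quantities reported above: converting an unwrapped oblique-incidence phase map to height (the fringe sensitivity scales as 1/cos θ) and computing PV and RMS flatness after best-fit-plane removal. The wavelength, incidence angle, and synthetic phase map are assumed values, not those of the paper's setup.

```python
import numpy as np

def height_from_phase(phase, wavelength_um=0.6328, incidence_deg=75.0):
    """Convert unwrapped interferogram phase to surface height for oblique
    incidence: h = lambda * phi / (4 * pi * cos(theta))."""
    theta = np.deg2rad(incidence_deg)
    return phase * wavelength_um / (4.0 * np.pi * np.cos(theta))

def flatness_pv_rms(height):
    """Peak-to-valley and RMS flatness after removing the best-fit plane."""
    ny, nx = height.shape
    y, x = np.mgrid[0:ny, 0:nx]
    A = np.column_stack([x.ravel(), y.ravel(), np.ones(x.size)])
    plane, *_ = np.linalg.lstsq(A, height.ravel(), rcond=None)
    residual = height - (A @ plane).reshape(ny, nx)
    return residual.max() - residual.min(), residual.std()

# Toy usage with a synthetic phase map (not measurement data).
phase = np.fromfunction(lambda i, j: 1e-4 * (i - 64) ** 2, (128, 128))
pv, rms = flatness_pv_rms(height_from_phase(phase))
print(f"PV = {pv:.3f} um, RMS = {rms:.3f} um")
```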

13 pages, 9516 KB  
Article
Rapid Full-Field Surface Topography Measurement of Large-Scale Wafers Using Interferometric Imaging
by Ruifang Ye, Jiarui Zeng, Heyan Zhang, Yujie Su and Huihui Li
Photonics 2025, 12(9), 835; https://doi.org/10.3390/photonics12090835 - 22 Aug 2025
Viewed by 497
Abstract
Rapid full-field surface topography measurement for large-scale wafers remains challenging due to limitations in speed, system complexity, and scalability. This work presents an interferometric system based on thin-film interference for high-precision wafer profiling. An optical flat serves as the reference surface, forming a parallel air-gap structure with the wafer under test. A large-aperture collimated beam is introduced via an off-axis parabolic mirror to generate high-contrast interference fringes across the entire field of view. Once the wafer is fully illuminated, topographic information is directly extracted from the fringe pattern. Comparative measurements with a commercial interferometer show relative deviations below 3% in bow and warp, confirming the system’s accuracy and stability. With its simple optical layout, low cost, and robust performance, the proposed method shows strong potential for industrial applications in wafer inspection and online surface monitoring. Full article
(This article belongs to the Special Issue Advances in Interferometric Optics and Applications)

23 pages, 6924 KB  
Article
A Dynamic Multi-Scale Feature Fusion Network for Enhanced SAR Ship Detection
by Rui Cao and Jianghua Sui
Sensors 2025, 25(16), 5194; https://doi.org/10.3390/s25165194 - 21 Aug 2025
Viewed by 782
Abstract
This study aims to develop an enhanced YOLO algorithm to improve the ship detection performance of synthetic aperture radar (SAR) in complex marine environments. Current SAR ship detection methods face numerous challenges in complex sea conditions, including environmental interference, false detection, and multi-scale changes in detection targets. To address these issues, this study adopts a technical solution that combines multi-level feature fusion with a dynamic detection mechanism. First, a cross-stage partial dynamic channel transformer module (CSP_DTB) was designed, which combines the transformer architecture with a convolutional neural network to replace the last two C3k2 layers in the YOLOv11n main network, thereby enhancing the model’s feature extraction capabilities. Second, a general dynamic feature pyramid network (RepGFPN) was introduced to reconstruct the neck network architecture, enabling more efficient multi-scale feature fusion and information propagation. Additionally, a lightweight dynamic decoupled dual-alignment head (DYDDH) was constructed to enhance the collaborative performance of localization and classification tasks through task-specific feature decoupling. Experimental results show that the proposed DRGD-YOLO algorithm achieves significant performance improvements. On the HRSID dataset, the algorithm achieves an average precision (mAP50) of 93.1% at an IoU threshold of 0.50 and an mAP50–95 of 69.2% over the IoU threshold range of 0.50–0.95. Compared to the baseline YOLOv11n algorithm, the proposed method improves mAP50 and mAP50–95 by 3.3% and 4.6%, respectively. The proposed DRGD-YOLO algorithm not only significantly improves the accuracy and robustness of synthetic aperture radar (SAR) ship detection but also demonstrates broad application potential in fields such as maritime surveillance, fisheries management, and maritime safety monitoring, providing technical support for the development of intelligent marine monitoring technology. Full article
(This article belongs to the Section Navigation and Positioning)
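
For context on the reported mAP50 and mAP50-95 values: a predicted box counts as a true positive when its intersection-over-union (IoU) with a ground-truth box exceeds the threshold (0.50, or a sweep from 0.50 to 0.95). A minimal IoU helper is sketched below; it is the standard definition, not the authors' evaluation code.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes; a detection counts
    as a true positive at mAP50 when IoU with a ground-truth box is >= 0.50."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ~= 0.143, below the 0.50 threshold
```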

22 pages, 76137 KB  
Article
CS-FSDet: A Few-Shot SAR Target Detection Method for Cross-Sensor Scenarios
by Changzhi Liu, Yibin He, Xiuhua Zhang, Yanwei Wang, Zhenyu Dong and Hanyu Hong
Remote Sens. 2025, 17(16), 2841; https://doi.org/10.3390/rs17162841 - 15 Aug 2025
Cited by 1 | Viewed by 634
Abstract
Synthetic Aperture Radar (SAR) plays a pivotal role in remote-sensing target detection. However, domain shift caused by distribution discrepancies across sensors, coupled with the scarcity of target-domain samples, severely restricts the generalization and practical performance of SAR detectors. To address these challenges, this paper proposes a few-shot SAR target-detection framework tailored for cross-sensor scenarios (CS-FSDet), enabling efficient transfer of source-domain knowledge to the target domain. First, to mitigate inter-domain feature-distribution mismatch, we introduce a Multi-scale Uncertainty-aware Bayesian Distribution Alignment (MUBDA) strategy. By modeling features as Gaussian distributions with uncertainty and performing dynamic weighting based on uncertainty, MUBDA achieves fine-grained distribution-level alignment of SAR features under different resolutions. Furthermore, we design an Adaptive Cross-domain Interactive Coordinate Attention (ACICA) module that computes cross-domain spatial-attention similarity and learns interaction weights adaptively, thereby suppressing domain-specific interference and enhancing the expressiveness of domain-shared target features. Extensive experiments on two cross-sensor few-shot detection tasks, HRSID→SSDD and SSDD→HRSID, demonstrate that the proposed method consistently surpasses state-of-the-art approaches in mean Average Precision (mAP) under 1-shot to 10-shot settings. Full article
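
A toy sketch of the general idea behind uncertainty-aware Gaussian distribution alignment: model source- and target-domain features as diagonal Gaussians, measure their divergence, and down-weight the term when the target estimate is uncertain (high variance). The KL divergence and the inverse-variance weighting used here are generic illustrative choices, not the authors' MUBDA formulation.

```python
import numpy as np

def gaussian_kl(mu_p, var_p, mu_q, var_q):
    """KL divergence between diagonal Gaussians N(mu_p, var_p) and N(mu_q, var_q)."""
    return 0.5 * np.sum(np.log(var_q / var_p) + (var_p + (mu_p - mu_q) ** 2) / var_q - 1.0)

def uncertainty_weighted_alignment(src_feats, tgt_feats):
    """Fit a Gaussian to each domain's features and weight the divergence by the
    target-domain uncertainty (higher variance -> smaller weight)."""
    mu_s, var_s = src_feats.mean(0), src_feats.var(0) + 1e-6
    mu_t, var_t = tgt_feats.mean(0), tgt_feats.var(0) + 1e-6
    weight = 1.0 / (1.0 + var_t.mean())     # illustrative uncertainty weighting
    return weight * gaussian_kl(mu_s, var_s, mu_t, var_t)

rng = np.random.default_rng(0)
print(uncertainty_weighted_alignment(rng.normal(0.0, 1.0, (500, 16)),
                                     rng.normal(0.3, 1.2, (500, 16))))
```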

22 pages, 15242 KB  
Article
A Modality Alignment and Fusion-Based Method for Around-the-Clock Remote Sensing Object Detection
by Yongjun Qi, Shaohua Yang, Jiahao Chen, Meng Zhang, Jie Zhu, Xin Liu and Hongxing Zheng
Sensors 2025, 25(16), 4964; https://doi.org/10.3390/s25164964 - 11 Aug 2025
Cited by 1 | Viewed by 676
Abstract
Cross-modal remote sensing object detection holds significant potential for around-the-clock applications. However, the modality differences between cross-modal data and the degradation of feature quality under adverse weather conditions limit detection performance. To address these challenges, this paper presents a novel cross-modal remote sensing object detection framework designed to overcome two critical challenges in around-the-clock applications: (1) significant modality disparities between visible light, infrared, and synthetic aperture radar data, and (2) severe feature degradation under adverse conditions such as fog and nighttime scenarios. Our primary contributions are as follows: First, we develop a multi-scale feature extraction module that employs a hierarchical convolutional architecture to capture both fine-grained details and contextual information, effectively compensating for missing or blurred features in degraded visible-light images. Second, we introduce an innovative feature interaction module that utilizes cross-attention mechanisms to establish long-range dependencies across modalities while dynamically suppressing noise interference through adaptive feature selection. Third, we propose a feature correction fusion module that performs spatial alignment of object boundaries and channel-wise optimization of global feature consistency, enabling robust fusion of complementary information from different modalities. The proposed framework is validated on visible light, infrared, and SAR modalities. Extensive experiments on three challenging datasets (LLVIP, OGSOD, and Drone Vehicle) demonstrate our framework’s superior performance, achieving state-of-the-art mean average precision scores of 66.3%, 58.6%, and 71.7%, respectively, representing significant improvements over existing methods in scenarios with modality differences or extreme weather conditions. The proposed solution not only advances the technical frontier of cross-modal object detection but also provides practical value for mission-critical applications such as 24/7 surveillance systems, military reconnaissance, and emergency response operations where reliable around-the-clock detection is essential. Full article
(This article belongs to the Section Remote Sensors)
