Multi-Source Data Fusion and Feature Extraction for Underwater Target Detection

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Engineering Remote Sensing".

Deadline for manuscript submissions: 30 June 2026 | Viewed by 3905

Special Issue Editors

College of Information Science and Engineering, Ocean University of China, Qingdao 266100, China
Interests: underwater target detection; underwater optical communication; multi-modal data fusion

Guest Editor
Fresnel Institute, Multidimensional Signal Group, Marseille, France
Interests: statistical signal processing; remote sensing; telecommunications; array processing; image processing; multidimensional signal processing and performance analysis

Special Issue Information

Dear Colleagues,

Underwater target detection faces unique challenges due to turbidity, variable acoustics, and limited visibility. Traditional single-modality approaches often falter in these complex environments. Recent advances in sensing technologies and artificial intelligence have enabled multi-extraction approaches—extracting diverse features from heterogeneous data sources—and sophisticated fusion algorithms that combine information across sensors, domains, and scales. This research area is crucial for maritime security, underwater archeology, marine conservation, and autonomous underwater vehicle navigation.

This Special Issue showcases cutting-edge research in multi-extraction and fusion techniques for underwater target detection. We seek interdisciplinary contributions bridging theoretical innovations with practical implementations. Our focus aligns with the journal's commitment to advancing technological solutions for complex environmental challenges, particularly in domains characterized by unique physical constraints. Potential topics include, but are not limited to:

  • Multi-modal sensor fusion architectures;
  • Deep learning for underwater feature extraction;
  • Underwater multi/hyper-spectral target detection/classification;
  • Feature extraction and target recognition in multidimensional data;
  • Fusion and applications of underwater multi-modal information;
  • Cross-domain feature integration;
  • Heterogeneous data fusion algorithms;
  • Real-time processing for multi-sensor systems;
  • Optical–acoustic data fusion methodologies;
  • Bio-inspired detection approaches;
  • Advanced signal processing for noisy environments;
  • Machine learning for underwater target classification.

Dr. Min Fu
Prof. Dr. Salah Bourennane
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • target detection
  • multi-modal fusion
  • feature extraction
  • underwater data classification
  • multispectral imaging
  • multi-sensor systems
  • data fusion
  • signal processing
  • underwater sensing
  • acoustic–optical fusion

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (4 papers)


Research

27 pages, 5957 KB  
Article
A Study of the Three-Dimensional Localization of an Underwater Glider Hull Using a Hierarchical Convolutional Neural Network Vision Encoder and a Variable Mixture-of-Experts Transformer
by Jungwoo Lee, Ji-Hyun Park, Jeong-Hwan Hwang, Kyoungseok Noh and Jinho Suh
Remote Sens. 2026, 18(5), 793; https://doi.org/10.3390/rs18050793 - 5 Mar 2026
Viewed by 217
Abstract
Although underwater gliders are highly energy-efficient platforms capable of long-duration and large-scale ocean observation, their lack of self-propulsion requires external assistance for recovery upon mission completion. In harsh and dynamic marine environments, reliably detecting the glider and accurately estimating its three-dimensional position are critical to ensuring the recovery operations are safe and efficient. This paper proposes a perception framework based on deep learning to detect underwater glider hulls and estimate their three-dimensional relative positions using camera–sonar multi-sensor fusion. This approach integrates a hierarchical convolutional neural network (CNN) vision encoder and a transformer-based architecture to estimate the glider’s spatial location and heading direction simultaneously. The hierarchical CNN encoder extracts multi-level, semantically rich visual features, thereby improving robustness to visual degradation and environmental disturbances common in underwater settings. Additionally, the transformer incorporates a variable mixture-of-experts (vMoE) mechanism that adaptively allocates expert networks across layers, enhancing representational capacity while maintaining computational efficiency. The resulting pose estimates enable precise, collision-free ROV navigation for automated recovery and onboard sensor inspection tasks. Experimental results, including ablation studies, validate the effectiveness of the proposed components and demonstrate their contributions to accurate glider hull detection and three-dimensional localization. Overall, the proposed framework provides a scalable, reliable perception solution that allows for the safe, autonomous recovery of underwater gliders with an ROV in realistic ocean environments.
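The vMoE mechanism described in this abstract routes each input through a small, adaptively chosen subset of expert networks. The paper's exact formulation is not reproduced here; the following is a minimal, framework-free sketch of standard top-k expert gating, where `moe_layer`, `gate_weights`, and the dot-product gate are illustrative assumptions rather than the authors' design:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def moe_layer(x, experts, gate_weights, k=2):
    """Route input x to the top-k experts by gate score and combine
    their outputs, weighted by renormalized gate probabilities."""
    # Gate score per expert: a dot product of the input with that
    # expert's gating weight vector (a stand-in for a learned gate).
    scores = [sum(w_i * x_i for w_i, x_i in zip(w, x)) for w in gate_weights]
    probs = softmax(scores)
    # Keep only the top-k experts and renormalize their probabilities.
    topk = sorted(range(len(experts)), key=lambda i: probs[i], reverse=True)[:k]
    norm = sum(probs[i] for i in topk)
    # Weighted sum of the selected experts' outputs.
    out = [0.0] * len(x)
    for i in topk:
        y = experts[i](x)
        out = [o + (probs[i] / norm) * y_j for o, y_j in zip(out, y)]
    return out
```

Only the selected experts are evaluated, which is the source of the compute savings the abstract mentions; a "variable" allocation would additionally let `k` or the expert pool differ per layer.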

28 pages, 3284 KB  
Article
Diffusion-Enhanced Underwater Debris Detection via Improved YOLOv12n Framework
by Jianghan Tao, Fan Zhao, Yijia Chen, Yongying Liu, Feng Xue, Jian Song, Hao Wu, Jundong Chen, Peiran Li and Nan Xu
Remote Sens. 2025, 17(23), 3910; https://doi.org/10.3390/rs17233910 - 2 Dec 2025
Cited by 2 | Viewed by 1039
Abstract
Detecting underwater debris is important for monitoring the marine environment but remains challenging due to poor image quality, visual noise, object occlusions, and diverse debris appearances in underwater scenes. This study proposes UDD-YOLO, a novel detection framework that, for the first time, applies a diffusion-based model to underwater image enhancement, introducing a new paradigm for improving perceptual quality in marine vision tasks. Specifically, the proposed framework integrates three key components: (1) a Cold Diffusion module that acts as a pre-processing stage to restore image clarity and contrast by reversing deterministic degradation such as blur and occlusion—without injecting stochastic noise—making it the first diffusion-based enhancement applied to underwater object detection; (2) an AMC2f feature extraction module that combines multi-scale separable convolutions and learnable normalization to improve representation for targets with complex morphology and scale variation; and (3) a Unified-IoU (UIoU) loss function designed to dynamically balance localization learning between high- and low-quality predictions, thereby reducing errors caused by occlusion or boundary ambiguity. Extensive experiments are conducted on the public underwater plastic pollution detection dataset, which includes 15 categories of underwater debris. The proposed method achieves a mAP50 of 81.8%, with 87.3% precision and 75.1% recall, surpassing eleven advanced detection models such as Faster R-CNN, RT-DETR-L, YOLOv8n, and YOLOv12n. Ablation studies verify the function of every module. These findings show that diffusion-driven enhancement, when coupled with feature extraction and localization optimization, offers a promising direction for accurate, robust underwater perception, opening new opportunities for environmental monitoring and autonomous marine systems.
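For context on the Unified-IoU loss mentioned in this abstract, a plain IoU-based localization loss can be sketched as below. This is the standard 1 − IoU baseline that such losses build on, not the authors' UIoU: the dynamic weighting between high- and low-quality predictions is not reproduced here.

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def iou_loss(pred, target):
    """Baseline localization loss: perfect overlap gives 0, disjoint boxes 1."""
    return 1.0 - iou(pred, target)
```

Variants such as UIoU reweight or reshape this quantity so that training attends differently to well-localized and poorly-localized predictions.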

18 pages, 2949 KB  
Article
Development of a Quantitative Survey Method for Pelagic Fish Aggregations Around an Offshore Wind Farm Using Multibeam Sonar
by Masahiro Hamana, Sara Gonzalvo, Takayoshi Otaki and Teruhisa Komatsu
Remote Sens. 2025, 17(18), 3255; https://doi.org/10.3390/rs17183255 - 21 Sep 2025
Viewed by 922
Abstract
Offshore wind farms are rapidly expanding worldwide, and the submerged structures supporting wind turbines have the potential to function as artificial reefs for marine organisms. Quantitative visualization of fish aggregations around these foundations can provide valuable information for promoting collaboration between fisheries and offshore wind energy development. This study explored the use of multibeam sonar to detect spatial distributions and estimate the biomass of pelagic fish aggregations around the foundations of offshore wind power facilities. Fish distribution was extracted from multibeam water column image data using an automated sequence of filtering steps, ending with a spatial filter designed to remove common noise artifacts in multibeam sonar data. The resulting fish aggregations were visualized in three dimensions, revealing a tendency to cluster leeward of turbine and observation tower foundations, and fish biomass was successfully estimated from beam backscatter strength. The developed method can be applied to other offshore wind farms to demonstrate the role of turbine foundations as artificial reefs for fish. Full article
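The filtering sequence this abstract describes ends with a spatial filter for common multibeam noise artifacts. The paper's actual filter and its parameters are not given here; a minimal density-based sketch, which treats detections with too few neighbors within a radius as isolated noise echoes, might look like this (`radius` and `min_neighbors` are illustrative assumptions):

```python
def spatial_filter(points, radius=1.0, min_neighbors=2):
    """Keep only 3-D detections that have at least `min_neighbors`
    other detections within `radius`; isolated echoes are dropped."""
    kept = []
    r2 = radius * radius
    for i, (xi, yi, zi) in enumerate(points):
        n = 0
        for j, (xj, yj, zj) in enumerate(points):
            if i == j:
                continue
            d2 = (xi - xj) ** 2 + (yi - yj) ** 2 + (zi - zj) ** 2
            if d2 <= r2:
                n += 1
        if n >= min_neighbors:
            kept.append((xi, yi, zi))
    return kept
```

Real water-column data would use a spatial index rather than this O(n²) loop, but the principle is the same: fish aggregations are spatially dense, while ringing and sidelobe artifacts tend to appear as sparse, isolated returns.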

20 pages, 19537 KB  
Article
Submarine Topography Classification Using ConDenseNet with Label Smoothing Regularization
by Jingyan Zhang, Kongwen Zhang and Jiangtao Liu
Remote Sens. 2025, 17(15), 2686; https://doi.org/10.3390/rs17152686 - 3 Aug 2025
Viewed by 969
Abstract
The classification of submarine topography and geomorphology is essential for marine resource exploitation and ocean engineering, with wide-ranging implications in marine geology, disaster assessment, resource exploration, and autonomous underwater navigation. Submarine landscapes are highly complex and diverse. Traditional visual interpretation methods are not only inefficient and subjective but also lack the precision required for high-accuracy classification. While many machine learning and deep learning models have achieved promising results in image classification, limited work has been performed on integrating backscatter and bathymetric data for multi-source processing. Existing approaches often suffer from high computational costs and excessive hyperparameter demands. In this study, we propose a novel approach that integrates pruning-enhanced ConDenseNet with label smoothing regularization to reduce misclassification, strengthen the cross-entropy loss function, and significantly lower model complexity. Our method improves classification accuracy by 2% to 10%, reduces the number of hyperparameters by 50% to 96%, and cuts computation time by 50% to 85.5% compared to state-of-the-art models, including AlexNet, VGG, ResNet, and Vision Transformer. These results demonstrate the effectiveness and efficiency of our model for multi-source submarine topography classification.
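Label smoothing regularization, as used in this abstract, replaces the one-hot cross-entropy target with a softened distribution so the model is penalized for overconfident predictions. A minimal sketch of the standard formulation follows; the smoothing factor `eps=0.1` here is illustrative, not a value from the paper:

```python
def smooth_labels(true_class, num_classes, eps=0.1):
    """Soften a one-hot target: spread eps uniformly over all classes,
    so the true class gets (1 - eps) + eps/K and each other class eps/K."""
    base = eps / num_classes
    target = [base] * num_classes
    target[true_class] += 1.0 - eps
    return target

def cross_entropy(target, log_probs):
    """Cross-entropy between a (smoothed) target distribution and
    the model's log-probabilities for each class."""
    return -sum(t * lp for t, lp in zip(target, log_probs))
```

Because the off-classes now carry small positive mass, the loss keeps a gradient pushing predicted probabilities away from hard 0/1 values, which is the regularizing effect the abstract refers to.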
