Ocean Remote Sensing Based on Radar, Sonar and Optical Techniques

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Ocean Remote Sensing".

Deadline for manuscript submissions: closed (31 December 2024) | Viewed by 20816

Special Issue Editors


Guest Editor
College of Physics and Electronic Engineering, Northwest Normal University, Lanzhou 730070, China
Interests: sonar imaging; synthetic aperture sonar; synthetic aperture radar; image resolution; radar imaging; signal reconstruction; signal sampling

Guest Editor
Faculty of Information Science and Engineering, Ningbo University, Ningbo 315211, China
Interests: radar/sonar imaging; underwater image optimization; tensor decomposition-based image processing

Guest Editor
National Key Laboratory of Radar Signal Processing, Xidian University, Xi’an 710071, China
Interests: forward-looking airborne SAR imaging

Guest Editor
School of Electronics and Communication Engineering, Sun Yat-sen University, Guangzhou 510275, China
Interests: radar imaging; radar signal processing; distributed radar system

Guest Editor
EXAIL ROBOTICS, 83000 Toulon, France
Interests: sonar imaging

Guest Editor
Department of Computer Science and Engineering, School of Convergence, College of Computing and Informatics, Sungkyunkwan University, Seoul 03063, Republic of Korea
Interests: network; fuzzy logic; opinion mining; ontology; machine learning

Special Issue Information

Dear Colleagues,

The ocean covers approximately 71% of the Earth’s surface and plays an important role in human life; therefore, understanding, monitoring, and protecting the ocean are areas of constant concern. Radar is one of the most commonly used sensors for ocean remote sensing above the sea surface, and it can be used for the preliminary surveying of marine oil resources, offshore terrain inversion, the monitoring of the marine environment (for example, the detection of oil spills), and more. Sonar is widely exploited below the sea surface; based on sonar techniques, underwater terrain mapping, underwater rescue, underwater archaeology, and the detection of underwater unexploded explosives can be readily carried out. Underwater optical imaging is also used for marine detection, underwater robotics, underwater archaeology, and other fields. Together, radar, sonar, and optical technologies allow us to understand, monitor, and protect the ocean effectively.

Electromagnetic, acoustic, and optical sensors can be installed at fixed locations in harbors, on the sea surface, or underwater, or mounted on mobile platforms such as unmanned aerial, underwater, and surface vehicles, as well as manned surface ships. Multiple sensors are often networked to explore, observe, and exploit the ocean effectively. The transmission characteristics of electromagnetic, acoustic, and optical signals are affected by many factors, including propagation loss, multipath, Doppler effects, time-varying channels, radio interference, attenuation, and scattering. In addition, sensor networks suffer from sparsity, the limited energy of sensor nodes, and unstable topology and transmission, which seriously degrade the performance of remote sensing, navigation, positioning, environmental perception, array processing, and signal detection based on such networks. Consequently, acquired images may exhibit severe quality degradation, color distortion, and low contrast. Advanced signal processing is therefore critical for electromagnetic, acoustic, and optical sensors and networks to acquire ocean information effectively.

In this Special Issue, researchers are invited to report their latest progress in the fields of radar, sonar, and optics. This includes radar, sonar and optical communication, navigation, positioning, environmental perception, array processing and signal detection, remote sensing, acoustic tomography, ocean sound field monitoring, underwater optical imaging, etc.

The scope of this Special Issue includes (but is not limited to) the following topics:

  • Radar imaging;
  • Sonar imaging;
  • Optical imaging;
  • Image denoising;
  • Image enhancement;
  • Radar and sonar array signal processing;
  • Underwater communication and networks;
  • Target detection and recognition;
  • Obstacle detection and collision avoidance;
  • Underwater localization and bathymetry mapping.

Dr. Xuebo Zhang
Prof. Dr. Haiyong Xu
Dr. Jingyue Lu
Prof. Dr. Lei Zhang
Dr. Marc Pinto
Dr. Farman Ali
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • ocean
  • remote sensing
  • image
  • radar
  • sonar
  • optical technology

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.


Published Papers (16 papers)


Research


20 pages, 110320 KiB  
Article
Contrastive Feature Disentanglement via Physical Priors for Underwater Image Enhancement
by Fei Li, Li Wan, Jiangbin Zheng, Lu Wang and Yue Xi
Remote Sens. 2025, 17(5), 759; https://doi.org/10.3390/rs17050759 - 22 Feb 2025
Viewed by 447
Abstract
Underwater image enhancement (UIE) serves as a fundamental preprocessing step in ocean remote sensing applications, encompassing marine life detection, archaeological surveying, and subsea resource exploration. However, UIE encounters substantial technical challenges due to the intricate physics of underwater light propagation and the inherent homogeneity of aquatic environments. Images captured underwater are significantly degraded through wavelength-dependent absorption and scattering processes, resulting in color distortion, contrast degradation, and illumination irregularities. To address these challenges, we propose a contrastive feature disentanglement network (CFD-Net) that systematically addresses underwater image degradation. Our framework employs a multi-stream decomposition architecture with three specialized decoders to disentangle the latent feature space into components associated with degradation and those representing high-quality features. We incorporate hierarchical contrastive learning mechanisms to establish clear relationships between standard and degraded feature spaces, emphasizing intra-layer similarity and inter-layer exclusivity. Through the synergistic utilization of internal feature consistency and cross-component distinctiveness, our framework achieves robust feature extraction without explicit supervision. Compared to existing methods, our approach achieves a 12% higher UIQM score on the EUVP dataset and outperforms other state-of-the-art techniques on various evaluation metrics such as UCIQE, MUSIQ, and NIQE, both quantitatively and qualitatively. Full article
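
As a minimal illustration of the contrastive-learning ingredient described above, the sketch below shows a generic InfoNCE-style loss that pulls the "clean" components of two views of an image together while pushing them away from "degradation" components. It is not CFD-Net itself; the function name, feature shapes, and temperature are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def contrastive_disentangle_loss(clean_a, clean_b, degraded, temperature=0.1):
    """clean_a, clean_b: (B, D) clean-component features of two views of the same image.
    degraded: (B, D) degradation-component features, used here as negatives."""
    za = F.normalize(clean_a, dim=1)
    zb = F.normalize(clean_b, dim=1)
    zn = F.normalize(degraded, dim=1)
    pos = (za * zb).sum(dim=1, keepdim=True) / temperature   # (B, 1) positive similarities
    neg = za @ zn.t() / temperature                          # (B, B) negative similarities
    logits = torch.cat([pos, neg], dim=1)                    # the positive sits in column 0
    target = torch.zeros(za.size(0), dtype=torch.long)
    return F.cross_entropy(logits, target)

# toy usage with random features
B, D = 8, 128
loss = contrastive_disentangle_loss(torch.randn(B, D), torch.randn(B, D), torch.randn(B, D))
print(float(loss))
```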

32 pages, 124914 KiB  
Article
CNN–Transformer Hybrid Architecture for Underwater Sonar Image Segmentation
by Juan Lei, Huigang Wang, Zelin Lei, Jiayuan Li and Shaowei Rong
Remote Sens. 2025, 17(4), 707; https://doi.org/10.3390/rs17040707 - 19 Feb 2025
Viewed by 890
Abstract
The salient object detection (SOD) of forward-looking sonar images plays a crucial role in underwater detection and rescue tasks. However, the existing SOD algorithms find it difficult to effectively extract salient features and spatial structure information from images with scarce semantic information, uneven intensity distribution, and high noise. Convolutional neural networks (CNNs) have strong local feature extraction capabilities, but they are easily constrained by the receptive field and lack the ability to model long-range dependencies. Transformers, with their powerful self-attention mechanism, are capable of modeling the global features of a target, but they tend to lose a significant amount of local detail. Mamba effectively models long-range dependencies in long sequence inputs through a selection mechanism, offering a novel approach to capturing long-range correlations between pixels. However, since the saliency of image pixels does not exhibit sequential dependencies, this somewhat limits Mamba’s ability to fully capture global contextual information during the forward pass. Inspired by multimodal feature fusion learning, we propose a hybrid CNN–Transformer–Mamba architecture, termed FLSSNet. FLSSNet is built upon a CNN and Transformer backbone network, integrating four core submodules to address various technical challenges: (1) The asymmetric dual encoder–decoder (ADED) is capable of simultaneously extracting features from different modalities and systematically modeling both local contextual information and global spatial structure. (2) The Transformer feature converter (TFC) module optimizes the multimodal feature fusion process through feature transformation and channel compression. (3) The long-range correlation attention (LRCA) module enhances CNN’s ability to model long-range dependencies through the collaborative use of convolutional kernels, selective sequential scanning, and attention mechanisms, while effectively suppressing noise interference. (4) The recursive contour refinement (RCR) model refines edge contour information through a layer-by-layer recursive mechanism, achieving greater precision in boundary details. The experimental results show that FLSSNet exhibits outstanding competitiveness among 25 state-of-the-art SOD methods, achieving MAE and Eξ values of 0.04 and 0.973, respectively. Full article
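
To ground the CNN–Transformer pairing discussed above, the sketch below shows the generic pattern of a convolutional stem feeding image tokens into a Transformer encoder layer in PyTorch. It is not FLSSNet or any of its submodules; all layer sizes and names are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TinyCnnTransformer(nn.Module):
    """Toy hybrid: a convolutional stem for local features, a Transformer layer for context."""
    def __init__(self, in_ch=1, dim=64, heads=4):
        super().__init__()
        self.stem = nn.Sequential(                            # local feature extraction
            nn.Conv2d(in_ch, dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(dim, dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.encoder = nn.TransformerEncoderLayer(            # global context over tokens
            d_model=dim, nhead=heads, batch_first=True)
        self.head = nn.Conv2d(dim, 1, 1)                      # per-pixel saliency logits

    def forward(self, x):
        f = self.stem(x)                                      # (B, C, H/4, W/4)
        b, c, h, w = f.shape
        tokens = self.encoder(f.flatten(2).transpose(1, 2))   # (B, H*W/16, C)
        f = tokens.transpose(1, 2).reshape(b, c, h, w)
        return self.head(f)                                   # coarse saliency map

out = TinyCnnTransformer()(torch.randn(1, 1, 64, 64))          # -> torch.Size([1, 1, 16, 16])
print(out.shape)
```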

22 pages, 2839 KiB  
Article
Narrowband Radar Micromotion Targets Recognition Strategy Based on Graph Fusion Network Constructed by Cross-Modal Attention Mechanism
by Yuanjie Zhang, Ting Gao, Hongtu Xie, Haozong Liu, Mengfan Ge, Bin Xu, Nannan Zhu and Zheng Lu
Remote Sens. 2025, 17(4), 641; https://doi.org/10.3390/rs17040641 - 13 Feb 2025
Viewed by 520
Abstract
In the domain of micromotion target recognition, target characteristics can be extracted through either broadband or narrowband radar echoes. However, due to technical limitations and cost constraints in acquiring broadband radar waveform data, recognition can often only be performed through narrowband radar waveforms. To fully utilize the information embedded within narrowband radar waveforms, it is necessary to conduct in-depth research on multi-dimensional features of micromotion targets, including radar cross-sections (RCSs), time frequency (TF) images, and cadence velocity diagrams (CVDs). To address the limitations of existing identification methodologies in achieving accurate recognition with narrowband echoes, this paper proposes a graph fusion network based on a cross-modal attention mechanism, termed GF-AM Net. The network first adopts convolutional neural networks (CNNs) to extract unimodal features from RCSs, TF images, and CVDs independently. Subsequently, a cross-modal attention mechanism integrates these extracted features into a graph structure, achieving multi-level interactions among unimodal, bimodal, and trimodal features. Finally, the fused features are input into a classification module to accomplish narrowband radar micromotion target identification. Experimental results demonstrate that the proposed methodology successfully captures potential correlations between modal features by incorporating cross-modal multi-level information interactions across different processing stages, exhibiting exceptional accuracy and robustness in narrowband radar micromotion target identification tasks. Full article
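
To make the cross-modal attention mechanism concrete, the minimal sketch below lets tokens from one modality (e.g., an RCS sequence) attend to tokens from another (e.g., a time-frequency image) using PyTorch's MultiheadAttention. It illustrates the mechanism only, not GF-AM Net; the dimensions and modality pairing are placeholders.

```python
import torch
import torch.nn as nn

dim, heads = 64, 4
cross_attn = nn.MultiheadAttention(embed_dim=dim, num_heads=heads, batch_first=True)

rcs_feat = torch.randn(2, 16, dim)   # (batch, tokens, dim) from an RCS-sequence CNN
tf_feat = torch.randn(2, 49, dim)    # (batch, tokens, dim) from a time-frequency image CNN

# RCS tokens attend to TF tokens; the output is a TF-informed RCS representation.
fused, attn_weights = cross_attn(query=rcs_feat, key=tf_feat, value=tf_feat)
print(fused.shape)                   # torch.Size([2, 16, 64])
```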

24 pages, 5651 KiB  
Article
A Robust Direction-of-Arrival (DOA) Estimator for Weak Targets Based on a Dimension-Reduced Matrix Filter with Deep Nulling and Multiple-Measurement-Vector Orthogonal Matching Pursuit
by Shoudong Wang, Haozhong Wang, Zhaoxiang Bian, Susu Chen, Penghua Song, Bolin Su and Wei Gao
Remote Sens. 2025, 17(3), 477; https://doi.org/10.3390/rs17030477 - 30 Jan 2025
Cited by 1 | Viewed by 507
Abstract
In the field of target localization, improving direction-of-arrival (DOA) estimation methods for weak targets in the context of strong interference remains a significant challenge. This paper presents a robust DOA estimator for localizing weak signals of interest in an environment with strong interfering sources, thereby improving passive sonar DOA estimation. The presented estimator combines a multiple-measurement-vector orthogonal matching pursuit (MOMP) algorithm and a dimension-reduced matrix filter with deep nulling (DR-MFDN). Strong interfering sources are adaptively suppressed by employing the DR-MFDN, and the beam-space passband robustness is improved. In addition, Gaussian pre-whitening is used to mitigate noise colorization. Simulations and experimental results demonstrate that the presented estimator outperforms a conventional estimator based on a dimension-reduced matrix filter with nulling (DR-MFN) and the multiple signal classification algorithm in terms of interference suppression and localization accuracy. Moreover, the presented estimator can effectively handle short snapshots, and it exhibits superior resolution by considering the signal sparsity. Full article
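
The sketch below isolates the multiple-measurement-vector orthogonal matching pursuit step on a toy half-wavelength uniform linear array; the DR-MFDN interference suppression and pre-whitening stages are not modeled, and the grid, array size, and noise level are arbitrary assumptions.

```python
import numpy as np

def momp(A, Y, k):
    """A: (M, G) steering dictionary, Y: (M, L) array snapshots, k: number of sources.
    Returns the indices of the selected grid directions."""
    residual = Y.copy()
    support = []
    for _ in range(k):
        # pick the atom with the largest aggregate correlation over all snapshots
        corr = np.linalg.norm(A.conj().T @ residual, axis=1)
        corr[support] = 0.0
        support.append(int(np.argmax(corr)))
        As = A[:, support]
        # least-squares fit on the current support, then update the residual
        X, *_ = np.linalg.lstsq(As, Y, rcond=None)
        residual = Y - As @ X
    return sorted(support)

# toy example: 16-sensor half-wavelength ULA, two far-field sources at -20 and +30 degrees
rng = np.random.default_rng(0)
M, G, L = 16, 181, 50
grid = np.deg2rad(np.linspace(-90, 90, G))
A = np.exp(1j * np.pi * np.arange(M)[:, None] * np.sin(grid)[None, :])
truth = [70, 120]                                    # grid indices of -20 and +30 degrees
S = rng.standard_normal((2, L)) + 1j * rng.standard_normal((2, L))
Y = A[:, truth] @ S + 0.1 * (rng.standard_normal((M, L)) + 1j * rng.standard_normal((M, L)))
print(momp(A, Y, 2), "vs true", truth)
```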

28 pages, 56964 KiB  
Article
Sequential Multimodal Underwater Single-Photon Lidar Adaptive Target Reconstruction Algorithm Based on Spatiotemporal Sequence Fusion
by Tian Rong, Yuhang Wang, Qiguang Zhu, Chenxu Wang, Yanchao Zhang, Jianfeng Li, Zhiquan Zhou and Qinghua Luo
Remote Sens. 2025, 17(2), 295; https://doi.org/10.3390/rs17020295 - 15 Jan 2025
Viewed by 773
Abstract
To meet the demand for long-range, high-resolution reconstruction of small, slow-moving underwater targets, research on single-photon lidar target reconstruction technology is being carried out. This paper reports a sequential multimodal underwater single-photon lidar adaptive target reconstruction algorithm based on spatiotemporal sequence fusion, which has strong information extraction and noise filtering abilities and can reconstruct target depth and reflection intensity information from complex echo photon time counts and spatial pixel relationships. The method consists of three steps: data preprocessing, sequence-optimized extreme-value inference filtering, and a collaborative variation strategy for image optimization, which together achieve high-quality target reconstruction in complex underwater environments. Simulation and test results show that the target reconstruction method outperforms current imaging algorithms, and the single-photon lidar system built for this work achieves underwater lateral and range resolutions of 5 mm and 2.5 cm at 6 attenuation lengths (6 AL), respectively. This indicates that the method has a great advantage in sparse photon-counting imaging and is capable of underwater target imaging against a strong optical noise background. It also provides a good solution for long-range, high-resolution underwater imaging of small, slow-moving targets. Full article
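
As a baseline reference for the single-photon terminology above, the toy sketch below estimates depth simply from the peak of a time-of-flight histogram of photon time tags; the paper's spatiotemporal sequence fusion and adaptive filtering go far beyond this, and every number used here is an assumption.

```python
import numpy as np

c_water = 2.25e8                       # approx. speed of light in water (m/s), assumed
bin_width = 50e-12                     # 50 ps timing bins, assumed
true_depth = 6.0                       # metres
true_tof = 2 * true_depth / c_water    # round-trip time of flight

rng = np.random.default_rng(0)
signal = rng.normal(true_tof, 100e-12, size=200)   # signal photons clustered at the return
noise = rng.uniform(0.0, 80e-9, size=2000)         # uniformly distributed background photons
tags = np.concatenate([signal, noise])

hist, edges = np.histogram(tags, bins=round(80e-9 / bin_width), range=(0.0, 80e-9))
tof_hat = edges[np.argmax(hist)] + bin_width / 2   # peak of the time-of-flight histogram
print(tof_hat * c_water / 2)                       # depth estimate, close to 6.0 m
```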

25 pages, 3773 KiB  
Article
Three-Dimensional Non-Uniform Sampled Data Visualization from Multibeam Echosounder Systems for Underwater Imaging and Environmental Monitoring
by Wenjing Cao, Shiliang Fang, Chuanqi Zhu, Miao Feng, Yifan Zhou and Hongli Cao
Remote Sens. 2025, 17(2), 294; https://doi.org/10.3390/rs17020294 - 15 Jan 2025
Viewed by 548
Abstract
This paper proposes a method for visualizing three-dimensional non-uniformly sampled data from multibeam echosounder systems (MBESs), aimed at addressing the requirements of monitoring complex and dynamic underwater flow fields. To tackle the challenges associated with spatially non-uniform sampling, the proposed method employs linear interpolation along the radial direction and arc length weighted interpolation in the beam direction. This approach ensures consistent resolution of three-dimensional data across the same dimension. Additionally, an opacity transfer function is generated to enhance the visualization performance of the ray casting algorithm. This function leverages data values and gradient information, including the first and second directional derivatives, to suppress the rendering of background and non-interest regions while emphasizing target areas and boundary features. The simulation and experimental results demonstrate that, compared to conventional two-dimensional beam images and three-dimensional images, the proposed algorithm provides a more intuitive and accurate representation of three-dimensional data, offering significant support for the observation and analysis of spatial flow field characteristics. Full article
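
The snippet below sketches a value-and-gradient-driven opacity transfer function in the spirit of the description above; the second directional derivatives used in the paper are omitted, and the thresholds and scaling are arbitrary assumptions.

```python
import numpy as np

def opacity(volume, value_low=0.3, grad_scale=4.0):
    """volume: 3-D array of normalized scatter intensities in [0, 1]."""
    gx, gy, gz = np.gradient(volume)
    grad_mag = np.sqrt(gx**2 + gy**2 + gz**2)
    # suppress weak background values, emphasize boundaries with strong gradients
    alpha = np.clip((volume - value_low) / (1.0 - value_low), 0.0, 1.0)
    alpha *= 1.0 - np.exp(-grad_scale * grad_mag)
    return alpha

vol = np.random.rand(32, 32, 32)
print(opacity(vol).shape)          # (32, 32, 32) per-voxel opacity in [0, 1)
```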

24 pages, 23277 KiB  
Article
Modeling and Data Analysis of Bistatic Bottom Reverberation from a Towed Horizontal Array
by Zhaohua Su, Jie Zhuo and Chao Sun
Remote Sens. 2025, 17(2), 192; https://doi.org/10.3390/rs17020192 - 8 Jan 2025
Viewed by 647
Abstract
The spatial-temporal structures of bottom reverberation are associated with seafloor features. In a bistatic bottom reverberation experiment involving a vertical transmitting array and a towed horizontal receiving array, stable stripe structures were observed within the beam-time domain. In this study, a bistatic reverberation model based on ray theory is presented to interpret the experimental phenomena. The conventional empirical scattering function is primarily applicable to small grazing angles. Moreover, the regional segmentation method simulates reverberations across various receiving beams, ignoring scatterers in other areas. To address these issues, we substitute the empirical scattering function with a small-slope approximation (SSA) that is appropriate for full grazing angles. Furthermore, we utilize the beam pattern of arrays to incorporate the effects of each scatterer, and derive the expression for bottom reverberation intensity in both the array and beam domains. The established model demonstrates its applicability in simulating and interpreting the stripe structures of bottom reverberation, and the comparison shows that the model outputs are in agreement with the experimental results. The analysis indicates that the vertical stripes within the structures originate from eigenrays in the mirror reflection direction. Furthermore, the convex stripes are predominantly affected by the direct ray and the surface reflection ray among the scattered eigenrays, whereas the concave and elliptical stripes are primarily affected by the bottom-surface reflection ray and the surface-bottom-surface reflection ray within the scattered eigenrays. Full article

20 pages, 4895 KiB  
Article
A Fast Two-Dimensional Direction-of-Arrival Estimator Using Array Manifold Matrix Learning
by Jieyi Lu, Long Yang, Yixin Yang and Lu Wang
Remote Sens. 2024, 16(24), 4654; https://doi.org/10.3390/rs16244654 - 12 Dec 2024
Viewed by 659
Abstract
Sparsity-based methods for two-dimensional (2D) direction-of-arrival (DOA) estimation often suffer from high computational complexity due to the large array manifold dictionaries. This paper proposes a fast 2D DOA estimator using array manifold matrix learning, where source-associated grid points are progressively selected from the set of predefined angular grids based on marginal likelihood maximization in the sparse Bayesian learning framework. This grid selection reduces the size of the manifold dictionary matrix, avoiding large-scale matrix inversion and resulting in reduced complexity. To overcome grid mismatch errors, grid optimization is established based on the marginal likelihood, with a dichotomizing-based solver provided that is applicable to arbitrary planar arrays. For uniform rectangular arrays, we present a 2D zoom fast Fourier transform as an alternative to the dichotomizing-based solver by transforming the manifold vector in a specific form, thus accelerating the computation without compromising accuracy. Simulation results verify the superior performance of the proposed methods in terms of estimation accuracy, computational efficiency, and angle resolution compared to state-of-the-art methods for 2D DOA estimation. Full article
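
To show why FFT-based processing suits uniform rectangular arrays, the sketch below performs plain zero-padded 2-D FFT beamforming on a single noiseless snapshot; it is not the paper's zoom FFT or sparse Bayesian learning scheme, and the array size and test angles are assumptions.

```python
import numpy as np

Mx, My = 8, 8                                     # URA size, half-wavelength spacing assumed
az, el = np.deg2rad(25.0), np.deg2rad(40.0)       # arbitrary test direction
u = np.sin(el) * np.cos(az)                       # direction cosines
v = np.sin(el) * np.sin(az)
mx, my = np.meshgrid(np.arange(Mx), np.arange(My), indexing="ij")
snapshot = np.exp(1j * np.pi * (mx * u + my * v)) # noiseless single-snapshot URA data

N = 256                                           # zero-padded FFT size per dimension
spectrum = np.abs(np.fft.fft2(snapshot, s=(N, N)))
ix, iy = np.unravel_index(np.argmax(spectrum), spectrum.shape)
u_hat = 2.0 * np.fft.fftfreq(N)[ix]               # spatial frequency back to direction cosine
v_hat = 2.0 * np.fft.fftfreq(N)[iy]
print((u_hat, v_hat), "vs true", (u, v))
```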

19 pages, 1933 KiB  
Article
Dual-Feature Fusion Learning: An Acoustic Signal Recognition Method for Marine Mammals
by Zhichao Lü, Yaqian Shi, Liangang Lü, Dongyue Han, Zhengkai Wang and Fei Yu
Remote Sens. 2024, 16(20), 3823; https://doi.org/10.3390/rs16203823 - 14 Oct 2024
Viewed by 1170
Abstract
Marine mammal acoustic signal recognition is a key technology for species conservation and ecological environment monitoring. Aiming at the complex and changing marine environment, and because the traditional recognition method based on a single feature input has the problems of poor environmental adaptability and low recognition accuracy, this paper proposes a dual-feature fusion learning method. First, dual-domain feature extraction is performed on marine mammal acoustic signals to overcome the limitations of single feature input methods by interacting feature information between the time-frequency domain and the Delay-Doppler domain. Second, this paper constructs a dual-feature fusion learning target recognition model, which improves the generalization ability and robustness of mammal acoustic signal recognition in complex marine environments. Finally, the feasibility and effectiveness of the dual-feature fusion learning target recognition model are verified in this study by using the acoustic datasets of three marine mammals, namely, the Fraser’s Dolphin, the Spinner Dolphin, and the Long-Finned Pilot Whale. The dual-feature fusion learning target recognition model improved the accuracy of the training set by 3% to 6% and 20% to 23%, and the accuracy of the test set by 1% to 3% and 25% to 38%, respectively, compared to the model that used the time-frequency domain features and the Delay-Doppler domain features alone for recognition. Full article
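
The sketch below extracts the two feature domains mentioned above from a synthetic sweep: a time-frequency (spectrogram) representation and a Delay-Doppler-style representation obtained by a 2-D transform of the time-frequency grid. The authors' exact Delay-Doppler construction is not specified here, so this mapping is an assumption for illustration.

```python
import numpy as np
from scipy.signal import stft, chirp

fs = 8000.0
t = np.arange(0, 2.0, 1 / fs)
x = chirp(t, f0=500.0, t1=2.0, f1=1500.0)             # synthetic sweep as a stand-in call

f, tt, Z = stft(x, fs=fs, nperseg=256, noverlap=192)  # time-frequency (spectrogram) feature
tf_feature = np.abs(Z)

# Delay-Doppler-style feature: 2-D transform of the TF grid (FFT across time frames,
# inverse FFT across frequency bins); the exact construction is an assumption here.
dd_feature = np.abs(np.fft.fftshift(np.fft.ifft(np.fft.fft(Z, axis=1), axis=0)))

print(tf_feature.shape, dd_feature.shape)
```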

21 pages, 2399 KiB  
Article
Gridless DOA Estimation Method for Arbitrary Array Geometries Based on Complex-Valued Deep Neural Networks
by Yuan Cao, Tianjun Zhou and Qunfei Zhang
Remote Sens. 2024, 16(19), 3752; https://doi.org/10.3390/rs16193752 - 9 Oct 2024
Viewed by 1538
Abstract
Gridless direction of arrival (DOA) estimation methods have garnered significant attention due to their ability to avoid grid mismatch errors, which can adversely affect the performance of high-resolution DOA estimation algorithms. However, most existing gridless methods are primarily restricted to applications involving uniform linear arrays or sparse linear arrays. In this paper, we derive the relationship between the element-domain covariance matrix and the angular-domain covariance matrix for arbitrary array geometries by expanding the steering vector using a Fourier series. Then, a deep neural network is designed to reconstruct the angular-domain covariance matrix from the sample covariance matrix and the gridless DOA estimation can be obtained by Root-MUSIC. Simulation results on arbitrary array geometries demonstrate that the proposed method outperforms existing methods like MUSIC, SPICE, and SBL in terms of resolution probability and DOA estimation accuracy, especially when the angular separation between targets is small. Additionally, the proposed method does not require any hyperparameter tuning, is robust to varying snapshot numbers, and has a lower computational complexity. Finally, real hydrophone data from the SWellEx-96 ocean experiment validates the effectiveness of the proposed method in practical underwater acoustic environments. Full article
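
Because the method above ultimately hands a reconstructed covariance matrix to Root-MUSIC, the sketch below shows a compact Root-MUSIC implementation for a half-wavelength uniform linear array driven by a sample covariance matrix; the neural covariance reconstruction and arbitrary-geometry handling are not reproduced, and the scenario is a toy assumption.

```python
import numpy as np

def root_music(R, n_sources, M):
    """R: (M, M) sample covariance of a half-wavelength ULA; returns DOAs in degrees."""
    _, eigvec = np.linalg.eigh(R)
    En = eigvec[:, : M - n_sources]                 # noise subspace (smallest eigenvalues)
    C = En @ En.conj().T
    # Root-MUSIC polynomial: coefficients are the summed diagonals of C
    coeffs = np.array([np.trace(C, offset=k) for k in range(M - 1, -M, -1)])
    roots = np.roots(coeffs)
    roots = roots[np.abs(roots) < 1.0]              # keep roots inside the unit circle
    roots = roots[np.argsort(np.abs(np.abs(roots) - 1.0))][:n_sources]
    return np.rad2deg(np.arcsin(np.angle(roots) / np.pi))

rng = np.random.default_rng(1)
M, L = 10, 200
doas = np.deg2rad([-10.0, 15.0])
A = np.exp(1j * np.pi * np.arange(M)[:, None] * np.sin(doas)[None, :])
S = (rng.standard_normal((2, L)) + 1j * rng.standard_normal((2, L))) / np.sqrt(2)
X = A @ S + 0.05 * (rng.standard_normal((M, L)) + 1j * rng.standard_normal((M, L)))
R = X @ X.conj().T / L
print(np.sort(root_music(R, 2, M)))                 # close to [-10, 15]
```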

20 pages, 2723 KiB  
Article
Source Range Estimation Using Linear Frequency-Difference Matched Field Processing in a Shallow Water Waveguide
by Penghua Song, Haozhong Wang, Bolin Su, Liang Wang and Wei Gao
Remote Sens. 2024, 16(18), 3529; https://doi.org/10.3390/rs16183529 - 23 Sep 2024
Viewed by 910
Abstract
Matched field processing (MFP) is an established technique for source localization in known multipath acoustic environments. Unfortunately, in many situations, imperfect knowledge of the actual propagation environment and sidelobes due to modal interference prevent accurate propagation modeling and source localization via MFP. To suppress the sidelobes and improve the method’s robustness, a linear frequency-difference matched field processing (LFDMFP) method for estimating the source range is proposed. A high-order cross-spectrum at two neighboring frequencies, between the measurement and the replica of each hydrophone of the vertical line array, is first computed. The cost function is then derived from the dual summation, or double integral, of the high-order cross-spectrum with respect to the hydrophone depths and the candidate sources of the replicas, where the range corresponding to the minimum is the optimal estimate. Because of the larger modal interference distances, LFDMFP efficiently provides a single optimal range within a given search interval, in contrast to conventional matched field processing. The efficiency of the presented method was verified using simulations and experiments: LFDMFP unambiguously estimated the source range in two experimental datasets, with average relative errors of 2.2% and 1.9%. Full article

18 pages, 4910 KiB  
Article
Target Motion Parameters Estimation by Full-Plane Hyperbola-Warping Transform with a Single Hydrophone
by Yuzheng Li, Bo Gao, Zhuo Chen, Yueqi Yu, Zhennan Wang and Dazhi Gao
Remote Sens. 2024, 16(17), 3307; https://doi.org/10.3390/rs16173307 - 5 Sep 2024
Viewed by 822
Abstract
In this paper, to counteract the sensitivity of the traditional Hough transform to noise and the resulting fluctuations in parameter estimation, we propose a hyperbolic warping transform that integrates all interference fringes in the time–frequency domain to accurately estimate target motion parameters from a single hydrophone. The method can accurately estimate the target motion parameters, including the time of the closest point of approach (tCPA), the ratio of the closest distance to the speed (b = rCPA/v), and the waveguide invariant (β). The two algorithms are compared using simulations and sea trial experiments: the hyperbola-warping transform improves noise immunity by 10 dB in the simulations and increases the detection range by 20% in the sea trials, demonstrating that the proposed method offers better noise resistance and practicality. Full article

23 pages, 3642 KiB  
Article
A Novel Chirp-Z Transform Algorithm for Multi-Receiver Synthetic Aperture Sonar Based on Range Frequency Division
by Mingqiang Ning, Heping Zhong, Jinsong Tang, Haoran Wu, Jiafeng Zhang, Peng Zhang and Mengbo Ma
Remote Sens. 2024, 16(17), 3265; https://doi.org/10.3390/rs16173265 - 3 Sep 2024
Cited by 2 | Viewed by 1171
Abstract
When a synthetic aperture sonar (SAS) system operates under low-frequency broadband conditions, the azimuth range coupling of the point target reference spectrum (PTRS) is severe, and the high-resolution imaging range is limited. To solve the above issue, we first convert multi-receivers’ signal into the equivalent monostatic signal and then divide the equivalent monostatic signal into range subblocks and the range frequency subbands within each range subblock in order. The azimuth range coupling terms are converted into linear terms based on piece-wise linear approximation (PLA), and the phase error of the PTRS within each subband is less than π/4. Then, we use the chirp-z transform (CZT) to correct range cell migration (RCM) to obtain low-resolution results for different subbands. After RCM correction, the subbands’ signals are coherently summed in the range frequency domain to obtain a high-resolution image. Finally, different subblocks are concatenated in the range time domain to obtain the final result of the whole swath. The processing of different subblocks and different subbands can be implemented in parallel. Computer simulation experiments and field data have verified the superiority of the proposed method over existing methods. Full article
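
For readers unfamiliar with the chirp-z transform (CZT) used above, the snippet below "zooms" the CZT onto a narrow frequency band with finer spacing than a plain FFT; it illustrates the transform only, not the paper's subband RCM-correction pipeline, and it assumes SciPy 1.8 or later for scipy.signal.czt.

```python
import numpy as np
from scipy.signal import czt   # available in SciPy >= 1.8 (an assumption about the install)

fs, n = 1000.0, 1024
t = np.arange(n) / fs
x = np.cos(2 * np.pi * 101.3 * t) + np.cos(2 * np.pi * 103.7 * t)

# evaluate the z-transform on m points along the unit-circle arc from 95 Hz to 110 Hz
f1, f2, m = 95.0, 110.0, 512
w = np.exp(-2j * np.pi * (f2 - f1) / (m * fs))   # ratio between successive evaluation points
a = np.exp(2j * np.pi * f1 / fs)                 # starting point on the unit circle
X = czt(x, m=m, w=w, a=a)

freqs = f1 + np.arange(m) * (f2 - f1) / m
print(freqs[np.argsort(np.abs(X))[-4:]])         # strongest bins sit near 101.3 and 103.7 Hz
```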

22 pages, 15192 KiB  
Article
Joint Luminance-Saliency Prior and Attention for Underwater Image Quality Assessment
by Zhiqiang Lin, Zhouyan He, Chongchong Jin, Ting Luo and Yeyao Chen
Remote Sens. 2024, 16(16), 3021; https://doi.org/10.3390/rs16163021 - 17 Aug 2024
Cited by 3 | Viewed by 1582
Abstract
Underwater images, as a crucial medium for storing ocean information in underwater sensors, play a vital role in various underwater tasks. However, they are prone to distortion due to the imaging environment, leading to a decline in visual quality, an urgent issue that various marine vision systems must address. Therefore, it is necessary to develop underwater image enhancement (UIE) methods and corresponding quality assessment methods. At present, most underwater image quality assessment (UIQA) methods rely primarily on extracting handcrafted features that characterize degradation attributes; these struggle to measure complex mixed distortions and often exhibit discrepancies with human visual perception in practical applications. Furthermore, current UIQA methods lack consideration of the perceptual effect of enhancement. To this end, this paper, for the first time, employs luminance and saliency priors as critical visual information to measure the global and local quality of the enhancement achieved by UIE algorithms; the proposed method is named JLSAU. JLSAU is built upon an overall pyramid-structured backbone, supplemented by the Luminance Feature Extraction Module (LFEM) and the Saliency Weight Learning Module (SWLM), which aim to obtain perception features with luminance and saliency priors at multiple scales. The luminance priors serve to perceive visually sensitive global luminance distortion, including histogram statistical features and grayscale features with positional information. The saliency priors serve to perceive visual information that reflects local quality variation in both the spatial and channel domains. Finally, to effectively model the relationship among the different levels of visual information contained in the multi-scale features, the Attention Feature Fusion Module (AFFM) is proposed. Experimental results on the public UIQE and UWIQA datasets demonstrate that the proposed JLSAU outperforms existing state-of-the-art UIQA methods. Full article

Review


33 pages, 3684 KiB  
Review
Artificial Intelligence-Based Underwater Acoustic Target Recognition: A Survey
by Sheng Feng, Shuqing Ma, Xiaoqian Zhu and Ming Yan
Remote Sens. 2024, 16(17), 3333; https://doi.org/10.3390/rs16173333 - 8 Sep 2024
Cited by 2 | Viewed by 5979
Abstract
Underwater acoustic target recognition has always played a pivotal role in ocean remote sensing. By analyzing and processing ship-radiated signals, it is possible to determine the type and nature of a target. Historically, traditional signal processing techniques have been employed for target recognition in underwater environments, which often exhibit limitations in accuracy and efficiency. In response to these limitations, the integration of artificial intelligence (AI) methods, particularly those leveraging machine learning and deep learning, has attracted increasing attention in recent years. Compared to traditional methods, these intelligent recognition techniques can autonomously, efficiently, and accurately identify underwater targets. This paper comprehensively reviews the contributions of intelligent techniques in underwater acoustic target recognition and outlines potential future directions, offering a forward-looking perspective on how ongoing advancements in AI can further revolutionize underwater acoustic target recognition in ocean remote sensing. Full article

Other


13 pages, 4861 KiB  
Technical Note
Research on 2-D Direction of Arrival (DOA) Estimation for an L-Shaped Array
by Kun Ye, Lang Zhou, Shaohua Hong, Xuebo Zhang and Haixin Sun
Remote Sens. 2024, 16(24), 4787; https://doi.org/10.3390/rs16244787 - 22 Dec 2024
Cited by 2 | Viewed by 696
Abstract
Lately, there has been a significant increase in interest in coprime array configurations, as they offer the advantage of generating more extensive array apertures and enhanced degrees of freedom when contrasted with standard linear arrays. This document introduces an innovative two-dimensional direction-of-arrival (2-D DOA) estimation technique founded on the zero-completion principle. In particular, our initial step involves interpolating the synthetic co-array signals to achieve completion, followed by the regeneration of the covariance matrix utilizing the interpolated synthetic signals. Then, the 2-D angle estimation is realized based on the complemented matrix using the auto-pairing method. The computational modeling outcomes indicate that the suggested method demonstrates superior angular discriminability. Moreover, this method excels in estimation accuracy when contrasted with its algorithmic counterparts. Full article
