Advancement in Undersea Remote Sensing

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Ocean Remote Sensing".

Deadline for manuscript submissions: closed (31 October 2023) | Viewed by 31089

Special Issue Editors


Dr. Bing Ouyang
Guest Editor
Harbor Branch Oceanographic Institute, Florida Atlantic University, Fort Pierce, FL, USA
Interests: underwater imaging applications; computer vision in underwater laser imaging applications; real-time environmental monitoring and events detection; application of electro-optic imaging numerical model and deconvolution technique in image enhancement and pulse resolution improvements

Dr. Fraser Dalgleish
Guest Editor
L3Harris Technologies, Space & Airborne Systems, NASA Boulevard, Melbourne, FL 32919, USA
Interests: underwater imaging applications; computer vision in underwater laser imaging applications; real-time environmental monitoring and events detection; application of electro-optic imaging numerical model and deconvolution technique in image enhancement and pulse resolution improvements

Special Issue Information

Gaining a better understanding of the marine environment has been a primary aim for humanity since ancient times. However, it is only over the last several decades, enabled by the ongoing microelectronics and computing revolution, that significant progress has been made in developing the platforms, sensors, and other related technologies needed to overcome the opaque barrier between humans and the underwater world. Indeed, our desire to explore the ocean has spawned a plethora of advanced undersea remote sensing techniques and technologies that continue to grow rapidly, and this Special Issue focuses on compiling a balanced collection of papers that detail the most recent advancements in this area.

Submissions of original research articles, review articles, and case studies that present new contributions to the advancement of underwater remote sensing are invited. Theoretical and experimental contributions, original and review studies, and both industrial and university research are welcome.

The main topics of interest include, but are not limited to, the following:

  • Underwater robotics and platforms;
  • Underwater sonar technology;
  • Underwater optical and acoustical communications;
  • Underwater lidar sensors and imagers;
  • Underwater signal processing and image enhancements;
  • Underwater turbulence sensing;
  • Marine species detection and identification;
  • Aquaculture monitoring systems;
  • Machine learning for undersea remote sensing.

Dr. Bing Ouyang
Dr. Fraser Dalgleish 
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Underwater robotics
  • Undersea remote sensing
  • Underwater lidar
  • Machine learning
  • Aquaculture
  • Marine species detection

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found on the MDPI website.

Published Papers (11 papers)


Research

19 pages, 4686 KiB  
Article
Small-Sample Underwater Target Detection: A Joint Approach Utilizing Diffusion and YOLOv7 Model
by Chensheng Cheng, Xujia Hou, Xin Wen, Weidong Liu and Feihu Zhang
Remote Sens. 2023, 15(19), 4772; https://doi.org/10.3390/rs15194772 - 29 Sep 2023
Cited by 5 | Viewed by 1980
Abstract
Underwater target detection technology plays a crucial role in the autonomous exploration of underwater vehicles. In recent years, significant progress has been made in the field of target detection through the application of artificial intelligence technology. Effectively applying AI techniques to underwater target detection is a highly promising area of research. However, the difficulty and high cost of underwater acoustic data collection have led to a severe lack of data, greatly restricting the development of deep-learning-based target detection methods. The present study is the first to utilize diffusion models for generating underwater acoustic data, thereby effectively addressing the issue of poor detection performance arising from the scarcity of underwater acoustic data. Firstly, we place iron cylinders and cones underwater (simulating small preset targets such as mines). Subsequently, we employ an autonomous underwater vehicle (AUV) equipped with side-scan sonar (SSS) to obtain underwater target data. The collected target data are augmented using the denoising diffusion probabilistic model (DDPM). Finally, the augmented data are used to train an improved YOLOv7 model, and its detection performance is evaluated on a test set. The results demonstrate the effectiveness of the proposed method in generating similar data and overcoming the challenge of limited training sample data. Compared to models trained solely on the original data, the model trained with augmented data shows a mean average precision (mAP) improvement of approximately 30% across various mainstream detection networks. Additionally, compared to the original model, the improved YOLOv7 model proposed in this study exhibits a 2% increase in mAP on the underwater dataset. Full article
(This article belongs to the Special Issue Advancement in Undersea Remote Sensing)
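
As a rough sketch of the augmentation pipeline described above (not the authors' code), the snippet below samples synthetic side-scan-sonar-like chips from a diffusion model and then trains a detector on the mixed real-plus-synthetic set. It assumes a DDPM checkpoint fine-tuned on the collected sonar chips (the path is a placeholder), assumes label files already exist for the YOLO-format dataset, and uses the Ultralytics YOLO API as a stand-in for the improved YOLOv7 detector.

```python
# Sketch: augment a scarce sonar dataset with DDPM samples, then train a detector.
# Assumes a DDPM checkpoint fine-tuned on real side-scan sonar chips (path is a
# placeholder) and uses Ultralytics YOLO as a stand-in for the improved YOLOv7.
from pathlib import Path

from diffusers import DDPMPipeline      # pip install diffusers
from ultralytics import YOLO            # pip install ultralytics

synthetic_dir = Path("datasets/sonar/images/train_synthetic")
synthetic_dir.mkdir(parents=True, exist_ok=True)

# 1) Sample synthetic sonar-like chips from the trained diffusion model.
pipe = DDPMPipeline.from_pretrained("path/to/ddpm-sss-checkpoint")   # placeholder path
for i in range(200):
    image = pipe(batch_size=1).images[0]              # PIL.Image
    image.save(synthetic_dir / f"ddpm_{i:04d}.png")

# 2) Train the detector on the mixed real + synthetic set described in data.yaml
#    (corresponding YOLO-format label files are assumed to be prepared).
model = YOLO("yolov8n.pt")                            # stand-in for the improved YOLOv7
model.train(data="datasets/sonar/data.yaml", epochs=100, imgsz=640)
metrics = model.val()                                 # mAP on the held-out test split
print(metrics.box.map50)
```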

19 pages, 12773 KiB  
Article
Water Surface Acoustic Wave Detection by a Millimeter Wave Radar
by Yuming Zeng, Siyi Shen and Zhiwei Xu
Remote Sens. 2023, 15(16), 4022; https://doi.org/10.3390/rs15164022 - 14 Aug 2023
Cited by 4 | Viewed by 2644
Abstract
Feature extraction and recognition of underwater targets are important in military and civilian areas. This paper studied water surface acoustic wave (WSAW) detection by a millimeter wave (mmWave) radar. An mmWave-based endpoint detection method for the WSAW is introduced. Simulated results show that the continuous wavelet transform (CWT) method has better detection performance. A 77 GHz large-aperture-antenna mmWave radar sensor and an underwater acoustic transmitter were used to conduct laboratory experiments. Still-water-surface experimental results verify that the CWT method has better detection capability and that the mmWave radar can accurately detect WSAWs with amplitudes as small as 155 nm. Wavy-water-surface experimental results demonstrate the ability of the mmWave radar to analyze the time–frequency features of the weak WSAW signal. This work indicates the potential of mmWave radar for the cross-medium detection and recognition of underwater targets. Full article
(This article belongs to the Special Issue Advancement in Undersea Remote Sensing)
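
As a rough illustration of CWT-based endpoint (onset) detection, not the authors' implementation, the sketch below builds a synthetic radar-measured surface-displacement record containing a weak acoustic burst, computes its continuous wavelet transform with PyWavelets, and flags the onset where band-limited wavelet energy exceeds a noise-floor threshold; the sampling rate, acoustic frequency, and threshold are illustrative assumptions.

```python
import numpy as np
import pywt                                      # pip install PyWavelets

rng = np.random.default_rng(0)

# Synthetic surface-displacement record: noise, then a weak 500 Hz WSAW burst.
fs = 20_000.0                                    # assumed sampling rate, Hz
t = np.arange(0.0, 0.2, 1.0 / fs)
x = 2e-3 * rng.standard_normal(t.size)           # measurement noise (a.u.)
onset = 0.12                                     # true burst onset, s
x[t >= onset] += 5e-3 * np.sin(2 * np.pi * 500.0 * t[t >= onset])

# CWT restricted to the acoustic band of interest (300-700 Hz, Morlet wavelet).
freqs = np.linspace(300.0, 700.0, 40)
scales = pywt.central_frequency("morl") * fs / freqs
coefs, _ = pywt.cwt(x, scales, "morl", sampling_period=1.0 / fs)

# Endpoint (onset) detection: band-limited energy versus a noise-floor threshold.
band_energy = np.mean(np.abs(coefs) ** 2, axis=0)
noise_floor = band_energy[t < 0.05].mean()
hits = t[band_energy > 10.0 * noise_floor]
print(f"estimated onset: {hits[0]:.4f} s (true {onset} s)" if hits.size else "no detection")
```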

19 pages, 21829 KiB  
Article
DBFNet: A Dual-Branch Fusion Network for Underwater Image Enhancement
by Kaichuan Sun and Yubo Tian
Remote Sens. 2023, 15(5), 1195; https://doi.org/10.3390/rs15051195 - 21 Feb 2023
Cited by 6 | Viewed by 2608
Abstract
Due to the absorption and scattering effects of light propagating through water, underwater images inevitably suffer from severe degradation, such as color casts and losses of detail. Many existing deep learning-based methods have demonstrated superior performance for underwater image enhancement (UIE). However, accurate color correction and detail restoration still present considerable challenges for UIE. In this work, we develop a dual-branch fusion network, dubbed the DBFNet, to eliminate the degradation of underwater images. We first design a triple-color channel separation learning branch (TCSLB), which balances the color distribution of underwater images by learning the independent features of the different channels of the RGB color space. Subsequently, we develop a wavelet domain learning branch (WDLB) and design a discrete wavelet transform-based attention residual dense module to fully employ the wavelet domain information of the image to restore clear details. Finally, a dual attention-based selective fusion module (DASFM) is designed for the adaptive fusion of latent features of the two branches, in which both pleasing colors and diverse details are integrated. Extensive quantitative and qualitative evaluations of synthetic and real-world underwater datasets demonstrate that the proposed DBFNet significantly improves the visual quality and shows superior performance to the compared methods. Furthermore, the ablation experiments demonstrate the effectiveness of each component of the DBFNet. Full article
(This article belongs to the Special Issue Advancement in Undersea Remote Sensing)
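
To make the dual-branch idea more tangible, here is a heavily simplified PyTorch sketch, a toy stand-in rather than the authors' DBFNet: one branch learns independent features for the R, G, and B channels, the other extracts high-frequency detail through a fixed Haar wavelet filter bank, and a small channel-attention gate fuses the two before reconstruction; the layer widths and attention form are illustrative assumptions.

```python
# Toy dual-branch sketch (not the authors' DBFNet).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelBranch(nn.Module):
    """Learns independent features for the R, G and B channels."""
    def __init__(self, feat=16):
        super().__init__()
        self.encoders = nn.ModuleList([
            nn.Sequential(nn.Conv2d(1, feat, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU())
            for _ in range(3)])

    def forward(self, x):                                  # x: (B, 3, H, W)
        return torch.cat([enc(x[:, i:i + 1]) for i, enc in enumerate(self.encoders)], dim=1)

class HaarDetailBranch(nn.Module):
    """Extracts high-frequency detail with a fixed Haar wavelet filter bank."""
    def __init__(self, feat=16):
        super().__init__()
        haar = torch.tensor([[[0.5, 0.5], [-0.5, -0.5]],   # horizontal detail
                             [[0.5, -0.5], [0.5, -0.5]],   # vertical detail
                             [[0.5, -0.5], [-0.5, 0.5]]])  # diagonal detail
        self.register_buffer("filters", haar.unsqueeze(1)) # (3, 1, 2, 2)
        self.refine = nn.Sequential(nn.Conv2d(9, 3 * feat, 3, padding=1), nn.ReLU())

    def forward(self, x):                                  # x: (B, 3, H, W)
        b, c, h, w = x.shape
        sub = F.conv2d(x.reshape(b * c, 1, h, w), self.filters, stride=2)
        sub = F.interpolate(sub.reshape(b, 3 * c, h // 2, w // 2), size=(h, w), mode="nearest")
        return self.refine(sub)

class DualBranchFusion(nn.Module):
    """Fuses both branches with a channel-attention gate and reconstructs RGB."""
    def __init__(self, feat=16):
        super().__init__()
        self.color, self.detail = ChannelBranch(feat), HaarDetailBranch(feat)
        self.gate = nn.Sequential(nn.AdaptiveAvgPool2d(1),
                                  nn.Conv2d(6 * feat, 6 * feat, 1), nn.Sigmoid())
        self.out = nn.Conv2d(6 * feat, 3, 3, padding=1)

    def forward(self, x):
        f = torch.cat([self.color(x), self.detail(x)], dim=1)
        return torch.sigmoid(self.out(f * self.gate(f)))

net = DualBranchFusion()
print(net(torch.rand(1, 3, 64, 64)).shape)                 # torch.Size([1, 3, 64, 64])
```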

21 pages, 8274 KiB  
Article
A Texture Feature Removal Network for Sonar Image Classification and Detection
by Chuanlong Li, Xiufen Ye, Jier Xi and Yunpeng Jia
Remote Sens. 2023, 15(3), 616; https://doi.org/10.3390/rs15030616 - 20 Jan 2023
Cited by 11 | Viewed by 2490
Abstract
Deep neural networks (DNNs) have been applied to sonar image target recognition tasks, but it is very difficult to obtain enough sonar images that contain a target; as a result, directly training a DNN on a small amount of data will cause overfitting and other problems. Transfer learning is the most effective way to address such scenarios. However, there is a large domain gap between optical images and sonar images, and common transfer learning methods may not be able to handle it effectively. In this paper, we propose a transfer learning method for sonar image classification and object detection called the texture feature removal network. We regard the texture features of an image as domain-specific features, and we narrow the domain gap by discarding these domain-specific features, which makes it easier to complete knowledge transfer. Our method can be easily embedded into other transfer learning methods, which makes it easier to apply to different application scenarios. Experimental results show that our method is effective in side-scan sonar image classification tasks and forward-looking sonar image detection tasks. For side-scan sonar image classification tasks, the classification accuracy of our method is enhanced by 4.5% in a supervised learning experiment, and for forward-looking sonar detection tasks, the average precision (AP) is also significantly improved. Full article
(This article belongs to the Special Issue Advancement in Undersea Remote Sensing)
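
The abstract treats texture/style statistics as domain-specific features to be discarded. As an informal, generic illustration of that intuition (not the authors' texture feature removal network), the sketch below applies instance normalization to a backbone feature map so that per-image style statistics are removed before classification; the backbone, layer choice, and class count are assumptions.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class StyleSuppressedClassifier(nn.Module):
    """Backbone features -> instance norm (drops per-image style/texture stats) -> classifier."""
    def __init__(self, num_classes=2):
        super().__init__()
        backbone = resnet18(weights=None)             # optical-pretrained weights could be loaded here
        self.features = nn.Sequential(*list(backbone.children())[:-2])   # (B, 512, h, w) feature map
        self.style_removal = nn.InstanceNorm2d(512, affine=False)        # discard instance statistics
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(512, num_classes))

    def forward(self, x):
        return self.head(self.style_removal(self.features(x)))

model = StyleSuppressedClassifier(num_classes=2)      # e.g. target vs. background chips
print(model(torch.rand(4, 3, 224, 224)).shape)        # torch.Size([4, 2])
```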

14 pages, 40895 KiB  
Article
Underwater Hyperspectral Imaging System with Liquid Lenses
by Bohan Liu, Shaojie Men, Zhongjun Ding, Dewei Li, Zhigang Zhao, Jiahao He, Haochen Ju, Mengling Shen, Qiuyuan Yu and Zhaojun Liu
Remote Sens. 2023, 15(3), 544; https://doi.org/10.3390/rs15030544 - 17 Jan 2023
Cited by 6 | Viewed by 3412
Abstract
The underwater hyperspectral imager enables the detection and identification of targets on the seafloor by collecting high-resolution spectral images. In real operation, the distance between the hyperspectral imager and the targets cannot be kept consistent due to factors such as motion and fluctuating terrain, resulting in unfocused images and negative effects on identification. In this paper, we developed a novel integrated underwater hyperspectral imaging (UHI) system for deep-sea surveys and proposed an autofocus strategy based on liquid lens focusing transfer. The calibration tests provided a clear focus result for hyperspectral transects and a global spectral resolution of less than 7 nm in the spectral range from 400 to 800 nm. The prototype was used to obtain spectral and image information of manganese nodules and four other rocks in a laboratory environment. Classification of the five kinds of minerals was successfully achieved using a support vector machine. We tested the UHI prototype in the deep sea and observed a Psychropotidae specimen on the sediment from the in situ hyperspectral images. The results show that the prototype developed here can accurately and stably obtain hyperspectral data and has potential applications for in situ deep-sea exploration. Full article
(This article belongs to the Special Issue Advancement in Undersea Remote Sensing)
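
As a minimal illustration of the classification step (not the authors' data or code), the sketch below trains a support vector machine on synthetic per-pixel reflectance spectra spanning 400–800 nm and reports hold-out accuracy; the spectra, class count, and SVM hyperparameters are assumptions.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
wavelengths = np.linspace(400.0, 800.0, 100)            # 400-800 nm, ~4 nm bins

def fake_spectra(center_nm, n):
    """Synthetic reflectance curves standing in for per-pixel UHI spectra."""
    base = np.exp(-((wavelengths - center_nm) / 80.0) ** 2)
    return base + 0.05 * rng.standard_normal((n, wavelengths.size))

# Five synthetic "mineral" classes, 200 pixels each.
X = np.vstack([fake_spectra(c, 200) for c in (500, 560, 620, 680, 740)])
y = np.repeat(np.arange(5), 200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
clf.fit(X_tr, y_tr)
print("hold-out accuracy:", clf.score(X_te, y_te))
```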

20 pages, 68353 KiB  
Article
UIR-Net: A Simple and Effective Baseline for Underwater Image Restoration and Enhancement
by Xinkui Mei, Xiufen Ye, Xiaofeng Zhang, Yusong Liu, Junting Wang, Jun Hou and Xuli Wang
Remote Sens. 2023, 15(1), 39; https://doi.org/10.3390/rs15010039 - 22 Dec 2022
Cited by 17 | Viewed by 3080
Abstract
Because of the unique physical and chemical properties of water, obtaining high-quality underwater images directly is not easy. Hence, recovery and enhancement are indispensable steps in underwater image processing and have therefore become research hotspots. Nevertheless, existing image-processing methods generally have high complexity and are difficult to deploy on underwater platforms with limited computing resources. To tackle this issue, this paper proposes a simple and effective baseline named UIR-Net that can recover and enhance underwater images simultaneously. The network uses a channel residual prior to extract the channel information of the image to be recovered, combined with a gradient strategy that reduces parameters and training time to make the operation more lightweight. This method can improve the color performance while maintaining the style and spatial texture of the contents. Through experiments on three datasets (MSRB, MSIRB and UIEBD-Snow), we confirm that UIR-Net can recover clear underwater images from original images with large particle impurities and ocean light spots. Compared to other state-of-the-art methods, UIR-Net can recover underwater images at a similar or higher quality with a significantly lower number of parameters, which is valuable in real-world applications. Full article
(This article belongs to the Special Issue Advancement in Undersea Remote Sensing)
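
The abstract does not spell out the channel residual prior, so the following is only one plausible reading (an assumption, not the authors' definition): a per-pixel residual between the strongly attenuated red channel and the better-preserved green/blue channels, producing a single-channel prior map that could accompany the degraded image as network input.

```python
import numpy as np

def channel_residual_prior(img_rgb):
    """Per-pixel gap between the attenuated red channel and the better-preserved
    green/blue channels of a float RGB image in [0, 1]; returns a single-channel map."""
    r, g, b = img_rgb[..., 0], img_rgb[..., 1], img_rgb[..., 2]
    return np.clip(np.maximum(g, b) - r, 0.0, 1.0)

img = np.random.default_rng(0).random((4, 4, 3))   # stand-in for a degraded underwater image
print(channel_residual_prior(img).shape)           # (4, 4)
```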

17 pages, 28174 KiB  
Article
Sonar Image Target Detection Based on Style Transfer Learning and Random Shape of Noise under Zero Shot Target
by Jier Xi, Xiufen Ye and Chuanlong Li
Remote Sens. 2022, 14(24), 6260; https://doi.org/10.3390/rs14246260 - 10 Dec 2022
Cited by 10 | Viewed by 2526
Abstract
With the development of sonar technology, sonar images have been widely used to detect targets. However, sonar images present many challenges for object detection. For example, the detectable targets in sonar data are sparser than those in optical images, real underwater scanning experiments are complicated, and the image styles produced by different types of sonar equipment are inconsistent due to their different characteristics, which makes the data difficult to use in sonar object detection and recognition algorithms. To solve these problems, we propose a novel sonar image object-detection method based on style learning and random noise of various shapes. Sonar-style target sample images are generated through style transfer, which augments the insufficient set of sonar object images. By introducing noise of various shapes, including points, lines, and rectangles, the problems of mud and sand obstruction and mutilated targets in real environments are addressed, and the limited poses of sonar image targets are enriched by fusing the multiple poses of optical image targets. In addition, a feature enhancement method is proposed to solve the issue of missing key features when applying style transfer directly to optical images. The experimental results show that our method achieves better precision. Full article
(This article belongs to the Special Issue Advancement in Undersea Remote Sensing)
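
As a small illustration of the random-shape noise idea (not the authors' implementation), the sketch below occludes a training image with random points, lines, and rectangles using OpenCV; the shape counts, sizes, and intensities are arbitrary assumptions.

```python
import numpy as np
import cv2                                            # pip install opencv-python

rng = np.random.default_rng(0)

def add_random_shape_noise(img, n_shapes=5):
    """Occlude a grayscale training image with random points, lines and rectangles,
    imitating mud/sand obstruction and mutilated targets."""
    out = img.copy()
    h, w = out.shape[:2]
    for _ in range(n_shapes):
        kind = int(rng.integers(3))
        val = int(rng.integers(0, 256))
        p0 = (int(rng.integers(w)), int(rng.integers(h)))
        p1 = (int(rng.integers(w)), int(rng.integers(h)))
        if kind == 0:                                 # point-like speckle
            cv2.circle(out, p0, int(rng.integers(1, 4)), val, -1)
        elif kind == 1:                               # line occlusion
            cv2.line(out, p0, p1, val, int(rng.integers(1, 3)))
        else:                                         # rectangular occlusion
            cv2.rectangle(out, p0, p1, val, -1)
    return out

noisy = add_random_shape_noise(np.full((128, 128), 127, dtype=np.uint8))
print(noisy.shape, noisy.dtype)                       # (128, 128) uint8
```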

17 pages, 9840 KiB  
Article
DP-ViT: A Dual-Path Vision Transformer for Real-Time Sonar Target Detection
by Yushan Sun, Haotian Zheng, Guocheng Zhang, Jingfei Ren, Hao Xu and Chao Xu
Remote Sens. 2022, 14(22), 5807; https://doi.org/10.3390/rs14225807 - 17 Nov 2022
Cited by 8 | Viewed by 2859
Abstract
Sonar images are the main way for underwater vehicles to obtain environmental information. Target detection in sonar images can distinguish multi-class targets in real time and locate them accurately, providing perception information for the decision-making system of underwater vehicles. However, sonar image target detection faces many challenges, such as the variety of sonar types, complex and severe noise interference in the images, and scarce datasets. This paper proposes a sonar image target detection method based on a Dual-Path Vision Transformer Network (DP-ViT) to accurately detect targets in forward-looking sonar and side-scan sonar. DP-ViT increases the receptive field by adding multiple scales to the patch embedding, enhances the feature-extraction ability of the model with a Dual-Path Transformer Block, introduces Conv-Attention to reduce the number of training parameters, and finally uses Generalized Focal Loss to address the imbalance between positive and negative samples. The experimental results show that the performance of this sonar target detection method is superior to other mainstream methods on both a forward-looking sonar dataset and a side-scan sonar dataset, and it can also maintain good performance when noise is added. Full article
(This article belongs to the Special Issue Advancement in Undersea Remote Sensing)
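
Generalized Focal Loss, which the abstract adopts for the positive/negative imbalance, centers on a quality focal loss term. The sketch below follows the published formulation of that term (Li et al., 2020) in PyTorch; it is a generic illustration, and the anchor layout and quality targets are placeholders rather than DP-ViT specifics.

```python
import torch
import torch.nn.functional as F

def quality_focal_loss(logits, quality_targets, beta=2.0):
    """Quality Focal Loss term from Generalized Focal Loss (Li et al., 2020):
    sigmoid cross entropy weighted by |y - sigma|^beta, where y is a soft
    localization-quality target; easy negatives are strongly down-weighted."""
    sigma = logits.sigmoid()
    weight = (quality_targets - sigma).abs().pow(beta)
    bce = F.binary_cross_entropy_with_logits(logits, quality_targets, reduction="none")
    return (weight * bce).mean()

logits = torch.randn(8, 4)            # per-anchor, per-class scores (placeholder layout)
targets = torch.zeros(8, 4)           # mostly negatives ...
targets[0, 1] = 0.87                  # ... one positive with IoU-quality 0.87
print(quality_focal_loss(logits, targets).item())
```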

23 pages, 4816 KiB  
Article
A Technique to Navigate Autonomous Underwater Vehicles Using a Virtual Coordinate Reference Network during Inspection of Industrial Subsea Structures
by Valery Bobkov, Alexey Kudryashov and Alexander Inzartsev
Remote Sens. 2022, 14(20), 5123; https://doi.org/10.3390/rs14205123 - 13 Oct 2022
Cited by 4 | Viewed by 2178
Abstract
Industrial subsea infrastructure inspections using autonomous underwater vehicles (AUVs) require high accuracy of AUV navigation relative to the objects being examined. In addition to traditional navigation tools based on inertial navigation systems and acoustic navigation equipment, technologies based on video information processing are also being actively developed today. Visual-odometry-based techniques can provide higher navigation accuracy for local maneuvering at short distances to objects. However, for long-distance AUV movements, such techniques typically accumulate errors when calculating the AUV movement trajectory. In this regard, the present article considers a navigation technique that increases the accuracy of AUV movements in the coordinate space of the inspected object by using a virtual coordinate reference network. Another aspect of the proposed method is the minimization of computational costs while the AUV moves along the inspection trajectory, by referencing the AUV coordinates to virtual points pre-calculated for the object using the object recognition algorithm. Thus, the use of a network of virtual points for referencing the AUV to subsea objects is aimed at maintaining the required accuracy of AUV coordination during long-distance movement along the inspection trajectory, while minimizing computational costs. Full article
(This article belongs to the Special Issue Advancement in Undersea Remote Sensing)
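
To illustrate the coordinate-referencing step in isolation (a generic sketch, not the authors' algorithm), the code below recovers the rigid transform between the AUV frame and the object frame from a handful of recognized virtual reference points using the standard Kabsch/Procrustes solution; the reference coordinates and the noise-free correspondences are synthetic assumptions.

```python
import numpy as np

def rigid_transform(p_obj, p_auv):
    """Kabsch/Procrustes: least-squares rotation R and translation t such that
    R @ p_auv + t ~= p_obj, from >= 3 non-collinear point correspondences."""
    c_obj, c_auv = p_obj.mean(axis=0), p_auv.mean(axis=0)
    H = (p_auv - c_auv).T @ (p_obj - c_obj)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, c_obj - R @ c_auv

# Virtual reference points known in the inspected object's frame, and the same
# points as currently recognized in the AUV's local frame (synthetic example).
ref_obj = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0], [0.0, 3.0, 0.0], [0.0, 0.0, 1.5]])
true_R = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
true_t = np.array([1.0, -2.0, 0.5])
ref_auv = (ref_obj - true_t) @ true_R                  # object points seen from the AUV

R, t = rigid_transform(ref_obj, ref_auv)
print(np.round(t, 3))        # AUV origin expressed in the object frame -> [ 1. -2.  0.5]
```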

18 pages, 8869 KiB  
Article
An Efficient Method for Detection and Quantitation of Underwater Gas Leakage Based on a 300-kHz Multibeam Sonar
by Wanyuan Zhang, Tian Zhou, Jianghui Li and Chao Xu
Remote Sens. 2022, 14(17), 4301; https://doi.org/10.3390/rs14174301 - 1 Sep 2022
Cited by 8 | Viewed by 3138
Abstract
In recent years, multibeam sonar has become the most effective and sensitive tool for the detection and quantitation of underwater gas leakage and its rise through the water column. Motivated by recent research, this paper presents an efficient method for the detection and quantitation of gas leakage based on a 300-kHz multibeam sonar. In the proposed gas leakage detection method based on multibeam sonar water column images, not only the backscattering strength of the gas bubbles but also the size and aspect ratio of a gas plume are used to isolate interference objects. This paper also presents a volume-scattering strength optimization model to estimate the gas flux. The bubble size distribution, volume, and flux of gas leaks are determined by matching the theoretical and measured values of the volume-scattering strength of the gas bubbles. The efficiency and effectiveness of the proposed method have been verified by a case study at the artificial gas leakage site in the northern South China Sea. The results show that the leaking gas flux is approximately between 29.39 L/min and 56.43 L/min under a bubble radius ranging from 1 mm to 12 mm. The estimated results are in good agreement with the recorded data (32–67 L/min) for gas leaks generated by an air compressor. The experimental results demonstrate that the proposed method can achieve effective and accurate detection and quantitation of gas leakages. Full article
(This article belongs to the Special Issue Advancement in Undersea Remote Sensing)
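
As a back-of-the-envelope illustration of the matching idea, scaling a bubble population so that modelled and measured volume-scattering strength agree, the sketch below uses a geometric (high-ka) backscattering cross-section and an assumed power-law size distribution over the 1–12 mm radius range; all numbers are illustrative, and the authors' optimization model (which also yields flux from the rising bubbles) is considerably more complete.

```python
import numpy as np

# Illustrative inversion (not the authors' full optimization model): scale an
# assumed power-law bubble-size distribution so that the modelled volume
# backscattering coefficient matches a measured value, then convert the bubble
# population to a gas volume fraction. Cross-section model, distribution shape
# and measured Sv are all assumptions for the sketch.
radii = np.linspace(1e-3, 12e-3, 50)            # bubble radius, 1-12 mm
da = radii[1] - radii[0]
sigma_bs = radii ** 2 / 4.0                     # geometric (high-ka) backscatter, m^2
shape = radii ** -1.5                           # assumed relative size distribution

sv_measured_db = -45.0                          # assumed measured Sv, dB re 1 m^-1
sv_measured = 10.0 ** (sv_measured_db / 10.0)   # linear volume backscattering, 1/m

# n(a) = k * shape(a); choose k so that sum(n * sigma_bs) * da = sv_measured
k = sv_measured / np.sum(shape * sigma_bs * da)
n = k * shape                                   # bubbles per m^3 per m of radius

gas_fraction = np.sum(n * (4.0 / 3.0) * np.pi * radii ** 3 * da)
print(f"gas volume fraction: {gas_fraction:.2e} (m^3 gas per m^3 water)")
```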

15 pages, 3821 KiB  
Article
Imbalanced Underwater Acoustic Target Recognition with Trigonometric Loss and Attention Mechanism Convolutional Network
by Yanxin Ma, Mengqi Liu, Yi Zhang, Bingbing Zhang, Ke Xu, Bo Zou and Zhijian Huang
Remote Sens. 2022, 14(16), 4103; https://doi.org/10.3390/rs14164103 - 21 Aug 2022
Cited by 13 | Viewed by 2216
Abstract
A balanced dataset is generally beneficial to underwater acoustic target recognition. However, imbalanced class distributions are routinely encountered in real scenes. To address this, a weighted cross-entropy loss function based on a trigonometric function is proposed. The proposed loss function is then applied in a multi-scale residual convolutional neural network (named the MR-CNN-A network) embedded with an attention mechanism for the recognition task. Firstly, multi-scale convolution kernels are used to obtain multi-scale features. Then, an attention mechanism is used to fuse these multi-scale feature maps. Furthermore, a cos(x)-weighted cross-entropy loss function is used to deal with the class imbalance in underwater acoustic data. This function adjusts the loss ratio of each sample by adjusting the loss interval of every mini-batch based on the cos(x) term, so as to achieve a balanced total loss for each class. Two imbalanced underwater acoustic datasets, ShipsEar and a self-collected autonomous underwater vehicle (AUV) dataset, are used to evaluate the proposed network. The experimental results show that the proposed network outperforms a support vector machine and a simple convolutional neural network. Compared with three other loss functions, the proposed loss function achieves better stability and adaptability. The results strongly demonstrate the validity of the proposed loss function and the network. Full article
(This article belongs to the Special Issue Advancement in Undersea Remote Sensing)
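
The abstract describes the cos(x)-weighted cross entropy only at a high level, so the snippet below is one plausible reading rather than the authors' exact formulation: classes that dominate the current mini-batch receive a weight of cos(pi/2 x frequency) close to zero, while rare classes keep a weight close to one.

```python
import torch
import torch.nn.functional as F

def cos_weighted_cross_entropy(logits, labels, num_classes):
    """One plausible reading of a cos-based weighted cross entropy (illustrative,
    not necessarily the authors' formulation): per-class weight cos(pi/2 * f_c),
    where f_c is the class frequency within the current mini-batch."""
    counts = torch.bincount(labels, minlength=num_classes).float()
    freq = counts / counts.sum()
    weights = torch.cos(0.5 * torch.pi * freq)
    return F.cross_entropy(logits, labels, weight=weights)

logits = torch.randn(32, 4)                       # mini-batch of class scores
labels = torch.randint(0, 4, (32,))               # imbalanced labels in practice
print(cos_weighted_cross_entropy(logits, labels, num_classes=4).item())
```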