Underwater Engineering and Image Processing

A special issue of Journal of Marine Science and Engineering (ISSN 2077-1312). This special issue belongs to the section "Ocean Engineering".

Deadline for manuscript submissions: closed (25 March 2024) | Viewed by 22518

Special Issue Editors


Dr. Anna Nora Tassetti
Guest Editor
National Research Council (CNR), Institute for Biological Resources and Marine Biotechnologies (IRBIM), 60125 Ancona, Italy
Interests: spatial fishery data; maritime big data analysis and mining for fisheries management; maritime spatial planning; habitat mapping technologies; GIS; seafloor mapping; marine cartography

Prof. Dr. Adriano Mancini
Guest Editor
Department of Information Engineering (DII), Università Politecnica delle Marche, 60131 Ancona, Italy
Interests: machine learning; mobile robotics (UAV, UGV, USV); remote sensing; hyperspectral image analysis; precision farming; geographical information systems (GIS); artificial intelligence; image processing

Dr. Pierluigi Penna
Guest Editor
National Research Council (CNR), Institute for Biological Resources and Marine Biotechnologies (IRBIM), 60125 Ancona, Italy
Interests: simulation and modeling; oceanography; geoinformation; oceanographic measurements; meteo-oceanographic observation platforms; water quality

Special Issue Information

Dear Colleagues,

Consistent biological studies are now possible thanks to developments in both image acquisition devices/platforms (e.g., new cameras and sensors, lighting systems, AUVs, ROVs) and analytical techniques (e.g., underwater computer vision, deep learning-based approaches, multispectral image analysis). These advances provide evidence of how underwater imaging is consolidating its role as a non-destructive methodology for understanding and monitoring the underwater environment and preserving marine biodiversity. They are complemented by many other important applications in both emerging and established fields, such as autonomous underwater vehicle navigation, archaeological surveys, seafloor mapping, and offshore inspections.

In this Special Issue, both methodological and empirical contributions addressing multidisciplinary (e.g., biological, environmental) questions using marine imaging and related approaches are welcome, as are technological advances in underwater imaging and analytics.

Dr. Anna Nora Tassetti
Prof. Dr. Adriano Mancini
Dr. Pierluigi Penna
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Journal of Marine Science and Engineering is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (14 papers)


Research

31 pages, 15223 KiB  
Article
Lightweight Underwater Object Detection Algorithm for Embedded Deployment Using Higher-Order Information and Image Enhancement
by Changhong Liu, Jiawen Wen, Jinshan Huang, Weiren Lin, Bochun Wu, Ning Xie and Tao Zou
J. Mar. Sci. Eng. 2024, 12(3), 506; https://doi.org/10.3390/jmse12030506 - 19 Mar 2024
Viewed by 777
Abstract
Underwater object detection is crucial in marine exploration, presenting a challenging problem in computer vision due to factors like light attenuation, scattering, and background interference. Existing underwater object detection models face challenges such as low robustness, extensive computation of model parameters, and a high false detection rate. To address these challenges, this paper proposes a lightweight underwater object detection method integrating deep learning and image enhancement. Firstly, FUnIE-GAN is employed to perform data enhancement to restore the authentic colors of underwater images, and subsequently, the restored images are fed into an enhanced object detection network named YOLOv7-GN proposed in this paper. Secondly, a lightweight higher-order attention layer aggregation network (ACC3-ELAN) is designed to improve the fusion perception of higher-order features in the backbone network. Moreover, the head network is enhanced by leveraging the interaction of multi-scale higher-order information, additionally fusing higher-order semantic information from features at different scales. To further streamline the entire network, we also introduce the AC-ELAN-t module, which is derived from pruning based on ACC3-ELAN. Finally, the algorithm undergoes practical testing on a biomimetic sea flatworm underwater robot. The experimental results on the DUO dataset show that our proposed method improves the performance of object detection in underwater environments. It provides a valuable reference for realizing object detection in underwater embedded devices with great practical potential. Full article
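To make the enhance-then-detect pipeline above concrete, the sketch below chains an image-restoration stage and a detection stage in PyTorch. Both modules are minimal stand-ins (the real FUnIE-GAN generator and YOLOv7-GN detector are far larger); the layer sizes, input resolution, and class count are illustrative assumptions, not the authors' configuration.

```python
# Hypothetical two-stage pipeline: restoration model feeding a detector.
import torch
import torch.nn as nn

class Enhancer(nn.Module):
    """Stand-in for a FUnIE-GAN-style generator: degraded RGB in, restored RGB out."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)

class Detector(nn.Module):
    """Stand-in for a YOLO-style head: per-cell box, objectness and class scores."""
    def __init__(self, num_classes=4):  # class count is an assumption
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv2d(64, 5 + num_classes, 1)  # x, y, w, h, obj + classes

    def forward(self, x):
        return self.head(self.backbone(x))

enhancer, detector = Enhancer().eval(), Detector().eval()
with torch.no_grad():
    raw = torch.rand(1, 3, 320, 320)     # degraded underwater frame (random stand-in)
    restored = enhancer(raw)             # stage 1: color restoration
    predictions = detector(restored)     # stage 2: detection on the restored frame
print(predictions.shape)                 # torch.Size([1, 9, 80, 80])
```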

21 pages, 14486 KiB  
Article
An Underwater Image Restoration Deep Learning Network Combining Attention Mechanism and Brightness Adjustment
by Jianhua Zheng, Ruolin Zhao, Gaolin Yang, Shuangyin Liu, Zihao Zhang, Yusha Fu and Junde Lu
J. Mar. Sci. Eng. 2024, 12(1), 7; https://doi.org/10.3390/jmse12010007 - 19 Dec 2023
Viewed by 753
Abstract
This study proposes the Combining Attention and Brightness Adjustment Network (CABA-Net), a deep learning network for underwater image restoration, to address the issues of underwater image color cast, low brightness, and low contrast. The proposed approach performs multi-branch ambient light estimation by extracting features from different levels of the underwater image to obtain accurate estimates of the ambient light. Additionally, an encoder-decoder transmission map estimation module is designed to combine spatial attention structures that can extract the spatial features of different layers of underwater images to achieve accurate transmission map estimates. Then, the transmission map and the precisely predicted ambient light are inserted into the underwater image formation model to achieve a preliminary restoration of the underwater image. HSV brightness adjustment, combining channel and spatial attention, is then applied to the initially restored image to complete the final underwater image restoration. Experimental results on the Underwater Image Enhancement Benchmark (UIEB) and Real-world Underwater Image Enhancement (RUIE) datasets show the excellent performance of the proposed method in subjective comparisons and objective assessments. Furthermore, several ablation studies are conducted to understand the effect of each network component and prove the effectiveness of the suggested approach. Full article
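The restoration step builds on the classical underwater image formation model I = J*t + A*(1 - t). As a rough illustration only, the sketch below inverts that model for a given ambient light A and transmission map t and then applies a simple HSV brightness adjustment; in the paper both A and t come from learned, attention-based estimators, whereas here they are fixed placeholder values.

```python
# Minimal sketch: invert the image formation model, then brighten in HSV space.
import numpy as np
import cv2

def restore(image_bgr, ambient, transmission, t_min=0.1):
    t = np.clip(transmission, t_min, 1.0)[..., None]
    J = (image_bgr.astype(np.float32) - ambient) / t + ambient   # J = (I - A)/t + A
    return np.clip(J, 0, 255).astype(np.uint8)

def adjust_brightness_hsv(image_bgr, gain=1.2):
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    hsv[..., 2] = np.clip(hsv[..., 2] * gain, 0, 255)            # scale the V channel
    return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)

img = (np.random.rand(240, 320, 3) * 255).astype(np.uint8)       # stand-in frame
A = np.array([60.0, 90.0, 40.0])                                 # assumed ambient light (BGR)
t = np.full(img.shape[:2], 0.6, dtype=np.float32)                # assumed transmission map
out = adjust_brightness_hsv(restore(img, A, t))
```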

26 pages, 72331 KiB  
Article
Improving Semantic Segmentation Performance in Underwater Images
by Alexandra Nunes and Aníbal Matos
J. Mar. Sci. Eng. 2023, 11(12), 2268; https://doi.org/10.3390/jmse11122268 - 29 Nov 2023
Viewed by 734
Abstract
Nowadays, semantic segmentation is used increasingly often in exploration by underwater robots. For example, it is used in autonomous navigation so that the robot can recognise the elements of its environment during the mission and avoid collisions. Other applications include the search for archaeological artefacts, the inspection of underwater structures, and species monitoring. It is therefore necessary to improve performance in these tasks as much as possible. To this end, we compare several methods for image quality improvement and data augmentation and test whether higher performance metrics can be achieved with both strategies. The experiments are performed with the SegNet implementation and the SUIM dataset with eight common underwater classes to compare the obtained results with those already known. The results show that both strategies are beneficial and lead to better performance, achieving a mean IoU of 56% and an increased overall accuracy of 81.8%. The per-class results show five classes with an IoU value close to 60% and only one class with an IoU value below 30%, which is a more reliable result and easier to use in real contexts. Full article
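For reference, the two headline metrics quoted here (mean IoU and overall accuracy) can be computed from a per-class confusion matrix as in the short sketch below; the eight-class count matches SUIM, but the label maps are random stand-ins rather than SegNet outputs.

```python
# Mean IoU and overall accuracy from a per-class confusion matrix.
import numpy as np

def confusion_matrix(gt, pred, num_classes=8):
    idx = gt.astype(int) * num_classes + pred.astype(int)
    return np.bincount(idx.ravel(), minlength=num_classes**2).reshape(num_classes, num_classes)

def miou_and_accuracy(cm):
    tp = np.diag(cm)
    iou = tp / np.maximum(cm.sum(0) + cm.sum(1) - tp, 1)   # per-class intersection over union
    return iou.mean(), tp.sum() / cm.sum()                  # (mean IoU, overall accuracy)

gt = np.random.randint(0, 8, (480, 640))      # stand-in ground-truth label map
pred = np.random.randint(0, 8, (480, 640))    # stand-in predicted label map
print(miou_and_accuracy(confusion_matrix(gt, pred)))
```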

20 pages, 4464 KiB  
Article
Feature-Based Place Recognition Using Forward-Looking Sonar
by Ana Rita Gaspar and Aníbal Matos
J. Mar. Sci. Eng. 2023, 11(11), 2198; https://doi.org/10.3390/jmse11112198 - 19 Nov 2023
Cited by 1 | Viewed by 840
Abstract
Some structures in the harbour environment need to be inspected regularly. However, these scenarios present a major challenge for the accurate estimation of a vehicle’s position and the subsequent recognition of similar images. In these scenarios, visibility can be poor, making place recognition a difficult task as the visual appearance of a local feature can be compromised. Under these operating conditions, imaging sonars are a promising solution. The quality of the captured images is affected by several factors, but sonar images do not suffer from haze, which is an advantage. Therefore, a purely acoustic approach for the unsupervised recognition of similar images based on forward-looking sonar (FLS) data is proposed to solve the perception problems in harbour facilities. To simplify the variation of environment parameters and sensor configurations, and given the need for online data for these applications, a harbour scenario was recreated using the Stonefish simulator. Experiments were then conducted with preconfigured user trajectories to simulate inspections in the vicinity of structures. The place recognition approach performs better than approaches based on optical images. The proposed method provides a good compromise in terms of distinctiveness, achieving 87.5% recall under constraints and assumptions appropriate for this task, given its impact on navigation success. That is, it uses a similarity threshold of 0.3 and requires 12 consistent features so that only effective loops are considered. The behaviour of FLS is the same regardless of the environmental conditions, and thus this work opens new horizons for the use of these sensors as a valuable aid for underwater perception, namely, to avoid degradation of navigation performance in muddy conditions. Full article
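A loop-closure decision of the kind described, i.e. accept a match only if a similarity score passes a threshold (0.3 in the paper) and at least 12 consistent features remain, might look roughly like the sketch below. The ORB features, ratio test, and similarity definition are illustrative assumptions; the paper works on forward-looking sonar frames rather than the random images used here.

```python
# Hedged sketch of a feature-based place-recognition test between two frames.
import cv2
import numpy as np

def is_same_place(img_a, img_b, sim_thresh=0.3, min_matches=12):
    orb = cv2.ORB_create(nfeatures=500)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    if des_a is None or des_b is None:
        return False
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    pairs = matcher.knnMatch(des_a, des_b, k=2)
    # Lowe's ratio test keeps only unambiguous ("consistent") matches.
    good = [p[0] for p in pairs if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
    similarity = len(good) / max(min(len(kp_a), len(kp_b)), 1)
    return similarity >= sim_thresh and len(good) >= min_matches

frame = (np.random.rand(256, 256) * 255).astype(np.uint8)   # stand-in sonar frame
print(is_same_place(frame, frame))                           # identical frames -> True
```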

19 pages, 13833 KiB  
Article
Detection of Small Objects in Side-Scan Sonar Images Using an Enhanced YOLOv7-Based Approach
by Feihu Zhang, Wei Zhang, Chensheng Cheng, Xujia Hou and Chun Cao
J. Mar. Sci. Eng. 2023, 11(11), 2155; https://doi.org/10.3390/jmse11112155 - 12 Nov 2023
Viewed by 1229
Abstract
Deep learning-based object detection methods have demonstrated remarkable effectiveness across various domains. Recently, there has been growing interest in applying these techniques to underwater environments. Conventional optical imaging methods face severe limitations when operating in underwater conditions, restricting them to identifying objects only under good visibility and at close distances. Consequently, side-scan sonar (SSS) has emerged as a common equipment choice for underwater detection due to its compatibility with the characteristics of sound waves in water. This paper introduces a novel method, termed the Enhanced YOLOv7-Based Approach, for detecting small objects in SSS images. Building upon the widely adopted YOLOv7 method, the proposed approach incorporates several enhancements aimed at improving detection accuracy. First, a dedicated detection layer tailored for small objects is added to the original network architecture. Additionally, two attention mechanisms are integrated within the backbone and neck components of the network, respectively, to strengthen the network’s focus on object features. Finally, the network features are recombined based on the BiFPN structure. Experimental results demonstrate that the proposed method outperforms mainstream object detection algorithms. In comparison to the original YOLOv7 network, it achieves a Precision of 95.5%, indicating a significant improvement of 4.8%. Moreover, its Recall reaches 87.0%, representing an enhancement of 5.1%, while the mean Average Precision (mAP) at an IoU threshold of 0.5 (mAP@0.5) reaches 86.9%, reflecting a 6.7% improvement. Furthermore, the mAP@0.5:0.95 reaches 55.1%, a 4.8% enhancement. Therefore, the method presented in this paper enhances the performance of YOLOv7 for object detection in SSS images, providing a fresh perspective on small object detection based on SSS images and contributing to the advancement of underwater detection techniques. Full article
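The attention mechanisms added to the backbone and neck are not specified in the abstract; a widely used form of combined channel and spatial attention is CBAM, sketched generically below in PyTorch. Treat it as an illustration of the idea rather than the exact block used in the enhanced YOLOv7.

```python
# Generic CBAM-style block: channel attention followed by spatial attention.
import torch
import torch.nn as nn

class CBAM(nn.Module):
    def __init__(self, channels, reduction=16, kernel_size=7):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels),
        )
        self.spatial = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))                         # pooled channel statistics
        mx = self.mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)           # channel attention
        s = torch.cat([x.mean(1, keepdim=True), x.amax(1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))                  # spatial attention

feat = torch.rand(1, 64, 40, 40)            # stand-in feature map
print(CBAM(64)(feat).shape)                 # torch.Size([1, 64, 40, 40])
```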

18 pages, 23220 KiB  
Article
Underwater 3D Reconstruction from Video or Still Imagery: Matisse and 3DMetrics Processing and Exploitation Software
by Aurélien Arnaubec, Maxime Ferrera, Javier Escartín, Marjolaine Matabos, Nuno Gracias and Jan Opderbecke
J. Mar. Sci. Eng. 2023, 11(5), 985; https://doi.org/10.3390/jmse11050985 - 06 May 2023
Cited by 4 | Viewed by 2303
Abstract
This paper addresses the lack of “push-button” software for optical marine imaging, which currently limits the use of photogrammetric approaches by a wider community. It presents and reviews an open-source software package, Matisse, for creating textured 3D models of complex underwater scenes from video or still images. This software, developed for non-experts, enables routine and efficient processing of underwater images into 3D models that facilitate the exploitation and analysis of underwater imagery. When vehicle navigation data are available, Matisse allows for the seamless integration of such data to produce 3D reconstructions that are georeferenced and properly scaled. The software includes pre-processing tools to extract images from videos and to make corrections for color and uneven lighting. Four datasets of different 3D scenes are provided for demonstration. They include the input images and navigation data, as well as the associated 3D models generated with Matisse. The datasets, captured under different survey geometries, lead to 3D models of different sizes and demonstrate the capabilities of the software. The software suite also includes a 3D scene analysis tool, 3DMetrics, which can be used to visualize 3D scenes, incorporate elevation terrain models (e.g., from high-resolution bathymetry data) and manage, extract, and export quantitative measurements for 3D data analysis. Both software packages are publicly available. Full article
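A toy example of the navigation-based georeferencing step, placing a locally reconstructed point cloud into world coordinates with a logged vehicle pose and a scale factor, is given below; the pose values, the simple heading-only rotation, and the point cloud are all made up for illustration and are not Matisse code.

```python
# Toy georeferencing: rotate, scale, and translate a local reconstruction.
import numpy as np

def georeference(points_local, vehicle_xyz, heading_deg, scale=1.0):
    h = np.deg2rad(heading_deg)
    R = np.array([[np.cos(h), -np.sin(h), 0.0],
                  [np.sin(h),  np.cos(h), 0.0],
                  [0.0,        0.0,       1.0]])        # rotation about the vertical axis
    return scale * points_local @ R.T + np.asarray(vehicle_xyz)

cloud = np.random.rand(1000, 3)                          # local-frame reconstruction (stand-in)
world = georeference(cloud, vehicle_xyz=(654321.0, 4812345.0, -850.0), heading_deg=37.0)
print(world.shape)
```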

24 pages, 5225 KiB  
Article
Reconstruction of the Instantaneous Images Distorted by Surface Waves via Helmholtz–Hodge Decomposition
by Bijian Jian, Chunbo Ma, Yixiao Sun, Dejian Zhu, Xu Tian and Jun Ao
J. Mar. Sci. Eng. 2023, 11(1), 164; https://doi.org/10.3390/jmse11010164 - 09 Jan 2023
Viewed by 1469
Abstract
Imaging through water waves causes complex geometric distortions and motion blur, which seriously affect the correct identification of an airborne scene. Current methods mainly rely on high-resolution video streams or a template image, which limits their applicability in real-time observation scenarios. In this paper, a novel recovery method for instantaneous images distorted by surface waves is proposed. The method first actively projects an adaptive, adjustable structured light pattern onto the water surface, whose random fluctuations cause the image to degrade. Then, the displacement field of the feature points in the structured light image is used to estimate the motion vector field of the corresponding sampling points in the scene image. Finally, from the perspective of fluid mechanics, the distortion-free scene image is reconstructed based on Helmholtz-Hodge Decomposition (HHD) theory. Experimental results show that our method not only effectively reduces image distortion, but also significantly outperforms state-of-the-art methods in terms of computational efficiency. Moreover, we tested real-scene sequences of a certain length to verify the stability of the algorithm. Full article
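The core mathematical tool here, the Helmholtz-Hodge decomposition, splits a 2D displacement field into a curl-free part and a divergence-free part. A minimal FFT-based version is sketched below as a generic numerical illustration; it is not the authors' reconstruction pipeline, and the input field is random.

```python
# FFT-based Helmholtz-Hodge decomposition of a 2D vector field (u, v).
import numpy as np

def hhd_2d(u, v):
    ny, nx = u.shape
    kx = np.fft.fftfreq(nx)[None, :] * 2 * np.pi
    ky = np.fft.fftfreq(ny)[:, None] * 2 * np.pi
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                                # avoid division by zero at the DC term
    U, V = np.fft.fft2(u), np.fft.fft2(v)
    div = kx * U + ky * V                         # k dot F_hat (the i factors cancel below)
    u_cf = np.real(np.fft.ifft2(kx * div / k2))   # curl-free (gradient) component
    v_cf = np.real(np.fft.ifft2(ky * div / k2))
    return (u_cf, v_cf), (u - u_cf, v - v_cf)     # (curl-free, divergence-free)

u = np.random.rand(64, 64) - 0.5                  # stand-in displacement field
v = np.random.rand(64, 64) - 0.5
(ucf, vcf), (udf, vdf) = hhd_2d(u, v)
```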

17 pages, 8357 KiB  
Article
Spectral Acoustic Fingerprints of Sand and Sandstone Sea Bottoms
by Uri Kushnir and Vladimir Frid
J. Mar. Sci. Eng. 2022, 10(12), 1923; https://doi.org/10.3390/jmse10121923 - 06 Dec 2022
Cited by 1 | Viewed by 1366
Abstract
Recent studies of frequency-domain analysis have shown that this approach offers an essential advantage and have noted a qualitative relationship between subsurface structure and its frequency spectrum. This paper deals with the acoustic spectral response of sand and sandstone sediments at the sea bottom. An acoustic data collection campaign was conducted over two sand sites and two sandstone sites. The analysis of the results shows that reflections of acoustic signals from sand and sandstone sea bottoms are characterized by various spectral features in the 2.75–6.75 kHz range. The differences in the acoustic response of sand and sandstone can be quantified by examining the maximal normalized reflected power, the mean frequency, and the number of crossings at different power levels. The statistical value distribution of these potential classifiers was calculated and analyzed. These classifiers, and especially the roughness of the spectrum quantified by the number-of-crossings parameter, can be used to assess the probability of sand or sandstone from the reflected spectra and to distinguish between sand and sandstone in sub-bottom profiler data collection campaigns. Full article
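The three candidate classifiers named above (maximal normalized reflected power, mean frequency, and number of level crossings) can be computed from a reflected-signal spectrum restricted to the 2.75-6.75 kHz band, roughly as sketched below; the sampling rate, the synthetic ping, and the chosen crossing levels are placeholders, not the paper's values.

```python
# Sketch of spectral classifiers for a reflected ping in the 2.75-6.75 kHz band.
import numpy as np
from scipy.signal import welch

fs = 50_000.0                                            # assumed sampling rate
signal = np.random.randn(8192)                           # stand-in reflected ping
f, pxx = welch(signal, fs=fs, nperseg=1024)
band = (f >= 2750) & (f <= 6750)
fb, pb = f[band], pxx[band]

max_norm_power = pb.max() / pxx.max()                    # peak in-band power, normalized
mean_freq = np.sum(fb * pb) / np.sum(pb)                 # power-weighted mean frequency
shape = pb / pb.max()                                    # in-band spectrum scaled to [0, 1]
crossings = {lvl: int(np.count_nonzero(np.diff((shape > lvl).astype(int))))
             for lvl in (0.25, 0.5, 0.75)}               # crossings at several power levels
print(max_norm_power, mean_freq, crossings)
```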

24 pages, 7868 KiB  
Article
Underwater Image Classification Algorithm Based on Convolutional Neural Network and Optimized Extreme Learning Machine
by Junyi Yang, Mudan Cai, Xingfan Yang and Zhiyu Zhou
J. Mar. Sci. Eng. 2022, 10(12), 1841; https://doi.org/10.3390/jmse10121841 - 01 Dec 2022
Cited by 4 | Viewed by 2553
Abstract
To address target recognition in complex underwater environments, we carried out experimental research covering noise filtering in the feature extraction stage for images rich in noise or with complex backgrounds, and improving the accuracy of target classification in the recognition process. This paper proposes an underwater target classification algorithm based on an improved flow direction algorithm (FDA) and a search agent strategy, which can simultaneously optimize the weight parameters, bias parameters, and hyperparameters of the extreme learning machine (ELM). As a new underwater target classifier, it replaces the fully connected layer in a traditional classification network. In the first stage of the network, the DenseNet201 network pre-trained on ImageNet is used to extract features from underwater images and reduce their dimensionality. In the second stage, the optimized ELM classifier is trained and used for prediction. To weaken the uncertainty caused by the random input weights and biases of the ELM, the fuzzy logic, chaos initialization, and multi-population strategy-based flow direction algorithm (FCMFDA) is used to adjust the input weights and biases of the ELM while optimizing the hyperparameters with the search agent strategy. We tested and verified the FCMFDA-ELM classifier on the Fish4Knowledge and underwater robot professional competition 2018 (URPC 2018) datasets, achieving 99.4% and 97.5% accuracy, respectively. The experimental analysis shows that the proposed FCMFDA-ELM underwater image classifier offers higher classification accuracy, stronger stability, and faster convergence. Finally, it can be embedded in the underwater target recognition process to improve recognition performance and efficiency. Full article
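For readers unfamiliar with the classifier being optimized, a bare-bones extreme learning machine is shown below: random input weights and biases, one nonlinear hidden layer, and output weights solved by least squares. The FCMFDA tuning of those random parameters and the DenseNet201 feature extractor are not reproduced; the feature matrix is random, and its 1920-dimensional width is only an assumption matching DenseNet201's final feature size.

```python
# Bare-bones extreme learning machine (ELM) in NumPy.
import numpy as np

rng = np.random.default_rng(0)

def elm_train(X, Y, hidden=256):
    W = rng.standard_normal((X.shape[1], hidden))    # random input weights
    b = rng.standard_normal(hidden)                  # random bias
    H = np.tanh(X @ W + b)                           # hidden-layer activations
    beta = np.linalg.pinv(H) @ Y                     # output weights via least squares
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)

X = rng.standard_normal((500, 1920))                 # stand-in deep features
labels = rng.integers(0, 5, 500)
Y = np.eye(5)[labels]                                # one-hot targets
W, b, beta = elm_train(X, Y)
print((elm_predict(X, W, b, beta) == labels).mean()) # training accuracy of the sketch
```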

16 pages, 5391 KiB  
Article
Imaging of Artificial Bubble Distribution Using a Multi-Sonar Array System
by Ho Seuk Bae, Won-Ki Kim, Su-Uk Son and Joung-Soo Park
J. Mar. Sci. Eng. 2022, 10(12), 1822; https://doi.org/10.3390/jmse10121822 - 25 Nov 2022
Cited by 2 | Viewed by 1595
Abstract
Bubble clusters present in seawater can cause acoustic interference and acoustic signal distortion during marine exploration. However, this interference can also be used as an acoustic masking technique, which has significant implications for military purposes. Therefore, characterizing the distribution of bubble clusters in water would allow for the development of anti-detection technologies. In this study, a sea experiment was performed using a multi-sonar array system and a bubble-generating material developed by our research group to obtain acoustic signals from an artificial bubble cluster and characterize its distribution. The acquired acoustic data were preprocessed, and reverse-time migration (RTM) was applied to the dataset. For effective RTM, an envelope waveform was used to decrease computation time and memory requirements. The envelope RTM results could be used to effectively image the distribution characteristics of the artificial bubble clusters. Compared with acoustic Doppler current profiler data, the backscattering strength of the boundary of the imaged artificial bubble cluster was estimated to range between −30 and −20 dB. Therefore, the three-dimensional distribution characteristics of bubble clusters in the open sea can be effectively determined through envelope RTM. Furthermore, the data obtained from this study can be used as a reference for future studies. Full article
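The envelope preprocessing mentioned above can be illustrated with a Hilbert transform: the magnitude of the analytic signal discards the carrier phase and lowers the frequency content that RTM has to handle. The trace, carrier frequency, and sampling rate below are synthetic placeholders, not data from the sea experiment.

```python
# Envelope of a synthetic echo via the analytic signal (Hilbert transform).
import numpy as np
from scipy.signal import hilbert

fs = 100_000.0                                     # assumed sampling rate
t = np.arange(0, 0.02, 1 / fs)
trace = np.sin(2 * np.pi * 12_000 * t) * np.exp(-((t - 0.01) ** 2) / 2e-6)  # toy echo
envelope = np.abs(hilbert(trace))                  # envelope waveform used instead of the raw trace
print(trace.shape, float(envelope.max()))
```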

14 pages, 5831 KiB  
Article
High Speed and Precision Underwater Biological Detection Based on the Improved YOLOV4-Tiny Algorithm
by Kun Yu, Yufeng Cheng, Zhuangtao Tian and Kaihua Zhang
J. Mar. Sci. Eng. 2022, 10(12), 1821; https://doi.org/10.3390/jmse10121821 - 25 Nov 2022
Cited by 6 | Viewed by 1655
Abstract
Realizing high-precision real-time underwater detection has been a pressing issue for intelligent underwater robots in recent years. Poor quality of underwater datasets leads to low accuracy of detection models. To handle this problem, an improved YOLOV4-Tiny algorithm is proposed. The CSPrestblock_body in YOLOV4-Tiny is replaced with Ghostblock_body, which is stacked from Ghost modules, in the CSPDarknet53-Tiny backbone network to reduce computational complexity. The convolutional block attention module (CBAM) is integrated into the algorithm in order to find the attention region in scenarios with dense objects. Then, the underwater data are effectively improved by combining Instance-Balanced Augmentation, underwater image restoration, and the Mosaic algorithm. Finally, experiments demonstrate that YOLOV4-Tinier achieves a mean Average Precision (mAP) of 80.77% on the improved underwater dataset and a detection speed of 86.96 fps. Additionally, compared to the baseline model YOLOV4-Tiny, YOLOV4-Tinier reduces the model size by about 29%, which is encouraging and competitive. Full article
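The Ghost module stacked into the Ghostblock_body is, in its GhostNet-style form, a small primary convolution plus a cheap depthwise convolution whose outputs are concatenated, roughly halving the cost of a plain convolution. The generic PyTorch sketch below illustrates that structure; channel counts and kernel sizes are assumptions, not the paper's exact configuration.

```python
# Generic GhostNet-style Ghost module.
import torch
import torch.nn as nn

class GhostModule(nn.Module):
    def __init__(self, in_ch, out_ch, kernel=1, cheap_kernel=3):
        super().__init__()
        init_ch = out_ch // 2
        self.primary = nn.Sequential(                    # ordinary "intrinsic" features
            nn.Conv2d(in_ch, init_ch, kernel, padding=kernel // 2, bias=False),
            nn.BatchNorm2d(init_ch), nn.ReLU(inplace=True),
        )
        self.cheap = nn.Sequential(                      # cheap depthwise "ghost" features
            nn.Conv2d(init_ch, init_ch, cheap_kernel, padding=cheap_kernel // 2,
                      groups=init_ch, bias=False),
            nn.BatchNorm2d(init_ch), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        y = self.primary(x)
        return torch.cat([y, self.cheap(y)], dim=1)

print(GhostModule(64, 128)(torch.rand(1, 64, 52, 52)).shape)  # torch.Size([1, 128, 52, 52])
```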

16 pages, 38405 KiB  
Article
Underwater Image Enhancement Based on Color Correction and Detail Enhancement
by Zeju Wu, Yang Ji, Lijun Song and Jianyuan Sun
J. Mar. Sci. Eng. 2022, 10(10), 1513; https://doi.org/10.3390/jmse10101513 - 17 Oct 2022
Cited by 4 | Viewed by 2200
Abstract
To solve the problems of underwater image color deviation, low contrast, and blurred details, an algorithm based on color correction and detail enhancement is proposed. First, an improved nonlocal means denoising algorithm is used to denoise the underwater image. The combination of a Gaussian-weighted spatial distance and a Gaussian-weighted Euclidean distance is used as the similarity measure between structural blocks in the nonlocal means denoising algorithm. The improved algorithm can retain more edge features and texture information while maintaining its noise reduction ability. Then, an improved U-Net is used for color correction. Introducing a residual structure and an attention mechanism into U-Net can effectively enhance feature extraction ability and prevent network degradation. Finally, a sharpening algorithm based on maximum a posteriori estimation is proposed to enhance the image after color correction, which can increase the detailed information of the image without amplifying the noise. The experimental results show that the proposed algorithm has a remarkable effect on underwater image enhancement. Full article
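The modified similarity measure, a Gaussian-weighted patch (Euclidean) distance combined with a Gaussian-weighted spatial distance between patch centres, might be written as in the sketch below; the filtering parameters and patches are placeholders, and the full nonlocal means loop over all candidate patches is omitted.

```python
# Sketch of a combined patch/spatial similarity weight for nonlocal means.
import numpy as np

def gaussian_kernel(size, sigma):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def nlm_weight(patch_a, patch_b, center_a, center_b, h=10.0, sigma_p=1.5, sigma_s=15.0):
    g = gaussian_kernel(patch_a.shape[0], sigma_p)                     # weights the patch difference
    d_patch = np.sum(g * (patch_a.astype(np.float32) - patch_b.astype(np.float32)) ** 2)
    d_space = np.sum((np.asarray(center_a, float) - np.asarray(center_b, float)) ** 2)
    return np.exp(-d_patch / h**2) * np.exp(-d_space / (2 * sigma_s**2))

a = np.random.randint(0, 256, (7, 7))     # stand-in structural blocks
b = np.random.randint(0, 256, (7, 7))
print(nlm_weight(a, b, (50, 50), (58, 61)))
```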

10 pages, 2453 KiB  
Article
Identification of Orbital Angular Momentum by Support Vector Machine in Ocean Turbulence
by Xiaoji Li, Jiemei Huang and Leiming Sun
J. Mar. Sci. Eng. 2022, 10(9), 1284; https://doi.org/10.3390/jmse10091284 - 12 Sep 2022
Cited by 4 | Viewed by 1612
Abstract
With the advancement of underwater communication technology, traditional modulation dimensions have been introduced, developed and utilized. In addition, orbital angular momentum (OAM) is utilized as a modulation dimension for optical underwater communication to obtain larger spectrum resources. The OAM features are extracted using a histogram of oriented gradients (HOG), and a support vector machine is trained on these HOG features. The topological charge value of the OAM was used as the classification label, and the ocean turbulence caused by different temperatures and salinities was analyzed. Experimental results showed that the recognition accuracy for OAM under the Laguerre–Gaussian beam rates of 1~5, 1~6, 1~7, 1~8, 1~9, and 1~10 was 98.93%, 98.89%, 97.33%, 96.66%, 95.40%, and 95.33%, respectively. The proposed method achieved high recognition accuracy and performed efficiently under strong turbulence. Our research explores a new technique that provides a new approach to the demodulation of OAM in optical underwater communication. Full article
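The recognition step, HOG features fed to a support vector machine labelled by topological charge, can be prototyped with standard libraries as sketched below. The images are random stand-ins rather than simulated Laguerre-Gaussian beams after ocean turbulence, and the HOG parameters are assumptions.

```python
# HOG + SVM pipeline sketch for OAM topological-charge identification.
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

rng = np.random.default_rng(1)
images = rng.random((60, 64, 64))                     # stand-in beam intensity images
charges = rng.integers(1, 6, 60)                      # topological-charge labels (1..5)

features = np.array([hog(im, orientations=9, pixels_per_cell=(8, 8),
                         cells_per_block=(2, 2)) for im in images])
clf = SVC(kernel="rbf").fit(features, charges)
print(clf.score(features, charges))                   # training accuracy of the sketch
```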

19 pages, 5252 KiB  
Article
Multi-Level Wavelet-Based Network Embedded with Edge Enhancement Information for Underwater Image Enhancement
by Kaichuan Sun, Fei Meng and Yubo Tian
J. Mar. Sci. Eng. 2022, 10(7), 884; https://doi.org/10.3390/jmse10070884 - 27 Jun 2022
Cited by 5 | Viewed by 1700
Abstract
As an image processing method, underwater image enhancement (UIE) plays an important role in the field of underwater resource detection and engineering research. Currently, convolutional neural network (CNN)- and Transformer-based methods are the mainstream methods for UIE. However, CNNs usually use pooling to expand the receptive field, which may lead to information loss that is not conducive to feature extraction and analysis. At the same time, edge blurring can easily occur in enhanced images obtained by the existing methods. To address these issues, this paper proposes a framework that combines a CNN and a Transformer, employs the wavelet transform and inverse wavelet transform for encoding and decoding, and progressively embeds the edge information of the raw image during the encoding process. Specifically, first, features of the raw image and its edge detection image are extracted step by step using the convolution module and the residual dense attention module, respectively, to obtain mixed feature maps of different resolutions. Next, the residual-structure Swin Transformer group is used to extract global features. Then, the resulting feature map and the encoder’s hybrid feature map are used for high-resolution feature map reconstruction by the decoder. The experimental results show that the proposed method can achieve an excellent effect in edge information protection and visual reconstruction of images. In addition, the effectiveness of each component of the proposed model is verified by ablation experiments. Full article
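The wavelet encode/decode idea used in place of pooling can be seen in miniature below: a 2D discrete wavelet transform halves the resolution while keeping the detail in its high-frequency subbands, and the inverse transform restores the input exactly. The edge map that the network progressively embeds is stood in for by a simple Canny detector on a random image; none of this is the paper's network code.

```python
# DWT/IDWT round trip plus an edge map, as a toy stand-in for the encode/decode path.
import numpy as np
import pywt
import cv2

img = (np.random.rand(128, 128) * 255).astype(np.uint8)       # stand-in feature map
edges = cv2.Canny(img, 100, 200)                               # edge information to embed

LL, (LH, HL, HH) = pywt.dwt2(img.astype(np.float32), "haar")   # downsampling without discarding detail
restored = pywt.idwt2((LL, (LH, HL, HH)), "haar")              # exact upsampling/reconstruction
print(LL.shape, edges.shape, np.allclose(restored, img.astype(np.float32)))
```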
