Special Issue "Underwater Imaging"

A special issue of Journal of Marine Science and Engineering (ISSN 2077-1312).

Deadline for manuscript submissions: closed (31 January 2020).

Special Issue Editor

Prof. Dr. Fabio Bruno
Guest Editor
Department of Mechanical, Energy, and Management Engineering, University of Calabria, Rende, 87036 Cosenza, Italy
Interests: 3D recording; underwater technologies; virtual reality; augmented reality

Special Issue Information

Dear Colleagues,

Underwater imaging is an important topic in Marine Science and Engineering because it deals with the development of new technologies and techniques for acquiring and processing images and 3D data in underwater environments. Challenges associated with improving the visibility of objects at various distances have been difficult to overcome due to the absorptive and scattering nature of seawater. Mitigating these effects has been the focus of the underwater imaging community for decades, but recent advances in hardware, software and methods have led to relevant improvements in several application areas (e.g., biology, geology, archaeology, offshore engineering). Nevertheless, the exploration, documentation and recording of underwater environments remain challenging tasks that stimulate the research, design and development of new sensors, devices, techniques and methods for recording underwater environments. This Special Issue has been launched to collect the current state of the art and future perspectives in the development of underwater imaging technologies.

We are seeking contributions for this Special Issue on the following subjects:

  • Development and characterization of underwater optical and acoustic sensors
  • Acoustic sensing for large underwater areas
  • Underwater photogrammetry
  • Sensor and data fusion in underwater applications
  • Platforms supporting underwater data acquisition (ROV, AUV, ASV, etc.)
  • Underwater metrology and inspections
  • Restoration, enhancement and processing of underwater images
  • 3D bathymetry techniques
  • Data processing and underwater 3D modeling
  • Innovative applications and multidisciplinary approaches in underwater imaging

Prof. Dr. Fabio Bruno
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Journal of Marine Science and Engineering is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • underwater imaging
  • photogrammetry
  • underwater 3D recording
  • bathymetry
  • SONAR
  • ROV
  • AUV

Published Papers (12 papers)


Research


Open Access Feature Paper Article
Ultra-High-Resolution Mapping of Posidonia oceanica (L.) Delile Meadows through Acoustic, Optical Data and Object-based Image Classification
J. Mar. Sci. Eng. 2020, 8(9), 647; https://doi.org/10.3390/jmse8090647 - 22 Aug 2020
Cited by 1 | Viewed by 1221
Abstract
In this study, we present a framework for seagrass habitat mapping in shallow (5–50 m) and very shallow water (0–5 m) by combining acoustic and optical data with object-based image classification. The combination of satellite multispectral images acquired from 2017 to 2019, together with Unmanned Aerial Vehicle (UAV) photomosaic maps, high-resolution multibeam bathymetry/backscatter and underwater photogrammetry data, provided insights on the short-term characterization and distribution of Posidonia oceanica (L.) Delile, 1813 meadows in the Calabrian Tyrrhenian Sea. We used a supervised Object-based Image Analysis (OBIA) processing and classification technique to create a high-resolution thematic distribution map of P. oceanica meadows from multibeam bathymetry, backscatter data, drone photogrammetry and multispectral images that can be used as a model for the classification of marine and coastal areas. As part of this work, within the SIC CARLIT project, a field application was carried out in a Site of Community Importance (SCI) on Cirella Island in Calabria (Italy); different multiscale mapping techniques were performed and integrated: the optical and acoustic data were processed and classified by different OBIA algorithms, i.e., the k-Nearest Neighbors (k-NN), Random Tree (RT) and Decision Tree (DT) algorithms. These acoustic and optical data combinations were shown to be a reliable tool for obtaining high-resolution thematic maps for the preliminary characterization of seagrass habitats. These thematic maps can be used for time-lapse comparisons aimed at quantifying changes in seabed coverage, such as those caused by anthropogenic impacts (e.g., trawl fishing and boat anchoring), to assess blue carbon sinks, and may be useful for future seagrass habitat conservation strategies. Full article
(This article belongs to the Special Issue Underwater Imaging)
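The classifier comparison at the heart of the OBIA step can be sketched with scikit-learn. This is a minimal illustration on synthetic stand-in features (the depth, backscatter and reflectance values are invented, not taken from the paper), with RandomForestClassifier standing in for the Random Tree algorithm:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 300
# Hypothetical per-object features: depth (m), backscatter (dB),
# green-band reflectance, for seagrass vs. bare-sand objects.
seagrass = np.column_stack([rng.normal(15, 5, n),
                            rng.normal(-25, 3, n),
                            rng.normal(0.12, 0.02, n)])
sand = np.column_stack([rng.normal(20, 5, n),
                        rng.normal(-15, 3, n),
                        rng.normal(0.30, 0.04, n)])
X = np.vstack([seagrass, sand])
y = np.array([1] * n + [0] * n)  # 1 = P. oceanica, 0 = sand

for name, clf in [("k-NN", KNeighborsClassifier(5)),
                  ("Random Forest", RandomForestClassifier(50, random_state=0)),
                  ("Decision Tree", DecisionTreeClassifier(random_state=0))]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: {acc:.2f}")
```

In the paper the objects come from segmentation of the bathymetry, backscatter and multispectral layers; the sketch only shows the supervised comparison of the three classifier families named in the abstract.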

Open Access Feature Paper Article
Photogrammetry: Linking the World across the Water Surface
J. Mar. Sci. Eng. 2020, 8(2), 128; https://doi.org/10.3390/jmse8020128 - 17 Feb 2020
Cited by 1 | Viewed by 851
Abstract
Three-dimensional (3D) surveying and modelling of the underwater environment is challenging; however, it becomes even more arduous when the scene or asset to measure extends from above to underwater through the water surface. While this is a topic of high interest for a number of application fields (engineering, geology, archaeology), few solutions are available; they are usually expensive and offer no guarantee of homogeneous accuracy and resolution in the two media. This paper focuses on a procedure to survey and link the above- and underwater worlds based on photogrammetry. The two parts of the asset, above and under the water, are separately surveyed and then linked through two possible analytical procedures: (1) independent model adjustment or (2) relative orientation constraints. In the first case, rigid pre-calibrated rods are installed across the waterline on the object to be surveyed; in the second approach, a synchronized stereo-camera rig, with one camera in the water and the other above it, is employed. The theoretical foundation for the two approaches is provided and their effectiveness is proved through two challenging case studies: (1) the 3D survey of the leak of the Costa Concordia shipwreck and (2) 3D modelling of Grotta Giusti, a complex semi-submerged cave environment in Italy. Full article
(This article belongs to the Special Issue Underwater Imaging)

Open Access Article
Using Scuba for In Situ Determination of Chlorophyll Distributions in Corals by Underwater Near Infrared Fluorescence Imaging
J. Mar. Sci. Eng. 2020, 8(1), 53; https://doi.org/10.3390/jmse8010053 - 18 Jan 2020
Cited by 1 | Viewed by 842
Abstract
Studies reporting quantitation and imaging of chlorophyll in corals using visible fluorescent emission in the red near 680 nm can suffer from competing emission from other red-emitting pigments. Here, we report a novel method of selectively imaging chlorophyll distributions in coral in situ using only the near infrared (NIR) fluorescence emission from chlorophyll. Commercially available equipment was assembled that allowed the sequential imaging of visible, visible-fluorescent, and NIR-fluorescent pigments on the same corals. The relative distributions of chlorophyll and fluorescent proteins (GFPs) were examined in numerous corals in the Caribbean Sea, the Egyptian Red Sea, the Indonesian Dampier Strait, and the Florida Keys. Below 2 m depth, solar-induced NIR chlorophyll fluorescence can be imaged in daylight without external lighting, making it much easier than visible fluorescence imaging, which must be done at night. The distributions of chlorophyll and GFPs are unique in every species examined, and while there are some tissues where both fluorophores are co-resident, often tissues are selectively enriched in only one of these fluorescent pigments. Although laboratory studies have clearly shown that GFPs can be photo-protective, their inability to prevent large-scale bleaching events in situ may be due to their limited tissue distribution. Full article
(This article belongs to the Special Issue Underwater Imaging)

Open Access Article
Adaptive Weighted Multi-Discriminator CycleGAN for Underwater Image Enhancement
J. Mar. Sci. Eng. 2019, 7(7), 200; https://doi.org/10.3390/jmse7070200 - 28 Jun 2019
Cited by 3 | Viewed by 1851
Abstract
In this paper, we propose a novel underwater image enhancement method. Typical deep learning models for underwater image enhancement are trained on paired synthetic datasets. Therefore, these models are mostly effective for synthetic image enhancement but less so for real-world images. In contrast, cycle-consistent generative adversarial networks (CycleGAN) can be trained on unpaired datasets. However, the performance of CycleGAN is highly dependent on the dataset, so it may generate unrealistic images with less content information than the originals. The solution we propose here starts with a CycleGAN and adds a pair of discriminators to preserve the content of the input image while enhancing it. As part of the solution, we introduce an adaptive weighting method that limits the losses of the two types of discriminators to balance their influence and stabilize the training procedure. Extensive experiments demonstrate that the proposed method significantly outperforms the state-of-the-art methods on real-world underwater images. Full article
(This article belongs to the Special Issue Underwater Imaging)
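The abstract does not spell out the weighting rule, so the following is only a generic sketch of the idea of adaptively balancing two discriminator losses: weight each loss by the relative magnitude of the other, so that neither dominates a training step (the function name and the scheme itself are assumptions, not the paper's formulation):

```python
def adaptive_weights(loss_enh, loss_content, eps=1e-8):
    """Weight each loss by the other's share so both contribute equally."""
    total = loss_enh + loss_content + eps
    return loss_content / total, loss_enh / total

# If the enhancement discriminator's loss (2.0) currently dominates the
# content discriminator's (0.5), the weights rebalance the combined loss
# so each term contributes roughly equally.
w_enh, w_content = adaptive_weights(2.0, 0.5)
combined = w_enh * 2.0 + w_content * 0.5
print(w_enh, w_content, combined)
```

Any scheme with this equalizing property would stabilize training in the sense the abstract describes; the authors' actual rule may differ.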

Open Access Article
Efficient Image Registration for Underwater Optical Mapping Using Geometric Invariants
J. Mar. Sci. Eng. 2019, 7(6), 178; https://doi.org/10.3390/jmse7060178 - 05 Jun 2019
Viewed by 887
Abstract
Image registration is one of the most fundamental and widely used tools in optical mapping applications. It is mostly achieved by extracting and matching salient points (features) described by vectors (feature descriptors) from images. While matching the descriptors, mismatches (outliers) do appear. Probabilistic methods are then applied to remove outliers and to find the transformation (motion) between images. These methods work in an iterative manner. In this paper, an efficient way of integrating geometric invariants into feature-based image registration is presented aiming at improving the performance of image registration in terms of both computational time and accuracy. To do so, geometrical properties that are invariant to coordinate transforms are studied. This would be beneficial to all methods that use image registration as an intermediate step. Experimental results are presented using both semi-synthetically generated data and real image pairs from underwater environments. Full article
(This article belongs to the Special Issue Underwater Imaging)
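The flavour of such invariants can be illustrated with a similarity transform (an assumed example; the paper's exact invariants may differ): ratios of pairwise distances between feature points survive rotation, translation and uniform scaling, so a correspondence that breaks the ratio can be rejected cheaply before the iterative estimation step.

```python
import numpy as np

def distance_ratio(p, q, r):
    """Ratio |p-q| / |p-r|, invariant under similarity transforms."""
    return np.linalg.norm(p - q) / np.linalg.norm(p - r)

# Three matched feature points in the first image.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
theta, scale, t = 0.7, 1.8, np.array([3.0, -1.0])
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
mapped = scale * pts @ R.T + t     # the same points in the second image
bad = mapped.copy()
bad[2] += np.array([0.5, 0.5])     # simulate one mismatch (outlier)

r_src = distance_ratio(*pts)
print(abs(distance_ratio(*mapped) - r_src) < 1e-9)  # True: invariant holds
print(abs(distance_ratio(*bad) - r_src) > 1e-2)     # True: outlier flagged
```

Pre-filtering with such checks is what lets the iterative (RANSAC-style) motion estimation run on a cleaner match set, improving both speed and accuracy.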

Open Access Article
Interdisciplinary Methodology to Extend Technology Readiness Levels in SONAR Simulation from Laboratory Validation to Hydrography Demonstrator
J. Mar. Sci. Eng. 2019, 7(5), 159; https://doi.org/10.3390/jmse7050159 - 23 May 2019
Cited by 3 | Viewed by 1204
Abstract
This paper extends underwater SONAR simulation from laboratory prototype to real-world demonstrator. It presents the interdisciplinary methodology to advance the state of the art from level four to level seven on the technology readiness level (TRL) standard scale for measuring the maturity of innovations. While SONAR simulation offers the potential to unlock cost-effective personnel capacity building in hydrography, demonstration of virtualised survey-scale operations is a prerequisite for validation by practitioners. Our research approach uses the TRL framework to identify and map current barriers to the use of simulation to interdisciplinary solutions adapted from multiple domains. To meet the distinct challenges of acceptance tests at each level in the TRL scale, critical knowledge is incorporated from different branches of science, engineering, project management, and pedagogy. The paper reports the simulator development at each escalation of TRL. The contributions to simulator performance and usability at each level of advancement are presented, culminating in the first case study demonstration of SONAR simulation as a real-world hydrographic training platform. Full article
(This article belongs to the Special Issue Underwater Imaging)

Open Access Feature Paper Article
The Snell’s Window Image for Remote Sensing of the Upper Sea Layer: Results of Practical Application
J. Mar. Sci. Eng. 2019, 7(3), 70; https://doi.org/10.3390/jmse7030070 - 19 Mar 2019
Viewed by 1266
Abstract
The optical properties of water can be estimated by photo or video recording of the rough sea surface from underwater, at the angle of total internal reflection, in the direction away from the sun, at several depths. In this case, the key characteristic of the obtained image is the border of the Snell's window, which is a randomly distorted image of the sky. Its distortion changes under the combined action of sea roughness and light scattering; however, after correct "decoding" of this image, their separate determination is possible. This paper presents the corresponding algorithms for extracting this information from Snell's window images. These images were obtained in waters with different optical properties and wave conditions under several types of illumination. Practical guidelines for recording, processing and analyzing images of the Snell's window are also formulated. Full article
(This article belongs to the Special Issue Underwater Imaging)
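The geometry of the Snell's window border follows directly from the critical angle of total internal reflection; a quick sketch (n = 1.34 is an assumed value for seawater):

```python
import math

def snells_window_half_angle(n_water=1.34):
    """Half-angle (degrees) of the Snell's window cone seen underwater.

    The window edge sits at the critical angle theta_c = arcsin(1/n):
    all downwelling skylight is compressed into a cone of 2 * theta_c.
    """
    return math.degrees(math.asin(1.0 / n_water))

# For seawater this is roughly 48 degrees, so the entire sky maps into
# a cone a little under 97 degrees wide.
print(round(snells_window_half_angle(), 1))
```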

Open Access Article
Integrating Three-Dimensional Benthic Habitat Characterization Techniques into Ecological Monitoring of Coral Reefs
J. Mar. Sci. Eng. 2019, 7(2), 27; https://doi.org/10.3390/jmse7020027 - 28 Jan 2019
Cited by 16 | Viewed by 2363
Abstract
Long-term ecological monitoring of reef fish populations often requires the simultaneous collection of data on benthic habitats in order to account for the effects of these variables on fish assemblage structure. Here, we describe an approach to benthic surveys that uses photogrammetric techniques to facilitate the extraction of quantitative metrics for characterization of benthic habitats from the resulting three-dimensional (3D) reconstruction of coral reefs. Out of 92 sites surveyed in the Northwestern Hawaiian Islands, photographs from 85 sites achieved complete alignment and successfully produced 3D reconstructions and digital elevation models (DEMs). Habitat metrics extracted from the DEMs were generally correlated with one another, with the exception of curvature measures, indicating that complexity and curvature measures should be treated separately when quantifying the habitat structure. Fractal dimension D64, calculated by changing resolutions of the DEMs from 1 cm to 64 cm, had the best correlations with other habitat metrics. Fractal dimension was also less affected by changes in orientations of the models compared to surface complexity or slope. These results showed that fractal dimension can be used as a single measure of complexity for the characterization of coral reef habitats. Further investigations into metrics for 3D characterization of habitats should consider relevant spatial scales and focus on obtaining variables that can complement fractal dimension in the characterization of reef habitats. Full article
(This article belongs to the Special Issue Underwater Imaging)
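The resolution-coarsening idea behind a metric such as D64 can be sketched as follows (a hedged illustration, not the authors' implementation): estimate the DEM's relative surface area at a series of grid scales and take the fractal dimension from the log-log slope. A flat surface gives D = 2; rougher surfaces push D toward 3.

```python
import numpy as np

def relative_surface_area(dem, cell):
    """Mean local area factor sqrt(1 + |grad z|^2) of a DEM."""
    dzx = np.diff(dem, axis=1)[:-1, :] / cell
    dzy = np.diff(dem, axis=0)[:, :-1] / cell
    return np.mean(np.sqrt(1.0 + dzx**2 + dzy**2))

def fractal_dimension(dem, scales=(1, 2, 4, 8, 16, 32, 64)):
    """D = 2 - slope of log(relative area) vs. log(grid scale)."""
    log_s = [np.log(s) for s in scales]
    log_a = [np.log(relative_surface_area(dem[::s, ::s], cell=s))
             for s in scales]
    slope = np.polyfit(log_s, log_a, 1)[0]
    return 2.0 - slope

flat = np.zeros((129, 129))                          # flat seafloor
rough = np.random.default_rng(0).random((129, 129)) * 5.0
print(round(fractal_dimension(flat), 2))             # 2.0
print(fractal_dimension(rough) > 2.0)                # True
```

The paper's D64 similarly resamples the DEMs from 1 cm to 64 cm resolution; the exact area estimator and fitting procedure used there may differ from this sketch.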

Open Access Article
Semantic Segmentation of Underwater Imagery Using Deep Networks Trained on Synthetic Imagery
J. Mar. Sci. Eng. 2018, 6(3), 93; https://doi.org/10.3390/jmse6030093 - 04 Aug 2018
Cited by 12 | Viewed by 2226
Abstract
Recent breakthroughs in the computer vision community have led to the emergence of efficient deep learning techniques for end-to-end segmentation of natural scenes. Underwater imaging stands to gain from these advances; however, deep learning methods require large annotated datasets for model training and these are typically unavailable for underwater imaging applications. This paper proposes the use of photorealistic synthetic imagery for training deep models that can be applied to interpret real-world underwater imagery. To demonstrate this concept, we look at the specific problem of biofouling detection on marine structures. A contemporary deep encoder–decoder network, termed SegNet, is trained using 2500 annotated synthetic images of size 960 × 540 pixels. The images were rendered in a virtual underwater environment under a wide variety of conditions and feature biofouling of various sizes, shapes, and colours. Each rendered image has a corresponding ground truth per-pixel label map. Once trained on the synthetic imagery, SegNet is applied to segment new real-world images. The initial segmentation is refined using an iterative support vector machine (SVM) based post-processing algorithm. The proposed approach achieves a mean Intersection over Union (IoU) of 87% and a mean accuracy of 94% when tested on 32 frames extracted from two distinct real-world subsea inspection videos. Inference takes several seconds for a typical image. Full article
(This article belongs to the Special Issue Underwater Imaging)
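The reported figure of merit is straightforward to reproduce: IoU is the intersection over union of the predicted and ground-truth per-pixel masks, averaged over classes or frames to give the mean IoU. The 4x4 masks below are purely illustrative.

```python
import numpy as np

def iou(pred, truth):
    """Intersection over Union of two boolean masks."""
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union if union else 1.0

pred = np.zeros((4, 4), bool);  pred[1:3, 1:4] = True   # 6 predicted px
truth = np.zeros((4, 4), bool); truth[1:3, 0:3] = True  # 6 true px
print(iou(pred, truth))  # intersection 4 px, union 8 px -> 0.5
```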

Open Access Feature Paper Article
A Novel Iterative Water Refraction Correction Algorithm for Use in Structure from Motion Photogrammetric Pipeline
J. Mar. Sci. Eng. 2018, 6(3), 77; https://doi.org/10.3390/jmse6030077 - 02 Jul 2018
Cited by 7 | Viewed by 2113
Abstract
Photogrammetry using structure from motion (SfM) techniques has evolved into a powerful tool for a variety of applications. Nevertheless, limits are imposed when two-media photogrammetry is needed, in cases such as submerged archaeological site documentation. Water refraction poses a clear limit on photogrammetric applications, especially when traditional methods and standardized pipelines are followed. This work tries to estimate the error introduced to depth measurements when no refraction correction model is used and proposes an easy-to-implement methodology in a modern photogrammetric workflow dominated by SfM and multi-view stereo (MVS) techniques. To be easily implemented within current software and workflows, this refraction correction approach is applied at the photo level. Results over two test sites in Cyprus against reference data suggest that, despite the assumptions and approximations made, the proposed algorithm can reduce the effect of refraction to twice the ground pixel size, regardless of the depth. Full article
(This article belongs to the Special Issue Underwater Imaging)
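For context on the error being corrected, a first-order sketch (not the paper's iterative, per-photo algorithm): for near-nadir through-water viewing, refraction makes the bottom appear shallower by roughly the refractive index, so the true depth is about n times the apparent one. The constant n = 1.34 is an assumed value for seawater.

```python
N_WATER = 1.34  # assumed refractive index of seawater

def corrected_depth(apparent_depth, n=N_WATER):
    """Small-angle refraction correction: Z_true ~ n * Z_apparent.

    An uncorrected SfM pipeline reports the apparent depth; the paper's
    method refines this crude relation iteratively, per photo.
    """
    return n * apparent_depth

print(corrected_depth(2.0))  # an apparent 2.0 m is about 2.68 m true depth
```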

Open Access Article
Photogrammetric Surveys and Geometric Processes to Analyse and Monitor Red Coral Colonies
J. Mar. Sci. Eng. 2018, 6(2), 42; https://doi.org/10.3390/jmse6020042 - 12 Apr 2018
Cited by 9 | Viewed by 1641
Abstract
This article describes the set of photogrammetric tools developed for the monitoring of Mediterranean red coral Corallium rubrum populations. The description encompasses the full processing chain: from the image acquisition to the information extraction and data interpretation. The methods applied take advantage of existing tools and new, innovative and specific developments in order to acquire data on relevant ecological information concerning the structure and functioning of a red coral population. The tools presented here are based on: (i) automatic orientation using coded quadrats; (ii) use of non-photorealistic rendering (NPR) and 3D skeletonization techniques; (iii) computation of distances between colonies from the same site; and (iv) the use of a plenoptic approach in an underwater environment. Full article
(This article belongs to the Special Issue Underwater Imaging)

Other


Open Access Data Descriptor
CADDY Underwater Stereo-Vision Dataset for Human–Robot Interaction (HRI) in the Context of Diver Activities
J. Mar. Sci. Eng. 2019, 7(1), 16; https://doi.org/10.3390/jmse7010016 - 16 Jan 2019
Cited by 9 | Viewed by 1944
Abstract
In this article, we present a novel underwater dataset collected from several field trials within the EU FP7 project "Cognitive autonomous diving buddy (CADDY)", where an Autonomous Underwater Vehicle (AUV) was used to interact with divers and monitor their activities. To our knowledge, this is one of the first efforts to collect a large public dataset in underwater environments with the purpose of studying and boosting object classification, segmentation and human pose estimation tasks. The first part of the dataset contains stereo camera recordings (≈10 K) of divers performing hand gestures to communicate with an AUV in different environmental conditions. The gestures can be used to test the robustness of visual detection and classification algorithms in underwater conditions, e.g., under color attenuation and light backscatter. The second part includes stereo footage (≈12.7 K) of divers free-swimming in front of the AUV, along with synchronized measurements from Inertial Measurement Units (IMU) located throughout the diver's suit (DiverNet), which serve as ground truth for human pose and tracking methods. In both cases, these rectified images allow the investigation of 3D representation and reasoning pipelines from low-texture targets commonly present in underwater scenarios. This work describes the recording platform, the sensor calibration procedure, the data format, and the software utilities provided for using the dataset. Full article
(This article belongs to the Special Issue Underwater Imaging)
