Special Issue "Underwater 3D Recording & Modelling"

A special issue of Remote Sensing (ISSN 2072-4292).

Deadline for manuscript submissions: closed (31 July 2020).

Special Issue Editors

Dr. Dimitrios Skarlatos
Guest Editor
Department of Civil Engineering and Geomatics, Cyprus University of Technology, 30 Archbishop Kyprianos Street, 3036 Limassol, Cyprus
Interests: underwater; image-based modelling; UAV; mapping; photogrammetry; color correction
Dr. Fabio Menna
Guest Editor
3DOM - 3D Optical Metrology Unit, FBK - Bruno Kessler Foundation, via Sommarive 18, 38123 Povo-Trento, Italy
Interests: photogrammetry; 3D optical metrology; underwater; simultaneous localization and mapping; visual inertial odometry; 3D modelling; geometric calibration; accuracy; sensor fusion; navigation; orientation; mapping; change detection; monitoring; automation
Dr. Erica Nocerino
Guest Editor
LIS Laboratory - Laboratoire d'informatique et Systèmes, I&M Team - Images & Models, Aix-Marseille Université, CNRS, ENSAM, Université De Toulon, Polytech, Luminy, Bat. A, case 925, 163 avenue de Luminy, 13288 Marseille cedex 9, France
Interests: photogrammetry; surveying; laser scanning; 3D modelling; quality control; inspection; verification; automation; monitoring; underwater; calibration; image processing

Special Issue Information

Dear Colleagues,

Underwater (UW) 3D recording and modelling represent an open challenge for scientists and engineers across disciplines. Sensors and algorithms developed and optimized for terrestrial applications are not well suited to the harsh conditions of the submerged environment. In recent years, however, we have witnessed groundbreaking technological developments that allow us to measure, digitize, and study the underwater world with unprecedented accuracy and level of detail. Photogrammetry-based approaches coupled with virtual and augmented reality (VR/AR) applications are becoming increasingly widespread in interdisciplinary communities such as archaeology, biology, and industry. At the same time, acoustic and LiDAR sensors lead the field in large-scale underwater mapping.

Motivated by these considerations, this Special Issue aims to collect the best research papers presented at the 2nd edition of the workshop ‘UNDERWATER 3D RECORDING & MODELLING’, held in Limassol, Cyprus, on May 2–3, 2019, as well as to attract the latest contributions from the international community.

Authors are strongly encouraged to submit their works on innovative approaches, methodologies, and applications using acoustic, LiDAR, or image sensors in underwater photogrammetry, 3D reconstruction for VR and AR applications, cultural heritage and archaeology, 3D metrology, marine biology, 3D scanning, underwater platforms (ROV, robots, etc.), 4D modelling, bathymetry, sensor integration and data fusion, reference control, and accuracy assessment of underwater surveys.

Dr. Dimitrios Skarlatos
Dr. Fabio Bruno
Dr. Fabio Menna
Dr. Erica Nocerino
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Underwater 3D modelling
  • 3D metrology
  • Cultural heritage
  • Virtual and augmented reality
  • Bathymetry
  • Sensor fusion
  • ROV
  • 3D biological monitoring

Published Papers (11 papers)


Editorial


Open Access Editorial
Editorial for Underwater 3D Recording & Modelling
Remote Sens. 2021, 13(4), 665; https://doi.org/10.3390/rs13040665 - 12 Feb 2021
Abstract
The Special Issue “Underwater 3D recording and modelling” is focused on challenges for 3D modeling and ways to overcome them in the underwater environment [...] Full article
(This article belongs to the Special Issue Underwater 3D Recording & Modelling)

Research


Open Access Article
On Improving the Training of Models for the Semantic Segmentation of Benthic Communities from Orthographic Imagery
Remote Sens. 2020, 12(18), 3106; https://doi.org/10.3390/rs12183106 - 22 Sep 2020
Abstract
The semantic segmentation of underwater imagery is an important step in the ecological analysis of coral habitats. To date, scientists produce fine-scale area annotations manually, an exceptionally time-consuming task that could be efficiently automatized by modern CNNs. This paper extends our previous work presented at the 3DUW’19 conference, outlining the workflow for the automated annotation of imagery from the first step of dataset preparation, to the last step of prediction reassembly. In particular, we propose an ecologically inspired strategy for an efficient dataset partition, an over-sampling methodology targeted on ortho-imagery, and a score fusion strategy. We also investigate the use of different loss functions in the optimization of a Deeplab V3+ model, to mitigate the class-imbalance problem and improve prediction accuracy on coral instance boundaries. The experimental results demonstrate the effectiveness of the ecologically inspired split in improving model performance, and quantify the advantages and limitations of the proposed over-sampling strategy. The extensive comparison of the loss functions gives numerous insights on the segmentation task; the Focal Tversky, typically used in the context of medical imaging (but not in remote sensing), results in the most convenient choice. By improving the accuracy of automated ortho image processing, the results presented here promise to meet the fundamental challenge of increasing the spatial and temporal scale of coral reef research, allowing researchers greater predictive ability to better manage coral reef resilience in the context of a changing environment. Full article
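For readers unfamiliar with it, the Focal Tversky loss that the authors found most convenient has a compact closed form. The sketch below is a minimal plain-Python illustration for a binary mask; the hyper-parameters alpha, beta, and gamma are common defaults from the literature, not the values tuned in the paper.

```python
def focal_tversky_loss(pred, target, alpha=0.7, beta=0.3, gamma=0.75, eps=1e-7):
    """Focal Tversky loss for one binary mask (flattened to a list).

    alpha weights false negatives, beta false positives; gamma < 1
    focuses training on hard, poorly segmented examples.
    """
    tp = sum(p * t for p, t in zip(pred, target))
    fn = sum((1 - p) * t for p, t in zip(pred, target))
    fp = sum(p * (1 - t) for p, t in zip(pred, target))
    tversky = (tp + eps) / (tp + alpha * fn + beta * fp + eps)
    return (1.0 - tversky) ** gamma
```

A perfect prediction yields a loss of 0; with alpha > beta, missed foreground pixels (false negatives) are penalized more heavily than false alarms, which is the usual remedy for class imbalance.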

Open Access Article
Coral Reef Monitoring by Scuba Divers Using Underwater Photogrammetry and Geodetic Surveying
Remote Sens. 2020, 12(18), 3036; https://doi.org/10.3390/rs12183036 - 17 Sep 2020
Abstract
Underwater photogrammetry is increasingly being used by marine ecologists because of its ability to produce accurate, spatially detailed, non-destructive measurements of benthic communities, coupled with affordability and ease of use. However, independent quality control, rigorous imaging system set-up, optimal geometry design and a strict modeling of the imaging process are essential to achieving a high degree of measurable accuracy and resolution. If a proper photogrammetric approach that enables the formal description of the propagation of measurement error and modeling uncertainties is not undertaken, statements regarding the statistical significance of the results are limited. In this paper, we tackle these critical topics, based on the experience gained in the Moorea Island Digital Ecosystem Avatar (IDEA) project, where we have developed a rigorous underwater photogrammetric pipeline for coral reef monitoring and change detection. Here, we discuss the need for a permanent, underwater geodetic network, which serves to define a temporally stable reference datum and a check for the time series of photogrammetrically derived three-dimensional (3D) models of the reef structure. We present a methodology to evaluate the suitability of several underwater camera systems for photogrammetric and multi-temporal monitoring purposes and stress the importance of camera network geometry to minimize the deformations of photogrammetrically derived 3D reef models. Finally, we incorporate the measurement and modeling uncertainties of the full photogrammetric process into a simple and flexible framework for detecting statistically significant changes among a time series of models. Full article
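The closing idea, that a change between two epochs is significant only if it exceeds the propagated uncertainty of both models, reduces to a simple per-point test. This is a hedged sketch of the general principle, not the paper's exact framework; the coverage factor k is an illustrative choice.

```python
import math

def significant_change(difference, sigma_epoch1, sigma_epoch2, k=1.96):
    """Flag a per-point change between two 3D models as significant.

    The combined standard uncertainty of the two epochs adds in
    quadrature; k = 1.96 corresponds to ~95% confidence.
    """
    combined_sigma = math.sqrt(sigma_epoch1 ** 2 + sigma_epoch2 ** 2)
    return abs(difference) > k * combined_sigma
```

A 5 cm difference with 1 cm uncertainty per epoch passes the test; a 1 cm difference with the same uncertainties does not, and would be reported as no detectable change.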

Open Access Article
Investigation of Chromatic Aberration and Its Influence on the Processing of Underwater Imagery
Remote Sens. 2020, 12(18), 3002; https://doi.org/10.3390/rs12183002 - 15 Sep 2020
Abstract
The number of researchers utilising imagery for the 3D reconstruction of underwater natural (e.g., reefs) and man-made structures (e.g., shipwrecks) is increasing. Often, the same procedures and software solutions are used for processing the images as in-air without considering additional aberrations that can be caused by the change of the medium from air to water. For instance, several publications mention the presence of chromatic aberration (CA). The aim of this paper is to investigate CA effects in low-cost camera systems (several GoPro cameras) operated in an underwater environment. We found that underwater and in-air distortion profiles differed by more than 1000 times in terms of maximum displacement and in terms of curvature. Moreover, significant CA effects were found in the underwater profiles that did not exist in-air. Furthermore, the paper investigates the effect of adjustment constraints imposed on the underwater self-calibration and the reliability of the interior orientation parameters. The analysis of the precision shows that in-air RMS values are just due to random errors. In contrast, the underwater calibration RMS values are 3x-6x higher than the exterior orientation parameter (EOP) precision, so these values contain both random error and the systematic effects from the CA. The accuracy assessment shows significant differences. Full article
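Lateral chromatic aberration can be pictured as each colour channel having its own radial distortion profile. The sketch below uses a Brown-style radial term with hypothetical per-channel coefficients; the coefficients are purely illustrative, not values measured in the paper.

```python
def radial_displacement(r, k1, k2=0.0):
    """Brown model radial distortion displacement: dr = k1*r^3 + k2*r^5."""
    return k1 * r ** 3 + k2 * r ** 5

def lateral_ca(r, k1_red=1.0e-7, k1_blue=1.2e-7):
    """Red-blue displacement difference (in pixels) at image radius r.

    In-air this difference is tiny; behind a flat port underwater the
    per-channel profiles diverge, so the difference grows with radius.
    """
    return radial_displacement(r, k1_blue) - radial_displacement(r, k1_red)
```

Because the displacement grows with the cube of the radius, chromatic fringing is worst at the image corners, which is where self-calibration is most affected.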

Open Access Article
Underwater 3D Rigid Object Tracking and 6-DOF Estimation: A Case Study of Giant Steel Pipe Scale Model Underwater Installation
Remote Sens. 2020, 12(16), 2600; https://doi.org/10.3390/rs12162600 - 12 Aug 2020
Abstract
The Zengwen desilting tunnel project installed an Elephant Trunk Steel Pipe (ETSP) at the bottom of the reservoir that is designed to connect the new bypass tunnel and reach downward to the sediment surface. Since ETSP is huge and its underwater installation is an unprecedented construction method, there are several uncertainties in its dynamic motion changes during installation. To assure construction safety, a 1:20 ETSP scale model was built to simulate the underwater installation procedure, and its six-degrees-of-freedom (6-DOF) motion parameters were monitored by offline underwater 3D rigid object tracking and photogrammetry. Three cameras were used to form a multicamera system, and several auxiliary devices—such as waterproof housing, tripods, and a waterproof LED—were adopted to protect the cameras and to obtain clear images in the underwater environment. However, since it is difficult for the divers to position the camera and ensure the camera field of view overlap, each camera can only observe the head, middle, and tail parts of ETSP, respectively, leading to a small overlap area among all images. Therefore, it is not possible to perform a traditional method via multiple images forward intersection, where the camera’s positions and orientations have to be calibrated and fixed in advance. Instead, by tracking the 3D coordinates of ETSP and obtaining the camera orientation information via space resection, we propose a multicamera coordinate transformation and adopted a single-camera relative orientation transformation to calculate the 6-DOF motion parameters. The offline procedure is to first acquire the 3D coordinates of ETSP by taking multiposition images with a precalibrated camera in the air and then use the 3D coordinates as control points to perform the space resection of the calibrated underwater cameras. Finally, we calculated the 6-DOF of ETSP by using the camera orientation information through both multi- and single-camera approaches. 
In this study, we show the results of camera calibration in the air and underwater environment, present the 6-DOF motion parameters of ETSP underwater installation and the reconstructed 4D animation, and compare the differences between the multi- and single-camera approaches. Full article
(This article belongs to the Special Issue Underwater 3D Recording & Modelling)
Show Figures

Graphical abstract

Open Access Article
3D Fine-scale Terrain Variables from Underwater Photogrammetry: A New Approach to Benthic Microhabitat Modeling in a Circalittoral Rocky Shelf
Remote Sens. 2020, 12(15), 2466; https://doi.org/10.3390/rs12152466 - 31 Jul 2020
Abstract
The relationship between 3D terrain complexity and fine-scale localization and distribution of species is poorly understood. Here we present a very fine-scale 3D reconstruction model of three zones of circalittoral rocky shelf in the Bay of Biscay. Detailed terrain variables are extracted from 3D models using a structure-from-motion (SfM) approach applied to ROTV images. Significant terrain variables that explain species location were selected using general additive models (GAMs) and micro-distribution of the species were predicted. Two models combining BPI, curvature and rugosity can explain 55% and 77% of the Ophiuroidea and Crinoidea distribution, respectively. The third model contributes to explaining the terrain variables that induce the localization of Dendrophyllia cornigera. GAM univariate models detect the terrain variables for each structural species in this third zone (Artemisina transiens, D. cornigera and Phakellia ventilabrum). To avoid the time-consuming task of manual annotation of presence, a deep-learning algorithm (YOLO v4) is proposed. This approach achieves very high reliability and low uncertainty in automatic object detection, identification and location. These new advances applied to underwater imagery (SfM and deep-learning) can resolve the very-high resolution information needed for predictive microhabitat modeling in a very complex zone. Full article

Open Access Article
Impact of Stereo Camera Calibration to Object Accuracy in Multimedia Photogrammetry
Remote Sens. 2020, 12(12), 2057; https://doi.org/10.3390/rs12122057 - 26 Jun 2020
Abstract
Camera calibration via bundle adjustment is a well-established standard procedure in single-medium photogrammetry. When using standard software and applying the collinearity equations in multimedia photogrammetry, the effects of refractive interfaces are compensated in an implicit form, hence by the usual parameters of interior orientation. This contribution analyses different calibration strategies for planar bundle-invariant interfaces. To evaluate the effects of implicitly modelling the refractive effects within bundle adjustment, synthetic error-free datasets are simulated. The behaviour of interior, exterior, and relative orientation parameters is analysed using synthetic datasets free of underwater imaging effects. A shift of the camera positions of 0.2% of the acquisition distance along the optical axis can be observed. The relative orientation of a stereo camera shows systematic effects when the angle of convergence varies. The stereo baseline increases by 1% at 25° convergence. Furthermore, the interface is set up at different distances to the camera. When the interface is at 50% distance assuming a parallel camera setup, the stereo baseline also increases by 1%. It becomes clear that in most cases the implicit modelling is not suitable for multimedia photogrammetry due to geometrical errors (scaling) and absolute positioning errors. Explicit modelling of the refractive interfaces is implemented into a bundle adjustment and is also used to analyse calibration parameters and deviations in object space. Real experiments show that it is difficult to separate the effects of implicit modelling, since other effects, such as poor image measurements, affect the final result. However, trends can be seen, and deviations are quantified. Full article
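The root of the problem is that refraction bends every ray by an angle-dependent amount governed by Snell's law, which a polynomial lens-distortion model can only approximate. A minimal sketch of the underlying geometry, collapsing the port to a single air-water boundary (the refractive indices are nominal textbook values, not calibrated ones):

```python
import math

def refract_angle(theta_incident, n1=1.0, n2=1.33):
    """Snell's law at a planar interface; angles in radians from the normal."""
    return math.asin(n1 / n2 * math.sin(theta_incident))
```

Because the bending is nonlinear in the incidence angle, no single set of interior orientation parameters can absorb it exactly for all object distances, which is why the implicit approach leaves distance-dependent scaling and positioning errors.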

Open Access Article
DepthLearn: Learning to Correct the Refraction on Point Clouds Derived from Aerial Imagery for Accurate Dense Shallow Water Bathymetry Based on SVMs-Fusion with LiDAR Point Clouds
Remote Sens. 2019, 11(19), 2225; https://doi.org/10.3390/rs11192225 - 24 Sep 2019
Abstract
The determination of accurate bathymetric information is a key element for near offshore activities; hydrological studies, such as coastal engineering applications, sedimentary processes, hydrographic surveying, archaeological mapping and biological research. Through structure from motion (SfM) and multi-view-stereo (MVS) techniques, aerial imagery can provide a low-cost alternative compared to bathymetric LiDAR (Light Detection and Ranging) surveys, as it offers additional important visual information and higher spatial resolution. Nevertheless, water refraction poses significant challenges on depth determination. Till now, this problem has been addressed through customized image-based refraction correction algorithms or by modifying the collinearity equation. In this article, in order to overcome the water refraction errors in a massive and accurate way, we employ machine learning tools, which are able to learn the systematic underestimation of the estimated depths. In particular, an SVR (support vector regression) model was developed, based on known depth observations from bathymetric LiDAR surveys, which is able to accurately recover bathymetry from point clouds derived from SfM-MVS procedures. Experimental results and validation were based on datasets derived from different test-sites, and demonstrated the high potential of our approach. Moreover, we exploited the fusion of LiDAR and image-based point clouds towards addressing challenges of both modalities in problematic areas. Full article
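The "systematic underestimation" that the SVR learns can be illustrated with a much simpler stand-in. The sketch below fits a one-dimensional linear correction from apparent (image-derived) to true (LiDAR) depth by ordinary least squares; the classic flat-water approximation predicts a slope near the refractive index of water (about 1.34) and a near-zero intercept. This is a didactic substitute for, not a reproduction of, the SVR model used in the paper.

```python
def fit_depth_correction(apparent, true):
    """Least-squares fit of true = a * apparent + b."""
    n = len(apparent)
    mean_x = sum(apparent) / n
    mean_y = sum(true) / n
    sxx = sum((x - mean_x) ** 2 for x in apparent)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(apparent, true))
    a = sxy / sxx
    b = mean_y - a * mean_x
    return a, b
```

The appeal of a learned (SVR) correction over this linear one is that it can also capture depth- and site-dependent deviations from the simple refraction model.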

Open Access Article
Scale Accuracy Evaluation of Image-Based 3D Reconstruction Strategies Using Laser Photogrammetry
Remote Sens. 2019, 11(18), 2093; https://doi.org/10.3390/rs11182093 - 07 Sep 2019
Abstract
Rapid developments in the field of underwater photogrammetry have given scientists the ability to produce accurate 3-dimensional (3D) models which are now increasingly used in the representation and study of local areas of interest. This paper addresses the lack of systematic analysis of 3D reconstruction and navigation fusion strategies, as well as associated error evaluation of models produced at larger scales in GPS-denied environments using a monocular camera (often in deep sea scenarios). Based on our prior work on automatic scale estimation of Structure from Motion (SfM)-based 3D models using laser scalers, an automatic scale accuracy framework is presented. The confidence level for each of the scale error estimates is independently assessed through the propagation of the uncertainties associated with image features and laser spot detections using a Monte Carlo simulation. The number of iterations used in the simulation was validated through the analysis of the final estimate behavior. To facilitate the detection and uncertainty estimation of even greatly attenuated laser beams, an automatic laser spot detection method was developed, with the main novelty of estimating the uncertainties based on the recovered characteristic shapes of laser spots with radially decreasing intensities. The effects of four different reconstruction strategies resulting from the combinations of Incremental/Global SfM, and the a priori and a posteriori use of navigation data were analyzed using two distinct survey scenarios captured during the SUBSAINTES 2017 cruise (doi: 10.17600/17001000). The study demonstrates that surveys with multiple overlaps of nonsequential images result in a nearly identical solution regardless of the strategy (SfM or navigation fusion), while surveys with weakly connected sequentially acquired images are prone to produce broad-scale deformation (doming effect) when navigation is not included in the optimization. 
Thus, the scenarios with complex survey patterns substantially benefit from using multiobjective BA navigation fusion. The errors in models produced by the most appropriate strategy were estimated at around 1% in the central parts and always below 5% at the extremities. The effects of combining data from multiple surveys were also evaluated. The introduction of additional vectors in the optimization of multisurvey problems successfully accounted for offset changes present in the underwater USBL-based navigation data, and thus minimized the effect of contradicting navigation priors. Our results also illustrate the importance of collecting a multitude of evaluation data at different locations and moments during the survey. Full article
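The Monte Carlo idea, propagating image-measurement noise into a confidence interval on the recovered scale, can be sketched in a few lines. Here two laser spots with a known physical separation scale a one-dimensional model coordinate axis; all numbers are illustrative assumptions, not values from the cruise data.

```python
import random
import statistics

def scale_from_lasers(spot_a, spot_b, separation_m, pixel_sigma,
                      n_iter=5000, seed=42):
    """Monte Carlo estimate of the model scale factor and its 1-sigma.

    Each iteration perturbs the two detected laser-spot coordinates with
    Gaussian noise and recomputes metres-per-model-unit.
    """
    rng = random.Random(seed)
    samples = []
    for _ in range(n_iter):
        a = spot_a + rng.gauss(0.0, pixel_sigma)
        b = spot_b + rng.gauss(0.0, pixel_sigma)
        samples.append(separation_m / abs(b - a))
    return statistics.mean(samples), statistics.stdev(samples)
```

The spread of the samples is the scale uncertainty; validating that it has stabilized as n_iter grows mirrors the iteration-count check described in the abstract.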

Open Access Article
Detecting Square Markers in Underwater Environments
Remote Sens. 2019, 11(4), 459; https://doi.org/10.3390/rs11040459 - 23 Feb 2019
Abstract
Augmented reality can be deployed in various application domains, such as enhancing human vision, manufacturing, medicine, military, entertainment, and archeology. One of the least explored areas is the underwater environment. The main benefit of augmented reality in these environments is that it can help divers navigate to points of interest or present interesting information about archaeological and touristic sites (e.g., ruins of buildings, shipwrecks). However, the harsh sea environment affects computer vision algorithms and complicates the detection of objects, which is essential for augmented reality. This paper presents a new algorithm for the detection of fiducial markers that is tailored to underwater environments. It also proposes a method that generates synthetic images with such markers in these environments. This new detector is compared with existing solutions using synthetic images and images taken in the real world, showing that it performs better than other detectors: it finds more markers than faster algorithms and runs faster than robust algorithms that detect the same number of markers. Full article

Other


Open Access Letter
Associations between Benthic Cover and Habitat Complexity Metrics Obtained from 3D Reconstruction of Coral Reefs at Different Resolutions
Remote Sens. 2020, 12(6), 1011; https://doi.org/10.3390/rs12061011 - 21 Mar 2020
Abstract
Quantifying the three-dimensional (3D) habitat structure of coral reefs is an important aspect of coral reef monitoring, as habitat architecture affects the abundance and diversity of reef organisms. Here, we used photogrammetric techniques to generate 3D reconstructions of coral reefs and examined relationships between benthic cover and various habitat metrics obtained at six different resolutions of raster cells, ranging from 1 to 32 cm. For metrics of 3D structural complexity, fractal dimension, which utilizes information on 3D surface areas obtained at different resolutions, and vector ruggedness measure (VRM) obtained at 1-, 2- or 4-cm resolution correlated well with benthic cover, with a relatively large amount of variability in these metrics being explained by the proportions of corals and crustose coralline algae. Curvature measures were, on the other hand, correlated with branching and mounding coral cover when obtained at 1-cm resolution, but the amount of variability explained by benthic cover was generally very low when obtained at all other resolutions. These results show that either fractal dimension or VRM obtained at 1-, 2- or 4-cm resolution, along with curvature obtained at 1-cm resolution, can effectively capture the 3D habitat structure provided by specific benthic organisms. Full article
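The fractal dimension used here follows from how measured surface area changes with raster resolution: under the common area-scaling assumption A(r) ∝ r^(2−D), D is recovered from the slope of a log-log regression. A minimal sketch under that assumption:

```python
import math

def fractal_dimension(resolutions, areas):
    """Surface fractal dimension from areas measured at several cell sizes.

    Fits log(area) = slope * log(resolution) + c and returns D = 2 - slope;
    a planar surface gives D = 2, while rougher surfaces approach 3.
    """
    xs = [math.log(r) for r in resolutions]
    ys = [math.log(a) for a in areas]
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return 2.0 - slope
```

Because the fit pools information across all cell sizes, the resulting metric is less sensitive to the choice of any single resolution than rugosity measured at one scale.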