Article
Peer-Review Record

Quantifying the Loss of Coral from a Bleaching Event Using Underwater Photogrammetry and AI-Assisted Image Segmentation

by Kai L. Kopecky 1,*, Gaia Pavoni 2, Erica Nocerino 3, Andrew J. Brooks 4, Massimiliano Corsini 2, Fabio Menna 5, Jordan P. Gallagher 1, Alessandro Capra 6, Cristina Castagnetti 6, Paolo Rossi 6, Armin Gruen 7, Fabian Neyer 8, Alessandro Muntoni 2, Federico Ponchio 2, Paolo Cignoni 2, Matthias Troyer 9, Sally J. Holbrook 1,4 and Russell J. Schmitt 1,4
Reviewer 1: Anonymous
Reviewer 2:
Reviewer 3:
Remote Sens. 2023, 15(16), 4077; https://doi.org/10.3390/rs15164077
Submission received: 28 June 2023 / Revised: 1 August 2023 / Accepted: 9 August 2023 / Published: 18 August 2023
(This article belongs to the Special Issue Computer Vision-Based Methods and Tools in Remote Sensing)

Round 1

Reviewer 1 Report

1. Overview

The article discusses using underwater photogrammetry, computer vision, and AI to detect impacts on organisms and community composition. By combining survey methods with AI-assisted image analysis, the researchers quantified the impact of coral bleaching on a tropical reef at different scales. Key findings included changes in coral surface areas and size-dependent mortality patterns. This technique improves our understanding of ecological consequences and informs mitigation decisions.

This article has significant practical application value, but some issues remain. I suggest that the article be overhauled.

2. Main modification problems

(1) The journal has certain requirements for the algorithmic content of an article. However, this article primarily applies mature software without any self-designed algorithms, which limits its overall contribution.

(2) There are too many keywords in the article; it is recommended to reduce their number.

(3) I believe that artificial intelligence methods have likely been used to perform this task before, but this is not mentioned in the Introduction of your article.

Author Response

1. Overview:

The article discusses using underwater photogrammetry, computer vision, and AI to detect impacts on organisms and community composition. By combining survey methods with AI-assisted image analysis, the researchers quantified the impact of coral bleaching on a tropical reef at different scales. Key findings included changes in coral surface areas and size-dependent mortality patterns. This technique improves our understanding of ecological consequences and informs mitigation decisions.

This article has significant practical application value, but some issues remain. I suggest that the article be overhauled.

2. Main modification problems:

(1) The journal has certain requirements for the algorithmic content of an article. However, this article primarily applies mature software without any self-designed algorithms, which limits its overall contribution.

We thank Reviewer 1 for raising this concern. However, we did in fact modify the TagLab software in order to complete the analyses described in this study. Previously, TagLab supported only three-channel (RGB) images and could not load or analyze DEMs. We modified the software to layer the RGB images on their respective DEMs in order to extract 3D approximations of the surface areas of coral colonies (and other objects). We have added text to the Methods section to describe this software modification (lines 202-206):

“Lastly, we modified the TagLab software for this study in order to approximate 3D metrics of reef organisms. Previously, TagLab did not support the loading or analysis of DEMs, but only three-channel (RGB) images. The software was modified to be able to layer the RGB images on their respective DEMs in order to extract 3D approximations of coral colony (and other) surface areas.”
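
As an illustration of the kind of computation this modification enables (a minimal sketch only; the function and variable names below are hypothetical and are not TagLab's actual API), the 3D surface area of a segmented region can be approximated from a DEM by scaling each pixel's planar area by the secant of its local slope:

```python
import numpy as np

def surface_area_3d(dem, mask, pixel_size):
    """Approximate the 3D surface area (m^2) of the region selected by
    `mask`, given a DEM on the same pixel grid.

    dem        : 2D array of elevations (m)
    mask       : 2D boolean array, True inside the segmented region
    pixel_size : ground size of one pixel (m)
    """
    # Elevation gradients (dz/dy, dz/dx) per meter along each axis.
    dz_dy, dz_dx = np.gradient(dem, pixel_size)
    # Surface-area element: dA_3d = dA_2d * sqrt(1 + (dz/dx)^2 + (dz/dy)^2).
    scale = np.sqrt(1.0 + dz_dx**2 + dz_dy**2)
    return float(np.sum(scale[mask]) * pixel_size**2)

# Hypothetical usage with a synthetic DEM and mask (1 mm pixels):
rng = np.random.default_rng(0)
dem = rng.normal(0.0, 0.002, (500, 500)).cumsum(axis=0)  # toy relief
mask = np.zeros((500, 500), dtype=bool)
mask[100:300, 150:350] = True                            # one "colony"
planar = mask.sum() * 0.001**2
print(f"planar {planar:.4f} m^2 vs 3D {surface_area_3d(dem, mask, 0.001):.4f} m^2")
```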

(2) There are too many keywords in the article; it is recommended to reduce their number.

We appreciate Reviewer 1’s concern here. However, the template for an article published in Remote Sensing specifies that “three to ten pertinent keywords specific to the article yet reasonably common within the subject discipline” may be listed. We selected ten keywords (i.e., the maximum allowable) due to the interdisciplinary nature of our study, as we hoped to appeal to a diverse audience of scientists.

(3) I believe that artificial intelligence methods have likely been used to perform this task before, but this is not mentioned in the Introduction of your article.

We thank Reviewer 1 for this point. We have added text to the Introduction of our article (lines 90-93) stating that deep learning methods have been utilized to increase the efficiency of semantically segmenting coral reef images, and we have included three additional references to support this claim:

“Deep-learning methods have been implemented to greatly increase the efficiency with which complex or irregular objects, such as coral colonies, can be segmented from images [28,29] and reef scale changes in rugosity and structure can be detected over time [30].”

Reviewer 2 Report

This paper describes an AI-assisted way to quantify coral colonies. This work is highly relevant to work on climate change and ocean acidification.

I have the following comments for the authors to consider:

1) What is the accuracy of the DEM, since this seems to be critical in the analysis? How did you determine the distance between the scuba diver and the target?

2) The work seems to use only the images for segmentation. Since the 3D information is available, it may be interesting to see the effect of using the 3D point cloud in this process.

3) The paper mentioned that color accuracy is essential; could you please explain more? This may be quite challenging, since lighting, camera angle, orientation, water turbidity, etc. may induce color variations in the images.

 

Author Response

This paper describes an AI-assisted way to quantify coral colonies. This work is highly relevant to work on climate change and ocean acidification.

 

I have the following comments for the authors to consider:

 

1) What is the accuracy of the DEM, since this seems to be critical in the analysis? How did you determine the distance between the scuba diver and the target?

 

We thank Reviewer 2 for highlighting this important information. In the paper, we reported that all the DEMs and orthomosaics had a sub-millimeter pixel size. As for the expected accuracy, we have added references to our previous publications (Nocerino et al. 2020, Rossi et al. 2020), where this crucial aspect was described and analyzed in great detail; we think that an in-depth description of this topic would be beyond the scope of this manuscript. As for determining the distance between the diver and the reef, divers used the depth gauges on their dive computers to help maintain a consistent depth, and we selected reef sites with minimal variation in depth across our experimental plots. We have added a sentence to the main text to clarify this point (lines 143-145):

 

“Divers were able to maintain a consistent distance above the reef while acquiring images using the depth gauge on their dive computers [31,34].”

 

2) The work seems to use only the images for segmentation. Since the 3D information is available, it may be interesting to see the effect of using the 3D point cloud in this process.

 

We appreciate Reviewer 2’s suggestion. Members of our team are currently developing techniques for segmenting 3D point clouds; however, this work is still nascent and was therefore not ready to be incorporated into the current study. Nonetheless, we have added a ‘Future Directions’ section to the manuscript, in which we include a statement about the ongoing development of point cloud segmentation (lines 415-418):

 

“...researchers in our group are currently developing techniques to perform segmentation on 3D point clouds, which will further enhance the accuracy of measuring the 3D surface areas of complex structures like coral colonies using photogrammetric methods.”

 

3) The paper mentioned that color accuracy is essential; could you please explain more? This may be quite challenging, since lighting, camera angle, orientation, water turbidity, etc. may induce color variations in the images.

 

We thank Reviewer 2 for requesting clarification on this point. Consistency in color across different orthophotomosaics is necessary both for training the automatic classifier and for applying it once trained. While certain environmental conditions underwater (e.g., cloud cover, water turbidity, depth) can cause variation during image acquisition, we deploy color-correction cards on the reef and shoot images in raw format so that we can extensively correct the color and lighting in our images, both before and after building the orthophotomosaics. We have clarified this point in the main text (lines 151-159):

 

“Images are acquired in raw format and white balance adjustment is performed using color checkers distributed in the measurement area before converting the images to the highest quality JPG format to reproduce a more accurate color spectrum [28]. This allows us to extensively adjust the lighting and color of the acquired images, despite environmental conditions (e.g., cloud cover, water turbidity, depth, etc.) that might cause variation in these attributes. This step is fundamental in our protocol as color-fidelity is critical for human identification of marine organisms, as well as facilitation of the training and implementation of the automated semantic segmentation process described in Section 2.3 below.”
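
As a minimal sketch of the kind of white-balance adjustment described above (this is not the authors' exact processing chain; the image and the location of the gray checker patch are hypothetical), a neutral gray square from a color checker can be used to rescale the channels of a linear image:

```python
import numpy as np

def white_balance_from_gray_patch(image, patch):
    """Rescale RGB channels so a neutral gray reference patch becomes
    achromatic. `image` is a float array (H, W, 3) in linear space;
    `patch` is a slice of the image covering the gray checker square.
    """
    means = patch.reshape(-1, 3).mean(axis=0)  # per-channel mean of the patch
    gains = means.mean() / means               # gains that equalize the channels
    return np.clip(image * gains, 0.0, 1.0)

# Hypothetical usage: the gray square occupies rows 200-240, cols 380-420.
image = np.random.rand(1024, 1024, 3)
corrected = white_balance_from_gray_patch(image, image[200:240, 380:420])
```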

Reviewer 3 Report

Review of: “Quantifying the Loss of Coral from a Bleaching Event Using Underwater Photogrammetry and AI-assisted Image Segmentation” by Kopecky et al.

General comment

This study combined underwater photogrammetry with AI-assisted image segmentation software to quantify the impact of a coral bleaching event on a South Pacific coral reef at about 10 m depth. In addition, the study proposed using 3D surface area as a metric to estimate coral cover. Underwater photogrammetric surveys were conducted between 2017 and 2019. Downward-pointing and oblique photographs were acquired by SCUBA divers who completed a series of parallel passes along the length of a reef plot, then a series of perpendicular passes, with 80% overlap. Color checkers were distributed in the measurement area before the images were converted to the highest-quality JPG format to reproduce a more accurate color spectrum. Digital Elevation Models (DEMs) and orthorectified photomosaics of each plot were generated through photogrammetric processing with the Agisoft Metashape software. The authors detected a loss in the amount of live coral on the reef and a reorganization of the size structure of a coral population through size-dependent mortality of bleached corals.

Coral bleaching is not new, but the proposed method to monitor corals is new. The use of underwater photogrammetry and orthomosaics should be encouraged, to expand this technique for mapping and monitoring marine habitats. However, it is much more difficult to obtain accurate data with marine photogrammetric surveying, for the following reasons.

1) It is difficult to maintain sensor stability on the same line between waypoints, at the same depth and angle, and with the same percentage of overlap. “A fixed distance above the reef of 1-2 m” (line 142) is far from the stability needed to maintain a standard for the photos.

2) Light within the water may be scattered or absorbed by solid particles. Most of the visible light spectrum is absorbed within 10 meters, changing the spectral characteristics of the targets. This issue is especially relevant when the study’s objective is a temporal analysis of images. The spectral characteristics of the targets are affected by the amount and angle of light incident on the sea surface and by the water turbidity during each photogrammetric survey. The weather conditions of each day therefore affect the quality of the images, impairing the comparison of orthomosaics obtained in different seasons and years. A description of the sensor, with its spectral characteristics and spatial resolution, is necessary, as are the meteorological conditions on the days when the data were obtained.

3) The Agisoft Metashape program is calibrated with parallax to perform aero-photogrammetric surveys. If there is no parallax adjustment in the software, underwater 3D reconstruction will be affected. Figures with three-dimensional models of the studied corals were not presented.

The idea is great and I recognize the difficulty of obtaining these data, but it is necessary to improve the image capture and to conduct a suitable study of the best wavelengths for identifying and quantifying coral bleaching. Just as a suggestion, a multispectral sensor coupled to an underwater drone would be more efficient at capture and would facilitate image processing.

 

Author Response

Review of: “Quantifying the Loss of Coral from a Bleaching Event Using Underwater Photogrammetry and AI-assisted Image Segmentation” by Kopecky et al.

 

General comment

 

This study combined underwater photogrammetry with AI-assisted image segmentation software to quantify the impact of a coral bleaching event on a South Pacific coral reef at about 10 m depth. In addition, the study proposed using 3D surface area as a metric to estimate coral cover. Underwater photogrammetric surveys were conducted between 2017 and 2019. Downward-pointing and oblique photographs were acquired by SCUBA divers who completed a series of parallel passes along the length of a reef plot, then a series of perpendicular passes, with 80% overlap. Color checkers were distributed in the measurement area before the images were converted to the highest-quality JPG format to reproduce a more accurate color spectrum. Digital Elevation Models (DEMs) and orthorectified photomosaics of each plot were generated through photogrammetric processing with the Agisoft Metashape software. The authors detected a loss in the amount of live coral on the reef and a reorganization of the size structure of a coral population through size-dependent mortality of bleached corals.

 

Coral bleaching is not new, but the proposed method to monitor corals is new. The use of underwater photogrammetry and orthomosaics should be encouraged, to expand this technique for mapping and monitoring marine habitats. However, it is much more difficult to obtain accurate data with marine photogrammetric surveying, for the following reasons. 1) It is difficult to maintain sensor stability on the same line between waypoints, at the same depth and angle, and with the same percentage of overlap. “A fixed distance above the reef of 1-2 m” (line 142) is far from the stability needed to maintain a standard for the photos.

 

We appreciate Reviewer 3’s comment. We addressed all the photographic and photogrammetric issues, and how we coped with them, in previous publications, including how we configured the camera system to keep it as stable as possible throughout image acquisition. Also, to assure the needed coverage under conditions of sensor instability, we acquired a highly redundant camera network geometry with very large image overlap and sidelap (along the strip > 90%, across the strip ~80%), and we planned and realized a GSD (ground sample distance) smaller than necessary (sub-millimeter).
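
To make these planning numbers concrete, here is the standard GSD and photo-spacing arithmetic as a short sketch; the camera values are illustrative (roughly those of a Micro Four Thirds sensor), not the exact survey configuration:

```python
# Ground sample distance (GSD) and exposure spacing for a nadir survey.
pixel_pitch = 3.7e-6   # sensor pixel size (m); illustrative value
focal       = 0.012    # lens focal length (m); illustrative value
distance    = 1.5      # camera-to-reef distance (m)
overlap     = 0.90     # forward overlap along a strip (>90% in the survey)
sidelap     = 0.80     # overlap between adjacent strips (~80% in the survey)
width_px, height_px = 4608, 3456   # image size (pixels)

gsd = pixel_pitch * distance / focal             # meters per pixel on the reef
footprint_across = gsd * width_px                # ground footprint across-track (m)
footprint_along  = gsd * height_px               # ground footprint along-track (m)
photo_spacing = footprint_along * (1 - overlap)  # distance between exposures (m)
strip_spacing = footprint_across * (1 - sidelap) # distance between strips (m)

print(f"GSD: {gsd * 1000:.2f} mm/pixel")         # sub-millimeter at these values
print(f"exposure every {photo_spacing:.2f} m, strips every {strip_spacing:.2f} m")
```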

 

Current photogrammetric algorithms, built upon SfM (structure from motion) and MVS (multi-view stereo), do not require a rigid acquisition geometry, as stereoscopic photogrammetry does. Therefore, we need only guarantee the repeatability of the 3D reconstruction, not the repeatability of each acquired image, since we work on 3D models and orthomosaics for our monitoring purposes. Dramatic changes in distances and viewing angles can affect the accuracy potential of the 3D reconstruction; however, this was never the case in our acquisitions, so we do not expect significant changes in the final 3D outputs, even if the waypoints were not exactly the same.

 

2) Light within the water may be scattered or absorbed by solid particles. Most of the visible light spectrum is absorbed within 10 meters, changing the spectral characteristics of the targets. This issue is especially relevant when the study’s objective is a temporal analysis of images. The spectral characteristics of the targets are affected by the amount and angle of light incident on the sea surface and by the water turbidity during each photogrammetric survey. The weather conditions of each day therefore affect the quality of the images, impairing the comparison of orthomosaics obtained in different seasons and years. A description of the sensor, with its spectral characteristics and spatial resolution, is necessary, as are the meteorological conditions on the days when the data were obtained.

 

We thank Reviewer 3 for raising these concerns. First, we photographed our plots during the same time of year (August, the austral winter) to minimize seasonal variation in light incidence, on days with light cloud cover to minimize light ‘dappling’ on the reef (which can affect alignment of the images), and during the same time of day (late morning to early afternoon) to minimize daily variation in light incidence. As for the sensor, we used a Panasonic Lumix DMC-GH4 camera in a Nauticam underwater housing equipped with a dome port to reduce the distortion generated by the water. We have included text describing the environmental conditions and time periods in which the images were acquired, as well as the sensor used (lines 148-150):

 

 “To minimize temporal and environmentally-caused variation in light incidence on the reef, we photographed our plots during the same time of year (August, the austral winter) and during the same time of day (between late morning and early afternoon).”

 

3) The Agisoft Metashape program is calibrated with parallax to perform aero-photogrammetric surveys. If there is no parallax adjustment in the software, underwater 3D reconstruction will be affected. Figures with three-dimensional models of the studied corals were not presented.

 

We thank Reviewer 3 for pointing this out, but we are not sure we clearly understand the reviewer’s concern about parallax. If the reviewer means that the software is not specifically designed for underwater processing (i.e., refraction is not explicitly compensated for within the bundle adjustment), we note that we used a dome port, which makes the effect of refraction negligible for the purposes of this study. We refer to our previous publications, where we analyzed these topics in depth:

 

Menna, F., Nocerino, E., Ural, S. and Gruen, A., 2020. Mitigating image residuals systematic patterns in underwater photogrammetry. The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 43, pp.977-984.

 

Nocerino, E., Neyer, F., Grün, A., Troyer, M., Menna, F., Brooks, A., Capra, A., Castagnetti, C. and Rossi, P., 2019. Comparison of diver-operated underwater photogrammetric systems for coral reef monitoring. The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 42.

    

We have added an additional figure (now Figure 3) to the manuscript displaying various layers of a 3D model of one of our reef plots.



The idea is great and I recognize the difficulty of obtaining these data, but it is necessary to improve the image capture and to conduct a suitable study of the best wavelengths for identifying and quantifying coral bleaching. Just as a suggestion, a multispectral sensor coupled to an underwater drone would be more efficient at capture and would facilitate image processing.

 

We thank Reviewer 3 for this suggestion, and we have added a statement to our new ‘Future Directions’ Section (4.4) noting this as a potential avenue for future work that builds on what we have presented here (lines 413-431):

“...we are exploring the incorporation of multispectral sensors affixed to underwater drones to facilitate the acquisition and processing of underwater images used for underwater photogrammetry.”

Round 2

Reviewer 3 Report

The authors adequately addressed my questions and I recommend approving the manuscript. Just as a note, parallax (needed to adjust relative topography or bathymetry) can be modified in Metashape: Tools, Preferences, Parallax.
