Peer-Review Record

Band Weight-Optimized BiGRU Model for Large-Area Bathymetry Inversion Using Satellite Images

J. Mar. Sci. Eng. 2025, 13(2), 246; https://doi.org/10.3390/jmse13020246
by Xiaotao Xi, Gongju Guo * and Jianxiang Gu
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Submission received: 6 January 2025 / Revised: 22 January 2025 / Accepted: 24 January 2025 / Published: 27 January 2025
(This article belongs to the Special Issue New Advances in Marine Remote Sensing Applications)

Round 1

Reviewer 1 Report (Previous Reviewer 1)

Comments and Suggestions for Authors

The authors have revised the manuscript and have responded to my previous comments and suggestions. I think the current version is sufficiently improved to be considered for publication. I have no further comments, except that the authors should take into account the comments and suggestions of the other reviewers and of the academic editors during the final steps of the peer review process.

Best regards

Author Response

Dear Reviewer,

Thanks for your kind suggestion. We will make every effort to revise accordingly.

Best regards

Reviewer 2 Report (New Reviewer)

Comments and Suggestions for Authors

The research article titled "Band Weight-Optimized BiGRU Model for Large-Area Bathymetry Inversion Using Satellite Images" was subjected to a detailed review and evaluated as follows:

1. BWO_BiGRU and BWOS_BiGRU models were optimized to estimate underwater topography in large areas, and a new approach was introduced. It was especially emphasized that the RANSAC algorithm was more effective in processing ICESat-2 data than DBSCAN. The selection of an important ecosystem such as Shark Bay, which is on the UNESCO World Heritage List, increases the environmental value of the study. It was presented that deep learning models provide higher accuracy (lower RMSE) compared to the traditional Stumpf method, mainly when used with hyperspectral images such as EnMAP.

2. The results were not discussed adequately. The results of different models should be more comprehensively compared with similar studies in the literature. For example, the improvement provided by BWOS_BiGRU, especially on Sentinel-2 and Landsat 9, can be related to its innovation over previous models.

3. The balance between the advantages and limitations of hyperspectral images (e.g., processing time, data volume) and the ease of use and prevalence of multispectral images should be emphasized more intensely.

4. Given the criticisms regarding the models' "black-box" structure, it may be possible to clarify and make the working principles of BWO_BiGRU and BWOS_BiGRU more interpretable.

5. A detailed discussion should be added on the error rates (RMSE > 4 m) shown by the models, especially in deep waters or at depths greater than 10 m.

6. More details of the result maps and visualization with color scales would strengthen the presentation of the findings. Figure 1, in particular, is not readable and is difficult to understand.

7. Suggestions on how such models can be used in real-time applications would appeal more to the reader.

8. Cost and technical requirements for using BWO_BiGRU and BWOS_BiGRU in large areas are negotiable.

Author Response

Dear Reviewer,

We have tried our best to answer all the questions and implement those good suggestions as well.

Comment 1: BWO_BiGRU and BWOS_BiGRU models were optimized to estimate underwater topography in large areas, and a new approach was introduced. It was especially emphasized that the RANSAC algorithm was more effective in processing ICESat-2 data than DBSCAN. The selection of an important ecosystem such as Shark Bay, which is on the UNESCO World Heritage List, increases the environmental value of the study. It was presented that deep learning models provide higher accuracy (lower RMSE) compared to the traditional Stumpf method, mainly when used with hyperspectral images such as EnMAP.

Response: Thanks for your kind suggestion. We will make every effort to revise accordingly.

Comment 2: The results were not discussed adequately. The results of different models should be more comprehensively compared with similar studies in the literature. For example, the improvement provided by BWOS_BiGRU, especially on Sentinel-2 and Landsat 9, can be related to its innovation over previous models.

Response: Thanks for your kind suggestion. 

(a) This has been revised; see lines 661-672 in the new version. We searched for and cited papers related to bathymetry inversion in Shark Bay, Western Australia. Most of these studies collect bathymetry data with field methods such as multi-beam or single-beam echo sounding and combine them with traditional empirical models such as Stumpf and remote sensing images to perform bathymetry inversion for the study area. These methods typically require long time periods to survey shallow-water areas with missing data. In contrast, the methods proposed in this paper can obtain bathymetry values with a certain degree of accuracy over a large area in a short period of time, which is crucial for filling in missing bathymetry data. The ICESat-2 revisit cycle is approximately 91 days, allowing bathymetry measurements of the same water body at different times and facilitating comparison and validation of depth measurement results. This is significant for long-term studies of bathymetry variations in a specific water body.

(b) This has been revised; see lines 80-98 and lines 630-635 in the new version. The improvement of the BWOS_BiGRU model builds on our previous models, BoBiLSTM and ABO-CNN. The BoBiLSTM model focuses on selecting band features, removing bands that have a minimal impact on bathymetry inversion from hyperspectral or multispectral images. For smaller study areas, the resulting decrease in depth accuracy may not be very significant, but for larger water bodies, the depth accuracy declines notably. Therefore, utilizing information from all effective bands is crucial. The ABO-CNN model, while offering slightly lower depth-prediction accuracy on multispectral images, also tends to slightly overestimate depth in extremely shallow areas. Considering these issues, we believe it is necessary to incorporate spectral information closely related to bathymetry inversion (such as the Stumpf band ratio) into the Sentinel-2 and Landsat 9 multispectral inputs to enhance the deep learning model's ability to capture interdependencies between the data, thereby improving bathymetry inversion accuracy.
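For illustration only, the following minimal Python sketch shows how a Stumpf-style log-ratio feature could be computed and appended to the raw band values as an extra model input; the band choice (blue/green), the sample reflectance values, and the scaling constant n = 1000 are assumptions of this sketch, not values taken from the paper.

import numpy as np

# Hypothetical reflectance samples for a blue and a green band (e.g., Sentinel-2 B2/B3).
blue = np.array([0.021, 0.035, 0.050])
green = np.array([0.030, 0.041, 0.048])

n = 1000.0  # fixed scaling constant commonly used with the Stumpf (2003) ratio model
stumpf_ratio = np.log(n * blue) / np.log(n * green)

# Stack the ratio alongside the raw band values as an additional input feature.
features = np.column_stack([blue, green, stumpf_ratio])
print(features.shape)  # (3, 3)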

Comment 3: The balance between the advantages and limitations of hyperspectral images (e.g., processing time, data volume) and the ease of use and prevalence of multispectral images should be emphasized more intensely. 

Response: Thanks for your kind suggestion. This has been revised; see lines 638-650 in the new version. The VNIR channels of hyperspectral images have the advantage of a large number of bands with relatively narrow spectral ranges. Fully utilizing these characteristics can improve the accuracy of bathymetry inversion. However, it is important to consider the available spectral range of hyperspectral images and the relative loss of bands: if the bands that contribute significantly to bathymetry inversion are lost, the accuracy of bathymetry inversion from hyperspectral images will also decrease. Multispectral images, which have achieved global coverage, may offer slightly lower bathymetry inversion accuracy than hyperspectral images; however, by selecting appropriate bathymetry inversion methods, global shallow-water monitoring can be realized. Furthermore, Sentinel-2 offers visible-light bands at 10 m resolution, which provides an opportunity for further research on shoreline changes through multi-temporal, multispectral image-based bathymetry monitoring.

Comment 4: Given the criticisms regarding the models' "black-box" structure, it may be possible to clarify and make the working principles of BWO_BiGRU and BWOS_BiGRU more interpretable. 

Response: Thanks for your kind suggestion. This has been revised; see lines 607-612 in the new version. In the data preprocessing stage, the BWO_BiGRU and BWOS_BiGRU models do not require band feature selection. During large-scale bathymetry inversion, the models fully learn the interdependencies between the image bands, assigning higher weights to the bands that contribute most to bathymetry inversion and thereby extracting more effective water depth information.
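This is not the authors' architecture; it is a minimal, self-contained PyTorch sketch of the general idea described above (a learnable per-band weighting in front of a bidirectional GRU, whose learned weights can be inspected for interpretability). The class name, layer sizes, and the 12-band input are illustrative assumptions.

import torch
import torch.nn as nn

class BandWeightedBiGRU(nn.Module):
    """Toy illustration: learnable per-band weights feeding a bidirectional GRU."""
    def __init__(self, n_bands, hidden=64):
        super().__init__()
        self.band_weights = nn.Parameter(torch.ones(n_bands))  # one weight per band
        self.bigru = nn.GRU(input_size=1, hidden_size=hidden,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)  # depth regression output

    def forward(self, x):                      # x: (batch, n_bands) reflectances
        w = torch.softmax(self.band_weights, dim=0)
        x = (x * w).unsqueeze(-1)              # (batch, n_bands, 1), bands as a sequence
        out, _ = self.bigru(x)
        return self.head(out[:, -1, :])        # predicted depth

model = BandWeightedBiGRU(n_bands=12)
depth = model(torch.rand(8, 12))               # (8, 1)

Inspecting torch.softmax(model.band_weights, dim=0) after training would show which bands the model emphasizes, which is the kind of interpretability the response refers to.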

Comment 5: A detailed discussion should be added on the error rates (RMSE > 4 m) shown by the models, especially in deep waters or at depths greater than 10 m.

Response: Thanks for your kind suggestion. This has been revised; see lines 507-512 in the new version. In model training and testing, we used fewer than 50,000 ICESat-2 bathymetry points, of which only 29 had a depth greater than 10 m, accounting for approximately 0.06% of the data. As a result, the deep learning models exhibited significant errors when predicting water depths above this threshold, with RMSE values exceeding 4 m. This indicates that the performance of deep learning models in bathymetry inversion is poor when the training data are insufficient.
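As a purely illustrative aside, the per-depth-range error check described here can be reproduced with a few lines of Python; the arrays below are synthetic stand-ins, not the paper's data.

import numpy as np

# Synthetic predicted depths and ICESat-2 reference depths (metres).
pred = np.array([1.1, 2.0, 4.8, 9.5, 12.0, 14.6])
ref = np.array([1.0, 2.2, 5.0, 10.2, 15.8, 18.9])

deep = ref > 10.0  # restrict the error metric to the sparsely sampled deep range
rmse_deep = np.sqrt(np.mean((pred[deep] - ref[deep]) ** 2))
print(f"RMSE for depths > 10 m: {rmse_deep:.2f} m")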

Comment 6: More details of the result maps and visualization with color scales would strengthen the presentation of the findings. Figure 1, in particular, is not readable and is difficult to understand.

Response: Thanks for your kind suggestion. It has been revised in the new version. We have adjusted the scale and clarity of Figure 1 and zoomed in on the ICESat-2 strip data displayed in the image, while also providing an interpretation of the content. We have modified the scale and clarity of Figures 6, 7, 8, and 9 so that readers can more clearly observe the details of the result images.

Comment 7: Suggestions on how such models can be used in real-time applications would appeal more to the reader. 

Response: Thanks for your kind suggestion. This has been revised; see lines 686-691 in the new version. Our research can also be extended to real-time applications. For example, in the data preprocessing stage, the RANSAC method can be encapsulated into an application that directly processes ICESat-2 bathymetry points. The BWO_BiGRU model can then be integrated for training, testing, and inference to provide an end-to-end service, making it more convenient for practical production applications.
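As a rough illustration of the kind of encapsulation mentioned above, the sketch below fits scikit-learn's RANSACRegressor to a synthetic ICESat-2-like photon cloud to separate seafloor returns from noise; the synthetic data, the residual threshold, and the use of RANSACRegressor as a stand-in for the paper's RANSAC step are all assumptions of this sketch.

import numpy as np
from sklearn.linear_model import RANSACRegressor

# Synthetic ATL03-like photon cloud: along-track distance (m) and height (m).
rng = np.random.default_rng(0)
along_track = rng.uniform(0, 500, 2000).reshape(-1, 1)
height = -3.0 - 0.004 * along_track.ravel() + rng.normal(0, 0.15, 2000)
noise_idx = rng.choice(2000, 300, replace=False)
height[noise_idx] += rng.uniform(1, 8, 300)  # scattered noise photons above the seafloor

# Robustly fit the seafloor trend; inlier_mask_ flags photons kept as bathymetry returns.
ransac = RANSACRegressor(residual_threshold=0.5)
ransac.fit(along_track, height)
seafloor_photons = height[ransac.inlier_mask_]
print(seafloor_photons.size, "photons retained as seafloor returns")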

Comment 8: Cost and technical requirements for using BWO_BiGRU and BWOS_BiGRU in large areas are negotiable. 

Response: Thanks for your kind suggestion. The BWO_BiGRU and BWOS_BiGRU models are based on mainstream deep learning frameworks and can be implemented in Python. ICESat-2 ATL03 data, EnMAP hyperspectral images, and Sentinel-2 and Landsat 9 multispectral images can be freely accessed from their official websites. We hope to make our models and methods open source and practical, allowing readers to conduct further research and exploration.

Please let us know if there are any further issues regarding the new version; we will try our best to address them.

Best Regards

Xiaotao Xi, Gongju Guo, Jianxiang Gu

Round 2

Reviewer 2 Report (New Reviewer)

Comments and Suggestions for Authors

What I meant by the word 'cost' was not the economic cost; I was referring to the computational processing and time load. But you have handled all of the remaining responses correctly with great effort. I congratulate the authors for their efforts.

Author Response

Dear Reviewer,

Thank you very much for your letter.

Comment 1: What I meant by the word 'cost' was not the economic cost; I was referring to the computational processing and time load. But you have handled all of the remaining responses correctly with great effort. I congratulate the authors for their efforts.

Response: Thanks for your kind suggestion. The study area of this work is only Shark Bay, Australia, and the actual computational processing and time load involved are not very large. However, we will continue this research over larger areas, and even global shallow-water regions, to further improve our models and methods and to keep their computational cost acceptable at such scales. Thank you again for your approval of our paper.

Please let us know if there are any further issues regarding the new version; we will try our best to address them.

Best Regards

Xiaotao Xi, Gongju Guo, Jianxiang Gu

This manuscript is a resubmission of an earlier submission. The following is a list of the peer review reports and author responses from that submission.


Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

The authors submitted a well-written and interesting manuscript dealing with shallow-water bathymetry mapping from satellite imagery using deep learning methods. The proposed methodological approach provides accurate results (as reported in Table 6 and Figure 7) compared to traditional satellite-derived bathymetry mapping methods such as that of Stumpf. This study would help scientists and researchers working in coastal areas to find solutions to the impacts of climate change. In the manuscript, the authors describe the methodology well, and the conclusions are supported by the results. However, they should improve the manuscript by adding lines (isobaths) indicating the variations of the bathymetry range in Figures 8 and 11 (on Lines 544, 11); this information would help readers better explore the water-depth variation in this study area.

Comments on the Quality of English Language

Minor editing of the English language is required.

Author Response

Reviewer 1

 

Comment 1: However, they should improve the manuscript by adding lines (isobaths) indicating the variations of the bathymetry range in Figures 8 and 11 (on Lines 544, 11); this information would help readers better explore the water-depth variation in this study area.

Response: Thanks for your kind suggestion. It has been revised to line 510 (Figure 7) and line 667 (Figure 10) in the new version. 

Please let us know if there are any further issues regarding the new version; we will try our best to address them.

Best Regards

Xiaotao Xi, Ming Chen, Hua Yang, Yingxi Wang

Author Response File: Author Response.pdf

Reviewer 2 Report

Comments and Suggestions for Authors

The authors present work based on the use of Earth observation data to estimate bathymetry. Although in principle the work is well constructed, the authors make some important inaccuracies.

In the description of the EO datasets, the concept of a scene is unclear. Why do they speak of several scenes when, for EnMAP and S2, they acquire only one dataset?

The acquisition dates are different between S2 and L9 and very different from EnMAP. There is no description of the study site in terms of climatology or environmental conditions and how they change between May (EnMAP) and October (S2, L9). If they do not change, describe why.

There is no analysis of how this difference in acquisition timing impacts the models.

The EO data are not corrected for the water column contribution. Considering that the authors use reflectance data (L2), the signal is not further corrected for water effects.

An analysis of the application limits of the method is completely missing.

The validation process with direct data is missing, or rather introduced at the end (GEBCO)  but not sufficiently. 

The terminology used does not always seem scientifically correct. 

The authors spend many lines explaining the mathematical models they adopt. I suggest reducing this text by referring the details to the reference bibliography, leaving in evidence the parts, modifications, and customizations necessary to the work presented.

The authors try to compare the results obtained with GEBCO bathymetry, which besides its uncertainties also has a much lower spatial resolution than the 30 m of EnMAP.

I suggest resampling EnMAP on GEBCO to have a comparison that is not just visual.

Please accept these comments as suggestions to improve the substantial work done up to now. Despite my bluntness, I believe that after a new experimental design and a redrafting of the text, this paper will be strongly improved, and I will be happy to support you in the future.

Comments for author File: Comments.pdf

Author Response

Reviewer 2

 

Comment 1: In the description of the EO datasets, the concept of a scene is unclear. Why do they speak of several scenes when, for EnMAP and S2, they acquire only one dataset?

Response: Thanks for your kind suggestion. Remote sensing images are usually referred to as scenes, while datasets are generally mosaics created from many scenes, so it is recommended to use "scene." EnMAP and S2 consist of three scenes, while L9 consists of one scene.

 

Comment 2: The acquisition dates are different between S2 and L9 and very different from EnMAP. There is no description of the study site in terms of climatology or environmental conditions and how they change between May (EnMAP) and October (S2, L9). If they do not change, describe why.

Response: Thanks for your kind suggestion. (a) The EnMAP satellite was launched on April 1, 2022, and the available image data are limited. A cloud-free, high-quality image of Shark Bay acquired on May 2, 2023 was selected. There were no corresponding L9 images for that date, and the images from the nearest dates, May 16th (Figure 1a) and June 9th (Figure 1b), did not show clear water depth textures. Therefore, after comprehensive comparison, the image from October 31st (Figure 1c) was chosen. Similarly, there was no corresponding S2 image for that date. Considering that both S2 and L9 are multispectral and that the October 31st image was chosen for L9, the S2 image from October 16th, which had better image quality, was chosen.

Figure 1. Water depth textures in L9 images at different times (Please refer to the uploaded document for detailed images.)

(b) Climate has a greater impact on land changes than on water depth changes in the region. Bathymetry inversion places high demands on meteorological and environmental conditions. Section 2.1 "Analysis Area" describes the water clarity in this area. In addition, under cloud-free and rain-free conditions, the differences between these images are mainly reflected in spectral resolution, which is beneficial for improving the accuracy of bathymetry inversion. This has been modified at lines 141, 156, and 171 in the new version.

 

 

Comment 3: There is no analysis of how this difference in acquisition timing impacts the models.

Response: Thanks for your kind suggestion. The model proposed in this paper establishes a nonlinear mapping relationship between the spectral information of the image and water depth, focusing on analyzing the differences in bathymetry inversion between multispectral and hyperspectral data. The model is unaffected by the time of data acquisition. Additionally, we applied refraction correction and tidal correction to the collected ICESat-2 water depth data, which improves the accuracy of water depth inversion (lines 624-629 in the new version).

 

Comment 4: The EO data are not corrected for the water column contribution. Considering that the authors use reflectance data (L2), the signal is not further corrected for water effects.

Response: Thanks for your kind suggestion. The images used in this study were all atmospherically corrected, and refraction correction and tidal correction were then applied to the ICESat-2 ATL03 data (lines 209-219 in the new version) to reduce errors in the in situ water depths used for bathymetry inversion. These steps meet the requirements for bathymetry inversion.
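As a minimal illustration (not the paper's exact processing chain), the Python sketch below applies the common near-nadir refraction scaling (after Parrish et al., 2019) and a placeholder tidal shift to apparent ICESat-2 depths; the depth values and the tide height are assumptions of this sketch.

import numpy as np

# Near-nadir refraction scaling: apparent subsurface depths are multiplied by
# n_air / n_water (about 0.746) to account for the slower speed of light in water.
n_air, n_water = 1.00029, 1.34116
apparent_depth = np.array([1.2, 3.5, 7.8])          # metres below the water surface (illustrative)
refracted_depth = apparent_depth * (n_air / n_water)

# Placeholder tidal correction: shift depths by the tide height at acquisition time.
tide_height = 0.4                                    # hypothetical tide level (m)
corrected_depth = refracted_depth - tide_height
print(corrected_depth)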

 

Comment 5: An analysis of the application limits of the method is completely missing.

Response: Thanks for your kind suggestion. In the discussion section, we explain that the BWO_BiGRU model, combined with the RANSAC-based method for extracting ICESat-2 water depth points, is more suitable for generating large-area bathymetry inversion maps from hyperspectral satellites, whereas the BWOS_BiGRU model with the same RANSAC-based extraction is more suitable for generating large-area bathymetry inversion maps in shallow-water areas (0-5 m) from multispectral satellites (lines 641-646 in the new version). These analyses describe the application limits of the bathymetry inversion methods.

 

Comment 6: The validation process with direct data is missing, or rather introduced at the end (GEBCO)  but not sufficiently. 

Response: Thanks for your kind suggestion. In the discussion section, Reference 71 is introduced (lines 658-660 in the new version), which demonstrates that the water depth maps generated using multibeam scanning in Shark Bay (shown in Figure 2a below, taken from the reference) are comparable to those produced by our method (Figure 2b); this serves as an effective data validation step. Additionally, two ICESat-2 bathymetry strips (20210626GT1R and 20201215GT3L) are compared in the discussion section with water depth profiles predicted by the different models (see Figure 9 in the new version), which also serves as an effective data validation method.

Figure 2. (a) The reference bathymetric map provided by reference 71; (b) The bathymetry map generated by BWO_BiGRU model using EnMAP image. (Please refer to the uploaded document for detailed images.)

 

Comment 7: The terminology used does not always seem scientifically correct. 

Response: Thanks for your kind suggestion. We will make every effort to revise accordingly.

 

(a) Line 137: Ultra? The blue band. Remove "Ultra".

Response: Thanks for your kind suggestion. It has been revised to Coastal blue (line 137 in the new version).

 

(b) Lines 142-143: Please describe better the meaning of scenes. EnMAP data have been acquired on 2 May 2023, only one image. What do you mean by "scene"?

Response: Thanks for your kind suggestion. Remote sensing images are usually referred to as scenes, while datasets are generally mosaics created from many scenes, so it is recommended to use "scene." EnMAP and S2 consist of three scenes, while L9 consists of one scene. The numbers for the three EnMAP scenes have been added to line 143 of the new version.

 

(c) Line 151: Nearshore? It is not a classification/specification for a spectral channel; remove it.

Response: Thanks for your kind suggestion. It has been revised to Coastal aerosol (line 152 in the new version).

 

(d) Line 155: What are these codes indicating? As for EnMAP, on the date/hour you mention only one L2A S2 image was acquired; when were the other two acquired?

Response: Thanks for your kind suggestion. T49JGM, T49JHM, and T50JKR are the numbers corresponding to the S2 images, and there are multiple very similar images of S2 for the same time period. The numbers allow readers to understand which specific scene we used. EnMAP also captured three scenes, and the corresponding numbers have been added to line 143 of the new version. 

 

(e) Line 170: L9 B10 is acquired with a GSD of 100 m. Did you resample (and if so, how) these data to 30 m, or did you collect the data directly at this GSD?

Response: Thanks for your kind suggestion. We resampled band 10, which originally had a spatial resolution of 100 m, to 30 m using the Layer Stacking tool in ENVI 5.6 (lines 173-175 in the new version).
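For readers who do not use ENVI, a comparable resampling step could be done in Python with GDAL, as in the hedged sketch below; the file names are hypothetical, and the bilinear resampling choice is only one reasonable option, not necessarily what the Layer Stacking tool in ENVI 5.6 applies.

from osgeo import gdal

# Resample the 100-m Landsat 9 TIRS band 10 onto a 30-m grid (file names are hypothetical).
gdal.Warp("L9_B10_30m.tif", "L9_B10_100m.tif",
          xRes=30, yRes=30, resampleAlg="bilinear")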

 

(f) Line 171: Try to be more rigorous; "big image" does not sound scientifically correct.

Response: Thanks for your kind suggestion. It has been revised.

 

(g) Line 174: Wavebands

Response: Thanks for your kind suggestion. It has been revised to Band Number (line 177 in the new version).

 

(h) Lines 225-227: It differs from the Figure 3 caption; please align!

Response: Thanks for your kind suggestion. It has been revised.

 

(i) Line 238: It is formally wrong: the ellipses are not themselves the difference; they emphasize something else. What?

Response: Thanks for your kind suggestion. It has been revised.

 

(j) Line 283: Introduce this concept better; as written, it does not sound scientifically proper.

Response: Thanks for your kind suggestion. It has been revised to feature importance (line 280 in the new version).

 

(k) Line 415: is it correct? (pieces)

Response: Thanks for your kind suggestion. It has been revised to strips (line 381 in the new version).

 

(l) Line 436: This paragraph does not contain ICESat-2 but EnMAP, S2, and L9.

Response: Thanks for your kind suggestion. It has been revised to Bathymetry Inversion from Different Satellite Images (line 402 in the new version).

 

Comment 8: The authors spend many lines explaining the mathematical models they adopt. I suggest reducing this text by referring the details to the reference bibliography, leaving in evidence the parts, modifications, and customizations necessary to the work presented.

Response: Thanks for your kind suggestion. It has been revised.

 

Comment 9: The authors try to compare the results obtained with GEBCO bathymetry, which besides its uncertainties also has a much lower spatial resolution than the 30 m of EnMAP.

Response: Thanks for your kind suggestion. Taking into account the comments of other reviewers, we have revised the comparison with the GEBCO bathymetry results as shown in the following figure. Figure 3a shows the GEBCO bathymetry results; Figure 3b shows the bathymetry map generated by the BWO_BiGRU model from the EnMAP image, resampled to the same spatial resolution as the GEBCO_2023 Grid bathymetric data; Figure 3c shows the original EnMAP image; and Figure 3d shows the bathymetry map generated by the BWO_BiGRU model using the EnMAP image (line 667 in the new version).

Figure 3. (a) Bathymetric data of Shark Bay from the GEBCO_2023 Grid; (b) the bathymetry map generated by the BWO_BiGRU model from the EnMAP image, resampled to the same spatial resolution as the GEBCO_2023 Grid bathymetric data; (c) RGB base map generated from the EnMAP image; (d) the bathymetry map generated by the BWO_BiGRU model using the EnMAP image. (Please refer to the uploaded document for detailed images.)

Comment 10: I suggest resampling EnMAP on GEBCO to have a comparison that is not just visual.

Response: Thanks for your kind suggestion. As shown in Figure 4, we resampled the EnMAP water area image and the bathymetry inversion map generated by the BWO_BiGRU model to the same resolution as GEBCO. Taking into account the comments of other reviewers, we added isobaths and finally revised the comparison as shown in Figure 3.

Figure 4. (a) Bathymetric data of Shark Bay from the GEBCO_2023 Grid; (b) the bathymetry map generated by the BWO_BiGRU model from the EnMAP image, resampled to the same spatial resolution as the GEBCO_2023 Grid bathymetric data; (c) RGB base map generated from the EnMAP image, resampled to the same spatial resolution as the GEBCO_2023 Grid bathymetric data. (Please refer to the uploaded document for detailed images.)

Please let us know if there are any further issues regarding the new version; we will try our best to address them.

 

Best Regards

 

Xiaotao Xi, Ming Chen, Hua Yang, Yingxi Wang

 

Author Response File: Author Response.pdf

Reviewer 3 Report

Comments and Suggestions for Authors

As a coastal engineer, I read this paper from the viewpoint of application in marine engineering. I concluded that this paper is not of interest to marine engineers, because it focuses on various techniques to generate bathymetric data from satellite observations.

I would therefore suggest publishing this paper in "Remote Sensing" rather than in JMSE.

The paper compares various methods to generate a bathymetric map, but there is no comparison with "ground truth", e.g., comparison with a map made by echo sounding (see the example). Especially to judge the methods for application in oceanography and coastal science, more information on the degree of detail of the generated maps is needed.

Comments for author File: Comments.pdf

Author Response

Reviewer 3

 

Comment 1: As a coastal engineer, I read this paper from the viewpoint of application in marine engineering. I concluded that this paper is not of interest to marine engineers, because it focuses on various techniques to generate bathymetric data from satellite observations. I would therefore suggest publishing this paper in "Remote Sensing" rather than in JMSE.

Response: Thanks for your kind suggestion. There are many methods for measuring water depth, among which the combination of satellite images, ICESat-2, and deep learning is currently an effective and cost-effective approach.

First, satellite images can be used to observe whether the water depth texture in shallow sea areas is clear, which preliminarily determines whether optical remote sensing can be used for water depth measurement. If the water depth texture in the image is clear enough, bathymetry inversion can provide water depth values of a certain accuracy for a very large, previously unsurveyed water area in a short time, which is important for filling in water depths in blank areas.

Second, ATL03 data from ICESat-2, used as in situ water depth data, have been studied in a large number of papers; the satellite covers latitudes up to about 88° north and south and has a revisit cycle of about 91 days. The water depth of the same area can therefore be measured at different times, enabling comparative validation of water depth measurements, which is important for long-term research on water depth changes in a given area.

Third, deep learning methods can fully utilize the spectral information of satellite images and, compared with traditional single-band or dual-band methods, can significantly improve the accuracy of bathymetry inversion. The cross-validation of multiple satellite images and multiple methods used in this paper can provide a more convenient and general approach for marine engineering in practical environments, and the paper is therefore suitable for publication in JMSE.

 

Comment 2: The paper compares various methods to generate a bathymetric map, but there is no comparison with "ground truth", e.g., comparison with a map made by echo sounding (see the example). Especially to judge the methods for application in oceanography and coastal science, more information on the degree of detail of the generated maps is needed.

Response: Thanks for your kind suggestion and for providing us with the reference example. (a) The discussion section of this paper introduces Reference 71 (lines 658-660 in the new version), which demonstrates that the water depth maps generated using multibeam scanning in Shark Bay (shown in Figure 1a below, taken from the reference) are comparable to those produced by our method. This constitutes an effective data validation process. Additionally, two ICESat-2 bathymetry strips (20210626GT1R and 20201215GT3L) are compared in the discussion section with water depth profiles predicted by the different models (see Figure 9 in the new version), which also serves as an effective data validation method.

Figure 1. (a) The reference bathymetric map provided by reference 71; (b) The bathymetry map generated by BWO_BiGRU model using EnMAP image (Please refer to the uploaded document for detailed images.)

(b) Taking into account the comments of other reviewers, we have added isobaths to all bathymetric maps and revised the comparison with the GEBCO bathymetry results as shown in the figure below. Figure 2a shows the GEBCO bathymetry results; Figure 2b shows the bathymetry map generated by the BWO_BiGRU model from the EnMAP image, resampled to the same spatial resolution as the GEBCO_2023 Grid bathymetric data; Figure 2c shows the original EnMAP image; and Figure 2d shows the bathymetry map generated by the BWO_BiGRU model using the EnMAP image.

Figure 2. (a) Bathymetric data of Shark Bay from the GEBCO_2023 Grid; (b) the bathymetry map generated by the BWO_BiGRU model from the EnMAP image, resampled to the same spatial resolution as the GEBCO_2023 Grid bathymetric data; (c) RGB base map generated from the EnMAP image; (d) the bathymetry map generated by the BWO_BiGRU model using the EnMAP image. (Please refer to the uploaded document for detailed images.)

Please let us know if there are any further issues regarding the new version; we will try our best to address them.

 

Best Regards

 

Xiaotao Xi, Ming Chen, Hua Yang, Yingxi Wang

Author Response File: Author Response.pdf

Round 2

Reviewer 2 Report

Comments and Suggestions for Authors

Dear Authors, 

Some minor considerations should be added to the text; you may find them in the attached file.

This paper is still lacking in its validation steps. I suggest adding a sentence in which the authors propose to address this open point in future research.

Comments for author File: Comments.pdf

Author Response

Reviewer 2

 

Comment 1: "Cloud free" is enough; in describing the EO data, the "rain free" condition is never discussed.

Response: Thanks for your kind suggestion. It has been revised. (line 142 in the new version).

 

Comment 2: Under visual inspection??

Response: Thanks for your kind suggestion. Satellite-derived bathymetry (SDB) methods rely heavily on water quality; therefore, visually inspecting the images to decide whether they are suitable for SDB is usually the first step.

 

Comment 3: continuously,

Response: Thanks for your kind suggestion. It has been revised. (line 143 in the new version).

 

Comment 4: reported where? (The coordinate reference system is selected as WGS84.)

Response: Thanks for your kind suggestion. That part has been modified; we use the same map coordinate system for all images. (line 144 in the new version).

 

Comment 5: Sentinel 2 is a satellite, MSI is the multispectral sensor.

Response: Thanks for your kind suggestion. It has been revised. (line 148 in the new version).

 

Comment 6: equal to the previous.  

Response: Thanks for your kind suggestion. It has been revised. (line 153 in the new version).

 

Comment 7: remove it. 

Response: Thanks for your kind suggestion. It has been revised. (line 155 in the new version).

 

Comment 8: as the previous similar comment

Response: Thanks for your kind suggestion. It has been revised. (line 156 in the new version).

 

Comment 9: same error

Response: Thanks for your kind suggestion. It has been revised. (line 160 in the new version).

 

Comment 10: read above

Response: Thanks for your kind suggestion. It has been revised. (line 166 in the new version). 

 

Comment 11: If you report this information for ATLAS, for coherence you should provide the same information for the other sensors (geographic coordinates of Table 2).

Response: Thanks for your kind suggestion. ATLAS on ICESat-2 provides strip data, and the latitude and longitude of each strip differ. Reporting this information lets readers know the specific data extent we used. The extent of the images provided by the different satellite sensors in the study area is described in Section 2.1 "Analysis Area": the eastern bay of Shark Bay, Australia, about 85 km (north-south) by 30 km (east-west). The area is approximately 1725 km², with geographic coordinates ranging from longitude 113°56′E to 114°14′E and latitude 25°40′S to 26°30′S. (line 113 in the new version).

 

Comment 12: You should properly describe in the text that it is the green ramp (not the diagonal) that indicates something, not the blue ellipses.

Response: Thanks for your kind suggestion. It has been revised. (line 222 and line 235 in the new version).

 

Comment 13: why?? (The model is unaffected by the time of data acquisition.)

Response: Thanks for your kind suggestion. It has been revised. The model studies the nonlinear mapping relationship between the spectral information of the images and water depth. In bathymetry inversion, the quality of the satellite images and the water quality in the study area have a significant impact on the model, while the time of data acquisition has no effect on it.

 

Comment 14: the authors did it I suppose

Response: Thanks for your kind suggestion. We downloaded bathymetric maps from GEBCO and provided the download source in reference 71 of our manuscript. (line 641 in the new version).

 

Comment 15: resolution?

Response: Thanks for your kind suggestion. It has been revised. (line 643 in the new version).

 

Comment 16: 460 m? Are you sure? EnMAP (Figures 10b and 10c) seems more detailed than 460 m.

Response: Thanks for your kind suggestion. On the one hand, the study area is approximately 1725 km², with dimensions of about 85 km north-south and 30 km east-west; when the images are reduced to the size displayed in the manuscript, the texture details can still be fully observed. On the other hand, EnMAP has a large number of spectral bands, each covering a narrow spectral range, so the texture of the bathymetry inversion is good, which is an advantage of using hyperspectral images for bathymetry inversion.

 

Comment 17: Is it supported by any analytical approach? As it has been written, it seems a qualitative evaluation.

Response: Thanks for your kind suggestion. The accuracy analysis of the bathymetry inversion is conducted in Figure 9 through comparison and validation of the different models, where the profile line of the bathymetry map generated from the hyperspectral image is closer to the in situ water depths provided by ICESat-2. Comparing the aforementioned bathymetry map, its 460 m resampled version, and GEBCO's bathymetry map is a qualitative assessment. Subsequently, dividing all bathymetry maps into 2 m depth intervals using isobaths is a quantitative assessment, which can provide supplementary data toward GEBCO's goal of global water depth measurement.
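Purely as an illustration of drawing isobaths at a fixed 2 m interval (not the figures in the paper), the following Python sketch contours a synthetic depth grid with matplotlib; the grid, extents, and interval are assumptions of this sketch.

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 30, 200)   # km, east-west (illustrative extent)
y = np.linspace(0, 85, 200)   # km, north-south
X, Y = np.meshgrid(x, y)
depth = 12 * (X / 30)         # synthetic depth grid deepening eastward (m)

levels = np.arange(0, 14, 2)  # isobaths every 2 m
cs = plt.contour(X, Y, depth, levels=levels, colors="k", linewidths=0.5)
plt.clabel(cs, fmt="%d m")
plt.xlabel("east-west (km)")
plt.ylabel("north-south (km)")
plt.show()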

 

Comment 18: What is?

Response: Thanks for your kind suggestion. It has been revised. (line 646 in the new version).

 

Comment 19: Why is it useful for the discussion?

Response: Thanks for your kind suggestion. Bathymetry inversion can be further studied for a range of related applications, such as the stromatolite structures mentioned in the text. Previous studies have often used multispectral images for bathymetry inversion and analyzed the relationship between water depth and biotope habitats by combining them with sample data from field surveys. The quality of the bathymetry maps generated from the hyperspectral image surpasses that of the multispectral images, so this is of great significance for further research on stromatolite structures.

 

Comment 20: where in the figure is?

Response: Thanks for your kind suggestion. It has been revised. (line 655 in the new version).

 

Comment 21: add and describe the role of the red ellipse

Response: Thanks for your kind suggestion. It has been revised. (line 663 in the new version).

 

Comment 22: EnMAP, L9 and S2? Or only ICESAT-2

Response: Thanks for your kind suggestion. It has been revised. (line 672 in the new version).

 

Comment 23: Is it a limitation, or is it an important benchmark to be considered? What happens for larger areas? Or smaller? (The water area in the study region is relatively large, approximately 1725 km².)

Response: Thanks for your kind suggestion. The area shown is the selected study area, determined by data availability; in our opinion, it is plausible that our method can be applied at any scale if the necessary data are available. For unknown areas with large water bodies, our method can quickly obtain water depth values with a certain accuracy in a short period, is easy to implement, and is highly practical for engineering use. For larger areas, involving the mosaicking of multiple images and the use of the RANSAC method to extract ICESat-2 water depth points, our method can also be tried.

 

Comment 24: This paper is still lacking in its validation steps. I suggest adding a sentence in which the authors propose to address this open point in future research.

Response: Thanks for your kind suggestion. The ATL03 data from ICESat-2, serving as in situ water depth data, have been extensively studied in numerous papers. With a revisit cycle of approximately 91 days, the satellite can provide measurements of the same water body at different times, enabling comparative validation of water depth measurements. Therefore, comparing the water depth profiles from the bathymetry inversion maps with ATL03 data can also serve as an effective validation process. As the reviewer suggested, we will explore more validation methods in future research. Once again, thank you for your suggestions for our paper.

 

Please let us know if there are any further issues regarding the new version; we will try our best to address them.

Best Regards

Xiaotao Xi, Ming Chen, Hua Yang, Yingxi Wang

Author Response File: Author Response.pdf

Reviewer 3 Report

Comments and Suggestions for Authors

My comment related to “ground truth” has been resolved. However, I still think publication in “Remote Sensing” is more appropriate.

Author Response

Thanks for your kind suggestion.

Round 3

Reviewer 2 Report

Comments and Suggestions for Authors

Dear Authors,

thanks for the effort used to improve your paper. 
I'm here again to ask you to be more rigorous.

The paper, in its present form, is far from having scientific soundness. For example, you refer to EnMAP for its hyperspectral capabilities without mentioning the limitation of the signal above 1.4/1.6 microns. Probably only the VNIR channels can be used, and only under very clear-water conditions that you still have not analytically defined.

I strongly suggest fully reviewing the concepts you use.
