Article
Peer-Review Record

Satellite Derived Bathymetry Using Machine Learning and Multi-Temporal Satellite Images

Remote Sens. 2019, 11(10), 1155; https://doi.org/10.3390/rs11101155
by Tatsuyuki Sagawa 1,*, Yuta Yamashita 2, Toshio Okumura 1 and Tsutomu Yamanokuchi 1
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Submission received: 28 March 2019 / Revised: 29 April 2019 / Accepted: 12 May 2019 / Published: 14 May 2019
(This article belongs to the Special Issue Satellite Derived Bathymetry)

Round 1

Reviewer 1 Report

The comments of the reviewer have been addressed as much as possible. Research on water depth estimation by machine learning is considered to be practically important. I expect improvement in the turbid water areas.

Author Response

Thank you for your comments. We will study turbid water areas in the next step.


Reviewer 2 Report

I would like to congratulate the Authors for a very interesting and valuable study. It was a pleasure to read the manuscript, and some new interesting facts regarding SDB were valuable for me as a reader.

The outline of the paper is very good. The general motivation for the conducted research is described clearly. The results section and summary provide all important information.

 

One issue or question to the authors that wasn't addressed in the manuscript regarding the results in Figures 5 and 6 and Tables 4 and 5:

We can clearly see that the RMSE and ME for the training data are much smaller than in the case of the evaluation data. How do you know that the model isn't over-trained? Are there measures to determine this, or do the Authors propose any solutions in the case of RF?

A minor thing that could also be improved in the paper is language correction. I'm not a native speaker, so it's difficult for me to judge this issue; however, I found a few spelling errors. Therefore, some language corrections/editing could possibly be performed before final publication of the manuscript.

 


Author Response

Comments and Suggestions for Authors

I would like to congratulate the Authors for a very interesting and valuable study. It was a pleasure to read the manuscript, and some new interesting facts regarding SDB were valuable for me as a reader.

The outline of the paper is very good. The general motivation for the conducted research is described clearly. The results section and summary provide all important information.

->Thank you for your comments.

 

One issue or question to the authors that wasn't addressed in the manuscript regarding the results in Figures 5 and 6 and Tables 4 and 5:

We can clearly see that the RMSE and ME for the training data are much smaller than in the case of the evaluation data. How do you know that the model isn't over-trained? Are there measures to determine this, or do the Authors propose any solutions in the case of RF?

->When the accuracy for the evaluation data is almost the same as the accuracy for the training data, the model is considered not over-trained. One solution is to increase the number of training data, but we couldn't use more data in this paper due to limitations of the analysis environment.

We added a sentence in L412.  
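For illustration, a minimal sketch of this over-training check, assuming a scikit-learn RandomForestRegressor and synthetic stand-in data; the feature construction, array sizes, and parameters below are hypothetical, not the paper's implementation:

    # Over-training check: compare training RMSE against held-out evaluation RMSE.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.random((20000, 6))                          # stand-in reflectance features
    y = 20.0 * X[:, 0] + rng.normal(0.0, 1.0, 20000)    # stand-in reference depths (m)

    X_tr, X_ev, y_tr, y_ev = train_test_split(X, y, test_size=0.5, random_state=0)
    model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_tr, y_tr)

    def rmse(t, p):
        return float(np.sqrt(np.mean((t - p) ** 2)))

    print(f"training RMSE:   {rmse(y_tr, model.predict(X_tr)):.2f} m")
    print(f"evaluation RMSE: {rmse(y_ev, model.predict(X_ev)):.2f} m")
    # A small gap between the two suggests the model is not over-trained; random
    # forests typically fit the training set very closely by construction, so the
    # held-out error is the meaningful diagnostic.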

 

A minor thing that could also be improved in the paper is language correction. I'm not a native speaker, so it's difficult for me to judge this issue; however, I found a few spelling errors. Therefore, some language corrections/editing could possibly be performed before final publication of the manuscript.

->Our paper has undergone English language checking and editing by the MDPI service, and we made modifications following their comments.


Author Response File: Author Response.docx

Reviewer 3 Report

Overall comments:

This is an interesting article and I enjoyed reading it. Such technological comparisons provide good material to the scientific community and I personally find them very useful for future referrals. Bathymetry is an ever-growing and modernizing scientific field, and advancements in the remote sensing community are truly welcome.

However, there are various kinds of comparisons in the literature, and I am not sure this particular study brings excitement to the table. I personally do not understand the need to include single-beam sonar in a direct comparison with SDB or ALB technologies. The fundamental differences in the technology and data characteristics are vast, and the methods are clearly used for different purposes in the survey and hydro-geomatics community. I understand the use of multi-beam and side-scan sonar, but I am not sure why the authors considered a single-beam medium for their research. Comparing SDB to ALB only could provide a better outcome.

Second, I do not see specifics of the ALB and SB data sets. How were they collected? When? What were the QC results? Do we know the systems were calibrated? And so on. These are very fundamental questions in a technology comparison study, and they should be included in the Data or Materials section. The authors provided flow charts and analysis equations but missed such important information, degrading the quality of their article.

There are various other minor comments and I have marked them on the PDF sheet. Therefore, this article needs to improve significantly to become a scholarly article in the Remote Sensing journal.

 

Minor comments:

Fig. 1: These images are small and hard to read. Axis labels are illegible. Please revise according to journal standards.

Figs. 11 and 12: Comparing single-beam sonar data with SDB using these graphs is not meaningful. Fig. 12 especially is weak and cumbersome to read and interpret. It does not add any value to the article.

Abstract: This section needs to be improved. There is not much information provided. It is too short and too basic. It does not provide any “exciting” scientific outcome to the reader.

Data: I do not see any information about the Lidar data sets. The study compares them to other vector and raster data sets; however, it does not provide any useful and meaningful explanation of the reference data sets. This is a major issue for a quality assessment study. The authors need to provide QC findings for all data sets used in this study.

The same applies to the SB sonar data. Please provide specifics about the data; without them, a reader has no basis to understand and visualize the data sets that are compared to each other. The instrument, vessel, data type, date, environmental conditions, QC findings, etc. are all major details that should be provided.

Discussion: This section is too long and contains information that should be in Conclusion section instead.

Conclusion section is weak and includes bold expectations. Please refrain from such statements.

 

References: There are various inconsistencies with the referred material. Please refer to the journal style sheet.


Comments for author File: Comments.pdf

Author Response

Overall comments:

This is an interesting article and I enjoyed reading it. Such technological comparisons provide good material to the scientific community and I personally find them very useful for future referrals. Bathymetry is an ever-growing and modernizing scientific field, and advancements in the remote sensing community are truly welcome.

However, there are various kinds of comparisons in the literature, and I am not sure this particular study brings excitement to the table. I personally do not understand the need to include single-beam sonar in a direct comparison with SDB or ALB technologies. The fundamental differences in the technology and data characteristics are vast, and the methods are clearly used for different purposes in the survey and hydro-geomatics community. I understand the use of multi-beam and side-scan sonar, but I am not sure why the authors considered a single-beam medium for their research. Comparing SDB to ALB only could provide a better outcome.

Second, I do not see specifics of the ALB and SB data sets. How were they collected? When? What were the QC results? Do we know the systems were calibrated? And so on. These are very fundamental questions in a technology comparison study, and they should be included in the Data or Materials section. The authors provided flow charts and analysis equations but missed such important information, degrading the quality of their article.

There are various other minor comments and I have marked them on the PDF sheet. Therefore, this article needs to improve significantly to become a scholarly article in the Remote Sensing journal.

->Thank you for all your comments. We modified the manuscript as much as possible following your comments.

We used ALB and single-beam sonar data as reference data because these methods are used in hydrographic surveys and meet the IHO S-44 standards and CATZOC A1. We added additional information and references for these data.

 

 

Minor comments:

Fig. 1: These images are small and hard to read. Axis labels are illegible. Please revise according to journal standards.

->We revised the images.

 

Figs. 11 and 12: Comparing single-beam sonar data with SDB using these graphs is not meaningful. Fig. 12 especially is weak and cumbersome to read and interpret. It does not add any value to the article.

->Although the number of data points is small, we consider single-beam sonar data to also be important for building a general model and evaluating accuracy in variable waters. Please also refer to L296.

 

Abstract: This section needs to be improved. There is not much information provided. It is too short and too basic. It does not provide any “exciting” scientific outcome to the reader.

->We revised and added more information.

 

Data: I do not see any information about the Lidar data sets. The study compares them to other vector and raster data sets; however, it does not provide any useful and meaningful explanation of the reference data sets. This is a major issue for a quality assessment study. The authors need to provide QC findings for all data sets used in this study.

->We added information based on the references for these data.

 

The same applies to the SB sonar data. Please provide specifics about the data; without them, a reader has no basis to understand and visualize the data sets that are compared to each other. The instrument, vessel, data type, date, environmental conditions, QC findings, etc. are all major details that should be provided.

->We added information about the instruments, as provided by the data providers.

 

Discussion: This section is too long and contains information that should be in Conclusion section instead.

Conclusion section is weak and includes bold expectations. Please refrain from such statements.

->We modified the manuscript following your comments.

 

References: There are various inconsistencies with the referred material. Please refer to the journal style sheet.

->We referred to the journal style sheet and checked the references again.

 

 

Other comments in the PDF:

->We basically followed your advice and revised our manuscript; our answers to the following comments are below.

 

***Comments, with line numbers referring to the original paper, are in black.

Answers, with line numbers referring to the revised paper, are in red.

 

L25: Not exactly. tan(α) × h = r

->We modified it slightly, but we refer to the statement in Smith 2004 [1]. It is written as 'Direct measurement of ocean floor depth is done by echosounding from a ship. … systems can map a swath of area beneath a ship’s track with a width as much as twice the water depth …'.
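For reference, the geometry behind both statements can be written out explicitly; the notation here is ours, not the manuscript's, assuming α is the beam angle measured from nadir, h the water depth, r the half-swath to one side of the track, and w the full swath width:

    r = h \tan\alpha
    w = 2r = 2h \tan\alpha
    w = 2h \quad \text{when } \alpha = 45^\circ \ (\tan\alpha = 1)

So the reviewer's relation and Smith's "twice the water depth" figure agree for a 45° beam angle.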

 

Keywords: keywords should not coincide with the article title and should not exceed a maximum of five.

->We followed the guideline of the journal (“List three to ten pertinent keywords specific to the article……”)

 

Table 4: You should indicate in this table why Taketomi and Efate have far fewer samples (n) (i.e., single-beam sonar).

->This is Table 5 in the revised paper. We explain this in the main text (L284).

 

Figure 7: This is not a good statistical representation of RMSE and mean error. A histogram or a CDF graph showing both residuals might be a better choice.

->These RMSEs and MEs are values for narrow depth ranges, so this graph is considered appropriate for our objective.
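For illustration, a minimal sketch of the histogram/CDF representation the reviewer suggests, using synthetic residuals; the 1.41 m spread is borrowed from the abstract, and everything else is a stand-in:

    # Residual distribution plots: histogram and empirical CDF of depth residuals.
    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(0)
    residuals = rng.normal(0.0, 1.41, 10000)  # stand-in: SDB minus reference depth (m)

    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
    ax1.hist(residuals, bins=50)
    ax1.set(xlabel="depth residual (m)", ylabel="count")

    x = np.sort(residuals)
    ax2.plot(x, np.arange(1, x.size + 1) / x.size)  # empirical CDF
    ax2.set(xlabel="depth residual (m)", ylabel="empirical CDF")
    fig.tight_layout()
    plt.show()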

 

Abbreviations

->We followed the journal guideline and modified the manuscript accordingly. ("Abbreviations should be defined in parentheses the first time they appear in the abstract, main text, and in figure or table captions and used consistently thereafter.")

 

English:

Our paper has undergone English language checking and editing by the MDPI service, and we made modifications following their comments.


Author Response File: Author Response.docx

This manuscript is a resubmission of an earlier submission. The following is a list of the peer review reports and author responses from that submission.


Round 1

Reviewer 1 Report

In this paper, water depth estimation in shallow water regions of the world using Landsat-8 surface reflectance (SR) data is performed with the RF method of machine learning. From an engineering viewpoint, it is very useful that the three SDB steps make it possible to create simple and accurate water depth maps in large quantities. On the other hand, as is the fate of machine learning, there is little physical basis. To compensate for the physical grounds, acceptance is difficult unless the following points are explained in detail.

 

1. The essence of water depth estimation using the RF method is the process of SDB-1. Despite this process being the most important, the explanation of the method is too brief. In particular, it is necessary to explain the "construction of many decision trees" for water depth estimation in more detail. Since the general RF method is an alternative method, if each water depth stage is estimated, the reader will not know what kind of decision tree determines each water depth.

2. The basic spectral properties, such as those of clouds and waves, in the SDB-2 step must be explained in detail. An essential discussion of how the mean and standard deviation affect this noise removal will be necessary.

3. Although it is concluded in the text that the proposed method is superior to Lyzenga's method, a figure of detailed validation results at a specific location will be necessary.

4. Although the difficulty of application in highly turbid water areas is pointed out, screening and masking of such water bodies seem relatively simple. The difficulty and possibilities should be discussed in more detail.

That’s all


Author Response

Point 1: The essence of water depth estimation using the RF method is the process of SDB-1. Despite this process being the most important, the explanation of the method is too brief. In particular, it is necessary to explain the "construction of many decision trees" for water depth estimation in more detail. Since the general RF method is an alternative method, if each water depth stage is estimated, the reader will not know what kind of decision tree determines each water depth.


 

Response 1: We added a more detailed explanation of random forest and step 1 in the methods. (Lines 162-173) *Line numbers correspond to the revised paper.

 

Point 2: The basic spectral properties, such as those of clouds and waves, in the SDB-2 step must be explained in detail. An essential discussion of how the mean and standard deviation affect this noise removal will be necessary.

 

Response 2: We added a detailed explanation of the masking process and cited references in step 2. (Lines 175-194)

 

Point 3: Although it is concluded in the text that the proposed method is superior to Lyzenga's method, a figure of detailed validation results at a specific location will be necessary.

 

Response 3: We understood your point and the importance of a direct comparison with Lyzenga's methods, because we claimed in the discussion that our method is superior to them. However, a direct comparison with other empirical methods is difficult because our approach is very different: we create one result from multiple satellite images, whereas previous methods use a single satellite image, so the conditions are fundamentally different. Although we could compare RF with Lyzenga's method on a single satellite image, the result might depend on the selected image. In fact, a direct comparison with empirical methods is not our main objective; we would like to introduce our new approach using machine learning and multi-temporal satellite images. We therefore clarified our new points, changed the discussion contents, and evaluated our accuracy from another aspect.

 

Point 4: Although the difficulty of application in highly turbid water areas is pointed out, screening and masking of such water bodies seem relatively simple. The difficulty and possibilities should be discussed in more detail.

 

Response 4: We added more detail and changed the topics in the discussion. (Lines 383-413)

 

English: Our paper has undergone English language checking and editing by the MDPI service.


Reviewer 2 Report

Page 6: “A light refraction correction as described in [10] was implemented to account for the influence of refraction and the air-water interface.” - This is correct.

 

“Light refraction correction was not applied within the manual photogrammetric technique as the process of physically matching pixels between images removes light refraction effects.

Compared to the other approaches, the manual photogrammetry technique eliminates biases from sources of training information and limitations for automatic pair matching. However, this technique is strongly influenced by the individual completing stereo matching, leading to difficulties with result replication.”

- This is not clear: refraction correction is not required for the measurement itself, but for the determination of the water depth. If it has not been accounted for during measurement, an a posteriori correction has to be made, since refraction changes the apparent water depth by approximately 30%. In my own experience, no problems appeared with result replication if it was not influenced by sun glint.


Author Response

The reviewer's comments may be comments on another author's article (not our paper), because we never mentioned 'light refraction correction'.



Reviewer 3 Report

Abstract – Lines 15-16 – Text reads “The final estimated depth in the five test areas was 1.41 m for a depth of 0–20 m”. Should 1.41 m be referring to the final root mean square error for the 0–20 m depth range? Also, did this error apply for each of the test areas, or is it an average of the errors obtained for the different test areas?

 

Introduction – Line 25 – The text “a research vessel can measure the width of twice of depth simultaneously” is awkward. Do you mean that it can measure over a width which is equal to twice the depth?

 

Introduction – Line 29 – “For these reasons, not enough data for coastal areas have been collected”. This is subjective as it depends on the coastal areas you are interested in. Maybe re-phrase to indicate that for areas where sonar and ALB measurements cannot be easily obtained, not enough data has been collected.

 

Introduction – Line 40 – Note that not all of the references cited in this line use physics-based SDB models. Specifically, Stumpf’s approach (reference 8) is an empirical method.

 

Introduction – Line 40 – “However, multi-spectral sensors based SDB have not been widely used”. I’m not sure I agree with this as several of the references indicated just before this sentence used multispectral data for SDB (i.e. references 6, 7, 8, 9 and 13). Maybe clarify to better state what you mean by “widely used”.

 

Introduction – Lines 44-49 – The comparison of random forest’s advantages over physics-based approaches is good, but what about other empirical approaches? How do you think the random forest approach will out-perform techniques such as Stumpf’s and Lyzenga’s (i.e. 10.1080/01431168508948428)?

 

Data – Lines 66-67 – Consider providing a reference for the LaSRC process.

 

Data – Lines 87-88 – Text reads that data were randomly extracted again, but earlier text does not discuss a first random extraction. Clarify.

 

Data – Line 88 – Why the limitation to 20,000 points?

 

Data – Table 1 – Consider listing CATZOC classifications for each survey in the table.

 

Data – Table 2 – Is the time period over which Landsat-8 images analyzed identical for each study area? If not, consider listing the time interval for each area in this table.

 

Methods – Step 2 – Why is the masking process completed after the random forest classification is applied? Would this not make it possible for training points over deep water, land or waves to be included in the training set and impact the SDB model?

 

Methods –Step 3 – Line 140 – Why was 5 selected for the Tstd threshold?

 

Methods –Step 3 – Line 142 – Why the limitation to 10,000 accuracy assessment points for each area?

  

Results – Lines 151-153 – Text here describing the depth ranges which were used for training and accuracy assessment is confusing, specifically the text “the training data used a depth zone of 0 m to 20 m for accuracy assessment, to be compared with the accuracy of the evaluation data”. What is the difference between this training data and the “original training data”?

 

Results – Lines 156-163 – It may be interesting to consider a different sampling strategy for the training data to attempt to limit the overfitting effect. The random selection of training points used here may have resulted in more points selected for certain depths, which could lead to the observed overfitting. Maybe a stratified approach where training points are more evenly selected from different depth ranges would help. I also wonder if the inclusion of a wider range of depths in the training dataset (-5 to 25 m) is also having an impact, as the outlying depths may be impacting the model’s ability to properly represent depths from 0-20 m.

 

Results – Table 3 – Comparisons between the SDB-1 Evaluation results and the SDB-2 results I think highlight the impact of applying masking after the random forest classification is applied. Note how the number of points drops significantly between the SDB-1 Evaluation and the SDB-2 results. The points which were removed are either in areas which appear to contain deep water (based on how deep water was defined in this study), or are over land or waves. Similar points are also likely present in the SDB-1 training dataset, impacting the SDB model and the subsequent results.

 

Results – Tables 3-4 – Because the SDB-1 and SDB-2 processes were applied to each image individually, do the statistics for these results in these tables represent averages for the individual image results?

 

Results – Tables 3-4 – Consider reporting accuracy assessment results for narrower depth ranges (e.g. 0-2 m, 2-4 m, etc.) as well as overall. It is helpful to understand what depths the model is performing best and worst for.

 

Results – Tables 3-4 – Why does the number of points increase from SDB-2 to SDB-3? If data is being aggregated from SDB-2 to SDB-3, the number of points should stay the same or decrease.

 

Results – Figures 4-8 – Consider displaying one of the Landsat-8 images in this figure to provide better context for the SDB results. Displaying the LiDAR depths in their native resolution may also be better as it would allow for analysis of the ability of the approach for representing spatial bathymetry patterns.

 

Discussion – Lines 224-234 – Direct comparisons with Lyzenga’s approach can’t be made here as Lyzenga’s work was applied to different study sites using different satellite imagery. If the authors want to make a comparison, why not apply Lyzenga’s approach using the study sites and images described in this paper? Another option would be to apply an empirical technique (e.g. either 10.1080/01431168508948428 or 10.4319/lo.2003.48.1_part_2.0547) to compare against.

 

Discussion – Lines 241-243 – Remarkably high relative to what? These results are not significantly different from what has been presented for other techniques and approaches.

 

Discussion – Lines 262-263 – It is certainly possible that using a multi-temporal approach could lead to reduced data loss from clouds, ships, etc. Was any analysis completed to determine how the approach presented in this paper achieved this?

 

Overall Comment: The concept presented in the paper is interesting and certainly has practical applications for SDB internationally. If the authors can address the clarifications I suggest, I believe this would improve the paper. I would also suggest that the authors consider applying existing empirical techniques to their study sites to better highlight potential advantages of their multi-temporal random forest approach. I am looking forward to reviewing the revised version!

Author Response

Abstract – Lines 15-16 – Text reads “The final estimated depth in the five test areas was 1.41 m for a depth of 0–20 m”. Should 1.41 m be referring to the final root mean square error for the 0–20 m depth range? Also, did this error apply for each of the test areas, or is it an average of the errors obtained for the different test areas?

->The RMSE for all areas is calculated directly from the dataset of all areas. To make this clear, we added a more detailed explanation of the accuracy evaluation in the data and method sections.


Introduction – Line 25 – The text “a research vessel can measure the width of twice of depth simultaneously” is awkward. Do you mean that it can measure over a width which is equal to twice the depth?

->We changed the explanation. (Lines 24-26) *Line numbers correspond to the revised paper.


Introduction – Line 29 – “For these reasons, not enough data for coastal areas have been collected”. This is subjective as it depends on the coastal areas you are interested in. Maybe re-phrase to indicate that for areas where sonar and ALB measurements cannot be easily obtained, not enough data has been collected.

->We changed the text following your comment. (Lines 29-30)


Introduction – Line 40 – Note that not all of the references cited in this line use physics-based SDB models. Specifically, Stumpf’s approach (reference 8) is an empirical method.

->We changed the explanation of the previous references. (Lines 40-42)


Introduction – Line 40 – “However, multi-spectral sensors based SDB have not been widely used”. I’m not sure I agree with this as several of the references indicated just before this sentence used multispectral data for SDB (i.e. references 6, 7, 8, 9 and 13). Maybe clarify to better state what you mean by “widely used”.

->We changed the explanation. (Lines 44-45)


Introduction – Lines 44-49 – The comparison of random forest’s advantages over physics-based approaches is good, but what about other empirical approaches? How do you think the random forest approach will out-perform techniques such as Stumpf’s and Lyzenga’s (i.e. 10.1080/01431168508948428)?

->In our paper, we did not directly compare with previous methods, so we changed the explanation of previous methods and the advantages of random forest. (Lines 46-57)


Data – Lines 66-67 – Consider providing a reference for the LaSRC process.

->Yes, we added one. (Line 81)


Data – Lines 87-88 – Text reads that data were randomly extracted again, but earlier text does not discuss a first random extraction. Clarify.

->Yes, we clarified how the data sets are made. (Lines 106-129, 206-244)


Data – Line 88 – Why the limitation to 20,000 points?

->We added an explanation. (Lines 210-211)


Data – Table 1 – Consider listing CATZOC classifications for each survey in the table.

->Yes, we added them. (Lines 96-97)


Data – Table 2 – Is the time period over which Landsat-8 images analyzed identical for each study area? If not, consider listing the time interval for each area in this table.

->We added the time interval, but it is the same for all areas. (Table 2)


Methods – Step 2 – Why is the masking process completed after the random forest classification is applied? Would this not make it possible for training points over deep water, land or waves to be included in the training set and impact the SDB model?

->We added an explanation in the methods. (Lines 170-172, 216-220)


Methods –Step 3 – Line 140 – Why was 5 selected for the Tstd threshold?

->We added the reason. (Lines 204, 392-396)
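For illustration, a minimal sketch of this kind of multi-temporal aggregation with a standard-deviation threshold; the arrays and variable names are synthetic stand-ins, the Tstd value of 5 follows the exchange above, and the paper's exact aggregation may differ:

    # Aggregate per-image depth estimates: per-pixel mean depth, masked where the
    # per-pixel standard deviation across images exceeds Tstd.
    import numpy as np

    rng = np.random.default_rng(0)
    # stack: stand-in depth estimates from 12 images over a 50 x 50 pixel area
    stack = rng.uniform(0.0, 20.0, (1, 50, 50)) + rng.normal(0.0, 1.0, (12, 50, 50))

    T_std = 5.0
    mean_depth = stack.mean(axis=0)
    std_depth = stack.std(axis=0)
    final = np.where(std_depth <= T_std, mean_depth, np.nan)  # mask unstable pixels
    print(f"{int(np.isnan(final).sum())} of {final.size} pixels masked")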


Methods –Step 3 – Line 142 – Why the limitation to 10,000 accuracy assessment points for each area?

->We added the reason. (Lines 241-242)


Results – Lines 151-153 – Text here describing the depth ranges which were used for training and accuracy assessment is confusing, specifically the text “the training data used a depth zone of 0 m to 20 m for accuracy assessment, to be compared with the accuracy of the evaluation data”. What is the difference between this training data and the “original training data”?

->To make this clear, we added a more detailed explanation with equations and flow charts. (Lines 206-229, Figure 4)


Results – Lines 156-163 – It may be interesting to consider a different sampling strategy for the training data to attempt to limit the overfitting effect. The random selection of training points used here may have resulted in more points selected for certain depths, which could lead to the observed overfitting. Maybe a stratified approach where training points are more evenly selected from different depth ranges would help. I also wonder if the inclusion of a wider range of depths in the training dataset (-5 to 25 m) is also having an impact, as the outlying depths may be impacting the model’s ability to properly represent depths from 0-20 m.

->There are many ways of sampling, but we fixed the method here due to computing limitations. We would like to focus on sampling impacts in our future study.
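For illustration, a minimal sketch of the depth-stratified sampling the reviewer suggests, drawing an equal number of points per depth bin; the arrays, bin width, and per-bin count here are hypothetical:

    # Stratified sampling: draw up to per_bin training points from each 2 m depth
    # bin so that no depth range dominates the training set.
    import numpy as np

    rng = np.random.default_rng(0)
    depths = rng.uniform(-5.0, 25.0, 100000)   # stand-in reference depths (m)

    edges = np.arange(0.0, 22.0, 2.0)          # bins 0-2 m, 2-4 m, ..., 18-20 m
    per_bin = 1000
    picks = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        idx = np.flatnonzero((depths >= lo) & (depths < hi))
        picks.append(rng.choice(idx, size=min(per_bin, idx.size), replace=False))
    selected = np.concatenate(picks)
    print(f"{selected.size} points, balanced across {len(edges) - 1} depth bins")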


Results – Table 3 – Comparisons between the SDB-1 Evaluation results and the SDB-2 results I think highlight the impact of applying masking after the random forest classification is applied. Note how the number of points drops significantly between the SDB-1 Evaluation and the SDB-2 results. The points which were removed are either in areas which appear to contain deep water (based on how deep water was defined in this study), or are over land or waves. Similar points are also likely present in the SDB-1 training dataset, impacting the SDB model and the subsequent results.

->We added a sentence about the impact of the reduction of data. (Lines 256-257)


Results – Tables 3-4 – Because the SDB-1 and SDB-2 processes were applied to each image individually, do the statistics for these results in these tables represent averages for the individual image results?

->The statistics in the tables are not averages but are calculated directly from the datasets of each study area or of all areas. To make this clear, we added a more detailed explanation with equations and flow charts. (Section 3.4, Accuracy Assessment; Figure 4)
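For illustration, a minimal sketch of the distinction, with synthetic residuals for two hypothetical areas of different size and error level: the pooled RMSE (what the tables report, per this response) generally differs from the mean of per-area RMSEs:

    # Pooled RMSE over all areas vs. the average of per-area RMSEs.
    import numpy as np

    rng = np.random.default_rng(0)
    areas = {"A": rng.normal(0.0, 1.0, 5000),   # many points, low error
             "B": rng.normal(0.0, 2.0, 500)}    # few points, high error

    def rmse(res):
        return float(np.sqrt(np.mean(res ** 2)))

    pooled = rmse(np.concatenate(list(areas.values())))
    averaged = float(np.mean([rmse(r) for r in areas.values()]))
    print(f"pooled: {pooled:.2f} m, mean of per-area RMSEs: {averaged:.2f} m")
    # Pooling weights each point equally, so larger areas dominate the statistic.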


Results – Tables 3-4 – Consider reporting accuracy assessment results for narrower depth ranges (e.g. 0-2 m, 2-4 m, etc.) as well as overall. It is helpful to understand what depths the model is performing best and worst for.

->We added assessment results for narrower depth ranges for SDB-3 in Table 3. (Figure 7)
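For illustration, a minimal sketch of the per-depth-bin assessment the reviewer asks for; the depths and estimates are synthetic, and the 2 m bin width follows the reviewer's example:

    # RMSE and mean error (ME) per 2 m depth bin.
    import numpy as np

    rng = np.random.default_rng(0)
    true_depth = rng.uniform(0.0, 20.0, 10000)             # stand-in reference depths
    est_depth = true_depth + rng.normal(0.0, 1.4, 10000)   # stand-in SDB estimates

    edges = np.arange(0.0, 22.0, 2.0)
    for lo, hi in zip(edges[:-1], edges[1:]):
        m = (true_depth >= lo) & (true_depth < hi)
        res = est_depth[m] - true_depth[m]
        print(f"{lo:4.0f}-{hi:2.0f} m: RMSE {np.sqrt(np.mean(res ** 2)):.2f} m, "
              f"ME {np.mean(res):+.2f} m (n={m.sum()})")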


Results – Tables 3-4 – Why does the number of points increase from SDB-2 to SDB-3? If data is being aggregated from SDB-2 to SDB-3, the number of points should stay the same or decrease.

->To make this clear, we added a more detailed explanation. (Lines 238-242)


Results – Figures 4-8 – Consider displaying one of the Landsat-8 images in this figure to provide better context for the SDB results. Displaying the LiDAR depths in their native resolution may also be better as it would allow for analysis of the ability of the approach for representing spatial bathymetry patterns.

->We added sample Landsat images (Figure 1). LiDAR data are point data, so we need to resample them and decide on a resolution to view them as an image. We resampled to 30 m as we focused on Landsat analysis.


Discussion – Lines 224-234 – Direct comparisons with Lyzenga’s approach can’t be made here as Lyzenga’s work was applied to different study sites using different satellite imagery. If the authors want to make a comparison, why not apply Lyzenga’s approach using the study sites and images described in this paper? Another option would be to apply an empirical technique (e.g. either 10.1080/01431168508948428 or 10.4319/lo.2003.48.1_part_2.0547) to compare against.

->We understood your point and the necessity of a direct comparison with Lyzenga's methods, because we claimed in the discussion that our method is superior to them. However, a direct comparison with other empirical methods is difficult because our approach is very different: we create one result from multiple satellite images, whereas previous methods use a single satellite image, so the conditions are fundamentally different. Although we could compare RF with Lyzenga's method on a single satellite image, the result might depend on the selected image. In fact, a comparison with empirical methods is not our main objective; we would like to introduce our new approach using machine learning and multi-temporal satellite images. We therefore clarified our new points, changed the discussion contents, and evaluated our accuracy from another aspect.


Discussion – Lines 241-243 – Remarkably high relative to what? These results are not significantly different from what has been presented for other techniques and approaches.

->We changed the discussion contents. We compared accuracies within our results against the IHO standards, and we only refer to previous study results.


Discussion – Lines 262-263 – It is certainly possible that using a multi-temporal approach could lead to reduced data loss from clouds, ships, etc. Was any analysis completed to determine how the approach presented in this paper achieved this?

->We changed the contents and discussed them based on the results. (Lines 383-396)

 

English:

Our paper has undergone English language checking and editing by the MDPI service.


