Article
Peer-Review Record

Improving Satellite-Derived Bathymetry in Complex Coastal Environments: A Generalised Linear Model and Multi-Temporal Sentinel-2 Approach

Remote Sens. 2025, 17(23), 3834; https://doi.org/10.3390/rs17233834
by Xavier Monteys, Tea Isler, Gema Casal and Colman Gallagher
Reviewer 1: Anonymous
Submission received: 15 July 2025 / Revised: 31 October 2025 / Accepted: 10 November 2025 / Published: 27 November 2025

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

This study evaluates the performance of the Generalized Linear Model (GLM) and multi-image techniques in satellite-derived bathymetry (SDB), demonstrating improved accuracy in coastal environments through comprehensive model comparisons and subset analyses. The work provides valuable insights into optimizing SDB models using optical satellite data, particularly highlighting the advantages of GLM and multi-image combinations. However, several aspects require clarification and enhancement to strengthen the rigor and interpretability of the findings, as follows:  

  1. The multi-image analysis identifies optimal subsets (C1 to C10), with C7 (7 images) achieving the best performance. However, the criteria for selecting these subsets remain unclear. For instance, the paper does not explain how the 7 images in C7 were chosen from the 10 available images—whether based on temporal consistency (e.g., similar acquisition dates), radiometric quality, or preliminary error metrics. Furthermore, the analysis notes that subsets with 5–9 images perform best but lacks a quantitative explanation for why larger subsets (e.g., C10 with 10 images) do not yield better results. Please clarify the subset selection criteria and provide a comparative analysis of performance drivers across different subset sizes.  
  2. While the study compares GLM with the Lyzenga and Stumpf models, it lacks critical comparisons with other state-of-the-art SDB methods. The paper emphasizes GLM’s superiority over these two traditional models but omits error metrics (e.g., RMSE, R²) from recent studies using alternative approaches (e.g., machine learning models like Random Forest or physics-based SDB models) in the same or similar coastal environments. This omission limits the ability to contextualize GLM’s performance within the broader SDB literature. It is recommended to either explain the exclusion of these comparisons or supplement with comparative data from relevant contemporary studies to highlight the novelty of the proposed approach.  
  3. The paper does not provide visualizations of inverted bathymetry (e.g., spatial distribution maps of water depth in the study area), relying solely on numerical metrics (e.g., R², RMSE) to describe performance differences between models. As a core outcome of SDB research, the absence of bathymetric images hinders the ability to visually assess spatial heterogeneities in accuracy between different models (e.g., GLM vs. Lyzenga) and image combinations (e.g., C7 vs. the best single image ID 6). For example, it is impossible to visualize error distributions in nearshore vs. offshore areas or model performance in topographically complex regions, making it difficult for readers to interpret the practical significance of the numerical metrics. It is recommended to supplement bathymetric visualizations for key models and image combinations, optionally overlaying error hotspots, to enhance spatial interpretability of the results.  
  4. Several figures require improvements in clarity, aesthetics, and stylistic consistency. For instance, the black data points in Figure 2 are excessively large and densely packed, leading to severe overlap that obscures the distribution characteristics of individual data points. As a key figure illustrating performance differences across subset sizes, Figure 5 is overly simplistic and lacks essential visual hierarchy. Moreover, inconsistencies in style—such as font sizes and legend formats—are evident, which undermines the overall professionalism of the manuscript. It is recommended to standardize the design style across all figures: adjust the size and transparency of data points in Figure 2 to reduce overlap, and enhance the aesthetics of Figure 5 (e.g., by standardizing axis label formats) to ensure all figures are clear, readable, and stylistically consistent.
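The quantitative comparison of performance drivers across subset sizes requested in point 1 could be made explicit with an exhaustive search over image combinations, each scored on held-out data. The sketch below is a minimal illustration with synthetic data throughout: the plain least-squares fit stands in for the paper's GLM, and the image count (10), pixel count, and band count are assumptions, not values from the manuscript.

```python
from itertools import combinations
import numpy as np

def rmse(pred, obs):
    """Root-mean-square error between predictions and observations."""
    return float(np.sqrt(np.mean((pred - obs) ** 2)))

def score_subset(images, depths, subset):
    """Fit a least-squares model on the mean reflectance of the chosen
    images and return RMSE on a held-out half (toy stand-in for the GLM)."""
    X = np.mean(images[list(subset)], axis=0)      # (n_pixels, n_bands)
    X = np.column_stack([np.ones(len(X)), X])      # add an intercept column
    half = len(depths) // 2
    coef, *_ = np.linalg.lstsq(X[:half], depths[:half], rcond=None)
    return rmse(X[half:] @ coef, depths[half:])

# Synthetic stand-in data: 10 images, 400 pixels, 4 bands
rng = np.random.default_rng(0)
images = rng.normal(size=(10, 400, 4))
depths = images.mean(axis=(0, 2)) * 5 + rng.normal(scale=0.5, size=400)

# Score every subset of every size; keep the best RMSE per subset size,
# which makes "why does C7 beat C10?" a directly answerable question
best = {}
for k in range(1, 11):
    best[k] = min(score_subset(images, depths, s)
                  for s in combinations(range(10), k))
```

Reporting the resulting per-size best scores (and, ideally, their spread) would document both how the winning subset was identified and why adding more images past a certain point stops helping.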

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 2 Report

Comments and Suggestions for Authors

Dear authors, see the detailed comments in the pdf file. In general, I think your work is interesting and could be publishable. The multi-image approach is sensible, as is the use of a GLM. So I have no fundamental problems with your work, though I also think it is only marginally innovative. A few specific things to mention:

  • Your random split of the training/validation data is problematic because it results in substantial spatial autocorrelation between training and validation data. In other words, your error metrics (R², RMSE, MAE) end up providing us with an idea of how good the model is at making predictions for pixels that are right next to training data - which is not the most interesting question. Consider blocked splits instead, as used in one of the references I pointed you to in the pdf file.
  • You use all features, some of which are very collinear, because you state that this is acceptable for prediction and that you are not interested in interpretation (of how the model works). That's fine. But then you later say that GLM is better than ML methods because it is more interpretable... Which is it? If you are truly only interested in prediction, which is entirely acceptable, then ML methods might well be better. If you want interpretation, please explain how to interpret the interaction terms.
  • Given the substantial number of other articles that have already done similar work, I think you need to better explain the novelty of your work. You demonstrate that a multi-image approach is better than a single-image approach. I agree - but you have referenced several other papers that already showed this, and I mentioned in the pdf another paper showing the same thing. So this is not new. You have used a GLM and shown that it performs better than Lyzenga and Stumpf. But those are not really the state-of-the-art methods that are relevant to compare to. Right? So I think you need to better explain what the real contribution of your work is. This is the most important point you need to work on.
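The blocked split suggested in the first point can be sketched briefly: samples are grouped into square spatial blocks, and whole blocks are assigned to either the training or the validation set, so validation pixels are never immediately adjacent to training pixels. This is a minimal illustration, not the authors' pipeline; the coordinates, block size, and test fraction are all assumptions.

```python
import numpy as np

def blocked_split(x, y, block_size, test_fraction=0.3, seed=0):
    """Split samples into train/test by spatial block rather than at random.

    x, y        : arrays of sample coordinates (e.g. metres)
    block_size  : edge length of a square block, in the same units as x, y
    Returns boolean masks (train, test) over the samples.
    """
    # Label each sample with the ID of the square block it falls in
    bx = np.floor(x / block_size).astype(int)
    by = np.floor(y / block_size).astype(int)
    block_id = bx * 100000 + by  # assumes fewer than 100000 blocks per row

    # Shuffle the block IDs and reserve a fraction of blocks for testing
    rng = np.random.default_rng(seed)
    blocks = np.unique(block_id)
    rng.shuffle(blocks)
    test_blocks = set(blocks[: int(len(blocks) * test_fraction)].tolist())

    is_test = np.array([b in test_blocks for b in block_id])
    return ~is_test, is_test

# Example: 1000 samples scattered over a 1 km x 1 km area, 200 m blocks
rng = np.random.default_rng(1)
x = rng.uniform(0, 1000, 1000)
y = rng.uniform(0, 1000, 1000)
train, test = blocked_split(x, y, block_size=200)
```

Error metrics computed on the `test` mask then reflect predictive skill at locations spatially separated from the training data, which is the more informative question for SDB transferability.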

Comments for author File: Comments.pdf

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Round 2

Reviewer 2 Report

Comments and Suggestions for Authors

Dear authors, see my comments in the attached pdf. In general I am impressed with your work and look forward to seeing it published. Please consider most of my comments optional as you decide how to improve the manuscript - there is no doubt in my mind that your work is solid; I am simply looking for a little extra information and insight to be gained from it. Note that some of the comments are linked to highlighted sections of the text, and appear as little text boxes near the line numbering.

Comments for author File: Comments.pdf

Author Response

Please see the attachment.

Author Response File: Author Response.pdf
