Review Reports
- Jorcelino Rinalde de Paulo* and
- Thauan Santos
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Reviewer 4: Grigorios Kyriakopoulos
Round 1
Reviewer 1 Report
Comments and Suggestions for Authors
This manuscript proposes the FARO framework, which aims to integrate climate projections with operational risk quantification in Marine Spatial Planning (MSP) to address the challenge of failed stationarity assumptions, holding significant practical implications. Upon review, I have the following questions or suggestions.
1. I believe the introduction should also discuss what existing research has done regarding the key factor of climate uncertainty. Simultaneously, it is necessary to clarify how this study differs from prior work and where its advantages lie. In fact, Section 2 of the manuscript addresses this suggestion. However, it is inappropriate to title this section “Theoretical Framework,” as this misleads readers into believing it provides the theoretical foundation for developing FARO.
2. CMIP6 has been available for years, yet the study still relies on CMIP5. Using outdated data may diminish the timeliness and policy relevance of the findings.
3. Although downtime is quantified, no direct conversion model linking it to economic indicators is established.
4. The language is overly absolute. Phrases like “prove” (e.g., “FARO proves to be robust” in the Abstract) should be avoided in scientific papers. Replace them with cautious terms such as “indicate” or “may.”
5. Authors should incorporate a framework flowchart in the methodology section to clearly illustrate input-output relationships between modules, coupling pathways between climate data streams and risk indicator generation, and other essential information. This would significantly enhance the framework's reproducibility and comprehension efficiency for interdisciplinary readers, particularly benefiting marine planning decision-makers without programming backgrounds.
Author Response
General Response: On behalf of the authors, we wish to express our sincere gratitude for the attentive and rigorous review of this manuscript. The suggestions provided were fundamental in elevating the structural clarity and technical precision of our research. The corresponding changes have been integrated into the revised manuscript.
Comment 1: I believe the introduction should also discuss what existing research has done regarding the key factor of climate uncertainty. Simultaneously, it is necessary to clarify how this study differs from previous works and where its advantages lie. In fact, Section 2 of the manuscript addresses this suggestion. However, it is not appropriate to title this section "Theoretical Framework," as this induces readers to believe it provides the theoretical basis for the development of FARO.
Response 1: We fully accept these constructive suggestions. Section 2 Title: We agree that "Theoretical Framework" was not the most accurate descriptor. We have renamed Section 2 to "Literature Review and Methodological Gap" to precisely reflect its content. Introduction Refinement: We have added a specific paragraph in the Introduction (Section 1) to explicitly distinguish our contribution. We clarified that while previous studies often address climate uncertainty through qualitative vulnerability indices, FARO differs by quantifying the operational impact of high-frequency variability (waves/wind). The stated advantage is the conversion of abstract uncertainty into a tangible financial metric (downtime).
Comment 2: CMIP6 has been available for years, but the study still relies on CMIP5. The use of outdated data may diminish the timeliness and policy relevance of the findings.
Response 2: We appreciate this comment regarding the currency of the data. We have addressed this by significantly expanding Section 3.1.3 (now 3.2.3 in the revised structure). While we acknowledge CMIP6 availability, we argue that for operational risk analysis, spatial resolution is a more critical constraint than the model generation's vintage. Global CMIP6 models typically operate at coarse resolutions (>100 km), insufficient for mesoscale dynamics. The selected INPE-Eta dataset (CMIP5) provides validated 20 km dynamic downscaling, offering superior local accuracy. Furthermore, this dataset remains the official benchmark used by Brazilian regulatory agencies (e.g., EPE), ensuring high policy relevance.
Comment 3: Although downtime is quantified, no direct conversion model is established to link it to economic indicators.
Response 3: We acknowledge this observation. However, we deliberately abstained from establishing a fixed "monetary conversion model" (e.g., LCOE) because key economic variables (vessel day-rates, tariffs) are highly volatile and proprietary. Including a static model would render the findings quickly obsolete. Instead, we argue in Section 5.1 that "Downtime" serves as the fundamental agnostic input, allowing stakeholders to apply their specific cost functions to our robust physical data.
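For illustration only, the Python sketch below shows how a stakeholder might apply a proprietary cost function to the downtime output; the campaign length and vessel day-rate are hypothetical placeholders and are not values from the manuscript.

```python
# Illustrative only: coupling a user-supplied cost function to the downtime
# output. Campaign length and day-rate are hypothetical placeholders.

def downtime_cost(downtime_fraction: float,
                  campaign_days: int = 180,
                  vessel_day_rate_usd: float = 150_000.0) -> float:
    """Expected weather-related standby cost for one installation campaign."""
    expected_idle_days = downtime_fraction * campaign_days
    return expected_idle_days * vessel_day_rate_usd

# Example: a projected 35% downtime over a hypothetical 180-day campaign
print(f"Expected standby cost: US$ {downtime_cost(0.35):,.0f}")
```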
Comment 4: The language is excessively absolute. Phrases such as "prove" (e.g., "FARO proves to be robust" in the Abstract) should be avoided in scientific articles. Replace them with cautious terms such as 'indicate' or 'may.'
Response 4: We agree. We have carefully reviewed the manuscript to align with scientific modesty. Specifically, in the Abstract and Conclusion, we replaced absolute assertions with cautious phrasing (changing "demonstrates/proves" to "indicates" or "suggests").
Comment 5: The authors should incorporate a framework flowchart in the methodology section to clearly illustrate the input-output relationships between modules... This would significantly increase the reproducibility and comprehension efficiency.
Response 5: We fully agree that visual representation is crucial. We have ensured that Figure 2 (FARO Conceptual Framework Flowchart) in Section 3 provides a detailed schematic of the pipeline. This diagram explicitly maps the input-output relationships, linking raw climate data to the risk processing engine and final decision-support indicators.
Author Response File:
Author Response.pdf
Reviewer 2 Report
Comments and Suggestions for Authors
The paper "Beyond Stationarity: The FARO Framework for Quantifying Adaptive Operational Risk in Marine Spatial Planning" addresses the problem of accounting for climate non-stationarity in marine spatial planning (MSP)—one of the central challenges of the modern "blue economy."
The paper's relevance stems from the transition from static risk assessment models to dynamic ones that take climate change and extreme weather events into account. In the context of accelerating climate change, especially for countries in the Global South such as Brazil, the work is of undeniable practical and theoretical value.
The paper's main scientific contribution is the development and validation of the FARO (Framework for Adaptive Operational Risk Analysis) methodological tool, which combines statistical downscaling of climate models with machine learning methods (XGBoost) to generate localized wave and wind forecasts, which are then integrated into an operational risk assessment model for long-term planning.
A key innovation is the quantitative link between climate uncertainty and specific financial and operational risk metrics (e.g., downtime), enabling climate predictions to be translated into actionable insights for investment planning and management. This approach, unlike traditional stationarity-based approaches, paves the way for adaptive MSP management and sustainable investment planning.
The practical significance of this paper lies in the fact that the FARO framework, already validated in the Brazilian offshore wind industry, demonstrates how technology change and increased resilience can significantly reduce operational risks (downtime from 60% to 10%). This tool can influence investment allocation and prioritize development zones, providing quantitative data for stakeholder dialogue. The publication of the source code and the use of open data enhance the reproducibility and adaptability of the method to other countries and marine-specific industries, indicating high potential for scaling and practical application.
Unlike previous studies, which often relied on the assumption of climate stationarity or were limited to qualitative assessments, this paper offers a comprehensive quantitative tool integrating high-resolution climate models, machine learning, and operational analysis, directly linked to financial metrics. This interdisciplinary and practice-oriented approach is underrepresented in the literature, making it a significant contribution to the development of managerial and technical methods in MSP.
The paper's presentation is rigorous and logical: the theoretical framework, methods, results, discussion, and conclusion are organically linked and consistently reveal the problem under study and the proposed solution. The purpose of the formulas and algorithms is clearly explained, and modern methods and tools are used. At the same time, the technical details are adapted for understanding by specialists in related fields, without being excessively complex, demonstrating a high level of scientific communication.
The paper's language is academically sound, and the style is rigorous and consistent. The introduction clearly defines the problem (stationarity and its limitations), the goals, and the objectives of the study, which are consistently developed in the main body. The conclusion logically summarizes the results and reflects the key findings—hypothesis testing, model validation, and directions for future research. The consistency between the goals and conclusions is fully maintained.
The figures clearly illustrate the key results: the spatial distribution of risks, temporal dynamics, and technological sensitivity. They are presented in a neat style with sufficient detail for easy understanding. The figures are well-related to the text and facilitate understanding of complex relationships.
Quantitative data are adequately presented in the text.
Comments.
- In my opinion, the paper insufficiently discusses the limitations associated with the choice of climate models (e.g., the difference between CMIP5 and the newer CMIP6) and the impact of possible systematic errors on the reliability of the forecast.
- Although the framework is listed as source-agnostic, a more detailed uncertainty analysis would be useful.
- Application to other economic sectors is indicated as possible, but is not supported by examples or modeling, which could enhance interdisciplinary interest.
- It would be useful to expand the description of the FARO interface and provide examples of its implementation in real-world management processes.
- Focusing on only one operational risk indicator (downtime) simplifies the real complexity of risk processes.
- Expanding the description of the FARO interface and examples of its implementation in real-world management processes would be useful.
- The paper does not fully disclose some technical details of data processing and model parameters, which hinders reproducibility without access to the code and powerful computing infrastructure.
- A broader discussion of limitations is recommended, including potential uncertainties in modeling climate scenarios and risk components not captured by the current approach.
- More specific details about the XGBoost tuning parameters and model validation methods would be beneficial to improve the transparency of the methodology.
- In the results section, it would be desirable to include a sensitivity analysis to other operational parameters, such as the duration of the safe operating window.
- Further recommendations for MSP practitioners on integrating FARO results into existing planning and decision-making processes would be useful.
This paper represents high-quality research with a strong methodological foundation, a relevant topic, and a significant contribution to the field of climate-responsive marine spatial planning. I recommend it for publication with minor revisions.
Author Response
General Response: On behalf of the authors, we wish to express our sincere gratitude for the attentive and rigorous review of this manuscript. The suggestions provided were fundamental in elevating the structural clarity and technical precision of our research. The corresponding changes have been integrated into the revised manuscript.
Comment 1: In my opinion, the article insufficiently discusses the limitations associated with the choice of climate models (e.g., the difference between CMIP5 and the newer CMIP6) and the impact of possible systematic errors on forecast reliability.
Response 1: We agree that this justification required greater scientific depth. We have expanded Section 3.2.3 to explicitly address this choice based on resolution and validation. While CMIP6 offers updated physics, global models typically have coarse resolutions (>100 km). For wave modeling, mesoscale resolution is paramount. The INPE-Eta dataset provides validated 20 km dynamic downscaling, offering superior capture of local extremes compared to raw CMIP6 data. We also addressed systematic errors by detailing the Quantile Delta Mapping (QDM) protocol applied for bias correction.
Comment 2: Although the framework is described as source-agnostic, a more detailed uncertainty analysis would be useful.
Response 2: We agree that a rigorous discussion on uncertainty is fundamental. We have significantly expanded Section 6.1 (Methodological Limitations) to provide a structured "Uncertainty Framework," explicitly distinguishing between three layers: Scenario Uncertainty (RCP 4.5 vs 8.5), Technological Uncertainty (Sensitivity Analysis), and Model Epistemic Uncertainty (reliance on a single Regional Climate Model).
Comment 3: The application to other economic sectors is indicated as possible, but is not supported by examples or modeling, which could increase interdisciplinary interest.
Response 3: We agree. We have expanded Section 5.3 to provide concrete conceptual examples of how the FARO mathematical structure translates to other sectors. We explicitly detailed the parameter mapping for Port Logistics (Pilotage suspension thresholds) and Offshore Aquaculture (Feed barge accessibility), demonstrating the framework's versatility without requiring new dataset processing.
Comment 4: It would be useful to expand the description of the FARO interface and provide examples of its implementation in real-world management processes.
Response 4: We appreciate this suggestion. We clarified that FARO functions as a computational framework, but its "interface" for decision-making lies in its visual outputs. We expanded Section 5.3 to describe how the Spatial Risk Map acts as a zoning interface for public auction blocks, and how the Sensitivity Curves act as a procurement interface for private vessel specifications.
Comment 5: Focusing on only one operational risk indicator (downtime) simplifies the real complexity of risk processes.
Response 5: We acknowledge the complexity of operational risk. However, we justified the focus on "Downtime" because it acts as the industry-standard proxy for "Waiting on Weather" (WoW) calculations in strategic planning. We added a specific paragraph in Section 3.3.2 referencing standards like DNV-ST-N001, demonstrating that relying on these probabilistic exceedance thresholds is the accepted methodology for estimating fleet availability and insurance budgets.
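For illustration, a minimal Python sketch of this daily binary operability logic is given below; the wave/wind thresholds, column names, and synthetic data are illustrative assumptions rather than the operational limits used in the manuscript.

```python
import numpy as np
import pandas as pd

def downtime_fraction(daily: pd.DataFrame,
                      hs_limit: float = 1.5,      # illustrative SWH limit [m]
                      wind_limit: float = 12.0    # illustrative wind limit [m/s]
                      ) -> float:
    """Share of days classified as non-operable ("Waiting on Weather")."""
    operable = (daily["swh_m"] <= hs_limit) & (daily["wind_ms"] <= wind_limit)
    return 1.0 - operable.mean()

# Example with synthetic daily conditions
rng = np.random.default_rng(0)
days = pd.date_range("2050-01-01", periods=365, freq="D")
daily = pd.DataFrame({"swh_m": rng.gamma(4.0, 0.4, 365),
                      "wind_ms": rng.gamma(5.0, 2.0, 365)}, index=days)
print(f"Projected downtime: {downtime_fraction(daily):.1%}")
```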
Comment 6: Expanding the description of the FARO interface and providing examples of its implementation in real-world management processes would be useful.
Response 6: This comment appears to be a duplicate of Comment 4. Please refer to Response 4 regarding the description of the FARO decision-support interface and real-world management examples.
Comment 7: The article does not fully disclose some technical details of the data processing and model parameters, which hinders reproducibility without access to the code and powerful computing infrastructure.
Response 7: We agree that full transparency is essential. We addressed this by explicitly listing the tuned hyperparameters used in the final ensemble (1000 estimators, learning rate 0.01, 5-fold CV) in the text to ensure replicability. We also reinforced in Section 3.5 that the full environment specifications (requirements.txt) are available in the GitHub repository.
Comment 8: A broader discussion of limitations is recommended, including potential uncertainties in climate scenario modeling and risk components not captured by the current approach.
Response 8: We agree. As detailed in Response 2, we rewrote Section 6.1 to explicitly categorize "Model" and "Scenario" uncertainties. Furthermore, regarding uncaptured risk components, we added a specific discussion on "Operational Persistence." We acknowledge that the current daily binary metric does not explicitly model the duration of continuous weather windows, identifying this as a key area for future module development.
Comment 9: More specific details about the XGBoost tuning parameters and model validation methods would be beneficial to improve methodological transparency.
Response 9: We agree. In Section 3.2.2, we added the specific hyperparameters derived from the cross-validation process (1000 estimators, learning rate 0.01). We also justified the choice of XGBoost over Deep Learning architectures (e.g., LSTM) by citing Grinsztajn et al. (2022), which demonstrates the superior efficiency of tree-based models for tabular environmental data.
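For transparency, the following minimal sketch illustrates this training configuration (1000 estimators, learning rate 0.01, 5-fold cross-validation) on synthetic data; the predictor set is an illustrative stand-in for the actual ERA5-derived features.

```python
import numpy as np
import xgboost as xgb
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
X = rng.normal(size=(5000, 4))                            # stand-ins for ERA5-derived predictors
y = 1.5 + 0.8 * X[:, 3] + 0.1 * rng.normal(size=5000)     # synthetic local SWH target [m]

model = xgb.XGBRegressor(n_estimators=1000, learning_rate=0.01,
                         objective="reg:squarederror", n_jobs=-1)

# 5-fold cross-validation, matching the protocol reported in Section 3.2.2
r2_scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print(f"Mean 5-fold R2: {r2_scores.mean():.3f}")
```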
Comment 10: In the results section, it would be desirable to include a sensitivity analysis of other operational parameters, such as the duration of the safe operational window.
Response 10: We acknowledge that operational duration (persistence) is critical. While the current study focused on the "Daily Availability" metric, we addressed this suggestion by adding a specific discussion in Section 6.1 (Limitations). We explicitly identified the lack of a "Window Persistence" analysis as a limitation of the current version and defined it as a priority module for future development.
Comment 11: Additional recommendations for MSP professionals on integrating FARO results into existing planning and decision-making processes would be welcome.
Response 11: We appreciate the suggestion to increase practical utility. We have expanded Section 5.3 to include a specific set of recommendations, proposing a two-step protocol: (1) Zoning Phase: Using the risk map as a quantitative "exclusion filter"; and (2) Licensing Phase: Incorporating the sensitivity analysis into environmental impact assessments (EIA) to mandate technical resilience demonstrations.
Author Response File:
Author Response.pdf
Reviewer 3 Report
Comments and Suggestions for Authors
The paper presents a relevant and innovative framework (FARO) for integrating climate non-stationarity into Marine Spatial Planning and offshore wind operational risk analysis. However, several methodological justifications, missing figures, and clarity issues must be addressed before the manuscript can be considered for publication.
Comment [1]: The methodological choice to rely on INPE regional projections (based on CMIP5) is justified primarily through availability and resolution. However, the authors must expand this justification by discussing:
- The performance differences between CMIP5-INPE and current CMIP6 ensembles.
- Potential biases introduced by excluding newer models.
- Whether regional CORDEX CMIP6 products were considered.
This is necessary for evaluating the robustness of FARO under current climate modeling standards.
Comment [2]:
The downscaling model reports R² = 0.8885 and RMSE = 0.1446 m, but the manuscript lacks a spatial breakdown of performance. Please provide:
* Spatial variability of model skill (coastal vs. offshore grid cells).
* Skill under extreme conditions (upper quantiles of SWH).
* A comparison with alternative ML algorithms (e.g., Random Forest, SVR, LSTM) or justification for selecting XGBoost alone.
Comment [3]:
The application of Quantile Delta Mapping is briefly described, but its calibration details are not provided. Please clarify: the calibration period, how extreme waves are handled, and whether trend preservation was tested.
Comment [4]:
Uncertainty is addressed only indirectly via scenario contrast (RCP 4.5 vs. 8.5). A more rigorous uncertainty discussion is expected: downscaling uncertainty; model driver uncertainty; operational threshold uncertainty; variability amplification under RCP 8.5. Even a qualitative framework would suffice.
Comment [5]: Equation (1) assumes daily binary operability based solely on SWH and wind speed exceedance thresholds. This simplification must be better justified.
Comment [6]:
Section 3.2 describes a geographic mask producing 441 grid points. The manuscript should clarify: how these points intersect EPE high-potential zones; whether partial grid-cell overlap was considered; and potential distortions due to the 0.25° grid resolution of ERA5.
This is important for interpreting Figures 1 and 3.
Comment [7]:
The manuscript states that FARO code is available on GitHub. Please expand reproducibility details: full dependency list and environment file (conda or pip); expected computational resources and runtimes; and instructions for downloading ERA5 and INPE data and replicating preprocessing.
Comment [8]:
The abstract should explicitly state that projections are based on CMIP5-derived INPE regional models, not CMIP6.
Comment [9]:
There are several formatting errors and broken strings (e.g., “https:oogithub.comoRinalde…”). These should be corrected for editorial quality. Double punctuation (e.g., “potential..”) appears in the Conclusions and should be revised.
Comment [10]:
Some links in the reference list appear corrupted. Please verify.
Comment [11]:
The limitations section (5.4) is concise but should be expanded to address: the implications of using only one regional climate dataset; the absence of multi-model ensembles; the lack of probabilistic analysis; and the effect of not considering multi-dimensional operational thresholds.
Comment [12]:
The conclusion asserts strong claims about FARO as a fully validated framework. Given the methodological constraints described, consider reflecting: validation within a specific region and data context, and the need for CMIP6-driven downscaling and multi-sector application tests.
Author Response
Author's Responses to Reviewer 3
On behalf of the authors, we wish to express our sincere gratitude for the attentive and rigorous review of this manuscript. The suggestions provided were fundamental in elevating the structural clarity and technical precision of our research. The corresponding changes have been integrated into the revised manuscript.
Comment 1: The methodological choice to rely on regional INPE projections (based on CMIP5) is mainly justified by availability and resolution. However, the authors must expand this justification by discussing: the differences in performance between CMIP5-INPE and current CMIP6 ensembles; potential biases introduced by the exclusion of newer models; and whether CORDEX CMIP6 regional products were considered.
Response 1: We appreciate this rigorous scrutiny regarding the climatological baseline. We have explicitly expanded Section 3.2.3 to address these three points:
- CMIP5-Eta vs. CMIP6: We argue that for wave operational risk, spatial resolution is the governing constraint. While CMIP6 offers updated microphysics, global models typically operate at coarse resolutions (>100 km), which smooth out the wind-sea fetches critical for downtime analysis. The INPE-Eta dataset provides 20 km dynamic downscaling, offering superior capture of local extremes compared to raw CMIP6 data.
- Potential Biases: We acknowledge that older forcing models may contain biases. To mitigate this, we implemented a Quantile Delta Mapping (QDM) protocol (Cannon et al., 2015), which corrects distributional biases while preserving the relative climate trends.
- CORDEX: We clarified that CORDEX CMIP6 products for the South Atlantic were evaluated but currently lack the extensive validation maturity and specific daily granularity of the "Projeta Brasil" (INPE) dataset for this specific operational application.
Comment 2: The downscaling model reports R² = 0.8885 and RMSE = 0.1446 m, but the manuscript lacks a spatial breakdown of performance. Please provide: Spatial variability of model skill; skill under extreme conditions; and a comparison with alternative ML algorithms (e.g., Random Forest, SVR, LSTM) or justification for selecting XGBoost alone.
Response 2: We have addressed these technical requests by expanding Section 3.2.2:
- Spatial Variability: We explicitly clarified that while the ensemble-mean R² is 0.88, we observed a slight performance degradation in immediate coastal cells due to complex shallow-water bathymetric interactions.
- Extremes: We noted that the tree-based ensemble demonstrated robustness in capturing upper quantiles (extremes), avoiding the saturation effects common in linear statistical downscaling.
- Algorithm Selection: We justified the choice of XGBoost over Deep Learning (e.g., LSTM) by citing Grinsztajn et al. (2022), which demonstrates that tree-based models offer superior efficiency and accuracy for tabular environmental data.
Comment 3: The application of Quantile Delta Mapping is briefly described, but its calibration details are not provided. Please clarify: Calibration period, how extreme waves are treated, and whether trend preservation was tested.
Response 3: We have provided the requested technical details in Section 3.2.3:
- Calibration Period: We specified that the bias correction was calibrated using the historical overlap period (1990–2015).
- Extremes & Trends: We clarified that the QDM algorithm (Cannon et al., 2015) was selected precisely because it explicitly preserves the model's projected relative changes (deltas) for all quantiles. This ensures that future extreme events are not artificially "clamped" to historical distributions.
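For illustration, a minimal additive Quantile Delta Mapping sketch (after Cannon et al., 2015) is given below; the synthetic arrays and the additive form are illustrative assumptions, not the exact production implementation.

```python
import numpy as np

def qdm_additive(obs_hist, mod_hist, mod_fut):
    """Bias-correct mod_fut while preserving its projected change signal."""
    # Non-exceedance probability of each future value within its own series
    tau = np.argsort(np.argsort(mod_fut)) / (len(mod_fut) - 1)
    # Absolute change signal relative to the historical model distribution
    delta = mod_fut - np.quantile(mod_hist, tau)
    # Map onto the observed distribution and re-apply the change signal
    return np.quantile(obs_hist, tau) + delta

# Example with synthetic wave-height-like data (placeholders only)
rng = np.random.default_rng(0)
obs = rng.gamma(4.0, 0.4, 9000)    # "observed" SWH [m], e.g., ERA5 1990-2015
mod = rng.gamma(4.0, 0.5, 9000)    # biased historical model, same period
fut = rng.gamma(4.0, 0.55, 9000)   # projected future model series
corrected = qdm_additive(obs, mod, fut)
print(f"Raw future mean {fut.mean():.2f} m -> corrected {corrected.mean():.2f} m")
```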
Comment 4: Uncertainty is addressed only indirectly through scenario contrast (RCP 4.5 vs. 8.5). A more rigorous discussion on uncertainty is expected. Even a qualitative framework can be addressed.
Response 4: We agree that a structured discussion on uncertainty is vital. We have addressed this by rewriting Section 6.1 to establish the requested "Uncertainty Framework," explicitly mapping the sources of risk:
- Model Driver Uncertainty: Addressed as "Model Epistemic Uncertainty" (reliance on a single regional driver).
- Technological Uncertainty: Quantified via the sensitivity analysis (Figure 5).
- Scenario Uncertainty: Addressed by capturing the amplification of variance under RCP 8.5.
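For illustration, a minimal sketch of such a technological sensitivity sweep (downtime as a function of the operational wave-height limit) is shown below; the synthetic data and limits are placeholders, not the manuscript's results.

```python
import numpy as np

# Synthetic daily significant wave heights over a 30-year horizon
rng = np.random.default_rng(1)
swh = rng.gamma(4.0, 0.4, size=365 * 30)

# Sweep the operational wave-height limit (proxy for vessel/technology class)
for hs_limit in [1.0, 1.5, 2.0, 2.5]:
    downtime = np.mean(swh > hs_limit)      # share of non-operable days
    print(f"Hs limit {hs_limit:.1f} m -> downtime {downtime:.1%}")
```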
Comment 5: Equation (1) assumes daily binary operability based only on SWH and wind speed limits. This simplification must be better justified.
Response 5: We agree that real-world operations involve complex decision-making. However, we have justified this simplification in Section 3.3.2 by citing industry standards. We clarified that for strategic long-term planning, this binary classification serves as the standard industry proxy for "Waiting on Weather" (WoW) downtime. We referenced the DNV-ST-N001 standard to demonstrate that relying on these probabilistic exceedance thresholds is the accepted methodology for estimating fleet availability.
Comment 6: Section 3.3.1 describes a geographic mask yielding 441 grid points. The manuscript must clarify: How these points intersect EPE high-potential zones; whether partial overlap of grid cells was considered; and potential distortions due to the 0.25° ERA5 grid resolution.
Response 6: We have clarified the geospatial processing details in Section 3.3.1:
- Intersection Method: We specified that the spatial domain was defined using a "Centroid Inclusion" method: ERA5 grid cells were selected only if their centroids fell strictly within the high-potential polygons defined by the EPE Roadmap.
- Resolution & Distortion: We explicitly acknowledged that the 0.25-degree resolution (~27 km) imposes a limitation for nearshore micro-siting. However, we argued that this resolution is methodologically appropriate for the meso-scale regional risk assessment proposed by FARO.
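For illustration, a minimal geopandas sketch of this "Centroid Inclusion" selection is given below; the shapefile name, bounding box, and grid window are hypothetical placeholders.

```python
import numpy as np
import geopandas as gpd
from shapely.geometry import Point

# Rebuild ERA5 0.25-degree grid centroids over an illustrative SE Brazil window
lats = np.arange(-28.0, -20.0, 0.25)
lons = np.arange(-48.0, -38.0, 0.25)
centroids = [Point(lon, lat) for lat in lats for lon in lons]
grid = gpd.GeoDataFrame(geometry=centroids, crs="EPSG:4326")

# Hypothetical shapefile with the EPE high-potential polygons
zones = gpd.read_file("epe_zones.shp").to_crs("EPSG:4326")

# Keep only the cells whose centroids fall strictly inside the polygons
selected = gpd.sjoin(grid, zones, predicate="within", how="inner")
print(f"{len(selected)} grid points retained")
```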
Comment 7: The manuscript states that the FARO code is available on GitHub. Expand the reproducibility details: Complete list of dependencies and environment file (conda or pip). Computational resources and expected runtime. Instructions for downloading ERA5 and INPE data and replicating pre-processing.
Response 7: We have fully addressed this request to ensure robust reproducibility:
- Dependencies: We updated Section 3.5 to explicitly state that the GitHub repository now includes a requirements.txt file and a conda environment configuration.
- Resources: We added a note regarding computational benchmarks, specifying that the inference pipeline is optimized to run on standard workstations.
- Data Acquisition: We clarified that the repository includes a step-by-step README guide for automating the download of ERA5 (via CDS API) and INPE data.
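For illustration, a minimal CDS API request of the kind automated by the README is sketched below; the area box, years, and output file name are illustrative only, and a registered CDS account (~/.cdsapirc) is required.

```python
import cdsapi

client = cdsapi.Client()
client.retrieve(
    "reanalysis-era5-single-levels",
    {
        "product_type": "reanalysis",
        "variable": [
            "significant_height_of_combined_wind_waves_and_swell",
            "10m_u_component_of_wind",
            "10m_v_component_of_wind",
        ],
        "year": [str(y) for y in range(1990, 2016)],
        "month": [f"{m:02d}" for m in range(1, 13)],
        "day": [f"{d:02d}" for d in range(1, 32)],
        "time": ["00:00", "06:00", "12:00", "18:00"],
        "area": [-20, -48, -28, -38],   # N, W, S, E (illustrative SE Brazil box)
        "format": "netcdf",
    },
    "era5_se_brazil.nc",
)
```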
Comment 8: The abstract should explicitly state that the projections are based on INPE regional models derived from CMIP5, not CMIP6.
Response 8: We agree that transparency regarding the data source is essential. We have modified the Abstract to explicitly state that the study utilizes "INPE-Eta/CMIP5 regional projections," avoiding any ambiguity.
Comment 9: There are several formatting errors and broken strings. These must be corrected for editorial quality. Double punctuation (e.g., “potential..”) appears in the Conclusions and must be revised.
Response 9: We sincerely apologize for these editorial oversights. We have conducted a thorough proofreading:
- Broken Links: We verified and corrected the GitHub URL in Section 3.5.
- Punctuation: We eliminated all instances of double punctuation.
- Formatting: We reviewed the reference list and in-text citations to ensure consistent formatting strings.
Comment 10: Some links in the reference list appear corrupted. Please verify.
Response 10: A check was performed on every link in the reference list to ensure they are active and correct.
Comment 11: The limitations section is concise but must be expanded to address: The implications of using only one regional climate dataset; absence of multi-model ensembles; lack of probabilistic analysis; effect of not considering multi-dimensional operational thresholds.
Response 11: We agree. As detailed in Response 4, we have completely rewritten Section 6.1 to explicitly cover these points:
- Single Model/No Ensemble: We classified this as "Model Epistemic Uncertainty," justifying the trade-off of high resolution vs. ensemble spread.
- Multi-dimensional Thresholds: We added a specific discussion acknowledging that the current function (Eq. 1) is uni-dimensional. We explicitly suggested that future iterations should incorporate multi-dimensional constraints (e.g., Mean Wave Period limits) and "Window Persistence."
Comment 12: The conclusion makes strong claims about FARO as a fully validated framework. Given the methodological restrictions described, consider reflecting: validation within a specific regional and data context.
Response 12: We agree that scientific precision is paramount. We have revised the Conclusions (Section 6) to qualify the scope of our findings:
- Regional Context: We explicitly stated that the framework was validated "within the specific climatic context of Southeastern Brazil."
- Future Scope: We reinforced that while the framework is a robust "proof-of-concept," its global generalization requires the expansion to CMIP6 multi-model ensembles, which we explicitly listed as a future research step.
Author Response File:
Author Response.pdf
Reviewer 4 Report
Comments and Suggestions for Authors
- In the title of the manuscript the first sentence “BEYOND STATIONARITY:” does not make sense. I suggest the authors remove it.
- All main headings have to be rephrased/retitled to be consistent with the following naming: 1. Introduction, 2. Literature Review, 3. Methodology and Analysis, 4. Results, 5. Discussion, 6. Conclusions, Implications and Future Research Works. All sections missing from the above should be renamed and fitted to this structure, and all other headings should be placed as subheadings underneath the aforementioned six main headings.
- At the beginning of the “3. Methodology and Analysis” section, the authors are recommended to provide a separate subsection “3.1 Description of the Studied Area”, in which a brief analysis, demographic data, resident population (in millions of people), and land uses (coastal zone, urban zone, agricultural zone, industrial zone, all in km²) should be provided in the form of Tables. Subsection 4.1 has to be removed and transferred to the beginning of Section 3, not Section 4.
- At the end of Section “2. Literature Review”, a Figure 1 (Study Framework) can be formulated, in which the steps of this study are shown graphically: a flow chart, diagram, or any other graphical format of the authors’ preference. Besides, the citations of this Section 2 have to be internationally covered, not only narrowed to “Marine Spatial Planning (MSP) and FARO in Brazil”.
- In Section “3. Methodology and Analysis” it is not clear what the datasets and variables of the FARO methodological framework are. It is recommended that this quantitative information be presented in the form of one or more Tables, keeping the narrative description of all subsections of main Section 3 as they are.
- In the “Discussion” section the authors are recommended to reorganize their argumentation on the findings by succinctly denoting:
- What are the environmental risks and the environmental aspects covered by “Marine Spatial Planning (MSP)”?
- What is the contribution of the FARO results towards climate at the regional (Brazilian) level of analysis? Are land uses/changes accounted for?
- Based on the statement that “FARO proves to be a robust decision-support instrument, effectively bridging state-of-the-art regional climate science with participatory planning to foster genuinely sustainable and resilient maritime development”, the authors are suggested to outline, in the form of bulleted hints, what the benefits of synergy and adaptation of the FARO model with other similar methodologies should be, covering an international, beyond-Brazil, upscaling/forecasting level of analysis.
Author Response
Author's Responses to Reviewer 4
On behalf of the authors, we wish to express our sincere gratitude for the attentive and rigorous review of this manuscript. The suggestions and constructive criticisms provided were fundamental in elevating the structural clarity and technical precision of our research. The corresponding changes have been integrated into the revised manuscript.
Comment 1: The first phrase in the manuscript title, “BEYOND STATIONARITY:”, does not make sense. I suggest the authors remove it.
Response 1: We accept this suggestion to improve the conciseness and directness of the title. We have removed the prefix "BEYOND STATIONARITY:", and the manuscript is now titled: "THE FARO FRAMEWORK FOR QUANTIFYING ADAPTIVE OPERATIONAL RISK IN MARINE SPATIAL PLANNING".
Comment 2: All main headings must be reformulated/renamed to be consistent with the following nomenclature: 1. Introduction, 2. Literature Review, 3. Methodology and Analysis, 4. Results, 5. Discussion, 6. Conclusions, Implications, and Future Research.
Response 2: We accept this structural suggestion to ensure consistency with the requested nomenclature.
- Renaming: We have renamed Section 2 to "Literature Review", Section 3 to "Methodology and Analysis", and Section 6 to "Conclusions, Implications, and Future Research".
- Restructuring: To align with the new Section 6 title, we moved the content of the former Section 6.1 (Methodological Limitations) into Section 6. The "Implications" discussed in Section 5 remain under "Discussion" but are now conceptually bridged to the expanded Conclusion.
Comment 3: At the beginning of Section 3 (Methodology and Analysis), the authors are recommended to provide a separate subsection "3.1 Description of the Studied Area" which includes a brief analysis, demographic data... and land uses... to be provided in tabular form. Subsection 4.1 must be removed and transferred to the beginning of Section 3.
Response 3: We have implemented this structural recommendation.
- Section 3.1: We renamed the existing Section 3.1 to "Description of the Studied Area" and consolidated all descriptive elements there (transferring content from the former Section 4.1).
- Table Data: We included Table 1, which details the requested demographics (Population: 64.3 Million) and economic characterization (GDP, Oil & Gas) of the adjacent coastal states (Rio de Janeiro, Espírito Santo, São Paulo). We maintained the focus on "Marine Uses" (Energy and Conservation) rather than terrestrial land uses (e.g., agriculture), as these are the relevant spatial conflict variables for Offshore Wind planning.
Comment 4: At the end of Section 2 (Literature Review), Figure 1 (Study Framework) could be formulated, showing the steps of this study graphically... Additionally, the citations in this Section 2 should be covered internationally.
Response 4:
- Graphic Framework: We highlight that this requirement is fully met by Figure 2 (FARO Conceptual Framework Flowchart). We respectfully maintained this figure at the beginning of Section 3 (Methodology) rather than Section 2, as it technically illustrates the methodological pipeline (Input-Processing-Output) described in that section.
- International Citations: We reviewed Section 2 and confirmed that the theoretical foundation is built upon the global state-of-the-art. Key citations include seminal international works such as Milly et al. (Science, 2008) on stationarity, Bennett et al. (Nature Sustainability, 2019) on the Blue Economy, and Pınarbaşı et al. (Marine Policy, 2017) on MSP decision tools.
Comment 5: In Section 3 (Methodology and Analysis), it is not clear what the dataset and variables of the FARO methodological framework are. It is recommended that this quantitative information be presented in the form of one or more Tables.
Response 5: We fully agree that a transparent definition of input variables is essential for reproducibility. We highlight that Table 2 (Data Sources, Variable Specifications, and Feature Engineering) in Section 3.3 was specifically included to meet this requirement. This table explicitly breaks down the dataset specifications, listing the Native Resolution (0.25° vs 20 km), Raw Input Variables, and Derived Engineering Features for both the historical reanalysis (ERA5) and future projections (INPE).
Comment 6: In the Section “Discussion”, the authors are recommended to reorganize their argumentation... denoting: What are the environmental risks... covered by “Marine Spatial Planning”? What is the contribution of the FARO results to the climate at the regional-Brazilian analysis level? Are land uses/changes accounted for? ...suggested to outline in bullet points what should be the benefits of the synergy and adaptation... covering an international level of upscaling/forecasting beyond Brazil?
Response 6: We have addressed this detailed request by populating and expanding Section 5.5 (Synergy with International Methodologies and Scalability). We structured the discussion to succinctly cover the requested points:
- Environmental Scope: We clarified that FARO complements traditional MSP environmental assessments by adding the layer of "Physical-Operational Risk" (infrastructure resilience).
- Regional Contribution: We defined FARO's contribution as "Climate Adaptation" (resilience). We clarified that instead of "land use," the framework accounts for "Marine Space Use" (conflicts with O&G and Conservation Areas).
- Synergies (Bullet Points): As requested, we included a bulleted list outlining the benefits of integrating FARO with international tools (e.g., Marxan) and its scalability to other regions (e.g., West Africa or Southeast Asia), addressing the international upscaling potential.
Author Response File:
Author Response.pdf
Round 2
Reviewer 3 Report
Comments and Suggestions for Authors
The authors responded to the review comments, and the paper is clearly improved from a scientific perspective.
Reviewer 4 Report
Comments and Suggestions for Authors
The manuscript has been revised in a satisfactory manner, having all research parts fully developed and the research findings well presented and creatively discussed. In this context, this revised manuscript can be accepted for publication in the Sustainability journal as is.