Article
Peer-Review Record

Sensitivity of Spring Phenology Simulations to the Selection of Model Structure and Driving Meteorological Data

Atmosphere 2021, 12(8), 963; https://doi.org/10.3390/atmos12080963
by Réka Ágnes Dávid 1, Zoltán Barcza 1,2,3,*, Anikó Kern 4, Erzsébet Kristóf 2, Roland Hollós 1,2, Anna Kis 1,2, Martin Lukac 3,5 and Nándor Fodor 6
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Submission received: 12 May 2021 / Revised: 22 July 2021 / Accepted: 24 July 2021 / Published: 27 July 2021
(This article belongs to the Section Biometeorology)

Round 1

Reviewer 1 Report

The study area is Hungary. The NDVI3g dataset was used as an observational reference. The authors seek answers to the following questions: How accurately can the models simulate the observed SOS climatology in the region? Are the models able to capture the observed inter-annual variability and long-term SOS trends? Is the choice of model or the choice of meteorological database the more important factor influencing the accuracy of the SOS estimate?
In my opinion, the main result tables lack statistical measures showing how accurately the models (WM, CWM, GSIM) driven by the CarpatClim, FORESTEE and ERA5 databases estimate the examined features (SOS, IAV).

At least Table 2 should include statistical measures (bias, RMSE, RRMSE, index of agreement, R2, or modeling efficiency).
The scope of the measures given in the Discussion (Table 5) is insufficient to answer the questions posed in the aims of the study.
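The measures the reviewer asks for are standard and can be computed side by side from the same paired series; a minimal sketch in Python/NumPy (the function name and signature are illustrative, not from the manuscript):

```python
import numpy as np

def evaluation_metrics(obs, sim):
    """Standard agreement statistics for simulated vs. observed SOS (days)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    err = sim - obs
    bias = err.mean()                          # mean error (sim - obs)
    rmse = np.sqrt((err ** 2).mean())          # root mean square error
    rrmse = rmse / obs.mean()                  # RMSE relative to the observed mean
    # Willmott's index of agreement (d)
    d = 1 - (err ** 2).sum() / (
        ((np.abs(sim - obs.mean()) + np.abs(obs - obs.mean())) ** 2).sum()
    )
    # Nash-Sutcliffe modeling efficiency
    nse = 1 - (err ** 2).sum() / ((obs - obs.mean()) ** 2).sum()
    r2 = np.corrcoef(obs, sim)[0, 1] ** 2      # coefficient of determination
    return dict(bias=bias, rmse=rmse, rrmse=rrmse, d=d, nse=nse, r2=r2)
```

Reporting this set per model-database combination would directly answer the accuracy questions raised above.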
I suggest accepting the article for publication after these additions are made and described in the Results.

Author Response

Please see the attachment.

Author Response File: Author Response.docx

Reviewer 2 Report

The study addresses the influence of different phenology models and driving meteorological data on the representation of spring phenology and its variability.

The overall topic, the methods and the results are presented well and are very interesting and plausible. Some detailed results and parts of the discussion could be improved by following a common thread.

The title could be improved by formulating it as a clear statement. I would suggest something like: 'Sensitivity of spring phenology simulations to the selection of model structure and driving meteorological data' or whatever you think fits best.

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 3 Report

See attached Word document.

Comments for author File: Comments.docx

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

This manuscript is a resubmission of an earlier submission. The following is a list of the peer review reports and author responses from that submission.


Round 1

Reviewer 1 Report

This work undoubtedly addresses an important topic whose problem is well presented in the introduction, namely that many regional models simulate the onset of spring vegetation growth with scarce documentation of the reliability of the results. Indeed, this work tests three phenological models driven by three different climatic databases to simulate the onset of vegetation growth at the regional scale (Hungary) and compares the results with an observational reference dataset.

However, none of the model-database combinations could reproduce the observed start of season (SOS) climatology within the study region, and the only significant result is that a complex bioclimatic index could capture the overall trend driven by the ERA5 database when data points are aggregated over the whole study area. Such negative results are very important, as many regional models use algorithms that are not properly tested.


However, the lack of meaningful results renders many of the considerations presented in the article moot, such as the claim that the "simulated timing of SOS depends on the choice of model structure and driving meteorological dataset as well, where the former has a determinant role in the majority of the study area", or those presented in Section 4 (Discussion). The authors could comment that the simulations are "mostly affected" by the model algorithm rather than the selected dataset, but they cannot provide reliable simulations at all, either for the interannual variability, the average trend, or the spatial pattern. The entire discussion presented in the paper makes it seem that the results are significant and thus deserve to be compared and discussed. A major effort should be made to analyse why the model-dataset combinations fail to catch the observations.

The observed variability in the SOS (overall SD in the NDVI3g data) is ca. 6 days, while the smallest RMSE obtained from the results (simulated minus observed) is 6.6 days (CWM-ERA5). This means that the arithmetic mean (plus or minus the SD) is a better predictor than any of the model-dataset combinations.
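This point can be checked directly: by construction, the RMSE of a predictor that always returns the climatological mean equals the (population) standard deviation of the observations, so any model with an RMSE above the observed SD is outperformed by the trivial mean predictor. A small sketch with a synthetic series (the nominal 6-day SD and the 28-year length follow the review; the numbers themselves are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical 28-year observed SOS series (day of year) with ~6-day SD
obs = rng.normal(loc=105.0, scale=6.0, size=28)

# Predicting the climatological mean every year gives an RMSE equal to the
# population standard deviation of the observations
mean_predictor_rmse = np.sqrt(np.mean((obs - obs.mean()) ** 2))
assert np.isclose(mean_predictor_rmse, obs.std())
```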

The results obtained from WM-ERA5 at the spatial scale are significant, but the performance is poor. When working with a coarse dataset (i.e. a 0.1° × 0.1° grid), a high level of generalization (no PFTs are distinguished) and low resolution (as for the target observational dataset, which has a 15-day temporal resolution), the simplest models can perform better, as they have few parameters and the sum of the uncertainties (and species-related differences) arising from parameterization remains relatively low. Such considerations should be added and commented on in the manuscript.


Minor comments:

Sec 2.1. Specify which climate classification scheme is adopted (Köppen?). It would be very useful to insert a land-use map here to visualize where forests, arable lands and grasslands are located.

Sec. 2.2. It is not clear whether the adopted grid (0.1° × 0.1°) is a compromise between operability and resolution, or whether it was chosen because the SOS showed no difference between the different land-use types.

It would be easier to read the Methods if the algorithms were briefly illustrated through equations, along with the adopted parameter values (the authors only cite the references from which the parameters and values were taken).

Lines 242-244 are not clear, especially with regard to lines 272-280.

Figure 2. It would be very useful to also plot maps of the overall SD (pixel by pixel; the authors can estimate the median, as reported in Fig. 2, but also the SD obtained from the 28 years of data).

Sec 3.2. Maps of the RMSE obtained from each model-dataset combination could be useful to understand where, if anywhere, the model-dataset combinations can reproduce the observations. (Take care that pixels should be left blank (not coloured) where the p-value > 0.05. Indeed, in each pixel 28 years of simulations can be compared with the observations, possible correlations can be found, and the related RMSE can be estimated.)
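One way to build such significance-masked RMSE maps, sketched in NumPy under the reviewer's setup (28 years per pixel; the critical Pearson r of ~0.374 corresponds to a two-sided p = 0.05 at n = 28 and is an assumption of this sketch, not a value from the paper):

```python
import numpy as np

def rmse_map(obs, sim, r_crit=0.374):
    """Pixel-wise RMSE, masked where the obs-sim correlation is not significant.

    obs, sim: arrays of shape (years, ny, nx); r_crit is the two-sided
    critical Pearson r for n = 28 years at p = 0.05 (~0.374).
    """
    err = sim - obs
    rmse = np.sqrt((err ** 2).mean(axis=0))
    # Pixel-wise Pearson correlation along the time axis
    o = obs - obs.mean(axis=0)
    s = sim - sim.mean(axis=0)
    r = (o * s).sum(axis=0) / np.sqrt((o ** 2).sum(axis=0) * (s ** 2).sum(axis=0))
    # NaN marks pixels to leave blank (not coloured) on the map
    return np.where(np.abs(r) >= r_crit, rmse, np.nan)
```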

Tables: significant results (those with p-value < 0.05) should be highlighted.


Reviewer 2 Report

This study investigated the sensitivity of spring phenology to model structure and meteorological datasets, based on three phenology models, land surface phenology measurements and multiple climate datasets. The potential contribution of the study is important: it may improve our understanding of how different models or meteorological datasets affect model estimates of spring phenology, informing future model selection and prediction. However, I found that this study has a major flaw that prevents evaluation of the results of the model estimation.

Specifically, at lines 124-127, I do not agree with the statement that the spring leaf growth of different plant functional types within a narrow time window justifies the application of a single parameter set per model for the entire study area. It is commonly known that spring leaf-out happens quickly in temperate regions, but many previous studies have suggested different phenological sensitivities, chilling requirements and thermal accumulation (GDD) requirements among different plant species and communities. When the same model parameters are applied to the entire study area, the assumption is that all plants follow the same bio-physiological process of spring leaf growth simulated by the model.

It would actually be acceptable for this study to make this assumption if the models were simulated at the plant community or functional type level, because we cannot assume that crops, grasses and forests share the same physiological processes. Their spring phenology is driven by the same environmental factors (e.g. temperature), but with different thermal requirements, not to mention that some species are sensitive to photoperiod while others are not. The land cover data mention forest, agricultural land and grassland in the study area. Thus, the models cannot be applied to the entire study area as done in this study; this needs to be revised.
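The per-PFT parameterization the reviewer asks for can be illustrated with a minimal growing-degree-day onset model, where each plant functional type carries its own base temperature and heat requirement; all parameter values below are hypothetical placeholders, not taken from the manuscript:

```python
import numpy as np

# Hypothetical per-PFT parameters (base temperature in °C, GDD requirement
# in °C·day); the values are illustrative only.
PFT_PARAMS = {
    "forest":    (5.0, 120.0),
    "grassland": (0.0, 90.0),
    "cropland":  (4.0, 100.0),
}

def sos_gdd(tmean, pft):
    """Day of year when accumulated growing degree days reach the PFT threshold.

    tmean: daily mean temperatures (°C) from 1 January; returns a 1-based DOY,
    or None if the requirement is never met.
    """
    t_base, gdd_req = PFT_PARAMS[pft]
    gdd = np.cumsum(np.maximum(np.asarray(tmean, float) - t_base, 0.0))
    if gdd[-1] < gdd_req:
        return None
    return int(np.argmax(gdd >= gdd_req)) + 1  # first day meeting the threshold
```

Running such a model per land-cover class, instead of with a single region-wide parameter set, is the structural change the reviewer is requesting.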

Reviewer 3 Report

Please see the attached file

Comments for author File: Comments.pdf
