Evaluation of the Sensitivity of the Weather Research and Forecasting Model to Changes in Physical Parameterizations During a Torrential Precipitation Event of the El Niño Costero 2017 in Peru
Round 1
Reviewer 1 Report
Comments and Suggestions for Authors
This paper evaluates the sensitivity of WRF simulations to the choice of physics schemes for an extreme precipitation event that occurred from March 13 to 15, 2017, under El Niño Costero conditions over parts of Peru. Different combinations of parameterizations are used in a total of 22 experiments, combining 2 boundary layer schemes (YSU and MYJ) and 5 different cloud parameterizations (BMJ, KF, GF, GD, and NT), as well as the explicit resolution of convection by the model itself. The impact of spatial resolution and cloud parameterization on the precipitation simulations is evaluated.
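As a reading aid, the count of 22 experiments is consistent with one plausible breakdown of the combinations just listed: 2 PBL schemes × (5 cumulus schemes, each run with the cumulus scheme either active or switched off in the inner domain, plus one fully explicit run) = 2 × 11 = 22. Below is a minimal Python sketch of that enumeration, assuming the Y_/M_ prefix and _OFF suffix naming that appears later in these comments; the "_EXPLICIT" label is hypothetical, not taken from the manuscript.

```python
# Hypothetical reconstruction of the 22-experiment matrix described above:
# 2 PBL schemes x (5 cumulus schemes x {active or off in d02} + fully explicit) = 2 x 11 = 22.
pbl_schemes = {"Y": "YSU", "M": "MYJ"}
cu_schemes = ["BMJ", "KF", "GF", "GD", "NT"]

experiments = []
for pbl in pbl_schemes:
    for cu in cu_schemes:
        experiments.append(f"{pbl}_{cu}")       # cumulus scheme active in d01 and d02
        experiments.append(f"{pbl}_{cu}_OFF")   # cumulus in d01 only, explicit convection in d02
    experiments.append(f"{pbl}_EXPLICIT")       # hypothetical label: explicit convection in both domains

print(len(experiments))  # -> 22
```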
The work is interesting and informative. The paper is well written and suitable for publication in Water after some minor revisions. My comments follow. Note that “Line +xx” and “Line –xx” refer to line numbers counted from the top and from the bottom of a page, respectively.
1. Page 1, line –12: “(1) the presence of trade winds and southwesterly winds”. Should this be “(1) the presence of southeasterly trade winds”?
2. Page 2, line +31: In 2017 there was an El Niño Costero event, which was considered “moderate”.
3. Page 3, line +15: “In other study over the southeastern of Peru by González-Rojí et al. [16], … They found…” Here, the author names for reference [16] are needed.
4. Page 4, line +3: austral summer (December to March)
5. Page 6, Table 2, line + 5: change “seg” to “sec” for the abbreviation of second
6. Page 7, line –3: From this point forward, “RMSE” should be referred to as “RRMSE” (Relative Root Mean Square Error). All instances of “RMSE” in the manuscript need to be replaced with “RRMSE”.
7. Figure 2: can the authors provide the pattern correlation (see “pattern correlation” in the AMS Glossary of Meteorology) between CHIRPS and each simulation in Figure 2, to support the following statements (a minimal sketch of such a calculation is given after this list):
a. Page 9, line –20 to line –22: “results in d02 seem to be more aligned with the CHIRPS pattern in terms of precipitation distribution and levels compared to those in d01, suggesting that the increase in spatial resolution provides better results capturing the total precipitation of this event”.
b. Page 9, line -13 to line -15: “The Y_KF_OFF and Y_GF_OFF experiments appear to better capture the precipitation distribution for this event compared to CHIRPS”
8. Page 11, line +30 to line +31: “NT convection experiments show a more moderate bias in the central area, while the _OFF version”. Be consistent: use “Y_NT” and “Y_NT_OFF”.
9. Page 11, line –3 to line –4: “M_KF shows a more pronounced overestimation, while M_KF_OFF presents greater underestimations”. This is hard to follow because the locations of the overestimation and the underestimation are not clearly stated.
10. Page 11, line – 17: Fig. 6 shows the relative bias results…
11. Page 15, Fig. 6, legend: Change “YSU” to “MYJ”
12. Page 19, Table 7, row 2: change “0,96” to “0.96”
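Regarding comment 7 above, the following is a minimal sketch of a pattern correlation between the CHIRPS field and one simulated field, assuming both are already interpolated to a common grid; the function and variable names are illustrative, not taken from the manuscript.

```python
import numpy as np

def pattern_correlation(obs, sim, centered=True):
    """Pearson-type pattern correlation between two fields on the same grid.

    obs, sim : 2-D arrays of accumulated precipitation (e.g., CHIRPS vs. one WRF run).
    centered : if True, subtract each field's spatial mean (centered/anomaly form);
               if False, compute the uncentered version.
    """
    o = np.asarray(obs, dtype=float).ravel()
    s = np.asarray(sim, dtype=float).ravel()
    mask = np.isfinite(o) & np.isfinite(s)   # skip missing grid points
    o, s = o[mask], s[mask]
    if centered:
        o -= o.mean()
        s -= s.mean()
    return float(np.sum(o * s) / np.sqrt(np.sum(o**2) * np.sum(s**2)))

# Illustrative usage: one correlation value per experiment shown in Figure 2.
# r_d02 = pattern_correlation(chirps_total, wrf_total_d02)
```

In practice, an area (latitude) weighting of the grid cells may also be applied before computing the sums.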
Comments on the Quality of English Language
See the above review.
Author Response
file attached
Author Response File: Author Response.pdf
Reviewer 2 Report
Comments and Suggestions for Authors
The term “WRF calibration” is incorrect. It could be something like “evaluation of the sensitivity of physical parameterizations,” but the WRF is not calibrated.
Why didn’t you test the sensitivity to cloud microphysics schemes? Even though it is a convective event, the WRF always simulates part of the event (especially the beginning and the end) with the microphysics scheme.
El Niño is a climatic phenomenon; it can remotely induce precipitation, but it is not the cause of an extreme precipitation event. The authors are confusing the scales of the phenomena and need to correct this throughout the text. What were the mesoscale and/or synoptic conditions associated with the precipitation event? This needs to be described in the article so that the influence of each parametric scheme can be understood. The article only mentions a second ITCZ band but needs to describe the phenomena on the correct scales and give each one its proper importance (was the event solely one of deep convection, for example? What was the influence of orographic factors, etc.).
What literature did the authors use to classify the precipitation event as extreme? At least looking at the accumulated totals over 72 hours in Table 1, this does not appear to have been an extreme precipitation event. Perhaps if precipitation rates over shorter intervals of time were evaluated, that conclusion could be reached.
Why did you consider a 24-hour spin-up and not 6 hours as most of the literature suggests? The reference you used for this is not a well-established reference for the use of the WRF (Soriano, C., Jorba, O., & Baldasano, J. M. (2004)).
The analyses are interesting, but the authors need to choose events with similar mesoscale and/or synoptic conditions to use as a basis for comparison. You cannot compare a barotropic event with others that occurred in baroclinic regions and attempt to draw conclusions about the physical configuration of the model. Care must be taken in this regard.
Author Response
file attached
Author Response File: Author Response.pdf
Reviewer 3 Report
Comments and Suggestions for Authors
This paper evaluates the accuracy of different planetary boundary layer schemes and cumulus schemes in the WRF model for simulating extreme precipitation events associated with El Niño Costero in the Peru region. The results show that the YSU scheme outperforms the MYJ scheme in precipitation simulation for this region. The findings also indicate that using the KF scheme in the outer domain and disabling the cumulus scheme in the inner domain yields more accurate simulations of intense precipitation, while using the GD scheme or NT scheme in both the outer and inner domains provides more accurate simulations of light precipitation. Overall, the paper is logically structured and holds significant research value. It is recommended for publication after the authors make the following revisions.
1. Page 6, Figure 1. The ocean and the 5000 m terrain elevation are represented using the same color, which can easily cause misunderstanding.
2. Page 6. At a grid resolution of 3 km, the model is already capable of resolving most convective processes. Would the use of cumulus parameterization at this resolution lead to redundancy, with convection being represented by both the cumulus scheme and the microphysics scheme? In this context, I believe more consideration should be given to scale-adaptive cumulus parameterization schemes.
3. Page 7, Table 1. Does THO in MPH refer to the Thompson scheme? Have you considered evaluating the sensitivity of the simulations over the study region to the MPH selection?
4. Page 8. When A = 0 and B = C, FBI = 1; however, this does not imply that the prediction is accurate (see the formula given after this list).
5. Page 9. Could the overestimation of precipitation in mountainous regions be caused by precipitation being redundantly represented by both the cumulus scheme and the microphysics scheme in the model?
6. Page 10. The current study tests the sensitivity of the study region to different cumulus schemes. I believe it is necessary to provide the proportion of precipitation produced by the cumulus schemes relative to the total precipitation in each experiment.
7. Page 11. These simulations all exhibit an overestimation in the southwestern part of the simulation domain and an underestimation in the northeastern part. What could be the possible reasons for this?
8. Pages 17-19. For intense precipitation, disabling the convection scheme seems to yield better simulation results, while for light precipitation, enabling the convection scheme shows better performance. What could be the possible reasons for this?
9. Page 20. "Cloud parameterization" should be replaced with "cumulus parameterization" because, at the resolution used in the current study, "cloud parameterization" is generally understood to refer to the microphysics scheme. Please ensure consistent terminology throughout the paper.
10. Page 20. Li et al. (2024) demonstrated that topography has a significant impact on precipitation simulation. Given that the current study area includes extensive complex terrain, I suggest adding a discussion on how topography and topographic parameterization affect precipitation simulation, which may be helpful for the simulation in future studies.
Li, J., and Coauthors, 2024: The influence of complex terrain on cloud and precipitation on the foot and slope of the southeastern Tibetan Plateau. Clim. Dyn., 62, 3143-3163.
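For reference regarding comment 4 above: if, as is conventional, A denotes hits, B false alarms, and C misses in the 2 × 2 contingency table (this labeling is an assumption and may differ from the manuscript's notation), the frequency bias index behaves as follows.

```latex
\mathrm{FBI} = \frac{A + B}{A + C},
\qquad A = 0,\ B = C \;\Longrightarrow\; \mathrm{FBI} = \frac{0 + B}{0 + C} = 1 .
```

That is, a forecast with no hits at all can still have FBI = 1, which is why a perfect frequency bias alone does not imply an accurate prediction.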
Author Response
file attached
Author Response File: Author Response.pdf
Round 2
Reviewer 2 Report
Comments and Suggestions for Authors
Dear author,
Thank you for your corrections.
I believe that the article is now more meteorologically rigorous, although the phenomenological discussion (in relation to the precipitation event itself) is not very well grounded in the text. But as the work isn't for a meteorological journal, that's fine.
The WRF spin-up is something the authors should investigate a little more. There are several studies on this, and the WRF manual recommends between 6 and 12 hours. One experiment or a few case studies like in the article you mention are not enough to define something so important (Liu, Y., Zhuo, L., & Han, D. (2023)).
I say this because 24 hours is a sufficient period for the model to create its ‘cloud environment’, so to speak, when precipitation data are not assimilated. However, it is a very long period for mesoscale forecasts, where the distance from the initial conditions greatly increases the model's uncertainty. I think the authors should look into this a little more, even bearing in mind that a shorter spin-up could improve the results of the study. In any case, this point needs to be discussed and better justified (not with a case-study paper, but with articles by the authors of the model, the manual, etc., which discuss the physics of the problem), since a 24-hour spin-up for a forecast of a mesoscale phenomenon is not applied in most articles.
Author Response
We appreciate and thank the reviewers for their work, which has helped to improve the manuscript; we have therefore added this acknowledgment in the corresponding section. We have answered the reviewers' questions and suggestions, and our answers are indicated in blue.
Reviewer 2 (R2)
R2.1: Dear author,
Thank you for your corrections.
I believe that the article is now more meteorologically rigorous, although the phenomenological discussion (in relation to the precipitation event itself) is not very well grounded in the text. But as the work isn't for a meteorological journal, that's fine.
The WRF spin-up is something the authors should investigate a little more. There are several studies on this, and the WRF manual recommends between 6 and 12 hours. One experiment or a few case studies like in the article you mention are not enough to define something so important (Liu, Y., Zhuo, L., & Han, D. (2023)).
I say this because 24 hours is a sufficient period for the model to create its ‘cloud environment’, so to speak, when precipitation data are not assimilated. However, it is a very long period for mesoscale forecasts, where the distance from the initial conditions greatly increases the model's uncertainty. I think the authors should look into this a little more, even bearing in mind that a shorter spin-up could improve the results of the study. In any case, this point needs to be discussed and better justified (not with a case-study paper, but with articles by the authors of the model, the manual, etc., which discuss the physics of the problem), since a 24-hour spin-up for a forecast of a mesoscale phenomenon is not applied in most articles.
A2.1: We agree with the reviewer that 24 h is not the most common spin-up period used. Usually, for short-term forecasts, authors use 6 or 12 hours as the spin-up period. In this case we wanted to focus on the sensitivity to different PBL and CU parameterizations, considering only one fixed spin-up period.
We very much appreciate the comment and are considering this for future work, but it is not possible to analyze in depth, within two days (the period given to address these minor revisions), the effect of using another spin-up period on our results. In any case, we consider that the results and conclusions are representative, and in the future we will analyze the specific effect of using other spin-up periods.
In any case, following the suggestion, we have included the following discussion in the paper:
“The event has been simulated from March 12 to 15, 2017, taking March 12, i.e., the first 24 hours, as the model's spin-up, and therefore it was not included in the analysis. We have used 24-hours spin up time in our simulations to consider the importance of adequately stabilized atmospheric fields. In any case, we would like to point out that deeper research in this sense should be done in the future. Usually, for short-term forecast, 6-hours [31,32] or 12-hours [33] periods are typically used as spin-up. But there are other studies that conclude that, depending on the meteorological situation, up to 36-hours [34] or 48-hours can be the optimum spin-up time [35,36]. In this sense we have selected 24-hours as a balance between the widely used length and the longest periods, which has also been used in other studies [37,36].”
We have also included the following in the conclusions:
“Moreover, authors emphasize that the findings of this study pertain to the spin-up time and model initialization used, considering the sensitivity to the spin-up conditions as future work. The sensitivity of rain forecast depends not only on the PBL or cumulus schemes, but also other parameters like initial and boundary conditions or domain configuration, among others.”
Please see lines from 219 to 229 and from 755 to 759 in the revised manuscript
References:
- Givati, A., Lynn, B., Liu, Y., & Rimmer, A. (2012). Using the WRF Model in an Operational Streamflow Forecast System for the Jordan River. J. Appl. Meteorol. Clim., 51, 285–299. https://doi.org/10.1175/JAMC-D-11-082.1
- Tian, J., Liu, J., Yan, D., Li, C., & Yu, F. (2017). Numerical rainfall simulation with different spatial and temporal evenness by using a WRF multiphysics ensemble. Nat. Hazards Earth Syst. Sci., 17, 563–579. https://doi.org/10.5194/nhess-17-563-2017
- Hu, X. M., Nielsen-Gammon, J. W., & Zhang, F. (2010). Evaluation of three planetary boundary layer schemes in the WRF model. J. Appl. Meteorol. Clim., 49, 1831–1844. https://doi.org/10.1175/2010JAMC2432.1
- Deng, C., Chi, Y., Huang, Y., Jiang, C., Su, L., Lin, H., Jiang, L., Guan, X., & Gao, L. (2023). Sensitivity of WRF multiple parameterization schemes to extreme precipitation event over the Poyang Lake Basin of China. Front. Environ. Sci., 10, 1102864. https://doi.org/10.3389/fenvs.2022.1102864
- Hiraga, Y., & Tahara, R. (2024). Sensitivity of localized heavy rainfall in Northern Japan to WRF physics parameterization schemes. Atmospheric Research, 314, 108802. https://doi.org/10.1016/j.atmosres.2024.107802
- Liu, Y., Zhuo, L., & Han, D. (2023). Developing spin-up time framework for WRF extreme precipitation simulations. Journal of Hydrology, 620, 129443. https://doi.org/10.1016/j.jhydrol.2023.129443
- Wang, S., Yu, E., & Wang, H. (2012). A simulation study of a heavy rainfall process over the Yangtze River valley using the two-way nesting approach. Adv. Atmos. Sci., 29, 731–743. https://doi.org/10.1007/s00376-012-1176-y
Author Response File: Author Response.pdf
Reviewer 3 Report
Comments and Suggestions for Authors
The authors have properly revised the manuscript.
Author Response
We appreciate and thank the reviewers for their work, which has helped to improve the manuscript; we have therefore added this acknowledgment in the corresponding section. We have answered the reviewers' questions and suggestions, and our answers are indicated in blue.
Reviewer 3 (R3)
R3.1: The authors have properly revised the manuscript.
A3.1: Thank you very much for your previous comments and suggestions; they have allowed us to improve the paper.