Article
Peer-Review Record

Development and Evaluation of a WRF-Based Mesoscale Numerical Weather Prediction System in Northwestern China

Atmosphere 2019, 10(6), 344; https://doi.org/10.3390/atmos10060344
by Tiejun Zhang 1,2, Yaohui Li 1,2, Haixia Duan 1,2,*, Yuanpu Liu 1,2, Dingwen Zeng 1,2, Cailing Zhao 1,2, Chongshui Gong 1,2, Ganlin Zhou 1,2, Linlin Song 1,2 and Pengcheng Yan 1,2
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Submission received: 28 May 2019 / Revised: 8 June 2019 / Accepted: 20 June 2019 / Published: 25 June 2019
(This article belongs to the Section Meteorology)

Round 1

Reviewer 1 Report

The manuscript is definitely improved and I am happy with the authors' answers to my comments. However, I would suggest that you read the text carefully before publishing it.


For example:


“2). HIGH-ALTITUDE ELEMENTS

The NW-MNPS model prediction of high-altitude elements with vertical height is given in Fig. 18. The year-round average prediction shows that the model prediction of air temperature is low. The 24 h RMSEs are generally within 2K at high altitudes, except at the land surface, and those of the 48”


What do you mean by saying: “the model prediction of air temperature is low”?

Is this statement an indication of the low accuracy of the prediction? It could also be interpreted as an indication that the model-predicted temperatures are low.


Please verify this and other statements related to the overall assessment of the forecast.


These fine wording issues are important because the reader will keep the overall synthesis of the quality of the forecast as the lasting memory of the system developed by the Authors.



Author Response

Thanks to the reviewer's comments, we had the opportunity to further improve this manuscript.

As the reviewer surmises, what we intended to express is that "the model predicted temperatures are low". We have revised this sentence and adjusted similar descriptions elsewhere in the manuscript. Thank you very much.


Author Response File: Author Response.docx

Reviewer 2 Report

Review of Development and Evaluation of a WRF-based Mesoscale Numerical Weather Prediction System in Northwestern China

 

The authors' revisions to the manuscript are largely satisfactory in addressing the review comments. Still, a little more care should be taken, as a few issues remain from not carefully checking the revised text. Some of these are noted in the specific comments.

Specific comments

In the abstract TS and ETS are threat score and equitable threat score. At L160-168, this is described as the threshold success and equal threshold success. The equations appear to show the (equitable) threat score. Please use consistent names. Also, ND is not defined in the equations.
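For reference, the threat score and equitable threat score are usually defined from a 2x2 contingency table of hits, misses, and false alarms over N forecasts. The sketch below shows the textbook definitions (with hypothetical counts), not necessarily the authors' exact formulation:

```python
def threat_score(hits, misses, false_alarms):
    """Threat score (critical success index): hits / (hits + misses + false alarms)."""
    return hits / (hits + misses + false_alarms)

def equitable_threat_score(hits, misses, false_alarms, n):
    """ETS discounts the hits expected by random chance; n is the total number of forecasts."""
    hits_random = (hits + misses) * (hits + false_alarms) / n
    return (hits - hits_random) / (hits + misses + false_alarms - hits_random)

# Illustrative counts: 40 hits, 10 misses, 10 false alarms, 200 forecasts.
ts = threat_score(40, 10, 10)                    # ≈ 0.667
ets = equitable_threat_score(40, 10, 10, 200)    # ≈ 0.579
```

A higher score is better for both; ETS ranges from -1/3 to 1, with 0 meaning no skill over chance, which is why the "equitable" naming matters.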

L70. "poor data quality" Is this poor observation data quality or poor land use data quality or some other kind of data?

L111. This paragraph needs a starting sentence to indicate that it is a description of the NWP system as per Figure 1.2. The paragraph could be rearranged to put related information together.

L265. You defined rain categories in a previous section, that does not include 'torrential'. It would be good to use those categories instead, or to define torrential in the context of the categories you are using.

L339. The start of this paragraph does not make sense. I presume it's because a section was moved up to where the statistics are defined. Please rewrite this paragraph so it makes sense.

L360. This paragraph does not seem to relate to the calculation of the B matrix, and should therefore be moved to under another heading.

L499 "January and July" I think you mean "July and December".


Author Response

We are very grateful to the reviewer for the valuable comments, which are very important for the improvement of our article. Thanks again. We have made the specific changes and hope that the results will meet the requirements of this journal.

Author Response File: Author Response.docx

This manuscript is a resubmission of an earlier submission. The following is a list of the peer review reports and author responses from that submission.


Round 1

Reviewer 1 Report

This is an example of a well-motivated study with a very significant potential outcome for the prediction of mesoscale weather systems. Since the forecasts discussed in the study are based on a well-established model, I do not see the need to change the description of the basic scientific methodology. However, it will be useful to justify a specific selection of the parameterization schemes. For example, does the convective scheme of the finest mesh fall into the "gray zone" of physical parameterizations in meteorological models?


The main improvement of the presentation is needed in the section presenting the evaluation of the forecasting system. I would like to suggest plotting the model errors as horizontal fields. It will also be useful to present traditional synoptic maps when describing a situation used in the forecast, with a clear indication of how the WRF model performed on the nested grid sequence. Was the forecast superior to the one obtained with a large-scale model?


Last but not least, I would like to see a brief discussion about the main mechanisms of interaction between the weather systems of northwestern China and those of mainland China. This problem is mentioned only briefly in the introduction.


Author Response

Dear Adriana,
  The attachment is the revised manuscript. I used EndNote with the MDPI format to update it.


Chongshui Gong

20190517



Author Response File: Author Response.docx

Reviewer 2 Report

Review of "Development and Evaluation of a WRF-based Mesoscale Numerical Weather Prediction System in Northwestern China". This paper describes the configuration of a three-level nested NWP system with two-way nesting for the larger domains, using WRF. Several different configurations were tried with regard to the number of model levels, the types of observations assimilated, and the surface type information. The results were verified against observations using a selection of statistical measures.

Overall, this paper has attempted to give a comprehensive documentation of this NWP system, and to evaluate various aspects of it. However, there are some areas where more information needs to be provided. The description of the nesting, including the global parent model, needs to be clarified. The choices of statistics aren't justified, and some of the statistics seem wrongly described. The authors may have assumed the reader has some familiarity with WRF, and so not provided enough details in the description.

Statistics should be defined when first used. However, I would suggest defining all the statistics used for validation in a separate section before section 3. I would suggest that each score or statistic be given with its equation. All symbols and abbreviations in the equations should be defined. The purpose of the statistic should be clarified. The advantages and disadvantages of each could also be considered, as well as the reasons for the choices. For example, the choice to use ME or MAE rather than RMSE.

The authors need to be clearer about which domain the results are presented for, in both the text and the figures. It is easy to get confused.

 

Specific comments.

1.      A geographical figure showing topography and/or vegetation would be helpful to accompany the first paragraph of the introduction, so that the audience can see the geographical features of the region.

2.      Table 1. The Domain values could be d01, d02, d03 instead, to be consistent with elsewhere in the paper.

3.      Figure 1.1 The inset is not explained in the left panel of the figure. The station names mentioned in which article? Is there meant to be a citation?

4.      L66-67. Do you mean that it is difficult to correctly model the vegetation cover? Are you saying this is because it has complicated continental characteristics? The second sentence in this paragraph does not logically follow from the first sentence otherwise.

5.      L73-75. The first and second sentences seem to be saying the same thing. This whole paragraph could be rewritten more clearly to make your point.

6.      Figure 1.2. Some acronyms in the figure are not defined. What is REAL and NDOWN? Also, it is not clear what the WPS does. Explain this briefly at line 96. Is there significance in the choice of colours used for lines and arrows? The initialization and data assimilation processes for the domains aren't well described. I don't think I have understood the description or this figure enough.

7.      L122. By older, do you mean outdated?

8.      L130-131. Can you give more information about the profile data and satellite data? What kinds? What frequency? What is your assimilation window?

9.      Table 2. Are the same parameterizations used for all domains regardless of spatial resolution?

10.  Figure 2. Inferred from text that the USGS data has 24 types, but the image shows only 17. I assume you converted the 24 types to analogous 17 types for comparison in the figure. This should be stated explicitly if so. Otherwise, provide the correct legend for both panels.

11.  L172-174. This is confusing. I think you are saying that prediction is better for D01, D02 and D03, but the separation into two sentences makes it seem as if the results differed for D03. Although in Figure 1.3 it looks like D03 might be worse with the new land use, so maybe you mean "the model prediction ability with new land-use data decreases", i.e. D03 is worse?

12.  How did you calculate the temperature prediction error? Against observations? What were the observations, and do they cover the domains across all land-use types? Given the variety of land surface types, a spatial map of errors could also be interesting.

13.  Inconsistent use of "landuse" versus "land-use". Hyphen should be used.

14.  Figure 4. What forecast lengths were used for this? Why is there missing data for July for USGS? Can you confirm that if the data were missing, then the NEW data were also excluded when calculating the means in Figure 3?

15.  Section 3.d Wind speed validation. What wind observations were used for the validation?

16.  L214-215. This sentence doesn't make sense.

17.  Why is Table 4 in section 3.d and not section 3.c? Why are some values in bold in Table 4?

18.  L237 "DO2 and DO3 are one-way nested" is confusing given the preceding part of the sentence. Maybe rephrase as "D03 is one-way nested in D02".

19.  L247 MAE should be defined the first time it is used, in section 3. The symbols in the equation aren't defined. Why is it model minus mean of observation (line over psi_io)? I have not seen this in a definition of MAE before, and it doesn't make sense to me.

20.  Figure 8. Explain what VGRD, UGRD, TMP, SPFH and RH are in the caption.

21.  L272, The RMSE is not the sample standard deviation. The std. dev. is the square root of the variance. RMSE is an alternative to MAE.

22.  Relative humidity and absolute or specific humidity are related variables. Is there value in presenting results for both?

23.  L305, define TS. Is high TS better than low TS?

24.  Section 4.b.2. Figure 10. What observations are used for the upper level validation?

25.  Figure 10. Explain what each parameter is in the caption.

26.  L316. ETS is the Equitable threat score (not equal). The symbols in the ETS equations aren't defined. "ETS score" is redundant as S stands for score.

27.  Figure 11. Caption does not make sense.

28.  Section 4.d.2. What precipitation observations were used in the validation?

29.  L357-358. I'm not sure that if the model predicts more precipitation than observed, the model is unbiased. It sounds like the model is biased.

30.  L370-372. I presume the "errors in A+S, AWS, and SOUNDING" refer to differences between the model and the observations. This means that the data assimilation moved the model values closer to the observations – which is what it is supposed to do. This is good as it shows the DA is doing the correct thing. However, you have not shown that it has a positive impact on the model initial values unless you calculate the errors with observations independent from those you assimilated, or if you can show that the forecast has improved.

31.  Assimilation experiments. Was any satellite data assimilated? Aircraft data?

32.  L373-374. Do you mean the diurnal variation in the bias in 2m temperature?

33.  L383 Wind speed of AWS forecast. Speed is missing.

34.  L396. You didn't use MET to do the validation against observations in the previous sections?

35.  Section 6.a.4. It would be useful to know how many of each type of precipitation event there was. I assume very few heavy events. What kind of precipitation observations were used?
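Several of the specific comments above (items 19, 21, 26, and 29) concern how verification statistics are defined. As a reference point, a minimal sketch of the standard definitions of MAE, RMSE, and the frequency bias follows; this is an illustration of the conventional formulas, not the formulation used in the manuscript:

```python
import math

def mae(forecasts, observations):
    """Mean absolute error: mean of |forecast - observation| per pair
    (each observation individually, not the mean of the observations)."""
    return sum(abs(f - o) for f, o in zip(forecasts, observations)) / len(forecasts)

def rmse(forecasts, observations):
    """Root-mean-square error; penalizes large errors more heavily than MAE.
    It is an alternative to MAE, not the sample standard deviation."""
    return math.sqrt(sum((f - o) ** 2 for f, o in zip(forecasts, observations)) / len(forecasts))

def frequency_bias(hits, misses, false_alarms):
    """Frequency bias: (hits + false alarms) / (hits + misses).
    A value above 1 means the event is over-forecast (e.g. too much rain predicted)."""
    return (hits + false_alarms) / (hits + misses)

# Illustrative values only.
f = [2.0, 3.0, 5.0]
o = [1.0, 3.0, 2.0]
print(mae(f, o))    # 4/3 ≈ 1.333
print(rmse(f, o))   # sqrt(10/3) ≈ 1.826
```

In this convention, a model that predicts precipitation more often than it is observed has a frequency bias greater than 1, i.e. it is biased, which is the point raised in item 29.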

Author Response

Dear Editor,

Due to our negligence, there was a long delay in revising the article, for which we apologize to the editor.

Thanks for the reviewers' comments. We have made detailed changes to the manuscript. We hope this manuscript can now meet the requirements of Atmosphere.


Best wishes,


Tiejun Zhang, Haixia Duan

20190528

Author Response File: Author Response.docx
