Article
Peer-Review Record

An Assessment of Satellite Radiance Data Assimilation in RMAPS

Remote Sens. 2019, 11(1), 54; https://doi.org/10.3390/rs11010054
by Yanhui Xie 1, Shuiyong Fan 1, Min Chen 1,*, Jiancheng Shi 2,3, Jiqin Zhong 1 and Xinyu Zhang 1
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Reviewer 4: Anonymous
Submission received: 30 October 2018 / Revised: 19 December 2018 / Accepted: 27 December 2018 / Published: 29 December 2018

Round 1

Reviewer 1 Report

Review of the paper by Xie et al., "An Assessment of Satellite Radiance Data Assimilation in RMAPS"

 

I have very little to criticize in this paper, as this type of work is essential if weather forecasts are to be improved given that there is little prospect of improving the land-based observation network. The paper is very well written and concise. However, I have one or two major points to make:

The first may be due to my ignorance. I never quite understood where these additional radiances were being introduced. Satellite measurements can provide a crude vertical sounding. Were these radiances used to provide a sounding that was then meshed with the directly measured ones? Were the radiances used to correct model estimates of long wave radiation? Exactly where were these radiances introduced, into what formulations? If this fuzziness can be cleared up, this would be a valuable paper.

Some other comments:

Abstract, line 19: the word 'shown' is used twice in the same sentence.

Figure 2: it is important to note in the caption what the small inset area refers to. In my copy I had trouble seeing the background details.

Figure 6. I assume that the middle panel is made from data with the additional radiances and the left-hand panel without. Please label these panels more clearly. I was not sure what the analysis panel represents.

Figure 7. Again I had trouble interpreting these figures. I could not figure out which was which. In each one there appeared to be two pairs of vertical profiles, one dashed, one solid, and I could not tell one from another. Are the profiles on the right side of each figure also bias and RMSE? I could not make sense of this figure.

Figure 8. Same problem as for the other figures: I could not tell which was which. Both panels consist of pairs of sounding RMSE estimates, but again I could not tell one from another. It might have been better for the authors to have plotted the profiles as differences from each other (positive would be an improvement and negative a degradation when the radiances are added).
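To make the suggested difference plot concrete, here is a minimal sketch (hypothetical RMSE values and variable names, not taken from the paper), in which positive values indicate an improvement when radiances are added:

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical temperature RMSE profiles (K) at a few pressure levels (hPa).
pressure = np.array([1000, 850, 700, 500, 300, 200])
rmse_ctrl = np.array([1.8, 1.6, 1.4, 1.3, 1.5, 1.7])    # run without radiances
rmse_da_rad = np.array([1.7, 1.5, 1.4, 1.2, 1.4, 1.7])  # run with radiances assimilated

# Positive = improvement when radiances are added, negative = degradation.
diff = rmse_ctrl - rmse_da_rad

plt.figure(figsize=(4, 6))
plt.plot(diff, pressure, marker="o")
plt.axvline(0.0, color="grey", linestyle="--")   # zero line separates gain from loss
plt.gca().invert_yaxis()                         # pressure decreases with height
plt.xlabel("RMSE(CTRL) - RMSE(DA_RAD)  [K]")
plt.ylabel("Pressure [hPa]")
plt.title("Temperature RMSE difference profile")
plt.tight_layout()
plt.show()
```

A single difference curve per variable would make even small gains or losses immediately visible, which is the point of the reviewer's suggestion.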

Figures 9 and 10. Same criticism: how can one tell the two types of soundings apart when they are so close to each other that they look almost the same? In Figure 10 the bars are shaded so similarly that I cannot tell one from another.

Similar comments apply to Figures 11-13. Not only could I not distinguish one profile from another, but I kept asking myself whether these seemingly minuscule differences are significant. If not, is it worth adding the radiances? The authors should address this point.

In summary, I understand why some reviewers might have come away with a negative view of this paper; bad figures will irritate any reviewer. If these figures can be fixed and the issues I mentioned above are addressed, this should be a worthwhile paper. Even if the authors conclude that the addition of radiances does not make a statistically significant improvement in forecasts, this too would be a useful thing to know.

 


Author Response

Thanks for the comments. All responses have been attached in a Word file.

Author Response File: Author Response.docx

Reviewer 2 Report

Review of the manuscript entitled "An Assessment of Satellite Radiance Data Assimilation in the RMAPS" by Xie et al.


The results presented in this manuscript are important but not very original. However, I would say it is worth publishing in the journal, though not in its current form. My suggestion is a complete rewrite of the Abstract, Introduction, and Conclusion. After all grammatical mistakes are taken care of, I would suggest that the paper be read by a native English speaker; this will do tremendous justice to the paper and make it more readable.

Page 2 (Line 71): Please explain the "model information" and describe what kind of satellite retrieval method is used for this study.

Page 3 (Lines 115-117): Please provide a reference, if one exists, for the statement "It has been developed by the … National Center for Atmospheric Research (NCAR)".

Page 7 (Lines 208-211): Provide some details on how these quality controls were performed, especially how the bias correction was implemented for the AMSU-A and MHS sensors. This is important for understanding how it affects the mean biases (Lines 225-226).


Author Response

Thanks for the comments. All responses have been attached in a Word file.

Author Response File: Author Response.docx

Reviewer 3 Report

In this paper, the authors present a method based on satellite radiance data assimilation to reduce the bias in the estimation of meteorological parameters (temperature, wind, and humidity) in the troposphere and in the rainfall forecast at the analysis time, in comparison with the more conventional assimilated observations. The results presented by the authors show a slight improvement with respect to the traditional assimilation. The paper is well organized, and the presentation is clear and reasonably concise.

 

The proposed method appears to have potential for meteorological forecasting and deserves attention for publication in Remote Sensing. However, some minor aspects of this approach require further explanation, as detailed below:

Minor Comments:

1- The sentence "Results showed that…data-rich area." in lines 18-19 of the Abstract is not clear and should be rewritten.

2- In lines 222-226, it is claimed that the bias correction reduces the mean biases of the AMSU-A and MHS radiances. It is true that the mean bias of the AMSU-A radiances after bias correction is reduced compared to before correction in Figure 4c. However, this improvement is not clear for the MHS radiances (what is more, judging from Figure 4d, the bias seems to increase after correction).

3- The same comment applies to the paragraph between lines 231-241. Observing Figure 5d, it is not clear that the mean bias after correction is reduced.


Author Response

Thanks for the comments. All responses have been attached in a Word file.

Author Response File: Author Response.docx

Reviewer 4 Report

The paper investigates the application of satellite radiance data assimilation in the regional Numerical Weather Prediction (NWP) model RMAPS. The input radiance data are provided by the satellite sensors Advanced Microwave Sounding Unit-A (AMSU-A) and Microwave Humidity Sounder (MHS). The RMAPS model outcomes with and without the satellite input data were assessed against radiosonde observations, surface atmospheric observations, and precipitation data.

While the first part (Introduction and Materials/Method) is clear and easy to follow, the main part concerning the Results is unclear, the figures are of low quality, and the results do not suggest a significant improvement. 

Introduction:

- In the final part describing the aim of the work, the investigated data period, the area, and the comparison data sources/instruments should be reported.

Section 2: Materials and methods:

- Section 2.2: a table reporting the channel number, central frequency, polarization, and primary observable should be included. In the text/figures the authors refer to channel numbers, but it is important to know the central frequency and, therefore, the impact of the atmosphere for a specific band.

- Section 2.2, line 158: the spatial resolution should be higher (16 km?) than that of AMSU-A.

- Figure 2: the text and numbering of each panel are not readable. Please enlarge them, optimizing the text. The date and hour could be added inside each panel.

Section 3: Retrospective Experiments

- Figure 3: what is GTS? Does it refer to GPS observations?

- Line 210: What is the meaning of "data thinning with 120 km × 120 km"?
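For context, a minimal sketch of what grid-box thinning typically means (one observation kept per 120 km × 120 km box); the variable names are hypothetical and this is not necessarily the exact implementation used in the manuscript:

```python
import numpy as np

def thin_observations(x_km, y_km, values, box_km=120.0):
    """Keep one observation per box_km x box_km grid box (here: the first one found)."""
    kept = {}
    for xi, yi, vi in zip(x_km, y_km, values):
        key = (int(xi // box_km), int(yi // box_km))  # index of the grid box
        if key not in kept:                           # first observation in this box wins
            kept[key] = (xi, yi, vi)
    return list(kept.values())

# Example: random observation locations on a 600 km x 600 km domain.
rng = np.random.default_rng(0)
x = rng.uniform(0, 600, 1000)
y = rng.uniform(0, 600, 1000)
tb = rng.normal(250, 5, 1000)            # hypothetical brightness temperatures (K)
print(len(thin_observations(x, y, tb)))  # at most 25 boxes -> at most 25 obs kept
```

Thinning of this kind reduces spatially correlated observation errors and the computational cost of the assimilation.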

Section 4: Results and discussion

- Section 4.2.1: the explanation in this sub-section is not clear. "… by comparing the errors of background (OMB) and analysis (OMA) against observations". What are the background, the analysis, and the observations? What is the meaning of "OM" in the acronyms?

The authors should rewrite this section in a more comprehensible way, explaining the processing performed, the data employed, the motivation for the bias correction and its implementation, etc. In the present form some statements are repeated and there does not seem to be a logical flow.
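For reference, in variational data assimilation OMB and OMA conventionally denote the "observation minus background" and "observation minus analysis" departures (a standard definition given here for the reader's convenience, not taken from the manuscript):

```latex
\mathrm{OMB} = y^{o} - H(x^{b}), \qquad \mathrm{OMA} = y^{o} - H(x^{a})
```

where \(y^{o}\) is the observation vector, \(x^{b}\) the background (short-range forecast), \(x^{a}\) the analysis, and \(H\) the observation operator, which for radiances is a radiative transfer model mapping the model state to brightness temperatures.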

- Figure 4: for better readability, enlarge the text and numbers. Also, putting the date and hour in the same string is not a good solution for quick understanding; it would be better to separate them in some way (also in subsequent figures). In addition, the slanted x-axis labels do not allow a precise reference to the x-axis ticks. Use "Mean bias" instead of "Mean" in the central panels.

- Figure 4, panels showing the number of observations: the three symbols are not distinguishable, and the meaning of the blue line is not clear.

- The authors state: "We can see that radiance observations assimilated in the RMAPS are mainly from channel 5, 6, 7 and 8 of the AMSU-A, and channel 3, 4, and 5 of the MHS". What is the reason? Is it a choice of the authors, or are there physical reasons?

- Figure 5: the figure reports values for only a few channels, not for all channels. For instance, in the number of observations, no information is reported for channels other than 5, 6, 7, and 8 of AMSU-A and channels 3, 4, and 5 of MHS. Each satellite pass should have the same number of observations for each channel.

- Section 4.2.2: "We first calculated the average RMSE and bias between the fields of the RMAPS and the conventional observations". Please explain what "the fields" and "the conventional observations" are.
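For completeness, the verification statistics referred to here are conventionally defined as (standard definitions, not taken from the manuscript):

```latex
\mathrm{bias} = \frac{1}{N}\sum_{i=1}^{N}\bigl(f_{i} - o_{i}\bigr), \qquad
\mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\bigl(f_{i} - o_{i}\bigr)^{2}}
```

where \(f_{i}\) is the forecast (or analysis) value interpolated to the observation location, \(o_{i}\) the observed value, and \(N\) the number of matched pairs.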

- Figures 7 and 8 and related results: the RMSE profiles suggest that there is no difference between CTRL and DA_RAD. What are the units of the x-axes in both figures? The improvement in the RMSE is negligible, considering also the x-axis range. The figure text should be optimized and enlarged.

Are the comparisons performed against radiosonde profiles? How many radiosonde sites were considered? Where are they located?

The humidity values could, for instance, be evaluated in terms of relative humidity (%). In any case, the bias and RMSE appear unchanged between CTRL and DA_RAD.

- Figure 9: not only do humidity and temperature not improve, but neither do the wind values, since the RMSE and bias are essentially the same for CTRL and DA_RAD. The y scale suggests that the difference is negligible in m/s. Are the comparisons performed over all the red points in Figure 1?

- Figure 10: the CSI values are low, indicating poor performance of RMAPS. Also, the results show that the CTRL and DA_RAD performance is similar and poor above 10 mm.

- Figure 12: POD and FAR show poor performance of RMAPS. The improvement brought by DA_RAD is very limited.
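For reference, the categorical precipitation scores mentioned above are conventionally defined from the 2×2 contingency table of hits (H), misses (M), and false alarms (F) (standard definitions, not taken from the manuscript):

```latex
\mathrm{CSI} = \frac{H}{H + M + F}, \qquad
\mathrm{POD} = \frac{H}{H + M}, \qquad
\mathrm{FAR} = \frac{F}{H + F}
```

CSI and POD are perfect at 1 and worst at 0, while FAR is perfect at 0; a CSI below about 0.1 therefore indicates that only a small fraction of forecast or observed events are correctly captured.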

-Discussion

- "…improvement of 33.3%": since the temperature bias values are close to 0 °C, this improvement is not significant for the estimation of the temperature profile dynamics. In fact, the RMSE is essentially the same for CTRL and DA_RAD.

- The Discussion section is not the right place to report the results of Figures 13 and 14. These results (of which the RMSE is the most relevant) show the same limitation discussed above.

 


Author Response

Thanks for the comments. All responses have been attached in a Word file.

Author Response File: Author Response.pdf

Round 2

Reviewer 4 Report

The authors have modified the text, improving its clarity and readability and eliminating some flaws highlighted in the previous version. Nevertheless, some doubts persist about the relevance of the reported results, which the authors have not addressed from a quantitative point of view, as detailed in the comments below.

Also, the use of percentages to describe the improvement is misleading: the use of absolute values, as pointed out below, would provide a fairer insight into the performance of the proposed method and answer the doubts not resolved in the current revision.
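As a purely illustrative example of this point (hypothetical numbers, not taken from the paper), a bias reduced from 0.30 K to 0.10 K corresponds to

```latex
\frac{0.30\,\mathrm{K} - 0.10\,\mathrm{K}}{0.30\,\mathrm{K}} \approx 67\%
```

a seemingly large relative improvement that amounts to only 0.20 K in absolute terms, which may be below the sensitivity of the instruments used for verification.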

 

Section 2: Materials and methods:

- Lines 155-157: the text formatting is not correct.

 

Section 3: Assimilation Experiments

-Line 238: “Scattering index and cloud liquid water were also used to eliminate any observations that were contaminated”: how are these parameters computed?

- Line 239: "a data thinning of 120 km was performed….": the satellite images are two-dimensional; what does "120 km" refer to, considering the satellite swath?

 

Section 4: Results and discussion

- Lines 254-257: to better understand the application of Eq. 1, the quantities reported in this part should also be explicitly identified in terms of the variables of Equation 1 (the background x_b, the observations y_o, and so on).
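For context, if Equation 1 is the standard 3D-Var cost function commonly used in this kind of system (an assumption; the exact form in the manuscript may differ), the quantities the reviewer refers to are:

```latex
J(x) = \frac{1}{2}\,(x - x^{b})^{\mathrm{T}} \mathbf{B}^{-1} (x - x^{b})
     + \frac{1}{2}\,\bigl(y^{o} - H(x)\bigr)^{\mathrm{T}} \mathbf{R}^{-1} \bigl(y^{o} - H(x)\bigr)
```

where \(x^{b}\) is the background state, \(y^{o}\) the observation vector, \(H\) the observation operator, and \(\mathbf{B}\) and \(\mathbf{R}\) the background- and observation-error covariance matrices.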

- Line 275: from Figure 6, the channels above the vertical limits of RMAPS are 9-14.

- In the figure captions (e.g., Figures 8, 9, and 10), the time period (yy/mm/dd) should be indicated.

- Results from Figures 8, 9, and 10: as already pointed out in my first-round comments, the improvement of DA_RAD with respect to CTRL is negligible. In the text, the authors do not discuss the RMSE improvement from a quantitative standpoint: for instance, the RMSE improvement in Figure 8 appears to be of the order of 0.05 K or less. From a practical point of view, considering the application fields of such results, what is the impact of such a value (which is often below the instrument sensitivity)? In Figure 9 this value is practically zero, and for wind (Figure 10) it appears to be below 0.02 m/s. Therefore, the authors must take these RMSE values into account and discuss their impact in actual applications.

- Line 333 (Figure 11): "For the forecasts started at 1200 UTC, an obvious improvement (approximately 70%) in DA_RAD can be found": what the authors report as a 70% improvement is a bias reduction of about 0.2 K, while the RMSE improvement is near zero (the maximum values in a few cases appear to be of the order of 0.05 K). The authors must discuss such values from the application standpoint, as highlighted in the previous point. The same applies to the wind in Figure 12.

- Lines 349-360 (Figure 13): in the text, the authors talk about "improvement" in a generic way, without quantifying how these small improvements translate into an improvement of the precipitation prediction (what is the improvement in mm?). Also, the CSI values are low, indicating poor performance of RMAPS. The same holds for Figure 14: the small CSI improvement occurs only for values below 0.1; are predictions with such low CSI values useful in practice?

 

-Discussion

- Although the authors point out some uncertainties and problems in the verification of the proposed method, I still have to base my judgment on the reported results. Therefore, the authors should discuss the RMSE values not in a generic way but specifically, as requested above, addressing the impact of such "improved" values in the relevant applications.


Author Response

Thanks for the comments. All responses have been attached in a Word document.

Author Response File: Author Response.docx
