Article
Peer-Review Record

Assessment of the Stability of Passive Microwave Brightness Temperatures for NASA Team Sea Ice Concentration Retrievals

Remote Sens. 2020, 12(14), 2197; https://doi.org/10.3390/rs12142197
by Walter N. Meier * and J. Scott Stewart
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Submission received: 10 June 2020 / Revised: 3 July 2020 / Accepted: 5 July 2020 / Published: 9 July 2020
(This article belongs to the Special Issue Polar Sea Ice: Detection, Monitoring and Modeling)

Round 1

Reviewer 1 Report

This study is crucial for sea ice remote sensing, especially for sea ice concentration. The analysis in this manuscript addresses how the community can deal with the uncertainties in sea ice concentration due to sensor changes. It is well written, readable, and logically organized. The study is very interesting and is important for assessing ice concentration products from passive microwave (PM) sensors. I think this information is also of importance for climate and weather forecasting groups. Errors in daily total sea ice extent would affect short-range weather forecasts, and, climatologically, even a small change in sea ice extent and its trend has a large impact on the atmospheric circulation. It would be better if the introduction discussed the error impact on weather or climate. Two minor comments are below:

1) Lines 123-124: A reference is needed for the use of 22V GHz rather than 37V GHz.
2) Line 138: Please define NRT.

Author Response

This study is crucial for sea ice remote sensing, especially for sea ice concentration. The analysis in this manuscript addresses how the community can deal with the uncertainties in sea ice concentration due to sensor changes. It is well written, readable, and logically organized. The study is very interesting and is important for assessing ice concentration products from passive microwave (PM) sensors. I think this information is also of importance for climate and weather forecasting groups. Errors in daily total sea ice extent would affect short-range weather forecasts, and, climatologically, even a small change in sea ice extent and its trend has a large impact on the atmospheric circulation. It would be better if the introduction discussed the error impact on weather or climate. Two minor comments are below:

1) Lines 123-124: A reference is needed for the use of 22V GHz rather than 37V GHz.
2) Line 138: Please define NRT.

 

Thank you for your helpful comments.

The reference was added for the 22V weather filter.

NRT is now defined as "near-real-time" at its first use.

Reviewer 2 Report

This paper analyzes the three SSMIS sensors on the DMSP F16, F17, and F18 platforms. The comparison of the TB histograms with the corresponding tie point values, and the daily variation in the TB histograms, are discussed. The results indicate that there is good consistency among the TBs from the different sensors, and that the differences in histogram peak values for open water and sea ice are mostly smaller than 1 K. As a result, the data provided by all the available sensors can be used with smaller uncertainty. The manuscript is well written and the results are good. Some comments and suggestions are below:

 

  1. Besides the tie point value for open water being slightly higher than the peak value of the histogram in Figure 3, the figure also shows that the tie point value for MYI is higher than the peak value. What is the main reason for defining the tie point value higher than the histogram peak? Please give more explanation.

 

  2. The tie point values for 2018 in the Arctic and the Antarctic are different, while the tie point values for 2009 and 2018 are the same. What is the main cause of the difference in tie point values between the two hemispheres? Please give more details.

 

  3. For the FY and MY ridge simulation results in Section 6, we can see that the co-pol is a main indicator for discriminating these two types. To validate the effectiveness of the proposed method, the results should be compared with some state-of-the-art simulation approaches, or at least the results using the general Walsh model without small slope removal should be included.

 

  4. In Table 4, the authors compare the differences in sea ice concentration retrieved from different sensors using the NT algorithm. For better understanding, I would suggest that the authors show the sea ice concentration maps and the corresponding difference maps (a few sample results would be enough) from the different sensors; then the effects of the different sensors on the sea ice concentration retrieval can be seen.

 

The topic addressed can be framed within the body of research devoted to providing further insights into sea ice concentration estimation. I agree that the work needed to collect a large amount of data from different sensors for intercomparison, and the current presentation of this manuscript is satisfactory for acceptance in Remote Sensing. I recommend minor revision.

Author Response

This paper analyzes the three SSMIS sensors on the DMSP F16, F17, and F18 platforms. The comparison of the TB histograms with the corresponding tie point values, and the daily variation in the TB histograms, are discussed. The results indicate that there is good consistency among the TBs from the different sensors, and that the differences in histogram peak values for open water and sea ice are mostly smaller than 1 K. As a result, the data provided by all the available sensors can be used with smaller uncertainty. The manuscript is well written and the results are good. Some comments and suggestions are below:

 

  1. Besides the tie point value for open water being slightly higher than the peak value of the histogram in Figure 3, the figure also shows that the tie point value for MYI is higher than the peak value. What is the main reason for defining the tie point value higher than the histogram peak? Please give more explanation.

 

This is a good observation. We assume you mean the FYI tie point value is higher than the peak. This likely reflects the fact that the Antarctic ice pack is more diffuse, so the peak of the distribution better corresponds to <100% ice. We have added a sentence to note this.
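(For illustration, the peak-versus-tie-point comparison can be made concrete with a short sketch: find the mode of a daily TB histogram and report its offset from a tie point. The function name, bin width, and all numeric values below are illustrative assumptions, not the paper's code or tie points.)

```python
import numpy as np

def histogram_peak_offset(tb_samples, tie_point, bin_width=1.0):
    """Locate the peak (mode) of a daily TB histogram and return its offset
    from a given tie point, both in kelvin."""
    bins = np.arange(np.floor(tb_samples.min()),
                     np.ceil(tb_samples.max()) + bin_width, bin_width)
    counts, edges = np.histogram(tb_samples, bins=bins)
    i = int(np.argmax(counts))
    peak_tb = 0.5 * (edges[i] + edges[i + 1])  # center of the most-populated bin
    return peak_tb - tie_point

# Synthetic first-year-ice 19H TBs and an illustrative tie point (values are made up)
rng = np.random.default_rng(0)
fyi_19h = rng.normal(loc=230.0, scale=4.0, size=10_000)
print(histogram_peak_offset(fyi_19h, tie_point=232.0))  # offset in K
```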

 

  2. The tie point values for 2018 in the Arctic and the Antarctic are different, while the tie point values for 2009 and 2018 are the same. What is the main cause of the difference in tie point values between the two hemispheres? Please give more details.

 

We're not sure what is meant here. There are different tie points for the Arctic and the Antarctic. We think perhaps the reviewer is confused by Figures 4 and 5, which show only the Arctic for 2009 or 2012 and 2018. The hemispheric difference is seen in Figure 2 for 19H in 2018, where Arctic tie points are used, and in Figure 3, where Antarctic tie points are used. In the supplement, the other channels are shown for 2018 for both hemispheres.

 

  3. For the FY and MY ridge simulation results in Section 6, we can see that the co-pol is a main indicator for discriminating these two types. To validate the effectiveness of the proposed method, the results should be compared with some state-of-the-art simulation approaches, or at least the results using the general Walsh model without small slope removal should be included.

 

Thank you for the interesting suggestion. It would be important to use such statistical methods to adjust the tie points for the TB distribution differences and to optimize the consistency of the sea ice products. State-of-the-art methods, such as Walsh, could be useful for this. In this paper, our focus is more descriptive: illustrating the differences between sensors, not deriving new tie points. We have added a sentence in Section 3.2 to note that statistical methods would be useful when integrating different sensors.
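(For illustration only, the simplest form of such a statistical adjustment would shift a reference tie point by the offset between two sensors' histogram peaks over a common overlap period. The sketch below is an assumption-laden example, not a method used in the paper.)

```python
def adjusted_tie_point(tie_point_ref: float, peak_ref: float, peak_new: float) -> float:
    """Shift a reference tie point by the difference between the histogram
    peaks of a new sensor and the reference sensor over a common period."""
    return tie_point_ref + (peak_new - peak_ref)

# Illustrative numbers only: a 0.4 K peak offset shifts the tie point by 0.4 K
print(adjusted_tie_point(tie_point_ref=232.0, peak_ref=230.1, peak_new=230.5))  # ~232.4
```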

 

  4. In Table 4, the authors compare the differences in sea ice concentration retrieved from different sensors using the NT algorithm. For better understanding, I would suggest that the authors show the sea ice concentration maps and the corresponding difference maps (a few sample results would be enough) from the different sensors; then the effects of the different sensors on the sea ice concentration retrieval can be seen.

 

Thank you for the suggestion. We initially considered putting in concentration difference fields but felt the manuscript already had more than enough figures, so we used a table to convey the results. However, we agree that spatial maps are useful, so we have added example spatial maps as two additional figures in the supplement and added a reference to them in the main text.
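(For illustration, a sensor-to-sensor concentration difference map of this kind can be produced with a few lines. The grid size, color limits, variable names, and synthetic fields below are assumptions for the sketch, not the code behind the supplement figures.)

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_sic_difference(sic_a, sic_b, title="Sensor A minus Sensor B (illustrative)"):
    """Show the gridded difference between two sea ice concentration fields
    (in percent; NaN can be used for land or missing data)."""
    plt.imshow(sic_a - sic_b, cmap="RdBu_r", vmin=-10, vmax=10)
    plt.colorbar(label="SIC difference (%)")
    plt.title(title)
    plt.show()

# Synthetic 448 x 304 fields standing in for daily NT concentrations from two sensors
rng = np.random.default_rng(1)
sic_f17 = np.clip(rng.normal(80.0, 15.0, (448, 304)), 0.0, 100.0)
sic_f18 = np.clip(sic_f17 + rng.normal(0.0, 1.0, sic_f17.shape), 0.0, 100.0)
plot_sic_difference(sic_f17, sic_f18)
```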

 

The topic addressed can be framed within the body of research devoted to providing further insights into sea ice concentration estimation. I agree that the work needed to collect a large amount of data from different sensors for intercomparison, and the current presentation of this manuscript is satisfactory for acceptance in Remote Sensing. I recommend minor revision.

Reviewer 3 Report

Review on “Assessment of the stability of passive microwave brightness temperatures for NASA Team sea ice concentration retrievals”, by Walter N. Meier and J. Scott Stewart, submitted for publication in Remote Sensing.

General comments:

The paper compares passive microwave data from SSMIS sensors onboard the U.S. Department of Defense Meteorological Satellite Program (DMSP) platforms F16, F17, and F18. The passive microwave data from these sensors are used to retrieve sea ice concentration. The goal of the study is to assess the impact of the sensors' variability over time and of orbital drift on the passive microwave observations and the resulting sea ice concentration retrievals. It is found that the variability from a single sensor is generally higher than the differences between sensors. The analysis of potential sensor drift effects is not conclusive, and there isn't an obvious systematic effect on the concentration estimates. It is suggested to consider using dynamic (time-varying) tie points in the NASA Team algorithm.

 

Specific comments:

  1. Figure 1: It would have been interesting to see the actual ice concentration indicated by coloring the dots. That way, it would be clearer to see how much spread there is around the OW tie-point.
  2. Figure 1: From that figure, it seems that tie points are defined in terms of gradient ratio and polarization ratio. However, after that the tie points are expressed in terms of brightness temperatures, e.g. in the figures with histograms. Could you explain how you go from one to the other?
  3. Figure 4b): Is fig. 2a) supposed to be identical to fig. 4b)? If that is the case, would it be better to combine fig.2 with fig.4 in 3 panels?
  4. Lines 303-305: I would rephrase as “These results again indicate that on a given day, uncertainty in TB values from any one sensor are as high or higher than the difference between TBs from different sensors.”

 

Technical corrections:

  1. Line 184: “not included in the analyses”
  2. Line 185: “influence of potential”
  3. Line 190: "two peaks correspond roughly with" or "two peaks roughly correspond with"
  4. Line 204: Before line 204, “tie point” is used but starting at line 204 it becomes “tiepoint”. If there is no reason for the transition, then you should stick to one expression throughout the paper.
  5. Line 240: “qualitativly” --> “qualitatively”
  6. Line 341: “Anarctic” --> “Antarctic”
  7. Line 342: “... light of the overally variabilty ...” --> “... light of the overall variability ...”
  8. Line 361: “This study confirms that any adjustments would be …”
  9. Line 362: “estimatews” -->“estimates”
  10. Line 371: “… differences between sensors. This points to the …”
  11. Line 428: “… would be benefit …”

Author Response

General comments:

The paper compares passive microwave data from SSMIS sensors onboard the U.S. Department of Defense Meteorological Satellite Program (DMSP) platforms F16, F17, and F18. The passive microwave data from these sensors are used to retrieve sea ice concentration. The goal of the study is to assess the impact of the sensors' variability over time and of orbital drift on the passive microwave observations and the resulting sea ice concentration retrievals. It is found that the variability from a single sensor is generally higher than the differences between sensors. The analysis of potential sensor drift effects is not conclusive, and there isn't an obvious systematic effect on the concentration estimates. It is suggested to consider using dynamic (time-varying) tie points in the NASA Team algorithm.

 

Thank you for the helpful comments.

 

Specific comments:

  1. Figure 1: It would have been interesting to see the actual ice concentration indicated by coloring the dots. That way, it would be clearer to see how much spread there is around the OW tie-point.

 

We changed the figure to show the open water points in blue. We also added the GR(37V/19V) weather filter threshold for reference.

 

  1. Figure 1: From that figure, it seems that tie points are defined in terms of gradient ratio and polarization ratio. However, after that the tie points are expressed in terms of brightness temperatures, e.g. in the figures with histograms. Could you explain how you go from one to the other?

 

This is a very good point. We added a couple of sentences to link the PR and GR values to the tie point values, which are defined for each channel used (19H, 19V, and 37V) for each of the three surface types.
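(For illustration, the conversion runs in the brightness-temperature-to-ratio direction: each tie point is a set of TBs at 19H, 19V, and 37V, and inserting those TBs into the commonly published NASA Team ratio definitions gives the tie point's location in PR-GR space. The tie point numbers below are placeholders, not values from the paper.)

```python
def pr_19(tb_19v: float, tb_19h: float) -> float:
    """Polarization ratio at 19 GHz, as commonly defined for the NASA Team algorithm."""
    return (tb_19v - tb_19h) / (tb_19v + tb_19h)

def gr_3719(tb_37v: float, tb_19v: float) -> float:
    """Spectral gradient ratio GR(37V/19V); large positive values over open water
    are also what the weather filter threshold acts on."""
    return (tb_37v - tb_19v) / (tb_37v + tb_19v)

# Placeholder tie point TBs (kelvin) for one surface type, e.g. open water
tp_19h, tp_19v, tp_37v = 115.0, 185.0, 207.0
print(pr_19(tp_19v, tp_19h), gr_3719(tp_37v, tp_19v))  # the tie point's PR-GR location
```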

 

  1. Figure 4b): Is fig. 2a) supposed to be identical to fig. 4b)? If that is the case, would it be better to combine fig.2 with fig.4 in 3 panels?

 

Yes, Figure 4b repeats Figure 2a. You make a good point that the figures could be combined. However, we have left them as is because the two figures have different purposes. Figure 2 is in Section 3.1 and compares F16, F17, and F18 for the same year (2018); the purpose here is to show the consistency between the three sensors. Figure 4 is in Section 3.2 and compares F16 with F17 in two different years; the purpose here is to show the lack of effect of the satellite drift of F16 over the 9 years. Figure 4 also goes with Figure 5, which shows the effect of F18 drift from 2012 to 2018 (Figure 5b is the same as Figure 2b). These issues (all sensors in the same year vs. sensors in different years) are discussed separately, so we feel the flow of the paper works better keeping these as separate figures, even though there is some repetition. We have added text to the Figure 4 caption to note that the image in Figure 4b is the same as in Figure 2a.

 

  1. Lines 303-305: I would rephrase as “These results again indicate that on a given day, uncertainty in TB values from any one sensor are as high or higher than the difference between TBs from different sensors.”

 

We made this change. Thank you for the suggestion.

 

Technical corrections:

  1. Line 184: “not included in the analyses”
  2. Line 185: “influence of potential”
  3. Line 190: "two peaks correspond roughly with" or "two peaks roughly correspond with"
  4. Line 204: Before line 204, “tie point” is used but starting at line 204 it becomes “tiepoint”. If there is no reason for the transition, then you should stick to one expression throughout the paper.
  5. Line 240: “qualitativly” --> “qualitatively”
  6. Line 341: “Anarctic” --> “Antarctic”
  7. Line 342: “... light of the overally variabilty ...” --> “... light of the overall variability ...”
  8. Line 361: “This study confirms that any adjustments would be …”
  9. Line 362: “estimatews” -->“estimates”
  10. Line 371: “… differences between sensors. This points to the …”
  11. Line 428: “… would be benefit …”

 

All of these technical corrections have been made. We changed all occurrences in the manuscript to use “tie point” instead of “tiepoint”. Thank you for catching these.
