Article
Peer-Review Record

Line-of-Sight Winds and Doppler Effect Smearing in ACE-FTS Solar Occultation Measurements

Atmosphere 2021, 12(6), 680; https://doi.org/10.3390/atmos12060680
by Chris D. Boone 1,*, Johnathan Steffen 1, Jeff Crouse 1 and Peter F. Bernath 1,2
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Submission received: 26 April 2021 / Revised: 20 May 2021 / Accepted: 21 May 2021 / Published: 26 May 2021

Round 1

Reviewer 1 Report

Review of “Line of Sight Winds and Doppler Effect Smearing in ACE-FTS Solar Occultation Measurements” by Boone et al.

Summary: The paper presents the approach that has been developed to derive LOS winds from ACE-FTS occultation measurements. The derived winds are compared to several other satellite-based instruments (HRDI, MIGHTI, TIDI) as well as model output (MERRA and HWM14). These comparisons are used to assess the quality of the derived winds. A potential bias has been identified in the derived winds relative to the comparison observations; it is suggested that the bias has a different sign for sunset and sunrise. The main use of the ACE LOS winds is to correct the Doppler smearing effect that is observed in the spectral features of molecules with a large overhang in the profile. The approach used to correct for LOS Doppler smearing is discussed, and a derived carbon monoxide volume mixing ratio profile is used to elucidate the impact of LOS Doppler smearing. The paper provides the first assessment of the impact of the LOS Doppler smearing effect on ACE data products.

 

General Comments

  1. The authors utilize model winds from the operational global weather assimilation and forecasting system from the Canadian Meteorological Center to obtain a calibration reference, allowing for the determination of the ACE line of sight winds (Eq. 2). This approach assumes that this model is accurate, and that the difference between the ACE LOS winds and the model LOS winds is constant with altitude – since the bias in the region from 19 km – 24 km is used to correct the entire profile. Although the solar features/altitude drift example (Figure 1) somewhat addresses this question, I believe that it is important for the authors to also address the following two questions in the text:
    1. What types of biases are typical between the Canadian Meteorological Center model winds and real observations? These must be available in the literature. It is my opinion that an estimate should be provided here, since the model winds are being used as a calibration standard.
    2. What is the impact of these uncertainties on the analysis provided in this paper?

 

  2. Through comparisons with the operational wind instruments, it is suggested that a systematic bias persists in the ACE-FTS data. However, it is known that each instrument used in the comparison also has potential biases. For example, the current release (v03) of MIGHTI data is known to have a potential bias on the order of 20 m/s (see next comment below). Also, day-to-day RMS differences between WINDII and HRDI were found to be on the order of 20 – 30 m/s. Comparisons are also made with model output from MERRA and HWM14; however, comparisons between HWM14 and ground-based Fabry-Perot measurements have also shown significant differences. Please provide a couple of lines in each case to provide better context for these comparisons.

 

  3. The authors indicate that the bias observed in the MIGHTI data and the TIDI data may be due to challenges associated with airglow observations near the terminator. However, the paper goes on to also assess the bias using model output from HWM14 and MERRA. This makes the discussion of the observed bias a bit confusing – on one hand the authors are saying the airglow wind measurements may have issues, but on the other hand HWM14 model winds show similar biases. The change in sign of the bias in HWM14 also matches that of MIGHTI for the higher altitudes in the MIGHTI field of view. One would think this suggests the physical mechanism that is generating these biases is the same as that captured by MIGHTI; however, MIGHTI also uses HWM14 as a calibration standard to extract line of sight winds. In the first release, the zero wind was determined by averaging the HWM14 data over 60 days. Perhaps this is the reason for the observed bias between the MIGHTI data and the ACE LOS winds?

 

  4. Although the potential source of the bias is discussed, the authors do not provide an assessment of the potential impact of this bias on the retrieved data products where the Doppler smearing effect is important (for example, the CO volume mixing ratio). What is the expected impact of this bias on the retrieved CO volume mixing ratio shown in Figure 10?

 

  5. The authors point out that ACE provides sparse geographic sampling due to the nature of the occultation measurements. Indeed, this fact is used to justify the loose requirements that were used to define coincidence with the comparison instruments. The authors suggest in the conclusion that there is limited utility of the geophysical information in the ACE-FTS LOS data since vector winds cannot be extracted. However, the authors also suggest that the data could be used to “fill the gap” and to improve model output in the middle atmosphere. The ACE LOS winds are derived using model data as the calibration reference. Can the authors elaborate on how the ACE LOS winds would improve model data without an independent reference or calibration standard to validate the LOS winds?

 

  6. In Section 4.1, the authors compare the impact of including Doppler smearing in the forward model versus not including smearing in the forward model. This is done by examining the difference between the forward modeled and measured spectra and the difference between the two forward modeled cases. Visually, it is not obvious from this comparison that the Doppler-corrected case is better. Can the authors provide some statistics on the residuals that demonstrate the improvement? These should also be included in the paper.

 

 

  7. Figure 11 shows the impact of correcting the Doppler shifts on the CO volume mixing ratios; however, in my mind, this does not adequately demonstrate that the Doppler correction has provided any improvement to the accuracy of the VMR product. While the analysis serves as an example to demonstrate the impact of the smearing, it does not validate the correction approach. This should be addressed in the paper by providing an example that demonstrates the improvement relative to some standard. If such an example doesn’t exist, I suggest adding a few lines to point out the limited conclusions that can be drawn from the comparisons shown in Figure 10 and Figure 11.

 

 

 

 

Specific Corrections

  1. Introduction: The authors mention several satellite-based and ground-based measurement systems; however, they neglect the long-term, accurate, rapid-cadence ground-based MLT wind measurements that are being made with ERWIN-2 in the polar region, as well as ground-based MLT Arecibo Fabry-Perot wind measurements. It is my opinion that these observing systems should also be referenced.

  

  2. Section 2:
    • Lines 102-113: It would be useful to quote the rough magnitude of observed Doppler shifts for the reader to understand the order of magnitude difference between the spectral resolution of the instrument and the observed Doppler shifts.
    • Line 150: “preliminary studies indicate that the errors typically range from 3 m/s – 10 m/s.” Is there a reference for this study? Why not include a figure demonstrating this analysis?
    • Line 197: The error associated with the relative velocity of the Earth rotating below the satellite is mentioned here but not quantified. However, the impact of the rotation of the Earth comes back up in the Conclusion (lines 575-597). I found this aspect of the discussion in the conclusion confusing and suggest moving Eq. 3 (and discussion of the impact) up to this section or putting a few lines at the end of the paragraph noting that the impact of biases associated with these effects is presented in the Conclusion.
    • Lines 244-249: “Investigations suggest………”. Is there a reference for this? If not, I find the wording inappropriate. It is unclear where the error analysis is coming from.
  3. Section 3:
    • It would be useful to include error bars in the comparisons between ACE LOS winds and MIGHTI, HRDI, and TIDI.

 

  4. Section 4:
    • Lines 493-508: Please include statistics on the residuals shown in Figure 10 (b) and Figure 10 (c) that demonstrate the improvement when including the Doppler smearing.

 

Comments for author File: Comments.pdf

Author Response

Responses to Reviewer #1

Thanks to the reviewer for the helpful comments.

The authors utilize model winds from the operational global weather assimilation and forecasting system from the Canadian Meteorological Center to obtain a calibration reference, allowing for the determination of the ACE line of sight winds (Eq.2). This approach assumes that this model is accurate, and that the difference between the ACE LOS winds and the model LOS winds is constant with altitude – since the bias in the region from 19 km – 24 km is used to correct the entire profile.

        The variation with altitude comes directly from the measured spectra.  We only apply the model information in one altitude region (19 to 24 km), and assumptions from the model play no role outside that region.  We are, however, assuming a vertical wind profile when in reality each measurement is collected at a slightly different location as the satellite progresses in its orbit.  Your point is valid but relates more to the internal consistency of the profile at different altitudes rather than the applicability of the model correction at different altitudes.  The following paragraph has been added to alert the reader to this possible source of systematic error:

 

Note that this approach assumes a vertical wind profile, but there is geographic smearing from each measurement being at a slightly different location (with a slightly different look angle) as the satellite progresses in its orbit.  This is ignored in the analysis, which contributes systematic errors to the results, the magnitude of which will depend on the degree of geographic smearing for the given occultation.

 

 

What types of biases are typical between the Canadian Meteorological Center model winds and real observations? These must be available in the literature. It is my opinion that an estimate should be provided here, since the model winds are being used as a calibration standard.

A difficult question to answer, in that the accuracy of wind information from the weather model has likely evolved over the course of the ACE mission as the quality and breadth of information available for assimilation has improved with time.  The Canadian model will certainly have kept pace with other major models, though.  Looking at Figure 8, wind values in the altitude region 19 to 24 km agree very well with the MERRA-2 reanalysis results, which is mentioned in the text.  I am not aware of any published studies comparing to individual measurements near 19 to 24 km (mostly geographically averaged results comparing to other models).  While I do not feel we are in a position to quantify the uncertainty associated with this calibration, I now explicitly state in the text that it could represent a significant source of systematic error:

 

Accuracy of wind information from the Canadian weather model has likely evolved over the course of the ACE mission, with improvements in the quality and breadth of wind data available for assimilation and improvements in the model itself.  Errors in this information will contribute a constant offset to the entire wind profile.  The magnitude of errors in this calibration source may be evaluated in future studies during comparisons to independent wind measurements.
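For readers wanting a concrete picture of this calibration step, a minimal sketch follows. It is not the ACE-FTS processing code, and the altitudes and wind values are illustrative; it simply shows the idea described above, where the mean model-minus-ACE difference over 19 to 24 km is applied as a single constant offset to the entire profile.

    import numpy as np

    # Minimal sketch (not the ACE-FTS code) of the constant-offset calibration
    # described above: the mean difference between model and ACE LOS winds in the
    # 19-24 km region is applied to the whole profile.  All values are illustrative.
    def calibrate(alt_km, ace_los, model_los, lo=19.0, hi=24.0):
        """Shift the ACE LOS wind profile to match the model on average in [lo, hi] km."""
        mask = (alt_km >= lo) & (alt_km <= hi)
        offset = np.mean(model_los[mask] - ace_los[mask])
        return ace_los + offset  # the same offset is added at every altitude

    alt = np.arange(19.0, 40.0, 1.0)
    ace_raw = np.linspace(-5.0, 30.0, alt.size)   # made-up uncalibrated LOS winds, m/s
    model = ace_raw + 7.0                         # pretend the model sits 7 m/s higher
    print(calibrate(alt, ace_raw, model)[:3])     # whole profile shifted by ~7 m/s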

 

 

What is the impact of these uncertainties on the analysis provided in this paper?

The following information was added to the text regarding the retrievals:

 

Note that applying a constant shift (as a function of altitude) to the wind profile would not change the calculated signal (there would be no spread in the location of line center) and would therefore have no impact on the retrievals.  Occultations with minimal gradients in the wind profile will experience little change in the calculated spectrum and therefore little change in the retrieval results. 
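As a rough numerical illustration of this point (not from the paper; the line position and wind values are made up), the sketch below applies the first-order Doppler relation to a hypothetical wind profile with and without a constant 30 m/s offset. The spread in line-center positions, which is what drives the smearing, is identical in both cases.

    # Sketch (not from the paper): a constant wind offset shifts every Doppler-shifted
    # line center by the same amount, so the spread of line centers (the smearing) is
    # unchanged.  The line position (2150 cm^-1) and winds are hypothetical.
    C = 299792458.0   # speed of light, m/s
    NU0 = 2150.0      # hypothetical line center, cm^-1

    def line_centers(winds):
        """Doppler-shifted line centers (cm^-1) for a list of LOS winds (m/s)."""
        return [NU0 * (1.0 + v / C) for v in winds]

    profile = [-20.0, 5.0, 40.0, 80.0]                        # hypothetical winds along the path
    shifted = line_centers(profile)
    with_offset = line_centers([v + 30.0 for v in profile])   # same profile plus 30 m/s

    print(max(shifted) - min(shifted))          # spread without offset
    print(max(with_offset) - min(with_offset))  # identical spread: no change in smearing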

Through comparisons with the operational wind instruments, it is suggested that a systematic bias persists in the ACE-FTS data. However, it is known that each instrument used in the comparison also has potential biases. For example, the current release (v03) of MIGHTI data is known to have a potential bias on the order of 20 m/s (see next comment below). Also, day-to-day RMS differences between WINDII and HRDI were found to be on the order of 20 – 30 m/s. Comparisons are also made with model output from MERRA and HWM14; however, comparisons between HWM14 and ground-based Fabry-Perot measurements have also shown significant differences. Please provide a couple of lines in each case to provide better context for these comparisons.

I wholeheartedly agree that other comparison sets are potentially contributing to observed differences.  We made a conscious effort, however, not to delve too deeply into potential issues in other data sources or to claim any sense of validation for our results.  Our main goals were to place our data into context with some existing data sets and to describe investigations into potential sources of systematic errors on our end that might explain observed discrepancies.  I have added the following sentence to the Conclusion to remind the reader that discrepancies are not necessarily completely an ACE issue:

 

It is possible there are different sources of systematic error for ACE-FTS results that have not been considered, and it is also possible that systematic errors in the comparison data (or mapping the data to the ACE-FTS line of sight) could be contributing to the bias.

 

The authors indicate that the bias observed in the MIGHTI data and the TIDI data may be due to challenges associated with airglow observations near the terminator. However, the paper goes on to also assess the bias using model output from HWM14 and MERRA. This makes the discussion of the observed bias a bit confusing – on one hand the authors are saying the airglow wind measurements may have issues, but on the other hand HWM14 model winds show similar biases. The change in sign of the bias in HWM14 also matches that of MIGHTI for the higher altitudes in the MIGHTI field of view. One would think this suggests the physical mechanism that is generating these biases is the same as that captured by MIGHTI; however, MIGHTI also uses HWM14 as a calibration standard to extract line of sight winds. In the first release, the zero wind was determined by averaging the HWM14 data over 60 days. Perhaps this is the reason for the observed bias between the MIGHTI data and the ACE LOS winds?

If the model informs the calibration of the instrument, then you would certainly expect to see a similar bias for both.  To perhaps make matters more confusing, we have added comparisons to some ground-based measurements (in response to another reviewer) that generally do not show a persistent bias but do intermittently have a 20 to 30 m/s offset compared to ACE results.  There is much to sort out that goes beyond the scope of this paper.

 

Although the potential source of the bias is discussed, the authors do not provide an assessment of the potential impact of this bias on the retrieved data products where the Doppler smearing effect is important (for example, the CO volume mixing ratio). What is the expected impact of this bias on the retrieved CO volume mixing ratio shown in Figure 10?

In the response to a previous point, we now mention that a constant offset would have no impact on the retrieval.  Large gradients in the velocity profile (wind shears) will have the biggest impact but are not associated with the discussion of bias.

 

The authors point out that ACE provides sparse geographic sampling due to the nature of the occultation measurements. Indeed, this fact is used to justify the loose requirements that were used to define coincidence with the comparison instruments. The authors suggest in the conclusion that there is limited utility of the geophysical information in the ACE-FTS LOS data since vector winds cannot be extracted. However, the authors also suggest that the data could be used to “fill the gap” and to improve model output in the middle atmosphere. The ACE LOS winds are derived using model data as the calibration reference. Can the authors elaborate on how the ACE LOS winds would improve model data without an independent reference or calibration standard to validate the LOS winds?

This goes well beyond the scope of the paper.  The obvious scenario would be when combining ACE results with broad altitude coverage with other results with more limited altitude coverage to improve the model fidelity at a particular location and time.  Altitude-limited overlapping data from another source could be used to ‘recalibrate’ the ACE profile if the weather model calibration is deemed unreliable, and then ACE results would provide constraints in altitude regions with no alternate source of information.

 

In Section 4.1, the authors compare the impact of including Doppler smearing in the forward model versus not including smearing in the forward model. This is done by examining the difference between the forward modeled and measured spectra and the difference between the two forward modeled cases. Visually, it is not obvious from this comparison that the Doppler-corrected case is better. Can the authors provide some statistics on the residuals that demonstrate the improvement? These should also be included in the paper.

We agree the figure was of limited use visually, perhaps distracting from the point we were trying to make.  It has been replaced by a figure that focuses on calculations for a particular CO line in order to better connect to the changes in the CO retrieval described in the following section.

 

Figure 11 shows the impact of correcting the Doppler shifts on the CO volume mixing ratios; however, in my mind, this does not adequately demonstrate that the Doppler correction has provided any improvement to the accuracy of the VMR product. While the analysis serves as an example to demonstrate the impact of the smearing, it does not validate the correction approach. This should be addressed in the paper by providing an example that demonstrates the improvement relative to some standard. If such an example doesn’t exist, I suggest adding a few lines to point out the limited conclusions that can be drawn from the comparisons shown in Figure 10 and Figure 11.

The following paragraph was added to the text:

With no measure of ‘truth’ in the CO VMR profile, there is no way to assess what degree of improvement in absolute accuracy these results actually represent.  However, the changes are consistent with expectations for the effect.  Spreading the absorption in wavenumber reduces the impact of saturation, yielding a stronger calculated signal, which in turn would lead to a smaller retrieved VMR.
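To make the saturation argument concrete, here is a small sketch (not from the paper; the Gaussian optical depths are made up) comparing the integrated absorption of a narrow, saturated line with the same integrated optical depth spread over a broader profile. The broader case absorbs more, consistent with the stronger calculated signal and smaller retrieved VMR described above.

    import numpy as np

    # Sketch (not from the paper): spreading a saturated absorption feature in
    # wavenumber increases the integrated absorption (equivalent width), even though
    # the integrated optical depth is the same.  All numbers are illustrative.
    nu = np.linspace(-0.05, 0.05, 20001)   # wavenumber offset grid, cm^-1
    dnu = nu[1] - nu[0]

    def equivalent_width(peak_tau, width):
        """Integrated (1 - transmittance) for a Gaussian optical depth profile."""
        tau = peak_tau * np.exp(-0.5 * (nu / width) ** 2)
        return np.sum(1.0 - np.exp(-tau)) * dnu

    narrow = equivalent_width(peak_tau=20.0, width=0.001)  # strongly saturated line
    broad = equivalent_width(peak_tau=10.0, width=0.002)   # same area, half the peak
    print(narrow, broad)  # broad > narrow: spreading the absorption strengthens the signal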

Further discussion on the role of saturation in the calculated signal was added earlier in the paper.

 

Introduction: The authors mention several satellite-based and ground-based measurement systems; however, they neglect the long-term, accurate, rapid-cadence ground-based MLT wind measurements that are being made with ERWIN-2 in the polar region, as well as ground-based MLT Arecibo Fabry-Perot wind measurements. It is my opinion that these observing systems should also be referenced.

  Added references to these two instruments as well as a reference to meteor radar measurements.

 

Lines 102-113: It would be useful to quote the rough magnitude of observed Doppler shifts for the reader to understand the order of magnitude difference between the spectral resolution of the instrument and the observed Doppler shifts.

The following text has been added:

 

For reference, a 50 m/s wind speed would induce a Doppler shift of roughly 4 × 10⁻⁴ cm⁻¹ near 2350 cm⁻¹, requiring great precision and stability in the instrument.
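The quoted magnitude is easy to verify with the first-order Doppler relation (shift = wavenumber × speed / c); a one-line check using the numbers in the sentence above:

    # Quick check (values from the sentence above) of the quoted Doppler-shift magnitude.
    C = 299792458.0     # speed of light, m/s
    nu0 = 2350.0        # wavenumber, cm^-1
    v = 50.0            # wind speed, m/s
    print(nu0 * v / C)  # ~3.9e-4 cm^-1, i.e. roughly 4 x 10^-4 cm^-1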

 

Line 150: “preliminary studies indicate that the errors typically range from 3 m/s – 10 m/s.” Is there a reference for this study? Why not include a figure demonstrating this analysis?

This is a rough estimate based on preliminary observation.  The actual values will be provided with each profile.  This was intended as a means to indicate the typical magnitude that should be expected.  ‘Studies’ was perhaps too strong a word, so the phrasing has been changed to ‘In preliminary results….’

 

Line 197: The error associated with the relative velocity of the Earth rotating below the satellite is mentioned here but not quantified. However, the impact of the rotation of the Earth comes back up in the Conclusion (lines 575-597). I found this aspect of the discussion in the conclusion confusing and suggest moving Eq. 3 (and discussion of the impact) up to this section or putting a few lines at the end of the paragraph noting that the impact of biases associated with these effects is presented in the Conclusion.

The discussion of this effect has been moved to Section 2 and hopefully clarified.

 

Lines 244-249: “Investigations suggest………”. Is there a reference for this? If not, I find the wording inappropriate. It is unclear where the error analysis is coming from.

‘Investigations’ has been changed to ‘Calculations’

 

It would be useful to include error bars in the comparisons between ACE LOS winds and MIGHTI, HRDI, and TIDI.

      There are sufficient numbers in the averages for MIGHTI and TIDI that the error bars (standard error of the mean) would be small.  Including error bars with the HRDI results could be useful, but for consistency and to maintain the paper’s distance from more formal validation discussions, we will leave that figure as is.

 

Lines 493-508: Please include statistics on the residuals shown in Figure 10 (b) and Figure 10 (c) that demonstrate the improvement when including the Doppler smearing.

I assume this refers to Figure 9, which has been replaced in the paper by observations for an isolated line.

 

Reviewer 2 Report

In this study, the authors present the first retrievals of line-of-sight winds from the ACE-FTS measurements between 20 km and 130 km. The measurements cover the altitudes between 50 and 70 km, where there is a gap in measured data.

A preliminary comparison with other satellite measurements, an empirical model, and ECMWF reanalysis shows that the results are very promising.

In addition, the authors show that taking the LOS wind into account in the processing will improve the retrieval of some constituents such as CO.

I am very happy with the manuscript as it is. I would like to congratulate the authors for this very interesting work.

I have a few small comments, but I think they can be addressed in further studies (except for comment 2).

1) A theoretical estimation of the random uncertainty for a single occultation would be interesting to know.

2) What is the width of the instrument field-of-view projected to the limb?

3) I understand that the number of coincidences with the satellite data is limited. But this is not an issue with the ECMWF ones. I felt a bit frustrated not to see a deeper analysis involving longer periods and several latitude ranges.

Also there is no comparison with the microwave and lidar ground-based instruments cited in the paper. They can cover the altitude range between 50 and 70 km and provide some additional information on possible ACE-FTS wind biases. These two points could be described as "future work".

4) I think the LOS winds are close to the meridional wind (true?). If so, they are very sensitive to diurnal and semi-diurnal tides. How can these tides affect the results of the comparison, especially since ACE measurements are on the terminator?

 

Author Response

Responses to Reviewer #2

Thanks to the reviewer for the helpful comments.

A theoretical estimation of the random uncertainty for a single occultation would be interesting to know.

At the moment, the only component of random error being determined comes from variability in the relative Doppler shift corresponding to the different windows employed in the analysis at a given altitude.  This will actually include a systematic component from internal inconsistencies in spectroscopy for different wavenumber regions.  For the wind determination, the scatter in results from different windows will be the largest source of noise-induced random errors.  This error estimate will be included with the profile.  Calculating random errors from atmospheric variability would be beyond the scope of this paper and outside the expertise of the authors.
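One simple way to turn that window-to-window scatter into a per-altitude error bar is sketched below. Whether the actual processing uses the standard error of the mean or a different statistic is not stated here, so treat this only as an illustration with made-up numbers.

    import numpy as np

    # Illustration only (not the ACE-FTS code): a per-altitude random-error estimate
    # from the scatter of LOS wind values obtained in different microwindows.
    window_winds = np.array([42.0, 38.5, 45.2, 40.1, 43.7])  # made-up per-window winds, m/s

    wind = window_winds.mean()
    err = window_winds.std(ddof=1) / np.sqrt(window_winds.size)  # standard error of the mean
    print(f"{wind:.1f} +/- {err:.1f} m/s")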

 

What is the width of the instrument field-of-view projected to the limb?

The following text has been added:

The instrument’s circular input aperture of 1.25 mrad subtends an altitude range of 3 to 4 km (diameter) at the tangent point.
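For readers who want to reproduce this number, a rough geometry check is sketched below. The satellite altitude (~650 km for SCISAT) and the example tangent heights are assumptions for illustration, not values from the text.

    import math

    # Rough check (not from the paper) of the limb footprint of a 1.25 mrad circular
    # field of view.  The assumed orbit altitude (~650 km) and tangent heights are
    # illustrative.
    R_EARTH = 6371.0   # km
    H_SAT = 650.0      # km, assumed satellite altitude
    FOV = 1.25e-3      # rad

    def footprint_km(tangent_height_km):
        """Diameter (km) subtended by the field of view at the tangent point."""
        d = math.sqrt((R_EARTH + H_SAT) ** 2 - (R_EARTH + tangent_height_km) ** 2)
        return FOV * d

    print(footprint_km(30.0))   # ~3.6 km
    print(footprint_km(100.0))  # ~3.4 km, consistent with the quoted 3 to 4 km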

 

I understand that the number of coincidences with the satellite data is limited. But this is not an issue with the ECMWF ones. I felt a bit frustrated not to see a deeper analysis involving longer periods and several latitude ranges.

This paper was intended to place the results into context with other data sources rather than to perform a validation study.  A deeper comparison with MERRA-2 would likely warrant a separate paper, but we agree it could be an interesting study.

 

Also there is no comparison with the microwave and lidar ground-based instruments cited in the paper. They can cover the altitude range between 50 and 70 km and provide some additional information on possible ACE-FTS wind biases. These two points could be described as "future work".

Good idea, thank you.  We have now included comparisons with meteor radar results which show no persistent bias with ACE-FTS in the region near 95 km, unlike the airglow results.  We also indicate that further comparisons with additional ground-based data sets would occur in future studies.

 

I think the LOS winds are close to the meridional wind (true?). If so, they are very sensitive to diurnal and semi-diurnal tides. How can these tides affect the results of the comparison, especially since ACE measurements are on the terminator?

They are actually generally closer to (within 45 degrees of) zonal.  I suspect there are a lot of moving parts here, making comparisons challenging.

Reviewer 3 Report

The submitted manuscript by Boone et al. presents first results from a novel analysis of ACE-FTS data to estimate wind profiles from about 20 - 140 km altitude from solar occultation measurements. Although measuring Doppler shifts was not the original goal of ACE-FTS, this information can be extracted with a careful analysis, as the authors show.  Comparisons with correlative measurements show that the shape and features of wind profiles match well, although offsets of ~30 m/s are apparent at the higher altitudes.

The methodology is appropriate and well thought out. The breadth of comparisons is impressive. The manuscript is well laid out and easy to read. The arguments are lucid, succinct, and clear.

In terms of scientific impact, these wind profiles have the potential to be a fantastic resource, especially for the mesosphere and thermosphere, which are data-starved. This manuscript will serve as a useful reference for researchers looking to utilize this new dataset.

Overall I recommend this paper be accepted after minor revisions. A number of points that need clarifying are listed below, but my primary comment is to include some better guidance for the user regarding the troubling 30 m/s offset. I understand this is a difficult issue that is outside the scope of this paper to solve, but some more discussion is needed. If this error will not be resolved before the release of the data, some guiding statements to users are needed. Should the user simply subtract a 30 m/s trend from the data (oppositely signed for sunset and sunrise)? Will a 30 m/s systematic error be reported in the data product? What further steps are being taken (or not) to resolve this discrepancy?


Minor comments:

- The authors argue that an inversion is not needed due to the exponential profile of VMR. This is likely true, but since the authors seem to have developed the forward model already, they should report a typical error associated with this assumption.

- L57: "generated" --> "estimated"?

- L98: Although it becomes clear what you mean by the time the reader gets to Figure 10, describe the term "overhang" here.

- L118: "is calculated": How? Does this include integration effects? Does it include instrument line shape? Maybe a reference will answer these questions.

- L119: Where do the "typical temperature and pressure profiles" come from? 

- It would be useful to report horizontal (along-LOS and across-LOS) resolution of the reported wind for the user. It would also be useful to better quantify vertical resolution if there are any other considerations than the 4 km binning (i.e., what is the footprint of the FoV at the tangent point? Does the cubic spline introduce additional smoothing that needs to be considered?). This has implications for interpreting the wind shears, especially at high altitude.

- Roughly quantify the error associated with zero-baselining to the Canadian wind model. How much is the instantaneous wind in 19-24 km expected to deviate from this model? This is a systematic error when a single profile is considered, since all altitudes would have the same error. However, if a user is combining many ACE-FTS profiles for analysis, this could present as an important term in the uncertainty analysis, above and beyond the statistical uncertainty at each altitude. I understand this is difficult to quantify, but some discussion needs to be given.

- Figure 3: add legend like in the other plots.

- Quantify the mean discrepancy in Figs 3-8 so the reader can evaluate the consistency of any possible offsets.

- Fig 8: Specify whether these are sunset or sunrise occultations. I assume they follow the same convention as the other plots, but make it explicit.

- L410: Wouldn't this be a westerly wind (i.e., air parcels moving towards the east)? Double check.

- L412: "The MERRA-2 product attempted to incorporate... but there was insufficient... wind information." Is this statement an interpretation of Figure 8, or a result of other research into the products ingested for MERRA-2 on this day? If it is simply an interpretation of Figure 8, I think it is too strongly stated. Either way this sentence should be clarified. 

- L441: Define EMSIS. Is this different from NRL-MSISE00 (Picone et al., 2002)?

- L506: "on average the residuals are reduced with the inclusion of Doppler effect smearing." This is an important conclusion, but it needs to be quantified. How much was the residual reduced? The sum-squared residual could work as a scalar metric.

- Figure 10: Indicate whether this is a typical/median example, or an extreme example.

 

Author Response

Responses to Reviewer #3

Thanks to the reviewer for the helpful comments.

Overall I recommend this paper be accepted after minor revisions. A number of points that need clarifying are listed below, but my primary comment is to include some better guidance for the user regarding the troubling 30 m/s offset. I understand this is a difficult issue that is outside the scope of this paper to solve, but some more discussion is needed. If this error will not be resolved before the release of the data, some guiding statements to users are needed. Should the user simply subtract a 30 m/s trend from the data (oppositely signed for sunset and sunrise)? Will a 30 m/s systematic error be reported in the data product? What further steps are being taken (or not) to resolve this discrepancy?

Responses to the other reviewers have hopefully provided further understanding of the bias, in particular the new comparisons with the meteor radar data.  We have now clarified that the user could apply an altitude-dependent correction to the ACE-FTS results to correct for the contribution from the Earth's rotation if they so choose.  No other adjustments would be suggested, but of course any actions the end user chooses to take are out of our hands.  We will not be reporting a 30 m/s systematic error.  We do not consider any of the measurements used for comparison (even our own) to be ‘truth,’ and declaring a 30 m/s systematic error would amount to declaring a particular data set to be truth.  We have attempted to place our results into context with comparisons to other datasets, which I believe we have accomplished.  I am not convinced that reconciling the observed differences falls entirely on us, although I would certainly be happy to have any deficiencies in my assumptions or processing revealed so that I could fix them and generate an improved data product.


The authors argue that an inversion is not needed due to the exponential profile of VMR. This is likely true, but since the authors seem to have developed the forward model already, they should report a typical error associated with this assumption.

This is actually trickier than it may seem.  The volume of data associated with all the microwindows employed in the analysis makes it prohibitive to do in the current framework of our retrieval software.  I would have needed to strip back the number of microwindows and likely allow more saturation in the lines being used, compromises that would impact the accuracy of the results.  I still hope to generate a retrieval for the wind profile in the future, but implementing the approach without significant compromises will be a major task.

 

L57: "generated" --> "estimated"?

How about ‘derived?’

 

L98: Although it becomes clear what you mean by the time the reader gets to Figure 10, describe the term "overhang" here.

I removed mention of the overhang at this location.  Properly defining it at this point in the text would be a digression.

 

L118: "is calculated": How? Does this include integration effects? Does it include instrument line shape? Maybe a reference will answer these questions.

L119: Where do the "typical temperature and pressure profiles" come from? 

The text has been edited to hopefully clarify the procedure used:

In each segment, a forward model calculation is used to generate a representative spectrum for that segment corresponding to a tangent height near the center, using pressure, temperature, and VMR profiles from a particular occultation (sr10063, where sr stands for sunrise, and 10063 is the number of orbits since launch, comprising a unique identifier for the occultation), with spectroscopic parameters taken from HITRAN 2016.

 

It would be useful to report horizontal (along-LOS and across-LOS) resolution of the reported wind for the user. It would also be useful to better quantify vertical resolution if there are any other considerations than the 4 km binning (i.e., what is the footprint of the FoV at the tangent point? Does the cubic spline introduce additional smoothing that needs to be considered?). This has implications for interpreting the wind shears, especially at high altitude.

The size of the instrument’s field of view is now reported:

The instrument’s circular input aperture of 1.25 mrad subtends an altitude range of 3 to 4 km (diameter) at the tangent point.

The following text is added regarding cubic spline:

Note that the altitude spacing between measurements varies with orbital geometry, ranging from less than 2 km to ~6 km.  For occultations with large altitude spacing, there could be significant smoothing error from interpolating onto the standard 1-km grid with cubic spline.  Wind profiles in ACE-FTS version 5.0 processing will also be provided on the measurement grid for users wanting to avoid this potential source of error.
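As an illustration of the interpolation step mentioned above (not the ACE-FTS processing code; the altitudes and winds are made up, and scipy's CubicSpline simply stands in for whatever spline routine is actually used):

    import numpy as np
    from scipy.interpolate import CubicSpline

    # Illustrative sketch: interpolating a coarsely sampled LOS wind profile onto a
    # 1-km grid with a cubic spline.  With ~6 km spacing between measurements, the
    # spline can smooth over real structure (the smoothing error noted above).
    alt_meas = np.array([20.0, 26.0, 31.0, 37.0, 42.0])  # measurement grid, km
    wind_meas = np.array([5.0, -12.0, 3.0, 25.0, 18.0])  # LOS wind, m/s (made up)

    spline = CubicSpline(alt_meas, wind_meas)
    alt_grid = np.arange(20.0, 43.0, 1.0)                # standard 1-km grid
    wind_grid = spline(alt_grid)
    print(np.column_stack((alt_grid, wind_grid))[:5])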

The text now mentions geographic smearing from the instrument probing slightly different locations with slightly different look angles as the satellite progresses in its orbit:

Note that this approach assumes a vertical wind profile, but there is geographic smearing from each measurement being at a slightly different location (with a slightly different look angle) as the satellite progresses in its orbit.  This is ignored in the analysis, which contributes systematic errors to the results, the magnitude of which will depend on the degree of geographic smearing for the given occultation.

 

Roughly quantify the error associated with zero-baselining to the Canadian wind model. How much is the instantaneous wind in 19-24 km expected to deviate from this model? This is a systematic error when a single profile is considered, since all altitudes would have the same error. However, if a user is combining many ACE-FTS profiles for analysis, this could present as an important term in the uncertainty analysis, above and beyond the statistical uncertainty at each altitude. I understand this is difficult to quantify, but some discussion needs to be given.

As discussed in the response to a previous reviewer, this is not a simple quantity to estimate from the available information:

Accuracy of wind information from the Canadian weather model has likely evolved over the course of the ACE mission, with improvements in the quality and breadth of wind data available for assimilation and improvements in the model itself.  Errors in this information will contribute a constant offset to the entire wind profile.  The magnitude of errors in this calibration source may be evaluated in future studies during comparisons to independent wind measurements.

 

Figure 3: add legend like in the other plots.

Done

 

Quantify the mean discrepancy in Figs 3-8 so the reader can evaluate the consistency of any possible offsets.

The problem is, I am not convinced there is a definitive conclusion to be made here on the consistency of systematic offsets.  As indicated in responses to previous reviewers, this paper is intended to place our results in context with other measurements rather than to perform a formal validation.

 

Fig 8: Specify whether these are sunset or sunrise occultations. I assume they follow the same convention as the other plots, but make it explicit.

The legends indicate sunrise occultations, but I have added the word sunrise to the caption for clarity.

 

L410: Wouldn't this be a westerly wind (i.e., air parcels moving towards the east)? Double check.

Thank you.  Being unfamiliar with the jargon, I assumed an easterly wind meant the air parcels were moving eastward.  I have removed the word.

 

L412: "The MERRA-2 product attempted to incorporate... but there was insufficient... wind information." Is this statement an interpretation of Figure 8, or a result of other research into the products ingested for MERRA-2 on this day? If it is simply an interpretation of Figure 8, I think it is too strongly stated. Either way this sentence should be clarified. 

The offending sentence has been removed.

 

L441: Define EMSIS. Is this different from NRL-MSISE00 (Picone et al., 2002)?

Changed the acronym to NRLMSIS-00 to be consistent with the Picone paper.

 

L506: "on average the residuals are reduced with the inclusion of Doppler effect smearing." This is an important conclusion, but it needs to be quantified. How much was the residual reduced? The sum-squared residual could work as a scalar metric.

The figure has been replaced by one looking at an isolated CO line to connect more directly to the discussion of the CO VMR retrieval.

 

Figure 10: Indicate whether this is a typical/median example, or an extreme example.

We have not done enough sample retrievals to necessarily say what is typical.  However, the text now stresses that one requires large gradients in the wind profile to see large effects in the retrieved VMR profile.
