Surface Reflectance and Aerosol Retrieval from SPOT-VGT and PROBA-V in the Mission Exploitation Platform Environment
Round 1
Reviewer 1 Report
General comments:
The article shows the potential of applying the CISAR algorithm to the 20-year record of the SPOT-VGT and PROBA-V missions. CISAR represents a small group of next-gen algorithms that perform online radiative transfer calculations and could therefore be applied relatively easily to an arbitrary mission; moreover, through temporal constraints and the use of a priori information, it provides aerosol (and cloud) properties together with the underlying surface. Stable and accurate multi-year, multi-mission climate records of aerosols and surface properties are crucial for understanding climate change and are therefore in high demand by the scientific community. The article is well structured, nicely written, quite brief and well illustrated. The methods used are sound and widely accepted. I'd recommend it for publication in MDPI Remote Sensing after minor corrections. Please refer to some of my suggestions below.
Major comments:
I found it a bit confusing throughout the article that only the LAND surface was considered, although the product description clearly mentions retrievals over the ocean surface too. It seems that the authors used "surface" where they meant "land surface". Otherwise, it is not at all clear why ocean surface validation is neglected; mentioning in the title of the article that only land is considered could remove the confusion.
Minor comments:
I'd like to see more discussion of the selection of AERONET sites: why, out of hundreds, were only these 4 selected? I am also not convinced about the surface homogeneity of the city areas of Venice and Beijing. At which spatial scale? And how big is the area around the AERONET station from which the CISAR data are taken? Is it an exactly collocated 5x5 km pixel? Please discuss.
Was the 20-year dataset over the AERONET stations also 5x5 km? Please indicate.
Lines 186-197: Is the accumulation applied to all surfaces, i.e., land and ocean? Please clarify; there is some general confusion (see major comments).
Table 3 and overall Section 3.3, as well as all captions in Section 4: please indicate which products are used for evaluation; I assumed it is AOD @ 0.55 µm for CISAR. Also, was it compared to AOT @ 0.5 µm from AERONET (as indicated in Figs. 4, 5 and 8, 9)? Was the wavelength difference between AERONET and CISAR neglected (if present), or was some Ångström exponent (AE) correction applied? Please indicate.
Figure 7: is it possible to plot a linear regression line too?
Figure 11: why is there no CISAR data over the ocean? The product description mentions it. Also, if only land data are concerned, why are coastal zones and inner seas included (well seen in Fig. 12)? Please explain in the text.
Technical comments:
Line 182: Please make sure OE (Optimal estimation?) is mentioned earlier in the text.
Figs. 8 and 9: It's a matter of taste, but I'd prefer a full caption in Fig. 8 and then a reference to it in the caption of Fig. 9, as for Figs. 4 and 5.
Author Response
The authors would like to thank the reviewer for the clear, concise and helpful review. Below are the answers to the reviewer comments.
Major comments:
I found it a bit confusing throughout the article that only the LAND surface was considered, although the product description clearly mentions retrievals over the ocean surface too. It seems that the authors used "surface" where they meant "land surface". Otherwise, it is not at all clear why ocean surface validation is neglected; mentioning in the title of the article that only land is considered could remove the confusion.
Thank you for your comment. PROBA-V and SPOT-VGT are land missions; therefore, the global processing has been limited to land. However, the LTDR is evaluated over water as well (Venice). This has hopefully been clarified throughout the text.
Minor comments:
I'd like to see more discussion of the selection of AERONET sites: why, out of hundreds, were only these 4 selected? I am also not convinced about the surface homogeneity of the city areas of Venice and Beijing. At which spatial scale? And how big is the area around the AERONET station from which the CISAR data are taken? Is it an exactly collocated 5x5 km pixel? Please discuss.
Was the 20-year dataset over the AERONET stations also 5x5 km? Please indicate.
As explained in the text, the key areas are selected with the objective of representing a variety of land cover and aerosol types. Indeed, these areas are not necessarily homogeneous (please note that Venice is located over water). However, please note that the purpose of this exercise is not directly to evaluate the performance of CISAR over these stations, but rather to verify the possibility of obtaining a consistent dataset from the three different sensors. To answer your question, the CISAR retrieval is indeed performed over an area of 5x5 km surrounding the AERONET station (this has been clarified in the text). AERONET is a columnar measurement; therefore, it is not possible to have 5x5 km measurements, and the temporal window for the collocation with satellite observations aims at compensating for the different spatial resolutions.
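To illustrate the collocation logic described in this answer, below is a minimal Python sketch; the function and variable names are illustrative, not the actual processing code, and the window value follows the ±30 minute criterion quoted in the next answer. Each 5x5 km CISAR retrieval is paired with the average of the AERONET measurements falling within the temporal window:

```python
import numpy as np

def collocate(sat_times, sat_aot, aero_times, aero_aot,
              window_minutes=30.0):
    """Pair each satellite retrieval (area-mean AOT over the 5x5 km
    box around the station) with the mean of the columnar AERONET
    measurements taken within +/- window_minutes of it.
    Times are expressed in minutes on a common epoch."""
    aero_times = np.asarray(aero_times)
    aero_aot = np.asarray(aero_aot)
    pairs = []
    for t, aot in zip(sat_times, sat_aot):
        mask = np.abs(aero_times - t) <= window_minutes
        if mask.any():  # keep only retrievals with a ground match
            pairs.append((aot, aero_aot[mask].mean()))
    return pairs
```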
Lines 186-197: Is the accumulation applied to all surfaces, i.e., land and ocean? Please clarify; there is some general confusion (see major comments).
This is applied to all surfaces and has been clarified in the text.
Table 3 and overall Section 3.3, as well as all captions in Section 4: please indicate which products are used for evaluation; I assumed it is AOD @ 0.55 µm for CISAR. Also, was it compared to AOT @ 0.5 µm from AERONET (as indicated in Figs. 4, 5 and 8, 9)? Was the wavelength difference between AERONET and CISAR neglected (if present), or was some Ångström exponent (AE) correction applied? Please indicate.
Thanks for pointing out that the needed explanation was missing. The following explanation has been added at the end of Section 2 (Section 2.2.3. in the revised manuscript):
“When comparing against ground-based measurements obtained from the Aerosol Robotic Network (AERONET), satellite observations are collocated within a ±30 minute temporal window. As the AOT is not delivered at 550 nm in the AERONET dataset, the AOT is interpolated to the wavelength of interest by means of the Ångström exponent obtained from AERONET measurements at 440 and 670 nm.”
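For illustration, a minimal sketch of the Ångström power-law conversion described in the added text (the function name is an assumption; 550 nm lies between the two anchor wavelengths, so this is an interpolation):

```python
import numpy as np

def aeronet_aot_550(aot_440, aot_670):
    """Derive AOT at 550 nm from AERONET AOT at 440 and 670 nm using
    the Angstrom power law tau(lambda) ~ lambda**(-alpha)."""
    alpha = -np.log(aot_440 / aot_670) / np.log(440.0 / 670.0)
    return aot_440 * (550.0 / 440.0) ** (-alpha)
```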
Figure 7: is it possible to plot a linear regression line too?
Done, thanks for the suggestion.
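For reference, a minimal matplotlib sketch of this kind of regression overlay; the data and coefficients are purely illustrative:

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic stand-in for the AERONET/CISAR scatter of Figure 7.
rng = np.random.default_rng(0)
x = rng.random(500)                                    # reference AOT
y = 0.9 * x + 0.02 + 0.05 * rng.standard_normal(500)  # retrieved AOT

slope, intercept = np.polyfit(x, y, 1)                 # least-squares fit
xs = np.array([x.min(), x.max()])
plt.scatter(x, y, s=4, alpha=0.3)
plt.plot(xs, slope * xs + intercept, "r-",
         label=f"regression: y = {slope:.2f}x + {intercept:.2f}")
plt.plot(xs, xs, "k--", label="1:1 line")
plt.xlabel("AERONET AOT")
plt.ylabel("CISAR AOT")
plt.legend()
plt.show()
```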
Figure 11: why is there no CISAR data over the ocean? The product description mentions it. Also, if only land data are concerned, why are coastal zones and inner seas included (well seen in Fig. 12)? Please explain in the text.
The following explanation has been added to the text: “As the PROBA-V mission has been designed to map land cover and vegetation growth, open oceans are excluded from this processing. However, coastal pixels and inland water are processed with the CISAR algorithm.”
Technical comments:
Line 182: Please make sure OE (Optimal estimation?) is mentioned earlier in the text.
Optimal Estimation is mentioned at line 47 in the introduction.
Figs. 8 and 9: It's a matter of taste, but I'd prefer a full caption in Fig. 8 and then a reference to it in the caption of Fig. 9, as for Figs. 4 and 5.
Thanks for pointing this out; we agree, and it has been corrected.
Reviewer 2 Report
All my observations are in the attached file. Due to the limitation of one file upload, I merged the comments file and the annotated manuscript (with notes and comments) into a single file.
Comments for author File: Comments.pdf
Author Response
The authors would like to thank the reviewer for this fruitful review. Here are the answers to the comments received.
1) The authors define, on page 7, the concepts of accuracy, precision and uncertainty (A, P and U). A is defined as the average of the residuals, which ideally in statistics should be equal to zero when the entire population is considered. Obviously, this does not always result in zero for real data, but the addition of this offset (as used in the definition of P) seems insufficiently justified to me. Interestingly, P uses the sample denominator (n – 1), while U uses the population denominator (n). By the way, in the case of U, isn't this simply the root mean squared error (RMSE)? Why not use the most commonly used term to simplify the reader's life?
The APU statistics are established statistics used in similar studies to evaluate satellite-derived products (e.g. https://sentinel.esa.int/documents/247904/4598110/sentinel-3-synergy-land-handbook.pdf). In particular, these statistics were used in a previous study in which CISAR was applied to PROBA-V observations, i.e. the PV-LAC ESA project. The validation report associated with PV-LAC can be found at https://earth.esa.int/eogateway/documents/20142/37627/PV-LAC-ATMO-validation-report-v2.pdf. The text has been updated as follows:
“The choice of the APU statistics to evaluate the algorithm performances is made to assure consistency with the PV-LAC ESA project.”
Thank you for your comment on the uncertainty, which indeed corresponds to the RMSE. This has now been made clear in the text.
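For reference, a minimal NumPy sketch of the three statistics as defined in the comment above (the function name is illustrative):

```python
import numpy as np

def apu(retrieved, reference):
    """APU statistics: A = mean residual (bias), P = standard
    deviation of the bias-corrected residuals (sample denominator
    n - 1), U = root mean squared error (population denominator n)."""
    r = np.asarray(retrieved) - np.asarray(reference)
    a = r.mean()
    p = np.sqrt(((r - a) ** 2).sum() / (r.size - 1))
    u = np.sqrt((r ** 2).mean())  # identical to the RMSE
    return a, p, u
```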
2) Table 2: as pointed out in the PDF file, the definition of Alta Floresta's land cover as "mixed" does not mean much. It is an area under constant change over the last 30 years, with lots of square kilometers being deforested to convert forest to pasture or agricultural areas. "Mixed" does not explain anything at all.
In this case we refer to the fact that the land cover of the 5 km pixel centred on the AERONET station includes urban, forest, cropland and even some small water bodies. This has been clarified in the text, which now reads: "In the case of Alta Floresta, the term mixed refers to the mixture of urban, forest, cropland and small water bodies present in the 5 km pixel surrounding the AERONET station."
3) Table 3: For Alta Floresta, P and U are basically the same, which is a consequence of the small A. For Beijing they are similar too, but with a higher A (= 0.15). So, does the use of A improve the results, making P and U closer for all sites? Please discuss.
Thank you for your comment, but I do not understand the question; maybe there is a misunderstanding of the meaning of the APU statistics. It is possible to have very high accuracy and low precision, and vice versa, depending on how sparse the measurements are and how close they lie to the solution. Uncertainty (or RMSE) and accuracy are similar indicators, although the RMSE gives more weight to larger errors, since the errors are squared before averaging.
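For reference, the three statistics are algebraically linked; a short derivation using the definitions quoted in comment 1, with residuals r_i, shows why P and U nearly coincide whenever A is close to zero, as observed for Alta Floresta:

```latex
U^2 = \frac{1}{n}\sum_{i=1}^{n} r_i^2
    = A^2 + \frac{1}{n}\sum_{i=1}^{n} (r_i - A)^2
    = A^2 + \frac{n-1}{n}\,P^2
```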
4) In Figure 4 the differences between AERONET and satellite data are very significant and should be better addressed in the manuscript. Just as an example, in Figure 4a, before 2004, the peaks are barely coincident, which means a difference of months! It means that even the annual cycle of biomass burning is time-biased. The authors should find out what is happening here. Further, in the subsequent years AERONET has a clear baseline for the wet season (largely documented and confirmed by in situ observations) while the satellite results show a noisy oscillation (and a higher baseline). Given that the correlations shown in Table 2 are not small (~0.59), one would expect better agreement between these two data series. BTW, I used Alta Floresta as an example, but the same difference can be observed for Beijing and Banizoumbou. At Venice, the sign of the difference seems to be inverted (satellite < AERONET).
Thank you for your comment. The most probable cause of these peaks is the sub-optimal cloud masking in the SPOT-VGT and PROBA-V Level-1 data. As the cloud mask is used to build the prior information in the CISAR algorithm, an omission error in the cloud mask can lead to a large overestimation of the retrieved AOT, resulting in the high peaks visible far from the biomass burning events identified by AERONET. This effect is particularly visible in Alta Floresta due to the nature of the aerosol type, but it affects all time series. Over Venice there are fewer peaks, given that cloud masking over water is simpler than over land. The discussion in the manuscript has been extended.
5) Figure 6: visually, the standard deviation in the histograms is ~0.4. So how can U in Table 2 be ~0.2?
I apologise: Figure 6, which refers to the combination of all stations in Table 2, was not actually reporting the APU statistics associated with the histograms, in contrast with the caption. It has now been updated.
6) Figure 7: the variability of the fitting is huge, reaching 100% in some regions of the plot (see attached PDF). How can this result in U ~0.2?
The density of the points should be taken into account here: most of the retrievals show little dispersion. Moreover, despite the poor fitting at low aerosol load, the RMSE is computed on absolute differences, so even a 100% relative error at low AOT carries less weight than a smaller relative error at high AOT. I hope this answers your concerns.
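To make the weighting argument concrete with purely hypothetical numbers:

```python
# Illustrative residuals (hypothetical values, not from the paper):
low = (0.10 - 0.05) ** 2   # 100% relative error at low AOT -> 0.0025
high = (1.00 - 0.80) ** 2  # 25% relative error at high AOT -> 0.0400
# The high-AOT residual contributes 16x more to the squared-error
# sum behind the RMSE, despite the much smaller relative error.
```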
7) Figure 8: what explains the difference between PROBA-V and AERONET after June 2019? The two datasets just suddenly move apart. The same issue was not observed in Figure 9.
Thanks for your comment. As can be seen from the plot, the availability of AERONET observations strongly decreases after June 2019. This might suggest the presence of clouds (AERONET is only available in clear-sky conditions). On the other hand, the CISAR algorithm also shows fewer retrievals, meaning that either thick clouds were retrieved, or the Quality Indicator associated with the CISAR retrieval is lower than 0.2 (and therefore filtered from this analysis). In particular, CISAR and AERONET measurements are not always collocated between June and September (e.g. early July and early August). Also, under the hypothesis of cloud presence, it should be kept in mind that the CISAR retrieval is performed at 5 km spatial resolution, and therefore effects such as the AOT enhancement due to aerosol swelling in the vicinity of clouds might impact the measurement, compared to a columnar ground-based measurement such as AERONET. The limited availability of collocated observations in this time window makes it risky to draw any conclusion. These considerations have been added to the text.
All the comments in the annotated PDF have been accepted and processed.
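As a sketch of the quality screening mentioned in this answer (the function name and array layout are assumptions, not the actual product schema):

```python
import numpy as np

def screen_retrievals(aot, quality_indicator, qi_threshold=0.2):
    """Keep only the CISAR retrievals whose Quality Indicator meets
    the threshold applied in this analysis (QI >= 0.2)."""
    aot = np.asarray(aot)
    qi = np.asarray(quality_indicator)
    return aot[qi >= qi_threshold]
```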
Reviewer 3 Report
Surface reflectance and aerosol retrieval from SPOT-VGT and PROBA-V in the MEP environment
Review:
The article discusses and shows the observations made by the SPOT-VGT and PROBA-V satellites using the authors’ methodology of processing the data. Details of the calibration techniques have also been discussed. Below are my comments and assessment of the paper in its current stage:
1. The paper’s focus is precisely to present the observations based on the calibrations discussed by the authors. My thinking is that although this approach is essential, the authors have not extensively discussed this calibration method in comparison to other methods or retrieval algorithms used by other satellites (e.g., NASA’s MODIS, VIIRS, Himawari). Because of this, the paper’s added value to the scientific community is not well established. The authors need to expand the discussion on how the techniques differ from other established vegetation and aerosol retrieval algorithms.
2. What’s the value-added information that one gets from the paper when the authors used one algorithm and have not compared the results with other well-accepted algorithms for the same dataset?
3. What is the merit of comparing the consistency of these satellite products when they are not compared with other established satellite products?
4. I understand that AERONET products can be used for validation of the aerosol products. But, in validating cloud products, I think, it is best to use other satellite products.
I have attached my other comments as sticky notes in the pdf file.
Comments for author File: Comments.pdf
There are sentences that are not sentences at all. Please revise this.
Author Response
The authors would like to thank all the reviewers for the fruitful discussion. Please find below the authors' responses to Reviewer #3. The reviewer's annotated PDF is included here, with the authors' answers to the comments in the file. The English has been revised throughout the manuscript.
- The paper’s focus is precisely to present the observations based on the calibrations discussed by the authors. My thinking is that although this approach is essential, the authors have not extensively discussed this calibration method in comparison to other methods or retrieval algorithms used by other satellites (e.g., NASA’s MODIS, VIIRS, Himawari). Because of this, the paper’s added value to the scientific community is not well established. The authors need to expand the discussion on how the techniques differ from other established vegetation and aerosol retrieval algorithms.
The following clarifications have been added to the text to better define the scope and need of the data harmonisation. In Section 2.1.1 of the revised paper, the following sentences have been added to the last paragraph to clarify the radiometric calibration process for each mission:
The current study relies on the Top-Of-Atmosphere (TOA) Bidirectional Reflectance Factor (BRF) derived from these satellites as delivered by the MEP (Wolters et al., 2016; Wolters et al., 2023). The radiometric calibration of these data relies on different vicarious methods according to the mission (Wolters et al., 2016; Sterckx et al., 2016).
In Section 2.1.2, the sentence:
As the three radiometers composing the MEP archive have slightly different characteristics, as explained in Section 2.1.1, it is firstly necessary to verify the temporal consistency of the data to process consisting of Top-Of-Atmosphere (TOA) Bidirectional Reflectance Factor (BRF).
reads now:
As the three radiometers composing the MEP archive have slightly different characteristics and radiometric calibration, as explained in Section 2.1.1, it is firstly necessary to verify the temporal consistency of the TOA BRF.
Developing a new radiometric calibration method is thus not the purpose of this paper.
Regarding the inversion technique and the comparison with other algorithms, this is done in previous papers focusing on the CISAR algorithm (Govaerts and Luffarelli, 2018, Luffarelli and Govaerts, 2019, and Luffarelli et al., 2022). This manuscript, as per the title itself, focuses on the surface reflectance and aerosol products obtained from SPOT-VGT and PROBA-V in the framework of the SPAR@MEP study.
- What’s the value-added information that one gets from the paper when the authors used one algorithm and have not compared the results with other well-accepted algorithms for the same dataset?
To the authors' knowledge, the only other aerosol product available from SPOT-VGT and PROBA-V is the operational product. The ESA project PV-LAC, mentioned in the manuscript, compares the retrievals obtained with the CISAR algorithm against the operational product, documenting the added value of a physically-based approach such as CISAR. Note that the CISAR algorithm has been strongly improved since the PV-LAC studies, as can be seen by comparing the results obtained in this manuscript with those presented in Luffarelli et al., 2019. Nevertheless, a comparison with the MODIS product has been included for the case study of the Australian fires.
- What is the merit of comparing the consistency of these satellite products when they are not compared with other established satellite products?
Thank you for your comment; we agree that the manuscript was lacking a cross-sensor evaluation. A comparison with MODIS has been included in Section 4 for the case study of the Australian fires.
- I understand that AERONET products can be used for validation of the aerosol products. But, in validating cloud products, I think, it is best to use other satellite products.
Thanks for your comment; however, this paper neither includes nor intends to include any cloud validation results. As expressed in Luffarelli et al., 2022, the extension to the retrieval of clouds was developed to improve and extend the spatial coverage of the retrieval of aerosol properties.
- Figure 4: These points represent monthly averages, as I understand. How large are the standard deviations?
Indeed, these points are monthly averages. For visual clarity purposes, I would rather keep the plots simple. However, here are the plots including the standard deviation for all datasets (represented by the shaded area).
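A minimal matplotlib sketch of the shaded-standard-deviation presentation used in those plots; the series below is synthetic, for illustration only:

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic monthly AOT series standing in for the real data.
months = np.arange(240)                  # 20 years of monthly means
mean = 0.3 + 0.15 * np.sin(2 * np.pi * months / 12)
std = np.full_like(mean, 0.08)           # illustrative spread

plt.plot(months, mean, label="monthly mean AOT")
plt.fill_between(months, mean - std, mean + std,
                 alpha=0.3, label="+/- 1 standard deviation")
plt.xlabel("month index")
plt.ylabel("AOT")
plt.legend()
plt.show()
```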
Author Response File: Author Response.pdf
Round 2
Reviewer 3 Report
The authors have improved the manuscript. I think the inclusion of the case study over Australia is a good story to tell that validates the products presented in the manuscript. I have responded to some of the authors' comments in the attached PDF. I would be glad to hear the authors' responses, for the sake of clarification, especially in relation to the Chiang Mai data presented in Fig. 8.
Comments for author File: Comments.pdf
I still detect some English problems. For example, in L139, a period is placed after the word "reference" where it should not be. I suggest that the authors perform a thorough edit of the English in the manuscript.
Author Response
The authors would like to thank the reviewer for appreciating the revised manuscript and for the constructive comments, which helped us improve it.
The authors have tried their best to answer the last few concerns raised in the second round of review. The answers are given directly in the attached PDF.
Regarding the English, we had the paper reviewed by native English speakers and checked with online editing tools (i.e. Grammarly). Apart from the typo mentioned by the reviewer, which has been corrected, we could not find additional issues with the written language.
Author Response File: Author Response.pdf