Article
Peer-Review Record

Field Intercomparison of Radiometers Used for Satellite Validation in the 400–900 nm Range

Remote Sens. 2019, 11(9), 1129; https://doi.org/10.3390/rs11091129
by Viktor Vabson 1,*, Joel Kuusk 1, Ilmar Ansko 1, Riho Vendt 1, Krista Alikas 1, Kevin Ruddick 2, Ave Ansper 1, Mariano Bresciani 3, Henning Burmester 4, Maycira Costa 5, Davide D’Alimonte 6, Giorgio Dall’Olmo 7,8, Bahaiddin Damiri 9, Tilman Dinter 10, Claudia Giardino 3, Kersti Kangro 1, Martin Ligi 1, Birgot Paavel 11, Gavin Tilstone 7, Ronnie Van Dommelen 12, Sonja Wiegmann 10, Astrid Bracher 10, Craig Donlon 13 and Tânia Casal 13
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Submission received: 26 March 2019 / Revised: 24 April 2019 / Accepted: 8 May 2019 / Published: 11 May 2019
(This article belongs to the Special Issue Fiducial Reference Measurements for Satellite Ocean Colour)

Round 1

Reviewer 1 Report

Other than a missing reference and inconsistency in the application of symbols to numbers (it should be 20° not 20 ° - or at the very least consistent), I have no issues with this paper. The authors have provided a very clear evaluation of a joint instrument evaluation program, and their goals and results are very clearly presented. There is no reason not to support the publication of these results, and I would applaud the authors for taking the time to craft a manuscript that was actually a joy to read.

Author Response

Response to Reviewer 1 Comments

 

Point 1: Other than a missing reference and inconsistency in the application of symbols to numbers (it should be 20° not 20 ° - or at the very least consistent), I have no issues with this paper. The authors have provided a very clear evaluation of a joint instrument evaluation program, and their goals and results are very clearly presented. There is no reason not to support the publication of these results, and I would applaud the authors for taking the time to craft a manuscript that was actually a joy to read.

 

Response 1: We thank the reviewer for helpful remarks.

 


Reviewer 2 Report

This paper reported results of a short intercomparison of irradiance and radiance sensors from a few different manufacturers. All of the results reported were from a single day. The instruments were all re-calibrated at the site, so the major result would be the variability of the measurements arising simply from measuring in a real environment.

 

The paper was well written overall. I am surprised, though, that they did not use their results to update the uncertainties in their tables. The variability shown in the figures of the data seems much bigger than the combined uncertainty listed in the tables. I would have liked the tables to list both this calculated uncertainty and the variability in the measurements using all of the data, and I would like to see that added to the tables.

 

Specific comments

 

Line 64: suggest “allows assessment of the consistency”

 

Line 70: difference to differences

 

Line 93 “make such corrections based on characterization”

 

Line 144: “exception of the 25-minute”

 

Line 148: and throughout…what is a pilot? There must be a better word for whatever you mean by this.

 

Line 160…my copy had an issue with the reference source.

 

Line 167: what were the azimuths used?

 

Line 225: “smaller than the FOV of “

 

Line 232: when you say consensus…what do you mean? Is the consensus the average of all the other sensors? Either way, you need to define what you mean by consensus.

 

Line 245-246: Sentence not well written as is.  Should split the idea of how they agree with the consensus and how well the individual instruments agree with each other.

 

Almost all of the figures:  Maybe I am color blind or something, but it is very difficult to tell the difference between the different lines…many of the colors are very close (or the same to my eye).

 

Line 350: Maybe change it to “In general the uncertainty is calculated….”, because in your paper, while you list the calibration certificate uncertainty, you don’t include it in your combined uncertainty.

 

Tables 4-6.  The certificate uncertainty seems unreasonably small to me to include all of the terms involved in calibrating an instrument…it looks to include only the source (and then low at that for radiance) and not any other factors in calibration setup….

 

Line 400: having an accuracy of the wavelength scale of 0.3 nm would not just affect the integration for the OLCI bands, but also the application of the calibration source irradiance/radiance scale to the instrument responsivity…and would be a much bigger factor on this than the integration….

 

Line 433-435: My experience has been that the stray light correction (SLC) is much bigger for the laboratory case, because the lab source is typically very weak in the blue/UV and has so much red. The outdoor case is typically (especially for the irradiance) much less spectrally variable. It isn’t at all clear in this paragraph how you characterized the SLC uncertainty; here I would think one would also want to show what the SLC correction is for the laboratory measurement.

 

Line 527: the “offsetting” by 20 deg doesn’t make sense to me, nor do I see how it would help, as most irradiance collectors are still very good for 20 deg collimated irradiance.

 


Author Response

Response to Reviewer 2 Comments

Point 1: This paper reported results of a short intercomparison of irradiance and radiance sensors from a few different manufacturers.  All of the results reported were from a single day.  The instruments were all re-calibrated at the site, so the major result would be for variability of the measurements just because of measuring in a real environment.

Response 1: The main content is precisely summarized. We thank the reviewer for the very helpful remarks and comments.

Point 2: The paper was well written overall.  I am surprised though that they did not use their results to update the uncertainties in their tables.  The variability shown in the figures of the data seems much bigger than the combined uncertainty listed in the tables.  I would have liked to see the tables both list this calculated uncertainty, and the variability in the measurements using all of the data.  I would like to see that added to the tables.

Response 2: The uncertainty budgets have been updated; a special row with the experimental variability data has been added.

Specific comments

Point 3: Line 64: suggest “allows assessment of the consistency”

Response 3: Accepted. 

Point 4: Line 70: difference to differences

Response 4: Accepted. 

Point 5: Line 93 “make such corrections based on characterization”

Response 5: Accepted. 

Point 6: Line 144: “exception of the 25-minute”

Response 6: Accepted. 

Point 7: Line 148: and throughout…what is a pilot? There must be a better word for whatever you mean by this.

Response 7: Instead of “pilot” we will use “coordinating laboratory”. 

Point 8: Line 160…my copy had an issue with the reference source.

Response 8: The casts used in the analysis of LCE-2 intercomparison are listed in Table 3.

Point 9: Line 167: what were the azimuths used?

Response 9: These measurements were made at azimuth angles of 107° and 143° with respect to the sun…

Point 10: Line 225: “smaller than the FOV of “

Response 10: Accepted.   

Point 11: Line 232: when you say consensus…what do you mean? Is the consensus the average of all the other sensors? Either way, you need to define what you mean by consensus.

Response 11: Consensus values are explained earlier, in Section 2.6.

Point 12: Line 245-246: Sentence not well written as is.  Should split the idea of how they agree with the consensus and how well the individual instruments agree with each other.

Response 12: The group of HyperOCR sensors, shown in Figure 10 with dashed lines, is more consistent with the consensus value than the sensors of the RAMSES group, shown with solid lines. The much higher variability across the sensors of the RAMSES group is remarkable.

Point 13: Almost all of the figures:  Maybe I am color blind or something, but it is very difficult to tell the difference between the different lines…many of the colors are very close (or the same to my eye).

Response 13: That is true, but it is not easy to improve because the number of sensors is large.

Point 14: Line 350: Maybe change it to “In general the uncertainty is calculated….”, because in your paper, while you list the calibration certificate uncertainty, you don’t include it in your combined uncertainty.

Response 14: Accepted.   

Point 15: Tables 4-6.  The certificate uncertainty seems unreasonably small to me to include all of the terms involved in calibrating an instrument…it looks to include only the source (and then low at that for radiance) and not any other factors in calibration setup….

Response 15: Calibration was carried out in a well-controlled environment using two standard lamps; the alignment of the instruments was fully repeated for each lamp and sensor; the repeatability of the calibration of the irradiance sensors, evaluated from the random differences between the results obtained with the two lamps, was only 0.2…0.3 %. We believe this is close to the state-of-the-art level for the radiometric calibration of sensors using FEL-type calibration standards. Please note that the certificate uncertainty listed in Tables 4-6 is a standard uncertainty (k = 1).
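The evaluation described in the response above can be sketched numerically: estimate the single-calibration repeatability from the lamp-to-lamp differences, then combine it in quadrature with the certificate uncertainty of the standard. The responsivity ratios and the lamp uncertainty below are illustrative placeholders, not the actual LCE-2 calibration data.

```python
import numpy as np

# Hypothetical per-wavelength responsivity results from two FEL lamps
# (illustrative numbers only, not the actual LCE-2 calibration data).
resp_lamp_a = np.array([1.000, 0.998, 1.002, 1.001, 0.999])
resp_lamp_b = np.array([1.003, 0.996, 1.000, 1.003, 0.997])

# Relative difference between the two independent calibrations, in %.
rel_diff = 100.0 * (resp_lamp_a - resp_lamp_b) / resp_lamp_b

# The spread of the lamp-to-lamp difference contains two independent
# realisations of the calibration noise, hence the division by sqrt(2).
repeatability = np.std(rel_diff, ddof=1) / np.sqrt(2)

# Combine (k = 1, quadrature) with an assumed lamp-certificate uncertainty.
u_lamp = 0.45  # %, illustrative standard uncertainty of the FEL standard
u_combined = np.hypot(repeatability, u_lamp)

print(f"repeatability = {repeatability:.2f} %, combined = {u_combined:.2f} %")
```

With repeatability at the 0.2…0.3 % level quoted in the response, the quadrature sum is dominated by the lamp term, which is why the certificate uncertainty can legitimately look "small" while still being a k = 1 standard uncertainty.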

Point 16: Line 400: having an accuracy of the wavelength scale of 0.3 nm would not just affect the integration for the OLCI bands, but also the application of the calibration source irradiance/radiance scale to the instrument responsivity…and would be a much bigger factor on this than the integration….

Response 16: In the range 400 nm to 900 nm, the change in responsivity due to a wavelength scale error of 0.3 nm will be smaller than 0.5 %, and it does not depend much on whether it is considered on a per-pixel basis or on the OLCI band basis. The important point is the variability of the wavelength scale error. If, for example, it depends on temperature, then quite likely it will affect the calibration and the later measurements differently, so both will contribute independently to the combined uncertainty. Otherwise, if the wavelength scale error is constant, its effect may be evaluated from the final determined spectrum.
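The magnitude quoted in this response can be cross-checked with a rough sketch: approximate the FEL-type incandescent standard as a Planck radiator at an assumed distribution temperature of about 3000 K (an assumption for illustration, not a value from the paper) and evaluate the relative signal change caused by a 0.3 nm wavelength offset. The steep blue flank of the lamp spectrum makes the effect largest near 400 nm and nearly negligible at 900 nm.

```python
import numpy as np

C2 = 1.4388e7  # second radiation constant, nm*K
T = 3000.0     # assumed lamp distribution temperature, K (illustrative)

def planck_relative(wl_nm):
    """Planck spectral radiance up to a constant factor."""
    return wl_nm**-5 / (np.exp(C2 / (wl_nm * T)) - 1.0)

def shift_error_percent(wl_nm, dwl_nm=0.3):
    """Relative signal change (%) caused by a wavelength offset dwl_nm."""
    return 100.0 * (planck_relative(wl_nm + dwl_nm) / planck_relative(wl_nm) - 1.0)

for wl in (400.0, 650.0, 900.0):
    print(f"{wl:.0f} nm: {shift_error_percent(wl):+.2f} %")
```

Under these assumptions the error is of order 0.5 % at 400 nm and falls well below 0.1 % at 900 nm, consistent with the "smaller than 0.5 %" bound stated in the response.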

Point 17: Line 433-435: My experience has been that the stray light correction (SLC) is much bigger for the laboratory case, because the lab source is typically very weak in the blue/UV and has so much red. The outdoor case is typically (especially for the irradiance) much less spectrally variable. It isn’t at all clear in this paragraph how you characterized the SLC uncertainty; here I would think one would also want to show what the SLC correction is for the laboratory measurement.

Response 17: Corrections for the laboratory measurements are given in the first part [1]. The stray light correction needs to be applied to both spectra obtained by the radiometer: one spectrum representing the responsivity calibration measurements, and the other representing a field measurement. The indoor radiance and irradiance sources were spectrally similar to the calibration sources; therefore, the stray light correction has a relatively small impact in comparison with the field results, where the difference between the spectra is substantial.

Point 18: Line 527: the “offsetting” by 20 deg doesn’t make sense to me, nor do I see how it would help, as most irradiance collectors are still very good for 20 deg collimated irradiance.

Response 18: The “offsetting” was suggested in [8] due to the specific nature of the RAMSES irradiance sensors’ deviation from the ideal cosine response. Looking at Figure 13, tilting a radiometer with a relatively large cosine error by 20 degrees during calibration will substantially improve its agreement with the other instruments. Unfortunately, at the same time its agreement when measuring, for example, collimated incandescent sources at normal incidence will get worse. In general, meeting the 2 % specification set for the accuracy of cosine collectors would solve the problem more satisfactorily.
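The trade-off described in this response can be illustrated with a toy model. The quadratic cosine-error term below is invented purely for illustration (the real RAMSES deviations are those shown in Figure 13 of the paper): calibrating with the collector tilted by 20° folds part of the cosine error into the responsivity, which helps under sky illumination but degrades normal-incidence collimated measurements.

```python
import numpy as np

def response(theta_deg, eps=0.06):
    """Angular response with a simple, invented quadratic cosine error."""
    t = np.radians(theta_deg)
    return np.cos(t) * (1.0 - eps * (theta_deg / 90.0) ** 2)

# Relative cosine error absorbed into the calibration when the collimated
# source is at normal incidence (0 deg) versus tilted by 20 deg.
err_normal = response(0.0) / np.cos(np.radians(0.0)) - 1.0   # zero by design
err_20deg = response(20.0) / np.cos(np.radians(20.0)) - 1.0  # small, nonzero

print(f"relative cosine error at 0 deg:  {100 * err_normal:+.3f} %")
print(f"relative cosine error at 20 deg: {100 * err_20deg:+.3f} %")
```

A calibration at 20° bakes the second, nonzero error into the responsivity; that offset then partially compensates the cosine error seen under diffuse sky light, while a later normal-incidence lamp measurement inherits it as a bias, which is the degradation the response points out.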

[1]            V. Vabson et al., “Laboratory intercomparison of radiometers used for satellite validation in the 400–900 nm range,” Remote Sens., Special Issue “Fiducial Reference Measurements for Satellite Ocean Colour,” Mar. 2019.

[8]            S. Mekaoui and G. Zibordi, “Cosine error for a class of hyperspectral irradiance sensors,” Metrologia, vol. 50, no. 3, p. 187, 2013.


 


Reviewer 3 Report

The manuscript is at the level of a good graduate thesis. The technical characteristics of the radiometers are analyzed in detail, with an evaluation of possible errors. There is room to raise this level. It is necessary to answer the following questions and to make corrections in the text of the manuscript.

1. When comparing different radiometers in synchronous graphs, opposite behavior is observed. It is necessary to explain this divergence; it cannot be explained only by the technical characteristics of the receiving systems. See, for example, Fig. 10, Fig. 11, Fig. 16, and Fig. 17.

2. Is there hysteresis in the behavior of the characteristics of the receiving systems with increasing and decreasing temperature, and also with changes in other meteorological parameters?

3. How does water surface agitation influence the measurement errors? Indeed, with increasing wind speed, wind waves occur, whose amplitudes and periods grow with increasing wind speed. Why are these errors not considered in the manuscript?

4. How does the concentration of primary biomass rising to the water surface influence the measurement errors? Is there a seasonal dependence of these errors?

5. How do internal waves influence the measurement errors?

 


Author Response

Response to Reviewer 3 Comments

The manuscript is at the level of a good graduate thesis. The technical characteristics of the radiometers are analyzed in detail, with an evaluation of possible errors. There is room to raise this level. It is necessary to answer the following questions and to make corrections in the text of the manuscript.

 

Point 1: When comparing different radiometers in synchronous graphs, opposite behavior is observed. It is necessary to explain this divergence; it cannot be explained only by the technical characteristics of the receiving systems. See, for example, Fig. 10, Fig. 11, Fig. 16, and Fig. 17.

Response 1: The high effectiveness of the SI-traceable radiometric calibration was demonstrated in the indoor experiment, as a large group of radiometers of different types, operated by different scientists, achieved satisfactory consistency between the results (s < 1 %). The much larger bias between radiometers during the field measurements is due to the different conditions in the field and during radiometric calibration. A full explanation of the observed behaviour of the radiometers requires individual tests, including determination of thermal effects, nonlinearity, spectral stray light effects, wavelength calibration, angular response, and polarization effects. Unfortunately, the information needed for a detailed evaluation of these effects is presently not available.

Point 2: Is there hysteresis in the behavior of the characteristics of the receiving systems with increasing and decreasing temperature, and also with changes in other meteorological parameters?

Response 2: The air temperature at the time of the field measurements was rather stable: between 5 °C and 9 °C. Thus, temperature differences between the sensors likely stayed within ±2 °C; therefore, we assume that no significant hysteresis could have been present.

Point 3: How does water surface agitation influence the measurement errors? Indeed, with increasing wind speed, wind waves occur, whose amplitudes and periods grow with increasing wind speed. Why are these errors not considered in the manuscript?

Response 3: All sensors were directed at the same target, the start time was synchronised, and the signals were recorded simultaneously. The relatively high measurement platform guaranteed that the measured signal was averaged over several waves. The effect of a varying target affects all the sensors in a broadly similar way; it may complicate the analysis, but it does not invalidate the comparison. Of the 30 measured casts, only a few with the least temporal variability were selected for the intercomparison analysis.

Point 4: How does the concentration of primary biomass rising to the water surface influence the measurement errors? Is there a seasonal dependence of these errors?

Response 4: The effect affects all the sensors in a similar way.

Point 5: How do internal waves influence the measurement errors?

Response 5: The effect affects all the sensors in a similar way.


Round 2

Reviewer 2 Report

I don't agree with them on the SLC part. The SLC has to be done throughout, consistently, even with the calibration source. But I am not going to hold up the paper over this issue.


Reviewer 3 Report

I was satisfied by the authors' answers; I consider that the paper can now be published.
