
Review Reports

Universe 2026, 12(1), 11; https://doi.org/10.3390/universe12010011
by Artem Y. Shikhovtsev

Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3: Roger Clay

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

This manuscript investigates long-term changes in daytime optical turbulence, atmospheric boundary layer height (BLH), and total cloud cover (TCC) at the Huairou Solar Observing Station (HSO) using ERA-5 reanalysis data combined with historical solar image quality measurements. The topic is timely and relevant, especially in the context of increasing interest in the impact of climate change on astronomical observing conditions. Overall, the work shows limited novelty, as comparable analyses have been reported in multiple existing publications. However, given that this submission is for the special issue on HSO, the dedicated analysis of HSO is acceptable.

Some simple suggestions:

1. Ensure consistent use of terminology (e.g., seeing, image quality, β, r₀).
2. Reduce redundancy between figure captions and main text.
3. Improve figure readability where multiple periods are compared (e.g., Figures 6–8).
4. Clarify the definition and interpretation of the parameter Fd at first appearance.
5. Avoid subjective expressions and maintain a formal scientific tone throughout.

Author Response

Dear reviewer, thank you for your time and effort in improving this manuscript. I sincerely appreciate your valuable comments.

I have tried to address them in the text of the manuscript. The characteristics of optical atmospheric distortions are varied, defined by the underlying physics, and based on different models, which allows for some diversity in terminology. Improvements have been made to the figures and to the overall structure of the manuscript. The parameter Fd describes the relative changes in the average intensity of optical turbulence in dimensionless form. Sincerely.

Author Response File: Author Response.pdf

Reviewer 2 Report

Comments and Suggestions for Authors

This is an interesting paper which lays out an ERA-5 data re-analysis process for determining winter/summer turbulence profiles, and a comparison to cloud cover, at the HSO solar-observatory site over many decades.

Such basic site-characterization work is a valuable contribution to the literature on general observatory operations. Broader interest among readers of Universe is not so obvious to me; this does not reflect badly on the quality of the paper, but is only to say that another MDPI journal (Astronomy?) may be a better fit. I defer to the Editor's view.

Overall, the paper is written in good English, and the layout/discussion is logical and clear, apart from some misplaced figures. I will deal with those in my detailed comments below. Some other recommendations appear, mostly related to presentation, along with a couple of suggestions for new analyses which may help bolster the claims of trends seen. With those addressed, I feel the paper can be suitable for journal publication.

Detailed comments:  

It's not obvious why equation 1 is not for weak nighttime turbulence, but truly reflects strong turbulence in daytime; it should at least be stated that this equation is shown to hold during the day. That's important, as solar astronomy contends with much more thermal variation than at night. If this equation is assumed to hold for the site in question, is there some regime in which it would break down? Some qualifications about all this seem needed.

The three specific "tasks" of the study seem reasonable, certainly the first two, to measure seeing and total cloud cover, which are basic aspects. But the third - correlation of scattered light and seeing - is more of a "hypothesis" to be tested, rather than a task to undertake, as such; it's not clear an answer will emerge. The referenced study of Kovadlo is intriguing, but not compelling; maybe there is correlation between them, but there seems to be no consensus on it being observed at typical astronomical sites. I think it would be helpful to separate the goals of the study into these two different things: basic analysis, and looking for some possible effect or trend.

Also, it may help the reader weigh the importance of this predicted effect to know how it might be expected to impact *solar* astronomy; does it happen only during the day? Also, the author could be more clear about how this site differs from general observatory sites. Possibly this is more important here, as this site is at low elevation - and by a lake/body of water?

The statement "ERA-5 reanalysis performs well over most observatories in the world" is a logical stretch; my familiarity with the field of astronomical site testing would be that it's not a typical analysis for most astronomical observatories. Some citations might help make that case. Sites can be orographically very complicated (usually on mountains) and ERA has poor spatial resolution (on the order of many kilometers), so that really matters. The referenced paper (Liu 2025) refers to oceanographic measurements, not specifically astrophysical observatory sites. Maybe what the author is trying to get at is that this Liu et al. analysis (over water) is relevant to the HSO site, which is situated on the shore of a large water reservoir. Is that what is meant? If so, please clarify and better justify this.

Section two, second par.: Is this analysis of hourly *daytime* values between 1940 and 2025? One expects those are the only ones relevant to solar astronomy, and nighttime ones could introduce bias, esp. if more stable.  Or otherwise, how is this corrected for solar elevation? Presumably, cloudy days are also irrelevant, and so deleted from the records. Were those corrected/accounted for?

Figure 1 seems to show a SDIMM setup at the observatory.  But then, what data were taken?  A strong recommendation, if data were taken, is a table here showing the measurements, with statistics of the relevant values: mean, median seeing, etc.

I cannot figure out what Figure 2 is meant to show. Apart from colour, Figures 2a and 2b look essentially identical, although they are presumably for different periods, pre/post 1970.  A recommendation, if this is intended to somehow show a measurable difference between the two periods, is to overplot the two.  Is this instead meant to somehow show the *measured* SDIMM profile relative to the two ERA-5 profiles during July? That could make sense, as a test, but I can't tell; please explain.  The figure caption should be clear enough that there is no confusion.

Chapter 3: Spelling should be "Astroclimatic."

How all these many profiles are shown was confusing to me, and I struggled to figure out what they all mean: There are three separate figures of two profiles each (Figures 3, 4, 5) so six panels; but, as far as I can tell, these are really just two different figures: meant to show the difference between average profiles for January and July.  

That choice - averages of just January/July - and not broader winter/summer average over some prescribed period of months, say, may make sense, if it perhaps highlights the largest seasonal difference.  But that argument is not made, as far as I can tell.  Please justify this choice.

Maybe Figure 4 is actually Figure 2b?, as it is the January equivalent to the July 1940-1969 (a), and 1970-1999 (b), there.  If so, I recommend putting them together there instead, and explain in the caption why these profiles are shown together (to show the historical difference between January and July, presumably).

If the measured profiles (from image-analysis) are shown in Figure 3, why are they shown together with simulated ones? It should be obvious via the captions how each is derived.

It appears Figure 3a and Figure 5a are identical. Why?  In my view, I think many of the others are redundant, too. Figure 6 seems instead to collect up all the previous ones to show there is no discernible change in those profiles (July/January) over the span 1989 to 2025; so that can stay as is, and delete the intermediate ones, perhaps.  Figure 7 serves to highlight the difference, by showing side-by-side; that makes sense.  But the caption could be more clear, and actually state how it differs from Figure 6 (that it is restricted to 2000-2025).

Table 2 and 3, I believe, record only the ERA-5 model-generated values for r_0 (January/July) without direct measurements; please state so in the title, if so. That there are *measured* profiles too, for some fraction of this time (presumably 2000-2025) could be included here too, if true.  There is a setup shown in the introduction, but no data presented, as far as I can tell.

The paragraph following: Can you actually say there is measured evidence of a "...significant reduction in large-scale horizontal temperature contrasts .."? If so, it may bolster the case that this is climate-change related. End of par., grammar/typo: " ... and the level of scattered light may also drop."

Section 3.2 first par. first. sent.: "Understanding optical turbulence within the atmospheric boundary layer is essential for refining atmospheric motion analysis .. "  This statement seems to start off on the wrong foot, as optical turbulence and *physical* atmospheric turbulence are not the same thing; optical turbulence involves only index of refraction variation, which can occur for very, very weak physical turbulence with essentially no vertical atmospheric motion at all, i.e. within pure laminar airflow. I think what the author is getting at is in the exact opposite sense, understanding *optical* turbulence requires understanding the nature of physical turbulence near the boundary layer. Please refine this statement to avoid this logical confusion. The last sentence of this paragraph is correct.

Figure 8: Is this figure out of place? I cannot find where this figure is discussed in the text. Where is F_d defined?  What does this figure show? The caption should say.

Figure 9/10: These figures are out of place; they should appear in Section 3.2, where the temporal trends in BLH are discussed. The plotted linear trends are not particularly convincing; a suggestion would be plotting the lower and upper quartiles of the allowed fits. Smoothed versions (by several years) might also help show a trend. There are some overplotted dashed curves, it seems, that might be that, but the captions should define those; the contrast with the blue curves is poor.

Figure 11: Definitely out of place, as it should appear with Section 3.3.  Frankly, the plotted mean trends here are not at-all convincing against these near-uniform clouds of datapoints, apart from "saturation" near high and low values; what are the quartile allowed ranges? It needs somewhere to be clarified how uniform the data are in time and quality; presumably there is confidence that any trend is not due simply to biases in those, but that has to stated explicitly.

That said, a recommendation is to see if clipping the data changes the trends. For example, delete all TCC data above some thresholds, say, 0.5 or 0.8. Does that impact the supposed trend? It may not matter if observations are not made at high TCC anyway, but this can be explained, which would certainly help bolster any claim of a trend in TCC with relevance to observations.
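The clipping test suggested above can be sketched as follows. This is a minimal illustration with synthetic data only; the series, the `annual_trend` helper, and the threshold values are assumptions for the sake of the example, not the manuscript's actual TCC records or method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical hourly total-cloud-cover samples (0..1) over 1940-2024,
# with a small declining trend buried in noise
years = np.repeat(np.arange(1940, 2025), 100)
tcc = np.clip(rng.normal(0.5, 0.25, years.size) - 0.001 * (years - 1940), 0, 1)

def annual_trend(years, tcc, threshold=None):
    """Least-squares linear trend of TCC vs. year, optionally deleting
    all samples at or above `threshold` (e.g. 0.5 or 0.8) first."""
    if threshold is not None:
        keep = tcc < threshold
        years, tcc = years[keep], tcc[keep]
    slope, intercept = np.polyfit(years, tcc, 1)
    return slope

full = annual_trend(years, tcc)
clipped = annual_trend(years, tcc, threshold=0.8)
print(f"trend (all data):       {full:+.2e} per year")
print(f"trend (TCC < 0.8 only): {clipped:+.2e} per year")
```

Comparing the two slopes shows whether the apparent trend survives when high-TCC hours (which are irrelevant to observing anyway) are removed.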

Discussion

This seems okay; no strong claim is made about trends, which seems in keeping with the analysis. Even so, with regard to the previous comment on trends, and the statement in last sent. of last par. "In TCC, ERA-5 shows negative trends at HSO." That doesn't seem so clear, so may be overstated, actually; although perhaps can be strengthened by clipping the data by TCC-level and rechecking, as suggested.

Also, it's not clear to me what became of the "third task" of this study discussed above, related to trends in atmospheric scattered light, and seeing. I can't seem to find a clear statement about whether this was evidenced in the data or not, which presumably should appear here in the discussion.

Conclusions

The author should be commended for looking-for and reporting-on possible effects of climate change, but I'd say it seems to be overstating confidence that those are actually seen in these data: it might be enough here to say "It may be that climate change is affecting ..."  Unless some better, clearer case for a connection can be stated here.

 

Author Response

Dear reviewer, thank you for your time and effort in improving this manuscript. I sincerely appreciate your valuable comments.

It's not obvious why equation 1 is not for weak nighttime turbulence, but truly reflects strong turbulence in daytime; it should at least be stated that this equation is shown to hold during the day. That's important, as solar astronomy contends with much more thermal variation than at night. If this equation is assumed to hold for the site in question, is there some regime in which it would break down? Some qualifications about all this seem needed.

R1. Some clarifications have been added to the manuscript. This equation can describe turbulence both at night and during the day. Below, I discuss daytime turbulence characteristics, since, for example, parameterization coefficients derived from solar observations are used. At night, the coefficients would apparently be different.

The three specific "tasks" of the study seem reasonable, certainly the first two, to measure seeing and total cloud cover, which are basic aspects. But the third - correlation of scattered light and seeing - is more of a "hypothesis" to be tested, rather than a task to undertake, as such; it's not clear an answer will emerge. The referenced study of Kovadlo is intriguing, but not compelling; maybe there is correlation between them, but there seems to be no consensus on it being observed at typical astronomical sites. I think it would be helpful to separate the goals of the study into these two different things: basic analysis, and looking for some possible effect or trend.

 

R2. Yes, there are two main goals. The third point is a separate area; I think it's important to consider variations in atmospheric characteristics as a consequence of global climate change. The changes have been made to the text.

 

Also, it may help the reader weigh the importance of this predicted effect to know how it might be expected to impact *solar* astronomy; does it happen only during the day? Also, the author could be more clear about how this site differs from general observatory sites. Possibly this is more important here, as this site is at low elevation - and by a lake/body of water?

R3. Yes, thank you. The manuscript discusses the atmosphere above observatories that are at low absolute altitudes and located near bodies of water. Comments have been added to the text.

The statement "ERA-5 reanalysis performs well over most observatories in the world" is a logical stretch; my familiarity with the field of astronomical site testing would be that it's not a typical analysis for most astronomical observatories. Some citations might help make that case. Sites can be orographically very complicated (usually on mountains) and ERA has poor spatial resolution (on the order of many kilometers), so that really matters. The referenced paper (Liu 2025) refers to oceanographic measurements, not specifically astrophysical observatory sites. Maybe what the author is trying to get at is that this Liu et al. analysis (over water) is relevant to the HSO site, which is situated on the shore of a large water reservoir. Is that what is meant? If so, please clarify and better justify this.

R4. Yes. Thank you. I agree. Comments have been added to the text.

Section two, second par.: Is this analysis of hourly *daytime* values between 1940 and 2025? One expects those are the only ones relevant to solar astronomy, and nighttime ones could introduce bias, esp. if more stable.  Or otherwise, how is this corrected for solar elevation? Presumably, cloudy days are also irrelevant, and so deleted from the records. Were those corrected/accounted for?

Yes, the study only examines daytime hourly data. Since the reanalysis data are available hourly, nighttime hours can be excluded with a certain accuracy.

 

Figure 1 seems to show a SDIMM setup at the observatory.  But then, what data were taken?  A strong recommendation, if data were taken, is a table here showing the measurements, with statistics of the relevant values: mean, median seeing, etc.

Yes. Thank you. I have added this information.

 

I cannot figure out what Figure 2 is meant to show. Apart from colour, Figures 2a and 2b look essentially identical, although they are presumably for different periods, pre/post 1970.  A recommendation, if this is intended to somehow show a measurable difference between the two periods, is to overplot the two.  Is this instead meant to somehow show the *measured* SDIMM profile relative to the two ERA-5 profiles during July? That could make sense, as a test, but I can't tell; please explain.  The figure caption should be clear enough that there is no confusion.

The figures have been changed

 

Chapter 3: Spelling should be "Astroclimatic."

Thank you. I corrected it.

 

How all these many profiles are shown was confusing to me, and I struggled to figure out what they all mean: There are three separate figures of two profiles each (Figures 3, 4, 5) so six panels; but, as far as I can tell, these are really just two different figures: meant to show the difference between average profiles for January and July. 

 

Thank you for your comment; all the graphs are now shown in two summary figures.

 

That choice - averages of just January/July - and not broader winter/summer average over some prescribed period of months, say, may make sense, if it perhaps highlights the largest seasonal difference.  But that argument is not made, as far as I can tell.  Please justify this choice.

 

Comments have been added to the text. It is necessary to take into account differences in both cloudiness and optical turbulence at different altitudes.

 

Maybe Figure 4 is actually Figure 2b?, as it is the January equivalent to the July 1940-1969 (a), and 1970-1999 (b), there.  If so, I recommend putting them together there instead, and explain in the caption why these profiles are shown together (to show the historical difference between January and July, presumably).

The structure of the figures has been changed; please see the revised manuscript.

 

If the measured profiles (from image-analysis) are shown in Figure 3, why are they shown together with simulated ones? It should be obvious via the captions how each is derived.

The figure shows simulated vertical profiles. As far as I know, there are no measured profiles at the station.

 

It appears Figure 3a and Figure 5a are identical. Why?  In my view, I think many of the others are redundant, too. Figure 6 seems instead to collect up all the previous ones to show there is no discernible change in those profiles (July/January) over the span 1989 to 2025; so that can stay as is, and delete the intermediate ones, perhaps.  Figure 7 serves to highlight the difference, by showing side-by-side; that makes sense.  But the caption could be more clear, and actually state how it differs from Figure 6 (that it is restricted to 2000-2025).

The structure of the figures has been changed; please see the revised manuscript.

Table 2 and 3, I believe, record only the ERA-5 model-generated values for r_0 (January/July) without direct measurements; please state so in the title, if so. That there are *measured* profiles too, for some fraction of this time (presumably 2000-2025) could be included here too, if true.  There is a setup shown in the introduction, but no data presented, as far as I can tell.

Yes. These are simulated data.

The paragraph following: Can you actually say there is measured evidence of a "...significant reduction in large-scale horizontal temperature contrasts .."? If so, it may bolster the case that this is climate-change related. End of par., grammar/typo: " ... and the level of scattered light may also drop."

I have added the results of calculations for large-scale horizontal temperature contrasts.

Section 3.2 first par. first. sent.: "Understanding optical turbulence within the atmospheric boundary layer is essential for refining atmospheric motion analysis .. "  This statement seems to start off on the wrong foot, as optical turbulence and *physical* atmospheric turbulence are not the same thing; optical turbulence involves only index of refraction variation, which can occur for very, very weak physical turbulence with essentially no vertical atmospheric motion at all, i.e. within pure laminar airflow. I think what the author is getting at is in the exact opposite sense, understanding *optical* turbulence requires understanding the nature of physical turbulence near the boundary layer. Please refine this statement to avoid this logical confusion. The last sentence of this paragraph is correct.

I corrected it. You are right.

 

Figure 8: Is this figure out of place? I cannot find where this figure is discussed in the text. Where is F_d defined?  What does this figure show? The caption should say.

I added comments in the text. Please see it.

Figure 9/10: These figures are out of place; as it should appear in Section 3.2, where the temporal trends in BHL are discussed. The plotted linear trends are not particularly convincing; a suggestion would be plotting the lower and upper quartiles of the allowed fits. Smoothed versions (by several years) might also help show a trend. There are some overplotted dashed curves, it seems, that might be that, but the captions should define those; the contrast with the blue curves is poor.

The figures have been corrected, and significance indicators p and t are given. It should be noted that a p-value less than 0.05 or a t-statistic greater than 2 suggests the significance and reliability of the results obtained.
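The significance check described in this response (fitting a linear trend and reading off p and t) can be sketched as follows. This is a minimal, hypothetical illustration with synthetic data; the series, trend magnitude, and noise level are invented for the example and are not the manuscript's actual BLH values.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical yearly boundary-layer-height series with a weak imposed trend
years = np.arange(1940, 2025)
blh = 1000.0 + 0.8 * (years - 1940) + rng.normal(0, 15, years.size)

# Least-squares linear fit; linregress reports the slope, its standard
# error, and the two-sided p-value of the slope being nonzero
res = stats.linregress(years, blh)
t_stat = res.slope / res.stderr  # t-statistic of the fitted slope

print(f"slope = {res.slope:.3f} m/yr, p = {res.pvalue:.3g}, t = {t_stat:.1f}")

# Rule of thumb cited in the response: p < 0.05 or |t| > 2 -> significant
significant = res.pvalue < 0.05 or abs(t_stat) > 2
```

With the imposed trend well above the noise level, the test flags the trend as significant; with a flat series it would not.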

Figure 11: Definitely out of place, as it should appear with Section 3.3.  Frankly, the plotted mean trends here are not at-all convincing against these near-uniform clouds of datapoints, apart from "saturation" near high and low values; what are the quartile allowed ranges? It needs somewhere to be clarified how uniform the data are in time and quality; presumably there is confidence that any trend is not due simply to biases in those, but that has to stated explicitly.

The figures have been corrected, and significance indicators p and t are given. It should be noted that a p-value less than 0.05 or a t-statistic greater than 2 suggests the significance and reliability of the results obtained.

That said, a recommendation is to see if clipping the data changes the trends. For example, delete all TCC data above some thresholds, say, 0.5 or 0.8. Does that impact the supposed trend? It may not matter if observations are not made at high TCC anyway, but this can be explained, which would certainly help bolster any claim of a trend in TCC with relevance to observations.

When filtering, trends change slightly. This is expected. Thank you for your comment.

This seems okay; no strong claim is made about trends, which seems in keeping with the analysis. Even so, with regard to the previous comment on trends, and the statement in last sent. of last par. "In TCC, ERA-5 shows negative trends at HSO." That doesn't seem so clear, so may be overstated, actually; although perhaps can be strengthened by clipping the data by TCC-level and rechecking, as suggested.

Trends in boundary layer cloudiness and other characteristics are representative in most cases. Their representativeness is indicated by significance indices p and t.

 

Also, it's not clear to me what became of the "third task" of this study discussed above, related to trends in atmospheric scattered light, and seeing. I can't seem to find a clear statement about whether this was evidenced in the data or not, which presumably should appear here in the discussion.

I address the third task, and scattered light, which tends to decrease over the long term, in the course of the reasoning. The text has been corrected.

The author should be commended for looking-for and reporting-on possible effects of climate change, but I'd say it seems to be overstating confidence that those are actually seen in these data: it might be enough here to say "It may be that climate change is affecting ..."  Unless some better, clearer case for a connection can be stated here.

 

It seems to me that the model data from the reanalysis show representative relationships; this is confirmed by the significance indicators. Thank you again for your attention. It's worth noting that the climate trends are not large, and rightly so: climate change at the equator does exist, but it's relatively weak compared to the polar regions. Nevertheless, the calculated indicators confirm their representativeness in many cases.

Author Response File: Author Response.pdf

Reviewer 3 Report

Comments and Suggestions for Authors

As climate change develops, it is of concern for all non-laboratory science to identify how environmental changes will affect observations.  This paper examines changes relevant to solar observing from 1940 until the present time at the Huairou Solar Observing Station (HSO).  The thrust of the paper, in a way, is presented as relevant to the Large Solar Vacuum Telescope and it is not clearly explained how the data from that telescope and HSO are related "chosen as an example" line 56.  The paper is timely and interesting.  It should be publishable with minor changes to help readers who are not directly in this field.

The paper examines historical records of atmospheric properties to identify changes relevant to solar observations - particularly seeing (and its origins at various altitudes) and cloud cover. Standard characteristic parametrisations are used (formulae 1-6) but (some) source references are needed for these equations to aid the non-expert. Also, the equations include numerical constants and so the units of the terms in the formulae need to be included; e.g. L₀ refers, perhaps, to a distance in metres but, as an example, in Table 1, the units (of r₀) are centimetres. The terms just need to have explicit units stated.

Line 183 talks about the boundary height changing by month from summer to winter but there is no evidence presented here for that interesting result - maybe hard to see in Figure 9.  One wonders if this applies universally or just to this dataset.

Figures 2-6 are very interesting.  I wondered if the arguments would be clearer if the a and b graphs could be superimposed into one for each case to show any similarities or changes more clearly (the error spread may preclude this, but as the graphs are presently displayed, any changes are hard to see).

As noted above, the paper is publishable but small changes would help the reader.

Author Response

Dear reviewer, thank you for your time and effort in improving this manuscript. I sincerely appreciate your valuable comments. I've tried to answer them in the text of the manuscript.

As climate change develops, it is of concern for all non-laboratory science to identify how environmental changes will affect observations.  This paper examines changes relevant to solar observing from 1940 until the present time at the Huairou Solar Observing Station (HSO).  The thrust of the paper, in a way, is presented as relevant to the Large Solar Vacuum Telescope and it is not clearly explained how the data from that telescope and HSO are related "chosen as an example" line 56.  The paper is timely and interesting.  It should be publishable with minor changes to help readers who are not directly in this field.

To calculate the intensity of optical turbulence, previously obtained proportionality coefficients between average meteorological characteristics and microscale (optical) turbulence were used (at the LSVT). These coefficients were found by comparing micrometeorological and optical measurements (a Shack-Hartmann sensor) with reanalysis data, and were determined separately for different atmospheric conditions.

The paper examines historical records of atmospheric properties to identify changes relevant to solar observations - particularly seeing (and its origins at various altitudes) and cloud cover.  Standard characteristic parametrisations are used (formulae 1-6) but (some) source references are needed for these equations to aid the non-expert.  Also, the equations include numerical constants and so the units of the terms in the formulae need to be included e.g. Lo refers, perhaps, to a distance in metres but, as an example, in table 1, the units (of ro) are centimetres.  The terms just need to have explicit units stated.

Classical equations are used; the main reference is the work of Dewan.

Line 183 talks about the boundary height changing by month from summer to winter but there is no evidence presented here for that interesting result - maybe hard to see in Figure 9.  One wonders if this applies universally or just to this dataset.

Thank you very much. The figures have been corrected and significance indicators have been provided.

Figures 2-6 are very interesting.  I wondered if the arguments would be clearer if the a and b graphs could be superimposed into one for each case to show any similarities or changes more clearly (the error spread may preclude this, but as the graphs are presently displayed, any changes are hard to see).

The figures have been corrected; please see the manuscript.

As noted above, the paper is publishable but small changes would help the reader.

Thank you very much. I have added a number of changes to the text to deepen the discussion of the physics behind the changes in atmospheric characteristics.

Author Response File: Author Response.pdf