Article

Estimation of Astronomical Seeing with Neural Networks at the Maidanak Observatory

by
Artem Y. Shikhovtsev
1,*,
Alexander V. Kiselev
1,
Pavel G. Kovadlo
1,
Evgeniy A. Kopylov
2,
Kirill E. Kirichenko
1,
Shuhrat A. Ehgamberdiev
3,4 and
Yusufjon A. Tillayev
3,4
1
Institute of Solar-Terrestrial Physics SB RAS, Irkutsk 664033, Russia
2
Institute of Astronomy, Russian Academy of Sciences, Moscow 119017, Russia
3
Ulugh Beg Astronomical Institute UzAS, Tashkent 100052, Uzbekistan
4
Department of Astronomy and Astrophysics, Physics Faculty, National University of Uzbekistan, Tashkent 100174, Uzbekistan
*
Author to whom correspondence should be addressed.
Atmosphere 2024, 15(1), 38; https://doi.org/10.3390/atmos15010038
Submission received: 26 September 2023 / Revised: 27 November 2023 / Accepted: 25 December 2023 / Published: 28 December 2023
(This article belongs to the Section Atmospheric Techniques, Instruments, and Modeling)

Abstract

In the present article, we study the possibilities of machine learning for the estimation of seeing at the Maidanak Astronomical Observatory (38°40′24″ N, 66°53′47″ E) using only Era-5 reanalysis data. Seeing is usually associated with the integral of the turbulence strength $C_n^2(z)$ over the height z. Based on seeing measurements accumulated over 13 years, we created ensemble models of multi-layer neural networks under a machine learning framework that includes training and validation. For the first time, we have simulated night-time optical turbulence (seeing variations) with deep neural networks trained on a 13-year database of astronomical seeing. A set of neural networks for simulating night-time seeing variations was obtained; for these networks, the linear correlation coefficient ranges from 0.48 to 0.68. We show that seeing modeled with neural networks is well described by meteorological parameters, including wind-speed components, air temperature, humidity, and turbulent surface stresses. One fundamentally new result is that the structure of small-scale (optical) turbulence over the Maidanak Astronomical Observatory depends negligibly, if at all, on the large-scale vortex component of atmospheric flows.

1. Introduction

Atmospheric flows are predominantly turbulent, both in the free atmosphere and in the atmospheric boundary layer. Within these layers, a continuous spectrum of turbulent fluctuations forms over a wide range of spatial scales, from the largest vortices, associated with the boundary conditions of the flow under consideration, down to the smallest eddies, which are determined by viscous dissipation. The energy spectrum of turbulence, especially in its short-wavelength range, is significantly deformed with height above the ground; the structure and energy of optical turbulence also change noticeably.
The Earth’s atmosphere significantly limits ground-based astronomical observations [1,2,3,4]. Due to atmospheric turbulence, wavefronts distort, solar images are blurred, and small details in the images become indistinguishable. Optical turbulence has a decisive influence on the resolution of stellar telescopes and the efficiency of using adaptive optics systems. The main requirement for high-resolution astronomical observations is to operate under the quietest, optically stable atmosphere characterized by a weak small-scale (optical) turbulence.
One of the key characteristics of optical turbulence is seeing [5,6]. The seeing parameter is associated with the full width at half-maximum of the long-exposure seeing-limited point spread function at the focus of a large-diameter telescope [7,8]. This parameter can be expressed through the vertical profile of optical turbulence strength. In particular, for isotropic three-dimensional Kolmogorov turbulence, seeing can be estimated from the integral of the structure characteristic of turbulent fluctuations of the air refractive index $C_n^2(z)$ over the height z [8]:

$$\mathrm{seeing} = 0.98\,\lambda \left[ 0.423 \sec\alpha \left( \frac{2\pi}{\lambda} \right)^{2} \int_{0}^{H} C_n^2(z)\,dz \right]^{3/5},$$

where H is the height of the optically active atmosphere, α is the zenith angle, and λ is the light wavelength.
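As an illustration of Equation (1), the following minimal Python sketch (not the authors' code; the exponential $C_n^2$ profile is hypothetical) evaluates seeing from a model turbulence profile at the zenith:

```python
import numpy as np

# Sketch of Eq. (1): seeing from a Cn^2(z) profile, assuming Kolmogorov
# turbulence. The profile below is a hypothetical illustration.
def seeing_from_cn2(z, cn2, wavelength=0.5e-6, zenith_angle=0.0):
    """Seeing in radians from Cn^2(z) [m^(-2/3)] given on heights z [m]."""
    integral = np.sum(0.5 * (cn2[1:] + cn2[:-1]) * np.diff(z))  # trapezoid rule
    sec_alpha = 1.0 / np.cos(zenith_angle)
    r0 = (0.423 * sec_alpha * (2 * np.pi / wavelength) ** 2 * integral) ** (-3 / 5)
    return 0.98 * wavelength / r0  # equivalent to Eq. (1)

z = np.linspace(0.0, 20000.0, 2001)       # heights, m
cn2 = 1e-16 * np.exp(-z / 1500.0)         # illustrative Cn^2 profile
seeing_arcsec = np.degrees(seeing_from_cn2(z, cn2)) * 3600.0
```

For this hypothetical profile the result is a few tenths of an arcsecond, i.e., of the order of the values reported for good astronomical sites.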
In astronomical observations through the Earth's turbulent atmosphere, the atmospheric resolution (seeing), as a rule, lies in the range 0.7–2.0″ (λ = 0.5 μm). In conditions of intense optical turbulence along the line of sight, seeing increases to 4.0–5.0″. At the same time, modern problems in astrophysics associated with high-resolution observations require seeing on the order of 0.1″ or even better [9,10]. In order to improve the quality of solar or stellar images and achieve high resolution, special adaptive optics systems are used [11,12]. Monitoring and forecasting seeing are necessary for the functioning of adaptive optics systems of astronomical telescopes and for planning observing time.
Correct estimations of seeing and prediction of this parameter are associated with the development of our knowledge about:
(i)
The evolution of small-scale turbulence within the troposphere and stratosphere.
(ii)
Inhomogeneous influence of mesojet streams within the atmospheric boundary layer on the generation and dissipation of turbulence.
(iii)
Suppression of turbulent fluctuations in a stably stratified atmospheric boundary layer and the influence of multilayer air temperature inversions on vertical profiles of optical turbulence.
(iv)
The phenomenon of structurization of turbulence under the influence of large-scale and mesoscale vortex movements [13].
Among the best tools for simulating geophysical flows are machine learning models and, in particular, deep neural networks [14]. Neural networks are used for the estimation and prediction of atmospheric processes. In addition to numerical atmospheric models and statistical methods [15], machine learning is one of the tools for estimating and predicting atmospheric characteristics, including optical turbulence [16,17,18]. Cherubini et al. presented a machine-learning approach to translate the Mauna Kea Weather Center experience into a forecast of the nightly average optical turbulent state of the atmosphere [19]. In [20], a hybrid multi-step model for the prediction of optical turbulence is proposed, combining empirical mode decomposition, a sequence-to-sequence architecture, and a long short-term memory network.
Thanks to their ability to learn from real data and to fit complex models, machine learning and artificial intelligence methods are being successfully implemented for multi-object adaptive optics. Machine learning methods are also applied to the problem of restoring wavefronts distorted by atmospheric turbulence.
This paper discusses the possibilities of using machine learning methods and deep neural networks to estimate the seeing parameter at the site of the Maidanak observatory (38°40′24″ N, 66°53′47″ E). The site is considered one of the best ground-based sites on Earth for optical astronomy. The goal of this work is to develop an approach for estimating seeing at the Maidanak observatory through large-scale weather patterns and, thereby, anticipate the average optical turbulence state of the atmosphere.

2. Evolution of Atmospheric Turbulence

It is a known fact that turbulent fluctuations of the air refractive index n′ are determined by turbulent fluctuations of the air temperature T′ or potential temperature θ′: n′ ∝ T′ ∝ θ′. In order to select the optimal dataset for training the neural network, we considered the budget equation for the energy of potential temperature fluctuations $E_\theta = \frac{1}{2}\overline{\theta'^2}$ [21]:

$$\frac{d E_\theta}{d t} + \frac{\partial Q_\theta}{\partial z} = -F_z \frac{\partial \bar{\theta}}{\partial z} - \epsilon_\theta,$$

where the substantial derivative is $\frac{d}{dt} = \bar{u}\frac{\partial}{\partial x} + \bar{v}\frac{\partial}{\partial y}$, $\bar{u}$ and $\bar{v}$ are the mean horizontal components of wind speed, t is the time, $Q_\theta$ is the third-order vertical turbulent flux of $E_\theta$, $F_z$ is the vertical flux of potential temperature fluctuations, $\frac{\partial \bar{\theta}}{\partial z}$ is the vertical partial derivative of the mean value of θ, and $\epsilon_\theta$ is the rate of dissipation.
Analyzing this equation, we can see that the operator $\frac{d}{dt}$ determines changes in $E_\theta$ due to large-scale advection of air masses. The second term on the left-hand side of Equation (2) is neither productive nor dissipative and describes the energy transport. The third-order vertical turbulent flux $Q_\theta$ can be expressed through the fluctuations of the squared potential temperature $\theta'^2$ and the fluctuations of the vertical velocity component $w'$:

$$Q_\theta = \frac{1}{2} \overline{\theta'^2 w'}.$$

For small turbulent fluctuations of air temperature, $Q_\theta$ can be neglected. An alternative approach is to construct a regional model of $Q_\theta$ changes using averaged vertical profiles of meteorological characteristics.
The term $-F_z \frac{\partial \bar{\theta}}{\partial z}$ is of great interest. The parameter $F_z$ describes the energy exchange between turbulent potential energy and turbulent kinetic energy and determines the structure of optical turbulence. It is also important to emphasize that this exchange between energies is governed by the Richardson number.
The down-gradient formulation for $F_z$ is:

$$F_z = -K_H \frac{N^2}{\beta},$$

where the turbulence coefficient $K_H$ can be defined as a constant for a thin atmospheric layer or specified in the form of some model, $\beta = g/T_0$, g is the gravitational acceleration, and $T_0$ is a reference value of absolute temperature. The squared Brunt–Vaisala frequency $N^2$ describes the oscillation frequency of an air parcel in a stable atmosphere through average meteorological characteristics:

$$N^2 = \frac{g}{\bar{\theta}} \frac{d\bar{\theta}}{dz}.$$
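For concreteness, Equation (5) can be evaluated numerically; the sketch below (illustrative only, with a hypothetical stably stratified profile) uses a finite-difference vertical gradient:

```python
import numpy as np

# Sketch of Eq. (5): squared Brunt-Vaisala frequency from a mean
# potential-temperature profile. The linear 4 K/km profile is hypothetical.
def brunt_vaisala_sq(z, theta_bar, g=9.81):
    dtheta_dz = np.gradient(theta_bar, z)   # centered finite differences
    return (g / theta_bar) * dtheta_dz

z = np.linspace(0.0, 2000.0, 201)           # heights, m
theta = 290.0 + 0.004 * z                   # stably stratified profile, K
n2 = brunt_vaisala_sq(z, theta)             # s^-2, positive for stable air
```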
Large-eddy simulations [21] have revealed the dependencies of both the vertical turbulent momentum flux and the vertical turbulent heat flux on the gradient Richardson number, which is associated with the vertical gradients of wind speed and air temperature. These dependencies are complex; they demonstrate nonlinear changes in the vertical turbulent fluxes with increasing Richardson number. It can be noted that with an increase in the Richardson number from $10^{-2}$ to values greater than 10, the vertical turbulent momentum flux tends to decrease. For the vertical turbulent heat flux, on average, a similar dependence is observed, with a pronounced extremum for Richardson numbers from $4\cdot10^{-2}$ to $7\cdot10^{-2}$.
Following Kolmogorov, the dissipation rate may be expressed through the turbulent dissipation time scale $t_T$:

$$\epsilon_\theta = E_\theta (C_P t_T)^{-1}.$$

Here, $C_P$ is a dimensionless constant of order unity. In turn, the parameter $t_T$ is related to the turbulent length scale $L_s$:

$$t_T = L_s / E_k^{1/2} = L_s / \left( 0.5\,(\overline{u'^2} + \overline{v'^2}) \right)^{1/2}.$$
Substituting Formula (7) into Equation (6), we obtain:

$$\epsilon_\theta = E_\theta\, E_k^{1/2} (C_P L_s)^{-1}.$$
Analyzing Equation (8), we can note that the rate of dissipation of temperature fluctuations is determined by the turbulence kinetic and turbulence potential energies. In the atmosphere, the transition rate of turbulence potential energy into turbulence kinetic energy depends on the type and sign of thermal stability. This transition is largely determined by the vertical gradients of the mean potential air temperature.
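Equations (6)-(8) reduce to simple arithmetic once $E_\theta$, $E_k$, and $L_s$ are known; the sketch below uses purely illustrative values for all three:

```python
# Sketch of Eqs. (6)-(8): dissipation rate of the energy of potential
# temperature fluctuations. All numerical values here are illustrative.
def epsilon_theta(e_theta, e_k, l_s, c_p=1.0):
    t_t = l_s / e_k ** 0.5              # dissipation time scale, Eq. (7)
    return e_theta / (c_p * t_t)        # = E_theta * E_k^(1/2) / (C_P * L_s)

eps = epsilon_theta(e_theta=0.05, e_k=0.5, l_s=10.0)
```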
Given the above, we can emphasize that the real structure of optical turbulence is determined by the turbulent kinetic energy and turbulent potential energy. In turn, the turbulent kinetic energy and turbulent potential energy may be estimated using the averaged parameters of large-scale atmospheric flows. In particular, the energy characteristics of dynamic and optical turbulence can be parameterized through the vertical distributions of averaged meteorological characteristics. Correct parameterization of turbulence must also take into account certain spatial scales, determined by the deformations of the turbulence energy spectra. Among such scales, as a rule, the outer scale and integral scale of turbulence are considered.
To fully describe the structure of atmospheric small-scale (optical) turbulence, parameterization schemes must take into account:
(i)
Generation and dissipation of atmospheric turbulence as well as the general energy of atmospheric flows.
(ii)
The influence of air temperature inversion layers on the suppression of vertical turbulent flows [22]. This is especially important for the parameterization of vertical turbulent heat fluxes, which demonstrate the greatest nonlinearity for different vertical profiles of air temperature and wind speed.
(iii)
Features of mesoscale turbulence generation within air flow in conditions of complex relief [23].
(iv)
Development of intense optical turbulence above and below jet streams, including mesojets within the atmospheric boundary layer.
The structure of optical turbulence depends on the meteorological characteristics at different heights above the Earth's surface. As shown by numerous studies of atmospheric turbulence, the main parameters that determine the structure and dynamics of turbulent fluctuations are the wind-speed components, wind shears, vertical gradients of air temperature and humidity, the Richardson number, and buoyancy forces, as well as large-scale atmospheric characteristics [24,25]. Taking into account the dependence of optical turbulence on meteorological characteristics, vertical profiles of the horizontal components of wind speed, air temperature, humidity, atmospheric vorticity, and the vertical velocity component at various pressure levels were selected as input parameters for training the neural networks. The total cloud cover, surface wind speed, and air temperature, as well as the calculated values of the northward and eastward surface turbulent stresses, were selected as additional parameters. We should emphasize that information about the vertical profiles of meteorological characteristics is necessary to determine the seeing parameter with acceptable accuracy without the use of measurement data in the surface layer. Using measured meteorological characteristics in the surface layer of the atmosphere as input data would further significantly improve the accuracy of modeling variations in seeing.

3. Data Used

The approach based on the application of a deep neural network has a certain merit, as it allows one to search for internal relations between seeing variations and the evolution of background states of atmospheric layers at different heights. We use the medians of seeing estimated from measurements of differential displacements of stellar images at the site of the Maidanak observatory as predicted values. We should note that routine measurements of star image motion are made at the Maidanak observatory using the Differential Image Motion Monitor (DIMM) [26,27]. The database of measured seeing is available for two periods: 1996–2003 and 2018–2022. In Figure 1, we present the total amount of DIMM data for each month during the acquisition period. The 13-year dataset used covers a variety of atmospheric situations and is statistically robust. Analysis of Figure 1 shows that the smallest number of nights used for machine learning occurs in March. The best conditions correspond to August–October, when the observatory has a good amount of clear time.
The observed difference in the number of nights for different months is related to the atmospheric conditions limiting the observations (strong surface winds and high-level cloud cover).
We also used data from the European Centre for Medium-Range Weather Forecasts reanalysis (Era-5) [28] as inputs for training the neural networks. Meteorological characteristics at different pressure levels were selected from the Era-5 reanalysis database for two periods: 1996–2003 and 2018–2022. The night-to-night averaging of the reanalysis data corresponds to the averaging of the measured seeing.
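The night-to-night matching of the two data sources can be sketched as follows (a minimal illustration with hypothetical numbers, not the actual processing pipeline):

```python
import numpy as np

# Sketch: averaging hourly Era-5 samples over each night so that one input
# vector corresponds to one nightly seeing median. Data are hypothetical.
def nightly_average(values, night_ids):
    nights = np.unique(night_ids)
    return {int(n): float(np.mean(values[night_ids == n])) for n in nights}

night_ids = np.array([0, 0, 0, 1, 1, 1])              # two nights, 3 hours each
wind_700 = np.array([4.0, 5.0, 6.0, 8.0, 9.0, 10.0])  # m/s, illustrative
means = nightly_average(wind_700, night_ids)
```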

3.1. Era-5 Reanalysis Data

Reanalysis Era-5 is a fifth-generation database. The data in the Era-5 reanalysis are presented with high spatial and temporal resolution: the spatial resolution is 0.25°, and the time resolution is 1 h. Data are available for pressure levels ranging from 1000 hPa to 1 hPa. In the simulations, in addition to hourly data on pressure levels, we also used hourly data on single levels (air temperature at a height of 2 m above the surface and the horizontal components of wind speed at a height of 10 m above the surface).
We have verified the Era-5 reanalysis data for the region where the Maidanak Astronomical Observatory is located. Verification was performed by comparing semi-empirical vertical profiles of the Era-5 reanalysis with radiosounding data at the Dzhambul station. Dzhambul is one of the closest sounding stations to the Maidanak Astronomical Observatory.
In order to numerically estimate the deviations of the reanalysis data from the measurement data, we calculated the mean absolute errors and the standard deviations of air temperature and wind speed. The mean absolute errors and the standard deviations were estimated using the formulas [29]:
$$\Delta T = \frac{1}{N} \sum_{i=1}^{N} \left| T_i(z)^{(Era5)} - T_i(z)^{(rad)} \right|,$$

$$\Delta V = \frac{1}{N} \sum_{i=1}^{N} \left| V_i(z)^{(Era5)} - V_i(z)^{(rad)} \right|,$$

$$\sigma_T = \left[ \frac{1}{N} \sum_{i=1}^{N} \left( T_i(z)^{(Era5)} - T_i(z)^{(rad)} \right)^2 \right]^{0.5},$$

$$\sigma_V = \left[ \frac{1}{N} \sum_{i=1}^{N} \left( V_i(z)^{(Era5)} - V_i(z)^{(rad)} \right)^2 \right]^{0.5},$$

where z is the height, and $\Delta T$ and $\Delta V$ are the mean absolute errors of air temperature and wind speed, respectively. The superscripts (Era5) and (rad) indicate the type of data (Era-5 reanalysis and radiosondes). $\sigma_T$ and $\sigma_V$ are the root mean square deviations of air temperature and wind speed, respectively. N includes all observations for January 2023 and July 2023.
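Equations (9)-(12) correspond to the standard mean absolute error and root mean square deviation; a minimal sketch with hypothetical values at one pressure level:

```python
import numpy as np

# Sketch of Eqs. (9)-(12): MAE and RMS deviation between Era-5 and
# radiosonde values at one level. The arrays are hypothetical.
def mae(model, obs):
    return float(np.mean(np.abs(model - obs)))

def rmse(model, obs):
    return float(np.sqrt(np.mean((model - obs) ** 2)))

t_era5 = np.array([10.0, 5.0, -2.0, -10.0])   # air temperature, degrees C
t_rad = np.array([11.0, 4.0, -1.0, -12.0])
dt, sigma_t = mae(t_era5, t_rad), rmse(t_era5, t_rad)
```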
Figure 2 and Figure 3 show the vertical profiles of $\Delta T$, $\Delta V$, $\sigma_T$, and $\sigma_V$. The profiles are averaged over June–August and December–February. Analysis of these figures shows that, in winter, $\Delta T$, $\Delta V$, $\sigma_T$, and $\sigma_V$ are 1.3 °C, 2.8 m/s, 1.7 °C, and 3.3 m/s, respectively. In winter, high deviations of the measured air temperature from the reanalysis-derived values are observed mainly in the lower layer of the atmosphere (up to 850 hPa). We attribute these deviations to the inaccuracy of modeling surface thermal inversions and mesojets in the reanalysis. Above the height corresponding to the 850 hPa level, the deviations decrease significantly ($\Delta T$ ∼ 1.2 °C, $\sigma_T$ = 1.5 °C). In the vertical profiles of wind speed, the height dependence of the deviations is more ordered; in particular, significant peaks are observed throughout the entire thickness of the atmosphere. The standard deviation of wind speed in the atmospheric layer up to the 850 hPa level (in the lower atmospheric layers) is higher than 6.0 m/s. In the layer above the 850 hPa pressure level, $\Delta V$ and $\sigma_V$ decrease to 1.8 m/s and 2.2 m/s, respectively.
In summer, the temperature deviations between the radiosonde and Era-5 data decrease. Analysis of Figure 3 shows that the remaining deviations are due to the fact that the reanalysis often does not correctly reproduce the large-scale jet stream. Also, in summer, the Era-5 reanalysis overestimates the surface values of air temperature. $\Delta T$ and $\sigma_T$ in the entire atmosphere are 1.7 °C and 2.3 °C, respectively. $\sigma_T$ values are 2.8 °C and 3.6 °C within the lower atmospheric layers and at the height of the large-scale jet stream, respectively.
Considering wind speed, the deviations between the measured and modeled parameters are pronounced. Δ V and σ V are 2.2 m/s and 2.7 m/s, respectively. In the lower layers of the atmosphere, the mean absolute error and the root mean square deviation are 2.9 m/s and 3.6 m/s. High deviations in the wind speed correspond to the atmospheric levels under a large-scale jet stream (200 hPa). Within the upper atmospheric layers, σ V can reach 4.0 m/s.
Thus, in this section we examined how well the reanalysis data corresponding to a certain computational cell describe the real vertical profiles of wind speed and air temperature. In general, there are atmospheric situations in which the reanalysis reproduces the profiles with a large error: it does not reproduce thin thermal inversions or mesojet streams in the lower atmospheric layers, and it overestimates or underestimates the speed of the air flow in the large-scale jet stream. In order to increase the efficiency of training the neural networks, below we also consider model weather data with the best reproducibility of vertical changes. In training, we used meteorological characteristics at all available pressure levels from 700 hPa to 3 hPa. The selection of the lowest pressure surface, 700 hPa, is determined by the elevation of the observatory (2650 m above sea level, with a surface pressure $P_{surf}$ of 733 hPa) and of the surrounding areas.

3.2. Seeing Values Derived from Image Motion Measurements

The predicted value is the nightly median of seeing. Seeing is calculated from image motion measurements; the theory for calculating seeing from such measurements is described in [7]. Using the Kolmogorov model, the variance of the differential image motion $\sigma_\alpha^2$ may be estimated from the relation:

$$\sigma_\alpha^2 = K \lambda^2 r_0^{-5/3} D^{-1/3},$$

where λ is the light wavelength, D is the telescope aperture diameter, and $r_0$ is the Fried parameter. Seeing is related to $r_0$ by the formula:

$$\mathrm{seeing} = 0.98\, \frac{\lambda}{r_0},$$
The coefficient K in Formula (13) depends on the ratio of the distance between the centers of the apertures $S_d$ to the aperture diameter $d_s$, on the direction of image motion, and on the type of tilt. For image motion determined by the centers of gravity of the images, the coefficients for the longitudinal and transverse directions are:

$$K_l = 0.34 \left( 1 - 0.57 \left( \frac{S_d}{d_s} \right)^{-1/3} - 0.04 \left( \frac{S_d}{d_s} \right)^{-7/3} \right),$$

$$K_t = 0.34 \left( 1 - 0.855 \left( \frac{S_d}{d_s} \right)^{-1/3} + 0.03 \left( \frac{S_d}{d_s} \right)^{-7/3} \right).$$
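Formulas (13)-(16) together give seeing from the measured image-motion variance; the sketch below (illustrative aperture geometry and variance, not the Maidanak DIMM specification) shows the longitudinal case, identifying D in Eq. (13) with the aperture diameter:

```python
import numpy as np

# Sketch of Eqs. (13)-(16): Fried parameter and seeing from the variance of
# longitudinal differential image motion. Instrument numbers are illustrative.
def dimm_seeing(var_long, d_s, s_d, wavelength=0.5e-6):
    b = s_d / d_s                                                   # separation ratio
    k_l = 0.34 * (1 - 0.57 * b ** (-1 / 3) - 0.04 * b ** (-7 / 3))  # Eq. (15)
    r0 = (k_l * wavelength ** 2 / (var_long * d_s ** (1 / 3))) ** (3 / 5)
    return 0.98 * wavelength / r0                                   # Eq. (14), rad

seeing_rad = dimm_seeing(var_long=1.0e-12, d_s=0.1, s_d=0.2)
seeing_arcsec = np.degrees(seeing_rad) * 3600.0
```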
Using all the measurement data, we computed the seeing values shown as histograms in Figure 4a,b. Analysis of Figure 4a,b shows that the range of changes in the integral intensity of optical turbulence at the Maidanak Astronomical Observatory site is narrow: the bulk of the values fall within the range from 0.6″ to 0.9″. Despite the narrow range of seeing changes, we can note that the neural networks have been trained for a wide range of atmospheric situations.

4. Neural Network Configuration for Estimation of Seeing

An artificial neural network is a complex function that connects inputs and outputs in a certain way. The construction of a neural network is an attempt to find internal connections and patterns between inputs, their neurons, and outputs in the study of phenomena and processes. The aim of this study is to show how capable an artificial neural network is of estimating seeing variations for the Maidanak Astronomical Observatory, which is located in highly favorable atmospheric conditions.
A flowchart for the creation of neural networks is shown in Figure 5. According to this flowchart, the initial time series are divided into training, checking, and validation datasets. The main stage is training the neural network and generation of partial models. In particular, learning is based on data pairs of observed input and output variables. Using different inputs (meteorological characteristics) we optimized the final structure of the neural network.
An important step in seeing simulation with neural networks is the selection of input variables. The inputs are selected based on the physics of turbulence formation described in Section 2. According to the theory, the formation of turbulent fluctuations in the air temperature and, consequently, the air refractive index is largely determined by the advection of air masses, the rate of dissipation of fluctuations, as well as vertical turbulent flows, which depend on vertical gradients of meteorological characteristics. In addition, the turbulence structure is closely related to large-scale atmospheric disturbances, including meso-scale jet streams and large atmospheric turbulent vortices. In particular, the inputs are wind speed components, air temperature and humidity, vorticity of air flows, and the values of surface turbulent stresses. The final configuration of the neural network is formed by excluding neurons whose weights are minimal. As we will see below, the neural networks obtained that best reproduce the seeing variations do not contain neurons functionally related to atmospheric vortices.
To create configurations of neural networks connecting inputs and outputs, we chose the group method of data handling (GMDH) [30,31,32]. The GMDH method is based on a model of the relationship between the free variables x and the dependent parameter y (seeing) [33]. To identify relationships between the seeing averaged over the night and the vertical profiles of mean meteorological characteristics, we used the Kolmogorov–Gabor polynomial, which is the sum of linear, quadratic, cubic, and covariance terms [30]:

$$y = W_0 + \sum_{i=1}^{m} W_i x_i + \sum_{i=1}^{m} \sum_{j=1}^{m} W_{ij} x_i x_j + \sum_{i=1}^{m} \sum_{j=1}^{m} \sum_{k=1}^{m} W_{ijk} x_i x_j x_k + \dots$$
In Formula (17), m denotes the number of free variables, and $W_i$, $W_{ij}$, $W_{ijk}$ are the weights. Seeing is considered as a function of a set of free variables [30]:

$$\mathrm{seeing} = f(x_1, x_2, x_1 x_2, x_1^2, \dots) = F(z_1, z_2, z_3, \dots).$$

The modeled seeing can be expressed in the following form [30]:

$$\mathrm{seeing} = W_0 + \sum_{i=1}^{F_0} W_i z_i = W_0 + \mathbf{W} \cdot \mathbf{z},$$

where $\mathbf{W} \cdot \mathbf{z}$ is the scalar product of the weight vector and the vector of terms $z_i$. The correct estimation of the outputs is mainly determined by the trained parameters, the weights W. The goal of training is to find the weights at which the created neural network produces minimal errors.
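The core idea of fitting the weights of a Kolmogorov-Gabor polynomial can be sketched with ordinary least squares on synthetic data (the real GMDH procedure additionally grows and prunes the set of terms):

```python
import numpy as np

# Sketch: fitting the weights of a truncated Kolmogorov-Gabor polynomial
# (bias + linear + one pairwise term) by least squares. Data are synthetic.
rng = np.random.default_rng(0)
x = rng.normal(size=(200, 2))                         # two free variables
y = 0.7 + 0.5 * x[:, 0] - 0.3 * x[:, 1] + 0.2 * x[:, 0] * x[:, 1]

# Design matrix of terms z_i = [1, x1, x2, x1*x2]
Z = np.column_stack([np.ones(len(x)), x[:, 0], x[:, 1], x[:, 0] * x[:, 1]])
w, *_ = np.linalg.lstsq(Z, y, rcond=None)             # trained weights W
```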
The result of applying the GMDH method is a set of neural network models containing internal connections between the input meteorological parameters, their derivatives, and the output. The best solution must correspond to the minimum of the loss function, whose value depends on all the weights. The loss function can be written as [30]:

$$R_{ext}^{(validate)} = \left[ \frac{1}{M} \sum_{i \in validate}^{M} \left( \mathrm{seeing}_i - \mathrm{seeing}_i^{(training)} \right)^2 \right]^{0.5},$$

The loss function $R_{ext}^{(validate)}$ is estimated using the validation data (new data).
Finding the minimum of the loss function is a rather difficult task due to the multidimensionality of the function, determined by the number of input variables. To find the minimum of the loss function on the training dataset, a gradient descent algorithm is used, based on calculating the error gradient vector (the partial derivatives of the loss function with respect to all weights). In the simulations, the initial weights are initialized randomly with small values. The weights are updated using the error backpropagation method (from the last neural layer to the input layer). In this method, the calculation of derivatives of composite functions makes it possible to determine the weight increments that reduce the loss function: for each neuron in layer $N_{neur}+1$, the errors and weight increments are calculated and propagated to the neurons in the previous layer $N_{neur}$. The optimal neural network should correspond to the minimum of the loss function $R_{ext}^{(validate)}$.
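As a minimal illustration of the gradient-descent update (synthetic data and a plain linear output layer, far simpler than the networks of this paper):

```python
import numpy as np

# Sketch: gradient descent on a mean squared loss for output-layer weights.
# Data, learning rate, and iteration count are illustrative choices.
rng = np.random.default_rng(1)
Z = rng.normal(size=(100, 3))                 # inputs to the output layer
w_true = np.array([0.4, -0.2, 0.1])
y = Z @ w_true                                # synthetic targets

w = np.zeros(3)                               # small initial weights
lr = 0.05
for _ in range(500):
    grad = 2.0 / len(y) * Z.T @ (Z @ w - y)   # gradient of the squared loss
    w -= lr * grad                            # weight increment step
```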
After training, we obtained a wide set of neural network configurations for estimating seeing. Based on the estimated values of the loss function, two neural network configurations were chosen; they are shown in Figure A1 and Figure A2. These configurations were obtained using all the training data. The numbers to the right of the variables in the figures correspond to pressure levels. The designations used in the neural networks are given in Table 1. The input atmospheric characteristics, the number of layers, and the neural network structure are determined automatically by the learning algorithm used.
These configurations correspond to minima of the loss function; in both cases the loss values are close to 0.04. At the same time, the configuration shown in Figure A2 reproduces the seeing variations better. Figure 6 and Figure 7 show changes in the modeled and measured seeing values for neural network configurations 1 and 2, respectively.
For these configurations, the linear Pearson correlation coefficient between the model and measured seeing values reached 0.67 and 0.7, respectively (the training datasets). For the validation dataset, the linear Pearson correlation coefficient for configuration 1 was 0.49; for configuration 2 the correlation coefficient increased to 0.52. Thus, using all data, the efficiency of training a neural network that predicts variations in seeing is not very high.
The training process identified the important inputs. An analysis of the obtained neural network configurations shows that the main parameters determining the target values of total seeing are the northward surface turbulent stresses and the wind-speed components at the model levels closest to the summit, that is, 650 and 700 hPa. We should also emphasize that excluding surface turbulent stresses from the training process degrades the statistical measures: neural network configurations obtained without them demonstrate low correlation coefficients of ∼0.3. The wind-speed components at the 250 hPa level and the air temperature at 2 m also contribute, though to a lesser extent, to determining atmospheric seeing. Atmospheric situations with high air humidity show a negligible influence.
Development of neural network configurations using the GMDH method for the Maidanak Astronomical Observatory was complicated by certain conditions. At the Maidanak Astronomical Observatory, atmospheric conditions with low optical turbulence energy along the line of sight, and, more importantly, with small amplitudes of change in the magnitude of seeing from night to night are often observed. In order to optimize the learning process and find a network with better reproducibility of seeing variations, we filtered the initial data. The conditions of filtering are:
(i)
We chose only atmospheric situations with the cloud fraction in the calculated cell less than 0.3.
(ii)
We excluded nights when the vertical profiles of wind speed and air temperature obtained from the reanalysis data significantly deviated from the reference vertical profiles (from data measured at the Dzhambul radiosounding station).
(iii)
We retained only nights with more than 50 measurements of optical turbulence per night. Nights with a low quantity of measurement data correspond to unfavorable atmospheric conditions (strong surface winds and upper cloudiness).
Since the reanalysis demonstrates the highest deviations precisely for the lower layers of the atmosphere, we excluded most of the nights when seeing was determined primarily by the influence of low-level turbulence. In particular, the 20 percent of nights corresponding to the highest deviations in air temperature and wind speed in the lower atmospheric layers were excluded. The corresponding configuration of the optimal neural network is shown in Figure A3.
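The filtering rules above can be sketched as a simple predicate (thresholds from the text; the data layout and the per-night deviation values are hypothetical):

```python
import numpy as np

# Sketch of the night-filtering conditions (i)-(iii) plus the 20% deviation
# cut. The deviation values below are hypothetical.
def keep_night(cloud_fraction, n_measurements, profile_deviation, dev_cutoff):
    return bool(cloud_fraction < 0.3                      # (i): mostly clear
                and n_measurements > 50                   # (iii): enough data
                and profile_deviation <= dev_cutoff)      # (ii) + 20% cut

deviations = np.array([0.5, 1.0, 1.5, 2.0, 4.0])   # per-night Era-5 errors
cutoff = np.percentile(deviations, 80)             # drop worst 20% of nights
kept = keep_night(0.1, 120, 1.5, cutoff)
```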
For this configuration, Figure 8 shows the changes in the modeled and measured seeing. The correlation coefficient between the measured and modeled variations is equal to 0.68, higher than for configurations 1 and 2. The neurons of this network contain such atmospheric variables as u at 225 hPa and v in the lower atmospheric layers (700 hPa). For the neural network shown in Figure A3, large-scale atmospheric advection plays the greatest role (u at 550 hPa). Using this neural network, we also estimated the median value of seeing at the Maidanak observatory site during the period from January to October 2023; this median value is 0.73″.
Analysis of the obtained neural networks shows that, in most configurations, individual strong connections between neurons are replaced by many connections with nearly equal weight coefficients. Unlike at the Sayan Solar Observatory, the deep neural networks obtained for the Maidanak Astronomical Observatory do not contain pronounced connections between the seeing parameter and atmospheric vorticities [34]. Moreover, the use of atmospheric vorticities in the simulation even slightly reduces the Pearson correlation coefficient between the modeled and measured seeing values: for neural networks containing atmospheric vorticities, the Pearson correlation coefficient drops below 0.45. In our opinion, this is because the effect of large-scale atmospheric vorticity on optical turbulence at the Maidanak Astronomical Observatory site is minimal and is noticeable only during individual time periods when the seeing value increases.

5. Conclusions

The following is a summary of the conclusions.
This paper focuses on developing physically informed deep neural networks and machine learning methods to predict seeing. We proposed ensemble models of multi-layer neural networks for the estimation of seeing at the Maidanak observatory. To our knowledge, this is the first attempt to simulate seeing variations with neural networks at the Maidanak observatory. The neural networks are based on a physical model of the relationship between the characteristics of small-scale atmospheric turbulence and the large-scale meteorological characteristics relevant to the Maidanak Astronomical Observatory.
Configurations of a deep neural network for estimating seeing have been obtained for the first time. The neurons of these networks are linear, quadratic, cubic, and covariance functions of large-scale meteorological characteristics at different heights in the boundary layer and the free atmosphere. We have shown that using different sets of inputs makes it possible to estimate the influence of large-scale atmospheric characteristics on variations in seeing. In particular, the present paper shows that:
(i)
The seeing parameter depends weakly on meso-scale and large-scale atmospheric vorticity but is significantly sensitive to the characteristics of the atmospheric surface layer. In particular, for neural networks containing atmospheric vorticities, the Pearson correlation coefficient is low, ∼0.45.
(ii)
The air temperature and wind speed at the pressure levels closest to the observatory, as well as the northward turbulent surface stress, have a significant impact on seeing. Applying the northward turbulent surface stress parameter in the training process significantly improves the retrieval of seeing variations (the Pearson correlation coefficient increases from 0.45 to ∼0.70). The median seeing estimated with neural networks at the Maidanak observatory site during the period from January to October 2023 is 0.73 arcsec.
(iii)
The influence of the upper atmospheric layers (below the 200 hPa surface) becomes noticeable in selected atmospheric situations in which, as we assume, the reanalysis best reproduces large-scale meteorological fields.
Verification of the hourly averaged vertical profiles of wind speed and air temperature derived from the Era-5 reanalysis database was performed. We compared semi-empirical vertical profiles of the Era-5 reanalysis with radiosounding data of the atmosphere at the Dzhambul station, which is located within the region of the Maidanak Astronomical Observatory. The largest deviations correspond to the lower layers of the atmosphere and to the pressure levels of large-scale jet stream formation. In winter, Δ T , Δ V , σ T , and σ V are 1.3 K, 2.8 m/s, 1.7 K, and 3.3 m/s, respectively. In summer, these statistics have similar values: 1.7 K, 2.2 m/s, 2.3 K, and 2.7 m/s, respectively.
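The bias and scatter statistics ΔT, ΔV, σT, and σV used in this verification can be computed per pressure level as follows. The data here are synthetic; the injected bias of 1.3 K and scatter of 1.7 K mimic the winter temperature values quoted above.

```python
import numpy as np

def profile_verification(reanalysis, radiosonde):
    """Mean bias and standard deviation of (reanalysis - radiosonde)
    differences, computed per pressure level over all nights.
    Both inputs have shape (nights, levels)."""
    diff = np.asarray(reanalysis) - np.asarray(radiosonde)
    bias = diff.mean(axis=0)
    sigma = diff.std(axis=0, ddof=1)
    return bias, sigma

# Synthetic example: 200 nights, 10 pressure levels of air temperature [K].
rng = np.random.default_rng(3)
truth = 250.0 + rng.normal(0.0, 10.0, (200, 10))
era5 = truth + 1.3 + rng.normal(0.0, 1.7, (200, 10))  # bias 1.3 K, sigma 1.7 K

bias, sigma = profile_verification(era5, truth)
```

Plotting `bias` and `sigma` against pressure reproduces the kind of vertical profiles shown in Figures 2 and 3.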

Author Contributions

A.V.K. was engaged in developing the software and data validation; A.Y.S., K.E.K. and P.G.K. developed the methodology and performed the investigation; E.A.K. conducted the formal analysis; S.A.E. and Y.A.T. were engaged in measurements and initial data analysis. All authors have read and agreed to the published version of the manuscript.

Funding

Section 3.1 was supported by the Ministry of Science and Higher Education of the Russian Federation. The development of neural networks for simulations of night-time seeing variations at the Maidanak Astronomical Observatory was funded by RSF grant No. 23-72-00041.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are available upon request.

Acknowledgments

The approaches were previously tested using the Unique Research Facility “Large Solar Vacuum Telescope” (http://ckp-rf.ru/usu/200615/ accessed on 1 October 2023).

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Figure A1. Neural network for estimating seeing parameter. Configuration 1.
Figure A2. Neural network for estimating seeing parameter. Configuration 2.
Figure A3. Neural network for estimating seeing parameter for chosen atmospheric cases. Numbers on the right side of variables correspond to pressure levels.

References

1. Panchuk, V.E.; Afanasev, V.L. Astroclimate of Northern Caucasus—Myths and reality. Astrophys. Bull. 2011, 66, 233–254.
2. Hellemeier, J.A.; Yang, R.; Sarazin, M.; Hickson, P. Weather at selected astronomical sites—An overview of five atmospheric parameters. Mon. Not. R. Astron. Soc. 2019, 482, 4941–4950.
3. Tokovinin, A. The Elusive Nature of “Seeing”. Atmosphere 2023, 14, 1694.
4. Parada, R.; Rueda-Teruel, S.; Monzo, C. Local Seeing Measurement for Increasing Astrophysical Observatory Quality Images Using an Autonomous Wireless Sensor Network. Sensors 2020, 20, 3792.
5. Hidalgo, S.L.; Muñoz-Tuñón, C.; Castro-Almazán, J.A.; Varela, A.M. Canarian Observatories Meteorology; Comparison of OT and ORM using Regional Climate Reanalysis. Publ. Astron. Soc. Pac. 2021, 133, 105002.
6. Vernin, J.; Munoz-Tunon, C.; Hashiguchi, H.; Lawrence, D. Optical seeing at La Palma Observatory. I—General guidelines and preliminary results at the Nordic Optical Telescope. Astron. Astrophys. 1992, 257, 811–816.
7. Tokovinin, A. From differential image motion to seeing. Publ. Astron. Soc. Pac. 2002, 114, 1156–1166.
8. Cherubini, T.; Businger, S.; Lyman, R.; Chun, M. Modeling optical turbulence and seeing over Mauna Kea. J. Appl. Meteorol. Climatol. 2008, 47, 1140–1155.
9. Rimmele, T.R.; Warner, M.; Keil, S.L.; Goode, P.R.; Knölker, M.; Kuhn, J.R.; Rosner, R.R.; McMullin, J.P.; Casini, R.; Lin, H.; et al. Inouye Solar Telescope—Observatory Overview. Sol. Phys. 2020, 295, 172.
10. Grigoryev, V.M.; Demidov, M.L.; Kolobov, D.Y.; Pulyaev, V.A.; Skomorovsky, V.I.; Chuprakov, S.A. Project of the Large Solar Telescope with mirror 3 m in diameter. J. Sol. Terr. Phys. 2020, 6, 14–29.
11. Wang, Z.; Zhang, L.; Kong, L.; Bao, H.; Guo, Y.; Rao, X.; Zhong, L.; Zhu, L.; Rao, C. A modified S-DIMM+: Applying additional height grids for characterizing daytime seeing profiles. Mon. Not. R. Astron. Soc. 2018, 478, 1459–1467.
12. Zhong, L.; Zhang, L.; Shi, Z.; Tian, Y.; Guo, Y.; Kong, L.; Rao, X.; Bao, H.; Zhu, L.; Rao, C. Wide field-of-view, high-resolution Solar observation in combination with ground layer adaptive optics and speckle imaging. Astron. Astrophys. 2020, 637, A99.
13. Lotfy, E.R.; Abbas, A.A.; Zaki, S.A.; Harun, Z. Characteristics of Turbulent Coherent Structures in Atmospheric Flow under Different Shear–Buoyancy Conditions. Bound. Layer Meteorol. 2019, 173, 115–141.
14. Burgan, H.I. Comparison of different ANN (FFBP, GRNN, RBF) algorithms and multiple linear regression for daily streamflow prediction in Kocasu River, Turkey. Fresenius Environ. Bull. 2022, 31, 4699–4708.
15. Eris, E.; Cavus, Y.; Aksoy, H.; Burgan, H.I.; Aksu, H.; Boyacioglu, H. Spatiotemporal analysis of meteorological drought over Kucuk Menderes River Basin in the Aegean Region of Turkey. Theor. Appl. Climatol. 2020, 142, 1515–1530.
16. Hou, X.; Hu, Y.; Du, F.; Ashley, M.C.B.; Pei, C.; Shang, Z.; Ma, B.; Wang, E.; Huang, K. Machine learning-based seeing estimation and prediction using multi-layer meteorological data at Dome A, Antarctica. Astron. Comput. 2023, 43, 100710.
17. Wang, Y.; Basu, S. Using an artificial neural network approach to estimate surface-layer optical turbulence at Mauna Loa. Opt. Lett. 2016, 41, 2334–2337.
18. Jellen, C.; Oakley, M.; Nelson, C.; Burkhardt, J.; Brownell, C. Machine-learning informed macro-meteorological models for the near-maritime environment. Appl. Opt. 2021, 60, 2938–2951.
19. Cherubini, T.; Lyman, R.; Businger, S. Forecasting seeing for the Maunakea observatories with machine learning. Mon. Not. R. Astron. Soc. 2021, 509, 232–245.
20. Li, Y.; Zhang, X.; Li, L.; Shi, L.; Huang, Y.; Fu, S. Multistep ahead atmospheric optical turbulence forecasting for free-space optical communication using empirical mode decomposition and LSTM-based sequence-to-sequence learning. Front. Phys. 2023, 11, 11.
21. Zilitinkevich, S.; Elperin, T.; Kleeorin, N.I.; Rogachevskii, I.; Esau, I. A Hierarchy of Energy- and Flux-Budget (EFB) Turbulence Closure Models for Stably-Stratified Geophysical Flows. Bound. Layer Meteorol. 2013, 146, 341–373.
22. Odintsov, S.L.; Gladkikh, V.A.; Kamardin, A.P.; Nevzorova, I.V. Height of the Mixing Layer under Conditions of Temperature Inversions: Experimental Data and Model Estimates. Atmos. Ocean Opt. 2022, 35, 721–731.
23. Nosov, V.V.; Lukin, V.P.; Nosov, E.V.; Torgaev, A.V. Formation of Turbulence at Astronomical Observatories in Southern Siberia and North Caucasus. Atmos. Ocean Opt. 2019, 32, 464–482.
24. Qing, C.; Wu, X.; Li, X.; Luo, T.; Su, C.; Zhu, W. Mesoscale optical turbulence simulations above Tibetan Plateau: First attempt. Opt. Express 2020, 28, 4571–4586.
25. Bi, C.; Qing, C.; Qian, X.; Luo, T.; Zhu, W.; Weng, N. Investigation of the Global Spatio-Temporal Characteristics of Astronomical Seeing. Remote Sens. 2023, 15, 2225.
26. Tillayev, Y.; Azimov, A.; Ehgamberdiev, S.; Ilyasov, S. Astronomical Seeing and Meteorological Parameters at Maidanak Observatory. Atmosphere 2023, 14, 199.
27. Ilyasov, S.; Tillayev, Y. The atmospheric conditions of the Maidanak Observatory in Uzbekistan for ground-based observations. Proc. SPIE 2010, 7651, 76511N.
28. Hersbach, H.; Bell, B.; Berrisford, P.; Hirahara, S.; Horányi, A.; Muñoz-Sabater, J.; Nicolas, J.; Peubey, C.; Radu, R.; Schepers, D.; et al. The ERA5 global reanalysis. Q. J. R. Meteorol. Soc. 2020, 146, 1999–2049.
29. Huang, J.; Wang, M.; Qing, H.; Guo, J.; Zhang, J.; Liang, X. Evaluation of Five Reanalysis Products With Radiosonde Observations Over the Central Taklimakan Desert During Summer. Earth Space Sci. 2021, 8, e2021EA001707.
30. Ivakhnenko, A.G. Heuristic Self-Organization in Problems of Engineering Cybernetics. Automatica 1970, 6, 207–219.
31. Ivakhnenko, A.G.; Ivakhnenko, G.A.; Mueller, J.A. Self-Organization of Neuronets with Active Neurons. Int. J. Pattern Recognit. Image Anal. Adv. Math. Theory Appl. 1994, 4, 177–188.
32. Stepashko, V. Developments and Prospects of GMDH-Based Inductive Modeling. Adv. Intell. Syst. Comput. 2018, 689, 474–491.
33. Bolbasova, L.A.; Andrakhanov, A.A.; Shikhovtsev, A.Y. The application of machine learning to predictions of optical turbulence in the surface layer at Baikal Astrophysical Observatory. Mon. Not. R. Astron. Soc. 2021, 504, 6008–6017.
34. Shikhovtsev, A.Y.; Kovadlo, P.G.; Kiselev, A.V.; Eselevich, M.V.; Lukin, V.P. Application of Neural Networks to Estimation and Prediction of Seeing at the Large Solar Telescope Site. Publ. Astron. Soc. Pac. 2023, 135, 014503.
Figure 1. The number of nights N_nig by month.
Figure 2. Vertical profiles of (a) ΔT [K], (b) ΔV [m/s], (c) σT [K], and (d) σV [m/s] in winter.
Figure 3. Vertical profiles of (a) ΔT [K], (b) ΔV [m/s], (c) σT [K], and (d) σV [m/s] in summer.
Figure 4. Histograms of measured seeing values at the Maidanak observatory for two periods: (a) 1996–2003 and (b) 2018–2022. N_i is the number of cases.
Figure 5. Flowchart for the creation of neural networks.
Figure 6. Modeled and measured seeing values estimated on the validation dataset for neural network configuration 1. Line 1: measured seeing; line 2: modeled seeing.
Figure 7. Modeled and measured seeing values estimated on the validation dataset for neural network configuration 2. Line 1: measured seeing; line 2: modeled seeing.
Figure 8. Modeled and measured seeing values estimated on the validation dataset for the neural network configuration obtained for the chosen atmospheric cases. Line 1: measured seeing; line 2: modeled seeing.
Table 1. Designations used in neural networks.

Label   Parameter
nsss    northward turbulent surface stress
u       u-component of wind
v       v-component of wind
w       w-component of wind
q       specific humidity
t       air temperature
t2m     air temperature at a height of 2 m