Preprint

Estimation of Astronomical Seeing with Neural Networks at the Maidanak Observatory

A peer-reviewed article of this preprint also exists.

Submitted: 07 December 2023
Posted: 08 December 2023

Abstract
In the present article, we study the possibilities of machine learning for the estimation of seeing at the Maidanak Astronomical Observatory (38°40′24″N, 66°53′47″E) using only Era-5 reanalysis data. Seeing is usually associated with the integral of the turbulence strength $C_n^2(z)$ over the height $z$. Based on the seeing measurements accumulated over 13 years, we created ensemble models of multi-layer neural networks under the machine learning framework, including training and validation. To our knowledge, this is the first time that night-time optical turbulence (seeing variations) has been simulated with deep neural networks trained on a 13-year database of astronomical seeing. A set of neural networks for simulations of night-time seeing variations has been obtained. For these neural networks, the linear correlation coefficient ranges from 0.48 to 0.68. We show that seeing modeled with neural networks is well described through meteorological parameters, which include wind speed components, air temperature, humidity and turbulent surface stresses. One of the fundamentally new results is that the structure of small-scale (optical) turbulence over the Maidanak Astronomical Observatory depends negligibly, if at all, on the vortex component of atmospheric flows.
Subject: Computer Science and Mathematics - Artificial Intelligence and Machine Learning

1. Introduction

Atmospheric flows are predominantly turbulent, both in the free atmosphere and in the atmospheric boundary layer. Within these atmospheric layers, a continuous spectrum of turbulent fluctuations over a wide range of spatial scales is formed. This range extends from the largest vortices, associated with the boundary conditions of the flow under consideration, to the smallest eddies, which are determined by viscous dissipation. The energy spectrum of turbulence, especially in its short-wavelength range, is significantly deformed with height above ground; the structure and energy of optical turbulence also change noticeably.
The Earth’s atmosphere significantly limits ground-based astronomical observations [1,2,3,4]. Due to atmospheric turbulence, wavefronts are distorted, stellar and solar images are blurred, and small details in the images become indistinguishable. Optical turbulence has a decisive influence on the resolution of stellar telescopes and on the efficiency of adaptive optics systems. The main requirement for high-resolution astronomical observations is to operate under the quietest, optically stable atmosphere, characterized by weak small-scale (optical) turbulence.
One of the key characteristics of optical turbulence is seeing [5,6]. The seeing parameter is associated with the full width at half-maximum of the long-exposure, seeing-limited point spread function at the focus of a large-diameter telescope [7,8]. This parameter can be expressed through the vertical profile of optical turbulence strength. In particular, for isotropic three-dimensional Kolmogorov turbulence, the seeing can be estimated from the integral of the structure characteristic of turbulent fluctuations of the air refractive index $C_n^2(z)$ over the height $z$ [8]:

$$\mathrm{seeing} = 0.98\,\lambda \left[ 0.423 \sec\alpha \left( \frac{2\pi}{\lambda} \right)^{2} \int_{0}^{H} C_n^2(z)\,dz \right]^{3/5},$$

where $H$ is the height of the optically active atmosphere, $\alpha$ is the zenith angle, and $\lambda$ is the light wavelength.
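As an illustration, formula (1) can be evaluated numerically for a given $C_n^2(z)$ profile. The sketch below (Python/NumPy; the exponential profile and all numerical values are hypothetical, chosen only for illustration) integrates the profile and converts it to seeing:

```python
import numpy as np

def seeing_from_cn2(z, cn2, wavelength=500e-9, zenith_angle=0.0):
    """Seeing (radians) from a Cn^2(z) profile via formula (1).

    z            : heights above the site [m]
    cn2          : turbulence strength profile Cn^2(z) [m^(-2/3)]
    wavelength   : light wavelength lambda [m]
    zenith_angle : zenith angle alpha [rad]
    """
    # trapezoidal integral of Cn^2(z) dz
    integral = np.sum(0.5 * (cn2[1:] + cn2[:-1]) * np.diff(z))
    wavenumber = 2.0 * np.pi / wavelength
    # Fried parameter r0; then seeing = 0.98 * lambda / r0
    r0 = (0.423 * wavenumber**2 * integral / np.cos(zenith_angle)) ** (-3.0 / 5.0)
    return 0.98 * wavelength / r0

# Illustrative exponential turbulence profile (hypothetical values)
z = np.linspace(0.0, 20e3, 2000)       # 0-20 km above the site
cn2 = 1e-16 * np.exp(-z / 1500.0)      # decaying Cn^2 profile
seeing_arcsec = np.degrees(seeing_from_cn2(z, cn2)) * 3600.0
```

For an integrated turbulence strength of the order of $10^{-13}$ m$^{1/3}$, this yields sub-arcsecond seeing, of the same order as the values discussed for the Maidanak site below.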
In astronomical observations through the Earth’s turbulent atmosphere, atmospheric resolution (seeing) typically ranges from 0.7″ to 2.0″. In conditions of intense optical turbulence along the line of sight, seeing increases to 4.0″-5.0″. At the same time, modern problems of astrophysics associated with high-resolution observations require seeing of the order of 0.1″ or even better [9,10]. In order to improve the quality of solar or stellar images and achieve high resolution, special adaptive optics systems are used [11,12]. Monitoring and forecasting of seeing are necessary for the functioning of the adaptive optics systems of astronomical telescopes and for planning observing time.
Correct estimations of the seeing and prediction of this parameter are associated with the development of our knowledge about:
(i)
The evolution of small-scale turbulence within the troposphere and stratosphere.
(ii)
Inhomogeneous influence of mesojet streams within the atmospheric boundary layer on the generation and dissipation of turbulence.
(iii)
Suppression of turbulent fluctuations in a stable stratified atmospheric boundary layer and the influence of multilayer air temperature inversions on vertical profiles of optical turbulence.
(iv)
The phenomenon of structurization of turbulence under the influence of large-scale and mesoscale vortex movements [13].
One of the best tools used for simulations of geophysical flows is machine learning models and, in particular, deep neural networks [14]. Neural networks are used for estimation and prediction of atmospheric processes. A number of studies are devoted to machine learning models applied for description of the characteristics of optical turbulence [15,16].
In addition to numerical atmospheric models and statistical methods [17], machine learning is one of the tools for estimating and predicting atmospheric characteristics, including optical turbulence [16,18,19,20,21]. Cherubini et al. [22] presented a machine-learning approach to translate the Maunakea Weather Center experience into a forecast of the nightly average optical turbulent state of the atmosphere. In [23], a hybrid multi-step model for the prediction of optical turbulence is proposed, combining empirical mode decomposition, a sequence-to-sequence architecture and a long short-term memory network.
Thanks to their ability to learn from real data and to adjust to complex models with ease, machine learning and artificial intelligence methods are being successfully implemented for multi-object adaptive optics. By applying machine learning methods, the problem of restoring wavefronts distorted by atmospheric turbulence is solved.
This paper discusses the possibilities of using machine learning methods and deep neural networks to estimate the seeing parameter at the site of the Maidanak observatory (38°40′24″N, 66°53′47″E). The site is considered one of the best ground-based sites in the world for optical astronomy. The goal of this work is to develop an approach for estimating seeing at the Maidanak observatory through large-scale weather patterns and, thereby, to anticipate the average optical turbulence state of the atmosphere.

2. Evolution of atmospheric turbulence

It is a known fact that turbulent fluctuations of the air refractive index $n$ are determined by turbulent fluctuations of the air temperature $T$ or the potential temperature $\theta$. In order to select the optimal dataset for training the neural network, we considered the budget equation for the energy of potential temperature fluctuations $E_\theta = \frac{1}{2}\overline{\theta'^2}$ [24]:

$$\frac{d E_\theta}{d t} + \frac{\partial Q_\theta}{\partial z} = -F_z \frac{\partial \bar{\theta}}{\partial z} - \epsilon_\theta,$$

where the substantial derivative is $\frac{d}{dt} = \frac{\partial}{\partial t} + \bar{u}\frac{\partial}{\partial x} + \bar{v}\frac{\partial}{\partial y}$, $\bar{u}$ and $\bar{v}$ are the mean horizontal components of wind speed, $t$ is the time, $Q_\theta$ is the third-order vertical turbulent flux of $E_\theta$, $F_z$ is the vertical flux of potential temperature fluctuations, $\partial\bar{\theta}/\partial z$ is the vertical partial derivative of the mean value of $\theta$, and $\epsilon_\theta$ is the rate of dissipation.
Analyzing this equation, we can see that the operator $\frac{d}{dt}$ determines changes in $E_\theta$ due to large-scale advection of air masses. The second term on the left-hand side of equation (2) is neither productive nor dissipative and describes energy transport. The third-order vertical turbulent flux $Q_\theta$ can be expressed through the fluctuations of the squared potential temperature $\theta'^2$ and the fluctuations of the vertical velocity component $w'$:

$$Q_\theta = \frac{1}{2}\overline{\theta'^2 w'}.$$
For small turbulent fluctuations of air temperature, Q θ can be neglected. An alternative approach is to construct a regional model of Q θ changes using averaged vertical profiles of meteorological characteristics.
The term $F_z \frac{\partial \bar{\theta}}{\partial z}$ is of great interest. The parameter $F_z$ describes the energy exchange between turbulent potential energy and turbulent kinetic energy and determines the structure of optical turbulence. It is also important to emphasize that this exchange between energies is governed by the Richardson number.
The down-gradient formulation for $F_z$ is:

$$F_z = -K_H \frac{N^2}{\beta},$$

where the turbulence coefficient $K_H$ can be defined as a constant for a thin atmospheric layer or specified in the form of some model, $\beta = g/T_0$, $g$ is the gravitational acceleration, and $T_0$ is a reference value of absolute temperature. The Brunt-Vaisala frequency $N$ describes the oscillation frequency of an air parcel in a stable atmosphere through average meteorological characteristics:

$$N^2 = \frac{g}{\bar{\theta}} \frac{d \bar{\theta}}{d z}.$$
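A minimal numerical sketch of the down-gradient flux and the Brunt-Vaisala frequency (Python; the layer values and the reference temperature $T_0$ are assumed, for illustration only):

```python
import numpy as np

G = 9.81    # gravitational acceleration g [m s^-2]
T0 = 288.0  # reference absolute temperature T_0 [K] (assumed value)

def brunt_vaisala_sq(theta_mean, dtheta_dz):
    """N^2 = (g / theta_mean) * d(theta)/dz for a stably stratified layer."""
    return (G / theta_mean) * dtheta_dz

def flux_fz(K_H, N2):
    """Down-gradient flux of potential temperature fluctuations:
    F_z = -K_H * N^2 / beta, with beta = g / T_0."""
    beta = G / T0
    return -K_H * N2 / beta

# Stable stratification: potential temperature increasing with height
N2 = brunt_vaisala_sq(theta_mean=290.0, dtheta_dz=5.0e-3)  # [s^-2]
Fz = flux_fz(K_H=1.0, N2=N2)                               # [K m s^-1]
```

In a stable layer ($N^2 > 0$) the down-gradient flux is negative, i.e. directed downward, consistent with the suppression of vertical exchange discussed below.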
According to Large Eddy Simulations [24], dependencies of the vertical turbulent fluxes of momentum and heat on the gradient Richardson number, which is associated with the vertical gradients of wind speed and air temperature, have been revealed. These dependencies are complex; they demonstrate nonlinear changes of the vertical turbulent fluxes with increasing Richardson number. It can be noted that with an increase of the Richardson number from $10^{-2}$ to values greater than 10, the vertical turbulent momentum flux tends to decrease. For the vertical turbulent heat flux, on average, a similar dependence is observed, with a pronounced extremum for Richardson numbers from $4\times10^{-2}$ to $7\times10^{-2}$.
Following Kolmogorov, the dissipation rate may be expressed through the turbulent dissipation time scale $t_T$:

$$\epsilon_\theta = E_\theta \left( C_P t_T \right)^{-1},$$

where $C_P$ is a dimensionless constant of order unity. In turn, the parameter $t_T$ is related to the turbulent length scale $L_s$:

$$t_T = L_s / E_k^{1/2} = L_s / \left( 0.5\,( \overline{u'^2} + \overline{v'^2} ) \right)^{1/2}.$$

Substituting this expression for $t_T$, the dissipation rate takes the form:

$$\epsilon_\theta = E_\theta E_k^{1/2} \left( C_P L_s \right)^{-1}.$$
Analyzing this relation, we can note that the rate of dissipation of temperature fluctuations is determined by the turbulence kinetic and turbulence potential energies. In the atmosphere, the rate of transition of turbulence potential energy into turbulence kinetic energy depends on the type and sign of thermal stability. This transition is largely determined by the vertical gradients of the mean potential air temperature.
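The dissipation relations above can be sketched as follows (Python; all input values are hypothetical). The two routes to $\epsilon_\theta$, through $t_T$ and through $E_k^{1/2}/L_s$, agree by construction:

```python
import numpy as np

def kinetic_energy(u2_var, v2_var):
    """Turbulence kinetic energy E_k from horizontal velocity variances."""
    return 0.5 * (u2_var + v2_var)

def time_scale(L_s, E_k):
    """Turbulent dissipation time scale t_T = L_s / E_k^(1/2)."""
    return L_s / np.sqrt(E_k)

def dissipation_rate(E_theta, E_k, L_s, C_P=1.0):
    """Dissipation of temperature fluctuations:
    eps_theta = E_theta * E_k^(1/2) / (C_P * L_s)."""
    return E_theta * np.sqrt(E_k) / (C_P * L_s)

# Hypothetical values, for illustration only
E_k = kinetic_energy(u2_var=0.8, v2_var=0.6)   # [m^2 s^-2]
t_T = time_scale(L_s=50.0, E_k=E_k)            # [s]
eps = dissipation_rate(E_theta=0.05, E_k=E_k, L_s=50.0)
```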
Given the above, we can emphasize that the real structure of optical turbulence is determined by turbulent kinetic energy and turbulent potential energy. In turn, turbulent kinetic energy and turbulent potential energy may be estimated using averaged parameters of large-scale atmospheric flows. In particular, energy characteristics of dynamic and optical turbulence can be parameterized through the vertical distributions of averaged meteorological characteristics. Correct parameterization of turbulence must also take into account certain spatial scales, determined by the deformations of the turbulence energy spectra. Among such scales, as a rule, the outer scale and integral scale of turbulence are considered.
To fully describe the structure of atmospheric small-scale (optical) turbulence, parameterization schemes must take into account:
(i)
Generation and dissipation of atmospheric turbulence as well as the general energy of atmospheric flows.
(ii)
The influence of air temperature inversion layers on the suppression of vertical turbulent flows [25]. This is especially important for the parameterization of vertical turbulent heat fluxes, which demonstrates the greatest nonlinearity for different vertical profiles of air temperature and wind speed.
(iii)
Features of mesoscale turbulence generation within air flow in conditions of complex relief [26].
(iv)
Development of intense optical turbulence above and below jet streams, including mesojets within the atmospheric boundary layer.
The structure of optical turbulence depends on meteorological characteristics at different heights above the Earth’s surface. As shown by numerous studies of atmospheric turbulence, the main parameters which determine the structure and dynamics of turbulent fluctuations are the wind speed components, wind shears, vertical gradients of air temperature and humidity, the Richardson number and buoyancy forces, as well as large-scale atmospheric perturbation characteristics [21,29,30]. Taking into account the dependence of optical turbulence on meteorological characteristics, the vertical profiles of the horizontal components of wind speed, air temperature and humidity, as well as atmospheric vorticity and vertical velocity at various pressure levels, were selected as input parameters for training the neural networks. Total cloud cover, surface wind speed and air temperature, as well as the calculated values of northward and eastward surface turbulent stresses, were selected as additional parameters. We should emphasize that information about the vertical profiles of meteorological characteristics is necessary to determine the seeing parameter with acceptable accuracy without the use of measurement data in the surface layer. Using measured meteorological characteristics in the surface layer of the atmosphere as input data would further significantly improve the accuracy of modeling seeing variations.

3. Data used

The approach based on the application of a deep neural network has a certain merit, as it allows one to search for internal relations between seeing variations and the evolution of some background state of atmospheric layers at different heights. We use the medians of seeing estimated from measurements of differential displacements of stellar images at the site of the Maidanak Observatory as the predicted values. We should note that routine measurements of star image motion are made at the Maidanak Observatory using the Differential Image Motion Monitor (DIMM). The database of measured seeing is available for two periods: 1996-2003 and 2018-2022. In Figure 1, we present the total amount of DIMM data for each month during the acquisition period. The 13-year data set used covers a variety of atmospheric situations and is statistically representative. Analysis of Figure 1 shows that the smallest number of nights used for machine learning occurs in March. The best conditions correspond to August-October, when the observatory has a good amount of clear time.
The observed difference in the number of nights for different months is related to the atmospheric conditions limiting the observations (strong surface winds and high-level cloud cover).
Also, we used data from the European Center for Medium-Range Weather Forecast Reanalysis (Era-5) [27] as inputs for training the neural networks. Meteorological characteristics at different pressure levels were selected from the Era-5 reanalysis database for two periods: 1996-2003 and 2018-2022. Night-to-night averaging of the reanalysis data corresponds to the averaging of measured seeing.

3.1. Era-5 reanalysis data

Reanalysis Era-5 is a fifth generation database. Data in the Era-5 reanalysis are presented with high spatial and temporal resolution. The spatial resolution is 0.25 °, and the time resolution is 1 hour. Data are available for pressure levels ranging from 1000 hPa to 1 hPa. In simulations, in addition to hourly data on pressure levels, we also used hourly data on single levels (air temperature at the height of 2 meters above surface and horizontal components of wind speed at the height of 10 meters above surface).
We have verified the Era-5 reanalysis data for the region where the Maidanak astronomical observatory is located. Verification was performed by comparing semi-empirical vertical profiles of the Era-5 re-analysis with radiosounding data at the Dzhambul station. Dzhambul is one of the closest sounding stations to the Maidanak astronomical observatory.
In order to numerically estimate the deviations of the reanalysis data from the measurement data, we calculated the mean absolute errors and the standard deviations of air temperature and wind speed using the formulas [28]:

$$\Delta T = \frac{1}{N} \sum_{i=1}^{N} \left| T_i(z)^{(Era5)} - T_i(z)^{(rad)} \right|,$$

$$\Delta V = \frac{1}{N} \sum_{i=1}^{N} \left| V_i(z)^{(Era5)} - V_i(z)^{(rad)} \right|,$$

$$\sigma_T = \left[ \frac{1}{N} \sum_{i=1}^{N} \left( T_i(z)^{(Era5)} - T_i(z)^{(rad)} \right)^2 \right]^{0.5},$$

$$\sigma_V = \left[ \frac{1}{N} \sum_{i=1}^{N} \left( V_i(z)^{(Era5)} - V_i(z)^{(rad)} \right)^2 \right]^{0.5},$$

where $z$ is the height, $\Delta T$ and $\Delta V$ are the mean absolute errors in air temperature and wind speed, and $\sigma_T$ and $\sigma_V$ are the corresponding root-mean-square deviations. The superscripts $(Era5)$ and $(rad)$ indicate the type of data (Era-5 reanalysis and radiosondes). $N$ includes all observations for January 2023 and July 2023.
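These error metrics are straightforward to compute at each pressure level; a short sketch (Python/NumPy; the paired samples are hypothetical):

```python
import numpy as np

def mean_absolute_error(model, obs):
    """Delta: mean absolute deviation of Era-5 values from radiosonde values."""
    return np.mean(np.abs(np.asarray(model) - np.asarray(obs)))

def rms_deviation(model, obs):
    """sigma: root-mean-square deviation of Era-5 values from radiosonde values."""
    diff = np.asarray(model) - np.asarray(obs)
    return np.sqrt(np.mean(diff**2))

# Hypothetical paired air-temperature samples at one pressure level [deg C]
t_era5 = np.array([1.2, -0.5, 3.1, 2.4])
t_rad = np.array([0.8, -1.1, 2.5, 3.0])
delta_T = mean_absolute_error(t_era5, t_rad)  # 0.55
sigma_T = rms_deviation(t_era5, t_rad)        # ~0.557
```

Note that the root-mean-square deviation is never smaller than the mean absolute error, so the two metrics together indicate how heavy-tailed the deviations are.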
Figure 2 and Figure 3 show the vertical profiles of $\Delta T$, $\Delta V$, $\sigma_T$ and $\sigma_V$. The profiles are averaged over June-August and December-February. Analysis of these figures shows that, in winter, $\Delta T$, $\Delta V$, $\sigma_T$ and $\sigma_V$ are 1.3°, 2.8 m/s, 1.7° and 3.3 m/s, respectively. In winter, high deviations of the measured air temperature from the reanalysis-derived values are observed mainly in the lower layer of the atmosphere (up to 850 hPa). We attribute these deviations to the inaccuracy of modeling surface thermal inversions and meso-jets in the reanalysis. Above the height corresponding to the 850 hPa level, the deviations decrease significantly ($\Delta T \sim 1.2$°, $\sigma_T = 1.5$°). In the vertical profiles of wind speed, the height dependence of the deviations is more structured; in particular, significant peaks are observed throughout the whole depth of the atmosphere. The standard deviation of wind speed in the atmospheric layer up to the 850 hPa level (in the lower atmospheric layers) is higher than 6.0 m/s. In the layer above the 850 hPa pressure level, $\Delta V$ and $\sigma_V$ decrease to 1.8 m/s and 2.2 m/s, respectively.
In summer, the temperature deviations between radiosonde and Era-5 data decrease. Analysis of Figure 3 shows that the remaining deviations are due to the fact that the reanalysis often does not reproduce the large-scale jet stream correctly. Also, in summer, the Era-5 reanalysis overestimates the surface values of air temperature. $\Delta T$ and $\sigma_T$ in the entire atmosphere are 1.7° and 2.3°, respectively. $\sigma_T$ values are 2.8° and 3.6° within the lower atmospheric layers and at the height of the large-scale jet stream.
Considering wind speed, the deviations between the measured and modeled parameters are pronounced. $\Delta V$ and $\sigma_V$ are 2.2 m/s and 2.7 m/s, respectively. In the lower layers of the atmosphere, the mean absolute error and the root-mean-square deviation are 2.9 m/s and 3.6 m/s. High deviations in the wind speed correspond to the atmospheric levels under the large-scale jet stream (200 hPa). Within the upper atmospheric layers, $\sigma_V$ can reach 4.0 m/s.
Thus, in this section we examined how well the reanalysis data corresponding to a certain computational cell describe the real vertical profiles of wind speed and air temperature. In general, there are some atmospheric situations in which the reanalysis reproduces the profiles with a large error. The reanalysis does not reproduce thin thermal inversions and mesojet streams in the lower atmospheric layers, and it overestimates or underestimates the speed of air flow in the large-scale jet stream. In order to increase the efficiency of training the neural networks, below we also considered only model weather data with the best reproducibility of vertical changes. In training, we used meteorological characteristics at all available pressure levels from 700 hPa to 3 hPa. The selection of the lower pressure surface corresponding to 700 hPa is determined by the elevation of the observatory (2650 m above sea level; the surface pressure $P_{surf}$ is equal to 733 hPa) and of the surrounding areas.

3.2. Seeing values derived from image motion measurements

The predicted value is the median of seeing averaged over the night. Seeing is the parameter calculated from image motion measurements. The theory for calculating the seeing based on image motion measurements is described in the paper [7]. Using the Kolmogorov model, the variance of the differential image motion $\sigma_\alpha^2$ may be estimated from the following relation:

$$\sigma_\alpha^2 = K \lambda^2 r_0^{-5/3} D^{-1/3},$$

where $\lambda$ is the light wavelength and $D$ is the telescope aperture diameter. The seeing is related to the Fried parameter $r_0$ by the formula:

$$\mathrm{seeing} = 0.98\,\frac{\lambda}{r_0}.$$

The coefficient $K$ in formula (13) depends on the ratio of the distance between the centers of the apertures $S_d$ to the aperture diameter $d_s$, on the direction of image motion and on the type of tilt. The coefficients for longitudinal and transverse motions, determined from the centers of gravity of the images, are:

$$K_l = 0.34 \left( 1 - 0.57 \left( \frac{S_d}{d_s} \right)^{-1/3} - 0.04 \left( \frac{S_d}{d_s} \right)^{-7/3} \right),$$

$$K_t = 0.34 \left( 1 - 0.855 \left( \frac{S_d}{d_s} \right)^{-1/3} + 0.03 \left( \frac{S_d}{d_s} \right)^{-7/3} \right).$$
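A sketch of the DIMM reduction implied by these formulas (Python; the aperture geometry and the $r_0$ value are assumed for illustration): the measured variance is inverted for the Fried parameter $r_0$, which then gives the seeing:

```python
import numpy as np

def dimm_coefficients(S_d, d_s):
    """Response coefficients for longitudinal and transverse image motion
    as functions of the aperture-separation ratio S_d / d_s."""
    b = S_d / d_s
    K_l = 0.34 * (1.0 - 0.57 * b**(-1.0 / 3.0) - 0.04 * b**(-7.0 / 3.0))
    K_t = 0.34 * (1.0 - 0.855 * b**(-1.0 / 3.0) + 0.03 * b**(-7.0 / 3.0))
    return K_l, K_t

def seeing_from_variance(sigma2, K, D, wavelength=500e-9):
    """Invert sigma^2 = K * lambda^2 * r0^(-5/3) * D^(-1/3) for the Fried
    parameter r0, then return seeing = 0.98 * lambda / r0 [radians]."""
    r0 = (K * wavelength**2 / (sigma2 * D**(1.0 / 3.0))) ** (3.0 / 5.0)
    return 0.98 * wavelength / r0

# Round trip with hypothetical DIMM geometry: 0.2 m separation, 0.1 m apertures
K_l, K_t = dimm_coefficients(S_d=0.2, d_s=0.1)
lam, D, r0_true = 500e-9, 0.1, 0.1                 # r0 = 10 cm assumed
sigma2 = K_l * lam**2 * r0_true**(-5.0 / 3.0) * D**(-1.0 / 3.0)
seeing_rad = seeing_from_variance(sigma2, K_l, D, wavelength=lam)
```

For $r_0 = 10$ cm at 500 nm, the round trip recovers a seeing close to one arcsecond, as expected from formula (14).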
Using all the measurement data, we computed the seeing values shown as histograms in Figure 4 a) and b). Analysis of Figure 4 a) and b) shows that the range of changes of the integral intensity of optical turbulence at the Maidanak astronomical observatory site is narrow: the bulk of the values falls within the range from 0.6″ to 0.9″. Despite the narrow range of seeing changes, we can note that the neural networks have been trained on a wide range of atmospheric situations.

4. Neural network configuration for estimation of seeing

An artificial neural network is a complex function that connects inputs and outputs in a certain way. The construction of a neural network is an attempt to find internal connections and patterns between the inputs, their neurons and the outputs in the study of phenomena and processes. The aim of this study is to show how capable an artificial neural network is of estimating seeing variations for the Maidanak astronomical observatory, which is located in highly favorable atmospheric conditions.
The flowchart for the creation of neural networks is shown in Figure 5. According to this flowchart, the initial time series are divided into training, checking and validation data sets. The main stage is training the neural network and generating partial models. In particular, learning is based on data pairs of observed input and output variables. Using different inputs (meteorological characteristics), we optimized the final structure of the neural network.
An important step in seeing simulation with neural networks is the selection of input variables. The inputs are selected based on the physics of turbulence formation described in Section 2. According to the theory, the formation of turbulent fluctuations of air temperature and, consequently, of the air refractive index is largely determined by the advection of air masses, the rate of dissipation of fluctuations, and the vertical turbulent fluxes, which depend on the vertical gradients of meteorological characteristics. In addition, the turbulence structure is closely related to large-scale atmospheric disturbances, including meso-scale jet streams and large atmospheric turbulent vortices. In particular, the inputs are wind speed components, air temperature and humidity, vorticity of air flows and the values of surface turbulent stresses. The final configuration of the neural network is formed by excluding neurons whose weights are minimal. As we will see below, the obtained neural networks that best reproduce the seeing variations do not contain neurons functionally related to atmospheric vortices.
To create configurations of neural networks connecting inputs and outputs, we chose the group method of data handling (GMDH) [31,32,33]. The GMDH method is based on some model of the relationship between the free variables $x$ and the dependent parameter $y$ (seeing) [34]. To identify relationships between the night-averaged turbulent parameter seeing and the vertical profiles of mean meteorological characteristics, we used the Kolmogorov-Gabor polynomial, which is the sum of linear, quadratic, cubic and covariance terms [31]:

$$y = W_0 + \sum_{i=1}^{m} W_i x_i + \sum_{i=1}^{m} \sum_{j=1}^{m} W_{ij} x_i x_j + \sum_{i=1}^{m} \sum_{j=1}^{m} \sum_{k=1}^{m} W_{ijk} x_i x_j x_k + \ldots$$

In formula (17), the index $m$ denotes the number of free variables and $W_i$, $W_{ij}$, $W_{ijk}$ are the weights. The seeing is considered as a function of a set of free variables [31]:

$$\mathrm{seeing} = f(x_1, x_2, x_1 x_2, x_1^2, \ldots) = F(z_1, z_2, z_3, \ldots).$$

The modeled $\mathrm{seeing}^*$ can be expressed in the following form [31]:

$$\mathrm{seeing}^* = W_0 + \sum_{i=1}^{F_0} W_i z_i = W_0 + Wz,$$

where $Wz$ is a scalar product of two vectors. The correct estimation of the outputs is mainly determined by the trained parameters, the weights $W$. The goal of training is to find the weights at which the created neural network produces minimal errors.
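A simplified sketch of a Kolmogorov-Gabor expansion (Python/NumPy). Here the full GMDH layer-by-layer model selection is replaced by a single least-squares fit of the weights, and the inputs and the "seeing" target are synthetic:

```python
import numpy as np
from itertools import combinations_with_replacement

def kolmogorov_gabor_features(X, degree=2):
    """All polynomial terms of formula (17) up to `degree`:
    a constant column (weight W_0), then x_i, x_i*x_j, ... for every
    combination of input indices."""
    n, m = X.shape
    columns = [np.ones(n)]
    for deg in range(1, degree + 1):
        for idx in combinations_with_replacement(range(m), deg):
            columns.append(np.prod(X[:, list(idx)], axis=1))
    return np.column_stack(columns)

# Synthetic example: 3 hypothetical meteorological inputs, "seeing" target
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = 0.7 + 0.3 * X[:, 0] - 0.2 * X[:, 1] * X[:, 2]

# Least-squares fit of the weights W (a stand-in for GMDH selection)
Z = kolmogorov_gabor_features(X, degree=2)
W, *_ = np.linalg.lstsq(Z, y, rcond=None)
y_model = Z @ W
```

Because the synthetic target lies exactly in the span of the degree-2 terms, the fit recovers it; GMDH additionally prunes terms layer by layer, which a plain least-squares fit does not do.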
The result of applying the GMDH method is a certain set of neural network models containing internal connections between the input meteorological parameters, their derivatives and the output. The best solution must correspond to the minimum of the loss function, the values of which depend on all weights. The loss function can be written as [31]:

$$R_{ext}(validate) = \left[ \frac{1}{M} \sum_{i \in validate}^{M} \left( \mathrm{seeing}_i - \mathrm{seeing}_i^{(training)} \right)^2 \right]^{0.5}.$$

The loss function $R_{ext}(validate)$ is estimated using the validation data (new data).
Finding the minimum of the loss function is a rather difficult task due to the multidimensionality of the function, determined by the number of input variables. To find the minimum of the loss function on the training dataset, a gradient descent algorithm is used, based on calculating the error gradient vector (the partial derivatives of the loss function with respect to all weights). In the simulations, the initial weights are initialized with small random values. The weights are updated using the error backpropagation method (from the last neural layer to the input layer). In this method, the calculation of derivatives of complex functions makes it possible to determine the weight increments that reduce the loss function. In this case, for each output neuron in layer $N_{neur}+1$, errors and weight increments are calculated and propagated to the neurons of the previous layer $N_{neur}$. The optimal neural network should correspond to the minimum of the loss function $R_{ext}(validate)$.
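The weight optimization described above can be sketched for a single linear output layer (Python/NumPy; the data are synthetic, and plain gradient descent stands in for full backpropagation):

```python
import numpy as np

def rmse_loss(w, Z, y):
    """Root-mean-square loss between modeled and observed values."""
    return np.sqrt(np.mean((Z @ w - y) ** 2))

def fit_weights(Z, y, learning_rate=0.1, steps=500, seed=1):
    """Plain gradient descent on the mean-squared error (which shares its
    minima with the RMSE loss). Small random initial weights, as in the text."""
    rng = np.random.default_rng(seed)
    w = 0.01 * rng.normal(size=Z.shape[1])
    for _ in range(steps):
        gradient = 2.0 * Z.T @ (Z @ w - y) / len(y)   # d(MSE)/dw
        w -= learning_rate * gradient
    return w

# Synthetic linear problem standing in for the last network layer
rng = np.random.default_rng(2)
Z = rng.normal(size=(100, 4))
w_true = np.array([0.5, -0.3, 0.2, 0.1])
y = Z @ w_true
w_fit = fit_weights(Z, y)
```

For this well-conditioned synthetic problem, the descent converges to the unique minimum, recovering the true weights.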
After training, we obtained a wide set of neural network configurations for estimating the seeing. By estimating the values of the loss function, two neural network configurations were chosen, shown in Figure A1 and Figure A2. These configurations were obtained using all training data. The numbers to the right of the variables in the figures correspond to pressure levels. The designations used in the neural networks are given in Table 1. The input atmospheric characteristics, the number of layers and the neural network structure are determined automatically by the learning algorithm.
These configurations correspond to minima of the loss function; the values of the loss function are close to 0.04″. At the same time, the configuration shown in Figure A2 reproduces the seeing variations better. Figure 6 and Figure 7 show changes in the modeled and measured seeing values for neural network configurations 1 and 2, respectively.
For these configurations, the linear Pearson correlation coefficient between the model and measured seeing values reached 0.67 and 0.7, respectively (the training datasets). For the validation dataset, the linear Pearson correlation coefficient for configuration 1 was 0.49; for configuration 2 the correlation coefficient increased to 0.52. Thus, using all data, the efficiency of training a neural network that predicts variations of seeing is not very high.
The training process identified the important inputs. Analysis of the obtained neural network configurations shows that the main parameters determining the target values of total seeing are the northward surface turbulent stresses and the wind speed components at the model levels closest to the summit, that is, 650 and 700 hPa. We should also emphasize that there is a degradation of the statistical measures associated with excluding the surface turbulent stresses from the training process: neural network configurations obtained without surface turbulent stresses demonstrate low correlation coefficients, ∼ 0.3. Meaningful characteristics in determining the atmospheric seeing are also the wind speed components at the 250 hPa level and the air temperature at 2 m, with minor contributions. Atmospheric situations with high air humidity show a negligible influence.
The development of neural network configurations using the GMDH method for the Maidanak astronomical observatory was complicated by certain conditions. At the Maidanak Astronomical Observatory, atmospheric conditions with low optical turbulence energy along the line of sight and, more importantly, with small amplitudes of night-to-night changes in the magnitude of seeing are often observed. In order to optimize the learning process and find a network with better reproducibility of seeing variations, we filtered the initial data. The conditions of filtering are:
i) We chose only atmospheric situations with a cloud fraction in the computational cell of less than 0.3.
ii) We excluded nights when the vertical profiles of wind speed and air temperature obtained from the reanalysis data deviated significantly from the reference vertical profiles (from data measured at the Dzhambul radiosounding station).
iii) We retained only nights with more than 50 measurements of optical turbulence per night. Nights with a low quantity of measurement data correspond to unfavorable atmospheric conditions (strong surface winds and upper cloudiness).
Since the re-analysis demonstrates the highest deviations precisely for the lower layers of the atmosphere, we excluded most of the atmospheric processes when the seeing was determined primarily by the influence of low-level turbulence. In particular, 20 percent of nights, corresponding to the highest deviations in air temperature and wind speed in the lower atmospheric layers, have been excluded. The corresponding configuration of an optimal neural network is shown in Figure A3.
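The night-selection procedure can be expressed as a simple mask (Python/NumPy; the array names and the per-night diagnostics are assumptions, with the thresholds quoted in the text):

```python
import numpy as np

def select_nights(cloud_fraction, profile_deviation, n_measurements):
    """Boolean mask implementing the filtering described in the text:
    (i)   cloud fraction in the computational cell below 0.3,
    (ii)  dropping the 20% of nights with the largest low-level
          reanalysis-vs-radiosonde deviation,
    (iii) more than 50 seeing measurements per night.
    One value per night in each array."""
    keep = (cloud_fraction < 0.3) & (n_measurements > 50)
    threshold = np.percentile(profile_deviation, 80.0)  # worst 20% excluded
    keep &= profile_deviation <= threshold
    return keep

# Hypothetical per-night diagnostics for ten nights
clouds = np.array([0.1, 0.5, 0.2, 0.0, 0.4, 0.1, 0.2, 0.3, 0.1, 0.0])
dev = np.array([1.0, 2.0, 3.5, 0.5, 1.2, 4.0, 0.8, 1.1, 2.5, 0.9])
counts = np.array([80, 120, 60, 40, 90, 200, 75, 55, 30, 100])
mask = select_nights(clouds, dev, counts)
```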
For this configuration, Figure 8 shows the changes in the modeled and measured seeing. The correlation coefficient between the measured and modeled variations is higher than for configurations 1 and 2 and is equal to 0.68. The neurons of this network contain such atmospheric variables as $u$ at 225 hPa and $v$ in the lower atmospheric layers (700 hPa). For the neural network shown in Figure A3, large-scale atmospheric advection begins to play the greatest role ($u$ at 550 hPa). Using this neural network, we also estimated the median value of seeing at the Maidanak observatory site during the period from January to October 2023. This median value of the seeing is 0.73″.
Analysis of the neural networks obtained shows that, in most configurations, individual strong connections between neurons are replaced by connections with nearly equal weight coefficients. Unlike for the Sayan Solar Observatory, the deep neural networks obtained for the Maidanak Astronomical Observatory do not contain pronounced connections between the seeing parameter and atmospheric vorticities [35]. Moreover, the use of atmospheric vorticities in the simulation even slightly reduces the Pearson correlation coefficient between the modeled and measured seeing values. For neural networks containing atmospheric vorticities, the Pearson correlation coefficient is reduced to less than 0.45. In our opinion, this is due to the fact that the effect of large-scale atmospheric vorticity on optical turbulence at the Maidanak Astronomical Observatory site is minimal and is noticeable only in individual time periods, when the seeing value increases.

5. Conclusion

The following is a summary of the conclusions.
This paper focuses on developing physically informed deep neural networks as well as machine learning methods to predict seeing. We proposed ensemble models of multi-layer neural networks for the estimation of seeing at the Maidanak Observatory. As far as we know, this is the first attempt to simulate seeing variations with neural networks at the Maidanak observatory. The neural networks are based on a physical model of the relationship between the characteristics of small-scale atmospheric turbulence and large-scale meteorological characteristics relevant to the Maidanak Astronomical Observatory.
For the first time, configurations of a deep neural network have been obtained for estimating the seeing. The neurons of these networks are linear, quadratic, cubic and covariance functions of large-scale meteorological characteristics at different heights in the boundary layer and free atmosphere. We have shown that the use of a different set of inputs makes it possible to estimate the influence of large-scale atmospheric characteristics on variations in the turbulent parameter seeing. In particular, the present paper shows that:
(i)
The seeing parameter depends weakly on meso-scale and large-scale atmospheric vorticity, but is significantly sensitive to the characteristics of the atmospheric surface layer. In particular, for neural networks containing atmospheric vorticities, the Pearson correlation coefficient is low, ∼0.45.
(ii)
The air temperature and wind speed at the pressure levels closest to the observatory, as well as the northward turbulent surface stress, have a significant impact on the seeing. Including the northward turbulent surface stress in the training process makes it possible to significantly improve the retrieval of seeing variations (the Pearson correlation coefficient increases from 0.45 to ∼0.70). The median value of seeing estimated with the neural networks at the Maidanak observatory site during the period from January to October 2023 is 0.73″.
(iii)
The influence of the upper atmospheric layers (below the 200 hPa surface) becomes noticeable for selected atmospheric situations in which, as we assume, the re-analysis best reproduces the large-scale meteorological fields.
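The neuron inputs described in the conclusions above (linear, quadratic, cubic and covariance functions of the meteorological variables) can be sketched as a feature expansion feeding the network. The helper name and column layout below are hypothetical:

```python
import numpy as np
from itertools import combinations

def polynomial_neurons(X):
    """Expand meteorological inputs into candidate neuron inputs:
    linear, quadratic and cubic terms of each variable, plus pairwise
    products (covariance-like cross terms) between variables.

    X: array of shape (n_samples, n_features); a column could hold,
    e.g., u, v, t or q at one pressure level (hypothetical layout).
    """
    cross = [(X[:, i] * X[:, j])[:, None]
             for i, j in combinations(range(X.shape[1]), 2)]
    return np.hstack([X, X**2, X**3] + cross)

# Toy example: 4 input variables expand into 3*4 + C(4,2) = 18 terms
X = np.random.default_rng(2).normal(size=(5, 4))
features = polynomial_neurons(X)  # shape (5, 18)
```

Candidate terms of this kind are the building blocks of GMDH-style self-organizing networks [31,32,33], where training selects which of the expanded inputs survive into the final configuration.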
Verification of hourly averaged vertical profiles of wind speed and air temperature derived from the Era-5 re-analysis database has been performed. We compared the semi-empirical vertical profiles of the Era-5 re-analysis with radiosonde data from the Dzhambul station, which is located within the region of the Maidanak Astronomical Observatory. The largest deviations correspond to the lower layers of the atmosphere and to the pressure levels of large-scale jet stream formation. In winter, ΔT, ΔV, σT and σV are 1.3 K, 2.8 m/s, 1.7 K and 3.3 m/s, respectively. In summer, these statistics have similar values: 1.7 K, 2.2 m/s, 2.3 K and 2.7 m/s, respectively.
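The verification statistics for one pressure level and season can be sketched as follows. This assumes Δ denotes the mean model-minus-observation difference (a bias) and σ its standard deviation; the function name and toy values are illustrative:

```python
import numpy as np

def profile_verification(model, obs):
    """Mean difference (Delta) and standard deviation (sigma) of
    model-minus-observation differences at one pressure level,
    matching the kind of statistics quoted in the text (assuming
    Delta is a mean bias rather than a mean absolute deviation).
    """
    d = np.asarray(model, float) - np.asarray(obs, float)
    return d.mean(), d.std(ddof=1)

# Toy example with a known bias of 1.3 K and spread of 1.7 K
rng = np.random.default_rng(3)
obs = rng.normal(280.0, 5.0, 2000)              # synthetic radiosonde temperatures
model = obs + 1.3 + rng.normal(0.0, 1.7, 2000)  # biased, noisy re-analysis
delta_t, sigma_t = profile_verification(model, obs)
```

Applying the same computation per pressure level yields vertical profiles of Δ and σ such as those in Figures 2 and 3.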

Author Contributions

A.V.K. was engaged in software development and data validation; A.Y.S., K.E.K. and P.G.K. developed the methodology and performed the investigation; E.A.K. conducted the formal analysis; S.A.E. and Y.A.T. were engaged in measurements and initial data analysis. All authors have read and agreed to the published version of the manuscript.

Funding

Section 3.1 was supported by the Ministry of Science and Higher Education of the Russian Federation. The development of neural networks for simulations of night-time seeing variations at the Maidanak Observatory was funded by RSF grant № 23-72-00041.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are available upon request.

Acknowledgments

The approaches were previously tested using the Unique Research Facility "Large Solar Vacuum Telescope" (http://ckp-rf.ru/usu/200615/).

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Figure A1. Neural network for estimating the seeing parameter. Configuration 1.
Figure A2. Neural network for estimating the seeing parameter. Configuration 2.
Figure A3. Neural network for estimating the seeing parameter for the chosen atmospheric cases. Numbers to the right of the variables correspond to pressure levels.

References

1. Panchuk, V.E.; Afanas'ev, V.L. Astroclimate of Northern Caucasus—Myths and reality. Astrophysical Bulletin 2011, 66, 233–254.
2. Hellemeier, J.A.; Yang, R.; Sarazin, M.; Hickson, P. Weather at selected astronomical sites – An overview of five atmospheric parameters. Monthly Notices of the Royal Astronomical Society 2019, 482, 4941–4950.
3. Tokovinin, A. The Elusive Nature of "Seeing". Atmosphere 2023, 14, 1694.
4. Parada, R.; Rueda-Teruel, S.; Monzo, C. Local Seeing Measurement for Increasing Astrophysical Observatory Quality Images Using an Autonomous Wireless Sensor Network. Sensors 2020, 20, 3792.
5. Hidalgo, S.L.; Muñoz-Tuñón, C.; Castro-Almazán, J.A.; Varela, A.M. Canarian Observatories Meteorology; Comparison of OT and ORM using Regional Climate Reanalysis. Publications of the Astronomical Society of the Pacific 2021, 133, 105002.
6. Vernin, J.; Munoz-Tunon, C.; Hashiguchi, H.; Lawrence, D. Optical seeing at La Palma Observatory. I - General guidelines and preliminary results at the Nordic Optical Telescope. Astronomy and Astrophysics 1992, 257, 811–816.
7. Tokovinin, A. From differential image motion to seeing. Publications of the Astronomical Society of the Pacific 2002, 114, 1156–1166.
8. Cherubini, T.; Businger, S.; Lyman, R.; Chun, M. Modeling optical turbulence and seeing over Mauna Kea. Journal of Applied Meteorology and Climatology 2008, 47, 1140–1155.
9. Rimmele, T.R.; Warner, M.; Keil, S.L.; Goode, P.R.; Knölker, P.; Kuhn, J.R.; Rosner, R.R.; McMullin, J.P.; Casini, R.; Lin, H.; Wöger, F.; von der Lühe, O.; Tritschler, A.; Davey, A.; de Wijn, A.; Elmore, D.F.; Fehlmann, A.; Harrington, D.M.; Jaeggli, S.A.; Rast, M.P.; Schad, T.A.; Schmidt, W.; Mathioudakis, M.; Mickey, D.L.; Anan, T.; Beck, C.; Marshall, H.K.; Jeffers, P.F.; Oschmann, J.M., Jr. The Daniel K. Inouye Solar Telescope – Observatory Overview. Solar Physics 2020, 295, 172.
10. Grigoryev, V.M.; Demidov, M.L.; Kolobov, D.Yu.; Pulyaev, V.A.; Skomorovsky, V.I.; Chuprakov, S.A. Project of the Large Solar Telescope with mirror 3 m in diameter. Journal of Solar-Terrestrial Physics 2020, 6, 14–29.
11. Wang, Z.; Zhang, L.; Kong, L.; Bao, H.; Guo, Y.; Rao, X.; Zhong, L.; Zhu, L.; Rao, C. A modified S-DIMM+: Applying additional height grids for characterizing daytime seeing profiles. Monthly Notices of the Royal Astronomical Society 2018, 478, 1459–1467.
12. Zhong, L.; Zhang, L.; Shi, Z.; Tian, Y.; Guo, Y.; Kong, L.; Rao, X.; Bao, H.; Zhu, L.; Rao, C. Wide field-of-view, high-resolution Solar observation in combination with ground layer adaptive optics and speckle imaging. Astronomy and Astrophysics 2020, 637, A99.
13. Lotfy, E.R.; Abbas, A.A.; Zaki, S.A.; Harun, Z. Characteristics of Turbulent Coherent Structures in Atmospheric Flow Under Different Shear–Buoyancy Conditions. Boundary-Layer Meteorology 2019, 173, 115–141.
14. Burgan, H.I. Comparison of different ANN (FFBP GRNN F) algorithms and multiple linear regression for daily streamflow prediction in Kocasu river, Turkey. Fresenius Environmental Bulletin 2022, 31, 4699–4708.
15. Hou, X.; Hu, Y.; Du, F.; Ashley, M.C.B.; Pei, C.; Shang, Z.; Ma, B.; Wang, E.; Huang, K. Machine learning-based seeing estimation and prediction using multi-layer meteorological data at Dome A, Antarctica. Astronomy and Computing 2023, 43, 100710.
16. Wang, Y.; Basu, S. Using an artificial neural network approach to estimate surface-layer optical turbulence at Mauna Loa. Optics Letters 2016, 41, 2334–2337.
17. Eris, E.; Cavus, Y.; Aksoy, H.; Burgan, H.I.; Aksu, H.; Boyacioglu, H. Spatiotemporal analysis of meteorological drought over Kucuk Menderes River Basin in the Aegean Region of Turkey. Theoretical and Applied Climatology 2020, 142, 1515–1530.
18. Jellen, C.; Oakley, M.; Nelson, C.; Burkhardt, J.; Brownell, C. Machine-learning informed macro-meteorological models for the near-maritime environment. Applied Optics 2021, 60, 2938–2951.
19. Su, C.; Wu, X.; Luo, T.; Wu, S.; Qing, C. Adaptive niche-genetic algorithm based on backpropagation neural network for atmospheric turbulence forecasting. Applied Optics 2020, 59, 3699–3705.
20. Su, C.; Wu, X.; Wu, S.; Yang, Q.; Han, Y.; Qing, C.; Luo, T.; Liu, Y. In situ measurements and neural network analysis of the profiles of optical turbulence over the Tibetan Plateau. Monthly Notices of the Royal Astronomical Society 2021, 506, 3430–3438.
21. Bi, C.; Qing, C.; Wu, P.; Jin, X.; Liu, Q.; Qian, X.; Zhu, W.; Weng, N. Optical Turbulence Profile in Marine Environment with Artificial Neural Network Model. Remote Sensing 2022, 14, 2267.
22. Cherubini, T.; Lyman, R.; Businger, S. Forecasting seeing for the Maunakea observatories with machine learning. Monthly Notices of the Royal Astronomical Society 2021, 509, 232–245.
23. Li, Y.; Zhang, X.; Li, L.; Shi, L.; Huang, Y.; Fu, S. Multistep ahead atmospheric optical turbulence forecasting for free-space optical communication using empirical mode decomposition and LSTM-based sequence-to-sequence learning. Frontiers in Physics 2023, 11.
24. Zilitinkevich, S.; Elperin, T.; Kleeorin, N.I.; Rogachevskii, I.; Esau, I. A Hierarchy of Energy- and Flux-Budget (EFB) Turbulence Closure Models for Stably-Stratified Geophysical Flows. Boundary-Layer Meteorology 2013, 146, 341–373.
25. Odintsov, S.L.; Gladkikh, V.A.; Kamardin, A.P.; Nevzorova, I.V. Height of the Mixing Layer under Conditions of Temperature Inversions: Experimental Data and Model Estimates. Atmospheric and Oceanic Optics 2022, 35, 721–731.
26. Nosov, V.V.; Lukin, V.P.; Nosov, E.V.; Torgaev, A.V. Formation of Turbulence at Astronomical Observatories in Southern Siberia and North Caucasus. Atmospheric and Oceanic Optics 2019, 32, 464–482.
27. Hersbach, H.; Bell, B.; Berrisford, P.; Hirahara, S.; Horányi, A.; Muñoz-Sabater, J.; Nicolas, J.; Peubey, C.; Radu, R.; Schepers, D.; Simmons, A.; Soci, C.; Abdalla, S.; Abellan, X.; Balsamo, G.; Bechtold, P.; Biavati, G.; Bidlot, J.; Bonavita, M.; De Chiara, G.; Dahlgren, P.; Dee, D.; Diamantakis, M.; Dragani, R.; Flemming, J.; Forbes, R.; Fuentes, M.; Geer, A.; Haimberger, L.; Healy, S.; Hogan, R.J.; Hólm, E.; Janisková, M.; Keeley, S.; Laloyaux, P.; Lopez, P.; Lupu, C.; Radnoti, G.; de Rosnay, P.; Rozum, I.; Vamborg, F.; Villaume, S.; Thépaut, J.-N. The ERA5 global reanalysis. Quarterly Journal of the Royal Meteorological Society 2020, 146, 1999–2049.
28. Huang, J.; Wang, M.; Qing, H.; Guo, J.; Zhang, J.; Liang, X. Evaluation of Five Reanalysis Products With Radiosonde Observations Over the Central Taklimakan Desert During Summer. Earth and Space Science 2021, 8, e2021EA001707.
29. Qing, C.; Wu, X.; Li, X.; Luo, T.; Su, C.; Zhu, W. Mesoscale optical turbulence simulations above Tibetan Plateau: first attempt. Optics Express 2020, 28, 4571–4586.
30. Bi, C.; Qing, C.; Qian, X.; Luo, T.; Zhu, W.; Weng, N. Investigation of the Global Spatio-Temporal Characteristics of Astronomical Seeing. Remote Sensing 2023, 15, 2225.
31. Ivakhnenko, A.G. Heuristic Self-Organization in Problems of Engineering Cybernetics. Automatica 1970, 6, 207–219.
32. Ivakhnenko, A.G.; Ivakhnenko, G.A.; Mueller, J.A. Self-Organization of Neuronets with Active Neurons. International Journal of Pattern Recognition and Image Analysis: Advances in Mathematical Theory and Applications 1994, 4, 177–188.
33. Stepashko, V. Developments and Prospects of GMDH-Based Inductive Modeling. Advances in Intelligent Systems and Computing 2018, 689, 474–491.
34. Bolbasova, L.A.; Andrakhanov, A.A.; Shikhovtsev, A.Yu. The application of machine learning to predictions of optical turbulence in the surface layer at Baikal Astrophysical Observatory. Monthly Notices of the Royal Astronomical Society 2021, 504, 6008–6017.
35. Shikhovtsev, A.Yu.; Kovadlo, P.G.; Kiselev, A.V.; Eselevich, M.V.; Lukin, V.P. Application of Neural Networks to Estimation and Prediction of Seeing at the Large Solar Telescope Site. Publications of the Astronomical Society of the Pacific 2023, 135, 014503.
Figure 1. The number of nights N_nig by month.
Figure 2. Vertical profiles of ΔT [K], ΔV [m/s], σT [K] and σV [m/s] in winter.
Figure 3. Vertical profiles of ΔT [K], ΔV [m/s], σT [K] and σV [m/s] in summer.
Figure 4. Histograms of measured seeing values at the Maidanak observatory for two periods: 1996–2003 and 2018–2022. N_i is the number of cases.
Figure 5. Flowchart for the creation of neural networks.
Figure 6. Changes in model and measured seeing values, estimated using the validation dataset, for neural network configuration 1. Line 1: measured seeing; line 2: modeled seeing.
Figure 7. Changes in model and measured seeing values, estimated using the validation dataset, for neural network configuration 2. Line 1: measured seeing; line 2: modeled seeing.
Figure 8. Changes in model and measured seeing values, estimated using the validation dataset, for the neural network configuration obtained for the chosen atmospheric cases. Line 1: measured seeing; line 2: modeled seeing.
Table 1. Designations used in neural networks.
nsss: northward turbulent surface stress
u: u-component of wind
v: v-component of wind
w: w-component of wind
q: specific humidity
t: air temperature
t2m: air temperature at a height of 2 m