Article

Application of Machine Learning Methods for Identifying Wave Aberrations from Combined Intensity Patterns Generated Using a Multi-Order Diffractive Spatial Filter

by Pavel A. Khorin 1,*, Aleksey P. Dzyuba 1, Aleksey V. Chernykh 1,2, Muhammad A. Butt 1 and Svetlana N. Khonina 1

1 Samara National Research University, Samara 443086, Russia
2 ITMO University, Saint-Petersburg 197101, Russia
* Author to whom correspondence should be addressed.
Technologies 2025, 13(6), 212; https://doi.org/10.3390/technologies13060212
Submission received: 9 April 2025 / Revised: 13 May 2025 / Accepted: 21 May 2025 / Published: 26 May 2025
(This article belongs to the Section Information and Communication Technologies)

Abstract:
A multi-order combined diffraction spatial filter, integrated with a set of Zernike phase functions (representing wavefront aberrations) and Zernike polynomials, enables the simultaneous formation of multiple aberration-transformed point spread function (PSF) patterns in a single plane. This is achieved using an optical Fourier correlator and provides significantly more information than a single PSF captured in focal or defocused planes—all without requiring mechanical movement. To analyze the resulting complex intensity patterns, which include 49 diffraction orders, a convolutional neural network based on the Xception architecture is employed. This model effectively identifies wavefront aberrations up to the fourth Zernike order. After 80 training epochs, the model achieved a mean absolute error (MAE) of no more than 0.0028. Additionally, a five-fold cross-validation confirmed the robustness and reliability of the approach. For the experimental validation of the proposed multi-order filter, a liquid crystal spatial light modulator was used. Optical experiments were conducted using a Fourier correlator setup, where aberration fields were generated via a digital micromirror device. The experimental results closely matched the simulation data, confirming the effectiveness of the method. New advanced aberrometers and multichannel diffractive optics technologies can be used in industry for the quality control of optical elements, assessing optical system alignment errors, and the early-stage detection of eye diseases.

1. Introduction

In recent decades, machine learning (ML) has made remarkable strides and is now being widely adopted across a broad range of scientific and engineering disciplines. One such rapidly evolving area is optics and photonics, where ML techniques are emerging as powerful tools for the analysis and processing of optical signals [1,2,3,4,5,6]. In particular, the identification of wavefront aberrations is a complex task that demands both high accuracy and rapid processing. The application of ML algorithms enables the efficient processing of interferometric measurement data and the accurate classification of aberration types, significantly streamlining the overall analysis process. In particular, the use of classifiers to identify the topological charges of optical vortices from wavefront sensor data has demonstrated exceptionally high accuracy, approaching 100% on test datasets [7,8,9,10,11]. Digital holographic interferometry leverages advanced components such as spatial light modulators (SLMs), which, when combined with ML techniques, enable not only real-time wavefront measurement, but also precise control over its shape [12,13,14]. ML methods can be used to detect aberrations using interferograms with different types of reference beams [15,16,17,18,19], which allows for compensation for aberrations and improves image quality and measurement sensitivity.
Modern diffraction methods, including multi-order diffraction spatial filters [20,21,22,23,24,25,26,27], allow for the creation of combined intensity patterns [28,29] containing information about wave aberrations. These patterns can be used to train ML models [30,31,32], which opens new horizons for the automation and increased diagnostic accuracy of optical systems [33,34]. Most convolutional networks comprise two main parts: feature extraction and classification. The feature extraction part recognizes image features [35] and is what distinguishes a convolutional network from other network types [36]. At this stage, the task is to identify and extract the main patterns and information from the visual data; it usually consists of Conv+ReLU+Pooling blocks repeated the required number of times. The second part, classification, performs the final transition from the obtained feature map to task-specific probabilities [37]. Here, it is typical to bring the data into a convenient form using a Flatten layer, which passes the data to a Fully Connected layer and a final Softmax layer.
In this paper, we explore the application of ML techniques for the identification of wavefront aberrations based on combined intensity patterns produced by a multi-order diffraction spatial filter. A feature of combined intensity patterns is the conjunction of distributions formed on the basis of both standard and Zernike phase functions. Different types of distributions correspond to different diffraction orders formed at given locations on the focal plane. This approach allows significantly more information about the studied wavefront to be obtained, although it complicates the focal pattern. Machine learning methods are used to analyze the obtained complex combined distributions. We review the existing methodologies and introduce novel approaches aimed at enhancing both the accuracy and efficiency of the aberration identification process.

2. Materials and Methods

2.1. Theoretical Foundations

Consider an aberrated wavefront, described by the following expression:
$$w(r,\varphi) = \exp[i\psi(r,\varphi)] \tag{1}$$
where the phase can be represented as a superposition of Zernike functions [38,39]:
$$\psi(r,\varphi) = \sum_{n,m} c_{nm} Z_n^m(r,\varphi) \tag{2}$$
where $r$ and $\varphi$ are polar coordinates, $c_{nm}$ is the aberration weight coefficient, $Z_n^m(r,\varphi)$ are the Zernike functions of order $(n, m)$, $n$ is the radial index, and $m$ is the azimuthal index [40,41]:
$$Z_n^m(r,\varphi) = A_n R_n^m(r) \begin{Bmatrix}\cos(m\varphi)\\ \sin(m\varphi)\end{Bmatrix} \tag{3}$$
where $A_n = \sqrt{\dfrac{n+1}{\pi r_0^2}}$ and $R_n^m(r)$ are the radial Zernike polynomials:
$$R_n^m(r) = \sum_{p=0}^{(n-m)/2} \frac{(-1)^p (n-p)!}{p!\left(\frac{n+m}{2}-p\right)!\left(\frac{n-m}{2}-p\right)!}\left(\frac{r}{r_0}\right)^{n-2p} \tag{4}$$
where $|m| \le n$ and $(n - m)$ is even.
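The definitions (3) and (4) can be sketched in a few lines of NumPy. This is an illustrative transcription, not code from the paper; the function names and the default unit radius $r_0 = 1$ are assumptions:

```python
import math
import numpy as np

def zernike_radial(n, m, r):
    """Radial Zernike polynomial R_n^m(r), Eq. (4), for normalized radius r in [0, 1]."""
    m = abs(m)
    total = np.zeros_like(r, dtype=float)
    for p in range((n - m) // 2 + 1):
        coef = ((-1) ** p * math.factorial(n - p)
                / (math.factorial(p)
                   * math.factorial((n + m) // 2 - p)
                   * math.factorial((n - m) // 2 - p)))
        total += coef * r ** (n - 2 * p)
    return total

def zernike(n, m, r, phi, r0=1.0):
    """Zernike function Z_n^m(r, phi), Eq. (3): cosine branch for m >= 0, sine for m < 0."""
    A = math.sqrt((n + 1) / (math.pi * r0 ** 2))
    ang = np.cos(m * phi) if m >= 0 else np.sin(-m * phi)
    return A * zernike_radial(n, m, r / r0) * ang
```

For example, `zernike_radial(2, 0, r)` reproduces the familiar defocus polynomial $2r^2 - 1$.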
To identify wave aberrations, it is proposed to use combined intensity patterns formed by a multi-order diffraction spatial filter. In [42,43], a method is proposed to decompose the incident wavefront into a set of diffraction orders, with each order’s amplitude proportional to the corresponding coefficient of the Zernike polynomial expansion. This type of wavefront sensor enables the direct measurement of the Zernike components of an aberrated wavefront. The complex transmission function of the multi-order diffractive optical element (DOE) is defined as follows:
$$\tau_1(x,y) = \sum_{p,q} Z_p^q(x,y)\exp[i(a_{pq}x + b_{pq}y)] \tag{5}$$
When dealing with high aberrations (>0.4λ), where significant blurring of the focal spot occurs, intensity distribution analysis across one or more planes of the aberrated optical system is more appropriate [44,45,46].
As the magnitude of wavefront aberration increases, the linear approximation of the wavefront becomes inadequate. This is due to the growing significance of the second- and higher-order terms in the Taylor series expansion, which can result in the misidentification of aberrations. To enable an objective evaluation of both the magnitude and type of aberration, a diffractive optical element (DOE) consistent with wavefront aberrations—specifically Zernike phase functions—was developed, as described in [28,29]. The numerical simulations and experimental results presented in these studies demonstrate the accurate detection of wavefront aberrations with amplitudes up to one wavelength (λ), with potential for further extension of this range [28,29]. The complex transmission function of this DOE is given as follows:
$$\tau_2(x,y) = \sum_{p,q,k} \exp[i d_{pqk} Z_p^q(x,y)]\exp[i(a_{pqk}x + b_{pqk}y)] \tag{6}$$
where $d_{pqk}$ is the weighting coefficient of the encoded wave aberration $Z_p^q$, and $a_{pqk}$, $b_{pqk}$ are the spatial carriers.
It is worth noting that the weak aberrations (≤0.1λ) of the wavefront are well detected using spatial filters (5) matched with the Zernike function basis, including the multichannel diffractive optical elements. In the case of aberrations with a pronouncedly distorted PSF (>0.1λ) and strong aberrations (>0.4λ), it makes sense to use methods oriented toward analyzing the intensity distribution pattern formed by the aberrated optical system. In this paper, we propose a DOE for forming combined intensity patterns corresponding to wave aberrations and Zernike polynomials. The complex transmission function of the DOE has the following form:
$$\tau(x,y) = \tau_2(x,y) + \tau_1(x,y) = \sum_{p,q,k} \exp[i d_{pqk} Z_p^q(x,y)]\exp[i(a_{pqk}x + b_{pqk}y)] + \sum_{p,q} Z_p^q(x,y)\exp[i(\tilde{a}_{pq}x + \tilde{b}_{pq}y)], \tag{7}$$
where $a_{pqk}$, $b_{pqk}$, $\tilde{a}_{pq}$, $\tilde{b}_{pq}$ are the corresponding spatial carriers.
The field formed by the DOE (5)–(7) in the far diffraction zone or in the focal plane of the lens can be described using the Fresnel transform:
$$G(\rho,\theta) = -\frac{i}{\lambda f}\int_0^R\!\!\int_0^{2\pi} \tau(r,\varphi)\exp\left[-\frac{i 2\pi}{\lambda f}\, r\rho\cos(\theta-\varphi)\right] r\,dr\,d\varphi \tag{8}$$
where λ is the wavelength, f is the focal length of the lens, and R is the radius of the optical element.
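Up to constant factors, Equation (8) is a two-dimensional Fourier transform of the aperture function, so in numerical simulation it can be approximated on a sampled grid with an FFT. A minimal sketch (the function name and sampling parameters are assumptions, not from the paper):

```python
import numpy as np

def focal_intensity(field, wavelength, f, dx):
    """Approximate the focal-plane intensity of Eq. (8) with a discrete 2D FFT.

    field: complex aperture function tau(x, y) sampled on an N x N grid;
    dx: sample spacing in the aperture plane (m). The dx**2 / (wavelength * f)
    factor supplies the physical amplitude normalization of the Fourier lens.
    """
    G = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(field))) * dx ** 2 / (wavelength * f)
    return np.abs(G) ** 2
```

For a flat circular aperture, this reproduces the expected Airy-like pattern with its peak at the center of the focal plane.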

2.2. Dataset

To prepare a combined intensity pattern dataset, it is necessary to calculate the diffraction of a variable aberrated wavefront (1) on a multichannel DOE (7).
Multichannel diffractive optical elements allow the amount of information about the wavefront to be increased, including its phase, in one recorded plane, thanks to several channels that differ in coded functions and location. For the first part of the dataset, the wavefront is defined as one wave aberration:
$$w(r,\varphi) = \exp[i\psi(r,\varphi)],\quad \psi(r,\varphi) = c_{nm} Z_n^m(r,\varphi) \tag{9}$$
The type of aberration $(n, m)$ varies over $0 \le n \le 4$, $0 \le m \le n$, which corresponds to tilt (distortion), astigmatism, defocus, trefoil, coma, quatrefoil, 2nd-order astigmatism, and spherical aberration. The aberration weight $c_{nm}$ varies from 0 to 0.3 with a step of 0.05.
For the second part of the dataset, the wavefront is defined as a superposition of two wave aberrations:
$$w(r,\varphi) = \exp[i\psi(r,\varphi)],\quad \psi(r,\varphi) = c_{nm} Z_n^m(r,\varphi) + c_{ij} Z_i^j(r,\varphi) \tag{10}$$
Based on the obtained set of aberrated wavefronts, a dataset is calculated, which is a set of combined intensity patterns formed by a multi-order diffraction spatial filter (7):
$$|G(\rho,\theta)|^2 = \frac{1}{\lambda^2 f^2}\left|\int_0^R\!\!\int_0^{2\pi} w(r,\varphi)\,\tau(r,\varphi)\exp\left[-\frac{i 2\pi}{\lambda f}\, r\rho\cos(\theta-\varphi)\right] r\,dr\,d\varphi\right|^2 \tag{11}$$
Thus, the dataset size is 2352 images of 256 × 256 pixels with combined intensity patterns. From the dataset, 1568, 392, and 392 images were used for training, testing, and validation, respectively.
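A reproducible partition into the stated subset sizes (1568/392/392 out of 2352) might be sketched as follows; the seed and function name are illustrative:

```python
import numpy as np

def split_dataset(num_images=2352, train=1568, test=392, val=392, seed=0):
    """Randomly partition image indices into train/test/validation subsets
    with the sizes used in the paper."""
    assert train + test + val == num_images
    rng = np.random.default_rng(seed)
    idx = rng.permutation(num_images)
    return idx[:train], idx[train:train + test], idx[train + test:]
```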

2.3. CNN Architecture

A convolutional neural network based on the Xception architecture adapted to the specifics of the task is employed to analyze the generated combined intensity patterns—including 49 diffraction orders—and to identify wavefront aberrations up to the 4th order in terms of Zernike functions. The original Xception architecture was modified to better suit the recognition of Zernike-based wavefront aberrations up to the order n = 4. Instead of the standard Softmax classifier, a custom regression-oriented head was implemented, consisting of a GlobalAveragePooling2D layer to adjust the dimensionality of the feature maps, followed by a Dropout layer to reduce overfitting, and a Dense layer with a linear activation function to enable continuous value prediction. The output layer was set to a dimension of 8, corresponding to the number of Zernike coefficients being estimated.
The model uses the mean absolute error (MAE) as the loss function, which is well suited for regression tasks where the precise estimation of continuous variables is required. The RMSprop optimizer was selected due to its ability to stabilize weight updates in the presence of potentially high-variance gradients produced by MAE. By maintaining a moving average of squared gradients, RMSprop provides smoother and more consistent updates. Its adaptive learning rate further helps avoid large oscillations during training, contributing to stable convergence, particularly important in deep architectures like Xception. These characteristics make RMSprop especially effective for regression problems such as wavefront aberration estimation. The network architecture, including the Layer Type, Kernel Size, Strides, and Output Size, is presented in Table 1, Table 2 and Table 3 for Entry, Middle, and Exit Flow, respectively.
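The inference-time computation of the regression head described above can be illustrated with a plain NumPy forward pass. This is a sketch, not the authors' Keras implementation: dropout acts as the identity at inference, and the weight shapes shown here are assumptions:

```python
import numpy as np

def regression_head(feature_maps, W_dense, b):
    """Forward pass of the custom regression head: GlobalAveragePooling2D over
    each feature map, then a Dense layer with linear activation producing the
    8 Zernike coefficients. feature_maps: (H, W, C); W_dense: (C, 8); b: (8,)."""
    pooled = feature_maps.mean(axis=(0, 1))  # GlobalAveragePooling2D -> (C,)
    return pooled @ W_dense + b              # Dense(8, activation='linear')

def mae(y_true, y_pred):
    """Mean absolute error, the loss used for coefficient regression."""
    return np.mean(np.abs(y_true - y_pred))
```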
To assess the robustness and generalizability of our model, we conducted a 5-fold cross-validation. The results were consistent across all folds, indicating that the model effectively captured stable patterns from the data rather than overfitting to any specific subset. This consistency suggests that the dataset is well balanced, and that the model’s performance is not sensitive to the particular choice of training and testing partitions. Cross-validation plays a critical role in evaluating model reliability, as it mitigates the risk of performance bias caused by a single data split. By averaging the results across multiple folds, we obtain a more reliable and realistic estimate of the model’s expected performance in real-world scenarios, while also reducing the likelihood of overfitting or underfitting.
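The fold generation behind such a validation scheme can be sketched as follows (seed and naming are assumptions):

```python
import numpy as np

def kfold_indices(num_samples, k=5, seed=0):
    """Yield (train_idx, val_idx) pairs for k-fold cross-validation: the data
    are shuffled once, split into k folds, and each fold in turn serves as the
    validation set while the rest are used for training."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(num_samples)
    folds = np.array_split(idx, k)
    for i in range(k):
        val = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, val
```

Averaging the per-fold MAE values then gives the overall performance estimate described above.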

2.4. Optical Scheme

The validation of the numerical results for the filter’s performance, aligned with wave aberrations and Zernike functions, was carried out in an experimental setup utilizing two SLMs, the first of which acted as the beam generator with a test-controlled aberration wavefront. The main configuration of this setup is illustrated in Figure 1. The second SLM realized the pre-calculated phase DOE to determine the wavefront components of Zernike polynomials.
A DPSS laser (shown as “Laser” in Figure 1) with a wavelength of 532 nm was used as the coherent light source. The wide beam with a flat wavefront was shaped by a spatial filtering unit consisting of a 20× microscope objective (MO), a 10 µm diameter pinhole (PH), and a collimating lens (L1) with a focal length of 15 cm. Two mirrors (M1 and M2) directed the beam toward the first SLM, a Texas Instruments DLP6500 digital micromirror device (DMD) with a 0.65-inch diagonal and 1080p resolution. A pattern encoding the aberration combination was applied to the DMD as a binary hologram. The effective region of the pattern formed a circle 680 pixels in diameter at a 7.65 µm pixel pitch, with a typical fragment shown in the inset near the DMD in Figure 1. The incidence angle of the DMD-illuminating beam was selected so as to direct the first diffraction order perpendicular to the DMD plane. The first diffraction order was separated using a 4f-system consisting of two lenses (L2 and L3) with focal lengths of 20 cm, and an adjustable aperture was used to eliminate the influence of the other diffraction orders. At the output of the 4f-system, an LCoS SLM, the Holoeye LETO-2, was positioned. The phase mask for the matched filter, 800 pixels in diameter at a 6.4 µm pixel pitch (a quadrant of which is shown in the inset near the SLM), was set on the SLM. The phase mask was combined with a linear phase tilt with a 4-pixel period to separate the signal from the zero order of the SLM. The reflected field was extracted from the 4f-system using an unpolarized beam splitter (BS) and a mirror (M3). Lens L4 performed the optical Fourier transform, and at its focus (30 cm from L4), a CMOS monochrome camera (C), the MindVision MV-UB130GM with a resolution of 1280 × 960 and a pixel size of 3.75 µm, captured the image. A typical intensity distribution is shown in the inset near the camera, colorized in green to distinguish it from the numerical results. Visually, the experimental data closely matched the numerical simulations.
It is important to note that all components of the system contribute to wavefront distortions, with the most significant contributions coming from the DMD. A reference beam was split after the spatial filtering unit, and an additional lens was placed after L4, forming a second 4f-system. The camera was then positioned at the focus of this additional lens. A flat wavefront pattern was displayed on the DMD, while a tilted wavefront pattern was set on the SLM. In this configuration, the modulators were set to provide “zero” exposure to the beam, and the field reflected from the SLM projected the off-axis hologram onto the camera.
The aberrations inherent to the system, including those from the DMD, SLMs, lenses, and reflectors, were reconstructed via the off-axis holography methods and encoded into the DMD patterns. This procedure allowed us to correct for these aberrations. The process was repeated six times to mitigate the influence of interference oscillations between the two arms of the setup. Once the system was calibrated, the reference beam was blocked, the camera was positioned at the focus of L4, the matched filter was activated, and the aberration patterns displayed on the DMD were measured.
In the experiment, we used a single-beam setup, which is resistant to mechanical vibrations. For repeatable results, it is enough to fix all elements on the same platform. Unlike two-beam schemes, in which the beams pass through different optical elements, the proposed scheme does not require an additional vibration isolation system. Stability also benefits from the wide beam, 5.12 mm in diameter. Aberration correction is mainly required for optical elements with a larger optical aperture.
The experimental setup used in this study was designed to validate the proposed aberration detection method using DOEs. The configuration allows for modifications—for instance, the DMD can be replaced with the LCoS SLM. The experimental results show strong agreement with numerical simulations, confirming the method’s validity and making the data suitable for training convolutional neural networks. Since the developed DOE is used as a static optical element, it can be transferred onto a photosensitive material using lithography or holography techniques. This enables the replacement of programmable SLMs with static DOEs for industrial applications. In such a configuration, the test sample illuminated by a plane wave serves the function of the DMD, while the SLM is replaced by a fabricated DOE and the intensity distribution is registered by the matrix photodetector in single-shot exposure. In this way, the system can be realized for mass production and real aberration measurement applications.

3. Results

3.1. Modeling Dataset

To prepare the model dataset, the following steps must be performed. In the first stage, the complex distribution function of the combined DOE (7) was calculated. Figure 2 shows the amplitude and phase of the DOE calculated for 49 diffraction orders. The physical size of the proposed DOE is 5 × 5 mm2, provided that the size of one pixel is approximately 12 × 12 μm2. It is worth noting that currently, the most accessible technologies for the manufacture of multi-level DOEs are limited in resolution to approximately 1 μm [47,48]. The resolution of the proposed 49-channel filter is more than 10 times greater than the critical resolution, ensuring a relatively simple process of applying a diffraction pattern and the possibility of the practical manufacture of the DOE. At the same time, the proposed multichannel DOE can be easily implemented using both available transmissive and reflective SLMs with a resolution of more than 1000 × 700 pixels [49,50]. It is possible to extend the proposed method to detect Zernike aberrations of a higher order than the fourth order and with a weight greater than a 0.5 wavelength by adding new diffraction orders in the optical element. The dynamic restructuring of the optical element, including using SLM, allows both sets of analyzed types of aberrations to be changed and their magnitude to be varied.
Let us perform a numerical calculation of diffraction in the focal plane of the DOE (7) using expression (8) with the following parameter values: R = 1 mm, λ = 0.5 µm, and f = 100 mm. Figure 3 shows a combined intensity pattern in the focal plane. It represents 49 diffraction orders. In the first block (wave aberrations), Zernike phase functions are encoded, corresponding to different types (p, q) of wave aberrations exp [ i d p q k Z p q ( x , y ) ] with different weighting coefficients d p q k . In the second block (Zernike pol.), Zernike polynomials (3) corresponding to different types (p, q) of aberrations are encoded.
To produce a DOE with high diffraction efficiency and to allow the use of an SLM, we calculate the phase DOE based on the partial coding method [51]. Partial coding is a multilevel phase coding method oriented toward applications with spatial light modulators and is defined as follows:
$$\tilde{\tau}(x,y) = \begin{cases}\exp\{i\arg[\tau(x,y)]\}, & |\tau(x,y)| \ge \alpha\\ \exp\{i\arg[\tau(x,y)] + i\mu\}, & |\tau(x,y)| < \alpha\end{cases} \tag{12}$$
$$\mu = \begin{cases}\pi, & \operatorname{sgn}(S_{ij}) > 0\\ 0, & \operatorname{sgn}(S_{ij}) < 0\end{cases}, \qquad S_{ij}\in[-0.5,\,0.5]$$
where $\tau(x,y)$ is the initial amplitude-phase transmission function; $\alpha$ is the parameter defining the threshold amplitude below which a phase jump is added at the point; $\mu$ is the magnitude of the phase jump; $S_{ij}$ is a pseudo-random variable whose sign determines the magnitude of the phase jump; and $\tilde{\tau}(x,y)$ is the calculated phase transmission function.
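A direct NumPy transcription of Equation (12) might look as follows; the random seed and function name are illustrative:

```python
import numpy as np

def partial_coding(tau, alpha, seed=0):
    """Phase-only encoding of a complex transmission function by the partial
    coding method of Eq. (12): the phase is kept everywhere, and a
    pseudo-random 0/pi phase jump is added where the amplitude falls below
    the threshold alpha."""
    rng = np.random.default_rng(seed)
    S = rng.uniform(-0.5, 0.5, size=tau.shape)  # pseudo-random sign variable S_ij
    mu = np.where(S > 0, np.pi, 0.0)            # phase jump magnitude
    phase = np.angle(tau)
    jump = np.where(np.abs(tau) < alpha, mu, 0.0)
    return np.exp(1j * (phase + jump))
```

By construction, the output is strictly phase-only (unit amplitude at every point), which is what makes it suitable for display on a phase SLM.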
Figure 4 shows the phase of the calculated phase DOE (12) with the parameter value α = 1/π, which makes it possible to calculate a multichannel DOE with minimal error and increased diffraction efficiency [51]. The maximum diffraction efficiency is 60%, while the error in forming the intensity pattern in the focal plane of the DOE is 0.2.
Let us now perform a numerical calculation of the diffraction pattern in the focal plane of the phase DOE (Equation (7)). Figure 5 displays the intensity distribution (inversion) in the focal plane of the phase DOE. It is important to note that the primary source of error in the intensity pattern formation arises from noise between diffraction orders, as well as diffraction orders associated with radially symmetric aberrations (such as defocusing). The error in the intensity pattern within each diffraction order does not exceed 10–13%.
In the second stage, the dataset in the form of aberrated wavefronts was calculated. The first eight types of positive ($m \ge 0$) aberrations ($n \le 4$) were selected as the type $(n, m)$ of the detected aberration, and the weight $c_{nm}$ of the wave aberration was selected in the range from 0 to 0.3λ. In a number of numerical experiments, the phase distributions (9) and (10) for the superposition of two aberrations were obtained. A total of 2352 images, each 256 × 256 pixels in size, were calculated. A fragment of the dataset in the form of aberrated wavefronts is presented in Table 4.
In the third stage, the dataset was calculated, representing the intensity distribution in the resulting filter plane (11) for the dataset in the form of aberrated wavefronts (9)–(10). In a number of numerical experiments, the intensity distribution (11) in the resulting filter plane (7) was obtained for a superposition of two aberrations. A dataset of 2352 images of 256 × 256 pixels was calculated for training and testing the convolutional neural network. A fragment of the dataset is presented in Table 5.
Table 5 presents examples of multi-order intensity patterns obtained using a DOE and lens, illustrating superpositions of various aberrations. For instance, the aberration characterized by displacement and defocusing (row 1, column 1 of Table 5) in the focal plane of the DOE leads to a specific distortion of the intensity distribution in each diffraction order. Additionally, the patterns corresponding to the first six rows of the filter have been described in detail in [28,29]. For the seventh row of the filter, which exhibits minor distortions in other diffraction orders, the criterion defined in [20,42,46] is the presence of non-zero intensity at the center of each diffraction order in the seventh row. In the case of a defocusing-type aberration, a correlation peak appears in the second row of the filter for the corresponding diffraction order. Furthermore, in the seventh row, non-zero intensity is recorded at the center of both the first and third diffraction orders. Given that algorithms for determining the type and magnitude of aberrations were developed based on the studied patterns of intensity distribution and location in the diffraction orders [28,29], we can now effectively apply ML methods for analysis.
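The center-intensity criterion for the seventh-row channels can be expressed as a small helper function; the window size and naming are assumptions for illustration:

```python
import numpy as np

def order_center_intensity(pattern, center, window=3):
    """Mean intensity in a small window around the center of one diffraction
    order; a non-zero value signals the presence of the matched aberration
    component (the criterion used for the seventh-row channels)."""
    cy, cx = center
    h = window // 2
    return float(pattern[cy - h:cy + h + 1, cx - h:cx + h + 1].mean())
```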
It is worth noting that a slight change in the weighting coefficient $c_{nm}$ does not lead to a significant change in the focal pattern; here, the DOE channels matched with the Zernike polynomials serve to clarify the type of aberration. Conversely, a significant change in $c_{nm}$ strongly changes the focal pattern, and the channels matched with the Zernike polynomials can then produce erroneous results [45,46]; in that case, the channels matched with the Zernike phase functions (wave aberrations) serve to clarify both the type and the weight of the aberration. This approach eliminates errors in detecting weak aberrations, thanks to the high sensitivity of the channels matched with Zernike polynomials, and enables the correct recognition of an aberration's weight based on the channels matched with Zernike phase functions (wave aberrations).

3.2. CNN Training

To verify the possibility of recognizing wave aberrations based on a multichannel spatial filter matched with Zernike basis functions and wave aberrations, a neural network with the Xception architecture was trained on a model dataset with intensity patterns (11), a fragment of which is presented in Table 5.
To control the accuracy, only the loss function was used, since metrics such as accuracy are not suitable for regression problems. Over 80 training epochs, after no more than 27 min of training time on a single RTX 4070 Super GPU, a mean absolute error (MAE) not exceeding 0.0028 was achieved. Figure 6 shows the training process of the convolutional neural network as a distribution of MAE in each epoch. It was found that detection, i.e., determination of the type and weight of the wave aberration, occurs almost without errors regardless of the weight (up to 0.3λ).
To illustrate the correct detection of the type and identification of the weight of the wave aberration in the analyzed wavefront, graphs of the original (model) aberrations and predicted ones are presented (Figure 7 and Figure 8).
It is worth noting that recognizing the aberration weight from a multichannel focal pattern is a non-trivial task. When a DOE matched only with Zernike functions was used in [32], the following trend was observed on average: the greater the weight of the superposition of aberrations, the greater the error in its recognition.
Figure 7 shows the distribution of the original and predicted weights of single-wave aberrations. The results of the experiment do not show a clear dependence of the recognition error on either the type or the weight of the analyzed aberration. For small aberrations (power = 0.145) of the astigmatism type (types = 2), the recognition error does not exceed 10−3 (Figure 7a). For small aberrations (power = 0.145) of the second-order astigmatism type (types = 7), the recognition error increases several times, but still does not exceed 10−2 (Figure 7b). With an increase in the weight coefficient to power = 0.3 for the same type of aberration, the recognition error decreases by 30% (Figure 7c).
Figure 8 shows the distribution of the initial and predicted weights of the superposition of two-wave aberrations. From the results obtained, it can be assumed that the recognition error does not depend on the type or weight of the analyzed aberration, nor on the number of detected aberrations. In any case, the recognition error of a single aberration on average coincides with the recognition error of the superposition of two aberrations.

3.3. Experimental Results

To validate the calculated phase DOE based on Equation (7) and the partial coding method described in [51], an optical experiment was conducted, with the setup diagram shown in Figure 1. To generate the aberration fields, a DMD was used, onto which the wavefront phase was sequentially applied, corresponding to one wave aberration (Equation (9)) and a superposition of two wave aberrations (Equation (10)). A pattern encoding the combination of aberrations was applied to the DMD in the form of a binary hologram, calculated using the Lee holography method [52,53].
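A heavily simplified sketch of binary Lee-type phase encoding for a DMD follows: a binary grating whose fringes are displaced in proportion to the desired phase. The amplitude modulation of the full Lee method is omitted, and the carrier period and naming are assumptions:

```python
import numpy as np

def lee_binary_hologram(phase, carrier_period=4):
    """Simplified binary (Lee-type) hologram encoding a phase pattern for a
    DMD: thresholding a carrier fringe pattern whose fringes are shifted by
    the desired phase. carrier_period is in pixels."""
    ny, nx = phase.shape
    x = np.arange(nx)
    carrier = 2 * np.pi * x[None, :] / carrier_period
    return (np.cos(carrier - phase) >= 0).astype(np.uint8)
```

Displaying such a pattern on the DMD and spatially filtering the first diffraction order (as done with the 4f-system in Figure 1) recovers an approximation of the encoded phase field.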
To form the DOE pattern, an SLM was employed, onto which the complex transmission function of the multi-order diffraction spatial filter was applied; its phase is illustrated in Figure 4. In a series of optical experiments using a CMOS camera, the combined intensity patterns formed by the SLM under illumination from various aberrated wavefronts were recorded. Figure 9 displays the intensity distribution captured by the camera sensor. Along the perimeter of the image, diffraction orders with low energy content are visible; for the purposes of this study, they are considered noise and are subsequently cut off. These orders do not contribute to the transmission of the useful signal and represent a superposition of coded aberrations resulting from the phase DOE encoding and the specific characteristics of the SLM.
For further analysis, we performed post-processing on the data from Figure 9, retaining only the encoded diffraction orders. Figure 10 presents a comparison between the modeled effect of the phase DOE and the results of the optical experiment for the case of plane-wave illumination, with a plane-wave pattern applied to the DMD.
From Figure 10b, it is evident that the first block of diffraction orders (the first six rows), which is a set of wave aberrations, completely coincides with the results of numerical modeling in Figure 10a. In the central column (fourth), corresponding to the absence of aberrations (dpqk = 0), a distribution in the form of an Airy spot is observed, i.e., the input field, passing through these diffraction orders, is not transformed. In the case when the input field is a plane wave, we observe the PSF of the optical system.
As for the second block of diffraction orders (row 7), there are some differences between the numerical modeling and the experimental results. These differences may arise because the real optical system is not completely free of aberrations. Considering that the channels matched with Zernike polynomials are highly sensitive to wavefront distortions (greater than 0.05λ), the diffraction orders in which these functions are encoded were aberrationally transformed. Such a transformation only confirms the high sensitivity of the developed multi-order diffraction spatial filter.
Let us consider one wave aberration (9), taking astigmatism $(n, m) = (2, 2)$ with a negative coefficient $c_{nm}$ as an example. Table 6 presents the intensity distributions recorded on the camera sensor when the astigmatic wave aberration pattern (9) with $c_{nm} = \{-0.5;\ -0.3;\ -0.2\}$ was output to the DMD.
In the first row of the filter output, an Airy spot is observed in all three cases, since this row encodes a wave aberration of the astigmatism type. Depending on the magnitude of cnm in the aberration under study, the column in which the corresponding aberration value is detected changes.
In the case when c22 = −0.5, the Airy spot is observed in the diffraction order at the intersection of the first row and first column, which corresponds to the encoded aberration (p, q) = (2, 2) with magnitude dpqk = −0.5.
Additionally, the presence of astigmatic-like aberrations is indicated by the corresponding diffraction orders of the seventh row. At the intersection of the seventh row and the second column in the center of the diffraction order, which corresponds to the Zernike polynomial (p, q) = (2, 2), a correlation peak is recorded. At the intersection of the seventh row and the sixth column in the center of the diffraction order, which corresponds to the Zernike polynomial (p, q) = (4, 2), a correlation peak is also recorded, but the sixth row (p, q) = (4, 2) does not have an Airy spot. However, it is worth noting that both correlation peaks are recorded in the diffraction orders at q = 2, which corresponds to an astigmatism of the first and second orders, respectively.
In the case when c22 = −0.3, the Airy spot is observed in the diffraction orders at the intersections of the first row with the second and third columns, which corresponds to the encoded aberration (p, q) = (2, 2) with a value dpqk in the range from −0.3 to −0.2.
Additionally, the presence of astigmatic-like aberrations, as in the previous case, is indicated by the corresponding diffraction orders of the seventh row—Zernike polynomials at (p, q) = (2, 2), (p, q) = (4, 2). At the same time, in the sixth row—wave aberrations at (p, q) = (4, 2)—there is no Airy spot. Therefore, in the analyzed wavefront, there is only aberration (n, m) = (2, 2).
Despite the uncertainty that arises when an Airy spot is registered in two diffraction orders, the neural network finds patterns and determines the aberration weight with a low recognition error, owing to the multi-order pattern. All diffraction orders are interrelated by a common rule, and each specific type and magnitude of aberration produces a unique, complexly organized pattern.
In the case when c22 = −0.2, the Airy spot is observed in the diffraction order at the intersection of the first row and third column, which corresponds to the encoded aberration (p, q) = (2, 2) with magnitude dpqk = −0.2.
The Zernike polynomial orders (seventh row) signal at (p, q) = (2, 2) and (p, q) = (4, 2). At the same time, no Airy spot is observed in the sixth row (wave aberrations at (p, q) = (4, 2)). Therefore, the analyzed wavefront contains only the aberration (n, m) = (2, 2).
Similarly, let us consider a single wave aberration (9), taking astigmatism (n, m) = (2, 2) with a positive coefficient cnm as an example. Table 7 presents the intensity distributions recorded on the sensor when the pattern of astigmatic wave aberrations (9) with values cnm = {0.2; 0.3; 0.5} was output to the DMD. The results are similar to those in Table 6, up to the sign of the aberration weighting coefficient.
Table 8 presents the experimental results for the remaining types of aberrations under consideration. The intensity images highlight the intersection where the diffraction order with the analyzed wave aberration is recorded. When cnm = −dpqk and (n, m) = (p, q), the Airy function is recorded in the corresponding diffraction order. When the weight cnm of the analyzed aberration in (10) takes an intermediate value between two adjacent values dpq1 and dpq2 encoded in the filter, the presence of the other diffraction orders allows the neural network to determine the weight of the analyzed aberration. This is because all diffraction orders are interconnected by a specific rule, and each specific type and magnitude of aberration forms a uniquely organized complex pattern.
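The column-reading rule described above can be sketched numerically. In the snippet below, the encoded column values follow Tables 6 and 7 ({−0.5, −0.3, −0.2, 0, 0.2, 0.3, 0.5}); the function name and the relative threshold rel_tol are illustrative assumptions rather than part of the authors' pipeline, where this interpolation between adjacent encoded values is learned by the neural network:

```python
def bracket_weight(center_vals,
                   encoded=(-0.5, -0.3, -0.2, 0.0, 0.2, 0.3, 0.5),
                   rel_tol=0.8):
    """Locate the Airy-spot column(s) in one filter row from the on-axis
    intensity of each of the seven diffraction orders, and bracket the
    aberration weight. If one column dominates, the bracket collapses to
    the single encoded value; if two adjacent columns respond comparably,
    the weight lies between their encoded values."""
    peak = max(center_vals)
    hits = [k for k, v in enumerate(center_vals) if v >= rel_tol * peak]
    return (encoded[min(hits)], encoded[max(hits)])
```

For example, a dominant response in the first column brackets the weight at −0.5, while comparable responses in the second and third columns bracket it between −0.3 and −0.2, matching the c22 = −0.3 case discussed above.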
Let us now consider the superposition of two wave aberrations (Equation (10)) using the examples discussed in Table 4. Table 9 presents the intensity distributions recorded on the sensor. The results are similar to those shown in Table 5, which displays the intensity distribution (Equation (11)) in the resulting filter plane.
Thus, the experimental results confirm the possibility of using an SLM to identify wavefront aberrations up to the fourth order (in terms of Zernike functions) in the range from −0.5λ to 0.5λ, with a sensitivity of 0.05λ.

4. Discussion

Research into the application of ML for optical signal analysis has already yielded promising results. Specifically, various algorithms, such as neural networks, support vector machines, and decision trees, are being explored for their potential to solve aberration identification problems. Despite these significant advancements, challenges remain in optimizing these methods for use with combined intensity patterns. In these patterns, each diffraction order represents a distorted PSF corresponding to a specific aberration, and all the distorted PSFs in a plane are interrelated by a particular rule. Such a pattern should enable a convolutional neural network to accurately recognize both individual aberrations and their superpositions.
In real applications, the main problem may be misalignment between the diffraction filter and the focusing system. Two approaches to this problem are possible. The first is to create a monolithic filter-lens assembly or to combine the filter and lens on a single diffraction element. The second relies on the fact that any errors introduced by environmental factors can be accounted for by additionally training the neural network.
In the case of manufacturing errors in the diffractive optical elements, somewhat distorted distributions will be formed in the focal plane of the lens. However, the factors behind such distortion can be taken into account and compensated by retraining the neural network; for example, in [54,55,56], errors in image formation were compensated by subsequent processing using machine learning methods.
The advantages of using ML in optical systems are considerable: process automation, high accuracy, and flexibility, including in tasks at the intersection of optics and artificial intelligence in quantum metrology [57,58,59,60]. ML facilitates the automation of large-scale data processing, which is crucial in modern optical systems. Furthermore, these algorithms excel at identifying complex patterns within data, yielding more accurate results than traditional methods, particularly when analyzing intensity distributions across multiple diffraction orders. Additionally, ML models can be adapted to various types of aberrations and measurement conditions, making them versatile tools for diverse optical applications. The correct measurement of time-varying wave aberrations in the incident wavefront depends on the technical characteristics of the recording and modulating devices. When the developed spatial filters are implemented on a spatial light modulator (SLM), most liquid crystal SLMs support a refresh rate of about 120 Hz. A digital micromirror device (DMD) provides a much higher refresh rate (up to 32 kHz), but the binary nature of the device requires additional coding of the output masks.
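As an illustration of such coding, a phase mask can be mapped to a binary micromirror pattern using Lee's carrier-fringe method [52]: a micromirror is switched on wherever a tilted carrier fringe, offset by the local phase, is non-negative. The sketch below is a minimal illustrative version; the carrier period is an arbitrary choice, and practical DMD encoding (cf. [53]) also controls amplitude:

```python
import math

def lee_binary_hologram(phase, period=4):
    """Binary (0/1) amplitude mask encoding a phase profile phi(y, x) on a
    binary device such as a DMD, following Lee's method: open a micromirror
    where cos(2*pi*x/period - phi) >= 0. The first diffraction order of the
    resulting fringe pattern then carries the desired phase. The carrier
    period (in pixels) is an illustrative choice."""
    rows, cols = len(phase), len(phase[0])
    mask = [[0] * cols for _ in range(rows)]
    for y in range(rows):
        for x in range(cols):
            if math.cos(2 * math.pi * x / period - phase[y][x]) >= 0:
                mask[y][x] = 1
    return mask
```

A uniform phase yields straight carrier fringes; a spatially varying phase shifts the fringes locally, which is what encodes the wavefront in the first diffraction order.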
Usually, the presence of coherent speckle noise blurs the focal spots. However, when comparing the experimental data with the numerical modeling, we did not find significant blurring or differences in the generated pattern distributions. Judging from our data, the proposed approach is robust to coherent speckle noise, because the noise is removed by spatial filtering as a high-frequency component.
Further prospects involve theoretical research into selecting the optimal neural network architecture (e.g., ResNet, Xception, MobileNet, or EfficientNet) using the assembled knowledge base of multichannel intensity patterns.
However, challenges remain in optimizing the dataset size for ML tasks involving combined intensity patterns. Moreover, future studies should address the impact of factors such as the size, number, and location of diffraction orders on the accuracy of wave aberration recognition. Further investigation into the recognition of superpositions involving three or more types of wavefront distortions is also warranted.

5. Conclusions

ML methods are powerful tools for identifying and correcting wave aberrations in optical systems. Their application significantly improves the accuracy and efficiency of measurements and opens up new avenues for research in optics. In this paper, we propose a DOE that generates combined intensity patterns corresponding to wave aberrations and Zernike polynomials. This approach enables the detection and identification of the type and weight of both single aberrations and superpositions of two aberrations across a broad range (up to ±λ/2) with a sensitivity of 0.05λ, and the range can potentially be extended. The approach is also applicable to superpositions of more aberrations: the detection of superpositions of up to five aberrations has been demonstrated separately with a filter matched to Zernike functions (5) [61] and with a filter matched to wave aberrations [28].
A dataset consisting of 2352 images of combined intensity patterns was created. The values for the superposition of two aberrations were randomly selected within the range from 0 to 0.3λ, with a step size of 0.05λ. A convolutional neural network with the Xception architecture was trained on this model dataset of intensity patterns. After 80 training epochs, the mean absolute error (MAE) of the recognition was less than 0.0028. The recognition error was shown to be independent of both the type and weight of the aberrations analyzed; it ranged from 0.0019 to 0.0028 on average, whether identifying single aberrations or superpositions of two aberrations. A five-fold cross-validation yielded consistent results, which suggests that the dataset is well distributed and the model's performance is not influenced by specific training and testing splits.
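The evaluation protocol can be sketched as follows. This is a minimal stdlib illustration: the fold count matches the five-fold cross-validation, while the seed and helper names are assumptions, not the exact implementation used:

```python
import random

def kfold_indices(n, k=5, seed=0):
    """Split n sample indices into k shuffled, near-equal folds; each fold
    serves once as the test set while the rest form the training set."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def mae(true_vals, pred_vals):
    """Mean absolute error between true and predicted aberration weights."""
    return sum(abs(t - p) for t, p in zip(true_vals, pred_vals)) / len(true_vals)
```

For the 2352-image dataset this gives five folds of roughly 470 images each; reporting the MAE per fold is what supports the claim that performance does not depend on a particular split.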
The obtained results may be useful for measuring and correcting wavefront aberrations in astronomical observations, microscopy, optical communication and coding, ophthalmology, and other imaging and focusing systems.

Author Contributions

Conceptualization, P.A.K. and S.N.K.; methodology, P.A.K., M.A.B., and S.N.K.; software, P.A.K. and A.P.D.; validation, P.A.K., A.P.D., A.V.C., and S.N.K.; formal analysis, P.A.K. and S.N.K.; investigation, A.P.D., A.V.C., and S.N.K.; resources, P.A.K. and M.A.B.; data curation, P.A.K., A.P.D., and A.V.C.; writing—original draft preparation, P.A.K., S.N.K., and M.A.B.; writing—review and editing, P.A.K., S.N.K., and M.A.B.; visualization, P.A.K., A.P.D., and A.V.C.; supervision, P.A.K. and S.N.K.; project administration, P.A.K. and M.A.B.; funding acquisition, P.A.K. All authors have read and agreed to the published version of the manuscript.

Funding

The study was supported by the grant of the Russian Science Foundation No. 24-79-10101, https://rscf.ru/en/project/24-79-10101/ (accessed on 23 May 2025).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Acknowledgments

We acknowledge the equal contribution of all the authors.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. El Srouji, L.; Krishnan, A.; Ravichandran, R.; Lee, Y.; On, M.; Xiao, X.; Ben Yoo, S.J. Photonics and optoelectronic neuromorphic computing. APL Photonics 2022, 7, 051101.
  2. Kazanskiy, N.L.; Butt, M.A.; Khonina, S.N. Optical Computing: Status and Perspectives. Nanomaterials 2022, 12, 2171.
  3. Liao, K.; Dai, T.; Yan, Q.; Hu, X.; Gong, Q. Integrated Photonic Neural Networks: Opportunities and Challenges. ACS Photonics 2023, 10, 2001–2010.
  4. Brunner, D.; Soriano, M.C.; Fan, S. Neural network learning with photonics and for photonic circuit design. Nanophotonics 2023, 12, 773–775.
  5. Khonina, S.N.; Kazanskiy, N.L.; Skidanov, R.V.; Butt, M.A. Exploring Types of Photonic Neural Networks for Imaging and Computing—A Review. Nanomaterials 2024, 14, 697.
  6. Cheng, Y.; Zhang, J.; Zhou, T.; Wang, Y.; Xu, Z.; Yuan, X.; Fang, L. Photonic neuromorphic architecture for tens-of-task lifelong learning. Light Sci. Appl. 2024, 13, 56.
  7. Huang, Z.; Wang, P.; Liu, J.; Xiong, W.; He, Y.; Zhou, X.; Xiao, J.; Li, Y.; Chen, S.; Fan, D. Identification of hybrid orbital angular momentum modes with deep feed forward neural network. Results Phys. 2019, 15, 102790.
  8. Gavril’eva, K.N.; Mermoul, A.; Sevryugin, A.; Shubenkova, E.V.; Touil, M.; Tursunov, I.; Efremova, E.A.; Venediktov, V.Y. Detection of optical vortices using cyclic, rotational and reversal shearing interferometers. Opt. Laser Technol. 2019, 113, 374–378.
  9. Jing, G.; Chen, L.; Wang, P.; Xiong, W.; Huang, Z.; Liu, J.; Chen, Y.; Li, Y.; Fan, D.; Chen, S. Recognizing fractional orbital angular momentum using feed forward neural network. Results Phys. 2021, 28, 104619.
  10. Khorin, P.A.; Khonina, S.N.; Porfirev, A.P.; Kazanskiy, N.L. Simplifying the Experimental Detection of the Vortex Topological Charge Based on the Simultaneous Astigmatic Transformation of Several Types and Levels in the Same Focal Plane. Sensors 2022, 22, 7365.
  11. Akhmetov, L.G.; Porfirev, A.P.; Khonina, S.N. Recognition of Two-Mode Optical Vortex Beams Superpositions Using Convolution Neural Networks. Opt. Mem. Neural Netw. (Inf. Opt.) 2023, 32, S138–S150.
  12. Arines, J.; Duran, V.; Jaroszewicz, Z.; Ares, J.; Tajahuerce, E.; Prado, P.; Lancis, J.; Bará, S.; Climent, V. Measurement and compensation of optical aberrations using a single spatial light modulator. Opt. Express 2007, 15, 15287–15292.
  13. Guo, H.; Xu, Y.; Li, Q.; Du, S.; He, D.; Wang, Q.; Huang, Y. Improved Machine Learning Approach for Wavefront Sensing. Sensors 2019, 19, 3533.
  14. Suchkov, N.; Fernández, E.J.; Martínez-Fuentes, J.L.; Moreno, I.; Artal, P. Simultaneous aberration and aperture control using a single spatial light modulator. Opt. Express 2019, 27, 12399–12413.
  15. Zhang, L.; Zhou, S.; Li, J.; Yu, B. Deep neural network based calibration for freeform surface misalignments in general interferometer. Opt. Express 2019, 27, 33709–33729.
  16. Montresor, S.; Tahon, M.; Laurent, A.; Picart, P. Computational de-noising based on deep learning for phase data in digital holographic interferometry. APL Photonics 2020, 5, 030802.
  17. Khonina, S.N.; Khorin, P.A.; Serafimovich, P.G.; Dzyuba, A.P.; Georgieva, A.O.; Petrov, N.V. Analysis of the wavefront aberrations based on neural networks processing of the interferograms with a conical reference beam. Appl. Phys. B 2022, 128, 60.
  18. Khorin, P.A.; Dzyuba, A.P.; Petrov, N.V. Comparative Analysis of the Interferogram Sensitivity to Wavefront Aberrations Recorded with Plane and Cylindrical Reference Beams. Opt. Mem. Neural Netw. 2023, 32 (Suppl. 1), S27–S37.
  19. Khorin, P.A.; Dzyuba, A.P.; Chernykh, A.V.; Georgieva, A.O.; Petrov, N.V.; Khonina, S.N. Neural Network-Assisted Interferogram Analysis Using Cylindrical and Flat Reference Beams. Appl. Sci. 2023, 13, 4831.
  20. Porfirev, A.P.; Khonina, S.N. Experimental investigation of multi-order diffractive optical elements matched with two types of Zernike functions. Proc. SPIE 2016, 9807, 106–114.
  21. Weng, Y.; Ip, E.; Pan, Z.; Wang, T. Advanced Spatial-Division Multiplexed Measurement Systems Propositions—From Telecommunication to Sensing Applications: A Review. Sensors 2016, 16, 1387.
  22. Kazanskiy, N.L.; Khonina, S.N.; Karpeev, S.V.; Porfirev, A.P. Diffractive optical elements for multiplexing structured laser beams. Quantum Electron. 2020, 50, 629–635.
  23. Khonina, S.N.; Khorin, P.A.; Porfirev, A.P. Wave Front Aberration Sensors Based on Optical Expansion by the Zernike Basis. In Photonics Elements for Sensing and Optical Conversions, 1st ed.; Kazanskiy, N.L., Ed.; CRC Press: Boca Raton, FL, USA, 2023; pp. 178–238.
  24. Zhao, X.; Fan, B.; Ma, Z.; Zhong, S.; Chen, J.; Zhang, T.; Su, H. Optical-digital joint design of multi-order diffractive lenses for lightweight high-resolution computational imaging. Opt. Lasers Eng. 2024, 180, 108308.
  25. Khonina, S.N.; Kazanskiy, N.L.; Skidanov, R.V.; Butt, M.A. Advancements and Applications of Diffractive Optical Elements in Contemporary Optics: A Comprehensive Overview. Adv. Mater. Technol. 2025, 10, 2401028.
  26. Liang, X.; Zhu, D.; Dai, Q.; Xie, Y.; Zhou, Z.; Peng, C.; Li, Z.; Chen, P.; Lu, Y.-Q.; Yu, S.; et al. All-Optical Multi-Order Multiplexing Differentiation Based on Dynamic Liquid Crystals. Laser Photonics Rev. 2024, 18, 2400032.
  27. Slevas, P.; Orlov, S. Creating an Array of Parallel Vortical Optical Needles. Photonics 2024, 11, 203.
  28. Khorin, P.A.; Porfirev, A.P.; Khonina, S.N. Adaptive detection of wave aberrations based on the multichannel filter. Photonics 2022, 9, 204.
  29. Khorin, P.A.; Volotovskiy, S.G.; Khonina, S.N. Optical detection of values of separate aberrations using a multi-channel filter matched with phase Zernike functions. Comput. Opt. 2021, 45, 525–533.
  30. Voulodimos, A.; Doulamis, N.; Doulamis, A.; Protopapadakis, E. Deep learning for computer vision: A brief review. Comput. Intell. Neurosci. 2018, 2018, 7068349.
  31. Jia, W. Research on the impact of the machine vision system on contemporary technology. AIP Conf. Proc. 2024, 3194, 040007.
  32. Dzyuba, A.P.; Khorin, P.A.; Serafimovich, P.G.; Khonina, S.N. Wavefront Aberrations Recognition Study Based on Multi-Channel Spatial Filter Matched with Basis Zernike Functions and Convolutional Neural Network with Xception Architecture. Opt. Mem. Neural Netw. 2024, 33 (Suppl. 1), S53–S64.
  33. Javaid, M.; Haleem, A.; Singh, R.P.; Rab, S.; Suman, R. Exploring impact and features of machine vision for progressive industry 4.0 culture. Sens. Int. 2022, 3, 100132.
  34. Manakitsa, N.; Maraslidis, G.S.; Moysis, L.; Fragulis, G.F. A review of machine learning and deep learning for object detection, semantic segmentation, and human action recognition in machine and robotic vision. Technologies 2024, 12, 15.
  35. Zhang, H. Image Classification Method Based on Neural Network Feature Extraction. In Proceedings of the 2022 6th International Conference on Electronic Information Technology and Computer Engineering (EITCE ’22), Xiamen, China, 21–23 October 2022; Association for Computing Machinery: New York, NY, USA, 2023; pp. 1696–1699.
  36. Jogin, M.; Mohana; Madhulika, M.S.; Divya, G.D.; Meghana, R.K.; Apoorva, S. Feature Extraction using Convolution Neural Networks (CNN) and Deep Learning. In Proceedings of the 2018 3rd IEEE International Conference on Recent Trends in Electronics, Information & Communication Technology (RTEICT), Bangalore, India, 18–19 May 2018; pp. 2319–2323.
  37. Wang, Z.; Yan, W.; Oates, T. Time series classification from scratch with deep neural networks: A strong baseline. In Proceedings of the 2017 International Joint Conference on Neural Networks (IJCNN), Anchorage, AK, USA, 14–19 May 2017; pp. 1578–1585.
  38. Wang, J.Y.; Silva, D.E. Wave-front interpretation with Zernike polynomials. Appl. Opt. 1980, 19, 1510–1518.
  39. Mahajan, V.N. Zernike circle polynomials and optical aberration of system with circular pupils. Appl. Opt. 1994, 33, 8121–8124.
  40. Lakshminarayanan, V.; Fleck, A. Zernike polynomials: A guide. J. Mod. Opt. 2011, 58, 545–561.
  41. Niu, K.; Tian, C. Zernike polynomials and their applications. J. Opt. 2022, 24, 123001.
  42. Ha, Y.; Zhao, D.; Wang, Y.; Kotlyar, V.V.; Khonina, S.N.; Soifer, V.A. Diffractive Optical Element for Zernike Decomposition. In Proceedings of the Current Developments in Optical Elements and Manufacturing, SPIE, Beijing, China, 16–19 September 1998; Volume 3557, pp. 191–197.
  43. Booth, M.J. Direct Measurement of Zernike Aberration Modes with a Modal Wavefront Sensor. In Proceedings of the Advanced Wavefront Control: Methods, Devices, and Applications, SPIE, San Diego, CA, USA, 3–8 August 2003; Volume 5162, pp. 79–90.
  44. Degtyarev, S.A.; Porfirev, A.P.; Khonina, S.N. Zernike basis-matched multi-order diffractive optical elements for wavefront weak aberrations analysis. Proc. SPIE 2017, 10337, 201–208.
  45. Khorin, P.A.; Volotovskiy, S.G. Analysis of the Threshold Sensitivity of a Wavefront Aberration Sensor Based on a Multi-Channel Diffraction Optical Element. In Proceedings of the Optical Technologies for Telecommunications 2020, SPIE, Samara, Russia, 17–20 November 2021; Volume 11793, pp. 62–73.
  46. Khonina, S.N.; Karpeev, S.V.; Porfirev, A.P. Wavefront Aberration Sensor Based on a Multichannel Diffractive Optical Element. Sensors 2020, 20, 3850.
  47. Skidanov, R.V.; Moiseev, O.Y.; Ganchevskaya, S.V. Additive Process for Fabrication of Phased Optical Diffraction Elements. J. Opt. Technol. 2016, 83, 23–25.
  48. Khonina, S.N.; Kazanskiy, N.L.; Butt, M.A. Grayscale Lithography and a Brief Introduction to Other Widely Used Lithographic Methods: A State-of-the-Art Review. Micromachines 2024, 15, 1321.
  49. Huang, H.; Inoue, T.; Hara, T. Adaptive aberration compensation system using a high-resolution liquid crystal on silicon spatial light phase modulator. Proc. SPIE 2009, 7156, 71560F.
  50. Khonina, S.N.; Karpeev, S.V.; Butt, M.A. Spatial-light-modulator-based multichannel data transmission by vortex beams of various orders. Sensors 2021, 21, 2988.
  51. Khonina, S.N.; Kotlyar, V.V.; Soifer, V.A. Techniques for encoding composite diffractive optical elements. Proc. SPIE 2003, 5036, 493–498.
  52. Lee, W.-H. Binary Synthetic Holograms. Appl. Opt. 1974, 13, 1677.
  53. Georgieva, A.; Belashov, A.V.; Petrov, N.V. Optimization of DMD-Based Independent Amplitude and Phase Modulation by Analysis of Target Complex Wavefront. Sci. Rep. 2022, 12, 7754.
  54. Peng, Y.; Fu, Q.; Amata, H.; Su, S.; Heide, F.; Heidrich, W. Computational imaging using lightweight diffractive-refractive optics. Opt. Express 2015, 23, 31393–31407.
  55. Nikonorov, A.V.; Petrov, M.V.; Bibikov, S.A.; Kutikova, V.V.; Morozov, A.A.; Kazanskiy, N.L. Image restoration in diffractive optical systems using deep learning and deconvolution. Comput. Opt. 2017, 41, 875–887.
  56. Khonina, S.N.; Kazanskiy, N.L.; Oseledets, I.V.; Nikonorov, A.V.; Butt, M.A. Synergy between Artificial Intelligence and Hyperspectral Imagining—A Review. Technologies 2024, 12, 163.
  57. Wang, Z.; Lu, J.; Liu, Z.; Li, X.; Sheng, J.; Li, J. Neural network assisted magnetic moment measurement using an atomic magnetometer. IEEE Trans. Instrum. Meas. 2025, 74, 1–10.
  58. Ge, X.; Liu, G.; Fan, W.; Duan, L.; Ma, L.; Quan, J.; Liu, J.; Quan, W. Decoupling measurement and closed-loop suppression of transverse magnetic field drift in a modulated double-cell atomic comagnetometer. Measurement 2025, 250, 117123.
  59. Qin, J.N.; Xu, J.X.; Jiang, Z.Y.; Qu, J. Enhanced all-optical vector atomic magnetometer enabled by artificial neural network. Appl. Phys. Lett. 2024, 125, 102405.
  60. Huang, J.; Zhuang, M.; Zhou, J.; Shen, Y.; Lee, C. Quantum metrology assisted by machine learning. Adv. Quantum Technol. 2024, 7, 2300281.
  61. Volotovskiy, S.; Khorin, P.; Dzyuba, A.; Khonina, S. Adaptive Compensation of Wavefront Aberrations Using the Method of Moments. Opt. Mem. Neural Netw. 2024, 33 (Suppl. 2), S359–S375.
Figure 1. Schematic of the experimental setup.
Figure 2. Amplitude (a) and phase (b) of the combined DOE (7) and a scaled fragment of the phase.
Figure 3. Intensity distribution (inversion) in the focal plane of the DOE (7).
Figure 4. Phase of the calculated phase combined DOE (12) (a) and scaled phase fragment (b,c).
Figure 5. Intensity distribution (inversion) in the focal plane of the calculated phase DOE (12).
Figure 6. MAE value of the loss function for each epoch during the training process.
Figure 7. Examples of recognition (pred) of the weights of the initial (true) single aberration: (a) astigmatism with weight 0.145; (b) second-order astigmatism with weight 0.145; (c) second-order astigmatism with weight 0.3.
Figure 8. Examples of recognition (pred) of the weights of the initial (true) superposition of aberrations.
Figure 9. Intensity pattern without post-processing, recorded with a plane-wave pattern output to the DMD.
Figure 10. Comparison of the intensity patterns of the simulation (a) of the phase DOE action and the result of the optical experiment (b) for an aberration-free field.
Table 1. Xception Entry Flow.
Layer Type | Kernel Size | Strides | Output Size
Input | - | - | (256, 256, 3)
Conv2D | 3 × 3 | 2 × 2 | 149 × 149 × 32
Conv2D | 3 × 3 | 1 × 1 | 147 × 147 × 64
SeparableConv2D | 3 × 3 | 1 × 1 | 147 × 147 × 128
SeparableConv2D | 3 × 3 | 1 × 1 | 147 × 147 × 128
MaxPooling2D | 3 × 3 | 2 × 2 | 73 × 73 × 128
SeparableConv2D | 3 × 3 | 1 × 1 | 73 × 73 × 256
SeparableConv2D | 3 × 3 | 1 × 1 | 73 × 73 × 256
MaxPooling2D | 3 × 3 | 2 × 2 | 37 × 37 × 256
SeparableConv2D | 3 × 3 | 1 × 1 | 37 × 37 × 728
SeparableConv2D | 3 × 3 | 1 × 1 | 37 × 37 × 728
MaxPooling2D | 3 × 3 | 2 × 2 | 19 × 19 × 728
Table 2. Xception Middle Flow (which in the model is repeated 8 times).
Layer Type | Kernel Size | Strides | Output Size
SeparableConv2D | 3 × 3 | 1 × 1 | 19 × 19 × 728
SeparableConv2D | 3 × 3 | 1 × 1 | 19 × 19 × 728
SeparableConv2D | 3 × 3 | 1 × 1 | 19 × 19 × 728
Table 3. Modified Xception Exit Flow.
Layer Type | Kernel Size | Strides | Output Size
SeparableConv2D | 3 × 3 | 1 × 1 | 19 × 19 × 728
SeparableConv2D | 3 × 3 | 1 × 1 | 19 × 19 × 1024
MaxPooling2D | 3 × 3 | 2 × 2 | 10 × 10 × 1024
SeparableConv2D | 3 × 3 | 1 × 1 | 10 × 10 × 1536
SeparableConv2D | 3 × 3 | 1 × 1 | 10 × 10 × 2048
GlobalAveragePooling | - | - | 2048
Dropout | - | - | 2048
Dense (linear) | - | - | 8 (Classes)
Table 4. A set of input data in the form of aberrated wavefronts (10) w(r, φ) = exp[iψ(r, φ)], whose phase has the form ψ(r, φ) = cnm Znm(r, φ) + cij Zij(r, φ).
c11 = 0.20, c20 = 0.30
c11 = 0.20, c22 = 0.30
c22 = 0.15, c33 = 0.10
c42 = 0.20, c33 = 0.15
c44 = 0.20, c42 = 0.25
c44 = 0.25, c22 = 0.20
Table 5. Dataset fragment.
c11 = 0.20, c20 = 0.30
c11 = 0.20, c22 = 0.30
c22 = 0.15, c33 = 0.10
c42 = 0.20, c33 = 0.15
c44 = 0.20, c42 = 0.25
c44 = 0.25, c22 = 0.20
Table 6. Experimental results for astigmatic aberration (n, m) = (2, 2) with magnitude cnm = {−0.5; −0.3; −0.2}.
cnm = −0.5 | cnm = −0.3 | cnm = −0.2
Table 7. Experimental results for astigmatic aberration (n, m) = (2, 2) with magnitude cnm = {0.2; 0.3; 0.5}.
cnm = 0.2 | cnm = 0.3 | cnm = 0.5
Table 8. Experimental results for aberrations of the defocusing type (n, m) = (2, 0), trefoil (n, m) = (3, 3), coma (n, m) = (3, 1), quatrefoil (n, m) = (4, 4), and second-order astigmatism (n, m) = (4, 2) with different values cnm.
(n, m) | cnm
(2, 0) | cnm = 0.20
(3, 3) | cnm = 0.25
(3, 1) | cnm = 0.45
(4, 4) | cnm = 0.15
(4, 2) | cnm = 0.15
Table 9. Experimental results for superpositions of two wave aberrations.
Table 9. Experimental results for superposition two wave aberrations.
c11 = 0.20, c20 = 0.30
Technologies 13 00212 i029
c11 = 0.20, c22 = 0.30
Technologies 13 00212 i030
c22 = 0.15, c33 = 0.10
Technologies 13 00212 i031
c42 = 0.20, c33 = 0.15
Technologies 13 00212 i032
c44 = 0.20, c42 = 0.25
Technologies 13 00212 i033
c44 = 0.25, c22 = 0.20
Technologies 13 00212 i034
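The coefficient pairs in Tables 5 and 9 define superposed wavefronts W(ρ, θ) = Σ cnm·Znm(ρ, θ). As a rough numerical sketch (not the authors' code; the unnormalized Zernike convention, the sin/cos split by the sign of m, and the scaling of W in wavelengths are all assumptions here), the aberration-transformed PSF for such a pair can be modeled by Fourier-transforming a unit-disk pupil that carries the phase 2π·W:

```python
import numpy as np
from math import factorial

def zernike(n, m, rho, theta):
    """Real Zernike polynomial Z_n^m on the unit disk (unnormalized)."""
    ma = abs(m)
    # Radial part R_n^m via the standard factorial series
    R = sum((-1) ** k * factorial(n - k)
            / (factorial(k) * factorial((n + ma) // 2 - k)
               * factorial((n - ma) // 2 - k)) * rho ** (n - 2 * k)
            for k in range((n - ma) // 2 + 1))
    # Angular part: cos for m >= 0, sin for m < 0 (assumed convention)
    return R * (np.cos(ma * theta) if m >= 0 else np.sin(ma * theta))

def aberrated_psf(coeffs, size=256):
    """PSF of a unit-circle pupil with wavefront W = sum c_nm * Z_n^m (in waves)."""
    y, x = np.mgrid[-1:1:size * 1j, -1:1:size * 1j]
    rho, theta = np.hypot(x, y), np.arctan2(y, x)
    W = sum(c * zernike(n, m, rho, theta) for (n, m), c in coeffs.items())
    pupil = (rho <= 1) * np.exp(2j * np.pi * W)   # aberrated pupil function
    psf = np.abs(np.fft.fftshift(np.fft.fft2(pupil))) ** 2
    return psf / psf.max()

# First row of Table 9: superposition c11 = 0.20, c20 = 0.30
psf = aberrated_psf({(1, 1): 0.20, (2, 0): 0.30})
```

In the multi-order filter described in the paper, 49 such patterns (one per diffraction order, each matched to its own Zernike phase function) are produced simultaneously in a single plane; the sketch above models only a single-channel response.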
Khorin, P.A.; Dzyuba, A.P.; Chernykh, A.V.; Butt, M.A.; Khonina, S.N. Application of Machine Learning Methods for Identifying Wave Aberrations from Combined Intensity Patterns Generated Using a Multi-Order Diffractive Spatial Filter. Technologies 2025, 13, 212. https://doi.org/10.3390/technologies13060212
