A Deep Learning-Based Preprocessing Method for Single Interferometric Fringe Patterns

Abstract: A novel preprocessing method for single interference fringe patterns, based on a modified U-NET, is proposed. The framework introduces spatial attention and channel attention modules to optimize performance. Interferometric fringe maps with an added background intensity, fringe amplitude, and ambient noise are used as the network input, and the network outputs the corresponding fringe maps in an ideal state. Simulated and experimental results demonstrate that this technique can preprocess a single interference fringe in ~1 microsecond. The quality of the results was further evaluated using the root mean square error, peak signal-to-noise ratio, structural similarity, and equivalent number of looks. The proposed method outperformed U-NET, U-NET++, and other conventional algorithms on each of these metrics. In addition, the model produced high-quality normalized fringes, as judged by both objective metrics and visual inspection, significantly improving the accuracy of the phase solutions for single interference fringes.


Introduction
Interferometry is an active area of research in which the processing of fringe maps is essential to recovering hidden three-dimensional surface shapes [1]. However, when phase demodulation is performed on single interference fringes, the background intensity, fringe amplitude, and ambient noise present at the time of data acquisition can degrade fringe contrast and, ultimately, the accuracy of phase reconstruction. As such, a variety of single-frame interferometry techniques have been introduced in recent years to address this issue [2][3][4][5][6]. These methods require only one interferometric fringe map for phase extraction and are primarily applied to dynamic measurements requiring high real-time performance. They include the Fourier transform [2,3], regularized phase tracking [4], and Hilbert transform [5,6] methods. In the case of single interference fringes, a lack of contrast can cause serious distortions in the resulting phase distribution. As such, filtering ambient noise and normalizing the background intensity and fringe amplitude are necessary preprocessing steps when reconstructing the phase of single interference fringe data [7][8][9].
These techniques have been implemented in several previous studies. For example, Quiroga [10] proposed an orthogonal projection algorithm for background suppression and modulation normalization. Ochoa [11] developed a process for normalizing and denoising fringe maps using directional derivatives. Bernini [12] proposed a technique based on 2D empirical mode decomposition and Hilbert transforms for the normalization of striped images. Tien [13] developed a fringe normalization algorithm using Zernike polynomial fitting to eliminate unwanted intensity in interferograms, suppressing background pixels and high-frequency noise while improving contrast through normalization. Sharma [14] introduced a fringe normalization and denoising process based on Kalman filtering to fit background and modulation terms using a raster scan. Leijie [15] proposed a fringe map orthogonalization method based on a series of GANs, which achieved phase demodulation of single interference fringes with high accuracy.
In response to the above analysis, this paper proposes a preprocessing method for single interference fringes that quickly and easily performs denoising and normalization, laying the foundation for the subsequent phase solution of single interferometric fringe patterns. Denoising and normalization are achieved using an improved U-NET framework. The algorithm was trained by first determining the form of the background and fringe structures. Gaussian noise and the corresponding interference fringes under ideal conditions were then paired to establish training samples. Four evaluation metrics, the root mean square error (RMSE), peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), and equivalent number of looks (ENL), were utilized to verify the feasibility of this process. Finally, the proposed method was further assessed using a series of experimentally acquired single interference fringes.

Interference Fringe Model
Interference fringes can be expressed mathematically as

I(x, y) = A(x, y) + B(x, y) cos[ϕ(x, y)] + N(x, y),

where A(x, y) is the background intensity in a fringe map, B(x, y) is the fringe amplitude, ϕ(x, y) is a phase term associated with a measured physical quantity, and N(x, y) is additive noise. Normalizing the background and fringe amplitude, while filtering the additive noise, allows the interference (after preprocessing) to be represented as

I′(x, y) = cos[ϕ(x, y)].

A comparative analysis shows that the modulation and background intensity take the form of a Gaussian function given by

A(x, y) = a exp{−[(x − x0)² / (2σx²) + (y − y0)² / (2σy²)]},

where (x, y) is a spatial coordinate, (x0, y0) is the center point coordinate, a is the magnitude, and σx, σy denote the variances.
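As a hedged illustration, the fringe model above can be simulated in a few lines of NumPy. The quadratic phase term and all constants below are placeholder choices for demonstration, not the paper's simulation parameters:

```python
import numpy as np

def make_fringe(size=256, noise_std=0.05, seed=0):
    """Simulate one noisy/ideal fringe pair following the model
    I = A + B*cos(phi) + N, with Gaussian-shaped A and B.
    The phase term and constants are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    y, x = np.mgrid[-1:1:size * 1j, -1:1:size * 1j]

    phi = 20 * (x**2 + y**2) + 5 * x              # placeholder phase term
    ideal = 0.5 + 0.5 * np.cos(phi)               # fringe in the "ideal state"

    gauss = np.exp(-(x**2 + y**2) / (2 * 0.5**2))
    A = 0.4 * gauss                               # background intensity A(x, y)
    B = 0.6 * gauss                               # fringe amplitude B(x, y)
    N = rng.normal(0.0, noise_std, (size, size))  # ambient noise N(x, y)

    noisy = A + B * np.cos(phi) + N
    return noisy, ideal

noisy, ideal = make_fringe()
print(noisy.shape, ideal.shape)
```

Pairs of such (noisy, ideal) maps are exactly the kind of sample pair the network is trained on.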

The DN-U-NET Network Model
U-NETs [16] are improved fully convolutional neural networks designed to solve problems in medical image segmentation. They exhibit several robust properties that have led to an increasing number of applications in a wide variety of tasks. The purpose of this paper is to achieve preprocessing, in the form of denoising and normalization, of single interferometric fringes using an improved U-NET neural network. The proposed model is thus termed a denoising and normalization U-NET (DN-U-NET).
This process involves an attention mechanism [17], a technique that emphasizes key information by assigning different weights to individual features to improve model accuracy. Attention mechanisms have been widely used in various deep learning tasks, such as computer vision and natural language processing. The convolutional block attention module (CBAM) [18] divides the attention step into two separate components, a channel attention module and a spatial attention module, which not only conserves parameters and computation, but also facilitates integration into existing network architectures as a plug-and-play module. This inclusion typically improves extraction accuracy and network generalizability. The bottleneck attention module (BAM) [19] was developed by the same group that proposed CBAM. While these frameworks are similar, CBAM can be described as a series connection of the channel attention and spatial attention modules, whereas BAM can be viewed as a parallel connection (see Figures 1 and 2). The DN-U-NET network structure is shown in Figure 3, where the input consists of interference fringes with a certain background intensity, fringe amplitude, and added ambient noise, and the output consists of the corresponding interference fringes in an ideal state.
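The series-versus-parallel distinction between CBAM and BAM can be sketched schematically in NumPy. This is only a shape-level illustration of the combination order: the learned shared MLP of the channel branch and the 7×7 convolution of the spatial branch in the actual modules are deliberately omitted, and BAM's exact refinement formula is simplified:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(x):
    # x: (H, W, C). Squeeze spatial dims with avg and max pooling,
    # then gate each channel. (The learned shared MLP is omitted.)
    return sigmoid(x.mean(axis=(0, 1)) + x.max(axis=(0, 1)))   # (C,)

def spatial_attention(x):
    # Squeeze channels with avg and max pooling, then gate each pixel.
    # (The 7x7 convolution used in CBAM is omitted.)
    return sigmoid(x.mean(axis=2) + x.max(axis=2))             # (H, W)

def cbam_like(x):
    # CBAM: channel then spatial attention applied in SERIES.
    x = x * channel_attention(x)[None, None, :]
    x = x * spatial_attention(x)[:, :, None]
    return x

def bam_like(x):
    # BAM-style: the two attention maps are combined in PARALLEL,
    # then applied to the input in one step (simplified).
    m = channel_attention(x)[None, None, :] * spatial_attention(x)[:, :, None]
    return x * m

feat = np.random.default_rng(1).normal(size=(8, 8, 4))
print(cbam_like(feat).shape, bam_like(feat).shape)
```

In both cases the attended feature map keeps the input shape, which is what makes these modules plug-and-play inside an encoder-decoder such as DN-U-NET.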

Dataset and Environment Configuration
The dataset consists of two parts: interferometric fringes with an added background intensity, fringe amplitude, and ambient noise, and the corresponding fringes in an ideal state. The Zernike polynomials are a set of complete orthogonal basis functions on the unit-circle domain, constructed by the Dutch scientist F. Zernike in 1934 during his research on phase contrast microscopy. Their use as basis functions for phase fitting corresponds well with the classical phase aberrations of optical systems and provides the necessary conditions for subsequent studies. Random phases were generated using Zernike polynomials as follows:

ϕ(x, y) = Σ_{i=1}^{n} a_i Z_i(x, y),

where Z_i(x, y) are the Zernike polynomials and a_i are the corresponding coefficients (i = 1, 2, 3, . . ., n). To simulate phase data close to practical applications, the values of the Zernike polynomial coefficients were kept consistent with those of experimental data from a Zygo interferometer. These coefficients were generated using a random function, with the range of values shown in Table 1.
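A minimal sketch of this phase generation is given below. The polynomial ordering, normalization, and the uniform coefficient range are assumptions for illustration; the paper's actual coefficient ranges come from Table 1:

```python
import numpy as np

# The first few Zernike polynomials in Cartesian form on the unit disk.
# Ordering/normalization and coefficient ranges are illustrative assumptions.
ZERNIKE = [
    lambda x, y: np.ones_like(x),            # piston
    lambda x, y: x,                          # tilt x
    lambda x, y: y,                          # tilt y
    lambda x, y: 2 * (x**2 + y**2) - 1,      # defocus
    lambda x, y: x**2 - y**2,                # astigmatism 0 deg
    lambda x, y: 2 * x * y,                  # astigmatism 45 deg
]

def random_phase(size=256, seed=0):
    """phi(x, y) = sum_i a_i * Z_i(x, y) with random coefficients a_i."""
    rng = np.random.default_rng(seed)
    y, x = np.mgrid[-1:1:size * 1j, -1:1:size * 1j]
    coeffs = rng.uniform(-1.0, 1.0, len(ZERNIKE))
    phi = sum(a * Z(x, y) for a, Z in zip(coeffs, ZERNIKE))
    mask = (x**2 + y**2) <= 1.0              # Zernikes live on the unit disk
    return np.where(mask, phi, 0.0)

phi = random_phase()
print(phi.shape)
```

Each random phase is then wrapped into cos[ϕ(x, y)] to produce one ideal fringe map of the training pair.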

A total of 12,000 pairs of input and ideal fringe maps, each 256 × 256 pixels, were generated by simulation. These images were input to the network and used for training, with a 5:1 ratio of samples in the training and test sets. The TensorFlow framework was implemented using Python on a PC with an Intel(R) Xeon(R) Gold 5218R CPU @ 2.10 GHz. Calculations were accelerated using an NVIDIA GeForce RTX 3080. Weighting parameters were optimized using the Adam optimizer, with the learning rate fixed at 0.0001. A total of 500 iterations were performed, with a training time of ~40 h required to identify ideal weights. The corresponding loss function, minimized during training, can be expressed mathematically as

Loss = (1/n) Σ_{i=1}^{n} [f_i(x, y) − g_i(x, y)]²,

where f_i(x, y) is a fringe map generated by the network, g_i(x, y) is the corresponding ground truth, and n is the number of samples in a mini-batch.

Evaluation Indicators
Four numerical metrics were selected to evaluate the results and quantify deviations from true values, both before and after denoising and normalization.This included the root mean square error (RMSE), peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), and equivalent number of looks (ENL).
The RMSE represents the extent to which measured data deviate from true data, with smaller values indicating higher accuracy. It can be expressed mathematically as

RMSE = sqrt{ (1/(XY)) Σ_{x=1}^{X} Σ_{y=1}^{Y} [R(x, y) − G(x, y)]² },

where X and Y denote the width and height of a fringe map, respectively, R(x, y) represents the data before or after preprocessing, and G(x, y) is the true value data.
The PSNR captures differences between corresponding pixels in a preprocessed image and a true value image:

PSNR = 10 log10[ (2ⁿ − 1)² / MSE ],

where 2ⁿ − 1 represents the maximum gray value of a pixel in the image (n = 8), and the MSE can be expressed as

MSE = (1/(XY)) Σ_{x=1}^{X} Σ_{y=1}^{Y} [R(x, y) − G(x, y)]².

The SSIM is used to evaluate the similarity of two images and is given by

SSIM(u, v) = [(2 µ_u µ_v + C1)(2 δ_uv + C2)] / [(µ_u² + µ_v² + C1)(δ_u² + δ_v² + C2)],

where u and v are two localized windows of size W × W in the true value data and in the data before or after preprocessing, respectively; µ_u and µ_v are the mean pixel gray values in the two windows; δ_u² and δ_v² are the corresponding variances; δ_uv is the covariance of the two windows; and C1 and C2 are small constants that stabilize the division. The ENL provides a measure of the smoothness of a homogeneous region and can be expressed as

ENL = µ² / σ²,

where µ and σ denote the mean and standard deviation of pixel values in an image, respectively.
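The four metrics can be sketched as follows. Note one simplification: the SSIM here is computed over a single global window rather than averaging local W × W windows, and the C1/C2 constants and test data are illustrative:

```python
import numpy as np

def rmse(r, g):
    # Root mean square error between processed data r and ground truth g.
    return float(np.sqrt(np.mean((r - g) ** 2)))

def psnr(r, g, n_bits=8):
    # Peak signal-to-noise ratio for n_bits-per-pixel images (in dB).
    mse = np.mean((r - g) ** 2)
    return float(10 * np.log10((2 ** n_bits - 1) ** 2 / mse))

def ssim_global(u, v, c1=(0.01 * 255) ** 2, c2=(0.03 * 255) ** 2):
    # Single-window SSIM; the full index averages this over local windows.
    mu_u, mu_v = u.mean(), v.mean()
    var_u, var_v = u.var(), v.var()
    cov_uv = ((u - mu_u) * (v - mu_v)).mean()
    return float(((2 * mu_u * mu_v + c1) * (2 * cov_uv + c2))
                 / ((mu_u**2 + mu_v**2 + c1) * (var_u + var_v + c2)))

def enl(img):
    # Equivalent number of looks of a (homogeneous) region.
    return float(img.mean() ** 2 / img.var())

# Illustrative check: a flat reference image and a noisy copy.
g = np.full((64, 64), 128.0)
r = g + np.random.default_rng(0).normal(0.0, 2.0, g.shape)
print(rmse(r, g), psnr(r, g), ssim_global(r, g), enl(r))
```

Identical images give RMSE 0 and SSIM 1, while larger PSNR/SSIM/ENL and smaller RMSE indicate better denoising and normalization, matching how Tables 2 and 3 are read.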

Simulation and Analysis
Six different sets of fringes were processed using DN-U-NET, as shown in Figure 4. Specifically, Figure 4a shows fringe samples in an ideal state, in which the background intensity and fringe amplitude are constant. Figure 4b shows fringes before processing, in which the background intensity and fringe amplitude (in the form of a Gaussian function) were added along with Gaussian noise. Figure 4c displays the corresponding results after processing with DN-U-NET. Visual inspection suggests that the processed fringes exhibit improved contrast while noise was suppressed significantly, producing results that are more similar to the ideal fringes.
The proposed DN-U-NET was compared with existing algorithms, including U-NET, U-NET++ [20], R2-UNET [21], Attention_U-NET [22], and U-NET3+ [23]. Each network was trained using the same dataset and methodology as the proposed network. Figure 5 shows a series of interference fringes in an ideal state, with added background intensity, fringe amplitude, and ambient noise, along with a comparison of the denoising and normalization results produced by the various models. Visual inspection suggests that several of these techniques significantly enhanced fringe contrast and suppressed noise. The metrics described above were used to quantify the effectiveness of fringe denoising and normalization for comparison with conventional algorithms. Table 2 provides a comparative analysis of the interference fringe denoising and normalization results from the different models, with bold font denoting the best (highest or lowest) values. Each algorithm achieved significant improvements across every evaluation index compared with the original noisy image, and the processing time for a single interference fringe was ~1 microsecond for all of the networks. Notably, the proposed DN-U-NET produced improvements of 0.2986 (RMSE), 0.5712 dB (PSNR), and 0.0009 (SSIM) compared with the standard U-NET algorithm, and a global ENL evaluation demonstrated a further increase of 0.0071. The proposed DN-U-NET performed best on all four evaluation metrics. Its RMSE improved from 96.0986 before processing to 4.3928, a very substantial improvement, and from 5.0276 (U-NET++) to 4.3928, an improvement of 12.6%. Its PSNR improved from 8.4765 dB before processing to 35.2760 dB, an improvement of 26.7995 dB, and from 34.1035 dB (U-NET++) to 35.2760 dB, an improvement of 1.1725 dB.
The validity and stability of the proposed method were further verified through the addition of background intensity and fringe amplitude in the form of a Gaussian function, together with Gaussian noise with a mean of 0, to the label fringes shown in Figure 5a. Fringes with noise standard deviations of 0, 0.05, 0.07, 0.09, 0.12, and 0.15 are shown in Figure 6a-f, respectively. The DN-U-NET was used to perform denoising and normalization at these varying noise intensities, as shown in Figure 6g-l. Visual inspection suggests that the processed interference fringes exhibit significantly improved contrast, as noise has been effectively suppressed. These results were also analyzed quantitatively. Specifically, Table 3 provides a comparative analysis of the denoising and normalization effects produced by the DN-U-NET. The processed images improved significantly, as measured by RMSE, PSNR, SSIM, and ENL. Notably, at a noise level of 0.15, these evaluation metrics decreased but remained at desirable values. This outcome provides further evidence of the effectiveness and stability of the proposed technique for denoising and normalizing interference fringes, even at high noise levels.

The effectiveness of the proposed processing method was further verified by solving for the phase of single interference fringes using a technique from the literature [24]. The accuracy of the phase before and after this single processing step was then compared to evaluate performance. A single interference fringe in an ideal state is shown in Figure 7a, while Figure 7b shows the corresponding label phase with PV and RMS values of 0.1334 λ and 0.0289 λ, respectively. Figure 7c shows a single fringe before processing, while Figure 7d shows the corresponding phase before processing, with PV and RMS values of 0.0448 λ and 0.0105 λ, respectively. Figure 7e shows the error between the phase solved before processing and the label phase, with residual PV and RMS values of 0.1174 λ and 0.0254 λ, respectively. Figure 7f displays a single interference fringe after processing, with Figure 7g providing the corresponding phase after processing, with PV and RMS values of 0.0950 λ and 0.0221 λ, respectively. Figure 7h provides the phase error between the processed phase and the label phase, with residual PV and RMS values of 0.0566 λ and 0.0111 λ, respectively. A comparison of the reconstructed phase accuracy before and after processing demonstrated that the residual PV improved from 0.1174 λ to 0.0566 λ, while the residual RMS improved from 0.0254 λ to 0.0111 λ. These simulated results indicate that the proposed method can improve fringe contrast while suppressing noise, significantly improving the accuracy of the single interference fringe phase and further validating the effectiveness and necessity of the technique proposed in this paper.


Experimental Analysis
In addition to the use of simulated fringes, DN-U-NET performance was evaluated with a series of experimentally collected fringe maps, as shown in Figure 8a. The original interference fringe pattern was collected using a ZYGO Verifire PE Fizeau-type phase-shifting interferometer, with different plane mirrors serving as the measurement sample. It is evident that the contrast is poor and obvious noise is present. The Zygo fringes were processed using the different networks, with the results shown in Figure 8. Visual observation indicates that several models significantly enhance fringe contrast and suppress noise when processing the original Zygo interference fringe. To compare the effects before and after processing, the gray scale distribution curves of the 128th row produced by the several models are compared in Figure 9. In addition, the U-NET results are compared with those of the proposed DN-U-NET in Figure 10, which shows that the gray level distribution of the interference fringes processed by the proposed model is smoother and more uniform. The fringes shown in Figure 11a were then processed using the proposed technique, with the results shown in Figure 11b. Visual inspection suggests the contrast has been enhanced while the ambient noise has been suppressed significantly. Figure 12 provides a comparison of the gray level distribution curves for the 128th row in the two fringe maps shown in Figure 11a,b, which are evidently smoother after preprocessing. Notably, this process requires only ~1 microsecond (10⁻⁶ s) of runtime for each interference fringe.
Figure 13b shows the original interference fringe acquired with the interferometer, while Figure 13e displays the interference fringe after processing using the method proposed in this paper. The measured phase distribution was then compared with the phase acquired using a four-step phase-shifting technique, which served as the reference phase (see Figure 13a). This reference phase has PV and RMS values of 0.1381 λ and 0.0300 λ, respectively. Figure 13c shows the corresponding phase distribution before processing, with PV and RMS values of 0.0670 λ and 0.0110 λ, respectively. Figure 13d displays the phase error between the solved phase (before processing) and the reference phase, with residual PV and RMS values of 0.1200 λ and 0.0267 λ, respectively. Figure 13f shows the corresponding phase solved after processing, with PV and RMS values of 0.1110 λ and 0.0241 λ, respectively. Figure 13g shows the error between the solved phase (after processing) and the reference phase, with residual PV and RMS values of 0.0856 λ and 0.0121 λ, respectively. A comparison of the reconstructed phase accuracy before and after processing indicates that the residual PV improved from 0.1200 λ to 0.0856 λ, while the residual RMS improved from 0.0267 λ to 0.0121 λ. These experimental results also demonstrate that the proposed method can significantly improve phase reconstruction accuracy while improving fringe contrast.

Conclusions
In this study, a neural network-based preprocessing model (DN-U-NET) was proposed for single interference fringes. This technique was applied to determine the form of the fringe amplitude and background intensity structures, as well as to generate fringe maps with an added background intensity, fringe amplitude, and ambient noise. The simulated interferometric fringes in an ideal state were generated using Zernike polynomials. Simulated and experimental results demonstrated that the proposed method can denoise and normalize a single interference fringe quickly and accurately, significantly improving the accuracy of the subsequent phase solution.

Figure 1 .
Figure 1. A diagram of the CBAM module.

Figure 2 .
Figure 2. A diagram of the BAM module.


Figure 3 .
Figure 3. A diagram of the DN-U-Net network architecture.


Figure 4 .
Figure 4. A comparison of pre-processing effects after denoising and normalization. Included images represent (a) labels and samples both (b) before processing and (c) after processing.


Figure 5 .
Figure 5. A comparison of pre-processing effects after denoising and normalization using different network models.

Figure 7 .
Figure 7. A comparison of the reconstructed phase before and after processing. (a) Label; (b) label phase; (c) before processing; (d) phase distribution before processing; (e) reconstructed error before processing; (f) after processing; (g) phase distribution after processing; (h) reconstructed error after processing.


Figure 8 .
Figure 8. Comparison of the effects of different network models on denoising and normalized preprocessing of the original Zygo interference fringe.

Figure 9 .
Figure 9. Comparison of gray scale distribution curves at row 128 before and after processing by different network models.

Figure 10 .
Figure 10. Comparison of the DN-U-NET and U-NET preprocessing results.


Figure 12 .
Figure 12. Gray scale distribution curves at row 128 before and after processing.


Figure 13 .
Figure 13. A comparison of the reconstructed phase before and after processing. (a) Label phase; (b) before processing; (c) phase distribution before processing; (d) reconstructed error before processing; (e) after processing; (f) phase distribution after processing; (g) reconstructed error after processing.


Table 1 .
The range of Zernike polynomial data.


Table 2 .
Table 2. A comparison of the denoising and normalization effects produced by different network models.


Table 3 .
A quantitative comparison of denoising and normalization effects at varying noise levels.
