Article

AI-Based 2D Phase Unwrapping Under Rayleigh-Distributed Speckle Noise and Phase Decorrelation

by
Aidan Soal
1,
Juergen Meyer
1,2,
Bryn Currie
1 and
Steven Marsh
1,*
1
School of Physical and Chemical Sciences, University of Canterbury, Christchurch 8041, New Zealand
2
Department of Radiation Oncology, University of Washington, Seattle, WA 98195, USA
*
Author to whom correspondence should be addressed.
Photonics 2026, 13(2), 208; https://doi.org/10.3390/photonics13020208
Submission received: 13 November 2025 / Revised: 9 February 2026 / Accepted: 13 February 2026 / Published: 22 February 2026

Abstract

Phase unwrapping is a critical step in interferometric imaging modalities such as holography and synthetic aperture radar, yet conventional analytical algorithms struggle in low signal-to-noise and high-speckle environments. This study presents an artificial intelligence (AI)-based phase-unwrapping framework using a Pix2Pix conditional generative adversarial network (cGAN). The model was designed for robustness under Rayleigh-distributed speckle noise and phase decorrelation, conditions representative of realistic interferometric measurements. Trained on synthetically generated wrapped–unwrapped phase pairs, the AI approach was compared against two established analytical phase-unwrapping methods: a quality-guided unwrapping algorithm (Herraez) and a minimum-norm network-flow optimization method (Costantini). Quantitative evaluation using the root mean square error (RMSE), structural similarity index measure (SSIM), and a composite performance index demonstrated that the cGAN was superior under noisy conditions, successfully recovering phase information beyond its training noise range of σ = 10 and accurately unwrapping phases up to σ = 20. This reflected a pure unwrapping performance analysis; utility performance was also tested by comparing all outputs to the clean, noiseless phase. The Pix2Pix model also proved resilient to detector artifacts, despite not being explicitly trained on them, and its worst performance yielded RMSE and SSIM values of 0.089 and 0.927, respectively, with perfect values being 0 and 1. The proposed framework simultaneously unwraps and denoises the phase, offering a simple, open-source, and highly adaptable alternative for phase unwrapping in noisy interferometric systems. Future work will focus on extending the framework to experimental datasets.

1. Introduction

Interferometric measurements are highly sensitive to atmospheric fluctuations and mechanical vibrations, which introduce noise into the measured interferogram. In interferometric methods affected by speckle noise and low signal strength, it is common to observe a very low signal-to-noise ratio (SNR). An example of such a system is the digital holographic interferometric (DHI) dosimeter developed by our group [1], which determines the absorbed radiation dose by measuring phase shifts resulting from refractive index changes caused by heat energy transferred to transparent media. Noise during the phase unwrapping step of image reconstruction can lead to considerable errors. This challenge is not unique to DHI dosimetry; it also arises in other domains where accurate phase unwrapping is critical, including optical imaging [2], magnetic resonance imaging [3], and synthetic aperture radar [4]. Due to the reconstruction method, the recovered phase is wrapped to the interval [0, 2π]. In low-signal phase maps, the measured phase may be less than π, so the phase signal itself produces no wrapping. Instead, various noise sources, such as atmospheric turbulence, mechanical vibrations, and detector noise in the CCD camera, can induce phase wraps [2]. Consequently, in low-SNR environments typical of interferometric measurements, phase wrapping arises primarily from noise rather than from the underlying physical phase signal. This noise is present before the phase is wrapped. However, additional noise is also introduced after wrapping due to phase decorrelation arising from the subtraction of the pre- and post-irradiation phase images [5]. These compounded effects underscore the critical need for robust phase unwrapping algorithms in interferometric imaging applications.
There are many analytical phase unwrapping methods currently available. The two main types used for phase unwrapping can be divided into local and global unwrappers. A widely used local algorithm is the Herraez unwrapper [6,7], known for its robust, path-independent 2D strategy that minimizes phase errors in noisy or discontinuous regions. This method was also employed during the prototype development of the DHI dosimeter under joint research by the University of Canterbury and the University of Washington [8], where it demonstrated promising performance but remained sensitive to noise. The Herraez algorithm performs phase unwrapping by sorting pixels according to a reliability metric and progressively unwrapping them in order of decreasing reliability, ensuring that high-confidence regions guide the process and reduce error propagation. To enable comparison with a global phase unwrapping strategy, we selected the Costantini algorithm [9]. This method operates by solving a system of equations over the entire phase map, making it inherently more robust to noise compared with local approaches. The Herraez and Costantini algorithms were selected as they represent two of the most robust and widely validated approaches, quality-guided and minimum-norm global optimization methods, respectively, providing strategies for reliable phase unwrapping [6].
Numerous deep learning methods have been developed for phase unwrapping. These approaches typically leverage architectures such as the Residual U-Net (Res-UNet) [10] and the Separable-Residual-Dense-Inverted U-Net (SRDU-Net) [11]. Other regression-based models for general phase unwrapping include PHU-Net [12], PU-GAN [13], and BCNet [14]. However, a critical limitation shared across these models is their reliance on simplistic Gaussian noise assumptions. They typically do not account for the complex noise characteristics present in holographic measurement systems. This assumption leads to unrealistically high performance metrics and limits their practical applicability on real-world data [15]. Gaussian noise is additive and follows a normal distribution. Typically, it arises from electronic components, thermal fluctuations, or quantization errors in sensors and systems. On the other hand, Rayleigh noise is multiplicative and follows a Rayleigh distribution, which is skewed and only defined for non-negative values. It arises in coherent imaging systems like holography due to random interference of multiple wavefronts and is more relevant in speckle noise scenarios, especially in off-axis holography or scattering media [16]. It can dominate in reconstructions where interference patterns are strong, and is therefore more challenging to filter due to its non-Gaussian nature and multiplicative behavior.
The aim of this work was to develop and characterize a robust, open-source AI-based phase unwrapping model that extends beyond published approaches by explicitly accounting for realistic noise conditions typical of low-SNR environments. Unlike previous approaches [13,17,18,19], the proposed framework incorporates phase decorrelation and Rayleigh-distributed speckle noise to better reflect practical interferometric data for holography [5]. Model testing also includes a systematic evaluation of artifacts present in the wrapped phase images, providing deeper insight into performance under challenging measurement conditions.

2. Materials and Methods

2.1. Phase Unwrapping

Phase unwrapping is generally performed by recovering the original phase through the identification of phase jumps and adding an appropriate integer multiple of 2 π . This can be expressed as
$$\phi = \psi + 2\pi k, \tag{1}$$
where ϕ is the true phase, ψ is the wrapped phase, and k is the integer number of 2 π wraps.
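The principle in Equation (1) can be illustrated with a short NumPy sketch (not the paper's method, purely for intuition): a 1D unwrapper detects jumps larger than π between neighboring samples and adds the appropriate integer multiple of 2π.

```python
import numpy as np

# Illustrative sketch: recover phi = psi + 2*pi*k along one dimension by
# detecting jumps larger than pi between neighboring samples.
true_phase = np.linspace(0, 4 * np.pi, 100)   # monotonically increasing phase
wrapped = np.angle(np.exp(1j * true_phase))   # wrap into (-pi, pi]
recovered = np.unwrap(wrapped)                # add 2*pi*k at each detected jump

# Up to a constant 2*pi offset, the recovered phase matches the truth.
offset = recovered[0] - true_phase[0]
assert np.allclose(recovered - offset, true_phase, atol=1e-8)
```

This 1D view is where analytical 2D methods depart: in two dimensions the unwrapping path matters, which is exactly what quality-guided and network-flow approaches are designed to handle.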
For the sake of brevity, detailed descriptions of the Herraez and Costantini unwrapping algorithms are omitted here [7,9]. The Herraez method was implemented using the unwrap_phase function from the scikit-image Python package Version 0.25.2 [20,21], whereas the Costantini method was implemented based on an open-source MATLAB version [22].
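A minimal usage sketch of the scikit-image unwrapper referenced above is shown below; the test surface and its parameters are illustrative choices, not from the paper. Note that `unwrap_phase` lives in `skimage.restoration`.

```python
import numpy as np
from skimage.restoration import unwrap_phase

# Usage sketch of the reliability-sorting 2D unwrapper shipped with
# scikit-image (the Herraez-style implementation referenced in the text).
yy, xx = np.mgrid[0:256, 0:256]
true_phase = 0.001 * ((xx - 128) ** 2 + (yy - 128) ** 2)  # smooth 2D surface
wrapped = np.angle(np.exp(1j * true_phase))               # wrap into (-pi, pi]
recovered = unwrap_phase(wrapped)                         # 2D quality-guided unwrap

# The result may differ from the truth by a global 2*pi*k offset.
offset = 2 * np.pi * np.round(np.mean(recovered - true_phase) / (2 * np.pi))
assert np.allclose(recovered - offset, true_phase, atol=1e-6)
```

On clean, smooth data the recovery is exact up to a global 2πk offset; the differences between methods only appear once noise induces spurious wraps.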
The Pix2Pix (P2P) model is a conditional generative adversarial network (cGAN) [23]. It follows the same basic structure as a regular GAN, in which a generator and a discriminator are trained to learn input–output mappings using a loss function. However, in a cGAN, this process is conditioned on an input image, allowing the network to generate outputs that correspond specifically to the given input. For a general GAN, the output is treated as “unstructured,” meaning that each output pixel is considered independently of the input image. In contrast, a cGAN learns a structured mapping by conditioning on the input image, which allows the loss function to penalize structural differences between the input and the generated output. A key advantage of the P2P model is that it is not application-specific and can be applied to nearly any image-to-image translation task, making it potentially well-suited for adaptation to phase unwrapping.

2.2. Dataset Generation

The synthetic training and validation images for AI-based phase unwrapping were generated using MATLAB’s peaks function, which creates superimposed, translated, and scaled Gaussian-like surfaces as defined in Equation (2). This function is commonly used in phase unwrapping training and testing datasets [24], and is often referred to as Gaussian Function Superposition (GFS). The absolute phase of various distributions is obtained through a weighted superposition of multiple Gaussian functions with different mean values and variances, which can produce complex topographies when appropriate coefficient ranges are selected. To generate a larger and more diverse dataset, randomized coefficients were introduced to the peaks formulation, as shown in Equation (3). This approach enables random variations in the shape, amplitude, and spread of the phase surfaces while maintaining physical plausibility.
$$z = 3(1-x)^2 e^{-x^2-(y+1)^2} - 10\left(\frac{x}{5} - x^3 - y^5\right) e^{-x^2-y^2} - \frac{1}{3} e^{-(x+1)^2-y^2}. \tag{2}$$
$$z = c_0 (c_1 - x)^{c_2} e^{-x^{c_3}-(y+c_4)^{c_5}} - c_6\left(\frac{x}{c_7} - x^{c_8} - y^{c_9}\right) e^{-x^{c_{10}}-y^{c_{11}}} - c_{12}\, e^{-(x+c_{13})^{c_{14}}-y^{c_{15}}}. \tag{3}$$
The coefficients c_0–c_15 were randomly generated within the ranges shown in Table 1. These coefficient ranges were chosen to produce sufficiently randomized yet realistic and complex phase distributions while maintaining a signal level low enough to prevent phase wrapping. This ensured that the maximum signal amplitude did not exceed π (z ∈ [−π, π]).
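A Python sketch of the GFS generation step is given below. For brevity it randomizes only the three amplitude coefficients of the peaks form, with illustrative ranges standing in for those of Table 1; the full formulation of Equation (3) also randomizes the exponents and offsets.

```python
import numpy as np

rng = np.random.default_rng(0)

def peaks_random(n=512, rng=rng):
    """Sketch of a randomized Gaussian Function Superposition surface.

    Coefficient ranges here are illustrative stand-ins, not the
    actual ranges listed in Table 1 of the paper.
    """
    x, y = np.meshgrid(np.linspace(-3, 3, n), np.linspace(-3, 3, n))
    a = rng.uniform(1.0, 4.0)
    b = rng.uniform(5.0, 12.0)
    c = rng.uniform(0.2, 0.6)
    z = (a * (1 - x) ** 2 * np.exp(-x ** 2 - (y + 1) ** 2)
         - b * (x / 5 - x ** 3 - y ** 5) * np.exp(-x ** 2 - y ** 2)
         - c * np.exp(-(x + 1) ** 2 - y ** 2))
    # Rescale so the clean signal stays inside [-pi, pi]
    # (no signal-induced wraps, as required in the text).
    return z / np.max(np.abs(z)) * np.pi * 0.9

phase = peaks_random()
```

The rescaling step enforces the constraint stated above: the clean signal alone never wraps, so any wraps in the noisy dataset are noise-induced.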
Using Rayleigh-distributed noise of varying standard deviations ( σ ), 100 phase images were randomly generated and average SNR values were calculated as shown in Table 2.
SNR varied considerably across the 100 generated phase images, with a coefficient of variation of 30%, indicating relative variability and reflecting the inherent diversity of the GFS method combined with added noise. Training datasets of 512 × 512 pixel image pairs were generated using the GFS method with added noise. The dataset size was iteratively increased until stable adversarial training was achieved, as indicated by sustained convergence of both generator and discriminator losses under the standard pix2pix loss formulation [20]. We note that, in adversarial training, loss convergence primarily serves as an indicator of training stability and balance between the generator and discriminator, rather than a direct measure of generalization performance.
To assess generalization and mitigate concerns of overfitting, model performance was evaluated on a held-out test set comprising image pairs entirely independent of the training data. Consistent reconstruction quality on this test set was observed once stable training behavior was reached, suggesting that the chosen dataset size was sufficient for the task. The model was trained for approximately 1000 epochs with a batch size of 8, selected to accommodate the memory limitations of the available single GPU (RTX 4070, 12 GB VRAM).

2.3. Artifacts

There are multiple potential sources of wrapping artifacts in interferometric phase images. These may arise from mirror deformations in the optical setup or from faulty elements in the detector system commonly used in interferometry. To analyze the impact of such artifacts on the unwrapping process, a circular region with a radius of 50 pixels, with zero signal, was introduced into the wrapped phase map to simulate a defective sensor pixel region. This experiment was designed to evaluate how a dead detector area influenced phase unwrapping performance. The evaluation was conducted using the analytical methods, and the AI model, allowing assessment of the model’s robustness in handling unexpected scenarios not represented in the training data.
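The dead-detector simulation described above can be sketched as follows; the surrounding wrapped map here is a random placeholder, while the radius and center match the values stated in the text.

```python
import numpy as np

# Sketch of the simulated dead-detector artifact: a circular region of
# zero signal stamped into the wrapped phase map.
def add_dead_region(wrapped, center=(350, 350), radius=50):
    yy, xx = np.ogrid[:wrapped.shape[0], :wrapped.shape[1]]
    mask = (xx - center[0]) ** 2 + (yy - center[1]) ** 2 <= radius ** 2
    out = wrapped.copy()
    out[mask] = 0.0            # zero signal inside the defective area
    return out, mask

# Placeholder wrapped phase map for illustration only.
wrapped = np.random.default_rng(0).uniform(-np.pi, np.pi, (512, 512))
damaged, mask = add_dead_region(wrapped)
```

The same mask can be translated or resized to reproduce the position and size studies described later in the paper.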

2.4. Noise

The dominant noise source in many interferometric measurements, including in our DHI dosimeter [8], is speckle noise. The employed off-axis configuration produces resolved speckle, where individual speckle grains extend across multiple pixels [5]. The average speckle grain size is related to the optical configuration by
$$\Delta x = \frac{\lambda d_0}{N p_x}, \tag{4}$$
where λ is the wavelength of laser light, d 0 is the distance between the object and sensor, p x is the pixel pitch of the sensor in the x-direction, and N is the number of pixels along the x-direction. For the optical setup used in this work, the speckle grain size spans approximately 7 × 7 pixels.
In holography, the speckle noise amplitude in the hologram plane follows the Rayleigh distribution:
$$P(x;\sigma) = \frac{x}{\sigma^2} \exp\!\left(-\frac{x^2}{2\sigma^2}\right), \tag{5}$$
where σ is the standard deviation. This noise is introduced prior to phase wrapping and is the primary cause of wrapping. In particular, the noise amplitude can exceed 2 π in the phase image. The local variation in noise amplitude at each position x is determined by the parameter σ . In contrast, most previous studies [10,11,12,14] simulated noise by adding it after phase wrapping, typically assuming a Gaussian distribution. However, this approach does not accurately capture the statistical or spatial characteristics of speckle noise observed in real holographic measurements.
Another important source of noise in interferometric measurements, which has not been widely investigated, is speckle phase decorrelation noise. This type of noise arises when two phase object states are subtracted, each containing independent statistical speckle variations [25]. This type of noise is therefore added post-wrapping. The probability distribution of the decorrelated phase error can be expressed as
$$p(\epsilon) = \frac{1-|\mu|^2}{2\pi}\left[ \left(1-\beta^2\right)^{-3/2} \left(\beta \arcsin\beta + \frac{\pi\beta}{2}\right) + \left(1-\beta^2\right)^{-1} \right], \tag{6}$$
where μ is the degree of coherence and β is a parameter describing the correlation between the two speckle fields, defined as
$$\beta = |\mu| \cos(\epsilon), \tag{7}$$
where $\epsilon = \phi_2 - \phi_1$ represents the variation in phase amplitude at each point between object states 1 and 2. The values of |μ| can vary between 0 and 1. When |μ| = 0, the two fields are completely uncorrelated, representing the worst-case scenario. Conversely, |μ| = 1 corresponds to fully correlated fields, and if the phase difference is close to zero the noise will be weak. However, given the high-amplitude Rayleigh-distributed speckle in the hologram plane typical of DHI, it is unlikely that the phase difference reaches this ideal point. Figure 1 shows the probability density functions according to Equation (6) for different coherence factors.
Two phase maps are generated with randomized Rayleigh noise added multiplicatively to each map, representing two object states. These phase maps are then wrapped, and the phase difference between the two noisy states is calculated for each pixel, producing a single phase map that incorporates speckle noise decorrelation introduced after wrapping.
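The two-state procedure just described can be sketched as below; the two clean states and their 0.5 rad phase shift are illustrative placeholders, while the multiplicative Rayleigh speckle and the post-wrap subtraction follow the text.

```python
import numpy as np

rng = np.random.default_rng(3)

def wrap(phi):
    return np.angle(np.exp(1j * phi))   # map onto (-pi, pi]

# Sketch of the two-state subtraction: each object state carries an
# independent multiplicative Rayleigh speckle realization, both maps are
# wrapped, and the wrapped difference carries post-wrap decorrelation noise.
clean_1 = np.full((256, 256), 0.3)      # pre-irradiation state (illustrative)
clean_2 = clean_1 + 0.5                 # post-irradiation shift (illustrative)
state_1 = wrap(clean_1 * rng.rayleigh(scale=4.5, size=clean_1.shape))
state_2 = wrap(clean_2 * rng.rayleigh(scale=4.5, size=clean_2.shape))
difference = wrap(state_2 - state_1)    # single map with decorrelation noise
```

Because the two speckle realizations are statistically independent, the subtraction does not cancel the noise; this is exactly the decorrelation mechanism quantified by Equation (6).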

2.5. Unwrapping Performance Assessments

To assess the unwrapping performance of both the analytical methods and the P2P AI method, the root mean square error (RMSE) and the structural similarity index measure (SSIM) were used. The SSIM [26] is a widely used metric for quantifying the similarity between two images. Unlike RMSE, which measures absolute pixel differences, SSIM models how humans perceive image quality by comparing structural information, luminance, and contrast.
The root mean square error was calculated per pixel as
$$\mathrm{RMSE} = \sqrt{\frac{\sum_{i=1}^{n} (\hat{y}_i - y_i)^2}{n}}, \tag{8}$$
where $\hat{y}_i$ are the predicted values, $y_i$ are the observed values, and n is the number of observations. This calculation is performed over all pixels in the predicted and ground-truth images. The SSIM across the entire phase image is
$$\mathrm{SSIM} = \frac{(2\mu_x\mu_y + B_1)(2\sigma_{xy} + B_2)}{(\mu_x^2 + \mu_y^2 + B_1)(\sigma_x^2 + \sigma_y^2 + B_2)}, \tag{9}$$
where μ is the mean of the phase, σ is the standard deviation, and $B_1$ and $B_2$ are constants to ensure the denominator is non-zero. In this case, the skimage Python package was used, specifically the skimage.metrics module, to obtain the SSIM value. This approach is similar to that by Chen et al. [11], except that we define an additional metric. By assessing multiple images (e.g., 100 images), we can calculate the mean value for the RMSE, denoted as $\mu_{\mathrm{RMSE}}$, and for the SSIM, denoted as $\mu_{\mathrm{SSIM}}$. From these, a performance index (PI) was defined by the following relationship:
$$PI = \mu_{\mathrm{SSIM}} - \mu_{\mathrm{RMSE}}. \tag{10}$$
SSIM ranges from 0 to 1, with 1 indicating perfect structural similarity, while an ideal RMSE is 0. Therefore, the best possible performance index is 1. However, if the SSIM becomes less than the RMSE, the error is becoming larger relative to the structural preservation of the unwrapped image. In such cases, the PI will be negative, indicating very poor performance. This situation occurs in extreme unwrapping cases.
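The evaluation pipeline can be sketched as follows, using `skimage.metrics.structural_similarity` as stated in the text; the batch of near-perfect predictions below is synthetic and only illustrates the bookkeeping, with PI computed as the mean SSIM minus the mean RMSE.

```python
import numpy as np
from skimage.metrics import structural_similarity

# Sketch of the evaluation pipeline: per-image RMSE and SSIM, averaged over
# a batch, then combined into the performance index PI = mu_SSIM - mu_RMSE.
def rmse(pred, truth):
    return np.sqrt(np.mean((pred - truth) ** 2))

def ssim(pred, truth):
    return structural_similarity(truth, pred,
                                 data_range=truth.max() - truth.min())

rng = np.random.default_rng(0)
# Illustrative smooth ground-truth phase and ten near-perfect predictions.
truth = np.pi * np.outer(np.sin(np.linspace(0, 3, 64)),
                         np.cos(np.linspace(0, 3, 64)))
preds = [truth + rng.normal(0, 0.01, truth.shape) for _ in range(10)]

mu_rmse = np.mean([rmse(p, truth) for p in preds])
mu_ssim = np.mean([ssim(p, truth) for p in preds])
pi = mu_ssim - mu_rmse   # approaches 1 for a near-perfect unwrapper
```

With accurate, low-noise predictions the PI sits close to 1; as reconstruction error grows relative to structural fidelity, the PI falls and can become negative, matching the interpretation given above.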
A flowchart illustrating the proposed method is shown in Figure 2. The P2P model architecture is not included here, as it is described in detail in the corresponding literature [23].
It is important to distinguish between the evaluation criteria used for analytical and AI-based unwrapping methods. Analytical unwrappers are designed solely to recover phase continuity and therefore preserve the noise present in the image pre-wrapping. As such, their unwrapping performance is appropriately assessed by comparison with the noisy phase image (NP), isolating their ability to correctly unwrap phase without altering noise characteristics. In contrast, the proposed AI model performs joint phase unwrapping and denoising, and is therefore evaluated against the clean phase (CP). Comparing analytical methods to the clean phase would conflate denoising with unwrapping and would not reflect their intended functionality. This evaluation strategy ensures a fair and methodologically consistent comparison based on the intended utility of the resulting unwrapped phase images.
To further assess the utility of the unwrappers, the predictions are also compared to the clean ground-truth phase image, creating a consistent baseline across the three methods. The clean ground-truth image is the ideal reconstruction case. These two pathways are shown in Figure 2 as a utility comparison pathway and an unwrapping pathway, respectively.

3. Results

The AI model was trained under realistic interferometric measurement conditions characterized by a low SNR. Noise was introduced prior to phase wrapping to simulate acquisition-related distortions, and additional phase decorrelation noise was applied post-wrapping to reflect the effects of image subtraction. This aimed to replicate the complex noise environment typical of interferometric systems. To determine suitable noise amplitudes (σ) and correlation factors (μ) for the phase decorrelation noise in the dataset, various combinations of these parameters were tested to identify the conditions that produced the desired phase-wrapping behavior. A value of σ = 2.5 was determined as the threshold above which noise-induced wrapping was observed across all correlation factors.
Two noise values were selected near the midpoint of the training dataset's noise amplitude range, along with one value double that amount, to evaluate the model’s performance at both a well-trained noise level and a level beyond the training data. A representative image with μ = 0.5, σ = 4.5, and SNR = 0.0204 is illustrated in Figure A1. Furthermore, Figure A2 highlights how the model generalizes to previously unseen noise levels, specifically at μ = 0.5, σ = 20, and SNR = 0.0012. As shown, the model is able to perform phase unwrapping effectively even at noise levels well above those seen during training. During model training, only noise levels up to σ = 10 were utilized. The results corresponding to Figure A1 and Figure A2 are summarized in Table 3.
To assess the stability and performance of the AI model against the analytical methods, PI values were plotted against noise level to summarize overall trends and to identify performance thresholds under increasing noise. These values are shown in Figure 3. The 95% confidence interval (CI) is also shown around the mean values. This represents the range within which the true mean is expected to lie with 95% confidence.
The AI model demonstrates generalization beyond the noise levels encountered during training. Although trained only up to a noise level of σ = 10, it continues to produce reasonable results beyond this. It is again emphasized that this analysis reflects pure unwrapping performance, shown in Figure 3a, where analytical methods are evaluated relative to the noisy phase image, while the AI-based method is evaluated relative to the clean phase image. Analytical methods are also compared directly to the clean phase, as this shows their utility in recovering a clean ground-truth phase (Figure 3b).
Table 3 and Figure 3 provide evidence to suggest that the P2P model is effective as a phase unwrapper for complex speckle noise scenarios, including phase decorrelation noise, allowing recovery of clean, noiseless phase maps up to at least approximately σ ≈ 15 when trained up to σ = 10.
To examine how robust the trained model is to phase decorrelation noise, Figure 4 was generated to show PI values across a range of correlation factors and noise levels.

Wrapping Artifacts

The artifact simulating a dead detector area is shown in Figure 5. It is simulated as a circular region of radius 50 pixels with its center at pixel (350, 350). Two aspects can be analyzed using both the analytical and P2P-based unwrapping methods: (i) the effect of artifact position and (ii) the effect of artifact size on unwrapping performance. By varying the position of the artifact, it can be assessed whether its location within the phase map influences the ability of the unwrappers to recover the true phase. For instance, placing the artifact near the center of the phase peak may present a more challenging scenario compared to positioning it in lower-gradient regions. The influence of artifact position was first investigated, as illustrated in Figure 6, before examining the effect of artifact size. Figure 6 shows an RMSE and SSIM heatmap for different artifact positions. The artifact center is translated across the image grid in increments of 50 pixels along both the horizontal and vertical axes. Since the artifact has a radius of 50 pixels, translating the center in 50-pixel steps allows the artifact to span the entire image plane. The x- and y-axis tick intervals correspond to 50 pixels each.
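The position sweep behind these heatmaps can be sketched as below. For brevity the sketch scores the damaged map directly against an illustrative clean phase ramp; a real evaluation would run each unwrapper on the damaged wrapped input at every grid position, as the paper does.

```python
import numpy as np

# Sketch of the artifact-position sweep: the dead-region center is stepped
# across the image in 50-pixel increments and a per-position score is
# recorded on a grid, mirroring the heatmap layout.
def rmse(a, b):
    return np.sqrt(np.mean((a - b) ** 2))

clean = np.outer(np.linspace(0, np.pi, 512), np.ones(512))  # illustrative ramp
step, radius = 50, 50
centers = list(range(0, 512, step))                         # 11 positions per axis
heatmap = np.zeros((len(centers), len(centers)))
yy, xx = np.ogrid[:512, :512]
for i, cy in enumerate(centers):
    for j, cx in enumerate(centers):
        damaged = clean.copy()
        damaged[(xx - cx) ** 2 + (yy - cy) ** 2 <= radius ** 2] = 0.0
        heatmap[i, j] = rmse(damaged, clean)
```

Even this stand-in scoring shows the qualitative trend discussed next: zeroing out a region where the phase amplitude is larger produces a bigger error.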
From this, it can be observed that there is a clear relationship between the phase amplitude and artifact positioning. Specifically, when the artifact is placed in a region with a steeper wrapping gradient, both RMSE and SSIM values deteriorate. This effect is evident for the P2P unwrapping method. To better visualize the regions where performance is most affected, a performance index heatmap was generated, as shown in Figure 7. The corresponding wrapped images and predictions for the AI, Herraez, and Costantini methods are shown in Figure A3, Figure A4 and Figure A5, respectively.
As seen in Figure 7, the minimum Costantini PI is lower than that of Herraez, indicating poor performance. Notably, there is no correlation between the wrapping gradient and the unwrapping performance for the Costantini analytical method.
In contrast, for the Herraez and P2P methods, there is a clear correlation between the phase information and unwrapping performance. However, the PI for Herraez is <0, indicating poor unwrapping performance. This demonstrates the AI model’s ability to learn underlying relationships present in the training data. Furthermore, its performance in these regions remains considerably better than that of Herraez, while also retrieving clean phase images devoid of noise characteristics, whereas Herraez and Costantini preserve the noise present in the predicted phase. Given the relationship between phase information and unwrapping performance, it is evident that larger artifacts would further degrade performance.

4. Discussion

This work investigates the feasibility and performance of a Pix2Pix-based conditional generative adversarial network for phase unwrapping in interferometric imaging under realistic and challenging noise conditions.
The study demonstrates that P2P can serve as a robust phase unwrapping framework under noise conditions representative of real interferometric measurements, including Rayleigh-distributed speckle noise and speckle phase decorrelation. In contrast to analytical approaches such as the Herraez and Costantini algorithms, the proposed AI-based method simultaneously unwraps and denoises the phase, producing outputs that are more suitable for downstream image reconstruction tasks.
A key distinction between the analytical and AI-based approaches lies in how noise is treated. In this work, noise was introduced prior to phase wrapping to reflect realistic holographic acquisition conditions. Consequently, analytical unwrappers recover phase continuity while preserving the noise present in the wrapped input, whereas the AI model learns to jointly unwrap and suppress noise. To isolate pure unwrapping performance, analytical methods were evaluated against the noisy phase image rather than the clean ground truth. Utility performance was then assessed by evaluating all predictions against the clean ground-truth phase.
This distinction is explicitly illustrated by the two subplots in Figure 3. Figure 3b presents a utility-based comparison, in which all methods are evaluated against the clean phase, reflecting the practical usefulness of the reconstructed output for downstream applications. Under this comparison, analytical methods perform poorly due to their inability to remove noise, whereas the P2P model maintains high reconstruction fidelity by jointly denoising and unwrapping the phase. Figure 3a presents an unwrapping-only comparison, in which analytical methods are evaluated against the noisy phase image to assess their intended function of phase continuity recovery without penalization for residual noise. This dual presentation ensures methodological fairness while highlighting the fundamentally different objectives of analytical and AI-based approaches.
This evaluation strategy is conservative with respect to the analytical methods. As illustrated in Figure A1 and Figure 3, comparing analytical unwrappers directly to the clean phase would lead to substantially larger RMSE values and lower SSIM scores (RMSE = 3.750 , SSIM = 0.001 ), as these methods are not designed to remove noise. By instead evaluating them against the noisy phase image, their reported performance reflects only their ability to correctly unwrap phase, rather than being penalized for residual noise. Even under this favorable comparison, the proposed AI model outperformed the analytical methods across higher noise levels.
The advantages of the AI-based approach become most apparent under low-SNR conditions, where analytical methods exhibit rapid performance degradation and increased variability. This behavior is reflected in the widening confidence intervals of the analytical performance indices at higher noise levels shown in Figure 3, indicating instability and heightened sensitivity to noise-induced wrapping errors. In contrast, the AI model maintains narrow confidence intervals, suggesting more consistent and reliable behavior. While post-processing denoising could be applied to successfully analytically unwrapped images (i.e., those without unwrapping artifacts, such as those shown in Figure A2) to improve visual quality, this approach introduces additional processing complexity and does not address the fundamental noise sensitivity of the unwrapping step itself.
While a full comparison with a state-of-the-art neural network–based phase unwrapping method is beyond the scope of this work, an initial comparison was performed between the proposed Pix2Pix model and the state-of-the-art SRDU-Net unwrapper. SRDU-Net achieved a mean SSIM and RMSE of 0.8642 and 0.2597 , respectively, across a range of noise levels. Although direct quantitative comparison is limited by differences in noise modeling and evaluation protocols, the Pix2Pix framework achieved higher SSIM and lower RMSE under substantially more challenging noise conditions, including Rayleigh and decorrelation noise not considered in prior work. Importantly, the proposed approach is fully open-source and relies on a relatively simple and well-established architecture, improving accessibility and reproducibility compared to more complex network designs.
Generalization beyond the training distribution was observed in multiple respects. The model maintained reasonable performance at noise levels exceeding those seen during training and demonstrated robustness to detector artifacts that were not explicitly included in the training data. Performance degradation beyond σ ≈ 15 highlights the dependence of AI-based methods on training data coverage and realism. Similarly, the observed sensitivity to the phase decorrelation coherence factor suggests that extending the training dataset to include a range of coherence factors, rather than training solely at μ = 0.5, would further improve model robustness.
The present study did not aim to exhaustively optimize model performance. Instead, the emphasis was placed on demonstrating the feasibility of the proposed reconstruction framework using a standard pix2pix configuration and established training practices. As such, key learning parameters, including learning rates, loss weighting terms, network depth, and adversarial training schedules, were not systematically tuned, nor was the effect of dataset size explored beyond achieving stable training behavior. Consequently, the reported performance should be interpreted as a conservative baseline rather than an upper bound. Further gains are likely achievable through targeted optimization, including hyperparameter tuning, adaptive learning rate schedules, alternative adversarial or perceptual loss formulations, and systematic exploration of dataset scaling.
The primary limitation of this and related AI-based approaches is the reliance on simulated training data. In real interferometric measurements, obtaining a true noiseless ground-truth phase is infeasible, necessitating realistic forward modeling of the imaging system. While the simulation framework used here captures key noise mechanisms relevant to holography, extension to experimental datasets will require careful validation and domain-specific retraining. The focus of this work has been on examining the potential of neural networks trained on complex noise characteristics present in holography, establishing a foundational framework upon which further holographic reconstruction research can be built.
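The noise-free limit of the unwrapping problem discussed throughout can be illustrated with a minimal round trip. Here `np.unwrap` stands in for the 1-D Itoh algorithm; the 2-D methods compared in this work (Herraez, Costantini, and the cGAN) are needed precisely when noise breaks this simple path-following assumption. The synthetic ramp is an illustrative assumption, not the paper's dataset.

```python
import numpy as np

# Noise-free sanity check of the wrap/unwrap round trip: a smooth
# synthetic phase ramp (range ~4*pi rad) is wrapped into (-pi, pi]
# and then recovered row-wise.
x = np.linspace(0, 4 * np.pi, 256)
true_phase = np.outer(np.ones(256), x)          # identical smooth ramp per row
wrapped = np.angle(np.exp(1j * true_phase))     # wrap into (-pi, pi]
recovered = np.unwrap(wrapped, axis=1)          # 1-D Itoh unwrap along rows
print(np.allclose(recovered, true_phase, atol=1e-6))  # True in the absence of noise
```

Once Rayleigh speckle or decorrelation noise pushes neighboring-pixel phase differences past π, this row-wise recovery fails, which is the regime the trained model targets.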

5. Conclusions

The proposed Pix2Pix-based AI model demonstrates robust performance and outperforms analytical phase-unwrapping methods under challenging interferometric conditions, including phase decorrelation and Rayleigh-distributed speckle noise, when evaluated on simulated data. In scenarios where noise itself induces phase wrapping, common in holographic and low-SNR measurements, the model consistently recovers clean, denoised phase maps and outperforms established analytical approaches such as Herraez and Costantini. The trained model effectively functions simultaneously as a denoiser and an unwrapper, integrating these processes into a single unified framework. Model performance was shown to generalize beyond the training noise range, though its accuracy remains dependent on the diversity and authenticity of the training dataset. Overall, the Pix2Pix framework provides a practical, open-source, and adaptable solution for phase unwrapping in noisy interferometric imaging. It demonstrates resilience to complex noise and artifact conditions that have not been systematically addressed in prior AI-based approaches, indicating potential for translation to real-world experimental applications. This work has established a foundational AI-based framework capable of robustly handling noise characteristics inherent to interferometric imaging modalities in a simulated environment; the next step is extension to real interferometric signals.

Author Contributions

A.S.: Conceptualization, methodology, software, validation, formal analysis, investigation, data curation, visualization, writing—original draft preparation. J.M.: Conceptualization, methodology, supervision, writing—review and editing. S.M.: Methodology, supervision, writing—review and editing. B.C.: Supervision, writing—review and editing, project administration. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

No new data were created or analyzed in this study.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
SNR: Signal-to-Noise Ratio
P2P: Pix2Pix
cGAN: Conditional Generative Adversarial Network
AI: Artificial Intelligence
CCD: Charge-Coupled Device
RMSE: Root Mean Square Error
SSIM: Structural Similarity Index Measure
PI: Performance Index

Appendix A

Figure A1. Comparison of phase unwrapping methods. (a) Ground-truth phase, (b) Noisy phase, (c) Wrapped phase, (d) AI prediction, (e) AI − Ground Truth, (f) AI − Noisy Phase, (g) Herraez prediction, (h) Herraez − Ground Truth, (i) Herraez − Noisy Phase, (j) Costantini prediction, (k) Costantini − Ground Truth, and (l) Costantini − Noisy Phase. Images are shown for μ = 0.5, σ = 4.5, and SNR = 0.0204; RMSE and SSIM values are labeled in the images.
Figure A2. Comparison of phase unwrapping methods. (a) Ground-truth phase, (b) Noisy phase, (c) Wrapped phase, (d) AI prediction, (e) AI − Ground Truth, (f) AI − Noisy Phase, (g) Herraez prediction, (h) Herraez − Ground Truth, (i) Herraez − Noisy Phase, (j) Costantini prediction, (k) Costantini − Ground Truth, and (l) Costantini − Noisy Phase. Images are shown for μ = 0.5, σ = 20, and SNR = 0.0012; RMSE and SSIM values are labeled in the images.
Figure A3. (a) Ground-truth phase image. (b) AI Prediction (worst case). (c) Wrapped phase image with a circular artifact (radius = 50 pixels) positioned at (300, 350). RMSE and SSIM are also labeled in the images.
Figure A4. (a) Noisy truth phase image. (b) Herraez Prediction (worst case). (c) Wrapped phase image with a circular artifact (radius = 50 pixels) positioned at (300, 250). RMSE and SSIM are also labeled in the images.
Figure A5. (a) Noisy truth phase image. (b) Costantini Prediction (worst case). (c) Wrapped phase image with a circular artifact (radius = 50 pixels) positioned at (100, 350). RMSE and SSIM are also labeled in the images.

References

  1. Cavan, A.; Meyer, J. Digital holographic interferometry: A novel optical calorimetry technique for radiation dosimetry. Med. Phys. 2014, 41, 022102.
  2. Hubley, L.; Roberts, J.; Meyer, J.; Moggré, A.; Marsh, S. Optical-radiation-calorimeter refinement by virtual-sensitivity analysis. Sensors 2019, 19, 1167.
  3. Ying, L.; Ji, J.; Munson, D., Jr.; Liang, Z.; Koetter, R.; Frey, B. A robust and efficient method to unwrap MR phase images. In Proceedings of the International Society of Magnetic Resonance in Medicine Scientific Meeting; ISMRM: Concord, CA, USA, 2003; Volume 11, p. 782.
  4. Yu, H.; Lan, Y.; Yuan, Z.; Xu, J.; Lee, H. Phase unwrapping in InSAR: A review. IEEE Geosci. Remote Sens. Mag. 2019, 7, 40–58.
  5. Bianco, V.; Memmolo, P.; Leo, M.; Montresor, S.; Distante, C.; Paturzo, M.; Picart, P.; Javidi, B.; Ferraro, P. Strategies for reducing speckle noise in digital holography. Light Sci. Appl. 2018, 7, 48.
  6. Huerta, D.A.; Crepp, J.R.; Abott, C.G.; Joseph, B. Comparison of phase-unwrapping methods for adaptive optics wavefront sensing. In Proceedings of the Unconventional Imaging, Sensing, and Adaptive Optics 2025; SPIE: Bellingham, WA, USA, 2025; Volume 13619, pp. 533–543.
  7. Herráez, M.A.; Burton, D.R.; Lalor, M.J.; Gdeisat, M.A. Fast two-dimensional phase-unwrapping algorithm based on sorting by reliability following a noncontinuous path. Appl. Opt. 2002, 41, 7437–7444.
  8. Roberts, J.; Moggre, A.; Meyer, J.; Marsh, S. Simulation-guided development of an optical calorimeter for high dose rate dosimetry. Phys. Eng. Sci. Med. 2024, 47, 143–151.
  9. Costantini, M. A novel phase unwrapping method based on network programming. IEEE Trans. Geosci. Remote Sens. 1998, 36, 813–821.
  10. Yan, K.; Yu, Y.; Sun, T.; Asundi, A.; Kemao, Q. Wrapped phase denoising using convolutional neural networks. Opt. Lasers Eng. 2020, 128, 105999.
  11. Chen, Y.; Wang, Q.; Zhang, G.; Li, P.; Fan, Y.; Wang, Z.; Dong, M. Phase unwrapping in digital holography based on SRDU-net. Opt. Commun. 2024, 573, 131055.
  12. Zhou, H.; Cheng, C.; Peng, H.; Liang, D.; Liu, X.; Zheng, H.; Zou, C. The PHU-NET: A robust phase unwrapping method for MRI based on deep learning. Magn. Reson. Med. 2021, 86, 3321–3333.
  13. Zhou, L.; Yu, H.; Pascazio, V.; Xing, M. PU-GAN: A one-step 2-D InSAR phase unwrapping based on conditional generative adversarial network. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5221510.
  14. Dardikman, G.; Turko, N.A.; Shaked, N.T. Deep learning approaches for unwrapping phase images with steep spatial gradients: A simulation. In Proceedings of the 2018 IEEE International Conference on the Science of Electrical Engineering in Israel (ICSEE); IEEE: Piscataway, NJ, USA, 2018; pp. 1–4.
  15. Montrésor, S.; Picart, P. On the assessment of de-noising algorithms in digital holographic interferometry and related approaches. Appl. Phys. B 2022, 128, 59.
  16. Garcia-Sucerquia, J.; Herrera Ramírez, J.; Velásquez Prieto, D. Improvement of the signal-to-noise ratio in digital holography. Rev. Mex. Física 2005, 51, 76–81.
  17. Zhang, L.; Huang, G.; Li, Y.; Yang, S.; Lu, L.; Huo, W. A robust InSAR phase unwrapping method via improving the pix2pix network. Remote Sens. 2023, 15, 4885.
  18. Chen, Z.; Zeng, Z.; Shen, H.; Zheng, X.; Dai, P.; Ouyang, P. DN-GAN: Denoising generative adversarial networks for speckle noise reduction in optical coherence tomography images. Biomed. Signal Process. Control 2020, 55, 101632.
  19. Raposo, A.; Azeitona, A.; Afonso, M.; Sanches, J.M. Ultrasound denoising using the pix2pix GAN. In Proceedings of the 27th Portuguese Conference on Pattern Recognition; Portuguese Association for Pattern Recognition: Porto, Portugal, 2021; pp. 91–92.
  20. Python Software Foundation. Python—A Dynamic, Open-Source Programming Language. Version 3.11.0. 2023. Available online: https://www.python.org/ (accessed on 1 April 2025).
  21. scikit-image Team. Phase Unwrapping: Scikit-Image 0.25.2 Documentation. 2025. Available online: https://scikit-image.org/docs/0.25.x/auto_examples/filters/plot_phase_unwrap.html (accessed on 28 October 2025).
  22. Luong, B. Costantini Phase Unwrapping (Version 1.2.0.0). 2009. Available online: https://www.mathworks.com/matlabcentral/fileexchange/25154-costantini-phase-unwrapping (accessed on 28 October 2025).
  23. Isola, P.; Zhu, J.Y.; Zhou, T.; Efros, A.A. Image-to-image translation with conditional adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; IEEE: Piscataway, NJ, USA, 2017; pp. 1125–1134.
  24. Wang, K.; Kemao, Q.; Di, J.; Zhao, J. Deep learning spatial phase unwrapping: A comparative review. Adv. Photonics Nexus 2022, 1, 014001.
  25. Dainty, J.C. Laser Speckle and Related Phenomena; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2013; Volume 9.
  26. Brunet, D.; Vrscay, E.R.; Wang, Z. On the mathematical properties of the structural similarity index. IEEE Trans. Image Process. 2011, 21, 1488–1499.
Figure 1. Second-order probability density function of phase noise ϵ for varying values of | μ | .
Figure 2. Flowchart of the proposed methodology, including data generation and performance analysis, with two comparison approaches: utility comparison and unwrapping comparison.
Figure 3. Performance indices across noise levels ranging from σ = 2.5 to σ = 20. The mean values are calculated over 100 images. The black dashed line indicates the noise level up to which the AI model was trained. Bars represent 95% confidence intervals (CIs) around the mean. Panel (a) represents the unwrapping comparison pathway shown in Figure 2 and (b) the utility comparison pathway.
Figure 4. Performance heatmap showing the performance index as a function of noise standard deviation and correlation factor.
Figure 5. (a) Ground-truth phase image. (b) Wrapped phase image with a circular artifact (radius = 50 pixels) positioned at (350, 350).
Figure 6. (a) RMSE heatmap as a function of artifact location. The artifact center is translated across the image grid in increments of 50 pixels along both the horizontal and vertical axes. (b) SSIM heatmap generated using the same 50-pixel stride for artifact center placement. Results correspond to the P2P reconstruction method.
Figure 7. (a) Herraez PI heatmap for different artifact centers translated by 50 pixels in both the x and y axes. (b) Costantini PI heatmap. (c) P2P PI heatmap (cf. Figure 6a,b).
Table 1. Randomized coefficient ranges used in Equation (3) for signal Z ∈ [−π, π].
c0: [0.2, 1.0]; c1: [0, 1]; c2: 2; c3: 2; c4: [−1, 1]; c5: 2; c6: [0.1, 0.8]; c7: 2
c8: 0 or 2; c9: 0 or 2; c10: 2; c11: 2; c12: [0.05, 0.5]; c13: 1; c14: 2; c15: 2
Table 2. Average SNR calculated from phase image generation under varying speckle noise standard deviations.
σ = 2.5: SNR = 0.1100
σ = 7.5: SNR = 0.0086
σ = 10: SNR = 0.0044
σ = 15: SNR = 0.0020
σ = 20: SNR = 0.0010
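The trend in Table 2, SNR collapsing as the Rayleigh scale parameter σ grows, can be sketched as follows. The SNR definition used here (mean signal power over mean zero-mean noise power) and the synthetic phase are illustrative assumptions; the paper's exact SNR computation is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)
# Illustrative clean phase signal spanning roughly [-pi, pi].
clean = np.sin(np.linspace(0, 6 * np.pi, 512 * 512)).reshape(512, 512) * np.pi

def power_snr(clean, sigma, rng):
    """Simple power-ratio SNR against Rayleigh-distributed speckle noise
    of scale parameter sigma (an assumed definition, for illustration)."""
    noise = rng.rayleigh(scale=sigma, size=clean.shape)
    noise -= noise.mean()  # keep only the zero-mean fluctuation
    return np.mean(clean**2) / np.mean(noise**2)

snrs = [power_snr(clean, s, rng) for s in (2.5, 7.5, 10, 15, 20)]
print(all(a > b for a, b in zip(snrs, snrs[1:])))  # True: SNR falls with sigma
```

Because the Rayleigh variance scales with σ², the SNR drops roughly two orders of magnitude across the tabulated range, matching the qualitative behavior in Table 2.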
Table 3. Performance comparison of phase unwrapping methods across varying noise levels. For analytical methods (Herraez and Costantini), RMSE and SSIM are computed relative to the noisy phase (NP) to assess pure unwrapping performance. For the AI (P2P) method, metrics are computed relative to the clean phase (CP), reflecting its joint unwrapping and denoising capability.

                σ = 4.5, SNR = 0.0204              σ = 20, SNR = 0.0012
                Prediction-CP   Prediction-NP      Prediction-CP   Prediction-NP
Method          RMSE    SSIM    RMSE    SSIM       RMSE    SSIM    RMSE    SSIM
AI (P2P)        0.090   0.950   3.750   0.001      0.153   0.873   23.550  3.17 × 10^−5
Herraez         3.760   0.244   0.092   0.994      24.271  0.115   8.110   0.624
Costantini      3.667   0.248   0.134   0.994      27.71   0.066   6.076   0.786
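The two metrics reported in Table 3 can be sketched as follows. The single-window (global) SSIM below follows the standard SSIM formula for brevity; the study presumably uses a windowed implementation such as `skimage.metrics.structural_similarity`, and the constants `c1`, `c2` and the `data_range` choice are illustrative assumptions.

```python
import numpy as np

def rmse(a, b):
    """Root mean square error between two phase maps."""
    return float(np.sqrt(np.mean((a - b) ** 2)))

def global_ssim(a, b, data_range=2 * np.pi):
    """Global (single-window) SSIM with the conventional stabilizers
    c1 = (0.01 L)^2 and c2 = (0.03 L)^2 for data range L."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    va, vb = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return float(((2 * mu_a * mu_b + c1) * (2 * cov + c2))
                 / ((mu_a**2 + mu_b**2 + c1) * (va + vb + c2)))

rng = np.random.default_rng(2)
phase = rng.uniform(-np.pi, np.pi, (64, 64))
print(rmse(phase, phase), global_ssim(phase, phase))  # 0.0 and ~1.0
```

As the table's "perfect values" note implies, an ideal reconstruction gives RMSE = 0 and SSIM = 1; swapping the reference array between the clean phase (CP) and noisy phase (NP) reproduces the two comparison pathways.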
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Soal, A.; Meyer, J.; Currie, B.; Marsh, S. AI-Based 2D Phase Unwrapping Under Rayleigh-Distributed Speckle Noise and Phase Decorrelation. Photonics 2026, 13, 208. https://doi.org/10.3390/photonics13020208


