Journal of Imaging
  • Article
  • Open Access

24 January 2026

A Robust Skeletonization Method for High-Density Fringe Patterns in Holographic Interferometry Based on Parametric Modeling and Strip Integration

Ishlinsky Institute for Problems in Mechanics RAS, 119526 Moscow, Russia
* Author to whom correspondence should be addressed.

Abstract

Accurate displacement field measurement by holographic interferometry requires robust analysis of high-density fringe patterns, which is hindered by speckle noise inherent in any interferogram, no matter how perfect. Conventional skeletonization methods, such as edge detection algorithms and active contour models, often fail under these conditions, producing fragmented and unreliable fringe contours. This paper presents a novel skeletonization procedure that simultaneously addresses three fundamental challenges: (1) topology preservation—by representing the fringe family within a physics-informed, finite-dimensional parametric subspace (e.g., Fourier-based contours), ensuring global smoothness, connectivity, and correct nesting of each fringe; (2) extreme noise robustness—through a robust strip integration functional that replaces noisy point sampling with Gaussian-weighted intensity averaging across a narrow strip, effectively suppressing speckle while yielding a smooth objective function suitable for gradient-based optimization; and (3) sub-pixel accuracy without phase extraction—leveraging continuous bicubic interpolation within a recursive quasi-optimization framework that exploits fringe similarity for precise and stable contour localization. The method’s performance is quantitatively validated on synthetic interferograms with controlled noise, demonstrating significantly lower error compared to baseline techniques. Practical utility is confirmed by successful processing of a real interferogram of a bent plate containing over 100 fringes, enabling precise displacement field reconstruction that closely matches independent theoretical modeling. The proposed procedure provides a reliable tool for processing challenging interferograms where traditional methods fail to deliver satisfactory results.

1. Introduction

Holographic interferometry is a cornerstone optical technique for full-field, non-contact measurement of displacements and deformations in experimental mechanics [1]. Its high sensitivity, enabling resolution of displacements on the order of the light wavelength, makes it invaluable for studying complex deformation phenomena. The accuracy of the method is intrinsically connected with the precision with which the resulting interference fringe patterns can be analyzed [2]. In principle, higher measurement resolution is achieved by analyzing interferograms with a high density of fringes. However, in modern practice, the outcome becomes critically dependent on the chosen digital image processing algorithm, as the manual detection of hundreds of fringes is unreliable [3].
The primary obstacle to automation is the presence of speckle noise—an unavoidable granular interference pattern generated by the random scattering of coherent light from surface micro-roughness, which is intrinsic to any physical hologram [4]. While post-processing filters (e.g., median or Gaussian) can reduce speckle visibility in a digital image, they require a delicate approach. Intensive smoothing can introduce systematic biases that shift the fringe locations, a particularly detrimental effect for high-density patterns where fine details are essential [5]. Consequently, the development of robust, noise-resistant skeletonization algorithms that can dispense with significant pre-filtering remains a pressing problem in optical metrology.
A variety of general-purpose image-processing techniques have been applied to fringe identification, each with intrinsic limitations for the high-density, high-noise regime we address:
  • Phase-based techniques (phase-shifting/unwrapping) (e.g., [6,7,8]) achieve high precision but operate in a different paradigm: they typically analyze projected fringe patterns for 3D shape measurement and require multiple phase-stepped frames, making them unsuitable for single-shot analysis of intrinsic holographic interference patterns encoding nanoscale displacements.
  • Edge and ridge detection algorithms (e.g., Marr-Hildreth [9], Canny [10], Shen–Castan [11]) can localize fringes under moderate noise. However, their output is inherently a fragmented map of pixels or short edge segments. For dense patterns corrupted by strong speckle noise, this map becomes a scattered point cloud. The subsequent critical step—reassembling these fragments into topologically correct, smooth, and closed analytical curves—is a complex and often unreliable post-processing challenge.
  • Active contour models (snakes) and their variants (e.g., Gradient Vector Flow snakes [12,13]) formulate fringe extraction as an energy minimization problem, offering a continuous curve representation [14,15,16,17,18]. Despite their flexibility, they exhibit two key limitations for our target conditions: (1) high sensitivity to initial position, often causing convergence to an adjacent fringe or leakage in high-density patterns; and (2) an inherent tension/rigidity regularization that acts as a low-pass filter, potentially oversmoothing the contour and introducing systematic localization bias in the presence of high-frequency speckle noise.
  • Learning-based approaches (e.g., [19,20,21]) promise robustness but require extensive, high-quality labeled datasets, which are scarce for specialized tasks like analyzing specific deformation patterns under extreme noise conditions typical in experimental mechanics.
To overcome these limitations, we propose a novel skeletonization procedure that strategically integrates physical prior knowledge with a noise-robust numerical core. Our contribution is threefold:
1. Physics-informed parametric modeling: We constrain the search for fringes to a purpose-built, finite-dimensional functional subspace A_θ, which is defined by specific parametric curves (e.g., trigonometric polynomials or splines). This subspace is constructed in such a way as to endow its elements with the required properties, such as smoothness, closure, and specific shape, guaranteeing that the identified fringes are physically plausible, connected curves.
2. A robust strip integration functional with local smoothing: For fringe localization we consider a specific functional in the form of a Gaussian-weighted integral of intensity over a narrow strip surrounding the candidate curve. This formulation allows the use of efficient gradient-based optimization techniques. To compute this functional, a continuously differentiable intensity field Ĩ(x, y) is first obtained by local bicubic interpolation, described in Section 3.1. The resulting functional J_s(θ) is smooth with respect to the curve parameters θ, ensuring stable convergence.
3. A recursive quasi-optimization algorithm: Exploiting the geometric similarity of adjacent fringes, the identification process proceeds recursively outward (or inward). The optimal parameters for an identified fringe seed the initial guess for the next, via a simple scaling transformation (quasi-optimization), followed by a full local refinement of all parameters. This strategy dramatically improves computational efficiency and reliability.
The performance of the proposed method is carefully validated. First, using synthetic interferograms with precisely controlled geometry and additive-multiplicative speckle noise, we demonstrate its acceptable accuracy and robustness compared to baseline methods. The error is estimated using two metrics: the Euclidean norm (L_2-norm) of the difference and its maximal absolute value (L_∞-norm). Second, we apply the algorithm to a real, challenging interferogram from a bending experiment on a square plate, containing over 100 fringes. The algorithm successfully extracts the complete skeleton, enabling an accurate reconstruction of the displacement field w(x, y). This result is shown to be in good agreement with an independent analytical solution, obtained in [22], confirming the practical utility and accuracy of the entire procedure.
The remainder of this paper is organized as follows: Section 2 details the proposed skeletonization procedure. Section 3 describes key implementation aspects. Section 4 presents the quantitative validation on synthetic data. Section 5 showcases the application to real high-density interferograms. Finally, Section 6 discusses the results and outlines future work.

2. The Proposed Skeletonization Method

2.1. Physical Origins of Imperfections in Interferograms

To motivate the design choices of the proposed algorithm, it is essential to understand the physical nature of the imperfections present in a raw interferogram. These imperfections, which corrupt the ideal sinusoidal fringe pattern, arise from the optical setup imperfections and the nature of coherent light scattering. They can be categorized by their spatial scale, necessitating different corrective strategies in the digital processing procedure.
In a typical off-axis holographic setup (e.g., the Leith–Upatnieks scheme [2]), the recorded intensity results from the interference of a reference plane wave E r ( r ) and an object wave E o ( r ) reflected from the specimen surface. In complex notation, these waves are as follows:
\mathbf{E}_r(\mathbf{r}) = A_r\,\mathbf{e}_r\,e^{i\,\mathbf{k}_r\cdot\mathbf{r}}, \qquad \mathbf{E}_o(\mathbf{r}) = A_o(x, y)\,\mathbf{J}(x, y)\,\mathbf{e}_o\,e^{i\,\mathbf{k}_o\cdot\mathbf{r}},
where A r and A o ( x , y ) are the scalar amplitudes, e r and e o are the unit polarization vectors of the reference and initial object waves (directly from the laser source), k r and k o are the corresponding wave vectors, and  J ( x , y ) is the spatially varying Jones matrix accounting for polarization changes upon reflection from the deformed surface.
The interference of the wavefronts from the specimen’s reference and deformed states yields an intensity pattern of the following form [4,23]:
I_{\mathrm{obs}}(x, y) \propto I_0(x, y)\bigl[ 1 + V(x, y)\,R(x, y)\cos\Delta\phi(x, y) \bigr] + \mathrm{noise}. \qquad (1)
Here, Δφ(x, y) = (4π/λ) w(x, y) is the phase difference, directly proportional to the out-of-plane displacement w(x, y) of the deformed object; V(x, y) ∝ |(J_1 e_o) · (J_2 e_o)*| stands for the polarization-induced visibility term; and R(x, y) is the contrast variation induced by coherent noise. A transition between consecutive dark (or bright) fringes corresponds to a displacement increment of λ/2.
The terms I 0 ( x , y ) , V ( x , y ) and R ( x , y ) describe large- to medium-scale corruptions:
  • Non-uniform background intensity I_0(x, y): This is caused by uneven illumination, varying surface reflectivity, and polarization effects (J_1 ≠ J_2). This leads to slow intensity variations across the image.
  • Visibility term and contrast variation V(x, y), R(x, y): The coherence factor R(x, y) (dependent on laser temporal/spectral properties and surface roughness) modulates the overall fringe contrast. The visibility term V(x, y), induced by polarization and speckle decorrelation, can further reduce contrast, even to zero.
  • Geometric distortion: The non-coaxiality of reference and object beams in off-axis schemes, combined with lens imperfections, introduces a projective distortion between the object plane and the image sensor.
These low-frequency imperfections are corrected in the pre-processing stage (Section 3) via geometric unwarping and local intensity equalization.
The speckle-dependent terms in (1) present the most significant challenge. Speckle is a fine-grained, high-frequency random pattern resulting from two physically unavoidable phenomena: (i) the finite spectral bandwidth of the laser source, and (ii) the random interference of waves scattered by surface micro-roughness. Unlike additive Gaussian noise, speckle is signal-dependent and exhibits a characteristic correlation length (ρ_speckle). Its spatial frequency content often overlaps with that of the high-density fringes themselves, making linear filtering unsuitable, as it erodes fringe edges and introduces systematic localization errors.
This multi-scale analysis dictates our algorithmic strategy: large-scale distortions are removed by explicit pre-processing, while the fine-scale speckle noise is addressed not by filtering but by the main idea of the proposed method—the parametric modeling (Section 2.2) and the strip integration functional (Section 2.4)—which provides robustness without distorting the signal.

2.2. Parametric Modeling and Problem Formulation

The goal of skeletonization is to extract a family of planar curves Γ = { γ i } i = 1 N that represent the level sets (fringes) of the phase map Δ ϕ ( x , y ) from a discretely sampled intensity image I. A fundamental premise of our approach is to treat fringes as continuous geometric entities (curves) rather than collections of disconnected pixels. Consequently, the search for an optimal curve γ must be performed within a suitable functional space [24].
Let Ω R 2 denote the rectangular image domain. We consider the space of admissible curves V to be a set of rectifiable, closed Jordan curves contained in Ω . For any such curve γ V , we define a functional as the normalized integral of the image intensity over its support:
J(\gamma) = \frac{1}{\mu(\gamma)} \int_{\gamma} I \, d\mu,
where μ is an appropriate measure on γ . In the simplest case, where the support is the one-dimensional curve itself, μ is the arc-length measure ( d μ = d s ), and the normalization factor μ ( γ ) is the total curve length L ( γ ) . This yields the line-integral functional:
J_l(\gamma) = \frac{1}{L(\gamma)} \int_{\gamma} I \, ds.
The choice of measure and its domain will be generalized in Section 2.4 to define more robust functionals. The core optimization problem is to find curves within a constrained set that extremize this functional. For an ideal, normalized interferogram where the low-frequency imperfections from (1) are absent (I_0 ≈ const., V ≈ 1, R ≈ 1) and speckle noise is removed, the intensity is proportional to 1 + cos(Δφ(x, y)). The bright fringes (intensity maxima) correspond to curves that maximize J_l, while dark fringes (minima) correspond to its minimizers. In a real, noisy image, the functional J_l must be extremized despite the corruptions described in Section 2.1.
However, operating directly in the infinite-dimensional space V is intractable and ignores the crucial physical prior: the expected fringe family possesses a specific global structure. Instead of regularizing the problem within V (as active contours do), we constrain the solution a priori to a finite-dimensional subspace A_θ ⊂ V. This subspace is defined by a parametric model:
\mathcal{A}_{\theta} = \{\, \gamma(\theta) \mid \theta \in \Theta \subset \mathbb{R}^m \,\},
where θ = (θ_1, …, θ_m) is a vector of parameters, and the mapping θ ↦ γ(θ) is continuously differentiable. The choice of the model A_θ takes into account our a priori knowledge about the deformation. For instance, for the bending of a centrally loaded, clamped plate, the fringes are expected to be nested, closed, and quasi-concentric curves, which can be effectively modeled by a family of perturbed circles or ellipses (see Section 3.4).
By restricting the search to A θ , the infinite-dimensional variational problem of extremizing J l over V reduces to a finite-dimensional optimization problem over the parameter space Θ :
\theta^{*} = \arg\operatorname*{extr}_{\theta \in \Theta} \, J_l(\gamma(\theta)).
This is the core conceptual shift from other methods: the smoothness, connectivity, and global shape of the solution are built into the search space A θ , rather than being enforced via penalty terms during optimization.
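To make this reduction concrete, the following sketch (an illustration, not the authors' implementation) fits a single circular fringe of a noise-free synthetic pattern by extremizing a point-sampled intensity functional over one coarse parameter, the radius. The toy ring pattern, the sample count, and the golden-section search are all illustrative assumptions.

```python
import numpy as np

def intensity(x, y, period=10.0):
    # Toy noise-free fringe pattern: bright circular rings at radii k * period
    return 1.0 + np.cos(2.0 * np.pi * np.hypot(x, y) / period)

def J_line(theta, n=360):
    # Point-sampled line functional over a parametric circle gamma(theta)
    px, py, r = theta
    phi = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    return intensity(px + r * np.cos(phi), py + r * np.sin(phi)).mean()

def refine_radius(px, py, r0, half_width=3.0, iters=60):
    # Golden-section maximization of J over the radius only; the full method
    # would optimize all components of theta within the subspace A_theta
    g = (np.sqrt(5.0) - 1.0) / 2.0
    a, b = r0 - half_width, r0 + half_width
    for _ in range(iters):
        c, d = b - g * (b - a), a + g * (b - a)
        if J_line((px, py, c)) < J_line((px, py, d)):
            a = c
        else:
            b = d
    return 0.5 * (a + b)

r_star = refine_radius(0.0, 0.0, 8.5)  # initial guess near the bright ring at r = 10
```

Because the curve family is smooth in its parameters and the objective averages many samples, a standard one-dimensional search suffices here; no per-pixel curve assembly is needed.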

2.3. Discretization and Coordinate System

The image data I are provided as a discrete matrix I[r, c] of size r_max × c_max, where r and c are integer row and column indices. To connect the continuous formulation with the discrete data, we establish a fixed coordinate system. We assume a unit pixel pitch and place the origin of the Cartesian coordinates (x, y) at the center of a chosen reference pixel with indices (r_0, c_0). The correspondence between a point (x, y) and the pixel P[[r, c]] containing it is then given as follows:
(x, y) \in P[[r, c]] \iff r = r_0 - \lfloor y + 1/2 \rfloor, \quad c = c_0 + \lfloor x + 1/2 \rfloor. \qquad (3)
This mapping is illustrated in Figure 1.
Figure 1. Relationship between pixel matrix indices (r, c) and the continuous Cartesian coordinate system (x, y). The grid represents the pixel centers. The origin (x, y) = (0, 0) is fixed at the center of the reference pixel with indices (r_0, c_0) = (3, 3). The coordinates of point E (the center of pixel (2, 5)) are (x_E, y_E) = (2, 1), following Equation (3). The vertices A, B, C, D denote the corners of a pixel, located at ±0.5 offsets from its center.
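The mapping of Equation (3) can be sketched as two small helpers; the reference indices (r_0, c_0) = (3, 3) follow Figure 1, and the inverse mapping below is restricted to pixel centers:

```python
import math

def pixel_of_point(x, y, r0=3, c0=3):
    # Equation (3): indices of the pixel P[[r, c]] containing the point (x, y);
    # math.floor rounds toward negative infinity, as the formula requires
    return r0 - math.floor(y + 0.5), c0 + math.floor(x + 0.5)

def center_of_pixel(r, c, r0=3, c0=3):
    # Inverse mapping for pixel centers (unit pixel pitch): x grows with the
    # column index, y grows opposite to the row index
    return float(c - c0), float(r0 - r)
```

Note that rows increase downward while y increases upward, hence the sign flip between the two index formulas.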

2.4. Variants of the Functional

The core of the skeletonization method is the optimization of a functional that measures how well a candidate curve aligns with a fringe. We trace its development from a naive discrete form to the final robust, continuous formulation.

2.4.1. Point-Sampling Along the Curve

The most direct approach is to rasterize the curve γ ( θ ) and sum the intensities of its pixels:
J_{\mathrm{point}}^{\,l}(\theta) = \frac{1}{N(\theta)} \sum_{[r, c] \in \mathcal{R}(\gamma(\theta))} I[r, c],
where R is the set of pixels that rasterize the curve, and  N ( θ ) = | R | is their number (an integer measure of curve length). This functional is discontinuous with respect to θ , which means that a small parameter change can alter the set R , causing a jump in the sum. This discontinuity precludes the use of gradient-based optimization.

2.4.2. Line-Integration with Interpolation

To obtain a smooth functional, we first construct a C 1 -continuous intensity field I ˜ ( x , y ) via bicubic interpolation (detailed in Section 3.1). Replacing the discrete sum with a normalized line integral yields the following:
J_l(\theta) = \frac{1}{L(\theta)} \int_{\gamma(\theta)} \tilde{I}(x, y) \, ds, \qquad (4)
where L ( θ ) is the curve length. J l ( θ ) is now continuously differentiable, enabling gradient-based optimization. However, it remains sensitive to speckle noise, as it samples intensity only along an infinitesimally thin path.

2.4.3. Point-Sampling over a Strip

To improve robustness, we consider a strip S ( γ , δ ) of width 2 δ around the curve. A discrete, robust functional can be defined by summing pixel intensities within this strip, weighted by their approximate Gaussian distance to the curve:
J_{\mathrm{point}}^{\,s}(\theta) = \frac{1}{N(\theta)} \sum_{[r, c] \in \mathcal{R}_S} G\!\left( \frac{d_{rc}}{\sigma} \right) I[r, c],
where R_S is the set of pixels whose centers lie within the strip S, d_rc is the minimal distance from the pixel center to the curve, and G(u) = exp(−u²/2). This choice of weight function is motivated by the statistical nature of speckle noise. The functional J_point^s averages over many pixels, reducing the impact of individual speckles. However, it shares the discontinuity flaw of J_point^l due to the discrete pixel set R_S.

2.4.4. Strip-Integration with Interpolation

The final, optimal form combines the robustness of strip averaging with the smoothness of continuous integration. We define the strip integration functional as the normalized, weighted area integral over the continuous strip:
J_s(\theta) = \frac{ \iint_{S(\gamma(\theta), \delta)} G\!\left( d(\mathbf{x}, \gamma)/\sigma \right) \tilde{I}(x, y) \, dA }{ \iint_{S(\gamma(\theta), \delta)} dA }, \qquad (5)
where d(x, γ) is the minimal Euclidean distance between x and the curve. The functional J_s is both robust to noise (due to area averaging) and continuously differentiable in θ (due to the smoothness of Ĩ, G, and the parametric curve). It is the functional used in our final algorithm.
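As a cross-check of this definition (independent of the dedicated quadrature of Section 3.2), the strip functional can be approximated on a dense point grid. The test curve, grid extent, and vertex-based distance computation below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def strip_functional(I_func, curve_pts, delta=3.0, sigma=1.5, extent=15.0, step=1.0):
    # Grid approximation of J_s: Gaussian-weighted integral of intensity over
    # the strip of half-width delta, divided by the plain strip area
    xs = np.arange(-extent, extent, step)
    X, Y = np.meshgrid(xs, xs)
    pts = np.stack([X.ravel(), Y.ravel()], axis=1)
    # distance from each grid node to the curve, via its dense point sampling
    d = np.min(np.linalg.norm(pts[:, None, :] - curve_pts[None, :, :], axis=2), axis=1)
    inside = d < delta
    w = np.exp(-(d[inside] / sigma) ** 2 / 2.0)   # weight G(d / sigma)
    vals = I_func(pts[inside, 0], pts[inside, 1])
    return (w * vals).sum() / inside.sum()        # weighted integral / area

phi = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
circle = np.stack([10.0 * np.cos(phi), 10.0 * np.sin(phi)], axis=1)
```

For very large σ the weight tends to 1 and J_s of a constant image recovers that constant, which makes a convenient sanity check of the normalization.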

3. Numerical Implementation

The evaluation of J l and J s requires efficient numerical techniques for interpolation and integration.

3.1. Bicubic Interpolation Scheme

To obtain the C 1 intensity field I ˜ ( x , y ) from the pixel matrix I [ r , c ] , we employ a compact bicubic formula. For a point ( x , y ) in a grid cell with corners at pixel centers, the interpolation is as follows:
\tilde{I}(x, y) = \begin{pmatrix} 1 & \Delta(x) & \Delta(x)^2 & \Delta(x)^3 \end{pmatrix} M^{\mathsf{T}} D(i(y), j(x)) \, M \begin{pmatrix} 1 \\ \Delta(y) \\ \Delta(y)^2 \\ \Delta(y)^3 \end{pmatrix},
where \Delta(x) = x + 1/2 - \lfloor x + 1/2 \rfloor and \Delta(y) = y + 1/2 - \lfloor y + 1/2 \rfloor are local coordinates within a cell of the interpolation grid. The integer-valued indices i(y) and j(x), which locate the pixel, are functions of the coordinates:
i(y) = r_0 - \lfloor y + 1/2 \rfloor, \qquad j(x) = c_0 + \lfloor x + 1/2 \rfloor. \qquad (6)
The constant matrix M and the matrix D ( i , j ) (now with i , j understood as the outputs of (6)) are as follows:
M = \begin{pmatrix} 1 & 0 & -3 & 2 \\ 0 & 0 & 3 & -2 \\ 0 & 1 & -2 & 1 \\ 0 & 0 & -1 & 1 \end{pmatrix}, \qquad D(i, j) = \begin{pmatrix} f_{ij}^{00} & f_{ij}^{01} & h_{ij}^{00} & h_{ij}^{01} \\ f_{ij}^{10} & f_{ij}^{11} & h_{ij}^{10} & h_{ij}^{11} \\ g_{ij}^{00} & g_{ij}^{01} & q_{ij}^{00} & q_{ij}^{01} \\ g_{ij}^{10} & g_{ij}^{11} & q_{ij}^{10} & q_{ij}^{11} \end{pmatrix}.
The sixteen elements of D are finite-difference approximations of the intensity and its derivatives at the four cell corners, computed from the 4 × 4 neighborhood of the pixel  ( i , j ) :
f_{ij}^{pk} = I[i-k, j+p], \qquad g_{ij}^{pk} = \tfrac{1}{2}\bigl( I[i-k, j+p+1] - I[i-k, j+p-1] \bigr),
h_{ij}^{pk} = \tfrac{1}{2}\bigl( I[i-k-1, j+p] - I[i-k+1, j+p] \bigr),
q_{ij}^{pk} = \tfrac{1}{4}\bigl( I[i-k-1, j+p+1] - I[i-k+1, j+p+1] - I[i-k-1, j+p-1] + I[i-k+1, j+p-1] \bigr),
for p , k { 0 , 1 } . This scheme guarantees that I ˜ ( x , y ) is a C 1 function across cell boundaries, providing the smoothness required for stable gradient computation. An example of interpolation is shown in Figure 2.
Figure 2. Bicubic interpolation.
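A minimal sketch of such a C^1 bicubic interpolant is given below. For simplicity the cell is anchored so that the local coordinates vanish at a pixel center, which may differ by a half-pixel offset from the convention of Equation (6); the matrix M and the central-difference stencils follow Section 3.1:

```python
import numpy as np

# Hermite basis matrix M (Section 3.1); [1, t, t^2, t^3] @ M.T yields the four
# cubic Hermite weights h00(t), h01(t), h10(t), h11(t)
M = np.array([[1, 0, -3, 2],
              [0, 0, 3, -2],
              [0, 1, -2, 1],
              [0, 0, -1, 1]], dtype=float)

def bicubic(I, x, y, r0, c0):
    # Cell anchored at a pixel center: local coordinates dx, dy in [0, 1)
    i = r0 - int(np.floor(y))   # row of the cell's lower corner
    j = c0 + int(np.floor(x))   # column of the cell's left corner
    dx = x - np.floor(x)
    dy = y - np.floor(y)

    f = lambda p, k: I[i - k, j + p]                                    # values
    g = lambda p, k: 0.5 * (I[i - k, j + p + 1] - I[i - k, j + p - 1])  # d/dx
    h = lambda p, k: 0.5 * (I[i - k - 1, j + p] - I[i - k + 1, j + p])  # d/dy
    q = lambda p, k: 0.25 * (I[i - k - 1, j + p + 1] - I[i - k + 1, j + p + 1]
                             - I[i - k - 1, j + p - 1] + I[i - k + 1, j + p - 1])
    D = np.array([[f(0, 0), f(0, 1), h(0, 0), h(0, 1)],
                  [f(1, 0), f(1, 1), h(1, 0), h(1, 1)],
                  [g(0, 0), g(0, 1), q(0, 0), q(0, 1)],
                  [g(1, 0), g(1, 1), q(1, 0), q(1, 1)]])
    X = np.array([1.0, dx, dx**2, dx**3])
    Y = np.array([1.0, dy, dy**2, dy**3])
    return X @ M.T @ D @ M @ Y
```

Because the corner derivatives are central differences, the interpolant reproduces linear intensity fields exactly and matches the pixel values at cell corners, which is a convenient correctness check.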

3.2. Numerical Quadrature

The line integral (4) is approximated by discretizing the curve into n segments. For a parametric curve ( x ( φ ) , y ( φ ) ) , φ [ 0 , 2 π ] , we use the trapezoidal rule:
J_l(\theta) \approx \frac{ \sum_{k=1}^{n} \tfrac{1}{2}\bigl( \tilde{I}(x_k, y_k) + \tilde{I}(x_{k-1}, y_{k-1}) \bigr) \sqrt{(x_k - x_{k-1})^2 + (y_k - y_{k-1})^2} }{ \sum_{k=1}^{n} \sqrt{(x_k - x_{k-1})^2 + (y_k - y_{k-1})^2} },
where (x_k, y_k) = (x(φ_k) + 1/2, y(φ_k) − 1/2). The discretization step Δφ is chosen adaptively, based on the local parametric speed of the curve, to ensure at least one sample per intersected pixel:
\Delta\varphi = (\dot{x}^2 + \dot{y}^2)^{-1/2}.
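The trapezoidal approximation above can be sketched for a callable intensity field as follows; uniform sampling in φ is used, and the adaptive step as well as the half-pixel shift of the sample coordinates are omitted for brevity:

```python
import numpy as np

def J_line_trapezoid(I_tilde, x_of, y_of, n=720):
    # Discretize the closed curve and apply the trapezoidal rule of Section 3.2:
    # length-weighted mean of the interpolated intensity along the polyline
    phi = np.linspace(0.0, 2.0 * np.pi, n + 1)    # phi_0 .. phi_n closes the curve
    xs, ys = x_of(phi), y_of(phi)
    seg = np.hypot(np.diff(xs), np.diff(ys))      # chord lengths of the segments
    vals = I_tilde(xs, ys)
    mid = 0.5 * (vals[1:] + vals[:-1])            # trapezoidal average per segment
    return (mid * seg).sum() / seg.sum()
```

On a field that is constant along the curve, the length weights cancel and the quadrature returns that constant, regardless of n.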
In the case of strip integration, the denominator of (5) can be computed more conveniently in analytical form:
\iint_{S(\gamma(\theta), \delta)} dA = \int_{\gamma} \int_{-\delta}^{\delta} J \, d\xi \, d\varphi,
using the expansion for the Jacobian determinant J of the coordinate transformation from (φ, ξ) (curve parameter and normal offset) to (x, y), with Δ = (ẋ² + ẏ²)^{1/2} and dots denoting derivatives with respect to φ:
J = \begin{vmatrix} \dot{x} - \xi \dfrac{\ddot{y}\Delta - \dot{y}\dot{\Delta}}{\Delta^2} & \dot{y} + \xi \dfrac{\ddot{x}\Delta - \dot{x}\dot{\Delta}}{\Delta^2} \\ -\dot{y}/\Delta & \dot{x}/\Delta \end{vmatrix} = \frac{1}{\Delta}\left( \Delta^2 - \xi \, \frac{\dot{x}\ddot{y} - \ddot{x}\dot{y}}{\Delta} \right).
Since the ξ-odd term of J integrates to zero over [−δ, δ], the strip area equals exactly 2δL(θ). Substituting this into (5) brings it to a form similar to (4):
J_s(\theta) = \frac{1}{2\delta L(\theta)} \iint_{S(\gamma(\theta), \delta)} G\!\left( d(\mathbf{x}, \gamma)/\sigma \right) \tilde{I}(x, y) \, dA.
This integral is approximated by summing contributions from triangular prisms formed by adjacent sample points:
J_s(\theta) \approx \frac{1}{2\delta L(\theta)} \sum_{k=1}^{n-1} \sum_{p=-P}^{P-1} \Bigl[ V(\mathbf{x}_k^{p}, \mathbf{x}_k^{p+1}, \mathbf{x}_{k+1}^{p}) + V(\mathbf{x}_k^{p+1}, \mathbf{x}_{k+1}^{p+1}, \mathbf{x}_{k+1}^{p}) \Bigr],
where x_k^p = x_k + (pδ/P) n_k are the coordinates of a point along the normal n_k at x_k, and V(A, B, C) is the volume of a triangular prism with base ABC:
V(A, B, C) = \frac{ G(d_A/\sigma)\,\tilde{I}(A) + G(d_B/\sigma)\,\tilde{I}(B) + G(d_C/\sigma)\,\tilde{I}(C) }{3} \cdot S_{ABC}.
The first factor here is the mean height of the approximating prism, while the second is the area of triangle ABC, computed from the coordinates of its vertices:
S_{ABC} = \frac{1}{2} \left| \det \begin{pmatrix} A_x & A_y & 1 \\ B_x & B_y & 1 \\ C_x & C_y & 1 \end{pmatrix} \right|.
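The two factors can be sketched directly; the helper names are hypothetical, with G and the shoelace determinant as defined above:

```python
import numpy as np

def triangle_area(A, B, C):
    # Shoelace formula: half the absolute value of the corner determinant
    return 0.5 * abs((B[0] - A[0]) * (C[1] - A[1])
                     - (C[0] - A[0]) * (B[1] - A[1]))

def prism_volume(A, B, C, dA, dB, dC, I_tilde, sigma):
    # V(A, B, C): mean Gaussian-weighted intensity at the three corners
    # (the prism height) times the base triangle area
    G = lambda u: np.exp(-u * u / 2.0)
    height = (G(dA / sigma) * I_tilde(*A)
              + G(dB / sigma) * I_tilde(*B)
              + G(dC / sigma) * I_tilde(*C)) / 3.0
    return height * triangle_area(A, B, C)
```

Summing these volumes over the two triangles per strip cell reproduces the double sum of the quadrature above.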

3.3. Discrete Differential Operators for Accurate Rasterization

The transition from continuous parametric curves to the discrete pixel grid is a critical step that, if carried out naively, can introduce systematic errors comparable to the inter-fringe spacing in high-density patterns. To achieve sub-pixel accuracy and topological correctness (4-connectivity), we introduce a unified approach based on integer-valued discrete differential operators. These operators analyze the local geometry of the curve directly in pixel-index space, enabling efficient and exact rasterization of both the curve and the surrounding strip.

3.3.1. First-Order Operator: Tangent Direction and Curve Rasterization

Given two consecutive pixel centers, P i 1 = ( r i 1 , c i 1 ) and P i = ( r i , c i ) , along a discretized curve, we define the first-order discrete directional operator:
D_1(P_{i-1}, P_i) = 4 + 3(\Delta r)^3 + (\Delta c)^3, \qquad \text{where} \quad \Delta r = r_i - r_{i-1}, \quad \Delta c = c_i - c_{i-1}.
This compact formula serves three purposes simultaneously:
  • Direction defining: It maps the 9 possible neighbor vectors (Δr, Δc) ∈ {−1, 0, 1}² to unique integers in the range [0, 8]. This mapping is shown in Figure 3.
    Figure 3. Schematic of the curve rasterization process using a first-order operator D 1 .
  • Neighborhood filter: The cubic terms 3(Δr)³ + (Δc)³ act as a filter: for (Δr, Δc) ∈ {−1, 0, 1}², the identity 3(Δr)³ + (Δc)³ = 3Δr + Δc holds, so D₁ stays within [0, 8]. If a parameter step is too large and causes a jump to a non-adjacent pixel (within 2 pixels of P_i), the cubic terms grow in magnitude and push D₁ outside [0, 8], flagging an invalid “skip”.
  • Adaptive step control: This flag triggers an adaptive adjustment of the parametric step Δs: Δs ← Δs/k_down if a skip is detected; Δs ← Δs · k_up if D₁ = 4 (meaning the parameter advanced but the pixel did not change). Crucially, we use asymmetric coefficients (k_up = 1.5, k_down = 1.4) to prevent resonant cycling near stationary points, ensuring robust convergence.
The operator D₁ is the core of the curve rasterization algorithm. Its scheme is shown in Appendix A (Algorithm A1). The algorithm marches along the continuous curve c(s), adaptively adjusting Δs to produce an ordered, 4-connected list of pixels R. A key subtlety is the handling of diagonal moves: when the curve passes through a pixel corner, there are two equally valid 4-connected paths. In such cases, the algorithm uses a special variable that memorizes the choice made at the previous step, ensuring a smooth, consistent rasterization without jagged artifacts.
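The operator itself is a one-liner; the sketch below (pixel tuples (r, c) are assumed) checks its two key properties, the unique neighbor encoding and the out-of-range flag for skipped pixels:

```python
def D1(p_prev, p_cur):
    # First-order operator: encodes the step between consecutive pixel centers
    dr = p_cur[0] - p_prev[0]
    dc = p_cur[1] - p_prev[1]
    return 4 + 3 * dr**3 + dc**3

def is_valid_step(p_prev, p_cur):
    # D1 lies in [0, 8] only for the pixel itself and its 8 neighbors;
    # any larger jump falls outside this range and flags a "skip"
    return 0 <= D1(p_prev, p_cur) <= 8
```

For unit offsets the cubes reduce to the values themselves, so D1 coincides with 4 + 3Δr + Δc; for any |Δr| or |Δc| equal to 2 the cubic growth pushes the code outside [0, 8].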

3.3.2. Second-Order Operator: Local Curvature and Strip Rasterization

Rasterizing a strip of width 2δ around a curve requires understanding the local curvature to avoid gaps or overlaps in the coverage. Given three consecutive pixel centers P_{i−1}, P_i, P_{i+1} of the already rasterized curve, we define a second-order operator that classifies the local shape. Let Δp₁ = P_i − P_{i−1} and Δp₂ = P_{i+1} − P_i. We define the signs of their components as integers: s_x = sgn(Δc₁) + 2 sgn(Δc₂) and s_y = sgn(Δr₁) + 2 sgn(Δr₂). The configuration is then defined as follows:
D_2 = 4 s_y + s_x \in \{-15, -14, \ldots, 14, 15\}.
After shifting and clamping to the valid range of physically realizable 4-connected triples, we obtain the final formula used in the implementation:
D_2(P_{i-1}, P_i, P_{i+1}) = \frac{15 - c_{i-1} - 3 c_i + 4 c_{i+1} - 3 r_{i-1} - 9 r_i + 12 r_{i+1}}{2} \in [0, 15]. \qquad (7)
The operator D₂ takes one of 16 integer values, each corresponding to a distinct local configuration of the three points (see Figure 4). Geometrically, it approximates the discrete curvature. For example:
Figure 4. The 16 discrete configurations classified by the second-order operator D 2 (Equation (7)). For each configuration, the three consecutive curve pixels P i 1 (green), P i (red), and P i + 1 (blue) are shown with arrows indicating the local direction. The shaded blue area illustrates the corresponding region of the strip that needs to be filled. Configurations are grouped: linear motion ( D 2 { 0 , 5 , 10 , 15 } ), smooth turns ( D 2 { 1 , 2 , 4 , 7 , 8 , 11 , 13 , 14 } ), and return points ( D 2 { 3 , 6 , 9 , 12 } ).
  • D₂ ∈ {0, 5, 10, 15} indicates nearly linear motion (up, left, right, down), warranting a simple rectangular strip fill.
  • D₂ ∈ {1, 2, 4, 7, 8, 11, 13, 14} indicates a smooth turn, requiring a circular sector fill to cover the curved corner of the strip without gaps.
  • D₂ ∈ {3, 6, 9, 12} corresponds to inflection or “return” points, necessitating a combined fill (e.g., a half-circle plus strips).
Based on the value of D 2 , the strip rasterization algorithm (Algorithm A2 in Appendix A) selects an optimal geometric primitive (line strip or circular arc) and fills the corresponding pixels within distance δ from the central pixel P i . A distance mask ensures that if a pixel is covered by multiple primitives, the smallest distance to the central curve is retained, and the corresponding Gaussian weight exp ( d 2 / ( 2 σ 2 ) ) is assigned. This approach guarantees a skip-free, accurate, and weight-aware rasterization of the continuous strip S ( γ ( θ ) , δ ) .
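A sketch of this classification is given below; the arithmetic form is a reconstruction consistent with Equation (7), and the three groups follow the Figure 4 caption:

```python
def D2(p0, p1, p2):
    # Second-order operator: classifies a 4-connected pixel triple into 16 cases
    (r0, c0), (r1, c1), (r2, c2) = p0, p1, p2
    # the numerator is always even for unit 4-connected steps, so // is exact
    return (15 - c0 - 3 * c1 + 4 * c2 - 3 * r0 - 9 * r1 + 12 * r2) // 2

LINEAR = {0, 5, 10, 15}          # simple rectangular strip fill
SMOOTH_TURNS = {1, 2, 4, 7, 8, 11, 13, 14}  # circular sector fill
RETURNS = {3, 6, 9, 12}          # combined fill (half-circle plus strips)
```

Rows grow downward, so "up" means decreasing r; with that convention the four straight motions land exactly on {0, 5, 10, 15}.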

3.4. Parametric Models for Fringe Contours

The choice of the parametric subspace A_θ is dictated by the expected physics of the deformation. For a broad class of problems involving axisymmetric or nearly axisymmetric bending of plates and membranes, the fringe contours are expected to be closed, nested, quasi-concentric curves [25]. We introduce two related parametric families that efficiently describe such shapes while offering sufficient flexibility to capture deviations from ideal geometry.

3.4.1. Perturbed Circle Model

The most natural model for quasi-circular fringes is a perturbed circle, where the constant radius of a circle is replaced by a periodic function of the polar angle φ . In Cartesian coordinates with an origin at ( p x , p y ) , this is expressed as follows:
x(\varphi) = p_x + R(\varphi) \cos\varphi, \qquad y(\varphi) = p_y + R(\varphi) \sin\varphi, \qquad (8)
where the radius function R ( φ ) is given by a trigonometric polynomial:
R(\varphi) = r \left( 1 + \sum_{k=1}^{n} \bigl[ a_k \cos((k+1)\varphi) + b_k \sin((k+1)\varphi) \bigr] \right).
Here, r > 0 is a base radius, ( p x , p y ) is the center, and  { a k , b k } k = 1 n are the perturbation coefficients. Setting all a k = b k = 0 recovers a perfect circle of radius r centered at ( p x , p y ) . The shift in harmonic indices (using ( k + 1 ) φ instead of k φ ) ensures that the lowest-order perturbation already affects the shape, leaving the coefficients a 0 and b 0 implicitly absorbed into r and the center coordinates.
An equivalent, sometimes computationally convenient, representation expands (8) directly into a Fourier series:
x(\varphi) = A_0 + \sum_{k=1}^{n+1} \bigl[ A_k \cos(k\varphi) + B_k \sin(k\varphi) \bigr], \qquad y(\varphi) = C_0 + \sum_{k=1}^{n+1} \bigl[ C_k \cos(k\varphi) + D_k \sin(k\varphi) \bigr],
where the coefficients { A k , B k , C k , D k } are linear combinations of { r , p x , p y , a k , b k } . It is simple to show that A 0 , C 0 , and the first harmonics are primarily governed by the “coarse” parameters r , p x , p y , while higher harmonics depend on the perturbation coefficients. This structure naturally suggests a two-stage optimization strategy:
1. Quasi-optimization: vary only the coarse parameters (r, p_x, p_y) to quickly locate the approximate position and scale of the fringe.
2. Full optimization: refine all parameters (r, p_x, p_y, {a_k, b_k}) to capture fine shape details.
The total number of parameters is m = 3 + 2 n . In our experiments with plate bending, n = 15 (giving 33 parameters) proved sufficient to accurately represent fringes even under significant nonlinear deformation.
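The perturbed-circle model (8) can be sketched as follows; the packing order of the parameters is an assumption made for illustration:

```python
import numpy as np

def perturbed_circle(phi, r, px, py, coeffs=()):
    # coeffs = (a1, b1, a2, b2, ...); perturbation harmonics start at (k + 1) = 2,
    # since the zeroth and first harmonics are absorbed by r and the center
    R = np.ones_like(phi)
    for k in range(len(coeffs) // 2):
        R += (coeffs[2 * k] * np.cos((k + 2) * phi)
              + coeffs[2 * k + 1] * np.sin((k + 2) * phi))
    R = r * R
    return px + R * np.cos(phi), py + R * np.sin(phi)
```

With all perturbation coefficients zero the curve degenerates to an exact circle of radius r about (p_x, p_y), and the total parameter count is 3 + 2n as stated above.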

3.4.2. Perturbed Ellipse Model

For specimens with pronounced anisotropy or non-axisymmetric boundary conditions, the fringe contours may exhibit a preferred elliptical orientation. The model can be generalized to a perturbed ellipse by introducing semi-axes a and b ( a b > 0 ) and an orientation angle ψ :
x(\varphi) = p_x + \rho(\varphi; a, b, \psi) \left( 1 + \sum_{k=1}^{n} \bigl[ a_k \cos((k+1)(\varphi - \psi)) + b_k \sin((k+1)(\varphi - \psi)) \bigr] \right) \cos\varphi,
y(\varphi) = p_y + \rho(\varphi; a, b, \psi) \left( 1 + \sum_{k=1}^{n} \bigl[ a_k \cos((k+1)(\varphi - \psi)) + b_k \sin((k+1)(\varphi - \psi)) \bigr] \right) \sin\varphi,
where
\rho(\varphi; a, b, \psi) = \frac{a b}{\sqrt{(b^2 - a^2) \cos^2(\varphi - \psi) + a^2}}.
When a = b = r , this relation reduces to ρ = r , and the model reverts to the perturbed circle (8). The perturbation is now applied relative to the elliptical base shape and is oriented with it (via the phase shift φ ψ ). The coarse parameter set for this model is ( a , b , ψ , p x , p y ) , totaling 5 + 2 n parameters.
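The elliptical base radius ρ is easy to verify numerically; the small sketch below checks the semi-axis values and the circular limit:

```python
import numpy as np

def rho(phi, a, b, psi):
    # Polar radius of an ellipse with semi-axes a >= b, rotated by psi:
    # rho = a*b / sqrt((b^2 - a^2) cos^2(phi - psi) + a^2)
    return a * b / np.sqrt((b**2 - a**2) * np.cos(phi - psi)**2 + a**2)
```

Along the major axis (φ = ψ) the radius equals a, a quarter turn later it equals b, and for a = b = r it reduces to the constant r, recovering the perturbed circle.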

3.4.3. Parameter Constraints and Initialization

To ensure physical plausibility and improve optimization convergence, constraints are applied: r , a , b > 0 ; the perturbation coefficients are bounded ( | a_k | , | b_k | < 0.5 ) to prevent self-intersection; and for the ellipse, a ≥ b . The initial guess for the first (innermost) fringe is obtained via a coarse search: the image center is estimated from intensity function cross-sections ( φ = const ); then r (and later a , b ) is varied until a local extremum of J_s is found. For subsequent fringes, the parameters of the previous fringe are scaled outward by a factor slightly greater than one to provide a good starting point for the quasi-optimization step.
The limiting fringe density is governed by the image resolution and the strip integration width. For stable operation of the algorithm, the minimum distance between adjacent fringes should exceed twice the integration half-width ( 2 δ ). With our standard choice of δ = 3 pixels (which ensures averaging over several speckles), this requires an inter-fringe distance of at least 6 pixels. In practice, with moderate noise, the algorithm works successfully at distances of about 5–15 pixels.
These parametric models, combined with the strip integration functional and the rasterization operators, constitute the proposed skeletonization framework. However, before proceeding to the experimental validation of its performance in Section 4, it is necessary to prepare the intensity matrix to reduce the influence of speckle noise.

3.5. Local Intensity Equalization Algorithm

Raw interferograms often exhibit slow spatial variations in background intensity and contrast due to uneven illumination and polarization effects (see Section 2.1). To normalize these variations without distorting the fringe positions, we employ a local intensity equalization procedure based on quantile statistics within sliding windows.

3.5.1. Local Quantile Computation

Before processing the intensity matrix, it is necessary to remove areas of the image without fringes, retaining only the useful region. Furthermore, geometric distortion correction must be applied to compensate for the misalignment between the camera lens axis and the object beam. An example of this procedure, which we will call geometric correction, is shown in Figure 5.
Figure 5. Original image with a projected grid superimposed for distortion assessment and the image after geometric correction via projective transformation.
Let I [ r , c ] denote the geometrically corrected intensity matrix of size r max × c max . For each pixel ( r q , c q ) considered a potential window center, we define a local block M r q , c q , q of size q × q :
$$M_{r_q, c_q, q}[i, j] = I\bigl[ R(r_q + i - \lceil q/2 \rceil),\; C(c_q + j - \lceil q/2 \rceil) \bigr], \qquad i, j = 1, \dots, q,$$
where R ( k ) and C ( k ) are reflection-padding functions that handle boundary pixels:
$$R(k) = \begin{cases} k, & 1 \le k \le r_{\max}, \\ 1 - k, & k < 1, \\ 2 r_{\max} - k, & k > r_{\max}, \end{cases} \qquad C(k) = \begin{cases} k, & 1 \le k \le c_{\max}, \\ 1 - k, & k < 1, \\ 2 c_{\max} - k, & k > c_{\max}. \end{cases}$$
This padding ensures that windows near image boundaries remain fully populated without introducing artificial discontinuities.
For each block M r q , c q , q , we compute two statistics: the lower quantile a [ r q , c q ] and the upper quantile b [ r q , c q ] at probability levels ν and 1 ν , respectively (typically ν = 0.05 ). These quantiles estimate the local minimum and maximum intensity while being resistant to outlier pixels caused by speckle noise:
$$a[r_q, c_q] = Q_{\nu}\bigl( M_{r_q, c_q, q} \bigr), \qquad b[r_q, c_q] = Q_{1-\nu}\bigl( M_{r_q, c_q, q} \bigr).$$
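A literal (unoptimized) sketch of the block extraction with reflection padding may clarify the index bookkeeping. The 1-based indices of the text are mapped to numpy's 0-based storage; the padding convention and the use of ⌈q/2⌉ for centering odd windows reflect our reading of R and C:

```python
import numpy as np

def reflect(k, kmax):
    """Reflection padding for 1-based indices (our reading of R and C)."""
    if k < 1:
        return 1 - k
    if k > kmax:
        return 2 * kmax - k
    return k

def local_quantiles(I, rq, cq, q, nu=0.05):
    """Lower and upper quantiles of the q x q block centred at (rq, cq).

    Indices are 1-based as in the text; numpy storage is 0-based,
    hence the -1 offsets."""
    rmax, cmax = I.shape
    half = (q + 1) // 2                  # ceil(q / 2) for odd q
    block = np.array([[I[reflect(rq + i - half, rmax) - 1,
                         reflect(cq + j - half, cmax) - 1]
                       for j in range(1, q + 1)]
                      for i in range(1, q + 1)])
    return np.quantile(block, nu), np.quantile(block, 1 - nu)

I = np.arange(100, dtype=float).reshape(10, 10)
a, b = local_quantiles(I, 5, 5, 5)       # 5 x 5 block at the image centre
```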

3.5.2. Interpolation and Normalization

The quantiles a [ r , c ] and b [ r , c ] are computed only on a coarse grid with spacing d = q / 2 to reduce computational cost. To obtain values at every pixel ( i , j ) , we use linear interpolation:
$$\hat a_{i,j} = \operatorname{Int}\begin{pmatrix} a_{1,1} & a_{1,d+1} & \cdots & a_{1,c_{\max}} \\ a_{d+1,1} & a_{d+1,d+1} & \cdots & a_{d+1,c_{\max}} \\ \vdots & \vdots & \ddots & \vdots \\ a_{r_{\max},1} & a_{r_{\max},d+1} & \cdots & a_{r_{\max},c_{\max}} \end{pmatrix}, \qquad \hat b_{i,j} = \operatorname{Int}\begin{pmatrix} b_{1,1} & b_{1,d+1} & \cdots & b_{1,c_{\max}} \\ b_{d+1,1} & b_{d+1,d+1} & \cdots & b_{d+1,c_{\max}} \\ \vdots & \vdots & \ddots & \vdots \\ b_{r_{\max},1} & b_{r_{\max},d+1} & \cdots & b_{r_{\max},c_{\max}} \end{pmatrix}.$$
The normalized intensity at each pixel is then obtained by linear rescaling:
$$\tilde I[i, j] = \frac{I[i, j] - \hat a(i, j)}{\hat b(i, j) - \hat a(i, j)}.$$
This operation maps the local intensity range [ a ^ ( i , j ) , b ^ ( i , j ) ] approximately to [ 0 , 1 ] , effectively compensating for low-frequency inhomogeneities while preserving the high-frequency fringe structure.
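The whole equalization pipeline (coarse-grid quantiles, bilinear interpolation, linear rescaling) fits in a short numpy sketch. Here numpy's built-in `'reflect'` padding stands in for the R, C functions, and `blow_up` performs the bilinear interpolation denoted Int; window size and quantile level follow the defaults discussed below:

```python
import numpy as np

def equalize(I, q=31, nu=0.05):
    """Local intensity equalization sketch: quantiles on a coarse grid with
    spacing d = q // 2, bilinear interpolation to full resolution, then
    linear rescaling of each pixel to (approximately) [0, 1]."""
    rmax, cmax = I.shape
    d = q // 2
    rows, cols = np.arange(0, rmax, d), np.arange(0, cmax, d)
    pad = np.pad(I, d, mode='reflect')       # reflection padding at borders
    a = np.empty((rows.size, cols.size))
    b = np.empty_like(a)
    for u, r in enumerate(rows):
        for v, c in enumerate(cols):
            block = pad[r:r + q, c:c + q]    # q x q window centred at (r, c)
            a[u, v] = np.quantile(block, nu)
            b[u, v] = np.quantile(block, 1 - nu)

    def blow_up(g):
        """Bilinear interpolation of the coarse grid to every pixel."""
        tmp = np.array([np.interp(np.arange(rmax), rows, col) for col in g.T]).T
        return np.array([np.interp(np.arange(cmax), cols, row) for row in tmp])

    a_hat, b_hat = blow_up(a), blow_up(b)
    return (I - a_hat) / np.maximum(b_hat - a_hat, 1e-12)

# Fringes of unit contrast (~10 px period) on a slowly varying background.
x = np.linspace(0.0, 80.0 * np.pi, 400)
I = (0.5 + 0.5 * np.sin(x))[None, :] + np.linspace(0.0, 2.0, 400)[:, None]
out = equalize(I)
```

The `np.maximum(..., 1e-12)` guard protects against flat windows; a production implementation would instead mask fringe-free regions beforehand.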

3.5.3. Parameter Selection

The window size q is chosen to be several times larger than the expected inter-fringe distance but smaller than characteristic scales of illumination non-uniformity. In our experiments with square plates, q = 31 pixels (covering about 3–4 fringes) worked well. The quantile level ν = 0.05 provides a good compromise between rejecting speckle outliers and retaining true fringe extrema. The equalization is applied once after geometric correction and, if needed, repeated after optional Fourier filtering to compensate for global intensity shifts introduced by the filter.

3.6. Complete Skeletonization Algorithm

Integrating all components described in the previous subsections, we present the complete procedure for automated fringe skeletonization. The algorithm proceeds recursively from the innermost to the outermost fringe, taking into account the quasi-similarity property. The main steps are illustrated in Figure 6. The algorithm iteratively identifies fringes until one of the three termination conditions is met:
Figure 6. Flowchart of the complete skeletonization algorithm. Blocks are color-coded according to their functional role: blue for pre-processing steps, green for core algorithmic stages, orange for the recursive optimization loop, red for start/end points, and cyan for output/reconstruction. The color key is provided in the upper-right corner. The dashed arrow indicates an optional iterative cycle of filtering and equalization for images with extreme noise levels.
  • The last computed intensity integral is equal to zero;
  • The last identified fringe lies outside the image boundaries;
  • The required (or preset) number of fringes has been identified.
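The driver loop with the three termination conditions can be sketched as follows. `find_next_fringe` stands in for the two-stage optimization of Section 3.4, and the circle parameterization used in the out-of-bounds test is a simplification for illustration only:

```python
import numpy as np

def outside_image(params, shape):
    """True when a circle (cx, cy, r) no longer fits inside the image."""
    cx, cy, r = params
    h, w = shape
    return cx - r < 0 or cy - r < 0 or cx + r >= w or cy + r >= h

def skeletonize(image, find_next_fringe, max_fringes=200):
    """Recursive driver (sketch): `find_next_fringe` returns (params, J)
    for the next fringe, given the previous fringe's parameters."""
    fringes, params = [], None
    while len(fringes) < max_fringes:            # condition 3: preset count
        params, J = find_next_fringe(image, params)
        if J == 0.0:                             # condition 1: zero integral
            break
        if outside_image(params, image.shape):   # condition 2: out of bounds
            break
        fringes.append(params)
    return fringes

# Toy stand-in: concentric circles spaced 10 px apart around the centre.
def toy_next(image, prev):
    cx, cy = image.shape[1] / 2.0, image.shape[0] / 2.0
    r = 10.0 if prev is None else prev[2] + 10.0
    return (cx, cy, r), 1.0

fringes = skeletonize(np.zeros((128, 128)), toy_next)
```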

3.7. Synthetic Interferogram Generation

To quantitatively validate the proposed algorithm under controlled conditions, we developed a virtual interferogram generator that produces images with known pattern geometry and physically realistic noise characteristics. The generator takes into account two key aspects: (1) the typical fringe pattern of a bent square plate, and (2) the corruption mechanisms described in Section 2.1.

3.7.1. Ideal Fringe Geometry via Morphing

The underlying displacement field is modeled by a family of closed curves that morph smoothly from a circle at the center to a square at the boundary, parameterized by a morphing parameter α [ 0 , 1 ] . In the first octant ( φ [ 0 , π / 4 ] ), the curve consists of a straight segment and a curved segment that ensures C 1 continuity:
$$r_s(\varphi; \alpha) = \alpha \sec\varphi, \qquad \varphi \in \bigl[ 0,\ \alpha\pi/4 \bigr],$$
$$r_c(\varphi; \alpha) = \alpha \left[ \frac{2}{\pi(1-\alpha)} \left( \frac{\alpha\pi}{4} - \varphi \right) \left( \frac{\alpha\pi}{4} + \varphi - \frac{\pi}{2} \right) \tan\frac{\alpha\pi}{4} + 1 \right] \sec\frac{\alpha\pi}{4}, \qquad \varphi \in \bigl[ \alpha\pi/4,\ \pi/4 \bigr].$$
The full curve over all angles is obtained by symmetric replication. Equivalently, it can be expressed compactly as follows:
$$r(\varphi; \alpha) = \begin{cases} \alpha\, r_1\!\left( \varphi \bmod \dfrac{\pi}{2},\ \dfrac{\pi\alpha}{4} \right), & 0 < \varphi \bmod \dfrac{\pi}{2} \le \dfrac{\pi}{4}, \\[8pt] \alpha\, r_2\!\left( \varphi \bmod \dfrac{\pi}{2},\ \dfrac{\pi}{4}(2-\alpha) \right), & \dfrac{\pi}{4} < \varphi \bmod \dfrac{\pi}{2} \le \dfrac{\pi}{2}, \end{cases}$$
where
$$r_1(\varphi, \alpha) = \begin{cases} \left[ \dfrac{(\alpha - \varphi)(2\alpha + 2\varphi - \pi)}{\pi - 4\alpha} \tan\alpha + 1 \right] \sec\alpha, & \varphi > \alpha, \\[8pt] \sec\varphi, & \varphi \le \alpha, \end{cases} \qquad r_2(\varphi, \alpha) = \begin{cases} \csc\varphi, & \varphi > \alpha, \\[8pt] \left[ \dfrac{(\alpha - \varphi)(\pi - 2\alpha - 2\varphi)}{\pi - 4\alpha} \cot\alpha + 1 \right] \csc\alpha, & \varphi \le \alpha. \end{cases}$$
The ideal, noise-free intensity pattern is then defined as a periodic function of α :
$$I_{\mathrm{ideal}}(x, y) = \sin^2\bigl( k\pi\, \alpha(x, y) \bigr),$$
where k controls the fringe density. The mapping ( x , y ) α is obtained by numerically solving the transcendental equation:
$$r(\varphi, \alpha) = \sqrt{x^2 + y^2},$$
for α at each grid point.
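Since the morphing curves are nested, the mapping (x, y) → α can be computed by bisection. The following scalar sketch encodes our reading of the piecewise radius (a straight segment up to the transition angle απ/4, then a C¹-matched curved segment, replicated by symmetry) and assumes the radius grows monotonically with α:

```python
import numpy as np

def radius(phi, alpha):
    """Polar radius of the circle-to-square morphing curve (scalar sketch)."""
    phi = np.mod(phi, np.pi / 2)
    phi = min(phi, np.pi / 2 - phi)              # fold into [0, pi/4]
    t = alpha * np.pi / 4                        # transition angle
    if phi <= t:
        return alpha / np.cos(phi)               # straight (square) part
    # curved part, C1-continuous with the straight part at phi = t
    blend = (t - phi) * (t + phi - np.pi / 2) * 2.0 / (np.pi * (1.0 - alpha))
    return alpha * (blend * np.tan(t) + 1.0) / np.cos(t)

def alpha_at(x, y):
    """Invert r(phi, alpha) = sqrt(x^2 + y^2) for alpha by bisection.

    Assumes nested curves (radius monotone in alpha); valid for points
    inside the outermost (alpha = 1) square."""
    phi, rho = np.arctan2(y, x), np.hypot(x, y)
    lo, hi = 0.0, 1.0 - 1e-9
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if radius(phi, mid) < rho:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def ideal_intensity(x, y, k):
    """Noise-free fringe pattern I = sin^2(k * pi * alpha(x, y))."""
    return np.sin(k * np.pi * alpha_at(x, y)) ** 2
```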

3.7.2. Physically Motivated Noise Model

The ideal intensity is corrupted by two noise sources derived from the physical model in Section 2.1, which presents the general formula for corrupted intensity (1); however, the terms describing speckle noise were not specified there. Accurately defining these terms is a complicated problem that has been considered in numerous studies (e.g., [23,26,27]). For this paper, it suffices to note that speckle noise consists of three parts: a factor related to the reflecting surface, which depends on its roughness, and factors governed by the spatial and temporal coherence of the illumination:
$$N_{\mathrm{temp}} = 1 + \bigl( 2 \Delta L\, \Delta\nu / c \bigr)^2, \qquad N_{\mathrm{spat}} = 1 + \bigl( D / \rho_{\mathrm{speckle}} \bigr)^2,$$
where c is the speed of light in vacuum, Δ ν is the laser spectral linewidth, Δ L the path length difference, D is the diameter of the illuminated object area, and ρ speckle is the characteristic speckle size:
$$\rho_{\mathrm{speckle}} \approx \max\bigl( \lambda z / D,\; L_c z / D \bigr).$$
Here, λ is the laser wavelength, z is the distance from the object to the observation plane, and L_c is the roughness parameter of the reflecting surface. The resulting speckle noise can be modeled as the product of the coherence terms and the surface factor, which is obtained empirically.
In practice, for rapid computation and parameter studies, it is convenient to employ a simplified combined noise model, constructed as stochastic perturbations of the phase difference and amplitude:
$$\tilde I(x, y) = I_0(x, y) + k\, \nu(x, y) + 2 \sqrt{k\, \nu(x, y)\, I_0(x, y)}\; \cos(\delta \pi),$$
where k controls the overall noise level, and ν ( x , y ) , δ ( x , y ) are independent stochastic processes with correlation:
$$\bigl\langle f(x)\, f(x') \bigr\rangle = \left[ \frac{ 2 J_1\bigl( \pi |x - x'| / \rho_{\mathrm{speckle}} \bigr) }{ \pi |x - x'| / \rho_{\mathrm{speckle}} } \right]^2,$$
where J 1 ( · ) is the Bessel function. This model approximates the statistical properties of speckle while remaining computationally efficient.
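A minimal sketch of the simplified combined model follows. The distributions chosen here for ν (exponential intensity) and δ (uniform phase) are our illustrative assumptions, not prescribed by the model, and spatial correlation is omitted at this stage:

```python
import numpy as np

rng = np.random.default_rng(0)

def corrupt(I0, k):
    """Simplified combined noise model (sketch): the ideal intensity I0 is
    mixed with a stochastic intensity k*nu and an interference cross term
    with random phase delta*pi."""
    nu = rng.exponential(1.0, I0.shape)          # speckle-like intensity
    delta = rng.uniform(-1.0, 1.0, I0.shape)     # random phase, in units of pi
    return I0 + k * nu + 2.0 * np.sqrt(k * nu * I0) * np.cos(delta * np.pi)

I0 = np.ones((200, 200))
noisy = corrupt(I0, 1.0)
```

With k = 0 the corruption vanishes identically, and the mean intensity grows by k times the mean of ν, which is the property the calibration below relies on.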

3.7.3. Calibration of Noise Models

For systematic performance evaluation, we employ a simplified linear mixing model derived from the physical model (9). Under the normalization conditions ⟨I_0⟩ = 1 (ideal signal normalized to unit mean) and ⟨ν⟩ = 0, ⟨ν²⟩ = 1 (noise with zero mean and unit variance), the two models are related by an exact linear transformation. Defining the normalized observed intensity Î = Ĩ/(1 + k), we obtain the linear mixing model:
$$\hat I = (1 - \alpha)\, I_0 + \alpha\, \nu_{\mathrm{norm}},$$
where α = k/(1 + k) and ν_norm = ν/√⟨ν²⟩. Here, α ∈ [0, 1] represents the noise fraction, with α = 0 corresponding to the ideal pattern and α = 1 to pure noise. This exact correspondence ensures that both models yield identical first- and second-order intensity statistics under the stated normalization.
Throughout this work, references to specific noise levels (e.g., “95% noise”) correspond to α = 0.95 in (10), equivalent to k = 19 in (9). The linear model provides intuitive control over the signal-to-noise ratio, with SNR ≈ (1 − α)/α.
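The k ↔ α correspondence is a two-line conversion, sketched here for completeness (function names are ours):

```python
def k_to_alpha(k):
    """Noise fraction alpha of the linear mixing model from the gain k."""
    return k / (1.0 + k)

def alpha_to_k(alpha):
    """Inverse mapping: gain k of the physical model from the fraction alpha."""
    return alpha / (1.0 - alpha)

def snr(alpha):
    """Approximate signal-to-noise ratio of the mixed image."""
    return (1.0 - alpha) / alpha
```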

3.7.4. Spatial Correlation of Speckle Noise

Real speckle patterns exhibit spatial correlation over a characteristic length scale ρ speckle (speckle size). To replicate this property, our synthetic noise field ν ( x , y ) is generated via a two-stage process:
1.
An initial uncorrelated random field ν 0 ( x , y ) is sampled from an appropriate distribution.
2.
This field is convolved with a Gaussian kernel G σ c of width σ c ρ speckle :
$$\nu(x, y) = \bigl( \nu_0 * G_{\sigma_c} \bigr)(x, y).$$
The resulting field has the desired spatial correlation, approximating the typical speckle autocorrelation function. The strip integration functional is particularly effective against such correlated noise because its integration width δ is chosen to exceed ρ speckle , enabling averaging over multiple correlation cells and thus significant noise suppression while preserving the fringe signal.
The characteristic speckle size ρ speckle used in our synthetic noise model is informed by experimental measurements of real speckle patterns under various conditions. While real speckle exhibits multi-scale structure, our single-scale correlated noise captures the essential spatial correlation that affects fringe detection algorithms. The Gaussian convolution approximates the theoretically expected Bessel-type correlation, with the kernel width calibrated to match the measured correlation length.
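The two-stage generation can be sketched with an FFT-based Gaussian convolution. The closed-form Gaussian transfer function H is a standard identity; the final renormalization to zero mean and unit variance matches the calibration of Section 3.7.3 (the helper name and the Gaussian initial field are our choices):

```python
import numpy as np

def correlated_noise(shape, rho, rng=None):
    """Spatially correlated noise: white Gaussian noise convolved (via FFT)
    with a Gaussian kernel of width sigma_c ~ rho, then renormalized to
    zero mean and unit variance."""
    rng = rng or np.random.default_rng(0)
    nu0 = rng.standard_normal(shape)             # stage 1: uncorrelated field
    fy = np.fft.fftfreq(shape[0])[:, None]
    fx = np.fft.fftfreq(shape[1])[None, :]
    # Fourier transform of a Gaussian kernel with sigma_c = rho (pixels)
    H = np.exp(-2.0 * np.pi**2 * rho**2 * (fy**2 + fx**2))
    nu = np.fft.ifft2(np.fft.fft2(nu0) * H).real # stage 2: convolution
    nu -= nu.mean()
    return nu / nu.std()

nu = correlated_noise((128, 128), rho=3.0)
```

Note that the FFT implements a circular convolution, i.e., periodic boundary conditions, which is harmless for noise synthesis.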

3.7.5. Error Metrics

For each synthetic image, the exact curves { γ i true } are known analytically. This allows us to define rigorous error metrics for any skeletonization result { γ i res } :
$$\varepsilon_{\mathrm{Euc}} = \frac{1}{L_{\mathrm{true}}} \int_0^{2\pi} \sqrt{ \bigl( x_{\mathrm{true}}(\varphi) - x_{\mathrm{res}}(\varphi) \bigr)^2 + \bigl( y_{\mathrm{true}}(\varphi) - y_{\mathrm{res}}(\varphi) \bigr)^2 }\; d\varphi \quad \text{(Euclidean metric)},$$
$$\varepsilon_{\max} = \frac{ \displaystyle \max_{\varphi \in [0, 2\pi]} \sqrt{ \bigl( x_{\mathrm{true}}(\varphi) - x_{\mathrm{res}}(\varphi) \bigr)^2 + \bigl( y_{\mathrm{true}}(\varphi) - y_{\mathrm{res}}(\varphi) \bigr)^2 } }{ \displaystyle \max_{\varphi \in [0, 2\pi]} \sqrt{ x_{\mathrm{true}}(\varphi)^2 + y_{\mathrm{true}}(\varphi)^2 } } \quad \text{(max error)}.$$
These metrics are used in Section 4 to quantitatively compare the performance of different methods.
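Discrete versions of both metrics for curves sampled at uniform angles can be written directly; the plain Riemann summation below is a simplification that is adequate for dense sampling:

```python
import numpy as np

def skeleton_errors(xt, yt, xr, yr):
    """Discrete Euclidean and maximum error metrics for two closed curves
    sampled at the same uniform angles phi (xt, yt: true; xr, yr: result)."""
    dphi = 2.0 * np.pi / len(xt)
    dist = np.hypot(xt - xr, yt - yr)
    # perimeter L_true of the ground-truth curve as a closed polyline
    length = np.sum(np.hypot(np.diff(xt, append=xt[0]),
                             np.diff(yt, append=yt[0])))
    eps_euc = np.sum(dist) * dphi / length
    eps_max = dist.max() / np.hypot(xt, yt).max()
    return eps_euc, eps_max

phi = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
xt, yt = 5.0 * np.cos(phi), 5.0 * np.sin(phi)      # true circle, r = 5
xr, yr = 5.1 * np.cos(phi), 5.1 * np.sin(phi)      # result, r = 5.1
eps_euc, eps_max = skeleton_errors(xt, yt, xr, yr)
```

For the concentric-circle example, both metrics evaluate to the relative radial offset 0.1/5 = 0.02.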

3.8. Implementation Details

The described skeletonization procedure has been implemented in a custom C++ program. This program is capable of performing all steps of the proposed algorithm (see Figure 6):
1.
Geometric correction;
2.
Filtering in the frequency domain with different kernels;
3.
Intensity equalization, as described in Section 3.5;
4.
Localization of the image center;
5.
Recursive identification of fringes, which includes the following aspects:
(a)
Initializing the parametric curve (Section 3.4.1 and Section 3.4.2);
(b)
Computing the intensity integral (Section 2.4) using numerical quadratures (Section 3.2);
(c)
Defining the approximation curve’s parameters.
After extensive testing of several optimization approaches, we developed a tailored strategy that balances efficiency and robustness:
Optimization framework: the identification of each fringe involves two optimization stages:
1.
Coarse quasi-optimization: A modified coordinate descent algorithm with adaptive step sizing and momentum (e.g., [28]) accelerates the initial search for the fringe’s approximate position and scale, varying only the geometric parameters (center coordinates and base radius/axes).
2.
Fine refinement: A conjugate gradient method with dynamic parameter scaling performs the full optimization of all parameters, including the higher-order Fourier perturbation coefficients. Analytic gradients of the strip integration functional J s are computed via automatic differentiation.
Parameters and stopping criteria: Practical step sizes were determined empirically: 0.5 pixels for geometric parameters, 0.001 for normalized Fourier coefficients. Optimization terminates when either the relative change in J_s falls below 10^{-6} or the norm of the parameter update drops below 10^{-4}.
Risk control and convergence safeguards: A primary challenge in real interferograms is the presence of intensity plateaus caused by pre-processing or extreme noise. Our implementation includes the following:
  • Plateau detection: if the objective function shows negligible improvement (< 10^{-8}) over 20 consecutive iterations, stagnation is flagged.
  • “Shaking” recovery: upon stagnation, parameters receive a small random perturbation (5–10% of the current step size), and optimization restarts from this perturbed state.
  • Dynamic variable prioritization: the algorithm tracks parameter sensitivity and temporarily focuses the search on the most influential variables when progress slows.
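The plateau/shaking safeguard can be illustrated with a toy coordinate-descent wrapper. All constants and the descent scheme itself are illustrative stand-ins for the C++ implementation, not a reproduction of it:

```python
import numpy as np

def optimize_with_shaking(f, x0, step=0.5, patience=20, iters=500,
                          tol=1e-8, rng=None):
    """Coordinate-descent sketch with plateau detection and 'shaking':
    after `patience` sweeps with improvement below tol, the point is
    perturbed by 5-10% of the current step and the search restarts."""
    rng = rng or np.random.default_rng(0)
    x = np.asarray(x0, dtype=float)
    fx, stall = f(x), 0
    for _ in range(iters):
        improved = False
        for i in range(len(x)):                  # one coordinate sweep
            for s in (step, -step):
                trial = x.copy()
                trial[i] += s
                ft = f(trial)
                if ft < fx - tol:
                    x, fx, improved = trial, ft, True
        if improved:
            stall = 0
        else:
            stall += 1
            step *= 0.7                          # adaptive step shrinking
        if stall >= patience:                    # plateau detected: shake
            x = x + (rng.uniform(0.05, 0.10, x.size) * step
                     * rng.choice([-1.0, 1.0], x.size))
            fx, stall = f(x), 0
        if step < 1e-9:
            break
    return x, fx

def quadratic(v):                                # toy objective for the demo
    return (v[0] - 1.0) ** 2 + (v[1] + 2.0) ** 2

x_opt, f_opt = optimize_with_shaking(quadratic, [5.0, 5.0])
```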
In systematic tests, this approach converged to physically plausible fringe contours in over 98% of cases (6000 individual fringe extraction attempts, 100 synthetic 512 × 512 pixel interferograms × average 60 fringes each). The remaining failures occurred only with grossly incorrect initialization (e.g., center placed outside the fringe pattern), underscoring the importance of the coarse center estimation step described in Section 3.5. The algorithm fails predominantly in two scenarios: (1) at moderate noise ( α = 0.4 ) with very dense fringes (80 fringes, spacing 3 pixels), or (2) at high noise ( α = 0.9 ) with moderate fringe density (20 fringes). In both cases, the intensity modulation between adjacent fringes becomes insufficient for reliable separation.
An example of the pre-processing of a real interferogram, performed using the developed program, is shown in Figure 7.
Figure 7. The pre-processing of a real interferogram. Rows (top to bottom): (ac) original image; (df) image after Gaussian filtering (blur); (gi) image after intensity equalization. Columns (left to right): (a,d,g) full image; (b,e,h) magnified chosen region; (c,f,i) intensity cross-section, showing the original function (top) and the function after removal of false extrema (bottom).
Apart from that, the program contains a set of standard tools for working with images and includes a custom generator of synthetic fringe patterns, as described in Section 3.7. The generator accepts the following control parameters:
  • Fringe density k (number of fringes across the field);
  • Speckle noise, dependent on surface roughness parameter and the laser spectral linewidth;
  • Noise modeling the influence of non-uniform illumination.
For each generated image, the exact curves { γ i true ( α = i / k ) } are known analytically. This allows for the quantification of the algorithm errors.

4. Validation on Synthetic Interferograms

4.1. The Efficiency of Fringe Identification with Different Variants of the Intensity Integral

We compared the efficiency of fringe identification with different variants of the intensity integral, using the metrics defined in Section 3.7.5. Figure 8 demonstrates that with slightly noisy images, both methods have similar performance, whereas for highly noisy patterns, the strip integration method has a significant advantage over the line integration one. Moreover, as is shown in Figure 9, the strip integration method is capable of identifying fringes in images with noise levels as high as 95%, where it is hardly possible to distinguish the fringes even visually.
Figure 8. A comparison of identification efficiency for different integration methods.
Figure 9. Fringe identification from a synthetic interferogram ( 512 × 512 px) by line integration and strip integration at different noise levels.

4.2. Why General-Purpose Methods Fail: A Fundamental Analysis

A meaningful comparison with baseline methods must first address why standard approaches are fundamentally mismatched to the problem of high-density, high-noise fringe skeletonization. Consider the two most relevant classes, as detailed below.

4.2.1. Edge and Ridge Detection (Canny, Shen–Castan)

These methods are designed to detect discontinuities or gradient maxima. For fringe patterns, they produce discrete sets of edge pixels (Figure 10). Even under moderate noise, the output is fragmented and lacks topological structure. Converting this pixel cloud into a family of smooth, nested, closed curves requires additional heuristic processing that is both complex and unreliable, especially when fringes are closely spaced.
Figure 10. Edge detection results on synthetic interferograms. Yellow curves show exact intensity gradient maxima (theoretical edge positions); white pixels show detected edges. Even at moderate noise, detected edges are fragmented and offset from true fringe centers. (a) 10% noise, 95% R 3 coverage. (b) 50% noise, 90% R 3 coverage. (c) 80% noise, 70% R 3 coverage. Significant fragmentation. (d) 95% noise, 10% R 3 coverage. Total fragmentation.

4.2.2. Active Contour Models (Snakes)

Standard snake formulations minimize an energy functional E = E int + E ext , where E ext typically attracts the contour to image gradients (edges). To adapt a snake to locate fringe centers (intensity extrema), one would need to:
1.
Redefine E ext to target intensity extrema rather than gradients.
2.
Replace pointwise gradient computation with a strip integration functional for noise robustness.
3.
Ensure functional smoothness via bicubic interpolation to enable gradient-based optimization.
4.
Constrain the snake to a parametric subspace (e.g., Fourier-based curves) to guarantee correct topology.
5.
Implement recursive initialization leveraging fringe similarity for efficiency.
These modifications would essentially recreate the core components of our proposed method. The “baseline” snake would cease to be a general-purpose tool and become a specialized implementation of our approach.

4.2.3. Quantitative Assessment of Edge Detection

Edge detection algorithms (such as Canny) are fundamentally designed to locate gradient extrema—points of maximum intensity change, which correspond to transitions between bright and dark regions. In an ideal fringe pattern, these would lie midway between adjacent fringe centers. Our method, in contrast, directly targets the intensity extrema (fringe centers). This conceptual mismatch means edge detectors cannot directly produce the skeleton needed for displacement field reconstruction, even under ideal conditions.
Despite this fundamental difference, we can define a rough success metric to assess how well edge detection “covers” the true fringe locations: for each ground-truth fringe curve, we compute the percentage of its pixels that lie within a d-pixel neighborhood ( d = 3 ) of any detected edge pixel. This fringe coverage rate R d measures the geometric proximity of detected edges to true fringe centers.
As Figure 10 shows, edge detection performs reasonably under low noise but produces fragmented, offset edge maps. At 80% noise, fragmentation becomes severe, and at 95% noise, the coverage rate drops below 10%. More importantly, even when edges are detected, they represent gradient maxima between fringes, not the intensity extrema required for displacement measurement. This fundamental limitation, combined with noise sensitivity, makes edge detection unsuitable for high-precision skeletonization.
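The coverage rate R_d admits a direct brute-force implementation, adequate for the image sizes used here (the function name and the synthetic vertical-edge example are ours):

```python
import numpy as np

def coverage_rate(truth_pixels, edge_map, d=3):
    """Fraction of ground-truth fringe pixels lying within a d-pixel
    neighbourhood of any detected edge pixel (brute force)."""
    edges = np.argwhere(edge_map)            # (N, 2) array of edge coords
    if edges.size == 0:
        return 0.0
    hits = 0
    for p in np.asarray(truth_pixels):
        if np.hypot(*(edges - p).T).min() <= d:
            hits += 1
    return hits / len(truth_pixels)

edge_map = np.zeros((32, 32), dtype=bool)
edge_map[:, 10] = True                          # a detected vertical edge
near = np.array([[r, 12] for r in range(32)])   # true fringe 2 px away
far = np.array([[r, 20] for r in range(32)])    # true fringe 10 px away
```

For large images a distance transform of the edge map would replace the per-pixel minimum search.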

5. Application to Real High-Density Interferograms

To validate the proposed algorithm, it was tested on a real hologram obtained from a bending experiment on a square plate. The plate (copper, side length 6 cm, thickness 184 µm) was subjected to a uniform transverse load in a holographic interferometry setup (Leith–Upatnieks off-axis scheme with a solid-state laser, λ = 532 nm). Interferograms were recorded for several load increments. The most challenging case, with a deflection generating over 100 fringes (shown in Figure 11(i)), was selected for analysis.
Figure 11. Validation of the proposed procedure on a real hologram. (i) Region of the original hologram. (ii) Magnified central region of the processed image. (iii) Full processed image, displaying both the identified fringes (red and blue lines) and the intensity-equalized fringe pattern. (iv) Deformed surface of the plate obtained from independent theoretical modeling for comparison.
The set of identified fringes (red and blue lines in Figure 11(ii,iii)) was compared with the results (Figure 11(iv)) obtained from the theoretical modeling presented in [22]. As shown in Figure 11, the identified fringes are in good agreement with the isolines of the deformed surface, demonstrating the acceptable accuracy of the proposed procedure. The RMS deviation between the displacement field reconstructed from the new algorithm’s skeleton and the independent theoretical model is 0.12 λ (≈64 nm).

6. Conclusions

This paper presents a novel, robust method for skeletonizing fringe patterns in holographic interferometry. By constraining the solution to a physics-informed parametric subspace and employing a strip integration functional, the method achieves high accuracy and stability in the presence of strong speckle noise, where conventional techniques may fail. Quantitative validation on synthetic data demonstrated a significant reduction in error compared to baseline methods (Canny edge detection and GVF snakes). Practical utility was confirmed by successfully processing a real interferogram with over 100 fringes, the results of which closely matched those of an independent theoretical model.

Applicability, Limitations, and Future Work

The presented algorithm is specifically tailored for analyzing interferograms where the fringe family exhibits a physically motivated, simple topology – specifically, families of closed, nested, and quasi-similar contours. This pattern is characteristic of axisymmetric or quasi-axisymmetric bending in plates and membranes with convex boundaries (circular, square, polygonal), which are common test objects in the experimental mechanics of MEMS elements and thin-film structures [29].
The main limitations of the current implementation stem directly from its core design choices:
  • Topological constraints: The algorithm is most effective when the fringe topology is known a priori to consist of nested, simply connected curves. It may encounter difficulties or produce suboptimal results for patterns featuring fringe bifurcations (e.g., wrinkling patterns [30]), interruptions (e.g., near cracks or holes), or multiple disconnected families without a single dominant center.
  • Initialization dependency: The recursive propagation strategy requires a reasonable initial guess for the innermost fringe (coarse center estimation). While the algorithm includes a robust coarse search, a severely erroneous initialization (e.g., outside the fringe field) may prevent convergence.
  • Quasi-similarity assumption: The recursive search relies on the geometric similarity of adjacent fringes. A drastic, abrupt change in fringe shape between consecutive contours violates this assumption and could cause the propagation to fail or jump to an incorrect fringe.
Pathways for generalization and future work: the limitations outlined above define clear directions for extending the framework:
1.
Handling complex topologies: For patterns with bifurcations or multiple centers, a pre-processing segmentation step could partition the image into regions, each containing a fringe family with simple topology. The proposed algorithm could then be applied independently within each region.
2.
Robustness enhancement via functional modification: The risk of the algorithm “jumping” to an adjacent fringe or failing on complex patterns could be mitigated by augmenting the strip integration functional J s with additional regularization terms. For instance, terms that penalize excessive variation in intensity along the strip centerline could help detect anomalies and prevent incorrect convergence.
3.
Automated model selection: Future developments could include a preliminary analysis stage to automatically infer the appropriate parametric model (e.g., circle vs. ellipse, required Fourier order) and topology from the raw interferogram, reducing the need for manual parameter selection.
Despite these limitations, the proposed method provides a powerful and reliable tool for a specific but critically important class of problems in optical metrology. Its ability to deliver sub-pixel accuracy from single, extremely noisy interferograms makes it a valuable asset for experimentalists studying high-sensitivity deformation phenomena.
The general principles of the method—parametric modeling of expected topology combined with robust integration for noise suppression—could potentially inspire adaptations for other pattern analysis problems, such as processing certain types of wrinkle or surface topography images. However, such applications would require careful consideration of domain-specific noise characteristics and topological constraints, which lie outside the scope of this holography-focused work.

Author Contributions

Conceptualization, methodology, software, validation, writing—original draft preparation, S.L.; resources, investigation, A.D.; writing—review and editing, S.L. and A.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Russian Science Foundation (grant No. 25-11-00333).

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Data are contained within this article.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Rasterization Algorithms Pseudocode

Algorithm A1: Adaptive Curve Rasterization using First-Order Operator D 1
Algorithm A2: Strip Rasterization using Second-Order Operator D 2

References

  1. Kobayashi, A. Handbook on Experimental Mechanics; Prentice-Hall: Englewood Cliffs, NJ, USA, 1987. [Google Scholar]
  2. Vest, C. Holographic Interferometry; Wiley: Hoboken, NJ, USA, 1979. [Google Scholar]
  3. Malacara, D.; Servin, M.; Malacara, Z. Interferogram Analysis for Optical Testing; Taylor & Francis: Abingdon, UK, 2005. [Google Scholar]
  4. Kreis, T. Handbook of Holographic Interferometry: Optical and Digital Methods; John Wiley & Sons: Hoboken, NJ, USA, 2006. [Google Scholar]
  5. Distante, A.; Distante, C. Handbook of Image Processing and Computer Vision: Volume 1: From Energy to Image; Springer International Publishing: Berlin/Heidelberg, Germany, 2020. [Google Scholar]
  6. Schwider, J.; Falkenstoerfer, O.R.; Schreiber, H.; Zoeller, A.; Streibl, N. New compensating four-phase algorithm for phase-shift interferometry. Opt. Eng. 1993, 32, 1883–1885. [Google Scholar] [CrossRef]
  7. Zuo, C.; Feng, S.; Huang, L.; Tao, T.; Yin, W.; Chen, Q. Phase shifting algorithms for fringe projection profilometry: A review. Opt. Lasers Eng. 2018, 109, 23–59. [Google Scholar] [CrossRef]
  8. Tabata, S.; Maruyama, M.; Watanabe, Y.; Ishikawa, M. Pixelwise Phase Unwrapping Based on Ordered Periods Phase Shift. Sensors 2019, 19, 377. [Google Scholar] [CrossRef] [PubMed]
  9. Marr, D.; Hildreth, E. Theory of edge detection. Proc. R. Soc. Lond. Ser. B Biol. Sci. 1980, 207, 187–217. [Google Scholar] [CrossRef] [PubMed]
  10. Canny, J. A computational approach to edge detection. IEEE Trans. Pattern Anal. Mach. Intell. 1986, PAMI-8, 679–698. [Google Scholar] [CrossRef]
  11. Shen, J.; Castan, S. An optimal linear operator for step edge detection. CVGIP Graph. Model. Image Process. 1992, 54, 112–133. [Google Scholar] [CrossRef]
  12. Xu, C.; Prince, J.L. Gradient vector flow: A new external force for snakes. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition; IEEE: Piscataway, NJ, USA, 1997; pp. 66–71. [Google Scholar]
  13. Xu, C.; Prince, J.L. Snakes, shapes, and gradient vector flow. IEEE Trans. Image Process. 1998, 7, 359–369. [Google Scholar] [CrossRef] [PubMed]
  14. Kass, M.; Witkin, A.; Terzopoulos, D. Snakes: Active contour models. Int. J. Comput. Vis. 1988, 1, 321–331. [Google Scholar] [CrossRef]
  15. Li, B.; Acton, S.T. Active contour external force using vector field convolution for image segmentation. IEEE Trans. Image Process. 2007, 16, 2096–2106. [Google Scholar] [CrossRef] [PubMed]
  16. Tang, C.; Lu, W.; Cai, Y.; Han, L.; Wang, G. Nearly preprocessing-free method for skeletonization of gray-scale electronic speckle pattern interferometry fringe patterns via partial differential equations. Opt. Lett. 2008, 33, 183–185. [Google Scholar] [CrossRef] [PubMed]
  17. Tang, C.; Ren, H.; Wang, L.; Wang, Z.; Han, L.; Gao, T. Oriented couple gradient vector fields for skeletonization of gray-scale optical fringe patterns with high density. Appl. Opt. 2010, 49, 2979–2984. [Google Scholar] [CrossRef] [PubMed]
  18. Li, Y.H.; Chen, X.J.; Qu, S.L.; Luo, Z.Y. Algorithm for skeletonization of gray-scale optical fringe patterns with high density. Opt. Eng. 2011, 50, 087003. [Google Scholar] [CrossRef]
  19. Jiang, W.; Ren, T.; Fu, Q. Deep learning in the phase extraction of electronic speckle pattern interferometry. Electronics 2024, 13, 418. [Google Scholar] [CrossRef]
  20. Feng, S.; Chen, Q.; Gu, G.; Tao, T.; Zhang, L.; Hu, Y.; Yin, W.; Zuo, C. Fringe pattern analysis using deep learning. Adv. Photonics 2019, 1, 025001. [Google Scholar] [CrossRef]
  21. Liu, C.; Tang, C.; Xu, M.; Hao, F.; Lei, Z. Skeleton extraction and inpainting from poor, broken ESPI fringe with an M-net convolutional neural network. Appl. Opt. 2020, 59, 5300–5308. [Google Scholar] [CrossRef] [PubMed]
  22. Lychev, S.; Digilov, A.; Djuzhev, N. Galerkin-Type Solution of the Föppl–von Kármán Equations for Square Plates. Symmetry 2024, 17, 32. [Google Scholar] [CrossRef]
  23. Eichhorn, N.; Osten, W. An algorithm for the fast derivation of line structures from interferograms. J. Mod. Opt. 1988, 35, 1717–1725. [Google Scholar] [CrossRef]
  24. Ciarlet, P.G. Linear and Nonlinear Functional Analysis with Applications, 2nd ed.; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 2025. [Google Scholar]
  25. Mittelstedt, C. Theory of Plates and Shells; Springer: Berlin/Heidelberg, Germany, 2023. [Google Scholar]
  26. Goodman, J.W. Speckle Phenomena in Optics: Theory and Applications; Roberts and Company Publishers: Greenwood Village, CO, USA, 2007. [Google Scholar]
  27. Dainty, J.C. Laser Speckle and Related Phenomena; Springer Science & Business Media: New York, NY, USA, 2013; Volume 9. [Google Scholar]
  28. Wang, Q.; Li, W.; Bao, W.; Zhang, F. Accelerated randomized coordinate descent for solving linear systems. Mathematics 2022, 10, 4379. [Google Scholar] [CrossRef]
  29. Lychev, S.; Digilov, A.; Demin, G.; Gusev, E.; Kushnarev, I.; Djuzhev, N.; Bespalov, V. Deformations of Single-Crystal Silicon Circular Plate: Theory and Experiment. Symmetry 2024, 16, 137. [Google Scholar] [CrossRef]
  30. Bychkov, P.S.; Lychev, S.A.; Bout, D.K. Experimental technique for determining the evolution of the bending shape of thin substrate by the copper electrocrystallization in areas of complex shapes. Vestn. Samara Univ. Nat. Sci. Ser. 2019, 25, 48–73. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
