Review

Recent Advances in Digital Fringe Projection Profilometry (2022–2025): Techniques, Applications, and Metrological Challenges—A Review

by Mishraim Sanchez-Torres 1,*, Ismael Hernández-Capuchin 1, Cristina Ramírez-Fernández 1, Eddie Clemente 2, José Luis Javier Sánchez-González 3 and Alan López-Martínez 4

1 Instituto Tecnológico de Ensenada, Tecnológico Nacional de México, Ensenada 22880, Baja California, Mexico
2 Centro Nacional de Investigación y Desarrollo Tecnológico, Tecnológico Nacional de México, Cuernavaca 62490, Morelos, Mexico
3 Facultad de Ingeniería, Arquitectura y Diseño, Universidad Autónoma de Baja California, Ensenada 22860, Baja California, Mexico
4 FuzzyFrog AI, León 37100, Guanajuato, Mexico
* Author to whom correspondence should be addressed.
Metrology 2026, 6(1), 3; https://doi.org/10.3390/metrology6010003
Submission received: 27 November 2025 / Revised: 31 December 2025 / Accepted: 4 January 2026 / Published: 12 January 2026

Abstract

Digital fringe projection profilometry (DFPP) is a widely used technique for full-field, non-contact 3D surface measurement, offering precision from the sub-micrometer-to-millimeter scale depending on system geometry and fringe design. This review provides a consolidated synthesis of advances reported between 2022 and 2025, covering projection and imaging architectures, phase formation and unwrapping strategies, calibration approaches, high-speed implementations, and learning-based reconstruction methods. A central contribution of this review is the integration of these developments within a metrological perspective, explicitly relating phase–height transformation, fringe parameters, system geometry, and calibration to dominant uncertainty sources and error propagation. Recent progress highlights trade-offs between sensitivity, robustness, computational complexity, and applicability to non-ideal surfaces, while learning-based and hybrid optical–computational approaches demonstrate substantial improvements in reconstruction reliability under challenging conditions. Remaining challenges include measurements on reflective or transparent surfaces, dynamic scenes, environmental instability, and real-time operation. The review outlines emerging research directions such as physics-informed learning, digital twins, programmable optics, and autonomous calibration, providing guidance for the development of next-generation DFPP systems for precision metrology.

1. Introduction

Three-dimensional surface measurement is essential across industrial, biomedical, cultural heritage, and manufacturing environments, where digital fringe projection profilometry (DFPP) provides full-field, non-contact geometry reconstruction through structured illumination and phase analysis with sub-millimeter to micrometer precision [1,2]. DFPP has surpassed coordinate measuring machines, laser triangulation, and passive stereovision in applications including in-line inspection, additive manufacturing monitoring, surgical planning, and artifact documentation [3,4]. Its evolution from early structured-light demonstrations [5] to modern digital projection with improved fringe quality [2] has enabled operation in dynamic scenes, reflective or low-texture surfaces, discontinuous geometries, and challenging ambient conditions [4,6]. Recent advances include modeling frameworks linking uncertainty to system parameters [1], hierarchical unwrapping and multi-frequency strategies [7], learning-based phase retrieval [8], quasi-calibration approaches [9], multi-projector configurations for occlusion compensation [3], and computational optimization of fringe orientation [10], all contributing to the rapid expansion of DFPP capabilities and applications.
Despite this progress, the literature from 2022 to 2025 remains fragmented. Earlier reviews have provided valuable descriptions of general principles, historical evolution, and specific system configurations [2,4], yet several key developments have not been synthesized into a unified framework. Advances such as AI-based phase processing [8], adaptive fringe parameter control [11], ultra-high-speed acquisition exceeding 100,000 fps [12], single-shot reconstruction for non-ideal surfaces [13], and explicit uncertainty modeling [1,14] are typically discussed in isolation, without integration into a metrological perspective. Similarly, strengths documented in recent literature—such as robustness to environmental perturbations [15], improved accuracy through hierarchical unwrapping [2], dual-camera fusion for occlusion mitigation [15], and successful deployment in additive manufacturing [16] and biomedical surface capture [3]—remain unconnected within a coherent analytical structure. As a result, there is currently no review that consolidates physical principles, computational advances, system-level innovations, and uncertainty considerations into an integrated view tailored to precision metrology.
The period 2022–2025 is particularly relevant because DFPP has been influenced simultaneously by breakthroughs in deep learning, computational optics, high-speed projection, flexible calibration pipelines, and improved modeling of measurement uncertainty. These concurrent advances have altered the trade-offs between sensitivity, robustness, computational complexity, and operational range, making a consolidated and critically informed review both timely and necessary. This article therefore provides an integrated synthesis that relates DFPP fundamentals to system behavior, algorithmic performance, and metrological reliability. It combines recent progress in phase extraction, phase unwrapping, calibration strategies, high-speed hardware, and multi-projector architectures with emerging trends in learning-based reconstruction and adaptive illumination, emphasizing how these developments influence uncertainty sources, error propagation, and practical measurement performance.
In addressing these aspects, the present review articulates a comprehensive perspective that spans fundamental principles, system configurations, calibration methods, phase acquisition and extraction, unwrapping techniques, computational and machine learning algorithms, hardware advances, representative applications, and metrological challenges. Articles published between 2022 and 2025 were selected based on DFPP relevance, experimental validation, and applicability to industrial, biomedical, and microscale measurement scenarios, while earlier or non-validated works were excluded. By linking theoretical foundations with applied developments and emerging needs, this review establishes a coherent framework for understanding the current capabilities, limitations, and opportunities of DFPP for precision metrology.
The manuscript is organized as follows. Section 2 revisits the fundamental principles of DFPP and the relationship between phase and surface geometry. Section 3 describes system components, optical configurations, and calibration procedures. Section 4 reviews phase acquisition, extraction, and unwrapping methods. Section 5 presents representative applications. Section 6 discusses current challenges and emerging directions, and Section 7 concludes the review.

2. Fundamental Principles of DFPP

2.1. Optical Triangulation and Geometric Foundations

Digital fringe projection profilometry relies fundamentally on optical triangulation: a projector emits structured sinusoidal patterns onto the surface, and one or more cameras capture their deformation, enabling depth estimation through the geometric relationships between devices and the object under epipolar constraints [17,18]. In this stereo-like configuration, the projector behaves as an inverse camera, forming a constrained epipolar geometry between the optical centers and reducing correspondence search to a single dimension [19], as illustrated in Figure 1. This geometric foundation determines how phase values map to three-dimensional coordinates and sets the intrinsic sensitivity limits of DFPP.
The core mathematical link between measured phase and object height follows from this triangulation structure. Prior work has shown that phase-based and geometry-based formulations are equivalent under both coplanar and non-coplanar setups, provided the correct geometric transformations are applied. The relation connects pixel coordinates, measured phase, and world coordinates through camera intrinsics, projector intrinsics, and their relative pose. A simplified form of the phase–height model is given by
Z = (d · f_p) / (d + φ · p / (2π)),
where d is the baseline, f_p the projector focal length, and p the fringe pitch. Although useful for intuition, this model assumes an ideal pinhole geometry. Real systems require full projective mappings accounting for lens distortion, misalignment, and calibration uncertainty, all of which strongly influence achievable precision.
More complete formulations apply the full projection chain. For the camera, the mapping from world to image coordinates is
s [u, v, 1]^T = K_c [R_c | t_c] [X, Y, Z, 1]^T,
where K_c denotes the intrinsic matrix and [R_c | t_c] describes the extrinsic transformation of the camera. An analogous pinhole model is adopted for the projector, which is treated as an inverse camera with phase encoded along the fringe projection direction. Combining both imaging models enables the phase-to-height transformation pipeline illustrated in Figure 2 [20]. In the reconstruction pipeline, the camera-acquired images used in the phase-shifting model are denoted as I_n(x, y), while in Figure 2, representative captured frames are illustrated as I_c^n for visual clarity. This framework constitutes the basis for all subsequent calibration and reconstruction procedures. For completeness, the main mathematical steps of the phase-to-height reconstruction are summarized below.
The three-dimensional reconstruction process begins with the acquisition of a sequence of N phase-shifted fringe images {I_n(x, y)}_{n=1}^{N} captured by the camera. Under the standard phase-shifting model, the captured intensity at each pixel (x, y) can be expressed as
I_n(x, y) = A(x, y) + B(x, y) cos[ϕ(x, y) + δ_n],
where A(x, y) represents the background illumination, B(x, y) is the modulation amplitude, ϕ(x, y) is the wrapped phase to be recovered, and δ_n = 2π(n − 1)/N denotes the known phase shift.
The wrapped phase ϕ_w(x, y) is obtained via phase demodulation, typically using an N-step phase-shifting algorithm:
ϕ_w(x, y) = −arctan[ (Σ_{n=1}^{N} I_n(x, y) sin δ_n) / (Σ_{n=1}^{N} I_n(x, y) cos δ_n) ],
where ϕ_w(x, y) ∈ (−π, π] is spatially discontinuous due to the 2π ambiguity.
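The demodulation step above can be checked numerically. The NumPy sketch below synthesizes N phase-shifted frames from a known phase field and recovers the wrapped phase; the frame count, amplitudes, and test phase are illustrative choices, not values from the text, and the sign in the numerator follows from the cos(ϕ + δ_n) convention of the intensity model.

```python
import numpy as np

def wrapped_phase(frames, deltas):
    """N-step phase-shifting demodulation for I_n = A + B*cos(phi + delta_n)."""
    num = -sum(I * np.sin(d) for I, d in zip(frames, deltas))
    den = sum(I * np.cos(d) for I, d in zip(frames, deltas))
    return np.arctan2(num, den)  # wrapped phase in (-pi, pi]

# Synthetic check: a smooth phase field that stays within (-pi, pi]
# needs no unwrapping, so demodulation should match the ground truth.
x = np.linspace(0.0, 1.0, 256)
phi_true = 2.5 * np.sin(2.0 * np.pi * x)          # |phi| < pi everywhere
N = 4
deltas = [2.0 * np.pi * n / N for n in range(N)]  # uniform phase shifts
frames = [0.5 + 0.4 * np.cos(phi_true + d) for d in deltas]
phi_w = wrapped_phase(frames, deltas)
```

With noiseless synthetic frames the recovered phase matches the ground truth to machine precision; real data adds intensity noise, which is where the number of steps N matters.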
To recover a continuous phase distribution, a phase unwrapping procedure is applied. The unwrapped phase ϕ u ( x , y ) is related to the wrapped phase by
ϕ_u(x, y) = ϕ_w(x, y) + 2π k(x, y),
where k(x, y) ∈ ℤ is an integer fringe order determined using spatial, temporal, or multi-frequency unwrapping strategies.
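The relation between wrapped phase, unwrapped phase, and integer fringe order can be illustrated on a synthetic 1-D ramp; the ramp and sample count below are arbitrary choices for the example, and np.unwrap stands in for the unwrapping strategies discussed in the text (it assumes neighboring samples differ by less than π).

```python
import numpy as np

phi = np.linspace(0.0, 8.0 * np.pi, 200)   # true continuous phase
phi_w = np.angle(np.exp(1j * phi))         # wrap to (-pi, pi]
phi_u = np.unwrap(phi_w)                   # restore continuity (smoothness assumed)
k = (phi_u - phi_w) / (2.0 * np.pi)        # recovered fringe order, integer-valued
```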
Once the unwrapped phase is obtained, it establishes a correspondence between camera pixels and projector coordinates along the fringe direction. Let u_p denote the projector coordinate associated with ϕ_u(x, y); this relationship can be written as
u_p(x, y) = (P / 2π) ϕ_u(x, y),
where P is the projector fringe period in pixels.
Finally, three-dimensional surface points X = (X, Y, Z)^T are reconstructed via triangulation by jointly solving the camera and projector projection equations:
s_c [x, y, 1]^T = K_c [R_c | t_c] [X^T, 1]^T,   s_p [u_p, v_p, 1]^T = K_p [R_p | t_p] [X^T, 1]^T,
where K_p, R_p, and t_p denote the intrinsic and extrinsic parameters of the projector. Solving this system yields the final three-dimensional surface geometry.
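A minimal sketch of one side of this triangulation system is the pinhole forward projection; the intrinsic matrix, pose, and test point below are hypothetical values chosen for illustration, not parameters from any system in the text.

```python
import numpy as np

def project(K, R, t, X):
    """Pinhole projection: world point X -> pixel (u, v) via s*[u, v, 1]^T = K [R|t] [X; 1]."""
    x_cam = R @ X + t          # world frame -> camera frame
    uvw = K @ x_cam            # camera frame -> homogeneous pixel coordinates
    return uvw[:2] / uvw[2]    # perspective division removes the scale s

# Hypothetical intrinsics and a trivial pose (identity rotation, zero translation).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R, t = np.eye(3), np.zeros(3)
uv = project(K, R, t, np.array([0.1, -0.05, 1.0]))
```

Reconstruction inverts this mapping: with the camera pixel, the projector coordinate u_p, and both calibrated projection matrices, the point X is the least-squares intersection of the corresponding rays.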
Different geometric models—pinhole, ray-based formulations, or distortion-aware mappings—introduce distinct trade-offs in sensitivity, robustness, and calibration requirements. Pinhole models offer analytic simplicity but are more sensitive to extrinsic misalignment; ray-based formulations increase robustness at the cost of computational complexity; distortion-aware models improve accuracy near field-of-view boundaries but demand more elaborate calibration. These choices determine how efficiently the system converts measured phase into reliable depth and directly constrain the magnitude of propagated uncertainty.
The triangulation geometry also sets the achievable measurement precision. A larger baseline increases depth sensitivity but amplifies occlusions and makes calibration more demanding. Conversely, narrow-baseline setups improve coverage and reduce occlusions but decrease height resolution. These geometric trade-offs propagate directly into the phase-to-height mapping and influence the stability of phase unwrapping in later stages. In practice, the triangulation model defines the upper bounds of measurement fidelity and explains why calibration accuracy is a dominant factor in DFPP performance.
Because of this, the geometric foundations discussed here underpin all subsequent sections. The calibration procedures in Section 3 rely directly on the projection models of Equations (1) and (2), and the precision limits discussed in applications and challenges later in the review are fundamentally governed by the baseline, triangulation angle, and sensitivity relationships established in this subsection.

2.2. Fringe Formation, Phase Encoding, and Phase Extraction

The formation of sinusoidal fringe patterns and their spatial–temporal modulation define the information content used for three-dimensional reconstruction in DFPP. The projector generates periodic structured illumination that encodes phase on the object surface, while the camera records the resulting intensity distribution. The captured signal is commonly modeled as
I(u, v, t) = A(u, v) + B(u, v) cos[φ(u, v) + δ(t)],
where A(u, v) represents the background illumination, B(u, v) denotes the modulation amplitude governing fringe contrast, φ(u, v) is the phase associated with surface height, and δ(t) corresponds to the applied phase shift. The term B(u, v) depends on surface reflectivity, projector contrast, and camera dynamic range, and directly influences the phase signal-to-noise ratio. Reduced modulation amplitude, caused by low reflectance or strong specularity, increases phase noise σ_φ and degrades measurement robustness.
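The qualitative link between modulation amplitude and phase noise can be made concrete with a commonly cited first-order result for N-step phase shifting (not derived in the text; it assumes additive camera intensity noise of standard deviation σ_I):

```latex
\sigma_{\varphi} \approx \frac{\sqrt{2}\,\sigma_I}{\sqrt{N}\,B(u,v)}
```

Under this approximation, halving the modulation amplitude doubles the phase noise, while quadrupling the number of phase steps halves it.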
Spatial modulation determines the projected fringe frequency and orientation, thereby controlling the trade-off between phase sensitivity and ambiguity. Low spatial frequencies yield robust phase estimates with a wide unambiguous range, whereas higher spatial frequencies increase depth sensitivity but are limited by projector resolution, contrast transfer, and the Nyquist sampling condition. Excessively high frequencies are more susceptible to defocus, modulation transfer degradation, and phase discontinuities. Multi-frequency spatial modulation mitigates these limitations by combining coarse frequencies for reliable ambiguity resolution with fine frequencies for enhanced precision [21]. As a result, spatial frequency selection plays a critical role in both phase stability and subsequent unwrapping performance.
Temporal modulation employs controlled phase shifts δ_n = 2πn/N to extract wrapped phase values. In phase-shifting profilometry, the wrapped phase is computed as
φ(u, v) = −arctan[ (Σ_n I_n sin δ_n) / (Σ_n I_n cos δ_n) ],
where N denotes the number of projected patterns. Typical implementations use N = 3 , 4 , or more steps, with larger N improving noise immunity at the expense of increased acquisition time [22]. In dynamic scenes, shorter sequences reduce motion artifacts but offer lower robustness. Temporal encoding further enables multi-frequency phase unwrapping, in which coarse patterns guide the unwrapping of fine patterns to extend the unambiguous measurement range without degrading sensitivity [23,24]. Representative spatial and temporal fringe modulation strategies are illustrated in Figure 3.
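The coarse-to-fine idea behind multi-frequency temporal unwrapping can be sketched in a few lines: the unambiguous coarse phase, scaled by the frequency ratio, predicts the absolute fine phase, and the integer fringe order is the rounded gap to the wrapped fine phase. The frequencies and noiseless data below are illustrative assumptions.

```python
import numpy as np

def temporal_unwrap(phi_fine_w, phi_coarse, scale):
    """Coarse-to-fine temporal unwrapping: scale*phi_coarse predicts the
    absolute fine phase; the fringe order is the rounded gap to phi_fine_w."""
    k = np.round((scale * phi_coarse - phi_fine_w) / (2.0 * np.pi))
    return phi_fine_w + 2.0 * np.pi * k

x = np.linspace(0.0, 1.0, 512)
phi_abs = 40.0 * np.pi * x                   # absolute fine-frequency phase (20 fringes)
scale = 20.0                                 # fine/coarse frequency ratio
phi_coarse = phi_abs / scale                 # one coarse fringe across the field: unambiguous
phi_fine_w = np.angle(np.exp(1j * phi_abs))  # wrapped fine phase
phi_u = temporal_unwrap(phi_fine_w, phi_coarse, scale)
```

In practice the coarse phase is itself noisy, so the rounding step fails once the coarse-phase error times the scale approaches π, which is why frequency hierarchies use moderate ratios per level.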
Hardware characteristics impose additional constraints on modulation choices. Projector gamma nonlinearity reduces sinusoidal fidelity, while camera MTF and noise characteristics limit the usable fringe contrast. Defocus of either device further lowers B(u, v), increasing σ_φ and degrading phase quality. As a result, the theoretical advantages of high-frequency or multi-step modulation must be balanced against projector–camera bandwidth, exposure range, and environmental illumination. These interactions highlight why modulation design cannot be separated from hardware considerations and why certain patterns are preferred in industrial settings where robustness outweighs ideal sensitivity.
Recent advances in deep learning have expanded modulation and extraction strategies. Convolutional networks can infer depth from single patterns, bypassing classical temporal phase shifting [25,26]. Learned modulation methods improve contrast, reduce the influence of ambient illumination, and compensate for nonlinear projector response [27]. Composite encoding approaches incorporate multiple orientations, frequencies, or binary cues within a single pattern and use neural networks for phase retrieval [28,29]. Hybrid codes combining Gray patterns, sinusoidal fringes, and spatial–temporal multiplexing achieve absolute phase with fewer projections and enhanced robustness [30,31,32].
The modulation and extraction principles presented in this subsection are essential for understanding the diversity of unwrapping methods reviewed later. Choices in spatial or temporal modulation affect phase ambiguity, noise propagation, and sensitivity, thereby influencing the stability of unwrapping and the reliability of the reconstructed surface. These relationships explain why modern DFPP algorithms increasingly integrate learned priors, adaptive modulation, or multi-frequency cues to compensate for optical imperfections, surface reflectivity variations, and hardware constraints.

2.3. Phase Unwrapping, Ambiguity Resolution, and Metrological Sensitivity

Phase unwrapping is a central component of DFPP because the extracted phase from Equation (9) is wrapped within the interval (−π, π]. The 2π discontinuity arises from the periodic nature of the sinusoidal fringes, and resolving this ambiguity is essential for converting wrapped phase into absolute height. The wrapped phase φ_w(u, v) satisfies
φ_w = φ − 2π k,
where k is an integer that must be correctly determined to recover the continuous phase field φ . Incorrect estimation of k produces height discontinuities, spikes, or globally shifted surfaces. These effects are more pronounced on surfaces with steep gradients, low reflectivity, or local discontinuities, where the wrapped phase changes rapidly and local phase variation becomes unreliable for spatial unwrapping.
Spatial unwrapping approaches rely on path-following or graph-based propagation, assuming that the phase field varies smoothly across neighboring pixels. However, regions of low modulation B(u, v), shadows, reflectivity changes, or saturation violate smoothness assumptions and make spatial unwrapping unstable. In industrial applications with sharp edges, specular materials, or complex textures, purely spatial unwrapping may propagate local errors across large regions. This behavior explains why multiple-frequency and hybrid methods remain widely used in high-precision environments.
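A toy illustration of path-following spatial unwrapping, and of the smoothness assumption it rests on, is to unwrap the first row of a wrapped-phase map and then each column; the synthetic surface and grid size below are invented for the example, and real scenes with shadows or low-modulation holes would break this naive scheme.

```python
import numpy as np

def spatial_unwrap(phi_w):
    """Naive row-then-column path-following unwrap; assumes a smooth phase field."""
    top = np.unwrap(phi_w[0])          # unwrap along the first row
    cols = np.unwrap(phi_w, axis=0)    # make every column continuous
    return cols - cols[0] + top        # stitch columns onto the unwrapped row

y, x = np.mgrid[0:64, 0:64] / 64.0
phi_true = 6.0 * np.pi * (x + 0.5 * y**2)   # smooth surface with many wraps
phi_w = np.angle(np.exp(1j * phi_true))
phi_u = spatial_unwrap(phi_w)
```

A single bad pixel on the unwrapping path would offset everything downstream by a multiple of 2π, which is exactly the error-propagation behavior the text describes.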
Temporal and multi-frequency strategies address these challenges by projecting a sequence of phase-shifted or frequency-varied patterns. Coarse frequencies provide a large unambiguous range but lower sensitivity, while fine frequencies offer high precision but narrow unambiguous range. Their combination allows coarse-to-fine unwrapping, where the low-frequency phase guides unwrapping of the high-frequency phase [23,24]. Gray-code sequences further assist ambiguity resolution by identifying fringe order before applying phase-shifting profilometry (PSP), improving robustness to intensity discontinuities and shadows, especially in scenes with large depth variations or occlusions.
Hybrid unwrapping—integrating Gray patterns, synthetic wavelengths, binary coding, and multi-frequency sinusoidal illumination—combines the reliability of discrete codes with the precision of sinusoidal phase [30,31,32]. These approaches achieve absolute phase with fewer projections and improved resilience to reflectivity variation, making them suitable for dynamic scenes or field measurements where stability and acquisition time are both constrained.
Recent learning-based methods have introduced data-driven unwrapping strategies. Deep networks trained on synthetic or experimental wrapped-phase maps can predict the unwrapped phase or generate guidance fields to improve conventional unwrapping [33]. Physics-informed models embed triangulation constraints, propagation rules, and smoothness priors into their architecture [34], allowing them to recover phase across discontinuities, low-modulation areas, or ambiguous regions that challenge classical methods. These approaches bridge optical modeling and machine learning and are increasingly used in scenarios with strong reflectivity variation or motion, where traditional projections cannot be stabilized.
The metrological sensitivity of unwrapping is tightly linked to phase noise, modulation quality, fringe frequency, and triangulation geometry. Phase noise σ_φ propagates through the unwrapping stage, amplifying discontinuities when the phase gradient approaches π. High-frequency fringes increase depth sensitivity but raise the risk of phase jumps exceeding π, especially at steep surface transitions or under defocus. Similarly, a large triangulation angle increases height sensitivity but also increases occlusions, potentially causing phase discontinuities that complicate unwrapping. These relationships illustrate why unwrapping performance ultimately depends on the balance between sensitivity and robustness set by the modulation and geometry choices described earlier.
Phase errors subsequently propagate to height errors, linking unwrapping stability to final depth precision. Calibration inaccuracies, uncorrected geometric distortion, projector gamma nonlinearity, and environmental vibrations introduce systematic biases that cannot be corrected by unwrapping alone [35]. Measurement volume and spatial resolution further constrain performance: larger volumes reduce the effective pixel resolution on the object, increasing σ_φ and decreasing unwrapping stability [36,37]. These effects explain why microscopic DFPP systems achieve sub-micrometer precision, whereas large-scale or outdoor configurations typically reach millimeter-level accuracy [1].
Relative to other metrology methods such as laser triangulation, passive stereovision, time-of-flight, digital holography, and digital image correlation (DIC), DFPP offers high full-field accuracy but faces trade-offs in speed, reflectivity robustness, and sensitivity to environmental factors [38,39]. The evolution from heuristic unwrapping techniques to multi-frequency, hybrid, and now learning-based methods has expanded the applicability of DFPP to industrial inspection, biomedical imaging, and field scenarios, bridging physical modeling with data-driven refinements [1,33,34].
The principles described in this subsection establish the theoretical foundations required to understand the modern unwrapping algorithms analyzed in Section 4, the application-specific constraints discussed in Section 5, and the metrological challenges and uncertainty sources reviewed in Section 6. Phase ambiguity, noise propagation, and geometric constraints collectively define the practical limits of DFPP accuracy and motivate ongoing research toward more stable, adaptive, and model-informed unwrapping strategies.

3. System Components and Architecture

Digital fringe projection profilometry (DFPP) relies on a tightly coupled interplay between projection hardware, imaging sensors, geometric configuration, optical design, synchronization electronics, and calibration procedures. Each subsystem imposes metrological constraints that govern final accuracy, robustness, speed, and operational range. Recent advances in projection devices, sensor architectures, and computational design have expanded DFPP capabilities across industrial, biomedical, and scientific applications [28,40,41,42,43,44]. This section provides a comparative and interpretive synthesis of system components, emphasizing metrological implications and design trade-offs. Given the scope of this review, methodological descriptions are intentionally concise, and readers seeking detailed derivations or implementation details are referred to the cited literature.

3.1. Projection Technologies

Modern DFPP systems employ four principal projection technologies: Digital Light Processing (DLP), Liquid Crystal Display (LCD), Liquid Crystal on Silicon (LCoS), and emerging MEMS-based binary or composite mask systems. Performance varies significantly in terms of contrast, modulation transfer function (MTF), refresh rate, wavelength stability, and susceptibility to nonlinearities.
DLP projection relies on micro-mirror arrays that provide high contrast, fast binary switching exceeding 10 kHz, and robust phase stability [40,45,46]. However, the finite tilt of mirrors introduces depth-dependent geometric distortion, mitigated by pixel remapping and diamond-grid compensation [47]. DLP is preferred for high-speed measurements, dynamic scenes, and industrial environments due to strong SNR and stability.
LCD projectors employ continuous grayscale modulation without dithering artifacts but suffer from lower contrast ratios (1000:1–2000:1) and higher sensitivity to ambient light [45]. Their reduced fringe visibility leads to larger phase noise, making LCD more suitable for educational or low-cost systems rather than precision metrology.
LCoS projectors provide the highest spatial resolution (up to 4K), near-zero pixel gaps, and excellent MTF for complex fringe synthesis [40]. Their reflective architecture supports holographic and multi-level phase modulation but at the cost of lower refresh rates (60–120 Hz), making them less suited for high-speed DFPP applications.
Recent innovations include intermediate-bit projection, binary half-truncated fringes, and binary phase masks that boost contrast while retaining high-speed operation [42,48]. MEMS-based fringe superimposition further increases spatial resolution [43]. Deep-learning-driven projector precoding corrects nonlinearities and enhances pattern fidelity in non-ideal conditions [49,50]. A comparative summary of the main DFPP projection technologies and their key performance characteristics is provided in Table 1.
From a metrological perspective, DLP maximizes phase stability and noise robustness; LCoS maximizes spatial resolution; LCD minimizes cost; and MEMS/binary-mask systems maximize speed and suppress nonlinear gamma artifacts [40,42,48].

3.2. Imaging Sensors

DFPP performance is strongly influenced by imaging sensor architecture. CCD, CMOS, and hybrid 3CCD or TDI sensors define noise characteristics, spectral response, shutter behavior, dynamic range, and temporal resolution.
CCD sensors exhibit superior low-noise behavior, uniform charge transfer, and strong spectral sensitivity, enabling high-precision measurements in low-light settings [51,52,53]. However, CCDs suffer from low frame rates and higher power consumption. They remain dominant in microscale, high-precision, and holographic applications [54].
CMOS sensors now match or surpass CCDs in resolution (>20 MP), speed (hundreds to thousands of fps), and integration (on-chip ADC, exposure control, noise shaping) [55,56,57]. CMOS excels in dynamic measurements, high-speed deformation, and event-driven reconstruction [58].
Hybrid architectures (3CCD, CMOS-on-CCD, and TDI sensors) offer improved color fidelity, reduced crosstalk, and enhanced linearity [54,59]. They support simultaneous shape + deformation measurement and are preferred for high-end industrial inspection. A comparison of CCD, CMOS, and hybrid imaging sensors and their main characteristics for DFPP systems is summarized in Table 2.
Metrologically, CCDs minimize σ_I but constrain frame rate; CMOS maximizes temporal resolution at the cost of increased noise; hybrids maximize radiometric linearity and color decoupling.

3.3. Geometric Configurations

DFPP systems employ monocular, binocular, and multi-projector/multi-view configurations depending on occlusion robustness, precision requirements, and scene complexity. Monocular systems are compact, easy to calibrate, and cost-effective. However, they suffer from occlusions and reflectance-induced dropout. They are ideal for simple objects and controlled lighting. Binocular systems add a second camera, improving phase consistency, reducing sensitivity to surface reflectance, and stabilizing phase unwrapping in discontinuous surfaces [60]. Their calibration is more demanding but provides improved robustness.
Multi-projector systems reduce shadows and occlusions by illuminating from multiple angles [60]. Emerging dual-projector deep-learning models achieve single-shot full-field reconstruction by learning multi-view fusion [61]. A comparison of the main geometric configurations used in DFPP systems is presented in Table 3.

3.4. Optical Design and Metrological Constraints

Optical design determines MTF, depth-dependent magnification, distortion, and calibration stability. DFPP systems employ conventional optics, telecentric lenses, Scheimpflug geometries, chromatic confocal elements, and multi-axis setups, whose main characteristics, strengths, and limitations are summarized in Table 4.
Telecentric lenses maintain constant magnification and suppress perspective distortion, improving calibration repeatability and phase–height linearity [62,63]. They are ideal for precision surface metrology.
Scheimpflug imaging enables large depth-of-field and tilted projections, reducing multi-reflection artifacts and occlusions [64,65]. It is suitable for large objects and industrial scenes.
Multi-axis configurations combine several tilted cameras and projectors, improving phase-space correlation and reconstruction from multiple viewpoints [66,67].

3.5. Synchronization and Timing

Temporal synchronization is critical in dynamic or high-speed DFPP. FPGA-based controllers, event-driven sensors, and adaptive exposure help maintain phase coherence [23,58,68].
High-speed systems achieve 500–100,000 fps using binary fringes, single-shot deep-learning demodulation, and TDI sensors [29]. Hardware desynchronization produces phase jumps and motion artifacts; temporal phase-order tracking mitigates these issues [23].

3.6. Calibration

Calibration couples geometry, photometry, and physical modeling. Accurate calibration minimizes geometric bias, improves linearity, and stabilizes phase–height mapping [41,69,70].
Geometric calibration determines intrinsic/extrinsic parameters through checkerboard or phase-based targets [71]. Advanced ray-based models handle nonlinear distortions and non-ideal lens behavior.
Photometric calibration corrects gamma nonlinearity, vignetting, and color-channel coupling. BOS-based thermal correction compensates air turbulence [44]. Deep-learning precoding corrects defocus and nonlinearity [50].
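As an illustration of the gamma-correction step in photometric calibration, the sketch below fits a power-law projector response in log-log space and pre-distorts the commanded intensities; the γ = 2.2 response, the dark-level threshold, and the pure power-law model are hypothetical simplifications, and real pipelines typically use measured look-up tables or polynomial response models.

```python
import numpy as np

def estimate_gamma(cmd, meas):
    """Fit meas ~ cmd**gamma by linear regression in log-log space."""
    mask = (cmd > 0.05) & (meas > 0.05)   # avoid log blow-up near black
    slope, _ = np.polyfit(np.log(cmd[mask]), np.log(meas[mask]), 1)
    return slope

cmd = np.linspace(0.01, 1.0, 100)   # commanded (normalized) intensities
meas = cmd ** 2.2                   # simulated nonlinear projector response
gamma = estimate_gamma(cmd, meas)
pre = cmd ** (1.0 / gamma)          # pre-distorted command values
lin = pre ** 2.2                    # response to pre-distorted input: ~linear
```

Projecting the pre-distorted patterns through the same response yields an approximately linear output, restoring sinusoidal fidelity of the fringes.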

4. Phase Processing Techniques

Phase processing constitutes the core computational pipeline in structured light systems, transforming raw fringe patterns into accurate 3D geometric representations. This section comprehensively reviews contemporary methods for phase acquisition, unwrapping, and optimization, emphasizing recent advances in hardware synchronization, nonlinearity compensation, and single-shot approaches.

4.1. Phase Acquisition and Extraction

Phase acquisition methods determine the fundamental trade-off between measurement speed, accuracy, and robustness in fringe projection systems. Modern approaches can be classified into temporal methods (Phase-Shifting Profilometry, PSP) and spatial methods (Fourier Transform Profilometry, FTP), each with distinct advantages and application domains. Together with phase unwrapping, phase acquisition constitutes the computational backbone for recovering continuous phase information from intensity-based measurements in optical metrology, interferometry, communication systems, and high-dimensional signal processing. Recent contributions span classical iterative optimization, multi-frequency strategies, deep learning, quantum optimization, and application-specific methods across optics, imaging, and communications; this section synthesizes them in a single unified narrative, supported by a comparative table of representative methods.
Phase acquisition (phase retrieval) has evolved through optimization-based approaches, iterative orthogonal normalization, and majorization–minimization frameworks that enable accurate reconstruction in the presence of noise and incomplete measurements [72,73,74]. Deep learning and diffusion-based models further extend retrieval performance, enabling high-fidelity estimations even in nonlinear or ill-posed scenarios [75,76]. Specialized domain-oriented algorithms, including accelerated Griffin–Lim solvers and quantum phase factor finding for quantum signal processing, address applications requiring provably convergent or extremely fast solvers [74,77]. Applications span spatial phase-shift shearography [78], fiber-optic sensing [79,80], GPS and spread-spectrum signal acquisition [81,82,83,84], and spaceborne gravitational-wave ranging where multi-frequency phase measurement is essential [85].
Phase unwrapping transforms wrapped phase in [−π, π] into continuous phase and is challenged by noise, discontinuities, residues, and low-modulation regions. Spatial unwrapping—including path-following, minimum-norm, and residue-driven approaches—remains foundational [86,87,88,89,90,91,92]. Temporal approaches rely on multi-frequency or coded sequences and provide ambiguity resolution through frequency hierarchies [86,93,94]. Hybrid and quality-guided schemes integrate spatial–temporal cues through reliability maps, improving robustness under moderate noise [95,96]. Deep learning has transformed the field by predicting unwrapped phase directly, detecting discontinuities, or producing reliability masks, outperforming classical algorithms in speed and robustness [97,98,99,100,101]. Recent quantum optimization formulations have also been investigated for high-noise interferometry [102]. A representative comparison of phase acquisition and phase unwrapping methods, including their main characteristics and references, is summarized in Table 5.

4.1.1. Phase-Shifting Profilometry (PSP)

Phase-shifting profilometry remains the gold standard for high-precision 3D measurement due to its superior accuracy and noise immunity. Contemporary PSP implementations employ N-step algorithms (typically N = 3, 4, or 5) that project sequentially phase-shifted fringe patterns to extract wrapped phase information [103,104,105].
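The core N-step computation can be sketched in a few lines. The sketch below assumes the usual fringe model I_n = A + B·cos(φ − 2πn/N) with equally spaced shifts; all names and values are illustrative, not taken from any cited implementation:

```python
import numpy as np

def psp_wrapped_phase(frames):
    """N-step phase-shifting: recover the wrapped phase from N images.

    Assumes frame n follows I_n = A + B*cos(phi - 2*pi*n/N) with
    equally spaced shifts (illustrative model, N >= 3).
    Returns the wrapped phase in [-pi, pi].
    """
    N = frames.shape[0]
    deltas = 2 * np.pi * np.arange(N) / N
    # Least-squares solution: phi = atan2(sum I_n*sin(d_n), sum I_n*cos(d_n))
    num = np.tensordot(np.sin(deltas), frames, axes=(0, 0))
    den = np.tensordot(np.cos(deltas), frames, axes=(0, 0))
    return np.arctan2(num, den)

# Synthetic check: a known, non-wrapping phase ramp is recovered exactly.
H, W, N = 32, 32, 4
phi_true = np.tile(np.linspace(-2.0, 2.0, W), (H, 1))
shifts = 2 * np.pi * np.arange(N) / N
frames = 0.5 + 0.4 * np.cos(phi_true[None] - shifts[:, None, None])
phi = psp_wrapped_phase(frames)
```

For N = 4 this reduces to the familiar φ = atan2(I_1 − I_3, I_0 − I_2); the general form is what N-step algorithms in the works above build upon.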
Recent advances focus on equal-step phase-shifting algorithms that effectively suppress motion-induced phase errors. Wu et al. [103] demonstrated that algorithms such as Stoilov and GEPS (Generalized Equal-step Phase-Shifting) provide superior robustness against harmonic noise and require fewer images than traditional approaches. The Stoilov algorithm exhibits particularly high noise immunity, while GEPS achieves comparable accuracy with reduced computational complexity [104].
For dynamic measurements, Liu et al. [105] proposed a phase deviation elimination method that compensates for object motion during multi-frame capture. This approach significantly extends PSP applicability to industrial scenarios involving moving parts or vibrating surfaces. Multi-frequency hierarchical methods further enhance measurement range by combining coarse and fine frequency patterns [106,107].
Nonlinearity correction represents a critical challenge in PSP systems. Gamma distortion from projectors and cameras introduces systematic phase errors that degrade measurement accuracy. Wang et al. [108] developed a shifted-phase histogram equalization method that corrects nonlinear intensity response without requiring explicit gamma calibration. More sophisticated approaches employ extended Kalman filtering [109] or stochastic gradient descent optimization [110] to estimate and compensate phase shift deviations in real-time.
Deep learning integration has revolutionized PSP error correction. Li et al. [111] demonstrated that attention-based U-Net architectures can perform pixel-wise gamma correction, reducing phase error standard deviation by up to 40% compared to classical methods. These neural approaches generalize across different hardware configurations without requiring recalibration.

4.1.2. Fourier Transform Profilometry (FTP)

Fourier Transform Profilometry enables single-shot 3D reconstruction by analyzing the frequency spectrum of a single deformed fringe pattern. The principal advantage of FTP lies in its measurement speed, making it ideal for dynamic scenes where multi-frame capture is impractical [112,113].
The FTP pipeline involves: (1) Fourier transformation of the captured fringe image, (2) spectral filtering to isolate the fundamental frequency component, (3) inverse Fourier transformation, and (4) phase extraction via arctangent computation. However, classical FTP suffers from inherent limitations including reduced spatial resolution due to spectral filtering and sensitivity to spectral overlap in complex geometries.
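A minimal sketch of these four steps, assuming a horizontal carrier I = A + B·cos(2πf₀x + φ(x, y)); the parameter names `f0` and `bw` are illustrative assumptions, not standard API names:

```python
import numpy as np

def ftp_phase(image, f0, bw):
    """Single-shot FTP along the x axis (illustrative sketch).

    image: fringe pattern I = A + B*cos(2*pi*f0*x + phi(x, y)).
    f0:    carrier frequency in cycles/pixel (assumed known).
    bw:    half-width of the spectral window around +f0.
    Returns the wrapped phase phi of the deformed fringe.
    """
    W = image.shape[1]
    spec = np.fft.fft(image, axis=1)                      # (1) transform
    freqs = np.fft.fftfreq(W)
    mask = (np.abs(freqs - f0) < bw).astype(float)        # (2) keep +f0 lobe
    analytic = np.fft.ifft(spec * mask[None, :], axis=1)  # (3) back-transform
    carrier = np.exp(-2j * np.pi * f0 * np.arange(W))
    return np.angle(analytic * carrier[None, :])          # (4) remove carrier, arctan
```

The spectral window must isolate the +f₀ lobe from both the DC term and the conjugate lobe, which is precisely where classical FTP loses spatial resolution and becomes sensitive to spectral overlap.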
Recent methodological advances address these limitations through hybrid approaches. Seok et al. [112] proposed a single-shot calibration method combining FTP with geometric constraints, achieving calibration accuracy comparable to multi-shot techniques. Wang [113] introduced a fusion framework that combines FTP with line clustering algorithms, enhancing reconstruction quality for objects with sharp edges and discontinuities.
Fractional Fourier Transform (FrFT) approaches offer enhanced flexibility for phase extraction. Yang et al. [114] demonstrated that FrFT-based methods can simultaneously recover absolute phase and amplitude information from a single fringe pattern, mitigating the spectral ambiguity inherent in classical FTP. Auto-supervised deep learning frameworks further improve FTP robustness in the presence of noise and saturation [115].

4.1.3. Hybrid and Adaptive Methods

State-of-the-art systems increasingly employ hybrid strategies that combine temporal and spatial phase extraction to leverage the strengths of both approaches. Meng et al. [116] proposed a Hilbert transform-based method that enables efficient phase unwrapping with fewer fringe patterns, bridging the gap between PSP accuracy and FTP speed.
Adaptive algorithms dynamically adjust processing parameters based on local surface characteristics. Quality-guided approaches prioritize high-reliability regions during phase unwrapping, significantly reducing error propagation in challenging scenarios with discontinuities or low SNR [117,118]. These methods construct pixel-wise reliability maps based on fringe modulation, gradient magnitude, or neural predictions to guide the unwrapping path.

4.2. Phase Unwrapping Methods

Phase unwrapping resolves the fundamental 2π ambiguity inherent in arctangent-based phase extraction, converting wrapped phase (constrained to [−π, π]) into continuous absolute phase. The unwrapping problem becomes particularly challenging in the presence of noise, discontinuities, or low-modulation regions, requiring sophisticated algorithms that balance computational efficiency with robustness.

4.2.1. Temporal Phase Unwrapping (TPU)

Temporal phase unwrapping exploits information from multiple fringe patterns captured sequentially, using either multi-frequency or multi-wavelength encoding strategies. TPU methods are inherently robust against spatial discontinuities since they perform unwrapping on a pixel-by-pixel basis without requiring spatial continuity assumptions.
Multi-frequency heterodyne approaches represent the most widely adopted TPU strategy. Wu et al. [106] provided a comprehensive comparison between two-frequency phase-shifting and Gray-coded methods for dynamic measurements, demonstrating that two-frequency approaches achieve superior accuracy with fewer patterns. The hierarchical refinement proceeds from low-frequency (large synthetic wavelength) to high-frequency patterns, with each step refining the fringe order estimate [119].
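In its simplest two-frequency form, the refinement step amounts to estimating the integer fringe order from the scaled coarse phase. A minimal pixel-wise sketch, assuming the unit-frequency phase is already absolute (its single fringe spans the whole field) and with illustrative names:

```python
import numpy as np

def two_freq_unwrap(phi_h, phi_l, ratio):
    """Two-frequency temporal phase unwrapping, pixel by pixel.

    phi_h: wrapped high-frequency phase in [-pi, pi).
    phi_l: coarse unit-frequency phase, assumed absolute (one fringe
           spans the whole field, so it never wraps).
    ratio: f_high / f_low, e.g. 16.
    """
    # Scale the coarse phase up to the fine scale, then round the gap
    # to the wrapped fine phase to the nearest whole fringe order k.
    k = np.round((ratio * phi_l - phi_h) / (2 * np.pi))
    return phi_h + 2 * np.pi * k

# Synthetic check spanning many periods of the fine pattern.
Phi_true = np.linspace(-40.0, 40.0, 1000)   # absolute fine phase
phi_h = np.angle(np.exp(1j * Phi_true))     # wrapped fine phase
phi_l = Phi_true / 16.0                     # ideal coarse phase
Phi = two_freq_unwrap(phi_h, phi_l, ratio=16)
```

The rounding is correct only while the coarse-phase error stays below π/ratio, which is why frequency-ratio selection under noise is treated probabilistically in works such as [121].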
Advanced TPU algorithms incorporate error compensation mechanisms. An et al. [23] introduced an unequal phase-shifting code that enables simultaneous temporal unwrapping with minimal pattern count, achieving measurement speeds exceeding 500 fps. Deep learning frameworks unify diverse TPU strategies within a single computational architecture [120], learning optimal unwrapping policies from synthetic training data that generalize across different frequency configurations.
The selection of optimal frequency ratios critically influences TPU robustness. Uhlig and Heizmann [121] developed a probabilistic framework for frequency design that minimizes unwrapping error under noise and maximizes the unambiguous measurement range. Their spatio-temporal hybrid approach combines temporal reliability with spatial consistency checks for enhanced accuracy.

4.2.2. Spatial Phase Unwrapping (SPU)

Spatial phase unwrapping recovers continuous phase from a single wrapped phase map by exploiting spatial phase continuity. Classical algorithms include path-following methods (e.g., Goldstein’s branch-cut algorithm), minimum-norm approaches, and quality-guided methods that prioritize unwrapping along high-reliability paths.
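Along a single scan path, the path-following idea reduces to Itoh's method: re-wrap each successive phase difference and integrate. A minimal 1D sketch:

```python
import numpy as np

def itoh_unwrap_1d(wrapped):
    """Itoh's 1D unwrapping: integrate wrapped phase differences along
    the path. Valid while true neighboring differences stay below pi;
    2D algorithms differ mainly in how they choose and protect the path."""
    d = np.angle(np.exp(1j * np.diff(wrapped)))   # re-wrap differences
    return np.concatenate(([wrapped[0]], wrapped[0] + np.cumsum(d)))
```

Branch-cut, minimum-norm, and quality-guided methods all generalize this integration to 2D while controlling how errors at residues propagate along the chosen path.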
Quality-guided unwrapping has emerged as the most robust SPU strategy for structured light applications. Fang et al. [117] proposed a residue-guided method using second differences to detect and isolate discontinuities, enabling reliable unwrapping even in the presence of sharp depth changes. Gao et al. [118] developed a parallel unwrapping algorithm based on separated continuous regions, achieving real-time performance on multi-core processors.
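A minimal sketch of the quality-guided principle (always extend the solved region through the most reliable frontier pixel) using a priority queue; here `quality` would come from fringe modulation or a neural reliability map, and the residue/branch-cut safeguards found in production unwrappers are omitted:

```python
import heapq
import numpy as np

def quality_guided_unwrap(wrapped, quality):
    """Quality-guided 2D unwrapping: flood-fill from the most reliable
    pixel, growing the solved region through the highest-quality
    frontier pixel first (illustrative sketch)."""
    H, W = wrapped.shape
    out = np.zeros_like(wrapped)
    done = np.zeros((H, W), dtype=bool)
    heap = []

    def push_neighbors(y, x):
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < H and 0 <= nx < W and not done[ny, nx]:
                # negate quality so heapq pops the best pixel first
                heapq.heappush(heap, (-quality[ny, nx], ny, nx, y, x))

    sy, sx = np.unravel_index(np.argmax(quality), (H, W))
    out[sy, sx] = wrapped[sy, sx]
    done[sy, sx] = True
    push_neighbors(sy, sx)
    while heap:
        _, y, x, py, px = heapq.heappop(heap)
        if done[y, x]:
            continue
        # unwrap relative to the already-solved neighbor (py, px)
        step = np.angle(np.exp(1j * (wrapped[y, x] - out[py, px])))
        out[y, x] = out[py, px] + step
        done[y, x] = True
        push_neighbors(y, x)
    return out
```

Because low-quality pixels are visited last, errors at discontinuities or low-modulation regions are confined rather than propagated through the whole map.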
Deep learning has revolutionized discontinuity detection in SPU. Wu et al. [122] demonstrated that convolutional neural networks can predict phase discontinuities in SAR interferograms with accuracy exceeding 90%, outperforming traditional cost flow methods. Zhong et al. [123] extended this approach to fringe projection systems, introducing a discontinuity-guided two-dimensional unwrapping method that segments the phase map based on neural discontinuity predictions before applying region-based unwrapping.
Modified minimum cost flow algorithms incorporate reliability metrics for enhanced performance. Zeng et al. [124] proposed a method that detects reliable pixels through local optimization after initial unwrapping, then applies minimum cost flow exclusively within high-confidence regions. This hybrid strategy reduces computational complexity while maintaining unwrapping accuracy in challenging scenarios.

4.2.3. Hybrid Temporal–Spatial Approaches

Contemporary systems increasingly adopt hybrid unwrapping strategies that combine temporal and spatial information for optimal performance. Ruan et al. [125] demonstrated that spatial ternary coding combined with circular fringe projection enables absolute phase retrieval with improved efficiency compared to pure temporal or spatial methods.
Hierarchical multi-frequency unwrapping exemplifies successful temporal–spatial fusion. Li et al. [126] and Xing et al. [127] developed coarse-to-fine unwrapping schemes where low-frequency temporal unwrapping provides global guidance, while high-frequency spatial refinement preserves local details. Zhang et al. [128] introduced SFNet, a deep neural architecture that performs efficient and robust hierarchical unwrapping by learning optimal frequency combinations and spatial propagation patterns.

4.3. Single-Shot and Multi-Shot Approaches

The fundamental trade-off between acquisition speed and measurement precision defines the choice between single-shot and multi-shot structured light systems. Recent advances in spatial encoding, spectral multiplexing, and deep learning have significantly enhanced single-shot capabilities, while multi-shot methods continue to evolve toward higher precision and efficiency.

4.3.1. Single-Shot Methods

Single-shot approaches project and capture a single pattern, enabling measurement of dynamic scenes or moving objects. Spatial encoding strategies include color multiplexing (RGB, HSV), geometric coding (binary patterns, Gray codes), and pseudo-random patterns that embed depth information in spatial intensity variations.
Color-based single-shot methods offer high spatial resolution but suffer from chromatic crosstalk and sensitivity to surface color variations. Ochoa and García-Isáis [129] analyzed phase errors introduced by color phase-shifting profilometry on colored objects, identifying systematic biases that require wavelength-dependent calibration. Advanced color decoding algorithms employ spectral unmixing and machine learning to mitigate crosstalk effects.
Deep learning has emerged as a transformative technology for single-shot 3D reconstruction. Neural networks can learn direct mappings from deformed fringe patterns to 3D geometry, bypassing traditional phase extraction and unwrapping stages [60]. Deng et al. [115] proposed SEC-UNet3+, a specialized architecture for wrapped phase extraction from single-shot fringes, achieving accuracy comparable to four-step phase-shifting with 75% reduction in acquisition time.
Dual-projector single-shot systems leverage spatial baseline between two projection sources to resolve phase ambiguity. Li et al. [60] demonstrated that deep learning-driven dual-view reconstruction enables one-shot 3D measurement with submillimeter precision, combining the speed of single-shot approaches with the accuracy of stereovision.

4.3.2. Multi-Shot Optimization

Multi-shot methods project sequential patterns to enhance measurement accuracy through temporal redundancy. Recent research focuses on minimizing pattern count while maintaining precision. Cheng et al. [130] introduced compressive phase-shifting profilometry, which reduces the required number of patterns by 50% through optimized frequency selection and computational reconstruction.
Temporal encoding strategies have evolved toward higher efficiency. Han et al. [131] proposed complementary Gray code combined with phase-shifting, achieving absolute phase measurement with only six patterns. Wei et al. [132] developed a dynamic phase-differencing method with number-theoretical phase unwrapping and interleaved projection, enabling high-speed measurement of moving objects with minimal motion artifacts.
Synchronization and temporal optimization are critical for multi-shot accuracy. An et al. [23] introduced a phase-shifting temporal phase unwrapping algorithm that recognizes fringe order in real-time, supporting measurement speeds up to 1000 fps. Hardware-level synchronization using FPGAs ensures precise temporal alignment between projection and capture [68].
The choice between single-shot and multi-shot methods depends on application needs: multi-shot PSP (4–8 patterns) offers the highest accuracy for static metrology; single-shot FTP or learning-based approaches enable real-time 3D capture for dynamic scenes; hybrid 2–3-pattern strategies balance speed and precision in limited-motion scenarios; and industrial inspection often favors synchronized multi-shot schemes for high throughput with submillimeter precision. Advances in computational intelligence and optical engineering continue to narrow the gap between both strategies, providing increasing flexibility in structured-light system design.

5. Applications and Case Studies

As a preliminary overview, Table 6 presents the main use-case categories in which DFPP is currently deployed, together with representative works that exemplify typical system requirements and operational constraints. These use cases are summarized in the table before being discussed in greater depth in the text that follows. Building on this tabulated perspective, the comparison reveals how the applicability of DFPP across industrial, biomedical, cultural heritage, and microscale domains reflects both the maturity of the technique and the markedly different conditions that shape its practical performance in each context. Industrial and aerospace applications exploit the high speed and robustness of DFPP to deliver in-line inspection with sub-millimeter precision in harsh environments, although reflectivity, vibration, and thermal drift demand HDR projection, adaptive exposure, and multi-view redundancy.
Biomedical and healthcare workflows prioritize safety, non-contact acquisition, and anatomical fidelity; in these settings, the strengths of DFPP, such as accurate surface reproduction and seamless integration with AI-driven planning, outweigh limitations posed by occlusions or intraoral reflectivity. Cultural heritage preservation benefits from the ability of DFPP to capture delicate surfaces without physical contact, yet field operation introduces challenges such as uncontrolled lighting and environmental instability. At the microscale, DFPP enables extremely high-precision metrology for MEMS and semiconductor inspection, but doing so requires stricter calibration, vibration isolation, and careful management of depth-of-field constraints. Taken together, this case-study synthesis indicates that while all domains leverage the core advantages of DFPP, including high spatial resolution, real-time capability, and phase-based sensitivity, each application space imposes its own balance between accuracy, robustness, portability, and cost, underscoring DFPP as a flexible metrological framework rather than a single-purpose technology.

5.1. Industrial and Aerospace Applications

Digital fringe projection profilometry (DFPP) is widely adopted in industrial and aerospace metrology due to its fast, non-contact, and high-precision 3D measurement capabilities. In quality control and in-line inspection, DFPP enables real-time monitoring of additive manufacturing processes, detecting defects in powder bed layers during electron beam and laser powder bed fusion [133,134]. It also provides micrometer-scale inspection for PCB manufacturing and is effective for dimensional verification of metallic and electronic components where contact methods are impractical [135]. Advances addressing reflective surfaces—such as adaptive exposure [136], deep learning reconstruction [11,137], and multi-view systems [65]—support reliable inspection of polished aerospace components [138]. In aerospace component inspection, turbine blade metrology achieves chord length errors as low as 0.001% and thickness errors under 1% [29,139,140]. Hybrid systems integrating DFPP with conoscopic holography improve inspection in reflective or occluded regions [141].
Speed–accuracy trade-offs are addressed through single-shot and composite-pattern methods reaching up to 100,000 fps with sub-millimeter precision [28,29,164,165]. Robust calibration improves consistency across environments [65,166], supporting deployment in demanding industrial conditions. Robotic vision applications leverage high-resolution real-time 3D sensing [167,168] and deep learning-based single-shot reconstruction [13,28,161,169,170]. Synthetic datasets generated through virtual fringe projection and digital twins improve deep learning model generalization [160,171]. DFPP-based inspection systems deliver fast throughput and micrometer repeatability and remain robust under environmental variability [134,135]. Cost–benefit analyses highlight reduced scrap, faster production, and strong performance for high-value aerospace and automotive components [135,139]. Traceability and certification rely on calibration with referenced artifacts to meet industrial and aerospace standards [141,166].

5.2. Biomedical and Healthcare Applications

DFPP is increasingly applied in biomedical and healthcare contexts due to its non-contact operation and high accuracy. In dentistry and maxillofacial applications, intraoral fringe projection scanning provides precise digital impressions for prosthetics and orthodontics [142,143,144,145,146]. These data integrate with virtual surgical planning (VSP) tools for implant design and surgical guide fabrication [142,144,146]. AI and AR/MR systems further enhance planning accuracy and intraoperative guidance [142,144,146], while computer-assisted implant surgery improves precision and reduces complications [142,143,144,172,173,174]. In craniofacial and orthognathic surgery, custom guides and implants derived from DFPP and CT/CBCT data improve outcomes [147,148,149,150], further supported by neural shape completion for reconstructing missing anatomy [150]. Personalized implant design extends to orthopedics [175], craniofacial reconstruction [147,148,149], breast reconstruction [176], and cochlear implant placement [177].
DFPP also supports biomechanics and tissue analysis, enabling non-invasive assessment of skin deformation, facial movement, and musculoskeletal geometry for diagnostics and rehabilitation [148]. Whole-body or regional scanning systems provide metrics for posture, scoliosis progression, limb length discrepancy, and gait analysis without ionizing radiation [148]. Validation and standardization remain essential, requiring clinical studies, workflow optimization, and regulatory compliance regarding safety, biocompatibility, and data privacy [142,143,144,146,147,148,149]. Emerging directions include AI-driven planning, multi-modal imaging integration, and next-generation personalized implants [144,145,148,150]. Together, these advances position DFPP as a central technology in the evolution of personalized medicine.

5.3. Cultural Heritage and Specialized Applications

DFPP has become essential in cultural heritage preservation due to its non-contact, high-precision surface capture capabilities. It enables detailed 3D reconstruction of delicate artifacts, paintings, manuscripts, and archaeological sites without risking damage, supporting conservation, digital archives, and replica fabrication [151,152,153]. Museum collections benefit from high-resolution scans that reveal microcracks, layered structures, and subtle deformations invisible to conventional imaging [153,154]. Recent advances such as high-dynamic-range acquisition, deep learning reconstruction, and single-frame fringe projection improve speed and adaptability under variable lighting [26,152,160,178,179]. Multi-view and handheld systems enable complete object modeling, while underwater adaptations allow documentation of submerged sites despite turbidity and refraction [28,155]. Archaeology benefits from in situ stratigraphy capture and digital twins that preserve excavation context [180]. Compared to photogrammetry and laser scanning, DFPP offers superior detail for small or intricate objects, making hybrid workflows common [154,155,156]. These capabilities position DFPP as a key tool for both on-site preservation and large-scale digital heritage initiatives.

5.4. Microscale, Materials, and Scientific Applications

At the microscale, DFPP serves as a precision tool for inspecting MEMS, semiconductor structures, and micro-fabricated components [37]. MEMS-based projectors enable compact, high-speed 3D measurement using micro-mirrors and vibration systems, achieving RMS errors as low as 0.037 mm [43,157,158,159,181]. Advanced ray models, pixel refinement, and anti-crosstalk methods improve accuracy in miniaturized systems [71,158,159]. Deep learning and digital twins now enable single-shot microscale reconstruction with sub-micrometer precision [27,28,160,161]. DFPP supports PCB inspection, semiconductor metrology, and additive manufacturing, while new modeling techniques refine measurement limits and guide system design [133]. In materials science, DFPP enables non-contact roughness analysis, surface treatment evaluation, and reflectivity-adaptive measurement using HDR and pixel-adaptive projection [182,183,184,185,186]. Polarization-based and hybrid methods improve performance on reflective metals [152,162,163,187], while deep learning restores saturated fringe data for fast reconstruction [163,188,189]. Specialized applications include thermal FPP for transparent materials [190], biological surface analysis and tissue morphology [37], and geological field documentation using portable and multi-modal systems [180]. These technological advances extend DFPP to increasingly complex scientific and industrial scenarios.

6. Current Challenges and Future Perspectives

6.1. Technical Limitations and Solutions

Despite significant advances, fundamental technical limitations continue to restrict widespread deployment of DFPP in certain applications. Addressing these challenges requires innovations in hardware design, algorithmic processing, and system integration strategies.

6.1.1. Environmental Sensitivity and Robustness

Environmental factors significantly impact measurement accuracy and reliability. Vibrations—from machinery, building motion, or air currents—introduce phase errors that degrade reconstruction quality. Temperature variations affect both optical component alignment and projected pattern geometry, causing systematic errors that accumulate over extended measurement sessions. Ambient lighting, particularly in industrial settings, can reduce fringe contrast and introduce noise [37,191].
Mitigation strategies span hardware and algorithmic approaches. Robust mechanical design incorporating vibration isolation and temperature-stabilized enclosures reduces environmental coupling. Adaptive algorithms adjust processing parameters based on detected environmental conditions, maintaining performance across varying operational contexts [152,186]. High-speed acquisition minimizes exposure to transient disturbances, with single-shot methods showing particular promise for vibration-prone environments [28,29].

6.1.2. Challenging Surface Properties

Problematic surface properties remain a primary limitation. Highly reflective surfaces cause pixel saturation, destroying phase information in affected regions. Transparent or translucent materials generate multiple internal reflections, violating single-surface assumptions and producing erroneous reconstructions. Dark surfaces with low reflectivity yield poor signal-to-noise ratios, limiting measurement precision [152,182,183].
High dynamic range techniques address reflectivity variations through multiple exposures or adaptive projection, maintaining optimal exposure across the surface [182,183,184,185,186]. Exposure fusion algorithms combine optimally exposed pixels from multiple captures, constructing composite phase maps with extended dynamic range [152,192]. Polarization-based methods suppress specular reflections through careful control of illumination and imaging polarization states [152,162,163,187].
Deep learning approaches show particular promise for challenging surfaces. Networks trained on diverse surface types, including saturated and low-contrast fringes, learn to recover phase information that traditional algorithms cannot process [163,188,189]. For transparent materials, thermal fringe projection using infrared imaging circumvents visible-light limitations [190].
Surface pre-treatment remains necessary for certain materials. Coating applications—powder sprays, liquid developers—provide diffuse reflection from otherwise problematic surfaces. However, coating introduces dimensional offsets and is unsuitable for many applications, particularly cultural heritage and biomedical contexts where surface modification is unacceptable.

6.1.3. Computational Complexity and Real-Time Processing

Computational demands constrain real-time operation, particularly for high-resolution measurements requiring complex phase unwrapping algorithms. Traditional path-following unwrapping methods exhibit poor parallelizability and struggle with discontinuities, while quality-guided approaches require computationally expensive quality map calculation [88,193,194].
Deep learning-based phase unwrapping offers dramatic computational advantages. Convolutional neural networks formulate unwrapping as pixel-wise classification or direct regression, enabling highly parallel GPU implementation [88,193,194,195,196,197,198]. PhaseNet 2.0 reformulates phase unwrapping as wrap-count prediction, achieving robust noise handling and computational efficiency [88]. One-step methods like DLPU and VUR-Net directly map wrapped to unwrapped phase without intermediate processing steps [193,194].
Specialized architectures incorporating multi-scale fusion, attention mechanisms, and residual connections demonstrate strong generalization to unseen data types [195,196,197,198]. Unsupervised and weakly supervised approaches reduce reliance on labeled training data, enabling adaptation to new measurement scenarios [199]. Hybrid methods combining simulation-driven datasets with real-world fine-tuning achieve robust performance across diverse applications [100,198].
Hardware acceleration through field-programmable gate arrays (FPGAs) and graphics processing units (GPUs) enables real-time processing of high-resolution data. Optimized implementations of core algorithms—phase extraction, unwrapping, and calibration—achieve frame rates suitable for dynamic scene capture and robotic vision [29,170].

6.1.4. Accuracy, Precision, and Error Propagation

Measurement uncertainty propagates through the entire processing chain, from calibration through final 3D reconstruction. Calibration errors in camera-projector geometric relationships introduce systematic biases that corrupt all subsequent measurements. Noise in captured images—from sensor dark current, photon shot noise, and quantization—degrades phase estimation precision. Temporal drift of system parameters due to thermal expansion, mechanical settling, or optical misalignment causes gradual accuracy degradation [1,166].
Advanced calibration methods incorporating ray-model parameterization, bundle adjustment, and cross-validation against certified references improve geometric accuracy [71,158,159]. Adaptive calibration strategies periodically update system parameters based on measurement of known references, compensating for temporal drift [166]. Self-calibration techniques exploit scene geometry or temporal consistency to refine parameters without external references.
Phase error modeling characterizes fundamental precision limits imposed by fringe quality, camera noise, and quantization effects [1]. These models guide system design by identifying dominant error sources and predicting achievable precision for given hardware specifications. Compensation algorithms address specific error mechanisms—such as gamma nonlinearity, projector defocus, and inter-reflections—through pre-calibration and runtime correction [159,200].
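As an illustration of such a model, the widely used first-order result for N-step PSP under additive Gaussian intensity noise, σ_φ ≈ √(2/N)·σ_I/B, can serve as one term of an uncertainty budget; the camera values below are illustrative assumptions, not figures from the cited works:

```python
import numpy as np

def psp_phase_noise(sigma_I, B, N):
    """First-order phase-noise prediction for N-step PSP with additive
    Gaussian intensity noise: sigma_phi = sqrt(2/N) * sigma_I / B."""
    return np.sqrt(2.0 / N) * sigma_I / B

# Illustrative budget: ~2 counts of sensor noise, fringe modulation
# B ~ 80 counts on an 8-bit camera, 4-step phase shifting.
sigma_phi = psp_phase_noise(sigma_I=2.0, B=80.0, N=4)
# Height uncertainty then follows through the system's phase-to-height
# sensitivity, e.g. sigma_h = sigma_phi * p / (2*pi*tan(theta)) for a
# simple geometry with fringe pitch p and triangulation angle theta.
```

Such closed-form terms identify the dominant noise contributions (modulation depth, step count, sensor noise) before more detailed Monte Carlo or calibration-based analyses are undertaken.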
Validation against traceable measurement standards provides confidence in absolute accuracy. Comparison with coordinate measuring machines (CMMs), laser trackers, or certified reference artifacts quantifies systematic errors and validates uncertainty budgets for critical applications [139,140].

6.2. Emerging Trends and Technologies

The field of digital fringe projection profilometry is undergoing rapid transformation driven by advances in artificial intelligence, miniaturization, multi-modal integration, and distributed computing architectures. These trends are reshaping both research directions and practical deployments.

6.2.1. Artificial Intelligence and Deep Learning Integration

Artificial intelligence has emerged as a transformative force across all aspects of DFPP. Deep learning networks now address phase unwrapping, fringe analysis, surface classification, and automatic defect detection, in specific domains surpassing the accuracy and robustness of classical algorithmic pipelines [11,88,193,194,195,196,197,198].
Beyond phase processing, neural networks enable end-to-end learning from raw fringe images to 3D reconstructions, bypassing traditional algorithmic pipelines. These approaches demonstrate remarkable robustness to non-ideal conditions—noise, saturation, discontinuities—that challenge conventional methods [13,28,161,167,169,170]. Single-shot absolute 3D measurement, previously requiring multiple pattern sequences, now achieves comparable accuracy from single composite patterns through learned fringe-to-phase mappings [28,29,161].
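For context, the classical single-shot baseline that these learned fringe-to-phase mappings improve upon is Fourier-transform profilometry, which recovers a wrapped phase map from one carrier-fringe image by band-pass filtering the +1 spectral lobe. A minimal NumPy sketch, using a synthetic flat-surface fringe with illustrative frequencies:

```python
import numpy as np

def ftp_wrapped_phase(fringe: np.ndarray, f0: float, bw: float) -> np.ndarray:
    """Single-shot Fourier-transform profilometry (Takeda-style baseline).

    Isolates the +1 carrier lobe along the x axis and returns the wrapped
    phase. f0 and bw are the carrier frequency and half-bandwidth in
    cycles/pixel.
    """
    spec = np.fft.fft(fringe, axis=1)
    freqs = np.fft.fftfreq(fringe.shape[1])
    keep = (freqs > f0 - bw) & (freqs < f0 + bw)   # band-pass the +1 lobe only
    analytic = np.fft.ifft(spec * keep, axis=1)
    return np.angle(analytic)                       # wrapped phase in (-pi, pi]

# Synthetic single fringe image: pure carrier (flat surface), 32 cycles / 256 px.
f0 = 32 / 256
x = np.arange(256)
img = 128 + 100 * np.cos(2 * np.pi * f0 * x)[None, :].repeat(64, axis=0)
wrapped = ftp_wrapped_phase(img, f0=f0, bw=0.06)
```

The learned single-shot methods cited above replace the fixed band-pass step with a network that tolerates overlapping spectra, discontinuities, and saturation, which is where this classical filter fails.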
Generative models synthesize training data through digital twins and physics-based simulation, addressing data scarcity for supervised learning [160,171]. Unsupervised and self-supervised approaches learn from unlabeled fringe images, enabling adaptation without extensive ground-truth datasets [13,27,201]. Transfer learning allows models trained on one application domain to rapidly adapt to new contexts with minimal additional training.
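A toy forward model of the kind used for physics-based training-data synthesis might render fringe images from ground-truth height maps with projector gamma and sensor noise. All parameters below (carrier frequency, gamma, phase sensitivity, noise level) are illustrative assumptions rather than values from the cited works:

```python
import numpy as np

rng = np.random.default_rng(0)

def synth_fringe(height_map: np.ndarray, f0: float = 0.08, gamma: float = 2.2,
                 noise_std: float = 2.0, sensitivity: float = 20.0):
    """Render one simulated camera fringe image from a ground-truth height map.

    Toy forward model: phase = carrier + sensitivity * height, followed by
    projector gamma distortion and additive sensor noise.
    """
    h, w = height_map.shape
    x = np.arange(w)[None, :]
    phase = 2 * np.pi * f0 * x + sensitivity * height_map
    ideal = 0.5 + 0.4 * np.cos(phase)        # normalized sinusoid in [0.1, 0.9]
    distorted = ideal ** gamma               # projector gamma nonlinearity
    img = 255 * distorted + rng.normal(0.0, noise_std, (h, w))
    return np.clip(img, 0, 255).astype(np.uint8), phase

# One (fringe image, ground-truth phase) training pair from a Gaussian bump.
yy, xx = np.mgrid[0:128, 0:128]
bump = np.exp(-((xx - 64) ** 2 + (yy - 64) ** 2) / 500.0)
image, gt_phase = synth_fringe(bump)
```

Sampling random height maps and randomizing the nuisance parameters per image yields labeled pairs at arbitrary scale, which is precisely the data-scarcity remedy the digital-twin approaches above pursue with more faithful renderers.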
Self-calibration networks automatically estimate and correct system parameters from measurement data, reducing dependence on manual calibration procedures [142,144]. Quality assessment networks predict measurement confidence for each surface point, enabling intelligent data fusion and anomaly detection. Super-resolution networks enhance spatial and temporal resolution beyond native sensor capabilities [29,170].

6.2.2. Miniaturization and Portable Systems

Miniaturization trends enable handheld and mobile DFPP systems for field deployment and point-of-use measurement. MEMS-based micro-projectors reduce system size and weight while maintaining adequate resolution and brightness for close-range scanning [157,158,181]. Compact LED illumination sources and small-format cameras facilitate integration into portable form factors.
Smartphone-based FPP systems leverage built-in cameras and displays for accessible 3D scanning, though with reduced accuracy compared to dedicated hardware. These platforms democratize access to 3D measurement technology for education, hobbyist applications, and rapid field documentation where moderate precision suffices. Integration with augmented reality frameworks enables real-time visualization of measurement results overlaid on physical scenes [142,144].
Battery-powered wireless systems support untethered operation for archaeological field work, construction site surveying, and mobile inspection scenarios. Cloud connectivity enables remote collaboration, with field technicians capturing data that specialists analyze remotely.

6.2.3. Multi-Modal and Hybrid Systems

Recognition that no single measurement modality optimally addresses all scenarios drives development of hybrid systems combining FPP with complementary techniques. Integration with photogrammetry leverages FPP’s high-resolution surface detail within photogrammetry’s global geometric framework, enabling accurate reconstruction across scales from millimeters to meters [154,180].
Fusion with laser scanning combines FPP’s dense surface sampling with laser scanning’s long-range and occlusion-handling capabilities [141]. Conoscopic holography provides high-precision point measurements in regions where FPP struggles, such as deep holes or highly specular patches [141].
Multi-spectral approaches project and image fringes at different wavelengths simultaneously, enabling disambiguation of fringe orders without temporal phase shifting. This accelerates measurement while maintaining absolute phase determination [161]. Polarimetric FPP captures polarization state alongside geometry, providing surface material characterization in addition to shape [152,187].
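The fringe-order disambiguation underlying such multi-wavelength schemes is the same as classical two-frequency temporal unwrapping: an unambiguous low-frequency phase predicts the absolute high-frequency phase, from which the integer fringe order is rounded. A minimal sketch on synthetic data, assuming the low frequency (one fringe across the field) is unambiguous:

```python
import numpy as np

def two_freq_unwrap(phi_hi: np.ndarray, phi_lo: np.ndarray,
                    f_hi: float, f_lo: float) -> np.ndarray:
    """Two-frequency temporal phase unwrapping.

    The unambiguous low-frequency phase predicts the absolute high-frequency
    phase; rounding their difference to a multiple of 2*pi gives the integer
    fringe order k, which unwraps phi_hi.
    """
    predicted = phi_lo * (f_hi / f_lo)
    k = np.round((predicted - phi_hi) / (2 * np.pi))
    return phi_hi + 2 * np.pi * k

# Synthetic absolute phase ramp spanning 16 fringes across the field.
x = np.linspace(0.0, 1.0, 500, endpoint=False)
abs_phase = 2 * np.pi * 16 * x
phi_hi = np.angle(np.exp(1j * abs_phase))      # wrapped high-frequency phase
phi_lo = np.mod(2 * np.pi * 1 * x, 2 * np.pi)  # single fringe: never ambiguous
recovered = two_freq_unwrap(phi_hi, phi_lo, f_hi=16, f_lo=1)
```

Projecting the two frequencies at different wavelengths simply acquires both phases in a single exposure, trading spectral channels for the temporal sequence this sketch assumes.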
Tactile probing integration validates FPP measurements at critical features, providing ground truth for calibration and quality assurance. This is particularly valuable for metrological applications requiring traceable uncertainty quantification.

6.2.4. Edge and Cloud Computing Architectures

Distributed computing architectures intelligently partition processing between edge devices and cloud infrastructure. Real-time operations requiring low latency—live fringe projection control, basic phase extraction, preview rendering—execute on edge hardware at the measurement site. Computationally intensive tasks—high-resolution unwrapping, data fusion, machine learning inference—offload to cloud resources when latency permits [29].
Internet-of-Things (IoT) frameworks enable distributed monitoring across multiple DFPP sensors in manufacturing environments. Centralized data management aggregates measurements for statistical process control, trend analysis, and predictive maintenance. Edge intelligence reduces network bandwidth by transmitting only processed measurements or anomaly alerts rather than raw image streams.
Federated learning trains machine learning models across distributed DFPP systems without centralizing raw data, preserving privacy and reducing communication overhead. Models improve continuously from collective experience while data remains local.

6.2.5. Standardization and Open Ecosystems

Maturation of DFPP technology drives standardization efforts to ensure interoperability and facilitate technology transfer. Development of communication protocols—for camera-projector synchronization, measurement data exchange, and quality metrics—enables integration of components from multiple vendors [166].
Performance benchmarking frameworks establish standardized test procedures and metrics for comparing systems objectively. Certified reference artifacts with traceable geometry allow validation across laboratories and applications. Certification programs for critical applications—medical devices, aerospace metrology—define requirements for system validation and operator training.
Open-source hardware and software initiatives democratize access to DFPP technology. Community-developed calibration tools, processing libraries, and system designs lower barriers to entry for researchers and small enterprises. Collaborative development accelerates innovation through shared problem-solving and rapid dissemination of improvements [174].

6.3. Future Research Directions

Future research directions in DFPP can be broadly grouped according to (i) their expected time horizon (short- to mid-term versus long-term developments) and (ii) the dominant nature of the advances (algorithmic, hardware-related, or application-driven). This categorization provides a structured roadmap for researchers and practitioners seeking to anticipate and contribute to the evolution of the field.

6.3.1. Short- to Mid-Term Developments

Algorithmic Advances
Future DFPP systems will automatically adapt all parameters—projection patterns, exposure settings, processing algorithms—based on real-time assessment of measurement conditions, surface properties, and precision requirements. Adaptive algorithms will select optimal fringe frequencies, pattern orientations, and phase-shifting strategies for each scene region, maximizing information content while minimizing acquisition time [182,183,184,186].
Temporal super-resolution through deep learning will interpolate intermediate frames from sparse high-speed captures, effectively multiplying frame rates. Physics-informed neural networks will incorporate mechanical models, ensuring reconstructed dynamics obey physical constraints and improving accuracy for sparse measurements.
Hardware and System Integration
Self-diagnosing systems will continuously monitor their own performance, detecting calibration drift, optical misalignment, or component degradation before accuracy is compromised. Predictive maintenance alerts will prompt corrective actions, ensuring sustained metrological performance. Digital twins of measurement systems will simulate expected performance and identify anomalies by comparing predictions with actual measurements [160].
Novel illumination strategies will expand DFPP capabilities. Trade-offs between coherent and incoherent structured light will be optimized for specific applications: coherent illumination enables interferometric precision, whereas incoherent light reduces speckle and coherence artifacts. Artificial intelligence will generate adaptive patterns—tailored to specific surfaces or measurement objectives—that maximize phase quality and minimize ambiguities [13].
Spectral multiplexing will simultaneously project fringes at multiple wavelengths, encoding absolute depth through chromatic dispersion or enabling material discrimination through spectral reflectance analysis [161]. Ultraviolet and near-infrared projection will extend DFPP to fluorescence imaging and subsurface penetration for semi-transparent materials.
Application-Driven Developments
Extension to four-dimensional measurement (3D geometry plus time) will enable analysis of dynamic deformations, transient phenomena, and in situ mechanical property characterization. Ultra-fast FPP achieving 100,000 fps has been demonstrated, capturing high-speed events in manufacturing, biomechanics, and fluid dynamics [29,170].
Applications span impact testing, vibration analysis, cardiac motion tracking, and observation of rapid manufacturing processes like laser welding or powder bed fusion. Continuous 4D monitoring will reveal failure mechanisms, optimize process parameters, and validate computational simulations of dynamic systems.
Interdisciplinary research will adapt DFPP to previously inaccessible contexts. In vivo biomedical measurement—inside operating theaters or living organisms—requires biocompatible, sterile, and minimally invasive implementations. Endoscopic FPP for intraoperative guidance, robotic surgery, and internal organ characterization represents a frontier combining optical, robotic, and medical expertise [169,202].

6.3.2. Long-Term and Transformative Directions

Autonomous and Self-Optimizing Systems
Autonomous operation extends to novel scene types. When encountering unfamiliar surfaces or measurement challenges, systems will explore parameter spaces systematically, learning optimal configurations through trial and evaluation. This adaptive intelligence eliminates manual tuning and enables deployment by non-expert operators.
Advanced Physical Principles
Structured illumination microscopy principles will merge with FPP for super-resolution 3D imaging, potentially surpassing diffraction limits through computational reconstruction from multiple illumination patterns.
Quantum sensing principles may ultimately enhance DFPP precision beyond classical limits. Squeezed light reduces photon shot noise below the standard quantum limit, potentially improving phase measurement precision. Quantum entanglement between illumination and detection paths could enable sub-Rayleigh imaging or quantum ghost imaging with structured light.
While practical quantum-enhanced DFPP remains largely theoretical, proof-of-concept demonstrations are emerging in related optical metrology domains. As quantum technologies mature and miniaturize, integration with fringe projection may unlock fundamental precision improvements, particularly for photon-starved scenarios or radiation-sensitive samples.
Extreme Environment and Multi-Scale Applications
Nanotechnology applications demand integration with electron microscopy or atomic force microscopy to bridge length scales between optical and nanometer resolution. Correlative multi-scale measurement will connect FPP’s areal efficiency with scanning probe precision.
Extreme environments—deep space, underwater, high temperature, high pressure—challenge conventional DFPP implementations. Ruggedized systems for planetary exploration, subsea infrastructure inspection, or furnace monitoring require specialized materials, thermal management, and radiation hardening [155]. Autonomous operation with minimal human intervention becomes essential where access is restricted.
These frontiers demand collaboration across optics, materials science, mechanical engineering, computer science, and domain-specific expertise, highlighting DFPP’s role as an enabling technology for diverse scientific and industrial challenges.

7. Conclusions

This review set out to provide a unified and technically grounded synthesis of the principles, components, and computational methods that define modern DFPP, and the analysis presented throughout the manuscript supports this goal across both theoretical and applied dimensions. Beginning with the system architecture, the review examined how projection technologies, imaging sensors, geometric configurations, optical designs, and calibration strategies jointly determine the achievable measurement accuracy and robustness. The comparative evaluation of these subsystems shows that DFPP performance is not governed by isolated components, but by the interplay among optical design, synchronization, and calibration pipelines—an observation that aligns directly with the integrative perspective introduced in the abstract.
This review has shown that DFPP is no longer a niche optical technique but a mature and strategically important metrology framework whose performance is fundamentally shaped by the coordinated design of its optical, electronic, and computational subsystems. A key conclusion emerging from the analysis is that DFPP should not be understood as a fixed architecture, but as a design space where trade-offs between precision, speed, robustness, and cost must be explicitly negotiated. The comparative evidence presented throughout this work strongly supports this perspective.
On the hardware side, it is clear that the traditional assumption that “better components produce better measurements” is an oversimplification. Instead, DFPP precision stems from system coherence: projection technology must be matched to sensor noise characteristics; geometric configuration must reflect the expected surface topology; and calibration must be tailored to the specific optical distortions introduced by the chosen layout. Systems that fail to achieve this coherence—even when equipped with high-end components—perform noticeably worse than balanced, well-integrated designs. This reinforces the argument that DFPP excellence is primarily a systems-engineering challenge rather than a pure hardware race.
In the computational domain, the contrast between classical, hybrid, and learning-based methods identifies another major shift in DFPP: accuracy is increasingly determined by how intelligently phase is acquired, denoised, and unwrapped, rather than by how many patterns are projected. Multi-shot techniques still dominate in absolute precision, but the rapid convergence of learning-enabled single-shot approaches suggests that the traditional divide between “fast but coarse” and “slow but accurate” systems is narrowing. Our interpretation is that DFPP is entering a new era in which algorithmic intelligence—not projection count—will be the principal driver of innovation.
The case studies discussed across industrial, biomedical, cultural heritage, and microscale applications further demonstrate that DFPP’s versatility is both its greatest strength and its greatest challenge. While industrial users prioritize throughput, biomedical applications demand anatomical fidelity, heritage conservation requires non-invasiveness, and microscale inspection calls for extreme stability. What stands out across these comparisons is that no single DFPP configuration is inherently superior: each domain rewards a different balance of robustness, portability, optical complexity, and computational sophistication. This reinforces the central argument advanced in the abstract: DFPP succeeds precisely because it can be reconfigured, hybridized, and computationally augmented to meet highly divergent operational demands.
Taken together, the evidence presented in this review leads to a clear interpretive conclusion: the future of DFPP will be defined by tightly coupled hardware–software co-design, by deep learning models that internalize optical and geometric priors, and by digital twin ecosystems capable of simulating measurement pipelines end-to-end. The field is transitioning from incremental improvements toward fully adaptive, data-driven DFPP systems that automatically optimize their projection patterns, acquisition strategies, and reconstruction algorithms. In this sense, DFPP is evolving from a measurement technique into an intelligent metrology platform.
By integrating architectural analysis, algorithmic comparison, and domain-specific evaluation, this review not only consolidates the current state of the art but also argues for a forward-looking vision: DFPP will continue to expand its relevance as long as researchers embrace its inherently interdisciplinary nature, prioritizing system coherence, computational intelligence, and adaptability as the cornerstones of next-generation 3D metrology.

Author Contributions

Conceptualization, M.S.-T. and A.L.-M.; Methodology, A.L.-M. and I.H.-C.; Validation, M.S.-T., I.H.-C. and J.L.J.S.-G.; Formal analysis, A.L.-M. and C.R.-F.; Investigation, M.S.-T., I.H.-C. and E.C.; Data curation, E.C. and J.L.J.S.-G.; Writing—original draft preparation, A.L.-M. and C.R.-F.; Writing—review and editing, M.S.-T., J.L.J.S.-G. and I.H.-C.; Visualization, A.L.-M. and C.R.-F.; Supervision, M.S.-T.; Project administration, M.S.-T. and A.L.-M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Lv, S.; Kemao, Q. Modeling the Measurement Precision of Fringe Projection Profilometry. Light Sci. Appl. 2023, 12, 257. [Google Scholar] [CrossRef] [PubMed]
  2. Bai, Y.; Zhang, Z.; Fu, S.; Zhao, H.; Ni, Y.; Gao, N.; Meng, Z.; Yang, Z.; Zhang, G.; Yin, W. Recent Progress of Full-Field Three-Dimensional Shape Measurement Based on Phase Information. Nanomanuf. Metrol. 2024, 7, 9. [Google Scholar] [CrossRef]
  3. Jiang, C.; Li, Y.; Feng, S.; Hu, Y.; Yin, W.; Qian, J.; Zuo, C.; Liang, J. Fringe Projection Profilometry. In Coded Optical Imaging; Springer: Cham, Switzerland, 2024; pp. 241–286. [Google Scholar] [CrossRef]
  4. Forgács, I.; Antal, A. Comparison of Structured Light Projection-Based Surface Reconstruction Methods. Period. Polytech. Mech. Eng. 2025, 69, 151–163. [Google Scholar] [CrossRef]
  5. Lv, S.; Tang, D.; Zhang, X.; Yang, D.; Deng, W.; Kemao, Q. Fringe Projection Profilometry Method with High Efficiency, Precision, and Convenience: Theoretical Analysis and Development. Opt. Express 2022, 30, 33515–33537. [Google Scholar] [CrossRef] [PubMed]
  6. Berkson, J.; Hyatt, J.; Julicher, N.; Jeong, B.; Pimienta, I.; Ball, R.; Ellis, W.; Voris, J.; Torres-Barajas, D.; Kim, D. Systematic Radio Telescope Alignment Using Portable Fringe Projection Profilometry. Nanomanuf. Metrol. 2024, 7, 6. [Google Scholar] [CrossRef]
  7. Juarez-Salazar, R.; Esquivel-Hernandez, S.; Diaz-Ramirez, V. Optical Fringe Projection Driving 3D Metrology Uncomplicated. Preprints 2025. [Google Scholar] [CrossRef]
  8. Nguyen, A.; Wang, Z. Time-Distributed Framework for 3D Reconstruction Integrating Fringe Projection with Deep Learning. Sensors 2023, 23, 7284. [Google Scholar] [CrossRef]
  9. Son, S.; An, Y.; Hyun, J. Quasi-Calibration Method for Structured Light System with Auxiliary Camera. Opt. Laser Technol. 2024, 179, 111369. [Google Scholar] [CrossRef]
  10. Wang, H.; He, X.; Wei, Z.; Lv, Z.; Zhang, Q.; Wang, J.; He, J. Calculation of Fringe Angle with Enhanced Phase Sensitivity and 3D Reconstruction. Sensors 2024, 24, 7234. [Google Scholar] [CrossRef]
  11. Liu, H.; Yan, N.; Shao, B.; Yuan, S.; Zhang, X. Deep Learning in Fringe Projection: A Review. Neurocomputing 2024, 581, 127493. [Google Scholar] [CrossRef]
  12. Chen, W.; Feng, S.; Yin, W.; Li, Y.; Qian, J.; Chen, Q.; Zuo, C. Deep-Learning-Enabled Temporally Super-Resolved Multiplexed Fringe Projection Profilometry: High-Speed kHz 3D Imaging with Low-Speed Camera. PhotoniX 2024, 5, 25. [Google Scholar] [CrossRef]
  13. Nguyen, A.; Wang, Z. Single-Shot 3D Reconstruction via Nonlinear Fringe Transformation: Supervised and Unsupervised Learning Approaches. Sensors 2024, 24, 3246. [Google Scholar] [CrossRef]
  14. Li, Y.; Li, Z.; Liang, X.; Huang, H.; Qian, X.; Feng, F.; Zhang, C.; Wang, X.; Gui, W.; Li, X. Global Phase Accuracy Enhancement of Structured Light System Calibration and 3D Reconstruction by Overcoming Inevitable Unsatisfactory Intensity Modulation. Measurement 2024, 236, 114952. [Google Scholar] [CrossRef]
  15. Li, Z.; Ma, R.; Duan, S. A Clustered Adaptive Exposure Time Selection Methodology for HDR Structured Light 3D Reconstruction. Sensors 2025, 25, 4786. [Google Scholar] [CrossRef]
  16. Suresh, V.; Balasubramaniam, B.; Yeh, L.; Li, B. Recent Advances in In Situ 3D Surface Topographical Monitoring for Additive Manufacturing Processes. J. Manuf. Mater. Process. 2025, 9, 133. [Google Scholar] [CrossRef]
  17. Huang, H.; Liu, G.; Deng, L.; Song, T.; Qin, F. Multi-Line Laser 3D Reconstruction Method Based on Spatial Quadric Surface and Geometric Estimation. Sci. Rep. 2024, 14, 23103. [Google Scholar] [CrossRef] [PubMed]
  18. Wu, Z.; Huo, J.; Zhang, H.; Yang, F.; Chen, S.; Feng, Z. Vision Measurement Method Based on Plate Glass Window Refraction Model in Tunnel Construction. Sensors 2023, 24, 66. [Google Scholar] [CrossRef] [PubMed]
  19. Zhang, Q.; Wang, Q.; Li, H.; Yu, J. Ray-Space Epipolar Geometry for Light Field Cameras. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 44, 3705–3718. [Google Scholar] [CrossRef] [PubMed]
  20. Xue, R.; Hooshmand, H.; Isa, M.; Piano, S.; Leach, R. Applying Machine Learning to Optical Metrology: A Review. Meas. Sci. Technol. 2024, 36, 012002. [Google Scholar] [CrossRef]
  21. Liu, C.; Wang, C. Investigation of Phase Pattern Modulation for Digital Fringe Projection Profilometry. Meas. Sci. Rev. 2020, 20, 43–49. [Google Scholar] [CrossRef]
  22. Zendejas-Hernández, E.; Trujillo-Schiaffino, G.; Anguiano-Morales, M.; Salas-Peimbert, D.; Corral-Martínez, L.; Tornero-Martínez, N. Spatial and Temporal Methods for Fringe Pattern Analysis: A Review. J. Opt. 2023, 52, 888–899. [Google Scholar] [CrossRef]
  23. An, H.; Cao, Y.; Zhang, Y.; Li, H. Phase-Shifting Temporal Phase Unwrapping Algorithm for High-Speed Fringe Projection Profilometry. IEEE Trans. Instrum. Meas. 2023, 72, 1–9. [Google Scholar] [CrossRef]
  24. Burnes, S.; Villa, J.; Moreno, G.; Rosa, I.; Alaniz, D.; González, E. Temporal Fringe Projection Profilometry: Modified Fringe-Frequency Range for Error Reduction. Opt. Lasers Eng. 2022, 149, 106788. [Google Scholar] [CrossRef]
  25. Jiang, Y.; Qin, J.; Liu, Y.; Yang, M.; Cao, Y. Deep-Learning-Based Single-Shot Fringe Projection Profilometry Using Spatial Composite Pattern. IEEE Trans. Instrum. Meas. 2024, 73, 1–14. [Google Scholar] [CrossRef]
  26. Li, Y.; Qian, J.; Feng, S.; Chen, Q.; Zuo, C. Deep-Learning-Enabled Dual-Frequency Composite Fringe Projection Profilometry for Single-Shot Absolute 3D Shape Measurement. Opto-Electron. Adv. 2022, 5, 210021. [Google Scholar] [CrossRef]
  27. Yu, H.; Zheng, D.; Fu, J.; Zhang, Y.; Zuo, C.; Han, J. Deep Learning-Based Fringe Modulation-Enhancing Method for Accurate Fringe Projection Profilometry. Opt. Express 2020, 28, 21692–21703. [Google Scholar] [CrossRef]
  28. Li, Y.; Qian, J.; Feng, S.; Chen, Q.; Zuo, C. Composite Fringe Projection Deep Learning Profilometry for Single-Shot Absolute 3D Shape Measurement. Opt. Express 2022, 30, 3424–3442. [Google Scholar] [CrossRef] [PubMed]
  29. Wang, B.; Chen, W.; Qian, J.; Feng, S.; Chen, Q.; Zuo, C. Single-Shot Super-Resolved Fringe Projection Profilometry (SSSR-FPP): 100,000 Frames-Per-Second 3D Imaging with Deep Learning. Light Sci. Appl. 2025, 14, 70. [Google Scholar] [CrossRef]
  30. Wu, H.; Cao, Y.; Dai, Y.; Qin, J. Spatial-Temporal 3-D Directional Binary Coding Method for Fringe Projection Profilometry. IEEE Trans. Instrum. Meas. 2025, 74, 1–11. [Google Scholar] [CrossRef]
  31. An, H.; Cao, Y.; Wu, H.; Yang, N.; Xu, C.; Li, H. Spatial-Temporal Phase Unwrapping Algorithm for Fringe Projection Profilometry. Opt. Express 2021, 29, 20657–20672. [Google Scholar] [CrossRef] [PubMed]
  32. Ren, J.; Tan, C.; Song, W. Hybrid Encoding Fringe and Simulation-to-Real Scene Approach for Accurate Depth Estimation in Fringe Projection Profilometry. Opt. Express 2025, 33, 14716–14736. [Google Scholar] [CrossRef]
  33. Zuo, C.; Qian, J.; Feng, S.; Yin, W.; Li, Y.; Fan, P.; Han, J.; Qian, K.; Chen, Q. Deep Learning in Optical Metrology: A Review. Light Sci. Appl. 2022, 11, 39. [Google Scholar] [CrossRef]
  34. Yin, W.; Che, Y.; Li, X.; Li, M.; Hu, Y.; Feng, S.; Lam, E.; Chen, Q.; Zuo, C. Physics-Informed Deep Learning for Fringe Pattern Analysis. Opto-Electron. Adv. 2024, 7, 230034. [Google Scholar] [CrossRef]
  35. Feng, L.; Sun, Z.; Chen, Y.; Li, H.; Chen, Y.; Liu, H.; Liu, R.; Zhao, Z.; Liang, J.; Zhang, Z.; et al. Rapid In-Situ Accuracy Evaluation and Exposure Optimization Method for Fringe Projection Profilometry. Opt. Laser Technol. 2024, 179, 111844. [Google Scholar] [CrossRef]
  36. Hinz, L.; Metzner, S.; Müller, P.; Schulte, R.; Besserer, H.; Wackenrohr, S.; Sauer, C.; Kästner, M.; Hausotte, T.; Hübner, S.; et al. Fringe Projection Profilometry in Production Metrology: A Multi-Scale Comparison in Sheet-Bulk Metal Forming. Sensors 2021, 21, 2389. [Google Scholar] [CrossRef]
  37. Yan, H.; Chen, Q.; Feng, S.; Zuo, C. Microscopic Fringe Projection Profilometry: A Review. Opt. Lasers Eng. 2020, 135, 106192. [Google Scholar] [CrossRef]
  38. Ryan, C.; Haist, T.; Reichelt, S. Holographic Detection for Fast Fringe Projection Profilometry of Deep Micro-Scale Objects. Opt. Express 2024, 33, 983–991. [Google Scholar] [CrossRef] [PubMed]
  39. Nguyen, H.; Liang, J.; Wang, Y.; Wang, Z. Accuracy Assessment of Fringe Projection Profilometry and Digital Image Correlation Techniques for Three-Dimensional Shape Measurements. J. Phys. Photonics 2020, 3, 014004. [Google Scholar] [CrossRef]
  40. Wu, Z.; Li, X.; Guo, W.; Chen, Z.; Zhang, Q. Fast and High-Accuracy Three-Dimensional Shape Measurement Using Intermediate-Bit Projection. Opt. Express 2024, 32, 31797–31808. [Google Scholar] [CrossRef]
  41. Zhang, C.; Zhu, D.; Shi, W.; Wang, L.; Li, J. Improved Calibration and 3D Reconstruction for Micro Fringe Projection Profilometry. Opt. Express 2025, 33, 13455–13471. [Google Scholar] [CrossRef]
  42. You, D.; You, Z.; Zhang, X.; Zhu, J. High-Quality 3D Shape Measurement with Binary Half Truncated Sinusoidal Fringe Pattern. Opt. Lasers Eng. 2022, 155, 107046. [Google Scholar] [CrossRef]
  43. Han, M.; Xing, Y.; Wang, X.; Li, X. Projection Superimposition for the Generation of High-Resolution Digital Grating. Opt. Lett. 2024, 49, 4473–4476. [Google Scholar] [CrossRef]
  44. Chen, Y.; Gao, Y.; Xu, K.; Yu, Z.; Lei, X.; Song, B.; Zhu, F. Correction of Thermal Airflow Induced Measurement Errors in the Digital Fringe Projection System Using Background-Oriented Schlieren Technique. Meas. Sci. Technol. 2025, 36, 055211. [Google Scholar] [CrossRef]
  45. Tsolakis, I.; Papaioannou, W.; Papadopoulou, E.; Dalampira, M.; Tsolakis, A. Comparison in Terms of Accuracy between DLP and LCD Printing Technology for Dental Model Printing. Dent. J. 2022, 10, 181. [Google Scholar] [CrossRef]
  46. Suryatal, B.; Sarawade, S.; Deshmukh, S. Process Parameter’s Characterization and Optimization of DLP-Based Stereolithography System. Prog. Addit. Manuf. 2022, 8, 649–666. [Google Scholar] [CrossRef]
  47. Cheng, X.; Shen, M.; Xie, S.; Li, Y. Improvement of Phase Measurement Accuracy of Projector with Diamond-Shaped Pixels through Pixel Remapping. In Second Advanced Imaging and Information Processing Conference (AIIP 2024); SPIE: Bellingham, WA, USA, 2024; Volume 13282, pp. 23–30. [Google Scholar] [CrossRef]
  48. You, D.; You, Z.; Zhou, P.; Zhu, J. Theoretical Analysis and Experimental Investigation of the Floyd–Steinberg-Based Fringe Binary Method with Offset Compensation for Accurate 3D Measurement. Opt. Express 2022, 30, 26807–26823. [Google Scholar] [CrossRef] [PubMed]
  49. Hong, H.; Zhao, Z.; Zhu, Y.; Ye, L.; Xu, P.; Zhou, M. Nonlinear Correction Method of Digital Fringe Projection 3D Measurement System Based on Precise Precoding. Opt. Eng. 2024, 63, 114101. [Google Scholar] [CrossRef]
  50. Hou, L.; Xi, D.; Luo, J.; Qin, Y. Deep Learning-Based Correction of Defocused Fringe Patterns for High-Speed 3D Measurement. Adv. Eng. Inform. 2023, 58, 102221. [Google Scholar] [CrossRef]
  51. Haruta, M.; Kikkawa, J.; Kimoto, K.; Kurata, H. Comparison of Detection Limits of Direct-Counting CMOS and CCD Cameras in EELS Experiments. Ultramicroscopy 2022, 240, 113577. [Google Scholar] [CrossRef] [PubMed]
  52. Sun, Z.; Liu, M.; Dong, J.; Li, Z.; Liu, X.; Xiong, J.; Wang, Y.; Cao, Y.; Li, J.; Xia, Z.; et al. Low-Cost, High-Precision Integral 3D Photography and Holographic 3D Display for Real-World Scenes. Opt. Commun. 2024, 530, 130870. [Google Scholar] [CrossRef]
  53. Wu, Z.; Tao, W.; Lv, N.; Zhao, H. Optimization of Parameters for a Fringe Projection Measurement System Using an Improved Differential Evolution Method. Opt. Express 2023, 32, 3632–3646. [Google Scholar] [CrossRef] [PubMed]
  54. Sun, W.; Xu, Z.; Li, X.; Chen, Z.; Tang, X. Three-Dimensional Shape and Deformation Measurements Based on Fringe Projection Profilometry and Fluorescent Digital Image Correlation via a 3CCD Camera. Sensors 2023, 23, 6663. [Google Scholar] [CrossRef]
  55. Ballester, M.; Wang, H.; Li, J.; Cossairt, O.; Willomitzer, F. Single-Shot ToF Sensing with Sub-mm Precision Using Conventional CMOS Sensors. arXiv 2022, arXiv:2212.00928. [Google Scholar] [CrossRef]
  56. Ballester, M.; Wang, H.; Li, J.; Cossairt, O.; Willomitzer, F. Single-Shot Synthetic Wavelength Imaging: Sub-mm Precision ToF Sensing with Conventional CMOS Sensors. Opt. Lasers Eng. 2024, 175, 108165. [Google Scholar] [CrossRef]
  57. Long, J.; Du, Z.; Zhang, J.; Xi, J.; Peng, Y. Simulation Study of Compressed Ultrafast 3D Imaging Based on Interferometry. Meas. Sci. Technol. 2024, 35, 085403. [Google Scholar] [CrossRef]
  58. Li, Y.; Jiang, H.; Xu, C.; Liu, L. Event-Driven Fringe Projection Structured Light 3D Reconstruction Based on Time–Frequency Analysis. IEEE Sens. J. 2024, 24, 5097–5106. [Google Scholar] [CrossRef]
  59. Guo, S.; Zhou, Q.; Boulenc, P.; Klekachev, A.; Wang, X.; Lahav, A. Study on 3D Effects on Small Time Delay Integration Image Sensor Pixels. Sensors 2025, 25, 1953. [Google Scholar] [CrossRef]
  60. Li, Y.; Li, Z.; Zhang, C.; Han, M.; Lei, F.; Liang, X.; Wang, X.; Gui, W.; Li, X. Deep Learning-Driven One-Shot Dual-View 3-D Reconstruction for Dual-Projector System. IEEE Trans. Instrum. Meas. 2024, 73, 1–14. [Google Scholar] [CrossRef]
  61. Lin, J.; Dou, Q.; Cheng, Q.; Huang, C.; Lu, P.; Liu, H. Binocular Composite Grayscale Fringe Projection Profilometry Based on Deep Learning for Single-Shot 3D Measurements. Opt. Lasers Eng. 2025, 185, 108701. [Google Scholar] [CrossRef]
  62. Chen, H.; Chen, L. Full-Field Chromatic Confocal Microscopy for Surface Profilometry with Sub-Micrometer Accuracy. Opt. Lasers Eng. 2023, 164, 107384. [Google Scholar] [CrossRef]
  63. Cheng, T.; Liu, X.; Qin, L.; Lu, M.; Xiao, C.; Li, S. A Practical Micro Fringe Projection Profilometry for 3-D Automated Optical Inspection. IEEE Trans. Instrum. Meas. 2022, 71, 1–13. [Google Scholar] [CrossRef]
  64. Dai, R.; Tang, X.; Li, W.; Liu, Y. Self-Correcting and Globally-Consistent 3D Cross-Ratio Invariant Model for Multi-View Microscopic Profilometry. IEEE Trans. Ind. Inform. 2025, 21, 2373–2382. [Google Scholar] [CrossRef]
  65. Zhang, G.; Liu, Y.; Yao, Q.; Deng, H.; Zhao, H.; Zhang, Z.; Yang, S. Multi-View Fringe Projection Profilometry for Surfaces with Intricate Structures and High Dynamic Range. Opt. Express 2024, 32, 19146–19162. [Google Scholar] [CrossRef]
  66. Jiao, S.; Wang, W.; Yang, Z.; Huang, D.; Ji, F.; Xu, M. Modeling and Analysis of Measurement Error for a Four-Axis Optical Profilometer. Opt. Express 2025, 33, 20929–20950. [Google Scholar] [CrossRef] [PubMed]
  67. Rayas, J.; Dávila, A. Optimization of a DIY Parallel-Optical-Axes Profilometer for Compensation of Fringe Divergence. Appl. Opt. 2021, 60, 9790–9798. [Google Scholar] [CrossRef]
  68. Zhou, X.; Jia, S.; Zhang, H.; Lin, Z.; Wen, B.; Wang, L.; Zhang, Y. Single-Frame Fringe Pattern Analysis with Synchronous Phase-Shifting Based on Polarization Interferometry Phase-Measuring Deflectometry (PIPMD). Opt. Lasers Eng. 2024, 177, 108406. [Google Scholar] [CrossRef]
  69. Xing, S.; Guo, H. Iterative Calibration Method for Measurement System Having Lens Distortions in Fringe Projection Profilometry. Opt. Express 2020, 28, 1177–1196. [Google Scholar] [CrossRef] [PubMed]
  70. Yang, S.; Yang, T.; Wu, G.; Wu, Y.; Liu, F. Flexible and Fast Calibration Method for Uni-Directional Multi-Line Structured Light System. Opt. Lasers Eng. 2023, 164, 107525. [Google Scholar] [CrossRef]
  71. Yang, Y.; Miao, Y.; Cai, Z.; Gao, B.; Liu, X.; Peng, X. A Novel Projector Ray-Model for 3D Measurement in Fringe Projection Profilometry. Opt. Lasers Eng. 2022, 149, 106818. [Google Scholar] [CrossRef]
  72. Zhang, Q.; Li, S.; Chen, Y.; Liu, T.; Cai, G.; Li, J. Iterative Orthogonal Normalization Algorithm for Improving Phase Retrieval Accuracy. Opt. Laser Technol. 2025, 182, 112178. [Google Scholar] [CrossRef]
  73. Fatima, G.; Babu, P. PGPAL: A Monotonic Iterative Algorithm for Phase-Retrieval under the Presence of Poisson-Gaussian Noise. IEEE Signal Process. Lett. 2022, 29, 533–537. [Google Scholar] [CrossRef]
  74. Nenov, R.; Nguyen, D.; Balázs, P.; Boţ, R. Accelerated Griffin-Lim Algorithm: A Fast and Provably Converging Numerical Method for Phase Retrieval. IEEE Trans. Signal Process. 2023, 72, 190–202. [Google Scholar] [CrossRef]
  75. Dong, J.; Valzania, L.; Maillard, A.; Pham, T.; Gigan, S.; Unser, M. Phase Retrieval: From Computational Imaging to Machine Learning: A Tutorial. IEEE Signal Process. Mag. 2022, 40, 45–57. [Google Scholar] [CrossRef]
  76. Cha, E. Regularization by Denoising Diffusion Process Meets Deep Relaxation in Phase. Image Vis. Comput. 2024, 151, 105282. [Google Scholar] [CrossRef]
  77. Ni, H.; Ying, L. Fast Phase Factor Finding for Quantum Signal Processing. arXiv 2024, arXiv:2410.06409. [Google Scholar] [CrossRef]
  78. Robert, L.; Kulkarni, R. Phase Retrieval Algorithm in Spatial Phase-Shift Shearography Based on Dynamic Mode Decomposition. Appl. Opt. 2025, 64, 6528–6533. [Google Scholar] [CrossRef]
  79. Wang, X.; Zhong, Z.; Chen, H.; Zhu, D.; Zheng, T.; Huang, W. High-Precision Laser Self-Mixing Displacement Sensor Based on Orthogonal Signal Phase Multiplication Technique. Photonics 2023, 10, 575. [Google Scholar] [CrossRef]
  80. Wu, H.; Zhu, Z.; Bao, Q.; Wang, W.; Ye, L.; Song, K.; Sun, X. Improved Dynamic Range in Fiber-Optic Acoustic Sensing Systems with Enhanced Phase Demodulation Structure. IEEE Sens. J. 2025, 25, 4541–4554. [Google Scholar] [CrossRef]
  81. Xue, R.; Xie, M. A Novel Acquisition Algorithm Based on a Composite Waveform for Weak CPM Signals. IEEE Commun. Lett. 2024, 28, 1117–1121. [Google Scholar] [CrossRef]
  82. Zhang, T.; Zhang, G.; Yang, M.; Sun, J.; Jiang, J.; Xu, Z. Research on Fast Acquisition and Tracking Algorithm of High Dynamic Weak Spread Spectrum Signal. In Proceedings of the 2024 IEEE 6th International Conference on Civil Aviation Safety and Information Technology (ICCASIT), Hangzhou, China, 23–25 October 2024; pp. 49–54. [Google Scholar] [CrossRef]
  83. Taghdiri, A.; Moftakharzadeh, A.; Zahedi, P.; Safdarkhani, H. Implementation of an Improved Parallel Code Phase Search Algorithm for GPS Signal Acquisition on Zynq SoC. In Proceedings of the 2024 20th CSI International Symposium on Artificial Intelligence and Signal Processing (AISP), Babol, Iran, 21–22 February 2024; pp. 1–6. [Google Scholar] [CrossRef]
  84. Chatterjee, B.; Sarkar, S.; Sinhababu, F. Modification over Dithered DPLL to Reduce the Effect of Narrowband Channel Interference. YMER Digit. 2022. [Google Scholar] [CrossRef]
  85. Zhang, Q.; Liu, H.; Dong, P.; Li, P.; Luo, Z. Multi-Frequency Signal Acquisition and Phase Measurement in Space Gravitational Wave Detection. Rev. Sci. Instrum. 2024, 95, 054501. [Google Scholar] [CrossRef]
  86. Zuo, C.; Huang, L.; Zhang, M.; Chen, Q.; Asundi, A. Temporal Phase Unwrapping Algorithms for Fringe Projection Profilometry: A Comparative Review. Opt. Lasers Eng. 2016, 85, 84–103. [Google Scholar] [CrossRef]
  87. Karout, S. Two-Dimensional Phase Unwrapping. Ph.D. Thesis, Liverpool John Moores University, Liverpool, UK, 2007. [Google Scholar] [CrossRef]
  88. Spoorthi, G.; Gorthi, R.; Gorthi, S. PhaseNet 2.0: Phase Unwrapping of Noisy Data Based on Deep Learning Approach. IEEE Trans. Image Process. 2020, 29, 4862–4872. [Google Scholar] [CrossRef]
  89. Feng, L.; Du, H.; Gu, F.; Cui, J.; Li, Y.; Zhu, Q.; Xu, T.; Zhang, G. Enhancing Phase Unwrapping by Noisy Pixels Identifying Criteria and Iterative Adaptive Filtering. Opt. Express 2025, 33, 31912–31934. [Google Scholar] [CrossRef]
  90. Zhang, Y.; Zhang, Z.; Zhu, Y. Hybrid Phase Unwrapping Algorithm Based on a Quality Graph. Appl. Opt. 2025, 64, 5240–5249. [Google Scholar] [CrossRef]
  91. Gan, W.; Wu, Z.; Jin, D.; Wang, S. Two-Stage Phase Unwrapping Algorithm Based on a Phase Correlation Technique. Appl. Opt. 2025, 64, 1701–1706. [Google Scholar] [CrossRef]
  92. Xie, X.; Zeng, Q. Novel Phase Unwrapping Technique Based on Extended Information Filter. Opt. Lasers Eng. 2021, 142, 106615. [Google Scholar] [CrossRef]
  93. Yin, W.; Chen, Q.; Feng, S.; Tao, T.; Huang, L.; Trusiak, M.; Asundi, A.; Zuo, C. Temporal Phase Unwrapping Using Deep Learning. Sci. Rep. 2019, 9, 20175. [Google Scholar] [CrossRef] [PubMed]
  94. He, X.; Kemao, Q. A Comparative Study on Temporal Phase Unwrapping Methods in High-Speed Fringe Projection Profilometry. Opt. Lasers Eng. 2021, 142, 106613. [Google Scholar] [CrossRef]
  95. Dymerska, B.; Eckstein, K.; Bachratá, B.; Siow, B.; Trattnig, S.; Shmueli, K.; Robinson, S. Phase Unwrapping with a Rapid Open-Source Minimum Spanning Tree Algorithm (ROMEO). Magn. Reson. Med. 2020, 85, 2294–2308. [Google Scholar] [CrossRef]
  96. Xu, C.; Cao, Y.; Wu, H.; Li, H.; Zhang, H.; An, H. Curtain-Type Phase Unwrapping Algorithm. Opt. Eng. 2022, 61, 044103. [Google Scholar] [CrossRef]
  97. Spoorthi, G.; Gorthi, S.; Gorthi, R. PhaseNet: A Deep Convolutional Neural Network for Two-Dimensional Phase Unwrapping. IEEE Signal Process. Lett. 2019, 26, 54–58. [Google Scholar] [CrossRef]
  98. Wang, K.; Li, Y.; Kemao, Q.; Di, J.; Zhao, J. One-Step Robust Deep Learning Phase Unwrapping. Opt. Express 2019, 27, 15100–15115. [Google Scholar] [CrossRef]
  99. Sumanth, K.; Ravi, V.; Gorthi, R. A Multi-Task Learning for 2D Phase Unwrapping in Fringe Projection. IEEE Signal Process. Lett. 2022, 29, 797–801. [Google Scholar] [CrossRef]
  100. Yan, C.; Li, T.; Gao, Y.; Li, S.; Zhang, X.; Zhang, X.; Zhang, D.; Liu, H. A Novel Two-Stage Learning-Based Phase Unwrapping Algorithm via Multimodel Fusion. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2025, 18, 7468–7479. [Google Scholar] [CrossRef]
  101. Zhou, L.; Yu, H.; Lan, Y.; Xing, M. Deep Learning-Based Branch-Cut Method for InSAR Two-Dimensional Phase Unwrapping. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–15. [Google Scholar] [CrossRef]
  102. Glatting, K.; Meyer, J.; Huber, S.; Krieger, G. Quantum Optimization for Phase Unwrapping in SAR Interferometry. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2025, 18, 3488–3500. [Google Scholar] [CrossRef]
  103. Wu, G.; Yang, T.; Liu, F.; Qian, K. Suppressing Motion-Induced Phase Error by Using Equal-Step Phase-Shifting Algorithms in Fringe Projection Profilometry. Opt. Express 2022, 30, 17980–17998. [Google Scholar] [CrossRef]
  104. Yu, J.; Mai, S. Quasi-Pixelwise Motion Compensation for 4-Step Phase-Shifting Profilometry Based on a Phase Error Estimation. Opt. Express 2022, 30, 19055–19068. [Google Scholar] [CrossRef]
  105. Liu, W.; Wang, X.; Chen, Z.; Ding, Y.; Lu, L. Accelerated Phase Deviation Elimination for Measuring Moving Object Shape with Phase-Shifting Profilometry. Photonics 2022, 9, 295. [Google Scholar] [CrossRef]
  106. Wu, Z.; Guo, W.; Zhang, Q. Two-Frequency Phase-Shifting Method vs. Gray-Coded-Based Method in Dynamic Fringe Projection Profilometry: A Comparative Review. Opt. Lasers Eng. 2022, 158, 106995. [Google Scholar] [CrossRef]
  107. Wang, J.; Yang, Y. Phase Extraction Accuracy Comparison Based on Multi-Frequency Phase-Shifting Method in Fringe Projection Profilometry. Measurement 2022, 194, 111525. [Google Scholar] [CrossRef]
  108. Wang, Y.; Cai, J.; Zhang, D.; Chen, X.; Wang, Y. Nonlinear Correction for Fringe Projection Profilometry with Shifted-Phase Histogram Equalization. IEEE Trans. Instrum. Meas. 2022, 71, 1–9. [Google Scholar] [CrossRef]
  109. Lai, X.; Li, Y.; Li, X.; Chen, Z.; Zhang, Q. Estimation Method Based on Extended Kalman Filter for Uncertain Phase Shifts in Phase-Measuring Profilometry. Photonics 2023, 10, 207. [Google Scholar] [CrossRef]
  110. Han, M.; Shi, W.; Lu, S.; Lei, F.; Li, Y.; Wang, X.; Li, X. Internal–External Layered Phase Shifting for Phase Retrieval. IEEE Trans. Instrum. Meas. 2024, 73, 1–13. [Google Scholar] [CrossRef]
  111. Li, L.; Xu, X.; Pang, J.; Wu, J. Study on Gamma Correction for Three-Dimensional Fringe Projection Measurement Based on Attention U-Net Network. Opt. Eng. 2024, 63, 053103. [Google Scholar] [CrossRef]
  112. Seok, J.; An, Y.; Hyun, J. Single-Shot Calibration Method Based on Fourier Transform Profilometry. Opt. Lett. 2024, 49, 4987–4990. [Google Scholar] [CrossRef] [PubMed]
  113. Wang, Z. Single-Shot 3D Reconstruction by Fusion of Fourier Transform Profilometry and Line Clustering. IEEE J. Sel. Top. Signal Process. 2024, 18, 325–335. [Google Scholar] [CrossRef]
  114. Yang, Y.; Tao, R.; Wei, K.; Shi, J. Single-Shot Phase Retrieval from a Fractional Fourier Transform Perspective. IEEE Trans. Signal Process. 2023, 72, 3303–3317. [Google Scholar] [CrossRef]
  115. Deng, L.; Chen, R.; Xu, Y.; Liu, W.; Guan, W.; Hu, Y.; Huang, X.; Xie, Z. Research on Single-Shot Wrapped Phase Extraction Using SEC-UNet3+. Photonics 2025, 12, 369. [Google Scholar] [CrossRef]
  116. Meng, X.; Wang, F.; Liu, J.; Chen, M.; Wang, Y. Phase-Shifting Profilometry Based on Hilbert Transform: An Efficient Phase Unwrapping Algorithm. J. Appl. Phys. 2022, 131, 143101. [Google Scholar] [CrossRef]
  117. Fang, M.; Liu, X.; Zhou, Q. Residue-Guided Phase Unwrapping in Fringe Projection Measurements Using Second Differences. Meas. Sci. Technol. 2023, 35, 025001. [Google Scholar] [CrossRef]
  118. Gao, J.; Jiang, H.; Sun, Z.; Wang, R.; Han, Y. A Parallel InSAR Phase Unwrapping Method Based on Separated Continuous Regions. Remote Sens. 2023, 15, 1370. [Google Scholar] [CrossRef]
  119. Liu, J.; Shan, S.; Xu, P.; Zhang, W.; Li, Z.; Wang, J.; Xie, J. Improved Two-Frequency Temporal Phase Unwrapping Method in Fringe Projection Profilometry. Appl. Phys. B 2024, 130, 81. [Google Scholar] [CrossRef]
  120. Guo, X.; Li, Y.; Qian, J.; Che, Y.; Zuo, C.; Chen, Q.; Lam, E.; Wang, H.; Feng, S. Unifying Temporal Phase Unwrapping Framework Using Deep Learning. Opt. Express 2023, 31, 16659–16675. [Google Scholar] [CrossRef]
  121. Uhlig, D.; Heizmann, M. A Probabilistic Approach for Spatio-Temporal Phase Unwrapping in Multi-Frequency Phase-Shift Coding. IEEE Access 2022, 10, 56048–56057. [Google Scholar] [CrossRef]
  122. Wu, Z.; Wang, T.; Wang, Y.; Wang, R.; Ge, D. Deep-Learning-Based Phase Discontinuity Prediction for 2-D Phase Unwrapping of SAR Interferograms. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–16. [Google Scholar] [CrossRef]
  123. Zhong, W.; Li, J.; Li, X.; Feng, J.; Guo, L.; Li, Z.; Wu, J.; Kong, L. A Discontinuity-Guided Two-Dimensional Phase Unwrapping Method for SAR Interferograms. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2024, 17, 9309–9323. [Google Scholar] [CrossRef]
  124. Zeng, G.; Xu, H.; Wang, Y.; Li, S.; Ren, C. A Modified Minimum Cost Flow Phase Unwrapping Method Based on Reliable Pixel Detection. IEEE Sens. J. 2024, 24, 32684–32693. [Google Scholar] [CrossRef]
  125. Ruan, G.; Cao, Y.; Wu, H.; Wei, Z.; Li, C. Absolute Phase Retrieval Based on Spatial Ternary Phase Coding with Circular Fringe Projection. Opt. Lasers Eng. 2024, 177, 108167. [Google Scholar] [CrossRef]
  126. Li, H.; Cao, Y.; Wan, Y.; Xu, C.; Zhang, H.; An, H.; Wu, H. An Improved Temporal Phase Unwrapping Based on Super-Grayscale Multi-Frequency Grating Projection. Opt. Lasers Eng. 2022, 157, 106990. [Google Scholar] [CrossRef]
  127. Xing, Y.; Han, M.; Qian, X.; Wang, X.; Li, X. A Coarse-Fine Combined Phase Unwrapping Method. In Optical Metrology and Inspection for Industrial Applications XI; SPIE: Bellingham, WA, USA, 2024; Volume 13241, pp. 82–90. [Google Scholar] [CrossRef]
  128. Zhang, Z.; Wang, X.; Liu, C.; Han, Z.; Xiao, Q.; Zhang, Z.; Feng, W.; Liu, M.; Lu, Q. Efficient and Robust Phase Unwrapping Method Based on SFNet. Opt. Express 2024, 32, 15410–15432. [Google Scholar] [CrossRef]
  129. Ochoa, N.; García-Isáis, C. Analysis of Phase Errors Introduced by Single-Shot Color Phase-Shifting Profilometry on Colored Objects. Opt. Eng. 2022, 61, 084102. [Google Scholar] [CrossRef]
  130. Cheng, B.; Yao, Y.; He, Y.; Huang, Z.; Guo, M.; Cao, J.; Huang, X.; Dong, X.; Zang, Z.; Lu, Y.; et al. Compressive Phase-Shifting Fringe Projection Profilometry for Accelerating 3D Metrology. Opt. Lett. 2025, 50, 2942–2945. [Google Scholar] [CrossRef]
  131. Han, S.; Yang, Y.; Zhang, X.; Zhao, X.; Li, X. An Improved Complementary Gray Code Combined with Phase-Shifting Profilometry Based on Phase Adjustment. Meas. Sci. Technol. 2025, 36, 045202. [Google Scholar] [CrossRef]
  132. Wei, Z.; Cao, Y.; Wu, H.; Xu, C.; Ruan, G.; Wu, F.; Li, C. Dynamic Phase-Differencing Profilometry with Number-Theoretical Phase Unwrapping and Interleaved Projection. Opt. Express 2024, 32, 19578–19593. [Google Scholar] [CrossRef] [PubMed]
  133. Liu, Y.; Blunt, L.; Zhang, Z.; Rahman, H.; Gao, F.; Jiang, X. In-Situ Areal Inspection of Powder Bed for Electron Beam Fusion System Based on Fringe Projection Profilometry. Addit. Manuf. 2020, 31, 100940. [Google Scholar] [CrossRef]
  134. Remani, A.; Rossi, A.; Peña, F.; Thompson, A.; Dardis, J.; Jones, N.; Senin, N.; Leach, R. In-Situ Monitoring of Laser-Based Powder Bed Fusion Using Fringe Projection. Addit. Manuf. 2024, 80, 104334. [Google Scholar] [CrossRef]
  135. Li, B. High-Speed 3-D Optical Sensing for Manufacturing Research and Industrial Sensing Applications. Trans. Energy Syst. Eng. Appl. 2022, 3, 1–12. [Google Scholar] [CrossRef]
  136. Zhang, S. Rapid and Automatic Optimal Exposure Control for Digital Fringe Projection Technique. Opt. Lasers Eng. 2020, 128, 106029. [Google Scholar] [CrossRef]
  137. Song, X.; Wang, L. Y-FFC Net for 3-D Reconstruction of Highly Reflective Surfaces. IEEE Trans. Ind. Inform. 2024, 20, 13966–13974. [Google Scholar] [CrossRef]
  138. Yuan, H.; Li, Y.; Zhao, J.; Zhang, L.; Li, W.; Huang, Y.; Gao, X.; Xie, Q. An Adaptive Fringe Projection Method for 3-D Measurement with High-Reflective Surfaces. Opt. Laser Technol. 2024, 176, 110062. [Google Scholar] [CrossRef]
  139. Chen, Z.; Zhu, M.; Sun, C.; Liu, Y.; Tan, J. Fringe Projection Profilometry for Three-Dimensional Measurement of Aerospace Blades. Symmetry 2024, 16, 350. [Google Scholar] [CrossRef]
  140. Jiang, H.; Wang, Q.; Zhao, H.; Li, X. High-Precision Composite 3D Shape Measurement of Aeroengine Blade Based on Parallel Single-Pixel Imaging and High-Dynamic Range N-Step Fringe Projection Profilometry. Opt. Laser Technol. 2024, 176, 110085. [Google Scholar] [CrossRef]
  141. Guo, Y.; He, W.; Zhong, K.; Zhuang, C.; Chen, T.; Zhang, H. Automatic and High-Accuracy Matching Method for a Blade Inspection System Integrating Fringe Projection Profilometry and Conoscopic Holography. Meas. Sci. Technol. 2023, 34, 075011. [Google Scholar] [CrossRef]
  142. Mangano, F.; Yang, K.; Lerner, H.; Admakin, O.; Mangano, C. Artificial Intelligence and Mixed Reality for Dental Implant Planning: A Technical Note. Clin. Implant. Dent. Relat. Res. 2024, 26, 371–380. [Google Scholar] [CrossRef]
  143. Saini, R.; Bavabeedu, S.; Quadri, S.; Gurumurthy, V.; Kanji, M.; Kuruniyan, M.; Binduhayyim, R.; Avetisyan, A.; Heboyan, A. Impact of 3D Imaging Techniques and Virtual Patients on the Accuracy of Planning and Surgical Placement of Dental Implants: A Systematic Review. Digit. Health 2023, 10, 20552076241253550. [Google Scholar] [CrossRef]
  144. Mangano, F.; Admakin, O.; Lerner, H.; Mangano, C. Artificial Intelligence and Augmented Reality for Guided Implant Surgery Planning: A Proof of Concept. J. Dent. 2023, 136, 104485. [Google Scholar] [CrossRef]
  145. Nava, P.; Sabri, H.; Calatrava, J.; Zimmer, J.; Chen, Z.; Li, J.; Wang, H. Ultrasonography-Guided Dental Implant Surgery: A Feasibility Study. Clin. Implant. Dent. Relat. Res. 2024, 27, 130–140. [Google Scholar] [CrossRef] [PubMed]
  146. Wang, J.; Wang, B.; Liu, Y.; Luo, Y.; Wu, Y.; Xiang, L.; Yang, X.; Qu, Y.; Tian, T.; Man, Y. Recent Advances in Digital Technology in Implant Dentistry. J. Dent. Res. 2024, 103, 787–799. [Google Scholar] [CrossRef]
  147. Lee, Y.; Oh, J.; Kim, S. Virtual Surgical Plan with Custom Surgical Guide for Orthognathic Surgery: Systematic Review and Meta-Analysis. Maxillofac. Plast. Reconstr. Surg. 2024, 46, 39. [Google Scholar] [CrossRef] [PubMed]
  148. Day, K.; Kelley, P.; Harshbarger, R.; Dorafshar, A.; Kumar, A.; Steinbacher, D.; Patel, P.; Combs, P.; Levine, J. Advanced Three-Dimensional Technologies in Craniofacial Reconstruction. Plast. Reconstr. Surg. 2021, 148, 94e–108e. [Google Scholar] [CrossRef]
  149. Shilo, D.; Capucha, T.; Goldstein, D.; Bereznyak, Y.; Emodi, O.; Rachmiel, A. Treatment of Facial Deformities Using 3D Planning and Printing of Patient-Specific Implants. J. Vis. Exp. 2020, 159, e60930. [Google Scholar] [CrossRef]
  150. Mazzocchetti, S.; Spezialetti, R.; Bevini, M.; Badiali, G.; Lisanti, G.; Salti, S.; Di Stefano, L. Neural Shape Completion for Personalized Maxillofacial Surgery. Sci. Rep. 2024, 14, 11284. [Google Scholar] [CrossRef] [PubMed]
  151. Padhye, S.; Messinger, D.; Ferwerda, J. A Practitioner’s Guide to Fringe Projection Profilometry. Arch. Conf. Proc. 2021, 1, 13–20. [Google Scholar] [CrossRef]
  152. Zhang, L.; Chen, Q.; Zuo, C.; Feng, S. Real-Time High Dynamic Range 3D Measurement Using Fringe Projection. Opt. Express 2020, 28, 24363–24378. [Google Scholar] [CrossRef]
  153. Del Carmen Casas Pérez, M.; Chávez, G.; Rivera, F.; Sarocchi, D.; Mares, C.; Barrientos, B. Fringe Projection Method for 3D High-Resolution Reconstruction of Oil Painting Surfaces. Heritage 2023, 6, 184. [Google Scholar] [CrossRef]
  154. Bitelli, G.; Forte, A.; Tini, M.; Belfiori, F.; Tirincanti, A. High-Detail 3D Reconstruction and Digital Strategies for the Enhancement of Archaeological Properties in Museums. Heritage 2025, 8, 49. [Google Scholar] [CrossRef]
  155. Bräuer-Burchardt, C.; Munkelt, C.; Bleier, M.; Heinze, M.; Gebhart, I.; Kühmstedt, P.; Notni, G. Underwater 3D Scanning System for Cultural Heritage Documentation. Remote Sens. 2023, 15, 1864. [Google Scholar] [CrossRef]
  156. Pepe, M.; Costantino, D.; Alfio, V.; Restuccia, A.; Papalino, N. Scan to BIM for the Digital Management and Representation in 3D GIS Environment of Cultural Heritage Sites. J. Cult. Herit. 2021, 53, 87–96. [Google Scholar] [CrossRef]
  157. Miao, Y.; Yang, Y.; Hou, Q.; Wang, Z.; Liu, X.; Tang, Q.; Peng, X.; Gao, B. High-Efficiency 3D Reconstruction with a Uniaxial MEMS-Based Fringe Projection Profilometry. Opt. Express 2021, 29, 34243–34257. [Google Scholar] [CrossRef]
  158. Han, M.; Lei, F.; Shi, W.; Lu, S.; Li, X. Uniaxial MEMS-Based 3D Reconstruction Using Pixel Refinement. Opt. Express 2022, 31, 536–554. [Google Scholar] [CrossRef]
  159. Qu, J.; Gao, H.; Zhang, R.; Cao, Y.; Zhou, W.; Xie, H. High-Flexibility and High-Accuracy Phase Delay Calibration Method for MEMS-Based Fringe Projection Systems. Opt. Express 2022, 31, 1049–1066. [Google Scholar] [CrossRef]
  160. Zheng, Y.; Wang, S.; Li, Q.; Li, B. Fringe Projection Profilometry by Conducting Deep Learning from Its Digital Twin. Opt. Express 2020, 28, 36568–36583. [Google Scholar] [CrossRef] [PubMed]
  161. Qian, J.; Feng, S.; Li, Y.; Tao, T.; Han, J.; Chen, Q.; Zuo, C. Single-Shot Absolute 3D Shape Measurement with Deep-Learning-Based Color Fringe Projection Profilometry. Opt. Lett. 2020, 45, 1842–1845. [Google Scholar] [CrossRef] [PubMed]
  162. Zhu, Z.; Li, M.; Zhou, F.; You, D. Stable 3D Measurement Method for High Dynamic Range Surfaces Based on Fringe Projection Profilometry. Opt. Lasers Eng. 2023, 171, 107542. [Google Scholar] [CrossRef]
  163. Wei, H.; Li, H.; Li, X.; Wang, S.; Deng, G.; Zhou, S. An Efficient 3D Measurement Method for Shiny Surfaces Based on Fringe Projection Profilometry. Sensors 2025, 25, 1942. [Google Scholar] [CrossRef]
  164. Wu, T.; Liu, X.; Zhao, Y.; Qi, L. Single-Shot 3-D Dense Reconstruction Using Multi-Frequency Fringe Projection for Industrial Applications. IEEE Trans. Ind. Inform. 2024, 20, 12747–12757. [Google Scholar] [CrossRef]
  165. An, H.; Cao, Y.; Li, H.; Zhang, H. High-Speed 3-D Reconstruction Based on Phase Shift Coding and Interleaved Projection. Expert Syst. Appl. 2023, 234, 121067. [Google Scholar] [CrossRef]
  166. Feng, S.; Zuo, C.; Zhang, L.; Tao, T.; Hu, Y.; Yin, W.; Qian, J.; Chen, Q. Calibration of Fringe Projection Profilometry: A Comparative Review. Opt. Lasers Eng. 2021, 143, 106622. [Google Scholar] [CrossRef]
  167. Ly, K.; Lam, V.; Wang, Z. Generalized Fringe-to-Phase Framework for Single-Shot 3D Reconstruction Integrating Structured Light with Deep Learning. Sensors 2023, 23, 4209. [Google Scholar] [CrossRef]
  168. Suresh, V. Robotic Vision and Assembly Using Fringe Projection. In Proceedings of the Robotics Conference, Xi’an, China, 12–16 July 2021. [Google Scholar]
  169. Zuo, R.; Wei, S.; Wang, Y.; Kam, M.; Opfermann, J.; Hsieh, M.; Krieger, A.; Kang, J. Deep Learning-Based Single-Shot Fringe Projection Profilometry. In Advanced Biomedical and Clinical Diagnostic and Surgical Guidance Systems XXII; SPIE: Bellingham, WA, USA, 2024; Volume 12831, pp. 23–27. [Google Scholar] [CrossRef]
  170. Xu, J.; Tian, J. Accelerating Fringe Projection Profilometry to 100k fps at High Resolution Using Deep Learning. Light Sci. Appl. 2025, 14, 139. [Google Scholar] [CrossRef]
  171. Wang, F.; Wang, C.; Guan, Q. Single-Shot Fringe Projection Profilometry Based on Deep Learning and Computer Graphics. Opt. Express 2021, 29, 8024–8040. [Google Scholar] [CrossRef] [PubMed]
  172. Kernen, F.; Kramer, J.; Wanner, L.; Wismeijer, D.; Nelson, K.; Flügge, T. A Review of Virtual Planning Software for Guided Implant Surgery: Data Import and Visualization, Drill Guide Design and Manufacturing. BMC Oral Health 2020, 20, 320. [Google Scholar] [CrossRef]
  173. Rothlauf, S.; Pieralli, S.; Wesemann, C.; Burkhardt, F.; Vach, K.; Kernen, F.; Spies, B. Influence of Planning Software and Surgical Template Design on the Accuracy of Static Computer-Assisted Implant Surgery Performed Using Surgical Guides Fabricated with Material Extrusion Technology: An In Vitro Study. J. Dent. 2023, 136, 104482. [Google Scholar] [CrossRef]
  174. Stumpel, L.; Bedrossian, E.; Revilla-León, M. Virtual Implant Planning and Surgical Guide for Single Implant Placement Fabricated from a Digital Diagnostic Cast, Radiographic Imaging, and Open-Source Software. Int. J. Oral Maxillofac. Implant. 2024, 39, 731–736. [Google Scholar] [CrossRef]
  175. Haidar, Y.; Belvedere, C.; Spazzoli, B.; Donati, D.; Leardini, A. 3D Surgical Planning for Customized Devices in Orthopaedics: Applications in Massive Hip Reconstructions of Oncological Patients. Appl. Sci. 2024, 14, 11054. [Google Scholar] [CrossRef]
  176. Mayer, H.; Coloccini, A.; Viñas, J. Three-Dimensional Printing in Breast Reconstruction: Current and Promising Applications. J. Clin. Med. 2024, 13, 3278. [Google Scholar] [CrossRef]
  177. Markodimitraki, L.; Harkel, T.; Bleys, R.; Stegeman, I.; Thomeer, H. Cochlear Implant Positioning and Fixation Using 3D-Printed Patient-Specific Surgical Guides: A Cadaveric Study. PLoS ONE 2022, 17, e0270517. [Google Scholar] [CrossRef]
  178. Ke, W.; Liu, G.; Liu, M.; Sun, Z.; Lei, X.; Liu, Q.; Song, X. High-Precision 3D Information Acquisition of Single-Frame Fringe Projection Based on Deep Learning. In Fifth Optics Frontier Conference (OFS 2025); SPIE: Bellingham, WA, USA, 2025; Volume 13648, pp. 201–204. [Google Scholar] [CrossRef]
  179. Tan, C.; Song, W. Weakly Supervised Depth Estimation for 3D Imaging with Single-Camera Fringe Projection Profilometry. Sensors 2024, 24, 1701. [Google Scholar] [CrossRef] [PubMed]
  180. Barrile, V.; Bernardo, E.; Fotia, A.; Bilotta, G. Integration of Laser Scanner, Ground-Penetrating Radar, 3D Models and Mixed Reality for Artistic, Archaeological and Cultural Heritage Dissemination. Heritage 2022, 5, 80. [Google Scholar] [CrossRef]
  181. Yang, G.; Wang, Y. High-Resolution Laser Fringe Pattern Projection Based on MEMS Micro-Vibration Mirror Scanning for 3D Measurement. Opt. Laser Technol. 2021, 142, 107189. [Google Scholar] [CrossRef]
  182. Liu, Y.; Fu, Y.; Cai, X.; Zhong, K.; Guan, B. A Novel High Dynamic Range 3D Measurement Method Based on Adaptive Fringe Projection Technique. Opt. Lasers Eng. 2020, 128, 106004. [Google Scholar] [CrossRef]
  183. Sun, J.; Zhang, Q. A 3D Shape Measurement Method for High-Reflective Surface Based on Accurate Adaptive Fringe Projection. Opt. Lasers Eng. 2022, 157, 106994. [Google Scholar] [CrossRef]
  184. Wang, J.; Yang, Y. A New Method for High Dynamic Range 3D Measurement Combining Adaptive Fringe Projection and Original-Inverse Fringe Projection. Opt. Lasers Eng. 2023, 168, 107490. [Google Scholar] [CrossRef]
  185. Xu, P.; Liu, J.; Wang, J. High Dynamic Range 3D Measurement Technique Based on Adaptive Fringe Projection and Curve Fitting. Appl. Opt. 2023, 62, 3265–3274. [Google Scholar] [CrossRef]
  186. Zhang, M.; Chen, C.; Xie, L.; Zhang, C. Accurate Measurement of High-Reflective Surfaces Based on Adaptive Fringe Projection Technique. Opt. Lasers Eng. 2024, 176, 107820. [Google Scholar] [CrossRef]
  187. Sun, X.; Luo, Z.; Wang, S.; Wang, J.; Zhang, Y.; Zou, D. A Simple Polarization-Based Fringe Projection Profilometry Method for Three-Dimensional Reconstruction of High-Dynamic-Range Surfaces. Photonics 2024, 11, 27. [Google Scholar] [CrossRef]
  188. Wan, M.; Kong, L. Single-Shot 3D Measurement of Highly Reflective Objects with Deep Learning. Opt. Express 2023, 31, 14965–14985. [Google Scholar] [CrossRef]
  189. Wang, Z.; Li, J.; Wan, Y.; Luo, L.; Gao, X. Single-Shot Measurement of Objects with High Reflectivity Surfaces Based on Deep Learning. J. Opt. 2024, 27, 035702. [Google Scholar] [CrossRef]
  190. Landmann, M.; Speck, H.; Dietrich, P.; Heist, S.; Kühmstedt, P.; Tünnermann, A.; Notni, G. High-Resolution Sequential Thermal Fringe Projection Technique for Fast and Accurate 3D Shape Measurement of Transparent Objects. Appl. Opt. 2021, 60, 2362–2371. [Google Scholar] [CrossRef]
  191. Xu, J.; Zhang, S. Status, Challenges, and Future Perspectives of Fringe Projection Profilometry. Opt. Lasers Eng. 2020, 135, 106193. [Google Scholar] [CrossRef]
  192. Wang, Z.; Li, K.; Gao, N.; Meng, Z.; Zhang, Z. High Dynamic Range 3D Shape Measurement Based on Crosstalk Characteristics of a Color Camera. Opt. Express 2023, 31, 38318–38333. [Google Scholar] [CrossRef] [PubMed]
  193. Qin, Y.; Wan, S.; Wan, Y.; Weng, J.; Liu, W.; Gong, Q. Direct and Accurate Phase Unwrapping with Deep Neural Network. Appl. Opt. 2020, 59, 7258–7267. [Google Scholar] [CrossRef] [PubMed]
  194. Zhou, L.; Yu, H.; Lan, Y. Deep Convolutional Neural Network-Based Robust Phase Gradient Estimation for Two-Dimensional Phase Unwrapping Using SAR Interferograms. IEEE Trans. Geosci. Remote Sens. 2020, 58, 4653–4665. [Google Scholar] [CrossRef]
  195. Zhao, J.; Liu, L.; Wang, T.; Wang, X.; Du, X.; Hao, R.; Liu, J.; Liu, Y.; Zhang, J. VDE-Net: A Two-Stage Deep Learning Method for Phase Unwrapping. Opt. Express 2022, 30, 39794–39815. [Google Scholar] [CrossRef]
  196. Wang, S.; Chen, T.; Shi, M.; Zhu, D.; Wang, J. Single-Frequency and Accurate Phase Unwrapping Method Using Deep Learning. Opt. Lasers Eng. 2023, 169, 107409. [Google Scholar] [CrossRef]
  197. Yang, W.; He, Y.; Zhu, Q.; Zhang, L.; Jin, L. Unwrap-Net: A Deep Neural Network-Based InSAR Phase Unwrapping Method Assisted by Airborne LiDAR Data. ISPRS J. Photogramm. Remote Sens. 2024, 211, 45–59. [Google Scholar] [CrossRef]
  198. Li, Z.; Zhang, W.; Shan, S.; Xu, P.; Liu, J.; Wang, J.; Wang, S.; Yang, Y. Dual-Frequency Phase Unwrapping Based on Deep Learning Driven by Simulation Dataset. Opt. Lasers Eng. 2024, 176, 108168. [Google Scholar] [CrossRef]
  199. Zhou, L.; Yu, H.; Lan, Y.; Gong, S.; Xing, M. CANet: An Unsupervised Deep Convolutional Neural Network for Efficient Cluster-Analysis-Based Multibaseline InSAR Phase Unwrapping. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–15. [Google Scholar] [CrossRef]
  200. Wang, D.; Zhou, W.; Zhang, Z.; Kang, Y.; Meng, F.; Wang, N. Anti-Crosstalk Absolute Phase Retrieval Method for Microscopic Fringe Projection Profilometry Using Temporal Frequency-Division Multiplexing. Opt. Express 2023, 31, 39528–39545. [Google Scholar] [CrossRef] [PubMed]
  201. Fan, S.; Liu, S.; Zhang, X.; Huang, H.; Liu, W.; Jin, P. Unsupervised Deep Learning for 3D Reconstruction with Dual-Frequency Fringe Projection Profilometry. Opt. Express 2021, 29, 32547–32567. [Google Scholar] [CrossRef] [PubMed]
  202. Wei, S.; Kam, M.; Wang, Y.; Opfermann, J.; Saeidi, H.; Hsieh, M.; Krieger, A.; Kang, J. Numerical Landmark Detection Algorithm for Fringe Projection Profilometry during Autonomous Robotic Suturing. In Advanced Biomedical and Clinical Diagnostic and Surgical Guidance Systems XX; SPIE: Bellingham, WA, USA, 2022; Volume 11962, p. 119620A. [Google Scholar] [CrossRef]
Figure 1. Schematic of the DFPP triangulation geometry. The projector and camera constitute a stereo-like configuration with baseline distance d. The camera optical axis is normal to the reference plane, whereas the projector illuminates the object surface at an angle. A surface point P, located at height z, is observed by the camera at distance H and illuminated by the projector over a distance L, resulting in a triangulation angle θ. The height-dependent phase modulation of the projected fringe pattern provides the basis for three-dimensional surface reconstruction by optical triangulation.
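The phase-to-height relation implied by this geometry can be made concrete. If the camera axis is normal to the reference plane and the projector and camera pupils sit at the same stand-off distance H separated by baseline d, similar triangles give a fringe displacement of z·d/(H − z) on the reference plane for a point at height z, hence Δφ = 2π f₀ · z d/(H − z), which inverts to z = H·Δφ/(Δφ + 2π f₀ d). The sketch below is a minimal illustration under those assumptions; the symbol names and bench values are hypothetical, not the review's notation.

```python
import numpy as np

def height_to_phase(z, H, d, f0):
    """Phase shift produced by a surface point at height z.

    Similar triangles: the point displaces the observed fringe by
    z*d/(H - z) on the reference plane, so
    delta_phi = 2*pi*f0 * z*d/(H - z).
    Assumes camera axis normal to the reference plane, projector and
    camera pupils at equal stand-off H, baseline d, and reference-plane
    fringe frequency f0 (fringes per unit length).
    """
    return 2 * np.pi * f0 * z * d / (H - z)

def phase_to_height(delta_phi, H, d, f0):
    """Invert the relation above: z = H*delta_phi / (delta_phi + 2*pi*f0*d)."""
    return H * delta_phi / (delta_phi + 2 * np.pi * f0 * d)

# Round-trip check with plausible (hypothetical) bench values in mm.
H, d, f0 = 1000.0, 300.0, 0.1   # stand-off, baseline, fringes/mm
z = 10.0
dphi = height_to_phase(z, H, d, f0)   # ~1.90 rad for these values
z_rec = phase_to_height(dphi, H, d, f0)
```

Note that for z ≪ H the relation reduces to the familiar first-order approximation z ≈ H·Δφ/(2π f₀ d), i.e., height sensitivity grows with both fringe frequency and baseline.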
Figure 2. Overview of the digital fringe projection profilometry (DFPP) reconstruction pipeline. The projector sequentially projects phase-shifted sinusoidal fringe patterns, illustrated as representative projected images I_p^1, I_p^2, and I_p^3, while the camera records the corresponding deformed fringe images, shown as example captured images I_c^1, I_c^2, and I_c^3. The superscripts denote representative frames within the projection–capture sequence and are used for illustration purposes only. From the captured images, the wrapped and unwrapped phase maps are computed and subsequently converted into three-dimensional surface geometry through calibration and triangulation. Rendering elements such as color and hair are included for visualization purposes only and are not direct outputs of the reconstruction process.
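The pipeline of Figure 2 can be exercised end to end on synthetic data. The sketch below assumes a linearized phase-to-height relation and illustrative parameters (image size, fringe frequency, phase sensitivity): it "captures" N phase-shifted sinusoids deformed by a known Gaussian surface, recovers the wrapped phase with the standard N-step estimator, unwraps it row-wise, and converts phase back to height.

```python
import numpy as np

# Synthetic end-to-end sketch of the DFPP pipeline (illustrative parameters,
# linearized phase-to-height relation).
H_px, W_px, N = 64, 256, 4
f0 = 8 / W_px                            # fringe frequency, periods per pixel
sensitivity = 0.04                       # rad of phase per unit height (~2*pi*f0*d/L)

x = np.arange(W_px)
z_true = 0.05 * np.exp(-((x - W_px / 2) ** 2) / (2 * 30.0 ** 2))  # Gaussian bump
z_true = np.tile(z_true, (H_px, 1))
phase_mod = sensitivity * z_true         # height-induced phase modulation

# "Capture": N phase-shifted deformed fringe images
deltas = 2 * np.pi * np.arange(N) / N
captured = [0.5 + 0.4 * np.cos(2 * np.pi * f0 * x + phase_mod + dlt)
            for dlt in deltas]

# N-step phase retrieval: wrapped phase in [-pi, pi)
num = sum(I * np.sin(dlt) for I, dlt in zip(captured, deltas))
den = sum(I * np.cos(dlt) for I, dlt in zip(captured, deltas))
phi_wrapped = np.arctan2(-num, den)

phi = np.unwrap(phi_wrapped, axis=1)     # row-wise spatial unwrapping
delta_phi = phi - 2 * np.pi * f0 * x     # remove the reference-plane carrier
z_rec = delta_phi / sensitivity          # linear phase-to-height conversion
```

Because the forward model here is noiseless and linear by construction, z_rec reproduces z_true almost exactly; real captures add camera noise, projector nonlinearity, and calibration error, which is precisely what the uncertainty analysis in this review addresses.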
Figure 3. Fringe formation and spatial–temporal modulation strategies in DFPP: (a) Spatial modulation with varying fringe frequencies illustrating the sensitivity–ambiguity trade-off. (b) Temporal phase-shifting sequence with N = 4 phase offsets δ(t). (c) Intensity profile showing the background term A(u,v), modulation amplitude B(u,v), and encoded phase φ(u,v). (d) Multi-frequency modulation combining coarse and fine patterns for unambiguous phase reconstruction.
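The intensity model of panel (c), I_n = A + B·cos(φ + δ_n) with equally spaced shifts δ_n = 2πn/N, admits a closed-form least-squares recovery of all three per-pixel unknowns. The sketch below demonstrates it at a single pixel with illustrative values for A, B, and φ.

```python
import numpy as np

# Per-pixel recovery of A(u,v), B(u,v), and phi(u,v) from an N-step
# phase-shifted sequence I_n = A + B*cos(phi + delta_n), delta_n = 2*pi*n/N.
# The values of A_true, B_true, phi_true are illustrative.
N = 4
deltas = 2 * np.pi * np.arange(N) / N
A_true, B_true, phi_true = 0.5, 0.3, 1.2

frames = np.array([A_true + B_true * np.cos(phi_true + d) for d in deltas])

A_est = frames.mean()                            # background term
Bc = (2 / N) * np.sum(frames * np.cos(deltas))   # = B*cos(phi)
Bs = -(2 / N) * np.sum(frames * np.sin(deltas))  # = B*sin(phi)
B_est = np.hypot(Bc, Bs)                         # modulation amplitude
phi_est = np.arctan2(Bs, Bc)                     # wrapped phase in [-pi, pi)
```

The modulation amplitude B_est doubles as a per-pixel quality metric: pixels with low B (shadow, saturation, low reflectance) are commonly masked before unwrapping.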
Table 1. Comparative performance of projection technologies in DFPP.

Technology | Contrast | Refresh Rate | Resolution | Metrological Strengths
DLP | 2000:1–5000:1 | Up to ∼10 kHz | HD–2K | High SNR, dynamic scenes, industrial robustness
LCD | 1000:1–2000:1 | 60–240 Hz | HD–4K | Low cost, smooth grayscale modulation
LCoS | ≥3000:1 | 60–120 Hz | 2K–4K | Maximal fidelity, microscale measurement
Binary/MEMS | High (binary) | >10 kHz | Varies | Ultra-fast, resistant to nonlinearities
Table 2. Comparison of CCD, CMOS, and hybrid sensors for DFPP.

Sensor Type | Strengths | Limitations | Applications + References
CCD | Low noise, high QE, uniform pixels | Low speed, high power | Microscale, holography [52,54]
CMOS | High speed, low cost, integrated ADC | Higher pixel noise | Dynamic 3D imaging [55,57]
3CCD/Hybrid | No crosstalk, high color fidelity | Cost, complexity | Shape + deformation [54]
TDI | High sensitivity at speed | Complex optics | Industrial inspection [59]
Table 3. Comparison of geometric configurations for DFPP.

Configuration | Advantages | Limitations | Applications | Precision
Monocular | Compact, low cost | Occlusions, reflectance sensitivity | Education, small objects | 10–50 μm
Binocular | Better robustness, stereo fusion | More calibration steps | Industrial inspection | 5–20 μm
Multi-projector | Minimal shadows, full coverage | Sync complexity | Complex shapes | 1–10 μm
Table 4. Optical configuration comparison.

Optics | Strengths | Limitations | Precision | Use Cases
Conventional | Flexible FOV | Distortion | 10–100 μm | General
Telecentric | Constant magnification, no perspective distortion | Narrow FOV | 1–10 μm | Precision metrology
Scheimpflug | Large DOF | Complex geometry | 5–20 μm | Large objects
Multi-axis | Occlusion-free | Calibration-heavy | 1–5 μm | Industrial/robotics
Table 5. Representative methods for phase acquisition and phase unwrapping with characteristic strengths and sample references.

Category | Key Characteristics + Representative Works
Iterative/Optimization | Robust under noise; convergence guarantees for specific domains; widely applicable in optics and signal processing [72,73,74]
Deep Learning/Diffusion | High reconstruction fidelity; strong robustness; direct wrapped-to-unwrapped prediction; discontinuity learning [75,76,97,98,100]
Temporal/Multi-Frequency | Ambiguity removal via frequency hierarchy; strong performance in dynamic measurement and long-range metrology [85,86,93,94]
Hybrid/Quality-Guided | Combined spatial–temporal reliability; improved tolerance to residues, low SNR, and discontinuities [89,90,95,96]
Spatial Classical Methods | Path-following and minimum-norm solvers; well-established; sensitive to noise and discontinuities [87,88,91]
Quantum/Advanced Optimization | Global optimization for noisy interferometry; emerging technology [77,102]
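The temporal/multi-frequency principle in the table — removing the 2π ambiguity via a frequency hierarchy — reduces, in its simplest two-frequency form, to predicting the high-frequency phase from an unambiguous low-frequency one and rounding to the fringe order. The noiseless sketch below uses an illustrative frequency ratio and a synthetic phase ramp; in practice the low-frequency phase is itself a noisy measurement, which bounds the usable ratio.

```python
import numpy as np

# Two-frequency temporal phase unwrapping sketch (frequency hierarchy).
# A unit-frequency pattern yields an unambiguous phase; scaling it by the
# frequency ratio predicts the high-frequency phase and fixes the fringe
# order k. Ratio and array size are illustrative.
ratio = 16                                                  # f_high / f_low
rng = np.random.default_rng(0)
phi_true = np.sort(rng.uniform(0, 2 * np.pi * ratio, 500))  # true high-freq phase

def wrap(p):
    """Wrap phase into [-pi, pi)."""
    return (p + np.pi) % (2 * np.pi) - np.pi

phi_high = wrap(phi_true)          # wrapped high-frequency measurement
phi_low = phi_true / ratio         # unit-frequency phase: no ambiguity in field

k = np.round((ratio * phi_low - phi_high) / (2 * np.pi))    # fringe order
phi_unwrapped = phi_high + 2 * np.pi * k
```

Unlike spatial path-following, each pixel is resolved independently, so the method tolerates discontinuities and isolated regions, at the cost of extra projected patterns.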
Table 6. Representative application domains of DFPP with key characteristics and sample references.

Application Domain | Key Use Cases + Representative Works
Industrial & Aerospace | In-line inspection, AM process monitoring, reflective surface metrology, turbine blade measurement, multi-view inspection, robotic vision integration [133,134,135,136,137]; [65,138,139,140]; [29,141]
Biomedical & Healthcare | Intraoral scanning, surgical planning (VSP), orthodontic design, implant guidance, tissue and biomechanics analysis, gait and posture assessment, personalized implants [142,143,144]; [145,146,147,148]; [149,150]
Cultural Heritage | High-resolution digitization of artifacts, paintings, and manuscripts; microcrack detection; multi-view scanning, underwater and field documentation, preservation of fragile objects [151,152,153]; [154,155,156]
Microscale & Materials | MEMS inspection, semiconductor metrology, PCB analysis, reflective metal measurement, HDR and crosstalk mitigation, deep-learning single-shot microscale reconstruction [37,157,158,159]; [28,160,161,162,163]