Article

An Improved Circular Fringe Fourier Transform Profilometry

Department of Opto-Electronics, Sichuan University, Chengdu 610065, China
*
Author to whom correspondence should be addressed.
Sensors 2022, 22(16), 6048; https://doi.org/10.3390/s22166048
Submission received: 19 July 2022 / Revised: 4 August 2022 / Accepted: 11 August 2022 / Published: 12 August 2022
(This article belongs to the Special Issue Advances in 3D Measurement Technology and Sensors)

Abstract:
Circular fringe projection profilometry (CFPP), as a branch of carrier fringe projection profilometry, has attracted research interest in recent years. Circular fringe Fourier transform profilometry (CFFTP) has been used to measure out-of-plane objects quickly because the absolute phase can be obtained with fewer fringes. However, the existing CFFTP method must solve a quadratic equation to calculate the pixel displacement related to the height of the object, and the root-seeking process may fail because of phase error and the non-uniform period of the reference fringe. In this paper, an improved CFFTP method based on a non-telecentric model is presented. By introducing an extra projected circular fringe whose circular center is translated, the displacement is calculated by solving a linear equation instead of a quadratic one. In addition, Gerchberg iteration is employed to eliminate the phase error of the region close to the circular center, and a plane calibration technique is used to eliminate system error by establishing a displacement-to-height look-up table. The mathematical model and theoretical analysis are presented. Simulations and experiments demonstrate the effectiveness of the proposed method.

1. Introduction

Fringe projection profilometry (FPP), a common active optical three-dimensional (3D) measurement technology, has the advantages of high precision, non-contact operation, and full-field measurement [1,2,3,4]. In a fringe projection system based on a triangular configuration, structured fringe patterns are projected onto the object by a projector, and the distorted images are captured by a camera from another view angle. The height of the measured object changes the phase distribution of the fringe, which can be recovered by different demodulation algorithms depending on the number of fringes. Straight and oblique fringes are the most popular [5,6]; saw-tooth fringes [7], triangular fringes [8], hexagonal fringes [9], circular fringes [10,11], etc. have been used in fringe projection profilometry as well. To reconstruct phase information from these patterns, algorithms based on phase shifting [2], Fourier transform [3,12,13,14,15], wavelet transform [16,17], or windowed Fourier transform [18] have been developed. Comparative analyses of different carrier fringe pattern techniques are given in references [19,20].
Fourier fringe analysis is one of the most popular methods; it calculates the phase value from a single spatial carrier pattern (or at most two) through a Fourier transform, a filtering operation, and an inverse Fourier transform. In traditional Fourier transform profilometry (FTP), the projected pattern is usually a sinusoidal straight or oblique fringe, for ease of extracting the fundamental spectrum lobe carrying the surface information of the measured object. However, one disadvantage of linear fringe projection is that the unwrapped phase map obtained by spatial unwrapping algorithms [21,22] has a 2π ambiguity, because the value of the continuous phase depends on the unwrapping starting point. To eliminate this ambiguity, a common method is to embed a marker point into the fringe patterns or to add a marker on the surface of the object; the marker provides a reference for unwrapping the phase. Temporal phase unwrapping methods such as multi-frequency and multi-wavelength approaches [23,24] can also obtain the absolute phase by determining the 2π discontinuity locations, but they need a series of fringe patterns with different frequencies.
Circular fringe projection profilometry (CFPP) has also attracted research interest in recent years [10,11,25,26] because the center of the circular fringe pattern provides a reference mark. Ratnam et al. [25] measured the out-of-plane deformation of targets by calculating the pixel displacement of the circular fringe pattern. Mandapalli et al. [26] used the circular fringe projection method to measure the 3D profiling of high dynamic range objects. Zhao et al. [27] achieved the 3D profile measurement via the triangular structure between a projected divergent light ray and the optical axis of the projector in a coaxial system. Wang et al. [28] used circular gratings to perform moiré-based misalignment measurements combined with lithography. In addition, the conical phase image has other applications. For example, Khonina et al. [29] analyzed the wavefront aberrations of the interferograms using a conical reference beam and neural networks processing.
Circular fringe Fourier transform profilometry (CFFTP) based on a triangulation system has the capability of whole out-of-plane measurement using fewer fringes. To the best of our knowledge, the existing CFFTP [25,26] calculates the displacement amount carrying the height information of the object by solving a quadratic equation, and its theoretical model is applicable only to a telecentric system. The root-seeking process of the quadratic equation may fail because of phase error and the non-uniform period of the reference fringe, so interpolation and fitting are required to deal with the error region in the middle of the image. In addition, the phase error near the center region is larger because of spectral leakage.
In this paper, some improvements are presented for the generality and accuracy of CFFTP. To avoid the trouble of root-seeking, the expression of calculating the displacement amount is degraded to a linear equation from a quadratic equation by introducing an extra projected circular fringe with a circular center lateral shift. Compared to the existing CFFTP, the theoretical model of our method is also suitable for a system whose projection and imaging centers are at a finite distance. In addition, Gerchberg iteration is employed to eliminate error close to the circular center region, and an established look-up table describing the relationship between displacement and height is used to eliminate system error of the CFFTP. Results of simulations and experiments illustrate that our improved CFFTP offers the capability of measuring out-of-plane deformation with higher accuracy and robustness.
The rest of this paper is organized as follows. Section 2 describes the principles of the improved CFFTP, including mathematical model and displacement amount calculation, co-ordinate transformation, conical phase calculation by FTP, and the displacement-to-height look-up table. Section 3 and Section 4, respectively, present some simulations and experiments to validate the proposed method. Section 5 summarizes the paper.

2. Principle

2.1. Geometric Model and Calculation of Lateral Displacement of CFFTP

The schematic diagram of the geometric model of CFFTP is the same as that of traditional FPP based on the triangulation principle, as shown in Figure 1. On the top-left of this figure, an orthogonal co-ordinate system is determined for exhibiting spatial directions, where d and l are structural parameters of the measurement system. The optical axes of the projector and the camera intersect at the point O_r on the reference plane. The plane in which the two optical axes lie is parallel to the XZ plane. Several planes R_i (i = 1, 2, …, N) are drawn to exhibit out-of-plane height. The lateral shift of the fringe pattern caused by the measured object will be along the X-axis direction when a circular pattern is projected onto the object and captured by the camera. This lateral shift is related to the phase information, which is thereby used to restore the surface of the object. For clarity, taking an emitted ray from a pixel of the projector as an example, the intersections of the ray with each plane are P_i (i = 0, 1, …, N), which have the same phases. These points are captured by different pixels on the camera. P_i relates to a set of data pairs (δ_i, h_i), which means that different heights h_i correspond to different lateral shifts δ_i. The lateral shift can be obtained by calculating the phase difference of homologous points. For instance, point P_1 on the plane R_1 and point C on the reference plane are "seen" by the same pixel on the camera, but they have different encoded phase values. The phase difference between P_1 and C describes the displacement amount between P_0 and C, which is used to calculate the lateral shift amount Δδ_1. That is, Δδ_1 = δ_1 − δ_0 denotes the lateral shift caused by height h_1.
To accurately calculate the lateral shift amount, an improved CFFTP method is proposed. Two circular fringes with the same period but different circular centers (a k-pixel lateral shift in the x direction) are generated. The encoded circular fringes can be expressed as:
I_1(x_p, y_p) = a + b \cos\!\left( \dfrac{2\pi \sqrt{(x_p - x_{p0})^2 + (y_p - y_{p0})^2}}{p} \right),  (1)
I_2(x_p, y_p) = a + b \cos\!\left( \dfrac{2\pi \sqrt{(x_p - x_{p0} + k)^2 + (y_p - y_{p0})^2}}{p} \right),  (2)
where (x_p, y_p) is the pixel co-ordinate on the LCD/DLP plane, p is the encoded fringe period, and (x_{p0}, y_{p0}) and (x_{p0} − k, y_{p0}) are the centers of the two circular fringes, respectively. They are projected onto the reference plane and the measured object. In the non-telecentric measurement system, the reference fringes and the deformed fringes captured by the camera can be expressed as:
I_{c1}(x_c, y_c) = a_c(x_c, y_c) + b_c(x_c, y_c) \cos\!\left( \dfrac{2\pi \sqrt{(x_c - x_{c0} + \delta_{x0})^2 + (y_c - y_{c0})^2}}{p_c} \right),  (3)

I_{c2}(x_c, y_c) = a_c(x_c, y_c) + b_c(x_c, y_c) \cos\!\left( \dfrac{2\pi \sqrt{(x_c - x_{c0} + \delta_{x0} + k_c)^2 + (y_c - y_{c0})^2}}{p_c} \right),  (4)

I_{c3}(x_c, y_c) = a_c(x_c, y_c) + b_c(x_c, y_c) \cos\!\left( \dfrac{2\pi \sqrt{(x_c - x_{c0} + \delta_x)^2 + (y_c - y_{c0})^2}}{p_c} \right),  (5)

I_{c4}(x_c, y_c) = a_c(x_c, y_c) + b_c(x_c, y_c) \cos\!\left( \dfrac{2\pi \sqrt{(x_c - x_{c0} + \delta_x + k_c)^2 + (y_c - y_{c0})^2}}{p_c} \right),  (6)
where the subscript c indicates the camera and (x_c, y_c) is the camera pixel co-ordinate; δ_{x0} is the original lateral shift of the reference fringe caused by the triangular relationship of the measurement system (δ_{x0} equals zero in a telecentric system); δ_x is the lateral shift in the deformed fringe; and k_c is the offset between the two circular centers of the fringes. Hence, Δδ_x = δ_x − δ_{x0} is the lateral shift caused by the measured object, which is also a function of the pixel co-ordinate (x_c, y_c). The terms in the brackets of the cosine functions in Equations (3)–(6) are the phases of the fringes, abbreviated as φ_1, φ_2, φ_3, and φ_4, respectively. Once these phases are extracted (see Section 2.3), Δδ_x can be calculated in our work by solving a linear equation instead of the quadratic equation of the existing CFFTP. A simple deduction is given in the following.
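As a concrete illustration, the encoded fringes of Equations (1) and (2) (and, with the extra shift terms, the camera-side fringes of Equations (3)–(6)) can be generated as follows. This is a minimal sketch: the function name and the default intensity values a = 128 and b = 100 are our own illustrative choices, not values from the paper.

```python
import numpy as np

def circular_fringe(shape, center, period, shift=0.0, a=128.0, b=100.0):
    """Sinusoidal circular fringe, Eq. (1)/(2): the intensity varies with
    the radial distance from the (optionally shifted) circle center."""
    rows, cols = shape
    y, x = np.mgrid[0:rows, 0:cols]
    r = np.sqrt((x - center[0] + shift) ** 2 + (y - center[1]) ** 2)
    return a + b * np.cos(2.0 * np.pi * r / period)

# Two fringes with the same period and a k = 50 pixel center offset (Eq. (2))
I1 = circular_fringe((512, 512), (256, 256), period=8)
I2 = circular_fringe((512, 512), (256, 256), period=8, shift=50)
```

The second pattern is identical to the first except that its bright center sits 50 pixels to the left, which is exactly the center-translated projection used in the derivation below.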
Taking the difference of the squares of the phase terms of Equations (3) and (5), we obtain the following equation:
\delta_x^2 - \delta_{x0}^2 + 2(x_c - x_{c0})(\delta_x - \delta_{x0}) = \left( \dfrac{p_c}{2\pi} \right)^2 (\varphi_3^2 - \varphi_1^2).  (7)
Similarly, from Equations (4) and (6), we obtain:
\delta_x^2 - \delta_{x0}^2 + 2(x_c - x_{c0} + k_c)(\delta_x - \delta_{x0}) = \left( \dfrac{p_c}{2\pi} \right)^2 (\varphi_4^2 - \varphi_2^2).  (8)
From Equations (7) and (8), the lateral displacement caused by the height of the measured object is expressed as:
\Delta\delta_x = \delta_x - \delta_{x0} = \dfrac{p_c^2}{8 k_c \pi^2} (\varphi_4^2 - \varphi_2^2 - \varphi_3^2 + \varphi_1^2).  (9)
Equation (9) is more general than the displacement formula of the existing CFFTP. There, Δδ_x cannot be calculated by simply solving a quadratic equation such as Equation (7), because δ_{x0} is unknown in the non-telecentric system. Even if δ_{x0} is ignored, seeking the correct root is not easy, because solving the quadratic equation may become inaccurate around the middle region; interpolation and fitting operations are therefore required in the existing CFFTP. The proposed linear equation avoids this problem.
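Equation (9) can be sketched in code together with a scalar self-check built directly from the phase definitions of Equations (3)–(6); the numeric parameter values are arbitrary test inputs, not values from the paper.

```python
import numpy as np

def lateral_displacement(phi1, phi2, phi3, phi4, p_c, k_c):
    """Eq. (9): with the center-shifted fringe pair, the displacement is a
    linear function of the squared conical phases (no root-seeking)."""
    return p_c ** 2 / (8.0 * k_c * np.pi ** 2) * (
        phi4 ** 2 - phi2 ** 2 - phi3 ** 2 + phi1 ** 2)

# Self-check at one pixel: phases built directly from Eqs. (3)-(6)
xc, yc, xc0, yc0 = 300.0, 280.0, 256.0, 256.0
d0, dx, kc, pc = 3.0, 7.0, 50.0, 10.0   # delta_x0, delta_x, k_c, p_c
phi = lambda off: 2 * np.pi / pc * np.hypot(xc - xc0 + off, yc - yc0)
ddx = lateral_displacement(phi(d0), phi(d0 + kc), phi(dx), phi(dx + kc), pc, kc)
# ddx recovers delta_x - delta_x0 = 4 without knowing delta_x0 itself
```

Note that δ_{x0} never has to be known explicitly: it cancels in the squared-phase differences, which is the point of the extra center-shifted projection.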

2.2. Coordinate Transformation of CFFTP

As Fourier transform profilometry cannot directly process closed fringes, a co-ordinate transformation from Cartesian to Polar co-ordinates must be performed in CFFTP. The resulting fringes in the Polar co-ordinate system are expressed as:
I'_n(r_n, \theta_n) = a'(r_n, \theta_n) + b'(r_n, \theta_n) \cos\!\left( \dfrac{2\pi r_n}{p'} \right), \quad n = 1, 2, 3, 4,  (10)
where r_n, θ_n, and p' are the radial variable, the angular variable, and the fringe period in the Polar co-ordinate system, respectively; a'(r_n, θ_n) and b'(r_n, θ_n) are the background intensity and the modulation intensity. The circular centers correspond to the origins of the Polar co-ordinate images. The range of θ_n is [0°, 360°); r_n and θ_n are given in Equations (11)–(14).
r_1 = \sqrt{(x_c - x_{c0} + \delta_{x0})^2 + (y_c - y_{c0})^2}, \quad \theta_1 = \arctan\!\left( \dfrac{y_c - y_{c0}}{x_c - x_{c0} + \delta_{x0}} \right),  (11)

r_2 = \sqrt{(x_c - x_{c0} + \delta_{x0} + k_c)^2 + (y_c - y_{c0})^2}, \quad \theta_2 = \arctan\!\left( \dfrac{y_c - y_{c0}}{x_c - x_{c0} + \delta_{x0} + k_c} \right),  (12)

r_3 = \sqrt{(x_c - x_{c0} + \delta_x)^2 + (y_c - y_{c0})^2}, \quad \theta_3 = \arctan\!\left( \dfrac{y_c - y_{c0}}{x_c - x_{c0} + \delta_x} \right),  (13)

r_4 = \sqrt{(x_c - x_{c0} + \delta_x + k_c)^2 + (y_c - y_{c0})^2}, \quad \theta_4 = \arctan\!\left( \dfrac{y_c - y_{c0}}{x_c - x_{c0} + \delta_x + k_c} \right).  (14)
To obtain fringes in the Polar co-ordinate system, a gridded sampling operation has to be performed on r_n and θ_n, expressed as r_n(i, j) and θ_n(i, j), i = 1, 2, …, K_1, j = 1, 2, …, K_2. The resolution of the polar images is K_1 × K_2. The denser the sampling points, the smaller the error caused by the co-ordinate transformation, but an excessively high resolution costs more computing time. In this paper, K_1 is obtained by sampling θ_n(i, j) at intervals of 0.25 degrees, and K_2 is obtained by sampling r_n(i, j) at intervals of 0.5 pixels. In practice, each pixel of the circular fringes may not fall on the corresponding gridded point after being transferred to the Polar co-ordinate system, so interpolation is necessary. Figure 2 shows a schematic diagram of the co-ordinate transformation. Figure 2a shows the interpolation procedure: white points denote the polar points calculated directly by Equation (11), and black points are the gridded points. The intensity of each white point is equal to that of the corresponding point in the circular fringe; the intensity of each black point is calculated by interpolating from its neighboring white points. There are many interpolation algorithms [30,31,32]; the cubic interpolation algorithm is used in our simulations and experiments. Figure 2b is a reference circular fringe and Figure 2c is its corresponding fringe in the Polar co-ordinate system, which can be processed by Fourier transform profilometry.
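The gridded resampling described above can be sketched as below. For brevity this sketch uses bilinear rather than the cubic interpolation adopted in the paper, while the sampling intervals mirror the 0.25-degree and 0.5-pixel choices; the function name and defaults are ours.

```python
import numpy as np

def to_polar(img, center, dtheta_deg=0.25, dr=0.5, r_max=None):
    """Resample a circular fringe onto a (theta, r) grid so that FTP can
    treat it as a linear fringe along the radial axis."""
    if r_max is None:
        r_max = min(center) - 1
    thetas = np.deg2rad(np.arange(0.0, 360.0, dtheta_deg))
    radii = np.arange(0.0, r_max, dr)
    tt, rr = np.meshgrid(thetas, radii, indexing="ij")
    xs = center[0] + rr * np.cos(tt)          # column co-ordinate
    ys = center[1] + rr * np.sin(tt)          # row co-ordinate
    # bilinear interpolation at the non-integer sample positions
    x0 = np.clip(np.floor(xs).astype(int), 0, img.shape[1] - 2)
    y0 = np.clip(np.floor(ys).astype(int), 0, img.shape[0] - 2)
    fx, fy = xs - x0, ys - y0
    return (img[y0, x0] * (1 - fy) * (1 - fx) + img[y0, x0 + 1] * (1 - fy) * fx
            + img[y0 + 1, x0] * fy * (1 - fx) + img[y0 + 1, x0 + 1] * fy * fx)

# A centered circular fringe maps to rows that all carry the same radial cosine
y, x = np.mgrid[0:128, 0:128]
img = 128 + 100 * np.cos(2 * np.pi * np.hypot(x - 64, y - 64) / 8)
polar = to_polar(img, (64, 64), r_max=40)
```

Every row of the polar image then carries the same radial cosine, which is what makes the subsequent one-dimensional Fourier analysis possible.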

2.3. Calculation of Conical Phase by FTP

To calculate the phases, a Fourier transform, a filtering operation, and an inverse Fourier transform are performed on the fringes described by Equation (10). The Fourier spectra of the four fringes are expressed as:
\mathcal{F}\{ I'_n(r_n, \theta_n) \} = A_n(f_{rn}, f_{\theta n}) + \dfrac{1}{2} B_n(f_{rn} - f_{0n}, f_{\theta n}) + \dfrac{1}{2} B_n^*(f_{rn} + f_{0n}, f_{\theta n}), \quad n = 1, 2, 3, 4,  (15)
where * denotes the complex conjugate, f_{rn} and f_{θn} are the frequency-domain variables, and f_{0n} is the carrier frequency of each linearized fringe. A_n(f_{rn}, f_{θn}) is the zero-frequency component; B_n(f_{rn} − f_{0n}, f_{θn}) and B_n*(f_{rn} + f_{0n}, f_{θn}) are the fundamental frequency components. One fundamental component is selected by applying a band-pass filter, and the inverse Fourier transform is performed on it to calculate the wrapped phase, which ranges from −π to π modulo 2π. A suitable spatial phase unwrapping algorithm is then used to obtain the absolute phase, with a point within the first linear pitch selected as the unwrapping starting point. As mentioned above, the four obtained conical phases φ_1, φ_2, φ_3, and φ_4 are used to calculate the lateral displacement.
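The radial Fourier analysis can be sketched as follows on the polar-resampled rows; the band-pass filter of half-width f_0/2 around the fundamental is our own illustrative choice, not the paper's filter design.

```python
import numpy as np

def ftp_wrapped_phase(rows):
    """Fourier fringe analysis along the radial axis: FFT each row, keep
    the positive fundamental lobe with a band-pass mask, inverse FFT, and
    take the angle of the resulting analytic signal."""
    spec = np.fft.fft(rows, axis=1)
    n = rows.shape[1]
    freqs = np.fft.fftfreq(n)
    # locate the fundamental peak on the positive-frequency side
    mag = np.abs(spec).mean(axis=0)
    f0 = freqs[1:n // 2][np.argmax(mag[1:n // 2])]
    mask = (freqs > f0 / 2) & (freqs < 3 * f0 / 2)   # band-pass around f0
    analytic = np.fft.ifft(spec * mask, axis=1)
    return np.angle(analytic)                         # wrapped to (-pi, pi]

# A noiseless polar fringe with period 8 yields a phase ramp of 2*pi/8 per sample
r = np.arange(256.0)
rows = 128 + 100 * np.cos(2 * np.pi * r / 8) * np.ones((4, 1))
wrapped = ftp_wrapped_phase(rows)
```

Unwrapping any row of `wrapped` along the radial direction recovers the linear carrier phase 2πr/p', on top of which the object-induced deviation rides in the deformed fringes.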
It is worth noting that the circular center of each circular fringe in the Cartesian co-ordinate system corresponds to the left edge of the resulting image in the Polar co-ordinate system. When the Fourier transform works on these fringes directly, phase accuracy in the edge regions is degraded by spectral leakage; after the phases are converted back to the Cartesian co-ordinate system, the corresponding areas of the conical phase maps have larger errors. To solve this problem, an extrapolation operation is required to extend the boundary of the fringes. The Gerchberg algorithm [33] is an effective solution to eliminate edge leakage error, and we use it to reduce phase error in our simulations and experiments.
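A one-dimensional sketch of the Gerchberg extrapolation idea follows; the band limit, padding width, and iteration count are illustrative choices, not the paper's settings. The known samples and a band limit are enforced alternately until the padded borders are filled with a consistent continuation of the fringe.

```python
import numpy as np

def gerchberg_extend(signal, pad, band, n_iter=50):
    """Band-limited extrapolation: zero-pad, then iterate between
    (i) cutting the spectrum to |f| <= band and
    (ii) re-imposing the known samples in the known region."""
    n = len(signal)
    ext = np.zeros(n + 2 * pad)
    ext[pad:pad + n] = signal
    keep = np.abs(np.fft.fftfreq(len(ext))) <= band
    for _ in range(n_iter):
        ext = np.fft.ifft(np.fft.fft(ext) * keep).real  # enforce band limit
        ext[pad:pad + n] = signal                       # re-impose known data
    return ext

# Extend a period-8 cosine beyond its borders before Fourier analysis
x = np.arange(128.0)
known = np.cos(2 * np.pi * x / 8)
extended = gerchberg_extend(known, pad=16, band=0.2)
```

Because the known samples are re-imposed on every iteration, the interior of the fringe is left untouched; only the padded borders are synthesized, which is what suppresses the leakage at the left edge of the polar image.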

2.4. Establishment of the Displacement-to-Height Mapping

According to the geometric model of the measurement system shown in Figure 1, theoretically, the relationship between displacement Δδ_x(x_c, y_c) and height h(x_c, y_c) can be expressed as:
h(x_c, y_c) = \dfrac{l\, \Delta\delta_x(x_c, y_c)}{\Delta\delta_x(x_c, y_c) + d}.  (16)
Since the system parameters d and l are difficult to measure accurately, and the fringe period p_c of the reference fringes is not constant in a non-telecentric system, it is necessary to establish a look-up table between Δδ_x(x_c, y_c) and h(x_c, y_c) by the plane calibration method [34]. To set up the relationship, the reference plane is moved along the Z-axis, and fringes at known positions h_i (on planes R_1 to R_N) are captured by the camera. At each position, the phases of the fringes are calculated by our method, and the phase differences are used to calculate the lateral displacement Δδ_x. A linear, quadratic, cubic, or higher-order polynomial fitting method can be used to set up the look-up table, according to the number of calibration planes [35]. Considering the balance between time consumption and accuracy, the cubic polynomial fitting method is selected in our experiment, which can be expressed as:
h(x_c, y_c) = a_0(x_c, y_c) + a_1(x_c, y_c)\, \Delta\delta_x(x_c, y_c) + a_2(x_c, y_c)\, [\Delta\delta_x(x_c, y_c)]^2 + a_3(x_c, y_c)\, [\Delta\delta_x(x_c, y_c)]^3,  (17)
where a_0(x_c, y_c), a_1(x_c, y_c), a_2(x_c, y_c), and a_3(x_c, y_c) are the mapping coefficients of the cubic curve fitting.
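The per-pixel cubic fit of Equation (17) can be sketched as follows. The system constants l and d and the plane heights are made-up test values, used only to synthesize calibration data through the ideal relation of Equation (16).

```python
import numpy as np

def fit_lut(disp_stack, heights):
    """Fit Eq. (17) at every pixel: disp_stack has shape (N, H, W) with one
    displacement map per calibration plane; returns coefficient maps a0..a3."""
    n_planes, h, w = disp_stack.shape
    flat = disp_stack.reshape(n_planes, -1)
    coeffs = np.empty((4, h * w))
    for i in range(h * w):
        # polyfit returns the highest power first; store as a0, a1, a2, a3
        coeffs[:, i] = np.polyfit(flat[:, i], heights, 3)[::-1]
    return coeffs.reshape(4, h, w)

def lut_height(coeffs, ddx):
    """Evaluate Eq. (17) with the fitted coefficient maps."""
    a0, a1, a2, a3 = coeffs
    return a0 + a1 * ddx + a2 * ddx ** 2 + a3 * ddx ** 3

# Synthetic calibration: invert Eq. (16) with assumed l = 500 mm, d = 300 mm
l, d = 500.0, 300.0
heights = np.arange(0.0, 70.0, 10.0)                    # 7 planes, 0..60 mm
disp = (heights * d / (l - heights))[:, None, None] * np.ones((1, 2, 2))
coeffs = fit_lut(disp, heights)
```

Evaluating the fitted table at an intermediate displacement recovers the corresponding plane height to well within the cubic-approximation error of the rational height model, which is why the cubic fit is a reasonable balance between time consumption and accuracy.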
As two reference phases have been calculated and saved during system calibration, our method only needs to capture two deformed circular fringes to reconstruct the height of the measured object. A brief flow of the improved CFFTP is shown in Figure 3.

3. Simulations

To verify the performance of our method, computer simulations were carried out for both a telecentric and a non-telecentric system. For the non-telecentric simulation, a virtual fringe projection system based on the pinhole model is adopted to model the projection and imaging procedure, as shown in Figure 4a, where the subscript w denotes the world co-ordinate system, c the camera image co-ordinate system, and p the projector co-ordinate system. The relationship between the camera co-ordinate system and the projector co-ordinate system can be written as:
P_c = H_{pc} P_p,  (18)
where P_c and P_p denote the corresponding homogeneous pixel co-ordinates in the camera and projector co-ordinate systems, respectively. H_{pc} = H_{pw} H_{wc} is the homography matrix from the projector pixel co-ordinate system to the camera pixel co-ordinate system, where H_{pw} and H_{wc} represent the homography matrices from the projector pixel co-ordinate system to the world co-ordinate system and from the world co-ordinate system to the camera pixel co-ordinate system, respectively. A detailed description is given in reference [34]. For a telecentric system, it is easy to generate the reference fringe and the deformed fringe by Equations (3)–(6) with δ_{x0} = 0.
First, assuming a telecentric system, a measured object mounted on a moving plane is simulated. The object z(x, y) is a peaks function multiplied by a factor S, which can be expressed as:
z(x, y) = S \left\{ 3(1 - x)^2 \exp\!\left[ -x^2 - (y + 1)^2 \right] - 10 \left( \dfrac{x}{5} - x^3 - y^5 \right) \exp\!\left( -x^2 - y^2 \right) - \dfrac{1}{3} \exp\!\left[ -(x + 1)^2 - y^2 \right] \right\} + H(x, y),  (19)
where S is a scale factor and H(x, y) denotes the corresponding height of the moving plane. In the simulation, we set S = 2 and H(x, y) = 20, 30, and 40 mm, respectively. Figure 4b shows the height distribution for the case H(x, y) = 20 mm. The simulated radial period of the fringes is 8 pixels, the lateral shift between the circular centers of the two reference fringes is 50 pixels, and the size of the fringe patterns is 512 × 512 pixels. Without loss of generality, Gaussian noise with a signal-to-noise ratio (SNR) of 40 is added to the simulated fringes.
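The simulated object of Equation (19) can be generated as follows; the spatial extent of ±3 matches the usual domain of the MATLAB-style peaks function and is our assumption, as the paper does not state it.

```python
import numpy as np

def simulated_object(n=512, S=2.0, H=20.0, extent=3.0):
    """Eq. (19): scaled peaks surface on top of a moving plane at height H (mm)."""
    x, y = np.meshgrid(np.linspace(-extent, extent, n),
                       np.linspace(-extent, extent, n))
    z = (3 * (1 - x) ** 2 * np.exp(-x ** 2 - (y + 1) ** 2)
         - 10 * (x / 5 - x ** 3 - y ** 5) * np.exp(-x ** 2 - y ** 2)
         - np.exp(-(x + 1) ** 2 - y ** 2) / 3)
    return S * z + H

obj = simulated_object()   # the H(x, y) = 20 mm case of the simulation
```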
Figure 5 is the procedure of the conical phase calculation when H(x, y) = 20 mm. The first column of Figure 5a–d shows two reference circular fringe patterns and two deformed fringe patterns, respectively. The second column shows the corresponding linear fringes in the Polar co-ordinate. The third column shows the resulting fringes after Gerchberg iteration. The Fourier transform method is carried out on these extrapolated Polar co-ordinate fringes to calculate the corresponding wrapped phases. A spatial phase unwrapping algorithm [36] is utilized to obtain the continuous phase maps. Cutting the extrapolated areas, the phase maps of the four original fringes can be obtained. Then the phase maps are converted back to the original Cartesian co-ordinate system, as shown in the last column of Figure 5.
Figure 6 shows the reconstruction of our improved CFFTP and that of the existing CFFTP. Figure 6a,b show the reconstructed 3D results and the cross sections along the 330th row obtained by the existing CFFTP; the corresponding results obtained by the improved CFFTP are shown in Figure 6c,d. It can be seen that both methods reconstruct the height range correctly because the 2π ambiguity of the unwrapped phases is avoided. However, there are obvious errors in the middle area for the existing CFFTP without the fitting operation, whereas these errors are eliminated by the improved CFFTP. To quantitatively evaluate the accuracy of the two methods, the root-mean-square errors (RMSE) of the reconstructions for H(x, y) = 20 mm, 30 mm, and 40 mm are listed in Table 1. The RMSE values demonstrate that the proposed method provides much higher accuracy than the existing CFFTP.
In the non-telecentric system, the period of the captured reference fringe is no longer constant: the larger the included angle between the optical axes, the bigger the period change of the reference fringe. To analyze the effect of this period change on reconstruction accuracy, the object in Figure 4b was measured multiple times with the angle between the optical axes set to 10, 15, 20, 25, and 30 degrees, respectively. Figure 7 shows the simulation at an included angle of 20 degrees. Figure 7a is one of the deformed circular fringes; compared with the circular fringe in Figure 5c, its period has changed due to the triangular relationship, even in the region outside of the object. Figure 7b is its corresponding linear fringe in the Polar co-ordinate system. Figure 7c,d show the enlarged reconstructed surfaces obtained by the existing CFFTP and by the improved CFFTP, respectively. The error in the middle area of the existing CFFTP is large without the fitting operation; in contrast, our method can still restore the object completely when the average period of the fringe is used to calculate Δδ_x. The RMSE distributions of the results reconstructed by our method at different included angles are shown in Figure 8. A bigger angle causes a bigger error; therefore, in the experiment, the included angle between the camera optical axis and the projector optical axis is set to an appropriate value (10–20 degrees).
It is noted that, in CFFTP, the period of the encoded fringe should be set to avoid frequency overlapping, as in traditional FTP. We find that, as long as the fringe frequency is high enough to separate the fundamental component from the other frequency components, the reconstruction accuracy is not significantly affected when p_c is approximated by the average period of the captured reference fringe. A reasonable explanation is that the error caused by the period acts as a systematic error, which can be eliminated to a certain extent by the plane calibration.

4. Experiments

To further test the performance of the proposed method, we developed a measuring system and conducted a series of experiments. The experimental setup, shown in Figure 9, was mainly composed of a CCD camera (model: Vieworks VQ-5MG-M16) which uses an imaging lens with a focal length of 12 mm, a digital light processing projector (model: LightCrafter 4500), a flat white board (as reference plane) located on a translation stage (model: GCD-203300, repositioning precision is less than 5 μm), and a computer for controlling and calculating. The resolutions of the camera and the projector are, respectively, 2448 × 2048 pixels and 912 × 1120 pixels. The angle between the camera and the projector was set to 15 degrees approximately. In the following experiments, two circular fringe patterns with a radial period of 10 pixels were generated, and the lateral shift amount of their circular centers was 50 pixels.
Before the measurement, the plane calibration was performed to establish the displacement-to-height mapping table. During the calibration procedure, the reference plane mounted on the translation stage was moved from 0 to 60 mm with 10 mm as an interval. At each position, the lateral displacement of the whole image was calculated according to Equation (9). Then the coefficients of Equation (17) were estimated to make a look-up table and stored in the computer.
In the first experiment, three out-of-plane heights located at the positions of 15.0000 mm, 25.0000 mm, and 35.0000 mm were respectively measured to examine the precision of the calibration. The measurement results and RMSE for each flat plane are listed in Table 2. Figure 10a shows the 3D reconstruction results of three planes, and Figure 10b displays the cross sections of the results.
It can be seen that our method can restore the height of the planes correctly, since the unwrapped phases have no 2π ambiguity. However, the measurement error of CFFTP might be larger than that of linear fringe projection FTP with marker points, because this method includes additional co-ordinate transformation operations.
To further demonstrate the validity of the proposed method in out-of-plane measurement, a gourd-shaped object mounted on a moving plane was measured. We chose three positions with heights of 5.0000 mm, 10.0000 mm, and 15.0000 mm as the measurement samples. Taking the height of 5 mm as an example, Figure 11a,b display the process of obtaining the conical phase from the two captured fringes without and with the lateral shift, respectively. From left to right are shown the captured fringes, the resulting fringes in the Polar co-ordinate system before and after Gerchberg iteration, and the final conical phases. To eliminate the non-uniform background intensity caused by illumination and reflection, FTP with background homogenization was used to process these fringes [37]. The reconstructed heights of the object at the different positions are shown in Figure 12a–c, respectively, where the color bars display the range of the reconstruction results. Figure 12d–f are the cross sections along the 720th column of the 3D shapes shown in Figure 12a–c.
In another experiment, a mask was measured by our method. The mask was placed on the reference plane which was moved forward 12.0000 mm. The captured deformed fringe is shown in Figure 13a, and the 3D shape of the object is shown in Figure 13b. These measurements show that the proposed CFFTP can reconstruct the 3D shape of the out-of-plane object correctly.

5. Conclusions

In this paper, an improved CFFTP method has been proposed for out-of-plane measurement with higher accuracy. By projecting an additional circular fringe pattern with a shifted center, the method retrieves the pixel displacement from a linear equation instead of a quadratic equation; the calculation process is therefore simpler and more reliable than that of the existing CFFTP. The influence of the non-uniform period in a non-telecentric triangulation system is discussed in the theoretical analysis. The plane calibration method is used to create a look-up table between displacement and height to eliminate the systematic error of the improved CFFTP, and Gerchberg iteration is employed to eliminate the phase error in the region close to the circular center. The simulation results and experiments on different objects demonstrate the validity and feasibility of the proposed method. We will further explore error compensation methods to improve accuracy, such as eliminating the error caused by the non-uniform reference fringe period in a non-telecentric system and by the insufficient sampling rate in some areas during the co-ordinate transformation.

Author Contributions

Q.C.: conceptualization, methodology, data collection, writing—original draft preparation; M.H.: data analysis; Y.W.: software, data collection; W.C.: conceptualization, methodology, writing—review and editing. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by National Natural Science Foundation of China (Grant No. 62075143 and Grant No. U20A20215).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. Schematic diagram of the measurement system.
Figure 2. Diagram of the coordinate transformation. (a) Interpolation schematic; (b) Original circular fringe; (c) Linear fringe in polar coordinates.
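The coordinate transformation in Figure 2 resamples the circular fringe into straight fringes in polar coordinates via interpolation. A minimal sketch, assuming bilinear interpolation and a known circle center (function and parameter names are illustrative, not from the paper):

```python
import numpy as np

def circular_to_linear(fringe, center, n_r, n_theta):
    """Resample a circular fringe pattern into linear fringes in
    polar coordinates using bilinear interpolation.
    Rows of the output index the angle, columns the radius."""
    cy, cx = center
    r = np.arange(n_r)
    theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    # Each polar sample (theta, r) maps back to a Cartesian point.
    yy = cy + r[None, :] * np.sin(theta[:, None])
    xx = cx + r[None, :] * np.cos(theta[:, None])
    # Bilinear interpolation at the (generally non-integer) positions.
    y0 = np.clip(np.floor(yy).astype(int), 0, fringe.shape[0] - 2)
    x0 = np.clip(np.floor(xx).astype(int), 0, fringe.shape[1] - 2)
    fy, fx = yy - y0, xx - x0
    return (fringe[y0, x0] * (1 - fy) * (1 - fx)
            + fringe[y0 + 1, x0] * fy * (1 - fx)
            + fringe[y0, x0 + 1] * (1 - fy) * fx
            + fringe[y0 + 1, x0 + 1] * fy * fx)
```

After this transform, the circular fringes become nearly straight fringes along the radial axis, so standard 1-D fringe analysis can be applied row by row.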
Figure 3. Flowchart of the improved CFFTP.
Figure 4. (a) Mathematical model of the virtual fringe projection system; (b) The simulated object when H(x, y) = 20 mm.
Figure 5. Simulated phase calculation process of improved CFFTP. (a) Conical phase calculation from reference fringe without lateral shift; (b) Conical phase calculation from reference fringe with lateral shift; (c) Conical phase calculation from deformed fringe corresponding to (a); (d) Conical phase calculation from deformed fringe corresponding to (b).
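The conical phases in Figure 5 are recovered by Fourier-domain demodulation of the fringes. The following is a textbook row-wise 1-D FTP sketch, not the paper's full pipeline; the carrier frequency and filter half-width are assumed known:

```python
import numpy as np

def ftp_wrapped_phase(rows, carrier_freq, half_width):
    """Row-wise 1-D Fourier transform profilometry demodulation:
    keep only the +1st-order spectral lobe around the carrier
    frequency and take the angle of the resulting analytic signal."""
    n = rows.shape[-1]
    spec = np.fft.fft(rows, axis=-1)
    mask = np.zeros(n)
    lo = max(carrier_freq - half_width, 1)      # exclude the DC term
    hi = min(carrier_freq + half_width + 1, n // 2)
    mask[lo:hi] = 1.0                           # positive frequencies only
    analytic = np.fft.ifft(spec * mask, axis=-1)
    return np.angle(analytic)                   # wrapped phase in (-pi, pi]
```

The wrapped phase still needs unwrapping and reference-phase subtraction before it can be converted to a displacement or height map.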
Figure 6. Reconstructed results by existing CFFTP and improved CFFTP when H(x, y) = 20 mm, 30 mm, and 40 mm, respectively. (a,b) Reconstructed surfaces and cross sections along the 330th row using existing CFFTP; (c,d) Reconstructed surfaces and cross sections along the 330th row using improved CFFTP.
Figure 7. Simulation of the non-telecentric system. (a) Deformed circular fringe; (b) Corresponding linear fringes in polar coordinates; (c) Reconstructed surface by existing CFFTP; (d) Reconstructed surface by improved CFFTP.
Figure 8. The RMSE distributions at different angles with improved CFFTP.
Figure 9. Experimental setup.
Figure 10. Experimental results of the planes at different heights. (a) 3D reconstruction; (b) Corresponding cross section.
Figure 11. (a) Conical phase calculation from the captured fringe without lateral shift; (b) Conical phase calculation from the captured fringe with lateral shift.
Figure 12. Reconstructed results of the gourd. (a–c) 3D reconstruction at the positions of 5 mm, 10 mm, and 15 mm, respectively; (d–f) Cross sections of (a–c) along the 720th column.
Figure 13. Reconstructed results of the mask. (a) Captured deformed fringe pattern; (b) 3D reconstruction.
Table 1. RMSE of the reconstructed object at three positions (20 mm, 30 mm, and 40 mm) by two methods. (Unit: mm).

Method            20        30        40
Existing CFFTP    0.1875    0.2038    0.2198
Improved CFFTP    0.0623    0.0746    0.0874
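The RMSE values compared in Table 1 measure the deviation of a reconstructed surface from the known ground truth; a minimal sketch (variable names are illustrative):

```python
import numpy as np

def rmse(reconstructed, ground_truth):
    """Root-mean-square error between two height maps."""
    diff = (np.asarray(reconstructed, dtype=float)
            - np.asarray(ground_truth, dtype=float))
    return float(np.sqrt(np.mean(diff ** 2)))
```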
Table 2. Measurement results for the out-of-plane height. (Unit: mm).

Height     Mean       RMSE
15.0000    14.9683    0.1183
25.0000    25.0570    0.1227
35.0000    35.0328    0.1443
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Chen, Q.; Han, M.; Wang, Y.; Chen, W. An Improved Circular Fringe Fourier Transform Profilometry. Sensors 2022, 22, 6048. https://doi.org/10.3390/s22166048
