Article

Accelerated Phase Deviation Elimination for Measuring Moving Object Shape with Phase-Shifting-Profilometry

1 Hubei Key Laboratory of Smart Internet Technology, School of Electronic Information and Communications, Huazhong University of Science and Technology, Wuhan 430074, China
2 College of Information Science and Engineering, Henan University of Technology, Zhengzhou 450001, China
* Author to whom correspondence should be addressed.
Photonics 2022, 9(5), 295; https://doi.org/10.3390/photonics9050295
Submission received: 30 March 2022 / Revised: 22 April 2022 / Accepted: 23 April 2022 / Published: 27 April 2022
(This article belongs to the Special Issue Optical 3D Sensing Systems)

Abstract

Eliminating the phase deviation caused by object motion plays a vital role in obtaining the precise phase map needed to recover the object shape with phase-shifting profilometry. Pixel-by-pixel phase retrieval using the least-squares algorithm has been widely employed to eliminate the phase deviation caused by a moving object. However, pixel-level operation can only eliminate phase deviation within a limited range and imposes a high computational burden. In this paper, we propose an image-level phase compensation method with a stochastic gradient descent (SGD) algorithm to accelerate phase deviation elimination. Since the iterative calculation is implemented at the image level, the proposed method accelerates convergence significantly. Furthermore, since the proposed algorithm is able to correct phase deviation within $(-\pi, \pi)$, it can tolerate a greater motion range. In addition to simulation experiments, we consider 2-D motion of the object and conduct a series of comparative experiments to validate the effectiveness of the proposed method over a larger motion range.

1. Introduction

Fringe projection profilometry modulates the surface depth information of an object into the phase maps of fringe patterns; it has been extensively adopted in modern industry to measure the 3D shape of objects due to its high accuracy and non-contact characteristics [1,2,3]. To obtain precise wrapped phase maps, phase-shifting profilometry (PSP) is one of the most commonly used methods in fringe projection profilometry. In PSP, a digital projector illuminates the stationary object with a series of fringe patterns with equally spaced phase shifts. Through the modulated fringe patterns and the triangulation relationship, the surface height of the object can be reconstructed accurately. However, the rapid growth of industrial inspection requires the 3D shape of moving objects to be recovered [4,5,6], so reconstructing the surface of a moving object with PSP has become a practical issue that must be considered [7,8,9].
When a PSP-based method is applied to recover the surface of a moving object, the phase deviation caused by object motion often results in severe artifacts [6,7]. There are two main approaches to reducing this motion-induced phase deviation. One is to capture the fringe patterns with high-speed photography hardware. Zheng et al. [10] develop a method based on PSP with projector defocusing for high-speed 3D shape measurement and analyze the phase deviation model. However, the use of high-speed equipment [10,11,12,13,14] implies a significant increase in the cost and complexity of the system [15]. The other approach is to reduce the motion-induced phase deviation through phase compensation.
Based on this consideration, Lu et al. [1] propose to use a set of marks on the object surface to describe the rotation matrix and translation vector of the phase deviation, and present an iterative least-squares algorithm to perform pixel-level phase compensation. Lu et al. [16] also extend the method to the case of 3-D movement of the object. Feng et al. [6] propose a robust motion-compensated method for three-step PSP, which corrects the phase deviation with adjacent pixels. To estimate the phase deviation caused by object motion, Wang et al. [17] apply the Hilbert transform to the phase-shifted fringe patterns to generate another set of fringe patterns. This method is less computationally expensive, but it rests on the underlying assumption that the object moves slower than the camera speed. Wang et al. [18] also propose a novel motion-induced-error reduction method using additional samplings. Given that the method needs to capture two fringe patterns in one cycle, the accuracy of the external trigger affects the quality of the reconstruction result. Duan et al. [19] propose a method for 2-D moving objects by introducing an adaptive reference phase map and motion tracking based on a composite image. However, the method encounters challenges when objects move over a wide range. Li et al. [20] redefine the order of the projected stripe patterns and use the Hilbert transform to compensate for the motion-induced errors and improve the utilization of the stripes. Since these phase compensation methods operate at the pixel level, they still suffer from a high computational cost [21]. In order to reduce the computational burden, Guo et al. [22] develop a Fourier-based method using dual-frequency composite phase-shifted grating patterns to achieve region-level phase compensation. However, due to the limitations of Fourier transform profilometry (FTP), the method may not be applicable to dynamically deformable objects [8]. On the other hand, all of these algorithms are effective only when the phase deviation is within a limited range. When the phase deviation exceeds this range, the algorithms do not converge, which means an object with a large motion range cannot be reconstructed by them.
In addition to the aforementioned methods, Spoorthi et al. [23] propose a phase compensation method based on deep learning to improve the measurement accuracy, but it requires a large number of actual phase values obtained by iterative methods for training. Other methods [24,25,26] also use deep learning to correct the motion-induced phase deviation. In a specific application scenario, however, a large amount of actual phase data calculated through iterative methods is difficult to acquire, so it is currently impractical to use deep learning to calibrate motion-induced phase deviation in industrial measurement. Therefore, it is still necessary to develop a phase compensation method with high efficiency and accuracy under common resources.
To our knowledge, the existing pixel-level iterative methods suffer from two problems: a limited motion range of the object and a high computational burden. In this paper, we propose an image-level phase compensation method with an SGD algorithm to accelerate phase deviation elimination. In this method, the difference among the deformed fringe patterns is used to construct the iterative expression, which reduces the computational burden of pixel-level methods significantly. Furthermore, in the proposed SGD algorithm, each update of the iteration randomly selects a gradient direction, so the algorithm can escape from saddle points to find the global optimum. Therefore, a large phase deviation can be eliminated effectively, which means our proposed method can reconstruct an object with a larger motion range than existing methods.

2. Problem Formulation

PSP is one of the most promising approaches for 3-D shape reconstruction. In the N-step PSP scenario, the n-th deformed fringe pattern $I_n(x,y)$ captured from the object can be described as:
$$I_n(x,y) = a(x,y) + b\cos\left(\omega y + \Phi(x,y) + \frac{2\pi n}{N}\right) \tag{1}$$
where $n \in \{0, 1, \dots, N-1\}$; $a(x,y)$ is the background ambient light intensity; $b$ is the amplitude of the fringe intensity; $\omega$ is the angular frequency of the projected fringes; $\omega y$ is the phase of the reference plane along the direction perpendicular to the fringes; $\Phi(x,y)$ is the phase modulated by the object height information; and $2\pi n/N$ is the n-th preset phase shift of the fringe projection.
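To make the pattern model concrete, the following minimal NumPy sketch simulates the N deformed fringe patterns of Equation (1); all parameter values ($a$, $b$, $\omega$, $N$) are illustrative assumptions, not values from the paper.

```python
import numpy as np

def fringe_patterns(phi, N=4, a=128.0, b=100.0, omega=2 * np.pi / 32):
    """Simulate the N phase-shifted fringe patterns of Equation (1).

    phi   : (H, W) array, object-modulated phase Phi(x, y)
    omega : angular frequency of the projected fringes (rad/pixel)
    """
    H, W = phi.shape
    y = np.arange(H).reshape(-1, 1)  # fringe phase varies along the y-axis
    return np.stack([a + b * np.cos(omega * y + phi + 2 * np.pi * n / N)
                     for n in range(N)])  # shape (N, H, W)
```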
The direction of movement is shown in Figure 1. A typical system consists of a camera A and a projector B. The height $h$ of a point E on the object surface is calculated from the modulated phase information. The distance between the camera and the projector is $d$, and the distance from the camera to the reference plane is $l + h$.
To simplify the analysis, we assume that the object moves along the y-axis. When the y-axis is perpendicular to the fringe direction, we use $\Delta y_n$ to describe the displacement (in pixels) of the object in the n-th captured pattern. Through a coordinate transformation, we obtain:
$$I_n(x, y - \Delta y_n) = a(x,y) + b\cos\left(\omega y + \Phi(x,y) + \left(\frac{2\pi n}{N} - \omega \Delta y_n\right)\right) \tag{2}$$
Since the pixels are discrete, the calculated result of $\Delta y_n$ is rounded to an integer $[\Delta y_n]$. We use $\Delta\Phi_n = \frac{2\pi n}{N} - \omega \Delta y_n = \frac{2\pi n}{N} - \omega [\Delta y_n]$ to describe the actual phase to be calibrated. To further simplify the expression, we let $\Phi'(x,y) = \omega y + \Phi(x,y)$. The actual phase is difficult to solve directly and accurately due to the rounding error of $\Delta y_n$.
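As a small illustration of this deviation model, the sketch below evaluates $\Delta\Phi_n$ from an integer-rounded pixel displacement; the function name and arguments are hypothetical.

```python
import numpy as np

def phase_to_calibrate(n, N, omega, dy):
    """Delta_Phi_n = 2*pi*n/N - omega*[dy]: the preset phase shift of the
    n-th pattern corrupted by the (rounded) object displacement dy (pixels)."""
    return 2 * np.pi * n / N - omega * np.round(dy)

# e.g., a 2.3-pixel drift on the 3rd of 4 patterns with omega = 2*pi/32:
# phase_to_calibrate(2, 4, 2 * np.pi / 32, 2.3)
```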
In order to solve for the actual phase, Lu et al. [1] propose a least-squares iteration algorithm, which constructs the iteration equation from the gray-value difference between pixels with the same coordinates on different deformed fringe patterns. However, this kind of algorithm suffers from high computational complexity and from falling into local optima due to its pixel-level iterative operation.

3. Derivation of the Proposed Algorithm

In order to accelerate the convergence, our algorithm constructs the iteration equation from the arithmetic average of the difference between the gray scales of different deformed fringe patterns, which reduces the computational complexity. The iteration equation describes the distance between the deformed-fringe-pattern difference and the actual phase. After obtaining an accurate deformed-fringe-pattern difference, we compensate the actual phase under this constraint. If the iteration does not reach the stopping criterion, the actual phase to be calibrated is updated through stochastic gradient descent, and the iteration stops once the accuracy requirement is met.

3.1. Calculation of Deformed Fringe Patterns Difference

We use $K_{ij}$ to describe the average absolute value of the difference $|I_i(x,y) - I_j(x,y)|$ between the i-th and j-th deformed fringe patterns:
$$K_{ij} = 2b\sin\left(\Phi'(x,y) + \frac{\Delta\Phi_i + \Delta\Phi_j}{2}\right) \cdot \left|\sin\left(\frac{\Delta\Phi_i - \Delta\Phi_j}{2}\right)\right| \tag{3}$$
where $i, j \in \{0, 1, \dots, N-1\}$, $i \neq j$. Under rigid motion, $2b\sin\left(\Phi'(x,y) + \frac{\Delta\Phi_i + \Delta\Phi_j}{2}\right)$ is a fixed quantity. We denote this quantity as $c = 2b\sin\left(\Phi'(x,y) + \frac{\Delta\Phi_i + \Delta\Phi_j}{2}\right)$; it is the deformed-fringe-pattern difference to be calculated, and it reflects the macroscopic difference between the patterns. We can calculate $c$ through the least-squares method [27] via Equation (4).
$$\Delta K = \sum_{i=1}^{N-1}\sum_{j=0}^{i-1}\left(K_{ij} - K'_{ij}\right)^2, \qquad \frac{\partial \Delta K}{\partial c} = \frac{\partial \Delta K}{\partial \Delta\Phi_n} = 0 \tag{4}$$
where $K_{ij}$ is the actual (measured) value of the difference between deformed fringe patterns and $K'_{ij}$ is its theoretical value given by Equation (3); the actual value is the closest to the theoretical value. When $\frac{\partial \Delta K}{\partial c} = \frac{\partial \Delta K}{\partial \Delta\Phi_n} = 0$ is satisfied, $\Delta K$ attains its minimum, and $c$ can be calculated iteratively as follows:
$$c = \frac{\displaystyle\sum_{i=1}^{N-1}\sum_{j=0}^{i-1} K_{ij}\left|\sin\left(\frac{\Delta\Phi_i - \Delta\Phi_j}{2}\right)\right|}{\displaystyle\sum_{i=1}^{N-1}\sum_{j=0}^{i-1} \sin^2\left(\frac{\Delta\Phi_i - \Delta\Phi_j}{2}\right)} = \frac{4\displaystyle\sum_{i=1}^{N-1}\sum_{j=0}^{i-1} K_{ij}\left|\sin\left(\frac{\Delta\Phi_i - \Delta\Phi_j}{2}\right)\right|}{N(N-1) - 2\displaystyle\sum_{i=1}^{N-1}\sum_{j=0}^{i-1} \cos\left(\Delta\Phi_i - \Delta\Phi_j\right)} \tag{5}$$
Over the iterations, $c$ approaches the true value.
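Under these definitions, Equations (3)–(5) can be evaluated at the image level. The sketch below, assuming NumPy arrays for the captured patterns, estimates $K_{ij}$ as the mean absolute pattern difference and $c$ by the least-squares ratio of Equation (5).

```python
import numpy as np
from itertools import combinations

def estimate_c(patterns, delta_phi):
    """Least-squares estimate of c, Equation (5).

    patterns  : (N, H, W) array of deformed fringe patterns
    delta_phi : (N,) current estimates of the phases Delta_Phi_n
    """
    num = den = 0.0
    for j, i in combinations(range(len(patterns)), 2):     # all pairs j < i
        k_ij = np.mean(np.abs(patterns[i] - patterns[j]))  # measured K_ij
        s = abs(np.sin((delta_phi[i] - delta_phi[j]) / 2))
        num += k_ij * s   # numerator of Equation (5)
        den += s ** 2     # denominator of Equation (5)
    return num / den
```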

3.2. Differential Constrained Phase Compensation

After obtaining an accurate $c$, we can calculate the actual phase $\Delta\Phi_n$ under the constraint of $c$. Taking $m \in \{0, 1, \dots, N-1\}$, the partial derivative condition $\frac{\partial \Delta K}{\partial \Delta\Phi_m} = 0$ can be converted to the following:
$$\frac{\partial}{\partial \Delta\Phi_m}\left[\sum_{j=0}^{m-1}\left(K_{mj} - c\left|\sin\left(\frac{\Delta\Phi_m - \Delta\Phi_j}{2}\right)\right|\right)^2 + \sum_{i=m+1}^{N-1}\left(K_{im} - c\left|\sin\left(\frac{\Delta\Phi_i - \Delta\Phi_m}{2}\right)\right|\right)^2\right] = 0 \tag{6}$$
where $i \in \{m+1, m+2, \dots, N-1\}$ and $j \in \{0, 1, \dots, m-1\}$. Through simplification, the solutions can be expressed as Equation (7).
$$K_{mj} - c\left|\sin\left(\frac{\Delta\Phi_m - \Delta\Phi_j}{2}\right)\right| = 0, \qquad K_{im} - c\left|\sin\left(\frac{\Delta\Phi_i - \Delta\Phi_m}{2}\right)\right| = 0 \tag{7}$$
When $c \neq 0$ and $\cos\left(\frac{\Delta\Phi_i - \Delta\Phi_m}{2}\right) \neq 0$, Equation (7) can be expressed as:
$$\begin{cases} \Delta\Phi_m = \Delta\Phi_j - 2\arcsin\left(\frac{K_{mj}}{c}\right), & \Delta\Phi_m < \Delta\Phi_j \\ \Delta\Phi_m = \Delta\Phi_j + 2\arcsin\left(\frac{K_{mj}}{c}\right), & \Delta\Phi_m \geq \Delta\Phi_j \\ \Delta\Phi_m = \Delta\Phi_i - 2\arcsin\left(\frac{K_{im}}{c}\right), & \Delta\Phi_m < \Delta\Phi_i \\ \Delta\Phi_m = \Delta\Phi_i + 2\arcsin\left(\frac{K_{im}}{c}\right), & \Delta\Phi_m \geq \Delta\Phi_i \end{cases} \tag{8}$$
We can calculate the arithmetic average of the actual phase to be calibrated as follows:
$$\overline{\Delta\Phi_m} = \frac{1}{N-1}\left[\sum_{\substack{i=0 \\ i \neq m}}^{N-1}\Delta\Phi_i - 2\sum_{i \in G_0}\arcsin\left(\frac{K_{mi}}{c}\right) + 2\sum_{i \in G_1}\arcsin\left(\frac{K_{mi}}{c}\right) - 2\sum_{i \in G_2}\arcsin\left(\frac{K_{im}}{c}\right) + 2\sum_{i \in G_3}\arcsin\left(\frac{K_{im}}{c}\right)\right] \tag{9}$$
where $G_0 = \{x \mid x \in \{0, \dots, m-1\}, \Delta\Phi_m < \Delta\Phi_x\}$, $G_1 = \{x \mid x \in \{0, \dots, m-1\}, \Delta\Phi_m \geq \Delta\Phi_x\}$, $G_2 = \{x \mid x \in \{m+1, \dots, N-1\}, \Delta\Phi_m < \Delta\Phi_x\}$, and $G_3 = \{x \mid x \in \{m+1, \dots, N-1\}, \Delta\Phi_m \geq \Delta\Phi_x\}$.
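A direct transcription of Equations (8) and (9) is sketched below, assuming $K$ is stored as a symmetric matrix of measured mean absolute differences; the clipping of $K/c$ to $[-1, 1]$ is an added numerical safeguard, not part of the derivation above.

```python
import numpy as np

def average_phase(m, delta_phi, K, c):
    """Arithmetic average of the candidate solutions, Equation (9).

    delta_phi : (N,) current phase estimates
    K         : (N, N) symmetric matrix of measured differences K_ij
    """
    N = len(delta_phi)
    total = 0.0
    for i in range(N):
        if i == m:
            continue
        ratio = np.clip(K[m, i] / c, -1.0, 1.0)  # keep arcsin well defined
        # +2*arcsin for the G1/G3 branches, -2*arcsin for G0/G2, Eq. (8)
        sign = 1.0 if delta_phi[m] >= delta_phi[i] else -1.0
        total += delta_phi[i] + sign * 2 * np.arcsin(ratio)
    return total / (N - 1)
```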

3.3. Iteration Process

For the initial iteration, the actual phase to be calibrated is $\Delta\Phi_n = \frac{2\pi n}{N} - \omega[\Delta y_n]$. In subsequent iterations, the actual phase to be calibrated is the result obtained in the previous iteration. In each iteration, we judge whether the result meets the phase compensation accuracy requirement $\epsilon$:
$$\frac{\sum_{i=0}^{N-1}\left(\Delta\Phi_i^{(t)} - \Delta\Phi_i^{(t-1)}\right)^2}{N} < \epsilon \tag{10}$$
where $t$ is the iteration number and $\epsilon$ is the preset accuracy requirement for stopping. If the difference between two consecutive iterations satisfies Equation (10), we stop the iteration. Otherwise, the SGD method is used to obtain the new actual phase to be calibrated, and we proceed to the next iteration.
The reason for introducing SGD [28] is that the ordinary iteration process has no exploration mechanism and easily falls into a local optimum, as shown in Figure 2.
Before starting the SGD, we precalculate the following parameters: the initial descending step length $s$; the direction parameter $r$; the Euclidean distance $dis(\Delta\Phi^{(t)}, \Delta\Phi^{(t-1)})$ between the actual phase compensation results of two consecutive iterations; and the gradient $g(\Delta\Phi^{(t)}, \Delta\Phi^{(t-1)})$ between the results of consecutive iterations. Finally, we can calculate the input phase of the next iteration, $\Delta\Phi_n^{(t+1)}$.
We use Equation (11) to describe the updated descending step $s'$:
$$s' = \frac{g(\Delta\Phi^{(t)}, \Delta\Phi^{(t-1)})}{dis(\Delta\Phi^{(t)}, \Delta\Phi^{(t-1)})}\, s = \frac{\Delta\Phi^{(t)} - \Delta\Phi^{(t-1)}}{\left(t - (t-1)\right) dis(\Delta\Phi^{(t)}, \Delta\Phi^{(t-1)})}\, s = \frac{\left[\Delta\phi_0^{(t)} - \Delta\phi_0^{(t-1)},\; \Delta\phi_1^{(t)} - \Delta\phi_1^{(t-1)},\; \dots,\; \Delta\phi_{N-1}^{(t)} - \Delta\phi_{N-1}^{(t-1)}\right]^T}{dis(\Delta\Phi^{(t)}, \Delta\Phi^{(t-1)})}\, s \tag{11}$$
Similarly, for the n-th component, the step size $s'_n$ is:
$$s'_n = \frac{\Delta\Phi_n^{(t)} - \Delta\Phi_n^{(t-1)}}{dis(\Delta\Phi^{(t)}, \Delta\Phi^{(t-1)})}\, s \tag{12}$$
Then, we use Equation (13) and a random function $rand()$ to determine the gradient descent direction $dir(\Delta\Phi_n^{(t)})$:
$$dir(\Delta\Phi_n^{(t)}) = \begin{cases} 1, & rand() \leq p_0 \\ -1, & rand() > p_0 \end{cases} \tag{13}$$
where $p_0 = \frac{r}{1+r}$ reflects the probability of following the fastest descent direction of the gradient. When $dir(\Delta\Phi_n^{(t)}) = -1$, the iteration direction is reversed; otherwise, no change is made. Then, we can calculate $\Delta\Phi_n^{(t+1)}$ as follows:
$$\Delta\Phi_n^{(t+1)} = \Delta\Phi_n^{(t)} + dir(\Delta\Phi_n^{(t)})\, s'_n \tag{14}$$
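Putting Equations (10)–(14) together, one SGD refinement step can be sketched as follows; the values of $s$, $p_0$, and $\epsilon$ are illustrative placeholders rather than the settings used in the experiments.

```python
import numpy as np

def sgd_step(phi_t, phi_prev, s=0.1, p0=0.8, eps=1e-6, rng=None):
    """One update of Section 3.3. Returns (next_phase, converged)."""
    if rng is None:
        rng = np.random.default_rng()
    diff = phi_t - phi_prev
    if np.mean(diff ** 2) < eps:            # stopping criterion, Eq. (10)
        return phi_t, True
    dist = np.linalg.norm(diff)             # Euclidean distance dis(.)
    step = diff / dist * s                  # per-component step s'_n, Eq. (12)
    direction = np.where(rng.random(len(phi_t)) <= p0, 1.0, -1.0)  # Eq. (13)
    return phi_t + direction * step, False  # update, Eq. (14)
```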
The above derivation can also be extended to an object with 2D movement along the x-axis. A point on the object surface undergoes only translational motion, since its height does not change with the movement. In Figure 1, the phase deviation caused by the object moving in different directions can be solved by the proposed algorithm.
In summary, the improvement in the calculation time of the proposed method can be explained by its computational complexity. The time complexity of our method is $O(MN^2)$, whereas that of the pixel-level approaches represented by Ref. [16] is $O(M^2N)$, where $M$ is the resolution (number of pixels) of a deformed fringe pattern and $N$ is the number of deformed fringe patterns. Since $M$ is typically several orders of magnitude larger than $N$, the image-level formulation is substantially faster.

4. Experiments

In this section, we compare the performance of our method and other methods through experiments. The first experiment shows the difference in time cost between the proposed method and the method in Ref. [16]; the second shows the performance of the two methods under different phase deviations; the third shows the surface reconstruction of an object moving along the y-axis, compared with the reconstruction of the static object; the fourth shows the effect of different motion directions and step lengths on the reconstruction results of the proposed method and the method in Ref. [29].
We built a fringe projection system comprising a DLP projector (LightCrafter 4500, TI) and a monochrome industrial camera (Blackfly S BFS-U3-32S4M). The methods were implemented in Matlab on a LENOVO Y7000 computer with a 2.7 GHz CPU and 16 GB of memory. The resolution of the camera is 948 × 604, with a maximum frame rate of 30 frames per second. We placed different statues on sliding rails to achieve 2D movement parallel or perpendicular to the stripes. In order to verify the effectiveness of the proposed algorithm, four groups of experiments were carried out with different motion step lengths and directions.
We take the Voltaire statue used in the experiment as an example to illustrate the preprocessing of the captured fringe patterns; the statue is shown in Figure 3a. The object image with fringe patterns is captured in three components (red, green, and blue), as shown in Figure 3b. After background subtraction, Figure 3c accurately distinguishes the object from the background. To further eliminate the effect of holes and noise, a morphological filter is applied, yielding Figure 3d.
The ASIFT [30] algorithm is applied to track the object movement from the captured images. The output of the ASIFT algorithm is the coordinates of the matched points, as shown in Figure 4. From these coordinates, the correspondence between images can be established.
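A rough sketch of this tracking step is given below. Since ASIFT is not bundled with OpenCV, plain SIFT (cv2.SIFT_create) stands in here purely to illustrate how matched keypoints yield the pixel displacement; it lacks ASIFT's full affine invariance, and the function name is an assumption.

```python
import cv2
import numpy as np

def estimate_displacement(img0, img1):
    """Median (dx, dy) displacement between two grayscale captures,
    derived from matched feature points (SIFT as a stand-in for ASIFT)."""
    sift = cv2.SIFT_create()
    kp0, des0 = sift.detectAndCompute(img0, None)
    kp1, des1 = sift.detectAndCompute(img1, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
    matches = matcher.match(des0, des1)
    shifts = [np.subtract(kp1[m.trainIdx].pt, kp0[m.queryIdx].pt)
              for m in matches]
    return np.median(shifts, axis=0)  # robust against mismatched pairs
```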

4.1. Quantitative Experiments Based on Computational Performance

This experiment compares the performance of the two methods at different projection frequencies. In order to quantify the difference in accuracy, we first compared the phase compensation results of the two methods under different preset deformed phases. When the projection frequency is 30 Hz, the compensation results are shown in Table 1.
In order to quantitatively evaluate the difference in phase compensation accuracy, we use Equation (15) to calculate the mean square error (MSE) between the two methods:
$$MSE = \frac{\sum_{i=0}^{N-1}\left(\Delta\Phi_i^{(0)} - \Delta\Phi_i^{(1)}\right)^2}{N} \tag{15}$$
where $\Delta\Phi_i^{(0)}$ is the phase compensation result for the i-th deformed fringe pattern from the method in Ref. [16], and $\Delta\Phi_i^{(1)}$ is that of our method. As shown in Table 2, the MSE between the phase compensation results of the two methods under different projection frequencies is kept within 1°.
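For reference, Equation (15) amounts to a one-line computation; a minimal sketch with hypothetical inputs:

```python
import numpy as np

def mse(phi_a, phi_b):
    """MSE of Equation (15) between two phase-compensation results."""
    phi_a, phi_b = np.asarray(phi_a), np.asarray(phi_b)
    return np.mean((phi_a - phi_b) ** 2)
```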
Next, we compare the computational cost of the two methods for the phase compensation of 36 fringe patterns with different preset deformed phases, at an image resolution of 948 × 604. Table 3 shows the comparison of the calculation times. The average computational cost of the method in Ref. [16] is 5.1857 s/frame, while that of our method is 1.0378 s/frame. The calculation cost of our method is only 20.0127% of that of the method in Ref. [16], which greatly reduces the calculation burden.

4.2. Simulation Experiment of Phase Deviation Cancellation

We selected different phase deviations within $(-\pi, \pi)$ to verify the convergence of our method and the method in Ref. [16]. To facilitate quantitative analysis, we again use Equation (15) to calculate the MSE between the phase compensation results and the actual phase.
Here, $\Delta\Phi_i^{(0)}$ is the phase compensation result of the i-th deformed fringe pattern, and $\Delta\Phi_i^{(1)}$ is the actual phase of the i-th deformed fringe pattern. We let $\theta$ denote the phase deviation, calculated as the difference between the iteration starting point $\Delta\Phi_i^{(0)}$ and the actual phase $\Delta\Phi_i^{(1)}$.
When the frequency is 30 Hz, we set $\theta \in \{-180°, -170°, -160°, \dots, 160°, 170°, 180°\}$. The result is shown in Figure 5. In order to further illustrate the effect in detail, we conducted an experiment with $\theta \in [-10°, 10°]$.
It can be seen from Figure 5 that our method converges to the actual phase from different iteration starting points within $(-\pi, \pi)$. The MSE of our method is always kept within 0.4°, and the average MSE is 0.2437°. As $\theta$ increases, the MSE of the method in Ref. [16] grows rapidly, indicating that this method easily falls into local optima. In the partially enlarged view, the MSE of our method is always kept within 0.2°, and the average MSE is 0.1370°. When $\theta \in [-2°, 2°]$, the method in Ref. [16] is more accurate, because we trade part of the accuracy for a large reduction in computational complexity; even so, the MSE difference between the two methods does not exceed 0.12°. Once $\theta$ exceeds this range, the MSE of the method in Ref. [16] increases with $\theta$.

4.3. Quantitative Experiment of Free-Moving Object

We applied the proposed method to the 3D surface reconstruction of moving and stationary objects. For the bear statue moving along the y-axis, the moving length of each step is uncontrolled. The surface reconstruction results of our method in the different scenes are shown in Figure 6. The object moves at 3 mm/step between the first two captured fringe patterns and at 5 mm/step between the last two. As shown in Figure 6, the proposed method performs better in terms of details.
Since the absolute shape error (in mm) is too small to assess directly, we introduce the average pixel difference (APD) to measure the difference between the reconstruction results of different scenarios:
$$APD = \frac{1}{h_{s,max}} \cdot \frac{\sum_{i=1}^{X}\sum_{j=1}^{Y}\sqrt{\left(h_d(i,j) - h_s(i,j)\right)^2}}{|x_{max} - x_{min}| \cdot |y_{max} - y_{min}|} \tag{16}$$
where $h_d$ is the height of the surface reconstruction result in the dynamic scene; $h_s$ is the height of our surface reconstruction result in the static scene; $h_{s,max}$ is the height of the highest point of the static reconstruction (i.e., the highest point of the nose of the Voltaire or bear statue); $x_{min}$ and $x_{max}$ are the minimum and maximum x coordinates of the reconstruction result projected onto the x-axis (the leftmost and rightmost points of the object); similarly, $y_{min}$ and $y_{max}$ are the minimum and maximum coordinates projected onto the y-axis. The APD expresses the error as a proportion of the whole reconstruction result and thus further indicates the reconstruction quality.
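Under the reconstruction of Equation (16) given above (whose exact normalization we infer from the surrounding definitions), the APD can be computed as sketched below; the array names and the bounding-box convention are assumptions.

```python
import numpy as np

def apd(h_dynamic, h_static, bbox):
    """Average pixel difference, Equation (16), as a percentage.

    h_dynamic, h_static : height maps of the same region (same shape)
    bbox                : (x_min, x_max, y_min, y_max) of the object
    """
    x_min, x_max, y_min, y_max = bbox
    area = abs(x_max - x_min) * abs(y_max - y_min)   # object footprint (px)
    total = np.sum(np.abs(h_dynamic - h_static))     # sum of |h_d - h_s|
    return total / (area * np.max(h_static)) * 100   # relative to peak height
```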
Calculated by Equation (16), APD = 5.4896%. If we exclude the effect of the cavity area caused by changes in light and shadow, APD = 1.4205%, which means the gap between the surface reconstructions of the moving object and the static object is well controlled.

4.4. Quantitative Experiment of Different Speed

We further verify the effectiveness of the proposed method by expanding the moving distance of each step and changing the movement direction of the object to the x-axis. The model used in the experiment and a captured fringe pattern are shown in Figure 3a,b. The reconstruction results from traditional PSP [29] are shown in Figure 7a–d, and the reconstruction results from the proposed method are shown in Figure 7e–h.
It can be seen intuitively that, as the motion step length increases, the quality of the reconstruction result gradually deteriorates, which is caused by the limited frame rate and viewing angle of the industrial camera used in the experiment. When the object is stationary or moving with a step length of 10 mm/step, the difference between our method and traditional PSP in the reconstruction results is not visually obvious.
In order to further quantify the reconstruction quality, we use Equation (16) to calculate the APD between the reconstruction results for each motion step; the results are shown in Table 4. As the motion step is expanded to 20 mm/step, the reconstruction result of the traditional method shows obvious errors in the contours of the statue, while our method still performs well. When the motion step is expanded to 30 mm/step, the distance between the initial and final positions of the object reaches $30(N-1)$ mm for N-step PSP. When the range of motion is expanded, the fringe patterns captured at different positions exhibit the perspective difference shown in Figure 7, which leads to information loss in the reconstruction of complex features, such as the facial contours and nose of the Voltaire statue. If the motion step length needs to be further expanded, the camera and projection hardware used in the experiments must be upgraded.
Since the frame rate of the industrial camera used in the experiment is 30 frames per second, the object movement speed allowed by the experimental hardware is 400 mm/s, which already meets the requirements of measurement scenarios on industrial assembly lines.

5. Discussion

With the idea of employing object tracking to reconstruct moving objects, one highlight of this paper is that it significantly reduces the computational burden of traditional pixel-level phase correction methods [16]. On the other hand, the Hilbert transform-based methods [17,20] employ the Hilbert transform to shift the phase information. We will compare the object tracking-based approach with the Hilbert transform-based approach, at the level of both principle and performance, in future work.
Another highlight of this paper is its ability to maintain good performance even when the motion range is extended. Commonly used iterative methods are highly sensitive to the initial values and cannot maintain good performance when the motion introduces large phase changes. Therefore, we redefine the iterative process and introduce an exploration mechanism that changes the direction of gradient descent to find the global optimum and avoid getting trapped in a local optimum. With the introduction of the SGD algorithm, we greatly improve the robustness of the method, as demonstrated by both simulation and practical experiments.

6. Conclusions

In this paper, an image-level phase compensation method for N-step PSP is presented to eliminate the phase deviation caused by motion. The difference among the deformed fringe patterns is introduced to construct the iterative expression, which significantly reduces the computational burden. The proposed algorithm adopts the SGD method to avoid falling into local optima and extends the admissible phase deviation range beyond existing methods. Experiments prove that the proposed method is effective when the object undergoes 2-D movement at different speeds. The proposed method achieves a larger allowable range of motion at a smaller time cost, striking a balance between computational complexity and the effective range of the algorithm.

Author Contributions

Conceptualization, W.L.; methodology, Z.C.; software, Z.C. and X.W.; validation, W.L., X.W. and Z.C.; formal analysis, X.W.; investigation, Z.C.; resources, Z.C.; data curation, X.W.; writing—original draft preparation, X.W.; writing—review and editing, W.L., Y.D. and L.L.; visualization, X.W. and Z.C.; supervision, W.L.; project administration, W.L.; funding acquisition, W.L. and L.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China number 61977064, 61871436, Major Public Welfare Project of Henan Province number 201300311200 and General Science Foundation of Henan Province number 222300420427.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Lu, L.; Xi, J.; Yu, Y.; Guo, Q. New approach to improve the accuracy of 3-D shape measurement of moving object using phase shifting profilometry. Opt. Express 2013, 21, 30610–30622.
  2. Zhang, M.; Qian, C.; Tao, T.; Feng, S.; Hu, Y.; Li, H.; Zuo, C. Robust and efficient multi-frequency temporal phase unwrapping: Optimal fringe frequency and pattern sequence selection. Opt. Express 2017, 25, 20381–20400.
  3. Ding, Y.; Peng, K.; Lu, L.; Zhong, K.; Zhu, Z. Simplified fringe order correction for absolute phase maps recovered with multiple-spatial-frequency fringe projections. Meas. Sci. Technol. 2017, 28, 025203.
  4. Lu, L.; Jia, Z.; Luan, Y.; Xi, J. Reconstruction of isolated moving objects with high 3D frame rate based on phase shifting profilometry. Opt. Commun. 2019, 438, 61–66.
  5. Jiao, S.; Sun, M.; Gao, Y.; Lei, T.; Xie, Z. Motion estimation and quality enhancement for a single image in dynamic single-pixel imaging. Opt. Express 2019, 27, 12841–23954.
  6. Feng, S.; Zuo, C.; Tao, T.; Hu, Y.; Zhang, M.; Chen, Q.; Gu, G. Robust dynamic 3-D measurements with motion-compensated phase-shifting profilometry. Opt. Lasers Eng. 2018, 103, 127–138.
  7. Qian, J.; Tao, T.; Feng, S.; Chen, Q.; Zuo, C. Motion-artifact-free dynamic 3D shape measurement with hybrid Fourier-transform phase-shifting profilometry. Opt. Express 2019, 27, 2713–2731.
  8. Lu, L.; Suresh, V.; Zheng, Y.; Wang, Y.; Li, B. Motion induced error reduction methods for phase shifting profilometry: A review. Opt. Lasers Eng. 2021, 141, 106573.
  9. Wu, H.; Cao, Y.; An, H.; Li, Y.; Li, H.; Xu, C.; Yang, N. High-precision 3D shape measurement of rigid moving objects based on the Hilbert transform. Appl. Opt. 2021, 60, 8390–8399.
  10. Zheng, D.; Da, F.; Kemao, Q.; Seah, H.S. Phase error analysis and compensation for phase shifting profilometry with projector defocusing. Appl. Opt. 2016, 55, 5721–5728.
  11. Feng, S.; Qian, C.; Chao, Z.; Tao, T.; Asundi, A. Motion-oriented high speed 3-D measurements by binocular fringe projection using binary aperiodic patterns. Opt. Express 2017, 25, 540.
  12. Zuo, C.; Tao, T.; Feng, S.; Huang, L.; Asundi, A.; Chen, Q. Micro Fourier Transform Profilometry (μFTP): 3D shape measurement at 10,000 frames per second. Opt. Lasers Eng. 2018, 102, 70–91.
  13. Wu, Z.; Guo, W.; Li, Y.; Liu, Y.; Zhang, Q. High-speed and high-efficiency three-dimensional shape measurement based on Gray-coded light. Photonics Res. 2020, 8, 819–829.
  14. Huang, X.; Zhang, Y.; Xiong, Z. High-speed structured light based 3D scanning using an event camera. Opt. Express 2021, 29, 35864–35876.
  15. Weise, T.; Leibe, B.; Van Gool, L. Fast 3D Scanning with Automatic Motion Compensation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Minneapolis, MN, USA, 17–22 June 2007.
  16. Lu, L.; Xi, J.; Yu, Y.; Guo, Q. Improving the accuracy performance of phase-shifting profilometry for the measurement of objects in motion. Opt. Lett. 2014, 39, 6715–6718.
  17. Wang, Y.; Liu, Z.; Jiang, C.; Zhang, S. Motion induced phase error reduction using a Hilbert transform. Opt. Express 2018, 26, 34224–34235.
  18. Wang, Y.; Suresh, V.; Li, B. Motion-induced error reduction for binary defocusing profilometry via additional temporal sampling. Opt. Express 2019, 27, 23948–23958.
  19. Duan, M.; Jin, Y.; Xu, C.; Xu, X.; Zhu, C.; Chen, E. Phase-shifting profilometry for the robust 3-D shape measurement of moving objects. Opt. Express 2019, 27, 22100–22115.
  20. Li, L.; Zheng, Y.; Yang, K.; Su, X.; Wang, Y.; Chen, X.; Wang, Y.; Li, B. Modified three-wavelength phase unwrapping algorithm for dynamic three-dimensional shape measurement. Opt. Commun. 2021, 480, 126409.
  21. Duan, M.; Jin, Y.; Chen, H.; Kan, Y.; Zhu, C.; Chen, E. Dynamic 3-D shape measurement in an unlimited depth range based on adaptive pixel-by-pixel phase unwrapping. Opt. Express 2020, 28, 14319–14332.
  22. Guo, W.; Wu, Z.; Li, Y.; Liu, Y.; Zhang, Q. Real-time 3D shape measurement with dual-frequency composite grating and motion-induced error reduction. Opt. Express 2020, 28, 26882–26897.
  23. Spoorthi, G.E.; Gorthi, S.; Gorthi, R. PhaseNet: A Deep Convolutional Neural Network for Two-Dimensional Phase Unwrapping. IEEE Signal Process. Lett. 2019, 26, 54–58.
  24. Shi, J.; Zhu, X.; Wang, H.; Song, L.; Guo, Q. Label enhanced and patch based deep learning for phase retrieval from single frame fringe pattern in fringe projection 3D measurement. Opt. Express 2019, 27, 28929–28943.
  25. Yu, H.; Chen, X.; Zhang, Z.; Zuo, C.; Zhang, Y.; Zheng, D.; Han, J. Dynamic 3-D measurement based on fringe-to-fringe transformation using deep learning. Opt. Express 2020, 28, 9405–9418.
  26. Fan, S.; Liu, S.; Zhang, X.; Huang, H.; Liu, W.; Jin, P. Unsupervised deep learning for 3D reconstruction with dual-frequency fringe projection profilometry. Opt. Express 2021, 29, 32547–32567.
  27. Arun, K.S. Least-Squares Fitting of Two 3-D Point Sets. IEEE Trans. Pattern Anal. Mach. Intell. 1987, 9, 698–700.
  28. Bottou, L. Stochastic Gradient Descent Tricks. In Neural Networks: Tricks of the Trade; Springer: Berlin/Heidelberg, Germany, 2012.
  29. Ding, Y.; Xi, J.; Yu, Y.; Deng, F. Absolute phase recovery of three fringe patterns with selected spatial frequencies. Opt. Lasers Eng. 2015, 70, 18–25.
  30. Yu, G.; Morel, J. ASIFT: An Algorithm for Fully Affine Invariant Comparison. Image Process. Line 2011, 1, 11–38.
Figure 1. Schematic diagram of object movement.
Figure 2. The exploration mechanism of stochastic gradient descent.
Figure 3. The preprocess of captured fringe patterns: (a) The plastic statue used in the experiment; (b) The captured object fringe patterns; (c) The result after background subtraction; (d) The result after morphological filter.
Figure 4. The feature points obtained by ASIFT algorithm and the corresponding relationship.
Figure 5. The effect of different initial iteration points.
Figure 6. The comparison of reconstruction results in different scenarios: (a) The captured fringe pattern of the low frequency; (b) Our method: stationary object; (c) Our method: uncontrolled moving object; (d) Method in Ref. [16]: uncontrolled moving object; (e) The captured fringe pattern of the high frequency; (f) Our method: stationary object (viewed from the side); (g) Our method: uncontrolled moving object (viewed from the side); (h) Method in Ref. [16]: stationary object (viewed from the side).
Figure 7. The comparison of reconstruction results in different motion step between the traditional method and the proposed method: (a) Traditional PSP method [29]: stationary object; (b) Traditional PSP method [29]: 10 (mm/step); (c) Traditional PSP method [29]: 20 (mm/step); (d) Traditional PSP method [29]: 30 (mm/step); (e) Our method: stationary object; (f) Our method: 10 (mm/step); (g) Our method: 20 (mm/step); (h) Our method: 30 (mm/step).
Table 1. Phase compensation results of different methods.

| Preset Deformed Phase (°) | 0 | 10 | 20 | 30 | 40 | 50 |
|---|---|---|---|---|---|---|
| Method in Ref. [16] (°) | 0.9322 | 11.0005 | 21.3034 | 31.0420 | 41.3998 | 51.0328 |
| Our method (°) | 1.2211 | 11.4410 | 20.9574 | 30.2873 | 40.6716 | 50.9693 |

| Preset Deformed Phase (°) | 60 | 70 | 80 | 90 | 100 | 110 |
|---|---|---|---|---|---|---|
| Method in Ref. [16] (°) | 61.3550 | 71.2935 | 80.9131 | 90.5499 | 100.3419 | 110.3698 |
| Our method (°) | 62.1627 | 71.7829 | 81.0582 | 91.3023 | 101.4257 | 111.8201 |

| Preset Deformed Phase (°) | 120 | 130 | 140 | 150 | 160 | 170 |
|---|---|---|---|---|---|---|
| Method in Ref. [16] (°) | 120.0243 | 130.4847 | 140.5074 | 150.4744 | 160.3691 | 170.1656 |
| Our method (°) | 120.8163 | 130.9091 | 140.5823 | 151.1774 | 160.4379 | 170.4528 |

| Preset Deformed Phase (°) | 180 | 190 | 200 | 210 | 220 | 230 |
|---|---|---|---|---|---|---|
| Method in Ref. [16] (°) | 180.0487 | 189.5176 | 200.1042 | 209.5452 | 219.5894 | 229.3869 |
| Our method (°) | 180.3607 | 189.5218 | 200.5750 | 210.9659 | 220.6261 | 230.2743 |

| Preset Deformed Phase (°) | 240 | 250 | 260 | 270 | 280 | 290 |
|---|---|---|---|---|---|---|
| Method in Ref. [16] (°) | 238.8631 | 249.1729 | 258.6368 | 268.7223 | 279.1238 | 288.9783 |
| Our method (°) | 239.1686 | 250.1688 | 258.9183 | 269.6454 | 279.5191 | 288.9242 |

| Preset Deformed Phase (°) | 300 | 310 | 320 | 330 | 340 | 350 |
|---|---|---|---|---|---|---|
| Method in Ref. [16] (°) | 298.6066 | 308.6045 | 319.1106 | 329.4160 | 339.4323 | 349.3838 |
| Our method (°) | 299.0089 | 309.0647 | 319.4346 | 329.6236 | 339.4916 | 349.4996 |
Table 2. The MSE of the two methods.

| Frequency (Hz) | 5 | 10 | 15 | 20 | 25 | 30 |
|---|---|---|---|---|---|---|
| MSE of the two methods (°) | 0.8864 | 0.5758 | 0.4801 | 0.8342 | 0.7471 | 0.6361 |
Table 3. The calculation time of different methods.

| Frequency (Hz) | 5 | 10 | 15 | 20 | 25 | 30 |
|---|---|---|---|---|---|---|
| Method in Ref. [16] (s) | 183.2324 | 193.2366 | 190.8281 | 183.0466 | 185.6055 | 184.1551 |
| Our method (s) | 37.3501 | 37.2767 | 37.4151 | 37.2874 | 37.3287 | 37.5048 |
Table 4. The APDs of reconstruction results with different motion steps.

| Motion Step (mm/step) | 0 | 10 | 20 | 30 |
|---|---|---|---|---|
| Traditional PSP method (APD, %) | 0.0000 | 2.3860 | 4.4095 | 4.5851 |
| Our method (APD, %) | 0.0000 | 1.4045 | 2.5108 | 3.8553 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

