Online Phase Measurement Profilometry for a Fast-Moving Object

Abstract: When the measured object moves quickly on a production line, the captured deformed pattern may suffer motion blur, and some phase information will be lost. The frame rate of the camera therefore has to be increased by adjusting its image acquisition mode to adapt to a fast-moving object, but the resolution of the captured deformed pattern is sacrificed. To compensate, a super-resolution image reconstruction method based on maximum a posteriori (MAP) estimation is adopted to obtain high-resolution deformed patterns; the reconstructed high-resolution patterns also show good noise suppression. Finally, all the reconstructed high-resolution equivalent phase-shifting deformed patterns are used for online three-dimensional (3D) reconstruction. Experimental results prove the effectiveness of the proposed method, which has good application prospects in high-precision, fast online 3D measurement.


Introduction
With the development of optics, computer and information technology, 3D measurement technology plays an important role in reverse engineering, industrial 3D testing, medical diagnosis, cultural relic protection and so on [1][2][3]. The most commonly used 3D measurement technologies are phase measurement profilometry (PMP) [4,5] and Fourier transform profilometry (FTP) [6,7]. Compared with FTP, PMP performs a point-to-point phase calculation over multiple frames of deformed patterns; it has higher precision and is more favored by industry. Conventional PMP-based 3D measurement requires the object to remain fixed, but in online 3D measurement the object is moving, so the object coordinates in the captured frames do not correspond, which leads to errors in PMP phase demodulation. A pixel matching method [8] is used to obtain deformed patterns with consistent object coordinates. Scholars have carried out extensive research in the field of online 3D measurement. To improve the matching speed, Peng Kuang et al. [9] proposed a new pixel matching method using the modulation of shadow areas in online 3D measurement. To improve measurement precision, Chen Cheng et al. [10] proposed an online phase measuring profilometry for objects moving in a straight line. To avoid the loss of phase information due to frequency filtering, Peng Kuang et al. [11] proposed a dual-frequency online PMP method with phase shifting parallel to the moving direction of the measured object.
Generally, these methods work well when the speed of the production line is below 0.2 m/s. However, when the object moves faster, the captured deformed patterns show motion blur. To adapt to the higher speed, the frame rate of the camera can be increased by adjusting the image acquisition mode, but the resolution of the captured deformed patterns is inevitably sacrificed. To recover high-resolution deformed patterns and improve measurement precision, we propose an online phase measurement profilometry method based on super-resolution image reconstruction [12,13], which combines multiple frames of low-resolution images with sub-pixel [14] shifts among them into one high-resolution image. This paper adopts a super-resolution reconstruction method based on maximum a posteriori (MAP) estimation [15,16], using Gauss and Markov-Gibbs [17,18] random field models to construct the posterior probability of the high-resolution deformed pattern; the optimal estimate of the high-resolution deformed pattern is obtained by minimizing the resulting objective function. Finally, all the equivalent phase-shifting deformed patterns are demodulated for online 3D reconstruction.

Principle of PMP
The sinusoidal grating is designed with its phase shifting parallel to the moving direction. The projector projects the grating onto the surface of the object, and the deformed pattern I(x, y) captured by the camera is

I(x, y) = A(x, y) + B(x, y) cos(ϕ(x, y) + δ)  (1)

where A(x, y) represents the background intensity of the deformed pattern, B(x, y) reflects the contrast of the deformed pattern, ϕ(x, y) is the phase modulated by the height of the object, and δ is the shifting phase. In N-step PMP (N ≥ 3), the n-th deformed pattern is

I_n(x, y) = A(x, y) + B(x, y) cos(ϕ(x, y) + 2nπ/N)  (2)

The phase ϕ(x, y) is calculated by Equation (3):

ϕ(x, y) = arctan[ (Σ_{n=1}^{N} I_n(x, y) sin(2nπ/N)) / (Σ_{n=1}^{N} I_n(x, y) cos(2nπ/N)) ]  (3)

Because ϕ(x, y) is wrapped into (−π, π] by the arctan function, it must be unwrapped into the continuous phase Φ(x, y) using the rhombus phase unwrapping algorithm [19], and the object height distribution h(x, y) is then reconstructed by the phase-to-height mapping algorithm [20].
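As an illustration, the N-step demodulation of Equation (3) can be sketched in NumPy. This is a minimal sketch, not the paper's implementation; note that with the +2nπ/N shift convention of Equation (2), the ratio in Equation (3) recovers the phase up to a sign, which is absorbed by the calibration.

```python
import numpy as np

def wrapped_phase(patterns):
    """N-step PMP phase demodulation, following Equation (3).

    patterns: sequence of N deformed patterns I_n(x, y), n = 1..N,
    captured with shifting phases 2*n*pi/N as in Equation (2).
    Returns the wrapped phase in (-pi, pi].
    """
    N = len(patterns)
    shifts = 2 * np.pi * np.arange(1, N + 1) / N
    I = np.stack([np.asarray(p, dtype=float) for p in patterns])
    num = np.tensordot(np.sin(shifts), I, axes=1)  # sum_n I_n sin(2n*pi/N)
    den = np.tensordot(np.cos(shifts), I, axes=1)  # sum_n I_n cos(2n*pi/N)
    # arctan2 resolves the quadrant ambiguity of a plain arctan ratio.
    return np.arctan2(num, den)
```

With the +2nπ/N convention this returns −ϕ(x, y); a five-step sequence (N = 5, as in the experiment) is demodulated the same way.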
The phase-to-height mapping can be written as

1/h(x, y) = a(x, y) + b_1(x, y)/[Φ(x, y) − Φ_C(x, y)] + b_2(x, y)/[Φ(x, y) − Φ_C(x, y)]^2  (4)

where a(x, y), b_1(x, y) and b_2(x, y) can be obtained by plane calibration, and Φ_C(x, y) is the phase of the reference plane, obtained by measuring the reference plane in advance. Because the camera acquisition mode is changed for fast online measurement, the captured deformed pattern is degraded to a low-resolution deformed pattern, and the main degradation process can be expressed as

I_{n,k} = D_{n,k} B_{n,k} M_{n,k} I_n + ΔI_{n,k}  (5)

Super-Resolution
where I_{n,k} is the low-resolution deformed pattern, i.e., the k-th (k = 1, 2, . . . , K) frame captured at the n-th (n = 1, 2, . . . , N) phase-shifting position, and K is the number of low-resolution deformed patterns captured at the same work station (the same shifting phase). D_{n,k} is the down-sampling matrix: the high-resolution image is sampled at a fixed interval so that its resolution is reduced. B_{n,k} is the blur matrix, which expresses the effect of the optical system's blur and aberration on the high-resolution image and is modeled mathematically by the point spread function (PSF). M_{n,k} is the motion matrix, which characterizes the pixel displacement of the low-resolution image relative to the high-resolution image. I_n is the original high-resolution deformed pattern matrix, and ΔI_{n,k} is the additive random noise matrix. B_{n,k} and D_{n,k} are the same for every low-resolution deformed pattern once a reference is selected, but M_{n,k} may differ from frame to frame because of the motion of the object.
For convenience, only one phase-shifting position is analyzed here, so Equation (5) can be written as

I_k = H_k I + ΔI_k  (6)

where H_k = D_k B_k M_k is the degradation matrix. Only I_k is known in Equation (6), so H_k and ΔI_k must be estimated in order to solve for I. In the experiment, to obtain the high-resolution image from the low-resolution images, we set the resolution magnification factor q and interpolate the first low-resolution frame by bicubic interpolation; this interpolated image is regarded as the reference frame of the high-resolution image. The down-sampling matrix D_k then follows from q, the motion matrix M_k between each remaining low-resolution frame and the reference frame is calculated by pixel matching, and the blur matrix B_k is represented by the PSF. In this way, H_k is estimated.
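To make the degradation model concrete, the following NumPy sketch simulates one noise-free observation H_k I = D_k B_k M_k I acting on a high-resolution pattern. This is a hypothetical illustration under simplifying assumptions (integer-pixel motion, circular boundaries, FFT-based convolution), not the paper's exact operators.

```python
import numpy as np

def gaussian_psf(size=5, sigma=1.0):
    """Separable Gaussian point spread function (PSF), normalized to sum 1."""
    ax = np.arange(size) - size // 2
    g = np.exp(-ax**2 / (2 * sigma**2))
    k = np.outer(g, g)
    return k / k.sum()

def degrade(I, dx, dy, psf, q):
    """One low-resolution frame of Equation (6): I_k = D_k B_k M_k I (no noise).

    M_k: integer pixel shift by (dx, dy); B_k: convolution with the PSF
    (circular boundary via FFT for brevity); D_k: q-fold decimation.
    """
    shifted = np.roll(np.roll(I, dy, axis=0), dx, axis=1)          # M_k
    kernel = np.zeros_like(shifted, dtype=float)
    s = psf.shape[0]
    kernel[:s, :s] = psf
    kernel = np.roll(kernel, (-(s // 2), -(s // 2)), axis=(0, 1))  # center PSF at (0,0)
    blurred = np.fft.ifft2(np.fft.fft2(shifted) * np.fft.fft2(kernel)).real  # B_k
    return blurred[::q, ::q]                                       # D_k
```

In the experiment the inverse direction is what matters: q, the PSF and the pixel-matched shifts are estimated from the observed low-resolution frames to build H_k.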
Because super-resolution image reconstruction is an ill-posed problem, the estimate of I that satisfies Equation (6) is a set of images rather than a unique solution. However, under Bayesian theory [21], MAP estimation can flexibly add the prior probability of the image, which is the mathematical expression of image features, and thereby convert the ill-posed problem into a well-posed one by means of regularization. In this way, the unique solution of the problem can be obtained.
MAP estimation seeks the high-resolution image with the maximum probability given the known low-resolution image sequence I_k (k = 1, 2, . . . , K), i.e., it solves max P(I | I_k). According to Bayesian theory, this can be written as

Î = argmax P(I | I_k) = argmax [ P(I_k | I) P(I) / P(I_k) ]  (7)

where Î is the estimate of the high-resolution image I, P(I | I_k) is the posterior probability, P(I_k | I) is the conditional probability that the high-resolution image degenerates into the low-resolution images, and P(I) and P(I_k) are the prior probabilities of the high-resolution image and the low-resolution images, respectively. Since P(I_k) has no effect on the solution of Î, it can be omitted, and Equation (7) reduces to

Î = argmax P(I_k | I) P(I)  (8)

Taking logarithms,

Î = argmax [ log P(I_k | I) + log P(I) ]  (9)

or, equivalently,

Î = argmin [ −log P(I_k | I) − log P(I) ]  (10)

Equation (10) is the initial form of the objective function. To solve it, the prior probability P(I) and the conditional probability P(I_k | I) must be determined, and their distributions depend on assumptions about the statistical model of the image. According to the degradation model I_k = H_k I + ΔI_k, a low-resolution image can be regarded as a random field whose mean is H_k I because of the random noise, so a Gauss random field is used to model P(I_k | I). A Markov random field describes the local statistical properties of an image, while a Gibbs random field describes its global properties through a joint probability; by their equivalence, the global statistics can be calculated from the local Gibbs distribution model, which is used for P(I).

Establishment of Objective Equation
When the image statistical model is selected, Equation (10) can be written as

Î = argmin [ Σ_{k=1}^{K} ||I_k − H_k Î||^2 + α Σ_{c∈C} ρ_α(di_c) ]  (11)

where Σ_{k=1}^{K} ||I_k − H_k Î||^2 represents the difference between the actual data and the estimate, Σ_{c∈C} ρ_α(di_c) is the regular term, and α is the coefficient of the regular term, which determines its influence on the image estimate. C is the set of all cliques [22] in the neighborhood system of the image matrix, ρ_α(x) is the potential function associated with the cliques, which can be chosen to capture different texture statistics, and di_c is the variance of the pixel mean of clique c. The objective function can therefore be expressed as

Θ(Î) = Σ_{k=1}^{K} ||I_k − H_k Î||^2 + α Σ_{c∈C} ρ_α(di_c)  (12)

The role of the regular term is to constrain the estimate Î toward the desired goal and reduce deviation from the optimal solution. The regular term used in this experiment is a function of the Gibbs distribution model, which describes the energy of a feature in the image neighborhood: the higher the energy, the lower the probability that the feature appears. According to the objective equation, the higher the energy of a feature, the larger the regular term and the stronger its suppression during minimization. Therefore, to reduce the influence of noise on the estimate Î, the chosen feature should distinguish noise from the original image and assign noise a large energy. The potential function ρ_α(di_c) is chosen according to the desired penalty on removed image features; common choices include the Huber function and a linear function.
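A minimal sketch of the objective Θ(Î) in Equation (12), assuming first-difference cliques and the Huber potential; the clique system, threshold T and weight α here are illustrative choices, not the paper's exact ones.

```python
import numpy as np

def huber(x, T=1.0):
    """Huber potential: quadratic near zero, linear in the tails, so genuine
    edges are penalized less harshly than by a pure quadratic."""
    a = np.abs(x)
    return np.where(a <= T, x**2, 2 * T * a - T**2)

def objective(I_hat, lows, H_ops, alpha=0.05, T=1.0):
    """Theta(I_hat) = sum_k ||I_k - H_k I_hat||^2 + alpha * sum_c rho(di_c).

    lows:  list of observed low-resolution frames I_k.
    H_ops: list of callables, each applying the degradation H_k = D_k B_k M_k.
    Cliques are approximated by horizontal and vertical first differences.
    """
    data = sum(np.sum((Ik - H(I_hat)) ** 2) for Ik, H in zip(lows, H_ops))
    reg = huber(np.diff(I_hat, axis=0), T).sum() + huber(np.diff(I_hat, axis=1), T).sum()
    return float(data + alpha * reg)
```

The data term pulls Î toward consistency with every observed frame, while the Huber regular term suppresses high-energy (noisy) local variation without flattening edges.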

Iterative Solution
Since it is hard to solve directly for the Î that minimizes Θ(Î), we estimate Î iteratively using gradient descent [23], projecting the negative gradient of the objective function into the constraint space at each iteration, so that Î converges to a local minimum and, as far as possible, to the global minimum.
The whole iterative process can be described as follows:
1. Interpolate the low-resolution image by bicubic interpolation to obtain the initial estimate Î_0. With m = 0, where m is the iteration number, calculate the mean square error (MSE) of the initial negative gradient, MSE_0.
2. Calculate the gradient of the objective function, g_m = ∇Θ(Î_m).
3. Calculate the projection onto the constraint space, p_m = −P g_m, where P is the projection operator.
4. Use the constrained negative gradient to update Î_m: Î_{m+1} = Î_m + β p_m, where β is the learning rate, i.e., the coefficient of the negative gradient of the objective function.
5. Calculate the MSE of the negative gradient of this iteration, MSE_m. If MSE_m / MSE_0 ≤ ε, then Î_m is the best estimate, where ε is the iteration termination threshold; otherwise, return to step 2.
MSE is defined as

MSE_m = (1/(L_1 L_2)) Σ_{x=1}^{L_1} Σ_{y=1}^{L_2} [Î_m(x, y) − I_r(x, y)]^2

where L_1 and L_2 are the width and height of the high-resolution deformed pattern, and I_r is the reference high-resolution deformed pattern, obtained by interpolating the first frame of the low-resolution images. The whole process of MAP is shown in Figure 1.
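The loop in steps 1-5 can be sketched as follows. This is a simplified illustration: the projection operator P is taken as the identity, and the stopping rule uses the mean squared magnitude of the gradient as one plausible reading of the "MSE of the negative gradient".

```python
import numpy as np

def map_sr(I0, grad, beta=0.25, eps=1e-6, max_iter=200):
    """Gradient-descent estimation of the high-resolution image (steps 1-5).

    I0:   initial estimate (e.g., bicubic interpolation of the first frame).
    grad: callable returning the gradient of the objective Theta at I_hat.
    Stops when MSE_m / MSE_0 <= eps, or after max_iter iterations.
    """
    I_hat = I0.astype(float).copy()
    mse0 = np.mean(grad(I_hat) ** 2)          # MSE of the initial negative gradient
    for _ in range(max_iter):
        g = grad(I_hat)                       # step 2
        if mse0 > 0 and np.mean(g**2) / mse0 <= eps:
            break                             # step 5: converged
        I_hat = I_hat - beta * g              # steps 3-4 with P = identity
    return I_hat
```

In the experiments below, the iteration limit max_iter = 200 matches the value reported for both test objects.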

Equivalent Deformed Patterns
Using the previous MAP super-resolution image reconstruction method, we can reconstruct a set of high-resolution deformed patterns at different positions of an online moving object. To obtain equivalent deformed patterns, i.e., patterns in which the object occupies the same position in every frame, this paper adopts the pixel matching of "Fast 3D measurement based on improved optical flow for dynamic objects" by Peng Kuang et al. [24]. As shown in Figure 2, Figure 2a is the modulation of the deformed pattern I_1 and Figure 2b is the modulation of the deformed pattern I_N. With Figure 2a as the reference, the motion displacement of the object is calculated by pixel matching between Figure 2a,b; from this displacement, the modulation aligned to the same object position is obtained, as shown in Figure 2d, where the black area is the information lost after the leftward shift of the pixel matching. Phase demodulation can then be carried out on the region of interest (ROI), the dotted-line region in Figure 2c,d. I_N is moved in the reverse direction according to the calculated motion displacement, and the equivalent deformed patterns are obtained by cropping I_1 and I_N to the ROI in Figure 2d. Similarly, by matching the modulations of I_2 to I_{N−1} against the modulation of I_1 and cropping to the ROI in Figure 2d, a set of equivalent deformed patterns I_1, I_2, I_3, . . . , I_N is obtained. The phase ϕ(x, y) of the online object is obtained by substituting I_1, I_2, I_3, . . . , I_N into Equation (3), the continuous phase Φ(x, y) is obtained by the rhombus phase unwrapping algorithm, and the 3D shape of the online object is obtained by substituting Φ(x, y) into Equation (4).
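The reverse shift and ROI cropping described above can be sketched as follows; this is an illustrative version assuming integer-pixel displacements along the moving direction only.

```python
import numpy as np

def equivalent_patterns(patterns, displacements):
    """Build equivalent deformed patterns from frames of a moving object.

    patterns:      list of deformed patterns I_1..I_N (frame 0 is the reference).
    displacements: per-frame rightward object displacement in pixels; each
                   frame is shifted back (left) by its displacement and all
                   frames are cropped to the common region of interest (ROI).
    """
    max_d = max(displacements)
    roi_w = patterns[0].shape[1] - max_d     # columns valid in every frame
    out = []
    for I, d in zip(patterns, displacements):
        shifted = np.roll(I, -d, axis=1)     # reverse the measured motion
        out.append(shifted[:, :roi_w])       # discard wrapped-in (missing) columns
    return out
```

After this alignment, the object occupies the same pixel coordinates in every cropped frame, so the stack can be fed directly to the N-step demodulation of Equation (3).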


Experiment and Analysis
In order to verify the feasibility and practicability of the proposed method, an online 3D measurement experimental system was built. As shown in Figure 3, the projector used in this experiment is a PLED-W200 DLP and the image collector is an SDI-C2010M camera. The highest frame rate of this camera is 60 fps at an image size of 1920 × 1080; to adapt to the fast speed, we increase the frame rate to 100 fps while the image size is decreased to 256 × 256. During the measurement, the object is driven from left to right by the electric translation platform Y100SC01 at a fast speed. Through prior experimental calibration, the relationship between pixels and motion distance was measured to be approximately 1.245 mm/pixel. The entire image acquisition process is:
1. The designed 5 frames of sinusoidal gratings, with a shifting phase of 2π/5, are combined into a repeating video.
2. The projector projects the video onto the object.
3. Start the electric translation platform at a given speed.
4. Turn on the DLP frame synchronization signal to trigger the camera, thus achieving synchronous acquisition.
5. After 0.2 s, turn off the DLP frame synchronization.

In this experiment, the frame rate of the video is 25 fps and the frame rate of the camera is 100 fps; thus, at the same work station, 4 frames (K = 4) of deformed patterns are captured. The image acquisition is real time, while the data processing is post-processing and includes super-resolution reconstruction, obtaining the equivalent deformed patterns and PMP 3D reconstruction. Because the iterative solution of the super-resolution reconstruction takes some time, and the iteration time is related to the hardware and the data processing mode, in the future we will use a better computer or adopt graphics processing unit (GPU) acceleration to speed up data processing.
The proposed method is compared with the averaging method, and the experimental data are shown in Figure 4. Figure 4a shows the measured object, a "face" model; in this experiment, the speed of the object is 0.2 m/s. Figure 4b shows the set of low-resolution deformed patterns captured at the first work station, I_11, I_12, I_13, I_14, with a pixel size of 256 × 256. The reconstructed deformed patterns are shown in Figure 5. Figure 5a is I_11, and it is regarded as the reference frame. The process of pixel matching is illustrated in Figure 5a,b: the points marked in Figure 5a are the selected feature points of I_11, and the points marked in Figure 5b are the matching points of I_14.
The difference between the two sets of points represents the movement of the "face". The estimated motion matrix, blur matrix and down-sampling matrix are then used for the subsequent iterations. In this experiment, the super-resolution factor q is 2 and the maximum number of iterations m is 200. Figure 5c,d show the reconstructed results of the averaging method and MAP, respectively, with a pixel size of 512 × 512. By comparison, the deformed pattern reconstructed by MAP is clearer, with no large motion blur. At the same time, compared with the original low-resolution deformed pattern, the pattern reconstructed by MAP suppresses noise well. Figure 5e,f are enlargements of the rectangular areas of Figure 5c,d, respectively; compared with Figure 5e, Figure 5f has a clearer edge, which further indicates that noise is reduced effectively by MAP.
In the same way, high-resolution deformed patterns of the other four work stations can also be obtained, and the reconstructed results are shown in Figure 6. Figure 6a shows the five frames of high-resolution deformed patterns reconstructed by the averaging method, and Figure 6c shows the five high-resolution deformed patterns reconstructed by the proposed method, namely I_1, I_2, I_3, I_4, I_5. In Figure 6a,c, the marked dotted line shows that the position of the object differs from frame to frame. We adopt optical flow [24] to obtain equivalent deformed patterns, and the results are shown in Figure 6b,d; their pixel size is 458 × 512, and the position of the object is the same in every frame. Figure 6e,f show the experimental results of PMP 3D reconstruction using Figure 6b,d, respectively. From Figure 6e,f, it can be seen that the result reconstructed by the proposed method is better than that of the averaging method, and the phase information is more complete. To better analyze the measuring results of the proposed method, the measurement by eight-step PMP [25] of the same but stationary object is taken as the quasi truth value.
Figure 7b is an enlargement of the forehead (rectangular area) of Figure 7a. The dot-and-dash line is the result of the eight-step PMP, the dotted line is the result of the averaging method, and the solid line is the result of the proposed method. As can be seen from Figure 7a, the reconstructed result of the proposed method is closer to that of the eight-step PMP and better than that of the averaging method. The details in Figure 7b show that the results of the proposed method are also very close to those of the eight-step PMP. The reconstructed result of the proposed method is better than that of the averaging method, both overall and in detail. The experimental results indicate that the proposed method can improve the resolution while preserving the details of the object.
In order to further verify the applicability of the proposed method, a more complex "snail" model was measured, with an object speed of 0.5 m/s, a super-resolution factor q = 2 and a maximum number of iterations m = 200. The experimental data and the results of the comparison experiment are shown in Figure 8. Figure 8a-c show one frame of the high-resolution deformed patterns obtained by the averaging method, the proposed method and eight-step PMP, respectively. Figure 8d-f show the corresponding wrapped phases, obtained from all the equivalent deformed patterns; after phase unwrapping and height mapping, the reconstructed results are shown in Figure 8g-i. The motion blur in Figure 8b is less than that in Figure 8a; the wrapped phase in Figure 8e is clearer than that in Figure 8d and closer to that in Figure 8f. Comparing Figure 8g-i, the height reconstruction by the proposed method is obviously better than that by the averaging method and closer to that by eight-step PMP.
To further analyze the details, Figure 9 shows a cross-sectional comparison of the three methods; the dot-and-dash line is the result of the eight-step PMP, the dotted line is the result of the averaging method, and the solid line is the result of the proposed method. It can again be seen that the reconstructed result of the proposed method is better than that of the averaging method, both overall and in detail, which means that the proposed method has higher precision than the averaging method. The experimental results shown in Figures 8 and 9 prove that the proposed method is also effective for a more complex object. To quantitatively analyze the errors of the proposed method and the averaging method, we measured planes of different heights. The heights of the planes are known, measured with a metrological grating, and are taken as the ground-truth values.
The heights are 5 mm, 10 mm and 15 mm. Using both the proposed method and the averaging method, we performed an online 3D measurement of each plane of known height.
Then we use the root mean squared error (RMSE) to evaluate the errors. The measurement results are shown in Table 1, where RMSE1 and RMSE2 are the RMSEs of the averaging method and the proposed method, respectively. For example, in the 5 mm experiment, the RMSE of the averaging method is 0.358 mm, while that of the proposed method is 0.091 mm. The results prove that the proposed method has higher precision and higher reliability than the averaging method.
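The RMSE figures in Table 1 compare each reconstructed height map against the known plane height. The computation itself is straightforward; a minimal sketch follows, where the array shape and noise level are illustrative values, not the Table 1 data:

```python
import numpy as np

def plane_rmse(height_map, true_height):
    """Root mean squared error of a reconstructed plane of known height (mm)."""
    return float(np.sqrt(np.mean((height_map - true_height) ** 2)))

# Illustrative check on a synthetic noisy 5 mm plane (made-up values).
rng = np.random.default_rng(0)
plane = 5.0 + 0.1 * rng.standard_normal((64, 64))
print(round(plane_rmse(plane, 5.0), 3))  # close to the 0.1 mm noise level
```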

Conclusions
In the online 3D measurement of a fast-moving object, the frame rate of the camera can be increased to capture the deformed patterns of the fast-moving object, but the resolution of the camera is sacrificed. In this paper, an online phase measurement profilometry method for a fast-moving object was proposed, and the experimental results prove that the proposed method can not only improve the resolution of the deformed patterns but also restore most of the details of the object. The proposed method can perform online 3D measurement of an object moving at a speed of 0.5 m/s, and we believe it has a good application prospect in high-precision and fast online 3D measurement. However, the proposed method is mainly limited by the frame rate of the camera; with a camera of a higher frame rate, a higher object speed could be accommodated. In future work, we may adopt a super-resolution image reconstruction method based on deep learning, whose trained model can account for more degradation factors.

Conflicts of Interest:
The authors declare no conflict of interest.