Article

A Flexible Fringe Projection Vision System with Extended Mathematical Model for Accurate Three-Dimensional Measurement

School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, 800 Dongchuan Road Minhang District, Shanghai 200240, China
* Author to whom correspondence should be addressed.
Sensors 2016, 16(5), 612; https://doi.org/10.3390/s16050612
Submission received: 16 February 2016 / Revised: 22 April 2016 / Accepted: 23 April 2016 / Published: 28 April 2016
(This article belongs to the Section Physical Sensors)

Abstract

In order to acquire an accurate three-dimensional (3D) measurement, the traditional fringe projection technique applies complex and laborious procedures to compensate for the errors that exist in the vision system. However, the error sources in the vision system are very complex, including lens distortion, lens defocus, and fringe pattern nonsinusoidality. Some errors cannot even be explained or rendered with clear expressions and are therefore difficult to compensate for directly. In this paper, an approach is proposed that avoids the complex and laborious compensation procedures for these error sources but still delivers accurate 3D measurement. It is realized by a mathematical model extension technique. The parameters of the extended mathematical model for the phase to 3D coordinates transformation are derived using a least-squares parameter estimation algorithm. In addition, a phase-coding method based on frequency analysis is proposed for absolute phase map retrieval for spatially isolated objects. The results demonstrate the validity and accuracy of the proposed flexible fringe projection vision system on spatially continuous and discontinuous objects for 3D measurement.

1. Introduction

Industrial safety inspection has recently received considerable attention in both the academic community and technology companies. Accurate three-dimensional (3D) measurement provides important information for monitoring workplaces for industrial safety. Numerous techniques for 3D measurement have been studied, including stereo vision, time-of-flight, and structured light techniques. Among these, phase-shifting digital fringe projection is widely recognized and studied owing to its high resolution, high speed, and noncontact nature [1,2]. In a fringe projection vision system, the projector projects light patterns with a sinusoidally varying intensity onto the detected objects, the patterns are deformed by the objects' physical profiles, and a camera captures the deformed pattern images. The absolute phase map of the detected objects is derived by phase unwrapping of the captured deformed fringe patterns. The 3D information is then uniquely extracted on the basis of the mapping relationship between the image coordinates and the corresponding absolute phase values. Because the digital projector has a large working range, the fringe projection technique can be used for large-scene reconstruction for industrial safety inspection.
In contrast to the polynomial fitting method [3,4] and the least-squares method [5,6,7], the phase-to-depth transformation based on the geometrical modeling method [8,9,10,11,12,13] has been widely studied and used in the phase-shifting structured light technique owing to several advantages. First, no additional gauge block or precise linear z stage is needed for calibration, which simplifies the calibration process and offers more flexibility and cost savings. Second, there is no restriction on the camera's and projector's relative alignment regarding parallelism and perpendicularity. Third, it has a clear and explanatory mathematical expression for the phase to 3D coordinates transformation, since it is derived from the triangulation relationship among the camera, the projector, and the detected object.
Although it has the advantages mentioned above, the given mathematical expression does not offer sufficient accuracy, because the adopted simplified geometrical model ignores the lens distortion and lens defocus of the fringe projection system. In addition, other error sources in the vision system also influence the accuracy of the 3D measurement. As a result, a large amount of work has been devoted to improving the measurement accuracy by considering lens distortion [5,14], gamma nonlinearity for phase error elimination [15,16], high-order harmonics for phase extraction [17], ambient light for phase error compensation [18], etc.
The extended mathematical model proposed in this paper eliminates the uncertainty in the calibration parameters and the uncertainty in the phase extraction. In addition, it incurs no extra computational burden during the normal computation of the phase to depth transformation; it avoids time-consuming iterative operations and achieves higher precision.
Experiments are carried out for a single object and for multiple spatially discontinuous objects with the proposed flexible fringe projection vision system using the extended mathematical model. For spatially isolated objects, the fringe orders are ambiguous during phase unwrapping, which makes it difficult to retrieve the absolute phase directly with the three-step phase-shifting technique. As a result, the depth difference between spatially isolated surfaces is indiscernible. Therefore, in order to obtain an accurate 3D measurement of multiple discontinuous objects, several approaches have been developed to derive an accurate absolute phase map. The gray-coding plus phase-shifting technique is one of the better methods for obtaining the fringe order [19]; however, several more patterns are required to generate the appropriate number of codewords, which slows down the computation. In addition, Su [20] generated sinusoidal fringe patterns by giving every period a certain color, and the fringe order was provided by the color information of the fringe patterns; however, the phase accuracy can be influenced by poor color response and color crosstalk. Wang et al. [21] presented a phase-coding method in which, instead of encoding the codeword into binary intensity images, the codeword is embedded into the phase range of phase-shifted fringe images. This technique is less sensitive to surface contrast, ambient light, and camera noise, but three more frames are necessary. In this research, in order to realize the accurate 3D measurement of multiple discontinuous objects with the abovementioned extended mathematical model, we propose a new phase-coding method that obtains the absolute phase map by adding only one additional fringe pattern.
The structure of this paper is as follows. Section 2 reviews the principles for obtaining the 3D coordinates from the absolute phase. Section 3 introduces the mathematical model extension with least-squares parameter estimation. Section 4 presents the realization of the flexible fringe projection vision system for the accurate 3D measurement of a single continuous object by the extended mathematical model. Section 5 introduces the new phase-coding method for deriving an accurate absolute phase map for spatially isolated objects. Finally, the conclusions and the plans for future work are summarized.

2. Principle on Absolute Phase to 3D Coordinates Transformation

The geometrical model of the proposed fringe projection measurement system is shown in Figure 1. The camera and the projector are both described by a pinhole model. The camera imaging plane and the projection plane are arbitrarily arranged. The reference plane $OXY$ is an imaginary plane that is parallel to the projection plane. The projector coordinate system is denoted as $O_pX_pY_pZ_p$, the camera coordinate system as $O_cX_cY_cZ_c$, and the imaginary reference plane coordinate system as $OXYZ$. $P$ represents an arbitrary point on the detected object. $(x, y, z)$, $(x_c, y_c, z_c)$, and $(x_p, y_p, z_p)$ denote the imaginary reference coordinates, the camera coordinates, and the projector coordinates of the point $P$, respectively. The imaging point of $P$ is indicated as point $C$. Point $D$ indicates the fringe point that is projected onto point $P$ in space. Points $A$ and $B$ denote the lens centers of the camera and the projector, respectively. Line $\overline{ED}$ is one of the sinusoidal fringes and is parallel to the $x_p$-axis. Line $\overline{BG}$ is perpendicular to line $\overline{ED}$. Line $\overline{PP'}$ is parallel to the $Z$-axis and crosses the imaginary reference plane at point $P'$. $\overline{PP_1}$ is the extension of ray $\overline{DP}$ and crosses the imaginary reference plane at point $P_1$. $\overline{P'P_0}$ and $\overline{P_1P_1'}$ are parallel to the $X$-axis and cross the $Y$-axis at points $P_0$ and $P_1'$, respectively. The mathematical description of the phase to 3D coordinates transformation [22] is as follows:
$$z_c = \frac{K\,(r_{21}^{-1} t_x + r_{22}^{-1} t_y + r_{23}^{-1} t_z) - (r_{31}^{-1} t_x + r_{32}^{-1} t_y + r_{33}^{-1} t_z)}{K\,(r_{21}^{-1} D_1 + r_{22}^{-1} D_2 + r_{23}^{-1}) - (r_{31}^{-1} D_1 + r_{32}^{-1} D_2 + r_{33}^{-1})} \qquad (1)$$
where $K = \dfrac{2\pi f^p}{(\varphi - \varphi_0)\,\lambda_0}$, $D_1 = \dfrac{u^c - u_0^c}{f_x^c}$, and $D_2 = \dfrac{v^c - v_0^c}{f_y^c}$. Furthermore, the coordinates $(x_c, y_c)$ at a given pixel $(u^c, v^c)$ are obtained from the calculated depth $z_c$ as follows:
$$x_c = D_1 z_c \qquad (2)$$
$$y_c = D_2 z_c \qquad (3)$$
By solving Equations (1)–(3), the $x_c$, $y_c$, and $z_c$ coordinates of each point of the detected object in space are obtained from the absolute phase $\varphi$.
$\varphi_0$ is the absolute phase of the projection center, Point $B$. $(u_0^c, v_0^c)$ is the principal point of the camera image plane. $f_x^c$ and $f_y^c$ are the scale factors of the image plane along the $x_c$ and $y_c$ axes. $(u_0^p, v_0^p)$ is the principal point of the projector plane. $f^p$ is the scale factor of the projector plane along the x and y axes. $R$ and $T$ are the rotation and translation matrices from the projector coordinate system to the camera coordinate system. They are expressed as:
$$R = \begin{bmatrix} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \\ r_{31} & r_{32} & r_{33} \end{bmatrix}, \qquad T = \begin{bmatrix} t_x \\ t_y \\ t_z \end{bmatrix}$$
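To make the transformation concrete, the following minimal Python sketch evaluates Equations (1)–(3) for one pixel, assuming the calibration quantities ($R$, $T$, the focal lengths and principal point, the fringe pitch $\lambda_0$, and $\varphi_0$) are already known; the function and variable names are illustrative only, and the elements $r_{ij}^{-1}$ are interpreted here as the entries of $R^{-1}$, which is an assumption about the notation.

```python
import numpy as np

def phase_to_xyz(phi, u, v, R, T, fxc, fyc, u0c, v0c, fp, phi0, lambda0):
    """Sketch of Equations (1)-(3): absolute phase at pixel (u, v) -> camera coordinates.

    R is the 3x3 rotation and T the length-3 translation from the projector
    frame to the camera frame; r_ij^{-1} below are taken as the entries of R^{-1}.
    """
    Rinv = np.linalg.inv(R)
    K = 2.0 * np.pi * fp / ((phi - phi0) * lambda0)
    D1 = (u - u0c) / fxc
    D2 = (v - v0c) / fyc
    d = np.array([D1, D2, 1.0])

    num = K * (Rinv[1] @ T) - (Rinv[2] @ T)   # numerator of Equation (1)
    den = K * (Rinv[1] @ d) - (Rinv[2] @ d)   # denominator of Equation (1)
    zc = num / den
    return D1 * zc, D2 * zc, zc               # Equations (2) and (3)
```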

3. Mathematical Model Extension with Least-Squares Parameter Estimation

3.1. Error Analysis

Before working on the mathematical model extension, we first discuss the error sources existing in the fringe projection system. The errors can be considered as a combination of systematic and random errors. A detailed description of the error sources is illustrated in Figure 2.
As for random errors, such as noise and fluctuations in the environmental light, filtering and multiple sampling can be applied to suppress them as much as possible. The systematic errors, as observed from Equation (1), include the following:
(1) The uncertainty in the computational model. The simplified geometrical model $f(\cdot)$, which does not consider lens distortion and lens defocus, results in a systematic error. $\sigma_{f(\cdot)}$ denotes the standard deviation (std) of the simplified model.
(2) The uncertainty in the system calibration parameters. $\sigma_{f_x^c}$, $\sigma_{f_y^c}$, $\sigma_{u_0^c}$, and $\sigma_{v_0^c}$ denote the std of the estimated camera parameters $f_x^c$, $f_y^c$, $u_0^c$, and $v_0^c$, respectively; $\sigma_{f^p}$ denotes the std of the projector focal length $f^p$; $\sigma_R$ denotes the std of the rotation matrix $R$ between the camera and the projector; and $\sigma_T$ denotes the std of the translation matrix $T$ between the camera and the projector.
(3) The uncertainty in the phase map. The nonsinusoidality of the fringe pattern results in a phase map error. $\sigma_\varphi$ denotes the std of the absolute phase map $\varphi$; $\sigma_{\varphi_0}$ denotes the std of the absolute phase of the projection center $\varphi_0$.
The overall fringe projection system error $\sigma_z$ accumulates all of the abovementioned errors.

3.2. Mathematical Model Extension

Considering the complex error sources introduced above, it is difficult to compensate for all of them with a single processing method. In order to eliminate the uncertainty and improve the 3D measurement accuracy with less computational time and complexity, the mathematical model of the phase to 3D coordinates transformation is extended on the basis of the mathematical expression and the parameter uncertainties introduced in Section 3.1; several parameters are added that describe the relationship between the phase and the absolute 3D coordinates more accurately. The extended mathematical model is expressed in Equations (4)–(7). $z_i^m$ is the depth value measured with the extended mathematical expression and is given as follows:
$$z_i^m = \frac{K\,(r_{21}^{-1} t_x + r_{22}^{-1} t_y + r_{23}^{-1} t_z + C_1) - (r_{31}^{-1} t_x + r_{32}^{-1} t_y + r_{33}^{-1} t_z + C_2)}{K\,(r_{21}^{-1} D_1 + r_{22}^{-1} D_2 + r_{23}^{-1} + C_3) - (r_{31}^{-1} D_1 + r_{32}^{-1} D_2 + r_{33}^{-1} + C_4)} + K' \qquad (4)$$
where i is the ordinal number of each valid point.
$$K = \frac{2\pi\,(f^p + K_{f_p})}{\big((\varphi_u)_i^m - \varphi_0 + K_\varphi\big)\,\lambda_0} \qquad (5)$$
$$D_1 = \frac{(u^c)_i^m - u_0^c + K_{u_0}}{f_x^c + K_{f_c}} \qquad (6)$$
$$D_2 = \frac{(v^c)_i^m - v_0^c + K_{v_0}}{f_y^c + K_{f_c}} \qquad (7)$$
where $C_1$, $C_2$, $C_3$, $C_4$, $K_{f_p}$, $K_\varphi$, $K_{u_0}$, $K_{v_0}$, $K_{f_c}$, and $K'$ are the parameters to be determined. $C_1$ and $C_2$ are related to the uncertainty of the system calibration parameters $R$ and $T$. $K_{f_p}$ and $K_{f_c}$ are related to the uncertainty of the calibrated focal lengths of the projector and the camera, respectively. $C_3$ and $C_4$ are related to the uncertainty of the rotation matrix $R$. $K_\varphi$ is related to the phase unwrapping process. $K_{u_0}$ and $K_{v_0}$ are related to the uncertainty of the calibrated principal point of the camera. $K'$ is related to the computational model of the phase to depth transformation.
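For illustration, a sketch of the extended model of Equations (4)–(7) is given below; the dictionary keys, the placement of the correction terms, and the use of $R^{-1}$ follow the reconstruction above and should be read as assumptions rather than as the authors' exact implementation.

```python
import numpy as np

def extended_phase_to_depth(phi, u, v, calib, p):
    """Sketch of Equations (4)-(7): extended phase-to-depth model for one point.

    calib : nominal calibration values (keys: R, T, fxc, fyc, u0c, v0c, fp, phi0, lambda0)
    p     : the ten correction parameters (keys: C1..C4, Kfp, Kphi, Ku0, Kv0, Kfc, Kprime)
    """
    Rinv = np.linalg.inv(calib["R"])
    T = calib["T"]

    # Equation (5): corrected fringe term
    K = 2.0 * np.pi * (calib["fp"] + p["Kfp"]) / (
        (phi - calib["phi0"] + p["Kphi"]) * calib["lambda0"])

    # Equations (6) and (7): corrected normalized pixel coordinates
    D1 = (u - calib["u0c"] + p["Ku0"]) / (calib["fxc"] + p["Kfc"])
    D2 = (v - calib["v0c"] + p["Kv0"]) / (calib["fyc"] + p["Kfc"])

    d = np.array([D1, D2, 1.0])
    num = K * (Rinv[1] @ T + p["C1"]) - (Rinv[2] @ T + p["C2"])
    den = K * (Rinv[1] @ d + p["C3"]) - (Rinv[2] @ d + p["C4"])
    return num / den + p["Kprime"]            # Equation (4)
```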

4. Realization and Experiments with Single Continuous Objects

4.1. Experimental Setup

The experimental setup is shown in Figure 3. The projection unit (DLP LightCrafter, Texas Instruments) projects light patterns onto the target object during measurements. The smart camera (VC6210nano, Vision Components GmbH) is responsible for capturing the deformed images and has an image resolution of 752 × 480 pixels. For the projection unit, the physical resolution of the DLP3000 digital micromirror device (DMD) is 608 × 684 pixels in a diamond array. The DLP3000 DMD is a digitally controlled micro-opto-electromechanical system (MOEMS) spatial light modulator (SLM), which modulates the amplitude and direction of the incoming light. Each individual micromirror is 7.6 μm square. For a working distance of approximately 600 mm, the width of the projected image is 360 mm; this corresponds to a horizontal resolution of 0.414 mm per pixel, assuming “perfect” optics.

4.2. System Calibration

In this study, the plane-based system calibration method is applied to obtain the intrinsic parameters of the camera and projector and their relative positions [23]. Figure 4 shows the calibration patterns. Figure 4a shows the checkerboard pattern for the camera calibration. Figure 4b shows the projected checkerboard pattern for the projector calibration, which covers the same field of view as the camera calibration. In addition, three vertical sinusoidal fringe patterns are projected by the projector and captured by the camera for each calibration position. One example pattern is shown in Figure 4c. The reason for doing this is to obtain the measured depth map for the same calibration position by the three-step phase-shifting technique proposed in this paper.
The obtained system calibration is based on the intrinsic parameters of the camera calibration. Therefore, the camera calibration plays an important role in the whole fringe projection 3D measurement. The camera calibration error will be added to the system calibration for the plane-based calibration method. Fortunately, the camera calibration technique is presently very mature and robust. In order to acquire precise camera calibration results, the nonlinear parameter optimization algorithm is adopted by taking full advantage of the geometry of the calibration plane in this paper [24,25,26]. Figure 5 shows the camera calibration reprojection error and the extrinsic parameters for the abovementioned experimental setup. The std of the reprojection error is (0.09870, 0.09053) pixels, and it is equivalent to 0.03 mm for an image resolution of 752 × 480 pixels.
From Figure 5a, the worst reprojection error for some feature points is up to 1 pixel, which corresponds to approximately 0.3 mm; this is not acceptable. Therefore, for the feature points whose 3D positions serve as known geometrical knowledge, we keep only points with a reprojection error within 0.25 pixels, equivalent to 0.07 mm in the worst case. On the one hand, this still provides enough fitting data to obtain the extended mathematical model parameters; on the other hand, this is an acceptable error for our fringe projection vision system, since the chosen projector has a resolution of 0.414 mm per pixel. The std of the reprojection error for the projector is (0.77058, 0.48506) pixels, which is equivalent to 0.319 mm and is much higher than the 0.07 mm error of the reference coordinates. The 3D information of the calibration plane can therefore be regarded as known geometrical knowledge. The relative positions of the calibration planes in the camera coordinate system are shown in Figure 5b.
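As an illustration of this point-selection step, the sketch below computes per-point reprojection errors with OpenCV after the plane-based camera calibration and keeps only the feature points within 0.25 pixels; the data-loading part (objpoints, imgpoints) is assumed, and only the thresholding mirrors the procedure described above.

```python
import cv2
import numpy as np

# objpoints: list of (N, 3) arrays of checkerboard corners on each calibration plane
# imgpoints: list of (N, 1, 2) arrays of the corners detected in the camera images
image_size = (752, 480)   # camera resolution used in this setup
ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(
    objpoints, imgpoints, image_size, None, None)

kept_obj, kept_img = [], []
for obj, img, rvec, tvec in zip(objpoints, imgpoints, rvecs, tvecs):
    proj, _ = cv2.projectPoints(obj, rvec, tvec, mtx, dist)
    err = np.linalg.norm(proj.reshape(-1, 2) - img.reshape(-1, 2), axis=1)
    mask = err < 0.25                 # keep feature points within 0.25 pixels
    kept_obj.append(obj[mask])
    kept_img.append(img.reshape(-1, 2)[mask])
```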

4.3. Derivation of the Extended Mathematical Model Parameters

To derive the extended mathematical model parameters introduced in Section 3.2, we first define an error function of the form $e(u, v)$. In the sense of least squares, the parameters can be estimated by minimizing the sum of squares of $e_i(u^c, v^c)$:
$$\min \sum_{i=1}^{N} e_i^2 = \min \sum_{i=1}^{N} \left( z_i^r - z_i^m \right)^2 \qquad (8)$$
where $i$ is the ordinal number of each valid point, and $N$ is the total number of valid points used in the calculation. $z_i^r$ is the reference depth of the $i$th valid point, which is known in advance from the basic geometric information of the calibration target at the different positions obtained from the camera calibration, as introduced in Section 4.2. $z_i^m$ is the measured depth of the $i$th valid point, obtained by applying the extended mathematical model in Equation (4) to the same calibration plane. By solving Equations (4)–(8), the least-squares parameter estimation is implemented. The obtained parameters are as follows:
$C_1 = 0.048$, $C_2 = 0.083$, $C_3 = 0$, $C_4 = 0$, $K_{f_p} = 0$, $K_\varphi = 2.7$, $K_{u_0} = 17.9$, $K_{v_0} = 0$, $K_{f_c} = 6.975$, and $K' = 0.2$.
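A minimal sketch of this estimation step is shown below; it reuses the extended_phase_to_depth sketch from Section 3.2 and scipy's least_squares as one possible solver, with the per-point arrays phi, u, v, z_ref and the calib dictionary assumed to be prepared from the calibration data.

```python
import numpy as np
from scipy.optimize import least_squares

PARAM_NAMES = ["C1", "C2", "C3", "C4", "Kfp", "Kphi", "Ku0", "Kv0", "Kfc", "Kprime"]

def residuals(x, phi, u, v, z_ref, calib):
    """Residuals e_i = z_i^r - z_i^m of Equation (8) for the parameter vector x."""
    p = dict(zip(PARAM_NAMES, x))
    z_m = np.array([extended_phase_to_depth(ph, uu, vv, calib, p)
                    for ph, uu, vv in zip(phi, u, v)])
    return z_ref - z_m

x0 = np.zeros(len(PARAM_NAMES))    # start from the unextended model (all corrections zero)
result = least_squares(residuals, x0, args=(phi, u, v, z_ref, calib))
params = dict(zip(PARAM_NAMES, result.x))
```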

4.4. Experimental Results

In order to verify the extended mathematical model for the phase to 3D coordinates transformation introduced in this paper, we measured the cylinder and hand plaster models shown in Figure 6. First, we measured the cylinder plaster model in Figure 6a with the fringe projection vision system. The point clouds of the measured surface displayed in Matlab are shown in Figure 7a. It should be mentioned that the direction of $X_c$ is defined opposite to the placement of the detected object; in Figure 7a, $X_c'$ is defined opposite to $X_c$ so as to display the measured surface upright. The radius of the cylinder is 52.5 mm. The measured radius is 51.9 mm when Equations (1)–(3) are applied, as shown in Figure 7b. The error is 0.6 mm without the mathematical model extension technique.
In order to obtain an accurate 3D measurement, the mathematical model extension technique with least-squares parameter estimation, as introduced in Section 4.3, is applied. After applying Equations (4)–(7) with the abovementioned parameters, the surface of the cylinder is obtained. The terminal points of the cross section of the cylinder surface are A: (52.39, 31.27, 674.7) and B: (51.78, 136.2, 680). The line connecting Points A and B forms the diameter of the cross section of the cylinder. The calculated radius is 52.53 mm, so the error is 0.03 mm when the extended mathematical model is applied. In addition, the portion of the cylinder with $X_c$ from about 20 mm to 60 mm was truncated and is shown in Figure 8. A sequence of radii was obtained for the truncated cylinder; the mean value of the measured radius is 52.25 mm, and the root mean square error (RMSE) of the measured radius is 0.033 mm. It can be clearly seen that the measurement accuracy is significantly improved when the mathematical model extension technique is used. The surfaces measured on the cylinder and the hand plaster models, displayed with Meshlab, are shown in Figure 9. The color changes gradually on the basis of the coordinate $X_c$ for the cylinder and $Z_c$ for the hand plaster model.
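As a quick arithmetic check of this value, half the Euclidean distance between the two reported terminal points indeed reproduces the stated radius (treating the printed coordinates as millimetre values):

```python
import numpy as np

A = np.array([52.39, 31.27, 674.7])    # terminal points of the cross section (mm),
B = np.array([51.78, 136.2, 680.0])    # coordinates as printed in the text
radius = np.linalg.norm(A - B) / 2.0   # ~52.53 mm, matching the reported radius
```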

4.5. Performance Comparison with Kinect

Similar to the fringe projection vision system described in this paper, Kinect is another kind of structured light device, which uses a speckle pattern to realize 3D measurement. It is a commercial product mainly used for gesture control and home entertainment. The depth measurement range of Kinect is 500–3500 mm, and it has a resolution of 640 × 480 pixels. The fringe projection vision system described in this paper has a working range of 364–2169 mm and a projector resolution of 608 × 684 pixels. They therefore have very similar measurement specifications, so the detected plaster models are also measured with Kinect in this research work. The detected targets are likewise placed at a working distance of approximately 650 mm, similar to that of the proposed fringe projection vision system. The measured surfaces are shown in Figure 10.
From Figure 10, we can see that the flatness of the measured surface is much worse than that of the surface measured with the proposed fringe projection vision system in Figure 9. The advantage of Kinect is its special optical design, which allows the device to obtain a much larger field of view than the fringe projection vision system proposed in this paper.

5. Experiments with Multiple Discontinuous Objects

In the phase-shifted fringe projection technique used in this study, three sinusoidal fringe patterns with phase shifts of $-2\pi/3$, 0, and $2\pi/3$ are projected onto the object surface. The corresponding intensity distributions are as follows:
$$\begin{aligned} I_1(x, y) &= I'(x, y) + I''(x, y)\cos[\varphi(x, y) - 2\pi/3] \\ I_2(x, y) &= I'(x, y) + I''(x, y)\cos[\varphi(x, y)] \\ I_3(x, y) &= I'(x, y) + I''(x, y)\cos[\varphi(x, y) + 2\pi/3] \end{aligned} \qquad (9)$$
where $I'(x, y)$ is the average intensity, $I''(x, y)$ is the intensity modulation, and $\varphi(x, y)$ is the phase to be solved. By solving the above equations, the phase at each point $(x, y)$ of the image plane is obtained as follows:
$$\varphi(x, y) = \tan^{-1}\left[\frac{\sqrt{3}\,(I_1 - I_3)}{2 I_2 - I_1 - I_3}\right] \qquad (10)$$
By solving Equation (10), the obtained phase is a relative value between $-\pi$ and $+\pi$. A spatial phase-unwrapping algorithm can be used to remove the $2\pi$ discontinuities. The obtained absolute phase is
$$\Phi(x, y) = \varphi(x, y) + k(x, y) \cdot 2\pi \qquad (11)$$
where $k(x, y)$ is the fringe order.
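A compact sketch of Equations (10) and (11) is given below; numpy's four-quadrant arctangent yields the wrapped phase, and the row-wise numpy unwrapping shown in the comment stands in for whichever spatial phase-unwrapping algorithm is actually applied.

```python
import numpy as np

def wrapped_phase(I1, I2, I3):
    """Equation (10): wrapped phase in (-pi, pi] from the three phase-shifted images."""
    return np.arctan2(np.sqrt(3.0) * (I1 - I3), 2.0 * I2 - I1 - I3)

def absolute_phase(phi_wrapped, fringe_order):
    """Equation (11): add 2*pi times the fringe order k(x, y)."""
    return phi_wrapped + 2.0 * np.pi * fringe_order

# Example of a simple spatial unwrapping along each image row (illustrative only):
# phi_unwrapped = np.unwrap(wrapped_phase(I1, I2, I3), axis=1)
```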
Nevertheless, for spatially isolated objects, the fringe orders will be ambiguous. It is difficult to retrieve the absolute phase directly. As a result, the depth difference between spatially isolated surfaces is indiscernible. In order to reliably obtain the absolute phase map, a new phase-coding method is proposed that takes full advantage of the frequency characteristic of the sinusoidal fringe pattern.
Figure 11 shows the designed fringe pattern. It consists of codewords that give the fringe order information $k$. Figure 12 illustrates the relationship among the sinusoidal fringe pattern with a phase shift of zero, the wrapped phase with a relative value between $-\pi$ and $+\pi$, and the codewords of the designed pattern. It should be mentioned that the plotted wrapped phase is scaled by a factor of ten so that its relationship with the codewords of the designed pattern can be observed clearly. The period of the sinusoidal fringe pattern $p_s$ is set to 32 pixels in this study. $p_l$ is the regional period of the designed codewords. Each codeword consists of a sequence of sinusoidal fringes, and the codewords change from period to period according to the $2\pi$ phase-change period. $p_l$ equals 2, 3, 4, 5, 6, and 8 pixels, respectively, as shown in Figure 12. The designed pattern shown in Figure 11 is formed by repeating the codewords ($p_l = 2, 3, 4, 5, 6, 8$); a sketch of this construction is given below. In addition, during system calibration, the projector projects the designed pattern onto the calibration board, and the camera captures it. The captured pattern is stored as the template designed pattern and can be used to partition the detected area so as to avoid aliasing of the phase codewords. With the designed pattern shown in Figure 11, there are at most three partitions in practical applications.
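The exact stripe layout of Figure 11 is not reproduced here; the sketch below only shows one plausible construction under the stated parameters (a 32-pixel fringe period and codeword periods of 2, 3, 4, 5, 6, and 8 pixels repeated along the pattern), so the pattern it produces should be taken as an assumption, not as the authors' actual design.

```python
import numpy as np

def codeword_profile(width=608, ps=32, periods=(2, 3, 4, 5, 6, 8)):
    """One plausible 1-D codeword profile: every 32-pixel fringe period carries a
    small sinusoid whose period (2-8 pixels) encodes the fringe order."""
    x = np.arange(width)
    order = x // ps                                  # index of the fringe period
    pl = np.array(periods)[order % len(periods)]     # codeword period for each pixel
    return 0.5 + 0.5 * np.cos(2.0 * np.pi * (x % ps) / pl)

# Repeat the profile over all projector rows to form the single additional pattern
pattern = np.tile(codeword_profile(), (684, 1))
```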
From the plot of the codewords of the designed pattern, we know that the frequency component of the codewords changes from high to low. A frequency analysis of the sinusoidal fringes of the designed codewords was carried out for both the ideal projected patterns and the captured patterns. The results are listed in Table 1. It can be seen that the frequency components of the various codewords do not exhibit aliasing. The fringe order $k$ in Equation (11) can therefore be reliably obtained on the basis of a frequency analysis of the codewords.
We take one example to describe the framework for obtaining the absolute phase map for multiple discontinuous objects. The detected objects are one cylinder and one cone, which are spatially discontinuous. The designed pattern shown in Figure 11 is projected onto the detected objects, and the captured fringe pattern is shown in Figure 13a; the left object is the cylinder, and the right object is the cone. The absolute phase map retrieval for the spatially isolated objects is implemented in the following steps:
Step 1: Obtain the wrapped phase from the captured sinusoidal fringe patterns. Segment the whole image into I continuous regions on the basis of the branch-cut map of the wrapped phase. Set a feature point in each continuous region, such as Point A (434, 181) and Point B (444, 493) in Figure 13a. The chosen feature points are best located in the middle of a $2\pi$ phase period, as shown in Figure 13a.
Step 2: Adopt the conventional phase-unwrapping algorithm to unwrap the phase of each region and obtain the relative phase maps $\Phi_i^r(x, y)$ ($i = 1, \ldots, I$). In the example shown in Figure 13a, $\Phi_{cyl}^r(x, y)$ denotes the relative phase map of the cylinder, and $\Phi_{con}^r(x, y)$ denotes the relative phase map of the cone. Here we still take the cylinder as the object under study. The absolute phase map of the cone equals its relative phase map as follows:
$$\Phi_{con}^a(x, y) = \Phi_{con}^r(x, y) \qquad (12)$$
The absolute phase map of the cylinder is obtained as
$$\Phi_{cyl}^a(x, y) = \Phi_{cyl}^r(x, y) + k \cdot 2\pi \qquad (13)$$
where k is the fringe order difference between Points B and A.
Step 3: Obtain the fringe order difference from the phase codewords of Points A and B. We take Point A as an example to explain the retrieval of the fringe order. First, find the $2\pi$ phase period that includes Point A and locate the zero-crossing points on either side of Point A within this phase period, as shown in Figure 13b. Take a Fast Fourier Transform (FFT) of this phase period and derive the frequency component, as shown in Figure 13c. The obtained frequency component is about 54 Hz; from Table 1, the phase codeword is therefore p2. Similarly, for Point B, the codeword is p1. The difference in the fringe order $k$ between Points A and B is 5.
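A sketch of this codeword lookup is given below; it assumes the intensity samples of one $2\pi$ phase period have already been cut out between the two zero crossings and, for simplicity, matches the dominant spatial period directly against the designed codeword periods rather than against the frequency values of Table 1.

```python
import numpy as np

CODEWORD_PERIODS = {"p1": 2, "p2": 3, "p3": 4, "p4": 5, "p5": 6, "p6": 8}  # pixels

def identify_codeword(segment):
    """Estimate the dominant period of one 2*pi-period intensity segment via an FFT
    and return the nearest codeword (simplified stand-in for the Table 1 lookup)."""
    segment = np.asarray(segment, dtype=float)
    segment = segment - segment.mean()            # remove the DC component
    spectrum = np.abs(np.fft.rfft(segment))
    k = int(np.argmax(spectrum[1:])) + 1          # dominant non-DC frequency bin
    period = len(segment) / k                     # dominant period in samples
    return min(CODEWORD_PERIODS, key=lambda c: abs(CODEWORD_PERIODS[c] - period))
```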
On the basis of Equations (12) and (13), the absolute phase map of the detected objects is derived, as shown in Figure 14a. Furthermore, the gray-coding plus phase-shifting method was also applied to obtain the absolute phase map of the detected objects; the result is the same as that obtained by the proposed phase-coding method. By applying the abovementioned extended mathematical model, the measured surfaces displayed in Matlab are shown in Figure 14b, and the measured surfaces displayed in Meshlab are shown in Figure 14c. The color changes gradually on the basis of the coordinate $Z_c$. The calculated radius of the cylinder is 52.56 mm, so the error is 0.06 mm. The experimental results demonstrate the effectiveness of the proposed phase-coding method. However, the error is slightly larger than that of the single-object measurement. More effort will be made to improve the performance of the proposed fringe projection vision system on multiple spatially discontinuous objects.

6. Conclusions

In this study, a flexible fringe projection vision system is proposed that relaxes the restriction on the camera's and projector's relative alignment and simplifies the calibration process with more flexibility and cost savings. The proposed flexible fringe projection vision system is designed to realize accurate 3D measurement of large scenes for industrial safety inspection. The measured range can be flexibly adjusted on the basis of the measurement requirements. Accurate 3D measurement is obtained with an extended mathematical model that avoids the complex and laborious error compensation procedures for the various error sources.
In addition, the system accuracy is determined only by the point clouds derived with a mature camera calibration technique; it is not influenced by the numerous systematic error sources. Moreover, the system accuracy can be further improved in the following ways: (1) In this paper, a low-cost printed plane is used for the calibration; if a high-accuracy planar target is used, the calibration accuracy can be further improved, and the known geometrical information (the calibration reference plane) will provide a more accurate reference value for the subsequent nonlinear parameter estimation. (2) A higher-resolution camera can be applied; for the vision system in this work, the std of the reprojection error is 0.03 mm, and it could be reduced to 0.015 mm with a camera of twice the resolution. (3) A more advanced camera calibration algorithm with highly accurate feature point extraction can be used.
Finally, a new phase-coding method is proposed to obtain the absolute phase map of spatially isolated objects. Only one additional fringe pattern is required for the proposed phase-coding method. No gray-level or color information is used to unwrap the phase across depth discontinuities, because such information is sensitive to noise. Since the proposed method takes full advantage of the frequency characteristic of the sinusoidal fringe pattern, it is insensitive to noise and object texture. However, accurate retrieval of the fringe order depends on the FFT of the signal within one phase period; if the signal is too short, the accuracy of the fringe order retrieval will be affected. In our future work, this will be improved by combining the FFT with a correlation technique for the phase period signal.

Acknowledgments

This work was supported by the Shanghai High Technology Research Program (15JC1402500).

Author Contributions

Hui Zhao and Wei Tao initiated the research and gave suggestions for the study. Suzhi Xiao performed the experiments and analyzed the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Zhang, S. Recent progresses on real-time 3D shape measurement using digital fringe projection techniques. Opt. Lasers Eng. 2010, 48, 149–158.
2. Wang, Z.; Nguyen, D.A.; Barnes, J.C. Some practical considerations in fringe projection profilometry. Opt. Lasers Eng. 2010, 48, 218–225.
3. Villa, J.; Araiza, M.; Alaniz, D.; Ivanov, R.; Ortiz, M. Transformation of phase to (x,y,z)-coordinates for the calibration of a fringe projection profilometer. Opt. Lasers Eng. 2012, 50, 256–261.
4. Liu, H.; Su, W.-H.; Reichard, K.; Yin, S. Calibration-based phase-shifting projected fringe profilometry for accurate absolute 3D surface profile measurement. Opt. Commun. 2003, 216, 65–80.
5. Huang, L.; Chua, P.S.K.; Asundi, A. Least-squares calibration method for fringe projection profilometry considering camera lens distortion. Appl. Opt. 2010, 49, 1539–1548.
6. Guo, H.; He, H.; Yu, Y.; Chen, M. Least-squares calibration method for fringe projection profilometry. Opt. Eng. 2005, 44.
7. Du, H.; Wang, Z. Three-dimensional shape measurement with an arbitrarily arranged fringe projection profilometry system. Opt. Lett. 2007, 32, 2438–2440.
8. Martinez, A.; Rayas, J.A.; Puga, H.J.; Genovese, K. Iterative estimation of the topography measurement by fringe-projection method with divergent illumination by considering the pitch variation along the x and z directions. Opt. Lasers Eng. 2010, 48, 877–881.
9. Tian, A.; Jiang, Z.; Huang, Y. A flexible new three-dimensional measurement technique by projected fringe pattern. Opt. Laser Technol. 2006, 38, 585–589.
10. Maurel, A.; Cobelli, P.; Pagneux, V.; Petitjeans, P. Experimental and theoretical inspection of the phase-to-height relation in Fourier transform profilometry. Appl. Opt. 2009, 48, 380–392.
11. Wen, Y.; Li, S.; Cheng, H.; Su, X.; Zhang, Q. Universal calculation formula and calibration method in Fourier transform profilometry. Appl. Opt. 2010, 49, 6563–6569.
12. Zhang, S.; Huang, P.S. Novel method for structured light system calibration. Opt. Eng. 2006, 45.
13. Da, F.; Gai, S. Flexible three-dimensional measurement technique based on a digital light processing projector. Appl. Opt. 2008, 47, 377–385.
14. Feng, S.; Chen, Q.; Zuo, C.; Sun, J.; Yu, S.L. High-speed real-time 3-D coordinates measurement based on fringe projection profilometry considering camera lens distortion. Opt. Commun. 2014, 329, 44–56.
15. Pan, B.; Kemao, Q.; Huang, L.; Asundi, A. Phase error analysis and compensation for nonsinusoidal waveforms in phase-shifting digital fringe projection profilometry. Opt. Lett. 2009, 34, 416–418.
16. Yao, J.; Xiong, C.; Zhou, Y.; Miao, H.; Chen, J. Phase error elimination considering gamma nonlinearity, system vibration, and noise for fringe projection profilometry. Opt. Eng. 2014, 53.
17. Zhang, K.; Yao, J.; Chen, J.; Miao, H. Phase extraction algorithm considering high-order harmonics in fringe image processing. Appl. Opt. 2015, 54, 4989–4995.
18. Zhou, P.; Liu, X.; He, Y.; Zhu, T. Phase error analysis and compensation considering ambient light for phase measuring profilometry. Opt. Lasers Eng. 2014, 55, 99–104.
19. Sansoni, G.; Carocci, M.; Rodella, R. Three-dimensional vision based on a combination of gray-code and phase-shift light projection: Analysis and compensation of the systematic errors. Appl. Opt. 1999, 38, 6565–6573.
20. Su, W. Color-encoded fringe projection for 3D shape measurements. Opt. Express 2007, 15, 13167–13181.
21. Wang, Y.; Zhang, S. Novel phase-coding method for absolute phase retrieval. Opt. Lett. 2012, 37, 2067–2069.
22. Xiao, S.; Tao, W.; Zhao, H. An improved phase to absolute depth transformation method and depth-of-field extension. Opt. Int. J. Light Electron Opt. 2016, 127, 511–516.
23. Falcao, G.; Hurtos, N.; Massich, J. Plane-based calibration of a projector-camera system. VIBOT Master. 2008, 9, 1–12.
24. Zhang, Z. A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1330–1334.
25. Strobl, K.; Hirzinger, G. More accurate pinhole camera calibration with imperfect planar target. In Proceedings of the 2011 IEEE International Conference on Computer Vision Workshops (ICCV Workshops), Barcelona, Spain, 6–13 November 2011; pp. 1068–1075.
26. Huang, L.; Zhang, Q.; Asundi, A. Flexible camera calibration using not-measured imperfect target. Appl. Opt. 2013, 52, 6278–6286.
Figure 1. Geometrical model for the phase to depth transformation.
Figure 2. The classification of error sources.
Figure 3. Experimental setup of the fringe projection vision system: Fringe patterns are projected by the projector. The deformed fringe patterns are captured by the camera. The system calibration is implemented with the help of the calibration board.
Figure 4. Calibration fringe patterns: (a) checkerboard for the camera calibration; (b) projected checkerboard for the projector calibration; and (c) projected sinusoidal fringe pattern for the depth calculation.
Figure 5. Camera calibration results. (a) Reprojection error and (b) relative positions of the calibration planes based on the camera coordinate system.
Figure 6. Detected targets. (a) Cylinder and (b) hand plaster models.
Figure 7. 3D measurement by the fringe projection vision system. (a) 3D point clouds in Matlab and (b) curve of the cross section.
Figure 8. 3D measurement by the extended mathematical model.
Figure 9. 3D measurement by the fringe projection vision system. (a) Cylinder and (b) hand surfaces displayed with the Meshlab tool.
Figure 10. 3D measurement with Kinect. (a) Cylinder and (b) hand surfaces measured with the Meshlab tool.
Figure 11. Designed fringe pattern that gives the fringe order information.
Figure 12. Plot of the sinusoidal fringe pattern with a phase shift of zero that is colored in blue, the phase codeword of the designed fringe pattern that is colored in red, and the wrapped phase that is colored in cyan.
Figure 13. Absolute phase map retrieval for spatially isolated objects. (a) Captured designed fringe pattern for multiple discontinuous objects; (b) graph for finding the phase zero-crossing points that include the chosen feature point; and (c) frequency component obtained by an FFT.
Figure 14. 3D measurement by the fringe projection vision system. (a) Absolute phase map; (b) measured 3D surface for discontinuous objects in Matlab; and (c) measured 3D surface for discontinuous objects, shown in Meshlab tool.
Table 1. Frequency component of each phase codeword.
Codeword | Period (pixels) | Ideal Sinusoidal Fringe Pattern | Captured Sinusoidal Fringe Pattern
p1 | 2 | 80.5 | 82 ± 1
p2 | 3 | 53.2 | 54 ± 0.5
p3 | 4 | 40.5 | 41 ± 0.5
p4 | 5 | 32.6 | 33.5 ± 0.5
p5 | 6 | 27.3 | 28 ± 0.5
p6 | 8 | 20.5 | 21 ± 0.5
