Out-of-Focus Projector Calibration Method with Distortion Correction on the Projection Plane in the Structured Light Three-Dimensional Measurement System

The three-dimensional measurement system with a binary defocusing technique is widely applied in diverse fields. The measurement accuracy is mainly determined by the out-of-focus projector calibration accuracy. In this paper, a high-precision out-of-focus projector calibration method based on distortion correction on the projection plane and a nonlinear optimization algorithm is proposed. To this end, the paper experimentally presents the principle that the projector has noticeable distortions outside its focus plane. Based on this principle, the proposed method uses a high-order radial and tangential lens distortion representation on the projection plane to correct the calibration residuals caused by projection distortion. The final accurate parameters of the out-of-focus projector were obtained using a nonlinear optimization algorithm with good initial values, which were provided by coarsely calibrating the parameters of the out-of-focus projector on the focal and projection planes. Finally, the experimental results demonstrated that the proposed method can accurately calibrate an out-of-focus projector, regardless of the amount of defocusing.


Introduction
Optical three-dimensional (3D) shape measurement has been widely studied and applied due to its speed, accuracy, and flexibility. Examples of such applications include industrial inspection, reverse engineering, and medical diagnosis [1,2]. As shown in Figure 1, a typical structured light measurement system is composed of a camera and a projector. The projector projects a series of encoded fringe patterns onto the surface of the object, and the camera captures the distorted patterns caused by the depth variation of the surface. Finally, the 3D surface points are reconstructed based on triangulation, provided that the system parameters have been obtained through system calibration. Hence, one of the crucial aspects of this system is to accurately calibrate the camera and projector, as the measurement accuracy is ultimately determined by the calibration accuracy.
Camera calibration has been extensively studied, and a variety of camera calibration approaches have been proposed. Several traditional camera calibration methods exist, including direct linear transformation, the nonlinear camera calibration method, the two-step camera calibration method, self-calibration, and Zhang's method [3][4][5][6][7]. Moreover, advanced camera calibration methods based on these traditional methods have been proposed [8][9][10][11]. Qi et al. proposed a method that applies the stochastic parallel gradient descent (SPGD) algorithm to resolve the frequent-iteration and long-calibration-time deficiencies of the traditional two-step camera calibration method [8]. Kamel et al. used three more objective functions to speed up the convergence rate of the nonlinear optimization.
In the method proposed here, coarse calibration provides approximate parameters as initial values for the final calibration, which is based on a nonlinear optimization algorithm. To this end, two special planes of the out-of-focus projector, the focal plane and the projection plane, were given particular attention. In the coarse calibration process, the calibration plane was moved to the focal plane (focal plane 1) of the out-of-focus projector, and the projector was calibrated as an inverse camera using the pinhole camera model. The intrinsic and extrinsic parameter matrices on the focal plane were selected as the initial values of the final out-of-focus projector calibration. Secondly, we considered the lens distortion on the projection plane as an initial value of the final projector calibration. To calculate the lens distortion on the projection plane using the pinhole camera model, the defocused projector was adjusted so that it focused on the projection plane, and it was calibrated as a standard inverse camera. Finally, based on the re-projection mathematical model with distortion, the final accurate parameters of the out-of-focus projector were obtained with a nonlinear optimization algorithm.
The objective function was to minimize the sum of the re-projection errors of all the reference points on the projector image plane. In addition, the paper experimentally presents the principle that the projector has noticeable distortions outside its focus plane. Compared to the traditional calibration method, the experimental results demonstrated that our proposed method can accurately calibrate an out-of-focus projector regardless of the amount of defocusing.
This paper is organized as follows. Section 2 explains the basic principles used for the proposed calibration method. Section 3 presents the calibration principle and process. Section 4 shows the experimental results to verify the performance of our calibration method, and Section 5 summarizes this paper.

Camera Model
The well-known pinhole model is used to describe a camera with intrinsic and extrinsic parameters. The intrinsic parameters include the focal length, principal point, and pixel skew factor. The rotation matrix and translation vector, which define the relationship between a world coordinate system and the camera coordinate system, are the extrinsic parameters [7]. As shown in Figure 2, a 3D point in the world coordinate system $o_w\text{-}x_w y_w z_w$ can be represented by $P_w = \{x_w, y_w, z_w\}^T$, and the corresponding two-dimensional (2D) point in the camera coordinate system $o_c\text{-}x_c y_c z_c$ is $p_c = \{u_c, v_c\}^T$. The relationship between a 3D point $P_w$ and its imaging point $p_c$ can be described as follows:

$$s\,\tilde{p}_c = A_c [R_c, T_c] \tilde{P}_w$$

where $\tilde{p}_c = \{u_c, v_c, 1\}^T$ is the homogeneous coordinate of the point $p_c$ in the camera imaging coordinate system, $\tilde{P}_w = \{x_w, y_w, z_w, 1\}^T$ is the homogeneous coordinate of the point $P_w$ in the world coordinate system, $s$ is a scale factor, $[R_c, T_c]$ is the camera extrinsic matrix, $R_c$ denotes the rotation matrix, which is a $3 \times 3$ matrix, and $T_c$ is the translation vector. $A_c$ represents the intrinsic matrix, which can be described as follows:

$$A_c = \begin{bmatrix} f_x & \gamma & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{bmatrix}$$

where $f_x$ and $f_y$ are the focal lengths along the $u_c$ and $v_c$ axes of the image plane, respectively, and $\gamma$ is the skew factor of the $u_c$ and $v_c$ axes; for modern cameras, $\gamma = 0$. $(u_0, v_0)$ is the coordinate of the principal point in the camera imaging plane.
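The pinhole projection above can be sketched numerically. The intrinsic and extrinsic values below are hypothetical, chosen only to illustrate the equation $s\,\tilde{p}_c = A_c [R_c, T_c] \tilde{P}_w$:

```python
import numpy as np

# Hypothetical intrinsic matrix A_c: focal lengths f_x = f_y = 800 pixels,
# principal point (u0, v0) = (320, 240), skew factor = 0.
A_c = np.array([[800.0,   0.0, 320.0],
                [  0.0, 800.0, 240.0],
                [  0.0,   0.0,   1.0]])

# Hypothetical extrinsics: identity rotation, world origin 1000 mm in front of the camera.
R_c = np.eye(3)
T_c = np.array([[0.0], [0.0], [1000.0]])

def project(P_w):
    """Project a 3D world point: s * p~_c = A_c [R_c, T_c] P~_w."""
    P_w = np.asarray(P_w, dtype=float).reshape(3, 1)
    p = A_c @ (R_c @ P_w + T_c)     # homogeneous image coordinates, scale s = z_c
    return (p[:2] / p[2]).ravel()   # divide out the scale factor s

# A point on the optical axis projects to the principal point.
print(project([0.0, 0.0, 0.0]))    # -> [320. 240.]
```

With these numbers, moving the world point 100 mm along $x_w$ shifts the image point by $800 \cdot 100 / 1000 = 80$ pixels, as expected from the model.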
Sensors 2017, 17, 2963

Furthermore, the image coordinates in the above equations were considered without distortion. The lens distortion of the camera should be corrected to improve the calibration accuracy. Several models exist for lens distortion, such as the radial and tangential distortion model [30,31], the rational function distortion model [32,33], the division distortion model [31,34], and others [35]. In this paper, a typical radial and tangential distortion model was used due to its simplicity and sufficient accuracy, formulated as:

$$\begin{aligned} \hat{u}_c &= u_c \left(1 + k_1 r^2 + k_2 r^4\right) + 2 p_1 u_c v_c + p_2 \left(r^2 + 2 u_c^2\right) \\ \hat{v}_c &= v_c \left(1 + k_1 r^2 + k_2 r^4\right) + p_1 \left(r^2 + 2 v_c^2\right) + 2 p_2 u_c v_c \end{aligned}$$

where $(\hat{u}_c, \hat{v}_c)$ represents the imaging point on the imaging plane of the camera with radial and tangential correction, $(u_c, v_c)$ is the imaging point before correction, and $r = \sqrt{u_r^2 + v_r^2}$ is the distance between the imaging point $(u_c, v_c)$ and the principal point $(u_0, v_0)$, with $u_r = u_c - u_0$ and $v_r = v_c - v_0$. $k_1$ and $k_2$ are the radial distortion coefficients, and $p_1$ and $p_2$ are the tangential coefficients.
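The radial and tangential (Brown) model can be sketched as a short function; the coefficient values below are illustrative, not calibrated values:

```python
import numpy as np

def distort(u, v, k1, k2, p1, p2):
    """Apply the radial and tangential distortion model to a normalized
    image point (u, v), measured relative to the principal point."""
    r2 = u * u + v * v                      # r^2 = u_r^2 + v_r^2
    radial = 1.0 + k1 * r2 + k2 * r2 * r2   # radial factor (1 + k1 r^2 + k2 r^4)
    u_d = u * radial + 2.0 * p1 * u * v + p2 * (r2 + 2.0 * u * u)
    v_d = v * radial + p1 * (r2 + 2.0 * v * v) + 2.0 * p2 * u * v
    return u_d, v_d

# With all coefficients zero, the point is unchanged.
print(distort(0.3, 0.2, 0.0, 0.0, 0.0, 0.0))   # -> (0.3, 0.2)

# Barrel distortion (k1 < 0) pulls the point toward the principal point.
print(distort(0.3, 0.2, -0.2, 0.0, 0.0, 0.0))
```

The same formula is reused later for the projector lens distortion on the projection plane.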

Light Encoding
The projected patterns in the structured light measurement system are usually encoded by a combination of gray code and four-step phase shifting [36,37]. As shown in Figure 3a, the 4-bit gray code encodes $2^4$ subdivisions by projecting four successive gray code stripe patterns. Although the gray code method can encode with pixel accuracy and the encoding process does not depend on the spatial neighborhood, the spatial resolution of this encoding method is low because it is limited by the number of projected stripe patterns. In contrast, four-step phase shifting has high spatial resolution in every projection. However, the drawback of the phase shifting method is the ambiguity that occurs when determining the signal periods in the camera images, which stems from the periodic nature of the patterns. When the gray code and phase shifting methods are integrated, their positive features combine, and even discontinuous surfaces with fine details can be measured.

The four-step phase shifting algorithm has been extensively applied in optical measurement because of its speed and accuracy. The four fringe images can be represented as follows:

$$I_i(x, y) = I'(x, y) + I''(x, y)\cos\!\left[\varphi(x, y) + (i - 1)\frac{\pi}{2}\right], \quad i = 1, 2, 3, 4$$

where $I'(x, y)$ is the average intensity, $I''(x, y)$ is the intensity modulation, and $\varphi(x, y)$ is the phase, which is solved as follows:

$$\varphi(x, y) = \arctan\frac{I_4(x, y) - I_2(x, y)}{I_1(x, y) - I_3(x, y)}$$

where $\varphi(x, y)$ is the wrapped phase, as shown in Figure 3b, which lies between 0 and $2\pi$ rad. However, the absolute phase is needed for the subsequent work, so phase unwrapping detects the $2\pi$ discontinuities and removes them by adding or subtracting multiples of $2\pi$ point by point. In other words, phase unwrapping finds the integer number $k$ so that:

$$\Phi(x, y) = \varphi(x, y) + 2\pi k(x, y)$$

where $\Phi(x, y)$ is the absolute phase and $k$ is the stripe number. When the phase shifting period coincides with the gray code edges, as shown in Figure 3, the phase shifting works within the subdivisions defined by the gray code encoding, and the absolute phase is distributed linearly and spatially continuously over each subdivision area. Thus, all of the pixels in the camera image are tracked by their absolute phases.
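The four-step decoding and unwrapping steps can be sketched on synthetic fringes. For brevity, the stripe number k is taken from the known ground-truth phase here, standing in for the gray-code decoding:

```python
import numpy as np

# Ground-truth (absolute) phase over two fringe periods.
x = np.linspace(0, 4 * np.pi, 200)
I_avg, I_mod = 128.0, 100.0   # average intensity I' and modulation I''

# Four phase-shifted fringe images: I_i = I' + I'' cos(phi + (i - 1) * pi / 2).
I = [I_avg + I_mod * np.cos(x + i * np.pi / 2) for i in range(4)]

# Wrapped phase from the four-step formula: phi = arctan[(I4 - I2) / (I1 - I3)].
phi = np.arctan2(I[3] - I[1], I[0] - I[2])

# Phase unwrapping: the gray code supplies the integer k so that Phi = phi + 2*pi*k;
# here k is recovered from the known true phase for illustration only.
k = np.round((x - phi) / (2 * np.pi))
Phi = phi + 2 * np.pi * k

print(np.max(np.abs(Phi - x)))   # near zero: absolute phase recovered
```

In a real system, k comes from decoding the projected gray-code stripes at each camera pixel rather than from the true phase.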

Digital Binary Defocusing Technique
The digital binary defocusing technique uses computer-generated binary structured fringe patterns, and the defocused projector blurs them into sinusoidal structured fringe patterns. Mathematically, the defocusing effect can be simplified to a convolution operation, written as follows:

$$I(x, y) = I_b(x, y) \otimes Psf(x, y)$$

where $\otimes$ represents convolution, $I_b(x, y)$ is the input binary fringe pattern, $I(x, y)$ is the output smooth fringe pattern, and $Psf(x, y)$ is the point spread function, determined by the pupil function of the optical system $f(u, v)$.
Simply, $Psf(x, y)$ can be approximated by a circular Gaussian function [38,39], where the standard deviation $\sigma$ is proportional to the degree of defocusing. In addition, the defocused optical system is equivalent to a spatial two-dimensional (2D) low-pass filter. As shown in Figure 4, an unaltered binary fringe pattern was simulated to generate sinusoidal fringe patterns with increasing $\sigma$. Figure 4a shows the initial binary structured fringe pattern. Figure 4b,c show the generated sinusoidal fringe patterns with a low and a high defocusing degree, respectively, and Figure 4d shows their cross-sections. As seen in Figure 4, when the defocusing degree increased, the binary structure became less clear and the sinusoidal structure became more obvious, which unfortunately results in a drastic fall in the intensity amplitude. To address this problem, the pulse width modulation (PWM) technique was applied to generate high-quality sinusoidal patterns [40][41][42], and the dithering technique was proposed for wider binary pattern generation [43]. Nevertheless, it is difficult to select a defocusing degree that yields high-quality sinusoidal patterns with a high fringe intensity amplitude. More importantly, when the defocusing degree increased, the phase of the defocused fringe patterns remained invariant [29].
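The defocusing simulation above can be sketched on one cross-section of a binary fringe; the pattern period and the two σ values are illustrative:

```python
import numpy as np

# One cross-section of a binary fringe pattern (period 40 pixels, intensity 0/255).
n = 400
pattern = (np.arange(n) % 40 < 20).astype(float) * 255.0

def gaussian_blur_1d(signal, sigma):
    """Model the projector defocus as convolution with a circular Gaussian PSF
    (here reduced to 1D for a single cross-section)."""
    radius = int(3 * sigma)
    t = np.arange(-radius, radius + 1)
    kernel = np.exp(-t**2 / (2.0 * sigma**2))
    kernel /= kernel.sum()                      # normalize the PSF
    return np.convolve(signal, kernel, mode="same")

low = gaussian_blur_1d(pattern, 2.0)    # slight defocus: binary edges still visible
high = gaussian_blur_1d(pattern, 10.0)  # strong defocus: nearly sinusoidal

# A larger sigma is a stronger low-pass filter, so the fringe amplitude drops.
print(low.max() - low.min(), high.max() - high.min())
```

This reproduces the trade-off discussed above: larger σ gives a cleaner sinusoid but a smaller intensity amplitude.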

Camera Calibration
Essentially, the purpose of the camera calibration procedure is to obtain the intrinsic and extrinsic parameters of the camera from reference data composed of the 3D points on the calibration board and the corresponding 2D points on the CCD. In this research, Zhang's method [7] was used to estimate the intrinsic parameters. Instead of using a checkerboard as the calibration target, we used a flat black board with a 7 × 21 array of white circles, as shown in Figure 5, and the centers of the circles were extracted as feature points. The calibration board was placed in different positions and orientations (poses), and 15 images were captured to estimate the intrinsic parameters of the camera. This procedure was implemented with the OpenCV camera calibration toolbox. Notably, a typical radial and tangential distortion model was considered, and the distortion was corrected during the camera calibration.
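A core step inside Zhang's method is estimating the homography between the planar calibration target and its image for each pose. A minimal direct-linear-transform (DLT) sketch, on synthetic noise-free points standing in for extracted circle centers, might look like this:

```python
import numpy as np

def estimate_homography(src, dst):
    """DLT estimate of H such that dst ~ H @ src for corresponding planar points."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1,  0,  0,  0, u * x, u * y, u])
        rows.append([ 0,  0,  0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null vector of the stacked constraint matrix.
    _, _, Vt = np.linalg.svd(np.array(rows))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply(H, p):
    q = H @ np.array([p[0], p[1], 1.0])
    return q[0] / q[2], q[1] / q[2]

# Synthetic planar grid (stand-in for circle centers on the board, in mm) ...
src = [(x, y) for x in range(0, 70, 10) for y in range(0, 40, 10)]
# ... observed under a known ground-truth homography (made-up values).
H_true = np.array([[1.2, 0.1, 30.0], [-0.05, 1.1, 40.0], [1e-4, 2e-4, 1.0]])
dst = [apply(H_true, p) for p in src]

H_est = estimate_homography(src, dst)
print(np.max(np.abs(H_est - H_true)))   # near zero on noise-free data
```

In practice, OpenCV performs this estimation (with distortion and nonlinear refinement) inside its calibration routine; the sketch only shows the linear core.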

Out-of-Focus Projector Calibration
Generally, a projector can be regarded as an inverse camera, because it projects images rather than capturing them. If images of the calibration points from the view of the projector are available, the projector can be calibrated like a camera; the goal is therefore to establish the mapping relationship between the 2D points on the DMD of the projector and the 2D points on the CCD of the camera. However, defocusing the projector complicates the calibration procedure. This is because the model for calibrating the projector follows the model for camera calibration, and since the pinhole camera model requires the camera to be in focus, an out-of-focus projector does not directly satisfy this requirement. In addition, most projectors have noticeable distortions outside their focus plane (i.e., on the projection plane) [19]. In this section, a novel calibration model for a defocused projector is introduced, together with a solution to the problem of calibrating an out-of-focus projector.
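The CCD-to-DMD mapping rests on the absolute phase: one fringe period spans a fixed number of projector pixels, so a decoded phase converts directly to a sub-pixel DMD coordinate. A minimal sketch, with an illustrative period and made-up phase values:

```python
import numpy as np

PERIOD = 32  # fringe period on the DMD, in projector pixels (illustrative value)

def dmd_coordinate(absolute_phase, period=PERIOD):
    """Convert a decoded absolute phase into a sub-pixel projector (DMD) coordinate.

    One fringe period spans `period` projector pixels, i.e. 2*pi of phase."""
    return absolute_phase * period / (2.0 * np.pi)

# Example: absolute phases decoded at one camera pixel from horizontal and
# vertical fringe sequences (values are made up for illustration).
u_dmd = dmd_coordinate(2 * np.pi * 400.5 / PERIOD)    # projector column
v_dmd = dmd_coordinate(2 * np.pi * 300.25 / PERIOD)   # projector row
print(u_dmd, v_dmd)   # approximately 400.5, 300.25
```

Applying this at every extracted feature point gives the projector-side "observations" that let the projector be calibrated as an inverse camera.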

Out-of-Focus Projector Model
In the literature [26], two methods perform the binary defocusing technique. In the first method, the projector projects onto different planes with a fixed focal length. In the second method, the projector is set to different focus distances while the plane remains at a fixed location. For each method, the first defocusing degree corresponds to the projector being in focus. To study the influence of the binary defocusing technique on the calibration results of the projector, we calibrated a projector under different defocusing degrees with the method proposed in [29], with the defocusing degree increasing from 1 to 5. Table 1 shows the calibration results of the projector under the different defocusing degrees produced by the first method, and Table 2 shows the corresponding results produced by the second method.

Table 1. Calibration results of a projector under different defocusing degrees when using the first method.

Table 2. Calibration results of a projector under different defocusing degrees when using the second method.

As shown in Table 1, the stability of the focal length and the principal point was poor under different defocusing degrees when using the first method. The maximum change of f_u and f_v reached 50 pixels, and the maximum changes of u_0 and v_0 reached 24 pixels and 40 pixels, respectively. In addition, the re-projection errors in the u and v directions rose significantly as the defocusing degree of the projector increased, as also seen in Figure 6a. Similarly, we applied the different defocusing degrees using the second method, and the corresponding statistical calibration results are shown in Table 2 and Figure 6b. The results show that the calibration results of the second defocusing method are essentially the same as those of the first. Therefore, all of the parameters vary when the projector projects onto different planes or is set to different focus distances. This is because the parameters are influenced by the projector defocusing, and there are mutually constrained relationships between the parameters in the projector calibration process.

Re-Projection Errors
In addition, from the above experiments we found that the distortion coefficients varied as the defocusing degree increased. Moreover, it has been noted that most projectors have noticeable distortions outside their focus plane [19]. Therefore, we determined the lens distortion of the out-of-focus projector under the different defocusing degrees, which is conveniently described by the average residual error (ARE) value. The ARE is defined as follows:

$$ARE = \frac{1}{N}\sum_{i=1}^{N} \sqrt{(x_i - x_{id})^2 + (y_i - y_{id})^2}$$

where $(x_i, y_i)$ are the image coordinates computed from the calibration parameters, and $(x_{id}, y_{id})$ are the (extracted) ideal image coordinates. Table 3 shows the statistics of the ARE of an out-of-focus projector under different defocusing degrees, and Figure 7 shows the variation of the ARE with the defocusing degree.
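The ARE, taken as the mean Euclidean distance between re-projected and extracted points, can be computed in a few lines; the point coordinates below are hypothetical:

```python
import numpy as np

def average_residual_error(computed, ideal):
    """ARE: mean Euclidean distance between the re-projected image points
    (computed from the calibration parameters) and the extracted ideal points."""
    computed = np.asarray(computed, dtype=float)
    ideal = np.asarray(ideal, dtype=float)
    return float(np.mean(np.linalg.norm(computed - ideal, axis=1)))

# Hypothetical reference points: each re-projection is off by a 3-4-5 or 0-5-5 triangle.
ideal = [(100.0, 200.0), (150.0, 250.0)]
computed = [(103.0, 204.0), (150.0, 255.0)]
print(average_residual_error(computed, ideal))   # -> 5.0
```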
Figure 6. Re-projection errors under different defocusing degrees: (a) the different defocusing degrees obtained using the first method; and (b) the different defocusing degrees obtained using the second method.

From Table 3 and Figure 7, the ARE of the projector rose as the defocusing degree of the projector increased. This indicates that the projector always has noticeable distortions outside its focus plane, which is central to the discussion that follows.
The structured light system model with an out-of-focus projector is shown in Figure 8. To generate sinusoidal fringes from binary patterns, the projector was substantially out of focus. The statistical calibration results of the defocused projector in Figure 6 show that the re-projection errors gradually increased as the projector defocusing increased. This occurred because the pinhole camera model was used directly to calibrate the out-of-focus projector, even though this model requires the camera to be focused on the focal plane. When the amount of defocusing is small, the re-projection errors of the projector are small, as with defocusing degrees 1 and 2. Conversely, at a high defocusing degree, the pinhole camera model is no longer suitable, and the re-projection errors of the projector become increasingly large, as with defocusing degrees 4 and 5. Therefore, using a pinhole camera model to calibrate an out-of-focus projector introduces errors into the calibration results. To improve the calibration precision, this paper proposes an out-of-focus projector calibration method based on nonlinear optimization with lens distortion correction on the projection plane.

As introduced in Section 2.1, the pinhole camera model describes a camera with intrinsic parameters (focal length, principal point, and pixel skew factor) and extrinsic parameters (rotation matrix and translation vector). A model of an out-of-focus projector is shown in Figure 9a. For convenience of expression, parameters or points related to the focal plane are denoted by the subscript "1", whereas those related to the projection plane are denoted by the subscript "2".
If the defocusing degree of the projector is determined, the unique correspondence between focal plane and projection plane will be decided. Then, the calibration plane can be moved to the focal plane of the projector. The projector can be calibrated as an inverse camera by the pinhole camera model. This process can be described as follows: To obtain accurate calibration results, the lens distortion was considered. In this paper, we used As introduced in Section 2.1, the pinhole camera model describes a camera with intrinsic parameters, such as focal length, principle point, and pixel skew factor; and, extrinsic parameters, such as rotation matrix and translation vector. A model of an out-of-focus projector is shown in Figure 9a. For convenience of expression, the parameters or points related to the focal plane are represented by the subscript "1", whereas the parameters or points that are related to the projection plane are represented by the subscript "2". If the defocusing degree of the projector is determined, the unique correspondence between focal plane and projection plane will be decided. Then, the calibration plane can be moved to the focal plane of the projector. The projector can be calibrated as an inverse camera by the pinhole camera model. This process can be described as follows: The A p1 , R p1 , T p1 were selected as the initial values of the final out-of-focus projector calibration. proposes an out-of-focus projector calibration method that is based on nonlinear optimization with the lens distortion correction on the projection plane. As introduced in Section 2.1, the pinhole camera model describes a camera with intrinsic parameters, such as focal length, principle point, and pixel skew factor; and, extrinsic parameters, such as rotation matrix and translation vector. A model of an out-of-focus projector is shown in Figure  9a. 
A distance exists between the focal plane and the projection plane; as the projector defocusing degree increases, this distance also increases, and the measurement plane (projection plane) moves away from the focal plane. Additionally, a prior study showed that most projectors have noticeable distortions outside their focus plane [19]. Therefore, we considered the lens distortion on the projection plane as the initial value for the final projector calibration. However, the pinhole camera model should be used on the focal plane. To calculate the lens distortion on the projection plane using the pinhole camera model, the defocused projector should be refocused on the projection plane. This plane is denoted focal plane 2, as shown in Figure 9b. The projector can then be calibrated as a standard inverse camera, and the lens distortion on the projection plane, δ_up2 and δ_vp2, can be obtained.
Similarly, the calibration model of the projector on the projection plane can be described as follows: The δ_up2 and δ_vp2 were selected as the initial values of the final out-of-focus projector calibration.
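The radial and tangential distortion representation cited above [30,31] follows the standard Brown-Conrady form. A minimal sketch of that form is given below; the coefficient values in the example are purely illustrative, not the calibrated ones.

```python
import numpy as np

def distort(x, y, k1, k2, k3, p1, p2):
    """Apply radial (k1, k2, k3) and tangential (p1, p2) lens distortion
    to normalized image coordinates (x, y)."""
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    x_d = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    y_d = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return x_d, y_d

# With all coefficients zero, the mapping is the identity.
print(distort(0.1, 0.2, 0, 0, 0, 0, 0))  # -> (0.1, 0.2)
```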
Here, two remarks are required. First, when we calculated the intrinsic and extrinsic parameters on the focal plane, the projector condition was the same as in the final defocused state, so these parameters were close to the real values. Therefore, the intrinsic and extrinsic parameters on the focal plane were selected as the initial values of the nonlinear optimization. Secondly, with an increase in the projector defocusing degree, the measurement plane (projection plane) moved away from the focal plane. As a prior study showed that most projectors have noticeable distortions outside their focus plane [19], we considered the lens distortion on the projection plane (focal plane 2) as the initial value of the final out-of-focus projector calibration. Finally, the initial values of an out-of-focus projector with a nonlinear optimization algorithm can be described as follows: Based on the re-projection mathematical model with distortion, the final accurate parameters of the out-of-focus projector were obtained with a nonlinear optimization algorithm. The objective function minimizes the sum of the re-projection errors of all the reference points onto the projector image plane. This can be described as: where [A_pp, R_pp, T_pp, K_pp] are the final accurate parameters of the out-of-focus projector; N is the number of reference points; M is the number of images for projector calibration; A_p1 is the intrinsic parameter matrix on focal plane 1; R_p1 and T_p1 represent the extrinsic parameters on focal plane 1; K_p2 represents the distortion coefficients on focal plane 2; p_ij is the point coordinate on the image plane; F is the function representing the re-projection process of the projector; and P_ij is the space coordinate of a calibration point. This is a nonlinear optimization problem that can be solved using the Levenberg-Marquardt method [44].
In addition, a good initial value can be provided by coarsely calibrating the parameters of the out-of-focus projector on the focal plane and projection plane, respectively.
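As a rough illustration of how such a re-projection-error minimization proceeds, the sketch below implements a minimal Levenberg-Marquardt loop that refines only four intrinsic parameters of a simplified, distortion-free pinhole model on synthetic data. This is not the paper's full cost function of Equation (20), which additionally optimizes R, T, and the distortion coefficients K; all numbers here are synthetic.

```python
import numpy as np

def project(params, P):
    """Simplified re-projection F: only fx, fy, cx, cy are refined."""
    fx, fy, cx, cy = params
    u = fx * P[:, 0] / P[:, 2] + cx
    v = fy * P[:, 1] / P[:, 2] + cy
    return np.column_stack([u, v])

def residuals(params, P, obs):
    """Stacked re-projection errors over all reference points."""
    return (project(params, P) - obs).ravel()

def levenberg_marquardt(params, P, obs, iters=50, lam=1e-3):
    """Minimal LM loop with a numerical Jacobian and adaptive damping."""
    params = np.asarray(params, float)
    for _ in range(iters):
        r = residuals(params, P, obs)
        J = np.empty((r.size, params.size))
        eps = 1e-6
        for j in range(params.size):
            dp = np.zeros_like(params)
            dp[j] = eps
            J[:, j] = (residuals(params + dp, P, obs) - r) / eps
        step = np.linalg.solve(J.T @ J + lam * np.eye(params.size), -J.T @ r)
        new = params + step
        if np.sum(residuals(new, P, obs) ** 2) < np.sum(r ** 2):
            params, lam = new, lam * 0.5   # accept step, trust Gauss-Newton more
        else:
            lam *= 10.0                    # reject step, increase damping
    return params

# Synthetic check: recover known intrinsics from noiseless projections.
rng = np.random.default_rng(0)
P = rng.uniform([-50, -50, 400], [50, 50, 600], (40, 3))
true = np.array([1200.0, 1200.0, 512.0, 384.0])
obs = project(true, P)
est = levenberg_marquardt([1000.0, 1000.0, 500.0, 400.0], P, obs)
print(np.round(est, 2))
```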

Phase-Domain Invariant Mapping
To improve the calibration accuracy of an out-of-focus projector, a unique one-to-one mapping between the pixels on the projector DMD and the pixels on the camera CCD should be virtually established in the phase domain using the phase shifting algorithm. The mapped projector images were generated as proposed in [17], and the basic mapping principle can be described as follows. If the vertical structured light patterns, which are encoded with a combination of gray code and phase shifting, are projected onto the calibration board and the camera captures the pattern images, the absolute phase Φ_V(u_c, v_c) can be retrieved for all of the camera pixels with Equations (5) and (6). Following the same process, if horizontal structured light patterns are projected, the absolute phase Φ_H(u_c, v_c) is extracted. In addition, the accuracy of the phase mapping is determined by high-quality phase generation, which depends on the fringe width and the number of fringes in the phase shifting method. In this paper, the four-step phase shifting algorithm was used, and the sinusoidal fringe pattern period was 16 pixels, as shown in Figure 10. Hence, six gray code patterns, which have the same period as the sinusoidal fringe pattern, should be used, as shown in Figure 11. These vertical and horizontal absolute phases can be used to construct the mapping between the pixels on the DMD and CCD, as follows: Equation (21) provides a pixel-to-pixel mapping from the CCD to the DMD; Figure 12 illustrates an example of the extracted correspondences for a single translation.
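The four-step phase shifting retrieval described above can be sketched as follows. The intensity model I_k = A + B·cos(φ + kπ/2) is the standard four-step form; decoding the fringe order from the six gray-code patterns is assumed to be done separately.

```python
import numpy as np

def wrapped_phase(I0, I1, I2, I3):
    """Four-step phase shifting, I_k = A + B*cos(phi + k*pi/2).
    Returns the wrapped phase in (-pi, pi]."""
    return np.arctan2(I3 - I1, I0 - I2)

def absolute_phase(phi_wrapped, fringe_order):
    """Unwrap using the fringe order k decoded from the gray-code patterns."""
    return phi_wrapped + 2.0 * np.pi * fringe_order

# Synthetic single-pixel check with known phase.
A, B, phi = 0.5, 0.4, 1.2
I = [A + B * np.cos(phi + k * np.pi / 2) for k in range(4)]
print(wrapped_phase(*I))  # -> ~1.2
```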

Out-of-Focus Projector Calibration Process
The camera can be calibrated with the OpenCV camera calibration toolbox. If the image coordinates of the calibration points on the DMD are obtained by phase-domain invariant mapping, an out-of-focus projector can be calibrated using the abovementioned approach. Specifically, the calibration process requires the following major steps: Step 1: Image capture. The calibration board was placed at the preset location, and a white paper was stuck on the surface of the calibration board. A set of horizontal and vertical gray code patterns was projected onto the calibration board, and these fringe images were captured by the camera. Similarly, pattern images were captured by projecting a sequence of horizontal and vertical four-step phase shifting fringes. Afterwards, the white paper was removed, and the calibration board image was captured. For each pose, a total of 21 images were recorded, which were used to recover the absolute phase using the combination of gray code and the four-step phase shifting algorithm introduced in Section 2.2.
Step 2: Camera calibration and determining the locations of the circle centers on the DMD. The camera calibration method recommended in Section 3.1 was used. For each calibration pose, the horizontal and vertical absolute phases were recovered, and a unique point-to-point mapping between the CCD and DMD was determined as follows: where T_V and T_H are the four-step phase shifting pattern periods in the vertical and horizontal directions, respectively. In this paper, T_V = T_H = 16 pixels. Using Equation (22), the phase value was converted into projector pixels. Furthermore, because the circle centers are detected at sub-pixel accuracy in the camera image, we assigned each sub-pixel location the absolute phase obtained by bilinear interpolation of the absolute phases of its four adjacent pixels. For high-accuracy camera circle centers, the standard OpenCV toolbox was used. Figure 12 shows an example of the extracted correspondences for a single translation.
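The exact form of Equation (22) is not reproduced in this excerpt; a plausible sketch, assuming the usual conversion of absolute phase to projector pixel coordinates (u_p = Φ_V·T_V/2π, v_p = Φ_H·T_H/2π) together with the bilinear interpolation described above, is:

```python
import numpy as np

T_V = T_H = 16  # phase shifting fringe period in projector pixels

def phase_to_projector_pixel(phi_v, phi_h):
    """Map the absolute phases at a camera pixel to DMD coordinates
    (assumed form of Eq. (22))."""
    u_p = phi_v * T_V / (2.0 * np.pi)
    v_p = phi_h * T_H / (2.0 * np.pi)
    return u_p, v_p

def bilinear(phase_map, u, v):
    """Absolute phase at a sub-pixel camera location (u, v), interpolated
    from its four neighbouring pixels; rows index v, columns index u."""
    u0, v0 = int(np.floor(u)), int(np.floor(v))
    du, dv = u - u0, v - v0
    return ((1 - du) * (1 - dv) * phase_map[v0, u0]
            + du * (1 - dv) * phase_map[v0, u0 + 1]
            + (1 - du) * dv * phase_map[v0 + 1, u0]
            + du * dv * phase_map[v0 + 1, u0 + 1])

# A linear phase ramp is reproduced exactly by bilinear interpolation.
pm = np.fromfunction(lambda r, c: 0.1 * c + 0.05 * r, (4, 4))
print(bilinear(pm, 1.5, 2.25))  # -> 0.1*1.5 + 0.05*2.25 = 0.2625
```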
Step 3: Calculate the initial values of the intrinsic and extrinsic parameters on the focal plane (focal plane 1). To find approximate parameters, images at 15 different positions and orientations (poses) were captured within the planned measurement volume for the projector calibration.
With the reference calibration data on focal plane 1 for the projector extracted in Step 2, the coarse intrinsic and extrinsic parameters of an out-of-focus projector can be estimated using the same software algorithms as for camera calibration on focal plane 1, as described in Section 3.2.
Step 4: Compute the initial value of the lens distortion on the projection plane. According to the results of our previous experiments in Section 3.2.1, the lens distortion varies with an increasing defocusing degree. To find the approximate parameters, the lens distortion on the projection plane was considered as the initial value of the lens distortion for an out-of-focus projector.
In this process, the projector was adjusted to focus on the projection plane, which was called focal plane 2. With the calibration points on focal plane 2 and their corresponding image points on the DMD, the lens distortion on the projection plane was obtained using the pinhole camera model.
Step 5: Compute the precise calibration parameters of the out-of-focus projector using a nonlinear optimization algorithm. All of the parameters were solved by minimizing the cost function outlined in Equation (20).

Experiment and Discussion
To verify the validity of the proposed calibration method for an out-of-focus projector, in-laboratory experiments were completed. The experimental system is shown in Figure 13: it was composed of a DLP projector (model: Optoma DN322) with 1024 × 768 pixel resolution and a camera (model: Point Grey GX-FW-28S5C/M-C) with a 12 mm focal length lens (model: KOWA LM12JC5M2). The camera pixel size was 4.54 µm × 4.54 µm, with a highest resolution of 1600 × 1200 pixels. A calibration board with a 7 × 21 array of white circles printed on a flat black board was used, and a calibration volume of 200 × 150 × 200 mm was attained. The system calibration followed the method described in Section 3. To evaluate the performance of the proposed calibration method, the system was also calibrated with the calibration method in [29]. In addition, the calibration images were generally contaminated by noise during image acquisition, capture, or transmission. To improve the accuracy of the feature point extraction and phase calculation, the original images in the experiment were preprocessed by an image de-noising method based on a weighted regularized least-squares algorithm [45], which can effectively eliminate image noise and preserve edge information without blurring the edges. This helps reduce the noise impact and improve the calibration results.
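The specific weighted algorithm of [45] is not reproduced here; the sketch below shows only a simplified, unweighted regularized least-squares smoother on a 1-D signal (a data-fidelity term plus a second-difference penalty), to convey the basic idea of this family of de-noisers. The weighting in [45] is what preserves edges, and it is omitted in this illustration.

```python
import numpy as np

def rls_smooth(y, lam=10.0):
    """Unweighted regularized least-squares smoothing of a 1-D signal:
    minimize ||x - y||^2 + lam * ||D x||^2, D = second-difference operator."""
    n = len(y)
    D = np.diff(np.eye(n), n=2, axis=0)  # (n-2) x n second-difference matrix
    return np.linalg.solve(np.eye(n) + lam * (D.T @ D), y)

# Synthetic check: noise on a smooth signal is reduced.
rng = np.random.default_rng(2)
clean = np.sin(np.linspace(0, 2 * np.pi, 100))
noisy = clean + rng.normal(0.0, 0.2, 100)
smoothed = rls_smooth(noisy)
print(np.std(noisy - clean), np.std(smoothed - clean))
```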
Table 4 shows the calibration system parameters using both our proposed method and the conventional method in [29] under defocusing degree 2, as mentioned in Table 1. As shown, the camera parameters were almost the same for both methods, whereas the out-of-focus projector parameters were obviously different.
This is because the calibration process for the projector in [29] was not reliable due to the influence of the projector defocusing. In addition, the re-projection errors of the calibration data on the camera and out-of-focus projector image planes are shown in Figure 14a-c. As shown in Figure 14a, when the re-projection errors of the calibration data on the camera image plane are 0.1567 ± 0.0119 pixels, the re-projection errors of the calibration data for the out-of-focus projector using the method in [29] are 0.2258 ± 0.0217 pixels, as shown in Figure 14b. To compare the effect of both methods, the re-projection errors of the camera remained unchanged, and the re-projection errors of the calibration data for the out-of-focus projector using the proposed method decreased to 0.1648 ± 0.0110 pixels, as shown in Figure 14c, which is a reduction of 27.01% when compared to the method in [29].
To evaluate the performance of the proposed calibration method, the standard distances between two adjacent points on the calibration board in the x and y directions were measured using the two methods. The standard distances were obtained by moving the calibration board 20 mm in the parallel direction within the volume of 200 × 140 × 100 mm, and a total of 1295 distances were measured. Figure 15 shows the measured distances within the 200 × 140 × 100 mm volume. Figure 16 shows the histogram of the distribution of the distance measurement error. The measuring error was 0.0253 ± 0.0364 mm for our proposed calibration method, as shown in Figure 16a, while the measuring error was 0.0389 ± 0.0493 mm for the calibration method in [29], as shown in Figure 16b. The measurement accuracy and the uncertainty were improved by 34.96% and 26.17%, respectively.
Figure 14. Re-projection errors of the calibration points on the image planes: (a) camera; (b) out-of-focus projector using the conventional method in [29]; and (c) out-of-focus projector using the proposed method.
Table 4. Calibration results of the camera and out-of-focus projector.

To further test the results of our proposed calibration method, a planar board and an aluminum alloy hemisphere were measured by defocusing the camera-projector system under the different defocusing degrees listed in Table 1. The measurement results of the planar board under the five defocusing degrees are shown in Figure 17. The measurement error of the board is defined as the distance between the measuring point and the fitting plane. To determine the measurement errors of the board, the board was also measured using a coordinate measuring machine (CMM) with a precision of 0.0019 mm. Table 5 presents the statistics of the measurement results of the board with the five different defocusing degrees. The board fitting residuals of the CMM's measurement data were 0.0065 ± 0.0085 mm, and the maximum was less than 0.0264 mm. Figure 17a,b show the fitting plane and the fitting residuals of the plane with the projector in focus, under defocusing degree 1, respectively. Additionally, it is important to note that our calibration method is the same as the method in [29] under defocusing degree 1, so there is only one set of measurement results. Figure 17c-f show the measurement results under defocusing degrees 2 to 5 using our proposed calibration method. Similarly, the measurement results under defocusing degrees 2 to 5 using the calibration method in [29] are shown in Figure 17g-j. When the defocusing degree is minimal, such as defocusing degrees 2 and 3, the fitting residuals were similar between our proposed calibration method and the calibration method in [29]: the fitting residuals using our calibration method were 0.0147 ± 0.0184 mm and 0.0159 ± 0.0195 mm, versus 0.0169 ± 0.0210 mm and 0.0183 ± 0.0257 mm using the calibration method in [29], respectively. However, as the defocusing degree increased to degrees 4 and 5, the differences between the measurement results became obvious.
Especially for defocusing degree 5, the fitting residual was 0.0172 ± 0.0234 mm using our proposed calibration method, whereas it reached 0.0276 ± 0.0447 mm using the calibration method in [29]. Figure 18 shows the fitting error varying with the defocusing degree for both our proposed calibration method and the calibration method in [29]. From Figure 18, the change in the fitting error across defocusing degrees is not obvious using our proposed calibration method; nevertheless, the fitting error increased rapidly using the calibration method in [29], because our proposed calibration method considers the influence of defocusing on the calibration results.
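The plane-fitting residual metric used above (the distance of each measured point to the fitted plane) can be sketched as follows; the synthetic noise level in the example is illustrative, not the paper's data.

```python
import numpy as np

def plane_fit_residuals(points):
    """Fit a least-squares plane to Nx3 points; the residual is the signed
    distance of each measured point to the fitted plane."""
    centroid = points.mean(axis=0)
    # The plane normal is the right singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    return (points - centroid) @ vt[-1]

# Synthetic check: a 200 x 140 mm board region with 0.02 mm Gaussian "measurement" noise.
rng = np.random.default_rng(1)
pts = np.column_stack([rng.uniform(0, 200, 500),
                       rng.uniform(0, 140, 500),
                       rng.normal(0.0, 0.02, 500)])
res = plane_fit_residuals(pts)
print(f"{res.mean():.4f} +/- {res.std():.4f} mm")
```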
Figure 17. (g-j) The measurement error of the plane under defocusing degrees 2-5 using the calibration method in [29].
An aluminum alloy hemisphere was also measured using the defocusing camera-projector system for three different defocusing degrees: 1, 2, and 5. The captured fringe images for the three defocusing degrees and their cross sections of intensity are shown in Figure 19. The measurement and statistics results are shown in Figure 20 and Table 6, respectively. The measurement results under defocusing degree 1 (projector in focus) are shown in Figure 20a-c. Figure 20a shows the reconstructed 3D surface. To evaluate the accuracy of the measurement, we obtained a cross section of the hemisphere and fitted it with an ideal circle. Figure 20b shows the overlay of the ideal circle and the measured data points. The error between these two curves is shown in Figure 20c.
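The cross-section evaluation, fitting the measured points with an ideal circle, can be sketched with a simple algebraic (Kasa) least-squares fit; the center and radius in the example are synthetic, not the hemisphere's measured values.

```python
import numpy as np

def fit_circle(x, y):
    """Algebraic (Kasa) least-squares circle fit to a 2D cross section.
    Solves x^2 + y^2 + a*x + b*y + c = 0 for (a, b, c)."""
    A = np.column_stack([x, y, np.ones_like(x)])
    rhs = -(x**2 + y**2)
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    cx, cy = -a / 2.0, -b / 2.0
    radius = np.sqrt(cx**2 + cy**2 - c)
    return cx, cy, radius

# Synthetic check: a half circle of radius 20 centred at (5, -3).
t = np.linspace(0, np.pi, 200)
cx, cy, r = fit_circle(5 + 20 * np.cos(t), -3 + 20 * np.sin(t))
print(round(cx, 3), round(cy, 3), round(r, 3))  # -> 5.0 -3.0 20.0
```

The fitting residual reported in the paper would then be the deviation of each measured point's distance from the fitted center relative to the fitted radius.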
To further evaluate our proposed calibration method, the hemisphere was also measured by a CMM with a precision of 0.0019 mm. The fitted radius of the CMM's measurement data was 20.0230 mm, and the hemisphere fitting residuals were 0.0204 ± 0.0473 mm. The fitted radius obtained with the defocusing camera-projector system calibrated by our proposed method was 19.9542 mm, a deviation of 0.0688 mm from the CMM's fitted radius, and the hemisphere fitting residuals were 0.0543 ± 0.0605 mm for defocusing degree 2. Under the same experimental conditions, the hemisphere was also measured with the system calibrated by the method proposed in [29]: the fitted radius was 19.9358 mm, a deviation of 0.0872 mm from the CMM's fitted radius, and the hemisphere fitting residuals were 0.0745 ± 0.0733 mm. For defocusing degree 5, the hemisphere fitting residuals were 0.0574 ± 0.0685 mm using our proposed calibration method and 0.0952 ± 0.0936 mm using the method in [29]. The fitting error as a function of the defocusing degree for both methods is shown in Figure 21: with our proposed method, the fitting error changed little, whereas with the method in [29] it increased rapidly. Thus, the measurement results of the camera-projector system with our proposed projector calibration method were better than those with the method in [29]. All of the experimental results verified that the camera and out-of-focus projector system attain satisfactory accuracy using our proposed projector calibration method.
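The radius and residual figures above come from fitting an ideal circle to a measured cross section. One common way to do this is the algebraic (Kasa) least-squares circle fit sketched below; this is an illustrative reconstruction under hypothetical synthetic data (20 mm nominal radius, 0.05 mm noise), not the authors' actual pipeline.

```python
import numpy as np

def fit_circle(x, y):
    """Algebraic (Kasa) least-squares circle fit to a 2-D cross section.

    Solves x^2 + y^2 = 2*cx*x + 2*cy*y + c in the least-squares sense;
    the radius follows from r^2 = c + cx^2 + cy^2.
    """
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    b = x**2 + y**2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(c + cx**2 + cy**2)
    # Signed radial residuals: distance to fitted center minus radius.
    residuals = np.hypot(x - cx, y - cy) - r
    return (cx, cy), r, residuals

# Hypothetical cross section of a 20 mm-radius hemisphere (units: mm).
rng = np.random.default_rng(1)
theta = rng.uniform(0.0, np.pi, 500)
x = 20.0 * np.cos(theta) + 0.05 * rng.standard_normal(500)
y = 20.0 * np.sin(theta) + 0.05 * rng.standard_normal(500)

center, radius, res = fit_circle(x, y)
abs_res = np.abs(res)
print(f"fitted radius: {radius:.4f} mm, "
      f"residuals: {abs_res.mean():.4f} ± {abs_res.std():.4f} mm")
```

Comparing the fitted radius against the CMM's fitted radius, as done in the text, then gives the radius deviation; the residual statistics correspond to the "mean ± std" values in Table 6.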

Conclusions
This paper proposes an accurate and systematic method to calibrate an out-of-focus projector in a structured light system that uses a binary defocusing technique. To achieve high accuracy, the calibration method comprises two parts. First, good initial values are provided by coarsely calibrating the parameters of the out-of-focus projector on the focal plane and the projection plane. Second, the final accurate parameters of the out-of-focus projector are obtained using a nonlinear optimization algorithm based on the re-projection mathematical model with distortion. Specifically, a polynomial distortion representation that accounts for high-order radial and tangential lens distortion is applied on the projection plane, rather than the focal plane, to reduce the residuals caused by projection distortion. In addition, the calibration points in the camera image plane are mapped to the projector according to the phase of the planar projection. The experimental results showed that satisfactory calibration accuracy was achieved using the proposed method, regardless of the defocusing amount. The method is not without limitations: compared with the traditional calibration method, its computation time is somewhat longer.
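A high-order radial plus tangential distortion representation of the kind summarized above is commonly written in Brown-Conrady form. The sketch below is an illustrative implementation of that model; the coefficient values in the usage example are hypothetical, not the calibrated parameters of this system.

```python
def distort(xn, yn, k, p):
    """Apply radial (k1..k3) and tangential (p1, p2) lens distortion to
    normalized plane coordinates (xn, yn), Brown-Conrady form:

        xd = xn*(1 + k1*r^2 + k2*r^4 + k3*r^6) + 2*p1*xn*yn + p2*(r^2 + 2*xn^2)
        yd = yn*(1 + k1*r^2 + k2*r^4 + k3*r^6) + p1*(r^2 + 2*yn^2) + 2*p2*xn*yn
    """
    r2 = xn**2 + yn**2
    radial = 1.0 + k[0]*r2 + k[1]*r2**2 + k[2]*r2**3
    xd = xn*radial + 2.0*p[0]*xn*yn + p[1]*(r2 + 2.0*xn**2)
    yd = yn*radial + p[0]*(r2 + 2.0*yn**2) + 2.0*p[1]*xn*yn
    return xd, yd

# Hypothetical coefficients, for illustration only.
k = (-0.12, 0.03, -0.002)   # radial terms k1, k2, k3
p = (1e-4, -2e-4)           # tangential terms p1, p2
xd, yd = distort(0.3, -0.2, k, p)
```

In a nonlinear refinement, these five coefficients would be optimized jointly with the intrinsic and extrinsic parameters by minimizing the re-projection error on the projection plane.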
