A High-Accuracy Calibration Method for a Telecentric Structured Light System

We propose a method for accurately calibrating a telecentric structured light system consisting of a camera attached to a bilateral telecentric lens and a pin-hole projector. The proposed method can be split into two parts: axial calibration and transverse calibration. The first part builds the relationship between phase and depth by means of a planar plate with ring markers on its surface, placed at several different positions in the measuring volume. The second part establishes the relationship between transverse coordinates and pixel positions using the depth offered by a translation stage and the extracted ring centers. Compared with existing methods that require projector calibration, the proposed method avoids propagation of the correspondence error between the camera imaging plane and the projector imaging plane, thus increasing calibration accuracy. The calibrated telecentric structured light system is further used for three-dimensional (3D) reconstruction of a plane, a regular sphere, and complex surfaces. Experimental results demonstrate that the proposed system calibration method can be used for accurate 3D measurement.


Introduction
With the development of micro-manufacturing technology, there has been increasing demand for high-accuracy micro-level three-dimensional (3D) measurement methods. Classical structured light technology, using a camera with an ordinary lens and a projector, is a promising solution for micro-scale 3D profilometry because of its non-contact operation, full-field acquisition, and high accuracy. To apply this technology to micro-scale measurement, researchers have replaced the ordinary lens with a telecentric lens to form telecentric structured light systems, which exploit the telecentric lens's advantages of constant magnification over a specific volume, nearly zero distortion, and increased depth of field (DOF) [1,2]. According to the relative position of the optical axes of the projector and camera, these systems can be grouped roughly into two types: coaxial or parallel-axes systems and crossed-axes systems.
In the first kind of system, the optical axes of the projector and camera are set coaxial or parallel to extend the measuring range in the DOF, based on the orthographic projection characteristics of a telecentric camera [3][4][5][6]. A series of circular phase-shifted fringe patterns are projected and captured for calculating phase maps and extracting zero-phase point coordinates. With the aid of these point coordinates, the 3D shape of an object surface can be reconstructed from the geometry of the projected and captured light rays. However, such systems are susceptible to the zero-phase point detection problem, because the zero-phase point may lie outside the field of view (FOV) of the camera, which inevitably introduces additional geometrical errors between camera and projector. Recently, Zhong et al. [7] designed a dual-telecentric structured light system using the modulation distribution, instead of phase information, for 3D reconstruction. It is therefore suitable for measuring object surfaces with discontinuities or steep slopes, alleviating the problems of shadow and occlusion.
In the other kind of system, as in classical structured light systems, the triangular relationship formed by a projector, a telecentric camera, and a measured object is used for 3D measurement. The calibration of such systems, which aims to determine the intrinsic and extrinsic parameters of the camera and projector, is a challenge because telecentricity makes the camera insensitive to depth changes along its optical axis. To deal with this challenge, several system calibration methods have been proposed for accurately measuring the 3D topography of an object surface. For instance, Li et al. [8] proposed a method to calibrate a telecentric camera and a projector simultaneously by using horizontal and vertical sinusoidal fringe sequences. The 3D shape of a measured object can then be reconstructed with the obtained intrinsic and extrinsic parameters of the camera and projector. In their method, a one-to-one mapping between camera pixels and projector pixels needs to be established for calculating the projector parameters, based on phase maps computed from captured fringe images in the vertical and horizontal directions. Later, some researchers either changed the calibration method [9][10][11][12] of the telecentric camera or used the binary defocusing technique [13] to improve the measurement accuracy. The main drawback of such methods is that the correspondence accuracy between the camera imaging plane and the projector imaging plane directly influences the performance of projector calibration; the correspondence error inevitably propagates. Recently, Pistellato et al. [14] used a sphere with a known radius as a calibration target for calibrating a telecentric camera-projector setup. However, that method is not suitable for a telecentric structured light system consisting of a telecentric camera and a pin-hole projector.
To overcome the above problems, we propose a polynomial-based method to calibrate a telecentric structured light system without requiring projector calibration. The proposed method involves two transformations: one from pixel coordinates to transverse coordinates, and the other from phase information to depth data. The former is accomplished by invoking the orthographic projection model of a telecentric camera. The latter is achieved by using vertical fringe sequences and a translation stage. Compared with existing calibration methods that need to calibrate a projector with the help of a camera, the proposed method not only increases measurement accuracy but also avoids the complex process of projector calibration. In the following sections, the principle of the proposed system calibration method is first described. Then, the effectiveness and accuracy of this method are validated by 3D shape measurements of a white plate with ring markers at different positions and orientations, a spherical object with a known diameter, and two objects with complex surfaces. Finally, the contributions of our work and remaining problems to be solved in future research are summarized.

Principle
With the assistance of a planar plate with equally spaced ring markers on it, the calibration of a telecentric structured light system consisting of a telecentric camera and a pin-hole projector requires two transformations. One is from phase to depth, which is called axial calibration. The other is from pixel positions to transverse coordinates, which is called transverse calibration. Prior to system calibration, the intrinsic and extrinsic parameters of the telecentric camera can be determined from ring centers extracted by the ellipse fitting algorithm [15,16]. Figure 1 shows the imaging model of a telecentric structured light system, where the optical axis PO_1 of the projector crosses the optical axis CO_1 of the camera at point O_1 on the reference plane, and the Z_w axis is perpendicular to the reference plane. H is the distance from the projection center P to the reference plane. D is an arbitrary point on the measured object. The points A and B mark the fringe deformation caused by the presence of the object: a sinusoidal fringe pattern originally projected at point B of the reference plane is now projected at point D on the object surface and then reflected along ray AD to the camera imaging plane. According to the geometry, the relationship between the fringe shift ∆x and the height h can be expressed by Equation (1).

Axial Calibration
where α is the angle between the optical axis of the telecentric camera and the reference plane. The phase difference ∆φ between the object surface and the reference plane is given by Equation (2), so Equation (1) can be rewritten as Equation (3), where f_0 is the spatial frequency of the fringe pattern projected on the plane normal to the projector optical axis. In theory, the system parameters H, α, x, X_1, and f_0 can all be obtained. However, the direct calibration of these parameters is extremely difficult and complicated in practice. The polynomial calibration model used for the calibration of traditional structured light systems [17,18] is more flexible than other models because it allows the system components to be arbitrarily arranged. Therefore, we introduce the polynomial model to the calibration of a telecentric structured light system for establishing the relationship between phase and depth. In fact, Equation (3) for every point (u, v) of the calibration volume can be expressed by the following polynomial equation [16]:

∆h(u, v) = a_0(u, v) + a_1(u, v) ∆φ(u, v) + … + a_N(u, v) [∆φ(u, v)]^N, (4)

where a_n, n = 0, …, N, are the polynomial coefficients, which contain the system parameters.
Consequently, a Look-Up Table (LUT) for a_n needs to be constructed at each pixel position to establish the relationship between phase and depth. Axial calibration aims to calculate the coefficients of this polynomial equation. The plate with ring markers on it is located at several parallel positions h_i in the measuring volume with a translation stage, as shown in Figure 2. At each plate position, sinusoidal fringe patterns are projected on the plate surface to provide the phase information of each pixel. Note that the depth ∆h(u, v) in Equation (4) is a relative depth value with regard to the reference plane, so the depth data offered by the translation stage need to be transformed into the reference coordinate system, whose origin is approximately in the middle of the measuring volume.
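The per-pixel polynomial fit and LUT evaluation described above can be sketched in code. This is a minimal NumPy illustration under stated assumptions: the array shapes are hypothetical, and a pixel-wise least-squares fit stands in for whatever solver the authors actually used.

```python
import numpy as np

def fit_axial_lut(delta_phi, delta_h, order=5):
    """Fit, at every pixel, a polynomial mapping relative phase to relative
    depth.  delta_phi: (K, H, W) unwrapped phase differences at the K plate
    positions; delta_h: (K,) relative depths from the translation stage.
    Returns (order+1, H, W) coefficients a_n, highest order first."""
    K, H, W = delta_phi.shape
    phi = delta_phi.reshape(K, -1)
    coeffs = np.empty((order + 1, H * W))
    for p in range(H * W):
        # Vandermonde system for this pixel: depth as a polynomial in phase
        A = np.vander(phi[:, p], order + 1)          # (K, order+1)
        coeffs[:, p] = np.linalg.lstsq(A, delta_h, rcond=None)[0]
    return coeffs.reshape(order + 1, H, W)

def depth_from_phase(coeffs, delta_phi):
    """Evaluate the per-pixel polynomial LUT on a phase-difference map
    (Horner's scheme, highest-order coefficient first)."""
    z = np.zeros_like(delta_phi)
    for c in coeffs:
        z = z * delta_phi + c
    return z
```

The fit is done once per calibration; afterwards only the cheap Horner evaluation runs per measurement.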


Transverse Calibration
Transverse calibration needs to establish the relationship between pixel coordinates and the X, Y coordinates. This relationship is linear in an actual telecentric structured light system because the distortion of the telecentric lens can be ignored and the required depth information is offered by a precise translation stage. The transverse calibration can then be described by the following polynomial equations:

x_w(u, v) = b_0 + b_1 z_w, y_w(u, v) = c_0 + c_1 z_w, (5)

where b_0, b_1, c_0, and c_1 are the polynomial coefficients, which depend on the chosen pixel position (u, v), z_w is the depth provided by the translation stage, and (x_w, y_w) are the coordinates of a point on the plate in the reference coordinate system, which can be obtained by the following two steps. An LUT for b_i and c_i, i = 0, 1, can therefore be constructed at each pixel position to establish the relationship between the transverse coordinates and pixel positions.
Step #1: Calculating the spatial position relationship between the plate and the camera imaging plane. As shown in Figure 2, the projection from an arbitrary ring center on each plate to the camera imaging plane can be determined by [19]

[u_c, v_c, 1]^T = H [x_wc, y_wc, 1]^T, (6)

where α and β are respectively the magnification ratios along the U and V axes of the imaging plane, (u_0, v_0) are the center coordinates of the camera imaging plane, r_ij and t_i respectively represent elements of the 3 × 3 rotation matrix and the 3 × 1 translation vector, (x_wc, y_wc) are the world coordinates of the ring centers, (u_c, v_c) are the corresponding pixel coordinates on the imaging plane, and H is the 3 × 3 homography matrix between the plate and the camera imaging plane, built from α, β, (u_0, v_0), r_ij, and t_i.

Step #2: Calculating the world coordinates of each pixel point on the plate. The world coordinates of each pixel point on the plate can be calculated by performing a reverse operation on Equation (6):

[x_w, y_w, 1]^T = H^−1 [u, v, 1]^T, (7)

where (x_w, y_w) are the world coordinates of each pixel point on the plate, (u, v) are the corresponding pixel coordinates on the camera imaging plane, and H^−1 is the inverse matrix of H.
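The two steps above can be sketched in code. The exact internal layout of the homography is not recoverable from the text, so `make_H` below uses one plausible affine form assembled from the quantities the text names (α, β, u_0, v_0, r_ij, t_i); treat that layout as an assumption, not the paper's definitive matrix.

```python
import numpy as np

def make_H(alpha, beta, u0, v0, R, t):
    # One plausible affine-homography layout for a telecentric camera;
    # the exact arrangement in the paper may differ.
    return np.array([
        [alpha * R[0, 0], alpha * R[0, 1], alpha * t[0] + u0],
        [beta  * R[1, 0], beta  * R[1, 1], beta  * t[1] + v0],
        [0.0,             0.0,             1.0],
    ])

def plate_to_pixel(H, xw, yw):
    # Step #1: project a plate point to the camera imaging plane.
    u, v, w = H @ np.array([xw, yw, 1.0])
    return u / w, v / w

def pixel_to_plate(H, u, v):
    # Step #2: invert the homography to recover plate coordinates.
    xw, yw, w = np.linalg.inv(H) @ np.array([u, v, 1.0])
    return xw / w, yw / w
```

Because the last row is [0, 0, 1], the mapping is affine and the inversion is always well defined for nonzero magnifications.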

Experimental System
The established telecentric structured light system, shown in Figure 3a, includes a projector (PRO4500, Texas Instruments, Texas, United States) with a resolution of 912 × 1140 pixels and a camera (ECO445CVGE, SVS-VISTEK GmbH, Innsbruck, Germany) with a resolution of 1296 × 964 pixels and a pixel size of 3.45 µm. The bilateral telecentric lens mounted on the camera has the model number GCO230105 and a magnification of 0.057. Additionally, a calibration plate with 12 × 9 discrete ring markers was used for system calibration and accuracy evaluation, as shown in Figure 3b. The distance between adjacent markers is 7.5 mm in both the horizontal and vertical directions. For all experiments, a four-step phase-shifting algorithm was used for calculating the wrapped phase, and the optimum fringe number selection method [20] was used for calculating the unwrapped phase at each pixel position.
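The four-step phase-shifting computation mentioned here is standard; a minimal sketch, assuming phase shifts of 0, π/2, π, and 3π/2:

```python
import numpy as np

def wrapped_phase(I1, I2, I3, I4):
    """Four-step phase shifting with shifts 0, pi/2, pi, 3pi/2:
    I_k = A + B*cos(phi + k*pi/2), hence
    I4 - I2 = 2B*sin(phi) and I1 - I3 = 2B*cos(phi),
    so phi = atan2(I4 - I2, I1 - I3), wrapped to (-pi, pi]."""
    return np.arctan2(I4 - I2, I1 - I3)
```

The background intensity A and modulation B cancel out, which is why the four-step algorithm is robust to ambient light.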

System Calibration Procedure
According to the principle of the proposed system calibration method in Section 2, the whole calibration procedure is performed step by step as follows.
Step #1: Telecentric camera calibration before system calibration. The calibration plate was first placed at ten different positions and orientations in the measuring volume. At each position, the plate image was captured and then used for extracting the ring centers. Finally, the intrinsic and extrinsic parameters of the telecentric camera were determined through the correspondences between the pixel coordinates of the extracted marker points and their spatial coordinates. The camera calibration accuracy can be assessed by the root-mean-square error (RMSE) of the ring centers [21]:

RMSE = sqrt( (1/N) Σ_{i=1}^{N} [ (u_c,i − û_c,i)² + (v_c,i − v̂_c,i)² ] ),

where N is the total number of ring centers, (u_c, v_c) are the extracted ring center locations, and (û_c, v̂_c) are the re-projection point locations calculated from the calibrated camera parameters. The re-projection errors of the calibrated camera, plotted in Figure 4, characterize the performance of the camera calibration. In the experiment, the RMSE of 0.0274 pixels is small, which shows that the telecentric camera was calibrated successfully. Additionally, the calculated radial distortion coefficients k_1 = −4.492 × 10^−4 and k_2 = 5.873 × 10^−5 are very small, which is consistent with the low-distortion property of a telecentric lens. Other types of distortion, such as tangential and prism distortion, are normally much smaller than the radial distortion. It is therefore reasonable to ignore lens distortion in the system calibration.
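The RMSE defined above can be computed directly; a small NumPy sketch:

```python
import numpy as np

def reprojection_rmse(extracted, reprojected):
    """RMSE over the N ring centers: the root of the mean squared
    Euclidean distance between extracted and re-projected locations.
    Both inputs are (N, 2) arrays of (u, v) pixel coordinates."""
    extracted = np.asarray(extracted, dtype=float)
    reprojected = np.asarray(reprojected, dtype=float)
    d2 = np.sum((extracted - reprojected) ** 2, axis=1)
    return float(np.sqrt(d2.mean()))
```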
Step #2: Capture of fringe images and texture images for system calibration. During the system calibration, the same plate was rigidly fixed on the translation stage (HGAM307, Henggong Instrument Co., Ltd., Beijing, China) with a resolution of 10 µm and then translated along the depth direction from −6 to 6 mm, with an increment of 1 mm between successive positions. At each position, three sinusoidal fringe pattern sets with optimum fringe numbers of 100, 99, and 90, each set containing four phase-shifted fringe patterns with a π/2 shift in between, were projected on the plate surface. The camera captured the twelve fringe images for calculating the phase information and one texture image of the plate under white illumination for extracting the ring centers. These images and depth locations were saved for subsequent processing.
Step #3: Axial calibration. Using the captured fringe images, the unwrapped phase of all pixels at each plate position was first calculated. Then, the plate position in the middle of the measuring volume, i.e., Z_w = 0, was selected as the reference plane to transform both the saved depth locations and the acquired phase into the reference coordinate system. Once the relative phase and relative depth information are known, an LUT for a_n at each pixel can be calculated by invoking Equation (4) and saved for the subsequent reconstruction of depth data. It should be mentioned that a fifth-order polynomial fitting was chosen for the axial calibration to achieve the optimal phase resolution.
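The phase unwrapping in this step relies on the optimum three-fringe number selection method [20], whose details are outside this text. The sketch below instead illustrates the simpler two-frequency rounding principle behind such temporal unwrapping, with hypothetical fringe numbers; it is not the method of [20].

```python
import numpy as np

def unwrap_with_low_freq(phi_high, Phi_low, f_high, f_low):
    """Temporal unwrapping sketch: scale an already-absolute low-frequency
    phase up to the high frequency, round to the nearest 2*pi fringe
    order k, and correct the wrapped high-frequency phase."""
    k = np.round((Phi_low * f_high / f_low - phi_high) / (2 * np.pi))
    return phi_high + 2 * np.pi * k
```

The rounding succeeds as long as the low-frequency phase error, scaled by f_high/f_low, stays below π.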
Step #4: Transverse calibration. Using the captured texture image of the plate, the center position of each marker was first extracted. Afterward, based on Equation (7), the transverse coordinates of all pixel points at each plate position were calculated with the intrinsic and extrinsic camera parameters obtained in Step #1. The pixel coordinates and the transverse coordinates on the plate from the ten images were used to construct an LUT for b_0, b_1, c_0, and c_1 at each pixel according to Equation (5). All the obtained coefficients were saved for the subsequent reconstruction of transverse data.
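Once both LUTs are saved, reconstruction at a pixel combines them; a hedged sketch assuming the axial polynomial of the text and a linear transverse model x = b_0 + b_1·z, y = c_0 + c_1·z (our reading of the linear relationship the text describes):

```python
import numpy as np

def reconstruct_point(a, b, c, delta_phi):
    """Per-pixel reconstruction from the saved LUTs (sketch):
    depth from the axial polynomial coefficients a (highest order first),
    then transverse coordinates from the assumed linear model."""
    z = np.polyval(a, delta_phi)
    x = b[0] + b[1] * z
    y = c[0] + c[1] * z
    return x, y, z
```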

Quantitative and Qualitative Evaluation
To evaluate the performance of the proposed system calibration method, we also calibrated the system using the method that requires projector calibration [8]. The plate was placed at exactly the same spatial positions used in the above camera calibration to ensure consistent camera parameters. At each position, twelve vertical and twelve horizontal sinusoidal fringe patterns with the same optimum fringe numbers were projected onto the plate surface. The telecentric camera captured the deformed fringe images for calculating phase maps in the vertical and horizontal directions and a texture image under white illumination for extracting the center positions of all markers. We used the obtained phase maps in the two directions to calculate the corresponding points of all markers in the projector pixel coordinate system. By establishing the correspondence between the pixel coordinates of the marker points and their spatial coordinates, the intrinsic and extrinsic parameters of the projector were determined. Eventually, the intrinsic and extrinsic parameters of the system were obtained and could be used for 3D measurement. It should be noted that the two methods used the same reference position to ensure that the reconstructed 3D data share the same coordinate system.
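The camera-to-projector correspondence step of the comparison method maps each camera pixel's two absolute phases to projector coordinates. A sketch under the assumption that a fringe number f_v spans the full projector width (the paper's exact convention is not stated):

```python
import numpy as np

def camera_to_projector(Phi_v, Phi_h, proj_w, proj_h, f_v, f_h):
    """With f_v vertical fringes across the projector width, one period
    spans proj_w / f_v projector pixels, so the absolute phase maps
    linearly to projector coordinates (assumed convention)."""
    up = Phi_v / (2 * np.pi) * (proj_w / f_v)
    vp = Phi_h / (2 * np.pi) * (proj_h / f_h)
    return up, vp
```

It is exactly this mapping that couples any phase error into the projector-calibration pipeline, which is the error propagation the proposed method avoids.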
For accuracy comparison, we captured an additional set of five different poses in the calibration volume and measured the lengths of the green line AB and the blue line CD shown in Figure 3b. The two lines are formed by the ring centers A, B, C, and D, and each has a theoretical length of 84.853 mm. We used the proposed calibration method and the method using projector calibration to reconstruct the 3D shape of the plate at each position, and then extracted the 3D coordinates of these four points. The Euclidean distances AB and CD were calculated and are listed in Table 1. By comparing the differences between the measured and actual values, we found that the mean errors of the proposed method are smaller than those of the method using projector calibration. These results clearly prove the effectiveness of the proposed system calibration method.

To further evaluate the calibration accuracy, we reconstructed a regular sphere with a diameter of 38.1 mm using both the method requiring projector calibration and our method. The reconstructed 3D geometries are shown in Figure 5a,d, respectively. Then, we fitted each reconstructed 3D result to an ideal sphere. The model of the ideal sphere is (x − 68.59)² + (y + 1.99)² + (z + 560.32)² = 362.14 with the method requiring projector calibration, and (x − 68.50)² + (y + 1.90)² + (z + 560.14)² = 362.71 with our method. We compared the reconstructed 3D geometry with the ideal sphere; the corresponding 2D error maps are shown in Figure 5b,e. Finally, we extracted the error distributions of the middle lines from Figure 5b,e and plotted them in Figure 5c,f. The mean error and standard deviation are 0.0429 mm and 0.0048 mm with the method requiring projector calibration and 0.0182 mm and 0.0034 mm with our method. These results again demonstrate that the proposed system calibration method achieves better calibration accuracy than the method requiring projector calibration.
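The sphere evaluation can be reproduced by fitting the reconstructed points to an ideal sphere. The paper does not state its fitting algorithm, so this sketch uses a standard linear least-squares formulation:

```python
import numpy as np

def fit_sphere(pts):
    """Linear least-squares sphere fit: rewrite |p - c|^2 = r^2 as
    2*c.p + (r^2 - |c|^2) = |p|^2 and solve the linear system for the
    center c and the combined constant, then recover the radius r."""
    pts = np.asarray(pts, dtype=float)
    A = np.c_[2.0 * pts, np.ones(len(pts))]
    b = np.sum(pts ** 2, axis=1)
    sol = np.linalg.lstsq(A, b, rcond=None)[0]
    center, d = sol[:3], sol[3]
    return center, np.sqrt(d + center @ center)
```

Once the ideal sphere is fitted, the per-point signed radial residuals give the error maps of Figure 5b,e.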
To visually evaluate the performance of the proposed system calibration method, we measured two plasters having freeform surfaces, as shown in Figure 6a,c. Specifically, twelve sinusoidal fringe patterns with four-step phase shifting and fringe numbers of 100, 99, and 90 were projected through the green channel onto the plasters' surfaces by the projector. Then, the fringe patterns reflected from the plasters' surfaces were captured by the telecentric camera from another viewpoint. Subsequently, the unwrapped phase at each pixel was calculated by the four-step phase-shifting algorithm and the optimum three-fringe number selection method. By transforming the obtained absolute phase maps into depth and transverse data, we reconstructed the 3D geometries of the plasters, as shown in Figure 6b,d. It is clear that the details of the plasters were faithfully reconstructed. This experiment demonstrates that the proposed calibration method can supply high-quality 3D shape reconstruction for objects with freeform surfaces.

Conclusions
We have proposed a polynomial-based calibration method for a telecentric structured light system consisting of a telecentric camera and a pin-hole projector. The proposed method is divided into axial calibration and transverse calibration. The former builds the relationship between the absolute phase and the depth data. The latter establishes the relationship between pixel positions and the X, Y coordinates. Compared with existing methods that need to calibrate the projector, the calibration method proposed in this research for telecentric structured light systems has the following advantages: (1) Accuracy. The proposed method averts the propagation of correspondence errors between camera pixels and projector pixels, hence increasing measurement accuracy. (2) Ease of operation. During the whole calibration process, the calibration plate is fixed on a translation stage and then successively translated along the depth direction. The orientation of the plate does not change, so the proposed calibration method is easy to operate. (3) Simplicity. The proposed calibration method avoids projector calibration, which keeps the calibration procedure simple.
In the proposed method, using more calibration plate positions covers the measuring volume better. This provides higher calibration accuracy but increases the time consumed in projecting and capturing fringe patterns. To strike a balance between calibration accuracy and time complexity, an appropriate number of calibration plate positions should be chosen according to the depth of the measured object. Additionally, a translation stage is required to provide the calibration plate with accurately known depth information to meet the requirement of high-accuracy measurement, so the calibration procedure is difficult to perform outside a laboratory environment. Comprehensively considering the pros and cons, we conclude that the proposed method is well suited for high-accuracy 3D measurement of small-scale objects [22] and has great potential in 3D shape measurement of millimeter-scale micro-parts used in micro-electro-mechanical systems (MEMS) [23].
