Calibration Method for Line-Structured Light Three-Dimensional Measurements Based on a Single Circular Target

Abstract: Single circular targets are widely used as calibration objects during line-structured light three-dimensional (3D) measurements because they are versatile and easy to manufacture. This paper proposes a new calibration method for line-structured light 3D measurements based on a single circular target. First, the target is placed in several positions and illuminated by a light beam emitted from a laser projector. A camera captures the resulting images and extracts an elliptic fitting profile of the target and the laser stripe. Second, an elliptical cone equation defined by the elliptic fitting profile and optical center of the camera is established based on the projective geometry. By combining the obtained elliptical cone and the known diameter of the circular target, two possible positions and orientations of the circular target are determined and two groups of 3D intersection points between the light plane and the circular target are identified. Finally, the correct group of 3D intersection points is filtered and the light plane is progressively fitted. The accuracy and effectiveness of the proposed method are verified both theoretically and experimentally. The obtained results indicate that a calibration accuracy of 0.05 mm can be achieved for an 80 mm × 80 mm planar target.


Introduction
Measurement is a process of obtaining quantifiable information that requires a very high accuracy [1]. Currently, three-dimensional (3D) measurements are widely performed in the design and construction of augmented reality, automated quality verification, restoration of lost computer-aided design data, cultural relic protection, surface deformation tracking, and 3D reconstruction for object recognition [2][3][4][5]. Common 3D measurement approaches can be divided into two categories: contact and non-contact measurements. Normally, the applicability of contact measurements is limited because the measured object may not be touchable. In contrast, non-contact measurements exhibit high flexibility [6]. Among the different non-contact measurement methods, line-structured light 3D measurements are frequently conducted because of their simplicity, high precision, and high measurement speed [7].
A general line-structured light 3D measurement process can be described as follows. (1) A line-structured light beam is emitted by a laser generator. (2) A light plane is projected onto the surface of the measured object. (3) Two-dimensional (2D) image coordinates of the intersection points between the light plane and the object surface are obtained by a camera. (4) The corresponding 3D coordinates of the intersection points are calculated by a developed algorithm. In this technique, proper calibration is of utmost importance because it strongly affects the measurement accuracy and simplicity. Usually, the calibration procedure includes camera calibration and light plane calibration processes. Many research studies on camera calibration have been conducted in the past. Zhang's [8] calibration method based on a planar target is the most widely used approach. To increase its accuracy, various advanced camera calibration techniques have been developed [9,10]. For example, the form of the calibration board was changed from that of a checkerboard to a circular calibration board because the edge of a characteristic circle contained more information and exhibited a higher positioning accuracy. Moreover, multiple methods for calibrating the light plane were also proposed [11][12][13][14][15][16][17][18]. Huynh [11], Xu [12], and Dewar [13] introduced a 3D target to define the light plane calibration points using the cross-ratio invariability principle (CRIP). However, all three methods were very complex and possessed a low calibration accuracy. Zhou [14] proposed an on-site calibration method using a planar target in which light plane calibration points were obtained through repeated target movements. Liu [15] replaced the points with a Plücker matrix; however, the resulting method was complex and required a planar calibration target.
Xu [16] utilized a flat board with four balls as a calibration target, while Wei [17] reported a calibration technique based on a one-dimensional (1D) target. The 3D coordinates of a series of intersection points between the light plane and the 1D target were determined from a known distance between select points of the 1D target. The obtained coordinates were fitted to solve the light plane equation. Liu [18] developed a line-structured light vision sensor based on a single ball target. Its calibration process was relatively simple; however, a high-precision ball target was difficult to manufacture.
In this study, a new method for the on-site calibration of a line-structured light 3D measurement process based on a single circular target is proposed. The main reason for selecting a circular target as the calibration object is its ease of manufacture: in principle, any single circle of known size can be used as a calibration target with the proposed method. First, a line-structured light 3D measurement model is established. Second, the light plane is calibrated with a single circular target by performing the following steps: (1) An elliptic fitting profile of the circular target is obtained by the proposed "revisited arc-support line segment (RALS) method", and sub-pixel points of the light stripe are extracted by another technique based on the Hesse matrix. (2) An elliptical cone equation defined by the elliptic fitting profile and camera optical center is determined. As the diameter of the circular target is known, the position and orientation of the circular target can assume only two values. (3) A camera model is established to obtain two possible light stripe lines, and the light plane equation is progressively solved. Finally, the effectiveness and accuracy of the proposed method are verified both theoretically and experimentally.
This paper is organized as follows. Section 2 outlines the principles of the line-structured light 3D measurement model and light plane calibration based on a single circular target. Sections 3 and 4 describe the obtained simulation and experimental data, respectively. Finally, Section 5 summarizes the main conclusions from the findings of this work.

Figure 1 shows a schematic diagram of the line-structured light 3D measurement process. The utilized setup includes a camera and a laser projector. A line laser is cast onto an object by the laser projector, after which the camera captures the light stripe images distorted by the object surface geometry. Finally, the 3D coordinates of the points on the light stripe are determined. By performing a precise line motion, a 3D profile of the object can be obtained.

Line-Structured Light 3D Measurement Model
In Figure 1, the notations (O_w; X_w, Y_w, Z_w) and (O_c; X_c, Y_c, Z_c) represent the world coordinate system (WCS) and the camera coordinate system (CCS), respectively. For simplicity of analysis, the WCS and CCS are set to be identical. (O_uv; u, v) is the camera pixel coordinate system (CPCS), while (O_n; x_n, y_n) represents the normalized image coordinate system (NICS). Presumably, there is an arbitrary intersection point P between the light plane and the measured object, where P_c = (X_pc, Y_pc, Z_pc)^T represents its coordinates in the CCS. P_uv = (u_p, v_p)^T denotes the coordinates of the undistorted image point. In addition, P_n = (x_pn, y_pn)^T and P_d = (x_pd, y_pd)^T represent the NICS coordinates of the undistorted and distorted physical images, respectively. Generally, the camera lens distorts P_n, especially in the radial direction. For this reason, the distortion model described in Ref. [19] expresses the distorted projection P_d on the normalized image plane as

$$P_d = f_d(P_n) = \left(1 + k_1 r^2 + k_2 r^4 + k_3 r^6\right) P_n + \begin{bmatrix} 2 g_1 x_n y_n + g_2 (r^2 + 2 x_n^2) \\ g_1 (r^2 + 2 y_n^2) + 2 g_2 x_n y_n \end{bmatrix} \qquad (1)$$

where $r^2 = x_n^2 + y_n^2$; K = [k_1, k_2, k_3] and G = [g_1, g_2] represent the radial and tangential distortion coefficient vectors, respectively. Due to the advantages of homogeneous coordinates, the utilized coordinates are converted into homogeneous coordinates by appending 1 as the last element (denoted by a tilde). By taking into account the projection and coordinate transformation, the actual projection on the image plane, P_uv, can be expressed as follows:

$$\widetilde{P}_{uv} = A\,\widetilde{P}_d, \qquad P_n = \left(X_{pc}/Z_{pc},\; Y_{pc}/Z_{pc}\right)^T \qquad (2)$$

where A denotes the camera intrinsic matrix:

$$A = \begin{bmatrix} f_u & \gamma & u_0 \\ 0 & f_v & v_0 \\ 0 & 0 & 1 \end{bmatrix} \qquad (3)$$

Typically, A consists of five parameters: f_u, f_v, u_0, v_0, and γ. The skew factor γ is usually set to zero. f_u and f_v are the horizontal and vertical focal lengths, respectively, and u_0 and v_0 are the coordinates of the camera principal point.
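The forward imaging chain of Equations (1)-(3) can be sketched in Python/NumPy as follows; the intrinsic values and the test point are illustrative, not those of the actual system:

```python
import numpy as np

def distort(p_n, K, G):
    """Apply the lens distortion model of Equation (1) to a normalized
    image point p_n = (x_n, y_n); K = [k1, k2, k3], G = [g1, g2]."""
    x, y = p_n
    r2 = x * x + y * y
    radial = 1.0 + K[0] * r2 + K[1] * r2**2 + K[2] * r2**3
    dx = 2.0 * G[0] * x * y + G[1] * (r2 + 2.0 * x * x)   # tangential terms
    dy = G[0] * (r2 + 2.0 * y * y) + 2.0 * G[1] * x * y
    return np.array([x * radial + dx, y * radial + dy])

def project(P_c, A, K, G):
    """Project a 3D point in the CCS to pixel coordinates P_uv (Eq. (2))."""
    p_n = np.asarray(P_c[:2]) / P_c[2]   # normalized image coordinates
    p_d = distort(p_n, K, G)             # distorted normalized coordinates
    p_h = A @ np.append(p_d, 1.0)        # homogeneous pixel coordinates
    return p_h[:2] / p_h[2]

# Illustrative intrinsics (skew gamma set to zero, as in the text)
A = np.array([[1600.0, 0.0, 640.0],
              [0.0, 1600.0, 480.0],
              [0.0, 0.0, 1.0]])
P_uv = project(np.array([10.0, -5.0, 450.0]), A, [0.0, 0.0, 0.0], [0.0, 0.0])
```

With all distortion coefficients set to zero, the projection reduces to the ideal pinhole model, which is a useful sanity check when implementing the full chain.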
As shown in Equation (2), the nonlinear camera model describes the imaging process from the CCS to the CPCS. From the geometric viewpoint, Equation (2) characterizes a ray passing through P_uv and P_c. To uniquely determine P_c in the CCS, the light plane Π serves as another constraint, i.e., P_c should satisfy the equation of Π:

$$\Pi_c^{T}\,\widetilde{P}_c = 0 \qquad (4)$$

where Π_c = (a, b, c, d)^T includes the four coefficients of Equation (4) and $\widetilde{P}_c = (X_{pc}, Y_{pc}, Z_{pc}, 1)^T$.
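The ray-plane intersection expressed by Equations (2) and (4) can be sketched as follows; the intrinsics and plane coefficients are illustrative, and the pixel is assumed already undistorted:

```python
import numpy as np

def back_project(P_uv, A, Pi):
    """Recover the 3D point P_c on the light plane Pi = (a, b, c, d) from an
    undistorted pixel P_uv by intersecting the camera ray with the plane."""
    # Direction of the ray from the optical centre through the pixel
    ray = np.linalg.solve(A, np.array([P_uv[0], P_uv[1], 1.0]))
    # P_c = s * ray must satisfy a*X + b*Y + c*Z + d = 0
    s = -Pi[3] / (Pi[:3] @ ray)
    return s * ray

# Hypothetical intrinsics and a light plane Z = 450 mm
A = np.array([[1600.0, 0.0, 640.0],
              [0.0, 1600.0, 480.0],
              [0.0, 0.0, 1.0]])
Pi = np.array([0.0, 0.0, 1.0, -450.0])
P_c = back_project((675.0, 462.0), A, Pi)
```

This single solve-and-scale step is the entire measurement model once A, the distortion coefficients, and Π_c have been calibrated, which is why the accuracy of Π_c dominates the final measurement accuracy.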
Generally, Equations (2) and (4) constitute the line-structured light 3D measurement model, which is used to calculate P_c in the CCS from P_uv in the CPCS. This model contains several undetermined parameters: the parameters in A, the radial distortion coefficients in K, the tangential distortion coefficients in G, and the coefficients in Π_c. Among these parameters, A, K, and G are the intrinsic parameters of the camera, which are not affected by the calibration procedure. To obtain accurate camera parameters, several advanced calibration approaches have been developed. Herein, the intrinsic parameters of the camera are estimated by Zhang's method [8]. Furthermore, the MATLAB toolbox developed by Bouguet [20] can be directly utilized for their determination. Nevertheless, Π_c changes easily because it is related to the device position and orientation. Therefore, the proper calibration of the light plane Π (i.e., the acquisition of Π_c) is the focus of the present study and is described in detail in the following sections.

Image Processing Algorithm

Figure 2a displays the image of the circular target captured during calibration. The structure of the 2D image data is very important [21]. The light stripe must be extracted, and the elliptic image profile of the circular target edge must be fitted. Hence, the first objective of this process is to detect (extract and fit) the ellipse because it contains the light stripe (note that the ellipse can be used as a mask to reduce the search area during light stripe extraction). The complete image processing algorithm is outlined in Figure 3, while the ellipse detection and light stripe extraction procedures are described in detail below.

The existing ellipse detection methods can be grouped into Hough transform [22] and edge following [23] ones. The former techniques are relatively simple but not robust in complex scenarios. In contrast, the latter methods are very precise but require long computation times. Therefore, we have developed an efficient high-quality ellipse detection method [24], called the RALS method. Its algorithm consists of four steps: (1) arc-support line segment (LS) extraction, (2) arc-support group forming, (3) initial ellipse set generation, and (4) ellipse clustering and candidate verification.

Step 1. Arc-support LS extraction: Arc-support LS extraction helps prune straight LSs while retaining the arc geometric cues. It has two critical parameters: direction and polarity. A detailed description of the corresponding procedure can be found in Ref. [25].

Step 2. Arc-support group forming: An elliptic curve may consist of several arc-support LSs, which require a link to form a group. Two consecutive linkable arc-support LSs should satisfy the continuity and convexity conditions. The former condition states that the head of one arc-support LS and the tail of another should be close enough. According to the convexity condition, the linked LSs should move in the same direction (clockwise or anticlockwise). Iteratively linked arc-support LSs that share similar geometric properties are called an "arc-support group".
Step 3. Initial ellipse set generation: An arc-support group may contain all the arc-support LS of a curve or merely a separate arc-support LS. Thus, two complementary methods are utilized to generate the initial ellipse set. (1) From the local perspective, the arc-support LS group with a relatively high saliency score likely represents the dominant component of the polygonal approximation of an ellipse. (2) From the global perspective, the troublesome situation of the arc-support groups of a common ellipse is characterized by a polarity constraint, region restriction, and adaptive inlier criterion.
Step 4. Ellipse clustering and candidate verification: Owing to the presence of duplicates in the initial ellipse set, a hierarchical clustering method based on mean shift [26] has been developed. For convenient clustering, this method decomposes the five-dimensional ellipse parameter space into three low-dimensional cascaded spaces (centers, orientations, and semiaxes). Moreover, to ensure the high quality of the detected ellipses, ellipse candidate verification, which incorporates stringent regulations for goodness measurements and elliptic geometric properties for refinement, is conducted. The resulting ellipse E is the elliptic fitting profile, as indicated in Figure 2b.
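As an illustration of the fitting stage only (a minimal direct least-squares conic fit, not the full RALS detector described above), the elliptic fitting profile E can be estimated from edge points as follows:

```python
import numpy as np

def fit_ellipse(pts):
    """Least-squares conic fit a*u^2 + b*u*v + c*v^2 + d*u + e*v + f = 0 to
    edge points (a simplified stand-in for the RALS fitting stage).
    Returns the symmetric 3x3 matrix E with [u, v, 1] E [u, v, 1]^T = 0."""
    u, v = pts[:, 0], pts[:, 1]
    D = np.column_stack([u * u, u * v, v * v, u, v, np.ones_like(u)])
    # Minimize ||D q|| subject to ||q|| = 1: smallest right singular vector
    _, _, Vt = np.linalg.svd(D)
    a, b, c, d, e, f = Vt[-1]
    return np.array([[a, b / 2, d / 2],
                     [b / 2, c, e / 2],
                     [d / 2, e / 2, f]])
```

The returned matrix E is exactly the coefficient matrix used later to construct the elliptical cone; in practice, a constrained fit and the RALS grouping steps are needed for robustness against outliers and partial occlusion by the light stripe.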
After the ellipse detection, the light stripe within the ellipse is extracted as well. The automatic sub-pixel extraction process of the light stripe consists of three steps: (1) Gaussian filtering, (2) solving the Hesse matrix, and (3) extracting sub-pixel points.
Step 1. Gaussian filtering: A Gaussian filter is applied to the undistorted image. According to Ref. [27], the standard deviation of the Gaussian filter must satisfy the condition σ ≥ w/√3, where w is the half-width of the laser stripe.
Step 2. Solving the Hesse matrix: The Hesse matrix of each pixel (u, v) is defined as

$$H(u, v) = \begin{bmatrix} r_{uu} & r_{uv} \\ r_{uv} & r_{vv} \end{bmatrix} \qquad (5)$$

where r_uu, r_uv, and r_vv are the second-order partial image derivatives.
Step 3. Extracting sub-pixel points: The eigenvector corresponding to the eigenvalue of the largest absolute value denotes the normal direction (n_u, n_v) of the light stripe. The sub-pixel coordinates of the light stripe center point (p_u, p_v) can be expressed as

$$(p_u, p_v) = (u_0 + t n_u,\; v_0 + t n_v), \qquad t = -\frac{n_u r_u + n_v r_v}{n_u^2 r_{uu} + 2 n_u n_v r_{uv} + n_v^2 r_{vv}} \qquad (6)$$

where r_u and r_v are the first-order partial derivatives of the image and (u_0, v_0) is the reference point. If $t n_u \in [-0.5, 0.5]$ and $t n_v \in [-0.5, 0.5]$, the first-order derivative along (n_u, n_v) vanishes within the current pixel. If the second-order partial derivative is larger than a threshold, (p_u, p_v) represents the sub-pixel coordinates of the light stripe. The extracted light stripe is shown in Figure 2c.
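Steps 2 and 3 can be sketched for a single pixel as follows, assuming the Gaussian-smoothed image derivatives at that pixel are already available (the test values below are synthetic):

```python
import numpy as np

def stripe_subpixel(u0, v0, ru, rv, ruu, ruv, rvv):
    """Steger-style sub-pixel stripe centre for a single pixel (u0, v0),
    given the first- and second-order Gaussian image derivatives there."""
    H = np.array([[ruu, ruv],
                  [ruv, rvv]])          # the Hesse matrix of the pixel
    w, V = np.linalg.eigh(H)
    # Eigenvector of the largest-magnitude eigenvalue = stripe normal (nu, nv)
    nu, nv = V[:, int(np.argmax(np.abs(w)))]
    t = -(nu * ru + nv * rv) / (nu * nu * ruu + 2.0 * nu * nv * ruv + nv * nv * rvv)
    if -0.5 <= t * nu <= 0.5 and -0.5 <= t * nv <= 0.5:
        return u0 + t * nu, v0 + t * nv   # valid sub-pixel centre
    return None                           # maximum lies outside this pixel
```

Running this test at every pixel inside the ellipse mask yields the set of sub-pixel stripe points shown in Figure 2c.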


Single Circle Position and Posture Measurement
The position and orientation of a circle are determined by the circle center and normal vector of the plane containing the circle, respectively. In this section, it is assumed that the intrinsic parameters of the camera are known. In Figure 4, Oc is the optical center of the camera, (Oc; Xc, Yc, Zc) represents the CCS matching the WCS, (Ouv; u, v) is the CPCS, and R denotes the radius of circle C. Using the image processing algorithm described in Section 2.2.1, the elliptic fitting profile E of the circle target edge can be obtained. The main objectives of the process are (1) solving the elliptic cone equation of Q, (2) converting Q from the CCS to the standard coordinate system (SCS), and (3) determining the parameters of two possible circles.

1. Solving the elliptic cone equation of Q in the CCS

As illustrated in Section 2.2.1, the profile curve of circle C in the CPCS is determined through ellipse fitting by solving the following equation:

$$\widetilde{P}_{Euv}^{T}\, E\, \widetilde{P}_{Euv} = 0 \qquad (7)$$

where E is the coefficient matrix of the elliptic equation of E and $\widetilde{P}_{Euv} = [u_E, v_E, 1]^T$ denotes the homogeneous pixel coordinates of the ellipse. The elliptic cone Q can be obtained from E and the optical center O_c of the camera. Assuming that A_I = A[I 0] is the auxiliary camera matrix, where A denotes the camera intrinsic matrix, the elliptic cone equation of Q is calculated using the back perspective projection model of the camera expressed by the following equation:

$$\widetilde{P}_{Qc}^{T}\, Q\, \widetilde{P}_{Qc} = 0 \qquad (8)$$

where $\widetilde{P}_{Qc} = [x_Q, y_Q, z_Q, 1]^T$ contains the homogeneous coordinates of spatial points on Q in the CCS, and $Q = A_I^T E A_I$ represents the coefficient matrix of the elliptic cone equation of Q. Because the last column of A_I is zero, Q can be expressed as:

$$Q = \begin{bmatrix} W & 0 \\ 0^T & 0 \end{bmatrix} \qquad (9)$$

Here, $W = A^T E A$ is a 3 × 3 symmetric matrix. Thus, Equation (8) may also be written as:

$$P_{Qc}^{T}\, W\, P_{Qc} = 0, \qquad P_{Qc} = [x_Q, y_Q, z_Q]^T \qquad (10)$$

2. Converting Q from the CCS to the SCS

The form of Equation (10) is complex, which complicates the entire computational process. Therefore, the coordinate system of Equation (10) is converted to the SCS. Note that the SCS shares its origin with the O_c of the CCS, and its z_s axis points from O_c to the center of C. The x_s and y_s axes conform to the right-handed coordinate system. Here, R is used to denote the rotation matrix from the SCS to the CCS as well as the conversion matrix:

$$P_{Qc} = R\, P_{Qs} \qquad (11)$$

By substituting Equation (11) into Equation (10), the following expression is obtained:

$$P_{Qs}^{T}\, \left(R^{-1} W R\right) P_{Qs} = 0 \qquad (12)$$

where $R^{-1} = R^T$ because R is orthogonal. $R^{-1} W R$ is established through diagonalization, i.e., $R^{-1} W R = \mathrm{Diag}(\lambda_1, \lambda_2, \lambda_3)$. In other words, after the eigenvalue decomposition of W, R and the corresponding eigenvalues can be determined. Hence, the elliptic cone equation of Q in the SCS is written as:

$$\lambda_1 x_s^2 + \lambda_2 y_s^2 + \lambda_3 z_s^2 = 0 \qquad (13)$$

To satisfy these requirements, two operations must be performed. (1) The column vectors of R should be unit-orthogonalized. (2) The order of the column vectors of R must be adjusted based on the rules of λ1, λ2, and λ3: (λ1 < λ2 < 0, λ3 > 0) or (λ1 > λ2 > 0, λ3 < 0).
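The two operations above (building W, diagonalizing it, and normalizing the eigenvalue order and handedness of R) can be sketched as follows; the sign flip reflects the fact that a conic matrix is only defined up to scale, and the example cone is synthetic:

```python
import numpy as np

def cone_in_scs(E, A):
    """Build the elliptic-cone matrix W = A^T E A in the CCS, then diagonalize
    it so that R^T W R = diag(lambda1, lambda2, lambda3) with two positive
    eigenvalues first and R a right-handed rotation (columns = unit
    eigenvectors, i.e., the SCS axes expressed in the CCS)."""
    W = A.T @ E @ A
    if np.sum(np.linalg.eigvalsh(W) > 0) == 1:
        W = -W                        # conic sign is arbitrary: make two
                                      # eigenvalues positive, one negative
    w, V = np.linalg.eigh(W)          # ascending eigenvalues, orthonormal V
    order = np.argsort(w)[::-1]       # lambda1 >= lambda2 > 0 > lambda3
    w, R = w[order], V[:, order]
    if np.linalg.det(R) < 0:
        R[:, 2] = -R[:, 2]            # eigenvector signs are free: keep R right-handed
    return w, R, W
```

The returned eigenvalues feed directly into the circle-pose expressions of the next step, and R maps SCS quantities back into the CCS.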

3. Determining the parameters of two possible circles

As mentioned above, the elliptic cone Q may be uniquely identified, and its equation in the SCS is expressed by Equation (13). Hence, the determination of the circle position and orientation is equivalent to locating a plane that intersects the elliptic cone Q to form a circular ring with radius R. In other words, the center coordinates of the circular ring are the positional parameters of circle C, and the normal vector of the plane is the orientational parameter of circle C. Nonetheless, according to the geometrical relation, there are two planes that satisfy this condition, as shown by the red and blue dotted circles in Figure 4. Ref. [28] reports closed-form expressions (Equations (14) and (15)) for the center (x_o, y_o, z_o) and the normal vector (n_ox, n_oy, n_oz) of the two circular rings formed by the two planes in the SCS, in terms of λ1, λ2, λ3, and R. To unify the coordinate system with that of the light plane, the two candidate poses are transformed to the CCS according to the following relationships:

$$(x_o, y_o, z_o)^T_{CCS} = R\,(x_o, y_o, z_o)^T, \qquad (n_{ox}, n_{oy}, n_{oz})^T_{CCS} = R\,(n_{ox}, n_{oy}, n_{oz})^T \qquad (16)$$
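The transformation of the candidate circle poses from the SCS back to the CCS amounts to two matrix-vector products, since the SCS and CCS share the camera origin and no translation is involved; the pose and rotation below are illustrative:

```python
import numpy as np

def pose_to_ccs(R, center_s, normal_s):
    """Transform a candidate circle centre and normal from the SCS to the
    CCS with the rotation R (no translation: the origins coincide)."""
    return R @ np.asarray(center_s), R @ np.asarray(normal_s)

# Hypothetical candidate pose in the SCS and a 90-degree rotation about x
R = np.array([[1.0, 0.0, 0.0],
              [0.0, 0.0, -1.0],
              [0.0, 1.0, 0.0]])
center_c, normal_c = pose_to_ccs(R, [0.0, 10.0, 450.0], [0.0, 0.0, 1.0])
```

Both candidate poses must be transformed, because the ambiguity between them is only resolved later, during the progressive solution of the light plane.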

Progressive Solution of the Light Plane Equation
As shown in Figure 5, the light plane intersects the circular target to form a laser stripe at the i-th position of the target. Meanwhile, the image plane captures the laser stripe to form the line segment L_i. In Figure 5, plane Π1 can be obtained through the camera model, while plane Π2 of the circular target is determined as specified in Section 2.2.2. However, two sets of possible parameters of plane Π2, corresponding to light stripes Ls A and Ls B, are obtained in Section 2.2.2, which causes ambiguity. Generally, Ls A and Ls B satisfy two conditions: (1) they can be noncoplanar lines, and (2) one of them should be on the light plane. Therefore, the following procedure has been proposed.


1. Initial light plane determination
Step 1: Two positions of the target are introduced, which correspond to four possible light stripes: Ls A,1 and Ls B,1 at the first target position, and Ls A,2 and Ls B,2 at the second target position.
Step 2: Four straight lines are combined to form N (N ≤ 4) planes Π LS,i (i = 1, 2, . . . , N). The condition N ≤ 4 is applied because a plane does not exist if the two lines are non-coplanar.
Step 3: A new position of the target is introduced, and the two corresponding possible light stripes are judged against the planes formed in step 2. If the two new light stripes are not on the current plane, the latter is not a light plane.
Step 4: If only one plane is left after step 3, it is the light plane. Otherwise, step 3 is repeated until only one plane is left.

2. Progressive refining of the light plane

After one initial light plane is determined, its parameters must be further optimized. A new position of the target is introduced due to the existence of two possible light stripes (e.g., Ls A and Ls B). If one of these stripes (Ls A) is located on the light plane and the other stripe (Ls B) is located away from the light plane, the light stripe Ls A will be added to fit (refine) the light plane. Finally, the iteration termination conditions are formulated as follows. (1) When the new position of the target is introduced, the variation in the light plane parameters is less than 0.1%. (2) The calibration images are exhausted.
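The geometric primitives behind the initial determination and progressive refinement, fitting a plane to stripe points and testing whether a candidate stripe lies on it, can be sketched as follows; the tolerance value is illustrative:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane a*x + b*y + c*z + d = 0 through 3D points,
    returned as (a, b, c, d) with unit normal (a, b, c)."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, Vt = np.linalg.svd(pts - centroid)   # normal = last right singular vector
    n = Vt[-1]
    return np.append(n, -n @ centroid)

def stripe_on_plane(plane, stripe_pts, tol=0.05):
    """True if every 3D point of a candidate stripe lies within tol of the
    plane -- the rejection test applied to new candidate stripes in Step 3."""
    pts = np.asarray(stripe_pts, dtype=float)
    return bool(np.all(np.abs(pts @ plane[:3] + plane[3]) < tol))
```

In the refinement stage, each accepted stripe's points are appended to the fitting set and `fit_plane` is re-run until the parameter change falls below the 0.1% threshold or the calibration images are exhausted.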

Simulations
This section describes a simulation procedure performed to verify the proposed method. We have simulated the influences of three different factors (the number of target placements, image noise, and circular target size) on the calibration accuracy. In these simulations, the lens focal length is set to 25 mm, and the utilized geometrical layout and dimensions are shown in Figure 6. (1) The image plane of the camera is considered in the NICS with millimeter units, not in the CPCS. (2) Both the light and image planes are perpendicular to the Y_cO_cZ_c plane. The light plane equation used in the geometric calculations is C1·y − C2·z + D = 0. Specifically, the light plane equation applied in Figure 6 is 0.9336y − 0.3584z + 140.04 = 0. The calibration accuracy is determined by calculating the relative errors of the light plane equation parameters.

Influence of the Number of Target Placements
The diameter of the circular target is 50 mm, and Gaussian noise with a level of σ = 0.4 mm is incorporated into the circular target profile and the light stripe used for calibration. The number of target placements varies from 2 to 7. The relative errors of the calibration data obtained at different placement numbers are depicted in Figure 7, which clearly shows that the calibration accuracy increases as the number of target placements increases. When this number is larger than five, the relative error decreases significantly, and the calibration accuracy tends to stabilize. Therefore, satisfactory results are obtained with more than five target placements.



Influence of Image Noise
In this case, the diameter of the circle is 50 mm and the target is placed twice. Gaussian noise is added to the circular target profile and the light stripe used for calibration. The noise level varies from σ = 0 mm to σ = 1 mm at intervals of 0.1 mm. The relative errors of the calibration data obtained at different noise levels are depicted in Figure 8, which shows that the calibration accuracy decreases as the noise increases. In the actual experiment, the pixel size of the camera sensor is 3.5 µm. According to the image processing methods described above, the extraction process can reach a precision of 0.2 pixels, or 0.7 µm, which corresponds to a relatively small error.

Influence of Circular Target Size
Gaussian noise with a level of σ = 0.6 mm is incorporated into the circular target profile and the light stripe used for calibration. The target is placed twice, and its diameter varies from 10 to 70 mm at intervals of 10 mm. The relative errors of the calibration data obtained at different target diameters are presented in Figure 9, which shows that the calibration precision increases considerably as the circular target diameter increases from 10 to 50 mm, whereas the relative error decreases only slightly at target diameters above 50 mm. Thus, a circular target with a diameter of 50 mm is manufactured and utilized in the actual experiment.

Experimental Setup
The utilized experimental system consists of a digital charge-coupled device (CCD) camera (MV-CE013-50 GM) and a laser projector (650 nm, 5 mW, 5 V) fixed by optical brackets. The camera is equipped with a megapixel lens with a focal length of 16 mm (Computar M1614-MP2). The CCD camera resolution is 1280 × 960 pixels with a maximum frame rate of 30 frames/s. The laser projector casts a single-line laser and its minimum linewidth is 0.2 mm. The working distance of the system is approximately 450 mm. A photograph of the experimental setup is shown in Figure 10.
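From these specifications, a rough object-space resolution can be derived (a back-of-the-envelope estimate under a pinhole approximation, not a value reported in the text):

```python
# Derived estimate of the object-space pixel footprint for the setup above
# (back-of-the-envelope, not a value reported in the text): under a pinhole
# approximation, one pixel covers pixel_size * distance / focal_length.
pixel_size_mm = 3.5e-3    # 3.5 um sensor pixel
focal_mm = 16.0           # lens focal length
work_dist_mm = 450.0      # approximate working distance

footprint_mm = pixel_size_mm * work_dist_mm / focal_mm
print(f"object-space pixel footprint ~ {footprint_mm:.4f} mm")  # ~0.0984 mm
```

At roughly 0.1 mm per pixel, the 0.2-pixel extraction precision quoted earlier corresponds to about 0.02 mm on the object, consistent with the reported 0.05 mm calibration accuracy.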

Experimental Procedure
Step 1: The camera model parameters are calibrated by Zhang's method [8]. In practice, the MATLAB toolbox developed by Bouguet [20] is directly utilized to obtain the intrinsic matrix A and the distortion coefficients [K; G] of the camera.
Step 2: The circular target is placed at an appropriate position. After that, the camera captures a calibration image containing the circular target and the light stripe. The captured image is undistorted using the A and [K; G] parameters. The coordinates of the light stripe center and the elliptic fitting profile of the circular target edge are obtained according to the image processing method described in Section 2.2.1.
Step 3: The center coordinates of the circular target and two possible normal vectors of the plane containing the circle are calculated as described in Section 2.2.2.
Step 4: Two calibration images are captured at two positions of the circular target by performing steps 2 and 3. As a result, four possible light stripes are identified. The initial light plane is determined using the method described in Section 2.2.3 and further refined by adding a new calibration image.
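The undistortion in step 2 can be sketched as follows. The snippet assumes a simplified two-coefficient radial model in place of the full [K; G] set from Zhang's calibration, with illustrative values for A, k1, and k2; it inverts the model by fixed-point iteration on normalized coordinates.

```python
import numpy as np

# Sketch of step 2's undistortion (an illustration, not the paper's code):
# a simplified radial model x_d = x * (1 + k1*r^2 + k2*r^4) stands in for
# the full [K; G] coefficient set; A, k1, k2 below are assumed values.
A = np.array([[4500.0, 0.0, 640.0],
              [0.0, 4500.0, 480.0],
              [0.0, 0.0, 1.0]])     # assumed intrinsic matrix (fx, fy, cx, cy)
k1, k2 = -0.12, 0.05                # assumed radial distortion coefficients

def undistort_pixel(u, v, A, k1, k2, iters=20):
    """Invert the radial model by fixed-point iteration on normalized coords."""
    fx, fy, cx, cy = A[0, 0], A[1, 1], A[0, 2], A[1, 2]
    xd, yd = (u - cx) / fx, (v - cy) / fy    # distorted normalized coordinates
    x, y = xd, yd
    for _ in range(iters):
        r2 = x * x + y * y
        scale = 1.0 + k1 * r2 + k2 * r2 * r2
        x, y = xd / scale, yd / scale        # converges to the undistorted value
    return x * fx + cx, y * fy + cy          # back to pixel coordinates
```

The same correction is applied to both the stripe-center and ellipse-profile coordinates before fitting.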

Accuracy Evaluation
To estimate the accuracy of the proposed method, the obtained results are compared with those of the method developed in Ref. [14]. The utilized strategy is based on the CRIP illustrated in Figure 11. First, a chessboard is placed inside the measured volume. The grid line l of the chessboard in the horizontal direction intersects the laser stripe at point D, while points A, B, and C represent the corner points on l. The coordinates of A, B, and C are known in the CCS, and the pixel coordinates of a, b, c, and d, which correspond to points A, B, C, and D in the CPCS, respectively, are extracted. According to the CRIP, the cross-ratio is preserved under the perspective projection:

CR = [(u_c − u_a)(u_d − u_b)] / [(u_c − u_b)(u_d − u_a)] = [(X_C − X_A)(X_D − X_B)] / [(X_C − X_B)(X_D − X_A)]

where CR is the cross-ratio; (u_a, v_a), (u_b, v_b), (u_c, v_c), and (u_d, v_d) represent the pixel coordinates of a, b, c, and d, respectively; and (X_A, Y_A, Z_A), (X_B, Y_B, Z_B), (X_C, Y_C, Z_C), and (X_D, Y_D, Z_D) denote the coordinates of A, B, C, and D in the CCS, respectively (analogous relations hold for the Y and Z components). Using this strategy, the coordinates (X_D, Y_D, Z_D) of D can be determined. Therefore, the distances d_t,AD (from A to D), d_t,BD (from B to D), and d_t,CD (from C to D) are considered the ideal evaluation distances. The coordinates of testing point D in the CCS are then calculated using both the proposed method and the method developed in Ref. [14], and the distances from testing point D to A, B, and C in the CCS are the measured distances: d_m1 denotes the distances obtained using the method of Ref. [14], while d_m2 denotes those determined using the proposed method. The 3D coordinates of the ideal points obtained by the principle of cross-ratio invariability and of the testing points computed by the different calibration techniques are listed in Table 1, while the calibration accuracy analysis data are presented in Table 2.
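The cross-ratio computation behind this evaluation can be sketched as follows (a minimal Python illustration; the particular cross-ratio convention and the helper names are choices made here, not taken from Ref. [14]). Given the pixel coordinates of a, b, c, and d along the image of line l and the known coordinates of A, B, and C, the unknown coordinate of D follows from the invariance of the cross-ratio.

```python
# Cross-ratio of four collinear coordinates; this particular convention,
# CR = ((p3 - p1)(p4 - p2)) / ((p3 - p2)(p4 - p1)), is an assumption.
def cross_ratio(p1, p2, p3, p4):
    return ((p3 - p1) * (p4 - p2)) / ((p3 - p2) * (p4 - p1))

# Solve for the fourth coordinate x_d on the grid line such that
# cross_ratio(x_a, x_b, x_c, x_d) matches the cross-ratio measured in the image.
def solve_fourth_point(xa, xb, xc, cr):
    k = cr * (xc - xb) / (xc - xa)
    return (xb - k * xa) / (1.0 - k)

# Usage (per axis): cr_img = cross_ratio(u_a, u_b, u_c, u_d)
#                   X_D = solve_fourth_point(X_A, X_B, X_C, cr_img)
```

Because any perspective projection of a line is a fractional linear map, the cross-ratio measured from the pixel u-coordinates equals the cross-ratio of the CCS coordinates along l, which is what makes the recovery of X_D (and likewise Y_D, Z_D) well posed.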

Conclusions
In this study, a novel calibration method for line-structured light 3D measurements based on a single circular target is proposed. The circular target can be easily manufactured or found directly in nature. The RALS method is used for extracting the elliptic fitting profile because of its high accuracy and robustness, while the subpixel method based on the Hesse matrix is adopted for extracting the light stripe. According to the projective geometry, the elliptical cone equation defined by the elliptic fitting profile and the optical center of the camera can be determined. By combining the obtained elliptical cone with the known diameter of the circular target, two possible positions and orientations of the circular target are distinguished, and two groups of 3D intersection points between the light plane and the circular target are identified. The correct group of these points is filtered, and the light plane is progressively fitted. The effectiveness of the proposed method is verified both theoretically and experimentally, and its measurement accuracy amounts to 0.05 mm.
Author Contributions: J.W. drafted the work or substantively revised it. In addition, X.L. configured the experiments and wrote the codes. J.W. and X.L. calculated the data, wrote the manuscript and plotted the figures. All authors have read and agreed to the published version of the manuscript.

Data Availability Statement: The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.

Conflicts of Interest:
The authors declare no conflict of interest.