Article

A Novel Calibration Method of Articulated Laser Sensor for Trans-Scale 3D Measurement

Jiehu Kang, Bin Wu, Xiaodeng Duan and Ting Xue
1 State Key Laboratory of Precision Measuring Technology and Instruments, Tianjin University, Tianjin 300072, China
2 School of Electrical and Information Engineering, Tianjin University, Tianjin 300072, China
* Authors to whom correspondence should be addressed.
Sensors 2019, 19(5), 1083; https://doi.org/10.3390/s19051083
Submission received: 15 December 2018 / Revised: 21 February 2019 / Accepted: 27 February 2019 / Published: 3 March 2019
(This article belongs to the Special Issue Depth Sensors and 3D Vision)

Abstract:
The articulated laser sensor is a new kind of trans-scale, non-contact measurement instrument for regular-size spaces and industrial applications, and it overcomes many deficiencies and application limitations of traditional measurement methods. The articulated laser sensor consists of two articulated laser sensing modules, and each module is made up of two rotary tables and one collimated laser. The three axes form a non-orthogonal shaft architecture, so the calibration methods used for the system parameters of traditional instruments are no longer suitable. A novel high-accuracy calibration method of an articulated laser sensor for trans-scale 3D measurement is proposed. Based on perspective projection models and image processing techniques, the calibration method of the laser beam is the key innovative aspect of this study and is introduced in detail. The experimental results show a maximum distance error of 0.05 mm for the articulated laser sensor. We demonstrate that the proposed high-accuracy calibration method is feasible and effective, particularly for the calibration of laser beams.

1. Introduction

Accompanying the rapid development of manufacturing, 3D measurement has been widely applied in domains such as complicated surface measurement, dynamic scanning, and reverse engineering [1,2,3,4]. Traditional 3D measurement instruments include the laser tracker [5,6], the total station [7], and the theodolite [8]. Their measurement accuracies rely heavily on structural orthogonality, which increases their cost. Structured-light 3D scanners and laser vision sensors are widely applied techniques in 3D measurement [9]. However, the cameras of vision sensors are difficult to apply to trans-scale space measurement, because their limited depth of focus leads to a dead measurement range beyond which objects cannot be seen clearly.
For trans-scale, non-contact measurement in regular-size spaces and industrial applications, traditional measurement methods suffer from many deficiencies and application limitations. With reference to the three-axis architecture of traditional instruments, an articulated laser sensor combined with adaptive focusing technology for laser alignment is proposed. However, the calibration methods used for the system parameters of traditional instruments are no longer suitable for articulated laser sensors. Wu et al. proposed a non-orthogonal shaft laser theodolite (N-theodolite) [10]. Its intrinsic parameters were obtained by minimum-zone circle fitting and linear fitting, and by aiming the targets at a scale bar placed at several different positions, the extrinsic parameters could be calibrated. However, this calibration method is only suitable for large-scale, low-accuracy measurements. The articulated laser sensor adopts the N-theodolite's non-orthogonal architecture. It is therefore essential to study how to calibrate the spatial position of the laser beam, which represents the visual measuring axis. The spatial position consists of the direction vector of the laser beam and a fixed point on the laser beam.
Bi et al. mounted a laser displacement sensor on the Z-axis of a coordinate measuring machine (CMM) to build an optical coordinate measuring system and proposed a method based on a standard sphere to calibrate the laser beam direction; this method requires length readings from the laser displacement sensor [11]. Sun et al. presented a vision measurement model of the laser displacement sensor and achieved the calibration with a planar target mounted on a 2D moving platform; however, during calibration it was difficult to ensure that the planar target was perpendicular to the fixed plane of the moving platform [12]. Xie et al. established a multi-probe measurement system and proposed a "coplanar calibration method" to calibrate the extrinsic parameters of the structured-light sensor; this method is not suitable for determining the initial spatial position of the laser beam [13]. Yang et al. proposed an inner-diameter measuring device that uses several laser displacement sensors and can calibrate the directions of three laser beams simultaneously [14]. The common issue is that these laser beam calibration methods can only obtain the direction vector of the laser beam [15,16,17]; the spatial position of the laser beam cannot be recovered from the direction vector alone.
To meet the calibration requirements of the articulated laser sensor, a novel calibration method is proposed in this paper. The key innovative aspect of this paper is the proposed method to calibrate the spatial position of the laser beam.
The remainder of this paper is organized as follows. In Section 2, the principle of the articulated laser sensor is introduced. In Section 3, the calibration method of the articulated laser sensor is presented; in particular, the calibration method of the laser beam is introduced in detail. In Section 4, the image processing of the laser spot is presented. In Section 5, the actual measurement experiments are performed, and the experimental data validate that the proposed method is effective. The paper ends with some concluding remarks in Section 6.

2. Principle of Articulated Laser Sensor

2.1. System Construction

The trans-scale 3D coordinate measurement system mainly consists of two articulated laser sensing modules, as shown in Figure 1. Each module is made up of two one-dimensional rotary tables and one collimated laser. As with traditional orthogonal measurement instruments, there are three axes in the articulated laser sensing module. However, the three axes have no strict requirements on orthogonality or intersection: they are skew lines, and the angle between any two axes is not 90°. The rotating axes of the rotary tables of the articulated laser sensing module are called the "vertical axis" and the "horizontal axis", respectively, and the line of the collimated laser beam is called the "measuring axis".

2.2. Measurement Principle

Similar to determining 3D coordinates with traditional forward-intersection measurement instruments, the measurement operation of the articulated laser sensor is based on the intersection of two laser beams on the measured object. With the help of a high-resolution digital CCD camera, the intersection can be achieved accurately. During measurement, the coincidence of the two laser beams on the measured object denotes the intersection of the visualized measuring axes. As shown in Figure 2, when the left laser beam coincides with the right laser beam at a point on the measured object, the 3D coordinate of the point can be calculated from the rotation angles provided by the rotary tables of the two articulated laser sensing modules and the mathematical measurement model, which is established from the perspective projection model and a quaternion kinematic model.
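Since two spatial lines rarely coincide exactly in numerical data, the forward intersection can be estimated as the midpoint of the common perpendicular between the two measuring axes. The following minimal sketch (Python/NumPy, not taken from the paper, whose own model is built on the perspective projection and quaternion models) recovers a 3D point from two beams given as spatial lines in a common world frame; the example beam directions and points are hypothetical.

import numpy as np

def intersect_measuring_axes(p1, d1, p2, d2):
    # Each measuring axis is given by a fixed point p and a direction d in a
    # common world frame. The midpoint of the common perpendicular is returned
    # as the 3D point, together with the residual gap between the two beams.
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    w0 = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b                     # zero only for parallel beams
    s = (b * e - c * d) / denom               # parameter along the left beam
    t = (a * e - b * d) / denom               # parameter along the right beam
    q1, q2 = p1 + s * d1, p2 + t * d2
    return (q1 + q2) / 2.0, np.linalg.norm(q1 - q2)

# Hypothetical beams that intersect at roughly (50, 50, 5):
point, gap = intersect_measuring_axes(
    np.array([0.0, 0.0, 0.0]), np.array([1.0, 1.0, 0.1]),
    np.array([100.0, 0.0, 0.0]), np.array([-1.0, 1.0, 0.1]))
print(point, gap)   # a small gap indicates that the two beams coincide well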

3. Calibration Principle

Calibration of the system parameters of the articulated laser sensor is necessary to achieve high-accuracy measurement, as the measurement accuracy of the sensor is greatly affected by the calibration method [18,19]. The system parameters consist of intrinsic and extrinsic parameters. To determine 3D coordinates, it is necessary to obtain the relative positions of the three axes of the articulated laser sensing module, which are called the intrinsic parameters. The extrinsic parameters denote the relationship between the left and right modules of the articulated laser sensor and include a rotation matrix $R_0$ and a translation vector $T_0$. The three axes of every articulated laser sensing module can be abstracted as three lines in 3D space, as shown in Figure 3. The system parameters of the articulated laser sensor and their physical meanings are listed in Table 1.

3.1. Calibration of the Vertical and Horizontal Axes

To make precise measurements, it is necessary to calibrate the parameters with a high-accuracy measurement instrument. A CMM is employed to calibrate the intrinsic and extrinsic parameters of the articulated laser sensor in the laboratory. The accuracy of the CMM is 2.1 μm + 2.8l/1000 μm, where l is the measurement distance. Two porcelain beads with diameters of 7.935 mm and a machining accuracy of 0.25 μm are pasted onto the two articulated laser sensing modules, respectively. The two articulated laser sensing modules are then rotated vertically and horizontally in steps of ten degrees, and the center of the porcelain bead is measured at each position. Thirty-six measured data points are obtained, which lie on a complete ellipse. The least squares method is utilized to optimize the parameters. The direction vectors and fixed points of the three axes are needed for calibration. The parameters of the vertical and horizontal axes are obtained as follows:
(1) Based on the least squares method, a plane is fitted to the centers of the porcelain beads measured by the CMM.
(2) The distances from the measured points to the fitted plane are calculated. If a distance exceeds the threshold, the corresponding point is eliminated and the plane is fitted again.
(3) The normal vector of the fitted plane is recorded as the direction vector of the rotation axis.
(4) The remaining points are projected onto the fitted plane.
(5) Based on the least squares method, an ellipse is fitted to the projected points.
(6) The center of the fitted ellipse is recorded as the fixed point of the rotation axis.
In the measurement space, a plane can be expressed as [20]

$$f(\Theta, P) = Ax + By + Cz + D = 0 \quad (1)$$

where the vector $\Theta = [A, B, C, D]$ contains the plane parameters and $P = (x, y, z)$ represents a point on the plane.
The four parameters of $\Theta = [A, B, C, D]$ are redundant. Equation (1) can be simplified as

$$ax + by + cz = 1 \quad (2)$$

where $a = -A/D$, $b = -B/D$, $c = -C/D$.
Equation (2) takes the form of the matrix equation

$$XL = Y \quad (3)$$

where $X$ is the matrix whose rows are the measured points $(x, y, z)$, $L = [a, b, c]^T$ denotes the unknown parameters, and $Y = [1, 1, \ldots, 1]^T$.
The number of measured data points exceeds the number of unknown parameters, so Equation (3) is overdetermined and is solved by the least squares method. The normal vector of the fitted plane is recorded as

$$\mathbf{n} = (a, b, c) \quad (4)$$

The unit normal vector is defined as the direction vector of the rotation axis. The distance from a measured point to the fitted plane is expressed as

$$d = \frac{ax + by + cz - 1}{\sqrt{a^2 + b^2 + c^2}} \quad (5)$$

A point is eliminated if $|d| > 0.005\ \mathrm{mm}$. The remaining points are projected onto the fitted plane:

$$[x', y', z']^T = [x, y, z]^T - d\,\mathbf{n} \quad (6)$$
These projected points are used to fit the ellipse, whose center is defined as the fixed point of the rotation axis. An ellipse can be expressed as [21]

$$f(\Theta, p) = Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0 \quad (7)$$

where the vector $\Theta = [A, B, C, D, E, F]$ contains the ellipse parameters and $p = (x, y)$ is a point on the ellipse.
The six parameters of $\Theta = [A, B, C, D, E, F]$ are redundant. Equation (7) can be simplified as

$$ax^2 + bxy + cy^2 + dx + ey = 1 \quad (8)$$

where $a = -A/F$, $b = -B/F$, $c = -C/F$, $d = -D/F$, $e = -E/F$.
Equation (8) takes the form of the matrix equation

$$XL = Y \quad (9)$$

where $X$ is the matrix whose rows are $(x^2, xy, y^2, x, y)$ computed from the projected points, $L = [a, b, c, d, e]^T$ denotes the unknown parameters, and $Y = [1, 1, \ldots, 1]^T$. The center of the ellipse is expressed as

$$x_c = \frac{be - 2cd}{4ac - b^2}, \qquad y_c = \frac{bd - 2ae}{4ac - b^2} \quad (10)$$
In summary, the direction vector of the rotation axis is expressed as Equation (4), and the fixed point is expressed as Equation (10).
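As an illustration of steps (1)–(6) and Equations (1)–(10), the following Python/NumPy sketch fits the plane and the ellipse by linear least squares. It is not the authors' implementation: the 0.005 mm threshold follows the text, but the auxiliary 2D basis used for the ellipse fit and the single refit pass are choices of this sketch.

import numpy as np

def fit_rotation_axis(bead_centers, threshold=0.005):
    # bead_centers: (N, 3) array of porcelain-bead centers measured by the CMM.
    # Returns the unit direction vector of the rotation axis (plane normal) and
    # the fixed point (the fitted ellipse center lifted back into 3D).
    P = np.asarray(bead_centers, dtype=float)

    # (1) Fit the plane a*x + b*y + c*z = 1 by linear least squares (Eq. 3).
    L, *_ = np.linalg.lstsq(P, np.ones(len(P)), rcond=None)
    # (2) Reject points farther than the threshold from the plane (Eq. 5) and refit.
    d = (P @ L - 1.0) / np.linalg.norm(L)
    P = P[np.abs(d) <= threshold]
    L, *_ = np.linalg.lstsq(P, np.ones(len(P)), rcond=None)
    n = L / np.linalg.norm(L)                      # (3) unit normal = direction (Eq. 4)

    # (4) Project the remaining points onto the fitted plane (Eq. 6).
    d = (P @ L - 1.0) / np.linalg.norm(L)
    Q = P - d[:, None] * n

    # Express the projected points in a 2D orthonormal basis (e1, e2) of the plane.
    e1 = np.cross(n, [0.0, 0.0, 1.0])
    if np.linalg.norm(e1) < 1e-9:                  # normal parallel to the z axis
        e1 = np.cross(n, [0.0, 1.0, 0.0])
    e1 /= np.linalg.norm(e1)
    e2 = np.cross(n, e1)
    origin = Q.mean(axis=0)
    x, y = (Q - origin) @ e1, (Q - origin) @ e2

    # (5) Fit the ellipse a*x^2 + b*x*y + c*y^2 + d*x + e*y = 1 (Eq. 9).
    X = np.column_stack([x * x, x * y, y * y, x, y])
    a, b, c, dd, ee = np.linalg.lstsq(X, np.ones(len(x)), rcond=None)[0]

    # (6) Ellipse center (Eq. 10), lifted back to 3D as the fixed point.
    den = 4.0 * a * c - b * b
    xc, yc = (b * ee - 2.0 * c * dd) / den, (b * dd - 2.0 * a * ee) / den
    return n, origin + xc * e1 + yc * e2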

3.2. Calibration of the Laser Beam

In 3D space, the laser beam can be abstracted as a spatial line. In the world coordinate system, the equation of the laser beam can be expressed as

$$\frac{x_w - x_1}{x_2 - x_1} = \frac{y_w - y_1}{y_2 - y_1} = \frac{z_w - z_1}{z_2 - z_1} \quad (11)$$

where $(x_1, y_1, z_1)$ and $(x_2, y_2, z_2)$ are the coordinates of two laser spots. Therefore, the key task is to obtain the coordinates of the laser spots in the world coordinate system.
The calibration principle diagram is shown in Figure 4. There are two parallel planes, called the "image plane" and the "target plane", respectively. The method used to ensure that the two planes are parallel is introduced in Section 4. Two porcelain beads are pasted onto the target plane, and their centers are defined as $P_1$ and $P_2$. The projections of $P_1$ and $P_2$ onto the target plane are defined as $p_1$ and $p_2$, and the projections of $p_1$ and $p_2$ onto the image plane are defined as $p'_1$ and $p'_2$. The laser spot on the target plane is defined as $p_l$, and its projection onto the image plane is defined as $p'_l$.
Based on the principle of calibration, four coordinate systems are established:
(1) The world coordinate system is defined as $o_w x_w y_w z_w$. The CMM's measurement coordinate system is regarded as the world coordinate system.
(2) The viewpoint coordinate system is defined as $o_v x_v y_v z_v$. The $z_v$ axis is perpendicular to the target plane, and the $x_w$ and $y_w$ coordinates of the origin $o_v$ are equal to the $x_w$ and $y_w$ coordinates of $p_1$.
(3) The actual pixel coordinate system on the CCD is defined as $o'u'v'$.
(4) The image plane coordinate system is defined as $ouv$. The $u$ axis is parallel to the $x_v$ axis, and the origin $o$ coincides with $p'_1$.
The equation of the target plane in the world coordinate system is expressed as

$$l(x_w - x'_w) + m(y_w - y'_w) + n(z_w - z'_w) = 0 \quad (12)$$

where $V = (l, m, n)$ is the unit normal vector of the plane and $p' = (x'_w, y'_w, z'_w)$ is a known point on the target plane in the world coordinate system.
The centers $P_1$ and $P_2$ of the two porcelain beads are measured by the CMM. Several points on the target plane are also measured by the CMM, and the parameters of the normal vector $V$ and the fixed point $p'$ are obtained by plane fitting. The coordinates of each point in the various coordinate systems are listed in Table 2.
A central aim of the calibration is to obtain the coordinate $p_l = (x_{w0}, y_{w0}, z_{w0})$ of the laser spot in the world coordinate system.
$p_1$ and $p_2$ are the projections of $P_1$ and $P_2$ onto the target plane. The coordinates of $p_1$ and $p_2$ can be calculated by

$$[x_{w1}, y_{w1}, z_{w1}]^T = [X_{w1}, Y_{w1}, Z_{w1}]^T - d_1 [l, m, n]^T \quad (13)$$

$$[x_{w2}, y_{w2}, z_{w2}]^T = [X_{w2}, Y_{w2}, Z_{w2}]^T - d_2 [l, m, n]^T \quad (14)$$

where $d_1$ and $d_2$ can be calculated by

$$d_1 = |l(X_{w1} - x'_w) + m(Y_{w1} - y'_w) + n(Z_{w1} - z'_w)| \quad (15)$$

$$d_2 = |l(X_{w2} - x'_w) + m(Y_{w2} - y'_w) + n(Z_{w2} - z'_w)| \quad (16)$$
Based on the definition of $o_v x_v y_v z_v$ given above, the transformation from $o_w x_w y_w z_w$ to $o_v x_v y_v z_v$ is defined as

$$\begin{bmatrix} x_v \\ y_v \\ z_v \\ 1 \end{bmatrix} = \begin{bmatrix} \cos\theta + \omega_x^2(1-\cos\theta) & \omega_x\omega_y(1-\cos\theta) - \omega_z\sin\theta & \omega_y\sin\theta + \omega_x\omega_z(1-\cos\theta) & -x_{w1} \\ \omega_z\sin\theta + \omega_x\omega_y(1-\cos\theta) & \cos\theta + \omega_y^2(1-\cos\theta) & \omega_y\omega_z(1-\cos\theta) - \omega_x\sin\theta & -y_{w1} \\ \omega_x\omega_z(1-\cos\theta) - \omega_y\sin\theta & \omega_x\sin\theta + \omega_y\omega_z(1-\cos\theta) & \cos\theta + \omega_z^2(1-\cos\theta) & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix} = \begin{bmatrix} R & T \\ 0 & 1 \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix} \quad (17)$$

where $\boldsymbol{\omega} = (\omega_x, \omega_y, \omega_z) = \frac{V \times k}{|V \times k|}$ is the unit vector of the rotation axis, $\theta = \arccos\left(\frac{V \cdot k}{|V||k|}\right)$, and $k = (0, 0, 1)$. The coordinates of $p_1$ and $p_2$ in $o_v x_v y_v z_v$ can be calculated by

$$[x_{v1}, y_{v1}, z_{v1}, 1]^T = \begin{bmatrix} R & T \\ 0 & 1 \end{bmatrix} [x_{w1}, y_{w1}, z_{w1}, 1]^T = [0, 0, z_{v1}, 1]^T \quad (18)$$

$$[x_{v2}, y_{v2}, z_{v2}, 1]^T = \begin{bmatrix} R & T \\ 0 & 1 \end{bmatrix} [x_{w2}, y_{w2}, z_{w2}, 1]^T \quad (19)$$
Because the $z_v$ axis is perpendicular to the target plane, the $z_v$ coordinates of $p_l$, $p_1$ and $p_2$ satisfy

$$z_{v0} = z_{v1} = z_{v2} \quad (20)$$

The vector $\overrightarrow{p_1 p_2}$ is expressed as

$$\overrightarrow{p_1 p_2} = (x_{v2} - x_{v1},\ y_{v2} - y_{v1},\ 0) \quad (21)$$

In order to ensure that the $u$ axis is parallel to the $x_v$ axis, the vector $\overrightarrow{p'_1 p'_2}$ is expressed as

$$\overrightarrow{p'_1 p'_2} = (u'_2 - u'_1,\ v'_2 - v'_1,\ 0) \quad (22)$$
Similarly, the transformation from $o'u'v'$ to $ouv$ is defined as

$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} \cos\theta' & \sin\theta' & -u'_1 \\ -\sin\theta' & \cos\theta' & -v'_1 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} u' \\ v' \\ 1 \end{bmatrix} \quad (23)$$

where $\theta' = \arccos\left(\frac{\overrightarrow{p_1 p_2} \cdot \overrightarrow{p'_1 p'_2}}{|\overrightarrow{p_1 p_2}||\overrightarrow{p'_1 p'_2}|}\right)$. The method used to get the coordinates of $p'_1$, $p'_2$ and $p'_l$ in $o'u'v'$ is introduced in Section 4. The coordinates of $p'_1$, $p'_2$ and $p'_l$ in $ouv$ can be calculated by

$$[u_1, v_1, 1]^T = \begin{bmatrix} \cos\theta' & \sin\theta' & -u'_1 \\ -\sin\theta' & \cos\theta' & -v'_1 \\ 0 & 0 & 1 \end{bmatrix} [u'_1, v'_1, 1]^T = [0, 0, 1]^T \quad (24)$$

$$[u_2, v_2, 1]^T = \begin{bmatrix} \cos\theta' & \sin\theta' & -u'_1 \\ -\sin\theta' & \cos\theta' & -v'_1 \\ 0 & 0 & 1 \end{bmatrix} [u'_2, v'_2, 1]^T \quad (25)$$

$$[u_0, v_0, 1]^T = \begin{bmatrix} \cos\theta' & \sin\theta' & -u'_1 \\ -\sin\theta' & \cos\theta' & -v'_1 \\ 0 & 0 & 1 \end{bmatrix} [u'_0, v'_0, 1]^T \quad (26)$$
Because the image plane is parallel to the target plane, the image-plane points share the same depth, $z'_{v0} = z'_{v1} = z'_{v2}$. According to perspective projection,

$$\frac{x_v}{z_v} = \frac{u}{z'_{v0}} \quad (27)$$

$$\frac{y_v}{z_v} = \frac{v}{z'_{v0}} \quad (28)$$

Therefore, $z'_{v0}$ can be calculated from Equations (27) and (28) using the known coordinates of $p_2$, and it can then be substituted back into Equations (27) and (28) to calculate $x_{v0}$ and $y_{v0}$ for the laser spot. Thus, the coordinate $(x_{v0}, y_{v0}, z_{v0})$ is obtained, and the coordinate of the laser spot in $o_w x_w y_w z_w$ can be calculated by

$$[x_{w0}, y_{w0}, z_{w0}, 1]^T = \begin{bmatrix} R & T \\ 0 & 1 \end{bmatrix}^{-1} [x_{v0}, y_{v0}, z_{v0}, 1]^T \quad (29)$$
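The chain from Equation (11) to Equation (29) can be summarized numerically. The sketch below (Python/NumPy, not the paper's code) assumes the CMM-measured bead centers, the fitted target-plane normal, and the rectified image coordinates of $p'_2$ and $p'_l$ in the $ouv$ system (with $p'_1$ at its origin) are already available; the rigid transform is built directly from the plane normal with the Rodrigues formula rather than written out element-by-element as in Equation (17), and all function names are illustrative.

import numpy as np

def rotation_from_normal(V, k=np.array([0.0, 0.0, 1.0])):
    # Rotation that maps the unit target-plane normal V onto the z axis,
    # built with the Rodrigues formula (compact form of Equation (17)).
    V = V / np.linalg.norm(V)
    axis = np.cross(V, k)
    s, c = np.linalg.norm(axis), float(V @ k)
    if s < 1e-12:                                  # normal already along +/- z
        return np.eye(3) if c > 0 else np.diag([1.0, -1.0, -1.0])
    w = axis / s
    K = np.array([[0, -w[2], w[1]], [w[2], 0, -w[0]], [-w[1], w[0], 0]])
    theta = np.arctan2(s, c)
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def laser_spot_world(P1, P2, V, p_plane, uv2, uv0):
    # P1, P2: bead centers measured by the CMM in the world frame.
    # V, p_plane: unit normal and a fitted point of the target plane (Eq. 12).
    # uv2, uv0: rectified image coordinates of p'_2 and p'_l in the ouv system.
    V = V / np.linalg.norm(V)
    p1 = P1 - float(V @ (P1 - p_plane)) * V        # projection of P1 (Eqs. 13, 15)
    p2 = P2 - float(V @ (P2 - p_plane)) * V        # projection of P2 (Eqs. 14, 16)
    R = rotation_from_normal(V)                    # world -> viewpoint rotation
    T = -R @ p1
    T[2] = 0.0                                     # p1 maps to (0, 0, z_v1), Eq. 18
    p1_v, p2_v = R @ p1 + T, R @ p2 + T
    # Ratio u / x_v shared by all target-plane points (Eqs. 27 and 28),
    # estimated from p_2, whose viewpoint and image coordinates are both known.
    scale = np.hypot(uv2[0], uv2[1]) / np.hypot(p2_v[0], p2_v[1])
    spot_v = np.array([uv0[0] / scale, uv0[1] / scale, p1_v[2]])   # Eq. 20
    return R.T @ (spot_v - T)                      # back to the world frame (Eq. 29)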

4. Image Processing

4.1. Centroid Extraction

One original image, including one laser spot and six white spots, is shown in Figure 5. The four white spots in the center of the image are used to ensure the parallelism of the target plane and the image plane, and their centers serve as control points. The two white spots on the right side of the image are the images of the two porcelain beads pasted onto the target plane. The flow chart of centroid extraction is shown in Figure 6. The centers of the laser spot and of each white spot are obtained by the centroid method [22], as shown in Figure 7.
Through the extraction method above, we obtain the 2D coordinates of the centers of the laser spot and each white spot in $o'u'v'$. Due to the uncertainty of the camera position, the image plane is generally not parallel to the target plane, so post-processing of the image is necessary.
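A minimal sketch of the gray-weighted centroid computation is given below (Python/NumPy). The fixed brightness threshold and the array conventions are assumptions of this sketch rather than details taken from the paper.

import numpy as np

def spot_centroid(gray, threshold=128):
    # gray: 2D grayscale image array (rows correspond to v', columns to u').
    # Returns the intensity-weighted sub-pixel centroid (u', v') of the spot.
    weights = gray.astype(float) * (gray >= threshold)   # keep bright pixels only
    total = weights.sum()
    if total == 0:
        raise ValueError("no pixels above the threshold")
    v_idx, u_idx = np.indices(gray.shape)
    u_c = (weights * u_idx).sum() / total
    v_c = (weights * v_idx).sum() / total
    return u_c, v_c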

4.2. Image Perspective Rectification

In order to ensure that the image plane is parallel to the target plane, an image perspective rectification method based on double vanishing points is employed [23]. The four control points are arranged in a square on the target plane; due to the uncertainty of the camera position, they do not form a square in the image. The principle diagram of image rectification is shown in Figure 8.
By analyzing the locations of the four control points, the workflow of image rectification is as follows (a homography-based sketch is given after this workflow):
(1) The image is rotated so that $p_{i3}p_{i4}$ is parallel to the $u$ axis. From the coordinates of $p_{i1}$, $p_{i2}$, $p_{i3}$ and $p_{i4}$ after rotation, the vanishing point coordinate $(m_u, m_v)$ is obtained.
(2) The rectification in the $u$-axis direction is expressed as

$$\begin{cases} v' = v \\ u' = u + \dfrac{(H - v)(m_u - u)}{m_v - v} \end{cases}$$

where $H$ is the width of the square.
(3) The rectification in the $v$-axis direction is expressed as

$$\begin{cases} u' = u \\ v' = \dfrac{v\, m_u}{m_u - (H-1)(m_u - m_v)/v} \end{cases}$$
(4) After the rectification in the $u$-axis and $v$-axis directions, $p_{i1}p_{i3}$ is parallel to $p_{i2}p_{i4}$, but $p_{i1}p_{i2}$ is not parallel to $p_{i3}p_{i4}$. The image is then rotated by 90°, and the rectification in the $u$-axis and $v$-axis directions is executed again.
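The paper rectifies the image with the double-vanishing-point formulas above. As a self-contained illustration of the same goal, namely restoring the square arrangement of the four control points, the sketch below instead estimates a four-point homography with a direct linear transform and applies it to measured image points; this swapped-in technique and the detected coordinates are assumptions of the sketch, not the method of [23].

import numpy as np

def homography_from_points(src, dst):
    # Direct linear transform: 3x3 homography H with H * src[i] ~ dst[i],
    # estimated from four point correspondences.
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def rectify_points(H, pts):
    # Apply the homography to 2D points (e.g. the laser-spot and bead centroids).
    pts = np.asarray(pts, dtype=float)
    homog = np.column_stack([pts, np.ones(len(pts))]) @ H.T
    return homog[:, :2] / homog[:, 2:3]

# Hypothetical detected control points (distorted square) and their ideal
# positions on a square of side length H_side pixels.
H_side = 100.0
detected = [(132.4, 88.1), (241.7, 97.6), (128.9, 201.3), (236.2, 212.8)]
ideal = [(0.0, 0.0), (H_side, 0.0), (0.0, H_side), (H_side, H_side)]
H_rect = homography_from_points(detected, ideal)
print(rectify_points(H_rect, [(180.0, 150.0)]))   # e.g. a laser-spot centroid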

5. Experiment

The calibration experiment site is shown in Figure 9. In the laboratory, a CMM is employed to calibrate these intrinsic and extrinsic parameters, and the CMM’s measurement coordinate system is defined as the world coordinate system.

5.1. Calibration of the Vertical and Horizontal Axes

As shown in Figure 9, two porcelain beads are pasted onto the two articulated laser sensing modules, respectively. By measuring porcelain bead 1 and porcelain bead 2 as they rotate around the corresponding axes, the direction vectors and fixed points of the horizontal and vertical axes are obtained by plane and ellipse fitting. The intrinsic parameters in the CMM coordinate system are shown in Table 3.

5.2. Calibration of the Laser Beam

As shown in Figure 9, the target plane and a camera are fixed on a platform, and two porcelain beads are pasted onto the target plane. The perspective projection image of a ball is usually not a standard circle but an ellipse, and the geometric center of this ellipse does not coincide with the true image of the ball center [24]. Fortunately, a telecentric lens can be employed to correct the parallax error of traditional industrial lenses.
According to the calibration principle, the calibration procedure for the laser beam is as follows:
(1) The articulated laser sensor is fixed on the operating platform of the CMM.
(2) The optical calibration device is adjusted so that the laser beam projects onto the target plane.
(3) The centers of the two beads are measured by the CMM, and their coordinates are recorded as $(X_{w1}, Y_{w1}, Z_{w1})$ and $(X_{w2}, Y_{w2}, Z_{w2})$, respectively.
(4) Nine points on the target plane are measured by the CMM and used to fit a plane. The parameters of the target plane are obtained, including the unit normal vector, recorded as $(l, m, n)$, and a point on the plane, recorded as $(x'_w, y'_w, z'_w)$.
(5) An image is collected by a camera with a telecentric lens.
(6) Steps (2)–(5) are repeated more than seven times.
(7) The collected images are processed as described in Section 4.
(8) Based on the least squares method, a spatial line is fitted to the coordinates of the laser spots in the CMM's coordinate system (a line-fitting sketch is given after this list).
(9) The parameters of the laser beam are obtained, including the direction vector and a fixed point on the laser beam, as shown in Table 4.
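Step (8) can be realized as an orthogonal least-squares line fit. The paper does not state which formulation it uses, so the following sketch (Python/NumPy) shows one possibility, a principal-component fit through the laser-spot coordinates collected from the repeated poses.

import numpy as np

def fit_laser_beam(spots):
    # spots: (N, 3) array of laser-spot coordinates in the CMM frame, one per pose.
    # Returns the unit direction vector of the beam (principal component of the
    # centered points) and a fixed point on the beam (the centroid).
    P = np.asarray(spots, dtype=float)
    fixed_point = P.mean(axis=0)
    _, _, Vt = np.linalg.svd(P - fixed_point)
    direction = Vt[0] / np.linalg.norm(Vt[0])
    return direction, fixed_point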

5.3. Calibration of Extrinsic Parameters

The above calibration results show that the intrinsic parameters of the three axes of each articulated laser sensing module are obtained in the CMM's coordinate system. The extrinsic parameters are obtained as follows:

$$R_0 = \begin{bmatrix} 0.9999 & 0.0078 & 0.0078 \\ 0.0078 & 1.0000 & 0.0008 \\ 0.0078 & 0.0007 & 1.0000 \end{bmatrix}, \qquad T_0 = \begin{bmatrix} 60.3136 \\ 0.1034 \\ 10.7202 \end{bmatrix}\ \mathrm{(mm)}$$

5.4. Verification Experiment

Further measurement experiments are needed to verify the accuracy of the proposed calibration method utilizing the intrinsic and extrinsic parameters. A high-precision machined hemispherical target with a center dot, as shown in Figure 10, is employed to achieve a high-accuracy measurement. The machining accuracy of the hemispherical target is 0.01 mm.
The hemispherical target is placed at different positions, and the distance between two positions is the measurand. The points on the sphere surface are measured by the CMM, and the coordinate of the center dot is obtained by sphere fitting, as shown in Figure 11. The intersection point of the two laser beams is measured by the articulated laser sensor. The results from the CMM are taken as the true values, and those from the articulated laser sensor as the measured values, as shown in Table 5.
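The sphere fitting can be carried out with a linear least-squares formulation. The paper does not specify its algorithm, so the sketch below (Python/NumPy) is only one possible implementation, assuming the CMM points on the sphere surface are given as an N×3 array.

import numpy as np

def fit_sphere(points):
    # Linear least-squares sphere fit: x^2 + y^2 + z^2 = 2ax + 2by + 2cz + d,
    # with center (a, b, c) and radius sqrt(d + a^2 + b^2 + c^2).
    P = np.asarray(points, dtype=float)
    A = np.column_stack([2.0 * P, np.ones(len(P))])
    rhs = (P ** 2).sum(axis=1)
    (a, b, c, d), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    center = np.array([a, b, c])
    radius = np.sqrt(d + a * a + b * b + c * c)
    return center, radius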
The comparison shows that the maximum distance error of the articulated laser sensor calibrated by the new method is 0.05 mm in the real experiment.

6. Conclusions

A novel high-accuracy calibration method of an articulated laser sensor is proposed in this paper. The system parameters to be calibrated are the spatial positions of the three axes of the articulated laser sensing module. The calibration principles of the three axes are introduced in detail. In particular, the calibration of the laser beam, which is the key innovative aspect of the study, is elaborated. A novel optical calibration device is also presented to achieve high-accuracy operation, including a linear displacement guide, high-precision machined porcelain beads, and a camera with a telecentric lens. The calibration method of the laser beam combines a perspective projection model with image processing techniques, and the image processing procedure is divided into two steps: centroid extraction and image perspective rectification. The experimental results show that a maximum distance error of 0.05 mm was detected with the articulated laser sensor. These encouraging results prove that the proposed calibration method is suitable for articulated laser sensors, particularly for the calibration of the laser beam.

Author Contributions

Conceptualization, B.W. and J.K.; Methodology, J.K., X.D. and T.X.; System Structure, J.K. and X.D.; Software, J.K. and X.D.; Validation, J.K. and X.D.; Formal Analysis, J.K. and X.D.; Writing-Original Draft Preparation, J.K. and X.D.; Writing-Review & Editing, B.W. and J.K.; Funding Acquisition, B.W. and T.X.

Funding

This research was funded by the National Natural Science Foundation of China (Grant Nos. 61771336, 61671321, 51475328), and the Natural Science Foundation of Tianjin in China (No. 18JCZDJC38600).

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Lee, R.T.; Shiou, F.J. Multi-beam laser probe for measuring position and orientation of freeform surface. Measurement 2011, 44, 1–10.
2. Sun, X.; Zhang, Q. Dynamic 3-D shape measurement method: A review. Opt. Lasers Eng. 2010, 48, 191–204.
3. Zhang, Q.; Sun, X.; Xiang, L.; Sun, X. 3-D shape measurement based on complementary Gray-code light. Opt. Lasers Eng. 2012, 50, 574–579.
4. Feng, D.; Feng, M.; Ozer, E.; Fukuda, Y. A Vision-Based Sensor for Noncontact Structural Displacement Measurement. Sensors 2015, 15, 16557–16575.
5. Muralikrishnan, B.; Phillips, S.; Sawyer, D. Laser trackers for large-scale dimensional metrology: A review. Precis. Eng. 2016, 44, 13–28.
6. Ouyang, J.F.; Liu, W.L.; Yan, Y.G.; Sun, D.X. Angular error calibration of laser tracker system. SPIE 2006, 6344, 6344–6348.
7. Scherer, M.; Lerma, J. From the conventional total station to the prospective image assisted photogrammetric scanning total station: Comprehensive review. J. Surv. Eng. 2009, 135, 173–178.
8. Wu, B.; Wang, B. Automatic Measurement in Large-Scale Space with the Laser Theodolite and Vision Guiding Technology. Adv. Mech. Eng. 2013, 5, 1–8.
9. Zhou, F.; Peng, B.; Cui, Y.; Wang, Y.; Tan, H. A novel laser vision sensor for omnidirectional 3D measurement. Opt. Laser Technol. 2013, 45, 1–12.
10. Wu, B.; Yang, F.; Ding, W.; Xue, T. A novel calibration method for non-orthogonal shaft laser theodolite measurement system. Rev. Sci. Instrum. 2016, 87, 035102.
11. Bi, C.; Liu, Y.; Fang, J.-G.; Guo, X.; Lv, L.-P.; Dong, P. Calibration of laser beam direction for optical coordinate measuring system. Measurement 2015, 73, 191–199.
12. Sun, J.; Zhang, J.; Liu, Z.; Zhang, G. A vision measurement model of laser displacement sensor and its calibration method. Opt. Lasers Eng. 2013, 51, 1344–1352.
13. Xie, Z.; Wang, J.; Zhang, Q. Complete 3D measurement in reverse engineering using a multi-probe system. Int. J. Mach. Tools Manuf. 2005, 45, 1474–1486.
14. Yang, T.; Wang, Z.; Wu, Z.; Li, X.; Wang, L.; Liu, C. Calibration of Laser Beam Direction for Inner Diameter Measuring Device. Sensors 2017, 17, 294.
15. Xie, Z.; Wang, X.; Chi, S. Simultaneous calibration of the intrinsic and extrinsic parameters of structured-light sensors. Opt. Lasers Eng. 2014, 58, 9–18.
16. Yang, K.; Yu, H.-Y.; Yang, C. Calibration of line structured-light vision measurement system based on free-target. J. Mech. Electr. Eng. 2016, 33, 1066–1070.
17. Smith, K.B.; Zheng, Y.F. Point laser triangulation probe calibration for coordinate metrology. J. Manuf. Sci. Eng. 2000, 122, 582–593.
18. Wu, D.; Chen, T.; Li, A. A High Precision Approach to Calibrate a Structured Light Vision Sensor in a Robot-Based Three-Dimensional Measurement System. Sensors 2016, 16, 1388.
19. Yin, S.; Ren, Y.; Guo, Y.; Zhu, J.; Yang, S.; Ye, S. Development and calibration of an integrated 3D scanning system for high-accuracy large-scale metrology. Measurement 2014, 54, 65–76.
20. Men, Y.; Zhang, G.; Men, C.; Li, X.; Ma, N. A Stereo Matching Algorithm Based on Four-Moded Census and Relative Confidence Plane Fitting. Chin. J. Electron. 2015, 24, 807–812.
21. Mulleti, S.; Seelamantula, C.S. Ellipse Fitting Using the Finite Rate of Innovation Sampling Principle. IEEE Trans. Image Process. 2016, 25, 1451–1464.
22. Yang, J.; Zhang, T.; Song, J.; Liang, B. High accuracy error compensation algorithm for star image sub-pixel subdivision location. Opt. Lasers Eng. 2010, 18, 1002–1010.
23. Luo, X.; Du, Z. Method of Image Perspective Transform Based on Double Vanishing Point. Comput. Eng. 2009, 35, 212–214.
24. Gu, F.; Zhao, H. Analysis and correction of projection error of camera calibration ball. Acta Opt. Sin. 2012, 12, 209–215.
Figure 1. Structural diagram of articulated laser sensor.
Figure 2. Structural diagram of the measurement system.
Figure 3. Schematic diagram of parameters calibration.
Figure 4. The calibration diagram of the laser beam.
Figure 5. Example diagram of the laser spot and other white spots.
Figure 6. Flow chart of centroid extraction.
Figure 7. (a) Image of laser spot extraction; (b) image of white spots extraction.
Figure 8. Schematic diagram of image rectification.
Figure 9. Calibration experiment diagram.
Figure 10. High-precision machined hemispherical target.
Figure 11. Measurement experiment diagram.
Table 1. The system parameters of the articulated laser sensor.

Category | Parameters | Physical Meaning
Intrinsic parameters | Vertical axis: vector $v_W(x_{WV}, y_{WV}, z_{WV})$ | Direction of vertical axis
Intrinsic parameters | Vertical axis: point $V_W(x_{WVO}, y_{WVO}, z_{WVO})$ | Fixed point of vertical axis
Intrinsic parameters | Horizontal axis: vector $h_W(x_{WH}, y_{WH}, z_{WH})$ | Direction of horizontal axis
Intrinsic parameters | Horizontal axis: point $H_W(x_{WHO}, y_{WHO}, z_{WHO})$ | Fixed point of horizontal axis
Intrinsic parameters | Measuring axis: vector $p_W(x_{WM}, y_{WM}, z_{WM})$ | Direction of measuring axis
Intrinsic parameters | Measuring axis: point $P_W(x_{WMO}, y_{WMO}, z_{WMO})$ | Fixed point of measuring axis
Extrinsic parameters | Rotation–translation matrix $M_0$: rotation matrix $R_0$ | Rotation from $Oxyz$ to $O_R x_R y_R z_R$
Extrinsic parameters | Rotation–translation matrix $M_0$: translation vector $T_0$ | Translation from $Oxyz$ to $O_R x_R y_R z_R$
Table 2. Coordinates of each point in the various coordinate systems.

Point | $o_w x_w y_w z_w$ | $o_v x_v y_v z_v$ | $o'u'v'$ | $ouv$
$p_l$ | $(x_{w0}, y_{w0}, z_{w0})$ | $(x_{v0}, y_{v0}, z_{v0})$ | — | —
$p'_l$ | — | $(u_0, v_0, z'_{v0})$ | $(u'_0, v'_0)$ | $(u_0, v_0)$
$P_1$ | $(X_{w1}, Y_{w1}, Z_{w1})$ | — | — | —
$p_1$ | $(x_{w1}, y_{w1}, z_{w1})$ | $(x_{v1}, y_{v1}, z_{v1})$ | — | —
$p'_1$ | — | $(u_1, v_1, z'_{v1})$ | $(u'_1, v'_1)$ | $(u_1, v_1)$
$P_2$ | $(X_{w2}, Y_{w2}, Z_{w2})$ | — | — | —
$p_2$ | $(x_{w2}, y_{w2}, z_{w2})$ | $(x_{v2}, y_{v2}, z_{v2})$ | — | —
$p'_2$ | — | $(u_2, v_2, z'_{v2})$ | $(u'_2, v'_2)$ | $(u_2, v_2)$
Table 3. Intrinsic parameters of the vertical and horizontal axes (mm).

Module | Axis | Intrinsic Parameters
Left module | Horizontal axis | Fixed point (160.919, 162.305, −622.648); direction vector (0.954, −0.300, 0.005)
Left module | Vertical axis | Fixed point (208.213, 147.371, −604.198); direction vector (0.006, −0.002, −0.999)
Right module | Horizontal axis | Fixed point (416.534, 157.549, −622.284); direction vector (0.973, 0.232, −0.004)
Right module | Vertical axis | Fixed point (368.818, 146.245, −603.301); direction vector (−0.002, −0.003, −0.999)
Table 4. Intrinsic parameters of the sight axis (mm).

Module | Axis | Intrinsic Parameters
Left module | Measuring axis | Fixed point (313.363, 702.058, −618.860); direction vector (0.270, 0.963, 0.005)
Right module | Measuring axis | Fixed point (252.987, 691.699, −619.093); direction vector (−0.2934, 0.956, 0.003)
Table 5. Comparison between the measured values and real values.

Point No. | Left/Right Module | Horizontal Angle (°) | Vertical Angle (°) | Measured Length (mm) | Real Length (mm) | Deviation (mm)
1 | Left | 0.000 | 0.000 | 91.747 | 91.752 | −0.005
1 | Right | −15.775 | −0.108 | | |
2 | Left | 14.986 | −0.057 | | |
2 | Right | −0.052 | 0.012 | | |
3 | Left | 33.143 | −1.342 | 166.854 | 166.831 | 0.023
3 | Right | 38.992 | −1.258 | | |
4 | Left | 15.786 | −1.325 | | |
4 | Right | 17.184 | −1.277 | | |
5 | Left | 12.152 | −1.475 | 172.134 | 172.141 | −0.007
5 | Right | 12.206 | −1.447 | | |
6 | Left | 9.647 | 19.920 | | |
6 | Right | 3.046 | 20.580 | | |
7 | Left | −8.503 | 20.021 | 157.470 | 157.467 | 0.003
7 | Right | −12.452 | 18.288 | | |
8 | Left | −31.538 | 19.282 | | |
8 | Right | −29.141 | 15.717 | | |
9 | Left | −0.848 | 14.621 | 128.730 | 128.753 | −0.023
9 | Right | 4.000 | 14.582 | | |
10 | Left | 11.500 | 13.578 | | |
10 | Right | 17.800 | 14.570 | | |
11 | Left | 12.500 | −1.744 | 352.831 | 352.781 | 0.050
11 | Right | −1.102 | −1.754 | | |
12 | Left | 26.000 | 12.595 | | |
12 | Right | 35.000 | 14.606 | | |
