On-Site Calibration Method for Line-Structured Light Sensor-Based Railway Wheel Size Measurement System

Line-structured light has been widely used in the field of railway measurement, owing to its strong anti-interference capability, fast scanning speed and high accuracy. Traditional calibration methods for line-structured light sensors suffer from long calibration times and complicated procedures, which makes them unsuitable for railway field application. In this paper, a fast calibration method based on a self-developed calibration device is proposed. Compared with traditional methods, the calibration process is simplified and the calibration time is greatly shortened. The method does not need to extract light stripes; thus, the influence of ambient light on the measurement is reduced. In addition, the calibration error resulting from misalignment is corrected by an epipolar constraint, improving the calibration accuracy. Calibration experiments in the laboratory and field tests were conducted to verify the effectiveness of the method, and the results show that it achieves better calibration accuracy than a traditional calibration method based on Zhang's method.


Introduction
In recent years, line-structured light vision sensors have been widely used in dynamic railway wheel size measurement systems [1][2][3][4][5][6]. For example, a high-accuracy line-structured light sensor-based wheel size measurement system was introduced in our previous work [7]. A line-structured light vision sensor is generally composed of a camera and a line laser projector. In the application of railway wheel size measurement, owing to the restriction of the viewing angle, it is necessary to combine at least two sensors whose laser planes are coincident to obtain a whole wheel tread profile. Calibration is one of the crucial phases in reconstructing the wheel parameters from the acquired 2D laser stripes, and it is vital to improving the accuracy of the system. Generally, the calibration parameters of a line-structured light vision sensor consist of the intrinsic parameters of the camera and the light plane parameters. The calibration of camera intrinsic parameters has been widely studied [8][9][10][11][12][13]; thus, this paper mainly focuses on the calibration of the light plane parameters.
Xie [14] used a planar target with grid lines to calibrate the intrinsic and light plane parameters simultaneously. During the calibrating process, the intersection points between the grid lines of the planar target and laser lines are extracted as calibration points. Liu [15] adopted a ball target with high roundness to calibrate the laser plane. First, the spatial cone equation and the sphere equation of the ball target are solved. Then, the solution of the light plane equation is obtained by nonlinear optimization. Huynh [16] created a V-shape 3D target for laser plane calibration. The sensor is mounted on an AGV to scan the target in calibration. The position of sensor related to the world coordinate frame is known. According to cross-ratio invariability, the laser plane equation can be solved by combining the 3D coordinates of points of the target. Xu [17] employed a flat board target with four balls. The orientation of the board plane is first solved by the four balls, and then the intersection line between the board plane and the laser plane is obtained. The laser plane equation is fitted by these intersection lines. Xie [18] similarly utilized a flat board target with squares pattern and solved the orientation of the board plane by the corner points. Differently, the angle of the board plane and laser plane is computed by an additional raised block on the board target. Wei [19] proposed a method based on a 1D target. The feature points of the target are calculated in the camera coordinate frame using the known distance constraint of target pattern. Then, the nonlinear optimization method is used to solve the plane feature points and the light plane equation can be fitted.
The above methods have achieved good results in laboratory environments, but they are not suitable for railway field application. On-site calibration of a railway wheel size measurement system has the following characteristics: (1) the available calibration time is limited by busy railway operations; (2) the calibration accuracy is influenced by the strong natural light in the outdoor environment; (3) the depth of field of the vision sensors is short, making it difficult to image calibration markers placed at different locations. To achieve fast, high-accuracy on-site calibration of a wheel size measurement system, a new calibration method is demonstrated in this paper, and the above issues in field calibration are solved. This method shortens the calibration time, overcomes the problem caused by the short depth of field, and does not need to extract laser lines, avoiding the influence of natural light. In order to realize the calibration method, a specific calibration device was developed. In calibration, the calibration device is mounted on the rail, and the calibration board plane is manually adjusted to coincide with the light plane. Then, the pixel coordinates of the corner points are extracted, and the fitting equations between image coordinates and laser plane coordinates are calculated. Finally, a calibration revising method based on the epipolar constraint is employed to reduce the calibration error and improve the data fusion effect.
In Section 2, the calibration device and the principle of the proposed calibration method are introduced. In Section 3, a corner extraction method for calculating the calibration parameters is proposed, and the calibration errors caused by the extraction process are analyzed. Then, the revising method of the calibration parameters is described in Section 4. The epipolar constraint is used to find matching points, laying a foundation for establishing constraint equations in the calibration parameter calculation. In Section 5, the results of the physical experiment are presented, and the calibration accuracy is evaluated by comparison. Finally, conclusions are drawn in Section 6.

Figure 1 illustrates the setup of calibration by our method. Sensor 1 and sensor 2 are both line-structured light vision sensors; the two laser planes are carefully adjusted to be coincident for measuring the wheel tread size together. This on-site wheel tread size measurement system is demonstrated in our previous work [7]. The system can reach 0.11 mm theoretical measurement accuracy at the designed 300 mm working distance. The maximum frame rate of the camera is 20 fps, which is enough to meet the requirement of dynamic measurement under 48 km/h. When the train passes, the photoelectric switch triggers the camera to grab images. Then, the images are transmitted to computers and processed to extract the laser stripes. Here, the purpose of the calibration is to establish a criterion for transforming the laser stripes into three-dimensional reconstructed profiles.

Calibration Principle
The calibration device is composed of a magnetic holder, a calibration board and an adjustable bracket consisting of multiple cardan joints. The adjustable bracket allows the calibration board to move and rotate in space and to be fixed when the adjustment is finished. During calibration, the magnetic holder is fixed on the rail and the plane of the calibration plate is placed to coincide with the light plane by adjusting the adjustable bracket. In the experiment, the laser light covering the whole board can be taken as a sign that the coincidence degree of the two planes meets the requirements. The calibration method only needs one shot of the calibration plate: the image of the calibration pattern is obtained by the camera, and the corner points in the calibration pattern are extracted.

The schematic of the line-structured light vision sensor is exhibited in Figure 2. In this figure, O_w x_w y_w z_w represents the world coordinate frame (WCF), O_c x_c y_c z_c indicates the camera coordinate frame (CCF), and O_u x_u y_u refers to the image coordinate frame (ICF). Assume that an arbitrary point P_w = [x_w, y_w, z_w, 1]^T in WCF has a projection P_u = [x_u, y_u, 1]^T in ICF. According to the camera imaging model and disregarding distortion, this can be expressed as:

s P_u = A [R t] P_w,

where s denotes the scale factor, A is the camera's intrinsic parameter matrix, and R and t refer to the rotation matrix and translation vector from WCF to CCF, respectively. The equation realizes the transformation from WCF to ICF. In order to reconstruct a 3D profile of the measured object, the equation is combined with the light plane equation to transform a coordinate from ICF to WCF.

When line-structured light vision sensors are applied to measure the object size, the choice of WCF is arbitrary. The O_w x_w y_w plane of WCF can therefore be set as the light plane π. Thus, when reconstructing the 3D profile, z_w = 0. The functional relationship between (x_w, y_w) and (x_u, y_u) can be simply expressed as x_w ~ (x_u, y_u), y_w ~ (x_u, y_u), which can be obtained by fitting. In this paper, the selected polynomial basis is:

x_w = Σ_{i+j≤m} a_{ij} x_u^i y_u^j,   y_w = Σ_{i+j≤m} b_{ij} x_u^i y_u^j,

where m indicates the highest power of the polynomial. In our proposed calibration method, the calibration board plane is adjusted to coincide with the light plane. Therefore, a set of (x_{w,i}, y_{w,i}) and (x_{u,i}, y_{u,i}) used for fitting can be obtained from the manufacturing dimensions of the calibration board and the extraction of corner points. The coefficients of the polynomials are acquired based on the least-squares principle [20]:

t = (V^T V)^{-1} V^T L,

where V represents the Vandermonde matrix, L represents the vector [x_{w,i}]^T or [y_{w,i}]^T, and t denotes the vector of polynomial coefficients.
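As a sketch of this fitting step, the following snippet (an illustration, not the paper's code) builds the Vandermonde matrix for a bivariate basis x_u^i y_u^j with i + j ≤ m and solves for the coefficients by least squares; the affine "projection" used to synthesize pixel coordinates and the grid dimensions are hypothetical:

```python
import numpy as np

def poly_terms(xu, yu, m):
    """Bivariate monomial basis x_u^i * y_u^j with i + j <= m (Vandermonde columns)."""
    return np.column_stack([xu**i * yu**j
                            for i in range(m + 1)
                            for j in range(m + 1 - i)])

def fit_plane_mapping(xu, yu, xw, yw, m=3):
    """Least-squares fit of the ICF -> light-plane mapping, one coefficient
    vector per world coordinate (t = (V^T V)^-1 V^T L)."""
    V = poly_terms(xu, yu, m)
    tx, *_ = np.linalg.lstsq(V, xw, rcond=None)
    ty, *_ = np.linalg.lstsq(V, yw, rcond=None)
    return tx, ty

def apply_mapping(tx, ty, xu, yu, m=3):
    V = poly_terms(xu, yu, m)
    return V @ tx, V @ ty

# Synthetic check: corners of a 5 mm grid seen under a made-up affine "projection".
gx, gy = np.meshgrid(np.arange(0.0, 100.0, 5.0), np.arange(0.0, 100.0, 5.0))
xw, yw = gx.ravel(), gy.ravel()
xu = 100.0 + 9.0 * xw + 0.5 * yw      # hypothetical pixel coordinates
yu = 80.0 + 0.3 * xw + 9.2 * yw
tx, ty = fit_plane_mapping(xu, yu, xw, yw)
rx, ry = apply_mapping(tx, ty, xu, yu)
err = max(np.abs(rx - xw).max(), np.abs(ry - yw).max())
```

Since the synthetic mapping is affine, a degree-3 basis reproduces it essentially exactly; in practice the degree m trades off flexibility against noise sensitivity.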

Corner Extraction and Influence of Image Noise
The Harris corner detection algorithm is widely used to detect corner points in an image. The basic idea of the algorithm is to slide a fixed window over the image and compare the change of gray values in the window before and after sliding: if there is a large gray change for sliding in any direction, there is a corner point in the window. Here, the Harris corner detection algorithm is employed to obtain the preliminary rough image coordinates (x_u0, y_u0) of the corner points, as presented in Figure 3. The precise image coordinates of the corner points can be obtained by the following iterative process [21]:

c_{k+1} = ( Σ_{q∈w} ω(q) G(q) G(q)^T )^{-1} Σ_{q∈w} ω(q) G(q) G(q)^T q,   G(q) = [g_x(x_u, y_u), g_y(x_u, y_u)]^T,

where w represents the detection window with center (x_{u,i}, y_{u,i}), g_x(x_u, y_u) and g_y(x_u, y_u) indicate the gray gradients along the x_u and y_u directions, respectively, and ω(x_u, y_u) denotes the two-dimensional Gaussian distribution function. In most cases, an iterative accuracy of 0.005 pixels can be achieved after two or three iterations. The iterative process is shown in Figure 4.

A standard calibration plate pattern image was generated by a computer program to estimate the accuracy of the corner extraction. Gaussian noise was added to the standard image at noise levels varying from 0 to 40 dB at an interval of 0.1 dB. For each noise level, 50 experiments were conducted, and the extraction error was computed, as shown in Figure 5. It can be seen that the extraction accuracy increases as the noise decreases.

For a real calibration image, there is an inevitable gradual change at the black-and-white boundary due to manufacturing reasons. Therefore, the noise level at corner areas is relatively higher than that at homogeneous areas. To estimate the extraction accuracy of corner points, small areas around corner points were intercepted from the simulated calibration image and the real calibration image, as shown in Figure 6. According to previous studies of image noise estimation [22][23][24], the noise level of the acquired real calibration image at corner areas is equal to that of the simulated calibration image with 35 dB Gaussian noise. Therefore, the extraction accuracy of corner points for our setup is about 0.2 pixels.

The calibration error caused by the image noise is simulated as shown in Figure 7. In the simulation, the calibration plate size, the square size of the calibration plate and the image size were set to 100 × 100 mm, 5 × 5 mm and 1000 × 1000 pixels, respectively. The extraction error of 0.2 pixels was applied in random directions, and then the mean calibration error was calculated. In this experiment, the calibration error caused by the corner extraction error is small in the plate area (x_u and y_u directions in the 0-1000 pixel range) and increases rapidly outside this area. Therefore, the calibration plate should cover the whole measurement range of the sensor to obtain a higher calibration accuracy.

Considering that the image noise level varies with the camera parameters, shutter speed and amount of ambient light, the extraction error also varies in different application environments. Extra simulation experiments were conducted, and the average calibration error in the plate area caused by different extraction errors was calculated. As shown in Figure 8, when the extraction error is up to 0.5 pixels, which only occurs at an extreme image noise level, the average calibration error in the plate area is 0.025 mm. In this case, the calibration setup should be adjusted to obtain a lower image noise level.
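A minimal sketch of a gradient-based sub-pixel corner refinement of this kind (in the spirit of [21]): at an ideal corner, the gray gradient at every window pixel is orthogonal to that pixel's offset from the true corner, which yields a small linear system per iteration. The smooth tanh saddle used as a synthetic corner and all parameter values below are our assumptions, not the paper's:

```python
import numpy as np

def gaussian_window(half, sigma):
    ax = np.arange(-half, half + 1, dtype=float)
    wx, wy = np.meshgrid(ax, ax)
    return np.exp(-(wx**2 + wy**2) / (2 * sigma**2))

def refine_corner(img, x0, y0, half=6, sigma=2.5, iters=5):
    """Iterate c = (sum w*G*G^T)^-1 * sum w*G*G^T*q over a window around the
    current estimate, where G = (g_x, g_y) is the gray gradient at pixel q."""
    gy_img, gx_img = np.gradient(img.astype(float))   # row (y) and column (x) gradients
    w = gaussian_window(half, sigma)
    cx, cy = float(x0), float(y0)
    for _ in range(iters):
        ix, iy = int(round(cx)), int(round(cy))
        xs = np.arange(ix - half, ix + half + 1)
        ys = np.arange(iy - half, iy + half + 1)
        qx, qy = np.meshgrid(xs, ys)
        gx = gx_img[iy - half:iy + half + 1, ix - half:ix + half + 1]
        gy = gy_img[iy - half:iy + half + 1, ix - half:ix + half + 1]
        # Normal equations of  G(q)^T (q - c) = 0  weighted by the Gaussian window.
        A = np.array([[np.sum(w * gx * gx), np.sum(w * gx * gy)],
                      [np.sum(w * gx * gy), np.sum(w * gy * gy)]])
        b = np.array([np.sum(w * (gx * gx * qx + gx * gy * qy)),
                      np.sum(w * (gx * gy * qx + gy * gy * qy))])
        cx, cy = np.linalg.solve(A, b)
    return cx, cy

# Synthetic checkerboard corner (smooth saddle) with a known sub-pixel location.
true_x, true_y = 15.3, 15.7
yy, xx = np.mgrid[0:32, 0:32].astype(float)
img = 0.5 * (1.0 + np.tanh((xx - true_x) / 1.5) * np.tanh((yy - true_y) / 1.5))
cx, cy = refine_corner(img, 15, 16)
```

On this smooth synthetic corner the iteration lands close to the true sub-pixel position; real images add noise and boundary blur, which is exactly what the noise analysis above quantifies.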

Calibration Revising Based on Epipolar Constraint
When the measured object is a train wheel, it has to combine at least two linestructured vision sensors with co-planar laser planes because of the restriction of view angle. In practice, it is difficult to adjust the laser planes to be completely co-planar, and there is always a small angle between them. Thus, the calibration planes cannot be adjusted to be co-planar with both laser planes, leading to a certain calibration error. This calibration error leads to misalignment of reconstructed sections, which will bring problems to further calculation.
In order to decrease these calibration errors, an epipolar constraint-based revising method was employed. First, the matching points of the two acquired images are found by the epipolar constraint. Then, additional constraint equations based on the matching points are added to the calculation of the calibration parameters. The constraint between image points and camera optical centers is formed in the projection model when the same point is projected onto two images with different viewing angles. As shown in Figure 9, the line O_1 O_2 connecting the optical centers of the two cameras is called the baseline, the intersection points of the baseline and the image planes (e_1 and e_2) are called the base points, and the plane O_1 O_2 P is called the polar plane. If the projection points of P on image1 and image2 are denoted as P_1 and P_2, the projection point P_2 must lie on the intersection line e_2 P_2 of the polar plane O_1 O_2 P and the image2 plane. The intersection line e_2 P_2 is called the epipolar line.
The epipolar constraint can be expressed as:

p_k^T F p_i = 0,

where p_k = (x_{u,k}, y_{u,k}, 1) and p_i = (x_{u,i}, y_{u,i}, 1) indicate the projection points on image1 and image2 of the same spatial point. The fundamental matrix F can be solved based on the least-squares principle from the corner points extracted in the calibration process. For a point p_i on image2, the epipolar line L_1 on image1 can be expressed as:

L_1 = F p_i.

Regarding a certain object captured by the line-structured light vision sensors, the point p_k on image1 corresponding to the point p_i on image2 must be the intersection point of the laser stripe on image1 and the epipolar line L_1, which is useful for finding matching points. In the experiment, the captured object is the train wheel. Based on these matching points, constraint equations are introduced into Equation (3): they require the fitted mappings of the two sensors to reconstruct each pair of matching points to the same light plane coordinates.

The matching points found according to the epipolar constraint are exhibited in Figure 10a together with the corresponding epipolar lines. The results of the calibration revising process are presented in Figure 10b. Since the matching points are introduced into the calculation of the calibration parameters, the corresponding parts of the reconstructed profiles become coincident. After choosing enough proper matching points, the reconstructed profiles of the two sensors are coincident, making further calculation more accurate.
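The least-squares solution of F and the epipolar-line computation can be sketched as follows. This is an illustration only: the stereo geometry (focal length, rotation, translation) is made up, and the solve is an unnormalized eight-point-style fit on noise-free synthetic, non-coplanar points (strictly coplanar points would make F degenerate, so a practical implementation may need extra care):

```python
import numpy as np

def estimate_F(p1, p2):
    """Least-squares fundamental matrix from matched homogeneous points
    (p2^T F p1 = 0), via the SVD null vector of the stacked constraints."""
    A = np.column_stack([
        p2[:, 0]*p1[:, 0], p2[:, 0]*p1[:, 1], p2[:, 0],
        p2[:, 1]*p1[:, 0], p2[:, 1]*p1[:, 1], p2[:, 1],
        p1[:, 0],          p1[:, 1],          np.ones(len(p1)),
    ])
    _, _, vt = np.linalg.svd(A)
    return vt[-1].reshape(3, 3)

def project(P, X):
    x = (P @ np.column_stack([X, np.ones(len(X))]).T).T
    return x / x[:, 2:3]

# Hypothetical stereo rig: camera2 rotated and translated relative to camera1.
K = np.array([[3600.0, 0, 618], [0, 3600.0, 813], [0, 0, 1]])
ang = np.deg2rad(25)
R = np.array([[np.cos(ang), 0, np.sin(ang)],
              [0, 1, 0],
              [-np.sin(ang), 0, np.cos(ang)]])
t = np.array([[-120.0], [5.0], [10.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])

rng = np.random.default_rng(0)
X = rng.uniform([-50, -50, 250], [50, 50, 350], size=(20, 3))  # non-coplanar points
p1, p2 = project(P1, X), project(P2, X)
F = estimate_F(p1, p2)

# Constraint residual p2^T F p1 and epipolar line on image1 for a point on image2.
residual = np.max(np.abs(np.sum(p2 * (p1 @ F.T), axis=1)))
L1 = F.T @ p2[0]            # p1[0] lies on this line: p1[0] . L1 = 0
on_line = abs(p1[0] @ L1)
```

Intersecting L1 with the extracted laser stripe on image1 then yields the matching point, as described above.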

Physical Experiment
The line-structured light vision sensor-based wheel size measurement system was introduced in our previous paper [7]. In the experiment, the wheel size measurement system was calibrated by the proposed calibration method and a comparison method.
The calibration arrangement is shown in Figure 11. The two laser planes are carefully adjusted to make them as coplanar as possible. The pixel size of the camera is 4.4 × 4.4 μm, the image resolution is 1236 × 1626 pixels, and the lens focal length is 16 mm. The cameras have a FOV of 180 × 135 mm at a working distance of 300 mm. The size of the calibration plate is 160 × 60 mm, the square size is 5 × 5 mm, and the manufacturing precision is 0.003 mm. Furthermore, our proposed method is compared with another method based on Zhang's method [25] to verify its efficiency.
In the first experiment, the line-structured light vision sensor is calibrated by our proposed method. The plane of the calibration plate is adjusted to coincide with the light plane, and the calibration pattern is adjusted to contain the measuring area, so as to reduce the calibration error caused by the corner extraction error. The calibration polynomial coefficients before and after epipolar constraint revising are displayed in Table 1.

In the second experiment, the calibration plate is placed at 12 different locations and orientations. The cameras grab two images each time: one shot under natural light and the other under laser light. The intrinsic parameters of the camera are solved by Zhang's method using the images under natural light, and the extrinsic parameters (representing the locations and orientations of the calibration plate) are also calculated. The laser stripes in the images are extracted, and the coordinates of the laser stripes in CCF can be solved according to the extrinsic parameters. Then, the laser plane equation in CCF can be obtained by fitting these coordinates to a plane, and the calibration is completed. The images used for the calibration are displayed in Figure 12. The extrinsic parameters and the fitted laser plane are illustrated in Figure 13. The calibration results are exhibited in Table 2.

Furthermore, a planar target with grid lines in the horizontal direction is adopted to compare the two calibration methods.
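The plane-fitting step of this comparison method can be sketched as below, using the standard centroid-plus-SVD least-squares plane fit; the plane parameters and noise level in the synthetic data are made up for illustration:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through 3D points: returns unit normal n and offset d
    with n . x + d = 0, via SVD of the centered coordinates."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    n = vt[-1]                       # direction of least variance = plane normal
    return n, -n @ centroid

# Synthetic laser-stripe points near the plane z = 0.5x + 0.2y + 300 (CCF, mm).
rng = np.random.default_rng(1)
xy = rng.uniform(-50, 50, size=(300, 2))
z = 0.5 * xy[:, 0] + 0.2 * xy[:, 1] + 300 + rng.normal(0, 0.05, 300)
pts = np.column_stack([xy, z])
n, d = fit_plane(pts)
n_true = np.array([0.5, 0.2, -1.0]); n_true /= np.linalg.norm(n_true)
angle_err = np.degrees(np.arccos(min(1.0, abs(n @ n_true))))
```

The SVD fit minimizes the orthogonal distances to the plane, which is the natural error metric when the stripe points come from several plate poses.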
The target is placed in the measuring region of the line-structured light vision sensor three times with different orientations. The coordinates of the intersection points between the laser stripe and the grid lines are extracted from the images. These coordinates are transformed to CCF or WCF by the two calibration methods separately. Then, the distances between intersection points and the angle between the laser stripe and the grid lines are calculated. Additionally, the widths of the grid lines on the planar target are calculated as w_m. The fabricated widths of the grid lines, with a precision of 0.01 mm, are regarded as the ideal widths w_i. In this experiment, three pairs of intersection points on the planar target are selected each time. The comparison of w_m and w_i is displayed in Table 3. The calibration accuracy of the proposed method before epipolar constraint revising is approximately 0.052 and 0.057 mm over a measurement range of 150 × 50 mm on camera 1 and camera 2, respectively. After epipolar constraint revising, the calibration accuracy of the proposed method is improved to 0.034 and 0.033 mm. Moreover, the calibration accuracy of the compared method in experiment 2 is 0.048 mm. As revealed by checking the used images, the calibration accuracy of the compared method is relatively low compared with that of the proposed method, due to the image blur caused by the short depth of field.
To verify the reproducibility of our method, the calibration device was removed and then reinstalled four times. Each time, the calibration parameters were recalculated and revised by the epipolar constraint. The relative errors compared to experiment 1 at different pixel coordinates are shown in Figure 14. The maximal relative error of the four measurements is 0.008 mm; that is, the repeatability error is within 0.016 mm.


Conclusions
The coordinates of the calibration plate can represent the coordinates of the laser plane when the calibration plate plane coincides with the laser plane. Based on this feature, a fast line-structured light vision sensor calibration method is proposed in this paper. In addition, the calibration error is revised based on the epipolar constraint to improve the accuracy of calibration. The basic principle and the implementation of the proposed method are described in detail. Then, the proposed method is validated by experiments.
The advantages of the proposed method are as follows. (1) The proposed method is easy to perform and time-saving, making it suitable for line-structured light vision sensors used in special environments that are hard to maintain, such as the railway site. (2) The proposed method does not need to extract laser lines in images and can adapt to outdoor environments under strong natural light. (3) The proposed method avoids the image blur caused by the short depth of field, since one image at the working distance is enough to accomplish the calibration process.

Funding: This research was funded by the National Natural Science Foundation of China (no. 51935002) and the introduced innovative R&D team of Dongguan: "Train wheelset geometric parameters intelligent testing and entire life-cycle management system development and industrial application innovative research team" (no. 201536000600028).