Effective Data-Driven Calibration for a Galvanometric Laser Scanning System Using Binocular Stereo Vision

A new solution to the problem of galvanometric laser scanning (GLS) system calibration is presented. Under the machine learning framework, we build a single-hidden layer feedforward neural network (SLFN) to represent the GLS system, which takes the digital control signal at the drives of the GLS system as input and the space vector of the corresponding outgoing laser beam as output. The training data set is obtained with the aid of a moving mechanism and a binocular stereo system. The parameters of the SLFN are efficiently solved in closed form by using the extreme learning machine (ELM). By quantitatively analyzing the regression precision with respect to the number of hidden neurons in the SLFN, we demonstrate that the number of hidden neurons can be safely chosen from a broad interval while still guaranteeing good generalization performance. Compared to traditional model-driven calibration, the proposed calibration method does not need a complex modeling process and is more accurate and stable. As the output of the network is the space vectors of the outgoing laser beams, it costs much less training time and provides a uniform solution to both laser projection and 3D reconstruction, in contrast to the existing data-driven calibration method, which only works for the laser triangulation problem. Calibration, projection and 3D reconstruction experiments are conducted to test the proposed method, and good results are obtained.


Introduction
A typical galvanometric laser scanner consists of two rotatable mirrors, each driven by a limited-rotation motor. The incoming laser beam is deflected by the mirrors, the orientations of which are uniquely determined by the control voltages applied to the two motors, so there exists a one-to-one mapping between the two input voltage signals and the outgoing laser beam.
These GLS-based applications can be classified into two categories: forward applications and backward applications. In forward applications, e.g., laser triangulation scanning, pre-defined control voltages are input and the 3D coordinates of the laser spot where the outgoing beam hits the object surface need to be solved, whereas in backward applications, e.g., laser material processing and laser marking, the position of the laser spot on the object surface is pre-defined and the control voltages need to be solved accordingly. An essential problem involved in both forward and backward applications is the calibration of the GLS system, i.e., determining the mapping between the control voltages and the outgoing laser beams.
The calibration hardware consists of the GLS system, a moving mechanism and a binocular stereo system. The control board governs the opening or closing of the laser transmitter and the rotation angles of the dual mirrors. A smooth panel, a stepping motor, a ball screw and a microcontroller constitute the moving mechanism. The panel, coated with black flat lacquer, is fixed on the ball screw by a special clamp. The microcontroller is in charge of the stepping motor control, and the panel can translate through the viewing field of the binocular system as the ball screw is rotated by the stepping motor. With the rapid deflection of the galvanometer, the laser beam forms a grid of laser spots on the panel. The 3D coordinates of the laser spots are obtained by the binocular system for the calibration. All the signals controlling the three parts above are sent by the same computer. Figure 2 illustrates the specific hardware used in the calibration experiment. The GLS system makes use of an economical 520 nm semiconductor laser and a TSH8050A/D galvanometer (Century Sunny, Beijing, China). Both the laser transmitter and the galvanometric scanning head are controlled by a GT-400-Scan control board (GuGao, Shenzhen, China).
The binocular system consists of two MG 419B CMOS cameras (Schneider-Kreuznach, Bad Kreuznach, Germany), two 35 mm lenses and a tripod. The software for completing the whole calibration process runs on a personal computer with a 3.1 GHz CPU and 8 GB RAM.

Calibration of GLS System
For the convenience of description, we first introduce some symbols used in this paper. We use O_c-X_cY_cZ_c to represent the camera coordinate system. Denote the digital voltage value applied to the motor of the first mirror as d_x, and that of the second mirror as d_y. The symbol d represents the two-dimensional (2D) digital control voltage vector [d_x, d_y]^T. The outgoing laser beam corresponding to a specific d is represented by l.

The SLFN Model
Considering the complexity of the GLS system, we treat the system modelling as a machine learning problem. More specifically, an SLFN as shown in Figure 3 is used to model the 2D-to-6D mapping relationship M : d -> V. There are two neurons in the input layer, which are the voltage signals d_x and d_y, respectively. The output layer contains six neurons, which are the components v_1, v_2, v_3, v_4, v_5 and v_6 of V, respectively. Denote the number of neurons in the hidden layer as L.


Figure 3. SLFN (single-hidden layer feedforward neural network) structure of the 2D-to-6D mapping, with two input neurons, L hidden neurons and six output neurons.
The model of M : d -> V is formulated as:

V = sum_{j=1}^{L} beta_j g(w_j . d + b_j),  (1)

where w_j = [w_j1, w_j2]^T is the input weight vector connecting the j-th hidden neuron and the input neurons, beta_j = [beta_j1, beta_j2, beta_j3, beta_j4, beta_j5, beta_j6]^T is the output weight vector connecting the j-th hidden neuron and the output neurons, and b_j is the bias of the j-th hidden neuron. The activation function g(x) is taken as the sigmoid function:

g(x) = 1 / (1 + e^(-x)).  (2)

After having completed the training of the SLFN model, the V corresponding to an arbitrary d can be easily obtained by Equation (1). The following two subsections describe the training method in detail.
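The forward evaluation of Equation (1) can be sketched in a few lines of NumPy. The parameter shapes (L x 2 input weights, L biases, L x 6 output weights) follow the definitions above; the random values are placeholders for illustration only.

```python
import numpy as np

def sigmoid(x):
    """Sigmoid activation g(x) of Equation (2)."""
    return 1.0 / (1.0 + np.exp(-x))

def slfn_forward(d, W, b, beta):
    """Evaluate Equation (1): V = sum_j beta_j * g(w_j . d + b_j).

    d: (2,) control voltages, W: (L, 2) input weights,
    b: (L,) biases, beta: (L, 6) output weights. Returns the 6D vector V.
    """
    h = sigmoid(W @ d + b)   # (L,) hidden-layer activations
    return h @ beta          # (6,) weighted sum over hidden neurons

# Random placeholder parameters, for shape checking only.
rng = np.random.default_rng(0)
L = 5
W = rng.normal(size=(L, 2))
b = rng.normal(size=L)
beta = rng.normal(size=(L, 6))
V = slfn_forward(np.array([0.1, -0.2]), W, b, beta)
```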

Generating Training Data
As shown in Figure 4, a set of control voltages d_k = [d_kx, d_ky]^T (k = 1, 2, ..., Q) is sent to control the GLS system to project a grid of laser spots onto the panel. At the same time, the panel translates through the field-of-view of the binocular system, stopping at every position P_i, i = 1, 2, ..., N. The left camera records the image I_Li of the laser spot grid at position P_i, and the right camera records the image I_Ri. From the extracted image coordinates (u_Li^k, v_Li^k) of the laser spot p_i^k in I_Li and the coordinates (u_Ri^k, v_Ri^k) in I_Ri, the 3D coordinates (x_i^k, y_i^k, z_i^k) of p_i^k are computed based on the binocular stereo vision algorithm. Then the 6D vector V_k can be estimated by fitting a line to the 3D points p_i^k, i = 1, ..., N, which share a common index k and therefore lie on the same laser beam l_k. The fitting is performed by minimizing the error measure in Equation (3):

E_k = sum_{i=1}^{N} (d_i^k)^2,  (3)

where d_i^k is the distance from the point p_i^k to the laser beam l_k. Given the existence of outliers in {p_i^k, i = 1, ..., N}, the RANSAC method [25] is used to improve the fitting precision of the line. In this way, we obtain the 6D vectors V_k, k = 1, 2, ..., Q of all the outgoing laser beams. Associating V_k, k = 1, 2, ..., Q with the corresponding input control voltages d_k = [d_kx, d_ky]^T, k = 1, 2, ..., Q, the training data set (d_k, V_k), k = 1, 2, ..., Q is fully obtained.
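The line-fitting step of Equation (3), including a RANSAC screening against outliers, can be sketched as follows. The least-squares fit is the principal component of the centered points; the tolerance and iteration count below are illustrative choices, not values from the paper.

```python
import numpy as np

def fit_line_3d(points):
    """Least-squares 3D line fit: returns (point_on_line, unit_direction).

    Minimizes the sum of squared point-to-line distances (Equation (3))
    via the principal component of the centered points.
    """
    P = np.asarray(points, float)
    c = P.mean(axis=0)
    _, _, vt = np.linalg.svd(P - c)
    return c, vt[0]

def point_line_dist(p, c, u):
    """Distance from point p to the line through c with unit direction u."""
    w = p - c
    return np.linalg.norm(w - (w @ u) * u)

def fit_line_ransac(points, n_iter=200, tol=0.5, rng=None):
    """Minimal RANSAC wrapper: sample point pairs, refit on the best inlier set."""
    rng = rng or np.random.default_rng(0)
    P = np.asarray(points, float)
    best = None
    for _ in range(n_iter):
        i, j = rng.choice(len(P), size=2, replace=False)
        u = P[j] - P[i]
        u = u / np.linalg.norm(u)
        d = np.array([point_line_dist(p, P[i], u) for p in P])
        inliers = d < tol
        if best is None or inliers.sum() > best.sum():
            best = inliers
    return fit_line_3d(P[best])
```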



Solving the SLFN Model
The gradient-descent-based methods are commonly used for training neural networks. These methods tune all the parameters w_j, b_j and beta_j (j = 1, 2, ..., L) of the network iteratively over many steps. The gradient computation burden is large, since the number of parameters is usually huge, so these training processes are time-consuming and may even converge to local minima. To efficiently establish the 2D-to-6D mapping M : d -> V, we incorporate the extreme learning machine (ELM) [22,23] to solve the model formulated in Equation (1).
Given Q arbitrary samples (d_k, V_k), k = 1, 2, ..., Q, the SLFN with L hidden neurons in Equation (1) should satisfy:

sum_{j=1}^{L} beta_j g(w_j . d_k + b_j) = V_k, k = 1, 2, ..., Q.  (5)

Equation (5) can be written compactly as:

H_{w,b,d} beta = S^T,  (6)

where H_{w,b,d} is the Q x L hidden-layer output matrix with entries g(w_j . d_k + b_j), beta = [beta_1, ..., beta_L]^T is the L x 6 output weight matrix, and S = [V_1, ..., V_Q] collects the target vectors. Given any small positive value epsilon > 0 and randomly chosen w_j and b_j, as long as the activation function g(x) is infinitely differentiable, there exist L <= Q hidden nodes such that for the Q arbitrary samples ||H_{w,b,d} beta - S^T|| < epsilon holds [24]. Based on this conclusion, we randomly assign the input weights w_j, j = 1, 2, ..., L and the hidden-layer biases b_j, j = 1, 2, ..., L. Then the hidden-layer output matrix H_{w,b,d} in Equation (6) is fully determined, and the model in Equation (1) can simply be considered as a linear system. The output weight matrix beta is determined by:

beta = H+_{w,b,d} S^T,

where H+_{w,b,d} is the generalized inverse matrix of H_{w,b,d}. This is the smallest-norm least-squares solution to the linear system. The solved beta, together with the randomly chosen w_j and b_j, completely determines the model in Equation (1).
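A minimal sketch of the ELM solution: random input weights and biases, the hidden-layer output matrix H, and a pseudoinverse solve for beta. The synthetic smooth 2D-to-6D data below stands in for the real training set.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_elm(D, S, L, rng=None):
    """Closed-form ELM training: beta = pinv(H) @ S (cf. Equation (6)).

    D: (Q, 2) input voltages, S: (Q, 6) target beam vectors, L: number
    of hidden neurons. Input weights and biases are drawn at random and
    never tuned, which is what makes the solve non-iterative.
    """
    rng = rng or np.random.default_rng(0)
    W = rng.uniform(-1.0, 1.0, size=(L, 2))   # random input weights w_j
    b = rng.uniform(-1.0, 1.0, size=L)        # random biases b_j
    H = sigmoid(D @ W.T + b)                  # (Q, L) hidden-layer output matrix
    beta = np.linalg.pinv(H) @ S              # smallest-norm least-squares solution
    return W, b, beta

def predict(D, W, b, beta):
    return sigmoid(D @ W.T + b) @ beta

# Synthetic smooth 2D -> 6D data standing in for the real training set.
rng = np.random.default_rng(1)
D = rng.uniform(-1.0, 1.0, size=(200, 2))
S = np.column_stack([np.sin(D @ rng.normal(size=2)) for _ in range(6)])
W, b, beta = train_elm(D, S, L=40)
err = np.sqrt(np.mean((predict(D, W, b, beta) - S) ** 2))
```

Because the seeded generator fixes w_j and b_j, retraining on the same data reproduces the same beta, which is convenient for regression-testing a calibration pipeline.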

Validations
In order to verify the accuracy and efficiency of the proposed calibration method, a cross validation experiment and a target shooting experiment are performed. Moreover, we present both forward and backward applications based on the calibrated mapping M : d -> V. These applications demonstrate the wide applicability of the proposed method.

Cross Validation
Nine hundred pairs of digital voltages d_k = [d_kx, d_ky]^T (k = 1, 2, ..., 900) are input to generate 900 outgoing laser beams in the field-of-view of the binocular system. The 900 pairs of digital voltages are uniformly spaced on the virtual digital plane as shown in Figure 5. According to the method in Section 2.2.2, the sample data set (d_k, V_k) (k = 1, 2, ..., 900) is obtained. Here the number of panel positions N is set to 100. The distance between the first position P_1 and the last position P_N is about 1 m, and the first position P_1 is approximately 2 m away from the GLS system. With the help of the moving mechanism, the whole sampling procedure costs less than 10 min.
To evaluate the performance of the new calibration method, 10-fold cross validation is adopted. We randomly draw one tenth of the sample data as the test data, and use the rest of the sample data to train the SLFN model by means of the method in Section 2.2.3. For arbitrary input voltages d_j^tst = [d_jx^tst, d_jy^tst]^T, j = 1, 2, ..., 90 in the test data set, the corresponding outgoing beam l_j^tst is fitted with the process in Section 2.2.2. Meanwhile, we calculate a 6D vector denoted as V_j^M for every d_j^tst through the established mapping M : d -> V. If the error of the established mapping M : d -> V tends to zero, the beam l_j^tst and the beam l_j^M determined by V_j^M should be the same line. However, directly evaluating the difference between l_j^tst and l_j^M is difficult, so an equivalent evaluation method is introduced here.
The corresponding input digital voltages d_j^M = [d_jx^M, d_jy^M]^T of the spatial beam l_j^M are given by:

d_j^M = argmin_d D_j(d),  (7)

where D_j(d) is the distance from the point p_j^tst to the laser beam l_d corresponding to the digital voltages d = [d_x, d_y]^T, and p_j^tst is a point on the beam l_j^tst. The difference between d_j^M and d_j^tst is denoted as epsilon_j and defined by Equation (8). In the ideal situation that l_j^tst and l_j^M exactly coincide, epsilon_j is a zero vector:

epsilon_j = d_j^M - d_j^tst.  (8)

Then we use the root mean squared error (RMSE) measure in Equation (9) to estimate the error of the calibrated mapping M : d -> V. Obviously, the smaller the value of E_d is, the higher the accuracy of the calibration is:

E_d = sqrt( (1/Q_tst) sum_{j=1}^{Q_tst} ||epsilon_j||^2 ),  (9)

where Q_tst is the number of the test samples.
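The argmin over the two voltage channels has no closed form, so some numerical search is needed. The sketch below substitutes a toy beam model for the calibrated SLFN mapping (the 1e-4 rad-per-count scale, the zero origin and the search window are assumptions) and uses a simple coarse-to-fine grid search; the paper does not state which optimizer was actually used.

```python
import numpy as np

def beam_from_voltages(d):
    """Toy stand-in for the calibrated mapping M : d -> V.

    Returns (origin, unit direction) of the outgoing beam. A real system
    would evaluate the trained SLFN here; the 1e-4 rad-per-count scale
    and the zero origin are purely illustrative assumptions.
    """
    ax, ay = 1e-4 * d[0], 1e-4 * d[1]
    u = np.array([np.tan(ax), np.tan(ay), 1.0])
    return np.zeros(3), u / np.linalg.norm(u)

def point_beam_distance(p, d):
    """D(d): distance from the 3D point p to the beam of voltages d."""
    o, u = beam_from_voltages(d)
    w = p - o
    return np.linalg.norm(w - (w @ u) * u)

def voltages_for_point(p, lo=-15000.0, hi=15000.0, levels=33, rounds=8):
    """Coarse-to-fine grid search for d* = argmin_d D(d)."""
    cx = cy = 0.5 * (lo + hi)
    half = 0.5 * (hi - lo)
    for _ in range(rounds):
        xs = np.linspace(cx - half, cx + half, levels)
        ys = np.linspace(cy - half, cy + half, levels)
        _, cx, cy = min((point_beam_distance(p, np.array([x, y])), x, y)
                        for x in xs for y in ys)
        half /= (levels - 1) / 4.0   # shrink, keeping ~2 grid cells of slack
    return np.array([cx, cy])
```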


In the process of solving the SLFN model by the method mentioned in Section 2.2.3, an inevitable problem is the determination of the number L of hidden neurons in the generalized SLFN. To investigate the influence of the hidden neuron number L on the calibration accuracy, we also tested the variation tendency of the regression error E_d as L increases. Considering the possible influence of the training sample number Q on this tendency, three groups of test experiments with different numbers of training samples (Q = 810, 405, 270) are conducted.

Target Shooting
To further verify the proposed calibration method, we design a pattern of 49 target circles (shown in Figure 6a) to be shot by the laser beam of the calibrated GLS system. The pattern is placed in the field-of-view of the binocular system, and the 3D coordinates [x_n^dst, y_n^dst, z_n^dst]^T, n = 1, 2, ..., 49 of the circle centers p_n^dst, n = 1, 2, ..., 49 are obtained by the binocular system. By using the calibrated mapping M : d -> V and the coordinates p_n^dst, n = 1, 2, ..., 49, the digital voltages d_n^dst = [d_nx^dst, d_ny^dst]^T, n = 1, 2, ..., 49 are achieved by:

d_n^dst = argmin_d D_n(d),  (10)

where D_n(d) is the distance from the 3D point p_n^dst to the spatial beam l_d, which is the laser beam corresponding to the digital voltages d = [d_x, d_y]^T. Utilizing these digital voltages, we control the GLS system to shoot the target circles as shown in Figure 6b. The coordinates of the laser spot centers p_n^spot, n = 1, 2, ..., 49 shot on the pattern are also obtained by the binocular system. Then we use the standard deviation of the distances between p_n^spot and p_n^dst, S_d, to measure the shooting precision as shown in Figure 6c.
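The shooting-precision measure can be computed directly from the two sets of centers. Since the exact form of S_d is not fully specified here, the sketch below takes it as the RMS of the spot-to-target distances, which should be read as an assumption.

```python
import numpy as np

def shooting_error_stats(p_spot, p_dst):
    """Per-target shooting errors and a summary precision value.

    p_spot, p_dst: (N, 3) arrays of shot-spot centers and target-circle
    centers. S_d is taken here as the RMS of the spot-to-target
    distances -- one reading of the 'standard deviation of the
    distance', so treat it as an assumption.
    """
    e = np.linalg.norm(np.asarray(p_spot) - np.asarray(p_dst), axis=1)
    return e, float(np.sqrt(np.mean(e ** 2)))
```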

3D Reconstruction
3D reconstruction is representative of the forward applications of a GLS system. In this application, pre-defined input digital voltages are known and the 3D coordinates of the laser spots hitting the object need to be solved. More specifically, digital voltages d_m^dst = [d_mx^dst, d_my^dst]^T, m = 1, 2, ..., Q_m (Q_m is the number of the digital signals) are input to control the GLS system to project Q_m laser spots onto the surface of an object to be reconstructed in 3D space; then the image of the laser spots is taken by the left camera. Using the extracted pixel coordinates [u_m^dst, v_m^dst]^T of the laser spots and the intrinsic parameters of the left camera, we get a set of straight lines l_m^Cam through the origin of the camera coordinate system by Equation (11):

V_m^Cam = A^(-1) [u_m^dst, v_m^dst, 1]^T,  (11)

where A is the intrinsic parameter matrix of the left camera, and V_m^Cam is the direction vector of l_m^Cam. The vector of the laser beam l_m going through the spot is easily obtained by substituting the input digital voltages d_m^dst into the calibrated mapping M : d -> V. We then use the coordinates [x_m, y_m, z_m]^T of the midpoint of the common perpendicular between l_m^Cam and l_m to represent the intersection of the two beams, i.e., the 3D coordinates of the laser spot. In this way, we get the coordinates [x_m, y_m, z_m]^T, m = 1, 2, ..., Q_m of all the laser spots on the surface.
In the real 3D reconstruction experiment, we choose the surface of an engine blade as the reconstructed object since it is a free-form surface. In addition, 3D reconstruction of the engine blade by the data-driven triangulation method [20] is also implemented for comparison. To evaluate the reconstruction accuracy, we use the commercial ATOS system (GOM, Brunswick, Germany) to measure the surface in advance. The measurement accuracy of the ATOS system is high (0.03 mm), so we approximately take its measuring result as the ground truth for comparison. By the iterative closest point (ICP) method [26], we achieve the registration between the reconstructed 3D points and the ATOS measurement. Then we calculate the root mean square error (RMSE) by Equation (12) to measure the reconstruction error:

RMSE = sqrt( (1/Q_m) sum_{m=1}^{Q_m} epsilon_m^2 ),  (12)

where epsilon_m represents the distance between the 3D point p_m, m = 1, 2, ..., Q_m and the closest point in the point cloud measured by the ATOS system after the ICP registration.
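The two geometric steps above, back-projecting a pixel through the intrinsic matrix A and intersecting the camera ray with the calibrated laser beam via the midpoint of their common perpendicular, can be sketched as:

```python
import numpy as np

def backproject(A, uv):
    """Camera-ray direction of Equation (11): A^(-1) [u, v, 1]^T, normalized."""
    v = np.linalg.solve(A, np.array([uv[0], uv[1], 1.0]))
    return v / np.linalg.norm(v)

def midpoint_common_perpendicular(o1, u1, o2, u2):
    """Midpoint of the common perpendicular between two 3D lines.

    Lines are x = o1 + s*u1 and x = o2 + t*u2. Solves the standard
    closest-point system and returns the midpoint of the two closest
    points, used as the 'intersection' of camera ray and laser beam.
    """
    o1, u1 = np.asarray(o1, float), np.asarray(u1, float)
    o2, u2 = np.asarray(o2, float), np.asarray(u2, float)
    w = o1 - o2
    a, b, c = u1 @ u1, u1 @ u2, u2 @ u2
    d, e = u1 @ w, u2 @ w
    denom = a * c - b * b          # ~0 when the lines are near-parallel
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    return 0.5 * ((o1 + s * u1) + (o2 + t * u2))
```

For near-parallel lines the denominator approaches zero and the midpoint is ill-defined, which in practice corresponds to a laser beam nearly collinear with the camera ray.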


Projection Positioning
Laser projection positioning is representative of the backward applications of a GLS system. In this application, the position of the laser spot on the 3D object is pre-defined and the digital control voltages need to be solved accordingly. In this validation, a series of laser spots needs to be projected onto the edge contour of an object by the calibrated GLS system. The 3D coordinates of the visual feature points placed on the object and the CAD model of the object in the object coordinate system are measured in advance. Analogously to Equation (10), the digital voltages are obtained by minimizing D_e(d), where D_e(d) is the distance from the 3D point p_e^dst, e = 1, 2, ..., Q_e to the spatial beam l_d; the solved voltages are then sent to the GLS system to realize the laser projection positioning of the edge contour of the object.
In the real projection positioning experiment, we choose the edge of an engine blade as the target to be projected since the edge is a 3D free-form curve. The CAD contour of the edge is discretized into 131 points, and the 3D coordinates of these discrete points in the binocular coordinate system are obtained through the feature points placed on the surface of the engine blade. The engine blade is about 2.5 m away from the GLS system.

Results
The results of the laboratory experiments and the accuracy tests for cross validation, target shooting, 3D reconstruction and projection positioning are illustrated in this section.

Cross Validation Experiment
The test results of the influence of the hidden neuron number L on the calibration accuracy are shown in Figure 7. According to the theorem proved by Huang et al. [21,22], the training error is zero when the hidden neuron number L is equal to the training sample number Q. However, this cannot ensure good generalization performance of the established SLFN, i.e., it cannot guarantee a low regression error E_d. Figure 7a,b shows that the regression error E_d rapidly increases when L exceeds 300, which indicates that the established SLFN model can be seriously overfitted by improperly increasing the hidden neuron number L. On the other hand, an insufficient hidden neuron number may lead to an underfitting problem, as shown in Figure 7a. Once L is selected, the structure of the SLFN model of the GLS system is determined. With the determined SLFN and the training data set (d_k^train, V_k^train) (k = 1, 2, ..., 810), the mapping M : d -> V of the GLS system is finally determined. When M : d -> V is calibrated, the actual distribution of epsilon_j = [epsilon_jx, epsilon_jy]^T, j = 1, 2, ..., 90 is shown in Figure 8. According to the XY2-100 protocol of the GLSs we used, the voltage signals d = [d_x, d_y]^T are dimensionless quantities. The range of d_x and d_y is -32768 to 32767, and the rotation angle range of each mirror is -12.5 deg to 12.5 deg. As shown in Figure 8, the maximum errors epsilon_jx and epsilon_jy (j = 1, 2, ..., 90) are -0.957 and -1.04 respectively, which lead to 6.37 urad and 6.91 urad rotation errors for the two GLS mirrors, respectively.
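The count-to-angle conversion behind the 6.37 urad and 6.91 urad figures can be checked directly: the 25 deg mirror range spans 65536 digital levels.

```python
import math

counts = 32767 - (-32768) + 1          # 65536 digital levels (XY2-100)
angle_range_deg = 12.5 - (-12.5)       # 25 deg of mirror rotation
urad_per_count = math.radians(angle_range_deg) / counts * 1e6

err_x = abs(-0.957) * urad_per_count   # max voltage error on d_x
err_y = abs(-1.04) * urad_per_count    # max voltage error on d_y
# err_x is about 6.37 urad and err_y about 6.9 urad, consistent with the text.
```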
To investigate the association between the sampling density and the calibration accuracy, 900 pairs of digital voltages d_k = [d_kx, d_ky]^T, k = 1, 2, ..., 900 are uniformly spaced on three virtual digital planes (Figure 9) of different sizes. The physical-model-based method [13] and the Look-Up-Table (LUT) based method are also implemented for comparison. The physical-model-based calibration constructs a model with a set of real structural parameters and solves these parameters by a Levenberg-Marquardt [27] optimization. Utilizing the training data set, the LUT-based calibration determines the space vectors of the laser beams corresponding to signals within the sampling range by linear interpolation. Whatever method is used, the goal of GLS system calibration is to determine the space vector of the laser beam corresponding to an arbitrary control signal. Therefore, we can use the same data set (d_k, V_k) (k = 1, 2, ..., 900) achieved in Section 2.3.1 for the three different calibration methods. Calibration results with different sampling densities are listed in Table 1.
As shown in Table 1, the accuracy of the physical-model-driven method is the lowest of the three calibrations, and the LUT method is remarkably inefficient. Both the accuracy and the running time of the proposed method have obvious comparative advantages.

Remark. The running time of the physical-model-based method is the time for solving all the parameters of the model by an optimization. The running time of the LUT-based method is the time for establishing the LUT by a linear interpolation. The running time of the proposed method is the time for solving the SLFN model with the training data (including the time for linear fitting mentioned in Section 2.2.2).

Target Shooting Experiment
Two groups of experiments are conducted. The first group focuses on the influence of the shooting distance on the shooting accuracy; the results at four different shooting distances are shown in Figure 10. The second group focuses on the influence of the target poses; the results for four target poses at the same distance (2.5 m) are shown in Figure 11. The standard deviations S_d for the different shooting distances are listed in the first four rows of Table 2. The last four rows of Table 2 list S_d for the different target poses at the 2.5 m distance.

Figure 11. Shooting experiment results with four different poses (a-d) at the same distance (2.5 m) from the GLS system.

As listed in Table 2, the error of target shooting is less than 0.346 mm within the 1 m range from the first sampling position P_1 to the last position P_100. The average shooting error of the eight experiments is 0.28 mm.




3D Reconstruction Experiment
According to the reconstruction method described in Section 2.3.3, a number of laser spots are projected onto the surface of an engine blade as shown in Figure 12. The reconstructed 3D points of the surface are shown in Figure 12b. The registration result between the reconstructed 3D points and the measurement of the ATOS system is shown in Figure 12c. Although the error distribution of the registration shows a maximum error of 0.996 mm, most errors of the reconstructed points lie in the interval [−0.5, 0.5] mm, as shown in Figure 12c. A few points with relatively large errors are distributed on the surface edge, where laser spot imaging is difficult. The data-driven triangulation method also needs to train a data-driven model for direct triangulation. The input features of that model consist of the 2D coordinates of the laser spots in the camera image and the control voltages at the drives of both galvanometric mirrors, and its output feature is the 3D coordinate of the laser spot. Therefore, we can directly use the experimental data (the image coordinates u_Li^k, v_Li^k and the 3D coordinates x_i^k, y_i^k, z_i^k) obtained in Section 2.2.2 to train the triangulation model. The respective reconstruction results are shown in Table 3.

Table 3. Reconstruction of an engine blade by two different methods.

Calibration Method                RMSE of Reconstruction (mm)    Time for Training a Model (s)
The proposed method               0.462                          0.872
The data-driven triangulation     0.554                          8.16

We can see in Table 3 that the reconstruction accuracies of the two methods are close. However, the efficiency of the proposed method is higher than that of the data-driven triangulation (Table 3). It should be noted that we also adopt the linear system built by the method described in Section 2.2.3 to solve the triangulation model. If other nonlinear methods (such as the Support Vector Machine (SVM) or Gaussian Processes (GPs)) were used instead, the training time for GLS system calibration would grow explosively. As reported by Wissel et al. [20], the training time with SVM is 8.91 min, and with GPs it is more than 20 h. Moreover, the number of training samples in their experiments (7193 points) is less than one tenth of ours (90,000 points), and the training time of a model grows with the number of training samples.
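The RMSE values in Table 3 summarize the per-point distances between registered point pairs. A minimal sketch of the metric (with placeholder coordinates; in the experiment the two clouds are the reconstructed blade points and their registered counterparts on the ATOS reference):

```python
import numpy as np

def reconstruction_rmse(reconstructed, reference):
    """Root-mean-square error between registered point pairs (N x 3 arrays)."""
    diffs = np.linalg.norm(reconstructed - reference, axis=1)
    return float(np.sqrt(np.mean(diffs ** 2)))

# Placeholder pairs; the real inputs come from the registration in Figure 12c.
rec = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]])
ref = np.array([[0.1, 0.0, 0.0], [1.0, 1.1, 1.0]])
print(reconstruction_rmse(rec, ref))
```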



Projection Positioning Experiment
The CAD model of the engine blade is shown in Figure 13a, and the red curve is the target contour.
The GLS system has two scan modes: point scan and linear interpolation scan. In the point scan mode, the laser transmitter is turned off between the input of any two adjacent pairs of digital signals, and the laser trajectory is pointwise, as shown in Figure 13b. In the linear interpolation scan mode, the laser transmitter stays on all the time and the laser trajectory is continuous, as illustrated in Figure 13c.

To quantitatively evaluate the laser projection accuracy, the 3D coordinates of the 131 laser spots actually projected on the engine blade, denoted as p_e^tgt, e = 1, 2, ..., 131, are obtained using the binocular system. The theoretical target contour and the actual laser projection contour (i.e., the linear interpolation of the points p_e^tgt, e = 1, 2, ..., 131) are shown in Figure 14a. The Euclidean distance ε_e between each theoretical target point and the corresponding actual projection point p_e^tgt is calculated and illustrated in Figure 14b. The average projection positioning error is ε̄ = Σ_{e=1}^{131} ε_e / 131 = 0.32 mm, the maximum of ε_e, e = 1, 2, ..., 131, is 0.62 mm, and the standard deviation is σ = √(Σ_e (ε_e − ε̄)² / 131) = 0.28 mm.
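The three statistics above (mean, maximum, and population standard deviation of the per-point errors) can be sketched as follows; the distances here are hypothetical stand-ins for the 131 measured values:

```python
import math

def projection_stats(eps):
    """Mean, maximum, and population standard deviation of the
    per-point projection errors eps_e, as defined in the text."""
    mean = sum(eps) / len(eps)
    sigma = math.sqrt(sum((e - mean) ** 2 for e in eps) / len(eps))
    return mean, max(eps), sigma

# Hypothetical distances (mm); the experiment uses the 131 measured values.
mean, emax, sigma = projection_stats([0.2, 0.4, 0.3, 0.3])
print(mean, emax, sigma)
```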


Discussion
In the solution of the SLFN model of the GLS system, we determine the number of hidden neurons L based on the investigation in Section 3.1. The value of L is also suitable for other GLS systems or different sampling data sets, since L represents the system complexity, which is constant. The SLFN determined by L is equivalent to the physical model of the GLS system built in physical-model-based calibration, and the parameter setting of the physical model does not change for a GLS system whose inner structure is fixed. According to the actual distribution of ε_j = (ε_jx, ε_jy) in Section 3.1, we consider that the established SLFN model imitates the real GLS system well. In addition, we indirectly test the calibration accuracy at different positions of the sampling region through the target shooting experiment. The good shooting accuracy indicates that the calibration result is also consistent over the whole sampling range (1 m). This accuracy consistency means that follow-up GLS-based applications are less affected by the size or position of the experimental object.
The contrast experiment with the other two calibration methods for the GLS system is also conducted, and the results are shown in Table 1. The main reason for the poor performance of the physical-model-driven method is that the physical model of the GLS system cannot capture some influencing factors, such as the nonlinear relation between the mirror rotation angles and the applied digital voltages. Furthermore, its calibration result is sometimes wrong, since it optimizes too many parameters and depends strongly on the given initial parameter values. Both the LUT method and the proposed method perform better when sampling in the virtual digital plane shown in Figure 9a, but the accuracy of the LUT method declines faster than that of the proposed method as the sample density decreases. This drop in precision reflects the fact that the accuracy of LUT-based calibration is more affected by the sampling density than calibrations based on a model. Essentially, the accuracy at a location in the constructed LUT mainly depends on the neighboring sampling points, since the calibration outcome at that location is obtained by neighborhood point interpolation; in contrast, the accuracy of a model-based calibration combines all the sample data. On top of that, constructing a high-resolution lookup table incurs higher computational cost (Table 1), because the space vectors corresponding to the non-sampled digital voltages in a given digital area need to be calculated by interpolation. As the virtual digital plane (Figure 9) grows in size, the computational burden of interpolation increases significantly, which results in ever-increasing time consumption (Table 1). As mentioned in Section 2.2.3, the model in Equation (1) is a linear system, so the proposed method is much more efficient (only 0.872 s). The shorter the training time, the quicker the whole calibration process.
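To illustrate why LUT accuracy hinges on the neighboring samples, here is a minimal bilinear-interpolation sketch over a regular voltage grid; the table layout and step size are assumptions for illustration, not the paper's actual LUT format:

```python
import numpy as np

def lut_lookup(table, vx, vy, step):
    """Bilinear interpolation in a LUT indexed by the two control voltages.
    `table` has shape (Nx, Ny, 6): one calibrated 6D line vector per grid node.
    The result at (vx, vy) depends only on the four surrounding grid nodes."""
    ix, iy = int(vx // step), int(vy // step)
    tx, ty = vx / step - ix, vy / step - iy
    return ((1 - tx) * (1 - ty) * table[ix, iy]
            + tx * (1 - ty) * table[ix + 1, iy]
            + (1 - tx) * ty * table[ix, iy + 1]
            + tx * ty * table[ix + 1, iy + 1])

# Tiny 2x2 grid of 6D vectors, step 1.0: querying the centre blends all four.
table = np.zeros((2, 2, 6))
table[1, 1] = np.ones(6)
print(lut_lookup(table, 0.5, 0.5, 1.0))  # each component is 0.25
```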
The rapid calibration brings great convenience to various GLS-based field applications.
In the 3D reconstruction experiment, although the reconstruction accuracy of the proposed method is close to that of the data-driven triangulation method, the proposed method has two obvious advantages. On the one hand, the training time of our method (0.872 s) is much less than that of the data-driven triangulation method (8.16 s). When the sampling density and sampling region of the two methods are the same, fitting the laser spots that belong to the same laser beam into a straight line (the 6D vector) greatly decreases the amount of training data in our method. Less input training data means fewer equations in Equation (6), which remarkably reduces the computation time for solving them. The linear fitting itself costs 0.801 s in this experiment, which is included in the total training time (0.872 s). On the other hand, the calibration result M : d → V of our method is the relation between the input digital voltages and the corresponding outgoing laser beam, which is independent of the binocular system, so it is applicable to both forward and backward applications. However, the calibrated mapping M : (d_x^dst, d_y^dst, u^dst, v^dst) → (x^dst, y^dst, z^dst) of the data-driven triangulation method [20] contains the 2D image coordinates (u^dst, v^dst) of the laser spot in its 4D input, making it inapplicable to backward applications.
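A minimal sketch of the ELM-style closed-form solve discussed here, assuming random fixed hidden weights, a sigmoid activation, and a least-squares solve for the output weights mapping the 2D voltages d to 6D line vectors V. The sizes and synthetic data are illustrative; this is not the paper's Equation (6) verbatim:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_elm(D, V, L=100):
    """ELM-style SLFN fit: hidden-layer weights are random and fixed;
    only the output weights beta are solved, in closed form, from the
    linear system H @ beta = V (least squares)."""
    W = rng.normal(size=(D.shape[1], L))    # random input weights
    b = rng.normal(size=L)                  # random hidden biases
    H = 1.0 / (1.0 + np.exp(-(D @ W + b)))  # sigmoid hidden outputs
    beta, *_ = np.linalg.lstsq(H, V, rcond=None)
    return W, b, beta

def predict(D, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(D @ W + b)))
    return H @ beta

# Synthetic stand-in data: 200 voltage pairs -> smooth 6D "line vectors".
D = rng.uniform(-1, 1, size=(200, 2))
V = np.column_stack([D[:, 0], D[:, 1], np.ones(200),
                     D[:, 0] * D[:, 1], D[:, 0] ** 2, D[:, 1] ** 2])
W, b, beta = train_elm(D, V, L=100)
err = np.max(np.abs(predict(D, W, b, beta) - V))
print(f"max training residual: {err:.2e}")
```

Because only `beta` is learned, the whole fit is one linear least-squares solve, which is what keeps the calibration time far below iterative trainers such as SVM or GP regression.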
The proposed calibration also applies to the projection positioning of a 3D contour, which is widely used in prepreg layup of composites. Compared with the accuracy requirement of aeronautic composite layup (±0.7 mm) [28], the projection errors (Figure 14b) indicate that the accuracy of the projection positioning experiment satisfies the requirement. Similar to the principle of target shooting, this application replaces the target point with the discrete points of the 3D contour to be projected. Therefore, leaving aside the machining precision of the impeller to be projected, the target shooting accuracy determines the projection positioning accuracy. The standard deviation σ calculated in Section 3.4, representative of the positioning precision, is within the expected accuracy range.
Besides the calibration method, the sampling data for training the system model is another factor determining the calibration precision. In other words, the accuracy of the space vector V_k corresponding to the digital signal d_k is important for a good calibration result. To achieve good space-line fitting results, we usually need to collect the 3D sample points over a relatively long distance (1 m in this paper). However, the depth of field of the binocular vision system is limited, so the accuracy of the 3D sample points outside it inevitably suffers from the degradation of the sampling image quality. Therefore, methods such as image deblurring could be adopted to improve the accuracy of the sampling data in the future.
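A standard least-squares line fit that could produce the space vector V_k from the sampled spots: the centroid plus the principal direction of the centered points via SVD. This is an illustrative sketch, not necessarily the paper's exact fitting routine:

```python
import numpy as np

def fit_line_3d(points):
    """Least-squares 3D line through sampled laser spots:
    returns (point_on_line, unit_direction), i.e., a 6D space vector."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # The first right-singular vector of the centred points is the direction
    # minimising the sum of squared orthogonal distances to the line.
    _, _, vt = np.linalg.svd(pts - centroid)
    return centroid, vt[0]

# Noise-free check: spots along direction (0, 0.6, 0.8) through the origin.
t = np.linspace(0.0, 1.0, 50)
pts = np.outer(t, [0.0, 0.6, 0.8])
c, d = fit_line_3d(pts)
print(c, d)  # direction recovered up to sign
```

With noisy spots measured across the 1 m sampling range, the same fit averages out per-spot measurement error, which is why the beam direction can be more reliable than any single 3D sample.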

Conclusions
An accurate and efficient method for calibrating the GLS system is proposed. Based on the one-to-one mapping between the input digital voltages d and the space vector of the outgoing laser beam V, we establish the system model M : d → V using a single-hidden-layer feedforward neural network (SLFN). Within the extreme learning machine framework, the system model is calibrated by solving only a linear system, which avoids the long training time required by most machine learning methods. More importantly, taking the space vector of the outgoing laser beam V as the output of the established SLFN greatly reduces the number of equations in the linear system, thereby further improving the computational efficiency. Calibration experiments demonstrate that the proposed method outperforms the mainstream approaches in accuracy and efficiency. In addition, the calibrated mapping M : d → V allows the method to handle various GLS-based applications; the 3D reconstruction and laser projection positioning experiments validate its versatility.