Absolute Positioning Accuracy Improvement in an Industrial Robot

The absolute positioning accuracy of a robot is an important specification that determines its performance, but it is affected by several error sources. Typical calibration methods only consider kinematic errors and neglect complex non-kinematic errors, thus limiting the absolute positioning accuracy. To further improve the absolute positioning accuracy, we propose an artificial neural network optimized by the differential evolution algorithm. Specifically, the structure and parameters of the network are iteratively updated by differential evolution to improve both accuracy and efficiency. Then, the absolute positioning deviation caused by kinematic and non-kinematic errors is compensated using the trained network. To verify the performance of the proposed network, simulations and experiments were conducted using a six-degree-of-freedom robot and a laser tracker. The robot average distance accuracy improved from 0.8497 mm before calibration to 0.0493 mm after compensation. The results demonstrate the substantial improvement in the absolute positioning accuracy achieved by the proposed network on an industrial robot.


Introduction
Although industrial robots are flexible platforms and provide high repeatability for the automation of a variety of manufacturing tasks, a low absolute positioning accuracy may limit their applicability [1]. Error sources in robots can be either kinematic or non-kinematic [2][3][4]. Manufacturing and assembly tolerances cause deviation of the actual kinematic parameters from their nominal values, producing kinematic errors [5]. Non-kinematic errors, on the other hand, are usually neglected because they are difficult to model. Nevertheless, non-kinematic errors caused by factors such as temperature variations, joint and link compliance, and gear backlash have a considerable effect on the absolute positioning accuracy. Thus, an efficient method that compensates the positioning deviations caused by both kinematic and non-kinematic errors is needed.
Extensive reports have shown that kinematic errors account for about 90% of the total positioning error [6]. Therefore, kinematic calibration effectively improves the absolute positioning accuracy of robots. This type of calibration comprises four steps: modeling, measurement, identification, and compensation or correction [7]. Thus, the kinematic model plays a critical role during robot calibration and should meet the following requirements for parameter identification: continuity, completeness, and minimality [8]. The Denavit-Hartenberg convention is a widely used modeling method to describe robot kinematics [9]. However, this method is not continuous when two adjacent joint axes are parallel or nearly parallel, thus presenting singularities. To overcome the singularity problem, many studies have suggested alternative models. For instance, Hayati added a rotational parameter to the Denavit-Hartenberg model to handle parallel or nearly parallel adjacent joint axes.
The remainder of this paper is organized as follows: The influences of kinematic and non-kinematic errors on robot positioning accuracy are analyzed in Section 2. The proposed neural network optimized by differential evolution is detailed in Section 3. In Section 4, simulations and experiments are performed to compensate the absolute positioning error. The discussion and conclusions are presented in Section 5.

Kinematic and Non-Kinematic Error Analysis
The accuracy of a robot kinematic model depends on robot calibration. We use the product-of-exponentials (POE) model to describe the robot kinematics [26], considering the coordinate systems of the robot shown in Figure 1. The pose of the end-effector frame {t} with respect to the base frame {s} is given by

g_st(θ) = e^(ξ̂_1 θ_1) e^(ξ̂_2 θ_2) e^(ξ̂_3 θ_3) e^(ξ̂_4 θ_4) e^(ξ̂_5 θ_5) e^(ξ̂_6 θ_6) e^(ξ̂_st)    (1)

where θ_i indicates the robot joint variable and ξ̂_st is the twist of the initial transformation. In Equation (1), ξ̂_i can be expressed as

ξ̂_i = [ ŵ_i  v_i ; 0  0 ]    (2)

where ŵ_i and v_i are given by

ŵ_i = [ 0  −w_zi  w_yi ; w_zi  0  −w_xi ; −w_yi  w_xi  0 ]    (3)

v_i = −w_i × q_i    (4)

where q_i = [q_xi, q_yi, q_zi] represents the coordinate of the origin of each axis in the base coordinate system {s}, and w_i = [w_xi, w_yi, w_zi] is the direction vector of each rotation axis in {s}. For a rotational joint, the exponential matrix of the motion screw can be represented as

e^(ξ̂_i θ_i) = [ e^(ŵ_i θ_i)  (I − e^(ŵ_i θ_i))(w_i × v_i) + w_i w_i^T v_i θ_i ; 0  1 ]    (5)

The forward kinematics of a six-DOF serial robot is then given by

f_c = g(w_1, ⋯, w_6, q_1, ⋯, q_6, θ) = e^(ξ̂_1 θ_1) e^(ξ̂_2 θ_2) ⋯ e^(ξ̂_6 θ_6) e^(ξ̂_st)    (6)

The kinematic parameters of the robot generally deviate from their designed values due to error sources such as mechanical deformation and assembly and machining errors, as shown in Figure 2. When only the kinematic errors are considered, the actual robot position f_real can be expressed as

f_real = g(w_1 + Δw_1, ⋯, w_6 + Δw_6, q_1 + Δq_1, ⋯, q_6 + Δq_6, θ)    (7)

where Δq_i and Δw_i represent the kinematic errors to be corrected. The mathematical expression of kinematic errors can be obtained by kinematic modeling, and these errors exist whether the robot is moving or not. Therefore, kinematic errors can be identified by calibration, effectively improving the positioning accuracy of the robot.

Non-kinematic errors also undermine accuracy, but they are generally neglected due to their modeling difficulty and complexity. All error sources whose contributions to positioning errors cannot be characterized by kinematic parameters are herein described as non-kinematic errors [18]. Non-kinematic errors depend on the pose and dynamic behavior of the robot, so their exact mathematical expression is difficult to obtain, and kinematic calibration cannot mitigate their effects. Non-kinematic errors with considerable effects, such as those related to strain wave gearing and joint flexibility, occur between the angular encoder and the output shaft at each joint, so the nominal joint variable should be compensated to compute the actual joint rotation.

Considering non-kinematic errors, the actual robot joint variable can be expressed as

θ̃_i = θ_i + C_i(θ_i)    (8)

where θ_i is the nominal robot joint variable and C_i(θ_i) represents the compensation factor for non-kinematic errors. We can use Chebyshev polynomials to calculate C_i(θ_i) [16]:

C_i(θ_i) = Σ_{j=0}^{m} a_j T_j(θ_i)    (9)

with

T_0(x) = 1,  T_1(x) = x,  T_{j+1}(x) = 2x T_j(x) − T_{j−1}(x)    (10)

where m denotes the order of the Chebyshev polynomial and a_0, a_1, ⋯, a_m are the polynomial coefficients to be calculated. Thus, when the nominal joint variables and theoretical kinematic parameters of a robot are known, its actual position is given by

f_real = g(Δw_1, ⋯, Δw_6, Δq_1, ⋯, Δq_6, C_1(θ_1), ⋯, C_6(θ_6))    (11)

The complexity of non-kinematic errors and the coupling between the compensation of nominal joint variables and the robot pose hinder the determination of the actual mathematical representation of the robot position. Therefore, we use a neural network to directly estimate the robot position from the joint variables. This method avoids complex calibration and modeling, effectively improves the absolute positioning accuracy of the robot, and mitigates the influence of both kinematic and non-kinematic errors.
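The POE forward kinematics can be sketched in a few lines of numpy. The following is a minimal illustration for revolute joints, assuming unit-length axis directions; it is not the paper's implementation.

```python
import numpy as np

def skew(w):
    """Skew-symmetric matrix of a 3-vector w."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def twist_exp(w, q, theta):
    """Exponential of a rotational motion screw.
    w: unit direction of the joint axis, q: a point on the axis."""
    w = np.asarray(w, dtype=float)
    q = np.asarray(q, dtype=float)
    v = -np.cross(w, q)                                  # Equation (4)
    w_hat = skew(w)
    # Rodrigues' formula for the rotation block e^(w_hat * theta)
    R = np.eye(3) + np.sin(theta) * w_hat + (1.0 - np.cos(theta)) * (w_hat @ w_hat)
    p = (np.eye(3) - R) @ np.cross(w, v) + np.outer(w, w) @ v * theta
    g = np.eye(4)
    g[:3, :3], g[:3, 3] = R, p
    return g

def forward_kinematics(axes, points, thetas, g_st0):
    """POE forward kinematics (Equation (6)): the product of the joint
    exponentials times the initial transformation g_st(0)."""
    g = np.eye(4)
    for w, q, th in zip(axes, points, thetas):
        g = g @ twist_exp(w, q, th)
    return g @ g_st0

# Example: one revolute joint about z through the origin, tool frame at (1, 0, 0)
g0 = np.eye(4)
g0[0, 3] = 1.0
g = forward_kinematics([[0, 0, 1]], [[0, 0, 0]], [np.pi / 2], g0)
print(np.round(g[:3, 3], 6))   # the 90-degree rotation carries the tool to (0, 1, 0)
```

The single-joint example can be checked by hand: with the axis through the origin, v = 0 and the screw exponential reduces to a pure rotation of the initial transformation.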

The Proposed Neural Network
Different neural networks such as radial basis function and back-propagation networks have been widely used to compensate the absolute positioning error [19][20][21][22]. The compensation results depend on the network structure and parameters including thresholds, initial weights, and number of hidden neurons. Thus, we adopt differential evolution to optimize the structure and parameters of the proposed neural network for maximizing the compensation effect.

The Back-Propagation Neural Network
The back-propagation neural network is an artificial neural network trained by the error back-propagation method, and its outstanding advantages are a strong nonlinear mapping ability and a flexible network structure. Figure 3 shows the proposed neural network, which consists of input, hidden, and output layers. The input layer has six nodes representing the robot joint variables, and the output layer has three nodes representing the coordinates of the robot position obtained from the laser tracker.

In Figure 3, w_ij is the connection weight between neuron i in the input layer and neuron j in the hidden layer, and w_jk is that from neuron j in the hidden layer to neuron k in the output layer. The output of neuron j in the hidden layer is expressed as

H_j = G( Σ_{i=1}^{6} w_ij x_i − b_j ),  j = 1, 2, ⋯, l    (12)

where l is the number of nodes in the hidden layer, b_j is the neuron threshold, and G is an activation function, which we set as the sigmoid:

G(x) = 1 / (1 + e^(−x))    (13)

For an input x, the output layer provides

O_k = Σ_{j=1}^{l} H_j w_jk − b_k,  k = 1, 2, 3    (14)

The residual errors are obtained by subtracting the predicted values from the expected ones, which can be considered as position error values. The residual mean squared error is given by

E = (1/m) Σ_{v=1}^{m} (T_v − O_v)^2    (15)

where m is the number of training samples, O_v is the neural network output, and T_v is the expected output.
As long as the residual mean squared error remains above the target training error and the number of iterations has not reached its limit, the biases and weights are updated to reduce the deviation between the predicted and expected values. Using gradient descent, the weights are updated as follows:

w ← w − η ∂E/∂w    (16)

where η is the learning rate. Figure 4 shows the training flowchart of the neural network.
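The forward pass and a gradient-descent update described above can be sketched in numpy. This is a toy stand-in, not the paper's network: the hidden width, learning rate, sigmoid activation, and synthetic joint-to-position data are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy network: 6 joint variables -> l hidden neurons -> 3 position coordinates
n_in, l, n_out = 6, 10, 3
W1 = rng.standard_normal((l, n_in)) * 0.1    # weights w_ij
b1 = np.zeros(l)                             # hidden thresholds
W2 = rng.standard_normal((n_out, l)) * 0.1   # weights w_jk
b2 = np.zeros(n_out)                         # output thresholds

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(X):
    H = sigmoid(X @ W1.T - b1)    # hidden-layer output
    O = H @ W2.T - b2             # linear output layer
    return H, O

def train_step(X, T, lr=0.2):
    """One full-batch gradient-descent update of all weights and thresholds."""
    global W1, b1, W2, b2
    m = len(X)
    H, O = forward(X)
    E = O - T                     # residuals (predicted minus expected)
    gW2 = E.T @ H / m             # backpropagated mean-squared-error gradients
    gb2 = -E.mean(axis=0)
    dH = (E @ W2) * H * (1.0 - H)
    gW1 = dH.T @ X / m
    gb1 = -dH.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2
    return float(np.mean(E ** 2))  # residual mean squared error

# Fit a toy joint-to-position mapping as a stand-in for measured data
X = rng.uniform(-np.pi, np.pi, (200, n_in))
T = np.stack([np.sin(X[:, 0]), np.cos(X[:, 1]), 0.1 * X[:, 2]], axis=1)
errs = [train_step(X, T) for _ in range(3000)]
```

With repeated updates, `errs` decreases as the predicted positions approach the expected ones.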


Differential Evolution Optimization
Differential evolution is a population-based stochastic optimizer that explores the search space by sampling from multiple random initial points, performing global optimization with high identification accuracy and a fast convergence rate [27]. We use differential evolution to pre-train the proposed neural network, optimizing its thresholds, initial weights, and number of hidden neurons.
The differential evolution algorithm typically consists of three elements: (1) Encoding solution population; (2) Fitness function to evaluate solution optimality; (3) Evolutionary operation comprising selection, crossover and mutation.
Individual ε_i = {w_i, B_i} is initialized to represent different numbers of hidden neurons, weight values w_i, and threshold values B_i. The length of ε_i is determined by the number of hidden neurons and calculated as follows:

length(ε_i) = n_in n_hid + n_hid + n_hid n_out + n_out    (18)

where n_in, n_hid, and n_out are the numbers of neurons in the input, hidden, and output layers, respectively. Different values of ε_i determine the performance of the neural network: too few neurons undermine accuracy, whereas too many increase the training time and lead to overfitting. In addition, the initial weights strongly affect the performance and convergence rate of the neural network. Hence, differential evolution allows one to enhance the learning rate and prediction accuracy.

For differential evolution, mutation is expressed as

ε_M^g = ε_gbest^g + F (ε_a^g − ε_b^g)    (19)

where g is the number of iterations, F is the scaling factor, ε_gbest^g is the individual providing the highest network performance, and ε_a^g and ε_b^g are two individuals randomly selected from the population. As the lengths of ε_gbest^g, ε_a^g, and ε_b^g differ due to the varying number of hidden neurons, mutation cannot be directly performed. Instead, we apply the best-selection method to guarantee equal vector lengths during mutation. Specifically, the length of the best individual, ε_gbest^g, is employed as the basis to unify the lengths of the other individuals. When length(ε_a^g) < length(ε_gbest^g), the missing parameters are randomly added to ε_a^g so that length(ε_a^g) = length(ε_gbest^g). When length(ε_a^g) > length(ε_gbest^g), the excess parameters of ε_a^g are considered interferents and disregarded during mutation.
Trial individuals ε_T are then generated through crossover. For each parameter j of an individual, crossover can be expressed as

ε_{T,j} = ε_{M,j}^g  if rand_j ≤ CR or j = j_rand;  ε_{i,j}^g otherwise    (20)

where rand_j is a uniform random number in [0, 1] and j_rand is a randomly chosen parameter index that guarantees the trial vector inherits at least one parameter from the mutant. CR is a real-valued crossover probability factor in the range [0, 1] that controls the probability that a trial-vector parameter is taken from the mutant. Generally, CR affects the convergence velocity and robustness of the search process; in this paper, CR = 0.9 to ensure a fast convergence rate.

Selection is based on a greedy search strategy expressed as

ε_i^{g+1} = ε_T  if f(ε_T) ≤ f(ε_i^g);  ε_i^g otherwise    (21)

where f(·) is the fitness function, i.e., the residual mean squared error of the corresponding network. The optimized parameters obtained during pre-training are applied to the proposed neural network. If the predicted values after pre-training do not reach the expected deviation, the number of hidden neurons of each individual is updated toward the best solution:

K_i ← K_i + round(rand · (K_best − K_i))    (22)

where K_i is the number of hidden neurons for individual i and K_best is that of the best solution. This procedure is repeated until the desired values are obtained. The differential evolution algorithm is described in Figure 5 and proceeds until the desired prediction error is reached.

Simulations and Experiments
We validated the performance of the proposed neural network through simulations and experiments of error compensation on a six-DOF robot manipulator. For comparison, the robot kinematic parameters were calibrated using the POE model [12] and LM algorithm [14,15].
As described in standard ISO 9283 [28], the positioning accuracy is the difference between the position of a command pose and the barycenter of the attained positions, as illustrated in Figure 6. Thus, the positioning accuracy can be calculated as

AP = sqrt( (x̄ − x_c)^2 + (ȳ − y_c)^2 + (z̄ − z_c)^2 )    (23)

x̄ = (1/n) Σ_{j=1}^{n} x_j,  ȳ = (1/n) Σ_{j=1}^{n} y_j,  z̄ = (1/n) Σ_{j=1}^{n} z_j    (24)

where x̄, ȳ, z̄ are the coordinates of the barycenter of the cluster of points obtained after executing the same pose n times, x_c, y_c, z_c are the coordinates of the command pose, and x_j, y_j, z_j are the coordinates of the j-th attained pose along the respective axes. Analogously, the distance accuracy expresses the deviation in positioning and orientation between the command distance and the mean of the attained distances, as illustrated in Figure 7, and can be calculated as

AD = (1/n) Σ_{j=1}^{n} D_j − D_c    (25)

where D_j is the j-th attained distance and D_c is the command distance.
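The positioning-accuracy metric above reduces to a few lines of numpy; the command pose and attained points below are hypothetical values for illustration.

```python
import numpy as np

def positioning_accuracy(command, attained):
    """ISO 9283 positioning accuracy: the distance between the command
    position and the barycenter of the attained positions (Equation (24))."""
    attained = np.asarray(attained, dtype=float)
    barycenter = attained.mean(axis=0)            # (x_bar, y_bar, z_bar)
    return float(np.linalg.norm(barycenter - np.asarray(command, dtype=float)))

# Hypothetical command pose attained n = 3 times (units: mm)
cmd = [100.0, 50.0, 25.0]
pts = [[100.3, 50.0, 25.0],
       [100.0, 50.0, 25.0],
       [100.0, 50.0, 25.0]]
print(round(positioning_accuracy(cmd, pts), 6))   # 0.1: barycenter x_bar = 100.1
```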

We use both the distance accuracy and positioning accuracy to quantify error compensation.

Simulations
To perform simulations, we added random parameter errors to the theoretical robot kinematic parameters to obtain the actual parameters, and added non-kinematic errors as compensations of the nominal joint variables of the robot. The actual and theoretical end-effector position coordinates of the robot were calculated using the POE model.
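The data-generation step can be sketched as follows, with a two-link planar arm standing in for the six-DOF POE model and hypothetical error magnitudes; link lengths, noise levels, and Chebyshev coefficients are all illustrative assumptions, not the paper's values.

```python
import numpy as np

rng = np.random.default_rng(2)

def fk_planar(thetas, links):
    """Stand-in forward kinematics (a two-link planar arm), used here in place
    of the six-DOF POE model to keep the sketch short."""
    t1, t2 = thetas
    l1, l2 = links
    return np.array([l1 * np.cos(t1) + l2 * np.cos(t1 + t2),
                     l1 * np.sin(t1) + l2 * np.sin(t1 + t2)])

def joint_compensation(theta, coeffs):
    """Non-kinematic joint deviation C_i(theta_i) as a Chebyshev series,
    with the joint angle scaled into [-1, 1]."""
    return np.polynomial.chebyshev.chebval(theta / np.pi, coeffs)

nominal_links = np.array([0.425, 0.392])                 # nominal parameters (m)
actual_links = nominal_links + rng.normal(0.0, 1e-3, 2)  # + random kinematic errors
cheb = [0.0, 5e-4, 2e-4]                                 # hypothetical a_0..a_2 (rad)

samples = []
for _ in range(1000):
    th = rng.uniform(-np.pi, np.pi, 2)
    th_actual = th + joint_compensation(th, cheb)        # non-kinematic effect
    deviation = fk_planar(th_actual, actual_links) - fk_planar(th, nominal_links)
    samples.append((th, deviation))                      # network input/target pair
```

Each pair couples the nominal joint variables with the positioning deviation caused by both error types, which is the mapping the network is trained to capture.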
We applied 1000 pairs of joint variables and position coordinates to optimize the thresholds, initial weights, and number of hidden neurons in the proposed network using differential evolution. The same 1000 samples were applied to train the optimized neural network, and 100 command points were employed to verify the prediction accuracy of the trained network. In addition, the robot kinematic parameters were calibrated using the POE model and LM algorithm for comparison by applying the same samples. The compensation results are shown in Figures 8 and 9.

Compared to the calibration method, the proposed neural network optimized using differential evolution provides better compensation. The maximum distance accuracy is improved from 3.1760 mm to 0.7743 mm, and the average distance accuracy is improved from 1.1118 mm to 0.1564 mm by using the proposed network. The maximum positioning accuracy is improved from 3.1760 mm to 0.7743 mm, and the average positioning accuracy is improved from 0.7411 mm to 0.1007 mm.
As summarized in Tables 1 and 2, the proposed method has the best absolute positioning accuracy and distance accuracy.

Experiment
The calibration system, shown in Figure 10, consists of a six-DOF serial robot (Universal Robot 5, Universal Robots), a laser tracker (API T3, Automated Precision Inc., Maryland, USA) with an accuracy of 0.005 mm/m, and an accompanying laser reflector. The reflector was fixed at an assigned location on the robot end-effector. The robot moved within its workspace, and the spatial position coordinates of the end-effector were collected using the laser tracker as the output of the neural network. Additionally, the corresponding joint angles of the sampling points were recorded as the input of the neural network.
We measured 600 samples in the workspace of the robot for optimization and training of the proposed network. The same samples were used for calibration of the kinematic parameters using the POE model and LM algorithm. The samples were selected such that the corresponding joint variables covered the robot joint working ranges, as shown in Figure 11. In addition, 100 command points in the robot workspace were randomly selected to verify the accuracy of the calibration methods and the proposed network.
Figure 12 shows the robot base and measurement coordinate systems considered in the experiment. Gan et al. presented a simple but effective calibration of the relative rotation matrix and translation vector for converting between the base frames of two coordinate systems using quaternions [29]. We used this method to determine the transformation between the measurement and robot base frames, so that the position coordinates measured using the laser tracker can be expressed in the robot base coordinate system. Therefore, we conducted the training and prediction of the proposed network by pairing the robot joint variables with exact positions with respect to the robot base.

Figures 13 and 14 and Tables 3 and 4 show the experimental results, with the optimized network accurately predicting the position coordinates. Before calibration, the maximum positioning accuracy is 3.1760 mm and the maximum distance accuracy is 2.0140 mm, reflecting both kinematic and non-kinematic errors. After calibration using the POE model and the LM algorithm, the positioning accuracy and distance accuracy of the robot were significantly improved owing to the reduction in kinematic errors. However, neglecting error sources such as temperature variations and gear backlash limits the effectiveness and accuracy of robot calibration. Therefore, the proposed network provides the best absolute positioning accuracy and distance accuracy because it mitigates the effects of both kinematic and non-kinematic errors. After error compensation, the maximum positioning accuracy is improved from 3.1760 mm to 0.7743 mm and the average positioning accuracy from 1.1118 mm to 0.1564 mm by using the proposed algorithm. The maximum distance accuracy is improved from 2.0140 mm to 0.2392 mm and the average distance accuracy from 0.8497 mm to 0.0493 mm.
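The measurement-to-base frame transformation can be estimated from point correspondences. The sketch below uses the SVD-based (Kabsch) solution, which reaches the same least-squares optimum as the quaternion method of [29]; the example transform and point set are synthetic.

```python
import numpy as np

def fit_rigid_transform(P_meas, P_base):
    """Least-squares rotation R and translation t with P_base ~= R @ P_meas + t,
    computed via the SVD (Kabsch) method."""
    P = np.asarray(P_meas, dtype=float)
    Q = np.asarray(P_base, dtype=float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                 # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t

# Example: recover a known transform from noiseless correspondences
rng = np.random.default_rng(3)
R_true = np.array([[0.0, -1.0, 0.0],
                   [1.0, 0.0, 0.0],
                   [0.0, 0.0, 1.0]])          # 90-degree rotation about z
t_true = np.array([0.5, -0.2, 1.0])
P_meas = rng.uniform(-1.0, 1.0, (20, 3))      # points in the tracker frame
P_base = P_meas @ R_true.T + t_true           # same points in the robot base frame
R, t = fit_rigid_transform(P_meas, P_base)
```

With noiseless correspondences the fit recovers the transform exactly; with measured data it returns the least-squares estimate over all sampled points.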

Discussion and Conclusions
The absolute positioning accuracy of robots has become increasingly important to support their widespread applications, particularly when offline programming is required. Error sources in robots can be classified into kinematic and non-kinematic errors. Conventional calibration can reduce the influence of kinematic errors on positioning accuracy, but the complexity of non-kinematic error sources and modeling hinders the compensation of such errors.
We applied a neural network to compensate both kinematic and non-kinematic errors with the aim of improving the absolute positioning accuracy of robot manipulators. The proposed network avoids complex modeling while comprehensively considering the influence of all error sources. As the performance of a neural network is influenced by its structure and parameters, we optimized the thresholds, initial weights, and number of hidden neurons of the proposed network using differential evolution to improve efficiency and performance. Using the proposed network, the effects of kinematic and non-kinematic errors decreased, thus improving the absolute positioning accuracy of an industrial robot.
The theoretical correctness and effectiveness of the proposed method were verified by simulations and experiments on a six-DOF serial robot and compared with calibration using the POE model and LM algorithm. After compensation, the robot average distance accuracy improved from 0.8497 mm before calibration to 0.0493 mm. Likewise, the robot maximum positioning accuracy improved from 3.1760 mm before calibration to 0.7743 mm.