Article

Improved Calibration of Eye-in-Hand Robotic Vision System Based on Binocular Sensor

1 Key Laboratory for Precision and Non-Traditional Machining Technology of the Ministry of Education, Dalian University of Technology, Dalian 116024, China
2 Beijing Spacecrafts, China Academy of Space Technology, Beijing 100094, China
* Author to whom correspondence should be addressed.
Sensors 2023, 23(20), 8604; https://doi.org/10.3390/s23208604
Submission received: 21 September 2023 / Revised: 18 October 2023 / Accepted: 18 October 2023 / Published: 20 October 2023
(This article belongs to the Section Sensors and Robotics)

Abstract

Eye-in-hand robotic binocular sensor systems are indispensable equipment in the modern manufacturing industry. However, intrinsic deficiencies of the binocular sensor, such as the circle of confusion and the observed error, degrade the accuracy of the calibration matrix between the binocular sensor and the robot end, so the matrix obtained by the traditional calibration method is of limited accuracy. To address this, an improved calibration method for the eye-in-hand robotic vision system based on the binocular sensor is proposed. First, to improve the accuracy of the data used for solving the calibration matrix, a circle of confusion rectification method is proposed, which rectifies pixel positions in the images so that the detected geometric features are closer to the real ones. Subsequently, a transformation error correction method based on the strong geometric constraint of a standard multi-target reference calibrator is developed, which introduces the observed error into the calibration matrix updating model. Finally, the effectiveness of the proposed method is validated by a series of experiments. The results show that the distance error is reduced from 0.192 mm to 0.080 mm compared with the traditional calibration method. Moreover, the measurement accuracy of local reference points in the field with the updated calibration results is better than 0.056 mm.

1. Introduction

Both the development and application of robotic vision systems are intrinsically tied to the growth of the manufacturing industry. Applications of these systems include inspection [1], robotic welding [2], robotic grinding [3], etc. These systems, which combine a binocular sensor [4,5,6,7] with a six-degree-of-freedom (6-DOF) industrial robot, are widely applied in the industrial field and offer high efficiency, high flexibility and good economy. Robotic vision systems detect the three-dimensional (3D) positions of local reference points using the binocular sensor installed at the flange of the robot. The data are unified into the base frame of the robot to correct the position of the machining unit end or to unify the point cloud data, provided that the spindle equipment or the structured light equipment is also installed on the robot end.
Calibration between the robot end and the binocular sensor is an essential step for any eye-in-hand robotic system. Several studies have extended the calibration method to a variety of vision facilities, including the binocular sensor, the ultrasound scanner, the depth camera, the line laser sensor, etc. [8,9,10,11]. Prior research on calibration optimization was conducted on the basis of the classical method [12]. Regardless of whether the detected data take the form of point clouds or images of the calibrator, the core of these methods is to optimize the calibration matrix, which contains the relative pose between the vision facility and the robot end. Li et al. [8] suggested a calibration approach combined with point cloud registration refinement. Zhang et al. [9] decoupled the rotational and translational errors in the calibration matrix. Yang et al. [10] created a calibration reference, a standard sphere model used as a substitute for other calibration tools. Wu et al. [11] improved the efficiency of solving the calibration matrix by using a quaternion decomposition of the rotation. However, current research does not account for the errors introduced by the vision facility itself during the inspection process, errors which originate from the intrinsic deficiencies of the instrument.
For the binocular sensor, intrinsic deficiencies such as tangential distortion [13] arising from optical lens assembly and radial distortion [14] arising from lens production standards have been largely solved by mature rectification methods [15,16,17]. Scholars have recently focused on improving the structural parameters. Deng et al. [18] improved the binocular localization model by adjusting the focal length and baseline, and the localization error was well reduced. Shi et al. [19] proposed an online binocular sensor measurement method based on iterative gradient-descent nonlinear optimization and improved calibration, and the performance was validated with a calibration error of less than 6%. Kong et al. [20] developed a calibration method for the binocular sensor based on a non-dominated sorting genetic algorithm to optimize the structural parameters, and the results indicated an accuracy rate of up to 98.9%. To quickly estimate the initial structural parameters of dynamic binocular stereo vision in a large field of view, Wang et al. [21] proposed a novel two-point method, and the accuracy evaluation showed that the accuracy of 3D coordinate measurement was comparable with that of state-of-the-art methods. To date, however, research on the structural parameters has been based on relatively ideal calibration images after tangential and radial distortion compensation, with no concern for the other deficiencies in the optical imaging process.
For instance, the research mentioned above pays no attention to the circle of confusion [22,23,24], which commonly exists in prime lens imaging. The circle of confusion is caused by another of the intrinsic deficiencies: when an object point is imaged away from the focal point, the corresponding beams create a diffused disk in the image rather than an ideal image point. When an object point is reconstructed by the binocular sensor, any deviation in extracting its geometric features affects the final 3D reconstruction accuracy and thus the accuracy of the solved calibration matrix, and the circle of confusion is precisely such a source of deviation in feature extraction. Although certain image enhancement techniques [25,26,27] have been demonstrated to enhance the visual effect of feature extraction by improving image quality, this is a perceptual improvement; the enhanced features do not necessarily improve accuracy when they are used in 3D reconstruction for geometric measurements. Therefore, a practical method should be developed to relieve the effect of the circle of confusion.
Additionally, the observed error is also a reflection of the intrinsic deficiencies. Usually, it comes from the accuracy limitations of the instrument. For the binocular sensor, the accuracy of observation is deficient at either the periphery or the center of the common field of view. This affects the calibration process between the robot end and the binocular sensor and ultimately decreases the accuracy of the calibration matrix. Although certain optimization strategies may mitigate observed errors in measured points, their applicability is limited by instrument types and application scenarios, making them challenging to apply to eye-in-hand robotic vision system calibration. For instance, the method of laser tracking equipment networking [28,29,30] leverages the advantage of high-accuracy laser ranging to establish a rank-deficient network through multi-station measurement and ultimately optimizes the observation of the measured point at a single station by solving the rank-deficient equations. However, it is difficult to establish a similar network because the binocular sensor itself does not possess an absolute advantage in length measurement. There is also a class of bundle adjustment methods [31,32,33] applied to photogrammetry, which take the camera poses and the coordinates of the measured points as unknown parameters and obtain their optimal values by adjusting the photographic beams over a multi-view measurement. However, this process requires a large number of widely distributed measured points within the camera field of view, which is difficult to achieve for binocular sensors equipped with standard lenses in close-range scenes. Therefore, an effective and concise strategy for dealing with this issue is required.
As mentioned above, the intrinsic deficiencies, including the circle of confusion and the observed error, affect the accuracy of the calibration matrix in the eye-in-hand robotic binocular sensor system. Therefore, the motivation of this research is to propose an improved calibration method for the eye-in-hand robotic vision system based on the binocular sensor. The main contributions of this research can be summarized as follows:
(1) A circle of confusion rectification method is proposed. The position of the pixel is rectified based on the Gaussian energy distribution model to obtain geometric features close to the real ones and to improve the accuracy of the 3D reconstruction of the binocular sensor.
(2) Based on the strong geometric constraint of the standard multi-target reference calibrator on the observed error, a transformation correction method is developed. The observed error is introduced into the calibration matrix updating model and is constrained according to the standard geometric relationship of the calibrator.
In summary, the proposed method can improve the accuracy of the calibration matrix in the eye-in-hand robotic binocular sensor system. The remainder of this paper is organized as follows. Section 2 describes the eye-in-hand robotic binocular sensor system briefly. Section 3 details the improved calibration method. In Section 4, experiments with a reference calibrator are presented. Conclusions and discussion are presented in Section 5.

2. System Description

The eye-in-hand robotic binocular sensor system is set up as illustrated in Figure 1. The binocular sensor is fixed on the end of the 6-DOF robot. As shown in Figure 1a, the measured point M (X, Y, Z) is captured by the binocular sensor. The projection from the two-dimensional (2D) coordinates $(u^{l(r)}, v^{l(r)})$ in the left or right image coordinate system (ICS) to the three-dimensional coordinates (X, Y, Z) in the world coordinate system (WCS) is governed by Equation (1):
$$z_c^{l(r)} \begin{bmatrix} u_c^{l(r)} \\ v_c^{l(r)} \\ 1 \end{bmatrix} = M_i^{l(r)} M_o \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix} = \Phi^{l(r)} \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix}, \quad (1)$$
where $M_i^{l(r)}$ and $M_o$ are the intrinsic and extrinsic parameter matrices of the left or right camera, $\Phi^{l(r)}$ is the projection matrix from the WCS to the left or right ICS, and $z_c^{l(r)}$ is the unknown scaling factor.
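To make Equation (1) concrete, the following sketch recovers (X, Y, Z) from one pair of corresponding pixels by direct linear transformation, using only the two 3 × 4 projection matrices $\Phi^l$ and $\Phi^r$. It is a minimal illustration with assumed variable names, not the exact implementation used in the paper.

```python
import numpy as np

def triangulate(phi_l, phi_r, uv_l, uv_r):
    """Recover (X, Y, Z) from Equation (1) by DLT, given the left/right 3x4
    projection matrices and one corresponding pixel (u, v) in each image."""
    A = np.vstack([
        uv_l[0] * phi_l[2] - phi_l[0],   # u_l * (3rd row) - 1st row of the left projection
        uv_l[1] * phi_l[2] - phi_l[1],   # v_l * (3rd row) - 2nd row
        uv_r[0] * phi_r[2] - phi_r[0],   # the same two constraints for the right camera
        uv_r[1] * phi_r[2] - phi_r[1],
    ])
    _, _, vt = np.linalg.svd(A)          # least-squares solution: last right singular vector
    X_h = vt[-1]
    return X_h[:3] / X_h[3]              # de-homogenize to (X, Y, Z) in the WCS
```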
When the coordinates of M in the WCS are obtained, the data are transformed into the base coordinate system (BCS), as shown in Figure 1b. The transformation is given by the following:
$$H_m = H_g^{i(j)} H_{gc} H_c^{i(j)}, \quad (2)$$
where $H_g^{i(j)}$, obtained from the teach pendant, is the homogeneous matrix transforming the robot end to the BCS in pose i(j); $H_c^{i(j)}$ is the homogeneous matrix transforming the WCS to the binocular sensor in pose i(j); and $H_{gc}$ is the calibration matrix, which is the object improved in this research.
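As a minimal sketch of how Equation (2) is applied in practice, the snippet below composes the chain of homogeneous matrices and maps a point from the WCS into the BCS; the function and argument names are illustrative.

```python
import numpy as np

def to_base_frame(H_g, H_gc, H_c, M_world):
    """Compose the chain of Equation (2) (all 4x4 homogeneous matrices) and
    map a WCS point into the robot base coordinate system."""
    H_m = H_g @ H_gc @ H_c                           # end->base * sensor->end * WCS->sensor
    M_h = np.append(np.asarray(M_world, float), 1.0)  # homogeneous coordinates of the point
    return (H_m @ M_h)[:3]                           # Cartesian coordinates in the BCS
```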

3. Improved Calibration Method

The purpose of the improved calibration method is to improve the accuracy of the calibration matrix. On the one hand, considering the effect of the circle of confusion on binocular 3D reconstruction, a circle of confusion rectification method is proposed, which improves the 3D reconstruction accuracy and provides more accurate data for solving the calibration matrix. On the other hand, considering the observed error of the binocular sensor in the measurement process, a transformation error correction method is proposed, which builds a calibration matrix updating model by modifying the observed error and ultimately deduces a more accurate calibration matrix on the basis of the traditional method.

3.1. Circle of Confusion Rectification

Circle of confusion rectification should be carried out after preprocessing. The instability of the scene lighting environment during the measurement process causes localized over- or under-exposure and poor contrast in the image, which are the key factors behind low dynamic range and further contribute to the loss of information. Therefore, focusing on the parameters of exposure and contrast, the classical image fusion technique [34] is simplified in this research for preprocessing.
Exposure and contrast can be regarded as two separate image quality weights. Weighted blending can be used to consolidate the weights of the original images, as depicted in the following:
$$W_s(i, j) = C_s(i, j)^{\omega_c} \times E_s(i, j)^{\omega_e}, \qquad R(i, j) = \sum_{s=1}^{N} \hat{W}_s(i, j)\, I_s(i, j), \quad (3)$$
where $C_s(i, j)^{\omega_c}$ and $E_s(i, j)^{\omega_e}$ are the contrast and exposure terms; $\omega_c$ and $\omega_e$ are the corresponding weighting exponents; $W_s(i, j)$ and $\hat{W}_s(i, j)$ are the initial and final pixel weights, respectively; $I_s(i, j)$ is the original image s; and $R(i, j)$ is the fused image.
As shown in Figure 2, the original images of the calibrator captured with different exposure times are fused so that the features obtain a higher dynamic range, while the positions of the pixels remain unchanged. The geometric topography and gray distribution of the measured object in this paper are not complex, and the improvement in dynamic range achieved by the classical method has largely restored the lost details. Therefore, this research does not compare further algorithms.
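A minimal sketch of the simplified two-weight fusion of Equation (3) is given below, assuming grayscale images scaled to [0, 1]; the concrete contrast and exposure measures (a discrete Laplacian magnitude and a Gaussian well-exposedness term) follow the spirit of [34], and all names and parameter values are illustrative.

```python
import numpy as np

def fuse_exposures(images, w_c=1.0, w_e=1.0, sigma=0.2):
    """Weighted exposure fusion in the spirit of Equation (3)."""
    weights = []
    for img in images:
        # contrast weight: magnitude of a 4-neighbour discrete Laplacian (wraps at edges)
        lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
               np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4.0 * img)
        contrast = np.abs(lap) + 1e-12
        # exposure weight: preference for mid-grey (well-exposed) pixels
        exposure = np.exp(-((img - 0.5) ** 2) / (2.0 * sigma ** 2)) + 1e-12
        weights.append(contrast ** w_c * exposure ** w_e)    # W_s in Equation (3)
    weights = np.stack(weights)
    weights /= weights.sum(axis=0, keepdims=True)            # normalized W_hat_s
    return (weights * np.stack(images)).sum(axis=0)          # fused image R
```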
The formation schematic of the circle of confusion is shown in Figure 3. A real lens cannot focus all of the beams together perfectly. When an object point B is imaged, its beam cannot converge to the focal point, so it forms a diffused disk projection on the image plane, forming the circle of confusion.
The radius δ of the circle of confusion can be defined as
$$\delta = \frac{(d_B - d_A) f^2}{2 d_B (d_A - f) F \max\{h, v\}}, \quad (4)$$
where $d_B$ is the distance from object point B to the lens; f is the focal length; $d_A$ is the distance from ideal object point A to the lens; A can be imaged exactly on the image plane; F is the aperture of the camera; and h and v are the numbers of horizontal and vertical pixels in the image plane, respectively.
When both $d_A$ and $d_B$ are much larger than f, Equation (4) can be simplified as follows:
$$\delta = \frac{f}{2 H F \max\{h, v\}}, \quad (5)$$
where H is the height of the whole view.
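For reference, Equation (4) can be evaluated directly; the helper below is a sketch with self-explanatory argument names (distances in the same unit, h and v in pixels).

```python
def coc_radius(d_B, d_A, f, F, h, v):
    """Radius of the circle of confusion from Equation (4): d_B and d_A are the
    object and in-focus distances, f the focal length, F the aperture, and
    h, v the horizontal/vertical pixel counts of the image plane."""
    return ((d_B - d_A) * f ** 2) / (2.0 * d_B * (d_A - f) * F * max(h, v))
```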
The lens depicted in Figure 3 is an equivalent model of the several internal lenses of a camera, which has no bearing on the analysis of the formation of the circle of confusion; the lens's workmanship defects, however, are not ignored. The geometric distortion caused by these defects is compensated by Equation (6), while other defects such as astigmatism and chromatic aberration, which have little effect on producing the circle of confusion, are not considered in this research.
$$\begin{aligned} x &= x_0 (1 + k_1 r^2 + k_2 r^4) + 2 p_1 x_0 y_0 + p_2 (r^2 + 2 x_0^2) \\ y &= y_0 (1 + k_1 r^2 + k_2 r^4) + p_1 (r^2 + 2 y_0^2) + 2 p_2 x_0 y_0 \end{aligned} \quad (6)$$
where $(x_0, y_0)$ and $(x, y)$ are the normalized coordinates before and after distortion, respectively; r is the radial distance from the center of the image to $(x_0, y_0)$, $r^2 = x_0^2 + y_0^2$; and $k_1$, $k_2$, $p_1$, and $p_2$ represent the intrinsic parameters determined according to the reference [35].
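A direct transcription of Equation (6) is shown below; it applies the calibrated radial and tangential distortion terms to normalized coordinates and uses the coefficient names of Table 1. The function name is illustrative.

```python
def distort(x0, y0, k1, k2, p1, p2):
    """Apply the radial/tangential distortion model of Equation (6) to the
    normalized image coordinates (x0, y0)."""
    r2 = x0 ** 2 + y0 ** 2                     # r^2 = x0^2 + y0^2
    radial = 1.0 + k1 * r2 + k2 * r2 ** 2      # radial term (1 + k1*r^2 + k2*r^4)
    x = x0 * radial + 2.0 * p1 * x0 * y0 + p2 * (r2 + 2.0 * x0 ** 2)
    y = y0 * radial + p1 * (r2 + 2.0 * y0 ** 2) + 2.0 * p2 * x0 * y0
    return x, y
```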
Generally, the region of interest (ROI) on the calibrator is segmented, and the center of the ROI provides the 2D data for the 3D reconstruction process. As shown in Figure 4, the split bearing retro-reflector (SBR) is a detected target with a precise round reflective coating (position accuracy of the retro-reflective dot in the center of the sphere: 12.7 µm). The coating is detected as the ROI in the image. It can be seen that partial details around the ROI are diffused because of the circle of confusion in the defocused image. The common principle of ROI boundary detection is to locate the position where the gray gradient changes rapidly according to certain trade-off criteria; however, under the influence of the circle of confusion, this position may not be the real boundary. The center of the ROI calculated from the detected boundary may therefore deviate from the real one. Consequently, the pixel coordinates within the circle of confusion need to be rectified in order to bring the detected boundary closer to the real situation.
Each pixel’s intensity is equal to the amount of energy captured by the imaging sensor unit throughout the exposure time. According to the research [36], the distribution of the energy of the circle of confusion can be approximately characterized by the 2D Gaussian function.
The energy in the direction of the radius of the circle of confusion is formulated as follows:
$$E(x, y) = \frac{E_0}{2 \pi \delta^2} e^{-(x^2 + y^2)/(2 \delta^2)}, \quad (7)$$
where $E_0$ is the total energy of the circle of confusion (its value equals the sum of the intensities of the pixels), and (x, y) are the coordinates of a pixel within the circle.
The energy distribution of the circle of confusion is non-uniform, and a circle of confusion is centered around each pixel in the original image.
According to the Taylor formula, Equation (7) can be approximately converted into a second-order expansion:
$$\hat{E}_u = E_u^0 - \frac{E_0}{4 \pi \delta^4} \Delta x_u^2 - \frac{E_0}{4 \pi \delta^4} \Delta y_u^2, \quad (8)$$
where $E_u^0$ is the mean value of all the elements in a circle of radius 3δ (values that exceed this interval are considered gross errors, while values inside the interval contain only random errors); $\Delta x_u^2$ and $\Delta y_u^2$ are the errors of each pixel inside the circle of confusion; $\hat{E}_u$ is the measured value of the intensity of pixel u; and 0 ≤ u ≤ t, where t is the number of elements within the circle of radius 3δ.
The error equation is organized as follows:
$$\underbrace{\hat{E}_u - E_u^0}_{B} = \underbrace{\begin{bmatrix} -\dfrac{E_0}{4 \pi \delta^4} & -\dfrac{E_0}{4 \pi \delta^4} \end{bmatrix}}_{A} \underbrace{\begin{bmatrix} \Delta x_u^2 \\ \Delta y_u^2 \end{bmatrix}}_{X}, \quad (9)$$
where the coefficient, unknown, and constant matrices are denoted as A, X, and B, respectively.
A is a full-row-rank matrix. Therefore, A has a unique Moore–Penrose generalized inverse, which is denoted as $A^+$ in Equation (10):
$$A^+ = A^H (A A^H)^{-1}, \quad (10)$$
where $A^H$ is the conjugate transpose of A.
Equation (9) has a least-norm solution, which is deduced as follows:
$$X = A^+ B. \quad (11)$$
Finally, the arithmetic square roots of the entries of X are taken as the rectification values of the circle of confusion.
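The per-pixel solution of Equations (9)–(11) reduces to a one-row pseudoinverse problem; a sketch is given below, where the inputs follow the definitions under Equation (8) and the names are illustrative.

```python
import numpy as np

def rectify_pixel(E_hat_u, E_u0, E_0, delta):
    """Least-norm solution of Equation (9) for one pixel inside the circle of
    confusion; returns the positional rectification values (square roots of X)."""
    c = E_0 / (4.0 * np.pi * delta ** 4)
    A = np.array([[-c, -c]])                 # 1x2 coefficient matrix of Equation (9)
    B = np.array([E_hat_u - E_u0])           # constant term
    X = np.linalg.pinv(A) @ B                # X = A+ B, Equations (10)-(11)
    return np.sqrt(np.abs(X))                # arithmetic square roots of the entries
```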
The flowchart of the circle of confusion rectification is shown in Figure 5. As mentioned above, the rectification traverses all pixels. First, the dimensions of the original image are acquired. Second, the critical parameters $E_0$ and $E_u^0$ corresponding to the pixel at $(u_i, u_j)$ are calculated. Third, all pixels from row 1 to row m are traversed in column order through a two-layer loop. Fourth, all of the obtained parameters are organized and substituted into Equation (9). Last, matrix X is solved, and the arithmetic square roots are kept.
As shown in Figure 6, for the SBRs, the results after rectification are considered to be closer to the real boundary of the ROI; the positions of the ROI boundaries are shifted by the rectification process. The boundary is detected by the Canny operator. This research does not put much effort into the optimization of the operator for the following reasons. On the one hand, the detected patterns have obvious light and dark boundaries, and noise interfering with boundary detection can easily be removed by judging the roundness and other morphological characteristics. On the other hand, the Canny operator itself has high positioning accuracy because it recognizes boundaries in the image well.
Then, the boundary is fitted with an ellipse. The fitting process reduces to finding the conditional extremum of the Lagrange function:
$$L(D, \lambda) = D c^T c D^T - \lambda (c K c^T - 1), \quad (12)$$
where D is the variable matrix containing the variables of the general elliptic equation; c is the vector containing the coefficients of the general elliptic equation; K is the constant matrix; and λ is the Lagrange multiplier.
The center of the ellipse is considered the center of the ROI. Therefore, the center of the ROI is also rectified, which will improve the accuracy of the 3D reconstruction of the binocular sensor, as shown in Figure 7. The detailed error comparisons are described in Section 4.
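In practice, the rectified boundary pixels can be fed to any constrained ellipse fit; the sketch below uses OpenCV's built-in least-squares ellipse fit as a stand-in for the Lagrangian formulation of Equation (12) and returns the ellipse center as the ROI center. The function name and the use of cv2.fitEllipse are assumptions of this sketch, not the paper's exact implementation.

```python
import cv2

def roi_center(edge_image):
    """Fit an ellipse to the dominant ROI boundary in a binary edge image and
    return its center (a stand-in for the constrained fit of Equation (12))."""
    # OpenCV 4.x signature: returns (contours, hierarchy)
    contours, _ = cv2.findContours(edge_image, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    boundary = max(contours, key=cv2.contourArea)        # keep the largest (round) boundary
    (cx, cy), _axes, _angle = cv2.fitEllipse(boundary)   # needs at least 5 boundary points
    return cx, cy
```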

3.2. Transformation Error Correction

The calibration principle between the binocular sensor and the robot end is shown in Figure 1b. Shiu and Ahmad [37] reformulated the classical calibration equation as Equation (13) and deduced $H_{gc}$:
$$H_g^{ji} H_{gc} = H_{gc} H_c^{ji}, \quad (13)$$
where $H_g^{ji}$ and $H_c^{ji}$ are the homogeneous transformation matrices of the robot end and the binocular sensor from pose i to pose j, respectively.
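The preliminary calibration matrix $H_{gc}$ of Equation (13) can be obtained with any AX = XB solver; the sketch below uses OpenCV's hand–eye routine with Tsai's method, which is close in spirit to the classical approaches of [12,37]. The variable names and the choice of solver are assumptions of this sketch.

```python
import cv2
import numpy as np

def preliminary_hand_eye(R_end2base, t_end2base, R_board2cam, t_board2cam):
    """Solve the AX = XB problem of Equation (13) for the preliminary H_gc
    (binocular sensor frame -> robot end frame). Each argument is a list with
    one rotation matrix or translation vector per robot pose."""
    R_cam2end, t_cam2end = cv2.calibrateHandEye(
        R_end2base, t_end2base, R_board2cam, t_board2cam,
        method=cv2.CALIB_HAND_EYE_TSAI)
    H_gc = np.eye(4)
    H_gc[:3, :3] = R_cam2end
    H_gc[:3, 3] = np.asarray(t_cam2end).ravel()
    return H_gc
```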
As Figure 1b illustrates, the relationship between the robot base and the binocular sensor can be constructed as follows:
$$X_b^i = H_g^i H_{g_i c} X_c^i, \quad (14)$$
where $X_c^i$ and $X_b^i$ are the theoretical 3D coordinates of the ROI centers on the calibrator in the binocular measurement unit and in the BCS in pose i, respectively; $H_{g_i c}$ is the preliminary calibration matrix in pose i.
Afterward, the calibration matrix updating model is established. Since both the initial values of the preliminary calibration matrix and the observed value have deviations, the real position $X_b^R$ of the ROI centers in the BCS can be defined as follows:
$$X_b^R = H_g^i \left( H_{g_i c} + \Delta H_{g_i c} \right) \left( X_c^i + \Delta X_c^i \right), \quad (15)$$
where $\Delta H_{g_i c}$ is the modified matrix of the preliminary calibration matrix according to [34] in pose i; $\Delta X_c^i$ is the observed error of the 3D coordinates in pose i.
The deviation between the real position and the theoretical position of the target is derived as follows:
$$X_b^R - X_b^i = H_{b_i c} \Delta X_c^i + H_g^i \Delta H_{g_i c} X_c^i + H_g^i \Delta H_{g_i c} \Delta X_c^i, \quad (16)$$
where $H_g^i H_{g_i c} = H_{b_i c}$; $H_{b_i c}$ is the transformation matrix between the BCS and the binocular sensor.
According to robot differential kinematics, $\Delta H_{g_i c} = H_{g_i c} \nu_{H_{g_i c}}$, where ν is the differential operator. $\nu_{H_{g_i c}}$ can be transformed as follows:
$$\nu_{H_{g_i c}} = \begin{bmatrix} 0 & -\omega_z & \omega_y & \sigma_x \\ \omega_z & 0 & -\omega_x & \sigma_y \\ -\omega_y & \omega_x & 0 & \sigma_z \\ 0 & 0 & 0 & 1 \end{bmatrix}, \quad (17)$$
Then, Equation (16) can be organized as follows:
$$X_b^R - X_b^i = H_{b_i c} \Delta X_c^i + H_{b_i c} \nu_{H_{g_i c}} X_c^{iR}, \quad (18)$$
where $X_c^{iR} = X_c^i + \Delta X_c^i$; $X_c^{iR}$ is the real 3D coordinates of the ROI centers in the binocular sensor after observed error modification.
The expansion of $H_{b_i c} \nu_{H_{g_i c}} X_c^{iR}$ is depicted as follows:
$$H_{b_i c} \nu_{H_{g_i c}} X_c^{iR} = H_{b_i c} \underbrace{\begin{bmatrix} 1 & 0 & 0 & 0 & z_c^{iR} & -y_c^{iR} \\ 0 & 1 & 0 & -z_c^{iR} & 0 & x_c^{iR} \\ 0 & 0 & 1 & y_c^{iR} & -x_c^{iR} & 0 \end{bmatrix}}_{P_i} \underbrace{\begin{bmatrix} \sigma_x \\ \sigma_y \\ \sigma_z \\ \omega_x \\ \omega_y \\ \omega_z \end{bmatrix}}_{\Delta \eta}, \quad (19)$$
where the first three rows and columns of $H_{b_i c}$ are selected; $X_c^{iR} = \begin{bmatrix} x_c^{iR} & y_c^{iR} & z_c^{iR} & 1 \end{bmatrix}^T$.
Then, Equation (18) in poses i and j is rewritten as follows:
$$\begin{cases} X_b^R - X_b^i = H_{b_i c} \Delta X_c^i + H_{b_i c} P_i \Delta \eta \\ X_b^R - X_b^j = H_{b_j c} \Delta X_c^j + H_{b_j c} P_j \Delta \eta \end{cases} \quad (20)$$
Equation (20) can be organized as follows:
$$\underbrace{X_b^i - X_b^j + H_{b_i c} \Delta X_c^i - H_{b_j c} \Delta X_c^j}_{V} = \underbrace{\left( H_{b_j c} P_j - H_{b_i c} P_i \right)}_{U} \Delta \eta, \quad (21)$$
where U is a full-row-rank matrix; therefore, U has a unique Moore–Penrose generalized inverse, which is derived as $U^+ = U^H (U U^H)^{-1}$.
Then, Equation (21) has the least-norm solution, which is shown as follows:
$$\Delta \eta = U^H (U U^H)^{-1} V, \quad (22)$$
where $\Delta \eta$ contains the errors of rotation and translation in $\nu_{H_{g_i c}}$.
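A compact sketch of the update step is shown below: the stacked system of Equation (21) is solved by the pseudoinverse of Equation (22), the differential operator of Equation (17) is rebuilt from Δη, and the preliminary matrix is incremented. The zeroed bottom row of the operator (so that the homogeneous form is preserved) and all variable names are assumptions of this sketch.

```python
import numpy as np

def update_calibration(U, V, H_gc):
    """Solve U * d_eta = V in the least-norm sense (Equation (22)) and update
    the preliminary calibration matrix with Delta H = H_gc * nu (Equation (17))."""
    d_eta = np.linalg.pinv(U) @ V            # [sigma_x, sigma_y, sigma_z, omega_x, omega_y, omega_z]
    sx, sy, sz, wx, wy, wz = np.ravel(d_eta)[:6]
    nu = np.array([[0.0, -wz,  wy, sx],      # differential operator of Equation (17);
                   [ wz, 0.0, -wx, sy],      # bottom row set to zero here so that
                   [-wy,  wx, 0.0, sz],      # H_gc + H_gc @ nu stays homogeneous
                   [0.0, 0.0, 0.0, 0.0]])
    return H_gc + H_gc @ nu                  # updated calibration matrix
```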
$P_i$ (or $P_j$) and U contain the observed error of the 3D coordinates. Obtaining the modified values relies on the strong geometric constraint of the standard multi-target reference calibrator. The calibrator is shown in Figure 8, where targets P1–P6 are distributed on circles with varying radii; P7 and P8 are virtual targets constructed from the centroids of P2, P3, P5 and of P1, P4, P6, respectively; and the sophisticated magnetic nest (SMN) is used to hold the SBR.
The distance $L_{k,l}$ between the centers of any two SBRs on the plate is constructed as follows:
$$L_{k,l} = \sqrt{(x_k - x_l)^2 + (y_k - y_l)^2 + (z_k - z_l)^2}, \quad (23)$$
where $(x_k, y_k, z_k)$ and $(x_l, y_l, z_l)$ are the coordinates of the centers of any two targets.
Equation (23) can be approximately linearized as follows:
$$\hat{L}_{k,l} = L_{k,l}^0 + \alpha_{k,l} (\Delta x_k - \Delta x_l) + \beta_{k,l} (\Delta y_k - \Delta y_l) + \gamma_{k,l} (\Delta z_k - \Delta z_l), \quad (24)$$
where $\alpha_{k,l} = (x_k^0 - x_l^0)/L_{k,l}^0$, $\beta_{k,l} = (y_k^0 - y_l^0)/L_{k,l}^0$, $\gamma_{k,l} = (z_k^0 - z_l^0)/L_{k,l}^0$; $(x_{k(l)}^0, y_{k(l)}^0, z_{k(l)}^0)$ is the coordinate measured by the binocular measurement unit after the circle of confusion rectification; $(\Delta x_{k(l)}, \Delta y_{k(l)}, \Delta z_{k(l)})$ is the modified value of the observed error; $L_{k,l}^0$ is the standard distance calibrated by a coordinate-measuring machine (CMM); $\hat{L}_{k,l}$ is the measured distance with the observed error; and 1 ≤ k ≤ m−1, 1 ≤ l ≤ m, l > k.
Then, the error equation can be rewritten as follows:
$$\hat{L}_{k,l} - L_{k,l}^0 = \begin{bmatrix} \alpha_{k,l} & \beta_{k,l} & \gamma_{k,l} & -\alpha_{k,l} & -\beta_{k,l} & -\gamma_{k,l} \end{bmatrix} \begin{bmatrix} \Delta x_k \\ \Delta y_k \\ \Delta z_k \\ \Delta x_l \\ \Delta y_l \\ \Delta z_l \end{bmatrix}. \quad (25)$$
According to the adjustment condition, m should be at least 8, which satisfies the redundancy requirement for solving Equation (25). Consequently, the observed errors can be constrained by deducing the modified values $\begin{bmatrix} \Delta x_k & \Delta y_k & \Delta z_k & \Delta x_l & \Delta y_l & \Delta z_l \end{bmatrix}^T$. Substituting $\Delta X_c^{i(j)}$ into Equation (21), $\Delta \eta$ and $\nu_{H_{g_i c}}$ can be obtained. Finally, the preliminary calibration matrix is updated, and the transformation error is corrected.
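The constraint of Equations (23)–(25) over all target pairs forms a small linear system; a sketch of how it can be assembled and solved in the least-squares sense is given below, with illustrative names and data layout.

```python
import numpy as np

def observed_errors(P_meas, L_std, pairs):
    """Assemble Equation (25) for every target pair and solve for the observed
    errors. P_meas: (m, 3) measured centers after circle of confusion
    rectification; L_std[(k, l)]: CMM-calibrated standard distance; pairs:
    list of index pairs (k, l). Returns an (m, 3) array of (dx, dy, dz)."""
    m = P_meas.shape[0]
    A = np.zeros((len(pairs), 3 * m))
    b = np.zeros(len(pairs))
    for row, (k, l) in enumerate(pairs):
        d = P_meas[k] - P_meas[l]
        L_hat = np.linalg.norm(d)               # measured distance with observed error
        L0 = L_std[(k, l)]                      # standard distance from the CMM
        coeff = d / L0                          # (alpha, beta, gamma) of Equation (24)
        A[row, 3 * k:3 * k + 3] = coeff
        A[row, 3 * l:3 * l + 3] = -coeff
        b[row] = L_hat - L0                     # left-hand side of Equation (25)
    dX, *_ = np.linalg.lstsq(A, b, rcond=None)  # least-squares observed errors
    return dX.reshape(m, 3)
```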

4. Experimental Validation

In this paper, the main devices of the eye-in-hand robotic binocular sensor system are as follows. Industrial cameras (VC-50MX, Vieworks, Anyang, Republic of Korea) with a resolution of 7904 × 6004 are adopted to construct the binocular measurement unit; the observation distance from the calibrator is around 850 mm. An industrial robot (KR-210, KUKA, Augsburg, Germany) is employed to hold and move the binocular measurement unit; the robot is a 6-DOF serial robot with a maximum working radius of 2696 mm. A standard multi-target reference calibrator is calibrated by a coordinate-measuring machine (Prismo Navigator, Zeiss, Oberkochen, Germany) with a precision of 0.9 μm + 2.85 μm/m over ranges of 900 mm/1200 mm/650 mm in the X/Y/Z directions. The overall layout of the experimental platform is shown in Figure 9.

4.1. Experiment of Circle of Confusion Rectification

To achieve the 3D coordinate reconstruction of the SBR center, the intrinsic parameters of the binocular sensor should be calibrated according to [35]. The calibration process is standard and will not be detailed again in this section. The parameters determined by the calibration process are shown in Table 1.
The experimental processes are as follows: (a) The eye-in-hand robotic binocular sensor system drives the binocular sensor to observe the calibrator in six different poses (six poses exactly meet the requirements for solving $H_{gc}$). (b) Perform 3D reconstruction of the SBR in the binocular images obtained from each pose. (c) Calculate the distance from the center point of any one SBR to P7, and set the average of the six distances as control group 1 (without rectification). (d) Apply the circle of confusion rectification to the binocular images. (e) Perform 3D reconstruction of the SBR center in the processed binocular images obtained from each pose. (f) Calculate the distance from the center point of any one SBR to P7, and set the average of the six distances as experimental group 1 (with rectification).
The observed values with the circle of confusion rectification and without the proposed method are listed in Table 2. According to the universal standard of optical 3D measurement, VDI/VDE 2634 Part 1 [38], this research used the approach of observing the standard spherical center distance to verify the accuracy index.
The absolute values of the control group 1 errors and experimental group 1 errors are shown in Figure 10. The root mean square error (RMSE) between the standard value and the observed value of all the spherical center distances is counted as the accuracy evaluation result. The RMSE reflects the deviation of the observed value from the standard value; the smaller its value, the higher the measurement accuracy.
The expression of RMSE is shown as follows:
$$\mathrm{RMSE} = \sqrt{\frac{1}{m} \sum_{i=1}^{m} \left( D_{std}^i - D_{obs}^i \right)^2}, \quad (26)$$
where m is the number of observed objects; $D_{std}^i$ is the standard value of a certain object; and $D_{obs}^i$ is the observed value of the control or experimental group.
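Equation (26) corresponds directly to the following helper (argument names are illustrative):

```python
import numpy as np

def rmse(d_std, d_obs):
    """Root mean square error of Equation (26) between the standard and the
    observed distances."""
    d_std, d_obs = np.asarray(d_std, float), np.asarray(d_obs, float)
    return float(np.sqrt(np.mean((d_std - d_obs) ** 2)))
```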
As shown in Figure 10, the errors are reduced following circle of confusion rectification. Furthermore, the RMSE with circle of confusion rectification is 0.041 mm, which is smaller than the 0.049 mm without the rectification. Therefore, the proposed circle of confusion rectification can improve the accuracy of the 3D reconstruction of the binocular sensor.

4.2. Experiment of Transformation Error Correction

The experimental processes are as follows: (g) Obtain the 3D reconstruction results mentioned in process (b) without any optimization in six poses. (h) Apply the circle of confusion rectification to the binocular images as mentioned in process (d). (i) Perform 3D reconstruction of the SBR center in the processed binocular images obtained from each pose. (j) Calculate the distance from the center point of any one SBR to P7, and set the average of the six distances as control group 2 (without modification). (k) Modify the observed error of the 3D coordinates obtained from each pose. (l) Calculate the distance from the center point of any one SBR to P7, and set the average of the six distances as experimental group 2 (with modification). (m) Solve the preliminary calibration matrix according to the data mentioned in (g). (n) Solve the updated calibration matrix according to the data mentioned in (k). (o) Drive the robot to move to ten different poses, and calculate the distance from P7 to the origin of the BCS using Equation (14) based on the preliminary calibration matrix obtained from the traditional method [34]; set the data from the ten distances as control group 3 (without correction). (p) Replace the matrix mentioned in process (o) with the updated calibration matrix, and calculate the distance from P7 to the origin of the BCS using Equation (14); set the data from the ten distances as experimental group 3 (with correction).
The observed values with the observed error modification and without the method are listed in Table 3.
The absolute values of the control group 2 errors and experimental group 2 errors are shown in Figure 11. The RMSE between the standard value and the observed value of all of the spherical center distances is also counted as the accuracy evaluation result.
As shown in Figure 11, the errors are reduced by the observed error modification. Furthermore, the RMSE with the modification is 0.034 mm, which is smaller than the 0.041 mm without it. Therefore, the proposed observed error modification can effectively reduce the observed error of the 3D reconstruction result.
Based on the transformation error correction, the updated calibration result was deduced. The preliminary and updated calibration results are shown in Table 4.
To compare the errors between control group 3 and experimental group 3, a laser tracker with a spherically mounted retro-reflector (SMR) is used to calibrate the distance from P7 to the origin of the BCS as the standard distance. The BCS of the robot is located at the center of the mounting base, with the Z-axis pointing vertically up and the X-axis pointing directly forward, as shown in Figure 12. First, axis 1 is rotated while the angles of the other axes remain the same. The coordinate value of the fixed SMR on the end of the robot is measured by the laser tracker every time the determined angle is rotated. According to these coordinate points, circle 1 is fitted, and the normal line through the center of the circle is the position of axis 1. Second, axis 1 is returned to its original position, axis 2 is rotated, and the angles of the other axes are kept unchanged. The coordinates of the fixed SMR at the end of the robot are measured with the laser tracker at every determined angle, and circle 2 is fitted according to these coordinate points. Third, an SMR is moved to several different positions on the plane where the robot base is fixed and the plane is fitted; the position of the plane where the robot base is located can be obtained by removing the radius bias of the SMR in the tracker software (SpatialAnalyzer 2016.06.03_15061). The intersection of the normal of circle 1 and the plane of the robot base is the origin of the BCS. The direction of the X-axis lies on the intersection line of the plane of circle 2 and the plane of the robot base. The direction of the Z-axis lies on the normal of circle 1. Fourth, the measurement coordinate system is transferred to the BCS through the laser tracker software, and then P2, P3, and P5 are measured to obtain P7 by replacing the SBRs mounted on the SMNs with SMRs. Finally, the standard distance between the origin of the BCS and P7 is measured as 2342.949 mm by the laser tracker.
The absolute values of control group 3 errors and experimental group 3 errors are shown in Figure 13. The RMSE between the standard distance and the observed distance from P7 to the origin of the BCS is also counted as the accuracy evaluation result.
As shown in Figure 13a, compared with control group 3, the errors of experimental group 3 are more concentrated, which indicates that the proposed transformation correction method can improve the precision of the measurement data. Furthermore, in Figure 13b, the errors are reduced by the transformation error correction, and the RMSE with the correction is 0.080 mm, which is smaller than the 0.192 mm without it. Therefore, the proposed transformation correction can effectively improve the accuracy of the calibration matrix.
It is noticeable that, compared with the results of groups 1 and 2, the accuracy improvement of group 3 is relatively unbalanced and shows a significant difference. The first reason is that the error of the robot itself in certain poses is relatively large, which amplifies the observation value in one direction and results in accuracy at the millimeter level, whereas the data of groups 1 and 2 are expressed in the coordinate system of the binocular sensor, whose accuracy is much higher, at the micron level. The second reason is that the error of the robot itself changes considerably with the different joint angle errors in different poses, and this error variation is unbalanced.

4.3. Experiment of Measurement Applicability

Measurement applicability verification is conducted as shown in Figure 14. A component is set as the measured object of the binocular measurement unit. The calibration error was compensated with the standard multi-target reference calibrator before the verification. Three regions on the component are selected as the measured regions. Six SBRs are fixed on each region as the local reference points.
The field verification is also evaluated according to VDI/VDE 2634 Part 1 [38]. The measurement accuracy of the distance is verified in the coordinate system of the robot end in order to avoid interference from the robot's own positioning error. The standard distance is measured by a laser tracker (AT960, Leica, Switzerland; precision: 15 μm + 5 μm/m). The pose of the robot is changed six times to measure each region on the component, and the average of the data within each region is taken as the observation. The field accuracy verification results are shown in Table 5.
The measurement accuracy shown in Table 5 is also expressed by the RMSE. The accuracy indexes with the updated calibration results in the field are better than 0.056 mm. Thus, in general, the proposed method exhibits good applicability and validity.

5. Conclusions and Discussion

In this research, an improved calibration method for the eye-in-hand robotic vision system based on the binocular sensor is proposed, in which the circle of confusion of optical imaging and the observed error of the binocular sensor are considered. The circle of confusion rectification is proposed to improve the accuracy of the 3D reconstruction of the binocular sensor, which provides accurate data for solving the calibration matrix. The transformation correction is developed to build the calibration matrix updating model, which improves the matrix by constraining the observed error. The experimental results show that the proposed method is effective, with the distance error being reduced from 0.192 mm to 0.080 mm compared with the traditional method. Therefore, the accuracy of the calibration matrix is improved. The measurement accuracy of local reference points in the field with the updated calibration results is better than 0.056 mm. Moreover, it can be concluded that the effects of the circle of confusion and the observed error are non-negligible in the calibration process of the eye-in-hand robotic binocular sensor system.
The results obtained in this research are aimed at hand–eye calibration and contribute to the generality of the calibration process in eye-in-hand system integration. However, in order to improve the measurement accuracy of the system in practical applications, it is necessary to consider the influence of errors such as robot positioning and point cloud splicing. In further research, the influence of these two kinds of errors on the system will be discussed, and corresponding solutions will be proposed.

Author Contributions

Conceptualization, W.L. and B.Y.; methodology, B.Y.; supervision, Y.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research is supported by the National Key R&D Program of China (Grant 2018YFA0703304), the National Natural Science Foundation of China (Grant 52125504), and the Liaoning Revitalization Talents Program (Grant XLYC1807086).

Data Availability Statement

Not applicable.

Acknowledgments

We thank Yue for providing the experimental site and for supervising the writing of this paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Phan, N.D.M.; Quinsat, Y.; Lartigue, C. Optimal scanning strategy for on–machine inspection with laser–plane sensor. Int. J. Adv. Manuf. Technol. 2019, 103, 4563–4576. [Google Scholar] [CrossRef]
  2. Vasilev, M.; MacLeod, C.N.; Loukas, C. Sensor-Enabled Multi-Robot System for Automated Welding and In-Process Ultrasonic NDE. Sensors 2021, 21, 5077. [Google Scholar] [CrossRef] [PubMed]
  3. Cheng, Y.S.; Shah, S.H.; Yen, S.H.; Ahmad, A.R.; Lin, C.Y. Enhancing Robotic-Based Propeller Blade Sharpening Efficiency with a Laser-Vision Sensor and a Force Compliance Mechanism. Sensors 2023, 23, 5320. [Google Scholar] [CrossRef]
  4. Jiang, T.; Cui, H.H.; Cheng, X.S. A calibration strategy for vision–guided robot assembly system of large cabin. Measurement 2020, 163, 107991. [Google Scholar] [CrossRef]
  5. Yu, C.; Ji, F.; Xue, J.; Wang, Y. Adaptive Binocular Fringe Dynamic Projection Method for High Dynamic Range Measurement. Sensors 2019, 19, 4023. [Google Scholar] [CrossRef]
  6. Hu, J.B.; Sun, Y.; Li, G.F.; Jiang, G.Z.; Tao, B. Probability analysis for grasp planning facing the field of medical robotics. Measurement 2019, 141, 227–234. [Google Scholar] [CrossRef]
  7. Wang, Q.; Zhang, Y.; Shi, W.; Nie, M. Laser Ranging-Assisted Binocular Visual Sensor Tracking System. Sensors 2020, 20, 688. [Google Scholar] [CrossRef] [PubMed]
  8. Li, M.Y.; Du, Z.J.; Ma, X.X.; Dong, W.; Gao, Y.Z. A robot hand–eye calibration method of line laser sensor based on 3D reconstruction. Robot. Comput. Integr. Manuf. 2021, 71, 102136. [Google Scholar] [CrossRef]
  9. Zhang, Y.; Qiu, Z.C.; Zhang, X.M. Calibration method for hand–eye system with rotation and translation couplings. Appl. Opt. 2019, 58, 5375–5387. [Google Scholar]
  10. Yang, L.X.; Cao, Q.X.; Lin, M.J.; Zhang, H.R.; Ma, Z.M. Robotic hand–eye calibration with depth camera: A sphere model approach. In Proceedings of the IEEE International Conference on Control Automation and Robotics (ICCAR), Auckland, New Zealand, 20–23 April 2018. [Google Scholar]
  11. Wu, J.; Liu, M.; Qi, Y.H. Computationally efficient robust algorithm for generalized sensor calibration. IEEE Sens. J. 2019, 19, 9512–9521. [Google Scholar] [CrossRef]
  12. Tsai, R.Y.; Lenz, R.K. A new technique for fully autonomous and efficient 3d robotics hand eye calibration. IEEE Trans. Robot. Autom. 1989, 5, 345–358. [Google Scholar] [CrossRef]
  13. Higuchi, Y.; Inoue, K.T. Probing supervoids with weak lensing. Mon. Not. R. Astron. Soc. 2018, 476, 359–365. [Google Scholar] [CrossRef]
  14. Liao, K.; Lin, C.Y.; Zhao, Y. DR–GAN: Automatic radial distortion rectification using conditional GAN in real–time. IEEE Trans. Circuits Syst. Video Technol. 2020, 30, 725–733. [Google Scholar] [CrossRef]
  15. Xu, F.; Wang, H.S.; Liu, Z.; Chen, W.D. Adaptive visual servoing for an underwater soft robot considering refraction effects. IEEE Trans. Ind. Electron. 2020, 67, 10575–10586. [Google Scholar] [CrossRef]
  16. Tang, Z.W.; Von Gioi, R.G.; Monasse, P.; Morel, J.M. A precision analysis of camera distortion models. IEEE Trans. Image Process. 2017, 26, 2694–2704. [Google Scholar] [CrossRef]
  17. Er, X.Z.; Rogers, A. Two families of elliptical plasma lenses. Mon. Not. R. Astron. Soc. 2019, 488, 5651–5664. [Google Scholar] [CrossRef]
  18. Deng, F.; Zhang, L.L.; Gao, F.; Qiu, H.B.; Gao, X.; Chen, J. Long–range binocular vision target geolocation using handheld electronic devices in outdoor environment. IEEE Trans. Image Process. 2020, 29, 5531–5541. [Google Scholar] [CrossRef] [PubMed]
  19. Shi, B.W.; Liu, Z.; Zhang, G.J. Online stereo vision measurement based on correction of sensor structural parameters. Opt. Express 2021, 29, 37987–38000. [Google Scholar] [CrossRef] [PubMed]
  20. Kong, S.H.; Fang, X.; Chen, X.Y.; Wu, Z.X.; Yu, J.Z. A NSGA–II–based calibration algorithm for underwater binocular vision measurement system. IEEE Trans. Instrum. Meas. 2020, 69, 794–803. [Google Scholar] [CrossRef]
  21. Wang, Y.; Wang, X.J. On–line three–dimensional coordinate measurement of dynamic binocular stereo vision based on rotating camera in large FOV. Opt. Express 2021, 29, 4986–5005. [Google Scholar] [CrossRef]
  22. Yang, Y.; Peng, Y.; Zeng, L.; Zhao, Y.; Liu, F. Rendering Circular Depth of Field Effect with Integral Image. In Proceedings of the 11th International Conference on Digital Image Processing (ICDIP), Guangzhou, China, 10–13 May 2019. [Google Scholar]
  23. Miks, A.; Novak, J. Dependence of depth of focus on spherical aberration of optical systems. Appl. Opt. 2016, 55, 5931–5935. [Google Scholar] [CrossRef]
  24. Miks, A.; Novak, J. Third-order aberration design of optical systems optimized for specific object distance. Appl. Opt. 2013, 52, 8554–8561. [Google Scholar] [CrossRef] [PubMed]
  25. Deger, F.; Mansouri, A.; Pedersen, M.; Hardeberg, J.Y.; Voisin, Y. A sensor–data–based denoising framework for hyperspectral images. Opt. Express 2015, 23, 1938–1950. [Google Scholar] [CrossRef]
  26. Zhang, W.L.; Sang, X.Z.; Gao, X.; Yu, X.B.; Yan, B.B.; Yu, C.X. Wavefront aberration correction for integral imaging with the pre–filtering function array. Opt. Express 2018, 26, 27064–27075. [Google Scholar] [CrossRef]
  27. Wang, W.; Zhang, C.X.; Ng, M.K. Variational model for simultaneously image denoising and contrast enhancement. Opt. Express 2020, 28, 18751–18777. [Google Scholar] [CrossRef]
  28. Camboulives, M.; Lartigue, C.; Bourdet, P.; Salgado, J. Calibration of a 3D working space multilateration. Precis. Eng. 2016, 44, 163–170. [Google Scholar] [CrossRef]
  29. Franceschini, F.; Galetto, M.; Maisano, D.; Mastrogiacomo, L. Combining multiple large volume metrology systems: Competitive versus cooperative data fusion. Precis. Eng. 2016, 43, 514–524. [Google Scholar] [CrossRef]
  30. Wendt, K.; Franke, M.; Hartig, H. Measuring large 3D structures using four portable tracking laser interferometers. Measurement 2012, 45, 2339–2345. [Google Scholar] [CrossRef]
  31. Urban, S.; Wursthorn, S.; Leitloff, J.; Hinz, S. MultiCol bundle adjustment: A generic method for pose estimation, simultaneous self–calibration and reconstruction for arbitrary multi–camera systems. Int. J. Comput. Vis. 2017, 121, 234–252. [Google Scholar] [CrossRef]
  32. Verykokou, S.; Ioannidis, C. Exterior orientation estimation of oblique aerial images using SfM–based robust bundle adjustment. Int. J. Remote Sens. 2020, 41, 7233–7270. [Google Scholar] [CrossRef]
  33. Qu, Y.F.; Huang, J.Y.; Zhang, X. Rapid 3D reconstruction for image sequence acquired from UAV camera. Sensors 2018, 18, 225. [Google Scholar] [CrossRef] [PubMed]
  34. Mertens, T.; Kautz, J.; Van Reeth, F. Exposure fusion. In Proceedings of the 15th Pacific Conference on Computer Graphics and Applications (PG’07), Maui, HI, USA, 2 November 2007. [Google Scholar]
  35. Zhang, Z.Y. Flexible camera calibration by viewing a plane from unknown orientations. In Proceedings of the Seventh IEEE International Conference on Computer Vision, Kerkyra, Greece, 20–27 September 1999. [Google Scholar]
  36. Quine, B.M.; Tarasyuk, V.; Mebrahtu, H.; Hornsey, R. Determining star–image location: A new sub–pixel interpolation technique to process image centroids. Comput. Phys. Commun. 2007, 177, 700–706. [Google Scholar] [CrossRef]
  37. Shiu, Y.C.; Ahmad, S. Calibration of wrist–mounted robotic sensors by solving homogeneous transform equations of the form AX = XB. IEEE Trans. Robot. Autom. 1989, 5, 16–29. [Google Scholar] [CrossRef]
  38. VDI/VDE 2634; Part 1. Optical 3D Measuring Systems–Imaging Systems with Point-By-Point Probing. Verein Deutscher Ingenieure & Verband Der Elektrotechnik Elektronik Informationstechnik: Berlin, Germany, 2002.
Figure 1. Description of the eye-in-hand robotic binocular sensor system. (a) Binocular sensor; (b) data transformation.
Figure 2. Image fusion preprocessing. (a) Origin image with an exposure time of 100 ms; (b) origin image with an exposure time of 70 ms; (c) fused image.
Figure 3. Formation schematic of the circle of confusion.
Figure 4. Error of ROI detection caused by circle of confusion.
Figure 5. Flowchart of the circle of confusion rectification.
Figure 6. Boundary of ROI before or after rectification.
Figure 7. ROI centers before or after rectification.
Figure 8. Standard multi-target reference calibrator. (a) Structure of the calibrator; (b) distribution of SBRs.
Figure 9. Eye-in-hand robotic binocular sensor system with a standard multi-target reference calibrator.
Figure 10. Error comparison of results without or with rectification.
Figure 11. Error comparison of the results without or with modification.
Figure 12. Measurement of the BCS.
Figure 13. Error comparison of the results without or with correction. (a) Distribution of errors; (b) errors' comparison in different poses.
Figure 14. Verification measurement for local reference points in region 1.
Table 1. Calibration results of the parameters.

Parameter | Left Camera | Right Camera
k1 | −0.1820 | −0.1790
k2 | 0.0382 | 0.0410
p1 | −1.2323 × 10⁻⁴ | −1.3870 × 10⁻⁴
p2 | −2.086 × 10⁻³ | 6.1797 × 10⁻⁴
Intrinsic matrix | $\begin{bmatrix} 1.2058 \times 10^4 & 0 & 4.0541 \times 10^3 \\ 0 & 1.2080 \times 10^4 & 3.0057 \times 10^3 \\ 0 & 0 & 1 \end{bmatrix}$ | $\begin{bmatrix} 1.1933 \times 10^4 & 0 & 3.9517 \times 10^3 \\ 0 & 1.1938 \times 10^4 & 3.0020 \times 10^3 \\ 0 & 0 & 1 \end{bmatrix}$
Extrinsic matrix | $\begin{bmatrix} 0.8542 & 2.1609 \times 10^{-4} & 0.5199 & 343.8618 \\ 0.0018 & 1 & 0.0025 & 2.0126 \\ 0.5199 & 0.0031 & 0.8542 & 86.4411 \end{bmatrix}$
Table 2. Observed value without/with rectification.

No. | Standard Distance (mm) | Control Group 1 (mm) | Experimental Group 1 (mm)
P1P7 | 162.085 | 162.108 | 162.068
P2P7 | 58.853 | 58.821 | 58.827
P3P7 | 55.181 | 55.243 | 55.236
P4P7 | 130.546 | 130.468 | 130.478
P5P7 | 64.765 | 64.728 | 64.738
P6P7 | 178.467 | 178.499 | 178.493
P8P7 | 43.745 | 43.799 | 43.789
Table 3. Observed value without/with modification.

No. | Standard Distance (mm) | Control Group 2 (mm) | Experimental Group 2 (mm)
P1P7 | 162.085 | 162.068 | 162.075
P2P7 | 58.853 | 58.827 | 58.870
P3P7 | 55.181 | 55.236 | 55.134
P4P7 | 130.546 | 130.478 | 130.488
P5P7 | 64.765 | 64.738 | 64.783
P6P7 | 178.467 | 178.493 | 178.450
P8P7 | 43.745 | 43.789 | 43.709
Table 4. Preliminary and updated calibration matrix.

Calibration Matrix | $H_{gc}$
Preliminary | $\begin{bmatrix} 0.9734 & 0.0066 & 0.2289 & 188.2993 \\ 0.2289 & 0.0062 & 0.9734 & 28.2481 \\ 0.0078 & 0.1000 & 0.0046 & 78.6841 \end{bmatrix}$
Updated | $\begin{bmatrix} 0.9732 & 0.0064 & 0.2290 & 188.2991 \\ 0.2287 & 0.0064 & 0.9736 & 28.2479 \\ 0.0079 & 0.9803 & 0.0048 & 78.6839 \end{bmatrix}$
Table 5. Accuracy verification of each region.

Region | No. | Standard Distance (mm) | Observation with Preliminary Calibration (mm) | Observation with Updated Calibration (mm) | Preliminary Error (mm) | Updated Error (mm)
Region 1 | P1P6 | 58.224 | 58.289 | 58.279 | 0.065 | 0.054
Region 1 | P2P6 | 54.532 | 54.596 | 54.588 | 0.064 | 0.056
Region 1 | P3P6 | 56.970 | 57.038 | 57.027 | 0.068 | 0.058
Region 1 | P4P6 | 59.835 | 59.773 | 59.782 | 0.062 | 0.053
Region 1 | P5P6 | 66.761 | 66.823 | 66.813 | 0.063 | 0.052
Region 1 | RMSE (mm) | | | | 0.064 | 0.055
Region 2 | Q1Q6 | 50.030 | 50.092 | 50.081 | 0.062 | 0.051
Region 2 | Q2Q6 | 56.000 | 55.937 | 55.946 | 0.063 | 0.054
Region 2 | Q3Q6 | 54.755 | 54.694 | 54.703 | 0.061 | 0.053
Region 2 | Q4Q6 | 54.766 | 54.703 | 54.712 | 0.064 | 0.055
Region 2 | Q5Q6 | 61.848 | 61.915 | 61.904 | 0.067 | 0.057
Region 2 | RMSE (mm) | | | | 0.063 | 0.054
Region 3 | M1M6 | 59.476 | 59.541 | 59.534 | 0.065 | 0.058
Region 3 | M2M6 | 64.325 | 64.265 | 64.274 | 0.060 | 0.051
Region 3 | M3M6 | 63.382 | 63.319 | 63.327 | 0.063 | 0.055
Region 3 | M4M6 | 50.038 | 49.977 | 49.984 | 0.061 | 0.054
Region 3 | M5M6 | 61.609 | 61.676 | 61.665 | 0.067 | 0.056
Region 3 | RMSE (mm) | | | | 0.063 | 0.055

