Calibration Method of Orthogonally Splitting Imaging Pose Sensor Based on General Imaging Model

The orthogonally splitting imaging pose sensor is a new sensor with two orthogonal line array charge coupled devices (CCDs). Owing to its special structure, distortion correction and imaging model problems arise during the calibration procedure. This paper proposes a calibration method based on the general imaging model to solve these problems. The method introduces Plücker coordinates to describe the mapping relation between the image coordinate system and the world coordinate system. The mapping relation is solved with radial basis function interpolation, and control points are adaptively selected with the Kmeans clustering method to improve the fitting accuracy. The appropriate radial basis function and its shape parameter are determined by experiments, and these parameters are used to calibrate the orthogonally splitting imaging pose sensor. According to the calibration result, the root mean square (RMS) errors of the calibration data set and the test data set are 0.048 mm and 0.049 mm, respectively. A comparative experiment is conducted between the pinhole imaging model and the general imaging model. The experimental results show that the calibration method based on the general imaging model applies to the orthogonally splitting imaging pose sensor. The calibration method requires only one image corresponding to the target in the world coordinates, and distortion correction does not need to be taken into account. Compared with the calibration method based on the pinhole imaging model, the calibration procedure based on the general imaging model is simpler and its accuracy is higher.


Introduction
With the development of advanced industrial equipment toward intelligence and digitalization, the demand for accurate pose measurement over a wide range of space targets is increasing. Visual pose measurement has the advantages of being non-contact, simple, and stable, with moderate accuracy. It is widely used in aerospace, machine building, robot navigation, and other fields. At present, the visual sensors used in visual pose measurement systems include the area array CCD and the line array CCD. Compared with the area array CCD, the line array CCD offers higher resolution and faster speed in vision measurement. However, a line array CCD can only obtain a one-dimensional image, so multiple line array CCDs are combined to realize spatial measurement of the target [1][2][3]. The target must then lie in the common view of all line array CCDs, which limits the measurement range. In order to satisfy the requirements of a wide measurement range, high precision, and fast speed, the orthogonally splitting imaging pose sensor based on line array CCDs is designed. The new sensor is composed of a special optical imaging system and dual line array CCDs to simulate monocular measurement with an area array CCD. Compared with binocular or trinocular vision measurement, the new sensor overcomes the small measurement range and has higher resolution to improve the measurement accuracy.
The optical system of the pose sensor is composed of two parts: the imaging system and the beam splitting system. The imaging system, responsible for imaging a target point as a two-dimensional image, is composed of an aperture and a spherical mirror group acting as an objective lens with a large field of view. The beam splitting system is responsible for dividing the two-dimensional image into two one-dimensional images, which are received by the line array CCDs in the corresponding directions. It is composed of a beam splitting prism and two cylindrical mirror groups.
Owing to the special optical imaging structure of the orthogonally splitting imaging pose sensor, there exist various types of distortion in its optical system, such as the radial distortion caused by the wide field of view, one-way error caused by the cylindrical lenses, and linear and nonlinear errors caused by the assembly technology. These distortions affect the geometric relationship between the target and its image, and the ideal pinhole imaging model cannot properly describe the actual optical imaging relationship. Calibration methods based on the pinhole imaging model normally ignore tangential distortion and take only radial distortion into consideration. The main calibration methods, including Tsai's two-step method [4], Weng [5], and Zhang [6], require at least two images corresponding to the target in the world coordinates. But problems do exist with these approaches: pupil aberration and assembly error remain. Tardif [7,8] proposed a rotational symmetry model to solve the problem of pupil aberration, but it could not solve the problem of assembly error.
Based on the above analysis, this paper proposes a calibration method based on a general imaging model [9][10][11][12] to calibrate the orthogonally splitting imaging pose sensor. The general imaging model is a black box model: it describes the mapping relation between the incoming light and the pixel without considering distortion, so the model can apply to any optical imaging system. This paper adopts continuous vector-valued functions to express the mapping relation, which is solved by radial basis function interpolation, and control points are adaptively selected with the Kmeans clustering method to improve the fitting accuracy. The calibration method requires only one image corresponding to the target in the world coordinates.

Imaging Principle of the Orthogonally Splitting Imaging Pose Sensor
The orthogonally splitting imaging pose sensor consists of a spherical mirror, a beam splitting prism, cylindrical lenses, and line array CCDs, as shown in Figure 1. The target point P_w(x_w, y_w, z_w) generates a two-dimensional image P_u(x_u, y_u) through the aperture diaphragm and spherical mirror. Through the beam splitting prism, two identical 2D images are generated. Horizontal and vertical cylindrical lenses respectively compress the 2D images into 1D images, namely the X-direction image and the Y-direction image. According to the optical property of a cylindrical lens, beams parallel to the optical axis generate an image perpendicular to the generatrix direction. In order to realize imaging over a wide range, each line array CCD is placed parallel to the generatrix direction [13], and the horizontal and vertical line array CCDs receive their respective corresponding 1D images. Therefore, a 2D image is transformed into two orthogonal 1D images. The X-direction and Y-direction line array CCDs capture the image coordinates u and v, respectively. From the two 1D image coordinates, the 2D image coordinate of the target point I(u, v) can be restored, as shown in Figure 2. Owing to the high resolution of the line array CCDs, the restored 2D image has high resolution, which improves the measurement precision of the system.

Mapping Relation
In the general imaging model, the mapping relation between the target in a world coordinate system and a pixel in the image coordinate system can be described as a straight incoming line, as shown in Figure 3. Plücker coordinates can describe the incoming light with unique homogeneous coordinates in a five-dimensional projective space [14], as shown in Equation (1).
The coordinates of two points in three-dimensional space are (x_1, x_2, x_3) and (y_1, y_2, y_3); their homogeneous coordinates are (x_0, x_1, x_2, x_3) and (y_0, y_1, y_2, y_3) with x_0 = y_0 = 1. The line through the two points can be expressed in Plücker coordinates as l_ij = x_i y_j − x_j y_i. Rewritten in vector form, l = (l_01, l_02, l_03, l_23, l_31, l_12) (1). Assuming the mapping relation changes continuously, we can express it by a vector-valued function f(x), which can be solved by radial basis function interpolation, as shown in Figure 3.
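As an illustration, the Plücker coordinates of Equation (1) can be computed directly from two points on the incoming light. The following sketch (Python with NumPy, not part of the original derivation) uses homogeneous coordinates with x_0 = y_0 = 1, so the first three components give the line direction and the last three its moment.

```python
import numpy as np

def plucker_line(p, q):
    """Plücker coordinates of the line through two finite 3-D points.

    p and q are the affine points (x1, x2, x3) and (y1, y2, y3); their
    homogeneous coordinates are (1, p) and (1, q). Then l_0j = q_j - p_j
    (the direction) and (l_23, l_31, l_12) = p x q (the moment).
    """
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    direction = q - p        # (l01, l02, l03)
    moment = np.cross(p, q)  # (l23, l31, l12), equal to p x (q - p)
    return np.concatenate([direction, moment])
```

A point w lies on this line exactly when w × direction equals the moment, which is the incidence property used later for the error evaluation.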

Mathematical Model
Radial basis function interpolation is used to solve the mapping relation. In order to guarantee the positive definiteness and invertibility of the interpolation matrix, the interpolation formula is augmented with a low-order polynomial term, as shown in Equation (2):

s(x) = Σ_{j=1}^{M} w_j φ(‖x − c_j‖) + a_0 + a_x^T x (2)

In Equation (2), x represents an image coordinate I_i; w_j, a_0, and a_x are unknown parameters; {c_j}, j = 1, ⋯, M is the set of control points; and φ is the radial basis function. Rewriting Equation (2) in matrix form gives Equation (3), in which w and a are the unknown parameters and φ(x) is the vector of radial basis function values at x. Six independent interpolations s_i(x) based on the radial basis function describe a vector-valued function f(x), as shown in Equation (5). Replacing s_i(x) in Equation (5) by Equation (4) gives Equation (6), in which H_cam is called the camera matrix. Equation (6) can be simplified as Equation (7), and the mathematical model of the mapping relation can be expressed as

f(x) = ((φ(x) p(x)) H_cam) (7)

Based on the mathematical model, when the set of image points, the set of control points, and the camera matrix are known, the vector-valued function can be solved for a given radial basis function. The set of image points and the set of control points can be obtained by measurement, and the camera matrix can be obtained by calibration.
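The scalar interpolant of Equation (2) can be sketched as follows. The helper names `fit_rbf` and `eval_rbf` are hypothetical, the MQ basis is used for concreteness, and the linear system is solved in the least-squares sense rather than through the paper's exact camera-matrix formulation; one such fit per Plücker component yields the vector-valued f(x).

```python
import numpy as np

def mq(r, beta):
    """Multi-Quadric radial basis function with shape parameter beta."""
    return np.sqrt(r**2 + beta**2)

def fit_rbf(centers, x_data, y_data, beta=1.0):
    """Fit s(x) = sum_j w_j phi(|x - c_j|) + a0 + a.x for one component.

    centers: (M, 2) control points; x_data: (N, 2) image coordinates;
    y_data: (N,) one Plücker-line component per sample.
    Returns the stacked coefficients (w_1..w_M, a0, ax, ay).
    """
    r = np.linalg.norm(x_data[:, None, :] - centers[None, :, :], axis=2)
    A = np.hstack([mq(r, beta), np.ones((len(x_data), 1)), x_data])  # (N, M+3)
    coef, *_ = np.linalg.lstsq(A, y_data, rcond=None)
    return coef

def eval_rbf(coef, centers, x, beta=1.0):
    """Evaluate the fitted interpolant at query points x of shape (K, 2)."""
    x = np.atleast_2d(x)
    r = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=2)
    A = np.hstack([mq(r, beta), np.ones((len(x), 1)), x])
    return A @ coef
```

The polynomial tail (a_0 + a_x^T x) is what makes the augmented system solvable for conditionally positive definite bases such as MQ.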

Mathematical Derivation
According to the mathematical model, the target point in the world coordinate system and its image in the image coordinate system should lie on the same line in Plücker coordinates.
According to the characteristics of the Plücker line [14], Equation (8) is satisfied if a point lies on the Plücker line. According to the property of the transposed matrix, (AB)^T = B^T A^T, Equation (9) can be rewritten as Equation (10). From the above equation, it can be concluded that if the image coordinates x and the world coordinates w are known, the camera matrix can be solved. If the image coordinates and world coordinates of N points are known, the above equation can be rewritten as Equation (12).
In order to exclude the trivial zero solution, the constraint ‖vec(H_cam)‖ = 1 is added. The problem of solving the camera matrix is then transformed into finding the least squares solution [15] of the homogeneous equations, i.e., the vec(H_cam) that minimizes ‖M vec(H_cam)‖. By the singular value decomposition, the matrix M can be expressed as M = UDV^T, so vec(H_cam) is the last column of the matrix V. Hence, if the data set {x_i → w_i}, i = 1, ⋯, N is known, the camera matrix H_cam can be solved.
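The SVD step can be sketched as a small helper; `solve_homogeneous` is a hypothetical name, and M stands for the calibration matrix of Equation (12).

```python
import numpy as np

def solve_homogeneous(M):
    """Least-squares solution of M v = 0 subject to ||v|| = 1.

    The minimiser of ||M v|| over unit vectors is the right singular
    vector associated with the smallest singular value, i.e. the last
    row of Vt in NumPy's M = U @ diag(S) @ Vt convention (equivalently
    the last column of V).
    """
    _, _, Vt = np.linalg.svd(M)
    return Vt[-1]
```

The returned vector is then reshaped back into the camera matrix H_cam.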

Experimental Apparatus
The experimental apparatus consists of a target, a peripheral component interconnect (PCI) controller of the target (independently researched and developed), a motorized stage, a mechanical controller of the motorized stage, and an orthogonally splitting imaging pose sensor. Among them, the PCI controller of the target controls the LEDs, which are lit up at different times, and the mechanical controller of the motorized stage controls the movement of the motorized stage. The experimental apparatus is shown in Figure 4. The main apparatus parameters are shown in Table 1.
Table 1. Apparatus parameters.

LED
The power is 1 W, the working current is 350 mA, and the working voltage is 3–3.8 V.

Line array CCD
The resolution is 12,288 pixels, the pixel size is 5 μm, and the dimensions are 76 × 76 × 56 mm³.

Motorized stage
Ball screw drive mode; the guide rail adopts linear bearings; the resolution under 8 subdivision is 2.5 μm; the maximum speed is 40 mm/s; and the repeated positioning accuracy is less than 5 μm.

Data Acquisition Method
The data set is composed of N points' image coordinates x_i and N points' world coordinates w_i, {x_i → w_i}, i = 1, ⋯, N. The image coordinate of each point on the target is measured by the orthogonally splitting imaging pose sensor, and its numerical unit is the pixel. The corresponding world coordinate is acquired from the distribution of points on the target. As shown in Figure 4, the horizontal distance between every two points is 60 mm, and the vertical distance between every two points is 60 mm. The point lying on the upper left corner of the target is set as the origin of the world coordinate system, so the x and y values of the world coordinate can be calculated. When the pose sensor is set in the initial position, the z value of the world coordinate is 0. When the motorized stage drives the pose sensor to move away from the target by a distance of 20 mm, the z value of the world coordinate is 20 mm. In this way, the three-dimensional world coordinates can be obtained.
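The world-coordinate construction above amounts to enumerating a regular grid; a hypothetical sketch follows, in which `rows`, `cols`, and `planes` (the numbers of target rows and columns and of stage positions) are assumptions not stated in the text, while the 60 mm pitch and 20 mm stage step come from the description.

```python
import numpy as np

def world_coordinates(rows, cols, planes, pitch=60.0, step=20.0):
    """World coordinates of the target points over all stage positions.

    Points lie on a grid with 60 mm horizontal and vertical pitch; the
    upper-left point is the origin, and each 20 mm move of the stage
    away from the target adds 20 mm to the z value.
    """
    xs = np.arange(cols) * pitch
    ys = np.arange(rows) * pitch
    zs = np.arange(planes) * step
    return np.array([(x, y, z) for z in zs for y in ys for x in xs])
```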


Calibration Procedure
Step one: Choose the appropriate radial basis function. Because radial basis function interpolation with shape parameters performs better, this paper uses the Multi-Quadric function (MQ function) φ(r) = (r² + β²)^(1/2) and the Gaussian function φ(r) = e^(−r²/β²) as the radial basis functions to calibrate the camera, where β is the shape parameter.
Step two: The control points are adaptively selected from the given data set using the Kmeans clustering method [16]. Radial basis function interpolation is an interpolation method for high-dimensional scattered data, and the approximation quality and stability of the function are closely related to the distribution of the control points. The selection of control points is therefore a very important part of the calibration method and directly affects the measurement accuracy. The control points selected by the Kmeans clustering method represent the distribution characteristics of the data set.
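Step two can be sketched with a minimal Lloyd-iteration K-means in NumPy; `select_control_points` is a hypothetical name, and snapping each cluster centre to its nearest measured sample is one reasonable reading of "selected from the data set" (the paper's exact Kmeans variant is not specified).

```python
import numpy as np

def select_control_points(data, k, iters=50, seed=0):
    """Adaptively pick k control points from a data set via K-means.

    Clusters the coordinates with Lloyd's algorithm, then returns the
    data point nearest to each cluster centre, so every control point
    is an actual measured sample.
    """
    data = np.asarray(data, dtype=float)
    rng = np.random.default_rng(seed)
    centres = data[rng.choice(len(data), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(data[:, None, :] - centres[None, :, :], axis=2)
        labels = d.argmin(axis=1)          # nearest centre per point
        for j in range(k):
            members = data[labels == j]
            if len(members):               # keep old centre if cluster empty
                centres[j] = members.mean(axis=0)
    d = np.linalg.norm(data[:, None, :] - centres[None, :, :], axis=2)
    return data[d.argmin(axis=0)]          # nearest sample per centre
```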
Step three: Coordinate normalization. The radial basis functions chosen in this paper have shape parameters, and the choice of shape parameter depends on the distribution of the points in the coordinate system. In order to eliminate the influence of the scale factor of the optical imaging system on the coordinate values, the image coordinates and the world coordinates of each point must be normalized. The normalized data set is used in the subsequent steps.
(1) Normalization of the image coordinates: the image coordinates are normalized by an affine transformation whose parameters A and α are solved by the Cholesky decomposition method. (2) Normalization of the world coordinates: the world coordinates are normalized by an affine transformation whose parameters ρ, B, and b are solved by the Cholesky decomposition method.
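One common Cholesky-based affine normalization is whitening: shift to zero mean and transform the covariance to the identity. This is a sketch of that choice, assuming it matches the intent of the A, α (and ρ, B, b) parameterisation above; the paper's exact parameterisation may differ.

```python
import numpy as np

def normalize(points):
    """Whiten coordinates: zero mean, identity sample covariance.

    Factors the sample covariance as C = L @ L.T (Cholesky) and maps
    x -> L^{-1} (x - mu). Returns the normalized points plus (L, mu)
    needed to undo the transform: x = L @ x_n + mu.
    """
    points = np.asarray(points, dtype=float)
    mu = points.mean(axis=0)
    cov = np.cov(points, rowvar=False)
    L = np.linalg.cholesky(cov)
    normalized = np.linalg.solve(L, (points - mu).T).T
    return normalized, L, mu
```

Returning (L, mu) is what allows step five below to convert fitted line coordinates back to the original scale.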
Step four: Solve the camera matrix H_cam. Calculate the calibration matrix M according to Equation (12), and then use the singular value decomposition method to solve the camera matrix H_cam.
Step five: Substitute the image coordinate x of any point and the solved camera matrix H_cam into Equation (7) to fit the corresponding line coordinates. Since normalization changes the coordinate values of the line space, the line coordinates obtained by the interpolation should be converted back to the original values according to Equation (14).
Step six: Repeat step five until the line coordinates of all points in the data set are fitted.

Data Acquisition
For the experiments, three data sets are acquired: the calibration data set, the control points set, and the test data set. The calibration data set is used to calibrate the pose sensor; the control points set is selected from the calibration data set for the radial basis function interpolation; and the test data set is used to verify the accuracy of the calibration results.

Calibration Data Set Acquisition and Control Points Set Selection
According to the data acquisition method described in Section 2.2.2, a calibration data set containing 358 points is acquired; the result is shown in Figure 5.
According to the Kmeans clustering method described in Section 2.2.4, 179 control points are adaptively selected from the calibration data set. The selection results of the control points set are shown in Figure 6.

Test Data Set Acquisition
The acquisition method of the test data set is the same as that of the calibration data set. A test data set containing 364 points is acquired; the result is shown in Figure 7. It is important that the test data set does not overlap with the calibration data set.



Parameter Experiment
According to the calibration procedure proposed in Section 2.2.4, two different radial basis functions with different shape parameters are used to perform the interpolation. The calibration results are then applied to the fitting calculation for error evaluation on the calibration data and the test data. The error is evaluated as the distance from the target point in the world coordinate system to the fitted Plücker line, as shown in Figure 8. As can be seen from Figure 8, when the MQ function is used as the radial basis function, the RMS of the calibration data set is similar to that of the test data set, whereas when the Gaussian function is used, there is a large difference between the two. Therefore, the calibration accuracy of the MQ function is higher than that of the Gaussian function.
From the experimental data, the shape parameter with the minimum difference between the RMS of the test data set and the RMS of the calibration data set is selected to calibrate the orthogonally splitting imaging pose sensor.
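The error evaluation, i.e., the distance from a world point to its fitted Plücker line, can be computed directly from the line's direction and moment parts; the function names here are hypothetical helpers for illustration.

```python
import numpy as np

def point_line_distance(w, L):
    """Distance from a 3-D point w to a Plücker line L = (d, m).

    For a line with direction d and moment m = p x d (p any point on
    the line), the distance from w is |w x d - m| / |d|, since
    (w - p) x d = w x d - m.
    """
    d = np.asarray(L[:3], dtype=float)
    m = np.asarray(L[3:], dtype=float)
    return np.linalg.norm(np.cross(w, d) - m) / np.linalg.norm(d)

def rms_error(points, lines):
    """RMS of point-to-line distances over a data set."""
    e = [point_line_distance(w, L) for w, L in zip(points, lines)]
    return float(np.sqrt(np.mean(np.square(e))))
```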


Calibration and Test
According to the selected shape parameter, the RMS of the calibration data set and the test data set is shown in Table 2. It can be seen from the experimental results that the calibration method based on the general imaging model meets the calibration requirements of the orthogonally splitting imaging pose sensor, and that the MQ function is more suitable as the interpolation function.

Comparison Experiment
The calibration method based on the general imaging model proposed in this paper and the calibration method based on the pinhole imaging model are both applied to the orthogonally splitting imaging pose sensor, and the results of the error evaluation are compared.
The calibration method based on the general imaging model uses the MQ function, with its shape parameter selected according to the parameter experiment.
Since the pinhole imaging model is an ideal model and the pose sensor has large distortion, distortion correction is performed first, and then the Tsai two-step method is used for calibration. The first step uses a linear iterative optimization algorithm to calculate the initial values of the external parameters of the camera. The second step uses a nonlinear optimization method to solve the internal parameters and further optimize the external parameters. Finally, the world coordinates are solved using these parameters and the RMS is calculated. The experimental results are shown in Table 3. It can be seen that the calibration accuracy based on the general imaging model is higher than that based on the pinhole imaging model.

Discussion
The orthogonally splitting imaging pose sensor has an independently designed optical imaging structure with assembly errors and various linear and nonlinear distortions, so the pinhole imaging model cannot accurately describe this special optical imaging system. This paper proposes a calibration method based on the general imaging model. The general imaging model introduces Plücker coordinates to represent the mapping relationship between the world coordinates and the image coordinates, and distortion does not need to be considered. The model only concerns the mapping relationship between the incoming light and the pixel, and is suitable for modeling various optical systems. The method uses radial basis function interpolation to fit the mapping relationship between target points in the world coordinate system and image points in the image coordinate system, and each target point in the world coordinate system requires only one corresponding image point in the image coordinate system. Because the distribution of control points affects the accuracy of radial basis function interpolation, this paper adopts the Kmeans clustering method to adaptively select the control points set, which effectively improves the fitting accuracy. The calibration method based on the general imaging model is applied to the calibration of the orthogonally splitting imaging pose sensor, and the appropriate radial basis function and its shape parameter are selected through experiments. The experimental results show that the MQ function is more suitable as the radial basis function for the orthogonally splitting imaging pose sensor; the RMS of the calibration data set and that of the test data set are 0.048 mm and 0.049 mm, respectively. Compared with the calibration method based on the pinhole imaging model, the calibration accuracy of the method based on the general imaging model is higher. Therefore, the calibration method based on the general imaging model proposed in this paper is suitable for the calibration of the orthogonally splitting imaging pose sensor. In the future, the calibration method can be applied to more types of optical imaging systems, and corresponding interpolation methods should be explored to further improve the calibration accuracy. The applicability of the calibration method can be verified through more calibration and test experiments.

Figure 1 .
Figure 1. Structure of the orthogonally splitting imaging pose sensor.

Figure 2 .
Figure 2. The 2D image coordinate of the target point I(u, v) can be restored from the two 1D images.


Figure 3 .
Figure 3. General imaging model. P_Wi is the world coordinate of the target point; I_i is the image coordinate corresponding to the target point; l_i is the mapping relation expressed in Plücker coordinates.


Figure 5 .
Figure 5. Calibration data set. (a) The world coordinates of the calibration data set; (b) the image coordinates of the calibration data set.

Figure 6 .
Figure 6. The selection results of the control points set.


Figure 7 .
Figure 7. Test data set. (a) The world coordinates of the test data set; (b) the image coordinates of the test data set.


Figure 8 .
Figure 8. Analysis of radial basis function interpolation and fitting accuracy with different shape parameters. (a) The root mean square (RMS) calculated using different shape parameters with the Multi-Quadric function (MQ function); (b) the RMS calculated using different shape parameters with the Gaussian function.


Table 2 .
Error evaluation of the calibration data set and test data set.

Table 3 .
Error evaluation of the general imaging model and the pinhole imaging model.