Ultrasound Probe and Hand-Eye Calibrations for Robot-Assisted Needle Biopsy

In robot-assisted, ultrasound-guided needle biopsy, it is essential to calibrate the ultrasound probe and to perform hand-eye calibration of the robot in order to establish a link between intra-operatively acquired ultrasound images and robot-assisted needle insertion. Based on a high-precision optical tracking system, novel methods for ultrasound probe and robot hand-eye calibration are proposed. Specifically, we first fix optically trackable markers to the ultrasound probe and to the robot, respectively. We then design a five-wire phantom to calibrate the ultrasound probe. Finally, an effective method for hand-eye calibration is proposed that takes advantage of the steady movement of the robot without requiring an additional calibration frame or solving the AX = XB equation. After calibration, our system allows for in situ definition of target lesions and aiming trajectories from intra-operatively acquired ultrasound images in order to align the robot for precise needle biopsy. Comprehensive experiments were conducted to evaluate the accuracy of the individual components of our system as well as the overall system accuracy. Experimental results demonstrated the efficacy of the proposed methods.


Introduction
Needle biopsy is a well-established procedure that allows for examination of abnormal tissue within the body. For example, percutaneous needle biopsy of suspected primary bone neoplasms is a well-established procedure in specialist centers [1]. Fine needle biopsy has long been established as an accurate and safe procedure for tissue diagnosis of breast masses [2,3]. Amniocentesis is a technique for withdrawing amniotic fluid from the uterine cavity using a needle [4][5][6][7]. Often, these procedures are performed under image guidance. Although some needle biopsy procedures can be guided using imaging modalities such as fluoroscopy, CT, MRI, single-photon emission computed tomography (SPECT), positron emission tomography (PET), and optical imaging, there are procedures such as amniocentesis that require continuous ultrasound (US) guidance, considering the safety of the mother and the baby. US is regarded as one of the most common imaging modalities for needle biopsy guidance as it is relatively cheap, readily available, and uses no ionizing radiation.
US-guided needle biopsies are often accomplished with handheld or stereotactic biopsy procedures, which are operator dependent. Moreover, such procedures require extensive training, are difficult to standardize, and become more challenging when small lesions are targeted. Consequently, handheld US-guided biopsies do not always yield ideal results.
To address these challenges, one of the proposed technologies is to integrate a robotic system with US imaging [3,8]. In such a robot-assisted, US-guided needle biopsy system, it is essential to calibrate the US probe and to perform hand-eye calibration of the robot in order to establish a link between intra-operatively acquired US images and robot-assisted needle insertion. Based on a high-precision optical tracking system, novel methods for US probe and robot hand-eye calibration are proposed. Specifically, we first fix optically trackable markers to the US probe and to the robot, respectively. We then design a five-wire phantom to calibrate the US probe. Finally, an effective method taking advantage of the steady movement of the robot, but without the need to solve the AX = XB equation, is proposed for hand-eye calibration. After calibration, our system allows for in situ definition of target lesions and aiming trajectories from intra-operatively acquired US images in order to align the robot for precise needle biopsy. The contributions of our paper can be summarized as follows:
• We design a five-wire phantom and, based on this phantom, propose a novel method for ultrasound probe calibration.
• We propose an effective method for hand-eye calibration which, unlike previous work, requires neither solving the AX = XB equation nor an additional calibration frame.
• We conduct comprehensive experiments to evaluate the efficacy of the proposed calibration methods as well as the overall system accuracy.

Related Work
Different robotic systems have been developed for US-guided procedures. To realize the needle biopsy, the robot has to know the spatial position of the target lesion and the aiming trajectory from the US image. The performance of the needle biopsy therefore depends on the image-to-robot registration accuracy.
Rapid and accurate US probe calibration depends on a well-designed phantom, which is expected to reduce the operation time and to improve the accuracy. There exist different types of calibration phantoms [9]. When a point phantom or plane phantom is used, it is very difficult to align the scan plane with the targets [10,11]. Moreover, these methods rely on manual segmentation, which is time-consuming and labor-intensive. The N-wire phantom was designed to solve the alignment problem [12][13][14]. However, it depends heavily on a known geometric constraint [15], which cannot be precisely satisfied given the errors in fiducial detection from US images. To address this problem, arbitrary-wire phantoms were proposed [16,17].
Hand-eye calibration aims to determine the transformation between a vision system and a robot arm. Hand-eye calibration methods differ according to the kind of vision device and its mounting location [18]. Generally, an additional calibration frame is required for hand-eye calibration to identify the extrinsic and intrinsic parameters of the camera [19,20]. Furthermore, the problem is commonly formulated as an equation of the form AX = XB, which models the closed-loop system [21]. Different methods and solutions have been developed, including simultaneous closed-form solutions [22], separable closed-form solutions [23], and iterative solutions [24]. The first autonomous hand-eye calibration was proposed by Bennett et al. [25], who identified all parameters of the internal models of both the camera and the robot arm by an interactive identification method. There also exist methods that identify the hand-eye transformation by recognizing movement trajectories of the reference frame corresponding to fixed robot poses [26]. In such methods, it is critical to choose appropriate poses and movement trajectories in order to achieve a rapid and reliable calibration.

Overview of Our Robot-Assisted Ultrasound-Guided Needle Biopsy System
Our robot-assisted US-guided needle biopsy system consists of a master computer equipped with a frame grabber (DVI2USB 3.0, Epiphan Systems Inc., Ottawa, ON, Canada), a US machine (ACUSON OXANA2, Siemens Healthcare GmbH, Marburg, Germany) with a 45-mm 9L4 linear-array transducer (Siemens Medical Solutions USA Inc., Malvern, PA, USA), an optical tracking camera (Polaris Vega XT, Northern Digital Inc., Waterloo, ON, Canada), and a robot arm (UR5e, Universal Robots A/S, Odense, Denmark) with a biopsy guide. Via the frame grabber, the master computer grabs real-time US images at 10 Hz. It also communicates with the tracking camera to obtain the poses of the different tracking frames, and with the remote controller of the UR robot in order to realize a steady movement and to receive feedback information, such as robot poses.
During a needle biopsy procedure, the target lesion and the aiming trajectory are planned in the US image grabbed by the master computer. Then, the pose of the guide is adjusted to align with the planned biopsy trajectory. Thus, it is essential to determine the spatial transformation from the two-dimensional (2D) US imaging space to the three-dimensional (3D) robot space, as shown in Figure 1. The transformation can be obtained via three calibration procedures: US probe calibration, hand-eye calibration, and TCP (Tool Center Point) calibration.

A biopsy trajectory can be defined from an intra-operatively acquired US image by a target point $p_0 = (p_x, p_y, 0)^T$ and a unit vector $v_0 = (v_x, v_y, 0)^T$ that indicates the direction of the trajectory. To simplify the derivation and expression, the planned trajectory in the image COS $O_i$ is written in the form of a $4 \times 2$ matrix, as:

$$ {}^{i}\Psi = \begin{bmatrix} p_0 & v_0 \\ 1 & 0 \end{bmatrix} \quad (1) $$

The planned trajectory in the robot-base COS is represented by ${}^{b}\Psi$, which is obtained by the following chain of transformations:

$$ {}^{b}\Psi = {}^{b}_{c}T \; {}^{c}_{p}T \; {}^{p}_{i}T \; {}^{i}\Psi \quad (2) $$

where ${}^{b}_{c}T$ represents the homogeneous transformation of the tracking camera COS $O_c$ relative to the robot-base COS $O_b$ and is determined by:

$$ {}^{b}_{c}T = {}^{b}_{f}T \; {}^{f}_{m}T \; {}^{m}_{c}T \quad (3) $$

where ${}^{b}_{f}T$ represents the homogeneous transformation of the flange COS relative to the robot-base COS, and ${}^{m}_{c}T$ is the inverse of ${}^{c}_{m}T$, the homogeneous transformation of the COS of the reference frame on the end effector relative to the tracking camera COS $O_c$.
Similar to the definition of the planned trajectory, the pose of the center line of the guiding tube in the robot-base COS is defined by ${}^{b}\Phi$, given by the two end points $P_1$ and $P_2$ of the center line:

$$ {}^{b}\Phi = \begin{bmatrix} P_1 & P_2 \\ 1 & 1 \end{bmatrix} \quad (4) $$

To realize robotic assistance for needle biopsy, the robot is controlled to a pose in which the center axis of the guiding tube is aligned with the planned trajectory, which can be modeled as:

$$ \frac{P_2 - P_1}{\left\| P_2 - P_1 \right\|} = v_b, \qquad \left( P_1 - p_b \right) \times v_b = 0 \quad (5) $$

where $p_b$ and $v_b$ denote the target point and direction of ${}^{b}\Psi$. The complete system thus requires knowing three spatial transformations: ${}^{p}_{i}T$ is obtained by US probe calibration, ${}^{f}_{m}T$ by hand-eye calibration, and ${}^{m}_{t}T$ by TCP (Tool Center Point) calibration. The accuracy of these spatial calibrations affects the biopsy accuracy. Below, we present details of the three calibration procedures.
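As a concrete illustration, the chain of transformations described above can be sketched in NumPy. All function and variable names here are illustrative assumptions, not part of the actual system software:

```python
import numpy as np

def make_T(R, t):
    """Assemble a 4x4 homogeneous transform from a 3x3 rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def trajectory_to_robot_base(T_b_f, T_f_m, T_c_m, T_c_p, T_p_i, p0, v0):
    """Map a planned trajectory (target p0 and unit direction v0, both with a
    zero z-component in the image COS O_i) into the robot-base COS O_b.

    The trajectory is packed as the 4x2 matrix [p0 v0; 1 0]; the pose of the
    camera in the robot base is recovered from the flange pose T_b_f, the
    hand-eye transform T_f_m, and the tracked marker pose T_c_m.
    """
    Psi_i = np.column_stack([np.append(p0, 1.0), np.append(v0, 0.0)])
    T_b_c = T_b_f @ T_f_m @ np.linalg.inv(T_c_m)
    return T_b_c @ T_c_p @ T_p_i @ Psi_i
```

With all transforms set to the identity the trajectory passes through unchanged, which is a convenient sanity check before plugging in calibrated values.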

US Probe Calibration
${}^{p}_{i}T$ is used to transform a pixel in the 2D US imaging space $O_i$ to the 3D COS $O_p$ of the reference frame attached to the US probe. This transformation matrix is determined by the calibration procedure described below.
To calibrate ${}^{p}_{i}T$, we design a five-wire phantom. The phantom uses five pieces of nylon wire with a diameter of 0.15 mm, as shown in Figure 2. The wires are designed not to be parallel to each other and are submersed in a water tank. During the US probe calibration process, we fix the scanning depth of the US to 5 cm and the focus depth to 3.5 cm, which are selected based on typical clinical scenarios. The COS $O_w$ of the wire phantom is defined by fixing an optical reference frame to the phantom. The transformation ${}^{p}_{i}T$ can be represented as:

$$ {}^{p}_{i}T = {}^{p}_{im}T \; {}^{im}_{i}T_{scale} \quad (6) $$

where the scaling matrix ${}^{im}_{i}T_{scale}$ describes the relationship between the local 2D US image COS $O_i$ and the 3D COS $O_{im}$, which defines the local COS of the plane where the US image is located (see Figure 2 for details); ${}^{p}_{im}T$ is the rigid-body transformation between the 3D COS $O_{im}$ and the 3D COS $O_p$ of the reference frame attached to the US probe.
The scaling matrix ${}^{im}_{i}T_{scale}$ has the form:

$$ {}^{im}_{i}T_{scale} = \begin{bmatrix} s_x & 0 & 0 & s_{tx} \\ 0 & s_y & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \quad (7) $$

where $s_x$ and $s_y$ represent the scaling parameters (mm/pixel) in the x- and y-directions, respectively, and $s_{tx}$ defines the translation between the origins of the local 2D US image COS $O_i$ and the 3D COS $O_{im}$. Multiplying ${}^{im}_{i}T_{scale}$ into ${}^{p}_{im}T$ yields ${}^{p}_{i}T$, which has the form:

$$ {}^{p}_{i}T = \begin{bmatrix} t_{11} & t_{12} & t_{13} & t_{14} \\ t_{21} & t_{22} & t_{23} & t_{24} \\ t_{31} & t_{32} & t_{33} & t_{34} \\ 0 & 0 & 0 & 1 \end{bmatrix} \quad (8) $$

The intersections between the US image plane and the wires are used to derive the transformation ${}^{p}_{i}T$. They are extracted from acquired US images by a semi-automatic point recognition algorithm [27].
Every detected intersection point is expressed as ${}^{i}P = (r, c, 0, 1)^T$, where $r$ and $c$ indicate the location of a pixel at the $r$-th row and $c$-th column of the image. With ${}^{p}_{i}T$, the position of any intersection point can be transformed to the 3D COS $O_p$, as:

$$ {}^{p}P = {}^{p}_{i}T \; {}^{i}P \quad (9) $$

Because the third element of ${}^{i}P$ is zero, (9) can be rewritten as:

$$ {}^{p}P = \begin{bmatrix} G \, \alpha \\ 1 \end{bmatrix} \quad (10) $$

where

$$ G = \begin{bmatrix} r & c & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & r & c & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & r & c & 1 \end{bmatrix} \quad (11) $$

$$ \alpha = \left( t_{11}, t_{12}, t_{14}, t_{21}, t_{22}, t_{24}, t_{31}, t_{32}, t_{34} \right)^T \quad (12) $$

The image-based points $\{{}^{p}P\}$ can be further transformed to the 3D COS $O_w$ via the poses ${}^{c}_{w}T$ of the reference frame attached to the phantom and ${}^{c}_{p}T$ of the reference frame attached to the US probe with respect to the tracking camera:

$$ {}^{w}P = \left( {}^{c}_{w}T \right)^{-1} {}^{c}_{p}T \; {}^{p}P \quad (13) $$

The intersection point lies on a straight wire that is rigidly attached to the phantom and can be modeled in the phantom COS $O_w$ as:

$$ {}^{w}M \; {}^{w}P = 0 \quad (14) $$

where ${}^{w}M$ is a $2 \times 4$ coefficient matrix of a line equation in the phantom COS $O_w$, which can be determined from two known points on the wire. These are obtained by digitizing the two end points of the wire with a tracked pointer. By combining (13) and (14), we have:

$$ {}^{w}M \left( {}^{c}_{w}T \right)^{-1} {}^{c}_{p}T \; {}^{p}P = 0 \quad (15) $$

The point recognition algorithm [27] generates detection points with noise. To account for such detection noise, we compute the calibration parameters $\alpha$ by solving the following optimization problem:

$$ \min_{\alpha} \; \sum_{i} \sum_{j} \left\| {}^{w}_{j}M \left( {}^{c}_{w}T_i \right)^{-1} {}^{c}_{p}T_i \begin{bmatrix} G(r_{i,j}, c_{i,j}) \, \alpha \\ 1 \end{bmatrix} \right\|^2 \quad (16) $$

where $(r_{i,j}, c_{i,j})$ represents the location of the intersection pixel between the $j$-th wire and the $i$-th US image, and ${}^{w}_{j}M$ is the corresponding known coefficient matrix of the $j$-th wire in the phantom COS $O_w$. After obtaining $\alpha$, we compute the transformation ${}^{p}_{i}T$ according to (8) to finish the US probe calibration.
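Because each detected pixel contributes two equations that are linear in the unknown calibration parameters, the optimization above reduces to an ordinary linear least-squares problem. A minimal NumPy sketch follows; the data layout and function names are my own assumptions, not the paper's implementation:

```python
import numpy as np

def g_matrix(r, c):
    """3x9 matrix mapping the parameter vector alpha to the non-homogeneous
    probe-frame coordinates of the pixel at row r, column c."""
    row = np.array([r, c, 1.0])
    G = np.zeros((3, 9))
    G[0, 0:3] = row
    G[1, 3:6] = row
    G[2, 6:9] = row
    return G

def solve_probe_calibration(observations):
    """Least-squares estimate of the nine unknown entries of the image-to-probe
    transform.

    observations: iterable of (r, c, wM, T_w_p) tuples, where wM is the 2x4
    line-coefficient matrix of the intersected wire in the phantom frame and
    T_w_p is the 4x4 probe-to-phantom transform for that image
    (i.e. inv(T_c_w) @ T_c_p from the tracked poses).
    """
    A_rows, b_rows = [], []
    for r, c, wM, T_w_p in observations:
        L = wM @ T_w_p                              # 2x4: line constraint in probe frame
        A_rows.append(L[:, :3] @ g_matrix(r, c))    # linear part acting on alpha
        b_rows.append(-L[:, 3])                     # constant part from the homogeneous 1
    A = np.vstack(A_rows)
    b = np.concatenate(b_rows)
    alpha, *_ = np.linalg.lstsq(A, b, rcond=None)
    return alpha
```

With noise-free synthetic data the true parameters are recovered exactly; with real detections the least-squares solution averages out the pixel noise.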

Hand-Eye Calibration
The hand-eye calibration establishes the spatial transformation between the optical tracking camera and the robot. In this work, it derives the transformation ${}^{f}_{m}T$ between the flange COS $O_f$ and the COS $O_m$ of the reference frame attached to the end effector. A conventional way to solve the hand-eye calibration problem requires solving the AX = XB equation. In this study, instead, we propose a novel hand-eye calibration method that takes advantage of the steady movement of the robot and does not require an additional calibration frame. Specifically, we observe that the orientation of the reference frame changes only if the flange rotates. By controlling the flange to move along two different types of trajectories and by tracking the poses of the reference frame attached to the robot with respect to the tracking camera during the movement, we can compute the rotation matrix ${}^{c}_{b}R$ of the robot-base COS $O_b$ relative to the tracking camera COS $O_c$. By definition, the column vectors of a rotation matrix are the coordinate axes of one COS expressed in another COS. As shown in Figure 4, it is feasible to move the flange along the three coordinate axes of the robot-base COS $O_b$ while keeping the same orientation. Consequently, three line trajectories of the reference frame are recorded by the tracking camera, from which the three column vectors of the rotation matrix ${}^{c}_{b}R$ can be computed. In detail, we compute three unit vectors $r_{tx}$, $r_{ty}$, and $r_{tz}$ from the recorded trajectories, which represent the directions of the three coordinate axes of $O_b$ in the tracking camera COS $O_c$. Hence, the rotation matrix ${}^{c}_{b}R$ can be written as:

$$ {}^{c}_{b}R = \begin{bmatrix} r_{tx} & r_{ty} & r_{tz} \end{bmatrix} \quad (17) $$

Considering potential tracking errors, we decompose (17) with singular value decomposition (SVD) to preserve orthogonality. Writing $\begin{bmatrix} r_{tx} & r_{ty} & r_{tz} \end{bmatrix} = U \Sigma V^T$, the orthogonalized result of (17) is:

$$ {}^{c}_{b}R = U \, \mathrm{diag}\!\left( 1, 1, \mathrm{sign}\!\left( \det\left( U V^T \right) \right) \right) V^T \quad (18) $$

where $\det(\cdot)$ indicates the matrix determinant and $\mathrm{sign}(\cdot)$ is the sign function.
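The construction of this rotation from the three recorded line trajectories can be sketched as follows. This is a NumPy illustration under my own naming: the line directions are fitted by PCA on the tracked points, and the SVD projection restores orthogonality as described above:

```python
import numpy as np

def fit_axis_direction(points):
    """Unit direction of a straight-line trajectory via PCA on the tracked points."""
    pts = np.asarray(points, float)
    centered = pts - pts.mean(axis=0)
    _, _, Vt = np.linalg.svd(centered)
    d = Vt[0]
    # Orient the axis along the order of motion so its sign is well defined.
    if d @ (pts[-1] - pts[0]) < 0:
        d = -d
    return d

def camera_to_base_rotation(traj_x, traj_y, traj_z):
    """Stack the three fitted axis directions as columns, then project the
    result onto the nearest rotation matrix with an SVD."""
    A = np.column_stack([fit_axis_direction(t) for t in (traj_x, traj_y, traj_z)])
    U, _, Vt = np.linalg.svd(A)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    return U @ D @ Vt
```

The determinant correction in the last step guarantees a proper rotation (determinant +1) rather than a reflection, even when tracking noise perturbs the fitted axes.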
Then, the rotation matrix ${}^{f}_{m}R$ can be obtained through a chain of spatial transformations:

$$ {}^{f}_{m}R_i = \left( {}^{b}_{f}R_i \right)^{T} \left( {}^{c}_{b}R \right)^{T} {}^{c}_{m}R_i \quad (19) $$

where the subscript $i$ indicates the $i$-th point in the movement trajectories, ${}^{b}_{f}R_i$ is the orientation of the flange reported by the robot, and ${}^{c}_{m}R_i$ is the orientation matrix of the reference frame attached to the robot with respect to the tracking camera.
Following (19), each point on the trajectories gives a different ${}^{f}_{m}R_i$ when tracking errors are taken into consideration. We define a $3 \times 9$ matrix ${}^{f}_{m}M_i$ from the column vectors $r_{mx_i}$, $r_{my_i}$, and $r_{mz_i}$ of ${}^{f}_{m}R_i$, as well as a $9 \times 1$ column vector $\beta$ from the column vectors $r_{mx}$, $r_{my}$, and $r_{mz}$ of the rotation matrix ${}^{f}_{m}R$. Because a rotation matrix is orthogonal, we further refine the hand-eye rotation by a least-squares fitting:

$$ \min_{\beta} \; \sum_i \left\| {}^{f}_{m}M_i \, \beta - \mathbf{1} \right\|^2 \quad (20) $$

where

$$ {}^{f}_{m}M_i = \begin{bmatrix} r_{mx_i}^T & 0 & 0 \\ 0 & r_{my_i}^T & 0 \\ 0 & 0 & r_{mz_i}^T \end{bmatrix}, \qquad \beta = \begin{bmatrix} r_{mx} \\ r_{my} \\ r_{mz} \end{bmatrix} \quad (21) $$

We then obtain the rotation matrix ${}^{f}_{m}R$ from $\beta$ and preserve its orthogonality by using SVD.
After obtaining the rotation matrix ${}^{f}_{m}R$, we can compute the rotation matrix ${}^{c}_{b}R$ at any time point. We still need to compute the translation vector ${}^{f}_{m}t$, which represents the offset of the origin of the COS $O_m$ relative to the flange COS $O_f$. This is done by controlling the robot such that the flange rotates around its own origin while a fixed relationship between the camera and the robot base is maintained during the movement. Considering two different poses indexed by $i$ and $j$ in the rotational trajectory, we have:

$$ {}^{c}_{m}T_i \left( {}^{f}_{m}T \right)^{-1} \left( {}^{b}_{f}T_i \right)^{-1} = {}^{c}_{m}T_j \left( {}^{f}_{m}T \right)^{-1} \left( {}^{b}_{f}T_j \right)^{-1} = {}^{c}_{b}T \quad (22) $$

As we are only interested in the translational part, we can decompose the homogeneous transformations according to the block structure of the matrices to obtain:

$$ {}^{c}_{m}t_i = {}^{c}_{b}R \left( {}^{b}_{f}R_i \; {}^{f}_{m}t + {}^{b}_{f}t \right) + {}^{c}_{b}t \quad (23) $$

In deriving the above equations, as shown in Figure 5, we use the properties (1) that the flange is rotated around its origin, so ${}^{b}_{f}t$ is constant, and (2) that the relationship between the camera and the robot base is fixed, so ${}^{c}_{b}R$ and ${}^{c}_{b}t$ are constant. With a simple mathematical manipulation, we have:

$$ {}^{c}_{b}R \left( {}^{b}_{f}R_i - {}^{b}_{f}R_j \right) {}^{f}_{m}t = {}^{c}_{m}t_i - {}^{c}_{m}t_j \quad (24) $$

In the above equation, we estimate ${}^{f}_{m}t$ while all other elements are either known or can be retrieved from the corresponding device's API. As with the rotation, we improve the translation estimate by a least-squares fitting over all pose pairs.
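The translation recovery above stacks all pose pairs into a single linear system. A minimal NumPy sketch, with illustrative names of my own choosing:

```python
import numpy as np

def flange_to_marker_translation(R_c_b, flange_rotations, marker_translations):
    """Least-squares estimate of the flange-to-marker offset.

    R_c_b: camera-to-base rotation (constant during the motion).
    flange_rotations: list of flange orientations in the robot base, recorded
        while the flange rotates about its own origin.
    marker_translations: list of marker origins seen by the tracking camera
        at the same poses.
    Pairwise differences cancel the constant flange and camera offsets,
    leaving a linear system in the unknown translation.
    """
    A_rows, b_rows = [], []
    n = len(flange_rotations)
    for i in range(n):
        for j in range(i + 1, n):
            A_rows.append(R_c_b @ (flange_rotations[i] - flange_rotations[j]))
            b_rows.append(marker_translations[i] - marker_translations[j])
    A = np.vstack(A_rows)
    b = np.concatenate(b_rows)
    t, *_ = np.linalg.lstsq(A, b, rcond=None)
    return t
```

At least three poses with rotations about distinct axes are needed for the stacked system to have full rank; additional poses average out tracking noise.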

TCP Calibration
We need to conduct the TCP calibration in order to realize closed-loop vision control of the pose of the guide under the tracking camera. The TCP calibration is a procedure to estimate the transformation ${}^{m}_{t}T$ of the COS $O_t$ defined on the guiding tube relative to the COS $O_m$ of the reference frame attached to the end effector. In this calibration procedure, three COSs are utilized: the tracking camera COS $O_c$, the local COS $O_t$ of the guiding tube, and the COS $O_m$, as shown in Figure 6. The local COS $O_t$ of the guiding tube can be determined by three points, where $P_1$ and $P_2$ are the two end points of the center axis of the guiding tube, and $P_3$ is a point on the guide. To determine the two end points, plugs with a sharp indent are designed and inserted into the guiding tube. We then obtain the positions of these three points by pivoting a tracked pointer at the corresponding indents.
In the local COS $O_t$, the origin is defined by the point $P_2$, the z-axis is determined by $P_1$ and $P_2$, and the x-z plane is the plane containing the three points. The coordinate axes can be modeled from the three points, where $P_1$, $P_2$, and $P_3$ are all column vectors. Let $a_{12}$ be the unit vector from $P_2$ to $P_1$ and $a_{13}$ the unit vector from $P_1$ to $P_3$. We obtain the homogeneous transformation ${}^{c}_{t}T$ from the origin and coordinate axes as:

$$ {}^{c}_{t}T = \begin{bmatrix} r_x & r_y & r_z & P_2 \\ 0 & 0 & 0 & 1 \end{bmatrix} \quad (25) $$

where

$$ \begin{cases} r_x = \left( a_{13} \times a_{12} \right) \times a_{12} \\ r_y = a_{13} \times a_{12} \\ r_z = a_{12} \end{cases} \quad (26) $$

with $r_x$ and $r_y$ normalized to unit length. We then combine ${}^{c}_{t}T$ with the pose ${}^{c}_{m}T$ of the reference frame to obtain the transformation ${}^{m}_{t}T$ as:

$$ {}^{m}_{t}T = \left( {}^{c}_{m}T \right)^{-1} {}^{c}_{t}T \quad (27) $$
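The construction of the guiding-tube frame from the three digitized points can be sketched in NumPy as follows; the orientation conventions chosen for the two auxiliary unit vectors are my reading of the text, not a verified detail of the original implementation:

```python
import numpy as np

def tube_frame_in_camera(P1, P2, P3):
    """Build the 4x4 pose of the guiding-tube COS O_t in the camera COS.

    Origin at P2, z-axis along the tube axis from P2 toward P1, and the x-z
    plane chosen to contain P3 (assumed sign conventions).
    """
    P1, P2, P3 = (np.asarray(p, float) for p in (P1, P2, P3))
    a12 = (P1 - P2) / np.linalg.norm(P1 - P2)   # tube axis direction
    a13 = (P3 - P1) / np.linalg.norm(P3 - P1)   # toward the third point
    r_z = a12
    r_y = np.cross(a13, a12)
    r_y /= np.linalg.norm(r_y)                  # normal of the three-point plane
    r_x = np.cross(r_y, r_z)                    # completes a right-handed frame
    T = np.eye(4)
    T[:3, 0], T[:3, 1], T[:3, 2], T[:3, 3] = r_x, r_y, r_z, P2
    return T
```

The resulting rotation is orthonormal and right-handed by construction, and the digitized point $P_3$ has a zero y-component in the tube frame, which matches the x-z plane condition.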

Performance Evaluation
For the robot-assisted needle biopsy, the target point $p_p$ and the direction vector $v_p$ of the trajectory are planned in an acquired US image. By the transformation ${}^{p}_{i}T$, they are transformed from the US imaging space into physical space. Hence, the results of the spatial calibrations affect the system performance. The US probe calibration affects the recognition and reconstruction of the planned trajectory, while the hand-eye calibration and the TCP calibration affect the accuracy of the robot control.
Accuracy evaluation of the US probe calibration is conducted by comparing reconstructed points, lines, and planes with the corresponding ground truth. With the aid of the tracking camera, the detected points in the US images are reconstructed in the phantom space. The deviation between the recognized points and the digitized wire, which is used as the ground truth, and the incline angle between the fitted line and the digitized wire are used to evaluate the calibration. For the robotic system, the performance is quantified by the deviations between the actual path $p_a$ and the planned biopsy trajectory. The deviations consist of the incline angle $e_\theta$ (unit: °) and the distance $e_d$ (unit: mm) between the planned target point and the biopsy path, as shown in Figure 7.
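The two deviation metrics can be computed directly from the planned trajectory and two points on the actual path. A small NumPy helper, with illustrative naming:

```python
import numpy as np

def path_deviation(p_target, v_planned, q1, q2):
    """Deviation metrics between a planned trajectory and an actual path.

    Returns the incline angle e_theta (degrees) between the planned direction
    v_planned and the actual path through q1 and q2, and the distance e_d
    (same length unit as the inputs, e.g. mm) from the planned target point
    p_target to the actual path line.
    """
    v_planned = np.asarray(v_planned, float)
    v_planned = v_planned / np.linalg.norm(v_planned)
    d = np.asarray(q2, float) - np.asarray(q1, float)
    d = d / np.linalg.norm(d)
    # Absolute value makes the angle independent of the path's digitizing order.
    cosang = np.clip(abs(v_planned @ d), -1.0, 1.0)
    e_theta = np.degrees(np.arccos(cosang))
    # Point-to-line distance via the cross product with the unit direction.
    w = np.asarray(p_target, float) - np.asarray(q1, float)
    e_d = np.linalg.norm(np.cross(w, d))
    return e_theta, e_d
```

Clipping the cosine before `arccos` avoids NaNs from floating-point values marginally above 1 when the two directions are nearly identical.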

Validation of the US Probe Calibration
As shown in Figure 8, a plane-wire phantom was designed to verify the US probe calibration. Five longitudinal wires (LWs) and five transverse wires (TWs) were woven on a supporting frame, which was submerged in a water tank. The diameter of the wires was 0.15 mm. The span distance between the parallel wires was about 10 mm. We used the same semi-automatic point recognition algorithm [27] as in the probe calibration to recognize the intersection points between the US image plane and the validation wire phantom, represented as sets of pixels. We also established the line equations of the validation wire phantom using a tracked pointer, which served as the ground truth.

Validation of Hand-Eye Calibration
A plastic phantom fabricated by 3D printing was used to validate the hand-eye calibration and the TCP calibration. The phantom had dimensions of 140 × 90 × 85 mm³ and was designed with 5 × 5 drilling trajectories. As shown in Figure 9, the locations of the drilling trajectories inside the plastic phantom were coded in alpha-numeric form. The robot was controlled to align a ϕ4 mm drill bit with the planned trajectory. After drilling, a tracked pointer was used to digitize the drilled paths.

Blueberry Biopsy Experiments
We designed biopsy experiments on a blueberry submerged in a water tank, as shown in Figure 10. The target blueberry had a size of ϕ14.5 mm × 9.6 mm, and the biopsy needle had a diameter of 0.8 mm. We divided the water tank into 3 × 2 blocks and fixed the blueberry in the lower four blocks to simulate deep-seated lesions. Moreover, the incline angle of the planned trajectory was varied over the range of 30° to 60°. The biopsy path was tracked in real time by the ultrasound system; thus, the biopsy accuracy was quantified by path deviations.

Tumor Phantom Biopsy Experiments
We further conducted biopsy experiments on a soft tumor phantom (LYDMED, China) to validate the potential of the proposed system for tumor biopsy. The soft tumor phantom is made of silicone rubber and has a size of 150 × 120 × 80 mm³, as shown in Figure 11. A simulated tumor with a diameter of about 10 mm is embedded inside the soft phantom. In addition, an optical reference frame was fixed to the phantom. The planned trajectories in the COS of the reference frame were obtained using the method introduced in [28] and treated as the ground truth. A needle with a diameter of 0.8 mm was inserted into the phantom through the guide and kept inside the phantom. We repeated the same procedure six times, each time planning different target points and aiming trajectories. After needle insertion, we obtained a CT scan of the phantom. The biopsy accuracy was then measured in the 3D CT imaging space.

US Probe Calibration
During the US probe calibration, we acquired 110 frames of US images, of which 90 were used as the data set to derive the transformation ${}^{p}_{i}T$, and the remaining 20 were used as the test data to evaluate the calibration accuracy. We used the test set to reconstruct the five-wire phantom. The distance between the detected points and the adjacent wires and the incline angle of the reconstructed lines are presented in Table 1. An average incline angle of 0.3° and an average distance of 0.85 mm were found.

Validation of US Probe Calibration
For the US probe calibration validation, 176 frames of images were acquired, and 26,868 intersection points were detected from these images and used to reconstruct the plane phantom, as shown in Figure 12. The incline angle of the normal vector of the fitted plane was 0.50°. The mean distance between the detected points and the corresponding wires and the mean incline angle of the fitted lines are presented in Table 2. From this table, one can see that our US probe calibration method achieved sub-millimeter and sub-degree accuracy, which is accurate enough for our applications.

Validation of Hand-Eye Calibration
In the hand-eye calibration validation experiments, the distance and the incline angle between the drilled path and the planned trajectory are presented in Table 3. Specifically, we found a mean distance deviation of 0.33 mm and a maximum distance deviation of 0.67 mm. The mean and maximum incline angles were 1.03° and 2.44°, respectively. The relatively large angular error might be caused by vibration of the guide during drilling.

Blueberry Biopsy Experiments
As shown in Figure 13, we quantified the deviations of the targets and the trajectories with the blueberry submerged in different blocks of the water tank. The experimental results of the 72 biopsies on a blueberry are presented in Table 4. An average distance error of 0.74 mm and an average angular error of 1.10° were found. Throughout the 72 biopsies, the success rate was 100%.

Tumor Phantom Biopsy Experiments
The overall system performance was evaluated by needle biopsy on a tumor phantom. Results of the tumor phantom experiment are presented in Table 5. The success rate of the needle biopsy into the tumor was 100%. An average distance error of 1.71 mm and an average angular error of 1.0° were found. We attribute the relatively large distance errors to the elastic deformation of the biopsy needle during insertion. Nonetheless, the achieved accuracy is sufficient for the target applications and is better than the results achieved by most state-of-the-art (SOTA) methods [2,3,8,13,29]. For US probe calibration, we compared the reconstruction accuracy with SOTA methods using other types of phantoms, including the method introduced by Wen et al. [15], the method based on an eight-wire phantom [17], the method based on an N-wire phantom [13], the method based on a pyramid phantom [14], and the method based on a Z-wire phantom [9]. In terms of mean reconstruction accuracy, our method achieved the best result. Table 6 shows the comparison results.

Table 6. Comparison with other SOTA US probe calibration methods.

Method                 Phantom Type                   Mean Accuracy
Wen et al. [15]        Combined phantom and stylus    0.71 mm
Ahmad et al. [17]      Eight-wire phantom             1.67 mm
Carbaja et al. [13]    N-wire phantom                 1.18 mm
Lindseth et al. [14]   Pyramid phantom                0.80 mm
Hsu et al. [9]         Z-wire phantom                 0.70 mm
Ours                   Five-wire phantom              0.62 mm

Additionally, we compared our method with other SOTA biopsy methods, including the method introduced by Tanaiutchawoot et al. [30], the method introduced by Treepong et al. [29], and the method introduced by Chevrie et al. [31]. Table 7 shows the comparison results, including the exact phantom type, the achieved accuracy, and the biopsy success rate of each method. From this table, one can see that our method achieved the best results in terms of both accuracy and biopsy success rate.

Discussion
Previous studies of needle biopsy have emphasized the applications of fluoroscopy and CT as imaging modalities [32,33]. Compared with these modalities, US has a major advantage in that it poses no ionizing radiation risk to either the patient or the staff. In addition, robotic systems have the advantage of ensuring stability and accuracy [30,34]. Taking advantage of an ultrasound system and a robot arm, we developed and validated a robot-assisted system for safe needle biopsy.
Three spatial calibration methods, namely US probe calibration, hand-eye calibration, and TCP calibration, were developed for the robot-assisted biopsy system to realize rapid patient-image-robot registration. We validated the US probe calibration by reconstruction analysis of wire phantoms. Our method achieved a higher accuracy than previously reported results [13,15,16,35]. Different from previous works [10,12,17], our US probe calibration does not depend on known geometric parameters, which makes the calibration phantom easier to manufacture. We further validated the combination of the hand-eye calibration and the TCP calibration by drilling experiments.
The proposed hand-eye calibration method is worth discussing. Our method does not need to solve the equation AX = XB as required by previously introduced hand-eye calibration methods [36]. In comparison with methods depending on iterative solutions [24,25] or probabilistic models [22,37], our method is much faster. It also eliminates the requirement of an additional calibration frame as in [19,20]. The hand-eye transformation is derived from the movement trajectories of the reference frame attached to the end effector, taking advantage of the steady movement of the robot.
There are limitations to our study. First, we did not consider the influence of respiratory motion, which may degrade the performance of the proposed system. Second, the accuracy of the proposed system was affected by the elastic deformation and friction of the target object, which conforms with the findings reported in [31]. Nonetheless, results from our comprehensive experiments demonstrated that the proposed robot-assisted system can achieve sub-millimeter accuracy.

Conclusions
In this paper, we developed a robot-assisted system for ultrasound-guided needle biopsy. Specifically, based on a high-precision optical tracking system, we proposed novel methods for US probe calibration as well as for robot hand-eye calibration. Our US probe calibration method is based on a five-wire phantom and achieved sub-millimeter and sub-degree accuracy. We additionally proposed an effective method for robot hand-eye calibration that takes advantage of the steady movement of the robot without the need to solve the AX = XB equation. We conducted comprehensive experiments to evaluate the efficacy of the different calibration methods as well as the overall system accuracy. The results demonstrate that the proposed robot-assisted system has great potential in various clinical applications.

Conflicts of Interest:
The authors declare no conflict of interest.

Abbreviations
The following abbreviations are used in this manuscript:

2D      Two-dimensional
3D      Three-dimensional
COS     Coordinate system
CT      Computed tomography
MRI     Magnetic resonance imaging
PET     Positron emission tomography
SOTA    State of the art
SPECT   Single-photon emission computed tomography
SVD     Singular value decomposition
TCP     Tool Center Point
US      Ultrasound