Article

A Novel Identification Methodology for the Coordinate Relationship between a 3D Vision System and a Legged Robot

State Key Laboratory of Mechanical System and Vibration, Shanghai Jiao Tong University, Shanghai 200240, China
* Author to whom correspondence should be addressed.
Sensors 2015, 15(4), 9519-9546; https://doi.org/10.3390/s150409519
Submission received: 9 March 2015 / Revised: 6 April 2015 / Accepted: 14 April 2015 / Published: 22 April 2015
(This article belongs to the Section Physical Sensors)

Abstract

Coordinate identification between vision systems and robots is quite a challenging issue in the field of intelligent robotic applications, involving steps such as perceiving the immediate environment, building the terrain map and planning the locomotion automatically. It is now well established that current identification methods have non-negligible limitations such as difficult feature matching, the requirement of external tools and the intervention of multiple people. In this paper, we propose a novel methodology to identify the geometric parameters of 3D vision systems mounted on robots without involving other people or additional equipment. In particular, our method focuses on legged robots, which have complex body structures and excellent locomotion ability compared to their wheeled/tracked counterparts. The parameters can be identified simply by moving the robot on relatively flat ground. Concretely, an estimation approach is provided to calculate the ground plane. In addition, the relationship between the robot and the ground is modeled. The parameters are obtained by formulating the identification problem as an optimization problem. The methodology is integrated on a legged robot called “Octopus”, which can traverse rough terrain with high stability after obtaining the identification parameters of its mounted vision system using the proposed method. Diverse experiments in different environments demonstrate that our novel method is accurate and robust.

1. Introduction

The use of sensors, especially vision sensors and force sensors, which provide robots with their sensing ability, plays an important role in the intelligent robotic field. Legged robots, after acquiring a good knowledge of the on-site environment, can select a safe path and a set of appropriate footholds in advance, and then plan the foot and body motions effectively in order to traverse rough terrain automatically with high stability and velocity and low energy consumption. There have been several related examples in recent years. HyQ [1] can trot on uneven ground based on its vision-enhanced reactive locomotion control scheme, using an IMU and a camera. Messor [2] uses a Kinect to classify various terrains in order to achieve automatic walking on different terrains. The DLR Crawler [3] can navigate in unknown rough terrain using a stereo camera. LittleDog [4] uses a stereo camera and the ICP algorithm to build the terrain model. Messor [5] uses a laser range finder to build the elevation map of rough terrains, and chooses appropriate foothold points based on the elevation map. A planetary exploration rover [6] builds a map model with a LIDAR sensor based on the objects’ distances and plans an optimized path. AMOS II [7] uses a 2D laser range finder to detect the distance to obstacles and gaps in front of the robot, and it can also classify terrains based on the detected data. A humanoid robot [8] uses a 3D TOF camera and a webcam to build a digital map, and then plans a collision-avoiding path. Another humanoid robot [9] can walk along a collision-avoiding path based on fuzzy logic theory with the help of a webcam. The RHex robot [10] is able to achieve reliable 3D sensing and locomotion planning with a stereo camera and an IMU mounted on it.
In robotics, if a vision sensor is mounted on a robot, its pose with respect to the robot frame must be known; otherwise the vision information cannot be used by the robot. However, only a few works describe how to compute it. In related fields, the extrinsic calibration of two or more vision sensors has been studied extensively. Herrera [11] proposed an algorithm that calibrates the intrinsic parameters and the relative position of a color camera and a depth camera at the same time. Li et al. [12] used straight-line features to identify the extrinsic parameters of a camera and a LRF. Guo et al. [13] solved the identification problem of a LRF and a camera by using the least squares method twice. Geiger et al. [14] presented a method which can automatically identify the extrinsic parameters of a camera and a range sensor using a single shot. Pandey and McBride [15] successfully performed an automatic targetless extrinsic calibration of a LRF and a camera by maximizing the mutual information. Zhang and Pless [16] proposed a theoretical algorithm for calibrating the extrinsic parameters of a camera and a LRF by using a chessboard, and they also verified the theory by experiments. Huang et al. [17] calibrated the extrinsic parameters of a multi-beam LIDAR system by using V-shaped planes and infrared images. Fernández-Moral et al. [18] presented a method for identifying the extrinsic parameters of a set of range cameras in 5 s by finding and matching planes. Kwak [19] used a V-shaped plane as the target to calibrate the extrinsic parameters of a LIDAR and a camera by minimizing the distance between corresponding features. By using a spherical mirror, Agrawal [20] could obtain the extrinsic calibration parameters of a camera without a direct view. For two vision sensors without overlapping detection regions, Lébraly et al. [21] obtained the extrinsic calibration parameters using a planar mirror. By using a mirror to observe the environment from different viewing angles, Hesch et al. [22] determined the extrinsic identification parameters of a camera and other fixed frames. Zhou [23] proposed a solution for the extrinsic calibration of a 2D LIDAR and a camera using three plane-line correspondences. Kelly [24] used GPS measurements to establish the scale of both the scene and the stereo baseline, which could be used to achieve simultaneous mapping.
A more closely related kind of work is the coordinate identification between a vision system and manipulators. Wang [25] proposed three methods to identify the coordinate systems of manipulators and a vision sensor, and compared them by simulations and experiments. Strobl [26] proposed an optimized robot hand-eye calibration method. Dornaika and Horaud [27] presented two solutions to perform the robot-world and hand-eye calibration simultaneously: one was a closed-form method which used the quaternion algebra and a positive quadratic error function; the other was based on a nonlinear constrained minimization. They found that the nonlinear optimization method was more stable with respect to noise and measurement errors. Wongwilai et al. [28] used a SoftKinetic DepthSense, which can acquire distance images directly, to calibrate an eye-in-hand system.
Few papers involve identifying the coordinate relationship between a vision system and legged robots. The most similar and recent work to our own is that of Hoepflinger [29], which calibrated the pose of an RGB-D camera with respect to a legged robot. Their method needed to recognize the foot position in the camera coordinate system based on the assumption that the robot’s foot has a specific color and shape. The identification parameters were then obtained by comparing the foot position in the two coordinate systems, the camera frame and the robot frame. Our research target is the same as theirs, while the solution is totally different.
Existing methods to identify the extrinsic parameters of a vision sensor suffer from several disadvantages, such as difficult feature matching or recognition, the requirement for external equipment and the involvement of human intervention. Current identification approaches are often elaborate procedures. Moreover, little work has been done on the pose identification of vision sensors mounted on legged robots. To overcome the limitations of the existing methods and supplement relevant studies on legged robots, in this paper we propose a novel coordinate identification methodology for a 3D vision system mounted on a legged robot without involving other people or additional equipment. This paper makes the following contributions:
  • A novel coordinate identification methodology for a 3D vision system of a legged robot is proposed, which needs no additional equipment or human intervention.
  • We use the ground as the reference target, which makes it possible for our methodology to be widely used. At the same time, an estimation approach based on optimization and statistical methods is introduced to calculate the ground plane accurately.
  • The relationship between the legged robot and the ground is modeled, which can be used to precisely obtain the pose of the legged robot with respect to the ground.
  • We integrate the proposed methodology on “Octopus”, which can traverse rough terrains after obtaining the identification parameters. Various experiments are carried out to validate the accuracy and robustness of the method.
The remainder of this paper is organized as follows: Section 2 provides a brief introduction to the robot system. Section 3 describes the problem formulation and the definition of coordinate systems. Section 4 presents the modeling and the method in detail. Section 5 describes the experiments and discusses the error and robustness analysis results. Section 6 presents a use case, and Section 7 summarizes and concludes the paper.

2. System Description

The legged robot is called “Octopus” [30,31]; it has a hexagonal body with six identical legs arranged in a diagonally symmetrical way around the body, as shown in Figure 1. The robot is a six-DOF mobile platform that integrates walking and manipulation. A vision system is necessary for building a terrain map, and its mounting position and orientation with respect to the robot frame, which are essential for locomotion planning, need to be acquired.
Figure 1. The legged robot “Octopus”.
Figure 2 shows the control architecture of the robot. Users send commands to the upper computer via a control terminal, which can be a smartphone or a tablet and communicates with the upper computer via Wi-Fi. The sensor system contains a 3D vision sensor, a gyro, a compass and an accelerometer. The 3D vision sensor detects the terrain in front of the robot and provides 3D coordinate data; it is connected to the upper computer via USB. The compass helps the robot navigate in the right direction in outdoor environments. The gyro and the accelerometer measure the inclination, angular velocity and linear acceleration of the robot. The upper computer is a notebook computer, which receives and processes the data from the sensor system. The upper computer also sends instructions to the lower computer via Wi-Fi; the Wi-Fi network is created by the upper computer. The lower computer runs a real-time Linux OS; it analyzes the messages sent by the upper computer, plans the locomotion and sends the planned data to the drivers via Ethernet at run time. The drivers supply current to the motors and perform servo control using the feedback data from the resolvers.
Figure 2. The control architecture of the robot.
Our current work aims to make the robot walk and operate automatically in unknown environments with the help of the 3D vision sensor. Automatic locomotion planning needs the 3D coordinates of the surroundings, which can be derived from depth images captured by the 3D vision sensor. Common laser range finders can only measure distances to objects located in the laser line of sight, while a 3D vision sensor can measure the distances to all objects within its detection region, which is why we chose a 3D vision sensor. The 3D vision sensor we use is a Kinect (as Figure 3 shows), which integrates multiple kinds of useful sensors: an RGB camera, an infrared emitter and camera, and four microphones. The RGB camera captures 2D RGB images, while the infrared emitter and camera constitute a 3D depth sensor which measures distance. Speech recognition and sound source localization can also be achieved by processing the voice messages obtained by the four microphones.
Figure 3. The 3D vision sensor.
Equipped with the 3D vision sensor, the robot can see objects from 0.8 m to 4 m and has a 57.5° horizontal and a 43.5° vertical field of view. The range from 1.2 m to 3.5 m is a sweet spot, in which the measuring precision can reach millimeter level [32,33]. Additionally, a small motor inside the 3D vision sensor allows it to tilt up and down from −27° to 27°. The 3D vision sensor is installed at the top of the robot, as Figure 4 shows. The motor is driven to tilt the 3D vision sensor down in order to ensure that it can detect the terrain in front. The blue area is the region that the 3D vision sensor can detect, and the green area is the sweet spot. The height of the 3D vision sensor, denoted by h, is about 1 m. By geometric calculation, the short border VA of the green area is about 1.2 m and the long border VB is about 3.5 m, so the depth data in the green area have a higher precision.
Figure 4. Installation schematic diagram of the 3D vision system.

3. Problem Formulation and Definition of Coordinate Systems

As mentioned above, it is very important to know the exact relationship between the 3D vision sensor coordinate system and the robot coordinate system. In other words, the mounting position and orientation of the 3D vision sensor must be identified. For brevity, we use G-CS as a short notation for the ground coordinate system, R-CS for the robot coordinate system, and V-CS for the 3D vision sensor coordinate system. As Figure 5 shows, the G-CS is represented by $O_G X_G Y_G Z_G$ and is used as the reference target with respect to the V-CS, represented by $O_V X_V Y_V Z_V$, and the R-CS, represented by $O_R X_R Y_R Z_R$.
Figure 5. Definition of coordinate systems.
The transformation matrix $T_R^G$ in Figure 5 describes the position and orientation of the R-CS with respect to the G-CS. Similarly, the identification matrix $T_V^R$ describes the position and orientation of the V-CS with respect to the R-CS, which can be denoted by the X-Y-Z fixed angles of the R-CS. Concretely, with the R-CS held fixed, the V-CS first rotates by $\gamma$ about the $X_R$-axis, then by $\beta$ about the $Z_R$-axis, and then by $\alpha$ about the $Y_R$-axis, and finally translates by $p_x, p_y, p_z$ along the $X_R$-, $Y_R$- and $Z_R$-axes, respectively, which yields the current V-CS. Table 1 shows the identification parameters, and our goal is to determine these six parameters. The 3D coordinates of the terrain obtained by the vision sensor can then be transformed into the R-CS using the identification matrix $T_V^R$.
Table 1. The identification parameters.
Fixed Axes | Identification Angles | Identification Positions
$X_R$ | $\gamma$ | $p_x$
$Y_R$ | $\alpha$ | $p_y$
$Z_R$ | $\beta$ | $p_z$
Equation (1) describes $T_V^R$ in detail:
$T_V^R(\alpha, \beta, \gamma, p_x, p_y, p_z) = \begin{bmatrix} R_V^R & P_V^R \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} t_{11} & t_{12} & t_{13} & t_{14} \\ t_{21} & t_{22} & t_{23} & t_{24} \\ t_{31} & t_{32} & t_{33} & t_{34} \\ 0 & 0 & 0 & 1 \end{bmatrix}$
where:
$R_V^R = R_{Y_R}(\alpha)\, R_{Z_R}(\beta)\, R_{X_R}(\gamma) = \begin{bmatrix} \cos\alpha\cos\beta & -\cos\alpha\sin\beta\cos\gamma + \sin\alpha\sin\gamma & \cos\alpha\sin\beta\sin\gamma + \sin\alpha\cos\gamma \\ \sin\beta & \cos\beta\cos\gamma & -\cos\beta\sin\gamma \\ -\sin\alpha\cos\beta & \sin\alpha\sin\beta\cos\gamma + \cos\alpha\sin\gamma & -\sin\alpha\sin\beta\sin\gamma + \cos\alpha\cos\gamma \end{bmatrix}$
$P_V^R = \begin{bmatrix} p_x & p_y & p_z \end{bmatrix}^T$
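As a concrete illustration, the sketch below (Python with NumPy; the function names are ours, not from the paper) assembles $T_V^R$ from the six identification parameters following Equations (1)–(3), using the Y-Z-X composition of Equation (2).

```python
import numpy as np

def rot_x(gamma):
    c, s = np.cos(gamma), np.sin(gamma)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(alpha):
    c, s = np.cos(alpha), np.sin(alpha)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(beta):
    c, s = np.cos(beta), np.sin(beta)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def identification_matrix(alpha, beta, gamma, px, py, pz):
    """Equation (1): rotate by gamma about X_R, then beta about Z_R, then alpha
    about Y_R (fixed axes), and finally translate by (px, py, pz)."""
    T = np.eye(4)
    T[:3, :3] = rot_y(alpha) @ rot_z(beta) @ rot_x(gamma)   # Equation (2)
    T[:3, 3] = [px, py, pz]                                 # Equation (3)
    return T
```

For instance, evaluating this helper with the values of Equation (32) (angles converted to radians, positions in millimeters) should reproduce the matrix of Equation (33) up to rounding.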

4. Proposed Identification Methodology

Section 4 presents the novel identification model and method in detail.
Figure 6. Modeling of the method.
As Figure 6 shows, $P(x, y, z)$ is an arbitrary point on the ground plane; ${}^V P$ denotes its coordinates with respect to the V-CS and ${}^G P$ its coordinates with respect to the G-CS. ${}^V P$ and ${}^G P$ fulfill Equation (4):
${}^G P = T_R^G\, T_V^R\, {}^V P$
where $T_V^R$ is the identification matrix proposed in Section 3, and $T_R^G$ is the transformation matrix from the R-CS to the G-CS. ${}^V P$ can be detected by the 3D vision system and fulfills the standard plane Equation (5):
${}^V a\, {}^V x + {}^V b\, {}^V y + {}^V c\, {}^V z + {}^V d = 0$
where ${}^V a$, ${}^V b$ and ${}^V c$ fulfill ${}^V a^2 + {}^V b^2 + {}^V c^2 = 1$. The upper-left mark V in Equation (5) denotes that the variables are with respect to the V-CS. ${}^G P$, which is with respect to the G-CS, fulfills the standard plane Equation (6):
${}^G a\, {}^G x + {}^G b\, {}^G y + {}^G c\, {}^G z + {}^G d = 0$
where ${}^G a$, ${}^G b$ and ${}^G c$ fulfill ${}^G a^2 + {}^G b^2 + {}^G c^2 = 1$. The upper-left mark G in Equation (6) denotes that the variables are with respect to the G-CS. In our work, the ground fulfills the plane Equation (7):
${}^G y = 0$
The term $T_V^R$ can be computed by solving the constraint Equation (4). $T_R^G$, representing the relationship between the robot and the ground, can be obtained using the model presented in Section 4.2. In our methodology, $T_V^R$ is not computed by recognizing specific points P. Instead, we estimate the ground plane from the point cloud detected by the 3D vision system, and then an algorithm, presented in Section 4.3, formulates the identification problem as an optimization problem. This modeling reduces recognition errors and avoids measurement errors.
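As a quick illustration of Equation (4), the sketch below (Python with NumPy; the helper name is ours) maps a point detected in the V-CS into the G-CS using homogeneous coordinates; for a true ground point the resulting ${}^G y$ should be close to zero, in line with Equation (7).

```python
import numpy as np

def v_to_g(point_v, T_R_G, T_V_R):
    """Equation (4): transform a 3D point from the V-CS to the G-CS.
    T_R_G and T_V_R are 4x4 homogeneous transformation matrices."""
    p = np.append(np.asarray(point_v, float), 1.0)   # homogeneous coordinates
    return (T_R_G @ T_V_R @ p)[:3]

# For a point lying on the ground plane, the y component of the result
# should satisfy Equation (7), i.e., be approximately zero.
```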

4.1. Estimation of the Ground Plane

${}^V d_i$ in Equation (8) is the distance from detected point i to the ground plane, as Figure 7 shows:
${}^V d_i = \left| {}^V a\, {}^V x_i + {}^V b\, {}^V y_i + {}^V c\, {}^V z_i - {}^V d \right|$
Figure 7. Estimation approach of the ground plane.
Most of the detected points belong to the ground plane, so ${}^V d_i$ should be 0. The planar parameters ${}^V a$, ${}^V b$, ${}^V c$, ${}^V d$ can therefore be computed by minimizing the sum of the squared distances:
$\varepsilon = \sum_{i=1}^{n} {}^V d_i^{\,2} = \sum_{i=1}^{n} \left( {}^V a\, {}^V x_i + {}^V b\, {}^V y_i + {}^V c\, {}^V z_i - {}^V d \right)^2$
ε in Equation (9) is defined to facilitate the computation. The Lagrange multiplier method is used to find the minimum value of ε . The Lagrange function is given by:
$L({}^V a, {}^V b, {}^V c, {}^V d) = \varepsilon + \lambda \left( {}^V a^2 + {}^V b^2 + {}^V c^2 - 1 \right)$
Setting the partial derivative of L with respect to ${}^V d$ to zero gives:
$\frac{\partial L}{\partial\, {}^V d} = -2 \sum_{i=1}^{n} \left( {}^V a\, {}^V x_i + {}^V b\, {}^V y_i + {}^V c\, {}^V z_i - {}^V d \right) = 0$
Equation (12) can be obtained from Equation (11):
${}^V d = {}^V a \frac{\sum_{i=1}^{n} {}^V x_i}{n} + {}^V b \frac{\sum_{i=1}^{n} {}^V y_i}{n} + {}^V c \frac{\sum_{i=1}^{n} {}^V z_i}{n} = {}^V a\, {}^V \bar{x} + {}^V b\, {}^V \bar{y} + {}^V c\, {}^V \bar{z}$
Substituting Equation (12) into Equation (8), we obtain Equation (13):
${}^V d_i = \left| {}^V a \left( {}^V x_i - {}^V \bar{x} \right) + {}^V b \left( {}^V y_i - {}^V \bar{y} \right) + {}^V c \left( {}^V z_i - {}^V \bar{z} \right) \right|$
Setting the partial derivatives of L with respect to ${}^V a$, ${}^V b$ and ${}^V c$ to zero yields:
$\frac{\partial L}{\partial\, {}^V a} = 2 \sum_{i=1}^{n} \left( {}^V a\, \Delta {}^V x_i + {}^V b\, \Delta {}^V y_i + {}^V c\, \Delta {}^V z_i \right) \Delta {}^V x_i + 2 \lambda\, {}^V a = 0$
$\frac{\partial L}{\partial\, {}^V b} = 2 \sum_{i=1}^{n} \left( {}^V a\, \Delta {}^V x_i + {}^V b\, \Delta {}^V y_i + {}^V c\, \Delta {}^V z_i \right) \Delta {}^V y_i + 2 \lambda\, {}^V b = 0$
$\frac{\partial L}{\partial\, {}^V c} = 2 \sum_{i=1}^{n} \left( {}^V a\, \Delta {}^V x_i + {}^V b\, \Delta {}^V y_i + {}^V c\, \Delta {}^V z_i \right) \Delta {}^V z_i + 2 \lambda\, {}^V c = 0$
where $\Delta {}^V x_i = {}^V x_i - {}^V \bar{x}$, $\Delta {}^V y_i = {}^V y_i - {}^V \bar{y}$ and $\Delta {}^V z_i = {}^V z_i - {}^V \bar{z}$.
Equation (14) can be rewritten as a matrix equation:
$A \begin{bmatrix} {}^V a \\ {}^V b \\ {}^V c \end{bmatrix} = \begin{bmatrix} \sum_{i=1}^{n} \Delta {}^V x_i \Delta {}^V x_i & \sum_{i=1}^{n} \Delta {}^V y_i \Delta {}^V x_i & \sum_{i=1}^{n} \Delta {}^V z_i \Delta {}^V x_i \\ \sum_{i=1}^{n} \Delta {}^V x_i \Delta {}^V y_i & \sum_{i=1}^{n} \Delta {}^V y_i \Delta {}^V y_i & \sum_{i=1}^{n} \Delta {}^V z_i \Delta {}^V y_i \\ \sum_{i=1}^{n} \Delta {}^V x_i \Delta {}^V z_i & \sum_{i=1}^{n} \Delta {}^V y_i \Delta {}^V z_i & \sum_{i=1}^{n} \Delta {}^V z_i \Delta {}^V z_i \end{bmatrix} \begin{bmatrix} {}^V a \\ {}^V b \\ {}^V c \end{bmatrix} = \lambda \begin{bmatrix} {}^V a \\ {}^V b \\ {}^V c \end{bmatrix}$
Observing Equation (15), we find that $[{}^V a, {}^V b, {}^V c]^T$ is an eigenvector of matrix A and λ is the corresponding eigenvalue, so ${}^V a$, ${}^V b$, ${}^V c$ can be computed by an eigenvector calculation. In general, matrix A has three eigenvalues and three corresponding groups of eigenvectors. ${}^V d_i$ can be obtained from Equation (13), and the set of ${}^V a$, ${}^V b$, ${}^V c$ minimizing ε is the right one. After that, ${}^V d$ can be computed from Equation (12).
Because detection errors and influences of the outer environment exist in the identification process, some abnormal points have large errors, and some other points do not belong to the ground plane at all. These two kinds of points are called bad points, and a statistical method is used to exclude them: a bad point is removed when its distance to the ground plane is larger than a standard value (twice the standard deviation, as described below).
Figure 8 describes the estimation process of the ground plane. First, ${}^V a$, ${}^V b$, ${}^V c$, ${}^V d$ are computed using all of the point cloud, and the ${}^V d_i$ are calculated from Equation (13). Then the standard deviation $\sigma_d$ of ${}^V d_i$ is obtained from Equation (16). Bad points are removed by comparing ${}^V d_i$ with $2\sigma_d$, and the planar parameters are computed again using the remaining point cloud. This is repeated until all values of ${}^V d_i$ are less than $2\sigma_d$, and the final ${}^V a$, ${}^V b$, ${}^V c$, ${}^V d$ are obtained. We have verified the estimation approach by simulations and experiments; the results show that the approach has good robustness and high precision:
$\sigma_d = \sqrt{ \frac{ \sum_{i=1}^{n} \left( {}^V d_i - {}^V \bar{d} \right)^2 }{ n - 1 } }$
where ${}^V \bar{d} = \frac{1}{n} \sum_{i=1}^{n} {}^V d_i$.
Figure 8. The flow chart of the ground plane estimation.
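A minimal sketch of the estimation loop in Figure 8 is given below (Python with NumPy; the function name is ours). The plane normal is taken as the eigenvector of the scatter matrix A of Equation (15) belonging to the smallest eigenvalue, ${}^V d$ follows from Equation (12), and points whose distance of Equation (13) exceeds $2\sigma_d$ are discarded iteratively.

```python
import numpy as np

def fit_ground_plane(points, max_iter=20):
    """Estimate the ground plane a*x + b*y + c*z - d = 0 (sign convention of
    Equations (12) and (13)) from an n-by-3 point cloud, iteratively rejecting
    points whose distance to the plane exceeds 2*sigma_d."""
    pts = np.asarray(points, dtype=float)
    for _ in range(max_iter):
        centroid = pts.mean(axis=0)              # (x_bar, y_bar, z_bar)
        delta = pts - centroid
        A = delta.T @ delta                      # scatter matrix of Equation (15)
        _, eigvecs = np.linalg.eigh(A)
        normal = eigvecs[:, 0]                   # eigenvector of the smallest eigenvalue
        d = float(normal @ centroid)             # Equation (12)
        dist = np.abs(delta @ normal)            # Equation (13)
        sigma_d = dist.std(ddof=1)               # Equation (16)
        keep = dist <= 2.0 * sigma_d
        if keep.all():                           # all points within 2*sigma_d: done
            break
        pts = pts[keep]                          # remove bad points and refit
    a, b, c = normal
    return a, b, c, d
```

A typical call would be `a, b, c, d = fit_ground_plane(point_cloud)` with `point_cloud` an n-by-3 array of V-CS coordinates.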

4.2. Relationship Model between the Legged Robot and the Ground

In this section, the relationship model between the legged robot and the ground is established to accurately compute the robot’s position and orientation (denoted by $T_R^G$) with respect to the G-CS. The detailed expression of $T_R^G$ is shown in Equation (17), and its construction is similar to that of $T_V^R$: $\gamma$, $\beta$, $\alpha$ are the angles by which the robot rotates about the $X_G$-axis, $Z_G$-axis and $Y_G$-axis successively with respect to the fixed G-CS, and $p_x$, $p_y$, $p_z$ are the distances by which the robot translates along the $X_G$-axis, $Y_G$-axis and $Z_G$-axis, respectively. $R_R^G$ is the orientation matrix, and $P_R^G$ is the translation vector:
$T_R^G(\alpha, \beta, \gamma, p_x, p_y, p_z) = \begin{bmatrix} R_R^G & P_R^G \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} a_{11} & a_{12} & a_{13} & a_{14} \\ a_{21} & a_{22} & a_{23} & a_{24} \\ a_{31} & a_{32} & a_{33} & a_{34} \\ 0 & 0 & 0 & 1 \end{bmatrix}$
where:
$R_R^G = R_{Y_G}(\alpha)\, R_{Z_G}(\beta)\, R_{X_G}(\gamma) = \begin{bmatrix} \cos\alpha\cos\beta & -\cos\alpha\sin\beta\cos\gamma + \sin\alpha\sin\gamma & \cos\alpha\sin\beta\sin\gamma + \sin\alpha\cos\gamma \\ \sin\beta & \cos\beta\cos\gamma & -\cos\beta\sin\gamma \\ -\sin\alpha\cos\beta & \sin\alpha\sin\beta\cos\gamma + \cos\alpha\sin\gamma & -\sin\alpha\sin\beta\sin\gamma + \cos\alpha\cos\gamma \end{bmatrix}$
$P_R^G = \begin{bmatrix} p_x & p_y & p_z \end{bmatrix}^T$
Figure 9. Initial state of the robot.
The initial position and orientation of the robot are shown in Figure 9; $P_{F1}, P_{F2}, P_{F3}, P_{F4}, P_{F5}, P_{F6}$ denote the positions of its feet. The positions of the actuated joints need to be solved in order to set the robot to a known pose matrix $T_R^G$.
Equation (20) holds:
${}^R P_{F1} = \left( T_R^G \right)^{-1} {}^G P_{F1}$
where ${}^R P_{F1}$ is the position of Foot 1 with respect to the R-CS, and ${}^G P_{F1}$ is a known position of Foot 1 with respect to the G-CS. The other feet satisfy similar equations, so ${}^R P_{F2}, {}^R P_{F3}, {}^R P_{F4}, {}^R P_{F5}, {}^R P_{F6}$ can be obtained as well. The positions of the actuated joints are then obtained by the robot inverse kinematics.
The robot reaches the set pose when the actuated joints are driven to the calculated positions. The important point here is that there may be deviations between the real pose and the set pose because of manufacturing and installation errors. However, it is quite important to reduce errors during the whole identification process in order to increase the identification precision. Therefore, the real pose is calculated by the following derivations.
Similarly, ${}^G P_{F1}, {}^G P_{F2}, {}^G P_{F3}, {}^G P_{F4}, {}^G P_{F5}, {}^G P_{F6}$ are known. Feet 1, 3 and 5 are chosen to calculate the real pose. As Figure 9 shows, the following relations exist:
${}^G \overrightarrow{R P_{F1}} = {}^G \overrightarrow{R G} + {}^G \overrightarrow{G P_{F1}}$
${}^G \overrightarrow{R P_{F3}} = {}^G \overrightarrow{R G} + {}^G \overrightarrow{G P_{F3}}$
${}^G \overrightarrow{R P_{F5}} = {}^G \overrightarrow{R G} + {}^G \overrightarrow{G P_{F5}}$
where the upper-left mark G indicates that all the geometric relations are expressed with respect to the G-CS. Equation (22) can be obtained from Equation (21):
$\left| {}^G \overrightarrow{R P_{F1}} \right|^2 = \left( {}^G x_1 - p_x \right)^2 + \left( {}^G y_1 - p_y \right)^2 + \left( {}^G z_1 - p_z \right)^2$
$\left| {}^G \overrightarrow{R P_{F3}} \right|^2 = \left( {}^G x_3 - p_x \right)^2 + \left( {}^G y_3 - p_y \right)^2 + \left( {}^G z_3 - p_z \right)^2$
$\left| {}^G \overrightarrow{R P_{F5}} \right|^2 = \left( {}^G x_5 - p_x \right)^2 + \left( {}^G y_5 - p_y \right)^2 + \left( {}^G z_5 - p_z \right)^2$
where $\left( {}^G x_1, {}^G y_1, {}^G z_1 \right)$ denotes the coordinates of Foot 1 with respect to the G-CS. Equation (23) can be derived based on the robot forward kinematics:
$\left| {}^R \overrightarrow{R P_{F1}} \right|^2 = {}^R x_1^2 + {}^R y_1^2 + {}^R z_1^2$
$\left| {}^R \overrightarrow{R P_{F3}} \right|^2 = {}^R x_3^2 + {}^R y_3^2 + {}^R z_3^2$
$\left| {}^R \overrightarrow{R P_{F5}} \right|^2 = {}^R x_5^2 + {}^R y_5^2 + {}^R z_5^2$
where $\left( {}^R x_1, {}^R y_1, {}^R z_1 \right)$ denotes the coordinates of Foot 1 with respect to the R-CS. The norm of each vector $\overrightarrow{R P_{F1}}$, $\overrightarrow{R P_{F3}}$, $\overrightarrow{R P_{F5}}$ is the same in both coordinate systems, so the following equations can be obtained:
${}^R x_1^2 + {}^R y_1^2 + {}^R z_1^2 = \left( {}^G x_1 - p_x \right)^2 + \left( {}^G y_1 - p_y \right)^2 + \left( {}^G z_1 - p_z \right)^2$
${}^R x_3^2 + {}^R y_3^2 + {}^R z_3^2 = \left( {}^G x_3 - p_x \right)^2 + \left( {}^G y_3 - p_y \right)^2 + \left( {}^G z_3 - p_z \right)^2$
${}^R x_5^2 + {}^R y_5^2 + {}^R z_5^2 = \left( {}^G x_5 - p_x \right)^2 + \left( {}^G y_5 - p_y \right)^2 + \left( {}^G z_5 - p_z \right)^2$
By solving the above equations, the real translation vector $P_R^G$ is calculated. Additionally, Equation (25) holds:
$R_R^G \left[ {}^R \overrightarrow{R P_{F1}},\; {}^R \overrightarrow{R P_{F3}},\; {}^R \overrightarrow{R P_{F5}} \right] = \left[ {}^G \overrightarrow{G P_{F1}} - P_R^G,\; {}^G \overrightarrow{G P_{F3}} - P_R^G,\; {}^G \overrightarrow{G P_{F5}} - P_R^G \right]$
From Equation (25), the real orientation matrix $R_R^G$ is computed as well. Thus, the real transformation matrix $T_R^G$ can be calculated from Equations (24) and (25).
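The sketch below (Python with NumPy and SciPy; the function name and the p0 argument are ours) illustrates one way to evaluate this model numerically: the translation is obtained by solving the three sphere equations of Equation (24) with a least-squares solver, and the rotation then follows from Equation (25) as a linear matrix equation.

```python
import numpy as np
from scipy.optimize import least_squares

def real_pose_from_feet(feet_R, feet_G, p0):
    """Recover the real T_R^G from the positions of Feet 1, 3 and 5.
    feet_R: 3x3 array, foot positions in the R-CS (rows: F1, F3, F5).
    feet_G: 3x3 array, the same foot positions in the G-CS.
    p0:     initial guess for the translation, e.g., the commanded set pose."""
    feet_R = np.asarray(feet_R, float)
    feet_G = np.asarray(feet_G, float)

    # Equation (24): the squared body-to-foot distances are the same in the
    # R-CS and in the G-CS once the translation p is subtracted.
    def residuals(p):
        return (np.sum((feet_G - p) ** 2, axis=1)
                - np.sum(feet_R ** 2, axis=1))

    p = least_squares(residuals, x0=np.asarray(p0, float)).x    # P_R^G

    # Equation (25): R_R^G [v1 v2 v3] = [w1 w2 w3], where vi are the foot
    # vectors in the R-CS and wi = (foot position in the G-CS) - P_R^G.
    V = feet_R.T                 # columns: foot vectors in the R-CS
    W = (feet_G - p).T           # columns: corresponding vectors in the G-CS
    R = W @ np.linalg.inv(V)     # requires the three foot vectors to be linearly independent

    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = p
    return T
```

Because of measurement noise, the matrix R recovered this way is not guaranteed to be exactly orthonormal; in practice one might re-project it onto a rotation matrix (e.g., via an SVD), a step the paper does not discuss.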

4.3. Formulation of the Identification Function

The following equations are obtained from Equation (4):
${}^G x = \left( a_{11} t_{11} + a_{12} t_{21} + a_{13} t_{31} \right) {}^V x + \left( a_{11} t_{12} + a_{12} t_{22} + a_{13} t_{32} \right) {}^V y + \left( a_{11} t_{13} + a_{12} t_{23} + a_{13} t_{33} \right) {}^V z + a_{11} t_{14} + a_{12} t_{24} + a_{13} t_{34} + a_{14}$
${}^G y = \left( a_{21} t_{11} + a_{22} t_{21} + a_{23} t_{31} \right) {}^V x + \left( a_{21} t_{12} + a_{22} t_{22} + a_{23} t_{32} \right) {}^V y + \left( a_{21} t_{13} + a_{22} t_{23} + a_{23} t_{33} \right) {}^V z + a_{21} t_{14} + a_{22} t_{24} + a_{23} t_{34} + a_{24}$
${}^G z = \left( a_{31} t_{11} + a_{32} t_{21} + a_{33} t_{31} \right) {}^V x + \left( a_{31} t_{12} + a_{32} t_{22} + a_{33} t_{32} \right) {}^V y + \left( a_{31} t_{13} + a_{32} t_{23} + a_{33} t_{33} \right) {}^V z + a_{31} t_{14} + a_{32} t_{24} + a_{33} t_{34} + a_{34}$
Because ${}^G P \left( {}^G x, {}^G y, {}^G z \right)$ fulfills Equation (7), the following equation is obtained from the second expression of Equation (26):
$\left( a_{21} t_{11} + a_{22} t_{21} + a_{23} t_{31} \right) {}^V x + \left( a_{21} t_{12} + a_{22} t_{22} + a_{23} t_{32} \right) {}^V y + \left( a_{21} t_{13} + a_{22} t_{23} + a_{23} t_{33} \right) {}^V z + a_{21} t_{14} + a_{22} t_{24} + a_{23} t_{34} + a_{24} = 0$
Figure 10. Formulation of the identification function.
As Figure 10 shows, Equation (27) is the theoretical ground equation with respect to the V-CS. $a_T$, $b_T$, $c_T$, $d_T$ in Equation (28) represent the theoretical ground planar parameters:
$a_T = a_{21} t_{11} + a_{22} t_{21} + a_{23} t_{31}$
$b_T = a_{21} t_{12} + a_{22} t_{22} + a_{23} t_{32}$
$c_T = a_{21} t_{13} + a_{22} t_{23} + a_{23} t_{33}$
$d_T = a_{21} t_{14} + a_{22} t_{24} + a_{23} t_{34} + a_{24}$
Through derivation, we find that Equation (29) holds:
$\left( a_{21} t_{11} + a_{22} t_{21} + a_{23} t_{31} \right)^2 + \left( a_{21} t_{12} + a_{22} t_{22} + a_{23} t_{32} \right)^2 + \left( a_{21} t_{13} + a_{22} t_{23} + a_{23} t_{33} \right)^2 = 1$
Theoretically, the measured ground coincides with the theoretical ground, as shown in Figure 10. Because of Equation (29), Equation (27) is a standard plane equation, so it must be the same as Equation (5) derived in Section 4.1. The following four equations can then be obtained:
$a_{21} t_{11} + a_{22} t_{21} + a_{23} t_{31} = {}^V a$
$a_{21} t_{12} + a_{22} t_{22} + a_{23} t_{32} = {}^V b$
$a_{21} t_{13} + a_{22} t_{23} + a_{23} t_{33} = {}^V c$
$a_{21} t_{14} + a_{22} t_{24} + a_{23} t_{34} + a_{24} = {}^V d$
where $t_{11}, t_{12}, t_{13}, t_{14}, t_{21}, t_{22}, t_{23}, t_{24}, t_{31}, t_{32}, t_{33}, t_{34}$ are functions of $\alpha, \beta, \gamma, p_x, p_y, p_z$. A nonlinear function F of the six identification parameters can therefore be defined as Equation (31):
$F = \left( a_{21} \cos\alpha\cos\beta + a_{22} \sin\beta - a_{23} \sin\alpha\cos\beta - {}^V a \right)^2 + \left[ a_{21} \left( -\cos\alpha\sin\beta\cos\gamma + \sin\alpha\sin\gamma \right) + a_{22} \cos\beta\cos\gamma + a_{23} \left( \sin\alpha\sin\beta\cos\gamma + \cos\alpha\sin\gamma \right) - {}^V b \right]^2 + \left[ a_{21} \left( \cos\alpha\sin\beta\sin\gamma + \sin\alpha\cos\gamma \right) - a_{22} \cos\beta\sin\gamma + a_{23} \left( -\sin\alpha\sin\beta\sin\gamma + \cos\alpha\cos\gamma \right) - {}^V c \right]^2 + \left( a_{21} p_x + a_{22} p_y + a_{23} p_z + a_{24} - {}^V d \right)^2$
Generally, the legged robot has six DOFs, which can be used to simplify the identification process and increase the identification precision. At the beginning, the robot is located in an initial state in which the $X_R$-axis and the $Z_R$-axis are parallel with the $X_G$-axis and the $Z_G$-axis, respectively, and the $Y_R$-axis and the $Y_G$-axis are collinear. Multiple groups of robot poses and corresponding ground equations can be obtained by making the robot translate and rotate in space. Finally, the identification parameters $\alpha, \beta, \gamma, p_x, p_y, p_z$ are obtained by minimizing the nonlinear function F using the Levenberg-Marquardt (LM) algorithm.
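A sketch of this final step is shown below (Python with SciPy; the function names are ours, and `identification_matrix` refers to the earlier sketch in Section 3). For every robot pose we have the second row $a_{21}, a_{22}, a_{23}, a_{24}$ of the real $T_R^G$ and a measured plane $({}^V a, {}^V b, {}^V c, {}^V d)$; stacking the four residuals of Equation (30) over all poses and minimizing with a Levenberg-Marquardt solver is equivalent to minimizing F in Equation (31) summed over the poses.

```python
import numpy as np
from scipy.optimize import least_squares

def solve_identification(T_R_G_list, planes, x0):
    """Estimate (alpha, beta, gamma, px, py, pz) of T_V^R.
    T_R_G_list: list of 4x4 real robot poses, one per experiment.
    planes:     list of measured ground planes (a, b, c, d) in the V-CS.
    x0:         initial guess for the six parameters (angles in radians)."""

    def residuals(params):
        T_V_R = identification_matrix(*params)     # Equation (1), sketched in Section 3
        res = []
        for T_R_G, (a_v, b_v, c_v, d_v) in zip(T_R_G_list, planes):
            a2 = T_R_G[1, :]                       # second row: a21, a22, a23, a24
            theo = a2[:3] @ T_V_R[:3, :]           # a_T, b_T, c_T and part of d_T (Eq. 28)
            theo = theo + np.array([0.0, 0.0, 0.0, a2[3]])      # add a24 to d_T
            res.extend(theo - np.array([a_v, b_v, c_v, d_v]))   # residuals of Eq. (30)
        return np.array(res)

    return least_squares(residuals, x0, method='lm').x   # Levenberg-Marquardt
```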

5. Experimental Results and Discussion

In order to verify the proposed identification methodology, a series of experiments were carried out on the robot. The experimental results and related discussions are presented in this section.
Figure 11. The experimental environment.

5.1. Set up and Identification Results

Figure 11 shows the experimental environment; a small section of flat ground lies in front of the robot. The 3D vision sensor is mounted at the top of the robot and connected to the upper computer via USB. The 3D vision sensor is set to tilt down in order to guarantee that it can detect the ground. The upper computer controls the robot to reach 52 different groups of poses, and also controls the 3D vision sensor to detect the ground. When the robot reaches a set pose, the 3D vision sensor captures a depth image of the ground. Table A1 in the Appendix lists the 52 groups of pose parameters used in the experiments. Owing to the length of the paper, only six of the 52 groups of experimental data are listed below, but all the experimental data are discussed in detail. Figure 12 shows the six corresponding robot poses.
Figure 12. The robot poses.
Figure 13 shows the point clouds (blue points) of the ground corresponding to the above six groups of poses. The red point in Figure 13 denotes the origin of the 3D vision sensor. Some cloud points with larger errors are removed using the approach proposed in Section 4.1, which is why the blue points far away from the 3D vision sensor are sparse. Correspondingly, Table 2 shows the six measured ground equations computed with the approach of Section 4.1.
Figure 13. Ground point cloud.
Table 2. Measured ground equations.
Number | Measured Ground Equation
ground equation 1 | $-0.5515\,{}^V x + 0.7802\,{}^V y - 0.2952\,{}^V z + 0.9916 = 0$
ground equation 10 | $-0.7778\,{}^V x + 0.5839\,{}^V y + 0.2324\,{}^V z + 0.8372 = 0$
ground equation 15 | $-0.3068\,{}^V x + 0.9504\,{}^V y - 0.0504\,{}^V z + 1.1626 = 0$
ground equation 29 | $-0.3388\,{}^V x + 0.9312\,{}^V y - 0.1345\,{}^V z + 1.1656 = 0$
ground equation 34 | $-0.8102\,{}^V x + 0.5668\,{}^V y + 0.1490\,{}^V z + 0.8378 = 0$
ground equation 49 | $-0.4428\,{}^V x + 0.8957\,{}^V y - 0.0393\,{}^V z + 1.1721 = 0$
After calculating the transformation matrices of the robot, identification parameters can be obtained using the algorithm in Section 4.3. The computed results are as shown by Equation (32):
$\alpha = -0.4398°,\; \beta = -37.4072°,\; \gamma = -3.0471°,\; p_x = 527.8885\ \mathrm{mm},\; p_y = 364.8007\ \mathrm{mm},\; p_z = -33.4003\ \mathrm{mm}$
Equation (33) shows the detailed expression of $T_V^R$:
$T_V^R = \begin{bmatrix} 0.7943 & 0.6070 & 0.0246 & 527.8885 \\ -0.6075 & 0.7932 & 0.0422 & 364.8007 \\ 0.0061 & -0.0485 & 0.9988 & -33.4003 \\ 0 & 0 & 0 & 1 \end{bmatrix}$

5.2. Errors Analysis

Substituting $T_V^R$ into Equation (4), theoretical ground equations with respect to the V-CS can be obtained. Table 3 shows the six detailed expressions of the theoretical ground.
Table 3. Theoretical ground equations.
Number | Theoretical Ground Equation
ground equation 1 | $-0.5449\,{}^V x + 0.7827\,{}^V y - 0.3009\,{}^V z + 0.9894 = 0$
ground equation 10 | $-0.7783\,{}^V x + 0.5826\,{}^V y + 0.2341\,{}^V z + 0.8363 = 0$
ground equation 15 | $-0.3132\,{}^V x + 0.9483\,{}^V y - 0.0510\,{}^V z + 1.1632 = 0$
ground equation 29 | $-0.3391\,{}^V x + 0.9309\,{}^V y - 0.1354\,{}^V z + 1.1639 = 0$
ground equation 34 | $-0.8081\,{}^V x + 0.5698\,{}^V y + 0.1493\,{}^V z + 0.8390 = 0$
ground equation 49 | $-0.4430\,{}^V x + 0.8957\,{}^V y - 0.0395\,{}^V z + 1.1733 = 0$
Figure 14 and Figure 15 illustrate the measured and theoretical ground planar parameters, respectively. From pose 1 to pose 10, pose 21 to pose 30 and pose 41 to pose 46, the robot rotates in the positive direction: the rotation angle about the Z-axis and the translation distance increase gradually, while the rotation angle about the X-axis decreases gradually. From pose 11 to pose 20, pose 31 to pose 40 and pose 47 to pose 52, the robot rotates in the negative direction: the rotation angle about the Z-axis and the translation distance again increase gradually, while the rotation angle about the X-axis decreases gradually. Therefore, gradual increases and decreases of the planar parameters appear correspondingly in Figure 14 and Figure 15. The values of the theoretical planar parameters $a_T, b_T, c_T$ and the measured planar parameters ${}^V a, {}^V b, {}^V c$ are shown in Figure 14, while $d_T$ and ${}^V d$ are shown in Figure 15; because a, b and c have a different geometric meaning from d, they are shown in separate figures. From Figure 14 and Figure 15, we can clearly see that the theoretical and measured planar parameters differ only slightly and are nearly the same.
Figure 14. Measured angular parameters vs. theoretical angular parameters.
In Figure 16, the blue region represents the theoretical ground and the green region represents the ground measured by the 3D vision sensor. $O_V$ is the origin of the 3D vision sensor, $O_V O_{TG}$ is the normal of the theoretical ground, and $d_T$ is the distance from the origin to the theoretical ground. $O_V O_{MG}$ is the normal of the measured ground, and $d_M$ is the distance from the origin to the measured ground. $\theta$ denotes the angle between the theoretical ground and the measured ground, and $e_d = |d_T - d_M|$ denotes the deviation between $d_T$ and $d_M$. In reality, the detected terrain is composed of many different planes, and the indices $\theta$ and $e_d$ can be used to describe the measurement precision.
Figure 15. Measured distance vs. theoretical distance.
Figure 16. Error analysis.
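These two indices can be computed directly from the plane parameters, as in the short sketch below (Python with NumPy; the function name is ours): $\theta$ is the angle between the unit normals, and $e_d$ is the absolute difference of the origin-to-plane distances.

```python
import numpy as np

def plane_errors(plane_T, plane_M):
    """Angle theta (deg) between the theoretical and measured ground planes and
    the distance deviation e_d, given planes (a, b, c, d) with unit normals."""
    n_T, d_T = np.asarray(plane_T[:3], float), abs(float(plane_T[3]))
    n_M, d_M = np.asarray(plane_M[:3], float), abs(float(plane_M[3]))
    cos_theta = np.clip(abs(n_T @ n_M), 0.0, 1.0)   # |.| ignores the normal orientation
    theta = np.degrees(np.arccos(cos_theta))
    return theta, abs(d_T - d_M)
```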
We have computed the 52 groups of $\theta$ and $e_d$ corresponding to the set poses. The results are shown in Figure 17 and Figure 18: Figure 17 shows the values of $\theta$, and Figure 18 shows the values of $e_d$.
The mean and maximum values of $\theta$ and $e_d$ are marked in Figure 17 and Figure 18, respectively. The mean value of $\theta$ is 0.2104° and the maximum value is 0.5219°; the mean value of $e_d$ is 1.1 mm and the maximum value is about 3.236 mm. The robot’s minimum step height is 50 mm when it is walking, and its foot can rotate from −35° to 35° with respect to its leg, so the robot can easily tolerate the maximum angle error of 0.5219° and the maximum distance error of 3.236 mm. The above analysis shows that the identification precision fulfills the requirements of the robot, which validates our theory.
Figure 17. The angle error.
Figure 18. The distance error.

5.3. Robustness Tests

In this section, the robustness of the methodology is tested by carrying out identification experiments under two typical situations: different illumination conditions and different ground conditions. For the robustness tests under different illumination conditions, the experiments are carried out at different times of day in an urban environment. As Figure 19 shows, the first experiment is carried out under normal illumination (at 4 p.m.) as a reference, the second under weak illumination (at 6 p.m.), and the third under strong illumination (at 2 p.m.).
Figure 19. Robust test under different illumination conditions. (a) Normal illumination; (b) Weak illumination; (c) Strong illumination.
The experiment is executed 20 times under each illumination condition. We provide the mean and standard deviation of the identification results in Table 4, along with box plots in Figure 20 to illustrate their spread. The boxes span the 25th and 75th percentiles, with the median depicted by the central line, and the whiskers of the box plots represent the range. As Figure 20 shows, the computed results vary most under strong illumination: the spread of $\alpha$ is within about 0.13°, that of $\beta$ within about 0.05°, that of $\gamma$ within 0.12°, the spreads of $p_x$ and $p_z$ are both within about 2 mm, and that of $p_y$ is within about 0.4 mm. More precise results are achieved under the normal and weak illumination conditions. Table 4 shows the statistical results of these tests. The mean values of $\alpha$, $\beta$ and $\gamma$ under the three illumination conditions are nearly the same, and the mean values of the positions differ by less than 4 mm. The standard deviations obtained under strong illumination are the largest; the standard deviation of $\alpha$ is less than 0.052° and that of $p_x$ is less than 0.8 mm. Nevertheless, the standard deviations under strong illumination are still relatively small compared to the results of Hoepflinger [29].
Table 4. Identification results under different illumination conditions.
Parameter | Mean (Normal) | Mean (Weak Illumination) | Mean (Strong Illumination) | Standard Deviation (Normal) | Standard Deviation (Weak Illumination) | Standard Deviation (Strong Illumination)
$\alpha$ (deg) | −0.4492 | −0.4401 | −0.4363 | 0.0327 | 0.0317 | 0.0517
$\beta$ (deg) | −36.9849 | −36.9103 | −36.9026 | 0.0093 | 0.0149 | 0.0204
$\gamma$ (deg) | −2.8893 | −2.8718 | −2.8383 | 0.0084 | 0.0400 | 0.0431
$p_x$ (mm) | 525.8881 | 528.9832 | 524.8196 | 0.2941 | 0.5935 | 0.7932
$p_y$ (mm) | 363.4033 | 365.2710 | 361.3017 | 0.2089 | 0.1166 | 0.1651
$p_z$ (mm) | −34.0852 | −33.8849 | −31.6874 | 0.4269 | 0.6543 | 0.7295
Figure 20. Box diagram of the identification results under different illumination conditions.
For the robustness test under different ground conditions, the experiments are performed on three different terrains. As Figure 21 shows, the first experiment is carried out on a flat ground as a reference, the second experiment on a slightly complex ground, and the third experiment on a considerably complex ground.
Figure 21. Robust test under different ground conditions. (a) Normal ground; (b) Little bumpiness; (c) Big bumpiness.
The experiment is executed 20 times on each terrain. The mean and standard deviation of the identification results are provided in Table 5, and the spread of the results is illustrated with box plots in Figure 22. As Figure 22 shows, the identification precision on the flat ground is the highest, and the least precise results are obtained on the considerably complex ground: the spread of $\alpha$ is within about 0.124°, that of $\beta$ within about 0.076°, that of $\gamma$ within 0.414°, and the spreads of $p_x$, $p_y$ and $p_z$ are within 1.77 mm, 0.85 mm and 1.26 mm, respectively. The statistical results of the test are shown in Table 5. The mean values of $\alpha$, $\beta$ and $\gamma$ obtained on the three terrains are close to each other. The mean values of the positions $p_y$ and $p_z$ obtained on the three terrains differ by less than 1 mm, while the mean value of $p_x$ obtained on the considerably complex ground differs by about 5 mm from the values obtained on the other two terrains. It can be observed that the standard deviations are sufficiently small: the maximum standard deviation of the angles is less than 0.05° and the maximum standard deviation of the positions is less than 0.72 mm, both obtained on the considerably complex ground.
Table 5. Identification results under different ground conditions.
Parameter | Mean (Normal) | Mean (Little Bumpiness) | Mean (Big Bumpiness) | Standard Deviation (Normal) | Standard Deviation (Little Bumpiness) | Standard Deviation (Big Bumpiness)
$\alpha$ (deg) | −0.4657 | −0.4425 | −0.4592 | 0.0220 | 0.0264 | 0.0461
$\beta$ (deg) | −37.4931 | −37.4333 | −37.3797 | 0.0079 | 0.0171 | 0.0278
$\gamma$ (deg) | −3.0555 | −3.0027 | −2.9176 | 0.0048 | 0.0068 | 0.0161
$p_x$ (mm) | 528.6361 | 527.2074 | 523.8542 | 0.3997 | 0.4042 | 0.7120
$p_y$ (mm) | 365.0869 | 365.1528 | 364.1795 | 0.2708 | 0.2649 | 0.3441
$p_z$ (mm) | −33.7265 | −33.0949 | −33.0949 | 0.2574 | 0.4256 | 0.5747
Figure 22. Box diagram of the identification results under different ground conditions.
To conclude this section, the box plots above show how the illumination conditions and the complexity of the ground affect the identification precision. The identification results obtained under different illumination conditions and different ground conditions do not differ greatly and the standard deviations are quite small, which shows that our method is robust and stable and can be applied in complex environments.

6. Use Case

A use case, underlining the importance and applicability of the methodology in the legged robot field, is presented next. In reality, a legged robot is often used in an unknown environment to execute demanding tasks. With the help of a vision sensor, the robot gains a good knowledge of the environment. Moreover, after computing the extrinsic parameters relating the vision sensor and the legged robot, an accurate relationship between the robot and the terrain can be obtained. Thus automatic locomotion can be implemented to execute tasks.
As Figure 23 shows, the robot is in an unknown environment with obstacles. Based on the proposed methodology, the extrinsic parameters relating the sensor and the robot can be computed. The terrain map with respect to the robot can be built. Moreover, the accurate position and orientation of the obstacles are obtained from the terrain map. An automatic locomotion planning algorithm combining the terrain information is executed to plan the foot and body trajectories.
Figure 23. External view of the legged robot passing through obstacles.
Figure 24 shows the whole process of passing through the obstacles; the robot body is regulated to move forward horizontally. During the whole process, the feet are placed at the planned footholds, so the body remains stable when walking over the obstacles. The results show a successful application of the methodology in the intelligent robotic field.
Figure 24. Snapshots of the legged robot passing through obstacles.

7. Conclusions

In this paper, we have presented a novel coordinate identification methodology for a 3D vision system mounted on a legged robot. The method addresses the problem of extrinsic calibration between a 3D vision sensor and a legged robot, which few studies have worked on. The proposed method provides several advantages. Instead of using external tools (calibration targets or measurement equipment), our method only needs a small section of relatively flat ground, which reduces recognition errors and avoids measurement errors. Moreover, the method needs no human intervention, and it is practical and easy to implement.
The theoretical contributions of this paper can be summarized as follows. An approach for estimating the ground plane is introduced based on optimization and statistical methods, and the relationship model between the robot and the ground is also established. The identification parameters are obtained from the identification function using the LM algorithm. A series of experiments is performed on a hexapod robot, and the identification parameters are computed using the proposed method. The calculated errors satisfy the requirements of the robot, which validates our theory. In addition, experiments in various environments are also performed, and the results show that our methodology has good stability and robustness. A use case, in which the legged robot passes through rough terrain after accurately obtaining the identification parameters, is also given to verify the practicability of the method. The work of this paper supplements relevant studies on legged robots, and the method can be applied in a wide range of similar applications.

Acknowledgments

This study was supported by the National Basic Research Program of China (973 Program) (No. 2013CB035501), and Shanghai Natural Science Foundation (Grant No. 14ZR1422600).

Author Contributions

Xun Chai and Yang Pan conducted algorithm design under the supervision of Feng Gao. Xun Chai and Yilin Xu designed and carried out the experiments. Xun Chai analyzed the experiment data and wrote the paper. Chenkun Qi gave many meaningful suggestions about the structure of the paper.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix

Table A1. The pose parameters of “Octopus”.
Pose Number | $\alpha$ (°) | $\beta$ (°) | $\gamma$ (°) | $p_x$ (mm) | $p_y$ (mm) | $p_z$ (mm)
1 | 0 | 2 | 20 | 0 | 615 | 0
2 | 0 | 8 | 18 | 0 | 619 | 0
3 | 0 | 11 | 16 | 0 | 623 | 0
4 | 0 | 13 | 14 | 0 | 627 | 0
5 | 0 | 15 | 12 | 0 | 631 | 0
6 | 0 | 17 | 10 | 0 | 635 | 0
7 | 0 | 19 | 8 | 0 | 639 | 0
8 | 0 | 19 | 6 | 0 | 643 | 0
9 | 0 | 19 | 4 | 0 | 647 | 0
10 | 0 | 20 | 2 | 0 | 651 | 0
11 | 0 | −2 | −20 | 0 | 617 | 0
12 | 0 | −8 | −18 | 0 | 621 | 0
13 | 0 | −11 | −16 | 0 | 625 | 0
14 | 0 | −13 | −14 | 0 | 629 | 0
15 | 0 | −15 | −12 | 0 | 633 | 0
16 | 0 | −17 | −10 | 0 | 637 | 0
17 | 0 | −19 | −8 | 0 | 641 | 0
18 | 0 | −19 | −6 | 0 | 645 | 0
19 | 0 | −19 | −4 | 0 | 649 | 0
20 | 0 | −20 | −2 | 0 | 653 | 0
21 | 0 | 3 | 19 | 0 | 690 | 0
22 | 0 | 9 | 17 | 0 | 694 | 0
23 | 0 | 12 | 15 | 0 | 698 | 0
24 | 0 | 15 | 13 | 0 | 698 | 0
25 | 0 | 17 | 11 | 0 | 694 | 0
26 | 0 | 17 | 9 | 0 | 690 | 0
27 | 0 | 17 | 7 | 0 | 686 | 0
28 | 0 | 18 | 5 | 0 | 682 | 0
29 | 0 | 18 | 3 | 0 | 678 | 0
30 | 0 | 19 | 1 | 0 | 674 | 0
31 | 0 | −3 | −19 | 0 | 692 | 0
32 | 0 | −9 | −17 | 0 | 696 | 0
33 | 0 | −12 | −15 | 0 | 700 | 0
34 | 0 | −15 | −13 | 0 | 696 | 0
35 | 0 | −17 | −11 | 0 | 692 | 0
36 | 0 | −17 | −9 | 0 | 688 | 0
37 | 0 | −17 | −7 | 0 | 684 | 0
38 | 0 | −18 | −5 | 0 | 680 | 0
39 | 0 | −18 | −3 | 0 | 676 | 0
40 | 0 | −19 | −1 | 0 | 672 | 0
41 | 0 | 1 | 13 | 0 | 730 | 0
42 | 0 | 7 | 11 | 0 | 734 | 0
43 | 0 | 9 | 9 | 0 | 738 | 0
44 | 0 | 11 | 7 | 0 | 742 | 0
45 | 0 | 11 | 5 | 0 | 746 | 0
46 | 0 | 12 | 3 | 0 | 749 | 0
47 | 0 | −1 | −13 | 0 | 732 | 0
48 | 0 | −7 | −11 | 0 | 736 | 0
49 | 0 | −9 | −9 | 0 | 740 | 0
50 | 0 | −11 | −7 | 0 | 744 | 0
51 | 0 | −11 | −5 | 0 | 748 | 0
52 | 0 | −12 | −3 | 0 | 750 | 0

References

  1. Bazeille, S.; Barasuol, V.; Focchi, M.; Havoutis, I.; Frigerio, M.; Buchli, J.; Semini, C.; Caldwell, D.G. Vision Enhanced Reactive Locomotion Control for Trotting on Rough Terrain. In Proceedings of the 2013 IEEE International Conference on Technologies for Practical Robot Applications (TePRA), Woburn, MA, USA, 22–23 April 2013; pp. 1–6.
  2. Walas, K. Terrain Classification Using Vision, Depth and Tactile Perception. In Proceedings of the 2013 RGB-D: Advanced Reasoning with Depth Cameras in Conjunction with RSS, Berlin, Germany, 27 July 2013.
  3. Stelzer, A.; Hirschmuller, H.; Gorner, M. Stereo-Vision-Based Navigation of a Six-Legged Walking Robot in Unknown Rough Terrain. Int. J. Robot. Res. 2012, 31, 381–402. [Google Scholar] [CrossRef]
  4. Kolter, J.Z.; Kim, Y.; Ng, A.Y. Stereo Vision and Terrain Modeling for Quadruped Robots. In Proceeding of the IEEE International Conference on Robotics and Automation (ICRA ’09), Kobe, Japan, 12–17 May 2009; pp. 1557–1564.
  5. Belter, D.; Skrzypczyński, P. Rough Terrain Mapping and Classification for Foothold Selection in a Walking Robot. J. Field Robot. 2011, 28, 497–528. [Google Scholar] [CrossRef]
  6. Ishigami, G.; Otsuki, M.; Kubota, T. Range-Dependent Terrain Mapping and Multipath Planning Using Cylindrical Coordinates for a Planetary Exploration Rover. J. Field Robot. 2013, 30, 536–551. [Google Scholar] [CrossRef]
  7. Kesper, P.; Grinke, E.; Hesse, F.; Wörgötter, F.; Manoonpong, P. Obstacle/Gap Detection and Terrain Classification of Walking Robots Based on a 2D Laser Range Finder. Chapter 2013, 53, 419–426. [Google Scholar]
  8. Kang, T.K.; Lim, M.T.; Park, G.T.; Kim, D.W. 3D Vision-Based Local Path Planning System of a Humanoid Robot for Obstacle Avoidance. J. Electr. Eng. Technol. 2013, 8, 879–888. [Google Scholar] [CrossRef]
  9. Wong, C.C.; Hwang, C.L.; Huang, K.H.; Hu, Y.Y.; Cheng, C.T. Design and Implementation of Vision-Based Fuzzy Obstacle Avoidance Method on Humanoid Robot. Int. J. Fuzzy Syst. 2011, 13, 45–54. [Google Scholar]
  10. Bogdan Rusu, R.; Sundaresan, A.; Morisset, B.; Hauser, K.; Agrawal, M.; Latombe, J.C.; Beetz, M. Leaving Flatland: Efficient Real-Time Three-Dimensional Perception and Motion Planning. J. Field Robot. 2009, 26, 841–862. [Google Scholar] [CrossRef]
  11. Herrera, D.; Kannala, J.; Heikkilä, J. Accurate and Practical Calibration of a Depth and Color Camera Pair. In Computer Analysis of Images and Patterns; Springer: Berlin Heidelberg, Germany, 2011; pp. 437–445. [Google Scholar]
  12. Li, G.; Liu, Y.; Dong, L.; Cai, X.; Zhou, D. An Algorithm for Extrinsic Parameters Calibration of a Camera and a Laser Range Finder Using Line Features. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2007), San Diego, CA, USA, 29 October–2 November 2007; pp. 3854–3859.
  13. Guo, C.X.; Mirzaei, F.M.; Roumeliotis, S.I. An Analytical Least-Squares Solution to the Odometer-Camera Extrinsic Calibration Problem. In Proceedings of the 2012 IEEE International Conference on Robotics and Automation (ICRA), Saint Paul, MN, USA, 14–18 May 2012; pp. 3962–3968.
  14. Geiger, A.; Moosmann, F.; Car, O.; Schuster, B. Automatic Camera and Range Sensor Calibration Using a Single Shot. In Proceedings of the 2012 IEEE International Conference on Robotics and Automation (ICRA), Saint Paul, MN, USA, 14–18 May 2012; pp. 3936–3943.
  15. Pandey, G.; McBride, J.R.; Savarese, S.; Eustice, R. Automatic Targetless Extrinsic Calibration of a 3D Lidar and Camera by Maximizing Mutual Information. In Proceedings of the Twenty-Sixth AAAI Conference on Artificial Intelligence, Toronto, ON, Canada, 22–26 July 2012.
  16. Zhang, Q.; Pless, R. Extrinsic Calibration of a Camera and Laser Range Finder (Improves Camera Calibration). In Proceedings of the 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2004), Sendai, Japan, 28 September–2 October 2004; pp. 2301–2306.
  17. Huang, P.S.; Hong, W.B.; Chien, H.J.; Chen, C.Y. Extrinsic Calibration of a Multi-Beam LiDAR System with Improved Intrinsic Laser Parameters Using V-Shaped Planes and Infrared Images. In Proceedings of the 2013 11th IEEE IVMSP Workshop, Seoul, Korea, 10–12 June 2013; pp. 1–4.
  18. Fernández-Moral, E.; González-Jiménez, J.; Rives, P.; Arévalo, V. Extrinsic Calibration of a Set of Range Cameras in 5 Seconds without Pattern. In Proceedings of the 2014 IEEE/RSJ International. Conference on Intelligent Robots and Systems (IROS 2014), Chicago, IL, USA, 14–18 September 2014; pp. 429–435.
  19. Kwak, K.; Huber, D.F.; Badino, H.; Kanade, T. Extrinsic Calibration of a Single Line Scanning Lidar and a Camera. In Proceedings of the 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), San Francisco, CA, USA, 25–30 September 2011; pp. 3283–3289.
  20. Agrawal, A. Extrinsic Camera Calibration without a Direct View Using Spherical Mirror. In Proceedings of the 2013 IEEE International Conference on Computer Vision (ICCV), Sydney, Australia, 1–8 December 2013; pp. 2368–2375.
  21. Lébraly, P.; Deymier, C.; Ait-Aider, O.; Royer, E.; Dhome, M. Flexible Extrinsic Calibration of Non-Overlapping Cameras Using a Planar Mirror: Application to Vision-Based Robotics. In Proceedings of the 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Taipei, China, 18–22 October 2010; pp. 5640–5647.
  22. Hesch, J.A.; Mourikis, A.I.; Roumeliotis, S.I. Mirror-Based Extrinsic Camera Calibration. In Algorithmic Foundation of Robotics VIII; Springer: Berlin, Germany, 2009; pp. 285–299. [Google Scholar]
  23. Zhou, L. A New Minimal Solution for the Extrinsic Calibration of a 2D LIDAR and a Camera Using Three Plane-Line Correspondences. IEEE Sens. J. 2014, 14, 442–454. [Google Scholar] [CrossRef]
  24. Kelly, J.; Matthies, L.H.; Sukhatme, G. Simultaneous Mapping and Stereo Extrinsic Parameter Calibration Using GPS Measurements. In Proceedings of the 2011 IEEE International Conference on Robotics and Automation (ICRA), Shanghai, China, 9–13 May 2011; pp. 279–286.
  25. Wang, C.C. Extrinsic Calibration of a Vision Sensor Mounted on a Robot. IEEE Trans. Robot. Autom. 1992, 8, 161–175. [Google Scholar] [CrossRef]
  26. Strobl, K.H.; Hirzinger, G. Optimal Hand-Eye Calibration. In Proceedings of the 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, Beijing, China, 9–15 October 2006; pp. 4647–4653.
  27. Dornaika, F.; Horaud, R. Simultaneous Robot-World and Hand-Eye Calibration. IEEE Trans. Robot. Autom. 1998, 14, 617–622. [Google Scholar] [CrossRef]
  28. Wongwilai, N.; Niparnan, N.; Sudsang, A. Calibration of an Eye-in-Hand System Using SoftKinetic DepthSense and Katana Robotic Arm. In Proceedings of the 2014 11th International Conference on Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology (ECTI-CON), Nakhon Ratchasima, Isan, 14–17 May 2014; pp. 1–6.
  29. Hoepflinger, M.A.; Remy, D.C.; Hutter, M.; Siegwart, R.Y. Extrinsic RGB-D Camera Calibration for Legged Robots. In Proceedings of the 14th International Conference on Climbing and Walking Robots and the Support Technologies for Mobile Machines (CLAWAR), Paris, France, 6–8 September 2011.
  30. Pan, Y.; Gao, F. A New 6-Parallel-Legged Walking Robot for Drilling Holes on the Fuselage. J. Mech. Eng. Sci. 2013, 228, 753–764. [Google Scholar] [CrossRef]
  31. Yang, P.; Gao, F. Leg Kinematic Analysis and Prototype Experiments of Walking-Operating Multifunctional Hexapod Robot. J. Mech. Eng. Sci. 2013, 228, 2217–2232. [Google Scholar] [CrossRef]
  32. Khoshelham, K. Accuracy Analysis of Kinect Depth Data. In Proceedings of the International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, ISPRS Calgary 2011 Workshop, Calgary, AB, Canada, 29–31 August 2011; pp. 133–138.
  33. Khoshelham, K.; Elberink, S.O. Accuracy and Resolution of Kinect Depth Data for Indoor Mapping Applications. Sensors 2012, 12, 1437–1454. [Google Scholar] [CrossRef] [PubMed]
