
Tarantula: Design, Modeling, and Kinematic Identification of a Quadruped Wheeled Robot

1
Engineering Product Development Pillar, Singapore University of Technology and Design (SUTD), Singapore 487372, Singapore
2
Department of Mechanical Engineering, Politecnico di Milano, 20133 Milan, Italy
*
Author to whom correspondence should be addressed.
Appl. Sci. 2019, 9(1), 94; https://doi.org/10.3390/app9010094
Received: 7 November 2018 / Revised: 10 December 2018 / Accepted: 17 December 2018 / Published: 27 December 2018
(This article belongs to the Special Issue Advanced Mobile Robotics)

Abstract

This paper first presents the design and modeling of a quadruped wheeled robot named Tarantula. It has four legs, each with four degrees of freedom, with the proximal end attached to the trunk and a wheel for locomotion connected at the distal end. The two front legs and the two rear legs are actuated by two motors placed inside the trunk for simultaneous abduction or adduction. The robot is designed so that its topology can be manually reconfigured to match the cross-sections of the drainage system. A bi-directional suspension system using a single damper protects the trunk and the components inside it from shocks. A formulation for the kinematics of the wheels, coupled with the kinematics of each leg, is presented. We propose a cost-effective, on-site method to estimate the kinematic parameters and the effective trunk dimension after assembly of the quadruped robot using a monocular camera and ArUco markers instead of high-end devices such as a laser tracker or a coordinate measurement machine. The measurement technique is evaluated experimentally, and the same setup is used for trajectory tracking of the Tarantula. The experimental method for kinematic identification presented here can be easily extended to other mobile robots with serial-architecture legs.
Keywords: design and modeling; kinematics; kinematic identification; monocular vision

1. Introduction

Drains are an integral part of every modern city, and in most countries drainage systems are entirely subsurface. Statistics from Asia, Europe, and the United States show that major cities contain 4000 to 7000 km of drainage lines. The primary purpose of these surface and subsurface sewage systems is to remove excess water in a safe and timely manner, which plays a vital role in controlling water-related and water-borne diseases. Drainage systems also have disadvantages: they can harbor mosquito-borne diseases, clog, suffer internal damage due to ageing, or carry excessive traffic, which causes contamination of groundwater or overflow. Controlling these problems requires serious inspection, monitoring, and maintenance of drainage systems. At present, this task is labor-intensive, as shown in Figure 1a, and subsurface sewer lines add further difficulties, such as inaccessible areas with poor lighting and ventilation, and safety concerns associated with insect bites. Drainage cross-sections with the approximately symmetric design shown in Figure 1b are widely found in Singapore [1]. The width W of these drainages typically ranges from 1.1 to 1.8 m, w from 0.2 to 0.8 m, and the height h from 0.3 to 1.1 m. Thus, there is a requirement to design a robot that can traverse inside this type of drainage system.
A specially designed mechanism with locomotion suited to the internal geometry of the drainage system is essential. Inspection robots can be classified on the basis of locomotion as tracked, wheeled, and legged. PIRAT [2] is a small tracked robot designed for the quantitative assessment of sewer systems surveyed in real time. The autonomous body for inspection of liquid-filled pipes, "pipe rover/pearl rover", has six-legged propulsion [3]. In another work, an autonomous sewer cleaning robot that cleans underwater sewers was published [4]. KARO is a wheeled tethered robot for smart sensor-based sewer inspection equipped with intelligent multi-sensors [5]. KANTARO is a wheeled platform that uses a special mechanism called the "naSIR Mechanism" to access straight and even bent pipelines without the intelligence of sensors or controllers [6]. KURT is a six-wheeled vehicle that can fit in 600 mm diameter pipelines [7], and MAKRO is a worm-shaped, wheeled, multi-segmented, autonomous body for navigation in drain systems [8]. Wheeled robots with fixed morphology also find application in climbing ropes for inspection tasks, as in [9,10]. Even though a number of studies in the literature address monitoring or inspection of sewer systems, they mostly suffer from performance issues, such as limited modularity and the inability to adapt their height to the geometry of drains, that diminish their full potential. One major factor behind the performance degradation of inspection robot designs is their fixed morphology. We propose a quadruped robot model for drainage systems that are mainly constructed to carry excess water to reservoirs, unlike sewage pipes, which are used to dispose of solid waste and water. Tarantula has four-wheel drive and steering locomotion.
In the near future, the drain inspection task can include the identification of potential mosquito habitats and locations that are prone to mosquito-borne diseases, as presented in [11], using images grabbed from the camera mounted on Tarantula.
Quadruped robots are gaining increased attention among robotics researchers across a wide range of applications, as their unique morphology suits various kinds of field work and brings the advantage of efficiency. Several developments have been made since the pioneering research on quadruped robots at MIT [12] and Tokyo University. A large number of quadruped robots have since been developed, such as BISAM [13], which has reptile-like walking and stabilizes itself using a flexible spine. In another work, WARP1 [14] presents a standing posture controller for walking robots, which was successfully tested in simulations and experiments. The pioneering work of the Hirose and Fukushima robotics laboratory has focused mainly on legged robots for about 40 years. Typical quadruped robots born in this laboratory form the TITAN series [15,16,17], a development of sprawling-type quadruped robots capable of high velocities and energy-efficient walking; popular among these is TITAN VIII [17]. An introduction to several quadruped robots, along with their locomotion and control techniques, is presented in [18]. A large quadruped robot equipped with drilling equipment and capable of walking on different terrains by incorporating impedance control for the foot-ground contact was reported in [19]. These quadruped robots were mainly used in fields like mine detection and walking on uneven terrain; however, to access drainage systems with varying heights and cross-sections, the robot should be designed to reconfigure its morphology. In Table 1, a comparison is made among existing drainage and sewer inspection and cleaning robots. We use the word quadruped for Tarantula because the kinematics of each wheel is coupled with the kinematics of a leg; it is not used here in the context of walking, trotting, etc.
An interesting hybrid-locomotion robot named PAW, which used both wheels and legs to achieve gaits such as bounding, galloping, and jumping, was reported in [21]. In [21], each of the four legs had only a single degree of freedom, which was used to incline the body; a formulation was presented for inclined turning, with the wheel at the distal end providing the locomotion. Tarantula has four degrees of freedom (DOF) in each leg to change the height of the body, maintain contact with inclined surfaces, and provide independent steering action. The contribution of this work is the designed mechanism and the formulation for the coupled kinematics of legs and wheels, along with the identification of the kinematic parameters of each leg.
The mechanical structure and the mechanisms were designed and assembled in CAD. The kinematics of the legs is coupled with the wheel steering kinematics for the designed mobile robot Tarantula. The accuracy of the geometric parameters is critical for control and steering. Hence, it becomes essential to identify the kinematic parameters of the legs after the assembly of the robot. Kinematic identification is a well-established area that uses a geometric approach [22] or an optimization-based technique [23] to estimate the kinematic parameters. Kinematic calibration of a legged mobile robot is presented in [24]; it uses an optimization-based approach that requires the nominal or theoretical kinematic parameters as an initial guess to find the calibrated parameters and consequently improves positional accuracy. We use the geometric approach, which needs no prior information on the geometric parameters, and apply the circle point method formulation presented in [25] to identify the widely used kinematic representation defined by Denavit and Hartenberg [26]. However, Ref. [25] did not account for robots with prismatic joints. In this work, we extend the approach proposed in [25] to prismatic joints as well and demonstrate it with the kinematic identification of each of the four legs of the assembled quadruped robot.
Traditional strategies to identify the kinematic parameters of a robot involve taking the robot to a controlled environment to take pose measurements using a coordinate measurement machine (CMM) [27] or a laser tracker [28]. In this work, we propose the use of a monocular camera with ArUco markers and demonstrate it for the identification of Tarantula. Unlike the visual localization done using a single marker reported in [29], we use an ArUco markers map (AMM), which results in improved measurement accuracy. The measurement performance of this approach is evaluated using the standard industrial robot KUKA KR6 R900 (KUKA, Augsburg, Germany) [30]. Being cognizant of the above facts, we set the following objectives:
  • Design of the robotic platform that can change its height and is holonomic,
  • Formulation for kinematics of the wheeled locomotion coupled with the leg kinematics,
  • Identification of kinematic parameters after the assembly of the robot, using monocular vision and ArUco markers,
  • Trajectory tracking of the robot using the same set-up of monocular vision and ArUco markers.
This paper is divided into five sections. Section 2 lists the design requirements and discusses the mechanical layout, i.e., the system architecture of the Tarantula, in detail. Section 3 introduces the workspace analysis of the Tarantula along with the kinematics of the wheeled locomotion coupled with the leg kinematics. Experiments for the identification of the kinematic parameters of the assembled Tarantula, along with the trajectory tracking, are presented in Section 4. Finally, Section 5 concludes the paper.

2. Robot Architecture

In this section, the necessary design requirements for the quadruped robot specifically for the drainage inspection task are discussed first. Then, the mechanical design as per the requirement is discussed. Different components of the robot and the mechanisms developed are explained briefly.

2.1. Design Requirements

The central aspect of the Tarantula project is to design a robotic platform that can be utilized for inspection in the hazardous environment inside the drainage system. After surveying the specific drainage geometry and the inspection task to be performed by the robot, the fundamental design considerations are:
  • The robotic system should have the capability to move around inside the drain environment. Hence, it must be mobile, unlike fixed industrial robots.
  • The mobile platform should reconfigure its height as per the geometry of the drainages (Figure 1).
  • The mobile robot should be able to manoeuvre the sharp angular turns inside the drains with minimum turning radius.
  • The robot should be modular so that the components can be replaced easily in case of damage.
Considering the above limitations and requirements, and the properties of the cleaning robots reported in the literature, a four-legged, wheeled robot, reconfigurable in height, was conceptualized and developed. Inspired by nature’s bilateral body plan of animals and insects, four legs with a reconfigurable structure were used. Taking advantage of the symmetry of the terrain shown in Figure 2 and its variable height, it is useful to emulate in the designed robot the gait shown by the skater in [31], where the height is changed while maintaining the contact of the wheels with the ground.

2.2. Mechanical Layout

The Tarantula robot is shown in Figure 3. It has four legs, each with four degrees of freedom (DOFs). The four DOFs were provided in each leg to change the height as per the geometry of the drainage systems and to steer each wheel independently while keeping it in contact with the ground. The four DOFs are constituted by a revolute (R), a prismatic (P), and two revolute (R) joints, forming an RPRR mechanism. The wheel attached at the end of each leg is considered the end-effector of that leg. The wheels provide the necessary locomotion. Tarantula is a manually reconfigurable robot (Figure 3), unlike the family of self-reconfigurable cleaning robots developed in [32,33,34,35,36]. The mechanisms of the robot are discussed next.

2.2.1. Trunk

Figure 3 shows the trunk with the four legs attached to it. Note that the trunk body has U- and V-cross-sections, selected as per the geometry of the drainage cross-section (Figure 2). Figure 3 shows the two manually reconfigurable states in which the robot can be placed. This feature helps place the trunk of the robot parallel to the drain section. Three passive wheels each were provided on the top and the bottom of the trunk to prevent it from rubbing against the ground. Inside the trunk, a mechanism was placed for the simultaneous actuation, in the frontal plane, of the two proximal revolute joints of the front legs (#1,1, #1,2) and of the two rear legs (#1,3, #1,4). The trunk contains the necessary electronics, energy source, and sensors. It has a suspension mechanism designed to account for both the upright and upside-down configurations, to safeguard the robot against jerks transmitted from the ground.
A. Simultaneous Abduction/Adduction Mechanism
The platform is designed specifically for the drain inspection task, with the drain cross-section shown in Figure 1. Figure 4 shows the mechanism assembled inside the trunk to provide the revolute action of each leg. A single motor is used to obtain the simultaneous abduction or adduction of two adjacent legs in the frontal plane. This is achieved by transmitting motion from the actuator placed inside the trunk along its length in the sagittal plane. The motor shaft is connected to the gearbox and then to the worm (W), which transfers the motion to the worm wheel (WW). The shaft on which the worm wheel is mounted is supported by a boss attached to the chassis, and the shaft ends carry two bevel (B) gears. The bevel gears transmit the rotational motion to the proximal revolute joints of the legs. Here, two chain (C) and sprocket (S) arrangements were used to actuate each leg. The two chains connected to each leg via sprockets provide the required stability and strength while actuating the legs. This arrangement is suitable for the constrained space, and it also helps make the legs modular, since a leg can easily be replaced by removing the pins. The actuation of the proximal joints is limited to between zero and 180 degrees.
B. Suspension Mechanism
A bidirectional suspension mechanism using a single damper was designed and attached inside the trunk. The adaptive-linkage suspension mechanism was designed to work in two configurations: one with the U-section facing the ground and the other with the V-section facing the ground, as shown in Figure 3. The damper mechanism is attached to the chassis, and the damper piston is connected to the cap on which the stabilizer bars rest. Figure 5b shows the upside-down position of the mechanism. The Y-swing suspension linkage Y1 is attached to the motor shaft, on which the driving sprockets $S_{1,1}$ and $S_{2,1}$ are placed on both sides. These two driving sprockets are connected to the driven sprockets $S_{3,1}$ and $S_{4,1}$ using chains $C_{1,1}$ and $C_{2,1}$, respectively; similarly for the other legs, with Y2 shown in Figure 5b. The same mechanism is placed for the rear legs. This arrangement provides suspension that cushions the actuators and the electronic circuits as well. The compression length is approximately 10 mm. Figure 5c shows the top and front views of the trunk with labeled components.

2.2.2. Telescopic Extension and Distal Revolute Joint

Figure 5d shows the arrangement of two pairs of the bevel gears with shafts used to actuate the screw to get the telescopic action of the leg. Two motors were attached to the body of the telescopic link. The proximal motor, i.e., close to the trunk was used to actuate the telescopic screws, and the distal one was used to provide the revolute action parallel to the abduction/adduction motion. The distal revolute joint was helpful in keeping the wheels in contact with the flat or inclined terrain (Figure 1). The kinematics of the mechanism is discussed in Section 3.

2.2.3. Steering and Wheel Suspension

The wheeled modular mechanism provides the mobility of the robot. Each of the four identical wheel modules has an actuator for steering and an in-wheel motor for propulsion. The steering action provided to each wheel gives the necessary four-wheel steering and four-wheel drive (FWSD). The FWSD is essential, as the terrain of the drainage system can have sharp turns and curvatures. The control problem is non-trivial for FWSD, but given the requirement of moving the robot at relatively low speed (1.8 to 2.5 km/h) using tethered communication, a simple controller is sufficient. The suspension system, with three compressed springs (s1, s2, and s3) attached to each wheel as shown in Figure 6, provides the required traction to the four wheels. It also helps safeguard the motor and the micro-controller mounted on the wheel hub.

2.2.4. Tarantula Electronics

Tarantula is controlled using a simple mechatronic system that uses the mechanical model of the vehicle and the steering model to actuate the mechanisms. The actuation and locomotion of the wheels are achieved through coordination between the micro-controller and the actuators. An Arduino ATmega2560 micro-controller was mounted inside the trunk and programmed to carry out three major functions, namely: (a) generating control signals to the motor driver that controls the motor speed; (b) receiving the feedback of the motor positions; and (c) obtaining the user command from the remote device or the computer. To reduce the number of wires connecting the motors to the controller, the controller area network (CAN) bus interface was used. Thin shielded cables with connectors were used to connect the three Maxon motors (DCX22S; $M_{11}$, $M_{21}$, $M_{31}$ as shown in Figure 5), which control the motor modules, i.e., the telescopic action, the distal revolute joint, and the steering, with the CAN bus interface. The two motors placed inside the trunk for the simultaneous abduction and adduction, namely $M_{12}$ and $M_{34}$ (Figure 4), were connected to the micro-controller separately.
The 24-volt lithium polymer batteries kept inside the trunk body cover serve as the power source. The switching power supplies fitted inside the servos allow running the servos efficiently at voltages between 8 and 24 volts. These regulators allow thinner wires between the modules to supply sufficient power to the servos. Waterproof skateboard wheels with hub motors were used; their speed and rotation period are defined by pulse-width modulation (PWM) signals from the micro-controller. The system architecture of a single leg is shown in Figure 7. The camera feedback was taken directly to the laptop for the image processing task, as discussed in Section 4. A software interface was developed to provide the basic locomotion and to reconfigure the height of the robot. In this reported version of Tarantula, we have not placed any external sensors, i.e., proximity, ultrasonic, infrared (IR) sensors, LiDAR (to map the area), etc. However, the provision for these exists, and they are part of future work. The power distribution and management system, as done in [36,37], will also be carried out as part of future work.

3. Modeling and Simulation

This section presents the mathematical modeling for the kinematics of the wheeled robot Tarantula coupled with its legged kinematics.

3.1. Kinematic Modeling

The forward kinematics of the robot is modeled using the DH convention [26]. The Denavit–Hartenberg (DH) convention uses four independent parameters, namely the joint offset $b_i$, joint angle $\theta_i$, link length $a_i$, and twist angle $\alpha_i$ of the ith link, to represent the transformation between two consecutive frames, say $i$ and $(i+1)$, in a kinematic chain. The DH convention used in this paper, with the homogeneous transformation matrix (HTM) for a single link, is presented in Appendix A. Figure 8 shows the kinematic diagram of the robot platform and the legs. The DH parameters of a single leg of the Tarantula robot are listed in Table 2.
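Under this convention, the per-link HTM of Appendix A can be written as a short function. This is a minimal sketch of the standard DH transform, $\mathbf{T} = \mathrm{Rot}_z(\theta)\,\mathrm{Trans}_z(b)\,\mathrm{Trans}_x(a)\,\mathrm{Rot}_x(\alpha)$; the numeric parameter values of Table 2 are not reproduced here:

```python
import numpy as np

def dh_transform(b, theta, a, alpha):
    """Homogeneous transformation matrix of one link from its DH
    parameters: joint offset b, joint angle theta, link length a,
    and twist angle alpha (standard DH convention)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      b],
        [0.0,     0.0,      0.0,    1.0],
    ])
```

Chaining one such matrix per row of Table 2 yields the leg transform of Equation (1).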
Figure 8b highlights the second leg and the joint axis vectors attached at each of the joints. The joint axis direction of each joint is denoted by its z-axis, and $W_2$ is the frame attached at the point of contact of the wheel. Assuming all the legs are symmetric, the pose of the wheel $W_k$ w.r.t. the frame $F_k$, attached to the trunk near the proximal revolute joint, was calculated by successive multiplication of the homogeneous transformation matrices (HTMs) as:
$$\mathbf{T}_{F,k}^{W,k} = \mathbf{T}_{F,k}^{1}\,\mathbf{T}_{1}^{2}\,\mathbf{T}_{2}^{3}\,\mathbf{T}_{3}^{4}\,\mathbf{T}_{4}^{W,k}. \qquad (1)$$
The subscript k denotes the four legs of the quadruped, i.e., $k = 1, \ldots, 4$. Here, it is assumed that the geometry of the legs is identical, and hence the DH parameters remain the same for the four legs. After substituting the values of the DH parameters and post-multiplying the HTMs, the pose of the wheel in the frame attached to the proximal revolute joint is:
$$\mathbf{T}_{F,k}^{W,k} = \begin{bmatrix} C\theta_{13,k}\,C\theta_{4,k} & C\theta_{13,k}\,S\theta_{4,k} & S\theta_{13,k} & b_{4,k}S\theta_{13,k} + a_{1,k}C\theta_{1,k} + b_2 S\theta_{1,k} \\ S\theta_{13,k}\,C\theta_{4,k} & S\theta_{13,k}\,S\theta_{4,k} & -C\theta_{13,k} & 0 \\ -S\theta_{4,k} & C\theta_{4,k} & 0 & b_{4,k}C\theta_{13,k} - a_{1,k}S\theta_{1,k} + b_2 C\theta_{1,k} \\ 0 & 0 & 0 & 1 \end{bmatrix}, \qquad (2)$$

where $C$ and $S$ denote the cosine and sine functions and $\theta_{13,k} \equiv \theta_{1,k} + \theta_{3,k}$, since the first and third revolute joints have parallel axes.
The above expression is written in the fixed frame attached to the trunk of the robot by pre-multiplying it with the HTM of that frame (coordinates and orientation shown in Figure 8):
$$\mathbf{T}_{T}^{W,k} = \mathbf{T}_{T}^{F,k}\,\mathbf{T}_{F,k}^{W,k}. \qquad (3)$$
The first three elements in the fourth column of Equation (2) give the position of the wheel, i.e., $[x_{w,k},\, y_{w,k},\, z_{w,k}]^T$. The y-coordinates are explicitly shown in Figure 8 and do not vary w.r.t. the frame attached to the trunk. The two-dimensional graphical representation of the workspace of a single leg is depicted in Figure 9a and, with four legs considering the constraint in Equation (5), in Figure 9b.
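The wheel-position terms in the fourth column of Equation (2) can be evaluated directly when sweeping the workspace. A minimal sketch, with placeholder values for the link dimensions $a_1$ and $b_4$ (the identified values are obtained in Section 4.3):

```python
import numpy as np

def wheel_position(theta1, b2, theta3, a1=0.10, b4=0.05):
    """Wheel-contact position (x, z) in the leg frame F_k, taken from the
    position column of the leg HTM; y is constant in this frame.
    a1 (link length) and b4 (distal offset) are placeholder values in metres."""
    t13 = theta1 + theta3
    x = b4 * np.sin(t13) + a1 * np.cos(theta1) + b2 * np.sin(theta1)
    z = b4 * np.cos(t13) - a1 * np.sin(theta1) + b2 * np.cos(theta1)
    return x, z

# Sampling theta1, theta3, and the prismatic extension b2 over their
# ranges traces out the planar leg workspace of Figure 9a.
```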
The kinematic constraint, or dependency, utilized for the simultaneous abduction and adduction of all four legs as per the actuation mechanism in the trunk is written as:
$$\theta_{1,1} = \theta_{1,4} = \theta_{1,2} = \theta_{1,3}. \qquad (4)$$
Another constraint to maintain the contact of the four wheels with the inclined surface can be written as:
$$\theta_{3,1} = \theta_{1,1} \pm \psi, \quad \theta_{3,2} = \theta_{1,2} \pm \psi, \quad \theta_{3,3} = \theta_{1,3} \pm \psi \quad \text{and} \quad \theta_{3,4} = \theta_{1,4} \pm \psi, \qquad (5)$$
where ψ is the angle of inclination of the pavement as shown in Figure 1. The above constraints for any flat surface perpendicular to the direction of gravity are obtained by substituting ψ = 0. In Section 4.3, the kinematic parameters are identified along with the effective dimension of the trunk in the plane $\Pi_T$ (Figure 8a).
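The constraints of Equations (4) and (5) can be folded into a small helper. The per-leg choice of the $\pm$ sign in Equation (5) depends on which inclined pavement each wheel rests on, so it is passed in explicitly here as an assumed argument of this sketch:

```python
def constrained_angles(theta_act, psi, signs=(+1, +1, -1, -1)):
    """Joint angles of the four legs under the trunk coupling and the
    wheel-contact constraint.
    theta_act : proximal angle set by the trunk actuation (rad).
    psi       : pavement inclination (rad); psi = 0 gives the flat-ground case.
    signs     : per-leg choice of the +/- sign (an assumption of this sketch)."""
    theta1 = [theta_act] * 4                               # coupled proximal joints
    theta3 = [t + s * psi for t, s in zip(theta1, signs)]  # distal compensation
    return theta1, theta3
```

With `psi = 0`, the distal angles equal the proximal ones, recovering the flat-surface case stated above.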

3.2. Kinematics of Wheel

The kinematics of the platform depends upon the arrangement and type of the wheels selected; these are discussed in detail in [38]. The various architectures of mobile robots rely on the choice of wheel arrangement, such as differential-drive robots, omnidirectional wheels, traction wheels, etc. Four motorized and steered standard wheels were selected, which results in greater maneuverability. The formulation is presented for steering with the varying dimensions of the base plane $\Pi_B$ (Figure 8b), which depend on the leg kinematics.
Figure 10a shows the positions of the four wheels ($W_1$, $W_2$, $W_3$, $W_4$) on the base plane. Note that the locations of the wheels vary with the joint angles $\theta_{1,k}$ of the legs; this changes the dimensions of the rectangle defined by the points of contact of the wheels in the base plane $\Pi_B$. The position vector of each wheel in the frame attached to the base is denoted by $\mathbf{l}_{w,k}$. The magnitude and the angle subtended by the position vector are given by:
$$l_{w,k} = \sqrt{x_k^2 + y_k^2} \quad \text{and} \quad \gamma_k = \operatorname{atan2}(y_k,\, x_k), \qquad (6)$$

where $\operatorname{atan2}$ is the two-argument (quadrant-aware) inverse tangent. The values of $x_k$ are found from Equation (2), and $y_k$ is obtained from the geometry. These points of contact are experimentally identified in Section 4.3. In this section, the generalized kinematic modeling of the four wheels is presented. The state vector of the robot's base frame is defined as:
$$\mathbf{v}_B = \begin{bmatrix} \dot{x}_B & \dot{y}_B & \dot{\alpha}_B \end{bmatrix}^T, \qquad (7)$$
where $\dot{x}_B$, $\dot{y}_B$ are the velocity components along the x- and y-directions and $\dot{\alpha}_B$ is the angular velocity about the z-axis, as shown in Figure 10b. Now, taking the components of the velocity vectors at the origin of the base frame, the rolling constraint of the kth wheel is written as:
$$\begin{bmatrix} \sin(\gamma_k + \beta_k) & -\cos(\gamma_k + \beta_k) & -l_{w,k}\cos\beta_k \end{bmatrix} \mathbf{v}_B - r_w\,\dot{\varphi}_k = 0, \qquad (8)$$
where $\dot{\boldsymbol{\varphi}} \equiv [\dot{\varphi}_1 \;\; \dot{\varphi}_2 \;\; \dot{\varphi}_3 \;\; \dot{\varphi}_4]^T$ is the vector of wheel rotation rates and $r_w$ is the wheel radius. In addition, the no-sliding constraint of the kth wheel is written as:
$$\begin{bmatrix} \cos(\gamma_k + \beta_k) & \sin(\gamma_k + \beta_k) & l_{w,k}\sin\beta_k \end{bmatrix} \mathbf{v}_B = 0, \qquad (9)$$
where $l_{w,k}$ and $\gamma_k$ are found as per Equation (6). Equations (8) and (9) are written for all four wheels as:
$$\mathbf{C}_R\,\mathbf{v}_B - \mathbf{W}\dot{\boldsymbol{\varphi}} = \mathbf{0}, \qquad (10)$$
$$\mathbf{C}_S\,\mathbf{v}_B = \mathbf{0}, \qquad (11)$$

where, from Equation (8), $\mathbf{W} = r_w\,\mathbf{I}_{4\times 4}$.
The matrices $\mathbf{C}_R$ and $\mathbf{C}_S$ are defined as
$$\mathbf{C}_R \equiv \begin{bmatrix} \sin(\gamma_1+\beta_1) & -\cos(\gamma_1+\beta_1) & -l_{w,1}\cos\beta_1 \\ \sin(\gamma_2+\beta_2) & -\cos(\gamma_2+\beta_2) & -l_{w,2}\cos\beta_2 \\ \sin(\gamma_3+\beta_3) & -\cos(\gamma_3+\beta_3) & -l_{w,3}\cos\beta_3 \\ \sin(\gamma_4+\beta_4) & -\cos(\gamma_4+\beta_4) & -l_{w,4}\cos\beta_4 \end{bmatrix} \quad \text{and} \quad \mathbf{C}_S \equiv \begin{bmatrix} \cos(\gamma_1+\beta_1) & \sin(\gamma_1+\beta_1) & l_{w,1}\sin\beta_1 \\ \cos(\gamma_2+\beta_2) & \sin(\gamma_2+\beta_2) & l_{w,2}\sin\beta_2 \\ \cos(\gamma_3+\beta_3) & \sin(\gamma_3+\beta_3) & l_{w,3}\sin\beta_3 \\ \cos(\gamma_4+\beta_4) & \sin(\gamma_4+\beta_4) & l_{w,4}\sin\beta_4 \end{bmatrix}, \qquad (12)$$
where the values of $\gamma_k$ for $k = 1, \ldots, 4$ are obtained from Equation (6). The degree of maneuverability $M$ was found as the sum of the mobility $m$, i.e., the dimensionality of the null space of the matrix $\mathbf{C}_R$, and the steerability $s$, i.e., the rank of the matrix $\mathbf{C}_S$. In short, the maneuverability is found to be 3 ($M = m + s = 0 + 3 = 3$). The state vector in the inertial frame was found by pre-multiplying the state vector by the rotation matrix between the inertial and the trunk frames. The experimental results for the trajectory followed by the robot are presented in Section 4.
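The rank computations above can be checked numerically from the constraint rows of Equation (12). The row signs follow the usual steered-standard-wheel convention, and the sample angles used in the test are placeholders, not Tarantula's actual geometry:

```python
import numpy as np

def constraint_matrices(gamma, beta, l_w):
    """Stack the rolling (C_R) and no-slide (C_S) constraint rows of four
    steered standard wheels; gamma, beta, l_w are length-4 sequences."""
    g, b, l = (np.asarray(v, dtype=float) for v in (gamma, beta, l_w))
    C_R = np.column_stack([np.sin(g + b), -np.cos(g + b), -l * np.cos(b)])
    C_S = np.column_stack([np.cos(g + b),  np.sin(g + b),  l * np.sin(b)])
    return C_R, C_S

def maneuverability(C_R, C_S):
    """M = m + s with m = dim null(C_R) and s = rank(C_S), as defined in the text."""
    m = C_R.shape[1] - np.linalg.matrix_rank(C_R)
    s = np.linalg.matrix_rank(C_S)
    return m + s
```

For a generic wheel placement both matrices have rank 3, so $m = 0$, $s = 3$, and $M = 3$, as stated above.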

4. Experiments

In the Introduction, it was mentioned that fabrication tolerances and assembly errors result in differences between the working model and the CAD design. In addition, the robot positioning performance relies on the kinematic model of the robot. This led us to experimentally identify the kinematic parameters, namely the DH parameters, by estimating the joint axis vectors. Here, we extend the circle point method (CPM) to the prismatic joint present in the RPRR mechanism of each leg of Tarantula.

4.1. Setup

Unlike the visual localization done using a single marker reported in [29], we use an ArUco markers map (AMM) for two purposes: identification of the kinematic parameters of each leg, and trajectory tracking of the robot. The advantage of an AMM over a single calibration grid or a single marker is that a single marker cannot always be kept in the line of sight of the camera. Figure 11 shows the markers, printed from the ArUco library (ARUCO_MIP_36H12) [39,40], on the flat boards.
These multiple markers are distinct; each has a dimension of 99.2 × 99.2 mm and was glued on the flat board without any waviness, which could otherwise cause errors in the pose readings. The process requires building a pairwise marker-map database by taking a sequence of images, with common markers appearing in consecutive images, using the calibrated camera. The generated database contains the relative pose of each marker with respect to the others [29]. We placed the markers such that multiple markers (more than five) were visible in a single image or frame. This helped in achieving better pose estimation of the camera using an optimization that minimizes the reprojection error of the markers in the observed frame [29]. Two approaches for using cameras to make position measurements are presented in [41]: monocular cameras [43,44] and stereo vision [42]. Stereo cameras allow direct pose measurement, but they require processing of corresponding images, which makes them computationally expensive. Monocular camera usage for pose measurement, in contrast, is an area of research interest because of the following aspects: (i) synchronizing the captured images taken from multiple cameras is not needed; (ii) pose readings can be taken over a larger workspace without requiring the fields of view of two or more cameras to overlap; (iii) the space required for mounting is reduced. Therefore, a monocular camera is used for pose measurement in the kinematic identification and trajectory-following tests.
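At run time, the AMM reduces to chaining homogeneous transforms: the offline database stores each marker's pose in a reference marker's frame, so any detected marker anchors the camera in that common frame even when the reference marker is out of view. A numpy sketch with hypothetical poses (the actual detection and reprojection-error optimization are done with the ArUco library [39,40]):

```python
import numpy as np

def compose(T_ab, T_bc):
    """Pose of frame c in frame a, from poses a<-b and b<-c."""
    return T_ab @ T_bc

def translation(t):
    """Homogeneous transform with identity rotation (placeholder poses)."""
    T = np.eye(4)
    T[:3, 3] = t
    return T

# Offline map database entry: pose of marker 7 in the frame of reference
# marker 0 (hypothetical: 0.5 m along x).
T_0_7 = translation([0.5, 0.0, 0.0])

# Online: pose of the camera w.r.t. detected marker 7 in the current frame
# (hypothetical: 1.2 m in front of the marker).
T_7_cam = translation([0.0, 0.0, 1.2])

# Camera pose in the common map frame.
T_0_cam = compose(T_0_7, T_7_cam)
```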
Some details of the approach during the experiments are worth noting. The experiments were carried out in ambient lighting conditions. For kinematic identification, the camera was mounted on the last link of a leg, and each joint was actuated one at a time. The frames were grabbed at 40 frames per second using the Chameleon3 camera from PtGrey (FLIR Integrated Imaging Solutions Inc., Richmond, BC, Canada) [45] with a 10 mm fixed-focal-length lens attached. The positions of the markers must not be changed after being placed for a given set of readings. By placing the markers in a spread-out fashion over a larger area, it was possible to actuate the robot's joints over the full workspace, as shown in Figure 9b. The frames were grabbed using the camera, and the pose was obtained using the flow diagram presented in Figure 11. The evaluation of the proposed measurement approach is presented next.

4.2. Measurement Performance

The proposed measurement technique was evaluated by mounting the camera on a KUKA KR6 R900 robot, which has a position repeatability of 0.03 mm [30]. The robot's end-effector, with the camera mounted, was moved along the x-, y- and z-axes of the robot's world frame. The initial distance of the camera from the markers was 3.4 m. The initial and final coordinates of the robot were recorded from the teach pendant (Figure 12a) for the movement along each axis. The pose obtained from the camera is plotted in Figure 12b, and the measured angle between the motions about each pair of axes is shown in Figure 12c. Taking the readings from the robot as the ground truth, the trajectory of the end-effector was compared with the one obtained using the monocular camera. Table 3 lists the ideal distances and those measured using the markers. The variation along the y-direction is the highest; in this case the camera was moving towards the markers, i.e., perpendicular to the marker plane. In the other two directions, the variations are within 3 mm. Hence, motion in a plane parallel to the markers is measured more accurately than motion in depth.

4.3. Identification of Kinematic Parameters of Tarantula

After the assembly of Tarantula, the positions of the proximal revolute joints and the kinematic parameters of each leg must be identified. The mathematical analysis of the measured data to obtain the joint axis vectors (JAVs), presented in [25], is summarized here for brevity, together with its extension to prismatic joints. The points traced by the end-effector, i.e., the three-dimensional (3D) coordinates x_i ≡ [x_i, y_i, z_i]^T, were logged and stacked in a matrix A (Equation (13)). Subtracting the mean of the logged points from the rows of A yields the zero-mean matrix A̅. The two matrices of 3D data points, A and A̅, are shown below:
A = [\,x_1 \;\; x_2 \;\; \cdots \;\; x_m\,]^T; \qquad \bar{A} = [\,(x_1 - \bar{x}) \;\; (x_2 - \bar{x}) \;\; \cdots \;\; (x_m - \bar{x})\,]^T,
where \bar{x} \equiv [\bar{x} \; \bar{y} \; \bar{z}]^T = \frac{1}{m}\sum_i [x_i \; y_i \; z_i]^T, with m being the number of measurements. Applying the singular value decomposition (SVD) [46] to the matrix \bar{A}, two square orthogonal matrices V and U and a rectangular matrix D were obtained as:
\bar{A} = V_{m \times m}\, D_{m \times 3}\, U^T_{3 \times 3}.
The orthonormal columns corresponding to the singular values (SVs) are the columns of the matrix U ≡ [u_1 u_2 u_3]. The direction of the joint axis, represented by the unit vector n in Figure 13a for a revolute joint and Figure 13b for a prismatic joint, is given as:
n \equiv u_3 \quad \text{for revolute joints},
n \equiv u_1 \quad \text{for prismatic joints}.
The above equations give the direction of the joint axis vector. For a point on the JAV of a revolute joint, the center of the circle c, i.e., the center of rotation, was obtained using the circle-fitting method presented in [25], where c ≡ c_1 u_1 + c_2 u_2 + x̄, with (c_1, c_2) being the center of the fitted circle in the plane spanned by the vectors u_1 and u_2. For a prismatic joint, the mean of the traced points, i.e., c ≡ x̄, was taken as the point on the JAV. The plane of link movement Π can then be defined as:
\Pi \equiv [\,n^T \;\; -n^T \bar{x}\,]^T, \quad \text{i.e., the plane } n^T x = n^T \bar{x} \text{ through } \bar{x} \text{ with normal } n.
The above information, i.e., the joint axis directions n and the points c on them, was used to extract the DH parameters according to Algorithm 1. The identified parameters are listed in Table 4. The effective length and breadth of the trunk were also identified from the JAVs.
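The JAV computation above can be condensed into a few lines of numpy. Note that the circle fit below is a simple algebraic (Kåsa) fit in the (u_1, u_2) plane, used here as a stand-in for the circle-fitting method of [25]; the function name is ours:

```python
import numpy as np

def joint_axis_from_points(pts, joint_type):
    """Estimate the joint axis vector direction n and a point c on it from
    3D camera positions logged while actuating one joint (SVD method)."""
    x_bar = pts.mean(axis=0)
    A_bar = pts - x_bar                       # zero-mean data matrix
    U = np.linalg.svd(A_bar)[2].T             # columns u1, u2, u3 (right SVs)
    if joint_type == "revolute":
        n = U[:, 2]                           # normal of the circle's plane
        p = A_bar @ U[:, :2]                  # points in the (u1, u2) plane
        # algebraic (Kasa) circle fit: [2*p, 1] [c1, c2, d]^T = |p|^2
        M = np.column_stack([2.0 * p, np.ones(len(p))])
        (c1, c2, _), *_ = np.linalg.lstsq(M, (p ** 2).sum(axis=1), rcond=None)
        c = c1 * U[:, 0] + c2 * U[:, 1] + x_bar
    else:                                     # prismatic
        n = U[:, 0]                           # direction of largest spread
        c = x_bar                             # mean of the traced points
    return n, c
```

The smallest singular direction is the circle's normal for a revolute joint, while the largest singular direction is the line of travel for a prismatic joint, exactly as in Equations (15) and (16).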
The experimental set-up for the identification experiments is shown in Figure 14b,c. The robot's trunk was rigidly fixed to the frame. Each leg joint was actuated one at a time, and the frames were grabbed and processed as per the flow diagram shown in Figure 11. An advantage of the circle-point method is that the coordinates of the proximal joints are also obtained, which helps in defining the platform plane formed by the proximal joints' positions.
The coordinates of the centers of rotation of the four proximal joints, depicted in Figure 14d, were found as J_{1,1} ≡ [0.7038, 1.8386, 0.1208], J_{2,1} ≡ [0.8728, 1.8400, 0.1219], J_{3,1} ≡ [0.8693, 2.1265, 0.1205] and J_{4,1} ≡ [0.6994, 2.1242, 0.1205], respectively, in meters. The dimensions of the rectangle fitted to these four points were found to be l_t = 0.2855 m and d_t = 0.1696 m. The identified values were used as y_k in Equation (6), i.e., the magnitude of y_k is half the effective length of the trunk. Next, the identified leg parameters were compared with the design values, and an initial path-following test was performed.
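A minimal sketch of the rectangle fit, assuming a principal-axes construction (the paper does not specify the fitting procedure, so this is our illustration):

```python
import numpy as np

def trunk_dimensions(J):
    """Effective trunk length l_t and breadth d_t from the identified
    centres of rotation of the four proximal joints (rows of J, metres)."""
    d = J - J.mean(axis=0)            # centre the four points
    axes = np.linalg.svd(d)[2][:2]    # in-plane principal axes of the points
    # side length = twice the mean absolute projection onto each axis
    ext = sorted((2 * np.mean(np.abs(d @ a)) for a in axes), reverse=True)
    return ext[0], ext[1]
```

For four points forming an exact rectangle, the principal axes coincide with the rectangle's sides, so the two returned values are the side lengths.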
To assess the identified DH parameters of the four legs obtained using monocular vision, we compared them against the CAD parameters listed in Table 2. A quintic trajectory based on the 3–4–5 interpolating polynomial [47] was used for the prismatic and revolute joints, with l and θ respectively, as:
\{l, \theta\}(t) = a_0 + a_1 t + a_2 t^2 + a_3 t^3 + a_4 t^4 + a_5 t^5 \equiv \sum_{i=0}^{5} a_i t^i,
where a_i (i = 0, …, 5) are the coefficients derived from the initial and final states of the joints. The velocities and accelerations for linear actuation and rotation, i.e., l̇, θ̇ and l̈, θ̈, are found by taking the first- and second-order derivatives of Equation (17); the detailed trajectory equations and their derivatives are discussed in [47]. The trajectory of Equation (17) was used to actuate each joint from its initial to its final position, i.e., over the range 0 to 90°, with a quintic profile. Figure 15a shows the variation in the position of the wheel hub plotted in each of the leg frames. The differences in the X-, Y- and Z-positions of each leg with respect to the CAD parameters are plotted in Figure 15a–c, respectively. The variations in the x- and y-directions are significant, so using the identified values in the kinematic model is worthwhile.
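For joints that start and end at rest, the coefficients of Equation (17) reduce to the well-known 3–4–5 profile s(τ) = 10τ³ − 15τ⁴ + 6τ⁵ with τ = t/T [47]. A minimal sketch, with our function name:

```python
import numpy as np

def quintic_345(q0, qf, T, t):
    """3-4-5 interpolating polynomial: rest-to-rest motion with zero
    boundary velocity and acceleration (per Angeles [47])."""
    tau = np.clip(np.asarray(t, dtype=float) / T, 0.0, 1.0)
    s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5        # normalized position
    ds = (30 * tau**2 - 60 * tau**3 + 30 * tau**4) / T  # d s / d t
    dds = (60 * tau - 180 * tau**2 + 120 * tau**3) / T**2
    q = q0 + (qf - q0) * s
    return q, (qf - q0) * ds, (qf - q0) * dds
```

The same function serves both l(t) and θ(t); only q0, qf and T change.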
Algorithm 1 Algorithm to find the Denavit–Hartenberg (DH) parameters.
– Fix the camera on the last link of the legged robot and keep the trunk fixed, (as shown in Figure 14b).
For i = 1 to n
–  Move one joint at a time, starting from the first joint, while locking the rest. The camera position traces a circular arc for a revolute joint and a straight line for a prismatic joint.
–  Log the 3D positions of the camera using the ArUco marker-map set-up (Section 4). A revolute joint was actuated by an angle ϕ, and a prismatic joint by a distance l, while recording the video feed from the camera with the ArUco markers visible.
–  Find the centre of the circle using the 3D circle-fitting method for revolute joints, or the mean of the traced points for prismatic joints, with the direction of the joint axis vector as the normal of the plane of movement.
End for
For i = n to 1
–  Extract the DH parameters, i.e., b_i, a_i and α_i, using the JAVs as inputs to find the perpendicular distances and angles between successive joint axis vectors.
End for
–  Repeat the above steps for each leg.
–  With the coordinates of the centers of rotation of the proximal joints, estimate the effective dimensions of the trunk plane Π_T as shown in Figure 8.
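The extraction step in the second loop reduces, for each pair of successive JAVs, to computing the twist angle and the common-normal (link) length. A minimal numpy sketch under our naming assumptions:

```python
import numpy as np

def twist_and_link_length(n1, c1, n2, c2, eps=1e-9):
    """Twist angle alpha_i (degrees) and link length a_i between two
    successive joint axes, each given as (direction n, point c on axis)."""
    n1, n2 = n1 / np.linalg.norm(n1), n2 / np.linalg.norm(n2)
    cross = np.cross(n1, n2)
    alpha = np.arctan2(np.linalg.norm(cross), np.dot(n1, n2))
    d = c2 - c1
    if np.linalg.norm(cross) < eps:
        # parallel axes: distance of c2 from the line through c1 along n1
        a = np.linalg.norm(d - np.dot(d, n1) * n1)
    else:
        # skew axes: project the joining vector onto the common normal
        a = abs(np.dot(d, cross)) / np.linalg.norm(cross)
    return np.degrees(alpha), a
```

The joint offsets b_i follow analogously from the projections of the common-normal foot points along each axis.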

4.4. Trajectory Tracking

The purpose of this section is to test the trajectory followed by the robot. The ArUco marker map was arranged as shown in Figure 16a, and the monocular camera was now placed inside the trunk. The same markers, arranged along two directions, were used to track the pose of the robot. The robot was commanded to move four meters forward, steer the wheels by 90°, and then traverse a further meter. Figure 16 shows the point-cloud data of the logged poses, plotted with the coordinate frame attached to one of the markers. The trajectory obtained after line fitting is shown in Figure 16b, with the measured angle of turn being 87.8°. In this case, frames were recorded every 100 milliseconds over a total trajectory time of 2.3 s. The distance traversed is shown in Figure 16c. The difference between the desired and obtained trajectories is mainly due to friction and uncertainties in the model. The limitations of this approach are pose measurement variations of up to ±3 mm at camera-to-marker distances of 4–5 m, and the sensitivity of marker detection to lighting conditions.
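The turn angle reported above follows from fitting straight lines to the pose points logged before and after the commanded turn. A sketch of such a fit (our construction, not necessarily the authors' exact procedure), taking the dominant singular direction of each segment:

```python
import numpy as np

def fitted_heading_angle(seg1, seg2):
    """Angle (degrees) between two straight lines fitted, via SVD, to the
    planar pose points logged before and after a commanded turn."""
    def direction(pts):
        d = pts - pts.mean(axis=0)
        # first right-singular vector = dominant direction of the segment
        return np.linalg.svd(d)[2][0]
    v1, v2 = direction(seg1), direction(seg2)
    cosang = abs(np.dot(v1, v2))          # line directions are sign-free
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
```

Applied to the two legs of the commanded 90° turn, this kind of fit yields the measured angle (87.8° in the experiment reported above).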

5. Conclusions and Future Works

In this paper, Tarantula's design, modeling, and kinematic parameter identification were presented. The robot was designed for the specific geometry of drains; it is modular and can adjust its height to suit the environment. The simultaneous abduction or adduction of the two legs in a frontal plane is provided by chain-sprocket mechanisms connected through a bevel-gear arrangement to a single motor. A specially designed bi-directional suspension mechanism fixed inside the trunk acts as a shock absorber. A limitation of the chain sprocket is that chain slack introduces rotational play.
To the best of the authors' knowledge, this is the first attempt to estimate the kinematic parameters of an assembled four-legged robot using a monocular camera mounted on it. The method also identified the effective dimensions of the trunk to which the proximal revolute joints are connected. A limitation of using monocular vision with fiducial markers for pose estimation is that the camera must keep markers of the generated map in view. We also performed experiments to evaluate the trajectory tracked by Tarantula using a similar set-up. Overall, the contributions of this paper are listed below:
  • Design of the modular robot Tarantula with the ability to reconfigure its height, mainly for inspecting the drains,
  • A mechanism designed for the simultaneous abduction/adduction of legs,
  • A methodology to identify the kinematic parameters of an assembled legged mobile robot using ArUco markers and monocular vision. First, using the calibrated camera, the poses of the ArUco markers are reconstructed in 3D space. Second, by moving each joint and capturing images, a set of tracked poses is determined. Finally, the DH parameters are evaluated.
Future research will mainly focus on two directions. The first is design optimization of the legs to reduce the total weight; since the design is modular, newly designed legs can easily be attached to the existing trunk. The second is to mount a Light Detection and Ranging (LiDAR) sensor near the trunk opening of Tarantula to map the drains.

Author Contributions

A.A.H. and M.R.E. conceived and designed the experiments; A.A.H. and K.E. performed the experiments; A.A.H. and K.E. analyzed the data; K.E., A.A.H. and M.S.T. contributed reagents/materials/analysis tools; A.A.H., K.E., M.S.T. and M.R.E. wrote the paper.

Funding

This work was financially supported by the National Robotics R&D Program Office, Singapore, under Grant No. RGAST1702 to the Singapore University of Technology and Design (SUTD), which is gratefully acknowledged.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. DH Parameters Notation Used [48]

In this appendix, the definitions of Denavit–Hartenberg (DH) parameters are presented for completeness of the paper. Note that four parameters, namely, b i , θ i , a i and α i , relate the transformation between two frames i and i + 1 which are rigidly attached to two consecutive links # ( i 1 ) and # ( i ) , respectively, as shown in Figure A1. Their notations and descriptions are summarized in Table A1.
Figure A1. Links and DH parameters.
The resulting coordinate transformation between the frames attached to links i and i + 1 is:
T_{i}^{i+1} = T_{b_i}\, T_{\theta_i}\, T_{a_i}\, T_{\alpha_i}
= \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & b_i \\ 0 & 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} C\theta_i & -S\theta_i & 0 & 0 \\ S\theta_i & C\theta_i & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} 1 & 0 & 0 & a_i \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & C\alpha_i & -S\alpha_i & 0 \\ 0 & S\alpha_i & C\alpha_i & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}
= \begin{bmatrix} C\theta_i & -S\theta_i C\alpha_i & S\theta_i S\alpha_i & a_i C\theta_i \\ S\theta_i & C\theta_i C\alpha_i & -C\theta_i S\alpha_i & a_i S\theta_i \\ 0 & S\alpha_i & C\alpha_i & b_i \\ 0 & 0 & 0 & 1 \end{bmatrix}.
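The closed-form matrix above can be verified numerically by composing the four elementary transforms; a numpy sketch:

```python
import numpy as np

def dh_transform(b, theta, a, alpha):
    """Closed-form DH transform T = T_b T_theta T_a T_alpha (Appendix A)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ ct, -st * ca,  st * sa, a * ct],
        [ st,  ct * ca, -ct * sa, a * st],
        [0.0,       sa,       ca,      b],
        [0.0,      0.0,      0.0,    1.0]])
```

Multiplying out translation along Z by b, rotation about Z by θ, translation along X by a, and rotation about X by α reproduces exactly this matrix.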
Table A1. Notations and descriptions of the DH parameters [26].
Parameter (Name) | Description
b_i (Joint offset) | Distance from X_i to X_{i+1} along Z_i
θ_i (Joint angle) | Counter-clockwise rotation from X_i to X_{i+1} about Z_i
a_i (Link length) | Distance from Z_i to Z_{i+1} along X_{i+1}
α_i (Twist angle) | Counter-clockwise rotation from Z_i to Z_{i+1} about X_{i+1}

References

  1. Zhang, S.X.; Pramanik, N.; Buurman, J. Exploring an innovative design for sustainable urban water management and energy conservation. Int. J. Sustain. Dev. World Ecol. 2013, 20, 442–454. [Google Scholar] [CrossRef]
  2. Kirkham, R.; Kearney, P.D.; Rogers, K.J.; Mashford, J. PIRAT—A system for quantitative sewer pipe assessment. Int. J. Robot. Res. 2000, 19, 1033–1053. [Google Scholar] [CrossRef]
  3. Bradbeer, R. The Pearl Rover Underwater Inspection Robot. In Mechatronics and Machine Vision; Billingsley, J., Ed.; Research Studies Press: Baldock, UK, 2000; pp. 255–262. [Google Scholar]
  4. Truong-Thinh, N.; Ngoc-Phuong, N.; Phuoc-Tho, T. A study of pipe-cleaning and inspection robot. In Proceedings of the International Conference on Robotics and Biomimetics (ROBIO), Karon Beach, Phuket, Thailand, 7–11 December 2011; pp. 2593–2598. [Google Scholar]
  5. Kuntze, H.; Schmidt, D.; Haffner, H.; Loh, M. KARO-A flexible robot for smart sensor-based sewer inspection. In Proceedings of the No Dig’95, Dresden, Germany, 19–22 September 1995; pp. 367–374. [Google Scholar]
  6. Nassiraei, A.A.; Kawamura, Y.; Ahrary, A.; Mikuriya, Y.; Ishii, K. Concept and design of a fully autonomous sewer pipe inspection mobile robot KANTARO. In Proceedings of the International Conference on Robotics and Automation, Roma, Italy, 10–14 April 2007; pp. 136–143. [Google Scholar]
  7. Kirchner, F.; Hertzberg, J. A prototype study of an autonomous robot platform for sewerage system maintenance. Auton. Robots 1997, 4, 319–331. [Google Scholar] [CrossRef]
  8. Streich, H.; Adria, O. Software approach for the autonomous inspection robot MAKRO. In Proceedings of the International Conference on Robotics and Automation, New Orleans, LA, USA, 26 April–1 May 2004; Volume 4, pp. 3411–3416. [Google Scholar]
  9. Baghani, A.; Ahmadabadi, M.N.; Harati, A. Kinematics modeling of a wheel-based pole climbing robot (UT-PCR). In Proceedings of the 2005 IEEE International Conference on Robotics and Automation (ICRA 2005), Barcelona, Spain, 18–22 April 2005; pp. 2099–2104. [Google Scholar]
  10. Ratanghayra, P.R.; Hayat, A.A.; Saha, S.K. Design and Analysis of Spring-Based Rope Climbing Robot. In Machines, Mechanism and Robotics; Springer: Berlin, Germany, 2019; pp. 453–462. [Google Scholar]
  11. Fuchida, M.; Pathmakumar, T.; Mohan, R.E.; Tan, N.; Nakamura, A. Vision-based perception and classification of mosquitoes using support vector machine. Appl. Sci. 2017, 7, 51. [Google Scholar] [CrossRef]
  12. Raibert, M.H. Legged Robots That Balance; MIT Press: Cambridge, MA, USA, 1986. [Google Scholar]
  13. Berns, K.; Ilg, W.; Deck, M.; Albiez, J.; Dillmann, R. Mechanical construction and computer architecture of the four-legged walking machine BISAM. IEEE/ASME Trans. Mechatron. 1999, 4, 32–38. [Google Scholar] [CrossRef]
  14. Ridderstrom, C.; Ingvast, J. Quadruped posture control based on simple force distribution-a notion and a trial. In Proceedings of the International Conference on Intelligent Robots and Systems, Maui, HI, USA, 29 October–3 November 2001; Volume 4, pp. 2326–2331. [Google Scholar]
  15. Hirose, S.; Kato, K. Study on quadruped walking robot in Tokyo Institute of Technology-past, present and future. In Proceedings of the International Conference on Robotics and Automation, San Francisco, CA, USA, 24–28 April 2000; Volume 1, pp. 414–419. [Google Scholar]
  16. Kitano, S.; Hirose, S.; Endo, G.; Fukushima, E.F. Development of lightweight sprawling-type quadruped robot titan-xiii and its dynamic walking. In Proceedings of the International Conference on Intelligent Robots and Systems (IROS), Tokyo, Japan, 3–7 November 2013; pp. 6025–6030. [Google Scholar]
  17. Arikawa, K.; Hirose, S. Development of quadruped walking robot TITAN-VIII. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 96), Osaka, Japan, 8 November 1996; Volume 1, pp. 208–214. [Google Scholar]
  18. De Santos, P.G.; Garcia, E.; Estremera, J. Quadrupedal Locomotion: An Introduction to the Control of Four-Legged Robots; Springer Science & Business Media: Berlin, Germany, 2007. [Google Scholar]
  19. Montes, H.; Armada, M. Force control strategies in hydraulically actuated legged robots. Int. J. Adv. Robot. Syst. 2016, 13, 50. [Google Scholar] [CrossRef]
  20. Li, T.; Ma, S.; Li, B.; Wang, M.; Li, Z.; Wang, Y. Development of an in-pipe robot with differential screw angles for curved pipes and vertical straight pipes. J. Mech. Robot. 2017, 9, 051014. [Google Scholar] [CrossRef]
  21. Sharf, I. Dynamic Locomotion with a Wheeled-Legged Quadruped Robot. In Brain, Body and Machine; Springer: Berlin, Germany, 2010; pp. 299–310. [Google Scholar]
  22. Barker, L.K. Vector-Algebra Approach to Extract Denavit–Hartenberg Parameters of Assembled Robot Arms; NASA Langley Research Center: Hampton, VA, USA, 1983.
  23. Hollerbach, J.M.; Wampler, C.W. The calibration index and taxonomy for robot kinematic calibration methods. Int. J. Robot. Res. 1996, 15, 573–591. [Google Scholar] [CrossRef]
  24. De Santos, P.G.; Jiménez, M.A.; Armada, M.A. Improving the motion of walking machines by autonomous kinematic calibration. Auton. Robots 2002, 12, 187–199. [Google Scholar] [CrossRef]
  25. Hayat, A.A.; Chittawadigi, R.; Udai, A.; Saha, S.K. Identification of Denavit-Hartenberg parameters of an industrial robot. In Proceedings of the Conference on Advances in Robotics, Pune, India, 4–6 July 2013; pp. 1–6. [Google Scholar]
  26. Denavit, J.; Hartenberg, R.S. A kinematic notation for lower-pair mechanisms based on matrices. Trans. ASME J. Appl. Mech. 1955, 22, 215–221. [Google Scholar]
  27. Driels, M.R.; Swayze, W.; Potter, S. Full-pose calibration of a robot manipulator using a coordinate-measuring machine. Int. J. Adv. Manuf. Technol. 1993, 8, 34–41. [Google Scholar] [CrossRef]
  28. Liu, B.; Zhang, F.; Qu, X.; Shi, X. A Rapid coordinate transformation method applied in industrial robot calibration based on characteristic line coincidence. Sensors 2016, 16, 239. [Google Scholar] [CrossRef] [PubMed]
  29. Babinec, A.; Jurišica, L.; Hubinskỳ, P.; Duchoň, F. Visual localization of mobile robot using artificial markers. Procedia Eng. 2014, 96, 1–9. [Google Scholar] [CrossRef]
  30. RobotWorx. Available online: https://www.robots.com/robots/kuka-kr-6-r900-fivve (accessed on 27 October 2018).
  31. Record Breaking Limbo Skater: 6-Year-Old Skates under 39 Cars—YouTube. Available online: https://www.youtube.com/watch?v=7HEPRZuRWvc (accessed on 1 July 2018).
  32. Kee, V.; Rojas, N.; Elara, M.R.; Sosa, R. Hinged-Tetro: A self-reconfigurable module for nested reconfiguration. In Proceedings of the 2014 IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM), Besacon, France, 8–11 July 2014; pp. 1539–1546. [Google Scholar]
  33. Prabakaran, V.; Elara, M.R.; Pathmakumar, T.; Nansai, S. Floor cleaning robot with reconfigurable mechanism. Autom. Constr. 2018, 91, 155–165. [Google Scholar] [CrossRef]
  34. Yuyao, S.; Elara, M.R.; Kalimuthu, M.; Devarassu, M. sTetro: A modular reconfigurable cleaning robot. In Proceedings of the 2018 International Conference on Reconfigurable Mechanisms and Robots (ReMAR), Delft, The Netherlands, 20–22 June 2018; pp. 1–8. [Google Scholar]
  35. Ilyas, M.; Yuyao, S.; Mohan, R.E.; Devarassu, M.; Kalimuthu, M. Design of sTetro: A Modular, Reconfigurable, and Autonomous Staircase Cleaning Robot. J. Sens. 2018, 2018, 8190802. [Google Scholar] [CrossRef]
  36. Tan, N.; Mohan, R.E.; Elangovan, K. Scorpio: A biomimetic reconfigurable rolling–crawling robot. Int. J. Adv. Robot. Syst. 2016, 13, 1729881416658180. [Google Scholar] [CrossRef]
  37. Patil, M.; Abukhalil, T.; Patel, S.; Sobh, T. UB robot swarm—Design, implementation, and power management. In Proceedings of the 2016 12th IEEE International Conference on Control and Automation (ICCA), Kathmandu, Nepal, 1–3 June 2016; pp. 577–582. [Google Scholar]
  38. Siegwart, R.; Nourbakhsh, I.R.; Scaramuzza, D. Introduction to Autonomous Mobile Robots; MIT Press: Cambridge, MA, USA, 2011. [Google Scholar]
  39. Mapping and Localization from Planar Markers. Available online: http://www.uco.es/investiga/grupos/ava/node/57 (accessed on 1 October 2018).
  40. ArUco: A Minimal Library for Augmented Reality Applications Based on OpenCV. Available online: https://www.uco.es/investiga/grupos/ava/node/26 (accessed on 1 November 2018).
  41. Hartley, R.I.; Zisserman, A. Multiple View Geometry in Computer Vision; Cambridge University Press: Cambridge, UK, 2000; ISBN 0521623049. [Google Scholar]
  42. Bennett, D.J.; Geiger, D.; Hollerbach, J.M. Autonomous robot calibration for hand-eye coordination. Int. J. Robot. Res. 1991, 10, 550–559. [Google Scholar] [CrossRef]
  43. Rousseau, P.; Desrochers, A.; Krouglicof, N. Machine vision system for the automatic identification of robot kinematic parameters. IEEE Trans. Robot. Autom. 2001, 17, 972–978. [Google Scholar] [CrossRef]
  44. Meng, Y.; Zhuang, H. Self-Calibration of Camera-Equipped Robot Manipulators. Int. J. Robot. Res. 2001, 20, 909–921. [Google Scholar] [CrossRef]
  45. Chameleon3 Color USB3 Vision. Available online: https://www.ptgrey.com/chameleon3 (accessed on 4 November 2018).
  46. Golub, G.H.; Van Loan, C.F. Matrix Computations; JHU Press: Baltimore, MD, USA, 2012; Volume 3. [Google Scholar]
  47. Angeles, J. Fundamentals of Robotic Mechanical Systems: Theory, Methods, and Algorithms; Mechanical Engineering Series; Springer International Publishing: Berlin, Germany, 2013. [Google Scholar]
  48. Saha, S.K. Introduction to Robotics, 2nd ed.; Tata McGraw-Hill Education: New Delhi, India, 2014. [Google Scholar]
Figure 1. Human collecting water samples inside the drain commonly found in Singapore [1].
Figure 2. Line diagram of the Tarantula on the drainage pavement.
Figure 3. Two manually reconfigurable states and rotating the legs by 180 degrees which turns the body upside down.
Figure 4. Line diagram of the mechanism for simultaneous abduction or adduction of the links. In the above figure W: Worm, WW: Worm Wheel, B: Bevel Gear, S: Sprocket, C: Chain, GB: Gear Box, M: Motor.
Figure 5. Suspension system that provides depression and elevation at the proximal revolute joints.
Figure 6. Wheel with suspensions.
Figure 7. System diagram of Tarantula.
Figure 8. The inertial frame {I}, body fixed frame {T} at the trunk, base frame {B} at the center of the contact point of four wheels with the ground and the DH frames attached with leg 2 having RPRR joints.
Figure 9. Graphical representation of the workspace of Tarantula.
Figure 10. Steering kinematics of the wheels with varying base dimensions.
Figure 11. Flow diagram to obtain a pose from the calibrated camera using ArUco markers.
Figure 12. Evaluation of measurements using the ArUco markers and monocular camera.
Figure 13. Position data points on the circle and the line obtained with the revolute and prismatic joint actuation respectively with its singular value direction.
Figure 14. Setup using AruCo markers for identification and localization of robot.
Figure 15. Comparison of the positioning of each leg w.r.t. the CAD.
Figure 16. Experimental setup and the trajectory traced by the robot.
Table 1. Wheeled and legged robot discussed in this work.
System | Locomotion | n_L, n_W | n_B | R | M | Environment
PCIRs [4] | 2-Tracking wheels | –, 2 | 3 | N | N | SP (C)
KARO [5] | 4-WID | –, 4 | 2 | N | N | SP (C)
KANTARO [6] | Passively adapted wheels | –, 4 | 2 | N | Y | SP (C)
KURT [7] | Wheeled | –, 3 | 3 | N | Y | SP (C)
MAKRO [8] | Wheeled | –, 2n | 3 | Y | Y | SP (C)
BISAM [13] | Legged | 4, – | 5 | N | Y | RT
Warp1 [14] | Legged | 3, – | 5 | N | Y | RT
TITAN VIII [17] | Legged | 1, – | 5 | N | Y | RT
IPR [20] | Legged | 1, – | 3 | Y | N | SP (C)
Tarantula | Wheeled | 4, 4 | 4 | Y | Y | D
R: Reconfigurable; M: Modularity in mechanism, hardware and software; n_L: active degrees of freedom (DOF) in each leg; n_W: number of wheels; n_B: DOF of the moving platform; (C): circular cross-section; SP: sewer pipes; RT: rough terrain; D: drain.
Table 2. Denavit and Hartenberg (DH) parameters of the single leg of Tarantula robot.
# | α | a | b | θ | Joint Limit (Initial) | Joint Limit (Final) | Remarks
1 | 90° | a_1 | 0 | θ_1 (JV) | 0 deg. * | 180 deg. | Joint near trunk
2 | −90° | 0 | b_2 (JV) | 0 | 470 mm | 950 mm | To adjust height
3 | 90° | 0 | 0 | θ_3 (JV) | 0 deg. | 180 deg. | θ_3 = θ_1 ± ψ
4 | 0° | 0 | b_4 | θ_4 (JV) | 0 deg. | 360 deg. | For steering
* deg. is degrees; JV: joint variable; ψ is the inclination angle of the ground as shown in Figure 1.
Table 3. Evaluation of measurement performance using monocular camera and ArUco markers.
 | X: AB (m) | Y: CD (m) | Z: EF (m) | ∡(AB, CD) | ∡(CD, EF) | ∡(EF, AB)
Ideal (Robot) | 0.400 | 0.7776 | 0.7467 | 90 | 90 | 90
Measured (Markers) | 0.3987 | 0.7235 | 0.7494 | 89.89 | 89.84 | 90.07
X, Y and Z denote the x-, y- and z-directions in the robot's world frame; ∡: angles are in degrees.
Table 4. Identified DH parameters of each leg.
Leg | b_1 | a_1 | α_1 (deg.) | b_2 (mm) | a_2 | α_2 (deg.) | b_3 | a_3 | α_3 (deg.) | b_4 | a_4 | α_4 (deg.)
Leg 1 | 0 | 0 | 90.17 | 485.23 + var | 0 | −91.13 | 0 | 0 | 89.86 | 69.21 | 0 | 0
Leg 2 | 0 | 0 | 90.01 | 480.13 + var | 0 | −91.07 | 0 | 0 | 89.06 | 70.12 | 0 | 0
Leg 3 | 0 | 0 | 89.71 | 482.72 + var | 0 | −89.89 | 0 | 0 | 90.32 | 72.35 | 0 | 0
Leg 4 | 0 | 0 | 89.96 | 482.36 + var | 0 | −90.14 | 0 | 0 | 89.78 | 78.16 | 0 | 0
var: the variable actuation length, ranging from zero to 0.6 m; the numeric value corresponds to the compressed state. b_4: joint offset measured up to the position where the camera was placed.