Article

Visual Closed-Loop Dynamic Model Identification of Parallel Robots Based on Optical CMM Sensor

1 Department of Mechanical, Industrial & Aerospace Engineering, Concordia University, Montreal, QC H3G 1M8, Canada
2 College of Mechanical and Electrical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China
* Author to whom correspondence should be addressed.
Electronics 2019, 8(8), 836; https://doi.org/10.3390/electronics8080836
Submission received: 27 June 2019 / Revised: 22 July 2019 / Accepted: 22 July 2019 / Published: 26 July 2019
(This article belongs to the Special Issue Visual Servoing in Robotics)

Abstract
Parallel robots present outstanding advantages over their serial counterparts: a higher force-to-weight ratio and better stiffness. However, the closed-chain mechanism and its highly coupled dynamics make it difficult to design control systems for practical applications. This paper focuses on the dynamic model identification of 6-DOF parallel robots for advanced model-based visual servoing control design. A visual closed-loop output-error identification method for parallel robots based on an optical coordinate-measuring-machine (CMM) sensor is proposed. Its main advantage over conventional identification methods is that neither joint torque measurements nor exact knowledge of the built-in robot controllers is needed. The time-consuming forward kinematics calculation employed in conventional identification of parallel robots can be avoided, since the optical CMM sensor provides real-time pose estimation. A case study on a 6-DOF RSS parallel robot is carried out. The dynamic model of the parallel robot is derived based on the virtual work principle, and the resulting model is verified through Matlab/SimMechanics. By using an outer-loop visual servoing controller to stabilize both the parallel robot and the simulated model, a visual closed-loop output-error identification method is developed, and the model parameters are identified by using a nonlinear optimization technique. The effectiveness of the proposed identification algorithm is validated by experimental tests.

1. Introduction

Parallel robots are closed-chain mechanisms in which the end-effector is supported by several independent, computer-controlled serial chains linked to the base platform. Parallel robots offer outstanding advantages in force-to-weight ratio and stiffness compared with serial manipulators. Specifically, 6-DOF parallel robots have been used in various applications (e.g., flight simulators [1], manufacturing lines [2] and medical tools [3]). Due to manufacturing tolerances and deflections in the robot structure, the typical positioning discrepancy between a virtual robot in simulation and a real robot can be 8–15 mm [4], which does not meet the precision requirements of many potential applications. The low absolute accuracy of the robot is the main problem for off-line-programming-based applications, where tens of thousands of points or continuous trajectories are to be reached or tracked.
The closed-chain mechanism and the multiple moving parts of parallel robots (for example, a 6-DOF Gough-Stewart platform consists of 13 moving bodies: 12 legs and one end-effector) make their dynamic analysis difficult. Moreover, the dynamic model plays an important role in model-based controller design, especially in applications where high positioning and tracking accuracy is needed. In contrast to serial robots, where the joint angles are used as the state variables for modeling, parallel robots are preferably modeled and controlled in the operational (workspace) coordinates, where the pose of the end-effector and its derivatives are used as the state variables [5]. The main reason is that there is almost no analytical expression for the forward kinematic model of 6-DOF parallel robots. For example, there are 40 forward kinematics solutions [6] mapping the joint coordinates to the pose of the end-effector platform of a Gough-Stewart platform, and, to this day, there is no effective method to determine the actual pose of the platform from the multiple solutions. The dynamic models of the 6-DOF Gough-Stewart parallel robot have been built by several approaches, such as the Newton-Euler formulation [7], the Lagrangian formulation [8] and the principle of virtual work [9]. The Newton-Euler formulation can provide the internal forces of each individual body of the parallel robot, which benefits the mechanical design process. However, its computational load is high due to the large number of equations. In contrast, the Lagrangian and virtual work methods are more efficient and suitable for control design, since the reaction forces between the bodies of the parallel robot are not considered. The published research on 6-DOF RSS parallel robots is considerably scarcer than that on the Gough-Stewart parallel robot.
The dynamic models of one type of 6-DOF RSS parallel robot, in which the active rotation axes are coplanar, have been built based on the Newton-Euler equations [10] or the Lagrangian formulation [11] for dynamic analysis and tracking control purposes, respectively. In this paper, the dynamic model of a 6-DOF RSS parallel robot, whose active rotation axes are parallel to each other, is built based on the virtual work principle, and the explicit form of the dynamic model is derived for identification and dynamic model-based visual servoing purposes.
The dynamic parameters are normally unknown or approximately derived from manufacturer specifications, which are not accurate enough for dynamic model-based controller design. System identification is an effective method to estimate the uncertain parameters of a system's dynamic model, and it has been applied in many engineering practices [12]. As highly coupled multi-input/multi-output (MIMO) nonlinear systems, industrial robots present great interest and challenges for identification methods. Surveys of state-of-the-art identification methods can be found in [13,14,15]. For industrial robots, dynamic identification is normally performed in closed loop, since the robotic system is open-loop unstable. In [16], a MIMO closed-loop identification based on weighted least-squares estimation was applied to an industrial serial robot in a planar configuration. In addition, other closed-loop identification methods based on maximum likelihood and instrumental variables, together with related implementation issues on industrial serial robots, are addressed in [17,18]. A new closed-loop output-error identification scheme has been adopted for serial robots [19]. The output-error identification method aims at finding the dynamic model parameters by minimizing the output deviation between the actual and simulated systems subject to the same input [20]. In [19], the identification procedure is implemented in a closed-loop control structure and the joint torque is the measured output, which avoids estimating the velocity and acceleration from the measured joint positions.
One potential issue of the above-mentioned identification methods is that joint torque measurements or related control signals are needed for identification, which are not always available for industrial robots, since the built-in controllers of many industrial robots are inaccessible and do not provide a torque actuation mode [21]. The input of the built-in controllers is the position or velocity command, and the output is the joint torque, which is inaccessible to the users. Hence the torque and current of the motors cannot be obtained directly, and it is not easy to install additional torque sensors to get a direct measurement. The unknown controller can be identified along with the dynamic parameters, as introduced in [22]; however, joint torque measurements are still needed for identification purposes.
Identification issues in parallel robots have been discussed in [13,23,24,25]. Most research on the dynamic identification of parallel robots is based on a simplified dynamic model, as in [26,27,28]. Nevertheless, a systematic derivation of the full inverse dynamic model based on Jourdain's principle is proposed in [24], where the identification procedure is carried out in two steps: first identifying the inertial parameters and then estimating the friction coefficients. In [23], the full parameters of the robot's dynamic model and the joint drive gains are identified based on the total least-squares method, and the method is tested on a 3-DOF Orthoglide parallel robot. Many identification methods for 6-DOF parallel robots in [23,24,25] come directly from those of serial robots. By rewriting the inverse dynamic model and friction model in a form linear in the dynamic and friction parameters, the unknown parameters can be identified through a least-squares technique. However, for 6-DOF parallel robots, to avoid solving the forward kinematics, the regression matrix of the dynamic model is constituted of the pose state variables in the operational space. In [25], only joint-space state variables are measured and estimated; therefore, the state variables in the operational space must be calculated through numerical computation of the forward kinematics, which is quite time-consuming.
One way to avoid computing the forward kinematics in the identification process is to directly measure the pose of the end-effector in the operational space with vision sensors. In [28], a camera-based dynamic identification procedure is given for a 4-DOF parallel robot with a heavily simplified dynamic model that considers only the inertia of the end-effector. Vision-based dynamic identification has also been carried out on serial robots [29] and cable robots [30], but seldom on parallel robots. The optical CMM is a dual-camera vision sensor that can provide real-time pose information of targets. The coordinates of target reflectors in the field of view (FOV) can be directly measured by the sensor, and by observing four non-collinear reflectors on the platform, the pose of the end-effector can be measured. The optical CMM sensor has been applied to kinematic calibration [31] and path tracking controller design [32] for robots. To increase flexibility and tracking accuracy, vision can also be incorporated into the feedback control loop of the parallel robot system to form so-called visual servoing control. The pose of the end-effector can be acquired on-line by using an optical CMM sensor, i.e., the C-track from Creaform Inc. (Levis, QC, Canada). A visual servoing controller is superior to a joint-space controller for parallel robots because it avoids the kinematic errors introduced when the desired trajectory in the operational space is transformed into one in the joint space. The measured pose, together with the visual servoing controller, allows using the closed-loop output-error identification method [20] to identify the dynamic model of the parallel robot.
Based on the above discussion, a closed-loop output-error identification method based on an optical CMM sensor is proposed for parallel robots in this paper. The end-effector pose is measured by the optical CMM and serves as the output of the real plant. The same outer-loop visual servoing controller and reference trajectory are employed in both the actual robot and the simulation model for model identification. The forward kinematics of the parallel robot, which usually requires a time-consuming numerical algorithm, can thus be avoided, and neither exact knowledge of the built-in controller nor the joint torque is needed. The dynamic model parameters are identified by using a nonlinear optimization technique, and experimental tests validate the identification results.
This paper is organized as follows. Section 2 describes the dynamic model of a 6-RSS parallel robot. The closed-loop output-error identification method is proposed and the procedure of the identification is presented in Section 3. The dynamic model validation based on simulation and the experiment results of the identification are given in Section 4. Finally, the conclusion is drawn in Section 5.

2. Dynamic Modeling

In this section, the kinematic analysis of the 6-RSS parallel robot is conducted and the dynamic model is derived based on the virtual work principle.

2.1. Kinematic Analysis

The aim of the kinematic analysis is to determine the geometric mapping from the motion of the end-effector frame w.r.t. the base frame (operational-space motion) to the rotations of the actuators about the revolute joint axes (joint-space motion). Based on this mapping, the link Jacobian matrices are derived for building the dynamic model. As shown in Figure 1a, the 6-RSS parallel robot consists of six identical serial branch chains. Each serial branch, illustrated in Figure 1b, consists of a wrench, a link, an active revolute joint (R) and two passive spherical joints (S). One spherical joint connects the wrench and the link. The revolute joint is driven by an actuator and connects the wrench to the base platform, while the other spherical joint is employed between the link and the end-effector. The base frame $\Sigma_O$ is assigned at the symmetric center of the base platform, and the end-effector frame $\Sigma_E$ is attached at the symmetric center of the end-effector, with $E$ denoting the coordinate vector of the frame origin w.r.t. the base frame. The coordinate vectors of the revolute joint centers w.r.t. the base frame are denoted by $B_i$ $(i = 1, 2, \dots, 6)$, while the rotation centers of the spherical joints are represented as $T_i$ and $A_i$, respectively. In the subsequent kinematic and dynamic analysis, the coordinates of the parts of the parallel robot are defined w.r.t. the base frame by default. The moving wrench and link frames $\Sigma_{W_i}$ and $\Sigma_{L_i}$ are attached to the wrenches and links, respectively, as depicted in Figure 1b. The coordinate vectors of the centers of mass of the wrenches and links are denoted as $c_{w_i}$ and $c_{l_i}$, respectively. The vector $\theta = [\theta_1, \theta_2, \dots, \theta_6]^T$ represents the rotation angles of the actuators. The coordinate vector from $E$ pointing to $A_i$ w.r.t. the end-effector frame is denoted as $a_i$, and the coordinate vectors $w_i$ and $l_i$ represent the directions and lengths of the wrench and link.
The pose vector of the end-effector frame is expressed as $\chi_E = [h^T, \phi^T]^T$, where $h = [x, y, z]^T$ represents the position of the end-effector frame origin and $\phi = [\alpha, \beta, \gamma]^T$ represents the Euler-angle rotation of the frame. The rotation matrix, $R$, is given by
$$R = R_x(\alpha) R_y(\beta) R_z(\gamma) = \begin{bmatrix} c_\beta c_\gamma & -c_\beta s_\gamma & s_\beta \\ c_\alpha s_\gamma + c_\gamma s_\beta s_\alpha & -s_\beta s_\alpha s_\gamma + c_\alpha c_\gamma & -c_\beta s_\alpha \\ s_\alpha s_\gamma - c_\alpha c_\gamma s_\beta & c_\alpha s_\beta s_\gamma + c_\gamma s_\alpha & c_\beta c_\alpha \end{bmatrix},$$
where $R_x$, $R_y$, $R_z$ are the orthonormal rotation matrices for rotations about the $X$, $Y$, $Z$ axes, respectively. The vectors $\dot\chi_E = [\dot h^T, \dot\phi^T]^T$ and $\ddot\chi_E = [\ddot h^T, \ddot\phi^T]^T$ are the first and second time derivatives of $\chi_E$. The vector $v_E = [\dot h^T, \omega^T]^T \in \mathbb{R}^{6 \times 1}$ denotes the linear and angular velocities of the end-effector frame, and $\dot v_E$ is the corresponding acceleration vector. The relationship between the Euler-angle rate $\dot\phi$ and the angular velocity $\omega$ is expressed as follows:
$$\omega = J_e \dot\phi,$$
where $J_e = \begin{bmatrix} 1 & 0 & s_\beta \\ 0 & c_\alpha & -c_\beta s_\alpha \\ 0 & s_\alpha & c_\beta c_\alpha \end{bmatrix}$ is the analytical Jacobian matrix, and $s$ and $c$ stand for $\sin$ and $\cos$, respectively.
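The rotation matrix of Equation (1) and the analytical Jacobian $J_e$ of Equation (2) can be checked numerically. The following is a minimal sketch in Python with NumPy (the function names are ours, not from the paper):

```python
import numpy as np

def rot_xyz(a, b, g):
    """R = Rx(alpha) @ Ry(beta) @ Rz(gamma), as in Eq. (1)."""
    ca, sa = np.cos(a), np.sin(a)
    cb, sb = np.cos(b), np.sin(b)
    cg, sg = np.cos(g), np.sin(g)
    Rx = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])
    Ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
    Rz = np.array([[cg, -sg, 0], [sg, cg, 0], [0, 0, 1]])
    return Rx @ Ry @ Rz

def J_e(a, b, g):
    """Analytical Jacobian mapping Euler-angle rates to angular velocity (Eq. (2))."""
    ca, sa, cb, sb = np.cos(a), np.sin(a), np.cos(b), np.sin(b)
    return np.array([[1, 0, sb],
                     [0, ca, -cb * sa],
                     [0, sa, cb * ca]])
```

A quick consistency check is that $\dot R R^T = [\omega]_\times$ with $\omega = J_e \dot\phi$, which can be verified with finite differences.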
The following assumptions are made for kinematic and dynamic analysis of a 6-RSS parallel robot:
  • The end-effector platform, wrenches and links are symmetric with respect to their axes.
  • The links do not rotate about their symmetric axes.
As shown in Figure 1a,b, the closed-loop position relationship between the end-effector frame and the base frame can be expressed as follows:
$$E + a_i - A_i = 0 \quad \text{and}$$
$$B_i + w_i + l_i - A_i = 0, \qquad i = 1, 2, \dots, 6.$$
Then the linear velocity can be derived by combining Equations (3) and (4) and taking the time derivative:
$$\dot h + \omega \times a_i = \omega_1 \times w_i + \omega_2 \times l_i, \qquad i = 1, 2, \dots, 6,$$
where $\omega_1$ and $\omega_2$ are the angular velocities of the frames $\Sigma_{W_i}$ and $\Sigma_{L_i}$ w.r.t. $\Sigma_O$, respectively.
Then, dot-multiplying $l_i$ on both sides of Equation (5) yields:
$$l_i \cdot \dot h + (a_i \times l_i) \cdot \omega = (w_i \times l_i) \cdot \omega_1, \qquad i = 1, 2, \dots, 6.$$
Since the wrench rotates around a fixed axis, denoted as $\hat s = [0, 0, 1]^T$, the Jacobian matrix mapping the end-effector Cartesian velocity to the joint velocities can be derived as follows:
$$J_{a1} \dot\theta = J_{a2} v_E,$$
where $J_{a1} = \mathrm{diag}\big( (w_1 \times l_1) \cdot \hat s,\; (w_2 \times l_2) \cdot \hat s,\; \dots,\; (w_6 \times l_6) \cdot \hat s \big)$ and $J_{a2} = \begin{bmatrix} l_1^T & (a_1 \times l_1)^T \\ \vdots & \vdots \\ l_6^T & (a_6 \times l_6)^T \end{bmatrix}$.
When the robot works in the singularity-free operational space, the Jacobian matrix J a d can be derived as follows:
$$\dot\theta = J_{a1}^{-1} J_{a2}\, v_E = J_{ad}\, v_E.$$
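Equations (7) and (8) can be assembled directly from the branch-chain vectors. The sketch below (NumPy; our own array conventions, with $w_i$, $l_i$, $a_i$ assumed given as base-frame coordinates) builds $J_{a1}$ and $J_{a2}$ and solves for $J_{ad}$:

```python
import numpy as np

S_HAT = np.array([0.0, 0.0, 1.0])  # common revolute-joint axis (Section 2.1)

def jacobian_Jad(w, l, a):
    """Build J_ad = J_a1^{-1} J_a2 (Eqs. (7)-(8)) from the wrench, link and
    platform vectors w_i, l_i, a_i, each passed as a 6x3 array (one row per
    branch, base-frame coordinates)."""
    Ja1 = np.diag([np.cross(w[i], l[i]) @ S_HAT for i in range(6)])
    Ja2 = np.hstack([l, np.cross(a, l)])      # rows [l_i^T, (a_i x l_i)^T]
    return np.linalg.solve(Ja1, Ja2)          # 6x6 mapping v_E -> theta_dot
```

Away from singularities the diagonal entries $(w_i \times l_i) \cdot \hat s$ are nonzero, so the solve is well-posed.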
Then the translational velocity of the center of mass of the wrench, $\dot c_{w_i}$, can be obtained as Equation (9):
$$\dot c_{w_i} = \dot\theta_i\, \hat s \times c_{w_i} = J_{au} v_E,$$
where $J_{au} = (\hat s \times c_{w_i}) J_{adi} \in \mathbb{R}^{3 \times 6}$ and $J_{adi}$ is the $i$th row of $J_{ad}$. Then the link Jacobian $J_a$ mapping $v_E$ to the velocity of the center of mass of the $i$th wrench, $v_1 = [\dot c_{w_i}^T, \omega_{1i}^T]^T$, can be derived as
$$v_1 = \begin{bmatrix} \dot c_{w_i} \\ \omega_{1i} \end{bmatrix} = \begin{bmatrix} J_{au} \\ \hat s J_{adi} \end{bmatrix} v_E = J_a v_E.$$
Equation (11) can be deduced by cross-multiplying $l_i$ on the right of both sides of Equation (5) and using Lagrange's rule:
$$(\omega_2 \cdot l_i)\, l_i - (l_i \cdot l_i)\, \omega_2 = \dot h \times l_i + (\omega \times a_i) \times l_i - (\omega_1 \times w_i) \times l_i.$$
Since the link does not rotate about its longitudinal axis, $\omega_2 \cdot l_i = 0$ holds. Rearranging Equation (11), the following equation can be derived:
$$\| l_i \|^2 \omega_2 = [l_i]_\times \dot h - [l_i]_\times [a_i]_\times \omega + [l_i]_\times [w_i]_\times \omega_1,$$
where $[\,\cdot\,]_\times$ and $\|\cdot\|$ denote the cross-product operator and the Euclidean norm, respectively.
By combining Equations (8) and (12), the following Jacobian matrix J b d mapping from the end-effector Cartesian velocity to the angular velocity of the link frame can be deduced:
$$\omega_2 = J_{bd} v_E, \qquad J_{bd} = \frac{1}{\| l_i \|^2} \left\{ \begin{bmatrix} [l_i]_\times & -[l_i]_\times [a_i]_\times \end{bmatrix} + [l_i]_\times [w_i]_\times \hat s J_{adi} \right\}.$$
The translational velocity of the center of mass of the link, $\dot c_{l_i}$, can be obtained as follows:
$$\dot c_{l_i} = -[w_i]_\times \omega_1 - [c_{l_i}]_\times \omega_2.$$
By substituting Equations (10) and (13) into Equation (14), the following velocity relationship can be derived:
$$\dot c_{l_i} = \left( -[w_i]_\times \hat s J_{adi} - [c_{l_i}]_\times J_{bd} \right) v_E = J_{bu} v_E.$$
Hence the link Jacobian J b mapping v E to the velocity of the center of mass of the ith link v 2 can be derived.
$$v_2 = \begin{bmatrix} \dot c_{l_i} \\ \omega_{2i} \end{bmatrix} = \begin{bmatrix} J_{bu} \\ J_{bd} \end{bmatrix} v_E = J_b v_E.$$

2.2. Dynamic Modeling

In contrast to serial robots, the dynamic modeling of parallel robots is more complicated due to their closed geometric structure and the difficulty of deriving the forward kinematics. The principle of virtual work is employed to derive the explicit form of the dynamic model in terms of the operational coordinates and their time derivatives, as shown in Equation (17) for the 6-DOF RSS robot, which is useful for dynamic model-based controller design.
$$\tau_g = M(\chi_E) \dot v_E + C(\chi_E, v_E) v_E + G(\chi_E) + \tau_f,$$
where $\tau_g$ denotes the general force acting on the end-effector frame, $\tau_f$ is the friction, $M(\chi_E)$ is the mass matrix, $C(\chi_E, v_E)$ is the Coriolis and centrifugal matrix, and $G(\chi_E)$ denotes the gravity term. In order to avoid solving the forward kinematics of the parallel robot, which may not have analytical solutions, the end-effector pose coordinates and their time derivatives are employed in Equation (17).
The balance equation of virtual work principle for a moving rigid body, *, can be expressed as follows:
$$\bar F_* \cdot \delta\chi_* = \left( F_{\mathrm{ext}*} - \tilde F_* \right) \cdot \delta\chi_* = 0,$$
in which $\bar F_*$ contains the static balance force and torque, $F_{\mathrm{ext}*} = [f_{\mathrm{ext}*}^T, \tau_{\mathrm{ext}*}^T]^T$ collects the external force $f_{\mathrm{ext}*}$ and torque $\tau_{\mathrm{ext}*}$ exerted on the center of mass of the body, $\delta\chi_*$ denotes the virtual displacement, and the fictitious force and torque are:
$$\tilde F_* = \begin{bmatrix} m_* g - m_* \ddot h_* \\ -\left( I_* \dot\omega_* + \omega_* \times I_* \omega_* \right) \end{bmatrix},$$
where $m_*$ is the mass of the body, $g$ is the gravity vector, $\ddot h_*$ is the linear acceleration of the center of mass of the body, $I_*$ is the moment of inertia, and $\omega_*$ and $\dot\omega_*$ denote the angular velocity and acceleration of the moving body frame. Further, $\bar F_*$ can be represented in a form similar to Equation (17):
$$\bar F_* = F_{\mathrm{ext}*} + M_* \dot v_* + C_* v_* + G_*, \quad \text{where} \quad M_* = \begin{bmatrix} m_* E_{3\times3} & 0_{3\times3} \\ 0_{3\times3} & I_* \end{bmatrix},\; C_* = \begin{bmatrix} 0_{3\times3} & 0_{3\times3} \\ 0_{3\times3} & [\omega_*]_\times I_* \end{bmatrix},\; G_* = \begin{bmatrix} -m_* g \\ 0_{3\times1} \end{bmatrix},\; \dot v_* = \begin{bmatrix} \ddot h_* \\ \dot\omega_* \end{bmatrix},\; v_* = \begin{bmatrix} \dot h_* \\ \omega_* \end{bmatrix},$$
in which E 3 × 3 R 3 × 3 denotes the identity matrix. For a 6-DOF RSS parallel robot, there are 13 moving bodies including the end-effector, 6 wrenches and 6 links. Therefore the balance equation of the 6-DOF parallel robot can be rewritten as Equation (21):
$$\bar F_p \cdot \delta\chi_e + \sum_{i=1}^{6} \bar F_{li} \cdot \delta\chi_{li} + \sum_{i=1}^{6} \bar F_{wi} \cdot \delta\chi_{wi} = 0,$$
where $\bar F_p$, $\bar F_{li}$ and $\bar F_{wi}$ contain the static balance forces and torques exerted on the centers of mass of the platform, links and wrenches, respectively, and can be represented in the same form as Equation (20); $\delta\chi_e$, $\delta\chi_{li}$, and $\delta\chi_{wi}$ are the corresponding virtual displacements.
In addition, the following relations hold for the velocity analysis:
$$\delta\theta = J_{ad}\, \delta\chi_e, \qquad \delta\chi_{wi} = J_a\, \delta\chi_e, \qquad \delta\chi_{li} = J_b\, \delta\chi_e.$$
Substituting Equation (22) into Equation (21), the terms in Equation (17) can be derived as the following:
$$M(\chi_E) = M_p + \sum_{i=1}^{6} \left( J_{ai}^T M_{wi} J_{ai} + J_{bi}^T M_{li} J_{bi} \right),$$
$$C(\chi_E, v_E) = C_p + \sum_{i=1}^{6} \left( J_{ai}^T C_{wi} J_{ai} + J_{ai}^T M_{wi} \dot J_{ai} + J_{bi}^T C_{li} J_{bi} + J_{bi}^T M_{li} \dot J_{bi} \right),$$
$$G(\chi_E) = G_p + \sum_{i=1}^{6} \left( J_{ai}^T G_{wi} + J_{bi}^T G_{li} \right), \qquad \tau_g = J_{ad}^T \tau_a,$$
where $\tau_a = [\tau_{a1}, \tau_{a2}, \dots, \tau_{a6}]^T$ is the vector of actuator torques applied at the revolute joints, and the details of Equation (23) are given in Appendix A. The joint friction is described by the Coulomb model [33], which has been used in the modeling of both parallel robots [24] and serial robots. Based on this friction model, the friction $\tau_f$ in Equation (17) can be represented as:
$$\tau_f = J_{ad}^T \begin{bmatrix} f_{c1}\, \mathrm{sign}(J_{ad1} v_E) + f_{v1} J_{ad1} v_E \\ f_{c2}\, \mathrm{sign}(J_{ad2} v_E) + f_{v2} J_{ad2} v_E \\ \vdots \\ f_{c6}\, \mathrm{sign}(J_{ad6} v_E) + f_{v6} J_{ad6} v_E \end{bmatrix},$$
where $f_{ci}$ and $f_{vi}$ are the Coulomb and viscous friction parameters of the $i$th revolute joint.
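The friction model of Equation (24) maps per-joint Coulomb and viscous terms back to the operational space through $J_{ad}^T$. A minimal sketch (NumPy; function name is ours):

```python
import numpy as np

def friction_torque(J_ad, v_E, f_c, f_v):
    """Coulomb + viscous joint friction reflected to the operational space,
    tau_f = J_ad^T [ f_ci*sign(theta_dot_i) + f_vi*theta_dot_i ]  (Eq. (24))."""
    theta_dot = J_ad @ v_E                       # joint rates, Eq. (8)
    tau_joint = f_c * np.sign(theta_dot) + f_v * theta_dot
    return J_ad.T @ tau_joint
```

Note that `np.sign` returns 0 at zero velocity, matching the usual convention that Coulomb friction is undefined (taken as zero) at rest.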

2.3. Dynamic Model Simplification

Since the kinematic parameters of parallel robots can be obtained from the manufacturer specifications or by calibration [31], only the inertia and friction parameters are considered in the model identification of parallel robots. Considering the heavy computational load of solving the inverse dynamic model for dynamic identification and visual servoing, the number of dynamic parameters should be reduced, trading off computational load against accuracy. The geometric features of the parallel robot can be exploited for this simplification. For the 6-RSS parallel robot, the wrenches, links and end-effector are assumed to be symmetric, and the center of mass of each body is assumed to be located at its symmetric center. Therefore, the dynamic parameters of each body (wrench, link or end-effector) reduce to $\xi_* = [m_*, I_{x*}, I_{y*}, I_{z*}]$. Then Equation (17) can be rewritten in a form linear in the dynamic parameters and friction coefficients:
$$\tau_g = \Gamma(\chi_E, v_E, \dot v_E)\, \Xi,$$
where $\Xi = [\xi_p^T, \xi_{w1}^T, \xi_{w2}^T, \dots, \xi_{w6}^T, \xi_{l1}^T, \xi_{l2}^T, \dots, \xi_{l6}^T, f_{c1}, f_{c2}, \dots, f_{c6}, f_{v1}, f_{v2}, \dots, f_{v6}]^T \in \mathbb{R}^{64 \times 1}$ is the vector of dynamic parameters and friction coefficients, and $\Gamma(\chi_E, v_E, \dot v_E)$ is the regressor matrix, which consists of the kinematic parameters, the state variables and their derivatives. $\Gamma(\chi_E, v_E, \dot v_E)$ can be derived using the Symbolic Math Toolbox of Matlab. Given an exciting trajectory as a reference input to the robot, which will be introduced in Section 3.3, Equation (26) can be obtained by reorganizing Equation (25).
$$\begin{bmatrix} \tau_{g1} \\ \tau_{g2} \\ \vdots \\ \tau_{gn} \end{bmatrix} = \begin{bmatrix} \Gamma(\chi_{E1}, v_{E1}, \dot v_{E1}) \\ \Gamma(\chi_{E2}, v_{E2}, \dot v_{E2}) \\ \vdots \\ \Gamma(\chi_{En}, v_{En}, \dot v_{En}) \end{bmatrix} \Xi = H\, \Xi,$$
where $n$ is the number of poses sampled from the given trajectory. By feeding various testing trajectories to the robot, the regression matrix $H$ becomes full (column) rank, which means that all elements of $\Xi$ can be identified.
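The stacking of Equation (26) and the full-rank identifiability check can be sketched as follows (NumPy; `gamma_fn` stands in for the symbolic regressor generated with the Symbolic Math Toolbox, and is an assumption of this sketch):

```python
import numpy as np

def stack_regressor(gamma_fn, samples):
    """Stack per-sample regressors Gamma(chi, v, vdot) into H (Eq. (26)).
    `gamma_fn` is assumed to return a 6x64 regressor for one sampled state;
    `samples` is a list of (chi_E, v_E, vdot_E) tuples."""
    return np.vstack([gamma_fn(chi, v, vdot) for chi, v, vdot in samples])

def identifiable(H, tol=1e-8):
    """All 64 parameters are identifiable iff H has full column rank."""
    return np.linalg.matrix_rank(H, tol=tol) == H.shape[1]
```

In practice one would also inspect the singular values of `H` (the observability index of Section 3.3), not only its rank.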

3. Closed-Loop Output-Error Identification Based on Vision Feedback

3.1. Pose Estimation Using Optical CMM

As shown in Figure 2, a dual-camera optical CMM, the C-track 780, is employed to measure the pose of the end-effector for the identification of the dynamic model in this research. The pose measurement principle of the optical CMM sensor is presented in this subsection. The target reflectors are taken as point features. The homogeneous coordinates of the reflectors w.r.t. the sensor frame can be obtained by using the triangulation principle [34]. Given a group of non-collinear reflectors $p_i$ $(i = 1, 2, \dots, n)$ attached to the end-effector platform, their homogeneous coordinates in the sensor frame can be measured and denoted as ${}^C P_i = [x_{pi}, y_{pi}, z_{pi}, 1]^T$. In addition, the homogeneous coordinates of the reflectors w.r.t. the end-effector frame $\Sigma_E$, denoted as ${}^E P_i = [{}^E x_i, {}^E y_i, {}^E z_i, 1]^T$, are known from the definition of the end-effector frame $\Sigma_E$. Correspondingly, the transformation equation of the $i$th feature point can be written as:
$${}^C P_i = {}^C T_E\, {}^E P_i,$$
where ${}^C T_E$ is the homogeneous transformation matrix mapping from $\Sigma_E$ to $\Sigma_C$. In order to derive ${}^C T_E$, at least three non-collinear feature points are required [35]. However, as indicated in [36], at least four coplanar feature points are needed to obtain a unique solution, while additional non-coplanar feature points can be used to improve the estimation accuracy under measurement noise. Similarly, the homogeneous transformation matrix mapping from $\Sigma_O$ to $\Sigma_C$, ${}^C T_O$, can be derived. Then the pose of the end-effector frame w.r.t. the base frame, $\chi_E$, can be obtained from the homogeneous transformation matrix ${}^O T_E$ given by Equation (28).
$${}^O T_E = {}^C T_O^{-1}\, {}^C T_E.$$
By using the proprietary software VXelements provided by Creaform Inc., the target frames can be defined based on the selected reflectors on the surface of the end-effector and base frame respectively. The real-time position and rotation information of the end-effector frame w.r.t. the base frame can be acquired, recorded or displayed simultaneously. In addition, the computation associated with the pose estimation of the target frame is carried out by VXelements.
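In this work the pose computation is done inside VXelements; for illustration only, a generic least-squares rigid-transform fit (the Kabsch/SVD method, a standard technique not specific to this paper) from measured reflector coordinates, followed by the frame composition of Equation (28), can be sketched as follows (NumPy; function names are ours):

```python
import numpy as np

def fit_pose(P_model, P_meas):
    """Least-squares rigid transform T such that P_meas ~ R @ P_model + t,
    via the SVD (Kabsch) method. P_model: Nx3 reflector coordinates in the
    target frame; P_meas: Nx3 coordinates measured by the optical CMM."""
    cm, cs = P_model.mean(0), P_meas.mean(0)
    Hm = (P_model - cm).T @ (P_meas - cs)      # 3x3 correlation matrix
    U, _, Vt = np.linalg.svd(Hm)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # guard against reflections
    R = Vt.T @ D @ U.T
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, cs - R @ cm
    return T

def pose_O_E(C_T_O, C_T_E):
    """End-effector pose w.r.t. the base frame, O_T_E = C_T_O^{-1} C_T_E (Eq. (28))."""
    return np.linalg.inv(C_T_O) @ C_T_E
```

With four or more non-collinear reflectors per frame, this recovers both ${}^C T_E$ and ${}^C T_O$, and hence ${}^O T_E$.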

3.2. Closed-Loop Output-Error Identification Method

The basic idea of output-error identification is to use a nonlinear optimization technique to minimize the squared error between the outputs of the real plant and the simulated model. Since the motor torque is inaccessible, the pose of the end-effector, $\chi_E$, measured by the optical CMM, is used as the output of the system. Hence the closed-loop output-error identification method is adopted [19]. In the conventional closed-loop output-error identification method, the controllers should be exactly known and applied to both the real plant and the simulated model. However, for an industrial robot, the built-in controllers are usually unknown and need to be identified.
The closed-loop output-error identification approach for the vision-based robotic system, as depicted in Figure 3, is proposed in this paper. In the built-in controller of the 6-RSS parallel robot, a PID controller regulates the joint angle of each revolute joint. However, the three PID gains are unknown and need to be identified. During the identification process, the gains and dynamic parameters are updated in each iteration of the nonlinear optimization, which may make the simulated model unstable. Hence, an outer-loop visual servoing controller is added to stabilize both the real robot and the simulated model. The visual servoing controller and the built-in controller constitute a cascade PID controller. As stated in [37,38], a cascade controller yields better dynamic performance in terms of stability and working frequency compared with a single-loop controller. With a well-tuned outer-loop visual servoing controller, both the real and simulated systems can achieve better performance and stability.
In Figure 3, the same exciting trajectory is given as the input $\chi_d(t)$ to both the real plant and the simulated model. The outer-loop visual servoing controller is then designed as in Equation (30). The inverse kinematic Jacobian $J_\theta$ is used to transform velocities in the operational space to those in the joint space. By combining Equations (2) and (8), the Jacobian matrix $J_\theta$ is derived as shown in Equation (29), and the joint position control signal $U_\theta(t)$ is generated as shown in Equation (30).
$$\dot\theta = J_{ad} \begin{bmatrix} E_{3\times3} & 0_{3\times3} \\ 0_{3\times3} & J_e \end{bmatrix} \dot\chi_E = J_\theta \dot\chi_E.$$
$$U_\theta(t) = J_\theta \left[ k_p \big( \chi_d(t) - \chi_m(t) \big) + k_d \big( \dot\chi_d(t) - \dot\chi_m(t) \big) + k_i \int \big( \chi_d(t) - \chi_m(t) \big)\, dt \right],$$
where $k_p$, $k_i$, $k_d$ are constant gains, and $\chi_m(t)$ is the visual feedback obtained from the VXelements software; in the simulation, $\chi_m(t)$ is replaced by $\chi_s(t)$, the pose calculated by using the forward dynamic model.
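A discrete-time sketch of the outer-loop controller of Equation (30) is given below (NumPy; class name, scalar gains, and the backward-difference approximation of $\dot\chi_d - \dot\chi_m$ are our assumptions, not the paper's implementation):

```python
import numpy as np

class OuterLoopPID:
    """Outer-loop visual servoing controller (Eq. (30)): generates the joint
    position command U_theta from the measured pose error."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integ = np.zeros(6)   # running integral of the pose error
        self.prev_e = None

    def step(self, J_theta, chi_d, chi_m):
        e = chi_d - chi_m
        self.integ += e * self.dt
        # finite-difference stand-in for the derivative term of Eq. (30)
        de = np.zeros(6) if self.prev_e is None else (e - self.prev_e) / self.dt
        self.prev_e = e
        return J_theta @ (self.kp * e + self.kd * de + self.ki * self.integ)
```

At each sample, `J_theta` is re-evaluated at the current pose via Equation (29).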
The PID controller given in Equation (31) is then used to describe the built-in controller of each joint and is employed in the simulated model.
$$\tau_a(t) = l_p \big( U_\theta(t) - \theta_s(t) \big) + l_d \big( \dot U_\theta(t) - \dot\theta_s(t) \big) + l_i \int \big( U_\theta(t) - \theta_s(t) \big)\, dt,$$
where $l_p$, $l_i$, $l_d$ are the PID parameters to be identified, and $\theta_s(t)$ is the vector of joint positions, which can be obtained by analytically solving the inverse kinematics.
The real plant output $Y_m = [\chi_m(1), \chi_m(2), \dots, \chi_m(k)]^T$ and the simulation output $Y_s = [\chi_s(1), \chi_s(2), \dots, \chi_s(k)]^T$ are fed to the optimization process. The parameters to be identified, $\Lambda$, can be denoted as $\Lambda = [\Xi^T, l_p, l_i, l_d]^T$. Accordingly, the identification of $\Lambda$ can be converted into solving the following nonlinear optimization problem:
$$\underset{\Lambda}{\text{Minimize}} \quad \Phi(\Lambda) = \left\| Y_m - Y_s \right\|^2.$$
Then the updating formula for Λ is given as follows:
$$\Lambda^{r+1} = \Lambda^r - \left( J_\Phi^T J_\Phi \right)^{-1} J_\Phi^T\, \Phi(\Lambda^r),$$
where $\Lambda^r$ is the value of $\Lambda$ in the $r$th iteration and $J_\Phi$ is the Jacobian matrix of $\Phi(\Lambda)$ w.r.t. $\Lambda$, given as:
$$J_\Phi = \begin{bmatrix} \dfrac{\partial \Phi(\Lambda)}{\partial \Lambda_1} & \dfrac{\partial \Phi(\Lambda)}{\partial \Lambda_2} & \cdots & \dfrac{\partial \Phi(\Lambda)}{\partial \Lambda_{67}} \end{bmatrix},$$
where $\Lambda_i$ denotes the $i$th element of $\Lambda$. The termination criteria are given as:
$$\frac{\left| \Phi(\Lambda^{r+1}) - \Phi(\Lambda^r) \right|}{\left| \Phi(\Lambda^r) \right|} \le tol_1 \quad \text{and} \quad \max_{i=1,\dots,n} \frac{\left| \Lambda_i^{r+1} - \Lambda_i^r \right|}{\left| \Lambda_i^r \right|} \le tol_2,$$
where $|\cdot|$ denotes the absolute value, and $tol_1$ and $tol_2$ are thresholds chosen to tune the accuracy. A compromise between convergence speed and accuracy should be made when choosing the thresholds. To achieve good results in solving the nonlinear optimization problem, a proper initial guess of $\Lambda$ is needed. For the dynamic parameters of the parallel robot, the initial guess can be calculated from the manufacturer specifications. A priori PID parameters are then obtained based on the simulation model.
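The iteration of Equations (32)-(35) can be sketched as a standard Gauss-Newton loop (NumPy). Note one assumption of this sketch: where the paper writes $J_\Phi$ as the gradient of the scalar cost, the code below uses the residual-vector form of Gauss-Newton with a finite-difference Jacobian, which is the usual way such updates are implemented; `sim_output` stands in for running the closed-loop simulated model:

```python
import numpy as np

def identify(sim_output, Y_m, Lam0, tol1=1e-6, tol2=1e-6, max_iter=50):
    """Gauss-Newton output-error identification (cf. Eqs. (32)-(35)).
    `sim_output(Lam)` is assumed to run the closed-loop simulation for the
    parameter vector Lam and return the stacked pose output Y_s; Y_m is the
    measured plant output."""
    Lam = Lam0.astype(float).copy()
    resid = lambda L: sim_output(L) - Y_m
    phi = resid(Lam)
    for _ in range(max_iter):
        # finite-difference Jacobian of the residual w.r.t. the parameters
        h = 1e-6
        J = np.column_stack([
            (resid(Lam + np.eye(len(Lam))[i] * h) - phi) / h
            for i in range(len(Lam))])
        step = np.linalg.solve(J.T @ J, J.T @ phi)   # Eq. (33) step
        Lam_new = Lam - step
        phi_new = resid(Lam_new)
        # termination criteria in the spirit of Eq. (35)
        if (abs(phi_new @ phi_new - phi @ phi) <= tol1 * max(phi @ phi, 1e-12)
                and np.max(np.abs(step) / np.maximum(np.abs(Lam), 1e-12)) <= tol2):
            return Lam_new
        Lam, phi = Lam_new, phi_new
    return Lam
```

For the real problem, each evaluation of `sim_output` is a full forward-dynamics simulation under the cascade controller, so the finite-difference Jacobian dominates the cost of one iteration.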

3.3. Modified Exciting Trajectory

The exciting trajectories for the dynamic model identification of serial robots based on the inverse dynamic model have been well studied in [39]. The finite-Fourier-series-based exciting trajectory has been tested in a large number of research works for identification purposes [40]. For serial robots, it can be represented as follows:
$$\begin{aligned}
\theta_i(t) &= \sum_{l=1}^{n}\left[\frac{\sin(2\pi f_0 l t)}{2\pi f_0 l}\, s_{il} - \frac{\cos(2\pi f_0 l t)}{2\pi f_0 l}\, c_{il}\right] + \theta_{0i}\\
\dot{\theta}_i(t) &= \sum_{l=1}^{n}\left[\cos(2\pi f_0 l t)\, s_{il} + \sin(2\pi f_0 l t)\, c_{il}\right]\\
\ddot{\theta}_i(t) &= \sum_{l=1}^{n}\left[-2\pi f_0 l \sin(2\pi f_0 l t)\, s_{il} + 2\pi f_0 l \cos(2\pi f_0 l t)\, c_{il}\right],
\end{aligned}$$
where $\theta_i(t)$ is the $i$th joint angle trajectory of the serial robot, $n$ is the number of harmonics, $f_0$ is the fundamental frequency, and $s_{il}$, $c_{il}$, $\theta_{0i}$ are the trajectory parameters to be optimized. Instead of the joint space states $(\theta, \dot{\theta}, \ddot{\theta})$ used for serial robots, the pose in the operational space is used for the dynamic identification of parallel robots. A modified finite Fourier series-based exciting trajectory for parallel robots is proposed as:
$$\chi_i(t) = \sum_{l=1}^{n}\left[\frac{\sin(2\pi \omega_0 l t)}{2\pi \omega_0 l}\, s_{il} - \frac{\cos(2\pi \omega_0 l t)}{2\pi \omega_0 l}\, c_{il}\right] + \chi_{0i},$$
where $\chi_i(t)$ is the $i$th component of the pose trajectory. Therefore $2n+1$ parameters $\delta_i = [s_{i1}, c_{i1}, \dots, s_{in}, c_{in}, \chi_{0i}]^T$ can be determined by solving a nonlinear optimization problem. A singularity check should be implemented for each sampled pose. According to previous research work [31,41], the maximum wrench rotation range inside the singularity-free domain of the 6-RSS parallel robot is (−0.9948 rad, 0.9948 rad). The inverse kinematic model is used to map the poses into the joint space and to check whether the joint angles stay inside the singularity-free domain. To obtain the operational space states in the regression matrix H, the time derivatives of the Euler angles should be converted to the angular velocity and acceleration, as shown in Equations (2) and (38).
$$\dot{\omega} = \begin{bmatrix}
\ddot{\alpha} + \ddot{\gamma}\, s_\beta + \dot{\beta}\dot{\gamma}\, c_\beta\\
c_\alpha(\ddot{\beta} - \dot{\alpha}\dot{\gamma}\, c_\beta) + s_\alpha(\dot{\beta}\dot{\gamma}\, s_\beta - \dot{\beta}\dot{\alpha} - \ddot{\gamma}\, c_\beta)\\
s_\alpha(\ddot{\beta} - \dot{\alpha}\dot{\gamma}\, c_\beta) + c_\alpha(\dot{\beta}\dot{\alpha} + \ddot{\gamma}\, c_\beta - \dot{\beta}\dot{\gamma}\, s_\beta)
\end{bmatrix}.$$
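As a concrete sketch, the finite Fourier series trajectory above and its first two time derivatives can be evaluated for one pose coordinate as follows; this is an illustrative Python analogue of the Matlab implementation, and the coefficient values used in any call are hypothetical:

```python
import numpy as np

def fourier_trajectory(t, s, c, x0, f0=0.1):
    """Finite Fourier series exciting trajectory for one pose coordinate.

    t    : array of sample times
    s, c : harmonic coefficients s_il, c_il (length n)
    x0   : constant offset chi_0i
    f0   : fundamental frequency (0.1 Hz as in the experiment)
    Returns position, velocity and acceleration arrays.
    """
    t = np.asarray(t, dtype=float)
    x = np.full_like(t, x0)
    v = np.zeros_like(t)
    a = np.zeros_like(t)
    for l, (sl, cl) in enumerate(zip(s, c), start=1):
        w = 2.0 * np.pi * f0 * l                  # harmonic angular frequency
        x += np.sin(w * t) / w * sl - np.cos(w * t) / w * cl
        v += np.cos(w * t) * sl + np.sin(w * t) * cl
        a += -w * np.sin(w * t) * sl + w * np.cos(w * t) * cl
    return x, v, a
```

A quick numerical-differentiation check (velocity against the gradient of position) is a convenient way to confirm that the three series are mutually consistent.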
As shown in Equation (26), the observability index of H should be maximized to achieve a good identification result for the given exciting trajectories. The observability index $O_{in}$ used in [31] is chosen as the criterion. Therefore the optimal exciting trajectory can be obtained by solving the following nonlinear optimization problem:
$$\mathrm{Maximize}\quad O_{in}(\delta) = \frac{\sqrt[\varsigma]{\sigma_1 \sigma_2 \cdots \sigma_\varsigma}}{\sqrt{m}} = \frac{\sqrt[\varsigma]{\sqrt{\det(H^T H)}}}{\sqrt{m}},$$
where $\sigma_1 \ge \sigma_2 \ge \dots \ge \sigma_\varsigma$ are the singular values of H, $m$ is the number of sampled poses of the trajectory, and $\varsigma$ is the number of dynamic parameters to be identified.
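The observability index can be computed directly from the singular values of the regression matrix H. The numpy sketch below takes the geometric mean of the singular values through logarithms for numerical robustness; the helper name and interface are illustrative:

```python
import numpy as np

def observability_index(H, n_params=None):
    """O_in criterion: the varsigma-th root of the product of the singular
    values of H, normalised by the square root of the number of samples."""
    m = H.shape[0]                                # number of sampled poses
    sv = np.linalg.svd(H, compute_uv=False)       # sigma_1 >= ... >= sigma_s
    if n_params is None:
        n_params = sv.size                        # varsigma: parameter count
    # geometric mean of the singular values, computed via logs
    return np.exp(np.mean(np.log(sv[:n_params]))) / np.sqrt(m)
```

In a trajectory optimization loop (e.g. a genetic algorithm), this value would serve as the fitness of each candidate set of harmonic parameters.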

3.4. The Procedure of Identification

The whole procedure of the proposed closed-loop output-error identification method is given in Figure 4. Firstly, the dynamic model of the parallel robot is derived as Equations (17) and (25). Then the optimized exciting trajectory $\chi_d(t)$ is generated by using the method described in the previous subsection. With $\chi_d(t)$ as the reference input signal to the outer loop visual servoing control systems of both the real plant and the simulated model, the output pose $\chi_m(t)$ of the parallel robot measured by the optical CMM sensor is compared with that of the simulated model. The identification of the parameters $\Lambda$ is carried out by solving the nonlinear optimization problem. Lastly, the identified model can be validated by feeding several testing trajectories to the systems.

4. Simulation and Experiment Results

In this section, the dynamic model is verified by simulation using Matlab/SimMechanics. In addition, the closed-loop output-error identification is carried out on a 6-RSS parallel robot. An outer loop visual servoing controller is implemented on the real plant and on the simulated model individually. The C-track 780 from Creaform Inc. is adopted to measure the pose of the end-effector of the parallel robot.

4.1. Model Validation

The analytical dynamic model is rather complex, and it is a non-trivial task to ensure that the code of the dynamic model is mistake-free. A simulation verification method is used to verify the built mathematical dynamic model. A mechanical model of the 6-RSS parallel robot is built by using the Multibody SimMechanics Toolbox of Matlab/Simulink. The SimMechanics model is constructed by choosing the parts from the SimMechanics library, as shown in Figure 5. The coordinate frames and physical parameters of the rigid body blocks can be specified in the setting option. Revolute and spherical joints are used to connect the rigid bodies. The body sensor part provides the position and orientation of the coordinate frames. The entire model is directly actuated by the motor torques, and a 3D animation, shown in Figure 6, can be generated by SimMechanics. It should be noted that the SimMechanics model can only simulate the forward dynamics and cannot be used for controller design. However, it is relatively easy and intuitive to build a SimMechanics model with high fidelity [42]. To validate the mathematical dynamic model, Equation (17), the explicit form of the forward dynamic model obtained by Equation (40) is employed for comparison with the SimMechanics model:
$$\dot{v}_E = M(\chi_E)^{-1}\big(\tau_g - C(\chi_E, v_E)\, v_E - G(\chi_E) - \tau_f\big).$$
The mathematical dynamic model is built by using the S-function of Simulink. The initial dynamic model parameters of both the SimMechanics and the mathematical models are derived from the manufacturer specifications and are given in Table 1.
As shown in Figure 7, a simple PID controller with the same gains is used to stabilize both the mathematical and the SimMechanics models. The exciting trajectory $\chi_d(t)$ derived from Equation (36) is used as the reference input signal. The outputs of the SimMechanics model and the mathematical dynamic model (Equation (40)) are $\chi_s(t)$ and $\chi_m(t)$ respectively. The difference between $\chi_s(t)$ and $\chi_m(t)$ is shown in Figure 8. The maximum position and angle errors are around 1.25 mm and 3.25 × 10⁻³ rad; they occur at the beginning of the simulation and are mainly caused by the kinematic error. The largest steady-state errors are about 0.1 mm in position and 0.25 × 10⁻³ rad in angle, which supports the correctness of the mathematical model. The validated mathematical model can be used in the simulation part for the subsequent identification.
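A forward dynamic model of the form of Equation (40) can be integrated numerically for such a comparison. The following is a minimal semi-implicit Euler sketch; the callable interfaces `M`, `C`, `G` and `tau_fun` are assumptions made for illustration, not the authors' code:

```python
import numpy as np

def simulate_forward_dynamics(M, C, G, tau_fun, x0, v0, t_end, dt=1e-3):
    """Integrate v_dot = M(x)^-1 (tau - C(x, v) v - G(x)) with
    semi-implicit Euler.  M, C, G are callables returning the mass
    matrix, Coriolis matrix and gravity vector; tau_fun(t, x, v) is the
    applied generalized force (friction can be folded into it)."""
    x = np.array(x0, dtype=float)
    v = np.array(v0, dtype=float)
    ts = np.arange(0.0, t_end, dt)
    traj = np.empty((ts.size, x.size))
    for k, t in enumerate(ts):
        # acceleration from the explicit forward dynamic model
        acc = np.linalg.solve(M(x), tau_fun(t, x, v) - C(x, v) @ v - G(x))
        v = v + dt * acc          # update velocity first (semi-implicit)
        x = x + dt * v            # then position, using the new velocity
        traj[k] = x
    return ts, traj
```

On a 1-DOF mass-spring benchmark with known analytic solution, the scheme stays accurate and energy-bounded over a full oscillation period, which is the kind of sanity check that precedes comparing two full robot models.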

4.2. Identification Experiment

The experiment setup is shown in Figure 9. The 6-RSS parallel robot with 6 built-in joint controllers is provided by Servo & Simulation Inc. (Sanford, FL, USA). The built-in controllers communicate with the robot control computer through two Quanser MultiQ-PCI (Sensoray Model 626) data acquisition cards provided by Quanser Inc. (Markham, ON, Canada). Quanser’s QUARC™ software runs on the robot control computer with a Windows 7 32-bit operating system and an Intel Core i7-3770 3.4 GHz processor. QUARC™ is capable of generating real-time applications from Simulink-based controllers and running them in real time on the Windows target. The C-track 780 provided by Creaform Inc. (Levis, QC, Canada) is used to obtain the image data of the reflectors attached to the robot. The reflectors provided by Creaform Inc. are magnetic stickers which are easily fixed on the robot and are used as the feature points. On another Windows 7 64-bit computer with an Intel Xeon E5-1650 v3 3.5 GHz processor and an NVIDIA Quadro K2200 (Santa Clara, CA, USA) professional graphics board, the VXelements software is used to process the image data and transmit the pose of the end-effector to the robot control computer.
As shown in Figure 10, the reflectors are attached to the surface of the moving platform. At least three non-collinear points on each of Plane A, Plane B and Plane C are employed to build up the plane equations based on Cramer’s rule. The intersection lines and points of the three planes are then used to define the x direction of Σ_E, while the z direction is aligned with the normal of Plane A. The origin of Σ_E is derived from the intersection point of l_1 and l_2. The obtained Σ_E in the optical CMM sensor frame is directly used as the target frame. The base frame of the parallel robot is defined by following a similar procedure.
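The plane-fitting and intersection steps can be sketched as follows. The least-squares plane fit via SVD is one common choice for more than three points, and the 3×3 linear solve is exactly the system that Cramer's rule addresses; the function names are illustrative:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through >= 3 non-collinear points.
    Returns a unit normal n and offset d such that n . p = d."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # The plane normal is the right singular vector associated with the
    # smallest singular value of the centered point cloud.
    _, _, vt = np.linalg.svd(pts - centroid)
    n = vt[-1]
    return n, n @ centroid

def planes_intersection(planes):
    """Common point of three planes given as (n_i, d_i) pairs with
    n_i . p = d_i, solved as a 3x3 linear system."""
    N = np.array([n for n, _ in planes])
    d = np.array([d for _, d in planes])
    return np.linalg.solve(N, d)
```

With three mutually non-parallel planes, `planes_intersection` yields the single common point used as the frame origin; pairwise intersections give the lines that define the axis directions.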
To eliminate the high frequency noise of the pose measurement from the optical CMM, the measurement data is filtered by a zero-phase forward and reverse 8th-order Butterworth filter with a cut-off frequency of 60 Hz. The filtering is carried out by the zero-phase digital filtering function of Matlab, filtfilt.
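A Python analogue of this zero-phase filtering step, using scipy.signal, is sketched below. The sensor sampling rate `fs_hz` is an assumed value, since it is not stated here:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def zero_phase_lowpass(data, cutoff_hz=60.0, fs_hz=500.0, order=8):
    """Zero-phase forward-and-reverse Butterworth low-pass filtering,
    mirroring Matlab's filtfilt.  fs_hz is an assumed sampling rate."""
    # Normalized cutoff in (0, 1), relative to the Nyquist frequency
    b, a = butter(order, cutoff_hz / (fs_hz / 2.0))
    # filtfilt runs the filter forward and backward: zero phase lag,
    # squared magnitude response
    return filtfilt(b, a, data, axis=0)
```

Because the filter is applied in both directions, the pass-band signal keeps its timing, which matters when the filtered poses are later aligned with simulated trajectories.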
The optimization of the exciting trajectory is carried out by using the ga function of the Optimization Toolbox of Matlab. The genetic algorithm (GA) uses a binary code to represent the harmonic parameters. The fundamental frequency f_0 is selected as 0.1 Hz and the harmonics number n is chosen as 5. By taking the observability index as the fitness function, the binary code is updated to maximize the fitness value in each step. The starting value of the observability index O_in is 0.38 and the stopping criterion is set as 10⁻¹⁰. The algorithm stops after 324 iterations with a maximum O_in of 1.345. The derived optimal exciting trajectory is given in Figure 11.
The identification procedure is implemented off-line through the simulation illustrated in Section 3.2. The optimization is carried out by using the optimization functions of Matlab R2016a. The minimal performance requirements of the computation platform are: a 4-core Intel or AMD processor, 6 GB of disk space, and 4 GB of RAM. The same outer loop visual servoing controller, Equation (30), is employed in the simulation. The gains of the visual servoing controller are obtained through trial and error in extensive experimental tests. The well-tuned gains are k_p = 0.3, k_d = 0.001, k_i = 2.4, and the built-in joint PID controllers, Equation (31), are also implemented in the simulation. The dynamic parameters derived from the manufacturer specifications are used as initial values, and the initial values of the PID gains in Equation (31) are obtained through trial and error based on the simulation model. During the tuning, the inertial and friction parameters in the simulation system are set to the values based on the manufacturer specifications, and the visual servoing controller gains are set to the same values as those of the controller of the real system. The identified parameters, given in Table 2, are derived by using the lsqnonlin function of the Optimization Toolbox of Matlab after 8 iterations. A total of 50 out of 67 parameters are identified and used in the simulation model, Equation (40), to capture the dynamic characteristics of the parallel robot. The remaining parameters either do not contribute to or have only a slight impact on the dynamics of the parallel robot. Those parameters can be eliminated by applying QR decomposition to the regression matrix H [23,43]. If any diagonal element of R is smaller than a pre-defined small number, i.e., R_ii < ε, where ε is chosen as 10⁻³ in this paper, the corresponding column of the regression matrix H is deleted. By doing so, the matrix H is better conditioned and the identification procedure is sped up.
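The QR-based elimination of non-contributing regressor columns can be sketched as follows. A plain QR factorization is used here, which flags a column as redundant relative to the columns before it; a column-pivoted QR (e.g. scipy.linalg.qr with pivoting) would make the selection order-independent:

```python
import numpy as np

def essential_columns(H, eps=1e-3):
    """Select the columns of the regression matrix H whose QR pivots
    exceed eps; columns with |R_ii| < eps contribute (almost) nothing
    to the dynamics and are removed from the identification."""
    _, R = np.linalg.qr(H)
    keep = np.abs(np.diag(R)) >= eps
    return np.flatnonzero(keep), H[:, keep]
```

A column that is (nearly) a linear combination of earlier columns produces a (nearly) zero diagonal entry in R, so dropping it leaves a better-conditioned regression matrix.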
Then, the pose trajectories are generated by using the identified parameters in the simulation and are compared with the pose measurements, as shown in Figure 12. Table 3 shows the root-mean-square (RMS) levels of the pose trajectory errors.

4.3. Identified Results Validation

To verify the identified parameters, ten more trajectories are generated according to Equation (36) with random harmonic parameters under the singularity constraint. The generated trajectories are used as desired trajectories and are fed to the parallel robot and to the identified model in the simulation, respectively. The RMS levels of the pose trajectory errors are given in Table 4, and the measured and simulated pose trajectories for the 1st desired trajectory are given in Figure 13. The RMS of the position and orientation errors for all ten trajectories are below 0.8 mm and 1.4 × 10⁻³ rad respectively, which validates the identified results of the previous subsection. In addition, the proposed identification procedure is implemented on the ten trajectories to analyze the statistical properties of the identification results. After deriving ten more groups of identified parameters, the variation measure of the identification results is given in Table 5. The highest relative variation of the parameters is below 25%, which is acceptable. It has been stated that a variation measure below 30 percent indicates a good match to the real system [44].
Therefore by using the proposed visual closed-loop output-error identification method, the identified dynamic model can approximate the real plant with acceptable accuracy.
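The RMS error levels and the relative variation measure reported in Tables 3-5 can be computed as sketched below. Reading the variation measure as the standard deviation over the mean, in percent, is one plausible definition, not necessarily the authors':

```python
import numpy as np

def rms_error(measured, simulated):
    """Root-mean-square tracking error per pose coordinate
    (rows = time samples, columns = pose coordinates)."""
    e = np.asarray(measured, dtype=float) - np.asarray(simulated, dtype=float)
    return np.sqrt(np.mean(e ** 2, axis=0))

def variation_measure(estimates):
    """Relative spread of repeated parameter estimates in percent
    (rows = identification trials, columns = parameters), taken here
    as std / |mean| -- an assumed definition."""
    est = np.asarray(estimates, dtype=float)
    return 100.0 * est.std(axis=0) / np.abs(est.mean(axis=0))
```

Applied to the ten repeated identification runs, `variation_measure` returns one percentage per parameter, directly comparable with the 30-percent guideline cited above.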

5. Conclusions and Further Works

In this paper, a visual closed-loop output-error identification method based on an optical CMM sensor for parallel robots is proposed. An outer loop visual servoing controller is employed in both the real plant and the simulation model to stabilize the two systems. The benefits of the proposed method are summarized as follows: it eliminates the need for joint torque measurements, for exact knowledge of the built-in joint controllers of the industrial robot, and for the time-consuming forward kinematics calculation. The correctness and accuracy of the built dynamic model are validated by the Matlab/SimMechanics simulation. The experimental test results show that the identified dynamic model can capture the dynamics of the real parallel robot with satisfactory accuracy. The proposed method can be easily applied to other types of industrial parallel robots with an unknown built-in PID controller or its variant, such as 6-DOF Stewart platforms, 6-UPS and 6-RUS parallel robots, etc. The complexity of those dynamic models is similar to that of the 6-RSS parallel robot. Since no analytical solution of the forward kinematics exists for those 6-DOF parallel robots, the proposed visual identification method, which does not need the forward kinematic model, is particularly advantageous. The proposed identification method can also be applied to parallel robots with fewer than six DOF. By taking advantage of the visual sensor, the dynamic model can be identified for visual servoing purposes. In the future, advanced model-based visual servoing control methods will be further studied to improve the tracking performance of parallel robots based on the identification results.

Author Contributions

The authors’ individual contributions are provided as follows: conceptualization, P.L., A.G., W.X. and W.T.; methodology, P.L., A.G., W.X. and W.T.; software, P.L.; validation, P.L., A.G. and W.X.; resources, W.X.; writing—original draft preparation, P.L.; writing—review and editing, P.L., A.G., W.X. and W.T.; supervision, W.X. and W.T.; project administration, W.X.; funding acquisition, W.X.

Funding

This research was funded by the Natural Sciences and Engineering Research Council of Canada, grant number N00892.

Conflicts of Interest

The authors declare no conflict of interest. The funder had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

Appendix A

The derivative of the wrench Jacobian, J ˙ a i , is given as follows:
$$\dot{J}_{ai} = \dot{J}_{aui} - \hat{s}\,\dot{J}_{adi},$$
where
$$\dot{J}_{adi} = -\frac{\dot{m}}{m^2}\begin{bmatrix} l_i^T & (a_i \times l_i)^T \end{bmatrix} + \frac{1}{m}\begin{bmatrix} (\omega_{2i} \times l_i)^T & \big([a_i]_{\times}[\omega_2]_{\times} l_i - [l_i]_{\times}[\omega]_{\times} a_i\big)^T \end{bmatrix},$$
and
$$\begin{aligned}
m &= (w_i \times l_i)\cdot\hat{s}\\
\dot{m} &= \big([w_i]_{\times}[\omega_2]_{\times} l_i - [l_i]_{\times}[\omega_1]_{\times} w_i\big)\cdot\hat{s}\\
\dot{J}_{aui} &= \big([\hat{s}]_{\times}[\omega_1]_{\times} c_{wi}\big) J_{adi} + \big([\hat{s}]_{\times} c_{wi}\big)\dot{J}_{adi}
\end{aligned}$$
In addition, the link Jacobian J ˙ b i is obtained by:
$$\dot{J}_{bi} = \dot{J}_{bui} - \dot{J}_{bdi},$$
in which
$$\dot{J}_{bdi} = \frac{1}{l_i^2}\Big\{ [\omega_2 \times l_i]_{\times}[w_i]_{\times}\hat{s}\, J_{adi} + [\omega_2 \times l_i]_{\times}[l_i]_{\times}[\omega \times a_i]_{\times} - [\omega_2 \times l_i]_{\times}[a_i]_{\times} + [l_i]_{\times}[\omega_1 \times w_i]_{\times}\hat{s}\, J_{adi} + [l_i]_{\times}[w_i]_{\times}\hat{s}\,\dot{J}_{adi} \Big\},$$
and
$$\dot{J}_{bui} = [\omega_1 \times w_i]_{\times}\hat{s}\, J_{adi} - [w_i]_{\times}\hat{s}\,\dot{J}_{adi} - [\omega_2 \times l_i]_{\times} J_{bdi} - [c_{li}]_{\times}\dot{J}_{bdi}.$$

References

1. Miletović, I.; Pool, D.; Stroosma, O.; van Paassen, M.; Chu, Q. Improved Stewart platform state estimation using inertial and actuator position measurements. Control Eng. Pract. 2017, 62, 102–115.
2. Abdellatif, H.; Heimann, B. Model-Based Control for Industrial Robots: Uniform Approaches for Serial and Parallel Structures. In Industrial Robotics; Cubero, S., Ed.; IntechOpen: Rijeka, Croatia, 2006; Chapter 19; pp. 523–556.
3. Bai, S.; Teo, M.Y. Kinematic calibration and pose measurement of a medical parallel manipulator by optical position sensors. J. Robot. Syst. 2003, 20, 201–209.
4. Absolute Accuracy Industrial Robot Option. 2011. Available online: https://www.google.com.hk/url?sa=t&rct=j&q=&esrc=s&source=web&cd=2&ved=2ahUKEwiK6L_fndLjAhUQTY8KHYTaBjcQFjABegQIChAE&url=https%3A%2F%2Flibrary.e.abb.com%2Fpublic%2F0f879113235a0e1dc1257b130056d133%2FAbsolute%2520Accuracy%2520EN_R4%2520US%252002_05.pdf&usg=AOvVaw2YHKD4M3X8PjRHuBDV8y_F (accessed on 7 November 2018).
5. Taghirad, H.D. Parallel Robots: Mechanics and Control; CRC Press: Boca Raton, FL, USA, 2013.
6. Merlet, J.P. Parallel Robots; Springer Science & Business Media: Berlin, Germany, 2012.
7. Dasgupta, B.; Mruthyunjaya, T. A Newton-Euler formulation for the inverse dynamics of the Stewart platform manipulator. Mech. Mach. Theory 1998, 33, 1135–1152.
8. Nguyen, C.C.; Pooran, F.J. Dynamic analysis of a 6 DOF CKCM robot end-effector for dual-arm telerobot systems. Robot. Auton. Syst. 1989, 5, 377–394.
9. Wang, J.; Gosselin, C.M. A new approach for the dynamic analysis of parallel manipulators. Multibody Syst. Dyn. 1998, 2, 317–334.
10. Zanganeh, K.E.; Sinatra, R.; Angeles, J. Kinematics and dynamics of a six-degree-of-freedom parallel manipulator with revolute legs. Robotica 1997, 15, 385–394.
11. Dumlu, A.; Erenturk, K. Modeling and trajectory tracking control of 6-DOF RSS type parallel manipulator. Robotica 2014, 32, 643–657.
12. Ljung, L. (Ed.) System Identification (2nd ed.): Theory for the User; Prentice Hall PTR: Upper Saddle River, NJ, USA, 1999.
13. Hollerbach, J.; Khalil, W.; Gautier, M. Model identification. In Springer Handbook of Robotics; Springer: Berlin/Heidelberg, Germany, 2016; pp. 113–138.
14. Swevers, J.; Verdonck, W.; De Schutter, J. Dynamic model identification for industrial robots. IEEE Control Syst. Mag. 2007, 27, 58–71.
15. Wu, J.; Wang, J.; You, Z. An overview of dynamic parameter identification of robots. Robot. Comput.-Integr. Manuf. 2010, 26, 414–419.
16. Calanca, A.; Capisani, L.M.; Ferrara, A.; Magnani, L. MIMO closed loop identification of an industrial robot. IEEE Trans. Control Syst. Technol. 2010, 19, 1214–1224.
17. Olsen, M.M.; Swevers, J.; Verdonck, W. Maximum likelihood identification of a dynamic robot model: Implementation issues. Int. J. Robot. Res. 2002, 21, 89–96.
18. Janot, A.; Vandanjon, P.O.; Gautier, M. An instrumental variable approach for rigid industrial robots identification. Control Eng. Pract. 2014, 25, 85–101.
19. Gautier, M.; Janot, A.; Vandanjon, P.O. A new closed-loop output error method for parameter identification of robot dynamics. IEEE Trans. Control Syst. Technol. 2013, 21, 428–444.
20. Walter, E.; Pronzato, L. Identification of Parametric Models from Experimental Data; Springer: Berlin/Heidelberg, Germany, 1997.
21. Del Prete, A.; Mansard, N.; Ramos, O.E.; Stasse, O.; Nori, F. Implementing torque control with high-ratio gear boxes and without joint-torque sensors. Int. J. Humanoid Robot. 2016, 13, 1550044.
22. Brunot, M.; Janot, A.; Young, P.C.; Carrillo, F. An improved instrumental variable method for industrial robot model identification. Control Eng. Pract. 2018, 74, 107–117.
23. Briot, S.; Gautier, M. Global identification of joint drive gains and dynamic parameters of parallel robots. Multibody Syst. Dyn. 2015, 33, 3–26.
24. Grotjahn, M.; Heimann, B.; Abdellatif, H. Identification of friction and rigid-body dynamics of parallel kinematic structures for model-based control. Multibody Syst. Dyn. 2004, 11, 273–294.
25. Abdellatif, H.; Heimann, B. Advanced model-based control of a 6-DOF hexapod robot: A case study. IEEE/ASME Trans. Mechatron. 2010, 15, 269–279.
26. Vivas, A.; Poignet, P.; Marquet, F.; Pierrot, F.; Gautier, M. Experimental dynamic identification of a fully parallel robot. In Proceedings of the 2003 IEEE International Conference on Robotics and Automation, Taipei, Taiwan, 14–19 September 2003; Volume 3, pp. 3278–3283.
27. Guegan, S.; Khalil, W.; Lemoine, P. Identification of the dynamic parameters of the orthoglide. In Proceedings of the 2003 IEEE International Conference on Robotics and Automation, Taipei, Taiwan, 14–19 September 2003; Volume 3, pp. 3272–3277.
28. Renaud, P.; Vivas, A.; Andreff, N.; Poignet, P.; Martinet, P.; Pierrot, F.; Company, O. Kinematic and dynamic identification of parallel mechanisms. Control Eng. Pract. 2006, 14, 1099–1109.
29. Lightcap, C.; Banks, S. Dynamic identification of a Mitsubishi PA10-6CE robot using motion capture. In Proceedings of the 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems, San Diego, CA, USA, 29 October–2 November 2007; pp. 3860–3865.
30. Chellal, R.; Cuvillon, L.; Laroche, E. Model identification and vision-based H∞ position control of 6-DoF cable-driven parallel robots. Int. J. Control 2017, 90, 684–701.
31. Li, P.; Zeng, R.; Xie, W.; Zhang, X. Relative posture-based kinematic calibration of a 6-RSS parallel robot by optical coordinate measurement machine. Int. J. Adv. Robot. Syst. 2018, 15, 1729881418765861.
32. Shu, T.; Gharaaty, S.; Xie, W.; Joubair, A.; Bonev, I.A. Dynamic path tracking of industrial robots with high accuracy using photogrammetry sensor. IEEE/ASME Trans. Mechatron. 2018, 23, 1159–1170.
33. Xie, W.; Krzeminski, M.; El-Tahan, H.; El-Tahan, M. Intelligent friction compensation (IFC) in a harmonic drive. In Proceedings of the IEEE NECEC, St. John’s, NL, Canada, 13 November 2002.
34. Hartley, R.I.; Sturm, P. Triangulation. Comput. Vis. Image Underst. 1997, 68, 146–157.
35. Wilson, W.J.; Hulls, C.C.W.; Bell, G.S. Relative end-effector control using cartesian position based visual servoing. IEEE Trans. Robot. Autom. 1996, 12, 684–696.
36. Yuan, J.S. A general photogrammetric method for determining object position and orientation. IEEE Trans. Robot. Autom. 1989, 5, 129–142.
37. Brosilow, C.; Joseph, B. Techniques of Model-Based Control; Prentice Hall Professional: Upper Saddle River, NJ, USA, 2002.
38. Marlin, T.E. Process Control: Designing Processes and Control Systems for Dynamic Performance. Available online: https://pdfs.semanticscholar.org/c068/dedc943743ce3ee416ba9202a7172b715e3a.pdf (accessed on 27 June 2019).
39. Park, K.J. Fourier-based optimal excitation trajectories for the dynamic identification of robots. Robotica 2006, 24, 625–633.
40. Bona, B.; Curatella, A. Identification of industrial robot parameters for advanced model-based controllers design. In Proceedings of the 2005 IEEE International Conference on Robotics and Automation, Barcelona, Spain, 18–22 April 2005; pp. 1681–1686.
41. Zeng, R.; Dai, S.; Xie, W.; Zhang, X. Determination of the proper motion range of the rotary actuators of 6-RSS parallel robot. In Proceedings of the 2015 CCToMM Symposium on Mechanisms, Machines, and Mechatronics, Carleton University, Ottawa, ON, Canada, 28–29 May 2015; pp. 28–29.
42. Van Boekel, J. SimMechanics, MapleSim and Dymola: A First Look on Three Multibody Packages. 2009. Available online: https://www.semanticscholar.org/paper/SimMechanics-%2C-MapleSim-and-Dymola-%3A-a-first-look-Boekel/c51909107f1ddb67faf8bda1b387b0659ec04a7f (accessed on 7 November 2018).
43. Gautier, M. Numerical calculation of the base inertial parameters of robots. J. Robot. Syst. 1991, 8, 485–506.
44. Taghirad, H.; Belanger, P. Modeling and parameter identification of harmonic drive systems. J. Dyn. Syst. Meas. Control 1998, 120, 439–444.
Figure 1. (a) The sketch of 6-RSS parallel robot, (b) Single serial branch.
Figure 2. The pose measurement system of a 6-RSS parallel robot.
Figure 3. The block diagram of closed-loop control system for model identification of 6-RSS parallel robot.
Figure 4. Sketch of the identification procedure.
Figure 5. Mechanical model of 6-RSS parallel robot built by SimMechanics.
Figure 6. 3D animation of 6-RSS parallel robot. https://youtu.be/HXtCvgkn2jw.
Figure 7. Dynamic model verification block diagram of 6-RSS parallel robot.
Figure 8. Simulation results of SimMechanics model. (a) Position error; (b) Angle error.
Figure 9. Architecture of the Experiment System.
Figure 10. Measurement of Σ E .
Figure 11. Optimal exciting trajectory. (a) Positional trajectory; (b) Angular trajectory.
Figure 12. The pose trajectories of the parallel robot: the measurement of the real plant (black dot), the output of the simulation with initial parameters (green line), the output of the simulation with identified parameters (blue line), (a) Along X Direction; (b) Along Y Direction; (c) Along Z Direction; (d) Around α Axis; (e) Around β Axis; (f) Around γ Axis.
Figure 13. The pose trajectories of the parallel robot: the measurement of the real plant (black dot), the output of the simulation with identified parameters (blue line), (a) Along X Direction; (b) Along Y Direction; (c) Along Z Direction;(d) Around α Axis; (e) Around β Axis; (f) Around γ Axis.
Table 1. Initial dynamic model parameters of 6-RSS parallel robot.
| Dynamic Model Parameters | Initial Value |
| --- | --- |
| m_p (kg) | 24.0 |
| I_xp ×10⁻² (kg·m²) | 17.8 |
| I_yp ×10⁻² (kg·m²) | 17.8 |
| I_zp ×10⁻² (kg·m²) | 35.0 |
| m_wi ×10⁻² (kg) | 68.5 |
| I_xwi ×10⁻⁵ (kg·m²) | 22.9 |
| I_ywi ×10⁻⁵ (kg·m²) | 22.9 |
| I_zwi ×10⁻⁵ (kg·m²) | 60.5 |
| m_li (kg) | 1.31 |
| I_xli ×10⁻⁵ (kg·m²) | 52.3 |
| I_yli ×10⁻⁵ (kg·m²) | 52.3 |
| I_zli ×10⁻⁴ (kg·m²) | 21.3 |
Table 2. Identified parameters of 6-RSS parallel robot.
| Parameters | Initial Value | Identified Value | Parameters | Initial Value | Identified Value |
| --- | --- | --- | --- | --- | --- |
| m_p (kg) | 24.0 | 23.5 | m_l4 (kg) | 1.31 | 1.24 |
| I_xp ×10⁻² (kg·m²) | 17.8 | 16.1 | I_xl4 ×10⁻⁵ (kg·m²) | 52.3 | 12.9 |
| I_yp ×10⁻² (kg·m²) | 17.8 | 14.4 | I_yl4 ×10⁻⁵ (kg·m²) | 52.3 | 39.2 |
| I_zp ×10⁻² (kg·m²) | 35.0 | 33.7 | m_l5 (kg) | 1.31 | 1.33 |
| m_w1 ×10⁻² (kg) | 68.5 | 67.1 | I_xl5 ×10⁻⁵ (kg·m²) | 52.3 | 30.7 |
| I_zw1 ×10⁻⁵ (kg·m²) | 60.5 | 69.4 | I_yl5 ×10⁻⁵ (kg·m²) | 52.3 | 49.7 |
| m_w2 ×10⁻² (kg) | 68.5 | 68.7 | m_l6 (kg) | 1.31 | 1.34 |
| I_zw2 ×10⁻⁵ (kg·m²) | 60.5 | 10.0 | I_xl6 ×10⁻⁵ (kg·m²) | 52.3 | 47.0 |
| m_w3 ×10⁻² (kg) | 68.5 | 68.0 | I_yl6 ×10⁻⁵ (kg·m²) | 52.3 | 50.9 |
| I_zw3 ×10⁻⁵ (kg·m²) | 60.5 | 22.6 | f_c1 | 0 | 0.104 |
| m_w4 ×10⁻² (kg) | 68.5 | 67.7 | f_v1 | 0 | 0.148 |
| I_zw4 ×10⁻⁵ (kg·m²) | 60.5 | 76.0 | f_c2 | 0 | 0.111 |
| m_w5 ×10⁻² (kg) | 68.5 | 68.3 | f_v2 | 0 | 0.187 |
| I_zw5 ×10⁻⁵ (kg·m²) | 60.5 | 44.1 | f_c3 | 0 | 0.0336 |
| m_w6 ×10⁻² (kg) | 68.5 | 68.4 | f_v3 | 0 | 0.0993 |
| I_zw6 ×10⁻⁵ (kg·m²) | 60.5 | 69.6 | f_c4 | 0 | 0.147 |
| m_l1 (kg) | 1.31 | 1.33 | f_v4 | 0 | 0.854 |
| I_xl1 ×10⁻⁵ (kg·m²) | 52.3 | 63.0 | f_c5 | 0 | 0.104 |
| I_yl1 ×10⁻⁵ (kg·m²) | 52.3 | 68.4 | f_v5 | 0 | 0.0803 |
| m_l2 (kg) | 1.31 | 1.32 | f_c6 | 0 | 0.0828 |
| I_xl2 ×10⁻⁵ (kg·m²) | 52.3 | 60.7 | f_v6 | 0 | 0.0349 |
| I_yl2 ×10⁻⁵ (kg·m²) | 52.3 | 71.8 | l_p | 10 | 10.5 |
| m_l3 (kg) | 1.31 | 1.25 | l_i | 12 | 11.4 |
| I_xl3 ×10⁻⁵ (kg·m²) | 52.3 | 22.4 | l_d | 0.1 | 0.164 |
| I_yl3 ×10⁻⁵ (kg·m²) | 52.3 | 35.5 | | | |
Table 3. The RMS levels of the pose trajectory errors.
| | Before Identification | After Identification |
| --- | --- | --- |
| x direction (mm) | 1.26 | 0.408 |
| y direction (mm) | 1.16 | 0.235 |
| z direction (mm) | 1.55 | 0.494 |
| α direction ×10⁻³ (rad) | 2.52 | 0.956 |
| β direction ×10⁻³ (rad) | 3.58 | 0.797 |
| γ direction ×10⁻³ (rad) | 2.80 | 0.725 |
Table 4. The RMS levels of the ten validation trajectory errors.
| | x (mm) | y (mm) | z (mm) | α ×10⁻³ (rad) | β ×10⁻³ (rad) | γ ×10⁻³ (rad) |
| --- | --- | --- | --- | --- | --- | --- |
| 1st | 0.517 | 0.384 | 0.709 | 1.091 | 0.963 | 0.966 |
| 2nd | 0.273 | 0.410 | 0.522 | 1.157 | 1.071 | 0.932 |
| 3rd | 0.381 | 0.403 | 0.600 | 1.242 | 0.830 | 1.143 |
| 4th | 0.341 | 0.511 | 0.617 | 1.136 | 0.814 | 1.161 |
| 5th | 0.322 | 0.394 | 0.464 | 1.219 | 0.877 | 0.835 |
| 6th | 0.310 | 0.301 | 0.441 | 1.195 | 1.111 | 1.040 |
| 7th | 0.360 | 0.402 | 0.455 | 1.263 | 1.176 | 1.024 |
| 8th | 0.418 | 0.483 | 0.460 | 1.251 | 1.211 | 1.015 |
| 9th | 0.342 | 0.510 | 0.557 | 1.411 | 1.156 | 0.711 |
| 10th | 0.318 | 0.379 | 0.473 | 1.219 | 1.023 | 0.905 |
Table 5. Variation measure of the identification result.
| Parameters | Variation Measure | Parameters | Variation Measure |
| --- | --- | --- | --- |
| m_p (kg) | 0.48% | m_l4 (kg) | 19.1% |
| I_xp (kg·m²) | 5.0% | I_xl4 (kg·m²) | 0.16% |
| I_yp (kg·m²) | 24.9% | I_yl4 (kg·m²) | 0.06% |
| I_zp (kg·m²) | 8.6% | m_l5 (kg) | 10.1% |
| m_w1 (kg) | 0.56% | I_xl5 (kg·m²) | 0.15% |
| I_zw1 (kg·m²) | 6.6% | I_yl5 (kg·m²) | 0.01% |
| m_w2 (kg) | 1.5% | m_l6 (kg) | 12.9% |
| I_zw2 (kg·m²) | 1.4% | I_xl6 (kg·m²) | 0.17% |
| m_w3 (kg) | 3.4% | I_yl6 (kg·m²) | 0.02% |
| I_zw3 (kg·m²) | 5.0% | f_c1 | 15.9% |
| m_w4 (kg) | 3.7% | f_v1 | 11.7% |
| I_zw4 (kg·m²) | 3.8% | f_c2 | 21.4% |
| m_w5 (kg) | 6.6% | f_v2 | 13.9% |
| I_zw5 (kg·m²) | 6.6% | f_c3 | 9.0% |
| m_w6 (kg) | 0.79% | f_v3 | 16.4% |
| I_zw6 (kg·m²) | 1.7% | f_c4 | 17.8% |
| m_l1 (kg) | 4.1% | f_v4 | 15.2% |
| I_xl1 (kg·m²) | 0.04% | f_c5 | 15.1% |
| I_yl1 (kg·m²) | 0.11% | f_v5 | 21.8% |
| m_l2 (kg) | 5.2% | f_c6 | 23.4% |
| I_xl2 (kg·m²) | 0.05% | f_v6 | 13.0% |
| I_yl2 (kg·m²) | 0.12% | l_p | 2.82% |
| m_l3 (kg) | 20.9% | l_i | 0.44% |
| I_xl3 (kg·m²) | 0.09% | l_d | 3.7% |
| I_yl3 (kg·m²) | 0.17% | | |

Share and Cite

MDPI and ACS Style

Li, P.; Ghasemi, A.; Xie, W.; Tian, W. Visual Closed-Loop Dynamic Model Identification of Parallel Robots Based on Optical CMM Sensor. Electronics 2019, 8, 836. https://doi.org/10.3390/electronics8080836
